# -*- encoding: utf-8 -*- ############################################################################## # # Copyright (c) 2009 Veritos - NAME - www.veritos.nl # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs. # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company like Veritos. # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA # ############################################################################## # # This module works in OpenERP 5.0.0 (and probably higher). # This module does not work in OpenERP version 4 and lower. # # Status 1.0 - tested on OpenERP 5.0.3 # # Version IP_ADDRESS # account.account.type # Foundation laid for all account types. # # account.account.template # Foundation laid with all required general ledger accounts, which are linked via a menu # structure to sections 1 through 9. # The general ledger accounts are linked to the account.account.type. # These links still need to be checked over carefully. # # account.chart.template # Foundation laid for linking accounts to receivables, payables, # bank, purchase and sales journals, and the VAT configuration. # # Version IP_ADDRESS # account.tax.code.template # Foundation laid for the VAT configuration (structure). # Used the VAT return form as the basis. Whether this works remains to be seen. # # account.tax.template # Created the VAT accounts and linked them to the relevant # general ledger accounts. # # Version IP_ADDRESS # Cleaned up the code and removed unused components. # Version IP_ADDRESS # Changed a_expense from 3000 -> 7000 # Set record id='btw_code_5b' to a negative value # Version IP_ADDRESS # VAT accounts were given a type designation for purchase or sale # Version IP_ADDRESS # Cleaned up the module. # Version IP_ADDRESS # Cleaned up the module. # Version IP_ADDRESS # Fixed a small bug in l10n_nl_wizard.xml that kept the module from installing completely. # Version IP_ADDRESS # Properly defined Account Receivable and Payable. # Version IP_ADDRESS # Properly defined all user_type_xxx fields. # Removed construction- and garage-specific ledgers to create a standard module. # This module can then serve as the basis for modules aimed at specific target groups. # Version IP_ADDRESS # Corrected account 7010 (it duplicated 7014, which broke the installation) # Version IP_ADDRESS # Corrected various account types from user_type_asset -> user_type_liability and user_type_equity # Version IP_ADDRESS # Small correction to 'VAT receivable, high rate': the id was the same for both, so 'high' was overwritten by 'other'. Clarified the descriptions in the tax codes for the return overview. # Version IP_ADDRESS # Adjusted the VAT descriptions so the reports look better; removed 2a, 5b and the like, and added a few descriptions. # Version IP_ADDRESS - Switch to English # Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense # Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
""" ================= Structured Arrays ================= Introduction ============ NumPy provides powerful capabilities to create arrays of structured datatype. These arrays permit one to manipulate the data by named fields. A simple example will show what is meant.: :: >>> x = np.array([(1,2.,'Hello'), (2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we get the second structure: :: >>> x[1] (2,3.,"World") Conveniently, one can access any field of the array by indexing using the string that names that field. :: >>> y = x['bar'] >>> y array([ 2., 3.], dtype=float32) >>> y[:] = 2*y >>> y array([ 4., 6.], dtype=float32) >>> x array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) In these examples, y is a simple float array consisting of the 2nd field in the structured type. But, rather than being a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well. Likewise, if one changes the structured array, the field view also changes: :: >>> x[1] = (-1,-1.,"Master") >>> x array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) >>> y array([ 4., -1.], dtype=float32) Defining Structured Arrays ========================== One defines a structured array through the dtype object. There are **several** alternative ways to define the fields of a record. Some of these variants provide backward compatibility with Numeric, numarray, or another module, and should not be used except for such purposes. These will be so noted. One specifies record structure in one of four alternative ways, using an argument (as supplied to a dtype function keyword or a dtype object constructor itself). This argument must be one of the following: 1) string, 2) tuple, 3) list, or 4) dictionary. Each of these is briefly described below. 1) String argument. In this case, the constructor expects a comma-separated list of type specifiers, optionally with extra shape information. The fields are given the default names 'f0', 'f1', 'f2' and so on. The type specifiers can take 4 different forms: :: a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n> (representing bytes, ints, unsigned ints, floats, complex and fixed length strings of specified byte lengths) b) int8,...,uint8,...,float16, float32, float64, complex64, complex128 (this time with bit sizes) c) older Numeric/numarray type specifications (e.g. Float32). Don't use these in new code! d) Single character type specifiers (e.g H for unsigned short ints). Avoid using these unless you must. Details can be found in the NumPy book These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an array within a record. That array is still referred to as a single field. 
An example: :: >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64') >>> x array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])], dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))]) Using strings to define the record structure precludes naming the fields in the original definition. The names can be changed as shown later, however. 2) Tuple argument: The only relevant tuple case that applies to record structures is when a structure is mapped to an existing data type. This is done by pairing, in a tuple, the existing data type with a matching dtype definition (using any of the variants being described here). As an example (using a definition using a list, so see 3) for further details): :: >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')])) >>> x array([0, 0, 0]) >>> x['r'] array([0, 0, 0], dtype=uint8) In this case, an array is produced that looks and acts like a simple int32 array, but also has definitions for fields that use only one byte of the int32 (a bit like Fortran equivalencing). 3) List argument: In this case the record structure is defined with a list of tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field ('' is permitted), 2) the type of the field, and 3) the shape (optional). For example:: >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) >>> x array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])], dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))]) 4) Dictionary argument: two different forms are permitted. The first consists of a dictionary with two required keys ('names' and 'formats'), each having an equal-sized list of values. The format list contains any type/shape specifier allowed in other contexts. The names must be strings. There are two optional keys: 'offsets' and 'titles'. Each must be a list matching the required two in length, where 'offsets' contains integer offsets for each field, and 'titles' contains metadata objects for each field (these do not have to be strings; the value None is permitted). As an example: :: >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[('col1', '>i4'), ('col2', '>f4')]) The other dictionary form permitted is a dictionary of name keys with tuple values specifying type, offset, and an optional title. :: >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')]) Accessing and modifying field names =================================== The field names are an attribute of the dtype object defining the structure. For the last example: :: >>> x.dtype.names ('col1', 'col2') >>> x.dtype.names = ('x', 'y') >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')]) >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2 Accessing field titles ==================================== The field titles provide a standard place to put associated info for fields. They do not have to be strings. 
:: >>> x.dtype.fields['x'][2] 'title 1' Accessing multiple fields at once ==================================== You can access multiple fields at once using a list of field names: :: >>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))], dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) Notice that `x` is created with a list of tuples. :: >>> x[['x','y']] array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)], dtype=[('x', '<f4'), ('y', '<f4')]) >>> x[['x','value']] array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]), (1.0, [[2.0, 6.0], [2.0, 6.0]])], dtype=[('x', '<f4'), ('value', '<f4', (2, 2))]) The fields are returned in the order they are asked for: :: >>> x[['y','x']] array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)], dtype=[('y', '<f4'), ('x', '<f4')]) Filling structured arrays ========================= Structured arrays can be filled by field or row by row. :: >>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')]) >>> arr['var1'] = np.arange(5) If you fill it in row by row, it takes a tuple (but not a list or array!):: >>> arr[0] = (10,20) >>> arr array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)], dtype=[('var1', '<f8'), ('var2', '<f8')]) Record Arrays ============= For convenience, numpy provides "record arrays" which allow one to access fields of structured arrays by attribute rather than by index. Record arrays are structured arrays wrapped using a subclass of ndarray, :class:`numpy.recarray`, which allows field access by attribute on the array object, and record arrays also use a special datatype, :class:`numpy.record`, which allows field access by attribute on the individual elements of the array. The simplest way to create a record array is with :func:`numpy.rec.array`: :: >>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> recordarr.bar array([ 2., 3.], dtype=float32) >>> recordarr[1:2] rec.array([(2, 3.0, 'World')], dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]) >>> recordarr[1:2].foo array([2], dtype=int32) >>> recordarr.foo[1:2] array([2], dtype=int32) >>> recordarr[1].baz 'World' numpy.rec.array can convert a wide variety of arguments into record arrays, including normal structured arrays: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')]) >>> recordarr = np.rec.array(arr) The numpy.rec module provides a number of other convenience functions for creating record arrays, see :ref:`record array creation routines <routines.array-creation.rec>`. A record array representation of a structured array can be obtained using the appropriate :ref:`view`: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')]) >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)), ... type=np.recarray) For convenience, viewing an ndarray as type `np.recarray` will automatically convert to `np.record` datatype, so the dtype can be left out of the view: :: >>> recordarr = arr.view(np.recarray) >>> recordarr.dtype dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])) To get back to a plain ndarray both the dtype and type must be reset. 
The following view does so, taking into account the unusual case that the recordarr was not a structured type: :: >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise. :: >>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))], ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) >>> type(recordarr.foo) <type 'numpy.ndarray'> >>> type(recordarr.bar) <class 'numpy.core.records.recarray'> Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but may still be accessed by index. """
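A short doctest-style sketch of that final caveat; the colliding field name ``'shape'`` below is an invented example (it is not taken from the text above), chosen because every ndarray already has a ``shape`` attribute: ::

    >>> recordarr = np.rec.array([(1, 2.0)],
    ...                          dtype=[('shape', 'i4'), ('bar', 'f4')])
    >>> recordarr.shape          # the ndarray attribute wins
    (1,)
    >>> recordarr['shape']       # the field is still reachable by index
    array([1], dtype=int32)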
""" TestCmd.py: a testing framework for commands and scripts. The TestCmd module provides a framework for portable automated testing of executable commands and scripts (in any language, not just Python), especially commands and scripts that require file system interaction. In addition to running tests and evaluating conditions, the TestCmd module manages and cleans up one or more temporary workspace directories, and provides methods for creating files and directories in those workspace directories from in-line data, here-documents), allowing tests to be completely self-contained. A TestCmd environment object is created via the usual invocation: import TestCmd test = TestCmd.TestCmd() There are a bunch of keyword arguments available at instantiation: test = TestCmd.TestCmd(description = 'string', program = 'program_or_script_to_test', interpreter = 'script_interpreter', workdir = 'prefix', subdir = 'subdir', verbose = Boolean, match = default_match_function, diff = default_diff_function, combine = Boolean) There are a bunch of methods that let you do different things: test.verbose_set(1) test.description_set('string') test.program_set('program_or_script_to_test') test.interpreter_set('script_interpreter') test.interpreter_set(['script_interpreter', 'arg']) test.workdir_set('prefix') test.workdir_set('') test.workpath('file') test.workpath('subdir', 'file') test.subdir('subdir', ...) test.rmdir('subdir', ...) test.write('file', "contents\n") test.write(['subdir', 'file'], "contents\n") test.read('file') test.read(['subdir', 'file']) test.read('file', mode) test.read(['subdir', 'file'], mode) test.writable('dir', 1) test.writable('dir', None) test.preserve(condition, ...) test.cleanup(condition) test.command_args(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program') test.run(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', chdir = 'directory_to_chdir_to', stdin = 'input to feed to the program\n') universal_newlines = True) p = test.start(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', universal_newlines = None) test.finish(self, p) test.pass_test() test.pass_test(condition) test.pass_test(condition, function) test.fail_test() test.fail_test(condition) test.fail_test(condition, function) test.fail_test(condition, function, skip) test.no_result() test.no_result(condition) test.no_result(condition, function) test.no_result(condition, function, skip) test.stdout() test.stdout(run) test.stderr() test.stderr(run) test.symlink(target, link) test.banner(string) test.banner(string, width) test.diff(actual, expected) test.match(actual, expected) test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n") test.match_exact(["actual 1\n", "actual 2\n"], ["expected 1\n", "expected 2\n"]) test.match_re("actual 1\nactual 2\n", regex_string) test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes) test.match_re_dotall("actual 1\nactual 2\n", regex_string) test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes) test.tempdir() test.tempdir('temporary-directory') test.sleep() test.sleep(seconds) test.where_is('foo') test.where_is('foo', 'PATH1:PATH2') test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4') test.unlink('file') test.unlink('subdir', 'file') The TestCmd module provides pass_test(), fail_test(), and no_result() unbound functions that report test results for use with the 
Aegis change management system. These methods terminate the test immediately, reporting PASSED, FAILED, or NO RESULT and exiting with status 0 (success), 1, or 2, respectively. This allows for a distinction between an actual failed test and a test that could not be properly evaluated because of an external condition (such as a full file system or incorrect permissions). import TestCmd TestCmd.pass_test() TestCmd.pass_test(condition) TestCmd.pass_test(condition, function) TestCmd.fail_test() TestCmd.fail_test(condition) TestCmd.fail_test(condition, function) TestCmd.fail_test(condition, function, skip) TestCmd.no_result() TestCmd.no_result(condition) TestCmd.no_result(condition, function) TestCmd.no_result(condition, function, skip) The TestCmd module also provides unbound functions that handle matching in the same way as the match_*() methods described above. import TestCmd test = TestCmd.TestCmd(match = TestCmd.match_exact) test = TestCmd.TestCmd(match = TestCmd.match_re) test = TestCmd.TestCmd(match = TestCmd.match_re_dotall) The TestCmd module provides unbound functions that can be used for the "diff" argument to TestCmd.TestCmd instantiation: import TestCmd test = TestCmd.TestCmd(match = TestCmd.match_re, diff = TestCmd.diff_re) test = TestCmd.TestCmd(diff = TestCmd.simple_diff) The "diff" argument can also be used with standard difflib functions: import difflib test = TestCmd.TestCmd(diff = difflib.context_diff) test = TestCmd.TestCmd(diff = difflib.unified_diff) Lastly, the where_is() method also exists in an unbound function version. import TestCmd TestCmd.where_is('foo') TestCmd.where_is('foo', 'PATH1:PATH2') TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4') """
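A minimal end-to-end sketch using only the calls listed above; the choice of 'echo' as the program under test and its expected output are illustrative assumptions, not part of TestCmd itself:

    import TestCmd

    test = TestCmd.TestCmd(program='echo', workdir='')   # workdir='' -> temporary workspace
    test.run(arguments='hello')                          # runs: echo hello
    test.fail_test(test.stdout() != 'hello\n')           # FAILED (exit 1) if output differs
    test.pass_test()                                     # otherwise PASSED (exit 0)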
""" Objects for dealing with Chebyshev series. This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a `Chebyshev` class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its "parent" sub-package, `numpy.polynomial`). Constants --------- - `chebdomain` -- Chebyshev series default domain, [-1,1]. - `chebzero` -- (Coefficients of the) Chebyshev series that evaluates identically to 0. - `chebone` -- (Coefficients of the) Chebyshev series that evaluates identically to 1. - `chebx` -- (Coefficients of the) Chebyshev series for the identity map, ``f(x) = x``. Arithmetic ---------- - `chebadd` -- add two Chebyshev series. - `chebsub` -- subtract one Chebyshev series from another. - `chebmul` -- multiply two Chebyshev series. - `chebdiv` -- divide one Chebyshev series by another. - `chebpow` -- raise a Chebyshev series to an positive integer power - `chebval` -- evaluate a Chebyshev series at given points. - `chebval2d` -- evaluate a 2D Chebyshev series at given points. - `chebval3d` -- evaluate a 3D Chebyshev series at given points. - `chebgrid2d` -- evaluate a 2D Chebyshev series on a Cartesian product. - `chebgrid3d` -- evaluate a 3D Chebyshev series on a Cartesian product. Calculus -------- - `chebder` -- differentiate a Chebyshev series. - `chebint` -- integrate a Chebyshev series. Misc Functions -------------- - `chebfromroots` -- create a Chebyshev series with specified roots. - `chebroots` -- find the roots of a Chebyshev series. - `chebvander` -- Vandermonde-like matrix for Chebyshev polynomials. - `chebvander2d` -- Vandermonde-like matrix for 2D power series. - `chebvander3d` -- Vandermonde-like matrix for 3D power series. - `chebgauss` -- Gauss-Chebyshev quadrature, points and weights. - `chebweight` -- Chebyshev weight function. - `chebcompanion` -- symmetrized companion matrix in Chebyshev form. - `chebfit` -- least-squares fit returning a Chebyshev series. - `chebpts1` -- Chebyshev points of the first kind. - `chebpts2` -- Chebyshev points of the second kind. - `chebtrim` -- trim leading coefficients from a Chebyshev series. - `chebline` -- Chebyshev series representing given straight line. - `cheb2poly` -- convert a Chebyshev series to a polynomial. - `poly2cheb` -- convert a polynomial to a Chebyshev series. Classes ------- - `Chebyshev` -- A Chebyshev series class. See also -------- `numpy.polynomial` Notes ----- The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]_: .. math :: T_n(x) = \\frac{z^n + z^{-n}}{2} \\\\ z\\frac{dx}{dz} = \\frac{z - z^{-1}}{2}. where .. math :: x = \\frac{z + z^{-1}}{2}. These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a "z-series." References ---------- .. [1] NAME et al., "Combinatorial Trigonometry with Chebyshev Polynomials," *Journal of Statistical Planning and Inference 14*, 2008 (preprint: http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4) """
# -*- encoding: utf-8 -*- ############################################################################## # # OpenERP, Open Source Management Solution # Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>). # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Affero General Public License as # published by the Free Software Foundation, either version 3 of the # License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Affero General Public License for more details. # # You should have received a copy of the GNU Affero General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # ############################################################################## # SKR03 # ===== # This module provides a German chart of accounts based on the SKR03. # Under the current settings, the company is not subject to VAT. # This default is very easy to change, and as a rule it requires an initial assignment of tax accounts to products and/or general ledger accounts, or to partners. # The output taxes (full rate, reduced rate, and tax-free) should be stored on the product master data (depending on the applicable tax regulations). The assignment is made on the Accounting tab (category: output tax). # The input taxes (full rate, reduced rate, and tax-free) should likewise be stored on the product master data (depending on the applicable tax regulations). The assignment is made on the Accounting tab (category: input tax). # The taxes for imports from and exports to EU countries, as well as for purchases from and sales to third countries, should be assigned on the partner (supplier/customer), depending on the supplier's or customer's country of origin. The assignment on the customer takes precedence over the assignment on products and overrides it in individual cases. # # To simplify tax reporting and posting for foreign transactions, OpenERP allows a general mapping of tax codes and tax accounts (e.g. mapping 'VAT 19%' to 'tax-free imports from the EU') so that this mapping can be assigned to the foreign partner (customer/supplier). # Posting a purchase invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the input tax base amount (e.g. input tax base amount, full rate 19%). # The tax amount appears under the category 'input taxes' (e.g. input tax 19%). Multidimensional hierarchies allow different positions to be aggregated and then output as a report. # # Posting a sales invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the output tax base amount (e.g. output tax base amount, full rate 19%). # The tax amount appears under the category 'output tax' (e.g. VAT 19%). Multidimensional hierarchies allow different positions to be aggregated. # The assigned tax codes can be reviewed on each individual invoice (incoming and outgoing) and adjusted there if necessary. # Credit notes result in a correction (offsetting position) of the tax posting, in the form of a mirror-image posting. # SKR04 # ===== # This module provides a German chart of accounts based on the SKR04. # Under the current settings, the company is not subject to VAT, i.e. by default there is no assignment of products and general ledger accounts to tax keys. # This default is very easy to change, and as a rule it requires an initial assignment of tax keys to products and/or general ledger accounts, or to partners. # The output taxes (full rate, reduced rate, and tax-free) should be stored on the product master data (depending on the applicable tax regulations). The assignment is made on the Accounting tab (category: output tax). # The input taxes (full rate, reduced rate, and tax-free) should likewise be stored on the product master data (depending on the applicable tax regulations). The assignment is made on the Accounting tab (category: input tax). # The taxes for imports from and exports to EU countries, as well as for purchases from and sales to third countries, should be assigned on the partner (supplier/customer), depending on the supplier's or customer's country of origin. The assignment on the customer takes precedence over the assignment on products and overrides it in individual cases. # # To simplify tax reporting and posting for foreign transactions, OpenERP allows a general mapping of tax codes and tax accounts (e.g. mapping 'VAT 19%' to 'tax-free imports from the EU') so that this mapping can be assigned to the foreign partner (customer/supplier). # Posting a purchase invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the input tax base amount (e.g. input tax base amount, full rate 19%). # The tax amount appears under the category 'input taxes' (e.g. input tax 19%). Multidimensional hierarchies allow different positions to be aggregated and then output as a report. # # Posting a sales invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the output tax base amount (e.g. output tax base amount, full rate 19%). # The tax amount appears under the category 'output tax' (e.g. VAT 19%). Multidimensional hierarchies allow different positions to be aggregated. # The assigned tax codes can be reviewed on each individual invoice (incoming and outgoing) and adjusted there if necessary. # Credit notes result in a correction (offsetting position) of the tax posting, in the form of a mirror-image posting.
# -*- coding: utf-8 -*- # # Copyright (C) 2009-2014 NAME <EMAIL> # Copyright (C) 2010 USERNAME <EMAIL> # Copyright (C) 2011 USERNAME <EMAIL> # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # # # History: # # 2019-07-11, NAME <EMAIL> # version 2.6: fix detection of "/input search_text_here" # 2017-04-01, NAME <EMAIL>: # version 2.5: add option "buffer_number" # 2017-03-02, NAME <EMAIL>: # version 2.4: fix syntax and indentation error # 2017-02-25, NAME <EMAIL> # version 2.3: fix fuzzy search breaking buffer number search display # 2016-01-28, USERNAME <EMAIL> # version 2.2: add option "fuzzy_search" # 2015-11-12, USERNAME <EMAIL> # version 2.1: fix problem with buffer short_name "weechat", using option # "use_core_instead_weechat", see: # https://github.com/weechat/weechat/issues/574 # 2014-05-12, NAME <EMAIL>: # version 2.0: add help on options, replace option "sort_by_activity" by # "sort" (add sort by name and first match at beginning of # name and by number), PEP8 compliance # 2012-11-26, NAME <anti.teamidiot.de> # version 1.9: add auto_jump option to automatically go to buffer when it # is uniquely selected # 2012-09-17, NAME <EMAIL>: # version 1.8: fix jump to non-active merged buffers (jump with buffer name # instead of number) # 2012-01-03 USERNAME <EMAIL> # version 1.7: add option "use_core_instead_weechat" # 2012-01-03, NAME <EMAIL>: # version 1.6: make script compatible with Python 3.x # 2011-08-24, USERNAME <EMAIL>: # version 1.5: /go with name argument jumps directly to buffer # Remember cursor position in buffer input # 2011-05-31, NAME <EMAIL>: # version 1.4: Sort list of buffers by activity. # 2011-04-25, NAME <EMAIL>: # version 1.3: add info "go_running" (used by script input_lock.rb) # 2010-11-01, NAME <EMAIL>: # version 1.2: use high priority for hooks to prevent conflict with other # plugins/scripts (WeeChat >= 0.3.4 only) # 2010-03-25, NAME <EMAIL>: # version 1.1: use a space to match the end of a string # 2009-11-16, NAME <EMAIL>: # version 1.0: add new option to display short names # 2009-06-15, NAME <EMAIL>: # version 0.9: fix typo in /help go with command /key # 2009-05-16, NAME <EMAIL>: # version 0.8: search buffer by number, fix bug when window is split # 2009-05-03, NAME <EMAIL>: # version 0.7: eat tab key (do not complete input, just move buffer # pointer) # 2009-05-02, NAME <EMAIL>: # version 0.6: sync with last API changes # 2009-03-22, NAME <EMAIL>: # version 0.5: update modifier signal name for input text display, # fix arguments for function string_remove_color # 2009-02-18, NAME <EMAIL>: # version 0.4: do not hook command and init options if register failed # 2009-02-08, NAME <EMAIL>: # version 0.3: case insensitive search for buffers names # 2009-02-08, NAME <EMAIL>: # version 0.2: add help about Tab key # 2009-02-08, NAME <EMAIL>: # version 0.1: initial release #
""" Writing Plugins --------------- nose supports plugins for test collection, selection, observation and reporting. There are two basic rules for plugins: * Plugin classes should subclass :class:`nose.plugins.Plugin`. * Plugins may implement any of the methods described in the class :doc:`IPluginInterface <interface>` in nose.plugins.base. Please note that this class is for documentary purposes only; plugins may not subclass IPluginInterface. Hello World =========== Here's a basic plugin. It doesn't do much so read on for more ideas or dive into the :doc:`IPluginInterface <interface>` to see all available hooks. .. code-block:: python import logging import os from nose.plugins import Plugin log = logging.getLogger('nose.plugins.helloworld') class HelloWorld(Plugin): name = 'helloworld' def options(self, parser, env=os.environ): super(HelloWorld, self).options(parser, env=env) def configure(self, options, conf): super(HelloWorld, self).configure(options, conf) if not self.enabled: return def finalize(self, result): log.info('Hello pluginized world!') Registering =========== .. Note:: Important note: the following applies only to the default plugin manager. Other plugin managers may use different means to locate and load plugins. For nose to find a plugin, it must be part of a package that uses setuptools_, and the plugin must be included in the entry points defined in the setup.py for the package: .. code-block:: python setup(name='Some plugin', # ... entry_points = { 'nose.plugins.0.10': [ 'someplugin = someplugin:SomePlugin' ] }, # ... ) Once the package is installed with install or develop, nose will be able to load the plugin. .. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools Registering a plugin without setuptools ======================================= It is currently possible to register a plugin programmatically by creating a custom nose runner like this : .. code-block:: python import nose from yourplugin import YourPlugin if __name__ == '__main__': nose.main(addplugins=[YourPlugin()]) Defining options ================ All plugins must implement the methods ``options(self, parser, env)`` and ``configure(self, options, conf)``. Subclasses of nose.plugins.Plugin that want the standard options should call the superclass methods. nose uses optparse.OptionParser from the standard library to parse arguments. A plugin's ``options()`` method receives a parser instance. It's good form for a plugin to use that instance only to add additional arguments that take only long arguments (--like-this). Most of nose's built-in arguments get their default value from an environment variable. A plugin's ``configure()`` method receives the parsed ``OptionParser`` options object, as well as the current config object. Plugins should configure their behavior based on the user-selected settings, and may raise exceptions if the configured behavior is nonsensical. Logging ======= nose uses the logging classes from the standard library. To enable users to view debug messages easily, plugins should use ``logging.getLogger()`` to acquire a logger in the ``nose.plugins`` namespace. Recipes ======= * Writing a plugin that monitors or controls test result output Implement any or all of ``addError``, ``addFailure``, etc., to monitor test results. If you also want to monitor output, implement ``setOutputStream`` and keep a reference to the output stream. If you want to prevent the builtin ``TextTestResult`` output, implement ``setOutputSteam`` and *return a dummy stream*. 
The default output will go to the dummy stream, while you send your desired output to the real stream. Example: `examples/html_plugin/htmlplug.py`_ * Writing a plugin that handles exceptions Subclass :doc:`ErrorClassPlugin <errorclasses>`. Examples: :doc:`nose.plugins.deprecated <deprecated>`, :doc:`nose.plugins.skip <skip>` * Writing a plugin that adds detail to error reports Implement ``formatError`` and/or ``formatFailure``. The error tuple you return (error class, error message, traceback) will replace the original error tuple. Examples: :doc:`nose.plugins.capture <capture>`, :doc:`nose.plugins.failuredetail <failuredetail>` * Writing a plugin that loads tests from files other than python modules Implement ``wantFile`` and ``loadTestsFromFile``. In ``wantFile``, return True for files that you want to examine for tests. In ``loadTestsFromFile``, for those files, return an iterable containing TestCases (or yield them as you find them; ``loadTestsFromFile`` may also be a generator). Example: :doc:`nose.plugins.doctests <doctests>` * Writing a plugin that prints a report Implement ``begin`` if you need to perform setup before testing begins. Implement ``report`` and output your report to the provided stream. Examples: :doc:`nose.plugins.cover <cover>`, :doc:`nose.plugins.prof <prof>` * Writing a plugin that selects or rejects tests Implement any or all ``want*`` methods. Return False to reject the test candidate, True to accept it -- which means that the test candidate will pass through the rest of the system, so you must be prepared to load tests from it if tests can't be loaded by the core loader or another plugin -- and None if you don't care. Examples: :doc:`nose.plugins.attrib <attrib>`, :doc:`nose.plugins.doctests <doctests>`, :doc:`nose.plugins.testid <testid>` More Examples ============= See any builtin plugin or example plugin in the examples_ directory in the nose source distribution. There is a list of third-party plugins `on jottit`_. .. _examples/html_plugin/htmlplug.py: http://python-nose.googlecode.com/svn/trunk/examples/html_plugin/htmlplug.py .. _examples: http://python-nose.googlecode.com/svn/trunk/examples .. _on jottit: http://nose-plugins.jottit.com/ """
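As a hedged sketch of the selection recipe above (the plugin name 'todo-only' and the filename convention are invented for illustration; ``wantFile`` is the real hook named in the recipe):

.. code-block:: python

    import os
    from nose.plugins import Plugin

    class TodoOnly(Plugin):
        """Collect only test modules whose filename mentions 'todo'."""
        name = 'todo-only'

        def wantFile(self, file):
            base = os.path.basename(file)
            if base.startswith('test') and 'todo' in base:
                return True   # accept: be prepared for tests to be loaded from it
            return None       # no opinion: defer to other plugins or the core loader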
"""Doctest for method/function calls. We're going the use these types for extra testing >>> from collections import UserList >>> from collections import UserDict We're defining four helper functions >>> def e(a,b): ... print(a, b) >>> def f(*a, **k): ... print(a, support.sortdict(k)) >>> def g(x, *y, **z): ... print(x, y, support.sortdict(z)) >>> def h(j=1, a=2, h=3): ... print(j, a, h) Argument list examples >>> f() () {} >>> f(1) (1,) {} >>> f(1, 2) (1, 2) {} >>> f(1, 2, 3) (1, 2, 3) {} >>> f(1, 2, 3, *(4, 5)) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *[4, 5]) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *UserList([4, 5])) (1, 2, 3, 4, 5) {} Here we add keyword arguments >>> f(1, 2, 3, **{'a':4, 'b':5}) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7}) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9}) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} >>> f(1, 2, 3, **UserDict(a=4, b=5)) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7)) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9)) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} Examples with invalid arguments (TypeErrors). We're also testing the function names in the exception messages. Verify clearing of SF bug #733667 >>> e(c=4) Traceback (most recent call last): ... TypeError: e() got an unexpected keyword argument 'c' >>> g() Traceback (most recent call last): ... TypeError: g() takes at least 1 positional argument (0 given) >>> g(*()) Traceback (most recent call last): ... TypeError: g() takes at least 1 positional argument (0 given) >>> g(*(), **{}) Traceback (most recent call last): ... TypeError: g() takes at least 1 positional argument (0 given) >>> g(1) 1 () {} >>> g(1, 2) 1 (2,) {} >>> g(1, 2, 3) 1 (2, 3) {} >>> g(1, 2, 3, *(4, 5)) 1 (2, 3, 4, 5) {} >>> class Nothing: pass ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not Nothing >>> class Nothing: ... def __len__(self): return 5 ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not Nothing >>> class Nothing(): ... def __len__(self): return 5 ... def __getitem__(self, i): ... if i<3: return i ... else: raise IndexError(i) ... >>> g(*Nothing()) 0 (1, 2) {} >>> class Nothing: ... def __init__(self): self.c = 0 ... def __iter__(self): return self ... def __next__(self): ... if self.c == 4: ... raise StopIteration ... c = self.c ... self.c += 1 ... return c ... >>> g(*Nothing()) 0 (1, 2, 3) {} Make sure that the function doesn't stomp the dictionary >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d2 = d.copy() >>> g(1, d=4, **d) 1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4} >>> d == d2 True What about willful misconduct? >>> def saboteur(**kw): ... kw['x'] = 'm' ... return kw >>> d = {} >>> kw = saboteur(a=1, **d) >>> d {} >>> g(1, 2, 3, **{'x': 4, 'y': 5}) Traceback (most recent call last): ... TypeError: g() got multiple values for keyword argument 'x' >>> f(**{1:2}) Traceback (most recent call last): ... TypeError: f() keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' >>> h(*h) Traceback (most recent call last): ... TypeError: h() argument after * must be a sequence, not function >>> dir(*h) Traceback (most recent call last): ... TypeError: dir() argument after * must be a sequence, not function >>> None(*h) Traceback (most recent call last): ... 
TypeError: NoneType object argument after * must be a sequence, \ not function >>> h(**h) Traceback (most recent call last): ... TypeError: h() argument after ** must be a mapping, not function >>> dir(**h) Traceback (most recent call last): ... TypeError: dir() argument after ** must be a mapping, not function >>> None(**h) Traceback (most recent call last): ... TypeError: NoneType object argument after ** must be a mapping, \ not function >>> dir(b=1, **{'b': 1}) Traceback (most recent call last): ... TypeError: dir() got multiple values for keyword argument 'b' Another helper function >>> def f2(*a, **b): ... return a, b >>> d = {} >>> for i in range(512): ... key = 'k%d' % i ... d[key] = i >>> a, b = f2(1, *(2,3), **d) >>> len(a), len(b), b == d (3, 512, True) >>> class Foo: ... def method(self, arg1, arg2): ... return arg1+arg2 >>> x = Foo() >>> Foo.method(*(x, 1, 2)) 3 >>> Foo.method(x, *(1, 2)) 3 >>> Foo.method(*(1, 2, 3)) 5 >>> Foo.method(1, *[2, 3]) 5 A PyCFunction that takes only positional parameters should allow an empty keyword dictionary to pass without a complaint, but raise a TypeError if the dictionary is not empty >>> try: ... silence = id(1, *{}) ... True ... except: ... False True >>> id(1, **{'foo': 1}) Traceback (most recent call last): ... TypeError: id() takes no keyword arguments """
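One more hedged example in the same spirit (the MyMapping class is invented for illustration): after **, CPython accepts any object that provides keys() and __getitem__, not only dict subclasses like UserDict

    >>> class MyMapping:
    ...     def keys(self): return ['a']
    ...     def __getitem__(self, key): return 1
    >>> f(**MyMapping())
    () {'a': 1}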
#!/usr/bin/env python # -*- coding: utf-8 -*- #***************************************************************************** # # Copyright (c) 2013 NAME <EMAIL> # # Published under the terms of the MIT license. # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS # IN THE SOFTWARE. # #***************************************************************************** # pts/1 2013-06-06 18:09 23120 id=ts/1 term=0 exit=0 # system boot 2013-05-20 21:27 # run-level 2 2013-05-20 21:27 # LOGIN tty4 2013-05-20 21:27 1631 id=4 # LOGIN tty5 2013-05-20 21:27 1645 id=5 # LOGIN tty2 2013-05-20 21:27 1657 id=2 # LOGIN tty3 2013-05-20 21:27 1658 id=3 # LOGIN tty6 2013-05-20 21:27 1661 id=6 # LOGIN tty1 2013-05-20 21:27 2879 id=1 # pts/22 2013-06-06 07:43 972 id=s/22 term=0 exit=0 # USERNAME + pts/0 2013-08-22 09:04 . 15682 (l26.box) # pts/34 2013-06-12 15:04 26396 id=s/34 term=0 exit=0 # pts/21 2013-06-25 11:12 32321 id=s/21 term=0 exit=0 # pts/24 2013-07-02 22:04 29473 id=/24 term=0 exit=0 # pts/27 2013-07-03 12:04 8492 id=/27 term=0 exit=0 # pts/31 2013-07-18 18:49 27215 id=s/31 term=0 exit=0 # pts/30 2013-07-24 14:40 19054 id=s/30 term=0 exit=0 # pts/28 2013-07-30 20:49 24942 id=s/28 term=0 exit=0 # pts/27 2013-08-02 17:59 31326 id=s/27 term=0 exit=0 #012345678901234567890123456789012345678901234567890123456789012345678901234567890
""" ============================= Subclassing ndarray in python ============================= Credits ------- This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses. Introduction ------------ Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass. ndarrays and object creation ============================ Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are: #. Explicit constructor call - as in ``MySubClass(params)``. This is the usual route to Python instance creation. #. View casting - casting an existing ndarray as a given subclass #. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See :ref:`new-from-template` for more details The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation. .. _view-casting: View casting ------------ *View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass: >>> import numpy as np >>> # create a completely useless ndarray subclass >>> class C(np.ndarray): pass >>> # create a standard ndarray >>> arr = np.zeros((3,)) >>> # take a view of it, as our useless subclass >>> c_arr = arr.view(C) >>> type(c_arr) <class 'C'> .. _new-from-template: Creating new from template -------------------------- New instances of an ndarray subclass can also come about by a very similar mechanism to :ref:`view-casting`, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example: >>> v = c_arr[1:] >>> type(v) # the view is of type 'C' <class 'C'> >>> v is c_arr # but it's a new instance False The slice is a *view* onto the original ``c_arr`` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (``c_arr.copy()``), creating ufunc output arrays (see also :ref:`array-wrap`), and reducing methods (like ``c_arr.mean()``. Relationship of view casting and new-from-template -------------------------------------------------- These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, :ref:`view-casting` means you have created a new instance of your array type from any potential subclass of ndarray. :ref:`new-from-template` means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass. Implications for subclassing ---------------------------- If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also :ref:`view-casting` or :ref:`new-from-template`. Numpy has the machinery to do this, and this machinery that makes subclassing slightly non-standard. 
There are two aspects to the machinery that ndarray uses to support views and new-from-template in subclasses. The first is the use of the ``ndarray.__new__`` method for the main work of object initialization, rather than the more usual ``__init__`` method. The second is the use of the ``__array_finalize__`` method to allow subclasses to clean up after the creation of views and new instances from templates. A brief Python primer on ``__new__`` and ``__init__`` ===================================================== ``__new__`` is a standard Python method, and, if present, is called before ``__init__`` when we create a class instance. See the `python __new__ documentation <http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail. For example, consider the following Python code: .. testcode:: class C(object): def __new__(cls, *args): print 'Cls in __new__:', cls print 'Args in __new__:', args return object.__new__(cls, *args) def __init__(self, *args): print 'type(self) in __init__:', type(self) print 'Args in __init__:', args meaning that we get: >>> c = C('hello') Cls in __new__: <class 'C'> Args in __new__: ('hello',) type(self) in __init__: <class 'C'> Args in __init__: ('hello',) When we call ``C('hello')``, the ``__new__`` method gets its own class as first argument, and the passed argument, which is the string ``'hello'``. After python calls ``__new__``, it usually (see below) calls our ``__init__`` method, with the output of ``__new__`` as the first argument (now a class instance), and the passed arguments following. As you can see, the object can be initialized in the ``__new__`` method or the ``__init__`` method, or both, and in fact ndarray does not have an ``__init__`` method, because all the initialization is done in the ``__new__`` method. Why use ``__new__`` rather than just the usual ``__init__``? Because in some cases, as for ndarray, we want to be able to return an object of some other class. Consider the following: .. testcode:: class D(C): def __new__(cls, *args): print 'D cls is:', cls print 'D args in __new__:', args return C.__new__(C, *args) def __init__(self, *args): # we never get here print 'In D __init__' meaning that: >>> obj = D('hello') D cls is: <class 'D'> D args in __new__: ('hello',) Cls in __new__: <class 'C'> Args in __new__: ('hello',) >>> type(obj) <class 'C'> The definition of ``C`` is the same as before, but for ``D``, the ``__new__`` method returns an instance of class ``C`` rather than ``D``. Note that the ``__init__`` method of ``D`` does not get called. In general, when the ``__new__`` method returns an object of class other than the class in which it is defined, the ``__init__`` method of that class is not called. This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view, the standard ndarray machinery creates the new ndarray object with something like:: obj = ndarray.__new__(subtype, shape, ... where ``subtype`` is the subclass. Thus the returned view is of the same class as the subclass, rather than being of class ``ndarray``. That solves the problem of returning views of the same type, but now we have a new problem. The machinery of ndarray can set the class this way, in its standard methods for taking views, but the ndarray ``__new__`` method knows nothing of what we have done in our own ``__new__`` method in order to set attributes, and so on. (Aside - why not call ``obj = subtype.__new__(...`` then? 
Because we may not have a ``__new__`` method with the same call
signature).

The role of ``__array_finalize__``
==================================

``__array_finalize__`` is the mechanism that numpy provides to allow
subclasses to handle the various ways that new instances get created.

Remember that subclass instances can come about in these three ways:

#. explicit constructor call (``obj = MySubClass(params)``). This will
   call the usual sequence of ``MySubClass.__new__`` then (if it
   exists) ``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`

Our ``MySubClass.__new__`` method only gets called in the case of the
explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
``MySubClass.__init__`` to deal with the view casting and
new-from-template. It turns out that ``MySubClass.__array_finalize__``
*does* get called for all three methods of object creation, so this is
where our object creation housekeeping usually goes.

* For the explicit constructor call, our subclass will need to create a
  new ndarray instance of its own class. In practice this means that
  we, the authors of the code, will need to make a call to
  ``ndarray.__new__(MySubClass,...)``, or do view casting of an
  existing array (see below)
* For view casting and new-from-template, the equivalent of
  ``ndarray.__new__(MySubClass,...)`` is called, at the C level.

The arguments that ``__array_finalize__`` receives differ for the three
methods of instance creation above.

The following code allows us to look at the call sequences and
arguments:

.. testcode::

    import numpy as np

    class C(np.ndarray):
        def __new__(cls, *args, **kwargs):
            print 'In __new__ with class %s' % cls
            return np.ndarray.__new__(cls, *args, **kwargs)

        def __init__(self, *args, **kwargs):
            # in practice you probably will not need or want an __init__
            # method for your subclass
            print 'In __init__ with class %s' % self.__class__

        def __array_finalize__(self, obj):
            print 'In array_finalize:'
            print '   self type is %s' % type(self)
            print '   obj type is %s' % type(obj)

Now:

>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'C'>

The signature of ``__array_finalize__`` is::

    def __array_finalize__(self, obj):

``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our
own class (``self``) as well as the object from which the view has been
taken (``obj``). As you can see from the output above, the ``self`` is
always a newly created instance of our subclass, and the type of ``obj``
differs for the three instance creation methods:

* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any
  subclass of ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our
  own subclass, that we might use to update the new ``self`` instance.

Because ``__array_finalize__`` is the only method that always sees new
instances being created, it is the sensible place to fill in instance
defaults for new object attributes, among other tasks.

This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------

.. testcode::

    import numpy as np

    class InfoArray(np.ndarray):

        def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                    strides=None, order=None, info=None):
            # Create the ndarray instance of our type, given the usual
            # ndarray input arguments.  This will call the standard
            # ndarray constructor, but return an object of our type.
            # It also triggers a call to InfoArray.__array_finalize__
            obj = np.ndarray.__new__(subtype, shape, dtype, buffer,
                                     offset, strides, order)
            # set the new 'info' attribute to the value passed
            obj.info = info
            # Finally, we must return the newly created object:
            return obj

        def __array_finalize__(self, obj):
            # ``self`` is a new object resulting from
            # ndarray.__new__(InfoArray, ...), therefore it only has
            # attributes that the ndarray.__new__ constructor gave it -
            # i.e. those of a standard ndarray.
            #
            # We could have got to the ndarray.__new__ call in 3 ways:
            # From an explicit constructor - e.g. InfoArray():
            #    obj is None
            #    (we're in the middle of the InfoArray.__new__
            #    constructor, and self.info will be set when we return to
            #    InfoArray.__new__)
            if obj is None: return
            # From view casting - e.g. arr.view(InfoArray):
            #    obj is arr
            #    (type(obj) can be InfoArray)
            # From new-from-template - e.g. infoarr[:3]
            #    type(obj) is InfoArray
            #
            # Note that it is here, rather than in the __new__ method,
            # that we set the default value for 'info', because this
            # method sees all creation of default objects - with the
            # InfoArray.__new__ constructor, but also with
            # arr.view(InfoArray).
            self.info = getattr(obj, 'info', None)
            # We do not need to return anything

Using the object looks like this:

>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True

This class isn't very useful, because it has the same constructor as the
bare ndarray object, including passing in buffers and shapes and so on.
We would probably prefer the constructor to be able to take an already
formed ndarray from the usual numpy calls to ``np.array`` and return an
object.

Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------

Here is a class that takes a standard ndarray that already exists, casts
it as our type, and adds an extra attribute.

.. testcode::

    import numpy as np

    class RealisticInfoArray(np.ndarray):

        def __new__(cls, input_array, info=None):
            # Input array is an already formed ndarray instance
            # We first cast to be our class type
            obj = np.asarray(input_array).view(cls)
            # add the new attribute to the created instance
            obj.info = info
            # Finally, we must return the newly created object:
            return obj

        def __array_finalize__(self, obj):
            # see InfoArray.__array_finalize__ for comments
            if obj is None: return
            self.info = getattr(obj, 'info', None)

So:

>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:

``__array_wrap__`` for ufuncs
-----------------------------

``__array_wrap__`` gets called at the end of numpy ufuncs and other
numpy functions, to allow a subclass to set the type of the return value
and update attributes and metadata. Let's show how this works with an
example. First we make the same subclass as above, but with a different
name and some print statements:

.. testcode::

    import numpy as np

    class MySubClass(np.ndarray):

        def __new__(cls, input_array, info=None):
            obj = np.asarray(input_array).view(cls)
            obj.info = info
            return obj

        def __array_finalize__(self, obj):
            print 'In __array_finalize__:'
            print '   self is %s' % repr(self)
            print '   obj is %s' % repr(obj)
            if obj is None: return
            self.info = getattr(obj, 'info', None)

        def __array_wrap__(self, out_arr, context=None):
            print 'In __array_wrap__:'
            print '   self is %s' % repr(self)
            print '   arr is %s' % repr(out_arr)
            # then just call the parent
            return np.ndarray.__array_wrap__(self, out_arr, context)

We run a ufunc on an instance of our new array:

>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
   self is MySubClass([0, 1, 2, 3, 4])
   obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
   self is MySubClass([0, 1, 2, 3, 4])
   arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
   self is MySubClass([1, 3, 5, 7, 9])
   obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'

Note that the ufunc (``np.add``) has called the ``__array_wrap__``
method of the input with the highest ``__array_priority__`` value, in
this case ``MySubClass.__array_wrap__``, with arguments ``self`` as
``obj``, and ``out_arr`` as the (ndarray) result of the addition. In
turn, the default ``__array_wrap__`` (``ndarray.__array_wrap__``) has
cast the result to class ``MySubClass``, and called
``__array_finalize__`` - hence the copying of the ``info`` attribute.
This has all happened at the C level.

But, we could do anything we wanted:

.. testcode::

    class SillySubClass(np.ndarray):

        def __array_wrap__(self, arr, context=None):
            return 'I lost your data'

>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'

So, by defining a specific ``__array_wrap__`` method for our subclass,
we can tweak the output from ufuncs. The ``__array_wrap__`` method
requires ``self``, then an argument - which is the result of the ufunc -
and an optional parameter *context*. This parameter is returned by some
ufuncs as a 3-element tuple: (name of the ufunc, argument of the ufunc,
domain of the ufunc). ``__array_wrap__`` should return an instance of
its containing class. See the masked array subclass for an
implementation.

In addition to ``__array_wrap__``, which is called on the way out of the
ufunc, there is also an ``__array_prepare__`` method which is called on
the way into the ufunc, after the output arrays are created but before
any computation has been performed. The default implementation does
nothing but pass through the array. ``__array_prepare__`` should not
attempt to access the array data or resize the array, it is intended for
setting the output array type, updating attributes and metadata, and
performing any checks based on the input that may be desired before
computation begins. Like ``__array_wrap__``, ``__array_prepare__`` must
return an ndarray or subclass thereof or raise an error.
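For example, a subclass could use ``__array_prepare__`` to attach
metadata to the output array before the ufunc fills it in. The following
is only a sketch, not code from any particular library; the ``info``
attribute is the same illustrative attribute used above, and the view
cast is one simple way of making the output an instance of our class:

.. testcode::

    class PreparedSubClass(np.ndarray):

        def __array_prepare__(self, out_arr, context=None):
            # out_arr has been allocated but not yet filled in;
            # view-cast it to our class so attributes can be attached
            prepared = out_arr.view(type(self))
            prepared.info = getattr(self, 'info', None)
            # the ufunc will write its results into the array we return
            return prepared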
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------

One of the problems that ndarray solves is keeping track of memory
ownership of ndarrays and their views. Consider the case where we have
created an ndarray, ``arr`` and have taken a slice with ``v =
arr[1:]``. The two objects are looking at the same memory. Numpy keeps
track of where the data came from for a particular array or view, with
the ``base`` attribute:

>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True

In general, if the array owns its own memory, as for ``arr`` in this
case, then ``arr.base`` will be None - there are some exceptions to this
- see the numpy book for more details.

The ``base`` attribute is useful for telling whether we have a view or
the original array. This in turn can be useful if we need to know
whether or not to do some specific cleanup when the subclassed array is
deleted. For example, we may only want to do the cleanup if the original
array is deleted, but not the views. For an example of how this can
work, have a look at the ``memmap`` class in ``numpy.core``.
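As a minimal sketch of the idea (this is not the ``memmap``
implementation, just an illustration), a subclass could check ``base``
in a custom ``__del__`` method so that only the memory-owning array
triggers cleanup:

.. testcode::

    class CleanupArray(np.ndarray):

        def __del__(self):
            # base is None only for the array that owns its memory,
            # so deleting a view does not trigger the cleanup
            if self.base is None:
                print 'owner deleted - running cleanup'
            else:
                print 'view deleted - nothing to do'

"""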
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including
        DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters
        between keys and values are surrounded by spaces.
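A minimal usage sketch (the section and option names here are
illustrative, not part of the parser API):

    >>> import configparser
    >>> parser = configparser.ConfigParser()
    >>> parser.read_string('''
    ... [server]
    ... host = localhost
    ... port = 8080
    ... ''')
    >>> parser.sections()
    ['server']
    >>> parser.get('server', 'host')
    'localhost'
    >>> parser.getint('server', 'port')
    8080
"""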
# #!/usr/bin/env python
#
# """
# @package ion.agents.platform.rsn.test.oms_simple
# @file    ion/agents/platform/rsn/test/oms_simple.py
# @author  NAME
# @brief   Program that connects to the real RSN OMS endpoint to do basic
#          verification of the operations. Note that VPN is required.
#          Also, port 5000 on the localhost (via corresponding
#          fully-qualified domain name as returned by socket.getfqdn())
#          needs to be accessible from OMS for the event notification to
#          be received here.
#
#          For usage, call:
#            bin/python ion/agents/platform/rsn/test/oms_simple.py --help
#
# @see https://confluence.oceanobservatories.org/display/CIDev/RSN+OMS+endpoint+implementation+verification
# @see https://confluence.oceanobservatories.org/display/syseng/CIAD+MI+SV+CI-OMS+interface
# """
#
# __author__ = 'Carlos NAME
# __license__ = 'Apache 2.0'
#
#
# from ion.agents.platform.rsn.oms_event_listener import OmsEventListener
# from ion.agents.platform.responses import InvalidResponse
# from pyon.util.breakpoint import breakpoint
#
# import xmlrpclib
# import sys
# import pprint
# import socket
#
#
# DEFAULT_RSN_OMS_URI = "http://alice:1234@IP_ADDRESS:9021/"
# DEFAULT_MAX_WAIT = 70
#
# INVALID_PLATFORM_ID = InvalidResponse.PLATFORM_ID
#
# # use full-qualified domain name as the external host for the registration
# HTTP_SERVER_HOST = socket.getfqdn()
# HTTP_SERVER_PORT = 5000
#
# EVENT_LISTENER_URL = "http://%s:%d/oms" % (HTTP_SERVER_HOST, HTTP_SERVER_PORT)
#
# # max time to wait to receive the test event
# max_wait = 0
#
# # launch IPython shell?
# launch_breakpoint = False
#
# tried = {}
#
#
# def launch_listener():  # pragma: no cover
#     def notify_driver_event(evt):
#         print("notify_driver_event received: %s" % str(evt.event_instance))
#
#     print 'launching listener, port=%d ...' % HTTP_SERVER_PORT
#     oms_event_listener = OmsEventListener("dummy_plat_id", notify_driver_event)
#     oms_event_listener.keep_notifications()
#     oms_event_listener.start_http_server(host='', port=HTTP_SERVER_PORT)
#     print 'listener launched'
#     return oms_event_listener
#
#
# def main(uri):  # pragma: no cover
#     oms_event_listener = launch_listener()
#
#     print '\nconnecting to %r ...' % uri
#     proxy = xmlrpclib.ServerProxy(uri, allow_none=True)
#     print 'connection established.'
#
#     pp = pprint.PrettyPrinter()
#
#     def show_listeners():
#         from datetime import datetime
#         from ion.agents.platform.util import ntp_2_ion_ts
#
#         event_listeners = proxy.event.get_registered_event_listeners()
#         print("Event listeners (%d):" % len(event_listeners))
#         for a, b in sorted(event_listeners.iteritems(),
#                            lambda a, b: int(a[1] - b[1])):
#             time = datetime.fromtimestamp(float(ntp_2_ion_ts(b)) / 1000)
#             print("  %s  %s" % (time, a))
#         print
#
#     def format_val(value):
#         prefix = "\t\t"
#         print "\n%s%s" % (prefix, pp.pformat(value).replace("\n", "\n" + prefix))
#
#     def format_err(msg):
#         prefix = "\t\t"
#         print "\n%s%s" % (prefix, msg.replace("\n", "\n" + prefix))
#
#     def get_method(handler_name, method_name):
#         """
#         Gets the method from the proxy.
#         @param handler_name  Name of the handler; can be None to indicate get
#                              method directly from proxy.
#         @param method_name  Method's name
#
#         @return callable; None if any error getting the method
#         """
#
#         # get method:
#         if handler_name:
#             # get handler:
#             try:
#                 handler = getattr(proxy, handler_name)
#             except Exception as e:
#                 print "error getting handler %s: %s: %s" % (handler_name, type(e), str(e))
#                 return None
#             try:
#                 method = getattr(handler, method_name)
#                 return method
#             except Exception as e:
#                 print "error method %s.%s: %s: %s" % (handler_name, method_name, type(e), str(e))
#                 return None
#         else:
#             try:
#                 method = getattr(proxy, method_name)
#                 return method
#             except Exception as e:
#                 print "error getting proxy's method %s: %s: %s" % (method_name, type(e), str(e))
#                 return None
#
#     def run(full_method_name, *args):
#         """
#         Runs a method against the proxy.
#
#         @param full_method_name
#         @param args
#         """
#         global tried
#
#         tried[full_method_name] = ""
#
#         handler_name, method_name = full_method_name.split(".")
#
#         # get the method
#         method = get_method(handler_name, method_name)
#         if method is None:
#             tried[full_method_name] = "could not get handler or method"
#             return
#
#         sargs = ", ".join(["%r" % a for a in args])
#
#         sys.stdout.write("\n%s(%s) -> " % (full_method_name, sargs))
#         sys.stdout.flush()
#
#         # run method
#         retval, reterr = None, None
#         try:
#             retval = method(*args)
#             tried[full_method_name] = "OK"
#             # print "%r" % retval
#             format_val(retval)
#         except xmlrpclib.Fault as e:
#             if e.faultCode == 8001:
#                 reterr = "-- NOT FOUND (fault %s)" % e.faultCode
#             else:
#                 reterr = "-- Fault %d: %s" % (e.faultCode, e.faultString)
#                 # raise
#                 # print "Exception: %s: %s" % (type(e), str(e))
#                 # tried[full_method_name] = str(e)
#
#             tried[full_method_name] = reterr
#             format_err(reterr)
#
#         return retval, reterr
#
#     def verify_entry_in_dict(retval, reterr, entry):
#         if reterr is not None:
#             return retval, reterr
#
#         if not isinstance(retval, dict):
#             reterr = "-- expecting a dict with entry %r" % entry
#         elif entry not in retval:
#             reterr = "-- expecting a dict with entry %r" % entry
#         else:
#             retval = retval[entry]
#
#         print("full_method_name = %s" % full_method_name)
#         if reterr:
#             tried[full_method_name] = reterr
#             format_err(reterr)
#
#         return retval, reterr
#
#     def verify_test_event_notified(retval, reterr, event):
#         print("waiting for a max of %d secs for test event to be notified..." % max_wait)
#         import time
#
#         wait_until = time.time() + max_wait
#         got_it = False
#         while not got_it and time.time() <= wait_until:
#             time.sleep(1)
#             for evt in oms_event_listener.notifications:
#                 if event['message'] == evt['message']:
#                     got_it = True
#                     break
#
#         # print("Received external events: %s" % oms_event_listener.notifications)
#         if not got_it:
#             reterr = "error: didn't get expected test event notification within %d " \
#                      "secs. (Got %d event notifications.)" % (
#                          max_wait, len(oms_event_listener.notifications))
#
#         print("full_method_name = %s" % full_method_name)
#         if reterr:
#             tried[full_method_name] = reterr
#             format_err(reterr)
#
#         return retval, reterr
#
#     show_listeners()
#
#     if launch_breakpoint:
#         breakpoint(locals())
#
#     print "Basic verification of the operations:\n"
#
#     #----------------------------------------------------------------------
#     full_method_name = "hello.ping"
#     retval, reterr = run(full_method_name)
#     if retval and retval.lower() != "pong":
#         error = "expecting 'pong'"
#         tried[full_method_name] = error
#         format_err(error)
#
#     #----------------------------------------------------------------------
#     full_method_name = "config.get_platform_types"
#     retval, reterr = run(full_method_name)
#     if retval and not isinstance(retval, dict):
#         error = "expecting a dict"
#         tried[full_method_name] = error
#         format_err(error)
#
#     platform_id = "dummy_platform_id"
#
#     #----------------------------------------------------------------------
#     full_method_name = "config.get_platform_map"
#     retval, reterr = run(full_method_name)
#     if retval is not None:
#         if isinstance(retval, list):
#             if len(retval):
#                 if isinstance(retval[0], (tuple, list)):
#                     platform_id = retval[0][0]
#                 else:
#                     reterr = "expecting a list of tuples or lists"
#             else:
#                 reterr = "expecting a non-empty list"
#         else:
#             reterr = "expecting a list"
#         if reterr:
#             tried[full_method_name] = reterr
#             format_err(reterr)
#
#     #----------------------------------------------------------------------
#     full_method_name = "config.get_platform_metadata"
#     retval, reterr = run(full_method_name, platform_id)
#     retval, reterr = verify_entry_in_dict(retval, reterr, platform_id)
#
#     #----------------------------------------------------------------------
#     full_method_name = "attr.get_platform_attributes"
#     retval, reterr = run(full_method_name, platform_id)
#     retval, reterr = verify_entry_in_dict(retval, reterr, platform_id)
#
#     #----------------------------------------------------------------------
#     full_method_name = "attr.get_platform_attribute_values"
#     retval, reterr = run(full_method_name, platform_id, [])
#     retval, reterr = verify_entry_in_dict(retval, reterr, platform_id)
#
#     #----------------------------------------------------------------------
#     full_method_name = "attr.set_platform_attribute_values"
#     retval, reterr = run(full_method_name, platform_id, {})
#     retval, reterr = verify_entry_in_dict(retval, reterr, platform_id)
#
#     port_id = "dummy_port_id"
#
#     #----------------------------------------------------------------------
#     full_method_name = "port.get_platform_ports"
#     retval, reterr = run(full_method_name, platform_id)
#     retval, reterr = verify_entry_in_dict(retval, reterr, platform_id)
#     if retval is not None:
#         if isinstance(retval, dict):
#             if len(retval):
#                 port_id = retval.keys()[0]
#             else:
#                 reterr = "empty dict of ports for platform %r" % platform_id
#         else:
#             reterr = "expecting a dict {%r: ...}. got: %s" % (platform_id, type(retval))
#         if reterr:
#             tried[full_method_name] = reterr
#             format_err(reterr)
#
#     instrument_id = "dummy_instrument_id"
#
#     if reterr is None:
#         full_method_name = "port.get_platform_ports"
#         retval, reterr = run(full_method_name, "dummy_platform_id")
#         orig_retval = retval
#         retval, reterr = verify_entry_in_dict(retval, reterr, "dummy_platform_id")
#         if retval != INVALID_PLATFORM_ID:
got: %r" % ( # "dummy_platform_id", INVALID_PLATFORM_ID, orig_retval) # tried[full_method_name] = reterr # format_err(reterr) # # instrument_id = "dummy_instrument_id" # # #---------------------------------------------------------------------- # full_method_name = "instr.connect_instrument" # retval, reterr = run(full_method_name, platform_id, port_id, instrument_id, {}) # retval, reterr = verify_entry_in_dict(retval, reterr, platform_id) # retval, reterr = verify_entry_in_dict(retval, reterr, port_id) # retval, reterr = verify_entry_in_dict(retval, reterr, instrument_id) # # connect_instrument_error = reterr # # #---------------------------------------------------------------------- # full_method_name = "instr.get_connected_instruments" # retval, reterr = run(full_method_name, platform_id, port_id) # retval, reterr = verify_entry_in_dict(retval, reterr, platform_id) # retval, reterr = verify_entry_in_dict(retval, reterr, port_id) # # note, in case of error in instr.connect_instrument, don't expect the # # instrument_id to be reported: # if connect_instrument_error is None: # retval, reterr = verify_entry_in_dict(retval, reterr, instrument_id) # # #---------------------------------------------------------------------- # full_method_name = "instr.disconnect_instrument" # retval, reterr = run(full_method_name, platform_id, port_id, instrument_id) # retval, reterr = verify_entry_in_dict(retval, reterr, platform_id) # retval, reterr = verify_entry_in_dict(retval, reterr, port_id) # retval, reterr = verify_entry_in_dict(retval, reterr, instrument_id) # # #---------------------------------------------------------------------- # full_method_name = "port.turn_on_platform_port" # retval, reterr = run(full_method_name, platform_id, port_id) # # #---------------------------------------------------------------------- # full_method_name = "port.turn_off_platform_port" # retval, reterr = run(full_method_name, platform_id, port_id) # # #---------------------------------------------------------------------- # url = EVENT_LISTENER_URL # # #---------------------------------------------------------------------- # full_method_name = "event.register_event_listener" # retval, reterr = run(full_method_name, url) # retval, reterr = verify_entry_in_dict(retval, reterr, url) # # #---------------------------------------------------------------------- # full_method_name = "event.get_registered_event_listeners" # retval, reterr = run(full_method_name) # urls = retval # retval, reterr = verify_entry_in_dict(retval, reterr, url) # # #---------------------------------------------------------------------- # full_method_name = "event.unregister_event_listener" # if isinstance(urls, dict): # # this part just as a convenience to unregister listeners that were # # left registered by some error in a prior interaction. 
# prefix = "http://IP_ADDRESS:" # or some other needed prefix # for url2 in urls: # if url2.find(prefix) >= 0: # retval, reterr = run(full_method_name, url2) # retval, reterr = verify_entry_in_dict(retval, reterr, url2) # if reterr is not None: # break # if reterr is None: # retval, reterr = run(full_method_name, url) # retval, reterr = verify_entry_in_dict(retval, reterr, url) # # #---------------------------------------------------------------------- # full_method_name = "config.get_checksum" # retval, reterr = run(full_method_name, platform_id) # # # the following to specifically verify reception of test event # if max_wait: # full_method_name = "event.register_event_listener" # retval, reterr = run(full_method_name, EVENT_LISTENER_URL) # retval, reterr = verify_entry_in_dict(retval, reterr, EVENT_LISTENER_URL) # # full_method_name = "event.generate_test_event" # event = { # 'message' : "fake event triggered from CI using OMS' generate_test_event", # 'platform_id' : "fake_platform_id", # 'severity' : "3", # 'group ' : "power", # } # retval, reterr = run(full_method_name, event) # # if max_wait: # verify_test_event_notified(retval, reterr, event) # # full_method_name = "event.unregister_event_listener" # retval, reterr = run(full_method_name, EVENT_LISTENER_URL) # retval, reterr = verify_entry_in_dict(retval, reterr, EVENT_LISTENER_URL) # elif not reterr: # ok_but = "OK (but verification of event reception was not performed)" # tried[full_method_name] = ok_but # format_err(ok_but) # # show_listeners() # # ####################################################################### # print("\nSummary of basic verification:") # okeys = 0 # for full_method_name, result in sorted(tried.iteritems()): # print("%20s %-40s: %s" % ("", full_method_name, result)) # if result.startswith("OK"): # okeys += 1 # print("OK methods %d out of %s" % (okeys, len(tried))) # # # if __name__ == "__main__": # pragma: no cover # # import argparse # # parser = argparse.ArgumentParser(description="Basic CI-OMS verification program") # parser.add_argument("-u", "--uri", # help="RSN OMS URI (default: %s)" % DEFAULT_RSN_OMS_URI, # default=DEFAULT_RSN_OMS_URI) # parser.add_argument("-w", "--wait", # help="Max wait time for test event (default: %d)" % DEFAULT_MAX_WAIT, # default=DEFAULT_MAX_WAIT) # parser.add_argument("-b", "--breakpoint", # help="Launch IPython shell at beginning", # action='store_const', const=True) # # opts = parser.parse_args() # # uri = opts.uri # max_wait = int(opts.wait) # launch_breakpoint = bool(opts.breakpoint) # # main(uri)
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
##############################################################################
# version IP_ADDRESS
# VAT descriptions adjusted so that the reports look better. Removed 2a, 5b and the like, and added a few descriptions.
# version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
#!/usr/bin/env python

# (c) 2013, NAME <EMAIL>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <EMAIL>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag. Instead, it queries the Docker API for each container, running
# or not, and returns this data all at once.
#
# The plugin returns the following custom attributes on Docker containers:
#    docker_args
#    docker_config
#    docker_created
#    docker_driver
#    docker_exec_driver
#    docker_host_config
#    docker_hostname_path
#    docker_hosts_path
#    docker_id
#    docker_image
#    docker_name
#    docker_network_settings
#    docker_path
#    docker_resolv_conf_path
#    docker_state
#    docker_volumes
#    docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
#    DOCKER_CONFIG_FILE
#    DOCKER_HOST
#    DOCKER_VERSION
#    DOCKER_TIMEOUT
#    DOCKER_PRIVATE_SSH_PORT
#    DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
#     description:
#         - A path to a Docker inventory hosts/defaults file in YAML format
#         - A sample file has been provided, colocated with the inventory
#           file called 'docker.yml'
#     required: false
#     default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
#     description:
#         - The socket on which to connect to a Docker daemon API
#     required: false
#     default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
#     description:
#         - Version of the Docker API to use
#     default: Uses docker.docker.Client constructor defaults
#     required: false
# environment variable: DOCKER_TIMEOUT
#     description:
#         - Timeout in seconds for connections to Docker daemon API
#     default: Uses docker.docker.Client constructor defaults
#     required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
#     description:
#         - The private port (container port) on which SSH is listening
#           for connections
#     default: 22
#     required: false
# environment variable: DOCKER_DEFAULT_IP
#     description:
#         - This environment variable overrides the container SSH connection
#           IP address (aka, 'ansible_ssh_host')
#
#           This option allows one to override the ansible_ssh_host whenever
#           Docker has exercised its default behavior of binding private ports
#           to all interfaces of the Docker host. This behavior, when dealing
#           with remote Docker hosts, does not allow Ansible to determine
#           a proper host IP address on which to connect via SSH to containers.
#           By default, this inventory module assumes all IP_ADDRESS-exposed
#           ports to be bound to localhost:<port>.
#           To override this behavior, for example, to bind a container's
#           SSH port to the public interface of its host, one must
#           manually set this IP.
#
#           It is preferable to launch Docker containers with
#           ports exposed on publicly accessible IP addresses, particularly
#           if the containers are to be targeted by Ansible for remote
#           configuration, not accessible via localhost SSH connections.
#
#           Docker containers can be explicitly exposed on IP addresses by
#           a) starting the daemon with the --ip argument
#           b) running containers with the -P/--publish ip::containerPort
#              argument
#     default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker
#     required: false
#
# Examples:
#  Use the config file:
#  DOCKER_CONFIG_FILE=./docker.yml docker.py --list
#
#  Connect to docker instance on localhost port 4243
#  DOCKER_HOST=tcp://localhost:4243 docker.py --list
#
#  Any container's ssh port exposed on IP_ADDRESS will be mapped to
#  another IP address (where Ansible will attempt to connect via SSH)
#  DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
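# As an illustrative sketch only (not the plugin's actual code), the kind
# of docker-py call this inventory relies on looks roughly like the
# following; the base_url value is an assumption for a local daemon:
#
#     import docker
#
#     client = docker.Client(base_url='tcp://localhost:4243')
#     for container in client.containers(all=True):
#         # each container is a dict with keys such as 'Id' and 'Names'
#         print(container['Id'], container['Names'])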
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be > 1D atleast_2d Force arrays to be > 2D atleast_3d Force arrays to be > 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat              Construct a Matrix
bmat             Build a Matrix from blocks
================ ===================

Polynomials
-----------
================ ===================
poly1d           A one-dimensional polynomial class
poly             Return polynomial coefficients from roots
roots            Find roots of polynomial given coefficients
polyint          Integrate polynomial
polyder          Differentiate polynomial
polyadd          Add polynomials
polysub          Subtract polynomials
polymul          Multiply polynomials
polydiv          Divide polynomials
polyval          Evaluate polynomial at given argument
================ ===================

Import Tricks
-------------
================ ===================
ppimport         Postpone module import until trying to use it
ppimport_attr    Postpone module import until trying to use its attribute
ppresolve        Import postponed module and return it.
================ ===================

Machine Arithmetics
-------------------
================ ===================
machar_single    Single precision floating point arithmetic parameters
machar_double    Double precision floating point arithmetic parameters
================ ===================

Threading Tricks
----------------
================ ===================
ParallelExec     Execute commands in parallel thread.
================ ===================

1D Array Set Operations
-----------------------
Set operations for 1D numeric arrays based on sort() function.

================ ===================
ediff1d          Array difference (auxiliary function).
unique           Unique elements of an array.
intersect1d      Intersection of 1D arrays with unique elements.
setxor1d         Set exclusive-or of 1D arrays with unique elements.
in1d             Test whether elements in a 1D array are also present in
                 another array.
union1d          Union of 1D arrays with unique elements.
setdiff1d        Set difference of 1D arrays with unique elements.
================ ===================
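For example, the set operations behave like their mathematical
counterparts; a small illustrative sketch:

>>> import numpy as np
>>> a = np.array([1, 2, 3, 2, 4])
>>> b = np.array([2, 4, 6])
>>> np.unique(a)
array([1, 2, 3, 4])
>>> np.intersect1d(a, b)
array([2, 4])
>>> np.union1d(a, b)
array([1, 2, 3, 4, 6])
>>> np.setdiff1d(a, b)
array([1, 3])

"""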
#!/usr/bin/env python3
#
# Copyright (c) NAME and the University of Texas MD Anderson Cancer Center
# Distributed under the terms of the 3-clause BSD License.

# import os
# import shutil
# import subprocess
# import unittest
#
#
# class TestPack(unittest.TestCase):
#     def setUp(self):
#         if os.path.isdir('temp'):
#             shutil.rmtree('temp')
#         os.mkdir('temp')
#         os.chdir('temp')
#         with open('test.sos', 'w') as script:
#             script.write('''
# %from included include *
# parameter: name='t_f1'
# [0]
# output: name
# import os
# with open(_output, 'wb') as out:
#     out.write(os.urandom(10000))
#
# [1]
# output: os.path.join('t_d1', 't_f2')
# import os
# with open(_output, 'wb') as out:
#     out.write(os.urandom(50000))
# with open(os.path.join('t_d1', 'ut_f4'), 'wb') as out:
#     out.write(os.urandom(10000))
#
# [2]
# output: os.path.join('t_d2', 't_d3', 't_f3')
# import os
# with open(_output, 'wb') as out:
#     out.write(os.urandom(5000))
#
# ''')
#         with open('included.sos', 'w') as script:
#             script.write('''
# # does nothing
# a = 1
# ''')
#         subprocess.call('sos run test -s force -w', shell=True)
#         # create some other files and directory
#         for d in ('ut_d1', 'ut_d2', 'ut_d2/ut_d3'):
#             os.mkdir(d)
#         for f in ('ut_f1', 'ut_d1/ut_f2', 'ut_d2/ut_d3/ut_f3'):
#             with open(f, 'w') as tf:
#                 tf.write(f)
#
#     def assertExists(self, fdlist):
#         for fd in fdlist:
#             self.assertTrue(os.path.exists(fd), '{} does not exist'.format(fd))
#
#     def assertNonExists(self, fdlist):
#         for fd in fdlist:
#             self.assertFalse(os.path.exists(fd), '{} still exists'.format(fd))
#
#     def testSetup(self):
#         self.assertExists(['ut_d1', 'ut_d2', 'ut_d2/ut_d3', 'ut_f1',
#                            'ut_d1/ut_f2', 'ut_d2/ut_d3/ut_f3'])
#         self.assertExists(['t_f1', 't_d1/t_f2', 't_d2/t_d3/t_f3', 't_d2/t_d3', 't_d2'])
#         # this is the tricky part, directory containing untracked file should remain
#         self.assertExists(['t_d1', 't_d1/ut_f4'])
#
#     def testDryrun(self):
#         '''Test dryrun mode'''
#         self.assertEqual(subprocess.call(
#             'sos pack -o b.sar -i t_d1/ut_f4 --dryrun', shell=True), 0)
#         self.assertFalse(os.path.isfile('b.sar'))
#
#     def testPackZapped(self):
#         '''Test archiving of zapped files'''
#         self.assertEqual(subprocess.call('sos remove t_d1/t_f2 --zap -y', shell=True), 0)
#         self.assertEqual(subprocess.call('sos pack -o a.sar', shell=True), 0)
#         self.assertEqual(subprocess.call('sos unpack a.sar -y', shell=True), 0)
#
#     def testPackUnpack(self):
#         '''Test pack command'''
#         self.assertEqual(subprocess.call('sos pack -o a.sar', shell=True), 0)
#         # extra file
#         self.assertEqual(subprocess.call('sos pack -o b.sar -i t_d1/ut_f4', shell=True), 0)
#         # extra directory
#         self.assertEqual(subprocess.call('sos pack -o b.sar -i t_d1 -y', shell=True), 0)
#         # unpack
#         self.assertEqual(subprocess.call('sos unpack a.sar', shell=True), 0)
#         # unpack to a different directory
#         self.assertEqual(subprocess.call('sos unpack a.sar -d tmp', shell=True), 0)
#         # list content
#         self.assertEqual(subprocess.call('sos unpack a.sar -l', shell=True), 0)
#
#     def testUnpackScript(self):
#         '''Test -s option of unpack'''
#         self.assertEqual(subprocess.call('sos pack -o a.sar', shell=True), 0)
#         os.remove('test.sos')
#         os.remove('included.sos')
#         # unpack
#         self.assertEqual(subprocess.call('sos unpack a.sar', shell=True), 0)
#         self.assertFalse(os.path.isfile('test.sos'))
#         self.assertFalse(os.path.isfile('included.sos'))
#         # unpack to a different directory
#         self.assertEqual(subprocess.call('sos unpack a.sar -s -y', shell=True), 0)
#         self.assertTrue(os.path.isfile('test.sos'))
#         self.assertTrue(os.path.isfile('included.sos'))
#
#     def testUnpackSelected(self):
#         # unpack selected file
#         self.assertEqual(subprocess.call('sos pack -o a.sar -i t_d1/ut_f4', shell=True), 0)
#         shutil.rmtree('.sos')
#         shutil.rmtree('t_d1')
#         shutil.rmtree('t_d2')
#         self.assertEqual(subprocess.call('sos unpack a.sar ut_f4', shell=True), 0)
#         self.assertTrue(os.path.isfile('t_d1/ut_f4'))
#         self.assertFalse(os.path.exists('t_d2'))
#
#     def tearDown(self):
#         os.chdir('..')
#         try:
#             shutil.rmtree('temp')
#         except Exception:
#             pass
#
#
# if __name__ == '__main__':
#     unittest.main()
""" ======================== Broadcasting over arrays ======================== The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where ``b`` was an array. We can think of the scalar ``b`` being *stretched* during the arithmetic operation into an array with the same shape as ``a``. The new elements in ``b`` are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (``b`` is a scalar rather than an array). General Broadcasting Rules ========================== When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when 1) they are equal, or 2) one of them is 1 If these conditions are not met, a ``ValueError: frames are not aligned`` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same *number* of dimensions. For example, if you have a ``256x256x3`` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or "copied" to match the other. 
In the following example, both the ``A`` and ``B`` arrays have axes with
length one that are expanded to a larger size during the broadcast
operation::

  A      (4d array):  8 x 1 x 6 x 1
  B      (3d array):      7 x 1 x 5
  Result (4d array):  8 x 7 x 6 x 5

Here are some more examples::

  A      (2d array):  5 x 4
  B      (1d array):      1
  Result (2d array):  5 x 4

  A      (2d array):  5 x 4
  B      (1d array):      4
  Result (2d array):  5 x 4

  A      (3d array):  15 x 3 x 5
  B      (3d array):  15 x 1 x 5
  Result (3d array):  15 x 3 x 5

  A      (3d array):  15 x 3 x 5
  B      (2d array):       3 x 5
  Result (3d array):  15 x 3 x 5

  A      (3d array):  15 x 3 x 5
  B      (2d array):       3 x 1
  Result (3d array):  15 x 3 x 5

Here are examples of shapes that do not broadcast::

  A      (1d array):  3
  B      (1d array):  4 # trailing dimensions do not match

  A      (2d array):      2 x 1
  B      (3d array):  8 x 4 x 3 # second from last dimensions mismatched

An example of broadcasting in practice::

  >>> x = np.arange(4)
  >>> xx = x.reshape(4,1)
  >>> y = np.ones(5)
  >>> z = np.ones((3,4))

  >>> x.shape
  (4,)

  >>> y.shape
  (5,)

  >>> x + y
  <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape

  >>> xx.shape
  (4, 1)

  >>> y.shape
  (5,)

  >>> (xx + y).shape
  (4, 5)

  >>> xx + y
  array([[ 1.,  1.,  1.,  1.,  1.],
         [ 2.,  2.,  2.,  2.,  2.],
         [ 3.,  3.,  3.,  3.,  3.],
         [ 4.,  4.,  4.,  4.,  4.]])

  >>> x.shape
  (4,)

  >>> z.shape
  (3, 4)

  >>> (x + z).shape
  (3, 4)

  >>> x + z
  array([[ 1.,  2.,  3.,  4.],
         [ 1.,  2.,  3.,  4.],
         [ 1.,  2.,  3.,  4.]])

Broadcasting provides a convenient way of taking the outer product (or
any other outer operation) of two arrays. The following example shows an
outer addition operation of two 1-d arrays::

  >>> a = np.array([0.0, 10.0, 20.0, 30.0])
  >>> b = np.array([1.0, 2.0, 3.0])
  >>> a[:, np.newaxis] + b
  array([[  1.,   2.,   3.],
         [ 11.,  12.,  13.],
         [ 21.,  22.,  23.],
         [ 31.,  32.,  33.]])

Here the ``newaxis`` index operator inserts a new axis into ``a``,
making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array
with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.

See `this article <http://wiki.scipy.org/EricsBroadcastingDoc>`_
for illustrations of broadcasting concepts.
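The broadcast shape of a pair of arrays can also be checked
programmatically with ``np.broadcast``; a small sketch using the first
example above::

  >>> x = np.empty((8, 1, 6, 1))
  >>> y = np.empty((7, 1, 5))
  >>> np.broadcast(x, y).shape
  (8, 7, 6, 5)

"""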
#!/usr/bin/env python2
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT                                          ###

# Post-Process to CouchPotato, SickBeard, NzbDrone, Mylar, Gamez, HeadPhones.
#
# This script sends the download to your automated media management servers.
#
# NOTE: This script requires Python to be installed on your system.

##############################################################################
### OPTIONS                                                                ###

## General

# Auto Update nzbToMedia (0, 1).
#
# Set to 1 if you want nzbToMedia to automatically check for and update to the latest version
#auto_update=0

# Check Media for corruption (0, 1).
#
# Enable/Disable media file checking using ffprobe.
#check_media=1

# Safe Mode protection of DestDir (0, 1).
#
# Enable/Disable a safety check to ensure we don't process all downloads in the default_downloadDirectory by mistake.
#safe_mode=1

# Disable additional extraction checks for failed (0, 1).
#
# Turn this on to disable additional extraction attempts for failed downloads. Default = 0; this will attempt to extract and verify if media is present.
#no_extract_failed = 0

## CouchPotato

# CouchPotato script category.
#
# category that gets called for post-processing with CouchPotatoServer.
#cpsCategory=movie

# CouchPotato api key.
#cpsapikey=

# CouchPotato host.
#
# The IP address for your CouchPotato server. e.g. for the same system use localhost or IP_ADDRESS
#cpshost=localhost

# CouchPotato port.
#cpsport=5050

# CouchPotato uses ssl (0, 1).
#
# Set to 1 if using ssl, else set to 0.
#cpsssl=0

# CouchPotato URL_Base
#
# set this if using a reverse proxy.
#cpsweb_root=

# CouchPotato Postprocess Method (renamer, manage).
#
# use "renamer" for CPS renamer (default) or "manage" to call a manage update.
#cpsmethod=renamer

# CouchPotato OMDB API Key.
#
# api key for www.omdbapi.com (used as alternative to imdb to assist with movie identification).
#cpsomdbapikey=

# CouchPotato Delete Failed Downloads (0, 1).
#
# set to 1 to delete failed, or 0 to leave files in place.
#cpsdelete_failed=0

# CouchPotato wait_for
#
# Set the number of minutes to wait after calling the renamer, to check the movie has changed status.
#cpswait_for=2

# Couchpotato and NZBGet are a different system (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#cpsremote_path=0

## Radarr

# Radarr script category.
#
# category that gets called for post-processing with NzbDrone.
#raCategory=movies2

# Radarr host.
#
# The IP address for your Radarr server. e.g. for the same system use localhost or IP_ADDRESS
#rahost=localhost

# Radarr port.
#raport=7878

# Radarr API key.
#raapikey=

# Radarr uses ssl (0, 1).
#
# Set to 1 if using ssl, else set to 0.
#rassl=0

# Radarr web_root
#
# set this if using a reverse proxy.
#raweb_root=

# Radarr wait_for
#
# Set the number of minutes to wait after calling the renamer, to check the episode has changed status.
#rawait_for=6

# Radarr OMDB API Key.
#
# api key for www.omdbapi.com (used as alternative to imdb to assist with movie identification).
#raomdbapikey=

# Radarr Delete Failed Downloads (0, 1).
#
# set to 1 to delete failed, or 0 to leave files in place.
#radelete_failed=0

# Radarr and NZBGet are a different system (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#raremote_path=0

## SickBeard

# SickBeard script category.
#
# category that gets called for post-processing with SickBeard.
#sbCategory=tv

# SickBeard host.
#
# The IP address for your SickBeard/SickRage server, e.g. for the same system use localhost or IP_ADDRESS
#sbhost=localhost

# SickBeard port.
#sbport=8081

# SickBeard username.
#sbusername=

# SickBeard password.
#sbpassword=

# SickBeard uses ssl (0, 1).
#
# Set to 1 if using ssl, else set to 0.
#sbssl=0

# SickBeard web_root
#
# set this if using a reverse proxy.
#sbweb_root=

# SickBeard watch directory.
#
# set this if SickBeard and nzbGet are on different systems.
#sbwatch_dir=

# SickBeard fork.
#
# set to default or auto to auto-detect the custom fork type.
#sbfork=auto

# SickBeard Delete Failed Downloads (0, 1).
#
# set to 1 to delete failed, or 0 to leave files in place.
#sbdelete_failed=0

# SickBeard Ignore associated subtitle check (0, 1).
#
# set to 1 to skip the subtitle check, or 0 to check.
#sbignore_subs=0

# SickBeard process method.
#
# set this to move, copy, hardlink, symlink as appropriate if you want to over-ride SB defaults. Leave blank to use SB default.
#sbprocess_method=

# SickBeard and NZBGet are on different systems (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#sbremote_path=0

## NzbDrone

# NzbDrone script category.
#
# category that gets called for post-processing with NzbDrone.
#ndCategory=tv2

# NzbDrone host.
#
# The IP address for your NzbDrone/Sonarr server, e.g. for the same system use localhost or IP_ADDRESS
#ndhost=localhost

# NzbDrone port.
#ndport=8989

# NzbDrone API key.
#ndapikey=

# NzbDrone uses SSL (0, 1).
#
# Set to 1 if using SSL, else set to 0.
#ndssl=0

# NzbDrone web root.
#
# set this if using a reverse proxy.
#ndweb_root=

# NzbDrone wait_for
#
# Set the number of minutes to wait after calling the renamer, to check the episode has changed status.
#ndwait_for=6

# NzbDrone Delete Failed Downloads (0, 1).
#
# set to 1 to delete failed, or 0 to leave files in place.
#nddelete_failed=0

# NzbDrone and NZBGet are on different systems (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#ndremote_path=0

## HeadPhones

# HeadPhones script category.
#
# category that gets called for post-processing with HeadPhones.
#hpCategory=music

# HeadPhones api key.
#hpapikey=

# HeadPhones host.
#
# The IP address for your HeadPhones server, e.g. for the same system use localhost or IP_ADDRESS
#hphost=localhost

# HeadPhones port.
#hpport=8181

# HeadPhones uses ssl (0, 1).
#
# Set to 1 if using ssl, else set to 0.
#hpssl=0

# HeadPhones web_root
#
# set this if using a reverse proxy.
#hpweb_root=

# HeadPhones and NZBGet are on different systems (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#hpremote_path=0

## Lidarr

# Lidarr script category.
#
# category that gets called for post-processing with Lidarr.
#liCategory=music2

# Lidarr host.
#
# The IP address for your Lidarr server, e.g. for the same system use localhost or IP_ADDRESS
#lihost=localhost

# Lidarr port.
#liport=8686

# Lidarr API key.
#liapikey=

# Lidarr uses ssl (0, 1).
#
# Set to 1 if using ssl, else set to 0.
#lissl=0

# Lidarr web_root
#
# set this if using a reverse proxy.
#liweb_root=

# Lidarr wait_for
#
# Set the number of minutes to wait after calling the renamer, to check the album has changed status.
#liwait_for=6

# Lidarr Delete Failed Downloads (0, 1).
#
# set to 1 to delete failed, or 0 to leave files in place.
#lidelete_failed=0

# Lidarr and NZBGet are on different systems (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#liremote_path=0

## Mylar

# Mylar script category.
#
# category that gets called for post-processing with Mylar.
#myCategory=comics

# Mylar host.
#
# The IP address for your Mylar server, e.g. for the same system use localhost or IP_ADDRESS
#myhost=localhost

# Mylar port.
#myport=8090

# Mylar password.
#mypassword=

# Mylar uses ssl (0, 1).
#
# Set to 1 if using ssl, else set to 0.
#myssl=0

# Mylar web_root
#
# set this if using a reverse proxy.
#myweb_root=

# Mylar wait_for
#
# Set the number of minutes to wait after calling the force process, to check the issue has changed status.
#myswait_for=1

# Mylar and NZBGet are on different systems (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#myremote_path=0

## Gamez

# Gamez script category.
#
# category that gets called for post-processing with Gamez.
#gzCategory=games

# Gamez api key.
#gzapikey=

# Gamez host.
#
# The IP address for your Gamez server, e.g. for the same system use localhost or IP_ADDRESS
#gzhost=localhost

# Gamez port.
#gzport=8085

# Gamez uses ssl (0, 1).
#
# Set to 1 if using ssl, else set to 0.
#gzssl=0

# Gamez library
#
# move downloaded games here.
#gzlibrary=

# Gamez web_root
#
# set this if using a reverse proxy.
#gzweb_root=

# Gamez and NZBGet are on different systems (0, 1).
#
# Enable to replace local path with the path as per the mountPoints below.
#gzremote_path=0

## Network

# Network Mount Points (Needed for remote path above)
#
# Enter Mount points as LocalPath,RemotePath and separate each pair with '|'
# e.g. mountPoints=/volume1/Public/,E:\|/volume2/share/,\\NAS\
#mountPoints=

## Extensions

# Media Extensions
#
# This is a list of media extensions that are used to verify that the download does contain valid media.
#mediaExtensions=.mkv,.avi,.divx,.xvid,.mov,.wmv,.mp4,.mpg,.mpeg,.vob,.iso,.ts

## Posix

# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
#niceness=10

# ionice scheduling class (0, 1, 2, 3).
#
# Set the ionice scheduling class. 0 for none, 1 for real time, 2 for best-effort, 3 for idle.
#ionice_class=2

# ionice scheduling class data.
#
# Set the ionice scheduling class data. This defines the class data, if the class accepts an argument. For real time and best-effort, 0-7 is valid data.
#ionice_classdata=4

## Transcoder

# getSubs (0, 1).
#
# set to 1 to download subtitles.
#getSubs=0

# subLanguages.
#
# subLanguages. Create a list of languages in the order you want them in your subtitles.
#subLanguages=eng,spa,fra

# Transcode (0, 1).
#
# set to 1 to transcode, otherwise set to 0.
#transcode=0

# create a duplicate, or replace the original (0, 1).
#
# set to 1 to create a new file or 0 to replace the original.
#duplicate=1

# ignore extensions.
#
# list of extensions that won't be transcoded.
#ignoreExtensions=.avi,.mkv

# outputFastStart (0,1).
#
# outputFastStart. 1 will use -movflags +faststart. 0 will disable this from being used.
#outputFastStart=0

# outputVideoPath.
#
# outputVideoPath. Set the path you want transcoded videos moved to. Leave blank to disable.
#outputVideoPath=

# processOutput (0,1).
#
# processOutput. 1 will send the outputVideoPath to SickBeard/CouchPotato. 0 will send original files.
#processOutput=0

# audioLanguage.
#
# audioLanguage. Set the 3-letter language code you want as your primary audio track.
#audioLanguage=eng

# allAudioLanguages (0,1).
#
# allAudioLanguages. 1 will keep all audio tracks (uses AudioCodec3) where available.
#allAudioLanguages=0

# allSubLanguages (0,1).
#
# allSubLanguages. 1 will keep all existing sub languages. 0 will discard those not in your list above.
#allSubLanguages=0

# embedSubs (0,1).
#
# embedSubs. 1 will embed external sub/srt subs into your video if this is supported.
#embedSubs=1

# burnInSubtitle (0,1).
#
# burnInSubtitle. burns the default sub language into your video (needed for players that don't support subs)
#burnInSubtitle=0

# extractSubs (0,1).
#
# extractSubs. 1 will extract subs from the video file and save these as external srt files.
#extractSubs=0

# externalSubDir.
#
# externalSubDir. set the directory where subs should be saved (if not the same directory as the video)
#externalSubDir=

# outputDefault (None, iPad, iPad-1080p, iPad-720p, Apple-TV2, iPod, iPhone, PS3, xbox, Roku-1080p, Roku-720p, Roku-480p, mkv, mp4-scene-release).
#
# outputDefault. Loads default configs for the selected device. The remaining options below are ignored.
# If you want to use your own profile, set None and set the remaining options below.
#outputDefault=None

# hwAccel (0,1).
#
# hwAccel. 1 will set ffmpeg to enable hardware acceleration (this requires a recent ffmpeg).
#hwAccel=0

# ffmpeg output settings.
#outputVideoExtension=.mp4
#outputVideoCodec=libx264
#VideoCodecAllow=
#outputVideoResolution=720:-1
#outputVideoPreset=medium
#outputVideoFramerate=24
#outputVideoBitrate=800k
#outputAudioCodec=ac3
#AudioCodecAllow=
#outputAudioChannels=6
#outputAudioBitrate=640k
#outputQualityPercent=
#outputAudioTrack2Codec=libfaac
#AudioCodec2Allow=
#outputAudioTrack2Channels=2
#outputAudioTrack2Bitrate=160k
#outputAudioOtherCodec=libmp3lame
#AudioOtherCodecAllow=
#outputAudioOtherChannels=2
#outputAudioOtherBitrate=128k
#outputSubtitleCodec=

## WakeOnLan

# use WOL (0, 1).
#
# set to 1 to send WOL broadcast to the mac and test the server (e.g. xbmc) on the host and port specified.
#wolwake=0

# WOL MAC
#
# enter the mac address of the system to be woken.
#wolmac=00:01:2e:2D:64:e1

# Set the Host and Port of a server to verify system has woken.
#wolhost=IP_ADDRESS
#wolport=80

## UserScript

# User Script category.
#
# category that gets called for post-processing with user script (accepts "UNCAT", "ALL", or a defined category).
#usCategory=mine

# User Script Remote Path (0,1).
#
# Script calls commands on another system.
#usremote_path=0

# User Script extensions.
#
# What extensions do you want to process? Specify all the extensions, or use "ALL" to process all files.
#user_script_mediaExtensions=.mkv,.avi,.divx,.xvid,.mov,.wmv,.mp4,.mpg,.mpeg

# User Script Path
#
# Specify the path to your custom script.
#user_script_path=/nzbToMedia/userscripts/script.sh

# User Script arguments.
#
# Specify the argument(s) passed to the script, comma separated, in order.
# For example FP,FN,DN,TN,TL for file path (absolute file name with path), file name, absolute directory name (with path), Torrent Name, Torrent Label/Category.
# So the result is /media/test/script/script.sh FP FN DN TN TL. Add other arguments as needed, e.g. -f, -r
#user_script_param=FN

# User Script Run Once (0,1).
#
# Set user_script_runOnce = 0 to run for each file, or 1 to only run once (presumably on the entire directory).
#user_script_runOnce=0

# User Script Success Codes.
#
# Specify the success codes returned by the user script as a comma separated list. Linux default is 0
#user_script_successCodes=0

# User Script Clean After (0,1).
#
# Clean after? Note that a delay is used to help prevent mistakes; the delay is given in seconds.
#user_script_clean=1

# User Script Delay.
#
# Delay in seconds after processing.
#usdelay=120

### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
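# For orientation, a minimal user script wired up through the options above
# might look like this (a sketch, not part of nzbToMedia itself; it assumes
# the parameter list was set via user_script_param=FP,FN,DN):
#
#     #!/usr/bin/env python2
#     import sys
#
#     # Arguments arrive positionally, in the order listed in user_script_param.
#     file_path, file_name, dir_name = sys.argv[1:4]
#
#     # ... do something with the completed download here ...
#
#     sys.exit(0)  # must match one of the user_script_successCodes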
""" ============ Array basics ============ Array types and conversions between types ========================================= NumPy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array's data-type. ========== ========================================================== Data type Description ========== ========================================================== bool_ Boolean (True or False) stored as a byte int_ Default integer type (same as C ``long``; normally either ``int64`` or ``int32``) intc Identical to C ``int`` (normally ``int32`` or ``int64``) intp Integer used for indexing (same as C ``ssize_t``; normally either ``int32`` or ``int64``) int8 Byte (-128 to 127) int16 Integer (-32768 to 32767) int32 Integer (-2147483648 to 2147483647) int64 Integer (-9223372036854775808 to 9223372036854775807) uint8 Unsigned integer (0 to 255) uint16 Unsigned integer (0 to 65535) uint32 Unsigned integer (0 to 4294967295) uint64 Unsigned integer (0 to 18446744073709551615) float_ Shorthand for ``float64``. float16 Half precision float: sign bit, 5 bits exponent, 10 bits mantissa float32 Single precision float: sign bit, 8 bits exponent, 23 bits mantissa float64 Double precision float: sign bit, 11 bits exponent, 52 bits mantissa complex_ Shorthand for ``complex128``. complex64 Complex number, represented by two 32-bit floats (real and imaginary components) complex128 Complex number, represented by two 64-bit floats (real and imaginary components) ========== ========================================================== Additionally to ``intc`` the platform dependent C integer types ``short``, ``long``, ``longlong`` and their unsigned versions are defined. NumPy numerical types are instances of ``dtype`` (data-type) objects, each having unique characteristics. Once you have imported NumPy using :: >>> import numpy as np the dtypes are available as ``np.bool_``, ``np.float32``, etc. Advanced types, not listed in the table above, are explored in section :ref:`structured_arrays`. There are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint) floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory). Some types, such as ``int`` and ``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit vs. 64-bit machines). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed. Data-types can be used as functions to convert python numbers to array scalars (see the array scalar section for an explanation), python sequences of numbers to arrays of that type, or as arguments to the dtype keyword that many numpy functions or methods accept. Some examples:: >>> import numpy as np >>> x = np.float32(1.0) >>> x 1.0 >>> y = np.int_([1,2,4]) >>> y array([1, 2, 4]) >>> z = np.arange(3, dtype=np.uint8) >>> z array([0, 1, 2], dtype=uint8) Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages such as Numeric. Some documentation may still refer to these, for example:: >>> np.array([1, 2, 3], dtype='f') array([ 1., 2., 3.], dtype=float32) We recommend using dtype objects instead. To convert the type of an array, use the .astype() method (preferred) or the type itself as a function. 
For example: ::

    >>> z.astype(float)                 #doctest: +NORMALIZE_WHITESPACE
    array([ 0.,  1.,  2.])
    >>> np.int8(z)
    array([0, 1, 2], dtype=int8)

Note that, above, we use the *Python* float object as a dtype.  NumPy knows
that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``, that
``float`` is ``np.float_`` and ``complex`` is ``np.complex_``.
The other data-types do not have Python equivalents.

To determine the type of an array, look at the dtype attribute::

    >>> z.dtype
    dtype('uint8')

dtype objects also contain information about the type, such as its bit-width
and its byte-order.  The data type can also be used indirectly to query
properties of the type, such as whether it is an integer::

    >>> d = np.dtype(int)
    >>> d
    dtype('int32')

    >>> np.issubdtype(d, int)
    True

    >>> np.issubdtype(d, float)
    False


Array Scalars
=============

NumPy generally returns elements of arrays as array scalars (a scalar
with an associated dtype).  Array scalars differ from Python scalars, but
for the most part they can be used interchangeably (the primary
exception is for versions of Python older than v2.x, where integer array
scalars cannot act as indices for lists and tuples).  There are some
exceptions, such as when code requires very specific attributes of a scalar
or when it checks specifically whether a value is a Python scalar. Generally,
problems are easily fixed by explicitly converting array scalars
to Python scalars, using the corresponding Python type function
(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).

The primary advantage of using array scalars is that
they preserve the array type (Python may not have a matching scalar type
available, e.g. ``int16``).  Therefore, the use of array scalars ensures
identical behaviour between arrays and scalars, irrespective of whether the
value is inside an array or not.  NumPy scalars also have many of the same
methods arrays do.

Extended Precision
==================

Python's floating-point numbers are usually 64-bit floating-point numbers,
nearly equivalent to ``np.float64``. In some unusual situations it may be
useful to use floating-point numbers with more precision. Whether this
is possible in numpy depends on the hardware and on the development
environment: specifically, x86 machines provide hardware floating-point
with 80-bit precision, and while most C compilers provide this as their
``long double`` type, MSVC (standard for Windows builds) makes
``long double`` identical to ``double`` (64 bits). NumPy makes the
compiler's ``long double`` available as ``np.longdouble`` (and
``np.clongdouble`` for the complex numbers). You can find out what your
numpy provides with ``np.finfo(np.longdouble)``.

NumPy does not provide a dtype with more precision than C's
``long double``; in particular, the 128-bit IEEE quad precision data type
(FORTRAN's ``REAL*16``) is not available.

For efficient memory alignment, ``np.longdouble`` is usually stored padded
with zero bits, either to 96 or 128 bits. Which is more efficient depends on
hardware and development environment; typically on 32-bit systems they are
padded to 96 bits, while on 64-bit systems they are typically padded to 128
bits. ``np.longdouble`` is padded to the system default; ``np.float96`` and
``np.float128`` are provided for users who want specific padding. In spite
of the names, ``np.float96`` and ``np.float128`` provide only as much
precision as ``np.longdouble``, that is, 80 bits on most x86 machines and
64 bits in standard Windows builds.
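For instance, one can inspect the extra precision on offer and see how a
round trip through Python ``float`` silently discards it (a small
illustrative addition, not part of the original text; the exact figures are
platform-dependent and assume an 80-bit ``long double``, so on builds where
``long double`` equals ``double`` nothing is lost)::

    >>> np.finfo(np.longdouble).eps     # doctest: +SKIP
    1.084202172485504434e-19
    >>> x = np.longdouble(1) + np.finfo(np.longdouble).eps
    >>> x == 1                          # the extra precision is real...
    False
    >>> float(x) == 1.0                 # ...but converting to float loses it
    True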
Be warned that even if ``np.longdouble`` offers more precision than python ``float``, it is easy to lose that extra precision, since python often forces values to pass through ``float``. For example, the ``%`` formatting operator requires its arguments to be converted to standard python types, and it is therefore impossible to preserve extended precision even if many decimal places are requested. It can be useful to test your code with the value ``1 + np.finfo(np.longdouble).eps``. """
""" ================= Structured Arrays ================= Introduction ============ Numpy provides powerful capabilities to create arrays of structured datatype. These arrays permit one to manipulate the data by named fields. A simple example will show what is meant.: :: >>> x = np.array([(1,2.,'Hello'), (2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we get the second structure: :: >>> x[1] (2,3.,"World") Conveniently, one can access any field of the array by indexing using the string that names that field. :: >>> y = x['foo'] >>> y array([ 2., 3.], dtype=float32) >>> y[:] = 2*y >>> y array([ 4., 6.], dtype=float32) >>> x array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) In these examples, y is a simple float array consisting of the 2nd field in the structured type. But, rather than being a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well. Likewise, if one changes the structured array, the field view also changes: :: >>> x[1] = (-1,-1.,"Master") >>> x array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) >>> y array([ 4., -1.], dtype=float32) Defining Structured Arrays ========================== One defines a structured array through the dtype object. There are **several** alternative ways to define the fields of a record. Some of these variants provide backward compatibility with Numeric, numarray, or another module, and should not be used except for such purposes. These will be so noted. One specifies record structure in one of four alternative ways, using an argument (as supplied to a dtype function keyword or a dtype object constructor itself). This argument must be one of the following: 1) string, 2) tuple, 3) list, or 4) dictionary. Each of these is briefly described below. 1) String argument. In this case, the constructor expects a comma-separated list of type specifiers, optionally with extra shape information. The fields are given the default names 'f0', 'f1', 'f2' and so on. The type specifiers can take 4 different forms: :: a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n> (representing bytes, ints, unsigned ints, floats, complex and fixed length strings of specified byte lengths) b) int8,...,uint8,...,float16, float32, float64, complex64, complex128 (this time with bit sizes) c) older Numeric/numarray type specifications (e.g. Float32). Don't use these in new code! d) Single character type specifiers (e.g H for unsigned short ints). Avoid using these unless you must. Details can be found in the Numpy book These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an array within a record. That array is still referred to as a single field. 
An example: ::

 >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
 >>> x
 array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
        ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
        ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
       dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])

Using strings to define the record structure precludes naming the fields
in the original definition. The names can be changed as shown later, however.

2) Tuple argument: The only relevant tuple case that applies to record
structures is when a structure is mapped to an existing data type. This is
done by pairing, in a tuple, the existing data type with a matching dtype
definition (using any of the variants being described here). As an example
(using a definition using a list, so see 3) for further details): ::

 >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
 >>> x
 array([0, 0, 0])
 >>> x['r']
 array([0, 0, 0], dtype=uint8)

In this case, an array is produced that looks and acts like a simple int32 array,
but also has definitions for fields that use only one byte of the int32 (a bit
like Fortran equivalencing).

3) List argument: In this case the record structure is defined with a list of
tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
('' is permitted), 2) the type of the field, and 3) the shape (optional).
For example::

 >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
 >>> x
 array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
        (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
        (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
       dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])

4) Dictionary argument: two different forms are permitted. The first consists
of a dictionary with two required keys ('names' and 'formats'), each having an
equal-sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional
keys: 'offsets' and 'titles'. Each must be a list matching the required two
in length, where 'offsets' contains integer offsets for each field and
'titles' contains objects with metadata for each field (these do not have to
be strings); the value None is permitted. As an example: ::

 >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
 >>> x
 array([(0, 0.0), (0, 0.0), (0, 0.0)],
       dtype=[('col1', '>i4'), ('col2', '>f4')])

The other dictionary form permitted is a dictionary of name keys with tuple
values specifying type, offset, and an optional title. ::

 >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
 >>> x
 array([(0, 0.0), (0, 0.0), (0, 0.0)],
       dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])

Accessing and modifying field names
===================================

The field names are an attribute of the dtype object defining the structure.
For the last example: ::

 >>> x.dtype.names
 ('col1', 'col2')
 >>> x.dtype.names = ('x', 'y')
 >>> x
 array([(0, 0.0), (0, 0.0), (0, 0.0)],
       dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
 >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
 <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2

Accessing field titles
====================================

The field titles provide a standard place to put associated info for fields.
They do not have to be strings.
::

 >>> x.dtype.fields['x'][2]
 'title 1'

Accessing multiple fields at once
====================================

You can access multiple fields at once using a list of field names: ::

 >>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))],
 ...              dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])

Notice that `x` is created with a list of tuples. ::

 >>> x[['x','y']]
 array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)],
       dtype=[('x', '<f4'), ('y', '<f4')])
 >>> x[['x','value']]
 array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]),
        (1.0, [[2.0, 6.0], [2.0, 6.0]])],
       dtype=[('x', '<f4'), ('value', '<f4', (2, 2))])

The fields are returned in the order they are asked for. ::

 >>> x[['y','x']]
 array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)],
       dtype=[('y', '<f4'), ('x', '<f4')])

Filling structured arrays
=========================

Structured arrays can be filled by field or row by row. ::

 >>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')])
 >>> arr['var1'] = np.arange(5)

If you fill it in row by row, it takes a tuple
(but not a list or array!)::

 >>> arr[0] = (10,20)
 >>> arr
 array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)],
       dtype=[('var1', '<f8'), ('var2', '<f8')])

Record Arrays
=============

For convenience, numpy provides "record arrays" which allow one to access
fields of structured arrays by attribute rather than by index. Record arrays
are structured arrays wrapped using a subclass of ndarray,
:class:`numpy.recarray`, which allows field access by attribute on the array
object, and record arrays also use a special datatype, :class:`numpy.record`,
which allows field access by attribute on the individual elements of the array.

The simplest way to create a record array is with :func:`numpy.rec.array`: ::

 >>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")],
 ...                dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
 >>> recordarr.bar
 array([ 2.,  3.], dtype=float32)
 >>> recordarr[1:2]
 rec.array([(2, 3.0, 'World')],
       dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
 >>> recordarr[1:2].foo
 array([2], dtype=int32)
 >>> recordarr.foo[1:2]
 array([2], dtype=int32)
 >>> recordarr[1].baz
 'World'

numpy.rec.array can convert a wide variety of arguments into record arrays,
including normal structured arrays: ::

 >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
 ...             dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
 >>> recordarr = np.rec.array(arr)

The numpy.rec module provides a number of other convenience functions for
creating record arrays, see :ref:`record array creation routines
<routines.array-creation.rec>`.

A record array representation of a structured array can be obtained using the
appropriate :ref:`view`: ::

 >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
 ...                dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
 >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
 ...                      type=np.recarray)

For convenience, viewing an ndarray as type `np.recarray` will automatically
convert to `np.record` datatype, so the dtype can be left out of the view: ::

 >>> recordarr = arr.view(np.recarray)
 >>> recordarr.dtype
 dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))

To get back to a plain ndarray both the dtype and type must be reset.
The following view does so, taking into account the unusual case that the
recordarr was not a structured type: ::

 >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)

Record array fields accessed by index or by attribute are returned as a record
array if the field has a structured type but as a plain ndarray otherwise. ::

 >>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))],
 ...                 dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])])
 >>> type(recordarr.foo)
 <type 'numpy.ndarray'>
 >>> type(recordarr.bar)
 <class 'numpy.core.records.recarray'>

Note that if a field has the same name as an ndarray attribute, the ndarray
attribute takes precedence. Such fields will be inaccessible by attribute but
may still be accessed by index.

"""
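# To make the final note above concrete, here is a small sketch (illustrative
# only, not part of the original document): a field named "shape" collides
# with the ndarray attribute of the same name, so attribute access returns
# the array's shape while index access still reaches the field.
#
#     >>> collide = np.rec.array([(1, 2.0)], dtype=[('shape', 'i4'), ('bar', 'f4')])
#     >>> collide.shape        # the ndarray attribute wins
#     (1,)
#     >>> collide['shape']     # the field remains reachable by index
#     array([1], dtype=int32)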
# Defines classes that provide synchronization objects.  Note that use of
# this module requires that your Python support threads.
#
#    condition(lock=None)       # a POSIX-like condition-variable object
#    barrier(n)                 # an n-thread barrier
#    event()                    # an event object
#    semaphore(n=1)             # a semaphore object, with initial count n
#    mrsw()                     # a multiple-reader single-writer lock
#
# CONDITIONS
#
# A condition object is created via
#   import this_module
#   your_condition_object = this_module.condition(lock=None)
#
# As explained below, a condition object has a lock associated with it,
# used in the protocol to protect condition data.  You can specify a
# lock to use in the constructor, else the constructor will allocate
# an anonymous lock for you.  Specifying a lock explicitly can be useful
# when more than one condition keys off the same set of shared data.
#
# Methods:
#   .acquire()
#      acquire the lock associated with the condition
#   .release()
#      release the lock associated with the condition
#   .wait()
#      block the thread until such time as some other thread does a
#      .signal or .broadcast on the same condition, and release the
#      lock associated with the condition.  The lock associated with
#      the condition MUST be in the acquired state at the time
#      .wait is invoked.
#   .signal()
#      wake up exactly one thread (if any) that previously did a .wait
#      on the condition; that thread will awaken with the lock associated
#      with the condition in the acquired state.  If no threads are
#      .wait'ing, this is a nop.  If more than one thread is .wait'ing on
#      the condition, any of them may be awakened.
#   .broadcast()
#      wake up all threads (if any) that are .wait'ing on the condition;
#      the threads are woken up serially, each with the lock in the
#      acquired state, so should .release() as soon as possible.  If no
#      threads are .wait'ing, this is a nop.
#
#      Note that if a thread does a .wait *while* a signal/broadcast is
#      in progress, it's guaranteed to block until a subsequent
#      signal/broadcast.
#
#      Secret feature:  `broadcast' actually takes an integer argument,
#      and will wake up exactly that many waiting threads (or the total
#      number waiting, if that's less).  Use of this is dubious, though,
#      and probably won't be supported if this form of condition is
#      reimplemented in C.
#
# DIFFERENCES FROM POSIX
#
# + A separate mutex is not needed to guard condition data.  Instead, a
#   condition object can (must) be .acquire'ed and .release'ed directly.
#   This eliminates a common error in using POSIX conditions.
#
# + Because of implementation difficulties, a POSIX `signal' wakes up
#   _at least_ one .wait'ing thread.  Race conditions make it difficult
#   to stop that.  This implementation guarantees to wake up only one,
#   but you probably shouldn't rely on that.
#
# PROTOCOL
#
# Condition objects are used to block threads until "some condition" is
# true.  E.g., a thread may wish to wait until a producer pumps out data
# for it to consume, or a server may wish to wait until someone requests
# its services, or perhaps a whole bunch of threads want to wait until a
# preceding pass over the data is complete.  Early models for conditions
# relied on some other thread figuring out when a blocked thread's
# condition was true, and made the other thread responsible both for
# waking up the blocked thread and guaranteeing that it woke up with all
# data in a correct state.  This proved to be very delicate in practice,
# and gave conditions a bad name in some circles.
#
# The POSIX model addresses these problems by making a thread responsible
# for ensuring that its own state is correct when it wakes, and relies
# on a rigid protocol to make this easy; so long as you stick to the
# protocol, POSIX conditions are easy to "get right":
#
# A) The thread that's waiting for some arbitrarily-complex condition
#    (ACC) to become true does:
#
#       condition.acquire()
#       while not (code to evaluate the ACC):
#           condition.wait()
#           # That blocks the thread, *and* releases the lock.  When a
#           # condition.signal() happens, it will wake up some thread that
#           # did a .wait, *and* acquire the lock again before .wait
#           # returns.
#           #
#           # Because the lock is acquired at this point, the state used
#           # in evaluating the ACC is frozen, so it's safe to go back &
#           # reevaluate the ACC.
#
#       # At this point, ACC is true, and the thread has the condition
#       # locked.
#       # So code here can safely muck with the shared state that
#       # went into evaluating the ACC -- if it wants to.
#       # When done mucking with the shared state, do
#       condition.release()
#
# B) Threads that are mucking with shared state that may affect the
#    ACC do:
#
#       condition.acquire()
#       # muck with shared state
#       condition.release()
#       if it's possible that ACC is true now:
#           condition.signal() # or .broadcast()
#
#    Note: You may prefer to put the "if" clause before the release().
#    That's fine, but do note that anyone waiting on the signal will
#    stay blocked until the release() is done (since acquiring the
#    condition is part of what .wait() does before it returns).
#
# TRICK OF THE TRADE
#
# With simpler forms of conditions, it can be impossible to know when
# a thread that's supposed to do a .wait has actually done it.  But
# because this form of condition releases a lock as _part_ of doing a
# wait, the state of that lock can be used to guarantee it.
#
# E.g., suppose thread A spawns thread B and later wants to wait for B to
# complete:
#
#    In A:                             In B:
#
#    B_done = condition()              ... do work ...
#    B_done.acquire()                  B_done.acquire(); B_done.release()
#    spawn B                           B_done.signal()
#    ... some time later ...           ... and B exits ...
#    B_done.wait()
#
# Because B_done was in the acquire'd state at the time B was spawned,
# B's attempt to acquire B_done can't succeed until A has done its
# B_done.wait() (which releases B_done).  So B's B_done.signal() is
# guaranteed to be seen by the .wait().  Without the lock trick, B
# may signal before A .waits, and then A would wait forever.
#
# BARRIERS
#
# A barrier object is created via
#   import this_module
#   your_barrier = this_module.barrier(num_threads)
#
# Methods:
#   .enter()
#      the thread blocks until num_threads threads in all have done
#      .enter().  Then the num_threads threads that .enter'ed resume,
#      and the barrier resets to capture the next num_threads threads
#      that .enter it.
#
# EVENTS
#
# An event object is created via
#   import this_module
#   your_event = this_module.event()
#
# An event has two states, `posted' and `cleared'.  An event is
# created in the cleared state.
#
# Methods:
#
#   .post()
#      Put the event in the posted state, and resume all threads
#      .wait'ing on the event (if any).
#
#   .clear()
#      Put the event in the cleared state.
#
#   .is_posted()
#      Returns 0 if the event is in the cleared state, or 1 if the event
#      is in the posted state.
#
#   .wait()
#      If the event is in the posted state, returns immediately.
#      If the event is in the cleared state, blocks the calling thread
#      until the event is .post'ed by another thread.
#
# Note that an event, once posted, remains posted until explicitly
# cleared.  Relative to conditions, this is both the strength & weakness
# of events.  It's a strength because the .post'ing thread doesn't have to
# worry about whether the threads it's trying to communicate with have
# already done a .wait (a condition .signal is seen only by threads that
# do a .wait _prior_ to the .signal; a .signal does not persist).  But
# it's a weakness because .clear'ing an event is error-prone:  it's easy
# to mistakenly .clear an event before all the threads you intended to
# see the event get around to .wait'ing on it.  But so long as you don't
# need to .clear an event, events are easy to use safely.
#
# SEMAPHORES
#
# A semaphore object is created via
#   import this_module
#   your_semaphore = this_module.semaphore(count=1)
#
# A semaphore has an integer count associated with it.  The initial value
# of the count is specified by the optional argument (which defaults to
# 1) passed to the semaphore constructor.
#
# Methods:
#
#   .p()
#      If the semaphore's count is greater than 0, decrements the count
#      by 1 and returns.
#      Else if the semaphore's count is 0, blocks the calling thread
#      until a subsequent .v() increases the count.  When that happens,
#      the count will be decremented by 1 and the calling thread resumed.
#
#   .v()
#      Increments the semaphore's count by 1, and wakes up a thread (if
#      any) blocked by a .p().  It's a (detected) error for a .v() to
#      increase the semaphore's count to a value larger than the initial
#      count.
#
# MULTIPLE-READER SINGLE-WRITER LOCKS
#
# A mrsw lock is created via
#   import this_module
#   your_mrsw_lock = this_module.mrsw()
#
# This kind of lock is often useful with complex shared data structures.
# The object lets any number of "readers" proceed, so long as no thread
# wishes to "write".  When one or more threads declare their intention
# to "write" (e.g., to update a shared structure), all current readers
# are allowed to finish, and then a writer gets exclusive access; all
# other readers & writers are blocked until the current writer completes.
# Finally, if some thread is waiting to write and another is waiting to
# read, the writer takes precedence.
#
# Methods:
#
#   .read_in()
#      If no thread is writing or waiting to write, returns immediately.
#      Else blocks until no thread is writing or waiting to write.  So
#      long as some thread has completed a .read_in but not a .read_out,
#      writers are blocked.
#
#   .read_out()
#      Use sometime after a .read_in to declare that the thread is done
#      reading.  When all threads complete reading, a writer can proceed.
#
#   .write_in()
#      If no thread is writing (has completed a .write_in, but hasn't yet
#      done a .write_out) or reading (similarly), returns immediately.
#      Else blocks the calling thread, and threads waiting to read, until
#      the current writer completes writing or all the current readers
#      complete reading; if then more than one thread is waiting to
#      write, one of them is allowed to proceed, but which one is not
#      specified.
#
#   .write_out()
#      Use sometime after a .write_in to declare that the thread is done
#      writing.  Then if some other thread is waiting to write, it's
#      allowed to proceed.  Else all threads (if any) waiting to read are
#      allowed to proceed.
#
#   .write_to_read()
#      Use instead of a .write_in to declare that the thread is done
#      writing but wants to continue reading without other writers
#      intervening.
#      If there are other threads waiting to write, they
#      are allowed to proceed only if the current thread calls
#      .read_out; threads waiting to read are only allowed to proceed
#      if there are no threads waiting to write.  (This is a
#      weakness of the interface!)
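#
# For orientation, the scheme described under PROTOCOL maps closely onto
# the standard library's threading.Condition (a sketch using the stdlib
# analogue, not this module's own objects; the stdlib spells .signal()
# and .broadcast() as notify() and notify_all()):
#
#     import threading
#
#     cond = threading.Condition()
#     data_ready = False
#
#     def consumer():
#         with cond:                  # acquire the condition's lock
#             while not data_ready:   # re-evaluate the ACC after every wakeup
#                 cond.wait()         # releases the lock while blocked
#             # ... consume the shared data; the lock is held again here ...
#
#     def producer():
#         global data_ready
#         with cond:
#             data_ready = True       # muck with the shared state under the lock
#             cond.notify()           # analogous to .signal() above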
{'bevel': True, 'showLeaves': True, 'useArm': False, 'seed': 0, 'handleType': '0', 'bevelRes': 2, 'resU': 2, 'levels': 3, 'length': (1.0, 0.33000001311302185, 0.375, 0.44999998807907104), 'lengthV': (0.05000000074505806, 0.20000000298023224, 0.3499999940395355, 0.0), 'taperCrown': 0.0, 'branches': (0, 60, 30, 10), 'curveRes': (10, 8, 3, 1), 'curve': (0.0, 30.0, 25.0, 0.0), 'curveV': (10.0, 10.0, 25.0, 0.0), 'curveBack': (0.0, 0.0, 0.0, 0.0), 'baseSplits': 2, 'segSplits': (0.3499999940395355, 0.3499999940395355, 0.3499999940395355, 0.0), 'splitByLen': True, 'rMode': 'rotate', 'splitStraight': 0.0, 'splitLength': 0.0, 'splitAngle': (20.0, 36.0, 32.0, 0.0), 'splitAngleV': (2.0, 2.0, 0.0, 0.0), 'scale': 12.0, 'scaleV': 2.0, 'attractUp': (0.0, -1.0, -0.6499999761581421, 0.0), 'attractOut': (0.0, 0.20000000298023224, 0.25, 0.0), 'shape': '8', 'shapeS': '7', 'customShape': (0.699999988079071, 1.0, 0.30000001192092896, 0.5900000333786011), 'branchDist': 1.5, 'nrings': 0, 'baseSize': 0.3499999940395355, 'baseSize_s': 0.800000011920929, 'leafBaseSize': 0.20000000298023224, 'splitHeight': 0.550000011920929, 'splitBias': 0.0, 'ratio': 0.019999999552965164, 'minRadius': 0.0020000000949949026, 'closeTip': False, 'rootFlare': 1.149999976158142, 'splitRadiusRatio': 0.0, 'autoTaper': True, 'taper': (1.0, 1.0, 1.0, 1.0), 'noTip': False, 'radiusTweak': (1.0, 1.0, 1.0, 1.0), 'ratioPower': 1.0, 'downAngle': (90.0, 60.0, 50.0, 45.0), 'downAngleV': (0.0, 25.0, 30.0, 10.0), 'useOldDownAngle': False, 'useParentAngle': True, 'rotate': (99.5, 137.5, 137.5, 137.5), 'rotateV': (0.0, 0.0, 0.0, 0.0), 'scale0': 1.0, 'scaleV0': 0.10000000149011612, 'attachment': '0', 'leaves': 16, 'leafType': '0', 'leafDownAngle': 45.0, 'leafDownAngleV': 10.0, 'leafRotate': 137.5, 'leafRotateV': 0.0, 'leafObjZ': '+2', 'leafObjY': '+1', 'leafScale': 0.20000000298023224, 'leafScaleX': 0.5, 'leafScaleT': 0.20000000298023224, 'leafScaleV': 0.25, 'leafShape': 'hex', 'leafangle': -45.0, 'leafDist': '6', 'armAnim': False, 'previewArm': False, 'leafAnim': False, 'frameRate': 1.0, 'loopFrames': 0, 'wind': 1.0, 'gust': 1.0, 'gustF': 0.07500000298023224, 'af1': 1.0, 'af2': 1.0, 'af3': 4.0, 'makeMesh': False, 'armLevels': 0, 'boneStep': (1, 1, 1, 1), 'matIndex': (0, 0, 0, 0)}
# Test 64-bit COMPARE LOGICAL IMMEDIATE AND BRANCH in cases where the sheer # number of instructions causes some branches to be out of range. # RUN: python %s | llc -mtriple=s390x-linux-gnu | FileCheck %s # Construct: # # before0: # conditional branch to after0 # ... # beforeN: # conditional branch to after0 # main: # 0xffb4 bytes, from MVIY instructions # conditional branch to main # after0: # ... # conditional branch to main # afterN: # # Each conditional branch sequence occupies 18 bytes if it uses a short # branch and 24 if it uses a long one. The ones before "main:" have to # take the branch length into account, which is 6 for short branches, # so the final (0x4c - 6) / 18 == 3 blocks can use short branches. # The ones after "main:" do not, so the first 0x4c / 18 == 4 blocks # can use short branches. The conservative algorithm we use makes # one of the forward branches unnecessarily long, as noted in the # check output below. # # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 50 # CHECK: jgl [[LABEL:\.L[^ ]*]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 51 # CHECK: jgl [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 52 # CHECK: jgl [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 53 # CHECK: jgl [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 54 # CHECK: jgl [[LABEL]] # ...as mentioned above, the next one could be a CLGIJL instead... # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 55 # CHECK: jgl [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgijl [[REG]], 56, [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgijl [[REG]], 57, [[LABEL]] # ...main goes here... # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgijl [[REG]], 100, [[LABEL:\.L[^ ]*]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgijl [[REG]], 101, [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgijl [[REG]], 102, [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgijl [[REG]], 103, [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 104 # CHECK: jgl [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 105 # CHECK: jgl [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 106 # CHECK: jgl [[LABEL]] # CHECK: lg [[REG:%r[0-5]]], 0(%r3) # CHECK: sg [[REG]], 0(%r4) # CHECK: clgfi [[REG]], 107 # CHECK: jgl [[LABEL]]
"""CPStats, a package for collecting and reporting on program statistics. Overview ======== Statistics about program operation are an invaluable monitoring and debugging tool. Unfortunately, the gathering and reporting of these critical values is usually ad-hoc. This package aims to add a centralized place for gathering statistical performance data, a structure for recording that data which provides for extrapolation of that data into more useful information, and a method of serving that data to both human investigators and monitoring software. Let's examine each of those in more detail. Data Gathering -------------- Just as Python's `logging` module provides a common importable for gathering and sending messages, performance statistics would benefit from a similar common mechanism, and one that does *not* require each package which wishes to collect stats to import a third-party module. Therefore, we choose to re-use the `logging` module by adding a `statistics` object to it. That `logging.statistics` object is a nested dict. It is not a custom class, because that would: 1. require libraries and applications to import a third-party module in order to participate 2. inhibit innovation in extrapolation approaches and in reporting tools, and 3. be slow. There are, however, some specifications regarding the structure of the dict.:: { +----"SQLAlchemy": { | "Inserts": 4389745, | "Inserts per Second": | lambda s: s["Inserts"] / (time() - s["Start"]), | C +---"Table Statistics": { | o | "widgets": {-----------+ N | l | "Rows": 1.3M, | Record a | l | "Inserts": 400, | m | e | },---------------------+ e | c | "froobles": { s | t | "Rows": 7845, p | i | "Inserts": 0, a | o | }, c | n +---}, e | "Slow Queries": | [{"Query": "SELECT * FROM widgets;", | "Processing Time": 47.840923343, | }, | ], +----}, } The `logging.statistics` dict has four levels. The topmost level is nothing more than a set of names to introduce modularity, usually along the lines of package names. If the SQLAlchemy project wanted to participate, for example, it might populate the item `logging.statistics['SQLAlchemy']`, whose value would be a second-layer dict we call a "namespace". Namespaces help multiple packages to avoid collisions over key names, and make reports easier to read, to boot. The maintainers of SQLAlchemy should feel free to use more than one namespace if needed (such as 'SQLAlchemy ORM'). Note that there are no case or other syntax constraints on the namespace names; they should be chosen to be maximally readable by humans (neither too short nor too long). Each namespace, then, is a dict of named statistical values, such as 'Requests/sec' or 'Uptime'. You should choose names which will look good on a report: spaces and capitalization are just fine. In addition to scalars, values in a namespace MAY be a (third-layer) dict, or a list, called a "collection". For example, the CherryPy :class:`StatsTool` keeps track of what each request is doing (or has most recently done) in a 'Requests' collection, where each key is a thread ID; each value in the subdict MUST be a fourth dict (whew!) of statistical data about each thread. We call each subdict in the collection a "record". Similarly, the :class:`StatsTool` also keeps a list of slow queries, where each record contains data about each slow query, in order. Values in a namespace or record may also be functions, which brings us to: Extrapolation ------------- The collection of statistical data needs to be fast, as close to unnoticeable as possible to the host program. 
That requires us to minimize I/O, for example, but in Python it also means we
need to minimize function calls. So when you are designing your namespace and
record values, try to insert the most basic scalar values you already have
on hand.

When it comes time to report on the gathered data, however, we usually have
much more freedom in what we can calculate. Therefore, whenever reporting
tools (like the provided :class:`StatsPage` CherryPy class) fetch the contents
of `logging.statistics` for reporting, they first call
`extrapolate_statistics` (passing the whole `statistics` dict as the only
argument). This makes a deep copy of the statistics dict so that the
reporting tool can both iterate over it and even change it without harming
the original. But it also expands any functions in the dict by calling them.
For example, you might have a 'Current Time' entry in the namespace with the
value "lambda scope: time.time()". The "scope" parameter is the current
namespace dict (or record, if we're currently expanding one of those
instead), allowing you access to existing static entries. If you're truly
evil, you can even modify more than one entry at a time.

However, don't try to calculate an entry and then use its value in further
extrapolations; the order in which the functions are called is not guaranteed.
This can lead to a certain amount of duplicated work (or a redesign of your
schema), but that's better than complicating the spec.

After the whole thing has been extrapolated, it's time for:

Reporting
---------

The :class:`StatsPage` class grabs the `logging.statistics` dict, extrapolates
it all, and then transforms it to HTML for easy viewing. Each namespace gets
its own header and attribute table, plus an extra table for each collection.
This is NOT part of the statistics specification; other tools can format how
they like.

You can control which columns are output and how they are formatted by updating
StatsPage.formatting, which is a dict that mirrors the keys and nesting of
`logging.statistics`. The difference is that, instead of data values, it has
formatting values. Use None for a given key to indicate to the StatsPage that a
given column should not be output. Use a string with formatting (such as
'%.3f') to interpolate the value(s), or use a callable (such as
lambda v: v.isoformat()) for more advanced formatting. Any entry which is not
mentioned in the formatting dict is output unchanged.

Monitoring
----------

Although the HTML output takes pains to assign unique id's to each <td> with
statistical data, you're probably better off fetching /cpstats/data, which
outputs the whole (extrapolated) `logging.statistics` dict in JSON format.
That is probably easier to parse, and doesn't have any formatting controls,
so you get the "original" data in a consistently-serialized format.

Note: there's no treatment yet for datetime objects. Try time.time() instead
for now if you can. Nagios will probably thank you.

Turning Collection Off
----------------------

It is recommended each namespace have an "Enabled" item which, if False,
stops collection (but not reporting) of statistical data. Applications
SHOULD provide controls to pause and resume collection by setting these
entries to False or True, if present.
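For example (a small illustrative sketch, not part of the original spec; the
namespace name matches the one used in the Usage section below)::

    import logging

    if not hasattr(logging, 'statistics'):
        logging.statistics = {}
    ns = logging.statistics.setdefault('My Stuff', {})
    ns['Enabled'] = False    # pause collection; reporting keeps working
    ns['Enabled'] = True     # resume collection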
Usage
=====

To collect statistics on CherryPy applications::

    from cherrypy.lib import cpstats
    appconfig['/']['tools.cpstats.on'] = True

To collect statistics on your own code::

    import logging
    # Initialize the repository
    if not hasattr(logging, 'statistics'): logging.statistics = {}
    # Initialize my namespace
    mystats = logging.statistics.setdefault('My Stuff', {})
    # Initialize my namespace's scalars and collections
    mystats.update({
        'Enabled': True,
        'Start Time': time.time(),
        'Important Events': 0,
        'Events/Second': lambda s: (
            (s['Important Events'] / (time.time() - s['Start Time']))),
        })
    ...
    for event in events:
        ...
        # Collect stats
        if mystats.get('Enabled', False):
            mystats['Important Events'] += 1

To report statistics::

    root.cpstats = cpstats.StatsPage()

To format statistics reports::

    See 'Reporting', above.

"""
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True).  Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters
        between keys and values are surrounded by spaces.
"""
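# A minimal usage sketch of the interface documented above (the section and
# option names are illustrative; the calls are the standard configparser API):
#
#     from configparser import ConfigParser
#
#     parser = ConfigParser()
#     parser.read_string("[server]\nhost = localhost\nport = 8081\n")
#
#     host = parser.get('server', 'host')        # 'localhost'
#     port = parser.getint('server', 'port')     # 8081
#     parser.set('server', 'port', '9090')       # values are stored as strings
#
#     with open('example.ini', 'w') as fp:
#         parser.write(fp)                       # 'key = value' lines by default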
""" ============ Array basics ============ Array types and conversions between types ========================================= Numpy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array's data-type. ========== ========================================================= Data type Description ========== ========================================================= bool Boolean (True or False) stored as a byte int Platform integer (normally either ``int32`` or ``int64``) int8 Byte (-128 to 127) int16 Integer (-32768 to 32767) int32 Integer (-2147483648 to 2147483647) int64 Integer (9223372036854775808 to 9223372036854775807) uint8 Unsigned integer (0 to 255) uint16 Unsigned integer (0 to 65535) uint32 Unsigned integer (0 to 4294967295) uint64 Unsigned integer (0 to 18446744073709551615) float Shorthand for ``float64``. float32 Single precision float: sign bit, 8 bits exponent, 23 bits mantissa float64 Double precision float: sign bit, 11 bits exponent, 52 bits mantissa complex Shorthand for ``complex128``. complex64 Complex number, represented by two 32-bit floats (real and imaginary components) complex128 Complex number, represented by two 64-bit floats (real and imaginary components) ========== ========================================================= Numpy numerical types are instances of ``dtype`` (data-type) objects, each having unique characteristics. Once you have imported NumPy using :: >>> import numpy as np the dtypes are available as ``np.bool``, ``np.float32``, etc. Advanced types, not listed in the table above, are explored in section :ref:`structured_arrays`. There are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint) floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory). Some types, such as ``int`` and ``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit vs. 64-bit machines). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed. Data-types can be used as functions to convert python numbers to array scalars (see the array scalar section for an explanation), python sequences of numbers to arrays of that type, or as arguments to the dtype keyword that many numpy functions or methods accept. Some examples:: >>> import numpy as np >>> x = np.float32(1.0) >>> x 1.0 >>> y = np.int_([1,2,4]) >>> y array([1, 2, 4]) >>> z = np.arange(3, dtype=np.uint8) >>> z array([0, 1, 2], dtype=uint8) Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages such as Numeric. Some documentation may still refer to these, for example:: >>> np.array([1, 2, 3], dtype='f') array([ 1., 2., 3.], dtype=float32) We recommend using dtype objects instead. To convert the type of an array, use the .astype() method (preferred) or the type itself as a function. For example: :: >>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE array([ 0., 1., 2.]) >>> np.int8(z) array([0, 1, 2], dtype=int8) Note that, above, we use the *Python* float object as a dtype. NumPy knows that ``int`` refers to ``np.int``, ``bool`` means ``np.bool`` and that ``float`` is ``np.float``. The other data-types do not have Python equivalents. 
To determine the type of an array, look at the dtype attribute::

  >>> z.dtype
  dtype('uint8')

dtype objects also contain information about the type, such as its bit-width and its byte-order. The data type can also be used indirectly to query properties of the type, such as whether it is an integer::

  >>> d = np.dtype(int)
  >>> d
  dtype('int32')
  >>> np.issubdtype(d, int)
  True
  >>> np.issubdtype(d, float)
  False

Array Scalars
=============

Numpy generally returns elements of arrays as array scalars (a scalar with an associated dtype). Array scalars differ from Python scalars, but for the most part they can be used interchangeably (the primary exception is for versions of Python older than v2.x, where integer array scalars cannot act as indices for lists and tuples). There are some exceptions, such as when code requires very specific attributes of a scalar or when it checks specifically whether a value is a Python scalar. Generally, problems are easily fixed by explicitly converting array scalars to Python scalars, using the corresponding Python type function (e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).

The primary advantage of using array scalars is that they preserve the array type (Python may not have a matching scalar type available, e.g. ``int16``). Therefore, the use of array scalars ensures identical behaviour between arrays and scalars, irrespective of whether the value is inside an array or not. NumPy scalars also have many of the same methods arrays do.
"""
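A short doctest-style recap of the conversions discussed above (a sketch, not part of the original docstring; exact reprs vary between NumPy versions)::

  >>> import numpy as np
  >>> z = np.arange(3, dtype=np.uint8)
  >>> s = z[0]           # indexing yields an array scalar, not a Python int
  >>> s.dtype
  dtype('uint8')
  >>> int(s)             # explicit conversion to a Python scalar
  0
  >>> np.issubdtype(z.dtype, np.integer)
  True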
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over.

The drag-and-drop process can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event).

If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence.

Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than doing it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave().

At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit().
"""
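A minimal sketch of a drop target implementing the protocol described above. The TargetBox class is invented for illustration and is not part of the module; under Python 3 the module lives at tkinter.dnd rather than Tkdnd:

    import tkinter

    class TargetBox:
        def __init__(self, root):
            self.canvas = tkinter.Canvas(root, width=200, height=200, bg='white')
            self.canvas.pack()
            # The dnd machinery looks for a dnd_accept attribute on the
            # widget under the mouse, so expose ours there.
            self.canvas.dnd_accept = self.dnd_accept

        def dnd_accept(self, source, event):
            return self                         # willing to handle this drag

        def dnd_enter(self, source, event):
            self.canvas.config(bg='lightblue')  # highlight while hovered

        def dnd_motion(self, source, event):
            pass                                # could move a drag outline here

        def dnd_leave(self, source, event):
            self.canvas.config(bg='white')

        def dnd_commit(self, source, event):
            print('dropped', source)            # accept the drop...
            self.dnd_leave(source, event)       # ...and clean up the highlight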
""" This page is in the table of contents. Carve is the most important plugin to define for your printer. It carves a shape into svg slice layers. It also sets the layer height and edge width for the rest of the tool chain. The carve manual page is at: http://fabmetheus.crsndoo.com/wiki/index.php/Skeinforge_Carve On the Arcol Blog a method of deriving the layer height is posted. That article "Machine Calibrating" is at: http://blog.arcol.hu/?p=157 ==Settings== ===Add Layer Template to SVG=== Default is on. When selected, the layer template will be added to the svg output, which adds javascript control boxes. So 'Add Layer Template to SVG' should be selected when the svg will be viewed in a browser. When off, no controls will be added, the svg output will only include the fabrication paths. So 'Add Layer Template to SVG' should be deselected when the svg will be used by other software, like Inkscape. ===Edge Width over Height=== Default is 1.8. Defines the ratio of the extrusion edge width to the layer height. This parameter tells skeinforge how wide the edge wall is expected to be in relation to the layer height. Default value of 1.8 for the default layer height of 0.4 states that a single filament edge wall should be 0.4 mm * 1.8 = 0.72 mm wide. The higher the value the more the edge will be inset. A ratio of one means the extrusion is a circle, the default ratio of 1.8 means the extrusion is a wide oval. This is an important value because if you are calibrating your machine you need to ensure that the speed of the head and the extrusion rate in combination produce a wall that is 'Layer Height' * 'Edge Width over Height' wide. To start with 'Edge Width over Height' is probably best left at the default of 1.8 and the extrusion rate adjusted to give the correct calculated wall thickness. Adjustment is in the 'Speed' section with 'Feed Rate' controlling speed of the head in X & Y and 'Flow Rate' controlling the extrusion rate. Initially it is probably easier to start adjusting the flow rate only a little at a time until you get a single filament of the correct width. If you change too many parameters at once you can get in a right mess. ===Extra Decimal Places=== Default is two. Defines the number of extra decimal places export will output compared to the number of decimal places in the layer height. The higher the 'Extra Decimal Places', the more significant figures the output numbers will have. ===Import Coarseness=== Default is one. When a triangle mesh has holes in it, the triangle mesh slicer switches over to a slow algorithm that spans gaps in the mesh. The higher the 'Import Coarseness' setting, the wider the gaps in the mesh it will span. An import coarseness of one means it will span gaps of the edge width. ===Layer Height=== Default is 0.4 mm. Defines the the height of the layers skeinforge will cut your object into, in the z direction. This is the most important carve setting, many values in the toolchain are derived from the layer height. For a 0.5 mm nozzle usable values are 0.3 mm to 0.5 mm. Note; if you are using thinner layers make sure to adjust the extrusion speed as well. ===Layers=== Carve slices from bottom to top. To get a single layer, set the "Layers From" to zero and the "Layers To" to one. The 'Layers From' until 'Layers To' range is a python slice. ====Layers From==== Default is zero. Defines the index of the bottom layer that will be carved. If the 'Layers From' is the default zero, the carving will start from the lowest layer. 
If the 'Layers From' index is negative, then the carving will start from the 'Layers From' index below the top layer. For example, if your object is 5 mm tall and your layer thickness is 1 mm, setting 'Layers From' to 3 skips the first 3 mm and starts carving at 3 mm.

====Layers To====
Default is a huge number, which will be limited to the highest index layer. Defines the index of the top layer that will be carved. If the 'Layers To' index is a huge number like the default, the carving will go to the top of the model. If the 'Layers To' index is negative, then the carving will go to the 'Layers To' index below the top layer. This is the same as 'Layers From', only it defines when to end the generation of gcode.

===Mesh Type===
Default is 'Correct Mesh'.

====Correct Mesh====
When selected, the mesh will be accurately carved, and if a hole is found, carve will switch over to the algorithm that spans gaps.

====Unproven Mesh====
When selected, carve will use the gap spanning algorithm from the start. The problem with the gap spanning algorithm is that it will span gaps, even if there is not actually a gap in the model.

===SVG Viewer===
Default is webbrowser. If the 'SVG Viewer' is set to the default 'webbrowser', the scalable vector graphics file will be sent to the default browser to be opened. If the 'SVG Viewer' is set to a program name, the scalable vector graphics file will be sent to that program to be opened.

==Examples==
The following examples carve the file Screw Holder Bottom.stl. The examples are run in a terminal in the folder which contains Screw Holder Bottom.stl and carve.py.

> python carve.py
This brings up the carve dialog.

> python carve.py Screw Holder Bottom.stl
The carve tool is parsing the file:
Screw Holder Bottom.stl
..
The carve tool has created the file:
..
Screw Holder Bottom_carve.svg
"""
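Since the 'Layers From'..'Layers To' range is a python slice, its behaviour can be checked directly in the interpreter; a small sketch with an invented stand-in list of carved layers:

    layers = list(range(10))   # pretend the model slices into 10 layers
    print(layers[0:1])         # [0]        -> the single bottom layer
    print(layers[-3:])         # [7, 8, 9]  -> start 3 layers below the top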
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <EMAIL>, 2012-2013 # Copyright (c), NAME <EMAIL>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
"""Script to generate reports on translator classes from Doxygen sources. The main purpose of the script is to extract the information from sources related to internationalization (the translator classes). It uses the information to generate documentation (language.doc, translator_report.txt) from templates (language.tpl, maintainers.txt). Simply run the script without parameters to get the reports and documentation for all supported languages. If you want to generate the translator report only for some languages, pass their codes as arguments to the script. In that case, the language.doc will not be generated. Example: python translator.py en nl cz Originally, the script was written in Perl and was known as translator.pl. The last Perl version was dated 2002/05/21 (plus some later corrections) $Id: translator.py 742 2010-09-20 18:19:55Z dimitri $ NAME (EMAIL) History: -------- 2002/05/21 - This was the last Perl version. 2003/05/16 - List of language marks can be passed as arguments. 2004/01/24 - Total reimplementation started: classes TrManager, and Transl. 2004/02/05 - First version that produces translator report. No language.doc yet. 2004/02/10 - First fully functional version that generates both the translator report and the documentation. It is a bit slower than the Perl version, but is much less tricky and much more flexible. It also solves some problems that were not solved by the Perl version. The translator report content should be more useful for developers. 2004/02/11 - Some tuning-up to provide more useful information. 2004/04/16 - Added new tokens to the tokenizer (to remove some warnings). 2004/05/25 - Added from __future__ import generators not to force Python 2.3. 2004/06/03 - Removed dependency on textwrap module. 2004/07/07 - Fixed the bug in the fill() function. 2004/07/21 - Better e-mail mangling for HTML part of language.doc. - Plural not used for reporting a single missing method. - Removal of not used translator adapters is suggested only when the report is not restricted to selected languages explicitly via script arguments. 2004/07/26 - Better reporting of not-needed adapters. 2004/10/04 - Reporting of not called translator methods added. 2004/10/05 - Modified to check only doxygen/src sources for the previous report. 2005/02/28 - Slight modification to generate "mailto.txt" auxiliary file. 2005/08/15 - Doxygen's root directory determined primarily from DOXYGEN environment variable. When not found, then relatively to the script. 2007/03/20 - The "translate me!" searched in comments and reported if found. 2008/06/09 - Warning when the MAX_DOT_GRAPH_HEIGHT is still part of trLegendDocs(). 2009/05/09 - Changed HTML output to fit it with XHTML DTD 2009/09/02 - Added percentage info to the report (implemented / to be implemented). 2010/02/09 - Added checking/suggestion 'Reimplementation using UTF-8 suggested. 2010/03/03 - Added [unreachable] prefix used in maintainers.txt. 2010/05/28 - BOM skipped; minor code cleaning. 2010/05/31 - e-mail mangled already in maintainers.txt 2010/08/20 - maintainers.txt to UTF-8, related processin of unicode strings - [any mark] introduced instead of [unreachable] only - marks hihglighted in HTML 2010/08/30 - Highlighting in what will be the table in langhowto.html modified. 2010/09/27 - The underscore in \latexonly part of the generated language.doc was prefixed by backslash (was LaTeX related error). """
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over. The drag-and-drop processes can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event). If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence. Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than to do it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave(). At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit(). """
#import unittest
#
#class TestCase_Unit_WithoutPerms( unittest.TestCase ):
#
#  # Argument pairs that the handler must reject; every call should
#  # return { 'OK' : False }.  The original repeated each call and
#  # assertion by hand (with a few stray duplicated assertions); the
#  # cases are folded into this single list instead.
#  BAD_ARGS = [
#    ( 1, 2 ), ( ( 1, ), 2 ), ( 1, ( 2, ) ), ( 1, { 2 : 2 } ),
#    ( ( 1, ), ( 2, ) ), ( ( 1, ), { 2 : 2 } ), ( [ 1, ], 2 ),
#    ( 1, [ 2, ] ), ( [ 1, ], [ 2, ] ), ( [ 1, ], { 2 : 2 } ),
#    ( [ 1, ], { '2' : 2 } ), ( ( 1, ), { '2' : 2 } ),
#  ]
#
#  def test_export_insert_nok( self ):
#    for args in self.BAD_ARGS:
#      res = self.handler.export_insert( *args )
#      self.assertEqual( res[ 'OK' ], False )
#
################################################################################
#
#  def test_export_update_nok( self ):
#    for args in self.BAD_ARGS:
#      res = self.handler.export_update( *args )
#      self.assertEqual( res[ 'OK' ], False )
#
################################################################################
#
#  def test_export_get_nok( self ):
#    for args in self.BAD_ARGS:
#      res = self.handler.export_get( *args )
#      self.assertEqual( res[ 'OK' ], False )
#
#  def test_export_get_ok( self ):
#    res = self.handler.export_get( { '1' : 1 }, { '2' : 2 } )
#    self.assertEqual( res[ 'OK' ], True )
#
################################################################################
#
#  def test_export_delete_nok( self ):
#    for args in self.BAD_ARGS:
#      res = self.handler.export_delete( *args )
#      self.assertEqual( res[ 'OK' ], False )
#
################################################################################
""" This is a procedural interface to the matplotlib object-oriented plotting library. The following plotting commands are provided; the majority have MATLAB |reg| [*]_ analogs and similar arguments. .. |reg| unicode:: 0xAE _Plotting commands acorr - plot the autocorrelation function annotate - annotate something in the figure arrow - add an arrow to the axes axes - Create a new axes axhline - draw a horizontal line across axes axvline - draw a vertical line across axes axhspan - draw a horizontal bar across axes axvspan - draw a vertical bar across axes axis - Set or return the current axis limits autoscale - turn axis autoscaling on or off, and apply it bar - make a bar chart barh - a horizontal bar chart broken_barh - a set of horizontal bars with gaps box - set the axes frame on/off state boxplot - make a box and whisker plot violinplot - make a violin plot cla - clear current axes clabel - label a contour plot clf - clear a figure window clim - adjust the color limits of the current image close - close a figure window colorbar - add a colorbar to the current figure cohere - make a plot of coherence contour - make a contour plot contourf - make a filled contour plot csd - make a plot of cross spectral density delaxes - delete an axes from the current figure draw - Force a redraw of the current figure errorbar - make an errorbar graph figlegend - make legend on the figure rather than the axes figimage - make a figure image figtext - add text in figure coords figure - create or change active figure fill - make filled polygons findobj - recursively find all objects matching some criteria gca - return the current axes gcf - return the current figure gci - get the current image, or None getp - get a graphics property grid - set whether gridding is on hist - make a histogram hold - set the axes hold state ioff - turn interaction mode off ion - turn interaction mode on isinteractive - return True if interaction mode is on imread - load image file into array imsave - save array as an image file imshow - plot image data ishold - return the hold state of the current axes legend - make an axes legend locator_params - adjust parameters used in locating axis ticks loglog - a log log plot matshow - display a matrix in a new figure preserving aspect margins - set margins used in autoscaling pause - pause for a specified interval pcolor - make a pseudocolor plot pcolormesh - make a pseudocolor plot using a quadrilateral mesh pie - make a pie chart plot - make a line plot plot_date - plot dates plotfile - plot column data from an ASCII tab/space/comma delimited file pie - pie charts polar - make a polar plot on a PolarAxes psd - make a plot of power spectral density quiver - make a direction field (arrows) plot rc - control the default params rgrids - customize the radial grids and labels for polar savefig - save the current figure scatter - make a scatter plot setp - set a graphics property semilogx - log x axis semilogy - log y axis show - show the figures specgram - a spectrogram plot spy - plot sparsity pattern using markers or image stem - make a stem plot subplot - make one subplot (numrows, numcols, axesnum) subplots - make a figure with a set of (numrows, numcols) subplots subplots_adjust - change the params controlling the subplot positions of current figure subplot_tool - launch the subplot configuration tool suptitle - add a figure title table - add a table to the plot text - add some text at location x,y to the current axes thetagrids - customize the radial theta grids and labels for polar 
tick_params - control the appearance of ticks and tick labels ticklabel_format - control the format of tick labels title - add a title to the current axes tricontour - make a contour plot on a triangular grid tricontourf - make a filled contour plot on a triangular grid tripcolor - make a pseudocolor plot on a triangular grid triplot - plot a triangular grid xcorr - plot the cross correlation of x and y xlim - set/get the xlimits ylim - set/get the ylimits xticks - set/get the xticks yticks - set/get the yticks xlabel - add an xlabel to the current axes ylabel - add a ylabel to the current axes autumn - set the default colormap to autumn bone - set the default colormap to bone cool - set the default colormap to cool copper - set the default colormap to copper flag - set the default colormap to flag gray - set the default colormap to gray hot - set the default colormap to hot hsv - set the default colormap to hsv jet - set the default colormap to jet pink - set the default colormap to pink prism - set the default colormap to prism spring - set the default colormap to spring summer - set the default colormap to summer winter - set the default colormap to winter spectral - set the default colormap to spectral _Event handling connect - register an event handler disconnect - remove a connected event handler _Matrix commands cumprod - the cumulative product along a dimension cumsum - the cumulative sum along a dimension detrend - remove the mean or best fit line from an array diag - the k-th diagonal of matrix diff - the n-th difference of an array eig - the eigenvalues and eigenvectors of v eye - a matrix where the k-th diagonal is ones, else zero find - return the indices where a condition is nonzero fliplr - flip the columns of a matrix left/right flipud - flip the rows of a matrix up/down linspace - a linear spaced vector of N values from min to max inclusive logspace - a log spaced vector of N values from min to max inclusive meshgrid - repeat x and y to make regular matrices ones - an array of ones rand - an array from the uniform distribution [0,1] randn - an array from the normal distribution rot90 - rotate matrix k*90 degrees counterclockwise squeeze - squeeze an array removing any dimensions of length 1 tri - a triangular matrix tril - a lower triangular matrix triu - an upper triangular matrix vander - the Vandermonde matrix of vector x svd - singular value decomposition zeros - a matrix of zeros _Probability normpdf - The Gaussian probability density function rand - random numbers from the uniform distribution randn - random numbers from the normal distribution _Statistics amax - the maximum along dimension m amin - the minimum along dimension m corrcoef - correlation coefficient cov - covariance matrix mean - the mean along dimension m median - the median along dimension m norm - the norm of vector x prod - the product along dimension m ptp - the max-min along dimension m std - the standard deviation along dimension m asum - the sum along dimension m ksdensity - the kernel density estimate _Time series analysis bartlett - M-point Bartlett window blackman - M-point Blackman window cohere - the coherence using average periodogram csd - the cross spectral density using average periodogram fft - the fast Fourier transform of vector x hamming - M-point Hamming window hanning - M-point Hanning window hist - compute the histogram of x kaiser - M length Kaiser window psd - the power spectral density using average periodogram sinc - the sinc function of array x _Dates date2num -
convert python datetimes to numeric representation drange - create an array of numbers for date plots num2date - convert numeric type (float days since 0001) to datetime _Other angle - the angle of a complex array griddata - interpolate irregularly distributed data to a regular grid load - Deprecated--please use loadtxt. loadtxt - load ASCII data into array. polyfit - fit x, y to an n-th order polynomial polyval - evaluate an n-th order polynomial roots - the roots of the polynomial coefficients in p save - Deprecated--please use savetxt. savetxt - save an array to an ASCII file. trapz - trapezoidal integration __end .. [*] MATLAB is a registered trademark of The MathWorks, Inc. """
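A short sketch exercising a handful of the commands listed above, via the modern pyplot entry point (the data and output file name are invented for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0.0, 2.0 * np.pi, 100)
    plt.plot(t, np.sin(t), label='sin(t)')   # plot - make a line plot
    plt.xlabel('t')                          # xlabel - add an xlabel
    plt.ylabel('amplitude')                  # ylabel - add a ylabel
    plt.title('a line plot')                 # title - add a title
    plt.legend()                             # legend - make an axes legend
    plt.grid(True)                           # grid - set whether gridding is on
    plt.savefig('sine.png')                  # savefig - save the current figure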
""" ======================== Broadcasting over arrays ======================== The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where ``b`` was an array. We can think of the scalar ``b`` being *stretched* during the arithmetic operation into an array with the same shape as ``a``. The new elements in ``b`` are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (``b`` is a scalar rather than an array). General Broadcasting Rules ========================== When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when 1) they are equal, or 2) one of them is 1 If these conditions are not met, a ``ValueError: frames are not aligned`` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same *number* of dimensions. For example, if you have a ``256x256x3`` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 When either of the dimensions compared is one, the larger of the two is used. In other words, the smaller of two axes is stretched or "copied" to match the other. 
In the following example, both the ``A`` and ``B`` arrays have axes with length one that are expanded to a larger size during the broadcast operation::

  A      (4d array):  8 x 1 x 6 x 1
  B      (3d array):      7 x 1 x 5
  Result (4d array):  8 x 7 x 6 x 5

Here are some more examples::

  A      (2d array):  5 x 4
  B      (1d array):      1
  Result (2d array):  5 x 4

  A      (2d array):  5 x 4
  B      (1d array):      4
  Result (2d array):  5 x 4

  A      (3d array):  15 x 3 x 5
  B      (3d array):  15 x 1 x 5
  Result (3d array):  15 x 3 x 5

  A      (3d array):  15 x 3 x 5
  B      (2d array):       3 x 5
  Result (3d array):  15 x 3 x 5

  A      (3d array):  15 x 3 x 5
  B      (2d array):       3 x 1
  Result (3d array):  15 x 3 x 5

Here are examples of shapes that do not broadcast::

  A      (1d array):  3
  B      (1d array):  4            # trailing dimensions do not match

  A      (2d array):      2 x 1
  B      (3d array):  8 x 4 x 3    # second from last dimensions mismatched

An example of broadcasting in practice::

  >>> x = np.arange(4)
  >>> xx = x.reshape(4,1)
  >>> y = np.ones(5)
  >>> z = np.ones((3,4))

  >>> x.shape
  (4,)
  >>> y.shape
  (5,)
  >>> x + y
  <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape

  >>> xx.shape
  (4, 1)
  >>> y.shape
  (5,)
  >>> (xx + y).shape
  (4, 5)
  >>> xx + y
  array([[ 1.,  1.,  1.,  1.,  1.],
         [ 2.,  2.,  2.,  2.,  2.],
         [ 3.,  3.,  3.,  3.,  3.],
         [ 4.,  4.,  4.,  4.,  4.]])

  >>> x.shape
  (4,)
  >>> z.shape
  (3, 4)
  >>> (x + z).shape
  (3, 4)
  >>> x + z
  array([[ 1.,  2.,  3.,  4.],
         [ 1.,  2.,  3.,  4.],
         [ 1.,  2.,  3.,  4.]])

Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays::

  >>> a = np.array([0.0, 10.0, 20.0, 30.0])
  >>> b = np.array([1.0, 2.0, 3.0])
  >>> a[:, np.newaxis] + b
  array([[  1.,   2.,   3.],
         [ 11.,  12.,  13.],
         [ 21.,  22.,  23.],
         [ 31.,  32.,  33.]])

Here the ``newaxis`` index operator inserts a new axis into ``a``, making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.

See `this article <http://www.scipy.org/EricsBroadcastingDoc>`_ for illustrations of broadcasting concepts.
"""
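The RGB-scaling case from the rules above, written out as a short doctest-style sketch (the array contents are invented; only the shapes matter)::

  >>> image = np.ones((256, 256, 3))     # RGB image
  >>> scale = np.array([0.5, 1.0, 2.0])  # one factor per color channel
  >>> (image * scale).shape              # (3,) lines up with the trailing axis
  (256, 256, 3)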
# Defines classes that provide synchronization objects.  Note that use of
# this module requires that your Python support threads.
#
# condition(lock=None)
#    a POSIX-like condition-variable object
# barrier(n)
#    an n-thread barrier
# event()
#    an event object
# semaphore(n=1)
#    a semaphore object, with initial count n
# mrsw()
#    a multiple-reader single-writer lock
#
# CONDITIONS
#
# A condition object is created via
#   import this_module
#   your_condition_object = this_module.condition(lock=None)
#
# As explained below, a condition object has a lock associated with it,
# used in the protocol to protect condition data.  You can specify a
# lock to use in the constructor, else the constructor will allocate
# an anonymous lock for you.  Specifying a lock explicitly can be useful
# when more than one condition keys off the same set of shared data.
#
# Methods:
#   .acquire()
#      acquire the lock associated with the condition
#   .release()
#      release the lock associated with the condition
#   .wait()
#      block the thread until such time as some other thread does a
#      .signal or .broadcast on the same condition, and release the
#      lock associated with the condition.  The lock associated with
#      the condition MUST be in the acquired state at the time
#      .wait is invoked.
#   .signal()
#      wake up exactly one thread (if any) that previously did a .wait
#      on the condition; that thread will awaken with the lock associated
#      with the condition in the acquired state.  If no threads are
#      .wait'ing, this is a nop.  If more than one thread is .wait'ing on
#      the condition, any of them may be awakened.
#   .broadcast()
#      wake up all threads (if any) that are .wait'ing on the condition;
#      the threads are woken up serially, each with the lock in the
#      acquired state, so should .release() as soon as possible.  If no
#      threads are .wait'ing, this is a nop.
#
#   Note that if a thread does a .wait *while* a signal/broadcast is
#   in progress, it's guaranteed to block until a subsequent
#   signal/broadcast.
#
#   Secret feature:  `broadcast' actually takes an integer argument,
#   and will wake up exactly that many waiting threads (or the total
#   number waiting, if that's less).  Use of this is dubious, though,
#   and probably won't be supported if this form of condition is
#   reimplemented in C.
#
# DIFFERENCES FROM POSIX
#
# + A separate mutex is not needed to guard condition data.  Instead, a
#   condition object can (must) be .acquire'ed and .release'ed directly.
#   This eliminates a common error in using POSIX conditions.
#
# + Because of implementation difficulties, a POSIX `signal' wakes up
#   _at least_ one .wait'ing thread.  Race conditions make it difficult
#   to stop that.  This implementation guarantees to wake up only one,
#   but you probably shouldn't rely on that.
#
# PROTOCOL
#
# Condition objects are used to block threads until "some condition" is
# true.  E.g., a thread may wish to wait until a producer pumps out data
# for it to consume, or a server may wish to wait until someone requests
# its services, or perhaps a whole bunch of threads want to wait until a
# preceding pass over the data is complete.  Early models for conditions
# relied on some other thread figuring out when a blocked thread's
# condition was true, and made the other thread responsible both for
# waking up the blocked thread and guaranteeing that it woke up with all
# data in a correct state.  This proved to be very delicate in practice,
# and gave conditions a bad name in some circles.
#
# The POSIX model addresses these problems by making a thread responsible
# for ensuring that its own state is correct when it wakes, and relies
# on a rigid protocol to make this easy; so long as you stick to the
# protocol, POSIX conditions are easy to "get right":
#
# A) The thread that's waiting for some arbitrarily-complex condition
#    (ACC) to become true does:
#
#       condition.acquire()
#       while not (code to evaluate the ACC):
#           condition.wait()
#           # That blocks the thread, *and* releases the lock.  When a
#           # condition.signal() happens, it will wake up some thread that
#           # did a .wait, *and* acquire the lock again before .wait
#           # returns.
#           #
#           # Because the lock is acquired at this point, the state used
#           # in evaluating the ACC is frozen, so it's safe to go back &
#           # reevaluate the ACC.
#
#       # At this point, ACC is true, and the thread has the condition
#       # locked.
#       # So code here can safely muck with the shared state that
#       # went into evaluating the ACC -- if it wants to.
#
#       # When done mucking with the shared state, do
#       condition.release()
#
# B) Threads that are mucking with shared state that may affect the
#    ACC do:
#
#       condition.acquire()
#       # muck with shared state
#       condition.release()
#       if it's possible that ACC is true now:
#           condition.signal()  # or .broadcast()
#
#    Note:  You may prefer to put the "if" clause before the release().
#    That's fine, but do note that anyone waiting on the signal will
#    stay blocked until the release() is done (since acquiring the
#    condition is part of what .wait() does before it returns).
#
# TRICK OF THE TRADE
#
# With simpler forms of conditions, it can be impossible to know when
# a thread that's supposed to do a .wait has actually done it.  But
# because this form of condition releases a lock as _part_ of doing a
# wait, the state of that lock can be used to guarantee it.
#
# E.g., suppose thread A spawns thread B and later wants to wait for B to
# complete:
#
#    In A:                             In B:
#
#    B_done = condition()              ... do work ...
#    B_done.acquire()                  B_done.acquire(); B_done.release()
#    spawn B                           B_done.signal()
#    ... some time later ...           ... and B exits ...
#    B_done.wait()
#
# Because B_done was in the acquire'd state at the time B was spawned,
# B's attempt to acquire B_done can't succeed until A has done its
# B_done.wait() (which releases B_done).  So B's B_done.signal() is
# guaranteed to be seen by the .wait().  Without the lock trick, B
# may signal before A .waits, and then A would wait forever.
#
# BARRIERS
#
# A barrier object is created via
#   import this_module
#   your_barrier = this_module.barrier(num_threads)
#
# Methods:
#   .enter()
#      the thread blocks until num_threads threads in all have done
#      .enter().  Then the num_threads threads that .enter'ed resume,
#      and the barrier resets to capture the next num_threads threads
#      that .enter it.
#
# EVENTS
#
# An event object is created via
#   import this_module
#   your_event = this_module.event()
#
# An event has two states, `posted' and `cleared'.  An event is
# created in the cleared state.
#
# Methods:
#
#   .post()
#      Put the event in the posted state, and resume all threads
#      .wait'ing on the event (if any).
#
#   .clear()
#      Put the event in the cleared state.
#
#   .is_posted()
#      Returns 0 if the event is in the cleared state, or 1 if the event
#      is in the posted state.
#
#   .wait()
#      If the event is in the posted state, returns immediately.
#      If the event is in the cleared state, blocks the calling thread
#      until the event is .post'ed by another thread.
#
# Note that an event, once posted, remains posted until explicitly
# cleared.  Relative to conditions, this is both the strength & weakness
# of events.  It's a strength because the .post'ing thread doesn't have to
# worry about whether the threads it's trying to communicate with have
# already done a .wait (a condition .signal is seen only by threads that
# do a .wait _prior_ to the .signal; a .signal does not persist).  But
# it's a weakness because .clear'ing an event is error-prone:  it's easy
# to mistakenly .clear an event before all the threads you intended to
# see the event get around to .wait'ing on it.  But so long as you don't
# need to .clear an event, events are easy to use safely.
#
# SEMAPHORES
#
# A semaphore object is created via
#   import this_module
#   your_semaphore = this_module.semaphore(count=1)
#
# A semaphore has an integer count associated with it.  The initial value
# of the count is specified by the optional argument (which defaults to
# 1) passed to the semaphore constructor.
#
# Methods:
#
#   .p()
#      If the semaphore's count is greater than 0, decrements the count
#      by 1 and returns.
#      Else if the semaphore's count is 0, blocks the calling thread
#      until a subsequent .v() increases the count.  When that happens,
#      the count will be decremented by 1 and the calling thread resumed.
#
#   .v()
#      Increments the semaphore's count by 1, and wakes up a thread (if
#      any) blocked by a .p().  It's a (detected) error for a .v() to
#      increase the semaphore's count to a value larger than the initial
#      count.
#
# MULTIPLE-READER SINGLE-WRITER LOCKS
#
# A mrsw lock is created via
#   import this_module
#   your_mrsw_lock = this_module.mrsw()
#
# This kind of lock is often useful with complex shared data structures.
# The object lets any number of "readers" proceed, so long as no thread
# wishes to "write".  When one or more threads declare their intention
# to "write" (e.g., to update a shared structure), all current readers
# are allowed to finish, and then a writer gets exclusive access; all
# other readers & writers are blocked until the current writer completes.
# Finally, if some thread is waiting to write and another is waiting to
# read, the writer takes precedence.
#
# Methods:
#
#   .read_in()
#      If no thread is writing or waiting to write, returns immediately.
#      Else blocks until no thread is writing or waiting to write.  So
#      long as some thread has completed a .read_in but not a .read_out,
#      writers are blocked.
#
#   .read_out()
#      Use sometime after a .read_in to declare that the thread is done
#      reading.  When all threads complete reading, a writer can proceed.
#
#   .write_in()
#      If no thread is writing (has completed a .write_in, but hasn't yet
#      done a .write_out) or reading (similarly), returns immediately.
#      Else blocks the calling thread, and threads waiting to read, until
#      the current writer completes writing or all the current readers
#      complete reading; if then more than one thread is waiting to
#      write, one of them is allowed to proceed, but which one is not
#      specified.
#
#   .write_out()
#      Use sometime after a .write_in to declare that the thread is done
#      writing.  Then if some other thread is waiting to write, it's
#      allowed to proceed.  Else all threads (if any) waiting to read are
#      allowed to proceed.
#
#   .write_to_read()
#      Use instead of a .write_in to declare that the thread is done
#      writing but wants to continue reading without other writers
#      intervening.
#      If there are other threads waiting to write, they
#      are allowed to proceed only if the current thread calls
#      .read_out; threads waiting to read are only allowed to proceed
#      if there are no threads waiting to write.  (This is a
#      weakness of the interface!)
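# To make the condition protocol above concrete, here is a minimal
# runnable sketch using Python's standard `threading' module, whose
# Condition objects follow the same acquire/wait/release protocol
# (threading spells .signal as notify() and .broadcast as notify_all()):

import threading

items = []                        # shared state guarded by the condition
cond = threading.Condition()

def consumer():
    with cond:                    # condition.acquire()
        while not items:          # re-evaluate the ACC after every wakeup
            cond.wait()           # releases the lock while blocked
        item = items.pop(0)       # lock held again here: state is frozen
    print("got", item)            # lock released by the `with' block

def producer():
    with cond:
        items.append("work")      # muck with the shared state
        cond.notify()             # ACC may be true now: .signal()

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()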
# Copyright 2011,2012 NAME
# Copyright 2008 (C) Nicira, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This file is derived from the packet library in NOX, which was
# developed by Nicira, Inc.

#======================================================================
#
#                          DNS Message Format
#
#                                    1  1  1  1  1  1
#      0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                      ID                       |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |QR|  Opcode   |AA|TC|RD|RA| Z|AD|CD|   RCODE   |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                Total Questions                |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                Total Answer RRs               |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |              Total Authority RRs              |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |             Total Additional RRs              |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                 Questions ...                 |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                Answer RRs ...                 |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |               Authority RRs ...               |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |              Additional RRs ...               |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
#  Question format:
#
#                                    1  1  1  1  1  1
#      0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                                               |
#    /                     QNAME                     /
#    /                                               /
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                     QTYPE                     |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                     QCLASS                    |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
#  All RRs have the following format:
#
#                                    1  1  1  1  1  1
#      0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                                               |
#    /                                               /
#    /                      NAME                     /
#    |                                               |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                      TYPE                     |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                     CLASS                     |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                      TTL                      |
#    |                                               |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    |                   RDLENGTH                    |
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#    /                     RDATA                     /
#    /                                               /
#    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
#======================================================================

# TODO:
#   SOA data
#   General cleanup/rewrite (the code has gotten pretty bad)
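# As a rough illustration (not the library's own code), the fixed 12-byte
# header shown in the diagram above can be unpacked with the struct
# module; the field names below are our own:

import struct

def parse_dns_header(data):
    # Six big-endian 16-bit words: ID, flags, then the four counts.
    dns_id, flags, qdcount, ancount, nscount, arcount = \
        struct.unpack("!6H", data[:12])
    return {
        "id": dns_id,
        "qr": (flags >> 15) & 1,        # query (0) or response (1)
        "opcode": (flags >> 11) & 0xF,
        "aa": (flags >> 10) & 1,
        "tc": (flags >> 9) & 1,
        "rd": (flags >> 8) & 1,
        "ra": (flags >> 7) & 1,
        "rcode": flags & 0xF,
        "qdcount": qdcount,             # Total Questions
        "ancount": ancount,             # Total Answer RRs
        "nscount": nscount,             # Total Authority RRs
        "arcount": arcount,             # Total Additional RRs
    }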
""" ======================== Broadcasting over arrays ======================== The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where ``b`` was an array. We can think of the scalar ``b`` being *stretched* during the arithmetic operation into an array with the same shape as ``a``. The new elements in ``b`` are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (``b`` is a scalar rather than an array). General Broadcasting Rules ========================== When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when 1) they are equal, or 2) one of them is 1 If these conditions are not met, a ``ValueError: frames are not aligned`` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same *number* of dimensions. For example, if you have a ``256x256x3`` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or "copied" to match the other. 
In the following example, both the ``A`` and ``B`` arrays have axes with length one that are expanded to a larger size during the broadcast operation:: A (4d array): 8 x 1 x 6 x 1 B (3d array): 7 x 1 x 5 Result (4d array): 8 x 7 x 6 x 5 Here are some more examples:: A (2d array): 5 x 4 B (1d array): 1 Result (2d array): 5 x 4 A (2d array): 5 x 4 B (1d array): 4 Result (2d array): 5 x 4 A (3d array): 15 x 3 x 5 B (3d array): 15 x 1 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 1 Result (3d array): 15 x 3 x 5 Here are examples of shapes that do not broadcast:: A (1d array): 3 B (1d array): 4 # trailing dimensions do not match A (2d array): 2 x 1 B (3d array): 8 x 4 x 3 # second from last dimensions mismatched An example of broadcasting in practice:: >>> x = np.arange(4) >>> xx = x.reshape(4,1) >>> y = np.ones(5) >>> z = np.ones((3,4)) >>> x.shape (4,) >>> y.shape (5,) >>> x + y <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape >>> xx.shape (4, 1) >>> y.shape (5,) >>> (xx + y).shape (4, 5) >>> xx + y array([[ 1., 1., 1., 1., 1.], [ 2., 2., 2., 2., 2.], [ 3., 3., 3., 3., 3.], [ 4., 4., 4., 4., 4.]]) >>> x.shape (4,) >>> z.shape (3, 4) >>> (x + z).shape (3, 4) >>> x + z array([[ 1., 2., 3., 4.], [ 1., 2., 3., 4.], [ 1., 2., 3., 4.]]) Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays:: >>> a = np.array([0.0, 10.0, 20.0, 30.0]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a[:, np.newaxis] + b array([[ 1., 2., 3.], [ 11., 12., 13.], [ 21., 22., 23.], [ 31., 32., 33.]]) Here the ``newaxis`` index operator inserts a new axis into ``a``, making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array with ``b``, which has shape ``(3,)``, yields a ``4x3`` array. See `this article <http://wiki.scipy.org/EricsBroadcastingDoc>`_ for illustrations of broadcasting concepts. """
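# A runnable sketch of the image-scaling case discussed above: the (3,)
# scale array lines up with the trailing axis of the (256, 256, 3) image
# and is broadcast over the two leading axes without copying its data.
# The values here are placeholders for illustration:

import numpy as np

image = np.ones((256, 256, 3))          # placeholder RGB data
scale = np.array([0.5, 1.0, 2.0])       # one factor per color channel

result = image * scale
print(result.shape)                     # (256, 256, 3)
print(result[0, 0])                     # [ 0.5  1.   2. ]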
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including
        DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters between
        keys and values are surrounded by spaces.
"""
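# A short usage sketch of the interface documented above (section and
# option names here are invented for illustration):

from configparser import ConfigParser

parser = ConfigParser()
parser.read_string("""
[server]
host = example.org
port = 8080
debug = no
""")

print(parser.sections())                      # ['server']
print(parser.get("server", "host"))           # 'example.org'
print(parser.getint("server", "port"))        # 8080
print(parser.getboolean("server", "debug"))   # False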
#Python 3+ script for converting the ban table format as of 2018-10-28, made by USERNAME
#Before starting, ensure you have installed the mysqlclient package: https://github.com/PyMySQL/mysqlclient-python
#It can be downloaded from the command line with pip:
#pip install mysqlclient
#
#You will also have to create a new ban table for inserting converted data to, per the schema:
#CREATE TABLE `ban` (
# `id` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
# `bantime` DATETIME NOT NULL,
# `server_ip` INT(10) UNSIGNED NOT NULL,
# `server_port` SMALLINT(5) UNSIGNED NOT NULL,
# `round_id` INT(11) UNSIGNED NOT NULL,
# `role` VARCHAR(32) NULL DEFAULT NULL,
# `expiration_time` DATETIME NULL DEFAULT NULL,
# `applies_to_admins` TINYINT(1) UNSIGNED NOT NULL DEFAULT '0',
# `reason` VARCHAR(2048) NOT NULL,
# `ckey` VARCHAR(32) NULL DEFAULT NULL,
# `ip` INT(10) UNSIGNED NULL DEFAULT NULL,
# `computerid` VARCHAR(32) NULL DEFAULT NULL,
# `a_ckey` VARCHAR(32) NOT NULL,
# `a_ip` INT(10) UNSIGNED NOT NULL,
# `a_computerid` VARCHAR(32) NOT NULL,
# `who` VARCHAR(2048) NOT NULL,
# `adminwho` VARCHAR(2048) NOT NULL,
# `edits` TEXT NULL DEFAULT NULL,
# `unbanned_datetime` DATETIME NULL DEFAULT NULL,
# `unbanned_ckey` VARCHAR(32) NULL DEFAULT NULL,
# `unbanned_ip` INT(10) UNSIGNED NULL DEFAULT NULL,
# `unbanned_computerid` VARCHAR(32) NULL DEFAULT NULL,
# `unbanned_round_id` INT(11) UNSIGNED NULL DEFAULT NULL,
# PRIMARY KEY (`id`),
# KEY `idx_ban_isbanned` (`ckey`,`role`,`unbanned_datetime`,`expiration_time`),
# KEY `idx_ban_isbanned_details` (`ckey`,`ip`,`computerid`,`role`,`unbanned_datetime`,`expiration_time`),
# KEY `idx_ban_count` (`bantime`,`a_ckey`,`applies_to_admins`,`unbanned_datetime`,`expiration_time`)
#) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#This is to prevent the destruction of existing data and to allow rollbacks to be performed in the event of an error during conversion
#Once conversion is complete, remember to rename the old and new ban tables; it's up to you if you want to keep the old table
#
#To view the parameters for this script, execute it with the argument --help
#All the positional arguments are required; remember to include prefixes in your table names if you use them
#An example of the command used to execute this script from powershell:
#python ban_conversion_2018-10-28.py "localhost" "root" "password" "feedback" "SS13_ban" "SS13_ban_new"
#I found that this script would complete conversion of 35000 rows in approximately 20 seconds; results will depend on the size of your ban table and the computer used
#
#The script has been tested to complete with tgstation's ban table as of 2018-09-02 02:19:56
#In the event of an error the new ban table is automatically truncated
#The source table is never modified, so you don't have to worry about losing any data due to errors
#Some additional error correction is performed to fix problems specific to legacy and invalid data in tgstation's ban table; these operations are tagged with a 'TG:' comment
#Even if you don't have any of these specific problems in your ban table, the operations won't matter, as they have an insignificant effect on runtime
#
#While this script is safe to run with your game server(s) active, any bans created after the script has started won't be converted
#You will also have to ensure that the code and table names are updated between rounds, as neither will be compatible
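#A rough sketch of the conversion loop this script performs, using the
#mysqlclient (MySQLdb) package; the column list is abbreviated for
#illustration (the real script converts every column and takes its
#connection details as positional arguments):

import MySQLdb

db = MySQLdb.connect(host="localhost", user="root",
                     passwd="password", db="feedback")
cur = db.cursor()
cur.execute("SELECT bantime, ckey, reason FROM `SS13_ban`")
rows = cur.fetchall()
try:
    for row in rows:
        cur.execute("INSERT INTO `SS13_ban_new` (bantime, ckey, reason) "
                    "VALUES (%s, %s, %s)", row)
    db.commit()
except MySQLdb.Error:
    db.rollback()
    cur.execute("TRUNCATE `SS13_ban_new`")   # mirror the truncate-on-error behaviour
    db.commit()
    raise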
""" =============== Array Internals =============== Internal organization of numpy arrays ===================================== It helps to understand a bit about how numpy arrays are handled under the covers to help understand numpy better. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to Numpy". Numpy arrays consist of two major components, the raw array data (from now on, referred to as the data buffer), and the information about the raw array data. The data buffer is typically what people think of as arrays in C or Fortran, a contiguous (and fixed) block of memory containing fixed sized data items. Numpy also contains a significant set of data that describes how to interpret the data in the data buffer. This extra information contains (among other things): 1) The basic data element's size in bytes 2) The start of the data within the data buffer (an offset relative to the beginning of the data buffer). 3) The number of dimensions and the size of each dimension 4) The separation between elements for each dimension (the 'stride'). This does not have to be a multiple of the element size 5) The byte order of the data (which may not be the native byte order) 6) Whether the buffer is read-only 7) Information (via the dtype object) about the interpretation of the basic data element. The basic data element may be as simple as a int or a float, or it may be a compound object (e.g., struct-like), a fixed character field, or Python object pointers. 8) Whether the array is to interpreted as C-order or Fortran-order. This arrangement allow for very flexible use of arrays. One thing that it allows is simple changes of the metadata to change the interpretation of the array buffer. Changing the byteorder of the array is a simple change involving no rearrangement of the data. The shape of the array can be changed very easily without changing anything in the data buffer or any data copying at all Among other things that are made possible is one can create a new array metadata object that uses the same data buffer to create a new view of that data buffer that has a different interpretation of the buffer (e.g., different shape, offset, byte order, strides, etc) but shares the same data bytes. Many operations in numpy do just this such as slices. Other operations, such as transpose, don't move data elements around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the doesn't move. Typically these new versions of the array metadata but the same data buffer are new 'views' into the data buffer. There is a different ndarray object, but it uses the same data buffer. This is why it is necessary to force copies through use of the .copy() method if one really wants to make a new and independent copy of the data buffer. New views into arrays mean the the object reference counts for the data buffer increase. Simply doing away with the original array object will not remove the data buffer if other views of it still exist. Multidimensional Array Indexing Order Issues ============================================ What is the right way to index multi-dimensional arrays? Before you jump to conclusions about the one and true way to index multi-dimensional arrays, it pays to understand why this is a confusing issue. 
This section will try to explain in detail how numpy indexing works and
why we adopt the convention we do for images, and when it may be
appropriate to adopt other conventions.

The first thing to understand is that there are two conflicting
conventions for indexing 2-dimensional arrays. Matrix notation uses the
first index to indicate which row is being selected and the second index
to indicate which column is selected. This is opposite the
geometrically-oriented convention for images where people generally
think the first index represents x position (i.e., column) and the
second represents y position (i.e., row). This alone is the source of
much confusion; matrix-oriented users and image-oriented users expect
two different things with regard to indexing.

The second issue to understand is how indices correspond to the order
the array is stored in memory. In Fortran the first index is the most
rapidly varying index when moving through the elements of a
two-dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column
at a time (since the first index moves to the next row as it changes).
Thus Fortran is considered a Column-major language. C has just the
opposite convention. In C, the last index changes most rapidly as one
moves through the array as stored in memory. Thus C is a Row-major
language. The matrix is stored by rows. Note that in both cases it
presumes that the matrix convention for indexing is being used, i.e.,
for both Fortran and C, the first index is the row. Note this convention
implies that the indexing convention is invariant and that the data
order changes to keep that so.

But that's not the only way to look at it. Suppose one has large
two-dimensional arrays (images or matrices) stored in data files.
Suppose the data are stored by rows rather than by columns. If we are to
preserve our index convention (whether matrix or image) that means that
depending on the language we use, we may be forced to reorder the data
if it is read into memory to preserve our indexing convention. For
example if we read row-ordered data into memory without reordering, it
will match the matrix indexing convention for C, but not for Fortran.
Conversely, it will match the image indexing convention for Fortran, but
not for C. For C, if one is using data stored in row order, and one
wants to preserve the image index convention, the data must be reordered
when reading into memory.

In the end, which you do for Fortran or C depends on which is more
important, not reordering data or preserving the indexing convention.
For large images, reordering data is potentially expensive, and often
the indexing convention is inverted to avoid that.

The situation with numpy makes this issue yet more complicated. The
internal machinery of numpy arrays is flexible enough to accept any
ordering of indices. One can simply reorder indices by manipulating the
internal stride information for arrays without reordering the data at
all. Numpy will know how to map the new index order to the data without
moving the data.

So if this is true, why not choose the index order that matches what you
most expect? In particular, why not define row-ordered images to use the
image convention? (This is sometimes referred to as the Fortran
convention vs the C convention, thus the 'C' and 'FORTRAN' order options
for array ordering in numpy.) The drawback of doing this is potential
performance penalties.
It's common to access the data sequentially, either implicitly in array
operations or explicitly by looping over rows of an image. When that is
done, then the data will be accessed in non-optimal order. As the first
index is incremented, what is actually happening is that elements spaced
far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, consider a two-dimensional image 'im'
defined so that im[0, 10] represents the value at x=0, y=10. To be
consistent with usual Python behavior, im[0] would represent a column at
x=0. Yet that data would be spread over the whole array since the data
are stored in row order. Despite the flexibility of numpy's indexing, it
can't really paper over the fact that basic operations are rendered
inefficient because of data order or that getting contiguous subarrays
is still awkward (e.g., im[:,0] for the first row, vs im[0]). Thus one
can't use an idiom such as "for row in im"; "for col in im" does work,
but doesn't yield contiguous column data.

As it turns out, numpy is smart enough when dealing with ufuncs to
determine which index is the most rapidly varying one in memory and uses
that for the innermost loop. Thus for ufuncs there is no large intrinsic
advantage to either approach in most cases. On the other hand, use of
.flat with a Fortran-ordered array will lead to non-optimal memory
access as adjacent elements in the flattened array (iterator, actually)
are not contiguous in memory.

Indeed, the fact is that Python indexing on lists and other sequences
naturally leads to an outside-to-inside ordering (the first index gets
the largest grouping, the next the next largest, and the last gets the
smallest element). Since image data are normally stored by rows, this
corresponds to position within rows being the last item indexed.

If you do want to use Fortran ordering realize that there are two
approaches to consider: 1) accept that the first index is just not the
most rapidly changing in memory and have all your I/O routines reorder
your data when going from memory to disk or vice versa, or 2) use
numpy's mechanism for mapping the first index to the most rapidly
varying data. We recommend the former if possible. The disadvantage of
the latter is that many of numpy's functions will yield arrays without
Fortran ordering unless you are careful to use the 'order' keyword.
Doing this would be highly inconvenient. Otherwise we recommend simply
learning to reverse the usual order of indices when accessing elements
of an array. Granted, it goes against the grain, but it is more in line
with Python semantics and the natural order of the data.
"""
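# A small demonstration of the metadata-only operations described above:
# transposing swaps stride information while the data buffer stays put.
# (The stride values shown assume 8-byte integers; they differ on
# platforms with other default integer sizes.)

import numpy as np

a = np.arange(12).reshape(3, 4)
b = a.T                        # new ndarray object, same data buffer

print(a.strides)               # (32, 8) for 8-byte elements
print(b.strides)               # (8, 32) -- only the metadata changed
print(b.base is a)             # True: b is a view of a

b[0, 0] = 99                   # writes through the shared buffer
print(a[0, 0])                 # 99

c = a.T.copy()                 # .copy() forces an independent buffer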
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingServer and ThreadingServer mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the mix-in request handler classes StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to reqd all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. 
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000 NAME <EMAIL>

  Example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  The entry is processed by a RequestHandlerClass.
"""
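# A minimal sketch of the pattern described above, using the modern
# module name `socketserver' (`SocketServer' on Python 2): a threading
# TCP echo service built from a mix-in, a server class, and a handler.

import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is the connected TCP socket for stream servers.
        data = self.request.recv(1024)
        self.request.sendall(data)

class ThreadingTCPEcho(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass  # mix-in first, so it overrides TCPServer's request handling

if __name__ == "__main__":
    server = ThreadingTCPEcho(("127.0.0.1", 9999), EchoHandler)
    server.serve_forever()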
#
# ElementTree
# $Id: ElementTree.py 3276 2007-09-12 06:52:30Z USERNAME $
#
# light-weight XML support for Python 2.2 and later.
#
# history:
# 2001-10-20 fl   created (from various sources)
# 2001-11-01 fl   return root from parse method
# 2002-02-16 fl   sort attributes in lexical order
# 2002-04-06 fl   TreeBuilder refactoring, added PythonDoc markup
# 2002-05-01 fl   finished TreeBuilder refactoring
# 2002-07-14 fl   added basic namespace support to ElementTree.write
# 2002-07-25 fl   added QName attribute support
# 2002-10-20 fl   fixed encoding in write
# 2002-11-24 fl   changed default encoding to ascii; fixed attribute encoding
# 2002-11-27 fl   accept file objects or file names for parse/write
# 2002-12-04 fl   moved XMLTreeBuilder back to this module
# 2003-01-11 fl   fixed entity encoding glitch for us-ascii
# 2003-02-13 fl   added XML literal factory
# 2003-02-21 fl   added ProcessingInstruction/PI factory
# 2003-05-11 fl   added tostring/fromstring helpers
# 2003-05-26 fl   added ElementPath support
# 2003-07-05 fl   added makeelement factory method
# 2003-07-28 fl   added more well-known namespace prefixes
# 2003-08-15 fl   fixed typo in ElementTree.findtext (Thomas NAME)
# 2003-09-04 fl   fall back on emulator if ElementPath is not installed
# 2003-10-31 fl   markup updates
# 2003-11-15 fl   fixed nested namespace bug
# 2004-03-28 fl   added XMLID helper
# 2004-06-02 fl   added default support to findtext
# 2004-06-08 fl   fixed encoding of non-ascii element/attribute names
# 2004-08-23 fl   take advantage of post-2.1 expat features
# 2004-09-03 fl   made Element class visible; removed factory
# 2005-02-01 fl   added iterparse implementation
# 2005-03-02 fl   fixed iterparse support for pre-2.2 versions
# 2005-11-12 fl   added tostringlist/fromstringlist helpers
# 2006-07-05 fl   merged in selected changes from the 1.3 sandbox
# 2006-07-05 fl   removed support for 2.1 and earlier
# 2007-06-21 fl   added deprecation/future warnings
# 2007-08-25 fl   added doctype hook, added parser version attribute etc
# 2007-08-26 fl   added new serializer code (better namespace handling, etc)
# 2007-08-27 fl   warn for broken /tag searches on tree level
# 2007-09-02 fl   added html/text methods to serializer (experimental)
# 2007-09-05 fl   added method argument to tostring/tostringlist
# 2007-09-06 fl   improved error handling
#
# Copyright (c) 1999-2007 by NAME. All rights reserved.
#
# EMAIL
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The ElementTree toolkit is
#
# Copyright (c) 1999-2007 by NAME.
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS.
IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
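# A brief usage sketch of the tostring/fromstring helpers mentioned in
# the history above, using the toolkit's standard-library packaging (an
# assumption; adjust the import if you use a standalone distribution):

from xml.etree import ElementTree as ET

root = ET.fromstring("<root><item name='a'/><item name='b'/></root>")
for item in root.findall("item"):       # ElementPath-style search
    print(item.get("name"))             # -> a, b
print(ET.tostring(root))                # serialize back to bytes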
"""automatically manage newlines in repository files This extension allows you to manage the type of line endings (CRLF or LF) that are used in the repository and in the local working directory. That way you can get CRLF line endings on Windows and LF on Unix/Mac, thereby letting everybody use their OS native line endings. The extension reads its configuration from a versioned ``.hgeol`` configuration file found in the root of the working copy. The ``.hgeol`` file use the same syntax as all other Mercurial configuration files. It uses two sections, ``[patterns]`` and ``[repository]``. The ``[patterns]`` section specifies how line endings should be converted between the working copy and the repository. The format is specified by a file pattern. The first match is used, so put more specific patterns first. The available line endings are ``LF``, ``CRLF``, and ``BIN``. Files with the declared format of ``CRLF`` or ``LF`` are always checked out and stored in the repository in that format and files declared to be binary (``BIN``) are left unchanged. Additionally, ``native`` is an alias for checking out in the platform's default line ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's default behaviour; it is only needed if you need to override a later, more general pattern. The optional ``[repository]`` section specifies the line endings to use for files stored in the repository. It has a single setting, ``native``, which determines the storage line endings for files declared as ``native`` in the ``[patterns]`` section. It can be set to ``LF`` or ``CRLF``. The default is ``LF``. For example, this means that on Windows, files configured as ``native`` (``CRLF`` by default) will be converted to ``LF`` when stored in the repository. Files declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section are always stored as-is in the repository. Example versioned ``.hgeol`` file:: [patterns] **.py = native **.vcproj = CRLF **.txt = native Makefile = LF **.jpg = BIN [repository] native = LF .. note:: The rules will first apply when files are touched in the working copy, e.g. by updating to null and back to tip to touch all files. The extension uses an optional ``[eol]`` section read from both the normal Mercurial configuration files and the ``.hgeol`` file, with the latter overriding the former. You can use that section to control the overall behavior. There are three settings: - ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or ``CRLF`` to override the default interpretation of ``native`` for checkout. This can be used with :hg:`archive` on Unix, say, to generate an archive where files have line endings for Windows. - ``eol.only-consistent`` (default True) can be set to False to make the extension convert files with inconsistent EOLs. Inconsistent means that there is both ``CRLF`` and ``LF`` present in the file. Such files are normally not touched under the assumption that they have mixed EOLs on purpose. - ``eol.fix-trailing-newline`` (default False) can be set to True to ensure that converted files end with a EOL character (either ``\\n`` or ``\\r\\n`` as per the configured patterns). The extension provides ``cleverencode:`` and ``cleverdecode:`` filters like the deprecated win32text extension does. This means that you can disable win32text and enable eol and your filters will still work. You only need to these filters until you have prepared a ``.hgeol`` file. 
The ``win32text.forbid*`` hooks provided by the win32text extension
have been unified into a single hook named ``eol.checkheadshook``.  The
hook will look up the expected line endings from the ``.hgeol`` file,
which means you must migrate to a ``.hgeol`` file first before using
the hook.  ``eol.checkheadshook`` only checks heads; intermediate
invalid revisions will be pushed.  To forbid them completely, use the
``eol.checkallhook`` hook.  These hooks are best used as
``pretxnchangegroup`` hooks.

See :hg:`help patterns` for more information about the glob patterns
used.
"""
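# A minimal sketch of wiring the extension and hook up in a repository's
# hgrc, assuming the stock module path ``hgext.eol`` (an assumption;
# check :hg:`help config.hooks` for your installation):
#
#   [extensions]
#   eol =
#
#   [hooks]
#   pretxnchangegroup = python:hgext.eol.checkheadshook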
""" N-dim array module for SymPy. Four classes are provided to handle N-dim arrays, given by the combinations dense/sparse (i.e. whether to store all elements or only the non-zero ones in memory) and mutable/immutable (immutable classes are SymPy objects, but cannot change after they have been created). Examples ======== The following examples show the usage of ``Array``. This is an abbreviation for ``ImmutableDenseNDimArray``, that is an immutable and dense N-dim array, the other classes are analogous. For mutable classes it is also possible to change element values after the object has been constructed. Array construction can detect the shape of nested lists and tuples: >>> from sympy.tensor.array import Array >>> a1 = Array([[1, 2], [3, 4], [5, 6]]) >>> a1 [[1, 2], [3, 4], [5, 6]] >>> a1.shape (3, 2) >>> a1.rank() 2 >>> from sympy.abc import x, y, z >>> a2 = Array([[[x, y], [z, x*z]], [[1, x*y], [1/x, x/y]]]) >>> a2 [[[x, y], [z, x*z]], [[1, x*y], [1/x, x/y]]] >>> a2.shape (2, 2, 2) >>> a2.rank() 3 Otherwise one could pass a 1-dim array followed by a shape tuple: >>> m1 = Array(range(12), (3, 4)) >>> m1 [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]] >>> m2 = Array(range(12), (3, 2, 2)) >>> m2 [[[0, 1], [2, 3]], [[4, 5], [6, 7]], [[8, 9], [10, 11]]] >>> m2[1,1,1] 7 >>> m2.reshape(4, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]] Slice support: >>> m2[:, 1, 1] [3, 7, 11] Elementwise derivative: >>> from sympy.abc import x, y, z >>> m3 = Array([x**3, x*y, z]) >>> m3.diff(x) [3*x**2, y, 0] >>> m3.diff(z) [0, 0, 1] Multiplication with other SymPy expressions is applied elementwisely: >>> (1+x)*m3 [x**3*(x + 1), x*y*(x + 1), z*(x + 1)] To apply a function to each element of the N-dim array, use ``applyfunc``: >>> m3.applyfunc(lambda x: x/2) [x**3/2, x*y/2, z/2] N-dim arrays can be converted to nested lists by the ``tolist()`` method: >>> m2.tolist() [[[0, 1], [2, 3]], [[4, 5], [6, 7]], [[8, 9], [10, 11]]] >>> isinstance(m2.tolist(), list) True If the rank is 2, it is possible to convert them to matrices with ``tomatrix()``: >>> m1.tomatrix() Matrix([ [0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]) Products and contractions ------------------------- Tensor product between arrays `A_{i_1,\ldots,i_n}` and `B_{j_1,\ldots,j_m}` creates the combined array `P = A \otimes B` defined as `P_{i_1,\ldots,i_n,j_1,\ldots,j_m} := A_{i_1,\ldots,i_n}\cdot B_{j_1,\ldots,j_m}.` It is available through ``tensorproduct(...)``: >>> from sympy.tensor.array import Array, tensorproduct >>> from sympy.abc import x,y,z,t >>> A = Array([x, y, z, t]) >>> B = Array([1, 2, 3, 4]) >>> tensorproduct(A, B) [[x, 2*x, 3*x, 4*x], [y, 2*y, 3*y, 4*y], [z, 2*z, 3*z, 4*z], [t, 2*t, 3*t, 4*t]] Tensor product between a rank-1 array and a matrix creates a rank-3 array: >>> from sympy import eye >>> p1 = tensorproduct(A, eye(4)) >>> p1 [[[x, 0, 0, 0], [0, x, 0, 0], [0, 0, x, 0], [0, 0, 0, x]], [[y, 0, 0, 0], [0, y, 0, 0], [0, 0, y, 0], [0, 0, 0, y]], [[z, 0, 0, 0], [0, z, 0, 0], [0, 0, z, 0], [0, 0, 0, z]], [[t, 0, 0, 0], [0, t, 0, 0], [0, 0, t, 0], [0, 0, 0, t]]] Now, to get back `A_0 \otimes \mathbf{1}` one can access `p_{0,m,n}` by slicing: >>> p1[0,:,:] [[x, 0, 0, 0], [0, x, 0, 0], [0, 0, x, 0], [0, 0, 0, x]] Tensor contraction sums over the specified axes, for example contracting positions `a` and `b` means `A_{i_1,\ldots,i_a,\ldots,i_b,\ldots,i_n} \implies \sum_k A_{i_1,\ldots,k,\ldots,k,\ldots,i_n}` Remember that Python indexing is zero starting, to contract the a-th and b-th axes it is therefore necessary to specify `a-1` and 
`b-1` >>> from sympy.tensor.array import tensorcontraction >>> C = Array([[x, y], [z, t]]) The matrix trace is equivalent to the contraction of a rank-2 array: `A_{m,n} \implies \sum_k A_{k,k}` >>> tensorcontraction(C, (0, 1)) t + x Matrix product is equivalent to a tensor product of two rank-2 arrays, followed by a contraction of the 2nd and 3rd axes (in Python indexing axes number 1, 2). `A_{m,n}\cdot B_{i,j} \implies \sum_k A_{m, k}\cdot B_{k, j}` >>> D = Array([[2, 1], [0, -1]]) >>> tensorcontraction(tensorproduct(C, D), (1, 2)) [[2*x, x - y], [2*z, -t + z]] One may verify that the matrix product is equivalent: >>> from sympy import Matrix >>> Matrix([[x, y], [z, t]])*Matrix([[2, 1], [0, -1]]) Matrix([ [2*x, x - y], [2*z, -t + z]]) or equivalently >>> C.tomatrix()*D.tomatrix() Matrix([ [2*x, x - y], [2*z, -t + z]]) """
# -*- coding: utf-8 -*- # This file is part of ranger, the console file manager. # This configuration file is licensed under the same terms as ranger. # =================================================================== # # NOTE: If you copied this file to /etc/ranger/commands_full.py or # ~/.config/ranger/commands_full.py, then it will NOT be loaded by ranger, # and only serve as a reference. # # =================================================================== # This file contains ranger's commands. # It's all in python; lines beginning with # are comments. # # Note that additional commands are automatically generated from the methods # of the class ranger.core.actions.Actions. # # You can customize commands in the files /etc/ranger/commands.py (system-wide) # and ~/.config/ranger/commands.py (per user). # They have the same syntax as this file. In fact, you can just copy this # file to ~/.config/ranger/commands_full.py with # `ranger --copy-config=commands_full' and make your modifications, don't # forget to rename it to commands.py. You can also use # `ranger --copy-config=commands' to copy a short sample commands.py that # has everything you need to get started. # But make sure you update your configs when you update ranger. # # =================================================================== # Every class defined here which is a subclass of `Command' will be used as a # command in ranger. Several methods are defined to interface with ranger: # execute(): called when the command is executed. # cancel(): called when closing the console. # tab(tabnum): called when <TAB> is pressed. # quick(): called after each keypress. # # tab() argument tabnum is 1 for <TAB> and -1 for <S-TAB> by default # # The return values for tab() can be either: # None: There is no tab completion # A string: Change the console to this string # A list/tuple/generator: cycle through every item in it # # The return value for quick() can be: # False: Nothing happens # True: Execute the command afterwards # # The return value for execute() and cancel() doesn't matter. # # =================================================================== # Commands have certain attributes and methods that facilitate parsing of # the arguments: # # self.line: The whole line that was written in the console. # self.args: A list of all (space-separated) arguments to the command. # self.quantifier: If this command was mapped to the key "X" and # the user pressed 6X, self.quantifier will be 6. # self.arg(n): The n-th argument, or an empty string if it doesn't exist. # self.rest(n): The n-th argument plus everything that followed. For example, # if the command was "search foo bar a b c", rest(2) will be "bar a b c" # self.start(n): Anything before the n-th argument. For example, if the # command was "search foo bar a b c", start(2) will be "search foo" # # =================================================================== # And this is a little reference for common ranger functions and objects: # # self.fm: A reference to the "fm" object which contains most information # about ranger. # self.fm.notify(string): Print the given string on the screen. # self.fm.notify(string, bad=True): Print the given string in RED. # self.fm.reload_cwd(): Reload the current working directory. # self.fm.thisdir: The current working directory. (A File object.) # self.fm.thisfile: The current file. (A File object too.) # self.fm.thistab.get_selection(): A list of all selected files. 
# self.fm.execute_console(string): Execute the string as a ranger command.
# self.fm.open_console(string): Open the console with the given string
#      already typed in for you.
# self.fm.move(direction): Moves the cursor in the given direction, which
#      can be something like down=3, up=5, right=1, left=1, to=6, ...
#
# File objects (for example self.fm.thisfile) have these useful attributes
# and methods:
#
# tfile.path: The path to the file.
# tfile.basename: The base name only.
# tfile.load_content(): Force a loading of the directory's content (which
#      obviously works with directories only)
# tfile.is_directory: True/False depending on whether it's a directory.
#
# For advanced commands it is unavoidable to dive a bit into the source
# code of ranger.
# ===================================================================
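# As a concrete illustration, here is a minimal command class written
# against the interface described above; the command name and behavior
# are hypothetical examples only:

from ranger.api.commands import Command

class show_selection(Command):
    """:show_selection

    Notify with the paths of all currently selected files.
    """

    def execute(self):
        paths = [f.path for f in self.fm.thistab.get_selection()]
        self.fm.notify(", ".join(paths))

    def tab(self, tabnum):
        return None  # no tab completion for this command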
#!/usr/bin/env python # -*- coding: utf-8 -*- # ***********************IMPORTANT NMAP LICENSE TERMS************************ # * * # * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is * # * also a registered trademark of Insecure.Com LLC. This program is free * # * software; you may redistribute and/or modify it under the terms of the * # * GNU General Public License as published by the Free Software * # * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS * # * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, * # * modify, and redistribute this software under certain conditions. If * # * you wish to embed Nmap technology into proprietary software, we sell * # * alternative licenses (contact EMAIL Dozens of software * # * vendors already license Nmap technology such as host discovery, port * # * scanning, OS detection, version detection, and the Nmap Scripting * # * Engine. * # * * # * Note that the GPL places important restrictions on "derivative works", * # * yet it does not provide a detailed definition of that term. To avoid * # * misunderstandings, we interpret that term as broadly as copyright law * # * allows. For example, we consider an application to constitute a * # * derivative work for the purpose of this license if it does any of the * # * following with any software or content covered by this license * # * ("Covered Software"): * # * * # * o Integrates source code from Covered Software. * # * * # * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db * # * or nmap-service-probes. * # * * # * o Is designed specifically to execute Covered Software and parse the * # * results (as opposed to typical shell or execution-menu apps, which will * # * execute anything you tell them to). * # * * # * o Includes Covered Software in a proprietary executable installer. The * # * installers produced by InstallShield are an example of this. Including * # * Nmap with other software in compressed or archival form does not * # * trigger this provision, provided appropriate open source decompression * # * or de-archiving software is widely available for no charge. For the * # * purposes of this license, an installer is considered to include Covered * # * Software even if it actually retrieves a copy of Covered Software from * # * another source during runtime (such as by downloading it from the * # * Internet). * # * * # * o Links (statically or dynamically) to a library which does any of the * # * above. * # * * # * o Executes a helper program, module, or script to do any of the above. * # * * # * This list is not exclusive, but is meant to clarify our interpretation * # * of derived works with some common examples. Other people may interpret * # * the plain GPL differently, so we consider this a special exception to * # * the GPL that we apply to Covered Software. Works which meet any of * # * these conditions must conform to all of the terms of this license, * # * particularly including the GPL Section 3 requirements of providing * # * source code and allowing free redistribution of the work as a whole. * # * * # * As another special exception to the GPL terms, Insecure.Com LLC grants * # * permission to link the code of this program with any version of the * # * OpenSSL library which is distributed under a license identical to that * # * listed in the included docs/licenses/OpenSSL.txt file, and distribute * # * linked combinations including the two. 
* # * * # * Any redistribution of Covered Software, including any derived works, * # * must obey and carry forward all of the terms of this license, including * # * obeying all GPL rules and restrictions. For example, source code of * # * the whole work must be provided and free redistribution must be * # * allowed. All GPL references to "this License", are to be treated as * # * including the terms and conditions of this license text as well. * # * * # * Because this license imposes special exceptions to the GPL, Covered * # * Work may not be combined (even as part of a larger work) with plain GPL * # * software. The terms, conditions, and exceptions of this license must * # * be included as well. This license is incompatible with some other open * # * source licenses as well. In some cases we can relicense portions of * # * Nmap or grant special permissions to use it in other open source * # * software. Please contact EMAIL with any such requests. * # * Similarly, we don't incorporate incompatible open source software into * # * Covered Software without special permission from the copyright holders. * # * * # * If you have any questions about the licensing restrictions on using * # * Nmap in other works, are happy to help. As mentioned above, we also * # * offer alternative license to integrate Nmap into proprietary * # * applications and appliances. These contracts have been sold to dozens * # * of software vendors, and generally include a perpetual license as well * # * as providing for priority support and updates. They also fund the * # * continued development of Nmap. Please email EMAIL for further * # * information. * # * * # * If you have received a written license agreement or contract for * # * Covered Software stating terms other than these, you may choose to use * # * and redistribute Covered Software under those terms instead of these. * # * * # * Source is provided to this software because we believe users have a * # * right to know exactly what a program is going to do before they run it. * # * This also allows you to audit the software for security holes (none * # * have been found so far). * # * * # * Source code also allows you to port Nmap to new platforms, fix bugs, * # * and add new features. You are highly encouraged to send your changes * # * to the EMAIL mailing list for possible incorporation into the * # * main distribution. By sending these changes to Fyodor or one of the * # * Insecure.Org development mailing lists, or checking them into the Nmap * # * source code repository, it is understood (unless you specify otherwise) * # * that you are offering the Nmap Project (Insecure.Com LLC) the * # * unlimited, non-exclusive right to reuse, modify, and relicense the * # * code. Nmap will always be available Open Source, but this is important * # * because the inability to relicense code has caused devastating problems * # * for other Free Software projects (such as KDE and NASM). We also * # * occasionally relicense the code to third parties as discussed above. * # * If you wish to specify special license conditions of your * # * contributions, just say so when you send them. * # * * # * This program is distributed in the hope that it will be useful, but * # * WITHOUT ANY WARRANTY; without even the implied warranty of * # * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the Nmap * # * license file for more details (it's in a COPYING file included with * # * Nmap, and also available from https://svn.nmap.org/nmap/COPYING * # * * # ***************************************************************************/
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# adapted from http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# -thepaul

# This is an implementation of wcwidth() and wcswidth() (defined in
# IEEE Std 1003.1-2001) for Unicode.
#
# http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
# http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
#
# In fixed-width output devices, Latin characters all occupy a single
# "cell" position of equal width, whereas ideographic CJK characters
# occupy two such cells.  Interoperability between terminal-line
# applications and (teletype-style) character terminals using the
# UTF-8 encoding requires agreement on which character should advance
# the cursor by how many cell positions.  No established formal
# standards exist at present on which Unicode character shall occupy
# how many cell positions on character terminals.  These routines are
# a first attempt of defining such behavior based on simple rules
# applied to data provided by the Unicode Consortium.
#
# For some graphical characters, the Unicode standard explicitly
# defines a character-cell width via the definition of the East Asian
# FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
# In all these cases, there is no ambiguity about which width a
# terminal shall use.  For characters in the East Asian Ambiguous (A)
# class, the width choice depends purely on a preference of backward
# compatibility with either historic CJK or Western practice.
# Choosing single-width for these characters is easy to justify as
# the appropriate long-term solution, as the CJK practice of
# displaying these characters as double-width comes from historic
# implementation simplicity (8-bit encoded characters were displayed
# single-width and 16-bit ones double-width, even for Greek,
# Cyrillic, etc.) and not any typographic considerations.
#
# Much less clear is the choice of width for the Not East Asian
# (Neutral) class.  Existing practice does not dictate a width for any
# of these characters.  It would nevertheless make sense
# typographically to allocate two character cells to characters such
# as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
# represented adequately with a single-width glyph.  The following
# routines at present merely assign a single-cell width to all
# neutral characters, in the interest of simplicity.  This is not
# entirely satisfactory and should be reconsidered before
# establishing a formal standard in this area.  At the moment, the
# decision which Not East Asian (Neutral) characters should be
# represented by double-width glyphs cannot yet be answered by
# applying a simple rule from the Unicode database content.
Setting # up a proper standard for the behavior of UTF-8 character terminals # will require a careful analysis not only of each Unicode character, # but also of each presentation form, something the author of these # routines has so far avoided doing. # # http://www.unicode.org/unicode/reports/tr11/ # # NAME -- 2007-05-26 (Unicode 5.0) # # Permission to use, copy, modify, and distribute this software # for any purpose and without fee is hereby granted. The author # disclaims all warranties with regard to this software. # # Latest C version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c # auxiliary function for binary search in interval table
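A minimal Python sketch of that auxiliary function, ported from the C original; the interval table is assumed (as in wcwidth.c) to be a sorted sequence of non-overlapping (first, last) codepoint pairs, and the function name here is our own:

def _bisearch(ucs, table):
    # Return 1 if codepoint ucs falls inside one of the sorted
    # (first, last) intervals in table, else 0.
    lbound = 0
    ubound = len(table) - 1
    if ucs < table[0][0] or ucs > table[ubound][1]:
        return 0
    while ubound >= lbound:
        mid = (lbound + ubound) // 2
        if ucs > table[mid][1]:
            lbound = mid + 1
        elif ucs < table[mid][0]:
            ubound = mid - 1
        else:
            return 1
    return 0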
#!/usr/bin/env python # -*- coding: utf-8 -*- # ***********************IMPORTANT NMAP LICENSE TERMS************************ # * * # * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is * # * also a registered trademark of Insecure.Com LLC. This program is free * # * software; you may redistribute and/or modify it under the terms of the * # * GNU General Public License as published by the Free Software * # * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS * # * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, * # * modify, and redistribute this software under certain conditions. If * # * you wish to embed Nmap technology into proprietary software, we sell * # * alternative licenses (contact EMAIL). Dozens of software * # * vendors already license Nmap technology such as host discovery, port * # * scanning, OS detection, version detection, and the Nmap Scripting * # * Engine. * # * * # * Note that the GPL places important restrictions on "derivative works", * # * yet it does not provide a detailed definition of that term. To avoid * # * misunderstandings, we interpret that term as broadly as copyright law * # * allows. For example, we consider an application to constitute a * # * derivative work for the purpose of this license if it does any of the * # * following with any software or content covered by this license * # * ("Covered Software"): * # * * # * o Integrates source code from Covered Software. * # * * # * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db * # * or nmap-service-probes. * # * * # * o Is designed specifically to execute Covered Software and parse the * # * results (as opposed to typical shell or execution-menu apps, which will * # * execute anything you tell them to). * # * * # * o Includes Covered Software in a proprietary executable installer. The * # * installers produced by InstallShield are an example of this. Including * # * Nmap with other software in compressed or archival form does not * # * trigger this provision, provided appropriate open source decompression * # * or de-archiving software is widely available for no charge. For the * # * purposes of this license, an installer is considered to include Covered * # * Software even if it actually retrieves a copy of Covered Software from * # * another source during runtime (such as by downloading it from the * # * Internet). * # * * # * o Links (statically or dynamically) to a library which does any of the * # * above. * # * * # * o Executes a helper program, module, or script to do any of the above. * # * * # * This list is not exclusive, but is meant to clarify our interpretation * # * of derived works with some common examples. Other people may interpret * # * the plain GPL differently, so we consider this a special exception to * # * the GPL that we apply to Covered Software. Works which meet any of * # * these conditions must conform to all of the terms of this license, * # * particularly including the GPL Section 3 requirements of providing * # * source code and allowing free redistribution of the work as a whole. * # * * # * As another special exception to the GPL terms, Insecure.Com LLC grants * # * permission to link the code of this program with any version of the * # * OpenSSL library which is distributed under a license identical to that * # * listed in the included docs/licenses/OpenSSL.txt file, and distribute * # * linked combinations including the two.
* # * * # * Any redistribution of Covered Software, including any derived works, * # * must obey and carry forward all of the terms of this license, including * # * obeying all GPL rules and restrictions. For example, source code of * # * the whole work must be provided and free redistribution must be * # * allowed. All GPL references to "this License" are to be treated as * # * including the terms and conditions of this license text as well. * # * * # * Because this license imposes special exceptions to the GPL, Covered * # * Work may not be combined (even as part of a larger work) with plain GPL * # * software. The terms, conditions, and exceptions of this license must * # * be included as well. This license is incompatible with some other open * # * source licenses as well. In some cases we can relicense portions of * # * Nmap or grant special permissions to use it in other open source * # * software. Please contact EMAIL with any such requests. * # * Similarly, we don't incorporate incompatible open source software into * # * Covered Software without special permission from the copyright holders. * # * * # * If you have any questions about the licensing restrictions on using * # * Nmap in other works, we are happy to help. As mentioned above, we also * # * offer an alternative license to integrate Nmap into proprietary * # * applications and appliances. These contracts have been sold to dozens * # * of software vendors, and generally include a perpetual license as well * # * as providing for priority support and updates. They also fund the * # * continued development of Nmap. Please email EMAIL for further * # * information. * # * * # * If you have received a written license agreement or contract for * # * Covered Software stating terms other than these, you may choose to use * # * and redistribute Covered Software under those terms instead of these. * # * * # * Source is provided to this software because we believe users have a * # * right to know exactly what a program is going to do before they run it. * # * This also allows you to audit the software for security holes (none * # * have been found so far). * # * * # * Source code also allows you to port Nmap to new platforms, fix bugs, * # * and add new features. You are highly encouraged to send your changes * # * to the EMAIL mailing list for possible incorporation into the * # * main distribution. By sending these changes to Fyodor or one of the * # * Insecure.Org development mailing lists, or checking them into the Nmap * # * source code repository, it is understood (unless you specify otherwise) * # * that you are offering the Nmap Project (Insecure.Com LLC) the * # * unlimited, non-exclusive right to reuse, modify, and relicense the * # * code. Nmap will always be available Open Source, but this is important * # * because the inability to relicense code has caused devastating problems * # * for other Free Software projects (such as KDE and NASM). We also * # * occasionally relicense the code to third parties as discussed above. * # * If you wish to specify special license conditions of your * # * contributions, just say so when you send them. * # * * # * This program is distributed in the hope that it will be useful, but * # * WITHOUT ANY WARRANTY; without even the implied warranty of * # * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the Nmap * # * license file for more details (it's in a COPYING file included with * # * Nmap, and also available from https://svn.nmap.org/nmap/COPYING * # * * # ***************************************************************************/
""" ======================== Broadcasting over arrays ======================== The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where ``b`` was an array. We can think of the scalar ``b`` being *stretched* during the arithmetic operation into an array with the same shape as ``a``. The new elements in ``b`` are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (``b`` is a scalar rather than an array). General Broadcasting Rules ========================== When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when 1) they are equal, or 2) one of them is 1 If these conditions are not met, a ``ValueError: frames are not aligned`` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same *number* of dimensions. For example, if you have a ``256x256x3`` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 When either of the dimensions compared is one, the larger of the two is used. In other words, the smaller of two axes is stretched or "copied" to match the other. 
In the following example, both the ``A`` and ``B`` arrays have axes with length one that are expanded to a larger size during the broadcast operation:: A (4d array): 8 x 1 x 6 x 1 B (3d array): 7 x 1 x 5 Result (4d array): 8 x 7 x 6 x 5 Here are some more examples:: A (2d array): 5 x 4 B (1d array): 1 Result (2d array): 5 x 4 A (2d array): 5 x 4 B (1d array): 4 Result (2d array): 5 x 4 A (3d array): 15 x 3 x 5 B (3d array): 15 x 1 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 1 Result (3d array): 15 x 3 x 5 Here are examples of shapes that do not broadcast:: A (1d array): 3 B (1d array): 4 # trailing dimensions do not match A (2d array): 2 x 1 B (3d array): 8 x 4 x 3 # second from last dimensions mismatched An example of broadcasting in practice:: >>> x = np.arange(4) >>> xx = x.reshape(4,1) >>> y = np.ones(5) >>> z = np.ones((3,4)) >>> x.shape (4,) >>> y.shape (5,) >>> x + y <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape >>> xx.shape (4, 1) >>> y.shape (5,) >>> (xx + y).shape (4, 5) >>> xx + y array([[ 1., 1., 1., 1., 1.], [ 2., 2., 2., 2., 2.], [ 3., 3., 3., 3., 3.], [ 4., 4., 4., 4., 4.]]) >>> x.shape (4,) >>> z.shape (3, 4) >>> (x + z).shape (3, 4) >>> x + z array([[ 1., 2., 3., 4.], [ 1., 2., 3., 4.], [ 1., 2., 3., 4.]]) Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays:: >>> a = np.array([0.0, 10.0, 20.0, 30.0]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a[:, np.newaxis] + b array([[ 1., 2., 3.], [ 11., 12., 13.], [ 21., 22., 23.], [ 31., 32., 33.]]) Here the ``newaxis`` index operator inserts a new axis into ``a``, making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array with ``b``, which has shape ``(3,)``, yields a ``4x3`` array. See `this article <http://www.scipy.org/EricsBroadcastingDoc>`_ for illustrations of broadcasting concepts. """
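As a compact check of the image-scaling case described above, a doctest-style sketch (the array names and contents are ours):

>>> import numpy as np
>>> image = np.ones((256, 256, 3))       # RGB image
>>> scale = np.array([0.5, 1.0, 2.0])    # one factor per color channel
>>> (image * scale).shape                # scale broadcasts over the leading axes
(256, 256, 3)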
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods: aiff() -- create an AIFF file (AIFF-C default) aifc() -- create an AIFF-C file setnchannels(n) -- set the number of channels setsampwidth(n) -- set the sample width setframerate(n) -- set the frame rate setnframes(n) -- set the number of frames setcomptype(type, name) -- set the compression type and the human-readable compression type setparams(tuple) -- set all parameters at once setmark(id, pos, name) -- add specified mark to the list of marks tell() -- return current position in output file (useful in combination with setmark()) writeframesraw(data) -- write audio frames without patching up the file header writeframes(data) -- write audio frames and patch up the file header close() -- patch up the file header and close the output file You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. It is best to first set all parameters, except possibly the compression type, and then write audio frames using writeframesraw. When all frames have been written, either call writeframes('') or close() to patch up the sizes in the header. Marks can be added anytime. If there are any marks, you must call close() after all frames have been written. The close() method is called automatically when the class instance is destroyed. When a file is opened with the extension '.aiff', an AIFF file is written, otherwise an AIFF-C file is written. This default can be changed by calling aiff() or aifc() before the first writeframes or writeframesraw. """
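A minimal usage sketch of the reading and writing interfaces just described; the file names are placeholders:

import aifc

f = aifc.open('input.aiff', 'r')
params = f.getparams()               # (nchannels, sampwidth, framerate, nframes, comptype, compname)
data = f.readframes(f.getnframes())  # raw sample bytes
f.close()

g = aifc.open('copy.aiff', 'w')
g.setparams(params)                  # set all parameters before the first write
g.writeframes(data)                  # writes the frames and patches up the header
g.close()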
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
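A minimal usage sketch of the methods described above; the section and option names are invented for illustration:

from configparser import ConfigParser

parser = ConfigParser()
parser.read_string("""
[server]
host = localhost
port = 8080
""")
print(parser.sections())                # ['server']
print(parser.get('server', 'host'))     # 'localhost'
print(parser.getint('server', 'port'))  # 8080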
# The following CSI codes supported by xterm are not tested. # Query ReGIS/Sixel attributes: CSI ? Pi ; Pa ; P vS # Initiate highlight mouse tracking: CSI Ps ; Ps ; Ps ; Ps ; Ps T # Media Copy (MC): CSI Pm i # Media Copy (MC, DEC-specific): CSI ? Pm i # Character Attributes (SGR): CSI Pm m # Disable modifiers: CSI > Ps n # Set pointer mode: CSI > Ps p # Load LEDs (DECLL): CSI Ps q # Set cursor style (DECSCUSR): CSI Ps SP q # Select character protection attribute (DECSCA): CSI Ps " q [This is already tested by DECSED and DECSEL] # Window manipulation: CSI Ps; Ps; Ps t # Reverse Attributes in Rectangular Area (DECRARA): CSI Pt ; Pl ; Pb ; Pr ; Ps $ t # Set warning bell volume (DECSWBV): CSI Ps SP t # Set margin-bell volume (DECSMBV): CSI Ps SP u # Enable Filter Rectangle (DECEFR): CSI Pt ; Pl ; Pb ; Pr ' w # Request Terminal Parameters (DECREQTPARM): CSI Ps x # Select Attribute Change Extent (DECSACE): CSI Ps * x # Request Checksum of Rectangular Area (DECRQCRA): CSI Pi ; Pg ; Pt ; Pl ; Pb ; Pr * y # Select Locator Events (DECSLE): CSI Pm ' { # Request Locator Position (DECRQLP): CSI Ps ' | # ESC SP L Set ANSI conformance level 1 (dpANS X3.134.1). # ESC SP M Set ANSI conformance level 2 (dpANS X3.134.1). # ESC SP N Set ANSI conformance level 3 (dpANS X3.134.1). # In xterm, all these do is fiddle with character sets, which are not testable. # ESC # 3 DEC double-height line, top half (DECDHL). # ESC # 4 DEC double-height line, bottom half (DECDHL). # ESC # 5 DEC single-width line (DECSWL). # ESC # 6 DEC double-width line (DECDWL). # Double-width affects display only and is generally not introspectable. Wrap # doesn't work so there's no way to tell where the cursor is visually. # ESC % @ Select default character set. That is ISO 8859-1 (ISO 2022). # ESC % G Select UTF-8 character set (ISO 2022). # ESC ( C Designate G0 Character Set (ISO 2022, VT100). # ESC ) C Designate G1 Character Set (ISO 2022, VT100). # ESC * C Designate G2 Character Set (ISO 2022, VT220). # ESC + C Designate G3 Character Set (ISO 2022, VT220). # ESC - C Designate G1 Character Set (VT300). # ESC . C Designate G2 Character Set (VT300). # ESC / C Designate G3 Character Set (VT300). # Character set stuff is not introspectable. # Shift in (SI): ^O # Shift out (SO): ^N # Space (SP): 0x20 # Tab (TAB): 0x09 [tested in HTS] # ESC = Application Keypad (DECKPAM). # ESC > Normal Keypad (DECKPNM). # ESC F Cursor to lower left corner of screen. This is enabled by the # hpLowerleftBugCompat resource. (Not worth testing as it's off by # default, and silly regardless) # ESC l Memory Lock (per HP terminals). Locks memory above the cursor. # ESC m Memory Unlock (per HP terminals). # ESC n Invoke the G2 Character Set as GL (LS2). # ESC o Invoke the G3 Character Set as GL (LS3). # ESC | Invoke the G3 Character Set as GR (LS3R). # ESC } Invoke the G2 Character Set as GR (LS2R). # ESC ~ Invoke the G1 Character Set as GR (LS1R). # DCS + p Pt ST Set Termcap/Terminfo Data # DCS + q Pt ST Request Termcap/Terminfo String # The following OSC commands are tested in xterm_winops and don't have their own test: # Ps = 0 -> Change Icon Name and Window Title to Pt. # Ps = 1 -> Change Icon Name to Pt. # Ps = 2 -> Change Window Title to Pt. # This test is too ill-defined and X-specific, and is not tested: # Ps = 3 -> Set X property on top-level window. Pt should be # in the form "prop=value", or just "prop" to delete the property # No introspection for whether special colors are enabled/disabled: # Ps = 6 ; c; f -> Enable/disable Special Color Number c.
The # second parameter tells xterm to enable the corresponding color # mode if nonzero, disable it if zero. # Off by default, obvious security issues: # Ps = 4 6 -> Change Log File to Pt. (This is normally # disabled by a compile-time option). # No introspection for fonts: # Ps = 5 0 -> Set Font to Pt. # No-op: # Ps = 5 1 -> reserved for Emacs shell.
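For orientation, each of these control functions is just a byte sequence written to the terminal. A sketch emitting DECSCUSR from the list above (whether anything visible happens depends entirely on the emulator):

import sys

CSI = '\x1b['                  # Control Sequence Introducer: ESC [
sys.stdout.write(CSI + '3 q')  # DECSCUSR: CSI Ps SP q; Ps = 3 selects a blinking underline cursor
sys.stdout.flush()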
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME) # 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME) # 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME) # 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME) # 2002-03-17 fl Avoid buffered read when possible (from NAME) # 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME) # 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME) # 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME (EMAIL), http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME # # By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission.
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
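As a quick orientation, a minimal client sketch against the interface described above; the URL and method name are placeholders, not part of this library:

import xmlrpclib  # 'xmlrpc.client' on Python 3

server = xmlrpclib.ServerProxy('http://localhost:8000/RPC2')
try:
    print(server.add(2, 3))  # marshalled into an XML-RPC method call
except xmlrpclib.Fault as err:
    print('Fault %s: %s' % (err.faultCode, err.faultString))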
""" ============ Array basics ============ Array types and conversions between types ========================================= NumPy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array's data-type. ========== ========================================================== Data type Description ========== ========================================================== bool_ Boolean (True or False) stored as a byte int_ Default integer type (same as C ``long``; normally either ``int64`` or ``int32``) intc Identical to C ``int`` (normally ``int32`` or ``int64``) intp Integer used for indexing (same as C ``ssize_t``; normally either ``int32`` or ``int64``) int8 Byte (-128 to 127) int16 Integer (-32768 to 32767) int32 Integer (-2147483648 to 2147483647) int64 Integer (-9223372036854775808 to 9223372036854775807) uint8 Unsigned integer (0 to 255) uint16 Unsigned integer (0 to 65535) uint32 Unsigned integer (0 to 4294967295) uint64 Unsigned integer (0 to 18446744073709551615) float_ Shorthand for ``float64``. float16 Half precision float: sign bit, 5 bits exponent, 10 bits mantissa float32 Single precision float: sign bit, 8 bits exponent, 23 bits mantissa float64 Double precision float: sign bit, 11 bits exponent, 52 bits mantissa complex_ Shorthand for ``complex128``. complex64 Complex number, represented by two 32-bit floats (real and imaginary components) complex128 Complex number, represented by two 64-bit floats (real and imaginary components) ========== ========================================================== Additionally to ``intc`` the platform dependent C integer types ``short``, ``long``, ``longlong`` and their unsigned versions are defined. NumPy numerical types are instances of ``dtype`` (data-type) objects, each having unique characteristics. Once you have imported NumPy using :: >>> import numpy as np the dtypes are available as ``np.bool_``, ``np.float32``, etc. Advanced types, not listed in the table above, are explored in section :ref:`structured_arrays`. There are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint) floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory). Some types, such as ``int`` and ``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit vs. 64-bit machines). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed. Data-types can be used as functions to convert python numbers to array scalars (see the array scalar section for an explanation), python sequences of numbers to arrays of that type, or as arguments to the dtype keyword that many numpy functions or methods accept. Some examples:: >>> import numpy as np >>> x = np.float32(1.0) >>> x 1.0 >>> y = np.int_([1,2,4]) >>> y array([1, 2, 4]) >>> z = np.arange(3, dtype=np.uint8) >>> z array([0, 1, 2], dtype=uint8) Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages such as Numeric. Some documentation may still refer to these, for example:: >>> np.array([1, 2, 3], dtype='f') array([ 1., 2., 3.], dtype=float32) We recommend using dtype objects instead. To convert the type of an array, use the .astype() method (preferred) or the type itself as a function. 
For example: :: >>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE array([ 0., 1., 2.]) >>> np.int8(z) array([0, 1, 2], dtype=int8) Note that, above, we use the *Python* float object as a dtype. NumPy knows that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``, that ``float`` is ``np.float_`` and ``complex`` is ``np.complex_``. The other data-types do not have Python equivalents. To determine the type of an array, look at the dtype attribute:: >>> z.dtype dtype('uint8') dtype objects also contain information about the type, such as its bit-width and its byte-order. The data type can also be used indirectly to query properties of the type, such as whether it is an integer:: >>> d = np.dtype(int) >>> d dtype('int32') >>> np.issubdtype(d, int) True >>> np.issubdtype(d, float) False Array Scalars ============= NumPy generally returns elements of arrays as array scalars (a scalar with an associated dtype). Array scalars differ from Python scalars, but for the most part they can be used interchangeably (the primary exception is for versions of Python older than v2.x, where integer array scalars cannot act as indices for lists and tuples). There are some exceptions, such as when code requires very specific attributes of a scalar or when it checks specifically whether a value is a Python scalar. Generally, problems are easily fixed by explicitly converting array scalars to Python scalars, using the corresponding Python type function (e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``). The primary advantage of using array scalars is that they preserve the array type (Python may not have a matching scalar type available, e.g. ``int16``). Therefore, the use of array scalars ensures identical behaviour between arrays and scalars, irrespective of whether the value is inside an array or not. NumPy scalars also have many of the same methods arrays do. Extended Precision ================== Python's floating-point numbers are usually 64-bit floating-point numbers, nearly equivalent to ``np.float64``. In some unusual situations it may be useful to use floating-point numbers with more precision. Whether this is possible in numpy depends on the hardware and on the development environment: specifically, x86 machines provide hardware floating-point with 80-bit precision, and while most C compilers provide this as their ``long double`` type, MSVC (standard for Windows builds) makes ``long double`` identical to ``double`` (64 bits). NumPy makes the compiler's ``long double`` available as ``np.longdouble`` (and ``np.clongdouble`` for the complex numbers). You can find out what your numpy provides with ``np.finfo(np.longdouble)``. NumPy does not provide a dtype with more precision than C ``long double``s; in particular, the 128-bit IEEE quad precision data type (FORTRAN's ``REAL*16``) is not available. For efficient memory alignment, ``np.longdouble`` is usually stored padded with zero bits, either to 96 or 128 bits. Which is more efficient depends on hardware and development environment; typically on 32-bit systems they are padded to 96 bits, while on 64-bit systems they are typically padded to 128 bits. ``np.longdouble`` is padded to the system default; ``np.float96`` and ``np.float128`` are provided for users who want specific padding. In spite of the names, ``np.float96`` and ``np.float128`` provide only as much precision as ``np.longdouble``, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.
Be warned that even if ``np.longdouble`` offers more precision than python ``float``, it is easy to lose that extra precision, since python often forces values to pass through ``float``. For example, the ``%`` formatting operator requires its arguments to be converted to standard python types, and it is therefore impossible to preserve extended precision even if many decimal places are requested. It can be useful to test your code with the value ``1 + np.finfo(np.longdouble).eps``. """
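A doctest-style sketch of that test, assuming a platform whose ``long double`` is wider than ``double`` (e.g. 80-bit x86); on standard Windows builds, where ``long double`` is just ``double``, the second comparison would be False as well:

>>> import numpy as np
>>> x = 1 + np.finfo(np.longdouble).eps
>>> x == 1                 # still distinguishable from 1 at extended precision
False
>>> np.float64(x) == 1.0   # the difference is lost once forced through float64
True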
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be > 1D atleast_2d Force arrays to be > 2D atleast_3d Force arrays to be > 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat Construct a Matrix bmat Build a Matrix from blocks ================ =================== Polynomials ----------- ================ =================== poly1d A one-dimensional polynomial class poly Return polynomial coefficients from roots roots Find roots of polynomial given coefficients polyint Integrate polynomial polyder Differentiate polynomial polyadd Add polynomials polysub Subtract polynomials polymul Multiply polynomials polydiv Divide polynomials polyval Evaluate polynomial at given argument ================ =================== Import Tricks ------------- ================ =================== ppimport Postpone module import until trying to use it ppimport_attr Postpone module import until trying to use its attribute ppresolve Import postponed module and return it. ================ =================== Machine Arithmetics ------------------- ================ =================== machar_single Single precision floating point arithmetic parameters machar_double Double precision floating point arithmetic parameters ================ =================== Threading Tricks ---------------- ================ =================== ParallelExec Execute commands in parallel thread. ================ =================== 1D Array Set Operations ----------------------- Set operations for 1D numeric arrays based on sort() function. ================ =================== ediff1d Array difference (auxiliary function). unique Unique elements of an array. intersect1d Intersection of 1D arrays with unique elements. setxor1d Set exclusive-or of 1D arrays with unique elements. in1d Test whether elements in a 1D array are also present in another array. union1d Union of 1D arrays with unique elements. setdiff1d Set difference of 1D arrays with unique elements. ================ =================== """
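A doctest-style sketch exercising a few of the polynomial helpers listed above:

>>> import numpy as np
>>> p = np.poly1d([1, -3, 2])  # represents x**2 - 3*x + 2
>>> p(3)                       # evaluate at x = 3 (same as polyval)
2
>>> p.r                        # roots
array([ 2.,  1.])
>>> np.polyder(p)              # derivative: 2*x - 3
poly1d([ 2, -3])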
# -*- coding: utf-8 -*- # Part of Odoo. See LICENSE file for full copyright and licensing details. # SKR04 # ===== # This module provides a German chart of accounts based on the SKR04. # Under the current settings the company is not liable for VAT, # i.e. by default there is no mapping of products and ledger accounts to # tax codes. # This default is very easy to change and as a rule requires an initial # assignment of tax codes to products and/or ledger accounts, or to partners. # The sales taxes (full rate, reduced rate and tax-exempt) # should be stored on the product master data (depending on the applicable # tax rules). The assignment is made on the Accounting tab # (category: sales tax). # The input taxes (full rate, reduced rate and tax-exempt) # should likewise be stored on the product master data (depending on # the applicable tax rules). The assignment is made on the Accounting tab # (category: input tax). # The taxes for imports from and exports to EU countries, as well as # for purchases from and sales to third countries, should be stored on the partner # (supplier/customer), depending on the supplier's/customer's country of origin. # The assignment on the customer takes precedence over # the assignment on products and overrides it in the individual case. # # To simplify tax reporting and postings for foreign transactions, # OpenERP allows a general mapping of tax reports and tax accounts # (e.g. mapping 'sales tax 19%' to 'tax-exempt imports from the EU') # so that this mapping can be assigned to the foreign partner (customer/supplier). # Posting a purchase invoice has the following effect: # the tax base (excluding tax) is reported under the # respective categories for the input tax base amount (e.g. input tax # base amount, full rate 19%). # The tax amount appears under the category 'input taxes' (e.g. input tax # 19%). Multidimensional hierarchies allow different positions # to be aggregated and then output in the form of a report. # # Posting a sales invoice has the following effect: # the tax base (excluding tax) is reported under the # respective categories for the sales tax base amount # (e.g. sales tax base amount, full rate 19%). # The tax amount appears under the category 'sales tax' # (e.g. sales tax 19%). Multidimensional hierarchies allow # different positions to be aggregated. # The assigned tax reports can be traced at the level of the individual # invoice (incoming and outgoing invoices), # and adjusted there if necessary. # Credit notes lead to a correction (counter-entry) # of the tax posting, in the form of a mirror-image posting.
# rgToolFactoryMultIn.py # see https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home # # copyright NAME (ross stop lazarus at gmail stop com) May 2012 # # all rights reserved # Licensed under the LGPL # suggestions for improvement and bug fixes welcome at https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home # # January 2015 # unified all setups by passing the script on the cl rather than via a PIPE - no need for treat_bash_special so removed # # in the process of building a complex tool # added ability to choose one of the current toolshed package_r or package_perl or package_python dependencies and source that package # add that package to tool_dependencies # Note that once the generated tool is loaded, it will have that package's env.sh loaded automagically so there is no # --envshpath in the parameters for the generated tool and it uses the system one which will be first on the adjusted path. # # sept 2014 added additional params from # https://bitbucket.org/mvdbeek/dockertoolfactory/src/d4863bcf7b521532c7e8c61b6333840ba5393f73/DockerToolFactory.py?at=default # passing them is complex # and they are restricted to NOT contain commas or double quotes to ensure that they can be safely passed together on # the toolfactory command line as a comma delimited double quoted string for parsing and passing to the script # see examples on this tool form # august 2014 # Allows arbitrary number of input files # NOTE positional parameters are now passed to script # and output (may be "None") is *before* arbitrary number of inputs # # march 2014 # had to remove dependencies because cross toolshed dependencies are not possible - can't pre-specify a toolshed url for graphicsmagick and ghostscript # grrrrr - night before a demo # added dependencies to a tool_dependencies.xml if html page generated so generated tool is properly portable # # added ghostscript and graphicsmagick as dependencies # fixed a weird problem where gs was trying to use the new_files_path from universe (database/tmp) as ./database/tmp # errors ensued # # august 2013 # found a problem with GS if $TMP or $TEMP missing - now inject /tmp and warn # # july 2013 # added ability to combine images and individual log files into html output # just make sure there's a log file foo.log and it will be output # together with all images named like "foo_*.pdf # otherwise old format for html # # January 2013 # problem pointed out by NAME; added escaping for <>$ - thought I did that ages ago... # # August 11 2012 # changed to use shell=False and cl as a sequence # This is a Galaxy tool factory for simple scripts in python, R or whatever ails ye. # It also serves as the wrapper for the new tool. # # you paste and run your script # Only works for simple scripts that read one input from the history. # Optionally can write one new history dataset, # and optionally collect any number of outputs into links on an autogenerated HTML page. # DO NOT install on a public or important site - please. # installed generated tools are fine if the script is safe. # They just run normally and their user cannot do anything unusually insecure # but please, practice safe toolshed. # Read the fucking code before you install any tool # especially this one # After you get the script working on some test data, you can # optionally generate a toolshed compatible gzip file # containing your script safely wrapped as an ordinary Galaxy script in your local toolshed for # safe and largely automated installation in a production Galaxy.
# If you opt for an HTML output, you get all the script outputs arranged # as a single Html history item - all output files are linked, thumbnails for all the pdfs. # Ugly but really inexpensive. # # Patches appreciated please. # # # long route to June 2012 product # Behold the awesome power of Galaxy and the toolshed with the tool factory to bind them # derived from an integrated script model # called rgBaseScriptWrapper.py # Note to the unwary: # This tool allows arbitrary scripting on your Galaxy as the Galaxy user # There is nothing stopping a malicious user doing whatever they choose # Extremely dangerous!! # Totally insecure. So, trusted users only # # preferred model is a developer using their throw away workstation instance - ie a private site. # no real risk. The universe_wsgi.ini admin_users string is checked - only admin users are permitted to run this tool. #
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters between
        keys and values are surrounded by spaces.
"""
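# A minimal usage sketch of the parser documented above (standard library
# configparser; the section and option names below are illustrative only).
from configparser import ConfigParser

cfg = ConfigParser(allow_no_value=True)
cfg.read_string("""
[server]
host = localhost
port = 8080
# options without values are accepted because allow_no_value=True
debug
""")

assert cfg.sections() == ['server']          # DEFAULT is never listed
assert cfg.get('server', 'host') == 'localhost'
assert cfg.getint('server', 'port') == 8080  # get() plus int conversion
assert cfg.get('server', 'debug') is None    # no-value option reads as None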
""" Define a simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able create a solution in his preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.

The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.

The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total length of
``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment
purposes.

The dictionary contains three keys:

    "descr" : dtype.descr
      An object that can be passed as an argument to the `numpy.dtype`
      constructor to create the array's dtype.

    "fortran_order" : bool
      Whether the array data is Fortran-contiguous or not. Since
      Fortran-contiguous arrays are a common form of non-C-contiguity,
      we allow them to be written directly to disk for efficiency.

    "shape" : tuple of int
      The shape of the array.

For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.

Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.

Notes
-----
The ``.npy`` format, including reasons for creating it and a comparison
of alternatives, is described fully in the "npy-format" NEP.

"""
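# A short sketch of inspecting the Version 1.0 layout described above, using
# numpy's own helpers in numpy.lib.format on an in-memory file; the array
# contents here are arbitrary.
import io

import numpy as np
from numpy.lib import format as npy_format

buf = io.BytesIO()
np.save(buf, np.arange(6, dtype=np.int64).reshape(2, 3))

buf.seek(0)
assert buf.read(6) == b'\x93NUMPY'            # the 6-byte magic string

buf.seek(0)
major, minor = npy_format.read_magic(buf)     # the two version bytes
shape, fortran_order, dtype = npy_format.read_array_header_1_0(buf)
print(major, minor, shape, fortran_order, dtype)
# e.g.: 1 0 (2, 3) False int64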
"""============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient as a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. Numpy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
The other involves giving a boolean array of the proper shape to
indicate the values to be selected. Index arrays are a very powerful
tool that allow one to avoid looping over individual elements in
arrays and thus greatly improve performance.

It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
array acquires the shape needed for use in an expression or with a
specific function.

Index arrays
============

Numpy arrays may be indexed with other arrays (or any other sequence-
like object that can be converted to an array, such as lists, with the
exception of tuples; see the end of this document for why this is). The
use of index arrays ranges from simple, straightforward cases to
complex, hard-to-understand cases. For all cases of index arrays, what
is returned is a copy of the original data, not a view as one gets for
slices.

Index arrays must be of integer type. Each value in the array indicates
which value in the array to use in place of the index. To illustrate: ::

    >>> x = np.arange(10,1,-1)
    >>> x
    array([10,  9,  8,  7,  6,  5,  4,  3,  2])
    >>> x[np.array([3, 3, 1, 8])]
    array([7, 7, 9, 2])

The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (same as the index array) where each index
is replaced by the value the index array has in the array being indexed.

Negative values are permitted and work as they do with single indices
or slices: ::

    >>> x[np.array([3,3,-3,8])]
    array([7, 7, 4, 2])

It is an error to have index values out of bounds: ::

    >>> x[np.array([3, 3, 20, 8])]
    <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9

Generally speaking, what is returned when index arrays are used is an
array with the same shape as the index array, but with the type and
values of the array being indexed. As an example, we can use a
multidimensional index array instead: ::

    >>> x[np.array([[1,1],[2,3]])]
    array([[9, 9],
           [8, 7]])

Indexing Multi-dimensional arrays
=================================

Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays. These tend to be more
unusual uses, but they are permitted, and they are useful for some
problems. We'll start with the simplest multidimensional case (using
the array y from the previous examples): ::

    >>> y[np.array([0,2,4]), np.array([0,1,2])]
    array([ 0, 15, 30])

In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the values
correspond to the index set for each position in the index arrays. In
this example, the first index value is 0 for both index arrays, and
thus the first value of the resultant array is y[0,0]. The next value
is y[2,1], and the last is y[4,2].

If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape. If they cannot be broadcast to the
same shape, an exception is raised: ::

    >>> y[np.array([0,2,4]), np.array([0,1])]
    <type 'exceptions.ValueError'>: shape mismatch: objects cannot be
    broadcast to a single shape

The broadcasting mechanism permits index arrays to be combined with
scalars for other indices. The effect is that the scalar value is used
for all the corresponding values of the index arrays: ::

    >>> y[np.array([0,2,4]), 1]
    array([ 1, 15, 29])

Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays.
It takes a bit of thought to understand what happens in such cases. For
example if we just use one index array with y: ::

    >>> y[np.array([0,2,4])]
    array([[ 0,  1,  2,  3,  4,  5,  6],
           [14, 15, 16, 17, 18, 19, 20],
           [28, 29, 30, 31, 32, 33, 34]])

What results is the construction of a new array where each value of the
index array selects one row from the array being indexed and the
resultant array has the resulting shape (size of row, number of index
elements).

An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display. The lookup table could have a shape (nlookup, 3). Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8
(or any integer type so long as values are within the bounds of the
lookup table) will result in an array of shape (ny, nx, 3) where a
triple of RGB values is associated with each pixel location.

In general, the shape of the resultant array will be the concatenation
of the shape of the index array (or the shape that all the index arrays
were broadcast to) with the shape of any unused dimensions (those not
indexed) in the array being indexed.

Boolean or "mask" index arrays
==============================

Boolean arrays used as indices are treated in a different manner
entirely than index arrays. Boolean arrays must be of the same shape
as the initial dimensions of the array being indexed. In the
most straightforward case, the boolean array has the same shape: ::

    >>> b = y>20
    >>> y[b]
    array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])

Unlike in the case of integer index arrays, in the boolean case, the
result is a 1-D array containing all the elements in the indexed array
corresponding to all the true elements in the boolean array. The
elements in the indexed array are always iterated and returned in
:term:`row-major` (C-style) order. The result is also identical to
``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy
of the data, not a view as one gets with slices.

The result will be multidimensional if y has more dimensions than b.
For example: ::

    >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
    array([False, False, False,  True,  True], dtype=bool)
    >>> y[b[:,5]]
    array([[21, 22, 23, 24, 25, 26, 27],
           [28, 29, 30, 31, 32, 33, 34]])

Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.

In general, when the boolean array has fewer dimensions than the array
being indexed, this is equivalent to y[b, ...], which means y is indexed
by b followed by as many : as are needed to fill out the rank of y.
Thus the shape of the result is one dimension containing the number of
True elements of the boolean array, followed by the remaining dimensions
of the array being indexed.

For example, using a 2-D boolean array of shape (2,3) with four True
elements to select rows from a 3-D array of shape (2,3,5) results in a
2-D result of shape (4,5): ::

    >>> x = np.arange(30).reshape(2,3,5)
    >>> x
    array([[[ 0,  1,  2,  3,  4],
            [ 5,  6,  7,  8,  9],
            [10, 11, 12, 13, 14]],
           [[15, 16, 17, 18, 19],
            [20, 21, 22, 23, 24],
            [25, 26, 27, 28, 29]]])
    >>> b = np.array([[True, True, False], [False, True, True]])
    >>> x[b]
    array([[ 0,  1,  2,  3,  4],
           [ 5,  6,  7,  8,  9],
           [20, 21, 22, 23, 24],
           [25, 26, 27, 28, 29]])

For further details, consult the numpy reference documentation on array
indexing.

Combining index arrays with slices
==================================

Index arrays may be combined with slices.
For example: ::

    >>> y[np.array([0,2,4]),1:3]
    array([[ 1,  2],
           [15, 16],
           [29, 30]])

In effect, the slice is converted to an index array
np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
to produce a resultant array of shape (3,2).

Likewise, slicing can be combined with broadcasted boolean indices: ::

    >>> y[b[:,5],1:3]
    array([[22, 23],
           [29, 30]])

Structural indexing tools
=========================

To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices
to add new dimensions with a size of 1. For example: ::

    >>> y.shape
    (5, 7)
    >>> y[:,np.newaxis,:].shape
    (5, 1, 7)

Note that there are no new elements in the array, just that the
dimensionality is increased. This can be handy to combine two
arrays in a way that otherwise would require explicit reshaping
operations. For example: ::

    >>> x = np.arange(5)
    >>> x[:,np.newaxis] + x[np.newaxis,:]
    array([[0, 1, 2, 3, 4],
           [1, 2, 3, 4, 5],
           [2, 3, 4, 5, 6],
           [3, 4, 5, 6, 7],
           [4, 5, 6, 7, 8]])

The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions. For example: ::

    >>> z = np.arange(81).reshape(3,3,3,3)
    >>> z[1,...,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

This is equivalent to: ::

    >>> z[1,:,:,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

Assigning values to indexed arrays
==================================

As mentioned, one can select a subset of an array to assign to using
a single index, slices, and index and mask arrays. The value being
assigned to the indexed array must be shape consistent (the same shape
or broadcastable to the shape the index produces). For example, it is
permitted to assign a constant to a slice: ::

    >>> x = np.arange(10)
    >>> x[2:7] = 1

or an array of the right size: ::

    >>> x[2:7] = np.arange(5)

Note that assignments may result in changes if assigning
higher types to lower types (like floats to ints) or even
exceptions (assigning complex to floats or ints): ::

    >>> x[1] = 1.2
    >>> x[1]
    1
    >>> x[1] = 1.2j
    <type 'exceptions.TypeError'>: can't convert complex to long; use
    long(abs(z))

Unlike some of the references (such as array and mask indices)
assignments are always made to the original data in the array (indeed,
nothing else would make sense!). Note though, that some actions may not
work as one may naively expect. This particular example is often
surprising to people: ::

    >>> x = np.arange(0, 50, 10)
    >>> x
    array([ 0, 10, 20, 30, 40])
    >>> x[np.array([1, 1, 3, 1])] += 1
    >>> x
    array([ 0, 11, 20, 31, 40])

where people might expect that the 1st location would be incremented by
3. In fact, it will only be incremented by 1. The reason is because a
new array is extracted from the original (as a temporary) containing
the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
and then the temporary is assigned back to the original array. Thus the
value of the array at x[1]+1 is assigned to x[1] three times, rather
than being incremented 3 times.

Dealing with variable numbers of indices within programs
=========================================================

The index syntax is very powerful but limiting when dealing with a
variable number of indices. For example, if you want to write a
function that can handle arguments with various numbers of dimensions
without having to write special case code for each number of possible
dimensions, how can that be done? If one supplies to the index a tuple,
the tuple will be interpreted as a list of indices.
For example (using the previous definition for the array z): ::

    >>> indices = (1,1,1,1)
    >>> z[indices]
    40

So one can use code to construct tuples of any number of indices and
then use these within an index.

Slices can be specified within programs by using the slice() function
in Python. For example: ::

    >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
    >>> z[indices]
    array([39, 40])

Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::

    >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
    >>> z[indices]
    array([[28, 31, 34],
           [37, 40, 43],
           [46, 49, 52]])

For this reason it is possible to use the output from the np.where()
function directly as an index since it always returns a tuple of index
arrays.

Because of the special treatment of tuples, they are not automatically
converted to an array as a list would be. As an example: ::

    >>> z[[1,1,1,1]] # produces a large array
    array([[[[27, 28, 29],
             [30, 31, 32], ...
    >>> z[(1,1,1,1)] # returns a single value
    40

"""
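# Two small sketches of the tuple-indexing idioms described above; the
# variable names are illustrative, not part of numpy.
import numpy as np

z = np.arange(81).reshape(3, 3, 3, 3)

# Build an index tuple programmatically: select z[1, :, :, 2] regardless of
# how many middle dimensions the array happens to have.
indices = (1,) + (slice(None),) * (z.ndim - 2) + (2,)
assert (z[indices] == z[1, :, :, 2]).all()

# np.where() returns a tuple of index arrays, so its output can be used
# directly as an index.
x = np.arange(0, 50, 10)
assert (x[np.where(x > 15)] == np.array([20, 30, 40])).all()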
""" A framework for calculating the surface stresses at a particular place and time on a satellite resulting from one or more tidal potentials. 1 Input and Output ================== Because C{satstress} is a "library" module, it doesn't do a lot of input and output - it's mostly about doing calculations. It does need to read in the specification of a L{Satellite} object though, and it can write the same kind of specification out. To do this, it uses name-value files, and a function called L{nvf2dict}, which creates a Python dictionary (or "associative array"). A name-value file is just a file containing a bunch of name-value pairs, like:: ORBIT_ECCENTRICITY = 0.0094 # e must be < 0.25 It can also contain comments to enhance human readability (anything following a '#' on a line is ignored, as with the note in the line above). 2 Satellites ============ Obviously if we want to calculate the stresses on the surface of a satellite, we need to define the satellite, this is what the L{Satellite} object does. 2.1 Specifying a Satellite -------------------------- In order to specify a satellite, we need: - an ID of some kind for the planet/satellite pair of interest - the charactaristics of the satellite's orbital environment - the satellite's internal structure and material properties - the forcings to which the satellite is subjected From a few basic inputs, we can calculate many derived characteristics, such as the satellite's orbital period or the surface gravity. The internal structure and material properties are specified by a series of concentric spherical shells (layers), each one being homogeneous throughout its extent. Given the densities and thicknesses of the these layers, we can calculate the satellite's overall size, mass, density, etc. Specifying a tidal forcing may be simple or complex. For instance, the L{Diurnal} forcing depends only on the orbital eccentricity (and other orbital parameters already supplied), and the L{NSR} forcing requires only the addition of the non-synchronous rotation period of the shell. Specifying an arbitrary true polar wander trajectory would be much more complex. For the moment, becuase we are only including simple forcings, their specifying parameters are read in from the satellite definition file. If more, and more complex forcings are eventually added to the model, their specification will probably be split into a separate input file. 2.2 Internal Structure and Love Numbers --------------------------------------- C{satstress} treats the solid portions of the satellite as U{viscoelastic Maxwell solids <http://en.wikipedia.org/wiki/Maxwell_material>}, that respond differently to forcings having different frequencies (S{omega}). Given the a specification of the internal structure and material properties of a satellite as a series of layers, and information about the tidal forcings the body is subject to, it's possible to calculate appropriate Love numbers, which describe how the body responds to a change in the gravitational potential. Currently the calculation of Love numbers is done by an external program written in Fortran by NAME and others, with roots reaching deep into the Dark Ages of computing. 
As that code (or another Love number code) is more closely integrated
with the model, the internal structure of the satellite will become more
flexible, but for the moment, we are limited to assuming a 4-layer
structure:

  - B{C{ICE_UPPER}}: The upper portion of the shell (cooler, stiffer)
  - B{C{ICE_LOWER}}: The lower portion of the shell (warmer, softer)
  - B{C{OCEAN}}: An inviscid fluid decoupling the shell from the core.
  - B{C{CORE}}: The silicate interior of the body.

3 Stresses
==========

C{satstress} can calculate the following stress fields:

  1. B{L{Diurnal}}: stresses arising from an eccentric orbit, having a
     forcing frequency equal to the orbital frequency.

  2. B{L{NSR}}: stresses arising due to the faster-than-synchronous
     rotation of a floating shell that is decoupled from the satellite's
     interior by a fluid layer (an ocean).

The expressions defining these stress fields are derived in "Modeling
Stresses on Satellites due to Non-Synchronous Rotation and Orbital
Eccentricity Using Gravitational Potential Theory" (U{preprint, 15MB PDF
<http://satstress.googlecode.com/files/Wahretal2008.pdf>}) by Wahr et
al. (submitted to I{Icarus}, in March, 2008).

3.1 Stress Fields Live in L{StressDef} Objects
----------------------------------------------

Each of the above stress fields is defined by a similarly named
L{StressDef} object. These objects contain the formulae necessary to
calculate the surface stress. The expressions for the stresses depend
on many parameters which are defined within the L{Satellite} object,
and so to create a L{StressDef} object, you need to provide a
L{Satellite} object.

There are many formulae which are identical for both the L{NSR} and
L{Diurnal} stress fields, and so instead of duplicating them in both
classes, they reside in the L{StressDef} I{base class}, from which all
L{StressDef} objects inherit many properties.

The main requirement for each L{StressDef} object is that it must define
the three components of the stress tensor S{tau}:

  - C{Ttt} (S{tau}_S{theta}S{theta}) the north-south (latitudinal)
    component
  - C{Tpt} (S{tau}_S{phi}S{theta} = S{tau}_S{theta}S{phi}) the shear
    component
  - C{Tpp} (S{tau}_S{phi}S{phi}) the east-west (longitudinal) component

3.2 Stress Calculations are Performed by L{StressCalc} Objects
--------------------------------------------------------------

Once you've I{instantiated} a L{StressDef} object, or several of them
(one for each stress you want to include), you can compose them together
into a L{StressCalc} object, which will actually do calculations at
given points on the surface, and given times, and return a 2x2 matrix
containing the resulting stress tensor (each component of which is the
sum of all of the corresponding components of the stress fields that
were used to instantiate the L{StressCalc} object).

This is (hopefully) easier than it sounds. With the following few lines,
you can construct a satellite, do a single calculation on its surface,
and see what it looks like:

  >>> from satstress.satstress import *
  >>> the_sat = Satellite(open("input/Europa.satellite"))
  >>> the_stresses = StressCalc([Diurnal(the_sat), NSR(the_sat)])
  >>> Tau = the_stresses.tensor(theta=pi/4.0, phi=pi/3.0, t=10000)
  >>> print(Tau)

The C{test} program included in the satstress distribution shows a
slightly more complex example, which should be enough to get you started
using the package.
3.3 Extending the Model
-----------------------

Other stress fields can (and hopefully will!) be added easily, so long
as they use the same mathematical definition of the membrane stress
tensor (S{tau}), as a function of co-latitude (S{theta}) (measured south
from the north pole), east-positive longitude (S{phi}), measured from
the meridian on the satellite which passes through the point on the
satellite directly beneath the parent planet (assuming a synchronously
rotating satellite), and time (B{M{t}}), defined as seconds elapsed
since pericenter.

This module could also potentially be extended to calculate the surface
strain (S{epsilon}) and displacement (B{M{s}}) fields, or to calculate
the stresses at any point within the satellite.

@group Exceptions (error handling classes): *Error
"""
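# The package reads satellite definitions from name-value files as described
# in section 1 above. The real parser is the package's nvf2dict function;
# what follows is only a stand-in sketch of the documented behavior (strip
# '#' comments, split on '=', collect a dict), not the actual satstress code.
def nvf2dict_sketch(nvf):
    """Read an open name-value file into a dict (illustrative only)."""
    params = {}
    for line in nvf:
        line = line.split('#', 1)[0].strip()  # drop comments and whitespace
        if '=' not in line:
            continue                          # skip blank/comment-only lines
        name, value = (field.strip() for field in line.split('=', 1))
        params[name] = value
    return params

# e.g. nvf2dict_sketch(open("input/Europa.satellite"))["ORBIT_ECCENTRICITY"]
# would return the string "0.0094" for the example line shown above.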
""" A directive for including a matplotlib plot in a Sphinx document. By default, in HTML output, `plot` will include a .png file with a link to a high-res .png and .pdf. In LaTeX output, it will include a .pdf. The source code for the plot may be included in one of three ways: 1. **A path to a source file** as the argument to the directive:: .. plot:: path/to/plot.py When a path to a source file is given, the content of the directive may optionally contain a caption for the plot:: .. plot:: path/to/plot.py This is the caption for the plot Additionally, one may specify the name of a function to call (with no arguments) immediately after importing the module:: .. plot:: path/to/plot.py plot_function1 2. Included as **inline content** to the directive:: .. plot:: import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np img = mpimg.imread('_static/stinkbug.png') imgplot = plt.imshow(img) 3. Using **doctest** syntax:: .. plot:: A plotting example: >>> import matplotlib.pyplot as plt >>> plt.plot([1,2,3], [4,5,6]) Options ------- The ``plot`` directive supports the following options: format : {'python', 'doctest'} Specify the format of the input include-source : bool Whether to display the source code. The default can be changed using the `plot_include_source` variable in conf.py encoding : str If this source file is in a non-UTF8 or non-ASCII encoding, the encoding must be specified using the `:encoding:` option. The encoding will not be inferred using the ``-*- coding -*-`` metacomment. context : bool or str If provided, the code will be run in the context of all previous plot directives for which the `:context:` option was specified. This only applies to inline code plot directives, not those run from files. If the ``:context: reset`` option is specified, the context is reset for this and future plots, and previous figures are closed prior to running the code. ``:context:close-figs`` keeps the context but closes previous figures before running the code. nofigs : bool If specified, the code block will be run, but no figures will be inserted. This is usually useful with the ``:context:`` option. Additionally, this directive supports all of the options of the `image` directive, except for `target` (since plot will add its own target). These include `alt`, `height`, `width`, `scale`, `align` and `class`. Configuration options --------------------- The plot directive has the following configuration options: plot_include_source Default value for the include-source option plot_html_show_source_link Whether to show a link to the source in HTML. plot_pre_code Code that should be executed before each plot. plot_basedir Base directory, to which ``plot::`` file names are relative to. (If None or empty, file names are relative to the directory where the file containing the directive is.) plot_formats File formats to generate. List of tuples or strings:: [(suffix, dpi), suffix, ...] that determine the file format and the DPI. For entries whose DPI was omitted, sensible defaults are chosen. When passing from the command line through sphinx_build the list should be passed as suffix:dpi,suffix:dpi, .... plot_html_show_formats Whether to show links to the files in HTML. plot_rcparams A dictionary containing any non-standard rcParams that should be applied before each plot. plot_apply_rcparams By default, rcParams are applied when `context` option is not used in a plot directive. This configuration option overrides this behavior and applies rcParams before each plot. 
    plot_working_directory
        By default, the working directory will be changed to the
        directory of the example, so the code can get at its data files,
        if any. Also its path will be added to `sys.path` so it can
        import any helper modules sitting beside it. This configuration
        option can be used to specify a central directory (also added to
        `sys.path`) where data files and helper modules for all code are
        located.

    plot_template
        Provide a customized template for preparing restructured text.
"""
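# A sketch of how the configuration options above might appear in a Sphinx
# conf.py. The extension path assumes matplotlib's bundled copy of this
# directive, and all values are illustrative, not recommended defaults.
extensions = ['matplotlib.sphinxext.plot_directive']

plot_include_source = True                 # default for :include-source:
plot_html_show_source_link = False         # no source link in HTML output
plot_formats = [('png', 100), 'pdf']       # (suffix, dpi) tuples or suffixes
plot_pre_code = "import numpy as np\nimport matplotlib.pyplot as plt"
plot_rcparams = {'savefig.bbox': 'tight'}  # applied before each plot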
#!/usr/bin/env python
# ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is font utility code.
#
# The Initial Developer of the Original Code is Mozilla Corporation.
# Portions created by the Initial Developer are Copyright (C) 2009
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   NAME <EMAIL>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****
#
# eotlitetool.py - create EOT version of OpenType font for use with IE
#
# Usage: eotlitetool.py [-o output-filename] font1 [font2 ...]
#
# OpenType file structure
# http://www.microsoft.com/typography/otspec/otff.htm
#
# Types:
#
# BYTE          8-bit unsigned integer.
# CHAR          8-bit signed integer.
# USHORT        16-bit unsigned integer.
# SHORT         16-bit signed integer.
# ULONG         32-bit unsigned integer.
# Fixed         32-bit signed fixed-point number (16.16)
# LONGDATETIME  Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer.
#
# SFNT Header
#
# Fixed   sfnt version   // 0x00010000 for version 1.0.
# USHORT  numTables      // Number of tables.
# USHORT  searchRange    // (Maximum power of 2 <= numTables) x 16.
# USHORT  entrySelector  // Log2(maximum power of 2 <= numTables).
# USHORT  rangeShift     // NumTables x 16 - searchRange.
#
# Table Directory
#
# ULONG  tag       // 4-byte identifier.
# ULONG  checkSum  // CheckSum for this table.
# ULONG  offset    // Offset from beginning of TrueType font file.
# ULONG  length    // Length of this table.
#
# OS/2 Table (Version 4)
#
# USHORT  version  // 0x0004
# SHORT   xAvgCharWidth
# USHORT  usWeightClass
# USHORT  usWidthClass
# USHORT  fsType
# SHORT   ySubscriptXSize
# SHORT   ySubscriptYSize
# SHORT   ySubscriptXOffset
# SHORT   ySubscriptYOffset
# SHORT   ySuperscriptXSize
# SHORT   ySuperscriptYSize
# SHORT   ySuperscriptXOffset
# SHORT   ySuperscriptYOffset
# SHORT   yStrikeoutSize
# SHORT   yStrikeoutPosition
# SHORT   sFamilyClass
# BYTE    panose[10]
# ULONG   ulUnicodeRange1  // Bits 0-31
# ULONG   ulUnicodeRange2  // Bits 32-63
# ULONG   ulUnicodeRange3  // Bits 64-95
# ULONG   ulUnicodeRange4  // Bits 96-127
# CHAR    achVendID[4]
# USHORT  fsSelection
# USHORT  usFirstCharIndex
# USHORT  usLastCharIndex
# SHORT   sTypoAscender
# SHORT   sTypoDescender
# SHORT   sTypoLineGap
# USHORT  usWinAscent
# USHORT  usWinDescent
# ULONG   ulCodePageRange1  // Bits 0-31
# ULONG   ulCodePageRange2  // Bits 32-63
# SHORT   sxHeight
# SHORT   sCapHeight
# USHORT  usDefaultChar
# USHORT  usBreakChar
# USHORT  usMaxContext
#
#
# The Naming Table is organized as follows:
#
# [name table header]
# [name records]
# [string data]
#
# Name Table Header
#
# USHORT  format        // Format selector (=0).
# USHORT  count         // Number of name records.
# USHORT  stringOffset  // Offset to start of string storage (from start of table).
#
# Name Record
#
# USHORT  platformID  // Platform ID.
# USHORT  encodingID  // Platform-specific encoding ID.
# USHORT  languageID  // Language ID.
# USHORT  nameID      // Name ID.
# USHORT  length      // String length (in bytes).
# USHORT  offset      // String offset from start of storage area (in bytes).
#
# head Table
#
# Fixed         tableVersion        // Table version number 0x00010000 for version 1.0.
# Fixed         fontRevision        // Set by font manufacturer.
# ULONG         checkSumAdjustment  // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum.
# ULONG         magicNumber         // Set to 0x5F0F3CF5.
# USHORT        flags
# USHORT        unitsPerEm          // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines.
# LONGDATETIME  created             // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# LONGDATETIME  modified            // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# SHORT         xMin                // For all glyph bounding boxes.
# SHORT         yMin
# SHORT         xMax
# SHORT         yMax
# USHORT        macStyle
# USHORT        lowestRecPPEM       // Smallest readable size in pixels.
# SHORT         fontDirectionHint
# SHORT         indexToLocFormat    // 0 for short offsets, 1 for long.
# SHORT         glyphDataFormat     // 0 for current format.
#
#
#
# Embedded OpenType (EOT) file format
# http://www.w3.org/Submission/EOT/
#
# EOT version 0x00020001
#
# An EOT font consists of a header with the original OpenType font
# appended at the end. Most of the data in the EOT header is simply a
# copy of data from specific tables within the font data. The exceptions
# are the 'Flags' field and the root string name field. The root string
# is a set of names indicating domains for which the font data can be
# used. A null root string implies the font data can be used anywhere.
# The EOT header is in little-endian byte order but the font data remains
# in big-endian order as specified by the OpenType spec.
#
# Overall structure:
#
# [EOT header]
# [EOT name records]
# [font data]
#
# EOT header
#
# ULONG   eotSize             // Total structure length in bytes (including string and font data)
# ULONG   fontDataSize        // Length of the OpenType font (FontData) in bytes
# ULONG   version             // Version number of this format - 0x00020001
# ULONG   flags               // Processing Flags (0 == no special processing)
# BYTE    fontPANOSE[10]      // OS/2 Table panose
# BYTE    charset             // DEFAULT_CHARSET (0x01)
# BYTE    italic              // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise
# ULONG   weight              // OS/2 Table usWeightClass
# USHORT  fsType              // OS/2 Table fsType (specifies embedding permission flags)
# USHORT  magicNumber         // Magic number for EOT file - 0x504C.
# ULONG   unicodeRange1       // OS/2 Table ulUnicodeRange1
# ULONG   unicodeRange2       // OS/2 Table ulUnicodeRange2
# ULONG   unicodeRange3       // OS/2 Table ulUnicodeRange3
# ULONG   unicodeRange4       // OS/2 Table ulUnicodeRange4
# ULONG   codePageRange1      // OS/2 Table ulCodePageRange1
# ULONG   codePageRange2      // OS/2 Table ulCodePageRange2
# ULONG   checkSumAdjustment  // head Table CheckSumAdjustment
# ULONG   reserved[4]         // Reserved - must be 0
# USHORT  padding1            // Padding - must be 0
#
# EOT name records
#
# USHORT  FamilyNameSize              // Font family name size in bytes
# BYTE    FamilyName[FamilyNameSize]  // Font family name (name ID = 1), little-endian UTF-16
# USHORT  Padding2                    // Padding - must be 0
#
# USHORT  StyleNameSize             // Style name size in bytes
# BYTE    StyleName[StyleNameSize]  // Style name (name ID = 2), little-endian UTF-16
# USHORT  Padding3                  // Padding - must be 0
#
# USHORT  VersionNameSize               // Version name size in bytes
# BYTE    VersionName[VersionNameSize]  // Version name (name ID = 5), little-endian UTF-16
# USHORT  Padding4                      // Padding - must be 0
#
# USHORT  FullNameSize            // Full name size in bytes
# BYTE    FullName[FullNameSize]  // Full name (name ID = 4), little-endian UTF-16
# USHORT  Padding5                // Padding - must be 0
#
# USHORT  RootStringSize              // Root string size in bytes
# BYTE    RootString[RootStringSize]  // Root string, little-endian UTF-16
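# A minimal sketch of walking the SFNT structures documented above with the
# standard struct module. OpenType data is big-endian ('>'); the function
# name is illustrative and not part of eotlitetool.py itself.
import struct

def read_sfnt_tables(data):
    """Return {tag: (offset, length)} from raw OpenType font bytes."""
    version, num_tables = struct.unpack('>IH', data[:6])
    tables = {}
    pos = 12  # the SFNT header is 12 bytes; the Table Directory follows
    for _ in range(num_tables):
        tag, checksum, offset, length = struct.unpack('>4sIII',
                                                      data[pos:pos + 16])
        tables[tag] = (offset, length)
        pos += 16  # each Table Directory entry is four 4-byte fields
    return tables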
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): SyntaxError: name 'x' is local and global (<doctest test.test_syntax[0]>, line 1) The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): >>> obj.None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[1]>", line 1 SyntaxError: cannot assign to None >>> None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[2]>", line 1 SyntaxError: cannot assign to None It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): File "<doctest test.test_syntax[3]>", line 1 SyntaxError: can't assign to () >>> f() = 1 Traceback (most recent call last): File "<doctest test.test_syntax[4]>", line 1 SyntaxError: can't assign to function call >>> del f() Traceback (most recent call last): File "<doctest test.test_syntax[5]>", line 1 SyntaxError: can't delete function call >>> a + 1 = 2 Traceback (most recent call last): File "<doctest test.test_syntax[6]>", line 1 SyntaxError: can't assign to operator >>> (x for x in x) = 1 Traceback (most recent call last): File "<doctest test.test_syntax[7]>", line 1 SyntaxError: can't assign to generator expression >>> 1 = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> "abc" = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> `1` = 1 Traceback (most recent call last): File "<doctest test.test_syntax[10]>", line 1 SyntaxError: can't assign to repr If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): File "<doctest test.test_syntax[11]>", line 1 SyntaxError: can't assign to literal >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): File "<doctest test.test_syntax[12]>", line 1 SyntaxError: can't assign to operator >>> a if 1 else b = 1 Traceback (most recent call last): File "<doctest test.test_syntax[13]>", line 1 SyntaxError: can't assign to conditional expression From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[14]>", line 1 SyntaxError: cannot assign to None From ast_for_arguments(): >>> def f(x, y=1, z): ... 
Traceback (most recent call last):
  File "<doctest test.test_syntax[15]>", line 1
SyntaxError: non-default argument follows default argument

>>> def f(x, None):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[16]>", line 1
SyntaxError: cannot assign to None

>>> def f(*None):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[17]>", line 1
SyntaxError: cannot assign to None

>>> def f(**None):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[18]>", line 1
SyntaxError: cannot assign to None

From ast_for_funcdef():

>>> def None(x):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[19]>", line 1
SyntaxError: cannot assign to None

From ast_for_call():

>>> def f(it, *varargs):
...     return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[23]>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
...   i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
...   i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
...   i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
...   i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
...   i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
...   i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
...   i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
...   i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
...   i100, i101, i102, i103, i104, i105, i106, i107, i108,
...   i109, i110, i111, i112, i113, i114, i115, i116, i117,
...   i118, i119, i120, i121, i122, i123, i124, i125, i126,
...   i127, i128, i129, i130, i131, i132, i133, i134, i135,
...   i136, i137, i138, i139, i140, i141, i142, i143, i144,
...   i145, i146, i147, i148, i149, i150, i151, i152, i153,
...   i154, i155, i156, i157, i158, i159, i160, i161, i162,
...   i163, i164, i165, i166, i167, i168, i169, i170, i171,
...   i172, i173, i174, i175, i176, i177, i178, i179, i180,
...   i181, i182, i183, i184, i185, i186, i187, i188, i189,
...   i190, i191, i192, i193, i194, i195, i196, i197, i198,
...   i199, i200, i201, i202, i203, i204, i205, i206, i207,
...   i208, i209, i210, i211, i212, i213, i214, i215, i216,
...   i217, i218, i219, i220, i221, i222, i223, i224, i225,
...   i226, i227, i228, i229, i230, i231, i232, i233, i234,
...   i235, i236, i237, i238, i239, i240, i241, i242, i243,
...   i244, i245, i246, i247, i248, i249, i250, i251, i252,
...   i253, i254, i255)
Traceback (most recent call last):
  File "<doctest test.test_syntax[25]>", line 1
SyntaxError: more than 255 arguments

The actual error cases count positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.

>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
...   i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
...   i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
...   i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
...   i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
...   i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
...   i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
...   i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
...   i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
...   i100, i101, i102, i103, i104, i105, i106, i107, i108,
...   i109, i110, i111, i112, i113, i114, i115, i116, i117,
...   i118, i119, i120, i121, i122, i123, i124, i125, i126,
...   i127, i128, i129, i130, i131, i132, i133, i134, i135,
...   i136, i137, i138, i139, i140, i141, i142, i143, i144,
...   i145, i146, i147, i148, i149, i150, i151, i152, i153,
...   i154, i155, i156, i157, i158, i159, i160, i161, i162,
...   i163, i164, i165, i166, i167, i168, i169, i170, i171,
...   i172, i173, i174, i175, i176, i177, i178, i179, i180,
...   i181, i182, i183, i184, i185, i186, i187, i188, i189,
...   i190, i191, i192, i193, i194, i195, i196, i197, i198,
...   i199, i200, i201, i202, i203, i204, i205, i206, i207,
...   i208, i209, i210, i211, i212, i213, i214, i215, i216,
...   i217, i218, i219, i220, i221, i222, i223, i224, i225,
...   i226, i227, i228, i229, i230, i231, i232, i233, i234,
...   i235, i236, i237, i238, i239, i240, i241, i242, i243,
...   (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
...   i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[26]>", line 1
SyntaxError: more than 255 arguments

>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
  File "<doctest test.test_syntax[27]>", line 1
SyntaxError: lambda cannot contain assignment

The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.

>>> f(x()=2)
Traceback (most recent call last):
  File "<doctest test.test_syntax[28]>", line 1
SyntaxError: keyword can't be an expression
>>> f(a or b=1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[29]>", line 1
SyntaxError: keyword can't be an expression
>>> f(x.y=1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[30]>", line 1
SyntaxError: keyword can't be an expression

More set_context():

>>> (x for x in x) += 1
Traceback (most recent call last):
  File "<doctest test.test_syntax[31]>", line 1
SyntaxError: can't assign to generator expression
>>> None += 1
Traceback (most recent call last):
  File "<doctest test.test_syntax[32]>", line 1
SyntaxError: cannot assign to None
>>> f() += 1
Traceback (most recent call last):
  File "<doctest test.test_syntax[33]>", line 1
SyntaxError: can't assign to function call


Test continue in finally in weird combinations.

continue in for loop under finally should be ok.

>>> def test():
...     try:
...         pass
...     finally:
...         for abc in range(10):
...             continue
...         print abc
>>> test()
9

Start simple, a continue in a finally should not be allowed.

>>> def test():
...     for abc in range(10):
...         try:
...             pass
...         finally:
...             continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[36]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause

This is essentially a continue in a finally which should not be allowed.

>>> def test():
...     for abc in range(10):
...         try:
...             pass
...         finally:
...             try:
...                 continue
...             except:
...                 pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[37]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     try:
...         pass
...     finally:
...         continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[38]>", line 5
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     for a in ():
...         try:
...             pass
...         finally:
...             continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[39]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     for a in ():
...         try:
...             pass
...         finally:
...             try:
...                 continue
...             finally:
...                 pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[40]>", line 7
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     for a in ():
...         try: pass
...         finally:
...             try:
...                 pass
...             except:
...                 continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[41]>", line 8
SyntaxError: 'continue' not supported inside 'finally' clause

There is one test for a break that is not in a loop. The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop. If it
isn't, there should be a syntax error.

>>> try:
...     print 1
...     break
...     print 2
... finally:
...     print 3
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[42]>", line 3
SyntaxError: 'break' outside loop

This should probably raise a better error than a SystemError (or none at
all). In 2.5 there was a missing exception and an assert was triggered in a
debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514

>>> while 1:
...  while 2:
...   while 3:
...    while 4:
...     while 5:
...      while 6:
...       while 8:
...        while 9:
...         while 10:
...          while 11:
...           while 12:
...            while 13:
...             while 14:
...              while 15:
...               while 16:
...                while 17:
...                 while 18:
...                  while 19:
...                   while 20:
...                    while 21:
...                     while 22:
...                      break
Traceback (most recent call last):
  ...
SystemError: too many statically nested blocks

This tests assignment-context; there was a bug in Python 2.5 where compiling
a complex 'if' (one with 'elif') would fail to notice an invalid suite,
leading to spurious errors.

>>> if 1:
...   x() = 1
... elif 1:
...   pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[44]>", line 2
SyntaxError: can't assign to function call

>>> if 1:
...   pass
... elif 1:
...   x() = 1
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[45]>", line 4
SyntaxError: can't assign to function call

>>> if 1:
...   x() = 1
... elif 1:
...   pass
... else:
...   pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[46]>", line 2
SyntaxError: can't assign to function call

>>> if 1:
...   pass
... elif 1:
...   x() = 1
... else:
...   pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[47]>", line 4
SyntaxError: can't assign to function call

>>> if 1:
...   pass
... elif 1:
...   pass
... else:
...   x() = 1
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[48]>", line 6
SyntaxError: can't assign to function call

>>> f(a=23, a=234)
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[49]>", line 1
SyntaxError: keyword argument repeated

>>> del ()
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[50]>", line 1
SyntaxError: can't delete ()

>>> {1, 2, 3} = 42
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[50]>", line 1
SyntaxError: can't assign to literal

Corner-case that used to crash:

>>> def f(*xx, **__debug__): pass
Traceback (most recent call last):
SyntaxError: cannot assign to __debug__

"""
""" Cython optimize zeros API ========================= The underlying C functions for the following root finders can be accessed directly using Cython: - `~scipy.optimize.bisect` - `~scipy.optimize.ridder` - `~scipy.optimize.brenth` - `~scipy.optimize.brentq` The Cython API for the zeros functions is similar except there is no ``disp`` argument. Import the zeros functions using ``cimport`` from `scipy.optimize.cython_optimize`. :: from scipy.optimize.cython_optimize cimport bisect, ridder, brentq, brenth Callback signature ------------------ The zeros functions in `~scipy.optimize.cython_optimize` expect a callback that takes a double for the scalar independent variable as the 1st argument and a user defined ``struct`` with any extra parameters as the 2nd argument. :: double (*callback_type)(double, void*) Examples -------- Usage of `~scipy.optimize.cython_optimize` requires Cython to write callbacks that are compiled into C. For more information on compiling Cython, see the `Cython Documentation <http://docs.cython.org/en/latest/index.html>`_. These are the basic steps: 1. Create a Cython ``.pyx`` file, for example: ``myexample.pyx``. 2. Import the desired root finder from `~scipy.optimize.cython_optimize`. 3. Write the callback function, and call the selected zeros function passing the callback, any extra arguments, and the other solver parameters. :: from scipy.optimize.cython_optimize cimport brentq # import math from Cython from libc cimport math myargs = {'C0': 1.0, 'C1': 0.7} # a dictionary of extra arguments XLO, XHI = 0.5, 1.0 # lower and upper search boundaries XTOL, RTOL, MITR = 1e-3, 1e-3, 10 # other solver parameters # user-defined struct for extra parameters ctypedef struct test_params: double C0 double C1 # user-defined callback cdef double f(double x, void *args): cdef test_params *myargs = <test_params *> args return myargs.C0 - math.exp(-(x - myargs.C1)) # Cython wrapper function cdef double brentq_wrapper_example(dict args, double xa, double xb, double xtol, double rtol, int mitr): # Cython automatically casts dictionary to struct cdef test_params myargs = args return brentq( f, xa, xb, <test_params *> &myargs, xtol, rtol, mitr, NULL) # Python function def brentq_example(args=myargs, xa=XLO, xb=XHI, xtol=XTOL, rtol=RTOL, mitr=MITR): '''Calls Cython wrapper from Python.''' return brentq_wrapper_example(args, xa, xb, xtol, rtol, mitr) 4. If you want to call your function from Python, create a Cython wrapper, and a Python function that calls the wrapper, or use ``cpdef``. Then, in Python, you can import and run the example. :: from myexample import brentq_example x = brentq_example() # 0.6999942848231314 5. Create a Cython ``.pxd`` file if you need to export any Cython functions. Full output ----------- The functions in `~scipy.optimize.cython_optimize` can also copy the full output from the solver to a C ``struct`` that is passed as its last argument. If you don't want the full output, just pass ``NULL``. The full output ``struct`` must be type ``zeros_full_output``, which is defined in `scipy.optimize.cython_optimize` with the following fields: - ``int funcalls``: number of function calls - ``int iterations``: number of iterations - ``int error_num``: error number - ``double root``: root of function The root is copied by `~scipy.optimize.cython_optimize` to the full output ``struct``. An error number of -1 means a sign error, -2 means a convergence error, and 0 means the solver converged. 
Continuing from the previous example::

    from scipy.optimize.cython_optimize cimport zeros_full_output


    # Cython brentq solver with full output
    cdef zeros_full_output brentq_full_output_wrapper_example(
            dict args, double xa, double xb, double xtol, double rtol,
            int mitr):
        cdef test_params myargs = args
        cdef zeros_full_output my_full_output
        # use my_full_output instead of NULL
        brentq(f, xa, xb, &myargs, xtol, rtol, mitr, &my_full_output)
        return my_full_output


    # Python function
    def brent_full_output_example(args=myargs, xa=XLO, xb=XHI, xtol=XTOL,
                                  rtol=RTOL, mitr=MITR):
        '''Returns the full output of the brentq solver.'''
        return brentq_full_output_wrapper_example(args, xa, xb, xtol, rtol,
                                                  mitr)

    result = brent_full_output_example()
    # {'error_num': 0,
    #  'funcalls': 6,
    #  'iterations': 5,
    #  'root': 0.6999942848231314}
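The ``error_num`` field of the full output can then be checked from Python. A minimal sketch, assuming the ``myexample`` module built in the steps above (Cython converts the returned C struct to a Python dict)::

    from myexample import brent_full_output_example

    output = brent_full_output_example()
    if output['error_num'] == 0:
        root = output['root']  # the solver converged
    elif output['error_num'] == -1:
        raise ValueError('sign error: f(xa) and f(xb) must differ in sign')
    else:  # error_num == -2
        raise RuntimeError('convergence error: root is not reliable')
"""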
""" GenBank format (:mod:`skbio.io.format.genbank`) =============================================== .. currentmodule:: skbio.io.format.genbank GenBank format (GenBank Flat File Format) stores sequence and its annotation together. The start of the annotation section is marked by a line beginning with the word "LOCUS". The start of sequence section is marked by a line beginning with the word "ORIGIN" and the end of the section is marked by a line with only "//". The GenBank file usually ends with .gb or sometimes .gbk. The GenBank format for protein has been renamed to GenPept. The GenBank (for nucleotide) and Genpept are essentially the same format. An example of a GenBank file can be seen here [1]_. Format Support -------------- **Has Sniffer: Yes** +------+------+---------------------------------------------------------------+ |Reader|Writer| Object Class | +======+======+===============================================================+ |Yes |Yes |:mod:`skbio.sequence.Sequence` | +------+------+---------------------------------------------------------------+ |Yes |Yes |:mod:`skbio.sequence.DNA` | +------+------+---------------------------------------------------------------+ |Yes |Yes |:mod:`skbio.sequence.RNA` | +------+------+---------------------------------------------------------------+ |Yes |Yes |:mod:`skbio.sequence.Protein` | +------+------+---------------------------------------------------------------+ |Yes | Yes | generator of :mod:`skbio.sequence.Sequence` objects | +------+------+---------------------------------------------------------------+ Format Specification -------------------- **State: Experimental as of 0.4.1.** Sections before ``FEATURES`` ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ All the sections before ``FEATURES`` will be read into the attribute of ``metadata``. The header and its content of a section is stored as a pair of key and value in ``metadata``. For the ``REFERENCE`` section, its value is stored as a list, as there are often multiple reference sections in one GenBank record. .. _genbank_feature_section: ``FEATURES`` section ^^^^^^^^^^^^^^^^^^^^ The International Nucleotide Sequence Database Collaboration (INSDC [2]_) is a joint effort among the DDBJ, EMBL, and GenBank. These organisations all use the same "Feature Table" layout in their plain text flat file formats, which are documented in detail [3]_. The feature keys and their qualifiers are also described in this webpage [4]_. The ``FEATURES`` section will be stored in ``interval_metadata`` of ``Sequence`` or its sub-class. Each sub-section is stored as an ``Interval`` object in ``interval_metadata``. Each ``Interval`` object has ``metadata`` keeping the information of this feature in the sub-section. 
To normalize the vocabulary between multiple formats (currently only the INSDC Feature Table and GFF3) to store metadata of interval features, we rename some terms in some formats to the same common name when parsing them into memory, as described in this table: +-----------+-----------+-----------+---------+------------------------------+ |INSDC |GFF3 |Key stored |Value |Description | |feature |columns or | |type | | |table |attributes | |stored | | +===========+===========+===========+=========+==============================+ |inference |source |source |str |the algorithm or experiment | | |(column 2) | | |used to generate this feature | +-----------+-----------+-----------+---------+------------------------------+ |feature key|type |type |str |the type of the feature | | |(column 3) | | | | +-----------+-----------+-----------+---------+------------------------------+ |N/A |score |score |float |the score of the feature | | |(column 6) | | | | +-----------+-----------+-----------+---------+------------------------------+ |N/A |strand |strand |str |the strand of the feature. + | | |(column 7) | | |for positive strand, - for | | | | | |minus strand, and . for | | | | | |features that are not | | | | | |stranded. In addition, ? can | | | | | |be used for features whose | | | | | |strandedness is relevant, but | | | | | |unknown. | +-----------+-----------+-----------+---------+------------------------------+ |codon_start|phase |phase |int |the offset at which the first | | |(column 8) | | |complete codon of a coding | | | | | |feature can be found, relative| | | | | |to the first base of that | | | | | |feature. It is 0, 1, or 2 in | | | | | |GFF3 or 1, 2, or 3 in GenBank.| | | | | |The stored value is 0, 1, or | | | | | |2, following in GFF3 format. | +-----------+-----------+-----------+---------+------------------------------+ |db_xref |Dbxref |db_xref |list of |A database cross reference | | | | |str | | +-----------+-----------+-----------+---------+------------------------------+ |N/A |ID |ID |str |feature ID | +-----------+-----------+-----------+---------+------------------------------+ |note |Note |note |str |any comment or additional | | | | | |information | +-----------+-----------+-----------+---------+------------------------------+ |translation|N/A |translation|str |the protein sequence for CDS | | | | | |features | +-----------+-----------+-----------+---------+------------------------------+ ``Location`` string +++++++++++++++++++ There are 5 types of location descriptors defined in Feature Table. This explains how they will be parsed into the bounds of ``Interval`` object (note it converts the 1-based coordinate to 0-based): 1. a single base number. e.g. 67. This is parsed to ``(66, 67)``. 2. a site between two neighboring bases. e.g. 67^68. This is parsed to ``(66, 67)``. 3. a single base from inside a range. e.g. 67.89. This is parsed to ``(66, 89)``. 4. a pair of base numbers defining a sequence span. e.g. 67..89. This is parsed to ``(66, 89)``. 5. a remote sequence identifier followed by a location descriptor defined above. e.g. J00123.1:67..89. This will be discarded because it is not on the current sequence. When it is combined with local descriptor like J00123.1:67..89,200..209, the local part will be kept to be ``(199, 209)``. .. note:: The Location string is fully stored in ``Interval.metadata`` with key ``__location``. The key starting with ``__`` is "private" and should be modified with care. 
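As an illustration of the parsing rules above, the following standalone sketch (a hypothetical helper, not skbio's actual parser) converts the first four descriptor forms to 0-based bounds::

    def parse_simple_location(loc):
        '''Convert a simple location descriptor to 0-based bounds.'''
        loc = loc.strip()
        if '..' in loc:        # span, e.g. 67..89 -> (66, 89)
            start, end = loc.split('..')
            return int(start) - 1, int(end)
        if '^' in loc:         # site between two bases, e.g. 67^68 -> (66, 67)
            start, _ = loc.split('^')
            return int(start) - 1, int(start)
        if '.' in loc:         # single base from a range, e.g. 67.89 -> (66, 89)
            start, end = loc.split('.')
            return int(start) - 1, int(end)
        return int(loc) - 1, int(loc)  # single base number, e.g. 67 -> (66, 67)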
``ORIGIN`` section ^^^^^^^^^^^^^^^^^^ The sequence in the ``ORIGIN`` section is always in lowercase for the GenBank files downloaded from NCBI. For the RNA molecules, ``t`` (thymine), instead of ``u`` (uracil) is used in the sequence. All GenBank writers follow these conventions while writing GenBank files. Format Parameters ----------------- Reader-specific Parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^ The ``constructor`` parameter can be used with the ``Sequence`` generator to specify the in-memory type of each GenBank record that is parsed. ``constructor`` should be ``Sequence`` or a sub-class of ``Sequence``. If ``constructor`` is not set, the type is detected from the unit label on the LOCUS line. For example, if it is ``bp``, the record will be read into ``DNA``; if it is ``aa``, it will be read into ``Protein``. Otherwise, it will be read into ``Sequence``. Setting ``constructor`` overrides this default behavior. ``lowercase`` is another parameter available for all GenBank readers. By default, it is set to ``True`` to read in the ``ORIGIN`` sequence as lowercase letters. This parameter is passed to ``Sequence`` or its sub-class constructor. ``seq_num`` is a parameter used with the ``Sequence``, ``DNA``, ``RNA``, and ``Protein`` GenBank readers. It specifies which GenBank record to read from a GenBank file with multiple records in it. Examples -------- Reading and Writing GenBank Files ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Suppose we have the following GenBank file example modified from [5]_: >>> gb_str = ''' ... LOCUS 3K1V_A 34 bp RNA linear SYN 10-OCT-2012 ... DEFINITION Chain A, Structure Of A Mutant Class-I Preq1. ... ACCESSION 3K1V_A ... VERSION 3K1V_A GI:260656459 ... KEYWORDS . ... SOURCE synthetic construct ... ORGANISM synthetic construct ... other sequences; artificial sequences. ... REFERENCE 1 (bases 1 to 34) ... AUTHORS NAME NAME and NAME ... TITLE Cocrystal structure of a class I preQ1 riboswitch ... JOURNAL Nat. Struct. Mol. Biol. 16 (3), 343-344 (2009) ... PUBMED 19234468 ... COMMENT SEQRES. ... FEATURES Location/Qualifiers ... source 1..34 ... /organism="synthetic construct" ... /mol_type="other RNA" ... /db_xref="taxon:32630" ... misc_binding 1..30 ... /note="Preq1 riboswitch" ... /bound_moiety="preQ1" ... ORIGIN ... 1 agaggttcta gcacatccct ctataaaaaa ctaa ... // ... ''' Now we can read it as a ``DNA`` object: >>> import io >>> from skbio import DNA, RNA, Sequence >>> gb = io.StringIO(gb_str) >>> dna_seq = DNA.read(gb) >>> dna_seq DNA ----------------------------------------------------------------- Metadata: 'ACCESSION': '3K1V_A' 'COMMENT': 'SEQRES.' 'DEFINITION': 'Chain A, Structure Of A Mutant Class-I Preq1.' 'KEYWORDS': '.' 'LOCUS': <class 'dict'> 'REFERENCE': <class 'list'> 'SOURCE': <class 'dict'> 'VERSION': '3K1V_A GI:260656459' Interval metadata: 2 interval features Stats: length: 34 has gaps: False has degenerates: False has definites: True GC-content: 35.29% ----------------------------------------------------------------- 0 AGAGGTTCTA GCACATCCCT CTATAAAAAA CTAA Since this is a riboswitch molecule, we may want to read it as ``RNA``. As the GenBank file usually has ``t`` instead of ``u`` in the sequence, we can read it as ``RNA`` by converting ``t`` to ``u``: >>> gb = io.StringIO(gb_str) >>> rna_seq = RNA.read(gb) >>> rna_seq RNA ----------------------------------------------------------------- Metadata: 'ACCESSION': '3K1V_A' 'COMMENT': 'SEQRES.' 'DEFINITION': 'Chain A, Structure Of A Mutant Class-I Preq1.' 'KEYWORDS': '.'
'LOCUS': <class 'dict'> 'REFERENCE': <class 'list'> 'SOURCE': <class 'dict'> 'VERSION': '3K1V_A GI:260656459' Interval metadata: 2 interval features Stats: length: 34 has gaps: False has degenerates: False has definites: True GC-content: 35.29% ----------------------------------------------------------------- 0 AGAGGUUCUA GCACAUCCCU CUAUAAAAAA CUAA >>> rna_seq == dna_seq.transcribe() True >>> with io.StringIO() as fh: ... print(dna_seq.write(fh, format='genbank').getvalue()) LOCUS 3K1V_A 34 bp RNA linear SYN 10-OCT-2012 DEFINITION Chain A, Structure Of A Mutant Class-I Preq1. ACCESSION 3K1V_A VERSION 3K1V_A GI:260656459 KEYWORDS . SOURCE synthetic construct ORGANISM synthetic construct other sequences; artificial sequences. REFERENCE 1 (bases 1 to 34) AUTHORS NAME NAME and NAME TITLE Cocrystal structure of a class I preQ1 riboswitch JOURNAL Nat. Struct. Mol. Biol. 16 (3), 343-344 (2009) PUBMED 19234468 COMMENT SEQRES. FEATURES Location/Qualifiers source 1..34 /db_xref="taxon:32630" /mol_type="other RNA" /organism="synthetic construct" misc_binding 1..30 /bound_moiety="preQ1" /note="Preq1 riboswitch" ORIGIN 1 agaggttcta gcacatccct ctataaaaaa ctaa // <BLANKLINE> References ---------- .. [1] http://www.ncbi.nlm.nih.gov/Sitemap/samplerecord.html .. [2] http://www.insdc.org/ .. [3] http://www.insdc.org/files/feature_table.html .. [4] http://www.ebi.ac.uk/ena/WebFeat/ .. [5] http://www.ncbi.nlm.nih.gov/nuccore/3K1V_A """
# ##=========================================================================
#class ActionDialog(object):
#    """ActionDialog wraps the dialog you are interacting with
#
#    It provides support for finding controls using attribute access,
#    item access and the control_(...) method.
#
#    You can dump information from a dialog to XML using the write_() method
#
#    A screenshot of the dialog can be taken using the underlying wrapped
#    HWND ie. my_action_dlg.wrapped_win.CaptureAsImage().save("dlg.png").
#    This is only available if you have PIL installed (fails silently
#    otherwise).
#    """
#    def __init__(self, hwnd, app = None, props = None):
#        """Initialises an ActionDialog object
#
#        ::
#           hwnd (required) The handle of the dialog
#           app An instance of an Application Object
#           props future use (when we have an XML file for reference)
#
#        """
#
#        #self.wrapped_win = controlactions.add_actions(
#        #    controls.WrapHandle(hwnd))
#        self.wrapped_win = controls.WrapHandle(hwnd)
#
#        self.app = app
#
#        dlg_controls = [self.wrapped_win, ]
#        dlg_controls.extend(self.wrapped_win.Children)
#
#    def __getattr__(self, key):
#        "Attribute access - defer to item access"
#        return self[key]
#
#    def __getitem__(self, attr):
#        "find the control that best matches attr"
#        # if it is an integer - just return the
#        # child control at that index
#        if isinstance(attr, (int, long)):
#            return self.wrapped_win.Children[attr]
#
#        # so it should be a string
#        # check if it is an attribute of the wrapped win first
#        try:
#            return getattr(self.wrapped_win, attr)
#        except (AttributeError, UnicodeEncodeError):
#            pass
#
#        # find the control that best matches our attribute
#        ctrl = findbestmatch.find_best_control_match(
#            attr, self.wrapped_win.Children)
#
#        # add actions to the control and return it
#        return ctrl
#
#    def write_(self, filename):
#        "Write the dialog to an XML file (requires elementtree)"
#        if self.app and self.app.xmlpath:
#            filename = os.path.join(self.app.xmlpath, filename + ".xml")
#
#        controls = [self.wrapped_win]
#        controls.extend(self.wrapped_win.Children)
#        props = [ctrl.GetProperties() for ctrl in controls]
#
#        XMLHelpers.WriteDialogToFile(filename, props)
#
#    def control_(self, **kwargs):
#        "Find the control that matches the arguments and return it"
#
#        # add the restriction for this particular process
#        kwargs['parent'] = self.wrapped_win
#        kwargs['process'] = self.app.process
#        kwargs['top_level_only'] = False
#
#        # try and find the dialog (waiting for a max of 1 second)
#        ctrl = findwindows.find_window(**kwargs)
#        #win = ActionDialog(win, self)
#
#        return controls.WrapHandle(ctrl)
#
#
##=========================================================================
#def _WalkDialogControlAttribs(app, attr_path):
#    "Try and resolve the dialog and 2nd attribute, return both"
#    if len(attr_path) != 2:
#        raise RuntimeError("Expecting only 2 items in the attribute path")
#
#    # get items to select between
#    # default options will filter hidden and disabled controls
#    # and will default to top level windows only
#    wins = findwindows.find_windows(process = app.process)
#
#    # wrap each so that find_best_control_match works well
#    wins = [controls.WrapHandle(win) for win in wins]
#
#    # if an integer has been specified
#    if isinstance(attr_path[0], (int, long)):
#        dialogWin = wins[attr_path[0]]
#    else:
#        # try to find the item
#        dialogWin = findbestmatch.find_best_control_match(attr_path[0], wins)
#
#    # already wrapped
#    dlg = ActionDialog(dialogWin, app)
#
#    # for each of the other attributes, resolve it against the previous value
#    attr_value = dlg
#    for attr in attr_path[1:]:
#        try:
#            attr_value = getattr(attr_value, attr)
#        except UnicodeEncodeError:
#            attr_value = attr_value[attr]
#
#    return dlg, attr_value
#
#
##=========================================================================
#class _DynamicAttributes(object):
#    "Class that builds attributes until they are ready to be resolved"
#
#    def __init__(self, app):
#        "Initialize the attributes"
#        self.app = app
#        self.attr_path = []
#
#    def __getattr__(self, attr):
#        "Attribute access - defer to item access"
#        return self[attr]
#
#    def __getitem__(self, attr):
#        "Item access[] for getting dialogs and controls from an application"
#
#        # do something with this one
#        # and return a copy of ourselves with some
#        # data related to that attribute
#        self.attr_path.append(attr)
#
#        # if we have a length of 2 then we have either
#        #   dialog.attribute
#        # or
#        #   dialog.control
#        # so go ahead and resolve
#        if len(self.attr_path) == 2:
#            dlg, final = _wait_for_function_success(
#                _WalkDialogControlAttribs, self.app, self.attr_path)
#
#            # seeing as we may already have a reference to the dialog
#            # we need to strip off the control so that our dialog
#            # reference is not messed up
#            self.attr_path = self.attr_path[:-1]
#
#            return final
#
#        # we didn't hit the limit so continue collecting the
#        # next attribute in the chain
#        return self
#
#
##=========================================================================
#def _wait_for_function_success(func, *args, **kwargs):
#    """Retry the function up to timeout trying every time_interval seconds
#
#    timeout defaults to 1 second
#    time_interval defaults to .09 of a second
#    """
#    if kwargs.has_key('time_interval'):
#        time_interval = kwargs['time_interval']
#        del kwargs['time_interval']
#    else:
#        time_interval = window_retry_interval
#
#    if kwargs.has_key('timeout'):
#        timeout = kwargs['timeout']
#        del kwargs['timeout']
#    else:
#        timeout = window_find_timeout
#
#    # keep going until we either hit the return (success)
#    # or an exception is raised (timeout)
#    while 1:
#        try:
#            return func(*args, **kwargs)
#        except:
#            if timeout > 0:
#                time.sleep(time_interval)
#                timeout -= time_interval
#            else:
#                raise
#
""" ===================================== Sparse matrices (:mod:`scipy.sparse`) ===================================== .. currentmodule:: scipy.sparse SciPy 2-D sparse matrix package for numeric data. Contents ======== Sparse matrix classes --------------------- .. autosummary:: :toctree: generated/ bsr_matrix - Block Sparse Row matrix coo_matrix - A sparse matrix in COOrdinate format csc_matrix - Compressed Sparse Column matrix csr_matrix - Compressed Sparse Row matrix dia_matrix - Sparse matrix with DIAgonal storage dok_matrix - Dictionary Of Keys based sparse matrix lil_matrix - Row-based linked list sparse matrix spmatrix - Sparse matrix base class Functions --------- Building sparse matrices: .. autosummary:: :toctree: generated/ eye - Sparse MxN matrix whose k-th diagonal is all ones identity - Identity matrix in sparse format kron - kronecker product of two sparse matrices kronsum - kronecker sum of sparse matrices diags - Return a sparse matrix from diagonals spdiags - Return a sparse matrix from diagonals block_diag - Build a block diagonal sparse matrix tril - Lower triangular portion of a matrix in sparse format triu - Upper triangular portion of a matrix in sparse format bmat - Build a sparse matrix from sparse sub-blocks hstack - Stack sparse matrices horizontally (column wise) vstack - Stack sparse matrices vertically (row wise) rand - Random values in a given shape random - Random values in a given shape Sparse matrix tools: .. autosummary:: :toctree: generated/ find Identifying sparse matrices: .. autosummary:: :toctree: generated/ issparse isspmatrix isspmatrix_csc isspmatrix_csr isspmatrix_bsr isspmatrix_lil isspmatrix_dok isspmatrix_coo isspmatrix_dia Submodules ---------- .. autosummary:: :toctree: generated/ csgraph - Compressed sparse graph routines linalg - sparse linear algebra routines Exceptions ---------- .. autosummary:: :toctree: generated/ SparseEfficiencyWarning SparseWarning Usage information ================= There are seven available sparse matrix types: 1. csc_matrix: Compressed Sparse Column format 2. csr_matrix: Compressed Sparse Row format 3. bsr_matrix: Block Sparse Row format 4. lil_matrix: List of Lists format 5. dok_matrix: Dictionary of Keys format 6. coo_matrix: COOrdinate format (aka IJV, triplet format) 7. dia_matrix: DIAgonal format To construct a matrix efficiently, use either dok_matrix or lil_matrix. The lil_matrix class supports basic slicing and fancy indexing with a similar syntax to NumPy arrays. As illustrated below, the COO format may also be used to efficiently construct matrices. Despite their similarity to NumPy arrays, it is **strongly discouraged** to use NumPy functions directly on these matrices because NumPy may not properly convert them for computations, leading to unexpected (and incorrect) results. If you do want to apply a NumPy function to these matrices, first check if SciPy has its own implementation for the given sparse matrix class, or **convert the sparse matrix to a NumPy array** (e.g. using the `toarray()` method of the class) first before applying the method. To perform manipulations such as multiplication or inversion, first convert the matrix to either CSC or CSR format. The lil_matrix format is row-based, so conversion to CSR is efficient, whereas conversion to CSC is less so. All conversions among the CSR, CSC, and COO formats are efficient, linear-time operations. 
Matrix vector product --------------------- To do a vector product between a sparse matrix and a vector simply use the matrix `dot` method, as described in its docstring: >>> import numpy as np >>> from scipy.sparse import csr_matrix >>> A = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]]) >>> v = np.array([1, 0, -1]) >>> A.dot(v) array([ 1, -3, -1], dtype=int64) .. warning:: As of NumPy 1.7, `np.dot` is not aware of sparse matrices, therefore using it will result in unexpected results or errors. The corresponding dense array should be obtained first instead: >>> np.dot(A.toarray(), v) array([ 1, -3, -1], dtype=int64) but then all the performance advantages would be lost. The CSR format is especially suitable for fast matrix vector products. Example 1 --------- Construct a 1000x1000 lil_matrix and add some values to it: >>> from scipy.sparse import lil_matrix >>> from scipy.sparse.linalg import spsolve >>> from numpy.linalg import solve, norm >>> from numpy.random import rand >>> A = lil_matrix((1000, 1000)) >>> A[0, :100] = rand(100) >>> A[1, 100:200] = A[0, :100] >>> A.setdiag(rand(1000)) Now convert it to CSR format and solve A x = b for x: >>> A = A.tocsr() >>> b = rand(1000) >>> x = spsolve(A, b) Convert it to a dense matrix and solve, and check that the result is the same: >>> x_ = solve(A.toarray(), b) Now we can compute the norm of the error with: >>> err = norm(x-x_) >>> err < 1e-10 True It should be small :) Example 2 --------- Construct a matrix in COO format: >>> from scipy import sparse >>> from numpy import array >>> I = array([0,3,1,0]) >>> J = array([0,3,1,2]) >>> V = array([4,5,7,9]) >>> A = sparse.coo_matrix((V,(I,J)),shape=(4,4)) Notice that the indices do not need to be sorted. Duplicate (i,j) entries are summed when converting to CSR or CSC. >>> I = array([0,0,1,3,1,0,0]) >>> J = array([0,2,1,3,1,0,0]) >>> V = array([1,1,1,1,1,1,1]) >>> B = sparse.coo_matrix((V,(I,J)),shape=(4,4)).tocsr() This is useful for constructing finite-element stiffness and mass matrices. Further Details --------------- CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use the .sorted_indices() and .sort_indices() methods when sorted indices are required (e.g. when passing data to other libraries).
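As a minimal sketch of this last point, assuming the CSR matrix ``B`` built in Example 2:

>>> B.sort_indices()        # sort the column indices of B in place
>>> C = B.sorted_indices()  # or obtain a copy with sorted indices
"""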
""" Wrappers to BLAS library ======================== NOTE: this module is deprecated -- use scipy.linalg.blas instead! fblas -- wrappers for Fortran [*] BLAS routines cblas -- wrappers for ATLAS BLAS routines get_blas_funcs -- query for wrapper functions. [*] If ATLAS libraries are available then Fortran routines actually use ATLAS routines and should perform equally well to ATLAS routines. Module fblas ++++++++++++ In the following all function names are shown without type prefixes. Level 1 routines ---------------- c,s = rotg(a,b) param = rotmg(d1,d2,x1,y1) x,y = rot(x,y,c,s,n=(len(x)-offx)/abs(incx),offx=0,incx=1,offy=0,incy=1,overwrite_x=0,overwrite_y=0) x,y = rotm(x,y,param,n=(len(x)-offx)/abs(incx),offx=0,incx=1,offy=0,incy=1,overwrite_x=0,overwrite_y=0) x,y = swap(x,y,n=(len(x)-offx)/abs(incx),offx=0,incx=1,offy=0,incy=1) x = scal(a,x,n=(len(x)-offx)/abs(incx),offx=0,incx=1) y = copy(x,y,n=(len(x)-offx)/abs(incx),offx=0,incx=1,offy=0,incy=1) y = axpy(x,y,n=(len(x)-offx)/abs(incx),a=1.0,offx=0,incx=1,offy=0,incy=1) xy = dot(x,y,n=(len(x)-offx)/abs(incx),offx=0,incx=1,offy=0,incy=1) xy = dotu(x,y,n=(len(x)-offx)/abs(incx),offx=0,incx=1,offy=0,incy=1) xy = dotc(x,y,n=(len(x)-offx)/abs(incx),offx=0,incx=1,offy=0,incy=1) n2 = nrm2(x,n=(len(x)-offx)/abs(incx),offx=0,incx=1) s = asum(x,n=(len(x)-offx)/abs(incx),offx=0,incx=1) k = amax(x,n=(len(x)-offx)/abs(incx),offx=0,incx=1) Prefixes: rotg,swap,copy,axpy: s,d,c,z amax: is,id,ic,iz asum,nrm2: s,d,sc,dz scal: s,d,c,z,sc,dz rotm,rotmg,dot: s,d dotu,dotc: c,z rot: s,d,cs,zd Level 2 routines ---------------- y = gemv(alpha,a,x,beta=0.0,y=,offx=0,incx=1,offy=0,incy=1,trans=0,overwrite_y=0) y = symv(alpha,a,x,beta=0.0,y=,offx=0,incx=1,offy=0,incy=1,lower=0,overwrite_y=0) y = hemv(alpha,a,x,beta=(0.0, 0.0),y=,offx=0,incx=1,offy=0,incy=1,lower=0,overwrite_y=0) x = trmv(a,x,offx=0,incx=1,lower=0,trans=0,unitdiag=0,overwrite_x=0) a = ger(alpha,x,y,incx=1,incy=1,a=0.0,overwrite_x=1,overwrite_y=1,overwrite_a=0) a = ger{u|c}(alpha,x,y,incx=1,incy=1,a=(0.0,0.0),overwrite_x=1,overwrite_y=1,overwrite_a=0) Prefixes: gemv, trmv: s,d,c,z symv,ger: s,d hemv,geru,gerc: c,z Level 3 routines ---------------- c = gemm(alpha,a,b,beta=0.0,c=,trans_a=0,trans_b=0,overwrite_c=0) Prefixes: gemm: s,d,c,z Module cblas ++++++++++++ In the following all function names are shown without type prefixes. Level 1 routines ---------------- z = axpy(x,y,n=len(x)/abs(incx),a=1.0,incx=1,incy=incx,overwrite_y=0) Prefixes: axpy: s,d,c,z """
""" Simple config ============= Although CherryPy uses the :mod:`Python logging module <logging>`, it does so behind the scenes so that simple logging is simple, but complicated logging is still possible. "Simple" logging means that you can log to the screen (i.e. console/stdout) or to a file, and that you can easily have separate error and access log files. Here are the simplified logging settings. You use these by adding lines to your config file or dict. You should set these at either the global level or per application (see next), but generally not both. * ``log.screen``: Set this to True to have both "error" and "access" messages printed to stdout. * ``log.access_file``: Set this to an absolute filename where you want "access" messages written. * ``log.error_file``: Set this to an absolute filename where you want "error" messages written. Many events are automatically logged; to log your own application events, call :func:`cherrypy.log`. Architecture ============ Separate scopes --------------- CherryPy provides log managers at both the global and application layers. This means you can have one set of logging rules for your entire site, and another set of rules specific to each application. The global log manager is found at :func:`cherrypy.log`, and the log manager for each application is found at :attr:`app.log<cherrypy._cptree.Application.log>`. If you're inside a request, the latter is reachable from ``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain a reference to the ``app``: either the return value of :func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used :func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``. By default, the global logs are named "cherrypy.error" and "cherrypy.access", and the application logs are named "cherrypy.error.2378745" and "cherrypy.access.2378745" (the number is the id of the Application object). This means that the application logs "bubble up" to the site logs, so if your application has no log handlers, the site-level handlers will still log the messages. Errors vs. Access ----------------- Each log manager handles both "access" messages (one per HTTP request) and "error" messages (everything else). Note that the "error" log is not just for errors! The format of access messages is highly formalized, but the error log isn't--it receives messages from a variety of sources (including full error tracebacks, if enabled). Custom Handlers =============== The simple settings above work by manipulating Python's standard :mod:`logging` module. So when you need something more complex, the full power of the standard module is yours to exploit. You can borrow or create custom handlers, formats, filters, and much more. Here's an example that skips the standard FileHandler and uses a RotatingFileHandler instead: :: #python log = app.log # Remove the default FileHandlers if present. log.error_file = "" log.access_file = "" maxBytes = getattr(log, "rot_maxBytes", 10000000) backupCount = getattr(log, "rot_backupCount", 1000) # Make a new RotatingFileHandler for the error log. fname = getattr(log, "rot_error_file", "error.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.error_log.addHandler(h) # Make a new RotatingFileHandler for the access log. 
fname = getattr(log, "rot_access_file", "access.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.access_log.addHandler(h) The ``rot_*`` attributes are pulled straight from the application log object. Since "log.*" config entries simply set attributes on the log object, you can add custom attributes to your heart's content. Note that these handlers are used ''instead'' of the default, simple handlers outlined above (so don't set the "log.error_file" config entry, for example). """
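For the simple settings described at the top, a minimal sketch (the file paths
are placeholders) is just a config update plus a call to
:func:`cherrypy.log`::

    import cherrypy

    cherrypy.config.update({
        'log.screen': False,
        'log.access_file': '/var/log/myapp/access.log',
        'log.error_file': '/var/log/myapp/error.log',
    })

    cherrypy.log("Application configured")  # written to the error log
"""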
# -*- encoding: utf-8 -*-
##############################################################################
#
#    OpenERP, Open Source Management Solution
#    Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
#    This program is free software: you can redistribute it and/or modify
#    it under the terms of the GNU Affero General Public License as
#    published by the Free Software Foundation, either version 3 of the
#    License, or (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU Affero General Public License for more details.
#
#    You should have received a copy of the GNU Affero General Public License
#    along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides you with a German chart of accounts based on the SKR03.
# Under the current settings, the company is not liable for VAT.
# This basic setting is very easy to change and, as a rule, requires an
# initial assignment of tax accounts to products and/or general ledger
# accounts, or to partners.
# The output taxes (full rate, reduced rate, and tax-exempt) should be stored
# on the product master data (depending on the applicable tax regulations).
# The assignment is made on the Accounting tab (category: output tax).
# The input taxes (full rate, reduced rate, and tax-exempt) should likewise be
# stored on the product master data (depending on the applicable tax
# regulations). The assignment is made on the Accounting tab (category:
# input tax).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be assigned on the
# partner (supplier/customer), depending on the supplier's or customer's
# country of origin. The assignment on the customer takes precedence over
# the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'output tax 19%' to 'tax-exempt imports from the EU') so that this mapping
# can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g. input tax base amount, full rate 19%).
# The tax amount appears under the category 'input taxes' (e.g. input tax
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the output tax base amount (e.g. output tax base amount, full rate
# 19%).
# The tax amount appears under the category 'output tax' (e.g. output tax
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated.
# The assigned tax codes can be reviewed on the level of the individual
# invoice (incoming and outgoing invoices) and adjusted there if necessary.
# Credit notes result in a correction (counter entry) of the tax posting,
# in the form of a mirror-image posting.
# SKR04
# =====
# This module provides you with a German chart of accounts based on the SKR04.
# Under the current settings, the company is not liable for VAT, i.e. by
# default there is no assignment of products and general ledger accounts to
# tax keys.
# This basic setting is very easy to change and, as a rule, requires an
# initial assignment of tax keys to products and/or general ledger accounts,
# or to partners.
# The output taxes (full rate, reduced rate, and tax-exempt) should be stored
# on the product master data (depending on the applicable tax regulations).
# The assignment is made on the Accounting tab (category: output tax).
# The input taxes (full rate, reduced rate, and tax-exempt) should likewise be
# stored on the product master data (depending on the applicable tax
# regulations). The assignment is made on the Accounting tab (category:
# input tax).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be assigned on the
# partner (supplier/customer), depending on the supplier's or customer's
# country of origin. The assignment on the customer takes precedence over
# the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'output tax 19%' to 'tax-exempt imports from the EU') so that this mapping
# can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g. input tax base amount, full rate 19%).
# The tax amount appears under the category 'input taxes' (e.g. input tax
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the output tax base amount (e.g. output tax base amount, full rate
# 19%).
# The tax amount appears under the category 'output tax' (e.g. output tax
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated.
# The assigned tax codes can be reviewed on the level of the individual
# invoice (incoming and outgoing invoices) and adjusted there if necessary.
# Credit notes result in a correction (counter entry) of the tax posting,
# in the form of a mirror-image posting.
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
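As a brief usage sketch (the section and option names are invented for
illustration):

    import configparser

    parser = configparser.ConfigParser()
    parser.read_string('[server]\nhost = localhost\nport = 8080\n')

    host = parser.get('server', 'host')      # 'localhost'
    port = parser.getint('server', 'port')   # 8080
    debug = parser.getboolean('server', 'debug', fallback=False)
"""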
#!/usr/bin/env python # ***** BEGIN LICENSE BLOCK ***** # Version: MPL 1.1/GPL 2.0/LGPL 2.1 # # The contents of this file are subject to the Mozilla Public License Version # 1.1 (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # http://www.mozilla.org/MPL/ # # Software distributed under the License is distributed on an "AS IS" basis, # WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License # for the specific language governing rights and limitations under the # License. # # The Original Code is font utility code. # # The Initial Developer of the Original Code is Mozilla Corporation. # Portions created by the Initial Developer are Copyright (C) 2009 # the Initial Developer. All Rights Reserved. # # Contributor(s): # NAME <EMAIL> # # Alternatively, the contents of this file may be used under the terms of # either the GNU General Public License Version 2 or later (the "GPL"), or # the GNU Lesser General Public License Version 2.1 or later (the "LGPL"), # in which case the provisions of the GPL or the LGPL are applicable instead # of those above. If you wish to allow use of your version of this file only # under the terms of either the GPL or the LGPL, and not to allow others to # use your version of this file under the terms of the MPL, indicate your # decision by deleting the provisions above and replace them with the notice # and other provisions required by the GPL or the LGPL. If you do not delete # the provisions above, a recipient may use your version of this file under # the terms of any one of the MPL, the GPL or the LGPL. # # ***** END LICENSE BLOCK ***** */ # eotlitetool.py - create EOT version of OpenType font for use with IE # # Usage: eotlitetool.py [-o output-filename] font1 [font2 ...] # # OpenType file structure # http://www.microsoft.com/typography/otspec/otff.htm # # Types: # # BYTE 8-bit unsigned integer. # CHAR 8-bit signed integer. # USHORT 16-bit unsigned integer. # SHORT 16-bit signed integer. # ULONG 32-bit unsigned integer. # Fixed 32-bit signed fixed-point number (16.16) # LONGDATETIME Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer. # # SFNT Header # # Fixed sfnt version // 0x00010000 for version 1.0. # USHORT numTables // Number of tables. # USHORT searchRange // (Maximum power of 2 <= numTables) x 16. # USHORT entrySelector // Log2(maximum power of 2 <= numTables). # USHORT rangeShift // NumTables x 16-searchRange. # # Table Directory # # ULONG tag // 4-byte identifier. # ULONG checkSum // CheckSum for this table. # ULONG offset // Offset from beginning of TrueType font file. # ULONG length // Length of this table. 
# # OS/2 Table (Version 4) # # USHORT version // 0x0004 # SHORT xAvgCharWidth # USHORT usWeightClass # USHORT usWidthClass # USHORT fsType # SHORT ySubscriptXSize # SHORT ySubscriptYSize # SHORT ySubscriptXOffset # SHORT ySubscriptYOffset # SHORT ySuperscriptXSize # SHORT ySuperscriptYSize # SHORT ySuperscriptXOffset # SHORT ySuperscriptYOffset # SHORT yStrikeoutSize # SHORT yStrikeoutPosition # SHORT sFamilyClass # BYTE panose[10] # ULONG ulUnicodeRange1 // Bits 0-31 # ULONG ulUnicodeRange2 // Bits 32-63 # ULONG ulUnicodeRange3 // Bits 64-95 # ULONG ulUnicodeRange4 // Bits 96-127 # CHAR achVendID[4] # USHORT fsSelection # USHORT usFirstCharIndex # USHORT usLastCharIndex # SHORT sTypoAscender # SHORT sTypoDescender # SHORT sTypoLineGap # USHORT usWinAscent # USHORT usWinDescent # ULONG ulCodePageRange1 // Bits 0-31 # ULONG ulCodePageRange2 // Bits 32-63 # SHORT sxHeight # SHORT sCapHeight # USHORT usDefaultChar # USHORT usBreakChar # USHORT usMaxContext # # # The Naming Table is organized as follows: # # [name table header] # [name records] # [string data] # # Name Table Header # # USHORT format // Format selector (=0). # USHORT count // Number of name records. # USHORT stringOffset // Offset to start of string storage (from start of table). # # Name Record # # USHORT platformID // Platform ID. # USHORT encodingID // Platform-specific encoding ID. # USHORT languageID // Language ID. # USHORT nameID // Name ID. # USHORT length // String length (in bytes). # USHORT offset // String offset from start of storage area (in bytes). # # head Table # # Fixed tableVersion // Table version number 0x00010000 for version 1.0. # Fixed fontRevision // Set by font manufacturer. # ULONG checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum. # ULONG magicNumber // Set to 0x5F0F3CF5. # USHORT flags # USHORT unitsPerEm // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines. # LONGDATETIME created // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # LONGDATETIME modified // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # SHORT xMin // For all glyph bounding boxes. # SHORT yMin # SHORT xMax # SHORT yMax # USHORT macStyle # USHORT lowestRecPPEM // Smallest readable size in pixels. # SHORT fontDirectionHint # SHORT indexToLocFormat // 0 for short offsets, 1 for long. # SHORT glyphDataFormat // 0 for current format. # # # # Embedded OpenType (EOT) file format # http://www.w3.org/Submission/EOT/ # # EOT version 0x00020001 # # An EOT font consists of a header with the original OpenType font # appended at the end. Most of the data in the EOT header is simply a # copy of data from specific tables within the font data. The exceptions # are the 'Flags' field and the root string name field. The root string # is a set of names indicating domains for which the font data can be # used. A null root string implies the font data can be used anywhere. # The EOT header is in little-endian byte order but the font data remains # in big-endian order as specified by the OpenType spec. 
# # Overall structure: # # [EOT header] # [EOT name records] # [font data] # # EOT header # # ULONG eotSize // Total structure length in bytes (including string and font data) # ULONG fontDataSize // Length of the OpenType font (FontData) in bytes # ULONG version // Version number of this format - 0x00020001 # ULONG flags // Processing Flags (0 == no special processing) # BYTE fontPANOSE[10] // OS/2 Table panose # BYTE charset // DEFAULT_CHARSET (0x01) # BYTE italic // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise # ULONG weight // OS/2 Table usWeightClass # USHORT fsType // OS/2 Table fsType (specifies embedding permission flags) # USHORT magicNumber // Magic number for EOT file - 0x504C. # ULONG unicodeRange1 // OS/2 Table ulUnicodeRange1 # ULONG unicodeRange2 // OS/2 Table ulUnicodeRange2 # ULONG unicodeRange3 // OS/2 Table ulUnicodeRange3 # ULONG unicodeRange4 // OS/2 Table ulUnicodeRange4 # ULONG codePageRange1 // OS/2 Table ulCodePageRange1 # ULONG codePageRange2 // OS/2 Table ulCodePageRange2 # ULONG checkSumAdjustment // head Table CheckSumAdjustment # ULONG reserved[4] // Reserved - must be 0 # USHORT padding1 // Padding - must be 0 # # EOT name records # # USHORT FamilyNameSize // Font family name size in bytes # BYTE FamilyName[FamilyNameSize] // Font family name (name ID = 1), little-endian UTF-16 # USHORT Padding2 // Padding - must be 0 # # USHORT StyleNameSize // Style name size in bytes # BYTE StyleName[StyleNameSize] // Style name (name ID = 2), little-endian UTF-16 # USHORT Padding3 // Padding - must be 0 # # USHORT VersionNameSize // Version name size in bytes # bytes VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16 # USHORT Padding4 // Padding - must be 0 # # USHORT FullNameSize // Full name size in bytes # BYTE FullName[FullNameSize] // Full name (name ID = 4), little-endian UTF-16 # USHORT Padding5 // Padding - must be 0 # # USHORT RootStringSize // Root string size in bytes # BYTE RootString[RootStringSize] // Root string, little-endian UTF-16
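#
# As a minimal illustration (not part of the original script), the SFNT
# header described above can be read with Python's struct module; the
# filename below is a placeholder, and '>' selects the big-endian layout
# required by the OpenType spec:
#
#   import struct
#
#   with open('font.ttf', 'rb') as f:
#       header = f.read(12)
#   (sfnt_version, num_tables, search_range,
#    entry_selector, range_shift) = struct.unpack('>IHHHH', header)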
# -*- coding: utf-8 -*- # # Copyright (C) 2011-2015 NAME <EMAIL> # Copyright (C) 2011 xt <EMAIL> # Copyright (C) 2012 NAME "FiXato" NAME <EMAIL> # Copyright (C) 2012 USERNAME <EMAIL> # Copyright (C) 2013 NAME <EMAIL> # Copyright (C) 2013 NAME <EMAIL> # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # # # Shorten URLs with own HTTP server. # (this script requires Python >= 2.6) # # How does it work? # # 1. The URLs displayed in buffers are shortened and stored in memory (saved in # a file when script is unloaded). # 2. URLs shortened can be displayed below messages, in a dedicated buffer, or # as HTML page in your browser. # 3. This script embeds an HTTP server, which will redirect shortened URLs # to real URL and display list of all URLs if you browse address without # URL key. # 4. It is recommended to customize/protect the HTTP server using script # options (see /help urlserver). # # List of URLs: # - in WeeChat: /urlserver # - in browser: http://myhost.org:1234/ # # History: # # 2015-05-16, NAME <EMAIL>: # v1.9: add option "http_auth_redirect", fix flake8 warnings # 2015-04-14, NAME <EMAIL>: # v1.8: evaluate option "http_auth" (to use secured data) # 2013-12-09, WakiMiko # v1.7: use HTTPS for youtube embedding # 2013-12-09, NAME <EMAIL>: # v1.6: add reason phrase after HTTP code 302 and empty line at the end # 2013-12-05, NAME <EMAIL>: # v1.5: replace HTTP 301 by 302 # 2013-12-05, NAME <EMAIL>: # v1.4: use HTTP 301 instead of meta for the redirection when # there is no referer in request # 2013-11-29, NAME <EMAIL> # v1.3: - make it possible to run reverse proxy in a subdirectory by # generating relative links and using the <base> tag. to use this, # set http_hostname_display to 'domain.tld/subdir'. # - mention favicon explicitly (now works in subdirectories, too). # - update favicon to new weechat logo. # - set meta referrer to never in redirect page, so chrome users' # referrers are hidden, too # - fix http_auth in chrome and other browsers which send header # names in lower case # 2013-05-04, NAME <EMAIL> # v1.2: added a "http_scheme_display" option. This makes it possible to run # the server behind a reverse proxy with https:// URLs. # 2013-03-25, NAME (@irc.freenode.net): # v1.1: made links relative in the html, so that they can be followed when # accessing the listing remotely using the weechat box's IP directly. # 2012-12-12, USERNAME <EMAIL>: # v1.0: add options "http_time_format", "display_msg_in_url" (works with # relay/irc), "color_in_msg", "separators" # 2012-04-18, NAME "FiXato" NAME <EMAIL>: # v0.9: add options "http_autostart", "http_port_display" # "url_min_length" can now be set to -1 to auto-detect minimal url # length; also, if port is 80 now, :80 will no longer be added to the # shortened url. # 2012-04-17, NAME "FiXato" NAME <EMAIL>: # v0.8: add more CSS support by adding options "http_fg_color", # "http_css_url" and "http_title", add descriptive classes to most # html elements. 
# 2012-04-11, NAME <EMAIL>: # v0.7: fix truncated HTML page (thanks to xt), fix base64 decoding with # Python 3.x # 2012-01-19, NAME <EMAIL>: # v0.6: add option "http_hostname_display" # 2012-01-03, NAME <EMAIL>: # v0.5: make script compatible with Python 3.x # 2011-10-31, NAME <EMAIL>: # v0.4: add options "http_embed_youtube_size" and "http_bg_color", # add extensions jpeg/bmp/svg for embedded images # 2011-10-30, NAME <EMAIL>: # v0.3: escape HTML chars for page with list of URLs, add option # "http_prefix_suffix", disable highlights on urlserver buffer # 2011-10-30, NAME <EMAIL>: # v0.2: fix error on loading of file "urlserver_list.txt" when it is empty # 2011-10-30, NAME <EMAIL>: # v0.1: initial release #
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. Otherwise, return a list of tuples with (name, value) for each option in the section. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
""" ============== Array Creation ============== Introduction ============ There are 5 general mechanisms for creating arrays: 1) Conversion from other Python structures (e.g., lists, tuples) 2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, etc.) 3) Reading arrays from disk, either from standard or custom formats 4) Creating arrays from raw bytes through the use of strings or buffers 5) Use of special library functions (e.g., random) This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or record arrays. Both of those are covered in their own sections. Converting Python array_like Objects to Numpy Arrays ==================================================== In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: :: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Intrinsic Numpy Array Creation ============================== Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. ``>>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]])`` ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: :: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: :: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description: :: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. Reading Arrays From Disk ======================== This is presumably the most common case of large array creation. 
The details, of course, depend greatly on the format of data on disk and so this section can only give general pointers on how to handle various formats. Standard Binary Formats ----------------------- Various fields have standard formats for array data. The following lists the ones with known python libraries to read them and return numpy arrays (there may be others for which it is possible to read and convert to numpy arrays so check the last section as well) :: HDF5: PyTables FITS: PyFITS Examples of formats that cannot be read directly but for which it is not hard to convert are those formats supported by libraries like PIL (able to read and write many image formats such as jpg, png, etc). Common ASCII Formats -------------------- Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part of matplotlib). More generic ascii files can be read using the io package in scipy. Custom Binary Formats --------------------- There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind your byteorder though!) If a good C or C++ library exists that reads the data, one can wrap that library with a variety of techniques though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++. Use of Special Libraries ------------------------ There are libraries that can be used to generate arrays for special purposes and it isn't possible to enumerate all of them. The most common uses are use of the many array generation functions in random that can generate arrays of random values, and some utility functions to generate special matrices (e.g. diagonal). """
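One concrete illustration of the custom-binary route described above (a sketch; the filename and dtype are arbitrary): numpy will round-trip raw bytes with tofile()/fromfile(), but the reader must restate the dtype, byte order included, and the shape, since none of that is stored::

    import numpy as np

    a = np.arange(6, dtype='<f8')   # little-endian float64, stated explicitly
    a.tofile('data.bin')            # raw bytes only: no shape or dtype header

    # mind your byteorder: the reader supplies dtype and shape itself
    b = np.fromfile('data.bin', dtype='<f8').reshape(2, 3)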
""" Database with model functions. To be used with the L{cc.ivs.sigproc.fit.minimizer} function or with the L{evaluate} function in this module. >>> p = plt.figure() >>> x = np.linspace(-10,10,1000) >>> p = plt.plot(x,evaluate('gauss',x,[5,1.,2.,0.5]),label='gauss') >>> p = plt.plot(x,evaluate('voigt',x,[20.,1.,1.5,3.,0.5]),label='voigt') >>> p = plt.plot(x,evaluate('lorentz',x,[5,1.,2.,0.5]),label='lorentz') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) ]include figure]]ivs_sigproc_fit_funclib01.png] >>> p = plt.figure() >>> x = np.linspace(0,10,1000)[1:] >>> p = plt.plot(x,evaluate('power_law',x,[2.,3.,1.5,0,0.5]),label='power_law') >>> p = plt.plot(x,evaluate('power_law',x,[2.,3.,1.5,0,0.5])+evaluate('gauss',x,[1.,5.,0.5,0,0]),label='power_law + gauss') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) ]include figure]]ivs_sigproc_fit_funclib02.png] >>> p = plt.figure() >>> x = np.linspace(0,10,1000) >>> p = plt.plot(x,evaluate('sine',x,[1.,2.,0,0]),label='sine') >>> p = plt.plot(x,evaluate('sine_linfreqshift',x,[1.,0.5,0,0,.5]),label='sine_linfreqshift') >>> p = plt.plot(x,evaluate('sine_expfreqshift',x,[1.,0.5,0,0,1.2]),label='sine_expfreqshift') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) ]include figure]]ivs_sigproc_fit_funclib03.png] >>> p = plt.figure() >>> p = plt.plot(x,evaluate('sine',x,[1.,2.,0,0]),label='sine') >>> p = plt.plot(x,evaluate('sine_orbit',x,[1.,2.,0,0,0.1,10.,0.1]),label='sine_orbit') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) ]include figure]]ivs_sigproc_fit_funclib03a.png] >>> p = plt.figure() >>> x_single = np.linspace(0,10,1000) >>> x_double = np.vstack([x_single,x_single]) >>> p = plt.plot(x_single,evaluate('kepler_orbit',x_single,[2.5,0.,0.5,0,3,1.]),label='kepler_orbit (single)') >>> y_double = evaluate('kepler_orbit',x_double,[2.5,0.,0.5,0,3,2.,-4,2.],type='double') >>> p = plt.plot(x_double[0],y_double[0],label='kepler_orbit (double 1)') >>> p = plt.plot(x_double[1],y_double[1],label='kepler_orbit (double 2)') >>> p = plt.plot(x,evaluate('box_transit',x,[2.,0.4,0.1,0.3,0.5]),label='box_transit') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) ]include figure]]ivs_sigproc_fit_funclib04.png] >>> p = plt.figure() >>> x = np.linspace(-1,1,1000) >>> gammas = [-0.25,0.1,0.25,0.5,1,2,4] >>> y = np.array([evaluate('soft_parabola',x,[1.,0,1.,gamma]) for gamma in gammas]) divide by zero encountered in power >>> for iy,gamma in zip(y,gammas): p = plt.plot(x,iy,label="soft_parabola $\gamma$={:.2f}".format(gamma)) >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) ]include figure]]ivs_sigproc_fit_funclib05.png] >>> p = plt.figure() >>> x = np.logspace(-1,2,1000) >>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='micron',flux_units='W/m3') >>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='micron',flux_units='W/m3') >>> wien = evaluate('wien',x,[10000.,1.],wave_units='micron',flux_units='W/m3') >>> p = plt.subplot(221) >>> p = plt.title(r'$\lambda$ vs $F_\lambda$') >>> p = plt.loglog(x,blbo,label='Black Body') >>> p = plt.loglog(x,raje,label='Rayleigh-Jeans') >>> p = plt.loglog(x,wien,label='Wien') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) >>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='micron',flux_units='Jy') >>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='micron',flux_units='Jy') >>> wien = evaluate('wien',x,[10000.,1.],wave_units='micron',flux_units='Jy') >>> p = 
plt.subplot(223) >>> p = plt.title(r"$\lambda$ vs $F_\nu$") >>> p = plt.loglog(x,blbo,label='Black Body') >>> p = plt.loglog(x,raje,label='Rayleigh-Jeans') >>> p = plt.loglog(x,wien,label='Wien') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) >>> x = np.logspace(0.47,3.47,1000) >>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='THz',flux_units='Jy') >>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='THz',flux_units='Jy') >>> wien = evaluate('wien',x,[10000.,1.],wave_units='THz',flux_units='Jy') >>> p = plt.subplot(224) >>> p = plt.title(r"$\nu$ vs $F_\nu$") >>> p = plt.loglog(x,blbo,label='Black Body') >>> p = plt.loglog(x,raje,label='Rayleigh-Jeans') >>> p = plt.loglog(x,wien,label='Wien') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) >>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='THz',flux_units='W/m3') >>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='THz',flux_units='W/m3') >>> wien = evaluate('wien',x,[10000.,1.],wave_units='THz',flux_units='W/m3') >>> p = plt.subplot(222) >>> p = plt.title(r"$\nu$ vs $F_\lambda$") >>> p = plt.loglog(x,blbo,label='Black Body') >>> p = plt.loglog(x,raje,label='Rayleigh-Jeans') >>> p = plt.loglog(x,wien,label='Wien') >>> leg = plt.legend(loc='best') >>> leg.get_frame().set_alpha(0.5) ]include figure]]ivs_sigproc_fit_funclib06.png] """
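The doctests above go through this module's `evaluate` dispatcher. As a rough plain-numpy sketch, and assuming the parameter order [amplitude, mu, sigma, constant] suggested by the examples (an assumption, not the library's verified definition), the 'gauss' model plausibly computes::

    import numpy as np

    def gauss(x, p):
        # assumed parameter order: [amplitude, mu, sigma, constant]
        A, mu, sigma, c = p
        return A * np.exp(-(x - mu) ** 2 / (2. * sigma ** 2)) + c

    x = np.linspace(-10, 10, 1000)
    y = gauss(x, [5., 1., 2., 0.5])   # mirrors evaluate('gauss', x, [5, 1., 2., 0.5])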
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
#!/usr/bin/env python # (c) 2013, NAME <EMAIL> # # This file is part of Ansible. # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # # # Author: NAME <EMAIL> # # Description: # This module queries local or remote Docker daemons and generates # inventory information. # # This plugin does not support targeting of specific hosts using the --host # flag. Instead, it queries the Docker API for each container, running # or not, and returns this data all at once. # # The plugin returns the following custom attributes on Docker containers: # docker_args # docker_config # docker_created # docker_driver # docker_exec_driver # docker_host_config # docker_hostname_path # docker_hosts_path # docker_id # docker_image # docker_name # docker_network_settings # docker_path # docker_resolv_conf_path # docker_state # docker_volumes # docker_volumes_rw # # Requirements: # The docker-py module: https://github.com/dotcloud/docker-py # # Notes: # A config file can be used to configure this inventory module, and there # are several environment variables that can be set to modify the behavior # of the plugin at runtime: # DOCKER_CONFIG_FILE # DOCKER_HOST # DOCKER_VERSION # DOCKER_TIMEOUT # DOCKER_PRIVATE_SSH_PORT # DOCKER_DEFAULT_IP # # Environment Variables: # environment variable: DOCKER_CONFIG_FILE # description: # - A path to a Docker inventory hosts/defaults file in YAML format # - A sample file has been provided, colocated with the inventory # file called 'docker.yml' # required: false # default: Uses docker.docker.Client constructor defaults # environment variable: DOCKER_HOST # description: # - The socket on which to connect to a Docker daemon API # required: false # default: Uses docker.docker.Client constructor defaults # environment variable: DOCKER_VERSION # description: # - Version of the Docker API to use # default: Uses docker.docker.Client constructor defaults # required: false # environment variable: DOCKER_TIMEOUT # description: # - Timeout in seconds for connections to Docker daemon API # default: Uses docker.docker.Client constructor defaults # required: false # environment variable: DOCKER_PRIVATE_SSH_PORT # description: # - The private port (container port) on which SSH is listening # for connections # default: 22 # required: false # environment variable: DOCKER_DEFAULT_IP # description: # - This environment variable overrides the container SSH connection # IP address (aka, 'ansible_ssh_host') # # This option allows one to override the ansible_ssh_host whenever # Docker has exercised its default behavior of binding private ports # to all interfaces of the Docker host. This behavior, when dealing # with remote Docker hosts, does not allow Ansible to determine # a proper host IP address on which to connect via SSH to containers. # By default, this inventory module assumes all IP_ADDRESS-exposed # ports to be bound to localhost:<port>.
To override this # behavior, for example, to bind a container's SSH port to the public # interface of its host, one must manually set this IP. # # It is preferable to launch Docker containers with # ports exposed on publicly accessible IP addresses, particularly # if the containers are to be targeted by Ansible for remote # configuration, not accessible via localhost SSH connections. # # Docker containers can be explicitly exposed on IP addresses by # a) starting the daemon with the --ip argument # b) running containers with the -P/--publish ip::containerPort # argument # default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker # required: false # # Examples: # Use the config file: # DOCKER_CONFIG_FILE=./docker.yml docker.py --list # # Connect to docker instance on localhost port 4243 # DOCKER_HOST=tcp://localhost:4243 docker.py --list # # Any container's ssh port exposed on IP_ADDRESS will be mapped to # another IP address (where Ansible will attempt to connect via SSH) # DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
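For context, the kind of query this plugin performs can be sketched with the legacy docker-py Client API that the Requirements section points to; the socket URL and printed fields are illustrative, not the plugin's exact code::

    import os
    import docker  # docker-py, the module listed under Requirements

    # honor the same DOCKER_HOST convention the plugin documents
    client = docker.Client(base_url=os.environ.get(
        'DOCKER_HOST', 'unix://var/run/docker.sock'))

    # every container, running or not, as described above
    for container in client.containers(all=True):
        print(container['Id'], container.get('Names'))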
""" Low-level LAPACK functions (:mod:`scipy.linalg.lapack`) ======================================================= This module contains low-level functions from the LAPACK library. The `*gegv` family of routines have been removed from LAPACK 3.6.0 and have been deprecated in SciPy 0.17.0. They will be removed in a future release. .. versionadded:: 0.12.0 .. warning:: These functions do little to no error checking. It is possible to cause crashes by mis-using them, so prefer using the higher-level routines in `scipy.linalg`. Finding functions ----------------- .. autosummary:: get_lapack_funcs All functions ------------- .. autosummary:: :toctree: generated/ sgbsv dgbsv cgbsv zgbsv sgbtrf dgbtrf cgbtrf zgbtrf sgbtrs dgbtrs cgbtrs zgbtrs sgebal dgebal cgebal zgebal sgees dgees cgees zgees sgeev dgeev cgeev zgeev sgeev_lwork dgeev_lwork cgeev_lwork zgeev_lwork sgegv dgegv cgegv zgegv sgehrd dgehrd cgehrd zgehrd sgehrd_lwork dgehrd_lwork cgehrd_lwork zgehrd_lwork sgelss dgelss cgelss zgelss sgelss_lwork dgelss_lwork cgelss_lwork zgelss_lwork sgelsd dgelsd cgelsd zgelsd sgelsd_lwork dgelsd_lwork cgelsd_lwork zgelsd_lwork sgelsy dgelsy cgelsy zgelsy sgelsy_lwork dgelsy_lwork cgelsy_lwork zgelsy_lwork sgeqp3 dgeqp3 cgeqp3 zgeqp3 sgeqrf dgeqrf cgeqrf zgeqrf sgerqf dgerqf cgerqf zgerqf sgesdd dgesdd cgesdd zgesdd sgesdd_lwork dgesdd_lwork cgesdd_lwork zgesdd_lwork sgesvd dgesvd cgesvd zgesvd sgesvd_lwork dgesvd_lwork cgesvd_lwork zgesvd_lwork sgesv dgesv cgesv zgesv sgetrf dgetrf cgetrf zgetrf sgetri dgetri cgetri zgetri sgetri_lwork dgetri_lwork cgetri_lwork zgetri_lwork sgetrs dgetrs cgetrs zgetrs sgges dgges cgges zgges sggev dggev cggev zggev chbevd zhbevd chbevx zhbevx cheev zheev cheevd zheevd cheevr zheevr chegv zhegv chegvd zhegvd chegvx zhegvx slarf dlarf clarf zlarf slarfg dlarfg clarfg zlarfg slartg dlartg clartg zlartg slasd4 dlasd4 slaswp dlaswp claswp zlaswp slauum dlauum clauum zlauum spbsv dpbsv cpbsv zpbsv spbtrf dpbtrf cpbtrf zpbtrf spbtrs dpbtrs cpbtrs zpbtrs sposv dposv cposv zposv spotrf dpotrf cpotrf zpotrf spotri dpotri cpotri zpotri spotrs dpotrs cpotrs zpotrs crot zrot strsyl dtrsyl ctrsyl ztrsyl strtri dtrtri ctrtri ztrtri strtrs dtrtrs ctrtrs ztrtrs cunghr zunghr cungqr zungqr cungrq zungrq cunmqr zunmqr sgtsv dgtsv cgtsv zgtsv sptsv dptsv cptsv zptsv slamch dlamch sorghr dorghr sorgqr dorgqr sorgrq dorgrq sormqr dormqr ssbev dsbev ssbevd dsbevd ssbevx dsbevx ssyev dsyev ssyevd dsyevd ssyevr dsyevr ssygv dsygv ssygvd dsygvd ssygvx dsygvx slange dlange clange zlange ilaver """
""" Define a simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able create a solution in his preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. ``\\x00``. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array's format. It is an ASCII string which contains a Python literal expression of a dictionary. It is terminated by a newline (``\\n``) and padded with spaces (``\\x20``) to make the total length of ``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment purposes. The dictionary contains three keys: "descr" : dtype.descr An object that can be passed as an argument to the `numpy.dtype` constructor to create the array's dtype. "fortran_order" : bool Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency. "shape" : tuple of int The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. ``dtype.hasobject is True``), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on ``fortran_order``) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that ``shape=()`` means there is 1 element) by ``dtype.itemsize``. Notes ----- The ``.npy`` format, including reasons for creating it and a comparison of alternatives, is described fully in the "npy-format" NEP. """
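Because the version 1.0 byte layout above is fully specified, a header can be parsed with nothing beyond the standard library; a sketch with no error handling::

    import ast
    import struct

    def read_npy_header(f):
        assert f.read(6) == b'\x93NUMPY'           # magic string
        major, minor = f.read(1)[0], f.read(1)[0]  # format version bytes
        (header_len,) = struct.unpack('<H', f.read(2))  # little-endian ushort
        # the header is a Python literal dict, newline-terminated, space-padded
        header = ast.literal_eval(f.read(header_len).decode('ascii'))
        return header['descr'], header['fortran_order'], header['shape']

For a non-object array, the bytes that follow can then be handed to numpy along with the recovered dtype and shape.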
""" ============================= Subclassing ndarray in python ============================= Credits ------- This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses. Introduction ------------ Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass. ndarrays and object creation ============================ Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are: #. Explicit constructor call - as in ``MySubClass(params)``. This is the usual route to Python instance creation. #. View casting - casting an existing ndarray as a given subclass #. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See :ref:`new-from-template` for more details The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation. .. _view-casting: View casting ------------ *View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass: >>> import numpy as np >>> # create a completely useless ndarray subclass >>> class C(np.ndarray): pass >>> # create a standard ndarray >>> arr = np.zeros((3,)) >>> # take a view of it, as our useless subclass >>> c_arr = arr.view(C) >>> type(c_arr) <class 'C'> .. _new-from-template: Creating new from template -------------------------- New instances of an ndarray subclass can also come about by a very similar mechanism to :ref:`view-casting`, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example: >>> v = c_arr[1:] >>> type(v) # the view is of type 'C' <class 'C'> >>> v is c_arr # but it's a new instance False The slice is a *view* onto the original ``c_arr`` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (``c_arr.copy()``), creating ufunc output arrays (see also :ref:`array-wrap`), and reducing methods (like ``c_arr.mean()``. Relationship of view casting and new-from-template -------------------------------------------------- These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, :ref:`view-casting` means you have created a new instance of your array type from any potential subclass of ndarray. :ref:`new-from-template` means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass. Implications for subclassing ---------------------------- If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also :ref:`view-casting` or :ref:`new-from-template`. Numpy has the machinery to do this, and this machinery that makes subclassing slightly non-standard. 
There are two aspects to the machinery that ndarray uses to support views and new-from-template in subclasses. The first is the use of the ``ndarray.__new__`` method for the main work of object initialization, rather than the more usual ``__init__`` method. The second is the use of the ``__array_finalize__`` method to allow subclasses to clean up after the creation of views and new instances from templates. A brief Python primer on ``__new__`` and ``__init__`` ===================================================== ``__new__`` is a standard Python method, and, if present, is called before ``__init__`` when we create a class instance. See the `python __new__ documentation <http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail. For example, consider the following Python code: .. testcode:: class C(object): def __new__(cls, *args): print 'Cls in __new__:', cls print 'Args in __new__:', args return object.__new__(cls, *args) def __init__(self, *args): print 'type(self) in __init__:', type(self) print 'Args in __init__:', args meaning that we get: >>> c = C('hello') Cls in __new__: <class 'C'> Args in __new__: ('hello',) type(self) in __init__: <class 'C'> Args in __init__: ('hello',) When we call ``C('hello')``, the ``__new__`` method gets its own class as first argument, and the passed argument, which is the string ``'hello'``. After python calls ``__new__``, it usually (see below) calls our ``__init__`` method, with the output of ``__new__`` as the first argument (now a class instance), and the passed arguments following. As you can see, the object can be initialized in the ``__new__`` method or the ``__init__`` method, or both, and in fact ndarray does not have an ``__init__`` method, because all the initialization is done in the ``__new__`` method. Why use ``__new__`` rather than just the usual ``__init__``? Because in some cases, as for ndarray, we want to be able to return an object of some other class. Consider the following: .. testcode:: class D(C): def __new__(cls, *args): print 'D cls is:', cls print 'D args in __new__:', args return C.__new__(C, *args) def __init__(self, *args): # we never get here print 'In D __init__' meaning that: >>> obj = D('hello') D cls is: <class 'D'> D args in __new__: ('hello',) Cls in __new__: <class 'C'> Args in __new__: ('hello',) >>> type(obj) <class 'C'> The definition of ``C`` is the same as before, but for ``D``, the ``__new__`` method returns an instance of class ``C`` rather than ``D``. Note that the ``__init__`` method of ``D`` does not get called. In general, when the ``__new__`` method returns an object of class other than the class in which it is defined, the ``__init__`` method of that class is not called. This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view, the standard ndarray machinery creates the new ndarray object with something like:: obj = ndarray.__new__(subtype, shape, ... where ``subtype`` is the subclass. Thus the returned view is of the same class as the subclass, rather than being of class ``ndarray``. That solves the problem of returning views of the same type, but now we have a new problem. The machinery of ndarray can set the class this way, in its standard methods for taking views, but the ndarray ``__new__`` method knows nothing of what we have done in our own ``__new__`` method in order to set attributes, and so on. (Aside - why not call ``obj = subtype.__new__(...`` then?
Because we may not have a ``__new__`` method with the same call signature). The role of ``__array_finalize__`` ================================== ``__array_finalize__`` is the mechanism that numpy provides to allow subclasses to handle the various ways that new instances get created. Remember that subclass instances can come about in these three ways: #. explicit constructor call (``obj = MySubClass(params)``). This will call the usual sequence of ``MySubClass.__new__`` then (if it exists) ``MySubClass.__init__``. #. :ref:`view-casting` #. :ref:`new-from-template` Our ``MySubClass.__new__`` method only gets called in the case of the explicit constructor call, so we can't rely on ``MySubClass.__new__`` or ``MySubClass.__init__`` to deal with the view casting and new-from-template. It turns out that ``MySubClass.__array_finalize__`` *does* get called for all three methods of object creation, so this is where our object creation housekeeping usually goes. * For the explicit constructor call, our subclass will need to create a new ndarray instance of its own class. In practice this means that we, the authors of the code, will need to make a call to ``ndarray.__new__(MySubClass,...)``, or do view casting of an existing array (see below) * For view casting and new-from-template, the equivalent of ``ndarray.__new__(MySubClass,...`` is called, at the C level. The arguments that ``__array_finalize__`` receives differ for the three methods of instance creation above. The following code allows us to look at the call sequences and arguments: .. testcode:: import numpy as np class C(np.ndarray): def __new__(cls, *args, **kwargs): print 'In __new__ with class %s' % cls return np.ndarray.__new__(cls, *args, **kwargs) def __init__(self, *args, **kwargs): # in practice you probably will not need or want an __init__ # method for your subclass print 'In __init__ with class %s' % self.__class__ def __array_finalize__(self, obj): print 'In array_finalize:' print ' self type is %s' % type(self) print ' obj type is %s' % type(obj) Now: >>> # Explicit constructor >>> c = C((10,)) In __new__ with class <class 'C'> In array_finalize: self type is <class 'C'> obj type is <type 'NoneType'> In __init__ with class <class 'C'> >>> # View casting >>> a = np.arange(10) >>> cast_a = a.view(C) In array_finalize: self type is <class 'C'> obj type is <type 'numpy.ndarray'> >>> # Slicing (example of new-from-template) >>> cv = c[:1] In array_finalize: self type is <class 'C'> obj type is <class 'C'> The signature of ``__array_finalize__`` is:: def __array_finalize__(self, obj): ``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our own class (``self``) as well as the object from which the view has been taken (``obj``). As you can see from the output above, the ``self`` is always a newly created instance of our subclass, and the type of ``obj`` differs for the three instance creation methods: * When called from the explicit constructor, ``obj`` is ``None`` * When called from view casting, ``obj`` can be an instance of any subclass of ndarray, including our own. * When called in new-from-template, ``obj`` is another instance of our own subclass, that we might use to update the new ``self`` instance. Because ``__array_finalize__`` is the only method that always sees new instances being created, it is the sensible place to fill in instance defaults for new object attributes, among other tasks. This may be clearer with an example.
Simple example - adding an extra attribute to ndarray ----------------------------------------------------- .. testcode:: import numpy as np class InfoArray(np.ndarray): def __new__(subtype, shape, dtype=float, buffer=None, offset=0, strides=None, order=None, info=None): # Create the ndarray instance of our type, given the usual # ndarray input arguments. This will call the standard # ndarray constructor, but return an object of our type. # It also triggers a call to InfoArray.__array_finalize__ obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset, strides, order) # set the new 'info' attribute to the value passed obj.info = info # Finally, we must return the newly created object: return obj def __array_finalize__(self, obj): # ``self`` is a new object resulting from # ndarray.__new__(InfoArray, ...), therefore it only has # attributes that the ndarray.__new__ constructor gave it - # i.e. those of a standard ndarray. # # We could have got to the ndarray.__new__ call in 3 ways: # From an explicit constructor - e.g. InfoArray(): # obj is None # (we're in the middle of the InfoArray.__new__ # constructor, and self.info will be set when we return to # InfoArray.__new__) if obj is None: return # From view casting - e.g arr.view(InfoArray): # obj is arr # (type(obj) can be InfoArray) # From new-from-template - e.g infoarr[:3] # type(obj) is InfoArray # # Note that it is here, rather than in the __new__ method, # that we set the default value for 'info', because this # method sees all creation of default objects - with the # InfoArray.__new__ constructor, but also with # arr.view(InfoArray). self.info = getattr(obj, 'info', None) # We do not need to return anything Using the object looks like this: >>> obj = InfoArray(shape=(3,)) # explicit constructor >>> type(obj) <class 'InfoArray'> >>> obj.info is None True >>> obj = InfoArray(shape=(3,), info='information') >>> obj.info 'information' >>> v = obj[1:] # new-from-template - here - slicing >>> type(v) <class 'InfoArray'> >>> v.info 'information' >>> arr = np.arange(10) >>> cast_arr = arr.view(InfoArray) # view casting >>> type(cast_arr) <class 'InfoArray'> >>> cast_arr.info is None True This class isn't very useful, because it has the same constructor as the bare ndarray object, including passing in buffers and shapes and so on. We would probably prefer the constructor to be able to take an already formed ndarray from the usual numpy calls to ``np.array`` and return an object. Slightly more realistic example - attribute added to existing array ------------------------------------------------------------------- Here is a class that takes a standard ndarray that already exists, casts as our type, and adds an extra attribute. .. testcode:: import numpy as np class RealisticInfoArray(np.ndarray): def __new__(cls, input_array, info=None): # Input array is an already formed ndarray instance # We first cast to be our class type obj = np.asarray(input_array).view(cls) # add the new attribute to the created instance obj.info = info # Finally, we must return the newly created object: return obj def __array_finalize__(self, obj): # see InfoArray.__array_finalize__ for comments if obj is None: return self.info = getattr(obj, 'info', None) So: >>> arr = np.arange(5) >>> obj = RealisticInfoArray(arr, info='information') >>> type(obj) <class 'RealisticInfoArray'> >>> obj.info 'information' >>> v = obj[1:] >>> type(v) <class 'RealisticInfoArray'> >>> v.info 'information' .. 
_array-wrap: ``__array_wrap__`` for ufuncs ------------------------------------------------------- ``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy functions, to allow a subclass to set the type of the return value and update attributes and metadata. Let's show how this works with an example. First we make the same subclass as above, but with a different name and some print statements: .. testcode:: import numpy as np class MySubClass(np.ndarray): def __new__(cls, input_array, info=None): obj = np.asarray(input_array).view(cls) obj.info = info return obj def __array_finalize__(self, obj): print 'In __array_finalize__:' print ' self is %s' % repr(self) print ' obj is %s' % repr(obj) if obj is None: return self.info = getattr(obj, 'info', None) def __array_wrap__(self, out_arr, context=None): print 'In __array_wrap__:' print ' self is %s' % repr(self) print ' arr is %s' % repr(out_arr) # then just call the parent return np.ndarray.__array_wrap__(self, out_arr, context) We run a ufunc on an instance of our new array: >>> obj = MySubClass(np.arange(5), info='spam') In __array_finalize__: self is MySubClass([0, 1, 2, 3, 4]) obj is array([0, 1, 2, 3, 4]) >>> arr2 = np.arange(5)+1 >>> ret = np.add(arr2, obj) In __array_wrap__: self is MySubClass([0, 1, 2, 3, 4]) arr is array([1, 3, 5, 7, 9]) In __array_finalize__: self is MySubClass([1, 3, 5, 7, 9]) obj is MySubClass([0, 1, 2, 3, 4]) >>> ret MySubClass([1, 3, 5, 7, 9]) >>> ret.info 'spam' Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the input with the highest ``__array_priority__`` value, in this case ``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and ``out_arr`` as the (ndarray) result of the addition. In turn, the default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the result to class ``MySubClass``, and called ``__array_finalize__`` - hence the copying of the ``info`` attribute. This has all happened at the C level. But, we could do anything we wanted: .. testcode:: class SillySubClass(np.ndarray): def __array_wrap__(self, arr, context=None): return 'I lost your data' >>> arr1 = np.arange(5) >>> obj = arr1.view(SillySubClass) >>> arr2 = np.arange(5) >>> ret = np.multiply(obj, arr2) >>> ret 'I lost your data' So, by defining a specific ``__array_wrap__`` method for our subclass, we can tweak the output from ufuncs. The ``__array_wrap__`` method requires ``self``, then an argument - which is the result of the ufunc - and an optional parameter *context*. This parameter is returned by some ufuncs as a 3-element tuple: (name of the ufunc, argument of the ufunc, domain of the ufunc). ``__array_wrap__`` should return an instance of its containing class. See the masked array subclass for an implementation. In addition to ``__array_wrap__``, which is called on the way out of the ufunc, there is also an ``__array_prepare__`` method which is called on the way into the ufunc, after the output arrays are created but before any computation has been performed. The default implementation does nothing but pass through the array. ``__array_prepare__`` should not attempt to access the array data or resize the array, it is intended for setting the output array type, updating attributes and metadata, and performing any checks based on the input that may be desired before computation begins. Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or subclass thereof or raise an error. 
Extra gotchas - custom ``__del__`` methods and ndarray.base ----------------------------------------------------------- One of the problems that ndarray solves is keeping track of memory ownership of ndarrays and their views. Consider the case where we have created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``. The two objects are looking at the same memory. Numpy keeps track of where the data came from for a particular array or view, with the ``base`` attribute: >>> # A normal ndarray, that owns its own data >>> arr = np.zeros((4,)) >>> # In this case, base is None >>> arr.base is None True >>> # We take a view >>> v1 = arr[1:] >>> # base now points to the array that it derived from >>> v1.base is arr True >>> # Take a view of a view >>> v2 = v1[1:] >>> # base points to the view it derived from >>> v2.base is v1 True In general, if the array owns its own memory, as for ``arr`` in this case, then ``arr.base`` will be None - there are some exceptions to this - see the numpy book for more details. The ``base`` attribute is useful in being able to tell whether we have a view or the original array. This in turn can be useful if we need to know whether or not to do some specific cleanup when the subclassed array is deleted. For example, we may only want to do the cleanup if the original array is deleted, but not the views. For an example of how this can work, have a look at the ``memmap`` class in ``numpy.core``. """
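A condensed sketch of the pattern this last section describes, in the spirit of the ``memmap`` class it points to: consult ``base`` at deletion time so that only the memory-owning array, never its views, performs cleanup. The ``_cleanup`` hook here is hypothetical::

    import numpy as np

    class CleanupArray(np.ndarray):

        def __array_finalize__(self, obj):
            # carried along to views; a hypothetical resource hook
            self._cleanup = getattr(obj, '_cleanup', None)

        def __del__(self):
            # base is None only when this array owns its memory,
            # so deleting a view does not trigger the cleanup
            if self.base is None and self._cleanup is not None:
                self._cleanup()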
""" ============== Array Creation ============== Introduction ============ There are 5 general mechanisms for creating arrays: 1) Conversion from other Python structures (e.g., lists, tuples) 2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, etc.) 3) Reading arrays from disk, either from standard or custom formats 4) Creating arrays from raw bytes through the use of strings or buffers 5) Use of special library functions (e.g., random) This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or structured arrays. Both of those are covered in their own sections. Converting Python array_like Objects to Numpy Arrays ==================================================== In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: :: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Intrinsic Numpy Array Creation ============================== Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. ``>>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]])`` ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: :: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: :: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description: :: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. Reading Arrays From Disk ======================== This is presumably the most common case of large array creation. 
The details, of course, depend greatly on the format of data on disk and so this section can only give general pointers on how to handle various formats. Standard Binary Formats ----------------------- Various fields have standard formats for array data. The following lists the ones with known python libraries to read them and return numpy arrays (there may be others for which it is possible to read and convert to numpy arrays so check the last section as well) :: HDF5: PyTables FITS: PyFITS Examples of formats that cannot be read directly but for which it is not hard to convert are those formats supported by libraries like PIL (able to read and write many image formats such as jpg, png, etc). Common ASCII Formats ------------------------ Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part of matplotlib). More generic ascii files can be read using the io package in scipy. Custom Binary Formats --------------------- There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind your byteorder though!) If a good C or C++ library exists that read the data, one can wrap that library with a variety of techniques though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++. Use of Special Libraries ------------------------ There are libraries that can be used to generate arrays for special purposes and it isn't possible to enumerate all of them. The most common uses are use of the many array generation functions in random that can generate arrays of random values, and some utility functions to generate special matrices (e.g. diagonal). """
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be > 1D atleast_2d Force arrays to be > 2D atleast_3d Force arrays to be > 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat Construct a Matrix bmat Build a Matrix from blocks ================ =================== Polynomials ----------- ================ =================== poly1d A one-dimensional polynomial class poly Return polynomial coefficients from roots roots Find roots of polynomial given coefficients polyint Integrate polynomial polyder Differentiate polynomial polyadd Add polynomials polysub Subtract polynomials polymul Multiply polynomials polydiv Divide polynomials polyval Evaluate polynomial at given argument ================ =================== Import Tricks ------------- ================ =================== ppimport Postpone module import until trying to use it ppimport_attr Postpone module import until trying to use its attribute ppresolve Import postponed module and return it. ================ =================== Machine Arithmetics ------------------- ================ =================== machar_single Single precision floating point arithmetic parameters machar_double Double precision floating point arithmetic parameters ================ =================== Threading Tricks ---------------- ================ =================== ParallelExec Execute commands in parallel thread. ================ =================== 1D Array Set Operations ----------------------- Set operations for 1D numeric arrays based on sort() function. ================ =================== ediff1d Array difference (auxiliary function). unique Unique elements of an array. intersect1d Intersection of 1D arrays with unique elements. setxor1d Set exclusive-or of 1D arrays with unique elements. in1d Test whether elements in a 1D array are also present in another array. union1d Union of 1D arrays with unique elements. setdiff1d Set difference of 1D arrays with unique elements. ================ =================== """
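A few of the routines tabulated above in action; the expected values are indicated in the comments::

    import numpy as np

    print(np.linspace(0., 1., 5))                      # [0.   0.25 0.5  0.75 1.  ]
    print(np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1]))  # [1 3]

    p = np.poly1d([1, -3, 2])                          # the polynomial x**2 - 3*x + 2
    print(np.roots(p))                                 # [2. 1.]
    print(np.polyval(p, 3))                            # 2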
#import unittest
#import sys
#
#from DIRAC.Core.Base import Script
#Script.parseCommandLine()
#
#import DIRAC.ResourceStatusSystem.test.fake_AgentModule
#import DIRAC.ResourceStatusSystem.test.fake_rsDB
#import DIRAC.ResourceStatusSystem.test.fake_rmDB
#import DIRAC.ResourceStatusSystem.test.fake_Logger
#
#class AgentsTestCase( unittest.TestCase ):
#  """ Base class for the Agents test cases
#  """
#  def setUp( self ):
#
#    sys.modules["DIRAC.LoggingSystem.Client.Logger"] = DIRAC.ResourceStatusSystem.test.fake_Logger
#    sys.modules["DIRAC.Core.Base.AgentModule"] = DIRAC.ResourceStatusSystem.test.fake_AgentModule
#    sys.modules["DIRAC.ResourceStatusSystem.DB.ResourceStatusDB"] = DIRAC.ResourceStatusSystem.test.fake_rsDB
#    sys.modules["DIRAC.ResourceStatusSystem.DB.ResourceManagementDB"] = DIRAC.ResourceStatusSystem.test.fake_rmDB
#    sys.modules["DIRAC.Interfaces.API.DiracAdmin"] = DIRAC.ResourceStatusSystem.test.fake_Logger
#    sys.modules["DIRAC.ConfigurationSystem.Client.CSAPI"] = DIRAC.ResourceStatusSystem.test.fake_Logger
#
#    from DIRAC.ResourceStatusSystem.Agent.ClientsCacheFeederAgent import ClientsCacheFeederAgent
#    self.ccFeeder = ClientsCacheFeederAgent( "", "" )
#
#    from DIRAC.ResourceStatusSystem.Agent.CleanerAgent import CleanerAgent
#    self.clAgent = CleanerAgent( "", "" )
#
#    from DIRAC.ResourceStatusSystem.Agent.TokenAgent import TokenAgent
#    self.tokenAgent = TokenAgent( "", "" )
#
#    from DIRAC.ResourceStatusSystem.Agent.RSInspectorAgent import RSInspectorAgent
#    self.rsIAgent = RSInspectorAgent( "", "" )
#
#    from DIRAC.ResourceStatusSystem.Agent.SSInspectorAgent import SSInspectorAgent
#    self.ssIAgent = SSInspectorAgent( "", "" )
#
#    from DIRAC.ResourceStatusSystem.Agent.SeSInspectorAgent import SeSInspectorAgent
#    self.sesIAgent = SeSInspectorAgent( "", "" )
#
#    from DIRAC.ResourceStatusSystem.Agent.StElReadInspectorAgent import StElReadInspectorAgent
#    self.stelReadIAgent = StElReadInspectorAgent( "", "" )
#
#    from DIRAC.ResourceStatusSystem.Agent.StElWriteInspectorAgent import StElWriteInspectorAgent
#    self.stelWriteIAgent = StElWriteInspectorAgent( "", "" )
#
#class ClientsCacheFeederSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.ccFeeder.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.ccFeeder.initialize()
#    res = self.ccFeeder.execute()
#    self.assert_( res['OK'] )
#
#class CleanerSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.clAgent.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.clAgent.initialize()
#    res = self.clAgent.execute()
#    self.assert_( res['OK'] )
#
#class TokenSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.tokenAgent.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.tokenAgent.initialize()
#    res = self.tokenAgent.execute()
#    self.assert_( res['OK'] )
#
#class RSInspectorSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.rsIAgent.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.rsIAgent.initialize()
#    res = self.rsIAgent.execute()
#    self.assert_( res['OK'] )
#
#class SSInspectorSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.ssIAgent.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.ssIAgent.initialize()
#    res = self.ssIAgent.execute()
#    self.assert_( res['OK'] )
#
#class SeSInspectorSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.sesIAgent.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.sesIAgent.initialize()
#    res = self.sesIAgent.execute()
#    self.assert_( res['OK'] )
#
#class StElReadInspectorSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.stelReadIAgent.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.stelReadIAgent.initialize()
#    res = self.stelReadIAgent.execute()
#    self.assert_( res['OK'] )
#
#class StElWriteInspectorSuccess( AgentsTestCase ):
#
#  def test_initialize( self ):
#    res = self.stelWriteIAgent.initialize()
#    self.assert_( res['OK'] )
#
#  def test_execute( self ):
#    self.stelWriteIAgent.initialize()
#    res = self.stelWriteIAgent.execute()
#    self.assert_( res['OK'] )
#
#if __name__ == '__main__':
#  suite = unittest.defaultTestLoader.loadTestsFromTestCase( AgentsTestCase )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( ClientsCacheFeederSuccess ) )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( CleanerSuccess ) )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( TokenSuccess ) )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( RSInspectorSuccess ) )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( SSInspectorSuccess ) )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( SeSInspectorSuccess ) )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( StElReadInspectorSuccess ) )
#  suite.addTest( unittest.defaultTestLoader.loadTestsFromTestCase( StElWriteInspectorSuccess ) )
#  testResult = unittest.TextTestRunner( verbosity = 2 ).run( suite )
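# The commented-out suite above relies on a generally useful trick: planting
# fake modules in sys.modules *before* the code under test imports them, so
# the import system hands back the stub.  A minimal, self-contained sketch
# (the module name "heavy_dependency" is made up for illustration):
#
#   import sys, types
#
#   fake = types.ModuleType("heavy_dependency")
#   fake.connect = lambda: "fake connection"   # stub for an expensive API
#   sys.modules["heavy_dependency"] = fake
#
#   import heavy_dependency                    # resolves to the fake
#   assert heavy_dependency.connect() == "fake connection"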
""" Linear mixed effects models are regression models for dependent data. They can be used to estimate regression relationships involving both means and variances. These models are also known as multilevel linear models, and hierachical linear models. The MixedLM class fits linear mixed effects models to data, and provides support for some common post-estimation tasks. This is a group-based implementation that is most efficient for models in which the data can be partitioned into independent groups. Some models with crossed effects can be handled by specifying a model with a single group. The data are partitioned into disjoint groups. The probability model for group i is: Y = X*beta + Z*gamma + epsilon where * n_i is the number of observations in group i * Y is a n_i dimensional response vector (called endog in MixedLM) * X is a n_i x k_fe dimensional design matrix for the fixed effects (called exog in MixedLM) * beta is a k_fe-dimensional vector of fixed effects parameters (called fe_params in MixedLM) * Z is a design matrix for the random effects with n_i rows (called exog_re in MixedLM). The number of columns in Z can vary by group as discussed below. * gamma is a random vector with mean 0. The covariance matrix for the first `k_re` elements of `gamma` (called cov_re in MixedLM) is common to all groups. The remaining elements of `gamma` are variance components as discussed in more detail below. Each group receives its own independent realization of gamma. * epsilon is a n_i dimensional vector of iid normal errors with mean 0 and variance sigma^2; the epsilon values are independent both within and between groups Y, X and Z must be entirely observed. beta, Psi, and sigma^2 are estimated using ML or REML estimation, and gamma and epsilon are random so define the probability model. The marginal mean structure is E[Y | X, Z] = X*beta. If only the mean structure is of interest, GEE is an alternative to using linear mixed models. Two types of random effects are supported. Standard random effects are correlated with each other in arbitary ways. Every group has the same number (`k_re`) of standard random effects, with the same joint distribution (but with independent realizations across the groups). Variance components are uncorrelated with each other, and with the standard random effects. Each variance component has mean zero, and all realizations of a given variance component have the same variance parameter. The number of realized variance components per variance parameter can differ across the groups. The primary reference for the implementation details is: MJ NAME NAME (1988). "Newton Raphson and EM algorithms for linear mixed effects models for repeated measures data". Journal of the American Statistical Association. Volume 83, Issue 404, pages 1014-1022. See also this more recent document: http://econ.ucsb.edu/~doug/245a/Papers/Mixed%20Effects%20Implement.pdf All the likelihood, gradient, and Hessian calculations closely follow Lindstrom and Bates 1988, adapted to support variance components. The following two documents are written more from the perspective of users: http://lme4.r-forge.r-project.org/lMMwR/lrgprt.pdf http://lme4.r-forge.r-project.org/slides/2009-07-07-Rennes/3Longitudinal-4.pdf Notation: * `cov_re` is the random effects covariance matrix (referred to above as Psi) and `scale` is the (scalar) error variance. For a single group, the marginal covariance matrix of endog given exog is scale*I + Z * cov_re * Z', where Z is the design matrix for the random effects in one group. 
* `vcomp` is a vector of variance parameters. The length of `vcomp` is determined by the number of keys in either the `exog_vc` argument to ``MixedLM``, or the `vc_formula` argument when using formulas to fit a model. Notes: 1. Three different parameterizations are used in different places. The regression slopes (usually called `fe_params`) are identical in all three parameterizations, but the variance parameters differ. The parameterizations are: * The "user parameterization" in which cov(endog) = scale*I + Z * cov_re * Z', as described above. This is the main parameterization visible to the user. * The "profile parameterization" in which cov(endog) = I + Z * cov_re1 * Z'. This is the parameterization of the profile likelihood that is maximized to produce parameter estimates. (see Lindstrom and Bates for details). The "user" cov_re is equal to the "profile" cov_re1 times the scale. * The "square root parameterization" in which we work with the Cholesky factor of cov_re1 instead of cov_re directly. This is hidden from the user. All three parameterizations can be packed into a vector by (optionally) concatenating `fe_params` together with the lower triangle or Cholesky square root of the dependence structure, followed by the variance parameters for the variance components. The are stored as square roots if (and only if) the random effects covariance matrix is stored as its Choleky factor. Note that when unpacking, it is important to either square or reflect the dependence structure depending on which parameterization is being used. Two score methods are implemented. One takes the score with respect to the elements of the random effects covariance matrix (used for inference once the MLE is reached), and the other takes the score with respect to the parameters of the Choleky square root of the random effects covariance matrix (used for optimization). The numerical optimization uses GLS to avoid explicitly optimizing over the fixed effects parameters. The likelihood that is optimized is profiled over both the scale parameter (a scalar) and the fixed effects parameters (if any). As a result of this profiling, it is difficult and unnecessary to calculate the Hessian of the profiled log likelihood function, so that calculation is not implemented here. Therefore, optimization methods requiring the Hessian matrix such as the Newton-Raphson algorihm cannot be used for model fitting. """
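# A minimal usage sketch of the model described above (illustrative only --
# the toy data frame and its column names are made up; MixedLM and
# from_formula are the documented statsmodels entry points):
#
#   import numpy as np
#   import pandas as pd
#   import statsmodels.api as sm
#
#   rng = np.random.default_rng(0)
#   df = pd.DataFrame({"x": rng.normal(size=100),
#                      "group": np.repeat(np.arange(10), 10)})
#   df["y"] = 2.0 * df["x"] + rng.normal(size=100)
#
#   # endog is y, exog is [1, x]; each group gets a random intercept.
#   model = sm.MixedLM.from_formula("y ~ x", groups="group", data=df)
#   result = model.fit(reml=True)   # REML estimation, as described above
#   print(result.summary())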
"""CPStats, a package for collecting and reporting on program statistics. Overview ======== Statistics about program operation are an invaluable monitoring and debugging tool. Unfortunately, the gathering and reporting of these critical values is usually ad-hoc. This package aims to add a centralized place for gathering statistical performance data, a structure for recording that data which provides for extrapolation of that data into more useful information, and a method of serving that data to both human investigators and monitoring software. Let's examine each of those in more detail. Data Gathering -------------- Just as Python's `logging` module provides a common importable for gathering and sending messages, performance statistics would benefit from a similar common mechanism, and one that does *not* require each package which wishes to collect stats to import a third-party module. Therefore, we choose to re-use the `logging` module by adding a `statistics` object to it. That `logging.statistics` object is a nested dict. It is not a custom class, because that would 1) require libraries and applications to import a third- party module in order to participate, 2) inhibit innovation in extrapolation approaches and in reporting tools, and 3) be slow. There are, however, some specifications regarding the structure of the dict. { +----"SQLAlchemy": { | "Inserts": 4389745, | "Inserts per Second": | lambda s: s["Inserts"] / (time() - s["Start"]), | C +---"Table Statistics": { | o | "widgets": {-----------+ N | l | "Rows": 1.3M, | Record a | l | "Inserts": 400, | m | e | },---------------------+ e | c | "froobles": { s | t | "Rows": 7845, p | i | "Inserts": 0, a | o | }, c | n +---}, e | "Slow Queries": | [{"Query": "SELECT * FROM widgets;", | "Processing Time": 47.840923343, | }, | ], +----}, } The `logging.statistics` dict has four levels. The topmost level is nothing more than a set of names to introduce modularity, usually along the lines of package names. If the SQLAlchemy project wanted to participate, for example, it might populate the item `logging.statistics['SQLAlchemy']`, whose value would be a second-layer dict we call a "namespace". Namespaces help multiple packages to avoid collisions over key names, and make reports easier to read, to boot. The maintainers of SQLAlchemy should feel free to use more than one namespace if needed (such as 'SQLAlchemy ORM'). Note that there are no case or other syntax constraints on the namespace names; they should be chosen to be maximally readable by humans (neither too short nor too long). Each namespace, then, is a dict of named statistical values, such as 'Requests/sec' or 'Uptime'. You should choose names which will look good on a report: spaces and capitalization are just fine. In addition to scalars, values in a namespace MAY be a (third-layer) dict, or a list, called a "collection". For example, the CherryPy StatsTool keeps track of what each request is doing (or has most recently done) in a 'Requests' collection, where each key is a thread ID; each value in the subdict MUST be a fourth dict (whew!) of statistical data about each thread. We call each subdict in the collection a "record". Similarly, the StatsTool also keeps a list of slow queries, where each record contains data about each slow query, in order. Values in a namespace or record may also be functions, which brings us to: Extrapolation ------------- The collection of statistical data needs to be fast, as close to unnoticeable as possible to the host program. 
That requires us to minimize I/O, for example, but in Python it also means
we need to minimize function calls.  So when you are designing your
namespace and record values, try to insert the most basic scalar values
you already have on hand.

When it comes time to report on the gathered data, however, we usually
have much more freedom in what we can calculate.  Therefore, whenever
reporting tools (like the provided StatsPage CherryPy class) fetch the
contents of `logging.statistics` for reporting, they first call
`extrapolate_statistics` (passing the whole `statistics` dict as the only
argument).  This makes a deep copy of the statistics dict so that the
reporting tool can both iterate over it and even change it without harming
the original.  But it also expands any functions in the dict by calling
them.  For example, you might have a 'Current Time' entry in the namespace
with the value "lambda scope: time.time()".  The "scope" parameter is the
current namespace dict (or record, if we're currently expanding one of
those instead), allowing you access to existing static entries.  If you're
truly evil, you can even modify more than one entry at a time.

However, don't try to calculate an entry and then use its value in further
extrapolations; the order in which the functions are called is not
guaranteed.  This can lead to a certain amount of duplicated work (or a
redesign of your schema), but that's better than complicating the spec.

After the whole thing has been extrapolated, it's time for:

Reporting
---------

The StatsPage class grabs the `logging.statistics` dict, extrapolates it
all, and then transforms it to HTML for easy viewing.  Each namespace gets
its own header and attribute table, plus an extra table for each
collection.  This is NOT part of the statistics specification; other tools
can format how they like.

You can control which columns are output and how they are formatted by
updating StatsPage.formatting, which is a dict that mirrors the keys and
nesting of `logging.statistics`.  The difference is that, instead of data
values, it has formatting values.  Use None for a given key to indicate to
the StatsPage that a given column should not be output.  Use a string with
formatting (such as '%.3f') to interpolate the value(s), or use a callable
(such as lambda v: v.isoformat()) for more advanced formatting.  Any entry
which is not mentioned in the formatting dict is output unchanged.

Monitoring
----------

Although the HTML output takes pains to assign unique id's to each <td>
with statistical data, you're probably better off fetching /cpstats/data,
which outputs the whole (extrapolated) `logging.statistics` dict in JSON
format.  That is probably easier to parse, and doesn't have any formatting
controls, so you get the "original" data in a consistently-serialized
format.  Note: there's no treatment yet for datetime objects.  Try
time.time() instead for now if you can.  Nagios will probably thank you.

Turning Collection Off
----------------------

It is recommended each namespace have an "Enabled" item which, if False,
stops collection (but not reporting) of statistical data.  Applications
SHOULD provide controls to pause and resume collection by setting these
entries to False or True, if present.
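To make the extrapolation pass described above concrete, here is a minimal
sketch (the real extrapolate_statistics in cherrypy.lib.cpstats may differ
in detail):

    def extrapolate_statistics(scope):
        # Return an extrapolated copy of the given statistics dict.
        c = {}
        for k, v in list(scope.items()):
            if isinstance(v, dict):
                v = extrapolate_statistics(v)
            elif isinstance(v, (list, tuple)):
                v = [extrapolate_statistics(record) for record in v]
            elif callable(v):
                v = v(scope)   # expand functions, passing the scope
            c[k] = v
        return c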
Usage
=====

To collect statistics on CherryPy applications:

    from cherrypy.lib import cpstats
    appconfig['/']['tools.cpstats.on'] = True

To collect statistics on your own code:

    import logging
    # Initialize the repository
    if not hasattr(logging, 'statistics'): logging.statistics = {}
    # Initialize my namespace
    mystats = logging.statistics.setdefault('My Stuff', {})
    # Initialize my namespace's scalars and collections
    mystats.update({
        'Enabled': True,
        'Start Time': time.time(),
        'Important Events': 0,
        'Events/Second': lambda s: (
            (s['Important Events'] / (time.time() - s['Start Time']))),
        })
    ...
    for event in events:
        ...
        # Collect stats
        if mystats.get('Enabled', False):
            mystats['Important Events'] += 1

To report statistics:

    root.cpstats = cpstats.StatsPage()

To format statistics reports:

See 'Reporting', above.

"""
#!/usr/bin/env python2
# Copyright (c) 2013 - 2015 ARM Limited
# All rights reserved
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder.  You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
#     * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#     * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
#     * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: NAME
#
# This script is used to dump protobuf traces of the instruction dependency
# graph to ASCII format.
#
# The ASCII trace format uses one line per instruction with the format
# instruction sequence number, (optional) pc, (optional) weight, type,
# (optional) flags, (optional) phys addr, (optional) size, comp delay,
# (repeated) order dependencies comma-separated, and (repeated) register
# dependencies comma-separated.
#
# examples:
# seq_num,[pc],[weight,]type,[p_addr,size,flags,]comp_delay:[rob_dep]:
# [reg_dep]
# 1,35652,1,COMP,8500::
# 2,35656,1,COMP,0:,1:
# 3,35660,1,LOAD,1748752,4,74,500:,2:
# 4,35660,1,COMP,0:,3:
# 5,35664,1,COMP,3000::,4
# 6,35666,1,STORE,1748752,4,74,1000:,3:,4,5
# 7,35666,1,COMP,3000::,4
# 8,35670,1,STORE,1748748,4,74,0:,6,3:,7
# 9,35670,1,COMP,500::,7
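#
# As an illustration (not part of the original script), a trace line in the
# format above can be split into its three colon-separated sections like so
# -- the function name and dict keys here are made up:
#
#   def parse_trace_line(line):
#       fields, rob_dep, reg_dep = line.strip().split(':')
#       parts = fields.split(',')
#       return {'seq_num': int(parts[0]),
#               'fields': parts[1:],   # pc/weight/type/... as strings
#               'rob_deps': [int(d) for d in rob_dep.split(',') if d],
#               'reg_deps': [int(d) for d in reg_dep.split(',') if d]}
#
#   # parse_trace_line('3,35660,1,LOAD,1748752,4,74,500:,2:')
#   # => seq_num 3, rob_deps [2], reg_deps []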
""" Define a simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able to create a solution in their preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. ``\\x00``. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array's format. It is an ASCII string which contains a Python literal expression of a dictionary. It is terminated by a newline (``\\n``) and padded with spaces (``\\x20``) to make the total length of ``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment purposes. The dictionary contains three keys: "descr" : dtype.descr An object that can be passed as an argument to the `numpy.dtype` constructor to create the array's dtype. "fortran_order" : bool Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency. "shape" : tuple of int The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. ``dtype.hasobject is True``), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on ``fortran_order``) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that ``shape=()`` means there is 1 element) by ``dtype.itemsize``. Format Version 2.0 ------------------ The version 1.0 format only allowed the array header to have a total size of 65535 bytes. This can be exceeded by structured arrays with a large number of columns. The version 2.0 format extends the header size to 4 GiB. `numpy.save` will automatically save in 2.0 format if the data requires it, else it will always use the more compatible 1.0 format. The description of the fourth element of the header therefore has become: "The next 4 bytes form a little-endian unsigned int: the length of the header data HEADER_LEN." Notes ----- The ``.npy`` format, including reasons for creating it and a comparison of alternatives, is described fully in the "npy-format" NEP. """
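# A hand-rolled reader for the version 1.0 layout described above -- an
# illustrative sketch, not the numpy.lib.format implementation:
#
#   import ast, struct
#
#   def read_npy_header_v1(fp):
#       assert fp.read(6) == b'\x93NUMPY'                # magic string
#       major, minor = struct.unpack('BB', fp.read(2))   # version bytes
#       assert (major, minor) == (1, 0)
#       (header_len,) = struct.unpack('<H', fp.read(2))  # LE unsigned short
#       header = ast.literal_eval(fp.read(header_len).decode('latin1'))
#       return header['shape'], header['fortran_order'], header['descr']
#
#   # with open('data.npy', 'rb') as fp:
#   #     print(read_npy_header_v1(fp))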
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): File "<doctest test.test_syntax[0]>", line 1 SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): >>> obj.None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[1]>", line 1 SyntaxError: cannot assign to None >>> None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[2]>", line 1 SyntaxError: cannot assign to None It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): File "<doctest test.test_syntax[3]>", line 1 SyntaxError: can't assign to () >>> f() = 1 Traceback (most recent call last): File "<doctest test.test_syntax[4]>", line 1 SyntaxError: can't assign to function call >>> del f() Traceback (most recent call last): File "<doctest test.test_syntax[5]>", line 1 SyntaxError: can't delete function call >>> a + 1 = 2 Traceback (most recent call last): File "<doctest test.test_syntax[6]>", line 1 SyntaxError: can't assign to operator >>> (x for x in x) = 1 Traceback (most recent call last): File "<doctest test.test_syntax[7]>", line 1 SyntaxError: can't assign to generator expression >>> 1 = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> "abc" = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> `1` = 1 Traceback (most recent call last): File "<doctest test.test_syntax[10]>", line 1 SyntaxError: can't assign to repr If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): File "<doctest test.test_syntax[11]>", line 1 SyntaxError: can't assign to literal >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): File "<doctest test.test_syntax[12]>", line 1 SyntaxError: can't assign to operator >>> a if 1 else b = 1 Traceback (most recent call last): File "<doctest test.test_syntax[13]>", line 1 SyntaxError: can't assign to conditional expression From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[14]>", line 1 SyntaxError: cannot assign to None From ast_for_arguments(): >>> def f(x, y=1, z): ... 
pass Traceback (most recent call last): File "<doctest test.test_syntax[15]>", line 1 SyntaxError: non-default argument follows default argument >>> def f(x, None): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[16]>", line 1 SyntaxError: cannot assign to None >>> def f(*None): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[17]>", line 1 SyntaxError: cannot assign to None >>> def f(**None): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[18]>", line 1 SyntaxError: cannot assign to None From ast_for_funcdef(): >>> def None(x): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[19]>", line 1 SyntaxError: cannot assign to None From ast_for_call(): >>> def f(it, *varargs): ... return list(it) >>> L = range(10) >>> f(x for x in L) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(x for x in L, 1) Traceback (most recent call last): File "<doctest test.test_syntax[23]>", line 1 SyntaxError: Generator expression must be parenthesized if not sole argument >>> f((x for x in L), 1) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... i244, i245, i246, i247, i248, i249, i250, i251, i252, ... i253, i254, i255) Traceback (most recent call last): File "<doctest test.test_syntax[25]>", line 1 SyntaxError: more than 255 arguments The actual error cases counts positional arguments, keyword arguments, and generator expression arguments separately. This test combines the three. >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... 
i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... (x for x in i244), i245, i246, i247, i248, i249, i250, i251, ... i252=1, i253=1, i254=1, i255=1) Traceback (most recent call last): File "<doctest test.test_syntax[26]>", line 1 SyntaxError: more than 255 arguments >>> f(lambda x: x[0] = 3) Traceback (most recent call last): File "<doctest test.test_syntax[27]>", line 1 SyntaxError: lambda cannot contain assignment The grammar accepts any test (basically, any expression) in the keyword slot of a call site. Test a few different options. >>> f(x()=2) Traceback (most recent call last): File "<doctest test.test_syntax[28]>", line 1 SyntaxError: keyword can't be an expression >>> f(a or b=1) Traceback (most recent call last): File "<doctest test.test_syntax[29]>", line 1 SyntaxError: keyword can't be an expression >>> f(x.y=1) Traceback (most recent call last): File "<doctest test.test_syntax[30]>", line 1 SyntaxError: keyword can't be an expression More set_context(): >>> (x for x in x) += 1 Traceback (most recent call last): File "<doctest test.test_syntax[31]>", line 1 SyntaxError: can't assign to generator expression >>> None += 1 Traceback (most recent call last): File "<doctest test.test_syntax[32]>", line 1 SyntaxError: cannot assign to None >>> f() += 1 Traceback (most recent call last): File "<doctest test.test_syntax[33]>", line 1 SyntaxError: can't assign to function call Test continue in finally in weird combinations. continue in for loop under finally should be ok. >>> def test(): ... try: ... pass ... finally: ... for abc in range(10): ... continue ... print abc >>> test() 9 Start simple, a continue in a finally should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[36]>", line 6 SyntaxError: 'continue' not supported inside 'finally' clause This is essentially a continue in a finally which should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... try: ... continue ... except: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[37]>", line 6 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[38]>", line 5 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[39]>", line 6 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... 
finally: ... try: ... continue ... finally: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[40]>", line 7 SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: pass ... finally: ... try: ... pass ... except: ... continue Traceback (most recent call last): ... File "<doctest test.test_syntax[41]>", line 8 SyntaxError: 'continue' not supported inside 'finally' clause There is one test for a break that is not in a loop. The compiler uses a single data structure to keep track of try-finally and loops, so we need to be sure that a break is actually inside a loop. If it isn't, there should be a syntax error. >>> try: ... print 1 ... break ... print 2 ... finally: ... print 3 Traceback (most recent call last): ... File "<doctest test.test_syntax[42]>", line 3 SyntaxError: 'break' outside loop This should probably raise a better error than a SystemError (or none at all). In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: ... while 5: ... while 6: ... while 8: ... while 9: ... while 10: ... while 11: ... while 12: ... while 13: ... while 14: ... while 15: ... while 16: ... while 17: ... while 18: ... while 19: ... while 20: ... while 21: ... while 22: ... break Traceback (most recent call last): ... SystemError: too many statically nested blocks This tests assignment-context; there was a bug in Python 2.5 where compiling a complex 'if' (one with 'elif') would fail to notice an invalid suite, leading to spurious errors. >>> if 1: ... x() = 1 ... elif 1: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[44]>", line 2 SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 Traceback (most recent call last): ... File "<doctest test.test_syntax[45]>", line 4 SyntaxError: can't assign to function call >>> if 1: ... x() = 1 ... elif 1: ... pass ... else: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[46]>", line 2 SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 ... else: ... pass Traceback (most recent call last): ... File "<doctest test.test_syntax[47]>", line 4 SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... pass ... else: ... x() = 1 Traceback (most recent call last): ... File "<doctest test.test_syntax[48]>", line 6 SyntaxError: can't assign to function call >>> f(a=23, a=234) Traceback (most recent call last): ... File "<doctest test.test_syntax[49]>", line 1 SyntaxError: keyword argument repeated >>> del () Traceback (most recent call last): ... File "<doctest test.test_syntax[50]>", line 1 SyntaxError: can't delete () >>> {1, 2, 3} = 42 Traceback (most recent call last): ... File "<doctest test.test_syntax[50]>", line 1 SyntaxError: can't assign to literal Corner-case that used to crash: >>> def f(*xx, **__debug__): pass Traceback (most recent call last): SyntaxError: cannot assign to __debug__ """
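# Each doctest above exercises the compiler rather than the runtime.  The
# same kind of check can be made programmatically by compiling a snippet
# and catching SyntaxError -- a stand-alone sketch (exact message text
# varies across Python versions):
#
#   def syntax_error_message(source):
#       try:
#           compile(source, '<test>', 'exec')
#       except SyntaxError, e:      # Python 2 syntax, matching the tests
#           return e.msg
#       return None
#
#   assert syntax_error_message('f() = 1') is not None
#   assert syntax_error_message('x = 1') is None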
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be >= 1D atleast_2d Force arrays to be >= 2D atleast_3d Force arrays to be >= 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) stack Stack arrays along a new axis split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat Construct a Matrix bmat Build a Matrix from blocks ================ =================== Polynomials ----------- ================ =================== poly1d A one-dimensional polynomial class poly Return polynomial coefficients from roots roots Find roots of polynomial given coefficients polyint Integrate polynomial polyder Differentiate polynomial polyadd Add polynomials polysub Substract polynomials polymul Multiply polynomials polydiv Divide polynomials polyval Evaluate polynomial at given argument ================ =================== Iterators --------- ================ =================== Arrayterator A buffered iterator for big arrays. ================ =================== Import Tricks ------------- ================ =================== ppimport Postpone module import until trying to use it ppimport_attr Postpone module import until trying to use its attribute ppresolve Import postponed module and return it. ================ =================== Machine Arithmetics ------------------- ================ =================== machar_single Single precision floating point arithmetic parameters machar_double Double precision floating point arithmetic parameters ================ =================== Threading Tricks ---------------- ================ =================== ParallelExec Execute commands in parallel thread. ================ =================== 1D Array Set Operations ----------------------- Set operations for 1D numeric arrays based on sort() function. ================ =================== ediff1d Array difference (auxiliary function). unique Unique elements of an array. intersect1d Intersection of 1D arrays with unique elements. setxor1d Set exclusive-or of 1D arrays with unique elements. in1d Test whether elements in a 1D array are also present in another array. union1d Union of 1D arrays with unique elements. setdiff1d Set difference of 1D arrays with unique elements. ================ =================== """
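# A quick taste of the 1D array set operations listed above (runnable
# as-is against NumPy):
#
#   import numpy as np
#
#   a = np.array([1, 2, 3, 2, 4])
#   b = np.array([2, 4, 6])
#
#   np.unique(a)          # array([1, 2, 3, 4])
#   np.intersect1d(a, b)  # array([2, 4])
#   np.union1d(a, b)      # array([1, 2, 3, 4, 6])
#   np.setdiff1d(a, b)    # array([1, 3])
#   np.in1d(a, b)         # array([False,  True, False,  True,  True])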
""" enhpath.py - An object-oriented approach to file/directory operations. Author: NAME <EMAIL>. URL coming soon. Derived from Jason Orendorff's path.py 2.0.4 (JOP) available at http://www.jorendorff.com/articles/python/path. Whereas JOP maintains strict API compatibility with its parent functions, enhpath ("enhanced path") stresses convenience and conciseness in the caller's code. It does this by combining related methods, encapsulating multistep operations, and occasional magic behaviors. Enhpath requires Python 2.3 (JOP: Python 2.2). Paths are subclasses of unicode, so all strings methods are available too. Redundant methods like .basename() are moved to subclass path_compat. Subclassable so you can add local methods. (JOP: not subclassable because methods that create new paths call path() directly rather than self.__class__().) Constructors and class methods: path('path/name') path object path('') Used to generate subpaths relative to current directory: path('').joinpath('a') => path('a') path() Same as path('') path.cwd() Same as path(os.getcwd()) (JOP: path.getcwd() is static method) path.popdir(N=1) Pop Nth previous directory off path.pushed_dirs, chdir to it, and log a debug message. IndexError if we fall off the end of the list. See .chdir(). (JOP: no equiv.) path.tempfile(suffix='', prefix=tempfile.template, dir=None, text=False) Create a temporary file using tempfile.mkstemp, open it, and return a tuple of (path, file object). The file will not be automatically deleted. 'suffix': use this suffix; e.g., ".txt". 'prefix': use this prefix. 'dir': create in this directory (default system temp). 'text' (boolean): open in text mode. (JOP: no equiv.) path.tempdir(suffix='', prefix=tempfile.template, dir=None) Create a temporary directory using tempfile.mkdtemp and return its path. The directory will not be automatically deleted. (JOP: no equivalent.) path.tempfileobject(mode='w+b', bufsize=-1, suffix='', prefix=tempfile.template, dir=None) Return a file object pointing to an anonymous temporary file. The file will automatically be destroyed when the file object is closed or garbage collected. The file will not be visible in the filesystem if the OS permits. (Unix does.) This is a static method since it neither creates nor uses a path object. The only reason it's in this class is to put all the tempfile-creating methods together. (JOP: no equiv.) Chdir warnings: changing the current working directory via path.popdir(), .chdir(), or os.chdir() does not adjust existing relative path objects, so if they're relative to the old current directory they're now invalid. Changing the directory is global to the runtime, so it's visible in all threads and calling functions. Class attributes: path.repr_as_str True to make path('a').__repr__() return 'a'. False (default) to make it return 'path("a")'. Useful when you have to dump lists of paths or dicts containing paths for debugging. Changing this is visible in all threads. (JOP: no equivalent.) Instance attributes: .parent Parent directory as path. Compare .ancestor(). path('a/b').parent => path('a'). path('b').parent => path(''). .name Filename portion as string. path('a/filename.txt').name => 'filename.txt'. .base Filename without extension. Compare .stripext(). path('a/filename.txt').base => 'filename'. path('a/archive.tar.gz').base => 'archive.tar'. (JOP: called .namebase). .ext Extension only. path('a/filename.txt').ext => '.txt'. path('a/archive.tar.gz').ext => '.gz'. Interaction with Python operators: + Simple concatenation. 
path('a') + 'b' => path('ab'). 'a' + path('b') => path('ab'). / Same as .joinpath(). path('a') / 'b' => path('a/b'). path('a') / 'b' / 'c' => path('a/b/c'). Normalization methods: .abspath() Convert to absolute path. Implies normpath on most platforms. path('python2.4').abspath() => path('/usr/lib/python2.4'). .isabs() Is the path absolute? .normcase() Does nothing on Unix. On case-insensitive filesystems, converts to lowercase. On Windows, converts slashes to backslashes. .normpath() Clean up .., ., redundant //, etc. On Windows, convert shashes to backslashes. Python docs warn "this may change the meaning of a path if it contains symbolic links!" path('a/../b/./c//d').normpath() => path('b/c/d') .realpath() Resolve symbolic links in path. path('/home/joe').realpath() => path('/mnt/data/home/joe') if /home is a symlink to /mnt/data/home. .expand() Call expanduser, expandvars and normpath. This is commonly everything you need to clean up a filename from a configuration file. .expanduser() Convert ~user to the user's home directory. path('~joe/Mail').expanduser() => path('/home/joe/Mail') path('~/.vimrc').expanduser() => path('/home/joe/.vimrc') .expandvars() Resolve $ENVIRONMENT_VARIABLE references. path('$HOME/Mail').expandvars() => path('/home/joe/Mail') .relpath() Convert to relative path from current directory. path('/home/joe/Mail') => path('Mail') if CWD is /home/joe. .relpathto(dest) Return a relative path from self to dest. If there is no relative path (e.g., they reside on different drives on Windows), same as dest.abspath(). Dest may be a path or a string. .relpathfrom(ancestor) Chop off the front part of self that matches ancestor. path('/home/joe/Mail').relpathfrom('/home/joe') => path('Mail') ValueError if self does not start with ancestor. Deriving related paths: .splitpath() Return a list of all directory/filename components. The first item will be a path, either os.curdir, os.pardir, empty, or the root directory of this path (for example, '/' or 'C:\\'). The other items will be strings. path('/usr/local/bin') => [path('/'), 'usr', 'local', 'bin'] path('a/b/c.txt') => [path(''), 'a', 'b', 'c.txt'] (JOP: This is what .splitall() does. JOP's .splitpath() returns (p.parent, p.name).) (Note: not called .split() to avoid masking the string method of that name.) .splitext() Same as (p.stripext(), p.ext). .stripext() Chop one extension off the path. path('a/filename.txt').stripext() => path('a/filename') .joinpath(*components) Join components with directory separator as necessary. path('a').joinpath('b', 'c') => path('a/b/c') path('a/').joinpath('b') => path('a/b') Calling .splitpath() and .joinpath() produces the original path. (Note: not called .join() to avoid masking the string method of that name.) .ancestor(N) Chop N components off end, same as invoking .parent N times. path('a/b/c').ancestor(2) => path('a') (JOP: no equivalent method.) .joinancestor(N, *components) Combination of .ancestor() and .joinpath(). (JOP: no equivalent method.) .redeploy(old_ancestor, new_ancestor) Replace the old_ancestor part of self with new_ancestor. Both may be paths or strings. old_ancestor *must* be an ancestor of self; this is checked via absolute paths even if the specified paths are relative. (Not implemented: verifying it would be useful for things like Cheetah's --idir and --odir options.) (JOP: no equivalent method.) Listing directories: Common arguments: pattern, a glob pattern like "*.py". Limits the result to matching filenames. symlinks, False to exclude symbolic links from result. 
Useful if you want to treat them separately. (JOB: no equivalent argument.) .listdir(pattern=None, symlinks=True, names_only=False) List directory. path('/').listdir() => [path('/bin'), path('/boot'), ...] path('/').listdir(names_only=True) => ['bin', 'boot', ...] If names_only is true, symlinks is false and pattern is None, this is the same as os.listdir() and no path objects are created. But if symlinks is true and there is a pattern, it must create path objects to determine the return values, and then turn them back to strings. (JOP: No names_only argument.) .dirs(pattern=None, symlinks=True) List only the subdirectories in directory. Not recursive. path('/usr/lib/python2.3').dirs() => [path('/usr/lib/python2.3/site-packages'), ...] .files(pattern=None, symlinks=True) List only the regular files in directory. Not recursive. Does not list special files (anything for which os.path.isfile() returns false). path('/usr/lib/python2.3').dirs() => [path('/usr/lib/python2.3/BaseHTTPServer.py'), ...] .symlinks(pattern=None) List only the symbolic links in directory. Not recursive. path('/').symlinks() => [path('/home')] if it's a symlink. (JOP: no equivalent method.) .walk(pattern=None, symlinks=True) Iterate over files and subdirs recursively. The search is depth-first. Each directory is returned just before its children. Returns an iteration, not a list. .walkdirs(pattern=None, symlinks=True) Same as .walk() but yield only directories. .walkfiles(pattern=None, symlinks=True) Same as .walk() but yield only regular files. Excludes special files (anything for which os.path.isfile() returns false). .walksymlinks(pattern=None) Same as .walk() but yield only symbolic links. (JOP: no equivalent method.) .findpaths(args=None, ls=False, **kw) Run the Unix 'find' command and return an iteration of paths. The argument signature matches find's most popular arguments. Due to Python's handling of keyword arguments, there are some limitations: - You can't specify the same argument multiple times. - The argument order may be rearranged. - You can't do an 'or' expression or a 'brace' expression. - Not all 'find' operations are implemented. Special syntaxes: - mtime=(N, N) Converted to two -mtime options, used to specify a range. Normally the first arg is negative and the second positive. Same for atime, ctime, mmin, amin, cmin. - name=[pattern1, pattern2, ...] Converted to '-lbrace -name pattern1 -o ... -rbrace'. Value may be list or tuple. There are also some other arguments: - args, list or string, appended to the shell command line. Useful to do things the keyword args can't. Note that if value is a string, it is split on whitespace. - ls, boolean, true to yield one-line strings describing the files, same as find's '-ls' option. Does not yield paths. - pretend, boolean, don't run the command, just return it as a string. Useful only for debugging. We try to handle quoting intelligently but there's no guarantee we'll produce a valid or correct command line. If your argument values have quotes, spaces, or newlines, use pretend=True and verify the command line is correct, otherwise you may have unexpected problems. If 'pretend' is False (default), the subcommand is logged to the 'enhpath' logger, level debug. See Python's 'logging' module for details. Examples: .find(name='*.py') .find(type='d', ls=True) .find(mtime=-1, type='f') (JOP: no equivalent method.) WARNING: Normally we bypass the shell to avoid quoting problems. However, if 'args' is a string or we're running on Python 2.3, we can't avoid the shell. 
Argument values containing spaces, quotes, or newlines may be misinterpreted by the shell. This can lead to a syntax error or to an incorrect search. When in doubt, use the 'pretend' argument to verify the command line is correct. path('').find(...) yields paths relative to the current directory. In this case, 'find' on posix returns paths prefixed with "./", so we chop off this prefix. We don't call .normpath() because of its fragility with symbolic links. On other platforms we don't clean up the paths because we don't know how. .findpaths_pretend(args=None, ls=False, **kw) Same as .find(...) above but don't actually do the find; instead, return the external command line as a list of strings. Useful for debugging. .fnmatch(pattern) Return True if self.name matches the pattern. .glob(pattern) Return a list of paths that match the pattern. path('a').glob('*.py') => Same as path('a').listdir('*.py') path('a').glob('*/bin/*') => List of files all users have in their bin directories. # Reading/writing files .open(mode='r') Open file and return a file object. .file(mode='r') Same. .bytes(mode='r') Read the file in binary mode and return content as string. .write_bytes(bytes, append=False) Write 'bytes' (string) to the file. Overwrites the file unless 'append' is true. .text(encoding=None, errors='strict') Read the file in text mode and return content as string. 'encoding' is a Unicode encoding/character set. If present, the content is returned as a unicode object; otherwise it's returned as an 8-bit string. 'errors' is an argument for str.decode(). .write_text(text, encoding=None, errors='strict', linesep=os.linesep, append=False) Write 'text' (string) to the file in text mode. Overwrites the file unless 'append' (keyword arg) is true. 'encoding' (string) is the unicode encoding. Ignored if text is string type rather than unicode type. 'errors' is an argument for unicode.encode(). 'linesep' (keyword arg) the chars to write for newline. None means don't convert newlines. The default is your platform's preferred convention. .lines(encoding=None, errors='strict', retain=True) Read the file in text mode and return the lines as a list. 'encoding' and 'errors' are same as for .text(). 'retain' (boolean) If true, convert all newline types to '\n'. If false, chop off newlines. The open mode is 'U'. To iterate over the lines, use the filehandle returned by .open() as an iterator. .writelines(lines, encoding=None, errors='strict', linesep=os.linesep, append=False) Write the lines (list) to the file. The other args are the same as for .write_text(). When appending, use the same Unicode encoding the original text was written in, otherwise the reader will be very confused. Checking file existence/type: .exists() Does the path exist? .isdir() Is the path a directory? .isfile() Is the path a regular file? .islink() Is the path a symbolic link? .ismount() Is the path a mount point? .isspecial() Is the path a special file? .type() Return the file type using the one-letter codes from the 'find' command: 'f' => regular file (path.FILE) 'd' => directory (path.DIR) 'l' => symbolic link (path.LINK) 'b' => block special file (path.BLOCK) 'c' => character special file (path.CHAR) 'p' => named pipe/FIFO (path.PIPE) 's' => socket (path.SOCKET) 'D' => Door (Solaris) (path.DOOR) None => unknown The constants at the right are class attributes if you prefer to compare to them instead of literal chars. You'll never get a 'D' in the current implementation since the 'stat' module provides no way to test for a Door. 
    path.SPECIAL_TYPES is a list of the latter five types.
    All the .is*() functions return False if the path doesn't exist.
    All except .islink() return the state of the pointed-to file if
    the path is a symbolic link. .isfile() returns False for a special
    file. To test a special file's type, pass .stat().st_mode to one
    of the S_*() functions in the 'stat' module; this is more
    efficient than .isspecial() when you only care about one type.

Checking permissions and other information:

.stat()
    Get general file information; see os.stat().

.lstat()
    Same as .stat() but don't follow a symbolic link.

.statvfs()
    Get general information about the filesystem; see os.statvfs().

.samefile(other)
    Are 'self' and 'other' the same file? Returns True if one is a
    symbolic or hard link to the other. 'other' may be a path or a
    string.

.pathconf(name)
    See os.pathconf(); OS-specific info about the path.

.canread()
    Can we read the file? (JOP: no equivalent.)

.canwrite()
    Can we write the file? (JOP: no equivalent.)

.canexecute()
    Can we execute the file? True for a directory means we can chdir
    into it or access its contents. (JOP: no equivalent.)

.access(mode)
    General permission test; see os.access() for usage.

Modifying file information:

.utime(times)
    Set file access and modification time. 'times' is either None to
    set them to the current time, or a tuple of (atime, mtime) --
    integers in tick format (the same format returned by time.time()).

.getutime()
    Return a tuple of (atime, mtime). This can be passed directly to
    another path's .utime(). (JOP: no equiv.)

.copyutimefrom(other)
    Make my atime/mtime match the other path's. 'other' may be a path
    or a string. (JOP: no equiv.)

.copyutimeto(*other)
    Make the other paths' atime/mtime match mine. Note that multiple
    others can be specified, unlike .copyutimefrom(). (JOP: no equiv.)

.itercopyutimeto(iterpaths)
    Same as .copyutimeto() but use an iterable to specify the
    destination paths. (JOP: no equiv.)

.chmod(mode)
    Set the path's permissions to 'mode' (octal int). There are
    several constants in the 'stat' module you can use; use the '|'
    operator to combine them.

.grant(mode)
    Add 'mode' to the file's current mode. (Uses '|'.)

.revoke(mode)
    Subtract 'mode' from the file's current mode. (Uses '&' with the
    complement of 'mode'.)

.chown(uid=None, gid=None)
    Change the path's owner/group. If uid or gid is a string, look up
    the corresponding number via the 'pwd' or 'grp' module. (JOP: both
    uid and gid must be specified, and both must be numeric.)

.needsupdate(*other)
    True if the path doesn't exist or its mtime is older than any of
    the others. If any 'other' is a directory, only the directory
    mtime will be compared; this method does not recurse. A
    directory's mtime changes when a file in it is added, removed, or
    renamed. To do the equivalent of iteration, see
    .iterneedsupdate(). (JOP: no equivalent method.)

.iterneedsupdate(iterpaths)
    Same as .needsupdate() but use an iterable to specify the other
    paths. To do the equivalent of a recursive compare, call
    .walkfiles() on the other directories and concatenate the
    iterators using itertools.chain, interspersing any static lists of
    paths you wish. (JOP: no equivalent method.)

Moving files and directories:

.move(dest, overwrite=True, prune=False, atomic=False)
    Move the file or directory to 'dest'. Tries os.rename() or
    os.renames() first, falls back to shutil.move() if it fails. If
    'overwrite' is false, raise OverwriteError if dest exists. Creates
    any ancestor directories of dest if missing. If 'prune' is true,
    delete any empty ancestor directories of the source after the
    move.
    If 'atomic' is true and .rename*() fails, don't catch the OSError
    and don't try shutil.move(). This guarantees that if the move
    succeeds, it's an atomic operation. This will fail if the two
    paths are on different filesystems, and may fail if the source is
    a directory. (JOP: this combines the functionality of .rename(),
    .renames(), and .move().)

.movefile(dest, overwrite=True, prune=False, atomic=False,
          checkdest=False)
    Same as .move() but raise FileTypeError if path is a directory
    rather than a regular file. 'checkdest' (boolean): true to fail if
    dest is a directory. (JOP: no equivalent method.)

.movedir(dest, overwrite=True, prune=False, atomic=False,
         checkdest=False)
    Same as .move() but raise FileTypeError if path is a regular file
    rather than a directory. 'checkdest' (boolean): true to fail if
    dest is a file. (JOP: no equivalent method.)

Creating directories and (empty) files:

.mkdir(mode=0777, clobber=False)
    Create an empty directory. If 'clobber' is true, call
    .delete_dammit() first; otherwise you'll get OSError if a file
    exists in its place. Silently succeeds if the directory exists and
    clobber is false. Creates any missing ancestor directories. (JOP:
    this is equivalent to .makedirs() rather than .makedir(), except
    you'll get OSError if a directory or file exists.)

.touch()
    Create a file if it doesn't exist. If a file or directory does
    exist, set its atime and mtime to the current time -- same as
    .utime(None).

Deleting directories and files:

.delete_dammit()
    Delete path recursively whatever it is, and don't complain if it
    doesn't exist. Convenient but dangerous! (JOP: combines .rmtree(),
    .rmdir(), and .remove(), plus unique features.)

.rmdir(prune=False)
    Delete a directory. Silently succeeds if it doesn't exist; OSError
    if it's a file or symbolic link. See .delete_dammit(). If 'prune'
    is true, also delete any empty ancestor directories. (JOP:
    equivalent to .removedirs() if prune is true, or .rmdir() if prune
    is false, except the JOP methods don't have a 'prune' argument,
    and they raise OSError if the directory doesn't exist.)

.remove(prune=False)
    Delete a file. Silently succeeds if it doesn't exist; OSError if
    it's a directory. If 'prune' is true, delete any empty ancestor
    directories. (JOP: equivalent to .remove() if prune is false,
    except the JOP method has no 'prune' arg and raises OSError if the
    file doesn't exist.)

.unlink(prune=False)
    Same as .remove().

Links:

.hardlink(source)
    Create a hard link at 'source' pointing to this path. (JOP:
    equivalent to .link().)

.symlink(source)
    Create a symbolic link at 'source' pointing to this path. If path
    is relative, it should be relative to source's directory, but it
    needn't be valid relative to the current directory.

.readlink()
    Return the path this symbolic link points to.

.readlinkabs()
    Same as .readlink() but always return an absolute path.

Copying files and directories:

.copy(dest, copy_times=True, copy_mode=False, symlinks=True)
    Copy a file or (recursively) a directory. If 'copy_times' is true
    (default), copy the atime/mtime too. If 'copy_mode' is true (not
    default), copy the permission bits too. If 'symlinks' is true
    (default), create symbolic links in dest corresponding to those in
    path (using shutil.copytree, which does not claim infallibility).
    (JOP: combines .copy(), .copy2(), .copytree().)

.copymode(dest)
    Copy path's permission bits to dest (but not its content).

.copystat(dest)
    Copy path's permission bits and atime/mtime to dest (but not its
    content or owner/group). Overlaps with .copyutimeto().

Modifying the runtime environment:

.chdir(push=False)
    Set the current working directory to path.
    If 'push' is true, push the old current directory onto
    path.pushed_dirs (class attribute) and log a debug message. Note
    that pushing is visible to all threads and calling functions.
    (JOP: no equiv.)

.chroot()
    Set this process's root directory to path.

Subclass path_windows(path):  # Windows-only operations.

.drive
    Drive specification. path('C:\COMMAND.COM') => 'C:' on Windows,
    '' on Unix.

.splitdrive()
    Same as (p.drive, path(p.<everything else>)).

.splitunc()
    Same as (p.uncshare, path(p.<everything else>)).

.uncshare
    The UNC mount point, empty for local drives. UNC files are in
    \\host\path syntax.

.startfile()
    Launch path as an autonomous program (Windows only).

Subclass path_compat(path_windows):  # Redundant JOP methods.

.namebase()   Same as .base.
.basename()   Same as .name.
.dirname()    Same as .parent.
.getatime()   Same as .atime.
.getmtime()   Same as .mtime.
.getctime()   Same as .ctime.
.getsize()    Same as .size.

# JOP has the following TODO, which I suppose applies here too:
# - Bug in write_text(). It doesn't support Universal newline mode.
# - Better error message in listdir() when self isn't a
#   directory. (On Windows, the error message really sucks.)
# - Make sure everything has a good docstring.
# - Add methods for regex find and replace.
# - guess_content_type() method?
# - Perhaps support arguments to touch().
# - Could add split() and join() methods that generate warnings.
# - Note: __add__() technically has a bug, I think, where
#   it doesn't play nice with other types that implement
#   __radd__(). Test this.
"""
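A quick usage sketch of the path API documented above. This is
illustrative only: the module and import name ('enhpath') and the file
names are assumptions, and only methods and attributes documented
above are used::

    # Hypothetical usage of the enhpath 'path' class described above.
    from enhpath import path

    src = path('/usr/lib/python2.3')

    # Non-recursive listings.
    subdirs = src.dirs()              # directories only
    modules = src.files('*.py')       # regular files matching a pattern

    # Recursive iteration; .walkfiles() yields regular files only.
    total = sum(f.size for f in src.walkfiles('*.py'))

    # Rebuild a target only when one of its sources is newer.
    target = path('build/app.zip')
    if target.needsupdate(*modules):
        target.parent.mkdir()         # creates missing ancestors too
        # ... build the archive here ...
        target.copyutimefrom(src)     # stamp it with the source's times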
""" Module `lino_xl.lib.properties` ------------------------------- Imagine that we are doing a study about alimentary habits. We observe a defined series of properties on the people who participate in our study. Here are the properties that we are going to observe:: >>> weight = properties.INT.create_property(name='weight') >>> weight.save() >>> married = properties.BOOL.create_property(name='married') >>> married.save() >>> favdish = properties.CHAR.create_property(name='favdish',label='favorite dish') >>> favdish.save() >>> favdish.create_value("Cookies").save() >>> v = favdish.create_value("Fish").save() >>> favdish.create_value("Meat").save() >>> favdish.create_value("Vegetables").save() Now we have setup the properties. Let's have a look at this metadata:: >>> print favdish.choices_list() [u'Cookies', u'Fish', u'Meat', u'Vegetables'] >>> qs = properties.Property.objects.all() >>> ["%s (%s)" % (p.name,','.join(map(str,p.choices_list()))) for p in qs] [u'weight ()', u'married (True,False)', u'favdish (Cookies,Fish,Meat,Vegetables)'] PropValuesByOwner is a report that cannot be rendered into a normal grid because the 'value' column has variable data type, but it's render_to_dict() method is used to fill an `Ext.grid.PropertyGrid`: >>> properties.PropValuesByOwner().request(master=Person).render_to_dict() {'count': 3, 'rows': [{'name': u'favdish', 'value': ''}, {'name': u'married', 'value': None}, {'name': u'weight', 'value': None}], 'title': u'Properties for persons'} Here are the people we are going to analyze:: >>> chris = Person(name='Chris') >>> chris.save() >>> NAME = Person(name='NAME') >>> NAME.save() >>> vera = Person(name='Vera') >>> vera.save() >>> NAME = Person(name='NAME') >>> NAME.save() Now we are ready to fill in some real data. NAME NAME and NAME together to each question. First we asked them "What's your weight?", and they answered: >>> weight.set_value_for(chris,70) >>> weight.set_value_for(NAME,110) >>> weight.set_value_for(vera,60) When asked whether they were married, they answered: >>> married.set_value_for(chris,True) >>> married.set_value_for(NAME,False) >>> married.set_value_for(vera,True) And about their favourite dish they answered: >>> favdish.set_value_for(chris,'Cookies') >>> favdish.set_value_for(NAME,'Fish') >>> favdish.set_value_for(vera,'Vegetables') NAME came later. She answered all questions at once, which we can enter in one line of code: >>> properties.set_value_for(NAME,married=True,favdish='Meat') Note that NAME didn't know her weight. To see the property values of a person, we can use a manual query... >>> qs = properties.PropValue.objects.filter(owner_id=NAME.pk).order_by('prop__name') >>> [v.by_owner() for v in qs] [u'favdish: Fish', u'married: False', u'weight: 110'] ... 
... or use the `PropValuesByOwner` report:

>>> properties.PropValuesByOwner().request(master_instance=NAME).render_to_dict()
{'count': 3, 'rows': [{'name': u'favdish', 'value': u'Fish'}, {'name': u'married', 'value': False}, {'name': u'weight', 'value': 110}], 'title': u'Properties for NAME'}

Note how properties.PropValuesByOwner also returns 3 rows for NAME
although we don't know her weight:

>>> properties.PropValuesByOwner().request(master_instance=NAME).render_to_dict()
{'count': 3, 'rows': [{'name': u'favdish', 'value': u'Meat'}, {'name': u'married', 'value': True}, {'name': u'weight', 'value': None}], 'title': u'Properties for NAME'}

Query by property:

>>> qs = properties.PropValue.objects.filter(prop=weight)
>>> [v.by_property() for v in qs]
[u'Chris: 70', u'NAME: 110', u'Vera: 60']

>>> qs = weight.values_query().order_by('value')
>>> [v.by_property() for v in qs]
[u'Vera: 60', u'Chris: 70', u'NAME: 110']

`Report.as_text()` is currently broken:

>>> #properties.PropValuesByOwner().as_text(NAME)

"""
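As a footnote to the doctests above, here is a small hedged helper
that flattens the render_to_dict() rows shown there into an ordinary
dict. The helper name is hypothetical; it assumes only the
`request(...).render_to_dict()` shape demonstrated above::

    def values_as_dict(person):
        # Collect one person's property values into a plain dict,
        # using the report API and row shape shown in the doctests.
        report = properties.PropValuesByOwner()
        data = report.request(master_instance=person).render_to_dict()
        return dict((row['name'], row['value']) for row in data['rows'])

    # e.g. values_as_dict(vera) would give:
    # {u'favdish': u'Vegetables', u'married': True, u'weight': 60}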
"""Script to generate reports on translator classes from Doxygen sources. The main purpose of the script is to extract the information from sources related to internationalization (the translator classes). It uses the information to generate documentation (language.doc, translator_report.txt) from templates (language.tpl, maintainers.txt). Simply run the script without parameters to get the reports and documentation for all supported languages. If you want to generate the translator report only for some languages, pass their codes as arguments to the script. In that case, the language.doc will not be generated. Example: python translator.py en nl cz Originally, the script was written in Perl and was known as translator.pl. The last Perl version was dated 2002/05/21 (plus some later corrections) $Id: translator.py 744 2010-10-09 08:04:33Z dimitri $ NAME (EMAIL) History: -------- 2002/05/21 - This was the last Perl version. 2003/05/16 - List of language marks can be passed as arguments. 2004/01/24 - Total reimplementation started: classes TrManager, and Transl. 2004/02/05 - First version that produces translator report. No language.doc yet. 2004/02/10 - First fully functional version that generates both the translator report and the documentation. It is a bit slower than the Perl version, but is much less tricky and much more flexible. It also solves some problems that were not solved by the Perl version. The translator report content should be more useful for developers. 2004/02/11 - Some tuning-up to provide more useful information. 2004/04/16 - Added new tokens to the tokenizer (to remove some warnings). 2004/05/25 - Added from __future__ import generators not to force Python 2.3. 2004/06/03 - Removed dependency on textwrap module. 2004/07/07 - Fixed the bug in the fill() function. 2004/07/21 - Better e-mail mangling for HTML part of language.doc. - Plural not used for reporting a single missing method. - Removal of not used translator adapters is suggested only when the report is not restricted to selected languages explicitly via script arguments. 2004/07/26 - Better reporting of not-needed adapters. 2004/10/04 - Reporting of not called translator methods added. 2004/10/05 - Modified to check only doxygen/src sources for the previous report. 2005/02/28 - Slight modification to generate "mailto.txt" auxiliary file. 2005/08/15 - Doxygen's root directory determined primarily from DOXYGEN environment variable. When not found, then relatively to the script. 2007/03/20 - The "translate me!" searched in comments and reported if found. 2008/06/09 - Warning when the MAX_DOT_GRAPH_HEIGHT is still part of trLegendDocs(). 2009/05/09 - Changed HTML output to fit it with XHTML DTD 2009/09/02 - Added percentage info to the report (implemented / to be implemented). 2010/02/09 - Added checking/suggestion 'Reimplementation using UTF-8 suggested. 2010/03/03 - Added [unreachable] prefix used in maintainers.txt. 2010/05/28 - BOM skipped; minor code cleaning. 2010/05/31 - e-mail mangled already in maintainers.txt 2010/08/20 - maintainers.txt to UTF-8, related processin of unicode strings - [any mark] introduced instead of [unreachable] only - marks hihglighted in HTML 2010/08/30 - Highlighting in what will be the table in langhowto.html modified. 2010/09/27 - The underscore in \latexonly part of the generated language.doc was prefixed by backslash (was LaTeX related error). """
""" ===================================================== Community analysis of metgenomic shotgun sequencing ===================================================== Pipeline_metagenomecommunities.py takes as input a set of fastq files from a shotgun sequencing experiment of environmental samples and assesses community structure and function. Overview ======== The pipeline assumes the data derive from multiple tissues/conditions (:term:`experiment`) with one or more biological and/or technical replicates (:term:`replicate`). A :term:`replicate` within each :term:`experiment` is a :term:`track`. Community profiling -------------------- The pipeline uses various tools for assessing the abundance of taxa in an environmental sample. To assess relative abundance of taxa in a community, we use metaphlan as it is easy to use and because it performs alignments against a reduced set of clade-specific marker genes it is relativelyfast. Metaphlan is likely to perform better where the samples are derived fromhuman (e.g gut) as the database is significantly overrepresented for human derived taxa. Where metaphlan attempts to estimate taxa relative abundances it does not attemtp to assign every read to a taxa. An alternative method is to use kraken. Kraken utilises a megablast-like approach in order to search for exact sequence matches between reads and sequences in the kraken database (taxonomy). Like metaphlan, kraken assumes that there is little sequence divergence between the sequenced samples and the data in the database. In cases where sequences are derived from environmental samples that have not been sequenced before this will result in very few sequences being assigned to a taxa. A third approach is to use a sensitive alignment algorithm. At the time of writing this, a new approach to perform sensitive blastx-like alignments was developed by the Huson lab. It is called Diamond and is 16000 times faster than blast - a requirement for large datasets. The pipeline utilises diamond in cases where it is expected that there is high divergence between sequences derived from the sample and the NCBI nr database. Following alignment with diamond, the pipeline will attempt to assign each read to a taxa using the lcammaper tool (lowest common ancestor (LCA)) also developed by the Huson lab. Functional profiling --------------------- Whether DNA-seq or RNA-seq is used, functional profiling can be performed. The functional profiling techniques used in the pipeline rely on a set of non-redundant gene sequences (amino acids). Diamond is used to sensitively align reads to the non-redundant database e.g. MetaRef or IGC. Differential abundance estimations ----------------------------------- To detect differences in abundance of taxa or genes, we utilise the metagenomeSeq R package from bioconductor. This package utilises a zero-inflated gaussian micture model to compensate for undersampling of taxa/genes bewteen samples - which may cause overestimation of differences due to differences in library size. see http://www.nature.com/nmeth/journal/v10/n12/full/nmeth.2658.html. Usage ===== See :ref:`PipelineSettingUp` and :ref:`PipelineRunning` on general information how to use CGAT pipelines. Configuration ------------- The pipeline requires a configured :file:`pipeline.ini` file. The sphinxreport report requires a :file:`conf.py` and :file:`sphinxreport.ini` file (see :ref:`PipelineDocumenation`). To start with, use the files supplied with the :ref:`Example` data. 
Input
-----

Reads
+++++

Reads are imported by placing files or linking to files in the
:term:`working directory`.

The default file format assumes the following convention:

   <sample>-<condition>-<replicate>.<suffix>

``sample`` and ``condition`` make up an :term:`experiment`, while
``replicate`` denotes the :term:`replicate` within an
:term:`experiment`. The ``suffix`` determines the file type. The
following suffixes/file types are possible:

fastq.gz
   Single-end reads in fastq format.

fastq.1.gz, fastq.2.gz
   Paired-end reads in fastq format. The two fastq files must be
   sorted by read-pair.

.. note::

   Quality scores need to be of the same scale for all input files.
   Thus it might be difficult to mix different formats.

Requirements
------------

On top of the default CGAT setup, the pipeline requires the following
software to be in the path:

+--------------------+-------------------+------------+
|*Program*           |*Version*          |*Purpose*   |
+--------------------+-------------------+------------+
|diamond             |                   |sensitive   |
|                    |                   |alignment   |
|                    |                   |algorithm   |
+--------------------+-------------------+------------+
|lcamapper           |                   |community   |
|                    |                   |profiling + |
|                    |                   |functional  |
|                    |                   |profiling   |
+--------------------+-------------------+------------+
|metaphlan           |                   |taxonomic   |
|                    |                   |relative    |
|                    |                   |abundance   |
|                    |                   |estimator   |
+--------------------+-------------------+------------+
|kraken              |                   |megablast   |
|                    |                   |taxonomic   |
|                    |                   |assignment  |
|                    |                   |of reads    |
+--------------------+-------------------+------------+
|metagenomeSeq       |                   |differential|
|                    |                   |abundance   |
|                    |                   |tool        |
+--------------------+-------------------+------------+

Pipeline output
===============

TODO::

Additional outputs are stored in the database file :file:`csvdb`.

Glossary
========

.. glossary::

Code
====

"""
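The <sample>-<condition>-<replicate>.<suffix> convention above is easy
to validate up front. A minimal sketch follows; the regular expression
and helper name are illustrative, not part of the pipeline::

    import re

    # <sample>-<condition>-<replicate>.<suffix>
    TRACK_RE = re.compile(
        r"^(?P<sample>[^-]+)-(?P<condition>[^-]+)-(?P<replicate>[^.]+)"
        r"\.(?P<suffix>fastq\.gz|fastq\.[12]\.gz)$")

    def parse_track(filename):
        # Split an input file name into its track components,
        # raising on anything that breaks the convention.
        match = TRACK_RE.match(filename)
        if match is None:
            raise ValueError("unexpected input file name: %s" % filename)
        return match.groupdict()

    # parse_track("gut-control-R1.fastq.1.gz")
    # -> {'sample': 'gut', 'condition': 'control',
    #     'replicate': 'R1', 'suffix': 'fastq.1.gz'}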
""" A simple library for working with the color names and color codes defined by the HTML and CSS specifications. An overview of HTML and CSS colors ---------------------------------- Colors on the Web are specified in `the sRGB color space`_, where each color is made up of a red component, a green component and a blue component. This is useful because it maps (fairly) cleanly to the red, green and blue components of pixels on a computer display, and to the cone cells of a human eye, which come in three sets roughly corresponding to the wavelengths of light associated with red, green and blue. `The HTML 4 standard`_ defines two ways to specify sRGB colors: * A hash mark ('#') followed by three pairs of hexdecimal digits, specifying values for red, green and blue components in that order; for example, ``#0099cc``. Since each pair of hexadecimal digits can express 256 different values, this allows up to 256**3 or 16,777,216 unique colors to be specified (though, due to differences in display technology, not all of these colors may be clearly distinguished on any given physical display). * A set of predefined color names which correspond to specific hexadecimal values; for example, ``white``. HTML 4 defines sixteen such colors. `The CSS 2 standard`_ allows any valid HTML 4 color specification, and adds three new ways to specify sRGB colors: * A hash mark followed by three hexadecimal digits, which is expanded into three hexadecimal pairs by repeating each digit; thus ``#09c`` is equivalent to ``#0099cc``. * The string 'rgb', followed by parentheses, between which are three numeric values each between 0 and 255, inclusive, which are taken to be the values of the red, green and blue components in that order; for example, ``rgb(0, 153, 204)``. * The same as above, except using percentages instead of numeric values; for example, ``rgb(0%, 60%, 80%)``. `The CSS 2.1 revision`_ does not add any new methods of specifying sRGB colors, but does add one additional named color. `The CSS 3 color module`_ (currently a W3C Candidate Recommendation) adds one new way to specify sRGB colors: * A hue-saturation-lightness triple (HSL), using the construct ``hsl()``. It also adds support for variable opacity of colors, by allowing the specification of alpha-channel information, through the ``rgba()`` and ``hsla()`` constructs, which are identical to ``rgb()`` and ``hsl()`` with one exception: a fourth value is supplied, indicating the level of opacity from ``0.0`` (completely transparent) to ``1.0`` (completely opaque). Though not technically a color, the keyword ``transparent`` is also made available in lieu of a color value, and corresponds to ``rgba(0,0,0,0)``. Additionally, CSS3 defines a new set of color names; this set is taken directly from the named colors defined for SVG (Scalable Vector Graphics) markup, and is a proper superset of the named colors defined in CSS 2.1. This set also has significant overlap with traditional X11 color sets as defined by the ``rgb.txt`` file on many Unix and Unix-like operating systems, though the correspondence is not exact; the set of X11 colors is not standardized, and the set of CSS3 colors contains some definitions which diverge significantly from customary X11 definitions (for example, CSS3's ``green`` is not equivalent to X11's ``green``; the value which X11 designates ``green`` is designated ``lime`` in CSS3). .. _the sRGB color space: http://www.w3.org/Graphics/Color/sRGB .. _The HTML 4 standard: http://www.w3.org/TR/html401/types.html#h-6.5 .. 
.. _The CSS 2.1 revision: http://www.w3.org/TR/CSS21/
.. _The CSS 3 color module: http://www.w3.org/TR/css3-color/

What this module supports
-------------------------

The mappings and functions within this module support the following
methods of specifying sRGB colors, and conversions between them:

* Six-digit hexadecimal.

* Three-digit hexadecimal.

* Integer ``rgb()`` triplet.

* Percentage ``rgb()`` triplet.

* Varying selections of predefined color names (see below).

This module does not support ``hsl()`` triplets, nor does it support
opacity/alpha-channel information via ``rgba()`` or ``hsla()``.

If you need to convert between RGB-specified colors and HSL-specified
colors, or colors specified via other means, consult `the colorsys
module`_ in the Python standard library, which can perform conversions
amongst several common color spaces.

.. _the colorsys module: http://docs.python.org/library/colorsys.html

Normalization
-------------

For colors specified via hexadecimal values, this module will accept
input in the following formats:

* A hash mark (#) followed by three hexadecimal digits, where letters
  may be upper- or lower-case.

* A hash mark (#) followed by six hexadecimal digits, where letters
  may be upper- or lower-case.

For output which consists of a color specified via hexadecimal values,
and for functions which perform intermediate conversion to hexadecimal
before returning a result in another format, this module always
normalizes such values to the following format:

* A hash mark (#) followed by six hexadecimal digits, with letters
  forced to lower-case.

The function ``normalize_hex()`` in this module can be used to perform
this normalization manually if desired; see its documentation for an
explanation of the normalization process.

For colors specified via predefined names, this module will accept
input in the following formats:

* An entirely lower-case name, such as ``aliceblue``.

* A name using initial capitals, such as ``AliceBlue``.

For output which consists of a color specified via a predefined name,
and for functions which perform intermediate conversion to a predefined
name before returning a result in another format, this module always
normalizes such values to be entirely lower-case.

Mappings of color names
-----------------------

For each set of defined color names -- HTML 4, CSS 2, CSS 2.1 and
CSS 3 -- this module exports two mappings: one of normalized color
names to normalized hexadecimal values, and one of normalized
hexadecimal values to normalized color names. These eight mappings are
as follows:

``html4_names_to_hex``
    Mapping of normalized HTML 4 color names to normalized hexadecimal
    values.

``html4_hex_to_names``
    Mapping of normalized hexadecimal values to normalized HTML 4
    color names.

``css2_names_to_hex``
    Mapping of normalized CSS 2 color names to normalized hexadecimal
    values. Because CSS 2 defines the same set of named colors as
    HTML 4, this is merely an alias for ``html4_names_to_hex``.

``css2_hex_to_names``
    Mapping of normalized hexadecimal values to normalized CSS 2 color
    names. For the reasons described above, this is merely an alias
    for ``html4_hex_to_names``.

``css21_names_to_hex``
    Mapping of normalized CSS 2.1 color names to normalized
    hexadecimal values. This is identical to ``html4_names_to_hex``,
    except for one addition: ``orange``.

``css21_hex_to_names``
    Mapping of normalized hexadecimal values to normalized CSS 2.1
    color names.
    As above, this is identical to ``html4_hex_to_names`` except for
    the addition of ``orange``.

``css3_names_to_hex``
    Mapping of normalized CSS3 color names to normalized hexadecimal
    values.

``css3_hex_to_names``
    Mapping of normalized hexadecimal values to normalized CSS3 color
    names.

"""
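The normalization rules described above are mechanical enough to
sketch in a few lines. The following is an illustrative
reimplementation under those rules, not the module's actual code::

    import re

    HEX_COLOR_RE = re.compile(r'^#([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$')

    def normalize_hex(hex_value):
        # Expand the three-digit form by doubling each digit, then
        # force lower-case, per the normalization rules above.
        match = HEX_COLOR_RE.match(hex_value)
        if match is None:
            raise ValueError("'%s' is not a valid hexadecimal color value."
                             % hex_value)
        digits = match.group(1)
        if len(digits) == 3:
            digits = ''.join(2 * d for d in digits)
        return '#%s' % digits.lower()

    # normalize_hex('#09c')    -> '#0099cc'
    # normalize_hex('#0099CC') -> '#0099cc'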
""" ============= Miscellaneous ============= IEEE 754 Floating Point Special Values -------------------------------------- Special values defined in numpy: nan, inf, NaNs can be used as a poor-man's mask (if you don't care what the original value was) Note: cannot use equality to test NaNs. E.g.: :: >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.where(myarr == np.nan) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., NaN, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead find >>> myarr array([ 1., 0., 0., 3.]) Other related special value functions: :: isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float The following corresponds to the usual functions except that nans are excluded from the results: :: nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 How numpy handles numerical exceptions -------------------------------------- The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow`` and ``'ignore'`` for ``underflow``. But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: - 'ignore' : Take no action when the exception occurs. - 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module). - 'raise' : Raise a `FloatingPointError`. - 'call' : Call a function specified using the `seterrcall` function. - 'print' : Print a warning directly to ``stdout``. - 'log' : Record error in a Log object specified by `seterrcall`. These behaviors can be set for all kinds of errors or specific ones: - all : apply to all numeric exceptions - invalid : when NaNs are generated - divide : divide by zero (for integers as well!) - overflow : floating point overflows - underflow : floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. Examples -------- :: >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print "saw stupid error!" >>> np.seterrcall(errorhandler) <function err_handler at 0x...> >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 FloatingPointError: invalid value encountered in divide saw stupid error! >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings Interfacing to C ---------------- Only a survey of the choices. Little detail on how each works. 1) Bare metal, wrap your own C-code manually. - Plusses: - Efficient - No dependencies on other tools - Minuses: - Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. - Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults - API will change for Python 3.0! 
2) Cython

 - Plusses:

   - avoid learning C API's
   - no dealing with reference counting
   - can code in pseudo python and generate C code
   - can also interface to existing C code
   - should shield you from changes to Python C api
   - has become the de-facto standard within the scientific Python
     community
   - fast indexing support for arrays

 - Minuses:

   - Can write code in non-standard form which may become obsolete
   - Not as flexible as manual wrapping

3) ctypes

 - Plusses:

   - part of Python standard library
   - good for interfacing to existing sharable libraries,
     particularly Windows DLLs
   - avoids API/reference counting issues
   - good numpy support: arrays have all these in their ctypes
     attribute: ::

       a.ctypes.data              a.ctypes.get_strides
       a.ctypes.data_as           a.ctypes.shape
       a.ctypes.get_as_parameter  a.ctypes.shape_as
       a.ctypes.get_data          a.ctypes.strides
       a.ctypes.get_shape         a.ctypes.strides_as

 - Minuses:

   - can't use for writing code to be turned into C extensions, only
     a wrapper tool.

4) SWIG (automatic wrapper generator)

 - Plusses:

   - around a long time
   - multiple scripting language support
   - C++ support
   - Good for wrapping large (many functions) existing C libraries

 - Minuses:

   - generates lots of code between Python and the C code
   - can cause performance problems that are nearly impossible to
     optimize out
   - interface files can be hard to write
   - doesn't necessarily avoid reference counting issues or needing
     to know API's

5) scipy.weave

 - Plusses:

   - can turn many numpy expressions into C code
   - dynamic compiling and loading of generated C code
   - can embed pure C code in Python module and have weave extract,
     generate interfaces and compile, etc.

 - Minuses:

   - Future very uncertain: it's the only part of Scipy not ported to
     Python 3 and is effectively deprecated in favor of Cython.

6) Psyco

 - Plusses:

   - Turns pure python into efficient machine code through jit-like
     optimizations
   - very fast when it optimizes well

 - Minuses:

   - Only on Intel (Windows?)
   - Doesn't do much for numpy?

Interfacing to Fortran:
-----------------------

The clear choice to wrap Fortran code is
`f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_.

Pyfort is an older alternative, but not supported any longer. Fwrap is
a newer project that looked promising but isn't being developed any
longer.

Interfacing to C++:
-------------------

1) Cython
2) CXX
3) Boost.python
4) SWIG
5) SIP (used mainly in PyQT)

"""
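To make the ctypes bullet above concrete, here is a small runnable
sketch showing the ctypes attributes a numpy array exposes. No
external shared library is involved, so nothing here is specific to
any particular C code; with a real library you would also set the
function's argtypes before passing the pointer::

 import ctypes
 import numpy as np

 a = np.arange(6, dtype=np.float64).reshape(2, 3)

 # Raw address of the buffer, and shape/strides as ctypes arrays.
 print(a.ctypes.data)        # integer address of the data
 print(a.ctypes.shape[:2])   # [2, 3]
 print(a.ctypes.strides[:2]) # byte strides per dimension, e.g. [24, 8]

 # Typed pointer suitable for a C function expecting 'double *'.
 ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
 print(ptr[0])               # 0.0, the first element of the buffer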