#!/usr/bin/env python # -*- coding: utf-8 -*- # ***********************IMPORTANT NMAP LICENSE TERMS************************ # * * # * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is * # * also a registered trademark of Insecure.Com LLC. This program is free * # * software; you may redistribute and/or modify it under the terms of the * # * GNU General Public License as published by the Free Software * # * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS * # * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, * # * modify, and redistribute this software under certain conditions. If * # * you wish to embed Nmap technology into proprietary software, we sell * # * alternative licenses (contact EMAIL Dozens of software * # * vendors already license Nmap technology such as host discovery, port * # * scanning, OS detection, version detection, and the Nmap Scripting * # * Engine. * # * * # * Note that the GPL places important restrictions on "derivative works", * # * yet it does not provide a detailed definition of that term. To avoid * # * misunderstandings, we interpret that term as broadly as copyright law * # * allows. For example, we consider an application to constitute a * # * derivative work for the purpose of this license if it does any of the * # * following with any software or content covered by this license * # * ("Covered Software"): * # * * # * o Integrates source code from Covered Software. * # * * # * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db * # * or nmap-service-probes. * # * * # * o Is designed specifically to execute Covered Software and parse the * # * results (as opposed to typical shell or execution-menu apps, which will * # * execute anything you tell them to). * # * * # * o Includes Covered Software in a proprietary executable installer. The * # * installers produced by InstallShield are an example of this. Including * # * Nmap with other software in compressed or archival form does not * # * trigger this provision, provided appropriate open source decompression * # * or de-archiving software is widely available for no charge. For the * # * purposes of this license, an installer is considered to include Covered * # * Software even if it actually retrieves a copy of Covered Software from * # * another source during runtime (such as by downloading it from the * # * Internet). * # * * # * o Links (statically or dynamically) to a library which does any of the * # * above. * # * * # * o Executes a helper program, module, or script to do any of the above. * # * * # * This list is not exclusive, but is meant to clarify our interpretation * # * of derived works with some common examples. Other people may interpret * # * the plain GPL differently, so we consider this a special exception to * # * the GPL that we apply to Covered Software. Works which meet any of * # * these conditions must conform to all of the terms of this license, * # * particularly including the GPL Section 3 requirements of providing * # * source code and allowing free redistribution of the work as a whole. * # * * # * As another special exception to the GPL terms, Insecure.Com LLC grants * # * permission to link the code of this program with any version of the * # * OpenSSL library which is distributed under a license identical to that * # * listed in the included docs/licenses/OpenSSL.txt file, and distribute * # * linked combinations including the two. 
* # * * # * Any redistribution of Covered Software, including any derived works, * # * must obey and carry forward all of the terms of this license, including * # * obeying all GPL rules and restrictions. For example, source code of * # * the whole work must be provided and free redistribution must be * # * allowed. All GPL references to "this License" are to be treated as * # * including the terms and conditions of this license text as well. * # * * # * Because this license imposes special exceptions to the GPL, Covered * # * Work may not be combined (even as part of a larger work) with plain GPL * # * software. The terms, conditions, and exceptions of this license must * # * be included as well. This license is incompatible with some other open * # * source licenses as well. In some cases we can relicense portions of * # * Nmap or grant special permissions to use it in other open source * # * software. Please contact EMAIL with any such requests. * # * Similarly, we don't incorporate incompatible open source software into * # * Covered Software without special permission from the copyright holders. * # * * # * If you have any questions about the licensing restrictions on using * # * Nmap in other works, we are happy to help. As mentioned above, we also * # * offer an alternative license to integrate Nmap into proprietary * # * applications and appliances. These contracts have been sold to dozens * # * of software vendors, and generally include a perpetual license as well * # * as providing for priority support and updates. They also fund the * # * continued development of Nmap. Please email EMAIL for further * # * information. * # * * # * If you have received a written license agreement or contract for * # * Covered Software stating terms other than these, you may choose to use * # * and redistribute Covered Software under those terms instead of these. * # * * # * Source is provided to this software because we believe users have a * # * right to know exactly what a program is going to do before they run it. * # * This also allows you to audit the software for security holes (none * # * have been found so far). * # * * # * Source code also allows you to port Nmap to new platforms, fix bugs, * # * and add new features. You are highly encouraged to send your changes * # * to the EMAIL mailing list for possible incorporation into the * # * main distribution. By sending these changes to Fyodor or one of the * # * Insecure.Org development mailing lists, or checking them into the Nmap * # * source code repository, it is understood (unless you specify otherwise) * # * that you are offering the Nmap Project (Insecure.Com LLC) the * # * unlimited, non-exclusive right to reuse, modify, and relicense the * # * code. Nmap will always be available Open Source, but this is important * # * because the inability to relicense code has caused devastating problems * # * for other Free Software projects (such as KDE and NASM). We also * # * occasionally relicense the code to third parties as discussed above. * # * If you wish to specify special license conditions of your * # * contributions, just say so when you send them. * # * * # * This program is distributed in the hope that it will be useful, but * # * WITHOUT ANY WARRANTY; without even the implied warranty of * # * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the Nmap * # * license file for more details (it's in a COPYING file included with * # * Nmap, and also available from https://svn.nmap.org/nmap/COPYING * # * * # ***************************************************************************/
"""============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient as a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. Numpy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
The other involves giving a boolean array of the proper shape to
indicate the values to be selected. Index arrays are a very powerful
tool that allow one to avoid looping over individual elements in
arrays and thus greatly improve performance. It is possible to use
special features to effectively increase the number of dimensions in
an array through indexing so the resulting array acquires the shape
needed for use in an expression or with a specific function.

Index arrays
============

NumPy arrays may be indexed with other arrays (or any other sequence-
like object that can be converted to an array, such as lists, with
the exception of tuples; see the end of this document for why this
is). The use of index arrays ranges from simple, straightforward
cases to complex, hard-to-understand cases. For all cases of index
arrays, what is returned is a copy of the original data, not a view
as one gets for slices.

Index arrays must be of integer type. Each value in the array
indicates which value in the array to use in place of the index. To
illustrate: ::

    >>> x = np.arange(10,1,-1)
    >>> x
    array([10,  9,  8,  7,  6,  5,  4,  3,  2])
    >>> x[np.array([3, 3, 1, 8])]
    array([7, 7, 9, 2])

The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (same as the index array) where each index
is replaced by the value the index array has in the array being indexed.

Negative values are permitted and work as they do with single indices
or slices: ::

    >>> x[np.array([3,3,-3,8])]
    array([7, 7, 4, 2])

It is an error to have index values out of bounds: ::

    >>> x[np.array([3, 3, 20, 8])]
    <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9

Generally speaking, what is returned when index arrays are used is an
array with the same shape as the index array, but with the type and
values of the array being indexed. As an example, we can use a
multidimensional index array instead: ::

    >>> x[np.array([[1,1],[2,3]])]
    array([[9, 9],
           [8, 7]])

Indexing Multi-dimensional arrays
=================================

Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays. These tend to be
more unusual uses, but they are permitted, and they are useful for
some problems. We'll start with the simplest multidimensional case
(using the array y from the previous examples): ::

    >>> y[np.array([0,2,4]), np.array([0,1,2])]
    array([ 0, 15, 30])

In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the
values correspond to the index set for each position in the index
arrays. In this example, the first index value is 0 for both index
arrays, and thus the first value of the resultant array is y[0,0].
The next value is y[2,1], and the last is y[4,2].

If the index arrays do not have the same shape, there is an attempt
to broadcast them to the same shape. If they cannot be broadcast to
the same shape, an exception is raised: ::

    >>> y[np.array([0,2,4]), np.array([0,1])]
    <type 'exceptions.ValueError'>: shape mismatch: objects cannot be
    broadcast to a single shape

The broadcasting mechanism permits index arrays to be combined with
scalars for other indices. The effect is that the scalar value is
used for all the corresponding values of the index arrays: ::

    >>> y[np.array([0,2,4]), 1]
    array([ 1, 15, 29])

Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays.
It takes a bit of thought to understand what happens in such cases.
For example if we just use one index array with y: ::

    >>> y[np.array([0,2,4])]
    array([[ 0,  1,  2,  3,  4,  5,  6],
           [14, 15, 16, 17, 18, 19, 20],
           [28, 29, 30, 31, 32, 33, 34]])

What results is the construction of a new array where each value of
the index array selects one row from the array being indexed and the
resultant array has the resulting shape (number of index elements,
size of row).

An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display. The lookup table could have a shape (nlookup, 3). Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8
(or any integer type so long as values are within the bounds of the
lookup table) will result in an array of shape (ny, nx, 3) where a
triple of RGB values is associated with each pixel location.

In general, the shape of the resultant array will be the
concatenation of the shape of the index array (or the shape that all
the index arrays were broadcast to) with the shape of any unused
dimensions (those not indexed) in the array being indexed.

Boolean or "mask" index arrays
==============================

Boolean arrays used as indices are treated in a different manner
entirely than index arrays. Boolean arrays must be of the same shape
as the initial dimensions of the array being indexed. In the most
straightforward case, the boolean array has the same shape: ::

    >>> b = y>20
    >>> y[b]
    array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])

Unlike in the case of integer index arrays, in the boolean case, the
result is a 1-D array containing all the elements in the indexed
array corresponding to all the true elements in the boolean array.
The elements in the indexed array are always iterated and returned in
:term:`row-major` (C-style) order. The result is also identical to
``y[np.nonzero(b)]``. As with index arrays, what is returned is a
copy of the data, not a view as one gets with slices.

The result will be multidimensional if y has more dimensions than b.
For example: ::

    >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
    array([False, False, False,  True,  True], dtype=bool)
    >>> y[b[:,5]]
    array([[21, 22, 23, 24, 25, 26, 27],
           [28, 29, 30, 31, 32, 33, 34]])

Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.

In general, when the boolean array has fewer dimensions than the
array being indexed, this is equivalent to y[b, ...], which means y
is indexed by b followed by as many : as are needed to fill out the
rank of y. Thus the shape of the result is one dimension containing
the number of True elements of the boolean array, followed by the
remaining dimensions of the array being indexed.

For example, using a 2-D boolean array of shape (2,3) with four True
elements to select rows from a 3-D array of shape (2,3,5) results in
a 2-D result of shape (4,5): ::

    >>> x = np.arange(30).reshape(2,3,5)
    >>> x
    array([[[ 0,  1,  2,  3,  4],
            [ 5,  6,  7,  8,  9],
            [10, 11, 12, 13, 14]],
           [[15, 16, 17, 18, 19],
            [20, 21, 22, 23, 24],
            [25, 26, 27, 28, 29]]])
    >>> b = np.array([[True, True, False], [False, True, True]])
    >>> x[b]
    array([[ 0,  1,  2,  3,  4],
           [ 5,  6,  7,  8,  9],
           [20, 21, 22, 23, 24],
           [25, 26, 27, 28, 29]])

For further details, consult the numpy reference documentation on
array indexing.

Combining index arrays with slices
==================================

Index arrays may be combined with slices.
For example: ::

    >>> y[np.array([0,2,4]),1:3]
    array([[ 1,  2],
           [15, 16],
           [29, 30]])

In effect, the slice is converted to an index array np.array([[1,2]])
(shape (1,2)) that is broadcast with the index array to produce a
resultant array of shape (3,2).

Likewise, slicing can be combined with broadcasted boolean indices: ::

    >>> y[b[:,5],1:3]
    array([[22, 23],
           [29, 30]])

Structural indexing tools
=========================

To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices
to add new dimensions with a size of 1. For example: ::

    >>> y.shape
    (5, 7)
    >>> y[:,np.newaxis,:].shape
    (5, 1, 7)

Note that there are no new elements in the array, just that the
dimensionality is increased. This can be handy to combine two arrays
in a way that otherwise would require explicit reshaping operations.
For example: ::

    >>> x = np.arange(5)
    >>> x[:,np.newaxis] + x[np.newaxis,:]
    array([[0, 1, 2, 3, 4],
           [1, 2, 3, 4, 5],
           [2, 3, 4, 5, 6],
           [3, 4, 5, 6, 7],
           [4, 5, 6, 7, 8]])

The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions. For example: ::

    >>> z = np.arange(81).reshape(3,3,3,3)
    >>> z[1,...,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

This is equivalent to: ::

    >>> z[1,:,:,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

Assigning values to indexed arrays
==================================

As mentioned, one can select a subset of an array to assign to using
a single index, slices, and index and mask arrays. The value being
assigned to the indexed array must be shape consistent (the same
shape or broadcastable to the shape the index produces). For example,
it is permitted to assign a constant to a slice: ::

    >>> x = np.arange(10)
    >>> x[2:7] = 1

or an array of the right size: ::

    >>> x[2:7] = np.arange(5)

Note that assignments may result in changes if assigning higher types
to lower types (like floats to ints) or even exceptions (assigning
complex to floats or ints): ::

    >>> x[1] = 1.2
    >>> x[1]
    1
    >>> x[1] = 1.2j
    <type 'exceptions.TypeError'>: can't convert complex to long; use
    long(abs(z))

Unlike some of the references (such as array and mask indices),
assignments are always made to the original data in the array
(indeed, nothing else would make sense!). Note though, that some
actions may not work as one may naively expect. This particular
example is often surprising to people: ::

    >>> x = np.arange(0, 50, 10)
    >>> x
    array([ 0, 10, 20, 30, 40])
    >>> x[np.array([1, 1, 3, 1])] += 1
    >>> x
    array([ 0, 11, 20, 31, 40])

People often expect that the 1st location will be incremented by 3.
In fact, it will only be incremented by 1. The reason is that a new
array is extracted from the original (as a temporary) containing the
values at 1, 1, 3, 1, then the value 1 is added to the temporary, and
then the temporary is assigned back to the original array. Thus the
value of the array at x[1]+1 is assigned to x[1] three times, rather
than being incremented 3 times.

Dealing with variable numbers of indices within programs
========================================================

The index syntax is very powerful but limiting when dealing with a
variable number of indices. For example, if you want to write a
function that can handle arguments with various numbers of dimensions
without having to write special case code for each number of possible
dimensions, how can that be done? If one supplies to the index a
tuple, the tuple will be interpreted as a list of indices.
For example (using the previous definition for the array z): ::

    >>> indices = (1,1,1,1)
    >>> z[indices]
    40

So one can use code to construct tuples of any number of indices and
then use these within an index.

Slices can be specified within programs by using the slice() function
in Python. For example: ::

    >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
    >>> z[indices]
    array([39, 40])

Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::

    >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
    >>> z[indices]
    array([[28, 31, 34],
           [37, 40, 43],
           [46, 49, 52]])

For this reason it is possible to use the output from the np.where()
function directly as an index since it always returns a tuple of
index arrays.

Because of the special treatment of tuples, they are not
automatically converted to an array as a list would be. As an
example: ::

    >>> z[[1,1,1,1]] # produces a large array
    array([[[[27, 28, 29],
             [30, 31, 32], ...
    >>> z[(1,1,1,1)] # returns a single value
    40
"""
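# A short runnable sketch (not part of the docstring above) of the two
# points just made: a tuple built at runtime is interpreted as a
# multidimensional index, and np.where() output, being a tuple of index
# arrays, can be used directly as an index.
import numpy as np

z = np.arange(81).reshape(3, 3, 3, 3)
indices = (1, Ellipsis, slice(0, 2))  # same as z[1, ..., 0:2]
print(z[indices].shape)               # (3, 3, 2)

y = np.arange(35).reshape(5, 7)
print(y[np.where(y > 30)])            # array([31, 32, 33, 34])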
""" ================================== Constants (:mod:`scipy.constants`) ================================== .. currentmodule:: scipy.constants Physical and mathematical constants and units. Mathematical constants ====================== ============ ================================================================= ``pi`` Pi ``golden`` Golden ratio ============ ================================================================= Physical constants ================== ============= ================================================================= ``c`` speed of light in vacuum ``mu_0`` the magnetic constant :math:`\mu_0` ``epsilon_0`` the electric constant (vacuum permittivity), :math:`\epsilon_0` ``h`` the Planck constant :math:`h` ``hbar`` :math:`\hbar = h/(2\pi)` ``G`` Newtonian constant of gravitation ``g`` standard acceleration of gravity ``e`` elementary charge ``R`` molar gas constant ``alpha`` fine-structure constant ``N_A`` Avogadro constant ``k`` Boltzmann constant ``sigma`` Stefan-Boltzmann constant :math:`\sigma` ``Wien`` Wien displacement law constant ``Rydberg`` Rydberg constant ``m_e`` electron mass ``m_p`` proton mass ``m_n`` neutron mass ============= ================================================================= Constants database ------------------ In addition to the above variables, :mod:`scipy.constants` also contains the 2010 CODATA recommended values [CODATA2010]_ database containing more physical constants. .. autosummary:: :toctree: generated/ value -- Value in physical_constants indexed by key unit -- Unit in physical_constants indexed by key precision -- Relative precision in physical_constants indexed by key find -- Return list of physical_constant keys with a given string ConstantWarning -- Constant sought not in newest CODATA data set .. data:: physical_constants Dictionary of physical constants, of the format ``physical_constants[name] = (value, unit, uncertainty)``. 
Available constants: ====================================================================== ==== %(constant_names)s ====================================================================== ==== Units ===== SI prefixes ----------- ============ ================================================================= ``yotta`` :math:`10^{24}` ``zetta`` :math:`10^{21}` ``exa`` :math:`10^{18}` ``peta`` :math:`10^{15}` ``tera`` :math:`10^{12}` ``giga`` :math:`10^{9}` ``mega`` :math:`10^{6}` ``kilo`` :math:`10^{3}` ``hecto`` :math:`10^{2}` ``deka`` :math:`10^{1}` ``deci`` :math:`10^{-1}` ``centi`` :math:`10^{-2}` ``milli`` :math:`10^{-3}` ``micro`` :math:`10^{-6}` ``nano`` :math:`10^{-9}` ``pico`` :math:`10^{-12}` ``femto`` :math:`10^{-15}` ``atto`` :math:`10^{-18}` ``zepto`` :math:`10^{-21}` ============ ================================================================= Binary prefixes --------------- ============ ================================================================= ``kibi`` :math:`2^{10}` ``mebi`` :math:`2^{20}` ``gibi`` :math:`2^{30}` ``tebi`` :math:`2^{40}` ``pebi`` :math:`2^{50}` ``exbi`` :math:`2^{60}` ``zebi`` :math:`2^{70}` ``yobi`` :math:`2^{80}` ============ ================================================================= Weight ------ ================= ============================================================ ``gram`` :math:`10^{-3}` kg ``metric_ton`` :math:`10^{3}` kg ``grain`` one grain in kg ``lb`` one pound (avoirdupous) in kg ``oz`` one ounce in kg ``stone`` one stone in kg ``grain`` one grain in kg ``long_ton`` one long ton in kg ``short_ton`` one short ton in kg ``troy_ounce`` one Troy ounce in kg ``troy_pound`` one Troy pound in kg ``carat`` one carat in kg ``m_u`` atomic mass constant (in kg) ================= ============================================================ Angle ----- ================= ============================================================ ``degree`` degree in radians ``arcmin`` arc minute in radians ``arcsec`` arc second in radians ================= ============================================================ Time ---- ================= ============================================================ ``minute`` one minute in seconds ``hour`` one hour in seconds ``day`` one day in seconds ``week`` one week in seconds ``year`` one year (365 days) in seconds ``Julian_year`` one Julian year (365.25 days) in seconds ================= ============================================================ Length ------ ================= ============================================================ ``inch`` one inch in meters ``foot`` one foot in meters ``yard`` one yard in meters ``mile`` one mile in meters ``mil`` one mil in meters ``pt`` one point in meters ``survey_foot`` one survey foot in meters ``survey_mile`` one survey mile in meters ``nautical_mile`` one nautical mile in meters ``fermi`` one Fermi in meters ``angstrom`` one Angstrom in meters ``micron`` one micron in meters ``au`` one astronomical unit in meters ``light_year`` one light year in meters ``parsec`` one parsec in meters ================= ============================================================ Pressure -------- ================= ============================================================ ``atm`` standard atmosphere in pascals ``bar`` one bar in pascals ``torr`` one torr (mmHg) in pascals ``psi`` one psi in pascals ================= ============================================================ Area ---- ================= ============================================================ ``hectare`` one 
hectare in square meters ``acre`` one acre in square meters ================= ============================================================ Volume ------ =================== ======================================================== ``liter`` one liter in cubic meters ``gallon`` one gallon (US) in cubic meters ``gallon_imp`` one gallon (UK) in cubic meters ``fluid_ounce`` one fluid ounce (US) in cubic meters ``fluid_ounce_imp`` one fluid ounce (UK) in cubic meters ``bbl`` one barrel in cubic meters =================== ======================================================== Speed ----- ================= ========================================================== ``kmh`` kilometers per hour in meters per second ``mph`` miles per hour in meters per second ``mach`` one Mach (approx., at 15 C, 1 atm) in meters per second ``knot`` one knot in meters per second ================= ========================================================== Temperature ----------- ===================== ======================================================= ``zero_Celsius`` zero of Celsius scale in Kelvin ``degree_Fahrenheit`` one Fahrenheit (only differences) in Kelvins ===================== ======================================================= .. autosummary:: :toctree: generated/ C2K K2C F2C C2F F2K K2F Energy ------ ==================== ======================================================= ``eV`` one electron volt in Joules ``calorie`` one calorie (thermochemical) in Joules ``calorie_IT`` one calorie (International Steam Table calorie, 1956) in Joules ``erg`` one erg in Joules ``Btu`` one British thermal unit (International Steam Table) in Joules ``Btu_th`` one British thermal unit (thermochemical) in Joules ``ton_TNT`` one ton of TNT in Joules ==================== ======================================================= Power ----- ==================== ======================================================= ``hp`` one horsepower in watts ==================== ======================================================= Force ----- ==================== ======================================================= ``dyn`` one dyne in newtons ``lbf`` one pound force in newtons ``kgf`` one kilogram force in newtons ==================== ======================================================= Optics ------ .. autosummary:: :toctree: generated/ lambda2nu nu2lambda References ========== .. [CODATA2010] CODATA Recommended Values of the Fundamental Physical Constants 2010. http://physics.nist.gov/cuu/Constants/index.html """
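# A brief usage sketch (not part of the original docstring) of the module's
# unit constants and the CODATA database accessors described above.
from scipy import constants

print(constants.c)        # speed of light in vacuum, m/s
print(constants.kilo)     # SI prefix: 1000.0
print(constants.degree)   # one degree in radians

# physical_constants maps a CODATA name to (value, unit, uncertainty):
val, unit, uncertainty = constants.physical_constants['electron mass']
print(val, unit)                             # about 9.109e-31 kg
print(constants.value('Avogadro constant'))  # value only
print(constants.find('Boltzmann'))           # keys containing 'Boltzmann'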
#
# XML-RPC CLIENT LIBRARY
# $Id: xmlrpclib.py 38463 2005-02-11 18:00:16Z USERNAME $
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl  Created
# 1999-01-15 fl  Changed dateTime to use localtime
# 1999-01-16 fl  Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl  Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl  Fixed dateTime constructor, etc.
# 1999-02-02 fl  Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl  Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl  Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl  Changed boolean to check the truth value of its argument
# 2001-02-24 fl  Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl  Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl  Make sure response tuple is a singleton
# 2001-03-29 fl  Don't require empty params element (from NAME)
# 2001-06-10 fl  Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl  Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl  Allow Transport subclass to override getparser
# 2001-09-10 fl  Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl  Remove containers from memo cache when done with them
# 2001-10-01 fl  Use faster escape method (80% dumps speedup)
# 2001-10-02 fl  More dumps microtuning
# 2001-10-04 fl  Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm  Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm  Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl  Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl  Avoid buffered read when possible (from NAME)
# 2002-04-07 fl  Added pythondoc comments
# 2002-04-16 fl  Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl  Added error constants (from NAME)
# 2002-06-27 fl  Merged with Python CVS version
# 2002-10-22 fl  Added basic authentication (based on code from NAME)
# 2003-01-22 sm  Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm  Use cStringIO if available
# 2003-04-25 ak  Add support for nil
# 2003-06-15 gn  Add support for time.struct_time
# 2003-07-12 gp  Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS.  IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
#
# things to look into some day:
# TODO: sort out True/False/boolean issues for Python 2.3
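# A minimal client sketch (not from the original header). The endpoint URL
# and the remote method name 'echo' are hypothetical placeholders; xmlrpclib
# is the Python 2 module this header belongs to.
import xmlrpclib

server = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")
try:
    print(server.echo(41))  # calls the server's (hypothetical) echo method
except xmlrpclib.Fault as err:
    print("Fault %s: %s" % (err.faultCode, err.faultString))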
# https://projecteuler.net/problem=11
# Largest product in a grid
# Problem 11
#
# In the 20×20 grid below, four numbers along a diagonal line have been
# marked in red.
#
# 08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
# 49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
# 81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
# 52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
# 22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
# 24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
# 32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
# 67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
# 24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
# 21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
# 78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
# 16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
# 86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
# 19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
# 04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
# 88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
# 04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
# 20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
# 20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
# 01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
#
# The product of these numbers is 26 × 63 × 78 × 14 = 1788696.
# What is the greatest product of four adjacent numbers in the same
# direction (up, down, left, right, or diagonally) in the 20×20 grid?
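# A brute-force solution sketch (not part of the original problem file):
# scan every cell in each of four directions (right, down, and the two
# diagonals) and keep the best product of four adjacent numbers. The
# reversed directions give the same products, so they need not be checked.
GRID_TEXT = """\
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
"""

def largest_product(grid, n=4):
    rows, cols = len(grid), len(grid[0])
    best = 0
    # direction vectors: right, down, down-right diagonal, down-left diagonal
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        for r in range(rows):
            for c in range(cols):
                end_r, end_c = r + (n - 1) * dr, c + (n - 1) * dc
                if 0 <= end_r < rows and 0 <= end_c < cols:
                    product = 1
                    for k in range(n):
                        product *= grid[r + k * dr][c + k * dc]
                    best = max(best, product)
    return best

grid = [[int(tok) for tok in line.split()] for line in GRID_TEXT.splitlines()]
print(largest_product(grid))  # 70600674, the accepted answer for Problem 11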
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods:
  aiff()          -- create an AIFF file (AIFF-C default)
  aifc()          -- create an AIFF-C file
  setnchannels(n) -- set the number of channels
  setsampwidth(n) -- set the sample width
  setframerate(n) -- set the frame rate
  setnframes(n)   -- set the number of frames
  setcomptype(type, name)
                  -- set the compression type and the
                     human-readable compression type
  setparams(tuple)
                  -- set all parameters at once
  setmark(id, pos, name)
                  -- add specified mark to the list of marks
  tell()          -- return current position in output file (useful
                     in combination with setmark())
  writeframesraw(data)
                  -- write audio frames without patching up the
                     file header
  writeframes(data)
                  -- write audio frames and patch up the file header
  close()         -- patch up the file header and close the
                     output file
You should set the parameters before the first writeframesraw or
writeframes.  The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, in particular the compression
type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
Marks can be added anytime.  If there are any marks, you must call
close() after all frames have been written.
The close() method is called automatically when the class instance
is destroyed.

When a file is opened with the extension '.aiff', an AIFF file is
written, otherwise an AIFF-C file is written.  This default can be
changed by calling aiff() or aifc() before the first writeframes or
writeframesraw.
"""
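# A usage sketch based on the interface described above; 'example.aiff'
# and 'copy.aiff' are placeholder filenames.
import aifc

f = aifc.open('example.aiff', 'r')
nchannels, sampwidth, framerate = (f.getnchannels(), f.getsampwidth(),
                                   f.getframerate())
data = f.readframes(f.getnframes())  # raw sample data
f.close()

# Set parameters before the first writeframes() call; close() patches
# the chunk sizes in the header.
g = aifc.open('copy.aiff', 'w')      # '.aiff' extension selects AIFF, not AIFF-C
g.setnchannels(nchannels)
g.setsampwidth(sampwidth)
g.setframerate(framerate)
g.writeframes(data)
g.close()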
# (c) 2013, NAME <EMAIL> red hat, inc
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
#
# take a list of files and (optionally) a list of paths;
# return the first existing file found in the paths
#
# [file1, file2, file3], [path1, path2, path3]
#
# search order is:
# path1/file1
# path1/file2
# path1/file3
# path2/file1
# path2/file2
# path2/file3
# path3/file1
# path3/file2
# path3/file3
#
# the first file found with os.path.exists() is returned;
# if no file matches, an AnsibleError is raised
#
# EXAMPLES
#
# - name: copy first existing file found to /some/file
#   action: copy src=$item dest=/some/file
#   with_first_found:
#    - files: foo ${inventory_hostname} bar
#      paths: /tmp/production /tmp/staging
#
# that will look for files in this order:
# /tmp/production/foo
#                 ${inventory_hostname}
#                 bar
# /tmp/staging/foo
#              ${inventory_hostname}
#              bar
#
# - name: copy first existing file found to /some/file
#   action: copy src=$item dest=/some/file
#   with_first_found:
#    - files: /some/place/foo ${inventory_hostname} /some/place/else
#
# that will look for files in this order:
# /some/place/foo
# $relative_path/${inventory_hostname}
# /some/place/else
#
# example - including tasks:
#  tasks:
#  - include: $item
#    with_first_found:
#     - files: generic
#       paths: tasks/staging tasks/production
#
# this will include the tasks in the file generic where it is found
# first (staging or production)
#
# example - simple file lists:
# tasks:
# - name: first found file
#   action: copy src=$item dest=/etc/file.cfg
#   with_first_found:
#   - files: foo.${inventory_hostname} foo
#
# example - skipping if no files matched:
# first_found also offers the ability to control whether failing to
# find a file returns an error or not
#
# - name: first found file - or skip
#   action: copy src=$item dest=/etc/file.cfg
#   with_first_found:
#   - files: foo.${inventory_hostname}
#     skip: true
#
# example - a role with a default configuration and configuration per host:
# you can set multiple terms with their own files and paths to look through.
# consider a role that sets some configuration per host, falling back on a
# default config.
#
# - name: some configuration template
#   template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root
#   with_first_found:
#    - files:
#       - ${inventory_hostname}/etc/file.cfg
#      paths:
#       - ../../../templates.overwrites
#       - ../../../templates
#    - files:
#      - etc/file.cfg
#      paths:
#      - templates
#
# the above will return an empty list if the files cannot be found at all.
# if skip is unspecified or set to false, it will instead raise an error,
# which can be caught by ignore_errors: true for that action.
#
# finally, if you want, you can use with_first_found in place of
# first_available_file: you simply cannot use the files, paths, or skip
# options. simply replace first_available_file with with_first_found
# and leave the file listing in place:
#
# - name: with_first_found like first_available_file
#   action: copy src=$item dest=/tmp/faftest
#   with_first_found:
#    - ../files/foo
#    - ../files/bar
#    - ../files/baz
#   ignore_errors: true
""" =============== Array Internals =============== Internal organization of numpy arrays ===================================== It helps to understand a bit about how numpy arrays are handled under the covers to help understand numpy better. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to NumPy". NumPy arrays consist of two major components, the raw array data (from now on, referred to as the data buffer), and the information about the raw array data. The data buffer is typically what people think of as arrays in C or Fortran, a contiguous (and fixed) block of memory containing fixed sized data items. NumPy also contains a significant set of data that describes how to interpret the data in the data buffer. This extra information contains (among other things): 1) The basic data element's size in bytes 2) The start of the data within the data buffer (an offset relative to the beginning of the data buffer). 3) The number of dimensions and the size of each dimension 4) The separation between elements for each dimension (the 'stride'). This does not have to be a multiple of the element size 5) The byte order of the data (which may not be the native byte order) 6) Whether the buffer is read-only 7) Information (via the dtype object) about the interpretation of the basic data element. The basic data element may be as simple as a int or a float, or it may be a compound object (e.g., struct-like), a fixed character field, or Python object pointers. 8) Whether the array is to interpreted as C-order or Fortran-order. This arrangement allow for very flexible use of arrays. One thing that it allows is simple changes of the metadata to change the interpretation of the array buffer. Changing the byteorder of the array is a simple change involving no rearrangement of the data. The shape of the array can be changed very easily without changing anything in the data buffer or any data copying at all Among other things that are made possible is one can create a new array metadata object that uses the same data buffer to create a new view of that data buffer that has a different interpretation of the buffer (e.g., different shape, offset, byte order, strides, etc) but shares the same data bytes. Many operations in numpy do just this such as slices. Other operations, such as transpose, don't move data elements around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the doesn't move. Typically these new versions of the array metadata but the same data buffer are new 'views' into the data buffer. There is a different ndarray object, but it uses the same data buffer. This is why it is necessary to force copies through use of the .copy() method if one really wants to make a new and independent copy of the data buffer. New views into arrays mean the object reference counts for the data buffer increase. Simply doing away with the original array object will not remove the data buffer if other views of it still exist. Multidimensional Array Indexing Order Issues ============================================ What is the right way to index multi-dimensional arrays? Before you jump to conclusions about the one and true way to index multi-dimensional arrays, it pays to understand why this is a confusing issue. 
This section will try to explain in detail how numpy indexing works
and why we adopt the convention we do for images, and when it may be
appropriate to adopt other conventions.

The first thing to understand is that there are two conflicting
conventions for indexing 2-dimensional arrays. Matrix notation uses
the first index to indicate which row is being selected and the
second index to indicate which column is selected. This is opposite
the geometrically oriented convention for images, where people
generally think the first index represents x position (i.e., column)
and the second represents y position (i.e., row). This alone is the
source of much confusion; matrix-oriented users and image-oriented
users expect two different things with regard to indexing.

The second issue to understand is how indices correspond to the order
the array is stored in memory. In Fortran the first index is the most
rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one
column at a time (since the first index moves to the next row as it
changes). Thus Fortran is considered a Column-major language. C has
just the opposite convention. In C, the last index changes most
rapidly as one moves through the array as stored in memory. Thus C is
a Row-major language: the matrix is stored by rows. Note that in both
cases it presumes that the matrix convention for indexing is being
used, i.e., for both Fortran and C, the first index is the row. Note
this convention implies that the indexing convention is invariant and
that the data order changes to keep that so.

But that's not the only way to look at it. Suppose one has large
two-dimensional arrays (images or matrices) stored in data files.
Suppose the data are stored by rows rather than by columns. If we are
to preserve our index convention (whether matrix or image) that means
that depending on the language we use, we may be forced to reorder
the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory
without reordering, it will match the matrix indexing convention for
C, but not for Fortran. Conversely, it will match the image indexing
convention for Fortran, but not for C. For C, if one is using data
stored in row order, and one wants to preserve the image index
convention, the data must be reordered when reading into memory.

In the end, which you do for Fortran or C depends on which is more
important: not reordering data, or preserving the indexing
convention. For large images, reordering data is potentially
expensive, and often the indexing convention is inverted to avoid
that.

The situation with numpy makes this issue yet more complicated. The
internal machinery of numpy arrays is flexible enough to accept any
ordering of indices. One can simply reorder indices by manipulating
the internal stride information for arrays without reordering the
data at all. NumPy will know how to map the new index order to the
data without moving the data.

So if this is true, why not choose the index order that matches what
you most expect? In particular, why not define row-ordered images to
use the image convention? (This is sometimes referred to as the
Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing
this is potential performance penalties.
It's common to access the data sequentially, either implicitly in
array operations or explicitly by looping over rows of an image. When
that is done, the data will be accessed in non-optimal order: as the
first index is incremented, what is actually happening is that
elements spaced far apart in memory are being sequentially accessed,
with usually poor memory access speeds. Consider, for example, a two
dimensional image 'im' defined so that im[0, 10] represents the value
at x=0, y=10. To be consistent with usual Python behavior, im[0]
would then represent a column at x=0, yet that data would be spread
over the whole array since the data are stored in row order. Despite
the flexibility of numpy's indexing, it can't really paper over the
fact that basic operations are rendered inefficient because of data
order, or that getting contiguous subarrays is still awkward (e.g.,
im[:,0] for the first row, vs im[0]). Thus one can't use an idiom
such as "for row in im"; "for col in im" does work, but doesn't yield
contiguous column data.

As it turns out, numpy is smart enough when dealing with ufuncs to
determine which index is the most rapidly varying one in memory and
uses that for the innermost loop. Thus for ufuncs there is no large
intrinsic advantage to either approach in most cases. On the other
hand, use of .flat with a Fortran-ordered array will lead to
non-optimal memory access as adjacent elements in the flattened array
(iterator, actually) are not contiguous in memory.

Indeed, the fact is that Python indexing on lists and other sequences
naturally leads to an outside-to-inside ordering (the first index
gets the largest grouping, the next the next largest, and the last
gets the smallest element). Since image data are normally stored by
rows, this corresponds to position within rows being the last item
indexed.

If you do want to use Fortran ordering, realize that there are two
approaches to consider: 1) accept that the first index is just not
the most rapidly changing in memory and have all your I/O routines
reorder your data when going from memory to disk or vice versa, or
2) use numpy's mechanism for mapping the first index to the most
rapidly varying data. We recommend the former if possible. The
disadvantage of the latter is that many of numpy's functions will
yield arrays without Fortran ordering unless you are careful to use
the 'order' keyword. Doing this would be highly inconvenient.
Otherwise we recommend simply learning to reverse the usual order of
indices when accessing elements of an array. Granted, it goes against
the grain, but it is more in line with Python semantics and the
natural order of the data.
"""
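# A small illustration (not from the original text) of metadata-only
# operations: transpose swaps strides without touching the buffer, and
# .copy() is what actually duplicates the data. Stride values shown
# assume a C-ordered array of 8-byte integers.
import numpy as np

x = np.arange(12).reshape(3, 4)
t = x.T                            # a view: same buffer, strides swapped
print(x.strides, t.strides)       # (32, 8) (8, 32) with 8-byte ints
print(np.may_share_memory(x, t))  # True: no data was moved

c = x.T.copy()                     # force an independent data buffer
c[0, 0] = 99
print(x[0, 0])                     # still 0: the copy does not alias x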
"""============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient as a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. NumPy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
The other involves giving a boolean array of the proper shape to indicate the values to be selected. Index arrays are a very powerful tool that allows one to avoid looping over individual elements in arrays and thus greatly improve performance. It is possible to use special features to effectively increase the number of dimensions in an array through indexing, so the resulting array acquires the shape needed for use in an expression or with a specific function. Index arrays ============ NumPy arrays may be indexed with other arrays (or any other sequence-like object that can be converted to an array, such as lists, with the exception of tuples; see the end of this document for why this is). The use of index arrays ranges from simple, straightforward cases to complex, hard-to-understand cases. For all cases of index arrays, what is returned is a copy of the original data, not a view as one gets for slices. Index arrays must be of integer type. Each value in the index array indicates which value in the indexed array to use in place of the index. To illustrate: :: >>> x = np.arange(10,1,-1) >>> x array([10, 9, 8, 7, 6, 5, 4, 3, 2]) >>> x[np.array([3, 3, 1, 8])] array([7, 7, 9, 2]) The index array consisting of the values 3, 3, 1 and 8 creates an array of length 4 (the same as the index array), where each index is replaced by the corresponding value from the array being indexed. Negative values are permitted and work as they do with single indices or slices: :: >>> x[np.array([3,3,-3,8])] array([7, 7, 4, 2]) It is an error to have index values out of bounds: :: >>> x[np.array([3, 3, 20, 8])] <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9 Generally speaking, what is returned when index arrays are used is an array with the same shape as the index array, but with the type and values of the array being indexed. As an example, we can use a multidimensional index array instead: :: >>> x[np.array([[1,1],[2,3]])] array([[9, 9], [8, 7]]) Indexing Multi-dimensional arrays ================================= Things become more complex when multidimensional arrays are indexed, particularly with multidimensional index arrays. These tend to be more unusual uses, but they are permitted, and they are useful for some problems. We'll start with the simplest multidimensional case (using the array y from the previous examples): :: >>> y[np.array([0,2,4]), np.array([0,1,2])] array([ 0, 15, 30]) In this case, if the index arrays have a matching shape, and there is an index array for each dimension of the array being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set for each position in the index arrays. In this example, the first index value is 0 for both index arrays, and thus the first value of the resultant array is y[0,0]. The next value is y[2,1], and the last is y[4,2]. If the index arrays do not have the same shape, there is an attempt to broadcast them to the same shape. If they cannot be broadcast to the same shape, an exception is raised: :: >>> y[np.array([0,2,4]), np.array([0,1])] <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape The broadcasting mechanism permits index arrays to be combined with scalars for other indices. The effect is that the scalar value is used for all the corresponding values of the index arrays: :: >>> y[np.array([0,2,4]), 1] array([ 1, 15, 29]) Jumping to the next level of complexity, it is possible to only partially index an array with index arrays.
It takes a bit of thought to understand what happens in such cases. For example, if we just use one index array with y: :: >>> y[np.array([0,2,4])] array([[ 0, 1, 2, 3, 4, 5, 6], [14, 15, 16, 17, 18, 19, 20], [28, 29, 30, 31, 32, 33, 34]]) What results is the construction of a new array where each value of the index array selects one row from the array being indexed and the resultant array has the resulting shape (number of index elements, size of row). An example of where this may be useful is for a color lookup table where we want to map the values of an image into RGB triples for display. The lookup table could have a shape (nlookup, 3). Indexing such an array with an image with shape (ny, nx) with dtype=np.uint8 (or any integer type so long as values are within the bounds of the lookup table) will result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location. In general, the shape of the resultant array will be the concatenation of the shape of the index array (or the shape that all the index arrays were broadcast to) with the shape of any unused dimensions (those not indexed) in the array being indexed. Boolean or "mask" index arrays ============================== Boolean arrays used as indices are treated in a different manner entirely than index arrays. Boolean arrays must be of the same shape as the initial dimensions of the array being indexed. In the most straightforward case, the boolean array has the same shape: :: >>> b = y>20 >>> y[b] array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]) Unlike in the case of integer index arrays, in the boolean case, the result is a 1-D array containing all the elements in the indexed array corresponding to all the true elements in the boolean array. The elements in the indexed array are always iterated and returned in :term:`row-major` (C-style) order. The result is also identical to ``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy of the data, not a view as one gets with slices. The result will be multidimensional if y has more dimensions than b. For example: :: >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y array([False, False, False, True, True], dtype=bool) >>> y[b[:,5]] array([[21, 22, 23, 24, 25, 26, 27], [28, 29, 30, 31, 32, 33, 34]]) Here the 4th and 5th rows are selected from the indexed array and combined to make a 2-D array. In general, when the boolean array has fewer dimensions than the array being indexed, this is equivalent to y[b, ...], which means y is indexed by b followed by as many : as are needed to fill out the rank of y. Thus the shape of the result is one dimension containing the number of True elements of the boolean array, followed by the remaining dimensions of the array being indexed. For example, using a 2-D boolean array of shape (2,3) with four True elements to select rows from a 3-D array of shape (2,3,5) results in a 2-D result of shape (4,5): :: >>> x = np.arange(30).reshape(2,3,5) >>> x array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) >>> b = np.array([[True, True, False], [False, True, True]]) >>> x[b] array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]) For further details, consult the numpy reference documentation on array indexing. Combining index arrays with slices ================================== Index arrays may be combined with slices.
For example: :: >>> y[np.array([0,2,4]),1:3] array([[ 1, 2], [15, 16], [29, 30]]) In effect, the slice is converted to an index array np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array to produce a resultant array of shape (3,2). Likewise, slicing can be combined with broadcasted boolean indices: :: >>> y[b[:,5],1:3] array([[22, 23], [29, 30]]) Structural indexing tools ========================= To facilitate easy matching of array shapes with expressions and in assignments, the np.newaxis object can be used within array indices to add new dimensions with a size of 1. For example: :: >>> y.shape (5, 7) >>> y[:,np.newaxis,:].shape (5, 1, 7) Note that there are no new elements in the array, just that the dimensionality is increased. This can be handy to combine two arrays in a way that otherwise would require explicit reshaping operations. For example: :: >>> x = np.arange(5) >>> x[:,np.newaxis] + x[np.newaxis,:] array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) The ellipsis syntax may be used to indicate selecting in full any remaining unspecified dimensions. For example: :: >>> z = np.arange(81).reshape(3,3,3,3) >>> z[1,...,2] array([[29, 32, 35], [38, 41, 44], [47, 50, 53]]) This is equivalent to: :: >>> z[1,:,:,2] array([[29, 32, 35], [38, 41, 44], [47, 50, 53]]) Assigning values to indexed arrays ================================== As mentioned, one can select a subset of an array to assign to using a single index, slices, and index and mask arrays. The value being assigned to the indexed array must be shape consistent (the same shape or broadcastable to the shape the index produces). For example, it is permitted to assign a constant to a slice: :: >>> x = np.arange(10) >>> x[2:7] = 1 or an array of the right size: :: >>> x[2:7] = np.arange(5) Note that assignments may result in truncated values when assigning higher types to lower types (like floats to ints), or even in exceptions (assigning complex to floats or ints): :: >>> x[1] = 1.2 >>> x[1] 1 >>> x[1] = 1.2j <type 'exceptions.TypeError'>: can't convert complex to long; use long(abs(z)) Unlike some of the references (such as array and mask indices), assignments are always made to the original data in the array (indeed, nothing else would make sense!). Note though, that some actions may not work as one may naively expect. This particular example is often surprising to people: :: >>> x = np.arange(0, 50, 10) >>> x array([ 0, 10, 20, 30, 40]) >>> x[np.array([1, 1, 3, 1])] += 1 >>> x array([ 0, 11, 20, 31, 40]) People often expect that x[1] will be incremented by 3; in fact, it is only incremented by 1. The reason is that a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value x[1] + 1 is assigned to x[1] three times, rather than x[1] being incremented 3 times. Dealing with variable numbers of indices within programs ======================================================== The index syntax is very powerful but limiting when dealing with a variable number of indices. For example, if you want to write a function that can handle arguments with various numbers of dimensions without having to write special case code for each number of possible dimensions, how can that be done? If one supplies a tuple to the index, the tuple will be interpreted as a list of indices.
For example (using the previous definition for the array z): :: >>> indices = (1,1,1,1) >>> z[indices] 40 So one can use code to construct tuples of any number of indices and then use these within an index. Slices can be specified within programs by using the slice() function in Python. For example: :: >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2] >>> z[indices] array([39, 40]) Likewise, ellipsis can be specified by code by using the Ellipsis object: :: >>> indices = (1, Ellipsis, 1) # same as [1,...,1] >>> z[indices] array([[28, 31, 34], [37, 40, 43], [46, 49, 52]]) For this reason it is possible to use the output from the np.where() function directly as an index, since it always returns a tuple of index arrays. Because of the special treatment of tuples, they are not automatically converted to an array, as a list would be. As an example: :: >>> z[[1,1,1,1]] # produces a large array array([[[[27, 28, 29], [30, 31, 32], ... >>> z[(1,1,1,1)] # returns a single value 40 """
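As a hedged illustration of the tuple-index technique above, here is a minimal sketch of a function that works for any number of dimensions (the function name `first_along_axis` is hypothetical, not part of numpy): ::

    >>> import numpy as np
    >>> def first_along_axis(a, axis):
    ...     # Build a tuple of full slices, with index 0 at the requested axis.
    ...     idx = tuple(0 if i == axis else slice(None) for i in range(a.ndim))
    ...     return a[idx]
    ...
    >>> z = np.arange(81).reshape(3,3,3,3)
    >>> first_along_axis(z, 2).shape    # works unchanged for any a.ndim
    (3, 3, 3)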
#!/usr/bin/env python #python mapk/plot_mean.py 09/data/mapk3_1e-15_0.25_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_0_normal_ALL_tc.dat # python mapk/plot_mean.py 09/data/mapk3_1e-15_0.25_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat # python mapk/plot_mean.py 09/data/mapk3_1e-15_0.25_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat # python mapk/plot_mean.py 09/data/mapk3_1e-15_4_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-5_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-4_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-3_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-1_normal_ALL_tc.dat # with lower D, ti=1e-6 # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-6_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-6_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.125_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat # less # of data set # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-6_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat # with lower D, ti=1e-2 # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-2_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-2_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.125_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat # less # of data set # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-2_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat
""" Limits ====== Implemented according to the PhD thesis http://www.cybertester.com/data/gruntz.pdf, which contains very thorough descriptions of the algorithm including many examples. We summarize here the gist of it. All functions are sorted according to how rapidly varying they are at infinity using the following rules. Any two functions f and g can be compared using the properties of L: L=lim log|f(x)| / log|g(x)| (for x -> oo) We define >, < ~ according to:: 1. f > g .... L=+-oo we say that: - f is greater than any power of g - f is more rapidly varying than g - f goes to infinity/zero faster than g 2. f < g .... L=0 we say that: - f is lower than any power of g 3. f ~ g .... L!=0, +-oo we say that: - both f and g are bounded from above and below by suitable integral powers of the other Examples ======== :: 2 < x < exp(x) < exp(x**2) < exp(exp(x)) 2 ~ 3 ~ -5 x ~ x**2 ~ x**3 ~ 1/x ~ x**m ~ -x exp(x) ~ exp(-x) ~ exp(2x) ~ exp(x)**2 ~ exp(x+exp(-x)) f ~ 1/f So we can divide all the functions into comparability classes (x and x^2 belong to one class, exp(x) and exp(-x) belong to some other class). In principle, we could compare any two functions, but in our algorithm, we don't compare anything below the class 2~3~-5 (for example log(x) is below this), so we set 2~3~-5 as the lowest comparability class. Given the function f, we find the list of most rapidly varying (mrv set) subexpressions of it. This list belongs to the same comparability class. Let's say it is {exp(x), exp(2x)}. Using the rule f ~ 1/f we find an element "w" (either from the list or a new one) from the same comparability class which goes to zero at infinity. In our example we set w=exp(-x) (but we could also set w=exp(-2x) or w=exp(-3x) ...). We rewrite the mrv set using w, in our case {1/w, 1/w^2}, and substitute it into f. Then we expand f into a series in w:: f = c0*w^e0 + c1*w^e1 + ... + O(w^en), where e0<e1<...<en, c0!=0 but for x->oo, lim f = lim c0*w^e0, because all the other terms go to zero, because w goes to zero faster than the ci and ei. So:: for e0>0, lim f = 0 for e0<0, lim f = +-oo (the sign depends on the sign of c0) for e0=0, lim f = lim c0 We need to recursively compute limits at several places of the algorithm, but as is shown in the PhD thesis, it always finishes. Important functions from the implementation: compare(a, b, x) compares "a" and "b" by computing the limit L. mrv(e, x) returns the list of most rapidly varying (mrv) subexpressions of "e" rewrite(e, Omega, x, wsym) rewrites "e" in terms of w leadterm(f, x) returns the lowest power term in the series of f mrv_leadterm(e, x) returns the lead term (c0, e0) for e limitinf(e, x) computes lim e (for x->oo) limit(e, z, z0) computes any limit by converting it to the case x->oo All the functions are really simple and straightforward except rewrite(), which is the most difficult/complex part of the algorithm. When the algorithm fails, the bugs are usually in the series expansion (i.e. in SymPy) or in rewrite. This code is almost exact rewrite of the Maple code inside the Gruntz thesis. Debugging --------- Because the gruntz algorithm is highly recursive, it's difficult to figure out what went wrong inside a debugger. Instead, turn on nice debug prints by defining the environment variable SYMPY_DEBUG. 
For example: [user@localhost]: SYMPY_DEBUG=True ./bin/isympy In [1]: limit(sin(x)/x, x, 0) limitinf(_x*sin(1/_x), _x) = 1 +-mrv_leadterm(_x*sin(1/_x), _x) = (1, 0) | +-mrv(_x*sin(1/_x), _x) = set([_x]) | | +-mrv(_x, _x) = set([_x]) | | +-mrv(sin(1/_x), _x) = set([_x]) | | +-mrv(1/_x, _x) = set([_x]) | | +-mrv(_x, _x) = set([_x]) | +-mrv_leadterm(exp(_x)*sin(exp(-_x)), _x, set([exp(_x)])) = (1, 0) | +-rewrite(exp(_x)*sin(exp(-_x)), set([exp(_x)]), _x, _w) = (1/_w*sin(_w), -_x) | +-sign(_x, _x) = 1 | +-mrv_leadterm(1, _x) = (1, 0) +-sign(0, _x) = 0 +-limitinf(1, _x) = 1 And check manually which line is wrong. Then go to the source code and debug this function to figure out the exact problem. """
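For reference, a short usage sketch of the entry points listed above (assuming a SymPy version where `gruntz` is importable from `sympy.series.gruntz`): ::

    >>> from sympy import Symbol, exp, log, limit, oo
    >>> from sympy.series.gruntz import gruntz
    >>> x = Symbol('x')
    >>> limit(exp(x)/x**100, x, oo)    # exp(x) is more rapidly varying than any power of x
    oo
    >>> gruntz(x*log(x), x, 0, '+')    # low-level routine; limits are reduced to x -> oo internally
    0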
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME) # 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME) # 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME) # 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME) # 2002-03-17 fl Avoid buffered read when possible (from NAME) # 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME) # 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME) # 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME, EMAIL, http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME # # By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission.
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
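# A minimal usage sketch (Python 2 era, matching this module); the server URL
# and the add() method below are illustrative, not real endpoints:
#
#     import xmlrpclib
#     proxy = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")
#     print proxy.add(2, 3)            # invokes add() on the remote server
#
#     # The marshalling layer can also be used on its own:
#     payload = xmlrpclib.dumps((2, 3), methodname="add")
#     params, method = xmlrpclib.loads(payload)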
""" ======================================== Special functions (:mod:`scipy.special`) ======================================== .. module:: scipy.special Nearly all of the functions below are universal functions and follow broadcasting and automatic array-looping rules. Exceptions are noted. Error handling ============== Errors are handled by returning nans, or other appropriate values. Some of the special function routines will emit warnings when an error occurs. By default this is disabled. To enable such messages use ``errprint(1)``, and to disable such messages use ``errprint(0)``. Example: >>> print scipy.special.bdtr(-1,10,0.3) >>> scipy.special.errprint(1) >>> print scipy.special.bdtr(-1,10,0.3) .. autosummary:: :toctree: generated/ errprint SpecialFunctionWarning -- Warning that can be issued with ``errprint(True)`` Available functions =================== Airy functions -------------- .. autosummary:: :toctree: generated/ airy -- Airy functions and their derivatives. airye -- Exponentially scaled Airy functions ai_zeros -- [+]Zeros of Airy functions Ai(x) and Ai'(x) bi_zeros -- [+]Zeros of Airy functions Bi(x) and Bi'(x) itairy -- Elliptic Functions and Integrals -------------------------------- .. autosummary:: :toctree: generated/ ellipj -- Jacobian elliptic functions ellipk -- Complete elliptic integral of the first kind. ellipkm1 -- ellipkm1(x) == ellipk(1 - x) ellipkinc -- Incomplete elliptic integral of the first kind. ellipe -- Complete elliptic integral of the second kind. ellipeinc -- Incomplete elliptic integral of the second kind. Bessel Functions ---------------- .. autosummary:: :toctree: generated/ jv -- Bessel function of real-valued order and complex argument. jn -- Alias for jv jve -- Exponentially scaled Bessel function. yn -- Bessel function of second kind (integer order). yv -- Bessel function of the second kind (real-valued order). yve -- Exponentially scaled Bessel function of the second kind. kn -- Modified Bessel function of the second kind (integer order). kv -- Modified Bessel function of the second kind (real order). kve -- Exponentially scaled modified Bessel function of the second kind. iv -- Modified Bessel function. ive -- Exponentially scaled modified Bessel function. hankel1 -- Hankel function of the first kind. hankel1e -- Exponentially scaled Hankel function of the first kind. hankel2 -- Hankel function of the second kind. hankel2e -- Exponentially scaled Hankel function of the second kind. The following is not an universal function: .. autosummary:: :toctree: generated/ lmbda -- [+]Sequence of lambda functions with arbitrary order v. Zeros of Bessel Functions ^^^^^^^^^^^^^^^^^^^^^^^^^ These are not universal functions: .. autosummary:: :toctree: generated/ jnjnp_zeros -- [+]Zeros of integer-order Bessel functions and derivatives sorted in order. jnyn_zeros -- [+]Zeros of integer-order Bessel functions and derivatives as separate arrays. jn_zeros -- [+]Zeros of Jn(x) jnp_zeros -- [+]Zeros of Jn'(x) yn_zeros -- [+]Zeros of Yn(x) ynp_zeros -- [+]Zeros of Yn'(x) y0_zeros -- [+]Complex zeros: Y0(z0)=0 and values of Y0'(z0) y1_zeros -- [+]Complex zeros: Y1(z1)=0 and values of Y1'(z1) y1p_zeros -- [+]Complex zeros of Y1'(z1')=0 and values of Y1(z1') Faster versions of common Bessel Functions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autosummary:: :toctree: generated/ j0 -- Bessel function of order 0. j1 -- Bessel function of order 1. y0 -- Bessel function of second kind of order 0. y1 -- Bessel function of second kind of order 1. 
i0 -- Modified Bessel function of order 0. i0e -- Exponentially scaled modified Bessel function of order 0. i1 -- Modified Bessel function of order 1. i1e -- Exponentially scaled modified Bessel function of order 1. k0 -- Modified Bessel function of the second kind of order 0. k0e -- Exponentially scaled modified Bessel function of the second kind of order 0. k1 -- Modified Bessel function of the second kind of order 1. k1e -- Exponentially scaled modified Bessel function of the second kind of order 1. Integrals of Bessel Functions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autosummary:: :toctree: generated/ itj0y0 -- Basic integrals of j0 and y0 from 0 to x. it2j0y0 -- Integrals of (1-j0(t))/t from 0 to x and y0(t)/t from x to inf. iti0k0 -- Basic integrals of i0 and k0 from 0 to x. it2i0k0 -- Integrals of (i0(t)-1)/t from 0 to x and k0(t)/t from x to inf. besselpoly -- Integral of a Bessel function: Jv(2*a*x) * x**lambda from x=0 to 1. Derivatives of Bessel Functions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autosummary:: :toctree: generated/ jvp -- Nth derivative of Jv(v,z) yvp -- Nth derivative of Yv(v,z) kvp -- Nth derivative of Kv(v,z) ivp -- Nth derivative of Iv(v,z) h1vp -- Nth derivative of H1v(v,z) h2vp -- Nth derivative of H2v(v,z) Spherical Bessel Functions ^^^^^^^^^^^^^^^^^^^^^^^^^^ These are not universal functions: .. autosummary:: :toctree: generated/ sph_jn -- [+]Sequence of spherical Bessel functions, jn(z) sph_yn -- [+]Sequence of spherical Bessel functions, yn(z) sph_jnyn -- [+]Sequence of spherical Bessel functions, jn(z) and yn(z) sph_in -- [+]Sequence of spherical Bessel functions, in(z) sph_kn -- [+]Sequence of spherical Bessel functions, kn(z) sph_inkn -- [+]Sequence of spherical Bessel functions, in(z) and kn(z) Riccati-Bessel Functions ^^^^^^^^^^^^^^^^^^^^^^^^ These are not universal functions: .. autosummary:: :toctree: generated/ riccati_jn -- [+]Sequence of Riccati-Bessel functions of first kind. riccati_yn -- [+]Sequence of Riccati-Bessel functions of second kind. Struve Functions ---------------- .. autosummary:: :toctree: generated/ struve -- Struve function --- Hv(x) modstruve -- Modified Struve function --- Lv(x) itstruve0 -- Integral of H0(t) from 0 to x it2struve0 -- Integral of H0(t)/t from x to Inf. itmodstruve0 -- Integral of L0(t) from 0 to x. Raw Statistical Functions ------------------------- .. seealso:: :mod:`scipy.stats`: Friendly versions of these functions. .. autosummary:: :toctree: generated/ bdtr -- Sum of terms 0 through k of the binomial pdf. bdtrc -- Sum of terms k+1 through n of the binomial pdf. bdtri -- Inverse of bdtr bdtrik -- bdtrin -- btdtr -- Integral from 0 to x of beta pdf. btdtri -- Quantiles of beta distribution btdtria -- btdtrib -- fdtr -- Integral from 0 to x of F pdf. fdtrc -- Integral from x to infinity under F pdf. fdtri -- Inverse of fdtrc fdtridfd -- gdtr -- Integral from 0 to x of gamma pdf. gdtrc -- Integral from x to infinity under gamma pdf. gdtria -- Inverse with respect to `a` of gdtr. gdtrib -- Inverse with respect to `b` of gdtr. gdtrix -- Inverse with respect to `x` of gdtr. nbdtr -- Sum of terms 0 through k of the negative binomial pdf. nbdtrc -- Sum of terms k+1 to infinity under negative binomial pdf. nbdtri -- Inverse of nbdtr nbdtrik -- nbdtrin -- ncfdtr -- CDF of noncentral F distribution. ncfdtridfd -- Find degrees of freedom (denominator) of noncentral F distribution. ncfdtridfn -- Find degrees of freedom (numerator) of noncentral F distribution. ncfdtri -- Inverse CDF of noncentral F distribution.
ncfdtrinc -- Find noncentrality parameter of noncentral F distribution. nctdtr -- CDF of noncentral t distribution. nctdtridf -- Find degrees of freedom of noncentral t distribution. nctdtrit -- Inverse CDF of noncentral t distribution. nctdtrinc -- Find noncentrality parameter of noncentral t distribution. nrdtrimn -- Find mean of normal distribution from cdf and std. nrdtrisd -- Find std of normal distribution from cdf and mean. pdtr -- Sum of terms 0 through k of the Poisson pdf. pdtrc -- Sum of terms k+1 to infinity of the Poisson pdf. pdtri -- Inverse of pdtr pdtrik -- stdtr -- Integral from -infinity to t of the Student-t pdf. stdtridf -- stdtrit -- chdtr -- Integral from 0 to x of the Chi-square pdf. chdtrc -- Integral from x to infinity of Chi-square pdf. chdtri -- Inverse of chdtrc. chdtriv -- ndtr -- Integral from -infinity to x of standard normal pdf log_ndtr -- Logarithm of integral from -infinity to x of standard normal pdf ndtri -- Inverse of ndtr (quantiles) chndtr -- chndtridf -- chndtrinc -- chndtrix -- smirnov -- Kolmogorov-Smirnov complementary CDF for one-sided test statistic (Dn+ or Dn-) smirnovi -- Inverse of smirnov. kolmogorov -- The complementary CDF of the (scaled) two-sided test statistic (Kn*) valid for large n. kolmogi -- Inverse of kolmogorov tklmbda -- Tukey-Lambda CDF logit -- expit -- boxcox -- Compute the Box-Cox transformation. boxcox1p -- Compute the Box-Cox transformation of 1 + x. inv_boxcox -- Compute the inverse of the Box-Cox transformation. inv_boxcox1p -- Compute the inverse of the Box-Cox transformation of 1 + x. Information Theory Functions ---------------------------- .. autosummary:: :toctree: generated/ entr -- entr(x) = -x*log(x) rel_entr -- rel_entr(x, y) = x*log(x/y) kl_div -- kl_div(x, y) = x*log(x/y) - x + y huber -- Huber loss function. pseudo_huber -- Pseudo-Huber loss function. Gamma and Related Functions --------------------------- .. autosummary:: :toctree: generated/ gamma -- Gamma function. gammaln -- Log of the absolute value of the gamma function. gammasgn -- Sign of the gamma function. gammainc -- Incomplete gamma integral. gammaincinv -- Inverse of gammainc. gammaincc -- Complemented incomplete gamma integral. gammainccinv -- Inverse of gammaincc. beta -- Beta function. betaln -- Log of the absolute value of the beta function. betainc -- Incomplete beta integral. betaincinv -- Inverse of betainc. psi -- Logarithmic derivative of the gamma function. rgamma -- One divided by the gamma function. polygamma -- Nth derivative of psi function. multigammaln -- Log of the multivariate gamma. digamma -- Digamma function (derivative of the logarithm of gamma). poch -- The Pochhammer symbol (rising factorial). Error Function and Fresnel Integrals ------------------------------------ .. autosummary:: :toctree: generated/ erf -- Error function. erfc -- Complemented error function (1 - erf(x)) erfcx -- Scaled complemented error function exp(x**2)*erfc(x) erfi -- Imaginary error function, -i erf(i x) erfinv -- Inverse of error function erfcinv -- Inverse of erfc wofz -- Faddeeva function. dawsn -- Dawson's integral. fresnel -- Fresnel sine and cosine integrals. fresnel_zeros -- Complex zeros of both Fresnel integrals modfresnelp -- Modified Fresnel integrals F_+(x) and K_+(x) modfresnelm -- Modified Fresnel integrals F_-(x) and K_-(x) These are not universal functions: ..
autosummary:: :toctree: generated/ erf_zeros -- [+]Complex zeros of erf(z) fresnelc_zeros -- [+]Complex zeros of Fresnel cosine integrals fresnels_zeros -- [+]Complex zeros of Fresnel sine integrals Legendre Functions ------------------ .. autosummary:: :toctree: generated/ lpmv -- Associated Legendre Function of arbitrary non-negative degree v. sph_harm -- Spherical Harmonics (complex-valued) Y^m_n(theta,phi) These are not universal functions: .. autosummary:: :toctree: generated/ clpmn -- [+]Associated Legendre Function of the first kind for complex arguments. lpn -- [+]Legendre Functions (polynomials) of the first kind lqn -- [+]Legendre Functions of the second kind. lpmn -- [+]Associated Legendre Function of the first kind for real arguments. lqmn -- [+]Associated Legendre Function of the second kind. Ellipsoidal Harmonics --------------------- .. autosummary:: :toctree: generated/ ellip_harm -- Ellipsoidal harmonic E ellip_harm_2 -- Ellipsoidal harmonic F ellip_normal -- Ellipsoidal normalization constant Orthogonal polynomials ---------------------- The following functions evaluate values of orthogonal polynomials: .. autosummary:: :toctree: generated/ assoc_laguerre eval_legendre eval_chebyt eval_chebyu eval_chebyc eval_chebys eval_jacobi eval_laguerre eval_genlaguerre eval_hermite eval_hermitenorm eval_gegenbauer eval_sh_legendre eval_sh_chebyt eval_sh_chebyu eval_sh_jacobi The functions below, in turn, return the polynomial coefficients in :class:`~.orthopoly1d` objects, which function similarly to :ref:`numpy.poly1d`. The :class:`~.orthopoly1d` class also has an attribute ``weights`` which returns the roots, weights, and total weights for the appropriate form of Gaussian quadrature. These are returned in an ``n x 3`` array with roots in the first column, weights in the second column, and total weights in the final column. Note that :class:`~.orthopoly1d` objects are converted to ``poly1d`` when doing arithmetic, and lose the information of the original orthogonal polynomial. .. autosummary:: :toctree: generated/ legendre -- [+]Legendre polynomial P_n(x) (lpn -- for function). chebyt -- [+]Chebyshev polynomial T_n(x) chebyu -- [+]Chebyshev polynomial U_n(x) chebyc -- [+]Chebyshev polynomial C_n(x) chebys -- [+]Chebyshev polynomial S_n(x) jacobi -- [+]Jacobi polynomial P^(alpha,beta)_n(x) laguerre -- [+]Laguerre polynomial, L_n(x) genlaguerre -- [+]Generalized (Associated) Laguerre polynomial, L^alpha_n(x) hermite -- [+]Hermite polynomial H_n(x) hermitenorm -- [+]Normalized Hermite polynomial, He_n(x) gegenbauer -- [+]Gegenbauer (Ultraspherical) polynomials, C^(alpha)_n(x) sh_legendre -- [+]shifted Legendre polynomial, P*_n(x) sh_chebyt -- [+]shifted Chebyshev polynomial, T*_n(x) sh_chebyu -- [+]shifted Chebyshev polynomial, U*_n(x) sh_jacobi -- [+]shifted Jacobi polynomial, J*_n(x) = G^(p,q)_n(x) .. warning:: Computing values of high-order polynomials (around ``order > 20``) using polynomial coefficients is numerically unstable. To evaluate polynomial values, the ``eval_*`` functions should be used instead. Roots and weights for orthogonal polynomials .. autosummary:: :toctree: generated/ c_roots cg_roots h_roots he_roots j_roots js_roots l_roots la_roots p_roots ps_roots s_roots t_roots ts_roots u_roots us_roots Hypergeometric Functions ------------------------ ..
autosummary:: :toctree: generated/ hyp2f1 -- Gauss hypergeometric function (2F1) hyp1f1 -- Confluent hypergeometric function (1F1) hyperu -- Confluent hypergeometric function (U) hyp0f1 -- Confluent hypergeometric limit function (0F1) hyp2f0 -- Hypergeometric function (2F0) hyp1f2 -- Hypergeometric function (1F2) hyp3f0 -- Hypergeometric function (3F0) Parabolic Cylinder Functions ---------------------------- .. autosummary:: :toctree: generated/ pbdv -- Parabolic cylinder function Dv(x) and derivative. pbvv -- Parabolic cylinder function Vv(x) and derivative. pbwa -- Parabolic cylinder function W(a,x) and derivative. These are not universal functions: .. autosummary:: :toctree: generated/ pbdv_seq -- [+]Sequence of parabolic cylinder functions Dv(x) pbvv_seq -- [+]Sequence of parabolic cylinder functions Vv(x) pbdn_seq -- [+]Sequence of parabolic cylinder functions Dn(z), complex z Mathieu and Related Functions ----------------------------- .. autosummary:: :toctree: generated/ mathieu_a -- Characteristic values for even solution (ce_m) mathieu_b -- Characteristic values for odd solution (se_m) These are not universal functions: .. autosummary:: :toctree: generated/ mathieu_even_coef -- [+]sequence of expansion coefficients for even solution mathieu_odd_coef -- [+]sequence of expansion coefficients for odd solution The following return both function and first derivative: .. autosummary:: :toctree: generated/ mathieu_cem -- Even Mathieu function mathieu_sem -- Odd Mathieu function mathieu_modcem1 -- Even modified Mathieu function of the first kind mathieu_modcem2 -- Even modified Mathieu function of the second kind mathieu_modsem1 -- Odd modified Mathieu function of the first kind mathieu_modsem2 -- Odd modified Mathieu function of the second kind Spheroidal Wave Functions ------------------------- .. autosummary:: :toctree: generated/ pro_ang1 -- Prolate spheroidal angular function of the first kind pro_rad1 -- Prolate spheroidal radial function of the first kind pro_rad2 -- Prolate spheroidal radial function of the second kind obl_ang1 -- Oblate spheroidal angular function of the first kind obl_rad1 -- Oblate spheroidal radial function of the first kind obl_rad2 -- Oblate spheroidal radial function of the second kind pro_cv -- Compute characteristic value for prolate functions obl_cv -- Compute characteristic value for oblate functions pro_cv_seq -- Compute sequence of prolate characteristic values obl_cv_seq -- Compute sequence of oblate characteristic values The following functions require pre-computed characteristic value: .. autosummary:: :toctree: generated/ pro_ang1_cv -- Prolate spheroidal angular function of the first kind pro_rad1_cv -- Prolate spheroidal radial function of the first kind pro_rad2_cv -- Prolate spheroidal radial function of the second kind obl_ang1_cv -- Oblate spheroidal angular function of the first kind obl_rad1_cv -- Oblate spheroidal radial function of the first kind obl_rad2_cv -- Oblate spheroidal radial function of the second kind Kelvin Functions ---------------- .. autosummary:: :toctree: generated/ kelvin -- All Kelvin functions (order 0) and derivatives. 
kelvin_zeros -- [+]Zeros of all Kelvin functions (order 0) and derivatives ber -- Kelvin function ber x bei -- Kelvin function bei x berp -- Derivative of Kelvin function ber x beip -- Derivative of Kelvin function bei x ker -- Kelvin function ker x kei -- Kelvin function kei x kerp -- Derivative of Kelvin function ker x keip -- Derivative of Kelvin function kei x These are not universal functions: .. autosummary:: :toctree: generated/ ber_zeros -- [+]Zeros of Kelvin function ber x bei_zeros -- [+]Zeros of Kelvin function bei x berp_zeros -- [+]Zeros of derivative of Kelvin function ber x beip_zeros -- [+]Zeros of derivative of Kelvin function bei x ker_zeros -- [+]Zeros of Kelvin function ker x kei_zeros -- [+]Zeros of Kelvin function kei x kerp_zeros -- [+]Zeros of derivative of Kelvin function ker x keip_zeros -- [+]Zeros of derivative of Kelvin function kei x Combinatorics ------------- .. autosummary:: :toctree: generated/ comb -- [+]Combinations of N things taken k at a time, "N choose k" perm -- [+]Permutations of N things taken k at a time, "k-permutations of N" Other Special Functions ----------------------- .. autosummary:: :toctree: generated/ agm -- Arithmetic-Geometric Mean bernoulli -- Bernoulli numbers binom -- Binomial coefficient. diric -- Dirichlet function (periodic sinc) euler -- Euler numbers expn -- Exponential integral. exp1 -- Exponential integral of order 1 (for complex argument) expi -- Another exponential integral -- Ei(x) factorial -- The factorial function, n! = special.gamma(n+1) factorial2 -- Double factorial, n!! = n*(n-2)*(n-4)*... factorialk -- [+]Multifactorial of order k, n*(n-k)*(n-2k)*... shichi -- Hyperbolic sine and cosine integrals. sici -- Integral of the sinc and "cosinc" functions. spence -- Dilogarithm integral. lambertw -- Lambert W function zeta -- Hurwitz zeta function (Riemann zeta function of two arguments). zetac -- Standard Riemann zeta function minus 1. Convenience Functions --------------------- .. autosummary:: :toctree: generated/ cbrt -- Cube root. exp10 -- 10 raised to the x power. exp2 -- 2 raised to the x power. radian -- radian angle given degrees, minutes, and seconds. cosdg -- cosine of the angle given in degrees. sindg -- sine of the angle given in degrees. tandg -- tangent of the angle given in degrees. cotdg -- cotangent of the angle given in degrees. log1p -- log(1+x) expm1 -- exp(x)-1 cosm1 -- cos(x)-1 round -- round the argument to the nearest integer. If argument ends in 0.5 exactly, pick the nearest even integer. xlogy -- x*log(y) xlog1py -- x*log1p(y) exprel -- (exp(x)-1)/x sinc -- sin(x)/x .. [+] in the description indicates a function which is not a universal .. function and does not follow broadcasting and automatic .. array-looping rules. """
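A brief usage sketch of the ufunc behavior described above (the inputs are illustrative and the outputs are rounded): ::

    >>> import numpy as np
    >>> from scipy import special
    >>> special.j0(0.0)                       # scalar in, scalar out
    1.0
    >>> np.round(special.jv([0, 1], 2.5), 4)  # ufuncs broadcast over array arguments
    array([-0.0484,  0.4971])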
#!/usr/bin/env python # ***** BEGIN LICENSE BLOCK ***** # Version: MPL 1.1/GPL 2.0/LGPL 2.1 # # The contents of this file are subject to the Mozilla Public License Version # 1.1 (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # http://www.mozilla.org/MPL/ # # Software distributed under the License is distributed on an "AS IS" basis, # WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License # for the specific language governing rights and limitations under the # License. # # The Original Code is font utility code. # # The Initial Developer of the Original Code is Mozilla Corporation. # Portions created by the Initial Developer are Copyright (C) 2009 # the Initial Developer. All Rights Reserved. # # Contributor(s): # NAME <EMAIL> # # Alternatively, the contents of this file may be used under the terms of # either the GNU General Public License Version 2 or later (the "GPL"), or # the GNU Lesser General Public License Version 2.1 or later (the "LGPL"), # in which case the provisions of the GPL or the LGPL are applicable instead # of those above. If you wish to allow use of your version of this file only # under the terms of either the GPL or the LGPL, and not to allow others to # use your version of this file under the terms of the MPL, indicate your # decision by deleting the provisions above and replace them with the notice # and other provisions required by the GPL or the LGPL. If you do not delete # the provisions above, a recipient may use your version of this file under # the terms of any one of the MPL, the GPL or the LGPL. # # ***** END LICENSE BLOCK ***** # eotlitetool.py - create EOT version of OpenType font for use with IE # # Usage: eotlitetool.py [-o output-filename] font1 [font2 ...] # # OpenType file structure # http://www.microsoft.com/typography/otspec/otff.htm # # Types: # # BYTE 8-bit unsigned integer. # CHAR 8-bit signed integer. # USHORT 16-bit unsigned integer. # SHORT 16-bit signed integer. # ULONG 32-bit unsigned integer. # Fixed 32-bit signed fixed-point number (16.16) # LONGDATETIME Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer. # # SFNT Header # # Fixed sfnt version // 0x00010000 for version 1.0. # USHORT numTables // Number of tables. # USHORT searchRange // (Maximum power of 2 <= numTables) x 16. # USHORT entrySelector // Log2(maximum power of 2 <= numTables). # USHORT rangeShift // NumTables x 16-searchRange. # # Table Directory # # ULONG tag // 4-byte identifier. # ULONG checkSum // CheckSum for this table. # ULONG offset // Offset from beginning of TrueType font file. # ULONG length // Length of this table.
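# A hedged sketch (not part of the original tool) of reading the structures
# above with Python's struct module; OpenType data is big-endian throughout:
#
#     import struct
#
#     def read_table_directory(data):
#         # SFNT header: Fixed version + four USHORTs = 12 bytes in total.
#         version, num_tables = struct.unpack('>IH', data[:6])
#         tables = {}
#         for i in range(num_tables):
#             entry = data[12 + i*16 : 12 + (i+1)*16]   # each entry is 16 bytes
#             tag, checksum, offset, length = struct.unpack('>4sIII', entry)
#             tables[tag] = (checksum, offset, length)
#         return version, tables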
# # OS/2 Table (Version 4) # # USHORT version // 0x0004 # SHORT xAvgCharWidth # USHORT usWeightClass # USHORT usWidthClass # USHORT fsType # SHORT ySubscriptXSize # SHORT ySubscriptYSize # SHORT ySubscriptXOffset # SHORT ySubscriptYOffset # SHORT ySuperscriptXSize # SHORT ySuperscriptYSize # SHORT ySuperscriptXOffset # SHORT ySuperscriptYOffset # SHORT yStrikeoutSize # SHORT yStrikeoutPosition # SHORT sFamilyClass # BYTE panose[10] # ULONG ulUnicodeRange1 // Bits 0-31 # ULONG ulUnicodeRange2 // Bits 32-63 # ULONG ulUnicodeRange3 // Bits 64-95 # ULONG ulUnicodeRange4 // Bits 96-127 # CHAR achVendID[4] # USHORT fsSelection # USHORT usFirstCharIndex # USHORT usLastCharIndex # SHORT sTypoAscender # SHORT sTypoDescender # SHORT sTypoLineGap # USHORT usWinAscent # USHORT usWinDescent # ULONG ulCodePageRange1 // Bits 0-31 # ULONG ulCodePageRange2 // Bits 32-63 # SHORT sxHeight # SHORT sCapHeight # USHORT usDefaultChar # USHORT usBreakChar # USHORT usMaxContext # # # The Naming Table is organized as follows: # # [name table header] # [name records] # [string data] # # Name Table Header # # USHORT format // Format selector (=0). # USHORT count // Number of name records. # USHORT stringOffset // Offset to start of string storage (from start of table). # # Name Record # # USHORT platformID // Platform ID. # USHORT encodingID // Platform-specific encoding ID. # USHORT languageID // Language ID. # USHORT nameID // Name ID. # USHORT length // String length (in bytes). # USHORT offset // String offset from start of storage area (in bytes). # # head Table # # Fixed tableVersion // Table version number 0x00010000 for version 1.0. # Fixed fontRevision // Set by font manufacturer. # ULONG checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum. # ULONG magicNumber // Set to 0x5F0F3CF5. # USHORT flags # USHORT unitsPerEm // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines. # LONGDATETIME created // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # LONGDATETIME modified // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # SHORT xMin // For all glyph bounding boxes. # SHORT yMin # SHORT xMax # SHORT yMax # USHORT macStyle # USHORT lowestRecPPEM // Smallest readable size in pixels. # SHORT fontDirectionHint # SHORT indexToLocFormat // 0 for short offsets, 1 for long. # SHORT glyphDataFormat // 0 for current format. # # # # Embedded OpenType (EOT) file format # http://www.w3.org/Submission/EOT/ # # EOT version 0x00020001 # # An EOT font consists of a header with the original OpenType font # appended at the end. Most of the data in the EOT header is simply a # copy of data from specific tables within the font data. The exceptions # are the 'Flags' field and the root string name field. The root string # is a set of names indicating domains for which the font data can be # used. A null root string implies the font data can be used anywhere. # The EOT header is in little-endian byte order but the font data remains # in big-endian order as specified by the OpenType spec. 
# # Overall structure: # # [EOT header] # [EOT name records] # [font data] # # EOT header # # ULONG eotSize // Total structure length in bytes (including string and font data) # ULONG fontDataSize // Length of the OpenType font (FontData) in bytes # ULONG version // Version number of this format - 0x00020001 # ULONG flags // Processing Flags (0 == no special processing) # BYTE fontPANOSE[10] // OS/2 Table panose # BYTE charset // DEFAULT_CHARSET (0x01) # BYTE italic // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise # ULONG weight // OS/2 Table usWeightClass # USHORT fsType // OS/2 Table fsType (specifies embedding permission flags) # USHORT magicNumber // Magic number for EOT file - 0x504C. # ULONG unicodeRange1 // OS/2 Table ulUnicodeRange1 # ULONG unicodeRange2 // OS/2 Table ulUnicodeRange2 # ULONG unicodeRange3 // OS/2 Table ulUnicodeRange3 # ULONG unicodeRange4 // OS/2 Table ulUnicodeRange4 # ULONG codePageRange1 // OS/2 Table ulCodePageRange1 # ULONG codePageRange2 // OS/2 Table ulCodePageRange2 # ULONG checkSumAdjustment // head Table CheckSumAdjustment # ULONG reserved[4] // Reserved - must be 0 # USHORT padding1 // Padding - must be 0 # # EOT name records # # USHORT FamilyNameSize // Font family name size in bytes # BYTE FamilyName[FamilyNameSize] // Font family name (name ID = 1), little-endian UTF-16 # USHORT Padding2 // Padding - must be 0 # # USHORT StyleNameSize // Style name size in bytes # BYTE StyleName[StyleNameSize] // Style name (name ID = 2), little-endian UTF-16 # USHORT Padding3 // Padding - must be 0 # # USHORT VersionNameSize // Version name size in bytes # BYTE VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16 # USHORT Padding4 // Padding - must be 0 # # USHORT FullNameSize // Full name size in bytes # BYTE FullName[FullNameSize] // Full name (name ID = 4), little-endian UTF-16 # USHORT Padding5 // Padding - must be 0 # # USHORT RootStringSize // Root string size in bytes # BYTE RootString[RootStringSize] // Root string, little-endian UTF-16
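# A hedged sketch of the endianness mix described above (illustrative, not the
# tool's actual code): EOT header fields are packed little-endian, while the
# appended font data keeps the big-endian order of the original file:
#
#     import struct
#
#     def eot_header_start(font_data):
#         # eotSize is written as 0 here; it must be patched afterwards to the
#         # total length of header + name records + font data.
#         return struct.pack('<4L', 0, len(font_data), 0x00020001, 0)
#
#     # Name records are (USHORT size, UTF-16-LE bytes, USHORT padding), e.g.:
#     #     name = u'Family'.encode('utf-16-le')
#     #     record = struct.pack('<H', len(name)) + name + struct.pack('<H', 0)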
""" combinatorics.py OVERVIEW ======== This module was created to supplement Python's itertools module, filling in gaps in two important areas of basic combinatorics: (A) ordered and unordered m-way combinations, and (B) generalizations of the four basic occupancy problems ('balls in boxes'). Brief descriptions of the included functions and classes follow (more detailed descriptions and additional examples can be found in the individual doc strings within the functions): n_choose_m(n, m): calculate n-choose-m, using a simple algorithm that is less likely to involve large integers than the direct evaluation of n! / m! / (n-m)! m_way_ordered_combinations(items, ks): This function returns a generator that produces all m-way ordered combinations (multinomial combinations) from the specified collection of items, with with ks[i] items in the ith group, i= 0, 1, 2, ..., m-1, where m= len(ks) is the number of groups. By 'ordered combinations', we mean that the relative order of equal- size groups is important; the order of the items within any group is not important. The total number of combinations generated is given by the multinomial coefficient formula (see http://en.wikipedia.org/wiki/Multinomial_theorem#Multinomial_coefficients). m_way_unordered_combinations(items, ks): This function returns a generator that produces all m-way unordered combinations from the specified collection of items, with ks[i] items in the ith group, i= 0, 1, 2, ..., m-1, where m= len(ks) is the number of groups. By 'unordered combinations', we mean that the relative order of equal-size groups is not important. The order of the items within any group is also unimportant. Example of `m_way_unordered_combinations`:: Issue the following statement from the IPython prompt: from combinatorics import * list(m_way_unordered_combinations(6,[2,2,2])) The output consists of the 15 combinations listed below: (0, 1), (2, 3), (4, 5) (0, 1), (2, 4), (3, 5) (0, 1), (2, 5), (3, 4) (0, 2), (1, 3), (4, 5) (0, 2), (1, 4), (3, 5) (0, 2), (1, 5), (3, 4) (0, 3), (1, 2), (4, 5) (0, 3), (1, 4), (2, 5) (0, 3), (1, 5), (2, 4) (0, 4), (1, 2), (3, 5) (0, 4), (1, 3), (2, 5) (0, 4), (1, 5), (2, 3) (0, 5), (1, 2), (3, 4) (0, 5), (1, 3), (2, 4) (0, 5), (1, 4), (2, 3) unlabeled_balls_in_labeled_boxes(balls, box_sizes): This function returns a generator that produces all distinct distributions of indistinguishable balls among labeled boxes with specified box sizes (capacities). This is a generalization of the most common formulation of the problem, where each box is sufficiently large to accommodate all of the balls, and is an important example of a class of combinatorics problems called 'weak composition' problems. unlabeled_balls_in_unlabeled_boxes(balls, box_sizes): This function returns a generator that produces all distinct distributions of indistinguishable balls among indistinguishable boxes, with specified box sizes (capacities). This is a generalization of the most common formulation of the problem, where each box is sufficiently large to accommodate all of the balls. It might be asked, 'In what sense are the boxes indistinguishable if they have different capacities?' The answer is that the box capacities must be considered when distributing the balls, but once the balls have been distributed, the identities of the boxes no longer matter. 
Example of `unlabeled_balls_in_unlabeled_boxes`:: Issue the following commands from the IPython prompt: from combinatorics import * list(unlabeled_balls_in_unlabeled_boxes(10,[5,4,3,2,1])) The output is as follows: [(5, 4, 1, 0, 0), (5, 3, 2, 0, 0), (5, 3, 1, 1, 0), (5, 2, 2, 1, 0), (5, 2, 1, 1, 1), (4, 4, 2, 0, 0), (4, 4, 1, 1, 0), (4, 3, 3, 0, 0), (4, 3, 2, 1, 0), (4, 3, 1, 1, 1), (4, 2, 2, 2, 0), (4, 2, 2, 1, 1), (3, 3, 3, 1, 0), (3, 3, 2, 2, 0), (3, 3, 2, 1, 1), (3, 2, 2, 2, 1)] labeled_balls_in_unlabeled_boxes(balls, box_sizes): This function returns a generator that produces all distinct distributions of distinguishable balls among indistinguishable boxes, with specified box sizes (capacities). This is a generalization of the most common formulation of the problem, where each box is sufficiently large to accommodate all of the balls. labeled_balls_in_labeled_boxes(balls, box_sizes): This function returns a generator that produces all distinct distributions of distinguishable balls among distinguishable boxes, with specified box sizes (capacities). This is a generalization of the most common formulation of the problem, where each box is sufficiently large to accommodate all of the balls. Example of `labeled_balls_in_labeled_boxes`:: Issue the following statements from the IPython prompt: from combinatorics import * list(labeled_balls_in_labeled_boxes(3,[2,2])) The output is as follows: [((0, 1), (2,)), ((0, 2), (1,)), ((1, 2), (0,)), ((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))] partitions(n): 'In number theory and combinatorics, a partition of a positive integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered to be the same partition.' We can trivially generate all partitions of an integer using `unlabeled_balls_in_unlabeled_boxes`. The quote is from http://en.wikipedia.org/wiki/Partition_(number_theory) . AUTHOR ====== Dr. NAME. Comments and suggestions--especially bug reports--can be communicated to me via the following e-mail address: EMAIL HISTORY ======= 10-09-2011, NAME I added a function to generate partitions, but have not updated the version number because I regard this as a trivial change. 10-01-2011, version 1.1.0, NAME I added input error checking to the `labeled_balls_in_unlabeled_boxes` function. I fixed the function `m_way_ordered_combinations` so that the box order specified via the input argument `ks` is respected. I added the function `labeled_balls_in_labeled_boxes`. This completes the basic set of functions for solving occupancy problems with capacity limits. 09-24-2011: Initial version. """
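The `partitions` note above can be made concrete with a small wrapper. This is an illustrative sketch, not necessarily the module's own implementation; it assumes `unlabeled_balls_in_unlabeled_boxes` behaves exactly as documented, and the wrapper name `int_partitions` is ours::

    from combinatorics import unlabeled_balls_in_unlabeled_boxes

    def int_partitions(n):
        # n indistinguishable balls into n boxes, each of capacity n;
        # each distribution, with its zeros dropped, is a partition of n
        for boxes in unlabeled_balls_in_unlabeled_boxes(n, [n] * n):
            yield tuple(count for count in boxes if count > 0)

    # list(int_partitions(4)) yields the five partitions of 4:
    # (4,), (3, 1), (2, 2), (2, 1, 1) and (1, 1, 1, 1)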
""" Objects for dealing with Chebyshev series. This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a `Chebyshev` class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its "parent" sub-package, `numpy.polynomial`). Constants --------- - `chebdomain` -- Chebyshev series default domain, [-1,1]. - `chebzero` -- (Coefficients of the) Chebyshev series that evaluates identically to 0. - `chebone` -- (Coefficients of the) Chebyshev series that evaluates identically to 1. - `chebx` -- (Coefficients of the) Chebyshev series for the identity map, ``f(x) = x``. Arithmetic ---------- - `chebadd` -- add two Chebyshev series. - `chebsub` -- subtract one Chebyshev series from another. - `chebmul` -- multiply two Chebyshev series. - `chebdiv` -- divide one Chebyshev series by another. - `chebpow` -- raise a Chebyshev series to an positive integer power - `chebval` -- evaluate a Chebyshev series at given points. - `chebval2d` -- evaluate a 2D Chebyshev series at given points. - `chebval3d` -- evaluate a 3D Chebyshev series at given points. - `chebgrid2d` -- evaluate a 2D Chebyshev series on a Cartesian product. - `chebgrid3d` -- evaluate a 3D Chebyshev series on a Cartesian product. Calculus -------- - `chebder` -- differentiate a Chebyshev series. - `chebint` -- integrate a Chebyshev series. Misc Functions -------------- - `chebfromroots` -- create a Chebyshev series with specified roots. - `chebroots` -- find the roots of a Chebyshev series. - `chebvander` -- Vandermonde-like matrix for Chebyshev polynomials. - `chebvander2d` -- Vandermonde-like matrix for 2D power series. - `chebvander3d` -- Vandermonde-like matrix for 3D power series. - `chebgauss` -- Gauss-Chebyshev quadrature, points and weights. - `chebweight` -- Chebyshev weight function. - `chebcompanion` -- symmetrized companion matrix in Chebyshev form. - `chebfit` -- least-squares fit returning a Chebyshev series. - `chebpts1` -- Chebyshev points of the first kind. - `chebpts2` -- Chebyshev points of the second kind. - `chebtrim` -- trim leading coefficients from a Chebyshev series. - `chebline` -- Chebyshev series representing given straight line. - `cheb2poly` -- convert a Chebyshev series to a polynomial. - `poly2cheb` -- convert a polynomial to a Chebyshev series. Classes ------- - `Chebyshev` -- A Chebyshev series class. See also -------- `numpy.polynomial` Notes ----- The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]_: .. math :: T_n(x) = \\frac{z^n + z^{-n}}{2} \\\\ z\\frac{dx}{dz} = \\frac{z - z^{-1}}{2}. where .. math :: x = \\frac{z + z^{-1}}{2}. These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a "z-series." References ---------- .. [1] NAME et al., "Combinatorial Trigonometry with Chebyshev Polynomials," *Journal of Statistical Planning and Inference 14*, 2008 (preprint: http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4) """
""" This page is in the table of contents. Some filaments contract too much and warp the extruded object. To prevent this you have to print the object in a temperature regulated chamber and/or on a temperature regulated bed. The chamber tool allows you to control the bed and chamber temperature and the holding pressure. The chamber gcodes are also described at: http://reprap.org/wiki/Mendel_User_Manual:_RepRapGCodes The chamber manual page is at: http://fabmetheus.crsndoo.com/wiki/index.php/Skeinforge_Chamber ==Operation== The default 'Activate Chamber' checkbox is on. When it is on, the functions described below will work, when it is off, nothing will be done. ==Settings== ===Bed=== The initial bed temperature is defined by 'Bed Temperature'. If the 'Bed Temperature End Change Height' is greater or equal to the 'Bed Temperature Begin Change Height' and the 'Bed Temperature Begin Change Height' is greater or equal to zero, then the temperature will be ramped toward the 'Bed Temperature End'. The ramp will start once the extruder reaches the 'Bed Temperature Begin Change Height', then the bed temperature will approach the 'Bed Temperature End' as the extruder reaches the 'Bed Temperature End Change Height', finally the bed temperature will stay at the 'Bed Temperature End' for the remainder of the build. ====Bed Temperature==== Default: 60C Defines the initial print bed temperature in Celcius by adding an M140 command. ====Bed Temperature Begin Change Height==== Default: -1 mm Defines the height of the beginning of the temperature ramp. If the 'Bed Temperature End Change Height' is less than zero, the bed temperature will remain at the initial 'Bed Temperature'. ====Bed Temperature End Change Height==== Default: -1 mm Defines the height of the end of the temperature ramp. If the 'Bed Temperature End Change Height' is less than zero or less than the 'Bed Temperature Begin Change Height', the bed temperature will remain at the initial 'Bed Temperature'. ====Bed Temperature End==== Default: 20C Defines the end bed temperature if there is a temperature ramp. ===Chamber Temperature=== Default: 30C Defines the chamber temperature in Celcius by adding an M141 command. ===Holding Force=== Default: 0 Defines the holding pressure of a mechanism, like a vacuum table or electromagnet, to hold the bed surface or object, by adding an M142 command. The holding pressure is in bars. For hardware which only has on/off holding, when the holding pressure is zero, turn off holding, when the holding pressure is greater than zero, turn on holding. 
==Heated Beds== ===Bothacker=== A resistor heated aluminum plate by NAME, with an article at: http://bothacker.com/2009/12/18/heated-build-platform/ ===Domingo=== A heated copper build plate by Domingo: http://casainho-emcrepstrap.blogspot.com/ with articles at: http://casainho-emcrepstrap.blogspot.com/2010/01/first-time-with-pla-testing-it-also-on.html http://casainho-emcrepstrap.blogspot.com/2010/01/call-for-helpideas-to-develop-heated.html http://casainho-emcrepstrap.blogspot.com/2010/01/new-heated-build-platform.html http://casainho-emcrepstrap.blogspot.com/2010/01/no-acrylic-and-instead-kapton-tape-on.html http://casainho-emcrepstrap.blogspot.com/2010/01/problems-with-heated-build-platform-and.html http://casainho-emcrepstrap.blogspot.com/2010/01/perfect-build-platform.html http://casainho-emcrepstrap.blogspot.com/2009/12/almost-no-warp.html http://casainho-emcrepstrap.blogspot.com/2009/12/heated-base-plate.html ===Jmil=== A heated build stage by jmil, over at: http://www.hive76.org with articles at: http://www.hive76.org/handling-hot-build-surfaces http://www.hive76.org/heated-build-stage-success ===Metalab=== A heated base by the Metalab folks: http://reprap.soup.io with information at: http://reprap.soup.io/?search=heated%20base ===Nophead=== A resistor heated aluminum bed by Nophead: http://hydraraptor.blogspot.com with articles at: http://hydraraptor.blogspot.com/2010/01/will-it-stick.html http://hydraraptor.blogspot.com/2010/01/hot-metal-and-serendipity.html http://hydraraptor.blogspot.com/2010/01/new-year-new-plastic.html http://hydraraptor.blogspot.com/2010/01/hot-bed.html ===Prusajr=== A resistive wire heated plexiglass plate by prusajr: http://prusadjs.cz/ with articles at: http://prusadjs.cz/2010/01/heated-reprap-print-bed-mk2/ http://prusadjs.cz/2009/11/look-ma-no-warping-heated-reprap-print-bed/ ===Zaggo=== A resistor heated aluminum plate by Zaggo at Pleasant Software: http://pleasantsoftware.com/developer/3d/ with articles at: http://pleasantsoftware.com/developer/3d/2009/12/05/raftless/ http://pleasantsoftware.com/developer/3d/2009/11/15/living-in-times-of-warp-free-printing/ http://pleasantsoftware.com/developer/3d/2009/11/12/canned-heat/ ==Examples== The following examples chamber the file Screw Holder Bottom.stl. The examples are run in a terminal in the folder which contains Screw Holder Bottom.stl and chamber.py. > python chamber.py This brings up the chamber dialog. > python chamber.py Screw Holder Bottom.stl The chamber tool is parsing the file: Screw Holder Bottom.stl .. The chamber tool has created the file: Screw Holder Bottom_chamber.gcode """
""" Define a simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able to create a solution in their preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. ``\\x00``. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array's format. It is an ASCII string which contains a Python literal expression of a dictionary. It is terminated by a newline (``\\n``) and padded with spaces (``\\x20``) to make the total length of ``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment purposes. The dictionary contains three keys: "descr" : dtype.descr An object that can be passed as an argument to the `numpy.dtype` constructor to create the array's dtype. "fortran_order" : bool Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency. "shape" : tuple of int The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. ``dtype.hasobject is True``), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on ``fortran_order``) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that ``shape=()`` means there is 1 element) by ``dtype.itemsize``. Format Version 2.0 ------------------ The version 1.0 format only allowed the array header to have a total size of 65535 bytes. This can be exceeded by structured arrays with a large number of columns. The version 2.0 format extends the header size to 4 GiB. `numpy.save` will automatically save in 2.0 format if the data requires it, else it will always use the more compatible 1.0 format. The description of the fourth element of the header therefore has become: "The next 4 bytes form a little-endian unsigned int: the length of the header data HEADER_LEN." Notes ----- The ``.npy`` format, including reasons for creating it and a comparison of alternatives, is described fully in the "npy-format" NEP. """
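The layout above can be parsed in a few lines. The sketch below is illustrative only; real code should use `numpy.lib.format`, which handles versioning, sanity checks, and memory-mapping properly::

    import ast
    import struct

    def read_npy_header(fp):
        if fp.read(6) != b'\x93NUMPY':
            raise ValueError('not an .npy file')
        major, minor = fp.read(2)                      # format version bytes
        if major == 1:
            (hlen,) = struct.unpack('<H', fp.read(2))  # 1.0: 2-byte length
        else:
            (hlen,) = struct.unpack('<I', fp.read(4))  # 2.0: 4-byte length
        # the header is an ASCII Python dict literal, newline-terminated
        header = ast.literal_eval(fp.read(hlen).decode('latin1'))
        return header['descr'], header['fortran_order'], header['shape']

    # with open('data.npy', 'rb') as fp:
    #     descr, fortran_order, shape = read_npy_header(fp)
    #     # the array data (or a pickle, for object dtypes) starts at fp.tell()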
# -*- encoding: utf-8 -*- ############################################################################## # # OpenERP, Open Source Management Solution # Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>). # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Affero General Public License as # published by the Free Software Foundation, either version 3 of the # License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Affero General Public License for more details. # # You should have received a copy of the GNU Affero General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # ############################################################################## # SKR03 # ===== # This module provides a German chart of accounts based on the SKR03. # Under the current settings the company is not subject to VAT. # This basic setting is very easy to change; as a rule it requires an initial assignment of tax accounts to products and/or general-ledger accounts or to partners. # The output taxes (Umsatzsteuer: full rate, reduced rate and tax-exempt) should be stored on the product master data (depending on the applicable tax rules). The assignment is made on the Accounting tab (category: output tax). # The input taxes (Vorsteuer: full rate, reduced rate and tax-exempt) should likewise be stored on the product master data (depending on the applicable tax rules). The assignment is made on the Accounting tab (category: input tax). # The taxes for imports from and exports to EU countries, as well as for purchases from and sales to third countries, should be assigned on the partner (supplier/customer), depending on the supplier's or customer's country of origin. The assignment on the customer takes precedence over the assignment on products and overrides it in the individual case. # # To simplify tax reporting and posting for foreign transactions, OpenERP allows a general mapping of tax codes and tax accounts (e.g. mapping "19% output tax" to "tax-exempt imports from the EU"), which is then assigned to the foreign partner (customer/supplier). # Posting a purchase invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the input-tax base amount (e.g. input-tax base amount, full rate 19%). # The tax amount appears under the category "input taxes" (e.g. input tax 19%). Multidimensional hierarchies allow different positions to be aggregated and then output in the form of a report. # # Posting a sales invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the output-tax base amount (e.g. output-tax base amount, full rate 19%). # The tax amount appears under the category "output taxes" (e.g. output tax 19%). Multidimensional hierarchies allow different positions to be aggregated.
# The assigned tax codes can be reviewed at the level of the individual invoice (incoming and outgoing invoices) and adjusted there if necessary. # Credit notes result in a correction (offsetting entry) of the tax posting, in the form of a mirror-image posting. # SKR04 # ===== # This module provides a German chart of accounts based on the SKR04. # Under the current settings the company is not subject to VAT, i.e. by default there is no assignment of products and general-ledger accounts to tax codes. # This basic setting is very easy to change; as a rule it requires an initial assignment of tax codes to products and/or general-ledger accounts or to partners. # The output taxes (Umsatzsteuer: full rate, reduced rate and tax-exempt) should be stored on the product master data (depending on the applicable tax rules). The assignment is made on the Accounting tab (category: output tax). # The input taxes (Vorsteuer: full rate, reduced rate and tax-exempt) should likewise be stored on the product master data (depending on the applicable tax rules). The assignment is made on the Accounting tab (category: input tax). # The taxes for imports from and exports to EU countries, as well as for purchases from and sales to third countries, should be assigned on the partner (supplier/customer), depending on the supplier's or customer's country of origin. The assignment on the customer takes precedence over the assignment on products and overrides it in the individual case. # # To simplify tax reporting and posting for foreign transactions, OpenERP allows a general mapping of tax codes and tax accounts (e.g. mapping "19% output tax" to "tax-exempt imports from the EU"), which is then assigned to the foreign partner (customer/supplier). # Posting a purchase invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the input-tax base amount (e.g. input-tax base amount, full rate 19%). # The tax amount appears under the category "input taxes" (e.g. input tax 19%). Multidimensional hierarchies allow different positions to be aggregated and then output in the form of a report. # # Posting a sales invoice has the following effect: # The tax base (excluding tax) is reported under the respective categories for the output-tax base amount (e.g. output-tax base amount, full rate 19%). # The tax amount appears under the category "output taxes" (e.g. output tax 19%). Multidimensional hierarchies allow different positions to be aggregated. # The assigned tax codes can be reviewed at the level of the individual invoice (incoming and outgoing invoices) and adjusted there if necessary. # Credit notes result in a correction (offsetting entry) of the tax posting, in the form of a mirror-image posting.
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <EMAIL> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
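A concrete instance of the pattern described above (a handler class combined with a server class, mix-in first): a minimal threading echo server. The module is spelled `SocketServer` on Python 2 and `socketserver` on Python 3; the address below is arbitrary::

    import socketserver            # 'import SocketServer' on Python 2

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # rfile/wfile wrap the connection as file-like objects
            for line in self.rfile:
                self.wfile.write(line)

    class ThreadingTCPServer(socketserver.ThreadingMixIn,
                             socketserver.TCPServer):
        pass                       # the mix-in must come first, as noted above

    if __name__ == '__main__':
        server = ThreadingTCPServer(('localhost', 9999), EchoHandler)
        server.serve_forever()

(The library already ships a ready-made ThreadingTCPServer; it is spelled out here only to show the mix-in composition.)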
""" Discrete Fourier Transform (:mod:`numpy.fft`) ============================================= .. currentmodule:: numpy.fft Standard FFTs ------------- .. autosummary:: :toctree: generated/ fft Discrete Fourier transform. ifft Inverse discrete Fourier transform. fft2 Discrete Fourier transform in two dimensions. ifft2 Inverse discrete Fourier transform in two dimensions. fftn Discrete Fourier transform in N-dimensions. ifftn Inverse discrete Fourier transform in N dimensions. Real FFTs --------- .. autosummary:: :toctree: generated/ rfft Real discrete Fourier transform. irfft Inverse real discrete Fourier transform. rfft2 Real discrete Fourier transform in two dimensions. irfft2 Inverse real discrete Fourier transform in two dimensions. rfftn Real discrete Fourier transform in N dimensions. irfftn Inverse real discrete Fourier transform in N dimensions. Hermitian FFTs -------------- .. autosummary:: :toctree: generated/ hfft Hermitian discrete Fourier transform. ihfft Inverse Hermitian discrete Fourier transform. Helper routines --------------- .. autosummary:: :toctree: generated/ fftfreq Discrete Fourier Transform sample frequencies. rfftfreq DFT sample frequencies (for usage with rfft, irfft). fftshift Shift zero-frequency component to center of spectrum. ifftshift Inverse of fftshift. Background information ---------------------- Fourier analysis is fundamentally a method for expressing a function as a sum of periodic components, and for recovering the function from those components. When both the function and its Fourier transform are replaced with discretized counterparts, it is called the discrete Fourier transform (DFT). The DFT has become a mainstay of numerical computing in part because of a very fast algorithm for computing it, called the Fast Fourier Transform (FFT), which was known to Gauss (1805) and was brought to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_ provide an accessible introduction to Fourier analysis and its applications. Because the discrete Fourier transform separates its input into components that contribute at discrete frequencies, it has a great number of applications in digital signal processing, e.g., for filtering, and in this context the discretized input to the transform is customarily referred to as a *signal*, which exists in the *time domain*. The output is called a *spectrum* or *transform* and exists in the *frequency domain*. Implementation details ---------------------- There are many ways to define the DFT, varying in the sign of the exponent, normalization, etc. In this implementation, the DFT is defined as .. math:: A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\} \\qquad k = 0,\\ldots,n-1. The DFT is in general defined for complex inputs and outputs, and a single-frequency component at linear frequency :math:`f` is represented by a complex exponential :math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t` is the sampling interval. The values in the result follow so-called "standard" order: If ``A = fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the mean of the signal), which is always purely real for real inputs. Then ``A[1:n/2]`` contains the positive-frequency terms, and ``A[n/2+1:]`` contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, ``A[n/2]`` represents both positive and negative Nyquist frequency, and is also purely real for real input. 
For an odd number of input points, ``A[(n-1)/2]`` contains the largest positive frequency, while ``A[(n+1)/2]`` contains the largest negative frequency. The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies of corresponding elements in the output. The routine ``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes that shift. When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)`` is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum. The phase spectrum is obtained by ``np.angle(A)``. The inverse DFT is defined as .. math:: a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\} \\qquad m = 0,\\ldots,n-1. It differs from the forward transform by the sign of the exponential argument and the normalization by :math:`1/n`. Real and Hermitian transforms ----------------------------- When the input is purely real, its transform is Hermitian, i.e., the component at frequency :math:`f_k` is the complex conjugate of the component at frequency :math:`-f_k`, which means that for real inputs there is no information in the negative frequency components that is not already available from the positive frequency components. The family of `rfft` functions is designed to operate on real inputs, and exploits this symmetry by computing only the positive frequency components, up to and including the Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex output points. The inverses of this family assume the same symmetry of their input, and for an output of ``n`` points use ``n/2+1`` input points. Correspondingly, when the spectrum is purely real, the signal is Hermitian. The `hfft` family of functions exploits this symmetry by using ``n/2+1`` complex points in the input (time) domain for ``n`` real points in the frequency domain. In higher dimensions, FFTs are used, e.g., for image analysis and filtering. The computational efficiency of the FFT means that it can also be a faster way to compute large convolutions, using the property that a convolution in the time domain is equivalent to a point-by-point multiplication in the frequency domain. Higher dimensions ----------------- In two dimensions, the DFT is defined as .. math:: A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1} a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\} \\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1, which extends in the obvious way to higher dimensions, and the inverses in higher dimensions also extend in the same way. References ---------- .. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the machine calculation of complex Fourier series," *Math. Comput.* 19: 297-301. .. [NR] NAME NAME NAME and NAME 2007, *Numerical Recipes: The Art of Scientific Computing*, ch. 12-13. Cambridge Univ. Press, Cambridge, UK. Examples -------- For examples, see the various functions. """
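A minimal illustration of the "standard" output order and the helper routines described above (the sample rate and tone frequency are arbitrary)::

    import numpy as np

    n, dt = 400, 1.0 / 400               # 400 samples over one second
    t = np.arange(n) * dt
    a = np.sin(2 * np.pi * 50 * t)       # a 50 Hz tone

    A = np.fft.fft(a)
    freq = np.fft.fftfreq(n, d=dt)       # frequencies, in standard order
    print(abs(freq[np.argmax(np.abs(A))]))   # -> 50.0, the dominant tone

    A_centered = np.fft.fftshift(A)      # zero frequency moved to the middle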
""" ================================== Constants (:mod:`scipy.constants`) ================================== .. currentmodule:: scipy.constants Physical and mathematical constants and units. Mathematical constants ====================== ================ ================================================================= ``pi`` Pi ``golden`` Golden ratio ``golden_ratio`` Golden ratio ================ ================================================================= Physical constants ================== =========================== ================================================================= ``c`` speed of light in vacuum ``speed_of_light`` speed of light in vacuum ``mu_0`` the magnetic constant :math:`\mu_0` ``epsilon_0`` the electric constant (vacuum permittivity), :math:`\epsilon_0` ``h`` the Planck constant :math:`h` ``Planck`` the Planck constant :math:`h` ``hbar`` :math:`\hbar = h/(2\pi)` ``G`` Newtonian constant of gravitation ``gravitational_constant`` Newtonian constant of gravitation ``g`` standard acceleration of gravity ``e`` elementary charge ``elementary_charge`` elementary charge ``R`` molar gas constant ``gas_constant`` molar gas constant ``alpha`` fine-structure constant ``fine_structure`` fine-structure constant ``N_A`` Avogadro constant ``Avogadro`` Avogadro constant ``k`` Boltzmann constant ``Boltzmann`` Boltzmann constant ``sigma`` Stefan-Boltzmann constant :math:`\sigma` ``Stefan_Boltzmann`` Stefan-Boltzmann constant :math:`\sigma` ``Wien`` Wien displacement law constant ``Rydberg`` Rydberg constant ``m_e`` electron mass ``electron_mass`` electron mass ``m_p`` proton mass ``proton_mass`` proton mass ``m_n`` neutron mass ``neutron_mass`` neutron mass =========================== ================================================================= Constants database ------------------ In addition to the above variables, :mod:`scipy.constants` also contains the 2014 CODATA recommended values [CODATA2014]_ database containing more physical constants. .. autosummary:: :toctree: generated/ value -- Value in physical_constants indexed by key unit -- Unit in physical_constants indexed by key precision -- Relative precision in physical_constants indexed by key find -- Return list of physical_constant keys with a given string ConstantWarning -- Constant sought not in newest CODATA data set .. data:: physical_constants Dictionary of physical constants, of the format ``physical_constants[name] = (value, unit, uncertainty)``. 
Available constants: ====================================================================== ==== %(constant_names)s ====================================================================== ==== Units ===== SI prefixes ----------- ============ ================================================================= ``yotta`` :math:`10^{24}` ``zetta`` :math:`10^{21}` ``exa`` :math:`10^{18}` ``peta`` :math:`10^{15}` ``tera`` :math:`10^{12}` ``giga`` :math:`10^{9}` ``mega`` :math:`10^{6}` ``kilo`` :math:`10^{3}` ``hecto`` :math:`10^{2}` ``deka`` :math:`10^{1}` ``deci`` :math:`10^{-1}` ``centi`` :math:`10^{-2}` ``milli`` :math:`10^{-3}` ``micro`` :math:`10^{-6}` ``nano`` :math:`10^{-9}` ``pico`` :math:`10^{-12}` ``femto`` :math:`10^{-15}` ``atto`` :math:`10^{-18}` ``zepto`` :math:`10^{-21}` ============ ================================================================= Binary prefixes --------------- ============ ================================================================= ``kibi`` :math:`2^{10}` ``mebi`` :math:`2^{20}` ``gibi`` :math:`2^{30}` ``tebi`` :math:`2^{40}` ``pebi`` :math:`2^{50}` ``exbi`` :math:`2^{60}` ``zebi`` :math:`2^{70}` ``yobi`` :math:`2^{80}` ============ ================================================================= Weight ------ ================= ============================================================ ``gram`` :math:`10^{-3}` kg ``metric_ton`` :math:`10^{3}` kg ``grain`` one grain in kg ``lb`` one pound (avoirdupois) in kg ``pound`` one pound (avoirdupois) in kg ``oz`` one ounce in kg ``ounce`` one ounce in kg ``stone`` one stone in kg ``long_ton`` one long ton in kg ``short_ton`` one short ton in kg ``troy_ounce`` one Troy ounce in kg ``troy_pound`` one Troy pound in kg ``carat`` one carat in kg ``m_u`` atomic mass constant (in kg) ``u`` atomic mass constant (in kg) ``atomic_mass`` atomic mass constant (in kg) ================= ============================================================ Angle ----- ================= ============================================================ ``degree`` degree in radians ``arcmin`` arc minute in radians ``arcminute`` arc minute in radians ``arcsec`` arc second in radians ``arcsecond`` arc second in radians ================= ============================================================ Time ---- ================= ============================================================ ``minute`` one minute in seconds ``hour`` one hour in seconds ``day`` one day in seconds ``week`` one week in seconds ``year`` one year (365 days) in seconds ``Julian_year`` one Julian year (365.25 days) in seconds ================= ============================================================ Length ------ ===================== ============================================================ ``inch`` one inch in meters ``foot`` one foot in meters ``yard`` one yard in meters ``mile`` one mile in meters ``mil`` one mil in meters ``pt`` one point in meters ``point`` one point in meters ``survey_foot`` one survey foot in meters ``survey_mile`` one survey mile in meters ``nautical_mile`` one nautical mile in meters ``fermi`` one Fermi in meters ``angstrom`` one Angstrom in meters ``micron`` one micron in meters ``au`` one astronomical unit in meters ``astronomical_unit`` one astronomical unit in meters ``light_year`` one light year in meters ``parsec`` one parsec in meters ===================== ============================================================ Pressure -------- ================= ============================================================
``atm`` standard atmosphere in pascals ``atmosphere`` standard atmosphere in pascals ``bar`` one bar in pascals ``torr`` one torr (mmHg) in pascals ``mmHg`` one torr (mmHg) in pascals ``psi`` one psi in pascals ================= ============================================================ Area ---- ================= ============================================================ ``hectare`` one hectare in square meters ``acre`` one acre in square meters ================= ============================================================ Volume ------ =================== ======================================================== ``liter`` one liter in cubic meters ``litre`` one liter in cubic meters ``gallon`` one gallon (US) in cubic meters ``gallon_US`` one gallon (US) in cubic meters ``gallon_imp`` one gallon (UK) in cubic meters ``fluid_ounce`` one fluid ounce (US) in cubic meters ``fluid_ounce_US`` one fluid ounce (US) in cubic meters ``fluid_ounce_imp`` one fluid ounce (UK) in cubic meters ``bbl`` one barrel in cubic meters ``barrel`` one barrel in cubic meters =================== ======================================================== Speed ----- ================== ========================================================== ``kmh`` kilometers per hour in meters per second ``mph`` miles per hour in meters per second ``mach`` one Mach (approx., at 15 C, 1 atm) in meters per second ``speed_of_sound`` one Mach (approx., at 15 C, 1 atm) in meters per second ``knot`` one knot in meters per second ================== ========================================================== Temperature ----------- ===================== ======================================================= ``zero_Celsius`` zero of Celsius scale in Kelvin ``degree_Fahrenheit`` one Fahrenheit (only differences) in Kelvins ===================== ======================================================= .. autosummary:: :toctree: generated/ C2K K2C F2C C2F F2K K2F Energy ------ ==================== ======================================================= ``eV`` one electron volt in Joules ``electron_volt`` one electron volt in Joules ``calorie`` one calorie (thermochemical) in Joules ``calorie_th`` one calorie (thermochemical) in Joules ``calorie_IT`` one calorie (International Steam Table calorie, 1956) in Joules ``erg`` one erg in Joules ``Btu`` one British thermal unit (International Steam Table) in Joules ``Btu_IT`` one British thermal unit (International Steam Table) in Joules ``Btu_th`` one British thermal unit (thermochemical) in Joules ``ton_TNT`` one ton of TNT in Joules ==================== ======================================================= Power ----- ==================== ======================================================= ``hp`` one horsepower in watts ``horsepower`` one horsepower in watts ==================== ======================================================= Force ----- ==================== ======================================================= ``dyn`` one dyne in newtons ``dyne`` one dyne in newtons ``lbf`` one pound force in newtons ``pound_force`` one pound force in newtons ``kgf`` one kilogram force in newtons ``kilogram_force`` one kilogram force in newtons ==================== ======================================================= Optics ------ .. autosummary:: :toctree: generated/ lambda2nu nu2lambda References ========== .. [CODATA2014] CODATA Recommended Values of the Fundamental Physical Constants 2014. http://physics.nist.gov/cuu/Constants/index.html """
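A short usage sketch of the interfaces listed above::

    from scipy import constants

    constants.c                           # speed of light in vacuum, m/s
    constants.k                           # Boltzmann constant, J/K

    # the CODATA database: (value, unit, uncertainty) tuples
    val, unit, unc = constants.physical_constants['electron mass']
    constants.find('Avogadro')            # keys containing 'Avogadro'

    # unit constants are conversion factors into SI units
    pressure_pa = 2.5 * constants.atm     # 2.5 atmospheres in pascals
    length_m = 10 * constants.mile        # 10 miles in meters

    constants.C2K(26.85)                  # -> 300.0 (Celsius to Kelvin)

(Newer SciPy releases replace the pairwise temperature helpers listed above with a single `convert_temperature` function.)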
# -*- python -*- # Package : omniidl # main.py Created on: 1999/11/12 # Author : NAME (djs) # # Copyright (C) 2003-2009 Apasphere Ltd # Copyright (C) 1999 AT&T Laboratories Cambridge # # This file is part of omniidl. # # omniidl is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # # Description: # # Produce the main skeleton definitions # $Id: main.py 5867 2009-05-06 16:16:18Z USERNAME $ # $Log$ # Revision IP_ADDRESS 2009/01/09 11:40:37 USERNAME # Remove unnecessary pass. # # Revision IP_ADDRESS 2008/12/03 10:56:28 USERNAME # Struct scope incorrectly handled in marshalling code. # # Revision IP_ADDRESS 2007/09/19 14:16:07 USERNAME # Avoid namespace clashes if IDL defines modules named CORBA. # # Revision IP_ADDRESS 2005/11/14 11:02:16 USERNAME # Local interface fixes. # # Revision IP_ADDRESS 2005/11/09 12:22:17 USERNAME # Local interfaces support. # # Revision IP_ADDRESS 2005/08/16 13:51:20 USERNAME # Problems with valuetype / abstract interface C++ mapping. # # Revision IP_ADDRESS 2005/01/13 21:55:56 USERNAME # Turn off -g debugging; suppress some compiler warnings. # # Revision IP_ADDRESS 2005/01/06 23:10:07 USERNAME # Big merge from omni4_0_develop. # # Revision IP_ADDRESS 2005/01/06 16:35:18 USERNAME # Narrowing for abstract interfaces. # # Revision IP_ADDRESS 2004/10/13 17:58:24 USERNAME # Abstract interfaces support; values support interfaces; value bug fixes. # # Revision IP_ADDRESS 2004/07/04 23:53:39 USERNAME # More ValueType TypeCode and Any support. # # Revision IP_ADDRESS 2003/11/06 11:56:56 USERNAME # Yet more valuetype. Plain valuetype and abstract valuetype are now working. # # Revision IP_ADDRESS 2003/10/23 11:25:55 USERNAME # More valuetype support. # # Revision IP_ADDRESS 2003/03/23 21:02:35 USERNAME # Start of omniORB 4.1.x development branch. # # Revision IP_ADDRESS 2001/10/29 17:42:41 dpg1 # Support forward-declared structs/unions, ORB::create_recursive_tc(). # # Revision IP_ADDRESS 2001/06/08 17:12:19 dpg1 # Merge all the bug fixes from omni3_develop. # # Revision IP_ADDRESS 2001/03/13 10:32:09 dpg1 # Fixed point support. # # Revision IP_ADDRESS 2001/01/25 13:09:11 sll # Fixed up cxx backend to stop it from dying when a relative # path name is given to the -p option of omniidl. # # Revision IP_ADDRESS 2000/11/20 14:43:25 sll # Added support for wchar and wstring. # # Revision IP_ADDRESS 2000/11/07 18:30:35 sll # exception copy ctor must use helper duplicate function if the interface is # a forward declaration. # # Revision IP_ADDRESS 2000/11/03 19:22:56 sll # Replace the old set of marshalling operators in the generated code with # a couple of unified operators for cdrStream. # # Revision IP_ADDRESS 2000/10/12 15:37:53 sll # Updated from omni3_1_develop. 
# # Revision IP_ADDRESS 2000/09/14 16:03:57 djs # Remodularised C++ descriptor name generator # # Revision IP_ADDRESS 2000/08/21 11:35:32 djs # Lots of tidying # # Revision IP_ADDRESS 2000/08/02 10:52:02 dpg1 # New omni3_1_develop branch, merged from omni3_develop. # # Revision 1.30 2000/07/13 15:25:59 dpg1 # Merge from omni3_develop for 3.0 release. # # Revision IP_ADDRESS 2000/07/24 16:32:18 djs # Fixed typo in previous BOA skeleton bugfix. # Suppressed compiler warning (from gcc -Wall) when encountering a call with # no arguments and no return value. # # Revision IP_ADDRESS 2000/06/26 16:24:17 djs # Refactoring of configuration state mechanism. # # Revision IP_ADDRESS 2000/06/06 14:45:07 djs # Produces flattened name typedefs for _all_ inherited interfaces (not just # those which are immediate decendents) in SK.cc # # Revision IP_ADDRESS 2000/06/05 13:04:18 djs # Removed union member name clash (x & pd_x, pd__default, pd__d) # Removed name clash when a sequence is called "pd_seq" # # Revision IP_ADDRESS 2000/05/31 18:03:38 djs # Better output indenting (and preprocessor directives now correctly output at # the beginning of lines) # Calling an exception "e" resulted in a name clash (and resultant C++ # compile failure) # # Revision IP_ADDRESS 2000/05/05 16:50:51 djs # Existing workaround for MSVC5 scoping problems extended to help with # base class initialisers. Instead of using the fully qualified or unambiguous # name, a flat typedef is generated at global scope and that is used instead. # This was a solution to a previous bug wrt operation dispatch()ing. # This does not affect the OMNI_BASE_CTOR powerpc/aix workaround. # # Revision IP_ADDRESS 2000/04/26 18:22:54 djs # Rewrote type mapping code (now in types.py) # Rewrote identifier handling code (now in id.py) # Removed superfluous externs in front of function definitions # # Revision IP_ADDRESS 2000/04/05 10:58:02 djs # Scoping problem with generated proxies for attributes (not operations) # # Revision IP_ADDRESS 2000/03/20 11:50:26 djs # Removed excess buffering- output templates have code attached which is # lazily evaluated when required. # # Revision IP_ADDRESS 2000/02/16 16:30:02 djs # Fix to proxy call descriptor code- failed to handle special case of # Object method(in string x) # # Revision IP_ADDRESS 2000/02/15 15:28:35 djs # Stupid bug in powerpc aix workaround fixed # # Revision IP_ADDRESS 2000/02/14 18:34:53 dpg1 # New omniidl merged in. # # Revision 1.27 2000/02/01 09:26:49 djs # Tracking fixes in old compiler: powerpc-aix scoped identifier workarounds # # Revision 1.26 2000/01/19 17:05:15 djs # Modified to use an externally stored C++ output template. # # Revision 1.25 2000/01/19 11:21:52 djs # *** empty log message *** # # Revision 1.24 2000/01/17 17:05:38 djs # Marshalling and constructed types fixes # # Revision 1.23 2000/01/14 15:57:20 djs # Added MSVC workaround for interface inheritance # # Revision 1.22 2000/01/13 17:02:05 djs # Added support for operation contexts. # # Revision 1.21 2000/01/13 15:56:43 djs # Factored out private identifier prefix rather than hard coding it all through # the code. 
# # Revision 1.20 2000/01/13 14:16:34 djs # Properly clears state between processing separate IDL input files # # Revision 1.19 2000/01/12 17:48:34 djs # Added option to create BOA compatible skeletons (same as -BBOA in omniidl3) # # Revision 1.18 2000/01/11 14:13:23 djs # Updated array mapping to include NAME_copy(to, from) as per 2.3 spec # # Revision 1.17 2000/01/11 12:02:45 djs # More tidying up # # Revision 1.16 2000/01/10 18:42:22 djs # Removed redundant code, tidied up. # # Revision 1.15 2000/01/10 15:39:48 djs # Better name and scope handling. # # Revision 1.14 2000/01/07 20:31:33 djs # Regression tests in CVSROOT/testsuite now pass for # * no backend arguments # * tie templates # * flattened tie templates # * TypeCode and Any generation # # Revision 1.13 1999/12/25 21:47:18 djs # Better TypeCode support # # Revision 1.12 1999/12/24 18:18:32 djs # #include bug fixed # # Revision 1.11 1999/12/16 16:11:21 djs # Now uses transitive closure of inherits relation where appropriate # # Revision 1.10 1999/12/14 11:53:22 djs # Support for CORBA::TypeCode and CORBA::Any # Exception member bugfix # # Revision 1.9 1999/12/09 20:40:57 djs # Bugfixes and integration with dynskel/ code # # Revision 1.8 1999/11/29 19:27:05 djs # Code tidied and moved around. Some redundant code eliminated. # # Revision 1.7 1999/11/29 15:27:53 djs # Minor bugfixes # # Revision 1.6 1999/11/26 18:51:13 djs # Now uses proxy module for most proxy generation # # Revision 1.5 1999/11/23 18:48:25 djs # Bugfixes, more interface operations and attributes code # # Revision 1.4 1999/11/19 20:12:03 djs # Generates skeletons for interface operations and attributes # # Revision 1.3 1999/11/17 20:37:23 djs # Code for call descriptors and proxies # # Revision 1.2 1999/11/15 19:13:38 djs # Union skeletons working # # Revision 1.1 1999/11/12 17:18:58 djs # Struct skeleton code added #
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
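A minimal usage sketch of the API described above; the section and option
names are made up for illustration:

    import configparser

    SAMPLE = "[server]\nhost = example.org\nport = 8080\ndebug = yes\n"

    parser = configparser.ConfigParser()
    parser.read_string(SAMPLE)

    print(parser.get('server', 'host'))          # 'example.org'
    print(parser.getint('server', 'port'))       # 8080
    print(parser.getboolean('server', 'debug'))  # True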
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
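# A quick, hedged sketch of the client interface described above; the
# endpoint URL and the remote 'add' method are made up for illustration
# (on Python 3 the module lives at xmlrpc.client):
#
#   import xmlrpclib
#
#   server = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")
#   try:
#       print server.add(2, 3)        # attribute access becomes a remote call
#   except xmlrpclib.Fault as fault:
#       print "XML-RPC fault: %s" % fault.faultString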
# efm.py
#
# Copyright (c) 2005--2015 by NAME <EMAIL>
# This software is licensed under the GNU General Public License (GPL).
#
# This file is part of laser2wav, a software-only implementation of
# an audio CD decoder.

# The lookup dictionary that defines the Eight-To-Fourteen (EFM)
# decoding algorithm as used in the CD audio channel encoding.
#
# The compact disc channel, scanned by the laser, consists of lands and pits.
# Let T be 1/4321800 seconds; then the valid length for both lands and pits
# is 3*T, 4*T, 5*T, 6*T, 7*T, 8*T, 9*T, 10*T or 11*T.
#
# Note: the allowed physical length of the lands/pits can be deduced from the
# fact that the CD is specified to be read at a constant linear velocity
# (CLV) of 1.2 m/s to 1.4 m/s. This means that T corresponds to a length of
# 277.662 nm (when read at 1.2 m/s) to 323.939 nm (when read at 1.4 m/s),
# giving a valid pit length of between 832.986 nm and 3054.283 nm (when read
# at 1.2 m/s), and between 971.817 nm and 3563.330 nm (when read at 1.4 m/s).
#
# CD surface:
#
#    3  3  4   4    5    5    6     6      7      7      8       8
# -___---____----_____-----______------_______-------________-------- (etc)
#
# Level:
#
# 1000111000011110000011111000000111111000000011111110000000011111111
#
# The first step of decoding detects whether the sampled signal has changed
# its level since the previous (1/4321800)-second interval.
#
# For the sample above, this results in:
#
# Delta-level:
#
# 0100100100010001000010000100000100000100000010000001000000010000000
#
# It is upon blocks of this form that the digital processing chain goes to
# work; specifically, synchronization to the start-of-frame marker
# ("100000000001000000000010") and Fourteen-To-Eight demodulation work on
# the "delta level" signal.
#
# The "3 to 11" length constraint for pit and land lengths translates to a
# "2 to 10" constraint on the number of zeroes between two successive ones.
#
# Of the 16,384 14-bit patterns that exist, only 267 do not violate the
# constraint. 256 are used to encode byte patterns; two are used to encode
# unique synchronization patterns that identify the first two frames of a
# 98-frame sector; and exactly nine patterns /could/ be used, but are not:
#
# 00000000001000
# 00000000001001
# 00000000010000
# 00000000010001
# 00001000000000
# 00010000000000
# 01001000000000
# 10001000000000
# 10010000000000
#
# All 258 patterns that /are/ used are listed in the dictionary below.
#
# Despite its name, the EFM dictionary given below is mainly useful for
# fourteen-to-eight demodulation.
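# As a small, hedged sketch of that first decoding step, the delta-level
# signal above can be reproduced from the level signal directly (the bit
# strings are copied from the example; no transition is recorded for the
# very first sample):
#
#   level = "1000111000011110000011111000000111111000000011111110000000011111111"
#   delta = "0" + "".join("1" if a != b else "0"
#                         for a, b in zip(level, level[1:]))
#   print delta   # matches the delta-level line shown above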
""" Writing Plugins --------------- nose supports plugins for test collection, selection, observation and reporting. There are two basic rules for plugins: * Plugin classes should subclass :class:`nose.plugins.Plugin`. * Plugins may implement any of the methods described in the class :doc:`IPluginInterface <interface>` in nose.plugins.base. Please note that this class is for documentary purposes only; plugins may not subclass IPluginInterface. Hello World =========== Here's a basic plugin. It doesn't do much so read on for more ideas or dive into the :doc:`IPluginInterface <interface>` to see all available hooks. .. code-block:: python import logging import os from nose.plugins import Plugin log = logging.getLogger('nose.plugins.helloworld') class HelloWorld(Plugin): name = 'helloworld' def options(self, parser, env=os.environ): super(HelloWorld, self).options(parser, env=env) def configure(self, options, conf): super(HelloWorld, self).configure(options, conf) if not self.enabled: return def finalize(self, result): log.info('Hello pluginized world!') Registering =========== .. Note:: Important note: the following applies only to the default plugin manager. Other plugin managers may use different means to locate and load plugins. For nose to find a plugin, it must be part of a package that uses setuptools_, and the plugin must be included in the entry points defined in the setup.py for the package: .. code-block:: python setup(name='Some plugin', # ... entry_points = { 'nose.plugins.0.10': [ 'someplugin = someplugin:SomePlugin' ] }, # ... ) Once the package is installed with install or develop, nose will be able to load the plugin. .. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools Registering a plugin without setuptools ======================================= It is currently possible to register a plugin programmatically by creating a custom nose runner like this : .. code-block:: python import nose from yourplugin import YourPlugin if __name__ == '__main__': nose.main(addplugins=[YourPlugin()]) Defining options ================ All plugins must implement the methods ``options(self, parser, env)`` and ``configure(self, options, conf)``. Subclasses of nose.plugins.Plugin that want the standard options should call the superclass methods. nose uses optparse.OptionParser from the standard library to parse arguments. A plugin's ``options()`` method receives a parser instance. It's good form for a plugin to use that instance only to add additional arguments that take only long arguments (--like-this). Most of nose's built-in arguments get their default value from an environment variable. A plugin's ``configure()`` method receives the parsed ``OptionParser`` options object, as well as the current config object. Plugins should configure their behavior based on the user-selected settings, and may raise exceptions if the configured behavior is nonsensical. Logging ======= nose uses the logging classes from the standard library. To enable users to view debug messages easily, plugins should use ``logging.getLogger()`` to acquire a logger in the ``nose.plugins`` namespace. Recipes ======= * Writing a plugin that monitors or controls test result output Implement any or all of ``addError``, ``addFailure``, etc., to monitor test results. If you also want to monitor output, implement ``setOutputStream`` and keep a reference to the output stream. If you want to prevent the builtin ``TextTestResult`` output, implement ``setOutputSteam`` and *return a dummy stream*. 
The default output will go to the dummy stream, while you send your desired output to the real stream. Example: `examples/html_plugin/htmlplug.py`_ * Writing a plugin that handles exceptions Subclass :doc:`ErrorClassPlugin <errorclasses>`. Examples: :doc:`nose.plugins.deprecated <deprecated>`, :doc:`nose.plugins.skip <skip>` * Writing a plugin that adds detail to error reports Implement ``formatError`` and/or ``formatFailure``. The error tuple you return (error class, error message, traceback) will replace the original error tuple. Examples: :doc:`nose.plugins.capture <capture>`, :doc:`nose.plugins.failuredetail <failuredetail>` * Writing a plugin that loads tests from files other than python modules Implement ``wantFile`` and ``loadTestsFromFile``. In ``wantFile``, return True for files that you want to examine for tests. In ``loadTestsFromFile``, for those files, return an iterable containing TestCases (or yield them as you find them; ``loadTestsFromFile`` may also be a generator). Example: :doc:`nose.plugins.doctests <doctests>` * Writing a plugin that prints a report Implement ``begin`` if you need to perform setup before testing begins. Implement ``report`` and output your report to the provided stream. Examples: :doc:`nose.plugins.cover <cover>`, :doc:`nose.plugins.prof <prof>` * Writing a plugin that selects or rejects tests Implement any or all ``want*`` methods. Return False to reject the test candidate, True to accept it -- which means that the test candidate will pass through the rest of the system, so you must be prepared to load tests from it if tests can't be loaded by the core loader or another plugin -- and None if you don't care. Examples: :doc:`nose.plugins.attrib <attrib>`, :doc:`nose.plugins.doctests <doctests>`, :doc:`nose.plugins.testid <testid>` More Examples ============= See any builtin plugin or example plugin in the examples_ directory in the nose source distribution. There is a list of third-party plugins `on jottit`_. .. _examples/html_plugin/htmlplug.py: http://python-nose.googlecode.com/svn/trunk/examples/html_plugin/htmlplug.py .. _examples: http://python-nose.googlecode.com/svn/trunk/examples .. _on jottit: http://nose-plugins.jottit.com/ """
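As an additional sketch in the spirit of the selector recipe above (the
plugin and module names here are illustrative, not part of nose itself):

.. code-block:: python

    import os
    from nose.plugins import Plugin

    class SkipSlowModules(Plugin):
        """Reject any test file whose name ends in '_slow.py'."""
        name = 'skipslow'

        def wantFile(self, file):
            if os.path.basename(file).endswith('_slow.py'):
                return False  # reject the candidate outright
            return None       # let the core loader / other plugins decide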
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# adapted from http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# -thepaul

# This is an implementation of wcwidth() and wcswidth() (defined in
# IEEE Std 1003.1-2001) for Unicode.
#
# http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
# http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
#
# In fixed-width output devices, Latin characters all occupy a single
# "cell" position of equal width, whereas ideographic CJK characters
# occupy two such cells. Interoperability between terminal-line
# applications and (teletype-style) character terminals using the
# UTF-8 encoding requires agreement on which character should advance
# the cursor by how many cell positions. No established formal
# standards exist at present on which Unicode character shall occupy
# how many cell positions on character terminals. These routines are
# a first attempt of defining such behavior based on simple rules
# applied to data provided by the Unicode Consortium.
#
# For some graphical characters, the Unicode standard explicitly
# defines a character-cell width via the definition of the East Asian
# FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
# In all these cases, there is no ambiguity about which width a
# terminal shall use. For characters in the East Asian Ambiguous (A)
# class, the width choice depends purely on a preference of backward
# compatibility with either historic CJK or Western practice.
# Choosing single-width for these characters is easy to justify as
# the appropriate long-term solution, as the CJK practice of
# displaying these characters as double-width comes from historic
# implementation simplicity (8-bit encoded characters were displayed
# single-width and 16-bit ones double-width, even for Greek,
# Cyrillic, etc.) and not any typographic considerations.
#
# Much less clear is the choice of width for the Not East Asian
# (Neutral) class. Existing practice does not dictate a width for any
# of these characters. It would nevertheless make sense
# typographically to allocate two character cells to characters such
# as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
# represented adequately with a single-width glyph. The following
# routines at present merely assign a single-cell width to all
# neutral characters, in the interest of simplicity. This is not
# entirely satisfactory and should be reconsidered before
# establishing a formal standard in this area. At the moment, the
# decision which Not East Asian (Neutral) characters should be
# represented by double-width glyphs cannot yet be answered by
# applying a simple rule from the Unicode database content. Setting
# up a proper standard for the behavior of UTF-8 character terminals
# will require a careful analysis not only of each Unicode character,
# but also of each presentation form, something the author of these
# routines has avoided doing so far.
#
# http://www.unicode.org/unicode/reports/tr11/
#
# NAME -- 2007-05-26 (Unicode 5.0)
#
# Permission to use, copy, modify, and distribute this software
# for any purpose and without fee is hereby granted. The author
# disclaims all warranties with regard to this software.
#
# Latest C version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c

# auxiliary function for binary search in interval table
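# For orientation only: a rough approximation of the width classes
# discussed above can be had from the standard library, without the
# interval tables this module derives from wcwidth.c. This is not the
# implementation used here, just an illustration of the F/W classes:
#
#   import unicodedata
#
#   def approx_wcwidth(ch):
#       # Fullwidth (F) and Wide (W) occupy two cells; everything else
#       # is treated as a single cell in this simplification.
#       return 2 if unicodedata.east_asian_width(ch) in ('F', 'W') else 1
#
#   print approx_wcwidth(u'A')       # 1
#   print approx_wcwidth(u'\u4e00')  # 2 (CJK ideograph)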
""" This page is in the table of contents. Carve is a script to carve a shape into svg slice layers. The carve manual page is at: http://www.bitsfrombytes.com/wiki/index.php?title=Skeinforge_Carve On the Arcol Blog a method of deriving the layer thickness is posted. That article "Machine Calibrating" is at: http://blog.arcol.hu/?p=157 ==Settings== ===Add Layer Template to SVG=== Default is on. When selected, the layer template will be added to the svg output, which adds javascript control boxes. So 'Add Layer Template to SVG' should be selected when the svg will be viewed in a browser. When off, no controls will be added, the svg output will only include the fabrication paths. So 'Add Layer Template to SVG' should be deselected when the svg will be used by other software, like Inkscape. ===Bridge Thickness Multiplier=== Default is one. Defines the the ratio of the thickness on the bridge layers over the thickness of the typical non bridge layers. ===Extra Decimal Places=== Default is one. Defines the number of extra decimal places export will output compared to the number of decimal places in the layer thickness. The higher the 'Extra Decimal Places', the more significant figures the output numbers will have. ===Import Coarseness=== Default is one. When a triangle mesh has holes in it, the triangle mesh slicer switches over to a slow algorithm that spans gaps in the mesh. The higher the 'Import Coarseness' setting, the wider the gaps in the mesh it will span. An import coarseness of one means it will span gaps of the perimeter width. ===Infill in Direction of Bridges=== Default is on. When selected, the infill will be in the direction of bridges across gaps, so that the fill will be able to span a bridge easier. ===Layer Thickness=== Default is 0.4 mm. Defines the thickness of the extrusion layer at default extruder speed, this is the most important carve setting. ===Layers=== Carve slices from bottom to top. To get a single layer, set the "Layers From" to zero and the "Layers To" to one. The 'Layers From' until 'Layers To' range is a python slice. ====Layers From==== Default is zero. Defines the index of the bottom layer that will be carved. If the 'Layers From' is the default zero, the carving will start from the lowest layer. If the 'Layers From' index is negative, then the carving will start from the 'Layers From' index below the top layer. ====Layers To==== Default is a huge number, which will be limited to the highest index layer. Defines the index of the top layer that will be carved. If the 'Layers To' index is a huge number like the default, the carving will go to the top of the model. If the 'Layers To' index is negative, then the carving will go to the 'Layers To' index below the top layer. ===Mesh Type=== Default is 'Correct Mesh'. ====Correct Mesh==== When selected, the mesh will be accurately carved, and if a hole is found, carve will switch over to the algorithm that spans gaps. ====Unproven Mesh==== When selected, carve will use the gap spanning algorithm from the start. The problem with the gap spanning algothm is that it will span gaps, even if there is not actually a gap in the model. ===Perimeter Width over Thickness=== Default is 1.8. Defines the ratio of the extrusion perimeter width to the layer thickness. The higher the value the more the perimeter will be inset, the default is 1.8. A ratio of one means the extrusion is a circle, a typical ratio of 1.8 means the extrusion is a wide oval. These values should be measured from a test extrusion line. 
===SVG Viewer=== Default is webbrowser. If the 'SVG Viewer' is set to the default 'webbrowser', the scalable vector graphics file will be sent to the default browser to be opened. If the 'SVG Viewer' is set to a program name, the scalable vector graphics file will be sent to that program to be opened. ==Examples== The following examples carve the file Screw Holder Bottom.stl. The examples are run in a terminal in the folder which contains Screw Holder Bottom.stl and carve.py. > python carve.py This brings up the carve dialog. > python carve.py Screw Holder Bottom.stl The carve tool is parsing the file: Screw Holder Bottom.stl .. The carve tool has created the file: .. Screw Holder Bottom_carve.svg > python Python 2.5.1 (r251:54863, Sep 22 2007, 01:43:31) [GCC 4.2.1 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import carve >>> carve.main() This brings up the carve dialog. >>> carve.writeOutput('Screw Holder Bottom.stl') The carve tool is parsing the file: Screw Holder Bottom.stl .. The carve tool has created the file: .. Screw Holder Bottom_carve.svg """
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
# # ElementTree # $Id: ElementTree.py 3224 2007-08-27 21:23:39Z USERNAME $ # # light-weight XML support for Python 1.5.2 and later. # # history: # 2001-10-20 fl created (from various sources) # 2001-11-01 fl return root from parse method # 2002-02-16 fl sort attributes in lexical order # 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup # 2002-05-01 fl finished TreeBuilder refactoring # 2002-07-14 fl added basic namespace support to ElementTree.write # 2002-07-25 fl added QName attribute support # 2002-10-20 fl fixed encoding in write # 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding # 2002-11-27 fl accept file objects or file names for parse/write # 2002-12-04 fl moved XMLTreeBuilder back to this module # 2003-01-11 fl fixed entity encoding glitch for us-ascii # 2003-02-13 fl added XML literal factory # 2003-02-21 fl added ProcessingInstruction/PI factory # 2003-05-11 fl added tostring/fromstring helpers # 2003-05-26 fl added ElementPath support # 2003-07-05 fl added makeelement factory method # 2003-07-28 fl added more well-known namespace prefixes # 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME 2003-09-04 fl fall back on emulator if ElementPath is not installed # 2003-10-31 fl markup updates # 2003-11-15 fl fixed nested namespace bug # 2004-03-28 fl added XMLID helper # 2004-06-02 fl added default support to findtext # 2004-06-08 fl fixed encoding of non-ascii element/attribute names # 2004-08-23 fl take advantage of post-2.1 expat features # 2005-02-01 fl added iterparse implementation # 2005-03-02 fl fixed iterparse support for pre-2.2 versions # 2006-11-18 fl added parser support for IronPython (ElementIron) # 2007-08-27 fl fixed newlines in attributes # # Copyright (c) 1999-2007 by NAME All rights reserved. # # EMAIL # http://www.pythonware.com # # -------------------------------------------------------------------- # The ElementTree toolkit is # # Copyright (c) 1999-2007 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. # # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
"""Doctest for method/function calls. We're going the use these types for extra testing >>> from UserList import UserList >>> from UserDict import UserDict We're defining four helper functions >>> def e(a,b): ... print a, b >>> def f(*a, **k): ... print a, test_support.sortdict(k) >>> def g(x, *y, **z): ... print x, y, test_support.sortdict(z) >>> def h(j=1, a=2, h=3): ... print j, a, h Argument list examples >>> f() () {} >>> f(1) (1,) {} >>> f(1, 2) (1, 2) {} >>> f(1, 2, 3) (1, 2, 3) {} >>> f(1, 2, 3, *(4, 5)) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *[4, 5]) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *UserList([4, 5])) (1, 2, 3, 4, 5) {} Here we add keyword arguments >>> f(1, 2, 3, **{'a':4, 'b':5}) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7}) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9}) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} >>> f(1, 2, 3, **UserDict(a=4, b=5)) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7)) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9)) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} Examples with invalid arguments (TypeErrors). We're also testing the function names in the exception messages. Verify clearing of SF bug #733667 >>> e(c=4) Traceback (most recent call last): ... TypeError: e() got an unexpected keyword argument 'c' >>> g() Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*()) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*(), **{}) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(1) 1 () {} >>> g(1, 2) 1 (2,) {} >>> g(1, 2, 3) 1 (2, 3) {} >>> g(1, 2, 3, *(4, 5)) 1 (2, 3, 4, 5) {} >>> class Nothing: pass ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence >>> class Nothing: ... def __len__(self): return 5 ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence >>> class Nothing: ... def __len__(self): return 5 ... def __getitem__(self, i): ... if i<3: return i ... else: raise IndexError(i) ... >>> g(*Nothing()) 0 (1, 2) {} >>> class Nothing: ... def __init__(self): self.c = 0 ... def __iter__(self): return self ... def next(self): ... if self.c == 4: ... raise StopIteration ... c = self.c ... self.c += 1 ... return c ... >>> g(*Nothing()) 0 (1, 2, 3) {} Make sure that the function doesn't stomp the dictionary >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d2 = d.copy() >>> g(1, d=4, **d) 1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4} >>> d == d2 True What about willful misconduct? >>> def saboteur(**kw): ... kw['x'] = 'm' ... return kw >>> d = {} >>> kw = saboteur(a=1, **d) >>> d {} >>> g(1, 2, 3, **{'x': 4, 'y': 5}) Traceback (most recent call last): ... TypeError: g() got multiple values for keyword argument 'x' >>> f(**{1:2}) Traceback (most recent call last): ... TypeError: f() keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' >>> h(*h) Traceback (most recent call last): ... TypeError: h() argument after * must be a sequence >>> dir(*h) Traceback (most recent call last): ... TypeError: dir() argument after * must be a sequence >>> None(*h) Traceback (most recent call last): ... TypeError: NoneType argument after * must be a sequence >>> h(**h) Traceback (most recent call last): ... 
TypeError: h() argument after ** must be a mapping >>> dir(**h) Traceback (most recent call last): ... TypeError: dir() argument after ** must be a mapping >>> None(**h) Traceback (most recent call last): ... TypeError: NoneType argument after ** must be a mapping >>> dir(b=1, **{'b': 1}) Traceback (most recent call last): ... TypeError: dir() got multiple values for keyword argument 'b' Another helper function >>> def f2(*a, **b): ... return a, b >>> d = {} >>> for i in xrange(512): ... key = 'k%d' % i ... d[key] = i >>> a, b = f2(1, *(2,3), **d) >>> len(a), len(b), b == d (3, 512, True) >>> class Foo: ... def method(self, arg1, arg2): ... return arg1+arg2 >>> x = Foo() >>> Foo.method(*(x, 1, 2)) 3 >>> Foo.method(x, *(1, 2)) 3 >>> Foo.method(*(1, 2, 3)) Traceback (most recent call last): ... TypeError: unbound method method() must be called with Foo instance as \ first argument (got int instance instead) >>> Foo.method(1, *[2, 3]) Traceback (most recent call last): ... TypeError: unbound method method() must be called with Foo instance as \ first argument (got int instance instead) A PyCFunction that takes only positional parameters shoud allow an empty keyword dictionary to pass without a complaint, but raise a TypeError if te dictionary is not empty >>> try: ... silence = id(1, *{}) ... True ... except: ... False True >>> id(1, **{'foo': 1}) Traceback (most recent call last): ... TypeError: id() takes no keyword arguments """
""" [Weekly #23] Computational Complexity and Algorithm Design https://www.reddit.com/r/dailyprogrammer/comments/36iufn/weekly_23_computational_complexity_and_algorithm/ # [](#WeeklyIcon) Dynamic Programming and Algorithm Design Programming is fundamentally tied to computer science, which involves the design and optimization of algorithms to solve certain problems. In the world of "big data", tweaking and streamlining algorithms to work as quickly as possible is an important process in designing an algorithm, especially over large, inter-connected data sets. A set of notations usually referred to as [**big-O notation**](http://en.wikipedia.org/wiki/Big_O_notation), so named for the big letter O which is a core component of the notation (believe it or not). This notation describes how the processing time and working space grows, as the size of the input data set grows. For example, if an algorithm runs in O(*f*(n)) time, then the running time of the algorithm with respect to input size n grows no quicker than *f*(n), ignoring any constant multiples. An example of this is sorting data (there's a [pre-existing Weekly thread just on sorting here](http://www.reddit.com/r/dailyprogrammer/comments/2emixb/)). The simple naïve algorithms for sorting typically run in O(n^(2)) time in the worst-case scenario. This includes algorithms such as insertion sort. However, a bit of clever thinking allowed algorithms such as quick-sort to be developed, which uses a [divide and conquer](http://en.wikibooks.org/wiki/Algorithms/Divide_and_Conquer) approach to make the process simpler, thereby reducing the time complexity to O(n log n) - which runs much faster when your data to be sorted is large. We've recently had a spate of challenges which require a bit of algorithm design, so unbeknownst to you, you've already done some of this work. The specific challenges are these five; check them out if you've not already: * [[Hard] Unpacking a Sentence in a Box](http://www.reddit.com/r/dailyprogrammer/comments/322hh0/20150410_challenge_209_hard_unpacking_a_sentence/) * [[Intermediate] The Lazy Typist](http://www.reddit.com/r/dailyprogrammer/comments/351b0o/) * [[Hard] Stepstring discrepancy](http://www.reddit.com/r/dailyprogrammer/comments/358pfk/) * [[Intermediate] Pile of Paper](http://www.reddit.com/r/dailyprogrammer/comments/35s2ds/) * [[Hard] Chester, the greedy Pomeranian](http://www.reddit.com/r/dailyprogrammer/comments/3629st/) If you write an inefficient algorithm to solve these challenges, it might take forever to complete! Techniques such as dynamic programming study algorithms, and specifically those that break problems down into easier-to-solve sub-problems which can be solved quicker individually than as a whole. There's also a certain class of problems (NP) for which you physically can't solve efficiently using this approach; this includes problems such as the infamous [travelling salesman problem](http://mathworld.wolfram.com/TravelingSalesmanProblem.html) and [boolean 3-SAT](http://math.stackexchange.com/questions/86210/what-is-the-3sat-problem), for which no exact efficient solution has been found; and indeed [probably won't be found](http://www.claymath.org/millenium-problems/p-vs-np-problem). In today's Weekly discussion, discuss anything that interests you, or that you know of, relating to algorithm design, or any interesting algorithmic approaches you took to solving any particular DailyProgrammer challenge set to date! 
### Challenge Solution Order

In yesterday's challenge, we trialled ordering the solution submissions by *new* rather than by *best*. What are your opinions on this? How did you think it went? Should we make this the norm?

### IRC

We have an [IRC channel on Freenode](http://www.reddit.com/r/dailyprogrammer/comments/2dtqr7/), at **#reddit-dailyprogrammer**. Join the channel and lurk with us!

### Previously...

The previous weekly thread was [**Machine Learning**](http://www.reddit.com/r/dailyprogrammer/comments/3206mk/).
"""
# -*- coding: utf-8 -*- # # Copyright (c) 2010 by xt <EMAIL> # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # # This script colors nicks in IRC channels in the actual message # not just in the prefix section. # # # History: # 2017-06-20: USERNAME <EMAIL> # version 24: colorize utf8 nicks # 2017-03-01, USERNAME <EMAIL> # version 23: don't colorize nicklist group names # 2016-05-01, NAME <EMAIL> # version 22: invalidate cached colors on hash algorithm change # 2015-07-28, xt # version 21: fix problems with nicks with commas in them # 2015-04-19, xt # version 20: fix ignore of nicks in URLs # 2015-04-18, xt # version 19: new option ignore nicks in URLs # 2015-03-03, xt # version 18: iterate buffers looking for nicklists instead of servers # 2015-02-23, holomorph # version 17: fix coloring in non-channel buffers (#58) # 2014-09-17, holomorph # version 16: use weechat config facilities # clean unused, minor linting, some simplification # 2014-05-05, holomorph # version 15: fix python2-specific re.search check # 2013-01-29, USERNAME version 14: make script compatible with Python 3.x # 2012-10-19, ldvx # version 13: Iterate over every word to prevent incorrect colorization of # nicks. Added option greedy_matching. # 2012-04-28, ldvx # version 12: added ignore_tags to avoid colorizing nicks if tags are present # 2012-01-14, nesthib # version 11: input_text_display hook and modifier to colorize nicks in input bar # 2010-12-22, xt # version 10: hook config option for updating blacklist # 2010-12-20, xt # version 0.9: hook new config option for weechat 0.3.4 # 2010-11-01, USERNAME version 0.8: hook_modifier() added to communicate with rainbow_text # 2010-10-01, xt # version 0.7: changes to support non-irc-plugins # 2010-07-29, xt # version 0.6: compile regexp as per patch from NAME EMAIL 2010-07-19, xt # version 0.5: fix bug with incorrect coloring of own nick # 2010-06-02, xt # version 0.4: update to reflect API changes # 2010-03-26, xt # version 0.3: fix error with exception # 2010-03-24, xt # version 0.2: use ignore_channels when populating to increase performance. # 2010-02-03, xt # version 0.1: initial (based on ruby script by USERNAME Known issues: nicks will not get colorized if they begin with a character # such as ~ (which some irc networks do happen to accept)
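# The core trick, for orientation, is roughly the following (a hedged
# sketch; the real script takes its palette and hash settings from
# weechat's configuration):
#
#   import hashlib
#
#   COLORS = ['red', 'green', 'brown', 'blue', 'magenta', 'cyan']
#
#   def color_for_nick(nick):
#       # Hash the nick so the same nick always maps to the same color.
#       digest = hashlib.md5(nick.encode('utf-8')).hexdigest()
#       return COLORS[int(digest, 16) % len(COLORS)]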
# -*- coding: utf-8 -*-
#
# UrlGrab, for weechat version >= 0.3.0
#
# Listens to all channels for URLs, collects them in a list, and launches
# them in your favourite web browser on the local host or a remote server.
# Copies url to X11 clipboard via xsel
# (http://www.vergenet.net/~conrad/software/xsel)
#
# Usage:
#
# The /url command provides access to all UrlGrab functions. Run
# '/help url' for complete command usage.
#
# In general, use '/url list' to list the entire url list for the current
# channel, and '/url <n>' to launch the nth url in the list. For
# example, to launch the first (and most-recently added) url in the list,
# you would run '/url 1'
#
# From the server window, you must specify a specific channel for the
# list and launch commands, for example:
#   /url list weechat
#   /url 3 weechat
#
# Configuration:
#
# The '/url set' command lets you get and set the following options:
#
# historysize
#   The maximum number of URLs saved per channel. Default is 10
#
# method
#   Must be one of 'local' or 'remote' - Defines how URLs are launched by
#   the script. If 'local', the script will run 'localcmd' on the host.
#   If 'remote', the script will run 'remotessh remotehost remotecmd' on
#   the local host which should normally use ssh to connect to another
#   host and run the browser command there.
#
# localcmd
#   The command to run on the local host to launch URLs in 'local' mode.
#   The string '%s' will be replaced with the URL. The default is
#   'firefox %s'.
#
# remotessh
#   The command (and arguments) used to connect to the remote host for
#   'remote' mode. The default is 'ssh -x' which will connect as the
#   current username via ssh and disable X11 forwarding.
#
# remotehost
#   The remote host to which we will connect in 'remote' mode. For ssh,
#   this can just be a hostname or 'user@host' to specify a username
#   other than your current login name. The default is 'localhost'.
#
# remotecmd
#   The command to execute on the remote host for 'remote' mode. The
#   default is 'bash -c "DISPLAY=:0.0 firefox '%s'"' which runs bash, sets
#   up the environment to display on the remote host's main X display,
#   and runs firefox. As with 'localcmd', the string '%s' will be
#   replaced with the URL.
#
# cmdoutput
#   The file where the command output (if any) is saved. Overwritten
#   each time you launch a new URL. Default is ~/.weechat/urllaunch.log
#
# default
#   The command that will be run if no arguments to /url are given.
#   Default is show
#
# Requirements:
#
#  - Designed to run with weechat version 0.3 or better.
#    http://www.weechat.org/
#
# Acknowledgements:
#
#  - Based on an earlier version called 'urlcollector.py' by 'kolter' of
#    irc.freenode.net/#weechat  Honestly, I just cleaned up the code a bit
#    and made the settings a little more useful (to me).
#
#  - With changes by NAME (weechat at darkk dot net another dot ru):
#    http://darkk.net.ru/weechat/urlgrab.py
#    v1.1: added better handling of dead zombie-childs
#          added parsing of private messages
#          added default command setting
#          added parsing of scrollback buffers on load
#    v1.2: `historysize` was ignored
#
#  - With changes by ExclusivE (exclusive_tm at mail dot ru):
#    v1.3: X11 clipboard support
#
#  - V1.4 Just ported it over to weechat 0.2.7 drubin AT smartcube dot co dot za
#  - V1.5 1) I created a logging feature for urls, Time, Date, buffer, and url.
#         2) Added selectable urls support, similar to the iset plugin
#            (Thanks FlashCode)
#         3) Colors/formats are configurable.
#         4) browser now uses hook_process (Please test with remote clients)
#         5) Added /url open http://url.com functionality
#         6) Changed url detection to use regular expressions, so it should
#            be much better. Thanks to xt of #weechat; based on urlbar.py
#  - V1.6 FlashCode <EMAIL>: Increase timeout for hook_process
#         (from 1 second to 1 minute)
#  - V1.7 FlashCode <EMAIL>: Update WeeChat site
#  - V1.8 drubin <drubin [at] smartcube . co.za>:
#         - Changed remote cmd to be single option
#         - Added scrolling on up and down arrow keys for /url show
#         - Changed remotecmd to include options with public/private keys;
#           password auth doesn't work
#  - V1.9 Specimen <spinifer [at] gmail . com>:
#         - Changed the default command when /url is run with no arguments
#           to 'show'
#         - Removed '/url help' command, because /help <command> is the
#           standard way
#  - V2.0 Xilov: replace "/url help" by "/help url"
#  - V2.1 nand: Changed default: firefox %s to firefox '%s' (localcmd)
#  - V2.2 NAME <EMAIL>: fix reload of config file
#  - V2.3 nand: Allowed trailing )s for unmatched (s in URLs
#  - V2.4 nand: Escaped URLs via URL-encoding instead of shell escaping,
#         fixes '
#  - V2.5 nand: Fixed some URLs that got incorrectly mangled by escaping
#  - V2.6 nesthib: Fixed escaping of "="
#         Added missing quotes in default parameter (firefox '%s')
#         Removed the mix of tabs and spaces in the file indentation
#  - V2.7 USERNAME <john.w.lamb [at] gmail . com>
#         ( https://github.com/USERNAME/ ):
#         - Added 'copycmd' setting, users can set command to pipe into
#           for '/url copy'
#  - V2.8 NAME <EMAIL>:
#         - Changed print hook to ignore filtered lines
#  - V2.9 NAME <EMAIL>:
#         - Updated script for python3 support (now python2 and 3 are both
#           supported)
#  - V3.0 NAME <EMAIL>:
#         - Fix python 3 compatibility (replace "has_key" by "in")
#
# Copyright (C) 2005 NAME <drubin AT smartcube dot co dot za>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
# USA.
#
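# For illustration, expanding and launching 'localcmd' might look like
# the following sketch (the real script goes through weechat's
# hook_process rather than the standard library):
#
#   import shlex
#   import subprocess
#
#   def launch_local(localcmd, url):
#       # '%s' in the configured command is replaced with the URL.
#       subprocess.Popen(shlex.split(localcmd % url))
#
#   # launch_local("firefox '%s'", "http://www.weechat.org/")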
""" >>> from django.core.paginator import Paginator >>> from pagination.templatetags.pagination_tags import paginate >>> from django.template import Template, Context >>> p = Paginator(range(15), 2) >>> pg = paginate({'paginator': p, 'page_obj': p.page(1)}) >>> pg['pages'] [1, 2, 3, 4, 5, 6, 7, 8] >>> pg['records']['first'] 1 >>> pg['records']['last'] 2 >>> p = Paginator(range(15), 2) >>> pg = paginate({'paginator': p, 'page_obj': p.page(8)}) >>> pg['pages'] [1, 2, 3, 4, 5, 6, 7, 8] >>> pg['records']['first'] 15 >>> pg['records']['last'] 15 >>> p = Paginator(range(17), 2) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> p = Paginator(range(19), 2) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2, 3, 4, None, 7, 8, 9, 10] >>> p = Paginator(range(21), 2) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2, 3, 4, None, 8, 9, 10, 11] # Testing orphans >>> p = Paginator(range(5), 2, 1) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2] >>> p = Paginator(range(21), 2, 1) >>> pg = paginate({'paginator': p, 'page_obj': p.page(1)}) >>> pg['pages'] [1, 2, 3, 4, None, 7, 8, 9, 10] >>> pg['records']['first'] 1 >>> pg['records']['last'] 2 >>> p = Paginator(range(21), 2, 1) >>> pg = paginate({'paginator': p, 'page_obj': p.page(10)}) >>> pg['pages'] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> pg['records']['first'] 19 >>> pg['records']['last'] 21 >>> p = Paginator(range(21), 2, 1) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2, 3, 4, None, 7, 8, 9, 10] >>> t = Template("{% load pagination_tags %}{% autopaginate var 2 %}{% paginate %}") >>> from django.http import HttpRequest as DjangoHttpRequest >>> class HttpRequest(DjangoHttpRequest): ... page = lambda self, suffix: 1 >>> t.render(Context({'var': range(21), 'request': HttpRequest()})) u'\\n\\n<div class="pagination">... >>> >>> t = Template("{% load pagination_tags %}{% autopaginate var %}{% paginate %}") >>> t.render(Context({'var': range(21), 'request': HttpRequest()})) u'\\n\\n<div class="pagination">... >>> t = Template("{% load pagination_tags %}{% autopaginate var 20 %}{% paginate %}") >>> t.render(Context({'var': range(21), 'request': HttpRequest()})) u'\\n\\n<div class="pagination">... >>> t = Template("{% load pagination_tags %}{% autopaginate var by %}{% paginate %}") >>> t.render(Context({'var': range(21), 'by': 20, 'request': HttpRequest()})) u'\\n\\n<div class="pagination">...<a href="?page=2"... >>> t = Template("{% load pagination_tags %}{% autopaginate var by as foo %}{{ foo }}") >>> t.render(Context({'var': range(21), 'by': 20, 'request': HttpRequest()})) u'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]' >>> >>> t = Template("{% load pagination_tags %}{% autopaginate var2 by as foo2 %}{% paginate %}{% autopaginate var by as foo %}{% paginate %}") >>> t.render(Context({'var': range(21), 'var2': range(50, 121), 'by': 20, 'request': HttpRequest()})) u'\\n\\n<div class="pagination">...<a href="?page_var2=2"...<a href="?page_var=2"... 
>>> # Testing InfinitePaginator >>> from paginator import InfinitePaginator >>> InfinitePaginator <class 'pagination.paginator.InfinitePaginator'> >>> p = InfinitePaginator(range(20), 2, link_template='/bacon/page/%d') >>> p.validate_number(2) 2 >>> p.orphans 0 >>> p3 = p.page(3) >>> p3 <Page 3> >>> p3.end_index() 6 >>> p3.has_next() True >>> p3.has_previous() True >>> p.page(10).has_next() False >>> p.page(1).has_previous() False >>> p3.next_link() '/bacon/page/4' >>> p3.previous_link() '/bacon/page/2' # Testing FinitePaginator >>> from paginator import FinitePaginator >>> FinitePaginator <class 'pagination.paginator.FinitePaginator'> >>> p = FinitePaginator(range(20), 2, offset=10, link_template='/bacon/page/%d') >>> p.validate_number(2) 2 >>> p.orphans 0 >>> p3 = p.page(3) >>> p3 <Page 3> >>> p3.start_index() 10 >>> p3.end_index() 6 >>> p3.has_next() True >>> p3.has_previous() True >>> p3.next_link() '/bacon/page/4' >>> p3.previous_link() '/bacon/page/2' >>> p = FinitePaginator(range(20), 20, offset=10, link_template='/bacon/page/%d') >>> p2 = p.page(2) >>> p2 <Page 2> >>> p2.has_next() False >>> p3.has_previous() True >>> p2.next_link() >>> p2.previous_link() '/bacon/page/1' >>> from pagination.middleware import PaginationMiddleware >>> from django.core.handlers.wsgi import WSGIRequest >>> from StringIO import StringIO >>> middleware = PaginationMiddleware() >>> request = WSGIRequest({'REQUEST_METHOD': 'POST', 'CONTENT_TYPE': 'multipart', 'wsgi.input': StringIO()}) >>> middleware.process_request(request) >>> request.upload_handlers.append('asdf') """
""" This is an experimental feature!!! for advanced users only. Allows a user to create their own subclasses of leaf PyMEL node classes, which are returned by `PyNode` and all other pymel commands. .. warning:: If you are not familiar with the classmethod and staticmethod decorators you should read up on them before using this feature. The process is fairly simple: 1. Subclass a PyNode class. Be sure that it is a leaf class, meaning that it represents an actual Maya node type and not an abstract type higher up in the hierarchy. 2. Add an _isVirtual classmethod that accepts two arguments: an MObject/MDagPath instance for the current object, and its name. It should return True if the current object meets the requirements to become the virtual subclass, or else False. 3. Add optional _preCreate, _create, and _postCreate methods. For more on these, see below 4. Register your subclass by calling factories.virtualClasses.register. If the _isVirtual callback requires the name of the object, set the keyword argument nameRequired to True. The object's name is not always immediately available and may take an extra calculation to retrieve, so if nameRequired is not set the name argument passed to your callback could be None. The creation of custom nodes may be customized with the use of isVirtual, preCreate, create, and postCreate functions; these are functions (or classmethods) which are called before / during / after creating the node. The isVirtual method is required - it is the callback used on instances of the base (ie, 'real') objects to determine whether they should be considered an instance of this virtual class. It's input is an MObject and an optional name (if nameRequired is set to True). It should return True to indicate that the given object is 'of this class', False otherwise. PyMEL code should not be used inside the callback, only API and maya.cmds. Keep in mind that once your new type is registered, its test will be run every time a node of its parent type is returned as a PyMEL node class, so be sure to keep your tests simple and fast. The preCreate function is called prior to node creation and gives you a chance to modify the kwargs dictionary; they are fed the kwargs fed to the creation command, and return either 1 or 2 dictionaries; the first dictionary is the one actually passed to the creation command; the second, if present, is passed to the postCreate command. The create method can be used to override the 'default' node creation command; it is given the kwargs given on node creation (possibly altered by the preCreate), and must return the string name of the created node. (...or any another type of object (such as an MObject), as long as the postCreate and class.__init__ support it.) The postCreate function is called after creating the new node, and gives you a chance to modify it. The method is passed the PyNode of the newly created node, as well as the second dictionary returned from the preCreate function as kwargs (if it was returned). You can use PyMEL code here, but you should avoid creating any new nodes. By default, any method named '_isVirtual', '_preCreateVirtual', '_createVirtual', or '_postCreateVirtual' on the class is used; if present, these must be classmethods or staticmethods. Other methods / functions may be used by passing a string or callable to the preCreate / postCreate kwargs. If a string, then the method with that name on the class is used; it should be a classmethod or staticmethod present at the time it is registered. 
The value None may also be passed to any of these args (except isVirtual) to
signal that no function is to be used for these purposes.

If more than one subclass is registered for a node type, the registered
callbacks will be run newest to oldest until one returns True. If no test
returns True, then the standard node class is used. Also, for each base node
type, if there is already a virtual class registered with the same name and
module, then it is removed. (This helps keep registered callbacks from piling
up if, for instance, a module is reloaded.)

Overriding methods of PyMEL base classes should be performed with care,
because certain methods are used internally and altering their results may
cause PyMEL to error or behave unpredictably. This is particularly true for
special methods like __setattr__, __getattr__, __setstate__, __getstate__,
etc. Some methods are considered too dangerous to modify, and registration
will fail if the user defines an override for them; this set includes
__init__, __new__, and __str__.
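As a sketch of the steps above (a hypothetical example; the 'rig_' naming
convention, the class name, and the exact import path of `factories` are
assumptions for illustration)::

    import maya.OpenMaya as om
    import pymel.core.nodetypes as nt
    import pymel.internal.factories as factories

    class RigJoint(nt.Joint):
        # Step 2: decide, from the API object alone, whether a node should
        # be wrapped as this virtual class (API / maya.cmds only, no PyMEL).
        @classmethod
        def _isVirtual(cls, obj, name):
            if isinstance(obj, om.MDagPath):
                obj = obj.node()
            return om.MFnDependencyNode(obj).name().startswith('rig_')

        # Step 3 (optional): nudge creation kwargs so newly created nodes
        # pass the _isVirtual test above.
        @classmethod
        def _preCreateVirtual(cls, **kwargs):
            kwargs.setdefault('name', 'rig_joint1')
            return kwargs

    # Step 4: register. nameRequired is left at its default because
    # _isVirtual retrieves the name itself through the API.
    factories.virtualClasses.register(RigJoint)
"""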
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods:
    aiff()          -- create an AIFF file (AIFF-C default)
    aifc()          -- create an AIFF-C file
    setnchannels(n) -- set the number of channels
    setsampwidth(n) -- set the sample width
    setframerate(n) -- set the frame rate
    setnframes(n)   -- set the number of frames
    setcomptype(type, name)
                    -- set the compression type and the
                       human-readable compression type
    setparams(tuple) -- set all parameters at once
    setmark(id, pos, name)
                    -- add specified mark to the list of marks
    tell()          -- return current position in output file (useful
                       in combination with setmark())
    writeframesraw(data)
                    -- write audio frames without patching up the
                       file header
    writeframes(data)
                    -- write audio frames and patch up the file header
    close()         -- patch up the file header and close the
                       output file
You should set the parameters before the first writeframesraw or
writeframes. The total number of frames does not need to be set, but when it
is set to the correct value, the header does not have to be patched up. It is
best to first set all parameters, including the compression type, and then
write audio frames using writeframesraw. When all frames have been written,
either call writeframes('') or close() to patch up the sizes in the header.
Marks can be added anytime. If there are any marks, you must call close()
after all frames have been written.
The close() method is called automatically when the class instance is
destroyed.

When a file is opened with the extension '.aiff', an AIFF file is written,
otherwise an AIFF-C file is written. This default can be changed by calling
aiff() or aifc() before the first writeframes or writeframesraw.
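A short round trip using the two interfaces above (the file names are
hypothetical)::

    import aifc

    r = aifc.open('in.aiff', 'r')
    nchannels = r.getnchannels()
    sampwidth = r.getsampwidth()
    framerate = r.getframerate()
    data = r.readframes(r.getnframes())
    r.close()

    w = aifc.open('out.aiff', 'w')
    w.setnchannels(nchannels)
    w.setsampwidth(sampwidth)
    w.setframerate(framerate)
    w.writeframes(data)   # also patches up the sizes in the header
    w.close()
"""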
""" TestCmd.py: a testing framework for commands and scripts. The TestCmd module provides a framework for portable automated testing of executable commands and scripts (in any language, not just Python), especially commands and scripts that require file system interaction. In addition to running tests and evaluating conditions, the TestCmd module manages and cleans up one or more temporary workspace directories, and provides methods for creating files and directories in those workspace directories from in-line data, here-documents), allowing tests to be completely self-contained. A TestCmd environment object is created via the usual invocation: import TestCmd test = TestCmd.TestCmd() There are a bunch of keyword arguments that you can use at instantiation time: test = TestCmd.TestCmd(description = 'string', program = 'program_or_script_to_test', interpreter = 'script_interpreter', workdir = 'prefix', subdir = 'subdir', verbose = Boolean, match = default_match_function, combine = Boolean) There are a bunch of methods that let you do a bunch of different things. Here is an overview of them: test.verbose_set(1) test.description_set('string') test.program_set('program_or_script_to_test') test.interpreter_set('script_interpreter') test.interpreter_set(['script_interpreter', 'arg']) test.workdir_set('prefix') test.workdir_set('') test.workpath('file') test.workpath('subdir', 'file') test.subdir('subdir', ...) test.write('file', "contents\n") test.write(['subdir', 'file'], "contents\n") test.read('file') test.read(['subdir', 'file']) test.read('file', mode) test.read(['subdir', 'file'], mode) test.writable('dir', 1) test.writable('dir', None) test.preserve(condition, ...) test.cleanup(condition) test.run(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', chdir = 'directory_to_chdir_to', stdin = 'input to feed to the program\n') test.pass_test() test.pass_test(condition) test.pass_test(condition, function) test.fail_test() test.fail_test(condition) test.fail_test(condition, function) test.fail_test(condition, function, skip) test.no_result() test.no_result(condition) test.no_result(condition, function) test.no_result(condition, function, skip) test.stdout() test.stdout(run) test.stderr() test.stderr(run) test.symlink(target, link) test.match(actual, expected) test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n") test.match_exact(["actual 1\n", "actual 2\n"], ["expected 1\n", "expected 2\n"]) test.match_re("actual 1\nactual 2\n", regex_string) test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes) test.match_re_dotall("actual 1\nactual 2\n", regex_string) test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes) test.sleep() test.sleep(seconds) test.where_is('foo') test.where_is('foo', 'PATH1:PATH2') test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4') test.unlink('file') test.unlink('subdir', 'file') The TestCmd module provides pass_test(), fail_test(), and no_result() unbound functions that report test results for use with the Aegis change management system. These methods terminate the test immediately, reporting PASSED, FAILED, or NO RESULT respectively, and exiting with status 0 (success), 1 or 2 respectively. This allows for a distinction between an actual failed test and a test that could not be properly evaluated because of an external condition (such as a full file system or incorrect permissions). 
    import TestCmd
    TestCmd.pass_test()
    TestCmd.pass_test(condition)
    TestCmd.pass_test(condition, function)
    TestCmd.fail_test()
    TestCmd.fail_test(condition)
    TestCmd.fail_test(condition, function)
    TestCmd.fail_test(condition, function, skip)
    TestCmd.no_result()
    TestCmd.no_result(condition)
    TestCmd.no_result(condition, function)
    TestCmd.no_result(condition, function, skip)

The TestCmd module also provides unbound functions that handle matching in
the same way as the match_*() methods described above.

    import TestCmd
    test = TestCmd.TestCmd(match = TestCmd.match_exact)
    test = TestCmd.TestCmd(match = TestCmd.match_re)
    test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)

Lastly, the where_is() method also exists in an unbound function version.

    import TestCmd
    TestCmd.where_is('foo')
    TestCmd.where_is('foo', 'PATH1:PATH2')
    TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
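As a minimal self-contained sketch of the pieces above (the script name and
its expected output are illustrative, not part of the TestCmd API):

    import TestCmd

    test = TestCmd.TestCmd(workdir = '', description = 'hello test')
    test.write('hello.py', "print 'hello'\n")
    test.run(interpreter = 'python', program = test.workpath('hello.py'))
    test.fail_test(test.stdout() != 'hello\n')
    test.pass_test()
"""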
""" Objects for dealing with Chebyshev series. This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a `Chebyshev` class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its "parent" sub-package, `numpy.polynomial`). Constants --------- - `chebdomain` -- Chebyshev series default domain, [-1,1]. - `chebzero` -- (Coefficients of the) Chebyshev series that evaluates identically to 0. - `chebone` -- (Coefficients of the) Chebyshev series that evaluates identically to 1. - `chebx` -- (Coefficients of the) Chebyshev series for the identity map, ``f(x) = x``. Arithmetic ---------- - `chebadd` -- add two Chebyshev series. - `chebsub` -- subtract one Chebyshev series from another. - `chebmul` -- multiply two Chebyshev series. - `chebdiv` -- divide one Chebyshev series by another. - `chebpow` -- raise a Chebyshev series to an positive integer power - `chebval` -- evaluate a Chebyshev series at given points. - `chebval2d` -- evaluate a 2D Chebyshev series at given points. - `chebval3d` -- evaluate a 3D Chebyshev series at given points. - `chebgrid2d` -- evaluate a 2D Chebyshev series on a Cartesian product. - `chebgrid3d` -- evaluate a 3D Chebyshev series on a Cartesian product. Calculus -------- - `chebder` -- differentiate a Chebyshev series. - `chebint` -- integrate a Chebyshev series. Misc Functions -------------- - `chebfromroots` -- create a Chebyshev series with specified roots. - `chebroots` -- find the roots of a Chebyshev series. - `chebvander` -- Vandermonde-like matrix for Chebyshev polynomials. - `chebvander2d` -- Vandermonde-like matrix for 2D power series. - `chebvander3d` -- Vandermonde-like matrix for 3D power series. - `chebgauss` -- Gauss-Chebyshev quadrature, points and weights. - `chebweight` -- Chebyshev weight function. - `chebcompanion` -- symmetrized companion matrix in Chebyshev form. - `chebfit` -- least-squares fit returning a Chebyshev series. - `chebpts1` -- Chebyshev points of the first kind. - `chebpts2` -- Chebyshev points of the second kind. - `chebtrim` -- trim leading coefficients from a Chebyshev series. - `chebline` -- Chebyshev series representing given straight line. - `cheb2poly` -- convert a Chebyshev series to a polynomial. - `poly2cheb` -- convert a polynomial to a Chebyshev series. Classes ------- - `Chebyshev` -- A Chebyshev series class. See also -------- `numpy.polynomial` Notes ----- The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]_: .. math :: T_n(x) = \\frac{z^n + z^{-n}}{2} \\\\ z\\frac{dx}{dz} = \\frac{z - z^{-1}}{2}. where .. math :: x = \\frac{z + z^{-1}}{2}. These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a "z-series." References ---------- .. [1] NAME et al., "Combinatorial Trigonometry with Chebyshev Polynomials," *Journal of Statistical Planning and Inference 14*, 2008 (preprint: http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4) """
""" ==================================== Linear algebra (:mod:`scipy.linalg`) ==================================== .. currentmodule:: scipy.linalg Linear algebra functions. .. seealso:: `numpy.linalg` for more linear algebra functions. Note that although `scipy.linalg` imports most of them, identically named functions from `scipy.linalg` may offer more or slightly differing functionality. Basics ====== .. autosummary:: :toctree: generated/ inv - Find the inverse of a square matrix solve - Solve a linear system of equations solve_banded - Solve a banded linear system solveh_banded - Solve a Hermitian or symmetric banded system solve_circulant - Solve a circulant system solve_triangular - Solve a triangular matrix solve_toeplitz - Solve a toeplitz matrix det - Find the determinant of a square matrix norm - Matrix and vector norm lstsq - Solve a linear least-squares problem pinv - Pseudo-inverse (Moore-Penrose) using lstsq pinv2 - Pseudo-inverse using svd pinvh - Pseudo-inverse of hermitian matrix kron - Kronecker product of two arrays tril - Construct a lower-triangular matrix from a given matrix triu - Construct an upper-triangular matrix from a given matrix orthogonal_procrustes - Solve an orthogonal Procrustes problem LinAlgError Eigenvalue Problems =================== .. autosummary:: :toctree: generated/ eig - Find the eigenvalues and eigenvectors of a square matrix eigvals - Find just the eigenvalues of a square matrix eigh - Find the e-vals and e-vectors of a Hermitian or symmetric matrix eigvalsh - Find just the eigenvalues of a Hermitian or symmetric matrix eig_banded - Find the eigenvalues and eigenvectors of a banded matrix eigvals_banded - Find just the eigenvalues of a banded matrix Decompositions ============== .. autosummary:: :toctree: generated/ lu - LU decomposition of a matrix lu_factor - LU decomposition returning unordered matrix and pivots lu_solve - Solve Ax=b using back substitution with output of lu_factor svd - Singular value decomposition of a matrix svdvals - Singular values of a matrix diagsvd - Construct matrix of singular values from output of svd orth - Construct orthonormal basis for the range of A using svd cholesky - Cholesky decomposition of a matrix cholesky_banded - Cholesky decomp. of a sym. or Hermitian banded matrix cho_factor - Cholesky decomposition for use in solving a linear system cho_solve - Solve previously factored linear system cho_solve_banded - Solve previously factored banded linear system polar - Compute the polar decomposition. qr - QR decomposition of a matrix qr_multiply - QR decomposition and multiplication by Q qr_update - Rank k QR update qr_delete - QR downdate on row or column deletion qr_insert - QR update on row or column insertion rq - RQ decomposition of a matrix qz - QZ decomposition of a pair of matrices schur - Schur decomposition of a matrix rsf2csf - Real to complex Schur form hessenberg - Hessenberg form of a matrix .. seealso:: `scipy.linalg.interpolative` -- Interpolative matrix decompositions Matrix Functions ================ .. 
.. autosummary::
   :toctree: generated/

   expm - Matrix exponential
   logm - Matrix logarithm
   cosm - Matrix cosine
   sinm - Matrix sine
   tanm - Matrix tangent
   coshm - Matrix hyperbolic cosine
   sinhm - Matrix hyperbolic sine
   tanhm - Matrix hyperbolic tangent
   signm - Matrix sign
   sqrtm - Matrix square root
   funm - Evaluate an arbitrary matrix function
   expm_frechet - Frechet derivative of the matrix exponential
   expm_cond - Relative condition number of expm in the Frobenius norm
   fractional_matrix_power - Fractional matrix power

Matrix Equation Solvers
=======================

.. autosummary::
   :toctree: generated/

   solve_sylvester - Solve the Sylvester matrix equation
   solve_continuous_are - Solve the continuous-time algebraic Riccati equation
   solve_discrete_are - Solve the discrete-time algebraic Riccati equation
   solve_discrete_lyapunov - Solve the discrete-time Lyapunov equation
   solve_lyapunov - Solve the (continuous-time) Lyapunov equation

Special Matrices
================

.. autosummary::
   :toctree: generated/

   block_diag - Construct a block diagonal matrix from submatrices
   circulant - Circulant matrix
   companion - Companion matrix
   dft - Discrete Fourier transform matrix
   hadamard - Hadamard matrix of order 2**n
   hankel - Hankel matrix
   helmert - Helmert matrix
   hilbert - Hilbert matrix
   invhilbert - Inverse Hilbert matrix
   leslie - Leslie matrix
   pascal - Pascal matrix
   invpascal - Inverse Pascal matrix
   toeplitz - Toeplitz matrix
   tri - Construct a matrix filled with ones at and below a given diagonal

Low-level routines
==================

.. autosummary::
   :toctree: generated/

   get_blas_funcs
   get_lapack_funcs
   find_best_blas_type

.. seealso::

   `scipy.linalg.blas` -- Low-level BLAS functions

   `scipy.linalg.lapack` -- Low-level LAPACK functions

   `scipy.linalg.cython_blas` -- Low-level BLAS functions for Cython

   `scipy.linalg.cython_lapack` -- Low-level LAPACK functions for Cython
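A brief sketch of the basic solvers and decompositions listed above::

    import numpy as np
    from scipy import linalg

    A = np.array([[3., 2.], [1., 4.]])
    b = np.array([5., 6.])
    x = linalg.solve(A, b)       # solve A x = b
    np.allclose(A.dot(x), b)     # True
    linalg.det(A)                # 10.0
    P, L, U = linalg.lu(A)       # LU decomposition, A == P.dot(L).dot(U)
"""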
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <EMAIL>, 2012-2013 # Copyright (c), NAME <EMAIL>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
""" Statistical Functions ===================== This module contains a large number of probability distributions as well as a growing library of statistical functions. Each included distribution is an instance of the class rv_continous. For each given name the following methods are available. See docstring for rv_continuous for more information :rvs: random variates with the distribution :pdf: probability density function :cdf: cumulative distribution function :sf: survival function (1.0 - cdf) :ppf: percent-point function (inverse of cdf) :isf: inverse survival function :stats: mean, variance, and optionally skew and kurtosis Calling the instance as a function returns a frozen pdf whose shape, location, and scale parameters are fixed. Distributions --------------- The distributions available with the above methods are: Continuous (Total == 81 distributions) --------------------------------------- .. autosummary:: :toctree: generated/ norm Normal (Gaussian) alpha Alpha anglit Anglit arcsine Arcsine beta Beta betaprime Beta Prime bradford Bradford burr Burr cauchy Cauchy chi Chi chi2 Chi-squared cosine Cosine dgamma Double Gamma dweibull Double Weibull erlang Erlang expon Exponential exponweib Exponentiated Weibull exponpow Exponential Power f F (Snecdor F) fatiguelife Fatigue Life (Birnbaum-Sanders) fisk Fisk foldcauchy Folded Cauchy foldnorm Folded Normal frechet_r Frechet Right Sided, Extreme Value Type II (Extreme LB) or weibull_min frechet_l Frechet Left Sided, Weibull_max genlogistic Generalized Logistic genpareto Generalized Pareto genexpon Generalized Exponential genextreme Generalized Extreme Value gausshyper Gauss Hypergeometric gamma Gamma gengamma Generalized gamma genhalflogistic Generalized Half Logistic gompertz Gompertz (Truncated Gumbel) gumbel_r Right Sided Gumbel, Log-Weibull, Fisher-Tippett, Extreme Value Type I gumbel_l Left Sided Gumbel, etc. 
   halfcauchy        Half Cauchy
   halflogistic      Half Logistic
   halfnorm          Half Normal
   hypsecant         Hyperbolic Secant
   invgamma          Inverse Gamma
   invnorm           Inverse Normal
   invgauss          Inverse Gaussian
   invweibull        Inverse Weibull
   johnsonsb         Johnson SB
   johnsonsu         Johnson SU
   ksone             Kolmogorov-Smirnov one-sided (no stats)
   kstwobign         Kolmogorov-Smirnov two-sided test for Large N (no stats)
   laplace           Laplace
   logistic          Logistic
   loggamma          Log-Gamma
   loglaplace        Log-Laplace (Log Double Exponential)
   lognorm           Log-Normal
   gilbrat           Gilbrat
   lomax             Lomax (Pareto of the second kind)
   maxwell           Maxwell
   mielke            Mielke's Beta-Kappa
   nakagami          Nakagami
   ncx2              Non-central chi-squared
   ncf               Non-central F
   nct               Non-central Student's T
   pareto            Pareto
   powerlaw          Power-function
   powerlognorm      Power log normal
   powernorm         Power normal
   rdist             R distribution
   reciprocal        Reciprocal
   rayleigh          Rayleigh
   rice              Rice
   recipinvgauss     Reciprocal Inverse Gaussian
   semicircular      Semicircular
   t                 Student's T
   triang            Triangular
   truncexpon        Truncated Exponential
   truncnorm         Truncated Normal
   tukeylambda       Tukey-Lambda
   uniform           Uniform
   vonmises          Von-Mises (Circular)
   wald              Wald
   weibull_min       Minimum Weibull (see Frechet)
   weibull_max       Maximum Weibull (see Frechet)
   wrapcauchy        Wrapped Cauchy

Discrete (Total == 12 distributions)
------------------------------------

   binom             Binomial
   bernoulli         Bernoulli
   nbinom            Negative Binomial
   geom              Geometric
   hypergeom         Hypergeometric
   logser            Logarithmic (Log-Series, Series)
   poisson           Poisson
   planck            Planck (Discrete Exponential)
   boltzmann         Boltzmann (Truncated Discrete Exponential)
   randint           Discrete Uniform
   zipf              Zipf
   dlaplace          Discrete Laplacian

Statistical Functions (adapted from NAME)
-----------------------------------------

   gmean             Geometric mean
   hmean             Harmonic mean
   mean              Arithmetic mean
   cmedian           Computed median
   median            Median
   mode              Modal value
   tmean             Truncated arithmetic mean
   tvar              Truncated variance
   tmin              _
   tmax              _
   tstd              _
   tsem              _
   moment            Central moment
   variation         Coefficient of variation
   skew              Skewness
   kurtosis          Fisher or Pearson kurtosis
   describe          Descriptive statistics
   skewtest          _
   kurtosistest      _
   normaltest        _

   itemfreq          _
   scoreatpercentile _
   percentileofscore _
   histogram2        _
   histogram         _
   cumfreq           _
   relfreq           _

   obrientransform   _
   signaltonoise     _
   bayes_mvs         _
   sem               _
   zmap              _

   threshold         _
   trimboth          _
   trim1             _

   f_oneway          _
   paired            _
   pearsonr          _
   spearmanr         _
   pointbiserialr    _
   kendalltau        _
   linregress        _

   ttest_1samp       _
   ttest_ind         _
   ttest_rel         _
   kstest            _
   chisquare         _
   ks_2samp          _
   mannwhitneyu      _
   tiecorrect        _
   ranksums          _
   wilcoxon          _
   kruskal           _
   friedmanchisquare _
   ansari            _
   bartlett          _
   levene            _
   shapiro           _
   anderson          _
   binom_test        _
   fligner           _
   mood              _
   oneway            _

   glm               _

Plot-tests
----------

   probplot          _
   ppcc_max          _
   ppcc_plot         _

For many more statistics-related functions, install the software R and the
interface package rpy.
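A short sketch of the per-distribution methods described above::

    from scipy import stats

    stats.norm.pdf(0.0)                    # standard normal density at 0
    stats.norm.cdf(1.96)                   # ~0.975
    frozen = stats.norm(loc=5., scale=2.)  # frozen pdf, parameters fixed
    frozen.rvs(size=10)                    # ten random variates
    frozen.stats()                         # mean and variance: (5.0, 4.0)
"""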
""" ======== Glossary ======== .. glossary:: along an axis Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). Many operation can take place along one of these axes. For example, we can sum each row of an array, in which case we operate along columns, or axis 1:: >>> x = np.arange(12).reshape((3,4)) >>> x array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.sum(axis=1) array([ 6, 22, 38]) array A homogeneous container of numerical elements. Each element in the array occupies a fixed amount of memory (hence homogeneous), and can be a numerical element of a single type (such as float, int or complex) or a combination (such as ``(float, int, float)``). Each array has an associated data-type (or ``dtype``), which describes the numerical type of its elements:: >>> x = np.array([1, 2, 3], float) >>> x array([ 1., 2., 3.]) >>> x.dtype # floating point number, 64 bits of memory per element dtype('float64') # More complicated data type: each array element is a combination of # and integer and a floating point number >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')]) Fast element-wise operations, called `ufuncs`_, operate on arrays. array_like Any sequence that can be interpreted as an ndarray. This includes nested lists, tuples, scalars and existing arrays. attribute A property of an object that can be accessed using ``obj.attribute``, e.g., ``shape`` is an attribute of an array:: >>> x = np.array([1, 2, 3]) >>> x.shape (3,) BLAS `Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_ broadcast NumPy can do operations on arrays whose shapes are mismatched:: >>> x = np.array([1, 2]) >>> y = np.array([[3], [4]]) >>> x array([1, 2]) >>> y array([[3], [4]]) >>> x + y array([[4, 5], [5, 6]]) See `doc.broadcasting`_ for more information. C order See `row-major` column-major A way to represent items in a N-dimensional array in the 1-dimensional computer memory. In column-major order, the leftmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the column-major order as:: [1, 4, 2, 5, 3, 6] Column-major order is also known as the Fortran order, as the Fortran programming language uses it. decorator An operator that transforms a function. For example, a ``log`` decorator may be defined to print debugging information upon function execution:: >>> def log(f): ... def new_logging_func(*args, **kwargs): ... print "Logging call with parameters:", args, kwargs ... return f(*args, **kwargs) ... ... return new_logging_func Now, when we define a function, we can "decorate" it using ``log``:: >>> @log ... def add(a, b): ... return a + b Calling ``add`` then yields: >>> add(1, 2) Logging call with parameters: (1, 2) {} 3 dictionary Resembling a language dictionary, which provides a mapping between words and descriptions thereof, a Python dictionary is a mapping between two objects:: >>> x = {1: 'one', 'two': [1, 2]} Here, `x` is a dictionary mapping keys to values, in this case the integer 1 to the string "one", and the string "two" to the list ``[1, 2]``. The values may be accessed using their corresponding keys:: >>> x[1] 'one' >>> x['two'] [1, 2] Note that dictionaries are not stored in any specific order. 
      Also, most mutable (see *immutable* below) objects, such as lists, may
      not be used as keys.

      For more information on dictionaries, read the
      `Python tutorial <http://docs.python.org/tut>`_.

   Fortran order
      See `column-major`

   flattened
      Collapsed to a one-dimensional array. See `ndarray.flatten`_ for
      details.

   immutable
      An object that cannot be modified after execution is called immutable.
      Two common examples are strings and tuples.

   instance
      A class definition gives the blueprint for constructing an object::

        >>> class House(object):
        ...     wall_colour = 'white'

      Yet, we have to *build* a house before it exists::

        >>> h = House() # build a house

      Now, ``h`` is called a ``House`` instance. An instance is therefore a
      specific realisation of a class.

   iterable
      A sequence that allows "walking" (iterating) over items, typically
      using a loop such as::

        >>> x = [1, 2, 3]
        >>> [item**2 for item in x]
        [1, 4, 9]

      It is often used in combination with ``enumerate``::

        >>> keys = ['a','b','c']
        >>> for n, k in enumerate(keys):
        ...     print "Key %d: %s" % (n, k)
        ...
        Key 0: a
        Key 1: b
        Key 2: c

   list
      A Python container that can hold any number of objects or items. The
      items do not have to be of the same type, and can even be lists
      themselves::

        >>> x = [2, 2.0, "two", [2, 2.0]]

      The list `x` contains 4 items, each of which can be accessed
      individually::

        >>> x[2] # the string 'two'
        'two'

        >>> x[3] # a list, containing an integer 2 and a float 2.0
        [2, 2.0]

      It is also possible to select more than one item at a time, using
      *slicing*::

        >>> x[0:2] # or, equivalently, x[:2]
        [2, 2.0]

      In code, arrays are often conveniently expressed as nested lists::

        >>> np.array([[1, 2], [3, 4]])
        array([[1, 2],
               [3, 4]])

      For more information, read the section on lists in the
      `Python tutorial <http://docs.python.org/tut>`_. For a mapping type
      (key-value), see *dictionary*.

   mask
      A boolean array, used to select only certain elements for an
      operation::

        >>> x = np.arange(5)
        >>> x
        array([0, 1, 2, 3, 4])

        >>> mask = (x > 2)
        >>> mask
        array([False, False, False,  True,  True], dtype=bool)

        >>> x[mask] = -1
        >>> x
        array([ 0,  1,  2, -1, -1])

   masked array
      Array that suppresses values indicated by a mask::

        >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
        >>> x
        masked_array(data = [-- 2.0 --],
                     mask = [ True False  True],
               fill_value = 1e+20)
        <BLANKLINE>

        >>> x + [1, 2, 3]
        masked_array(data = [-- 4.0 --],
                     mask = [ True False  True],
               fill_value = 1e+20)
        <BLANKLINE>

      Masked arrays are often used when operating on arrays containing
      missing or invalid entries.

   matrix
      A 2-dimensional ndarray that preserves its two-dimensional nature
      throughout operations. It has certain special operations, such as
      ``*`` (matrix multiplication) and ``**`` (matrix power), defined::

        >>> x = np.mat([[1, 2], [3, 4]])
        >>> x
        matrix([[1, 2],
                [3, 4]])

        >>> x**2
        matrix([[ 7, 10],
                [15, 22]])

   method
      A function associated with an object. For example, each ndarray has a
      method called ``repeat``::

        >>> x = np.array([1, 2, 3])
        >>> x.repeat(2)
        array([1, 1, 2, 2, 3, 3])

   ndarray
      See *array*.

   reference
      If ``a`` is a reference to ``b``, then ``(a is b) == True``.
      Therefore, ``a`` and ``b`` are different names for the same Python
      object.

   row-major
      A way to represent items in an N-dimensional array in the
      1-dimensional computer memory. In row-major order, the rightmost index
      "varies the fastest": for example the array::

           [[1, 2, 3],
            [4, 5, 6]]

      is represented in the row-major order as::

           [1, 2, 3, 4, 5, 6]

      Row-major order is also known as the C order, as the C programming
      language uses it.
      New NumPy arrays are by default in row-major order.

   self
      Often seen in method signatures, ``self`` refers to the instance of
      the associated class. For example:

        >>> class Paintbrush(object):
        ...     color = 'blue'
        ...
        ...     def paint(self):
        ...         print "Painting the city %s!" % self.color
        ...
        >>> p = Paintbrush()
        >>> p.color = 'red'
        >>> p.paint() # self refers to 'p'
        Painting the city red!

   slice
      Used to select only certain elements from a sequence::

        >>> x = range(5)
        >>> x
        [0, 1, 2, 3, 4]

        >>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
        [1, 2]

        >>> x[1:5:2] # slice from 1 to 5, but skipping every second element
        [1, 3]

        >>> x[::-1] # slice a sequence in reverse
        [4, 3, 2, 1, 0]

      Arrays may have more than one dimension, each of which can be sliced
      individually::

        >>> x = np.array([[1, 2], [3, 4]])
        >>> x
        array([[1, 2],
               [3, 4]])

        >>> x[:, 1]
        array([2, 4])

   tuple
      A sequence that may contain a variable number of types of any kind. A
      tuple is immutable, i.e., once constructed it cannot be changed.
      Similar to a list, it can be indexed and sliced::

        >>> x = (1, 'one', [1, 2])
        >>> x
        (1, 'one', [1, 2])

        >>> x[0]
        1

        >>> x[:2]
        (1, 'one')

      A useful concept is "tuple unpacking", which allows variables to be
      assigned to the contents of a tuple::

        >>> x, y = (1, 2)
        >>> x, y = 1, 2

      This is often used when a function returns multiple values:

        >>> def return_many():
        ...     return 1, 'alpha', None

        >>> a, b, c = return_many()
        >>> a, b, c
        (1, 'alpha', None)

        >>> a
        1
        >>> b
        'alpha'

   ufunc
      Universal function. A fast element-wise array operation. Examples
      include ``add``, ``sin`` and ``logical_or``.

   view
      An array that does not own its data, but refers to another array's
      data instead. For example, we may create a view that only shows every
      second element of another array::

        >>> x = np.arange(5)
        >>> x
        array([0, 1, 2, 3, 4])

        >>> y = x[::2]
        >>> y
        array([0, 2, 4])

        >>> x[0] = 3 # changing x changes y as well, since y is a view on x
        >>> y
        array([3, 2, 4])

   wrapper
      Python is a high-level (highly abstracted, or English-like) language.
      This abstraction comes at a price in execution speed, and sometimes it
      becomes necessary to use lower-level languages to do fast
      computations. A wrapper is code that provides a bridge between
      high-level and low-level languages, allowing, e.g., Python to execute
      code written in C or Fortran.

      Examples include ctypes, SWIG and Cython (which wraps C and C++) and
      f2py (which wraps Fortran).
"""
""" ============== Array Creation ============== Introduction ============ There are 5 general mechanisms for creating arrays: 1) Conversion from other Python structures (e.g., lists, tuples) 2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, etc.) 3) Reading arrays from disk, either from standard or custom formats 4) Creating arrays from raw bytes through the use of strings or buffers 5) Use of special library functions (e.g., random) This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or record arrays. Both of those are covered in their own sections. Converting Python array_like Objects to Numpy Arrays ==================================================== In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: :: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Intrinsic Numpy Array Creation ============================== Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. ``>>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]])`` ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: :: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: :: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description: :: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. Reading Arrays From Disk ======================== This is presumably the most common case of large array creation. 
The details, of course, depend greatly on the format of data on disk and so
this section can only give general pointers on how to handle various
formats.

Standard Binary Formats
-----------------------

Various fields have standard formats for array data. The following lists the
ones with known python libraries to read them and return numpy arrays (there
may be others for which it is possible to read and convert to numpy arrays
so check the last section as well) ::

 HDF5: PyTables
 FITS: PyFITS

Examples of formats that cannot be read directly but for which it is not
hard to convert are those formats supported by libraries like PIL (able to
read and write many image formats such as jpg, png, etc).

Common ASCII Formats
--------------------

Comma Separated Value files (CSV) are widely used (and an export and import
option for programs like Excel). There are a number of ways of reading these
files in Python. There are CSV functions in Python and functions in pylab
(part of matplotlib).

More generic ASCII files can be read using the io package in scipy.

Custom Binary Formats
---------------------

There are a variety of approaches one can use. If the file has a relatively
simple format then one can write a simple I/O library and use the numpy
fromfile() function and .tofile() method to read and write numpy arrays
directly (mind your byteorder though!). If a good C or C++ library exists
that reads the data, one can wrap that library with a variety of techniques,
though that certainly is much more work and requires significantly more
advanced knowledge to interface with C or C++.

Use of Special Libraries
------------------------

There are libraries that can be used to generate arrays for special purposes
and it isn't possible to enumerate all of them. The most common uses are use
of the many array generation functions in random that can generate arrays of
random values, and some utility functions to generate special matrices (e.g.
diagonal).
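As a small sketch of the fromfile()/tofile() round trip mentioned above (the
file name is hypothetical)::

 >>> a = np.arange(10, dtype=np.float64)
 >>> a.tofile('data.bin')                           # raw bytes, no header
 >>> b = np.fromfile('data.bin', dtype=np.float64)  # dtype must match the writer
 >>> np.array_equal(a, b)
 True
"""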
# -*- coding: utf-8 -*-

# -- Dual Licence ----------------------------------------------------------

############################################################################
# GPL License
#
# This file is a SCons (http://www.scons.org/) builder
# Copyright (c) 2012-14, NAME <EMAIL>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
############################################################################

# --------------------------------------------------------------------------

############################################################################
# BSD 3-Clause License
#
# This file is a SCons (http://www.scons.org/) builder
# Copyright (c) 2012-14, NAME <EMAIL>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1. Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
#    contributors may be used to endorse or promote products derived from
#    this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
# PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
# TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
############################################################################

# The Unpack Builder can be used for unpacking archives (eg Zip, TGZ, BZ, ...).
# The emitter of the Builder reads the archive data and creates the returned
# file list; the builder extracts the archive.
# The environment stores a dictionary "UNPACK" for configuring the different
# extractors (subdict "EXTRACTOR"):
# {
#   PRIORITY      => a value for setting the extractor order (lower numbers =
#                    extractor is used earlier)
#   SUFFIX        => defines a list with file suffixes which should be handled
#                    with this extractor
#   EXTRACTSUFFIX => suffix of the extract command
#   EXTRACTFLAGS  => a string parameter for the RUN command for extracting
#                    the data
#   EXTRACTCMD    => full extract command of the builder
#   RUN           => the main program which will be started (if the parameter
#                    is empty, the extractor will be ignored)
#   LISTCMD       => the listing command for the emitter
#   LISTFLAGS     => the string options for the RUN command for showing a
#                    list of files
#   LISTSUFFIX    => suffix of the list command
#   LISTEXTRACTOR => an optional Python function that is called on each output
#                    line of the LISTCMD for extracting file & dir names; the
#                    function needs two parameters (first: line number,
#                    second: line content) and must return a string with the
#                    file / dir path (other value types will be ignored)
# }
# Other options in the UNPACK dictionary are:
#   STOPONEMPTYFILE  => bool variable for stopping if the file has empty size
#                       (default True)
#   VIWEXTRACTOUTPUT => shows the output messages of the extraction command
#                       (default False)
#   EXTRACTDIR       => path in which the data will be extracted (default #)
#
# A file is handled by the first extractor whose suffix matches; the extractor
# list can be appended for other file types. The order of the extractor
# dictionary determines the listing & extraction command, e.g. the file
# extension .tar.gz should come before .gz, because the .tar.gz is extracted
# in one shot.
#
# Under *nix systems these tools are supported: tar, bzip2, gzip, unzip.
# Under Windows only 7-Zip (http://www.7-zip.org/) is supported.
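#
# A hypothetical SConstruct sketch (the tool registration name 'unpack' and
# the archive name are assumptions for illustration):
#
#   env = Environment(tools=['default', 'unpack'])
#   files = env.Unpack('extracted', 'archive.tar.gz')
#   # the emitter returns the archive's file list; the action extracts it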
""" A Geosoft View (:class:`geosoft.gxpy.view.View` or :class:`geosoft.gxpy.view.View_3d`) contains graphical elements as `Group` instances. Groups are named and are available to a user in a Geosoft viewer, which allows groups to be turned on or off, modify the transparency, or be deleted. 2D views can only accept 2D groups, while a 3D view can accept both 2D and 3D groups. When a 2D group is placed in a 3D view, the group is placed on a the active plane inside the 3D view :Classes: :`Group`: base class for named rendering groups in 2D and 3D views. :`Draw`: 2D drawing group, handles 2D drawing to a view or plane in a 3D view :`Draw_3d`: 3D grawing group for 3D objects placed in a 3d view :`Color_symbols_group`: group for 2D symbols rendered based on data values :`Aggregate_group`: group that contains a :class:`geosoft.gxpy.agg.Aggregate_image` instance :`Color`: colour definition :`Color_map`: maps values to colors :`Pen`: pen definition, includes line colour, thickness and pattern, and fill. :`Text_def`: defined text characteristics :`VoxDisplayGroup`: a 'geosoft.gxpy.vox.VoxDisplay` in a `geosoft.gxpy.view.View_3d` :Constants: :GROUP_NAME_SIZE: `geosoft.gxpy.view.VIEW_NAME_SIZE` :NEW: `geosoft.gxapi.MVIEW_GROUP_NEW` :APPEND: `geosoft.gxapi.MVIEW_GROUP_APPEND` :READ_ONLY: max(NEW, APPEND) + 1 :REPLACE: READ_ONLY + 1 :SMOOTH_NONE: `geosoft.gxapi.MVIEW_SMOOTH_NEAREST` :SMOOTH_CUBIC: `geosoft.gxapi.MVIEW_SMOOTH_CUBIC` :SMOOTH_AKIMA: `geosoft.gxapi.MVIEW_SMOOTH_AKIMA` :TILE_RECTANGULAR: `geosoft.gxapi.MVIEW_TILE_RECTANGULAR` :TILE_DIAGONAL: `geosoft.gxapi.MVIEW_TILE_DIAGONAL` :TILE_TRIANGULAR: `geosoft.gxapi.MVIEW_TILE_TRIANGULAR` :TILE_RANDOM: `geosoft.gxapi.MVIEW_TILE_RANDOM` :UNIT_VIEW: 0 :UNIT_MAP: 2 :UNIT_VIEW_UNWARPED: 3 :GRATICULE_DOT: 0 :GRATICULE_LINE: 1 :GRATICULE_CROSS: 2 :LINE_STYLE_SOLID: 1 :LINE_STYLE_LONG: 2 :LINE_STYLE_DOTTED: 3 :LINE_STYLE_SHORT: 4 :LINE_STYLE_LONG_SHORT_LONG: 5 :LINE_STYLE_LONG_DOT_LONG: 6 :SYMBOL_NONE: 0 :SYMBOL_DOT: 1 :SYMBOL_PLUS: 2 :SYMBOL_X: 3 :SYMBOL_BOX: 4 :SYMBOL_TRIANGLE: 5 :SYMBOL_INVERTED_TRIANGLE: 6 :SYMBOL_HEXAGON: 7 :SYMBOL_SMALL_BOX: 8 :SYMBOL_SMALL_DIAMOND: 9 :SYMBOL_CIRCLE: 20 :SYMBOL_3D_SPHERE: 0 :SYMBOL_3D_CUBE: 1 :SYMBOL_3D_CYLINDER: 2 :SYMBOL_3D_CONE: 3 :FONT_WEIGHT_ULTRALIGHT: 1 :FONT_WEIGHT_LIGHT: 2 :FONT_WEIGHT_MEDIUM: 3 :FONT_WEIGHT_BOLD: 4 :FONT_WEIGHT_XBOLD: 5 :FONT_WEIGHT_XXBOLD: 6 :CMODEL_RGB: 0 :CMODEL_CMY: 1 :CMODEL_HSV: 2 :C_BLACK: 67108863 :C_RED: 33554687 :C_GREEN: 33619712 :C_BLUE: 50266112 :C_CYAN: 50331903 :C_MAGENTA: 50396928 :C_YELLOW: 67043328 :C_GREY: 41975936 :C_LT_RED: 54542336 :C_LT_GREEN: 54526016 :C_LT_BLUE: 50348096 :C_LT_CYAN: 50331712 :C_LT_MAGENTA: 50348032 :C_LT_YELLOW: 54525952 :C_LT_GREY: 54542400 :C_GREY10: 51910680 :C_GREY25: 54542400 :C_GREY50: 41975936 :C_WHITE: 50331648 :C_TRANSPARENT: 0 :REF_BOTTOM_LEFT: 0 :REF_BOTTOM_CENTER: 1 :REF_BOTTOM_RIGHT: 2 :REF_CENTER_LEFT: 3 :REF_CENTER: 4 :REF_CENTER_RIGHT: 5 :REF_TOP_LEFT: 6 :REF_TOP_CENTER: 7 :REF_TOP_RIGHT: 8 :GROUP_ALL: 0 :GROUP_MARKED: 1 :GROUP_VISIBLE: 2 :GROUP_AGG: 3 :GROUP_CSYMB: 4 :GROUP_VOXD: 5 :LOCATE_FIT: `geosoft.gxapi.MVIEW_RELOCATE_FIT` :LOCATE_FIT_KEEP_ASPECT: `geosoft.gxapi.MVIEW_RELOCATE_ASPECT` :LOCATE_CENTER: `geosoft.gxapi.MVIEW_RELOCATE_ASPECT_CENTER` :COLOR_BAR_RIGHT: 0 :COLOR_BAR_LEFT: 1 :COLOR_BAR_BOTTOM: 2 :COLOR_BAR_TOP: 3 :COLOR_BAR_ANNOTATE_RIGHT: 1 :COLOR_BAR_ANNOTATE_LEFT: -1 :COLOR_BAR_ANNOTATE_TOP: 1 :COLOR_BAR_ANNOTATE_BOTTOM: -1 :CYLINDER_OPEN: 0 :CYLINDER_CLOSE_START: 1 :CYLINDER_CLOSE_END: 2 :CYLINDER_CLOSE_ALL: 3 
:POINT_STYLE_DOT: 0 :POINT_STYLE_SPHERE: 1 :LINE3D_STYLE_LINE: 0 :LINE3D_STYLE_TUBE: 1 :LINE3D_STYLE_TUBE_JOINED: 2 :SURFACE_FLAT: `geosoft.gxapi.MVIEW_DRAWOBJ3D_MODE_FLAT` :SURFACE_SMOOTH: `geosoft.gxapi.MVIEW_DRAWOBJ3D_MODE_SMOOTH` .. note:: Regression tests provide usage examples: `group drawing tests <https://github.com/GeosoftInc/gxpy/blob/master/geosoft/gxpy/tests/test_group.py>`_ .. seealso:: :mod:`geosoft.gxpy.view`, :mod:`geosoft.gxpy.map` :class:`geosoft.gxapi.GXMVIEW`, :class:`geosoft.gxapi.GXMVU` """
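# A minimal sketch of drawing into a 2D group using the classes listed above.
# This follows the patterns in the gxpy regression tests, but the exact method
# and parameter names should be verified against the gxpy documentation; the
# map/view names here are illustrative:
#
#   import geosoft.gxpy.gx as gx
#   import geosoft.gxpy.map as gxmap
#   import geosoft.gxpy.view as gxview
#   import geosoft.gxpy.group as gxgroup
#
#   gxc = gx.GXpy()                              # gxpy context is required
#   with gxmap.Map.new('sketch.map', overwrite=True) as gmap:
#       with gxview.View.open(gmap, 'data') as v:
#           with gxgroup.Draw(v, 'outline') as g:    # named 2D drawing group
#               g.pen = gxgroup.Pen(line_color=gxgroup.C_RED, line_thick=0.2)
#               g.rectangle(v.extent_clip)           # outline the view extent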
""" Perform Levenberg-Marquardt least-squares minimization, based on MINPACK-1. AUTHORS The original version of this software, called LMFIT, was written in FORTRAN as part of the MINPACK-1 package by XXX. NAME converted the FORTRAN code to IDL. The information for the IDL version is: NAME NASA/GSFC Code 662, Greenbelt, MD 20770 EMAIL UPDATED VERSIONs can be found on my WEB PAGE: http://cow.physics.wisc.edu/~craigm/idl/idl.html NAME created this Python version from Craig's IDL version. NAME, University of Chicago Building 434A, Argonne National Laboratory 9700 South Cass Avenue, Argonne, IL 60439 EMAIL Updated versions can be found at http://cars.uchicago.edu/software NAME converted the Mark's Python version from Numeric to numpy NAME, University of Cambridge, Institute of Astronomy, Madingley road, CB3 0HA, Cambridge, UK EMAIL Updated versions can be found at http://code.google.com/p/astrolibpy/source/browse/trunk/ DESCRIPTION MPFIT uses the Levenberg-Marquardt technique to solve the least-squares problem. In its typical use, MPFIT will be used to fit a user-supplied function (the "model") to user-supplied data points (the "data") by adjusting a set of parameters. MPFIT is based upon MINPACK-1 (LMDIF.F) by More' and collaborators. For example, a researcher may think that a set of observed data points is best modelled with a Gaussian curve. A Gaussian curve is parameterized by its mean, standard deviation and normalization. MPFIT will, within certain constraints, find the set of parameters which best fits the data. The fit is "best" in the least-squares sense; that is, the sum of the weighted squared differences between the model and data is minimized. The Levenberg-Marquardt technique is a particular strategy for iteratively searching for the best fit. This particular implementation is drawn from MINPACK-1 (see NETLIB), and is much faster and more accurate than the version provided in the Scientific Python package in Scientific.Functions.LeastSquares. This version allows upper and lower bounding constraints to be placed on each parameter, or the parameter can be held fixed. The user-supplied Python function should return an array of weighted deviations between model and data. In a typical scientific problem the residuals should be weighted so that each deviate has a gaussian sigma of 1.0. If X represents values of the independent variable, Y represents a measurement for each value of X, and ERR represents the error in the measurements, then the deviates could be calculated as follows: DEVIATES = (Y - F(X)) / ERR where F is the analytical function representing the model. You are recommended to use the convenience functions MPFITFUN and MPFITEXPR, which are driver functions that calculate the deviates for you. If ERR are the 1-sigma uncertainties in Y, then TOTAL( DEVIATES^2 ) will be the total chi-squared value. MPFIT will minimize the chi-square value. The values of X, Y and ERR are passed through MPFIT to the user-supplied function via the FUNCTKW keyword. Simple constraints can be placed on parameter values by using the PARINFO keyword to MPFIT. See below for a description of this keyword. MPFIT does not perform more general optimization tasks. See TNMIN instead. MPFIT is customized, based on MINPACK-1, to the least-squares minimization problem. USER FUNCTION The user must define a function which returns the appropriate values as specified above. The function should return the weighted deviations between the model and the data. 
    It should also return a status flag and an optional partial derivative
    array. For applications which use finite-difference derivatives -- the
    default -- the user function should be declared in the following way:

        def myfunct(p, fjac=None, x=None, y=None, err=None):
            # Parameter values are passed in "p".
            # If fjac==None then partial derivatives should not be
            # computed. It will always be None if MPFIT is called with
            # the default flag.
            model = F(x, p)
            # Non-negative status value means MPFIT should continue,
            # negative means stop the calculation.
            status = 0
            return [status, (y - model) / err]

    See below for applications with analytical derivatives.

    The keyword parameters X, Y, and ERR in the example above are
    suggestive but not required. Any parameters can be passed to MYFUNCT by
    using the functkw keyword to MPFIT. Use MPFITFUN and MPFITEXPR if you
    need ideas on how to do that. The function *must* accept a parameter
    list, P.

    In general there are no restrictions on the number of dimensions in X,
    Y or ERR. However the deviates *must* be returned in a one-dimensional
    Numeric array of type Float.

    User functions may also indicate a fatal error condition using the
    status return described above. If status is set to a number between -15
    and -1 then MPFIT will stop the calculation and return to the caller.

ANALYTIC DERIVATIVES

    In the search for the best-fit solution, MPFIT by default calculates
    derivatives numerically via a finite difference approximation. The
    user-supplied function need not calculate the derivatives explicitly.
    However, if you desire to compute them analytically, then the
    AUTODERIVATIVE=0 keyword must be passed to MPFIT. As a practical
    matter, it is often sufficient and even faster to allow MPFIT to
    calculate the derivatives numerically, and so AUTODERIVATIVE=0 is not
    necessary.

    If AUTODERIVATIVE=0 is used then the user function must check the
    parameter FJAC, and if FJAC!=None then return the partial derivative
    array in the return list:

        def myfunct(p, fjac=None, x=None, y=None, err=None):
            # Parameter values are passed in "p".
            # If FJAC!=None then partial derivatives must be computed.
            # FJAC contains an array of len(p), where each entry
            # is 1 if that parameter is free and 0 if it is fixed.
            model = F(x, p)
            # Non-negative status value means MPFIT should continue,
            # negative means stop the calculation.
            status = 0
            if fjac is not None:
                pderiv = zeros([len(x), len(p)], Float)
                for j in range(len(p)):
                    pderiv[:, j] = FGRAD(x, p, j)
            else:
                pderiv = None
            return [status, (y - model) / err, pderiv]

    where FGRAD(x, p, i) is a user function which must compute the
    derivative of the model with respect to parameter P[i] at X. When
    finite differencing is used for computing derivatives (ie, when
    AUTODERIVATIVE=1), or when MPFIT needs only the errors but not the
    derivatives, the parameter FJAC=None.

    Derivatives should be returned in the PDERIV array. PDERIV should be an
    m x n array, where m is the number of data points and n is the number
    of parameters. dp[i,j] is the derivative at the ith point with respect
    to the jth parameter. The derivatives with respect to fixed parameters
    are ignored; zero is an appropriate value to insert for those
    derivatives. Upon input to the user function, FJAC is set to a vector
    with the same length as P, with a value of 1 for a parameter which is
    free, and a value of zero for a parameter which is fixed (and hence no
    derivative needs to be calculated).

    If the data is higher than one dimensional, then the *last* dimension
    should be the parameter dimension.
    Example: fitting a 50x50 image, "dp" should be 50x50xNPAR.

CONSTRAINING PARAMETER VALUES WITH THE PARINFO KEYWORD

    The behavior of MPFIT can be modified with respect to each parameter to
    be fitted. A parameter value can be fixed; simple boundary constraints
    can be imposed; limitations on the parameter changes can be imposed;
    properties of the automatic derivative can be modified; and parameters
    can be tied to one another.

    These properties are governed by the PARINFO structure, which is passed
    as a keyword parameter to MPFIT. PARINFO should be a list of
    dictionaries, one list entry for each parameter. Each parameter is
    associated with one element of the array, in numerical order. The
    dictionary can have the following keys (none are required, keys are
    case insensitive):

        'value' - the starting parameter value (but see the START_PARAMS
                  parameter for more information).

        'fixed' - a boolean value, whether the parameter is to be held
                  fixed or not. Fixed parameters are not varied by MPFIT,
                  but are passed on to MYFUNCT for evaluation.

        'limited' - a two-element boolean array. If the first/second
                  element is set, then the parameter is bounded on the
                  lower/upper side. A parameter can be bounded on both
                  sides. Both LIMITED and LIMITS must be given together.

        'limits' - a two-element float array. Gives the parameter limits on
                  the lower and upper sides, respectively. Zero, one or two
                  of these values can be set, depending on the values of
                  LIMITED. Both LIMITED and LIMITS must be given together.

        'parname' - a string, giving the name of the parameter. The fitting
                  code of MPFIT does not use this tag in any way. However,
                  the default iterfunct will print the parameter name if
                  available.

        'step' - the step size to be used in calculating the numerical
                  derivatives. If set to zero, then the step size is
                  computed automatically. Ignored when AUTODERIVATIVE=0.

        'mpside' - the sidedness of the finite difference when computing
                  numerical derivatives. This field can take four values:

                      0 - one-sided derivative computed automatically
                      1 - one-sided derivative (f(x+h) - f(x)  )/h
                     -1 - one-sided derivative (f(x)   - f(x-h))/h
                      2 - two-sided derivative (f(x+h) - f(x-h))/(2*h)

                  where H is the STEP parameter described above. The
                  "automatic" one-sided derivative method will choose a
                  direction for the finite difference which does not
                  violate any constraints. The other methods do not perform
                  this check. The two-sided method is in principle more
                  precise, but requires twice as many function evaluations.
                  Default: 0.

        'mpmaxstep' - the maximum change to be made in the parameter value.
                  During the fitting process, the parameter will never be
                  changed by more than this value in one iteration. A value
                  of 0 indicates no maximum. Default: 0.

        'tied' - a string expression which "ties" the parameter to other
                  free or fixed parameters. Any expression involving
                  constants and the parameter array P is permitted.
                  Example: if parameter 2 is always to be twice parameter 1
                  then use the following: parinfo(2).tied = '2 * p(1)'.
                  Since they are totally constrained, tied parameters are
                  considered to be fixed; no errors are computed for them.
                  [ NOTE: the PARNAME can't be used in expressions. ]

        'mpprint' - if set to 1, then the default iterfunct will print the
                  parameter value. If set to 0, the parameter value will
                  not be printed. This tag can be used to selectively print
                  only a few parameter values out of many. Default: 1 (all
                  parameters printed)

    Future modifications to the PARINFO structure, if any, will involve
    adding dictionary tags beginning with the two letters "MP".
    Therefore programmers are urged to avoid using tags starting with the
    same letters; otherwise they are free to include their own fields
    within the PARINFO structure, and they will be ignored.

    PARINFO Example:

        parinfo = [{'value': 0., 'fixed': 0, 'limited': [0, 0],
                    'limits': [0., 0.]} for i in range(5)]
        parinfo[0]['fixed'] = 1
        parinfo[4]['limited'][0] = 1
        parinfo[4]['limits'][0] = 50.
        values = [5.7, 2.2, 500., 1.5, 2000.]
        for i in range(5):
            parinfo[i]['value'] = values[i]

    A total of 5 parameters, with starting values of 5.7, 2.2, 500, 1.5,
    and 2000 are given. The first parameter is fixed at a value of 5.7, and
    the last parameter is constrained to be above 50.

EXAMPLE

        import mpfit
        import numpy as np

        x = np.arange(100, dtype=float) + 1.   # avoid log(0)
        p0 = [5.7, 2.2, 500., 1.5, 2000.]
        y = (p0[0] + p0[1]*x + p0[2]*x**2 + p0[3]*np.sqrt(x) +
             p0[4]*np.log(x))
        err = np.ones_like(y)
        fa = {'x': x, 'y': y, 'err': err}
        m = mpfit.mpfit(myfunct, p0, functkw=fa)
        print 'status = ', m.status
        if (m.status <= 0): print 'error message = ', m.errmsg
        print 'parameters = ', m.params

    Minimizes the sum of squares of MYFUNCT (the user function defined
    above). MYFUNCT is called with the X, Y, and ERR keyword parameters
    that are given by FUNCTKW. The results can be obtained from the
    returned object m.

THEORY OF OPERATION

    There are many specific strategies for function minimization. One very
    popular technique is to use function gradient information to realize
    the local structure of the function. Near a local minimum the function
    value can be Taylor expanded about x0 as follows:

        f(x) = f(x0) + f'(x0) . (x-x0) + (1/2) (x-x0) . f''(x0) . (x-x0)
               -----   ---------------   -------------------------------  (1)
        Order  0th     1st               2nd

    Here f'(x) is the gradient vector of f at x, and f''(x) is the Hessian
    matrix of second derivatives of f at x. The vector x is the set of
    function parameters, not the measured data vector. One can find the
    minimum of f, f(xm), using Newton's method, and arrives at the
    following linear equation:

        f''(x0) . (xm-x0) = - f'(x0)                                       (2)

    If an inverse can be found for f''(x0) then one can solve for (xm-x0),
    the step vector from the current position x0 to the new projected
    minimum. Here the problem has been linearized (ie, the gradient
    information is known to first order). f''(x0) is a symmetric n x n
    matrix, and should be positive definite.

    The Levenberg-Marquardt technique is a variation on this theme. It adds
    an additional diagonal term to the equation which may aid the
    convergence properties:

        (f''(x0) + nu I) . (xm-x0) = -f'(x0)                               (2a)

    where I is the identity matrix. When nu is large, the overall matrix is
    diagonally dominant, and the iterations follow steepest descent. When
    nu is small, the iterations are quadratically convergent.

    In principle, if f''(x0) and f'(x0) are known then xm-x0 can be
    determined. However the Hessian matrix is often difficult or impossible
    to compute. The gradient f'(x0) may be easier to compute, if even by
    finite difference techniques. So-called quasi-Newton techniques attempt
    to successively estimate f''(x0) by building up gradient information as
    the iterations proceed.

    In the least squares problem there are further simplifications which
    assist in solving eqn (2). The function to be minimized is a sum of
    squares:

        f = Sum(hi^2)                                                      (3)

    where hi is the ith residual out of m residuals as described above.
    This can be substituted back into eqn (2) after computing the
    derivatives:

        f'  = 2 Sum(hi hi')
        f'' = 2 Sum(hi' hj') + 2 Sum(hi hi'')                              (4)

    If one assumes that the parameters are already close enough to a
    minimum, then one typically finds that the second term in f'' is
    negligible [or, in any case, is too difficult to compute]. Thus,
    equation (2) can be solved, at least approximately, using only gradient
    information.

    In matrix notation, the combination of eqns (2) and (4) becomes:

        hT' . h' . dx = - hT' . h                                          (5)

    where h is the residual vector (length m), hT is its transpose, h' is
    the Jacobian matrix (dimensions n x m), and dx is (xm-x0). The user
    function supplies the residual vector h, and in some cases h' when it
    is not found by finite differences (see MPFIT_FDJAC2, which finds h and
    hT'). Even if dx is not the best absolute step to take, it does provide
    a good estimate of the best *direction*, so often a line minimization
    will occur along the dx vector direction.

    The method of solution employed by MINPACK is to form the Q . R
    factorization of h', where Q is an orthogonal matrix such that
    QT . Q = I, and R is upper right triangular. Using h' = Q . R and the
    orthogonality of Q, eqn (5) becomes

        (RT . QT) . (Q . R) . dx = - (RT . QT) . h
                     RT . R . dx = - RT . QT . h                           (6)
                          R . dx = - QT . h

    where the last statement follows because R is upper triangular. Here,
    R, QT and h are known so this is a matter of solving for dx. The
    routine MPFIT_QRFAC provides the QR factorization of h, with pivoting,
    and MPFIT_QRSOLV provides the solution for dx.

REFERENCES

    MINPACK-1, NAME, available from netlib (www.netlib.org).

    "Optimization Software Guide," NAME and NAME, SIAM, *Frontiers in
    Applied Mathematics*, Number 14.

    More', NAME, "The Levenberg-Marquardt Algorithm: Implementation and
    Theory," in *Numerical Analysis*, ed. NAME NAME, Lecture Notes in
    Mathematics 630, Springer-Verlag, 1977.

MODIFICATION HISTORY

    Translated from MINPACK-1 in FORTRAN, Apr-Jul 1998, CM

    Copyright (C) 1997-2002, NAME. This software is provided as is without
    any warranty whatsoever. Permission to use, copy, modify, and
    distribute modified or unmodified copies is granted, provided this
    copyright and disclaimer are included unchanged.

    Translated from MPFIT (NAME's IDL package) to Python, August, 2002. NAME

    Converted from Numeric to numpy (NAME, July 2008)
"""
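# A standalone numpy sketch of the step in eqn (6) above -- not MPFIT's
# internal MPFIT_QRFAC/MPFIT_QRSOLV routines, just an illustration of solving
# R . dx = -QT . h with an off-the-shelf QR factorization (nu = 0 case):
#
#   import numpy as np
#   from scipy.linalg import solve_triangular
#
#   def gauss_newton_step(jac, resid):
#       # Solve jac . dx = -resid in the least-squares sense via QR.
#       q, r = np.linalg.qr(jac)                     # jac is m x n, m >= n
#       return solve_triangular(r, -q.T @ resid)     # R . dx = -QT . h
#
#   # One step for a linear model y = a + b*x (exact in a single step):
#   x = np.array([0., 1., 2., 3.])
#   y = np.array([1.1, 2.9, 5.2, 6.8])
#   p = np.zeros(2)                                  # current (a, b)
#   jac = np.column_stack([np.ones_like(x), x])      # d(model)/d(a, b)
#   resid = (p[0] + p[1] * x) - y                    # h = model - data
#   p = p + gauss_newton_step(jac, resid)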
# Compute the two-sided one-sample Kolmogorov-Smirnov Prob(Dn <= d) where:
#     D_n = sup_x{|F_n(x) - F(x)|},
#     F_n(x) is the empirical CDF for a sample of size n {x_i: i=1,...,n},
#     F(x) is the CDF of a probability distribution.
#
# Exact methods:
# Prob(D_n >= d) can be computed via a matrix algorithm of Durbin[1]
# or a recursion algorithm due to Pomeranz[2].
# NAME, Tsang & Wang[3] gave a computationally efficient way to perform
# the Durbin algorithm.
# D_n >= d <==> D_n+ >= d or D_n- >= d (the one-sided K-S statistics), hence
#     Prob(D_n >= d) = 2*Prob(D_n+ >= d) - Prob(D_n+ >= d and D_n- >= d).
# For d > 0.5, the latter intersection probability is 0.
#
# Approximate methods:
# For d close to 0.5, ignoring that intersection term may still give a
# reasonable approximation.
# NAME[4] and Korolyuk[5] gave an asymptotic formula extending
# Kolmogorov's initial asymptotic, suitable for large d. (See
# scipy.special.kolmogorov for that asymptotic.)
# Pelz-Good[6] used the functional equation for Jacobi theta functions to
# transform the Li-Chien/Korolyuk formula into a computational formula
# suitable for small d.
#
# NAME and NAME[7] provided an algorithm to decide when to use each of
# the above approaches, and that algorithm is what is used here.
#
# Other approaches:
# Carvalho[8] optimizes Durbin's matrix algorithm for large values of d.
# NAME and NAME[9] use FFTs to compute the convolutions.
#
# References:
# [1] NAME (1968).
#     "The Probability that the Sample Distribution Function Lies Between
#     Two Parallel Straight Lines."
#     Annals of Mathematical Statistics, 39, 398-411.
# [2] NAME (1974).
#     "Exact Cumulative Distribution of the Kolmogorov-Smirnov Statistic for
#     Small Samples (Algorithm 487)."
#     Communications of the ACM, 17(12), 703-704.
# [3] NAME, NAME & NAME (2003).
#     "Evaluating Kolmogorov's Distribution."
#     Journal of Statistical Software, 8(18), 1-4.
# [4] NAME (1956).
#     "On the exact distribution of the statistics of A. N. Kolmogorov and
#     their asymptotic expansion."
#     NAME 6, 55-81.
# [5] NAME (1960).
#     "Asymptotic analysis of the distribution of the maximum deviation in
#     the Bernoulli scheme."
#     Theor. Probability Appl., 4, 339-366.
# [6] NAME & NAME (1976).
#     "Approximating the Lower Tail-areas of the Kolmogorov-Smirnov
#     One-sample Statistic."
#     Journal of the Royal Statistical Society, Series B, 38(2), 152-156.
# [7] NAME, R., NAME (2011).
#     "Computing the Two-Sided Kolmogorov-Smirnov Distribution."
#     Journal of Statistical Software, Vol 39, 11, 1-18.
# [8] NAME (2015).
#     "An Improved Evaluation of Kolmogorov's Distribution."
#     Journal of Statistical Software, Code Snippets; Vol 65(3), 1-8.
# [9] Amit NAME, NAME (2017).
#     "Fast calculation of boundary crossing probabilities for Poisson
#     processes."
#     Statistics & Probability Letters, Vol 123, 177-182.
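# A small scipy sketch of the identity used above: for d > 0.5 the
# intersection term vanishes, so the two-sided survival probability is
# exactly twice the one-sided one (scipy.special.smirnov is the one-sided
# survival function, scipy.special.kolmogorov the large-n asymptotic):
#
#   import numpy as np
#   from scipy import special
#
#   n, d = 10, 0.6
#   p_two_sided = 2 * special.smirnov(n, d)              # exact for d > 0.5
#   p_asymptotic = special.kolmogorov(d * np.sqrt(n))    # large-n limit
#   print(p_two_sided, p_asymptotic)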
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl  Created
# 1999-01-15 fl  Changed dateTime to use localtime
# 1999-01-16 fl  Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl  Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl  Fixed dateTime constructor, etc.
# 1999-02-02 fl  Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl  Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl  Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl  Changed boolean to check the truth value of its argument
# 2001-02-24 fl  Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl  Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl  Make sure response tuple is a singleton
# 2001-03-29 fl  Don't require empty params element (from NAME)
# 2001-06-10 fl  Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl  Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl  Allow Transport subclass to override getparser
# 2001-09-10 fl  Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl  Remove containers from memo cache when done with them
# 2001-10-01 fl  Use faster escape method (80% dumps speedup)
# 2001-10-02 fl  More dumps microtuning
# 2001-10-04 fl  Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm  Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm  Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl  Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl  Avoid buffered read when possible (from NAME)
# 2002-04-07 fl  Added pythondoc comments
# 2002-04-16 fl  Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl  Added error constants (from NAME)
# 2002-06-27 fl  Merged with Python CVS version
# 2002-10-22 fl  Added basic authentication (based on code from NAME)
# 2003-01-22 sm  Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm  Use cStringIO if available
# 2003-04-25 ak  Add support for nil
# 2003-06-15 gn  Add support for time.struct_time
# 2003-07-12 gp  Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME (EMAIL, http://www.pythonware.com)
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
#!/usr/bin/env python2 # -*- coding: utf-8 -*- # GNU GENERAL PUBLIC LICENSE # Version 3, 29 June 2007 # # Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> # Everyone is permitted to copy and distribute verbatim copies # of this license document, but changing it is not allowed. # # Preamble # # The GNU General Public License is a free, copyleft license for # software and other kinds of works. # # The licenses for most software and other practical works are designed # to take away your freedom to share and change the works. By contrast, # the GNU General Public License is intended to guarantee your freedom to # share and change all versions of a program--to make sure it remains free # software for all its users. We, the Free Software Foundation, use the # GNU General Public License for most of our software; it applies also to # any other work released this way by its authors. You can apply it to # your programs, too. # # When we speak of free software, we are referring to freedom, not # price. Our General Public Licenses are designed to make sure that you # have the freedom to distribute copies of free software (and charge for # them if you wish), that you receive source code or can get it if you # want it, that you can change the software or use pieces of it in new # free programs, and that you know you can do these things. # # To protect your rights, we need to prevent others from denying you # these rights or asking you to surrender the rights. Therefore, you have # certain responsibilities if you distribute copies of the software, or if # you modify it: responsibilities to respect the freedom of others. # # For example, if you distribute copies of such a program, whether # gratis or for a fee, you must pass on to the recipients the same # freedoms that you received. You must make sure that they, too, receive # or can get the source code. And you must show them these terms so they # know their rights. # # Developers that use the GNU GPL protect your rights with two steps: # (1) assert copyright on the software, and (2) offer you this License # giving you legal permission to copy, distribute and/or modify it. # # For the developers' and authors' protection, the GPL clearly explains # that there is no warranty for this free software. For both users' and # authors' sake, the GPL requires that modified versions be marked as # changed, so that their problems will not be attributed erroneously to # authors of previous versions. # # Some devices are designed to deny users access to install or run # modified versions of the software inside them, although the manufacturer # can do so. This is fundamentally incompatible with the aim of # protecting users' freedom to change the software. The systematic # pattern of such abuse occurs in the area of products for individuals to # use, which is precisely where it is most unacceptable. Therefore, we # have designed this version of the GPL to prohibit the practice for those # products. If such problems arise substantially in other domains, we # stand ready to extend this provision to those domains in future versions # of the GPL, as needed to protect the freedom of users. # # Finally, every program is threatened constantly by software patents. # States should not allow patents to restrict development and use of # software on general-purpose computers, but in those that do, we wish to # avoid the special danger that patents applied to a free program could # make it effectively proprietary. 
To prevent this, the GPL assures that # patents cannot be used to render the program non-free. # # The precise terms and conditions for copying, distribution and # modification follow. # # TERMS AND CONDITIONS # # 0. Definitions. # # "This License" refers to version 3 of the GNU General Public License. # # "Copyright" also means copyright-like laws that apply to other kinds of # works, such as semiconductor masks. # # "The Program" refers to any copyrightable work licensed under this # License. Each licensee is addressed as "you". "Licensees" and # "recipients" may be individuals or organizations. # # To "modify" a work means to copy from or adapt all or part of the work # in a fashion requiring copyright permission, other than the making of an # exact copy. The resulting work is called a "modified version" of the # earlier work or a work "based on" the earlier work. # # A "covered work" means either the unmodified Program or a work based # on the Program. # # To "propagate" a work means to do anything with it that, without # permission, would make you directly or secondarily liable for # infringement under applicable copyright law, except executing it on a # computer or modifying a private copy. Propagation includes copying, # distribution (with or without modification), making available to the # public, and in some countries other activities as well. # # To "convey" a work means any kind of propagation that enables other # parties to make or receive copies. Mere interaction with a user through # a computer network, with no transfer of a copy, is not conveying. # # An interactive user interface displays "Appropriate Legal Notices" # to the extent that it includes a convenient and prominently visible # feature that (1) displays an appropriate copyright notice, and (2) # tells the user that there is no warranty for the work (except to the # extent that warranties are provided), that licensees may convey the # work under this License, and how to view a copy of this License. If # the interface presents a list of user commands or options, such as a # menu, a prominent item in the list meets this criterion. # # 1. Source Code. # # The "source code" for a work means the preferred form of the work # for making modifications to it. "Object code" means any non-source # form of a work. # # A "Standard Interface" means an interface that either is an official # standard defined by a recognized standards body, or, in the case of # interfaces specified for a particular programming language, one that # is widely used among developers working in that language. # # The "System Libraries" of an executable work include anything, other # than the work as a whole, that (a) is included in the normal form of # packaging a Major Component, but which is not part of that Major # Component, and (b) serves only to enable use of the work with that # Major Component, or to implement a Standard Interface for which an # implementation is available to the public in source code form. A # "Major Component", in this context, means a major essential component # (kernel, window system, and so on) of the specific operating system # (if any) on which the executable work runs, or a compiler used to # produce the work, or an object code interpreter used to run it. # # The "Corresponding Source" for a work in object code form means all # the source code needed to generate, install, and (for an executable # work) run the object code and to modify the work, including scripts to # control those activities. 
However, it does not include the work's # System Libraries, or general-purpose tools or generally available free # programs which are used unmodified in performing those activities but # which are not part of the work. For example, Corresponding Source # includes interface definition files associated with source files for # the work, and the source code for shared libraries and dynamically # linked subprograms that the work is specifically designed to require, # such as by intimate data communication or control flow between those # subprograms and other parts of the work. # # The Corresponding Source need not include anything that users # can regenerate automatically from other parts of the Corresponding # Source. # # The Corresponding Source for a work in source code form is that # same work. # # 2. Basic Permissions. # # All rights granted under this License are granted for the term of # copyright on the Program, and are irrevocable provided the stated # conditions are met. This License explicitly affirms your unlimited # permission to run the unmodified Program. The output from running a # covered work is covered by this License only if the output, given its # content, constitutes a covered work. This License acknowledges your # rights of fair use or other equivalent, as provided by copyright law. # # You may make, run and propagate covered works that you do not # convey, without conditions so long as your license otherwise remains # in force. You may convey covered works to others for the sole purpose # of having them make modifications exclusively for you, or provide you # with facilities for running those works, provided that you comply with # the terms of this License in conveying all material for which you do # not control copyright. Those thus making or running the covered works # for you must do so exclusively on your behalf, under your direction # and control, on terms that prohibit them from making any copies of # your copyrighted material outside their relationship with you. # # Conveying under any other circumstances is permitted solely under # the conditions stated below. Sublicensing is not allowed; section 10 # makes it unnecessary. # # 3. Protecting Users' Legal Rights From Anti-Circumvention Law. # # No covered work shall be deemed part of an effective technological # measure under any applicable law fulfilling obligations under article # 11 of the WIPO copyright treaty adopted on 20 December 1996, or # similar laws prohibiting or restricting circumvention of such # measures. # # When you convey a covered work, you waive any legal power to forbid # circumvention of technological measures to the extent such circumvention # is effected by exercising rights under this License with respect to # the covered work, and you disclaim any intention to limit operation or # modification of the work as a means of enforcing, against the work's # users, your or third parties' legal rights to forbid circumvention of # technological measures. # # 4. Conveying Verbatim Copies. # # You may convey verbatim copies of the Program's source code as you # receive it, in any medium, provided that you conspicuously and # appropriately publish on each copy an appropriate copyright notice; # keep intact all notices stating that this License and any # non-permissive terms added in accord with section 7 apply to the code; # keep intact all notices of the absence of any warranty; and give all # recipients a copy of this License along with the Program. 
# # You may charge any price or no price for each copy that you convey, # and you may offer support or warranty protection for a fee. # # 5. Conveying Modified Source Versions. # # You may convey a work based on the Program, or the modifications to # produce it from the Program, in the form of source code under the # terms of section 4, provided that you also meet all of these conditions: # # a) The work must carry prominent notices stating that you modified # it, and giving a relevant date. # # b) The work must carry prominent notices stating that it is # released under this License and any conditions added under section # 7. This requirement modifies the requirement in section 4 to # "keep intact all notices". # # c) You must license the entire work, as a whole, under this # License to anyone who comes into possession of a copy. This # License will therefore apply, along with any applicable section 7 # additional terms, to the whole of the work, and all its parts, # regardless of how they are packaged. This License gives no # permission to license the work in any other way, but it does not # invalidate such permission if you have separately received it. # # d) If the work has interactive user interfaces, each must display # Appropriate Legal Notices; however, if the Program has interactive # interfaces that do not display Appropriate Legal Notices, your # work need not make them do so. # # A compilation of a covered work with other separate and independent # works, which are not by their nature extensions of the covered work, # and which are not combined with it such as to form a larger program, # in or on a volume of a storage or distribution medium, is called an # "aggregate" if the compilation and its resulting copyright are not # used to limit the access or legal rights of the compilation's users # beyond what the individual works permit. Inclusion of a covered work # in an aggregate does not cause this License to apply to the other # parts of the aggregate. # # 6. Conveying Non-Source Forms. # # You may convey a covered work in object code form under the terms # of sections 4 and 5, provided that you also convey the # machine-readable Corresponding Source under the terms of this License, # in one of these ways: # # a) Convey the object code in, or embodied in, a physical product # (including a physical distribution medium), accompanied by the # Corresponding Source fixed on a durable physical medium # customarily used for software interchange. # # b) Convey the object code in, or embodied in, a physical product # (including a physical distribution medium), accompanied by a # written offer, valid for at least three years and valid for as # long as you offer spare parts or customer support for that product # model, to give anyone who possesses the object code either (1) a # copy of the Corresponding Source for all the software in the # product that is covered by this License, on a durable physical # medium customarily used for software interchange, for a price no # more than your reasonable cost of physically performing this # conveying of source, or (2) access to copy the # Corresponding Source from a network server at no charge. # # c) Convey individual copies of the object code with a copy of the # written offer to provide the Corresponding Source. This # alternative is allowed only occasionally and noncommercially, and # only if you received the object code with such an offer, in accord # with subsection 6b. 
# # d) Convey the object code by offering access from a designated # place (gratis or for a charge), and offer equivalent access to the # Corresponding Source in the same way through the same place at no # further charge. You need not require recipients to copy the # Corresponding Source along with the object code. If the place to # copy the object code is a network server, the Corresponding Source # may be on a different server (operated by you or a third party) # that supports equivalent copying facilities, provided you maintain # clear directions next to the object code saying where to find the # Corresponding Source. Regardless of what server hosts the # Corresponding Source, you remain obligated to ensure that it is # available for as long as needed to satisfy these requirements. # # e) Convey the object code using peer-to-peer transmission, provided # you inform other peers where the object code and Corresponding # Source of the work are being offered to the general public at no # charge under subsection 6d. # # A separable portion of the object code, whose source code is excluded # from the Corresponding Source as a System Library, need not be # included in conveying the object code work. # # A "User Product" is either (1) a "consumer product", which means any # tangible personal property which is normally used for personal, family, # or household purposes, or (2) anything designed or sold for incorporation # into a dwelling. In determining whether a product is a consumer product, # doubtful cases shall be resolved in favor of coverage. For a particular # product received by a particular user, "normally used" refers to a # typical or common use of that class of product, regardless of the status # of the particular user or of the way in which the particular user # actually uses, or expects or is expected to use, the product. A product # is a consumer product regardless of whether the product has substantial # commercial, industrial or non-consumer uses, unless such uses represent # the only significant mode of use of the product. # # "Installation Information" for a User Product means any methods, # procedures, authorization keys, or other information required to install # and execute modified versions of a covered work in that User Product from # a modified version of its Corresponding Source. The information must # suffice to ensure that the continued functioning of the modified object # code is in no case prevented or interfered with solely because # modification has been made. # # If you convey an object code work under this section in, or with, or # specifically for use in, a User Product, and the conveying occurs as # part of a transaction in which the right of possession and use of the # User Product is transferred to the recipient in perpetuity or for a # fixed term (regardless of how the transaction is characterized), the # Corresponding Source conveyed under this section must be accompanied # by the Installation Information. But this requirement does not apply # if neither you nor any third party retains the ability to install # modified object code on the User Product (for example, the work has # been installed in ROM). # # The requirement to provide Installation Information does not include a # requirement to continue to provide support service, warranty, or updates # for a work that has been modified or installed by the recipient, or for # the User Product in which it has been modified or installed. 
Access to a # network may be denied when the modification itself materially and # adversely affects the operation of the network or violates the rules and # protocols for communication across the network. # # Corresponding Source conveyed, and Installation Information provided, # in accord with this section must be in a format that is publicly # documented (and with an implementation available to the public in # source code form), and must require no special password or key for # unpacking, reading or copying. # # 7. Additional Terms. # # "Additional permissions" are terms that supplement the terms of this # License by making exceptions from one or more of its conditions. # Additional permissions that are applicable to the entire Program shall # be treated as though they were included in this License, to the extent # that they are valid under applicable law. If additional permissions # apply only to part of the Program, that part may be used separately # under those permissions, but the entire Program remains governed by # this License without regard to the additional permissions. # # When you convey a copy of a covered work, you may at your option # remove any additional permissions from that copy, or from any part of # it. (Additional permissions may be written to require their own # removal in certain cases when you modify the work.) You may place # additional permissions on material, added by you to a covered work, # for which you have or can give appropriate copyright permission. # # Notwithstanding any other provision of this License, for material you # add to a covered work, you may (if authorized by the copyright holders of # that material) supplement the terms of this License with terms: # # a) Disclaiming warranty or limiting liability differently from the # terms of sections 15 and 16 of this License; or # # b) Requiring preservation of specified reasonable legal notices or # author attributions in that material or in the Appropriate Legal # Notices displayed by works containing it; or # # c) Prohibiting misrepresentation of the origin of that material, or # requiring that modified versions of such material be marked in # reasonable ways as different from the original version; or # # d) Limiting the use for publicity purposes of names of licensors or # authors of the material; or # # e) Declining to grant rights under trademark law for use of some # trade names, trademarks, or service marks; or # # f) Requiring indemnification of licensors and authors of that # material by anyone who conveys the material (or modified versions of # it) with contractual assumptions of liability to the recipient, for # any liability that these contractual assumptions directly impose on # those licensors and authors. # # All other non-permissive additional terms are considered "further # restrictions" within the meaning of section 10. If the Program as you # received it, or any part of it, contains a notice stating that it is # governed by this License along with a term that is a further # restriction, you may remove that term. If a license document contains # a further restriction but permits relicensing or conveying under this # License, you may add to a covered work material governed by the terms # of that license document, provided that the further restriction does # not survive such relicensing or conveying. 
# # If you add terms to a covered work in accord with this section, you # must place, in the relevant source files, a statement of the # additional terms that apply to those files, or a notice indicating # where to find the applicable terms. # # Additional terms, permissive or non-permissive, may be stated in the # form of a separately written license, or stated as exceptions; # the above requirements apply either way. # # 8. Termination. # # You may not propagate or modify a covered work except as expressly # provided under this License. Any attempt otherwise to propagate or # modify it is void, and will automatically terminate your rights under # this License (including any patent licenses granted under the third # paragraph of section 11). # # However, if you cease all violation of this License, then your # license from a particular copyright holder is reinstated (a) # provisionally, unless and until the copyright holder explicitly and # finally terminates your license, and (b) permanently, if the copyright # holder fails to notify you of the violation by some reasonable means # prior to 60 days after the cessation. # # Moreover, your license from a particular copyright holder is # reinstated permanently if the copyright holder notifies you of the # violation by some reasonable means, this is the first time you have # received notice of violation of this License (for any work) from that # copyright holder, and you cure the violation prior to 30 days after # your receipt of the notice. # # Termination of your rights under this section does not terminate the # licenses of parties who have received copies or rights from you under # this License. If your rights have been terminated and not permanently # reinstated, you do not qualify to receive new licenses for the same # material under section 10. # # 9. Acceptance Not Required for Having Copies. # # You are not required to accept this License in order to receive or # run a copy of the Program. Ancillary propagation of a covered work # occurring solely as a consequence of using peer-to-peer transmission # to receive a copy likewise does not require acceptance. However, # nothing other than this License grants you permission to propagate or # modify any covered work. These actions infringe copyright if you do # not accept this License. Therefore, by modifying or propagating a # covered work, you indicate your acceptance of this License to do so. # # 10. Automatic Licensing of Downstream Recipients. # # Each time you convey a covered work, the recipient automatically # receives a license from the original licensors, to run, modify and # propagate that work, subject to this License. You are not responsible # for enforcing compliance by third parties with this License. # # An "entity transaction" is a transaction transferring control of an # organization, or substantially all assets of one, or subdividing an # organization, or merging organizations. If propagation of a covered # work results from an entity transaction, each party to that # transaction who receives a copy of the work also receives whatever # licenses to the work the party's predecessor in interest had or could # give under the previous paragraph, plus a right to possession of the # Corresponding Source of the work from the predecessor in interest, if # the predecessor has it or can get it with reasonable efforts. # # You may not impose any further restrictions on the exercise of the # rights granted or affirmed under this License. 
For example, you may # not impose a license fee, royalty, or other charge for exercise of # rights granted under this License, and you may not initiate litigation # (including a cross-claim or counterclaim in a lawsuit) alleging that # any patent claim is infringed by making, using, selling, offering for # sale, or importing the Program or any portion of it. # # 11. Patents. # # A "contributor" is a copyright holder who authorizes use under this # License of the Program or a work on which the Program is based. The # work thus licensed is called the contributor's "contributor version". # # A contributor's "essential patent claims" are all patent claims # owned or controlled by the contributor, whether already acquired or # hereafter acquired, that would be infringed by some manner, permitted # by this License, of making, using, or selling its contributor version, # but do not include claims that would be infringed only as a # consequence of further modification of the contributor version. For # purposes of this definition, "control" includes the right to grant # patent sublicenses in a manner consistent with the requirements of # this License. # # Each contributor grants you a non-exclusive, worldwide, royalty-free # patent license under the contributor's essential patent claims, to # make, use, sell, offer for sale, import and otherwise run, modify and # propagate the contents of its contributor version. # # In the following three paragraphs, a "patent license" is any express # agreement or commitment, however denominated, not to enforce a patent # (such as an express permission to practice a patent or covenant not to # sue for patent infringement). To "grant" such a patent license to a # party means to make such an agreement or commitment not to enforce a # patent against the party. # # If you convey a covered work, knowingly relying on a patent license, # and the Corresponding Source of the work is not available for anyone # to copy, free of charge and under the terms of this License, through a # publicly available network server or other readily accessible means, # then you must either (1) cause the Corresponding Source to be so # available, or (2) arrange to deprive yourself of the benefit of the # patent license for this particular work, or (3) arrange, in a manner # consistent with the requirements of this License, to extend the patent # license to downstream recipients. "Knowingly relying" means you have # actual knowledge that, but for the patent license, your conveying the # covered work in a country, or your recipient's use of the covered work # in a country, would infringe one or more identifiable patents in that # country that you have reason to believe are valid. # # If, pursuant to or in connection with a single transaction or # arrangement, you convey, or propagate by procuring conveyance of, a # covered work, and grant a patent license to some of the parties # receiving the covered work authorizing them to use, propagate, modify # or convey a specific copy of the covered work, then the patent license # you grant is automatically extended to all recipients of the covered # work and works based on it. # # A patent license is "discriminatory" if it does not include within # the scope of its coverage, prohibits the exercise of, or is # conditioned on the non-exercise of one or more of the rights that are # specifically granted under this License. 
You may not convey a covered # work if you are a party to an arrangement with a third party that is # in the business of distributing software, under which you make payment # to the third party based on the extent of your activity of conveying # the work, and under which the third party grants, to any of the # parties who would receive the covered work from you, a discriminatory # patent license (a) in connection with copies of the covered work # conveyed by you (or copies made from those copies), or (b) primarily # for and in connection with specific products or compilations that # contain the covered work, unless you entered into that arrangement, # or that patent license was granted, prior to 28 March 2007. # # Nothing in this License shall be construed as excluding or limiting # any implied license or other defenses to infringement that may # otherwise be available to you under applicable patent law. # # 12. No Surrender of Others' Freedom. # # If conditions are imposed on you (whether by court order, agreement or # otherwise) that contradict the conditions of this License, they do not # excuse you from the conditions of this License. If you cannot convey a # covered work so as to satisfy simultaneously your obligations under this # License and any other pertinent obligations, then as a consequence you may # not convey it at all. For example, if you agree to terms that obligate you # to collect a royalty for further conveying from those to whom you convey # the Program, the only way you could satisfy both those terms and this # License would be to refrain entirely from conveying the Program. # # 13. Use with the GNU Affero General Public License. # # Notwithstanding any other provision of this License, you have # permission to link or combine any covered work with a work licensed # under version 3 of the GNU Affero General Public License into a single # combined work, and to convey the resulting work. The terms of this # License will continue to apply to the part which is the covered work, # but the special requirements of the GNU Affero General Public License, # section 13, concerning interaction through a network will apply to the # combination as such. # # 14. Revised Versions of this License. # # The Free Software Foundation may publish revised and/or new versions of # the GNU General Public License from time to time. Such new versions will # be similar in spirit to the present version, but may differ in detail to # address new problems or concerns. # # Each version is given a distinguishing version number. If the # Program specifies that a certain numbered version of the GNU General # Public License "or any later version" applies to it, you have the # option of following the terms and conditions either of that numbered # version or of any later version published by the Free Software # Foundation. If the Program does not specify a version number of the # GNU General Public License, you may choose any version ever published # by the Free Software Foundation. # # If the Program specifies that a proxy can decide which future # versions of the GNU General Public License can be used, that proxy's # public statement of acceptance of a version permanently authorizes you # to choose that version for the Program. # # Later license versions may give you additional or different # permissions. However, no additional obligations are imposed on any # author or copyright holder as a result of your choosing to follow a # later version. # # 15. Disclaimer of Warranty. 
# # THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY # APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT # HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY # OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, # THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR # PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM # IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF # ALL NECESSARY SERVICING, REPAIR OR CORRECTION. # # 16. Limitation of Liability. # # IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING # WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS # THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY # GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE # USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF # DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD # PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), # EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF # SUCH DAMAGES. # # 17. Interpretation of Sections 15 and 16. # # If the disclaimer of warranty and limitation of liability provided # above cannot be given local legal effect according to their terms, # reviewing courts shall apply local law that most closely approximates # an absolute waiver of all civil liability in connection with the # Program, unless a warranty or assumption of liability accompanies a # copy of the Program in return for a fee. # # END OF TERMS AND CONDITIONS # # How to Apply These Terms to Your New Programs # # If you develop a new program, and you want it to be of the greatest # possible use to the public, the best way to achieve this is to make it # free software which everyone can redistribute and change under these terms. # # To do so, attach the following notices to the program. It is safest # to attach them to the start of each source file to most effectively # state the exclusion of warranty; and each file should have at least # the "copyright" line and a pointer to where the full notice is found. # # {one line to give the program's name and a brief idea of what it does.} # Copyright (C) {year} {name of author} # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # # Also add information on how to contact you by electronic and paper mail. # # If the program does terminal interaction, make it output a short # notice like this when it starts in an interactive mode: # # {project} Copyright (C) {year} {fullname} # This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. # This is free software, and you are welcome to redistribute it # under certain conditions; type `show c' for details. 
# # The hypothetical commands `show w' and `show c' should show the appropriate # parts of the General Public License. Of course, your program's commands # might be different; for a GUI interface, you would use an "about box". # # You should also get your employer (if you work as a programmer) or school, # if any, to sign a "copyright disclaimer" for the program, if necessary. # For more information on this, and how to apply and follow the GNU GPL, see # <http://www.gnu.org/licenses/>. # # The GNU General Public License does not permit incorporating your program # into proprietary programs. If your program is a subroutine library, you # may consider it more useful to permit linking proprietary applications with # the library. If this is what you want to do, use the GNU Lesser General # Public License instead of this License. But first, please read # <http://www.gnu.org/philosophy/why-not-lgpl.html>. #
# # XML-RPC CLIENT LIBRARY # $Id: xmlrpclib.py 38463 2005-02-11 18:00:16Z USERNAME $ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
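#
# A minimal usage sketch (added for illustration; not part of the original
# header). The endpoint URL and the examples.getStateName method are
# placeholders assumed to exist on whatever XML-RPC server is being called:
#
#     import xmlrpclib
#
#     server = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")
#     try:
#         # remote methods are called as ordinary attributes of the proxy
#         print server.examples.getStateName(41)
#     except xmlrpclib.Fault, fault:
#         # the server answered the call with an XML-RPC fault structure
#         print "Fault %d: %s" % (fault.faultCode, fault.faultString)
#     except xmlrpclib.ProtocolError, err:
#         # transport-level failure (HTTP error, unreachable host, ...)
#         print "Protocol error %d: %s" % (err.errcode, err.errmsg)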
""" ============= Miscellaneous ============= IEEE 754 Floating Point Special Values -------------------------------------- Special values defined in numpy: nan, inf, NaNs can be used as a poor-man's mask (if you don't care what the original value was) Note: cannot use equality to test NaNs. E.g.: :: >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.where(myarr == np.nan) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., NaN, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead find >>> myarr array([ 1., 0., 0., 3.]) Other related special value functions: :: isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float The following corresponds to the usual functions except that nans are excluded from the results: :: nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 How numpy handles numerical exceptions -------------------------------------- The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow`` and ``'ignore'`` for ``underflow``. But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: - 'ignore' : Take no action when the exception occurs. - 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module). - 'raise' : Raise a `FloatingPointError`. - 'call' : Call a function specified using the `seterrcall` function. - 'print' : Print a warning directly to ``stdout``. - 'log' : Record error in a Log object specified by `seterrcall`. These behaviors can be set for all kinds of errors or specific ones: - all : apply to all numeric exceptions - invalid : when NaNs are generated - divide : divide by zero (for integers as well!) - overflow : floating point overflows - underflow : floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. Examples -------- :: >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print "saw stupid error!" >>> np.seterrcall(errorhandler) <function err_handler at 0x...> >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 FloatingPointError: invalid value encountered in divide saw stupid error! >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings Interfacing to C ---------------- Only a survey of the choices. Little detail on how each works. 1) Bare metal, wrap your own C-code manually. - Plusses: - Efficient - No dependencies on other tools - Minuses: - Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. - Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults - API will change for Python 3.0! 
2) Cython - Plusses: - avoid learning C APIs - no dealing with reference counting - can code in pseudo python and generate C code - can also interface to existing C code - should shield you from changes to the Python C API - has become the de-facto standard within the scientific Python community - fast indexing support for arrays - Minuses: - Can write code in non-standard form which may become obsolete - Not as flexible as manual wrapping 3) ctypes - Plusses: - part of Python standard library - good for interfacing to existing sharable libraries, particularly Windows DLLs - avoids API/reference counting issues - good numpy support: arrays have all these in their ctypes attribute: :: a.ctypes.data a.ctypes.get_strides a.ctypes.data_as a.ctypes.shape a.ctypes.get_as_parameter a.ctypes.shape_as a.ctypes.get_data a.ctypes.strides a.ctypes.get_shape a.ctypes.strides_as - Minuses: - can't use for writing code to be turned into C extensions, only a wrapper tool. 4) SWIG (automatic wrapper generator) - Plusses: - around a long time - multiple scripting language support - C++ support - Good for wrapping large (many functions) existing C libraries - Minuses: - generates lots of code between Python and the C code - can cause performance problems that are nearly impossible to optimize out - interface files can be hard to write - doesn't necessarily avoid reference counting issues or needing to know APIs 5) scipy.weave - Plusses: - can turn many numpy expressions into C code - dynamic compiling and loading of generated C code - can embed pure C code in Python module and have weave extract, generate interfaces and compile, etc. - Minuses: - Future very uncertain: it's the only part of Scipy not ported to Python 3 and is effectively deprecated in favor of Cython. 6) Psyco - Plusses: - Turns pure python into efficient machine code through jit-like optimizations - very fast when it optimizes well - Minuses: - Only on intel (windows?) - Doesn't do much for numpy? Interfacing to Fortran: ----------------------- The clear choice to wrap Fortran code is `f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_. Pyfort is an older alternative, but not supported any longer. Fwrap is a newer project that looked promising but isn't being developed any longer. Interfacing to C++: ------------------- 1) Cython 2) CXX 3) Boost.python 4) SWIG 5) SIP (used mainly in PyQT) """
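# A short sketch (added for illustration) exercising the special-value helpers
# and the seterr machinery described in the docstring above; exact printed
# representations vary between numpy versions.
import numpy as np

x = np.array([1.0, np.nan, np.inf, -np.inf])
print np.isnan(x)        # [False  True False False]
print np.isfinite(x)     # [ True False False False]
print np.nan_to_num(x)   # nan -> 0.0, +/-inf -> largest/smallest finite float

old = np.seterr(divide='ignore')   # silence divide-by-zero warnings...
y = np.array([1.0]) / 0.0          # ...so this quietly yields inf
np.seterr(**old)                   # restore the previous error settings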
""" Extrude is a script to display and extrude a gcode file. It controls the extruder and movement. It can read linear and helical move commands. It saves a log file with the suffix _log. To run extrude, install python 2.x on your machine, which is avaliable from http://www.python.org/download/ Then in the folder which extrude is in, type 'python' in a shell to run the python interpreter. Finally type 'import extrude' to import this program. Extrude requires pySerial installed for this module to work. If you are using Fedora it is available on yum (run "sudo yum install pyserial"). To actually control the reprap requires write access to the serial device, running as root is one way to get that access. This example displays and extrudes a gcode file. This example is run in a terminal as root in the folder which contains Hollow Square.gcode, and extrude.py. >>> import extrude Extrude has been imported. The gcode files in this directory that are not log files are the following: ['Hollow Square.gcode'] >>> extrude.display() File Hollow Square.gcode is being displayed. reprap.serial = serial.Serial(0, 19200, timeout = 60) reprap.cartesian.x.active = True reprap.cartesian.y.active = True reprap.cartesian.z.active = True reprap.extruder.active = True reprap.cartesian.x.setNotify() reprap.cartesian.y.setNotify() reprap.cartesian.z.setNotify() reprap.cartesian.x.limit = 2523 reprap.cartesian.y.limit = 2000 reprap.cartesian.homeReset( 200, True ) ( GCode generated by March 29,2007 Skeinforge ) ( Extruder Initialization ) M100 P210 M103 reprap.extruder.setMotor(reprap.CMD_REVERSE, 0) .. many lines of gcode and extruder commands .. reprap.cartesian.homeReset( 600, True ) reprap.cartesian.free() The gcode log file is saved as Hollow Square_log.gcode >>> extrude.displayFile("Hollow Square.gcode") File Hollow Square.gcode is being displayed. .. The gcode log file is saved as Hollow Square_log.gcode >>> extrude.displayFiles(["Hollow Square.gcode"]) File Hollow Square.gcode is being displayed. .. The gcode log file is saved as Hollow Square_log.gcode >>> extrude.displayText(" ( GCode generated by March 29,2007 Skeinforge ) ( Extruder Initialization ) .. many lines of gcode .. ") reprap.serial = serial.Serial(0, 19200, timeout = 60) reprap.cartesian.x.active = True reprap.cartesian.y.active = True reprap.cartesian.z.active = True reprap.extruder.active = True reprap.cartesian.x.setNotify() reprap.cartesian.y.setNotify() reprap.cartesian.z.setNotify() reprap.cartesian.x.limit = 2523 reprap.cartesian.y.limit = 2000 reprap.cartesian.homeReset( 200, True ) ( GCode generated by March 29,2007 Skeinforge ) ( Extruder Initialization ) M100 P210 M103 reprap.extruder.setMotor(reprap.CMD_REVERSE, 0) .. many lines of gcode and extruder commands .. reprap.cartesian.homeReset( 600, True ) reprap.cartesian.free() Note: On my system the reprap is not connected, so I get can not connect messages, like: >>> extrude.extrude() File Hollow Square.gcode is being extruded. reprap.serial = serial.Serial(0, 19200, timeout = 60) reprap.cartesian.x.active = True reprap.cartesian.y.active = True reprap.cartesian.z.active = True reprap.extruder.active = True reprap.cartesian.x.setNotify() Error: Serial timeout Error: ACK not recieved .. On a system where a reprap is connected to the serial port, you should get the following: >>> extrude.extrude() File Hollow Square.gcode is being extruded. .. The gcode log file is saved as Hollow Square_log.gcode >>> extrude.extrudeFile("Hollow Square.gcode") File Hollow Square.gcode is being extruded. 
.. The gcode log file is saved as Hollow Square_log.gcode >>> extrude.extrudeFiles(["Hollow Square.gcode"]) File Hollow Square.gcode is being extruded. .. The gcode log file is saved as Hollow Square_log.gcode >>> extrude.extrudeText(" ( GCode generated by March 29,2007 Skeinforge ) ( Extruder Initialization ) .. many lines of gcode .. ") reprap.serial = serial.Serial(0, 19200, timeout = 60) reprap.cartesian.x.active = True reprap.cartesian.y.active = True reprap.cartesian.z.active = True reprap.extruder.active = True reprap.cartesian.x.setNotify() reprap.cartesian.y.setNotify() reprap.cartesian.z.setNotify() reprap.cartesian.x.limit = 2523 reprap.cartesian.y.limit = 2000 reprap.cartesian.homeReset( 200, True ) ( GCode generated by March 29,2007 Skeinforge ) ( Extruder Initialization ) M100 P210 M103 reprap.extruder.setMotor(reprap.CMD_REVERSE, 0) .. many lines of gcode and extruder commands .. reprap.cartesian.homeReset( 600, True ) reprap.cartesian.free() """
""" DIRAC Basic MySQL Class It provides access to the basic MySQL methods in a multithread-safe mode keeping used connections in a python Queue for further reuse. These are the coded methods: __init__( host, user, passwd, name, [maxConnsInQueue=10] ) Initializes the Queue and tries to connect to the DB server, using the _connect method. "maxConnsInQueue" defines the size of the Queue of open connections that are kept for reuse. It also defined the maximum number of open connections available from the object. maxConnsInQueue = 0 means unlimited and it is not supported. _except( methodName, exception, errorMessage ) Helper method for exceptions: the "methodName" and the "errorMessage" are printed with ERROR level, then the "exception" is printed (with full description if it is a MySQL Exception) and S_ERROR is returned with the errorMessage and the exception. _connect() Attempts connection to DB and sets the _connected flag to True upon success. Returns S_OK or S_ERROR. _query( cmd, [conn] ) Executes SQL command "cmd". Gets a connection from the Queue (or open a new one if none is available), the used connection is back into the Queue. If a connection to the the DB is passed as second argument this connection is used and is not in the Queue. Returns S_OK with fetchall() out in Value or S_ERROR upon failure. _update( cmd, [conn] ) Executes SQL command "cmd" and issue a commit Gets a connection from the Queue (or open a new one if none is available), the used connection is back into the Queue. If a connection to the the DB is passed as second argument this connection is used and is not in the Queue Returns S_OK with number of updated registers in Value or S_ERROR upon failure. _createTables( tableDict ) Create a new Table in the DB _getConnection() Gets a connection from the Queue (or open a new one if none is available) Returns S_OK with connection in Value or S_ERROR the calling method is responsible for closing this connection once it is no longer needed. Some high level methods have been added to avoid the need to write SQL statement in most common cases. They should be used instead of low level _insert, _update methods when ever possible. buildCondition( self, condDict = None, older = None, newer = None, timeStamp = None, orderAttribute = None, limit = False, greater = None, smaller = None ): Build SQL condition statement from provided condDict and other extra check on a specified time stamp. The conditions dictionary specifies for each attribute one or a List of possible values greater and smaller are dictionaries in which the keys are the names of the fields, that are requested to be >= or < than the corresponding value. For compatibility with current usage it uses Exceptions to exit in case of invalid arguments insertFields( self, tableName, inFields = None, inValues = None, conn = None, inDict = None ): Insert a new row in "tableName" assigning the values "inValues" to the fields "inFields". Alternatively inDict can be used String type values will be appropriately escaped. updateFields( self, tableName, updateFields = None, updateValues = None, condDict = None, limit = False, conn = None, updateDict = None, older = None, newer = None, timeStamp = None, orderAttribute = None ): Update "updateFields" from "tableName" with "updateValues". updateDict alternative way to provide the updateFields and updateValues N records can match the condition return S_OK( number of updated rows ) if limit is not False, the given limit is set String type values will be appropriately escaped. 
deleteEntries( self, tableName, condDict = None, limit = False, conn = None, older = None, newer = None, timeStamp = None, orderAttribute = None ): Delete rows from "tableName" matching the given condition. N records can match the condition. If limit is not False, the given limit is set. String type values will be appropriately escaped; they can be single values or lists of values. getFields( self, tableName, outFields = None, condDict = None, limit = False, conn = None, older = None, newer = None, timeStamp = None, orderAttribute = None ): Select "outFields" from "tableName" with condDict. N records can match the condition. Returns S_OK( tuple(Field, Value) ). If limit is not False, the given limit is set. String type values will be appropriately escaped; they can be single values or lists of values. For compatibility with other methods, the condDict keyword argument is added. getCounters( self, table, attrList, condDict = None, older = None, newer = None, timeStamp = None, connection = False ): Count the number of records on each distinct combination of AttrList, selected with the condition defined by condDict and time stamps. getDistinctAttributeValues( self, table, attribute, condDict = None, older = None, newer = None, timeStamp = None, connection = False ): Get distinct values of a table attribute under specified conditions. """
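# An illustrative sketch (not part of the original docstring) of how the
# documented helpers fit together; the table and field names are invented,
# and the S_OK/S_ERROR values are the usual DIRAC return dictionaries:
#
#   db = MySQL( 'localhost', 'dirac', 'secret', 'TestDB', maxConnsInQueue = 10 )
#
#   result = db.insertFields( 'Jobs', inDict = { 'JobID' : 1, 'Status' : 'Waiting' } )
#   if not result[ 'OK' ]:
#     print result[ 'Message' ]          # S_ERROR carries the error text
#
#   result = db.getFields( 'Jobs', outFields = [ 'JobID', 'Status' ],
#                          condDict = { 'Status' : 'Waiting' }, limit = 10 )
#   if result[ 'OK' ]:
#     for row in result[ 'Value' ]:      # S_OK carries the selected rows
#       print row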
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
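# A short usage sketch of the parser documented above (module name assumed to
# be configparser, as in Python 3 and the configparser backport):
import configparser

parser = configparser.ConfigParser()
parser.read_string("""
[server]
host = localhost
port = 8080
debug = yes
""")

print(parser.sections())                     # ['server']
print(parser.get('server', 'host'))          # 'localhost'
print(parser.getint('server', 'port'))       # 8080
print(parser.getboolean('server', 'debug'))  # True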
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. ########################################################################################### # Implementation of the stochastic depth algorithm described in the paper # # NAME et al. "Deep networks with stochastic depth." arXiv preprint arXiv:1603.09382 (2016). # # Reference torch implementation can be found at https://github.com/yueatsprograms/Stochastic_Depth # # There are some differences in the implementation: # - A BN->ReLU->Conv is used for the skip connection when input and output shapes are different, # as opposed to a padding layer. # - The residual block is different: we use BN->ReLU->Conv->BN->ReLU->Conv, as opposed to # Conv->BN->ReLU->Conv->BN (->ReLU also applied to the skip connection). # - We did not try to match the same initialization, learning rate scheduling, etc. # #-------------------------------------------------------------------------------- # A sample from the running log (We achieved ~9.4% error after 500 epochs; some # more careful tuning of the hyperparameters and maybe also the architecture is needed # to achieve the reported numbers in the paper): # # INFO:root:Epoch[80] Batch [50] Speed: 1020.95 samples/sec Train-accuracy=0.910080 # INFO:root:Epoch[80] Batch [100] Speed: 1013.41 samples/sec Train-accuracy=0.912031 # INFO:root:Epoch[80] Batch [150] Speed: 1035.48 samples/sec Train-accuracy=0.913438 # INFO:root:Epoch[80] Batch [200] Speed: 1045.00 samples/sec Train-accuracy=0.907344 # INFO:root:Epoch[80] Batch [250] Speed: 1055.32 samples/sec Train-accuracy=0.905937 # INFO:root:Epoch[80] Batch [300] Speed: 1071.71 samples/sec Train-accuracy=0.912500 # INFO:root:Epoch[80] Batch [350] Speed: 1033.73 samples/sec Train-accuracy=0.910937 # INFO:root:Epoch[80] Train-accuracy=0.919922 # INFO:root:Epoch[80] Time cost=48.348 # INFO:root:Saved checkpoint to "sd-110-0081.params" # INFO:root:Epoch[80] Validation-accuracy=0.880142 # ... # INFO:root:Epoch[115] Batch [50] Speed: 1037.04 samples/sec Train-accuracy=0.937040 # INFO:root:Epoch[115] Batch [100] Speed: 1041.12 samples/sec Train-accuracy=0.934219 # INFO:root:Epoch[115] Batch [150] Speed: 1036.02 samples/sec Train-accuracy=0.933125 # INFO:root:Epoch[115] Batch [200] Speed: 1057.49 samples/sec Train-accuracy=0.938125 # INFO:root:Epoch[115] Batch [250] Speed: 1060.56 samples/sec Train-accuracy=0.933438 # INFO:root:Epoch[115] Batch [300] Speed: 1046.25 samples/sec Train-accuracy=0.935625 # INFO:root:Epoch[115] Batch [350] Speed: 1043.83 samples/sec Train-accuracy=0.927188 # INFO:root:Epoch[115] Train-accuracy=0.938477 # INFO:root:Epoch[115] Time cost=47.815 # INFO:root:Saved checkpoint to "sd-110-0116.params" # INFO:root:Epoch[115] Validation-accuracy=0.884415 # ...
# INFO:root:Saved checkpoint to "sd-110-0499.params" # INFO:root:Epoch[498] Validation-accuracy=0.908554 # INFO:root:Epoch[499] Batch [50] Speed: 1068.28 samples/sec Train-accuracy=0.991422 # INFO:root:Epoch[499] Batch [100] Speed: 1053.10 samples/sec Train-accuracy=0.991094 # INFO:root:Epoch[499] Batch [150] Speed: 1042.89 samples/sec Train-accuracy=0.995156 # INFO:root:Epoch[499] Batch [200] Speed: 1066.22 samples/sec Train-accuracy=0.991406 # INFO:root:Epoch[499] Batch [250] Speed: 1050.56 samples/sec Train-accuracy=0.990781 # INFO:root:Epoch[499] Batch [300] Speed: 1032.02 samples/sec Train-accuracy=0.992500 # INFO:root:Epoch[499] Batch [350] Speed: 1062.16 samples/sec Train-accuracy=0.992969 # INFO:root:Epoch[499] Train-accuracy=0.994141 # INFO:root:Epoch[499] Time cost=47.401 # INFO:root:Saved checkpoint to "sd-110-0500.params" # INFO:root:Epoch[499] Validation-accuracy=0.906050 # ###########################################################################################
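# For reference, a tiny sketch (not taken from this implementation) of the
# linearly decaying survival rule from the paper: residual block l of L
# survives with probability p_l = 1 - (l / L) * (1 - p_L), where p_L is the
# survival probability of the last block (0.5 in the paper).
def survival_probabilities(num_blocks, p_last=0.5):
    """Per-block survival probabilities, decaying linearly with depth."""
    return [1.0 - (float(l) / num_blocks) * (1.0 - p_last)
            for l in range(1, num_blocks + 1)]

# e.g. survival_probabilities(4) -> [0.875, 0.75, 0.625, 0.5]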
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods: aiff() -- create an AIFF file (AIFF-C default) aifc() -- create an AIFF-C file setnchannels(n) -- set the number of channels setsampwidth(n) -- set the sample width setframerate(n) -- set the frame rate setnframes(n) -- set the number of frames setcomptype(type, name) -- set the compression type and the human-readable compression type setparams(tuple) -- set all parameters at once setmark(id, pos, name) -- add specified mark to the list of marks tell() -- return current position in output file (useful in combination with setmark()) writeframesraw(data) -- write audio frames without patching up the file header writeframes(data) -- write audio frames and patch up the file header close() -- patch up the file header and close the output file You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. It is best to first set all parameters, perhaps including the compression type, and then write audio frames using writeframesraw. When all frames have been written, either call writeframes('') or close() to patch up the sizes in the header. Marks can be added anytime. If there are any marks, you must call close() after all frames have been written. The close() method is called automatically when the class instance is destroyed. When a file is opened with the extension '.aiff', an AIFF file is written, otherwise an AIFF-C file is written. This default can be changed by calling aiff() or aifc() before the first writeframes or writeframesraw. """
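# A brief reading sketch using the methods documented above; the filename is
# a placeholder for any AIFF / AIFF-C file on disk.
import aifc

f = aifc.open('sound.aiff', 'r')
try:
    print f.getnchannels(), 'channel(s) at', f.getframerate(), 'Hz'
    print f.getnframes(), 'frames of', f.getsampwidth(), 'byte samples'
    data = f.readframes(1024)      # raw sample bytes for at most 1024 frames
finally:
    f.close()                      # makes the instance unusable again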
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be >= 1D atleast_2d Force arrays to be >= 2D atleast_3d Force arrays to be >= 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) stack Stack arrays along a new axis split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat Construct a Matrix bmat Build a Matrix from blocks ================ =================== Polynomials ----------- ================ =================== poly1d A one-dimensional polynomial class poly Return polynomial coefficients from roots roots Find roots of polynomial given coefficients polyint Integrate polynomial polyder Differentiate polynomial polyadd Add polynomials polysub Subtract polynomials polymul Multiply polynomials polydiv Divide polynomials polyval Evaluate polynomial at given argument ================ =================== Iterators --------- ================ =================== Arrayterator A buffered iterator for big arrays. ================ =================== Import Tricks ------------- ================ =================== ppimport Postpone module import until trying to use it ppimport_attr Postpone module import until trying to use its attribute ppresolve Import postponed module and return it. ================ =================== Machine Arithmetics ------------------- ================ =================== machar_single Single precision floating point arithmetic parameters machar_double Double precision floating point arithmetic parameters ================ =================== Threading Tricks ---------------- ================ =================== ParallelExec Execute commands in parallel thread. ================ =================== Array Set Operations ----------------------- Set operations for numeric arrays based on sort() function. ================ =================== unique Unique elements of an array. isin Test whether each element of an ND array is present anywhere within a second array. ediff1d Array difference (auxiliary function). intersect1d Intersection of 1D arrays with unique elements. setxor1d Set exclusive-or of 1D arrays with unique elements. in1d Test whether elements in a 1D array are also present in another array. union1d Union of 1D arrays with unique elements. setdiff1d Set difference of 1D arrays with unique elements. ================ =================== """
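# A few of the helpers catalogued above in action (illustrative only; printed
# formatting differs between numpy versions):
import numpy as np

x = np.linspace(0.0, 1.0, 5)          # 5 evenly spaced samples from 0 to 1
y = np.select([x < 0.5, x >= 0.5],    # piecewise choice, a vectorized if/else
              [x, 1.0 - x])

p = np.poly1d([1, -3, 2])             # the polynomial x**2 - 3*x + 2
print p.r                             # its roots: array([ 2.,  1.])
print p(0.0)                          # evaluate at 0 -> 2.0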
#----------------------------------------------------------------------------- # eveapi - EVE Online API access # # Copyright (c)2007-2014 NAME "Entity" NAME <EMAIL> # # Permission is hereby granted, free of charge, to any person # obtaining a copy of this software and associated documentation # files (the "Software"), to deal in the Software without # restriction, including without limitation the rights to use, # copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following # conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES # OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR # OTHER DEALINGS IN THE SOFTWARE # #----------------------------------------------------------------------------- # # Version: 1.3.0 - 27 May 2014 # - Added set_user_agent() module-level function to set the User-Agent header # to be used for any requests by the library. If this function is not used, # a warning will be thrown for every API request. # # Version: 1.2.9 - 14 September 2013 # - Updated error handling: Raise an AuthenticationError in case # the API returns HTTP Status Code 403 - Forbidden # # Version: 1.2.8 - 9 August 2013 # - the XML value cast function (_autocast) can now be changed globally to a # custom one using the set_cast_func(func) module-level function. # # Version: 1.2.7 - 3 September 2012 # - Added get() method to Row object. # # Version: 1.2.6 - 29 August 2012 # - Added finer error handling + added setup.py to allow distributing eveapi # through pypi. # # Version: 1.2.5 - 1 August 2012 # - Row objects now have __hasattr__ and __contains__ methods # # Version: 1.2.4 - 12 April 2012 # - API version of XML response now available as _meta.version # # Version: 1.2.3 - 10 April 2012 # - fix for tags of the form <tag attr=bla ... /> # # Version: 1.2.2 - 27 February 2012 # - fix for the workaround in 1.2.1. # # Version: 1.2.1 - 23 February 2012 # - added workaround for row tags missing attributes that were defined # in their rowset (this should fix ContractItems) # # Version: 1.2.0 - 18 February 2012 # - fix handling of empty XML tags. # - improved proxy support a bit. # # Version: 1.1.9 - 2 September 2011 # - added workaround for row tags with attributes that were not defined # in their rowset (this should fix AssetList) # # Version: 1.1.8 - 1 September 2011 # - fix for inconsistent columns attribute in rowsets. # # Version: 1.1.7 - 1 September 2011 # - auth() method updated to work with the new authentication scheme. # # Version: 1.1.6 - 27 May 2011 # - Now supports composite keys for IndexRowsets. # - Fixed calls not working if a path was specified in the root url. # # Version: 1.1.5 - 27 January 2011 # - Now supports (and defaults to) HTTPS. Non-SSL proxies will still work by # explicitly specifying http:// in the url. # # Version: 1.1.4 - 1 December 2010 # - Empty explicit CDATA tags are now properly handled.
# - _autocast now receives the name of the variable it's trying to typecast, # enabling custom/future casting functions to make smarter decisions. # # Version: 1.1.3 - 6 November 2010 # - Added support for anonymous CDATA inside row tags. This makes the body of # mails in the rows of char/MailBodies available through the .data attribute. # # Version: 1.1.2 - 2 July 2010 # - Fixed __str__ on row objects to work properly with unicode strings. # # Version: 1.1.1 - 10 January 2010 # - Fixed bug that caused nested tags to not appear in rows of rowsets created # from normal Elements. This should fix the corp.MemberSecurity method, # which now returns all data for members. [jehed] # # Version: 1.1.0 - 15 January 2009 # - Added Select() method to Rowset class. Using it avoids the creation of # temporary row instances, speeding up iteration considerably. # - Added ParseXML() function, which can be passed arbitrary API XML file or # string objects. # - Added support for proxy servers. A proxy can be specified globally or # per api connection instance. [suggestion by graalman] # - Some minor refactoring. # - Fixed deprecation warning when using Python 2.6. # # Version: 1.0.7 - 14 November 2008 # - Added workaround for rowsets that are missing the (required!) columns # attribute. If missing, it will use the columns found in the first row. # Note that this will still break when expecting columns, if the rowset # is empty. [Flux/Entity] # # Version: 1.0.6 - 18 July 2008 # - Enabled expat text buffering to avoid content breaking up. [BigWhale] # # Version: 1.0.5 - 03 February 2008 # - Added workaround to make broken XML responses (like the "row:name" bug in # eve/CharacterID) work as intended. # - Bogus datestamps before the epoch in XML responses are now set to 0 to # avoid breaking certain date/time functions. [Anathema NAME Version: 1.0.4 - 23 December 2007 # - Changed _autocast() to use timegm() instead of mktime(). [Invisible Hand] # - Fixed missing attributes of elements inside rows. [Elandra NAME Version: 1.0.3 - 13 December 2007 # - Fixed keyless columns bugging out the parser (in CorporationSheet for ex.) # # Version: 1.0.2 - 12 December 2007 # - Fixed parser not working with indented XML. # # Version: 1.0.1 # - Some micro optimizations # # Version: 1.0 # - Initial release # # Requirements: # Python 2.4+ # #-----------------------------------------------------------------------------
""" ======== Glossary ======== along an axis Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). Many operation can take place along one of these axes. For example, we can sum each row of an array, in which case we operate along columns, or axis 1:: >>> x = np.arange(12).reshape((3,4)) >>> x array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.sum(axis=1) array([ 6, 22, 38]) array A homogeneous container of numerical elements. Each element in the array occupies a fixed amount of memory (hence homogeneous), and can be a numerical element of a single type (such as float, int or complex) or a combination (such as ``(float, int, float)``). Each array has an associated data-type (or ``dtype``), which describes the numerical type of its elements:: >>> x = np.array([1, 2, 3], float) >>> x array([ 1., 2., 3.]) >>> x.dtype # floating point number, 64 bits of memory per element dtype('float64') # More complicated data type: each array element is a combination of # and integer and a floating point number >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')]) Fast element-wise operations, called `ufuncs`_, operate on arrays. array_like Any sequence that can be interpreted as an ndarray. This includes nested lists, tuples, scalars and existing arrays. attribute A property of an object that can be accessed using ``obj.attribute``, e.g., ``shape`` is an attribute of an array:: >>> x = np.array([1, 2, 3]) >>> x.shape (3,) BLAS `Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_ broadcast NumPy can do operations on arrays whose shapes are mismatched:: >>> x = np.array([1, 2]) >>> y = np.array([[3], [4]]) >>> x array([1, 2]) >>> y array([[3], [4]]) >>> x + y array([[4, 5], [5, 6]]) See `doc.broadcasting`_ for more information. C order See `row-major` column-major A way to represent items in a N-dimensional array in the 1-dimensional computer memory. In column-major order, the leftmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the column-major order as:: [1, 4, 2, 5, 3, 6] Column-major order is also known as the Fortran order, as the Fortran programming language uses it. decorator An operator that transforms a function. For example, a ``log`` decorator may be defined to print debugging information upon function execution:: >>> def log(f): ... def new_logging_func(*args, **kwargs): ... print "Logging call with parameters:", args, kwargs ... return f(*args, **kwargs) ... ... return new_logging_func Now, when we define a function, we can "decorate" it using ``log``:: >>> @log ... def add(a, b): ... return a + b Calling ``add`` then yields: >>> add(1, 2) Logging call with parameters: (1, 2) {} 3 dictionary Resembling a language dictionary, which provides a mapping between words and descriptions thereof, a Python dictionary is a mapping between two objects:: >>> x = {1: 'one', 'two': [1, 2]} Here, `x` is a dictionary mapping keys to values, in this case the integer 1 to the string "one", and the string "two" to the list ``[1, 2]``. The values may be accessed using their corresponding keys:: >>> x[1] 'one' >>> x['two'] [1, 2] Note that dictionaries are not stored in any specific order. Also, most mutable (see *immutable* below) objects, such as lists, may not be used as keys. 
For more information on dictionaries, read the `Python tutorial <http://docs.python.org/tut>`_. Fortran order See `column-major` flattened Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details. immutable An object that cannot be modified after creation is called immutable. Two common examples are strings and tuples. instance A class definition gives the blueprint for constructing an object:: >>> class House(object): ... wall_colour = 'white' Yet, we have to *build* a house before it exists:: >>> h = House() # build a house Now, ``h`` is called a ``House`` instance. An instance is therefore a specific realisation of a class. iterable A sequence that allows "walking" (iterating) over items, typically using a loop such as:: >>> x = [1, 2, 3] >>> [item**2 for item in x] [1, 4, 9] It is often used in combination with ``enumerate``:: >>> for n, k in enumerate(keys): ... print "Key %d: %s" % (n, k) ... Key 0: a Key 1: b Key 2: c list A Python container that can hold any number of objects or items. The items do not have to be of the same type, and can even be lists themselves:: >>> x = [2, 2.0, "two", [2, 2.0]] The list `x` contains 4 items, each of which can be accessed individually:: >>> x[2] # the string 'two' 'two' >>> x[3] # a list, containing an integer 2 and a float 2.0 [2, 2.0] It is also possible to select more than one item at a time, using *slicing*:: >>> x[0:2] # or, equivalently, x[:2] [2, 2.0] In code, arrays are often conveniently expressed as nested lists:: >>> np.array([[1, 2], [3, 4]]) array([[1, 2], [3, 4]]) For more information, read the section on lists in the `Python tutorial <http://docs.python.org/tut>`_. For a mapping type (key-value), see *dictionary*. mask A boolean array, used to select only certain elements for an operation:: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> mask = (x > 2) >>> mask array([False, False, False, True, True], dtype=bool) >>> x[mask] = -1 >>> x array([ 0, 1, 2, -1, -1]) masked array An array that suppresses values indicated by a mask:: >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True]) >>> x masked_array(data = [-- 2.0 --], mask = [ True False True], fill_value=1e+20) >>> x + [1, 2, 3] masked_array(data = [-- 4.0 --], mask = [ True False True], fill_value=1e+20) Masked arrays are often used when operating on arrays containing missing or invalid entries. matrix A 2-dimensional ndarray that preserves its two-dimensional nature throughout operations. It has certain special operations, such as ``*`` (matrix multiplication) and ``**`` (matrix power), defined:: >>> x = np.mat([[1, 2], [3, 4]]) >>> x matrix([[1, 2], [3, 4]]) >>> x**2 matrix([[ 7, 10], [15, 22]]) method A function associated with an object. For example, each ndarray has a method called ``repeat``:: >>> x = np.array([1, 2, 3]) >>> x.repeat(2) array([1, 1, 2, 2, 3, 3]) ndarray See *array*. reference If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore, ``a`` and ``b`` are different names for the same Python object. row-major A way to represent items in a N-dimensional array in the 1-dimensional computer memory. In row-major order, the rightmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the row-major order as:: [1, 2, 3, 4, 5, 6] Row-major order is also known as the C order, as the C programming language uses it. New Numpy arrays are by default in row-major order. self Often seen in method signatures, ``self`` refers to the instance of the associated class.
For example: >>> class Paintbrush(object): ... color = 'blue' ... ... def paint(self): ... print "Painting the city %s!" % self.color ... >>> p = Paintbrush() >>> p.color = 'red' >>> p.paint() # self refers to 'p' Painting the city red! slice Used to select only certain elements from a sequence:: >>> x = range(5) >>> x [0, 1, 2, 3, 4] >>> x[1:3] # slice from 1 to 3 (excluding 3 itself) [1, 2] >>> x[1:5:2] # slice from 1 to 5, but skipping every second element [1, 3] >>> x[::-1] # slice a sequence in reverse [4, 3, 2, 1, 0] Arrays may have more than one dimension, each of which can be sliced individually:: >>> x = np.array([[1, 2], [3, 4]]) >>> x array([[1, 2], [3, 4]]) >>> x[:, 1] array([2, 4]) tuple A sequence that may contain a variable number of types of any kind. A tuple is immutable, i.e., once constructed it cannot be changed. Similar to a list, it can be indexed and sliced:: >>> x = (1, 'one', [1, 2]) >>> x (1, 'one', [1, 2]) >>> x[0] 1 >>> x[:2] (1, 'one') A useful concept is "tuple unpacking", which allows variables to be assigned to the contents of a tuple:: >>> x, y = (1, 2) >>> x, y = 1, 2 This is often used when a function returns multiple values: >>> def return_many(): ... return 1, 'alpha', None >>> a, b, c = return_many() >>> a, b, c (1, 'alpha', None) >>> a 1 >>> b 'alpha' ufunc Universal function. A fast element-wise array operation. Examples include ``add``, ``sin`` and ``logical_or``. view An array that does not own its data, but refers to another array's data instead. For example, we may create a view that only shows every second element of another array:: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> y = x[::2] >>> y array([0, 2, 4]) >>> x[0] = 3 # changing x changes y as well, since y is a view on x >>> y array([3, 2, 4]) wrapper Python is a high-level (highly abstracted, or English-like) language. This abstraction comes at a price in execution speed, and sometimes it becomes necessary to use lower level languages to do fast computations. A wrapper is code that provides a bridge between high- and low-level languages, allowing, e.g., Python to execute code written in C or Fortran. Examples include ctypes, SWIG and Cython (which wraps C and C++) and f2py (which wraps Fortran). """
#################################################################################
## $HeadURL$
#################################################################################
#__RCSID__ = "$Id$"
#
#import sys
#
#from DIRAC.Core.Base import Script
#Script.parseCommandLine()
#
#from DIRAC.ResourceStatusSystem.Utilities.Utils import *
#from DIRAC.ResourceStatusSystem import *
#from DIRAC.ResourceStatusSystem.PolicySystem.PDP import PDP
#from DIRAC import gConfig
#import DIRAC.ResourceStatusSystem.test.fake_rsDB
#
#VO = gConfig.getValue("DIRAC/Extensions")
#if 'LHCb' in VO:
#  VO = 'LHCb'
#
#sito = {'name':'LCG.UKI-LT2-QMUL.uk', 'siteType':'T2'} #OK
##sito = {'name':'LCG.CERN.ch', 'siteType':'T0'} #OK
##sito = {'name':'LCG.CNAF.it', 'siteType':'T1'} #OK
#servizio = {'name':'EMAIL', 'siteType':'T2', 'serviceType':'Storage'} #OK
#servizio2 = {'name':'EMAIL', 'siteType':'T2', 'serviceType':'Computing'} #OK
#servizio3 = {'name':'EMAIL', 'siteType':'T1', 'serviceType':'VO-BOX'} #OK
#servizio4 = {'name':'EMAIL', 'siteType':'T0', 'serviceType':'VOMS'} #OK
#risorsa = {'name':'gridgate.cs.tcd.ie', 'siteType':'T2', 'resourceType':'CE'} #OK
##risorsa = {'name':'ce111.cern.ch', 'siteType':'T0', 'resourceType':'CE'} #OK
##risorsa3 = {'name':'tblb01.nipne.ro', 'siteType':'T2', 'resourceType':'CE'} #OK
#risorsa4 = {'name':'fts.cr.cnaf.infn.it', 'siteType':'T1', 'resourceType':'FTS'} #OK
#risorsa5 = {'name':'voms.cern.ch', 'siteType':'T0', 'resourceType':'VOMS'} #OK
##risorsa = {'name':'hepgrid3.ph.liv.ac.uk', 'siteType':'T1', 'resourceType':'CE'} #OK, but CE was used instead of CREAMCE
#risorsa2 = {'name':'ccsrm.in2p3.fr', 'siteType':'T1', 'resourceType':'SE'} #OK
#risorsa3 = {'name':'lhcb-lfc.gridpp.rl.ac.uk', 'siteType':'T1', 'resourceType':'LFC_L'} #OK
##risorsa = {'name':'ce.gina.sara.nl', 'siteType':'T2', 'resourceType':'CE'} #OK
##risorsa3 = {'name':'prod-lfc-lhcb-central.cern.ch', 'siteType':'T1', 'resourceType':'LFC_C'} #OK
#se = {'name':'CERN-RAW', 'siteType':'T0'} #OK
##se = {'name':'PIC_MC_M-DST', 'siteType':'T1'} #OK
#
#useNewRes = False
#
##sito = {'name':'LCG.CERN2.ch', 'siteType':'T0'} #WRONG
##servizio = {'name':'Computing@LCG.#Ferrara.it', 'siteType':'T2', 'serviceType':'Computing'} #WRONG
##risorsa = {'name':'ce106.pic.es', 'siteType':'T1', 'resourceType':'CE'} #WRONG
##risorsa2 = {'name':'srm-lhcb.cern.ch#', 'siteType':'T0', 'resourceType':'SE'} #WRONG
##se = {'name':'CERN_MC_M-DST#', 'siteType':'T0'} #WRONG
#
#print "\n\n ~~~~~~~ SITE ~~~~~~~ %s \n" %(sito)
#
#for status in ValidStatus:
##  for oldStatus in ValidStatus:
##    if status == oldStatus:
##      continue
#  print "############################"
#  print " "
#  print 'in the test:', status#, oldStatus
#  pdp = PDP(VO, granularity = 'Site', name = sito['name'], status = status,
##            formerStatus = oldStatus,
#            reason = 'XXXXX', siteType = sito['siteType'],
#            useNewRes = useNewRes
#            )
#  res = pdp.takeDecision()
#  print res
#
##print "\n\n ~~~~~~~ SERVICE 1 ~~~~~~~ : %s \n " %servizio
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print 'in the test:', status#, oldStatus
##  pdp = PDP(VO, granularity = 'Service', name = servizio['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = servizio['siteType'],
##            serviceType = servizio['serviceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##print "\n\n ~~~~~~~ SERVICE 2 ~~~~~~~ : %s \n " %servizio2
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print 'in the test:', status#, oldStatus
##  pdp = PDP(VO, granularity = 'Service', name = servizio2['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = servizio2['siteType'],
##            serviceType = servizio2['serviceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##print "\n\n ~~~~~~~ SERVICE 3 ~~~~~~~ : %s \n " %servizio3
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print 'in the test:', status#, oldStatus
##  pdp = PDP(VO, granularity = 'Service', name = servizio3['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = servizio3['siteType'],
##            serviceType = servizio3['serviceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##print "\n\n ~~~~~~~ SERVICE 4 ~~~~~~~ : %s \n " %servizio4
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print 'in the test:', status#, oldStatus
##  pdp = PDP(VO, granularity = 'Service', name = servizio4['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = servizio4['siteType'],
##            serviceType = servizio4['serviceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##
#print "\n\n ~~~~~~~ RESOURCE 1 ~~~~~~~ : %s \n " %risorsa
#
#for status in ValidStatus:
##  for oldStatus in ValidStatus:
##    if status == oldStatus:
##      continue
#  print "############################"
#  print " "
#  print status#, oldStatus
#  pdp = PDP(VO, granularity = 'Resource', name = risorsa['name'], status = status,
##            formerStatus = oldStatus,
#            reason = 'XXXXX', siteType = risorsa['siteType'],
#            resourceType = risorsa['resourceType'],
#            useNewRes = useNewRes
#            )
#  res = pdp.takeDecision()
#  print res
#
##print "\n\n ~~~~~~~ RESOURCE 2 ~~~~~~~ : %s \n " %risorsa2
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print status#, oldStatus
##  pdp = PDP(VO, granularity = 'Resource', name = risorsa2['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = risorsa2['siteType'],
##            resourceType = risorsa2['resourceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##print "\n\n ~~~~~~~ RESOURCE 3 ~~~~~~~ : %s \n " %risorsa3
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print status#, oldStatus
##  pdp = PDP(VO, granularity = 'Resource', name = risorsa3['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = risorsa3['siteType'],
##            resourceType = risorsa3['resourceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##print "\n\n ~~~~~~~ RESOURCE 4 ~~~~~~~ : %s \n " %risorsa4
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print status#, oldStatus
##  pdp = PDP(VO, granularity = 'Resource', name = risorsa4['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = risorsa4['siteType'],
##            resourceType = risorsa4['resourceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##print "\n\n ~~~~~~~ RESOURCE 5 ~~~~~~~ : %s \n " %risorsa5
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print status#, oldStatus
##  pdp = PDP(VO, granularity = 'Resource', name = risorsa5['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = risorsa5['siteType'],
##            resourceType = risorsa5['resourceType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
##
##
##print "\n\n ~~~~~~~ StorageElement ~~~~~~~ : %s \n " %se
##
##for status in ValidStatus:
###  for oldStatus in ValidStatus:
###    if status == oldStatus:
###      continue
##  print "############################"
##  print " "
##  print status#, oldStatus
##  pdp = PDP(VO, granularity = 'StorageElement', name = se['name'], status = status,
###            formerStatus = oldStatus,
##            reason = 'XXXXX', siteType = se['siteType'],
##            useNewRes = useNewRes
##            )
##  res = pdp.takeDecision()
##  print res
#
#################################################################################
## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#################################################################################
#
#'''
#  HOW DOES THIS WORK.
#
#  will come soon...
#'''
#
#################################################################################
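Everything above is commented out; for reference, a single Site decision from
that script would run roughly as follows. This is a sketch assuming the same
DIRAC imports and a configured environment, not a tested addition to the file:

#from DIRAC.Core.Base import Script
#Script.parseCommandLine()
#
#from DIRAC.ResourceStatusSystem import ValidStatus
#from DIRAC.ResourceStatusSystem.PolicySystem.PDP import PDP
#from DIRAC import gConfig
#
#VO = gConfig.getValue("DIRAC/Extensions")
#if 'LHCb' in VO:
#  VO = 'LHCb'
#
#site = {'name': 'LCG.UKI-LT2-QMUL.uk', 'siteType': 'T2'}
#
#for status in ValidStatus:
#  # One decision per valid status, mirroring the loop above.
#  pdp = PDP(VO, granularity = 'Site', name = site['name'], status = status,
#            reason = 'XXXXX', siteType = site['siteType'], useNewRes = False)
#  print pdp.takeDecision()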
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company like Veritos.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
##############################################################################
#
# This module works in OpenERP 5.0.0 (and probably higher).
# It does not work in OpenERP version 4 and lower.
#
# Status 1.0 - tested on OpenERP 5.0.3
#
# Version IP_ADDRESS
#     account.account.type
#     Laid the basis for all account types.
#
#     account.account.template
#     Laid the basis with all required general ledger accounts, which are
#     linked through a menu structure to sections 1 through 9.
#     The ledger accounts are linked to the account.account.type.
#     These links still need to be checked over carefully.
#
#     account.chart.template
#     Laid the basis for linking accounts to debtors, creditors, bank,
#     purchase and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
#     account.tax.code.template
#     Laid the basis for the VAT configuration (structure).
#     Used the VAT declaration form as the starting point. Whether this
#     works remains to be seen.
#
#     account.tax.template
#     Created the VAT accounts and linked them to the relevant general
#     ledger accounts.
#
# Version IP_ADDRESS
#     Cleaned up the code and removed unused components.
# Version IP_ADDRESS
#     Changed a_expense from 3000 -> 7000;
#     set record id='btw_code_5b' to a negative value.
# Version IP_ADDRESS
#     VAT accounts were given a type indication for purchase or sale.
# Version IP_ADDRESS
#     Cleaned up the module.
# Version IP_ADDRESS
#     Cleaned up the module.
# Version IP_ADDRESS
#     Corrected a small bug in l10n_nl_wizard.xml that kept the module
#     from installing completely.
# Version IP_ADDRESS
#     Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
#     Properly defined all user_type_xxx fields.
#     Removed the construction- and garage-specific ledgers to create a
#     standard module, which can then be used as the basis for modules
#     aimed at specific target groups.
# Version IP_ADDRESS
#     Corrected account 7010 (it duplicated 7014, which broke the
#     installation).
# Version IP_ADDRESS
#     Corrected several account types from user_type_asset ->
#     user_type_liability and user_type_equity.
# Version IP_ADDRESS
#     Small correction to "BTW te vorderen hoog" (VAT receivable, high
#     rate): the id was the same for both, so "high" was overwritten by
#     "other". Clarified the descriptions in the tax codes for the
#     declaration overview.
# Version IP_ADDRESS
#     Adjusted the VAT descriptions so the reports look better; removed
#     "2a", "5b" and the like, and added a few descriptions.
# Version IP_ADDRESS - Switch to English
#     Added properties_stock_xxx accounts for correct stock valuation,
#     changed 7000-accounts from type cash to type expense.
#     Changed naming of 7020 and 7030 to "Kostprijs omzet xxxx".
""" This module contains generic generator functions for traversing tree (and DAG) structures. It is agnostic to the underlying data structure and implementation of the tree object. It does this through dependency injection of the tree's accessor functions: get_parents and get_children. The following depth-first traversal methods are implemented: * Pre-order: Parent yielded before children; child with multiple parents is yielded when first encountered. Example use cases (when DAGs are *not* supported): 1. User access. If computing a user's access to a node relies on the user's access to the node's parents, access to the parent has to be computed before access to the child can be determined. To support access chains, a user's access on a node is actually an accumulation of accesses down from the root node through the ancestor chain to the actual node. 2. Field value percolated down. If a value for a field is dependent on a combination of the child's and the parent's value, the parent's value should be computed before that of the child's. Similar to "User access", the value would be percolated down through the entire ancestor chain. Example: Start Date is max(node's start date, start date of each ancestor) This takes the most restrictive value. 3. Depth. When computing the depth of a tree, since a child's depth value is 1 + the parent's depth value, the parent's value should be computed before the child's. 4. Fast Subtree Deletion. If the tree is to be pruned during traversal, an entire subtree can be deleted, without traversing the children, as soon as the parent is determined to be deleted. * Topological: Parent yielded before children; child with multiple parents yielded only after all its parents are visited. Example use cases (when DAGs *are* supported): 1. User access. Similar to pre-order, except a user's access is now determined by taking a *union* of the percolated access value from each of the node's parents combined with its own access. 2. Field value percolated down. Similar to pre-order, except the value for a node is calculated from the array of percolated values from each of its parents combined with its own. Example: Start Date is max(node's start date, min(max(ancestry of each parent)) This takes the most permissive from all ancestry chains. 3. Depth. Similar to pre-order, except the depth of a node will be 1 + the minimum (or the maximum depending on semantics) of the depth of all its parents. 4. Deletion. Deletion of subtrees are not as fast as they are for pre-order since a node can be accessed through multiple parents. * Post-order: Children yielded before its parents. Example use cases: 1. Counting. When each node wants to count the number of nodes within its sub-structure, the count for each child has to be calculated before its parents, since a parent's value depends on its children. 2. Map function (when order doesn't matter). If a function needs to be evaluated for each node in a DAG and the order that the nodes are iterated doesn't matter, then use post-order since it is faster than topological for DAGs. 3. Field value percolated up. If a value for a field is based on the value from it's children, the children's values need to be computed before their parents. Example: Minimum Due Date of all nodes within the sub-structure. Note: In-order traversal is not implemented as of yet. We can do so if/when needed. 
Optimization once DAGs are not supported: Supporting Directed Acyclic Graphs (DAGs) requires us to use topological sort, which has the following negative performance implications: * For a simple tree, we can immediately skip over traversing descendants, once it is determined that a parent is not to be yielded (based on the return value from the 'filter_func' function). However, since we support DAGs, we cannot simply skip over descendants since they may still be accessible through a different ancestry chain and need to be revisited once all their parents are visited. * For topological sort, we need the get_parents accessor function in order to determine whether all of a node's parents have been visited. This means the underlying implementation of the graph needs to have an efficient way to get a node's parents, perhaps with back pointers to each node's parents. This requires additional storage space, which could be eliminated if DAGs are not supported. """
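To make the dependency-injection idea concrete, here is a minimal sketch of a
pre-order generator in the spirit described above. The function and parameter
names are illustrative, not necessarily the module's actual API:

def preorder_traversal(root, get_children, filter_func=None):
    """Yield nodes depth-first, parents before children.

    ``get_children`` is injected so the generator stays agnostic to the
    underlying tree implementation; ``filter_func`` may prune a node
    and, for plain trees, its entire subtree.
    """
    filter_func = filter_func or (lambda node: True)
    stack = [root]
    while stack:
        node = stack.pop()
        if not filter_func(node):
            continue  # fast subtree deletion: skip node and descendants
        yield node
        # reversed() keeps left-to-right child order when popping.
        stack.extend(reversed(get_children(node)))

A topological variant would additionally take a ``get_parents`` accessor and
defer yielding a node until every parent has been visited, which is exactly
the extra bookkeeping the "Optimization" note above would like to avoid.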
""" Signal Processing Tools ======================= Convolution: convolve: N-dimensional convolution. correlate: N-dimensional correlation. fftconvolve: N-dimensional convolution using the FFT. convolve2d: 2-dimensional convolution (more options). correlate2d: 2-dimensional correlation (more options). sepfir2d: Convolve with a 2-D separable FIR filter. B-splines: bspline: B-spline basis function of order n. gauss_spline: Gaussian approximation to the B-spline basis function. cspline1d: Coefficients for 1-D cubic (3rd order) B-spline. qspline1d: Coefficients for 1-D quadratic (2nd order) B-spline. cspline2d: Coefficients for 2-D cubic (3rd order) B-spline. qspline2d: Coefficients for 2-D quadratic (2nd order) B-spline. spline_filter: Smoothing spline (cubic) filtering of a rank-2 array. Filtering: order_filter: N-dimensional order filter. medfilt: N-dimensional median filter. medfilt2: 2-dimensional median filter (faster). wiener: N-dimensional wiener filter. symiirorder1: 2nd-order IIR filter (cascade of first-order systems). symiirorder2: 4th-order IIR filter (cascade of second-order systems). lfilter: 1-dimensional FIR and IIR digital linear filtering. lfiltic: Construct initial conditions for `lfilter`. deconvolve: 1-d deconvolution using lfilter. hilbert: Compute the analytic signal of a 1-d signal. get_window: Create FIR window. decimate: Downsample a signal. detrend: Remove linear and/or constant trends from data. resample: Resample using Fourier method. Filter design: bilinear: Return a digital filter from an analog filter using the bilinear transform. firwin: Windowed FIR filter design, with frequency response defined as pass and stop bands. firwin2: Windowed FIR filter design, with arbitrary frequency response. freqs: Analog filter frequency response. freqz: Digital filter frequency response. iirdesign: IIR filter design given bands and gains. iirfilter: IIR filter design given order and critical frequencies. invres: Inverse partial fraction expansion. kaiser_beta: Compute the Kaiser parameter beta, given the desired FIR filter attenuation. kaiser_atten: Compute the attenuation of a Kaiser FIR filter, given the number of taps and the transition width at discontinuities in the frequency response. kaiserord: Design a Kaiser window to limit ripple and width of transition region. remez: Optimal FIR filter design. residue: Partial fraction expansion of b(s) / a(s). residuez: Partial fraction expansion of b(z) / a(z). unique_roots: Unique roots and their multiplicities. Matlab-style IIR filter design: butter (buttord): Butterworth cheby1 (cheb1ord): Chebyshev Type I cheby2 (cheb2ord): Chebyshev Type II ellip (ellipord): Elliptic (Cauer) bessel: Bessel (no order selection available -- try butterod) Linear Systems: lti: linear time invariant system object. lsim: continuous-time simulation of output to linear system. lsim2: like lsim, but `scipy.integrate.odeint` is used. impulse: impulse response of linear, time-invariant (LTI) system. impulse2: like impulse, but `scipy.integrate.odeint` is used. step: step response of continous-time LTI system. step2: like step, but `scipy.integrate.odeint` is used. LTI Representations: tf2zpk: transfer function to zero-pole-gain. zpk2tf: zero-pole-gain to transfer function. tf2ss: transfer function to state-space. ss2tf: state-pace to transfer function. zpk2ss: zero-pole-gain to state-space. ss2zpk: state-space to pole-zero-gain. 
Waveforms: sawtooth: Periodic sawtooth square: Square wave gausspulse: Gaussian modulated sinusoid chirp: Frequency swept cosine signal, with several frequency functions. sweep_poly: Frequency swept cosine signal; frequency is arbitrary polynomial. Window functions: get_window: Return a window of a given length and type. barthann: Bartlett-Hann window bartlett: Bartlett window blackman: Blackman window blackmanharris: Minimum 4-term Blackman-Harris window bohman: Bohman window boxcar: Boxcar window chebwin: Dolph-Chebyshev window flattop: Flat top window gaussian: Gaussian window general_gaussian: Generalized Gaussian window hamming: Hamming window hann: Hann window kaiser: Kaiser window nuttall: Nuttall's minimum 4-term Blackman-Harris window parzen: Parzen window slepian: Slepian window triang: Triangular window Wavelets: daub: return low-pass qmf: return quadrature mirror filter from low-pass cascade: compute scaling function and wavelet from coefficients morlet: Complex Morlet wavelet. """
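As a quick orientation to the design and filtering functions listed above,
here is a small usage sketch; the order, cutoff and test-signal values are
arbitrary:

import numpy as np
from scipy import signal

# Design a 4th-order digital Butterworth low-pass filter with a
# normalized cutoff of 0.2 (1.0 corresponds to the Nyquist frequency).
b, a = signal.butter(4, 0.2)

# Apply it to a noisy test signal.
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
y = signal.lfilter(b, a, x)

# Inspect the frequency response of the designed filter.
w, h = signal.freqz(b, a)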
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): SyntaxError: name 'x' is parameter and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): >>> obj.None = 1 Traceback (most recent call last): SyntaxError: invalid syntax >>> None = 1 Traceback (most recent call last): SyntaxError: assignment to keyword It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): SyntaxError: can't assign to () >>> f() = 1 Traceback (most recent call last): SyntaxError: can't assign to function call >>> del f() Traceback (most recent call last): SyntaxError: can't delete function call >>> a + 1 = 2 Traceback (most recent call last): SyntaxError: can't assign to operator >>> (x for x in x) = 1 Traceback (most recent call last): SyntaxError: can't assign to generator expression >>> 1 = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> "abc" = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> b"" = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> `1` = 1 Traceback (most recent call last): SyntaxError: invalid syntax If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): SyntaxError: can't assign to literal >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): SyntaxError: can't assign to operator >>> a if 1 else b = 1 Traceback (most recent call last): SyntaxError: can't assign to conditional expression From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_arguments(): >>> def f(x, y=1, z): ... pass Traceback (most recent call last): SyntaxError: non-default argument follows default argument >>> def f(x, None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax >>> def f(*None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax >>> def f(**None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_funcdef(): >>> def None(x): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_call(): >>> def f(it, *varargs): ... 
return list(it) >>> L = range(10) >>> f(x for x in L) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(x for x in L, 1) Traceback (most recent call last): SyntaxError: Generator expression must be parenthesized if not sole argument >>> f((x for x in L), 1) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... i244, i245, i246, i247, i248, i249, i250, i251, i252, ... i253, i254, i255) Traceback (most recent call last): SyntaxError: more than 255 arguments The actual error cases counts positional arguments, keyword arguments, and generator expression arguments separately. This test combines the three. >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... 
(x for x in i244), i245, i246, i247, i248, i249, i250, i251, ... i252=1, i253=1, i254=1, i255=1) Traceback (most recent call last): SyntaxError: more than 255 arguments >>> f(lambda x: x[0] = 3) Traceback (most recent call last): SyntaxError: lambda cannot contain assignment The grammar accepts any test (basically, any expression) in the keyword slot of a call site. Test a few different options. >>> f(x()=2) Traceback (most recent call last): SyntaxError: keyword can't be an expression >>> f(a or b=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression >>> f(x.y=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression More set_context(): >>> (x for x in x) += 1 Traceback (most recent call last): SyntaxError: can't assign to generator expression >>> None += 1 Traceback (most recent call last): SyntaxError: assignment to keyword >>> f() += 1 Traceback (most recent call last): SyntaxError: can't assign to function call Test continue in finally in weird combinations. continue in for loop under finally should be ok. >>> def test(): ... try: ... pass ... finally: ... for abc in range(10): ... continue ... print(abc) >>> test() 9 Start simple, a continue in a finally should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause This is essentially a continue in a finally which should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... try: ... continue ... except: ... pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... try: ... continue ... finally: ... pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: pass ... finally: ... try: ... pass ... except: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause There is one test for a break that is not in a loop. The compiler uses a single data structure to keep track of try-finally and loops, so we need to be sure that a break is actually inside a loop. If it isn't, there should be a syntax error. >>> try: ... print(1) ... break ... print(2) ... finally: ... print(3) Traceback (most recent call last): ... SyntaxError: 'break' outside loop This should probably raise a better error than a SystemError (or none at all). In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 >>> while 1: ... while 2: ... while 3: ... while 4: ... while 5: ... while 6: ... while 8: ... while 9: ... while 10: ... while 11: ... while 12: ... while 13: ... while 14: ... while 15: ... while 16: ... while 17: ... while 18: ... while 19: ... while 20: ... while 21: ... while 22: ... break Traceback (most recent call last): ... SystemError: too many statically nested blocks Misuse of the nonlocal statement can lead to a few unique syntax errors. 
>>> def f(x): ... nonlocal x Traceback (most recent call last): ... SyntaxError: name 'x' is parameter and nonlocal >>> def f(): ... global x ... nonlocal x Traceback (most recent call last): ... SyntaxError: name 'x' is nonlocal and global >>> def f(): ... nonlocal x Traceback (most recent call last): ... SyntaxError: no binding for nonlocal 'x' found From SF bug #1705365 >>> nonlocal x Traceback (most recent call last): ... SyntaxError: nonlocal declaration not allowed at module level TODO(jhylton): Figure out how to test SyntaxWarning with doctest. ## >>> def f(x): ## ... def f(): ## ... print(x) ## ... nonlocal x ## Traceback (most recent call last): ## ... ## SyntaxWarning: name 'x' is assigned to before nonlocal declaration ## >>> def f(): ## ... x = 1 ## ... nonlocal x ## Traceback (most recent call last): ## ... ## SyntaxWarning: name 'x' is assigned to before nonlocal declaration This tests assignment-context; there was a bug in Python 2.5 where compiling a complex 'if' (one with 'elif') would fail to notice an invalid suite, leading to spurious errors. >>> if 1: ... x() = 1 ... elif 1: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... x() = 1 ... elif 1: ... pass ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... pass ... else: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call Make sure that the old "raise X, Y[, Z]" form is gone: >>> raise X, Y Traceback (most recent call last): ... SyntaxError: invalid syntax >>> raise X, Y, Z Traceback (most recent call last): ... SyntaxError: invalid syntax >>> f(a=23, a=234) Traceback (most recent call last): ... SyntaxError: keyword argument repeated >>> del () Traceback (most recent call last): SyntaxError: can't delete () >>> {1, 2, 3} = 42 Traceback (most recent call last): SyntaxError: can't assign to literal Corner-cases that used to fail to raise the correct error: >>> def f(*, x=lambda __debug__:0): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(*args:(lambda __debug__:0)): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(**kwargs:(lambda __debug__:0)): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> with (lambda *:0): pass Traceback (most recent call last): SyntaxError: named arguments must follow bare * Corner-cases that used to crash: >>> def f(**__debug__): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(*xx, __debug__): pass Traceback (most recent call last): SyntaxError: assignment to keyword """
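For readers unfamiliar with this testing style: the doctests above are run by
the test harness, and conceptually each case boils down to a check like the
following sketch (illustrative only, not part of the test module itself):

def raises_syntax_error(source):
    """Return True if compiling ``source`` raises SyntaxError."""
    try:
        compile(source, '<test>', 'exec')
    except SyntaxError:
        return True
    return False

assert raises_syntax_error('f() = 1')    # can't assign to function call
assert raises_syntax_error('None = 1')   # assignment to keyword
assert not raises_syntax_error('x = 1')  # a valid assignment compiles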
"""automatically manage newlines in repository files This extension allows you to manage the type of line endings (CRLF or LF) that are used in the repository and in the local working directory. That way you can get CRLF line endings on Windows and LF on Unix/Mac, thereby letting everybody use their OS native line endings. The extension reads its configuration from a versioned ``.hgeol`` configuration file found in the root of the working copy. The ``.hgeol`` file use the same syntax as all other Mercurial configuration files. It uses two sections, ``[patterns]`` and ``[repository]``. The ``[patterns]`` section specifies how line endings should be converted between the working copy and the repository. The format is specified by a file pattern. The first match is used, so put more specific patterns first. The available line endings are ``LF``, ``CRLF``, and ``BIN``. Files with the declared format of ``CRLF`` or ``LF`` are always checked out and stored in the repository in that format and files declared to be binary (``BIN``) are left unchanged. Additionally, ``native`` is an alias for checking out in the platform's default line ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's default behaviour; it is only needed if you need to override a later, more general pattern. The optional ``[repository]`` section specifies the line endings to use for files stored in the repository. It has a single setting, ``native``, which determines the storage line endings for files declared as ``native`` in the ``[patterns]`` section. It can be set to ``LF`` or ``CRLF``. The default is ``LF``. For example, this means that on Windows, files configured as ``native`` (``CRLF`` by default) will be converted to ``LF`` when stored in the repository. Files declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section are always stored as-is in the repository. Example versioned ``.hgeol`` file:: [patterns] **.py = native **.vcproj = CRLF **.txt = native Makefile = LF **.jpg = BIN [repository] native = LF .. note:: The rules will first apply when files are touched in the working copy, e.g. by updating to null and back to tip to touch all files. The extension uses an optional ``[eol]`` section read from both the normal Mercurial configuration files and the ``.hgeol`` file, with the latter overriding the former. You can use that section to control the overall behavior. There are three settings: - ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or ``CRLF`` to override the default interpretation of ``native`` for checkout. This can be used with :hg:`archive` on Unix, say, to generate an archive where files have line endings for Windows. - ``eol.only-consistent`` (default True) can be set to False to make the extension convert files with inconsistent EOLs. Inconsistent means that there is both ``CRLF`` and ``LF`` present in the file. Such files are normally not touched under the assumption that they have mixed EOLs on purpose. - ``eol.fix-trailing-newline`` (default False) can be set to True to ensure that converted files end with a EOL character (either ``\\n`` or ``\\r\\n`` as per the configured patterns). The extension provides ``cleverencode:`` and ``cleverdecode:`` filters like the deprecated win32text extension does. This means that you can disable win32text and enable eol and your filters will still work. You only need to these filters until you have prepared a ``.hgeol`` file. 
The ``win32text.forbid*`` hooks provided by the win32text extension have been unified into a single hook named ``eol.checkheadshook``. The hook will lookup the expected line endings from the ``.hgeol`` file, which means you must migrate to a ``.hgeol`` file first before using the hook. ``eol.checkheadshook`` only checks heads, intermediate invalid revisions will be pushed. To forbid them completely, use the ``eol.checkallhook`` hook. These hooks are best used as ``pretxnchangegroup`` hooks. See :hg:`help patterns` for more information about the glob patterns used. """
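A minimal configuration sketch, assuming the standard way of enabling bundled
extensions and the hook names documented above (adjust to your setup)::

  [extensions]
  eol =

  [hooks]
  # Reject pushed changegroups whose heads contain files with wrong EOLs.
  pretxnchangegroup = python:hgext.eol.checkheadshook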
""" .. index:: misc .. index: CostMatrix ----------------------- CostMatrix ----------------------- CostMatrix is an object that stores costs of (mis)classifications. Costs can be either negative or positive. .. class:: CostMatrix .. attribute:: class_var The (class) attribute to which the matrix applies. This can also be None. .. attribute:: dimension (read only) Matrix dimension, ie. number of classes. .. method:: CostMatrix(dimension[, default cost]) Constructs a matrix of the given size and initializes it with the default cost (1, if not given). All elements of the matrix are assigned the given cost, except for the diagonal that have the default cost of 0. (Diagonal elements represent correct classifications and these usually have no price; you can, however, change this.) .. literalinclude:: code/CostMatrix.py :lines: 1-8 This initializes the matrix and print it out: .. literalinclude:: code/CostMatrix.res :lines: 1-3 .. method:: CostMatrix(class descriptor[, default cost]) Similar as above, except that classVar is also set to the given descriptor. The number of values of the given attribute (which must be discrete) is used for dimension. .. literalinclude:: code/CostMatrix.py :lines: 10-11 This constructs a matrix similar to the one above (the class attribute in iris domain is three-valued) except that the matrix contains 2s instead of 1s. .. method:: CostMatrix([attribute descriptor, ]matrix) Initializes the matrix with the elements given as a sequence of sequences (you can mix lists and tuples if you find it funny). Each subsequence represents a row. .. literalinclude:: code/CostMatrix.py :lines: 13 If you print this matrix out, will it look like this: .. literalinclude:: code/CostMatrix.res :lines: 5-7 .. method:: setcost(predicted, correct, cost) Set the misclassification cost. The matrix above could be constructed by first initializing it with 2s and then changing the prices for virginica's into 1s. .. literalinclude:: code/CostMatrix.py :lines: 15-17 .. method:: getcost(predicted, correct) Returns the cost of prediction. Values must be integer indices; if class_var is set, you can also use symbolic values (strings). Note that there's no way to change the size of the matrix. Size is set at construction and does not change. For the final example, we shall compute the profits of knowing attribute values in the dataset lenses with the same cost-matrix as printed above. .. literalinclude:: code/CostMatrix.py :lines: 19-23 As the script shows, you don't have to (and usually won't) call the constructor explicitly. Instead, you will set the corresponding field (in our case meas.cost) to a matrix and let Orange convert it to CostMatrix automatically. Funny as it might look, but since Orange uses constructor to perform such conversion, even the above statement is correct (although the cost matrix is rather dull, with 0s on the diagonal and 1s around): .. literalinclude:: code/CostMatrix.py :lines: 25 .. index: SymMatrix ----------------------- SymMatrix ----------------------- :obj:`SymMatrix` implements symmetric matrices of size fixed at construction time (and stored in :obj:`SymMatrix.dim`). .. class:: SymMatrix .. attribute:: dim Matrix dimension. .. attribute:: matrix_type Can be ``SymMatrix.Lower`` (0), ``SymMatrix.Upper`` (1), ``SymMatrix.Symmetric`` (2, default), ``SymMatrix.LowerFilled`` (3) or ``SymMatrix.Upper_Filled`` (4). If the matrix type is ``Lower`` or ``Upper``, indexing above or below the diagonal, respectively, will fail. 
With ``LowerFilled`` and ``Upper_Filled``, the elements upper or lower, respectively, still exist and are set to zero, but they cannot be modified. The default matrix type is ``Symmetric``, but can be changed at any time. If matrix type is ``Upper``, it is printed as: >>> import Orange >>> m = Orange.misc.SymMatrix( ... [[1], ... [2, 4], ... [3, 6, 9], ... [4, 8, 12, 16]]) >>> m.matrix_type = m.Upper >>> print m (( 1.000, 2.000, 3.000, 4.000), ( 4.000, 6.000, 8.000), ( 9.000, 12.000), ( 16.000)) Changing the type to ``LowerFilled`` changes the printout to >>> m.matrix_type = m.LowerFilled >>> print m (( 1.000, 0.000, 0.000, 0.000), ( 2.000, 4.000, 0.000, 0.000), ( 3.000, 6.000, 9.000, 0.000), ( 4.000, 8.000, 12.000, 16.000)) .. method:: __init__(dim[, value]) Construct a symmetric matrix of the given dimension. :param dim: matrix dimension :type dim: int :param value: default value (0 by default) :type value: double .. method:: __init__(data) Construct a new symmetric matrix containing the given data. These can be given as Python list containing lists or tuples. The following example fills a matrix created above with data in a list:: import Orange m = [[], [ 3], [ 2, 4], [17, 5, 4], [ 2, 8, 3, 8], [ 7, 5, 10, 11, 2], [ 8, 4, 1, 5, 11, 13], [ 4, 7, 12, 8, 10, 1, 5], [13, 9, 14, 15, 7, 8, 4, 6], [12, 10, 11, 15, 2, 5, 7, 3, 1]] matrix = Orange.data.SymMatrix(m) SymMatrix also stores diagonal elements. They are set to zero, if they are not specified. The missing elements (shorter lists) are set to zero as well. If a list spreads over the diagonal, the constructor checks for asymmetries. For instance, the matrix :: m = [[], [ 3, 0, f], [ 2, 4]] is only OK if f equals 2. Finally, no row can be longer than matrix size. .. method:: get_values() Return all matrix values in a Python list. .. method:: get_KNN(i, k) Return k columns with the lowest value in the i-th row. :param i: i-th row :type i: int :param k: number of neighbors :type k: int .. method:: avg_linkage(clusters) Return a symmetric matrix with average distances between given clusters. :param clusters: list of clusters :type clusters: list of lists .. method:: invert(type) Invert values in the symmetric matrix. :param type: 0 (-X), 1 (1 - X), 2 (max - X), 3 (1 / X) :type type: int .. method:: normalize(type) Normalize values in the symmetric matrix. :param type: 0 (normalize to [0, 1] interval), 1 (Sigmoid) :type type: int Indexing .......... For symmetric matrices the order of indices is not important: if ``m`` is a SymMatrix, then ``m[2, 4]`` addresses the same element as ``m[4, 2]``. .. .. literalinclude:: code/symmatrix.py :lines: 1-6 >>> import Orange >>> m = Orange.misc.SymMatrix(4) >>> for i in range(4): ... for j in range(i+1): ... m[i, j] = (i+1)*(j+1) Although only the lower left half of the matrix was set explicitely, the whole matrix is constructed. >>> print m (( 1.000, 2.000, 3.000, 4.000), ( 2.000, 4.000, 6.000, 8.000), ( 3.000, 6.000, 9.000, 12.000), ( 4.000, 8.000, 12.000, 16.000)) Entire rows are indexed with a single index. They can be iterated over in a for loop or sliced (with, for example, ``m[:3]``): >>> print m[1] (2.0, 4.0, 6.0, 8.0) >>> m.matrix_type = m.Lower >>> for row in m: ... print row (1.0,) (2.0, 4.0) (3.0, 6.0, 9.0) (4.0, 8.0, 12.0, 16.0) .. index: Random number generator ----------------------- Random number generator ----------------------- :obj:`Random` uses the `Mersenne twister <http://en.wikipedia.org/wiki/Mersenne_twister>`_ algorithm to generate random numbers. 
:: >>> import Orange >>> rg = Orange.misc.Random(42) >>> rg(10) 4 >>> rg(10) 7 >>> rg.uses # We called rg two times. 2 >>> rg.reset() >>> rg(10) 4 >>> rg(10) 7 >>> rg.uses 2 .. class:: Random(seed) :param initseed: Seed used for initializing the random generator. :type initseed: int .. method:: __call__(n) Return a random integer R such that 0 <= R < n. :type n: int .. method:: reset([seed]) Reinitialize the random generator with `initseed`. If `initseed` is not given use the existing value of attribute `initseed`. .. attribute:: uses The number of times the generator was called after initialization/reset. .. attribute:: initseed Random seed. Two examples or random number generator uses found in the documentation are :obj:`Orange.evaluation.testing` and :obj:`Orange.data.Table`. """
""" A multi-dimensional ``Vector`` class, take 8: operator ``==`` A ``Vector`` is built from an iterable of numbers:: >>> Vector([3.1, 4.2]) Vector([3.1, 4.2]) >>> Vector((3, 4, 5)) Vector([3.0, 4.0, 5.0]) >>> Vector(range(10)) Vector([0.0, 1.0, 2.0, 3.0, 4.0, ...]) Tests with 2-dimensions (same results as ``vector2d_v1.py``):: >>> v1 = Vector([3, 4]) >>> x, y = v1 >>> x, y (3.0, 4.0) >>> v1 Vector([3.0, 4.0]) >>> v1_clone = eval(repr(v1)) >>> v1 == v1_clone True >>> print(v1) (3.0, 4.0) >>> octets = bytes(v1) >>> octets b'd\\x00\\x00\\x00\\x00\\x00\\x00\\x08@\\x00\\x00\\x00\\x00\\x00\\x00\\x10@' >>> abs(v1) 5.0 >>> bool(v1), bool(Vector([0, 0])) (True, False) Test of ``.frombytes()`` class method: >>> v1_clone = Vector.frombytes(bytes(v1)) >>> v1_clone Vector([3.0, 4.0]) >>> v1 == v1_clone True Tests with 3-dimensions:: >>> v1 = Vector([3, 4, 5]) >>> x, y, z = v1 >>> x, y, z (3.0, 4.0, 5.0) >>> v1 Vector([3.0, 4.0, 5.0]) >>> v1_clone = eval(repr(v1)) >>> v1 == v1_clone True >>> print(v1) (3.0, 4.0, 5.0) >>> abs(v1) # doctest:+ELLIPSIS 7.071067811... >>> bool(v1), bool(Vector([0, 0, 0])) (True, False) Tests with many dimensions:: >>> v7 = Vector(range(7)) >>> v7 Vector([0.0, 1.0, 2.0, 3.0, 4.0, ...]) >>> abs(v7) # doctest:+ELLIPSIS 9.53939201... Test of ``.__bytes__`` and ``.frombytes()`` methods:: >>> v1 = Vector([3, 4, 5]) >>> v1_clone = Vector.frombytes(bytes(v1)) >>> v1_clone Vector([3.0, 4.0, 5.0]) >>> v1 == v1_clone True Tests of sequence behavior:: >>> v1 = Vector([3, 4, 5]) >>> len(v1) 3 >>> v1[0], v1[len(v1)-1], v1[-1] (3.0, 5.0, 5.0) Test of slicing:: >>> v7 = Vector(range(7)) >>> v7[-1] 6.0 >>> v7[1:4] Vector([1.0, 2.0, 3.0]) >>> v7[-1:] Vector([6.0]) >>> v7[1,2] Traceback (most recent call last): ... TypeError: Vector indices must be integers Tests of dynamic attribute access:: >>> v7 = Vector(range(10)) >>> v7.x 0.0 >>> v7.y, v7.z, v7.t (1.0, 2.0, 3.0) Dynamic attribute lookup failures:: >>> v7.k Traceback (most recent call last): ... AttributeError: 'Vector' object has no attribute 'k' >>> v3 = Vector(range(3)) >>> v3.t Traceback (most recent call last): ... AttributeError: 'Vector' object has no attribute 't' >>> v3.spam Traceback (most recent call last): ... 
AttributeError: 'Vector' object has no attribute 'spam' Tests of hashing:: >>> v1 = Vector([3, 4]) >>> v2 = Vector([3.1, 4.2]) >>> v3 = Vector([3, 4, 5]) >>> v6 = Vector(range(6)) >>> hash(v1), hash(v3), hash(v6) (7, 2, 1) Most hash values of non-integers vary from a 32-bit to 64-bit Python build:: >>> import sys >>> hash(v2) == (384307168202284039 if sys.maxsize > 2**32 else 357915986) True Tests of ``format()`` with Cartesian coordinates in 2D:: >>> v1 = Vector([3, 4]) >>> format(v1) '(3.0, 4.0)' >>> format(v1, '.2f') '(3.00, 4.00)' >>> format(v1, '.3e') '(3.000e+00, 4.000e+00)' Tests of ``format()`` with Cartesian coordinates in 3D and 7D:: >>> v3 = Vector([3, 4, 5]) >>> format(v3) '(3.0, 4.0, 5.0)' >>> format(Vector(range(7))) '(0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0)' Tests of ``format()`` with spherical coordinates in 2D, 3D and 4D:: >>> format(Vector([1, 1]), 'h') # doctest:+ELLIPSIS '<1.414213..., 0.785398...>' >>> format(Vector([1, 1]), '.3eh') '<1.414e+00, 7.854e-01>' >>> format(Vector([1, 1]), '0.5fh') '<1.41421, 0.78540>' >>> format(Vector([1, 1, 1]), 'h') # doctest:+ELLIPSIS '<1.73205..., 0.95531..., 0.78539...>' >>> format(Vector([2, 2, 2]), '.3eh') '<3.464e+00, 9.553e-01, 7.854e-01>' >>> format(Vector([0, 0, 0]), '0.5fh') '<0.00000, 0.00000, 0.00000>' >>> format(Vector([-1, -1, -1, -1]), 'h') # doctest:+ELLIPSIS '<2.0, 2.09439..., 2.18627..., 3.92699...>' >>> format(Vector([2, 2, 2, 2]), '.3eh') '<4.000e+00, 1.047e+00, 9.553e-01, 7.854e-01>' >>> format(Vector([0, 1, 0, 0]), '0.5fh') '<1.00000, 1.57080, 0.00000, 0.00000>' Unary operator tests:: >>> v1 = Vector([3, 4]) >>> abs(v1) 5.0 >>> -v1 Vector([-3.0, -4.0]) >>> +v1 Vector([3.0, 4.0]) Basic tests of operator ``+``:: >>> v1 = Vector([3, 4, 5]) >>> v2 = Vector([6, 7, 8]) >>> v1 + v2 Vector([9.0, 11.0, 13.0]) >>> v1 + v2 == Vector([3+6, 4+7, 5+8]) True >>> v3 = Vector([1, 2]) >>> v1 + v3 # short vectors are filled with 0.0 on addition Vector([4.0, 6.0, 5.0]) Tests of ``+`` with mixed types:: >>> v1 + (10, 20, 30) Vector([13.0, 24.0, 35.0]) >>> from vector2d_v3 import Vector2d >>> v2d = Vector2d(1, 2) >>> v1 + v2d Vector([4.0, 6.0, 5.0]) Tests of ``+`` with mixed types, swapped operands:: >>> (10, 20, 30) + v1 Vector([13.0, 24.0, 35.0]) >>> from vector2d_v3 import Vector2d >>> v2d = Vector2d(1, 2) >>> v2d + v1 Vector([4.0, 6.0, 5.0]) Tests of ``+`` with an unsuitable operand: >>> v1 + 1 Traceback (most recent call last): ... TypeError: unsupported operand type(s) for +: 'Vector' and 'int' >>> v1 + 'ABC' Traceback (most recent call last): ... TypeError: unsupported operand type(s) for +: 'Vector' and 'str' Basic tests of operator ``*``:: >>> v1 = Vector([1, 2, 3]) >>> v1 * 10 Vector([10.0, 20.0, 30.0]) >>> 10 * v1 Vector([10.0, 20.0, 30.0]) Tests of ``*`` with unusual but valid operands:: >>> v1 * True Vector([1.0, 2.0, 3.0]) >>> from fractions import Fraction >>> v1 * Fraction(1, 3) # doctest:+ELLIPSIS Vector([0.3333..., 0.6666..., 1.0]) Tests of ``*`` with unsuitable operands:: >>> v1 * (1, 2) Traceback (most recent call last): ... TypeError: can't multiply sequence by non-int of type 'Vector' Tests of operator `==`:: >>> va = Vector(range(1, 4)) >>> vb = Vector([1.0, 2.0, 3.0]) >>> va == vb True >>> vc = Vector([1, 2]) >>> from vector2d_v3 import Vector2d >>> v2d = Vector2d(1, 2) >>> vc == v2d True >>> va == (1, 2, 3) False Tests of operator `!=`:: >>> va != vb False >>> vc != v2d False >>> va != (1, 2, 3) True """
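The ``==`` behaviour exercised above (True for an equal ``Vector2d``, False
for a plain tuple) is typically achieved by returning ``NotImplemented`` for
non-``Vector`` operands. A sketch of that approach, assumed rather than
quoted from this file's implementation::

    def __eq__(self, other):
        if isinstance(other, Vector):
            # Element-wise comparison, short-circuiting on length.
            return (len(self) == len(other) and
                    all(a == b for a, b in zip(self, other)))
        else:
            # Give the other operand's __eq__ a chance; if it also
            # returns NotImplemented, Python falls back to identity,
            # which is why ``va == (1, 2, 3)`` evaluates to False.
            return NotImplemented

``vc == v2d`` is still True because the reflected ``Vector2d.__eq__`` compares
the operands as tuples of components.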
""" # ggame The simple cross-platform sprite and game platform for Brython Server (Pygame, Tkinter to follow?). Ggame stands for a couple of things: "good game" (of course!) and also "git game" or "github game" because it is designed to operate with [Brython Server](http://runpython.com) in concert with Github as a backend file store. Ggame is **not** intended to be a full-featured gaming API, with every bell and whistle. Ggame is designed primarily as a tool for teaching computer programming, recognizing that the ability to create engaging and interactive games is a powerful motivator for many progamming students. Accordingly, any functional or performance enhancements that *can* be reasonably implemented by the user are left as an exercise. ## Functionality Goals The ggame library is intended to be trivially easy to use. For example: from ggame import App, ImageAsset, Sprite # Create a displayed object at 100,100 using an image asset Sprite(ImageAsset("ggame/bunny.png"), (100,100)) # Create the app, with a 500x500 pixel stage app = App(500,500) # Run the app app.run() ## Overview There are three major components to the `ggame` system: Assets, Sprites and the App. ### Assets Asset objects (i.e. `ggame.ImageAsset`, etc.) typically represent separate files that are provided by the "art department". These might be background images, user interface images, or images that represent objects in the game. In addition, `ggame.SoundAsset` is used to represent sound files (`.wav` or `.mp3` format) that can be played in the game. Ggame also extends the asset concept to include graphics that are generated dynamically at run-time, such as geometrical objects, e.g. rectangles, lines, etc. ### Sprites All of the visual aspects of the game are represented by instances of `ggame.Sprite` or subclasses of it. ### App Every ggame application must create a single instance of the `ggame.App` class (or a sub-class of it). Creating an instance of the `ggame.App` class will initiate creation of a pop-up window on your browser. Executing the app's `run` method will begin the process of refreshing the visual assets on the screen. ### Events No game is complete without a player and players produce events. Your code handles user input by registering to receive keyboard and mouse events using `ggame.App.listenKeyEvent` and `ggame.App.listenMouseEvent` methods. ## Execution Environment Ggame is designed to be executed in a web browser using [Brython](http://brython.info/), [Pixi.js](http://www.pixijs.com/) and [Buzz](http://buzz.jaysalvat.com/). The easiest way to do this is by executing from [runpython](http://runpython.com), with source code residing on [github](http://github.com). When using [runpython](http://runpython.com), you will have to configure your browser to allow popup windows. To use Ggame in your own application, you will minimally need to create a folder called `ggame` in your project. Within `ggame`, copy the `ggame.py`, `sysdeps.py` and `__init__.py` files from the [ggame project](https://github.com/BrythonServer/ggame). 
### Include Ggame as a Git Subtree From the same directory as your own python sources (note: you must have an existing git repository with committed files in order for the following to work properly), execute the following terminal commands: git remote add -f ggame https://github.com/BrythonServer/ggame.git git merge -s ours --no-commit ggame/master mkdir ggame git read-tree --prefix=ggame/ -u ggame/master git commit -m "Merge ggame project as our subdirectory" If you want to pull in updates from ggame in the future: git pull -s subtree ggame master You can see an example of how a ggame subtree is used by examining the [Brython Server Spacewar](https://github.com/BrythonServer/Spacewar) repo on Github. ## Geometry When referring to screen coordinates, note that the x-axis of the computer screen is *horizontal* with the zero position on the left hand side of the screen. The y-axis is *vertical* with the zero position at the **top** of the screen. Increasing positive y-coordinates correspond to the downward direction on the computer screen. Note that this is **different** from the way you may have learned about x and y coordinates in math class! """
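Since ggame's y-axis points down while the usual math convention points up, a
tiny helper (hypothetical, not part of ggame) can convert between the two:

    def to_screen(x, y, screen_height):
        """Convert math-style coordinates (y up, origin at bottom-left)
        to ggame screen coordinates (y down, origin at top-left)."""
        return x, screen_height - y

    # e.g. with a 500-pixel-tall stage, the math point (100, 50)
    # lands 450 pixels down from the top of the screen:
    assert to_screen(100, 50, 500) == (100, 450)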
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
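A minimal usage sketch of the API above (the section and option names are made up for illustration)::

    from configparser import ConfigParser

    cfg = ConfigParser()
    cfg.read_string("[server]\nport = 8080\n")
    host = cfg.get("server", "host", fallback="localhost")  # missing option -> fallback
    port = cfg.getint("server", "port")                     # "8080" converted to int
    assert (host, port) == ("localhost", 8080)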
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I am trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over.

A drag-and-drop process can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event).

If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence.

Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than doing it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave().

At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit().
"""
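A minimal sketch of the protocol described above, assuming Python 3's tkinter.dnd module (the modern name for Tkdnd); the classes and layout are purely illustrative:

    import tkinter
    from tkinter import dnd

    class Dragged:
        # Application-specific source object; it need not be a widget.
        def dnd_end(self, target, event):
            print("drag ended on:", target)

    class Target:
        def dnd_accept(self, source, event):
            return self                      # we are the dnd target object
        def dnd_enter(self, source, event): pass
        def dnd_motion(self, source, event): pass
        def dnd_leave(self, source, event): pass
        def dnd_commit(self, source, event):
            print("dropped:", source)

    root = tkinter.Tk()
    label = tkinter.Label(root, text="drag me")
    label.pack()
    frame = tkinter.Frame(root, width=120, height=120, bg="gray")
    frame.dnd_accept = Target().dnd_accept   # discovered via the widget attribute
    frame.pack()
    label.bind("<ButtonPress>", lambda e: dnd.dnd_start(Dragged(), e))
    root.mainloop()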
""" ============== Array Creation ============== Introduction ============ There are 5 general mechanisms for creating arrays: 1) Conversion from other Python structures (e.g., lists, tuples) 2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, etc.) 3) Reading arrays from disk, either from standard or custom formats 4) Creating arrays from raw bytes through the use of strings or buffers 5) Use of special library functions (e.g., random) This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or record arrays. Both of those are covered in their own sections. Converting Python array_like Objects to Numpy Arrays ==================================================== In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: :: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Intrinsic Numpy Array Creation ============================== Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. ``>>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]])`` ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: :: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: :: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description: :: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. Reading Arrays From Disk ======================== This is presumably the most common case of large array creation. 
The details, of course, depend greatly on the format of data on disk and so this section can only give general pointers on how to handle various formats. Standard Binary Formats ----------------------- Various fields have standard formats for array data. The following lists the ones with known python libraries to read them and return numpy arrays (there may be others for which it is possible to read and convert to numpy arrays so check the last section as well) :: HDF5: PyTables FITS: PyFITS Examples of formats that cannot be read directly but for which it is not hard to convert are libraries like PIL (able to read and write many image formats such as jpg, png, etc). Common ASCII Formats ------------------------ Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part of matplotlib). More generic ascii files can be read using the io package in scipy. Custom Binary Formats --------------------- There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind your byteorder though!) If a good C or C++ library exists that read the data, one can wrap that library with a variety of techniques though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++. Use of Special Libraries ------------------------ There are libraries that can be used to generate arrays for special purposes and it isn't possible to enumerate all of them. The most common uses are use of the many array generation functions in random that can generate arrays of random values, and some utility functions to generate special matrices (e.g. diagonal). """
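A concrete sketch of the fromfile()/.tofile() round trip mentioned above (the file name is arbitrary; note that the raw file stores no shape, dtype, or byte-order metadata, so the reader must already know them)::

    import numpy as np

    a = np.arange(12, dtype=np.float64).reshape(3, 4)
    a.tofile("raw.dat")                # raw bytes only -- no metadata is stored
    b = np.fromfile("raw.dat", dtype=np.float64).reshape(3, 4)
    assert (a == b).all()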
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method.

Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used).

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000 NAME <EMAIL>

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.
"""
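A minimal, runnable illustration of the recipe above (a sketch assuming Python 3's socketserver module, 3.6+ for the with-statement form; host, port, and class names are arbitrary):

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Each connection is served in its own thread (ThreadingMixIn).
            for line in self.rfile:
                self.wfile.write(line)

    class ThreadingTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        pass  # the mix-in must come first, as noted above

    if __name__ == "__main__":
        with ThreadingTCPServer(("127.0.0.1", 9999), EchoHandler) as server:
            server.serve_forever()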
{
    'name': 'Theme Common',
    'summary': 'Snippets Library',
    'description': 'Snippets library containing snippets to be styled in Less Themes.',
    'category': 'Hidden',
    'version': '1.1',
    'author': 'Odoo NAME',
    'depends': ['website_less'],
    'data': [
        # image placeholder
        'data/image_content.xml',
        # Disable website master customize options
        'views/layout.xml',
        # CLEAN
        'views/s_big_image.xml',
        'views/s_big_image_parallax.xml',
        'views/s_color_blocks_2.xml',
        'views/s_color_blocks_4.xml',
        'views/s_column.xml',
        'views/s_discount.xml',
        'views/s_event_list.xml',
        'views/s_icon_box.xml',
        'views/s_icon_box_circle.xml',
        'views/s_icon_box_square.xml',
        'views/s_image_text_2.xml',
        'views/s_lead_bar.xml',
        'views/s_logo_bar.xml',
        'views/s_pricing.xml',
        'views/s_process_step.xml',
        'views/s_progress_bar.xml',
        'views/s_profile.xml',
        'views/s_separator_color.xml',
        'views/s_separator_shade.xml',
        'views/s_slide_banner.xml',
        'views/s_team_profiles.xml',
        'views/s_text.xml',
        'views/s_text_image_2.xml',
        'views/s_text_highlight.xml',
        'views/s_three_columns_circle.xml',
        'views/s_timeline.xml',
        'views/s_timeline_compact.xml',
        'views/s_title_small.xml',
        'views/s_two_columns.xml',
        # from theme enark
        'views/s_testimonial_slider.xml',
        'views/s_big_icon_circle.xml',
        'views/s_big_icon_square.xml',
        # from theme bewise
        'views/s_mini_nav_bar.xml',
        'views/s_separator_nav.xml',
        'views/s_separator_multicolor.xml',
        'views/s_separator_multicolor_fw.xml',
        'views/s_color_blocks_img.xml',
        'views/s_color_blocks_slider.xml',
        # ZAP
        'views/s_banner_2.xml',
        'views/s_carousel_boxed.xml',
        'views/s_collapse.xml',
        'views/s_features_circle.xml',
        'views/s_features_grid_circle.xml',
        'views/s_four_columns_fw.xml',
        'views/s_image_text_fw.xml',
        'views/s_images_captions_fw.xml',
        'views/s_label.xml',
        'views/s_medias_list.xml',
        'views/s_news_carousel.xml',
        'views/s_numbers.xml',
        'views/s_page_header.xml',
        'views/s_process_steps_2.xml',
        'views/s_quotes_slider_2.xml',
        'views/s_references_2.xml',
        'views/s_tabs.xml',
        'views/s_team_profiles_2.xml',
        'views/s_text_block_image_fw.xml',
        'views/s_text_image_fw.xml',
        'views/s_three_columns_carousel.xml',
        # KIDDO
        'views/s_1_column_text.xml',
        'views/s_2_column_text.xml',
        'views/s_3_column_text.xml',
        'views/s_4_column_text.xml',
        'views/s_banner_parallax.xml',
        'views/s_banner_3.xml',
        'views/s_big_picture_text.xml',
        'views/s_button_text.xml',
        'views/s_cubes.xml',
        'views/s_features_carousel.xml',
        'views/s_header_text_big_picture.xml',
        'views/s_product_list.xml',
        'views/s_separator_block.xml',
        'views/s_text_big_picture.xml',
        'views/s_text_picture_text.xml',
        'views/s_title_big.xml',
        'views/s_products_carousel.xml',
        # ANELUSIA
        'views/s_progress.xml',
        'views/s_menu_three_columns.xml',
        # GRAPHENE
        'views/s_media_block.xml',
        'views/s_animated_boxes.xml',
        'views/s_showcase.xml',
        'views/s_showcase_slider.xml',
        'views/s_masonry_block.xml',
        # AVANTGARDE
        'views/s_css_slider.xml',
        'views/s_showcase_image_right.xml',
        'views/s_showcase_image_left.xml',
        'views/s_clonable_boxes.xml',
        # TREEHOUSE
        'views/s_timeline_icons.xml',
        # MONGLIA
        'views/s_full_menu.xml',
        'views/s_compact_menu.xml',
        'views/s_event_slide.xml',
    ],
    'demo': [
        'demo/demo.xml',
    ],
    'application': False,
}
# import datetime
# import decimal
#
# from django.contrib.auth import get_user_model
# from django.contrib.auth.models import AnonymousUser
# from django.test import TestCase
# from django.test.client import RequestFactory
# from django.test.utils import override_settings
# from django.utils import timezone
#
# from djbraintree.models import Customer, CurrentSubscription
# from djbraintree.middleware import SubscriptionPaymentMiddleware
#
#
# class MiddlewareURLTest(TestCase):
#
#     def setUp(self):
#         self.settings(ROOT_URLCONF='tests.test_urls')
#         self.factory = RequestFactory()
#         self.user = get_user_model().objects.create_user(
#             username="pydanny", email="EMAIL")
#         self.middleware = SubscriptionPaymentMiddleware()
#
#     def test_appname(self):
#         request = self.factory.get("/admin/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     def test_namespace(self):
#         request = self.factory.get("/djbraintree/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     def test_namespace_and_url(self):
#         request = self.factory.get("/testapp_namespaced/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     def test_url(self):
#         request = self.factory.get("/testapp/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     @override_settings(DEBUG=True)
#     def test_djdt(self):
#         request = self.factory.get("/__debug__/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     def test_fnmatch(self):
#         request = self.factory.get("/test_fnmatch/extra_text/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#
# class MiddlewareLogicTest(TestCase):
#     urls = 'tests.test_urls'
#
#     def setUp(self):
#         period_start = datetime.datetime(2013, 4, 1, tzinfo=timezone.utc)
#         period_end = datetime.datetime(2013, 4, 30, tzinfo=timezone.utc)
#         start = datetime.datetime(2013, 1, 1, tzinfo=timezone.utc)
#
#         self.factory = RequestFactory()
#         self.user = get_user_model().objects.create_user(
#             username="pydanny", email="EMAIL")
#         self.customer = Customer.objects.create(
#             subscriber=self.user,
#             stripe_id="cus_xxxxxxxxxxxxxxx",
#             card_fingerprint="YYYYYYYY",
#             card_last_4="2342",
#             card_kind="Visa"
#         )
#         self.subscription = CurrentSubscription.objects.create(
#             customer=self.customer,
#             plan="test",
#             current_period_start=period_start,
#             current_period_end=period_end,
#             amount=(500 / decimal.Decimal("100.0")),
#             status="active",
#             start=start,
#             quantity=1,
#             cancel_at_period_end=True
#         )
#         self.middleware = SubscriptionPaymentMiddleware()
#
#     def test_anonymous(self):
#         request = self.factory.get("/djbraintree/")
#         request.user = AnonymousUser()
#
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     def test_is_staff(self):
#         self.user.is_staff = True
#         self.user.save()
#
#         request = self.factory.get("/djbraintree/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     def test_is_superuser(self):
#         self.user.is_superuser = True
#         self.user.save()
#
#         request = self.factory.get("/djbraintree/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
#
#     def test_customer_has_inactive_subscription(self):
#         request = self.factory.get("/testapp_content/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response.status_code, 302)
#
#     def test_customer_has_active_subscription(self):
#         end_date = datetime.datetime(2100, 4, 30, tzinfo=timezone.utc)
#         self.subscription.current_period_end = end_date
#         self.subscription.save()
#
#         request = self.factory.get("/testapp_content/")
#         request.user = USERNAME
#         response = self.middleware.process_request(request)
#         self.assertEqual(response, None)
""" Define a simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able to create a solution in their preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. ``\\x00``. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array's format. It is an ASCII string which contains a Python literal expression of a dictionary. It is terminated by a newline (``\\n``) and padded with spaces (``\\x20``) to make the total length of ``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment purposes. The dictionary contains three keys: "descr" : dtype.descr An object that can be passed as an argument to the `numpy.dtype` constructor to create the array's dtype. "fortran_order" : bool Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency. "shape" : tuple of int The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. ``dtype.hasobject is True``), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on ``fortran_order``) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that ``shape=()`` means there is 1 element) by ``dtype.itemsize``. Format Version 2.0 ------------------ The version 1.0 format only allowed the array header to have a total size of 65535 bytes. This can be exceeded by structured arrays with a large number of columns. The version 2.0 format extends the header size to 4 GiB. `numpy.save` will automatically save in 2.0 format if the data requires it, else it will always use the more compatible 1.0 format. The description of the fourth element of the header therefore has become: "The next 4 bytes form a little-endian unsigned int: the length of the header data HEADER_LEN." Notes ----- The ``.npy`` format, including reasons for creating it and a comparison of alternatives, is described fully in the "npy-format" NEP. """
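As an illustration of the Version 1.0 layout above, a minimal header reader (a sketch only; the real implementation lives in numpy.lib.format, and the path is arbitrary)::

    import ast, struct

    def read_npy_header(path):
        with open(path, "rb") as f:
            assert f.read(6) == b"\x93NUMPY"                # magic string
            major, minor = f.read(1)[0], f.read(1)[0]       # format version bytes
            assert (major, minor) == (1, 0)
            (header_len,) = struct.unpack("<H", f.read(2))  # little-endian ushort
            header = ast.literal_eval(f.read(header_len).decode("ascii"))
            return header["descr"], header["fortran_order"], header["shape"]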
""" ============== Array Creation ============== Introduction ============ There are 5 general mechanisms for creating arrays: 1) Conversion from other Python structures (e.g., lists, tuples) 2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, etc.) 3) Reading arrays from disk, either from standard or custom formats 4) Creating arrays from raw bytes through the use of strings or buffers 5) Use of special library functions (e.g., random) This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or structured arrays. Both of those are covered in their own sections. Converting Python array_like Objects to Numpy Arrays ==================================================== In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: :: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Intrinsic Numpy Array Creation ============================== Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. ``>>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]])`` ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: :: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: :: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description: :: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. Reading Arrays From Disk ======================== This is presumably the most common case of large array creation. 
The details, of course, depend greatly on the format of data on disk and so this section can only give general pointers on how to handle various formats. Standard Binary Formats ----------------------- Various fields have standard formats for array data. The following lists the ones with known python libraries to read them and return numpy arrays (there may be others for which it is possible to read and convert to numpy arrays so check the last section as well) :: HDF5: PyTables FITS: PyFITS Examples of formats that cannot be read directly but for which it is not hard to convert are those formats supported by libraries like PIL (able to read and write many image formats such as jpg, png, etc). Common ASCII Formats ------------------------ Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part of matplotlib). More generic ascii files can be read using the io package in scipy. Custom Binary Formats --------------------- There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind your byteorder though!) If a good C or C++ library exists that read the data, one can wrap that library with a variety of techniques though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++. Use of Special Libraries ------------------------ There are libraries that can be used to generate arrays for special purposes and it isn't possible to enumerate all of them. The most common uses are use of the many array generation functions in random that can generate arrays of random values, and some utility functions to generate special matrices (e.g. diagonal). """
"""Stuff to parse Sun and NeXT audio files. An audio file consists of a header followed by the data. The structure of the header is as follows. +---------------+ | magic word | +---------------+ | header size | +---------------+ | data size | +---------------+ | encoding | +---------------+ | sample rate | +---------------+ | # of channels | +---------------+ | info | | | +---------------+ The magic word consists of the 4 characters '.snd'. Apart from the info field, all header fields are 4 bytes in size. They are all 32-bit unsigned integers encoded in big-endian byte order. The header size really gives the start of the data. The data size is the physical size of the data. From the other parameters the number of frames can be calculated. The encoding gives the way in which audio samples are encoded. Possible values are listed below. The info field currently consists of an ASCII string giving a human-readable description of the audio file. The info field is padded with NUL bytes to the header size. Usage. Reading audio files: f = sunau.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). When the setpos() and rewind() methods are not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' or 'ULAW') getcompname() -- returns human-readable version of compression type ('not compressed' matches 'NONE') getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- returns None (for compatibility with the aifc module) getmark(id) -- raises an error since the mark does not exist (for compatibility with the aifc module) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell() and the position given to setpos() are compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing audio files: f = sunau.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). This returns an instance of a class with the following public methods: setnchannels(n) -- set the number of channels setsampwidth(n) -- set the sample width setframerate(n) -- set the frame rate setnframes(n) -- set the number of frames setcomptype(type, name) -- set the compression type and the human-readable compression type setparams(tuple)-- set all parameters at once tell() -- return current position in output file writeframesraw(data) -- write audio frames without pathing up the file header writeframes(data) -- write audio frames and patch up the file header close() -- patch up the file header and close the output file You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. 
It is best to first set all parameters, perhaps possibly the compression type, and then write audio frames using writeframesraw. When all frames have been written, either call writeframes('') or close() to patch up the sizes in the header. The close() method is called automatically when the class instance is destroyed. """
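A round-trip sketch under the API above (the file name and parameters are arbitrary; note that the sunau module was removed from the standard library in Python 3.13):

    import sunau

    w = sunau.open("beep.au", "w")
    w.setnchannels(1)
    w.setsampwidth(2)                   # 16-bit samples
    w.setframerate(8000)
    w.writeframes(b"\x00\x00" * 8000)   # one second of silence; header patched on close
    w.close()

    r = sunau.open("beep.au", "r")
    print(r.getnchannels(), r.getsampwidth(), r.getframerate(), r.getnframes())
    r.close()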
""" Simple config ============= Although CherryPy uses the :mod:`Python logging module <logging>`, it does so behind the scenes so that simple logging is simple, but complicated logging is still possible. "Simple" logging means that you can log to the screen (i.e. console/stdout) or to a file, and that you can easily have separate error and access log files. Here are the simplified logging settings. You use these by adding lines to your config file or dict. You should set these at either the global level or per application (see next), but generally not both. * ``log.screen``: Set this to True to have both "error" and "access" messages printed to stdout. * ``log.access_file``: Set this to an absolute filename where you want "access" messages written. * ``log.error_file``: Set this to an absolute filename where you want "error" messages written. Many events are automatically logged; to log your own application events, call :func:`cherrypy.log`. Architecture ============ Separate scopes --------------- CherryPy provides log managers at both the global and application layers. This means you can have one set of logging rules for your entire site, and another set of rules specific to each application. The global log manager is found at :func:`cherrypy.log`, and the log manager for each application is found at :attr:`app.log<cherrypy._cptree.Application.log>`. If you're inside a request, the latter is reachable from ``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain a reference to the ``app``: either the return value of :func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used :func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``. By default, the global logs are named "cherrypy.error" and "cherrypy.access", and the application logs are named "cherrypy.error.2378745" and "cherrypy.access.2378745" (the number is the id of the Application object). This means that the application logs "bubble up" to the site logs, so if your application has no log handlers, the site-level handlers will still log the messages. Errors vs. Access ----------------- Each log manager handles both "access" messages (one per HTTP request) and "error" messages (everything else). Note that the "error" log is not just for errors! The format of access messages is highly formalized, but the error log isn't--it receives messages from a variety of sources (including full error tracebacks, if enabled). If you are logging the access log and error log to the same source, then there is a possibility that a specially crafted error message may replicate an access log message as described in CWE-117. In this case it is the application developer's responsibility to manually escape data before using CherryPy's log() functionality, or they may create an application that is vulnerable to CWE-117. This would be achieved by using a custom handler escape any special characters, and attached as described below. Custom Handlers =============== The simple settings above work by manipulating Python's standard :mod:`logging` module. So when you need something more complex, the full power of the standard module is yours to exploit. You can borrow or create custom handlers, formats, filters, and much more. Here's an example that skips the standard FileHandler and uses a RotatingFileHandler instead: :: #python log = app.log # Remove the default FileHandlers if present. 
log.error_file = "" log.access_file = "" maxBytes = getattr(log, "rot_maxBytes", 10000000) backupCount = getattr(log, "rot_backupCount", 1000) # Make a new RotatingFileHandler for the error log. fname = getattr(log, "rot_error_file", "error.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.error_log.addHandler(h) # Make a new RotatingFileHandler for the access log. fname = getattr(log, "rot_access_file", "access.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.access_log.addHandler(h) The ``rot_*`` attributes are pulled straight from the application log object. Since "log.*" config entries simply set attributes on the log object, you can add custom attributes to your heart's content. Note that these handlers are used ''instead'' of the default, simple handlers outlined above (so don't set the "log.error_file" config entry, for example). """
""" ==================================== Linear algebra (:mod:`scipy.linalg`) ==================================== .. currentmodule:: scipy.linalg Linear algebra functions. .. eventually, we should replace the numpy.linalg HTML link with just `numpy.linalg` .. seealso:: `numpy.linalg <https://www.numpy.org/devdocs/reference/routines.linalg.html>`__ for more linear algebra functions. Note that although `scipy.linalg` imports most of them, identically named functions from `scipy.linalg` may offer more or slightly differing functionality. Basics ====== .. autosummary:: :toctree: generated/ inv - Find the inverse of a square matrix solve - Solve a linear system of equations solve_banded - Solve a banded linear system solveh_banded - Solve a Hermitian or symmetric banded system solve_circulant - Solve a circulant system solve_triangular - Solve a triangular matrix solve_toeplitz - Solve a toeplitz matrix matmul_toeplitz - Multiply a Toeplitz matrix with an array. det - Find the determinant of a square matrix norm - Matrix and vector norm lstsq - Solve a linear least-squares problem pinv - Pseudo-inverse (Moore-Penrose) using lstsq pinv2 - Pseudo-inverse using svd pinvh - Pseudo-inverse of hermitian matrix kron - Kronecker product of two arrays khatri_rao - Khatri-Rao product of two arrays tril - Construct a lower-triangular matrix from a given matrix triu - Construct an upper-triangular matrix from a given matrix orthogonal_procrustes - Solve an orthogonal Procrustes problem matrix_balance - Balance matrix entries with a similarity transformation subspace_angles - Compute the subspace angles between two matrices LinAlgError LinAlgWarning Eigenvalue Problems =================== .. autosummary:: :toctree: generated/ eig - Find the eigenvalues and eigenvectors of a square matrix eigvals - Find just the eigenvalues of a square matrix eigh - Find the e-vals and e-vectors of a Hermitian or symmetric matrix eigvalsh - Find just the eigenvalues of a Hermitian or symmetric matrix eig_banded - Find the eigenvalues and eigenvectors of a banded matrix eigvals_banded - Find just the eigenvalues of a banded matrix eigh_tridiagonal - Find the eigenvalues and eigenvectors of a tridiagonal matrix eigvalsh_tridiagonal - Find just the eigenvalues of a tridiagonal matrix Decompositions ============== .. autosummary:: :toctree: generated/ lu - LU decomposition of a matrix lu_factor - LU decomposition returning unordered matrix and pivots lu_solve - Solve Ax=b using back substitution with output of lu_factor svd - Singular value decomposition of a matrix svdvals - Singular values of a matrix diagsvd - Construct matrix of singular values from output of svd orth - Construct orthonormal basis for the range of A using svd null_space - Construct orthonormal basis for the null space of A using svd ldl - LDL.T decomposition of a Hermitian or a symmetric matrix. cholesky - Cholesky decomposition of a matrix cholesky_banded - Cholesky decomp. of a sym. or Hermitian banded matrix cho_factor - Cholesky decomposition for use in solving a linear system cho_solve - Solve previously factored linear system cho_solve_banded - Solve previously factored banded linear system polar - Compute the polar decomposition. 
qr - QR decomposition of a matrix qr_multiply - QR decomposition and multiplication by Q qr_update - Rank k QR update qr_delete - QR downdate on row or column deletion qr_insert - QR update on row or column insertion rq - RQ decomposition of a matrix qz - QZ decomposition of a pair of matrices ordqz - QZ decomposition of a pair of matrices with reordering schur - Schur decomposition of a matrix rsf2csf - Real to complex Schur form hessenberg - Hessenberg form of a matrix cdf2rdf - Complex diagonal form to real diagonal block form cossin - Cosine sine decomposition of a unitary or orthogonal matrix .. seealso:: `scipy.linalg.interpolative` -- Interpolative matrix decompositions Matrix Functions ================ .. autosummary:: :toctree: generated/ expm - Matrix exponential logm - Matrix logarithm cosm - Matrix cosine sinm - Matrix sine tanm - Matrix tangent coshm - Matrix hyperbolic cosine sinhm - Matrix hyperbolic sine tanhm - Matrix hyperbolic tangent signm - Matrix sign sqrtm - Matrix square root funm - Evaluating an arbitrary matrix function expm_frechet - Frechet derivative of the matrix exponential expm_cond - Relative condition number of expm in the Frobenius norm fractional_matrix_power - Fractional matrix power Matrix Equation Solvers ======================= .. autosummary:: :toctree: generated/ solve_sylvester - Solve the Sylvester matrix equation solve_continuous_are - Solve the continuous-time algebraic Riccati equation solve_discrete_are - Solve the discrete-time algebraic Riccati equation solve_continuous_lyapunov - Solve the continuous-time Lyapunov equation solve_discrete_lyapunov - Solve the discrete-time Lyapunov equation Sketches and Random Projections =============================== .. autosummary:: :toctree: generated/ clarkson_woodruff_transform - Applies the Clarkson Woodruff Sketch (a.k.a CountMin Sketch) Special Matrices ================ .. autosummary:: :toctree: generated/ block_diag - Construct a block diagonal matrix from submatrices circulant - Circulant matrix companion - Companion matrix convolution_matrix - Convolution matrix dft - Discrete Fourier transform matrix fiedler - Fiedler matrix fiedler_companion - Fiedler companion matrix hadamard - Hadamard matrix of order 2**n hankel - Hankel matrix helmert - Helmert matrix hilbert - Hilbert matrix invhilbert - Inverse Hilbert matrix leslie - Leslie matrix pascal - Pascal matrix invpascal - Inverse Pascal matrix toeplitz - Toeplitz matrix tri - Construct a matrix filled with ones at and below a given diagonal Low-level routines ================== .. autosummary:: :toctree: generated/ get_blas_funcs get_lapack_funcs find_best_blas_type .. seealso:: `scipy.linalg.blas` -- Low-level BLAS functions `scipy.linalg.lapack` -- Low-level LAPACK functions `scipy.linalg.cython_blas` -- Low-level BLAS functions for Cython `scipy.linalg.cython_lapack` -- Low-level LAPACK functions for Cython """
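A small usage sketch of the basics listed above (the values are illustrative)::

    import numpy as np
    from scipy import linalg

    A = np.array([[3., 2.], [1., 4.]])
    b = np.array([7., 10.])

    x = linalg.solve(A, b)        # prefer solve() to inv(A) @ b
    assert np.allclose(A @ x, b)

    p, l, u = linalg.lu(A)        # LU decomposition
    w, v = linalg.eig(A)          # eigenvalues and right eigenvectors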
""" ============== Array Creation ============== Introduction ============ There are 5 general mechanisms for creating arrays: 1) Conversion from other Python structures (e.g., lists, tuples) 2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, etc.) 3) Reading arrays from disk, either from standard or custom formats 4) Creating arrays from raw bytes through the use of strings or buffers 5) Use of special library functions (e.g., random) This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or structured arrays. Both of those are covered in their own sections. Converting Python array_like Objects to Numpy Arrays ==================================================== In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: :: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Intrinsic Numpy Array Creation ============================== Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. ``>>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]])`` ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: :: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: :: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description: :: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. Reading Arrays From Disk ======================== This is presumably the most common case of large array creation. 
The details, of course, depend greatly on the format of data on disk and so this section can only give general pointers on how to handle various formats. Standard Binary Formats ----------------------- Various fields have standard formats for array data. The following lists the ones with known python libraries to read them and return numpy arrays (there may be others for which it is possible to read and convert to numpy arrays so check the last section as well) :: HDF5: PyTables FITS: PyFITS Examples of formats that cannot be read directly but for which it is not hard to convert are those formats supported by libraries like PIL (able to read and write many image formats such as jpg, png, etc). Common ASCII Formats ------------------------ Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part of matplotlib). More generic ascii files can be read using the io package in scipy. Custom Binary Formats --------------------- There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind your byteorder though!) If a good C or C++ library exists that read the data, one can wrap that library with a variety of techniques though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++. Use of Special Libraries ------------------------ There are libraries that can be used to generate arrays for special purposes and it isn't possible to enumerate all of them. The most common uses are use of the many array generation functions in random that can generate arrays of random values, and some utility functions to generate special matrices (e.g. diagonal). """
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

###########################################################################################
# Implementation of the stochastic depth algorithm described in the paper
#
# NAME et al. "Deep networks with stochastic depth." arXiv preprint arXiv:1603.09382 (2016).
#
# Reference torch implementation can be found at https://github.com/yueatsprograms/Stochastic_Depth
#
# There are some differences in the implementation:
# - A BN->ReLU->Conv is used for the skip connection when input and output shapes are different,
#   as opposed to a padding layer.
# - The residual block is different: we use BN->ReLU->Conv->BN->ReLU->Conv, as opposed to
#   Conv->BN->ReLU->Conv->BN (->ReLU also applied to the skip connection).
# - We did not try to match with the same initialization, learning rate scheduling, etc.
#
#--------------------------------------------------------------------------------
# A sample from the running log (we achieved ~9.4% error after 500 epochs; some
# more careful tuning of the hyperparameters, and maybe also the architecture,
# is needed to achieve the reported numbers in the paper):
#
# INFO:root:Epoch[80] Batch [50] Speed: 1020.95 samples/sec Train-accuracy=0.910080
# INFO:root:Epoch[80] Batch [100] Speed: 1013.41 samples/sec Train-accuracy=0.912031
# INFO:root:Epoch[80] Batch [150] Speed: 1035.48 samples/sec Train-accuracy=0.913438
# INFO:root:Epoch[80] Batch [200] Speed: 1045.00 samples/sec Train-accuracy=0.907344
# INFO:root:Epoch[80] Batch [250] Speed: 1055.32 samples/sec Train-accuracy=0.905937
# INFO:root:Epoch[80] Batch [300] Speed: 1071.71 samples/sec Train-accuracy=0.912500
# INFO:root:Epoch[80] Batch [350] Speed: 1033.73 samples/sec Train-accuracy=0.910937
# INFO:root:Epoch[80] Train-accuracy=0.919922
# INFO:root:Epoch[80] Time cost=48.348
# INFO:root:Saved checkpoint to "sd-110-0081.params"
# INFO:root:Epoch[80] Validation-accuracy=0.880142
# ...
# INFO:root:Epoch[115] Batch [50] Speed: 1037.04 samples/sec Train-accuracy=0.937040
# INFO:root:Epoch[115] Batch [100] Speed: 1041.12 samples/sec Train-accuracy=0.934219
# INFO:root:Epoch[115] Batch [150] Speed: 1036.02 samples/sec Train-accuracy=0.933125
# INFO:root:Epoch[115] Batch [200] Speed: 1057.49 samples/sec Train-accuracy=0.938125
# INFO:root:Epoch[115] Batch [250] Speed: 1060.56 samples/sec Train-accuracy=0.933438
# INFO:root:Epoch[115] Batch [300] Speed: 1046.25 samples/sec Train-accuracy=0.935625
# INFO:root:Epoch[115] Batch [350] Speed: 1043.83 samples/sec Train-accuracy=0.927188
# INFO:root:Epoch[115] Train-accuracy=0.938477
# INFO:root:Epoch[115] Time cost=47.815
# INFO:root:Saved checkpoint to "sd-110-0116.params"
# INFO:root:Epoch[115] Validation-accuracy=0.884415
# ...
# INFO:root:Saved checkpoint to "sd-110-0499.params" # INFO:root:Epoch[498] Validation-accuracy=0.908554 # INFO:root:Epoch[499] Batch [50] Speed: 1068.28 samples/sec Train-accuracy=0.991422 # INFO:root:Epoch[499] Batch [100] Speed: 1053.10 samples/sec Train-accuracy=0.991094 # INFO:root:Epoch[499] Batch [150] Speed: 1042.89 samples/sec Train-accuracy=0.995156 # INFO:root:Epoch[499] Batch [200] Speed: 1066.22 samples/sec Train-accuracy=0.991406 # INFO:root:Epoch[499] Batch [250] Speed: 1050.56 samples/sec Train-accuracy=0.990781 # INFO:root:Epoch[499] Batch [300] Speed: 1032.02 samples/sec Train-accuracy=0.992500 # INFO:root:Epoch[499] Batch [350] Speed: 1062.16 samples/sec Train-accuracy=0.992969 # INFO:root:Epoch[499] Train-accuracy=0.994141 # INFO:root:Epoch[499] Time cost=47.401 # INFO:root:Saved checkpoint to "sd-110-0500.params" # INFO:root:Epoch[499] Validation-accuracy=0.906050 # ###########################################################################################
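# The core stochastic-depth rule is small enough to sketch on its own (a
# minimal framework-agnostic illustration, not the MXNet code this log came
# from; `residual_branch` stands in for the BN->ReLU->Conv->BN->ReLU->Conv
# block described above):

import numpy as np

def stochastic_depth_block(x, residual_branch, p_survive, training):
    # One residual block with stochastic depth applied.
    if training:
        if np.random.rand() < p_survive:
            return x + residual_branch(x)  # block kept: normal residual sum
        return x                           # block dropped: identity only
    # At test time every block is kept, with its output scaled by the
    # survival probability (the expected value seen during training).
    return x + p_survive * residual_branch(x)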
# import fcntl # import os # import zookeeper # from twisted.internet.defer import inlineCallbacks, fail # from twisted.python.failure import Failure # from txzookeeper.tests.utils import deleteTree # from juju.charm.bundle import CharmBundle # from juju.charm.directory import CharmDirectory # from juju.charm.publisher import CharmPublisher # from juju.charm.tests import local_charm_id # from juju.lib import under, serializer # from juju.providers.dummy import FileStorage # from juju.state.charm import CharmStateManager # from juju.state.errors import StateChanged # from juju.environment.tests.test_config import ( # EnvironmentsConfigTestBase, SAMPLE_ENV) # from juju.lib.mocker import MATCH # from .test_repository import RepositoryTestBase # def _count_open_files(): # count = 0 # for sfd in os.listdir("/proc/self/fd"): # ifd = int(sfd) # if ifd < 3: # continue # try: # fcntl.fcntl(ifd, fcntl.F_GETFD) # count += 1 # except IOError: # pass # return count # class CharmPublisherTest(RepositoryTestBase): # @inlineCallbacks # def setUp(self): # super(CharmPublisherTest, self).setUp() # zookeeper.set_debug_level(0) # self.charm = CharmDirectory(self.sample_dir1) # self.charm_id = local_charm_id(self.charm) # self.charm_key = under.quote(self.charm_id) # # provider storage key # self.charm_storage_key = under.quote( # "%s:%s" % (self.charm_id, self.charm.get_sha256())) # self.client = self.get_zookeeper_client() # self.storage_dir = self.makeDir() # self.storage = FileStorage(self.storage_dir) # self.publisher = CharmPublisher(self.client, self.storage) # yield self.client.connect() # yield self.client.create("/charms") # def tearDown(self): # deleteTree("/", self.client.handle) # self.client.close() # super(CharmPublisherTest, self).tearDown() # @inlineCallbacks # def test_add_charm_and_publish(self): # open_file_count = _count_open_files() # yield self.publisher.add_charm(self.charm_id, self.charm) # result = yield self.publisher.publish() # self.assertEquals(_count_open_files(), open_file_count) # children = yield self.client.get_children("/charms") # self.assertEqual(children, [self.charm_key]) # fh = yield self.storage.get(self.charm_storage_key) # bundle = CharmBundle(fh) # self.assertEqual(self.charm.get_sha256(), bundle.get_sha256()) # self.assertEqual( # result[0].bundle_url, "file://%s/%s" % ( # self.storage_dir, self.charm_storage_key)) # @inlineCallbacks # def test_published_charm_sans_unicode(self): # yield self.publisher.add_charm(self.charm_id, self.charm) # yield self.publisher.publish() # data, stat = yield self.client.get("/charms/%s" % self.charm_key) # self.assertNotIn("unicode", data) # @inlineCallbacks # def test_add_charm_with_concurrent(self): # """ # Publishing a charm, that has become published concurrent, after the # add_charm, works fine. it will write to storage regardless. The use # of a sha256 as part of the storage key is utilized to help ensure # uniqueness of bits. The sha256 is also stored with the charm state. # This relation betewen the charm state and the binary bits, helps # guarantee the property that any published charm in zookeeper will use # the binary bits that it was published with. 
# """ # yield self.publisher.add_charm(self.charm_id, self.charm) # concurrent_publisher = CharmPublisher( # self.client, self.storage) # charm = CharmDirectory(self.sample_dir1) # yield concurrent_publisher.add_charm(self.charm_id, charm) # yield self.publisher.publish() # # modify the charm to create a conflict scenario # self.makeFile("zebra", # path=os.path.join(self.sample_dir1, "junk.txt")) # # assert the charm now has a different sha post modification # modified_charm_sha = charm.get_sha256() # self.assertNotEqual( # modified_charm_sha, # self.charm.get_sha256()) # # verify publishing raises a stateerror # def verify_failure(result): # if not isinstance(result, Failure): # self.fail("Should have raised state error") # result.trap(StateChanged) # return True # yield concurrent_publisher.publish().addBoth(verify_failure) # # verify the zk state # charm_nodes = yield self.client.get_children("/charms") # self.assertEqual(charm_nodes, [self.charm_key]) # content, stat = yield self.client.get( # "/charms/%s" % charm_nodes[0]) # # assert the checksum matches the initially published checksum # self.assertEqual( # serializer.yaml_load(content)["sha256"], # self.charm.get_sha256()) # store_path = os.path.join(self.storage_dir, self.charm_storage_key) # self.assertTrue(os.path.exists(store_path)) # # and the binary bits where stored # modified_charm_storage_key = under.quote( # "%s:%s" % (self.charm_id, modified_charm_sha)) # modified_store_path = os.path.join( # self.storage_dir, modified_charm_storage_key) # self.assertTrue(os.path.exists(modified_store_path)) # @inlineCallbacks # def test_add_charm_with_concurrent_removal(self): # """ # If a charm is published, and it detects that the charm exists # already exists, it will attempt to retrieve the charm state to # verify there is no checksum mismatch. If concurrently the charm # is removed, the publisher should fail with a statechange error. # """ # manager = self.mocker.patch(CharmStateManager) # manager.get_charm_state(self.charm_id) # self.mocker.passthrough() # def match_charm_bundle(bundle): # return isinstance(bundle, CharmBundle) # def match_charm_url(url): # return url.startswith("file://") # manager.add_charm_state( # self.charm_id, MATCH(match_charm_bundle), MATCH(match_charm_url)) # self.mocker.result(fail(zookeeper.NodeExistsException())) # manager.get_charm_state(self.charm_id) # self.mocker.result(fail(zookeeper.NoNodeException())) # self.mocker.replay() # yield self.publisher.add_charm(self.charm_id, self.charm) # yield self.failUnlessFailure(self.publisher.publish(), StateChanged) # @inlineCallbacks # def test_add_charm_already_known(self): # """Adding an existing charm, is an effective noop, as its not added # to the internal publisher queue. 
# """ # output = self.capture_logging("juju.charm") # # Do an initial publishing of the charm # scheduled = yield self.publisher.add_charm(self.charm_id, self.charm) # self.assertTrue(scheduled) # result = yield self.publisher.publish() # self.assertEqual(result[0].name, self.charm.metadata.name) # publisher = CharmPublisher(self.client, self.storage) # scheduled = yield publisher.add_charm(self.charm_id, self.charm) # self.assertFalse(scheduled) # scheduled = yield publisher.add_charm(self.charm_id, self.charm) # self.assertFalse(scheduled) # result = yield publisher.publish() # self.assertEqual(result[0].name, self.charm.metadata.name) # self.assertEqual(result[1].name, self.charm.metadata.name) # self.assertIn( # "Using cached charm version of %s" % self.charm.metadata.name, # output.getvalue()) # class EnvironmentPublisherTest(EnvironmentsConfigTestBase): # def setUp(self): # super(EnvironmentPublisherTest, self).setUp() # self.write_config(SAMPLE_ENV) # self.config.load() # self.environment = self.config.get("myfirstenv") # zookeeper.set_debug_level(0) # @inlineCallbacks # def test_publisher_for_environment(self): # publisher = yield CharmPublisher.for_environment(self.environment) # self.assertTrue(isinstance(publisher, CharmPublisher))
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
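# A minimal client-side usage sketch of the interface described above (the
# endpoint URL and remote method name are placeholders for whatever XML-RPC
# server you are talking to):

import xmlrpclib

proxy = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")
try:
    # Remote methods are called like ordinary attributes; arguments and the
    # result are marshalled to and from XML-RPC types automatically.
    print proxy.pow(2, 10)
except xmlrpclib.Fault, fault:
    # Server-side errors arrive as Fault instances.
    print "XML-RPC fault %d: %s" % (fault.faultCode, fault.faultString)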
""" A non-empty zero-indexed array A consisting of N integers is given. A peak is an array element which is larger than its neighbors. More precisely, it is an index P such that 0 < P < N − 1, A[P − 1] < A[P] and A[P] > A[P + 1]. For example, the following array A: A[0] = 1 A[1] = 2 A[2] = 3 A[3] = 4 A[4] = 3 A[5] = 4 A[6] = 1 A[7] = 2 A[8] = 3 A[9] = 4 A[10] = 6 A[11] = 2 has exactly three peaks: 3, 5, 10. We want to divide this array into blocks containing the same number of elements. More precisely, we want to choose a number K that will yield the following blocks: A[0], A[1], ..., A[K − 1], A[K], A[K + 1], ..., A[2K − 1], ... A[N − K], A[N − K + 1], ..., A[N − 1]. What's more, every block should contain at least one peak. Notice that extreme elements of the blocks (for example A[K − 1] or A[K]) can also be peaks, but only if they have both neighbors (including one in an adjacent blocks). The goal is to find the maximum number of blocks into which the array A can be divided. Array A can be divided into blocks as follows: one block (1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2). This block contains three peaks. two blocks (1, 2, 3, 4, 3, 4) and (1, 2, 3, 4, 6, 2). Every block has a peak. three blocks (1, 2, 3, 4), (3, 4, 1, 2), (3, 4, 6, 2). Every block has a peak. Notice in particular that the first block (1, 2, 3, 4) has a peak at A[3], because A[2] < A[3] > A[4], even though A[4] is in the adjacent block. However, array A cannot be divided into four blocks, (1, 2, 3), (4, 3, 4), (1, 2, 3) and (4, 6, 2), because the (1, 2, 3) blocks do not contain a peak. Notice in particular that the (4, 3, 4) block contains two peaks: A[3] and A[5]. The maximum number of blocks that array A can be divided into is three. Write a function: def solution(A) that, given a non-empty zero-indexed array A consisting of N integers, returns the maximum number of blocks into which A can be divided. If A cannot be divided into some number of blocks, the function should return 0. For example, given: A[0] = 1 A[1] = 2 A[2] = 3 A[3] = 4 A[4] = 3 A[5] = 4 A[6] = 1 A[7] = 2 A[8] = 3 A[9] = 4 A[10] = 6 A[11] = 2 the function should return 3, as explained above. Assume that: N is an integer within the range [1..100,000]; each element of array A is an integer within the range [0..1,000,000,000]. Complexity: * expected worst-case time complexity is O(N*log(log(N))); * expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments). Elements of input arrays can be modified. """
#!/usr/bin/env python
##
## @file    rewrite_pydoc.py
## @brief   Convert libSBML Python doc file to something readable as docstrings
## @author  NAME
##
## Purpose:
##
## The comments in the libSBML source code use Doxygen mark-up; this
## content is read by Doxygen and Javadoc, in combination with other
## scripts, to produce the libSBML API documentation in the libSBML "docs"
## directory.  When creating the Python language interface, SWIG takes the
## comments and inserts them as documentation strings in the Python code,
## which is good from the perspective of being an easy way to provide help
## for the Python interface classes and methods, but bad from the
## perspective that it is full of Doxygen markup and not suitable for
## direct reading by humans.
##
## This program converts the Doxygen-based documentation strings by
## rewriting them to plain text.  This plain text can then be included in
## the final Python bindings for libSBML, so that users can use the Python
## interactive help system to view the documentation.
##
## This program is not a general converter; it is designed specifically to
## work with the way that we generate the libSBML python bindings.
## However, it should not be too difficult to adapt to other similar
## software projects.
##
## The main hardwired assumptions are the following:
##
## * The input file to rewrite_pydoc.py is the output produced by our
##   ../../swig/swigdoc.py, which produces documentation definitions for
##   swig.  These have the form shown in the following example:
##
##     %feature("docstring") SBMLReader::SBMLReader "
##     Creates a new SBMLReader and returns it.
##
##     The libSBML SBMLReader objects offer methods for reading SBML in
##     XML form from files and text strings.
##     ";
##
##   The output of rewrite_pydoc.py is another .i file in which all Doxygen
##   tags have been translated and the docstring contents have been
##   reformatted for use in the python plain-text interactive help system.
##
## * In our process for producing the libSBML Python bindings, we take the
##   output of rewrite_pydoc.py and include it in the input to swig.  This
##   is done via an %include command in the ../local.i file.  The
##   consequence is that swig reads these %feature commands, and uses them
##   when it produces a file named "libsbml.py" containing the Python code
##   for the libSBML interface.  The objects and methods in "libsbml.py"
##   contain Python-style "docstrings" that are a combination of what we
##   defined in the .i file and what swig itself constructs.  (In
##   particular, swig adds documentation about the method signatures,
##   because the methods are interfaces to native code and Python
##   introspection cannot reveal the data types of the parameters.)
##
## * The Doxygen markup understood by rewrite_pydoc.py is not the complete
##   set of all possible Doxygen tags.  We don't use all possible Doxygen
##   tags in the libSBML documentation, and so this program only looks for
##   the ones we have been using.
##
##
## Special features:
##
## * When looking for @htmlinclude files, it first checks to see if a
##   version of the file with a .txt extension exists in the same location
##   where it finds the .html file.  If the .txt version exists, it
##   includes that instead of the .html file.  (This allows hand-formatted
##   text files to be used, which is useful for files containing tables,
##   because the Python HTML parser library doesn't handle tables.)
##
## * When expanding @image directives, it looks for a file with the
##   extension .txt in the same directory where it finds the .jpg file.  If
##   the .txt version exists, it includes that; if it doesn't exist, it
##   does not include anything.  (Since the docstrings are plain text, no
##   other action seems sensible in this context.)
##
##
## <!--------------------------------------------------------------------------
## This file is part of libSBML.  Please visit http://sbml.org for more
## information about SBML, and the latest version of libSBML.
##
## Copyright (C) 2009-2013 jointly by the following organizations:
##     1. California Institute of Technology, Pasadena, CA, USA
##     2. EMBL European Bioinformatics Institute (EMBL-EBI), Hinxton, UK
##
## Copyright (C) 2006-2008 by the California Institute of Technology,
##     Pasadena, CA, USA
##
## Copyright (C) 2002-2005 jointly by the following organizations:
##     1. California Institute of Technology, Pasadena, CA, USA
##     2. Japan Science and Technology Agency, Japan
##
## This library is free software; you can redistribute it and/or modify it
## under the terms of the GNU Lesser General Public License as published by
## the Free Software Foundation.  A copy of the license agreement is provided
## in the file named "LICENSE.txt" included with this software distribution
## and also available online as http://sbml.org/software/libsbml/license.html
## ------------------------------------------------------------------------ -->
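## As a flavor of the kind of rewriting involved (a toy sketch only; the
## real script handles many more tags, HTML parsing, and reflowing -- the
## tag list below is illustrative, not exhaustive):

import re

def strip_simple_doxygen_tags(text):
    """Rewrite a few inline Doxygen constructs to plain text."""
    text = re.sub(r'@(?:c|p|em)\s+(\S+)', r"'\1'", text)   # @c word -> 'word'
    text = re.sub(r'@li\b', '*', text)                     # list items
    text = re.sub(r'<[^>]+>', '', text)                    # drop raw HTML tags
    return text

print(strip_simple_doxygen_tags("Returns @c true if the @p reader is valid."))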
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
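A short usage sketch of the interface summarized above (the section and
option names are invented for the example)::

    from configparser import ConfigParser

    parser = ConfigParser()
    parser.read_string("[server]\nhost = localhost\nport = 8080\n")
    print(parser.get('server', 'host'))      # -> localhost
    print(parser.getint('server', 'port'))   # -> 8080
    print(parser.sections())                 # -> ['server']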
# -*- coding: utf-8 -*- # -- Dual Licence ---------------------------------------------------------- ############################################################################ # GPL License # # # # This file is a SCons (http://www.scons.org/) builder # # Copyright (c) 2012-14, NAME <EMAIL> # # This program is free software: you can redistribute it and/or modify # # it under the terms of the GNU General Public License as # # published by the Free Software Foundation, either version 3 of the # # License, or (at your option) any later version. # # # # This program is distributed in the hope that it will be useful, # # but WITHOUT ANY WARRANTY; without even the implied warranty of # # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # # GNU General Public License for more details. # # # # You should have received a copy of the GNU General Public License # # along with this program. If not, see <http://www.gnu.org/licenses/>. # ############################################################################ # -------------------------------------------------------------------------- ############################################################################ # BSD 3-Clause License # # # # This file is a SCons (http://www.scons.org/) builder # # Copyright (c) 2012-14, NAME <EMAIL> # # All rights reserved. # # # # Redistribution and use in source and binary forms, with or without # # modification, are permitted provided that the following conditions are # # met: # # # # 1. Redistributions of source code must retain the above copyright # # notice, this list of conditions and the following disclaimer. # # # # 2. Redistributions in binary form must reproduce the above copyright # # notice, this list of conditions and the following disclaimer in the # # documentation and/or other materials provided with the distribution. # # # # 3. Neither the name of the copyright holder nor the names of its # # contributors may be used to endorse or promote products derived from # # this software without specific prior written permission. # # # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A # # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # # HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED # # TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR # # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF # # LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING # # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # ############################################################################ # The Unpack Builder can be used for unpacking archives (eg Zip, TGZ, BZ, ... ). # The emitter of the Builder reads the archive data and creates a returning file list # the builder extract the archive. 
The environment stores a dictionary "UNPACK"
# for configuring the different extractors (subdict "EXTRACTOR"):
# {
#   PRIORITY      => a value for setting the extractor order (lower numbers = extractor is used earlier)
#   SUFFIX        => defines a list of file suffixes which should be handled by this extractor
#   EXTRACTSUFFIX => suffix of the extract command
#   EXTRACTFLAGS  => a string parameter for the RUN command for extracting the data
#   EXTRACTCMD    => full extract command of the builder
#   RUN           => the main program which will be started (if the parameter is empty, the extractor is ignored)
#   LISTCMD       => the listing command for the emitter
#   LISTFLAGS     => the string options for the RUN command for showing the list of files
#   LISTSUFFIX    => suffix of the list command
#   LISTEXTRACTOR => an optional Python function that is called on each output line of the LISTCMD to
#                    extract file & dir names; the function needs two parameters (first the line number,
#                    second the line content) and must return a string with the file / dir path (other
#                    value types are ignored)
# }
# Other options in the UNPACK dictionary are:
#   STOPONEMPTYFILE  => bool variable for stopping if the file is empty (default True)
#   VIWEXTRACTOUTPUT => shows the output messages of the extraction command (default False)
#   EXTRACTDIR       => path into which the data will be extracted (default #)
#
# A file is handled by the first extractor whose suffix matches; the extractor list can be appended to
# for other file types. The order of the extractor dictionary determines the listing & extract commands,
# e.g. the file extension .tar.gz should come before .gz, because a .tar.gz is extracted in one shot.
#
# Under *nix systems these tools are supported: tar, bzip2, gzip, unzip
# Under Windows only 7-Zip (http://www.7-zip.org/) is supported
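# A sketch of what one extractor entry might look like, following the key
# descriptions above (the 7-Zip values are illustrative guesses, not a
# tested configuration; `env` is an SCons Environment that already carries
# the builder's UNPACK defaults):

env["UNPACK"]["EXTRACTOR"]["SEVENZIP"] = {
    "PRIORITY"      : 1,
    "SUFFIX"        : [".7z"],
    "RUN"           : "7z",
    "EXTRACTFLAGS"  : "x -y",
    "EXTRACTSUFFIX" : "",
    "LISTFLAGS"     : "l",
    "LISTSUFFIX"    : "",
    # Take the last whitespace-separated column of a listing line as the
    # path; real 7z output needs more careful parsing than this.
    "LISTEXTRACTOR" : lambda lineno, line: line.split()[-1] if line.split() else None,
}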
"""Doctest for method/function calls. We're going the use these types for extra testing >>> from UserList import UserList >>> from UserDict import UserDict We're defining four helper functions >>> def e(a,b): ... print a, b >>> def f(*a, **k): ... print a, test_support.sortdict(k) >>> def g(x, *y, **z): ... print x, y, test_support.sortdict(z) >>> def h(j=1, a=2, h=3): ... print j, a, h Argument list examples >>> f() () {} >>> f(1) (1,) {} >>> f(1, 2) (1, 2) {} >>> f(1, 2, 3) (1, 2, 3) {} >>> f(1, 2, 3, *(4, 5)) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *[4, 5]) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *UserList([4, 5])) (1, 2, 3, 4, 5) {} Here we add keyword arguments >>> f(1, 2, 3, **{'a':4, 'b':5}) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7}) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9}) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} >>> f(1, 2, 3, **UserDict(a=4, b=5)) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7)) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9)) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} Examples with invalid arguments (TypeErrors). We're also testing the function names in the exception messages. Verify clearing of SF bug #733667 >>> e(c=4) Traceback (most recent call last): ... TypeError: e() got an unexpected keyword argument 'c' >>> g() Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*()) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*(), **{}) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(1) 1 () {} >>> g(1, 2) 1 (2,) {} >>> g(1, 2, 3) 1 (2, 3) {} >>> g(1, 2, 3, *(4, 5)) 1 (2, 3, 4, 5) {} >>> class Nothing: pass ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not instance >>> class Nothing: ... def __len__(self): return 5 ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not instance >>> class Nothing(): ... def __len__(self): return 5 ... def __getitem__(self, i): ... if i<3: return i ... else: raise IndexError(i) ... >>> g(*Nothing()) 0 (1, 2) {} >>> class Nothing: ... def __init__(self): self.c = 0 ... def __iter__(self): return self ... def next(self): ... if self.c == 4: ... raise StopIteration ... c = self.c ... self.c += 1 ... return c ... >>> g(*Nothing()) 0 (1, 2, 3) {} Make sure that the function doesn't stomp the dictionary >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d2 = d.copy() >>> g(1, d=4, **d) 1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4} >>> d == d2 True What about willful misconduct? >>> def saboteur(**kw): ... kw['x'] = 'm' ... return kw >>> d = {} >>> kw = saboteur(a=1, **d) >>> d {} >>> g(1, 2, 3, **{'x': 4, 'y': 5}) Traceback (most recent call last): ... TypeError: g() got multiple values for keyword argument 'x' >>> f(**{1:2}) Traceback (most recent call last): ... TypeError: f() keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' >>> h(*h) Traceback (most recent call last): ... TypeError: h() argument after * must be a sequence, not function >>> dir(*h) Traceback (most recent call last): ... TypeError: dir() argument after * must be a sequence, not function >>> None(*h) Traceback (most recent call last): ... 
TypeError: NoneType object argument after * must be a sequence, \ not function >>> h(**h) Traceback (most recent call last): ... TypeError: h() argument after ** must be a mapping, not function >>> dir(**h) Traceback (most recent call last): ... TypeError: dir() argument after ** must be a mapping, not function >>> None(**h) Traceback (most recent call last): ... TypeError: NoneType object argument after ** must be a mapping, \ not function >>> dir(b=1, **{'b': 1}) Traceback (most recent call last): ... TypeError: dir() got multiple values for keyword argument 'b' Another helper function >>> def f2(*a, **b): ... return a, b >>> d = {} >>> for i in xrange(512): ... key = 'k%d' % i ... d[key] = i >>> a, b = f2(1, *(2,3), **d) >>> len(a), len(b), b == d (3, 512, True) >>> class Foo: ... def method(self, arg1, arg2): ... return arg1+arg2 >>> x = Foo() >>> Foo.method(*(x, 1, 2)) 3 >>> Foo.method(x, *(1, 2)) 3 >>> Foo.method(*(1, 2, 3)) Traceback (most recent call last): ... TypeError: unbound method method() must be called with Foo instance as \ first argument (got int instance instead) >>> Foo.method(1, *[2, 3]) Traceback (most recent call last): ... TypeError: unbound method method() must be called with Foo instance as \ first argument (got int instance instead) A PyCFunction that takes only positional parameters shoud allow an empty keyword dictionary to pass without a complaint, but raise a TypeError if te dictionary is not empty >>> try: ... silence = id(1, *{}) ... True ... except: ... False True >>> id(1, **{'foo': 1}) Traceback (most recent call last): ... TypeError: id() takes no keyword arguments """
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over. The drag-and-drop processes can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event). If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence. Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than to do it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave(). At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit(). """
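A minimal sketch of the protocol described above (the widget layout and the
Source/Target classes are invented for illustration; only the hook names come
from the protocol)::

    import Tkinter
    import Tkdnd

    class Source:
        # The draggable application object; Tkdnd calls dnd_end() when the
        # drag finishes (target is None if nothing accepted the drop).
        def dnd_end(self, target, event):
            print "drag ended on", target

    class Target:
        # Attached to a widget via its dnd_accept attribute.
        def dnd_accept(self, source, event):
            return self            # we are the target object
        def dnd_enter(self, source, event): pass
        def dnd_motion(self, source, event): pass
        def dnd_leave(self, source, event): pass
        def dnd_commit(self, source, event):
            print "dropped", source

    root = Tkinter.Tk()
    canvas = Tkinter.Canvas(root)
    canvas.dnd_accept = Target().dnd_accept
    canvas.pack()
    # Start a drag when the mouse button goes down over the canvas.
    canvas.bind('<ButtonPress>', lambda e: Tkdnd.dnd_start(Source(), e))
    root.mainloop()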
# -*- coding: utf-8 -*- # routers are dictionaries of URL routing parameters. # # For each request, the effective router is: # the built-in default base router (shown below), # updated by the BASE router in routes.py routers, # updated by the app-specific router in routes.py routers (if any), # updated by the app-specific router from applications/app/routes.py routers (if any) # # # Router members: # # default_application: default application name # applications: list of all recognized applications, or 'ALL' to use all currently installed applications # Names in applications are always treated as an application names when they appear first in an incoming URL. # Set applications=None to disable the removal of application names from outgoing URLs. # domains: optional dict mapping domain names to application names # The domain name can include a port number: domain.com:8080 # The application name can include a controller: appx/ctlrx # or a controller and a function: appx/ctlrx/fcnx # Example: # domains = { "domain.com" : "app", # "x.domain.com" : "appx", # }, # path_prefix: a path fragment that is prefixed to all outgoing URLs and stripped from all incoming URLs # # Note: default_application, applications, domains & path_prefix are permitted only in the BASE router, # and domain makes sense only in an application-specific router. # The remaining members can appear in the BASE router (as defaults for all applications) # or in application-specific routers. # # default_controller: name of default controller # default_function: name of default function (in all controllers) or dictionary of default functions # by controller # controllers: list of valid controllers in selected app # or "DEFAULT" to use all controllers in the selected app plus 'static' # or None to disable controller-name removal. # Names in controllers are always treated as controller names when they appear in an incoming URL after # the (optional) application and language names. # functions: list of valid functions in the default controller (default None) or dictionary of valid # functions by controller. # If present, the default function name will be omitted when the controller is the default controller # and the first arg does not create an ambiguity. # languages: list of all supported languages # Names in languages are always treated as language names when they appear in an incoming URL after # the (optional) application name. # default_language # The language code (for example: en, it-it) optionally appears in the URL following # the application (which may be omitted). For incoming URLs, the code is copied to # request.uri_language; for outgoing URLs it is taken from request.uri_language. # If languages=None, language support is disabled. # The default_language, if any, is omitted from the URL. # To use the incoming language in your application, add this line to one of your models files: # if request.uri_language: T.force(request.uri_language) # root_static: list of static files accessed from root (by default, favicon.ico & robots.txt) # (mapped to the default application's static/ directory) # Each default (including domain-mapped) application has its own root-static files. # domain: the domain that maps to this application (alternative to using domains in the BASE router) # exclusive_domain: If True (default is False), an exception is raised if an attempt is made to generate # an outgoing URL with a different application without providing an explicit host. 
# map_hyphen: If True (default is False), hyphens in incoming /a/c/f fields are converted
#     to underscores, and back to hyphens in outgoing URLs.
#     Language, args and the query string are not affected.
# map_static: By default (None), the default application is not stripped from static URLs.
#     Set map_static=True to override this policy.
#     Set map_static=False to map lang/static/file to static/lang/file
# acfe_match: regex for valid application, controller, function, extension /a/c/f.e
# file_match: regex for valid subpath (used for static file paths)
#     If file_match does not contain '/', it is used to validate each element of a static file
#     subpath, rather than the entire subpath.
# args_match: regex for valid args
#     This validation provides a measure of security.
#     If it is changed, the application must perform its own validation.
#
#
# The built-in default router supplies default values (undefined members are None):
#
#     default_router = dict(
#         default_application = 'init',
#         applications = 'ALL',
#         default_controller = 'default',
#         controllers = 'DEFAULT',
#         default_function = 'index',
#         functions = None,
#         default_language = None,
#         languages = None,
#         root_static = ['favicon.ico', 'robots.txt'],
#         map_static = None,
#         domains = None,
#         map_hyphen = False,
#         acfe_match = r'\w+$',                                # legal app/ctlr/fcn/ext
#         file_match = r'([-+=@$%\w]|(?<=[-+=@$%\w])[./])*$',  # legal static subpath
#         args_match = r'([\w@ -]|(?<=[\w@ -])[.=])*$',        # legal arg in args
#     )
#
# See rewrite.map_url_in() and rewrite.map_url_out() for implementation details.
# This simple router set overrides only the default application name,
# but provides full rewrite functionality.
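# A small illustrative routers dict for a routes.py file (application,
# controller and domain names here are invented; only a few of the members
# described above are set):

routers = dict(
    BASE=dict(
        default_application='welcome',
        domains={'example.com': 'welcome',
                 'admin.example.com': 'admin'},
    ),
    welcome=dict(
        default_controller='default',
        default_function='index',
        languages=['en', 'it-it'],
        default_language='en',
        map_hyphen=True,
    ),
)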
# # ElementTree # $Id$ # # light-weight XML support for Python 1.5.2 and later. # # history: # 2001-10-20 fl created (from various sources) # 2001-11-01 fl return root from parse method # 2002-02-16 fl sort attributes in lexical order # 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup # 2002-05-01 fl finished TreeBuilder refactoring # 2002-07-14 fl added basic namespace support to ElementTree.write # 2002-07-25 fl added QName attribute support # 2002-10-20 fl fixed encoding in write # 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding # 2002-11-27 fl accept file objects or file names for parse/write # 2002-12-04 fl moved XMLTreeBuilder back to this module # 2003-01-11 fl fixed entity encoding glitch for us-ascii # 2003-02-13 fl added XML literal factory # 2003-02-21 fl added ProcessingInstruction/PI factory # 2003-05-11 fl added tostring/fromstring helpers # 2003-05-26 fl added ElementPath support # 2003-07-05 fl added makeelement factory method # 2003-07-28 fl added more well-known namespace prefixes # 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME 2003-09-04 fl fall back on emulator if ElementPath is not installed # 2003-10-31 fl markup updates # 2003-11-15 fl fixed nested namespace bug # 2004-03-28 fl added XMLID helper # 2004-06-02 fl added default support to findtext # 2004-06-08 fl fixed encoding of non-ascii element/attribute names # 2004-08-23 fl take advantage of post-2.1 expat features # 2005-02-01 fl added iterparse implementation # 2005-03-02 fl fixed iterparse support for pre-2.2 versions # # Copyright (c) 1999-2005 by NAME All rights reserved. # # EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The ElementTree toolkit is # # Copyright (c) 1999-2005 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. # # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
# DESCRIPTION
# Creates a list of colors expressed as RGB hexadecimal values, from a simplified CIECAM02 color
# space. Optionally (with a tweak described under USAGE) prints CIECAM02 JCH components instead.

# DEPENDENCIES
# python, with these packages: numpy, ciecam02, colormap, more_itertools

# USAGE
# Run through a Python interpreter, without any parameters:
#   python /path/to/this_script/get_CIECAM02_simplified_gamut.py
# To pipe the results to a new file, do this:
#   python /path/to/this_script/get_CIECAM02_simplified_gamut.py > CIECAM02_simplified_gamut.gamut
# But you will probably get out-of-gamut (out of range) warnings when converting to RGB; they do not
# stop the script, and you will still get valid output. To suppress the error prints by piping them to
# /dev/null in a Unix environment, while still getting the valid hex color code printout, run it this
# way:
#   python /path/to/this_script/get_CIECAM02_simplified_gamut.py > oot.txt 2>/dev/null
# You may toggle the comments after the OUTPUT OPTIONS comment to get either JCH (simplified CIECAM02
# gamut) or RGB HEX output.

# CODE
# TO DO
# - Parameterize J, C and h steps?

# PROGRAMMER NOTES
# This page -- https://colorspacious.readthedocs.io/en/latest/tutorial.html -- describes a gamut in
# HCL (actually JCh there) which is "state of the art": CIECAM02. Supported as such by chronology in
# this article: https://en.wikipedia.org/wiki/Color_appearance_model#CIECAM02  Excellent article about
# it, describing well the different attributes of color and colorfulness:
# https://en.wikipedia.org/wiki/CIECAM02
# Packages that support it:
# - https://colorspacious.readthedocs.io/en/latest/reference.html#supported-colorspaces
# - https://colour.readthedocs.io/en/latest/colour.appearance.html#ciecam02
# - winner if it works: https://colorspacious.readthedocs.io/en/latest/reference.html#ciecam02 : "If
#   you just want a better replacement for traditional ad hoc spaces like "Hue/Saturation/Value", then
#   use the string "JCh" for your colorspace (see Perceptual transformations for a tutorial) and be
#   happy."
# - OR maybe better?! : https://pypi.org/project/ciecam02/ -- or maybe not, as it says "_simplified_
#   form of CIECAM02.." (emphasis added)
# ciecam02 doc:
#   CIECAM02 produces multiple correlates, like H, J, z, Q, t, C, M, s. Some of them represent similar
#   concepts; for example, C means chroma, and M (colorfulness) and s (saturation) correlate the same
#   thing at different densities. We need only 3 major properties of these arguments to completely
#   represent a color, and we can get the other properties via reverse algorithms.
#   Color type jch is a float list like [j, c, h], where 0.0 < j < 100.0, 0.0 < h < 360.0, and
#   0.0 < c. The max value of c is not limited, and may exceed the gamut when transformed to RGB. The
#   effective max value of c varies. Probably for red color h 0.0, and brightness j 50.0, c reaches
#   the valid maximum, values about 160.0.
# IN OTHER WORDS: j (brightness) range is 0 to 100, h (hue) is 0 to 360, c (chroma) can be 0 to any
# number (I don't believe that there _is_ a max from whatever corollary/inputs produces the max),
# maybe max 160. I'll start with max 182 (this seemed a thing with L in HCL).
# colorspacious doc: "The three axes in this space are conventionally called 'J' (for lightness),
# 'C' (for chroma), and 'h' (for hue)."
# NOTE my eyes said, assuming those values, for HCL, do these step values: L_step 18, C_domain 24,
# H_step 7
# (later self: WHAT?! 7 colors? How about 12 * how many perceived steps in between all?!)
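# A tiny sketch of one JCh -> sRGB hex conversion using colorspacious (one of
# the candidate packages discussed above; the JCh values are arbitrary
# examples):

import numpy as np
from colorspacious import cspace_convert

jch = [50, 30, 120]                        # J (lightness), C (chroma), h (hue)
rgb = cspace_convert(jch, "JCh", "sRGB255")
# Out-of-gamut JCh values come back outside 0..255 and need clamping:
rgb = np.clip(np.round(rgb), 0, 255).astype(int)
print("#%02x%02x%02x" % tuple(rgb))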
""" ============= Miscellaneous ============= IEEE 754 Floating Point Special Values: ----------------------------------------------- Special values defined in numpy: nan, inf, NaNs can be used as a poor-man's mask (if you don't care what the original value was) Note: cannot use equality to test NaNs. E.g.: :: >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.where(myarr == np.nan) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., NaN, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead find >>> myarr array([ 1., 0., 0., 3.]) Other related special value functions: :: isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float The following corresponds to the usual functions except that nans are excluded from the results: :: nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 How numpy handles numerical exceptions Default is to "warn" But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: :: 'ignore' : ignore completely 'warn' : print a warning (once only) 'raise' : raise an exception 'call' : call a user-supplied function (set using seterrcall()) These behaviors can be set for all kinds of errors or specific ones: :: all: apply to all numeric exceptions invalid: when NaNs are generated divide: divide by zero (for integers as well!) overflow: floating point overflows underflow: floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. Examples: ------------ :: >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print "saw stupid error!" >>> np.seterrcall(errorhandler) <function err_handler at 0x...> >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 FloatingPointError: invalid value encountered in divide saw stupid error! >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings Interfacing to C: ----------------- Only a survey of the choices. Little detail on how each works. 1) Bare metal, wrap your own C-code manually. - Plusses: - Efficient - No dependencies on other tools - Minuses: - Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. - Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults - API will change for Python 3.0! 
2) pyrex

   - Plusses:

     - avoid learning C APIs
     - no dealing with reference counting
     - can code in pseudo-Python and generate C code
     - can also interface to existing C code
     - should shield you from changes to the Python C API
     - has become pretty popular within the Python community

   - Minuses:

     - Can write code in non-standard form which may become obsolete
     - Not as flexible as manual wrapping
     - Maintainers not easily adaptable to new features

   Thus:

3) cython - fork of pyrex to allow needed features for SAGE

   - being considered as the standard scipy/numpy wrapping tool
   - fast indexing support for arrays

4) ctypes

   - Plusses:

     - part of Python standard library
     - good for interfacing to existing sharable libraries, particularly
       Windows DLLs
     - avoids API/reference counting issues
     - good numpy support: arrays have all these in their ctypes
       attribute: ::

         a.ctypes.data              a.ctypes.get_strides
         a.ctypes.data_as           a.ctypes.shape
         a.ctypes.get_as_parameter  a.ctypes.shape_as
         a.ctypes.get_data          a.ctypes.strides
         a.ctypes.get_shape         a.ctypes.strides_as

   - Minuses:

     - can't be used for writing code to be turned into C extensions, only a
       wrapper tool.

5) SWIG (automatic wrapper generator)

   - Plusses:

     - around a long time
     - multiple scripting language support
     - C++ support
     - Good for wrapping large (many functions) existing C libraries

   - Minuses:

     - generates lots of code between Python and the C code
     - can cause performance problems that are nearly impossible to
       optimize out
     - interface files can be hard to write
     - doesn't necessarily avoid reference counting issues or needing to
       know APIs

6) Weave

   - Plusses:

     - Phenomenal tool
     - can turn many numpy expressions into C code
     - dynamic compiling and loading of generated C code
     - can embed pure C code in a Python module and have weave extract,
       generate interfaces, compile, etc.

   - Minuses:

     - Future uncertain--lacks a champion

7) Psyco

   - Plusses:

     - Turns pure python into efficient machine code through jit-like
       optimizations
     - very fast when it optimizes well

   - Minuses:

     - Only on intel (windows?)
     - Doesn't do much for numpy?

Interfacing to Fortran:
-----------------------
The clear choice is f2py. (Pyfort is an older alternative, but is no longer
supported.)

Interfacing to C++:
-------------------
1) CXX
2) Boost.python
3) SWIG
4) SAGE has used cython to wrap C++ (not pretty, but it can be done)
5) SIP (used mainly in PyQT)
"""
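A short ctypes illustration of the numpy support listed above (the shared
library and the C function ``void scale(double *data, int n, double factor)``
are hypothetical -- the point is the ``a.ctypes.data_as`` hand-off)::

    import ctypes
    import numpy as np

    lib = ctypes.CDLL('./libexample.so')        # hypothetical library
    lib.scale.restype = None
    lib.scale.argtypes = [ctypes.POINTER(ctypes.c_double),
                          ctypes.c_int, ctypes.c_double]

    a = np.arange(5, dtype=np.float64)
    # Pass a pointer to the array's buffer; the C side modifies it in place.
    lib.scale(a.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
              len(a), 2.0)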
""" The ``version`` module provides version numbering for the entire |PYTEK| package. .. contents:: **Page Contents** :local: :depth: 3 :backlinks: top Versioning ------------- The PyTek packages uses a five part version number, plus an incremental release number. Either the version number or the release number can be used to identify a released version of the code. Version Number ~~~~~~~~~~~~~~~ The version number is a four part dotted number, with an optional tag on the end. Formally, a version number looks like: .. productionlist:: version number: <Major>.<minor>[.<patch>[.<semantic>]][-[x-]<tag>] With each new released version of the code, exactly one of the four numbers will increase, and any numbers to its right will reset to 0. The easiest way to understand version numbers is from the perspective of someone who has written *client code*: i.e., code that makes use of a particular version of the |PYTEK| library. From this perspective, the version number indicates whether or not your client code can be expected to work with different versions of |PYTEK|. .. _major-version: Major Version *************** The ``<Major>`` component is the **major version number**, and it describes *backward compatibility*. Going to a *newer* version of |PYTEK|, your code should continue to work as long as the major version doesn't change. The major version is changed only when something is removed from the |PYTEK| public interface. For instance, if a function is no longer supported, the major version number would have to increase, because client code which relied on that function would no longer work. The major version number can be accessed through the `MAJOR` member of this module. .. _minor-version: Minor Version *************** The ``<minor>`` component is the **minor version number**, and it describes *forward compatibility*: Going to an *older* version of |PYTEK|, your code will continue to work as long as the minor version doesn't change. (As before, your code will also work for *newer* versions of |PYTEK|, as long as the major version number hasn't changed). The minor version number is changed only when something is added to the |PYTEK| public interface, for instance a new function is added. Such a change maintains *backward* compatibility (as described above), but *loses forward compatibility*, because any client code written again this new version may not work with an older version. The minor version number can be accessed through the `MINOR` member of this module. .. _patch-version: Patch Version *************** The ``<patch>`` component is the **patch number**, and it describes changes that *do not affect compatibility*, either forwards or backwards. Your client code will continue to work with an older or newer version of |PYTEK| as long as the major and minor version numbers are the same, regardless of the patch number. Patch changes are code changes that do not effect the interface, for instance bug-fixes or performance enhancements. (although some bugs effect the interface and may therefore cause a higher version number to change). The patch number can be accessed through the `PATCH` member of this module. .. _semantic-version: Semantic Version ******************* The ``<semantic>`` component is the **semantic version number**, and it describes changes that do not affect how the code runs at all. Ths generally means that documentation or other auxilliary files included in the package have changed. The semantic version number can be accessed through the `SEMANTIC` member of this module. 
Compatibility Summary
**********************

The following table summarizes compatibility for a hypothetical client
application built against released |PYTEK| version ``M.n.p.s``:

=========== =================== ======================
Component   Compatible (all)    Incompatible (any)
=========== =================== ======================
Major       = M                 != M
minor       >= n                < n
patch       any
semantic    any
=========== =================== ======================

(These rules are mechanical enough to check in code; a brief sketch
follows the Interface Version subsection below.)

Version Tag
********************

The ``<tag>`` component is the **version tag**, which is used only for
non-released code. The tag has one of the following forms:

.. productionlist::
    version tag : << empty >>
                : dev[-<rev>]
                : blood-<branch>[-<rev>]

The first form is an empty tag, and is reserved for released (tagged)
code only.

The second form, ``"dev"``, is for non-released code in the *trunk*.
This is the main line of development. Dev code may not be completely
functional, and may even break the existing interface.

The third form, ``"blood-..."``, is for non-released code on a *branch*.
The ``<branch>`` component of this form should be the name of the
branch. This is considered *bleeding-edge* code and may be highly
unstable.

The optional ``<rev>`` component on both the second and third forms can
be used to specify a specific revision of committed development code.
This must be a globally unambiguous identifier for the revision, for
instance the changeset id.

Development code
*********************

A non-empty version tag indicates a *development version* of the code.
In this case, the four version numbers remain *unchanged* until the code
is released (at which point it is no longer development code, and the
tag is changed to empty). In other words, any time you see a non-empty
version tag, the version numbers shown refer to the version from which
the development code is derived. This is done because it is not
generally known until release what the next released version number will
be, since it is not known what types of changes will be included in it.

Specifying a version number
******************************

When specifying a version number, the major and minor version numbers
should always be included. Additionally, all non-zero version numbers
should be included, and any version number to the left of a non-zero
version number should be included.

The tag should always be included in the version number, with the
indicated hyphen separating the semantic version number and the tag. The
only exception is for released code, in which case the tag is empty and
should be omitted, along with the joining hyphen.

The optional ``"x-"`` shown preceding the tag in the version number is
for compatibility with setuptools, so that versions compare correctly.

The above rules unambiguously describe any released version of the
package.

Interface Version
******************************

Because any change to the public interface requires a change to either
the major or minor version number, the interface can be specified by a
shortened two-part version:

.. productionlist::
    interface version : <Major>.<minor>

Note that this only applies to released versions: development versions
may modify the public interface prior to changing the version numbers.
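A minimal, hypothetical sketch of the compatibility check implied by the
summary table above (the helper name is illustrative)::

    def is_compatible(built_against, installed):
        # True if client code built against the ``built_against``
        # (major, minor) pair should work with the ``installed``
        # (major, minor) version, per the compatibility summary table.
        b_major, b_minor = built_against
        i_major, i_minor = installed
        # Major versions must match exactly; the installed minor version
        # must be at least the one built against. Patch and semantic
        # numbers never affect compatibility.
        return i_major == b_major and i_minor >= b_minor

    assert is_compatible((2, 3), (2, 5))      # newer minor: compatible
    assert not is_compatible((2, 3), (2, 1))  # older minor: incompatible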
Release Number
~~~~~~~~~~~~~~~~~~~~~

The release number is a simple integer which increments by one for every
public release of the code. It does not convey any information about
compatibility with other versions, but it does provide a simple
alternative for identifying released versions.

The release number should be written with a leading ``"r"`` or
``"rel"``. For instance, the first release was ``"r1"``.

For released code, the release number may be used in place of the tag in
the version number. This is optional, because the version number and the
release number are synonymous: each identifies the release on its own.
However, including both in the version string is a useful way to provide
both pieces of information. This alternative form of the version number
is (a sketch of assembling this form follows this docstring):

.. productionlist::
    alt. version number : <Major>.<minor>[.<patch>[.<semantic>]]-r<release>

Module Contents
--------------------
"""
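A hypothetical sketch of assembling the alternative version form
described above (the function name is illustrative)::

    def format_version(major, minor, patch=0, semantic=0, release=None):
        # Include trailing numbers only when non-zero, per the rules
        # under "Specifying a version number": any number to the left
        # of a non-zero number must also be included.
        parts = [str(major), str(minor)]
        if patch or semantic:
            parts.append(str(patch))
        if semantic:
            parts.append(str(semantic))
        version = ".".join(parts)
        if release is not None:
            version += "-r%d" % release
        return version

    print(format_version(1, 2, release=1))   # -> "1.2-r1"
    print(format_version(2, 3, patch=1))     # -> "2.3.1"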
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
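A minimal usage sketch of the parser described above; the section and
option names are illustrative::

    from configparser import ConfigParser

    config_text = (
        "[server]\n"
        "host = localhost\n"
        "port = 8080\n"
    )

    parser = ConfigParser()
    parser.read_string(config_text)         # parse from a string

    print(parser.sections())                # ['server']
    print(parser.get("server", "host"))     # 'localhost'
    print(parser.getint("server", "port"))  # 8080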