# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME # 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
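A minimal client sketch for the library described above (the URL and method name are illustrative assumptions; on Python 3 the module lives at xmlrpc.client):

import xmlrpclib

# Connect to an XML-RPC endpoint; "/RPC2" is the default service path
# mentioned in the history above. Host and method are hypothetical.
server = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")
try:
    print server.echo(42)          # hypothetical remote method
except xmlrpclib.Fault, fault:
    # Server-side faults arrive as xmlrpclib.Fault exceptions.
    print "Fault %s: %s" % (fault.faultCode, fault.faultString)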
# Test 64-bit COMPARE AND BRANCH in cases where the sheer number of # instructions causes some branches to be out of range. # RUN: python %s | llc -mtriple=s390x-linux-gnu | FileCheck %s # Construct: # # before0: # conditional branch to after0 # ... # beforeN: # conditional branch to after0 # main: # 0xffcc bytes, from MVIY instructions # conditional branch to main # after0: # ... # conditional branch to main # afterN: # # Each conditional branch sequence occupies 12 bytes if it uses a short # branch and 16 if it uses a long one. The ones before "main:" have to # take the branch length into account, which is 6 for short branches, # so the final (0x34 - 6) / 12 == 3 blocks can use short branches. # The ones after "main:" do not, so the first 0x34 / 12 == 4 blocks # can use short branches. The conservative algorithm we use makes # one of the forward branches unnecessarily long, as noted in the # check output below. # # CHECK: lgb [[REG:%r[0-5]]], 0(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 1(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 2(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 3(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 4(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # ...as mentioned above, the next one could be a CGRJE instead... # CHECK: lgb [[REG:%r[0-5]]], 5(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 6(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 7(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # ...main goes here... # CHECK: lgb [[REG:%r[0-5]]], 25(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL:\.L[^ ]*]] # CHECK: lgb [[REG:%r[0-5]]], 26(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 27(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 28(%r3) # CHECK: cgrje %r4, [[REG]], [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 29(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 30(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 31(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]] # CHECK: lgb [[REG:%r[0-5]]], 32(%r3) # CHECK: cgr %r4, [[REG]] # CHECK: jge [[LABEL]]
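As a rough sanity check of the arithmetic above, a small sketch (not part of the test itself):

# Each conditional branch sequence occupies 12 bytes with a short branch;
# a short branch instruction itself is 6 bytes; the distance budget quoted
# above is 0x34 bytes.
budget = 0x34
seq_short = 12
short_branch = 6

# Blocks before "main:" must count their own branch length:
print((budget - short_branch) // seq_short)   # -> 3 short-branch blocks
# Blocks after "main:" do not:
print(budget // seq_short)                    # -> 4 short-branch blocks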
#!/usr/bin/env python # coding=utf-8 # ############################################################################## ### NZBGET POST-PROCESSING SCRIPT ### # Post-Process to Radarr. # # This script sends the download to your automated media management servers. # # NOTE: This script requires Python to be installed on your system. ############################################################################## ### OPTIONS ### ## General # Auto Update nzbToMedia (0, 1). # # Set to 1 if you want nzbToMedia to automatically check for and update to the latest version. #auto_update=0 # Check Media for corruption (0, 1). # # Enable/Disable media file checking using ffprobe. #check_media=1 # Safe Mode protection of DestDir (0, 1). # # Enable/Disable a safety check to ensure we don't process all downloads in the default_downloadDirectory by mistake. #safe_mode=1 # Disable additional extraction checks for failed (0, 1). # # Turn this on to disable additional extraction attempts for failed downloads. Default = 0; this will attempt to extract and verify if media is present. #no_extract_failed = 0 ## Radarr # Radarr script category. # # Category that gets called for post-processing with Radarr. #raCategory=movies2 # Radarr host. # # The IP address for your Radarr server, e.g. for the same system use localhost or IP_ADDRESS #rahost=localhost # Radarr port. #raport=7878 # Radarr API key. #raapikey= # Radarr uses ssl (0, 1). # # Set to 1 if using ssl, else set to 0. #rassl=0 # Radarr web_root # # Set this if using a reverse proxy. #raweb_root= # Radarr OMDB API Key. # # API key for www.omdbapi.com (used as an alternative to imdb to assist with movie identification). #raomdbapikey= # Radarr wait_for # # Set the number of minutes to wait after calling the renamer, to check that the movie has changed status. #rawait_for=6 # Radarr import mode (Move, Copy). # # Set to define import behaviour: Move or Copy. #raimportmode=Copy # Radarr Delete Failed Downloads (0, 1). # # Set to 1 to delete failed, or 0 to leave files in place. #radelete_failed=0 # Radarr and NZBGet are on different systems (0, 1). # # Enable to replace the local path with the path as per the mountPoints below. #raremote_path=0 ## Network # Network Mount Points (needed for remote path above) # # Enter mount points as LocalPath,RemotePath and separate each pair with '|' # e.g. mountPoints=/volume1/Public/,E:\|/volume2/share/,\\NAS\ #mountPoints= ## Extensions # Media Extensions # # This is a list of media extensions that are used to verify that the download does contain valid media. #mediaExtensions=.mkv,.avi,.divx,.xvid,.mov,.wmv,.mp4,.mpg,.mpeg,.vob,.iso,.ts ## Posix # Niceness for external tasks Extractor and Transcoder. # # Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process). # If entering an integer e.g. 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default). # If entering a comma separated list e.g. 'niceness=nice,4' this will be passed as 'nice 4' (Safer). #niceness=nice,-n0 # ionice scheduling class (0, 1, 2, 3). # # Set the ionice scheduling class. 0 for none, 1 for real time, 2 for best-effort, 3 for idle. #ionice_class=2 # ionice scheduling class data. # # Set the ionice scheduling class data. This defines the class data, if the class accepts an argument. For real time and best-effort, 0-7 is valid data. #ionice_classdata=4 ## Transcoder # getSubs (0, 1). # # Set to 1 to download subtitles. #getSubs = 0 # subLanguages. # # subLanguages. 
Create a list of languages in the order you want them in your subtitles. #subLanguages = eng,spa,fra # Transcode (0, 1). # # Set to 1 to transcode, otherwise set to 0. #transcode=0 # Create a duplicate, or replace the original (0, 1). # # Set to 1 to create a new file or 0 to replace the original. #duplicate=1 # Ignore extensions. # # List of extensions that won't be transcoded. #ignoreExtensions=.avi,.mkv # outputFastStart (0,1). # # outputFastStart. 1 will use -movflags +faststart. 0 will disable this from being used. #outputFastStart = 0 # outputVideoPath. # # outputVideoPath. Set the path you want transcoded videos moved to. Leave blank to disable. #outputVideoPath = # processOutput (0,1). # # processOutput. 1 will send the outputVideoPath to SickBeard/CouchPotato. 0 will send original files. #processOutput = 0 # audioLanguage. # # audioLanguage. Set the 3-letter language code you want as your primary audio track. #audioLanguage = eng # allAudioLanguages (0,1). # # allAudioLanguages. 1 will keep all audio tracks (uses AudioCodec3) where available. #allAudioLanguages = 0 # allSubLanguages (0,1). # # allSubLanguages. 1 will keep all existing sub languages. 0 will discard those not in your list above. #allSubLanguages = 0 # embedSubs (0,1). # # embedSubs. 1 will embed external sub/srt subs into your video if this is supported. #embedSubs = 1 # burnInSubtitle (0,1). # # burnInSubtitle. Burns the default sub language into your video (needed for players that don't support subs). #burnInSubtitle = 0 # extractSubs (0,1). # # extractSubs. 1 will extract subs from the video file and save these as external srt files. #extractSubs = 0 # externalSubDir. # # externalSubDir. Set the directory where subs should be saved (if not the same directory as the video). #externalSubDir = # outputDefault (None, iPad, iPad-1080p, iPad-720p, Apple-TV2, iPod, iPhone, PS3, xbox, Roku-1080p, Roku-720p, Roku-480p, mkv, mkv-bluray, mp4-scene-release, MKV-SD). # # outputDefault. Loads default configs for the selected device. The remaining options below are ignored. # If you want to use your own profile, set None and set the remaining options below. #outputDefault = None # hwAccel (0,1). # # hwAccel. 1 will set ffmpeg to enable hardware acceleration (this requires a recent ffmpeg). #hwAccel=0 # ffmpeg output settings. #outputVideoExtension=.mp4 #outputVideoCodec=libx264 #VideoCodecAllow= #outputVideoResolution=720:-1 #outputVideoPreset=medium #outputVideoFramerate=24 #outputVideoBitrate=800k #outputAudioCodec=ac3 #AudioCodecAllow= #outputAudioChannels=6 #outputAudioBitrate=640k #outputQualityPercent= #outputAudioTrack2Codec=libfaac #AudioCodec2Allow= #outputAudioTrack2Channels=2 #outputAudioTrack2Bitrate=160k #outputAudioOtherCodec=libmp3lame #AudioOtherCodecAllow= #outputAudioOtherChannels=2 #outputAudioOtherBitrate=128k #outputSubtitleCodec= ## WakeOnLan # Use WOL (0, 1). # # Set to 1 to send a WOL broadcast to the MAC and test the server (e.g. xbmc) on the host and port specified. #wolwake=0 # WOL MAC # # Enter the MAC address of the system to be woken. #wolmac=00:01:2e:2D:64:e1 # Set the Host and Port of a server to verify the system has woken. #wolhost=IP_ADDRESS #wolport=80 ### NZBGET POST-PROCESSING SCRIPT ### ##############################################################################
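NZBGet hands each option above to the script as an environment variable named NZBPO_<OPTIONNAME>. A minimal sketch of reading a few of them (variable names follow the options block; defaults are illustrative):

import os

ra_host = os.environ.get('NZBPO_RAHOST', 'localhost')
ra_port = int(os.environ.get('NZBPO_RAPORT', '7878'))
ra_ssl = os.environ.get('NZBPO_RASSL', '0') == '1'

protocol = 'https' if ra_ssl else 'http'
print('Radarr endpoint: %s://%s:%s' % (protocol, ra_host, ra_port))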
#!/usr/bin/env python # -*- coding: utf-8 -*- # ***********************IMPORTANT NMAP LICENSE TERMS************************ # * * # * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is * # * also a registered trademark of Insecure.Com LLC. This program is free * # * software; you may redistribute and/or modify it under the terms of the * # * GNU General Public License as published by the Free Software * # * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS * # * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, * # * modify, and redistribute this software under certain conditions. If * # * you wish to embed Nmap technology into proprietary software, we sell * # * alternative licenses (contact EMAIL Dozens of software * # * vendors already license Nmap technology such as host discovery, port * # * scanning, OS detection, version detection, and the Nmap Scripting * # * Engine. * # * * # * Note that the GPL places important restrictions on "derivative works", * # * yet it does not provide a detailed definition of that term. To avoid * # * misunderstandings, we interpret that term as broadly as copyright law * # * allows. For example, we consider an application to constitute a * # * derivative work for the purpose of this license if it does any of the * # * following with any software or content covered by this license * # * ("Covered Software"): * # * * # * o Integrates source code from Covered Software. * # * * # * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db * # * or nmap-service-probes. * # * * # * o Is designed specifically to execute Covered Software and parse the * # * results (as opposed to typical shell or execution-menu apps, which will * # * execute anything you tell them to). * # * * # * o Includes Covered Software in a proprietary executable installer. The * # * installers produced by InstallShield are an example of this. Including * # * Nmap with other software in compressed or archival form does not * # * trigger this provision, provided appropriate open source decompression * # * or de-archiving software is widely available for no charge. For the * # * purposes of this license, an installer is considered to include Covered * # * Software even if it actually retrieves a copy of Covered Software from * # * another source during runtime (such as by downloading it from the * # * Internet). * # * * # * o Links (statically or dynamically) to a library which does any of the * # * above. * # * * # * o Executes a helper program, module, or script to do any of the above. * # * * # * This list is not exclusive, but is meant to clarify our interpretation * # * of derived works with some common examples. Other people may interpret * # * the plain GPL differently, so we consider this a special exception to * # * the GPL that we apply to Covered Software. Works which meet any of * # * these conditions must conform to all of the terms of this license, * # * particularly including the GPL Section 3 requirements of providing * # * source code and allowing free redistribution of the work as a whole. * # * * # * As another special exception to the GPL terms, Insecure.Com LLC grants * # * permission to link the code of this program with any version of the * # * OpenSSL library which is distributed under a license identical to that * # * listed in the included docs/licenses/OpenSSL.txt file, and distribute * # * linked combinations including the two. 
* # * * # * Any redistribution of Covered Software, including any derived works, * # * must obey and carry forward all of the terms of this license, including * # * obeying all GPL rules and restrictions. For example, source code of * # * the whole work must be provided and free redistribution must be * # * allowed. All GPL references to "this License", are to be treated as * # * including the terms and conditions of this license text as well. * # * * # * Because this license imposes special exceptions to the GPL, Covered * # * Work may not be combined (even as part of a larger work) with plain GPL * # * software. The terms, conditions, and exceptions of this license must * # * be included as well. This license is incompatible with some other open * # * source licenses as well. In some cases we can relicense portions of * # * Nmap or grant special permissions to use it in other open source * # * software. Please contact EMAIL with any such requests. * # * Similarly, we don't incorporate incompatible open source software into * # * Covered Software without special permission from the copyright holders. * # * * # * If you have any questions about the licensing restrictions on using * # * Nmap in other works, we are happy to help. As mentioned above, we also * # * offer an alternative license to integrate Nmap into proprietary * # * applications and appliances. These contracts have been sold to dozens * # * of software vendors, and generally include a perpetual license as well * # * as providing for priority support and updates. They also fund the * # * continued development of Nmap. Please email EMAIL for further * # * information. * # * * # * If you have received a written license agreement or contract for * # * Covered Software stating terms other than these, you may choose to use * # * and redistribute Covered Software under those terms instead of these. * # * * # * Source is provided to this software because we believe users have a * # * right to know exactly what a program is going to do before they run it. * # * This also allows you to audit the software for security holes (none * # * have been found so far). * # * * # * Source code also allows you to port Nmap to new platforms, fix bugs, * # * and add new features. You are highly encouraged to send your changes * # * to the EMAIL mailing list for possible incorporation into the * # * main distribution. By sending these changes to Fyodor or one of the * # * Insecure.Org development mailing lists, or checking them into the Nmap * # * source code repository, it is understood (unless you specify otherwise) * # * that you are offering the Nmap Project (Insecure.Com LLC) the * # * unlimited, non-exclusive right to reuse, modify, and relicense the * # * code. Nmap will always be available Open Source, but this is important * # * because the inability to relicense code has caused devastating problems * # * for other Free Software projects (such as KDE and NASM). We also * # * occasionally relicense the code to third parties as discussed above. * # * If you wish to specify special license conditions of your * # * contributions, just say so when you send them. * # * * # * This program is distributed in the hope that it will be useful, but * # * WITHOUT ANY WARRANTY; without even the implied warranty of * # * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the Nmap * # * license file for more details (it's in a COPYING file included with * # * Nmap, and also available from https://svn.nmap.org/nmap/COPYING * # * * # ***************************************************************************/
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
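A short usage sketch of the API described above (the section and option names are illustrative):

from configparser import ConfigParser

parser = ConfigParser()
parser.read_string("""
[server]
host = localhost
port = 8080
debug = yes
""")

print(parser.sections())                     # ['server']
print(parser.get('server', 'host'))          # 'localhost'
print(parser.getint('server', 'port'))       # 8080
print(parser.getboolean('server', 'debug'))  # True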
#!/usr/bin/env python ############################################################################ # # MODULE: r.shaded.relief # AUTHOR(S): CERL # parameters standardized: NAME 2008 # updates: NAME 2004 # updates: NAME 2003 # updates: NAME 2001 # updates: NAME 2001 # updates: NAME 2001, 1999 # Converted to Python by NAME PURPOSE: Creates shaded relief map from raster elevation map (DEM) # COPYRIGHT: (C) 1999 - 2008, 2010 by the GRASS Development Team # # This program is free software under the GNU General Public # License (>=v2). Read the file COPYING that comes with GRASS # for details. # ############################################################################# # # July 2007 - allow input from other mapsets (Brad NAME May 2005 - fixed wrong units parameter (Markus NAME September 2004 - Added z exaggeration control (Michael NAME # April 2004 - updated for GRASS 5.7 by NAME # # 9/2004 Adds scale factor input (as per documentation); units set scale only if specified for lat/long regions # Also, adds option of controlling z-exaggeration. # # 6/2003 fixes for Lat/Long Gordon Keith <EMAIL> # If n is a number then the ewres and nsres are multiplied by that scale # to calculate the shading. # If n is the letter M (either case) the number of metres in a degree of # latitude is used as the scale. # If n is the letter f then the number of feet in a degree is used. # It scales latitude and longitude equally, so it's only approximately # right, but for shading it's close enough. It makes the difference # between an unusable and usable shade. # # 10/2001 fix for testing for dashes in raster file name # by NAME <EMAIL> # 10/2001 added parser support - NAME 9/2001 fix to keep NULLs as is (was value 22 before) - NAME 1/2001 fix for NULL by NAME <EMAIL> # 11/99 updated $ewres to ewres() and $nsres to nsres() # updated number to FP in r.mapcalc statement Markus Neteler #%module #% description: Creates shaded relief map from an elevation map (DEM). #% keywords: raster #% keywords: elevation #%end #%option G_OPT_R_INPUT #% description: Name of input elevation raster map #%end #%option G_OPT_R_OUTPUT #%end #%option #% key: altitude #% type: double #% description: Altitude of the sun in degrees above the horizon #% required: no #% options : 0-90 #% answer: 30 #%end #%option #% key: azimuth #% type: double #% description: Azimuth of the sun in degrees to the east of north #% required: no #% options : 0-360 #% answer: 270 #%end #%option #% key: zmult #% type: double #% description: Factor for exaggerating relief #% required: no #% answer: 1 #%end #%option #% key: scale #% type: double #% description: Scale factor for converting horizontal units to elevation units #% required: no #% answer: 1 #% guisection: Scaling #%end #%option #% key: units #% type: string #% description: Set scaling factor (applies to lat./long. locations only, none: scale=1) #% required: no #% options: none,meters,feet #% answer: none #% guisection: Scaling #%end
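A sketch of driving the module from a GRASS Python session, using the option keys declared above (map names are illustrative; requires a running GRASS environment):

import grass.script as grass

grass.run_command('r.shaded.relief',
                  input='elevation',          # input DEM raster
                  output='elevation_shaded',  # output shaded relief map
                  altitude=30,                # sun altitude, degrees above horizon
                  azimuth=270,                # sun azimuth, degrees east of north
                  zmult=2)                    # exaggerate relief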
# -*- encoding: utf-8 -*- ############################################################################## # # Copyright (c) 2009 Veritos - NAME - www.veritos.nl # # WARNING: This program as such is intended to be used by professional # programmers who take the whole responsibility of assessing all potential # consequences resulting from its eventual inadequacies and bugs. # End users who are looking for a ready-to-use solution with commercial # guarantees and support are strongly advised to contract a Free Software # Service Company like Veritos. # # This program is Free Software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA # ############################################################################## # # This module works in OpenERP 5.0.0 (and probably higher). # This module does not work in OpenERP version 4 and lower. # # Status 1.0 - tested on OpenERP 5.0.3 # # Version IP_ADDRESS # account.account.type # Groundwork laid for all account types. # # account.account.template # Groundwork laid with all required general-ledger accounts, linked via a menu # structure to sections 1 through 9. # The general-ledger accounts are linked to the account.account.type. # These links still need a thorough review. # # account.chart.template # Groundwork laid for linking accounts to debtors, creditors, # bank, purchase and sales journals, and the VAT configuration. # # Version IP_ADDRESS # account.tax.code.template # Groundwork laid for the VAT configuration (structure). # Used the VAT return form as the basis. Whether this works remains to be seen. # # account.tax.template # Created the VAT accounts and linked them to the relevant # general-ledger accounts. # # Version IP_ADDRESS # Cleaned up the code and removed unused components. # Version IP_ADDRESS # Changed a_expense from 3000 -> 7000 # record id='btw_code_5b' set to a negative value # Version IP_ADDRESS # VAT accounts were given a type designation for purchase or sale # Version IP_ADDRESS # Cleaned up the module. # Version IP_ADDRESS # Cleaned up the module. # Version IP_ADDRESS # Fixed a small bug in l10n_nl_wizard.xml that prevented the module from installing completely. # Version IP_ADDRESS # Accounts Receivable and Payable properly defined. # Version IP_ADDRESS # All user_type_xxx fields properly defined. # Removed construction- and garage-specific ledgers to create a standard module. # This module can then be used as a basis for creating modules for specific target groups. # Version IP_ADDRESS # Corrected account 7010 (it duplicated 7014, which broke the installation) # version IP_ADDRESS # Corrected various account types from user_type_asset -> user_type_liability and user_type_equity # version IP_ADDRESS # Small correction to 'VAT receivable, high rate': the id was the same for both, so 'high' was overwritten by # 'other'. Clarified tax-code descriptions for the return overview. # version IP_ADDRESS # Adjusted VAT descriptions so reports look better. Removed 2a, 5b, and the like, and added some descriptions. # version IP_ADDRESS - Switch to English # Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense # Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
""" IPy - class and tools for handling of IPv4 and IPv6 Addresses and Networks. $HeadURL: http://svn.23.nu/svn/repos/IPy/trunk/IPy.py $ $Id: IPy.py,v 1.1 2007/08/14 09:39:00 cristian Exp $ The IP class allows a comfortable parsing and handling for most notations in use for IPv4 and IPv6 Addresses and Networks. It was greatly inspired bei RIPE's Perl module NET::IP's interface but doesn't share the Implementation. It doesn't share non-CIDR netmasks, so funky stuff lixe a netmask 0xffffff0f can't be done here. >>> ip = IP('IP_ADDRESS/30') >>> for x in ip: ... print x ... IP_ADDRESS IP_ADDRESS IP_ADDRESS IP_ADDRESS >>> ip2 = IP('0x7f000000/30') >>> ip == ip2 1 >>> ip.reverseNames() ['IP_ADDRESS.in-addr.arpa.', 'IP_ADDRESS.in-addr.arpa.', 'IP_ADDRESS.in-addr.arpa.', 'IP_ADDRESS.in-addr.arpa.'] >>> ip.reverseName() '0-IP_ADDRESS.in-addr.arpa.' >>> ip.iptype() 'PRIVATE' It can detect about a dozen different ways of expressing IP addresses and networks, parse them and distinguish between IPv4 and IPv6 addresses. >>> IP('IP_ADDRESS/8').version() 4 >>> IP('::1').version() 6 >>> print IP(0x7f000001) IP_ADDRESS >>> print IP('0x7f000001') IP_ADDRESS >>> print IP('IP_ADDRESS') IP_ADDRESS >>> print IP('10') IP_ADDRESS >>> print IP('1080:0:0:0:8:800:200C:417A') IP_ADDRESS >>> print IP('1080::8:800:200C:417A') IP_ADDRESS >>> print IP('::1') IP_ADDRESS >>> print IP('::IP_ADDRESS') IP_ADDRESS3 >>> print IP('IP_ADDRESS/8') IP_ADDRESS/8 >>> print IP('IP_ADDRESS/IP_ADDRESS') IP_ADDRESS/8 >>> print IP('IP_ADDRESS-IP_ADDRESS') IP_ADDRESS/8 Nearly all class methods which return a string have an optional parameter 'wantprefixlen' which controlles if the prefixlen or netmask is printed. Per default the prefilen is always shown if the net contains more than one address. wantprefixlen == 0 / None don't return anything IP_ADDRESS wantprefixlen == 1 /prefix IP_ADDRESS/24 wantprefixlen == 2 /netmask IP_ADDRESS/IP_ADDRESS wantprefixlen == 3 -lastip IP_ADDRESS-IP_ADDRESS You can also change the defaults on an per-object basis by fiddeling with the class members NoPrefixForSingleIp WantPrefixLen >>> IP('IP_ADDRESS/32').strNormal() 'IP_ADDRESS' >>> IP('IP_ADDRESS/24').strNormal() 'IP_ADDRESS/24' >>> IP('IP_ADDRESS/24').strNormal(0) 'IP_ADDRESS' >>> IP('IP_ADDRESS/24').strNormal(1) 'IP_ADDRESS/24' >>> IP('IP_ADDRESS/24').strNormal(2) 'IP_ADDRESS/IP_ADDRESS' >>> IP('IP_ADDRESS/24').strNormal(3) 'IP_ADDRESS-IP_ADDRESS' >>> ip = IP('IP_ADDRESS') >>> print ip IP_ADDRESS >>> ip.NoPrefixForSingleIp = None >>> print ip IP_ADDRESS/32 >>> ip.WantPrefixLen = 3 >>> print ip IP_ADDRESS-IP_ADDRESS Further Information might be available at http://c0re.jp/c0de/IPy/ Hacked 2001 by EMAIL * better comparison (__cmp__ and friends) * tests for __cmp__ * always write hex values lowercase * interpret IP_ADDRESS as IP_ADDRESS * move size in bits into class variables to get rid of some "if self._ipversion ..." * support for base85 encoding * support for output of IPv6 encoded IPv4 Addresses * update address type tables * first-last notation should be allowed for IPv6 * add IPv6 docstring examples * check better for negative parameters * add addition / aggregation * move reverse name stuff out of the classes and refactor it * support for aggregation of more than two nets at once * support for aggregation with "holes" * support for finding common prefix * '>>' and '<<' for prefix manipulation * add our own exceptions instead ValueError all the time * rename checkPrefix to checkPrefixOk * add more documentation and doctests * refactor """
# #!/usr/bin/env python # # """ unittests covering the del.icio.us functions provided by eWRT """ # # # (C)opyrights 2008 by NAME <EMAIL> # # # # This program is free software: you can redistribute it and/or modify # # it under the terms of the GNU General Public License as published by # # the Free Software Foundation, either version 3 of the License, or # # (at your option) any later version. # # # # This program is distributed in the hope that it will be useful, # # but WITHOUT ANY WARRANTY; without even the implied warranty of # # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # # GNU General Public License for more details. # # # # You should have received a copy of the GNU General Public License # # along with this program. If not, see <http://www.gnu.org/licenses/>. # # import unittest # # from nose.plugins.attrib import attr # # from eWRT.ws.delicious import Delicious # # # DELICIOUS_TEST_URLS = ( 'http://www.iaeste.at', 'http://www.wu-wien.ac.at', # 'http://www.heise.de', 'http://www.kurier.at', # 'http://news.bbc.co.uk', ) # # DELICIOUS_TEST_TAGS = [("linux",), ("information",), ("information", "retrieval"), # ("linux", "debian"),("algore",)] # # RELATED_TAGS_DELICIOUS_PAGE = './data/delicious_climate_related_tags.html' # # class TestDelicious(unittest.TestCase): # # def test_url_info(self): # for url in DELICIOUS_TEST_URLS: # print '%s has %s counts '% (url, Delicious.delicious_info_retrieve(url)) # # def test_tag_info(self): # print '### Testing tag_info ###' # for tags in DELICIOUS_TEST_TAGS: # print '%s has %s counts ' % (tags, Delicious.getTagInfo( tags )) # # # def test_related_tags(self): # print '### Testing related_tags ###' # for tags in DELICIOUS_TEST_TAGS: # print '%s has related tags: %s' % (tags, Delicious.getRelatedTags( tags )) # # def test_tag_splitting(self): # """ verifies the correct handling of tags containing spaces # i.e. (t1, t2, t3) == (t1, "t2 t3") """ # d = Delicious._parse_tag_url # print d( ("debian linux") ) # assert d( ("debian", "linux" )) == d( ("debian linux", ) ) # assert d( ("t1", "t2", "t3") ) == d( ("t1", "t2 t3") ) # # def test_ngram_related_tags(self): # """ tests support for related tags for n-grams """ # assert len( Delicious().getRelatedTags( ("climate", "change") ) ) > 0 # # content = open( TestDelicious.RELATED_TAGS_DELICIOUS_PAGE ).read() # related_tags = Delicious._getNGramRelatedTags( content ) # # assert 'global' in related_tags # assert 'evidence' in related_tags # assert 'vegetarian' in related_tags # assert 'sustainability' in related_tags # # assert 'linux' not in related_tags # # @attr("remote") # def test_critical_tag_names(self): # """ tests tag names which contain slashes, quotes, etc """ # assert Delicious.getTagInfo( ("consequence/frequency matrix", ) ) != None # assert Delicious.getTagInfo( ("it's", )) != None # # if __name__ == '__main__': # unittest.main()
{ 'array': array([('2003-1', 371.0, 0.0), ('2003-2', 187.0, 0.0), ('2003-3', 76.0, 0.0), ('2003-4', 36.0, 0.0), ('2003-5', 32.0, 304.0), ('2003-6', 115.0, 301.0), ('2003-7', 101.0, 46.0), ('2003-8', 24.0, 7.0), ('2003-9', 115.0, 200.0), ('2003-10', 112.0, 579.0), ('2003-11', 49.0, 52.0), ('2003-12', 62.0, 380.0), ('2004-1', 49.0, 18.0), ('2004-2', 29.0, 5.0), ('2004-3', 13.0, 68.0), ('2004-4', 55.0, 200.0), ('2004-5', 24.0, 82.0), ('2004-6', 52.0, 332.0), ('2004-7', 80.0, 107.0), ('2004-8', 38.0, 23.0), ('2004-9', 23.0, 44.0), ('2004-10', 11.0, 14.0), ('2004-11', 79.0, 400.0), ('2004-12', 58.0, 77.0), ('2005-1', 71.0, 371.0), ('2005-2', 95.0, 358.0), ('2005-3', 54.0, 342.0), ('2005-4', 66.0, 472.0), ('2005-5', 108.0, 731.0), ('2005-6', 54.0, 732.0), ('2005-7', 71.0, 922.0), ('2005-8', 68.0, 1038.0), ('2005-9', 162.0, 559.0), ('2005-10', 109.0, 737.0), ('2005-11', 48.0, 451.0), ('2005-12', 66.0, 515.0), ('2006-1', 55.0, 514.0), ('2006-2', 33.0, 486.0), ('2006-3', 154.0, 540.0), ('2006-4', 138.0, 961.0), ('2006-5', 67.0, 609.0), ('2006-6', 38.0, 757.0), ('2006-7', 111.0, 687.0), ('2006-8', 60.0, 506.0), ('2006-9', 11.0, 436.0), ('2006-10', 27.0, 521.0), ('2006-11', 32.0, 659.0), ('2006-12', 57.0, 514.0), ('2007-1', 37.0, 742.0), ('2007-2', 58.0, 837.0), ('2007-3', 76.0, 1175.0)], dtype=[('number of posts', 'S7'), ('pypy-dev', float), ('pypy-svn', float)]), 'kwds': { 'comments': '#', 'delimiter': ',', 'dtype': { 'formats': ( 'S7', float, float), 'names': ( 'number of posts', 'pypy-dev', 'pypy-svn')}
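The kwds above match numpy's text-loading interface; a minimal sketch of reading such data with that dtype (the inline sample rows are illustrative):

import io
import numpy as np

text = io.StringIO("2003-1,371.0,0.0\n2003-2,187.0,0.0\n2003-3,76.0,0.0\n")
arr = np.loadtxt(text, comments='#', delimiter=',',
                 dtype={'formats': ('S7', float, float),
                        'names': ('number of posts', 'pypy-dev', 'pypy-svn')})
print(arr['pypy-dev'])   # -> [371. 187.  76.]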
# If you ask which matters more, personal data or security: for a computer scientist, keeping personal data secure is the foremost concern.
# One way to keep data secure is to encode it in a form that cannot be decoded,
# used only to check whether two pieces of data match.
# This problem presents one simple encoding scheme: take an English sentence and split it into words,
# where words are separated by the spaces in the sentence, then convert each letter to a hash value computed as
# value = (Alphabet Position) + (Position in Word)
# Alphabet Position is the letter's ordinal in the English alphabet:
# A or a has Alphabet Position 0,
# B or b has Alphabet Position 1,
# C or c has Alphabet Position 2,
# and so on, up to Z or z with Alphabet Position 25.
# Position in Word is the letter's position within its word, starting from 0 for the first letter.
# Example: the hash of "DATA HASH" is 66,
# the sum of the per-letter hashes D + A + T + A + H + A + S + H = 3 + 1 + 21 + 3 + 7 + 1 + 20 + 10 = 66,
# computed as follows:
# Word DATA:
# D hashes to 3 = 3 + 0; A (after D) hashes to 1 = 0 + 1;
# T hashes to 21 = 19 + 2; A (last letter of DATA) hashes to 3 = 0 + 3.
# Word HASH:
# H (first letter) hashes to 7 = 7 + 0; A (after H) hashes to 1 = 0 + 1;
# S hashes to 20 = 18 + 2; H (last letter) hashes to 10 = 7 + 3.
# Write a program that computes the hash of the message to be encoded.
# Input
# A single line of English text to encode, consisting only of letters and spaces; no digits or special characters.
# Output
# The hash value obtained by encoding the message as described above.
# Sample input/output
# Input: DATA HASH -> Output: 66
# Input: privacy or security -> Output: 280
# Input: z z z z z z -> Output: 150
# Input: SLEEPY zzz ZZZ -> Output: 247
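A reference solution for the exercise above:

def message_hash(text):
    # Sum (alphabet position, A/a = 0) + (position in word, first letter = 0)
    # over every letter of every space-separated word.
    total = 0
    for word in text.split():
        for pos, ch in enumerate(word):
            total += (ord(ch.lower()) - ord('a')) + pos
    return total

print(message_hash(input()))
# message_hash("DATA HASH") == 66, message_hash("SLEEPY zzz ZZZ") == 247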
# Copyright © 2005-2009 NAME Lingfo Pty Ltd # This module is part of the xlrd3 package, which is released under a # BSD-style licence. # # xlrd3, the Python 3 port of xlrd v0.7.1 # # A Python module for extracting data from MS Excel spreadsheet files. # # General information # # Acknowledgements # # Development of this module would not have been possible without the document # "OpenOffice.org's Documentation of the Microsoft Excel File Format" # ("OOo docs" for short). # The latest version is available from OpenOffice.org in # http://sc.openoffice.org/excelfileformat.pdf PDF format # and # http://sc.openoffice.org/excelfileformat.odt ODT format. # Small portions of the OOo docs are reproduced in this # document. A study of the OOo docs is recommended for those who wish a # deeper understanding of the Excel file layout than the xlrd docs can provide. # # Provision of formatting information in version 0.6.1 was funded by # http://www.simplistix.co.uk Simplistix Ltd. # # Unicode # # This module presents all text strings as Python unicode objects. # From Excel 97 onwards, text in Excel spreadsheets has been stored as Unicode. # Older files (Excel 95 and earlier) don't keep strings in Unicode; # a CODEPAGE record provides a codepage number (for example, 1252) which is # used by xlrd to derive the encoding (for same example: "cp1252") which is # used to translate to Unicode. # # If the CODEPAGE record is missing (possible if the file was created # by third-party software), xlrd will assume that the encoding is ascii, and keep going. # If the actual encoding is not ascii, a UnicodeDecodeError exception will be raised and # you will need to determine the encoding yourself, and tell xlrd:: # # book = xlrd.open_workbook(..., encoding_override="cp1252") # # If the CODEPAGE record exists but is wrong (for example, the codepage # number is 1251, but the strings are actually encoded in koi8_r), # it can be overridden using the same mechanism. # The supplied runxlrd.py has a corresponding command-line argument, which # may be used for experimentation:: # # runxlrd.py -e koi8_r 3rows myfile.xls # # The first place to look for an encoding ("codec name") is # http://docs.python.org/lib/standard-encodings.html # the Python documentation. # # Dates in Excel spreadsheets # # In reality, there are no such things. What you have are floating point # numbers and pious hope. # There are several problems with Excel dates: # # (1) Dates are not stored as a separate data type; they are stored as # floating point numbers and you have to rely on # (a) the "number format" applied to them in Excel and/or # (b) knowing which cells are supposed to have dates in them. # This module helps with (a) by inspecting the # format that has been applied to each number cell; # if it appears to be a date format, the cell # is classified as a date rather than a number. Feedback on this feature, # especially from non-English-speaking locales, would be appreciated. # # (2) Excel for Windows stores dates by default as the number of # days (or fraction thereof) since 1899-12-31T00:00:00. Excel for # Macintosh uses a default start date of 1904-01-01T00:00:00. The date # system can be changed in Excel on a per-workbook basis (for example: # Tools -> Options -> Calculation, tick the "1904 date system" box). # This is of course a bad idea if there are already dates in the # workbook. There is no good reason to change it even if there are no # dates in the workbook. Which date system is in use is recorded in the # workbook. 
A workbook transported from Windows to Macintosh (or vice # versa) will work correctly with the host Excel. When using this # module's xldate_as_tuple function to convert numbers from a workbook, # you must use the datemode attribute of the Book object. If you guess, # or make a judgement depending on where you believe the workbook was # created, you run the risk of being 1462 days out of kilter. # # Reference: # http://support.microsoft.com/default.aspx?scid=KB;EN-US;q180162 # # (3) The Excel implementation of the Windows-default 1900-based date system works on the # incorrect premise that 1900 was a leap year. It interprets the number 60 as meaning 1900-02-29, # which is not a valid date. Consequently any number less than 61 is ambiguous. Example: is 59 the # result of 1900-02-28 entered directly, or is it 1900-03-01 minus 2 days? The OpenOffice.org Calc # program "corrects" the Microsoft problem; entering 1900-02-27 causes the number 59 to be stored. # Save as an XLS file, then open the file with Excel -- you'll see 1900-02-28 displayed. # # Reference: http://support.microsoft.com/default.aspx?scid=kb;en-us;214326 # # (4) The Macintosh-default 1904-based date system counts 1904-01-02 as day 1 and 1904-01-01 as day zero. # Thus any number such that (0.0 <= number < 1.0) is ambiguous. Is 0.625 a time of day (15:00:00), # independent of the calendar, # or should it be interpreted as an instant on a particular day (1904-01-01T15:00:00)? # The xldate_* functions in this module # take the view that such a number is a calendar-independent time of day (like Python's datetime.time type) for both # date systems. This is consistent with more recent Microsoft documentation # (for example, the help file for Excel 2002 which says that the first day # in the 1904 date system is 1904-01-02). # # (5) Usage of the Excel DATE() function may leave strange dates in a spreadsheet. Quoting the help file, # in respect of the 1900 date system: "If year is between 0 (zero) and 1899 (inclusive), # Excel adds that value to 1900 to calculate the year. For example, DATE(108,1,2) returns January 2, 2008 (1900+108)." # This gimmick, semi-defensible only for arguments up to 99 and only in the pre-Y2K-awareness era, # means that DATE(1899, 12, 31) is interpreted as 3799-12-31. # # For further information, please refer to the documentation for the xldate_* functions. # # Named references, constants, formulas, and macros # # A name is used to refer to a cell, a group of cells, a constant # value, a formula, or a macro. Usually the scope of a name is global # across the whole workbook. However it can be local to a worksheet. # For example, if the sales figures are in different cells in # different sheets, the user may define the name "Sales" in each # sheet. There are built-in names, like "Print_Area" and # "Print_Titles"; these two are naturally local to a sheet. # # To inspect the names with a user interface like MS Excel, OOo Calc, # or Gnumeric, click on Insert/Names/Define. This will show the global # names, plus those local to the currently selected sheet. # # A Book object provides two dictionaries (name_map and # name_and_scope_map) and a list (name_obj_list) which allow various # ways of accessing the Name objects. There is one Name object for # each NAME record found in the workbook. Name objects have many # attributes, several of which are relevant only when obj.macro is 1. 
# # In the examples directory you will find namesdemo.xls which # showcases the many different ways that names can be used, and # xlrdnamesAPIdemo.py which offers 3 different queries for inspecting # the names in your files, and shows how to extract whatever a name is # referring to. There is currently one "convenience method", # Name.cell(), which extracts the value in the case where the name # refers to a single cell. More convenience methods are planned. The # source code for Name.cell (in __init__.py) is an extra source of # information on how the Name attributes hang together. # # Name information is **not** extracted from files older than # Excel 5.0 (Book.biff_version < 50) # # Formatting # # Introduction # # This collection of features, new in xlrd version 0.6.1, is intended # to provide the information needed to (1) display/render spreadsheet contents # (say) on a screen or in a PDF file, and (2) copy spreadsheet data to another # file without losing the ability to display/render it. # # The Palette; Colour Indexes # # A colour is represented in Excel as a (red, green, blue) ("RGB") tuple # with each component in range(256). However it is not possible to access an # unlimited number of colours; each spreadsheet is limited to a palette of 64 different # colours (24 in Excel 3.0 and 4.0, 8 in Excel 2.0). Colours are referenced by an index # ("colour index") into this palette. # # Colour indexes 0 to 7 represent 8 fixed built-in colours: black, white, red, green, blue, # yellow, magenta, and cyan. # # The remaining colours in the palette (8 to 63 in Excel 5.0 and later) # can be changed by the user. In the Excel 2003 UI, Tools/Options/Color presents a palette # of 7 rows of 8 colours. The last two rows are reserved for use in charts. # The correspondence between this grid and the assigned # colour indexes is NOT left-to-right top-to-bottom. # Indexes 8 to 15 correspond to changeable # parallels of the 8 fixed colours -- for example, index 7 is forever cyan; # index 15 starts off being cyan but can be changed by the user. # # The default colour for each index depends on the file version; tables of the defaults # are available in the source code. If the user changes one or more colours, # a PALETTE record appears in the XLS file -- it gives the RGB values for *all* changeable # indexes. # Note that colours can be used in "number formats": "[CYAN]...." and "[COLOR8]...." refer # to colour index 7; "[COLOR16]...." will produce cyan # unless the user changes colour index 15 to something else. # # In addition, there are several "magic" colour indexes used by Excel: # 0x18 (BIFF3-BIFF4), 0x40 (BIFF5-BIFF8): System window text colour for border lines # (used in XF, CF, and WINDOW2 records) # 0x19 (BIFF3-BIFF4), 0x41 (BIFF5-BIFF8): System window background colour for pattern background # (used in XF and CF records ) # 0x43: System face colour (dialogue background colour) # 0x4D: System window text colour for chart border lines # 0x4E: System window background colour for chart areas # 0x4F: Automatic colour for chart border lines (seems to be always Black) # 0x50: System ToolTip background colour (used in note objects) # 0x51: System ToolTip text colour (used in note objects) # 0x7FFF: System window text colour for fonts (used in FONT and CF records) # Note 0x7FFF appears to be the *default* colour index. It appears quite often in FONT # records. # # Default Formatting # # Default formatting is applied to all empty cells (those not described by a cell record). 
# Firstly row default information (ROW record, Rowinfo class) is used if available. # Failing that, column default information (COLINFO record, Colinfo class) is used if available. # As a last resort the worksheet/workbook default cell format will be used; this # should always be present in an Excel file, # described by the XF record with the fixed index 15 (0-based). By default, it uses the # worksheet/workbook default cell style, described by the very first XF record (index 0). # # Formatting features not included in xlrd version 0.6.1 # # - Rich text i.e. strings containing partial bold, italic # and underlined text, change of font inside a string, etc. # See OOo docs s3.4 and s3.2 # - Asian phonetic text (known as "ruby"), used for Japanese furigana. See OOo docs # s3.4.2 (p15) # - Conditional formatting. See OOo docs # s5.12, s6.21 (CONDFMT record), s6.16 (CF record) # - Miscellaneous sheet-level and book-level items e.g. printing layout, screen panes. # - Modern Excel file versions don't keep most of the built-in # "number formats" in the file; Excel loads formats according to the # user's locale. Currently xlrd's emulation of this is limited to # a hard-wired table that applies to the US English locale. This may mean # that currency symbols, date order, thousands separator, decimals separator, etc # are inappropriate. Note that this does not affect users who are copying XLS # files, only those who are visually rendering cells. # # Loading worksheets on demand # # This feature, new in version 0.7.1, is governed by the on_demand argument # to the open_workbook() function and allows saving memory and time by loading # only those sheets that the caller is interested in, and releasing sheets # when no longer required. # # on_demand=False (default): No change. open_workbook() loads global data # and all sheets, releases resources no longer required (principally the # str or mmap object containing the Workbook stream), and returns. # # on_demand=True and BIFF version < 5.0: A warning message is emitted, # on_demand is recorded as False, and the old process is followed. # # on_demand=True and BIFF version >= 5.0: open_workbook() loads global # data and returns without releasing resources. At this stage, the only # information available about sheets is Book.nsheets and Book.sheet_names(). # # Book.sheet_by_name() and Book.sheet_by_index() will load the requested # sheet if it is not already loaded. # # Book.sheets() will load all/any unloaded sheets. # # The caller may save memory by calling # Book.unload_sheet(sheet_name_or_index) when finished with the sheet. # This applies irrespective of the state of on_demand. # # The caller may re-load an unloaded sheet by calling Book.sheet_by_xxxx() # -- except if those required resources have been released (which will # have happened automatically when on_demand is false). This is the only # case where an exception will be raised. # # The caller may query the state of a sheet: # Book.sheet_loaded(sheet_name_or_index) -> a bool # # 2010-12-03 mozman start xlrd3, for changes see NEWS.txt # # 2009-04-27 SJM Integrated on_demand patch by NAME 2008-11-23 SJM Support dumping FILEPASS and EXTERNNAME records; extra info from SUPBOOK records # 2008-11-23 SJM colname utility function now supports more than 256 columns # 2008-04-24 SJM Recovery code for file with out-of-order/missing/wrong CODEPAGE record needed to be called for EXTERNSHEET/BOUNDSHEET/NAME/SHEETHDR records. 
# 2008-02-08 SJM Preparation for Excel 2.0 support
# 2008-02-03 SJM Minor tweaks for IronPython support
# 2008-02-02 SJM Previous change stopped dump() and count_records() ... fixed
# 2007-12-25 SJM Decouple Book initialisation & loading -- to allow for multiple loaders.
# 2007-12-20 SJM Better error message for unsupported file format.
# 2007-12-04 SJM Added support for Excel 2.x (BIFF2) files.
# 2007-11-20 SJM Wasn't handling EXTERNSHEET record that needed CONTINUE record(s)
# 2007-07-07 SJM Version changed to 0.7.0 (alpha 1)
# 2007-07-07 SJM Logfile arg wasn't being passed from open_workbook to compdoc.CompDoc
# 2007-05-21 SJM If no CODEPAGE record in pre-8.0 file, assume ascii and keep going.
# 2007-04-22 SJM Removed antique undocumented Book.get_name_dict method.
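#
# A minimal usage sketch of the on_demand loading described above. The file
# name "big_report.xls" is hypothetical; the calls shown (open_workbook,
# nsheets, sheet_names, sheet_by_index, unload_sheet, release_resources)
# are the Book methods documented above.
#
#     import xlrd
#
#     book = xlrd.open_workbook("big_report.xls", on_demand=True)
#     print(book.nsheets, book.sheet_names())  # known before any sheet loads
#
#     sheet = book.sheet_by_index(0)   # loads sheet 0 on first access
#     value = sheet.cell_value(0, 0)   # read cell A1
#
#     book.unload_sheet(0)             # release the sheet's memory
#     book.release_resources()         # done with the workbook entirely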
"""Test module for the noddy examples Noddy 1: >>> import noddy >>> n1 = noddy.Noddy() >>> n2 = noddy.Noddy() >>> del n1 >>> del n2 Noddy 2 >>> import noddy2 >>> n1 = noddy2.Noddy('jim', 'fulton', 42) >>> n1.first 'jim' >>> n1.last 'NAME n1.number 42 >>> n1.name() 'jim NAME n1.first = 'will' >>> n1.name() 'will NAME n1.last = 'NAME n1.name() 'will NAME del n1.first >>> n1.name() Traceback (most recent call last): ... AttributeError: first >>> n1.first Traceback (most recent call last): ... AttributeError: first >>> n1.first = 'drew' >>> n1.first 'drew' >>> del n1.number Traceback (most recent call last): ... TypeError: can't delete numeric/char attribute >>> n1.number=2 >>> n1.number 2 >>> n1.first = 42 >>> n1.name() '42 NAME n2 = noddy2.Noddy() >>> n2.name() ' ' >>> n2.first '' >>> n2.last '' >>> del n2.first >>> n2.first Traceback (most recent call last): ... AttributeError: first >>> n2.first Traceback (most recent call last): ... AttributeError: first >>> n2.name() Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: first >>> n2.number 0 >>> n3 = noddy2.Noddy('jim', 'fulton', 'waaa') Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: an integer is required >>> del n1 >>> del n2 Noddy 3 >>> import noddy3 >>> n1 = noddy3.Noddy('jim', 'fulton', 42) >>> n1 = noddy3.Noddy('jim', 'fulton', 42) >>> n1.name() 'jim NAME del n1.first Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: Cannot delete the first attribute >>> n1.first = 42 Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: The first attribute value must be a string >>> n1.first = 'will' >>> n1.name() 'will NAME n2 = noddy3.Noddy() >>> n2 = noddy3.Noddy() >>> n2 = noddy3.Noddy() >>> n3 = noddy3.Noddy('jim', 'fulton', 'waaa') Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: an integer is required >>> del n1 >>> del n2 Noddy 4 >>> import noddy4 >>> n1 = noddy4.Noddy('jim', 'fulton', 42) >>> n1.first 'jim' >>> n1.last 'NAME n1.number 42 >>> n1.name() 'jim NAME n1.first = 'will' >>> n1.name() 'will NAME n1.last = 'NAME n1.name() 'will NAME del n1.first >>> n1.name() Traceback (most recent call last): ... AttributeError: first >>> n1.first Traceback (most recent call last): ... AttributeError: first >>> n1.first = 'drew' >>> n1.first 'drew' >>> del n1.number Traceback (most recent call last): ... TypeError: can't delete numeric/char attribute >>> n1.number=2 >>> n1.number 2 >>> n1.first = 42 >>> n1.name() '42 NAME n2 = noddy4.Noddy() >>> n2 = noddy4.Noddy() >>> n2 = noddy4.Noddy() >>> n2 = noddy4.Noddy() >>> n2.name() ' ' >>> n2.first '' >>> n2.last '' >>> del n2.first >>> n2.first Traceback (most recent call last): ... AttributeError: first >>> n2.first Traceback (most recent call last): ... AttributeError: first >>> n2.name() Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: first >>> n2.number 0 >>> n3 = noddy4.Noddy('jim', 'fulton', 'waaa') Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: an integer is required Test cyclic gc(?) >>> import gc >>> gc.disable() >>> x = [] >>> l = [x] >>> n2.first = l >>> n2.first [[]] >>> l.append(n2) >>> del l >>> del n1 >>> del n2 >>> sys.getrefcount(x) 3 >>> ignore = gc.collect() >>> sys.getrefcount(x) 2 >>> gc.enable() """
# Configuration file for ipython-kernel. #------------------------------------------------------------------------------ # ConnectionFileMixin(LoggingConfigurable) configuration #------------------------------------------------------------------------------ ## Mixin for configurable classes that work with connection files ## JSON file in which to store connection info [default: kernel-<pid>.json] # # This file will contain the IP, ports, and authentication key needed to connect # clients to this kernel. By default, this file will be created in the security # dir of the current profile, but can be specified by absolute path. #c.ConnectionFileMixin.connection_file = '' ## set the control (ROUTER) port [default: random] #c.ConnectionFileMixin.control_port = 0 ## set the heartbeat port [default: random] #c.ConnectionFileMixin.hb_port = 0 ## set the iopub (PUB) port [default: random] #c.ConnectionFileMixin.iopub_port = 0 ## Set the kernel's IP address [default localhost]. If the IP address is # something other than localhost, then Consoles on other machines will be able # to connect to the Kernel, so be careful! #c.ConnectionFileMixin.ip = '' ## set the shell (ROUTER) port [default: random] #c.ConnectionFileMixin.shell_port = 0 ## set the stdin (ROUTER) port [default: random] #c.ConnectionFileMixin.stdin_port = 0 ## #c.ConnectionFileMixin.transport = 'tcp' #------------------------------------------------------------------------------ # InteractiveShellApp(Configurable) configuration #------------------------------------------------------------------------------ ## A Mixin for applications that start InteractiveShell instances. # # Provides configurables for loading extensions and executing files as part of # configuring a Shell environment. # # The following methods should be called by the :meth:`initialize` method of the # subclass: # # - :meth:`init_path` # - :meth:`init_shell` (to be implemented by the subclass) # - :meth:`init_gui_pylab` # - :meth:`init_extensions` # - :meth:`init_code` ## Execute the given command string. #c.InteractiveShellApp.code_to_run = '' ## Run the file referenced by the PYTHONSTARTUP environment variable at IPython # startup. #c.InteractiveShellApp.exec_PYTHONSTARTUP = True ## List of files to run at IPython startup. #c.InteractiveShellApp.exec_files = [] ## lines of code to run at IPython startup. #c.InteractiveShellApp.exec_lines = [] ## A list of dotted module names of IPython extensions to load. #c.InteractiveShellApp.extensions = [] ## dotted module name of an IPython extension to load. #c.InteractiveShellApp.extra_extension = '' ## A file to be run #c.InteractiveShellApp.file_to_run = '' ## Enable GUI event loop integration with any of ('glut', 'gtk', 'gtk2', 'gtk3', # 'osx', 'pyglet', 'qt', 'qt4', 'qt5', 'tk', 'wx', 'gtk2', 'qt4'). #c.InteractiveShellApp.gui = None ## Should variables loaded at startup (by startup files, exec_lines, etc.) be # hidden from tools like %who? #c.InteractiveShellApp.hide_initial_ns = True ## Configure matplotlib for interactive use with the default matplotlib backend. #c.InteractiveShellApp.matplotlib = None ## Run the module as a script. #c.InteractiveShellApp.module_to_run = '' ## Pre-load matplotlib and numpy for interactive use, selecting a particular # matplotlib backend and loop integration. #c.InteractiveShellApp.pylab = None ## If true, IPython will populate the user namespace with numpy, pylab, etc. and # an ``import *`` is done from numpy and pylab, when using pylab mode. 
# # When False, pylab mode should not import any names into the user namespace. #c.InteractiveShellApp.pylab_import_all = True ## Reraise exceptions encountered loading IPython extensions? #c.InteractiveShellApp.reraise_ipython_extension_failures = False #------------------------------------------------------------------------------ # Application(SingletonConfigurable) configuration #------------------------------------------------------------------------------ ## This is an application. ## The date format used by logging formatters for %(asctime)s #c.Application.log_datefmt = '%Y-%m-%d %H:%M:%S' ## The Logging format template #c.Application.log_format = '[%(name)s]%(highlevel)s %(message)s' ## Set the log level by value or name. #c.Application.log_level = 30 #------------------------------------------------------------------------------ # BaseIPythonApplication(Application) configuration #------------------------------------------------------------------------------ ## IPython: an enhanced interactive Python shell. ## Whether to create profile dir if it doesn't exist #c.BaseIPythonApplication.auto_create = False ## Whether to install the default config files into the profile dir. If a new # profile is being created, and IPython contains config files for that profile, # then they will be staged into the new directory. Otherwise, default config # files will be automatically generated. #c.BaseIPythonApplication.copy_config_files = False ## Path to an extra config file to load. # # If specified, load this config file in addition to any other IPython config. #c.BaseIPythonApplication.extra_config_file = '' ## The name of the IPython directory. This directory is used for logging # configuration (through profiles), history storage, etc. The default is usually # $HOME/.ipython. This option can also be specified through the environment # variable IPYTHONDIR. #c.BaseIPythonApplication.ipython_dir = '' ## Whether to overwrite existing config files when copying #c.BaseIPythonApplication.overwrite = False ## The IPython profile to use. #c.BaseIPythonApplication.profile = 'default' ## Create a massive crash report when IPython encounters what may be an internal # error. The default is to append a short message to the usual traceback #c.BaseIPythonApplication.verbose_crash = False #------------------------------------------------------------------------------ # IPKernelApp(BaseIPythonApplication,InteractiveShellApp,ConnectionFileMixin) configuration #------------------------------------------------------------------------------ ## IPython: an enhanced interactive Python shell. ## The importstring for the DisplayHook factory #c.IPKernelApp.displayhook_class = 'ipykernel.displayhook.ZMQDisplayHook' ## ONLY USED ON WINDOWS Interrupt this process when the parent is signaled. #c.IPKernelApp.interrupt = 0 ## The Kernel subclass to be used. # # This should allow easy re-use of the IPKernelApp entry point to configure and # launch kernels other than IPython's own. #c.IPKernelApp.kernel_class = 'ipykernel.ipkernel.IPythonKernel' ## redirect stderr to the null device #c.IPKernelApp.no_stderr = False ## redirect stdout to the null device #c.IPKernelApp.no_stdout = False ## The importstring for the OutStream factory #c.IPKernelApp.outstream_class = 'ipykernel.iostream.OutStream' ## kill this process if its parent dies. On Windows, the argument specifies the # HANDLE of the parent process, otherwise it is simply boolean. 
#c.IPKernelApp.parent_handle = 0 #------------------------------------------------------------------------------ # Kernel(SingletonConfigurable) configuration #------------------------------------------------------------------------------ ## Whether to use appnope for compatibility with OS X App Nap. # # Only affects OS X >= 10.9. #c.Kernel._darwin_app_nap = True ## #c.Kernel._execute_sleep = 0.0005 ## #c.Kernel._poll_interval = 0.05 #------------------------------------------------------------------------------ # IPythonKernel(Kernel) configuration #------------------------------------------------------------------------------ ## #c.IPythonKernel.help_links = [{'url': 'http://docs.python.org/3.5', 'text': 'Python'}, {'url': 'http://ipython.org/documentation.html', 'text': 'IPython'}, {'url': 'http://docs.scipy.org/doc/numpy/reference/', 'text': 'NumPy'}, {'url': 'http://docs.scipy.org/doc/scipy/reference/', 'text': 'SciPy'}, {'url': 'http://matplotlib.org/contents.html', 'text': 'Matplotlib'}, {'url': 'http://docs.sympy.org/latest/index.html', 'text': 'SymPy'}, {'url': 'http://pandas.pydata.org/pandas-docs/stable/', 'text': 'pandas'}] #------------------------------------------------------------------------------ # InteractiveShell(SingletonConfigurable) configuration #------------------------------------------------------------------------------ ## An enhanced, interactive shell for Python. ## 'all', 'last', 'last_expr' or 'none', specifying which nodes should be run # interactively (displaying output from expressions). #c.InteractiveShell.ast_node_interactivity = 'last_expr' ## A list of ast.NodeTransformer subclass instances, which will be applied to # user input before code is run. #c.InteractiveShell.ast_transformers = [] ## Make IPython automatically call any callable object even if you didn't type # explicit parentheses. For example, 'str 43' becomes 'str(43)' automatically. # The value can be '0' to disable the feature, '1' for 'smart' autocall, where # it is not applied if there are no more arguments on the line, and '2' for # 'full' autocall, where all callable objects are automatically called (even if # no arguments are present). #c.InteractiveShell.autocall = 0 ## Autoindent IPython code entered interactively. #c.InteractiveShell.autoindent = True ## Enable magic commands to be called without the leading %. #c.InteractiveShell.automagic = True ## The part of the banner to be printed before the profile #c.InteractiveShell.banner1 = 'Python 3.5.2 (default, Sep 10 2016, 08:21:44) \nType "copyright", "credits" or "license" for more information.\n\nIPython 5.1.0 -- An enhanced Interactive Python.\n? -> Introduction and overview of IPython\'s features.\n%quickref -> Quick reference.\nhelp -> Python\'s own help system.\nobject? -> Details about \'object\', use \'object??\' for extra details.\n' ## The part of the banner to be printed after the profile #c.InteractiveShell.banner2 = '' ## Set the size of the output cache. The default is 1000, you can change it # permanently in your config file. Setting it to 0 completely disables the # caching system, and the minimum value accepted is 20 (if you provide a value # less than 20, it is reset to 0 and a warning is issued). This limit is # defined because otherwise you'll spend more time re-flushing a too small cache # than working. #c.InteractiveShell.cache_size = 1000 ## Use colors for displaying information about objects.
Because this information # is passed through a pager (like 'less'), and some pagers get confused with # color codes, this capability can be turned off. #c.InteractiveShell.color_info = True ## Set the color scheme (NoColor, Neutral, Linux, or LightBG). #c.InteractiveShell.colors = 'Neutral' ## #c.InteractiveShell.debug = False ## **Deprecated** # # Will be removed in IPython 6.0 # # Enable deep (recursive) reloading by default. IPython can use the deep_reload # module which reloads changes in modules recursively (it replaces the reload() # function, so you don't need to change anything to use it). `deep_reload` # forces a full reload of modules whose code may have changed, which the default # reload() function does not. When deep_reload is off, IPython will use the # normal reload(), but deep_reload will still be available as dreload(). #c.InteractiveShell.deep_reload = False ## Don't call post-execute functions that have failed in the past. #c.InteractiveShell.disable_failing_post_execute = False ## If True, anything that would be passed to the pager will be displayed as # regular output instead. #c.InteractiveShell.display_page = False ## (Provisional API) enables html representation in mime bundles sent to pagers. #c.InteractiveShell.enable_html_pager = False ## Total length of command history #c.InteractiveShell.history_length = 10000 ## The number of saved history entries to be loaded into the history buffer at # startup. #c.InteractiveShell.history_load_length = 1000 ## #c.InteractiveShell.ipython_dir = '' ## Start logging to the given file in append mode. Use `logfile` to specify a log # file to **overwrite** logs to. #c.InteractiveShell.logappend = '' ## The name of the logfile to use. #c.InteractiveShell.logfile = '' ## Start logging to the default log file in overwrite mode. Use `logappend` to # specify a log file to **append** logs to. #c.InteractiveShell.logstart = False ## #c.InteractiveShell.object_info_string_level = 0 ## Automatically call the pdb debugger after every exception. #c.InteractiveShell.pdb = False ## Deprecated since IPython 4.0 and ignored since 5.0, set # TerminalInteractiveShell.prompts object directly. #c.InteractiveShell.prompt_in1 = 'In [\\#]: ' ## Deprecated since IPython 4.0 and ignored since 5.0, set # TerminalInteractiveShell.prompts object directly. #c.InteractiveShell.prompt_in2 = ' .\\D.: ' ## Deprecated since IPython 4.0 and ignored since 5.0, set # TerminalInteractiveShell.prompts object directly. #c.InteractiveShell.prompt_out = 'Out[\\#]: ' ## Deprecated since IPython 4.0 and ignored since 5.0, set # TerminalInteractiveShell.prompts object directly. #c.InteractiveShell.prompts_pad_left = True ## #c.InteractiveShell.quiet = False ## #c.InteractiveShell.separate_in = '\n' ## #c.InteractiveShell.separate_out = '' ## #c.InteractiveShell.separate_out2 = '' ## Show rewritten input, e.g. for autocall. #c.InteractiveShell.show_rewritten_input = True ## Enables rich html representation of docstrings. (This requires the docrepr # module). #c.InteractiveShell.sphinxify_docstring = False ## #c.InteractiveShell.wildcards_case_sensitive = True ## #c.InteractiveShell.xmode = 'Context' #------------------------------------------------------------------------------ # ZMQInteractiveShell(InteractiveShell) configuration #------------------------------------------------------------------------------ ## A subclass of InteractiveShell for ZMQ. 
#------------------------------------------------------------------------------ # ProfileDir(LoggingConfigurable) configuration #------------------------------------------------------------------------------ ## An object to manage the profile directory and its resources. # # The profile directory is used by all IPython applications, to manage # configuration, logging and security. # # This object knows how to find, create and manage these directories. This # should be used by any code that wants to handle profiles. ## Set the profile location directly. This overrides the logic used by the # `profile` option. #c.ProfileDir.location = '' #------------------------------------------------------------------------------ # Session(Configurable) configuration #------------------------------------------------------------------------------ ## Object for handling serialization and sending of messages. # # The Session object handles building messages and sending them with ZMQ sockets # or ZMQStream objects. Objects can communicate with each other over the # network via Session objects, and only need to work with the dict-based IPython # message spec. The Session will handle serialization/deserialization, security, # and metadata. # # Sessions support configurable serialization via packer/unpacker traits, and # signing with HMAC digests via the key/keyfile traits. # # Parameters ---------- # # debug : bool # whether to trigger extra debugging statements # packer/unpacker : str : 'json', 'pickle' or import_string # importstrings for methods to serialize message parts. If just # 'json' or 'pickle', predefined JSON and pickle packers will be used. # Otherwise, the entire importstring must be used. # # The functions must accept at least valid JSON input, and output *bytes*. # # For example, to use msgpack: # packer = 'msgpack.packb', unpacker='msgpack.unpackb' # pack/unpack : callables # You can also set the pack/unpack callables for serialization directly. # session : bytes # the ID of this Session object. The default is to generate a new UUID. # username : USERNAME username added to message headers. The default is to ask the OS. # key : bytes # The key used to initialize an HMAC signature. If unset, messages # will not be signed or checked. # keyfile : filepath # The file containing a key. If this is set, `key` will be initialized # to the contents of the file. ## Threshold (in bytes) beyond which an object's buffer should be extracted to # avoid pickling. #c.Session.buffer_threshold = 1024 ## Whether to check PID to protect against calls after fork. # # This check can be disabled if fork-safety is handled elsewhere. #c.Session.check_pid = True ## Threshold (in bytes) beyond which a buffer should be sent without copying. #c.Session.copy_threshold = 65536 ## Debug output in the Session #c.Session.debug = False ## The maximum number of digests to remember. # # The digest history will be culled when it exceeds this value. #c.Session.digest_history_size = 65536 ## The maximum number of items for a container to be introspected for custom # serialization. Containers larger than this are pickled outright. #c.Session.item_threshold = 64 ## execution key, for signing messages. #c.Session.key = b'' ## path to file containing execution key. #c.Session.keyfile = '' ## Metadata dictionary, which serves as the default top-level metadata dict for # each message. #c.Session.metadata = {} ## The name of the packer for serializing messages. 
Should be one of 'json', # 'pickle', or an import name for a custom callable serializer. #c.Session.packer = 'json' ## The UUID identifying this session. #c.Session.session = '' ## The digest scheme used to construct the message signatures. Must have the form # 'hmac-HASH'. #c.Session.signature_scheme = 'hmac-sha256' ## The name of the unpacker for deserializing messages. Only used with custom # functions for `packer`. #c.Session.unpacker = 'json' ## Username for the Session. Default is your system username. #c.Session.username = 'erb'
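#------------------------------------------------------------------------------
# Example: switching Session serialization to msgpack
#------------------------------------------------------------------------------
## A minimal sketch based on the packer/unpacker importstring mechanism
## described in the Session docstring above. It assumes the msgpack package
## is installed; the lines are left commented out, like the defaults above.
#c.Session.packer = 'msgpack.packb'
#c.Session.unpacker = 'msgpack.unpackb'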
""" =============== Array Internals =============== Internal organization of numpy arrays ===================================== It helps to understand a bit about how numpy arrays are handled under the covers to help understand numpy better. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to NumPy". NumPy arrays consist of two major components, the raw array data (from now on, referred to as the data buffer), and the information about the raw array data. The data buffer is typically what people think of as arrays in C or Fortran, a contiguous (and fixed) block of memory containing fixed sized data items. NumPy also contains a significant set of data that describes how to interpret the data in the data buffer. This extra information contains (among other things): 1) The basic data element's size in bytes 2) The start of the data within the data buffer (an offset relative to the beginning of the data buffer). 3) The number of dimensions and the size of each dimension 4) The separation between elements for each dimension (the 'stride'). This does not have to be a multiple of the element size 5) The byte order of the data (which may not be the native byte order) 6) Whether the buffer is read-only 7) Information (via the dtype object) about the interpretation of the basic data element. The basic data element may be as simple as a int or a float, or it may be a compound object (e.g., struct-like), a fixed character field, or Python object pointers. 8) Whether the array is to interpreted as C-order or Fortran-order. This arrangement allow for very flexible use of arrays. One thing that it allows is simple changes of the metadata to change the interpretation of the array buffer. Changing the byteorder of the array is a simple change involving no rearrangement of the data. The shape of the array can be changed very easily without changing anything in the data buffer or any data copying at all Among other things that are made possible is one can create a new array metadata object that uses the same data buffer to create a new view of that data buffer that has a different interpretation of the buffer (e.g., different shape, offset, byte order, strides, etc) but shares the same data bytes. Many operations in numpy do just this such as slices. Other operations, such as transpose, don't move data elements around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the doesn't move. Typically these new versions of the array metadata but the same data buffer are new 'views' into the data buffer. There is a different ndarray object, but it uses the same data buffer. This is why it is necessary to force copies through use of the .copy() method if one really wants to make a new and independent copy of the data buffer. New views into arrays mean the object reference counts for the data buffer increase. Simply doing away with the original array object will not remove the data buffer if other views of it still exist. Multidimensional Array Indexing Order Issues ============================================ What is the right way to index multi-dimensional arrays? Before you jump to conclusions about the one and true way to index multi-dimensional arrays, it pays to understand why this is a confusing issue. 
This section will try to explain in detail how numpy indexing works and why we
adopt the convention we do for images, and when it may be appropriate to adopt
other conventions.

The first thing to understand is that there are two conflicting conventions
for indexing 2-dimensional arrays. Matrix notation uses the first index to
indicate which row is being selected and the second index to indicate which
column is selected. This is opposite the geometrically oriented convention for
images, where people generally think the first index represents x position
(i.e., column) and the second represents y position (i.e., row). This alone is
the source of much confusion; matrix-oriented users and image-oriented users
expect two different things with regard to indexing.

The second issue to understand is how indices correspond to the order in which
the array is stored in memory. In Fortran, the first index is the most rapidly
varying index when moving through the elements of a two-dimensional array as
it is stored in memory. If you adopt the matrix convention for indexing, then
this means the matrix is stored one column at a time (since the first index
moves to the next row as it changes). Thus Fortran is considered a
Column-major language. C has just the opposite convention. In C, the last
index changes most rapidly as one moves through the array as stored in memory.
Thus C is a Row-major language: the matrix is stored by rows. Note that in
both cases it presumes that the matrix convention for indexing is being used,
i.e., for both Fortran and C, the first index is the row.

Note this convention implies that the indexing convention is invariant and
that the data order changes to keep that so.

But that's not the only way to look at it. Suppose one has large
two-dimensional arrays (images or matrices) stored in data files. Suppose the
data are stored by rows rather than by columns. If we are to preserve our
index convention (whether matrix or image), that means that depending on the
language we use, we may be forced to reorder the data if it is read into
memory to preserve our indexing convention. For example, if we read
row-ordered data into memory without reordering, it will match the matrix
indexing convention for C, but not for Fortran. Conversely, it will match the
image indexing convention for Fortran, but not for C. For C, if one is using
data stored in row order, and one wants to preserve the image index
convention, the data must be reordered when reading into memory.

In the end, which you do for Fortran or C depends on which is more important:
not reordering data, or preserving the indexing convention. For large images,
reordering data is potentially expensive, and often the indexing convention is
inverted to avoid that.

The situation with numpy makes this issue yet more complicated. The internal
machinery of numpy arrays is flexible enough to accept any ordering of
indices. One can simply reorder indices by manipulating the internal stride
information for arrays without reordering the data at all. NumPy will know how
to map the new index order to the data without moving the data.

So if this is true, why not choose the index order that matches what you most
expect? In particular, why not define row-ordered images to use the image
convention? (This is sometimes referred to as the Fortran convention vs the C
convention, thus the 'C' and 'FORTRAN' order options for array ordering in
numpy.) The drawback of doing this is potential performance penalties.
It's common to access the data sequentially, either implicitly in array
operations or explicitly by looping over rows of an image. When that is done,
the data will be accessed in non-optimal order. As the first index is
incremented, what is actually happening is that elements spaced far apart in
memory are being sequentially accessed, with usually poor memory access
speeds. For example, consider a two-dimensional image 'im' defined so that
im[0, 10] represents the value at x=0, y=10. To be consistent with usual
Python behavior, im[0] would then represent a column at x=0. Yet that data
would be spread over the whole array, since the data are stored in row order.
Despite the flexibility of numpy's indexing, it can't really paper over the
fact that basic operations are rendered inefficient because of data order, or
that getting contiguous subarrays is still awkward (e.g., im[:,0] for the
first row, vs im[0]). Thus one can't use an idiom such as "for row in im";
"for col in im" does work, but doesn't yield contiguous column data.

As it turns out, numpy is smart enough when dealing with ufuncs to determine
which index is the most rapidly varying one in memory and uses that for the
innermost loop. Thus for ufuncs there is no large intrinsic advantage to
either approach in most cases. On the other hand, use of .flat with a
FORTRAN-ordered array will lead to non-optimal memory access, as adjacent
elements in the flattened array (iterator, actually) are not contiguous in
memory.

Indeed, the fact is that Python indexing on lists and other sequences
naturally leads to an outside-to-inside ordering (the first index gets the
largest grouping, the next the next largest, and the last gets the smallest
element). Since image data are normally stored by rows, this corresponds to
position within rows being the last item indexed.

If you do want to use Fortran ordering, realize that there are two approaches
to consider: 1) accept that the first index is just not the most rapidly
changing in memory and have all your I/O routines reorder your data when going
from memory to disk or vice versa, or 2) use numpy's mechanism for mapping the
first index to the most rapidly varying data. We recommend the former if
possible. The disadvantage of the latter is that many of numpy's functions
will yield arrays without Fortran ordering unless you are careful to use the
'order' keyword, which would be highly inconvenient.

Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
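A short illustration of the view mechanics described above (a sketch; the
stride values shown assume 8-byte floats, and exact scalar reprs can vary
slightly across numpy versions):

>>> import numpy as np
>>> a = np.zeros((3, 4))    # C-ordered float64 array
>>> b = a.T                 # transpose: new metadata, same data buffer
>>> b.base is a
True
>>> a.strides
(32, 8)
>>> b.strides
(8, 32)
>>> b[0, 0] = 99.0          # writing through the view...
>>> a[0, 0]                 # ...is visible in the original array
99.0
"""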
# =========================================================================
# zgossip - decentralized configuration management
# Copyright (c) the Contributors as noted in the AUTHORS file.
# This file is part of CZMQ, the high-level C binding for 0MQ:
# http://czmq.zeromq.org.
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# =========================================================================
# Implements a gossip protocol for decentralized configuration management.
# Your application's nodes form a loosely connected network (which can have
# cycles), and publish name/value tuples. Each node re-distributes the new
# tuples it receives, so that the entire network eventually achieves a
# consistent state. The current design does not expire tuples.
# Provides these commands (sent as multipart strings to the actor):
# * BIND endpoint -- binds the gossip service to specified endpoint
# * PORT -- returns the last TCP port, if any, used for binding
# * LOAD configfile -- load configuration from specified file
# * SET configpath value -- set configuration path = value
# * SAVE configfile -- save configuration to specified file
# * CONNECT endpoint -- connect the gossip service to the specified peer
# * PUBLISH key value -- publish a key/value pair to the gossip cluster
# * STATUS -- return number of key/value pairs held by gossip service
# Returns these messages:
# * PORT number -- reply to PORT command
# * STATUS number -- reply to STATUS command
# * DELIVER key value -- new tuple delivered from network
# @discuss
# The gossip protocol distributes information around a loosely-connected
# network of gossip services. The information consists of name/value pairs
# published by applications at any point in the network. The goal of the
# gossip protocol is to create eventual consistency between all the using
# applications.
# The name/value pairs (tuples) can be used for configuration data, for
# status updates, for presence, or for discovery. When used for discovery,
# the gossip protocol works as an alternative to e.g. UDP beaconing.
# The gossip network consists of a set of loosely-coupled nodes that
# exchange tuples. Nodes can be connected across arbitrary transports,
# so the gossip network can have nodes that communicate over inproc,
# over IPC, and/or over TCP, at the same time.
# Each node runs the same stack, which is a server-client hybrid using
# a modified Harmony pattern (from Chapter 8 of the Guide):
# http://zguide.zeromq.org/page:all#True-Peer-Connectivity-Harmony-Pattern
# Each node provides a ROUTER socket that accepts client connections on an
# endpoint defined by the application via a BIND command. The state machine
# for these connections is in zgossip.xml, and the generated code is in
# zgossip_engine.inc.
# Each node additionally creates outbound connections via DEALER sockets
# to a set of servers ("remotes"), under control of the calling app, which
# sends a CONNECT command for each configured remote.
# The messages between client and server are defined in zgossip_msg.xml.
# We built this stack using the zeromq/zproto toolkit.
# To join the gossip network, a node connects to one or more peers. Each
# peer acts as a forwarder. This loosely-coupled network can scale to
# thousands of nodes.
# However, the gossip protocol is NOT designed to be
# efficient, and should not be used for application data, as the same
# tuples may be sent many times across the network.
# The basic logic of the gossip service is to accept PUBLISH messages
# from its owning application, and to forward these to every remote, and
# every client it talks to. When a node gets a duplicate tuple, it throws
# it away. When a node gets a new tuple, it stores it, and forwards it as
# just described. At any point the application can access the node's set
# of tuples.
# At present there is no way to expire tuples from the network.
# The assumptions in this design are:
# * The data set is slow-changing. Thus, the cost of the gossip protocol
#   is irrelevant with respect to other traffic.
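#
# A minimal, hypothetical Python sketch (not the CZMQ implementation) of the
# forwarding rule just described: a new tuple is stored and forwarded to
# every peer, and a duplicate is thrown away -- which is also what stops
# tuples from circulating forever when the network contains cycles.
#
#     class GossipNode:
#         def __init__(self):
#             self.tuples = {}   # key -> value: this node's current state
#             self.peers = []    # nodes we forward new tuples to
#
#         def publish(self, key, value):
#             if self.tuples.get(key) == value:
#                 return                      # duplicate: throw it away
#             self.tuples[key] = value        # new tuple: store it...
#             for peer in self.peers:
#                 peer.publish(key, value)    # ...and forward it
#
#     a, b = GossipNode(), GossipNode()
#     a.peers.append(b); b.peers.append(a)    # a two-node cycle
#     a.publish("service/host", "192.168.0.1")
#     assert b.tuples["service/host"] == "192.168.0.1"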
""" Objects for dealing with Chebyshev series. This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a `Chebyshev` class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its "parent" sub-package, `numpy.polynomial`). Constants --------- - `chebdomain` -- Chebyshev series default domain, [-1,1]. - `chebzero` -- (Coefficients of the) Chebyshev series that evaluates identically to 0. - `chebone` -- (Coefficients of the) Chebyshev series that evaluates identically to 1. - `chebx` -- (Coefficients of the) Chebyshev series for the identity map, ``f(x) = x``. Arithmetic ---------- - `chebadd` -- add two Chebyshev series. - `chebsub` -- subtract one Chebyshev series from another. - `chebmul` -- multiply two Chebyshev series. - `chebdiv` -- divide one Chebyshev series by another. - `chebpow` -- raise a Chebyshev series to an positive integer power - `chebval` -- evaluate a Chebyshev series at given points. - `chebval2d` -- evaluate a 2D Chebyshev series at given points. - `chebval3d` -- evaluate a 3D Chebyshev series at given points. - `chebgrid2d` -- evaluate a 2D Chebyshev series on a Cartesian product. - `chebgrid3d` -- evaluate a 3D Chebyshev series on a Cartesian product. Calculus -------- - `chebder` -- differentiate a Chebyshev series. - `chebint` -- integrate a Chebyshev series. Misc Functions -------------- - `chebfromroots` -- create a Chebyshev series with specified roots. - `chebroots` -- find the roots of a Chebyshev series. - `chebvander` -- Vandermonde-like matrix for Chebyshev polynomials. - `chebvander2d` -- Vandermonde-like matrix for 2D power series. - `chebvander3d` -- Vandermonde-like matrix for 3D power series. - `chebgauss` -- Gauss-Chebyshev quadrature, points and weights. - `chebweight` -- Chebyshev weight function. - `chebcompanion` -- symmetrized companion matrix in Chebyshev form. - `chebfit` -- least-squares fit returning a Chebyshev series. - `chebpts1` -- Chebyshev points of the first kind. - `chebpts2` -- Chebyshev points of the second kind. - `chebtrim` -- trim leading coefficients from a Chebyshev series. - `chebline` -- Chebyshev series representing given straight line. - `cheb2poly` -- convert a Chebyshev series to a polynomial. - `poly2cheb` -- convert a polynomial to a Chebyshev series. Classes ------- - `Chebyshev` -- A Chebyshev series class. See also -------- `numpy.polynomial` Notes ----- The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]_: .. math :: T_n(x) = \\frac{z^n + z^{-n}}{2} \\\\ z\\frac{dx}{dz} = \\frac{z - z^{-1}}{2}. where .. math :: x = \\frac{z + z^{-1}}{2}. These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a "z-series." References ---------- .. [1] NAME et al., "Combinatorial Trigonometry with Chebyshev Polynomials," *Journal of Statistical Planning and Inference 14*, 2008 (preprint: http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4) """
"""Stuff to parse Sun and NeXT audio files. An audio file consists of a header followed by the data. The structure of the header is as follows. +---------------+ | magic word | +---------------+ | header size | +---------------+ | data size | +---------------+ | encoding | +---------------+ | sample rate | +---------------+ | # of channels | +---------------+ | info | | | +---------------+ The magic word consists of the 4 characters '.snd'. Apart from the info field, all header fields are 4 bytes in size. They are all 32-bit unsigned integers encoded in big-endian byte order. The header size really gives the start of the data. The data size is the physical size of the data. From the other parameters the number of frames can be calculated. The encoding gives the way in which audio samples are encoded. Possible values are listed below. The info field currently consists of an ASCII string giving a human-readable description of the audio file. The info field is padded with NUL bytes to the header size. Usage. Reading audio files: f = sunau.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). When the setpos() and rewind() methods are not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' or 'ULAW') getcompname() -- returns human-readable version of compression type ('not compressed' matches 'NONE') getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- returns None (for compatibility with the aifc module) getmark(id) -- raises an error since the mark does not exist (for compatibility with the aifc module) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell() and the position given to setpos() are compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing audio files: f = sunau.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). This returns an instance of a class with the following public methods: setnchannels(n) -- set the number of channels setsampwidth(n) -- set the sample width setframerate(n) -- set the frame rate setnframes(n) -- set the number of frames setcomptype(type, name) -- set the compression type and the human-readable compression type setparams(tuple)-- set all parameters at once tell() -- return current position in output file writeframesraw(data) -- write audio frames without pathing up the file header writeframes(data) -- write audio frames and patch up the file header close() -- patch up the file header and close the output file You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. 
It is best to first set all parameters (except possibly the compression
type) and then write audio frames using writeframesraw. When all frames
have been written, either call writeframes('') or close() to patch up
the sizes in the header. The close() method is called automatically when
the class instance is destroyed.
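A minimal writing example in the style of the usage notes above ('test.au'
is a hypothetical output file name; mono, 16-bit, 8000 Hz):

        import sunau

        f = sunau.open('test.au', 'w')
        f.setnchannels(1)                   # mono
        f.setsampwidth(2)                   # 2 bytes = 16-bit samples
        f.setframerate(8000)                # 8 kHz sampling frequency
        f.writeframes(b'\x00\x00' * 8000)   # one second of silence
        f.close()                           # patches up the header
"""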
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#------------------------------------------------------------------------------
# LICENSE:
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 3 of the License, or (at your option) any
# later version. See http://www.gnu.org/licenses/lgpl-3.0.txt.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
# details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 675 Mass Ave, Cambridge, MA 02139, USA.
#------------------------------------------------------------------------------
# CHANGELOG:
# 2006-07-15 v0.1.1 AN: - released version
# 2007-10-09 v0.2.0 PL: - fixed error with deprecated string exceptions
#                       - added optional timeout to sockets to avoid blocking
#                         operations
# 2010-07-11 v0.2.1 AN: - change all raise exception (was deprecated), license
#                         change to LGPL
# 2010-07-12 v0.2.2 TK: - PEP8 compliance
#                       - isolating send and receive functions
# 2012-11-20 v0.3.0 AN: - change API to class model
#                       - using INSTREAM scan method instead of the deprecated STREAM
#                       - added MULTISCAN method
#                       - STATS now return full data on multiline
#                   TK: - changes to API to make it more consistent
# 2012-11-20 v0.3.1 AN: - typo change (Connextion to Connexion)
#                       - Fixed Issue 3: scan_stream: AssertionError
# 2013-04-20 v0.3.2 TT/AN: - improving encoding support for non latin filenames
#                   TKL: - When pyclamd calls _recv_response, it appears to expect
#                          that it will only get one result at a time. This is not
#                          always the case: it may get multiple results separated
#                          by newlines.
#                        - Typos corrected with pyflakes
#                        - Adding a compatibility layer for the most important
#                          functions in the 0.2 API - init_*_socket, scan_file,
#                          contscan_file, multiscan_file, and version.
# 2013-04-21 v0.3.3 AN: - ClamdUnixSocket is now able to get unix socket name
#                         from /etc/clamav/clamd.conf
# 2013-11-16 v0.3.4 JB/AN: - Nasty encoding bug in scan_stream
# 2014-06-22 v0.3.6 JS/AN: - correction in assert for filename (change to basestring)
# 2014-06-23 v0.3.7 AN: - correction in README.txt and example.py
#                       - adding pyclamd.ClamdAgnostic()
# 2014-07-06 v0.3.8 AN: - License clarification (use of LGPLv3+)
# 2014-07-06 v0.3.9 SK/AN: - Bug correction + setup.py improvement for building
# 2014-07-06 v0.3.10 SK/AN: - Bug correction with python3 bytes stream
# 2015-03-14 v0.3.14 AN: - Bug correction for clamd.conf default path
#------------------------------------------------------------------------------
# TODO:
# - improve tests for Win32 platform (avoid writing the EICAR file to disk, or
#   protect it somehow from on-access AV; inside a ZIP/GZip archive isn't enough)
# - use SESSION/END commands to launch several scans in one session
#   (for example provide session mode in a Clamd class)
# - add support for RAWSCAN commands ?
# ? Maybe use os.path.abspath to ensure scan_file uses absolute paths for files
#------------------------------------------------------------------------------
# Documentation : http://www.clamav.net/doc/latest/html/node28.html
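#------------------------------------------------------------------------------
# A small usage sketch of the 0.3 class API referred to in the changelog
# (ClamdAgnostic picks a Unix or network socket automatically; the path
# scanned here is hypothetical, and a running clamd daemon is assumed):
#
#     import pyclamd
#
#     cd = pyclamd.ClamdAgnostic()
#     print(cd.version())              # clamd version banner
#     result = cd.scan_file('/tmp/suspect_file')
#     print(result)                    # None if clean, dict of findings if not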
""" //cell stuff new Thing("cell",["nucleus","cytoplasm"],["cells"]); new Thing("nucleus",["dna","proteins"]); new Thing("cytoplasm",["glucids","lipids"]); new Thing("dna",["genetic code","hydrogen","oxygen","nitrogen","carbon","phosphorus"],"DNA"); new Thing("genetic code",["nucleotide,20-50"]); new Thing("nucleotide",["molecule"],["A","T","G","C"]); //body stuff new Thing("body part",["bacteria,30%","bacteria,10%","skin","blood vessels","bones","fat","muscles"],"body part"); new Thing("soft body part",["bacteria,30%","bacteria,10%","skin","blood vessels","fat","muscles"],"body part"); new Thing("skinless body part",["bacteria,30%","bacteria,10%","blood vessels","bones","fat","muscles"],"body part"); new Thing("skinless soft body part",["bacteria,30%","bacteria,10%","blood vessels","fat","muscles"],"body part"); new Thing("blood vessels",["bacteria,30%","blood"],"blood vessels"); new Thing("blood",["blood cell"],"blood"); new Thing("blood cell",[".cell"],["blood cells"]); new Thing("skin",["bacteria,1-3","scar,0.5%","pores","skin cell","dead skin","dust,20%","sweat,20%"],"skin"); new Thing("scar",["dead skin"]); new Thing("pores",["bacteria,1-3","skin cell","dead skin,50%","sweat,40%"],"pores"); new Thing("skin cell",[".cell"],["skin cells"]); new Thing("dead skin",["skin cell"]); new Thing("bone",[".bones"],"bone"); new Thing("bones",["bone cell","calcium"],"bones"); new Thing("bone cell",[".cell"],["bone cells"]); new Thing("muscles",["muscle cell"],"muscles"); new Thing("muscle cell",[".cell"],["muscle cells"]); new Thing("fat",["lipids"],"fat"); new Thing("brain cell",[".cell"],["brain cells"]); new Thing("dandruff",["dead skin"]); new Thing("clothing set",["hat,2%","glasses,20%","pants,98%","shirt,98%","coat,50%","socks,80%","shoes,80%","underwear,99%"],"clothing"); new Thing("man",[".person"],"*MAN*"); new Thing("woman",[".person"],"*WOMAN*"); new Thing("person",["body","psyche","clothing set"],"*PERSON*"); new Thing("corpse",["body","clothing set","blood,35%","worm,20%","worm,10%"],"*PERSON*| (dead)"); new Thing("body",["head","torso","arm,99%","arm,99%","leg,99%","leg,99%"],"body"); new Thing("torso",["chest","pelvis",".body part"]); new Thing("chest",["nipple,2","bellybutton",".body part"]); new Thing("bellybutton",["skin","lint,0-1"]); new Thing("nipple",["skin"]); new Thing("pelvis",["naughty bits","butt",".body part"]); new Thing("naughty bits",[".soft body part"]); new Thing("butt",["pasta,0.01%","sweat,50%",".body part"]); new Thing("arm",["hand","elbow","armpit",".body part"],"arm"); new Thing("hand",["finger,5",".body part"]); new Thing("finger",["fingernail",".body part"],"finger"); new Thing("fingernail",["dust,30%","keratin"],"fingernail"); new Thing("elbow",[".body part"]); new Thing("armpit",["armpit hair","sweat,80%",".soft body part"]); new Thing("armpit hair",[".hair"],"hair"); new Thing("leg",["foot","knee",".body part"],"leg"); new Thing("foot",["toe,5","sweat,30%",".body part"]); new Thing("toe",["toenail",".body part"],"toe"); new Thing("toenail",["dust,40%","keratin"],"toenail"); new Thing("knee",[".body part"],"knee"); new Thing("head",["mouth","nose","eye,99%","eye,99%","ear,2","skull","head hair,85%",".body part"],"head"); new Thing("eye",["eyelashes","eye flesh","tear,2%"],"eye"); new Thing("eye flesh",["water","blood vessels","fat"],"eyeball"); new Thing("eyelashes",[".hair"],"eyelashes"); new Thing("tear",["water","salt"]); new Thing("ear",[".soft body part"],"ear"); new Thing("brain",["bacteria,20%","brain cell"],"brain"); new 
Thing("skull",["brain",".bones"]); new Thing("head hair",[".hair","dandruff,10%"],[["brown","black","gray","light","blonde","red","dark"],[" hair"]]); new Thing("hair",["bacteria,30%","keratin"],"hair"); new Thing("nose",["nostril,2",".body part"],"nose"); new Thing("nostril",["nostril hair","boogers,0-1",".soft body part"],"nostril"); new Thing("nostril hair",[".hair"],"nostril hair"); new Thing("boogers",["organic matter"]); new Thing("mouth",["teeth","tongue"],"mouth"); new Thing("teeth",["calcium","phosphorus"],"teeth"); new Thing("tongue",["muscles"],"tongue"); new Thing("abomination",["abomination body","abomination psyche"],"*PERSON*| (abomination)");//nonononononono new Thing("abomination psyche",["abomination thoughts","memories"],"psyche"); new Thing("abomination thoughts",["black hole,0.01%","abomination thought"],"thoughts"); new Thing("abomination thought",[],["P-please...","Don't look at me...","Please... kill me...","Kill... me...","Why would I ever ask for this...","I only wish for death.","I only long for death now.","I only demand... death...","End my misery... I beg you...","This is a mockery of existence...","I miss her so much...","I miss him so much...","I miss my family...","Why would they do that to me...","How could they do this to me...","What have I become...","I feel... different...","I can't feel... anything...","I can't... see anything..."]); new Thing("abomination body",["abomination head","abomination head,5%","abomination torso",["arm,0-8","arm,0-4"],["leg,0-8","leg,0-4"],"crustacean claw,2%","stinger,2%","weird soft organ,10%","weird soft organ,10%","weird hard organ,10%","weird hard organ,10%"],"misshapen body"); new Thing("abomination head",["mouth,0-2","nose,0-2","eye,0-8","ear,0-4","skull,90%","weird soft organ,20%","weird hard organ,20%","head hair,65%",".body part"],"misshapen head"); new Thing("abomination torso",["chest","chest,10%","pelvis","pelvis,10%","weird soft organ,20%","weird hard organ,20%",".body part"],"misshapen torso"); """
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to reqd all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a
synchronous server and doing an explicit fork in the request handler
class handle() method.

Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000 NAME <EMAIL>

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.
"""
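To make the mix-in pattern described above concrete, here is a minimal
sketch of a threading echo service.  It assumes Python 3.6+ and its
socketserver module; the EchoHandler and ThreadingTCPEchoServer names
are invented for illustration (the standard library already ships an
equivalent ThreadingTCPServer):

    import socketserver

    class EchoHandler(socketserver.BaseRequestHandler):
        # handle() is the one method a service must redefine.
        def handle(self):
            data = self.request.recv(1024)   # self.request is the TCP socket
            self.request.sendall(data)

    # Mix-in first, so its process_request() overrides TCPServer's.
    class ThreadingTCPEchoServer(socketserver.ThreadingMixIn,
                                 socketserver.TCPServer):
        pass

    if __name__ == "__main__":
        with ThreadingTCPEchoServer(("127.0.0.1", 9999), EchoHandler) as srv:
            srv.serve_forever()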
""" ============================= Byteswapping and byte order ============================= Introduction to byte ordering and ndarrays ========================================== The ``ndarray`` is an object that provide a python array interface to data in memory. It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer on which you are running Python. For example, I might be working on a computer with a little-endian CPU - such as an Intel Pentium, but I have loaded some data from a file written by a computer that is big-endian. Let's say I have loaded 4 bytes from a file written by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus the bytes are, in memory order: #. MSB integer 1 #. LSB integer 1 #. MSB integer 2 #. LSB integer 2 Let's say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents: >>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2) >>> big_end_str '\\x00\\x01\\x03\\x02' We might want to use an ``ndarray`` to access these integers. In that case, we can create an array around this memory, and tell numpy that there are two integers, and that they are 16 bit and big-endian: >>> import numpy as np >>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str) >>> big_end_arr[0] 1 >>> big_end_arr[1] 770 Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian' (``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be ``<u4``. In fact, why don't we try that? >>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str) >>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3 True Returning to our ``big_end_arr`` - in this case our underlying data is big-endian (data endianness) and we've set the dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around. .. warning:: Scalars currently do not include byte order information, so extracting a scalar from an array will return an integer in native byte order. Hence: >>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder True Changing byte ordering ====================== As you can imagine from the introduction, there are two ways you can affect the relationship between the byte ordering of the array and the underlying memory it is looking at: * Change the byte-ordering information in the array dtype so that it interprets the undelying data as being in a different byte order. This is the role of ``arr.newbyteorder()`` * Change the byte-ordering of the underlying data, leaving the dtype interpretation as it was. This is what ``arr.byteswap()`` does. The common situations in which you need to change byte ordering are: #. Your data and dtype endianess don't match, and you want to change the dtype so that it matches the data. #. Your data and dtype endianess don't match, and you want to swap the data so that they match the dtype #. 
#. Your data and dtype endianness match, but you want the data swapped
   and the dtype to reflect this.

Data and dtype endianness don't match, change dtype to match data
-----------------------------------------------------------------

We make something where they don't match:

>>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_str)
>>> wrong_end_dtype_arr[0]
256

The obvious fix for this situation is to change the dtype so it gives
the correct endianness:

>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
>>> fixed_end_dtype_arr[0]
1

Note that the array has not changed in memory:

>>> fixed_end_dtype_arr.tobytes() == big_end_str
True

Data and dtype endianness don't match, change data to match dtype
-----------------------------------------------------------------

You might want to do this if you need the data in memory to be a
certain ordering.  For example you might be writing the memory out to a
file that needs a certain byte ordering.

>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
>>> fixed_end_mem_arr[0]
1

Now the array *has* changed in memory:

>>> fixed_end_mem_arr.tobytes() == big_end_str
False

Data and dtype endianness match, swap data and dtype
----------------------------------------------------

You may have a correctly specified array dtype, but you need the array
to have the opposite byte order in memory, and you want the dtype to
match so the array values make sense.  In this case you just do both of
the previous operations:

>>> swapped_end_arr = big_end_arr.byteswap().newbyteorder()
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False

An easier way of casting the data to a specific dtype and byte ordering
can be achieved with the ndarray astype method:

>>> swapped_end_arr = big_end_arr.astype('<i2')
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
"""
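Putting case 3 above together with a check of the machine's own order
gives a small normalizing helper.  This is only a sketch: the
``to_native`` name is invented, and it uses the same ``arr.newbyteorder()``
spelling as the text above (numpy 2.0 removed that method in favor of
``arr.view(arr.dtype.newbyteorder())``):

    import sys
    import numpy as np

    def to_native(arr):
        # '=' means native and '|' means not applicable (e.g. 1-byte types),
        # so such arrays need no work.
        if arr.dtype.byteorder in ('=', '|'):
            return arr
        native = '<' if sys.byteorder == 'little' else '>'
        if arr.dtype.byteorder == native:
            return arr
        # Swap the bytes in memory *and* flip the dtype, so the values are
        # unchanged but the storage becomes native (case 3 above).
        return arr.byteswap().newbyteorder()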
""" ============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that remaining dimension of lenth 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. Numpy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
The other involves giving a boolean array of the proper
shape to indicate the values to be selected.  Index arrays are a very
powerful tool that allow one to avoid looping over individual elements
in arrays and thus greatly improve performance.

It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
array acquires the shape needed for use in an expression or with a
specific function.

Index arrays
============

Numpy arrays may be indexed with other arrays (or any other
sequence-like object that can be converted to an array, such as lists,
with the exception of tuples; see the end of this document for why this
is).  The use of index arrays ranges from simple, straightforward cases
to complex, hard-to-understand cases.  For all cases of index arrays,
what is returned is a copy of the original data, not a view as one gets
for slices.

Index arrays must be of integer type.  Each value in the array indicates
which value in the array to use in place of the index.  To
illustrate: ::

    >>> x = np.arange(10,1,-1)
    >>> x
    array([10, 9, 8, 7, 6, 5, 4, 3, 2])
    >>> x[np.array([3, 3, 1, 8])]
    array([7, 7, 9, 2])

The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (same as the index array) where each index
is replaced by the value the index array has in the array being indexed.

Negative values are permitted and work as they do with single indices
or slices: ::

    >>> x[np.array([3,3,-3,8])]
    array([7, 7, 4, 2])

It is an error to have index values out of bounds: ::

    >>> x[np.array([3, 3, 20, 8])]
    <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9

Generally speaking, what is returned when index arrays are used is an
array with the same shape as the index array, but with the type and
values of the array being indexed.  As an example, we can use a
multidimensional index array instead: ::

    >>> x[np.array([[1,1],[2,3]])]
    array([[9, 9],
           [8, 7]])

Indexing Multi-dimensional arrays
=================================

Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays.  These tend to be more
unusual uses, but they are permitted, and they are useful for some
problems.  We'll start with the simplest multidimensional case (using
the array y from the previous examples): ::

    >>> y[np.array([0,2,4]), np.array([0,1,2])]
    array([ 0, 15, 30])

In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the values
correspond to the index set for each position in the index arrays.  In
this example, the first index value is 0 for both index arrays, and
thus the first value of the resultant array is y[0,0].  The next value
is y[2,1], and the last is y[4,2].

If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape.  If they cannot be broadcast to the
same shape, an exception is raised: ::

    >>> y[np.array([0,2,4]), np.array([0,1])]
    <type 'exceptions.ValueError'>: shape mismatch: objects cannot be
    broadcast to a single shape

The broadcasting mechanism permits index arrays to be combined with
scalars for other indices.  The effect is that the scalar value is used
for all the corresponding values of the index arrays: ::

    >>> y[np.array([0,2,4]), 1]
    array([ 1, 15, 29])

Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays.
It takes a bit of thought
to understand what happens in such cases.  For example if we just use
one index array with y: ::

    >>> y[np.array([0,2,4])]
    array([[ 0,  1,  2,  3,  4,  5,  6],
           [14, 15, 16, 17, 18, 19, 20],
           [28, 29, 30, 31, 32, 33, 34]])

What results is the construction of a new array where each value of the
index array selects one row from the array being indexed and the
resultant array has the resulting shape (number of index elements, size
of row).

An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display.  The lookup table could have a shape (nlookup, 3).  Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8 (or
any integer type so long as values are within the bounds of the lookup
table) will result in an array of shape (ny, nx, 3) where a triple of
RGB values is associated with each pixel location.

In general, the shape of the resultant array will be the concatenation
of the shape of the index array (or the shape that all the index arrays
were broadcast to) with the shape of any unused dimensions (those not
indexed) in the array being indexed.

Boolean or "mask" index arrays
==============================

Boolean arrays used as indices are treated in a different manner
entirely than index arrays.  Boolean arrays must be of the same shape
as the initial dimensions of the array being indexed.  In the most
straightforward case, the boolean array has the same shape: ::

    >>> b = y>20
    >>> y[b]
    array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])

The result is a 1-D array containing all the elements in the indexed
array corresponding to all the true elements in the boolean array.  As
with index arrays, what is returned is a copy of the data, not a view
as one gets with slices.

The result will be multidimensional if y has more dimensions than b.
For example: ::

    >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
    array([False, False, False,  True,  True], dtype=bool)
    >>> y[b[:,5]]
    array([[21, 22, 23, 24, 25, 26, 27],
           [28, 29, 30, 31, 32, 33, 34]])

Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.

In general, when the boolean array has fewer dimensions than the array
being indexed, this is equivalent to y[b, ...], which means y is
indexed by b followed by as many : as are needed to fill out the rank
of y.  Thus the shape of the result is one dimension containing the
number of True elements of the boolean array, followed by the remaining
dimensions of the array being indexed.

For example, using a 2-D boolean array of shape (2,3) with four True
elements to select rows from a 3-D array of shape (2,3,5) results in a
2-D result of shape (4,5): ::

    >>> x = np.arange(30).reshape(2,3,5)
    >>> x
    array([[[ 0,  1,  2,  3,  4],
            [ 5,  6,  7,  8,  9],
            [10, 11, 12, 13, 14]],
           [[15, 16, 17, 18, 19],
            [20, 21, 22, 23, 24],
            [25, 26, 27, 28, 29]]])
    >>> b = np.array([[True, True, False], [False, True, True]])
    >>> x[b]
    array([[ 0,  1,  2,  3,  4],
           [ 5,  6,  7,  8,  9],
           [20, 21, 22, 23, 24],
           [25, 26, 27, 28, 29]])

For further details, consult the numpy reference documentation on array
indexing.

Combining index arrays with slices
==================================

Index arrays may be combined with slices.  For example: ::

    >>> y[np.array([0,2,4]),1:3]
    array([[ 1,  2],
           [15, 16],
           [29, 30]])

In effect, the slice is converted to an index array np.array([[1,2]])
(shape (1,2)) that is broadcast with the index array to produce a
resultant array of shape (3,2).
Likewise, slicing can be combined with broadcasted boolean indices: ::

    >>> y[b[:,5],1:3]
    array([[22, 23],
           [29, 30]])

Structural indexing tools
=========================

To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices to
add new dimensions with a size of 1.  For example: ::

    >>> y.shape
    (5, 7)
    >>> y[:,np.newaxis,:].shape
    (5, 1, 7)

Note that there are no new elements in the array, just that the
dimensionality is increased.  This can be handy to combine two arrays
in a way that otherwise would require explicit reshaping operations.
For example: ::

    >>> x = np.arange(5)
    >>> x[:,np.newaxis] + x[np.newaxis,:]
    array([[0, 1, 2, 3, 4],
           [1, 2, 3, 4, 5],
           [2, 3, 4, 5, 6],
           [3, 4, 5, 6, 7],
           [4, 5, 6, 7, 8]])

The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions.  For example: ::

    >>> z = np.arange(81).reshape(3,3,3,3)
    >>> z[1,...,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

This is equivalent to: ::

    >>> z[1,:,:,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

Assigning values to indexed arrays
==================================

As mentioned, one can select a subset of an array to assign to using a
single index, slices, and index and mask arrays.  The value being
assigned to the indexed array must be shape consistent (the same shape
or broadcastable to the shape the index produces).  For example, it is
permitted to assign a constant to a slice: ::

    >>> x = np.arange(10)
    >>> x[2:7] = 1

or an array of the right size: ::

    >>> x[2:7] = np.arange(5)

Note that assignments may result in changes if assigning higher types
to lower types (like floats to ints) or even exceptions (assigning
complex to floats or ints): ::

    >>> x[1] = 1.2
    >>> x[1]
    1
    >>> x[1] = 1.2j
    <type 'exceptions.TypeError'>: can't convert complex to long; use
    long(abs(z))

Unlike some of the references (such as array and mask indices),
assignments are always made to the original data in the array (indeed,
nothing else would make sense!).  Note though, that some actions may
not work as one may naively expect.  This particular example is often
surprising to people: ::

    >>> x = np.arange(0, 50, 10)
    >>> x
    array([ 0, 10, 20, 30, 40])
    >>> x[np.array([1, 1, 3, 1])] += 1
    >>> x
    array([ 0, 11, 20, 31, 40])

where people expect that the 1st location will be incremented by 3.  In
fact, it will only be incremented by 1.  The reason is that a new array
is extracted from the original (as a temporary) containing the values
at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the
temporary is assigned back to the original array.  Thus the value of
the array at x[1]+1 is assigned to x[1] three times, rather than being
incremented 3 times.

Dealing with variable numbers of indices within programs
========================================================

The index syntax is very powerful but limiting when dealing with a
variable number of indices.  For example, if you want to write a
function that can handle arguments with various numbers of dimensions
without having to write special case code for each number of possible
dimensions, how can that be done?  If one supplies a tuple to the
index, the tuple will be interpreted as a list of indices.  For example
(using the previous definition for the array z): ::

    >>> indices = (1,1,1,1)
    >>> z[indices]
    40

So one can use code to construct tuples of any number of indices and
then use these within an index.

Slices can be specified within programs by using the slice() function
in Python.
For example: ::

    >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
    >>> z[indices]
    array([39, 40])

Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::

    >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
    >>> z[indices]
    array([[28, 31, 34],
           [37, 40, 43],
           [46, 49, 52]])

For this reason it is possible to use the output from the np.where()
function directly as an index since it always returns a tuple of index
arrays.

Because of the special treatment of tuples, they are not automatically
converted to an array as a list would be.  As an example: ::

    >>> z[[1,1,1,1]] # produces a large array
    array([[[[27, 28, 29],
             [30, 31, 32], ...
    >>> z[(1,1,1,1)] # returns a single value
    40
"""
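As a small illustration of the np.where() remark above (not one of the
original examples, but it follows directly from it): since
np.where(condition) returns a tuple of index arrays, its result can be
used as an index without modification, using the same z as before: ::

    >>> selector = np.where(z > 75)   # tuple of four index arrays
    >>> z[selector]
    array([76, 77, 78, 79, 80])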
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1,
        true, yes, on for True).  Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value)
        for each option in the section.  Otherwise, return a list of
        tuples with (section_name, section_proxy) for each section,
        including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format.  If
        `space_around_delimiters' is True (the default), delimiters
        between keys and values are surrounded by spaces.
"""
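A minimal round-trip sketch of the API described above (the section and
option names here are invented for illustration):

    >>> import configparser
    >>> cp = configparser.ConfigParser()
    >>> cp.read_string("[server]\nhost = localhost\nport = 8080\n")
    >>> cp.getint("server", "port")
    8080
    >>> cp.get("server", "host")
    'localhost'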
"""Doctest for method/function calls. We're going the use these types for extra testing >>> from collections import UserList >>> from collections import UserDict We're defining four helper functions >>> def e(a,b): ... print(a, b) >>> def f(*a, **k): ... print(a, support.sortdict(k)) >>> def g(x, *y, **z): ... print(x, y, support.sortdict(z)) >>> def h(j=1, a=2, h=3): ... print(j, a, h) Argument list examples >>> f() () {} >>> f(1) (1,) {} >>> f(1, 2) (1, 2) {} >>> f(1, 2, 3) (1, 2, 3) {} >>> f(1, 2, 3, *(4, 5)) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *[4, 5]) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *UserList([4, 5])) (1, 2, 3, 4, 5) {} Here we add keyword arguments >>> f(1, 2, 3, **{'a':4, 'b':5}) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7}) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9}) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} >>> f(1, 2, 3, **UserDict(a=4, b=5)) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7)) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9)) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} Examples with invalid arguments (TypeErrors). We're also testing the function names in the exception messages. Verify clearing of SF bug #733667 >>> e(c=4) Traceback (most recent call last): ... TypeError: e() got an unexpected keyword argument 'c' >>> g() Traceback (most recent call last): ... TypeError: g() takes at least 1 positional argument (0 given) >>> g(*()) Traceback (most recent call last): ... TypeError: g() takes at least 1 positional argument (0 given) >>> g(*(), **{}) Traceback (most recent call last): ... TypeError: g() takes at least 1 positional argument (0 given) >>> g(1) 1 () {} >>> g(1, 2) 1 (2,) {} >>> g(1, 2, 3) 1 (2, 3) {} >>> g(1, 2, 3, *(4, 5)) 1 (2, 3, 4, 5) {} >>> class Nothing: pass ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not Nothing >>> class Nothing: ... def __len__(self): return 5 ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not Nothing >>> class Nothing(): ... def __len__(self): return 5 ... def __getitem__(self, i): ... if i<3: return i ... else: raise IndexError(i) ... >>> g(*Nothing()) 0 (1, 2) {} >>> class Nothing: ... def __init__(self): self.c = 0 ... def __iter__(self): return self ... def __next__(self): ... if self.c == 4: ... raise StopIteration ... c = self.c ... self.c += 1 ... return c ... >>> g(*Nothing()) 0 (1, 2, 3) {} Make sure that the function doesn't stomp the dictionary >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d2 = d.copy() >>> g(1, d=4, **d) 1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4} >>> d == d2 True What about willful misconduct? >>> def saboteur(**kw): ... kw['x'] = 'm' ... return kw >>> d = {} >>> kw = saboteur(a=1, **d) >>> d {} >>> g(1, 2, 3, **{'x': 4, 'y': 5}) Traceback (most recent call last): ... TypeError: g() got multiple values for keyword argument 'x' >>> f(**{1:2}) Traceback (most recent call last): ... TypeError: f() keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' >>> h(*h) Traceback (most recent call last): ... TypeError: h() argument after * must be a sequence, not function >>> dir(*h) Traceback (most recent call last): ... TypeError: dir() argument after * must be a sequence, not function >>> None(*h) Traceback (most recent call last): ... 
TypeError: NoneType object argument after * must be a sequence, \
not function

>>> h(**h)
Traceback (most recent call last):
  ...
TypeError: h() argument after ** must be a mapping, not function

>>> dir(**h)
Traceback (most recent call last):
  ...
TypeError: dir() argument after ** must be a mapping, not function

>>> None(**h)
Traceback (most recent call last):
  ...
TypeError: NoneType object argument after ** must be a mapping, \
not function

>>> dir(b=1, **{'b': 1})
Traceback (most recent call last):
  ...
TypeError: dir() got multiple values for keyword argument 'b'

Another helper function

>>> def f2(*a, **b):
...     return a, b

>>> d = {}
>>> for i in range(512):
...     key = 'k%d' % i
...     d[key] = i
>>> a, b = f2(1, *(2,3), **d)
>>> len(a), len(b), b == d
(3, 512, True)

>>> class Foo:
...     def method(self, arg1, arg2):
...         return arg1+arg2

>>> x = Foo()
>>> Foo.method(*(x, 1, 2))
3
>>> Foo.method(x, *(1, 2))
3
>>> Foo.method(*(1, 2, 3))
5
>>> Foo.method(1, *[2, 3])
5

A PyCFunction that takes only positional parameters should allow an
empty keyword dictionary to pass without a complaint, but raise a
TypeError if the dictionary is not empty

>>> try:
...     silence = id(1, *{})
...     True
... except:
...     False
True

>>> id(1, **{'foo': 1})
Traceback (most recent call last):
  ...
TypeError: id() takes no keyword arguments

A corner case of keyword dictionary items being deleted during the
function call setup.  See <http://bugs.python.org/issue2016>.

>>> class Name(str):
...     def __eq__(self, other):
...         try:
...              del x[self]
...         except KeyError:
...              pass
...         return str.__eq__(self, other)
...     def __hash__(self):
...         return str.__hash__(self)

>>> x = {Name("a"):1, Name("b"):2}
>>> def f(a, b):
...     print(a,b)
>>> f(**x)
1 2
"""
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be > 1D atleast_2d Force arrays to be > 2D atleast_3d Force arrays to be > 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat              Construct a Matrix
bmat             Build a Matrix from blocks
================ ===================

Polynomials
-----------
================ ===================
poly1d           A one-dimensional polynomial class
poly             Return polynomial coefficients from roots
roots            Find roots of polynomial given coefficients
polyint          Integrate polynomial
polyder          Differentiate polynomial
polyadd          Add polynomials
polysub          Subtract polynomials
polymul          Multiply polynomials
polydiv          Divide polynomials
polyval          Evaluate polynomial at given argument
================ ===================

Import Tricks
-------------
================ ===================
ppimport         Postpone module import until trying to use it
ppimport_attr    Postpone module import until trying to use its
                 attribute
ppresolve        Import postponed module and return it.
================ ===================

Machine Arithmetics
-------------------
================ ===================
machar_single    Single precision floating point arithmetic parameters
machar_double    Double precision floating point arithmetic parameters
================ ===================

Threading Tricks
----------------
================ ===================
ParallelExec     Execute commands in parallel thread.
================ ===================

1D Array Set Operations
-----------------------
Set operations for 1D numeric arrays based on sort() function.

================ ===================
ediff1d          Array difference (auxiliary function).
unique           Unique elements of an array.
intersect1d      Intersection of 1D arrays with unique elements.
setxor1d         Set exclusive-or of 1D arrays with unique elements.
in1d             Test whether elements in a 1D array are also present
                 in another array.
union1d          Union of 1D arrays with unique elements.
setdiff1d        Set difference of 1D arrays with unique elements.
================ ===================
"""
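To ground a couple of the entries above, here is a short sketch using
``linspace`` and ``select``.  This example is not from the original
listing; the output repr shown is from a recent numpy, and exact
spacing may differ between versions:

    >>> import numpy as np
    >>> x = np.linspace(0, 1, 5)   # evenly spaced samples in linear space
    >>> np.select([x < 0.5, x >= 0.5], [x, 1 - x])   # tent function
    array([0.  , 0.25, 0.5 , 0.25, 0.  ])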
# list of input issues
#
# general
# auxiliary should be AUXNAMES auxnames(naux)
# differentiate somehow a block that has repeating records from other block types?
# should mark time series variables in some way special.  Maybe an *?
# should we begin all character string variables with a 'c'?
#
# mfsim.nam
# add dimensions block to sim name file? (NMODELS, NEXCHANGES, NSOLUTIONGROUPS)
# add NSLNMODELS TO NUMERICAL line so we know how many models to read
# mxiter is inside solution_group (label the block instead?)
# change 'NUMERICAL' to 'SMS'
#
# tdis
# read perlen, nstp, tsmult separately as arrays
#
# gwfgwf
# change blockname EXCHANGEDATA?  DATA or LIST or GWFGWFDATA
# change auxiliary to auxnames(naux)
# Move to different part of user guide?
#
# sms
#
# dis
#
# disu
#
# ic
# no options, so what do we do?  allow empty options blocks?  dummy option?
#
# chd
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
#
# wel
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
#
# drn
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
#
# riv
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
#
# ghb
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
#
# rch
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
# do not support array-based input for DISU
# looks like code can be cleaned up quite a bit if we require lists for DISU
# for array-based input, should arrays be optional?
#
# evt
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
# do not support array-based input for DISU
# looks like code can be cleaned up quite a bit if we require lists for DISU
# for array-based input, should arrays be optional?
# do not support segmented ET for array-based input
#
# maw
# need to remove the word WELL as the first thing in the WELLS block
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
# suggest renaming ngwfnodes to ncon or nconn
# change 'WELLS' to 'MAWDATA' (do this consistently throughout SFR, LAK, etc.?)
# change 'WELL_CONNECTIONS' to 'CONNECTIONS'
# eliminate 'STEADY-STATE' keyword from period block
# Change ACTIVE, INACTIVE, and CONSTANT into values for STATUS.
#
# sfr
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
# change from SFR_OUTPUT to SFROUTPUT_FILENAME? or OUTPUT_FILENAME
# rno does not correspond to implicit integer definition
# unit conversion used in sfr, length/time conversion used in lake
# need to implement STATUS
#
# lak
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
# unit conversion used in sfr, length/time conversion used in lake
# change time_conversion to timefactor?
# change LAKE_CONNECTIONS to CONNECTIONS
# change LAKE_TABLES to TABLES
# change FILE ctabname to TABLE_FILENAME table_filename
# STATUS not implemented yet, but it is described in input instructions
# invert indicates an integer variable; change to dinvert?
# time series variables are listed as "real or character ..."; they should
#   just be double precision
# capitalize example input file words that are recognized by mf6
#
# uzf
# change to OBS8_FILENAME obs8_filename
# change AUXILIARY to AUXNAMES
# Remove steady-state / transient flag
# Change DATA block to UZFDATA
# boundname not implemented.  Should be implemented for uzfdata?
# aux not implemented.  should be implemented for period block?
# implement uzfsettings approach?  Or stick with a simple list?
# example shows 'uzf' keyword as first item in period block
# combine SIMULATE_ET, LINEAR_GWET, and SQUARE_GWET into a single line?
#
# mvr
# change maxpackages to npackages
# Included WEL, DRN, RIV, GHB as providers, though that is not supported
#   in the code yet
#
# oc
# output control rewritten entirely, and implemented in the code
#
# DEFINITION FILE KEYWORDS
# block :: name of block
# name :: variable name
# in_record :: optional True or False, False if not specified
# type :: recarray, record, keyword, integer, double precision, keystring
# tagged :: optional True or False, True if not specified.  If tagged,
#   then keyword comes before value
# shape :: (size), optional, only required for arrays
# valid :: description of valid values
# reader :: urword, readarray, u1dint, ...
# optional :: optional True or False, False if not specified
# longname :: long name for variable
# description :: description for variable, REPLACE tag indicates that
#   description will come from common.dfn
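# To make the keyword list above concrete, here is a sketch of what a
# single variable definition could look like in a dfn file.  The variable
# shown is invented for illustration and is not copied from an actual
# definition file:
#
#   block period
#   name stress_period_data
#   type recarray cellid q
#   shape (maxbound)
#   reader urword
#   optional false
#   longname stress period data
#   description list of cells and associated rates for this stress period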
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): SyntaxError: name 'x' is local and global (<doctest test.test_syntax[0]>, line 1) The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): >>> obj.None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[1]>", line 1 SyntaxError: cannot assign to None >>> None = 1 Traceback (most recent call last): File "<doctest test.test_syntax[2]>", line 1 SyntaxError: cannot assign to None It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): File "<doctest test.test_syntax[3]>", line 1 SyntaxError: can't assign to () >>> f() = 1 Traceback (most recent call last): File "<doctest test.test_syntax[4]>", line 1 SyntaxError: can't assign to function call >>> del f() Traceback (most recent call last): File "<doctest test.test_syntax[5]>", line 1 SyntaxError: can't delete function call >>> a + 1 = 2 Traceback (most recent call last): File "<doctest test.test_syntax[6]>", line 1 SyntaxError: can't assign to operator >>> (x for x in x) = 1 Traceback (most recent call last): File "<doctest test.test_syntax[7]>", line 1 SyntaxError: can't assign to generator expression >>> 1 = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> "abc" = 1 Traceback (most recent call last): File "<doctest test.test_syntax[8]>", line 1 SyntaxError: can't assign to literal >>> `1` = 1 Traceback (most recent call last): File "<doctest test.test_syntax[10]>", line 1 SyntaxError: can't assign to repr If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): File "<doctest test.test_syntax[11]>", line 1 SyntaxError: can't assign to literal >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): File "<doctest test.test_syntax[12]>", line 1 SyntaxError: can't assign to operator >>> a if 1 else b = 1 Traceback (most recent call last): File "<doctest test.test_syntax[13]>", line 1 SyntaxError: can't assign to conditional expression From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): File "<doctest test.test_syntax[14]>", line 1 SyntaxError: cannot assign to None From ast_for_arguments(): >>> def f(x, y=1, z): ... 
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[15]>", line 1
SyntaxError: non-default argument follows default argument

>>> def f(x, None):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[16]>", line 1
SyntaxError: cannot assign to None

>>> def f(*None):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[17]>", line 1
SyntaxError: cannot assign to None

>>> def f(**None):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[18]>", line 1
SyntaxError: cannot assign to None

From ast_for_funcdef():

>>> def None(x):
...     pass
Traceback (most recent call last):
  File "<doctest test.test_syntax[19]>", line 1
SyntaxError: cannot assign to None

From ast_for_call():

>>> def f(it, *varargs):
...     return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[23]>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
...   i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
...   i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
...   i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
...   i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
...   i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
...   i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
...   i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
...   i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
...   i100, i101, i102, i103, i104, i105, i106, i107, i108,
...   i109, i110, i111, i112, i113, i114, i115, i116, i117,
...   i118, i119, i120, i121, i122, i123, i124, i125, i126,
...   i127, i128, i129, i130, i131, i132, i133, i134, i135,
...   i136, i137, i138, i139, i140, i141, i142, i143, i144,
...   i145, i146, i147, i148, i149, i150, i151, i152, i153,
...   i154, i155, i156, i157, i158, i159, i160, i161, i162,
...   i163, i164, i165, i166, i167, i168, i169, i170, i171,
...   i172, i173, i174, i175, i176, i177, i178, i179, i180,
...   i181, i182, i183, i184, i185, i186, i187, i188, i189,
...   i190, i191, i192, i193, i194, i195, i196, i197, i198,
...   i199, i200, i201, i202, i203, i204, i205, i206, i207,
...   i208, i209, i210, i211, i212, i213, i214, i215, i216,
...   i217, i218, i219, i220, i221, i222, i223, i224, i225,
...   i226, i227, i228, i229, i230, i231, i232, i233, i234,
...   i235, i236, i237, i238, i239, i240, i241, i242, i243,
...   i244, i245, i246, i247, i248, i249, i250, i251, i252,
...   i253, i254, i255)
Traceback (most recent call last):
  File "<doctest test.test_syntax[25]>", line 1
SyntaxError: more than 255 arguments

The actual error cases count positional arguments, keyword arguments,
and generator expression arguments separately.  This test combines the
three.

>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
...   i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
...   i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
...   i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
...   i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
...   i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
...   i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
...   i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
...   i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
...   i100, i101, i102, i103, i104, i105, i106, i107, i108,
...   i109, i110, i111, i112, i113, i114, i115, i116, i117,
...   i118, i119, i120, i121, i122, i123, i124, i125, i126,
...   i127, i128, i129, i130, i131, i132, i133, i134, i135,
...   i136, i137, i138, i139, i140, i141, i142, i143, i144,
...   i145, i146, i147, i148, i149, i150, i151, i152, i153,
...   i154, i155, i156, i157, i158, i159, i160, i161, i162,
...   i163, i164, i165, i166, i167, i168, i169, i170, i171,
...   i172, i173, i174, i175, i176, i177, i178, i179, i180,
...   i181, i182, i183, i184, i185, i186, i187, i188, i189,
...   i190, i191, i192, i193, i194, i195, i196, i197, i198,
...   i199, i200, i201, i202, i203, i204, i205, i206, i207,
...   i208, i209, i210, i211, i212, i213, i214, i215, i216,
...   i217, i218, i219, i220, i221, i222, i223, i224, i225,
...   i226, i227, i228, i229, i230, i231, i232, i233, i234,
...   i235, i236, i237, i238, i239, i240, i241, i242, i243,
...   (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
...   i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[26]>", line 1
SyntaxError: more than 255 arguments

>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
  File "<doctest test.test_syntax[27]>", line 1
SyntaxError: lambda cannot contain assignment

The grammar accepts any test (basically, any expression) in the
keyword slot of a call site.  Test a few different options.

>>> f(x()=2)
Traceback (most recent call last):
  File "<doctest test.test_syntax[28]>", line 1
SyntaxError: keyword can't be an expression
>>> f(a or b=1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[29]>", line 1
SyntaxError: keyword can't be an expression
>>> f(x.y=1)
Traceback (most recent call last):
  File "<doctest test.test_syntax[30]>", line 1
SyntaxError: keyword can't be an expression

More set_context():

>>> (x for x in x) += 1
Traceback (most recent call last):
  File "<doctest test.test_syntax[31]>", line 1
SyntaxError: can't assign to generator expression
>>> None += 1
Traceback (most recent call last):
  File "<doctest test.test_syntax[32]>", line 1
SyntaxError: cannot assign to None
>>> f() += 1
Traceback (most recent call last):
  File "<doctest test.test_syntax[33]>", line 1
SyntaxError: can't assign to function call

Test continue in finally in weird combinations.

continue in for loop under finally should be ok.

>>> def test():
...     try:
...         pass
...     finally:
...         for abc in range(10):
...             continue
...     print abc
>>> test()
9

Start simple, a continue in a finally should not be allowed.

>>> def test():
...    for abc in range(10):
...        try:
...            pass
...        finally:
...            continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[36]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause

This is essentially a continue in a finally which should not be allowed.

>>> def test():
...    for abc in range(10):
...        try:
...            pass
...        finally:
...            try:
...                continue
...            except:
...                pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[37]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     try:
...         pass
...     finally:
...         continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[38]>", line 5
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     for a in ():
...       try:
...           pass
...       finally:
...           continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[39]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     for a in ():
...       try:
...           pass
...       finally:
...           try:
...               continue
...           finally:
...               pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[40]>", line 7
SyntaxError: 'continue' not supported inside 'finally' clause

>>> def foo():
...     for a in ():
...       try: pass
...       finally:
...           try:
...               pass
...           except:
...               continue
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[41]>", line 8
SyntaxError: 'continue' not supported inside 'finally' clause

There is one test for a break that is not in a loop.  The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop.  If it
isn't, there should be a syntax error.

>>> try:
...     print 1
...     break
...     print 2
... finally:
...     print 3
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[42]>", line 3
SyntaxError: 'break' outside loop

This should probably raise a better error than a SystemError (or none
at all).  In 2.5 there was a missing exception and an assert was
triggered in a debug build.  The number of blocks must be greater than
CO_MAXBLOCKS.  SF #1565514

>>> while 1:
...  while 2:
...   while 3:
...    while 4:
...     while 5:
...      while 6:
...       while 8:
...        while 9:
...         while 10:
...          while 11:
...           while 12:
...            while 13:
...             while 14:
...              while 15:
...               while 16:
...                while 17:
...                 while 18:
...                  while 19:
...                   while 20:
...                    while 21:
...                     while 22:
...                      break
Traceback (most recent call last):
  ...
SystemError: too many statically nested blocks

This tests assignment-context; there was a bug in Python 2.5 where
compiling a complex 'if' (one with 'elif') would fail to notice an
invalid suite, leading to spurious errors.

>>> if 1:
...   x() = 1
... elif 1:
...   pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[44]>", line 2
SyntaxError: can't assign to function call

>>> if 1:
...   pass
... elif 1:
...   x() = 1
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[45]>", line 4
SyntaxError: can't assign to function call

>>> if 1:
...   x() = 1
... elif 1:
...   pass
... else:
...   pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[46]>", line 2
SyntaxError: can't assign to function call

>>> if 1:
...   pass
... elif 1:
...   x() = 1
... else:
...   pass
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[47]>", line 4
SyntaxError: can't assign to function call

>>> if 1:
...   pass
... elif 1:
...   pass
... else:
...   x() = 1
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[48]>", line 6
SyntaxError: can't assign to function call

>>> f(a=23, a=234)
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[49]>", line 1
SyntaxError: keyword argument repeated

>>> del ()
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[50]>", line 1
SyntaxError: can't delete ()

>>> {1, 2, 3} = 42
Traceback (most recent call last):
  ...
  File "<doctest test.test_syntax[50]>", line 1
SyntaxError: can't assign to literal

Corner-case that used to crash:

>>> def f(*xx, **__debug__): pass
Traceback (most recent call last):
SyntaxError: cannot assign to __debug__
"""
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingServer and ThreadingServer mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the mix-in request handler classes StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to reqd all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. 
Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <EMAIL> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <EMAIL>, 2012-2013 # Copyright (c), NAME <EMAIL>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
""" ================================== Constants (:mod:`scipy.constants`) ================================== .. currentmodule:: scipy.constants Physical and mathematical constants and units. Mathematical constants ====================== ================ ================================================================= ``pi`` Pi ``golden`` Golden ratio ``golden_ratio`` Golden ratio ================ ================================================================= Physical constants ================== =========================== ================================================================= ``c`` speed of light in vacuum ``speed_of_light`` speed of light in vacuum ``mu_0`` the magnetic constant :math:`\mu_0` ``epsilon_0`` the electric constant (vacuum permittivity), :math:`\epsilon_0` ``h`` the Planck constant :math:`h` ``Planck`` the Planck constant :math:`h` ``hbar`` :math:`\hbar = h/(2\pi)` ``G`` Newtonian constant of gravitation ``gravitational_constant`` Newtonian constant of gravitation ``g`` standard acceleration of gravity ``e`` elementary charge ``elementary_charge`` elementary charge ``R`` molar gas constant ``gas_constant`` molar gas constant ``alpha`` fine-structure constant ``fine_structure`` fine-structure constant ``N_A`` Avogadro constant ``Avogadro`` Avogadro constant ``k`` Boltzmann constant ``Boltzmann`` Boltzmann constant ``sigma`` Stefan-Boltzmann constant :math:`\sigma` ``Stefan_Boltzmann`` Stefan-Boltzmann constant :math:`\sigma` ``Wien`` Wien displacement law constant ``Rydberg`` Rydberg constant ``m_e`` electron mass ``electron_mass`` electron mass ``m_p`` proton mass ``proton_mass`` proton mass ``m_n`` neutron mass ``neutron_mass`` neutron mass =========================== ================================================================= Constants database ------------------ In addition to the above variables, :mod:`scipy.constants` also contains the 2014 CODATA recommended values [CODATA2014]_ database containing more physical constants. .. autosummary:: :toctree: generated/ value -- Value in physical_constants indexed by key unit -- Unit in physical_constants indexed by key precision -- Relative precision in physical_constants indexed by key find -- Return list of physical_constant keys with a given string ConstantWarning -- Constant sought not in newest CODATA data set .. data:: physical_constants Dictionary of physical constants, of the format ``physical_constants[name] = (value, unit, uncertainty)``. 
Available constants:

======================================================================  ====
%(constant_names)s
======================================================================  ====


Units
=====

SI prefixes
-----------

============  =================================================================
``yotta``     :math:`10^{24}`
``zetta``     :math:`10^{21}`
``exa``       :math:`10^{18}`
``peta``      :math:`10^{15}`
``tera``      :math:`10^{12}`
``giga``      :math:`10^{9}`
``mega``      :math:`10^{6}`
``kilo``      :math:`10^{3}`
``hecto``     :math:`10^{2}`
``deka``      :math:`10^{1}`
``deci``      :math:`10^{-1}`
``centi``     :math:`10^{-2}`
``milli``     :math:`10^{-3}`
``micro``     :math:`10^{-6}`
``nano``      :math:`10^{-9}`
``pico``      :math:`10^{-12}`
``femto``     :math:`10^{-15}`
``atto``      :math:`10^{-18}`
``zepto``     :math:`10^{-21}`
``yocto``     :math:`10^{-24}`
============  =================================================================

Binary prefixes
---------------

============  =================================================================
``kibi``      :math:`2^{10}`
``mebi``      :math:`2^{20}`
``gibi``      :math:`2^{30}`
``tebi``      :math:`2^{40}`
``pebi``      :math:`2^{50}`
``exbi``      :math:`2^{60}`
``zebi``      :math:`2^{70}`
``yobi``      :math:`2^{80}`
============  =================================================================

Weight
------

=================  ============================================================
``gram``           :math:`10^{-3}` kg
``metric_ton``     :math:`10^{3}` kg
``grain``          one grain in kg
``lb``             one pound (avoirdupois) in kg
``pound``          one pound (avoirdupois) in kg
``oz``             one ounce in kg
``ounce``          one ounce in kg
``stone``          one stone in kg
``long_ton``       one long ton in kg
``short_ton``      one short ton in kg
``troy_ounce``     one Troy ounce in kg
``troy_pound``     one Troy pound in kg
``carat``          one carat in kg
``m_u``            atomic mass constant (in kg)
``u``              atomic mass constant (in kg)
``atomic_mass``    atomic mass constant (in kg)
=================  ============================================================

Angle
-----

=================  ============================================================
``degree``         degree in radians
``arcmin``         arc minute in radians
``arcminute``      arc minute in radians
``arcsec``         arc second in radians
``arcsecond``      arc second in radians
=================  ============================================================

Time
----

=================  ============================================================
``minute``         one minute in seconds
``hour``           one hour in seconds
``day``            one day in seconds
``week``           one week in seconds
``year``           one year (365 days) in seconds
``Julian_year``    one Julian year (365.25 days) in seconds
=================  ============================================================

Length
------

=====================  ============================================================
``inch``               one inch in meters
``foot``               one foot in meters
``yard``               one yard in meters
``mile``               one mile in meters
``mil``                one mil in meters
``pt``                 one point in meters
``point``              one point in meters
``survey_foot``        one survey foot in meters
``survey_mile``        one survey mile in meters
``nautical_mile``      one nautical mile in meters
``fermi``              one Fermi in meters
``angstrom``           one Angstrom in meters
``micron``             one micron in meters
``au``                 one astronomical unit in meters
``astronomical_unit``  one astronomical unit in meters
``light_year``         one light year in meters
``parsec``             one parsec in meters
=====================  ============================================================

Pressure
--------

=================  ============================================================
``atm``            standard atmosphere in pascals
``atmosphere``     standard atmosphere in pascals
``bar``            one bar in pascals
``torr``           one torr (mmHg) in pascals
``mmHg``           one torr (mmHg) in pascals
``psi``            one psi in pascals
=================  ============================================================

Area
----

=================  ============================================================
``hectare``        one hectare in square meters
``acre``           one acre in square meters
=================  ============================================================

Volume
------

===================  ========================================================
``liter``            one liter in cubic meters
``litre``            one liter in cubic meters
``gallon``           one gallon (US) in cubic meters
``gallon_US``        one gallon (US) in cubic meters
``gallon_imp``       one gallon (UK) in cubic meters
``fluid_ounce``      one fluid ounce (US) in cubic meters
``fluid_ounce_US``   one fluid ounce (US) in cubic meters
``fluid_ounce_imp``  one fluid ounce (UK) in cubic meters
``bbl``              one barrel in cubic meters
``barrel``           one barrel in cubic meters
===================  ========================================================

Speed
-----

==================  ==========================================================
``kmh``             kilometers per hour in meters per second
``mph``             miles per hour in meters per second
``mach``            one Mach (approx., at 15 C, 1 atm) in meters per second
``speed_of_sound``  one Mach (approx., at 15 C, 1 atm) in meters per second
``knot``            one knot in meters per second
==================  ==========================================================

Temperature
-----------

=====================  =======================================================
``zero_Celsius``       zero of Celsius scale in Kelvin
``degree_Fahrenheit``  one Fahrenheit (only differences) in Kelvins
=====================  =======================================================

.. autosummary::
   :toctree: generated/

   convert_temperature
   C2K
   K2C
   F2C
   C2F
   F2K
   K2F

Energy
------

====================  =======================================================
``eV``                one electron volt in Joules
``electron_volt``     one electron volt in Joules
``calorie``           one calorie (thermochemical) in Joules
``calorie_th``        one calorie (thermochemical) in Joules
``calorie_IT``        one calorie (International Steam Table calorie, 1956)
                      in Joules
``erg``               one erg in Joules
``Btu``               one British thermal unit (International Steam Table)
                      in Joules
``Btu_IT``            one British thermal unit (International Steam Table)
                      in Joules
``Btu_th``            one British thermal unit (thermochemical) in Joules
``ton_TNT``           one ton of TNT in Joules
====================  =======================================================

Power
-----

====================  =======================================================
``hp``                one horsepower in watts
``horsepower``        one horsepower in watts
====================  =======================================================

Force
-----

====================  =======================================================
``dyn``               one dyne in newtons
``dyne``              one dyne in newtons
``lbf``               one pound force in newtons
``pound_force``       one pound force in newtons
``kgf``               one kilogram force in newtons
``kilogram_force``    one kilogram force in newtons
====================  =======================================================

Optics
------

.. autosummary::
   :toctree: generated/

   lambda2nu
   nu2lambda

References
==========

.. [CODATA2014] CODATA Recommended Values of the Fundamental
   Physical Constants 2014.
   http://physics.nist.gov/cuu/Constants/index.html

"""
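A short usage sketch may help; it is hedged on a standard scipy installation, and every name it touches (``c``, ``kilo``, ``physical_constants``, ``find``, ``value``, ``convert_temperature``) is documented above:

    # Minimal sketch of querying scipy.constants.
    from scipy import constants

    # Simple module variables hold SI values directly.
    print(constants.c)        # speed of light in vacuum, m/s
    print(constants.kilo)     # SI prefix, 1000.0

    # The CODATA database maps names to (value, unit, uncertainty).
    val, unit, uncertainty = constants.physical_constants['electron mass']
    print(val, unit, uncertainty)

    # find() locates keys; value() indexes the database by key.
    print(constants.find('boltzmann'))
    print(constants.value('Boltzmann constant'))

    # Temperature helper listed above.
    print(constants.convert_temperature(100.0, 'Celsius', 'Kelvin'))  # 373.15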
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be > 1D atleast_2d Force arrays to be > 2D atleast_3d Force arrays to be > 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat Construct a Matrix bmat Build a Matrix from blocks ================ =================== Polynomials ----------- ================ =================== poly1d A one-dimensional polynomial class poly Return polynomial coefficients from roots roots Find roots of polynomial given coefficients polyint Integrate polynomial polyder Differentiate polynomial polyadd Add polynomials polysub Substract polynomials polymul Multiply polynomials polydiv Divide polynomials polyval Evaluate polynomial at given argument ================ =================== Import Tricks ------------- ================ =================== ppimport Postpone module import until trying to use it ppimport_attr Postpone module import until trying to use its attribute ppresolve Import postponed module and return it. ================ =================== Machine Arithmetics ------------------- ================ =================== machar_single Single precision floating point arithmetic parameters machar_double Double precision floating point arithmetic parameters ================ =================== Threading Tricks ---------------- ================ =================== ParallelExec Execute commands in parallel thread. ================ =================== 1D Array Set Operations ----------------------- Set operations for 1D numeric arrays based on sort() function. ================ =================== ediff1d Array difference (auxiliary function). unique Unique elements of an array. intersect1d Intersection of 1D arrays with unique elements. setxor1d Set exclusive-or of 1D arrays with unique elements. in1d Test whether elements in a 1D array are also present in another array. union1d Union of 1D arrays with unique elements. setdiff1d Set difference of 1D arrays with unique elements. ================ =================== """
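A quick sketch exercising a few of the functions listed above; a minimal example, assuming only a standard numpy installation:

    import numpy as np

    x = np.linspace(0, 1, 5)               # evenly spaced samples in linear space
    print(np.fix(np.array([1.7, -1.7])))   # round towards zero -> [ 1. -1.]

    a = np.array([[1, 2], [3, 4]])
    print(np.vstack([a, a]).shape)         # stack vertically -> (4, 2)
    print(np.fliplr(a))                    # columns flipped

    p = np.poly1d([1, -3, 2])              # the polynomial x**2 - 3x + 2
    print(p.r)                             # its roots -> [2. 1.]

    print(np.union1d([1, 2, 3], [3, 4]))   # 1D set union -> [1 2 3 4]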
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <EMAIL> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
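A minimal sketch of the pattern described above: an echo service built by combining a server class with a ``BaseRequestHandler`` subclass that overrides ``handle()``. It assumes the Python 2 module name ``SocketServer`` (the module is ``socketserver`` on Python 3) and an arbitrary example port:

    import SocketServer

    class EchoHandler(SocketServer.BaseRequestHandler):
        def handle(self):
            # For stream servers, self.request is the connected socket.
            data = self.request.recv(1024)
            self.request.sendall(data)

    class ThreadingEchoServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
        # Mix-in first, so its request handling overrides TCPServer's.
        pass

    if __name__ == '__main__':
        server = ThreadingEchoServer(('localhost', 9999), EchoHandler)
        server.serve_forever()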
""" =================== Universal Functions =================== Ufuncs are, generally speaking, mathematical functions or operations that are applied element-by-element to the contents of an array. That is, the result in each output array element only depends on the value in the corresponding input array (or arrays) and on no other array elements. Numpy comes with a large suite of ufuncs, and scipy extends that suite substantially. The simplest example is the addition operator: :: >>> np.array([0,2,3,4]) + np.array([1,1,-1,2]) array([1, 3, 2, 6]) The unfunc module lists all the available ufuncs in numpy. Documentation on the specific ufuncs may be found in those modules. This documentation is intended to address the more general aspects of unfuncs common to most of them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.) have equivalent functions defined (e.g. add() for +) Type coercion ============= What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of two different types? What is the type of the result? Typically, the result is the higher of the two types. For example: :: float32 + float64 -> float64 int8 + int32 -> int32 int16 + float32 -> float32 float32 + complex64 -> complex64 There are some less obvious cases generally involving mixes of types (e.g. uints, ints and floats) where equal bit sizes for each are not capable of saving all the information in a different type of equivalent bit size. Some examples are int32 vs float32 or uint32 vs int32. Generally, the result is the higher type of larger size than both (if available). So: :: int32 + float32 -> float64 uint32 + int32 -> int64 Finally, the type coercion behavior when expressions involve Python scalars is different than that seen for arrays. Since Python has a limited number of types, combining a Python int with a dtype=np.int8 array does not coerce to the higher type but instead, the type of the array prevails. So the rules for Python scalars combined with arrays is that the result will be that of the array equivalent the Python scalar if the Python scalar is of a higher 'kind' than the array (e.g., float vs. int), otherwise the resultant type will be that of the array. For example: :: Python int + int8 -> int8 Python float + int8 -> float64 ufunc methods ============= Binary ufuncs support 4 methods. **.reduce(arr)** applies the binary operator to elements of the array in sequence. For example: :: >>> np.add.reduce(np.arange(10)) # adds all elements of array 45 For multidimensional arrays, the first dimension is reduced by default: :: >>> np.add.reduce(np.arange(10).reshape(2,5)) array([ 5, 7, 9, 11, 13]) The axis keyword can be used to specify different axes to reduce: :: >>> np.add.reduce(np.arange(10).reshape(2,5),axis=1) array([10, 35]) **.accumulate(arr)** applies the binary operator and generates an an equivalently shaped array that includes the accumulated amount for each element of the array. A couple examples: :: >>> np.add.accumulate(np.arange(10)) array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45]) >>> np.multiply.accumulate(np.arange(1,9)) array([ 1, 2, 6, 24, 120, 720, 5040, 40320]) The behavior for multidimensional arrays is the same as for .reduce(), as is the use of the axis keyword). **.reduceat(arr,indices)** allows one to apply reduce to selected parts of an array. It is a difficult method to understand. See the documentation at: **.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and arr2. 
It will work on multidimensional arrays (the shape of the result is the concatenation of the two input shapes.: :: >>> np.multiply.outer(np.arange(3),np.arange(4)) array([[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6]]) Output arguments ================ All ufuncs accept an optional output array. The array must be of the expected output shape. Beware that if the type of the output array is of a different (and lower) type than the output result, the results may be silently truncated or otherwise corrupted in the downcast to the lower type. This usage is useful when one wants to avoid creating large temporary arrays and instead allows one to reuse the same array memory repeatedly (at the expense of not being able to use more convenient operator notation in expressions). Note that when the output argument is used, the ufunc still returns a reference to the result. >>> x = np.arange(2) >>> np.add(np.arange(2),np.arange(2.),x) array([0, 2]) >>> x array([0, 2]) and & or as ufuncs ================== Invariably people try to use the python 'and' and 'or' as logical operators (and quite understandably). But these operators do not behave as normal operators since Python treats these quite differently. They cannot be overloaded with array equivalents. Thus using 'and' or 'or' with an array results in an error. There are two alternatives: 1) use the ufunc functions logical_and() and logical_or(). 2) use the bitwise operators & and \\|. The drawback of these is that if the arguments to these operators are not boolean arrays, the result is likely incorrect. On the other hand, most usages of logical_and and logical_or are with boolean arrays. As long as one is careful, this is a convenient way to apply these operators. """
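A brief sketch tying together the scalar-coercion rules and the logical-operator advice above, assuming numpy is available:

    import numpy as np

    a = np.array([1, 2, 3], dtype=np.int8)
    print((a + 3).dtype)    # Python int defers to the array: stays int8
    print((a + 3.0).dtype)  # Python float is a higher 'kind': result is floating point

    x = np.array([True, False, True])
    y = np.array([True, True, False])
    # 'and'/'or' cannot be overloaded for arrays; use the ufuncs or '&'/'|'.
    print(np.logical_and(x, y))   # [ True False False]
    print(x & y)                  # same result, since both arrays are boolean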
""".. _dispatch_mechanism: Numpy's dispatch mechanism, introduced in numpy version v1.16 is the recommended approach for writing custom N-dimensional array containers that are compatible with the numpy API and provide custom implementations of numpy functionality. Applications include `dask <http://dask.pydata.org>`_ arrays, an N-dimensional array distributed across multiple nodes, and `cupy <https://docs-cupy.chainer.org/en/stable/>`_ arrays, an N-dimensional array on a GPU. To get a feel for writing custom array containers, we'll begin with a simple example that has rather narrow utility but illustrates the concepts involved. >>> import numpy as np >>> class DiagonalArray: ... def __init__(self, N, value): ... self._N = N ... self._i = value ... def __repr__(self): ... return f"{self.__class__.__name__}(N={self._N}, value={self._i})" ... def __array__(self): ... return self._i * np.eye(self._N) ... Our custom array can be instantiated like: >>> arr = DiagonalArray(5, 1) >>> arr DiagonalArray(N=5, value=1) We can convert to a numpy array using :func:`numpy.array` or :func:`numpy.asarray`, which will call its ``__array__`` method to obtain a standard ``numpy.ndarray``. >>> np.asarray(arr) array([[1., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.], [0., 0., 0., 1., 0.], [0., 0., 0., 0., 1.]]) If we operate on ``arr`` with a numpy function, numpy will again use the ``__array__`` interface to convert it to an array and then apply the function in the usual way. >>> np.multiply(arr, 2) array([[2., 0., 0., 0., 0.], [0., 2., 0., 0., 0.], [0., 0., 2., 0., 0.], [0., 0., 0., 2., 0.], [0., 0., 0., 0., 2.]]) Notice that the return type is a standard ``numpy.ndarray``. >>> type(arr) numpy.ndarray How can we pass our custom array type through this function? Numpy allows a class to indicate that it would like to handle computations in a custom-defined way through the interaces ``__array_ufunc__`` and ``__array_function__``. Let's take one at a time, starting with ``_array_ufunc__``. This method covers :ref:`ufuncs`, a class of functions that includes, for example, :func:`numpy.multiply` and :func:`numpy.sin`. The ``__array_ufunc__`` receives: - ``ufunc``, a function like ``numpy.multiply`` - ``method``, a string, differentiating between ``numpy.multiply(...)`` and variants like ``numpy.multiply.outer``, ``numpy.multiply.accumulate``, and so on. For the common case, ``numpy.multiply(...)``, ``method == '__call__'``. - ``inputs``, which could be a mixture of different types - ``kwargs``, keyword arguments passed to the function For this example we will only handle the method ``__call__``. >>> from numbers import Number >>> class DiagonalArray: ... def __init__(self, N, value): ... self._N = N ... self._i = value ... def __repr__(self): ... return f"{self.__class__.__name__}(N={self._N}, value={self._i})" ... def __array__(self): ... return self._i * np.eye(self._N) ... def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): ... if method == '__call__': ... N = None ... scalars = [] ... for input in inputs: ... if isinstance(input, Number): ... scalars.append(input) ... elif isinstance(input, self.__class__): ... scalars.append(input._i) ... if N is not None: ... if N != self._N: ... raise TypeError("inconsistent sizes") ... else: ... N = self._N ... else: ... return NotImplemented ... return self.__class__(N, ufunc(*scalars, **kwargs)) ... else: ... return NotImplemented ... Now our custom array type passes through numpy functions. 
>>> arr = DiagonalArray(5, 1)
>>> np.multiply(arr, 3)
DiagonalArray(N=5, value=3)
>>> np.add(arr, 3)
DiagonalArray(N=5, value=4)
>>> np.sin(arr)
DiagonalArray(N=5, value=0.8414709848078965)

At this point ``arr + 3`` does not work.

>>> arr + 3
TypeError: unsupported operand type(s) for +: 'DiagonalArray' and 'int'

To support it, we need to define the Python interfaces ``__add__``,
``__lt__``, and so on to dispatch to the corresponding ufunc. We can achieve
this conveniently by inheriting from the mixin
:class:`~numpy.lib.mixins.NDArrayOperatorsMixin`.

>>> import numpy.lib.mixins
>>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin):
...     def __init__(self, N, value):
...         self._N = N
...         self._i = value
...     def __repr__(self):
...         return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
...     def __array__(self):
...         return self._i * np.eye(self._N)
...     def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
...         if method == '__call__':
...             N = None
...             scalars = []
...             for input in inputs:
...                 if isinstance(input, Number):
...                     scalars.append(input)
...                 elif isinstance(input, self.__class__):
...                     scalars.append(input._i)
...                     if N is not None:
...                         if N != self._N:
...                             raise TypeError("inconsistent sizes")
...                     else:
...                         N = self._N
...                 else:
...                     return NotImplemented
...             return self.__class__(N, ufunc(*scalars, **kwargs))
...         else:
...             return NotImplemented
...

>>> arr = DiagonalArray(5, 1)
>>> arr + 3
DiagonalArray(N=5, value=4)
>>> arr > 0
DiagonalArray(N=5, value=True)

Now let's tackle ``__array_function__``. We'll create a dict that maps numpy
functions to our custom variants.

>>> HANDLED_FUNCTIONS = {}
>>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin):
...     def __init__(self, N, value):
...         self._N = N
...         self._i = value
...     def __repr__(self):
...         return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
...     def __array__(self):
...         return self._i * np.eye(self._N)
...     def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
...         if method == '__call__':
...             N = None
...             scalars = []
...             for input in inputs:
...                 # In this case we accept only scalar numbers or
...                 # DiagonalArrays.
...                 if isinstance(input, Number):
...                     scalars.append(input)
...                 elif isinstance(input, self.__class__):
...                     scalars.append(input._i)
...                     if N is not None:
...                         if N != self._N:
...                             raise TypeError("inconsistent sizes")
...                     else:
...                         N = self._N
...                 else:
...                     return NotImplemented
...             return self.__class__(N, ufunc(*scalars, **kwargs))
...         else:
...             return NotImplemented
...     def __array_function__(self, func, types, args, kwargs):
...         if func not in HANDLED_FUNCTIONS:
...             return NotImplemented
...         # Note: this allows subclasses that don't override
...         # __array_function__ to handle DiagonalArray objects.
...         if not all(issubclass(t, self.__class__) for t in types):
...             return NotImplemented
...         return HANDLED_FUNCTIONS[func](*args, **kwargs)
...

A convenient pattern is to define a decorator ``implements`` that can be used
to add functions to ``HANDLED_FUNCTIONS``.

>>> def implements(np_function):
...     "Register an __array_function__ implementation for DiagonalArray objects."
...     def decorator(func):
...         HANDLED_FUNCTIONS[np_function] = func
...         return func
...     return decorator
...

Now we write implementations of numpy functions for ``DiagonalArray``. For
completeness, to support the usage ``arr.sum()`` add a method ``sum`` that
calls ``numpy.sum(self)``, and the same for ``mean``.

>>> @implements(np.sum)
... def sum(arr):
...     "Implementation of np.sum for DiagonalArray objects"
...     return arr._i * arr._N
...
>>> @implements(np.mean)
... def mean(arr):
...     "Implementation of np.mean for DiagonalArray objects"
...     return arr._i / arr._N
...
>>> arr = DiagonalArray(5, 1)
>>> np.sum(arr)
5
>>> np.mean(arr)
0.2

If the user tries to use any numpy functions not included in
``HANDLED_FUNCTIONS``, a ``TypeError`` will be raised by numpy, indicating
that this operation is not supported. For example, concatenating two
``DiagonalArrays`` does not produce another diagonal array, so it is not
supported.

>>> np.concatenate([arr, arr])
TypeError: no implementation found for 'numpy.concatenate' on types that
implement __array_function__: [<class '__main__.DiagonalArray'>]

Additionally, our implementations of ``sum`` and ``mean`` do not accept the
optional arguments that numpy's implementation does.

>>> np.sum(arr, axis=0)
TypeError: sum() got an unexpected keyword argument 'axis'

The user always has the option of converting to a normal ``numpy.ndarray``
with :func:`numpy.asarray` and using standard numpy from there.

>>> np.concatenate([np.asarray(arr), np.asarray(arr)])
array([[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.],
       [1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.]])

Refer to the `dask source code <https://github.com/dask/dask>`_ and
`cupy source code <https://github.com/cupy/cupy>`_ for more fully-worked
examples of custom array containers.

See also
`NEP 18 <http://www.numpy.org/neps/nep-0018-array-function-protocol.html>`_.

"""
""" This page is in the table of contents. Skeiniso is an analyze viewer to display a gcode file in an isometric view. The skeiniso manual page is at: http://fabmetheus.crsndoo.com/wiki/index.php/Skeinforge_Skeiniso ==Operation== The default 'Activate Skeiniso' checkbox is off. When it is on, the functions described below will work when called from the skeinforge toolchain, when it is off, the functions will not be called from the toolchain. The functions will still be called, whether or not the 'Activate Skeiniso' checkbox is on, when skeiniso is run directly. Skeiniso requires skeinforge comments in the gcode file to distinguish the loops and perimeters. If the comments are deleted, all threads will be displayed as generic threads. To get the penultimate file of the tool chain, just before export deletes the comments, select 'Save Penultimate Gcode' in export, and open the gcode file with the suffix '_penultimate.gcode' with skeiniso. The viewer is simple, the viewpoint can only be moved in a sphere around the center of the model by changing the viewpoint latitude and longitude. Different regions of the model can be hidden by setting the width of the thread to zero. The alternating bands act as contour bands and their brightness and width can be changed. ==Settings== ===Animation=== ====Animation Line Quickening==== Default is one. The quickness of the tool animation over the quickness of the actual tool. ====Animation Slide Show Rate==== Default is two layers per second. The rate, in layers per second, at which the layer changes when the soar or dive button is pressed.. ===Axis Rulings=== Default is on. When selected, rulings will be drawn on the axis lines. ===Banding=== ====Band Height==== Default is five layers. Defines the height of the band in layers, a pair of bands is twice that height. ====Bottom Band Brightness==== Default is 0.7. Defines the ratio of the brightness of the bottom band over the brightness of the top band. The higher it is the brighter the bottom band will be. ====Bottom Layer Brightness==== Default is one. Defines the ratio of the brightness of the bottom layer over the brightness of the top layer. With a low bottom layer brightness ratio the bottom of the model will be darker than the top of the model, as if it was being illuminated by a light just above the top. ====Bright Band Start==== Default choice is 'From the Top'. The button group that determines where the bright band starts from. =====From the Bottom===== When selected, the bright bands will start from the bottom. =====From the Top===== When selected, the bright bands will start from the top. ===Draw Arrows=== Default is on. When selected, arrows will be drawn at the end of each line segment. ===Export Menu=== When the submenu in the export menu item in the file menu is clicked, an export canvas dialog will be displayed, which can export the canvas to a file. ===Go Around Extruder Off Travel=== Default is off. When selected, the display will include the travel when the extruder is off, which means it will include the nozzle wipe path if any. ===Layers=== ====Layer==== Default is zero. On the display window, the Up button increases the 'Layer' by one, and the Down button decreases the layer by one. 
When the layer displayed in the layer spin box is changed and then <Return>
is hit, the layer shown will be set to the value in the spin box, to a
minimum of zero and to a maximum of the highest index layer.  The Soar button
increases the layer at the 'Animation Slide Show Rate', and the Dive (double
left arrow button beside the layer field) button decreases the layer at the
slide show rate.

====Layer Extra Span====
Default is a huge number.

The viewer will draw the layers in the range including the 'Layer' index and
the 'Layer' index plus the 'Layer Extra Span'.  If the 'Layer Extra Span' is
negative, the layers viewed will start at the 'Layer' index, plus the 'Layer
Extra Span', and go up to and include the 'Layer' index.  If the 'Layer Extra
Span' is zero, only the 'Layer' index layer will be displayed.  If the 'Layer
Extra Span' is positive, the layers viewed will start at the 'Layer' index,
and go up to and include the 'Layer' index plus the 'Layer Extra Span'.

===Line===
Default is zero.

The index of the selected line on the layer that is highlighted when the
'Display Line' mouse tool is chosen.  The line spin box up button increases
the 'Line' by one.  If the line index of the layer goes over the index of the
last line, the layer index will be increased by one and the new line index
will be zero.  The down button decreases the line index by one.  If the line
index goes below the index of the first line, the layer index will be
decreased by one and the new line index will be at the last line.  When the
line displayed in the line field is changed and then <Return> is hit, the
line shown will be set to the value in the line field, to a minimum of zero
and to a maximum of the highest index line.  The Soar button increases the
line at the speed at which the extruder would move, times the 'Animation
Line Quickening' ratio, and the Dive (double left arrow button beside the
line field) button decreases the line at the animation line quickening
ratio.

===Mouse Mode===
Default is 'Display Line'.

The mouse tool can be changed from the 'Mouse Mode' menu button or picture
button.  The mouse tools listen to the arrow keys when the canvas has the
focus.  Clicking in the canvas gives the canvas the focus, and when the
canvas has the focus a thick black border is drawn around the canvas.

====Display Line====
The 'Display Line' tool will highlight the selected line, and display the
file line count, counting from one, and the gcode line itself.  When the
'Display Line' tool is active, clicking the canvas will select the closest
line to the mouse click.

====Viewpoint Move====
The 'Viewpoint Move' tool will move the viewpoint in the xy plane when the
mouse is clicked and dragged on the canvas.

====Viewpoint Rotate====
The 'Viewpoint Rotate' tool will rotate the viewpoint around the origin, when
the mouse is clicked and dragged on the canvas, or the arrow keys have been
used and <Return> is pressed.  The viewpoint can also be moved by dragging
the mouse.  The viewpoint latitude will be increased when the mouse is
dragged from the center towards the edge.  The viewpoint longitude will be
changed by the amount around the center the mouse is dragged.  This is not
very intuitive, but I don't know how to do this the intuitive way and I have
other stuff to develop.  If the shift key is pressed: if the latitude is
changed more than the longitude, only the latitude will be changed; if the
longitude is changed more, only the longitude will be changed.

===Number of Fill Layers===
====Number of Fill Bottom Layers====
Default is one.
The "Number of Fill Bottom Layers" is the number of layers at the bottom which will be colored olive. ===Number of Fill Top Layers=== Default is one. The "Number of Fill Top Layers" is the number of layers at the top which will be colored blue. ===Scale=== Default is ten. The scale setting is the scale of the image in pixels per millimeter, the higher the number, the greater the size of the display. The zoom in mouse tool will zoom in the display at the point where the mouse was clicked, increasing the scale by a factor of two. The zoom out tool will zoom out the display at the point where the mouse was clicked, decreasing the scale by a factor of two. ===Screen Inset=== ====Screen Horizontal Inset==== Default is one hundred. The "Screen Horizontal Inset" determines how much the canvas will be inset in the horizontal direction from the edge of screen, the higher the number the more it will be inset and the smaller it will be. ====Screen Vertical Inset==== Default is two hundred and twenty. The "Screen Vertical Inset" determines how much the canvas will be inset in the vertical direction from the edge of screen, the higher the number the more it will be inset and the smaller it will be.. ===Viewpoint=== ====Viewpoint Latitude==== Default is fifteen degrees. The "Viewpoint Latitude" is the latitude of the viewpoint, a latitude of zero is the top pole giving a top view, a latitude of ninety gives a side view and a latitude of 180 gives a bottom view. ====Viewpoint Longitude==== Default is 210 degrees. The "Viewpoint Longitude" is the longitude of the viewpoint. ===Width=== The width of each type of thread and of each axis can be changed. If the width is set to zero, the thread will not be visible. ====Width of Axis Negative Side==== Default is two. Defines the width of the negative side of the axis. ====Width of Axis Positive Side==== Default is six. Defines the width of the positive side of the axis. ====Width of Infill Thread==== Default is one. The "Width of Infill Thread" sets the width of the green extrusion threads, those threads which are not loops and not part of the raft. ====Width of Fill Bottom Thread==== Default is two. The "Width of Fill Bottom Thread" sets the width of the olive extrusion threads at the bottom of the model. ====Width of Fill Top Thread==== Default is two. The "Width of Fill Top Thread" sets the width of the blue extrusion threads at the top of the model. ====Width of Loop Thread==== Default is three. The "Width of Loop Thread" sets the width of the yellow loop threads, which are not perimeters. ====Width of Perimeter Inside Thread==== Default is eight. The "Width of Perimeter Inside Thread" sets the width of the orange inside perimeter threads. ====Width of Perimeter Outside Thread==== Default is eight. The "Width of Perimeter Outside Thread" sets the width of the red outside perimeter threads. ====Width of Raft Thread==== Default is one. The "Width of Raft Thread" sets the width of the brown raft threads. ====Width of Selection Thread==== Default is six. The "Width of Selection Thread" sets the width of the selected line. ====Width of Travel Thread==== Default is zero. The "Width of Travel Thread" sets the width of the grey extruder off travel threads. 
==Icons==
The dive, soar and zoom icons are from Mark James' Silk icon set 1.3 at:
http://www.famfamfam.com/lab/icons/silk/

==Gcodes==
An explanation of the gcodes is at:
http://reprap.org/bin/view/Main/Arduino_GCode_Interpreter

and at:
http://reprap.org/bin/view/Main/MCodeReference

A gcode example is at:
http://forums.reprap.org/file.php?12,file=565

==Examples==
Below are examples of skeiniso being used.  These examples are run in a
terminal in the folder which contains Screw Holder_penultimate.gcode and
skeiniso.py.

> python skeiniso.py
This brings up the skeiniso dialog.

> python skeiniso.py Screw Holder_penultimate.gcode
This brings up the skeiniso viewer to view the gcode file.

"""
""" This page is in the table of contents. Raft is a plugin to create a raft, elevate the nozzle and set the temperature. A raft is a flat base structure on top of which your object is being build and has a few different purposes. It fills irregularities like scratches and pits in your printbed and gives you a nice base parallel to the printheads movement. It also glues your object to the bed so to prevent warping in bigger object. The rafts base layer performs these tricks while the sparser interface layer(s) help you removing the object from the raft after printing. It is based on the Nophead's reusable raft, which has a base layer running one way, and a couple of perpendicular layers above. Each set of layers can be set to a different temperature. There is the option of having the extruder orbit the raft for a while, so the heater barrel has time to reach a different temperature, without ooze accumulating around the nozzle. The raft manual page is at: http://fabmetheus.crsndoo.com/wiki/index.php/Skeinforge_Raft The important values for the raft settings are the temperatures of the raft, the first layer and the next layers. These will be different for each material. The default settings for ABS, HDPE, PCL & PLA are extrapolated from Nophead's experiments. You don't necessarily need a raft and especially small object will print fine on a flat bed without one, sometimes its even better when you need a water tight base to print directly on the bed. If you want to only set the temperature or only create support material or only elevate the nozzle without creating a raft, set the Base Layers and Interface Layers to zero. <gallery perRow="1"> Image:Raft.jpg|Raft </gallery> Example of a raft on the left with the interface layers partially removed exposing the base layer. Notice that the first line of the base is rarely printed well because of the startup time of the extruder. On the right you see an object with its raft still attached. The Raft panel has some extra settings, it probably made sense to have them there but they have not that much to do with the actual Raft. First are the Support material settings. Since close to all RepRap style printers have no second extruder for support material Skeinforge offers the option to print support structures with the same material set at a different speed and temperature. The idea is that the support sticks less to the actual object when it is extruded around the minimum possible working temperature. This results in a temperature change EVERY layer so build time will increase seriously. Allan NAME aka The Masked Retriever's has written two quicktips for raft which follow below. "Skeinforge Quicktip: The Raft, Part 1" at: http://blog.thingiverse.com/2009/07/14/skeinforge-quicktip-the-raft-part-1/ "Skeinforge Quicktip: The Raft, Part II" at: http://blog.thingiverse.com/2009/08/04/skeinforge-quicktip-the-raft-part-ii/ Nophead has written about rafts on his blog: http://hydraraptor.blogspot.com/2009/07/thoughts-on-rafts.html More pictures of rafting in action are available from the Metalab blog at: http://reprap.soup.io/?search=rafting ==Operation== Default: On When it is on, the functions described below will work, when it is off, nothing will be done, so no temperatures will be set, nozzle will not be lifted.. ==Settings== ===Add Raft, Elevate Nozzle, Orbit=== Default: On When selected, the script will also create a raft, elevate the nozzle, orbit and set the altitude of the bottom of the raft. It also turns on support generation. 
===Base===
The base layer is the part of the raft that touches the bed.

====Base Feed Rate Multiplier====
Default is one.

Defines the base feed rate multiplier.  The greater the 'Base Feed Rate
Multiplier', the thinner the base; the lower the 'Base Feed Rate Multiplier',
the thicker the base.

====Base Flow Rate Multiplier====
Default is one.

Defines the base flow rate multiplier.  The greater the 'Base Flow Rate
Multiplier', the thicker the base; the lower the 'Base Flow Rate Multiplier',
the thinner the base.

====Base Infill Density====
Default is 0.5.

Defines the infill density ratio of the base of the raft.

====Base Layer Height over Layer Thickness====
Default is two.

Defines the ratio of the height & width of the base layer compared to the
height and width of the object infill.  The feed rate will be slower for raft
layers which have thicker extrusions than the object infill.

====Base Layers====
Default is one.

Defines the number of base layers.

====Base Nozzle Lift over Base Layer Thickness====
Default is 0.4.

Defines the amount the nozzle is above the center of the base extrusion
divided by the base layer thickness.

===Initial Circling===
Default is off.

When selected, the extruder will initially circle around until it reaches
operating temperature.

===Infill Overhang over Extrusion Width===
Default is 0.05.

Defines the ratio of the infill overhang over the extrusion width of the
raft.

===Interface===
====Interface Feed Rate Multiplier====
Default is one.

Defines the interface feed rate multiplier.  The greater the 'Interface Feed
Rate Multiplier', the thinner the interface; the lower the 'Interface Feed
Rate Multiplier', the thicker the interface.

====Interface Flow Rate Multiplier====
Default is one.

Defines the interface flow rate multiplier.  The greater the 'Interface Flow
Rate Multiplier', the thicker the interface; the lower the 'Interface Flow
Rate Multiplier', the thinner the interface.

====Interface Infill Density====
Default is 0.5.

Defines the infill density ratio of the interface of the raft.

====Interface Layer Thickness over Extrusion Height====
Default is one.

Defines the ratio of the height & width of the interface layer compared to
the height and width of the object infill.  The feed rate will be slower for
raft layers which have thicker extrusions than the object infill.

====Interface Layers====
Default is two.

Defines the number of interface layers to print.

====Interface Nozzle Lift over Interface Layer Thickness====
Default is 0.45.

Defines the amount the nozzle is above the center of the interface extrusion
divided by the interface layer thickness.

===Name of Alteration Files===
If support material is generated, raft looks for alteration files in the
alterations folder in the .skeinforge folder in the home directory.  Raft
does not care if the text file names are capitalized, but some file systems
do not handle file name cases properly, so to be on the safe side you should
give them lower case names.  If it doesn't find the file it then looks in the
alterations folder in the skeinforge_plugins folder.

====Name of Support End File====
Default is support_end.gcode.

If support material is generated and if there is a file with the name of the
"Name of Support End File" setting, it will be added to the end of the
support gcode.

====Name of Support Start File====
If support material is generated and if there is a file with the name of the
"Name of Support Start File" setting, it will be added to the start of the
support gcode.
===Operating Nozzle Lift over Layer Thickness===
Default is 0.5.

Defines the amount the nozzle is above the center of the operating extrusion
divided by the layer height.

===Raft Size===
The raft fills a rectangle whose base size is the rectangle around the bottom
layer of the object expanded on each side by the 'Raft Margin' plus the 'Raft
Additional Margin over Length (%)' percentage times the length of the side
(a worked example follows at the end of this page).

====Raft Additional Margin over Length====
Default is 1 percent.

====Raft Margin====
Default is three millimeters.

===Support===
Good articles on support material are at:
http://davedurant.wordpress.com/2010/07/31/skeinforge-support-part-1/
http://davedurant.wordpress.com/2010/07/31/skeinforge-support-part-2/

====Support Cross Hatch====
Default is off.

When selected, the support material will be cross hatched.  Cross hatching
the support makes it stronger and harder to remove, which is why the default
is off.

====Support Flow Rate over Operating Flow Rate====
Default: 0.9.

Defines the ratio of the flow rate when the support is extruded over the
operating flow rate.  With a number less than one, the support flow rate will
be smaller, so the support will be thinner and easier to remove.

====Support Gap over Perimeter Extrusion Width====
Default: 0.5.

Defines the gap between the support material and the object over the edge
extrusion width.

====Support Material Choice====
Default is 'None' because the raft takes time to generate.

=====Empty Layers Only=====
When selected, support material will be only on the empty layers.  This is
useful when making identical objects in a stack.

=====Everywhere=====
When selected, support material will be added wherever there are overhangs,
even inside the object.  Because support material inside objects is hard or
impossible to remove, this option should only be chosen if the object has a
cavity that needs support and there is some way to extract the support
material.

=====Exterior Only=====
When selected, support material will be added only to the exterior of the
object.  This is the best option for most objects which require support
material.

=====None=====
When selected, raft will not add support material.

====Support Minimum Angle====
Default is sixty degrees.

Defines the minimum angle that a surface overhangs before support material is
added.  If the angle is lower than this value the support will be generated.
This angle is defined from the vertical, so zero is a vertical wall, ten is a
wall with a bit of overhang, thirty is the typical safe angle for filament
extrusion, sixty is a really high angle for extrusion and ninety is an
unsupported horizontal ceiling.

==Examples==
The following examples raft the file Screw Holder Bottom.stl.  The examples
are run in a terminal in the folder which contains Screw Holder Bottom.stl
and raft.py.

> python raft.py
This brings up the raft dialog.

> python raft.py Screw Holder Bottom.stl
The raft tool is parsing the file:
Screw Holder Bottom.stl
..
The raft tool has created the file:
Screw Holder Bottom_raft.gcode

"""
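To make the 'Raft Size' rule above concrete, here is a tiny sketch; the helper name is hypothetical (it is not part of raft.py), and the defaults mirror the documented settings of three millimeters and one percent:

    def raft_expansion(side_length, raft_margin=3.0, additional_margin_percent=1.0):
        # Each side of the bottom-layer rectangle grows by the fixed margin
        # plus the given percentage of that side's length.
        return raft_margin + (additional_margin_percent / 100.0) * side_length

    # A 40 mm side is expanded by 3 + 0.01 * 40 = 3.4 mm on each side.
    print(raft_expansion(40.0))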
# Copyright 2011,2012 NAME
# Copyright 2008 (C) Nicira, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This file is derived from the packet library in NOX, which was
# developed by Nicira, Inc.

#======================================================================
#
#                       DNS Message Format
#
#   0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                      ID                       |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |QR|  Opcode   |AA|TC|RD|RA|Z |AD|CD|   RCODE   |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                Total Questions                |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                 Total Answers                 |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |              Total Authority RRs              |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |             Total Additional RRs              |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                 Questions ...                 |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                Answer RRs ...                 |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |               Authority RRs ...               |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |              Additional RRs ...               |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
# Question format:
#
#                                 1  1  1  1  1  1
#   0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                                               |
# /                     QNAME                     /
# /                                               /
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                     QTYPE                     |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                     QCLASS                    |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
#
#
# All RRs have the following format:
#
#                                 1  1  1  1  1  1
#   0  1  2  3  4  5  6  7  8  9  0  1  2  3  4  5
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                                               |
# /                                               /
# /                      NAME                     /
# |                                               |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                      TYPE                     |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                     CLASS                     |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                      TTL                      |
# |                                               |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                   RDLENGTH                    |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--|
# /                     RDATA                     /
# /                                               /
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
#
#======================================================================
# TODO:
#   SOA data
#   General cleanup/rewrite (code is/has gotten pretty bad)
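# As an illustration of the header layout above, here is a minimal sketch
# of unpacking the fixed 12-byte DNS header with the standard library.
# This helper is hypothetical and not part of this packet library:

import struct

def unpack_dns_header(data):
    """Unpack the fixed 12-byte DNS header (network byte order)."""
    ident, flags, qdcount, ancount, nscount, arcount = \
        struct.unpack('!HHHHHH', data[:12])
    return {
        'id': ident,
        'qr': (flags >> 15) & 0x1,      # query (0) / response (1)
        'opcode': (flags >> 11) & 0xF,
        'aa': (flags >> 10) & 0x1,      # authoritative answer
        'tc': (flags >> 9) & 0x1,       # truncated
        'rd': (flags >> 8) & 0x1,       # recursion desired
        'ra': (flags >> 7) & 0x1,       # recursion available
        'ad': (flags >> 5) & 0x1,       # authentic data
        'cd': (flags >> 4) & 0x1,       # checking disabled
        'rcode': flags & 0xF,
        'questions': qdcount,
        'answers': ancount,
        'authority_rrs': nscount,
        'additional_rrs': arcount,
    }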
""" A script to calculate the projection of 3D world coordinates to 2D display coordinates (pixel coordinates) for a given scene. The 2D pixel locations of objects in the image plane are related to their 3D world coordinates by a series of linear transformations. The specific transformations fall under the group known as projective transformations. This set includes pure projectivities, affine transformations, perspective transformations, and euclidean transformations. In the case of mlab (and most other computer visualization software), we deal with only the perspective and euclidean cases. An overview of Projective space can be found here: http://en.wikipedia.org/wiki/Projective_space and a thorough treatment of projective geometry can be had in the book "Multiple View Geometry in Computer Vision" by NAME essential thing to know for this example is that points in 3-space are related to points in 2-space through a series of multiplications of 4x4 matrices which are the perspective and euclidean transformations. The 4x4 matrices predicate the use of length 4 vectors to represent points. This representation is known as homogeneous coordinates, and while they appear foriegn at first, they truly simplify all the mathematics involved. In short, homogeneous coordinates are your friend, and you should read about them here: http://en.wikipedia.org/wiki/Homogeneous_coordinates In the normal pinhole camera model (the ideal real world model), 3D world points are related to 2D image points by the matrix termed the 'essential' matrix which is a combination of a perspective transformation and a euclidean transformation. The perspective transformation is defined by the camera intrinsics (focal length, imaging sensor offset, etc...) and the euclidean transformation is defined by the cameras position and orientation. In computer graphics, things are not so simple. This is because computer graphics have the benefit of being able to do things which are not possible in the real world: adding clipping planes, offset projection centers, arbitrary distortions, etc... Thus, a slightly different model is used. What follows is the camera/view model for OpenGL and thus, VTK. I can not guarantee that other packages follow this model. There are 4 different transformations that are applied 3D world coordinates to map them to 2D pixel coordinates. They are: the model transform, the view transform, the perspective transform, and the viewport or display transform. In OpenGL the first two transformations are concatenated to yield the modelview transform (called simply the view transform in VTK). The modelview transformation applies arbitrary scaling and distortions to the model (if they are specified) and transforms them so that the orientation is the equivalent of looking down the negative Z axis. Imagine its as if you relocate your camera to look down the negative Z axis, and then move everything in the world so that you see it now as you did before you moved the camera. The resulting coordinates are termed "eye" coordinates in OpenGL (I don't know that they have a name in VTK). The perspective transformation applies the camera perspective to the eye coordinates. This transform is what makes objects in the foreground look bigger than equivalent objects in the background. In the pinhole camera model, this transform is determined uniquely by the focal length of the camera and its position in 3-space. In Vtk/OpenGL it is determined by the frustum. A frustum is simply a pyramid with the top lopped off. 
The top of the pyramid (a point) is the camera location, the base of the
pyramid is a plane (the far clipping plane) defined as normal to the
principal camera ray at a distance termed the far clipping distance, and
the top of the frustum (where it's lopped off) is the near clipping
plane, with a definition similar to that of the far clipping plane. The
sides of the frustum are determined by the aspect ratio of the camera
(width/height) and its field-of-view. Any points not lying within the
frustum are not mapped to the screen (as they would lie outside the
viewable area).

The perspective transformation has the effect of scaling everything
within the frustum to fit within a cube defined in the range
(-1,1)(-1,1)(-1,1) as represented by homogeneous coordinates. The last
phrase there is important: the first 3 coordinates will not, in general,
be within the unity range until we divide through by the last coordinate
(see the Wikipedia page on homogeneous coordinates if this is confusing).
The resulting coordinates are termed (appropriately enough) normalized
view coordinates.

The last transformation (the viewport transformation) takes us from
normalized view coordinates to display coordinates. At this point, you
may be asking yourself 'why not just go directly to display coordinates,
why need normalized view coordinates at all?' The answer is that we may
want to embed more than one view in a particular window, so there will be
different transformations to take each view to an appropriate position
and size in the window. The normalized view coordinates provide a nice
common ground, so to speak. At any rate, the viewport transformation
simply scales and translates the X and Y coordinates of the normalized
view coordinates to the appropriate pixel coordinates. We don't use the Z
value in our example because we don't care about it. It is used for
various other things, however.

That's all there is to it, pretty simple right? Right. Here is an
overview:

Given a set of 3D world coordinates:

- Apply the modelview transformation (view transform in VTK) to get eye
  coordinates
- Apply the perspective transformation to get normalized view
  coordinates
- Apply the viewport transformation to get display coordinates

VTK provides a nice method to retrieve a 4x4 matrix that combines the
first two operations. As far as I can tell, VTK does not export a method
to retrieve the 4x4 matrix representing the viewport transformation, so
we are on our own to create one (no worries though, it's not hard, as you
will see).

Now that the preliminaries are out of the way, let's get started.
"""
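# Before diving in, here is a minimal numpy-only sketch of the pipeline
# just described (this is not the VTK/mlab API; the modelview and
# perspective matrices are placeholders for whatever a real renderer
# would supply):

import numpy as np

def project_points(world_points, modelview, perspective, width, height):
    """Map (N, 3) world coordinates to (N, 2) pixel coordinates."""
    n = len(world_points)
    # homogeneous coordinates: append a 1 to every point
    homog = np.hstack([world_points, np.ones((n, 1))])
    # modelview then perspective, as 4x4 matrices acting on column vectors
    clip = homog.dot(modelview.T).dot(perspective.T)
    # divide through by the last coordinate to get normalized view coords
    ndc = clip[:, :3] / clip[:, 3:4]
    # viewport transform: scale/translate the (-1, 1) range to pixels
    px = (ndc[:, 0] + 1.0) * 0.5 * width
    py = (ndc[:, 1] + 1.0) * 0.5 * height
    return np.column_stack([px, py])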
#!/usr/bin/env python
# ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is font utility code.
#
# The Initial Developer of the Original Code is Mozilla Corporation.
# Portions created by the Initial Developer are Copyright (C) 2009
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   NAME <EMAIL>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****
# eotlitetool.py - create EOT version of OpenType font for use with IE
#
# Usage: eotlitetool.py [-o output-filename] font1 [font2 ...]
#
# OpenType file structure
# http://www.microsoft.com/typography/otspec/otff.htm
#
# Types:
#
# BYTE 8-bit unsigned integer.
# CHAR 8-bit signed integer.
# USHORT 16-bit unsigned integer.
# SHORT 16-bit signed integer.
# ULONG 32-bit unsigned integer.
# Fixed 32-bit signed fixed-point number (16.16)
# LONGDATETIME Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer.
#
# SFNT Header
#
# Fixed sfnt version // 0x00010000 for version 1.0.
# USHORT numTables // Number of tables.
# USHORT searchRange // (Maximum power of 2 <= numTables) x 16.
# USHORT entrySelector // Log2(maximum power of 2 <= numTables).
# USHORT rangeShift // NumTables x 16-searchRange.
#
# Table Directory
#
# ULONG tag // 4-byte identifier.
# ULONG checkSum // CheckSum for this table.
# ULONG offset // Offset from beginning of TrueType font file.
# ULONG length // Length of this table.
# # OS/2 Table (Version 4) # # USHORT version // 0x0004 # SHORT xAvgCharWidth # USHORT usWeightClass # USHORT usWidthClass # USHORT fsType # SHORT ySubscriptXSize # SHORT ySubscriptYSize # SHORT ySubscriptXOffset # SHORT ySubscriptYOffset # SHORT ySuperscriptXSize # SHORT ySuperscriptYSize # SHORT ySuperscriptXOffset # SHORT ySuperscriptYOffset # SHORT yStrikeoutSize # SHORT yStrikeoutPosition # SHORT sFamilyClass # BYTE panose[10] # ULONG ulUnicodeRange1 // Bits 0-31 # ULONG ulUnicodeRange2 // Bits 32-63 # ULONG ulUnicodeRange3 // Bits 64-95 # ULONG ulUnicodeRange4 // Bits 96-127 # CHAR achVendID[4] # USHORT fsSelection # USHORT usFirstCharIndex # USHORT usLastCharIndex # SHORT sTypoAscender # SHORT sTypoDescender # SHORT sTypoLineGap # USHORT usWinAscent # USHORT usWinDescent # ULONG ulCodePageRange1 // Bits 0-31 # ULONG ulCodePageRange2 // Bits 32-63 # SHORT sxHeight # SHORT sCapHeight # USHORT usDefaultChar # USHORT usBreakChar # USHORT usMaxContext # # # The Naming Table is organized as follows: # # [name table header] # [name records] # [string data] # # Name Table Header # # USHORT format // Format selector (=0). # USHORT count // Number of name records. # USHORT stringOffset // Offset to start of string storage (from start of table). # # Name Record # # USHORT platformID // Platform ID. # USHORT encodingID // Platform-specific encoding ID. # USHORT languageID // Language ID. # USHORT nameID // Name ID. # USHORT length // String length (in bytes). # USHORT offset // String offset from start of storage area (in bytes). # # head Table # # Fixed tableVersion // Table version number 0x00010000 for version 1.0. # Fixed fontRevision // Set by font manufacturer. # ULONG checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum. # ULONG magicNumber // Set to 0x5F0F3CF5. # USHORT flags # USHORT unitsPerEm // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines. # LONGDATETIME created // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # LONGDATETIME modified // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # SHORT xMin // For all glyph bounding boxes. # SHORT yMin # SHORT xMax # SHORT yMax # USHORT macStyle # USHORT lowestRecPPEM // Smallest readable size in pixels. # SHORT fontDirectionHint # SHORT indexToLocFormat // 0 for short offsets, 1 for long. # SHORT glyphDataFormat // 0 for current format. # # # # Embedded OpenType (EOT) file format # http://www.w3.org/Submission/EOT/ # # EOT version 0x00020001 # # An EOT font consists of a header with the original OpenType font # appended at the end. Most of the data in the EOT header is simply a # copy of data from specific tables within the font data. The exceptions # are the 'Flags' field and the root string name field. The root string # is a set of names indicating domains for which the font data can be # used. A null root string implies the font data can be used anywhere. # The EOT header is in little-endian byte order but the font data remains # in big-endian order as specified by the OpenType spec. 
#
# Overall structure:
#
# [EOT header]
# [EOT name records]
# [font data]
#
# EOT header
#
# ULONG eotSize // Total structure length in bytes (including string and font data)
# ULONG fontDataSize // Length of the OpenType font (FontData) in bytes
# ULONG version // Version number of this format - 0x00020001
# ULONG flags // Processing Flags (0 == no special processing)
# BYTE fontPANOSE[10] // OS/2 Table panose
# BYTE charset // DEFAULT_CHARSET (0x01)
# BYTE italic // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise
# ULONG weight // OS/2 Table usWeightClass
# USHORT fsType // OS/2 Table fsType (specifies embedding permission flags)
# USHORT magicNumber // Magic number for EOT file - 0x504C.
# ULONG unicodeRange1 // OS/2 Table ulUnicodeRange1
# ULONG unicodeRange2 // OS/2 Table ulUnicodeRange2
# ULONG unicodeRange3 // OS/2 Table ulUnicodeRange3
# ULONG unicodeRange4 // OS/2 Table ulUnicodeRange4
# ULONG codePageRange1 // OS/2 Table ulCodePageRange1
# ULONG codePageRange2 // OS/2 Table ulCodePageRange2
# ULONG checkSumAdjustment // head Table CheckSumAdjustment
# ULONG reserved[4] // Reserved - must be 0
# USHORT padding1 // Padding - must be 0
#
# EOT name records
#
# USHORT FamilyNameSize // Font family name size in bytes
# BYTE FamilyName[FamilyNameSize] // Font family name (name ID = 1), little-endian UTF-16
# USHORT Padding2 // Padding - must be 0
#
# USHORT StyleNameSize // Style name size in bytes
# BYTE StyleName[StyleNameSize] // Style name (name ID = 2), little-endian UTF-16
# USHORT Padding3 // Padding - must be 0
#
# USHORT VersionNameSize // Version name size in bytes
# BYTE VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16
# USHORT Padding4 // Padding - must be 0
#
# USHORT FullNameSize // Full name size in bytes
# BYTE FullName[FullNameSize] // Full name (name ID = 4), little-endian UTF-16
# USHORT Padding5 // Padding - must be 0
#
# USHORT RootStringSize // Root string size in bytes
# BYTE RootString[RootStringSize] // Root string, little-endian UTF-16
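# A minimal sketch of walking the OpenType structures described above with
# the standard library (illustrative only - these helpers are not the code
# this script uses). OpenType data is big-endian, hence the '>' prefix:

import struct

def read_table_directory(font):
    """Return {tag: (offset, length)} from an OpenType font's SFNT header."""
    sfnt_version, num_tables, search_range, entry_selector, range_shift = \
        struct.unpack('>IHHHH', font[:12])
    tables = {}
    pos = 12
    for _ in range(num_tables):
        tag, check_sum, offset, length = struct.unpack('>4sIII', font[pos:pos + 16])
        tables[tag] = (offset, length)
        pos += 16
    return tables

def read_name_strings(name_table):
    """Return {nameID: raw string bytes} from a format 0 'name' table."""
    fmt, count, string_offset = struct.unpack('>HHH', name_table[:6])
    names = {}
    for i in range(count):
        platform_id, encoding_id, language_id, name_id, length, offset = \
            struct.unpack('>HHHHHH', name_table[6 + 12 * i:6 + 12 * (i + 1)])
        names[name_id] = name_table[string_offset + offset:string_offset + offset + length]
    return names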
""" ============================= Subclassing ndarray in python ============================= Credits ------- This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses. Introduction ------------ Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass. ndarrays and object creation ============================ Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are: #. Explicit constructor call - as in ``MySubClass(params)``. This is the usual route to Python instance creation. #. View casting - casting an existing ndarray as a given subclass #. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See :ref:`new-from-template` for more details The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation. .. _view-casting: View casting ------------ *View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass: >>> import numpy as np >>> # create a completely useless ndarray subclass >>> class C(np.ndarray): pass >>> # create a standard ndarray >>> arr = np.zeros((3,)) >>> # take a view of it, as our useless subclass >>> c_arr = arr.view(C) >>> type(c_arr) <class 'C'> .. _new-from-template: Creating new from template -------------------------- New instances of an ndarray subclass can also come about by a very similar mechanism to :ref:`view-casting`, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example: >>> v = c_arr[1:] >>> type(v) # the view is of type 'C' <class 'C'> >>> v is c_arr # but it's a new instance False The slice is a *view* onto the original ``c_arr`` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (``c_arr.copy()``), creating ufunc output arrays (see also :ref:`array-wrap`), and reducing methods (like ``c_arr.mean()``. Relationship of view casting and new-from-template -------------------------------------------------- These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, :ref:`view-casting` means you have created a new instance of your array type from any potential subclass of ndarray. :ref:`new-from-template` means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass. Implications for subclassing ---------------------------- If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also :ref:`view-casting` or :ref:`new-from-template`. NumPy has the machinery to do this, and this machinery that makes subclassing slightly non-standard. 
There are two aspects to the machinery that ndarray uses to support
views and new-from-template in subclasses.

The first is the use of the ``ndarray.__new__`` method for the main work
of object initialization, rather than the more usual ``__init__``
method. The second is the use of the ``__array_finalize__`` method to
allow subclasses to clean up after the creation of views and new
instances from templates.

A brief Python primer on ``__new__`` and ``__init__``
=====================================================

``__new__`` is a standard Python method, and, if present, is called
before ``__init__`` when we create a class instance. See the `python
__new__ documentation
<http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.

For example, consider the following Python code:

.. testcode::

   class C(object):
       def __new__(cls, *args):
           print('Cls in __new__:', cls)
           print('Args in __new__:', args)
           # object.__new__ takes only the class; passing the extra
           # arguments through would raise a TypeError
           return object.__new__(cls)

       def __init__(self, *args):
           print('type(self) in __init__:', type(self))
           print('Args in __init__:', args)

meaning that we get:

>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)

When we call ``C('hello')``, the ``__new__`` method gets its own class
as first argument, and the passed argument, which is the string
``'hello'``. After python calls ``__new__``, it usually (see below)
calls our ``__init__`` method, with the output of ``__new__`` as the
first argument (now a class instance), and the passed arguments
following.

As you can see, the object can be initialized in the ``__new__``
method or the ``__init__`` method, or both, and in fact ndarray does
not have an ``__init__`` method, because all the initialization is
done in the ``__new__`` method.

Why use ``__new__`` rather than just the usual ``__init__``? Because
in some cases, as for ndarray, we want to be able to return an object
of some other class. Consider the following:

.. testcode::

   class D(C):
       def __new__(cls, *args):
           print('D cls is:', cls)
           print('D args in __new__:', args)
           return C.__new__(C, *args)

       def __init__(self, *args):
           # we never get here
           print('In D __init__')

meaning that:

>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>

The definition of ``C`` is the same as before, but for ``D``, the
``__new__`` method returns an instance of class ``C`` rather than
``D``. Note that the ``__init__`` method of ``D`` does not get
called. In general, when the ``__new__`` method returns an object of
class other than the class in which it is defined, the ``__init__``
method of that class is not called.

This is how subclasses of the ndarray class are able to return views
that preserve the class type. When taking a view, the standard
ndarray machinery creates the new ndarray object with something
like::

    obj = ndarray.__new__(subtype, shape, ...

where ``subtype`` is the subclass. Thus the returned view is of the
same class as the subclass, rather than being of class ``ndarray``.

That solves the problem of returning views of the same type, but now
we have a new problem. The machinery of ndarray can set the class
this way, in its standard methods for taking views, but the ndarray
``__new__`` method knows nothing of what we have done in our own
``__new__`` method in order to set attributes, and so on. (Aside -
why not call ``obj = subtype.__new__(...`` then?
Because we may not have a ``__new__`` method with the same call
signature).

The role of ``__array_finalize__``
==================================

``__array_finalize__`` is the mechanism that numpy provides to allow
subclasses to handle the various ways that new instances get created.

Remember that subclass instances can come about in these three ways:

#. explicit constructor call (``obj = MySubClass(params)``). This will
   call the usual sequence of ``MySubClass.__new__`` then (if it exists)
   ``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`

Our ``MySubClass.__new__`` method only gets called in the case of the
explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
``MySubClass.__init__`` to deal with the view casting and
new-from-template. It turns out that ``MySubClass.__array_finalize__``
*does* get called for all three methods of object creation, so this is
where our object creation housekeeping usually goes.

* For the explicit constructor call, our subclass will need to create a
  new ndarray instance of its own class. In practice this means that
  we, the authors of the code, will need to make a call to
  ``ndarray.__new__(MySubClass,...)``, or do view casting of an
  existing array (see below)
* For view casting and new-from-template, the equivalent of
  ``ndarray.__new__(MySubClass,...`` is called, at the C level.

The arguments that ``__array_finalize__`` receives differ for the three
methods of instance creation above.

The following code allows us to look at the call sequences and
arguments:

.. testcode::

   import numpy as np

   class C(np.ndarray):
       def __new__(cls, *args, **kwargs):
           print('In __new__ with class %s' % cls)
           return np.ndarray.__new__(cls, *args, **kwargs)

       def __init__(self, *args, **kwargs):
           # in practice you probably will not need or want an __init__
           # method for your subclass
           print('In __init__ with class %s' % self.__class__)

       def __array_finalize__(self, obj):
           print('In array_finalize:')
           print('   self type is %s' % type(self))
           print('   obj type is %s' % type(obj))

Now:

>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'C'>

The signature of ``__array_finalize__`` is::

    def __array_finalize__(self, obj):

``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our
own class (``self``) as well as the object from which the view has been
taken (``obj``). As you can see from the output above, the ``self`` is
always a newly created instance of our subclass, and the type of ``obj``
differs for the three instance creation methods:

* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any
  subclass of ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our
  own subclass, that we might use to update the new ``self`` instance.

Because ``__array_finalize__`` is the only method that always sees new
instances being created, it is the sensible place to fill in instance
defaults for new object attributes, among other tasks.

This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------

.. testcode::

   import numpy as np

   class InfoArray(np.ndarray):

       def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                   strides=None, order=None, info=None):
           # Create the ndarray instance of our type, given the usual
           # ndarray input arguments.  This will call the standard
           # ndarray constructor, but return an object of our type.
           # It also triggers a call to InfoArray.__array_finalize__
           obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset,
                                    strides, order)
           # set the new 'info' attribute to the value passed
           obj.info = info
           # Finally, we must return the newly created object:
           return obj

       def __array_finalize__(self, obj):
           # ``self`` is a new object resulting from
           # ndarray.__new__(InfoArray, ...), therefore it only has
           # attributes that the ndarray.__new__ constructor gave it -
           # i.e. those of a standard ndarray.
           #
           # We could have got to the ndarray.__new__ call in 3 ways:
           # From an explicit constructor - e.g. InfoArray():
           #    obj is None
           #    (we're in the middle of the InfoArray.__new__
           #    constructor, and self.info will be set when we return to
           #    InfoArray.__new__)
           if obj is None: return
           # From view casting - e.g. arr.view(InfoArray):
           #    obj is arr
           #    (type(obj) can be InfoArray)
           # From new-from-template - e.g. infoarr[:3]
           #    type(obj) is InfoArray
           #
           # Note that it is here, rather than in the __new__ method,
           # that we set the default value for 'info', because this
           # method sees all creation of default objects - with the
           # InfoArray.__new__ constructor, but also with
           # arr.view(InfoArray).
           self.info = getattr(obj, 'info', None)
           # We do not need to return anything

Using the object looks like this:

>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True

>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'

>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'

>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True

This class isn't very useful, because it has the same constructor as the
bare ndarray object, including passing in buffers and shapes and so on.
We would probably prefer the constructor to be able to take an already
formed ndarray from the usual numpy calls to ``np.array`` and return an
object.

Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------

Here is a class that takes a standard ndarray that already exists, casts
as our type, and adds an extra attribute.

.. testcode::

   import numpy as np

   class RealisticInfoArray(np.ndarray):

       def __new__(cls, input_array, info=None):
           # Input array is an already formed ndarray instance
           # We first cast to be our class type
           obj = np.asarray(input_array).view(cls)
           # add the new attribute to the created instance
           obj.info = info
           # Finally, we must return the newly created object:
           return obj

       def __array_finalize__(self, obj):
           # see InfoArray.__array_finalize__ for comments
           if obj is None: return
           self.info = getattr(obj, 'info', None)

So:

>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:

``__array_wrap__`` for ufuncs
-------------------------------------------------------

``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy
functions, to allow a subclass to set the type of the return value and
update attributes and metadata. Let's show how this works with an
example. First we make the same subclass as above, but with a different
name and some print statements:

.. testcode::

   import numpy as np

   class MySubClass(np.ndarray):

       def __new__(cls, input_array, info=None):
           obj = np.asarray(input_array).view(cls)
           obj.info = info
           return obj

       def __array_finalize__(self, obj):
           print('In __array_finalize__:')
           print('   self is %s' % repr(self))
           print('   obj is %s' % repr(obj))
           if obj is None: return
           self.info = getattr(obj, 'info', None)

       def __array_wrap__(self, out_arr, context=None):
           print('In __array_wrap__:')
           print('   self is %s' % repr(self))
           print('   arr is %s' % repr(out_arr))
           # then just call the parent
           return np.ndarray.__array_wrap__(self, out_arr, context)

We run a ufunc on an instance of our new array:

>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
   self is MySubClass([0, 1, 2, 3, 4])
   obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
   self is MySubClass([0, 1, 2, 3, 4])
   arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
   self is MySubClass([1, 3, 5, 7, 9])
   obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'

Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method
of the input with the highest ``__array_priority__`` value, in this case
``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and
``out_arr`` as the (ndarray) result of the addition. In turn, the default
``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the result to
class ``MySubClass``, and called ``__array_finalize__`` - hence the
copying of the ``info`` attribute. This has all happened at the C level.

But, we could do anything we wanted:

.. testcode::

   class SillySubClass(np.ndarray):

       def __array_wrap__(self, arr, context=None):
           return 'I lost your data'

>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'

So, by defining a specific ``__array_wrap__`` method for our subclass, we
can tweak the output from ufuncs. The ``__array_wrap__`` method requires
``self``, then an argument - which is the result of the ufunc - and an
optional parameter *context*. This parameter is passed by some ufuncs as
a 3-element tuple: (name of the ufunc, arguments of the ufunc, domain of
the ufunc). ``__array_wrap__`` should return an instance of its
containing class. See the masked array subclass for an implementation.

In addition to ``__array_wrap__``, which is called on the way out of the
ufunc, there is also an ``__array_prepare__`` method which is called on
the way into the ufunc, after the output arrays are created but before
any computation has been performed. The default implementation does
nothing but pass through the array. ``__array_prepare__`` should not
attempt to access the array data or resize the array, it is intended for
setting the output array type, updating attributes and metadata, and
performing any checks based on the input that may be desired before
computation begins. Like ``__array_wrap__``, ``__array_prepare__`` must
return an ndarray or subclass thereof or raise an error.
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------

One of the problems that ndarray solves is keeping track of memory
ownership of ndarrays and their views. Consider the case where we have
created an ndarray, ``arr`` and have taken a slice with ``v =
arr[1:]``. The two objects are looking at the same memory. NumPy keeps
track of where the data came from for a particular array or view, with
the ``base`` attribute:

>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True

In general, if the array owns its own memory, as for ``arr`` in this
case, then ``arr.base`` will be None - there are some exceptions to this
- see the numpy book for more details.

The ``base`` attribute is useful in being able to tell whether we have
a view or the original array. This in turn can be useful if we need
to know whether or not to do some specific cleanup when the subclassed
array is deleted. For example, we may only want to do the cleanup if
the original array is deleted, but not the views. For an example of
how this can work, have a look at the ``memmap`` class in
``numpy.core``.

Subclassing and Downstream Compatibility
----------------------------------------

When sub-classing ``ndarray`` or creating duck-types that mimic the
``ndarray`` interface, it is your responsibility to decide how aligned
your APIs will be with those of numpy. For convenience, many numpy
functions that have a corresponding ``ndarray`` method (e.g., ``sum``,
``mean``, ``take``, ``reshape``) work by checking if the first argument
to a function has a method of the same name. If it exists, the method is
called instead of coercing the arguments to a numpy array.

For example, if you want your sub-class or duck-type to be compatible
with numpy's ``sum`` function, the method signature for this object's
``sum`` method should be the following:

.. testcode::

   def sum(self, axis=None, dtype=None, out=None, keepdims=False):
       ...

This is the exact same method signature as ``np.sum``, so now if a user
calls ``np.sum`` on this object, numpy will call the object's own ``sum``
method and pass in these arguments enumerated above in the signature, and
no errors will be raised because the signatures are completely compatible
with each other.

If, however, you decide to deviate from this signature and do something
like this:

.. testcode::

   def sum(self, axis=None, dtype=None):
       ...

This object is no longer compatible with ``np.sum`` because if you call
``np.sum``, it will pass in unexpected arguments ``out`` and
``keepdims``, causing a TypeError to be raised.

If you wish to maintain compatibility with numpy and its subsequent
versions (which might add new keyword arguments) but do not want to
surface all of numpy's arguments, your function's signature should accept
``**kwargs``. For example:

.. testcode::

   def sum(self, axis=None, dtype=None, **unused_kwargs):
       ...

This object is now compatible with ``np.sum`` again because any
extraneous arguments (i.e. keywords that are not ``axis`` or ``dtype``)
will be hidden away in the ``**unused_kwargs`` parameter.

"""
""" :Interface to the UMFPACK library: ================================== :Contains: UmfpackContext class :Description: ------------- Routines for symbolic and numeric LU factorization of sparse matrices and for solving systems of linear equations with sparse matrices. Tested with UMFPACK V4.4 (Jan. 28, 2005), V5.0 (May 5, 2006) Copyright (c) 2005 by NAME All Rights Reserved. UMFPACK homepage: http://www.cise.ufl.edu/research/sparse/umfpack Use 'print UmfpackContext().funs' to see all UMFPACK library functions the module exposes, if you need something not covered by the examples below. :Installation: -------------- Example site.cfg entry: <code> UMFPACK v4.4 in <dir>: [amd] library_dirs = <dir>/UMFPACK/AMD/Lib include_dirs = <dir>/UMFPACK/AMD/Include amd_libs = amd [umfpack] library_dirs = <dir>/UMFPACK/UMFPACK/Lib include_dirs = <dir>/UMFPACK/UMFPACK/Include umfpack_libs = umfpack UMFPACK v5.0 (as part of UFsparse package) in <dir>: [amd] library_dirs = <dir>/UFsparse/AMD/Lib include_dirs = <dir>/UFsparse/AMD/Include, <dir>/UFsparse/UFconfig amd_libs = amd [umfpack] library_dirs = <dir>/UFsparse/UMFPACK/Lib include_dirs = <dir>/UFsparse/UMFPACK/Include, <dir>/UFsparse/UFconfig umfpack_libs = umfpack <code> :Examples: ---------- Assuming this module imported as um (import scipy.sparse.linalg.dsolve.umfpack as um) Sparse matrix in CSR or CSC format: mtx Righthand-side: rhs Solution: sol <code> # Contruct the solver. umfpack = um.UmfpackContext() # Use default 'di' family of UMFPACK routines. # One-shot solution. sol = umfpack( um.UMFPACK_A, mtx, rhs, autoTranspose = True ) # same as: sol = umfpack.linsolve( um.UMFPACK_A, mtx, rhs, autoTranspose = True ) <code> -or- <code> # Make LU decomposition. umfpack.numeric( mtx ) ... # Use already LU-decomposed matrix. sol1 = umfpack( um.UMFPACK_A, mtx, rhs1, autoTranspose = True ) sol2 = umfpack( um.UMFPACK_A, mtx, rhs2, autoTranspose = True ) # same as: sol1 = umfpack.solve( um.UMFPACK_A, mtx, rhs1, autoTranspose = True ) sol2 = umfpack.solve( um.UMFPACK_A, mtx, rhs2, autoTranspose = True ) <code> -or- <code> # Make symbolic decomposition. umfpack.symbolic( mtx0 ) # Print statistics. umfpack.report_symbolic() # ... # Make LU decomposition of mtx1 which has same structure as mtx0. umfpack.numeric( mtx1 ) # Print statistics. umfpack.report_numeric() # Use already LU-decomposed matrix. sol1 = umfpack( um.UMFPACK_A, mtx1, rhs1, autoTranspose = True ) # ... # Make LU decomposition of mtx2 which has same structure as mtx0. umfpack.numeric( mtx2 ) sol2 = umfpack.solve( um.UMFPACK_A, mtx2, rhs2, autoTranspose = True ) # Print all statistics. umfpack.report_info() <code> -or- <code> # Get LU factors and permutation matrices of a matrix. 
L, U, P, Q, R, do_recip = umfpack.lu( mtx )
</code>

:Returns:
    - `L` : Lower triangular m-by-min(m,n) CSR matrix
    - `U` : Upper triangular min(m,n)-by-n CSC matrix
    - `P` : Vector of row permutations
    - `Q` : Vector of column permutations
    - `R` : Vector of diagonal row scalings
    - `do_recip` : boolean

:Note:
    For a given matrix A, the decomposition satisfies:
        $LU = PRAQ$        when do_recip is true,
        $LU = P(R^{-1})AQ$ when do_recip is false

:UmfpackContext solution methods:
---------------------------------

umfpack(), umfpack.linsolve(), umfpack.solve()

:Parameters:
    - `sys` : constant,
        one of UMFPACK system description constants, like UMFPACK_A,
        UMFPACK_At, see umfSys list and UMFPACK docs
    - `mtx` : sparse matrix (CSR or CSC)
    - `rhs` : right hand side vector
    - `autoTranspose` : bool
        automatically changes 'sys' to the transposed type, if 'mtx' is
        in CSR, since UMFPACK assumes CSC internally

:Setting control parameters:
----------------------------

Assuming this module imported as um:

List of control parameter names is accessible as 'um.umfControls' -
their meaning and possible values are described in the UMFPACK
documentation. To each name corresponds an attribute of the 'um' module,
such as, for example 'um.UMFPACK_PRL' (controlling the verbosity of
umfpack report functions). These attributes are in fact indices into the
control array - to set the corresponding control array value, just do
the following:

<code>
umfpack = um.UmfpackContext()
umfpack.control[um.UMFPACK_PRL] = 4 # Let's be more verbose.
</code>

--

:Author: NAME contributors: NAME (lu() method wrappers)
"""
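# A small sketch of checking the identity above with scipy.sparse. The
# permutation convention assumed here (row i of the permuted matrix is
# row P[i] of A, and likewise Q for columns) is an assumption for
# illustration, not a statement about UMFPACK's internals:

import numpy as np
import scipy.sparse as sp

def check_lu_factors(A, L, U, P, Q, R, do_recip, tol=1e-10):
    # the doc above: LU = PRAQ if do_recip, else LU = P(R^-1)AQ
    r = R if do_recip else 1.0 / R
    RA = sp.diags(r).dot(sp.csc_matrix(A))
    # apply row permutation P and column permutation Q by fancy indexing
    PRAQ = sp.csc_matrix(RA)[P, :][:, Q]
    diff = (sp.csc_matrix(L).dot(sp.csc_matrix(U)) - PRAQ).toarray()
    return np.abs(diff).max() < tol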
#
# ElementTree
# $Id: ElementTree.py 2326 2005-03-17 07:45:21Z USERNAME $
#
# light-weight XML support for Python 1.5.2 and later.
#
# history:
# 2001-10-20 fl created (from various sources)
# 2001-11-01 fl return root from parse method
# 2002-02-16 fl sort attributes in lexical order
# 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup
# 2002-05-01 fl finished TreeBuilder refactoring
# 2002-07-14 fl added basic namespace support to ElementTree.write
# 2002-07-25 fl added QName attribute support
# 2002-10-20 fl fixed encoding in write
# 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding
# 2002-11-27 fl accept file objects or file names for parse/write
# 2002-12-04 fl moved XMLTreeBuilder back to this module
# 2003-01-11 fl fixed entity encoding glitch for us-ascii
# 2003-02-13 fl added XML literal factory
# 2003-02-21 fl added ProcessingInstruction/PI factory
# 2003-05-11 fl added tostring/fromstring helpers
# 2003-05-26 fl added ElementPath support
# 2003-07-05 fl added makeelement factory method
# 2003-07-28 fl added more well-known namespace prefixes
# 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME)
# 2003-09-04 fl fall back on emulator if ElementPath is not installed
# 2003-10-31 fl markup updates
# 2003-11-15 fl fixed nested namespace bug
# 2004-03-28 fl added XMLID helper
# 2004-06-02 fl added default support to findtext
# 2004-06-08 fl fixed encoding of non-ascii element/attribute names
# 2004-08-23 fl take advantage of post-2.1 expat features
# 2005-02-01 fl added iterparse implementation
# 2005-03-02 fl fixed iterparse support for pre-2.2 versions
#
# Copyright (c) 1999-2005 by NAME. All rights reserved.
#
# EMAIL
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The ElementTree toolkit is
#
# Copyright (c) 1999-2005 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I am trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they
raise an exception, the normal exception handling mechanisms take
over.

The drag-and-drop processes can end in two ways: a final target object
is selected, or no final target object is selected. When a final
target object is selected, it will always have been notified of the
potential drop by a call to its dnd_enter() method, as described
above, and possibly one or more calls to its dnd_motion() method; its
dnd_leave() method has not been called since the last call to
dnd_enter(). The target is notified of the drop by a call to its
method dnd_commit(source, event).

If no final target object is selected, and there was an old target
object, its dnd_leave(source, event) method is called to complete the
dnd sequence.

Finally, the source object is notified that the drag-and-drop process
is over, by a call to source.dnd_end(target, event), specifying either
the selected target object, or None if no target object was selected.
The source object can use this to implement the commit action; this is
sometimes simpler than doing it in the target's dnd_commit(). The
target's dnd_commit() method could then simply be aliased to
dnd_leave().

At any time during a dnd sequence, the application can cancel the
sequence by calling the cancel() method on the object returned by
dnd_start(). This will call dnd_leave() if a target is currently
active; it will never call dnd_commit().

"""
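# A minimal runnable sketch of the protocol described above (Python 2
# module names; the class and widget choices are illustrative only and
# not part of this module):

import Tkinter
import Tkdnd

class DraggedItem:
    # the "source object" handed to dnd_start(); any object will do
    def dnd_end(self, target, event):
        print "drag finished, target object:", target

class DropTarget(Tkinter.Frame):
    # having a dnd_accept attribute makes this widget a candidate target
    def dnd_accept(self, source, event):
        return self             # we volunteer as the target object
    def dnd_enter(self, source, event):
        self.config(bg='yellow')
    def dnd_motion(self, source, event):
        pass
    def dnd_leave(self, source, event):
        self.config(bg='grey')
    def dnd_commit(self, source, event):
        print "dropped:", source
        self.dnd_leave(source, event)

root = Tkinter.Tk()
target = DropTarget(root, width=200, height=200, bg='grey')
target.pack()
target.bind('<ButtonPress>',
            lambda event: Tkdnd.dnd_start(DraggedItem(), event))
root.mainloop()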
""" Discrete Fourier Transform (:mod:`numpy.fft`) ============================================= .. currentmodule:: numpy.fft Standard FFTs ------------- .. autosummary:: :toctree: generated/ fft Discrete Fourier transform. ifft Inverse discrete Fourier transform. fft2 Discrete Fourier transform in two dimensions. ifft2 Inverse discrete Fourier transform in two dimensions. fftn Discrete Fourier transform in N-dimensions. ifftn Inverse discrete Fourier transform in N dimensions. Real FFTs --------- .. autosummary:: :toctree: generated/ rfft Real discrete Fourier transform. irfft Inverse real discrete Fourier transform. rfft2 Real discrete Fourier transform in two dimensions. irfft2 Inverse real discrete Fourier transform in two dimensions. rfftn Real discrete Fourier transform in N dimensions. irfftn Inverse real discrete Fourier transform in N dimensions. Hermitian FFTs -------------- .. autosummary:: :toctree: generated/ hfft Hermitian discrete Fourier transform. ihfft Inverse Hermitian discrete Fourier transform. Helper routines --------------- .. autosummary:: :toctree: generated/ fftfreq Discrete Fourier Transform sample frequencies. rfftfreq DFT sample frequencies (for usage with rfft, irfft). fftshift Shift zero-frequency component to center of spectrum. ifftshift Inverse of fftshift. Background information ---------------------- Fourier analysis is fundamentally a method for expressing a function as a sum of periodic components, and for recovering the function from those components. When both the function and its Fourier transform are replaced with discretized counterparts, it is called the discrete Fourier transform (DFT). The DFT has become a mainstay of numerical computing in part because of a very fast algorithm for computing it, called the Fast Fourier Transform (FFT), which was known to Gauss (1805) and was brought to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_ provide an accessible introduction to Fourier analysis and its applications. Because the discrete Fourier transform separates its input into components that contribute at discrete frequencies, it has a great number of applications in digital signal processing, e.g., for filtering, and in this context the discretized input to the transform is customarily referred to as a *signal*, which exists in the *time domain*. The output is called a *spectrum* or *transform* and exists in the *frequency domain*. Implementation details ---------------------- There are many ways to define the DFT, varying in the sign of the exponent, normalization, etc. In this implementation, the DFT is defined as .. math:: A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\} \\qquad k = 0,\\ldots,n-1. The DFT is in general defined for complex inputs and outputs, and a single-frequency component at linear frequency :math:`f` is represented by a complex exponential :math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t` is the sampling interval. The values in the result follow so-called "standard" order: If ``A = fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the sum of the signal), which is always purely real for real inputs. Then ``A[1:n/2]`` contains the positive-frequency terms, and ``A[n/2+1:]`` contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, ``A[n/2]`` represents both positive and negative Nyquist frequency, and is also purely real for real input. 
For an odd number of input points, ``A[(n-1)/2]`` contains the largest
positive frequency, while ``A[(n+1)/2]`` contains the largest negative
frequency. The routine ``np.fft.fftfreq(n)`` returns an array giving
the frequencies of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put
the zero-frequency components in the middle, and ``np.fft.ifftshift(A)``
undoes that shift.

When the input `a` is a time-domain signal and ``A = fft(a)``,
``np.abs(A)`` is its amplitude spectrum and ``np.abs(A)**2`` is its
power spectrum. The phase spectrum is obtained by ``np.angle(A)``.

The inverse DFT is defined as

.. math::
   a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi
   i{mk\\over n}\\right\\}
   \\qquad m = 0,\\ldots,n-1.

It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.

Normalization
-------------

The default normalization has the direct transforms unscaled and the
inverse transforms are scaled by :math:`1/n`. It is possible to obtain
unitary transforms by setting the keyword argument ``norm`` to
``"ortho"`` (default is `None`) so that both direct and inverse
transforms will be scaled by :math:`1/\\sqrt{n}`.

Real and Hermitian transforms
-----------------------------

When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real inputs
there is no information in the negative frequency components that is not
already available from the positive frequency components. The family of
`rfft` functions is designed to operate on real inputs, and exploits
this symmetry by computing only the positive frequency components, up to
and including the Nyquist frequency. Thus, ``n`` input points produce
``n/2+1`` complex output points. The inverses of this family assume the
same symmetry of their input, and for an output of ``n`` points use
``n/2+1`` input points.

Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n``
real points in the frequency domain.

In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.

Higher dimensions
-----------------

In two dimensions, the DFT is defined as

.. math::
   A_{kl} =  \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
   a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
   \\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,

which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.

References
----------

.. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the
        machine calculation of complex Fourier series," *Math. Comput.*
        19: 297-301.

.. [NR] NAME, NAME, NAME and NAME, 2007, *Numerical Recipes: The Art of
        Scientific Computing*, ch. 12-13. Cambridge Univ. Press,
        Cambridge, UK.

Examples
--------

For examples, see the various functions.

"""
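# A short, self-contained illustration of the ordering and helper
# conventions described above (not part of the documented API; the
# signal and sample rate are arbitrary):

import numpy as np

sample_rate = 8.0                          # samples per second
t = np.arange(8) / sample_rate             # one second of samples
a = np.sin(2 * np.pi * 2.0 * t)            # a pure 2 Hz sine wave
A = np.fft.fft(a)
freqs = np.fft.fftfreq(len(a), d=1.0 / sample_rate)
# the amplitude spectrum peaks at +2 Hz and -2 Hz, in "standard" order
print(np.round(np.abs(A), 3))
print(freqs)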
""" ======================== Broadcasting over arrays ======================== The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where ``b`` was an array. We can think of the scalar ``b`` being *stretched* during the arithmetic operation into an array with the same shape as ``a``. The new elements in ``b`` are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (``b`` is a scalar rather than an array). General Broadcasting Rules ========================== When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when 1) they are equal, or 2) one of them is 1 If these conditions are not met, a ``ValueError: frames are not aligned`` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same *number* of dimensions. For example, if you have a ``256x256x3`` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or "copied" to match the other. 
In the following example, both the ``A`` and ``B`` arrays have axes with length one that are expanded to a larger size during the broadcast operation::

  A      (4d array):  8 x 1 x 6 x 1
  B      (3d array):      7 x 1 x 5
  Result (4d array):  8 x 7 x 6 x 5

Here are some more examples::

  A      (2d array):  5 x 4
  B      (1d array):      1
  Result (2d array):  5 x 4

  A      (2d array):  5 x 4
  B      (1d array):      4
  Result (2d array):  5 x 4

  A      (3d array):  15 x 3 x 5
  B      (3d array):  15 x 1 x 5
  Result (3d array):  15 x 3 x 5

  A      (3d array):  15 x 3 x 5
  B      (2d array):       3 x 5
  Result (3d array):  15 x 3 x 5

  A      (3d array):  15 x 3 x 5
  B      (2d array):       3 x 1
  Result (3d array):  15 x 3 x 5

Here are examples of shapes that do not broadcast::

  A      (1d array):  3
  B      (1d array):  4            # trailing dimensions do not match

  A      (2d array):      2 x 1
  B      (3d array):  8 x 4 x 3    # second from last dimensions mismatched

An example of broadcasting in practice::

  >>> x = np.arange(4)
  >>> xx = x.reshape(4,1)
  >>> y = np.ones(5)
  >>> z = np.ones((3,4))

  >>> x.shape
  (4,)
  >>> y.shape
  (5,)
  >>> x + y
  <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape

  >>> xx.shape
  (4, 1)
  >>> y.shape
  (5,)
  >>> (xx + y).shape
  (4, 5)
  >>> xx + y
  array([[ 1.,  1.,  1.,  1.,  1.],
         [ 2.,  2.,  2.,  2.,  2.],
         [ 3.,  3.,  3.,  3.,  3.],
         [ 4.,  4.,  4.,  4.,  4.]])

  >>> x.shape
  (4,)
  >>> z.shape
  (3, 4)
  >>> (x + z).shape
  (3, 4)
  >>> x + z
  array([[ 1.,  2.,  3.,  4.],
         [ 1.,  2.,  3.,  4.],
         [ 1.,  2.,  3.,  4.]])

Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays::

  >>> a = np.array([0.0, 10.0, 20.0, 30.0])
  >>> b = np.array([1.0, 2.0, 3.0])
  >>> a[:, np.newaxis] + b
  array([[  1.,   2.,   3.],
         [ 11.,  12.,  13.],
         [ 21.,  22.,  23.],
         [ 31.,  32.,  33.]])

Here the ``newaxis`` index operator inserts a new axis into ``a``, making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.

See `this article <http://wiki.scipy.org/EricsBroadcastingDoc>`_ for illustrations of broadcasting concepts.

"""
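The shape rule stated above is small enough to write out by hand. The helper below is our own illustration, not part of NumPy: it pads the shorter shape with leading ones and compares trailing dimensions pairwise, exactly as described::

    def broadcast_shape(shape_a, shape_b):
        """Return the broadcast result shape, or raise ValueError."""
        length = max(len(shape_a), len(shape_b))
        a = (1,) * (length - len(shape_a)) + tuple(shape_a)
        b = (1,) * (length - len(shape_b)) + tuple(shape_b)
        result = []
        for m, n in zip(a, b):
            if m == n or m == 1 or n == 1:
                result.append(max(m, n))   # a size-1 dimension is stretched
            else:
                raise ValueError("frames are not aligned")
        return tuple(result)

For example, ``broadcast_shape((8, 1, 6, 1), (7, 1, 5))`` returns ``(8, 7, 6, 5)``, matching the first table above, while ``broadcast_shape((2, 1), (8, 4, 3))`` raises ``ValueError`` as in the non-broadcastable examples.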
#-- GAUDI jobOptions generated on Mon Jul 20 10:26:44 2015 #-- Contains event types : #-- 11164031 - 11 files - 133000 events - 28.63 GBytes #-- Extra information about the data processing phases: #-- Processing Pass Step-124834 #-- StepId : 124834 #-- StepName : Reco14a for MC #-- ApplicationName : NAME #-- ApplicationVersion : v43r2p7 #-- OptionFiles : $APPCONFIGOPTS/NAME/DataType-2012.py;$APPCONFIGOPTS/NAME/MC-WithTruth.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py #-- DDDB : fromPreviousStep #-- CONDDB : fromPreviousStep #-- ExtraPackages : AppConfig.v3r164 #-- Visible : Y #-- Processing Pass Step-124620 #-- StepId : 124620 #-- StepName : Digi13 with G4 dE/dx #-- ApplicationName : NAME #-- ApplicationVersion : v26r3 #-- OptionFiles : $APPCONFIGOPTS/NAME/Default.py;$APPCONFIGOPTS/NAME/DataType-2012.py;$APPCONFIGOPTS/NAME/NAME-SiG4EnergyDeposit.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py #-- DDDB : fromPreviousStep #-- CONDDB : fromPreviousStep #-- ExtraPackages : AppConfig.v3r164 #-- Visible : Y #-- Processing Pass Step-124632 #-- StepId : 124632 #-- StepName : TCK-0x409f0045 Flagged for Sim08 2012 #-- ApplicationName : NAME #-- ApplicationVersion : v14r8p1 #-- OptionFiles : $APPCONFIGOPTS/NAME/NAMESimProductionWithL0Emulation.py;$APPCONFIGOPTS/Conditions/TCK-0x409f0045.py;$APPCONFIGOPTS/NAME/DataType-2012.py;$APPCONFIGOPTS/L0/L0TCK-0x0045.py #-- DDDB : fromPreviousStep #-- CONDDB : fromPreviousStep #-- ExtraPackages : AppConfig.v3r164 #-- Visible : Y #-- Processing Pass Step-124630 #-- StepId : 124630 #-- StepName : Stripping20-NoPrescalingFlagged for Sim08 #-- ApplicationName : NAME #-- ApplicationVersion : v32r2p1 #-- OptionFiles : $APPCONFIGOPTS/NAME/DV-Stripping20-Stripping-MC-NoPrescaling.py;$APPCONFIGOPTS/NAME/DataType-2012.py;$APPCONFIGOPTS/NAME/InputType-DST.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py #-- DDDB : fromPreviousStep #-- CONDDB : fromPreviousStep #-- ExtraPackages : AppConfig.v3r164 #-- Visible : Y #-- Processing Pass Step-125337 #-- StepId : 125337 #-- StepName : Sim08a - 2012 - MU - Pythia8 #-- ApplicationName : NAME #-- ApplicationVersion : v45r3 #-- OptionFiles : $APPCONFIGOPTS/NAME/Sim08-Beam4000GeV-mu100-2012-nu2.5.py;$DECFILESROOT/options/@{eventType}.py;$LBPYTHIA8ROOT/options/Pythia8.py;$APPCONFIGOPTS/NAME/G4PL_FTFP_BERT_EmNoCuts.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py #-- DDDB : Sim08-20130503-1 #-- CONDDB : Sim08-20130503-1-vc-mu100 #-- ExtraPackages : AppConfig.v3r171;DecFiles.v27r8 #-- Visible : Y
""" This page is in the table of contents. Some filaments contract too much and to prevent this you have to print the object in a temperature regulated chamber or on a temperature regulated bed. The chamber tool allows you to control the bed and chamber temperature and the holding pressure. The gcodes are also described at: http://reprap.org/wiki/Mendel_User_Manual:_RepRapGCodes The chamber manual page is at: http://www.bitsfrombytes.com/wiki/index.php?title=Skeinforge_Chamber ==Operation== The default 'Activate Chamber' checkbox is on. When it is on, the functions described below will work, when it is off, the functions will not be called. ==Settings== ===Bed Temperature=== Default is 60C. Defines the print_bed temperature in Celcius by adding an M140 command. ===Chamber Temperature=== Default is 30C. Defines the chamber temperature in Celcius by adding an M141 command. ===Holding Force=== Default is zero. Defines the holding pressure of a mechanism, like a vacuum table or electromagnet, to hold the bed surface or object, by adding an M142 command. The holding pressure is in bar. For hardware which only has on/off holding, when the holding pressure is zero, turn off holding, when the holding pressure is greater than zero, turn on holding. ==Heated Beds== ===Bothacker=== A resistor heated aluminum plate by Bothacker: http://bothacker.com with an article at: http://bothacker.com/2009/12/18/heated-build-platform/ ===Domingo=== A heated copper build plate by Domingo: http://casainho-emcrepstrap.blogspot.com/ with articles at: http://casainho-emcrepstrap.blogspot.com/2010/01/first-time-with-pla-testing-it-also-on.html http://casainho-emcrepstrap.blogspot.com/2010/01/call-for-helpideas-to-develop-heated.html http://casainho-emcrepstrap.blogspot.com/2010/01/new-heated-build-platform.html http://casainho-emcrepstrap.blogspot.com/2010/01/no-acrylic-and-instead-kapton-tape-on.html http://casainho-emcrepstrap.blogspot.com/2010/01/problems-with-heated-build-platform-and.html http://casainho-emcrepstrap.blogspot.com/2010/01/perfect-build-platform.html http://casainho-emcrepstrap.blogspot.com/2009/12/almost-no-warp.html http://casainho-emcrepstrap.blogspot.com/2009/12/heated-base-plate.html ===Jmil=== A heated build stage by jmil, over at: http://www.hive76.org with articles at: http://www.hive76.org/handling-hot-build-surfaces http://www.hive76.org/heated-build-stage-success ===Kulitorum=== Kulitorum has made a heated bed. It is a 5mm Alu sheet with a pattern laid out in kapton tape. The wire is a 0.6mm2 Konstantin wire and it's held in place by small pieces of kapton tape. 
The description and picture are at:
http://gallery.kulitorum.com/main.php?g2_itemId=283

===Metalab===
A heated base by the Metalab folks:
http://reprap.soup.io

with information at:
http://reprap.soup.io/?search=heated%20base

===Nophead===
A resistor heated aluminum bed by Nophead:
http://hydraraptor.blogspot.com

with articles at:
http://hydraraptor.blogspot.com/2010/01/will-it-stick.html
http://hydraraptor.blogspot.com/2010/01/hot-metal-and-serendipity.html
http://hydraraptor.blogspot.com/2010/01/new-year-new-plastic.html
http://hydraraptor.blogspot.com/2010/01/hot-bed.html

===Prusajr===
A resistive wire heated plexiglass plate by prusajr:
http://prusadjs.cz/

with articles at:
http://prusadjs.cz/2010/01/heated-reprap-print-bed-mk2/
http://prusadjs.cz/2009/11/look-ma-no-warping-heated-reprap-print-bed/

===Pumpernickel2===
A resistor heated aluminum plate by Pumpernickel2:
http://dev.forums.reprap.org/profile.php?14,844

with a picture at:
http://dev.forums.reprap.org/file.php?14,file=1228,filename=heatedplate.jpg

===Zaggo===
A resistor heated aluminum plate by Zaggo at Pleasant Software:
http://pleasantsoftware.com/developer/3d/

with articles at:
http://pleasantsoftware.com/developer/3d/2009/12/05/raftless/
http://pleasantsoftware.com/developer/3d/2009/11/15/living-in-times-of-warp-free-printing/
http://pleasantsoftware.com/developer/3d/2009/11/12/canned-heat/

==Examples==
The following examples chamber the file Screw Holder Bottom.stl. The examples are run in a terminal in the folder which contains Screw Holder Bottom.stl and chamber.py.

> python chamber.py
This brings up the chamber dialog.

> python chamber.py Screw Holder Bottom.stl
The chamber tool is parsing the file:
Screw Holder Bottom.stl
..
The chamber tool has created the file:
Screw Holder Bottom_chamber.gcode

> python
Python 2.5.1 (r251:54863, Sep 22 2007, 01:43:31)
[GCC 4.2.1 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import chamber
>>> chamber.main()
This brings up the chamber dialog.
>>> chamber.writeOutput('Screw Holder Bottom.stl')
Screw Holder Bottom.stl
The chamber tool is parsing the file:
Screw Holder Bottom.stl
..
The chamber tool has created the file:
Screw Holder Bottom_chamber.gcode

"""
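The settings above translate into a handful of gcode lines. The sketch below is our own illustration, not the chamber tool's actual implementation; the function name and the S-parameter syntax are assumptions following common RepRap gcode conventions:

    def chamber_gcode_lines(bed_temperature=60.0, chamber_temperature=30.0, holding_force=0.0):
        'Return the M140/M141/M142 lines described in the Settings section.'
        lines = ['M140 S%s' % bed_temperature,        # bed temperature in Celsius
                 'M141 S%s' % chamber_temperature]    # chamber temperature in Celsius
        if holding_force > 0.0:
            lines.append('M142 S%s' % holding_force)  # holding pressure in bar
        return lines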
# #!/usr/bin/env python # # """ # @package ion.agents.platform.rsn.test.oms_test_mixin # @file ion/agents/platform/rsn/test/oms_test_mixin.py # @author NAME @brief A mixin to facilitate test cases for OMS objects following the # OMS-CI interface. # """ # # __author__ = 'Carlos NAME __license__ = 'Apache 2.0' # # # from ion.agents.platform.rsn.simulator.logger import Logger # log = Logger.get_logger() # # from ion.agents.platform.test.helper import HelperTestMixin # # from ion.agents.platform.responses import NormalResponse, InvalidResponse # # import time # import ntplib # from gevent.pywsgi import WSGIServer # import socket # import yaml # # # # some bogus IDs # BOGUS_PLATFORM_ID = 'bogus_plat_id' # BOGUS_ATTR_NAMES = ['bogus_attr1|1', 'bogus_attr2|2'] # BOGUS_PORT_ID = 'bogus_port_id' # BOGUS_INSTRUMENT_ID = 'bogus_instrument_id' # BOGUS_EVENT_TYPE = "bogus_event_type" # # # class OmsTestMixin(HelperTestMixin): # """ # A mixin to facilitate test cases for OMS objects following the OMS-CI interface. # """ # @classmethod # def setUpClass(cls): # HelperTestMixin.setUpClass() # # def test_aa_ping(self): # response = self.oms.hello.ping() # self.assertEquals(response, "pong") # # def test_ab_get_platform_map(self): # platform_map = self.oms.config.get_platform_map() # log.info("config.get_platform_map() => %s" % platform_map) # self.assertIsInstance(platform_map, list) # roots = [] # for pair in platform_map: # self.assertIsInstance(pair, (tuple, list)) # self.assertEquals(len(pair), 2) # plat, parent = pair # if parent == '': # roots.append(plat) # self.assertEquals(len(roots), 1) # self.assertEquals("ShoreStation", roots[0]) # # def test_ac_get_platform_types(self): # retval = self.oms.config.get_platform_types() # log.info("config.get_platform_types() => %s" % retval) # self.assertIsInstance(retval, dict) # for k, v in retval.iteritems(): # self.assertIsInstance(k, str) # self.assertIsInstance(v, str) # # def test_ad_get_platform_metadata(self): # platform_id = self.PLATFORM_ID # retval = self.oms.config.get_platform_metadata(platform_id) # log.info("config.get_platform_metadata(%r) => %s" % (platform_id, retval)) # md = self._verify_valid_platform_id(platform_id, retval) # self.assertIsInstance(md, dict) # # TODO: decide on the actual expected metadata. 
# # if not 'platform_types' in md: # # log.warn("RSN OMS spec: platform_types not included in metadata: %s", md) # # def test_ad_get_platform_metadata_invalid(self): # platform_id = BOGUS_PLATFORM_ID # retval = self.oms.config.get_platform_metadata(platform_id) # log.info("config.get_platform_metadata(%r) => %s" % (platform_id, retval)) # self._verify_invalid_platform_id(platform_id, retval) # # def test_af_get_platform_attributes(self): # platform_id = self.PLATFORM_ID # retval = self.oms.attr.get_platform_attributes(platform_id) # log.info("attr.get_platform_attributes(%r) => %s" % (platform_id, retval)) # infos = self._verify_valid_platform_id(platform_id, retval) # self.assertIsInstance(infos, dict) # # def test_ag_get_platform_attributes_invalid(self): # platform_id = BOGUS_PLATFORM_ID # retval = self.oms.attr.get_platform_attributes(platform_id) # log.info("attr.get_platform_attributes(%r) => %s" % (platform_id, retval)) # self._verify_invalid_platform_id(platform_id, retval) # # def test_ah_get_platform_attribute_values(self): # platform_id = self.PLATFORM_ID # attrNames = self.ATTR_NAMES # cur_time = ntplib.system_to_ntp_time(time.time()) # from_time = cur_time - 50 # a 50-sec time window # req_attrs = [(attr_id, from_time) for attr_id in attrNames] # log.debug("attr.get_platform_attribute_values(%r, %r)" % (platform_id, req_attrs)) # retval = self.oms.attr.get_platform_attribute_values(platform_id, req_attrs) # log.info("attr.get_platform_attribute_values(%r, %r) => %s" % (platform_id, req_attrs, retval)) # vals = self._verify_valid_platform_id(platform_id, retval) # self.assertIsInstance(vals, dict) # for attrName in attrNames: # self._verify_valid_attribute_id(attrName, vals) # # def test_ah_get_platform_attribute_values_invalid_platform_id(self): # platform_id = BOGUS_PLATFORM_ID # attrNames = self.ATTR_NAMES # cur_time = ntplib.system_to_ntp_time(time.time()) # from_time = cur_time - 50 # a 50-sec time window # req_attrs = [(attr_id, from_time) for attr_id in attrNames] # log.debug("attr.get_platform_attribute_values(%r, %r)" % (platform_id, req_attrs)) # retval = self.oms.attr.get_platform_attribute_values(platform_id, req_attrs) # log.info("attr.get_platform_attribute_values(%r, %r) => %s" % (platform_id, req_attrs, retval)) # self._verify_invalid_platform_id(platform_id, retval) # # def test_ah_get_platform_attribute_values_invalid_attributes(self): # platform_id = self.PLATFORM_ID # attrNames = BOGUS_ATTR_NAMES # cur_time = ntplib.system_to_ntp_time(time.time()) # from_time = cur_time - 50 # a 50-sec time window # req_attrs = [(attr_id, from_time) for attr_id in attrNames] # log.debug("attr.get_platform_attribute_values(%r, %r)" % (platform_id, req_attrs)) # retval = self.oms.attr.get_platform_attribute_values(platform_id, req_attrs) # log.info("attr.get_platform_attribute_values(%r, %r) => %s" % (platform_id, req_attrs, retval)) # vals = self._verify_valid_platform_id(platform_id, retval) # self.assertIsInstance(vals, dict) # for attrName in attrNames: # self._verify_invalid_attribute_id(attrName, vals) # # def test_ah_set_platform_attribute_values(self): # platform_id = self.PLATFORM_ID # # try for all test attributes, but check below for both those writable # # and not writable # attrNames = self.ATTR_NAMES # # def valueFor(attrName): # # simple string value, ok because there is no strict value check yet # # TODO more realistic value depending on attribute's type # return "test_value_for_%s" % attrName # # attrs = [(attrName, valueFor(attrName)) for attrName in 
attrNames] # log.debug("attr.set_platform_attribute_values(%r, %r)" % (platform_id, attrs)) # retval = self.oms.attr.set_platform_attribute_values(platform_id, attrs) # log.info("attr.set_platform_attribute_values(%r, %r) => %s" % (platform_id, attrs, retval)) # vals = self._verify_valid_platform_id(platform_id, retval) # self.assertIsInstance(vals, dict) # for attrName in attrNames: # if attrName in self.WRITABLE_ATTR_NAMES: # self._verify_valid_attribute_id(attrName, vals) # else: # self._verify_not_writable_attribute_id(attrName, vals) # # def _get_platform_ports(self, platform_id): # retval = self.oms.port.get_platform_ports(platform_id) # log.info("port.get_platform_ports(%r) => %s" % (platform_id, retval)) # ports = self._verify_valid_platform_id(platform_id, retval) # return ports # # def test_ak_get_platform_ports(self): # platform_id = self.PLATFORM_ID # ports = self._get_platform_ports(platform_id) # for port_id, info in ports.iteritems(): # self.assertIsInstance(info, dict) # self.assertIn('state', info) # # def test_ak_get_platform_ports_invalid_platform_id(self): # platform_id = BOGUS_PLATFORM_ID # retval = self.oms.port.get_platform_ports(platform_id) # log.info("port.get_platform_ports(%r) => %s" % (platform_id, retval)) # self._verify_invalid_platform_id(platform_id, retval) # # def _get_connected_instruments(self, platform_id, port_id): # log.debug("instr.get_connected_instruments(%r, %r)" % (platform_id, port_id)) # retval = self.oms.instr.get_connected_instruments(platform_id, port_id) # log.info("instr.get_connected_instruments(%r, %r) => %s" % (platform_id, port_id, retval)) # ports = self._verify_valid_platform_id(platform_id, retval) # port_dic = self._verify_valid_port_id(port_id, ports) # self.assertIsInstance(port_dic, dict) # return port_dic # # def test_al_get_connected_instruments(self): # platform_id = self.PLATFORM_ID # port_id = self.PORT_ID # self._get_connected_instruments(platform_id, port_id) # # def _connect_instrument(self, platform_id, port_id, instrument_id, attributes): # log.debug("instr.connect_instrument(%r, %r, %r, %r)" % (platform_id, port_id, instrument_id, attributes)) # retval = self.oms.instr.connect_instrument(platform_id, port_id, instrument_id, attributes) # log.info("instr.connect_instrument(%r, %r, %r, %r) => %s" % (platform_id, port_id, instrument_id, attributes, retval)) # return retval # # def _connect_instrument_valid(self, platform_id, port_id, instrument_id, attributes): # retval = self._connect_instrument(platform_id, port_id, instrument_id, attributes) # ports = self._verify_valid_platform_id(platform_id, retval) # port_dic = self._verify_valid_port_id(port_id, ports) # self.assertIsInstance(port_dic, dict) # instr_val = self._verify_valid_instrument_id(instrument_id, port_dic) # if isinstance(instr_val, dict): # for attr_name in attributes: # self.assertTrue(attr_name in instr_val) # attr_val = instr_val[attr_name] # self.assertEquals(attributes[attr_name], attr_val, # "value given %s different from value received %s" % ( # attributes[attr_name], attr_val)) # # def _disconnect_instrument(self, platform_id, port_id, instrument_id): # log.debug("instr.disconnect_instrument(%r, %r, %r)" % (platform_id, port_id, instrument_id)) # retval = self.oms.instr.disconnect_instrument(platform_id, port_id, instrument_id) # log.info("instr.disconnect_instrument(%r, %r, %r) => %s" % (platform_id, port_id, instrument_id, retval)) # return retval # # def _disconnect_instrument_valid(self, platform_id, port_id, instrument_id): # retval = 
self._disconnect_instrument(platform_id, port_id, instrument_id) # ports = self._verify_valid_platform_id(platform_id, retval) # port_dic = self._verify_valid_port_id(port_id, ports) # self.assertIsInstance(port_dic, dict) # self.assertIn(instrument_id, port_dic) # self._verify_instrument_disconnected(instrument_id, port_dic[instrument_id]) # # def test_am_connect_and_disconnect_instrument(self): # platform_id = self.PLATFORM_ID # port_id = self.PORT_ID # instrument_id = self.INSTRUMENT_ID # # # connect (if not already connected) # port_dic = self._get_connected_instruments(platform_id, port_id) # if not instrument_id in port_dic: # # TODO proper values # attributes = {'maxCurrentDraw': 1, 'initCurrent': 2, # 'dataThroughput': 3, 'instrumentType': 'FOO'} # # self._connect_instrument_valid(platform_id, port_id, instrument_id, attributes) # # # disconnect: # port_dic = self._get_connected_instruments(platform_id, port_id) # if instrument_id in port_dic: # self._disconnect_instrument_valid(platform_id, port_id, instrument_id) # # def test_am_connect_instrument_invalid_platform_id(self): # platform_id = BOGUS_PLATFORM_ID # port_id = self.PORT_ID # instrument_id = self.INSTRUMENT_ID # attributes = {} # retval = self._connect_instrument(platform_id, port_id, instrument_id, attributes) # self._verify_invalid_platform_id(platform_id, retval) # # def test_am_connect_instrument_invalid_port_id(self): # platform_id = self.PLATFORM_ID # port_id = BOGUS_PORT_ID # instrument_id = self.INSTRUMENT_ID # attributes = {} # retval = self._connect_instrument(platform_id, port_id, instrument_id, attributes) # ports = self._verify_valid_platform_id(platform_id, retval) # self._verify_invalid_port_id(port_id, ports) # # def test_am_connect_instrument_invalid_instrument_id(self): # platform_id = self.PLATFORM_ID # port_id = self.PORT_ID # instrument_id = BOGUS_INSTRUMENT_ID # attributes = {} # retval = self._connect_instrument(platform_id, port_id, instrument_id, attributes) # ports = self._verify_valid_platform_id(platform_id, retval) # port_dic = self._verify_valid_port_id(port_id, ports) # self.assertIsInstance(port_dic, dict) # self._verify_invalid_instrument_id(instrument_id, port_dic) # # def test_an_turn_on_platform_port(self): # platform_id = self.PLATFORM_ID # ports = self._get_platform_ports(platform_id) # for port_id in ports.iterkeys(): # retval = self.oms.port.turn_on_platform_port(platform_id, port_id) # log.info("port.turn_on_platform_port(%s,%s) => %s" % (platform_id, port_id, retval)) # portRes = self._verify_valid_platform_id(platform_id, retval) # res = self._verify_valid_port_id(port_id, portRes) # self.assertEquals(res, NormalResponse.PORT_TURNED_ON) # # def test_an_turn_on_platform_port_invalid_platform_id(self): # # use valid id for get_platform_ports # platform_id = self.PLATFORM_ID # ports = self._get_platform_ports(platform_id) # # # use invalid id for turn_on_platform_port # requested_platform_id = BOGUS_PLATFORM_ID # for port_id in ports.iterkeys(): # retval = self.oms.port.turn_on_platform_port(requested_platform_id, port_id) # log.info("port.turn_on_platform_port(%r, %r) => %s" % (requested_platform_id, port_id, retval)) # self._verify_invalid_platform_id(requested_platform_id, retval) # # def test_ao_turn_off_platform_port(self): # platform_id = self.PLATFORM_ID # ports = self._get_platform_ports(platform_id) # for port_id in ports.iterkeys(): # retval = self.oms.port.turn_off_platform_port(platform_id, port_id) # log.info("port.turn_off_platform_port(%r, %r) => %s" % (platform_id, 
port_id, retval)) # portRes = self._verify_valid_platform_id(platform_id, retval) # res = self._verify_valid_port_id(port_id, portRes) # self.assertEquals(res, NormalResponse.PORT_TURNED_OFF) # # def test_ao_turn_off_platform_port_invalid_platform_id(self): # # use valid for get_platform_ports # platform_id = self.PLATFORM_ID # ports = self._get_platform_ports(platform_id) # # # use invalid for turn_off_platform_port # requested_platform_id = BOGUS_PLATFORM_ID # for port_id in ports.iterkeys(): # retval = self.oms.port.turn_off_platform_port(requested_platform_id, port_id) # log.info("port.turn_off_platform_port(%r, %r) => %s" % (requested_platform_id, port_id, retval)) # self._verify_invalid_platform_id(requested_platform_id, retval) # # ################################################################### # # EVENTS # ################################################################### # # # _notifications: [event_instance, ...] # _notifications = [] # _http_server = None # # # Based on the HTTP server launched via start_http_server (if called), or just an ad hoc url. # _use_fqdn_for_event_listener = False # # # some dummy value ok for tests only dealing with (un)registration: # _url_for_listener = "http://listener.example.org" # # @classmethod # def start_http_server(cls, port=5000): # """ # Starts a server on the localhost to handle the reception of events reported by OMS. # The received events are kept in the member _notifications, which can be # consulted directly and it is also returned by stop_http_server. # # @param port Port to bind the server to. By default, 5000. # # @return URL that can be used as a listener for registration with OMS. # This URL is composed based on the port and socket.getfqdn() # if _use_fqdn_for_event_listener is True. # """ # def application(environ, start_response): # input = environ['wsgi.input'] # body = "\n".join(input.readlines()) # event_instance = yaml.load(body) # log.debug('http server received event_instance=%s' % str(event_instance)) # # cls._notifications.append(event_instance) # # status = '200 OK' # headers = [('Content-Type', 'text/plain')] # start_response(status, headers) # return status # # cls._notifications = [] # cls._http_server = WSGIServer(('localhost', port), application) # log.debug("starting http server for receiving event notifications...") # cls._http_server.start() # address = cls._http_server.address # log.info("http server for event listener started on %s:%s" % address) # # if cls._use_fqdn_for_event_listener: # cls._url_for_listener = "http://%s:%s" % (socket.getfqdn(), address[1]) # else: # cls._url_for_listener = "http://%s:%s" % address # log.info("url_for_listener = %s" % cls._url_for_listener) # # return cls._url_for_listener # # @classmethod # def stop_http_server(cls): # """ # Stops the http server returning the notifications list, # which is internally re-initialized. 
# """ # if cls._http_server: # address = cls._http_server.address # log.info("stopping http server: address: host=%r port=%r" % address) # cls._http_server.stop() # cls._http_server = None # # ret = cls._notifications # cls._notifications = [] # re-initialize # return ret # # def _get_registered_event_listeners(self): # listeners = self.oms.event.get_registered_event_listeners() # log.info("event.get_registered_event_listeners() => %s" % str(listeners)) # self.assertIsInstance(listeners, dict) # return listeners # # def _register_event_listener(self, url): # log.info("event.register_event_listener(%r)" % url) # result = self.oms.event.register_event_listener(url) # log.info("event.register_event_listener(%r) => %s" % (url, str(result))) # self.assertIsInstance(result, dict) # self.assertEquals(len(result), 1) # self.assertTrue(url in result) # # listeners = self._get_registered_event_listeners() # self.assertTrue(url in listeners) # # def unregister(): # self._unregister_event_listener(url) # self.addCleanup(unregister) # # return result[url] # # def _unregister_event_listener(self, url): # result = self.oms.event.unregister_event_listener(url) # log.info("event.unregister_event_listener(%r) => %s" % (url, str(result))) # self.assertIsInstance(result, dict) # self.assertEquals(len(result), 1) # self.assertTrue(url in result) # # # check that it's unregistered # listeners = self._get_registered_event_listeners() # self.assertTrue(url not in listeners) # # return result[url] # # def test_be_register_and_unregister_event_listener(self): # url = self._url_for_listener # self._register_event_listener(url) # self._unregister_event_listener(url) # # def test_bi_unregister_event_listener_not_registered_url(self): # url = "http://_never_registered_url" # res = self._unregister_event_listener(url) # self.assertEquals(0, res) # # def test_generate_test_event_and_reception(self): # # 1. start http server for listener: # url = self.start_http_server() # self.addCleanup(self.stop_http_server) # # # 2. register listener: # self._register_event_listener(url) # # # 3. request generation of test event: # event = { # 'message' : "fake event triggered from CI", # 'platform_id' : "some_platform_id", # 'severity' : "3", # 'group ' : "power", # } # log.debug("event.generate_test_event(%r)" % event) # result = self.oms.event.generate_test_event(event) # log.info("event.generate_test_event(%r) => %r" % (event, result)) # self.assertEquals(result, True) # # # 4. wait until test event is notified # max_wait = 30 # log.info("waiting for a max of %d secs for test event to be notified..." % max_wait) # wait_until = time.time() + max_wait # got_it = None # while not got_it and time.time() <= wait_until: # time.sleep(1) # for evt in self._notifications: # if event['message'] == evt['message']: # got_it = evt # break # # self.assertIsNotNone(got_it, "didn't get expected test event notification within %d " \ # "secs. (Got %d event notifications.)" % ( # max_wait, len(self._notifications))) # log.info("got test event: %s" % got_it) # # def test_get_checksum(self): # platform_id = self.PLATFORM_ID # retval = self.oms.config.get_checksum(platform_id) # log.info("config.get_checksum(%r) => %s" % (platform_id, retval))
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I am trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over. The drag-and-drop processes can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event). If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence. Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than to do it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave(). At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit(). """
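A minimal sketch of the protocol just described follows; the class names and widgets are ours, and only Tkdnd.dnd_start() and the dnd_* hook names come from this module:

    import Tkinter, Tkdnd

    class Dragged:
        # The source object handed to dnd_start(); not necessarily a widget.
        def dnd_end(self, target, event):
            print "drag ended on", target

    class Target:
        # A target object, produced here by a widget's dnd_accept attribute.
        def dnd_accept(self, source, event):
            return self                      # accept: become the target object
        def dnd_enter(self, source, event):
            pass                             # pointer entered this target
        def dnd_motion(self, source, event):
            pass                             # pointer moved within this target
        def dnd_leave(self, source, event):
            pass                             # pointer left without dropping
        def dnd_commit(self, source, event):
            print "dropped", source

    root = Tkinter.Tk()
    target = Target()
    drop_label = Tkinter.Label(root, text="drop here")
    drop_label.dnd_accept = target.dnd_accept    # make the widget a potential target
    drop_label.pack()
    drag_label = Tkinter.Label(root, text="drag me")
    drag_label.bind("<ButtonPress>",
                    lambda event: Tkdnd.dnd_start(Dragged(), event))
    drag_label.pack()
    root.mainloop()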
""" TestCmd.py: a testing framework for commands and scripts. The TestCmd module provides a framework for portable automated testing of executable commands and scripts (in any language, not just Python), especially commands and scripts that require file system interaction. In addition to running tests and evaluating conditions, the TestCmd module manages and cleans up one or more temporary workspace directories, and provides methods for creating files and directories in those workspace directories from in-line data, here-documents), allowing tests to be completely self-contained. A TestCmd environment object is created via the usual invocation: import TestCmd test = TestCmd.TestCmd() There are a bunch of keyword arguments available at instantiation: test = TestCmd.TestCmd(description = 'string', program = 'program_or_script_to_test', interpreter = 'script_interpreter', workdir = 'prefix', subdir = 'subdir', verbose = Boolean, match = default_match_function, diff = default_diff_function, combine = Boolean) There are a bunch of methods that let you do different things: test.verbose_set(1) test.description_set('string') test.program_set('program_or_script_to_test') test.interpreter_set('script_interpreter') test.interpreter_set(['script_interpreter', 'arg']) test.workdir_set('prefix') test.workdir_set('') test.workpath('file') test.workpath('subdir', 'file') test.subdir('subdir', ...) test.rmdir('subdir', ...) test.write('file', "contents\n") test.write(['subdir', 'file'], "contents\n") test.read('file') test.read(['subdir', 'file']) test.read('file', mode) test.read(['subdir', 'file'], mode) test.writable('dir', 1) test.writable('dir', None) test.preserve(condition, ...) test.cleanup(condition) test.command_args(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program') test.run(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', chdir = 'directory_to_chdir_to', stdin = 'input to feed to the program\n') universal_newlines = True) p = test.start(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', universal_newlines = None) test.finish(self, p) test.pass_test() test.pass_test(condition) test.pass_test(condition, function) test.fail_test() test.fail_test(condition) test.fail_test(condition, function) test.fail_test(condition, function, skip) test.no_result() test.no_result(condition) test.no_result(condition, function) test.no_result(condition, function, skip) test.stdout() test.stdout(run) test.stderr() test.stderr(run) test.symlink(target, link) test.banner(string) test.banner(string, width) test.diff(actual, expected) test.match(actual, expected) test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n") test.match_exact(["actual 1\n", "actual 2\n"], ["expected 1\n", "expected 2\n"]) test.match_re("actual 1\nactual 2\n", regex_string) test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes) test.match_re_dotall("actual 1\nactual 2\n", regex_string) test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes) test.tempdir() test.tempdir('temporary-directory') test.sleep() test.sleep(seconds) test.where_is('foo') test.where_is('foo', 'PATH1:PATH2') test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4') test.unlink('file') test.unlink('subdir', 'file') The TestCmd module provides pass_test(), fail_test(), and no_result() unbound functions that report test results for use with the 
Aegis change management system. These methods terminate the test immediately, reporting PASSED, FAILED, or NO RESULT respectively, and exiting with status 0 (success), 1, or 2 respectively. This allows for a distinction between an actual failed test and a test that could not be properly evaluated because of an external condition (such as a full file system or incorrect permissions).

    import TestCmd

    TestCmd.pass_test()
    TestCmd.pass_test(condition)
    TestCmd.pass_test(condition, function)

    TestCmd.fail_test()
    TestCmd.fail_test(condition)
    TestCmd.fail_test(condition, function)
    TestCmd.fail_test(condition, function, skip)

    TestCmd.no_result()
    TestCmd.no_result(condition)
    TestCmd.no_result(condition, function)
    TestCmd.no_result(condition, function, skip)

The TestCmd module also provides unbound functions that handle matching in the same way as the match_*() methods described above.

    import TestCmd

    test = TestCmd.TestCmd(match = TestCmd.match_exact)
    test = TestCmd.TestCmd(match = TestCmd.match_re)
    test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)

The TestCmd module provides unbound functions that can be used for the "diff" argument to TestCmd.TestCmd instantiation:

    import TestCmd

    test = TestCmd.TestCmd(match = TestCmd.match_re, diff = TestCmd.diff_re)
    test = TestCmd.TestCmd(diff = TestCmd.simple_diff)

The "diff" argument can also be used with standard difflib functions:

    import difflib

    test = TestCmd.TestCmd(diff = difflib.context_diff)
    test = TestCmd.TestCmd(diff = difflib.unified_diff)

Lastly, the where_is() method also exists in an unbound function version.

    import TestCmd

    TestCmd.where_is('foo')
    TestCmd.where_is('foo', 'PATH1:PATH2')
    TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
"""
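Putting a few of these methods together, a complete (if tiny) test might look like the sketch below; the script under test, its expected output, and the file names are all assumptions:

    import TestCmd

    test = TestCmd.TestCmd(program = 'hello.py',
                           interpreter = 'python',
                           workdir = '')        # create a temporary workspace
    test.write('input.txt', "world\n")          # in-line file creation
    test.run(arguments = 'input.txt')
    if test.stdout() != "hello world\n":        # expected output is an assumption
        test.fail_test()
    test.pass_test()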
""" >>> from django.core.paginator import Paginator >>> from pagination.templatetags.pagination_tags import paginate >>> from django.template import Template, Context >>> p = Paginator(range(15), 2) >>> pg = paginate({'paginator': p, 'page_obj': p.page(1)}) >>> pg['pages'] [1, 2, 3, 4, 5, 6, 7, 8] >>> pg['records']['first'] 1 >>> pg['records']['last'] 2 >>> p = Paginator(range(15), 2) >>> pg = paginate({'paginator': p, 'page_obj': p.page(8)}) >>> pg['pages'] [1, 2, 3, 4, 5, 6, 7, 8] >>> pg['records']['first'] 15 >>> pg['records']['last'] 15 >>> p = Paginator(range(17), 2) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> p = Paginator(range(19), 2) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2, 3, 4, None, 7, 8, 9, 10] >>> p = Paginator(range(21), 2) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2, 3, 4, None, 8, 9, 10, 11] # Testing orphans >>> p = Paginator(range(5), 2, 1) >>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages'] [1, 2] >>> p = Paginator(range(21), 2, 1) >>> pg = paginate({'paginator': p, 'page_obj': p.page(1)}) >>> pg['pages'] [1, 2, 3, 4, None, 7, 8, 9, 10] >>> pg['records']['first'] 1 >>> pg['records']['last'] 2 >>> p = Paginator(range(21), 2, 1) >>> pg = paginate({'paginator': p, 'page_obj': p.page(10)}) >>> pg['pages'] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> pg['records']['first'] 19 >>> pg['records']['last'] 21 >>> t = Template("{% load pagination_tags %}{% autopaginate var 2 %}{% paginate %}") >>> from django.http import HttpRequest as DjangoHttpRequest >>> class HttpRequest(DjangoHttpRequest): ... page = 1 >>> t.render(Context({'var': range(21), 'request': HttpRequest()})) u'\\n\\n<div class="pagination">... >>> >>> t = Template("{% load pagination_tags %}{% autopaginate var %}{% paginate %}") >>> t.render(Context({'var': range(21), 'request': HttpRequest()})) u'\\n\\n<div class="pagination">... >>> t = Template("{% load pagination_tags %}{% autopaginate var 20 %}{% paginate %}") >>> t.render(Context({'var': range(21), 'request': HttpRequest()})) u'\\n\\n<div class="pagination">... >>> t = Template("{% load pagination_tags %}{% autopaginate var by %}{% paginate %}") >>> t.render(Context({'var': range(21), 'by': 20, 'request': HttpRequest()})) u'\\n\\n<div class="pagination">... 
>>> t = Template("{% load pagination_tags %}{% autopaginate var by as foo %}{{ foo }}")
>>> t.render(Context({'var': range(21), 'by': 20, 'request': HttpRequest()}))
u'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]'
>>>

# Testing InfinitePaginator
>>> from paginator import InfinitePaginator
>>> InfinitePaginator
<class 'pagination.paginator.InfinitePaginator'>
>>> p = InfinitePaginator(range(20), 2, link_template='/bacon/page/%d')
>>> p.validate_number(2)
2
>>> p.orphans
0
>>> p3 = p.page(3)
>>> p3
<Page 3>
>>> p3.end_index()
6
>>> p3.has_next()
True
>>> p3.has_previous()
True
>>> p.page(10).has_next()
False
>>> p.page(1).has_previous()
False
>>> p3.next_link()
'/bacon/page/4'
>>> p3.previous_link()
'/bacon/page/2'

# Testing FinitePaginator
>>> from paginator import FinitePaginator
>>> FinitePaginator
<class 'pagination.paginator.FinitePaginator'>
>>> p = FinitePaginator(range(20), 2, offset=10, link_template='/bacon/page/%d')
>>> p.validate_number(2)
2
>>> p.orphans
0
>>> p3 = p.page(3)
>>> p3
<Page 3>
>>> p3.start_index()
10
>>> p3.end_index()
6
>>> p3.has_next()
True
>>> p3.has_previous()
True
>>> p3.next_link()
'/bacon/page/4'
>>> p3.previous_link()
'/bacon/page/2'

>>> p = FinitePaginator(range(20), 20, offset=10, link_template='/bacon/page/%d')
>>> p2 = p.page(2)
>>> p2
<Page 2>
>>> p2.has_next()
False
>>> p3.has_previous()
True
>>> p2.next_link()
>>> p2.previous_link()
'/bacon/page/1'

>>> from pagination.middleware import PaginationMiddleware
>>> from django.core.handlers.wsgi import WSGIRequest
>>> from StringIO import StringIO
>>> middleware = PaginationMiddleware()
>>> request = WSGIRequest({'REQUEST_METHOD': 'POST', 'CONTENT_TYPE': 'multipart', 'wsgi.input': StringIO()})
>>> middleware.process_request(request)
>>> request.upload_handlers.append('asdf')
"""
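The middleware exercised at the end of these doctests has to be installed before a ``page`` attribute is available on real requests. A minimal sketch of the relevant Django settings follows; the app label matches the imports above, and the rest of each tuple is elided:

    INSTALLED_APPS = (
        # ...
        'pagination',
    )

    MIDDLEWARE_CLASSES = (
        # ...
        'pagination.middleware.PaginationMiddleware',
    )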
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This script generates a data file containing all Unicode information needed
# by KCharSelect.
#
##############################################################################
# Copyright (C) 2007 NAME <EMAIL>
# Copyright (C) 2016 NAME <EMAIL>
#
# This script is free software; you can redistribute it and/or modify it under
# the terms of the GNU Library General Public License as published by the Free
# Software Foundation; either version 2 of the License, or (at your option)
# any later version.
#
# This script is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public
# License for more details.
#
# You should have received a copy of the GNU Library General Public License
# along with this library; see the file COPYING.LIB. If not, write to the
# Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
##############################################################################
#
# The current directory must contain the following files that can be found at
# http://www.unicode.org/Public/UNIDATA/:
# - UnicodeData.txt
# - Unihan_Readings.txt (you need to uncompress it from Unihan.zip)
# - NamesList.txt
# - Blocks.txt
#
# The generated file is named "kcharselect-data" and has to be put in
# kwidgetsaddons/src. Additionally a translation dummy named
# "kcharselect-translation.cpp" is generated and has to be placed in the same
# directory.
#
# FILE STRUCTURE
#
# The generated file is a binary file. The first 40 bytes are the header and
# contain the position of each part of the file. Each entry is uint32.
#
# pos  content
#  0   names strings begin
#  4   names offsets begin
#  8   details strings begin
# 12   details offsets begin
# 16   block strings begin
# 20   block offsets begin
# 24   section strings begin
# 28   section offsets begin
# 32   unihan strings begin
# 36   unihan offsets begin
#
# The string parts always contain all strings in a row, followed by a 0x00
# byte. There is one exception: the data for seeAlso in details is only 2
# bytes (as it is always _one_ unicode character) and _not_ followed by a
# 0x00 byte.
#
# The offset parts contain entries with a fixed length. Unicode characters
# are always uint16 and offsets uint32. Offsets are positions in the data
# file.
#
# names_offsets:
#   each entry 6 bytes
#   16bit: unicode
#   32bit: offset to name in names_strings
#
# names_strings:
#   the first byte is the category (same values as QChar::Category),
#   directly followed by the character name (terminated by 0x00)
#
# nameslist_offsets:
#   char, alias, alias_count, note, note_count, approxEquiv, approxEquiv_count, equiv, equiv_count, seeAlso, seeAlso_count
#   16    32     8            32    8           32           8                  32     8            32       8
#   => each entry 27 bytes
#
# blocks_offsets:
#   each entry 4 bytes
#   16bit: start unicode
#   16bit: end unicode
#   Note that there is no string offset.
#
# section_offsets:
#   each entry 4 bytes
#   16bit: section offset
#   16bit: block offset
#   Note that these offsets are _not_ positions in the data file but indexes.
#   For example 0x0403 means the fourth section includes the third block.
#
# unihan_offsets:
#   each entry 30 bytes
#   16bit: unicode
#   32bit: offset to unihan_strings for Definition
#   32bit: offset to unihan_strings for Cantonese
#   32bit: offset to unihan_strings for Mandarin
#   32bit: offset to unihan_strings for Tang
#   32bit: offset to unihan_strings for Korean
#   32bit: offset to unihan_strings for JapaneseKun
#   32bit: offset to unihan_strings for JapaneseOn
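# A short sketch of reading the 40-byte header laid out above; the file name
# comes from this description, while the little-endian byte order is an
# assumption not stated here:
#
#     import struct
#
#     with open('kcharselect-data', 'rb') as f:
#         header = struct.unpack('<10I', f.read(40))  # ten uint32 positions
#
#     (names_strings, names_offsets, details_strings, details_offsets,
#      block_strings, block_offsets, section_strings, section_offsets,
#      unihan_strings, unihan_offsets) = header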
""" ======== Glossary ======== .. glossary:: along an axis Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). Many operation can take place along one of these axes. For example, we can sum each row of an array, in which case we operate along columns, or axis 1:: >>> x = np.arange(12).reshape((3,4)) >>> x array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.sum(axis=1) array([ 6, 22, 38]) array A homogeneous container of numerical elements. Each element in the array occupies a fixed amount of memory (hence homogeneous), and can be a numerical element of a single type (such as float, int or complex) or a combination (such as ``(float, int, float)``). Each array has an associated data-type (or ``dtype``), which describes the numerical type of its elements:: >>> x = np.array([1, 2, 3], float) >>> x array([ 1., 2., 3.]) >>> x.dtype # floating point number, 64 bits of memory per element dtype('float64') # More complicated data type: each array element is a combination of # and integer and a floating point number >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')]) Fast element-wise operations, called `ufuncs`_, operate on arrays. array_like Any sequence that can be interpreted as an ndarray. This includes nested lists, tuples, scalars and existing arrays. attribute A property of an object that can be accessed using ``obj.attribute``, e.g., ``shape`` is an attribute of an array:: >>> x = np.array([1, 2, 3]) >>> x.shape (3,) BLAS `Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_ broadcast NumPy can do operations on arrays whose shapes are mismatched:: >>> x = np.array([1, 2]) >>> y = np.array([[3], [4]]) >>> x array([1, 2]) >>> y array([[3], [4]]) >>> x + y array([[4, 5], [5, 6]]) See `doc.broadcasting`_ for more information. C order See `row-major` column-major A way to represent items in a N-dimensional array in the 1-dimensional computer memory. In column-major order, the leftmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the column-major order as:: [1, 4, 2, 5, 3, 6] Column-major order is also known as the Fortran order, as the Fortran programming language uses it. decorator An operator that transforms a function. For example, a ``log`` decorator may be defined to print debugging information upon function execution:: >>> def log(f): ... def new_logging_func(*args, **kwargs): ... print "Logging call with parameters:", args, kwargs ... return f(*args, **kwargs) ... ... return new_logging_func Now, when we define a function, we can "decorate" it using ``log``:: >>> @log ... def add(a, b): ... return a + b Calling ``add`` then yields: >>> add(1, 2) Logging call with parameters: (1, 2) {} 3 dictionary Resembling a language dictionary, which provides a mapping between words and descriptions thereof, a Python dictionary is a mapping between two objects:: >>> x = {1: 'one', 'two': [1, 2]} Here, `x` is a dictionary mapping keys to values, in this case the integer 1 to the string "one", and the string "two" to the list ``[1, 2]``. The values may be accessed using their corresponding keys:: >>> x[1] 'one' >>> x['two'] [1, 2] Note that dictionaries are not stored in any specific order. 
      Also, most mutable (see *immutable* below) objects, such as lists, may
      not be used as keys.

      For more information on dictionaries, read the `Python tutorial
      <http://docs.python.org/tut>`_.

   Fortran order
      See `column-major`

   flattened
      Collapsed to a one-dimensional array. See `ndarray.flatten`_ for
      details.

   immutable
      An object that cannot be modified after execution is called immutable.
      Two common examples are strings and tuples.

   instance
      A class definition gives the blueprint for constructing an object::

        >>> class House(object):
        ...     wall_colour = 'white'

      Yet, we have to *build* a house before it exists::

        >>> h = House() # build a house

      Now, ``h`` is called a ``House`` instance. An instance is therefore a
      specific realisation of a class.

   iterable
      A sequence that allows "walking" (iterating) over items, typically
      using a loop such as::

        >>> x = [1, 2, 3]
        >>> [item**2 for item in x]
        [1, 4, 9]

      It is often used in combination with ``enumerate``::

        >>> keys = ['a','b','c']
        >>> for n, k in enumerate(keys):
        ...     print "Key %d: %s" % (n, k)
        ...
        Key 0: a
        Key 1: b
        Key 2: c

   list
      A Python container that can hold any number of objects or items. The
      items do not have to be of the same type, and can even be lists
      themselves::

        >>> x = [2, 2.0, "two", [2, 2.0]]

      The list `x` contains 4 items, each of which can be accessed
      individually::

        >>> x[2] # the string 'two'
        'two'

        >>> x[3] # a list, containing an integer 2 and a float 2.0
        [2, 2.0]

      It is also possible to select more than one item at a time, using
      *slicing*::

        >>> x[0:2] # or, equivalently, x[:2]
        [2, 2.0]

      In code, arrays are often conveniently expressed as nested lists::

        >>> np.array([[1, 2], [3, 4]])
        array([[1, 2],
               [3, 4]])

      For more information, read the section on lists in the `Python
      tutorial <http://docs.python.org/tut>`_. For a mapping type
      (key-value), see *dictionary*.

   mask
      A boolean array, used to select only certain elements for an
      operation::

        >>> x = np.arange(5)
        >>> x
        array([0, 1, 2, 3, 4])
        >>> mask = (x > 2)
        >>> mask
        array([False, False, False,  True,  True], dtype=bool)
        >>> x[mask] = -1
        >>> x
        array([ 0,  1,  2, -1, -1])

   masked array
      An array that suppresses values indicated by a mask::

        >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
        >>> x
        masked_array(data = [-- 2.0 --],
                     mask = [ True False  True],
                     fill_value = 1e+20)
        <BLANKLINE>
        >>> x + [1, 2, 3]
        masked_array(data = [-- 4.0 --],
                     mask = [ True False  True],
                     fill_value = 1e+20)
        <BLANKLINE>

      Masked arrays are often used when operating on arrays containing
      missing or invalid entries.

   matrix
      A 2-dimensional ndarray that preserves its two-dimensional nature
      throughout operations. It has certain special operations, such as
      ``*`` (matrix multiplication) and ``**`` (matrix power), defined::

        >>> x = np.mat([[1, 2], [3, 4]])
        >>> x
        matrix([[1, 2],
                [3, 4]])
        >>> x**2
        matrix([[ 7, 10],
                [15, 22]])

   method
      A function associated with an object. For example, each ndarray has a
      method called ``repeat``::

        >>> x = np.array([1, 2, 3])
        >>> x.repeat(2)
        array([1, 1, 2, 2, 3, 3])

   ndarray
      See *array*.

   record array
      An `ndarray`_ with a `structured data type`_ which has been subclassed
      as np.recarray and whose dtype is of type np.record, making the fields
      of its data type accessible by attribute.

   reference
      If ``a`` is a reference to ``b``, then ``(a is b) == True``.
      Therefore, ``a`` and ``b`` are different names for the same Python
      object.

   row-major
      A way to represent items in a N-dimensional array in the 1-dimensional
      computer memory.
      In row-major order, the rightmost index "varies the fastest": for
      example the array::

        [[1, 2, 3],
         [4, 5, 6]]

      is represented in the row-major order as::

        [1, 2, 3, 4, 5, 6]

      Row-major order is also known as the C order, as the C programming
      language uses it. New NumPy arrays are by default in row-major order.

   self
      Often seen in method signatures, ``self`` refers to the instance of
      the associated class. For example:

        >>> class Paintbrush(object):
        ...     color = 'blue'
        ...
        ...     def paint(self):
        ...         print "Painting the city %s!" % self.color
        ...
        >>> p = Paintbrush()
        >>> p.color = 'red'
        >>> p.paint() # self refers to 'p'
        Painting the city red!

   slice
      Used to select only certain elements from a sequence::

        >>> x = range(5)
        >>> x
        [0, 1, 2, 3, 4]

        >>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
        [1, 2]

        >>> x[1:5:2] # slice from 1 to 5, but skipping every second element
        [1, 3]

        >>> x[::-1] # slice a sequence in reverse
        [4, 3, 2, 1, 0]

      Arrays may have more than one dimension, each of which can be sliced
      individually::

        >>> x = np.array([[1, 2], [3, 4]])
        >>> x
        array([[1, 2],
               [3, 4]])
        >>> x[:, 1]
        array([2, 4])

   structured data type
      A data type composed of other datatypes.

   tuple
      A sequence that may contain a variable number of types of any kind. A
      tuple is immutable, i.e., once constructed it cannot be changed.
      Similar to a list, it can be indexed and sliced::

        >>> x = (1, 'one', [1, 2])
        >>> x
        (1, 'one', [1, 2])

        >>> x[0]
        1

        >>> x[:2]
        (1, 'one')

      A useful concept is "tuple unpacking", which allows variables to be
      assigned to the contents of a tuple::

        >>> x, y = (1, 2)
        >>> x, y = 1, 2

      This is often used when a function returns multiple values:

        >>> def return_many():
        ...     return 1, 'alpha', None

        >>> a, b, c = return_many()
        >>> a, b, c
        (1, 'alpha', None)

        >>> a
        1
        >>> b
        'alpha'

   ufunc
      Universal function. A fast element-wise array operation. Examples
      include ``add``, ``sin`` and ``logical_or``.

   view
      An array that does not own its data, but refers to another array's
      data instead. For example, we may create a view that only shows every
      second element of another array::

        >>> x = np.arange(5)
        >>> x
        array([0, 1, 2, 3, 4])

        >>> y = x[::2]
        >>> y
        array([0, 2, 4])

        >>> x[0] = 3 # changing x changes y as well, since y is a view on x
        >>> y
        array([3, 2, 4])

   wrapper
      Python is a high-level (highly abstracted, or English-like) language.
      This abstraction comes at a price in execution speed, and sometimes it
      becomes necessary to use lower level languages to do fast
      computations. A wrapper is code that provides a bridge between
      high-level and low-level languages, allowing, e.g., Python to execute
      code written in C or Fortran.

      Examples include ctypes, SWIG and Cython (which wraps C and C++) and
      f2py (which wraps Fortran).

"""
""" Numerical python functions written for compatability with matlab(TM) commands with the same names. Matlab(TM) compatible functions ------------------------------- :func:`cohere` Coherence (normalized cross spectral density) :func:`csd` Cross spectral density uing Welch's average periodogram :func:`detrend` Remove the mean or best fit line from an array :func:`find` Return the indices where some condition is true; numpy.nonzero is similar but more general. :func:`griddata` interpolate irregularly distributed data to a regular grid. :func:`prctile` find the percentiles of a sequence :func:`prepca` Principal Component Analysis :func:`psd` Power spectral density uing Welch's average periodogram :func:`rk4` A 4th order runge kutta integrator for 1D or ND systems :func:`specgram` Spectrogram (power spectral density over segments of time) Miscellaneous functions ------------------------- Functions that don't exist in matlab(TM), but are useful anyway: :meth:`cohere_pairs` Coherence over all pairs. This is not a matlab function, but we compute coherence a lot in my lab, and we compute it for a lot of pairs. This function is optimized to do this efficiently by caching the direct FFTs. :meth:`rk4` A 4th order Runge-Kutta ODE integrator in case you ever find yourself stranded without scipy (and the far superior scipy.integrate tools) record array helper functions ------------------------------- A collection of helper methods for numpyrecord arrays .. _htmlonly:: See :ref:`misc-examples-index` :meth:`rec2txt` pretty print a record array :meth:`rec2csv` store record array in CSV file :meth:`csv2rec` import record array from CSV file with type inspection :meth:`rec_append_fields` adds field(s)/array(s) to record array :meth:`rec_drop_fields` drop fields from record array :meth:`rec_join` join two record arrays on sequence of fields :meth:`rec_groupby` summarize data by groups (similar to SQL GROUP BY) :meth:`rec_summarize` helper code to filter rec array fields into new fields For the rec viewer functions(e rec2csv), there are a bunch of Format objects you can pass into the functions that will do things like color negative values red, set percent formatting and scaling, etc. Example usage:: r = csv2rec('somefile.csv', checkrows=0) formatd = dict( weight = FormatFloat(2), change = FormatPercent(2), cost = FormatThousands(2), ) rec2excel(r, 'test.xls', formatd=formatd) rec2csv(r, 'test.csv', formatd=formatd) scroll = rec2gtk(r, formatd=formatd) win = gtk.Window() win.set_size_request(600,800) win.add(scroll) win.show_all() gtk.main() Deprecated functions --------------------- The following are deprecated; please import directly from numpy (with care--function signatures may differ): :meth:`conv` convolution (numpy.convolve) :meth:`corrcoef` The matrix of correlation coefficients :meth:`hist` Histogram (numpy.histogram) :meth:`linspace` Linear spaced array from min to max :meth:`load` load ASCII file - use numpy.loadtxt :meth:`meshgrid` Make a 2D grid from 2 1 arrays (numpy.meshgrid) :meth:`polyfit` least squares best polynomial fit of x to y (numpy.polyfit) :meth:`polyval` evaluate a vector for a vector of polynomial coeffs (numpy.polyval) :meth:`save` save ASCII file - use numpy.savetxt :meth:`trapz` trapeziodal integration (trapz(x,y) -> numpy.trapz(y,x)) :meth:`vander` the Vandermonde matrix (numpy.vander) """
""" Wrappers to LAPACK library ========================== flapack -- wrappers for Fortran [*] LAPACK routines clapack -- wrappers for ATLAS LAPACK routines calc_lwork -- calculate optimal lwork parameters get_lapack_funcs -- query for wrapper functions. [*] If ATLAS libraries are available then Fortran routines actually use ATLAS routines and should perform equally well to ATLAS routines. Module flapack ++++++++++++++ In the following all function names are shown without type prefix (s,d,c,z). Optimal values for lwork can be computed using calc_lwork module. Linear Equations ---------------- Drivers:: lu,piv,x,info = gesv(a,b,overwrite_a=0,overwrite_b=0) lub,piv,x,info = gbsv(kl,ku,ab,b,overwrite_ab=0,overwrite_b=0) c,x,info = posv(a,b,lower=0,overwrite_a=0,overwrite_b=0) Computational routines:: lu,piv,info = getrf(a,overwrite_a=0) x,info = getrs(lu,piv,b,trans=0,overwrite_b=0) inv_a,info = getri(lu,piv,lwork=min_lwork,overwrite_lu=0) c,info = potrf(a,lower=0,clean=1,overwrite_a=0) x,info = potrs(c,b,lower=0,overwrite_b=0) inv_a,info = potri(c,lower=0,overwrite_c=0) inv_c,info = trtri(c,lower=0,unitdiag=0,overwrite_c=0) Linear Least Squares (LLS) Problems ----------------------------------- Drivers:: v,x,s,rank,info = gelss(a,b,cond=-1.0,lwork=min_lwork,overwrite_a=0,overwrite_b=0) Computational routines:: qr,tau,info = geqrf(a,lwork=min_lwork,overwrite_a=0) q,info = orgqr|ungqr(qr,tau,lwork=min_lwork,overwrite_qr=0,overwrite_tau=1) Generalized Linear Least Squares (LSE and GLM) Problems ------------------------------------------------------- Standard Eigenvalue and Singular Value Problems ----------------------------------------------- Drivers:: w,v,info = syev|heev(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0) w,v,info = syevd|heevd(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0) w,v,info = syevr|heevr(a,compute_v=1,lower=0,vrange=,irange=,atol=-1.0,lwork=min_lwork,overwrite_a=0) t,sdim,(wr,wi|w),vs,info = gees(select,a,compute_v=1,sort_t=0,lwork=min_lwork,select_extra_args=(),overwrite_a=0) wr,(wi,vl|w),vr,info = geev(a,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0) u,s,vt,info = gesdd(a,compute_uv=1,lwork=min_lwork,overwrite_a=0) Computational routines:: ht,tau,info = gehrd(a,lo=0,hi=n-1,lwork=min_lwork,overwrite_a=0) ba,lo,hi,pivscale,info = gebal(a,scale=0,permute=0,overwrite_a=0) Generalized Eigenvalue and Singular Value Problems -------------------------------------------------- Drivers:: w,v,info = sygv|hegv(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0) w,v,info = sygvd|hegvd(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0) (alphar,alphai|alpha),beta,vl,vr,info = ggev(a,b,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0,overwrite_b=0) Auxiliary routines ------------------ a,info = lauum(c,lower=0,overwrite_c=0) a = laswp(a,piv,k1=0,k2=len(piv)-1,off=0,inc=1,overwrite_a=0) Module clapack ++++++++++++++ Linear Equations ---------------- Drivers:: lu,piv,x,info = gesv(a,b,rowmajor=1,overwrite_a=0,overwrite_b=0) c,x,info = posv(a,b,lower=0,rowmajor=1,overwrite_a=0,overwrite_b=0) Computational routines:: lu,piv,info = getrf(a,rowmajor=1,overwrite_a=0) x,info = getrs(lu,piv,b,trans=0,rowmajor=1,overwrite_b=0) inv_a,info = getri(lu,piv,rowmajor=1,overwrite_lu=0) c,info = potrf(a,lower=0,clean=1,rowmajor=1,overwrite_a=0) x,info = potrs(c,b,lower=0,rowmajor=1,overwrite_b=0) inv_a,info = potri(c,lower=0,rowmajor=1,overwrite_c=0) inv_c,info = trtri(c,lower=0,unitdiag=0,rowmajor=1,overwrite_c=0) Auxiliary 
routines ------------------ a,info = lauum(c,lower=0,rowmajor=1,overwrite_c=0) Module calc_lwork +++++++++++++++++ Optimal lwork is maxwrk. Default is minwrk. minwrk,maxwrk = gehrd(prefix,n,lo=0,hi=n-1) minwrk,maxwrk = gesdd(prefix,m,n,compute_uv=1) minwrk,maxwrk = gelss(prefix,m,n,nrhs) minwrk,maxwrk = getri(prefix,n) minwrk,maxwrk = geev(prefix,n,compute_vl=1,compute_vr=1) minwrk,maxwrk = heev(prefix,n,lower=0) minwrk,maxwrk = syev(prefix,n,lower=0) minwrk,maxwrk = gees(prefix,n,compute_v=1) minwrk,maxwrk = geqrf(prefix,m,n) minwrk,maxwrk = gqr(prefix,m,n) """
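A short usage sketch of the ``get_lapack_funcs`` query interface described
above (assuming SciPy is installed; the return convention of ``gesv`` matches
the driver signature listed in the docstring)::

    import numpy as np
    from scipy.linalg import get_lapack_funcs

    a = np.array([[3., 1.], [1., 2.]])
    b = np.array([9., 8.])

    # select the type-matched driver (dgesv here, since inputs are float64)
    gesv, = get_lapack_funcs(('gesv',), (a, b))
    lu, piv, x, info = gesv(a, b)
    assert info == 0   # info == 0 signals success; x solves a @ x = b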
""" .. _usr01: User interaction ================ The simplest way to interact with the `Matlab2cpp`-toolbox is to use the `m2cpp` frontend. The script automatically creates files with various extensions containing translations and/or meta-information. .. autoprogram:: m2cpp:parser :prog: m2cpp For the user, the flags -o, -c, -s, -S, -r, -p -omp, -tbb are the useful flags. The flags -t, -T are good for debugging because they print the structure of the Abstract Syntax Tree (AST). The -d flag gives useful information on the parsing of the Matlab code and insight in how the AST is built. Suggest flags, -s, -S --------------------- Read the section :ref:`usr02_suggestion_engine` first. When using m2cpp the corresponding suggest is set with the flag -s. The suggest engine works well for simple cases. For more complex cases, not all the variables get a type suggestion and the suggested type could be wrong. The other suggest flag -S get the datatypes by running the (Matlab) code with Matlab. Information of the datatypes are written to files which can be extracted by the code translator. For this flag to work, in addition to having Matlab installed, the Matlab Engine API for Python has to be installed (see: `Install MATLAB Engine API for Python <http://se.mathworks.com/help/matlab/matlab_external/install-the-matlab-engine-for-python.html>`_). Matlab has to be able to run the code to extract the datatypes. So if the code require datafiles or special Matlab modules (e.g. numerical modules), these have to be available for this option to work. The Matlab suggest option is not 100%, but still quite good at suggesting datatypes. A downside with the using Matlab to suggest datatypes, is that Matlab takes some time to start up and then run the (Matlab) code. Multiple directories, -p paths_file ----------------------------------- In Matlab the script and function files have to be in the same folder for the function files to be found. To call a function script located in a different folder, the folder has to be added to path. This can be done with `addpath` or `path`. In a separate file from the Matlab main and function scripts, a separate script can be written to set the path to different folders:: Dir='/path_to_folder/SeismicLab/codes/'; path(path, strcat(Dir,'bp_filter/')); path(path, strcat(Dir,'decon')); path(path, strcat(Dir,'dephasing')); path(path, strcat(Dir,'fx')); ... The flag option `-p paths_file` can be set to parse such a file. Then Matlab as well as m2cpp can find function scripts that are located in other directories. .. _parallel_flags: Parallel flags, -omp, -tbb -------------------------- The program m2cpp can do parallelization of simple for loops (so called embarrasingly parallel). To let the program know which loops the user wants to parallelize, use the pragma `%#PARFOR` before the loop (similar to the way its done in OpenMP). The flags -omp and -tbb can then be used to chose if OpenMP code or TBB code will be inserted to parallelize the code. Matlab's `parfor` doesn't require the pragma `%#PARFOR` to parallelize. If neither -omp nor -tbb flag is used, no OpenMP or TBB code is inserted and we will get a sequential for loop. When compiling, try link flags `-fopenmp` for OpenMP and `-ltbb` for TBB. OpenMP is usually available for the compiler out of the box. TBB needs to be installed (see: https://www.threadingbuildingblocks.org/). The TBB code makes use of lambda functions which is a C++ feature. 
C++11 is probably not set as standard for the compiler, i.e., in the GNU compiler g++, the flag `-std=c++11` is required to make use of C++11 features. Quick translation functions --------------------------- Even though `m2cpp` is sufficient for performing all code translation, many of the examples in this manual are done through a python interface, since some of the python functionality also will be discussed. Given that `Matlab2cpp` is properly installed on your system, the python library is available in Python's path. The module is assumed imported as:: >>> import matlab2cpp Quick functions collection of frontend tools for performing code translation. Each of the function :py:func:`~matlab2cpp.qcpp`, :py:func:`~matlab2cpp.qhpp`, :py:func:`~matlab2cpp.qpy` and :py:func:`~matlab2cpp.qlog` are directly related to the functionality of the :program:`m2cpp` script. The name indicate the file extension that the script will create. In addition there are the three functions :py:func:`~matlab2cpp.qtree` and :py:func:`~matlab2cpp.qscript`. The former represents a summary of the created node tree. The latter is a simple translation tool that is more of a one-to-one translation. For an overview of the various quick-functions, see :ref:`dev01`. Plotting functionality ---------------------- Plotting functionality is available through a wrapper, which calls Python's matplotlib. If a Matlab code with plotting calls is translated, the file `SPlot.h` is generated. The C++ file that is generated also `#include` this file. To compile the generated code, the Python have to be included. The code in `SPlot.h` makes of C++11 features, so compiler options for C++11 may be needed as well. With the GNU compiler g++, I can compile the generated code with: `g++ my_cpp_file.cpp -o runfile -I /usr/include/python2.7/ -lpython2.7 -larmadillo -std=c++11` Additional flags could be -O3 (optimization) -ltbb (in case of TBB parallelization) """
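A minimal, hedged sketch of calling one of the quick functions listed above
(the exact translated output depends on the Matlab2cpp version, so the
result shown is only illustrative)::

    >>> import matlab2cpp
    >>> print(matlab2cpp.qscript("a = 4"))   # one-to-one translation
    a = 4 ;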
# (c) 2014 NAME <EMAIL>
# https://github.com/timraasveld/ansible-string-split-filter/
# (c) 2014 NAME <EMAIL>
# http://debops.org/
#
# License: CC0 1.0 Universal
#
# Statement of Purpose
#
# The laws of most jurisdictions throughout the world automatically confer
# exclusive Copyright and Related Rights (defined below) upon the creator and
# subsequent owner(s) (each and all, an "owner") of an original work of
# authorship and/or a database (each, a "Work").
#
# Certain owners wish to permanently relinquish those rights to a Work for the
# purpose of contributing to a commons of creative, cultural and scientific
# works ("Commons") that the public can reliably and without fear of later
# claims of infringement build upon, modify, incorporate in other works, reuse
# and redistribute as freely as possible in any form whatsoever and for any
# purposes, including without limitation commercial purposes. These owners may
# contribute to the Commons to promote the ideal of a free culture and the
# further production of creative, cultural and scientific works, or to gain
# reputation or greater distribution for their Work in part through the use and
# efforts of others.
#
# For these and/or other purposes and motivations, and without any expectation
# of additional consideration or compensation, the person associating CC0 with a
# Work (the "Affirmer"), to the extent that he or she is an owner of Copyright
# and Related Rights in the Work, voluntarily elects to apply CC0 to the Work
# and publicly distribute the Work under its terms, with knowledge of his or her
# Copyright and Related Rights in the Work and the meaning and intended legal
# effect of CC0 on those rights.
#
# 1. Copyright and Related Rights. A Work made available under CC0 may be
# protected by copyright and related or neighboring rights ("Copyright and
# Related Rights"). Copyright and Related Rights include, but are not limited
# to, the following:
#
#   i. the right to reproduce, adapt, distribute, perform, display, communicate,
#      and translate a Work;
#
#   ii. moral rights retained by the original author(s) and/or performer(s);
#
#   iii. publicity and privacy rights pertaining to a person's image or likeness
#        depicted in a Work;
#
#   iv. rights protecting against unfair competition in regards to a Work,
#       subject to the limitations in paragraph 4(a), below;
#
#   v. rights protecting the extraction, dissemination, use and reuse of data in
#      a Work;
#
#   vi. database rights (such as those arising under Directive 96/9/EC of the
#       European Parliament and of the Council of 11 March 1996 on the legal
#       protection of databases, and under any national implementation thereof,
#       including any amended or successor version of such directive); and
#
#   vii. other similar, equivalent or corresponding rights throughout the world
#        based on applicable law or treaty, and any national implementations
#        thereof.
#
# 2. Waiver. To the greatest extent permitted by, but not in contravention of,
# applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and
# unconditionally waives, abandons, and surrenders all of Affirmer's Copyright
# and Related Rights and associated claims and causes of action, whether now
# known or unknown (including existing as well as future claims and causes of
# action), in the Work (i) in all territories worldwide, (ii) for the maximum
# duration provided by applicable law or treaty (including future time
# extensions), (iii) in any current or future medium and for any number of
# copies, and (iv) for any purpose whatsoever, including without limitation
# commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes
# the Waiver for the benefit of each member of the public at large and to the
# detriment of Affirmer's heirs and successors, fully intending that such Waiver
# shall not be subject to revocation, rescission, cancellation, termination, or
# any other legal or equitable action to disrupt the quiet enjoyment of the Work
# by the public as contemplated by Affirmer's express Statement of Purpose.
#
# 3. Public License Fallback. Should any part of the Waiver for any reason be
# judged legally invalid or ineffective under applicable law, then the Waiver
# shall be preserved to the maximum extent permitted taking into account
# Affirmer's express Statement of Purpose. In addition, to the extent the Waiver
# is so judged Affirmer hereby grants to each affected person a royalty-free,
# non transferable, non sublicensable, non exclusive, irrevocable and
# unconditional license to exercise Affirmer's Copyright and Related Rights in
# the Work (i) in all territories worldwide, (ii) for the maximum duration
# provided by applicable law or treaty (including future time extensions), (iii)
# in any current or future medium and for any number of copies, and (iv) for any
# purpose whatsoever, including without limitation commercial, advertising or
# promotional purposes (the "License"). The License shall be deemed effective as
# of the date CC0 was applied by Affirmer to the Work. Should any part of the
# License for any reason be judged legally invalid or ineffective under
# applicable law, such partial invalidity or ineffectiveness shall not
# invalidate the remainder of the License, and in such case Affirmer hereby
# affirms that he or she will not (i) exercise any of his or her remaining
# Copyright and Related Rights in the Work or (ii) assert any associated claims
# and causes of action with respect to the Work, in either case contrary to
# Affirmer's express Statement of Purpose.
#
# 4. Limitations and Disclaimers.
#
#   a. No trademark or patent rights held by Affirmer are waived, abandoned,
#      surrendered, licensed or otherwise affected by this document.
#
#   b. Affirmer offers the Work as-is and makes no representations or warranties
#      of any kind concerning the Work, express, implied, statutory or
#      otherwise, including without limitation warranties of title,
#      merchantability, fitness for a particular purpose, non infringement, or
#      the absence of latent or other defects, accuracy, or the presence or
#      absence of errors, whether or not discoverable, all to the greatest
#      extent permissible under applicable law.
#
#   c. Affirmer disclaims responsibility for clearing rights of other persons
#      that may apply to the Work or any use thereof, including without
#      limitation any person's Copyright and Related Rights in the Work.
#      Further, Affirmer disclaims responsibility for obtaining any necessary
#      consents, permissions or other rights required for any use of the Work.
#
#   d. Affirmer understands and acknowledges that Creative Commons is not a
#      party to this document and has no duty or obligation with respect to this
#      CC0 or use of the Work.
#
# For more information, please see
# <http://creativecommons.org/publicdomain/zero/1.0/>
""" Linear algebra -------------- Linear equations ................ Basic linear algebra is implemented; you can for example solve the linear equation system:: x + 2*y = -10 3*x + 4*y = 10 using ``lu_solve``:: >>> A = matrix([[1, 2], [3, 4]]) >>> b = matrix([-10, 10]) >>> x = lu_solve(A, b) >>> x matrix( [['30.0'], ['-20.0']]) If you don't trust the result, use ``residual`` to calculate the residual ||A*x-b||:: >>> residual(A, x, b) matrix( [['3.46944695195361e-18'], ['3.46944695195361e-18']]) >>> str(eps) '2.22044604925031e-16' As you can see, the solution is quite accurate. The error is caused by the inaccuracy of the internal floating point arithmetic. Though, it's even smaller than the current machine epsilon, which basically means you can trust the result. If you need more speed, use NumPy. Or choose a faster data type using the keyword ``force_type``:: >>> lu_solve(A, b, force_type=float) matrix( [[29.999999999999996], [-19.999999999999996]]) ``lu_solve`` accepts overdetermined systems. It is usually not possible to solve such systems, so the residual is minimized instead. Internally this is done using Cholesky decomposition to compute a least squares approximation. This means that that ``lu_solve`` will square the errors. If you can't afford this, use ``qr_solve`` instead. It is twice as slow but more accurate, and it calculates the residual automatically. Matrix factorization .................... The function ``lu`` computes an explicit LU factorization of a matrix:: >>> P, L, U = lu(matrix([[0,2,3],[4,5,6],[7,8,9]])) >>> print P [0.0 0.0 1.0] [1.0 0.0 0.0] [0.0 1.0 0.0] >>> print L [ 1.0 0.0 0.0] [ 0.0 1.0 0.0] [0.571428571428571 0.214285714285714 1.0] >>> print U [7.0 8.0 9.0] [0.0 2.0 3.0] [0.0 0.0 0.214285714285714] >>> print P.T*L*U [0.0 2.0 3.0] [4.0 5.0 6.0] [7.0 8.0 9.0] Interval matrices ----------------- Matrices may contain interval elements. This allows one to perform basic linear algebra operations such as matrix multiplication and equation solving with rigorous error bounds:: >>> a = matrix([['0.1','0.3','1.0'], ... ['7.1','5.5','4.8'], ... ['3.2','4.4','5.6']], force_type=mpi) >>> >>> b = matrix(['4','0.6','0.5'], force_type=mpi) >>> c = lu_solve(a, b) >>> c matrix( [[[5.2582327113062393041, 5.2582327113062749951]], [[-13.155049396267856583, -13.155049396267821167]], [[7.4206915477497212555, 7.4206915477497310922]]]) >>> print a*c [ [3.9999999999999866773, 4.0000000000000133227]] [[0.59999999999972430942, 0.60000000000027142733]] [[0.49999999999982236432, 0.50000000000018474111]] """
#!/usr/bin/env python

# Try to determine how much RAM is currently being used per program.
# Note per _program_, not per process.  So for example this script
# will report RAM used by all httpd processes together.  In detail it reports:
# sum(private RAM for program processes) + sum(Shared RAM for program processes)
# The shared RAM is problematic to calculate, and this script automatically
# selects the most accurate method available for your kernel.

# Licence: LGPLv2
# Author:  EMAIL
# Source:  http://www.pixelbeat.org/scripts/ps_mem.py

# V1.0      06 Jul 2005     Initial release
# V1.1      11 Aug 2006     root permission required for accuracy
# V1.2      08 Nov 2006     Add total to output
#                           Use KiB,MiB,... for units rather than K,M,...
# V1.3      22 Nov 2006     Ignore shared col from /proc/$pid/statm for
#                           2.6 kernels up to and including 2.6.9.
#                           There it represented the total file backed extent
# V1.4      23 Nov 2006     Remove total from output as it's meaningless
#                           (the shared values overlap with other programs).
#                           Display the shared column.  This extra info is
#                           useful, especially as it overlaps between programs.
# V1.5      26 Mar 2007     Remove redundant recursion from human()
# V1.6      05 Jun 2007     Also report number of processes with a given name.
#                           Patch from EMAIL
# V1.7      20 Sep 2007     Use PSS from /proc/$pid/smaps if available, which
#                           fixes some over-estimation and allows totalling.
#                           Enumerate the PIDs directly rather than using ps,
#                           which fixes the possible race between reading
#                           RSS with ps, and shared memory with this program.
#                           Also we can show non truncated command names.
# V1.8      28 Sep 2007     More accurate matching for stats in /proc/$pid/smaps
#                           as otherwise could match libraries causing a crash.
#                           Patch from EMAIL
# V1.9      20 Feb 2008     Fix invalid values reported when PSS is available.
#                           Reported by NAME <EMAIL>
# V3.1      10 May 2013
#   http://github.com/pixelb/scripts/commits/master/scripts/ps_mem.py

# Notes:
#
# All interpreted programs where the interpreter is started
# by the shell or with env, will be merged to the interpreter
# (as that's what's given to exec).  For example, all python programs
# starting with "#!/usr/bin/env python" will be grouped under python.
# You can change this by using the full command line, but that will
# have the undesirable effect of splitting up programs started with
# differing parameters (for example, mingetty tty[1-6]).
#
# For 2.6 kernels up to and including 2.6.13, and later 2.4 redhat kernels
# (rmap vm without smaps), it cannot be accurately determined how many pages
# are shared between processes in general, or within a program in our case:
# http://lkml.org/lkml/2005/7/6/250
# A warning is printed if overestimation is possible.
# In addition, for 2.6 kernels up to 2.6.9 inclusive, the shared
# value in /proc/$pid/statm is the total file-backed extent of a process.
# We ignore that, introducing more overestimation, again printing a warning.
# Since kernel 2.6.23-rc8-mm1, PSS is available in smaps, which allows
# us to calculate a more accurate value for the total RAM used by programs.
#
# Programs that use CLONE_VM without CLONE_THREAD are discounted by assuming
# they're the only programs that have the same /proc/$PID/smaps file for
# each instance.  This will fail if there are multiple real instances of a
# program that then use CLONE_VM without CLONE_THREAD, or if a clone changes
# its memory map while we're checksumming each /proc/$PID/smaps.
#
# I don't take account of memory allocated for a program
# by other programs.  For example, memory used in the X server for
# a program could be determined, but is not.
#
# FreeBSD is supported if linprocfs is mounted at /compat/linux/proc/
# FreeBSD 8.0 supports up to a level of Linux 2.6.16
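# The PSS-based accounting described above boils down to summing the 'Pss:'
# fields of /proc/$pid/smaps.  A minimal, hedged sketch (Linux only; this is
# not the script's actual implementation):

import os

def pss_kib(pid):
    """Return the proportional set size (PSS) of one process, in KiB."""
    total = 0
    with open('/proc/%d/smaps' % pid) as f:
        for line in f:
            if line.startswith('Pss:'):
                total += int(line.split()[1])  # smaps values are in kB
    return total

# usage: PSS of the current interpreter process
print(pss_kib(os.getpid()))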
"""Discussion of bloom constants for bup: There are four basic things to consider when building a bloom filter: The size, in bits, of the filter The capacity, in entries, of the filter The probability of a false positive that is tolerable The number of bits readily available to use for addressing filter bits There is one major tunable that is not directly related to the above: k: the number of bits set in the filter per entry Here's a wall of numbers showing the relationship between k; the ratio between the filter size in bits and the entries in the filter; and pfalse_positive: mn|k=3 |k=4 |k=5 |k=6 |k=7 |k=8 |k=9 |k=10 |k=11 8|3.05794|2.39687|2.16792|2.15771|2.29297|2.54917|2.92244|3.41909|4.05091 9|2.27780|1.65770|1.40703|1.32721|1.34892|1.44631|1.61138|1.84491|2.15259 10|1.74106|1.18133|0.94309|0.84362|0.81937|0.84555|0.91270|1.01859|1.16495 11|1.36005|0.86373|0.65018|0.55222|0.51259|0.50864|0.53098|0.57616|0.64387 12|1.08231|0.64568|0.45945|0.37108|0.32939|0.31424|0.31695|0.33387|0.36380 13|0.87517|0.49210|0.33183|0.25527|0.21689|0.19897|0.19384|0.19804|0.21013 14|0.71759|0.38147|0.24433|0.17934|0.14601|0.12887|0.12127|0.12012|0.12399 15|0.59562|0.30019|0.18303|0.12840|0.10028|0.08523|0.07749|0.07440|0.07468 16|0.49977|0.23941|0.13925|0.09351|0.07015|0.05745|0.05049|0.04700|0.04587 17|0.42340|0.19323|0.10742|0.06916|0.04990|0.03941|0.03350|0.03024|0.02870 18|0.36181|0.15765|0.08392|0.05188|0.03604|0.02748|0.02260|0.01980|0.01827 19|0.31160|0.12989|0.06632|0.03942|0.02640|0.01945|0.01549|0.01317|0.01182 20|0.27026|0.10797|0.05296|0.03031|0.01959|0.01396|0.01077|0.00889|0.00777 21|0.23591|0.09048|0.04269|0.02356|0.01471|0.01014|0.00759|0.00609|0.00518 22|0.20714|0.07639|0.03473|0.01850|0.01117|0.00746|0.00542|0.00423|0.00350 23|0.18287|0.06493|0.02847|0.01466|0.00856|0.00555|0.00392|0.00297|0.00240 24|0.16224|0.05554|0.02352|0.01171|0.00663|0.00417|0.00286|0.00211|0.00166 25|0.14459|0.04779|0.01957|0.00944|0.00518|0.00316|0.00211|0.00152|0.00116 26|0.12942|0.04135|0.01639|0.00766|0.00408|0.00242|0.00157|0.00110|0.00082 27|0.11629|0.03595|0.01381|0.00626|0.00324|0.00187|0.00118|0.00081|0.00059 28|0.10489|0.03141|0.01170|0.00515|0.00259|0.00146|0.00090|0.00060|0.00043 29|0.09492|0.02756|0.00996|0.00426|0.00209|0.00114|0.00069|0.00045|0.00031 30|0.08618|0.02428|0.00853|0.00355|0.00169|0.00090|0.00053|0.00034|0.00023 31|0.07848|0.02147|0.00733|0.00297|0.00138|0.00072|0.00041|0.00025|0.00017 32|0.07167|0.01906|0.00633|0.00250|0.00113|0.00057|0.00032|0.00019|0.00013 Here's a table showing available repository size for a given pfalse_positive and three values of k (assuming we only use the 160 bit SHA1 for addressing the filter and 8192bytes per object): pfalse|obj k=4 |cap k=4 |obj k=5 |cap k=5 |obj k=6 |cap k=6 2.500%|139333497228|1038.11 TiB|558711157|4262.63 GiB|13815755|105.41 GiB 1.000%|104489450934| 778.50 TiB|436090254|3327.10 GiB|11077519| 84.51 GiB 0.125%| 57254889824| 426.58 TiB|261732190|1996.86 GiB| 7063017| 55.89 GiB This eliminates pretty neatly any k>6 as long as we use the raw SHA for addressing. filter size scales linearly with repository size for a given k and pfalse. Here's a table of filter sizes for a 1 TiB repository: pfalse| k=3 | k=4 | k=5 | k=6 2.500%| 138.78 MiB | 126.26 MiB | 123.00 MiB | 123.37 MiB 1.000%| 197.83 MiB | 168.36 MiB | 157.58 MiB | 153.87 MiB 0.125%| 421.14 MiB | 307.26 MiB | 262.56 MiB | 241.32 MiB For bup: * We want the bloom filter to fit in memory; if it doesn't, the k pagefaults per lookup will be worse than the two required for midx. 
* We want the pfalse_positive to be low enough that the cost of sometimes faulting on the midx doesn't overcome the benefit of the bloom filter. * We have readily available 160 bits for addressing the filter. * We want to be able to have a single bloom address entire repositories of reasonable size. Based on these parameters, a combination of k=4 and k=5 provides the behavior that bup needs. As such, I've implemented bloom addressing, adding and checking functions in C for these two values. Because k=5 requires less space and gives better overall pfalse_positive performance, it is preferred if a table with k=5 can represent the repository. None of this tells us what max_pfalse_positive to choose. Brandon NAME <EMAIL> 2011-02-04 """
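The table values follow the standard approximation for bloom false-positive
rates; a small hedged sketch that reproduces them (n entries, m filter bits,
k bits set per entry; the first table above lists the result as a
percentage)::

    from math import exp

    def pfalse_positive(n, m, k):
        # standard approximation: (1 - e^(-k*n/m))^k
        return (1.0 - exp(-float(k) * n / m)) ** k

    # m/n = 10 bits per entry, k = 5:
    print('%.5f%%' % (100 * pfalse_positive(n=1, m=10, k=5)))  # 0.94309%, as in the table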
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be >= 1D atleast_2d Force arrays to be >= 2D atleast_3d Force arrays to be >= 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) stack Stack arrays along a new axis split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat Construct a Matrix bmat Build a Matrix from blocks ================ =================== Polynomials ----------- ================ =================== poly1d A one-dimensional polynomial class poly Return polynomial coefficients from roots roots Find roots of polynomial given coefficients polyint Integrate polynomial polyder Differentiate polynomial polyadd Add polynomials polysub Substract polynomials polymul Multiply polynomials polydiv Divide polynomials polyval Evaluate polynomial at given argument ================ =================== Iterators --------- ================ =================== Arrayterator A buffered iterator for big arrays. ================ =================== Import Tricks ------------- ================ =================== ppimport Postpone module import until trying to use it ppimport_attr Postpone module import until trying to use its attribute ppresolve Import postponed module and return it. ================ =================== Machine Arithmetics ------------------- ================ =================== machar_single Single precision floating point arithmetic parameters machar_double Double precision floating point arithmetic parameters ================ =================== Threading Tricks ---------------- ================ =================== ParallelExec Execute commands in parallel thread. ================ =================== 1D Array Set Operations ----------------------- Set operations for 1D numeric arrays based on sort() function. ================ =================== ediff1d Array difference (auxiliary function). unique Unique elements of an array. intersect1d Intersection of 1D arrays with unique elements. setxor1d Set exclusive-or of 1D arrays with unique elements. in1d Test whether elements in a 1D array are also present in another array. union1d Union of 1D arrays with unique elements. setdiff1d Set difference of 1D arrays with unique elements. ================ =================== """
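A quick illustration of the 1D set operations listed above (standard NumPy;
exact ``repr`` formatting varies between NumPy versions)::

    >>> import numpy as np
    >>> a = np.array([1, 2, 3, 2, 4])
    >>> b = np.array([3, 4, 5])
    >>> np.unique(a)
    array([1, 2, 3, 4])
    >>> np.intersect1d(a, b)
    array([3, 4])
    >>> np.in1d(a, b)        # element-wise membership test
    array([False, False,  True, False,  True])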
""" AUI is an Advanced User Interface library that aims to implement "cutting-edge" interface usability and design features so developers can quickly and easily create beautiful and usable application interfaces. Vision and Design Principles ============================ AUI attempts to encapsulate the following aspects of the user interface: * **Frame Management**: Frame management provides the means to open, move and hide common controls that are needed to interact with the document, and allow these configurations to be saved into different perspectives and loaded at a later time. * **Toolbars**: Toolbars are a specialized subset of the frame management system and should behave similarly to other docked components. However, they also require additional functionality, such as "spring-loaded" rebar support, "chevron" buttons and end-user customizability. * **Modeless Controls**: Modeless controls expose a tool palette or set of options that float above the application content while allowing it to be accessed. Usually accessed by the toolbar, these controls disappear when an option is selected, but may also be "torn off" the toolbar into a floating frame of their own. * **Look and Feel**: Look and feel encompasses the way controls are drawn, both when shown statically as well as when they are being moved. This aspect of user interface design incorporates "special effects" such as transparent window dragging as well as frame animation. AUI adheres to the following principles: - Use native floating frames to obtain a native look and feel for all platforms; - Use existing wxPython code where possible, such as sizer implementation for frame management; - Use standard wxPython coding conventions. Usage ===== The following example shows a simple implementation that uses :class:`framemanager.AuiManager` to manage three text controls in a frame window:: import wx import wx.lib.agw.aui as aui class MyFrame(wx.Frame): def __init__(self, parent, id=-1, title="AUI Test", pos=wx.DefaultPosition, size=(800, 600), style=wx.DEFAULT_FRAME_STYLE): wx.Frame.__init__(self, parent, id, title, pos, size, style) self._mgr = aui.AuiManager() # notify AUI which frame to use self._mgr.SetManagedWindow(self) # create several text controls text1 = wx.TextCtrl(self, -1, "Pane 1 - sample text", wx.DefaultPosition, wx.Size(200,150), wx.NO_BORDER | wx.TE_MULTILINE) text2 = wx.TextCtrl(self, -1, "Pane 2 - sample text", wx.DefaultPosition, wx.Size(200,150), wx.NO_BORDER | wx.TE_MULTILINE) text3 = wx.TextCtrl(self, -1, "Main content window", wx.DefaultPosition, wx.Size(200,150), wx.NO_BORDER | wx.TE_MULTILINE) # add the panes to the manager self._mgr.AddPane(text1, AuiPaneInfo().Left().Caption("Pane Number One")) self._mgr.AddPane(text2, AuiPaneInfo().Bottom().Caption("Pane Number Two")) self._mgr.AddPane(text3, AuiPaneInfo().CenterPane()) # tell the manager to "commit" all the changes just made self._mgr.Update() self.Bind(wx.EVT_CLOSE, self.OnClose) def OnClose(self, event): # deinitialize the frame manager self._mgr.UnInit() self.Destroy() event.Skip() # our normal wxApp-derived class, as usual app = wx.App(0) frame = MyFrame(None) app.SetTopWindow(frame) frame.Show() app.MainLoop() What's New ========== Current wxAUI Version Tracked: wxWidgets 2.9.4 (SVN HEAD) The wxPython AUI version fixes the following bugs or implement the following missing features (the list is not exhaustive): - Visual Studio 2005 style docking: http://www.kirix.com/forums/viewtopic.php?f=16&t=596 - Dock and Pane Resizing: 
http://www.kirix.com/forums/viewtopic.php?f=16&t=582 - Patch concerning dock resizing: http://www.kirix.com/forums/viewtopic.php?f=16&t=610 - Patch to effect wxAuiToolBar orientation switch: http://www.kirix.com/forums/viewtopic.php?f=16&t=641 - AUI: Core dump when loading a perspective in wxGTK (MSW OK): http://www.kirix.com/forums/viewtopic.php?f=15&t=627 - wxAuiNotebook reordered AdvanceSelection(): http://www.kirix.com/forums/viewtopic.php?f=16&t=617 - Vertical Toolbar Docking Issue: http://www.kirix.com/forums/viewtopic.php?f=16&t=181 - Patch to show the resize hint on mouse-down in aui: http://trac.wxwidgets.org/ticket/9612 - The Left/Right and Top/Bottom Docks over draw each other: http://trac.wxwidgets.org/ticket/3516 - MinSize() not honoured: http://trac.wxwidgets.org/ticket/3562 - Layout problem with wxAUI: http://trac.wxwidgets.org/ticket/3597 - Resizing children ignores current window size: http://trac.wxwidgets.org/ticket/3908 - Resizing panes under Vista does not repaint background: http://trac.wxwidgets.org/ticket/4325 - Resize sash resizes in response to click: http://trac.wxwidgets.org/ticket/4547 - "Illegal" resizing of the AuiPane? (wxPython): http://trac.wxwidgets.org/ticket/4599 - Floating wxAUIPane Resize Event doesn't update its position: http://trac.wxwidgets.org/ticket/9773 - Don't hide floating panels when we maximize some other panel: http://trac.wxwidgets.org/ticket/4066 - wxAUINotebook incorrect ALLOW_ACTIVE_PANE handling: http://trac.wxwidgets.org/ticket/4361 - Page changing veto doesn't work, (patch supplied): http://trac.wxwidgets.org/ticket/4518 - Show and DoShow are mixed around in wxAuiMDIChildFrame: http://trac.wxwidgets.org/ticket/4567 - wxAuiManager & wxToolBar - ToolBar Of Size Zero: http://trac.wxwidgets.org/ticket/9724 - wxAuiNotebook doesn't behave properly like a container as far as...: http://trac.wxwidgets.org/ticket/9911 - Serious layout bugs in wxAUI: http://trac.wxwidgets.org/ticket/10620 - wAuiDefaultTabArt::Clone() should just use copy contructor: http://trac.wxwidgets.org/ticket/11388 - Drop down button for check tool on wxAuiToolbar: http://trac.wxwidgets.org/ticket/11139 Plus the following features: - AuiManager: (a) Implementation of a simple minimize pane system: Clicking on this minimize button causes a new AuiToolBar to be created and added to the frame manager, (currently the implementation is such that panes at West will have a toolbar at the right, panes at South will have toolbars at the bottom etc...) and the pane is hidden in the manager. Clicking on the restore button on the newly created toolbar will result in the toolbar being removed and the original pane being restored; (b) Panes can be docked on top of each other to form `AuiNotebooks`; `AuiNotebooks` tabs can be torn off to create floating panes; (c) On Windows XP, use the nice sash drawing provided by XP while dragging the sash; (d) Possibility to set an icon on docked panes; (e) Possibility to draw a sash visual grip, for enhanced visualization of sashes; (f) Implementation of a native docking art (`ModernDockArt`). Windows XP only, **requires** NAME pywin32 package (winxptheme); (g) Possibility to set a transparency for floating panes (a la Paint .NET); (h) Snapping the main frame to the screen in any positin specified by horizontal and vertical alignments; (i) Snapping floating panes on left/right/top/bottom or any combination of directions, a la Winamp; (j) "Fly-out" floating panes, i.e. 
panes which show themselves only when the mouse hover them; (k) Ability to set custom bitmaps for pane buttons (close, maximize, etc...); (l) Implementation of the style ``AUI_MGR_ANIMATE_FRAMES``, which fade-out floating panes when they are closed (all platforms which support frames transparency) and show a moving rectangle when they are docked and minimized (Windows < Vista and GTK only); (m) A pane switcher dialog is available to cycle through existing AUI panes; (n) Some flags which allow to choose the orientation and the position of the minimized panes; (o) The functions [Get]MinimizeMode() in `AuiPaneInfo` which allow to set/get the flags described above; (p) Events like ``EVT_AUI_PANE_DOCKING``, ``EVT_AUI_PANE_DOCKED``, ``EVT_AUI_PANE_FLOATING`` and ``EVT_AUI_PANE_FLOATED`` are available for all panes *except* toolbar panes; (q) Implementation of the RequestUserAttention method for panes; (r) Ability to show the caption bar of docked panes on the left instead of on the top (with caption text rotated by 90 degrees then). This is similar to what `wxDockIt` did. To enable this feature on any given pane, simply call `CaptionVisible(True, left=True)`; (s) New Aero-style docking guides: you can enable them by using the `AuiManager` style ``AUI_MGR_AERO_DOCKING_GUIDES``; (t) A slide-in/slide-out preview of minimized panes can be seen by enabling the `AuiManager` style ``AUI_MGR_PREVIEW_MINIMIZED_PANES`` and by hovering with the mouse on the minimized pane toolbar tool; (u) New Whidbey-style docking guides: you can enable them by using the `AuiManager` style ``AUI_MGR_WHIDBEY_DOCKING_GUIDES``; (v) Native of custom-drawn mini frames can be used as floating panes, depending on the ``AUI_MGR_USE_NATIVE_MINIFRAMES`` style; (w) A "smooth docking effect" can be obtained by using the ``AUI_MGR_SMOOTH_DOCKING`` style (similar to PyQT docking style); (x) Implementation of "Movable" panes, i.e. a pane that is set as `Movable()` but not `Floatable()` can be dragged and docked into a new location but will not form a floating window in between. - AuiNotebook: (a) Implementation of the style ``AUI_NB_HIDE_ON_SINGLE_TAB``, a la :mod:`lib.agw.flatnotebook`; (b) Implementation of the style ``AUI_NB_SMART_TABS``, a la :mod:`lib.agw.flatnotebook`; (c) Implementation of the style ``AUI_NB_USE_IMAGES_DROPDOWN``, which allows to show tab images on the tab dropdown menu instead of bare check menu items (a la :mod:`lib.agw.flatnotebook`); (d) 6 different tab arts are available, namely: (1) Default "glossy" theme (as in :class:`~auibook.AuiNotebook`) (2) Simple theme (as in :class:`~auibook.AuiNotebook`) (3) Firefox 2 theme (4) Visual Studio 2003 theme (VC71) (5) Visual Studio 2005 theme (VC81) (6) Google Chrome theme (e) Enabling/disabling tabs; (f) Setting the colour of the tab's text; (g) Implementation of the style ``AUI_NB_CLOSE_ON_TAB_LEFT``, which draws the tab close button on the left instead of on the right (a la Camino browser); (h) Ability to save and load perspectives in `AuiNotebook` (experimental); (i) Possibility to add custom buttons in the `AuiNotebook` tab area; (j) Implementation of the style ``AUI_NB_TAB_FLOAT``, which allows the floating of single tabs. 
Known limitation: when the notebook is more or less full screen, tabs cannot be dragged far enough outside of the notebook to become floating pages; (k) Implementation of the style ``AUI_NB_DRAW_DND_TAB`` (on by default), which draws an image representation of a tab while dragging; (l) Implementation of the `AuiNotebook` unsplit functionality, which unsplit a splitted AuiNotebook when double-clicking on a sash; (m) Possibility to hide all the tabs by calling `HideAllTAbs`; (n) wxPython controls can now be added inside page tabs by calling `AddControlToPage`, and they can be removed by calling `RemoveControlFromPage`; (o) Possibility to preview all the pages in a `AuiNotebook` (as thumbnails) by using the `NotebookPreview` method of `AuiNotebook`; (p) Tab labels can be edited by calling the `SetRenamable` method on a `AuiNotebook` page; (q) Support for multi-lines tab labels in `AuiNotebook`; (r) Support for setting minimum and maximum tab widths for fixed width tabs; (s) Implementation of the style ``AUI_NB_ORDER_BY_ACCESS``, which orders the tabs by last access time inside the Tab Navigator dialog; (t) Implementation of the style ``AUI_NB_NO_TAB_FOCUS``, allowing the developer not to draw the tab focus rectangle on tne `AuiNotebook` tabs. | - AuiToolBar: (a) ``AUI_TB_PLAIN_BACKGROUND`` style that allows to easy setup a plain background to the AUI toolbar, without the need to override drawing methods. This style contrasts with the default behaviour of the :class:`~auibar.AuiToolBar` that draws a background gradient and this break the window design when putting it within a control that has margin between the borders and the toolbar (example: put :class:`~auibar.AuiToolBar` within a :class:`StaticBoxSizer` that has a plain background); (b) `AuiToolBar` allow item alignment: http://trac.wxwidgets.org/ticket/10174; (c) `AUIToolBar` `DrawButton()` improvement: http://trac.wxwidgets.org/ticket/10303; (d) `AuiToolBar` automatically assign new id for tools: http://trac.wxwidgets.org/ticket/10173; (e) `AuiToolBar` Allow right-click on any kind of button: http://trac.wxwidgets.org/ticket/10079; (f) `AuiToolBar` idle update only when visible: http://trac.wxwidgets.org/ticket/10075; (g) Ability of creating `AuiToolBar` tools with [counter]clockwise rotation. This allows to propose a variant of the minimizing functionality with a rotated button which keeps the caption of the pane as label; (h) Allow setting the alignment of all tools in a toolbar that is expanded; (i) Implementation of the ``AUI_MINIMIZE_POS_TOOLBAR`` flag, which allows to minimize a pane inside an existing toolbar. Limitation: if the minimized icon in the toolbar ends up in the overflowing items (i.e., a menu is needed to show the icon), this style will not work. TODOs ===== - Documentation, documentation and documentation; - Fix `tabmdi.AuiMDIParentFrame` and friends, they do not work correctly at present; - Allow specification of `CaptionLeft()` to `AuiPaneInfo` to show the caption bar of docked panes on the left instead of on the top (with caption text rotated by 90 degrees then). 
This is similar to what `wxDockIt` did - DONE; - Make developer-created `AuiNotebooks` and automatic (framemanager-created) `AuiNotebooks` behave the same way (undocking of tabs) - DONE, to some extent; - Find a way to dock panes in already floating panes (`AuiFloatingFrames`), as they already have their own `AuiManager`; - Add more gripper styles (see, i.e., PlusDock 4.0); - Add an "AutoHide" feature to docked panes, similar to fly-out floating panes (see, i.e., PlusDock 4.0); - Add events for panes when they are about to float or to be docked (something like ``EVT_AUI_PANE_FLOATING/ED`` and ``EVT_AUI_PANE_DOCKING/ED``) - DONE, to some extent; - Implement the 4-ways splitter behaviour for horizontal and vertical sashes if they intersect; - Extend `tabart.py` with more aui tab arts; - Implement ``AUI_NB_LEFT`` and ``AUI_NB_RIGHT`` tab locations in `AuiNotebook`; - Move `AuiDefaultToolBarArt` into a separate module (as with `tabart.py` and `dockart.py`) and provide more arts for toolbars (maybe from :mod:`lib.agw.flatmenu`?) - Support multiple-rows/multiple columns toolbars; - Integrate as much as possible with :mod:`lib.agw.flatmenu`, from dropdown menus in `AuiNotebook` to toolbars and menu positioning; - Possibly handle minimization of panes in a different way (or provide an option to switch to another way of minimizing panes); - Clean up/speed up the code, especially time-consuming for-loops; - Possibly integrate `wxPyRibbon` (still on development), at least on Windows. License And Version =================== AUI library is distributed under the wxPython license. Latest Revision: NAME @ 25 Apr 2012, 21.00 GMT Version 1.3. """
#!/usr/bin/python3 # Copyright (C) 2016 NAME This file is part of SATK. # # SATK is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # SATK is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with SATK. If not, see <http://www.gnu.org/licenses/>. # When imported, this module supports interchange format creation for various # Python supplied floating point modules. When executed from the command=line # the module is a test driver and conversion utility. See the comments below the # 'if __name__ == "__main__":' section near the end of the module for a description # of the conversion utility command-line interface. # # This module supports interchange format creation for Python supplied float, # gmpy2.mpfr and dfp.dpd objects as well as conversions for legacy hexadecimal # floating point formats. Formats of 32-, 64-, and 128-bits are supported as # follows: # # Size: 32 64 128 Supplying Module # BFP float/mpfr float/mpfr mpfr Python supplied (float) or external # DFP dpd dpd dpd SATK/tools/dfp.py # HFP float/mpfr float/mpfr mpfr Python supplied (float) or external # # A common approach for all floating point formats is provided by this module. # It is expected to work with corresponding bfp.py, dfp.py, and hfp.py. Because # the modules also depend upon of this module, they are imported at the end rather # than the beginning. # # Conversion of interchange formats into the various Python objects is not yet # suppored. # # +--------------------------------------------+ # | | # | Floating Point Modules Relationships | # | | # +--------------------------------------------+ # # Multiple modules provide support for the three supported floating point formats: # BFP, DFP, and HFP. This module, fp.py, is intended to be the external interface # to the various floating point formats as well as the framework used by all of # the other modules. The additional modules subclass the classes defined in # fp.py to tailor their operation for each radix and its format. # # The foundation for a floating point datum regardless of radix and format is fp.FP. # The fp.FP class converts a string definition of the floating point constant # into an internal representation. The internal representation performs any rounding # required and supports the creation of a sequence of bytes conforming to the # floating point radix' interchange format. # # The fp.FP_Number class supports the internal representation of the floating point # datum itself. Rounding of the datum occurs via this object. A built-in Python # type (float), an object from an external package (gmpy2.mpfr), or an object # supplied by the floating point modules (dfp.dpd) assists the process. # # The fp.FP_Special object supports the creation of any speical values supported by # the radix or format. In each case they are hard-coded hex constants (converted # into a sequence of bytes) interpretable by the # # The Test class drives testing of the floating point string to interchange format # conversions. It accepts a string for conversion. 
The string may be a standard # floating point string constant (with embedded rounding mode) or a C-type # hexadecimal constant starting with '0x'. # # Each associated module subclasses these four base classes. # # The module using the floating point framework is expected to import only this # module. Creation of a floating point datum or special value occurs through use # of one of three module-level functions: # # - fp.BFP - for a binary floating point value, # - fp.DFP - for a decimal floating point value, or # - fp.HFP - for a hexadecimal floating point value. # # Each function returns an object subclassing fp.FP. This object has one public # method: to_bytes(). This method converts the floating point value into a # sequence of bytes. # # Conversion from a sequence of bytes to an object may use the from_byte() class # method of each of the FP subclasses or this module's function of the same name. # # This framework implements only three floating point flags or conditions: # - underflow # - overflow # - subnormal # # The subnormal condition is not indicated when underflow occurs. The three # conditions are mutually exclusive. # # Implementation status: # binary floating point - partially implemented by bfp.py, bfp_float.py and # bfp_gmpy2.py # decimal floating point - implemented by dfp.py # hexadecimal floating point - not implemented
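For orientation, here is a minimal usage sketch of the interface described above. It is an assumption-laden illustration, not confirmed code: the fp.BFP call and the argument-free to_bytes() method follow the description, while the constant value and the printing are invented for the example.

# Hypothetical sketch of the fp.py interface described above.
# fp.BFP and to_bytes() come from the module description; the
# constant string "1.5" is illustrative only.
import fp

datum = fp.BFP("1.5")       # returns an fp.FP subclass for a BFP value
data = datum.to_bytes()     # bytes in the binary interchange format
print(data.hex())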
# Configuration file for jupyter-notebook. #------------------------------------------------------------------------------ # Application(SingletonConfigurable) configuration #------------------------------------------------------------------------------ ## This is an application. ## The date format used by logging formatters for %(asctime)s #c.Application.log_datefmt = '%Y-%m-%d %H:%M:%S' ## The Logging format template #c.Application.log_format = '[%(name)s]%(highlevel)s %(message)s' ## Set the log level by value or name. #c.Application.log_level = 30 #------------------------------------------------------------------------------ # JupyterApp(Application) configuration #------------------------------------------------------------------------------ ## Base class for Jupyter applications ## Answer yes to any prompts. #c.JupyterApp.answer_yes = False ## Full path of a config file. #c.JupyterApp.config_file = u'' ## Specify a config file to load. #c.JupyterApp.config_file_name = u'' ## Generate default config file. #c.JupyterApp.generate_config = False #------------------------------------------------------------------------------ # NotebookApp(JupyterApp) configuration #------------------------------------------------------------------------------ ## Set the Access-Control-Allow-Credentials: true header #c.NotebookApp.allow_credentials = False ## Set the Access-Control-Allow-Origin header # # Use '*' to allow any origin to access your server. # # Takes precedence over allow_origin_pat. #c.NotebookApp.allow_origin = '' ## Use a regular expression for the Access-Control-Allow-Origin header # # Requests from an origin matching the expression will get replies with: # # Access-Control-Allow-Origin: origin # # where `origin` is the origin of the request. # # Ignored if allow_origin is set. #c.NotebookApp.allow_origin_pat = '' ## DEPRECATED use base_url #c.NotebookApp.base_project_url = '/' ## The base URL for the notebook server. # # Leading and trailing slashes can be omitted, and will automatically be added. #c.NotebookApp.base_url = '/' ## Specify what command to use to invoke a web browser when opening the notebook. # If not specified, the default browser will be determined by the `webbrowser` # standard library module, which allows setting of the BROWSER environment # variable to override it. #c.NotebookApp.browser = u'' ## The full path to an SSL/TLS certificate file. #c.NotebookApp.certfile = u'' ## The full path to a certificate authority certificate for SSL/TLS client # authentication. #c.NotebookApp.client_ca = u'' ## The config manager class to use #c.NotebookApp.config_manager_class = 'notebook.services.config.manager.ConfigManager' ## The notebook manager class to use. #c.NotebookApp.contents_manager_class = 'notebook.services.contents.filemanager.FileContentsManager' ## Extra keyword arguments to pass to `set_secure_cookie`. See tornado's # set_secure_cookie docs for details. #c.NotebookApp.cookie_options = {} ## The random bytes used to secure cookies. By default this is a new random # number every time you start the Notebook. Set it to a value in a config file # to enable logins to persist across server sessions. # # Note: Cookie secrets should be kept private, do not share config files with # cookie_secret stored in plaintext (you can read the value from a file). #c.NotebookApp.cookie_secret = '' ## The file where the cookie secret is stored. 
#c.NotebookApp.cookie_secret_file = u'' ## The default URL to redirect to from `/` #c.NotebookApp.default_url = '/tree' ## Whether to enable MathJax for typesetting math/TeX # # MathJax is the javascript library Jupyter uses to render math/LaTeX. It is # very large, so you may want to disable it if you have a slow internet # connection, or for offline use of the notebook. # # When disabled, equations etc. will appear as their untransformed TeX source. #c.NotebookApp.enable_mathjax = True ## extra paths to look for Javascript notebook extensions #c.NotebookApp.extra_nbextensions_path = [] ## Extra paths to search for serving static files. # # This allows adding javascript/css to be available from the notebook server # machine, or overriding individual files in the IPython #c.NotebookApp.extra_static_paths = [] ## Extra paths to search for serving jinja templates. # # Can be used to override templates from notebook.templates. #c.NotebookApp.extra_template_paths = [] ## #c.NotebookApp.file_to_run = '' ## Use minified JS file or not, mainly used during dev to avoid JS recompilation #c.NotebookApp.ignore_minified_js = False ## (bytes/sec) Maximum rate at which messages can be sent on iopub before they # are limited. #c.NotebookApp.iopub_data_rate_limit = 0 ## (msg/sec) Maximum rate at which messages can be sent on iopub before they are # limited. #c.NotebookApp.iopub_msg_rate_limit = 0 ## The IP address the notebook server will listen on.
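Each commented line above shows a default; to change a setting, uncomment the line and assign a value. A short illustrative sketch using option names documented above (the values themselves are examples only):

# Illustrative overrides for jupyter_notebook_config.py; values are examples.
c.NotebookApp.base_url = '/notebook/'                 # serve under a URL prefix
c.NotebookApp.cookie_secret_file = '/var/run/jupyter/cookie_secret'
c.NotebookApp.allow_origin = 'https://example.org'    # CORS origin header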
""" ===================================================== Optimization and root finding (:mod:`scipy.optimize`) ===================================================== .. currentmodule:: scipy.optimize Optimization ============ Local Optimization ------------------ .. autosummary:: :toctree: generated/ minimize - Unified interface for minimizers of multivariate functions minimize_scalar - Unified interface for minimizers of univariate functions OptimizeResult - The optimization result returned by some optimizers The `minimize` function supports the following methods: .. toctree:: optimize.minimize-neldermead optimize.minimize-powell optimize.minimize-cg optimize.minimize-bfgs optimize.minimize-newtoncg optimize.minimize-lbfgsb optimize.minimize-tnc optimize.minimize-cobyla optimize.minimize-slsqp optimize.minimize-dogleg optimize.minimize-trustncg The `minimize_scalar` function supports the following methods: .. toctree:: optimize.minimize_scalar-brent optimize.minimize_scalar-bounded optimize.minimize_scalar-golden The specific optimization method interfaces below in this subsection are not recommended for use in new scripts; all of these methods are accessible via a newer, more consistent interface provided by the functions above. General-purpose multivariate methods: .. autosummary:: :toctree: generated/ fmin - Nelder-Mead Simplex algorithm fmin_powell - Powell's (modified) level set method fmin_cg - Non-linear (Polak-Ribiere) conjugate gradient algorithm fmin_bfgs - Quasi-Newton method (Broydon-Fletcher-Goldfarb-Shanno) fmin_ncg - Line-search Newton Conjugate Gradient Constrained multivariate methods: .. autosummary:: :toctree: generated/ fmin_l_bfgs_b - Zhu, Byrd, and Nocedal's constrained optimizer fmin_tnc - Truncated Newton code fmin_cobyla - Constrained optimization by linear approximation fmin_slsqp - Minimization using sequential least-squares programming differential_evolution - stochastic minimization using differential evolution Univariate (scalar) minimization methods: .. autosummary:: :toctree: generated/ fminbound - Bounded minimization of a scalar function brent - 1-D function minimization using Brent method golden - 1-D function minimization using Golden Section method Equation (Local) Minimizers --------------------------- .. autosummary:: :toctree: generated/ leastsq - Minimize the sum of squares of M equations in N unknowns nnls - Linear least-squares problem with non-negativity constraint Global Optimization ------------------- .. autosummary:: :toctree: generated/ basinhopping - Basinhopping stochastic optimizer brute - Brute force searching optimizer differential_evolution - stochastic minimization using differential evolution Rosenbrock function ------------------- .. autosummary:: :toctree: generated/ rosen - The Rosenbrock function. rosen_der - The derivative of the Rosenbrock function. rosen_hess - The Hessian matrix of the Rosenbrock function. rosen_hess_prod - Product of the Rosenbrock Hessian with a vector. Fitting ======= .. autosummary:: :toctree: generated/ curve_fit -- Fit curve to a set of points Root finding ============ Scalar functions ---------------- .. autosummary:: :toctree: generated/ brentq - quadratic interpolation Brent method brenth - Brent method, modified by Harris with hyperbolic extrapolation ridder - Ridder's method bisect - Bisection method newton - Secant method or Newton's method Fixed point finding: .. 
autosummary:: :toctree: generated/ fixed_point - Single-variable fixed-point solver Multidimensional ---------------- General nonlinear solvers: .. autosummary:: :toctree: generated/ root - Unified interface for nonlinear solvers of multivariate functions fsolve - Non-linear multi-variable equation solver broyden1 - Broyden's first method broyden2 - Broyden's second method The `root` function supports the following methods: .. toctree:: optimize.root-hybr optimize.root-lm optimize.root-broyden1 optimize.root-broyden2 optimize.root-anderson optimize.root-linearmixing optimize.root-diagbroyden optimize.root-excitingmixing optimize.root-krylov optimize.root-dfsane Large-scale nonlinear solvers: .. autosummary:: :toctree: generated/ newton_krylov anderson Simple iterations: .. autosummary:: :toctree: generated/ excitingmixing linearmixing diagbroyden :mod:`Additional information on the nonlinear solvers <scipy.optimize.nonlin>` Linear Programming ================== Simplex Algorithm: .. autosummary:: :toctree: generated/ linprog -- Linear programming using the simplex algorithm The `linprog` function supports the following methods: .. toctree:: optimize.linprog-simplex Utilities ========= .. autosummary:: :toctree: generated/ approx_fprime - Approximate the gradient of a scalar function bracket - Bracket a minimum, given two starting points check_grad - Check the supplied derivative using finite differences line_search - Return a step that satisfies the strong Wolfe conditions show_options - Show specific options for optimization solvers LbfgsInvHessProduct - Linear operator for L-BFGS approximate inverse Hessian """
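As noted above, new scripts should prefer the unified `minimize` interface over the legacy `fmin_*` functions. A short runnable example, minimizing the Rosenbrock function (provided by this module) with the Nelder-Mead method::

    import numpy as np
    from scipy.optimize import minimize, rosen

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    res = minimize(rosen, x0, method='Nelder-Mead')
    print(res.x)        # approximate minimizer, close to all ones
    print(res.success)  # convergence flag on the OptimizeResult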
""" A class for converting a PySB model to a set of ordinary differential equations for integration in MATLAB. Note that for use in MATLAB, the name of the ``.m`` file must match the name of the exported MATLAB class (e.g., ``robertson.m`` for the example below). For information on how to use the model exporters, see the documentation for :py:mod:`pysb.export`. Output for the Robertson example model ====================================== Information on the form and usage of the generated MATLAB class is contained in the documentation for the MATLAB model, as shown in the following example for ``pysb.examples.robertson``:: classdef robertson % A simple three-species chemical kinetics system known as "Robertson's % example", as presented in: % % NAME The solution of a set of reaction rate equations, in Numerical % Analysis: An Introduction, NAME ed., Academic Press, 1966, pp. 178-182. % % A class implementing the ordinary differential equations % for the robertson model. % % Save as robertson.m. % % Generated by pysb.export.matlab.MatlabExporter. % % Properties % ---------- % observables : struct % A struct containing the names of the observables from the % PySB model as field names. Each field in the struct % maps the observable name to a matrix with two rows: % the first row specifies the indices of the species % associated with the observable, and the second row % specifies the coefficients associated with the species. % For any given timecourse of model species resulting from % integration, the timecourse for an observable can be % retrieved using the get_observable method, described % below. % % parameters : struct % A struct containing the names of the parameters from the % PySB model as field names. The nominal values are set by % the constructor and their values can be overriden % explicitly once an instance has been created. % % Methods % ------- % robertson.odes(tspan, y0) % The right-hand side function for the ODEs of the model, % for use with MATLAB ODE solvers (see Examples). % % robertson.get_initial_values() % Returns a vector of initial values for all species, % specified in the order that they occur in the original % PySB model (i.e., in the order found in model.species). % Non-zero initial conditions are specified using the % named parameters included as properties of the instance. % Hence initial conditions other than the defaults can be % used by assigning a value to the named parameter and then % calling this method. The vector returned by the method % is used for integration by passing it to the MATLAB % solver as the y0 argument. % % robertson.get_observables(y) % Given a matrix of timecourses for all model species % (i.e., resulting from an integration of the model), % get the trajectories corresponding to the observables. % Timecourses are returned as a struct which can be % indexed by observable name. % % Examples % -------- % Example integration using default initial and parameter % values: % % >> m = robertson(); % >> tspan = [0 100]; % >> [t y] = ode15s(@m.odes, tspan, m.get_initial_values()); % % Retrieving the observables: % % >> y_obs = m.get_observables(y) % properties observables parameters end methods function self = robertson() % Assign default parameter values self.parameters = struct( ... 'k1', 0.040000000000000001, ... 'k2', 30000000, ... 'k3', 10000, ... 'A_0', 1, ... 'B_0', 0, ... 'C_0', 0); % Define species indices (first row) and coefficients % (second row) of named observables self.observables = struct( ... 'A_total', [1; 1], ... 
'B_total', [2; 1], ... 'C_total', [3; 1]); end function initial_values = get_initial_values(self) % Return the vector of initial conditions for all % species based on the values of the parameters % as currently defined in the instance. initial_values = zeros(1,3); initial_values(1) = self.parameters.A_0; % A() initial_values(2) = self.parameters.B_0; % B() initial_values(3) = self.parameters.C_0; % C() end function y = odes(self, tspan, y0) % Right hand side function for the ODEs % Shorthand for the struct of model parameters p = self.parameters; % A(); y(1,1) = -p.k1*y0(1) + p.k3*y0(2)*y0(3); % B(); y(2,1) = p.k1*y0(1) - p.k2*power(y0(2), 2) - p.k3*y0(2)*y0(3); % C(); y(3,1) = p.k2*power(y0(2), 2); end function y_obs = get_observables(self, y) % Retrieve the trajectories for the model observables % from a matrix of the trajectories of all model % species. % Initialize the struct of observable timecourses % that we will return y_obs = struct(); % Iterate over the observables; observable_names = fieldnames(self.observables); for i = 1:numel(observable_names) obs_matrix = self.observables.(observable_names{i}); species = obs_matrix(1, :); coefficients = obs_matrix(2, :); y_obs.(observable_names{i}) = ... y(:, species) * coefficients'; end end end end """
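For completeness, a hedged sketch of generating the ``.m`` file above from Python. It assumes the generic ``export(model, format)`` entry point of :py:mod:`pysb.export` referenced at the top of this docstring; the file name follows the class-name rule noted there::

    # Sketch: export the Robertson example model to a MATLAB class.
    from pysb.examples.robertson import model
    from pysb.export import export

    matlab_source = export(model, 'matlab')   # assumed generic entry point
    with open('robertson.m', 'w') as f:       # name must match the class name
        f.write(matlab_source)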
#!/usr/bin/python # -*- coding: utf-8 -*- # # Platform methods for out-of-band crawler # Derived from platform.py # # This module is maintained by NAME <EMAIL>. # If you find problems, please submit bug reports/patches via the # Python bug tracker (http://bugs.python.org) and assign them to "lemburg". # # Note: Please keep this module compatible with Python 1.5.2. # # Still needed: # * more support for WinCE # * support for MS-DOS (PythonDX ?) # * support for Amiga and other still unsupported platforms running Python # * support for additional Linux distributions # # Many thanks to all those who helped add platform-specific # checks (in no particular order): # # NAME NAME NAME NAME NAME NAME NAME NAME Betancourt, NAME NAME NAME NAME NAME NAME NAME NAME NAME NAME NAME NAME NAME (OpenVMS support), # NAME NAME NAME NAME History: # # <see CVS and SVN checkin messages for history> # # 1.0.7 - added DEV_NULL # 1.0.6 - added linux_distribution() # 1.0.5 - fixed Java support to allow running the module on Jython # 1.0.4 - added IronPython support # 1.0.3 - added normalization of Windows system name # 1.0.2 - added more Windows support # 1.0.1 - reformatted to make doc.py happy # 1.0.0 - reformatted a bit and checked into Python CVS # 0.8.0 - added sys.version parser and various new access # APIs (python_version(), python_compiler(), etc.) # 0.7.2 - fixed architecture() to use sizeof(pointer) where available # 0.7.1 - added support for Caldera OpenLinux # 0.7.0 - some fixes for WinCE; untabified the source file # 0.6.2 - support for OpenVMS - requires version 1.5.2-V006 or higher and # vms_lib.getsyi() configured # 0.6.1 - added code to prevent 'uname -p' on platforms which are # known not to support it # 0.6.0 - fixed win32_ver() to hopefully work on Win95,98,NT and Win2k; # did some cleanup of the interfaces - some APIs have changed # 0.5.5 - fixed another typo in the MacOS code... should have # used more coffee today ;-) # 0.5.4 - fixed a few typos in the MacOS code # 0.5.3 - added experimental MacOS support; added better popen() # workarounds in _syscmd_ver() -- still not 100% elegant # though # 0.5.2 - fixed uname() to return '' instead of 'unknown' in all # return values (the system uname command tends to return # 'unknown' instead of just leaving the field empty) # 0.5.1 - included code for slackware dist; added exception handlers # to cover up situations where platforms don't have os.popen # (e.g. Mac) or fail on socket.gethostname(); fixed libc # detection RE # 0.5.0 - changed the API names referring to system commands to *syscmd*; # added java_ver(); made syscmd_ver() a private # API (was system_ver() in previous versions) -- use uname() # instead; extended the win32_ver() to also return processor # type information # 0.4.0 - added win32_ver() and modified the platform() output for WinXX # 0.3.4 - fixed a bug in _follow_symlinks() # 0.3.3 - fixed popen() and "file" command invocation bugs # 0.3.2 - added architecture() API and support for it in platform() # 0.3.1 - fixed syscmd_ver() RE to support Windows NT # 0.3.0 - added system alias support # 0.2.3 - removed 'wince' again... oh well.
# 0.2.2 - added 'wince' to syscmd_ver() supported platforms # 0.2.1 - added cache logic and changed the platform string format # 0.2.0 - changed the API to use functions instead of module globals # since some actions take too long to be run on module import # 0.1.0 - first release # # You can always get the latest version of this module at: # # http://www.egenix.com/files/python/platform.py # # If that URL should fail, try contacting the author.
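For reference, the query APIs named in the history above are used like this with the standard-library platform module from which this file is derived (output naturally varies by system):

import platform

print(platform.platform())         # composed, cached platform string
print(platform.architecture())     # e.g. ('64bit', 'ELF'), via sizeof(pointer) where available
print(platform.python_version())   # parsed from sys.version
print(platform.uname())            # system, node, release, version, machine, processor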
#!/usr/bin/env python3 ## -*- coding: utf-8 -*- ## ## NAME - 2018-12-26 ## ## A custom crackme to test the AArch64 architecture. The goal is to find a ## hash collision to take the 'Win' branch. First we run the binary with a random ## seed, then we calculate the hash collision and run the binary a second time with ## the correct input to take the 'Win' branch. ## ## Output: ## ## $ time ./solve.py ## [+] Loading 0x000040 - 0x000238 ## [+] Loading 0x000238 - 0x000253 ## [+] Loading 0x000000 - 0x000a3c ## [+] Loading 0x010db8 - 0x011040 ## [+] Loading 0x010dc8 - 0x010fa8 ## [+] Loading 0x000254 - 0x000274 ## [+] Loading 0x000948 - 0x000984 ## [+] Loading 0x000000 - 0x000000 ## [+] Loading 0x010db8 - 0x011000 ## [+] Hooking __libc_start_main ## [+] Hooking puts ## [+] Starting emulation. ## [+] __libc_start_main hooked ## [+] argv[0] = ./crackme_hash ## [+] argv[1] = arm64 ## [+] Please wait, calculating hash collisions... ## [+] Found several hash collisions: ## {0L: "0x6c, 'l'", 1L: "0x78, 'x'", 2L: "0x75, 'u'", 3L: "0x70, 'p'", 4L: "0x6e, 'n'"} ## {0L: "0x63, 'c'", 1L: "0x78, 'x'", 2L: "0x62, 'b'", 3L: "0x70, 'p'", 4L: "0x62, 'b'"} ## {0L: "0x73, 's'", 1L: "0x68, 'h'", 2L: "0x62, 'b'", 3L: "0x70, 'p'", 4L: "0x62, 'b'"} ## {0L: "0x71, 'q'", 1L: "0x66, 'f'", 2L: "0x62, 'b'", 3L: "0x70, 'p'", 4L: "0x62, 'b'"} ## {0L: "0x75, 'u'", 1L: "0x66, 'f'", 2L: "0x66, 'f'", 3L: "0x70, 'p'", 4L: "0x62, 'b'"} ## {0L: "0x75, 'u'", 1L: "0x67, 'g'", 2L: "0x67, 'g'", 3L: "0x70, 'p'", 4L: "0x62, 'b'"} ## {0L: "0x75, 'u'", 1L: "0x6f, 'o'", 2L: "0x67, 'g'", 3L: "0x78, 'x'", 4L: "0x62, 'b'"} ## {0L: "0x75, 'u'", 1L: "0x6f, 'o'", 2L: "0x67, 'g'", 3L: "0x70, 'p'", 4L: "0x6a, 'j'"} ## {0L: "0x75, 'u'", 1L: "0x6f, 'o'", 2L: "0x67, 'g'", 3L: "0x74, 't'", 4L: "0x6e, 'n'"} ## {0L: "0x75, 'u'", 1L: "0x6f, 'o'", 2L: "0x67, 'g'", 3L: "0x75, 'u'", 4L: "0x6f, 'o'"} ## {0L: "0x76, 'v'", 1L: "0x70, 'p'", 2L: "0x67, 'g'", 3L: "0x75, 'u'", 4L: "0x6f, 'o'"} ## {0L: "0x77, 'w'", 1L: "0x70, 'p'", 2L: "0x66, 'f'", 3L: "0x75, 'u'", 4L: "0x6f, 'o'"} ## {0L: "0x77, 'w'", 1L: "0x70, 'p'", 2L: "0x66, 'f'", 3L: "0x71, 'q'", 4L: "0x6b, 'k'"} ## {0L: "0x76, 'v'", 1L: "0x70, 'p'", 2L: "0x67, 'g'", 3L: "0x71, 'q'", 4L: "0x6b, 'k'"} ## {0L: "0x76, 'v'", 1L: "0x70, 'p'", 2L: "0x67, 'g'", 3L: "0x70, 'p'", 4L: "0x6a, 'j'"} ## {0L: "0x77, 'w'", 1L: "0x70, 'p'", 2L: "0x66, 'f'", 3L: "0x70, 'p'", 4L: "0x6a, 'j'"} ## {0L: "0x77, 'w'", 1L: "0x70, 'p'", 2L: "0x66, 'f'", 3L: "0x72, 'r'", 4L: "0x6c, 'l'"} ## {0L: "0x77, 'w'", 1L: "0x6e, 'n'", 2L: "0x64, 'd'", 3L: "0x72, 'r'", 4L: "0x6c, 'l'"} ## {0L: "0x75, 'u'", 1L: "0x6c, 'l'", 2L: "0x64, 'd'", 3L: "0x72, 'r'", 4L: "0x6c, 'l'"} ## {0L: "0x75, 'u'", 1L: "0x6e, 'n'", 2L: "0x66, 'f'", 3L: "0x72, 'r'", 4L: "0x6c, 'l'"} ## [+] Pick up the first serial: lxupn ## [+] puts hooked ## fail ## [+] Instruction executed: 240 ## [+] Emulation done. ## [+] Start a second emualtion with the good serial to validate the chall ## [+] Starting emulation. ## [+] __libc_start_main hooked ## [+] argv[0] = ./crackme_hash ## [+] argv[1] = lxupn ## [+] puts hooked ## Win ## [+] Instruction executed: 240 ## [+] Emulation done. ## ## ./solve.py 0.10s user 0.00s system 99% cpu 0.105 total ##
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
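A short runnable example tying the methods above together, using read_string() and the typed getters::

    from configparser import ConfigParser

    parser = ConfigParser()
    parser.read_string(
        "[server]\n"
        "host = example.org\n"
        "port = 8080\n"
        "debug = yes\n"
    )

    print(parser.sections())                     # ['server']
    print(parser.get('server', 'host'))          # 'example.org'
    print(parser.getint('server', 'port'))       # 8080
    print(parser.getboolean('server', 'debug'))  # True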
## ## ## Apache License ## Version 2.0, January 2004 ## http://www.apache.org/licenses/ ## ## TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION ## ## 1. Definitions. ## ## "License" shall mean the terms and conditions for use, reproduction, ## and distribution as defined by Sections 1 through 9 of this document. ## ## "Licensor" shall mean the copyright owner or entity authorized by ## the copyright owner that is granting the License. ## ## "Legal Entity" shall mean the union of the acting entity and all ## other entities that control, are controlled by, or are under common ## control with that entity. For the purposes of this definition, ## "control" means (i) the power, direct or indirect, to cause the ## direction or management of such entity, whether by contract or ## otherwise, or (ii) ownership of fifty percent (50%) or more of the ## outstanding shares, or (iii) beneficial ownership of such entity. ## ## "You" (or "Your") shall mean an individual or Legal Entity ## exercising permissions granted by this License. ## ## "Source" form shall mean the preferred form for making modifications, ## including but not limited to software source code, documentation ## source, and configuration files. ## ## "Object" form shall mean any form resulting from mechanical ## transformation or translation of a Source form, including but ## not limited to compiled object code, generated documentation, ## and conversions to other media types. ## ## "Work" shall mean the work of authorship, whether in Source or ## Object form, made available under the License, as indicated by a ## copyright notice that is included in or attached to the work ## (an example is provided in the Appendix below). ## ## "Derivative Works" shall mean any work, whether in Source or Object ## form, that is based on (or derived from) the Work and for which the ## editorial revisions, annotations, elaborations, or other modifications ## represent, as a whole, an original work of authorship. For the purposes ## of this License, Derivative Works shall not include works that remain ## separable from, or merely link (or bind by name) to the interfaces of, ## the Work and Derivative Works thereof. ## ## "Contribution" shall mean any work of authorship, including ## the original version of the Work and any modifications or additions ## to that Work or Derivative Works thereof, that is intentionally ## submitted to Licensor for inclusion in the Work by the copyright owner ## or by an individual or Legal Entity authorized to submit on behalf of ## the copyright owner. For the purposes of this definition, "submitted" ## means any form of electronic, verbal, or written communication sent ## to the Licensor or its representatives, including but not limited to ## communication on electronic mailing lists, source code control systems, ## and issue tracking systems that are managed by, or on behalf of, the ## Licensor for the purpose of discussing and improving the Work, but ## excluding communication that is conspicuously marked or otherwise ## designated in writing by the copyright owner as "Not a Contribution." ## ## "Contributor" shall mean Licensor and any individual or Legal Entity ## on behalf of whom a Contribution has been received by Licensor and ## subsequently incorporated within the Work. ## ## 2. Grant of Copyright License. 
Subject to the terms and conditions of ## this License, each Contributor hereby grants to You a perpetual, ## worldwide, non-exclusive, no-charge, royalty-free, irrevocable ## copyright license to reproduce, prepare Derivative Works of, ## publicly display, publicly perform, sublicense, and distribute the ## Work and such Derivative Works in Source or Object form. ## ## 3. Grant of Patent License. Subject to the terms and conditions of ## this License, each Contributor hereby grants to You a perpetual, ## worldwide, non-exclusive, no-charge, royalty-free, irrevocable ## (except as stated in this section) patent license to make, have made, ## use, offer to sell, sell, import, and otherwise transfer the Work, ## where such license applies only to those patent claims licensable ## by such Contributor that are necessarily infringed by their ## Contribution(s) alone or by combination of their Contribution(s) ## with the Work to which such Contribution(s) was submitted. If You ## institute patent litigation against any entity (including a ## cross-claim or counterclaim in a lawsuit) alleging that the Work ## or a Contribution incorporated within the Work constitutes direct ## or contributory patent infringement, then any patent licenses ## granted to You under this License for that Work shall terminate ## as of the date such litigation is filed. ## ## 4. Redistribution. You may reproduce and distribute copies of the ## Work or Derivative Works thereof in any medium, with or without ## modifications, and in Source or Object form, provided that You ## meet the following conditions: ## ## (a) You must give any other recipients of the Work or ## Derivative Works a copy of this License; and ## ## (b) You must cause any modified files to carry prominent notices ## stating that You changed the files; and ## ## (c) You must retain, in the Source form of any Derivative Works ## that You distribute, all copyright, patent, trademark, and ## attribution notices from the Source form of the Work, ## excluding those notices that do not pertain to any part of ## the Derivative Works; and ## ## (d) If the Work includes a "NOTICE" text file as part of its ## distribution, then any Derivative Works that You distribute must ## include a readable copy of the attribution notices contained ## within such NOTICE file, excluding those notices that do not ## pertain to any part of the Derivative Works, in at least one ## of the following places: within a NOTICE text file distributed ## as part of the Derivative Works; within the Source form or ## documentation, if provided along with the Derivative Works; or, ## within a display generated by the Derivative Works, if and ## wherever such third-party notices normally appear. The contents ## of the NOTICE file are for informational purposes only and ## do not modify the License. You may add Your own attribution ## notices within Derivative Works that You distribute, alongside ## or as an addendum to the NOTICE text from the Work, provided ## that such additional attribution notices cannot be construed ## as modifying the License. ## ## You may add Your own copyright statement to Your modifications and ## may provide additional or different license terms and conditions ## for use, reproduction, or distribution of Your modifications, or ## for any such Derivative Works as a whole, provided Your use, ## reproduction, and distribution of the Work otherwise complies with ## the conditions stated in this License. ## ## 5. Submission of Contributions. 
Unless You explicitly state otherwise, ## any Contribution intentionally submitted for inclusion in the Work ## by You to the Licensor shall be under the terms and conditions of ## this License, without any additional terms or conditions. ## Notwithstanding the above, nothing herein shall supersede or modify ## the terms of any separate license agreement you may have executed ## with Licensor regarding such Contributions. ## ## 6. Trademarks. This License does not grant permission to use the trade ## names, trademarks, service marks, or product names of the Licensor, ## except as required for reasonable and customary use in describing the ## origin of the Work and reproducing the content of the NOTICE file. ## ## 7. Disclaimer of Warranty. Unless required by applicable law or ## agreed to in writing, Licensor provides the Work (and each ## Contributor provides its Contributions) on an "AS IS" BASIS, ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or ## implied, including, without limitation, any warranties or conditions ## of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A ## PARTICULAR PURPOSE. You are solely responsible for determining the ## appropriateness of using or redistributing the Work and assume any ## risks associated with Your exercise of permissions under this License. ## ## 8. Limitation of Liability. In no event and under no legal theory, ## whether in tort (including negligence), contract, or otherwise, ## unless required by applicable law (such as deliberate and grossly ## negligent acts) or agreed to in writing, shall any Contributor be ## liable to You for damages, including any direct, indirect, special, ## incidental, or consequential damages of any character arising as a ## result of this License or out of the use or inability to use the ## Work (including but not limited to damages for loss of goodwill, ## work stoppage, computer failure or malfunction, or any and all ## other commercial damages or losses), even if such Contributor ## has been advised of the possibility of such damages. ## ## 9. Accepting Warranty or Additional Liability. While redistributing ## the Work or Derivative Works thereof, You may choose to offer, ## and charge a fee for, acceptance of support, warranty, indemnity, ## or other liability obligations and/or rights consistent with this ## License. However, in accepting such obligations, You may act only ## on Your own behalf and on Your sole responsibility, not on behalf ## of any other Contributor, and only if You agree to indemnify, ## defend, and hold each Contributor harmless for any liability ## incurred by, or claims asserted against, such Contributor by reason ## of your accepting any such warranty or additional liability. ## ## END OF TERMS AND CONDITIONS ## ## APPENDIX: How to apply the Apache License to your work. ## ## To apply the Apache License to your work, attach the following ## boilerplate notice, with the fields enclosed by brackets "[]" ## replaced with your own identifying information. (Don't include ## the brackets!) The text should be enclosed in the appropriate ## comment syntax for the file format. We also recommend that a ## file or class name and description of purpose be included on the ## same "printed page" as the copyright notice for easier ## identification within third-party archives. ## ## Copyright [yyyy] [name of copyright owner] ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. 
## You may obtain a copy of the License at ## ## http://www.apache.org/licenses/LICENSE-2.0 ## ## Unless required by applicable law or agreed to in writing, software ## distributed under the License is distributed on an "AS IS" BASIS, ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ## See the License for the specific language governing permissions and ## limitations under the License. ## #------------------------------------------------------------------------------- # Name: es.py # Purpose: This file implements a fuzzy expert system. # # Author: NAME Created: 19.09.2014 # Copyright: (c) GrafR 2014 # Licence: Apache 2.0 #------------------------------------------------------------------------------- #!/usr/bin/env python
"""Handles conversion between the set of time intervals used in the `SosModel` There are three main classes, which are currently rather intertwined. :class:`Interval` represents an individual definition of a period within a year. This is specified using the ISO8601 period syntax and exposes methods which use the isodate library to parse this into an internal hourly representation of the period. :class:`TimeIntervalRegister` holds the definitions of time-interval sets specified for the sector models at the :class:`~smif.sos_model.SosModel` level. This class exposes one public method, :py:meth:`~TimeIntervalRegister.add_interval_set` which allows the SosModel to add an interval definition from a model configuration to the register. Quantities ---------- Quantities are associated with a duration, period or interval. For example 120 GWh of electricity generated during each week of February.:: Week 1: 120 GW Week 2: 120 GW Week 3: 120 GW Week 4: 120 GW Other examples of quantities: - greenhouse gas emissions - demands for infrastructure services - materials use - counts of cars past a junction - costs of investments, operation and maintenance Upscale: Divide ~~~~~~~~~~~~~~~ To convert to a higher temporal resolution, the values need to be apportioned across the new time scale. In the above example, the 120 GWh of electricity would be divided over the days of February to produce a daily time series of generation. For example:: 1st Feb: 17 GWh 2nd Feb: 17 GWh 3rd Feb: 17 GWh ... Downscale: Sum ~~~~~~~~~~~~~~ To resample weekly values to a lower temporal resolution, the values would need to be accumulated. A monthly total would be:: Feb: 480 GWh Remapping --------- Remapping quantities, as is required in the conversion from energy demand (hourly values over a year) to energy supply (hourly values for one week for each of four seasons) requires additional averaging operations. The quantities are averaged over the many-to-one relationship of hours to time-slices, so that the seasonal-hourly timeslices in the model approximate the hourly profiles found across the particular seasons in the year. For example:: hour 1: 20 GWh hour 2: 15 GWh hour 3: 10 GWh ... hour 8592: 16 GWh hour 8593: 12 GWh hour 8594: 21 GWh ... hour 8760: 43 GWh To:: season 1 hour 1: 20+16+.../4 GWh # Denominator number hours in sample season 1 hour 2: 15+12+.../4 GWh season 1 hour 3: 10+21+.../4 GWh ... Prices ------ Unlike quantities, prices are associated with a point in time. For example a spot price of £870/GWh. An average price can be associated with a duration, but even then, we are just assigning a price to any point in time within a range of times. Upscale: Fill ~~~~~~~~~~~~~ Given a timeseries of monthly spot prices, converting these to a daily price can be done by a fill operation. E.g. copying the monthly price to each day. From:: Feb: £870/GWh To:: 1st Feb: £870/GWh 2nd Feb: £870/GWh ... Downscale: Average ~~~~~~~~~~~~~~~~~~ On the other hand, going down scale, such as from daily prices to a monthly price requires use of an averaging function. From:: 1st Feb: £870/GWh 2nd Feb: £870/GWh ... To:: Feb: £870/GWh Development Notes ----------------- - We could use :py:meth:`numpy.convolve` to compare time intervals as hourly arrays before adding them to the set of intervals """
# # ElementTree # $Id: ElementTree.py 2326 2005-03-17 07:45:21Z USERNAME $ # # light-weight XML support for Python 1.5.2 and later. # # history: # 2001-10-20 fl created (from various sources) # 2001-11-01 fl return root from parse method # 2002-02-16 fl sort attributes in lexical order # 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup # 2002-05-01 fl finished TreeBuilder refactoring # 2002-07-14 fl added basic namespace support to ElementTree.write # 2002-07-25 fl added QName attribute support # 2002-10-20 fl fixed encoding in write # 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding # 2002-11-27 fl accept file objects or file names for parse/write # 2002-12-04 fl moved XMLTreeBuilder back to this module # 2003-01-11 fl fixed entity encoding glitch for us-ascii # 2003-02-13 fl added XML literal factory # 2003-02-21 fl added ProcessingInstruction/PI factory # 2003-05-11 fl added tostring/fromstring helpers # 2003-05-26 fl added ElementPath support # 2003-07-05 fl added makeelement factory method # 2003-07-28 fl added more well-known namespace prefixes # 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME 2003-09-04 fl fall back on emulator if ElementPath is not installed # 2003-10-31 fl markup updates # 2003-11-15 fl fixed nested namespace bug # 2004-03-28 fl added XMLID helper # 2004-06-02 fl added default support to findtext # 2004-06-08 fl fixed encoding of non-ascii element/attribute names # 2004-08-23 fl take advantage of post-2.1 expat features # 2005-02-01 fl added iterparse implementation # 2005-03-02 fl fixed iterparse support for pre-2.2 versions # # Copyright (c) 1999-2005 by NAME All rights reserved. # # EMAIL # http://www.pythonware.com # # -------------------------------------------------------------------- # The ElementTree toolkit is # # Copyright (c) 1999-2005 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. # # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # Licensed to PSF under a Contributor Agreement. # See http://www.python.org/2.4/license for licensing details.
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <EMAIL>, 2012-2013 # Copyright (c), NAME <EMAIL>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
""" ======================== Broadcasting over arrays ======================== The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy's broadcasting rule relaxes this constraint when the arrays' shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where ``b`` was an array. We can think of the scalar ``b`` being *stretched* during the arithmetic operation into an array with the same shape as ``a``. The new elements in ``b`` are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies, so that broadcasting operations are as memory and computationally efficient as possible. The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (``b`` is a scalar rather than an array). General Broadcasting Rules ========================== When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when 1) they are equal, or 2) one of them is 1 If these conditions are not met, a ``ValueError: frames are not aligned`` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays. Arrays do not need to have the same *number* of dimensions. For example, if you have a ``256x256x3`` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:: Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or "copied" to match the other. 
In the following example, both the ``A`` and ``B`` arrays have axes with length one that are expanded to a larger size during the broadcast operation:: A (4d array): 8 x 1 x 6 x 1 B (3d array): 7 x 1 x 5 Result (4d array): 8 x 7 x 6 x 5 Here are some more examples:: A (2d array): 5 x 4 B (1d array): 1 Result (2d array): 5 x 4 A (2d array): 5 x 4 B (1d array): 4 Result (2d array): 5 x 4 A (3d array): 15 x 3 x 5 B (3d array): 15 x 1 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 1 Result (3d array): 15 x 3 x 5 Here are examples of shapes that do not broadcast:: A (1d array): 3 B (1d array): 4 # trailing dimensions do not match A (2d array): 2 x 1 B (3d array): 8 x 4 x 3 # second from last dimensions mismatched An example of broadcasting in practice:: >>> x = np.arange(4) >>> xx = x.reshape(4,1) >>> y = np.ones(5) >>> z = np.ones((3,4)) >>> x.shape (4,) >>> y.shape (5,) >>> x + y <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape >>> xx.shape (4, 1) >>> y.shape (5,) >>> (xx + y).shape (4, 5) >>> xx + y array([[ 1., 1., 1., 1., 1.], [ 2., 2., 2., 2., 2.], [ 3., 3., 3., 3., 3.], [ 4., 4., 4., 4., 4.]]) >>> x.shape (4,) >>> z.shape (3, 4) >>> (x + z).shape (3, 4) >>> x + z array([[ 1., 2., 3., 4.], [ 1., 2., 3., 4.], [ 1., 2., 3., 4.]]) Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays:: >>> a = np.array([0.0, 10.0, 20.0, 30.0]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a[:, np.newaxis] + b array([[ 1., 2., 3.], [ 11., 12., 13.], [ 21., 22., 23.], [ 31., 32., 33.]]) Here the ``newaxis`` index operator inserts a new axis into ``a``, making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array with ``b``, which has shape ``(3,)``, yields a ``4x3`` array. See `this article <http://wiki.scipy.org/EricsBroadcastingDoc>`_ for illustrations of broadcasting concepts. """
""" DIRAC Basic MySQL Class It provides access to the basic MySQL methods in a multithread-safe mode keeping used connections in a python Queue for further reuse. These are the coded methods: __init__( host, user, passwd, name, [maxConnsInQueue=10] ) Initializes the Queue and tries to connect to the DB server, using the _connect method. "maxConnsInQueue" defines the size of the Queue of open connections that are kept for reuse. It also defined the maximum number of open connections available from the object. maxConnsInQueue = 0 means unlimited and it is not supported. _except( methodName, exception, errorMessage ) Helper method for exceptions: the "methodName" and the "errorMessage" are printed with ERROR level, then the "exception" is printed (with full description if it is a MySQL Exception) and S_ERROR is returned with the errorMessage and the exception. _connect() Attempts connection to DB and sets the _connected flag to True upon success. Returns S_OK or S_ERROR. _query( cmd, [conn] ) Executes SQL command "cmd". Gets a connection from the Queue (or open a new one if none is available), the used connection is back into the Queue. If a connection to the the DB is passed as second argument this connection is used and is not in the Queue. Returns S_OK with fetchall() out in Value or S_ERROR upon failure. _update( cmd, [conn] ) Executes SQL command "cmd" and issue a commit Gets a connection from the Queue (or open a new one if none is available), the used connection is back into the Queue. If a connection to the the DB is passed as second argument this connection is used and is not in the Queue Returns S_OK with number of updated registers in Value or S_ERROR upon failure. _createTables( tableDict ) Create a new Table in the DB _getConnection() Gets a connection from the Queue (or open a new one if none is available) Returns S_OK with connection in Value or S_ERROR the calling method is responsible for closing this connection once it is no longer needed. Some high level methods have been added to avoid the need to write SQL statement in most common cases. They should be used instead of low level _insert, _update methods when ever possible. buildCondition( self, condDict = None, older = None, newer = None, timeStamp = None, orderAttribute = None, limit = False, greater = None, smaller = None ): Build SQL condition statement from provided condDict and other extra check on a specified time stamp. The conditions dictionary specifies for each attribute one or a List of possible values greater and smaller are dictionaries in which the keys are the names of the fields, that are requested to be >= or < than the corresponding value. For compatibility with current usage it uses Exceptions to exit in case of invalid arguments insertFields( self, tableName, inFields = None, inValues = None, conn = None, inDict = None ): Insert a new row in "tableName" assigning the values "inValues" to the fields "inFields". Alternatively inDict can be used String type values will be appropriately escaped. updateFields( self, tableName, updateFields = None, updateValues = None, condDict = None, limit = False, conn = None, updateDict = None, older = None, newer = None, timeStamp = None, orderAttribute = None ): Update "updateFields" from "tableName" with "updateValues". updateDict alternative way to provide the updateFields and updateValues N records can match the condition return S_OK( number of updated rows ) if limit is not False, the given limit is set String type values will be appropriately escaped. 
deleteEntries( self, tableName, condDict = None, limit = False, conn = None, older = None, newer = None, timeStamp = None, orderAttribute = None ):

  Deletes rows from "tableName" that match the given condition. N records can
  match the condition. If limit is not False, the given limit is set. String
  type values will be appropriately escaped; they can be single values or
  lists of values.

getFields( self, tableName, outFields = None, condDict = None, limit = False, conn = None, older = None, newer = None, timeStamp = None, orderAttribute = None ):

  Selects "outFields" from "tableName" with the given condDict. N records can
  match the condition. Returns S_OK( tuple(Field, Value) ). If limit is not
  False, the given limit is set. String type values will be appropriately
  escaped; they can be single values or lists of values. For compatibility
  with other methods, the condDict keyword argument is added.

getCounters( self, table, attrList, condDict = None, older = None, newer = None, timeStamp = None, connection = False ):

  Counts the number of records for each distinct combination of attrList,
  selected with the condition defined by condDict and the time stamps.

getDistinctAttributeValues( self, table, attribute, condDict = None, older = None, newer = None, timeStamp = None, connection = False ):

  Gets the distinct values of a table attribute under the specified
  conditions.
"""
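As a hedged illustration of the high level methods described above (the table
and field names here are invented, and error handling is abbreviated), usage
might look like this::

    db = MySQL('localhost', 'user', 'passwd', 'MyDB')

    # Insert a row; string values are escaped by the method itself.
    result = db.insertFields('Jobs', inDict={'Status': 'Waiting',
                                             'Owner': 'alice'})
    if not result['OK']:
        print result['Message']

    # Select with a condition dictionary; values may be lists.
    result = db.getFields('Jobs', outFields=['JobID', 'Status'],
                          condDict={'Status': ['Waiting', 'Running']},
                          limit=10)
    if result['OK']:
        for row in result['Value']:
            print row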
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
    Like get(), but convert value to a float.

getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
    Like get(), but convert value to a boolean (currently case insensitively
    defined as 0, false, no, off for False, and 1, true, yes, on for True).
    Returns False or True.

items(section=_UNSET, raw=False, vars=None)
    If section is given, return a list of tuples with (name, value) for each
    option in the section. Otherwise, return a list of tuples with
    (section_name, section_proxy) for each section, including DEFAULTSECT.

remove_section(section)
    Remove the given file section and all its options.

remove_option(section, option)
    Remove the given option from the given section.

set(section, option, value)
    Set the given option.

write(fp, space_around_delimiters=True)
    Write the configuration state in .ini format. If
    `space_around_delimiters' is True (the default), delimiters between keys
    and values are surrounded by spaces.
"""
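A short usage sketch of the interface described above (assuming the Python 3
module name ``configparser``):

    >>> import configparser
    >>> cfg = configparser.ConfigParser()
    >>> cfg.read_string("[server]\nhost = localhost\nport = 8081\ndebug = yes\n")
    >>> cfg.get('server', 'host')
    'localhost'
    >>> cfg.getint('server', 'port')
    8081
    >>> cfg.getboolean('server', 'debug')
    True
    >>> cfg.get('server', 'timeout', fallback='30')  # missing option, fallback used
    '30'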
""" ============================= Subclassing ndarray in python ============================= Credits ------- This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses. Introduction ------------ Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass. ndarrays and object creation ============================ Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are: #. Explicit constructor call - as in ``MySubClass(params)``. This is the usual route to Python instance creation. #. View casting - casting an existing ndarray as a given subclass #. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See :ref:`new-from-template` for more details The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation. .. _view-casting: View casting ------------ *View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass: >>> import numpy as np >>> # create a completely useless ndarray subclass >>> class C(np.ndarray): pass >>> # create a standard ndarray >>> arr = np.zeros((3,)) >>> # take a view of it, as our useless subclass >>> c_arr = arr.view(C) >>> type(c_arr) <class 'C'> .. _new-from-template: Creating new from template -------------------------- New instances of an ndarray subclass can also come about by a very similar mechanism to :ref:`view-casting`, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example: >>> v = c_arr[1:] >>> type(v) # the view is of type 'C' <class 'C'> >>> v is c_arr # but it's a new instance False The slice is a *view* onto the original ``c_arr`` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (``c_arr.copy()``), creating ufunc output arrays (see also :ref:`array-wrap`), and reducing methods (like ``c_arr.mean()``. Relationship of view casting and new-from-template -------------------------------------------------- These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, :ref:`view-casting` means you have created a new instance of your array type from any potential subclass of ndarray. :ref:`new-from-template` means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass. Implications for subclassing ---------------------------- If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also :ref:`view-casting` or :ref:`new-from-template`. Numpy has the machinery to do this, and this machinery that makes subclassing slightly non-standard. 
There are two aspects to the machinery that ndarray uses to support views and
new-from-template in subclasses.

The first is the use of the ``ndarray.__new__`` method for the main work of
object initialization, rather than the more usual ``__init__`` method. The
second is the use of the ``__array_finalize__`` method to allow subclasses to
clean up after the creation of views and new instances from templates.

A brief Python primer on ``__new__`` and ``__init__``
=====================================================

``__new__`` is a standard Python method, and, if present, is called before
``__init__`` when we create a class instance. See the `python __new__
documentation <http://docs.python.org/reference/datamodel.html#object.__new__>`_
for more detail.

For example, consider the following Python code:

.. testcode::

  class C(object):
      def __new__(cls, *args):
          print 'Cls in __new__:', cls
          print 'Args in __new__:', args
          return object.__new__(cls, *args)

      def __init__(self, *args):
          print 'type(self) in __init__:', type(self)
          print 'Args in __init__:', args

meaning that we get:

>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)

When we call ``C('hello')``, the ``__new__`` method gets its own class as
first argument, and the passed argument, which is the string ``'hello'``.
After python calls ``__new__``, it usually (see below) calls our ``__init__``
method, with the output of ``__new__`` as the first argument (now a class
instance), and the passed arguments following.

As you can see, the object can be initialized in the ``__new__`` method or the
``__init__`` method, or both, and in fact ndarray does not have an
``__init__`` method, because all the initialization is done in the ``__new__``
method.

Why use ``__new__`` rather than just the usual ``__init__``? Because in some
cases, as for ndarray, we want to be able to return an object of some other
class. Consider the following:

.. testcode::

  class D(C):
      def __new__(cls, *args):
          print 'D cls is:', cls
          print 'D args in __new__:', args
          return C.__new__(C, *args)

      def __init__(self, *args):
          # we never get here
          print 'In D __init__'

meaning that:

>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>

The definition of ``C`` is the same as before, but for ``D``, the ``__new__``
method returns an instance of class ``C`` rather than ``D``. Note that the
``__init__`` method of ``D`` does not get called. In general, when the
``__new__`` method returns an object of class other than the class in which it
is defined, the ``__init__`` method of that class is not called.

This is how subclasses of the ndarray class are able to return views that
preserve the class type. When taking a view, the standard ndarray machinery
creates the new ndarray object with something like::

  obj = ndarray.__new__(subtype, shape, ...

where ``subtype`` is the subclass. Thus the returned view is of the same class
as the subclass, rather than being of class ``ndarray``.

That solves the problem of returning views of the same type, but now we have a
new problem. The machinery of ndarray can set the class this way, in its
standard methods for taking views, but the ndarray ``__new__`` method knows
nothing of what we have done in our own ``__new__`` method in order to set
attributes, and so on. (Aside - why not call ``obj = subtype.__new__(...``
then?
Because we may not have a ``__new__`` method with the same call signature).

The role of ``__array_finalize__``
==================================

``__array_finalize__`` is the mechanism that numpy provides to allow
subclasses to handle the various ways that new instances get created. Remember
that subclass instances can come about in these three ways:

#. explicit constructor call (``obj = MySubClass(params)``). This will call
   the usual sequence of ``MySubClass.__new__`` then (if it exists)
   ``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`

Our ``MySubClass.__new__`` method only gets called in the case of the explicit
constructor call, so we can't rely on ``MySubClass.__new__`` or
``MySubClass.__init__`` to deal with the view casting and new-from-template.
It turns out that ``MySubClass.__array_finalize__`` *does* get called for all
three methods of object creation, so this is where our object creation
housekeeping usually goes.

* For the explicit constructor call, our subclass will need to create a new
  ndarray instance of its own class. In practice this means that we, the
  authors of the code, will need to make a call to
  ``ndarray.__new__(MySubClass,...)``, or do view casting of an existing
  array (see below)
* For view casting and new-from-template, the equivalent of
  ``ndarray.__new__(MySubClass,...`` is called, at the C level.

The arguments that ``__array_finalize__`` receives differ for the three
methods of instance creation above. The following code allows us to look at
the call sequences and arguments:

.. testcode::

  import numpy as np

  class C(np.ndarray):
      def __new__(cls, *args, **kwargs):
          print 'In __new__ with class %s' % cls
          return np.ndarray.__new__(cls, *args, **kwargs)

      def __init__(self, *args, **kwargs):
          # in practice you probably will not need or want an __init__
          # method for your subclass
          print 'In __init__ with class %s' % self.__class__

      def __array_finalize__(self, obj):
          print 'In array_finalize:'
          print '   self type is %s' % type(self)
          print '   obj type is %s' % type(obj)

Now:

>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'C'>

The signature of ``__array_finalize__`` is::

  def __array_finalize__(self, obj):

``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our own
class (``self``) as well as the object from which the view has been taken
(``obj``). As you can see from the output above, the ``self`` is always a
newly created instance of our subclass, and the type of ``obj`` differs for
the three instance creation methods:

* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any subclass of
  ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our own
  subclass, that we might use to update the new ``self`` instance.

Because ``__array_finalize__`` is the only method that always sees new
instances being created, it is the sensible place to fill in instance defaults
for new object attributes, among other tasks.

This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------

.. testcode::

  import numpy as np

  class InfoArray(np.ndarray):

      def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                  strides=None, order=None, info=None):
          # Create the ndarray instance of our type, given the usual
          # ndarray input arguments.  This will call the standard
          # ndarray constructor, but return an object of our type.
          # It also triggers a call to InfoArray.__array_finalize__
          obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset,
                                   strides, order)
          # set the new 'info' attribute to the value passed
          obj.info = info
          # Finally, we must return the newly created object:
          return obj

      def __array_finalize__(self, obj):
          # ``self`` is a new object resulting from
          # ndarray.__new__(InfoArray, ...), therefore it only has
          # attributes that the ndarray.__new__ constructor gave it -
          # i.e. those of a standard ndarray.
          #
          # We could have got to the ndarray.__new__ call in 3 ways:
          # From an explicit constructor - e.g. InfoArray():
          #    obj is None
          #    (we're in the middle of the InfoArray.__new__
          #    constructor, and self.info will be set when we return to
          #    InfoArray.__new__)
          if obj is None: return
          # From view casting - e.g. arr.view(InfoArray):
          #    obj is arr
          #    (type(obj) can be InfoArray)
          # From new-from-template - e.g. infoarr[:3]
          #    type(obj) is InfoArray
          #
          # Note that it is here, rather than in the __new__ method,
          # that we set the default value for 'info', because this
          # method sees all creation of default objects - with the
          # InfoArray.__new__ constructor, but also with
          # arr.view(InfoArray).
          self.info = getattr(obj, 'info', None)
          # We do not need to return anything

Using the object looks like this:

>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True

This class isn't very useful, because it has the same constructor as the bare
ndarray object, including passing in buffers and shapes and so on. We would
probably prefer the constructor to be able to take an already formed ndarray
from the usual numpy calls to ``np.array`` and return an object.

Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------

Here is a class that takes a standard ndarray that already exists, casts it as
our type, and adds an extra attribute.

.. testcode::

  import numpy as np

  class RealisticInfoArray(np.ndarray):

      def __new__(cls, input_array, info=None):
          # Input array is an already formed ndarray instance
          # We first cast to be our class type
          obj = np.asarray(input_array).view(cls)
          # add the new attribute to the created instance
          obj.info = info
          # Finally, we must return the newly created object:
          return obj

      def __array_finalize__(self, obj):
          # see InfoArray.__array_finalize__ for comments
          if obj is None: return
          self.info = getattr(obj, 'info', None)

So:

>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:

``__array_wrap__`` for ufuncs
-----------------------------

``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy
functions, to allow a subclass to set the type of the return value and update
attributes and metadata. Let's show how this works with an example. First we
make the same subclass as above, but with a different name and some print
statements:

.. testcode::

  import numpy as np

  class MySubClass(np.ndarray):

      def __new__(cls, input_array, info=None):
          obj = np.asarray(input_array).view(cls)
          obj.info = info
          return obj

      def __array_finalize__(self, obj):
          print 'In __array_finalize__:'
          print '   self is %s' % repr(self)
          print '   obj is %s' % repr(obj)
          if obj is None: return
          self.info = getattr(obj, 'info', None)

      def __array_wrap__(self, out_arr, context=None):
          print 'In __array_wrap__:'
          print '   self is %s' % repr(self)
          print '   arr is %s' % repr(out_arr)
          # then just call the parent
          return np.ndarray.__array_wrap__(self, out_arr, context)

We run a ufunc on an instance of our new array:

>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
   self is MySubClass([0, 1, 2, 3, 4])
   obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
   self is MySubClass([0, 1, 2, 3, 4])
   arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
   self is MySubClass([1, 3, 5, 7, 9])
   obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'

Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of
the input with the highest ``__array_priority__`` value, in this case
``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and
``out_arr`` as the (ndarray) result of the addition. In turn, the default
``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the result to class
``MySubClass``, and called ``__array_finalize__`` - hence the copying of the
``info`` attribute. This has all happened at the C level.

But, we could do anything we wanted:

.. testcode::

  class SillySubClass(np.ndarray):

      def __array_wrap__(self, arr, context=None):
          return 'I lost your data'

>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'

So, by defining a specific ``__array_wrap__`` method for our subclass, we can
tweak the output from ufuncs. The ``__array_wrap__`` method requires ``self``,
then an argument - which is the result of the ufunc - and an optional
parameter *context*. This parameter is returned by some ufuncs as a 3-element
tuple: (name of the ufunc, arguments of the ufunc, domain of the ufunc).
``__array_wrap__`` should return an instance of its containing class. See the
masked array subclass for an implementation.

In addition to ``__array_wrap__``, which is called on the way out of the
ufunc, there is also an ``__array_prepare__`` method which is called on the
way into the ufunc, after the output arrays are created but before any
computation has been performed. The default implementation does nothing but
pass through the array. ``__array_prepare__`` should not attempt to access the
array data or resize the array, it is intended for setting the output array
type, updating attributes and metadata, and performing any checks based on the
input that may be desired before computation begins. Like ``__array_wrap__``,
``__array_prepare__`` must return an ndarray or subclass thereof or raise an
error.
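A minimal sketch of the ``__array_prepare__`` hook just described, following
the same pattern as the ``__array_wrap__`` example above (the class name is
invented; the print statements match the Python 2 style used on this page):

.. testcode::

  class LoggingArray(np.ndarray):

      def __array_prepare__(self, out_arr, context=None):
          # called on the way into the ufunc; out_arr is the freshly
          # allocated output array, before any computation happens
          print 'In __array_prepare__ with context:', context
          return np.ndarray.__array_prepare__(self, out_arr, context)

>>> arr = np.arange(3).view(LoggingArray)
>>> ret = np.add(arr, 1)  # prints the (ufunc, arguments, domain) context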
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------

One of the problems that ndarray solves is keeping track of memory ownership
of ndarrays and their views. Consider the case where we have created an
ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``. The two objects
are looking at the same memory. Numpy keeps track of where the data came from
for a particular array or view, with the ``base`` attribute:

>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True

In general, if the array owns its own memory, as for ``arr`` in this case,
then ``arr.base`` will be None - there are some exceptions to this - see the
numpy book for more details.

The ``base`` attribute is useful in being able to tell whether we have a view
or the original array. This in turn can be useful if we need to know whether
or not to do some specific cleanup when the subclassed array is deleted. For
example, we may only want to do the cleanup if the original array is deleted,
but not the views. For an example of how this can work, have a look at the
``memmap`` class in ``numpy.core``.
"""
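As a purely illustrative sketch of the pattern just described (the class and
the print messages are invented; real cleanup code would release a file handle
or similar), assuming that checking ``base`` is an acceptable ownership test
for your use case:

.. testcode::

  class CleanupArray(np.ndarray):

      def __del__(self):
          # only the array that owns its memory triggers real cleanup;
          # views come and go without releasing the shared resource
          if self.base is None:
              print 'owner deleted - cleaning up'
          else:
              print 'view deleted - nothing to do'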
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to reqd all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <EMAIL> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
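A minimal sketch of the pattern described above, combining a mix-in with a
server class and a stream request handler (module name as in Python 2; on
Python 3 the module is ``socketserver``; the port and echo behavior are
arbitrary)::

    import SocketServer

    class EchoHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # rfile/wfile are file-like wrappers around the socket
            line = self.rfile.readline()
            self.wfile.write(line)

    class ThreadingTCPEchoServer(SocketServer.ThreadingMixIn,
                                 SocketServer.TCPServer):
        pass  # the mix-in must come first

    server = ThreadingTCPEchoServer(('localhost', 9999), EchoHandler)
    server.serve_forever()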
""" ================= Structured Arrays ================= Introduction ============ NumPy provides powerful capabilities to create arrays of structured datatype. These arrays permit one to manipulate the data by named fields. A simple example will show what is meant.: :: >>> x = np.array([(1,2.,'Hello'), (2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we get the second structure: :: >>> x[1] (2,3.,"World") Conveniently, one can access any field of the array by indexing using the string that names that field. :: >>> y = x['bar'] >>> y array([ 2., 3.], dtype=float32) >>> y[:] = 2*y >>> y array([ 4., 6.], dtype=float32) >>> x array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) In these examples, y is a simple float array consisting of the 2nd field in the structured type. But, rather than being a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well. Likewise, if one changes the structured array, the field view also changes: :: >>> x[1] = (-1,-1.,"Master") >>> x array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) >>> y array([ 4., -1.], dtype=float32) Defining Structured Arrays ========================== One defines a structured array through the dtype object. There are **several** alternative ways to define the fields of a record. Some of these variants provide backward compatibility with Numeric, numarray, or another module, and should not be used except for such purposes. These will be so noted. One specifies record structure in one of four alternative ways, using an argument (as supplied to a dtype function keyword or a dtype object constructor itself). This argument must be one of the following: 1) string, 2) tuple, 3) list, or 4) dictionary. Each of these is briefly described below. 1) String argument. In this case, the constructor expects a comma-separated list of type specifiers, optionally with extra shape information. The fields are given the default names 'f0', 'f1', 'f2' and so on. The type specifiers can take 4 different forms: :: a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n> (representing bytes, ints, unsigned ints, floats, complex and fixed length strings of specified byte lengths) b) int8,...,uint8,...,float16, float32, float64, complex64, complex128 (this time with bit sizes) c) older Numeric/numarray type specifications (e.g. Float32). Don't use these in new code! d) Single character type specifiers (e.g H for unsigned short ints). Avoid using these unless you must. Details can be found in the NumPy book These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an array within a record. That array is still referred to as a single field. 
An example: :: >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64') >>> x array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])], dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))]) By using strings to define the record structure, it precludes being able to name the fields in the original definition. The names can be changed as shown later, however. 2) Tuple argument: The only relevant tuple case that applies to record structures is when a structure is mapped to an existing data type. This is done by pairing in a tuple, the existing data type with a matching dtype definition (using any of the variants being described here). As an example (using a definition using a list, so see 3) for further details): :: >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')])) >>> x array([0, 0, 0]) >>> x['r'] array([0, 0, 0], dtype=uint8) In this case, an array is produced that looks and acts like a simple int32 array, but also has definitions for fields that use only one byte of the int32 (a bit like Fortran equivalencing). 3) List argument: In this case the record structure is defined with a list of tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field ('' is permitted), 2) the type of the field, and 3) the shape (optional). For example:: >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) >>> x array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])], dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))]) 4) Dictionary argument: two different forms are permitted. The first consists of a dictionary with two required keys ('names' and 'formats'), each having an equal sized list of values. The format list contains any type/shape specifier allowed in other contexts. The names must be strings. There are two optional keys: 'offsets' and 'titles'. Each must be a correspondingly matching list to the required two where offsets contain integer offsets for each field, and titles are objects containing metadata for each field (these do not have to be strings), where the value of None is permitted. As an example: :: >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[('col1', '>i4'), ('col2', '>f4')]) The other dictionary form permitted is a dictionary of name keys with tuple values specifying type, offset, and an optional title. :: >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')]) Accessing and modifying field names =================================== The field names are an attribute of the dtype object defining the structure. For the last example: :: >>> x.dtype.names ('col1', 'col2') >>> x.dtype.names = ('x', 'y') >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')]) >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2 Accessing field titles ==================================== The field titles provide a standard place to put associated info for fields. They do not have to be strings. 
::

 >>> x.dtype.fields['x'][2]
 'title 1'

Accessing multiple fields at once
====================================

You can access multiple fields at once using a list of field names: ::

 >>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))],
 ...         dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])

Notice that `x` is created with a list of tuples. ::

 >>> x[['x','y']]
 array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)],
      dtype=[('x', '<f4'), ('y', '<f4')])
 >>> x[['x','value']]
 array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]),
       (1.0, [[2.0, 6.0], [2.0, 6.0]])],
      dtype=[('x', '<f4'), ('value', '<f4', (2, 2))])

The fields are returned in the order they are asked for.::

 >>> x[['y','x']]
 array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)],
      dtype=[('y', '<f4'), ('x', '<f4')])

Filling structured arrays
=========================

Structured arrays can be filled by field or row by row. ::

 >>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')])
 >>> arr['var1'] = np.arange(5)

If you fill it in row by row, it takes a tuple (but not a list or array!)::

 >>> arr[0] = (10,20)
 >>> arr
 array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)],
      dtype=[('var1', '<f8'), ('var2', '<f8')])

Record Arrays
=============

For convenience, numpy provides "record arrays" which allow one to access
fields of structured arrays by attribute rather than by index. Record arrays
are structured arrays wrapped using a subclass of ndarray,
:class:`numpy.recarray`, which allows field access by attribute on the array
object, and record arrays also use a special datatype, :class:`numpy.record`,
which allows field access by attribute on the individual elements of the
array.

The simplest way to create a record array is with :func:`numpy.rec.array`: ::

 >>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")],
 ...                    dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
 >>> recordarr.bar
 array([ 2.,  3.], dtype=float32)
 >>> recordarr[1:2]
 rec.array([(2, 3.0, 'World')],
       dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
 >>> recordarr[1:2].foo
 array([2], dtype=int32)
 >>> recordarr.foo[1:2]
 array([2], dtype=int32)
 >>> recordarr[1].baz
 'World'

numpy.rec.array can convert a wide variety of arguments into record arrays,
including normal structured arrays: ::

 >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
 ...             dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
 >>> recordarr = np.rec.array(arr)

The numpy.rec module provides a number of other convenience functions for
creating record arrays, see :ref:`record array creation routines
<routines.array-creation.rec>`.

A record array representation of a structured array can be obtained using the
appropriate :ref:`view`: ::

 >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
 ...                dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
 >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
 ...                      type=np.recarray)

For convenience, viewing an ndarray as type `np.recarray` will automatically
convert to `np.record` datatype, so the dtype can be left out of the view: ::

 >>> recordarr = arr.view(np.recarray)
 >>> recordarr.dtype
 dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))

To get back to a plain ndarray both the dtype and type must be reset.
The following view does so, taking into account the unusual case that the recordarr was not a structured type: :: >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise. :: >>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))], ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) >>> type(recordarr.foo) <type 'numpy.ndarray'> >>> type(recordarr.bar) <class 'numpy.core.records.recarray'> Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but may still be accessed by index. """
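The name-collision note above can be demonstrated directly; the field name
``shape`` is chosen deliberately to clash with the ndarray attribute: ::

 >>> clash = np.rec.array([(1, 2.0)], dtype=[('shape', 'i4'), ('bar', 'f4')])
 >>> clash.shape              # the ndarray attribute wins
 (1,)
 >>> clash['shape']           # the field remains accessible by index
 array([1], dtype=int32)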
"""PooledPg - pooling for classic PyGreSQL connections. Implements a pool of steady, thread-safe cached connections to a PostgreSQL database which are transparently reused, using the classic (not DB-API 2 compliant) PyGreSQL API. This should result in a speedup for persistent applications such as the application server of "Webware for Python," without loss of robustness. Robustness is provided by using "hardened" SteadyPg connections. Even if the underlying database is restarted and all connections are lost, they will be automatically and transparently reopened. However, since you don't want this to happen in the middle of a database transaction, you must explicitly start transactions with the begin() method so that SteadyPg knows that the underlying connection shall not be replaced and errors passed on until the transaction is completed. Measures are taken to make the pool of connections thread-safe regardless of the fact that the classic PyGreSQL pg module itself is not thread-safe at the connection level. For more information on PostgreSQL, see: https://www.postgresql.org/ For more information on PyGreSQL, see: http://www.pygresql.org For more information on Webware for Python, see: https://webwareforpython.github.io/w4py/ Usage: First you need to set up the database connection pool by creating an instance of PooledPg, passing the following parameters: mincached: the initial number of connections in the pool (the default of 0 means no connections are made at startup) maxcached: the maximum number of connections in the pool (the default value of 0 or None means unlimited pool size) maxconnections: maximum number of connections generally allowed (the default value of 0 or None means any number of connections) blocking: determines behavior when exceeding the maximum (if this is set to true, block and wait until the number of connections decreases, but by default an error will be reported) maxusage: maximum number of reuses of a single connection (the default of 0 or None means unlimited reuse) When this maximum usage number of the connection is reached, the connection is automatically reset (closed and reopened). setsession: an optional list of SQL commands that may serve to prepare the session, e.g. ["set datestyle to german", ...] Additionally, you have to pass the parameters for the actual PostgreSQL connection which are passed via PyGreSQL, such as the names of the host, database, user, password etc. For instance, if you want a pool of at least five connections to your local database 'mydb': from dbutils.pooled_pg import PooledPg pool = PooledPg(5, dbname='mydb') Once you have set up the connection pool you can request database connections from that pool: db = pool.connection() You can use these connections just as if they were ordinary classic PyGreSQL API connections. Actually what you get is a proxy class for the hardened SteadyPg version of the connection. The connection will not be shared with other threads. If you don't need it any more, you should immediately return it to the pool with db.close(). You can get another connection in the same way or with db.reopen(). Warning: In a threaded environment, never do the following: res = pool.connection().query(...).getresult() This would release the connection too early for reuse which may be fatal because the connections are not thread-safe. 
Make sure that the connection object stays alive as long as you are using it,
like this:

    db = pool.connection()
    res = db.query(...).getresult()
    db.close()  # or del db

You can also use a context manager for simpler code:

    with pool.connection() as db:
        res = db.query(...).getresult()

Note that you need to explicitly start transactions by calling the begin()
method. This ensures that the transparent reopening will be suspended until
the end of the transaction, and that the connection will be rolled back before
being given back to the connection pool.

To end transactions, use one of the end(), commit() or rollback() methods.

Ideas for improvement:

* Add a thread for monitoring, restarting (or closing) bad or expired
  connections (similar to DBConnectionPool/ResourcePool by Warren Smith).
* Optionally log usage, bad connections and exceeding of limits.

Copyright, credits and license:

* Contributed as supplement for Webware for Python and PyGreSQL by NAME in
  September 2005
* Based on the code of DBPool, contributed to Webware for Python by NAME in
  December 2000

Licensed under the MIT license.
"""
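A hedged sketch of the transaction pattern described above (the SQL and the
table names are invented for illustration):

    db = pool.connection()
    try:
        db.begin()  # suspend transparent reopening for the transaction
        db.query("insert into tasks values (1, 'pending')")
        db.query("update counters set n = n + 1 where name = 'tasks'")
        db.commit()
    except Exception:
        db.rollback()
        raise
    finally:
        db.close()  # return the connection to the pool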
#!/usr/bin/env python2 # ############################################################################## ### NZBGET POST-PROCESSING SCRIPT ### # Post-Process to USERNAME. # # This script sends the download to your automated media management servers. # # NOTE: This script requires Python to be installed on your system. ############################################################################## ### OPTIONS ### ## General # Auto Update nzbToMedia (0, 1). # # Set to 1 if you want nzbToMedia to automatically check for and update to the latest version #auto_update=0 # Check Media for corruption (0, 1). # # Enable/Disable media file checking using ffprobe. #check_media=1 # Safe Mode protection of DestDir (0, 1). # # Enable/Disable a safety check to ensure we don't process all downloads in the default_downloadDirectory by mistake. #safe_mode=1 # Media Extensions # # This is a list of media extensions that are used to verify that the download does contain valid media. #mediaExtensions=.mkv,.avi,.divx,.xvid,.mov,.wmv,.mp4,.mpg,.mpeg,.vob,.iso ## USERNAME # USERNAME script category. # # category that gets called for post-processing with USERNAME. #sbCategory=tv # USERNAME host. # # The ipaddress for your USERNAME/SickRage server. e.g For the Same system use localhost or IP_ADDRESS #sbhost=localhost # USERNAME port. #sbport=8081 # USERNAME username. #sbusername= # USERNAME password. #sbpassword= # USERNAME uses ssl (0, 1). # # Set to 1 if using ssl, else set to 0. #sbssl=0 # USERNAME web_root # # set this if using a reverse proxy. #sbweb_root= # USERNAME watch directory. # # set this to where your USERNAME completed downloads are. #sbwatch_dir= # USERNAME fork. # # set to default or auto to auto-detect the custom fork type. #sbfork=auto # USERNAME Delete Failed Downloads (0, 1). # # set to 1 to delete failed, or 0 to leave files in place. #sbdelete_failed=0 # USERNAME process method. # # set this to move, copy, hardlink, symlink as appropriate if you want to over-ride SB defaults. Leave blank to use SB default. #sbprocess_method= # USERNAME and NZBGet are a different system (0, 1). # # Enable to replace local path with the path as per the mountPoints below. #sbremote_path=0 ## Network # Network Mount Points (Needed for remote path above) # # Enter Mount points as LocalPath,RemotePath and separate each pair with '|' # e.g. mountPoints=/volume1/Public/,E:\|/volume2/share/,\\NAS\ #mountPoints= ## Extensions # Media Extensions # # This is a list of media extensions that are used to verify that the download does contain valid media. #mediaExtensions=.mkv,.avi,.divx,.xvid,.mov,.wmv,.mp4,.mpg,.mpeg,.vob,.iso ## Posix # Niceness for external tasks Extractor and Transcoder. # # Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process). #niceness=10 # ionice scheduling class (0, 1, 2, 3). # # Set the ionice scheduling class. 0 for none, 1 for real time, 2 for best-effort, 3 for idle. #ionice_class=2 # ionice scheduling class data. # # Set the ionice scheduling class data. This defines the class data, if the class accepts an argument. For real time and best-effort, 0-7 is valid data. #ionice_classdata=4 ## Transcoder # getSubs (0, 1). # # set to 1 to download subtitles. #getSubs=0 # subLanguages. # # subLanguages. create a list of languages in the order you want them in your subtitles. #subLanguages=eng,spa,fra # Transcode (0, 1). # # set to 1 to transcode, otherwise set to 0. 
#transcode=0

# create a duplicate, or replace the original (0, 1).
#
# set to 1 to create a new file or 0 to replace the original.
#duplicate=1

# ignore extensions.
#
# list of extensions that won't be transcoded.
#ignoreExtensions=.avi,.mkv

# outputFastStart (0,1).
#
# outputFastStart. 1 will use -movflags + faststart. 0 will disable this from being used.
#outputFastStart=0

# outputVideoPath.
#
# outputVideoPath. Set path you want transcoded videos moved to. Leave blank to disable.
#outputVideoPath=

# processOutput (0,1).
#
# processOutput. 1 will send the outputVideoPath to USERNAME/CouchPotato. 0 will send original files.
#processOutput=0

# audioLanguage.
#
# audioLanguage. set the 3 letter language code you want as your primary audio track.
#audioLanguage=eng

# allAudioLanguages (0,1).
#
# allAudioLanguages. 1 will keep all audio tracks (uses AudioCodec3) where available.
#allAudioLanguages=0

# allSubLanguages (0,1).
#
# allSubLanguages. 1 will keep all existing sub languages. 0 will discard those not in your list above.
#allSubLanguages=0

# embedSubs (0,1).
#
# embedSubs. 1 will embed external sub/srt subs into your video if this is supported.
#embedSubs=1

# burnInSubtitle (0,1).
#
# burnInSubtitle. burns the default sub language into your video (needed for players that don't support subs).
#burnInSubtitle=0

# extractSubs (0,1).
#
# extractSubs. 1 will extract subs from the video file and save these as external srt files.
#extractSubs=0

# externalSubDir.
#
# externalSubDir. set the directory where subs should be saved (if not the same directory as the video).
#externalSubDir=

# outputDefault (None, iPad, iPad-1080p, iPad-720p, Apple-TV2, iPod, iPhone, PS3, xbox, Roku-1080p, Roku-720p, Roku-480p, mkv, mp4-scene-release).
#
# outputDefault. Loads default configs for the selected device. The remaining options below are ignored.
# If you want to use your own profile, set None and set the remaining options below.
#outputDefault=None

# hwAccel (0,1).
#
# hwAccel. 1 will set ffmpeg to enable hardware acceleration (this requires a recent ffmpeg).
#hwAccel=0

# ffmpeg output settings.
#outputVideoExtension=.mp4
#outputVideoCodec=libx264
#VideoCodecAllow=
#outputVideoPreset=medium
#outputVideoFramerate=24
#outputVideoBitrate=800k
#outputAudioCodec=ac3
#AudioCodecAllow=
#outputAudioChannels=6
#outputAudioBitrate=640k
#outputQualityPercent=
#outputAudioTrack2Codec=libfaac
#AudioCodec2Allow=
#outputAudioTrack2Channels=2
#outputAudioTrack2Bitrate=160k
#outputAudioOtherCodec=libmp3lame
#AudioOtherCodecAllow=
#outputAudioOtherChannels=2
#outputAudioOtherBitrate=128k
#outputSubtitleCodec=

## WakeOnLan

# use WOL (0, 1).
#
# set to 1 to send WOL broadcast to the mac and test the server (e.g. xbmc) on the host and port specified.
#wolwake=0

# WOL MAC
#
# enter the mac address of the system to be woken.
#wolmac=00:01:2e:2D:64:e1

# Set the Host and Port of a server to verify system has woken.
#wolhost=IP_ADDRESS
#wolport=80

### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
""" Simple config ============= Although CherryPy uses the :mod:`Python logging module <logging>`, it does so behind the scenes so that simple logging is simple, but complicated logging is still possible. "Simple" logging means that you can log to the screen (i.e. console/stdout) or to a file, and that you can easily have separate error and access log files. Here are the simplified logging settings. You use these by adding lines to your config file or dict. You should set these at either the global level or per application (see next), but generally not both. * ``log.screen``: Set this to True to have both "error" and "access" messages printed to stdout. * ``log.access_file``: Set this to an absolute filename where you want "access" messages written. * ``log.error_file``: Set this to an absolute filename where you want "error" messages written. Many events are automatically logged; to log your own application events, call :func:`cherrypy.log`. Architecture ============ Separate scopes --------------- CherryPy provides log managers at both the global and application layers. This means you can have one set of logging rules for your entire site, and another set of rules specific to each application. The global log manager is found at :func:`cherrypy.log`, and the log manager for each application is found at :attr:`app.log<cherrypy._cptree.Application.log>`. If you're inside a request, the latter is reachable from ``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain a reference to the ``app``: either the return value of :func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used :func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``. By default, the global logs are named "cherrypy.error" and "cherrypy.access", and the application logs are named "cherrypy.error.2378745" and "cherrypy.access.2378745" (the number is the id of the Application object). This means that the application logs "bubble up" to the site logs, so if your application has no log handlers, the site-level handlers will still log the messages. Errors vs. Access ----------------- Each log manager handles both "access" messages (one per HTTP request) and "error" messages (everything else). Note that the "error" log is not just for errors! The format of access messages is highly formalized, but the error log isn't--it receives messages from a variety of sources (including full error tracebacks, if enabled). Custom Handlers =============== The simple settings above work by manipulating Python's standard :mod:`logging` module. So when you need something more complex, the full power of the standard module is yours to exploit. You can borrow or create custom handlers, formats, filters, and much more. Here's an example that skips the standard FileHandler and uses a RotatingFileHandler instead: :: #python log = app.log # Remove the default FileHandlers if present. log.error_file = "" log.access_file = "" maxBytes = getattr(log, "rot_maxBytes", 10000000) backupCount = getattr(log, "rot_backupCount", 1000) # Make a new RotatingFileHandler for the error log. fname = getattr(log, "rot_error_file", "error.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.error_log.addHandler(h) # Make a new RotatingFileHandler for the access log. 
fname = getattr(log, "rot_access_file", "access.log") h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount) h.setLevel(DEBUG) h.setFormatter(_cplogging.logfmt) log.access_log.addHandler(h) The ``rot_*`` attributes are pulled straight from the application log object. Since "log.*" config entries simply set attributes on the log object, you can add custom attributes to your heart's content. Note that these handlers are used ''instead'' of the default, simple handlers outlined above (so don't set the "log.error_file" config entry, for example). """
""" ======== Glossary ======== .. glossary:: along an axis Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). Many operation can take place along one of these axes. For example, we can sum each row of an array, in which case we operate along columns, or axis 1:: >>> x = np.arange(12).reshape((3,4)) >>> x array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.sum(axis=1) array([ 6, 22, 38]) array A homogeneous container of numerical elements. Each element in the array occupies a fixed amount of memory (hence homogeneous), and can be a numerical element of a single type (such as float, int or complex) or a combination (such as ``(float, int, float)``). Each array has an associated data-type (or ``dtype``), which describes the numerical type of its elements:: >>> x = np.array([1, 2, 3], float) >>> x array([ 1., 2., 3.]) >>> x.dtype # floating point number, 64 bits of memory per element dtype('float64') # More complicated data type: each array element is a combination of # and integer and a floating point number >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')]) Fast element-wise operations, called `ufuncs`_, operate on arrays. array_like Any sequence that can be interpreted as an ndarray. This includes nested lists, tuples, scalars and existing arrays. attribute A property of an object that can be accessed using ``obj.attribute``, e.g., ``shape`` is an attribute of an array:: >>> x = np.array([1, 2, 3]) >>> x.shape (3,) BLAS `Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_ broadcast NumPy can do operations on arrays whose shapes are mismatched:: >>> x = np.array([1, 2]) >>> y = np.array([[3], [4]]) >>> x array([1, 2]) >>> y array([[3], [4]]) >>> x + y array([[4, 5], [5, 6]]) See `doc.broadcasting`_ for more information. C order See `row-major` column-major A way to represent items in a N-dimensional array in the 1-dimensional computer memory. In column-major order, the leftmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the column-major order as:: [1, 4, 2, 5, 3, 6] Column-major order is also known as the Fortran order, as the Fortran programming language uses it. decorator An operator that transforms a function. For example, a ``log`` decorator may be defined to print debugging information upon function execution:: >>> def log(f): ... def new_logging_func(*args, **kwargs): ... print("Logging call with parameters:", args, kwargs) ... return f(*args, **kwargs) ... ... return new_logging_func Now, when we define a function, we can "decorate" it using ``log``:: >>> @log ... def add(a, b): ... return a + b Calling ``add`` then yields: >>> add(1, 2) Logging call with parameters: (1, 2) {} 3 dictionary Resembling a language dictionary, which provides a mapping between words and descriptions thereof, a Python dictionary is a mapping between two objects:: >>> x = {1: 'one', 'two': [1, 2]} Here, `x` is a dictionary mapping keys to values, in this case the integer 1 to the string "one", and the string "two" to the list ``[1, 2]``. The values may be accessed using their corresponding keys:: >>> x[1] 'one' >>> x['two'] [1, 2] Note that dictionaries are not stored in any specific order. 
Also, most mutable (see *immutable* below) objects, such as lists, may not be used as keys. For more information on dictionaries, read the `Python tutorial <http://docs.python.org/tut>`_. Fortran order See `column-major` flattened Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details. immutable An object that cannot be modified after it is created is called immutable. Two common examples are strings and tuples. instance A class definition gives the blueprint for constructing an object:: >>> class House(object): ... wall_colour = 'white' Yet, we have to *build* a house before it exists:: >>> h = House() # build a house Now, ``h`` is called a ``House`` instance. An instance is therefore a specific realisation of a class. iterable A sequence that allows "walking" (iterating) over items, typically using a loop such as:: >>> x = [1, 2, 3] >>> [item**2 for item in x] [1, 4, 9] It is often used in combination with ``enumerate``:: >>> keys = ['a','b','c'] >>> for n, k in enumerate(keys): ... print("Key %d: %s" % (n, k)) ... Key 0: a Key 1: b Key 2: c list A Python container that can hold any number of objects or items. The items do not have to be of the same type, and can even be lists themselves:: >>> x = [2, 2.0, "two", [2, 2.0]] The list `x` contains 4 items, each of which can be accessed individually:: >>> x[2] # the string 'two' 'two' >>> x[3] # a list, containing an integer 2 and a float 2.0 [2, 2.0] It is also possible to select more than one item at a time, using *slicing*:: >>> x[0:2] # or, equivalently, x[:2] [2, 2.0] In code, arrays are often conveniently expressed as nested lists:: >>> np.array([[1, 2], [3, 4]]) array([[1, 2], [3, 4]]) For more information, read the section on lists in the `Python tutorial <http://docs.python.org/tut>`_. For a mapping type (key-value), see *dictionary*. mask A boolean array, used to select only certain elements for an operation:: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> mask = (x > 2) >>> mask array([False, False, False, True, True], dtype=bool) >>> x[mask] = -1 >>> x array([ 0, 1, 2, -1, -1]) masked array An array that suppresses values indicated by a mask:: >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True]) >>> x masked_array(data = [-- 2.0 --], mask = [ True False True], fill_value = 1e+20) <BLANKLINE> >>> x + [1, 2, 3] masked_array(data = [-- 4.0 --], mask = [ True False True], fill_value = 1e+20) <BLANKLINE> Masked arrays are often used when operating on arrays containing missing or invalid entries. matrix A 2-dimensional ndarray that preserves its two-dimensional nature throughout operations. It has certain special operations, such as ``*`` (matrix multiplication) and ``**`` (matrix power), defined:: >>> x = np.mat([[1, 2], [3, 4]]) >>> x matrix([[1, 2], [3, 4]]) >>> x**2 matrix([[ 7, 10], [15, 22]]) method A function associated with an object. For example, each ndarray has a method called ``repeat``:: >>> x = np.array([1, 2, 3]) >>> x.repeat(2) array([1, 1, 2, 2, 3, 3]) ndarray See *array*. record array An `ndarray`_ with `structured data type`_ which has been subclassed as np.recarray and whose dtype is of type np.record, making the fields of its data type accessible by attribute. reference If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore, ``a`` and ``b`` are different names for the same Python object. row-major A way to represent items in an N-dimensional array in the 1-dimensional computer memory.
In row-major order, the rightmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the row-major order as:: [1, 2, 3, 4, 5, 6] Row-major order is also known as the C order, as the C programming language uses it. New NumPy arrays are by default in row-major order. self Often seen in method signatures, ``self`` refers to the instance of the associated class. For example: >>> class Paintbrush(object): ... color = 'blue' ... ... def paint(self): ... print("Painting the city %s!" % self.color) ... >>> p = Paintbrush() >>> p.color = 'red' >>> p.paint() # self refers to 'p' Painting the city red! slice Used to select only certain elements from a sequence:: >>> x = range(5) >>> x [0, 1, 2, 3, 4] >>> x[1:3] # slice from 1 to 3 (excluding 3 itself) [1, 2] >>> x[1:5:2] # slice from 1 to 5, but skipping every second element [1, 3] >>> x[::-1] # slice a sequence in reverse [4, 3, 2, 1, 0] Arrays may have more than one dimension, each of which can be sliced individually:: >>> x = np.array([[1, 2], [3, 4]]) >>> x array([[1, 2], [3, 4]]) >>> x[:, 1] array([2, 4]) structured data type A data type composed of other data types. tuple A sequence whose items may be of any type. A tuple is immutable, i.e., once constructed it cannot be changed. Similar to a list, it can be indexed and sliced:: >>> x = (1, 'one', [1, 2]) >>> x (1, 'one', [1, 2]) >>> x[0] 1 >>> x[:2] (1, 'one') A useful concept is "tuple unpacking", which allows variables to be assigned to the contents of a tuple:: >>> x, y = (1, 2) >>> x, y = 1, 2 This is often used when a function returns multiple values: >>> def return_many(): ... return 1, 'alpha', None >>> a, b, c = return_many() >>> a, b, c (1, 'alpha', None) >>> a 1 >>> b 'alpha' ufunc Universal function. A fast element-wise array operation. Examples include ``add``, ``sin`` and ``logical_or``. view An array that does not own its data, but refers to another array's data instead. For example, we may create a view that only shows every second element of another array:: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> y = x[::2] >>> y array([0, 2, 4]) >>> x[0] = 3 # changing x changes y as well, since y is a view on x >>> y array([3, 2, 4]) wrapper Python is a high-level (highly abstracted, or English-like) language. This abstraction comes at a price in execution speed, and sometimes it becomes necessary to use lower-level languages to do fast computations. A wrapper is code that provides a bridge between high-level and low-level languages, allowing, e.g., Python to execute code written in C or Fortran. Examples include ctypes, SWIG and Cython (which wrap C and C++) and f2py (which wraps Fortran). """
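To make the row-major and column-major entries above concrete, here is a short
sketch using NumPy's standard ``order`` flags::

    import numpy as np

    x = np.array([[1, 2, 3],
                  [4, 5, 6]])

    # Row-major (C) order: the rightmost index varies fastest.
    print(x.ravel(order='C'))   # [1 2 3 4 5 6]

    # Column-major (Fortran) order: the leftmost index varies fastest.
    print(x.ravel(order='F'))   # [1 4 2 5 3 6]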
#!/usr/bin/env python3 ################################################################################################### # make-AWS-CF-params-skeleton.py # # by NAME 2016-2017 # # Input: AWS CloudFormation JSON or YAML template on STDIN. # Typically this means pipe the JSON/YAML to this script, such as 'cat' a local file # or "curl -s" an Amazon sample, or "aws s3 cp" to '-', or similar. # # Output: JSON skeleton of CloudFormation Parameters file. # AWS CloudFormation (still) does not support parameters files in YAML, # so this script will only output JSON - no matter what the input is. # # Purpose: If you want to generate a CloudFormation Stack from a CF template and a separate # JSON file for Parameters, this will generate the framework of the required Parameters in # a JSON file which you can simply edit to fill in each ParameterValue. This makes it # really easy to stand up a CF stack, because the output from this script shows you # exactly what each Parameter Type must be, the allowed values, the default, etc. # # # Additional Reading: # https://aws.amazon.com/blogs/devops/passing-parameters-to-cloudformation-stacks-with-the-aws-cli-and-powershell/ # # # To Do: Add file input/output instead of just stdin/stdout # ################################################################################################### # # Example: # This uses an online sample Amazon CF JSON Template. It is retrieved via curl and processed by this script. # Note how each "ParameterValue" contains "REPLACE THIS WITH:" and the "Type", a list of any "AllowedValues", and # the "Description" from the template!! (Cool, eh?) # # % curl -s https://s3.amazonaws.com/cloudformation-templates-us-east-1/LAMP_Single_Instance.template | make-AWS-CF-params-skeleton.py # # [ # { # "ParameterKey": "DBName", # "ParameterValue": "REPLACE THIS WITH: String - MySQL database name" # }, # { # "ParameterKey": "DBPassword", # "ParameterValue": "REPLACE THIS WITH: String - Password for MySQL database access" # }, # { # "ParameterKey": "DBRootPassword", # "ParameterValue": "REPLACE THIS WITH: String - Root password for MySQL" # }, # { # "ParameterKey": "DBUser", # "ParameterValue": "REPLACE THIS WITH: String - Username for MySQL database access" # }, # { # "ParameterKey": "InstanceType", # "ParameterValue": "REPLACE THIS WITH: String - Allowed:[t1.micro, t2.nano, t2.micro, t2.small, t2.medium, t2.large, m1.small, m1.medium, m1.large, m1.xlarge, m2.xlarge, m2.2xlarge, m2.4xlarge, m3.medium, m3.large, m3.xlarge, m3.2xlarge, m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge, c1.medium, c1.xlarge, c3.large, c3.xlarge, c3.2xlarge, c3.4xlarge, c3.8xlarge, c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge, g2.2xlarge, g2.8xlarge, r3.large, r3.xlarge, r3.2xlarge, r3.4xlarge, r3.8xlarge, i2.xlarge, i2.2xlarge, i2.4xlarge, i2.8xlarge, d2.xlarge, d2.2xlarge, d2.4xlarge, d2.8xlarge, hi1.4xlarge, hs1.8xlarge, cr1.8xlarge, cc2.8xlarge, cg1.4xlarge] - WebServer EC2 instance type" # }, # { # "ParameterKey": "KeyName", # "ParameterValue": "REPLACE THIS WITH: AWS::EC2::KeyPair::KeyName - Name of an existing EC2 KeyPair to enable SSH access to the instance" # }, # { # "ParameterKey": "SSHLocation", # "ParameterValue": "REPLACE THIS WITH: String - The IP address range that can be used to SSH to the EC2 instances" # } # ] # # # If you were to save the output of the above example to a file named "example-params.json" (and put in # valid ParameterValue entries) then you could: # # % aws cloudformation create-stack --stack-name my-test-stack 
--template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/LAMP_Single_Instance.template --parameters file://example-params.json # (Note: --template-url takes an S3 URL; --template-body would take the template text itself.) # # See how easy that is? With this script you can start up a complex CloudFormation stack easily, # because each parameter to the stack is described in detail in the output of this script. # ###################################################################################################
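# The core idea can be sketched as follows (illustrative only -- this is not
# the actual implementation, and it handles JSON input only, not YAML):
#
#   import json, sys
#
#   template = json.load(sys.stdin)
#   skeleton = []
#   for key, spec in sorted(template.get("Parameters", {}).items()):
#       hint = spec.get("Type", "String")
#       if "AllowedValues" in spec:
#           hint += " - Allowed:[%s]" % ", ".join(map(str, spec["AllowedValues"]))
#       if "Description" in spec:
#           hint += " - " + spec["Description"]
#       skeleton.append({"ParameterKey": key,
#                        "ParameterValue": "REPLACE THIS WITH: " + hint})
#   print(json.dumps(skeleton, indent=4))
#
###################################################################################################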
""" Objects for dealing with Chebyshev series. This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a `Chebyshev` class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its "parent" sub-package, `numpy.polynomial`). Constants --------- - `chebdomain` -- Chebyshev series default domain, [-1,1]. - `chebzero` -- (Coefficients of the) Chebyshev series that evaluates identically to 0. - `chebone` -- (Coefficients of the) Chebyshev series that evaluates identically to 1. - `chebx` -- (Coefficients of the) Chebyshev series for the identity map, ``f(x) = x``. Arithmetic ---------- - `chebadd` -- add two Chebyshev series. - `chebsub` -- subtract one Chebyshev series from another. - `chebmul` -- multiply two Chebyshev series. - `chebdiv` -- divide one Chebyshev series by another. - `chebpow` -- raise a Chebyshev series to an positive integer power - `chebval` -- evaluate a Chebyshev series at given points. - `chebval2d` -- evaluate a 2D Chebyshev series at given points. - `chebval3d` -- evaluate a 3D Chebyshev series at given points. - `chebgrid2d` -- evaluate a 2D Chebyshev series on a Cartesian product. - `chebgrid3d` -- evaluate a 3D Chebyshev series on a Cartesian product. Calculus -------- - `chebder` -- differentiate a Chebyshev series. - `chebint` -- integrate a Chebyshev series. Misc Functions -------------- - `chebfromroots` -- create a Chebyshev series with specified roots. - `chebroots` -- find the roots of a Chebyshev series. - `chebvander` -- Vandermonde-like matrix for Chebyshev polynomials. - `chebvander2d` -- Vandermonde-like matrix for 2D power series. - `chebvander3d` -- Vandermonde-like matrix for 3D power series. - `chebgauss` -- Gauss-Chebyshev quadrature, points and weights. - `chebweight` -- Chebyshev weight function. - `chebcompanion` -- symmetrized companion matrix in Chebyshev form. - `chebfit` -- least-squares fit returning a Chebyshev series. - `chebpts1` -- Chebyshev points of the first kind. - `chebpts2` -- Chebyshev points of the second kind. - `chebtrim` -- trim leading coefficients from a Chebyshev series. - `chebline` -- Chebyshev series representing given straight line. - `cheb2poly` -- convert a Chebyshev series to a polynomial. - `poly2cheb` -- convert a polynomial to a Chebyshev series. Classes ------- - `Chebyshev` -- A Chebyshev series class. See also -------- `numpy.polynomial` Notes ----- The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]_: .. math :: T_n(x) = \\frac{z^n + z^{-n}}{2} \\\\ z\\frac{dx}{dz} = \\frac{z - z^{-1}}{2}. where .. math :: x = \\frac{z + z^{-1}}{2}. These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a "z-series." References ---------- .. [1] NAME et al., "Combinatorial Trigonometry with Chebyshev Polynomials," *Journal of Statistical Planning and Inference 14*, 2008 (preprint: http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4) """
""" ================= Structured Arrays ================= Introduction ============ Numpy provides powerful capabilities to create arrays of structured datatype. These arrays permit one to manipulate the data by named fields. A simple example will show what is meant.: :: >>> x = np.array([(1,2.,'Hello'), (2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> x array([(1, 2.0, 'Hello'), (2, 3.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) Here we have created a one-dimensional array of length 2. Each element of this array is a structure that contains three items, a 32-bit integer, a 32-bit float, and a string of length 10 or less. If we index this array at the second position we get the second structure: :: >>> x[1] (2,3.,"World") Conveniently, one can access any field of the array by indexing using the string that names that field. :: >>> y = x['foo'] >>> y array([ 2., 3.], dtype=float32) >>> y[:] = 2*y >>> y array([ 4., 6.], dtype=float32) >>> x array([(1, 4.0, 'Hello'), (2, 6.0, 'World')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) In these examples, y is a simple float array consisting of the 2nd field in the structured type. But, rather than being a copy of the data in the structured array, it is a view, i.e., it shares exactly the same memory locations. Thus, when we updated this array by doubling its values, the structured array shows the corresponding values as doubled as well. Likewise, if one changes the structured array, the field view also changes: :: >>> x[1] = (-1,-1.,"Master") >>> x array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')], dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')]) >>> y array([ 4., -1.], dtype=float32) Defining Structured Arrays ========================== One defines a structured array through the dtype object. There are **several** alternative ways to define the fields of a record. Some of these variants provide backward compatibility with Numeric, numarray, or another module, and should not be used except for such purposes. These will be so noted. One specifies record structure in one of four alternative ways, using an argument (as supplied to a dtype function keyword or a dtype object constructor itself). This argument must be one of the following: 1) string, 2) tuple, 3) list, or 4) dictionary. Each of these is briefly described below. 1) String argument. In this case, the constructor expects a comma-separated list of type specifiers, optionally with extra shape information. The fields are given the default names 'f0', 'f1', 'f2' and so on. The type specifiers can take 4 different forms: :: a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n> (representing bytes, ints, unsigned ints, floats, complex and fixed length strings of specified byte lengths) b) int8,...,uint8,...,float16, float32, float64, complex64, complex128 (this time with bit sizes) c) older Numeric/numarray type specifications (e.g. Float32). Don't use these in new code! d) Single character type specifiers (e.g H for unsigned short ints). Avoid using these unless you must. Details can be found in the Numpy book These different styles can be mixed within the same string (but why would you want to do that?). Furthermore, each type specifier can be prefixed with a repetition number, or a shape. In these cases an array element is created, i.e., an array within a record. That array is still referred to as a single field. 
An example: :: >>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64') >>> x array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), ([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])], dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))]) Using strings to define the record structure precludes naming the fields in the original definition. The names can be changed as shown later, however. 2) Tuple argument: The only relevant tuple case that applies to record structures is when a structure is mapped to an existing data type. This is done by pairing, in a tuple, the existing data type with a matching dtype definition (using any of the variants being described here). As an example (using a list definition; see 3) for further details): :: >>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')])) >>> x array([0, 0, 0]) >>> x['r'] array([0, 0, 0], dtype=uint8) In this case, an array is produced that looks and acts like a simple int32 array, but also has definitions for fields that use only one byte of the int32 (a bit like Fortran equivalencing). 3) List argument: In this case the record structure is defined with a list of tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field ('' is permitted), 2) the type of the field, and 3) the shape (optional). For example:: >>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) >>> x array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]), (0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])], dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))]) 4) Dictionary argument: two different forms are permitted. The first consists of a dictionary with two required keys ('names' and 'formats'), each having an equal-sized list of values. The format list contains any type/shape specifier allowed in other contexts. The names must be strings. There are two optional keys: 'offsets' and 'titles'. Each must be a list matching the required two in length, where 'offsets' contains integer offsets for each field, and 'titles' are objects containing metadata for each field (these do not have to be strings), with the value None permitted. As an example: :: >>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[('col1', '>i4'), ('col2', '>f4')]) The other dictionary form permitted is a dictionary of name keys with tuple values specifying type, offset, and an optional title. :: >>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')}) >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')]) Accessing and modifying field names =================================== The field names are an attribute of the dtype object defining the structure. For the last example: :: >>> x.dtype.names ('col1', 'col2') >>> x.dtype.names = ('x', 'y') >>> x array([(0, 0.0), (0, 0.0), (0, 0.0)], dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')]) >>> x.dtype.names = ('x', 'y', 'z') # wrong number of names <type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2 Accessing field titles ==================================== The field titles provide a standard place to put associated info for fields. They do not have to be strings.
:: >>> x.dtype.fields['x'][2] 'title 1' Accessing multiple fields at once ==================================== You can access multiple fields at once using a list of field names: :: >>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))], dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))]) Notice that `x` is created with a list of tuples. :: >>> x[['x','y']] array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)], dtype=[('x', '<f4'), ('y', '<f4')]) >>> x[['x','value']] array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]), (1.0, [[2.0, 6.0], [2.0, 6.0]])], dtype=[('x', '<f4'), ('value', '<f4', (2, 2))]) The fields are returned in the order they are asked for. :: >>> x[['y','x']] array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)], dtype=[('y', '<f4'), ('x', '<f4')]) Filling structured arrays ========================= Structured arrays can be filled by field or row by row. :: >>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')]) >>> arr['var1'] = np.arange(5) If you fill it in row by row, it takes a tuple (but not a list or array!):: >>> arr[0] = (10,20) >>> arr array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)], dtype=[('var1', '<f8'), ('var2', '<f8')]) Record Arrays ============= For convenience, numpy provides "record arrays" which allow one to access fields of structured arrays by attribute rather than by index. Record arrays are structured arrays wrapped using a subclass of ndarray, :class:`numpy.recarray`, which allows field access by attribute on the array object, and record arrays also use a special datatype, :class:`numpy.record`, which allows field access by attribute on the individual elements of the array. The simplest way to create a record array is with :func:`numpy.rec.array`: :: >>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> recordarr.bar array([ 2., 3.], dtype=float32) >>> recordarr[1:2] rec.array([(2, 3.0, 'World')], dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]) >>> recordarr[1:2].foo array([2], dtype=int32) >>> recordarr.foo[1:2] array([2], dtype=int32) >>> recordarr[1].baz 'World' numpy.rec.array can convert a wide variety of arguments into record arrays, including normal structured arrays: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')]) >>> recordarr = np.rec.array(arr) The numpy.rec module provides a number of other convenience functions for creating record arrays; see :ref:`record array creation routines <routines.array-creation.rec>`. A record array representation of a structured array can be obtained using the appropriate :ref:`view`: :: >>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')]) >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)), ... type=np.recarray) For convenience, viewing an ndarray as type `np.recarray` will automatically convert to `np.record` datatype, so the dtype can be left out of the view: :: >>> recordarr = arr.view(np.recarray) >>> recordarr.dtype dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])) To get back to a plain ndarray both the dtype and type must be reset.
The following view does so, taking into account the unusual case that the recordarr was not a structured type: :: >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise. :: >>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))], ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) >>> type(recordarr.foo) <type 'numpy.ndarray'> >>> type(recordarr.bar) <class 'numpy.core.records.recarray'> Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but may still be accessed by index. """
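Putting the view examples above together, a minimal round trip between a
structured array and a record array looks like this (plain NumPy, as described
above)::

    import numpy as np

    arr = np.array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
                   dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])

    rec = arr.view(np.recarray)      # fields become attributes
    print(rec.bar)                   # [ 2.  3.]

    # Back to a plain ndarray: reset both the dtype and the type.
    plain = rec.view(rec.dtype.fields or rec.dtype, np.ndarray)
    print(type(plain))               # <class 'numpy.ndarray'>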
""" 4.1.3 Time Step Selection (any subprogram) These commands control the time step selection as explained in Section [Ref:intro:timemode]. The following are the time step selection parameters: * tmin is the minimum selected time, * tmax is the maximum selected time, * nintv is the number of selected time intervals, and * delt is the selected time interval. In the interval-times mode, up to nintv time steps at interval delt between tmin and tmax are selected. The mode may have a delta offset or a zero offset. With a delta offset, the first selected time is tmin+delt; with a zero offset, it is tmin. In the interval-times mode with a delta offset, the number of selected time intervals nintv and the selected time interval delt are related mathematically by the equations: delt = (tmax-tmin) / nintv (1) nintv = int ((tmin-tmax) / delt) (2) With a zero offset, nintv and delt are related mathematically by the equations: delt = (tmax-tmin) / (nintv-1) (1) nintv = int ((tmin-tmax) / delt) + 1 (2) The user specifies either nintv or delt. If nintv is specified, delt is calculated using equation 1. If delt is specified, nintv is calculated using equation 2. In the all-available-times mode, all database time steps between tmin and tmax are selected (parameters nintv and delt are ignored). In the user-selected-times mode, the specified times are selected (all parameters are ignored). The initial mode is the interval-times mode with a delta offset. Parameters tmin, tmax, and nintv are set to their default values and delt is calculated. TMIN tmin <minimum database time> TMIN sets the minimum selected time tmin to the specified parameter value. If the user-selected-times mode is in effect, the mode is changed to the all-available- times mode. In interval-times mode, if nintv is selected (by a NINTV or ZINTV command), delt is calculated. If delt is selected (by a DELTIME command), nintv is calculated. TMAX tmax <maximum database time> TMAX sets the maximum selected time tmax to the specified parameter value. If the user-selected-times mode is in effect, the mode is changed to the all-available- times mode. In interval-times mode, if nintv is selected (by a NINTV or ZINTV command), delt is calculated. If delt is selected (by a DELTIME command), nintv is calculated. NINTV nintv <10 or the number of database time steps - 1,> whichever is smaller NINTV sets the number of selected time intervals nintv to the specified parameter value and changes the mode to the interval-times mode with a delta offset. The selected time interval delt is calculated. 22 ZINTV nintv <10 or the number of database time steps,> whichever is smaller ZINTV sets the number of selected time intervals nintv to the specified parameter value and changes the mode to the interval-times mode with a zero offset. The selected time interval delt is calculated. DELTIME delt <(tmax-tmin) / (nintv-1), where nintv is 10> or the number of database time steps, whichever is smaller DELTIME sets the selected time interval delt to the specified parameter value and changes the mode to the interval-times mode with a zero offset. The number of selected time intervals nintv is calculated. ALLTIMES ALLTIMES changes the mode to the all-available-times mode. TIMES [ADD,] t1, t2, ... <no times selected> TIMES changes the mode to the user-selected-times mode and selects times t1, t2, etc. The closest time step from the database is selected for each specified time. Normally, a TIMES command selects only the listed time steps. 
If ADD is the first parameter, the listed steps are added to the current selected times. Any other time step selection command clears all TIMES selected times. Up to the maximum number of time steps in the database may be specified. Times are selected in the order encountered on the database, regardless of the order the times are specified in the command. Duplicate references to a time step are ignored. STEPS [ADD,] n1, n2, ... <no steps selected> The STEPS command is equivalent to the TIMES command except that it selects time steps by the step number, not by the step time. HISTORY *skipped* """
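As a sketch of the arithmetic behind the two interval modes (a hypothetical
helper, not part of the documented program)::

    def interval_times(tmin, tmax, nintv, zero_offset=False):
        # Zero offset: first selected time is tmin; equation (1), zero offset.
        if zero_offset:
            delt = (tmax - tmin) / (nintv - 1)
            return [tmin + i * delt for i in range(nintv)]
        # Delta offset: first selected time is tmin + delt; equation (1).
        delt = (tmax - tmin) / nintv
        return [tmin + (i + 1) * delt for i in range(nintv)]

    print(interval_times(0.0, 10.0, 5))                    # [2.0, 4.0, 6.0, 8.0, 10.0]
    print(interval_times(0.0, 10.0, 5, zero_offset=True))  # [0.0, 2.5, 5.0, 7.5, 10.0]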
""" .. versionadded:: 1.0.0 The Learning Entropy (LE) is non-Shannon entropy based on conformity of individual data samples to the contemporary learned governing law of a leraning system :cite:`bukovsky2013learning`. More information about application can be found also in other studies :cite:`bukovsky2016study` :cite:`bukovsky2015case` :cite:`bukovsky2014learning`. Content of this page: .. contents:: :local: :depth: 1 Algorithm Explanation ========================== Two options how to estimate the LE are implemented - direct approach and multiscale approach. .. rubric:: Direct approach With direct approach the LE is evaluated for every sample as follows :math:`\\textrm{LE}_d(k) = \\frac{ (\\Delta \\textbf{w}(k) - \\overline{| \\Delta \\textbf{w}_M(k) |}) } { (\\sigma({| \\Delta \\textbf{w}_M(k) |})+\\epsilon) }` where * :math:`|\\Delta \\textbf{w}(k)|` are the absolute values of current weights increment. * :math:`\overline{| \\Delta \\textbf{w}_M(k) |}` are averages of absolute values of window used for LE evaluation. * :math:`\\sigma (| \\Delta \\textbf{w}_M(k) |)` are standard deviatons of absolute values of window used for LE evaluation. * :math:`\\epsilon` is regularization term to preserve stability for small values of standard deviation. .. rubric:: Multiscale approach Value for every sample is defined as follows :math:`\\textrm{LE}(k) = \\frac{1}{n \cdot n_\\alpha} \sum f(\Delta w_{i}(k), \\alpha ),` where :math:`\Delta w_i(k)` stands for one weight from vector :math:`\Delta \\textbf{w}(k)`, the :math:`n` is number of weights, the :math:`n_\\alpha` is number of used detection sensitivities :math:`\\alpha=[\\alpha_{1}, \\alpha_{2}, \ldots, \\alpha_{n_{\\alpha}}].` The function :math:`f(\Delta w_{i}(k), \\alpha)` is defined as follows :math:`f(\Delta w_{i}(k),\\alpha)= \{{\\rm if}\,\left(\left\\vert \Delta w_{i}(k)\\right\\vert > \\alpha\cdot \overline{\left\\vert \Delta w_{Mi}(k)\\right\\vert }\\right)\, \\rm{then} \, 1, \\rm{else }\,0 \}.` Usage Instructions and Optimal Performance ============================================== The LE algorithm can be used as follows .. code-block:: python le = pa.detection.learning_entropy(w, m=30, order=1) in case of direct approach. For multiscale approach an example follows .. code-block:: python le = pa.detection.learning_entropy(w, m=30, order=1, alpha=[8., 9., 10., 11., 12., 13.]) where `w` is matrix of the adaptive parameters (changing in time, every row should represent one time index), `m` is window size, `order` is LE order and `alpha` is vector of sensitivities. .. rubric:: Used adaptive models In general it is possible to use any adaptive model. The input of the LE algorithm is matrix of an adaptive parameters history, where every row represents the parameters used in a particular time and every column represents one parameter in whole adaptation history. .. rubric:: Selection of sensitivities The optimal number of detection sensitivities and their values depends on task and data. The sensitivities should be chosen in range where the function :math:`LE(k)` returns a value lower than 1 for at least one sample in data, and for at maximally one sample returns value of 0. Minimal Working Example ============================ In this example is demonstrated how can the multiscale approach LE highligh the position of a perturbation inserted in a data. As the adaptive model is used :ref:`filter-nlms` adaptive filter. The perturbation is manually inserted in sample with index :math:`k=1000` (the length of data is 2000). .. 
code-block:: python

    import numpy as np
    import matplotlib.pylab as plt
    import padasip as pa

    # data creation
    n = 5
    N = 2000
    x = np.random.normal(0, 1, (N, n))
    d = np.sum(x, axis=1) + np.random.normal(0, 0.1, N)

    # perturbation insertion
    d[1000] += 2.

    # creation of learning model (adaptive filter)
    f = pa.filters.FilterNLMS(n, mu=1., w=np.ones(n))
    y, e, w = f.run(d, x)

    # estimation of LE with weights from learning model
    le = pa.detection.learning_entropy(w, m=30, order=2, alpha=[8., 9., 10., 11., 12., 13.])

    # LE plot
    plt.plot(le)
    plt.show()

References ============ .. bibliography:: le.bib :style: plain Code Explanation ==================== """
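For intuition, the direct-approach formula above can be sketched in plain NumPy
as follows (a simplified first-order illustration, not the padasip
implementation)::

    import numpy as np

    def learning_entropy_direct(w, m=30, eps=1e-10):
        # Simplified sketch of the direct LE formula; not pa.detection code.
        dw = np.abs(np.diff(w, axis=0))    # |delta w(k)| for every weight
        le = np.zeros(len(dw))
        for k in range(m, len(dw)):
            window = dw[k - m:k]           # last m weight increments
            z = (dw[k] - window.mean(axis=0)) / (window.std(axis=0) + eps)
            le[k] = z.mean()               # aggregate over the n weights
        return le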
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
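A minimal usage example tying the methods above together (standard library
API)::

    import configparser

    cfg = configparser.ConfigParser()
    # Parse configuration text directly, as read_string() allows.
    cfg.read_string("[server]\nhost = localhost\nport = 8080\ndebug = yes\n")

    print(cfg.get('server', 'host'))          # 'localhost'
    print(cfg.getint('server', 'port'))       # 8080
    print(cfg.getboolean('server', 'debug'))  # True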
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): SyntaxError: name 'x' is parameter and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): >>> obj.None = 1 Traceback (most recent call last): SyntaxError: invalid syntax >>> None = 1 Traceback (most recent call last): SyntaxError: assignment to keyword It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): SyntaxError: can't assign to () >>> f() = 1 Traceback (most recent call last): SyntaxError: can't assign to function call >>> del f() Traceback (most recent call last): SyntaxError: can't delete function call >>> a + 1 = 2 Traceback (most recent call last): SyntaxError: can't assign to operator >>> (x for x in x) = 1 Traceback (most recent call last): SyntaxError: can't assign to generator expression >>> 1 = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> "abc" = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> b"" = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> `1` = 1 Traceback (most recent call last): SyntaxError: invalid syntax If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): SyntaxError: can't assign to literal >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): SyntaxError: can't assign to operator >>> a if 1 else b = 1 Traceback (most recent call last): SyntaxError: can't assign to conditional expression From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_arguments(): >>> def f(x, y=1, z): ... pass Traceback (most recent call last): SyntaxError: non-default argument follows default argument >>> def f(x, None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax >>> def f(*None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax >>> def f(**None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_funcdef(): >>> def None(x): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_call(): >>> def f(it, *varargs): ... 
return list(it) >>> L = range(10) >>> f(x for x in L) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(x for x in L, 1) Traceback (most recent call last): SyntaxError: Generator expression must be parenthesized if not sole argument >>> f((x for x in L), 1) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... i244, i245, i246, i247, i248, i249, i250, i251, i252, ... i253, i254, i255) Traceback (most recent call last): SyntaxError: more than 255 arguments The actual error cases count positional arguments, keyword arguments, and generator expression arguments separately. This test combines the three. >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ...
(x for x in i244), i245, i246, i247, i248, i249, i250, i251, ... i252=1, i253=1, i254=1, i255=1) Traceback (most recent call last): SyntaxError: more than 255 arguments >>> f(lambda x: x[0] = 3) Traceback (most recent call last): SyntaxError: lambda cannot contain assignment The grammar accepts any test (basically, any expression) in the keyword slot of a call site. Test a few different options. >>> f(x()=2) Traceback (most recent call last): SyntaxError: keyword can't be an expression >>> f(a or b=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression >>> f(x.y=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression More set_context(): >>> (x for x in x) += 1 Traceback (most recent call last): SyntaxError: can't assign to generator expression >>> None += 1 Traceback (most recent call last): SyntaxError: assignment to keyword >>> f() += 1 Traceback (most recent call last): SyntaxError: can't assign to function call Test continue in finally in weird combinations. continue in for loop under finally should be ok. >>> def test(): ... try: ... pass ... finally: ... for abc in range(10): ... continue ... print(abc) >>> test() 9 Start simple, a continue in a finally should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause This is essentially a continue in a finally which should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... try: ... continue ... except: ... pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... try: ... continue ... finally: ... pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: pass ... finally: ... try: ... pass ... except: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause There is one test for a break that is not in a loop. The compiler uses a single data structure to keep track of try-finally and loops, so we need to be sure that a break is actually inside a loop. If it isn't, there should be a syntax error. >>> try: ... print(1) ... break ... print(2) ... finally: ... print(3) Traceback (most recent call last): ... SyntaxError: 'break' outside loop This should probably raise a better error than a SystemError (or none at all). In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 >>> while 1: ... while 2: ... while 3: ... while 4: ... while 5: ... while 6: ... while 8: ... while 9: ... while 10: ... while 11: ... while 12: ... while 13: ... while 14: ... while 15: ... while 16: ... while 17: ... while 18: ... while 19: ... while 20: ... while 21: ... while 22: ... break Traceback (most recent call last): ... SystemError: too many statically nested blocks Misuse of the nonlocal statement can lead to a few unique syntax errors. 
>>> def f(x): ... nonlocal x Traceback (most recent call last): ... SyntaxError: name 'x' is parameter and nonlocal >>> def f(): ... global x ... nonlocal x Traceback (most recent call last): ... SyntaxError: name 'x' is nonlocal and global >>> def f(): ... nonlocal x Traceback (most recent call last): ... SyntaxError: no binding for nonlocal 'x' found From SF bug #1705365 >>> nonlocal x Traceback (most recent call last): ... SyntaxError: nonlocal declaration not allowed at module level TODO(jhylton): Figure out how to test SyntaxWarning with doctest. ## >>> def f(x): ## ... def f(): ## ... print(x) ## ... nonlocal x ## Traceback (most recent call last): ## ... ## SyntaxWarning: name 'x' is assigned to before nonlocal declaration ## >>> def f(): ## ... x = 1 ## ... nonlocal x ## Traceback (most recent call last): ## ... ## SyntaxWarning: name 'x' is assigned to before nonlocal declaration This tests assignment-context; there was a bug in Python 2.5 where compiling a complex 'if' (one with 'elif') would fail to notice an invalid suite, leading to spurious errors. >>> if 1: ... x() = 1 ... elif 1: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... x() = 1 ... elif 1: ... pass ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... pass ... else: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call Make sure that the old "raise X, Y[, Z]" form is gone: >>> raise X, Y Traceback (most recent call last): ... SyntaxError: invalid syntax >>> raise X, Y, Z Traceback (most recent call last): ... SyntaxError: invalid syntax >>> f(a=23, a=234) Traceback (most recent call last): ... SyntaxError: keyword argument repeated >>> del () Traceback (most recent call last): SyntaxError: can't delete () >>> {1, 2, 3} = 42 Traceback (most recent call last): SyntaxError: can't assign to literal Corner-cases that used to fail to raise the correct error: >>> def f(*, x=lambda __debug__:0): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(*args:(lambda __debug__:0)): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(**kwargs:(lambda __debug__:0)): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> with (lambda *:0): pass Traceback (most recent call last): SyntaxError: named arguments must follow bare * Corner-cases that used to crash: >>> def f(**__debug__): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(*xx, __debug__): pass Traceback (most recent call last): SyntaxError: assignment to keyword """
# -*- coding: utf-8 -*- # routers are dictionaries of URL routing parameters. # # For each request, the effective router is: # the built-in default base router (shown below), # updated by the BASE router in routes.py routers, # updated by the app-specific router in routes.py routers (if any), # updated by the app-specific router from applications/app/routes.py routers (if any) # # # Router members: # # default_application: default application name # applications: list of all recognized applications, or 'ALL' to use all currently installed applications # Names in applications are always treated as application names when they appear first in an incoming URL. # Set applications=None to disable the removal of application names from outgoing URLs. # domains: optional dict mapping domain names to application names # The domain name can include a port number: domain.com:8080 # The application name can include a controller: appx/ctlrx # or a controller and a function: appx/ctlrx/fcnx # Example: # domains = { "domain.com" : "app", # "x.domain.com" : "appx", # }, # path_prefix: a path fragment that is prefixed to all outgoing URLs and stripped from all incoming URLs # # Note: default_application, applications, domains & path_prefix are permitted only in the BASE router, # and domain makes sense only in an application-specific router. # The remaining members can appear in the BASE router (as defaults for all applications) # or in application-specific routers. # # default_controller: name of default controller # default_function: name of default function (in all controllers) or dictionary of default functions # by controller # controllers: list of valid controllers in selected app # or "DEFAULT" to use all controllers in the selected app plus 'static' # or None to disable controller-name removal. # Names in controllers are always treated as controller names when they appear in an incoming URL after # the (optional) application and language names. # functions: list of valid functions in the default controller (default None) or dictionary of valid # functions by controller. # If present, the default function name will be omitted when the controller is the default controller # and the first arg does not create an ambiguity. # languages: list of all supported languages # Names in languages are always treated as language names when they appear in an incoming URL after # the (optional) application name. # default_language # The language code (for example: en, it-it) optionally appears in the URL following # the application (which may be omitted). For incoming URLs, the code is copied to # request.uri_language; for outgoing URLs it is taken from request.uri_language. # If languages=None, language support is disabled. # The default_language, if any, is omitted from the URL. # To use the incoming language in your application, add this line to one of your models files: # if request.uri_language: T.force(request.uri_language) # root_static: list of static files accessed from root (by default, favicon.ico & robots.txt) # (mapped to the default application's static/ directory) # Each default (including domain-mapped) application has its own root-static files. # domain: the domain that maps to this application (alternative to using domains in the BASE router) # exclusive_domain: If True (default is False), an exception is raised if an attempt is made to generate # an outgoing URL with a different application without providing an explicit host.
# map_hyphen: If True (default is False), hyphens in incoming /a/c/f fields are converted # to underscores, and back to hyphens in outgoing URLs. # Language, args and the query string are not affected. # map_static: By default (None), the default application is not stripped from static URLs. # Set map_static=True to override this policy. # Set map_static=False to map lang/static/file to static/lang/file # acfe_match: regex for valid application, controller, function, extension /a/c/f.e # file_match: regex for valid subpath (used for static file paths) # if file_match does not contain '/', it is used to validate each element of a static file subpath, # rather than the entire subpath. # args_match: regex for valid args # This validation provides a measure of security. # If it is changed, the application must perform its own validation. # # # The built-in default router supplies default values (undefined members are None): # # default_router = dict( # default_application = 'init', # applications = 'ALL', # default_controller = 'default', # controllers = 'DEFAULT', # default_function = 'index', # functions = None, # default_language = None, # languages = None, # root_static = ['favicon.ico', 'robots.txt'], # map_static = None, # domains = None, # map_hyphen = False, # acfe_match = r'\w+$', # legal app/ctlr/fcn/ext # file_match = r'([-+=@$%\w]|(?<=[-+=@$%\w])[./])*$', # legal static subpath # args_match = r'([\w@ -]|(?<=[\w@ -])[.=])*$', # legal arg in args # ) # # See rewrite.map_url_in() and rewrite.map_url_out() for implementation details. # This simple router set overrides only the default application name, # but provides full rewrite functionality.
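# A router set consistent with the description above might look like this
# (the application name 'welcome' is illustrative):
routers = dict(
    # base router: applies to all applications
    BASE=dict(
        default_application='welcome',  # illustrative app name
    ),
)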
""" RecycleView =========== .. versionadded:: 1.9.2 The RecycleView provides a flexible model for viewing selected sections of large data sets. It aims to prevent the performance degradation that can occur when generating large numbers of widgets in order to display many data items. .. warning:: This module is highly experimental, its API may change in the future and the documentation is not complete at this time. The view is generatad by processing the :attr:`~RecycleView.data`, essentially a list of dicts, and uses these dicts to generate instances of the :attr:`~RecycleView.viewclass` as required. Its design is based on the MVC (`Model-view-controller <https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller>`_) pattern. * Model: The model is formed by :attr:`~RecycleView.data` you pass in via a list of dicts. * View: The View is split across layout and views and implemented by... * Controller: The controller is implemented by :class:`RecycleViewBehavior`. These are abstract classes and cannot be used directly. The default concrete implementation is the :class:`~kivy.uix.recycleview.datamodel.RecycleDataModel` for the model, the :class:`~kivy.uix.recyclelayout.RecycleLayout` and ... for view, and the :class:`RecycleView` for the controller. When a RecycleView is instantiated, it automatically creates the views and data classes. However, one must manually create the layout classes and add them to the RecycleView. A layout manager is automatically created as a :attr:`~RecycleViewBehavior.layout_manager` when added as the child of the RecycleView. Similarly when removed. A requirement is that the layout manager must be contained as a child somewhere within the RecycleView's widget tree so the view port can be found. A minimal example might look something like this:: from kivy.app import App from kivy.lang import Builder from kivy.uix.recycleview import RecycleView Builder.load_string(''' <RV>: viewclass: 'Label' RecycleBoxLayout: default_size: None, dp(56) default_size_hint: 1, None size_hint_y: None height: self.minimum_height orientation: 'vertical' ''') class RV(RecycleView): def __init__(self, **kwargs): super(RV, self).__init__(**kwargs) self.data = [{'text': str(x)} for x in range(100)] class TestApp(App): def build(self): return RV() if __name__ == '__main__': TestApp().run() In order to support selection in the view, you can add the required behaviours as follows:: from kivy.app import App from kivy.lang import Builder from kivy.uix.recycleview import RecycleView from kivy.uix.recycleview.views import RecycleDataViewBehavior from kivy.uix.label import Label from kivy.properties import BooleanProperty from kivy.uix.recycleboxlayout import RecycleBoxLayout from kivy.uix.behaviors import FocusBehavior from kivy.uix.recycleview.layout import LayoutSelectionBehavior Builder.load_string(''' <SelectableLabel>: # Draw a background to indicate selection canvas.before: Color: rgba: (.0, 0.9, .1, .3) if self.selected else (0, 0, 0, 1) Rectangle: pos: self.pos size: self.size <RV>: viewclass: 'SelectableLabel' SelectableRecycleBoxLayout: default_size: None, dp(56) default_size_hint: 1, None size_hint_y: None height: self.minimum_height orientation: 'vertical' multiselect: True touch_multiselect: True ''') class SelectableRecycleBoxLayout(FocusBehavior, LayoutSelectionBehavior, RecycleBoxLayout): ''' Adds selection and focus behaviour to the view. 
        '''


    class SelectableLabel(RecycleDataViewBehavior, Label):
        ''' Add selection support to the Label '''
        index = None
        selected = BooleanProperty(False)
        selectable = BooleanProperty(True)

        def refresh_view_attrs(self, rv, index, data):
            ''' Catch and handle the view changes '''
            self.index = index
            return super(SelectableLabel, self).refresh_view_attrs(
                rv, index, data)

        def on_touch_down(self, touch):
            ''' Add selection on touch down '''
            if super(SelectableLabel, self).on_touch_down(touch):
                return True
            if self.collide_point(*touch.pos) and self.selectable:
                return self.parent.select_with_touch(self.index, touch)

        def apply_selection(self, rv, index, is_selected):
            ''' Respond to the selection of items in the view. '''
            self.selected = is_selected
            if is_selected:
                print("selection changed to {0}".format(rv.data[index]))
            else:
                print("selection removed for {0}".format(rv.data[index]))


    class RV(RecycleView):
        def __init__(self, **kwargs):
            super(RV, self).__init__(**kwargs)
            self.data = [{'text': str(x)} for x in range(100)]


    class TestApp(App):
        def build(self):
            return RV()

    if __name__ == '__main__':
        TestApp().run()

Please see the `examples/widgets/recycleview/basic_data.py` file for a more
complete example.

TODO:
    - Method to clear cached class instances.
    - Test when views cannot be found (e.g. viewclass is None).
    - Fix selection goto.

.. warning::
    When views are re-used they may not trigger if the data remains the same.
"""
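# Usage note on the warning above: if items in ``data`` are mutated in
# place, re-used views may not notice the change. A minimal sketch
# (``RV`` is the class from the examples above, and refresh_from_data()
# is the explicit refresh hook):
#
#     rv = RV()
#     rv.data[0]['text'] = 'updated'  # in-place change: views may not notice
#     rv.refresh_from_data()          # explicitly trigger a view refresh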
""" Binary serialization NPY format ========== A simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able to create a solution in their preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.

The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.

The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total of
``len(magic string) + 2 + len(length) + HEADER_LEN`` be evenly divisible
by 64 for alignment purposes.

The dictionary contains three keys:

    "descr" : dtype.descr
      An object that can be passed as an argument to the `numpy.dtype`
      constructor to create the array's dtype.

    "fortran_order" : bool
      Whether the array data is Fortran-contiguous or not. Since
      Fortran-contiguous arrays are a common form of non-C-contiguity,
      we allow them to be written directly to disk for efficiency.

    "shape" : tuple of int
      The shape of the array.

For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.

Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.

Format Version 2.0
------------------

The version 1.0 format only allowed the array header to have a total size of
65535 bytes. This can be exceeded by structured arrays with a large number of
columns. The version 2.0 format extends the header size to 4 GiB.
`numpy.save` will automatically save in 2.0 format if the data requires it,
else it will always use the more compatible 1.0 format.

The description of the fourth element of the header therefore has become:
"The next 4 bytes form a little-endian unsigned int: the length of the
header data HEADER_LEN."

Notes
-----
The ``.npy`` format, including motivation for creating it and a comparison of
alternatives, is described in the
`"npy-format" NEP <https://www.numpy.org/neps/nep-0001-npy-format.html>`_,
however details have evolved with time and this document is more current.

"""
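# As a concrete illustration of the byte layout described above, here is a
# minimal sketch that reads a version 1.0 header by hand. The helper name
# parse_npy_header is hypothetical and is not part of NumPy's public API;
# real readers should use numpy.lib.format instead.

import ast
import struct


def parse_npy_header(f):
    """Read the magic string and a version 1.0 header from an open
    binary file, returning (shape, fortran_order, descr)."""
    magic = f.read(6)
    if magic != b'\x93NUMPY':
        raise ValueError('not a .npy file')
    major, minor = f.read(1)[0], f.read(1)[0]
    if (major, minor) != (1, 0):
        raise ValueError('this sketch only handles format version 1.0')
    # Version 1.0: 2-byte little-endian unsigned short header length.
    (header_len,) = struct.unpack('<H', f.read(2))
    # The header is an ASCII Python dict literal, space-padded and
    # newline-terminated, so it can be evaluated as a literal.
    header = ast.literal_eval(f.read(header_len).decode('ascii'))
    return header['shape'], header['fortran_order'], header['descr']

# Usage (assuming 'example.npy' was written by numpy.save):
#     with open('example.npy', 'rb') as f:
#         shape, fortran_order, descr = parse_npy_header(f)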
#!/usr/bin/env python

# Try to determine how much RAM is currently being used per program.
# Note per _program_, not per process. So for example this script
# will report RAM used by all httpd processes together. In detail it reports:
# sum(private RAM for program processes) + sum(shared RAM for program processes)
# The shared RAM is problematic to calculate, and this script automatically
# selects the most accurate method available for your kernel.

# Licence: LGPLv2
# Author:  EMAIL
# Source:  http://www.pixelbeat.org/scripts/ps_mem.py

# V1.0      06 Jul 2005     Initial release
# V1.1      11 Aug 2006     root permission required for accuracy
# V1.2      08 Nov 2006     Add total to output
#                           Use KiB,MiB,... for units rather than K,M,...
# V1.3      22 Nov 2006     Ignore shared col from /proc/$pid/statm for
#                           2.6 kernels up to and including 2.6.9.
#                           There it represented the total file backed extent
# V1.4      23 Nov 2006     Remove total from output as it's meaningless
#                           (the shared values overlap with other programs).
#                           Display the shared column. This extra info is
#                           useful, especially as it overlaps between programs.
# V1.5      26 Mar 2007     Remove redundant recursion from human()
# V1.6      05 Jun 2007     Also report number of processes with a given name.
#                           Patch from EMAIL
# V1.7      20 Sep 2007     Use PSS from /proc/$pid/smaps if available, which
#                           fixes some over-estimation and allows totalling.
#                           Enumerate the PIDs directly rather than using ps,
#                           which fixes the possible race between reading
#                           RSS with ps, and shared memory with this program.
#                           Also we can show non truncated command names.
# V1.8      28 Sep 2007     More accurate matching for stats in /proc/$pid/smaps
#                           as otherwise could match libraries causing a crash.
#                           Patch from EMAIL
# V1.9      20 Feb 2008     Fix invalid values reported when PSS is available.
#                           Reported by NAME <EMAIL>
# V3.11     17 Sep 2017
#   http://github.com/pixelb/scripts/commits/master/scripts/ps_mem.py

# Notes:
#
# All interpreted programs where the interpreter is started
# by the shell or with env will be merged to the interpreter
# (as that's what's given to exec). For example, all python programs
# starting with "#!/usr/bin/env python" will be grouped under python.
# You can change this by using the full command line, but that will
# have the undesirable effect of splitting up programs started with
# differing parameters (e.g. mingetty tty[1-6]).
#
# For 2.6 kernels up to and including 2.6.13, and later 2.4 redhat kernels
# (rmap vm without smaps), it cannot be accurately determined how many pages
# are shared between processes in general, or within a program in our case:
# http://lkml.org/lkml/2005/7/6/250
# A warning is printed if overestimation is possible.
# In addition, for 2.6 kernels up to 2.6.9 inclusive, the shared
# value in /proc/$pid/statm is the total file-backed extent of a process.
# We ignore that, introducing more overestimation, again printing a warning.
# Since kernel 2.6.23-rc8-mm1 PSS is available in smaps, which allows
# us to calculate a more accurate value for the total RAM used by programs.
#
# Programs that use CLONE_VM without CLONE_THREAD are discounted by assuming
# they're the only programs that have the same /proc/$PID/smaps file for
# each instance. This will fail if there are multiple real instances of a
# program that then use CLONE_VM without CLONE_THREAD, or if a clone changes
# its memory map while we're checksumming each /proc/$PID/smaps.
#
# I don't take account of memory allocated for a program
# by other programs. For example, memory used in the X server for
# a program could be determined, but is not.
#
# FreeBSD is supported if linprocfs is mounted at /compat/linux/proc/.
# FreeBSD 8.0 emulates Linux up to kernel version 2.6.16.
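# A minimal sketch of the PSS method described in the notes above
# (Linux-only; assumes a kernel new enough to expose "Pss:" lines in
# smaps, and sums per process rather than merging per program the way
# this script does):

import os


def total_pss_kib():
    """Sum the Pss: fields across all /proc/$pid/smaps files,
    skipping processes that exit or deny access mid-scan."""
    total = 0
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/smaps' % pid) as smaps:
                for line in smaps:
                    # Match only the "Pss:" field, not e.g. "SwapPss:".
                    if line.startswith('Pss:'):
                        total += int(line.split()[1])  # value is in KiB
        except (IOError, OSError):
            continue  # process exited or permission denied
    return total


if __name__ == '__main__':
    print('Total PSS: %d KiB' % total_pss_kib())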
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters between
        keys and values are surrounded by spaces.
"""
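# A minimal usage sketch of the API documented above; the section and
# option names ('server', 'host', 'port', 'debug', 'timeout') are
# illustrative only.

import configparser

parser = configparser.ConfigParser()
parser.read_string("""
[server]
host = localhost
port = 8080
debug = yes
""")

host = parser.get('server', 'host')            # 'localhost'
port = parser.getint('server', 'port')         # 8080
debug = parser.getboolean('server', 'debug')   # True

# Fallbacks avoid NoSectionError/NoOptionError for missing entries.
timeout = parser.getint('server', 'timeout', fallback=30)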