Dataset schema (per-column string lengths):
    code      -- string, 64 to 7.01k characters
    docstring -- string, 2 to 15.8k characters
    text      -- string, 144 to 19.2k characters
#vtb
def count(self, object_class=None, params=None, **kwargs):
    path = "/directory-sync-service/v1/{}/count".format(object_class)
    r = self._httpclient.request(
        method="GET",
        path=path,
        url=self.url,
        params=params,
        **kwargs
    )
    return r
Retrieve a count of all directory entries that belong to the identified objectClass. The count is limited to a single domain.

Args:
    params (dict): Payload/request dictionary.
    object_class (str): Directory object class.
    **kwargs: Supported :meth:`~pancloud.httpclient.HTTPClient.request` parameters.

Returns:
    requests.Response: Requests Response() object.

Examples:
    Coming soon.
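Since the docstring's examples are still pending, here is a minimal hedged sketch of calling ``count``; the client class, endpoint URL, and objectClass value are assumptions, not taken from the source:

# Hedged sketch -- class name, URL, and objectClass value are assumptions.
from pancloud import DirectorySyncService

ds = DirectorySyncService(url="https://api.us.cdl.paloaltonetworks.com")
r = ds.count(object_class="computers", params={"domain": "example.com"})
print(r.status_code, r.json())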
#vtb
def add_fields(self, field_dict):
    for key, field in field_dict.items():
        self.add_field(key, field)
Add a mapping of field names to PayloadField instances.

:API: public
#vtb
def get_info(handle):
    csbi = _WindowsCSBI.CSBI()
    try:
        if not _WindowsCSBI.WINDLL.kernel32.GetConsoleScreenBufferInfo(handle, ctypes.byref(csbi)):
            raise IOError()
    except ctypes.ArgumentError:
        raise IOError()

    result = dict(
        buffer_width=int(csbi.dwSize.X - 1),
        buffer_height=int(csbi.dwSize.Y),
        terminal_width=int(csbi.srWindow.Right - csbi.srWindow.Left),
        terminal_height=int(csbi.srWindow.Bottom - csbi.srWindow.Top),
        bg_color=int(csbi.wAttributes & 240),
        fg_color=int(csbi.wAttributes % 16),
    )
    return result
Get information about this current console window (for Microsoft Windows only).

Raises IOError if the attempt to get information fails (if there is no console window). Don't forget to call _WindowsCSBI.initialize() once in your application before calling this method.

Positional arguments:
handle -- either _WindowsCSBI.HANDLE_STDERR or _WindowsCSBI.HANDLE_STDOUT.

Returns:
Dictionary with different integer values. Keys are:
    buffer_width -- width of the buffer (Screen Buffer Size in cmd.exe layout tab).
    buffer_height -- height of the buffer (Screen Buffer Size in cmd.exe layout tab).
    terminal_width -- width of the terminal window.
    terminal_height -- height of the terminal window.
    bg_color -- current background color (http://msdn.microsoft.com/en-us/library/windows/desktop/ms682088).
    fg_color -- current text color code.
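A minimal hedged usage sketch, assuming the initialize() call and handle constants behave as the docstring describes (Windows only):

# Hedged sketch -- names follow the docstring, not verified here.
_WindowsCSBI.initialize()  # must run once before get_info()
info = _WindowsCSBI.get_info(_WindowsCSBI.HANDLE_STDOUT)
print(info['terminal_width'], info['terminal_height'])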
#vtb
def usernames(urls):
    usernames = StringCounter()
    for url, count in urls.items():
        uparse = urlparse(url)
        path = uparse.path
        hostname = uparse.hostname
        m = username_re.match(path)
        if m:
            usernames[m.group()] += count
        elif hostname in [, ]:  # NOTE: the two hostname literals were lost in extraction
            usernames[path.lstrip()] += count  # NOTE: the lstrip() argument literal was lost as well
    return usernames
Take an iterable of `urls` of normalized URLs or file paths and attempt to extract usernames. Returns a `StringCounter` mapping usernames to counts.
#vtb
def _load_properties(self):
    method =   # NOTE: the Flickr API method-name literal was lost in extraction
    data = _doget(method, user_id=self.__id)
    self.__loaded = True
    person = data.rsp.person
    self.__isadmin = person.isadmin
    self.__ispro = person.ispro
    self.__icon_server = person.iconserver
    if int(person.iconserver) > 0:
        self.__icon_url =  % (person.iconserver, self.__id)  # NOTE: the URL format-string literal was lost
    else:
        self.__icon_url =   # NOTE: the default icon URL literal was lost
    self.__username = person.username.text
    self.__realname = person.realname.text
    self.__location = person.location.text
    self.__photos_firstdate = person.photos.firstdate.text
    self.__photos_firstdatetaken = person.photos.firstdatetaken.text
    self.__photos_count = person.photos.count.text
Load User properties from Flickr.
#vtb
def build(self, words):
    words = [self._normalize(tokens) for tokens in words]
    self._dawg = dawg.CompletionDAWG(words)
    self._loaded_model = True
Construct dictionary DAWG from tokenized words.
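A hedged sketch of driving ``build``; the owning class and the token-list format are assumptions, not taken from the source:

# Hedged sketch -- 'Completer' and the token lists are hypothetical.
completer = Completer()
completer.build([['new', 'york'], ['new', 'jersey'], ['newark']])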
#vtb
def make_processor(func, arg=None):
    def helper(instance, *args, **kwargs):
        value = kwargs.get()  # NOTE: the kwargs key literal was lost in extraction
        if value is None:
            value = instance
        if arg is not None:
            extra_arg = [arg]
        else:
            extra_arg = []
        return func(value, *extra_arg)
    return helper
A pre-called processor that wraps the execution of the target callable ``func``. This is useful when ``func`` is a third-party mapping function that can take your column's value and return an expected result, but doesn't understand all of the extra kwargs that get sent to processor callbacks. Because this helper proxies access to ``func``, it can hold back the extra kwargs for a successful call.

``func`` will be called once per object record, with a single positional argument: the column data retrieved via the column's :py:attr:`~datatableview.columns.Column.sources`.

An optional ``arg`` may be given, which will be forwarded as a second positional argument to ``func``. This was originally intended to simplify using Django template filter functions as ``func``. If you need to send more arguments, consider wrapping your ``func`` in a ``functools.partial``, and use that as ``func`` instead.
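A hedged sketch of the Django-filter use case the docstring mentions; the column definition around it is illustrative, not from the source:

# Hedged sketch -- the column class and sources are hypothetical.
from django.template.defaultfilters import truncatewords

blurb = TextColumn(
    "Blurb",
    sources=['body'],
    processor=make_processor(truncatewords, arg=10),  # truncate to 10 words
)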
#vtb
def get_api(profile=None, config_file=None, requirements=None):
    # NOTE: the body that loads the config profile and the requirements
    # file was garbled in extraction. The surviving fragments reference the
    # 'default-profile' config key, a 'requirements_data.txt' default file
    # name, splitting requirement lines on the pattern [\r\n;]+, and
    # skipping blank lines matching ^\s*$. Only the tail below is intact.
    default_versions = {}
    for reqline in requirements_lines:  # hypothetical name for the parsed lines
        if re.search(r'^\s*$', reqline):
            continue
        archive, version = _parse_requirement(reqline)
        default_versions[archive] = version
    api = APIConstructor.generate_api_from_config(profile_config)
    api.default_versions = default_versions
    APIConstructor.attach_manager_from_config(api, profile_config)
    APIConstructor.attach_services_from_config(api, profile_config)
    APIConstructor.attach_cache_from_config(api, profile_config)
    return api
Generate a datafs.DataAPI object from a config profile

``get_api`` generates a DataAPI object based on a pre-configured datafs profile specified in your datafs config file.

To create a datafs config file, use the command line tool ``datafs configure --helper`` or export an existing DataAPI object with :py:meth:`datafs.ConfigFile.write_config_from_api`

Parameters
----------
profile : str (optional)
    name of a profile in your datafs config file. If profile is not provided, the default profile specified in the file will be used.

config_file : str or file (optional)
    path to your datafs configuration file. By default, get_api uses your OS's default datafs application directory.

Examples
--------
The following specifies a simple API with a MongoDB manager and a temporary storage service:

.. code-block:: python

    >>> try:
    ...     from StringIO import StringIO
    ... except ImportError:
    ...     from io import StringIO
    ...
    >>> import tempfile
    >>> tempdir = tempfile.mkdtemp()
    >>>
    >>> config_file = StringIO("""
    ... default-profile: my-data
    ... profiles:
    ...   my-data:
    ...     manager:
    ...       class: MongoDBManager
    ...       kwargs:
    ...         database_name: 'MyDatabase'
    ...         table_name: 'DataFiles'
    ...
    ...     authorities:
    ...       local:
    ...         service: OSFS
    ...         args: ['{}']
    ... """.format(tempdir))
    >>>
    >>> # This file can be read in using the datafs.get_api helper function
    ...
    >>>
    >>> api = get_api(profile='my-data', config_file=config_file)
    >>> api.manager.create_archive_table(
    ...     'DataFiles',
    ...     raise_on_err=False)
    >>>
    >>> archive = api.create(
    ...     'my_first_archive',
    ...     metadata = dict(description = 'My test data archive'),
    ...     raise_on_err=False)
    >>>
    >>> with archive.open('w+') as f:
    ...     res = f.write(u'hello!')
    ...
    >>> with archive.open('r') as f:
    ...     print(f.read())
    ...
    hello!
    >>>
    >>> # clean up
    ...
    >>> archive.delete()
    >>> import shutil
    >>> shutil.rmtree(tempdir)
#vtb
def scale_and_crop(im, crop_spec):
    im = im.crop((crop_spec.x, crop_spec.y, crop_spec.x2, crop_spec.y2))
    if crop_spec.width and crop_spec.height:
        im = im.resize((crop_spec.width, crop_spec.height), resample=Image.ANTIALIAS)
    return im
Scale and Crop: crop ``im`` to the box given by ``crop_spec``, then resize to the spec's width and height if both are set.
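The docstring is terse, so here is a hedged sketch of a compatible ``crop_spec``; the container type is an assumption about what the function reads, not from the source:

# Hedged sketch -- CropSpec is a hypothetical container with the
# attributes this function accesses.
from collections import namedtuple
from PIL import Image

CropSpec = namedtuple('CropSpec', 'x y x2 y2 width height')
im = Image.open('photo.jpg')
thumb = scale_and_crop(im, CropSpec(0, 0, 400, 300, 200, 150))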
#vtb
def listFigures(self, walkTrace=tuple(), case=None, element=None):
    if case == :  # NOTE: the case-name literal was lost in extraction
        print(walkTrace, self.title)
    if case == :  # NOTE: the case-name literal was lost in extraction
        caption, fig = element
        try:
            print(walkTrace, fig._leopardref, caption)
        except AttributeError:
            fig._leopardref = next(self._reportSection._fignr)
            print(walkTrace, fig._leopardref, caption)
List section figures.
#vtb
def pformat(self):
    lines = []
    lines.append(("%s (%s)" % (self.name, self.status)).center(50, "-"))
    lines.append("items: {0:,} ({1:,} bytes)".format(self.item_count, self.size))
    cap = self.consumed_capacity.get("__table__", {})
    read = "Read: " + format_throughput(self.read_throughput, cap.get("read"))
    write = "Write: " + format_throughput(self.write_throughput, cap.get("write"))
    lines.append(read + " " + write)
    if self.decreases_today > 0:
        lines.append("decreases today: %d" % self.decreases_today)
    if self.range_key is None:
        lines.append(str(self.hash_key))
    else:
        lines.append("%s, %s" % (self.hash_key, self.range_key))
    for field in itervalues(self.attrs):
        if field.key_type == "INDEX":
            lines.append(str(field))
    for index_name, gindex in iteritems(self.global_indexes):
        cap = self.consumed_capacity.get(index_name)
        lines.append(gindex.pformat(cap))
    return "\n".join(lines)
Return a pretty-printed string summary of the table.
#vtb
def validate_scopes(self, request):
    if not request.scopes:
        request.scopes = utils.scope_to_list(request.scope) or utils.scope_to_list(
            self.request_validator.get_default_scopes(request.client_id, request))
    log.debug(, request.scopes, request.client_id, request.client)  # NOTE: the log format-string literal was lost in extraction
    if not self.request_validator.validate_scopes(request.client_id, request.scopes, request.client, request):
        raise errors.InvalidScopeError(request=request)
:param request: OAuthlib request.
:type request: oauthlib.common.Request
#vtb
def transform(self, X, **kwargs):
    self.ranks_ = self.rank(X)
    self.draw(**kwargs)
    return X
The transform method is the primary drawing hook for ranking classes.

Parameters
----------
X : ndarray or DataFrame of shape n x m
    A matrix of n instances with m features

kwargs : dict
    Pass generic arguments to the drawing method

Returns
-------
X : ndarray
    Typically a transformed matrix, X' is returned. However, this method performs no transformation on the original data, instead simply ranking the features that are in the input data and returns the original data, unmodified.
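A hedged sketch of how such a ranking visualizer is typically driven; the class name and fit/transform flow follow Yellowbrick-style conventions and are assumptions, not confirmed by this source:

# Hedged sketch -- Rank2D and the fit/transform flow are assumptions.
from yellowbrick.features import Rank2D

viz = Rank2D(algorithm='pearson')
viz.fit(X)
X_same = viz.transform(X)  # draws the ranking; X is returned unmodified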
#vtb
def spell_checker(
        self, text, accept_language=None, pragma=None, user_agent=None,
        client_id=None, client_ip=None, location=None, action_type=None,
        app_name=None, country_code=None, client_machine_name=None,
        doc_id=None, market=None, session_id=None, set_lang=None,
        user_id=None, mode=None, pre_context_text=None,
        post_context_text=None, custom_headers=None, raw=False,
        **operation_config):
    # NOTE: the bracketed string literals (parameter names, header names,
    # serialization type tags, and the deserialization model name) were
    # lost in extraction and are left blank below.
    x_bing_apis_sdk = "true"

    url = self.spell_checker.metadata[]
    query_parameters = {}
    if action_type is not None:
        query_parameters[] = self._serialize.query("action_type", action_type, )
    if app_name is not None:
        query_parameters[] = self._serialize.query("app_name", app_name, )
    if country_code is not None:
        query_parameters[] = self._serialize.query("country_code", country_code, )
    if client_machine_name is not None:
        query_parameters[] = self._serialize.query("client_machine_name", client_machine_name, )
    if doc_id is not None:
        query_parameters[] = self._serialize.query("doc_id", doc_id, )
    if market is not None:
        query_parameters[] = self._serialize.query("market", market, )
    if session_id is not None:
        query_parameters[] = self._serialize.query("session_id", session_id, )
    if set_lang is not None:
        query_parameters[] = self._serialize.query("set_lang", set_lang, )
    if user_id is not None:
        query_parameters[] = self._serialize.query("user_id", user_id, )

    header_parameters = {}
    header_parameters[] =   # NOTE: header name and value literals lost
    if custom_headers:
        header_parameters.update(custom_headers)
    header_parameters[] = self._serialize.header("x_bing_apis_sdk", x_bing_apis_sdk, )
    if accept_language is not None:
        header_parameters[] = self._serialize.header("accept_language", accept_language, )
    if pragma is not None:
        header_parameters[] = self._serialize.header("pragma", pragma, )
    if user_agent is not None:
        header_parameters[] = self._serialize.header("user_agent", user_agent, )
    if client_id is not None:
        header_parameters[] = self._serialize.header("client_id", client_id, )
    if client_ip is not None:
        header_parameters[] = self._serialize.header("client_ip", client_ip, )
    if location is not None:
        header_parameters[] = self._serialize.header("location", location, )

    form_data_content = {
        : text,
        : mode,
        : pre_context_text,
        : post_context_text,
    }

    request = self._client.post(url, query_parameters)
    response = self._client.send_formdata(
        request, header_parameters, form_data_content, stream=False, **operation_config)

    if response.status_code not in [200]:
        raise models.ErrorResponseException(self._deserialize, response)

    deserialized = None
    if response.status_code == 200:
        deserialized = self._deserialize(, response)  # NOTE: the model-name literal was lost

    if raw:
        client_raw_response = ClientRawResponse(deserialized, response)
        return client_raw_response

    return deserialized
The Bing Spell Check API lets you perform contextual grammar and spell checking. Bing has developed a web-based spell-checker that leverages machine learning and statistical machine translation to dynamically train a constantly evolving and highly contextual algorithm. The spell-checker is based on a massive corpus of web searches and documents. :param text: The text string to check for spelling and grammar errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. Because of the query string length limit, you'll typically use a POST request unless you're checking only short strings. :type text: str :param accept_language: A comma-delimited list of one or more languages to use for user interface strings. The list is in decreasing order of preference. For additional information, including expected format, see [RFC2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). This header and the setLang query parameter are mutually exclusive; do not specify both. If you set this header, you must also specify the cc query parameter. Bing will use the first supported language it finds from the list, and combine that language with the cc parameter value to determine the market to return results for. If the list does not include a supported language, Bing will find the closest language and market that supports the request, and may use an aggregated or default market for the results instead of a specified one. You should use this header and the cc query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. A user interface string is a string that's used as a label in a user interface. There are very few user interface strings in the JSON response objects. Any links in the response objects to Bing.com properties will apply the specified language. :type accept_language: str :param pragma: By default, Bing returns cached content, if available. To prevent Bing from returning cached content, set the Pragma header to no-cache (for example, Pragma: no-cache). :type pragma: str :param user_agent: The user agent originating the request. Bing uses the user agent to provide mobile users with an optimized experience. Although optional, you are strongly encouraged to always specify this header. The user-agent should be the same string that any commonly used browser would send. For information about user agents, see [RFC 2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). :type user_agent: str :param client_id: Bing uses this header to provide users with consistent behavior across Bing API calls. Bing often flights new features and improvements, and it uses the client ID as a key for assigning traffic on different flights. If you do not use the same client ID for a user across multiple requests, then Bing may assign the user to multiple conflicting flights. Being assigned to multiple conflicting flights can lead to an inconsistent user experience. For example, if the second request has a different flight assignment than the first, the experience may be unexpected. Also, Bing can use the client ID to tailor web results to that client ID’s search history, providing a richer experience for the user. Bing also uses this header to help improve result rankings by analyzing the activity generated by a client ID. 
The relevance improvements help with better quality of results delivered by Bing APIs and in turn enables higher click-through rates for the API consumer. IMPORTANT: Although optional, you should consider this header required. Persisting the client ID across multiple requests for the same end user and device combination enables 1) the API consumer to receive a consistent user experience, and 2) higher click-through rates via better quality of results from the Bing APIs. Each user that uses your application on the device must have a unique, Bing generated client ID. If you do not include this header in the request, Bing generates an ID and returns it in the X-MSEdge-ClientID response header. The only time that you should NOT include this header in a request is the first time the user uses your app on that device. Use the client ID for each Bing API request that your app makes for this user on the device. Persist the client ID. To persist the ID in a browser app, use a persistent HTTP cookie to ensure the ID is used across all sessions. Do not use a session cookie. For other apps such as mobile apps, use the device's persistent storage to persist the ID. The next time the user uses your app on that device, get the client ID that you persisted. Bing responses may or may not include this header. If the response includes this header, capture the client ID and use it for all subsequent Bing requests for the user on that device. If you include the X-MSEdge-ClientID, you must not include cookies in the request. :type client_id: str :param client_ip: The IPv4 or IPv6 address of the client device. The IP address is used to discover the user's location. Bing uses the location information to determine safe search behavior. Although optional, you are encouraged to always specify this header and the X-Search-Location header. Do not obfuscate the address (for example, by changing the last octet to 0). Obfuscating the address results in the location not being anywhere near the device's actual location, which may result in Bing serving erroneous results. :type client_ip: str :param location: A semicolon-delimited list of key/value pairs that describe the client's geographical location. Bing uses the location information to determine safe search behavior and to return relevant local content. Specify the key/value pair as <key>:<value>. The following are the keys that you use to specify the user's location. lat (required): The latitude of the client's location, in degrees. The latitude must be greater than or equal to -90.0 and less than or equal to +90.0. Negative values indicate southern latitudes and positive values indicate northern latitudes. long (required): The longitude of the client's location, in degrees. The longitude must be greater than or equal to -180.0 and less than or equal to +180.0. Negative values indicate western longitudes and positive values indicate eastern longitudes. re (required): The radius, in meters, which specifies the horizontal accuracy of the coordinates. Pass the value returned by the device's location service. Typical values might be 22m for GPS/Wi-Fi, 380m for cell tower triangulation, and 18,000m for reverse IP lookup. ts (optional): The UTC UNIX timestamp of when the client was at the location. (The UNIX timestamp is the number of seconds since January 1, 1970.) head (optional): The client's relative heading or direction of travel. Specify the direction of travel as degrees from 0 through 360, counting clockwise relative to true north. 
Specify this key only if the sp key is nonzero. sp (optional): The horizontal velocity (speed), in meters per second, that the client device is traveling. alt (optional): The altitude of the client device, in meters. are (optional): The radius, in meters, that specifies the vertical accuracy of the coordinates. Specify this key only if you specify the alt key. Although many of the keys are optional, the more information that you provide, the more accurate the location results are. Although optional, you are encouraged to always specify the user's geographical location. Providing the location is especially important if the client's IP address does not accurately reflect the user's physical location (for example, if the client uses VPN). For optimal results, you should include this header and the X-Search-ClientIP header, but at a minimum, you should include this header. :type location: str :param action_type: A string that's used by logging to determine whether the request is coming from an interactive session or a page load. The following are the possible values. 1) Edit—The request is from an interactive session 2) Load—The request is from a page load. Possible values include: 'Edit', 'Load' :type action_type: str or ~azure.cognitiveservices.language.spellcheck.models.ActionType :param app_name: The unique name of your app. The name must be known by Bing. Do not include this parameter unless you have previously contacted Bing to get a unique app name. To get a unique name, contact your Bing Business Development manager. :type app_name: str :param country_code: A 2-character country code of the country where the results come from. This API supports only the United States market. If you specify this query parameter, it must be set to us. If you set this parameter, you must also specify the Accept-Language header. Bing uses the first supported language it finds from the languages list, and combine that language with the country code that you specify to determine the market to return results for. If the languages list does not include a supported language, Bing finds the closest language and market that supports the request, or it may use an aggregated or default market for the results instead of a specified one. You should use this query parameter and the Accept-Language query parameter only if you specify multiple languages; otherwise, you should use the mkt and setLang query parameters. This parameter and the mkt query parameter are mutually exclusive—do not specify both. :type country_code: str :param client_machine_name: A unique name of the device that the request is being made from. Generate a unique value for each device (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type client_machine_name: str :param doc_id: A unique ID that identifies the document that the text belongs to. Generate a unique value for each document (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type doc_id: str :param market: The market where the results come from. You are strongly encouraged to always specify the market, if known. Specifying the market helps Bing route the request and return an appropriate and optimal response. This parameter and the cc query parameter are mutually exclusive—do not specify both. :type market: str :param session_id: A unique ID that identifies this user session. Generate a unique value for each user session (the value is unimportant). 
The service uses the ID to help debug issues and improve the quality of corrections :type session_id: str :param set_lang: The language to use for user interface strings. Specify the language using the ISO 639-1 2-letter language code. For example, the language code for English is EN. The default is EN (English). Although optional, you should always specify the language. Typically, you set setLang to the same language specified by mkt unless the user wants the user interface strings displayed in a different language. This parameter and the Accept-Language header are mutually exclusive—do not specify both. A user interface string is a string that's used as a label in a user interface. There are few user interface strings in the JSON response objects. Also, any links to Bing.com properties in the response objects apply the specified language. :type set_lang: str :param user_id: A unique ID that identifies the user. Generate a unique value for each user (the value is unimportant). The service uses the ID to help debug issues and improve the quality of corrections. :type user_id: str :param mode: The type of spelling and grammar checks to perform. The following are the possible values (the values are case insensitive). The default is Proof. 1) Proof—Finds most spelling and grammar mistakes. 2) Spell—Finds most spelling mistakes but does not find some of the grammar errors that Proof catches (for example, capitalization and repeated words). Possible values include: 'proof', 'spell' :type mode: str :param pre_context_text: A string that gives context to the text string. For example, the text string petal is valid. However, if you set preContextText to bike, the context changes and the text string becomes not valid. In this case, the API suggests that you change petal to pedal (as in bike pedal). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. :type pre_context_text: str :param post_context_text: A string that gives context to the text string. For example, the text string read is valid. However, if you set postContextText to carpet, the context changes and the text string becomes not valid. In this case, the API suggests that you change read to red (as in red carpet). This text is not checked for grammar or spelling errors. The combined length of the text string, preContextText string, and postContextText string may not exceed 10,000 characters. You may specify this parameter in the query string of a GET request or in the body of a POST request. :type post_context_text: str :param dict custom_headers: headers that will be added to the request :param bool raw: returns the direct response alongside the deserialized response :param operation_config: :ref:`Operation configuration overrides<msrest:optionsforoperations>`. :return: SpellCheck or ClientRawResponse if raw=true :rtype: ~azure.cognitiveservices.language.spellcheck.models.SpellCheck or ~msrest.pipeline.ClientRawResponse :raises: :class:`ErrorResponseException<azure.cognitiveservices.language.spellcheck.models.ErrorResponseException>`
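Since the docstring gives no call example, here is a minimal hedged sketch; the client class, credential helper, and result attributes follow typical Azure SDK conventions and are assumptions, not confirmed by this source:

# Hedged sketch -- class names and attributes are assumptions.
from azure.cognitiveservices.language.spellcheck import SpellCheckAPI
from msrest.authentication import CognitiveServicesCredentials

client = SpellCheckAPI(CognitiveServicesCredentials('<subscription-key>'))
result = client.spell_checker('Bill Gatas', mode='proof', market='en-US')
for token in result.flagged_tokens:
    print(token.token, [s.suggestion for s in token.suggestions])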
#vtb
def _sign_payload(self, payload):
    # NOTE: the dict-key and format-string literals were lost in extraction
    # and are left blank below.
    app_key = self._app_key
    t = int(time.time() * 1000)
    requestStr = {
        : self._req_header,
        : payload,
    }
    data = json.dumps({: json.dumps(requestStr)})
    data_str = .format(self._req_token, t, app_key, data)
    sign = hashlib.md5(data_str.encode()).hexdigest()
    params = {
        : t,
        : app_key,
        : sign,
        : data,
    }
    return params
Sign the payload with the app key; returns the new request parameters.
#vtb
def from_pycbc(cls, fs, copy=True):
    return cls(fs.data, f0=0, df=fs.delta_f, epoch=fs.epoch, copy=copy)
Convert a `pycbc.types.frequencyseries.FrequencySeries` into a `FrequencySeries`

Parameters
----------
fs : `pycbc.types.frequencyseries.FrequencySeries`
    the input PyCBC `~pycbc.types.frequencyseries.FrequencySeries` array

copy : `bool`, optional, default: `True`
    if `True`, copy these data to a new array

Returns
-------
spectrum : `FrequencySeries`
    a GWpy version of the input frequency series
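A hedged round-trip sketch; the PyCBC constructor arguments shown are assumptions about that library's API, not taken from this source:

# Hedged sketch -- the PyCBC constructor signature is assumed.
import numpy
from pycbc.types import FrequencySeries as PyCBCFrequencySeries

pfs = PyCBCFrequencySeries(numpy.ones(100), delta_f=0.5)
gwpy_fs = FrequencySeries.from_pycbc(pfs)
print(gwpy_fs.df)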
#vtb
def attachviewers(self, profiles):
    if self.metadata:
        template = None
        for profile in profiles:
            if isinstance(self, CLAMInputFile):
                for t in profile.input:
                    if self.metadata.inputtemplate == t.id:
                        template = t
                        break
            elif isinstance(self, CLAMOutputFile) and self.metadata and self.metadata.provenance:
                for t in profile.outputtemplates():
                    if self.metadata.provenance.outputtemplate_id == t.id:
                        template = t
                        break
            else:
                raise NotImplementedError
            if template:
                break
        if template and template.viewers:
            for viewer in template.viewers:
                self.viewers.append(viewer)
        if template and template.converters:
            for converter in template.converters:
                self.converters.append(converter)
Attach viewers *and converters* to a file, automatically scanning all profiles for the matching output template or input template.
#vtb def mono_FM(x,fs=2.4e6,file_name='test.wav'):
    # Design 1st decimation lowpass filter (f_c = 200 KHz)
    b = signal.firwin(64,2*200e3/float(fs))
    # Filter and decimate complex baseband input by 10
    y = signal.lfilter(b,1,x)
    z = ss.downsample(y,10)
    # Apply the frequency discriminator
    z_bb = discrim(z)
    # Design 2nd decimation lowpass filter (f_c = 12 KHz)
    bb = signal.firwin(64,2*12e3/(float(fs)/10))
    # Filter and decimate by 5 to reach the audio rate
    zz_bb = signal.lfilter(bb,1,z_bb)
    z_out = ss.downsample(zz_bb,5)
    # Save the demodulated audio to a wave file
    ss.to_wav(file_name, 48000, z_out/2)
    print('Done!')
    return z_bb, z_out
Decimate complex baseband input by 10 Design 1st decimation lowpass filter (f_c = 200 KHz)
### Input: Decimate complex baseband input by 10 Design 1st decimation lowpass filter (f_c = 200 KHz) ### Response: #vtb def mono_FM(x,fs=2.4e6,file_name='test.wav'):
    b = signal.firwin(64,2*200e3/float(fs))
    y = signal.lfilter(b,1,x)
    z = ss.downsample(y,10)
    z_bb = discrim(z)
    bb = signal.firwin(64,2*12e3/(float(fs)/10))
    zz_bb = signal.lfilter(bb,1,z_bb)
    z_out = ss.downsample(zz_bb,5)
    ss.to_wav(file_name, 48000, z_out/2)
    print('Done!')
    return z_bb, z_out
#vtb def _get_deps(self, tree, include_punct, representation, universal):
    if universal:
        converter = self.universal_converter
        if self.universal_converter == self.converter:
            import warnings
            warnings.warn("This jar doesn't support universal dependencies, "
                          "falling back to Stanford Dependencies. To suppress "
                          "this message, call with universal=False")
    else:
        converter = self.converter
    egs = converter(tree, include_punct)
    if representation == 'basic':
        deps = egs.typedDependencies()
    elif representation == 'collapsed':
        deps = egs.typedDependenciesCollapsed(True)
    elif representation == 'CCprocessed':
        deps = egs.typedDependenciesCCprocessed(True)
    else:
        # invalid values were rejected earlier, so this assert shouldn't fail
        assert representation == 'collapsedTree'
        deps = egs.typedDependenciesCollapsedTree()
    return self._listify(deps)
Get a list of dependencies from a Stanford Tree for a specific Stanford Dependencies representation.
### Input: Get a list of dependencies from a Stanford Tree for a specific Stanford Dependencies representation. ### Response: #vtb def _get_deps(self, tree, include_punct, representation, universal):
    if universal:
        converter = self.universal_converter
        if self.universal_converter == self.converter:
            import warnings
            warnings.warn("This jar doesn't support universal dependencies, "
                          "falling back to Stanford Dependencies. To suppress "
                          "this message, call with universal=False")
    else:
        converter = self.converter
    egs = converter(tree, include_punct)
    if representation == 'basic':
        deps = egs.typedDependencies()
    elif representation == 'collapsed':
        deps = egs.typedDependenciesCollapsed(True)
    elif representation == 'CCprocessed':
        deps = egs.typedDependenciesCCprocessed(True)
    else:
        # invalid values were rejected earlier, so this assert shouldn't fail
        assert representation == 'collapsedTree'
        deps = egs.typedDependenciesCollapsedTree()
    return self._listify(deps)
#vtb def get_core(self): if self.minicard and self.status == False: return pysolvers.minicard_core(self.minicard)
Get an unsatisfiable core if the formula was previously unsatisfied.
### Input: Get an unsatisfiable core if the formula was previously unsatisfied. ### Response: #vtb def get_core(self): if self.minicard and self.status == False: return pysolvers.minicard_core(self.minicard)
#vtb def wrap_iterable(obj): was_scalar = not isiterable(obj) wrapped_obj = [obj] if was_scalar else obj return wrapped_obj, was_scalar
Returns: wrapped_obj, was_scalar
### Input: Returns: wrapped_obj, was_scalar ### Response: #vtb def wrap_iterable(obj): was_scalar = not isiterable(obj) wrapped_obj = [obj] if was_scalar else obj return wrapped_obj, was_scalar
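A quick illustration of the scalar/iterable round-trip; `isiterable` is assumed to come from the same utility module:

wrapped, was_scalar = wrap_iterable(5)        # -> [5], True
wrapped, was_scalar = wrap_iterable([5, 6])   # -> [5, 6], False
# callers can undo the wrapping afterwards:
result = wrapped[0] if was_scalar else wrapped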
#vtb def send_text(self, sender, receiver_type, receiver_id, content):
    data = {
        'receiver': {
            'type': receiver_type,
            'id': receiver_id,
        },
        'sender': sender,
        'msgtype': 'text',
        'text': {
            'content': content,
        }
    }
    return self._post('chat/send', data=data)
Send a text message. For details see https://qydev.weixin.qq.com/wiki/index.php?title=企业会话接口说明 :param sender: Sender of the message :param receiver_type: Receiver type: single|group, i.e. one-on-one chat or group chat :param receiver_id: Receiver value: userid|chatid, i.e. a member id or a chat session id :param content: Message content :return: JSON data returned by the API
### Input: Send a text message. For details see https://qydev.weixin.qq.com/wiki/index.php?title=企业会话接口说明 :param sender: Sender of the message :param receiver_type: Receiver type: single|group, i.e. one-on-one chat or group chat :param receiver_id: Receiver value: userid|chatid, i.e. a member id or a chat session id :param content: Message content :return: JSON data returned by the API ### Response: #vtb def send_text(self, sender, receiver_type, receiver_id, content):
    data = {
        'receiver': {
            'type': receiver_type,
            'id': receiver_id,
        },
        'sender': sender,
        'msgtype': 'text',
        'text': {
            'content': content,
        }
    }
    return self._post('chat/send', data=data)
#vtb def user_token(scopes, client_id=None, client_secret=None, redirect_uri=None): webbrowser.open_new(authorize_url(client_id=client_id, redirect_uri=redirect_uri, scopes=scopes)) code = parse_code(raw_input()) return User(code, client_id=client_id, client_secret=client_secret, redirect_uri=redirect_uri)
Generate a user access token :param List[str] scopes: Scopes to get :param str client_id: Spotify Client ID :param str client_secret: Spotify Client secret :param str redirect_uri: Spotify redirect URI :return: Generated access token :rtype: User
### Input: Generate a user access token :param List[str] scopes: Scopes to get :param str client_id: Spotify Client ID :param str client_secret: Spotify Client secret :param str redirect_uri: Spotify redirect URI :return: Generated access token :rtype: User ### Response: #vtb def user_token(scopes, client_id=None, client_secret=None, redirect_uri=None): webbrowser.open_new(authorize_url(client_id=client_id, redirect_uri=redirect_uri, scopes=scopes)) code = parse_code(raw_input()) return User(code, client_id=client_id, client_secret=client_secret, redirect_uri=redirect_uri)
#vtb def probably_wkt(text):
    valid = False
    valid_types = set([
        'POINT', 'LINESTRING', 'POLYGON', 'MULTIPOINT',
        'MULTILINESTRING', 'MULTIPOLYGON', 'GEOMETRYCOLLECTION',
    ])
    matched = re.match(r'^\s*([A-Za-z]+)', text.strip())
    if matched:
        valid = matched.group(1).upper() in valid_types
    return valid
Quick check to determine if the provided text looks like WKT
### Input: Quick check to determine if the provided text looks like WKT ### Response: #vtb def probably_wkt(text):
    valid = False
    valid_types = set([
        'POINT', 'LINESTRING', 'POLYGON', 'MULTIPOINT',
        'MULTILINESTRING', 'MULTIPOLYGON', 'GEOMETRYCOLLECTION',
    ])
    matched = re.match(r'^\s*([A-Za-z]+)', text.strip())
    if matched:
        valid = matched.group(1).upper() in valid_types
    return valid
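Assuming the reconstructed leading-word check above, the function behaves roughly like this:

probably_wkt('POINT (30 10)')               # True
probably_wkt('LINESTRING (30 10, 10 30)')   # True
probably_wkt('{"type": "Point"}')           # False: GeoJSON, not WKT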
#vtb def mask_catalog(regionfile, infile, outfile, negate=False, racol='ra', deccol='dec'):
    logging.info("Loading region from {0}".format(regionfile))
    region = Region.load(regionfile)
    logging.info("Loading catalog from {0}".format(infile))
    table = load_table(infile)
    masked_table = mask_table(region, table, negate=negate, racol=racol, deccol=deccol)
    write_table(masked_table, outfile)
    return
Apply a region file as a mask to a catalog, removing all the rows with ra/dec inside the region If negate=False then remove the rows with ra/dec outside the region. Parameters ---------- regionfile : str A file which can be loaded as a :class:`AegeanTools.regions.Region`. The catalogue will be masked according to this region. infile : str Input catalogue. outfile : str Output catalogue. negate : bool If True then pixels *outside* the region are masked. Default = False. racol, deccol : str The name of the columns in `table` that should be interpreted as ra and dec. Default = 'ra', 'dec' See Also -------- :func:`AegeanTools.MIMAS.mask_table` :func:`AegeanTools.catalogs.load_table`
### Input: Apply a region file as a mask to a catalog, removing all the rows with ra/dec inside the region If negate=False then remove the rows with ra/dec outside the region. Parameters ---------- regionfile : str A file which can be loaded as a :class:`AegeanTools.regions.Region`. The catalogue will be masked according to this region. infile : str Input catalogue. outfile : str Output catalogue. negate : bool If True then pixels *outside* the region are masked. Default = False. racol, deccol : str The name of the columns in `table` that should be interpreted as ra and dec. Default = 'ra', 'dec' See Also -------- :func:`AegeanTools.MIMAS.mask_table` :func:`AegeanTools.catalogs.load_table` ### Response: #vtb def mask_catalog(regionfile, infile, outfile, negate=False, racol=, deccol=): logging.info("Loading region from {0}".format(regionfile)) region = Region.load(regionfile) logging.info("Loading catalog from {0}".format(infile)) table = load_table(infile) masked_table = mask_table(region, table, negate=negate, racol=racol, deccol=deccol) write_table(masked_table, outfile) return
#vtb def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor: x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask)) return self.sublayer[1](x, self.feed_forward)
Follow Figure 1 (left) for connections.
### Input: Follow Figure 1 (left) for connections. ### Response: #vtb def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor: x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask)) return self.sublayer[1](x, self.feed_forward)
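The `self.sublayer` modules referenced above are residual wrappers; here is a minimal sketch in the Annotated Transformer style (the class name and the pre-norm layout are assumptions, not taken from this record):

import torch.nn as nn

class SublayerConnection(nn.Module):
    """Residual connection with dropout around a layer-normalized sublayer."""
    def __init__(self, size, dropout):
        super().__init__()
        self.norm = nn.LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        # pre-norm residual: x + Dropout(sublayer(LayerNorm(x)))
        return x + self.dropout(sublayer(self.norm(x)))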
#vtb def admin_startWS(self, host='localhost', port=8546, cors=None, apis=None):
    if cors is None:
        cors = []
    if apis is None:
        apis = ['eth', 'net', 'web3']
    return (yield from self.rpc_call('admin_startWS',
                                     [host, port, ','.join(cors), ','.join(apis)]))
https://github.com/ethereum/go-ethereum/wiki/Management-APIs#admin_startws :param host: Network interface to open the listener socket (optional) :type host: str :param port: Network port to open the listener socket (optional) :type port: int :param cors: Cross-origin resource sharing header to use (optional) :type cors: str :param apis: API modules to offer over this interface (optional) :type apis: str :rtype: bool
### Input: https://github.com/ethereum/go-ethereum/wiki/Management-APIs#admin_startws :param host: Network interface to open the listener socket (optional) :type host: str :param port: Network port to open the listener socket (optional) :type port: int :param cors: Cross-origin resource sharing header to use (optional) :type cors: str :param apis: API modules to offer over this interface (optional) :type apis: str :rtype: bool ### Response: #vtb def admin_startWS(self, host='localhost', port=8546, cors=None, apis=None):
    if cors is None:
        cors = []
    if apis is None:
        apis = ['eth', 'net', 'web3']
    return (yield from self.rpc_call('admin_startWS',
                                     [host, port, ','.join(cors), ','.join(apis)]))
#vtb def compute_ratio(x): sum_ = sum(x) ratios = [] for i in x: ratio = i / sum_ ratios.append(ratio) return ratios
Compute the proportion of each class in the data
### Input: Compute the proportion of each class in the data ### Response: #vtb def compute_ratio(x): sum_ = sum(x) ratios = [] for i in x: ratio = i / sum_ ratios.append(ratio) return ratios
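For example, assuming Python 3 division semantics:

compute_ratio([1, 1, 2])   # -> [0.25, 0.25, 0.5]
compute_ratio([10, 30])    # -> [0.25, 0.75]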
#vtb def _parse_game_date_and_location(self, boxscore): scheme = BOXSCORE_SCHEME["game_info"] items = [i.text() for i in boxscore(scheme).items()] game_info = items[0].split() attendance = None date = None duration = None stadium = None time = None date = game_info[0] for line in game_info: if in line: attendance = line.replace(, ).replace(, ) if in line: duration = line.replace(, ) if in line: stadium = line.replace(, ) if in line: time = line.replace(, ) setattr(self, , attendance) setattr(self, , date) setattr(self, , duration) setattr(self, , stadium) setattr(self, , time)
Retrieve the game's date and location. The games' meta information, such as date, location, attendance, and duration, follow a complex parsing scheme that changes based on the layout of the page. The information should be able to be parsed and set regardless of the order and how much information is included. To do this, the meta information should be iterated through line-by-line and fields should be determined by the values that are found in each line. Parameters ---------- boxscore : PyQuery object A PyQuery object containing all of the HTML data from the boxscore.
### Input: Retrieve the game's date and location. The games' meta information, such as date, location, attendance, and duration, follow a complex parsing scheme that changes based on the layout of the page. The information should be able to be parsed and set regardless of the order and how much information is included. To do this, the meta information should be iterated through line-by-line and fields should be determined by the values that are found in each line. Parameters ---------- boxscore : PyQuery object A PyQuery object containing all of the HTML data from the boxscore. ### Response: #vtb def _parse_game_date_and_location(self, boxscore): scheme = BOXSCORE_SCHEME["game_info"] items = [i.text() for i in boxscore(scheme).items()] game_info = items[0].split() attendance = None date = None duration = None stadium = None time = None date = game_info[0] for line in game_info: if in line: attendance = line.replace(, ).replace(, ) if in line: duration = line.replace(, ) if in line: stadium = line.replace(, ) if in line: time = line.replace(, ) setattr(self, , attendance) setattr(self, , date) setattr(self, , duration) setattr(self, , stadium) setattr(self, , time)
#vtb def _parse_pages(self, unicode=False): if self.pageRange: pages = .format(self.pageRange) elif self.startingPage: pages = .format(self.startingPage, self.endingPage) else: pages = if unicode: pages = u.format(pages) return pages
Auxiliary function to parse and format page range of a document.
### Input: Auxiliary function to parse and format page range of a document. ### Response: #vtb def _parse_pages(self, unicode=False): if self.pageRange: pages = .format(self.pageRange) elif self.startingPage: pages = .format(self.startingPage, self.endingPage) else: pages = if unicode: pages = u.format(pages) return pages
#vtb def command(self, cluster_id, command, *args): cluster = self._storage[cluster_id] try: return getattr(cluster, command)(*args) except AttributeError: raise ValueError("Cannot issue the command %r to ShardedCluster %s" % (command, cluster_id))
Call a ShardedCluster method.
### Input: Call a ShardedCluster method. ### Response: #vtb def command(self, cluster_id, command, *args): cluster = self._storage[cluster_id] try: return getattr(cluster, command)(*args) except AttributeError: raise ValueError("Cannot issue the command %r to ShardedCluster %s" % (command, cluster_id))
#vtb def getParticleInfos(self, swarmId=None, genIdx=None, completed=None, matured=None, lastDescendent=False): if swarmId is not None: entryIdxs = self._swarmIdToIndexes.get(swarmId, []) else: entryIdxs = range(len(self._allResults)) if len(entryIdxs) == 0: return ([], [], [], [], []) particleStates = [] modelIds = [] errScores = [] completedFlags = [] maturedFlags = [] for idx in entryIdxs: entry = self._allResults[idx] if swarmId is not None: assert (not entry[]) modelParams = entry[] isCompleted = entry[] isMatured = entry[] particleState = modelParams[] particleGenIdx = particleState[] particleId = particleState[] if genIdx is not None and particleGenIdx != genIdx: continue if completed is not None and (completed != isCompleted): continue if matured is not None and (matured != isMatured): continue if lastDescendent \ and (self._particleLatestGenIdx[particleId] != particleGenIdx): continue particleStates.append(particleState) modelIds.append(entry[]) errScores.append(entry[]) completedFlags.append(isCompleted) maturedFlags.append(isMatured) return (particleStates, modelIds, errScores, completedFlags, maturedFlags)
Return a list of particleStates for all particles we know about in the given swarm, their model Ids, and metric results. Parameters: --------------------------------------------------------------------- swarmId: A string representation of the sorted list of encoders in this swarm. For example '__address_encoder.__gym_encoder' genIdx: If not None, only return particles at this specific generation index. completed: If not None, only return particles of the given state (either completed if 'completed' is True, or running if 'completed' is false matured: If not None, only return particles of the given state (either matured if 'matured' is True, or not matured if 'matured' is false. Note that any model which has completed is also considered matured. lastDescendent: If True, only return particles that are the last descendent, that is, the highest generation index for a given particle Id retval: (particleStates, modelIds, errScores, completed, matured) particleStates: list of particleStates modelIds: list of modelIds errScores: list of errScores, numpy.inf is plugged in if we don't have a result yet completed: list of completed booleans matured: list of matured booleans
### Input: Return a list of particleStates for all particles we know about in the given swarm, their model Ids, and metric results. Parameters: --------------------------------------------------------------------- swarmId: A string representation of the sorted list of encoders in this swarm. For example '__address_encoder.__gym_encoder' genIdx: If not None, only return particles at this specific generation index. completed: If not None, only return particles of the given state (either completed if 'completed' is True, or running if 'completed' is false matured: If not None, only return particles of the given state (either matured if 'matured' is True, or not matured if 'matured' is false. Note that any model which has completed is also considered matured. lastDescendent: If True, only return particles that are the last descendent, that is, the highest generation index for a given particle Id retval: (particleStates, modelIds, errScores, completed, matured) particleStates: list of particleStates modelIds: list of modelIds errScores: list of errScores, numpy.inf is plugged in if we don't have a result yet completed: list of completed booleans matured: list of matured booleans ### Response: #vtb def getParticleInfos(self, swarmId=None, genIdx=None, completed=None, matured=None, lastDescendent=False): if swarmId is not None: entryIdxs = self._swarmIdToIndexes.get(swarmId, []) else: entryIdxs = range(len(self._allResults)) if len(entryIdxs) == 0: return ([], [], [], [], []) particleStates = [] modelIds = [] errScores = [] completedFlags = [] maturedFlags = [] for idx in entryIdxs: entry = self._allResults[idx] if swarmId is not None: assert (not entry[]) modelParams = entry[] isCompleted = entry[] isMatured = entry[] particleState = modelParams[] particleGenIdx = particleState[] particleId = particleState[] if genIdx is not None and particleGenIdx != genIdx: continue if completed is not None and (completed != isCompleted): continue if matured is not None and (matured != isMatured): continue if lastDescendent \ and (self._particleLatestGenIdx[particleId] != particleGenIdx): continue particleStates.append(particleState) modelIds.append(entry[]) errScores.append(entry[]) completedFlags.append(isCompleted) maturedFlags.append(isMatured) return (particleStates, modelIds, errScores, completedFlags, maturedFlags)
#vtb def update(self, data_set): now = time.time() for d in data_set: self.timed_data[d] = now self._expire_data()
Refresh the time of all specified elements in the supplied data set.
### Input: Refresh the time of all specified elements in the supplied data set. ### Response: #vtb def update(self, data_set): now = time.time() for d in data_set: self.timed_data[d] = now self._expire_data()
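A minimal sketch of a container this method could belong to; the class name, `max_age`, and `_expire_data` are assumptions for illustration:

import time

class TimedSet:
    def __init__(self, max_age):
        self.timed_data = {}   # element -> time of last refresh
        self.max_age = max_age

    def _expire_data(self):
        # drop anything not refreshed within max_age seconds
        cutoff = time.time() - self.max_age
        self.timed_data = {d: t for d, t in self.timed_data.items() if t >= cutoff}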
#vtb def _get_esxi_proxy_details():
    det = __proxy__['esxi.get_details']()
    host = det.get('host')
    if det.get('vcenter'):
        host = det['vcenter']
    esxi_hosts = None
    if det.get('esxi_host'):
        esxi_hosts = [det['esxi_host']]
    return host, det.get('username'), det.get('password'), \
        det.get('protocol'), det.get('port'), det.get('mechanism'), \
        det.get('principal'), det.get('domain'), esxi_hosts
Returns the running esxi's proxy details
### Input: Returns the running esxi's proxy details ### Response: #vtb def _get_esxi_proxy_details():
    det = __proxy__['esxi.get_details']()
    host = det.get('host')
    if det.get('vcenter'):
        host = det['vcenter']
    esxi_hosts = None
    if det.get('esxi_host'):
        esxi_hosts = [det['esxi_host']]
    return host, det.get('username'), det.get('password'), \
        det.get('protocol'), det.get('port'), det.get('mechanism'), \
        det.get('principal'), det.get('domain'), esxi_hosts
#vtb def _construct_body_s3_dict(self): if isinstance(self.definition_uri, dict): if not self.definition_uri.get("Bucket", None) or not self.definition_uri.get("Key", None): raise InvalidResourceException(self.logical_id, " requires Bucket and Key properties to be specified") s3_pointer = self.definition_uri else: s3_pointer = parse_s3_uri(self.definition_uri) if s3_pointer is None: raise InvalidResourceException(self.logical_id, DefinitionUri\ ) body_s3 = { : s3_pointer[], : s3_pointer[] } if in s3_pointer: body_s3[] = s3_pointer[] return body_s3
Constructs the RestApi's `BodyS3Location property`_, from the SAM Api's DefinitionUri property. :returns: a BodyS3Location dict, containing the S3 Bucket, Key, and Version of the Swagger definition :rtype: dict
### Input: Constructs the RestApi's `BodyS3Location property`_, from the SAM Api's DefinitionUri property. :returns: a BodyS3Location dict, containing the S3 Bucket, Key, and Version of the Swagger definition :rtype: dict ### Response: #vtb def _construct_body_s3_dict(self): if isinstance(self.definition_uri, dict): if not self.definition_uri.get("Bucket", None) or not self.definition_uri.get("Key", None): raise InvalidResourceException(self.logical_id, " requires Bucket and Key properties to be specified") s3_pointer = self.definition_uri else: s3_pointer = parse_s3_uri(self.definition_uri) if s3_pointer is None: raise InvalidResourceException(self.logical_id, DefinitionUri\ ) body_s3 = { : s3_pointer[], : s3_pointer[] } if in s3_pointer: body_s3[] = s3_pointer[] return body_s3
#vtb def keep_types_s(s, types): patt = .join( + s + for s in types) return .join(re.findall(patt, + s.strip() + )).rstrip()
Keep the given types from a string Same as :meth:`keep_types` but does not use the :attr:`params` dictionary Parameters ---------- s: str The string of the returns like section types: list of str The type identifiers to keep Returns ------- str The modified string `s` with only the descriptions of `types`
### Input: Keep the given types from a string Same as :meth:`keep_types` but does not use the :attr:`params` dictionary Parameters ---------- s: str The string of the returns like section types: list of str The type identifiers to keep Returns ------- str The modified string `s` with only the descriptions of `types` ### Response: #vtb def keep_types_s(s, types): patt = .join( + s + for s in types) return .join(re.findall(patt, + s.strip() + )).rstrip()
#vtb def safe_pdist(arr, *args, **kwargs): if arr is None or len(arr) < 2: return None else: import vtool as vt arr_ = vt.atleast_nd(arr, 2) return spdist.pdist(arr_, *args, **kwargs)
Kwargs: metric = ut.absdiff SeeAlso: scipy.spatial.distance.pdist TODO: move to vtool
### Input: Kwargs: metric = ut.absdiff SeeAlso: scipy.spatial.distance.pdist TODO: move to vtool ### Response: #vtb def safe_pdist(arr, *args, **kwargs): if arr is None or len(arr) < 2: return None else: import vtool as vt arr_ = vt.atleast_nd(arr, 2) return spdist.pdist(arr_, *args, **kwargs)
#vtb def on_setup_ssh(self, b): with self._setup_ssh_out: clear_output() self._ssh_keygen() password = self.__password proxy_password = self.__proxy_password if self.hostname is None: print("Please specify the computer hostname") return if self.can_login(): print ("Password-free access is already enabled") if not self.is_in_config(): self._write_ssh_config() self.setup_counter += 1 return if not self._send_pubkey(self.hostname, self.username, password, proxycmd): print ("Could not send public key to {}".format(self.hostname)) return if not self.is_in_config(): self._write_ssh_config(proxycmd=proxycmd) if self.can_login(): self.setup_counter += 1 print("Automatic ssh setup successful :-)") return else: print("Automatic ssh setup failed, sorry :-(") return
ATTENTION: modifying the order of operations in this function can lead to unexpected problems
### Input: ATTENTION: modifying the order of operations in this function can lead to unexpected problems ### Response: #vtb def on_setup_ssh(self, b): with self._setup_ssh_out: clear_output() self._ssh_keygen() password = self.__password proxy_password = self.__proxy_password if self.hostname is None: print("Please specify the computer hostname") return if self.can_login(): print ("Password-free access is already enabled") if not self.is_in_config(): self._write_ssh_config() self.setup_counter += 1 return if not self._send_pubkey(self.hostname, self.username, password, proxycmd): print ("Could not send public key to {}".format(self.hostname)) return if not self.is_in_config(): self._write_ssh_config(proxycmd=proxycmd) if self.can_login(): self.setup_counter += 1 print("Automatic ssh setup successful :-)") return else: print("Automatic ssh setup failed, sorry :-(") return
#vtb def _Open(self, path_spec, mode=): if not path_spec.HasParent(): raise errors.PathSpecError( ) range_offset = getattr(path_spec, , None) if range_offset is None: raise errors.PathSpecError( ) range_size = getattr(path_spec, , None) if range_size is None: raise errors.PathSpecError( ) self._range_offset = range_offset self._range_size = range_size
Opens the file system defined by path specification. Args: path_spec (PathSpec): a path specification. mode (Optional[str]): file access mode. The default is 'rb' which represents read-only binary. Raises: AccessError: if the access to open the file was denied. IOError: if the file system could not be opened. PathSpecError: if the path specification is incorrect. ValueError: if the path specification is invalid.
### Input: Opens the file system defined by path specification. Args: path_spec (PathSpec): a path specification. mode (Optional[str]): file access mode. The default is 'rb' which represents read-only binary. Raises: AccessError: if the access to open the file was denied. IOError: if the file system could not be opened. PathSpecError: if the path specification is incorrect. ValueError: if the path specification is invalid. ### Response: #vtb def _Open(self, path_spec, mode=): if not path_spec.HasParent(): raise errors.PathSpecError( ) range_offset = getattr(path_spec, , None) if range_offset is None: raise errors.PathSpecError( ) range_size = getattr(path_spec, , None) if range_size is None: raise errors.PathSpecError( ) self._range_offset = range_offset self._range_size = range_size
#vtb def xml_to_namespace(xmlstr):
    xmldoc = minidom.parseString(xmlstr)
    namespace = ServiceBusNamespace()

    mappings = (
        ('Name', 'name', None),
        ('Region', 'region', None),
        ('DefaultKey', 'default_key', None),
        ('Status', 'status', None),
        ('CreatedAt', 'created_at', None),
        ('AcsManagementEndpoint', 'acs_management_endpoint', None),
        ('ServiceBusEndpoint', 'servicebus_endpoint', None),
        ('ConnectionString', 'connection_string', None),
        ('SubscriptionId', 'subscription_id', None),
        ('Enabled', 'enabled', _parse_bool),
    )

    for desc in _MinidomXmlToObject.get_children_from_path(
            xmldoc, 'entry', 'content', 'NamespaceDescription'):
        for xml_name, field_name, conversion_func in mappings:
            node_value = _MinidomXmlToObject.get_first_child_node_value(desc, xml_name)
            if node_value is not None:
                if conversion_func is not None:
                    node_value = conversion_func(node_value)
                setattr(namespace, field_name, node_value)

    return namespace
Converts xml response to service bus namespace The xml format for namespace: <entry> <id>uuid:00000000-0000-0000-0000-000000000000;id=0000000</id> <title type="text">myunittests</title> <updated>2012-08-22T16:48:10Z</updated> <content type="application/xml"> <NamespaceDescription xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <Name>myunittests</Name> <Region>West US</Region> <DefaultKey>0000000000000000000000000000000000000000000=</DefaultKey> <Status>Active</Status> <CreatedAt>2012-08-22T16:48:10.217Z</CreatedAt> <AcsManagementEndpoint>https://myunittests-sb.accesscontrol.windows.net/</AcsManagementEndpoint> <ServiceBusEndpoint>https://myunittests.servicebus.windows.net/</ServiceBusEndpoint> <ConnectionString>Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=</ConnectionString> <SubscriptionId>00000000000000000000000000000000</SubscriptionId> <Enabled>true</Enabled> </NamespaceDescription> </content> </entry>
### Input: Converts xml response to service bus namespace The xml format for namespace: <entry> <id>uuid:00000000-0000-0000-0000-000000000000;id=0000000</id> <title type="text">myunittests</title> <updated>2012-08-22T16:48:10Z</updated> <content type="application/xml"> <NamespaceDescription xmlns="http://schemas.microsoft.com/netservices/2010/10/servicebus/connect" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <Name>myunittests</Name> <Region>West US</Region> <DefaultKey>0000000000000000000000000000000000000000000=</DefaultKey> <Status>Active</Status> <CreatedAt>2012-08-22T16:48:10.217Z</CreatedAt> <AcsManagementEndpoint>https://myunittests-sb.accesscontrol.windows.net/</AcsManagementEndpoint> <ServiceBusEndpoint>https://myunittests.servicebus.windows.net/</ServiceBusEndpoint> <ConnectionString>Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=</ConnectionString> <SubscriptionId>00000000000000000000000000000000</SubscriptionId> <Enabled>true</Enabled> </NamespaceDescription> </content> </entry> ### Response: #vtb def xml_to_namespace(xmlstr): xmldoc = minidom.parseString(xmlstr) namespace = ServiceBusNamespace() mappings = ( (, , None), (, , None), (, , None), (, , None), (, , None), (, , None), (, , None), (, , None), (, , None), (, , _parse_bool), ) for desc in _MinidomXmlToObject.get_children_from_path( xmldoc, , , ): for xml_name, field_name, conversion_func in mappings: node_value = _MinidomXmlToObject.get_first_child_node_value(desc, xml_name) if node_value is not None: if conversion_func is not None: node_value = conversion_func(node_value) setattr(namespace, field_name, node_value) return namespace
#vtb def delete(self, using=None, **kwargs): return self._get_connection(using).indices.delete(index=self._name, **kwargs)
Deletes the index in elasticsearch. Any additional keyword arguments will be passed to ``Elasticsearch.indices.delete`` unchanged.
### Input: Deletes the index in elasticsearch. Any additional keyword arguments will be passed to ``Elasticsearch.indices.delete`` unchanged. ### Response: #vtb def delete(self, using=None, **kwargs): return self._get_connection(using).indices.delete(index=self._name, **kwargs)
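A hedged usage sketch in the elasticsearch-dsl style; `ignore=404` is a standard elasticsearch-py transport option that tolerates an already-missing index:

idx = Index('blog-posts')
idx.delete(ignore=404)   # forwarded to Elasticsearch.indices.delete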
#vtb def workon(ctx, issue_id, new, base_branch): lancet = ctx.obj if not issue_id and not new: raise click.UsageError("Provide either an issue ID or the --new flag.") elif issue_id and new: raise click.UsageError( "Provide either an issue ID or the --new flag, but not both." ) if new: summary = click.prompt("Issue summary") issue = create_issue( lancet, summary=summary, add_to_active_sprint=True ) else: issue = get_issue(lancet, issue_id) username = lancet.tracker.whoami() active_status = lancet.config.get("tracker", "active_status") if not base_branch: base_branch = lancet.config.get("repository", "base_branch") branch = get_branch(lancet, issue, base_branch) transition = get_transition(ctx, lancet, issue, active_status) assign_issue(lancet, issue, username, active_status) set_issue_status(lancet, issue, active_status, transition) with taskstatus("Checking out working branch") as ts: lancet.repo.checkout(branch.name) ts.ok(.format(base_branch)) with taskstatus("Starting harvest timer") as ts: lancet.timer.start(issue) ts.ok("Started harvest timer")
Start work on a given issue. This command retrieves the issue from the issue tracker, creates and checks out a new aptly-named branch, puts the issue in the configured active status, assigns it to you and starts a correctly linked Harvest timer. If a branch with the same name as the one to be created already exists, it is checked out instead. Variations in the branch name occurring after the issue ID are accounted for and the branch renamed to match the new issue summary. If the `default_project` directive is correctly configured, it is enough to give the issue ID (instead of the full project prefix + issue ID).
### Input: Start work on a given issue. This command retrieves the issue from the issue tracker, creates and checks out a new aptly-named branch, puts the issue in the configured active, status, assigns it to you and starts a correctly linked Harvest timer. If a branch with the same name as the one to be created already exists, it is checked out instead. Variations in the branch name occuring after the issue ID are accounted for and the branch renamed to match the new issue summary. If the `default_project` directive is correctly configured, it is enough to give the issue ID (instead of the full project prefix + issue ID). ### Response: #vtb def workon(ctx, issue_id, new, base_branch): lancet = ctx.obj if not issue_id and not new: raise click.UsageError("Provide either an issue ID or the --new flag.") elif issue_id and new: raise click.UsageError( "Provide either an issue ID or the --new flag, but not both." ) if new: summary = click.prompt("Issue summary") issue = create_issue( lancet, summary=summary, add_to_active_sprint=True ) else: issue = get_issue(lancet, issue_id) username = lancet.tracker.whoami() active_status = lancet.config.get("tracker", "active_status") if not base_branch: base_branch = lancet.config.get("repository", "base_branch") branch = get_branch(lancet, issue, base_branch) transition = get_transition(ctx, lancet, issue, active_status) assign_issue(lancet, issue, username, active_status) set_issue_status(lancet, issue, active_status, transition) with taskstatus("Checking out working branch") as ts: lancet.repo.checkout(branch.name) ts.ok(.format(base_branch)) with taskstatus("Starting harvest timer") as ts: lancet.timer.start(issue) ts.ok("Started harvest timer")
#vtb def delete(name, root=None):
    cmd = ['groupdel']
    if root is not None:
        cmd.extend(('-R', root))
    cmd.append(name)
    ret = __salt__['cmd.run_all'](cmd, python_shell=False)
    return not ret['retcode']
Remove the named group name Name group to delete root Directory to chroot into CLI Example: .. code-block:: bash salt '*' group.delete foo
### Input: Remove the named group name Name group to delete root Directory to chroot into CLI Example: .. code-block:: bash salt '*' group.delete foo ### Response: #vtb def delete(name, root=None):
    cmd = ['groupdel']
    if root is not None:
        cmd.extend(('-R', root))
    cmd.append(name)
    ret = __salt__['cmd.run_all'](cmd, python_shell=False)
    return not ret['retcode']
#vtb def process_pulls(self, testpulls=None, testarchive=None, expected=None): from datetime import datetime pulls = self.find_pulls(None if testpulls is None else testpulls.values()) for reponame in pulls: for pull in pulls[reponame]: try: archive = self.archive[pull.repokey] if pull.snumber in archive: pull.init(archive[pull.snumber]) else: pull.init({}) if self.testmode and testarchive is not None: if pull.number in testarchive[pull.repokey]: start = testarchive[pull.repokey][pull.number]["start"] else: start = datetime(2015, 4, 23, 13, 8) else: start = datetime.now() archive[pull.snumber] = {"success": False, "start": start, "number": pull.number, "stage": pull.repodir, "completed": False, "finished": None} self._save_archive() pull.begin() self.cron.email(pull.repo.name, "start", self._get_fields("start", pull), self.testmode) pull.test(expected[pull.number]) pull.finalize() archive[pull.snumber]["completed"] = True archive[pull.snumber]["success"] = abs(pull.percent - 1) < 1e-12 if (self.testmode and testarchive is not None and pull.number in testarchive[pull.repokey] and testarchive[pull.repokey][pull.number]["finished"] is not None): archive[pull.snumber]["finished"] = testarchive[pull.repokey][pull.number]["finished"] elif self.testmode: archive[pull.snumber]["finished"] = datetime(2015, 4, 23, 13, 9) else: err(errmsg) self.cron.email(pull.repo.name, "error", self._get_fields("error", pull, errmsg), self.testmode)
Runs self.find_pulls() *and* processes the pull requests unit tests, status updates and wiki page creations. :arg expected: for unit testing the output results that would be returned from running the tests in real time.
### Input: Runs self.find_pulls() *and* processes the pull requests unit tests, status updates and wiki page creations. :arg expected: for unit testing the output results that would be returned from running the tests in real time. ### Response: #vtb def process_pulls(self, testpulls=None, testarchive=None, expected=None): from datetime import datetime pulls = self.find_pulls(None if testpulls is None else testpulls.values()) for reponame in pulls: for pull in pulls[reponame]: try: archive = self.archive[pull.repokey] if pull.snumber in archive: pull.init(archive[pull.snumber]) else: pull.init({}) if self.testmode and testarchive is not None: if pull.number in testarchive[pull.repokey]: start = testarchive[pull.repokey][pull.number]["start"] else: start = datetime(2015, 4, 23, 13, 8) else: start = datetime.now() archive[pull.snumber] = {"success": False, "start": start, "number": pull.number, "stage": pull.repodir, "completed": False, "finished": None} self._save_archive() pull.begin() self.cron.email(pull.repo.name, "start", self._get_fields("start", pull), self.testmode) pull.test(expected[pull.number]) pull.finalize() archive[pull.snumber]["completed"] = True archive[pull.snumber]["success"] = abs(pull.percent - 1) < 1e-12 if (self.testmode and testarchive is not None and pull.number in testarchive[pull.repokey] and testarchive[pull.repokey][pull.number]["finished"] is not None): archive[pull.snumber]["finished"] = testarchive[pull.repokey][pull.number]["finished"] elif self.testmode: archive[pull.snumber]["finished"] = datetime(2015, 4, 23, 13, 9) else: err(errmsg) self.cron.email(pull.repo.name, "error", self._get_fields("error", pull, errmsg), self.testmode)
#vtb def load_case(adapter, case_obj, update=False): existing_case = adapter.case(case_obj) if existing_case: if not update: raise CaseError("Case {0} already exists in database".format(case_obj[])) case_obj = update_case(case_obj, existing_case) try: adapter.add_case(case_obj, update=update) except CaseError as err: raise err return case_obj
Load a case to the database Args: adapter: Connection to database case_obj: dict update(bool): If existing case should be updated Returns: case_obj(models.Case)
### Input: Load a case to the database Args: adapter: Connection to database case_obj: dict update(bool): If existing case should be updated Returns: case_obj(models.Case) ### Response: #vtb def load_case(adapter, case_obj, update=False): existing_case = adapter.case(case_obj) if existing_case: if not update: raise CaseError("Case {0} already exists in database".format(case_obj[])) case_obj = update_case(case_obj, existing_case) try: adapter.add_case(case_obj, update=update) except CaseError as err: raise err return case_obj
#vtb def AppendFlagsIntoFile(self, filename): with open(filename, ) as out_file: out_file.write(self.FlagsIntoString())
Appends all flags assignments from this FlagInfo object to a file. Output will be in the format of a flagfile. NOTE: MUST mirror the behavior of the C++ AppendFlagsIntoFile from http://code.google.com/p/google-gflags Args: filename: string, name of the file.
### Input: Appends all flags assignments from this FlagInfo object to a file. Output will be in the format of a flagfile. NOTE: MUST mirror the behavior of the C++ AppendFlagsIntoFile from http://code.google.com/p/google-gflags Args: filename: string, name of the file. ### Response: #vtb def AppendFlagsIntoFile(self, filename): with open(filename, ) as out_file: out_file.write(self.FlagsIntoString())
#vtb def reshape(self, input_shapes): indptr = [0] sdata = [] keys = [] for k, v in input_shapes.items(): if not isinstance(v, tuple): raise ValueError("Expect input_shapes to be dict str->tuple") keys.append(c_str(k)) sdata.extend(v) indptr.append(len(sdata)) new_handle = PredictorHandle() _check_call(_LIB.MXPredReshape( mx_uint(len(indptr) - 1), c_array(ctypes.c_char_p, keys), c_array(mx_uint, indptr), c_array(mx_uint, sdata), self.handle, ctypes.byref(new_handle))) _check_call(_LIB.MXPredFree(self.handle)) self.handle = new_handle
Change the input shape of the predictor. Parameters ---------- input_shapes : dict of str to tuple The new shape of input data. Examples -------- >>> predictor.reshape({'data':data_shape_tuple})
### Input: Change the input shape of the predictor. Parameters ---------- input_shapes : dict of str to tuple The new shape of input data. Examples -------- >>> predictor.reshape({'data':data_shape_tuple}) ### Response: #vtb def reshape(self, input_shapes): indptr = [0] sdata = [] keys = [] for k, v in input_shapes.items(): if not isinstance(v, tuple): raise ValueError("Expect input_shapes to be dict str->tuple") keys.append(c_str(k)) sdata.extend(v) indptr.append(len(sdata)) new_handle = PredictorHandle() _check_call(_LIB.MXPredReshape( mx_uint(len(indptr) - 1), c_array(ctypes.c_char_p, keys), c_array(mx_uint, indptr), c_array(mx_uint, sdata), self.handle, ctypes.byref(new_handle))) _check_call(_LIB.MXPredFree(self.handle)) self.handle = new_handle
#vtb def pix2vec(nside, ipix, nest=False):
    lon, lat = healpix_to_lonlat(ipix, nside, order='nested' if nest else 'ring')
    return ang2vec(*_lonlat_to_healpy(lon, lat))
Drop-in replacement for healpy `~healpy.pixelfunc.pix2vec`.
### Input: Drop-in replacement for healpy `~healpy.pixelfunc.pix2vec`. ### Response: #vtb def pix2vec(nside, ipix, nest=False):
    lon, lat = healpix_to_lonlat(ipix, nside, order='nested' if nest else 'ring')
    return ang2vec(*_lonlat_to_healpy(lon, lat))
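Usage mirrors healpy; the result is the unit vector for the pixel center:

x, y, z = pix2vec(16, 1440)             # RING ordering (default)
x, y, z = pix2vec(16, 1440, nest=True)  # NESTED ordering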
#vtb def free(self, connection): LOGGER.debug(, self.id, id(connection)) try: self.connection_handle(connection).free() except KeyError: raise ConnectionNotFoundError(self.id, id(connection)) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = self.time_method() LOGGER.debug(, self.id, id(connection))
Free the connection from use by the session that was using it. :param connection: The connection to free :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError
### Input: Free the connection from use by the session that was using it. :param connection: The connection to free :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError ### Response: #vtb def free(self, connection): LOGGER.debug(, self.id, id(connection)) try: self.connection_handle(connection).free() except KeyError: raise ConnectionNotFoundError(self.id, id(connection)) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = self.time_method() LOGGER.debug(, self.id, id(connection))
#vtb def to_key_val_list(value):
    if value is None:
        return None

    if isinstance(value, (str, bytes, bool, int)):
        raise ValueError('cannot encode objects that are not 2-tuples')

    if isinstance(value, collections.Mapping):
        value = value.items()

    return list(value)
Take an object and test to see if it can be represented as a dictionary. If it can be, return a list of tuples, e.g., :: >>> to_key_val_list([('key', 'val')]) [('key', 'val')] >>> to_key_val_list({'key': 'val'}) [('key', 'val')] >>> to_key_val_list('string') ValueError: cannot encode objects that are not 2-tuples. :rtype: list
### Input: Take an object and test to see if it can be represented as a dictionary. If it can be, return a list of tuples, e.g., :: >>> to_key_val_list([('key', 'val')]) [('key', 'val')] >>> to_key_val_list({'key': 'val'}) [('key', 'val')] >>> to_key_val_list('string') ValueError: cannot encode objects that are not 2-tuples. :rtype: list ### Response: #vtb def to_key_val_list(value):
    if value is None:
        return None

    if isinstance(value, (str, bytes, bool, int)):
        raise ValueError('cannot encode objects that are not 2-tuples')

    if isinstance(value, collections.Mapping):
        value = value.items()

    return list(value)
#vtb def logjacobian(self, **params):
    return numpy.log(abs(transforms.compute_jacobian(
        params, self.sampling_transforms, inverse=True)))
r"""Returns the log of the jacobian needed to transform pdfs in the ``variable_params`` parameter space to the ``sampling_params`` parameter space. Let :math:`\mathbf{x}` be the set of variable parameters, :math:`\mathbf{y} = f(\mathbf{x})` the set of sampling parameters, and :math:`p_x(\mathbf{x})` a probability density function defined over :math:`\mathbf{x}`. The corresponding pdf in :math:`\mathbf{y}` is then: .. math:: p_y(\mathbf{y}) = p_x(\mathbf{x})\left|\mathrm{det}\,\mathbf{J}_{ij}\right|, where :math:`\mathbf{J}_{ij}` is the Jacobian of the inverse transform :math:`\mathbf{x} = g(\mathbf{y})`. This has elements: .. math:: \mathbf{J}_{ij} = \frac{\partial g_i}{\partial{y_j}} This function returns :math:`\log \left|\mathrm{det}\,\mathbf{J}_{ij}\right|`. Parameters ---------- \**params : The keyword arguments should specify values for all of the variable args and all of the sampling args. Returns ------- float : The value of the jacobian.
### Input: r"""Returns the log of the jacobian needed to transform pdfs in the ``variable_params`` parameter space to the ``sampling_params`` parameter space. Let :math:`\mathbf{x}` be the set of variable parameters, :math:`\mathbf{y} = f(\mathbf{x})` the set of sampling parameters, and :math:`p_x(\mathbf{x})` a probability density function defined over :math:`\mathbf{x}`. The corresponding pdf in :math:`\mathbf{y}` is then: .. math:: p_y(\mathbf{y}) = p_x(\mathbf{x})\left|\mathrm{det}\,\mathbf{J}_{ij}\right|, where :math:`\mathbf{J}_{ij}` is the Jacobian of the inverse transform :math:`\mathbf{x} = g(\mathbf{y})`. This has elements: .. math:: \mathbf{J}_{ij} = \frac{\partial g_i}{\partial{y_j}} This function returns :math:`\log \left|\mathrm{det}\,\mathbf{J}_{ij}\right|`. Parameters ---------- \**params : The keyword arguments should specify values for all of the variable args and all of the sampling args. Returns ------- float : The value of the jacobian. ### Response: #vtb def logjacobian(self, **params): r return numpy.log(abs(transforms.compute_jacobian( params, self.sampling_transforms, inverse=True)))
#vtb def set_xticks(self, row, column, ticks): subplot = self.get_subplot_at(row, column) subplot.set_xticks(ticks)
Manually specify the x-axis tick values. :param row,column: specify the subplot. :param ticks: list of tick values.
### Input: Manually specify the x-axis tick values. :param row,column: specify the subplot. :param ticks: list of tick values. ### Response: #vtb def set_xticks(self, row, column, ticks): subplot = self.get_subplot_at(row, column) subplot.set_xticks(ticks)
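For a figure object with a grid of subplots, ticks for the top-left panel could be set like this (the `plot` variable is hypothetical):

plot.set_xticks(0, 0, [0, 5, 10, 15, 20])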
#vtb def verifyZeroInteractions(*objs): for obj in objs: theMock = _get_mock_or_raise(obj) if len(theMock.invocations) > 0: raise VerificationError( "\nUnwanted interaction: %s" % theMock.invocations[0])
Verify that no methods have been called on given objs. Note that strict mocks usually throw early on unexpected, unstubbed invocations. Partial mocks ('monkeypatched' objects or modules) do not support this functionality at all, because actual usage is only recorded for stubbed invocations. So this function is of limited use nowadays.
### Input: Verify that no methods have been called on given objs. Note that strict mocks usually throw early on unexpected, unstubbed invocations. Partial mocks ('monkeypatched' objects or modules) do not support this functionality at all, because actual usage is only recorded for stubbed invocations. So this function is of limited use nowadays. ### Response: #vtb def verifyZeroInteractions(*objs): for obj in objs: theMock = _get_mock_or_raise(obj) if len(theMock.invocations) > 0: raise VerificationError( "\nUnwanted interaction: %s" % theMock.invocations[0])
#vtb def is_supported(value, check_all=False, filters=None, iterate=False): assert filters is not None if value is None: return True if not is_editable_type(value): return False elif not isinstance(value, filters): return False elif iterate: if isinstance(value, (list, tuple, set)): valid_count = 0 for val in value: if is_supported(val, filters=filters, iterate=check_all): valid_count += 1 if not check_all: break return valid_count > 0 elif isinstance(value, dict): for key, val in list(value.items()): if not is_supported(key, filters=filters, iterate=check_all) \ or not is_supported(val, filters=filters, iterate=check_all): return False if not check_all: break return True
Return True if the value is supported, False otherwise
### Input: Return True if the value is supported, False otherwise ### Response: #vtb def is_supported(value, check_all=False, filters=None, iterate=False): assert filters is not None if value is None: return True if not is_editable_type(value): return False elif not isinstance(value, filters): return False elif iterate: if isinstance(value, (list, tuple, set)): valid_count = 0 for val in value: if is_supported(val, filters=filters, iterate=check_all): valid_count += 1 if not check_all: break return valid_count > 0 elif isinstance(value, dict): for key, val in list(value.items()): if not is_supported(key, filters=filters, iterate=check_all) \ or not is_supported(val, filters=filters, iterate=check_all): return False if not check_all: break return True
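A hedged example of how a variable explorer might call this; the filter tuple is an assumption, and the result also depends on the unshown `is_editable_type` helper:

allowed = (bool, int, float, str, list, tuple, dict, set)
is_supported([1, 'a', 2.5], filters=allowed, iterate=True)   # True: at least one element passes
is_supported(object(), filters=allowed)                      # False: type not in filters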
#vtb def assert_script_in_current_directory():
    script = sys.argv[0]
    assert os.path.abspath(os.path.dirname(script)) == os.path.abspath('.'), \
        f"Change into directory of script {script} and run again."
Fail with an assertion error if the current directory differs from the script's location
### Input: Fail with an assertion error if the current directory differs from the script's location ### Response: #vtb def assert_script_in_current_directory():
    script = sys.argv[0]
    assert os.path.abspath(os.path.dirname(script)) == os.path.abspath('.'), \
        f"Change into directory of script {script} and run again."
#vtb def _does_require_deprecation(self): for index, version_number in enumerate(self.current_version[0][:2]): if version_number > self.version_yaml[index]: return True return False
Check if we have to put the previous version into the deprecated list.
### Input: Check if we have to put the previous version into the deprecated list. ### Response: #vtb def _does_require_deprecation(self): for index, version_number in enumerate(self.current_version[0][:2]): if version_number > self.version_yaml[index]: return True return False
#vtb def id(self, obj): vid = self.obj2id[obj] if vid not in self.id2obj: self.id2obj[vid] = obj return vid
The method is to be used to assign an integer variable ID for a given new object. If the object already has an ID, no new ID is created and the old one is returned instead. An object can be anything. In some cases it is convenient to use string variable names. :param obj: an object to assign an ID to. :rtype: int. Example: .. code-block:: python >>> from pysat.formula import IDPool >>> vpool = IDPool(occupied=[[12, 18], [3, 10]]) >>> >>> # creating 5 unique variables for the following strings >>> for i in range(5): ... print vpool.id('v{0}'.format(i + 1)) 1 2 11 19 20 In some cases, it makes sense to create an external function for accessing IDPool, e.g.: .. code-block:: python >>> # continuing the previous example >>> var = lambda i: vpool.id('var{0}'.format(i)) >>> var(5) 20 >>> var('hello_world!') 21
### Input: The method is to be used to assign an integer variable ID for a given new object. If the object already has an ID, no new ID is created and the old one is returned instead. An object can be anything. In some cases it is convenient to use string variable names. :param obj: an object to assign an ID to. :rtype: int. Example: .. code-block:: python >>> from pysat.formula import IDPool >>> vpool = IDPool(occupied=[[12, 18], [3, 10]]) >>> >>> # creating 5 unique variables for the following strings >>> for i in range(5): ... print vpool.id('v{0}'.format(i + 1)) 1 2 11 19 20 In some cases, it makes sense to create an external function for accessing IDPool, e.g.: .. code-block:: python >>> # continuing the previous example >>> var = lambda i: vpool.id('var{0}'.format(i)) >>> var(5) 20 >>> var('hello_world!') 21 ### Response: #vtb def id(self, obj): vid = self.obj2id[obj] if vid not in self.id2obj: self.id2obj[vid] = obj return vid
#vtb def scan_resource(self, pkg, path):
    for fname in resource_listdir(pkg, path):
        if fname.endswith(TABLE_EXT):
            table_path = posixpath.join(path, fname)
            with contextlib.closing(resource_stream(pkg, table_path)) as stream:
                self.add_colortable(stream,
                                    posixpath.splitext(posixpath.basename(fname))[0])
r"""Scan a resource directory for colortable files and add them to the registry. Parameters ---------- pkg : str The package containing the resource directory path : str The path to the directory with the color tables
### Input: r"""Scan a resource directory for colortable files and add them to the registry. Parameters ---------- pkg : str The package containing the resource directory path : str The path to the directory with the color tables ### Response: #vtb def scan_resource(self, pkg, path): r for fname in resource_listdir(pkg, path): if fname.endswith(TABLE_EXT): table_path = posixpath.join(path, fname) with contextlib.closing(resource_stream(pkg, table_path)) as stream: self.add_colortable(stream, posixpath.splitext(posixpath.basename(fname))[0])
#vtb def __prepare_domain(data): if not data: raise JIDError("Domain must be given") data = unicode(data) if not data: raise JIDError("Domain must be given") if u in data: if data[0] == u and data[-1] == u: try: addr = _validate_ip_address(socket.AF_INET6, data[1:-1]) return "[{0}]".format(addr) except ValueError, err: logger.debug("ValueError: {0}".format(err)) raise JIDError(u"Invalid IPv6 literal in JID domainpart") else: raise JIDError(u"Invalid use of or in JID domainpart") elif data[0].isdigit() and data[-1].isdigit(): try: addr = _validate_ip_address(socket.AF_INET, data) except ValueError, err: logger.debug("ValueError: {0}".format(err)) data = UNICODE_DOT_RE.sub(u".", data) data = data.rstrip(u".") labels = data.split(u".") try: labels = [idna.nameprep(label) for label in labels] except UnicodeError: raise JIDError(u"Domain name invalid") for label in labels: if not STD3_LABEL_RE.match(label): raise JIDError(u"Domain name invalid") try: idna.ToASCII(label) except UnicodeError: raise JIDError(u"Domain name invalid") domain = u".".join(labels) if len(domain.encode("utf-8")) > 1023: raise JIDError(u"Domain name too long") return domain
Prepare domainpart of the JID. :Parameters: - `data`: Domain part of the JID :Types: - `data`: `unicode` :raise JIDError: if the domain name is too long.
### Input: Prepare domainpart of the JID. :Parameters: - `data`: Domain part of the JID :Types: - `data`: `unicode` :raise JIDError: if the domain name is too long. ### Response: #vtb def __prepare_domain(data): if not data: raise JIDError("Domain must be given") data = unicode(data) if not data: raise JIDError("Domain must be given") if u in data: if data[0] == u and data[-1] == u: try: addr = _validate_ip_address(socket.AF_INET6, data[1:-1]) return "[{0}]".format(addr) except ValueError, err: logger.debug("ValueError: {0}".format(err)) raise JIDError(u"Invalid IPv6 literal in JID domainpart") else: raise JIDError(u"Invalid use of or in JID domainpart") elif data[0].isdigit() and data[-1].isdigit(): try: addr = _validate_ip_address(socket.AF_INET, data) except ValueError, err: logger.debug("ValueError: {0}".format(err)) data = UNICODE_DOT_RE.sub(u".", data) data = data.rstrip(u".") labels = data.split(u".") try: labels = [idna.nameprep(label) for label in labels] except UnicodeError: raise JIDError(u"Domain name invalid") for label in labels: if not STD3_LABEL_RE.match(label): raise JIDError(u"Domain name invalid") try: idna.ToASCII(label) except UnicodeError: raise JIDError(u"Domain name invalid") domain = u".".join(labels) if len(domain.encode("utf-8")) > 1023: raise JIDError(u"Domain name too long") return domain
#vtb def get_profile():
    argument_parser = ThrowingArgumentParser(add_help=False)
    argument_parser.add_argument('--profile')
    try:
        args, _ = argument_parser.parse_known_args()
    except ArgumentParserError:
        return Profile()
    imported = get_module(args.profile)
    profile = get_module_profile(imported)
    if not profile:
        raise Exception(f"Can't get a profile from {imported}.")
    return profile
Prefetch the profile module, to fill some holes in the help text.
### Input: Prefetch the profile module, to fill some holes in the help text. ### Response: #vtb def get_profile():
    argument_parser = ThrowingArgumentParser(add_help=False)
    argument_parser.add_argument('--profile')
    try:
        args, _ = argument_parser.parse_known_args()
    except ArgumentParserError:
        return Profile()
    imported = get_module(args.profile)
    profile = get_module_profile(imported)
    if not profile:
        raise Exception(f"Can't get a profile from {imported}.")
    return profile
#vtb def _dump_cnt(self):
    self._cnt['1h'].dump(os.path.join(self.data_path, 'scheduler.1h'))
    self._cnt['1d'].dump(os.path.join(self.data_path, 'scheduler.1d'))
    self._cnt['all'].dump(os.path.join(self.data_path, 'scheduler.all'))
Dump counters to file
### Input: Dump counters to file ### Response: #vtb def _dump_cnt(self):
    self._cnt['1h'].dump(os.path.join(self.data_path, 'scheduler.1h'))
    self._cnt['1d'].dump(os.path.join(self.data_path, 'scheduler.1d'))
    self._cnt['all'].dump(os.path.join(self.data_path, 'scheduler.all'))
#vtb def check_arguments(cls, conf):
    try:
        f = open(conf['file'], "r")
        f.close()
    except IOError as e:
        raise ArgsError("Cannot open config file '%s': %s" % (conf['file'], e))
Sanity checks for options needed for configfile mode.
### Input: Sanity checks for options needed for configfile mode. ### Response: #vtb def check_arguments(cls, conf):
    try:
        f = open(conf['file'], "r")
        f.close()
    except IOError as e:
        raise ArgsError("Cannot open config file '%s': %s" % (conf['file'], e))
#vtb def delete(cls, object_version, key=None): with db.session.begin_nested(): q = cls.query.filter_by( version_id=as_object_version_id(object_version)) if key: q = q.filter_by(key=key) q.delete()
Delete tags. :param object_version: The object version instance or id. :param key: Key of the tag to delete. Default: delete all tags.
### Input: Delete tags. :param object_version: The object version instance or id. :param key: Key of the tag to delete. Default: delete all tags. ### Response: #vtb def delete(cls, object_version, key=None): with db.session.begin_nested(): q = cls.query.filter_by( version_id=as_object_version_id(object_version)) if key: q = q.filter_by(key=key) q.delete()
#vtb def run_jar(self, mem=None):
    cmd = config.get_command('java')
    if mem:
        cmd.append('-Xmx%s' % mem)
    cmd.append('-jar')
    cmd += self.cmd
    self.run(cmd)
Special case of run() when the executable is a JAR file.
### Input: Special case of run() when the executable is a JAR file. ### Response: #vtb def run_jar(self, mem=None):
    cmd = config.get_command('java')
    if mem:
        cmd.append('-Xmx%s' % mem)
    cmd.append('-jar')
    cmd += self.cmd
    self.run(cmd)
#vtb def invert_node_predicate(node_predicate: NodePredicate) -> NodePredicate: def inverse_predicate(graph: BELGraph, node: BaseEntity) -> bool: return not node_predicate(graph, node) return inverse_predicate
Build a node predicate that is the inverse of the given node predicate.
### Input: Build a node predicate that is the inverse of the given node predicate. ### Response: #vtb def invert_node_predicate(node_predicate: NodePredicate) -> NodePredicate: def inverse_predicate(graph: BELGraph, node: BaseEntity) -> bool: return not node_predicate(graph, node) return inverse_predicate
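A small sketch of the inversion above; any graph whose degree() the predicate can query works for illustration, so networkx stands in for BELGraph here:

import networkx as nx

def is_isolated(graph, node):  # matches the (graph, node) predicate signature
    return graph.degree(node) == 0

not_isolated = invert_node_predicate(is_isolated)

g = nx.Graph()
g.add_edge("a", "b")
g.add_node("c")
assert not_isolated(g, "a") is True   # "a" has an edge
assert not_isolated(g, "c") is False  # "c" is isolated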
#vtb def get_file(original_file): import cStringIO import boto3 s3 = boto3.resource() bucket_name, object_key = _parse_s3_file(original_file) logger.debug("Downloading {0} from {1}".format(object_key, bucket_name)) bucket = s3.Bucket(bucket_name) output = cStringIO.StringIO() bucket.download_fileobj(object_key, output) output.reset() return output
The original file should be of the form s3://bucketname/path/to/file.txt. Returns a buffer with the file contents.
### Input: The original file should be of the form s3://bucketname/path/to/file.txt. Returns a buffer with the file contents. ### Response: #vtb def get_file(original_file): import cStringIO import boto3 s3 = boto3.resource() bucket_name, object_key = _parse_s3_file(original_file) logger.debug("Downloading {0} from {1}".format(object_key, bucket_name)) bucket = s3.Bucket(bucket_name) output = cStringIO.StringIO() bucket.download_fileobj(object_key, output) output.reset() return output
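A hedged usage sketch for get_file above; the bucket and key in the path are hypothetical, and the cStringIO import pins this to Python 2:

buf = get_file("s3://my-bucket/reports/summary.txt")  # hypothetical S3 path
print(buf.read())  # the whole object, already rewound by output.reset()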
#vtb def serve(): logging.getLogger().setLevel(logging.DEBUG) logging.info() tracer = Tracer( service_name=, reporter=NullReporter(), sampler=ConstSampler(decision=True)) opentracing.tracer = tracer tchannel = TChannel(name=, hostport= % DEFAULT_SERVER_PORT, trace=True) register_tchannel_handlers(tchannel=tchannel) tchannel.listen() app = tornado.web.Application(debug=True) register_http_handlers(app) app.listen(DEFAULT_CLIENT_PORT) tornado.ioloop.IOLoop.current().start()
main entry point
### Input: main entry point ### Response: #vtb def serve(): logging.getLogger().setLevel(logging.DEBUG) logging.info() tracer = Tracer( service_name=, reporter=NullReporter(), sampler=ConstSampler(decision=True)) opentracing.tracer = tracer tchannel = TChannel(name=, hostport= % DEFAULT_SERVER_PORT, trace=True) register_tchannel_handlers(tchannel=tchannel) tchannel.listen() app = tornado.web.Application(debug=True) register_http_handlers(app) app.listen(DEFAULT_CLIENT_PORT) tornado.ioloop.IOLoop.current().start()
#vtb def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def blacklist(app): return BlacklistFilter(app, conf) return blacklist
Returns a WSGI filter app for use with paste.deploy.
### Input: Returns a WSGI filter app for use with paste.deploy. ### Response: #vtb def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def blacklist(app): return BlacklistFilter(app, conf) return blacklist
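A minimal sketch of how a paste.deploy-style stack would call filter_factory above; the extra option name (blacklist_path) and the behaviour of BlacklistFilter are assumptions:

def hello_app(environ, start_response):  # a trivial WSGI app for illustration
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

make_filter = filter_factory({}, blacklist_path="/etc/blacklist")  # hypothetical option
wrapped_app = make_filter(hello_app)  # wrapped_app is a BlacklistFilter around hello_app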
#vtb def _save_image(self, name, format=): dialog = QtGui.QFileDialog(self._control, ) dialog.setAcceptMode(QtGui.QFileDialog.AcceptSave) dialog.setDefaultSuffix(format.lower()) dialog.setNameFilter( % (format, format.lower())) if dialog.exec_(): filename = dialog.selectedFiles()[0] image = self._get_image(name) image.save(filename, format)
Shows a save dialog for the ImageResource with 'name'.
### Input: Shows a save dialog for the ImageResource with 'name'. ### Response: #vtb def _save_image(self, name, format=): dialog = QtGui.QFileDialog(self._control, ) dialog.setAcceptMode(QtGui.QFileDialog.AcceptSave) dialog.setDefaultSuffix(format.lower()) dialog.setNameFilter( % (format, format.lower())) if dialog.exec_(): filename = dialog.selectedFiles()[0] image = self._get_image(name) image.save(filename, format)
#vtb def wait_actions_on_objects(self, objects, wait_interval=None, wait_time=None): acts = [] for o in objects: a = o.fetch_last_action() if a is None: yield o else: acts.append(a) for a in self.wait_actions(acts, wait_interval, wait_time): yield a.fetch_resource()
.. versionadded:: 0.2.0 Poll the server periodically until the most recent action on each resource in ``objects`` has finished, yielding each resource's final state when the corresponding action is done. If ``wait_time`` is exceeded, a `WaitTimeoutError` (containing any remaining in-progress actions) is raised. If a `KeyboardInterrupt` is caught, any remaining actions are returned immediately without waiting for completion. :param iterable objects: an iterable of resource objects that have ``fetch_last_action`` methods :param number wait_interval: how many seconds to sleep between requests; defaults to :attr:`wait_interval` if not specified or `None` :param number wait_time: the total number of seconds after which the method will raise an error if any actions have not yet completed, or a negative number to wait indefinitely; defaults to :attr:`wait_time` if not specified or `None` :rtype: generator of objects :raises DOAPIError: if the API endpoint replies with an error :raises WaitTimeoutError: if ``wait_time`` is exceeded
### Input: .. versionadded:: 0.2.0 Poll the server periodically until the most recent action on each resource in ``objects`` has finished, yielding each resource's final state when the corresponding action is done. If ``wait_time`` is exceeded, a `WaitTimeoutError` (containing any remaining in-progress actions) is raised. If a `KeyboardInterrupt` is caught, any remaining actions are returned immediately without waiting for completion. :param iterable objects: an iterable of resource objects that have ``fetch_last_action`` methods :param number wait_interval: how many seconds to sleep between requests; defaults to :attr:`wait_interval` if not specified or `None` :param number wait_time: the total number of seconds after which the method will raise an error if any actions have not yet completed, or a negative number to wait indefinitely; defaults to :attr:`wait_time` if not specified or `None` :rtype: generator of objects :raises DOAPIError: if the API endpoint replies with an error :raises WaitTimeoutError: if ``wait_time`` is exceeded ### Response: #vtb def wait_actions_on_objects(self, objects, wait_interval=None, wait_time=None): acts = [] for o in objects: a = o.fetch_last_action() if a is None: yield o else: acts.append(a) for a in self.wait_actions(acts, wait_interval, wait_time): yield a.fetch_resource()
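A hedged usage sketch; client and droplets are assumed to be a doapi client instance and an iterable of its droplet objects:

for droplet in client.wait_actions_on_objects(droplets, wait_interval=5):
    print(droplet.id, "finished its last action")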
#vtb def overload(fn): if not isfunction(fn): raise TypeError() spec = getargspec(fn) args = spec.args if not spec.varargs and (len(args) < 2 or args[1] != ): raise ValueError() @functools.wraps(fn) def decorator(*args, **kw): if len(args) < 2: return PipeOverloader(fn, args, kw) return fn(*args, **kw) return decorator
Overload a given callable object so it can be used with the ``|`` operator. This is especially useful for composing a pipeline of transformations over a single data set. Arguments: fn (function): target function to decorate. Raises: TypeError: if a function or coroutine function is not provided. Returns: function: decorated function
### Input: Overload a given callable object so it can be used with the ``|`` operator. This is especially useful for composing a pipeline of transformations over a single data set. Arguments: fn (function): target function to decorate. Raises: TypeError: if a function or coroutine function is not provided. Returns: function: decorated function ### Response: #vtb def overload(fn): if not isfunction(fn): raise TypeError() spec = getargspec(fn) args = spec.args if not spec.varargs and (len(args) < 2 or args[1] != ): raise ValueError() @functools.wraps(fn) def decorator(*args, **kw): if len(args) < 2: return PipeOverloader(fn, args, kw) return fn(*args, **kw) return decorator
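A simplified illustration of the pipe pattern the decorator enables; this Pipe class is only a stand-in for the real PipeOverloader, whose internals are not shown here:

class Pipe(object):
    def __init__(self, fn, *args, **kw):
        self.fn, self.args, self.kw = fn, args, kw
    def __ror__(self, data):  # invoked by: data | Pipe(fn, ...)
        return self.fn(data, *self.args, **self.kw)

def take(data, n):
    return data[:n]

assert ([1, 2, 3, 4] | Pipe(take, 2)) == [1, 2]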
#vtb def _bind(self): main_window = self.main_window handlers = self.handlers c_handlers = self.cell_handlers self.Bind(wx.EVT_MOUSEWHEEL, handlers.OnMouseWheel) self.Bind(wx.EVT_KEY_DOWN, handlers.OnKey) self.GetGridWindow().Bind(wx.EVT_MOTION, handlers.OnMouseMotion) self.Bind(wx.grid.EVT_GRID_RANGE_SELECT, handlers.OnRangeSelected) self.Bind(wx.grid.EVT_GRID_CELL_RIGHT_CLICK, handlers.OnContextMenu) main_window.Bind(self.EVT_CMD_CODE_ENTRY, c_handlers.OnCellText) main_window.Bind(self.EVT_CMD_INSERT_BMP, c_handlers.OnInsertBitmap) main_window.Bind(self.EVT_CMD_LINK_BMP, c_handlers.OnLinkBitmap) main_window.Bind(self.EVT_CMD_VIDEO_CELL, c_handlers.OnLinkVLCVideo) main_window.Bind(self.EVT_CMD_INSERT_CHART, c_handlers.OnInsertChartDialog) main_window.Bind(self.EVT_CMD_COPY_FORMAT, c_handlers.OnCopyFormat) main_window.Bind(self.EVT_CMD_PASTE_FORMAT, c_handlers.OnPasteFormat) main_window.Bind(self.EVT_CMD_FONT, c_handlers.OnCellFont) main_window.Bind(self.EVT_CMD_FONTSIZE, c_handlers.OnCellFontSize) main_window.Bind(self.EVT_CMD_FONTBOLD, c_handlers.OnCellFontBold) main_window.Bind(self.EVT_CMD_FONTITALICS, c_handlers.OnCellFontItalics) main_window.Bind(self.EVT_CMD_FONTUNDERLINE, c_handlers.OnCellFontUnderline) main_window.Bind(self.EVT_CMD_FONTSTRIKETHROUGH, c_handlers.OnCellFontStrikethrough) main_window.Bind(self.EVT_CMD_FROZEN, c_handlers.OnCellFrozen) main_window.Bind(self.EVT_CMD_LOCK, c_handlers.OnCellLocked) main_window.Bind(self.EVT_CMD_BUTTON_CELL, c_handlers.OnButtonCell) main_window.Bind(self.EVT_CMD_MARKUP, c_handlers.OnCellMarkup) main_window.Bind(self.EVT_CMD_MERGE, c_handlers.OnMerge) main_window.Bind(self.EVT_CMD_JUSTIFICATION, c_handlers.OnCellJustification) main_window.Bind(self.EVT_CMD_ALIGNMENT, c_handlers.OnCellAlignment) main_window.Bind(self.EVT_CMD_BORDERWIDTH, c_handlers.OnCellBorderWidth) main_window.Bind(self.EVT_CMD_BORDERCOLOR, c_handlers.OnCellBorderColor) main_window.Bind(self.EVT_CMD_BACKGROUNDCOLOR, c_handlers.OnCellBackgroundColor) main_window.Bind(self.EVT_CMD_TEXTCOLOR, c_handlers.OnCellTextColor) main_window.Bind(self.EVT_CMD_ROTATION0, c_handlers.OnTextRotation0) main_window.Bind(self.EVT_CMD_ROTATION90, c_handlers.OnTextRotation90) main_window.Bind(self.EVT_CMD_ROTATION180, c_handlers.OnTextRotation180) main_window.Bind(self.EVT_CMD_ROTATION270, c_handlers.OnTextRotation270) main_window.Bind(self.EVT_CMD_TEXTROTATATION, c_handlers.OnCellTextRotation) self.Bind(wx.grid.EVT_GRID_CMD_SELECT_CELL, c_handlers.OnCellSelected) main_window.Bind(self.EVT_CMD_ENTER_SELECTION_MODE, handlers.OnEnterSelectionMode) main_window.Bind(self.EVT_CMD_EXIT_SELECTION_MODE, handlers.OnExitSelectionMode) main_window.Bind(self.EVT_CMD_VIEW_FROZEN, handlers.OnViewFrozen) main_window.Bind(self.EVT_CMD_REFRESH_SELECTION, handlers.OnRefreshSelectedCells) main_window.Bind(self.EVT_CMD_TIMER_TOGGLE, handlers.OnTimerToggle) self.Bind(wx.EVT_TIMER, handlers.OnTimer) main_window.Bind(self.EVT_CMD_DISPLAY_GOTO_CELL_DIALOG, handlers.OnDisplayGoToCellDialog) main_window.Bind(self.EVT_CMD_GOTO_CELL, handlers.OnGoToCell) main_window.Bind(self.EVT_CMD_ZOOM_IN, handlers.OnZoomIn) main_window.Bind(self.EVT_CMD_ZOOM_OUT, handlers.OnZoomOut) main_window.Bind(self.EVT_CMD_ZOOM_STANDARD, handlers.OnZoomStandard) main_window.Bind(self.EVT_CMD_ZOOM_FIT, handlers.OnZoomFit) main_window.Bind(self.EVT_CMD_FIND, handlers.OnFind) main_window.Bind(self.EVT_CMD_REPLACE, handlers.OnShowFindReplace) main_window.Bind(wx.EVT_FIND, handlers.OnReplaceFind) main_window.Bind(wx.EVT_FIND_NEXT, 
handlers.OnReplaceFind) main_window.Bind(wx.EVT_FIND_REPLACE, handlers.OnReplace) main_window.Bind(wx.EVT_FIND_REPLACE_ALL, handlers.OnReplaceAll) main_window.Bind(wx.EVT_FIND_CLOSE, handlers.OnCloseFindReplace) main_window.Bind(self.EVT_CMD_INSERT_ROWS, handlers.OnInsertRows) main_window.Bind(self.EVT_CMD_INSERT_COLS, handlers.OnInsertCols) main_window.Bind(self.EVT_CMD_INSERT_TABS, handlers.OnInsertTabs) main_window.Bind(self.EVT_CMD_DELETE_ROWS, handlers.OnDeleteRows) main_window.Bind(self.EVT_CMD_DELETE_COLS, handlers.OnDeleteCols) main_window.Bind(self.EVT_CMD_DELETE_TABS, handlers.OnDeleteTabs) main_window.Bind(self.EVT_CMD_SHOW_RESIZE_GRID_DIALOG, handlers.OnResizeGridDialog) main_window.Bind(self.EVT_CMD_QUOTE, handlers.OnQuote) main_window.Bind(wx.grid.EVT_GRID_ROW_SIZE, handlers.OnRowSize) main_window.Bind(wx.grid.EVT_GRID_COL_SIZE, handlers.OnColSize) main_window.Bind(self.EVT_CMD_SORT_ASCENDING, handlers.OnSortAscending) main_window.Bind(self.EVT_CMD_SORT_DESCENDING, handlers.OnSortDescending) main_window.Bind(self.EVT_CMD_UNDO, handlers.OnUndo) main_window.Bind(self.EVT_CMD_REDO, handlers.OnRedo)
Bind events to handlers
### Input: Bind events to handlers ### Response: #vtb def _bind(self): main_window = self.main_window handlers = self.handlers c_handlers = self.cell_handlers self.Bind(wx.EVT_MOUSEWHEEL, handlers.OnMouseWheel) self.Bind(wx.EVT_KEY_DOWN, handlers.OnKey) self.GetGridWindow().Bind(wx.EVT_MOTION, handlers.OnMouseMotion) self.Bind(wx.grid.EVT_GRID_RANGE_SELECT, handlers.OnRangeSelected) self.Bind(wx.grid.EVT_GRID_CELL_RIGHT_CLICK, handlers.OnContextMenu) main_window.Bind(self.EVT_CMD_CODE_ENTRY, c_handlers.OnCellText) main_window.Bind(self.EVT_CMD_INSERT_BMP, c_handlers.OnInsertBitmap) main_window.Bind(self.EVT_CMD_LINK_BMP, c_handlers.OnLinkBitmap) main_window.Bind(self.EVT_CMD_VIDEO_CELL, c_handlers.OnLinkVLCVideo) main_window.Bind(self.EVT_CMD_INSERT_CHART, c_handlers.OnInsertChartDialog) main_window.Bind(self.EVT_CMD_COPY_FORMAT, c_handlers.OnCopyFormat) main_window.Bind(self.EVT_CMD_PASTE_FORMAT, c_handlers.OnPasteFormat) main_window.Bind(self.EVT_CMD_FONT, c_handlers.OnCellFont) main_window.Bind(self.EVT_CMD_FONTSIZE, c_handlers.OnCellFontSize) main_window.Bind(self.EVT_CMD_FONTBOLD, c_handlers.OnCellFontBold) main_window.Bind(self.EVT_CMD_FONTITALICS, c_handlers.OnCellFontItalics) main_window.Bind(self.EVT_CMD_FONTUNDERLINE, c_handlers.OnCellFontUnderline) main_window.Bind(self.EVT_CMD_FONTSTRIKETHROUGH, c_handlers.OnCellFontStrikethrough) main_window.Bind(self.EVT_CMD_FROZEN, c_handlers.OnCellFrozen) main_window.Bind(self.EVT_CMD_LOCK, c_handlers.OnCellLocked) main_window.Bind(self.EVT_CMD_BUTTON_CELL, c_handlers.OnButtonCell) main_window.Bind(self.EVT_CMD_MARKUP, c_handlers.OnCellMarkup) main_window.Bind(self.EVT_CMD_MERGE, c_handlers.OnMerge) main_window.Bind(self.EVT_CMD_JUSTIFICATION, c_handlers.OnCellJustification) main_window.Bind(self.EVT_CMD_ALIGNMENT, c_handlers.OnCellAlignment) main_window.Bind(self.EVT_CMD_BORDERWIDTH, c_handlers.OnCellBorderWidth) main_window.Bind(self.EVT_CMD_BORDERCOLOR, c_handlers.OnCellBorderColor) main_window.Bind(self.EVT_CMD_BACKGROUNDCOLOR, c_handlers.OnCellBackgroundColor) main_window.Bind(self.EVT_CMD_TEXTCOLOR, c_handlers.OnCellTextColor) main_window.Bind(self.EVT_CMD_ROTATION0, c_handlers.OnTextRotation0) main_window.Bind(self.EVT_CMD_ROTATION90, c_handlers.OnTextRotation90) main_window.Bind(self.EVT_CMD_ROTATION180, c_handlers.OnTextRotation180) main_window.Bind(self.EVT_CMD_ROTATION270, c_handlers.OnTextRotation270) main_window.Bind(self.EVT_CMD_TEXTROTATATION, c_handlers.OnCellTextRotation) self.Bind(wx.grid.EVT_GRID_CMD_SELECT_CELL, c_handlers.OnCellSelected) main_window.Bind(self.EVT_CMD_ENTER_SELECTION_MODE, handlers.OnEnterSelectionMode) main_window.Bind(self.EVT_CMD_EXIT_SELECTION_MODE, handlers.OnExitSelectionMode) main_window.Bind(self.EVT_CMD_VIEW_FROZEN, handlers.OnViewFrozen) main_window.Bind(self.EVT_CMD_REFRESH_SELECTION, handlers.OnRefreshSelectedCells) main_window.Bind(self.EVT_CMD_TIMER_TOGGLE, handlers.OnTimerToggle) self.Bind(wx.EVT_TIMER, handlers.OnTimer) main_window.Bind(self.EVT_CMD_DISPLAY_GOTO_CELL_DIALOG, handlers.OnDisplayGoToCellDialog) main_window.Bind(self.EVT_CMD_GOTO_CELL, handlers.OnGoToCell) main_window.Bind(self.EVT_CMD_ZOOM_IN, handlers.OnZoomIn) main_window.Bind(self.EVT_CMD_ZOOM_OUT, handlers.OnZoomOut) main_window.Bind(self.EVT_CMD_ZOOM_STANDARD, handlers.OnZoomStandard) main_window.Bind(self.EVT_CMD_ZOOM_FIT, handlers.OnZoomFit) main_window.Bind(self.EVT_CMD_FIND, handlers.OnFind) main_window.Bind(self.EVT_CMD_REPLACE, handlers.OnShowFindReplace) main_window.Bind(wx.EVT_FIND, handlers.OnReplaceFind) 
main_window.Bind(wx.EVT_FIND_NEXT, handlers.OnReplaceFind) main_window.Bind(wx.EVT_FIND_REPLACE, handlers.OnReplace) main_window.Bind(wx.EVT_FIND_REPLACE_ALL, handlers.OnReplaceAll) main_window.Bind(wx.EVT_FIND_CLOSE, handlers.OnCloseFindReplace) main_window.Bind(self.EVT_CMD_INSERT_ROWS, handlers.OnInsertRows) main_window.Bind(self.EVT_CMD_INSERT_COLS, handlers.OnInsertCols) main_window.Bind(self.EVT_CMD_INSERT_TABS, handlers.OnInsertTabs) main_window.Bind(self.EVT_CMD_DELETE_ROWS, handlers.OnDeleteRows) main_window.Bind(self.EVT_CMD_DELETE_COLS, handlers.OnDeleteCols) main_window.Bind(self.EVT_CMD_DELETE_TABS, handlers.OnDeleteTabs) main_window.Bind(self.EVT_CMD_SHOW_RESIZE_GRID_DIALOG, handlers.OnResizeGridDialog) main_window.Bind(self.EVT_CMD_QUOTE, handlers.OnQuote) main_window.Bind(wx.grid.EVT_GRID_ROW_SIZE, handlers.OnRowSize) main_window.Bind(wx.grid.EVT_GRID_COL_SIZE, handlers.OnColSize) main_window.Bind(self.EVT_CMD_SORT_ASCENDING, handlers.OnSortAscending) main_window.Bind(self.EVT_CMD_SORT_DESCENDING, handlers.OnSortDescending) main_window.Bind(self.EVT_CMD_UNDO, handlers.OnUndo) main_window.Bind(self.EVT_CMD_REDO, handlers.OnRedo)
#vtb def dmlc_opts(opts): args = [, str(opts.num_workers), , str(opts.num_servers), , opts.launcher, , opts.hostfile, , opts.sync_dst_dir] dopts = vars(opts) for key in [, , ]: for v in dopts[key]: args.append( + key.replace("_","-")) args.append(v) args += opts.command try: from dmlc_tracker import opts except ImportError: print("Can't load dmlc_tracker package. Perhaps you need to run") print(" git submodule update --init --recursive") raise dmlc_opts = opts.get_opts(args) return dmlc_opts
convert from mxnet's opts to dmlc's opts
### Input: convert from mxnet's opts to dmlc's opts ### Response: #vtb def dmlc_opts(opts): args = [, str(opts.num_workers), , str(opts.num_servers), , opts.launcher, , opts.hostfile, , opts.sync_dst_dir] dopts = vars(opts) for key in [, , ]: for v in dopts[key]: args.append( + key.replace("_","-")) args.append(v) args += opts.command try: from dmlc_tracker import opts except ImportError: print("Can't load dmlc_tracker package. Perhaps you need to run") print(" git submodule update --init --recursive") raise dmlc_opts = opts.get_opts(args) return dmlc_opts
#vtb def complementTab(seq=[]): complement = {: , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : } seq_tmp = [] for bps in seq: if len(bps) == 0: seq_tmp.append() elif len(bps) == 1: seq_tmp.append(complement[bps]) else: seq_tmp.append(reverseComplement(bps)) return seq_tmp
Returns a list of complementary sequences without inverting the input list
### Input: Returns a list of complementary sequences without inverting the input list ### Response: #vtb def complementTab(seq=[]): complement = {: , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : , : } seq_tmp = [] for bps in seq: if len(bps) == 0: seq_tmp.append() elif len(bps) == 1: seq_tmp.append(complement[bps]) else: seq_tmp.append(reverseComplement(bps)) return seq_tmp
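The complement table above lost its contents in extraction; a minimal sketch of the intended DNA mapping (the full table presumably also covers IUPAC ambiguity codes and lowercase bases), plus the expected behaviour:

complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
# Single-base entries are complemented in place; multi-base entries go through
# reverseComplement, so complementTab(["A", "G", "ATTG"]) would yield
# ["T", "C", "CAAT"], while the order of the list itself is preserved.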
#vtb def kill(timeout=15): ret = { : None, : 1, } comment = [] pid = __grains__.get() if not pid: comment.append() ret[] = salt.defaults.exitcodes.EX_SOFTWARE else: if not in __salt__: comment.append() ret[] = salt.defaults.exitcodes.EX_SOFTWARE else: ret[] = int(not __salt__[](pid)) if ret[]: comment.append() else: for _ in range(timeout): time.sleep(1) signaled = __salt__[](pid) if not signaled: ret[] = pid break else: comment.append() ret[] = salt.defaults.exitcodes.EX_TEMPFAIL if comment: ret[] = comment return ret
Kill the salt minion. timeout int seconds to wait for the minion to die. If you have a monitor that restarts ``salt-minion`` when it dies then this is a great way to restart after a minion upgrade. CLI example:: >$ salt minion[12] minion.kill minion1: ---------- killed: 7874 retcode: 0 minion2: ---------- killed: 29071 retcode: 0 The result of the salt command shows the process ID of the minions and the result of sending a kill signal to each minion as the ``retcode`` value: ``0`` is success, anything else is a failure.
### Input: Kill the salt minion. timeout int seconds to wait for the minion to die. If you have a monitor that restarts ``salt-minion`` when it dies then this is a great way to restart after a minion upgrade. CLI example:: >$ salt minion[12] minion.kill minion1: ---------- killed: 7874 retcode: 0 minion2: ---------- killed: 29071 retcode: 0 The result of the salt command shows the process ID of the minions and the result of sending a kill signal to each minion as the ``retcode`` value: ``0`` is success, anything else is a failure. ### Response: #vtb def kill(timeout=15): ret = { : None, : 1, } comment = [] pid = __grains__.get() if not pid: comment.append() ret[] = salt.defaults.exitcodes.EX_SOFTWARE else: if not in __salt__: comment.append() ret[] = salt.defaults.exitcodes.EX_SOFTWARE else: ret[] = int(not __salt__[](pid)) if ret[]: comment.append() else: for _ in range(timeout): time.sleep(1) signaled = __salt__[](pid) if not signaled: ret[] = pid break else: comment.append() ret[] = salt.defaults.exitcodes.EX_TEMPFAIL if comment: ret[] = comment return ret
#vtb def export_gcm_encrypted_private_key(self, password: str, salt: str, n: int = 16384) -> str: r = 8 p = 8 dk_len = 64 scrypt = Scrypt(n, r, p, dk_len) derived_key = scrypt.generate_kd(password, salt) iv = derived_key[0:12] key = derived_key[32:64] hdr = self.__address.b58encode().encode() mac_tag, cipher_text = AESHandler.aes_gcm_encrypt_with_iv(self.__private_key, hdr, key, iv) encrypted_key = bytes.hex(cipher_text) + bytes.hex(mac_tag) encrypted_key_str = base64.b64encode(bytes.fromhex(encrypted_key)) return encrypted_key_str.decode()
This interface is used to export an AES-GCM encrypted private key. :param password: the secret pass phrase to generate the keys from. :param salt: A string to use for better protection from dictionary attacks. This value does not need to be kept secret, but it should be randomly chosen for each derivation. It is recommended to be at least 8 bytes long. :param n: CPU/memory cost parameter. It must be a power of 2 and less than 2**32 :return: a GCM-encrypted private key in the form of a string.
### Input: This interface is used to export an AES-GCM encrypted private key. :param password: the secret pass phrase to generate the keys from. :param salt: A string to use for better protection from dictionary attacks. This value does not need to be kept secret, but it should be randomly chosen for each derivation. It is recommended to be at least 8 bytes long. :param n: CPU/memory cost parameter. It must be a power of 2 and less than 2**32 :return: a GCM-encrypted private key in the form of a string. ### Response: #vtb def export_gcm_encrypted_private_key(self, password: str, salt: str, n: int = 16384) -> str: r = 8 p = 8 dk_len = 64 scrypt = Scrypt(n, r, p, dk_len) derived_key = scrypt.generate_kd(password, salt) iv = derived_key[0:12] key = derived_key[32:64] hdr = self.__address.b58encode().encode() mac_tag, cipher_text = AESHandler.aes_gcm_encrypt_with_iv(self.__private_key, hdr, key, iv) encrypted_key = bytes.hex(cipher_text) + bytes.hex(mac_tag) encrypted_key_str = base64.b64encode(bytes.fromhex(encrypted_key)) return encrypted_key_str.decode()
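A hedged usage sketch; acct stands in for an account object that carries the private key and address used by the method above:

enc = acct.export_gcm_encrypted_private_key("my-password", "my-salt", n=16384)
print(enc)  # base64 text that is safe to persist in a wallet/keystore file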
#vtb def CreateTaskStorage(self, task): if task.identifier in self._task_storage_writers: raise IOError(.format( task.identifier)) storage_writer = FakeStorageWriter( self._session, storage_type=definitions.STORAGE_TYPE_TASK, task=task) self._task_storage_writers[task.identifier] = storage_writer return storage_writer
Creates a task storage. Args: task (Task): task. Returns: FakeStorageWriter: storage writer. Raises: IOError: if the task storage already exists. OSError: if the task storage already exists.
### Input: Creates a task storage. Args: task (Task): task. Returns: FakeStorageWriter: storage writer. Raises: IOError: if the task storage already exists. OSError: if the task storage already exists. ### Response: #vtb def CreateTaskStorage(self, task): if task.identifier in self._task_storage_writers: raise IOError(.format( task.identifier)) storage_writer = FakeStorageWriter( self._session, storage_type=definitions.STORAGE_TYPE_TASK, task=task) self._task_storage_writers[task.identifier] = storage_writer return storage_writer
#vtb def adjacent(self, node_a, node_b): neighbors = self.neighbors(node_a) return node_b in neighbors
Determines whether there is an edge from node_a to node_b. Returns True if such an edge exists, otherwise returns False.
### Input: Determines whether there is an edge from node_a to node_b. Returns True if such an edge exists, otherwise returns False. ### Response: #vtb def adjacent(self, node_a, node_b): neighbors = self.neighbors(node_a) return node_b in neighbors
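A tiny self-contained sketch; note the adjacency check is directional, so an edge a->b does not imply b->a:

class TinyGraph(object):
    def __init__(self, edges):
        self._edges = edges
    def neighbors(self, node):
        return [b for a, b in self._edges if a == node]
    def adjacent(self, node_a, node_b):  # same logic as the method above
        return node_b in self.neighbors(node_a)

g = TinyGraph([("a", "b"), ("b", "c")])
assert g.adjacent("a", "b") is True
assert g.adjacent("b", "a") is False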
#vtb def create_user(self, instance, name, password, database_names, host=None): return instance.create_user(name=name, password=password, database_names=database_names, host=host)
Creates a user with the specified name and password, and gives that user access to the specified database(s).
### Input: Creates a user with the specified name and password, and gives that user access to the specified database(s). ### Response: #vtb def create_user(self, instance, name, password, database_names, host=None): return instance.create_user(name=name, password=password, database_names=database_names, host=host)
#vtb def getDescription(self): description = {:self.name, :[f.name for f in self.fields], \ :[f.numRecords for f in self.fields]} return description
Returns a description of the dataset
### Input: Returns a description of the dataset ### Response: #vtb def getDescription(self): description = {:self.name, :[f.name for f in self.fields], \ :[f.numRecords for f in self.fields]} return description
#vtb def update(self): _LOGGER.debug("Querying the device..") time = datetime.now() value = struct.pack(, PROP_INFO_QUERY, time.year % 100, time.month, time.day, time.hour, time.minute, time.second) self._conn.make_request(PROP_WRITE_HANDLE, value)
Update the data from the thermostat. Always sets the current time.
### Input: Update the data from the thermostat. Always sets the current time. ### Response: #vtb def update(self): _LOGGER.debug("Querying the device..") time = datetime.now() value = struct.pack(, PROP_INFO_QUERY, time.year % 100, time.month, time.day, time.hour, time.minute, time.second) self._conn.make_request(PROP_WRITE_HANDLE, value)
#vtb def date_in_past(self): now = datetime.datetime.now() return (now.date() > self.date)
Is the block's date in the past? (Has it already happened?)
### Input: Is the block's date in the past? (Has it already happened?) ### Response: #vtb def date_in_past(self): now = datetime.datetime.now() return (now.date() > self.date)
#vtb def series_lstrip(series, startswith=, ignorecase=True): return series_strip(series, startswith=startswith, endswith=None, startsorendswith=None, ignorecase=ignorecase)
Strip a prefix str (`startswith` str) from a `df` column or pd.Series of type str
### Input: Strip a prefix str (`startswith` str) from a `df` column or pd.Series of type str ### Response: #vtb def series_lstrip(series, startswith=, ignorecase=True): return series_strip(series, startswith=startswith, endswith=None, startsorendswith=None, ignorecase=ignorecase)
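series_strip itself is defined elsewhere in the package, so as a hedged sketch of the intended prefix strip, plain pandas gives the same result:

import pandas as pd

s = pd.Series(["http://a.com", "HTTP://b.com", "c.com"])
stripped = s.str.replace(r"(?i)^http://", "", regex=True)  # case-insensitive prefix strip
print(stripped.tolist())  # ['a.com', 'b.com', 'c.com']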
#vtb def get_cf_distribution_class(): if LooseVersion(troposphere.__version__) == LooseVersion(): cf_dist = cloudfront.Distribution cf_dist.props[] = (DistributionConfig, True) return cf_dist return cloudfront.Distribution
Return the correct troposphere CF distribution class.
### Input: Return the correct troposphere CF distribution class. ### Response: #vtb def get_cf_distribution_class(): if LooseVersion(troposphere.__version__) == LooseVersion(): cf_dist = cloudfront.Distribution cf_dist.props[] = (DistributionConfig, True) return cf_dist return cloudfront.Distribution
#vtb def _iso_name_and_parent_from_path(self, iso_path): splitpath = utils.split_path(iso_path) name = splitpath.pop() parent = self._find_iso_record(b + b.join(splitpath)) return (name.decode().encode(), parent)
An internal method to find the parent directory record and name given an ISO path. If the parent is found, return a tuple containing the basename of the path and the parent directory record object. Parameters: iso_path - The absolute ISO path to the entry on the ISO. Returns: A tuple containing just the name of the entry and a Directory Record object representing the parent of the entry.
### Input: An internal method to find the parent directory record and name given an ISO path. If the parent is found, return a tuple containing the basename of the path and the parent directory record object. Parameters: iso_path - The absolute ISO path to the entry on the ISO. Returns: A tuple containing just the name of the entry and a Directory Record object representing the parent of the entry. ### Response: #vtb def _iso_name_and_parent_from_path(self, iso_path): splitpath = utils.split_path(iso_path) name = splitpath.pop() parent = self._find_iso_record(b + b.join(splitpath)) return (name.decode().encode(), parent)
#vtb def thread_exception(self, raised_exception): print( % str(raised_exception)) print() print(traceback.format_exc())
Callback for handling exception, that are raised inside :meth:`.WThreadTask.thread_started` :param raised_exception: raised exception :return: None
### Input: Callback for handling exception, that are raised inside :meth:`.WThreadTask.thread_started` :param raised_exception: raised exception :return: None ### Response: #vtb def thread_exception(self, raised_exception): print( % str(raised_exception)) print() print(traceback.format_exc())
#vtb def source_sum_err(self): if self._error is not None: if self._is_completely_masked: return np.nan * self._error_unit else: return np.sqrt(np.sum(self._error_values ** 2)) else: return None
The uncertainty of `~photutils.SourceProperties.source_sum`, propagated from the input ``error`` array. ``source_sum_err`` is the quadrature sum of the total errors over the non-masked pixels within the source segment: .. math:: \\Delta F = \\sqrt{\\sum_{i \\in S} \\sigma_{\\mathrm{tot}, i}^2} where :math:`\\Delta F` is ``source_sum_err``, :math:`\\sigma_{\\mathrm{tot, i}}` are the pixel-wise total errors, and :math:`S` are the non-masked pixels in the source segment. Pixel values that are masked in the input ``data``, including any non-finite pixel values (i.e. NaN, infs) that are automatically masked, are also masked in the error array.
### Input: The uncertainty of `~photutils.SourceProperties.source_sum`, propagated from the input ``error`` array. ``source_sum_err`` is the quadrature sum of the total errors over the non-masked pixels within the source segment: .. math:: \\Delta F = \\sqrt{\\sum_{i \\in S} \\sigma_{\\mathrm{tot}, i}^2} where :math:`\\Delta F` is ``source_sum_err``, :math:`\\sigma_{\\mathrm{tot, i}}` are the pixel-wise total errors, and :math:`S` are the non-masked pixels in the source segment. Pixel values that are masked in the input ``data``, including any non-finite pixel values (i.e. NaN, infs) that are automatically masked, are also masked in the error array. ### Response: #vtb def source_sum_err(self): if self._error is not None: if self._is_completely_masked: return np.nan * self._error_unit else: return np.sqrt(np.sum(self._error_values ** 2)) else: return None
#vtb def job_not_running(self, jid, tgt, tgt_type, minions, is_finished): ping_pub_data = yield self.saltclients[](tgt, , [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data[], ], ) minion_running = False while True: try: event = self.application.event_listener.get_event(self, tag=ping_tag, timeout=self.application.opts[]) event = yield event except TimeoutException: if not event.done(): event.set_result(None) if not minion_running or is_finished.done(): raise tornado.gen.Return(True) else: ping_pub_data = yield self.saltclients[](tgt, , [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data[], ], ) minion_running = False continue if event[].get(, {}) == {}: continue if event[][] not in minions: minions[event[][]] = False minion_running = True
Return a future which will complete once jid (passed in) is no longer running on tgt
### Input: Return a future which will complete once jid (passed in) is no longer running on tgt ### Response: #vtb def job_not_running(self, jid, tgt, tgt_type, minions, is_finished): ping_pub_data = yield self.saltclients[](tgt, , [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data[], ], ) minion_running = False while True: try: event = self.application.event_listener.get_event(self, tag=ping_tag, timeout=self.application.opts[]) event = yield event except TimeoutException: if not event.done(): event.set_result(None) if not minion_running or is_finished.done(): raise tornado.gen.Return(True) else: ping_pub_data = yield self.saltclients[](tgt, , [jid], tgt_type=tgt_type) ping_tag = tagify([ping_pub_data[], ], ) minion_running = False continue if event[].get(, {}) == {}: continue if event[][] not in minions: minions[event[][]] = False minion_running = True
#vtb def get_levels_of_description(self): if not hasattr(self, "levels_of_description"): self.levels_of_description = [ item["name"] for item in self._get(urljoin(self.base_url, "taxonomies/34")).json() ] return self.levels_of_description
Returns an array of all levels of description defined in this AtoM instance.
### Input: Returns an array of all levels of description defined in this AtoM instance. ### Response: #vtb def get_levels_of_description(self): if not hasattr(self, "levels_of_description"): self.levels_of_description = [ item["name"] for item in self._get(urljoin(self.base_url, "taxonomies/34")).json() ] return self.levels_of_description
#vtb def set_resolving(self, **kw): if in kw and not in kw: kw.update(time_show_zone=True) self.data[].update(**kw)
Certain log fields can be individually resolved. Use this method to set these fields. Valid keyword arguments: :param str timezone: string value to set timezone for audits :param bool time_show_zone: show the time zone in the audit. :param bool time_show_millis: show the time in milliseconds :param bool keys: resolve log field keys :param bool ip_elements: resolve IPs to SMC elements :param bool ip_dns: resolve IP addresses using DNS :param bool ip_locations: resolve locations
### Input: Certain log fields can be individually resolved. Use this method to set these fields. Valid keyword arguments: :param str timezone: string value to set timezone for audits :param bool time_show_zone: show the time zone in the audit. :param bool time_show_millis: show the time in milliseconds :param bool keys: resolve log field keys :param bool ip_elements: resolve IPs to SMC elements :param bool ip_dns: resolve IP addresses using DNS :param bool ip_locations: resolve locations ### Response: #vtb def set_resolving(self, **kw): if in kw and not in kw: kw.update(time_show_zone=True) self.data[].update(**kw)
#vtb def chunks(seq, size=None, dfmt="f", byte_order=None, padval=0.): if size is None: size = chunks.size chunk = array.array(dfmt, xrange(size)) idx = 0 for el in seq: chunk[idx] = el idx += 1 if idx == size: yield chunk.tostring() idx = 0 if idx != 0: for idx in xrange(idx, size): chunk[idx] = padval yield chunk.tostring()
Chunk generator based on the array module (Python standard library). See chunks.struct for more help. This strategy uses array.array (random access by indexing management) instead of struct.Struct and blocks/deque (circular queue appending) from the chunks.struct strategy. Hint ---- Try each one to find the faster one for your machine, and choose the default one by assigning ``chunks.default = chunks.strategy_name``. It'll be the one used by the AudioIO/AudioThread playing mechanism. Note ---- The ``dfmt`` symbols for arrays might differ from structs' defaults.
### Input: Chunk generator based on the array module (Python standard library). See chunks.struct for more help. This strategy uses array.array (random access by indexing management) instead of struct.Struct and blocks/deque (circular queue appending) from the chunks.struct strategy. Hint ---- Try each one to find the faster one for your machine, and choose the default one by assigning ``chunks.default = chunks.strategy_name``. It'll be the one used by the AudioIO/AudioThread playing mechanism. Note ---- The ``dfmt`` symbols for arrays might differ from structs' defaults. ### Response: #vtb def chunks(seq, size=None, dfmt="f", byte_order=None, padval=0.): if size is None: size = chunks.size chunk = array.array(dfmt, xrange(size)) idx = 0 for el in seq: chunk[idx] = el idx += 1 if idx == size: yield chunk.tostring() idx = 0 if idx != 0: for idx in xrange(idx, size): chunk[idx] = padval yield chunk.tostring()
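A hedged usage sketch (Python 2, matching the xrange calls above); five floats at size=4 produce two 16-byte blocks, the second zero-padded:

data = [0.1, 0.2, 0.3, 0.4, 0.5]
blocks = list(chunks(data, size=4, dfmt="f"))
assert len(blocks) == 2
assert all(len(b) == 16 for b in blocks)  # 4 floats * 4 bytes each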
#vtb def __encryptKeyTransportMessage( self, bare_jids, encryption_callback, bundles = None, expect_problems = None, ignore_trust = False ): yield self.runInactiveDeviceCleanup() if isinstance(bare_jids, string_type): bare_jids = set([ bare_jids ]) else: bare_jids = set(bare_jids) if bundles == None: bundles = {} if expect_problems == None: expect_problems = {} else: for bare_jid in expect_problems: expect_problems[bare_jid] = set(expect_problems[bare_jid]) bare_jids.add(self.__my_bare_jid) problems = [] encrypt_for = {} for bare_jid in bare_jids: devices = yield self.__loadActiveDevices(bare_jid) if len(devices) == 0: problems.append(NoDevicesException(bare_jid)) else: encrypt_for[bare_jid] = devices encrypt_for[self.__my_bare_jid].remove(self.__my_device_id) for bare_jid, devices in encrypt_for.items(): missing_bundles = set() sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] if session == None: if not device in bundles.get(bare_jid, {}): missing_bundles.add(device) devices -= missing_bundles for device in missing_bundles: if not device in expect_problems.get(bare_jid, set()): problems.append(MissingBundleException(bare_jid, device)) for bare_jid, devices in encrypt_for.items(): key_exchange_problems = {} sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] if session == None: bundle = bundles[bare_jid][device] try: self.__state.getSharedSecretActive(bundle) except x3dh.exceptions.KeyExchangeException as e: key_exchange_problems[device] = str(e) encrypt_for[bare_jid] -= set(key_exchange_problems.keys()) for device, message in key_exchange_problems.items(): if not device in expect_problems.get(bare_jid, set()): problems.append(KeyExchangeException( bare_jid, device, message )) if not ignore_trust: for bare_jid, devices in encrypt_for.items(): trusts = yield self.__loadTrusts(bare_jid, devices) sessions = yield self.__loadSessions(bare_jid, devices) trust_problems = [] for device in devices: trust = trusts[device] session = sessions[device] other_ik = ( bundles[bare_jid][device].ik if session == None else session.ik ) if trust == None: trust_problems.append((device, other_ik, "undecided")) elif not (trust["key"] == other_ik and trust["trusted"]): trust_problems.append((device, other_ik, "untrusted")) devices -= set(map(lambda x: x[0], trust_problems)) for device, other_ik, problem_type in trust_problems: if not device in expect_problems.get(bare_jid, set()): problems.append( TrustException(bare_jid, device, other_ik, problem_type) ) for bare_jid, devices in list(encrypt_for.items()): if bare_jid == self.__my_bare_jid: continue if len(devices) == 0: problems.append(NoEligibleDevicesException(bare_jid)) del encrypt_for[bare_jid] if len(problems) > 0: raise EncryptionProblemsException(problems) aes_gcm_iv = os.urandom(16) aes_gcm_key = os.urandom(16) aes_gcm = Cipher( algorithms.AES(aes_gcm_key), modes.GCM(aes_gcm_iv), backend=default_backend() ).encryptor() encryption_callback(aes_gcm) aes_gcm_tag = aes_gcm.tag encrypted_keys = {} for bare_jid, devices in encrypt_for.items(): encrypted_keys[bare_jid] = {} for device in devices: if self.__state.hasBoundOTPK(bare_jid, device): self.__state.respondedTo(bare_jid, device) yield self._storage.storeState(self.__state.serialize()) session = yield self.__loadSession(bare_jid, device) pre_key = session == None if pre_key: bundle = bundles[bare_jid][device] session_and_init_data = self.__state.getSharedSecretActive(bundle) session = 
session_and_init_data["dr"] session_init_data = session_and_init_data["to_other"] encrypted_data = session.encryptMessage(aes_gcm_key + aes_gcm_tag) yield self.__storeSession(bare_jid, device, session) serialized = self.__backend.WireFormat.messageToWire( encrypted_data["ciphertext"], encrypted_data["header"], { "DoubleRatchet": encrypted_data["additional"] } ) if pre_key: serialized = self.__backend.WireFormat.preKeyMessageToWire( session_init_data, serialized, { "DoubleRatchet": encrypted_data["additional"] } ) encrypted_keys[bare_jid][device] = { "data" : serialized, "pre_key" : pre_key } promise.returnValue({ "iv" : aes_gcm_iv, "sid" : self.__my_device_id, "keys" : encrypted_keys })
bare_jids: iterable<string> encryption_callback: A function which is called using an instance of cryptography.hazmat.primitives.ciphers.CipherContext, which you can use to encrypt any sort of data. You don't have to return anything. bundles: { [bare_jid: string] => { [device_id: int] => ExtendedPublicBundle } } expect_problems: { [bare_jid: string] => iterable<int> } returns: { iv: bytes, sid: int, keys: { [bare_jid: string] => { [device: int] => { "data" : bytes, "pre_key" : boolean } } } }
### Input: bare_jids: iterable<string> encryption_callback: A function which is called using an instance of cryptography.hazmat.primitives.ciphers.CipherContext, which you can use to encrypt any sort of data. You don't have to return anything. bundles: { [bare_jid: string] => { [device_id: int] => ExtendedPublicBundle } } expect_problems: { [bare_jid: string] => iterable<int> } returns: { iv: bytes, sid: int, keys: { [bare_jid: string] => { [device: int] => { "data" : bytes, "pre_key" : boolean } } } } ### Response: #vtb def __encryptKeyTransportMessage( self, bare_jids, encryption_callback, bundles = None, expect_problems = None, ignore_trust = False ): yield self.runInactiveDeviceCleanup() if isinstance(bare_jids, string_type): bare_jids = set([ bare_jids ]) else: bare_jids = set(bare_jids) if bundles == None: bundles = {} if expect_problems == None: expect_problems = {} else: for bare_jid in expect_problems: expect_problems[bare_jid] = set(expect_problems[bare_jid]) bare_jids.add(self.__my_bare_jid) problems = [] encrypt_for = {} for bare_jid in bare_jids: devices = yield self.__loadActiveDevices(bare_jid) if len(devices) == 0: problems.append(NoDevicesException(bare_jid)) else: encrypt_for[bare_jid] = devices encrypt_for[self.__my_bare_jid].remove(self.__my_device_id) for bare_jid, devices in encrypt_for.items(): missing_bundles = set() sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] if session == None: if not device in bundles.get(bare_jid, {}): missing_bundles.add(device) devices -= missing_bundles for device in missing_bundles: if not device in expect_problems.get(bare_jid, set()): problems.append(MissingBundleException(bare_jid, device)) for bare_jid, devices in encrypt_for.items(): key_exchange_problems = {} sessions = yield self.__loadSessions(bare_jid, devices) for device in devices: session = sessions[device] if session == None: bundle = bundles[bare_jid][device] try: self.__state.getSharedSecretActive(bundle) except x3dh.exceptions.KeyExchangeException as e: key_exchange_problems[device] = str(e) encrypt_for[bare_jid] -= set(key_exchange_problems.keys()) for device, message in key_exchange_problems.items(): if not device in expect_problems.get(bare_jid, set()): problems.append(KeyExchangeException( bare_jid, device, message )) if not ignore_trust: for bare_jid, devices in encrypt_for.items(): trusts = yield self.__loadTrusts(bare_jid, devices) sessions = yield self.__loadSessions(bare_jid, devices) trust_problems = [] for device in devices: trust = trusts[device] session = sessions[device] other_ik = ( bundles[bare_jid][device].ik if session == None else session.ik ) if trust == None: trust_problems.append((device, other_ik, "undecided")) elif not (trust["key"] == other_ik and trust["trusted"]): trust_problems.append((device, other_ik, "untrusted")) devices -= set(map(lambda x: x[0], trust_problems)) for device, other_ik, problem_type in trust_problems: if not device in expect_problems.get(bare_jid, set()): problems.append( TrustException(bare_jid, device, other_ik, problem_type) ) for bare_jid, devices in list(encrypt_for.items()): if bare_jid == self.__my_bare_jid: continue if len(devices) == 0: problems.append(NoEligibleDevicesException(bare_jid)) del encrypt_for[bare_jid] if len(problems) > 0: raise EncryptionProblemsException(problems) aes_gcm_iv = os.urandom(16) aes_gcm_key = os.urandom(16) aes_gcm = Cipher( algorithms.AES(aes_gcm_key), modes.GCM(aes_gcm_iv), backend=default_backend() ).encryptor() 
encryption_callback(aes_gcm) aes_gcm_tag = aes_gcm.tag encrypted_keys = {} for bare_jid, devices in encrypt_for.items(): encrypted_keys[bare_jid] = {} for device in devices: if self.__state.hasBoundOTPK(bare_jid, device): self.__state.respondedTo(bare_jid, device) yield self._storage.storeState(self.__state.serialize()) session = yield self.__loadSession(bare_jid, device) pre_key = session == None if pre_key: bundle = bundles[bare_jid][device] session_and_init_data = self.__state.getSharedSecretActive(bundle) session = session_and_init_data["dr"] session_init_data = session_and_init_data["to_other"] encrypted_data = session.encryptMessage(aes_gcm_key + aes_gcm_tag) yield self.__storeSession(bare_jid, device, session) serialized = self.__backend.WireFormat.messageToWire( encrypted_data["ciphertext"], encrypted_data["header"], { "DoubleRatchet": encrypted_data["additional"] } ) if pre_key: serialized = self.__backend.WireFormat.preKeyMessageToWire( session_init_data, serialized, { "DoubleRatchet": encrypted_data["additional"] } ) encrypted_keys[bare_jid][device] = { "data" : serialized, "pre_key" : pre_key } promise.returnValue({ "iv" : aes_gcm_iv, "sid" : self.__my_device_id, "keys" : encrypted_keys })
#vtb def unwrap_aliases(data_type): unwrapped_alias = False while is_alias(data_type): unwrapped_alias = True data_type = data_type.data_type return data_type, unwrapped_alias
Convenience method to unwrap all Alias(es) from around a DataType. Args: data_type (DataType): The target to unwrap. Return: Tuple[DataType, bool]: The underlying data type and a bool indicating whether the input type had at least one alias layer.
### Input: Convenience method to unwrap all Alias(es) from around a DataType. Args: data_type (DataType): The target to unwrap. Return: Tuple[DataType, bool]: The underlying data type and a bool indicating whether the input type had at least one alias layer. ### Response: #vtb def unwrap_aliases(data_type): unwrapped_alias = False while is_alias(data_type): unwrapped_alias = True data_type = data_type.data_type return data_type, unwrapped_alias
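A self-contained sketch with stand-in types; is_alias is assumed to detect the Alias wrapper used by the real module:

class Int(object):
    pass

class Alias(object):
    def __init__(self, data_type):
        self.data_type = data_type

def is_alias(data_type):
    return isinstance(data_type, Alias)

inner, had_alias = unwrap_aliases(Alias(Alias(Int())))
assert isinstance(inner, Int) and had_alias is True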
#vtb def run_sim(morphology=, cell_rotation=dict(x=4.99, y=-4.33, z=3.14), closest_idx=dict(x=-200., y=0., z=800.)): cell = LFPy.Cell(morphology=morphology, **cell_parameters) cell.set_rotation(**cell_rotation) synapse_parameters = { : cell.get_closest_idx(**closest_idx), : 0., : , : 0.5, : 0.0878, : True, } synapse = LFPy.Synapse(cell, **synapse_parameters) synapse.set_spike_times(np.array([1.])) print "running simulation..." cell.simulate(rec_imem=True,rec_isyn=True) grid_electrode = LFPy.RecExtElectrode(cell,**grid_electrode_parameters) point_electrode = LFPy.RecExtElectrode(cell,**point_electrode_parameters) grid_electrode.calc_lfp() point_electrode.calc_lfp() print "done" return cell, synapse, grid_electrode, point_electrode
set up simple cell simulation with LFPs in the plane
### Input: set up simple cell simulation with LFPs in the plane ### Response: #vtb def run_sim(morphology=, cell_rotation=dict(x=4.99, y=-4.33, z=3.14), closest_idx=dict(x=-200., y=0., z=800.)): cell = LFPy.Cell(morphology=morphology, **cell_parameters) cell.set_rotation(**cell_rotation) synapse_parameters = { : cell.get_closest_idx(**closest_idx), : 0., : , : 0.5, : 0.0878, : True, } synapse = LFPy.Synapse(cell, **synapse_parameters) synapse.set_spike_times(np.array([1.])) print "running simulation..." cell.simulate(rec_imem=True,rec_isyn=True) grid_electrode = LFPy.RecExtElectrode(cell,**grid_electrode_parameters) point_electrode = LFPy.RecExtElectrode(cell,**point_electrode_parameters) grid_electrode.calc_lfp() point_electrode.calc_lfp() print "done" return cell, synapse, grid_electrode, point_electrode
#vtb def get_parent_info(brain_or_object, endpoint=None): if is_root(brain_or_object): return {} parent = get_parent(brain_or_object) portal_type = get_portal_type(parent) resource = portal_type_to_resource(portal_type) if endpoint is None: endpoint = get_endpoint(parent) return { "parent_id": get_id(parent), "parent_uid": get_uid(parent), "parent_url": url_for(endpoint, resource=resource, uid=get_uid(parent)) }
Generate url information for the parent object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :returns: URL information mapping :rtype: dict
### Input: Generate url information for the parent object :param brain_or_object: A single catalog brain or content object :type brain_or_object: ATContentType/DexterityContentType/CatalogBrain :param endpoint: The named URL endpoint for the root of the items :type endpoint: str/unicode :returns: URL information mapping :rtype: dict ### Response: #vtb def get_parent_info(brain_or_object, endpoint=None): if is_root(brain_or_object): return {} parent = get_parent(brain_or_object) portal_type = get_portal_type(parent) resource = portal_type_to_resource(portal_type) if endpoint is None: endpoint = get_endpoint(parent) return { "parent_id": get_id(parent), "parent_uid": get_uid(parent), "parent_url": url_for(endpoint, resource=resource, uid=get_uid(parent)) }
#vtb def get_comments(self): collection = JSONClientValidated(, collection=, runtime=self._runtime) result = collection.find(self._view_filter()).sort(, DESCENDING) return objects.CommentList(result, runtime=self._runtime, proxy=self._proxy)
Gets all comments. return: (osid.commenting.CommentList) - a list of comments raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
### Input: Gets all comments. return: (osid.commenting.CommentList) - a list of comments raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* ### Response: #vtb def get_comments(self): collection = JSONClientValidated(, collection=, runtime=self._runtime) result = collection.find(self._view_filter()).sort(, DESCENDING) return objects.CommentList(result, runtime=self._runtime, proxy=self._proxy)
#vtb def _parseExpression(self, src, returnList=False): src, term = self._parseExpressionTerm(src) operator = None while src[:1] not in (, , , , , , ): for operator in self.ExpressionOperators: if src.startswith(operator): src = src[len(operator):] break else: operator = src, term2 = self._parseExpressionTerm(src.lstrip()) if term2 is NotImplemented: break else: term = self.cssBuilder.combineTerms(term, operator, term2) if operator is None and returnList: term = self.cssBuilder.combineTerms(term, None, None) return src, term else: return src, term
expr : term [ operator term ]* ;
### Input: expr : term [ operator term ]* ; ### Response: #vtb def _parseExpression(self, src, returnList=False): src, term = self._parseExpressionTerm(src) operator = None while src[:1] not in (, , , , , , ): for operator in self.ExpressionOperators: if src.startswith(operator): src = src[len(operator):] break else: operator = src, term2 = self._parseExpressionTerm(src.lstrip()) if term2 is NotImplemented: break else: term = self.cssBuilder.combineTerms(term, operator, term2) if operator is None and returnList: term = self.cssBuilder.combineTerms(term, None, None) return src, term else: return src, term
#vtb def deployment_check_existence(name, resource_group, **kwargs): result = False resconn = __utils__[](, **kwargs) try: result = resconn.deployments.check_existence( deployment_name=name, resource_group_name=resource_group ) except CloudError as exc: __utils__[](, str(exc), **kwargs) return result
.. versionadded:: 2019.2.0 Check the existence of a deployment. :param name: The name of the deployment to query. :param resource_group: The resource group name assigned to the deployment. CLI Example: .. code-block:: bash salt-call azurearm_resource.deployment_check_existence testdeploy testgroup
### Input: .. versionadded:: 2019.2.0 Check the existence of a deployment. :param name: The name of the deployment to query. :param resource_group: The resource group name assigned to the deployment. CLI Example: .. code-block:: bash salt-call azurearm_resource.deployment_check_existence testdeploy testgroup ### Response: #vtb def deployment_check_existence(name, resource_group, **kwargs): result = False resconn = __utils__[](, **kwargs) try: result = resconn.deployments.check_existence( deployment_name=name, resource_group_name=resource_group ) except CloudError as exc: __utils__[](, str(exc), **kwargs) return result
#vtb def tags( self): tags = [] regex = re.compile(r, re.S) if self.meta["tagString"]: matchList = regex.findall(self.meta["tagString"]) for m in matchList: tags.append(m.strip().replace("@", "")) return tags
*The list of tags associated with this taskpaper object* **Usage:** .. project and task objects can have associated tags. To get a list of tags assigned to an object use: .. code-block:: python projectTag = aProject.tags taskTags = aTasks.tags print projectTag > ['flag', 'home(bathroom)']
### Input: *The list of tags associated with this taskpaper object* **Usage:** .. project and task objects can have associated tags. To get a list of tags assigned to an object use: .. code-block:: python projectTag = aProject.tags taskTags = aTasks.tags print projectTag > ['flag', 'home(bathroom)'] ### Response: #vtb def tags( self): tags = [] regex = re.compile(r, re.S) if self.meta["tagString"]: matchList = regex.findall(self.meta["tagString"]) for m in matchList: tags.append(m.strip().replace("@", "")) return tags
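The tag regex above was stripped in extraction; a hedged reconstruction of its likely intent, matching @tag and @tag(value) tokens:

import re

regex = re.compile(r"(@[^@\s(]+(?:\([^)]*\))?)", re.S)
print(regex.findall("- fix tap @flag @home(bathroom)"))
# ['@flag', '@home(bathroom)'] -- the method then strips the leading '@'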
#vtb def dcm(self, dcm): assert(isinstance(dcm, Matrix3)) self._dcm = dcm.copy() self._q = None self._euler = None
Set the DCM :param dcm: Matrix3
### Input: Set the DCM :param dcm: Matrix3 ### Response: #vtb def dcm(self, dcm): assert(isinstance(dcm, Matrix3)) self._dcm = dcm.copy() self._q = None self._euler = None