Dataset columns:
code: string (lengths 64 to 7.01k)
docstring: string (lengths 2 to 15.8k)
text: string (lengths 144 to 19.2k); the docstring and code joined into an "### Input: ... ### Response: ..." prompt
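Each record below pairs a code field with its docstring field; the text field is simply those two fields pasted into a prompt template. As a minimal sketch of that relationship (the helper name build_text is illustrative and not part of the dataset; the template string is copied verbatim from the dump), the text field can be rebuilt like this:

```python
def build_text(docstring, code):
    # Rebuild the "text" field from a (docstring, code) pair using the
    # prompt template that appears in the dump.
    return "### Input: {} ### Response: {}".format(docstring, code)
```

Applying build_text to any record's docstring and code reproduces that record's text field.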
#vtb def get_subject(self, msg): text, encoding = decode_header(msg['Subject'])[-1] try: text = text.decode(encoding) except AttributeError: pass return text
Extracts the subject line from an EmailMessage object.
#vtb def tradingStatus(symbol=None, token='', version=''): _raiseIfNotStr(symbol) if symbol: return _getJson('deep/trading-status?symbols=' + symbol, token, version) return _getJson('deep/trading-status', token, version)
The Trading status message is used to indicate the current trading status of a security. For IEX-listed securities, IEX acts as the primary market and has the authority to institute a trading halt or trading pause in a security due to news dissemination or regulatory reasons. For non-IEX-listed securities, IEX abides by any regulatory trading halts and trading pauses instituted by the primary or listing market, as applicable. IEX disseminates a full pre-market spin of Trading status messages indicating the trading status of all securities. In the spin, IEX will send out a Trading status message with β€œT” (Trading) for all securities that are eligible for trading at the start of the Pre-Market Session. If a security is absent from the dissemination, firms should assume that the security is being treated as operationally halted in the IEX Trading System. After the pre-market spin, IEX will use the Trading status message to relay changes in trading status for an individual security. Messages will be sent when a security is: Halted Paused* Released into an Order Acceptance Period* Released for trading *The paused and released into an Order Acceptance Period status will be disseminated for IEX-listed securities only. Trading pauses on non-IEX-listed securities will be treated simply as a halt. https://iexcloud.io/docs/api/#deep-trading-status Args: symbol (string); Ticker to request token (string); Access token version (string); API version Returns: dict: result
#vtb def _get_param_names(self): template = Template(self.yaml_string) names = [] for match in re.finditer(template.pattern, template.template): name = match.group('named') or match.group('braced') assert name is not None names.append(name) return names
Get mappable parameters from YAML.
#vtb def acquire_hosting_device_slots(self, context, hosting_device, resource, resource_type, resource_service, num, exclusive=False): bound = hosting_device[] if ((bound is not None and bound != resource[]) or (exclusive and not self._exclusively_used(context, hosting_device, resource[]))): LOG.debug( , {: num, : if bound is None else bound + , : hosting_device[], : resource[]}) return False with context.session.begin(subtransactions=True): res_info = {: resource, : resource_type, : resource_service} slot_info, query = self._get_or_create_slot_allocation( context, hosting_device, res_info) if slot_info is None: LOG.debug( , {: num, : hosting_device[], : resource[]}) return False new_allocation = num + slot_info.num_allocated if hosting_device[][] < new_allocation: LOG.debug( , {: num, : hosting_device[], : resource[]}) self._dispatch_pool_maintenance_job(hosting_device[]) return False if exclusive and bound is None: self._update_hosting_device_exclusivity( context, hosting_device, resource[]) bound = resource[] elif not exclusive and bound is not None: self._update_hosting_device_exclusivity(context, hosting_device, None) bound = None slot_info.num_allocated = new_allocation context.session.add(slot_info) self._dispatch_pool_maintenance_job(hosting_device[]) LOG.info( , {: num, : if bound is None else bound + , : new_allocation, : hosting_device[], : resource[]}) return True
Assign <num> slots in <hosting_device> to logical <resource>. If exclusive is True the hosting device is bound to the resource's tenant. Otherwise it is not bound to any tenant. Returns True if allocation was granted, False otherwise.
#vtb def compact(self, term_doc_matrix): rank_df = self.scorer.get_rank_df(term_doc_matrix) return self._prune_higher_ranked_terms(term_doc_matrix, rank_df, self.rank)
Parameters ---------- term_doc_matrix : TermDocMatrix Term document matrix object to compact Returns ------- TermDocMatrix
#vtb def batch_retrieve_overrides_in_course(self, course_id, assignment_overrides_id, assignment_overrides_assignment_id): path = {} data = {} params = {} path["course_id"] = course_id params["assignment_overrides[id]"] = assignment_overrides_id params["assignment_overrides[assignment_id]"] = assignment_overrides_assignment_id self.logger.debug("GET /api/v1/courses/{course_id}/assignments/overrides with query params: {params} and form data: {data}".format(params=params, data=data, **path)) return self.generic_request("GET", "/api/v1/courses/{course_id}/assignments/overrides".format(**path), data=data, params=params, all_pages=True)
Batch retrieve overrides in a course. Returns a list of specified overrides in this course, providing they target sections/groups/students visible to the current user. Returns null elements in the list for requests that were not found.
#vtb def _set_predictor(self, predictor): if predictor is self._predictor: return self if self.data is not None: self._predictor = predictor return self._free_handle() else: raise LightGBMError("Cannot set predictor after freed raw data, " "set free_raw_data=False when construct Dataset to avoid this.")
Set predictor for continued training. It is not recommended for user to call this function. Please use init_model argument in engine.train() or engine.cv() instead.
#vtb def split(self, bits_count): result = [] array = WBinArray(self.__value, self.__size) if (len(array) % bits_count) > 0: array.resize(len(array) + (bits_count - (len(array) % bits_count))) while len(array): result.append(WBinArray(array[:bits_count], bits_count)) array = array[bits_count:] return result
Split array into smaller parts. Each small array is fixed-length WBinArray (length of that array is bits_count). :param bits_count: array length :return: list of WBinArray
#vtb def check_differences(self): logger.info("Check that mail differences are within the limits.") if self.conf.size_threshold < 0: logger.info("Skip checking for size differences.") if self.conf.content_threshold < 0: logger.info("Skip checking for content differences.") if self.conf.size_threshold < 0 and self.conf.content_threshold < 0: return for mail_a, mail_b in combinations(self.pool, 2): if self.conf.size_threshold > -1: size_difference = abs(mail_a.size - mail_b.size) logger.debug("{} and {} differs by {} bytes in size.".format( mail_a, mail_b, size_difference)) if size_difference > self.conf.size_threshold: raise SizeDiffAboveThreshold if self.conf.content_threshold > -1: content_difference = self.diff(mail_a, mail_b) logger.debug( "{} and {} differs by {} bytes in content.".format( mail_a, mail_b, content_difference)) if content_difference > self.conf.content_threshold: if self.conf.show_diff: logger.info(self.pretty_diff(mail_a, mail_b)) raise ContentDiffAboveThreshold
In-depth check of mail differences. Compare all mails of the duplicate set with each other, both in size and content. Raise an error if we're not within the limits imposed by the threshold setting.
#vtb def avgwave(self, wavelengths=None): x = self._validate_wavelengths(wavelengths).value y = self(x).value num = np.trapz(y * x, x=x) den = np.trapz(y, x=x) if den == 0: avg_wave = 0.0 else: avg_wave = abs(num / den) return avg_wave * self._internal_wave_unit
Calculate the :ref:`average wavelength <synphot-formula-avgwv>`. Parameters ---------- wavelengths : array-like, `~astropy.units.quantity.Quantity`, or `None` Wavelength values for sampling. If not a Quantity, assumed to be in Angstrom. If `None`, `waveset` is used. Returns ------- avg_wave : `~astropy.units.quantity.Quantity` Average wavelength.
#vtb def show_correlation_matrix(self, correlation_matrix): cr_plot.create_correlation_matrix_plot( correlation_matrix, self.title, self.headers_to_test ) pyplot.show()
Shows the given correlation matrix as an image :param correlation_matrix: Correlation matrix of features
#vtb def get_parent_of_type(typ, obj): if type(typ) is not text: typ = typ.__name__ while hasattr(obj, 'parent'): obj = obj.parent if obj.__class__.__name__ == typ: return obj
Finds first object up the parent chain of the given type. If no parent of the given type exists None is returned. Args: typ(str or python class): The type of the model object we are looking for. obj (model object): Python model object which is the start of the search process.
#vtb def launch_job(self, job_id): assert self.api_version.lower() in [, ], \ try: self.create_job(job_id, {: True}) except ValueError: pass return self.read_job(job_id)
Convenience method for launching a job. We use POST for actions outside of HTTP verbs (job launch in this case).
#vtb def printed_out(self, name): opt = self.variables().optional_namestring() req = self.variables().required_namestring() out = out += out += .format(name, req, opt) if self.description: out += .format(self.description) return out
Create a string representation of the action
#vtb def send_short_lpp_packet(self, dest_id, data): pk = CRTPPacket() pk.port = CRTPPort.LOCALIZATION pk.channel = self.GENERIC_CH pk.data = struct.pack(, self.LPS_SHORT_LPP_PACKET, dest_id) + data self._cf.send_packet(pk)
Send ultra-wide-band LPP packet to dest_id
#vtb def network_delete_event(self, network_info): net_id = network_info[] if net_id not in self.network: LOG.error(, net_id) return segid = self.network[net_id].get() tenant_id = self.network[net_id].get() tenant_name = self.get_project_name(tenant_id) net = utils.Dict2Obj(self.network[net_id]) if not tenant_name: LOG.error(, {: tenant_id}) self.update_network_db(net.id, constants.DELETE_FAIL) return try: self.dcnm_client.delete_network(tenant_name, net) self.seg_drvr.release_segmentation_id(segid) self.delete_network_db(net_id) del self.network[net_id] snets = [k for k in self.subnet if ( self.subnet[k].get() == net_id)] [self.subnet.pop(s) for s in snets] except dexc.DfaClientRequestFailed: LOG.error(, {: net.name}) self.update_network_db(net_id, constants.DELETE_FAIL) instances = self.get_vms() instances_related = [k for k in instances if k.network_id == net_id] for vm in instances_related: LOG.debug("deleting vm %s because network is deleted", vm.name) self.delete_vm_function(vm.port_id, vm) self.network_del_notif(tenant_id, tenant_name, net_id)
Process network delete event.
#vtb def create_backed_vol(self, name, backer, _format=): vol_xml = ElementTree.Element() vol_name = ElementTree.SubElement(vol_xml, ) name = .format(name, _format) vol_name.text = name target = ElementTree.SubElement(vol_xml, ) target_format = ElementTree.SubElement(target, ) target_format.set(, _format) vol_cap = ElementTree.SubElement(vol_xml, ) vol_cap.set(, ) vol_cap.text = backer.capacity backing_store = ElementTree.SubElement(vol_xml, ) bs_path = ElementTree.SubElement(backing_store, ) bs_path.text = backer.path bs_format = ElementTree.SubElement(backing_store, ) bs_format.set(, backer.format) XMLString = ElementTree.tostring(vol_xml) self.virsp.createXML(XMLString, 0) return self.find_volume(name)
TODO(rdelinger): think about changing _format. This is a pretty specialized function. It takes an existing volume and creates a new volume that is backed by the existing volume. Sadly there is no easy way to do this in libvirt; the best way I've found is to just create some XML and use the createXML function.
#vtb def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) topic = self.get_topic() context[] = topic context[] = topic.forum try: if hasattr(topic, ) and topic.poll.options.exists(): context[] = topic.poll context[] = self.poll_form_class(poll=topic.poll) context[] = self.request.GET.get(, None) context[] = self.request.GET.get(, None) except ObjectDoesNotExist: pass return context
Returns the context data to provide to the template.
#vtb def serverdir(): path = join(ROOT_DIR, ) path = normpath(path) if sys.platform == : path = realpath(path) return path
Get the location of the server subpackage
#vtb def binarize_signal(signal, treshold="auto", cut="higher"): if treshold == "auto": treshold = (np.max(np.array(signal)) - np.min(np.array(signal)))/2 signal = list(signal) binary_signal = [] for i in range(len(signal)): if cut == "higher": if signal[i] > treshold: binary_signal.append(1) else: binary_signal.append(0) else: if signal[i] < treshold: binary_signal.append(1) else: binary_signal.append(0) return(binary_signal)
Binarize a channel based on a continuous channel. Parameters ---------- signal = array or list The signal channel. treshold = float The treshold value by which to select the events. If "auto", takes the value between the max and the min. cut = str "higher" or "lower", define the events as above or under the treshold. For photosensors, a white screen corresponds usually to higher values. Therefore, if your events were signalled by a black colour, events values would be the lower ones, and you should set the cut to "lower". Returns ---------- list binary_signal Example ---------- >>> import neurokit as nk >>> binary_signal = nk.binarize_signal(signal, treshold=4) Authors ---------- - `Dominique Makowski <https://dominiquemakowski.github.io/>`_ Dependencies ---------- None
#vtb def permutation_entropy(time_series, order=3, delay=1, normalize=False): x = np.array(time_series) hashmult = np.power(order, np.arange(order)) sorted_idx = _embed(x, order=order, delay=delay).argsort(kind='quicksort') hashval = (np.multiply(sorted_idx, hashmult)).sum(1) _, c = np.unique(hashval, return_counts=True) p = np.true_divide(c, c.sum()) pe = -np.multiply(p, np.log2(p)).sum() if normalize: pe /= np.log2(factorial(order)) return pe
Permutation Entropy. Parameters ---------- time_series : list or np.array Time series order : int Order of permutation entropy delay : int Time delay normalize : bool If True, divide by log2(factorial(m)) to normalize the entropy between 0 and 1. Otherwise, return the permutation entropy in bit. Returns ------- pe : float Permutation Entropy References ---------- .. [1] Massimiliano Zanin et al. Permutation Entropy and Its Main Biomedical and Econophysics Applications: A Review. http://www.mdpi.com/1099-4300/14/8/1553/pdf .. [2] Christoph Bandt and Bernd Pompe. Permutation entropy β€” a natural complexity measure for time series. http://stubber.math-inf.uni-greifswald.de/pub/full/prep/2001/11.pdf Notes ----- Last updated (Oct 2018) by Raphael Vallat ([email protected]): - Major speed improvements - Use of base 2 instead of base e - Added normalization Examples -------- 1. Permutation entropy with order 2 >>> x = [4, 7, 9, 10, 6, 11, 3] >>> # Return a value between 0 and log2(factorial(order)) >>> print(permutation_entropy(x, order=2)) 0.918 2. Normalized permutation entropy with order 3 >>> x = [4, 7, 9, 10, 6, 11, 3] >>> # Return a value comprised between 0 and 1. >>> print(permutation_entropy(x, order=3, normalize=True)) 0.589
#vtb def expand_as_args(args): return (isinstance(args, collections.Sequence) and not _is_namedtuple(args) and not _force_leaf(args))
Returns `True` if `args` should be expanded as `*args`.
#vtb def get_startup(self, id_): return _get_request(_STARTUP.format(c_api=_C_API_BEGINNING, api=_API_VERSION, id_=id_, at=self.access_token))
Get startup based on id
#vtb def set_ifo(self,ifo): self.__ifo = ifo if self.job().channel(): self.add_var_opt('channel-name', ifo + ':' + self.job().channel())
Set the ifo name to analyze. If the channel name for the job is defined, then the name of the ifo is prepended to the channel name obtained from the job configuration file and passed with a --channel-name option. @param ifo: two letter ifo code (e.g. L1, H1 or H2).
#vtb def rename(self, old_name, new_name): try: self.api.rename(mkey(old_name), mkey(new_name)) except ResponseError, exc: if "no such key" in exc.args: raise KeyError(old_name) raise
Rename key to a new name.
#vtb def cli(self, prt=sys.stdout): kws = self.objdoc.get_docargs(prt=None) godag = get_godag(kws[], prt=None, loading_bar=False, optional_attrs=[]) usrgos = GetGOs(godag, max_gos=200).get_usrgos(kws.get(), prt) tcntobj = self._get_tcntobj(usrgos, godag, **kws) self.gosubdag = GoSubDag(usrgos, godag, relationships=True, tcntobj=tcntobj, prt=None) grprdflt = GrouperDflts(self.gosubdag, kws[]) ver_list = [godag.version, grprdflt.ver_goslims] prt.write("{VER}\n".format(VER="\n".join(ver_list))) sections = self._read_sections(kws[]) hdrobj = HdrgosSections(self.gosubdag, grprdflt.hdrgos_dflt, sections) grprobj = Grouper("init", usrgos, hdrobj, self.gosubdag) objsecwr = WrSectionsTxt(grprobj, ver_list) if not os.path.exists(kws[]): objsecwr.wr_txt_section_hdrgos(kws[]) objsecwr.wr_txt_section_hdrgos(kws[]) objsecpy = WrSectionsPy(grprobj, ver_list) if in kws: objsecpy.wr_py_sections(kws[], sections, doc=godag.version) sortobj = Sorter(grprobj) objgowr = WrXlsxSortedGos("init", sortobj, ver_list) objgowr.wr_txt_gos(kws[], sortby=objsecpy.fncsortnt) self._prt_cnt_usrgos(usrgos, sys.stdout)
Command-line interface for go_draw script.
#vtb def quick_str_input(prompt, default_value): valid = False str_val = default_value while not valid: input_val = raw_input(prompt + "[{0}]: ".format(default_value)) if input_val == "": str_val = default_value valid = True else: try: str_val = text_type(input_val) valid = True except ValueError: print("ERROR: must be text.") valid = False return str_val
Function to display a quick question for text input. **Parameters:** - **prompt:** Text / question to display - **default_value:** Default value for no entry **Returns:** text_type() or default_value.
#vtb def main(): args = parse_args() logging.info(, __version__) completed_classes = [] classes_with_errors = [] mkdir_p(PATH_CACHE, 0o700) if args.clear_cache: shutil.rmtree(PATH_CACHE) if args.list_courses: logging.info() list_courses(args) return session = get_session() login(session, args.username, args.password) if args.specialization: args.class_names = expand_specializations(session, args.class_names) for class_index, class_name in enumerate(args.class_names): try: logging.info(, class_name, class_index + 1, len(args.class_names)) error_occurred, completed = download_class( session, args, class_name) if completed: completed_classes.append(class_name) if error_occurred: classes_with_errors.append(class_name) except requests.exceptions.HTTPError as e: logging.error(, e) if is_debug_run(): logging.exception(, e) except requests.exceptions.SSLError as e: logging.error(, e) print_ssl_error_message(e) if is_debug_run(): raise except ClassNotFound as e: logging.error(, e) except AuthenticationFailed as e: logging.error(, e) if class_index + 1 != len(args.class_names): logging.info( , args.download_delay) time.sleep(args.download_delay) if completed_classes: logging.info( * 80) logging.info( "Classes which appear completed: " + " ".join(completed_classes)) if classes_with_errors: logging.info( * 80) logging.info( ) for class_name in classes_with_errors: logging.info(, class_name, class_name)
Main entry point for execution as a program (instead of as a module).
#vtb def bsn(self) -> str: def _is_valid_bsn(number: str) -> bool: total = 0 multiplier = 9 for char in number: multiplier = -multiplier if multiplier == 1 else multiplier total += int(char) * multiplier multiplier -= 1 result = total % 11 == 0 return result a, b = (100000000, 999999999) sample = str(self.random.randint(a, b)) while not _is_valid_bsn(sample): sample = str(self.random.randint(a, b)) return sample
Generate a random, but valid ``Burgerservicenummer``. :returns: Random BSN. :Example: 255159705
#vtb def eth_getBlockByNumber(self, number): block_hash = self.reader._get_block_hash(number) block_number = _format_block_number(number) body_key = body_prefix + block_number + block_hash block_data = self.db.get(body_key) body = rlp.decode(block_data, sedes=Block) return body
Get block body by block number. :param number: :return:
#vtb def execute(self, query, *args, **kwargs): tornado_future = Future() cassandra_future = self._session.execute_async(query, *args, **kwargs) self._ioloop.add_callback( self._callback, cassandra_future, tornado_future) return tornado_future
Asynchronously execute the specified CQL query. The execute command also takes optional parameters and trace keyword arguments. See cassandra-python documentation for definition of those parameters.
#vtb def thermal_expansion_coeff(self, structure, temperature, mode="debye"): soec = ElasticTensor(self[0]) v0 = (structure.volume * 1e-30 / structure.num_sites) if mode == "debye": td = soec.debye_temperature(structure) t_ratio = temperature / td integrand = lambda x: (x**4 * np.exp(x)) / (np.exp(x) - 1)**2 cv = 9 * 8.314 * t_ratio**3 * quad(integrand, 0, t_ratio**-1)[0] elif mode == "dulong-petit": cv = 3 * 8.314 else: raise ValueError("Mode must be debye or dulong-petit") tgt = self.get_tgt(temperature, structure) alpha = np.einsum(, soec.compliance_tensor, tgt) alpha *= cv / (1e9 * v0 * 6.022e23) return SquareTensor(alpha)
Gets thermal expansion coefficient from third-order constants. Args: temperature (float): Temperature in kelvin, if not specified will return non-cv-normalized value structure (Structure): Structure to be used in directional heat capacity determination, only necessary if temperature is specified mode (string): mode for finding average heat-capacity, current supported modes are 'debye' and 'dulong-petit'
#vtb def api_version(self, verbose=False): return self.__auth_req_get(self.rest_url, verbose=verbose)
Get information about the API http://docs.opsview.com/doku.php?id=opsview4.6:restapi#api_version_information
#vtb def accepts_contributor_roles(func): if inspect.isclass(func): apply_function_to_members(func, accepts_contributor_roles) return func else: @functools.wraps(func) def decorator(*args, **kwargs): return accepts_roles(*ROLES_CONTRIBUTOR)(func)(*args, **kwargs) return decorator
Decorator that accepts only contributor roles :param func: :return:
#vtb def add(self, data, overwrite=False): if is_srec(data): self.add_srec(data, overwrite) elif is_ihex(data): self.add_ihex(data, overwrite) elif is_ti_txt(data): self.add_ti_txt(data, overwrite) else: raise UnsupportedFileFormatError()
Add given data string by guessing its format. The format must be Motorola S-Records, Intel HEX or TI-TXT. Set `overwrite` to ``True`` to allow already added data to be overwritten.
#vtb def calcFontScaling(self): self.ypx = self.figure.get_size_inches()[1]*self.figure.dpi self.xpx = self.figure.get_size_inches()[0]*self.figure.dpi self.fontSize = self.vertSize*(self.ypx/2.0) self.leftPos = self.axes.get_xlim()[0] self.rightPos = self.axes.get_xlim()[1]
Calculates the current font size and left position for the current window.
#vtb def z_angle_rotate(xy, theta): xy = np.array(xy).T theta = np.array(theta).T out = np.zeros_like(xy) out[...,0] = np.cos(theta)*xy[...,0] - np.sin(theta)*xy[...,1] out[...,1] = np.sin(theta)*xy[...,0] + np.cos(theta)*xy[...,1] return out.T
Rotated the input vector or set of vectors `xy` by the angle `theta`. Parameters ---------- xy : array_like The vector or array of vectors to transform. Must have shape
#vtb def get_portal_by_name(self, portal_name): portals = self.get_portals_list() for p in portals: if portal_name == p[1]: self.set_portal_name( p[1] ) self.set_portal_id( p[0] ) self.set_portal_cik( p[2][1][][] ) return p return None
Set active portal according to the name passed in 'portal_name'. Returns dictionary of device 'serial_number: rid'
#vtb def pan_delta(self, dx_px, dy_px): direction = self.target - self.position distance_from_target = direction.length() direction = direction.normalized() speed_per_radius = self.get_translation_speed(distance_from_target) px_per_unit = self.vport_radius_px / speed_per_radius right = direction ^ self.up translation = (right * (-dx_px / px_per_unit) + self.up * (-dy_px / px_per_unit)) self.position = self.position + translation self.target = self.target + translation
This causes the scene to appear to translate right and up (i.e., what really happens is the camera is translated left and down). This is also called "panning" in some software packages. Passing in negative delta values causes the opposite motion.
#vtb def hostcmd_push(base_path, project_name, engine_name, vars_files=None, config_file=None, **kwargs): assert_initialized(base_path, config_file) config = get_config(base_path, vars_files=vars_files, engine_name=engine_name, project_name=project_name, config_file=config_file) engine_obj = load_engine([, ], engine_name, config.project_name, config[], **kwargs) logger.debug(, project_name=config.project_name) push_images(base_path, config.image_namespace, engine_obj, config, save_conductor=config.save_conductor, **kwargs)
Push images to a registry. Requires authenticating with the registry prior to starting the push. If your engine's config file does not already contain an authorization for the registry, pass username and/or password. If you exclude password, you will be prompted.
#vtb def mime(self): author = self.author sender = self.sender if not author: raise ValueError("You must specify an author.") if not self.subject: raise ValueError("You must specify a subject.") if len(self.recipients) == 0: raise ValueError("You must specify at least one recipient.") if not self.plain: raise ValueError("You must provide plain text content.") if not self._dirty and self._processed: return self._mime self._processed = False plain = MIMEText(self._callable(self.plain), 'plain', self.encoding) rich = None if self.rich: rich = MIMEText(self._callable(self.rich), 'html', self.encoding) message = self._mime_document(plain, rich) headers = self._build_header_list(author, sender) self._add_headers_to_message(message, headers) self._mime = message self._processed = True self._dirty = False return message
Produce the final MIME message.
#vtb def two_lorentzian(freq, freq0_1, freq0_2, area1, area2, hwhm1, hwhm2, phase1, phase2, offset, drift): return (lorentzian(freq, freq0_1, area1, hwhm1, phase1, offset, drift) + lorentzian(freq, freq0_2, area2, hwhm2, phase2, offset, drift))
A two-Lorentzian model. This is simply the sum of two lorentzian functions in some part of the spectrum. Each individual Lorentzian has its own peak frequency, area, hwhm and phase, but they share common offset and drift parameters.
#vtb def blend(self, blend_function=stack): new_scn = Scene() common_datasets = self.shared_dataset_ids for ds_id in common_datasets: datasets = [scn[ds_id] for scn in self.scenes if ds_id in scn] new_scn[ds_id] = blend_function(datasets) return new_scn
Blend the datasets into one scene. .. note:: Blending is not currently optimized for generator-based MultiScene.
#vtb def normalize_modpath(modpath, hide_init=True, hide_main=False): if six.PY2: if modpath.endswith('.pyc'): modpath = modpath[:-1] if hide_init: if basename(modpath) == '__init__.py': modpath = dirname(modpath) hide_main = True else: modpath_with_init = join(modpath, '__init__.py') if exists(modpath_with_init): modpath = modpath_with_init if hide_main: if basename(modpath) == '__main__.py': parallel_init = join(dirname(modpath), '__init__.py') if exists(parallel_init): modpath = dirname(modpath) return modpath
Normalizes __init__ and __main__ paths. Notes: Adds __init__ if reasonable, but only removes __main__ by default Args: hide_init (bool): if True, always return package modules as __init__.py files otherwise always return the dpath. hide_main (bool): if True, always strip away main files otherwise ignore __main__.py. CommandLine: xdoctest -m xdoctest.static_analysis normalize_modpath Example: >>> import xdoctest.static_analysis as static >>> modpath = static.__file__ >>> assert static.normalize_modpath(modpath) == modpath.replace('.pyc', '.py') >>> dpath = dirname(modpath) >>> res0 = static.normalize_modpath(dpath, hide_init=0, hide_main=0) >>> res1 = static.normalize_modpath(dpath, hide_init=0, hide_main=1) >>> res2 = static.normalize_modpath(dpath, hide_init=1, hide_main=0) >>> res3 = static.normalize_modpath(dpath, hide_init=1, hide_main=1) >>> assert res0.endswith('__init__.py') >>> assert res1.endswith('__init__.py') >>> assert not res2.endswith('.py') >>> assert not res3.endswith('.py')
#vtb def _slice_replace(code, index, old, new): nodes = [str(node) for node in code.get(index)] substring = "".join(nodes).replace(old, new) code.nodes[index] = parse_anything(substring).nodes
Replace the string *old* with *new* across *index* in *code*.
### Input: Replace the string *old* with *new* across *index* in *code*. ### Response: #vtb def _slice_replace(code, index, old, new): nodes = [str(node) for node in code.get(index)] substring = "".join(nodes).replace(old, new) code.nodes[index] = parse_anything(substring).nodes
#vtb def get_languages_from_item(ct_item, item): try: item_lan = TransItemLanguage.objects.filter(content_type__pk=ct_item.id, object_id=item.id).get() languages = [lang.code for lang in item_lan.languages.all()] return languages except TransItemLanguage.DoesNotExist: return []
Get the languages configured for the current item :param ct_item: :param item: :return:
### Input: Get the languages configured for the current item :param ct_item: :param item: :return: ### Response: #vtb def get_languages_from_item(ct_item, item): try: item_lan = TransItemLanguage.objects.filter(content_type__pk=ct_item.id, object_id=item.id).get() languages = [lang.code for lang in item_lan.languages.all()] return languages except TransItemLanguage.DoesNotExist: return []
#vtb def plot_blob( sampler, blobidx=0, label=None, last_step=False, figure=None, **kwargs ): modelx, model = _process_blob(sampler, blobidx, last_step) if label is None: label = "Model output {0}".format(blobidx) if modelx is None: f = plot_distribution(model, label, figure=figure) else: f = plot_fit( sampler, modelidx=blobidx, last_step=last_step, label=label, figure=figure, **kwargs ) return f
Plot a metadata blob as a fit to spectral data or value distribution Additional ``kwargs`` are passed to `plot_fit`. Parameters ---------- sampler : `emcee.EnsembleSampler` Sampler with a stored chain. blobidx : int, optional Metadata blob index to plot. label : str, optional Label for the value distribution. Labels for the fit plot can be passed as ``xlabel`` and ``ylabel`` and will be passed to `plot_fit`. Returns ------- figure : `matplotlib.pyplot.Figure` `matplotlib` figure instance containing the plot.
### Input: Plot a metadata blob as a fit to spectral data or value distribution Additional ``kwargs`` are passed to `plot_fit`. Parameters ---------- sampler : `emcee.EnsembleSampler` Sampler with a stored chain. blobidx : int, optional Metadata blob index to plot. label : str, optional Label for the value distribution. Labels for the fit plot can be passed as ``xlabel`` and ``ylabel`` and will be passed to `plot_fit`. Returns ------- figure : `matplotlib.pyplot.Figure` `matplotlib` figure instance containing the plot. ### Response: #vtb def plot_blob( sampler, blobidx=0, label=None, last_step=False, figure=None, **kwargs ): modelx, model = _process_blob(sampler, blobidx, last_step) if label is None: label = "Model output {0}".format(blobidx) if modelx is None: f = plot_distribution(model, label, figure=figure) else: f = plot_fit( sampler, modelidx=blobidx, last_step=last_step, label=label, figure=figure, **kwargs ) return f
#vtb def style_data(self): def recursive_get(data, keys): if len(keys): return recursive_get(data.get(keys[0]), keys[1:]) else: return data geometries = recursive_get(self.data, self.object_path.split())[] for feature in geometries: feature.setdefault(, {}).setdefault(, {}).update(self.style_function(feature))
Applies self.style_function to each feature of self.data.
### Input: Applies self.style_function to each feature of self.data. ### Response: #vtb def style_data(self): def recursive_get(data, keys): if len(keys): return recursive_get(data.get(keys[0]), keys[1:]) else: return data geometries = recursive_get(self.data, self.object_path.split())[] for feature in geometries: feature.setdefault(, {}).setdefault(, {}).update(self.style_function(feature))
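The style_data record above walks a GeoJSON structure and merges the output of a style function into each feature. A small runnable sketch of the same pattern on a plain dict; the nesting keys ('properties', 'style') and the sample data are assumptions for illustration:

    geojson = {"type": "FeatureCollection",
               "features": [{"type": "Feature", "properties": {"value": 3}},
                            {"type": "Feature", "properties": {"value": 8}}]}

    def style_function(feature):
        # Color features red above a threshold, gray otherwise.
        return {"fillColor": "red" if feature["properties"]["value"] > 5 else "gray"}

    for feature in geojson["features"]:
        feature.setdefault("properties", {}).setdefault("style", {}).update(style_function(feature))

    print(geojson["features"][0]["properties"]["style"])  # {'fillColor': 'gray'}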
#vtb def build_query_string(self, data): query = [] keys_to_be_removed = [] for key, value in data.items(): if key not in [, , ]: if not key == : if key == : value = .join(str(val) for val in value) keys_to_be_removed.append(key) query.append(.format(key, value)) keys_to_be_removed.append(key) keys_to_be_removed.append(key) querystring = .join(query) data[] = .format(data[], querystring) for k in list(set(keys_to_be_removed)): del data[k] return data
This method occurs after dumping the data into the class. Args: data (dict): dictionary of all the query values Returns: data (dict): ordered dict of all the values
### Input: This method occurs after dumping the data into the class. Args: data (dict): dictionary of all the query values Returns: data (dict): ordered dict of all the values ### Response: #vtb def build_query_string(self, data): query = [] keys_to_be_removed = [] for key, value in data.items(): if key not in [, , ]: if not key == : if key == : value = .join(str(val) for val in value) keys_to_be_removed.append(key) query.append(.format(key, value)) keys_to_be_removed.append(key) keys_to_be_removed.append(key) querystring = .join(query) data[] = .format(data[], querystring) for k in list(set(keys_to_be_removed)): del data[k] return data
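The build_query_string record above folds list values into joined strings and appends the result to the URL as a query string. A hedged alternative sketch with the standard library, not the class's actual serialization rules; the parameter names and URL are made up:

    from urllib.parse import urlencode

    params = {"status": "open", "labels": ["bug", "ui"], "page": 2}
    # Join list values explicitly, then let urlencode handle escaping.
    flat = {k: ",".join(v) if isinstance(v, list) else v for k, v in params.items()}
    url = "https://example.org/search?" + urlencode(flat)
    print(url)  # https://example.org/search?status=open&labels=bug%2Cui&page=2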
#vtb def from_bits(self, bits): if len(bits) != Person.BITS_PER_PERSON: raise ValueError(u"Person requires exactly {} bits".format( Person.BITS_PER_PERSON )) self.sitting = bool(bits[0]) return self
Set this person from bits (ignores the id) :param bits: Bits representing a person :type bits: bytearray :rtype: Person :raises ValueError: Bits has an unexpected length
### Input: Set this person from bits (ignores the id) :param bits: Bits representing a person :type bits: bytearray :rtype: Person :raises ValueError: Bits has an unexpected length ### Response: #vtb def from_bits(self, bits): if len(bits) != Person.BITS_PER_PERSON: raise ValueError(u"Person requires exactly {} bits".format( Person.BITS_PER_PERSON )) self.sitting = bool(bits[0]) return self
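The from_bits record above validates the bit count before restoring state. A self-contained usage sketch; the one-bit BITS_PER_PERSON layout is an assumption made only so the example runs on its own:

    class Person:
        BITS_PER_PERSON = 1  # assumed layout for this sketch

        def __init__(self):
            self.sitting = False

        def from_bits(self, bits):
            if len(bits) != Person.BITS_PER_PERSON:
                raise ValueError(u"Person requires exactly {} bits".format(Person.BITS_PER_PERSON))
            self.sitting = bool(bits[0])
            return self

    print(Person().from_bits(bytearray([1])).sitting)  # True
    # Person().from_bits(bytearray([1, 0])) would raise ValueError (wrong length).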
#vtb def create(style_dataset, content_dataset, style_feature=None, content_feature=None, max_iterations=None, model=, verbose=True, batch_size = 6, **kwargs): if len(style_dataset) == 0: raise _ToolkitError("style_dataset SFrame cannot be empty") if len(content_dataset) == 0: raise _ToolkitError("content_dataset SFrame cannot be empty") if(batch_size < 1): raise _ToolkitError(" must be greater than or equal to 1") from ._sframe_loader import SFrameSTIter as _SFrameSTIter import mxnet as _mx from .._mxnet import _mxnet_utils if style_feature is None: style_feature = _tkutl._find_only_image_column(style_dataset) if content_feature is None: content_feature = _tkutl._find_only_image_column(content_dataset) if verbose: print("Using in style_dataset as feature column and using " " in content_dataset as feature column".format(style_feature, content_feature)) _raise_error_if_not_training_sframe(style_dataset, style_feature) _raise_error_if_not_training_sframe(content_dataset, content_feature) params = { : batch_size, : 2, : 0.001, : 1.0, : [1e-4, 1e-4, 1e-4, 1e-4], : True, : False, : False, : (256, 256), : , : False, : False, : 0, : 0, : 0.9, : 0.9, : 0.0, : 1.25, : 0.05, : 0.05, : 0.05, : 0.05, : True, : (.05, 1.5), : 0.0, : 20, : 2, } if in kwargs: new_keys = set(kwargs[].keys()) set_keys = set(params.keys()) unsupported = new_keys - set_keys if unsupported: raise _ToolkitError(.format(unsupported)) params.update(kwargs[]) _content_loss_mult = params[] _style_loss_mult = params[] num_gpus = _mxnet_utils.get_num_gpus_in_use(max_devices=params[]) batch_size_each = params[] // max(num_gpus, 1) batch_size = max(num_gpus, 1) * batch_size_each input_shape = params[] iterations = 0 if max_iterations is None: max_iterations = len(style_dataset) * 10000 if verbose: print(.format(max_iterations)) if params[]: content_loader_type = % params[] else: content_loader_type = params[] content_images_loader = _SFrameSTIter(content_dataset, batch_size, shuffle=True, feature_column=content_feature, input_shape=input_shape, loader_type=content_loader_type, aug_params=params, sequential=params[]) ctx = _mxnet_utils.get_mxnet_context(max_devices=params[]) num_styles = len(style_dataset) from ._model import Transformer as _Transformer transformer_model_path = _pre_trained_models.STYLE_TRANSFER_BASE_MODELS[model]().get_model_path() transformer = _Transformer(num_styles, batch_size_each) transformer.collect_params().initialize(ctx=ctx) if params[]: transformer.load_params(transformer_model_path, ctx, allow_missing=True) from ._model import Vgg16 as _Vgg16 vgg_model_path = _pre_trained_models.STYLE_TRANSFER_BASE_MODELS[]().get_model_path() vgg_model = _Vgg16() vgg_model.collect_params().initialize(ctx=ctx) vgg_model.load_params(vgg_model_path, ctx=ctx, ignore_extra=True) vgg_model.hybridize() from mxnet import gluon as _gluon from ._model import gram_matrix as _gram_matrix if params[]: trainable_params = transformer.collect_params() else: trainable_params = transformer.collect_params() trainer = _gluon.Trainer(trainable_params, , {: params[]}) mse_loss = _gluon.loss.L2Loss() start_time = _time.time() smoothed_loss = None last_time = 0 cuda_gpus = _mxnet_utils.get_gpus_in_use(max_devices=params[]) num_mxnet_gpus = len(cuda_gpus) if verbose: cuda_mem_req = 260 + batch_size_each * 880 + num_styles * 1.4 _tkutl._print_neural_compute_device(cuda_gpus=cuda_gpus, use_mps=False, cuda_mem_req=cuda_mem_req, has_mps_impl=False) if verbose: print() style_images_loader = _SFrameSTIter(style_dataset, batch_size, shuffle=False, 
num_epochs=1, feature_column=style_feature, input_shape=input_shape, loader_type=, sequential=params[]) num_layers = len(params[]) gram_chunks = [[] for _ in range(num_layers)] for s_batch in style_images_loader: s_data = _gluon.utils.split_and_load(s_batch.data[0], ctx_list=ctx, batch_axis=0) for s in s_data: vgg16_s = _vgg16_data_prep(s) ret = vgg_model(vgg16_s) grams = [_gram_matrix(x) for x in ret] for i, gram in enumerate(grams): if gram.context != _mx.cpu(0): gram = gram.as_in_context(_mx.cpu(0)) gram_chunks[i].append(gram) del style_images_loader grams = [ _mx.nd.concat(*chunks, dim=0)[:num_styles] for chunks in gram_chunks ] ctx_grams = {} if ctx[0] == _mx.cpu(0): ctx_grams[_mx.cpu(0)] = grams else: for ctx0 in ctx: ctx_grams[ctx0] = [gram.as_in_context(ctx0) for gram in grams] vgg_content_loss_layer = params[] rs = _np.random.RandomState(1234) while iterations < max_iterations: content_images_loader.reset() for c_batch in content_images_loader: c_data = _gluon.utils.split_and_load(c_batch.data[0], ctx_list=ctx, batch_axis=0) Ls = [] curr_content_loss = [] curr_style_loss = [] with _mx.autograd.record(): for c in c_data: indices = _mx.nd.array(rs.randint(num_styles, size=batch_size_each), dtype=_np.int64, ctx=c.context) p = transformer(c, indices) vgg16_p = _vgg16_data_prep(p) vgg16_c = _vgg16_data_prep(c) p_vgg_outputs = vgg_model(vgg16_p) c_vgg_outputs = vgg_model(vgg16_c) c_content_layer = c_vgg_outputs[vgg_content_loss_layer] p_content_layer = p_vgg_outputs[vgg_content_loss_layer] } return StyleTransfer(state)
Create a :class:`StyleTransfer` model. Parameters ---------- style_dataset: SFrame Input style images. The columns named by the ``style_feature`` parameters will be extracted for training the model. content_dataset : SFrame Input content images. The columns named by the ``content_feature`` parameters will be extracted for training the model. style_feature: string Name of the column containing the input images in style SFrame. 'None' (the default) indicates the only image column in the style SFrame should be used as the feature. content_feature: string Name of the column containing the input images in content SFrame. 'None' (the default) indicates the only image column in the content SFrame should be used as the feature. max_iterations : int The number of training iterations. If 'None' (the default), then it will be automatically determined based on the amount of data you provide. model : string optional Style transfer model to use: - "resnet-16" : Fast and small-sized residual network that uses VGG-16 as reference network during training. batch_size : int, optional If you are getting memory errors, try decreasing this value. If you have a powerful computer, increasing this value may improve training throughput. verbose : bool, optional If True, print progress updates and model details. Returns ------- out : StyleTransfer A trained :class:`StyleTransfer` model. See Also -------- StyleTransfer Examples -------- .. sourcecode:: python # Create datasets >>> content_dataset = turicreate.image_analysis.load_images('content_images/') >>> style_dataset = turicreate.image_analysis.load_images('style_images/') # Train a style transfer model >>> model = turicreate.style_transfer.create(content_dataset, style_dataset) # Stylize an image on all styles >>> stylized_images = model.stylize(data) # Visualize the stylized images >>> stylized_images.explore()
### Input: Create a :class:`StyleTransfer` model. Parameters ---------- style_dataset: SFrame Input style images. The columns named by the ``style_feature`` parameters will be extracted for training the model. content_dataset : SFrame Input content images. The columns named by the ``content_feature`` parameters will be extracted for training the model. style_feature: string Name of the column containing the input images in style SFrame. 'None' (the default) indicates the only image column in the style SFrame should be used as the feature. content_feature: string Name of the column containing the input images in content SFrame. 'None' (the default) indicates the only image column in the content SFrame should be used as the feature. max_iterations : int The number of training iterations. If 'None' (the default), then it will be automatically determined based on the amount of data you provide. model : string optional Style transfer model to use: - "resnet-16" : Fast and small-sized residual network that uses VGG-16 as reference network during training. batch_size : int, optional If you are getting memory errors, try decreasing this value. If you have a powerful computer, increasing this value may improve training throughput. verbose : bool, optional If True, print progress updates and model details. Returns ------- out : StyleTransfer A trained :class:`StyleTransfer` model. See Also -------- StyleTransfer Examples -------- .. sourcecode:: python # Create datasets >>> content_dataset = turicreate.image_analysis.load_images('content_images/') >>> style_dataset = turicreate.image_analysis.load_images('style_images/') # Train a style transfer model >>> model = turicreate.style_transfer.create(content_dataset, style_dataset) # Stylize an image on all styles >>> stylized_images = model.stylize(data) # Visualize the stylized images >>> stylized_images.explore() ### Response: #vtb def create(style_dataset, content_dataset, style_feature=None, content_feature=None, max_iterations=None, model=, verbose=True, batch_size = 6, **kwargs): if len(style_dataset) == 0: raise _ToolkitError("style_dataset SFrame cannot be empty") if len(content_dataset) == 0: raise _ToolkitError("content_dataset SFrame cannot be empty") if(batch_size < 1): raise _ToolkitError(" must be greater than or equal to 1") from ._sframe_loader import SFrameSTIter as _SFrameSTIter import mxnet as _mx from .._mxnet import _mxnet_utils if style_feature is None: style_feature = _tkutl._find_only_image_column(style_dataset) if content_feature is None: content_feature = _tkutl._find_only_image_column(content_dataset) if verbose: print("Using in style_dataset as feature column and using " " in content_dataset as feature column".format(style_feature, content_feature)) _raise_error_if_not_training_sframe(style_dataset, style_feature) _raise_error_if_not_training_sframe(content_dataset, content_feature) params = { : batch_size, : 2, : 0.001, : 1.0, : [1e-4, 1e-4, 1e-4, 1e-4], : True, : False, : False, : (256, 256), : , : False, : False, : 0, : 0, : 0.9, : 0.9, : 0.0, : 1.25, : 0.05, : 0.05, : 0.05, : 0.05, : True, : (.05, 1.5), : 0.0, : 20, : 2, } if in kwargs: new_keys = set(kwargs[].keys()) set_keys = set(params.keys()) unsupported = new_keys - set_keys if unsupported: raise _ToolkitError(.format(unsupported)) params.update(kwargs[]) _content_loss_mult = params[] _style_loss_mult = params[] num_gpus = _mxnet_utils.get_num_gpus_in_use(max_devices=params[]) batch_size_each = params[] // max(num_gpus, 1) batch_size = max(num_gpus, 1) * 
batch_size_each input_shape = params[] iterations = 0 if max_iterations is None: max_iterations = len(style_dataset) * 10000 if verbose: print(.format(max_iterations)) if params[]: content_loader_type = % params[] else: content_loader_type = params[] content_images_loader = _SFrameSTIter(content_dataset, batch_size, shuffle=True, feature_column=content_feature, input_shape=input_shape, loader_type=content_loader_type, aug_params=params, sequential=params[]) ctx = _mxnet_utils.get_mxnet_context(max_devices=params[]) num_styles = len(style_dataset) from ._model import Transformer as _Transformer transformer_model_path = _pre_trained_models.STYLE_TRANSFER_BASE_MODELS[model]().get_model_path() transformer = _Transformer(num_styles, batch_size_each) transformer.collect_params().initialize(ctx=ctx) if params[]: transformer.load_params(transformer_model_path, ctx, allow_missing=True) from ._model import Vgg16 as _Vgg16 vgg_model_path = _pre_trained_models.STYLE_TRANSFER_BASE_MODELS[]().get_model_path() vgg_model = _Vgg16() vgg_model.collect_params().initialize(ctx=ctx) vgg_model.load_params(vgg_model_path, ctx=ctx, ignore_extra=True) vgg_model.hybridize() from mxnet import gluon as _gluon from ._model import gram_matrix as _gram_matrix if params[]: trainable_params = transformer.collect_params() else: trainable_params = transformer.collect_params() trainer = _gluon.Trainer(trainable_params, , {: params[]}) mse_loss = _gluon.loss.L2Loss() start_time = _time.time() smoothed_loss = None last_time = 0 cuda_gpus = _mxnet_utils.get_gpus_in_use(max_devices=params[]) num_mxnet_gpus = len(cuda_gpus) if verbose: cuda_mem_req = 260 + batch_size_each * 880 + num_styles * 1.4 _tkutl._print_neural_compute_device(cuda_gpus=cuda_gpus, use_mps=False, cuda_mem_req=cuda_mem_req, has_mps_impl=False) if verbose: print() style_images_loader = _SFrameSTIter(style_dataset, batch_size, shuffle=False, num_epochs=1, feature_column=style_feature, input_shape=input_shape, loader_type=, sequential=params[]) num_layers = len(params[]) gram_chunks = [[] for _ in range(num_layers)] for s_batch in style_images_loader: s_data = _gluon.utils.split_and_load(s_batch.data[0], ctx_list=ctx, batch_axis=0) for s in s_data: vgg16_s = _vgg16_data_prep(s) ret = vgg_model(vgg16_s) grams = [_gram_matrix(x) for x in ret] for i, gram in enumerate(grams): if gram.context != _mx.cpu(0): gram = gram.as_in_context(_mx.cpu(0)) gram_chunks[i].append(gram) del style_images_loader grams = [ _mx.nd.concat(*chunks, dim=0)[:num_styles] for chunks in gram_chunks ] ctx_grams = {} if ctx[0] == _mx.cpu(0): ctx_grams[_mx.cpu(0)] = grams else: for ctx0 in ctx: ctx_grams[ctx0] = [gram.as_in_context(ctx0) for gram in grams] vgg_content_loss_layer = params[] rs = _np.random.RandomState(1234) while iterations < max_iterations: content_images_loader.reset() for c_batch in content_images_loader: c_data = _gluon.utils.split_and_load(c_batch.data[0], ctx_list=ctx, batch_axis=0) Ls = [] curr_content_loss = [] curr_style_loss = [] with _mx.autograd.record(): for c in c_data: indices = _mx.nd.array(rs.randint(num_styles, size=batch_size_each), dtype=_np.int64, ctx=c.context) p = transformer(c, indices) vgg16_p = _vgg16_data_prep(p) vgg16_c = _vgg16_data_prep(c) p_vgg_outputs = vgg_model(vgg16_p) c_vgg_outputs = vgg_model(vgg16_c) c_content_layer = c_vgg_outputs[vgg_content_loss_layer] p_content_layer = p_vgg_outputs[vgg_content_loss_layer] } return StyleTransfer(state)
#vtb def use_certificate(self, cert): if not isinstance(cert, X509): raise TypeError("cert must be an X509 instance") use_result = _lib.SSL_CTX_use_certificate(self._context, cert._x509) if not use_result: _raise_current_error()
Load a certificate from a X509 object :param cert: The X509 object :return: None
### Input: Load a certificate from a X509 object :param cert: The X509 object :return: None ### Response: #vtb def use_certificate(self, cert): if not isinstance(cert, X509): raise TypeError("cert must be an X509 instance") use_result = _lib.SSL_CTX_use_certificate(self._context, cert._x509) if not use_result: _raise_current_error()
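The use_certificate record above rejects anything that is not an X509 instance before handing it to OpenSSL. A hedged usage sketch with pyOpenSSL; the certificate path is illustrative and must point at an existing PEM file:

    from OpenSSL import SSL, crypto

    with open("server.crt", "rb") as f:  # illustrative path
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())

    ctx = SSL.Context(SSL.TLSv1_2_METHOD)
    ctx.use_certificate(cert)  # would raise TypeError if cert were not an X509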
#vtb def circles(st, layer, axis, ax=None, talpha=1.0, cedge=, cface=): pos = st.obj_get_positions() rad = st.obj_get_radii() shape = st.ishape.shape.tolist() shape.pop(axis) if ax is None: fig = plt.figure() axisbg = if cface == else sx, sy = ((1,shape[1]/float(shape[0])) if shape[0] > shape[1] else (shape[0]/float(shape[1]), 1)) ax = fig.add_axes((0,0, sx, sy), axisbg=axisbg) particles = np.arange(len(pos))[np.abs(pos[:,axis] - layer) < rad] scale = 1.0 for i in particles: p = pos[i].copy() r = 2*np.sqrt(rad[i]**2 - (p[axis] - layer)**2) if axis==0: ix = 1; iy = 2 elif axis == 1: ix = 0; iy = 2 elif axis==2: ix = 0; iy = 1 c = Circle((p[ix]/scale, p[iy]/scale), radius=r/2/scale, fc=cface, ec=cedge, alpha=talpha) ax.add_patch(c) plt.axis() return ax
Plots a set of circles corresponding to a slice through the platonic structure. Copied from twoslice_overlay with comments, standaloneness. Inputs ------ pos : array of particle positions; [N,3] rad : array of particle radii; [N] ax : plt.axis instance layer : Which layer of the slice to use. axis : The slice of the image, 0, 1, or 2. cedge : edge color cface : face color talpha : Alpha of the thing
### Input: Plots a set of circles corresponding to a slice through the platonic structure. Copied from twoslice_overlay with comments, standaloneness. Inputs ------ pos : array of particle positions; [N,3] rad : array of particle radii; [N] ax : plt.axis instance layer : Which layer of the slice to use. axis : The slice of the image, 0, 1, or 2. cedge : edge color cface : face color talpha : Alpha of the thing ### Response: #vtb def circles(st, layer, axis, ax=None, talpha=1.0, cedge=, cface=): pos = st.obj_get_positions() rad = st.obj_get_radii() shape = st.ishape.shape.tolist() shape.pop(axis) if ax is None: fig = plt.figure() axisbg = if cface == else sx, sy = ((1,shape[1]/float(shape[0])) if shape[0] > shape[1] else (shape[0]/float(shape[1]), 1)) ax = fig.add_axes((0,0, sx, sy), axisbg=axisbg) particles = np.arange(len(pos))[np.abs(pos[:,axis] - layer) < rad] scale = 1.0 for i in particles: p = pos[i].copy() r = 2*np.sqrt(rad[i]**2 - (p[axis] - layer)**2) if axis==0: ix = 1; iy = 2 elif axis == 1: ix = 0; iy = 2 elif axis==2: ix = 0; iy = 1 c = Circle((p[ix]/scale, p[iy]/scale), radius=r/2/scale, fc=cface, ec=cedge, alpha=talpha) ax.add_patch(c) plt.axis() return ax
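The circles record above draws one matplotlib Circle patch per particle intersecting the chosen slice. A compact runnable sketch of that drawing primitive; the positions and radii are made-up sample data:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Circle

    pos = np.array([[0.3, 0.4], [0.6, 0.7]])  # sample 2-D centers
    rad = np.array([0.10, 0.15])              # sample radii

    fig, ax = plt.subplots()
    for (x, y), r in zip(pos, rad):
        ax.add_patch(Circle((x, y), radius=r, fc="none", ec="black", alpha=0.8))
    ax.set_aspect("equal")
    plt.show()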
#vtb def visit_exact_match_value(self, node, fieldnames=None): if not fieldnames: fieldnames = [] else: fieldnames = force_list(fieldnames) if ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames[0]: return self._generate_exact_author_query(node.value) elif ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames[0]: return self._generate_type_code_query(node.value) elif ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames: return self._generate_journal_nested_queries(node.value) bai_fieldnames = self._generate_fieldnames_if_bai_query( node.value, bai_field_variation=FieldVariations.raw, query_bai_field_if_dots_in_name=False ) if ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames: term_queries = [] for field in fieldnames: term_query = \ {: {field: _truncate_date_value_according_on_date_field(field, node.value).dumps()}} term_queries.append( generate_nested_query(ElasticSearchVisitor.DATE_NESTED_QUERY_PATH, term_query) if field in ElasticSearchVisitor.DATE_NESTED_FIELDS else term_query ) elif ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] in fieldnames: term_queries = [ generate_nested_query(ElasticSearchVisitor.AUTHORS_NESTED_QUERY_PATH, {: {field: node.value}}) for field in (bai_fieldnames or fieldnames) ] else: term_queries = [{: {field: node.value}} for field in (bai_fieldnames or fieldnames)] return wrap_queries_in_bool_clauses_if_more_than_one(term_queries, use_must_clause=False)
Generates a term query (exact search in ElasticSearch).
### Input: Generates a term query (exact search in ElasticSearch). ### Response: #vtb def visit_exact_match_value(self, node, fieldnames=None): if not fieldnames: fieldnames = [] else: fieldnames = force_list(fieldnames) if ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames[0]: return self._generate_exact_author_query(node.value) elif ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames[0]: return self._generate_type_code_query(node.value) elif ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames: return self._generate_journal_nested_queries(node.value) bai_fieldnames = self._generate_fieldnames_if_bai_query( node.value, bai_field_variation=FieldVariations.raw, query_bai_field_if_dots_in_name=False ) if ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] == fieldnames: term_queries = [] for field in fieldnames: term_query = \ {: {field: _truncate_date_value_according_on_date_field(field, node.value).dumps()}} term_queries.append( generate_nested_query(ElasticSearchVisitor.DATE_NESTED_QUERY_PATH, term_query) if field in ElasticSearchVisitor.DATE_NESTED_FIELDS else term_query ) elif ElasticSearchVisitor.KEYWORD_TO_ES_FIELDNAME[] in fieldnames: term_queries = [ generate_nested_query(ElasticSearchVisitor.AUTHORS_NESTED_QUERY_PATH, {: {field: node.value}}) for field in (bai_fieldnames or fieldnames) ] else: term_queries = [{: {field: node.value}} for field in (bai_fieldnames or fieldnames)] return wrap_queries_in_bool_clauses_if_more_than_one(term_queries, use_must_clause=False)
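The visit_exact_match_value record above builds Elasticsearch term queries and wraps author and date fields in nested queries. A sketch of the query bodies involved, with made-up field names and values to show the shapes only:

    # Plain exact match on a keyword field.
    term_query = {"term": {"document_type": "article"}}

    # The same term query wrapped for a nested mapping, as done for author fields.
    nested_query = {
        "nested": {
            "path": "authors",
            "query": {"term": {"authors.full_name": "Smith, J."}},
        }
    }

    # Alternatives combined with an OR, mirroring the bool/should wrapping.
    combined = {"bool": {"should": [term_query, nested_query]}}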
#vtb def _handleDelete(self): if self.cursorPos < len(self.inputBuffer): self.inputBuffer = self.inputBuffer[0:self.cursorPos] + self.inputBuffer[self.cursorPos+1:] self._refreshInputPrompt(len(self.inputBuffer)+1)
Handles "delete" characters
### Input: Handles "delete" characters ### Response: #vtb def _handleDelete(self): if self.cursorPos < len(self.inputBuffer): self.inputBuffer = self.inputBuffer[0:self.cursorPos] + self.inputBuffer[self.cursorPos+1:] self._refreshInputPrompt(len(self.inputBuffer)+1)
#vtb def verify_sc_url(url: str) -> bool: parsed = urlsplit(url) scheme: str = parsed.scheme netloc: str = parsed.netloc path: str = parsed.path try: port = parsed.port except ValueError: port = None result = (scheme.lower() == and netloc.lower().split()[0] == and path.startswith() and (port == 443 or port is None)) return result
Verify signature certificate URL against Amazon Alexa requirements. Args: url: Signature certificate URL from SignatureCertChainUrl HTTP header. Returns: result: True if verification was successful, False if not.
### Input: Verify signature certificate URL against Amazon Alexa requirements. Args: url: Signature certificate URL from SignatureCertChainUrl HTTP header. Returns: result: True if verification was successful, False if not. ### Response: #vtb def verify_sc_url(url: str) -> bool: parsed = urlsplit(url) scheme: str = parsed.scheme netloc: str = parsed.netloc path: str = parsed.path try: port = parsed.port except ValueError: port = None result = (scheme.lower() == and netloc.lower().split()[0] == and path.startswith() and (port == 443 or port is None)) return result
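The verify_sc_url record above checks the scheme, host, path prefix, and port of the certificate-chain URL, but its literal constants are stripped in this dump. A self-contained re-implementation sketch, filling in the publicly documented Amazon values as assumptions rather than quoting the project's code:

    from urllib.parse import urlsplit

    def looks_like_alexa_cert_url(url):
        parts = urlsplit(url)  # hostname is already lowercased by urlsplit
        return (parts.scheme.lower() == "https"
                and parts.hostname == "s3.amazonaws.com"
                and parts.path.startswith("/echo.api/")
                and parts.port in (None, 443))

    print(looks_like_alexa_cert_url("https://s3.amazonaws.com/echo.api/echo-api-cert.pem"))  # True
    print(looks_like_alexa_cert_url("https://example.com/echo.api/cert.pem"))                # False (wrong host)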
#vtb def write(self, handle): if not self._frames: return def add(name, desc, bpe, format, bytes, *dimensions): group.add_param(name, desc=desc, bytes_per_element=bpe, bytes=struct.pack(format, bytes), dimensions=list(dimensions)) def add_str(name, desc, bytes, *dimensions): group.add_param(name, desc=desc, bytes_per_element=-1, bytes=bytes.encode(), dimensions=list(dimensions)) def add_empty_array(name, desc, bpe): group.add_param(name, desc=desc, bytes_per_element=bpe, dimensions=[0]) points, analog = self._frames[0] ppf = len(points) group = self.add_group(1, , ) add(, , 2, , ppf) add(, , 2, , min(65535, len(self._frames))) add(, , 2, , 0) add(, , 4, , self._point_scale) add(, , 4, , self._point_rate) add_str(, , , 2) add_str(, , , 2) add_str(, , self._point_units, len(self._point_units)) add_str(, , .join( % i for i in range(ppf)), 5, ppf) add_str(, , * 16 * ppf, 16, ppf) group = self.add_group(2, , ) add(, , 2, , analog.shape[0]) add(, , 4, , analog.shape[1]) add(, , 4, , self._gen_scale) add_empty_array(, , 4) add_empty_array(, , 2) group = self.add_group(3, , ) add(, , 2, , 1, 2) add(, , 2, , len(self._frames), 2) blocks = self.parameter_blocks() self.get().bytes = struct.pack(, 2 + blocks) self.header.data_block = 2 + blocks self.header.frame_rate = self._point_rate self.header.last_frame = min(len(self._frames), 65535) self.header.point_count = ppf self.header.analog_count = np.prod(analog.shape) self.header.analog_per_frame = analog.shape[0] self.header.scale_factor = self._point_scale self._write_metadata(handle) self._write_frames(handle)
Write metadata and point + analog frames to a file handle. Parameters ---------- handle : file Write metadata and C3D motion frames to the given file handle. The writer does not close the handle.
### Input: Write metadata and point + analog frames to a file handle. Parameters ---------- handle : file Write metadata and C3D motion frames to the given file handle. The writer does not close the handle. ### Response: #vtb def write(self, handle): if not self._frames: return def add(name, desc, bpe, format, bytes, *dimensions): group.add_param(name, desc=desc, bytes_per_element=bpe, bytes=struct.pack(format, bytes), dimensions=list(dimensions)) def add_str(name, desc, bytes, *dimensions): group.add_param(name, desc=desc, bytes_per_element=-1, bytes=bytes.encode(), dimensions=list(dimensions)) def add_empty_array(name, desc, bpe): group.add_param(name, desc=desc, bytes_per_element=bpe, dimensions=[0]) points, analog = self._frames[0] ppf = len(points) group = self.add_group(1, , ) add(, , 2, , ppf) add(, , 2, , min(65535, len(self._frames))) add(, , 2, , 0) add(, , 4, , self._point_scale) add(, , 4, , self._point_rate) add_str(, , , 2) add_str(, , , 2) add_str(, , self._point_units, len(self._point_units)) add_str(, , .join( % i for i in range(ppf)), 5, ppf) add_str(, , * 16 * ppf, 16, ppf) group = self.add_group(2, , ) add(, , 2, , analog.shape[0]) add(, , 4, , analog.shape[1]) add(, , 4, , self._gen_scale) add_empty_array(, , 4) add_empty_array(, , 2) group = self.add_group(3, , ) add(, , 2, , 1, 2) add(, , 2, , len(self._frames), 2) blocks = self.parameter_blocks() self.get().bytes = struct.pack(, 2 + blocks) self.header.data_block = 2 + blocks self.header.frame_rate = self._point_rate self.header.last_frame = min(len(self._frames), 65535) self.header.point_count = ppf self.header.analog_count = np.prod(analog.shape) self.header.analog_per_frame = analog.shape[0] self.header.scale_factor = self._point_scale self._write_metadata(handle) self._write_frames(handle)
#vtb def sort_values(self, by=None, axis=0, ascending=True, inplace=False, kind=, na_position=): raise NotImplementedError("sort_values has not been implemented " "on Panel or Panel4D objects.")
Sort by the values along either axis. Parameters ----------%(optional_by)s axis : %(axes_single_arg)s, default 0 Axis to be sorted. ascending : bool or list of bool, default True Sort ascending vs. descending. Specify list for multiple sort orders. If this is a list of bools, must match the length of the by. inplace : bool, default False If True, perform operation in-place. kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort' Choice of sorting algorithm. See also ndarray.np.sort for more information. `mergesort` is the only stable algorithm. For DataFrames, this option is only applied when sorting on a single column or label. na_position : {'first', 'last'}, default 'last' Puts NaNs at the beginning if `first`; `last` puts NaNs at the end. Returns ------- sorted_obj : DataFrame or None DataFrame with sorted values if inplace=False, None otherwise. Examples -------- >>> df = pd.DataFrame({ ... 'col1': ['A', 'A', 'B', np.nan, 'D', 'C'], ... 'col2': [2, 1, 9, 8, 7, 4], ... 'col3': [0, 1, 9, 4, 2, 3], ... }) >>> df col1 col2 col3 0 A 2 0 1 A 1 1 2 B 9 9 3 NaN 8 4 4 D 7 2 5 C 4 3 Sort by col1 >>> df.sort_values(by=['col1']) col1 col2 col3 0 A 2 0 1 A 1 1 2 B 9 9 5 C 4 3 4 D 7 2 3 NaN 8 4 Sort by multiple columns >>> df.sort_values(by=['col1', 'col2']) col1 col2 col3 1 A 1 1 0 A 2 0 2 B 9 9 5 C 4 3 4 D 7 2 3 NaN 8 4 Sort Descending >>> df.sort_values(by='col1', ascending=False) col1 col2 col3 4 D 7 2 5 C 4 3 2 B 9 9 0 A 2 0 1 A 1 1 3 NaN 8 4 Putting NAs first >>> df.sort_values(by='col1', ascending=False, na_position='first') col1 col2 col3 3 NaN 8 4 4 D 7 2 5 C 4 3 2 B 9 9 0 A 2 0 1 A 1 1
### Input: Sort by the values along either axis. Parameters ----------%(optional_by)s axis : %(axes_single_arg)s, default 0 Axis to be sorted. ascending : bool or list of bool, default True Sort ascending vs. descending. Specify list for multiple sort orders. If this is a list of bools, must match the length of the by. inplace : bool, default False If True, perform operation in-place. kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort' Choice of sorting algorithm. See also ndarray.np.sort for more information. `mergesort` is the only stable algorithm. For DataFrames, this option is only applied when sorting on a single column or label. na_position : {'first', 'last'}, default 'last' Puts NaNs at the beginning if `first`; `last` puts NaNs at the end. Returns ------- sorted_obj : DataFrame or None DataFrame with sorted values if inplace=False, None otherwise. Examples -------- >>> df = pd.DataFrame({ ... 'col1': ['A', 'A', 'B', np.nan, 'D', 'C'], ... 'col2': [2, 1, 9, 8, 7, 4], ... 'col3': [0, 1, 9, 4, 2, 3], ... }) >>> df col1 col2 col3 0 A 2 0 1 A 1 1 2 B 9 9 3 NaN 8 4 4 D 7 2 5 C 4 3 Sort by col1 >>> df.sort_values(by=['col1']) col1 col2 col3 0 A 2 0 1 A 1 1 2 B 9 9 5 C 4 3 4 D 7 2 3 NaN 8 4 Sort by multiple columns >>> df.sort_values(by=['col1', 'col2']) col1 col2 col3 1 A 1 1 0 A 2 0 2 B 9 9 5 C 4 3 4 D 7 2 3 NaN 8 4 Sort Descending >>> df.sort_values(by='col1', ascending=False) col1 col2 col3 4 D 7 2 5 C 4 3 2 B 9 9 0 A 2 0 1 A 1 1 3 NaN 8 4 Putting NAs first >>> df.sort_values(by='col1', ascending=False, na_position='first') col1 col2 col3 3 NaN 8 4 4 D 7 2 5 C 4 3 2 B 9 9 0 A 2 0 1 A 1 1 ### Response: #vtb def sort_values(self, by=None, axis=0, ascending=True, inplace=False, kind=, na_position=): raise NotImplementedError("sort_values has not been implemented " "on Panel or Panel4D objects.")
#vtb def send(self, request, **kwargs): kwargs.setdefault(, self.stream) kwargs.setdefault(, self.verify) kwargs.setdefault(, self.cert) kwargs.setdefault(, self.proxies) if history: history.insert(0, r) r = history.pop() r.history = history if not stream: r.content return r
Send a given PreparedRequest.
### Input: Send a given PreparedRequest. ### Response: #vtb def send(self, request, **kwargs): kwargs.setdefault(, self.stream) kwargs.setdefault(, self.verify) kwargs.setdefault(, self.cert) kwargs.setdefault(, self.proxies) if history: history.insert(0, r) r = history.pop() r.history = history if not stream: r.content return r
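The send record above is the dispatch path of a requests-style Session, and its body is truncated in this dump. A short usage sketch with the real requests library, preparing a request explicitly before sending it; the endpoint is an example only:

    import requests

    session = requests.Session()
    req = requests.Request("GET", "https://httpbin.org/get", params={"q": "demo"})
    prepared = session.prepare_request(req)
    resp = session.send(prepared, timeout=10)
    print(resp.status_code)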
#vtb def grouper_nofill_str(n, iterable): res = more_itertools.chunked(iterable, n) if isinstance(iterable, six.string_types): res = (.join(item) for item in res) return res
Take a sequence and break it up into chunks of the specified size. The last chunk may be smaller than size. This works very similar to grouper_nofill, except it works with strings as well. >>> tuple(grouper_nofill_str(3, 'foobarbaz')) ('foo', 'bar', 'baz') You can still use it on non-strings too if you like. >>> tuple(grouper_nofill_str(42, [])) () >>> tuple(grouper_nofill_str(3, list(range(10)))) ([0, 1, 2], [3, 4, 5], [6, 7, 8], [9])
### Input: Take a sequence and break it up into chunks of the specified size. The last chunk may be smaller than size. This works very similar to grouper_nofill, except it works with strings as well. >>> tuple(grouper_nofill_str(3, 'foobarbaz')) ('foo', 'bar', 'baz') You can still use it on non-strings too if you like. >>> tuple(grouper_nofill_str(42, [])) () >>> tuple(grouper_nofill_str(3, list(range(10)))) ([0, 1, 2], [3, 4, 5], [6, 7, 8], [9]) ### Response: #vtb def grouper_nofill_str(n, iterable): res = more_itertools.chunked(iterable, n) if isinstance(iterable, six.string_types): res = (.join(item) for item in res) return res
#vtb def GetForwardedIps(self, interface, interface_ip=None): try: ips = netifaces.ifaddresses(interface) ips = ips[netifaces.AF_INET] except (ValueError, IndexError): return [] forwarded_ips = [] for ip in ips: if ip[] != interface_ip: full_addr = % (ip[], netaddr.IPAddress(ip[]).netmask_bits()) forwarded_ips.append(full_addr) return self.ParseForwardedIps(forwarded_ips)
Retrieve the list of configured forwarded IP addresses. Args: interface: string, the output device to query. interface_ip: string, current interface ip address. Returns: list, the IP address strings.
### Input: Retrieve the list of configured forwarded IP addresses. Args: interface: string, the output device to query. interface_ip: string, current interface ip address. Returns: list, the IP address strings. ### Response: #vtb def GetForwardedIps(self, interface, interface_ip=None): try: ips = netifaces.ifaddresses(interface) ips = ips[netifaces.AF_INET] except (ValueError, IndexError): return [] forwarded_ips = [] for ip in ips: if ip[] != interface_ip: full_addr = % (ip[], netaddr.IPAddress(ip[]).netmask_bits()) forwarded_ips.append(full_addr) return self.ParseForwardedIps(forwarded_ips)
#vtb def eval_genome(genome, config): net = neat.nn.FeedForwardNetwork.create(genome, config) error = 4.0 for xi, xo in zip(xor_inputs, xor_outputs): output = net.activate(xi) error -= (output[0] - xo[0]) ** 2 return error
This function will be run in parallel by ParallelEvaluator. It takes two arguments (a single genome and the genome class configuration data) and should return one float (that genome's fitness). Note that this function needs to be in module scope for multiprocessing.Pool (which is what ParallelEvaluator uses) to find it. Because of this, make sure you check for __main__ before executing any code (as we do here in the last few lines in the file), otherwise you'll have made a fork bomb instead of a neuroevolution demo. :)
### Input: This function will be run in parallel by ParallelEvaluator. It takes two arguments (a single genome and the genome class configuration data) and should return one float (that genome's fitness). Note that this function needs to be in module scope for multiprocessing.Pool (which is what ParallelEvaluator uses) to find it. Because of this, make sure you check for __main__ before executing any code (as we do here in the last few lines in the file), otherwise you'll have made a fork bomb instead of a neuroevolution demo. :) ### Response: #vtb def eval_genome(genome, config): net = neat.nn.FeedForwardNetwork.create(genome, config) error = 4.0 for xi, xo in zip(xor_inputs, xor_outputs): output = net.activate(xi) error -= (output[0] - xo[0]) ** 2 return error
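The eval_genome record above is the per-genome fitness function for the XOR example. A hedged sketch of wiring it into neat-python's ParallelEvaluator, assuming the record's eval_genome (and its xor_inputs/xor_outputs) are importable and that a config file exists at the illustrative path:

    import multiprocessing
    import neat

    if __name__ == "__main__":
        config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                             neat.DefaultSpeciesSet, neat.DefaultStagnation,
                             "config-feedforward")        # illustrative config path
        population = neat.Population(config)
        evaluator = neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome)
        winner = population.run(evaluator.evaluate, 300)  # at most 300 generations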
#vtb def quantize_model(sym, arg_params, aux_params, data_names=(,), label_names=(,), ctx=cpu(), excluded_sym_names=None, calib_mode=, calib_data=None, num_calib_examples=None, calib_layer=None, quantized_dtype=, logger=logging): if excluded_sym_names is None: excluded_sym_names = [] if not isinstance(excluded_sym_names, list): raise ValueError( % str(type(excluded_sym_names))) logger.info() if quantized_dtype not in (, , ): raise ValueError( % quantized_dtype) qsym = _quantize_symbol(sym, excluded_symbols=excluded_sym_names, offline_params=list(arg_params.keys()), quantized_dtype=quantized_dtype) th_dict = {} if calib_mode is not None and calib_mode != : if not isinstance(ctx, Context): raise ValueError( % str(ctx)) if calib_data is None: raise ValueError( % calib_mode) if not isinstance(calib_data, DataIter): raise ValueError( % (calib_mode, str(type(calib_data)))) mod = Module(symbol=sym, data_names=data_names, label_names=label_names, context=ctx) if len(calib_data.provide_label) > 0: mod.bind(for_training=False, data_shapes=calib_data.provide_data, label_shapes=calib_data.provide_label) else: mod.bind(for_training=False, data_shapes=calib_data.provide_data) mod.set_params(arg_params, aux_params) if calib_mode == : nd_dict, num_examples = _collect_layer_outputs(mod, calib_data, include_layer=calib_layer, max_num_examples=num_calib_examples, logger=logger) logger.info( % num_examples) logger.info() th_dict = _get_optimal_thresholds(nd_dict, quantized_dtype, logger=logger) elif calib_mode == : th_dict, num_examples = _collect_layer_output_min_max( mod, calib_data, include_layer=calib_layer, max_num_examples=num_calib_examples, logger=logger) logger.info( % num_examples) else: raise ValueError( % calib_mode) logger.info() qsym = _calibrate_quantized_sym(qsym, th_dict) logger.info() qarg_params = _quantize_params(qsym, arg_params, th_dict) return qsym, qarg_params, aux_params
User-level API for generating a quantized model from a FP32 model w/ or w/o calibration. The backend quantized operators are only enabled for Linux systems. Please do not run inference using the quantized models on Windows for now. The quantization implementation adopts the TensorFlow's approach: https://www.tensorflow.org/performance/quantization. The calibration implementation borrows the idea of Nvidia's 8-bit Inference with TensorRT: http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf and adapts the method to MXNet. Parameters ---------- sym : str or Symbol Defines the structure of a neural network for FP32 data types. arg_params : dict Dictionary of name to `NDArray`. aux_params : dict Dictionary of name to `NDArray`. data_names : a list of strs Data names required for creating a Module object to run forward propagation on the calibration dataset. label_names : a list of strs Label names required for creating a Module object to run forward propagation on the calibration dataset. ctx : Context Defines the device that users want to run forward propagation on the calibration dataset for collecting layer output statistics. Currently, only supports single context. excluded_sym_names : list of strings A list of strings representing the names of the symbols that users want to excluding from being quantized. calib_mode : str If calib_mode='none', no calibration will be used and the thresholds for requantization after the corresponding layers will be calculated at runtime by calling min and max operators. The quantized models generated in this mode are normally 10-20% slower than those with calibrations during inference. If calib_mode='naive', the min and max values of the layer outputs from a calibration dataset will be directly taken as the thresholds for quantization. If calib_mode='entropy' (default mode), the thresholds for quantization will be derived such that the KL divergence between the distributions of FP32 layer outputs and quantized layer outputs is minimized based upon the calibration dataset. calib_data : DataIter A data iterator initialized by the calibration dataset. num_calib_examples : int or None The maximum number of examples that user would like to use for calibration. If not provided, the whole calibration dataset will be used. calib_layer : function Given a layer's output name in string, return True or False for deciding whether to calibrate this layer. If yes, the statistics of the layer's output will be collected; otherwise, no information of the layer's output will be collected. If not provided, all the layers' outputs that need requantization will be collected. quantized_dtype : str The quantized destination type for input data. Currently support 'int8' , 'uint8' and 'auto'. 'auto' means automatically select output type according to calibration result. Default value is 'int8'. logger : Object A logging object for printing information during the process of quantization. Returns ------- tuple A tuple of quantized symbol, quantized arg_params, and aux_params. -------
### Input: User-level API for generating a quantized model from a FP32 model w/ or w/o calibration. The backend quantized operators are only enabled for Linux systems. Please do not run inference using the quantized models on Windows for now. The quantization implementation adopts the TensorFlow's approach: https://www.tensorflow.org/performance/quantization. The calibration implementation borrows the idea of Nvidia's 8-bit Inference with TensorRT: http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf and adapts the method to MXNet. Parameters ---------- sym : str or Symbol Defines the structure of a neural network for FP32 data types. arg_params : dict Dictionary of name to `NDArray`. aux_params : dict Dictionary of name to `NDArray`. data_names : a list of strs Data names required for creating a Module object to run forward propagation on the calibration dataset. label_names : a list of strs Label names required for creating a Module object to run forward propagation on the calibration dataset. ctx : Context Defines the device that users want to run forward propagation on the calibration dataset for collecting layer output statistics. Currently, only supports single context. excluded_sym_names : list of strings A list of strings representing the names of the symbols that users want to excluding from being quantized. calib_mode : str If calib_mode='none', no calibration will be used and the thresholds for requantization after the corresponding layers will be calculated at runtime by calling min and max operators. The quantized models generated in this mode are normally 10-20% slower than those with calibrations during inference. If calib_mode='naive', the min and max values of the layer outputs from a calibration dataset will be directly taken as the thresholds for quantization. If calib_mode='entropy' (default mode), the thresholds for quantization will be derived such that the KL divergence between the distributions of FP32 layer outputs and quantized layer outputs is minimized based upon the calibration dataset. calib_data : DataIter A data iterator initialized by the calibration dataset. num_calib_examples : int or None The maximum number of examples that user would like to use for calibration. If not provided, the whole calibration dataset will be used. calib_layer : function Given a layer's output name in string, return True or False for deciding whether to calibrate this layer. If yes, the statistics of the layer's output will be collected; otherwise, no information of the layer's output will be collected. If not provided, all the layers' outputs that need requantization will be collected. quantized_dtype : str The quantized destination type for input data. Currently support 'int8' , 'uint8' and 'auto'. 'auto' means automatically select output type according to calibration result. Default value is 'int8'. logger : Object A logging object for printing information during the process of quantization. Returns ------- tuple A tuple of quantized symbol, quantized arg_params, and aux_params. 
------- ### Response: #vtb def quantize_model(sym, arg_params, aux_params, data_names=(,), label_names=(,), ctx=cpu(), excluded_sym_names=None, calib_mode=, calib_data=None, num_calib_examples=None, calib_layer=None, quantized_dtype=, logger=logging): if excluded_sym_names is None: excluded_sym_names = [] if not isinstance(excluded_sym_names, list): raise ValueError( % str(type(excluded_sym_names))) logger.info() if quantized_dtype not in (, , ): raise ValueError( % quantized_dtype) qsym = _quantize_symbol(sym, excluded_symbols=excluded_sym_names, offline_params=list(arg_params.keys()), quantized_dtype=quantized_dtype) th_dict = {} if calib_mode is not None and calib_mode != : if not isinstance(ctx, Context): raise ValueError( % str(ctx)) if calib_data is None: raise ValueError( % calib_mode) if not isinstance(calib_data, DataIter): raise ValueError( % (calib_mode, str(type(calib_data)))) mod = Module(symbol=sym, data_names=data_names, label_names=label_names, context=ctx) if len(calib_data.provide_label) > 0: mod.bind(for_training=False, data_shapes=calib_data.provide_data, label_shapes=calib_data.provide_label) else: mod.bind(for_training=False, data_shapes=calib_data.provide_data) mod.set_params(arg_params, aux_params) if calib_mode == : nd_dict, num_examples = _collect_layer_outputs(mod, calib_data, include_layer=calib_layer, max_num_examples=num_calib_examples, logger=logger) logger.info( % num_examples) logger.info() th_dict = _get_optimal_thresholds(nd_dict, quantized_dtype, logger=logger) elif calib_mode == : th_dict, num_examples = _collect_layer_output_min_max( mod, calib_data, include_layer=calib_layer, max_num_examples=num_calib_examples, logger=logger) logger.info( % num_examples) else: raise ValueError( % calib_mode) logger.info() qsym = _calibrate_quantized_sym(qsym, th_dict) logger.info() qarg_params = _quantize_params(qsym, arg_params, th_dict) return qsym, qarg_params, aux_params
#vtb def make_venv(self, dj_version): venv_path = self._get_venv_path(dj_version) self.logger.info( % dj_version) try: create_venv(venv_path, **VENV_CREATE_KWARGS) except ValueError: self.logger.warning() self.venv_install( % dj_version, venv_path) return venv_path
Creates a virtual environment for a given Django version. :param str dj_version: :rtype: str :return: path to created virtual env
### Input: Creates a virtual environment for a given Django version. :param str dj_version: :rtype: str :return: path to created virtual env ### Response: #vtb def make_venv(self, dj_version): venv_path = self._get_venv_path(dj_version) self.logger.info( % dj_version) try: create_venv(venv_path, **VENV_CREATE_KWARGS) except ValueError: self.logger.warning() self.venv_install( % dj_version, venv_path) return venv_path
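The make_venv record above builds a virtual environment per Django version and installs that version into it. A standard-library sketch of the same two steps; the directory layout, version string, and POSIX bin/pip path are assumptions:

    import subprocess
    import venv
    from pathlib import Path

    dj_version = "4.2"                                    # illustrative
    venv_path = Path.home() / ".dja_venvs" / dj_version   # illustrative layout
    venv.create(venv_path, with_pip=True)
    subprocess.check_call([str(venv_path / "bin" / "pip"),
                           "install", "Django==" + dj_version])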
#vtb def _resolve_name(name, package, level): if not hasattr(package, ): raise ValueError(" not set to a string") dot = len(package) for x in xrange(level, 1, -1): try: dot = package.rindex(, 0, dot) except ValueError: raise ValueError("attempted relative import beyond top-level " "package") return "%s.%s" % (package[:dot], name)
Return the absolute name of the module to be imported.
### Input: Return the absolute name of the module to be imported. ### Response: #vtb def _resolve_name(name, package, level): if not hasattr(package, ): raise ValueError(" not set to a string") dot = len(package) for x in xrange(level, 1, -1): try: dot = package.rindex(, 0, dot) except ValueError: raise ValueError("attempted relative import beyond top-level " "package") return "%s.%s" % (package[:dot], name)
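The _resolve_name record above turns a relative module name plus a package and level into an absolute name. The standard library exposes the same resolution as importlib.util.resolve_name; a runnable comparison, with made-up package names:

    from importlib.util import resolve_name

    print(resolve_name(".helper", "pkg.sub"))    # pkg.sub.helper
    print(resolve_name("..sibling", "pkg.sub"))  # pkg.sibling (two dots step up one package)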
#vtb def readPattern(self): if ( self.dev == None ): return pattern=[] for i in range(0,16): pattern.append( self.readPatternLine(i) ) return pattern
Read the entire color pattern :return List of pattern line tuples
### Input: Read the entire color pattern :return List of pattern line tuples ### Response: #vtb def readPattern(self): if ( self.dev == None ): return pattern=[] for i in range(0,16): pattern.append( self.readPatternLine(i) ) return pattern
#vtb def mark_flags_as_mutual_exclusive(flag_names, required=False, flag_values=_flagvalues.FLAGS): for flag_name in flag_names: if flag_values[flag_name].default is not None: warnings.warn( .format(flag_name)) def validate_mutual_exclusion(flags_dict): flag_count = sum(1 for val in flags_dict.values() if val is not None) if flag_count == 1 or (not required and flag_count == 0): return True raise _exceptions.ValidationError( .format( if required else , .join(flag_names))) register_multi_flags_validator( flag_names, validate_mutual_exclusion, flag_values=flag_values)
Ensures that only one flag among flag_names is not None. Important note: This validator checks if flag values are None, and it does not distinguish between default and explicit values. Therefore, this validator does not make sense when applied to flags with default values other than None, including other false values (e.g. False, 0, '', []). That includes multi flags with a default value of [] instead of None. Args: flag_names: [str], names of the flags. required: bool. If true, exactly one of the flags must have a value other than None. Otherwise, at most one of the flags can have a value other than None, and it is valid for all of the flags to be None. flag_values: flags.FlagValues, optional FlagValues instance where the flags are defined.
### Input: Ensures that only one flag among flag_names is not None. Important note: This validator checks if flag values are None, and it does not distinguish between default and explicit values. Therefore, this validator does not make sense when applied to flags with default values other than None, including other false values (e.g. False, 0, '', []). That includes multi flags with a default value of [] instead of None. Args: flag_names: [str], names of the flags. required: bool. If true, exactly one of the flags must have a value other than None. Otherwise, at most one of the flags can have a value other than None, and it is valid for all of the flags to be None. flag_values: flags.FlagValues, optional FlagValues instance where the flags are defined. ### Response: #vtb def mark_flags_as_mutual_exclusive(flag_names, required=False, flag_values=_flagvalues.FLAGS): for flag_name in flag_names: if flag_values[flag_name].default is not None: warnings.warn( .format(flag_name)) def validate_mutual_exclusion(flags_dict): flag_count = sum(1 for val in flags_dict.values() if val is not None) if flag_count == 1 or (not required and flag_count == 0): return True raise _exceptions.ValidationError( .format( if required else , .join(flag_names))) register_multi_flags_validator( flag_names, validate_mutual_exclusion, flag_values=flag_values)
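The mark_flags_as_mutual_exclusive record above registers a multi-flag validator. A hedged usage sketch with absl-py; the flag names are illustrative:

    from absl import app, flags

    FLAGS = flags.FLAGS
    flags.DEFINE_string("input_path", None, "Read input from a file.")
    flags.DEFINE_string("input_url", None, "Read input from a URL.")
    flags.mark_flags_as_mutual_exclusive(["input_path", "input_url"], required=True)

    def main(argv):
        del argv  # unused
        print(FLAGS.input_path or FLAGS.input_url)

    if __name__ == "__main__":
        app.run(main)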
#vtb def _set_allowed_services_and_actions(self, services): for service in services: self.services[service[]] = {} for action in service[]: name = action.pop() self.services[service[]][name] = action
Expect services to be a list of service dictionaries, each with `name` and `actions` keys.
### Input: Expect services to be a list of service dictionaries, each with `name` and `actions` keys. ### Response: #vtb def _set_allowed_services_and_actions(self, services): for service in services: self.services[service[]] = {} for action in service[]: name = action.pop() self.services[service[]][name] = action
#vtb def _read_parsed(self, lines): self.log(u"Parsing fragments from parsed text format") pairs = [] for line in lines: pieces = line.split(gc.PARSED_TEXT_SEPARATOR) if len(pieces) == 2: identifier = pieces[0].strip() text = pieces[1].strip() if len(identifier) > 0: pairs.append((identifier, [text])) self._create_text_fragments(pairs)
Read text fragments from a parsed format text file. :param list lines: the lines of the parsed text file :param dict parameters: additional parameters for parsing (e.g., class/id regex strings)
### Input: Read text fragments from a parsed format text file. :param list lines: the lines of the parsed text file :param dict parameters: additional parameters for parsing (e.g., class/id regex strings) ### Response: #vtb def _read_parsed(self, lines): self.log(u"Parsing fragments from parsed text format") pairs = [] for line in lines: pieces = line.split(gc.PARSED_TEXT_SEPARATOR) if len(pieces) == 2: identifier = pieces[0].strip() text = pieces[1].strip() if len(identifier) > 0: pairs.append((identifier, [text])) self._create_text_fragments(pairs)
#vtb def addmsg(self, msg_p): return lib.zmsg_addmsg(self._as_parameter_, byref(zmsg_p.from_param(msg_p)))
Push encoded message as a new frame. Message takes ownership of submessage, so the original is destroyed in this call. Returns 0 on success, -1 on error.
### Input: Push encoded message as a new frame. Message takes ownership of submessage, so the original is destroyed in this call. Returns 0 on success, -1 on error. ### Response: #vtb def addmsg(self, msg_p): return lib.zmsg_addmsg(self._as_parameter_, byref(zmsg_p.from_param(msg_p)))
#vtb def check_timers(self): if self._current is None: advance = min([self.clocks] + [x for x in self.timers if x is not None]) + 1 logger.debug(f"Advancing the clock from {self.clocks} to {advance}") self.clocks = advance for procid in range(len(self.timers)): if self.timers[procid] is not None: if self.clocks > self.timers[procid]: self.procs[procid].PC += self.procs[procid].instruction.size self.awake(procid)
Wake up any process whose timer has expired
### Input: Wake up any process whose timer has expired ### Response: #vtb def check_timers(self):
    if self._current is None:
        advance = min([self.clocks] + [x for x in self.timers if x is not None]) + 1
        logger.debug(f"Advancing the clock from {self.clocks} to {advance}")
        self.clocks = advance
    for procid in range(len(self.timers)):
        if self.timers[procid] is not None:
            if self.clocks > self.timers[procid]:
                self.procs[procid].PC += self.procs[procid].instruction.size
                self.awake(procid)
#vtb def triangle_area(point1, point2, point3): a = point_distance(point1, point2) b = point_distance(point1, point3) c = point_distance(point2, point3) s = (a + b + c) / 2.0 return math.sqrt(s * (s - a) * (s - b) * (s - c))
Uses Heron's formula to find the area of a triangle based on the coordinates of three points. Args: point1: list or tuple, the x y coordinate of point one. point2: list or tuple, the x y coordinate of point two. point3: list or tuple, the x y coordinate of point three. Returns: The area of a triangle as a floating point number. Requires: The math module, point_distance().
### Input: Uses Heron's formula to find the area of a triangle based on the coordinates of three points. Args: point1: list or tuple, the x y coordinate of point one. point2: list or tuple, the x y coordinate of point two. point3: list or tuple, the x y coordinate of point three. Returns: The area of a triangle as a floating point number. Requires: The math module, point_distance(). ### Response: #vtb def triangle_area(point1, point2, point3): a = point_distance(point1, point2) b = point_distance(point1, point3) c = point_distance(point2, point3) s = (a + b + c) / 2.0 return math.sqrt(s * (s - a) * (s - b) * (s - c))
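A quick self-contained check of the Heron's-formula routine above. The point_distance helper below is a plausible stand-in for the one the docstring requires, not the library's own:

import math

def point_distance(p1, p2):
    # Euclidean distance between two (x, y) points.
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# 3-4-5 right triangle: legs of length 3 and 4, so the area should be 0.5 * 3 * 4 = 6.0.
print(triangle_area((0, 0), (3, 0), (0, 4)))  # -> 6.0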
#vtb def _get_role_arn(name, **conn_params):
    if name.startswith('arn:aws:iam:'):
        return name
    role = __salt__['boto_iam.describe_role'](name, **conn_params)
    rolearn = role.get('arn') if role else None
    return rolearn
Helper function to turn a name into an arn string, returns None if not able to resolve
### Input: Helper function to turn a name into an arn string, returns None if not able to resolve ### Response: #vtb def _get_role_arn(name, **conn_params):
    if name.startswith('arn:aws:iam:'):
        return name
    role = __salt__['boto_iam.describe_role'](name, **conn_params)
    rolearn = role.get('arn') if role else None
    return rolearn
#vtb def value_str(sc): if sc.type in (STRING, INT, HEX): return "({})".format(sc.str_value) return "-->" if sc.choice.selection is sc else " " tri_val_str = (" ", "M", "*")[sc.tri_value] if len(sc.assignable) == 1: return "-{}-".format(tri_val_str) if sc.type == BOOL: return "[{}]".format(tri_val_str) if sc.type == TRISTATE: if sc.assignable == (1, 2): return "{" + tri_val_str + "}" return "<{}>".format(tri_val_str)
Returns the value part ("[*]", "<M>", "(foo)" etc.) of a menu entry. sc: Symbol or Choice.
### Input: Returns the value part ("[*]", "<M>", "(foo)" etc.) of a menu entry. sc: Symbol or Choice. ### Response: #vtb def value_str(sc): if sc.type in (STRING, INT, HEX): return "({})".format(sc.str_value) return "-->" if sc.choice.selection is sc else " " tri_val_str = (" ", "M", "*")[sc.tri_value] if len(sc.assignable) == 1: return "-{}-".format(tri_val_str) if sc.type == BOOL: return "[{}]".format(tri_val_str) if sc.type == TRISTATE: if sc.assignable == (1, 2): return "{" + tri_val_str + "}" return "<{}>".format(tri_val_str)
#vtb def load_mod_from_file(self, fpath): shutit_global.shutit_global_object.yield_to_draw() fpath = os.path.abspath(fpath) file_ext = os.path.splitext(os.path.split(fpath)[-1])[-1] if file_ext.lower() != : return with open(fpath) as f: content = f.read().splitlines() ok = False for line in content: if line.strip() == : ok = True break if not ok: self.log( + fpath,level=logging.DEBUG) return existingmodules = [ m for m in self.shutit_modules if getattr(m, , None) == fpath ] if existingmodules: self.log( + fpath,level=logging.DEBUG) return
Loads modules from a .py file into ShutIt if there are no modules from this file already. We expect to have a callable 'module/0' which returns one or more module objects. If this doesn't exist we assume that the .py file works in the old style (automatically inserting the module into shutit_global) or it's not a shutit module.
### Input: Loads modules from a .py file into ShutIt if there are no modules from this file already. We expect to have a callable 'module/0' which returns one or more module objects. If this doesn't exist we assume that the .py file works in the old style (automatically inserting the module into shutit_global) or it's not a shutit module. ### Response: #vtb def load_mod_from_file(self, fpath): shutit_global.shutit_global_object.yield_to_draw() fpath = os.path.abspath(fpath) file_ext = os.path.splitext(os.path.split(fpath)[-1])[-1] if file_ext.lower() != : return with open(fpath) as f: content = f.read().splitlines() ok = False for line in content: if line.strip() == : ok = True break if not ok: self.log( + fpath,level=logging.DEBUG) return existingmodules = [ m for m in self.shutit_modules if getattr(m, , None) == fpath ] if existingmodules: self.log( + fpath,level=logging.DEBUG) return
#vtb def NewFromJSON(data):
    if data.get('shakes', None):
        shakes = [Shake.NewFromJSON(shk) for shk in data.get('shakes')]
    else:
        shakes = None
    return User(
        id=data.get('id', None),
        name=data.get('name', None),
        profile_image_url=data.get('profile_image_url', None),
        about=data.get('about', None),
        website=data.get('website', None),
        shakes=shakes)
Create a new User instance from a JSON dict. Args: data (dict): JSON dictionary representing a user. Returns: A User instance.
### Input: Create a new User instance from a JSON dict. Args: data (dict): JSON dictionary representing a user. Returns: A User instance. ### Response: #vtb def NewFromJSON(data):
    if data.get('shakes', None):
        shakes = [Shake.NewFromJSON(shk) for shk in data.get('shakes')]
    else:
        shakes = None
    return User(
        id=data.get('id', None),
        name=data.get('name', None),
        profile_image_url=data.get('profile_image_url', None),
        about=data.get('about', None),
        website=data.get('website', None),
        shakes=shakes)
#vtb def _estimate_label_shape(self): max_count = 0 self.reset() try: while True: label, _ = self.next_sample() label = self._parse_label(label) max_count = max(max_count, label.shape[0]) except StopIteration: pass self.reset() return (max_count, label.shape[1])
Helper function to estimate label shape
### Input: Helper function to estimate label shape ### Response: #vtb def _estimate_label_shape(self): max_count = 0 self.reset() try: while True: label, _ = self.next_sample() label = self._parse_label(label) max_count = max(max_count, label.shape[0]) except StopIteration: pass self.reset() return (max_count, label.shape[1])
#vtb def stretch_weber_fechner(self, k, s0): attrs = self.data.attrs self.data = k * xu.log(self.data / s0) self.data.attrs = attrs
Stretch according to the Weber-Fechner law. p = k.ln(S/S0) p is perception, S is the stimulus, S0 is the stimulus threshold (the highest unperceived stimulus), and k is the factor.
### Input: Stretch according to the Weber-Fechner law. p = k.ln(S/S0) p is perception, S is the stimulus, S0 is the stimulus threshold (the highest unperceived stimulus), and k is the factor. ### Response: #vtb def stretch_weber_fechner(self, k, s0):
    attrs = self.data.attrs
    self.data = k * xu.log(self.data / s0)
    self.data.attrs = attrs
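A small self-contained numeric sketch of the same law with plain NumPy (the k and s0 values are arbitrary; the real method applies xu.log to an xarray DataArray instead):

import numpy as np

k, s0 = 10.0, 0.01                      # arbitrary gain and stimulus threshold
stimulus = np.array([0.01, 0.1, 1.0])

perception = k * np.log(stimulus / s0)  # p = k * ln(S / S0)
print(perception)                       # -> [ 0.         23.02585093 46.05170186]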
#vtb def read(path, encoding="utf-8"): try: with io.open(path, encoding=encoding) as f: return f.read() except Exception as e: logger.error("read: %s failed. Error: %s", path, e) return ""
Read the content of the file. Args: path (str): Path to the file encoding (str): File encoding. Default: utf-8 Returns: str: File content or empty string if there was an error
### Input: Read the content of the file. Args: path (str): Path to the file encoding (str): File encoding. Default: utf-8 Returns: str: File content or empty string if there was an error ### Response: #vtb def read(path, encoding="utf-8"): try: with io.open(path, encoding=encoding) as f: return f.read() except Exception as e: logger.error("read: %s failed. Error: %s", path, e) return ""
#vtb def send_zipfile(request, fileList):
    temp = tempfile.TemporaryFile()
    archive = zipfile.ZipFile(temp, 'w', zipfile.ZIP_DEFLATED)
    for artist, files in fileList.iteritems():
        for f in files:
            archive.write(f[0], '%s/%s' % (artist, f[1]))
    archive.close()
    wrapper = FixedFileWrapper(temp)
    response = HttpResponse(wrapper, content_type='application/zip')
    response['Content-Disposition'] = 'attachment; filename=files.zip'
    response['Content-Length'] = temp.tell()
    temp.seek(0)
    return response
Create a ZIP file on disk and transmit it in chunks of 8KB, without loading the whole file into memory. A similar approach can be used for large dynamic PDF files.
### Input: Create a ZIP file on disk and transmit it in chunks of 8KB, without loading the whole file into memory. A similar approach can be used for large dynamic PDF files. ### Response: #vtb def send_zipfile(request, fileList):
    temp = tempfile.TemporaryFile()
    archive = zipfile.ZipFile(temp, 'w', zipfile.ZIP_DEFLATED)
    for artist, files in fileList.iteritems():
        for f in files:
            archive.write(f[0], '%s/%s' % (artist, f[1]))
    archive.close()
    wrapper = FixedFileWrapper(temp)
    response = HttpResponse(wrapper, content_type='application/zip')
    response['Content-Disposition'] = 'attachment; filename=files.zip'
    response['Content-Length'] = temp.tell()
    temp.seek(0)
    return response
#vtb def cmd(send, msg, args): if not msg: msg = gen_word() morse = gen_morse(msg) if len(morse) > 100: send("Your morse is too long. Have you considered Western Union?") else: send(morse)
Converts text to morse code. Syntax: {command} [text]
### Input: Converts text to morse code. Syntax: {command} [text] ### Response: #vtb def cmd(send, msg, args): if not msg: msg = gen_word() morse = gen_morse(msg) if len(morse) > 100: send("Your morse is too long. Have you considered Western Union?") else: send(morse)
#vtb def is_multicast(text): try: first = ord(dns.ipv4.inet_aton(text)[0]) return (first >= 224 and first <= 239) except Exception: try: first = ord(dns.ipv6.inet_aton(text)[0]) return (first == 255) except Exception: raise ValueError
Is the textual-form network address a multicast address? @param text: the textual address @raises ValueError: the address family cannot be determined from the input. @rtype: bool
### Input: Is the textual-form network address a multicast address? @param text: the textual address @raises ValueError: the address family cannot be determined from the input. @rtype: bool ### Response: #vtb def is_multicast(text): try: first = ord(dns.ipv4.inet_aton(text)[0]) return (first >= 224 and first <= 239) except Exception: try: first = ord(dns.ipv6.inet_aton(text)[0]) return (first == 255) except Exception: raise ValueError
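A short usage sketch, assuming this helper is exposed as dns.inet.is_multicast in dnspython (the addresses are illustrative):

import dns.inet

print(dns.inet.is_multicast('224.0.0.1'))    # True  - IPv4 multicast is 224.0.0.0 through 239.255.255.255
print(dns.inet.is_multicast('192.0.2.10'))   # False
print(dns.inet.is_multicast('ff02::1'))      # True  - IPv6 multicast addresses start with ff
print(dns.inet.is_multicast('2001:db8::1'))  # False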
#vtb def toTypeURIs(namespace_map, alias_list_s):
    uris = []
    if alias_list_s:
        for alias in alias_list_s.split(','):
            type_uri = namespace_map.getNamespaceURI(alias)
            if type_uri is None:
                raise KeyError('No type is defined for attribute name %r' % (alias,))
            else:
                uris.append(type_uri)
    return uris
Given a namespace mapping and a string containing a comma-separated list of namespace aliases, return a list of type URIs that correspond to those aliases. @param namespace_map: The mapping from namespace URI to alias @type namespace_map: openid.message.NamespaceMap @param alias_list_s: The string containing the comma-separated list of aliases. May also be None for convenience. @type alias_list_s: str or NoneType @returns: The list of namespace URIs that corresponds to the supplied list of aliases. If the string was zero-length or None, an empty list will be returned. @raise KeyError: If an alias is present in the list of aliases but is not present in the namespace map.
### Input: Given a namespace mapping and a string containing a comma-separated list of namespace aliases, return a list of type URIs that correspond to those aliases. @param namespace_map: The mapping from namespace URI to alias @type namespace_map: openid.message.NamespaceMap @param alias_list_s: The string containing the comma-separated list of aliases. May also be None for convenience. @type alias_list_s: str or NoneType @returns: The list of namespace URIs that corresponds to the supplied list of aliases. If the string was zero-length or None, an empty list will be returned. @raise KeyError: If an alias is present in the list of aliases but is not present in the namespace map. ### Response: #vtb def toTypeURIs(namespace_map, alias_list_s):
    uris = []
    if alias_list_s:
        for alias in alias_list_s.split(','):
            type_uri = namespace_map.getNamespaceURI(alias)
            if type_uri is None:
                raise KeyError('No type is defined for attribute name %r' % (alias,))
            else:
                uris.append(type_uri)
    return uris
#vtb def __query_spec(self): operators = self.__modifiers.copy() if self.__ordering: operators["$orderby"] = self.__ordering if self.__explain: operators["$explain"] = True if self.__hint: operators["$hint"] = self.__hint if self.__comment: operators["$comment"] = self.__comment if self.__max_scan: operators["$maxScan"] = self.__max_scan if self.__max_time_ms is not None: operators["$maxTimeMS"] = self.__max_time_ms if self.__max: operators["$max"] = self.__max if self.__min: operators["$min"] = self.__min if self.__return_key: operators["$returnKey"] = self.__return_key if self.__show_record_id: operators["$showDiskLoc"] = self.__show_record_id if self.__snapshot: operators["$snapshot"] = self.__snapshot if operators: spec = self.__spec.copy() if "$query" not in spec: spec = SON([("$query", spec)]) if not isinstance(spec, SON): spec = SON(spec) spec.update(operators) return spec elif ("query" in self.__spec and (len(self.__spec) == 1 or next(iter(self.__spec)) == "query")): return SON({"$query": self.__spec}) return self.__spec
Get the spec to use for a query.
### Input: Get the spec to use for a query. ### Response: #vtb def __query_spec(self): operators = self.__modifiers.copy() if self.__ordering: operators["$orderby"] = self.__ordering if self.__explain: operators["$explain"] = True if self.__hint: operators["$hint"] = self.__hint if self.__comment: operators["$comment"] = self.__comment if self.__max_scan: operators["$maxScan"] = self.__max_scan if self.__max_time_ms is not None: operators["$maxTimeMS"] = self.__max_time_ms if self.__max: operators["$max"] = self.__max if self.__min: operators["$min"] = self.__min if self.__return_key: operators["$returnKey"] = self.__return_key if self.__show_record_id: operators["$showDiskLoc"] = self.__show_record_id if self.__snapshot: operators["$snapshot"] = self.__snapshot if operators: spec = self.__spec.copy() if "$query" not in spec: spec = SON([("$query", spec)]) if not isinstance(spec, SON): spec = SON(spec) spec.update(operators) return spec elif ("query" in self.__spec and (len(self.__spec) == 1 or next(iter(self.__spec)) == "query")): return SON({"$query": self.__spec}) return self.__spec
#vtb def followed_topic_num(self): if self.url is not None: tag = self.soup.find(, class_=) if tag is not None: return int(re_get_number.match( tag.parent.strong.text).group(1)) return 0
θŽ·ε–η”¨ζˆ·ε…³ζ³¨ηš„θ―ι’˜ζ•° :return: ε…³ζ³¨ηš„θ―ι’˜ζ•° :rtype: int
### Input: Get the number of topics followed by the user. :return: number of followed topics :rtype: int ### Response: #vtb def followed_topic_num(self):
    if self.url is not None:
        tag = self.soup.find(, class_=)
        if tag is not None:
            return int(re_get_number.match(
                tag.parent.strong.text).group(1))
    return 0
#vtb def response_to_json_dict(response, **kwargs):
    if response.encoding is None:
        response.encoding = 'utf-8'
    return json.loads(response.text, **kwargs)
Standard place to convert responses to JSON. :param response: requests response object :param **kwargs: arguments accepted by json.loads :returns: dict of JSON response
### Input: Standard place to convert responses to JSON. :param response: requests response object :param **kwargs: arguments accepted by json.loads :returns: dict of JSON response ### Response: #vtb def response_to_json_dict(response, **kwargs):
    if response.encoding is None:
        response.encoding = 'utf-8'
    return json.loads(response.text, **kwargs)
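A quick usage sketch with the requests library (the URL is a placeholder; the helper simply defaults the response encoding to utf-8 before decoding):

import requests

resp = requests.get('https://api.example.com/items')  # placeholder endpoint
data = response_to_json_dict(resp, parse_int=str)      # extra kwargs are forwarded to json.loads
print(data)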
#vtb def get_dates(raw_table) -> "list of dates": dates = [] found_first = False for i, dstr in enumerate([raw_table[i][0] for i in range(0, len(raw_table))]): if dstr: if len(dstr.split("/")) == 3: d = datetime.datetime.strptime(dstr, ) elif len(dstr.split("-")) == 3: d = datetime.datetime.strptime(dstr, ) else: logging.debug("unknown date-format: {}".format(dstr)) continue dates.append(d) if not found_first: found_first = True logging.debug("Found first date: at i: {}".format(d.isoformat(), i)) elif found_first: logging.debug("Last date: {}".format(d)) break return dates
Goes through the first column of input table and returns the first sequence of dates it finds.
### Input: Goes through the first column of input table and returns the first sequence of dates it finds. ### Response: #vtb def get_dates(raw_table) -> "list of dates": dates = [] found_first = False for i, dstr in enumerate([raw_table[i][0] for i in range(0, len(raw_table))]): if dstr: if len(dstr.split("/")) == 3: d = datetime.datetime.strptime(dstr, ) elif len(dstr.split("-")) == 3: d = datetime.datetime.strptime(dstr, ) else: logging.debug("unknown date-format: {}".format(dstr)) continue dates.append(d) if not found_first: found_first = True logging.debug("Found first date: at i: {}".format(d.isoformat(), i)) elif found_first: logging.debug("Last date: {}".format(d)) break return dates
#vtb def write_url (self, url_data): self.writeln(u"<tr>") self.writeln(u % self.part("url")) self.write(u) self.write(u"`%s'" % cgi.escape(url_data.base_url)) self.writeln(u"</td></tr>")
Write url_data.base_url.
### Input: Write url_data.base_url. ### Response: #vtb def write_url (self, url_data): self.writeln(u"<tr>") self.writeln(u % self.part("url")) self.write(u) self.write(u"`%s'" % cgi.escape(url_data.base_url)) self.writeln(u"</td></tr>")
#vtb def _set_token(self): try: self.token = os.environ[] if self.verbose: print("Overriding Cerberus token with environment variable.", file=sys.stderr) logger.info("Overriding Cerberus token with environment variable.") return except: pass if self.username: ua = UserAuth(self.cerberus_url, self.username, self.password) self.token = ua.get_token() else: awsa = AWSAuth(self.cerberus_url, region=self.region, aws_session=self.aws_session, verbose=self.verbose) self.token = awsa.get_token()
Set the Cerberus token based on auth type
### Input: Set the Cerberus token based on auth type ### Response: #vtb def _set_token(self): try: self.token = os.environ[] if self.verbose: print("Overriding Cerberus token with environment variable.", file=sys.stderr) logger.info("Overriding Cerberus token with environment variable.") return except: pass if self.username: ua = UserAuth(self.cerberus_url, self.username, self.password) self.token = ua.get_token() else: awsa = AWSAuth(self.cerberus_url, region=self.region, aws_session=self.aws_session, verbose=self.verbose) self.token = awsa.get_token()
#vtb async def renew(self, session, *, dc=None): session_id = extract_attr(session, keys=["ID"]) response = await self._api.put("/v1/session/renew", session_id, params={"dc": dc}) try: result = response.body[0] except IndexError: meta = extract_meta(response.headers) raise NotFound("No session for %r" % session_id, meta=meta) return consul(result, meta=extract_meta(response.headers))
Renews a TTL-based session Parameters: session (ObjectID): Session ID dc (str): Specify datacenter that will be used. Defaults to the agent's local datacenter. Returns: ObjectMeta: where value is session Raises: NotFound: session is absent The response looks like this:: { "LockDelay": datetime.timedelta(0, 15), "Checks": [ "serfHealth" ], "Node": "foobar", "ID": "adf4238a-882b-9ddc-4a9d-5b6758e4159e", "CreateIndex": 1086449 "Behavior": "release", "TTL": datetime.timedelta(0, 15) } .. note:: Consul MAY return a TTL value higher than the one specified during session creation. This indicates the server is under high load and is requesting clients renew less often.
### Input: Renews a TTL-based session Parameters: session (ObjectID): Session ID dc (str): Specify datacenter that will be used. Defaults to the agent's local datacenter. Returns: ObjectMeta: where value is session Raises: NotFound: session is absent The response looks like this:: { "LockDelay": datetime.timedelta(0, 15), "Checks": [ "serfHealth" ], "Node": "foobar", "ID": "adf4238a-882b-9ddc-4a9d-5b6758e4159e", "CreateIndex": 1086449 "Behavior": "release", "TTL": datetime.timedelta(0, 15) } .. note:: Consul MAY return a TTL value higher than the one specified during session creation. This indicates the server is under high load and is requesting clients renew less often. ### Response: #vtb async def renew(self, session, *, dc=None): session_id = extract_attr(session, keys=["ID"]) response = await self._api.put("/v1/session/renew", session_id, params={"dc": dc}) try: result = response.body[0] except IndexError: meta = extract_meta(response.headers) raise NotFound("No session for %r" % session_id, meta=meta) return consul(result, meta=extract_meta(response.headers))
#vtb def reduce(source, func, initializer=None): acc = accumulate.raw(source, func, initializer) return select.item.raw(acc, -1)
Apply a function of two arguments cumulatively to the items of an asynchronous sequence, reducing the sequence to a single value. If ``initializer`` is present, it is placed before the items of the sequence in the calculation, and serves as a default when the sequence is empty.
### Input: Apply a function of two arguments cumulatively to the items of an asynchronous sequence, reducing the sequence to a single value. If ``initializer`` is present, it is placed before the items of the sequence in the calculation, and serves as a default when the sequence is empty. ### Response: #vtb def reduce(source, func, initializer=None): acc = accumulate.raw(source, func, initializer) return select.item.raw(acc, -1)
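A usage sketch, assuming this is the reduce operator of an aiostream-style pipeline (the exact import path may differ in the originating library):

import asyncio
from aiostream import stream

async def main():
    xs = stream.range(1, 6)  # async sequence 1, 2, 3, 4, 5
    # Awaiting the reduced stream yields its single (final) item.
    total = await stream.reduce(xs, lambda acc, item: acc + item, initializer=0)
    print(total)             # -> 15

asyncio.run(main())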
#vtb def create_init(self, path): source = with io.open(path, "w", encoding="utf-8") as outfile: outfile.write(source)
Create a minimal __init__ file with enough boiler plate to not add to lint messages :param path: :return:
### Input: Create a minimal __init__ file with enough boiler plate to not add to lint messages :param path: :return: ### Response: #vtb def create_init(self, path): source = with io.open(path, "w", encoding="utf-8") as outfile: outfile.write(source)
#vtb def gene_to_panels(self, case_obj):
    LOG.info("Building gene to panels")
    gene_dict = {}
    for panel_info in case_obj.get('panels', []):
        panel_name = panel_info['panel_name']
        panel_version = panel_info['version']
        panel_obj = self.gene_panel(panel_name, version=panel_version)
        if not panel_obj:
            LOG.warning("Panel: {0}, version {1} does not exist in database".format(panel_name, panel_version))
        for gene in panel_obj['genes']:
            hgnc_id = gene['hgnc_id']
            if hgnc_id not in gene_dict:
                gene_dict[hgnc_id] = set([panel_name])
                continue
            gene_dict[hgnc_id].add(panel_name)
    LOG.info("Gene to panels done")
    return gene_dict
Fetch all gene panels and group them by gene Args: case_obj(scout.models.Case) Returns: gene_dict(dict): A dictionary with gene as keys and a set of panel names as value
### Input: Fetch all gene panels and group them by gene Args: case_obj(scout.models.Case) Returns: gene_dict(dict): A dictionary with gene as keys and a set of panel names as value ### Response: #vtb def gene_to_panels(self, case_obj):
    LOG.info("Building gene to panels")
    gene_dict = {}
    for panel_info in case_obj.get('panels', []):
        panel_name = panel_info['panel_name']
        panel_version = panel_info['version']
        panel_obj = self.gene_panel(panel_name, version=panel_version)
        if not panel_obj:
            LOG.warning("Panel: {0}, version {1} does not exist in database".format(panel_name, panel_version))
        for gene in panel_obj['genes']:
            hgnc_id = gene['hgnc_id']
            if hgnc_id not in gene_dict:
                gene_dict[hgnc_id] = set([panel_name])
                continue
            gene_dict[hgnc_id].add(panel_name)
    LOG.info("Gene to panels done")
    return gene_dict
#vtb def get_count(self, prefix=): return sum([self.counters[key] for key in self.messages if key.startswith(prefix)])
Return the total count of errors and warnings.
### Input: Return the total count of errors and warnings. ### Response: #vtb def get_count(self, prefix=): return sum([self.counters[key] for key in self.messages if key.startswith(prefix)])
#vtb def load_watch(): XysidesubjectX_labelsy_labels module_path = dirname(__file__) data = np.load(module_path + "/data/watch_dataset.npy").item() return data
Loads some of the 6-axis inertial sensor data from my smartwatch project. The sensor data was recorded as study subjects performed sets of 20 shoulder exercise repetitions while wearing a smartwatch. It is a multivariate time series. The study can be found here: https://arxiv.org/abs/1802.01489 Returns ------- data : dict data['X'] : list, length 140 | inertial sensor data, each element with shape [n_samples, 6] | sampled at 50 Hz data['y'] : array, length 140 target vector (exercise type) data['side'] : array, length 140 the extremity side, 1 = right, 0 = left data['subject'] : array, length 140 the subject (participant) number data['X_labels'] : str list, length 6 ordered labels for the sensor data variables data['y_labels'] :str list, length 7 ordered labels for the target (exercise type) Examples -------- >>> from seglearn.datasets import load_watch >>> data = load_watch() >>> print(data.keys())
### Input: Loads some of the 6-axis inertial sensor data from my smartwatch project. The sensor data was recorded as study subjects performed sets of 20 shoulder exercise repetitions while wearing a smartwatch. It is a multivariate time series. The study can be found here: https://arxiv.org/abs/1802.01489 Returns ------- data : dict data['X'] : list, length 140 | inertial sensor data, each element with shape [n_samples, 6] | sampled at 50 Hz data['y'] : array, length 140 target vector (exercise type) data['side'] : array, length 140 the extremity side, 1 = right, 0 = left data['subject'] : array, length 140 the subject (participant) number data['X_labels'] : str list, length 6 ordered labels for the sensor data variables data['y_labels'] :str list, length 7 ordered labels for the target (exercise type) Examples -------- >>> from seglearn.datasets import load_watch >>> data = load_watch() >>> print(data.keys()) ### Response: #vtb def load_watch(): XysidesubjectX_labelsy_labels module_path = dirname(__file__) data = np.load(module_path + "/data/watch_dataset.npy").item() return data
#vtb def add_filter(self, ftype, func): if not isinstance(ftype, type): raise TypeError("Expected type object, got %s" % type(ftype)) self.castfilter = [(t, f) for (t, f) in self.castfilter if t != ftype] self.castfilter.append((ftype, func)) self.castfilter.sort()
Register a new output filter. Whenever bottle hits a handler output matching `ftype`, `func` is applyed to it.
### Input: Register a new output filter. Whenever bottle hits a handler output matching `ftype`, `func` is applyed to it. ### Response: #vtb def add_filter(self, ftype, func): if not isinstance(ftype, type): raise TypeError("Expected type object, got %s" % type(ftype)) self.castfilter = [(t, f) for (t, f) in self.castfilter if t != ftype] self.castfilter.append((ftype, func)) self.castfilter.sort()
#vtb def vgcreate(vgname, devices, **kwargs): if not vgname or not devices: return if isinstance(devices, six.string_types): devices = devices.split() cmd = [, vgname] for device in devices: cmd.append(device) valid = (, , , , , ) for var in kwargs: if kwargs[var] and var in valid: cmd.append(.format(var)) cmd.append(kwargs[var]) out = __salt__[](cmd, python_shell=False).splitlines() vgdata = vgdisplay(vgname) vgdata[] = out[0].strip() return vgdata
Create an LVM volume group CLI Examples: .. code-block:: bash salt mymachine lvm.vgcreate my_vg /dev/sdb1,/dev/sdb2 salt mymachine lvm.vgcreate my_vg /dev/sdb1 clustered=y
### Input: Create an LVM volume group CLI Examples: .. code-block:: bash salt mymachine lvm.vgcreate my_vg /dev/sdb1,/dev/sdb2 salt mymachine lvm.vgcreate my_vg /dev/sdb1 clustered=y ### Response: #vtb def vgcreate(vgname, devices, **kwargs): if not vgname or not devices: return if isinstance(devices, six.string_types): devices = devices.split() cmd = [, vgname] for device in devices: cmd.append(device) valid = (, , , , , ) for var in kwargs: if kwargs[var] and var in valid: cmd.append(.format(var)) cmd.append(kwargs[var]) out = __salt__[](cmd, python_shell=False).splitlines() vgdata = vgdisplay(vgname) vgdata[] = out[0].strip() return vgdata
#vtb def stop_instance(self, instance_id): if not instance_id: log.info("Instance to stop has no instance id") return gce = self._connect() try: request = gce.instances().delete(project=self._project_id, instance=instance_id, zone=self._zone) response = self._execute_request(request) self._check_response(response) except HttpError as e: if e.resp.status == 404: raise InstanceNotFoundError( "Instance `{instance_id}` was not found" .format(instance_id=instance_id)) else: raise InstanceError( "Could not stop instance `{instance_id}`: `{e}`" .format(instance_id=instance_id, e=e)) except CloudProviderError as e: raise InstanceError( "Could not stop instance `{instance_id}`: `{e}`" .format(instance_id=instance_id, e=e))
Stops the instance gracefully. :param str instance_id: instance identifier :raises: `InstanceError` if instance can not be stopped
### Input: Stops the instance gracefully. :param str instance_id: instance identifier :raises: `InstanceError` if instance can not be stopped ### Response: #vtb def stop_instance(self, instance_id): if not instance_id: log.info("Instance to stop has no instance id") return gce = self._connect() try: request = gce.instances().delete(project=self._project_id, instance=instance_id, zone=self._zone) response = self._execute_request(request) self._check_response(response) except HttpError as e: if e.resp.status == 404: raise InstanceNotFoundError( "Instance `{instance_id}` was not found" .format(instance_id=instance_id)) else: raise InstanceError( "Could not stop instance `{instance_id}`: `{e}`" .format(instance_id=instance_id, e=e)) except CloudProviderError as e: raise InstanceError( "Could not stop instance `{instance_id}`: `{e}`" .format(instance_id=instance_id, e=e))
#vtb def connect(self, hostkey=None, username=, password=None, pkey=None): if hostkey is not None: self._preferred_keys = [ hostkey.get_name() ] self.start_client() if (hostkey is not None): key = self.get_remote_server_key() if (key.get_name() != hostkey.get_name()) or (str(key) != str(hostkey)): self._log(DEBUG, ) self._log(DEBUG, % (hostkey.get_name(), repr(str(hostkey)))) self._log(DEBUG, % (key.get_name(), repr(str(key)))) raise SSHException() self._log(DEBUG, % hostkey.get_name()) if (pkey is not None) or (password is not None): if password is not None: self._log(DEBUG, ) self.auth_password(username, password) else: self._log(DEBUG, ) self.auth_publickey(username, pkey) return
Negotiate an SSH2 session, and optionally verify the server's host key and authenticate using a password or private key. This is a shortcut for L{start_client}, L{get_remote_server_key}, and L{Transport.auth_password} or L{Transport.auth_publickey}. Use those methods if you want more control. You can use this method immediately after creating a Transport to negotiate encryption with a server. If it fails, an exception will be thrown. On success, the method will return cleanly, and an encrypted session exists. You may immediately call L{open_channel} or L{open_session} to get a L{Channel} object, which is used for data transfer. @note: If you fail to supply a password or private key, this method may succeed, but a subsequent L{open_channel} or L{open_session} call may fail because you haven't authenticated yet. @param hostkey: the host key expected from the server, or C{None} if you don't want to do host key verification. @type hostkey: L{PKey<pkey.PKey>} @param username: the username to authenticate as. @type username: str @param password: a password to use for authentication, if you want to use password authentication; otherwise C{None}. @type password: str @param pkey: a private key to use for authentication, if you want to use private key authentication; otherwise C{None}. @type pkey: L{PKey<pkey.PKey>} @raise SSHException: if the SSH2 negotiation fails, the host key supplied by the server is incorrect, or authentication fails.
### Input: Negotiate an SSH2 session, and optionally verify the server's host key and authenticate using a password or private key. This is a shortcut for L{start_client}, L{get_remote_server_key}, and L{Transport.auth_password} or L{Transport.auth_publickey}. Use those methods if you want more control. You can use this method immediately after creating a Transport to negotiate encryption with a server. If it fails, an exception will be thrown. On success, the method will return cleanly, and an encrypted session exists. You may immediately call L{open_channel} or L{open_session} to get a L{Channel} object, which is used for data transfer. @note: If you fail to supply a password or private key, this method may succeed, but a subsequent L{open_channel} or L{open_session} call may fail because you haven't authenticated yet. @param hostkey: the host key expected from the server, or C{None} if you don't want to do host key verification. @type hostkey: L{PKey<pkey.PKey>} @param username: the username to authenticate as. @type username: str @param password: a password to use for authentication, if you want to use password authentication; otherwise C{None}. @type password: str @param pkey: a private key to use for authentication, if you want to use private key authentication; otherwise C{None}. @type pkey: L{PKey<pkey.PKey>} @raise SSHException: if the SSH2 negotiation fails, the host key supplied by the server is incorrect, or authentication fails. ### Response: #vtb def connect(self, hostkey=None, username=, password=None, pkey=None): if hostkey is not None: self._preferred_keys = [ hostkey.get_name() ] self.start_client() if (hostkey is not None): key = self.get_remote_server_key() if (key.get_name() != hostkey.get_name()) or (str(key) != str(hostkey)): self._log(DEBUG, ) self._log(DEBUG, % (hostkey.get_name(), repr(str(hostkey)))) self._log(DEBUG, % (key.get_name(), repr(str(key)))) raise SSHException() self._log(DEBUG, % hostkey.get_name()) if (pkey is not None) or (password is not None): if password is not None: self._log(DEBUG, ) self.auth_password(username, password) else: self._log(DEBUG, ) self.auth_publickey(username, pkey) return
#vtb def overview(): search = Service.search() search = search.filter("term", state=) search.aggs.bucket(, , field=, order={: }, size=100) \ .metric(, , field=) response = search.execute() print_line("Port Count") print_line("---------------") for entry in response.aggregations.port_count.buckets: print_line("{0:<7} {1}".format(entry.key, entry.unique_count.value))
Function to create an overview of the services. Will print a list of ports found an the number of times the port was seen.
### Input: Function to create an overview of the services. Will print a list of ports found and the number of times the port was seen. ### Response: #vtb def overview():
    search = Service.search()
    search = search.filter("term", state='open')
    search.aggs.bucket('port_count', 'terms', field='port', order={'unique_count': 'desc'}, size=100) \
        .metric('unique_count', 'cardinality', field='address')
    response = search.execute()
    print_line("Port Count")
    print_line("---------------")
    for entry in response.aggregations.port_count.buckets:
        print_line("{0:<7} {1}".format(entry.key, entry.unique_count.value))