Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError

Exception: DatasetGenerationCastError

Message:
An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 17 new columns ({'docstring_summary', 'path', 'argument_list', 'identifier', 'nwo', 'idx', 'no_docstring_code', 'language', 'parameters', 'url', 'function_tokens', 'function', 'docstring', 'score', 'sha', 'docstring_tokens', 'return_statement'}) and 5 missing columns ({'id_', 'query', 'task_name', 'negative', 'positive'}).
This happened while the json dataset builder was generating data using
hf://datasets/Denis641/AdvTestNodocstring/modified_test_new.jsonl (at revision e507f92e1342963d6e0c850362fc44526c14cd32)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)

Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
url: string
sha: string
docstring_summary: string
language: string
parameters: string
return_statement: string
argument_list: string
function_tokens: list<item: string>
  child 0, item: string
function: string
path: string
identifier: string
docstring: string
docstring_tokens: list<item: string>
  child 0, item: string
nwo: string
score: double
idx: int64
no_docstring_code: string
to
{'query': Value(dtype='string', id=None), 'positive': Value(dtype='string', id=None), 'id_': Value(dtype='int64', id=None), 'task_name': Value(dtype='string', id=None), 'negative': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1577, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1191, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 17 new columns ({'docstring_summary', 'path', 'argument_list', 'identifier', 'nwo', 'idx', 'no_docstring_code', 'language', 'parameters', 'url', 'function_tokens', 'function', 'docstring', 'score', 'sha', 'docstring_tokens', 'return_statement'}) and 5 missing columns ({'id_', 'query', 'task_name', 'negative', 'positive'}).
This happened while the json dataset builder was generating data using
hf://datasets/Denis641/AdvTestNodocstring/modified_test_new.jsonl (at revision e507f92e1342963d6e0c850362fc44526c14cd32)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
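The message boils down to this: the repository mixes files with the 5-column retrieval schema (query/positive/negative/id_/task_name) and files with the 17-column CodeSearchNet-style schema, so the builder cannot cast them into one table. As a hedged sketch (not part of the original page), one way to inspect the offending file locally is to load it on its own with the `datasets` JSON builder; only the repository and file name come from the error above, everything else is illustrative.

```python
# Sketch only: load just the mismatched file so its 17-column schema
# does not clash with the 5-column retrieval files.
from datasets import load_dataset

ds = load_dataset(
    "Denis641/AdvTestNodocstring",
    data_files="modified_test_new.jsonl",  # file named in the error message
    split="train",
)
print(ds.column_names)  # expect the 17 CodeSearchNet-style columns
```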
query (string) | positive (string) | id_ (int64) | task_name (string) | negative (string)
---|---|---|---|---
Return either the full or truncated version of a QIIME-formatted taxonomy string.
:type p: str
:param p: A QIIME-formatted taxonomy string: k__Foo; p__Bar; ...
:type level: str
:param level: The different levels of identification are kingdom (k), phylum (p),
class (c), order (o), family (f), genus (g) and species (s). If level is
not provided, the default level of identification is species.
:rtype: str
:return: A QIIME-formatted taxonomy string up to the classification given
by param level. | def split_phylogeny(p, level="s"):
level = level+"__"
result = p.split(level)
return result[0]+level+result[1].split(";")[0] | 0 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L159-L177 | def reset_local_buffers(self):
agent_ids = list(self.keys())
for k in agent_ids:
self[k].reset_agent() |
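A minimal usage sketch for the `split_phylogeny` row above; the taxonomy string is invented for illustration.

```python
# Truncate an invented QIIME-style taxonomy string at the phylum (p) level.
tax = "k__Bacteria; p__Firmicutes; c__Bacilli; o__Bacillales"
print(split_phylogeny(tax, level="p"))  # -> "k__Bacteria; p__Firmicutes"
```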
Check whether the supplied directory path exists and, if not, create it. The
method catches OSError exceptions and returns a descriptive message instead of
re-raising the error.
:type d: str
:param d: It is the full path to a directory.
:return: Does not return anything, but creates a directory path if it doesn't exist
already. | def ensure_dir(d):
if not os.path.exists(d):
try:
os.makedirs(d)
except OSError as oe:
# should not happen with os.makedirs
# ENOENT: No such file or directory
if oe.errno == errno.ENOENT:
msg = twdd("""One or more directories in the path ({}) do not exist. If
you are specifying a new directory for output, please ensure
all other directories in the path currently exist.""")
return msg.format(d)
else:
msg = twdd("""An error occurred trying to create the output directory
({}) with message: {}""")
return msg.format(d, oe.strerror) | 1 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L180-L206 | def on_change(self, value):
self._modifier(self.inst, self.prop, value) |
Takes either a file path or an open file handle, checks validity and returns an open
file handle or raises an appropriate Exception.
:type fnh: str
:param fnh: It is the full path to a file, or open file handle
:type mode: str
:param mode: The way in which this file will be used, for example to read or write or
both. By default, the file will be opened in 'rU' mode.
:return: Returns an opened file for appropriate usage. | def file_handle(fnh, mode="rU"):
handle = None
if isinstance(fnh, file):
if fnh.closed:
raise ValueError("Input file is closed.")
handle = fnh
elif isinstance(fnh, str):
handle = open(fnh, mode)
return handle | 2 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L209-L231 | def merge_partition_offsets(*partition_offsets):
output = dict()
for partition_offset in partition_offsets:
for partition, offset in six.iteritems(partition_offset):
prev_offset = output.get(partition, 0)
output[partition] = max(prev_offset, offset)
return output |
Find the user specified categories in the map and create a dictionary to contain the
relevant data for each type within the categories. Multiple categories will have their
types combined such that each possible combination will have its own entry in the
dictionary.
:type imap: dict
:param imap: The input mapping file data keyed by SampleID
:type header: list
:param header: The header line from the input mapping file. This will be searched for
the user-specified categories
:type categories: list
:param categories: The list of user-specified category column names from the mapping file
:rtype: dict
:return: A sorted dictionary keyed on the combinations of all the types found within
the user-specified categories. Each entry will contain an empty DataCategory
namedtuple. If no categories are specified, a single entry with the key
'default' will be returned | def gather_categories(imap, header, categories=None):
# If no categories provided, return all SampleIDs
if categories is None:
return {"default": DataCategory(set(imap.keys()), {})}
cat_ids = [header.index(cat)
for cat in categories if cat in header and "=" not in cat]
table = OrderedDict()
conditions = defaultdict(set)
for i, cat in enumerate(categories):
if "=" in cat and cat.split("=")[0] in header:
cat_name = header[header.index(cat.split("=")[0])]
conditions[cat_name].add(cat.split("=")[1])
# If invalid categories or conditions identified, return all SampleIDs
if not cat_ids and not conditions:
return {"default": DataCategory(set(imap.keys()), {})}
#If only category column given, return column-wise SampleIDs
if cat_ids and not conditions:
for sid, row in imap.items():
cat_name = "_".join([row[cid] for cid in cat_ids])
if cat_name not in table:
table[cat_name] = DataCategory(set(), {})
table[cat_name].sids.add(sid)
return table
# Collect all condition names
cond_ids = set()
for k in conditions:
try:
cond_ids.add(header.index(k))
except ValueError:
continue
idx_to_test = set(cat_ids).union(cond_ids)
# If column name and condition given, return overlapping SampleIDs of column and
# condition combinations
for sid, row in imap.items():
if all([row[header.index(c)] in conditions[c] for c in conditions]):
key = "_".join([row[idx] for idx in idx_to_test])
try:
assert key in table.keys()
except AssertionError:
table[key] = DataCategory(set(), {})
table[key].sids.add(sid)
try:
assert len(table) > 0
except AssertionError:
return {"default": DataCategory(set(imap.keys()), {})}
else:
return table | 3 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L238-L309 | def elapsed_time_from(start_time):
time_then = make_time(start_time)
time_now = datetime.utcnow().replace(microsecond=0)
if time_then is None:
return
delta_t = time_now - time_then
return delta_t |
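A hedged sketch of how `gather_categories` (row above) groups SampleIDs; the mapping-file data is invented, and `DataCategory` comes from the same module as the function.

```python
# Invented mapping-file data: header plus two sample rows keyed by SampleID.
header = ["SampleID", "Treatment", "Site"]
imap = {
    "S1": ["S1", "Control", "Gut"],
    "S2": ["S2", "Fasting", "Gut"],
}

# Plain category column: one DataCategory entry per Treatment value.
print(sorted(gather_categories(imap, header, ["Treatment"])))
# -> ['Control', 'Fasting']

# Column=value condition: only samples matching the condition are kept.
print(sorted(gather_categories(imap, header, ["Treatment=Fasting"])))
# -> ['Fasting']
```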
Parses the unifrac results file into a dictionary
:type unifracFN: str
:param unifracFN: The path to the unifrac results file
:rtype: dict
:return: A dictionary with keys: 'pcd' (principal coordinates data) which is a
dictionary of the data keyed by sample ID, 'eigvals' (eigenvalues), and
'varexp' (variation explained) | def parse_unifrac(unifracFN):
with open(unifracFN, "rU") as uF:
first = uF.next().split("\t")
lines = [line.strip() for line in uF]
unifrac = {"pcd": OrderedDict(), "eigvals": [], "varexp": []}
if first[0] == "pc vector number":
return parse_unifrac_v1_8(unifrac, lines)
elif first[0] == "Eigvals":
return parse_unifrac_v1_9(unifrac, lines)
else:
raise ValueError("File format not supported/recognized. Please check input "
"unifrac file.") | 4 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L311-L334 | def set_classes(self):
# Custom field classes on field wrapper
if self.attrs.get("_field_class"):
self.values["class"].append(escape(self.attrs.get("_field_class")))
# Inline class
if self.attrs.get("_inline"):
self.values["class"].append("inline")
# Disabled class
if self.field.field.disabled:
self.values["class"].append("disabled")
# Required class
if self.field.field.required and not self.attrs.get("_no_required"):
self.values["class"].append("required")
elif self.attrs.get("_required") and not self.field.field.required:
self.values["class"].append("required") |
Function to parse data from older version of unifrac file obtained from Qiime version
1.8 and earlier.
:type unifrac: dict
:param unifrac: The dictionary (initialized in parse_unifrac) to populate with the parsed data
:type file_data: list
:param file_data: Unifrac data lines after stripping whitespace characters. | def parse_unifrac_v1_8(unifrac, file_data):
for line in file_data:
if line == "":
break
line = line.split("\t")
unifrac["pcd"][line[0]] = [float(e) for e in line[1:]]
unifrac["eigvals"] = [float(entry) for entry in file_data[-2].split("\t")[1:]]
unifrac["varexp"] = [float(entry) for entry in file_data[-1].split("\t")[1:]]
return unifrac | 5 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L337-L356 | async def stop_bridges(self):
for task in self.sleep_tasks:
task.cancel()
for bridge in self.bridges:
bridge.stop() |
Function to parse data from newer version of unifrac file obtained from Qiime version
1.9 and later.
:type unifrac: dict
:param unifrac: The dictionary (initialized in parse_unifrac) to populate with the parsed data
:type file_data: list
:param file_data: Unifrac data lines after stripping whitespace characters. | def parse_unifrac_v1_9(unifrac, file_data):
unifrac["eigvals"] = [float(entry) for entry in file_data[0].split("\t")]
unifrac["varexp"] = [float(entry)*100 for entry in file_data[3].split("\t")]
for line in file_data[8:]:
if line == "":
break
line = line.split("\t")
unifrac["pcd"][line[0]] = [float(e) for e in line[1:]]
return unifrac | 6 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L359-L378 | def is_charge_balanced(reaction):
charge = 0
for metabolite, coefficient in iteritems(reaction.metabolites):
if metabolite.charge is None:
return False
charge += coefficient * metabolite.charge
return charge == 0 |
Determine color-category mapping. If color_column was specified, then map the category
names to color values. Otherwise, use the palettable colors to automatically generate
a set of colors for the group values.
:type sample_map: dict
:param sample_map: Map associating each line of the mapping file with the appropriate
sample ID (each value of the map also contains the sample ID)
:type header: tuple
:param header: The header line from the mapping file, as a tuple
:type group_column: str
:param group_column: String denoting the column name for sample groups.
:type color_column: str
:param color_column: String denoting the column name for sample colors.
:rtype: dict
:return: A dictionary associating each sample group with a color. | def color_mapping(sample_map, header, group_column, color_column=None):
group_colors = OrderedDict()
group_gather = gather_categories(sample_map, header, [group_column])
if color_column is not None:
color_gather = gather_categories(sample_map, header, [color_column])
# match sample IDs between color_gather and group_gather
for group in group_gather:
for color in color_gather:
# allow incomplete assignment of colors, if group sids overlap at
# all with the color sids, consider it a match
if group_gather[group].sids.intersection(color_gather[color].sids):
group_colors[group] = color
else:
bcolors = itertools.cycle(Set3_12.hex_colors)
for group in group_gather:
group_colors[group] = bcolors.next()
return group_colors | 7 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L380-L419 | def is_balance_proof_safe_for_onchain_operations(
balance_proof: BalanceProofSignedState,
) -> bool:
total_amount = balance_proof.transferred_amount + balance_proof.locked_amount
return total_amount <= UINT256_MAX |
return reverse complement of read | def rev_c(read):
rc = []
rc_nucs = {'A':'T', 'T':'A', 'G':'C', 'C':'G', 'N':'N'}
for base in read:
rc.extend(rc_nucs[base.upper()])
return rc[::-1] | 8 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/shuffle_genome.py#L27-L35 | def hypermedia_out():
request = cherrypy.serving.request
request._hypermedia_inner_handler = request.handler
# If handler has been explicitly set to None, don't override.
if request.handler is not None:
request.handler = hypermedia_handler |
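A one-line check of the `rev_c` row above.

```python
print("".join(rev_c("ATGC")))  # reverse complement of ATGC -> "GCAT"
```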
randomly shuffle genome | def shuffle_genome(genome, cat, fraction = float(100), plot = True, \
alpha = 0.1, beta = 100000, \
min_length = 1000, max_length = 200000):
header = '>randomized_%s' % (genome.name)
sequence = list(''.join([i[1] for i in parse_fasta(genome)]))
length = len(sequence)
shuffled = []
# break genome into pieces
while sequence is not False:
s = int(random.gammavariate(alpha, beta))
if s <= min_length or s >= max_length:
continue
if len(sequence) < s:
seq = sequence[0:]
else:
seq = sequence[0:s]
sequence = sequence[s:]
# if bool(random.getrandbits(1)) is True:
# seq = rev_c(seq)
# print('fragment length: %s reverse complement: True' % ('{:,}'.format(s)), file=sys.stderr)
# else:
# print('fragment length: %s reverse complement: False' % ('{:,}'.format(s)), file=sys.stderr)
shuffled.append(''.join(seq))
if sequence == []:
break
# shuffle pieces
random.shuffle(shuffled)
# subset fragments
if fraction == float(100):
subset = shuffled
else:
max_pieces = int(length * fraction/100)
subset, total = [], 0
for fragment in shuffled:
length = len(fragment)
if total + length <= max_pieces:
subset.append(fragment)
total += length
else:
diff = max_pieces - total
subset.append(fragment[0:diff])
break
# combine sequences, if requested
if cat is True:
yield [header, ''.join(subset)]
else:
for i, seq in enumerate(subset):
yield ['%s fragment:%s' % (header, i), seq] | 9 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/shuffle_genome.py#L37-L87 | def GetEntries(self, parser_mediator, match=None, **unused_kwargs):
stores = match.get('Stores', {})
for volume_name, volume in iter(stores.items()):
datetime_value = volume.get('CreationDate', None)
if not datetime_value:
continue
partial_path = volume['PartialPath']
event_data = plist_event.PlistTimeEventData()
event_data.desc = 'Spotlight Volume {0:s} ({1:s}) activated.'.format(
volume_name, partial_path)
event_data.key = ''
event_data.root = '/Stores'
event = time_events.PythonDatetimeEvent(
datetime_value, definitions.TIME_DESCRIPTION_WRITTEN)
parser_mediator.ProduceEventWithEventData(event, event_data) |
If the fit contains statistically insignificant parameters, remove them.
Returns a pruned fit where all parameters have p-values of the t-statistic below p_max
Parameters
----------
fit: fm.ols fit object
Can contain insignificant parameters
p_max : float
Maximum allowed probability of the t-statistic
Returns
-------
fit: fm.ols fit object
Won't contain any insignificant parameters | def _prune(self, fit, p_max):
def remove_from_model_desc(x, model_desc):
"""
Return a model_desc without x
"""
rhs_termlist = []
for t in model_desc.rhs_termlist:
if not t.factors:
# intercept, add anyway
rhs_termlist.append(t)
elif not x == t.factors[0]._varname:
# this is not the term with x
rhs_termlist.append(t)
md = ModelDesc(model_desc.lhs_termlist, rhs_termlist)
return md
corrected_model_desc = ModelDesc(fit.model.formula.lhs_termlist[:], fit.model.formula.rhs_termlist[:])
pars_to_prune = fit.pvalues.where(fit.pvalues > p_max).dropna().index.tolist()
try:
pars_to_prune.remove('Intercept')
except:
pass
while pars_to_prune:
corrected_model_desc = remove_from_model_desc(pars_to_prune[0], corrected_model_desc)
fit = fm.ols(corrected_model_desc, data=self.df).fit()
pars_to_prune = fit.pvalues.where(fit.pvalues > p_max).dropna().index.tolist()
try:
pars_to_prune.remove('Intercept')
except:
pass
return fit | 10 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/regression.py#L222-L272 | def hexblock_word(cls, data, address = None,
bits = None,
separator = ' ',
width = 8):
return cls.hexblock_cb(cls.hexa_word, data,
address, bits, width * 2,
cb_kwargs = {'separator': separator}) |
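A hedged sketch of the kind of fit `_prune` (row above) expects; it assumes `fm` is statsmodels' formula API, and the data frame and formula are invented.

```python
# Build an OLS fit whose p-values _prune would inspect; parameters with
# pvalues > p_max are the ones it removes from the formula.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "y":  [1.0, 2.1, 2.9, 4.2, 5.1, 5.9],
    "x1": [1, 2, 3, 4, 5, 6],
    "x2": [0.3, -0.1, 0.2, 0.0, -0.2, 0.1],  # noise, likely insignificant
})
fit = smf.ols("y ~ x1 + x2", data=df).fit()
print(fit.pvalues)
```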
Return the best fit, based on rsquared | def find_best_rsquared(list_of_fits):
res = sorted(list_of_fits, key=lambda x: x.rsquared)
return res[-1] | 11 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/regression.py#L275-L278 | def _skip_trampoline(handler):
data_event, self = (yield None)
delegate = handler
event = None
depth = 0
while True:
def pass_through():
_trans = delegate.send(Transition(data_event, delegate))
return _trans, _trans.delegate, _trans.event
if data_event is not None and data_event.type is ReadEventType.SKIP:
while True:
trans, delegate, event = pass_through()
if event is not None:
if event.event_type is IonEventType.CONTAINER_END and event.depth <= depth:
break
if event is None or event.event_type is IonEventType.INCOMPLETE:
data_event, _ = yield Transition(event, self)
else:
trans, delegate, event = pass_through()
if event is not None and (event.event_type is IonEventType.CONTAINER_START or
event.event_type is IonEventType.CONTAINER_END):
depth = event.depth
data_event, _ = yield Transition(event, self) |
Return a df with predictions and confidence interval
Notes
-----
The df will contain the following columns:
- 'predicted': the model output
- 'interval_u', 'interval_l': upper and lower confidence bounds.
The result will depend on the following attributes of self:
confint : float (default=0.95)
Confidence level for two-sided hypothesis
allow_negative_predictions : bool (default=True)
If False, correct negative predictions to zero (typically for energy consumption predictions)
Parameters
----------
fit : Statsmodels fit
df : pandas DataFrame or None (default)
If None, use self.df
Returns
-------
df_res : pandas DataFrame
Copy of df with additional columns 'predicted', 'interval_u' and 'interval_l' | def _predict(self, fit, df):
# Add model results to data as column 'predictions'
df_res = df.copy()
if 'Intercept' in fit.model.exog_names:
df_res['Intercept'] = 1.0
df_res['predicted'] = fit.predict(df_res)
if not self.allow_negative_predictions:
df_res.loc[df_res['predicted'] < 0, 'predicted'] = 0
prstd, interval_l, interval_u = wls_prediction_std(fit,
df_res[fit.model.exog_names],
alpha=1 - self.confint)
df_res['interval_l'] = interval_l
df_res['interval_u'] = interval_u
if 'Intercept' in df_res:
df_res.drop(labels=['Intercept'], axis=1, inplace=True)
return df_res | 12 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/regression.py#L292-L338 | def detach(self, listener):
if listener in self.listeners:
self.listeners.remove(listener) |
Calculate the relative abundance of each OTUID in a Sample.
:type biomf: A BIOM file.
:param biomf: OTU table format.
:type sampleIDs: list
:param sampleIDs: A list of sample id's from BIOM format OTU table.
:rtype: dict
:return: Returns a dictionary keyed on SampleIDs, and the values are dictionaries keyed on
OTUID's and their values represent the relative abundance of that OTUID in
that SampleID. | def relative_abundance(biomf, sampleIDs=None):
if sampleIDs is None:
sampleIDs = biomf.ids()
else:
try:
for sid in sampleIDs:
assert sid in biomf.ids()
except AssertionError:
raise ValueError(
"\nError while calculating relative abundances: The sampleIDs provided do"
" not match the sampleIDs in biom file. Please double check the sampleIDs"
" provided.\n")
otuIDs = biomf.ids(axis="observation")
norm_biomf = biomf.norm(inplace=False)
return {sample: {otuID: norm_biomf.get_value_by_ids(otuID, sample)
for otuID in otuIDs} for sample in sampleIDs} | 13 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L11-L41 | def clean(jail=None,
chroot=None,
root=None,
clean_all=False,
dryrun=False):
opts = ''
if clean_all:
opts += 'a'
if dryrun:
opts += 'n'
else:
opts += 'y'
cmd = _pkg(jail, chroot, root)
cmd.append('clean')
if opts:
cmd.append('-' + opts)
return __salt__['cmd.run'](
cmd,
output_loglevel='trace',
python_shell=False
) |
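The relative-abundance computation in the row above is per-sample normalization; a tiny pure-Python illustration with made-up counts (the biom `norm()` call does this for every sample column).

```python
# Made-up OTU counts for a single sample.
counts = {"OTU1": 3.0, "OTU2": 1.0}
total = sum(counts.values())
print({otu: c / total for otu, c in counts.items()})
# -> {'OTU1': 0.75, 'OTU2': 0.25}
```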
Calculate the mean OTU abundance percentage.
:type ra: Dict
:param ra: 'ra' refers to a dictionary keyed on SampleIDs, and the values are
dictionaries keyed on OTUID's and their values represent the relative
abundance of that OTUID in that SampleID. 'ra' is the output of
relative_abundance() function.
:type otuIDs: List
:param otuIDs: A list of OTUID's for which the percentage abundance needs to be
measured.
:rtype: dict
:return: A dictionary of OTUID and their percent relative abundance as key/value pair. | def mean_otu_pct_abundance(ra, otuIDs):
sids = ra.keys()
otumeans = defaultdict(int)
for oid in otuIDs:
otumeans[oid] = sum([ra[sid][oid] for sid in sids
if oid in ra[sid]]) / len(sids) * 100
return otumeans | 14 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L44-L67 | def change_frozen_attr(self):
# Selections are not supported
if self.grid.selection:
statustext = _("Freezing selections is not supported.")
post_command_event(self.main_window, self.StatusBarMsg,
text=statustext)
cursor = self.grid.actions.cursor
frozen = self.grid.code_array.cell_attributes[cursor]["frozen"]
if frozen:
# We have an frozen cell that has to be unfrozen
# Delete frozen cache content
self.grid.code_array.frozen_cache.pop(repr(cursor))
else:
# We have an non-frozen cell that has to be frozen
# Add frozen cache content
res_obj = self.grid.code_array[cursor]
self.grid.code_array.frozen_cache[repr(cursor)] = res_obj
# Set the new frozen state / code
selection = Selection([], [], [], [], [cursor[:2]])
self.set_attr("frozen", not frozen, selection=selection) |
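Continuing the made-up numbers, a sketch of the mean percent abundance that `mean_otu_pct_abundance` (row above) reports.

```python
# Relative abundance of one OTU in two samples (invented values).
ra = {"S1": {"OTU1": 0.75}, "S2": {"OTU1": 0.25}}
mean_pct = sum(ra[s]["OTU1"] for s in ra) / len(ra) * 100
print(mean_pct)  # 50.0, matching mean_otu_pct_abundance(ra, ["OTU1"])["OTU1"]
```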
Calculate the mean relative abundance percentage.
:type biomf: A BIOM file.
:param biomf: OTU table format.
:type sampleIDs: list
:param sampleIDs: A list of sample id's from BIOM format OTU table.
:param transform: Mathematical function which is used to transform the relative abundance
values to another format. By default, the function has been set to None.
:rtype: dict
:return: A dictionary keyed on OTUID's and their mean relative abundance for a given
number of sampleIDs. | def MRA(biomf, sampleIDs=None, transform=None):
ra = relative_abundance(biomf, sampleIDs)
if transform is not None:
ra = {sample: {otuID: transform(abd) for otuID, abd in ra[sample].items()}
for sample in ra.keys()}
otuIDs = biomf.ids(axis="observation")
return mean_otu_pct_abundance(ra, otuIDs) | 15 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L70-L92 | def close(self):
if self.device:
usb.util.dispose_resources(self.device)
self.device = None |
Calculate the total number of sequences in each OTU or SampleID.
:type biomf: A BIOM file.
:param biomf: OTU table format.
:type sampleIDs: List
:param sampleIDs: A list of column id's from BIOM format OTU table. By default, the
list has been set to None.
:type sample_abd: Boolean
:param sample_abd: A boolean operator to provide output for OTUID's or SampleID's. By
default, the output will be provided for SampleID's.
:rtype: dict
:return: Returns a dictionary keyed on either OTUID's or SampleIDs and their
respective abundance as values. | def raw_abundance(biomf, sampleIDs=None, sample_abd=True):
results = defaultdict(int)
if sampleIDs is None:
sampleIDs = biomf.ids()
else:
try:
for sid in sampleIDs:
assert sid in biomf.ids()
except AssertionError:
raise ValueError(
"\nError while calculating raw total abundances: The sampleIDs provided "
"do not match the sampleIDs in biom file. Please double check the "
"sampleIDs provided.\n")
otuIDs = biomf.ids(axis="observation")
for sampleID in sampleIDs:
for otuID in otuIDs:
abd = biomf.get_value_by_ids(otuID, sampleID)
if sample_abd:
results[sampleID] += abd
else:
results[otuID] += abd
return results | 16 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L95-L135 | def upgrade_api(request, client, version):
min_ver, max_ver = api_versions._get_server_version_range(client)
if min_ver <= api_versions.APIVersion(version) <= max_ver:
client = _nova.novaclient(request, version)
return client |
Function to transform the total abundance calculation for each sample ID to another
format based on user given transformation function.
:type biomf: A BIOM file.
:param biomf: OTU table format.
:param fn: Mathematical function which is used to transform the total abundance values.
By default, the function has been given as base 10 logarithm.
:rtype: dict
:return: Returns a dictionary similar to output of raw_abundance function but with
the abundance values modified by the mathematical operation. By default, the
operation performed on the abundances is base 10 logarithm. | def transform_raw_abundance(biomf, fn=math.log10, sampleIDs=None, sample_abd=True):
totals = raw_abundance(biomf, sampleIDs, sample_abd)
return {sid: fn(abd) for sid, abd in totals.items()} | 17 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L138-L155 | def close(self):
if self.device:
usb.util.dispose_resources(self.device)
self.device = None |
Compute the Mann-Whitney U test for unequal group sample sizes. | def print_MannWhitneyU(div_calc):
try:
x = div_calc.values()[0].values()
y = div_calc.values()[1].values()
except:
return "Error setting up input arrays for Mann-Whitney U Test. Skipping "\
"significance testing."
T, p = stats.mannwhitneyu(x, y)
print "\nMann-Whitney U test statistic:", T
print "Two-tailed p-value: {}".format(2 * p) | 18 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/diversity.py#L54-L66 | def ParseFileObject(self, parser_mediator, file_object):
try:
file_header = self._ReadFileHeader(file_object)
except (ValueError, errors.ParseError):
raise errors.UnableToParseFile('Unable to parse file header.')
tables = self._ReadTablesArray(file_object, file_header.tables_array_offset)
table = tables.get(self._RECORD_TYPE_APPLICATION_PASSWORD, None)
if table:
for record in table.records:
self._ParseApplicationPasswordRecord(parser_mediator, record)
table = tables.get(self._RECORD_TYPE_INTERNET_PASSWORD, None)
if table:
for record in table.records:
self._ParseInternetPasswordRecord(parser_mediator, record) |
Compute the Kruskal-Wallis H-test for independent samples. A typical rule is that
each group must have at least 5 measurements. | def print_KruskalWallisH(div_calc):
calc = defaultdict(list)
try:
for k1, v1 in div_calc.iteritems():
for k2, v2 in v1.iteritems():
calc[k1].append(v2)
except:
return "Error setting up input arrays for Kruskal-Wallis H-Test. Skipping "\
"significance testing."
h, p = stats.kruskal(*calc.values())
print "\nKruskal-Wallis H-test statistic for {} groups: {}".format(str(len(div_calc)), h)
print "p-value: {}".format(p) | 19 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/diversity.py#L69-L84 | def _record_offset(self):
offset = self.blob_file.tell()
self.event_offsets.append(offset) |
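The Kruskal-Wallis row above ultimately calls scipy; a standalone sketch with invented groups of at least five measurements each.

```python
from scipy import stats

g1 = [2.1, 2.4, 2.2, 2.8, 2.5]
g2 = [3.0, 3.3, 2.9, 3.1, 3.4]
g3 = [1.8, 1.6, 2.0, 1.9, 1.7]
h, p = stats.kruskal(g1, g2, g3)
print(h, p)  # H statistic and p-value across the three groups
```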
Parses the given options passed in at the command line. | def handle_program_options():
parser = argparse.ArgumentParser(description="Calculate the alpha diversity\
of a set of samples using one or more \
metrics and output a kernel density \
estimator-smoothed histogram of the \
results.")
parser.add_argument("-m", "--map_file",
help="QIIME mapping file.")
parser.add_argument("-i", "--biom_fp",
help="Path to the BIOM table")
parser.add_argument("-c", "--category",
help="Specific category from the mapping file.")
parser.add_argument("-d", "--diversity", default=["shannon"], nargs="+",
help="The alpha diversity metric. Default \
value is 'shannon', which will calculate the Shannon\
entropy. Multiple metrics can be specified (space separated).\
The full list of metrics is available at:\
http://scikit-bio.org/docs/latest/generated/skbio.diversity.alpha.html.\
Beta diversity metrics will be supported in the future.")
parser.add_argument("--x_label", default=[None], nargs="+",
help="The name of the diversity metric to be displayed on the\
plot as the X-axis label. If multiple metrics are specified,\
then multiple entries for the X-axis label should be given.")
parser.add_argument("--color_by",
help="A column name in the mapping file containing\
hexadecimal (#FF0000) color values that will\
be used to color the groups. Each sample ID must\
have a color entry.")
parser.add_argument("--plot_title", default="",
help="A descriptive title that will appear at the top \
of the output plot. Surround with quotes if there are\
spaces in the title.")
parser.add_argument("-o", "--output_dir", default=".",
help="The directory plots will be saved to.")
parser.add_argument("--image_type", default="png",
help="The type of image to save: png, svg, pdf, eps, etc...")
parser.add_argument("--save_calculations",
help="Path and name of text file to store the calculated "
"diversity metrics.")
parser.add_argument("--suppress_stats", action="store_true", help="Do not display "
"significance testing results which are shown by default.")
parser.add_argument("--show_available_metrics", action="store_true",
help="Supply this parameter to see which alpha diversity metrics "
" are available for usage. No calculations will be performed"
" if this parameter is provided.")
return parser.parse_args() | 20 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/diversity.py#L122-L168 | def fingerprint(self):
if self.num_vertices == 0:
return np.zeros(20, np.ubyte)
else:
return sum(self.vertex_fingerprints) |
make blast db | def blastdb(fasta, maxfile = 10000000):
db = fasta.rsplit('.', 1)[0]
type = check_type(fasta)
if type == 'nucl':
type = ['nhr', type]
else:
type = ['phr', type]
if os.path.exists('%s.%s' % (db, type[0])) is False \
and os.path.exists('%s.00.%s' % (db, type[0])) is False:
print('# ... making blastdb for: %s' % (fasta), file=sys.stderr)
os.system('makeblastdb \
-in %s -out %s -dbtype %s -max_file_sz %s >> log.txt' \
% (fasta, db, type[1], maxfile))
else:
print('# ... database found for: %s' % (fasta), file=sys.stderr)
return db | 21 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/search.py#L28-L46 | def writln(line, unit):
lineP = stypes.stringToCharP(line)
unit = ctypes.c_int(unit)
line_len = ctypes.c_int(len(line))
libspice.writln_(lineP, ctypes.byref(unit), line_len) |
make usearch db | def usearchdb(fasta, alignment = 'local', usearch_loc = 'usearch'):
if '.udb' in fasta:
print('# ... database found: %s' % (fasta), file=sys.stderr)
return fasta
type = check_type(fasta)
db = '%s.%s.udb' % (fasta.rsplit('.', 1)[0], type)
if os.path.exists(db) is False:
print('# ... making usearch db for: %s' % (fasta), file=sys.stderr)
if alignment == 'local':
os.system('%s -makeudb_ublast %s -output %s >> log.txt' % (usearch_loc, fasta, db))
elif alignment == 'global':
os.system('%s -makeudb_usearch %s -output %s >> log.txt' % (usearch_loc, fasta, db))
else:
print('# ... database found for: %s' % (fasta), file=sys.stderr)
return db | 22 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/search.py#L68-L85 | def point_to_line(point, segment_start, segment_end):
# TODO: Needs unittests.
segment_vec = segment_end - segment_start
# t is distance along line
t = -(segment_start - point).dot(segment_vec) / (
segment_vec.length_squared())
closest_point = segment_start + scale_v3(segment_vec, t)
return point - closest_point |
Pretty print. | def _pp(dict_data):
for key, val in dict_data.items():
# pylint: disable=superfluous-parens
print('{0:<11}: {1}'.format(key, val)) | 23 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/control.py#L11-L15 | def _reserve(self, key):
self.assign(key, RESERVED)
try:
yield
finally:
del self._cache[key] |
Print licenses.
:param argparse.Namespace params: parameter
:param bootstrap_py.classifier.Classifiers metadata: package metadata | def print_licences(params, metadata):
if hasattr(params, 'licenses'):
if params.licenses:
_pp(metadata.licenses_desc())
sys.exit(0) | 24 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/control.py#L27-L36 | def vrel(v1, v2):
v1 = stypes.toDoubleVector(v1)
v2 = stypes.toDoubleVector(v2)
return libspice.vrel_c(v1, v2) |
Check repository existence.
:param argparse.Namespace params: parameters | def check_repository_existence(params):
repodir = os.path.join(params.outdir, params.name)
if os.path.isdir(repodir):
raise Conflict(
'Package repository "{0}" already exists.'.format(repodir))
stats = status_codes_by_date_stats()
attacks_data = [{
'type': 'line',
'zIndex': 9,
'name': _('Attacks'),
'data': [(v[0], v[1]['attacks'])
for v in stats]
}]
codes_data = [{
'zIndex': 4,
'name': '2xx',
'data': [(v[0], v[1][200]) for v in stats]
}, {
'zIndex': 5,
'name': '3xx',
'data': [(v[0], v[1][300]) for v in stats]
}, {
'zIndex': 6,
'name': '4xx',
'data': [(v[0], v[1][400]) for v in stats]
}, {
'zIndex': 8,
'name': '5xx',
'data': [(v[0], v[1][500]) for v in stats]
}]
return {'generic_chart': json.dumps(status_codes_by_date_chart()),
'attacks_data': json.dumps(attacks_data),
'codes_data': json.dumps(codes_data)} |
Generate package repository.
:param argparse.Namespace params: parameters | def generate_package(params):
pkg_data = package.PackageData(params)
pkg_tree = package.PackageTree(pkg_data)
pkg_tree.generate()
pkg_tree.move()
VCS(os.path.join(pkg_tree.outdir, pkg_tree.name), pkg_tree.pkg_data) | 26 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/control.py#L59-L68 | def startResponse(self, status, headers, excInfo=None):
self.status = status
self.headers = headers
self.reactor.callInThread(
responseInColor, self.request, status, headers
)
return self.write |
print single reads to stderr | def print_single(line, rev):
if rev is True:
seq = rc(['', line[9]])[1]
qual = line[10][::-1]
else:
seq = line[9]
qual = line[10]
fq = ['@%s' % line[0], seq, '+%s' % line[0], qual]
print('\n'.join(fq), file = sys.stderr) | 27 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/sam2fastq.py#L13-L24 | def set_cache_dir(directory):
global cache_dir
if directory is None:
cache_dir = None
return
if not os.path.exists(directory):
os.makedirs(directory)
if not os.path.isdir(directory):
raise ValueError("not a directory")
cache_dir = directory |
convert sam to fastq | def sam2fastq(sam, singles = False, force = False):
L, R = None, None
for line in sam:
if line.startswith('@') is True:
continue
line = line.strip().split()
bit = [True if i == '1' else False \
for i in bin(int(line[1])).split('b')[1][::-1]]
while len(bit) < 8:
bit.append(False)
pair, proper, na, nap, rev, mrev, left, right = bit
# make sure read is paired
if pair is False:
if singles is True:
print_single(line, rev)
continue
# check if sequence is reverse-complemented
if rev is True:
seq = rc(['', line[9]])[1]
qual = line[10][::-1]
else:
seq = line[9]
qual = line[10]
# check if read is forward or reverse, return when both have been found
if left is True:
if L is not None and force is False:
print('sam file is not sorted', file = sys.stderr)
print('\te.g.: %s' % (line[0]), file = sys.stderr)
exit()
if L is not None:
L = None
continue
L = ['@%s' % line[0], seq, '+%s' % line[0], qual]
if R is not None:
yield L
yield R
L, R = None, None
if right is True:
if R is not None and force is False:
print('sam file is not sorted', file = sys.stderr)
print('\te.g.: %s' % (line[0]), file = sys.stderr)
exit()
if R is not None:
R = None
continue
R = ['@%s' % line[0], seq, '+%s' % line[0], qual]
if L is not None:
yield L
yield R
L, R = None, None | 28 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/sam2fastq.py#L26-L78 | def cublasGetStream(handle):
id = ctypes.c_int()
status = _libcublas.cublasGetStream_v2(handle, ctypes.byref(id))
cublasCheckStatus(status)
return id.value |
sort sam file | def sort_sam(sam, sort):
tempdir = '%s/' % (os.path.abspath(sam).rsplit('/', 1)[0])
if sort is True:
mapping = '%s.sorted.sam' % (sam.rsplit('.', 1)[0])
if sam != '-':
if os.path.exists(mapping) is False:
os.system("\
sort -k1 --buffer-size=%sG -T %s -o %s %s\
" % (sbuffer, tempdir, mapping, sam))
else:
mapping = 'stdin-sam.sorted.sam'
p = Popen("sort -k1 --buffer-size=%sG -T %s -o %s" \
% (sbuffer, tempdir, mapping), stdin = sys.stdin, shell = True)
p.communicate()
mapping = open(mapping)
else:
if sam == '-':
mapping = sys.stdin
else:
mapping = open(sam)
return mapping | 29 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/subset_sam.py#L14-L37 | def ssn(self) -> str:
area = self.random.randint(1, 899)
if area == 666:
area = 665
return '{:03}-{:02}-{:04}'.format(
area,
self.random.randint(1, 99),
self.random.randint(1, 9999),
) |
randomly subset sam file | def sub_sam(sam, percent, sort = True, sbuffer = False):
mapping = sort_sam(sam, sort)
pool = [1 for i in range(0, percent)] + [0 for i in range(0, 100 - percent)]
c = cycle([1, 2])
for line in mapping:
line = line.strip().split()
if line[0].startswith('@'): # get the sam header
yield line
continue
if int(line[1]) <= 20: # is this from a single read?
if random.choice(pool) == 1:
yield line
else:
n = next(c)
if n == 1:
prev = line
if n == 2 and random.choice(pool) == 1:
yield prev
yield line | 30 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/subset_sam.py#L39-L60 | def get_max_port_count_for_storage_bus(self, bus):
if not isinstance(bus, StorageBus):
raise TypeError("bus can only be an instance of type StorageBus")
max_port_count = self._call("getMaxPortCountForStorageBus",
in_p=[bus])
return max_port_count |
convert fq to fa | def fq2fa(fq):
c = cycle([1, 2, 3, 4])
for line in fq:
n = next(c)
if n == 1:
seq = ['>%s' % (line.strip().split('@', 1)[1])]
if n == 2:
seq.append(line.strip())
yield seq | 31 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/fastq2fasta.py#L11-L22 | def query_under_condition(condition, kind='2'):
if DB_CFG['kind'] == 's':
return TabPost.select().where(
(TabPost.kind == kind) & (TabPost.valid == 1)
).order_by(
TabPost.time_update.desc()
)
return TabPost.select().where(
(TabPost.kind == kind) &
(TabPost.valid == 1) &
TabPost.extinfo.contains(condition)
).order_by(TabPost.time_update.desc()) |
Converts the return value of the wrapped function to the type of the
first positional argument, or to the type given by the 'return_type' keyword argument. | def change_return_type(f):
@wraps(f)
def wrapper(*args, **kwargs):
if kwargs.has_key('return_type'):
return_type = kwargs['return_type']
kwargs.pop('return_type')
return return_type(f(*args, **kwargs))
elif len(args) > 0:
return_type = type(args[0])
return return_type(f(*args, **kwargs))
else:
return f(*args, **kwargs)
return wrapper | 32 | https://github.com/elbow-jason/Uno-deprecated/blob/4ad07d7b84e5b6e3e2b2c89db69448906f24b4e4/uno/decorators.py#L11-L27 | def _timing_representation(message):
s = _encode_to_binary_string(message, on="=", off=".")
N = len(s)
s += '\n' + _numbers_decades(N)
s += '\n' + _numbers_units(N)
s += '\n'
s += '\n' + _timing_char(message)
return s |
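A usage sketch for the `change_return_type` decorator row above; the decorated function and values are invented.

```python
@change_return_type
def combine(a, b):
    # Returns a list; the decorator casts it to the type of the first arg,
    # or to an explicit return_type keyword if one is supplied.
    return list(a) + list(b)

print(combine((1, 2), [3]))                   # -> (1, 2, 3), a tuple
print(combine([1, 2], [3], return_type=set))  # -> set([1, 2, 3])
```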
Converts all positional args to 'set' type via the setify function. | def convert_args_to_sets(f):
@wraps(f)
def wrapper(*args, **kwargs):
args = (setify(x) for x in args)
return f(*args, **kwargs)
return wrapper | 33 | https://github.com/elbow-jason/Uno-deprecated/blob/4ad07d7b84e5b6e3e2b2c89db69448906f24b4e4/uno/decorators.py#L30-L38 | def list_publications():
publications = search_publications(
DBPublication(is_public=True)
)
return SimpleTemplate(INDEX_TEMPLATE).render(
publications=publications,
compose_path=web_tools.compose_path,
delimiter=":",
) |
Create the entry objects from the retrieved page.
:param laman: The response page returned by the online KBBI.
:type laman: Response | def _init_entri(self, laman):
sup = BeautifulSoup(laman.text, 'html.parser')
estr = ''
for label in sup.find('hr').next_siblings:
if label.name == 'hr':
self.entri.append(Entri(estr))
break
if label.name == 'h2':
if estr:
self.entri.append(Entri(estr))
estr = ''
estr += str(label).strip() | 34 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L46-L63 | def diff_text(candidate_config=None,
candidate_path=None,
running_config=None,
running_path=None,
saltenv='base'):
candidate_text = clean(config=candidate_config,
path=candidate_path,
saltenv=saltenv)
running_text = clean(config=running_config,
path=running_path,
saltenv=saltenv)
return _get_diff_text(running_text, candidate_text) |
Process the root words contained in the entry name.
:param dasar: ResultSet for HTML labels with class="rootword"
:type dasar: ResultSet | def _init_kata_dasar(self, dasar):
for tiap in dasar:
kata = tiap.find('a')
dasar_no = kata.find('sup')
kata = ambil_teks_dalam_label(kata)
self.kata_dasar.append(
kata + ' [{}]'.format(dasar_no.text.strip()) if dasar_no else kata
) | 35 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L126-L139 | def runWizard( self ):
plugin = self.currentPlugin()
if ( plugin and plugin.runWizard(self) ):
self.accept() |
Return the serialized form of this Entri object.
:returns: Dictionary containing the serialization result
:rtype: dict | def serialisasi(self):
return {
"nama": self.nama,
"nomor": self.nomor,
"kata_dasar": self.kata_dasar,
"pelafalan": self.pelafalan,
"bentuk_tidak_baku": self.bentuk_tidak_baku,
"varian": self.varian,
"makna": [makna.serialisasi() for makna in self.makna]
} | 36 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L141-L156 | def escape_for_cmd_exe(arg):
meta_chars = '()%!^"<>&|'
meta_re = re.compile('(' + '|'.join(re.escape(char) for char in list(meta_chars)) + ')')
meta_map = {char: "^{0}".format(char) for char in meta_chars}
def escape_meta_chars(m):
char = m.group(1)
return meta_map[char]
return meta_re.sub(escape_meta_chars, arg) |
Return the string representation of all the meanings of this entry.
:returns: String representation of the meanings
:rtype: str | def _makna(self):
if len(self.makna) > 1:
return '\n'.join(
str(i) + ". " + str(makna)
for i, makna in enumerate(self.makna, 1)
)
return str(self.makna[0]) | 37 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L158-L170 | def controlMsg(self, requestType, request, buffer, value = 0, index = 0, timeout = 100):
return self.dev.ctrl_transfer(
requestType,
request,
wValue = value,
wIndex = index,
data_or_wLength = buffer,
timeout = timeout) |
Return the string representation of this entry's name.
:returns: String representation of the entry name
:rtype: str | def _nama(self):
hasil = self.nama
if self.nomor:
hasil += " [{}]".format(self.nomor)
if self.kata_dasar:
hasil = " » ".join(self.kata_dasar) + " » " + hasil
return hasil | 38 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L172-L184 | def _setup(self):
self.log.info("Adding reader to prepare to receive.")
self.loop.add_reader(self.dev.fd, self.read)
self.log.info("Flushing the RFXtrx buffer.")
self.flushSerialInput()
self.log.info("Writing the reset packet to the RFXtrx. (blocking)")
yield from self.sendRESET()
self.log.info("Wating 0.4s")
yield from asyncio.sleep(0.4)
self.log.info("Write the status packet (blocking)")
yield from self.sendSTATUS()
# TODO receive status response, compare it with the needed MODE and
# request a new MODE if required. Currently MODE is always sent.
self.log.info("Adding mode packet to the write queue (blocking)")
yield from self.sendMODE() |
Return the string representation of this entry's variants.
Can be used for both "Varian" (variants) and "Bentuk tidak baku" (nonstandard forms).
:param varian: List of nonstandard forms or variants
:type varian: list
:returns: String representation of the variants or nonstandard forms
:rtype: str | def _varian(self, varian):
if varian == self.bentuk_tidak_baku:
nama = "Bentuk tidak baku"
elif varian == self.varian:
nama = "Varian"
else:
return ''
return nama + ': ' + ', '.join(varian) | 39 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L186-L202 | def compile_all():
# print("Compiling for Qt: style.qrc -> style.rcc")
# os.system("rcc style.qrc -o style.rcc")
print("Compiling for PyQt4: style.qrc -> pyqt_style_rc.py")
os.system("pyrcc4 -py3 style.qrc -o pyqt_style_rc.py")
print("Compiling for PyQt5: style.qrc -> pyqt5_style_rc.py")
os.system("pyrcc5 style.qrc -o pyqt5_style_rc.py")
print("Compiling for PySide: style.qrc -> pyside_style_rc.py")
os.system("pyside-rcc -py3 style.qrc -o pyside_style_rc.py") |
Process the word classes contained in the meaning.
:param makna_label: BeautifulSoup object for the meaning to be processed.
:type makna_label: BeautifulSoup | def _init_kelas(self, makna_label):
kelas = makna_label.find(color='red')
lain = makna_label.find(color='darkgreen')
info = makna_label.find(color='green')
if kelas:
kelas = kelas.find_all('span')
if lain:
self.kelas = {lain.text.strip(): lain['title'].strip()}
self.submakna = lain.next_sibling.strip()
self.submakna += ' ' + makna_label.find(color='grey').text.strip()
else:
self.kelas = {
k.text.strip(): k['title'].strip() for k in kelas
} if kelas else {}
self.info = info.text.strip() if info else '' | 40 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L239-L259 | def sync(self, since=None, timeout_ms=30000, filter=None,
full_state=None, set_presence=None):
request = {
# non-integer timeouts appear to cause issues
"timeout": int(timeout_ms)
}
if since:
request["since"] = since
if filter:
request["filter"] = filter
if full_state:
request["full_state"] = json.dumps(full_state)
if set_presence:
request["set_presence"] = set_presence
return self._send("GET", "/sync", query_params=request,
api_path=MATRIX_V2_API_PATH) |
Process the usage examples contained in the meaning.
:param makna_label: BeautifulSoup object for the meaning to be processed.
:type makna_label: BeautifulSoup | def _init_contoh(self, makna_label):
indeks = makna_label.text.find(': ')
if indeks != -1:
contoh = makna_label.text[indeks + 2:].strip()
self.contoh = contoh.split('; ')
else:
self.contoh = [] | 41 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L261-L273 | def delete_events(self, event_collection, timeframe=None, timezone=None, filters=None):
params = self.get_params(timeframe=timeframe, timezone=timezone, filters=filters)
return self.api.delete_events(event_collection, params) |
Return the serialized form of this Makna object.
:returns: Dictionary containing the serialization result
:rtype: dict | def serialisasi(self):
return {
"kelas": self.kelas,
"submakna": self.submakna,
"info": self.info,
"contoh": self.contoh
} | 42 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L275-L287 | def diff_fromDelta(self, text1, delta):
diffs = []
pointer = 0 # Cursor in text1
tokens = delta.split("\t")
for token in tokens:
if token == "":
# Blank tokens are ok (from a trailing \t).
continue
# Each token begins with a one character parameter which specifies the
# operation of this token (delete, insert, equality).
param = token[1:]
if token[0] == "+":
param = urllib.parse.unquote(param)
diffs.append((self.DIFF_INSERT, param))
elif token[0] == "-" or token[0] == "=":
try:
n = int(param)
except ValueError:
raise ValueError("Invalid number in diff_fromDelta: " + param)
if n < 0:
raise ValueError("Negative number in diff_fromDelta: " + param)
text = text1[pointer : pointer + n]
pointer += n
if token[0] == "=":
diffs.append((self.DIFF_EQUAL, text))
else:
diffs.append((self.DIFF_DELETE, text))
else:
# Anything else is an error.
raise ValueError("Invalid diff operation in diff_fromDelta: " +
token[0])
if pointer != len(text1):
raise ValueError(
"Delta length (%d) does not equal source text length (%d)." %
(pointer, len(text1)))
return diffs |
Build sphinx documentation.
:rtype: int
:return: subprocess.call return code
:param `bootstrap_py.control.PackageData` pkg_data: package meta data
:param str projectdir: project root directory | def build_sphinx(pkg_data, projectdir):
try:
version, _minor_version = pkg_data.version.rsplit('.', 1)
except ValueError:
version = pkg_data.version
args = ' '.join(('sphinx-quickstart',
'--sep',
'-q',
'-p "{name}"',
'-a "{author}"',
'-v "{version}"',
'-r "{release}"',
'-l en',
'--suffix=.rst',
'--master=index',
'--ext-autodoc',
'--ext-viewcode',
'--makefile',
'{projectdir}')).format(name=pkg_data.name,
author=pkg_data.author,
version=version,
release=pkg_data.version,
projectdir=projectdir)
if subprocess.call(shlex.split(args)) == 0:
_touch_gitkeep(projectdir) | 43 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/docs.py#L8-L40 | def cli(env, volume_id):
file_manager = SoftLayer.FileStorageManager(env.client)
snapshot_schedules = file_manager.list_volume_schedules(volume_id)
table = formatting.Table(['id',
'active',
'type',
'replication',
'date_created',
'minute',
'hour',
'day',
'week',
'day_of_week',
'date_of_month',
'month_of_year',
'maximum_snapshots'])
for schedule in snapshot_schedules:
if 'REPLICATION' in schedule['type']['keyname']:
replication = '*'
else:
replication = formatting.blank()
file_schedule_type = schedule['type']['keyname'].replace('REPLICATION_', '')
file_schedule_type = file_schedule_type.replace('SNAPSHOT_', '')
property_list = ['MINUTE', 'HOUR', 'DAY', 'WEEK',
'DAY_OF_WEEK', 'DAY_OF_MONTH',
'MONTH_OF_YEAR', 'SNAPSHOT_LIMIT']
schedule_properties = []
for prop_key in property_list:
item = formatting.blank()
for schedule_property in schedule.get('properties', []):
if schedule_property['type']['keyname'] == prop_key:
if schedule_property['value'] == '-1':
item = '*'
else:
item = schedule_property['value']
break
schedule_properties.append(item)
table_row = [
schedule['id'],
'*' if schedule.get('active', '') else '',
file_schedule_type,
replication,
schedule.get('createDate', '')
]
table_row.extend(schedule_properties)
table.add_row(table_row)
env.fout(table) |
make bowtie db | def bowtiedb(fa, keepDB):
btdir = '%s/bt2' % (os.getcwd())
# make directory for
if not os.path.exists(btdir):
os.mkdir(btdir)
btdb = '%s/%s' % (btdir, fa.rsplit('/', 1)[-1])
if keepDB is True:
if os.path.exists('%s.1.bt2' % (btdb)):
return btdb
p = subprocess.Popen('bowtie2-build -q %s %s' \
% (fa, btdb), shell = True)
p.communicate()
return btdb | 44 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/crossmap.py#L16-L31 | def parse_resource_data_entry(self, rva):
try:
# If the RVA is invalid all would blow up. Some EXEs seem to be
# specially nasty and have an invalid RVA.
data = self.get_data(rva, Structure(self.__IMAGE_RESOURCE_DATA_ENTRY_format__).sizeof() )
except PEFormatError as excp:
self.__warnings.append(
'Error parsing a resource directory data entry, '
'the RVA is invalid: 0x%x' % ( rva ) )
return None
data_entry = self.__unpack_data__(
self.__IMAGE_RESOURCE_DATA_ENTRY_format__, data,
file_offset = self.get_offset_from_rva(rva) )
return data_entry |
generate bowtie2 command | def bowtie(sam, btd, f, r, u, opt, no_shrink, threads):
bt2 = 'bowtie2 -x %s -p %s ' % (btd, threads)
if f is not False:
bt2 += '-1 %s -2 %s ' % (f, r)
if u is not False:
bt2 += '-U %s ' % (u)
bt2 += opt
if no_shrink is False:
if f is False:
bt2 += ' | shrinksam -u -k %s-shrunk.sam ' % (sam)
else:
bt2 += ' | shrinksam -k %s-shrunk.sam ' % (sam)
else:
bt2 += ' > %s.sam' % (sam)
return bt2 | 45 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/crossmap.py#L33-L50 | def format_string(self, s, args, kwargs):
if isinstance(s, Markup):
formatter = SandboxedEscapeFormatter(self, s.escape)
else:
formatter = SandboxedFormatter(self)
kwargs = _MagicFormatMapping(args, kwargs)
rv = formatter.vformat(s, args, kwargs)
return type(s)(rv) |
map all read sets against all fasta files | def crossmap(fas, reads, options, no_shrink, keepDB, threads, cluster, nodes):
if cluster is True:
threads = '48'
btc = []
for fa in fas:
btd = bowtiedb(fa, keepDB)
F, R, U = reads
if F is not False:
if U is False:
u = False
for i, f in enumerate(F):
r = R[i]
if U is not False:
u = U[i]
sam = '%s/%s-vs-%s' % (os.getcwd(), \
fa.rsplit('/', 1)[-1], f.rsplit('/', 1)[-1].rsplit('.', 3)[0])
btc.append(bowtie(sam, btd, f, r, u, options, no_shrink, threads))
else:
f = False
r = False
for u in U:
sam = '%s/%s-vs-%s' % (os.getcwd(), \
fa.rsplit('/', 1)[-1], u.rsplit('/', 1)[-1].rsplit('.', 3)[0])
btc.append(bowtie(sam, btd, f, r, u, options, no_shrink, threads))
if cluster is False:
for i in btc:
p = subprocess.Popen(i, shell = True)
p.communicate()
else:
ID = ''.join(random.choice([str(i) for i in range(0, 9)]) for _ in range(5))
for node, commands in enumerate(chunks(btc, nodes), 1):
bs = open('%s/crossmap-qsub.%s.%s.sh' % (os.getcwd(), ID, node), 'w')
print('\n'.join(commands), file=bs)
bs.close()
p = subprocess.Popen(\
'qsub -V -N crossmap %s' \
% (bs.name), \
shell = True)
p.communicate() | 46 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/crossmap.py#L55-L96 | def with_division(self, division):
if division is None:
division = ''
division = slugify(division)
self._validate_division(division)
self.division = division
return self |
Returns a connection object from the router given ``args``.
Useful in cases where a connection cannot be automatically determined
during all steps of the process. An example of this would be
Redis pipelines. | def get_conn(self, *args, **kwargs):
connections = self.__connections_for('get_conn', args=args, kwargs=kwargs)
if len(connections) is 1:
return connections[0]
else:
return connections | 47 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/base.py#L100-L113 | def write_comment(self, comment):
self._FITS.write_comment(self._ext+1, str(comment)) |
return the non-direct init if the direct algorithm has been selected. | def __get_nondirect_init(self, init):
crc = init
for i in range(self.Width):
bit = crc & 0x01
if bit:
crc^= self.Poly
crc >>= 1
if bit:
crc |= self.MSB_Mask
return crc & self.Mask | 48 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L98-L110 | def from_bytes(self, string):
msg = srsly.msgpack_loads(gzip.decompress(string))
self.attrs = msg["attrs"]
self.strings = set(msg["strings"])
lengths = numpy.fromstring(msg["lengths"], dtype="int32")
flat_spaces = numpy.fromstring(msg["spaces"], dtype=bool)
flat_tokens = numpy.fromstring(msg["tokens"], dtype="uint64")
shape = (flat_tokens.size // len(self.attrs), len(self.attrs))
flat_tokens = flat_tokens.reshape(shape)
flat_spaces = flat_spaces.reshape((flat_spaces.size, 1))
self.tokens = NumpyOps().unflatten(flat_tokens, lengths)
self.spaces = NumpyOps().unflatten(flat_spaces, lengths)
for tokens in self.tokens:
assert len(tokens.shape) == 2, tokens.shape
return self |
reflect a data word, i.e. reverts the bit order. | def reflect(self, data, width):
x = data & 0x01
for i in range(width - 1):
data >>= 1
x = (x << 1) | (data & 0x01)
return x | 49 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L115-L123 | def linkify_templates(self):
self.hosts.linkify_templates()
self.contacts.linkify_templates()
self.services.linkify_templates()
self.servicedependencies.linkify_templates()
self.hostdependencies.linkify_templates()
self.timeperiods.linkify_templates()
self.hostsextinfo.linkify_templates()
self.servicesextinfo.linkify_templates()
self.escalations.linkify_templates()
# But also old srv and host escalations
self.serviceescalations.linkify_templates()
self.hostescalations.linkify_templates() |
Classic simple and slow CRC implementation. This function iterates bit
by bit over the augmented input message and returns the calculated CRC
value at the end. | def bit_by_bit(self, in_data):
# If the input data is a string, convert to bytes.
if isinstance(in_data, str):
in_data = [ord(c) for c in in_data]
register = self.NonDirectInit
for octet in in_data:
if self.ReflectIn:
octet = self.reflect(octet, 8)
for i in range(8):
topbit = register & self.MSB_Mask
register = ((register << 1) & self.Mask) | ((octet >> (7 - i)) & 0x01)
if topbit:
register ^= self.Poly
for i in range(self.Width):
topbit = register & self.MSB_Mask
register = ((register << 1) & self.Mask)
if topbit:
register ^= self.Poly
if self.ReflectOut:
register = self.reflect(register, self.Width)
return register ^ self.XorOut | 50 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L128-L156 | def create_cvmfs_persistent_volume_claim(cvmfs_volume):
from kubernetes.client.rest import ApiException
from reana_commons.k8s.api_client import current_k8s_corev1_api_client
try:
current_k8s_corev1_api_client.\
create_namespaced_persistent_volume_claim(
"default",
render_cvmfs_pvc(cvmfs_volume)
)
except ApiException as e:
if e.status != 409:
raise e |
This function generates the CRC table used for the table_driven CRC
algorithm. The Python version cannot handle tables of an index width
other than 8. See the generated C code for tables with different sizes
instead. | def gen_table(self):
table_length = 1 << self.TableIdxWidth
tbl = [0] * table_length
for i in range(table_length):
register = i
if self.ReflectIn:
register = self.reflect(register, self.TableIdxWidth)
register = register << (self.Width - self.TableIdxWidth + self.CrcShift)
for j in range(self.TableIdxWidth):
if register & (self.MSB_Mask << self.CrcShift) != 0:
register = (register << 1) ^ (self.Poly << self.CrcShift)
else:
register = (register << 1)
if self.ReflectIn:
register = self.reflect(register >> self.CrcShift, self.Width) << self.CrcShift
tbl[i] = register & (self.Mask << self.CrcShift)
return tbl | 51 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L190-L212 | def cancel_broadcast(self, broadcast_guid):
subpath = 'broadcasts/%s/update' % broadcast_guid
broadcast = {'status': 'CANCELED'}
bcast_dict = self._call(subpath, method='POST', data=broadcast,
content_type='application/json')
return bcast_dict |
The Standard table_driven CRC algorithm. | def table_driven(self, in_data):
# If the input data is a string, convert to bytes.
if isinstance(in_data, str):
in_data = [ord(c) for c in in_data]
tbl = self.gen_table()
register = self.DirectInit << self.CrcShift
if not self.ReflectIn:
for octet in in_data:
tblidx = ((register >> (self.Width - self.TableIdxWidth + self.CrcShift)) ^ octet) & 0xff
register = ((register << (self.TableIdxWidth - self.CrcShift)) ^ tbl[tblidx]) & (self.Mask << self.CrcShift)
register = register >> self.CrcShift
else:
register = self.reflect(register, self.Width + self.CrcShift) << self.CrcShift
for octet in in_data:
tblidx = ((register >> self.CrcShift) ^ octet) & 0xff
register = ((register >> self.TableIdxWidth) ^ tbl[tblidx]) & (self.Mask << self.CrcShift)
register = self.reflect(register, self.Width + self.CrcShift) & self.Mask
if self.ReflectOut:
register = self.reflect(register, self.Width)
return register ^ self.XorOut | 52 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L217-L242 | def _prune_subdirs(dir_: str) -> None:
for logdir in [path.join(dir_, f) for f in listdir(dir_) if is_train_dir(path.join(dir_, f))]:
for subdir in [path.join(logdir, f) for f in listdir(logdir) if path.isdir(path.join(logdir, f))]:
_safe_rmtree(subdir) |
parse masked sequence into non-masked and masked regions | def parse_masked(seq, min_len):
nm, masked = [], [[]]
prev = None
for base in seq[1]:
if base.isupper():
nm.append(base)
if masked != [[]] and len(masked[-1]) < min_len:
nm.extend(masked[-1])
del masked[-1]
prev = False
elif base.islower():
if prev is False:
masked.append([])
masked[-1].append(base)
prev = True
return nm, masked | 53 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/strip_masked.py#L13-L31 | def handle_event(self, event):
subscription_id = event.subscription_id
if subscription_id in self._subscriptions:
# FIXME: [1] should be a constant
handler = self._subscriptions[subscription_id][SUBSCRIPTION_CALLBACK]
WampSubscriptionWrapper(self,handler,event).start() |
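A minimal usage sketch for parse_masked from the row above, assuming the function is in scope; the sequence is passed as a [header, sequence] pair and the header value is arbitrary here:

seq = ['>contig_1', 'ACGTacgtACGTaACGT']
nm, masked = parse_masked(seq, 3)
# nm collects the unmasked bases (masked runs shorter than min_len are merged back in);
# masked collects the lower-case runs, with empty placeholder lists filtered out downstream
print(''.join(nm), [''.join(m) for m in masked if m != []])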
remove masked regions from fasta file as long as
they are longer than min_len | def strip_masked(fasta, min_len, print_masked):
for seq in parse_fasta(fasta):
nm, masked = parse_masked(seq, min_len)
nm = ['%s removed_masked >=%s' % (seq[0], min_len), ''.join(nm)]
yield [0, nm]
if print_masked is True:
for i, m in enumerate([i for i in masked if i != []], 1):
m = ['%s insertion:%s' % (seq[0], i), ''.join(m)]
yield [1, m] | 54 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/strip_masked.py#L33-L45 | def websocket_connect(self, message):
self.session_id = self.scope['url_route']['kwargs']['subscriber_id']
super().websocket_connect(message)
# Create new subscriber object.
Subscriber.objects.get_or_create(session_id=self.session_id) |
Return arcsine transformed relative abundance from a BIOM format file.
:type biomfile: BIOM format file
:param biomfile: BIOM format file used to obtain relative abundances for each OTU in
a SampleID, which are used as node sizes in network plots.
:type return: Dictionary of dictionaries.
:return: Dictionary keyed on SampleID whose value is a dictionary keyed on OTU Name
whose value is the arc sine transformed relative abundance value for that
SampleID-OTU Name pair. | def get_relative_abundance(biomfile):
biomf = biom.load_table(biomfile)
norm_biomf = biomf.norm(inplace=False)
rel_abd = {}
for sid in norm_biomf.ids():
rel_abd[sid] = {}
for otuid in norm_biomf.ids("observation"):
otuname = oc.otu_name(norm_biomf.metadata(otuid, axis="observation")["taxonomy"])
otuname = " ".join(otuname.split("_"))
abd = norm_biomf.get_value_by_ids(otuid, sid)
rel_abd[sid][otuname] = abd
ast_rel_abd = bc.arcsine_sqrt_transform(rel_abd)
return ast_rel_abd | 55 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/network_plots_gephi.py#L33-L57 | def update_context(self,
context,
update_mask=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None):
# Wrap the transport method to add retry and timeout logic.
if 'update_context' not in self._inner_api_calls:
self._inner_api_calls[
'update_context'] = google.api_core.gapic_v1.method.wrap_method(
self.transport.update_context,
default_retry=self._method_configs['UpdateContext'].retry,
default_timeout=self._method_configs['UpdateContext']
.timeout,
client_info=self._client_info,
)
request = context_pb2.UpdateContextRequest(
context=context,
update_mask=update_mask,
)
return self._inner_api_calls['update_context'](
request, retry=retry, timeout=timeout, metadata=metadata) |
Find an OTU ID in a Newick-format tree.
Return the starting position of the ID or None if not found. | def find_otu(otuid, tree):
for m in re.finditer(otuid, tree):
before, after = tree[m.start()-1], tree[m.start()+len(otuid)]
if before in ["(", ",", ")"] and after in [":", ";"]:
return m.start()
return None | 56 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/iTol.py#L17-L26 | def set_keyspace(self, keyspace):
self.keyspace = keyspace
dfrds = []
for p in self._protos:
dfrds.append(p.submitRequest(ManagedThriftRequest(
'set_keyspace', keyspace)))
return defer.gatherResults(dfrds) |
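A short usage sketch for find_otu above, using a made-up Newick string and assuming the function (and its re import) is in scope; the OTU ID only matches when it is delimited like a proper tree label:

tree = '(OTU_1:0.1,(OTU_2:0.05,OTU_3:0.2):0.3);'
pos = find_otu('OTU_2', tree)
# pos is the character offset of 'OTU_2' in the tree string, or None if the ID
# never appears framed by '(' ',' ')' on the left and ':' ';' on the right
print(pos)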
Replace the OTU ids in the Newick phylogenetic tree format with truncated
OTU names | def newick_replace_otuids(tree, biomf):
for val, id_, md in biomf.iter(axis="observation"):
otu_loc = find_otu(id_, tree)
if otu_loc is not None:
tree = tree[:otu_loc] + \
oc.otu_name(md["taxonomy"]) + \
tree[otu_loc + len(id_):]
return tree | 57 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/iTol.py#L29-L40 | def led_changed(self, addr, group, val):
_LOGGER.debug("Button %d LED changed from %d to %d",
self._group, self._value, val)
led_on = bool(val)
if led_on != bool(self._value):
self._update_subscribers(int(led_on)) |
return genome info for choosing representative
if ggKbase table provided - choose rep based on SCGs and genome length
- priority for most SCGs - extra SCGs, then largest genome
otherwise, based on largest genome | def genome_info(genome, info):
try:
scg = info['#SCGs']
dups = info['#SCG duplicates']
length = info['genome size (bp)']
return [scg - dups, length, genome]
except:
return [False, False, info['genome size (bp)'], genome] | 58 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L97-L112 | def reset_env(exclude=[]):
if os.getenv(env.INITED):
wandb_keys = [key for key in os.environ.keys() if key.startswith(
'WANDB_') and key not in exclude]
for key in wandb_keys:
del os.environ[key]
return True
else:
return False |
choose representative genome and
print cluster information
*if ggKbase table is provided, use SCG info to choose best genome | def print_clusters(fastas, info, ANI):
header = ['#cluster', 'num. genomes', 'rep.', 'genome', '#SCGs', '#SCG duplicates', \
'genome size (bp)', 'fragments', 'list']
yield header
in_cluster = []
for cluster_num, cluster in enumerate(connected_components(ANI)):
cluster = sorted([genome_info(genome, info[genome]) \
for genome in cluster], \
key = lambda x: x[0:], reverse = True)
rep = cluster[0][-1]
cluster = [i[-1] for i in cluster]
size = len(cluster)
for genome in cluster:
in_cluster.append(genome)
try:
stats = [size, rep, genome, \
info[genome]['#SCGs'], info[genome]['#SCG duplicates'], \
info[genome]['genome size (bp)'], info[genome]['# contigs'], cluster]
except:
stats = [size, rep, genome, \
'n/a', 'n/a', \
info[genome]['genome size (bp)'], info[genome]['# contigs'], cluster]
if rep == genome:
stats = ['*%s' % (cluster_num)] + stats
else:
stats = [cluster_num] + stats
yield stats
# print singletons
try:
start = cluster_num + 1
except:
start = 0
fastas = set([i.rsplit('.', 1)[0].rsplit('/', 1)[-1].rsplit('.contigs')[0] for i in fastas])
for cluster_num, genome in \
enumerate(fastas.difference(set(in_cluster)), start):
try:
stats = ['*%s' % (cluster_num), 1, genome, genome, \
info[genome]['#SCGs'], info[genome]['#SCG duplicates'], \
info[genome]['genome size (bp)'], info[genome]['# contigs'], [genome]]
except:
stats = ['*%s' % (cluster_num), 1, genome, genome, \
'n/a', 'n/a', \
info[genome]['genome size (bp)'], info[genome]['# contigs'], [genome]]
yield stats | 59 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L114-L163 | def _apply_index_days(self, i, roll):
nanos = (roll % 2) * Timedelta(days=self.day_of_month - 1).value
return i + nanos.astype('timedelta64[ns]') |
convert ggKbase genome info tables to dictionary | def parse_ggKbase_tables(tables, id_type):
g2info = {}
for table in tables:
for line in open(table):
line = line.strip().split('\t')
if line[0].startswith('name'):
header = line
header[4] = 'genome size (bp)'
header[12] = '#SCGs'
header[13] = '#SCG duplicates'
continue
name, code, info = line[0], line[1], line
info = [to_int(i) for i in info]
if id_type is False: # try to use name and code ID
if 'UNK' in code or 'unknown' in code:
code = name
if (name != code) and (name and code in g2info):
print('# duplicate name or code in table(s)', file=sys.stderr)
print('# %s and/or %s' % (name, code), file=sys.stderr)
exit()
if name not in g2info:
g2info[name] = {item:stat for item, stat in zip(header, info)}
if code not in g2info:
g2info[code] = {item:stat for item, stat in zip(header, info)}
else:
if id_type == 'name':
ID = name
elif id_type == 'code':
ID = code
else:
print('# specify name or code column using -id', file=sys.stderr)
exit()
ID = ID.replace(' ', '')
g2info[ID] = {item:stat for item, stat in zip(header, info)}
if g2info[ID]['genome size (bp)'] == '':
g2info[ID]['genome size (bp)'] = 0
return g2info | 60 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L174-L213 | def _cleanSessions(self):
tooOld = extime.Time() - timedelta(seconds=PERSISTENT_SESSION_LIFETIME)
self.store.query(
PersistentSession,
PersistentSession.lastUsed < tooOld).deleteFromStore()
self._lastClean = self._clock.seconds() |
convert checkM genome info tables to dictionary | def parse_checkM_tables(tables):
g2info = {}
for table in tables:
for line in open(table):
line = line.strip().split('\t')
if line[0].startswith('Bin Id'):
header = line
header[8] = 'genome size (bp)'
header[5] = '#SCGs'
header[6] = '#SCG duplicates'
continue
ID, info = line[0], line
info = [to_int(i) for i in info]
ID = ID.replace(' ', '')
g2info[ID] = {item:stat for item, stat in zip(header, info)}
if g2info[ID]['genome size (bp)'] == '':
g2info[ID]['genome size (bp)'] = 0
return g2info | 61 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L215-L235 | def slanted_triangular(max_rate, num_steps, cut_frac=0.1, ratio=32, decay=1, t=0.0):
cut = int(num_steps * cut_frac)
while True:
t += 1
if t < cut:
p = t / cut
else:
p = 1 - ((t - cut) / (cut * (1 / cut_frac - 1)))
learn_rate = max_rate * (1 + p * (ratio - 1)) * (1 / ratio)
yield learn_rate |
get genome lengths | def genome_lengths(fastas, info):
if info is False:
info = {}
for genome in fastas:
name = genome.rsplit('.', 1)[0].rsplit('/', 1)[-1].rsplit('.contigs')[0]
if name in info:
continue
length = 0
fragments = 0
for seq in parse_fasta(genome):
length += len(seq[1])
fragments += 1
info[name] = {'genome size (bp)':length, '# contigs':fragments}
return info | 62 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L237-L253 | def save_reg(data):
reg_dir = _reg_dir()
regfile = os.path.join(reg_dir, 'register')
try:
if not os.path.exists(reg_dir):
os.makedirs(reg_dir)
except OSError as exc:
if exc.errno == errno.EEXIST:
pass
else:
raise
try:
with salt.utils.files.fopen(regfile, 'a') as fh_:
salt.utils.msgpack.dump(data, fh_)
except Exception:
log.error('Could not write to msgpack file %s', __opts__['outdir'])
raise |
Returns a list of db keys to route the given call to.
:param attr: Name of attribute being called on the connection.
:param args: List of arguments being passed to ``attr``.
:param kwargs: Dictionary of keyword arguments being passed to ``attr``.
>>> redis = Cluster(router=BaseRouter)
>>> router = redis.router
>>> router.get_dbs('incr', args=('key name', 1))
[0,1,2] | def get_dbs(self, attr, args, kwargs, **fkwargs):
if not self._ready:
if not self.setup_router(args=args, kwargs=kwargs, **fkwargs):
raise self.UnableToSetupRouter()
retval = self._pre_routing(attr=attr, args=args, kwargs=kwargs, **fkwargs)
if retval is not None:
args, kwargs = retval
if not (args or kwargs):
return self.cluster.hosts.keys()
try:
db_nums = self._route(attr=attr, args=args, kwargs=kwargs, **fkwargs)
except Exception as e:
self._handle_exception(e)
db_nums = []
return self._post_routing(attr=attr, db_nums=db_nums, args=args, kwargs=kwargs, **fkwargs) | 63 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L50-L81 | def fullLoad(self):
self._parseDirectories(self.ntHeaders.optionalHeader.dataDirectory, self.PE_TYPE) |
Call method to perform any setup | def setup_router(self, args, kwargs, **fkwargs):
self._ready = self._setup_router(args=args, kwargs=kwargs, **fkwargs)
return self._ready | 64 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L87-L93 | def world_series_logs():
file_name = 'GLWS.TXT'
z = get_zip_file(world_series_url)
data = pd.read_csv(z.open(file_name), header=None, sep=',', quotechar='"')
data.columns = gamelog_columns
return data |
Perform routing and return db_nums | def _route(self, attr, args, kwargs, **fkwargs):
return self.cluster.hosts.keys() | 65 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L111-L115 | def static_stability(pressure, temperature, axis=0):
theta = potential_temperature(pressure, temperature)
return - mpconsts.Rd * temperature / pressure * first_derivative(np.log(theta / units.K),
x=pressure, axis=axis) |
Iterates through all connections which were previously listed as unavailable
and marks any that have expired their retry_timeout as being up. | def check_down_connections(self):
now = time.time()
for db_num, marked_down_at in self._down_connections.items():
if marked_down_at + self.retry_timeout <= now:
self.mark_connection_up(db_num) | 66 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L175-L184 | def _CalculateDigestHash(self, file_entry, data_stream_name):
file_object = file_entry.GetFileObject(data_stream_name=data_stream_name)
if not file_object:
return None
try:
file_object.seek(0, os.SEEK_SET)
hasher_object = hashers_manager.HashersManager.GetHasher('sha256')
data = file_object.read(self._READ_BUFFER_SIZE)
while data:
hasher_object.Update(data)
data = file_object.read(self._READ_BUFFER_SIZE)
finally:
file_object.close()
return hasher_object.GetStringDigest() |
Marks all connections which were previously listed as unavailable as being up. | def flush_down_connections(self):
self._get_db_attempts = 0
for db_num in self._down_connections.keys():
self.mark_connection_up(db_num) | 67 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L186-L192 | def _CalculateDigestHash(self, file_entry, data_stream_name):
file_object = file_entry.GetFileObject(data_stream_name=data_stream_name)
if not file_object:
return None
try:
file_object.seek(0, os.SEEK_SET)
hasher_object = hashers_manager.HashersManager.GetHasher('sha256')
data = file_object.read(self._READ_BUFFER_SIZE)
while data:
hasher_object.Update(data)
data = file_object.read(self._READ_BUFFER_SIZE)
finally:
file_object.close()
return hasher_object.GetStringDigest() |
Compute standby power
Parameters
----------
df : pandas.DataFrame or pandas.Series
Electricity Power
resolution : str, default='d'
Resolution of the computation. Data will be resampled to this resolution (as mean) before computation
of the minimum.
String that can be parsed by the pandas resample function, example ='h', '15min', '6h'
time_window : tuple with start-hour and end-hour, default=None
Specify the start-time and end-time for the analysis.
Only data within this time window will be considered.
Both times have to be specified as string ('01:00', '06:30') or as datetime.time() objects
Returns
-------
df : pandas.Series with DateTimeIndex in the given resolution | def standby(df, resolution='24h', time_window=None):
if df.empty:
raise EmptyDataFrame()
df = pd.DataFrame(df) # if df was a pd.Series, convert to DataFrame
def parse_time(t):
if isinstance(t, numbers.Number):
return pd.Timestamp.utcfromtimestamp(t).time()
else:
return pd.Timestamp(t).time()
# first filter based on the time-window
if time_window is not None:
t_start = parse_time(time_window[0])
t_end = parse_time(time_window[1])
if t_start > t_end:
# start before midnight
df = df[(df.index.time >= t_start) | (df.index.time < t_end)]
else:
df = df[(df.index.time >= t_start) & (df.index.time < t_end)]
return df.resample(resolution).min() | 68 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L72-L115 | def retract(self):
if lib.EnvRetract(self._env, self._fact) != 1:
raise CLIPSError(self._env) |
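An illustrative usage sketch of the standby function above, assuming the function and its module-level imports (pandas, numbers) are available; the power series is synthetic:

import numpy as np
import pandas as pd

# one week of 15-minute power readings (watts) with some random variation
index = pd.date_range('2022-01-01', periods=7 * 96, freq='15min')
power = pd.Series(100 + 50 * np.random.rand(len(index)), index=index, name='power')
# daily minimum within a night-time window, i.e. an estimate of standby power per day
p_standby = standby(power, resolution='24h', time_window=('01:00', '05:00'))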
Compute the share of the standby power in the total consumption.
Parameters
----------
df : pandas.DataFrame or pandas.Series
Power (typically electricity, can be anything)
resolution : str, default='d'
Resolution of the computation. Data will be resampled to this resolution (as mean) before computation
of the minimum.
String that can be parsed by the pandas resample function, example ='h', '15min', '6h'
time_window : tuple with start-hour and end-hour, default=None
Specify the start-time and end-time for the analysis.
Only data within this time window will be considered.
Both times have to be specified as string ('01:00', '06:30') or as datetime.time() objects
Returns
-------
fraction : float between 0-1 with the share of the standby consumption | def share_of_standby(df, resolution='24h', time_window=None):
p_sb = standby(df, resolution, time_window)
df = df.resample(resolution).mean()
p_tot = df.sum()
p_standby = p_sb.sum()
share_standby = p_standby / p_tot
res = share_standby.iloc[0]
return res | 69 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L118-L146 | def bulk_delete(handler, request):
ids = request.GET.getall('ids')
Message.delete().where(Message.id << ids).execute()
raise muffin.HTTPFound(handler.url) |
Toggle counter for gas boilers
Counts the number of times the gas consumption increases with more than 3kW
Parameters
----------
ts: Pandas Series
Gas consumption in minute resolution
Returns
-------
int | def count_peaks(ts):
on_toggles = ts.diff() > 3000
shifted = np.logical_not(on_toggles.shift(1))
result = on_toggles & shifted
count = result.sum()
return count | 70 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L149-L169 | def FindProxies():
sc = objc.SystemConfiguration()
# Get the dictionary of network proxy settings
settings = sc.dll.SCDynamicStoreCopyProxies(None)
if not settings:
return []
try:
cf_http_enabled = sc.CFDictRetrieve(settings, "kSCPropNetProxiesHTTPEnable")
if cf_http_enabled and bool(sc.CFNumToInt32(cf_http_enabled)):
# Proxy settings for HTTP are enabled
cfproxy = sc.CFDictRetrieve(settings, "kSCPropNetProxiesHTTPProxy")
cfport = sc.CFDictRetrieve(settings, "kSCPropNetProxiesHTTPPort")
if cfproxy and cfport:
proxy = sc.CFStringToPystring(cfproxy)
port = sc.CFNumToInt32(cfport)
return ["http://%s:%d/" % (proxy, port)]
cf_auto_enabled = sc.CFDictRetrieve(
settings, "kSCPropNetProxiesProxyAutoConfigEnable")
if cf_auto_enabled and bool(sc.CFNumToInt32(cf_auto_enabled)):
cfurl = sc.CFDictRetrieve(settings,
"kSCPropNetProxiesProxyAutoConfigURLString")
if cfurl:
unused_url = sc.CFStringToPystring(cfurl)
# TODO(amoser): Auto config is enabled, what is the plan here?
# Basically, all we get is the URL of a javascript file. To get the
# correct proxy for a given URL, browsers call a Javascript function
# that returns the correct proxy URL. The question is now, do we really
# want to start running downloaded js on the client?
return []
finally:
sc.dll.CFRelease(settings)
return [] |
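A small usage sketch for count_peaks above, with synthetic minute-resolution gas readings and assuming the function and its numpy import are available; the 3 kW threshold is hard-coded in the function:

import pandas as pd

gas = pd.Series([200, 220, 4000, 4100, 300, 250, 5200, 5100],
                index=pd.date_range('2022-01-01', periods=8, freq='min'))
# counts upward jumps larger than 3 kW between consecutive samples (two in this series)
n_ignitions = count_peaks(gas)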
Calculate the ratio of input vs. norm over a given interval.
Parameters
----------
ts : pandas.Series
timeseries
resolution : str, optional
interval over which to calculate the ratio
default: resolution of the input timeseries
norm : int | float, optional
denominator of the ratio
default: the maximum of the input timeseries
Returns
-------
pandas.Series | def load_factor(ts, resolution=None, norm=None):
if norm is None:
norm = ts.max()
if resolution is not None:
ts = ts.resample(rule=resolution).mean()
lf = ts / norm
return lf | 71 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L172-L199 | def inject_basic_program(self, ascii_listing):
program_start = self.cpu.memory.read_word(
self.machine_api.PROGRAM_START_ADDR
)
tokens = self.machine_api.ascii_listing2program_dump(ascii_listing)
self.cpu.memory.load(program_start, tokens)
log.critical("BASIC program injected into Memory.")
# Update the BASIC addresses:
program_end = program_start + len(tokens)
self.cpu.memory.write_word(self.machine_api.VARIABLES_START_ADDR, program_end)
self.cpu.memory.write_word(self.machine_api.ARRAY_START_ADDR, program_end)
self.cpu.memory.write_word(self.machine_api.FREE_SPACE_START_ADDR, program_end)
log.critical("BASIC addresses updated.") |
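A brief usage sketch for load_factor above, assuming pandas is available; the values are made up:

import pandas as pd

power = pd.Series([1.0, 2.0, 4.0, 3.0],
                  index=pd.date_range('2022-01-01', periods=4, freq='h'))
# resample to 2-hour means, then divide by the maximum of the original series (4.0)
lf = load_factor(power, resolution='2h')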
get top hits after sorting by column number | def top_hits(hits, num, column, reverse):
hits.sort(key = itemgetter(column), reverse = reverse)
for hit in hits[0:num]:
yield hit | 72 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L17-L23 | def weld_udf(weld_template, mapping):
weld_obj = create_empty_weld_object()
for k, v in mapping.items():
if isinstance(v, (np.ndarray, WeldObject)):
obj_id = get_weld_obj_id(weld_obj, v)
mapping.update({k: obj_id})
weld_obj.weld_code = weld_template.format(**mapping)
return weld_obj |
parse b6 output with sorting | def numBlast_sort(blast, numHits, evalueT, bitT):
header = ['#query', 'target', 'pident', 'alen', 'mismatch', 'gapopen',
'qstart', 'qend', 'tstart', 'tend', 'evalue', 'bitscore']
yield header
hmm = {h:[] for h in header}
for line in blast:
if line.startswith('#'):
continue
line = line.strip().split('\t')
# Evalue and Bitscore thresholds
line[10], line[11] = float(line[10]), float(line[11])
evalue, bit = line[10], line[11]
if evalueT is not False and evalue > evalueT:
continue
if bitT is not False and bit < bitT:
continue
for i, h in zip(line, header):
hmm[h].append(i)
hmm = pd.DataFrame(hmm)
for query, df in hmm.groupby(by = ['#query']):
df = df.sort_values(by = ['bitscore'], ascending = False)
for hit in df[header].values[0:numHits]:
yield hit | 73 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L25-L50 | def delete(self):
try:
return self._server.query('/library/sections/%s' % self.key, method=self._server._session.delete)
except BadRequest: # pragma: no cover
msg = 'Failed to delete library %s' % self.key
msg += 'You may need to allow this permission in your Plex settings.'
log.error(msg)
raise |
parse b6 output | def numBlast(blast, numHits, evalueT = False, bitT = False, sort = False):
if sort is True:
for hit in numBlast_sort(blast, numHits, evalueT, bitT):
yield hit
return
header = ['#query', 'target', 'pident', 'alen', 'mismatch', 'gapopen',
'qstart', 'qend', 'tstart', 'tend', 'evalue', 'bitscore']
yield header
prev, hits = None, []
for line in blast:
line = line.strip().split('\t')
ID = line[0]
line[10], line[11] = float(line[10]), float(line[11])
evalue, bit = line[10], line[11]
if ID != prev:
if len(hits) > 0:
# column is 1 + line index
for hit in top_hits(hits, numHits, 11, True):
yield hit
hits = []
if evalueT == False and bitT == False:
hits.append(line)
elif evalue <= evalueT and bitT == False:
hits.append(line)
elif evalue <= evalueT and bit >= bitT:
hits.append(line)
elif evalueT == False and bit >= bitT:
hits.append(line)
prev = ID
for hit in top_hits(hits, numHits, 11, True):
yield hit | 74 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L52-L85 | def CloseHandle(self):
if hasattr(self, 'handle'):
ret = vmGuestLib.VMGuestLib_CloseHandle(self.handle.value)
if ret != VMGUESTLIB_ERROR_SUCCESS: raise VMGuestLibException(ret)
del(self.handle) |
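For illustration, a hedged usage sketch of numBlast above with a tiny in-memory blast tabular (b6) input; it assumes top_hits and numBlast from the rows above are importable:

b6_lines = [
    'readA\trefX\t98.0\t100\t2\t0\t1\t100\t1\t100\t1e-50\t200.0',
    'readA\trefY\t95.0\t100\t5\t0\t1\t100\t1\t100\t1e-40\t180.0',
    'readB\trefX\t90.0\t100\t10\t0\t1\t100\t1\t100\t1e-30\t150.0',
]
# yields the header row first, then the single best hit per query ranked by bitscore
for hit in numBlast(b6_lines, 1, evalueT=1e-20):
    print('\t'.join([str(i) for i in hit]))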
parse hmm domain table output
this version is faster but does not work unless the table is sorted | def numDomtblout(domtblout, numHits, evalueT, bitT, sort):
if sort is True:
for hit in numDomtblout_sort(domtblout, numHits, evalueT, bitT):
yield hit
return
header = ['#target name', 'target accession', 'tlen',
'query name', 'query accession', 'qlen',
'full E-value', 'full score', 'full bias',
'domain #', '# domains',
'domain c-Evalue', 'domain i-Evalue', 'domain score', 'domain bias',
'hmm from', 'hmm to', 'seq from', 'seq to', 'env from', 'env to',
'acc', 'target description']
yield header
prev, hits = None, []
for line in domtblout:
if line.startswith('#'):
continue
# parse line and get description
line = line.strip().split()
desc = ' '.join(line[18:])
line = line[0:18]
line.append(desc)
# create ID based on query name and domain number
ID = line[0] + line[9]
# domain c-Evalue and domain score thresholds
line[11], line[13] = float(line[11]), float(line[13])
evalue, bitscore = line[11], line[13]
line[11], line[13] = evalue, bitscore
if ID != prev:
if len(hits) > 0:
for hit in top_hits(hits, numHits, 13, True):
yield hit
hits = []
if evalueT == False and bitT == False:
hits.append(line)
elif evalue <= evalueT and bitT == False:
hits.append(line)
elif evalue <= evalueT and bitscore >= bitT:
hits.append(line)
elif evalueT == False and bitscore >= bitT:
hits.append(line)
prev = ID
for hit in top_hits(hits, numHits, 13, True):
yield hit | 75 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L121-L168 | def account_unblock(self, id):
id = self.__unpack_id(id)
url = '/api/v1/accounts/{0}/unblock'.format(str(id))
return self.__api_request('POST', url) |
convert stockholm to fasta | def stock2fa(stock):
seqs = {}
for line in stock:
if line.startswith('#') is False and line.startswith(' ') is False and len(line) > 3:
id, seq = line.strip().split()
id = id.rsplit('/', 1)[0]
id = re.split('[0-9]\|', id, 1)[-1]
if id not in seqs:
seqs[id] = []
seqs[id].append(seq)
if line.startswith('//'):
break
return seqs | 76 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/stockholm2fa.py#L11-L26 | def put_lifecycle_configuration(Bucket,
Rules,
region=None, key=None, keyid=None, profile=None):
try:
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
if Rules is not None and isinstance(Rules, six.string_types):
Rules = salt.utils.json.loads(Rules)
conn.put_bucket_lifecycle_configuration(Bucket=Bucket, LifecycleConfiguration={'Rules': Rules})
return {'updated': True, 'name': Bucket}
except ClientError as e:
return {'updated': False, 'error': __utils__['boto3.get_error'](e)} |
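A minimal usage sketch of stock2fa above, using a few hand-written Stockholm-style lines; it assumes the function and the re import from its source file are in scope:

stock_lines = [
    '# STOCKHOLM 1.0',
    'seq1/1-8      ACGUACGU',
    'seq2/1-8      ACGUACGA',
    '//',
]
seqs = stock2fa(stock_lines)
for name, parts in seqs.items():
    print('>%s\n%s' % (name, ''.join(parts)))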
Return boolean time series following given week schedule.
Parameters
----------
index : pandas.DatetimeIndex
Datetime index
on_time : str or datetime.time
Daily opening time. Default: '09:00'
off_time : str or datetime.time
Daily closing time. Default: '17:00'
off_days : list of str
List of weekdays. Default: ['Sunday', 'Monday']
Returns
-------
pandas.Series of bool
True when on, False otherwise for given datetime index
Examples
--------
>>> import pandas as pd
>>> from opengrid.library.utils import week_schedule
>>> index = pd.date_range('20170701', '20170710', freq='H')
>>> week_schedule(index) | def week_schedule(index, on_time=None, off_time=None, off_days=None):
if on_time is None:
on_time = '9:00'
if off_time is None:
off_time = '17:00'
if off_days is None:
off_days = ['Sunday', 'Monday']
if not isinstance(on_time, datetime.time):
on_time = pd.to_datetime(on_time, format='%H:%M').time()
if not isinstance(off_time, datetime.time):
off_time = pd.to_datetime(off_time, format='%H:%M').time()
times = (index.time >= on_time) & (index.time < off_time) & (~index.weekday_name.isin(off_days))
return pd.Series(times, index=index) | 77 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/utils.py#L10-L47 | def deregister_entity_from_group(self, entity, group):
if entity in self._entities:
if entity in self._groups[group]:
self._groups[group].remove(entity)
else:
raise UnmanagedEntityError(entity) |
Draw a carpet plot of a pandas timeseries.
The carpet plot reads like a letter. Every day one line is added to the
bottom of the figure, minute for minute moving from left (morning) to right
(evening).
The color denotes the level of consumption and is scaled logarithmically.
If vmin and vmax are not provided as inputs, the minimum and maximum of the
colorbar represent the minimum and maximum of the (resampled) timeseries.
Parameters
----------
timeseries : pandas.Series
vmin, vmax : If not None, either or both of these values determine the range
of the z axis. If None, the range is given by the minimum and/or maximum
of the (resampled) timeseries.
zlabel, title : If not None, these determine the labels of z axis and/or
title. If None, the name of the timeseries is used if defined.
cmap : matplotlib.cm instance, default coolwarm
Examples
--------
>>> import numpy as np
>>> import pandas as pd
>>> from opengrid.library import plotting
>>> plt = plotting.plot_style()
>>> index = pd.date_range('2015-1-1','2015-12-31',freq='h')
>>> ser = pd.Series(np.random.normal(size=len(index)), index=index, name='abc')
>>> im = plotting.carpet(ser) | def carpet(timeseries, **kwargs):
# define optional input parameters
cmap = kwargs.pop('cmap', cm.coolwarm)
norm = kwargs.pop('norm', LogNorm())
interpolation = kwargs.pop('interpolation', 'nearest')
cblabel = kwargs.pop('zlabel', timeseries.name if timeseries.name else '')
title = kwargs.pop('title', 'carpet plot: ' + timeseries.name if timeseries.name else '')
# data preparation
if timeseries.dropna().empty:
print('skipped {} - no data'.format(title))
return
ts = timeseries.resample('15min').interpolate()
vmin = max(0.1, kwargs.pop('vmin', ts[ts > 0].min()))
vmax = max(vmin, kwargs.pop('vmax', ts.quantile(.999)))
# convert to dataframe with date as index and time as columns by
# first replacing the index by a MultiIndex
mpldatetimes = date2num(ts.index.to_pydatetime())
ts.index = pd.MultiIndex.from_arrays(
[np.floor(mpldatetimes), 2 + mpldatetimes % 1]) # '2 +': matplotlib bug workaround.
# and then unstacking the second index level to columns
df = ts.unstack()
# data plotting
fig, ax = plt.subplots()
# define the extent of the axes (remark the +- 0.5 for the y axis in order to obtain aligned date ticks)
extent = [df.columns[0], df.columns[-1], df.index[-1] + 0.5, df.index[0] - 0.5]
im = plt.imshow(df, vmin=vmin, vmax=vmax, extent=extent, cmap=cmap, aspect='auto', norm=norm,
interpolation=interpolation, **kwargs)
# figure formatting
# x axis
ax.xaxis_date()
ax.xaxis.set_major_locator(HourLocator(interval=2))
ax.xaxis.set_major_formatter(DateFormatter('%H:%M'))
ax.xaxis.grid(True)
plt.xlabel('UTC Time')
# y axis
ax.yaxis_date()
dmin, dmax = ax.yaxis.get_data_interval()
number_of_days = (num2date(dmax) - num2date(dmin)).days
# AutoDateLocator is not suited in case few data is available
if abs(number_of_days) <= 35:
ax.yaxis.set_major_locator(DayLocator())
else:
ax.yaxis.set_major_locator(AutoDateLocator())
ax.yaxis.set_major_formatter(DateFormatter("%a, %d %b %Y"))
# plot colorbar
cbticks = np.logspace(np.log10(vmin), np.log10(vmax), 11, endpoint=True)
cb = plt.colorbar(format='%.0f', ticks=cbticks)
cb.set_label(cblabel)
# plot title
plt.title(title)
return im | 78 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/plotting.py#L34-L125 | def num_gpus():
count = ctypes.c_int()
check_call(_LIB.MXGetGPUCount(ctypes.byref(count)))
return count.value |
calculate percent identity | def calc_pident_ignore_gaps(a, b):
m = 0 # matches
mm = 0 # mismatches
for A, B in zip(list(a), list(b)):
if A == '-' or A == '.' or B == '-' or B == '.':
continue
if A == B:
m += 1
else:
mm += 1
try:
return float(float(m)/float((m + mm))) * 100
except:
return 0 | 79 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L34-L50 | def get_local_file(file):
try:
with open(file.path):
yield file.path
except NotImplementedError:
_, ext = os.path.splitext(file.name)
with NamedTemporaryFile(prefix='wagtailvideo-', suffix=ext) as tmp:
try:
file.open('rb')
for chunk in file.chunks():
tmp.write(chunk)
finally:
file.close()
tmp.flush()
yield tmp.name |
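A tiny worked example for calc_pident_ignore_gaps above, using two made-up aligned strings of equal length:

a = 'ACGT-ACGT'
b = 'ACGTTACGA'
# the gap column is skipped, leaving 7 matches and 1 mismatch -> 87.5
print(calc_pident_ignore_gaps(a, b))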
skip column if either is a gap | def remove_gaps(A, B):
a_seq, b_seq = [], []
for a, b in zip(list(A), list(B)):
if a == '-' or a == '.' or b == '-' or b == '.':
continue
a_seq.append(a)
b_seq.append(b)
return ''.join(a_seq), ''.join(b_seq) | 80 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L52-L62 | def read_legacy(filename):
reader = vtk.vtkDataSetReader()
reader.SetFileName(filename)
# Ensure all data is fetched with poorly formated legacy files
reader.ReadAllScalarsOn()
reader.ReadAllColorScalarsOn()
reader.ReadAllNormalsOn()
reader.ReadAllTCoordsOn()
reader.ReadAllVectorsOn()
# Perform the read
reader.Update()
output = reader.GetOutputDataObject(0)
if output is None:
raise AssertionError('No output when using VTKs legacy reader')
return vtki.wrap(output) |
compare pairs of sequences | def compare_seqs(seqs):
A, B, ignore_gaps = seqs
a, b = A[1], B[1] # actual sequences
if len(a) != len(b):
print('# reads are not the same length', file=sys.stderr)
exit()
if ignore_gaps is True:
pident = calc_pident_ignore_gaps(a, b)
else:
pident = calc_pident(a, b)
return A[0], B[0], pident | 81 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L64-L77 | def get_s3_origin_conf_class():
if LooseVersion(troposphere.__version__) > LooseVersion('2.4.0'):
return cloudfront.S3OriginConfig
if LooseVersion(troposphere.__version__) == LooseVersion('2.4.0'):
return S3OriginConfig
return cloudfront.S3Origin |
calculate Levenshtein ratio of sequences | def compare_seqs_leven(seqs):
A, B, ignore_gaps = seqs
a, b = remove_gaps(A[1], B[1]) # actual sequences
if len(a) != len(b):
print('# reads are not the same length', file=sys.stderr)
exit()
pident = lr(a, b) * 100
return A[0], B[0], pident | 82 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L79-L89 | def del_unused_keyframes(self):
skl = self.key_frame_list.sorted_key_list()
unused_keys = [k for k in self.dct['keys']
if k not in skl]
for k in unused_keys:
del self.dct['keys'][k] |
make pairwise sequence comparisons between aligned sequences | def pairwise_compare(afa, leven, threads, print_list, ignore_gaps):
# load sequences into dictionary
seqs = {seq[0]: seq for seq in nr_fasta([afa], append_index = True)}
num_seqs = len(seqs)
# define all pairs
pairs = ((i[0], i[1], ignore_gaps) for i in itertools.combinations(list(seqs.values()), 2))
pool = multithread(threads)
# calc percent identity between all pairs - parallelize
if leven is True:
pident = pool.map(compare_seqs_leven, pairs)
else:
compare = pool.imap_unordered(compare_seqs, pairs)
pident = [i for i in tqdm(compare, total = (num_seqs*num_seqs)/2)]
pool.close()
pool.terminate()
pool.join()
return to_dictionary(pident, print_list) | 83 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L91-L110 | def makeCubiccFunc(self,mNrm,cNrm):
EndOfPrdvPP = self.DiscFacEff*self.Rfree*self.Rfree*self.PermGroFac**(-self.CRRA-1.0)* \
np.sum(self.PermShkVals_temp**(-self.CRRA-1.0)*
self.vPPfuncNext(self.mNrmNext)*self.ShkPrbs_temp,axis=0)
dcda = EndOfPrdvPP/self.uPP(np.array(cNrm[1:]))
MPC = dcda/(dcda+1.)
MPC = np.insert(MPC,0,self.MPCmaxNow)
cFuncNowUnc = CubicInterp(mNrm,cNrm,MPC,self.MPCminNow*self.hNrmNow,self.MPCminNow)
return cFuncNowUnc |
print matrix of pidents to stdout | def print_pairwise(pw, median = False):
names = sorted(set([i for i in pw]))
if len(names) != 0:
if '>' in names[0]:
yield ['#'] + [i.split('>')[1] for i in names if '>' in i]
else:
yield ['#'] + names
for a in names:
if '>' in a:
yield [a.split('>')[1]] + [pw[a][b] for b in names]
else:
out = []
for b in names:
if b in pw[a]:
if median is False:
out.append(max(pw[a][b]))
else:
out.append(np.median(pw[a][b]))
else:
out.append('-')
yield [a] + out | 84 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L132-L155 | def delete_index(self, cardinality):
DatabaseConnector.delete_index(self, cardinality)
query = "DROP INDEX IF EXISTS idx_{0}_gram_varchar;".format(cardinality)
self.execute_sql(query)
query = "DROP INDEX IF EXISTS idx_{0}_gram_normalized_varchar;".format(
cardinality)
self.execute_sql(query)
query = "DROP INDEX IF EXISTS idx_{0}_gram_lower_varchar;".format(
cardinality)
self.execute_sql(query)
query = "DROP INDEX IF EXISTS idx_{0}_gram_lower_normalized_varchar;".\
format(cardinality)
self.execute_sql(query)
for i in reversed(range(cardinality)):
if i != 0:
query = "DROP INDEX IF EXISTS idx_{0}_gram_{1}_lower;".format(
cardinality, i)
self.execute_sql(query) |
print stats for comparisons | def print_comps(comps):
if comps == []:
print('n/a')
else:
print('# min: %s, max: %s, mean: %s' % \
(min(comps), max(comps), np.mean(comps))) | 85 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L157-L165 | def _load_vertex_buffers(self):
fd = gzip.open(cache_name(self.file_name), 'rb')
for buff in self.meta.vertex_buffers:
mat = self.wavefront.materials.get(buff['material'])
if not mat:
mat = Material(name=buff['material'], is_default=True)
self.wavefront.materials[mat.name] = mat
mat.vertex_format = buff['vertex_format']
self.load_vertex_buffer(fd, mat, buff['byte_length'])
fd.close() |
print min. pident within each clade and then matrix of between-clade max. | def compare_clades(pw):
names = sorted(set([i for i in pw]))
for i in range(0, 4):
wi, bt = {}, {}
for a in names:
for b in pw[a]:
if ';' not in a or ';' not in b:
continue
pident = pw[a][b]
cA, cB = a.split(';')[i], b.split(';')[i]
if i == 0 and '_' in cA and '_' in cB:
cA = cA.rsplit('_', 1)[1]
cB = cB.rsplit('_', 1)[1]
elif '>' in cA or '>' in cB:
cA = cA.split('>')[1]
cB = cB.split('>')[1]
if cA == cB:
if cA not in wi:
wi[cA] = []
wi[cA].append(pident)
else:
if cA not in bt:
bt[cA] = {}
if cB not in bt[cA]:
bt[cA][cB] = []
bt[cA][cB].append(pident)
print('\n# min. within')
for clade, pidents in list(wi.items()):
print('\t'.join(['wi:%s' % str(i), clade, str(min(pidents))]))
# print matrix of maximum between groups
comps = []
print('\n# max. between')
for comp in print_pairwise(bt):
if comp is not None:
print('\t'.join(['bt:%s' % str(i)] + [str(j) for j in comp]))
if comp[0] != '#':
comps.extend([j for j in comp[1:] if j != '-'])
print_comps(comps)
# print matrix of median between groups
comps = []
print('\n# median between')
for comp in print_pairwise(bt, median = True):
if comp is not None:
print('\t'.join(['bt:%s' % str(i)] + [str(j) for j in comp]))
if comp[0] != '#':
comps.extend([j for j in comp[1:] if j != '-'])
print_comps(comps) | 86 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L167-L216 | def setGroups(self, groups, kerningGroupConversionRenameMaps=None):
skipping = []
for name, members in groups.items():
checked = []
for m in members:
if m in self.font:
checked.append(m)
else:
skipping.append(m)
if checked:
self.font.groups[name] = checked
if skipping:
if self.verbose and self.logger:
self.logger.info("\tNote: some glyphnames were removed from groups: %s (unavailable in the font)", ", ".join(skipping))
if kerningGroupConversionRenameMaps:
# in case the sources were UFO2,
# and defcon upconverted them to UFO3
# and now we have to down convert them again,
# we don't want the UFO3 public prefixes in the group names
self.font.kerningGroupConversionRenameMaps = kerningGroupConversionRenameMaps |
convert matrix to dictionary of comparisons | def matrix2dictionary(matrix):
pw = {}
for line in matrix:
line = line.strip().split('\t')
if line[0].startswith('#'):
names = line[1:]
continue
a = line[0]
for i, pident in enumerate(line[1:]):
b = names[i]
if a not in pw:
pw[a] = {}
if b not in pw:
pw[b] = {}
if pident != '-':
pident = float(pident)
pw[a][b] = pident
pw[b][a] = pident
return pw | 87 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L218-L239 | def _sampleLocationOnSide(self):
z = random.uniform(-1, 1) * self.height / 2.
sampledAngle = 2 * random.random() * pi
x, y = self.radius * cos(sampledAngle), self.radius * sin(sampledAngle)
return [x, y, z] |
Set argument parser option. | def setoption(parser, metadata=None):
parser.add_argument('-v', action='version',
version=__version__)
subparsers = parser.add_subparsers(help='sub commands help')
create_cmd = subparsers.add_parser('create')
create_cmd.add_argument('name',
help='Specify Python package name.')
create_cmd.add_argument('-d', dest='description', action='store',
help='Short description about your package.')
create_cmd.add_argument('-a', dest='author', action='store',
required=True,
help='Python package author name.')
create_cmd.add_argument('-e', dest='email', action='store',
required=True,
help='Python package author email address.')
create_cmd.add_argument('-l', dest='license',
choices=metadata.licenses().keys(),
default='GPLv3+',
help='Specify license. (default: %(default)s)')
create_cmd.add_argument('-s', dest='status',
choices=metadata.status().keys(),
default='Alpha',
help=('Specify development status. '
'(default: %(default)s)'))
create_cmd.add_argument('--no-check', action='store_true',
help='No checking package name in PyPI.')
create_cmd.add_argument('--with-samples', action='store_true',
help='Generate package with sample code.')
group = create_cmd.add_mutually_exclusive_group(required=True)
group.add_argument('-U', dest='username', action='store',
help='Specify GitHub username.')
group.add_argument('-u', dest='url', action='store', type=valid_url,
help='Python package homepage url.')
create_cmd.add_argument('-o', dest='outdir', action='store',
default=os.path.abspath(os.path.curdir),
help='Specify output directory. (default: $PWD)')
list_cmd = subparsers.add_parser('list')
list_cmd.add_argument('-l', dest='licenses', action='store_true',
help='show license choices.') | 88 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/commands.py#L12-L51 | def swd_sync(self, pad=False):
if pad:
self._dll.JLINK_SWD_SyncBytes()
else:
self._dll.JLINK_SWD_SyncBits()
return None |
Parse argument options. | def parse_options(metadata):
parser = argparse.ArgumentParser(description='%(prog)s usage:',
prog=__prog__)
setoption(parser, metadata=metadata)
return parser | 89 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/commands.py#L72-L77 | def vrel(v1, v2):
v1 = stypes.toDoubleVector(v1)
v2 = stypes.toDoubleVector(v2)
return libspice.vrel_c(v1, v2) |
Execute main processes. | def main():
try:
pkg_version = Update()
if pkg_version.updatable():
pkg_version.show_message()
metadata = control.retreive_metadata()
parser = parse_options(metadata)
argvs = sys.argv
if len(argvs) <= 1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
control.print_licences(args, metadata)
control.check_repository_existence(args)
control.check_package_existence(args)
control.generate_package(args)
except (RuntimeError, BackendFailure, Conflict) as exc:
sys.stderr.write('{0}\n'.format(exc))
sys.exit(1) | 90 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/commands.py#L80-L99 | def price(self, minimum: float = 10.00,
maximum: float = 1000.00) -> str:
price = self.random.uniform(minimum, maximum, precision=2)
return '{0} {1}'.format(price, self.currency_symbol()) |
Check key and set default value when it does not exist. | def _check_or_set_default_params(self):
if not hasattr(self, 'date'):
self._set_param('date', datetime.utcnow().strftime('%Y-%m-%d'))
if not hasattr(self, 'version'):
self._set_param('version', self.default_version)
# pylint: disable=no-member
if not hasattr(self, 'description') or self.description is None:
getattr(self, '_set_param')('description', self.warning_message) | 91 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/package.py#L44-L52 | def _get_files_modified():
cmd = "git diff-index --cached --name-only --diff-filter=ACMRTUXB HEAD"
_, files_modified, _ = run(cmd)
extensions = [re.escape(ext) for ext in list(SUPPORTED_FILES) + [".rst"]]
test = "(?:{0})$".format("|".join(extensions))
return list(filter(lambda f: re.search(test, f), files_modified)) |
Move directory from working directory to output directory. | def move(self):
if not os.path.isdir(self.outdir):
os.makedirs(self.outdir)
shutil.move(self.tmpdir, os.path.join(self.outdir, self.name)) | 92 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/package.py#L169-L173 | def init(*args, **kwargs):
global _initial_client
client = Client(*args, **kwargs)
Hub.current.bind_client(client)
rv = _InitGuard(client)
if client is not None:
_initial_client = weakref.ref(client)
return rv |
Initialize VCS repository. | def vcs_init(self):
VCS(os.path.join(self.outdir, self.name), self.pkg_data) | 93 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/package.py#L185-L187 | def group_experiments_greedy(tomo_expt: TomographyExperiment):
diag_sets = _max_tpb_overlap(tomo_expt)
grouped_expt_settings_list = list(diag_sets.values())
grouped_tomo_expt = TomographyExperiment(grouped_expt_settings_list, program=tomo_expt.program)
return grouped_tomo_expt |
Finds the location of the current Steam installation on Windows machines.
Returns None for any non-Windows machines, or for Windows machines where
Steam is not installed. | def find_steam_location():
if registry is None:
return None
key = registry.CreateKey(registry.HKEY_CURRENT_USER,"Software\Valve\Steam")
return registry.QueryValueEx(key,"SteamPath")[0] | 94 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/winutils.py#L10-L20 | def _merge(*args):
return re.compile(r'^' + r'[/-]'.join(args) + r'(?:\s+' + _dow + ')?$') |
Plot PCoA principal coordinates scaled by the relative abundances of
otu_name. | def plot_PCoA(cat_data, otu_name, unifrac, names, colors, xr, yr, outDir,
save_as, plot_style):
fig = plt.figure(figsize=(14, 8))
ax = fig.add_subplot(111)
for i, cat in enumerate(cat_data):
plt.scatter(cat_data[cat]["pc1"], cat_data[cat]["pc2"], cat_data[cat]["size"],
color=colors[cat], alpha=0.85, marker="o", edgecolor="black",
label=cat)
lgnd = plt.legend(loc="best", scatterpoints=3, fontsize=13)
for i in range(len(colors.keys())):
lgnd.legendHandles[i]._sizes = [80] # Change the legend marker size manually
plt.title(" ".join(otu_name.split("_")), style="italic")
plt.ylabel("PC2 (Percent Explained Variance {:.3f}%)".format(float(unifrac["varexp"][1])))
plt.xlabel("PC1 (Percent Explained Variance {:.3f}%)".format(float(unifrac["varexp"][0])))
plt.xlim(round(xr[0]*1.5, 1), round(xr[1]*1.5, 1))
plt.ylim(round(yr[0]*1.5, 1), round(yr[1]*1.5, 1))
if plot_style:
gu.ggplot2_style(ax)
fc = "0.8"
else:
fc = "none"
fig.savefig(os.path.join(outDir, "_".join(otu_name.split())) + "." + save_as,
facecolor=fc, edgecolor="none", format=save_as,
bbox_inches="tight", pad_inches=0.2)
plt.close(fig) | 95 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/PCoA_bubble.py#L36-L65 | def reset(cls):
cls.debug = False
cls.disabled = False
cls.overwrite = False
cls.playback_only = False
cls.recv_timeout = 5
cls.recv_endmarkers = []
cls.recv_size = None |
Split up the column data in a biom table by mapping category value. | def split_by_category(biom_cols, mapping, category_id):
columns = defaultdict(list)
for i, col in enumerate(biom_cols):
columns[mapping[col['id']][category_id]].append((i, col))
return columns | 96 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/transpose_biom.py#L17-L25 | def update_prompt(self):
prefix = ""
if self._local_endpoint is not None:
prefix += "(%s:%d) " % self._local_endpoint
prefix += self.engine.region
if self.engine.partial:
self.prompt = len(prefix) * " " + "> "
else:
self.prompt = prefix + "> " |
print line if starts with ... | def print_line(l):
print_lines = ['# STOCKHOLM', '#=GF', '#=GS', ' ']
if len(l.split()) == 0:
return True
for start in print_lines:
if l.startswith(start):
return True
return False | 97 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/stockholm2oneline.py#L11-L21 | def setOverlayTransformTrackedDeviceRelative(self, ulOverlayHandle, unTrackedDevice):
fn = self.function_table.setOverlayTransformTrackedDeviceRelative
pmatTrackedDeviceToOverlayTransform = HmdMatrix34_t()
result = fn(ulOverlayHandle, unTrackedDevice, byref(pmatTrackedDeviceToOverlayTransform))
return result, pmatTrackedDeviceToOverlayTransform |
convert stockholm to single line format | def stock2one(stock):
lines = {}
for line in stock:
line = line.strip()
if print_line(line) is True:
yield line
continue
if line.startswith('//'):
continue
ID, seq = line.rsplit(' ', 1)
if ID not in lines:
lines[ID] = ''
else:
# remove preceding white space
seq = seq.strip()
lines[ID] += seq
for ID, line in lines.items():
yield '\t'.join([ID, line])
yield '\n//' | 98 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/stockholm2oneline.py#L23-L44 | def describe_event_source_mapping(UUID=None, EventSourceArn=None,
FunctionName=None,
region=None, key=None, keyid=None, profile=None):
ids = _get_ids(UUID, EventSourceArn=EventSourceArn,
FunctionName=FunctionName)
if not ids:
return {'event_source_mapping': None}
UUID = ids[0]
try:
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
desc = conn.get_event_source_mapping(UUID=UUID)
if desc:
keys = ('UUID', 'BatchSize', 'EventSourceArn',
'FunctionArn', 'LastModified', 'LastProcessingResult',
'State', 'StateTransitionReason')
return {'event_source_mapping': dict([(k, desc.get(k)) for k in keys])}
else:
return {'event_source_mapping': None}
except ClientError as e:
return {'error': __utils__['boto3.get_error'](e)} |
Statics the methods. wut. | def math_func(f):
@wraps(f)
def wrapper(*args, **kwargs):
if len(args) > 0:
return_type = type(args[0])
if kwargs.has_key('return_type'):
return_type = kwargs['return_type']
kwargs.pop('return_type')
return return_type(f(*args, **kwargs))
args = list((setify(x) for x in args))
return return_type(f(*args, **kwargs))
return wrapper | 99 | https://github.com/elbow-jason/Uno-deprecated/blob/4ad07d7b84e5b6e3e2b2c89db69448906f24b4e4/uno/helpers.py#L8-L22 | def PublishMultipleEvents(cls, events, token=None):
event_name_map = registry.EventRegistry.EVENT_NAME_MAP
for event_name, messages in iteritems(events):
if not isinstance(event_name, string_types):
raise ValueError(
"Event names should be string, got: %s" % type(event_name))
for msg in messages:
if not isinstance(msg, rdfvalue.RDFValue):
raise ValueError("Can only publish RDFValue instances.")
for event_cls in event_name_map.get(event_name, []):
event_cls().ProcessMessages(messages, token=token) |
End of preview.
No dataset card yet
Downloads last month: 4