| Column | Type | Values |
|---|---|---|
| hexsha | string | length 40-40 |
| size | int64 | 5-1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3-344 |
| max_stars_repo_name | string | length 5-125 |
| max_stars_repo_head_hexsha | string | length 40-78 |
| max_stars_repo_licenses | sequence | length 1-11 |
| max_stars_count | int64 | 1-368k (nullable) |
| max_stars_repo_stars_event_min_datetime | string | length 24-24 (nullable) |
| max_stars_repo_stars_event_max_datetime | string | length 24-24 (nullable) |
| max_issues_repo_path | string | length 3-344 |
| max_issues_repo_name | string | length 5-125 |
| max_issues_repo_head_hexsha | string | length 40-78 |
| max_issues_repo_licenses | sequence | length 1-11 |
| max_issues_count | int64 | 1-116k (nullable) |
| max_issues_repo_issues_event_min_datetime | string | length 24-24 (nullable) |
| max_issues_repo_issues_event_max_datetime | string | length 24-24 (nullable) |
| max_forks_repo_path | string | length 3-344 |
| max_forks_repo_name | string | length 5-125 |
| max_forks_repo_head_hexsha | string | length 40-78 |
| max_forks_repo_licenses | sequence | length 1-11 |
| max_forks_count | int64 | 1-105k (nullable) |
| max_forks_repo_forks_event_min_datetime | string | length 24-24 (nullable) |
| max_forks_repo_forks_event_max_datetime | string | length 24-24 (nullable) |
| content | string | length 5-1.04M |
| avg_line_length | float64 | 1.14-851k |
| max_line_length | int64 | 1-1.03M |
| alphanum_fraction | float64 | 0-1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01-1 |
ce18a63e359a5770113f245fe115b88cfc882651 | 104 | md | Markdown | krm-functions/sig-cli/README.md | natasha41575/krm-functions-registry | 8d72cd032ee0642f17875439256bbfa1e8f3901a | [
"Apache-2.0"
] | 1 | 2022-01-18T22:34:25.000Z | 2022-01-18T22:34:25.000Z | krm-functions/sig-cli/README.md | natasha41575/krm-functions-registry | 8d72cd032ee0642f17875439256bbfa1e8f3901a | [
"Apache-2.0"
] | 10 | 2021-12-07T22:56:49.000Z | 2022-03-17T20:57:14.000Z | krm-functions/sig-cli/README.md | natasha41575/krm-functions-registry | 8d72cd032ee0642f17875439256bbfa1e8f3901a | [
"Apache-2.0"
] | 2 | 2022-01-26T05:48:18.000Z | 2022-02-08T00:20:31.000Z | # KRM functions - SIG CLI
This directory contains in-tree SIG-CLI sponsored and authored KRM functions. | 34.666667 | 77 | 0.798077 | eng_Latn | 0.995109 |
ce1a9f95b0efcb99afd58f015bf26468d1928ee5 | 2,393 | md | Markdown | src/pages/blog/becoming-an-allstar.md | WaylonWalker/cuttin-scrap | b3a0c9365e44dcac2d0cde7017e0cda245428cbe | [
"MIT"
] | null | null | null | src/pages/blog/becoming-an-allstar.md | WaylonWalker/cuttin-scrap | b3a0c9365e44dcac2d0cde7017e0cda245428cbe | [
"MIT"
] | null | null | null | src/pages/blog/becoming-an-allstar.md | WaylonWalker/cuttin-scrap | b3a0c9365e44dcac2d0cde7017e0cda245428cbe | [
"MIT"
] | null | null | null | ---
templateKey: 'blog-post'
path: /becoming-an-allstar
title: Becoming an All-star!
author: Rhiannon Walker
date: 2017-01-30
---
_January 30, 2017_
When your life gets put into risk and dying becomes closer than living, everything changes. The hugs mean just that much more. The "I love yous" stick. What goes in your body or surrounds your body now means the world! When your body fails you and medicine revives you, you are not yet a super hero.
I feel like Deadpool in the hyperbaric chamber. My body is going through all these chemical changes so that I can become an all-star! Make no mistake all-star qualities are arriving by the minute. Little things don't mean as much to me, like getting the laundry done. Instead I would rather spend the extra time reading one more book at bedtime to my kids. I now think eating dessert should come first, because you NEVER know.
What happened you ask that forced this change? Tuesday happened, and by Tuesday night I was taken by ambulance to Barnes hospital. It took 12 rounds of IV antibiotics to save me. I'm not going to go into the details, because they are not important to the general public, but what is important is that I survived. I am still here for my kids, family, and friends.
So how do you change your life for the positive? How do you hang on with everything to see the next day? It's a feeling. It's putting on a new pair of glasses with the correct prescription strength. It's putting faith in your medical team, and trusting your gut.
Mostly what got me through that night was saying over, and over again, "Wyatt and Ayla need their Mommy!"
Take care of your body. Take care of those you love. It's not worth the risk of what you'd leave behind.
10 positives as promised:
1. Medicine - truly did save my life
2. My Children
3. Waylon - Man I love him, that was a lot to put on him, and a scary drive alone down to St. Louis.
4. My age - I am confident that with my age, I will prevail and kick Cancer's behind.
5. Netflix and Puzzles - there is only so much you can do in a hospital bed.
6. My Guardian Angel - because of her I believe I am here. Miss you Mom.
7. Leggings
8. Family and Friend's love and support.
9. Excedrin - these migraines have been horrendous lately.
10. Our shower - man it felt great to take a shower once I got home!
With Love,
Rhiannon | 37.984127 | 432 | 0.748433 | eng_Latn | 0.999725 |
ce1aafac82a6aa3ab9aab1d8a30eee23162da238 | 250 | md | Markdown | README.md | Markek1/Rubiks-cube-simulator | ac087a7c51d791c92b6103bb37164b887aa9f6ea | [
"MIT"
] | null | null | null | README.md | Markek1/Rubiks-cube-simulator | ac087a7c51d791c92b6103bb37164b887aa9f6ea | [
"MIT"
] | null | null | null | README.md | Markek1/Rubiks-cube-simulator | ac087a7c51d791c92b6103bb37164b887aa9f6ea | [
"MIT"
] | null | null | null | # Rubiks-cube-simulator
Features:
* Rotation
* Translation of standard notation into rotations (and the reverse)
* Graphical visualization of the cube
![Example 1](https://github.com/Markek1/Rubiks-cube-simulator/blob/master/examples/example1.gif)
| 31.25 | 96 | 0.796 | eng_Latn | 0.661693 |
ce1ce0d4fde85e20b4359787b222a0d7df34f78e | 3,359 | md | Markdown | README.md | zhming0/k8s-eip | 1842c4d74daa6df22155aaa459389c78e840e6a4 | [
"MIT"
] | 1 | 2020-04-18T14:25:16.000Z | 2020-04-18T14:25:16.000Z | README.md | zhming0/k8s-eip | 1842c4d74daa6df22155aaa459389c78e840e6a4 | [
"MIT"
] | null | null | null | README.md | zhming0/k8s-eip | 1842c4d74daa6df22155aaa459389c78e840e6a4 | [
"MIT"
] | null | null | null | # K8S-EIP
![License](https://img.shields.io/github/license/zhming0/k8s-eip?link=https://github.com/zhming0/k8s-eip/blob/master/LICENSE)
![Docker Pulls](https://img.shields.io/docker/pulls/zhming0/k8s-eip?link=https://hub.docker.com/repository/docker/zhming0/k8s-eip)
[![zhming0](https://circleci.com/gh/zhming0/k8s-eip.svg?style=svg)](https://circleci.com/gh/zhming0/k8s-eip)
Bind a group of AWS Elastic IPs to a group of Kubernetes Nodes that matches criteria.
## Huh? What is this?
### Q: What is k8s-eip trying to solve?
I don't want to create many unnecessary ELBs just for my toy cluster created by kops.
### Q: Can't you just use `nginx-ingress` so you just create one ELB for many services?
Nah, I don't want to pay that $18/month either. I don't want any ELB.
### Q: What a miser! But how?
`k8s-eip` is for you.
It's similar to [kube-ip](https://github.com/doitintl/kubeip) but for AWS and less mature.
It binds a group of specified Elastic IPs to a set of Kubernetes nodes on AWS.
It runs periodically, so **you can't use it for HA/critical use cases**.
### Q: Would it trigger the scary Elastic IP remap fee?
As a project with a goal of saving money, surely no :).
Actually, it will **try** not to.
## How to use it?
### Prerequisite
* You are an admin of k8s cluster on AWS and you have `kubectl` configured properly.
Quick test: `kubectl cluster-info`
* You have credentials for an IAM user with following permission:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:AssociateAddress"
],
"Resource": "*"
}
]
}
```
* You have labeled the targeted nodes
Via: `kubectl`
```
kubectl label node YOUR_NODE_NAME bastion=true
```
Via: `kops`
```yaml
---
...
kind: InstanceGroup
...
spec:
...
nodeLabels:
...
bastion: "true" # Or anything you want
...
```
### Using Helm v3
First, prepare a YAML file like this:
```yaml
awsAccessKeyId: "XXXXXXX" # AWS_ACCESS_KEY_ID of the IAM user account
awsSecretAccessKey: "XXXXXXXXXXXXXXXX" # secret access key of the IAM user account
awsRegion: us-east-1
# Elastic IPs that you own and want to attach to targeting nodes
ips: "8.8.8.8,1.1.1.1" # example
# The label on Nodes that you want to have elastic IPs attached
labelSelector: "bastion=true" # example
```
Then
```bash
helm upgrade -i \
-f values.yaml \ # The yaml file that you prepared
-n kube-system \ # This could be any namespace, kube-system is a good idea
k8s-eip \ # any name
https://github.com/zhming0/k8s-eip/releases/download/v0.0.1/k8s-eip-0.0.1.tgz
```
Note:
- *Helm is flexible.
There are many ways to supply values.
Please refer to [their doc](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing)
for other options.*
- use `--dry-run` to preview the changes
### Using Helm v2
Nope :)
### Kubectl directly
Good luck :)
## Project status
As you can see, this is early stage and not actively maintained
as I don't believe it creates much value.
Any help is still appreciated though.
Some potential work could be:
- Use K8S's `watch` API to replace/enhance periodic run
- Improve determinism: reduce unnecessary Elastic IP remapping even when
there are many changes happening to k8s nodes.
| 26.65873 | 130 | 0.693063 | eng_Latn | 0.959586 |
ce1fd593a185c3d62e66e1af2256b006b92ab093 | 316 | md | Markdown | brainstorm.md | cmsteffen-code/pscan | f93fb593277b57dea7158553c04dd6462fa1427a | [
"MIT"
] | null | null | null | brainstorm.md | cmsteffen-code/pscan | f93fb593277b57dea7158553c04dd6462fa1427a | [
"MIT"
] | null | null | null | brainstorm.md | cmsteffen-code/pscan | f93fb593277b57dea7158553c04dd6462fa1427a | [
"MIT"
] | null | null | null | I want to design an asynchronous scan.
1. Create a sniffer to handle incoming TCP packets from the specific host.
2. Send out a spray of SYN packets to the target ports.
3. Capture and log any RST or SYN/ACK packets.
4. Send RST/ACK responses.
5. After the time-out is reached, end the scan.
6. Reveal the results.
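A minimal sketch of how this flow could look (my own illustration, not code from the pscan project), assuming Scapy is installed and the script runs with raw-socket/root privileges:

```python
# Illustrative sketch only; scapy and root privileges are assumed.
import time
from scapy.all import IP, TCP, AsyncSniffer, send, conf

conf.verb = 0  # silence scapy's per-packet output


def syn_scan(target, ports, timeout=3.0):
    results = {}

    def handle(pkt):
        # Step 3: capture and log RST or SYN/ACK replies from the target.
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
            return
        if pkt[IP].src != target:
            return
        tcp = pkt[TCP]
        if tcp.flags == "SA":          # SYN/ACK -> port open
            results[tcp.sport] = "open"
            # Step 4: reset the half-open connection.
            send(IP(dst=target) / TCP(sport=tcp.dport, dport=tcp.sport, flags="R"))
        elif tcp.flags.R:              # RST -> port closed
            results.setdefault(tcp.sport, "closed")

    # Step 1: sniffer for incoming TCP packets from the specific host.
    sniffer = AsyncSniffer(filter=f"tcp and src host {target}", prn=handle)
    sniffer.start()
    # Step 2: spray SYN packets at the target ports.
    for port in ports:
        send(IP(dst=target) / TCP(dport=port, flags="S"))
    time.sleep(timeout)                # Step 5: stop after the time-out.
    sniffer.stop()
    return results                     # Step 6: reveal the results.
```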
| 35.111111 | 74 | 0.759494 | eng_Latn | 0.995527 |
ce20568b3670bc95333ab582b80b1c56f95958e1 | 51 | md | Markdown | README.md | AlexanderSilvaB/cpplanning | 643af74a8e7067a19cf98a37c38ee346e741ef31 | [
"MIT"
] | null | null | null | README.md | AlexanderSilvaB/cpplanning | 643af74a8e7067a19cf98a37c38ee346e741ef31 | [
"MIT"
] | null | null | null | README.md | AlexanderSilvaB/cpplanning | 643af74a8e7067a19cf98a37c38ee346e741ef31 | [
"MIT"
] | null | null | null | # cpplanning
A path planning algorithms playground
| 17 | 37 | 0.843137 | eng_Latn | 0.960584 |
ce207ca858dba9405bbd6a1184993839ec90ab78 | 598 | md | Markdown | README.md | lordkevinmo/Car-data-analysis | 69ce2aa522a855e7654948d362fc95a1e7363fc5 | [
"MIT"
] | null | null | null | README.md | lordkevinmo/Car-data-analysis | 69ce2aa522a855e7654948d362fc95a1e7363fc5 | [
"MIT"
] | null | null | null | README.md | lordkevinmo/Car-data-analysis | 69ce2aa522a855e7654948d362fc95a1e7363fc5 | [
"MIT"
] | null | null | null | # Car-data-analysis
Data analysis of the relationship between a car's price and its various characteristics, such as engine power. The dataset comes from the following link: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
The analysis was carried out in Python using the open-source Anaconda distribution (https://www.anaconda.com/), which bundles the data-science tooling. For this analysis I used the Spyder IDE together with the pandas, matplotlib, numpy, scipy and scikit-learn libraries.
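(Added illustration, not from the original README.) A minimal pandas sketch of the kind of analysis described, assuming the usual UCI imports-85 column layout:

```python
import pandas as pd

URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data"

# The file ships without a header row and uses '?' for missing values.
df = pd.read_csv(URL, header=None, na_values="?")

# Assumed positions: in imports-85 the last column (25) is the price and
# column 21 is the horsepower; adjust if the documented schema differs.
subset = df[[21, 25]].dropna().astype(float)
print(subset.corr())
```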
| 119.6 | 310 | 0.80602 | fra_Latn | 0.986171 |
ce2180179f802e0e9140e0a3571231e0d685cab0 | 2,215 | md | Markdown | docs/parallel/chapter5/03_How_to_create_a_task_with_Celery.md | studying-notes/py-notes | 050f5b1ac329835bf1c09496267592d7089dfcc0 | [
"MIT"
] | 1 | 2021-07-10T20:40:55.000Z | 2021-07-10T20:40:55.000Z | docs/parallel/chapter5/03_How_to_create_a_task_with_Celery.md | studying-notes/py-notes | 050f5b1ac329835bf1c09496267592d7089dfcc0 | [
"MIT"
] | null | null | null | docs/parallel/chapter5/03_How_to_create_a_task_with_Celery.md | studying-notes/py-notes | 050f5b1ac329835bf1c09496267592d7089dfcc0 | [
"MIT"
] | null | null | null | # 如何使用Celery创建任务
In this section we show how to create a task with the Celery module. Celery provides the following methods for calling a task:
- `apply_async(args[, kwargs[, ...]])`: sends a task message
- `delay(*args, **kwargs)`: a shortcut for sending a task message, but it does not allow execution options to be set
The `delay` method is more convenient because it can be called like an ordinary function:
```python
task.delay(arg1, arg2, kwarg1='x', kwarg2='y')
```
If you want to use `apply_async`, you have to write it like this:
```python
task.apply_async(args=[arg1, arg2], kwargs={'kwarg1': 'x', 'kwarg2': 'y'})
```
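Since `delay()` cannot set execution options, the following sketch (an addition, not part of the original text) shows the kind of options `apply_async()` accepts; `countdown`, `expires` and `queue` are standard Celery options:

```python
# Schedule the task 10 seconds from now, expire it after 5 minutes,
# and route it to a named queue.
result = task.apply_async(
    args=[arg1, arg2],
    kwargs={'kwarg1': 'x', 'kwarg2': 'y'},
    countdown=10,     # start at most 10 s in the future
    expires=300,      # discard if not started within 5 min
    queue='default',  # route to a specific queue
)
```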
##
We can execute a task with the following two simple scripts:
```python
# addTask.py: Executing a simple task
from celery import Celery
app = Celery('addTask', broker='amqp://guest@localhost//')
@app.task
def add(x, y):
return x + y
```
The second script is as follows:
```python
# addTask_main.py: run the AddTask example via delay()
import addTask
if __name__ == '__main__':
result = addTask.add.delay(5,5)
```
To reiterate: the RabbitMQ service starts automatically after installation, so here we only need to start the Celery worker, with the following command:
```bash
celery -A addTask worker --loglevel=info --pool=solo
```
The output of the command looks like this:
![image](https://i.loli.net/2021/06/01/LjdnuZWgxiBe5T7.png)
Among other things, there is a warning telling us to disable the pickle serializer to avoid some security risks. pickle is the default serializer because it is convenient (it lets you pass fairly complex Python objects as task arguments). Whether or not you use pickle, you can set the `CELERY_ACCEPT_CONTENT` variable to silence the warning. See http://celery.readthedocs.org/en/latest/configuration.html for details.
Now let's run the `addTask_main.py` script to add a task:
![image](https://i.loli.net/2021/06/01/LFypAGU1w7KnPT4.png)
Finally, the output of the first command will show:
![image](https://i.loli.net/2021/06/01/yawk7Q4VdHcNvAn.png)
In the last line you can see that the result is 10, as expected.
##
Let's look at the `addTask.py` script first. In the first two lines of code we create a Celery application instance and use the RabbitMQ service as the message broker:
```python
from celery import Celery
app = Celery('addTask', broker='amqp://guest@localhost//')
```
The first argument of the Celery function is the name of the current module (`addTask.py`); the second is the broker information, i.e. the URL used to connect to the message broker (RabbitMQ). Then we declare the tasks: every task must be decorated with `@app.task`.
The decorator helps Celery identify which functions can be scheduled through the task queue. After the decorator, we define the task that the workers can execute. Our first task is simple; it just computes the sum of two numbers:
```python
@app.task
def add(x, y):
return x + y
```
In the second script, `addTask_main.py`, we call the task via the `delay()` method:
```python
if __name__ == '__main__':
result = addTask.add.delay(5,5)
```
Remember that this method is just a shortcut for `apply_async()`, which gives more precise control over task execution.
If RabbitMQ uses its default configuration, Celery can also connect with the `amqp://` scheme.
| 23.56383 | 226 | 0.690293 | yue_Hant | 0.585692 |
ce23f6bf9344840a3c8f204e8eba70f2b9dc2bba | 1,305 | md | Markdown | curriculum/challenges/italian/14-responsive-web-design-22/learn-html-by-building-a-cat-photo-app/5dc174fcf86c76b9248c6eb2.md | palash-signoz/freeCodeCamp | db33f49b7b775df55e465243f244d648cd75aff5 | [
"BSD-3-Clause"
] | 1 | 2021-11-26T13:27:53.000Z | 2021-11-26T13:27:53.000Z | curriculum/challenges/italian/14-responsive-web-design-22/learn-html-by-building-a-cat-photo-app/5dc174fcf86c76b9248c6eb2.md | palash-signoz/freeCodeCamp | db33f49b7b775df55e465243f244d648cd75aff5 | [
"BSD-3-Clause"
] | 169 | 2020-10-13T16:49:51.000Z | 2020-12-08T22:53:48.000Z | curriculum/challenges/italian/14-responsive-web-design-22/learn-html-by-building-a-cat-photo-app/5dc174fcf86c76b9248c6eb2.md | palash-signoz/freeCodeCamp | db33f49b7b775df55e465243f244d648cd75aff5 | [
"BSD-3-Clause"
] | null | null | null | ---
id: 5dc174fcf86c76b9248c6eb2
title: Step 1
challengeType: 0
dashedName: step-1
---
# --description--
Gli elementi HTML hanno un tag di apertura come `<h1>` e un tag di chiusura come `</h1>`.
Trova l'elemento `h1` e cambia il testo tra i tag di apertura e chiusura in `CatPhotoApp`.
# --hints--
Il testo `CatPhotoApp` dovrebbe essere presente nel codice. Controlla la tua ortografia.
```js
assert(code.match(/catphotoapp/i));
```
L'elemento `h1` dovrebbe avere un tag di apertura. I tag di apertura hanno questa sintassi: `<nomeElemento>`.
```js
assert(document.querySelector('h1'));
```
L'elemento `h1` dovrebbe avere un tag di chiusura. I tag di chiusura hanno un carattere `/` subito dopo il carattere `<`.
```js
assert(code.match(/<\/h1\>/));
```
Hai più di un elemento `h1`. Rimuovi l'elemento `h1` di troppo.
```js
assert(document.querySelectorAll('h1').length === 1);
```
Il testo dell'elemento `h1` dovrebbe essere `CatPhotoApp`. Hai omesso il testo, hai un refuso o il testo non è tra i tag di apertura e chiusura dell'elemento `h1`.
```js
assert(document.querySelector('h1').innerText.toLowerCase() === 'catphotoapp');
```
# --seed--
## --seed-contents--
```html
<html>
<body>
--fcc-editable-region--
<h1>Hello World</h1>
--fcc-editable-region--
</body>
</html>
```
| 21.75 | 163 | 0.687356 | ita_Latn | 0.980578 |
ce23f94fc7ede3c2f91bab6852fb8e56d9dff252 | 337 | md | Markdown | numpy/linear-algebra/README.md | kahilah/hpc-python | 5d2efa08076ed2706c81ca255c7e4574c937557c | [
"MIT"
] | 1 | 2021-12-16T08:55:28.000Z | 2021-12-16T08:55:28.000Z | numpy/linear-algebra/README.md | kahilah/hpc-python | 5d2efa08076ed2706c81ca255c7e4574c937557c | [
"MIT"
] | null | null | null | numpy/linear-algebra/README.md | kahilah/hpc-python | 5d2efa08076ed2706c81ca255c7e4574c937557c | [
"MIT"
] | 1 | 2021-12-05T02:40:42.000Z | 2021-12-05T02:40:42.000Z | ## Linear algebra
1. Construct two symmetric 2x2 matrices **A** and **B**.
*Hint: a symmetric matrix can be constructed easily from a square matrix
as **Asym** = **A** + **A**^T*
2. Calculate the matrix product **C** = **A** * **B** using `numpy.dot()`.
3. Calculate the eigenvalues of matrix **C** with `numpy.linalg.eigvals()`.
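One possible NumPy sketch of these three steps (added illustration; the random values are my own, not part of the exercise text):

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Symmetric 2x2 matrices, using Asym = A + A^T
A = rng.integers(1, 10, (2, 2)); A = A + A.T
B = rng.integers(1, 10, (2, 2)); B = B + B.T

# 2. Matrix product C = A * B
C = np.dot(A, B)            # equivalently: A @ B

# 3. Eigenvalues of C
print(np.linalg.eigvals(C))
```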
| 42.125 | 75 | 0.64095 | eng_Latn | 0.978426 |
ce25958f53fe160e39513af2ba6b96b83d0d3c82 | 356 | md | Markdown | www/src/data/StarterShowcase/startersData/gatsby-starter-timeline-theme.md | rrs94/gatsby | 2a7a8710ff579dcf4f3489d4e7b995dfb8de298c | [
"MIT"
] | 1 | 2020-07-07T15:10:20.000Z | 2020-07-07T15:10:20.000Z | www/src/data/StarterShowcase/startersData/gatsby-starter-timeline-theme.md | rrs94/gatsby | 2a7a8710ff579dcf4f3489d4e7b995dfb8de298c | [
"MIT"
] | 2 | 2018-05-15T16:07:33.000Z | 2018-05-19T21:40:24.000Z | www/src/data/StarterShowcase/startersData/gatsby-starter-timeline-theme.md | rrs94/gatsby | 2a7a8710ff579dcf4f3489d4e7b995dfb8de298c | [
"MIT"
] | 1 | 2018-08-16T06:09:40.000Z | 2018-08-16T06:09:40.000Z | ---
date: January 3, 2018
demo: http://portfolio-v3.surge.sh/
repo: https://github.com/amandeepmittal/gatsby-portfolio-v3
description: n/a
tags:
- portfolio
features:
- Single Page, Timeline View
- A portfolio Developers and Product launchers
- Bring in Data, plug-n-play
- Responsive Design, optimized for Mobile devices
- Seo Friendly
- Uses Flexbox
---
| 22.25 | 59 | 0.755618 | eng_Latn | 0.528331 |
ce2a1f2a5f0fec548207aa9e899fb860559eda21 | 3,775 | md | Markdown | docs/ubuntu_18_04_lts_setup.md | graphistry/graphistry-cli | 1c92ba124998f988ac13b8f299f20145c9d1543c | [
"BSD-3-Clause"
] | 13 | 2018-05-13T00:30:00.000Z | 2022-01-09T06:38:18.000Z | docs/ubuntu_18_04_lts_setup.md | graphistry/graphistry-cli | 1c92ba124998f988ac13b8f299f20145c9d1543c | [
"BSD-3-Clause"
] | 7 | 2018-08-01T17:18:29.000Z | 2021-04-07T18:25:21.000Z | docs/ubuntu_18_04_lts_setup.md | graphistry/graphistry-cli | 1c92ba124998f988ac13b8f299f20145c9d1543c | [
"BSD-3-Clause"
] | 3 | 2018-10-02T17:16:21.000Z | 2021-07-30T20:07:25.000Z | # Ubuntu 18.04 LTS manual configuration
For latest test version of scripts, see your Graphistry release's folder `etc/scripts`.
# Warning
We do *not* recommend manually installing the environment dependencies. Instead, use a Graphistry-managed Cloud Marketplace instance, a prebuilt cloud image, or another partner-supplied starting point.
However, sometimes a manual installation is necessary. Use this script as a reference. For more recent versions, check your Graphistry distribution's `etc/scripts` folder, and its links to best practices.
# About
The reference script below was last tested with an Azure Ubuntu 18.04 LTS AMI on NC series.
* Nvidia driver 430.26
* CUDA 10.2
* Docker CE 19.03.1
* docker-compose 1.24.1
* nvidia-container 1.0.4
# Manual environment configuration
Each subsection ends with a test command.
```
###################
# #
# <3 <3 <3 <3 #
# #
###################
sudo apt update
sudo apt upgrade -y
###################
# #
# Nvidia driver #
# #
###################
sudo add-apt-repository -y ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt install -y nvidia-driver-430
sudo reboot
nvidia-smi
###################
# #
# Docker 19.03+ #
# #
###################
#apt is 18, so go official
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt install -y docker-ce=5:19.03.1~3-0~ubuntu-bionic
sudo systemctl start docker
sudo systemctl enable docker
sudo docker --version
sudo docker run hello-world
####################
# #
# docker-compose #
# #
####################
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
####################
# #
# nvidia runtime #
# #
####################
### Sometimes needed
#curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add - && sudo apt update
#distribution=$(. /etc/os-release;echo $ID$VERSION_ID) && curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list && sudo apt-get update
#sudo apt-get install -y nvidia-container-runtime
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) && echo $distribution
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
#_not_ default runtime
sudo docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
####################
# #
# nvidia default #
# #
####################
# Nvidia docker as default runtime (needed for docker-compose)
sudo apt-get install -y vim
sudo vim /etc/docker/daemon.json
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
}
sudo systemctl restart docker
sudo docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
sudo docker run --rm nvidia/cuda nvidia-smi
```
| 27.554745 | 250 | 0.635497 | eng_Latn | 0.322825 |
ce2a36c64b5d436b4b120a3e3b89ee729197a546 | 32 | md | Markdown | README.md | lRoshi/trabajo | b724d31dd8ed2cb65100fa7293ed381db03d4b1e | [
"Unlicense"
] | null | null | null | README.md | lRoshi/trabajo | b724d31dd8ed2cb65100fa7293ed381db03d4b1e | [
"Unlicense"
] | null | null | null | README.md | lRoshi/trabajo | b724d31dd8ed2cb65100fa7293ed381db03d4b1e | [
"Unlicense"
] | null | null | null | # trabajo
Work for the prof
| 10.666667 | 21 | 0.78125 | spa_Latn | 0.999977 |
ce2acee5a8e9969bd3a40c162ebf875038698d40 | 6,631 | md | Markdown | README.md | GinSanaduki/Let_us_RSA_on_Bourne_Shell | ef851ce556ab5d1f32709c1336c17ea12ef2a2f4 | [
"BSD-3-Clause"
] | null | null | null | README.md | GinSanaduki/Let_us_RSA_on_Bourne_Shell | ef851ce556ab5d1f32709c1336c17ea12ef2a2f4 | [
"BSD-3-Clause"
] | null | null | null | README.md | GinSanaduki/Let_us_RSA_on_Bourne_Shell | ef851ce556ab5d1f32709c1336c17ea12ef2a2f4 | [
"BSD-3-Clause"
] | null | null | null | # Let_us_RSA_on_Bourne_Shell
Experience how RSA works on UNIX (all with standard UNIX commands)
* Once I tidy this up I'll turn it into a proper shell script; for now it's just written down in Markdown.
* It is based on
https://qiita.com/jabba/items/e5d6f826d9a8f2cefd60
How public-key cryptography and the RSA cipher work - Qiita
and
https://blog.desumachi.tk/2017/10/17/%E3%82%B7%E3%82%A7%E3%83%AB%E8%8A%B8%E3%81%A7%E6%9C%80%E5%B0%8F%E5%85%AC%E5%80%8D%E6%95%B0%E3%83%BB%E6%9C%80%E5%A4%A7%E5%85%AC%E7%B4%84%E6%95%B0%E3%82%92%E6%B1%82%E3%82%81%E3%82%8B/
Finding the least common multiple and greatest common divisor with shell one-liners - Desumachi diary
which were used as references.
The original is written in Ruby; this version is written mostly in awk (gawk built with MPFR support, since the values overflow ordinary integers).
Suppose we want to encrypt the following file.
```bash
$ cat test.txt
Jabba the Hutto$
```
We want hexadecimal, so dump the file with xxd.
od is more of a hassle, so I'll think about that later...
```bash
cat test.txt | \
iconv -f "$(locale charmap)" -t utf32be | \
xxd -p | \
tr -d '\n' | \
awk -f test1.awk | \
sed -r 's/^0+/0x/' | \
xargs printf 'U+%04X\n' > XDD.txt
```
```awk
#!/usr/bin/gawk -f
# test1.awk
# awk -f test1.awk
BEGIN{
FS="";
}
{
for(i = 1; i <= NF; i++){
Remainder = i % 8;
if(i == NF || Remainder == 0){
printf("%s\n", $i);
} else {
printf("%s", $i);
}
}
}
```
```bash
$ cat XDD.txt
U+004A
U+0061
U+0062
U+0062
U+0061
U+0020
U+0074
U+0068
U+0065
U+0020
U+0048
U+0075
U+0074
U+0074
$
```
To keep the example simple, let the two "very large" primes be p = 7 and q = 19.
I'll write up how to find primes some other time...
```bash
$ p=7
$ q=19
$
$ p_minus1=$((p - 1))
$ q_minus1=$((q - 1))
$ echo $p_minus1
6
$ echo $q_minus1
18
$
```
* Use the yes command to generate the two numbers, space-separated, forever
* Use awk to multiply each number by the current line number and print them separated by a newline
* Use awk to print the first value that appears a second time and then exit
```bash
$ r=`yes $p_minus1 $q_minus1 | \
awk '{print $1*NR RS $2*NR}' | \
awk 'a[$1]++{print;exit;}'`
$ echo $r
18
```
* The public key must be greater than 1 and smaller than L (= 18).
* It must also have a greatest common divisor of 1 with L, i.e. be coprime to L.
* Here the public key is called public.
```bash
# This runs in parallel, but note that it still takes a fair amount of time
$ public=`seq 2 $r | \
awk -f test3.awk -v Max=$r | \
xargs -P 0 -r -I{} sh -c '{}' | \
sort -k 2n,2 -k 1n,1 | \
head -n 1 | \
cut -f 1 -d ' '`
$ echo $public
5
$
```
```awk
#!/usr/bin/gawk -f
# test3.awk
# awk -f test3.awk -v Max=$r
BEGIN{
Max = Max + 0;
if(Max < 2){
exit 99;
}
}
{
$0 = $0 + 0;
print "yes "$0" "Max" | awk -f test4.awk | grep -Fv --line-buffered . | awk -f test5.awk | awk -f test6.awk -v Disp="$0;
}
```
```awk
#!/usr/bin/gawk -f
# test4.awk
# awk -f test4.awk
{
print $1/NR RS $2/NR;
}
```
```awk
#!/usr/bin/gawk -f
# test5.awk
# awk -f test5.awk
a[$1]++{print;exit;}
```
* The public-key operation is therefore: raise to the power $public and take the remainder of division by $p_q.
```bash
$ p_q=$((p * q))
$ echo $p_q
133
$
```
* The condition on the private key is:
* private * public = L * N + 1 for some integer N.
* Here the private key is called private.
```bash
$ private=`seq 2 $r | awk -f test7.awk -v Max=$r -v Public=$public`
$ echo $private
11
$
```
```awk
#!/usr/bin/gawk -f
# test7.awk
# awk -f test7.awk -v Max=$r -v Public=$public
BEGIN{
Max = Max + 0;
Public = Public + 0;
}
{
$0 = $0 + 0;
Public_ColZero = Public * $0;
Remainder = Public_ColZero % Max;
if(Remainder == 1){
print;
exit;
}
}
```
* Now encrypt the "Jabba the Hutt" text from the beginning with the public key (public = 5).
Each original number is raised to the 5th power and reduced modulo 133.
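(Added cross-check, not part of the original write-up.) The toy key pair can be verified with a few lines of Python before running the gawk pipeline below:

```python
from math import gcd

p, q = 7, 19
n = p * q                                     # 133
L = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(6, 18) = 18
e, d = 5, 11
assert gcd(e, L) == 1 and (e * d) % L == 1    # valid public/private pair

m = 0x4A                        # 'J' from the dump above
c = pow(m, e, n)                # 74**5 % 133 == 44, matching the log
assert pow(c, d, n) == m        # decryption recovers the original value
```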
```bash
$ /usr/bin/gawk -M -f test8.awk -v Max=$p_q -v Exponentiation=$public Conv_XDD.csv | \
tr -d '\n' | \
awk '{print substr($0,1,length($0) - 1);}' > Encrypt.csv
# stderr
i : 1, $i : 74
Exponentiation : 5
Exp : 2219006624
Max : 133
Remainder : 44
i : 2, $i : 97
Exponentiation : 5
Exp : 8587340257
Max : 133
Remainder : 13
i : 3, $i : 98
Exponentiation : 5
Exp : 9039207968
Max : 133
Remainder : 91
i : 4, $i : 98
Exponentiation : 5
Exp : 9039207968
Max : 133
Remainder : 91
i : 5, $i : 97
Exponentiation : 5
Exp : 8587340257
Max : 133
Remainder : 13
i : 6, $i : 32
Exponentiation : 5
Exp : 33554432
Max : 133
Remainder : 128
i : 7, $i : 116
Exponentiation : 5
Exp : 21003416576
Max : 133
Remainder : 51
i : 8, $i : 104
Exponentiation : 5
Exp : 12166529024
Max : 133
Remainder : 111
i : 9, $i : 101
Exponentiation : 5
Exp : 10510100501
Max : 133
Remainder : 5
i : 10, $i : 32
Exponentiation : 5
Exp : 33554432
Max : 133
Remainder : 128
i : 11, $i : 72
Exponentiation : 5
Exp : 1934917632
Max : 133
Remainder : 116
i : 12, $i : 117
Exponentiation : 5
Exp : 21924480357
Max : 133
Remainder : 129
i : 13, $i : 116
Exponentiation : 5
Exp : 21003416576
Max : 133
Remainder : 51
i : 14, $i : 116
Exponentiation : 5
Exp : 21003416576
Max : 133
Remainder : 51
$cat Encrypt.csv
44,13,91,91,13,128,51,111,5,128,116,129,51,51
$
```
```awk
#!/usr/bin/gawk -f
# test8.awk
# /usr/bin/gawk -M -f test8.awk -v Max=$p_q -v Exponentiation=$public Conv_XDD.csv > Encrypt.csv
# /usr/bin/gawk -M -f test8.awk -v Max=$p_q -v Exponentiation=$private Encrypt.csv > Decrypt.csv
BEGIN{
FS = ",";
Max = Max + 0;
Exponentiation = Exponentiation + 0;
# print Max;
# print Exponentiation;
}
{
delete Arrays;
for(i = 1; i <= NF; i++){
print "i : "i", $i : "$i > "/dev/stderr";
Exp = $i ** Exponentiation;
print "Exponentiation : "Exponentiation > "/dev/stderr";
print "Exp : "Exp > "/dev/stderr";
Remainder = Exp % Max;
print "Max : "Max > "/dev/stderr";
print "Remainder : "Remainder > "/dev/stderr";
print Remainder"," > "/dev/stdout";
}
}
```
* For the reverse direction, decrypt with the private key 11 and the modulus 133.
* Each ciphertext number is raised to the 11th power and reduced modulo 133.
```bash
$ /usr/bin/gawk -M -f test8.awk -v Max=$p_q -v Exponentiation=$private Encrypt.csv | \
tr -d '\n' | \
awk '{print substr($0,1,length($0) - 1);}' > Decrypt.csv
# stderr
i : 1, $i : 44
Exponentiation : 11
Exp : 1196683881290399744
Max : 133
Remainder : 74
i : 2, $i : 13
Exponentiation : 11
Exp : 1792160394037
Max : 133
Remainder : 97
i : 3, $i : 91
Exponentiation : 11
Exp : 3543686674874777831491
Max : 133
Remainder : 98
i : 4, $i : 91
Exponentiation : 11
Exp : 3543686674874777831491
Max : 133
Remainder : 98
i : 5, $i : 13
Exponentiation : 11
Exp : 1792160394037
Max : 133
Remainder : 97
i : 6, $i : 128
Exponentiation : 11
Exp : 151115727451828646838272
Max : 133
Remainder : 32
i : 7, $i : 51
Exponentiation : 11
Exp : 6071163615208263051
Max : 133
Remainder : 116
i : 8, $i : 111
Exponentiation : 11
Exp : 31517572945366073781711
Max : 133
Remainder : 104
i : 9, $i : 5
Exponentiation : 11
Exp : 48828125
Max : 133
Remainder : 101
i : 10, $i : 128
Exponentiation : 11
Exp : 151115727451828646838272
Max : 133
Remainder : 32
i : 11, $i : 116
Exponentiation : 11
Exp : 51172646912339021398016
Max : 133
Remainder : 72
i : 12, $i : 129
Exponentiation : 11
Exp : 164621598066108688876929
Max : 133
Remainder : 117
i : 13, $i : 51
Exponentiation : 11
Exp : 6071163615208263051
Max : 133
Remainder : 116
i : 14, $i : 51
Exponentiation : 11
Exp : 6071163615208263051
Max : 133
Remainder : 116
$ cat Decrypt.csv
74,97,98,98,97,32,116,104,101,32,72,117,116,116
$
```
* Compare the result with the Unicode code points produced from the original dump
```bash
$ diff -q Decrypt.csv Conv_XDD.csv
$ echo $?
0
$
```
# Well, that's about it.
# Nice that this works even with Bourne-shell tooling.
| 15.788095 | 220 | 0.638667 | yue_Hant | 0.28132 |
ce2c588f1865c11ceb3fa0280b1280fdf480022e | 131 | md | Markdown | release.md | SecurityRAT/SecurityRAT | c0dcdb774308ef7b36f225be74b33b1aec7c98ac | [
"Apache-2.0"
] | 147 | 2016-05-30T16:28:31.000Z | 2022-03-30T15:34:20.000Z | release.md | SecurityRAT/SecurityRAT | c0dcdb774308ef7b36f225be74b33b1aec7c98ac | [
"Apache-2.0"
] | 160 | 2016-07-05T14:28:38.000Z | 2022-03-02T12:58:15.000Z | release.md | SecurityRAT/SecurityRAT | c0dcdb774308ef7b36f225be74b33b1aec7c98ac | [
"Apache-2.0"
] | 48 | 2016-05-06T10:50:03.000Z | 2022-03-30T13:05:57.000Z | ## Bug Fixes
- Fixed a minor issue where the link displayed for authenticating to JIRA (during import) was the REST API link instead of the origin link.
| 32.75 | 116 | 0.770992 | eng_Latn | 0.999414 |
ce2e536076d1457af82c37bd8fd3e03cc4a2b4ea | 1,075 | md | Markdown | content/articles/content/Laser-bending.md | 4m-association/4m-association | 2a5d6a7c539d927dffefdb1a2f4c706e0efa7ba2 | [
"RSA-MD"
] | 1 | 2020-11-18T16:20:10.000Z | 2020-11-18T16:20:10.000Z | content/articles/content/Laser-bending.md | 4m-association/4m-association | 2a5d6a7c539d927dffefdb1a2f4c706e0efa7ba2 | [
"RSA-MD"
] | null | null | null | content/articles/content/Laser-bending.md | 4m-association/4m-association | 2a5d6a7c539d927dffefdb1a2f4c706e0efa7ba2 | [
"RSA-MD"
] | 2 | 2021-09-10T14:15:53.000Z | 2021-11-30T09:28:38.000Z | title: Laser bending
date: 2009-11-17
tags: metals-processing
Technology suitable for small quantity production
Using a laser in forming technology enables prototype production of freeform sheet-metal parts without any solid tool. The laser beam is applied to the workpiece, locally heating the sheet. After heating, two possible mechanisms appear, depending on the energy input. If only the surface layer of the sheet is locally heated, then on cooling the tensile stress in the surface layer leads to a bending moment towards the laser (temperature gradient mechanism). The second mechanism leads to a reduction in sheet length: if the sheet is heated through its full thickness, the material contracts on cooling, shortening the sheet. Both mechanisms are applicable at the microscale, while the first is more frequently used industrially, e.g. for optical sensor adjustment in compact disc drives. If the energy density is high enough and the pulse duration short, even metallic foils can be bent. | 153.571429 | 959 | 0.816744 | eng_Latn | 0.999822 |
ce2f059c27b73a5418022484344ec8ad297dd5a6 | 1,639 | md | Markdown | catalog/tobiiro-shadow/en-US_tobiiro-shadow.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/tobiiro-shadow/en-US_tobiiro-shadow.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/tobiiro-shadow/en-US_tobiiro-shadow.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Tobiiro Shadow
![tobiiro-shadow](https://cdn.myanimelist.net/images/manga/1/24840.jpg)
- **type**: manga
- **volumes**: 4
- **original-name**: 鳶色シャドウ
- **start-date**: 1992-01-19
## Tags
- mystery
- romance
- supernatural
- josei
## Authors
- Hara
- Chieko (Story & Art)
## Synopsis
From Emily's Random Shoujo Manga Page:
It all begins long ago in the past. In a place near an ancient wall. The wall looks more like a cliff, really. Anyway, two cute children grew up together. Sumire is a gentle girl, and Yasuhira is a bossy boy, but he loves Sumire. When Yasuhira proposed marriage to Sumire, it was more like the confirmation of a long-held assumption that they would eventually wed. But they did lot live happily ever after. We see a bloodied sword, and someone crying…
Flash forward to modern times. These days, a school has been built near the ancient wall. It is such a tall, imposing wall/cliff thing, that naturally the students have made up ghost stories about it. Some of the girls are determined to go to the wall, where they believe they can see some real ghosts. A classmate, Sumire, is unwillingly dragged along. Sumire tries to get out of going, but her friends insist. When they get to the wall, Sumire feels an unusual wind, and then is confronted with a startling sight — a boy crying. He is just standing there, tears pouring down his cheeks. Before she can say anything, her friends call her away. The boy looks startled to see her. Sumire realizes he looks familiar… is that her classmate Katsuragi-kun?
## Links
- [My Anime list](https://myanimelist.net/manga/16771/Tobiiro_Shadow)
| 51.21875 | 751 | 0.747407 | eng_Latn | 0.999371 |
ce2f11cf7e779975c66ae5548a5fcb59b58d93e0 | 1,114 | md | Markdown | tools/git/README.md | yudatun/documentation | bd37195aa8385b7b334a440c4efa86e84ab968af | [
"Apache-2.0"
] | null | null | null | tools/git/README.md | yudatun/documentation | bd37195aa8385b7b334a440c4efa86e84ab968af | [
"Apache-2.0"
] | null | null | null | tools/git/README.md | yudatun/documentation | bd37195aa8385b7b334a440c4efa86e84ab968af | [
"Apache-2.0"
] | null | null | null | Git
========================================
build with sources
----------------------------------------
The root cause of the problem was the gnutls package: it behaves strangely behind a proxy,
while openssl works fine even on a weak network. The workaround is therefore to compile
git against openssl. To do this, run the following commands:
```
$ sudo apt-get install build-essential fakeroot dpkg-dev
$ mkdir ~/git-openssl
$ cd ~/git-openssl
$ sudo apt-get source git
$ sudo apt-get build-dep git
$ sudo apt-get install libcurl4-openssl-dev
$ dpkg-source -x git_1.7.9.5-1.dsc
$ cd git-1.7.9.5
```
(Remember to replace 1.7.9.5 with the actual version of git in your system.)
Then, edit debian/control file (run the command: gksu gedit debian/control) and
replace all instances of libcurl4-gnutls-dev with libcurl4-openssl-dev.
Then build the package (if it's failing on test, you can remove the line TEST=test from the file debian/rules):
```
$ sudo dpkg-buildpackage -rfakeroot -b
```
Install new package:
```
i386: sudo dpkg -i ../git_1.7.9.5-1_i386.deb
x86_64: sudo dpkg -i ../git_1.7.9.5-1_amd64.deb
```
| 27.85 | 111 | 0.682226 | eng_Latn | 0.866732 |
ce2f63ec33d97be7e2c2ad019a679e1a84acc475 | 436 | md | Markdown | _pages/projects.md | ahkhalwai/ahkhalwai.github.io | 3334b05c25d5673feebdaf0464118c790c1fcac1 | [
"MIT"
] | 3 | 2021-05-29T16:27:16.000Z | 2021-06-08T15:35:22.000Z | _pages/projects.md | ahkhalwai/ahkhalwai.github.io | 3334b05c25d5673feebdaf0464118c790c1fcac1 | [
"MIT"
] | null | null | null | _pages/projects.md | ahkhalwai/ahkhalwai.github.io | 3334b05c25d5673feebdaf0464118c790c1fcac1 | [
"MIT"
] | 2 | 2021-06-08T13:21:45.000Z | 2021-06-08T13:26:44.000Z | ---
layout: archive
title: ""
permalink: /projects/
author_profile: true
---
{% include base_path %}
{% for post in site.projects %}
{% include archive-single.html %}
{% endfor %}
<br>
Project Visitors
![Project Visitors](https://visitor-badge.laobi.icu/badge?page_id=ahkhalwai.ahkhalwai.github.io/projects/)
Total Visitor
![Total Visitors](https://visitor-badge.laobi.icu/badge?page_id=ahkhalwai.ahkhalwai.github.io/)
<br>
| 16.148148 | 106 | 0.71789 | eng_Latn | 0.370239 |
ce31f3ca46f0d69d68b3e518909ae3d780a6e401 | 194 | md | Markdown | README.md | ritikjain626/Weather-app | 17ed0e0dc276ddf061fb88829705221783079ebd | [
"MIT"
] | null | null | null | README.md | ritikjain626/Weather-app | 17ed0e0dc276ddf061fb88829705221783079ebd | [
"MIT"
] | null | null | null | README.md | ritikjain626/Weather-app | 17ed0e0dc276ddf061fb88829705221783079ebd | [
"MIT"
] | null | null | null | # Weather-app
A weather app which describes the weather conditions for a given location. Also, you can search for any city in the world.
You can check it out at https://ritikjain626.github.io/Weather-app/
| 32.333333 | 109 | 0.768041 | eng_Latn | 0.998125 |
ce3275576ecb713d9b1b429417e7846a10b0b8ed | 795 | md | Markdown | issues/add-multi-instance-support.md | sigsum/sigsum-log-go | 1594b0830d8cd18ab158dfffb64dd3c219da8f10 | [
"Apache-2.0"
] | null | null | null | issues/add-multi-instance-support.md | sigsum/sigsum-log-go | 1594b0830d8cd18ab158dfffb64dd3c219da8f10 | [
"Apache-2.0"
] | 7 | 2021-06-23T21:41:06.000Z | 2021-12-10T20:54:09.000Z | issues/add-multi-instance-support.md | sigsum/sigsum-log-go | 1594b0830d8cd18ab158dfffb64dd3c219da8f10 | [
"Apache-2.0"
] | null | null | null | **Title:** Add multi-instance support </br>
**Date:** 2021-12-09 </br>
# Summary
Add support for multiple active sigsum-log-go instances for the same log.
# Description
A sigsum log accepts add-cosignature requests to make the final cosigned tree
head available. Right now a single active sigsum-log-go instance is assumed per
log, so that there is no need to coordinate cosigned tree heads among instances.
Some log operators will likely want to run multiple instances of both the
Trillian components and sigsum-log-go, backed by a managed database setup.
Trillian supports this, but sigsum-log-go does not due to lack of coordination.
This issue requires both design considerations and an implementation of the
`StateManager` interface to support multi-instance setups of sigsum-log-go.
| 44.166667 | 80 | 0.792453 | eng_Latn | 0.997645 |
ce329dc4726ad8bbdd7084e43755aa2e2a567c23 | 291 | markdown | Markdown | src/pages/articles/2011-02-08-wolverine-or-2-batmen.markdown | broderboy/timbroder.com-sculpin | 7982298523ac56db8c45fa4d18f8a2801c7c0445 | [
"MIT"
] | 2 | 2015-12-21T00:49:21.000Z | 2019-03-03T10:20:24.000Z | src/pages/articles/2011-02-08-wolverine-or-2-batmen.markdown | timbroder/timbroder.com-sculpin | 7982298523ac56db8c45fa4d18f8a2801c7c0445 | [
"MIT"
] | 16 | 2021-03-01T20:47:45.000Z | 2022-03-08T23:00:02.000Z | src/pages/articles/2011-02-08-wolverine-or-2-batmen.markdown | broderboy/timbroder.com-sculpin | 7982298523ac56db8c45fa4d18f8a2801c7c0445 | [
"MIT"
] | null | null | null | ---
author: tim
comments: true
date: 2011-02-08 20:59:49+00:00
dsq_thread_id: '242777682'
layout: post
link: ''
slug: wolverine-or-2-batmen
title: Wolverine? Or 2 Batmen?
wordpress_id: 824
category: Comics
---
![](http://www.igeektrooper.com/wp-
content/uploads/2011/02/wolverinebatmen.jpg) | 19.4 | 44 | 0.738832 | eng_Latn | 0.097886 |
ce32b2abac1b510097880ea288eebf9fad8bd227 | 859 | md | Markdown | technologies.md | krsiakdaniel/movies-old | 5214f9f92a8d1d507ebe3cc768bd1e5d652533e0 | [
"MIT"
] | null | null | null | technologies.md | krsiakdaniel/movies-old | 5214f9f92a8d1d507ebe3cc768bd1e5d652533e0 | [
"MIT"
] | null | null | null | technologies.md | krsiakdaniel/movies-old | 5214f9f92a8d1d507ebe3cc768bd1e5d652533e0 | [
"MIT"
] | null | null | null | # ⚙️ Technologies
[Dependencies](https://github.com/krsiakdaniel/movies/network/dependencies), technologies, tools and services used to build this app.
## Core
- [React](https://reactjs.org/)
- [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript)
- [TypeScript](https://www.typescriptlang.org/)
## Design
- [Chakra UI](https://chakra-ui.com/getting-started)
- [Emotion](https://emotion.sh/docs/introduction)
## API
- [TMDb](https://developers.themoviedb.org/3/getting-started/introduction)
## Services
- [Netlify](https://app.netlify.com/sites/movies-krsiak/deploys)
- [Codacy](https://app.codacy.com/manual/krsiakdaniel/movies/dashboard?bid=17493411)
- [Smartlook](https://www.smartlook.com/)
- [Cypress Dashboard](https://dashboard.cypress.io/projects/tcj8uu/runs)
- [Uptime Robot status](https://stats.uptimerobot.com/7DxZ0imzV4)
| 31.814815 | 133 | 0.743888 | yue_Hant | 0.443045 |
ce33b9fb42d106d6ac53b8ea089f5d92030e8820 | 3,171 | md | Markdown | sdk-api-src/content/devicetopology/nf-devicetopology-iperchanneldblevel-getlevel.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/devicetopology/nf-devicetopology-iperchanneldblevel-getlevel.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/devicetopology/nf-devicetopology-iperchanneldblevel-getlevel.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:devicetopology.IPerChannelDbLevel.GetLevel
title: IPerChannelDbLevel::GetLevel (devicetopology.h)
description: The GetLevel method gets the volume level, in decibels, of the specified channel.
helpviewer_keywords: ["GetLevel","GetLevel method [Core Audio]","GetLevel method [Core Audio]","IPerChannelDbLevel interface","IPerChannelDbLevel interface [Core Audio]","GetLevel method","IPerChannelDbLevel.GetLevel","IPerChannelDbLevel::GetLevel","IPerChannelDbLevelGetLevel","coreaudio.iperchanneldblevel_getlevel","devicetopology/IPerChannelDbLevel::GetLevel"]
old-location: coreaudio\iperchanneldblevel_getlevel.htm
tech.root: CoreAudio
ms.assetid: afc76c80-1656-4f06-8024-c9b041f52e64
ms.date: 12/05/2018
ms.keywords: GetLevel, GetLevel method [Core Audio], GetLevel method [Core Audio],IPerChannelDbLevel interface, IPerChannelDbLevel interface [Core Audio],GetLevel method, IPerChannelDbLevel.GetLevel, IPerChannelDbLevel::GetLevel, IPerChannelDbLevelGetLevel, coreaudio.iperchanneldblevel_getlevel, devicetopology/IPerChannelDbLevel::GetLevel
req.header: devicetopology.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows Vista [desktop apps only]
req.target-min-winversvr: Windows Server 2008 [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- IPerChannelDbLevel::GetLevel
- devicetopology/IPerChannelDbLevel::GetLevel
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- Devicetopology.h
api_name:
- IPerChannelDbLevel.GetLevel
---
# IPerChannelDbLevel::GetLevel
## -description
The <b>GetLevel</b> method gets the volume level, in decibels, of the specified channel.
## -parameters
### -param nChannel [in]
The channel number. If the audio stream has <i>N</i> channels, the channels are numbered from 0 to <i>N</i>– 1. To get the number of channels in the stream, call the <a href="/windows/desktop/api/devicetopology/nf-devicetopology-iperchanneldblevel-getchannelcount">IPerChannelDbLevel::GetChannelCount</a> method.
### -param pfLevelDB [out]
Pointer to a <b>float</b> variable into which the method writes the volume level, in decibels, of the specified channel.
## -returns
If the method succeeds, it returns S_OK. If it fails, possible return codes include, but are not limited to, the values shown in the following table.
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_INVALIDARG</b></dt>
</dl>
</td>
<td width="60%">
Parameter <i>nChannel</i> is out of range.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_POINTER</b></dt>
</dl>
</td>
<td width="60%">
Pointer <i>pfLevelDB</i> is <b>NULL</b>.
</td>
</tr>
</table>
## -see-also
<a href="/windows/desktop/api/devicetopology/nn-devicetopology-iperchanneldblevel">IPerChannelDbLevel Interface</a>
<a href="/windows/desktop/api/devicetopology/nf-devicetopology-iperchanneldblevel-getchannelcount">IPerChannelDbLevel::GetChannelCount</a> | 30.490385 | 364 | 0.768212 | eng_Latn | 0.339183 |
ce33db7ac2c3cba752ece227e22c5e0f70866441 | 3,262 | md | Markdown | README.md | DRSchlaubi/mikmusic | 738c5f9682cfefa2f496a94e0cb7d8f4604e865f | [
"MIT"
] | 13 | 2021-10-03T10:26:57.000Z | 2021-11-07T07:32:19.000Z | README.md | DRSchlaubi/mikmusic | 738c5f9682cfefa2f496a94e0cb7d8f4604e865f | [
"MIT"
] | 3 | 2021-10-30T15:17:15.000Z | 2021-11-03T16:24:18.000Z | README.md | DRSchlaubi/mikmusic | 738c5f9682cfefa2f496a94e0cb7d8f4604e865f | [
"MIT"
] | 2 | 2021-10-04T19:34:16.000Z | 2021-10-05T14:31:27.000Z | # Mik Bot
[![GitHub Workflow Status](https://img.shields.io/github/workflow/status/DRSchlaubi/mikbot/CI?logo=github&style=flat-square)](https://github.com/DRSchlaubi/mikbot/actions/workflows/ci.yaml)
[![Gradle Plugin Portal](https://img.shields.io/gradle-plugin-portal/v/dev.schlaubi.mikbot.gradle-plugin?logo=gradle&style=flat-square)](https://plugins.gradle.org/plugin/dev.schlaubi.mikbot.gradle-plugin)
[![Latest Version](https://img.shields.io/maven-metadata/v?logo=apache%20maven&metadataUrl=https%3A%2F%2Fschlaubi.jfrog.io%2Fartifactory%2Fmikbot%2Fdev%2Fschlaubi%2Fmikbot-api%2Fmaven-metadata.xml&style=flat-square)](https://schlaubi.jfrog.io/ui/native/mikbot/dev/schlaubi/mikbot-api/)
[![Made with Kotlin](https://img.shields.io/badge/Made%20with-Kotlin-blueviolet?style=flat-square&logo=kotlin)](https://kotlinlang.org)
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/DRSchlaubi/mikbot)
A modular framework for building Discord bots in [Kotlin](https://kotlinlang.org)
using [Kordex](https://github.com/Kord-Extensions/kord-extensions/) and [Kord](https://github.com/kordlib).
**If you are here for mikmusic, click [here](music) and [there](mikmusic-bot).**
**If you are here for Votebot, click [here](votebot).**
# Help translating this project
<a href="https://hosted.weblate.org/engage/mikbot/">
<img src="https://hosted.weblate.org/widgets/mikbot/-/287x66-grey.png" alt="Übersetzungsstatus" />
</a>
## Deployment
For a full explanation on how to deploy the bot yourself take a look at [this](./SETUP.md)
### Requirements
- [Sentry](https://sentry.io) (Optional)
- [Docker](https://docs.docker.com/get-docker/)
- [Docker Compose](https://docs.docker.com/compose/install/)
## Example Environment file
<details>
<summary>.env</summary>
```properties
ENVIRONMENT=PRODUCTION
SENTRY_TOKEN=<>
DISCORD_TOKEN=<>
MONGO_URL=mongodb://bot:bot@mongo
MONGO_DATABASE=bot_prod
LOG_LEVEL=DEBUG
BOT_OWNERS=416902379598774273
OWNER_GUILD=<>
UPDATE_PLUGINS=false #if you want to disable the auto updater
```
</details>
### Starting the bot
Docker image from: https://github.com/DRSchlaubi/mikmusic/pkgs/container/mikmusic%2Fbot
- Clone this repo
- Run `docker-compose up -d`
# Binary repositories
The bot has two repositories for binaries the [binary-repo](https://storage.googleapis.com/mikbot-binaries) containing
the bots binaries and the [plugin-repo](https://storage.googleapis.com/mikbot-plugins)
([index](https://storage.googleapis.com/mikbot-plugins/plugins.json)) normally you should not need to interact with
these repositories directly.
# For bot developers
A JDK is required; it can be obtained [here](https://adoptium.net) (recommended for Windows but works everywhere)
and [here](https://sdkman.io/) (Recommended for Linux/Mac)
Please set the `ENVIRONMENT` env var to `DEVELOPMENT` whilst developing the bot.
Also set a `TEST_GUILD` environment variable, for local commands
If you are making any changes to the bot's official plugins (aka the plugins in this repo),
please run the `rebuild-plugin-dependency-list.sh` script first, otherwise your plugins won't be loaded properly
# For plugin developers
You can find a detailed guide on how to write plugins [here](PLUGINS.md)
| 41.291139 | 285 | 0.767627 | eng_Latn | 0.298089 |
ce34392a71d82c7ca233b94eaeb6db13443977f9 | 215 | md | Markdown | README.md | rafaelsantos-dev/Calculadora-Icms | 23b1f0d746b8f63b36df82310738dae44082297a | [
"MIT"
] | null | null | null | README.md | rafaelsantos-dev/Calculadora-Icms | 23b1f0d746b8f63b36df82310738dae44082297a | [
"MIT"
] | null | null | null | README.md | rafaelsantos-dev/Calculadora-Icms | 23b1f0d746b8f63b36df82310738dae44082297a | [
"MIT"
] | null | null | null | # Calculadora Icms
Tax substitution (ICMS-ST) calculator - BA <> SE
Development of a computational solution to help with tax calculation, initially between the states of Bahia and Sergipe in Brazil. | 53.75 | 144 | 0.809302 | por_Latn | 0.999915 |
ce34523c1a69a6e675777595133d2b00186ac122 | 8,251 | md | Markdown | docs/source/expressive_power.md | rix0rrr/gcl | 4e3bccc978a9c60aaaffd20f6f291c4d23775cdf | [
"MIT"
] | 55 | 2015-03-26T22:05:59.000Z | 2022-03-18T07:43:33.000Z | docs/source/expressive_power.md | rix0rrr/gcl | 4e3bccc978a9c60aaaffd20f6f291c4d23775cdf | [
"MIT"
] | 36 | 2015-04-16T09:30:46.000Z | 2020-11-19T20:22:32.000Z | docs/source/expressive_power.md | rix0rrr/gcl | 4e3bccc978a9c60aaaffd20f6f291c4d23775cdf | [
"MIT"
] | 9 | 2015-04-28T08:39:50.000Z | 2021-05-07T08:37:21.000Z | Expressive Power
================
It's the eternal problem of a DSL intended for a limited purpose: such a
language then gets more and more features, to gain more and more expressive
power, until finally the language is fully generic and any computable function
can be expressed in it.
> "In a heart beat, you're Turing complete!" -- Felienne Hermans
Not by design but by accident, GCL is actually one of those Turing complete
languages. It wasn't the intention, but because of the abstractive power of
tuples, lazy evaluation and recursion, GCL actually maps pretty closely onto the
Lambda Calculus, and is therefore also Turing complete.
Having said that, you should definitely not feel encouraged to (ab)use the
Turing completeness to do calculations inside your model. That is emphatically
_not_ what GCL was intended for. This section is more of an intellectual
curiosity, and should be treated as such.
Tuples are functions
--------------------
Tuples map very nicely onto functions; they can have any number of input and
output parameters. Of course, all of this is convention. But you can see how
this would work as I define the mother of all recursive functions, the Fibonacci
function:
fib = {
n;
n1 = n - 1;
n2 = n - 2;
value = if n == 0 then 0
else if n == 1 then 1
else (fib { n = n1 }).value + (fib { n = n2 }).value;
};
fib8 = (fib { n = 8 }).value;
And then:
$ gcl-print fib.gcl fib8
fib8
21
Hooray! Arbitrary computation through recursion!
A more elaborate example
------------------------
Any time you need a particular function, you can inject it from Python, _or_ you
could just write it directly in GCL. Need `string.join`? Got you covered:
string_join = {
list;
i = 0; # Hey, they're default arguments!
sep = ' ';
next_i = i + 1;
suffix = (string_join { inherit list sep; i = next_i }).value;
my_sep = if i > 0 then sep else '';
value = if has(list, i) then my_sep + list(i) + suffix else '';
};
praise = (string_join { list = ['Alonzo','would','be','proud']; }).value;
We make use of the lazy evaluation property here to achieve readability by
giving names to subparts of the computation: the key `suffix` actually only
makes sense if we're not at the end of the list yet, but we can give that
calculation a name anyway. The expression will only be evaluated when we pass
the `has(list, i)` test.
Multi-way relations
------------------
Because all keys are lazily evaluated and can be overridden, we can also encode
relationships between input and output parameters in both directions. The
_caller_ of our relation tuple can then determine which value they need. For
example:
Pythagoras = {
a = sqrt(c * c - b * b);
b = sqrt(c * c - a * a);
c = sqrt(a * a + b * b);
}
Right now we have a complete relationship between all values. Obviously, we
can't evaluate any field because that will yield an infinite recursion. But we
_can_ supply any two values to calculate the remaining one:
(Pythagoras { a = 3; b = 4}).c # 5
(Pythagoras { a = 5; c = 13}).b # 12
Inner tuples are closures
-------------------------
Just as tuples correspond to functions, nested tuples correspond to closures,
as they have a reference to the parent tuple at the moment it was evaluated.
For example, we can make a partially-applied tuple represents the capability of returning elements
from a matrix:
Matrix = {
matrix;
getter = {
x; y;
value = matrix y x;
};
};
PrintSquare = {
getter;
range = [0, 1, 2];
value = [[ (getter { inherit x y }).value for x in range] for y in range];
};
my_matrix = Matrix {
matrix = [
[8, 6, 12, 11, -3],
[20, 6, 8, 7, 7],
[9, 83, 8, 8, 30],
[3, 1, 20, -1, 21]
];
};
top_left = (PrintSquare { getter = my_matrix.getter }).value;
Let's do something silly
------------------------
Let's do something very useless: let's implement the Game of Life in GCL using
the techniques we've seen so far!
Our GCL file is going to load the current state of a board from a file and compute the next state of
the board--after applying all the GoL rules--into some output variable. If we then use a simple bash
script to pipe that output back into the input file, we can repeatedly invoke GCL to get some
animation going!
We'll make use of the fact that we can `include` JSON files directly, and that we can use `gcl2json`
to write some key back to JSON again.
Let's represent the board as an array of strings. That'll print nicely, which is
convenient because we don't have to invest a lot of effort into rendering. For example:
[
"............x....",
"...x.............",
"....x.......xxx..",
"..xxx.......x....",
".............x...",
".................",
".................",
"...x.x..........."
]
First we'll make a function to make ranges to iterate over.
# (range { n = 5 }).value == [0, 1, 2, 3, 4]
range = {
n; i = 0;
next_i = i + 1;
value = if i < n then [i] + (range { i = next_i; inherit n }).value else [];
};
Then we need a function to determine liveness. We'll expect a list of chars, either 'x' or '.', and
output another char.
# (liveness { me = 'x'; neighbours = ['x', 'x', 'x', '.', '.', '.'] }).next == 'x'
liveness = {
me; neighbours;
alive_neighbours = sum([1 for n in neighbours if n == 'x']);
alive = (me == 'x' and 2 <= alive_neighbours and alive_neighbours <= 3)
or (me == '.' and alive_neighbours == 3);
next = if alive then 'x' else '.';
};
On to the real meat! Let's find the neighbours of a cell given some coordinates:
find_neighbours = {
board; i; j;
cells = [
cell { x = i - 1; y = j - 1 },
cell { x = i; y = j - 1 },
cell { x = i + 1; y = j - 1 },
cell { x = i - 1; y = j },
cell { x = i + 1; y = j },
cell { x = i - 1; y = j + 1 },
cell { x = i; y = j + 1 },
cell { x = i + 1; y = j + 1 }
];
chars = [c.char for c in cells];
# Helper function for accessing cells
cell = {
x; y;
H = len board;
my_y = ((H + y) % H);
W = len (board my_y);
char = board (my_y) ((W + x) % W);
}
};
Now we can simply calculate the next state of the board given an input board:
next_board = {
board;
rows = (range { n = len board }).value;
value = [(row { inherit j }).value for j in rows];
row = {
j;
cols = (range { n = len board(j) }).value;
chars = [(cell { inherit i }).value for i in cols];
value = join(chars, '');
cell = {
i;
neighbours = (find_neighbours { inherit board i j }).chars;
me = board j i;
value = (liveness { inherit me neighbours }).next;
};
};
};
We've got everything! Now it's just a matter of tying the input and output together:
input = {
board = include 'board.json';
};
output = {
board = (next_board { board = input.board }).value;
};
That's it! We've got everything we need! Test whether everything is working by running:
$ gcl2json -r output.board game_of_life.gcl output.board
That should show the following:
[
"....x............",
"............x....",
"..x.x.......xx...",
"...xx.......x.x..",
"...x.............",
".................",
".................",
"................."
]
Hooray, it works!
For kicks and giggles, we can turn this into an animation by using `watch`, which will
run the same command over and over again and show its output:
$ watch -n 0 'gcl2json -r output.board game_of_life.gcl output.board | tee board2.json; mv board2.json board.json'
Fun, eh? :)
| 30.113139 | 118 | 0.564416 | eng_Latn | 0.993919 |
ce351d171acc6babb013ed536e93bb597475eb35 | 1,059 | md | Markdown | README.md | t-ishida/Zaolik | 0a8824cc334fed28f9195f3fa5b9a7bcad6a8f0e | [
"MIT"
] | 1 | 2017-06-01T06:09:53.000Z | 2017-06-01T06:09:53.000Z | README.md | t-ishida/Zaolik | 0a8824cc334fed28f9195f3fa5b9a7bcad6a8f0e | [
"MIT"
] | null | null | null | README.md | t-ishida/Zaolik | 0a8824cc334fed28f9195f3fa5b9a7bcad6a8f0e | [
"MIT"
] | null | null | null | # Zaolik
yet another PHP DI Container
inspired by Phalcon
## How To Use
```php
$container = \Zaolik\DIContainer::getInstance();
$databaseConfig = array (
'host' => 'localhost',
'user' => 'user',
'pass' => 'pass',
'database' => 'test',
);
$memcacheConfig = array (
'hosts' => 'localhost',
'port' => 11211,
);
$container->setFlyweight('mysqli', function () use ($databaseConfig) {
$mysql = new \mysqli($databaseConfig['host'], $databaseConfig['user'], $databaseConfig['pass']);
$mysql->select_db($config['database']);
return $mysql;
})->
setNew('DateTime', function ($time = null) {
return new \DateTime($time);
});
// new instance
$mysqli1 = $container->getFlyWieght('mysqli');
// flyweight
$mysqli2 = $container->getFlyWieght('mysqli');
echo $mysqli1 === $mysqli2 . "\n"
// now
echo $container->getNewInstance('DateTime') . "\n";
// yester day
echo $container->getNewInstance('DateTime', '-1 day') . "\n";
```
## License
This library is available under the MIT license. See the LICENSE file for more info.
| 22.0625 | 100 | 0.639282 | kor_Hang | 0.303454 |
ce35f9fe1dbe090afc5ef87c7c75e3fcfb2b8714 | 750 | md | Markdown | docs/server/mysql/utils.md | zhugy-cn/vue-press-blog | 5bf0cbafd9811d0bfe09725c3de11c3bda80d68e | [
"MIT"
] | 1 | 2019-08-24T02:49:05.000Z | 2019-08-24T02:49:05.000Z | docs/server/mysql/utils.md | zhugy-cn/vue-press-blog | 5bf0cbafd9811d0bfe09725c3de11c3bda80d68e | [
"MIT"
] | 17 | 2021-03-01T20:48:39.000Z | 2021-07-28T08:21:10.000Z | docs/server/mysql/utils.md | zhugy-cn/vue-press-blog | 5bf0cbafd9811d0bfe09725c3de11c3bda80d68e | [
"MIT"
] | null | null | null | # Navicat Premium 12 安装破解使用
## 安装软件
- [**下载 Navicat Premium 12**](https://www.navicat.com.cn/download/navicat-premium)
- [**下载 Navicat Premium 12 激活工具**](https://pan.baidu.com/s/1KUG0hM9SzgCnBzy4NuOj_Q)
- 安装软件(最好安装在默认盘符)
- 将下载好的`激活工具`移动到`Navicat`的安装目录(C:\Program Files\PremiumSoft\Navicat Premium 12)
- 运行`激活工具`,注意此时软件不能打开,点击`Path`
- 运行`Navicat`,弹出注册界面(如果没有弹出注册界面,手动在菜单打开:帮助->注册),然后选择版本和语言;然后点击注册机的`generate`按钮,注册码会自动填写到`Navicat`
- 点击`Navicat`注册界面的激活按钮,提示手动激活;点击手动激活,然后将得到的`请求码`复制到注册机;点击注册机左下方的Generate按钮,生成`ActivationCode`,复制粘贴到`Navicat`的激活码框,完成激活;
- [参考](https://blog.csdn.net/y526089989/article/details/89404581)
- [参考](https://blog.csdn.net/Edogawa_Konan/article/details/84928344)
- [参考](https://blog.csdn.net/zdagf/article/details/83987576) | 44.117647 | 119 | 0.766667 | yue_Hant | 0.527295 |
ce3636ce743013f871b9a47ab9ed54f277658a88 | 6,333 | md | Markdown | README.md | textcreationpartnership/A93833 | b7746af84ae25612d6f43900bddce9f396d41b18 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/A93833 | b7746af84ae25612d6f43900bddce9f396d41b18 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/A93833 | b7746af84ae25612d6f43900bddce9f396d41b18 | [
"CC0-1.0"
] | null | null | null | #Rupes Israelis: = The rock of Israel. A little part of its glory laid forth in a sermon preached at Margarets in Westminster before the honorable House of Commons, at their monthly fast, Apr. 24. 1644. By Edmund Staunton, D.D. minister at Kingston upon Thames, in the county of Surrey, a member of the Assembly of Divines.#
##Staunton, Edmund, 1600-1671.##
Rupes Israelis: = The rock of Israel. A little part of its glory laid forth in a sermon preached at Margarets in Westminster before the honorable House of Commons, at their monthly fast, Apr. 24. 1644. By Edmund Staunton, D.D. minister at Kingston upon Thames, in the county of Surrey, a member of the Assembly of Divines.
Staunton, Edmund, 1600-1671.
##General Summary##
**Links**
[TCP catalogue](http://www.ota.ox.ac.uk/tcp/) •
[HTML](http://tei.it.ox.ac.uk/tcp/Texts-HTML/free/A93/A93833.html) •
[EPUB](http://tei.it.ox.ac.uk/tcp/Texts-EPUB/free/A93/A93833.epub) •
[Page images (Historical Texts)](https://historicaltexts.jisc.ac.uk/eebo-99859067e)
**Availability**
To the extent possible under law, the Text Creation Partnership has waived all copyright and related or neighboring rights to this keyboarded and encoded edition of the work described above, according to the terms of the CC0 1.0 Public Domain Dedication (http://creativecommons.org/publicdomain/zero/1.0/). This waiver does not extend to any page images or other supplementary files associated with this work, which may be protected by copyright or other license restrictions. Please go to https://www.textcreationpartnership.org/ for more information about the project.
**Major revisions**
1. __2011-08__ __TCP__ *Assigned for keying and markup*
1. __2011-08__ __SPi Global__ *Keyed and coded from ProQuest page images*
1. __2011-10__ __Olivia Bottum__ *Sampled and proofread*
1. __2011-10__ __Olivia Bottum__ *Text and markup reviewed and edited*
1. __2012-05__ __pfs__ *Batch review (QC) and XML conversion*
##Content Summary##
#####Front#####
RƲPES ISRAELIS: THE ROCK OF ISRAEL.A Little part of its glory laid forth in a Sermon preached at Mar
1. To the Honourable Houſe of COMMONS now aſſembled in PARLIAMENT.
Die Mercurii 24. April. 1644.IT is this day Ordered by the Commons Aſſembled in Parliament, That SirI authoriſe Chriſtopher Meredith to Print my Sermon.EDMUND STAUNTON.
#####Body#####
1. A SERMON Preached at the LATE FAST, Before the Honorable Houſe of COMMONS.
**Types of content**
* Oh, Mr. Jourdain, there is **prose** in there!
There are 64 **omitted** fragments!
@__reason__ (64) : illegible (38), foreign (26) • @__resp__ (38) : #UOM (38) • @__extent__ (38) : 2 letters (5), 1 letter (29), 1 word (3), 1 span (1)
**Character listing**
|Text|string(s)|codepoint(s)|
|---|---|---|
|Latin-1 Supplement|òàèù|242 224 232 249|
|Latin Extended-A|ſ|383|
|Latin Extended-B|Ʋ|434|
|Combining Diacritical Marks|̄|772|
|General Punctuation|•—…|8226 8212 8230|
|Geometric Shapes|◊|9674|
|CJKSymbolsandPunctuation|〈〉|12296 12297|
##Tag Usage Summary##
###Header Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__author__|2||
|2.|__availability__|1||
|3.|__biblFull__|1||
|4.|__change__|5||
|5.|__date__|8| @__when__ (1) : 2012-10 (1)|
|6.|__edition__|1||
|7.|__editionStmt__|1||
|8.|__editorialDecl__|1||
|9.|__encodingDesc__|1||
|10.|__extent__|2||
|11.|__fileDesc__|1||
|12.|__idno__|7| @__type__ (7) : DLPS (1), STC (3), EEBO-CITATION (1), PROQUEST (1), VID (1)|
|13.|__keywords__|1| @__scheme__ (1) : http://authorities.loc.gov/ (1)|
|14.|__label__|5||
|15.|__langUsage__|1||
|16.|__language__|1| @__ident__ (1) : eng (1)|
|17.|__listPrefixDef__|1||
|18.|__note__|4||
|19.|__notesStmt__|2||
|20.|__p__|11||
|21.|__prefixDef__|2| @__ident__ (2) : tcp (1), char (1) • @__matchPattern__ (2) : ([0-9\-]+):([0-9IVX]+) (1), (.+) (1) • @__replacementPattern__ (2) : http://eebo.chadwyck.com/downloadtiff?vid=$1&page=$2 (1), https://raw.githubusercontent.com/textcreationpartnership/Texts/master/tcpchars.xml#$1 (1)|
|22.|__profileDesc__|1||
|23.|__projectDesc__|1||
|24.|__pubPlace__|2||
|25.|__publicationStmt__|2||
|26.|__publisher__|2||
|27.|__ref__|1| @__target__ (1) : http://www.textcreationpartnership.org/docs/. (1)|
|28.|__revisionDesc__|1||
|29.|__seriesStmt__|1||
|30.|__sourceDesc__|1||
|31.|__term__|3||
|32.|__textClass__|1||
|33.|__title__|3||
|34.|__titleStmt__|2||
###Text Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__am__|2||
|2.|__bibl__|1||
|3.|__body__|1||
|4.|__closer__|3||
|5.|__date__|1||
|6.|__dateline__|1||
|7.|__desc__|64||
|8.|__div__|5| @__type__ (5) : title_page (1), dedication (1), order (1), authorization (1), sermon (1)|
|9.|__epigraph__|1||
|10.|__ex__|2||
|11.|__expan__|2||
|12.|__front__|1||
|13.|__g__|281| @__ref__ (281) : char:V (1), char:EOLhyphen (277), char:abque (2), char:cmbAbbrStroke (1)|
|14.|__gap__|64| @__reason__ (64) : illegible (38), foreign (26) • @__resp__ (38) : #UOM (38) • @__extent__ (38) : 2 letters (5), 1 letter (29), 1 word (3), 1 span (1)|
|15.|__head__|2||
|16.|__hi__|631||
|17.|__label__|9| @__type__ (9) : milestone (9)|
|18.|__milestone__|51| @__type__ (51) : tcpmilestone (51) • @__unit__ (51) : unspecified (51) • @__n__ (51) : 1 (12), 2 (13), 3 (11), 4 (6), 5 (5), 6 (3), 7 (1)|
|19.|__note__|142| @__place__ (142) : margin (142) • @__n__ (31) : * (7), a (5), b (4), c (3), d (3), e (2), f (2), g (1), h (1), i (1), k (1), l (1)|
|20.|__opener__|2||
|21.|__p__|99||
|22.|__pb__|40| @__facs__ (40) : tcp:111129:1 (2), tcp:111129:2 (2), tcp:111129:3 (2), tcp:111129:4 (2), tcp:111129:5 (2), tcp:111129:6 (2), tcp:111129:7 (2), tcp:111129:8 (2), tcp:111129:9 (2), tcp:111129:10 (2), tcp:111129:11 (2), tcp:111129:12 (2), tcp:111129:13 (2), tcp:111129:14 (2), tcp:111129:15 (2), tcp:111129:16 (2), tcp:111129:17 (2), tcp:111129:18 (2), tcp:111129:19 (2), tcp:111129:20 (2) • @__rendition__ (1) : simple:additions (1) • @__n__ (29) : 1 (1), 2 (1), 3 (1), 4 (1), 5 (1), 6 (1), 7 (1), 8 (1), 9 (1), 10 (1), 11 (1), 12 (1), 13 (1), 14 (1), 15 (1), 16 (1), 17 (1), 18 (1), 19 (1), 20 (1), 21 (1), 22 (1), 23 (1), 24 (1), 25 (1), 26 (1), 27 (1), 28 (1), 29 (1)|
|23.|__q__|1||
|24.|__salute__|1||
|25.|__seg__|11| @__rend__ (2) : decorInit (2) • @__type__ (9) : milestoneunit (9)|
|26.|__signed__|3||
|27.|__trailer__|1||
| 48.343511 | 689 | 0.668562 | eng_Latn | 0.426137 |
ce37267a7fa4f04be973b8fff6e4efc3540a260d | 1,347 | md | Markdown | docs/0405-mtl-plugin-bdlocation.md | JetXing/mtl-tools | 4c7cb2bbc77d59926ddd284b1c27fd999327e06c | [
"MIT"
] | 1 | 2022-03-07T03:45:02.000Z | 2022-03-07T03:45:02.000Z | docs/0405-mtl-plugin-bdlocation.md | JetXing/mtl-tools | 4c7cb2bbc77d59926ddd284b1c27fd999327e06c | [
"MIT"
] | null | null | null | docs/0405-mtl-plugin-bdlocation.md | JetXing/mtl-tools | 4c7cb2bbc77d59926ddd284b1c27fd999327e06c | [
"MIT"
] | null | null | null | # 百度定位功能(安卓使用)
插件名称: mtl-plugin-bdlocation
<a name="e05dce83"></a>
### 简介
> 百度地图Android定位SDK是为Android移动端应用提供的一套简单易用的定位服务接口,专注于为广大开发者提供最好的综合定位服务。通过使用百度定位SDK,开发者可以轻松为应用程序实现智能、精准、高效的定位功能。
> 为应用提供定位服务,并且可以跳转到百度地图、高德地图等APP,实现出行路线的规划。
<a name="21f2fa80"></a>
### 参数说明
| 参数 | 说明 | 是否必传 |
| --- | --- | --- |
| BDMAP_KEY_ANDROID | Android平台百度定位mapKey | 是 |
<a name="c8a8e7b0"></a>
### 功能([详细API](http://mtlapidocs201908061404.test.app.yyuap.com/0205-location-api))
| 方法 | 功能 |
| --- | --- |
| getLocation | 获取当前坐标 |
| openLocation | 打开地图查看指定坐标位置 |
<a name="2ca50cf2"></a>
#### 参数获取流程
- 登录百度地图开发平台,网址:[http://lbsyun.baidu.com/]()
- 打开控制台,选择“创建应用”如下图:
![image.png](https://cdn.nlark.com/yuque/0/2019/png/271483/1567147906860-1b24f5a9-f1d3-4e67-bdcd-45fb021ed1e5.png#align=left&display=inline&height=770&name=image.png&originHeight=1540&originWidth=2650&size=364153&status=done&width=1325)
- 填写应用信息,SHA1值(创建应用页面有获取帮助,按步骤操作)和包名
- 应用创建完成点开“查看应用”,获取接口参数 BDMAP_KEY_ANDROID
![image.png](https://cdn.nlark.com/yuque/0/2019/png/271483/1567148429545-9f71192f-e6dd-4cc5-b2e7-7176f34f7f9f.png#align=left&display=inline&height=797&name=image.png&originHeight=1594&originWidth=2600&size=333398&status=done&width=1300)
<a name="x9iBG"></a>
#### 参数生效须知
由于SHA1值和应用签名文件有关,需要在打包服务器上传自己的打包keystore文件。暂时由我们的开发人员([email protected])进行上传,需提供打包keystore文件密码及包名等。
| 29.282609 | 236 | 0.74239 | yue_Hant | 0.467235 |
ce37399f825af3c3b94d6a65aee8b9007f79e970 | 218 | md | Markdown | _watches/M20200507_083141_TLP_2.md | Meteoros-Floripa/meteoros.floripa.br | 7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad | [
"MIT"
] | 5 | 2020-01-22T17:44:06.000Z | 2020-01-26T17:57:58.000Z | _watches/M20200507_083141_TLP_2.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | null | null | null | _watches/M20200507_083141_TLP_2.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | 2 | 2020-05-19T17:06:27.000Z | 2020-09-04T00:00:43.000Z | ---
layout: watch
title: TLP2 - 07/05/2020 - M20200507_083141_TLP_2T.jpg
date: 2020-05-07 08:31:41
permalink: /2020/05/07/watch/M20200507_083141_TLP_2
capture: TLP2/2020/202005/20200506/M20200507_083141_TLP_2T.jpg
---
| 27.25 | 62 | 0.784404 | eng_Latn | 0.041651 |
ce3848c1aad84d2abcaeaf755fe03780fbcab19f | 119 | md | Markdown | Solar Tracker Project/servo_control/README.md | jkuatdsc/IoT | 523db8c94e8e622b7b8e246b479eed4387cc644a | [
"MIT"
] | null | null | null | Solar Tracker Project/servo_control/README.md | jkuatdsc/IoT | 523db8c94e8e622b7b8e246b479eed4387cc644a | [
"MIT"
] | 1 | 2021-03-26T11:23:02.000Z | 2021-11-01T20:19:49.000Z | Solar Tracker Project/servo_control/README.md | jkuatdsc/IoT | 523db8c94e8e622b7b8e246b479eed4387cc644a | [
"MIT"
] | null | null | null | ### Optimize Solar Energy Collection
- Controls the Orientataion of the Soalr Pannel based on the Location of the Sun
| 39.666667 | 81 | 0.789916 | eng_Latn | 0.992201 |
ce387b12294672f07340eca3eb38acb1f6612cac | 317 | md | Markdown | admin/D-originality-u6251843.md | ShiqinHuo/IQ-Step_board_game | 30583f49dad63d116d85a4c6b3bebe141b36bc7b | [
"MIT"
] | 16 | 2019-03-31T09:12:22.000Z | 2022-02-04T06:06:48.000Z | admin/D-originality-u6251843.md | ShiqinHuo/IQ-Step_board_game | 30583f49dad63d116d85a4c6b3bebe141b36bc7b | [
"MIT"
] | null | null | null | admin/D-originality-u6251843.md | ShiqinHuo/IQ-Step_board_game | 30583f49dad63d116d85a4c6b3bebe141b36bc7b | [
"MIT"
] | 12 | 2019-04-02T04:41:10.000Z | 2021-09-26T07:56:23.000Z | I declare that the work I have submitted for Stage D of this assignment and all stages before it is entirely my own work, with the following documented exceptions:
* The code in class <PartialSolution> uses an idea suggested by <BIG JAVA . Early Objects> written by Cay.Horstmann.
Signed: Wenjun Yang (u6251843)
| 35.222222 | 163 | 0.77918 | eng_Latn | 0.999914 |
ce387fdd13cd795fd266abb44fec65e2bb9d130f | 66 | md | Markdown | README.md | enpassio/EnCustomView | 6cb91c76fe6eb0d2efc347bbdd64bf6a9126add7 | [
"Apache-2.0"
] | null | null | null | README.md | enpassio/EnCustomView | 6cb91c76fe6eb0d2efc347bbdd64bf6a9126add7 | [
"Apache-2.0"
] | 4 | 2018-10-29T13:57:36.000Z | 2018-10-29T13:58:14.000Z | README.md | enpassio/EnCustomView | 6cb91c76fe6eb0d2efc347bbdd64bf6a9126add7 | [
"Apache-2.0"
] | null | null | null | # EnCustomView
Sample app with various operations on custom views
| 22 | 50 | 0.833333 | eng_Latn | 0.994545 |
ce3940c4bf0bb3d6c8471e173c51a5f6692e4e1a | 1,905 | md | Markdown | examples/docs/zh-CN/container.md | JferLao/phoon-ui | 3b3bdcecad7097faf3eb1af9d00680d10f0a4626 | [
"MIT"
] | null | null | null | examples/docs/zh-CN/container.md | JferLao/phoon-ui | 3b3bdcecad7097faf3eb1af9d00680d10f0a4626 | [
"MIT"
] | null | null | null | examples/docs/zh-CN/container.md | JferLao/phoon-ui | 3b3bdcecad7097faf3eb1af9d00680d10f0a4626 | [
"MIT"
] | null | null | null | ## Container 容器
协助进行页面级整体布局。
### 组件概述
- `container`:布局容器,可以嵌套`header`、`aside`、`main`、`footer`及其`container`本身,可以放在任何外层容器中.
- `header`:顶部布局。
- `aside`:侧边栏。
- `main`:内容部分。
- `footer`:底栏容器。
:::tip
以上组件采用了 flex 布局,请注意浏览器兼容性问题。
:::
### 布局演示
:::demo 可以给布局容器设置值来控制高度和宽度
```html
<ph-container>
<ph-header>Header</ph-header>
<ph-main>Main</ph-main>
<ph-footer>Footer</ph-footer>
</ph-container>
<ph-container style="margin-top:40px">
<ph-aside>Aside</ph-aside>
<ph-main>Main</ph-main>
</ph-container>
<ph-container style="margin-top:40px">
<ph-header>Header</ph-header>
<ph-container>
<ph-aside>Aside</ph-aside>
<ph-main>Main</ph-main>
</ph-container>
<ph-footer>Footer</ph-footer>
</ph-container>
<style>
.ph-header,
.ph-footer {
background: #7dbcea;
color: #fff;
text-align: center;
line-height: 60px;
}
.ph-main {
background: #108ee9;
color: #fff;
text-align: center;
line-height: 160px;
}
.ph-aside {
background: #3ba0e9;
color: #fff;
text-align: center;
line-height: 200px;
}
</style>
```
:::
### Container 参数
| 参数 | 说明 | 类型 | 可选值 | 默认值 |
| --------- | ---------------- | ------ | --------------------- | ---------------------------------------------------------------------- |
| direction | 子元素的排列方向 | string | horizontal / vertical | 子元素中有 `el-header` 或 `el-footer` 时为 vertical,否则为 horizontal |
### Header 参数
| 参数 | 说明 | 类型 | 可选值 | 默认值 |
| ------ | -------- | ------ | ------ | ------ |
| height | 顶栏高度 | string | — | 60px |
### Aside 参数
| 参数 | 说明 | 类型 | 可选值 | 默认值 |
| ----- | ---------- | ------ | ------ | ------ |
| width | 侧边栏宽度 | string | — | 300px |
### Footer 参数
| 参数 | 说明 | 类型 | 可选值 | 默认值 |
| ------ | -------- | ------ | ------ | ------ |
| height | 底栏高度 | string | — | 60px |
| 21.404494 | 138 | 0.47664 | eng_Latn | 0.139586 |
ce39b6a77c0822f364672f652e9be01c1d532fce | 1,661 | md | Markdown | README.md | abburishiva/Golkonda | 0d4dc835cad87fce9658866ce30aa67b44ebbd9a | [
"ISC"
] | null | null | null | README.md | abburishiva/Golkonda | 0d4dc835cad87fce9658866ce30aa67b44ebbd9a | [
"ISC"
] | null | null | null | README.md | abburishiva/Golkonda | 0d4dc835cad87fce9658866ce30aa67b44ebbd9a | [
"ISC"
] | null | null | null | ##### TalentScreen REST API(Node.js)
[![N|Solid](https://talentscreen.io/assets/logos/ts-logo-beta.svg)](https://nodesource.com/products/nsolid)
TalentScreen is a web application that helps
- candidates to showcase their skills , improve their skills and create their resumes.
- candidates to solve employer challenges.
- employers to screen candidates.
- employers to search resumes and find candidates.
- employers to find right candidates for job profiles.
# Features!
- Choice Quiz
- Coding Quiz
- Video Quiz
- Audio Quiz
- Typed Quiz
- Whiteboard Quiz
- Creating and Sharing Resume
-
### Installation
TalentScreen requires [Node.js](https://nodejs.org/en/download/) v6+ to run.
Download Redis.msi [Redis.io](https://github.com/MicrosoftArchive/redis/releases).
click on checkbox for adding redis installation folder to the path environmental variable.
#### Install git
npm install --save npm-git-install
###### for windows only
Install all the required tools and configurations using running "npm install -g windows-build-tools" from an elevated PowerShell (run as Administrator)
#### clone Golkonda Project
clone as "git clone http://review.innova-path.com/Golkonda"
Install the dependencies and devDependencies and start the server and run test files.
```sh
$ cd Golkonda
$ npm install
$ grunt windows (for windows operators only)
$ grunt server
```
#### Docker Execution
Replace 192.168.86.39 address with your system IPV4 address in script.sh file
Go to root folder open terminal (linux and mac)
Windows only
open power shell or git bash command prompt
```sh
$ chmod +x script.sh
$ ./script.sh
```
License
----
MIT
| 28.152542 | 152 | 0.748946 | eng_Latn | 0.862925 |
ce3a289a83399700a1d66fbf572fefc7cd5effab | 234 | md | Markdown | README.md | domesticmouse/google-maps-in-flutter | d970a3366c085b3bf34c985f666c45e86e0942fd | [
"Apache-2.0"
] | null | null | null | README.md | domesticmouse/google-maps-in-flutter | d970a3366c085b3bf34c985f666c45e86e0942fd | [
"Apache-2.0"
] | null | null | null | README.md | domesticmouse/google-maps-in-flutter | d970a3366c085b3bf34c985f666c45e86e0942fd | [
"Apache-2.0"
] | null | null | null | # google_maps_in_flutter
The code for this codelab has migrated to [flutter/codelabs google-maps-in-flutter][google-maps-in-flutter].
[google-maps-in-flutter]: https://github.com/flutter/codelabs/tree/master/google-maps-in-flutter | 46.8 | 108 | 0.794872 | kor_Hang | 0.239018 |
ce3a5f09c38d2ce7c24fe90f9320ecb960f2cc2e | 1,023 | md | Markdown | _posts/2019-08-16-post-7.md | bugkingK/bugkingK.github.io | 2381a555a962879e8951f0bbcd9a8cc5f5615751 | [
"MIT"
] | null | null | null | _posts/2019-08-16-post-7.md | bugkingK/bugkingK.github.io | 2381a555a962879e8951f0bbcd9a8cc5f5615751 | [
"MIT"
] | null | null | null | _posts/2019-08-16-post-7.md | bugkingK/bugkingK.github.io | 2381a555a962879e8951f0bbcd9a8cc5f5615751 | [
"MIT"
] | null | null | null | ---
layout: post
title: 귀찮은 cocoapod를 편하게
subtitle: cocoapod, 귀차니즘, init, install, update
tags: [Xcode, cocoapod]
comments: true
---
### @
개발하다보면 오픈소스를 사용하게되면 cocoapod과 carthage 중 하나를 선택하게된다. <br>
둘 중 cocoapod을 편하게 쓰는 방법을 소개한다. <br>
Xcode 내장되어있는 기본 기능을 활용할 것이다. <br>
### 파일만들기
1. 아래의 스크립트를 복사해서 파일로 만드시거나, Gist로 들어가서 파일을 다운받아준다.
<script src="https://gist.github.com/bugkingK/81c75f65fb2fead531c60d0006d0fe7a.js"></script>
2. 만든 파일의 권한을 바꿔준다. chmod 755 파일이름.sh
### Xcode 설정
Xcode를 실행시킨다.
1. 상단바 Xcode 클릭
2. 중간에 있는 Behaviors -> Edit Behaviors...를 클릭
![](/img/posts/post-7/001.png){: .center-block :}
3. 하단에 +버튼을 누르고 이름을 pod-init-install로 설정한다.
4. 오른쪽 Run을 클릭하 Choose Scripts...를 누른다.
5. 방금 만든 스크립트를 선택한다. (만약 클릭이안된다면 권한이 없는 것이므로 chmod 755를 다시해준다.)
![](/img/posts/post-7/002.png){: .center-block :}
일련의 과정을 마치면 Behaviors에 pod-init-install이 들어가고 누르면 자동으로 코코아팟이 init&install을 시작한다. <br>
이와 마찬가지로 update도 같은 방식으로 진행하면 된다. <br>
<script src="https://gist.github.com/bugkingK/24f92e67f1dade5dda42d83e0894b43f.js"></script>
| 27.648649 | 92 | 0.717498 | kor_Hang | 0.999913 |
ce3a99501fb081363e594d02a16b929d2cf28bcd | 1,292 | md | Markdown | aspnet/web-forms/videos/aspnet-ajax/implement-infinite-data-patterns-in-ajax.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-forms/videos/aspnet-ajax/implement-infinite-data-patterns-in-ajax.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-forms/videos/aspnet-ajax/implement-infinite-data-patterns-in-ajax.md | terrajobst/AspNetDocs.es-es | 77be7c56042efbb27a9e051e21ee16792853ab63 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: web-forms/videos/aspnet-ajax/implement-infinite-data-patterns-in-ajax
title: Implementar patrones de datos infinitos en AJAX | Microsoft Docs
author: JoeStagner
description: En este vídeo, le mostraré cómo implementar lo que hago referencia como el patrón de datos infinito para AJAX.
ms.author: riande
ms.date: 04/10/2008
ms.assetid: 5e18f005-8b3d-4b9a-866c-c567874aa826
msc.legacyurl: /web-forms/videos/aspnet-ajax/implement-infinite-data-patterns-in-ajax
msc.type: video
ms.openlocfilehash: 5414a59c7f74ead56e3ffa7411ff1ceeb9419701
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 03/06/2020
ms.locfileid: "78510097"
---
# <a name="implement-infinite-data-patterns-in-ajax"></a>Implementar patrones de datos infinitos en AJAX
por [Joe Stagner](https://github.com/JoeStagner)
En este vídeo, le mostraré cómo implementar lo que hago referencia como el patrón de datos infinito para AJAX.
[▶Ver vídeo (18 minutos)](https://channel9.msdn.com/Blogs/ASP-NET-Site-Videos/implement-infinite-data-patterns-in-ajax)
> [!div class="step-by-step"]
> [Anterior](use-aspnet-ajax-cascading-drop-down-control-to-access-a-database.md)
> [Siguiente](basic-aspnet-authentication-in-an-ajax-enabled-application.md)
| 44.551724 | 125 | 0.798762 | spa_Latn | 0.366166 |
ce3b8087cd02fba39c9d07eb50ed94a48dfcadd2 | 11,226 | md | Markdown | docs/docs/metrics.md | mushixun/marathon | a1fee9127e16ef635b073f2e11de91ef6682c637 | [
"Apache-2.0"
] | null | null | null | docs/docs/metrics.md | mushixun/marathon | a1fee9127e16ef635b073f2e11de91ef6682c637 | [
"Apache-2.0"
] | null | null | null | docs/docs/metrics.md | mushixun/marathon | a1fee9127e16ef635b073f2e11de91ef6682c637 | [
"Apache-2.0"
] | null | null | null | ---
title: Metrics
---
# Metrics
Marathon uses [Dropwizard Metrics](https://github.com/dropwizard/metrics)
for its metrics. You can query the current metric values via the
`/metrics` HTTP endpoint.
For the specific syntax see the
[metrics command-line flags]({{ site.baseurl }}/docs/command-line-flags.html#metrics-flags)
section.
## Stability of metric names
Although we try to prevent unnecessary disruptions, we do not provide
stability guarantees for metric names between major and minor releases.
## Metric types
Marathon has the following metric types:
* a `counter` is a monotonically increasing integer, for instance, the
number of Mesos `revive` calls performed since Marathon became
a leader.
* a `gauge` is a current measurement, for instance, the number of apps
currently known to Marathon.
* a `histogram` is a distribution of values in a stream of measurements,
for instance, the number of apps in group deployments.
* a `meter` measures the rate at which a set of events occur.
* a `timer` is a combination of a meter and a histogram, which measure
the duration of events and the rate of their occurrence.
Histograms and timers are backed with reservoirs leveraging
[HdrHistogram](http://hdrhistogram.org/).
## Units of measurement
A metric measures something either in abstract quantities, or in the
following units:
* `bytes`
* `seconds`
## Metric names
All metric names are prefixed with `marathon` by default. The prefix can
be changed using `--metrics_prefix` command-line flag.
Metric name components are joined with dots. Components may have dashes
in them.
A metric type and a unit of measurement (if any) are appended to
a metric name. A couple of examples:
* `marathon.apps.active.gauge`
* `marathon.http.event-streams.responses.size.counter.bytes`
## Prometheus reporter
The Prometheus reporter is enabled by default, and it can be disabled
with `--disable_metrics_prometheus` command-line flag. Metrics in the
Prometheus format are available at `/metrics/prometheus`.
Dots and dashes in metric names are replaced with underscores.
## StatsD reporter
The StatsD reporter can be enabled with `--metrics_statsd` command-line
flag. It sends metrics over UDP to the host and port specified with
`--metrics_statsd_host` and `--metrics_statsd_port` respectively.
## DataDog reporter
The DataDog reporter can be enabled with `--metrics_datadog`
command-line flag. It sends metrics over UDP to the host and port
specified with `--metrics_datadog_host` and `--metrics_datadog_port`
respectively.
Marathon can send metrics to a DataDog agent over UDP, or directly to
the DataDog cloud over HTTP. It is specified using
`--metrics_datadog_protocol`. Its possible values are `udp` (default)
and `api`. If `api` is chosen, your DataDog API key can be supplied with
`--metrics_datadog_api_key`.
Dashes in metric names are replaced with underscores.
## Important metrics
* `marathon.apps.active.gauge` — the number of active apps.
* `marathon.deployments.active.gauge` — the number of active
deployments.
* `marathon.deployments.counter` — the count of deployments received
since the current Marathon instance became a leader.
* `marathon.deployments.dismissed.counter` — the count of deployments
dismissed since the current Marathon instance became a leader;
a deployment might be dismissed by Marathon, when there are too many
concurrent deployments.
* `marathon.groups.active.gauge` — the number of active groups.
* `marathon.leadership.duration.gauge.seconds` — the duration of
current leadership.
* `marathon.persistence.gc.runs.counter` — the count of Marathon GC runs
since it became a leader.
* `marathon.persistence.gc.compaction.duration.timer.seconds` —
a histogram of Marathon GC compaction phase durations, and a meter for
compaction durations.
* `marathon.persistence.gc.scan.duration.timer.seconds` — a histogram of
Marathon GC scan phase durations, and a meter for scan durations.
* `marathon.tasks.launched.counter` — the count of tasks launched by
the current Marathon instance since it became a leader.
* `marathon.tasks.running.gauge` — the number of running tasks at the
moment.
* `marathon.tasks.staged.gauge` — the number of tasks staged at the
moment.
* `marathon.uptime.gauge.seconds` — uptime of the current Marathon
instance.
### Mesos-specific metrics
* `marathon.mesos.calls.revive.counter` — the count of Mesos `revive`
calls made since the current Marathon instance became a leader.
* `marathon.mesos.calls.suppress.counter` — the count of Mesos
`suppress` calls made since the current Marathon instance became
a leader.
* `marathon.mesos.offer-operations.launch-group.counter` — the count of
`LaunchGroup` offer operations made since the current Marathon
instance became a leader.
* `marathon.mesos.offer-operations.launch.counter` — the count of
`Launch` offer operations made since the current Marathon instance
became a leader.
* `marathon.mesos.offer-operations.reserve.counter` — the count of
`Reserve` offer operations made since the current Marathon instance
became a leader.
* `marathon.mesos.offers.declined.counter` — the count of offers
declined since the current Marathon instance became a leader.
* `marathon.mesos.offers.incoming.counter` — the count of offers
received since the current Marathon instance became a leader.
* `marathon.mesos.offers.used.counter` — the count of offers used since
the current Marathon instance became a leader.
### HTTP-specific metrics
* `marathon.http.event-streams.responses.size.counter.bytes` — the size
of data sent to clients over event streams since the current Marathon
instance became a leader.
* `marathon.http.requests.size.counter.bytes` — the total size of
all requests since the current Marathon instance became a leader.
* `marathon.http.requests.size.gzipped.counter.bytes` — the total size
of all gzipped requests since the current Marathon instance became
a leader.
* `marathon.http.responses.size.counter.bytes` — the total size of all
responses since the current Marathon instance became a leader.
* `marathon.http.responses.size.gzipped.counter.bytes` — the total size
of all gzipped responses since the current Marathon instance became
a leader.
* `http.requests.active.gauge` — the number of active requests.
* `http.responses.1xx.rate` — the rate of `1xx` responses.
* `http.responses.2xx.rate` — the rate of `2xx` responses.
* `http.responses.3xx.rate` — the rate of `3xx` responses.
* `http.responses.4xx.rate` — the rate of `4xx` responses.
* `http.responses.5xx.rate` — the rate of `5xx` responses.
* `marathon.http.requests.duration.timer.seconds` — a histogram of
request durations, and a meter for request durations.
* `http.requests.get.duration.timer.seconds` — the same but for `GET`
requests only.
* `http.requests.post.duration.timer.seconds` — the same but for `POST`
requests only.
* `http.requests.put.duration.timer.seconds` — the same but for `PUT`
requests only.
* `http.requests.delete.duration.timer.seconds` — the same but for
`DELETE` requests only.
### JVM-specific metrics
#### JVM buffer pools
* `marathon.jvm.buffers.mapped.gauge` — an estimate of the number of
mapped buffers.
* `marathon.jvm.buffers.mapped.capacity.gauge.bytes` — an estimate of
the total capacity of the mapped buffers in bytes.
* `marathon.jvm.buffers.mapped.memory.used.gauge.bytes` an estimate of
the memory that the JVM is using for mapped buffers in bytes, or `-1L`
if an estimate of the memory usage is not available.
* `marathon.jvm.buffers.direct.gauge` — an estimate of the number of
direct buffers.
* `marathon.jvm.buffers.direct.capacity.gauge.bytes` — an estimate of
the total capacity of the direct buffers in bytes.
* `marathon.jvm.buffers.direct.memory.used.gauge.bytes` an estimate of
the memory that the JVM is using for direct buffers in bytes, or `-1L`
if an estimate of the memory usage is not available.
#### JVM garbage collection
* `marathon.jvm.gc.<gc>.collections.gauge` — the total number
of collections that have occurred
* `marathon.jvm.gc.<gc>.collections.duraration.gauge.seconds` — the
approximate accumulated collection elapsed time, or `-1` if the
collection elapsed time is undefined for the given collector.
#### JVM memory
* `marathon.jvm.memory.total.init.gauge.bytes` - the amount of memory
in bytes that the JVM initially requests from the operating system
for memory management, or `-1` if the initial memory size is
undefined.
* `marathon.jvm.memory.total.used.gauge.bytes` - the amount of used
memory in bytes.
* `marathon.jvm.memory.total.max.gauge.bytes` - the maximum amount of
memory in bytes that can be used for memory management, `-1` if the
maximum memory size is undefined.
* `marathon.jvm.memory.total.committed.gauge.bytes` - the amount of
memory in bytes that is committed for the JVM to use.
* `marathon.jvm.memory.heap.init.gauge.bytes` - the amount of heap
memory in bytes that the JVM initially requests from the operating
system for memory management, or `-1` if the initial memory size is
undefined.
* `marathon.jvm.memory.heap.used.gauge.bytes` - the amount of used heap
memory in bytes.
* `marathon.jvm.memory.heap.max.gauge.bytes` - the maximum amount of
heap memory in bytes that can be used for memory management, `-1` if
the maximum memory size is undefined.
* `marathon.jvm.memory.heap.committed.gauge.bytes` - the amount of heap
memory in bytes that is committed for the JVM to use.
* `marathon.jvm.memory.heap.usage.gauge` - the ratio of
`marathon.jvm.memory.heap.used.gauge.bytes` and
`marathon.jvm.memory.heap.max.gauge.bytes`.
* `marathon.jvm.memory.non-heap.init.gauge.bytes` - the amount of
non-heap memory in bytes that the JVM initially requests from the
operating system for memory management, or `-1` if the initial memory
size is undefined.
* `marathon.jvm.memory.non-heap.used.gauge.bytes` - the amount of used
non-heap memory in bytes.
* `marathon.jvm.memory.non-heap.max.gauge.bytes` - the maximum amount of
non-heap memory in bytes that can be used for memory management, `-1`
if the maximum memory size is undefined.
* `marathon.jvm.memory.non-heap.committed.gauge.bytes` - the amount of
non-heap memory in bytes that is committed for the JVM to use.
* `marathon.jvm.memory.non-heap.usage.gauge` - the ratio of
`marathon.jvm.memory.non-heap.used.gauge.bytes` and
`marathon.jvm.memory.non-heap.max.gauge.bytes`.
#### JVM threads
* `marathon.threads.active.gauge` — the number of active threads.
* `marathon.threads.daemon.gauge` — the number of daemon threads.
* `marathon.threads.deadlocked.gauge` — the number of deadlocked
threads.
* `marathon.threads.new.gauge` — the number of threads in `NEW` state.
* `marathon.threads.runnable.gauge` — the number of threads in
`RUNNABLE` state.
* `marathon.threads.blocked.gauge` — the number of threads in `BLOCKED`
state.
* `marathon.threads.timed-waiting.gauge` — the number of threads in
`TIMED_WAITING` state.
* `marathon.threads.waiting.gauge` — the number of threads in `WAITING`
state.
* `marathon.threads.terminated.gauge` —
the number of threads in `TERMINATED` state.
| 43.011494 | 91 | 0.763228 | eng_Latn | 0.993334 |
ce3c2603dd2404493b14b694a278042bc2694a20 | 310 | md | Markdown | docs/DSL+/dokka/dsl/studio.forface.easygradle.dsl/org.gradle.plugin.use.-plugin-dependencies-spec/plugin.md | 4face-studi0/EasyGradle | a254834c8801fccc1ec3c6dd0bf509de39c63ec4 | [
"Apache-2.0"
] | 4 | 2019-09-07T00:29:20.000Z | 2021-01-27T00:48:26.000Z | docs/DSL+/dokka/dsl/studio.forface.easygradle.dsl/org.gradle.plugin.use.-plugin-dependencies-spec/plugin.md | 4face-studi0/EasyGradle | a254834c8801fccc1ec3c6dd0bf509de39c63ec4 | [
"Apache-2.0"
] | null | null | null | docs/DSL+/dokka/dsl/studio.forface.easygradle.dsl/org.gradle.plugin.use.-plugin-dependencies-spec/plugin.md | 4face-studi0/EasyGradle | a254834c8801fccc1ec3c6dd0bf509de39c63ec4 | [
"Apache-2.0"
] | 1 | 2021-01-27T00:48:28.000Z | 2021-01-27T00:48:28.000Z | [dsl](../../index.md) / [studio.forface.easygradle.dsl](../index.md) / [org.gradle.plugin.use.PluginDependenciesSpec](index.md) / [plugin](./plugin.md)
# plugin
`fun PluginDependenciesSpec.plugin(id: `[`String`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-string/index.html)`): PluginDependencySpec` | 62 | 151 | 0.729032 | yue_Hant | 0.248105 |
ce3cae92e57cf7c361a6bf8196844a06b167b270 | 585 | md | Markdown | Model_README.md | SIR-SEE/final-project-team-8 | 45cf98547ff92eeec7cd0d2c7aeb683b38702237 | [
"MIT"
] | null | null | null | Model_README.md | SIR-SEE/final-project-team-8 | 45cf98547ff92eeec7cd0d2c7aeb683b38702237 | [
"MIT"
] | null | null | null | Model_README.md | SIR-SEE/final-project-team-8 | 45cf98547ff92eeec7cd0d2c7aeb683b38702237 | [
"MIT"
] | null | null | null | Covid-19 model
We chose the SIR-model code that we previously worked with as our base for our model.
To make the model more realistic we added a mortality rate (m) to distinguish between those who recover and those who don't.
We also added a parameter, epsilon, to simulate the effects safety precautions, such as restrictions, mask mandates etc.
We chose a very arbitrary number to represent the restrictions. We perhaps could have made a more in-depth equation for calculating
the effects of possible safety precautions, but thought it was sufficient as a visual representation.
| 73.125 | 132 | 0.805128 | eng_Latn | 0.999895 |
ce3d8353a5847d744c8722170134f8d24b542a47 | 113 | md | Markdown | README.md | lumiantarts/UnityProjects | 7de0fe7fb5777f437be1d222210934b8d93210a0 | [
"MIT"
] | null | null | null | README.md | lumiantarts/UnityProjects | 7de0fe7fb5777f437be1d222210934b8d93210a0 | [
"MIT"
] | null | null | null | README.md | lumiantarts/UnityProjects | 7de0fe7fb5777f437be1d222210934b8d93210a0 | [
"MIT"
] | null | null | null | # Lumiant Arts
Owners: Luke A, R2D2sp
Desc: Our unreleased unity projects, mainly for testing.
Date: 05/13/16
| 14.125 | 56 | 0.743363 | eng_Latn | 0.864696 |
ce3fe0af4dfb2d7d839a85a203a039d40fbb9624 | 10,373 | md | Markdown | articles/virtual-machines/virtual-machines-linux-quick-create-cli.md | OpenLocalizationTestOrg/azure-docs-pr15_hu-HU | ac1600ab65c96c83848e8b2445ac60e910561a25 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/virtual-machines-linux-quick-create-cli.md | OpenLocalizationTestOrg/azure-docs-pr15_hu-HU | ac1600ab65c96c83848e8b2445ac60e910561a25 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/virtual-machines-linux-quick-create-cli.md | OpenLocalizationTestOrg/azure-docs-pr15_hu-HU | ac1600ab65c96c83848e8b2445ac60e910561a25 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
pageTitle="A CLI segítségével hozzon létre egy Linux virtuális Azure |} Microsoft Azure"
description="Létrehozhat egy Linux virtuális Azure a CLI használatával."
services="virtual-machines-linux"
documentationCenter=""
authors="vlivech"
manager="timlt"
editor=""/>
<tags
ms.service="virtual-machines-linux"
ms.devlang="NA"
ms.topic="hero-article"
ms.tgt_pltfrm="vm-linux"
ms.workload="infrastructure"
ms.date="10/27/2016"
ms.author="v-livech"/>
# <a name="create-a-linux-vm-on-azure-by-using-the-cli"></a>Létrehozhat egy Linux virtuális Azure a CLI használatával
Ez a cikk bemutatja a telepítéséről gyorsan Linux virtuális gép (virtuális) Azure a használatával a `azure vm quick-create` az Azure parancssori kezelőfelületről parancsot. A `quick-create` parancs üzembe helyezése a virtuális belül egy egyszerű, biztonságos infrastruktúrát, hogy prototípusának használatával, vagy egy fogalmat gyors tesztelése. A cikk van szükség:
- az Azure-fiók (az[első ingyenes próbaverziót](https://azure.microsoft.com/pricing/free-trial/)).
- bejelentkezett az [Azure CLI](../xplat-cli-install.md) `azure login`.
- az Azure CLI _kell lennie az_ erőforrás-kezelő Azure mód `azure config mode arm`.
Egy Linux virtuális az [Azure portal](virtual-machines-linux-quick-create-portal.md)segítségével gyorsan is telepítheti.
## <a name="quick-commands"></a>Gyors parancsok
A következő példa bemutatja egy CoreOS virtuális telepíthető, és csatolja a biztonságos rendszerhéj (SSH) használatával (az argumentumokat eltérő lehet a) módját:
```bash
azure vm quick-create -M ~/.ssh/id_rsa.pub -Q CoreOS
```
## <a name="detailed-walkthrough"></a>Részletes útmutató
A következő forgatókönyv egy UbuntuLTS virtuális telepített, lépésenkénti, a magyarázatokat milyen módon minden egyes lépés folyamatban van.
## <a name="vm-quick-create-aliases"></a>Virtuális gyors-aliasok létrehozása
Válasszon egy terjesztési gyorsan az Azure CLI aliasokat, a leggyakoribb OS terjesztését megfeleltetve környezetbe. Az alábbi táblázat a aliases (kezdve az Azure CLI verzió 0,10). Az összes telepítések használó `quick-create` VMs félvezető meghajtó (SSD) tároló, amely gyorsabban kiépítési és nagy teljesítményű lemez access készül, az alapértelmezett. (Ezek aliasok jelenítik meg a rendelkezésre álló terjesztését a Azure egy apró része. További képek keresése a Microsoft Azure piactéren található [PowerShell-kép keresése](virtual-machines-linux-cli-ps-findimage.md), [a weben](https://azure.microsoft.com/marketplace/virtual-machines/)vagy [saját egyéni kép feltöltése](virtual-machines-linux-create-upload-generic.md).)
| Alias | A Publisher | Ajánlat | RAKTÁRI SZÁM | Verzió |
|:----------|:----------|:-------------|:------------|:--------|
| CentOS | OpenLogic | CentOS | 7.2. | legújabb |
| CoreOS | CoreOS | CoreOS | Állandó | legújabb |
| Debian | credativ | Debian | 8 | legújabb |
| openSUSE | SUSE | openSUSE | 13.2 | legújabb |
| RHEL | Piros kalap | RHEL | 7.2. | legújabb |
| UbuntuLTS | Kanonikus | Ubuntu kiszolgáló | 14.04.4-LTS | legújabb |
Az alábbi szakaszok használata a `UbuntuLTS` **ImageURN** beállítással alias (`-Q`) egy Ubuntu 14.04.4 LTS Server telepítése.
Az előző `quick-create` példa csak beállításokkal a `-M` jelző letiltása a SSH jelszavak, így a program kéri az alábbi argumentumokat közben töltse fel a SSH nyilvános kulcs azonosítása:
- (tetszőleges karakterlánc a az első Azure erőforráscsoport általában finom) erőforrás csoportnevet.
- Virtuális neve
- hely (`westus` vagy `westeurope` jó alapértelmezett van)
- Linux (Ha engedélyezni szeretné, hogy mely operációs rendszer kívánt Azure)
- felhasználónév
A következő példa megadja az összes értéket, hogy nincs további Rákérdezés szükség. Mindaddig, amíg van egy `~/.ssh/id_rsa.pub` fájlként ssh-rsa formátum nyilvános kulcs, mint működik:
```bash
azure vm quick-create \
--resource-group myResourceGroup \
--name myVM \
--location westus \
--os-type Linux \
--admin-username myAdminUser \
--ssh-public-file ~/.ssh/id_rsa.pub \
--image-urn UbuntuLTS
```
A kimenet így néz a következő kimeneti tiltása:
```bash
info: Executing command vm quick-create
+ Listing virtual machine sizes available in the location "westus"
+ Looking up the VM "myVM"
info: Verifying the public key SSH file: /Users/ahmet/.ssh/id_rsa.pub
info: Using the VM Size "Standard_DS1"
info: The [OS, Data] Disk or image configuration requires storage account
+ Looking up the storage account cli16330708391032639673
+ Looking up the NIC "examp-westu-1633070839-nic"
info: An nic with given name "examp-westu-1633070839-nic" not found, creating a new one
+ Looking up the virtual network "examp-westu-1633070839-vnet"
info: Preparing to create new virtual network and subnet
/ Creating a new virtual network "examp-westu-1633070839-vnet" [address prefix: "10.0.0.0/16"] with subnet "examp-westu-1633070839-snet" [address prefix: "10.+.1.0/24"]
+ Looking up the virtual network "examp-westu-1633070839-vnet"
+ Looking up the subnet "examp-westu-1633070839-snet" under the virtual network "examp-westu-1633070839-vnet"
info: Found public ip parameters, trying to setup PublicIP profile
+ Looking up the public ip "examp-westu-1633070839-pip"
info: PublicIP with given name "examp-westu-1633070839-pip" not found, creating a new one
+ Creating public ip "examp-westu-1633070839-pip"
+ Looking up the public ip "examp-westu-1633070839-pip"
+ Creating NIC "examp-westu-1633070839-nic"
+ Looking up the NIC "examp-westu-1633070839-nic"
+ Looking up the storage account clisto1710997031examplev
+ Creating VM "myVM"
+ Looking up the VM "myVM"
+ Looking up the NIC "examp-westu-1633070839-nic"
+ Looking up the public ip "examp-westu-1633070839-pip"
data: Id :/subscriptions/2<--snip-->d/resourceGroups/exampleResourceGroup/providers/Microsoft.Compute/virtualMachines/exampleVMName
data: ProvisioningState :Succeeded
data: Name :exampleVMName
data: Location :westus
data: Type :Microsoft.Compute/virtualMachines
data:
data: Hardware Profile:
data: Size :Standard_DS1
data:
data: Storage Profile:
data: Image reference:
data: Publisher :Canonical
data: Offer :UbuntuServer
data: Sku :14.04.4-LTS
data: Version :latest
data:
data: OS Disk:
data: OSType :Linux
data: Name :clic7fadb847357e9cf-os-1473374894359
data: Caching :ReadWrite
data: CreateOption :FromImage
data: Vhd:
data: Uri :https://cli16330708391032639673.blob.core.windows.net/vhds/clic7fadb847357e9cf-os-1473374894359.vhd
data:
data: OS Profile:
data: Computer Name :myVM
data: User Name :myAdminUser
data: Linux Configuration:
data: Disable Password Auth :true
data:
data: Network Profile:
data: Network Interfaces:
data: Network Interface #1:
data: Primary :true
data: MAC Address :00-0D-3A-33-42-FB
data: Provisioning State :Succeeded
data: Name :examp-westu-1633070839-nic
data: Location :westus
data: Public IP address :138.91.247.29
data: FQDN :examp-westu-1633070839-pip.westus.cloudapp.azure.com
data:
data: Diagnostics Profile:
data: BootDiagnostics Enabled :true
data: BootDiagnostics StorageUri :https://clisto1710997031examplev.blob.core.windows.net/
data:
data: Diagnostics Instance View:
info: vm quick-create command OK
```
## <a name="log-in-to-the-new-vm"></a>Jelentkezzen be az új virtuális
Jelentkezzen be a virtuális használata a nyilvános IP-cím szerepel az eredményben. A teljes tartománynevét (FQDN), amely szerepel is használhatja:
```bash
ssh -i ~/.ssh/id_rsa.pub [email protected]
```
A bejelentkezési folyamat hasonlóan kell kinéznie a következő kimeneti tiltása:
```bash
Warning: Permanently added '138.91.247.29' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.19.0-65-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Thu Sep 8 22:50:57 UTC 2016
System load: 0.63 Memory usage: 2% Processes: 81
Usage of /: 39.6% of 1.94GB Swap usage: 0% Users logged in: 0
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
myAdminUser@myVM:~$
```
## <a name="next-steps"></a>Következő lépések
A `azure vm quick-create` telepítendő gyorsan egy virtuális, jelentkezzen be egy bash rendszerhéj és a munka megkezdése módja parancsot. Azonban használatával `vm quick-create` nem ad teljes körű vezérlő sem jelent az, lehetővé teszi azok hozzon létre egy összetettebb környezetben. Egy, a infrastruktúra testre szabott Linux virtuális üzembe helyezéséhez követheti, ezek a cikkek egyikét:
- [Hozzon létre egy speciális telepítési egy erőforrás-kezelő Azure sablonnal](virtual-machines-linux-cli-deploy-templates.md)
- [Egy Linux virtuális Azure CLI parancsaival közvetlenül a saját egyéni környezet létrehozása](virtual-machines-linux-create-cli-complete.md)
- [Hozzon létre egy SSH védett Linux virtuális Azure sablonok használata](virtual-machines-linux-create-ssh-secured-vm-from-template.md)
Is [használja a `docker-machine` különböző paranccsal gyorsan létrehozhat egy Linux virtuális docker fogadó Azure illesztőprogram](virtual-machines-linux-docker-machine.md).
| 49.631579 | 724 | 0.705582 | hun_Latn | 0.991918 |
ce425c5ed0c0016bdc8afd8ccac5569008363dcb | 710 | md | Markdown | README.md | Poohdxx/rshell | e0276b404cb1dc01802ba40b4a31c47d514f13be | [
"BSD-3-Clause"
] | null | null | null | README.md | Poohdxx/rshell | e0276b404cb1dc01802ba40b4a31c47d514f13be | [
"BSD-3-Clause"
] | null | null | null | README.md | Poohdxx/rshell | e0276b404cb1dc01802ba40b4a31c47d514f13be | [
"BSD-3-Clause"
] | null | null | null | # rshell
RSHELL is an open source project created for the CS100 class assignment at the University of California, Riverside.
##How to install
Run the following commands to get the source and build the shell:
```
git clone https://github.com/Poohdxx/rshell.git
cd rshell
git checkout hw2
make
bin/rshell
```
##Functionality
RSHELL has basic bash command logic with the utilization of combining them with connectors ; && ||
RSHELL also has the functionality to run test commands with flags -e -f and -d
####Example
```
test -e a.out ## this is a comment
[ -d /home ] && echo "this is an example"
```
##BUGS
* Still in progress of finding them
####Example
```
N/A
```
##LICENSE
See LICENSE file for details
| 20.882353 | 115 | 0.728169 | eng_Latn | 0.996356 |
ce4346d3c326f39fed1da6fc9f4b094d657a1b09 | 185 | md | Markdown | docs/cloud/softlayer/_index.md | lubinsz/pharmer | 05f851bebf2b593c782555cc104fd0ea024147bc | [
"Apache-2.0"
] | null | null | null | docs/cloud/softlayer/_index.md | lubinsz/pharmer | 05f851bebf2b593c782555cc104fd0ea024147bc | [
"Apache-2.0"
] | null | null | null | docs/cloud/softlayer/_index.md | lubinsz/pharmer | 05f851bebf2b593c782555cc104fd0ea024147bc | [
"Apache-2.0"
] | null | null | null | ---
title: SoftLayer
menu:
product_pharmer_0.1.0-alpha.1:
identifier: soft-layer
name: SoftLayer
parent: cloud
weight: 40
menu_name: product_pharmer_0.1.0-alpha.1
--- | 18.5 | 41 | 0.691892 | eng_Latn | 0.685738 |
ce43bedbd712e0d097cbfbb4c0e4c68fed1f144d | 731 | md | Markdown | pages.ro.aws/common/subfinder.md | unPi-ro/tldr | 13ffa5e396b4018eeaebf42dd7fff38bfd74638b | [
"CC-BY-4.0"
] | null | null | null | pages.ro.aws/common/subfinder.md | unPi-ro/tldr | 13ffa5e396b4018eeaebf42dd7fff38bfd74638b | [
"CC-BY-4.0"
] | null | null | null | pages.ro.aws/common/subfinder.md | unPi-ro/tldr | 13ffa5e396b4018eeaebf42dd7fff38bfd74638b | [
"CC-BY-4.0"
] | null | null | null | # subfinder
> Un instrument de descoperire subdomeniu care descoperă subdomenii valide pentru site-uri web.
> Proiectat ca un cadru pasiv pentru a fi util pentru recompensele de bug-uri și sigur pentru testarea penetrării.
> Mai multe informaţii: <https://github.com/subfinder/subfinder>
- Găsiți subdomenii pentru un anumit domeniu:
`subfinder -d {{example.com}}`
- Arată doar subdomeniile găsite:
`subfinder --silent -d {{example.com}}`
- Utilizați un atac brute-force pentru a găsi subdomenii:
`subfinder -d {{example.com}} -b`
- Elimină subdomeniile metacaractere:
`subfinder -nW -d {{example.com}}`
- Utilizaţi o listă de rezolvatori separate prin virgulă:
`subfinder -r {{8.8.8.8}},{{1.1.1.1}} -d {{example.com}}`
| 28.115385 | 114 | 0.737346 | ron_Latn | 0.999258 |
ce4429296406165c7d45daf1566b6613b6da83bc | 14,969 | md | Markdown | analyse/1_buildings_indics.md | Raphbub/master-thesis | 494845291408a12a2e28fe9a60d4a01a49eddfae | [
"MIT"
] | 2 | 2018-05-02T17:29:08.000Z | 2018-06-13T05:35:05.000Z | analyse/1_buildings_indics.md | Raphbub/master-thesis | 494845291408a12a2e28fe9a60d4a01a49eddfae | [
"MIT"
] | null | null | null | analyse/1_buildings_indics.md | Raphbub/master-thesis | 494845291408a12a2e28fe9a60d4a01a49eddfae | [
"MIT"
] | null | null | null | # Indicators at the building level
Add the columns for the attributes
```sql
ALTER TABLE bd ADD COLUMN b_perim REAL,
ADD COLUMN b_area REAL,
ADD COLUMN b_r_vol_fac REAL,
ADD COLUMN b_MaxEdge REAL,
ADD COLUMN b_MinEdge REAL,
ADD COLUMN b_stories INT,
ADD COLUMN b_floorsqm REAL,
ADD COLUMN c_Miller REAL,
ADD COLUMN c_Schumm REAL,
ADD COLUMN c_Haggett REAL,
ADD COLUMN c_LeeSallee REAL,
ADD COLUMN c_Ehrenb REAL,
ADD COLUMN bb_perim REAL,
ADD COLUMN bb_area REAL,
ADD COLUMN bb_length REAL,
ADD COLUMN bb_width REAL,
ADD COLUMN bb_r_lw REAL,
ADD COLUMN bb_r_area REAL,
ADD COLUMN bb_r_perim REAL,
ADD COLUMN cc_rad REAL,
ADD COLUMN cc_exch REAL,
ADD COLUMN cc_detour REAL,
ADD COLUMN ch_area REAL,
ADD COLUMN ch_perim REAL,
ADD COLUMN ch_r_area REAL,
ADD COLUMN ch_r_perim REAL,
ADD COLUMN s_deadend REAL,
ADD COLUMN sc_lines REAL,
ADD COLUMN sc_length REAL,
ADD COLUMN sc_orient INT DEFAULT 0,
ADD COLUMN sc_l_sn REAL DEFAULT 0.0,
ADD COLUMN sc_l_ew REAL DEFAULT 0.0,
ADD COLUMN sc_l_nesw REAL DEFAULT 0.0,
ADD COLUMN sc_l_senw REAL DEFAULT 0.0,
ADD COLUMN sc_m_orient VARCHAR(4),
ADD COLUMN m_corndis REAL,
ADD COLUMN m_court INT DEFAULT 0,
ADD COLUMN m_court_area REAL DEFAULT 0.0,
ADD COLUMN m_court_rel_a REAL DEFAULT 0.0,
ADD COLUMN dm_inscr_c REAL;
```
## Create _VIEWS_ for the recurring tables and precalculate other needed attributes
```sql
-- View circumscribed circle, with radius extracted
CREATE OR REPLACE VIEW ccirc AS
SELECT id, (ST_MinimumBoundingRadius(geom)).radius AS rad
FROM bd;
-- View bounding box
CREATE OR REPLACE VIEW bbox AS
SELECT id, Box2d(geom) AS bb
FROM bd;
-- View convex hull
CREATE OR REPLACE VIEW convhull AS
SELECT id, ST_ConvexHull(geom) AS chull
FROM bd;
-- View skeleton & centerline
CREATE OR REPLACE VIEW skeleton AS
SELECT id, ST_StraightSkeleton(geom) AS skel,
ST_ApproximateMedialAxis(geom) AS ctl
FROM bd;
-- View for the orientation of the centerlines' segments
CREATE OR REPLACE VIEW sc_orien AS (
WITH clpts AS (
SELECT id, (ST_DumpPoints(ctl)).geom AS pts,
((ST_DumpPoints(ctl)).path)[1] AS place
FROM skeleton
), aziline AS (
SELECT a.id, a.place, DEGREES(ST_Azimuth(a.pts, b.pts)) AS orientation, ST_Distance(a.pts, b.pts) AS long
FROM clpts a, clpts b
WHERE a.id = b.id
AND a.place = b.place
AND NOT a.pts = b.pts
AND DEGREES(ST_Azimuth(a.pts, b.pts)) BETWEEN 67.5 AND 247.5
), orlc AS (
SELECT *,
CASE WHEN ROUND(CAST(orientation AS numeric), 2) BETWEEN 67.5 AND 112.5 THEN 'EW'
WHEN ROUND(CAST(orientation AS numeric), 2) BETWEEN 112.501 AND 157.5 THEN 'SENW'
WHEN ROUND(CAST(orientation AS numeric), 2) BETWEEN 157.501 AND 202.5 THEN 'SN'
WHEN ROUND(CAST(orientation AS numeric), 2) BETWEEN 202.501 AND 247.5 THEN 'SWNE'
ELSE 'ERROR'
END AS orlc
FROM aziline
)
SELECT * FROM orlc
);
-- Sum of centerlines by orientation
CREATE OR REPLACE VIEW sum_orien AS (
WITH sum_p_or AS (-- Get the total length by orientation
SELECT id, orlc, SUM(long) AS ltot
FROM sc_orien
GROUP BY id, orlc
)
SELECT * FROM sum_p_or
);
-- Computing an approximation of the inscribed circle diameter
WITH ctline AS ( -- Dump centerline
SELECT id, (ST_Dump(ctl)).geom AS lines,
(ST_Dump(ctl)).path[1] AS place
FROM skeleton
), dists AS ( -- Compute distance from line to exterior of polygon
SELECT bd.id AS id, place, lines,
ST_Distance(lines, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS dist
FROM ctline, bd
WHERE ctline.id = bd.id
), max_dists AS ( -- Find the max distance
SELECT id, place, lines, dist, MAX(dist) OVER (PARTITION BY id) AS max_dist
FROM dists
), furthest_lines AS ( -- Find the furthest line
SELECT *
FROM max_dists
WHERE dist = max_dist
), fl_pts AS ( -- Divide the line in 7 points
SELECT id, ST_StartPoint(lines) AS pta,
ST_LineInterpolatePoint(lines, 0.2) AS ptb,
ST_LineInterpolatePoint(lines, 0.4) AS ptc,
ST_LineInterpolatePoint(lines, 0.5) AS ptd,
ST_LineInterpolatePoint(lines, 0.6) AS pte,
ST_LineInterpolatePoint(lines, 0.8) AS ptf,
ST_EndPoint(lines) AS ptg
FROM furthest_lines
), fl_dists AS ( -- Find distance from point to boundary
SELECT fl_pts.id,
ST_Distance(pta, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS da,
ST_Distance(ptb, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS db,
ST_Distance(ptc, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS dc,
ST_Distance(ptd, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS dd,
ST_Distance(pte, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS de,
ST_Distance(ptf, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS df,
ST_Distance(ptg, ST_ExteriorRing((ST_Dump(bd.geom)).geom)) AS dg
FROM fl_pts, bd
WHERE fl_pts.id = bd.id
), greatest AS ( -- Find greatest distance
SELECT id, GREATEST(da, db, dc, dd, de, df, dg) AS radius
FROM fl_dists
) -- Use this distance as radius for inscribed circle
UPDATE bd SET dm_inscr_c = 2 * radius
FROM greatest
WHERE bd.id = greatest.id;
DELETE FROM bd
WHERE dm_inscr_c IS NULL;
```
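The `skeleton` view relies on the SFCGAL-backed functions `ST_StraightSkeleton` and `ST_ApproximateMedialAxis`, which are expensive and are referenced by several statements below. On a large dataset it may be worth materializing that computation once instead of re-evaluating the view; a minimal sketch (the name `skeleton_mat` is illustrative, and the later statements would then have to reference it instead of `skeleton`):

```sql
-- Optional: materialize the expensive skeleton/centerline computation once
CREATE MATERIALIZED VIEW skeleton_mat AS
SELECT id,
       ST_StraightSkeleton(geom) AS skel,
       ST_ApproximateMedialAxis(geom) AS ctl
FROM bd;
CREATE INDEX ON skeleton_mat (id);
```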
### Compute the basic indicators
```sql
-- Perimeter
UPDATE bd SET b_perim = ROUND(CAST(ST_Perimeter(geom) AS NUMERIC), 2);
-- Area
UPDATE bd SET b_area = ROUND(CAST(ST_Area(geom) AS NUMERIC), 2);
-- Volume-to-facade ratio (area / perimeter, equivalent to volume / facade area for a constant height)
UPDATE bd SET b_r_vol_fac = b_area / b_perim;
-- Min and Max edges
WITH bd_segment AS (
SELECT
ST_PointN(geom, generate_series(1, ST_NPoints(geom)-1)) AS sp,
ST_PointN(geom, generate_series(2, ST_NPoints(geom) )) AS ep
FROM
-- extract the individual linestrings
(SELECT (ST_Dump(ST_Boundary(geom))).geom
FROM bd) AS linestrings
), bd_segment_geom AS (
SELECT sp, ep, st_makeline(sp, ep) AS edge
FROM bd_segment
), bd_segment_id AS (
SELECT bd.id, ST_Length(ge.edge) AS length, ge.edge
FROM bd_segment_geom ge
JOIN bd ON ST_Touches(ge.edge, bd.geom)
GROUP BY bd.id, ge.sp, ge.ep, ge.edge
), e_lgth AS (
SELECT id, MAX(length) AS max, MIN(length) AS min FROM bd_segment_id
GROUP BY id
)
UPDATE bd SET b_maxEdge = max,
b_minEdge = min
FROM e_lgth
WHERE bd.id = e_lgth.id;
-- Number of stories
UPDATE bd SET b_stories = CASE
WHEN ROUND(b_height/3) < 1 THEN 1
ELSE ROUND(b_height/3)
END;
-- Floorspace
UPDATE bd SET b_floorsqm = b_area * b_stories;
```
### Compute the compacity indicators
```sql
-- Miller
UPDATE bd SET c_Miller = b_area / pow(.282 * b_perim, 2);
-- Schumm
UPDATE bd SET c_Schumm = 2 * sqrt(b_area / pi()) / (2 * rad)
FROM ccirc
WHERE bd.id = ccirc.id;
-- Haggett (TODO: verify this formula; it uses the approximated inscribed-circle diameter)
UPDATE bd SET c_Haggett = dm_inscr_c / (2 * rad)
FROM ccirc
WHERE bd.id = ccirc.id;
-- Lee & Sallee
WITH cercleaire AS (
SELECT bd.id, ST_Buffer(ST_Centroid(geom), sqrt(b_area / pi())) AS centcir FROM bd
), ops AS (
SELECT bd.id, ST_Area(ST_Intersection(centcir, geom)) AS intersec,
ST_Area(ST_Union(centcir, geom)) AS union_area
FROM bd, cercleaire
WHERE bd.id = cercleaire.id
)
UPDATE bd SET c_LeeSallee = intersec / union_area
FROM ops
WHERE bd.id = ops.id;
-- Ehrenburg
UPDATE bd SET c_Ehrenb = (pi() * pow(dm_inscr_c / 2, 2)) / b_area;
```
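As implemented above, the indices correspond to the following formulas, where $A$ is the building area, $P$ its perimeter, $R$ the circumscribed-circle radius, $d$ the approximated inscribed-circle diameter, $B$ the building polygon, and $C$ the circle of equal area centred on the centroid; the constant $0.282$ used for Miller approximates $1/(2\sqrt{\pi})$, so it matches the usual $4\pi A/P^2$ form:

$$\mathrm{Miller}=\frac{4\pi A}{P^{2}},\qquad \mathrm{Schumm}=\frac{2\sqrt{A/\pi}}{2R},\qquad \mathrm{Haggett}=\frac{d}{2R}$$

$$\mathrm{Lee\text{-}Sallee}=\frac{\mathrm{area}(B\cap C)}{\mathrm{area}(B\cup C)},\qquad \mathrm{Ehrenburg}=\frac{\pi\,(d/2)^{2}}{A}$$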
### Compute the bounding box indicators
```sql
-- Perim & area
UPDATE bd SET bb_perim = ST_Perimeter(bb),
bb_area = ST_Area(bb)
FROM bbox
WHERE bd.id = bbox.id;
-- Width & length
WITH bboxcoord AS ( -- Extremes coords
SELECT ST_XMin(bb) AS xmin,
ST_XMax(bb) AS xmax,
ST_YMin(bb) AS ymin,
ST_YMax(bb) AS ymax,
id
FROM bbox
), matridist AS ( -- only 3 points needed
SELECT ST_MakePoint(xmin, ymin) AS xymin,
ST_MakePoint(xmin, ymax) AS xmiyma,
ST_MakePoint(xmax, ymin) AS xmaymi,
id
FROM bboxcoord
), dist AS ( -- calculate both distances
SELECT ST_Distance(xymin, xmiyma) AS distab,
ST_Distance(xymin, xmaymi) AS distad,
id
FROM matridist
) -- Assign the longest as the length, shortest for the width
UPDATE bd SET bb_width = CASE
WHEN distab >= distad THEN distad
ELSE distab
END,
bb_length = CASE
WHEN distab >= distad THEN distab
ELSE distad
END
FROM dist
WHERE bd.id = dist.id;
-- Ratios
UPDATE bd SET bb_r_lw = bb_length / bb_width,
bb_r_area = b_area / bb_area,
bb_r_perim = b_perim / bb_perim;
```
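Because the Box2D bounding box is axis-aligned, the width and length can also be read directly from its coordinates. The following sketch is equivalent to the point-based computation above and relies only on the `bbox` view already defined:

```sql
-- Equivalent, more direct width/length computation from the axis-aligned box
UPDATE bd SET
    bb_width  = LEAST(ST_XMax(bb) - ST_XMin(bb), ST_YMax(bb) - ST_YMin(bb)),
    bb_length = GREATEST(ST_XMax(bb) - ST_XMin(bb), ST_YMax(bb) - ST_YMin(bb))
FROM bbox
WHERE bd.id = bbox.id;
```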
### Compute the circumscribed circle indicators
```sql
-- Radius
UPDATE bd SET cc_rad = rad
FROM ccirc
WHERE bd.id = ccirc.id;
-- Exchange index
WITH cercleaire AS ( -- Circle of same area centered on centroid
SELECT id, ST_Buffer(ST_Centroid(geom), sqrt(b_area / pi())) AS centcir, geom
FROM bd
), ops AS ( -- Area of intersection
SELECT id, ST_Area(ST_Intersection(centcir, geom)) AS intersec
FROM cercleaire
)
UPDATE bd SET cc_exch = intersec / b_area
FROM ops
WHERE bd.id = ops.id;
-- Detour index
UPDATE bd SET cc_detour = (2 * sqrt(pi() * b_area)) / ST_Perimeter(chull)
FROM convhull
WHERE bd.id = convhull.id;
```
### Compute the convex hull indicators
```sql
UPDATE bd SET ch_area = ST_Area(chull),
ch_perim = ST_Perimeter(chull),
ch_r_area = ST_Area(chull) / b_area,
ch_r_perim = ST_Perimeter(chull) / b_perim
FROM convhull
WHERE bd.id = convhull.id;
```
### Compute the skeleton and centerline indicators
```sql
-- Number of deadends
-- Create table with skeleton's points
CREATE TABLE skelpt AS
SELECT id, (ST_DumpPoints(skel)).geom
FROM skeleton;
-- Add an id to the points
ALTER TABLE skelpt ADD COLUMN sid SERIAL PRIMARY KEY;
WITH s_unq_pts AS (-- Select unique points of skeleton
SELECT DISTINCT id, ST_AsText(geom) FROM skelpt
), unq_pts_tot AS (-- Count them
SELECT id, COUNT(*) AS unqpts
FROM s_unq_pts
GROUP BY id
), s_mlt_pts AS (-- Select points seen several times
SELECT a.id, a.geom, a.sid
FROM skelpt AS a, skelpt AS b
WHERE ST_Equals(a.geom, b.geom) AND a.sid <> b.sid AND a.id = b.id
), s_mlt_diff AS (
SELECT DISTINCT ST_AsText(geom), id
FROM s_mlt_pts
), mlt_pts_tot AS (
SELECT id, COUNT(*) AS mltpts
FROM s_mlt_diff
GROUP BY id
), deadends AS (
SELECT unq_pts_tot.id AS id, unqpts - mltpts AS deadend
FROM unq_pts_tot, mlt_pts_tot
WHERE unq_pts_tot.id = mlt_pts_tot.id
)
UPDATE bd SET s_deadend = deadend
FROM deadends
WHERE bd.id = deadends.id;
-- Number of lines in centerline
WITH ctline AS (
SELECT id, (ST_Dump(ctl)).geom
FROM skeleton
), cttotline AS (
SELECT id, COUNT(*) AS tot
FROM ctline
GROUP BY id
)
UPDATE bd SET sc_lines = tot
FROM cttotline
WHERE bd.id = cttotline.id;
-- Centerline length
UPDATE bd SET sc_length = ST_Length(ctl)
FROM skeleton
WHERE bd.id = skeleton.id;
-- Number of centerline orientation
WITH nbor AS (
SELECT id, COUNT(DISTINCT (id, orlc)) AS totor
FROM sc_orien
GROUP BY id
)
UPDATE bd SET sc_orient = totor
FROM nbor
WHERE bd.id = nbor.id;
-- Length of centerline by specific orientation
-- TODO: this could probably be done in a single UPDATE instead of one statement per orientation
-- South - North
WITH sn AS (
SELECT DISTINCT id, orlc, ltot
FROM sum_orien
WHERE orlc = 'SN'
)
UPDATE bd SET sc_l_sn = ltot
FROM sn
WHERE bd.id = sn.id;
-- East-West
WITH ew AS (
SELECT DISTINCT id, orlc, ltot
FROM sum_orien
WHERE orlc = 'EW'
)
UPDATE bd SET sc_l_ew = ltot
FROM ew
WHERE bd.id = ew.id;
-- NorthEast - SouthWest
WITH nesw AS (
SELECT DISTINCT id, orlc, ltot
FROM sum_orien
WHERE orlc = 'SWNE'
)
UPDATE bd SET sc_l_nesw = ltot
FROM nesw
WHERE bd.id = nesw.id;
-- SouthEast - NorthWest
WITH senw AS (
SELECT DISTINCT id, orlc, ltot
FROM sum_orien
WHERE orlc = 'SENW'
)
UPDATE bd SET sc_l_senw = ltot
FROM senw
WHERE bd.id = senw.id;
-- Main orientation of centerline
WITH max_length AS (
SELECT b.id, b.orlc, ltot
FROM (
SELECT id, orlc, MAX(ltot) OVER (PARTITION BY id) max_long
FROM sum_orien
) a, sum_orien b
WHERE ltot = max_long
), main_or AS (
SELECT DISTINCT * FROM max_length
)
UPDATE bd SET sc_m_orient = orlc
FROM main_or
WHERE bd.id = main_or.id;
```
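The `skelpt` table is only a working table for the dead-end count; once the skeleton indicators are filled it can be removed (optional):

```sql
DROP TABLE IF EXISTS skelpt;
```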
### Compute the miscellaneous indicators
```sql
-- Average distance to corners
WITH ptspoly AS (
SELECT id, (ST_DumpPoints(geom)).geom AS ptsbld
FROM bd
), corners AS ( -- DISTINCT also drops the duplicated ring-closing point
SELECT DISTINCT ptsbld AS corner, id
FROM ptspoly
), centblg AS (
SELECT id, ST_Centroid(geom) AS center
FROM bd
), dists AS (
SELECT corners.id, AVG(ST_Distance(corner, center)) AS dist
FROM corners, centblg
WHERE corners.id = centblg.id
GROUP BY corners.id
)
UPDATE bd SET m_corndis = dist
FROM dists
WHERE bd.id = dists.id;
-- Number of inner courtyards
WITH holes AS (
SELECT id, SUM(ST_NumInteriorRings(geom)) AS tot_holes
FROM (SELECT id, (ST_Dump(geom)).geom As geom
FROM bd) AS a
GROUP BY id
)
UPDATE bd SET m_court = tot_holes
FROM holes
WHERE bd.id = holes.id;
-- Relative area of courtyard
WITH rings AS (
SELECT id,
(ST_DumpRings((ST_Dump(geom)).geom)).path[1] as n,
(ST_DumpRings((ST_Dump(geom)).geom)).geom as geom
FROM bd
), holes AS (
SELECT id, ST_Area(geom) AS area
FROM rings
WHERE n > 0
), holes_ar AS (
SELECT id, SUM(area) AS tot_area
FROM holes
GROUP BY id
)
UPDATE bd SET m_court_area = tot_area
FROM holes_ar
WHERE bd.id = holes_ar.id;
UPDATE bd SET m_court_rel_a = m_court_area / b_area;
SELECT id,
       b_perim, b_area, b_r_vol_fac, b_maxEdge, b_minEdge, b_stories, b_floorsqm,
       c_Miller, c_Schumm, c_Haggett, c_LeeSallee, c_Ehrenb,
       bb_perim, bb_area, bb_length, bb_width, bb_r_lw, bb_r_area, bb_r_perim,
       cc_rad, cc_exch, cc_detour,
       ch_area, ch_perim, ch_r_area, ch_r_perim,
       s_deadend, sc_lines, sc_length, sc_orient, sc_l_sn, sc_l_ew, sc_l_nesw, sc_l_senw,
       m_corndis, m_court, m_court_area, m_court_rel_a, geom
INTO indic_bldg
FROM bd;
```
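A quick way to check the result is to query the new table directly, for example listing the ten most compact buildings according to the Miller index (a sketch using the columns created above):

```sql
SELECT id, c_miller, b_area, b_perim
FROM indic_bldg
ORDER BY c_miller DESC
LIMIT 10;
```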
| 30.179435 | 381 | 0.6454 | yue_Hant | 0.852468 |
ce4472639b23b540d33e3e5d3fcbb408d8f6882d | 393 | md | Markdown | blog.md | chester-tan/chester-tan.github.io | 7bd03874384469e7dde7fa49c745f010cf7f6c61 | [
"MIT"
] | null | null | null | blog.md | chester-tan/chester-tan.github.io | 7bd03874384469e7dde7fa49c745f010cf7f6c61 | [
"MIT"
] | null | null | null | blog.md | chester-tan/chester-tan.github.io | 7bd03874384469e7dde7fa49c745f010cf7f6c61 | [
"MIT"
] | null | null | null | # Blog
Welcome to my blog! \:\) Here you can find my blog posts organised by their tags:
{% for tag in site.tags %}
<h3>{{ tag[0] }}</h3>
<ul>
{% for post in tag[1] %}
<li>
<a href="{{ post.url }}">{{ post.title }}</a>
{{ post.excerpt }}
</li>
{% endfor %}
</ul>
{% endfor %}
***
Subscribe to my [atom feed](https://chester-tan.com/feed.xml) \:\)
| 19.65 | 81 | 0.508906 | eng_Latn | 0.842736 |
ce45194b423ee9b06a401220459519317f7c1fe6 | 132 | md | Markdown | ERC/Documentation/api/index.md | Montycarlo/ERC.Xdbg | b2486b7d12a932222a0ba512b05af02706c4cf6d | [
"MIT"
] | 74 | 2020-02-22T03:44:17.000Z | 2022-03-28T10:56:58.000Z | ERC/Documentation/api/index.md | Montycarlo/ERC.Xdbg | b2486b7d12a932222a0ba512b05af02706c4cf6d | [
"MIT"
] | 3 | 2020-07-09T08:32:50.000Z | 2022-03-03T09:27:17.000Z | ERC/Documentation/api/index.md | Montycarlo/ERC.Xdbg | b2486b7d12a932222a0ba512b05af02706c4cf6d | [
"MIT"
] | 11 | 2020-05-15T10:35:15.000Z | 2022-02-14T23:38:22.000Z | # ERC.Net
In the left panel, you will find the API documentation for all accessible structures and functionality of the ERC.Net library.
| 44 | 121 | 0.80303 | eng_Latn | 0.978575 |
ce453e1a80d4fb31fa92f285a9307b7d770f9db9 | 1,516 | md | Markdown | _posts/2016-02-12-gulp-task-listing.md | meumobi/meumobi.github.io | a9b1a19061c1903aa49bb115702bd1b4f1d68847 | [
"MIT"
] | 1 | 2021-01-07T13:19:07.000Z | 2021-01-07T13:19:07.000Z | _posts/2016-02-12-gulp-task-listing.md | meumobi/meumobi.github.io | a9b1a19061c1903aa49bb115702bd1b4f1d68847 | [
"MIT"
] | 11 | 2015-06-05T05:40:38.000Z | 2019-10-25T03:00:35.000Z | _posts/2016-02-12-gulp-task-listing.md | meumobi/meumobi.github.io | a9b1a19061c1903aa49bb115702bd1b4f1d68847 | [
"MIT"
] | 1 | 2015-11-23T21:16:38.000Z | 2015-11-23T21:16:38.000Z | ---
layout: post
title: Provide an auto task listing for your gulpfile
categories: [Tips and tricks]
tags: [gulp]
author:
name: Victor Dias
email: [email protected]
github: elbidone
twitter: meumobi
bio: Sharing mobile Experiences
email_md5: 1cd012be2382e755aa763c66acc7cfa6
---
Would you like to type `gulp help` and automatically see the list of tasks in your gulpfile, organized by task/sub-task? The package [gulp-task-listing](https://www.npmjs.com/package/gulp-task-listing) is for you. By default, the output groups tasks based on whether or not they contain a hyphen (-), underscore (_), or colon (:) in their name. So the only thing you need to do is apply a naming convention to your tasks, which is not a bad idea anyway.
Below is an example of the output:
```
$ gulp help
Main Tasks
------------------------------
build
compile
help
Sub Tasks
------------------------------
build-css
build-js
compile-css
compile-js
```
Add the package to your gulpfile like so:
```
var gulp = require('gulp');
var taskListing = require('gulp-task-listing');
// Add a task to render the output
gulp.task('help', taskListing);
// Add some top-level and sub tasks
gulp.task('build', ['build-js', 'build-css']);
gulp.task('build-js', function() { ... })
gulp.task('build-css', function() { ... })
gulp.task('compile', ['compile-js', 'compile-css']);
gulp.task('compile-js', function() { ... })
gulp.task('compile-css', function() { ... })
```
Now run `gulp help`, and enjoy! | 29.153846 | 442 | 0.66095 | eng_Latn | 0.941399 |
ce460c0b747fb469b514fd5a0693db01c509e11b | 118 | md | Markdown | fonts/samples/InputMono.md | mmaher88/cmder | a4d15e6d3a9be1969c9bba95a2292c99a3eecec4 | [
"MIT"
] | null | null | null | fonts/samples/InputMono.md | mmaher88/cmder | a4d15e6d3a9be1969c9bba95a2292c99a3eecec4 | [
"MIT"
] | null | null | null | fonts/samples/InputMono.md | mmaher88/cmder | a4d15e6d3a9be1969c9bba95a2292c99a3eecec4 | [
"MIT"
] | null | null | null | #InputMono
![](https://cloud.githubusercontent.com/assets/8317250/7021760/2240b122-dd60-11e4-9314-6aad9f5df2a6.png)
| 39.333333 | 105 | 0.79661 | yue_Hant | 0.165278 |
ce467098e638cc51e442c0dbd8c86dbcf915c660 | 62 | md | Markdown | README.md | BlazingAsher/REFLECT | 8d606caa66283a5b6a3eed8e4f1a0d9f0a937ea6 | [
"MIT"
] | null | null | null | README.md | BlazingAsher/REFLECT | 8d606caa66283a5b6a3eed8e4f1a0d9f0a937ea6 | [
"MIT"
] | null | null | null | README.md | BlazingAsher/REFLECT | 8d606caa66283a5b6a3eed8e4f1a0d9f0a937ea6 | [
"MIT"
] | null | null | null | # REFLECT
An opensource event and program registration system
| 20.666667 | 51 | 0.83871 | eng_Latn | 0.942108 |
ce471f1a4e9d21719a76f353d63d58d12b1e191e | 1,340 | md | Markdown | _ebola/n14.md | elotroalex/frontlines | a4632d157e107d9f367d6765dd802e7c6799eec5 | [
"MIT"
] | null | null | null | _ebola/n14.md | elotroalex/frontlines | a4632d157e107d9f367d6765dd802e7c6799eec5 | [
"MIT"
] | 41 | 2020-07-13T21:00:58.000Z | 2021-06-02T13:57:24.000Z | _ebola/n14.md | cul/ds-frontlinenurses | afeb54b0eb582919e820155f41e92ae857d33b99 | [
"MIT"
] | null | null | null | ---
pid: n14
name: Annette Mwansa Nkowane, RN, RM, BSc, MA
first_name: Annette
middle_name: Mwansa
last_name: Nkowane
interviewed_by: Jennifer Dohrn
interview_date: August 16, 2019
interview_city: Monrovia
interview_country: Liberia
title: Former Technical Officer of Nursing and Midwifery, World Health Organization
(WHO) Headquarters, Geneva, Switzerland
bio: Annette Mwansa Nkowane is a nurse and a midwife from Zambia. During the Ebola
outbreak, Nkowane served as the Technical Officer of Nursing and Midwifery in the
World Health Organization's Health Workforce Department in Geneva. Previously, she
worked with other WHO departments including Mental Health and Substance Use and
Gender and Women's Health. Nkowane trained as a community health nurse before completing
a masters in human resource development. Prior to her position at the WHO, Nkowane
worked with International Federation of the Red Cross in the Health Department.
As part of Columbia University's On the Frontlines project, Nkowane conducted interviews
with Ebola nurses in Sierra Leone and Liberia in August 2019.
soundcloud_file: https://soundcloud.com/user-568440441/n14
soundcloud_api: '846958735'
order: '13'
layout: ebola_item
collection: ebola
thumbnail: img/derivatives/simple/n14/thumbnail.jpg
full: img/derivatives/simple/n14/full.jpg
---
| 44.666667 | 90 | 0.808955 | eng_Latn | 0.943659 |
ce47ef488cc07590e47e0c6361c101db91404f08 | 3,472 | md | Markdown | doc/Java/简述JVM基础(六):虚拟机字节码执行引擎.md | zengjingfang/AndroidBox | 84e4f63474e22edc8d4f1e9f0c7edf1a3be69a59 | [
"Apache-2.0"
] | 18 | 2016-11-04T07:37:42.000Z | 2021-12-23T09:30:48.000Z | doc/Java/简述JVM基础(六):虚拟机字节码执行引擎.md | zengjingfang/AndroidBox | 84e4f63474e22edc8d4f1e9f0c7edf1a3be69a59 | [
"Apache-2.0"
] | 49 | 2017-12-12T11:38:21.000Z | 2021-05-14T07:33:58.000Z | doc/Java/简述JVM基础(六):虚拟机字节码执行引擎.md | zengjingfang/AndroidBox | 84e4f63474e22edc8d4f1e9f0c7edf1a3be69a59 | [
"Apache-2.0"
] | 3 | 2017-10-16T13:19:59.000Z | 2021-01-21T09:42:53.000Z | # 一、前言
物理机的执行引擎是直接在物理硬件如CPU、操作系统、指令集上运行的,但是对于虚拟机来讲,他的执行引擎由自己实现。
执行引擎有统一的外观(**Java虚拟机规范**),不同类型的虚拟机都遵循了这一规范,**输入字节码文件,解析字节码处理,然后输出结果**。
# 二、运行时栈帧结构
![](https://docs.google.com/drawings/d/1HYoVAFuorwxiHTxnNyMiO8UdzgFLP9WpW323OjM-UM4/pub?w=657&h=446)
### 1、栈帧概念
栈帧(Stack Frame)用于支持方法调用和执行的数据结构,包含了局部变量表、操作数栈、动态连接和方法返回地址。
+ 局部变量表大小(max_locals),栈帧深度在编译时已经确定,并写入到了Code属性中;
+ 执行引擎运行的所有字节码指令都只针对当前栈进行操作;
### 2、局部变量表
局部变量表存储了方法参数以及方法内定义的局部变量。
+ **Slot(变量槽)**:**局部变量表容量最小单位**,可以存放32位以内的数据类型;
+ refrence:
+ 直接或者间接找到到该对象在“堆内存”中数据存放的起始地址索引;
+ 直接或者间接找到对象所属数据类型在方法区中存储的类型信息;
+ 局部变量表建立在线程的堆栈上,所以操作两个连续的slot是否为原子操作,都不会引起数据安全问题,但是如果是64位的话,不允许任何方式单独访问其中的一个;
+ **this**:实例方法(非static)默认**第一个**(第0位索引)slot为当前对象自己的引用;
+ slot重用:
+ 当前字节码的pc计数器**超出某个变量的作用域,那这个变量的slot可以交给别的变量使用**;
+ 影响到正常的Java垃圾回收机制;
+ 赋null:因为上述slot重用的原因,当方法域内前面有局部变量定义了大内存实际不再使用的变量,紧接着后面的代码又是一个耗时的操作,这个时候及时赋null就显得有大的意义。因为一旦触发后,这部分的slot就可以被重用了。看起来就像是方法区内部进行“类gc"操作一样。但是,并不是任何时候都要进行赋null.以恰当的变量作用域来控制变量回收时间才是最优雅的方式,并且赋null值操作在经过JIT编译优化后会被消除掉,这样的话实际是没有任何意义的。
+ 初始值:和类变量不同,**局部变量系统不会自动赋初始值**,所以没有赋值是无法使用的,编译都无法通过。即使通过,字节码校验阶段也会检查出来而导致类加载失败;
### 3、操作数栈(Operand Stack)
+ 操作栈,后入先出;
+ 最大深度:Code属性表中的max_stacks;
+ 32位数据类型所占栈容量为1,64位所占容量为2;
+ 栈元素的数据类型必须和栈指令保持一致
+ 两个栈帧之间可以存在一部分的重叠,共享数据,这样在方法调用的时候避免的额外的参数复制。
+ Java虚拟机的**解释执行引擎也是:基于栈的执行引擎**;
### 4、动态连接(Dynamic Linking)
字节码中的方法的调用都是通过常量池中指定方法的符号作为参数
+ 静态解析:这种符号有的是类加载阶段或者首次使用初始化的时候转化为直接的引用
+ 动态连接:另外一部分是在运行时转化为直接引用
### 5、方法返回地址
+ 退出:
+ 正常退出:遇到返回的字节码指令;
+ 异常退出:本方法异常表中没有匹配的异常;
+ 退出后,恢复上层方法的局部变量表和操作栈,有返回值就把返回值压入上层调用者的栈中;
# 三、方法调用
### 1、定义
确定被调用方法的版本
### 1、解析
+ 编译器可知,运行期不可变。这类方法的调用成为解析,在类加载阶段进行解析。
+ 静态方法、私有方法、实例构造器方法、父类方法,符合上述条件。特点是:
+ 只能被invokestatic和invokespecial指令调用
+ 不可继承或者重写,**编译时已经确定了一个版本**。
+ 在类加载时会把符合引用解析为该方法的直接引用。
+ 非虚方法(注意final也是非虚方法,其他的都是虚方法)
### 2、静态分派
+ 概念:根据静态类型来定位方法的执行版本
+ 典型代表:方法的重载(方法名相同,参数类型不同)
+ 发生时间:编译阶段
### 3、动态分派
+ 概念:调用invokevirtual时,把常量池中的类方法符号解析到了不同的直接引用上。
+ 典型代表:重写,多态的重要体现
+ 过程:
+ 执行invokevitual指令
+ 在虚方法表(类加载阶段,类变量初始化结束后会初始化虚方法表)中查找方法,没有向上的父类进行查找
+ 方法宗量:方法的接收者与方法参数的总称
+ 单分派和多分派:
+ 只有一个宗量作为方法的选择依据,称为单分派。多个,则称为多分派。
+ 当前的**Java是静态多分派、动态单分派的语言**;
# 四、动态语言支持
+ 特点:变量无类型,变量的值才有类型
+ invoke包:Java实现动态语言新增的包
# 五、指令集
+ 基于栈的指令集
+ 过程:入栈、计算、出栈
+ 优点:
+ 可移植性,不依赖于硬件
+ 代码紧凑
+ 缺点:
+ 速度较慢
+ 产生相当多的指令数量
+ 频繁内存访问
+ 基于寄存器的指令集
+ 代表:x86
# 六、方法内联
方法内联的方式是通过吧“目标方法”的代码复制到发起调用的方法内,避免真实的方法调用。
内联消除了方法调用的成本,还为其他优化手段建立良好的基础。
编译器在进行内联时,如果是非虚方法,那么直接内联。如果遇到虚方法,则会查询当前程序下是否有多个目标版本可供选择,如果查询结果只有一个版本,那么也可以内联,不过这种内联属于激进优化,需要预留一个逃生门(Guard条件不成立时的Slow Path),称为守护内联。
如果程序的后续执行过程中,虚拟机一直没有加载到会令这个方法的接受者的继承关系发现变化的类,那么内联优化的代码可以一直使用。否则需要抛弃掉已经编译的代码,退回到解释状态执行,或者重新进行编译。
# 七、逃逸分析
逃逸分析的基本行为就是分析对象动态作用域:当一个对象在方法里面被定义后,它可能被外部方法所引用,这种行为被称为方法逃逸。被外部线程访问到,被称为线程逃逸。
如果对象不会逃逸到方法或线程外,可以做什么优化?
+ 栈上分配:一般对象都是分配在Java堆中的,对于各个线程都是共享和可见的,只要持有这个对象的引用,就可以访问堆中存储的对象数据。但是垃圾回收和整理都会耗时,如果一个对象不会逃逸出方法,可以让这个对象在栈上分配内存,对象所占用的内存空间就可以随着栈帧出栈而销毁。如果能使用栈上分配,那大量的对象会随着方法的结束而自动销毁,垃圾回收的压力会小很多。
+ 同步消除:线程同步本身就是很耗时的过程。如果逃逸分析能确定一个变量不会逃逸出线程,那这个变量的读写肯定就不会有竞争,同步措施就可以消除掉。
+ 标量替换:不创建这个对象,直接创建它的若干个被这个方法使用到的成员变量来替换。
# 五、小结
在前面我们已经了解到栈帧、方法区的内存时线程私有的,本篇更加详细的讲了方法是怎么找到并执行的。**Java虚拟机规范:输入字节码,解析字节码处理,输出结果。**首先,栈帧包含了局部变量表、操作数栈、动态连接、方法返回地址。字节码中的方法都是通过常量池中的符号作为参数指定的,有些编译解析确定,有些运行行时转化为直接引用。首先记住,**JVM是基于栈的执行引擎**。栈有着先入后出的特点,执行引擎的指令也仅执行当前栈。而局部变量表存储了方法内需要的变量信息,是以**Slot** 为单位进行存储,超出操作域后,原本占用的内存区域可以被其他的局部变量使用,类似“回收”。然后,记住**Java是静态多分派,动态单分派**的语言。静态分派,如方法的重载。通过方法的参数不同就可以确定要调用哪个方法,这个再编译阶段就定好。动态分派,如方法的重写。执行方法时,有一个虚方法表。这这个表里搜索,自己有就执行自己的,没有向上找父类的。这个是Java实现多态的重要原理。Java也有支持动态语言的invoke包,平时用的较少。 | 26.707692 | 455 | 0.80674 | yue_Hant | 0.582619 |
ce48e666b6eab1b7ab376c8b03206a0024ef4afa | 2,257 | md | Markdown | docs/framework/unmanaged-api/profiling/icorprofilerinfo-gettokenandmetadatafromfunction-method.md | leowsouza/docs.pt-br-1 | 67ce30b4075f0d05985fa2d2b314c35a6d8d7adf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/profiling/icorprofilerinfo-gettokenandmetadatafromfunction-method.md | leowsouza/docs.pt-br-1 | 67ce30b4075f0d05985fa2d2b314c35a6d8d7adf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/profiling/icorprofilerinfo-gettokenandmetadatafromfunction-method.md | leowsouza/docs.pt-br-1 | 67ce30b4075f0d05985fa2d2b314c35a6d8d7adf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Método ICorProfilerInfo::GetTokenAndMetadataFromFunction
ms.date: 03/30/2017
api_name:
- ICorProfilerInfo.GetTokenAndMetadataFromFunction
api_location:
- mscorwks.dll
api_type:
- COM
f1_keywords:
- ICorProfilerInfo::GetTokenAndMetadataFromFunction
helpviewer_keywords:
- ICorProfilerInfo::GetTokenAndMetadataFromFunction method [.NET Framework profiling]
- GetTokenAndMetadataFromFunction method [.NET Framework profiling]
ms.assetid: e525aa16-c923-4b16-833b-36f1f0dd70fc
topic_type:
- apiref
ms.openlocfilehash: b3e14230888e9bf846879d5728c2b20883fb8d53
ms.sourcegitcommit: 9a39f2a06f110c9c7ca54ba216900d038aa14ef3
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/23/2019
ms.locfileid: "74438734"
---
# <a name="icorprofilerinfogettokenandmetadatafromfunction-method"></a>Método ICorProfilerInfo::GetTokenAndMetadataFromFunction
Obtém o token de metadados e uma instância de interface de metadados que podem ser usados em relação ao token para a função especificada.
## <a name="syntax"></a>Sintaxe
```cpp
HRESULT GetTokenAndMetaDataFromFunction(
[in] FunctionID functionId,
[in] REFIID riid,
[out] IUnknown **ppImport,
[out] mdToken *pToken);
```
## <a name="parameters"></a>Parâmetros
`functionId`
no A ID da função para a qual obter o token de metadados e a interface de metadados.
`riid`
no A ID de referência da interface de metadados para obter a instância do.
`ppImport`
fora Um ponteiro para o endereço da instância da interface de metadados que pode ser usada em relação ao token para a função especificada.
`pToken`
fora Um ponteiro para o token de metadados para a função especificada.
## <a name="requirements"></a>{1>{2>Requisitos<2}<1}
**Plataformas:** confira [Requisitos do sistema](../../../../docs/framework/get-started/system-requirements.md).
**Cabeçalho:** CorProf. idl, CorProf. h
**Biblioteca:** CorGuids.lib
**Versões do .NET Framework:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
## <a name="see-also"></a>Consulte também
- [Interface ICorProfilerInfo](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-interface.md)
| 35.825397 | 141 | 0.745237 | por_Latn | 0.551497 |
ce494a9130f84f1f83d8b2d39bc372d962dd65a3 | 203 | md | Markdown | src/Malcaba.WeightWatcher/README.md | dmalcaba/blazor-sharp | 036753c5ced41250b3c4384cd51cd73e27295f71 | [
"MIT"
] | null | null | null | src/Malcaba.WeightWatcher/README.md | dmalcaba/blazor-sharp | 036753c5ced41250b3c4384cd51cd73e27295f71 | [
"MIT"
] | null | null | null | src/Malcaba.WeightWatcher/README.md | dmalcaba/blazor-sharp | 036753c5ced41250b3c4384cd51cd73e27295f71 | [
"MIT"
] | null | null | null | # Weight Watcher App
Created with Blazor Wasm (not ASP hosted, not PWA)
# Change Tracking
*February 26, 2021*
- Created Project
*February 27, 2021*
- Add Weight Watcher page with hard-coded data | 15.615385 | 50 | 0.729064 | eng_Latn | 0.981473 |
ce49d00d95e9c2de6fbfaa3ce7d459546774abcc | 77 | md | Markdown | README.md | FRABDYN/FractalDimensions | 9d3a27e4f3f5d005dc89e1a0a4e9a495cfccd3dc | [
"MIT"
] | null | null | null | README.md | FRABDYN/FractalDimensions | 9d3a27e4f3f5d005dc89e1a0a4e9a495cfccd3dc | [
"MIT"
] | null | null | null | README.md | FRABDYN/FractalDimensions | 9d3a27e4f3f5d005dc89e1a0a4e9a495cfccd3dc | [
"MIT"
] | null | null | null | # FractalDimensions
Fractal dimensions and two-dimensional slow-fast systems
| 25.666667 | 56 | 0.857143 | eng_Latn | 0.924733 |
ce4ba1015b3b57c8740d9d2ad1e613d3470feab4 | 131 | md | Markdown | Traceroute.md | namnamir/pentest | 5ba8090750cae851b415b5438eadbddb280f0a9b | [
"MIT"
] | null | null | null | Traceroute.md | namnamir/pentest | 5ba8090750cae851b415b5438eadbddb280f0a9b | [
"MIT"
] | null | null | null | Traceroute.md | namnamir/pentest | 5ba8090750cae851b415b5438eadbddb280f0a9b | [
"MIT"
] | null | null | null | ```Bash
# use ICMP packets instead of UDP packets
# it helps to identify firewalls (not 100% reliable)
traceroute -I
``` | 26.2 | 56 | 0.664122 | eng_Latn | 0.968761 |
ce4ccc176699d6563d0520521f61b17ec4b4111f | 1,084 | md | Markdown | includes/app-service-deploy-network-secured-sites.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 66 | 2017-07-09T03:34:12.000Z | 2022-03-05T21:27:20.000Z | includes/app-service-deploy-network-secured-sites.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 671 | 2017-06-29T16:36:35.000Z | 2021-12-03T16:34:03.000Z | includes/app-service-deploy-network-secured-sites.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 171 | 2017-07-25T06:26:46.000Z | 2022-03-23T09:07:10.000Z | ---
title: Archivo de inclusión
description: Archivo de inclusión
services: app-service
author: jasonfreeberg
ms.service: app-service
ms.topic: include
ms.date: 08/27/2021
ms.author: jafreebe
ms.custom: include file
ms.openlocfilehash: a3030eedcc6b00457338f4a71c27965261280a80
ms.sourcegitcommit: 40866facf800a09574f97cc486b5f64fced67eb2
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/30/2021
ms.locfileid: "123225444"
---
En función de la configuración de redes de las aplicaciones web, puede que se bloquee el acceso directo al sitio desde el entorno local. Para implementar el código en este escenario, puede publicar el paquete ZIP en un sistema de almacenamiento al que se pueda acceder desde la aplicación web y desencadenar la aplicación para *extraer* el paquete ZIP de la ubicación de almacenamiento, en lugar de *insertarlo* en la aplicación web. Consulte [este artículo sobre la implementación en aplicaciones web protegidas por red](https://azure.github.io/AppService/2021/03/01/deploying-to-network-secured-sites-2.html) para obtener más información.
| 57.052632 | 641 | 0.818266 | spa_Latn | 0.951837 |
ce4dedacc760952190b04ff05650dc6d0da09748 | 9,770 | md | Markdown | mdop/appv-v5/migrating-from-a-previous-version-app-v-50.md | MicrosoftDocs/mdop-docs-pr.it-it | c0a4de3a5407dee9cb0e7e8af61643dc2fc9ecf2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-20T21:13:51.000Z | 2021-04-20T21:13:51.000Z | mdop/appv-v5/migrating-from-a-previous-version-app-v-50.md | MicrosoftDocs/mdop-docs-pr.it-it | c0a4de3a5407dee9cb0e7e8af61643dc2fc9ecf2 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-08T05:27:50.000Z | 2020-07-08T15:39:35.000Z | mdop/appv-v5/migrating-from-a-previous-version-app-v-50.md | MicrosoftDocs/mdop-docs-pr.it-it | c0a4de3a5407dee9cb0e7e8af61643dc2fc9ecf2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-04T12:31:26.000Z | 2021-11-04T12:31:26.000Z | ---
title: Migrazione da una versione precedente
description: Migrazione da una versione precedente
author: dansimp
ms.assetid: a13cd353-b22a-48f7-af1e-5d54ede2a7e5
ms.reviewer: ''
manager: dansimp
ms.author: dansimp
ms.pagetype: mdop, appcompat, virtualization
ms.mktglfcycl: deploy
ms.sitesec: library
ms.prod: w10
ms.date: 08/30/2016
ms.openlocfilehash: a05bbd498cdb77a1ddf694b1aab6aeb42124775b
ms.sourcegitcommit: 354664bc527d93f80687cd2eba70d1eea024c7c3
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 06/26/2020
ms.locfileid: "10805067"
---
# Migrazione da una versione precedente
Con App-V 5,0 puoi eseguire la migrazione dell'infrastruttura App-V 4,6 esistente a quella più flessibile, integrata e più facile da gestire dell'infrastruttura App-V 5,0.
Quando si pianifica la strategia di migrazione, prendere in considerazione le sezioni seguenti:
**Nota** Per altre informazioni sulle differenze tra App-V 4,6 e App-V 5,0, Vedi le **differenze tra la sezione App-v 4,6 e App-v 5,0** di [About app-v 5,0](about-app-v-50.md).
## Conversione di pacchetti creati con una versione precedente di App-V
Usare l'utilità Convertitore pacchetti per aggiornare i pacchetti di applicazioni virtuali creati con le versioni precedenti di App-V. Il convertitore di pacchetti usa PowerShell per convertire i pacchetti e può aiutare a automatizzare il processo se sono presenti molti pacchetti che richiedono la conversione.
**Importante** Dopo aver convertito un pacchetto esistente, è consigliabile testare il pacchetto prima di distribuire il pacchetto per verificare che il processo di conversione abbia avuto esito positivo.
**Informazioni utili prima di convertire i pacchetti esistenti**
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Problema</th>
<th align="left">Soluzione alternativa</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p>Gli script di pacchetto non vengono convertiti.</p></td>
<td align="left"><p>Testare il pacchetto convertito. Se necessario, convertire lo script.</p></td>
</tr>
<tr class="even">
<td align="left"><p>Le override dell'impostazione del registro di sistema non vengono convertite.</p></td>
<td align="left"><p>Testare il pacchetto convertito. Se necessario, aggiungere di nuovo gli override del registro di sistema.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>I pacchetti virtuali che usano DSC non sono collegati dopo la conversione.</p></td>
<td align="left"><p>Collegare i pacchetti usando i gruppi di connessioni. Vedere <a href="managing-connection-groups.md" data-raw-source="[Managing Connection Groups](managing-connection-groups.md)"> gestione dei gruppi di connessioni </a> .</p></td>
</tr>
<tr class="even">
<td align="left"><p>I conflitti di variabili di ambiente vengono rilevati durante la conversione.</p></td>
<td align="left"><p>Risolvere i conflitti nel <strong> file OSD associato </strong> .</p></td>
</tr>
<tr class="odd">
<td align="left"><p>I percorsi hardcoded vengono rilevati durante la conversione.</p></td>
<td align="left"><p>I percorsi hardcoded sono difficili da convertire correttamente. Il convertitore di pacchetti rileverà e restituirà pacchetti con file che contengono percorsi hardcoded. Visualizzare il file con il percorso hardcoded e determinare se il pacchetto richiede il file. In questo caso, è consigliabile ripetere la sequenza del pacchetto.</p></td>
</tr>
</tbody>
</table>
Quando si converte un pacchetto, verificare la mancanza di file o tasti di scelta rapida. Individuare l'elemento nel pacchetto App-V 4,6. Potrebbe essere un percorso hardcoded. Convertire il percorso.
**Nota** È consigliabile usare il sequencer App-V 5,0 per la conversione di applicazioni o applicazioni critiche che devono sfruttare le funzionalità. Vedere [come sequenziare una nuova applicazione con App-V 5,0](how-to-sequence-a-new-application-with-app-v-50-beta-gb18030.md).
Se un pacchetto convertito non si apre dopo averlo convertito, è anche consigliabile ripetere la sequenza dell'applicazione usando il sequencer App-V 5,0.
[Come convertire un pacchetto creato in una versione precedente di App-V](how-to-convert-a-package-created-in-a-previous-version-of-app-v.md)
## Migrazione dei client
Nella tabella seguente viene visualizzato il metodo consigliato per l'aggiornamento dei client.
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Attività</th>
<th align="left">Altre informazioni</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p>Aggiornare l'ambiente a App-V 4.6 SP2</p></td>
<td align="left"><p><a href="../appv-v4/application-virtualization-deployment-and-upgrade-considerations-copy.md" data-raw-source="[Application Virtualization Deployment and Upgrade Considerations](../appv-v4/application-virtualization-deployment-and-upgrade-considerations-copy.md)">Considerazioni sulla distribuzione e l'aggiornamento della virtualizzazione delle applicazioni </a> .</p></td>
</tr>
<tr class="even">
<td align="left"><p>Installare il client App-V 5,0 con la coesistenza abilitata.</p></td>
<td align="left"><p><a href="how-to-deploy-the-app-v-46-and-the-app-v--50-client-on-the-same-computer.md" data-raw-source="[How to Deploy the App-V 4.6 and the App-V 5.0 Client on the Same Computer](how-to-deploy-the-app-v-46-and-the-app-v--50-client-on-the-same-computer.md)">Come distribuire l'App-V 4,6 e il client App-V 5,0 nello stesso computer </a> .</p></td>
</tr>
<tr class="odd">
<td align="left"><p>Sequenziare e distribuire pacchetti App-V 5,0. Se necessario, Annulla la pubblicazione di pacchetti App-V 4,6.</p></td>
<td align="left"><p><a href="how-to-sequence-a-new-application-with-app-v-50-beta-gb18030.md" data-raw-source="[How to Sequence a New Application with App-V 5.0](how-to-sequence-a-new-application-with-app-v-50-beta-gb18030.md)">Come sequenziare una nuova applicazione con App-V 5,0 </a> .</p></td>
</tr>
</tbody>
</table>
**Importante** È necessario eseguire App-V 4.6 SP3 per usare la modalità di coesistenza. Inoltre, quando si sequenzia un pacchetto, è necessario configurare l'impostazione dell'autorità di gestione, che si trova nella **Configurazione utente** nella sezione **Configurazione utente** .
## Migrazione dell'infrastruttura completa del server App-V 5,0
Non esiste un metodo diretto per eseguire l'aggiornamento a un'infrastruttura App-V 5,0 completa. Usare le informazioni nella sezione seguente per informazioni sull'aggiornamento del server App-V.
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Attività</th>
<th align="left">Altre informazioni</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p>Aggiornare l'ambiente in App-V 4.6 SP3.</p></td>
<td align="left"><p><a href="../appv-v4/application-virtualization-deployment-and-upgrade-considerations-copy.md" data-raw-source="[Application Virtualization Deployment and Upgrade Considerations](../appv-v4/application-virtualization-deployment-and-upgrade-considerations-copy.md)">Considerazioni sulla distribuzione e l'aggiornamento della virtualizzazione delle applicazioni </a> .</p></td>
</tr>
<tr class="even">
<td align="left"><p>Distribuire App-V 5,0 versione del client.</p></td>
<td align="left"><p><a href="how-to-deploy-the-app-v-client-gb18030.md" data-raw-source="[How to Deploy the App-V Client](how-to-deploy-the-app-v-client-gb18030.md)">Come distribuire il client App-V </a> .</p></td>
</tr>
<tr class="odd">
<td align="left"><p>Installare App-V 5,0 Server.</p></td>
<td align="left"><p><a href="how-to-deploy-the-app-v-50-server-50sp3.md" data-raw-source="[How to Deploy the App-V 5.0 Server](how-to-deploy-the-app-v-50-server-50sp3.md)">Come distribuire il server App-V 5,0 </a> .</p></td>
</tr>
<tr class="even">
<td align="left"><p>Eseguire la migrazione dei pacchetti esistenti.</p></td>
<td align="left"><p>Vedere i <strong> pacchetti di conversione creati con una versione precedente della sezione App-V </strong> di questo articolo.</p></td>
</tr>
</tbody>
</table>
## Altre attività di migrazione
È anche possibile eseguire altre attività di migrazione, come la riconfigurazione dei punti finali e l'apertura di un pacchetto creato con una versione precedente in un computer che esegue il client App-V 5,0. I collegamenti seguenti includono ulteriori informazioni sull'esecuzione di queste attività.
[Come eseguire la migrazione dei punti di estensione da un pacchetto App-V 4.6 a un pacchetto App-V 5.0 convertito per tutti gli utenti in un computer specifico](how-to-migrate-extension-points-from-an-app-v-46-package-to-a-converted-app-v-50-package-for-all-users-on-a-specific-computer.md)
[Come eseguire la migrazione dei punti di estensione da un pacchetto App-V 4.6 ad App-V 5.0 per un utente specifico](how-to-migrate-extension-points-from-an-app-v-46-package-to-app-v-50-for-a-specific-user.md)
[Come ripristinare i punti di estensione da un pacchetto App-V 5.0 a un pacchetto App-V 4.6 per tutti gli utenti in un computer specifico](how-to-revert-extension-points-from-an-app-v-50-package-to-an-app-v-46-package-for-all-users-on-a-specific-computer.md)
[Come ripristinare i punti di estensione da un pacchetto App-V 5.0 a un pacchetto App-V 4.6 per un utente specifico](how-to-revert-extension-points-from-an-app-v-50-package-to-an-app-v-46-package-for-a-specific-user.md)
## Altre risorse per l'esecuzione di attività di migrazione App-V
[Operazioni per App-V 5.0](operations-for-app-v-50.md)
[Procedura di aggiornamento del server di gestione di Microsoft App-V 5,1 semplificata](https://go.microsoft.com/fwlink/p/?LinkId=786330)
| 48.85 | 394 | 0.755476 | ita_Latn | 0.977346 |
ce4ef379e20848c9de7b0b25923a83a75200b208 | 1,461 | md | Markdown | lift-json/benchmark/README.md | pyronicide/liftweb | 47559def125f99813b79ffd4c677db92063857d4 | [
"Apache-2.0"
] | 1 | 2017-07-22T07:43:14.000Z | 2017-07-22T07:43:14.000Z | lift-json/benchmark/README.md | pyronicide/liftweb | 47559def125f99813b79ffd4c677db92063857d4 | [
"Apache-2.0"
] | null | null | null | lift-json/benchmark/README.md | pyronicide/liftweb | 47559def125f99813b79ffd4c677db92063857d4 | [
"Apache-2.0"
] | null | null | null | Benchmarking standard Scala Json parser, Jackson parser and lift-json parser
----------------------------------------------------------------------------
Benchmark measures how long it takes to parse 50 000 times the first JSON document
from http://www.json.org/example.html.
Facts:
* Ubuntu 8.10
* Lenovo T60p
* Scala 2.7.4
* java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) Server VM (build 11.0-b15, mixed mode)
* Exec: scala Jsonbench
Parsing 50 000 json documents:
Scala std 167127 ms
Jackson 370 ms
lift-json 549 ms
Summary:
* Jackson was fastest.
* Lift Json was about 300 times faster than standard Scala parser.
Serialization benchmark, Java serialization and lift-json
---------------------------------------------------------
See Serbench.scala
Facts:
* Ubuntu 8.10
* Lenovo T60p
* Scala 2.7.4
* java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) Server VM (build 11.0-b15, mixed mode)
* Exec: scala Serbench
Serializing 20 000 instances:
Java serialization (full) 1948 ms
lift-json (full) 1981 ms
Java serialization (ser) 373 ms
lift-json (ser) 997 ms
Java serialization (deser) 1396 ms
lift-json (deser) 772 ms
Summary:
* Total time about same (serialization + deserialization).
* Java serializes faster.
* lift-json deserializes faster.
| 25.631579 | 82 | 0.631075 | eng_Latn | 0.393276 |
ce4f03fa08c7da481925878a4c5eaf829be050a7 | 546 | md | Markdown | README.md | CROmetrics/gatsby-plugin-optimizely-js | 0811932551a4a6eeccdb5d0a5860627c0310f50e | [
"MIT"
] | null | null | null | README.md | CROmetrics/gatsby-plugin-optimizely-js | 0811932551a4a6eeccdb5d0a5860627c0310f50e | [
"MIT"
] | null | null | null | README.md | CROmetrics/gatsby-plugin-optimizely-js | 0811932551a4a6eeccdb5d0a5860627c0310f50e | [
"MIT"
] | null | null | null | # gatsby-plugin-optimizely-js
A Gatsby plugin to add an [Optimizely JS Snippet](https://support.optimizely.com/hc/en-us/articles/4411731640973) to your site.
## Install
`$ npm install --save @crometrics/gatsby-plugin-optimizely-js`
## How to use
### Setup
In your gatsby-config.js file:
```javascript
plugins: [
{
resolve: `@crometrics/gatsby-plugin-optimizely-js`,
options: {
// The optimizely id of the project.
// This is the number that appears in the snippet.
optimizelyId: '123456789',
}
}
];
```
| 19.5 | 127 | 0.673993 | eng_Latn | 0.730527 |
ce4f33553170f3c932405037a9bec9855ec71fcb | 1,113 | md | Markdown | README.md | Madh93/stallman_bot | 9d1ef52ec9869cae614c3f0461cb5f6ab3e80ae6 | [
"MIT"
] | 2 | 2016-11-25T07:22:50.000Z | 2017-08-25T17:40:15.000Z | README.md | Madh93/stallman_bot | 9d1ef52ec9869cae614c3f0461cb5f6ab3e80ae6 | [
"MIT"
] | 1 | 2019-07-13T10:06:52.000Z | 2019-07-13T10:06:52.000Z | README.md | Madh93/stallman_bot | 9d1ef52ec9869cae614c3f0461cb5f6ab3e80ae6 | [
"MIT"
] | null | null | null | # Stallman_bot
A Richard Stallman bot for Telegram based on [Slack Hubot](https://github.com/interwho/stallman-bot) by Justin Paulin.
![](http://oi67.tinypic.com/2mes3lg.jpg)
## Installation
Install gem:
gem install stallman_bot
Or clone:
git clone https://github.com/Madh93/stallman_bot && cd stallman_bot
And install dependencies:
bundle
And install stallman_bot:
bundle exec rake install
## Usage
stallman_bot [OPTIONS]
Start stallman_bot with default config (or custom config if it finds a `bot.yaml` in path):
stallman_bot
Load custom config explicitly:
stallman_bot --config=configs/cool_config.yaml
For the rest of options:
stallman_bot --help
## Contributing
1. Fork it ( https://github.com/Madh93/stallman_bot/fork )
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request
## License
The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
| 21.823529 | 118 | 0.735849 | eng_Latn | 0.537323 |
ce4fcf50c07ea4fff206381514490acb0256beef | 1,131 | md | Markdown | python/124_binary_tree_maximum_path_sum.md | hanleilei/leetcode | 166bbebbb4c7dfcab69dabe4abc2ac06dc028b38 | [
"MIT"
] | null | null | null | python/124_binary_tree_maximum_path_sum.md | hanleilei/leetcode | 166bbebbb4c7dfcab69dabe4abc2ac06dc028b38 | [
"MIT"
] | null | null | null | python/124_binary_tree_maximum_path_sum.md | hanleilei/leetcode | 166bbebbb4c7dfcab69dabe4abc2ac06dc028b38 | [
"MIT"
] | 1 | 2020-06-12T05:13:22.000Z | 2020-06-12T05:13:22.000Z | # binary tree maximum path sum
Given a non-empty binary tree, find the maximum path sum.
For this problem, a path is defined as any sequence of nodes from some starting node to any node in the tree along the parent-child connections. The path must contain at least one node and does not need to go through the root.
Example 1:
```
Input: [1,2,3]
1
/ \
2 3
Output: 6
```
Example 2:
```
Input: [-10,9,20,null,null,15,7]
-10
/ \
9 20
/ \
15 7
Output: 42
```
```python
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution:
def maxPathSum(self, root):
"""
:type root: TreeNode
:rtype: int
"""
def maxsums(node):
if not node:
return [-2**31] * 2
left = maxsums(node.left)
right = maxsums(node.right)
return [node.val + max(left[0], right[0], 0),
max(left + right + [node.val + left[0] + right[0]])]
return max(maxsums(root))
```
| 21.339623 | 226 | 0.552608 | eng_Latn | 0.971118 |
ce50149b5bb99ba9b840891b0affde6f7f4ed2d4 | 277 | md | Markdown | _posts/guide/templates/2015-12-02-promocion-datos-abiertos.md | mendozagioo/dgm-guia | e5347d62d48ee76eb06b1d19f6377c27559c7c40 | [
"MIT"
] | null | null | null | _posts/guide/templates/2015-12-02-promocion-datos-abiertos.md | mendozagioo/dgm-guia | e5347d62d48ee76eb06b1d19f6377c27559c7c40 | [
"MIT"
] | null | null | null | _posts/guide/templates/2015-12-02-promocion-datos-abiertos.md | mendozagioo/dgm-guia | e5347d62d48ee76eb06b1d19f6377c27559c7c40 | [
"MIT"
] | null | null | null | ---
published: true
order: 9
title: ¿Cómo promover los Datos Abiertos?
date: 2015-12-02
hover_link: /guia/manuales-plantillas/miniguia-promocion.html
hide_link: true
section: templates
category: templates
---
Consulta estas recomendaciones para promover los Datos Abiertos.
| 18.466667 | 64 | 0.790614 | spa_Latn | 0.583973 |
ce5162d98db5d4ae7e9fb8ffbcd168c669f2863a | 105 | md | Markdown | swagger-bootstrap-ui-doc/en/code-des.md | lichongbing/lswagger | cf8529d168b835ec7f13490fb790ac01b9a43066 | [
"Apache-2.0"
] | 2,487 | 2019-05-10T02:45:12.000Z | 2022-03-31T09:31:15.000Z | swagger-bootstrap-ui-doc/en/code-des.md | lichongbing/lswagger | cf8529d168b835ec7f13490fb790ac01b9a43066 | [
"Apache-2.0"
] | 337 | 2019-05-09T13:45:36.000Z | 2022-03-29T06:12:10.000Z | swagger-bootstrap-ui-doc/en/code-des.md | lichongbing/lswagger | cf8529d168b835ec7f13490fb790ac01b9a43066 | [
"Apache-2.0"
] | 419 | 2019-05-10T06:41:16.000Z | 2022-03-29T06:38:54.000Z | I hope this code description will help more people understand Springfox-Swagger and SwaggerBootstrapUi.
| 35 | 103 | 0.847619 | eng_Latn | 0.994888 |
ce5230ddaaabc14751a1a16bde756bfd3a88e20f | 85 | md | Markdown | README.md | cstoquer/rtc-cars | 109af811ad8d37d1dd87c5345972245c62cf9818 | [
"MIT"
] | null | null | null | README.md | cstoquer/rtc-cars | 109af811ad8d37d1dd87c5345972245c62cf9818 | [
"MIT"
] | null | null | null | README.md | cstoquer/rtc-cars | 109af811ad8d37d1dd87c5345972245c62cf9818 | [
"MIT"
] | null | null | null | # rtc-cars
small test project using Web-RTC for real-time peer-to-peer communication
| 28.333333 | 73 | 0.8 | eng_Latn | 0.953399 |
ce52437e4c98a838ad014a73c6cb7148efcf7596 | 9,084 | md | Markdown | markdown_files/10.1101-2020.04.28.20083089.md | cbrueffer/covid-19_sinai_reviews | c5e924b8d406e488f85b330afe5f31a2b9c5d687 | [
"CC-BY-4.0"
] | 8 | 2020-04-04T13:50:29.000Z | 2020-04-29T13:54:45.000Z | markdown_files/10.1101-2020.04.28.20083089.md | cbrueffer/covid-19_sinai_reviews | c5e924b8d406e488f85b330afe5f31a2b9c5d687 | [
"CC-BY-4.0"
] | 9 | 2020-04-04T13:50:19.000Z | 2020-04-08T11:34:45.000Z | markdown_files/10.1101-2020.04.28.20083089.md | cbrueffer/covid-19_sinai_reviews | c5e924b8d406e488f85b330afe5f31a2b9c5d687 | [
"CC-BY-4.0"
] | 7 | 2020-04-04T13:33:16.000Z | 2020-07-31T16:54:05.000Z | **A possible role of immunopathogenesis in COVID-19 progression**
Anft M., Paniskaki K, Blazquez-Navarro A t al.; medRxiv
2020.04.28.20083089; https://doi.org/10.1101/2020.04.28.20083089
***Keywords***
- SARS-CoV-2 spike protein-specific T cells
- COVID-19
- adaptive immunity
***Main findings ***
In this preprint, 53 hospitalized COVID-19 patients, enrolled in a
prospective study at a tertiary care center in Germany, were assigned to
moderate (n=21; light pneumonia), severe (n=18; fever or respiratory
tract infection with respiratory rate >30/min, severe dyspnea, or
resting SpO~2~ <90%), and critical subgroups (n=14; ARDS, sepsis, or
septic shock) according to clinical disease. Moderately and severely ill
patients with a PCR-confirmed diagnosis were recruited within four days
of clinical onset, whereas critically ill patients were enrolled on
average within 14 days of diagnosis on admission to ICU. To account for
the overall longer hospital stay in ICU cases prior to inclusion,
repeated blood samples were obtained from moderately and severely ill
donors within eight days post recruitment. For 10 out of 14 ICU
patients, no follow up blood samples were collected. At recruitment as
well as on follow-up, circulating lymphocyte counts were below reference
range in the majority of enrolled COVID-19 patients. Relative
frequencies were significantly reduced in critically *vs*. moderately,
but not *vs*. severely ill individuals, with substantially lower NK as
well as CD8 T cells counts, and a concomitant increase of the CD4:CD8 T
cell ratio in ICU patients. Basic phenotypic and immune cell subset
analysis by flow cytometry detected lower frequencies of central memory
CD4 T cells as well as reduced terminally differentiated CD8 Temra cells
in critical COVID-19. Moreover, a decrease in activated HLA-DR^+^ CD4
and CD8 T cells as well as in cytolytic CD57^+^ CD8 T cells was observed
in critical *vs*. severe/moderate disease. Similarly, frequencies of
CD11a^+^ CD4 and CD8 T cells as well as CD28^+^ CD4 T cells were lower
in critically ill donors, indicating a general loss of activated bulk T
cells in this subgroup. In addition, a reduction of both marginal and
transitional CD19^+^ B cells was seen in patients with severe and
critical symptoms. Of note, on follow-up, recovering severe COVID-19
patients showed an increase in bulk T cell numbers with an activated
phenotype. Importantly, SARS-CoV-2 spike (S)-protein-specific CD4 and
CD8 T cells, identified following stimulation of PBMCs with 15-mer
overlapping S protein peptide pools by flow-cytometric detection of
intracellular CD154 and CD137, respectively, were found in the majority
of patients in all COVID-19 subgroups at the time of recruitment and
further increased in most subjects by the time of follow-up (antiviral
CD4 >> CD8 T cells). Most notably, frequencies of both antiviral
CD4 and CD8 T cells were substantially higher in critically ill
patients, and virus specific CD4 and CD8 T cells in both critically and
severely ill subgroups were shown to produce more pro-inflammatory Th1
cytokines (TNFa, IFNg, IL-2) and the effector molecule GzmB,
respectively, suggesting an overall increased magnitude of
virus-specific T cell inflammation in the context of more severe disease
courses. Furthermore, frequencies of antiviral CD4 T cells correlated
moderately with anti-S-protein IgG levels across all patient groups.
***Limitations ***
In general, this is a well executed study and most of the observations
reported here pertaining to overall reduced bulk T cell frequencies
(along with lower NK and other immune cell counts) as well as diminished
numbers of T cells with an activated phenotype in ICU *vs*. non ICU
COVID-19 corroborate findings in several previous publications and
preprints (cf. https://www.jci.org/articles/view/137244;
<https://academic.oup.com/jid/advance-article/doi/10.1093/infdis/jiaa150/5813618>;
<https://www.nature.com/articles/s41423-020-0401-3>;
<https://www.medrxiv.org/content/10.1101/2020.04.11.20062349v1.full.pdf>;
<https://www.medrxiv.org/content/10.1101/2020.04.17.20061440v1.full.pdf>).
Notably, in contrast to many previous reports, the prospective study by
Anft et al. enrolled a relatively larger number of COVID-19 patients of
variable clinical disease (with the exception of mild cases). However,
there are a few weaknesses that should be addressed. Most importantly,
the choice of statistical tests applied should be carefully revised:
e.g. comparison of more than two groups, as seems to be the case for
most of the figures, requires ANOVA testing, which should ideally be
followed by post-hoc testing (despite the somewhat confusing statement
that this was conceived as an exploratory study). Given the overall
limited case numbers per clinical subgroup, trends even though they
might not reach statistical significance are equally important.
Similarly, some statements are overgeneralized and should to be adjusted
based on the actual data shown (e.g. the authors continue to refer to
gradual reductions of activated T cell subset numbers in moderately
*vs.* severely *vs*. critically ill patients, but for the majority of
data shown substantial differences are apparent only in ICU *vs*.
non-ICU patients). Moreover, it would be helpful to include exemplary
FACS plots in addition to explanatory gating strategies provided in the
supplemental document. There are also several inconsistencies regarding
the order of data presented here (e.g. in the main manuscript, Fig S5 is
chronological referred to before Fig S4) as well as pertaining to
relevant technical details (according to both the main manuscript and
the gating strategy in Figure S5, virus-specific CD4 T cells were
identified by CD154 expression; however, in figure legend S5
virus-specific CD4 T cells are defined as CD4^+^ CD154^+^ CD137^+^).
Additionally, from a technical point of view, it is somewhat intriguing
that the percentages of virus-specific T cells identified by expression
of CD154 and CD137, respectively, following peptide simulation seem to
differ substantially from frequencies of CD154^+^ or CD137^+^ INFg^+^
virus-specific T cells. Assuming a somewhat lower extent of cellular
exhaustion in the moderate COVID-19 group, one would expect these cell
subsets to mostly overlap/match in frequencies, therefore suggesting
slight overestimation of actual virus-specific T cell numbers. In this
context, inclusion of positive controls, such as CMV pp65 peptide
stimulation of PBMCs from CMV seropositive donors, in addition to the
already included negative controls would also be helpful. Moreover, in
view of the observation that virus-specific T cells were found to be
increased in critically ill ICU over non-ICU patients, a more stringent
characterization of these patients as well as assessment of potential
associations with clinical characteristics such as mechanical
ventilation or death would add further impact to the findings described
here. Finally, this study is limited to anti-S protein specific T cells.
However, evaluation of N and also M-protein specific CD8 T cell
responses are likely of great interest as well based on current
knowledge about persistent M-protein specific memory CD8 T cells
following SARS-CoV-1 infection (cf.
<https://www.microbiologyresearch.org/content/journal/jgv/10.1099/vir.0.82839-0>).
***Significance***
In addition to reduced frequencies of activated bulk T cell numbers, the
authors report an enhanced virus-specific T cell response against S
protein epitopes in critically ill COVID-19 patients compared to
severely and moderately ill individuals, which correlated with anti-S
protein antibody titers (also cf. Ni et al.:
https://doi.org/10.1016/j.immuni.2020.04.023).
This is an important observation that mirrors previous data about
SARS-CoV-1 (cf. Ka-fai Li C et al.:
<https://www.jimmunol.org/content/jimmunol/181/8/5490.full.pdf>).
Furthermore, in accordance with a recent preprint by Weiskopf et al.
(<https://www.medrxiv.org/content/10.1101/2020.04.11.20062349v1.full.pdf>),
virus-specific CD4 T cells were found to increase in most patients over
time regardless of clinical disease, whereas antiviral CD8 T cell
kinetics seemed slightly less pronounced. Moreover, in the majority of
moderately and severely ill cases, virus-specific T cells against the S
protein could be detected early on - on average within 4 days of symptom
onset. Longitudinal studies including larger numbers of COVID-19
patients across all clinical subgroups are therefore needed to further
evaluate the potential impact of this observation, in particular in the
context of previously described pre-existing memory T cells
cross-reactive against human endemic coronaviruses (cf.
<https://www.medrxiv.org/content/10.1101/2020.04.17.20061440v1.full.pdf>;
<https://journals.sagepub.com/doi/pdf/10.1177/039463200501800312>).
*This review was undertaken by V. van der Heide as part of a project by
students, postdocs and faculty at the Immunology Institute of the Icahn
school of medicine, Mount Sinai.*
| 58.987013 | 82 | 0.80317 | eng_Latn | 0.99674 |
ce532e368a76aa62428cf1f59989dedf4ed48d42 | 337 | md | Markdown | Muzikos botas/README.md | KugelisMugelis/Kunigelio | e1a1f60e7a0722b5e1fcf4b29ab26a4d673797bf | [
"MIT"
] | null | null | null | Muzikos botas/README.md | KugelisMugelis/Kunigelio | e1a1f60e7a0722b5e1fcf4b29ab26a4d673797bf | [
"MIT"
] | 3 | 2022-02-15T04:29:03.000Z | 2022-03-23T04:33:07.000Z | Muzikos botas/README.md | KugelisMugelis/Kunigelio | e1a1f60e7a0722b5e1fcf4b29ab26a4d673797bf | [
"MIT"
] | null | null | null | # Music Bot
A Music Discord Bot with lot Commands....
# Features
- Playing Music
- Custom Prefix
- Ping Commands
- Quest
- Good Sound Quality
## Self-Hosting
- Fork the project in your replit
- fill the information in config.js [Prefix and Token]
- Run the Replit
Support Server [For any Help]
-
https://dsc.gg/manager.development | 17.736842 | 55 | 0.72997 | eng_Latn | 0.859845 |
ce541aca8a0621565d874be98c1a7f7efd55afed | 2,463 | md | Markdown | README.md | dropcountr/kmeans-clusterer | 57f28a49bdedd7d698ac88f7d590b77a08c1f099 | [
"MIT"
] | 86 | 2015-01-28T21:28:28.000Z | 2022-02-18T21:29:49.000Z | README.md | dropcountr/kmeans-clusterer | 57f28a49bdedd7d698ac88f7d590b77a08c1f099 | [
"MIT"
] | 6 | 2016-12-29T16:24:44.000Z | 2021-11-04T19:24:00.000Z | README.md | dropcountr/kmeans-clusterer | 57f28a49bdedd7d698ac88f7d590b77a08c1f099 | [
"MIT"
] | 18 | 2015-08-29T21:10:12.000Z | 2022-02-01T10:16:50.000Z | KMeansClusterer
===
[k-means clustering](http://en.wikipedia.org/wiki/K-means_clustering) in Ruby. Uses [NArray](https://github.com/masa16/narray) under the hood for fast calculations.
Jump to the [examples](examples/) directory to see this in action.
Features
---
- Runs multiple clustering attempts to find optimal solution (single runs are susceptible to falling into non-optimal local minima)
- Initializes centroids via [k-means++](http://en.wikipedia.org/wiki/K-means%2B%2B) algorithm, for faster convergence
- Calculates [silhouette](http://en.wikipedia.org/wiki/Silhouette_%28clustering%29) score for evaluation
- Option to scale data before clustering, so that output isn't biased by different feature scales
- Works with high-dimensional data
Install
---
```
gem install kmeans-clusterer
```
Usage
---
Simple example:
```ruby
require 'kmeans-clusterer'
data = [[40.71,-74.01],[34.05,-118.24],[39.29,-76.61],
[45.52,-122.68],[38.9,-77.04],[36.11,-115.17]]
labels = ['New York', 'Los Angeles', 'Baltimore',
'Portland', 'Washington DC', 'Las Vegas']
k = 2 # find 2 clusters in data
kmeans = KMeansClusterer.run k, data, labels: labels, runs: 5
kmeans.clusters.each do |cluster|
puts cluster.id.to_s + '. ' +
cluster.points.map(&:label).join(", ") + "\t" +
cluster.centroid.to_s
end
# Use existing clusters for prediction with new data:
predicted = kmeans.predict [[41.85,-87.65]] # Chicago
puts "\nClosest cluster to Chicago: #{predicted[0]}"
# Clustering quality score. Value between -1.0..1.0 (1.0 is best)
puts "\nSilhouette score: #{kmeans.silhouette.round(2)}"
```
Output of simple example:
```
0. New York, Baltimore, Washington DC [39.63, -75.89]
1. Los Angeles, Portland, Las Vegas [38.56, -118.7]
Closest cluster to Chicago: 0
Silhouette score: 0.91
```
### Options
The following options can be passed in to ```KMeansClusterer.run```:
option | default | description
------ | ------- | -----------
:labels | nil | optional array of Ruby objects to collate with data array
:runs | 10 | number of times to run kmeans
:log | false | print stats after each run
:init | :kmpp | algorithm for picking initial cluster centroids. Accepts :kmpp, :random, or an array of k centroids
:scale_data | false | scales features before clustering using formula (data - mean) / std
:float_precision | :double | float precision to use. :double or :single
:max_iter | 300 | max iterations per run
| 29.674699 | 164 | 0.699553 | eng_Latn | 0.786291 |
ce549c81d712d2440f5be28b838c65b78e9c5524 | 144 | md | Markdown | _seminars/2020-10 Research group.md | saona-raimundo/saona-raimundo.github.io | 570807a5dc194bf3c2220d7552df6442af5f51ce | [
"MIT"
] | null | null | null | _seminars/2020-10 Research group.md | saona-raimundo/saona-raimundo.github.io | 570807a5dc194bf3c2220d7552df6442af5f51ce | [
"MIT"
] | null | null | null | _seminars/2020-10 Research group.md | saona-raimundo/saona-raimundo.github.io | 570807a5dc194bf3c2220d7552df6442af5f51ce | [
"MIT"
] | null | null | null | ---
layout: event
title: Research group
date: 2020-10-01
place: IST Austria
---
I presented an ongoing project on robustness in matrix games.
| 16 | 61 | 0.743056 | eng_Latn | 0.987348 |
ce54b982f98afa62d817db3a4459bea79b990df8 | 62 | markdown | Markdown | environments/development/modules/dcapachehttp/README.markdown | eduardodicarte/vm_puppet_manager | 2c05f36029be589a346f81faf92efbc678cc6ece | [
"MIT"
] | null | null | null | environments/development/modules/dcapachehttp/README.markdown | eduardodicarte/vm_puppet_manager | 2c05f36029be589a346f81faf92efbc678cc6ece | [
"MIT"
] | null | null | null | environments/development/modules/dcapachehttp/README.markdown | eduardodicarte/vm_puppet_manager | 2c05f36029be589a346f81faf92efbc678cc6ece | [
"MIT"
] | null | null | null | # apachehttp #
This is the apachehttp module. It provides...
| 15.5 | 45 | 0.725806 | eng_Latn | 0.862993 |
ce54cb777feb5df535c9aa59e3bb28fd05dd8e3c | 2,163 | md | Markdown | README.md | P2P-Develop/PeyangSuperLibrary | 6108dbcdb0e0b3f7b653e1f62411ef983fd85592 | [
"WTFPL"
] | 2 | 2020-06-30T06:56:20.000Z | 2021-06-07T14:59:22.000Z | README.md | P2P-Develop/PeyangSuperLibrary | 6108dbcdb0e0b3f7b653e1f62411ef983fd85592 | [
"WTFPL"
] | 1 | 2020-09-24T10:16:35.000Z | 2021-06-25T15:44:14.000Z | README.md | P2P-Develop/PeyangSuperLibrary | 6108dbcdb0e0b3f7b653e1f62411ef983fd85592 | [
"WTFPL"
] | 2 | 2020-06-17T12:12:46.000Z | 2021-04-02T11:26:12.000Z | <h1 align="center">PeyangSuperLibrary</h1>
<p align="center">
<a href="https://search.maven.org/search?q=g:%22tokyo.peya.lib%22%20AND%20a:%22PeyangSuperLibrary">
<img alt="Maven Central" src="https://img.shields.io/maven-central/v/tokyo.peya.lib/PeyangSuperLibrary.svg?label=Maven%20Central&style=flat-square">
</a>
<img alt="GitHub Workflow Status" src="https://img.shields.io/github/workflow/status/P2P-Develop/PeyangSuperLibrary/Java%20CI%20with%20Maven?style=flat-square">
<a href="https://www.codacy.com/gh/P2P-Develop/PeyangSuperLibrary/dashboard?utm_source=github.com&utm_medium=referral&utm_content=P2P-Develop/PeyangSuperLibrary&utm_campaign=Badge_Grade">
<img alt="Codacy grade" src="https://img.shields.io/codacy/grade/2e4e46dd3db54b23843fba42e471aa72?logo=codacy&style=flat-square">
</a>
<img alt="GitHub" src="https://img.shields.io/github/license/P2P-Develop/PeyangSuperLibrary?style=flat-square">
<img alt="Java version" src="https://img.shields.io/static/v1?label=Java%20version&message=1.8&color=success&style=flat-square">
</p>
<p align="center">よく使うものまとめたやつ(願望</p>
---
# 導入方法
+ Maven
```xml
<dependency>
<groupId>tokyo.peya.lib</groupId>
<artifactId>PeyangSuperLibrary</artifactId>
<version>114.191.981.0</version>
</dependency>
```
+ Gradle
```js
implementation 'tokyo.peya.lib:PeyangSuperLibrary:114.191.981.0'
implementation("tokyo.peya.lib:PeyangSuperLibrary:114.191.981.0")
```
# ドキュメント
+ [JavaDoc](https://lib.peya.tokyo/)
# はいってるもの
+ EntitySelector
Bukkit 1.12.2くらいで`@e`とか`@a[name=SaikyouPeyangsan]` を使える。
+ Say2Functional
プレイヤーに「続行しますか?y/N>」見たいのをつけられる。コンソールにも対応。
+ ItemUtils
引っ語りするやつを簡単につけられる。
+ ExceptionUtils
ExceptionのスタックトレースをStringにできる
+ LearnMath
機械学習用の高度な計算を提供する。
+ LeetConverter
入れたやつを何でもかんでも133Tにしてくれる。
+ Intellij
Intellijでデバッグしているかどうかを判定
+ TimeParser
`1y 1mo 4d 5h 1m 4s` を Date@\(1year,2months,4days,5hours,1minute,5seconds\) に変換する。
相互変換可能。
+ WaveCreator
波を生成する。
+ SQLModifier
SQL文を書く必要なく、簡単にinsertとかできるようになる。
+ FileConfiguration
ymlファイルをコンフィグとして使えるようになる。
+ PluginYamlParser
Bukkitの`plugin.yml`をPojoに変換します。
| 34.333333 | 203 | 0.747573 | yue_Hant | 0.764701 |
ce550a295c0d0b66cf39180849fd1b9130fb2ea2 | 2,752 | md | Markdown | DOCS/stegstash/msoffice.md | FredHappyface/StegStash | c5e3f0d2df5ccbbb270d1e7f79c439b8be126535 | [
"MIT"
] | 1 | 2021-02-07T07:03:43.000Z | 2021-02-07T07:03:43.000Z | DOCS/stegstash/msoffice.md | FredHappyface/StegStash | c5e3f0d2df5ccbbb270d1e7f79c439b8be126535 | [
"MIT"
] | null | null | null | DOCS/stegstash/msoffice.md | FredHappyface/StegStash | c5e3f0d2df5ccbbb270d1e7f79c439b8be126535 | [
"MIT"
] | null | null | null | # msoffice
> Auto-generated documentation for [stegstash.msoffice](../../stegstash/msoffice.py) module.
hide data and files in a docx, pptx etc
Functions:
- Add data as a comment in xml such as [Content_Types].xml
- Add a file and update [Content_Types].xml
Limitations:
- These do not persist modification. i.e. the data will be lost in the event of
a user modifying the document (tested in LibreOffice and Microsoft Word 365:2004)
- [Stegstash](../README.md#stegstash-index) / [Modules](../README.md#stegstash-modules) / [stegstash](index.md#stegstash) / msoffice
- [decodeComment](#decodecomment)
- [decodeFile](#decodefile)
- [detectSteg](#detectsteg)
- [encodeComment](#encodecomment)
- [encodeFile](#encodefile)
## decodeComment
[[find in source code]](../../stegstash/msoffice.py#L29)
```python
def decodeComment(openPath):
```
decode data from a microsoft office file by reading xml comments
#### Arguments
- `openPath` *string* - path to the stego-office document to decode
#### Returns
- `bytes` - data from the image
## decodeFile
[[find in source code]](../../stegstash/msoffice.py#L62)
```python
def decodeFile(openPath, password='', filePointer=None):
```
decode data from a microsoft office file by extracting the file
#### Arguments
- `openPath` *string* - path to the stego-document to decode
- `password` *str, optional* - password to encrypt the data with. Defaults to "".
- `filePointer` *<file>, optional* - pointer to the file. Defaults to None.
#### Returns
- `bytes` - data from the image
## detectSteg
[[find in source code]](../../stegstash/msoffice.py#L85)
```python
def detectSteg(openPath, checkDocPropsOnly=True):
```
detect the use of microsoft office steganography
False positives can be triggered by including media in a document when
checkDocPropsOnly is set to False
#### Arguments
- `openPath` *string* - path to the text file to analyse
- `checkDocPropsOnly` *boolean, optional* - look under docProps only to
mitigate one source of false positives. Defaults to True.
#### Returns
- `boolean` - True if this lib has been used to hide data
## encodeComment
[[find in source code]](../../stegstash/msoffice.py#L17)
```python
def encodeComment(openPath, writePath, data):
```
encode an microsoft office file with data by inserting into xml comments
#### Arguments
- `openPath` *string* - path to the original office document to open
- `writePath` *string* - path to write the stego-office document
- `data` *string|bytes|<file>* - data to encode
## encodeFile
[[find in source code]](../../stegstash/msoffice.py#L42)
```python
def encodeFile(
openPath,
writePath,
file,
fileName='application.xml',
password='',
):
```
encode data as a file
| 24.571429 | 132 | 0.711483 | eng_Latn | 0.81897 |
ce551e4d4511774012836f776289292bce74c8d9 | 9,898 | md | Markdown | _posts/2018-05-29-recovery.md | PrajwalGurumurthy/PrajwalGurumurthy.github.io | d8519290d5bf381e49beef29128a95200412d5c6 | [
"MIT"
] | null | null | null | _posts/2018-05-29-recovery.md | PrajwalGurumurthy/PrajwalGurumurthy.github.io | d8519290d5bf381e49beef29128a95200412d5c6 | [
"MIT"
] | null | null | null | _posts/2018-05-29-recovery.md | PrajwalGurumurthy/PrajwalGurumurthy.github.io | d8519290d5bf381e49beef29128a95200412d5c6 | [
"MIT"
] | null | null | null | ---
layout: post
title: Design for Resiliency, durability, Failure Recovery and Idempotency
description: This page describes the need for having resiliency, durability and Failure Recovery in microservices
---
## Flow with the Chaos
In Distributed systems we cannot avoid chaos and definitely not assume that there wont be one. We have to code for the chaos and manage the chaos.
## Why Resiliency, Durability and Failure Recovery is required
Lets take an example of addProduct in ECOS-[Enterprise Customer Order Service].The responsibility of this service is to take orders.
Lets say the sequence of step involved in ECOS is as follows for adding the product in the order.
1. Receive the request
2. Check restrictions on the given product with restriction service
3. Add the product to the quote service
4. Add the product to the order and persist in some data store along with the quote
5. Publish a business event for the added product
All the sequence has to be executed completely or not executed at all. Following are the reasons why partial execution of the above steps would have adverse effect in the system.
For instance, in addproduct,
1. The product is added to quote
2. The ECOS instance failed
Note: The product is not added to the order and also the event is not published, however the quote is updated in price.
3. On Failure,The client again sends the request again, we end up adding the same product again into the quote and then we add it to order and send event.
As a result of this We have added the product twice in quote and once in the basket.
Similar kind of situation can occur in many ECOS use cases. So it is extremely important to ensure we have strong failure recovery mechanism built in ECOS without compromising Scalability and idempotency.
Lets formalise the above problem in a general sense.
## Formalising the problem statement:
We have a sequence of operations to be done for a particular use case. Related operations in this sequence can be grouped together into smaller sub-sets called Micro Modules.
For example price/quote related operations can be embedded in a single micro module(Price micro module). So this micro module is responsible for catering to all price related operations.Similarly we can have many such micro-modules.
![Image1]({{ site.url }}/assets/recovery/piping-streams.png)
## Monolith vs Pipies and Filters
![Image1]({{ site.url }}/assets/recovery/monolith-design.png)
<b>Reusing Filters across multiple services in Pipes Design.</b>
![Image1]({{ site.url }}/assets/recovery/pipes-design.png)
Each of the micro modules can be scaled independently. If a micro module is taking a long time more workers can be put in that micro module to increase the throughput and reduce the latency.
![Image1]({{ site.url }}/assets/recovery/scaling-filters.png)
<b> External sourcing of events </b>
Each of the micro-modules work in isolation without depending on each other.The linking of these micro-modules results in achieving a particular use-case for ECOS.One way to link all these micro-modules is by using reactive pipe lines. As a micro-module, It is only interested in a particular stream of events.
<b>It does not matter if the stream originated from another micro-module or it originated from an external source.</b>
The following diagram illustrates streams originating from different kind of sources.
![Image1]({{ site.url }}/assets/recovery/piping-streams-external.png)
> Challenges by piping Micro-Modules in ECOS
## Errors and Failures in Pipes and Filters
Its important to note that error and failures are different. Errors are always within the context of the application. Failures are always external to application. For instance, Abrupt Failure of a pod in which the application is running is a "Failure". Errors are internal application exceptions. Both errors and failures can create anamolies if not handled properly.
## Handling Errors
The following depicts the application errors.
![errors]({{ site.url }}/assets/recovery/filter-errors.png)
The main challenge here is to ensure that the errors are detected and contained and suitable measures are taken to ensure that state of the system is not corrupted.
For example in addProduct case, If there is some sort of an error after the quote is created, The errors streams down to the subscriber in reactive pipeline, Before responding back to the client, proper measures have to be taken to ensure the products are removed from the quote and system is left in a consistent state.
## Handling Failures and ensuring commitment to the request
One of the challenges that needs to be addressed here is the failure of a micro-module while processing which can have an impact on the ECOS. For example if the pod dies due to resource constraint or the minion itself died.
Taking the add-product use case described before, If there is a failure after the product is added to the quote, the product might be added twice.
<b>Hence after certain operations we need some kind of commitment by ECOS that the operations following it will be executed no matter what</b>
>Critical Points in the system
Its important to figure out critical points in the system and take measures to ensure durability.
For example In the add product use case after the product is added in the quote, it has to be added in the order, persisted and a event has to be published before responding to the client.
Durability is the key aspect while designing a system which can update states across multiple sub-systems.Durability can be achieved in critical points in the system after which a commitment can be made that the sub-sequent micro-modules will be executed for sure.There are many ways to achieve durability.
1. <b>Persistence</b> : We can rely on persistence where we can save the intermediate state of the critical point and If there are failures we can always start from the last point where we left off for that request.
2. <b>Durable Queue</b>: Here we can save the intermediate state of the critical point as an event in a Messaging system and later consume that event and continue the remaining operations. An acknowledgement is sent only after the execution of all the remaining operations.
> <b>Scenario illustrating the failure and failure recovery using durable queue</b>
Lets take the following example where the ECOS was able to add the product to the quote, then the intermediate state was persisted using a queue which is later picked by the add product micro-module to add the product to the order .
Now <b>Failure</b> of the instance is as depicted below which causes the add-product micro-module to fail.
![failure1]({{ site.url }}/assets/recovery/failure-1.png)
<b>Failure Recovery</b> is as illustrated below. Since the add-product did not send the acknowledgement the message was still not removed from the queue. Now when the instance comes back again ,the add-product micro-module will get the same message again there by ensuring that the product was added to the order.
![recovery1]({{ site.url }}/assets/recovery/recovery-1.png)
## Instance specific External Message source in ECOS
Now that we have the background on why we need external system for saving the state(persistence or a queue) to achieve durability, the subtle question to be answered is can we have a single queue? In other words Can all the instances share the common queue to save the intermediate state.
The main problem with sharing the queue across multiple instances is that on receiving the request from the client 'C' by an instance 'X', only that particular instance 'X' can respond back to the client 'C' after all the processing is done. By sharing a queue across all scaled out instances, we will loose the control of which instance picks up what. As result of that it is possible that the intermediate state event in the queue can be picked up some other instance 'Y' and there is no way for that instance 'Y' to respond back client 'C'.
This meant that in order to respond back to the client and also ensure durability in ECOS we need instance specific queue.
![instance-specific-external-source]({{ site.url }}/assets/recovery/instance-specific-external-source.png)
## Idempotency in ECOS at the Request Level
In the context of ECOS, idempotency is extremely important. For instance In addProduct If a request was processed completely in ecos, and if there is a failure either on the client side or on ECOS before sending the response back to client, the client will retry the same operation. This will result in executing the same operation multiple times which can introduce anamolies in ECOS.
> The idempotency of requests can be solved by assigning the IdempotentId to the request(which can be the requestID itself) and assigning the state as started/completed/in-progress for that IdempotentId and updating the IdempotentId state in the centralised cache . Now if a request was made with the same IdempotentId again, Immediate response can be sent that the operation is already executed.
## Adavantages of piping the micro-modules
1. This kind of design gives you flexiblity in piping the micro modules together in a resilient manner without compromising on the scalability aspect.
2. higher durability in the system. Note: Durability in terms of operations is the guarantee/commitment that this operation will be executed.
3. Failure Recovery: Having Durable records ensures we can recover from abrupt failures if there are any and make sure the system is consistent
4. Loose Coupling of micro modules
5. Flexiblity in bringing in new changes for different projects in ECOS.
> References:
https://docs.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters
| 72.779412 | 544 | 0.781875 | eng_Latn | 0.999719 |
ce55335d571affe1725cd106499ec08f8b86525c | 9,805 | markdown | Markdown | _posts/2020-03-06-Build-Fileshare-using-SMBD-as-Domain-Authentichated-With-Samba-AD-DC.markdown | xoreth/blog | 55aa4947e7be390b40fcacf4d40b0402a820aec4 | [
"MIT"
] | null | null | null | _posts/2020-03-06-Build-Fileshare-using-SMBD-as-Domain-Authentichated-With-Samba-AD-DC.markdown | xoreth/blog | 55aa4947e7be390b40fcacf4d40b0402a820aec4 | [
"MIT"
] | null | null | null | _posts/2020-03-06-Build-Fileshare-using-SMBD-as-Domain-Authentichated-With-Samba-AD-DC.markdown | xoreth/blog | 55aa4947e7be390b40fcacf4d40b0402a820aec4 | [
"MIT"
] | null | null | null | ---
layout: post
title: Build Fileshare using SMBD as Domain Authentichated With Samba AD DC
author: Galuh D Wijatmiko
categories: [StorageAndFilesystem]
tags: [Samba4,Fileshare]
draft: false
published: true
---
# Joining
Install Package
```bash
yum install realmd samba-winbind-modules samba-common samba-common-libs samba-libs samba samba-winbind samba-client \
samba-client-libs samba-common-tools samba-winbind-clients nss-pam-ldapd pam-devel sssd-proxy sssd sssd-common python-sssdconfig \
sssd-common-pac sssd-ad sssd-ldap sssd-ipa sssd-krb5 sssd-client sssd-krb5-common krb5-workstation
```
Configure PAM,NSS For Winbind
```bash
authconfig-tui
```
or
```bash
authconfig --enablewinbind --enablewinbindauth --smbsecurity ads --smbworkgroup=ROOMIT --smbrealm ROOMIT.TECH --smbservers=addc1.roomit.tech --krb5realm=ROOMIT.TECH \
--enablewinbindoffline --enablewinbindkrb5 --winbindtemplateshell=/bin/bash --winbindjoin=administrator --update --enablelocauthorize --enablesssdauth --enablemkhomedir --update
```
Joining Domain
```bash
realm join -U Administrator ROOMIT.TECH
```
# Configure SSSD
Stop SSSD Service
```bash
systemctl stop sssd
```
We want login with simple name without domain and make directory only using name without domain, edit /etc/sssd/sssd.conf
```bash
[sssd]
domains = roomit.tech
config_file_version = 2
services = nss, pam, sudo
reconnection_retries = 3 #add option
sbus_timeout = 30 #add option
[sudo]
[pam]
offline_credentials_expiration = 355 #355 days offline cache
[domain/roomit.tech]
ad_domain = roomit.tech
krb5_realm = ROOMIT.TECH
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False #value change from True become False
fallback_homedir = /home/%u #value change from %u@%d
access_provider = ad
```
Start Service sssd
```bash
systemctl start sssd
```
# Configure SAMBA
Configure Samba Fileshare and Running Service smbd nmbd winbindd. Create config file share in /etc/samba/smb.conf.
```bash
[global]
workgroup = ROOMIT
realm = ROOMIT.TECH
security = domain
idmap config * : range = 16777216-33554431
template shell = /bin/bash
kerberos method = secrets only
winbind use default domain = false
winbind offline logon = false
idmap config * : range = 16777216-33554431
idmap config * : range = 16777216-33554431
encrypt passwords = yes
passdb backend = tdbsam
printing = cups
printcap name = /dev/null # mute annoying errors
log level = 3
log file = /var/log/samba/%m.log
vfs objects = acl_xattr
map acl inherit = yes
store dos attributes = yes
[homes]
comment = Home Directories
browseable = yes
writable = yes
write list = @"ROOMIT\Domain Users"
path = /home/%U
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[public]
comment = Public Sharing Department
path = /home/public
browsable =yes
writable = yes
guest ok = yes
read only = no
force user = nobody
[reports]
read only = no
writable = yes
write list = @"ROOMIT\Operation"
read list = @"ROOMIT\Sales-And-Marketing", @"ROOMIT\Finance-And-Marketing"
valid users = @"ROOMIT\Operation", @"ROOMIT\Sales-And-Marketing", @"ROOMIT\Finance-And-Marketing"
path = /home/reports
public = no
browseable = no
create mode = 0644
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[designer]
read only = no
writable = yes
valid users = @"ROOMIT\Project-Management"
path = /home/designer
public = no
browseable = no
create mode = 0644
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-adm]
comment = Operation Sharing Department
path = /home/share-adm
read only = no
valid users = @"ROOMIT\share-adm"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-tel]
comment = Telco Sharing Department
path = /home/share-tel
read only = no
valid users = @"ROOMIT\share-tel"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-dev]
comment = Development Sharing Department
path = /home/share-dev
read only = no
valid users = @"ROOMIT\share-dev"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-mgt]
comment = Management Sharing Department
path = /home/share-mgt
read only = no
valid users = @"ROOMIT\share-mgt"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-ga]
comment = General Affairs Sharing Department
path = /home/share-ga
read only = no
valid users = @"ROOMIT\share-ga"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-fin]
comment = Finance And Accounting Sharing Department
path = /home/share-fin
read only = no
valid users = @"ROOMIT\share-fin"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-hrd]
comment = HRD Sharing Department
path = /home/share-hrd
read only = no
valid users = @"ROOMIT\share-hrd"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
[share-mkt]
comment = Sales And Marketing Sharing Department
path = /home/share-mkt
read only = no
valid users = @"ROOMIT\share-mkt"
inherit acls = yes
inherit permissions = yes
browseable = no
create mode = 0664
directory mode = 0775
vfs objects = full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = mkdir rename unlink rmdir pwrite pread rm rmdir
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = NOTICE
```
Start Service smbd (Service For Fileshare and Printer Server), nmbd (Service For Network), Winbindd (Service For Authentication).
```bash
systemctl start smb nmb winbind
```
# TESTING
Check Service Working or Not
Check Domain NT
```bash
wbinfo --ping-dc
#Output :
#checking the NETLOGON for domain[ROOMIT] dc connection to "AD" succeeded
```
Check User using winbind
```bash
wbinfo -u
#Output :
# ......
#ROOMIT\christopher.jagtap
#ROOMIT\zydney
#ROOMIT\rouf
#ROOMIT\pran.kumar
#ROOMIT\handy.chen
#ROOMIT\heri.kuswanto
# .....
```
Check Group using winbind
```bash
wbinfo -g
#Output :
#..........
#ROOMIT\devops
#ROOMIT\top-management
#ROOMIT\ga
#ROOMIT\share-hrd
#ROOMIT\share-adm
#ROOMIT\senior-operation
#............
```
If winbind already fine, winbind can restart same time with smbd service
Check info domain
```bash
net ads info
#Output:
#LDAP server: 10.32.16.130
#LDAP server name: addc1.roomit.tech
#Realm: ROOMIT.TECH
#Bind Path: dc=ROOMIT,dc=TECH
#LDAP port: 389
#Server time: Tue, 12 Nov 2019 16:52:31 WIB
#KDC server: 10.32.16.130
#Server time offset: 406
#Last machine account password change: Wed, 06 Nov 2019 15:37:15 WIB
```
Check Using Posix
```bash
getent passwd ROOMIT\\dwiyan.wijatmiko
#Output:
#dwiyan.wijatmiko:*:545022134:545000513:dwiyan.wijatmiko:/home/dwiyan.wijatmiko:/bin/bash
```
Testing Mounting in Workstation
```bash
smbclient //share.roomit.tech/dwiyan.wijatmiko -U dwiyan.wijatmiko -W ROOMIT
#Output :
#Enter ROOMIT\dwiyan.wijatmiko's password:
#Try "help" to get a list of possible commands.
#smb: \>
```
How To Leave Domain
```bash
realm leave ROOMIT.TECH
``` | 25.802632 | 179 | 0.717083 | eng_Latn | 0.669005 |
ce592dfe98324f816aee821af0476f8662aace64 | 308 | md | Markdown | docs/scheme/chez-07-input-and-output.md | zhoujiagen/learning-lisp | a795337fdf1a4dc52cad6d342f4d9b9ed0db540d | [
"MIT"
] | null | null | null | docs/scheme/chez-07-input-and-output.md | zhoujiagen/learning-lisp | a795337fdf1a4dc52cad6d342f4d9b9ed0db540d | [
"MIT"
] | null | null | null | docs/scheme/chez-07-input-and-output.md | zhoujiagen/learning-lisp | a795337fdf1a4dc52cad6d342f4d9b9ed0db540d | [
"MIT"
] | null | null | null | # 7 Input and Output
## 7.1 Transcoders
## 7.2 Opening Files
## 7.3 Standard Ports
## 7.4 String and Bytevector Ports
## 7.5 Opening Custom Ports
## 7.6 Port Operations
## 7.7 Input Operations
## 7.8 Output Operations
## 7.9 Convenience I/O
## 7.10 Filesystem Operations
## 7.11 Bytevector/String Conversions | 25.666667 | 37 | 0.720779 | eng_Latn | 0.519899 |
ce59ef80a9db2912bdfe147d495058248478cedf | 1,675 | md | Markdown | daprdocs/content/en/developing-applications/building-blocks/service-invocation/service-invocation-namespaces.md | hmz777/docs-1 | a6fe26f7749c27bc8e48547a9c36438f62ec0a81 | [
"CC-BY-4.0"
] | null | null | null | daprdocs/content/en/developing-applications/building-blocks/service-invocation/service-invocation-namespaces.md | hmz777/docs-1 | a6fe26f7749c27bc8e48547a9c36438f62ec0a81 | [
"CC-BY-4.0"
] | null | null | null | daprdocs/content/en/developing-applications/building-blocks/service-invocation/service-invocation-namespaces.md | hmz777/docs-1 | a6fe26f7749c27bc8e48547a9c36438f62ec0a81 | [
"CC-BY-4.0"
] | null | null | null | ---
type: docs
title: "Service invocation across namespaces"
linkTitle: "Service invocation namespaces"
weight: 1000
description: "Call between services deployed to different namespaces"
---
In this article, you'll learn how you can call between services deployed to different namespaces. By default, service invocation supports invoking services within the *same* namespace by simply referencing the app ID (`nodeapp`):
```sh
localhost:3500/v1.0/invoke/nodeapp/method/neworder
```
Service invocation also supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace. You can specify both:
- The app ID (`nodeapp`), and
- The namespace the app runs in (`production`).
**Example 1**
Call the `neworder` method on the `nodeapp` in the `production` namespace:
```sh
localhost:3500/v1.0/invoke/nodeapp.production/method/neworder
```
When calling an application in a namespace using service invocation, you qualify it with the namespace. This proves useful in cross-namespace calls in a Kubernetes cluster.
**Example 2**
Call the `ping` method on `myapp` scoped to the `production` namespace:
```bash
https://localhost:3500/v1.0/invoke/myapp.production/method/ping
```
**Example 3**
Call the same `ping` method as example 2 using a curl command from an external DNS address (in this case, `api.demo.dapr.team`) and supply the Dapr API token for authentication:
MacOS/Linux:
```
curl -i -d '{ "message": "hello" }' \
-H "Content-type: application/json" \
-H "dapr-api-token: ${API_TOKEN}" \
https://api.demo.dapr.team/v1.0/invoke/myapp.production/method/ping
```
| 33.5 | 229 | 0.748657 | eng_Latn | 0.983638 |
ce5a98bb87e122ff26074dc065fb33948c4333cd | 3,012 | md | Markdown | README.md | wrld3d/wrld.js | 7051a825749e0f8eb2b9fe53e677fbdef48f0b0e | [
"Apache-2.0",
"BSD-2-Clause"
] | 322 | 2017-07-12T18:59:07.000Z | 2022-03-26T18:44:01.000Z | README.md | wrld3d/wrld.js | 7051a825749e0f8eb2b9fe53e677fbdef48f0b0e | [
"Apache-2.0",
"BSD-2-Clause"
] | 65 | 2017-07-07T13:06:39.000Z | 2022-01-24T09:04:07.000Z | README.md | wrld3d/wrld.js | 7051a825749e0f8eb2b9fe53e677fbdef48f0b0e | [
"Apache-2.0",
"BSD-2-Clause"
] | 41 | 2017-07-18T17:42:28.000Z | 2022-03-29T01:37:16.000Z | <a href="https://www.wrld3d.com/">
<img src="https://cdn2.wrld3d.com/wp-content/uploads/2017/04/WRLD_Blue.png" align="right" height="80px" />
</a>
# wrld.js
![WRLD](https://cdn2.wrld3d.com/wp-content/uploads/2017/04/screenselection01.png)
The WRLD JavaScript API allows you to easily embed [beautiful 3D maps](https://www.wrld3d.com/) into any web page for any modern, WebGL supporting browser. For an example of our 3D maps in action, see [https://www.wrld3d.com/wrld.js/examples/](https://www.wrld3d.com/wrld.js/examples/).
It is based on [Leaflet.js](http://leafletjs.com/), providing a familiar API for embedding 3D maps in a web page.
## Examples
You can find [feature-by-feature examples](https://www.wrld3d.com/wrld.js/examples/) on our website.
## API
A [full API reference](https://www.wrld3d.com/wrld.js/docs/) is also available on our website.
## Getting Started
Before you begin, you will need to acquire an API key, which you can do by [signing up](https://www.wrld3d.com/register/) for an account at [wrld3d.com](https://www.wrld3d.com) and selecting the Digital Twin plan - free trials are available!
You can easily embed a 3D map in any web page. The code below shows a simple example:
```html
<!-- Create a map in an HTML element with wrld.js -->
<!DOCTYPE HTML>
<html>
<head>
<script src="https://cdn-webgl.wrld3d.com/wrldjs/dist/latest/wrld.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.0.1/leaflet.css" rel="stylesheet" />
</head>
<body>
<div id="map" style="width: 400px; height: 400px;"></div>
<script type="text/javascript">
var map = L.Wrld.map("map", "your_api_key_here");
</script>
</body>
</html>
```
Just replace `your_api_key_here` with an API key from [wrld3d.com](https://www.wrld3d.com/register/).
## Support
If you have any questions, bug reports, or feature requests, feel free to submit to the [issue tracker](https://github.com/wrld3d/wrld.js/issues) for wrld.js on GitHub.
## Building the API
You may wish to build the API yourself. This is easy to do and only requires that you install [node.js](https://nodejs.org/en/).
### Requirements
* [Node.js](https://nodejs.org/en/) (v4.4.1 tested)
* npm (installed with Node.js)
### Building
Follow the steps below to build the API:
1. Clone this repo: `git clone https://github.com/wrld3d/wrld.js.git`
2. In the root of the repo, run `npm install` to install the development dependencies.
3. Still in the root of the repo, run the command `npm run build`.
This will create the file `dist/wrld.js` which is the minified API.
You can also use the command `npm run watch` to build continuously, watching files for changes.
## Contributing
If you wish to contribute to this repo, [pull requests](https://github.com/wrld3d/wrld.js) on GitHub are welcomed.
## License
The WRLD 3D Maps JavaScript API is released under the Simplified BSD License. See [LICENSE.md](https://github.com/wrld3d/wrld.js/blob/master/LICENSE.md) for details.
| 38.615385 | 286 | 0.718792 | eng_Latn | 0.906591 |
ce5aa58dd9d268566487df16f1d491a8ea70044b | 1,687 | md | Markdown | README.md | badookey/Ryutai | 5131f2cdfed86a2dd22fe166fe0c67952952de0c | [
"MIT"
] | 7 | 2019-08-29T03:26:09.000Z | 2022-03-08T02:41:11.000Z | README.md | badookey/Ryutai | 5131f2cdfed86a2dd22fe166fe0c67952952de0c | [
"MIT"
] | null | null | null | README.md | badookey/Ryutai | 5131f2cdfed86a2dd22fe166fe0c67952952de0c | [
"MIT"
] | null | null | null | # Ryutai
Ryutai is an interactive experience focusing on the meditative qualities of flowing water.
This was initally created for the subject 'Interactive Media' at the University of Technology Sydney. Use your hands to play with a liquid-like substance and pop bubbles to make gong noises.
![Ryutai Screenshot](screenshots/Ryutai.png?raw=true "Ryutai")
# Setup
The following hardware and software is required in order to run Ryutai.
Leap Motion Controller (https://www.leapmotion.com)
Processing 3.3.6 (https://processing.org/) with libraries:
- Beads
- PixelFlow
- Leap Motion library for Processing
*All libraries available within Processing's internal library viewer.*
# Usage:
- Plug in Leap motion, orient it to face towards the Ceiling
- Open Ryutai.pde in Processing 3.3.6
-- Ryutai depends upon Bubble.pde, however Processing should automatically load Bubble.pde in a different tab
- Click the 'Play' button to run Ryutai
- Move hands up/down/left/right over the leap motion sensor to create a rainbow liquid/smoke effect!
-- Currently forwards/backwards motion is unbound
- Touch the bubbles with the liquid to play a gong sound!
# Troubleshooting:
Q: Leap Motion not detected in Processing
A: Install device drivers from https://www.leapmotion.com/setup/
Q: Framerate extremely low / Crash on launch
A: The shaders used to render the liquid effect are very graphics intensive.
There is a parameter named 'constraint' in the code which can be increased in order to decrease the load on the hardware and thus increase performance.
Test the performance on your machine and decide on the best constraint paramater that works for you. I recommend starting with 10.
| 41.146341 | 190 | 0.788382 | eng_Latn | 0.989196 |
ce5b17ef5451a79e9fb4e9e54bdbd0c19176676f | 1,078 | md | Markdown | README.md | blocklet/payment-demo | 24b2e508d3fcab56197b36088d32bf5741b3bf55 | [
"MIT"
] | null | null | null | README.md | blocklet/payment-demo | 24b2e508d3fcab56197b36088d32bf5741b3bf55 | [
"MIT"
] | null | null | null | README.md | blocklet/payment-demo | 24b2e508d3fcab56197b36088d32bf5741b3bf55 | [
"MIT"
] | null | null | null | # payment-demo-blocklet
![](https://github.com/arcblock/forge-webapp/workflows/build/badge.svg)
![](https://img.shields.io/badge/Powered%20By-ABT%20Node-yellowgreen)
Demo blocklet contains only static files, which is an html5 game
## Launch on Blocklet Server
[![Launch on Blocklet Server](https://assets.arcblock.io/icons/launch_on_blocklet_server.svg)](https://install.arcblock.io/?action=blocklet-install&meta_url=https%3A%2F%2Fgithub.com%2Fblocklet%2Fpayment-demo%2Freleases%2Fdownload%2Fv1.5.3%2Fblocklet.json)
## Run and debug in the cloud with Gitpod
Click the "Open in Gitpod" button, Gitpod will start Blocklet Server and the blocklet.
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/blocklet/payment-demo)
## Run and debug locally
```shell
yarn global add @blocklet/cli
git clone [email protected]:blocklet/payment-demo.git
cd payment-demo
blocklet server init --mode debug
blocklet server start
blocklet dev
```
## License
The code is licensed under the MIT license found in the
[LICENSE](LICENSE) file.
| 32.666667 | 255 | 0.772727 | eng_Latn | 0.493214 |
ce5cb346bc3ab43252049ba43bd093768b2caedf | 2,600 | md | Markdown | oak-doc/src/site/markdown/architecture/transactional-model.md | honstar/jackrabbit-oak | 0fa48e01e50cdd0c0a90cdc28aad034725d84b2e | [
"Apache-2.0"
] | 1 | 2019-02-22T02:49:00.000Z | 2019-02-22T02:49:00.000Z | oak-doc/src/site/markdown/architecture/transactional-model.md | joansmith2/jackrabbit-oak | e0450b7a1e710b0ce391393ab773d10efa1c9f55 | [
"Apache-2.0"
] | 10 | 2020-03-04T21:42:31.000Z | 2022-01-21T23:17:15.000Z | oak-doc/src/site/markdown/architecture/transactional-model.md | joansmith2/jackrabbit-oak | e0450b7a1e710b0ce391393ab773d10efa1c9f55 | [
"Apache-2.0"
] | null | null | null | <!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
Transactional model of sessions
================================
Sessions in Oak are based on a multi version concurrency control model using snapshot isolation with
a relaxed first committer wins strategy. That is, on login each session is under the impression of
operating on its own copy of the repository. Modifications from other sessions do not affect the
current session. With the relaxed first committer wins strategy a later session will fail on save
when it contains operations which are incompatible with the operations of an earlier session which
saved successfully. This is different from the standard first committer wins strategy where failure
would occur on conflicting operations rather than on incompatible operations. Incompatible is weaker
than conflict since two write operation on the same item do conflict but are not incompatible. The
details of what incompatible means are specified by `NodeStore.rebase()`.
Snapshot isolation exhibits [write skew](http://http//research.microsoft.com/apps/pubs/default.aspx?id=69541)
which can be problematic for some application level consistency requirements. Consider the following
sequence of operations:
session1.getNode("/testNode").setProperty("p1", -1);
check(session1);
session1.save();
session2.getNode("/testNode").setProperty("p2", -1);
check(session2);
session2.save();
Session session3 = repository.login();
check(session3);
The check method enforces an application logic constraint which says that the sum of the properties
`p1` and `p2` must not be negative. While session1 and session2 each enforce this constraint before
saving, the constraint might not hold globally for session3.
See `CompatibilityIssuesTest.sessionIsolation` for a test case demonstrating this in runnable code.
| 52 | 109 | 0.775769 | eng_Latn | 0.997585 |
ce5ce9bfb241465840961df18120728453c138d8 | 2,130 | md | Markdown | README.md | kunato/style_swap_tensorflow | ab136c20fa5351852f1f4c986bed5b25eee3b890 | [
"Apache-2.0"
] | 42 | 2018-03-27T06:26:57.000Z | 2021-05-23T03:23:37.000Z | README.md | kunato/style_swap_tensorflow | ab136c20fa5351852f1f4c986bed5b25eee3b890 | [
"Apache-2.0"
] | 8 | 2018-02-24T13:16:49.000Z | 2019-11-01T16:37:04.000Z | README.md | kunato/style_swap_tensorflow | ab136c20fa5351852f1f4c986bed5b25eee3b890 | [
"Apache-2.0"
] | 15 | 2018-09-07T05:13:22.000Z | 2022-02-28T12:33:19.000Z | # Fast Patch-based Style Transfer of Arbitrary Style
Paper: https://arxiv.org/abs/1612.04337
## Examples
<div align='center'>
<img src='images/content/0cd731d526d27376a586316f6a6ea14a32c096c0a1fab-Fz7Bx1_fw658.jpg' width="280px">
<img src='images/result/0cd731d526d27376a586316f6a6ea14a32c096c0a1fab-Fz7Bx1_fw658_style_1_image_360.jpg' width="280px">
<img src='images/result/0cd731d526d27376a586316f6a6ea14a32c096c0a1fab-Fz7Bx1_fw658_style_2_image_33132.jpg' width="280px">
</div>
<div align='center'>
<img src='images/content/gentlecat.png' width="280px">
<img src='images/result/gentlecat_style_1_image_840.jpg' width="280px">
<img src='images/result/gentlecat_style_2_image_33132.jpg' width="280px">
</div>
<div align='center'>
<img src='images/content/68e4eebca9fd043276945570328304957df91c9442642-4TFykG_fw658.jpg' width="280px">
<img src='images/result/68e4eebca9fd043276945570328304957df91c9442642-4TFykG_fw658_style_1_image_700.jpg' width="280px">
<img src='images/result/68e4eebca9fd043276945570328304957df91c9442642-4TFykG_fw658_style_2_image_33132.jpg' width="280px">
</div>
## Preparetion
Download [VGG16 model](http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz) from Tensorflow Slim. Extract the file vgg_16.ckpt. Then copy it to the folder pretrained/
## Usage
### Stylizing images:
```
python main.py -c config/example.json -s --content images/content/*.jpg --style images/style/style_1_image_60.png
```
### Video stylization
```
python main.py -c config/example.json -s --content videos/timelapse1_orig.mp4 --style images/style/style_1_image_60.png
```
## Training an inverse network
```
python main.py -c config/example.json
```
## Style swap
Φ(.) is the function represented by a fully convolutional part of a pretrained CNN that maps an image from RGB to some intermediate activation space. So Φ(C) is the activation of content, and Φ(S) is the activation of style.
Extract a set of patches for Φ(C) and Φ(S). The target of "Style Swap" is to find a closest-matching style patch for each content patch, and replace it.
<div>
<img src="figures/diagram.png">
</div>
| 42.6 | 225 | 0.777934 | eng_Latn | 0.34958 |
ce5dd0770cb357c0997f45c32df40cab076b1f4a | 1,206 | md | Markdown | src/_content/articles/2008-07-09-ipod-touch-a-vendre-peut-etre.md | alienlebarge/alienlebargech-v3 | 4c6756fe61bcd44b1d0ccd0607ecb5a3030ced88 | [
"MIT"
] | 1 | 2021-09-20T12:34:05.000Z | 2021-09-20T12:34:05.000Z | src/_content/articles/2008-07-09-ipod-touch-a-vendre-peut-etre.md | alienlebarge/alienlebargech-v3 | 4c6756fe61bcd44b1d0ccd0607ecb5a3030ced88 | [
"MIT"
] | 63 | 2019-04-12T13:25:00.000Z | 2022-03-04T11:35:05.000Z | src/_content/articles/2008-07-09-ipod-touch-a-vendre-peut-etre.md | alienlebarge/alienlebargech-v3 | 4c6756fe61bcd44b1d0ccd0607ecb5a3030ced88 | [
"MIT"
] | 2 | 2020-01-06T19:21:43.000Z | 2022-03-17T01:11:52.000Z | ---
date: 2008-07-09
title: iPod Touch (à vendre peut-être) vendu
categories:
- Matériel
- Personnel
tags:
- Achat
- iPod
- iPod Touch
- Musique
status: publish
published: true
meta:
_edit_last: '1'
tweetbackscheck: '1234516305'
shorturls: a:7:{s:9:"permalink";s:68:"https://www.alienlebarge.ch/2008/07/09/ipod-touch-a-vendre-peut-etre/";s:7:"tinyurl";s:25:"https://tinyurl.com/chzqed";s:4:"isgd";s:17:"https://is.gd/ike5";s:5:"bitly";s:18:"https://bit.ly/WQQz";s:5:"snipr";s:22:"https://snipr.com/b9xa8";s:5:"snurl";s:22:"https://snurl.com/b9xa8";s:7:"snipurl";s:24:"https://snipurl.com/b9xa8";}
twittercomments: a:0:{}
tweetcount: '0'
tmac_last_id: ''
---
<img src="https://farm3.static.flickr.com/2107/2035533100_73ff9a5886.jpg" alt="iPod Touch" />
<em><a title="photo sharing" href="https://www.flickr.com/photos/alienlebarge/2035533100/">iPod Touch</a></em>
Vous savez tous que le iPhone sort ce vendredi. Je me demandais comme ça si quelqu'un serrait intéressé à racheter mon iPod touch 16 Gb. Il est en très bon état et fonctionne comme une horloge (digitale). Les seules choses que vous pouvez lui reprocher sont les microrayures inévitables du dos et trois petits jetons sur un coin.
| 46.384615 | 369 | 0.722222 | fra_Latn | 0.466639 |
ce5df395ea148163fef2237c24d580e42681315b | 21,694 | md | Markdown | RFC/src/RFC-0201_TariScript.md | dunnock/tari | 3a9f356d2d16941c68c041336dc2d60510fcdbc6 | [
"BSD-3-Clause"
] | null | null | null | RFC/src/RFC-0201_TariScript.md | dunnock/tari | 3a9f356d2d16941c68c041336dc2d60510fcdbc6 | [
"BSD-3-Clause"
] | null | null | null | RFC/src/RFC-0201_TariScript.md | dunnock/tari | 3a9f356d2d16941c68c041336dc2d60510fcdbc6 | [
"BSD-3-Clause"
] | null | null | null | # RFC-0201/TariScript
## Tari Script
![status: draft](theme/images/status-draft.svg)
**Maintainer(s)**: [Cayle Sharrock](https://github.com/CjS77)
# Licence
[The 3-Clause BSD Licence](https://opensource.org/licenses/BSD-3-Clause).
Copyright 2020 The Tari Development Community
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
following conditions are met:
1. Redistributions of this document must retain the above copyright notice, this list of conditions and the following
disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS DOCUMENT IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS", AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
## Language
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT
RECOMMENDED", "MAY" and "OPTIONAL" in this document are to be interpreted as described in
[BCP 14](https://tools.ietf.org/html/bcp14) (covering RFC2119 and RFC8174) when, and only when, they appear in all
capitals, as shown here.
## Disclaimer
This document and its content are intended for information purposes only and may be subject to change or update without
notice.
This document may include preliminary concepts that may or may not be in the process of being developed by the Tari
community. The release of this document is intended solely for review and discussion by the community of the
technological merits of the potential system outlined herein.
## Goals
This Request for Comment (RFC) presents a proposal for introducing Tari Script into the Tari base layer protocol. Tari
Script aims to provide a general mechanism for enabling further extensions such as side chains, the DAN, one-sided
payments and atomic swaps.
## Related Requests for Comment
* [RFC-0200: Base Layer Extensions](BaseLayerExtensions.md)
* [RFC-0300: The Tari Digital Assets Network](RFC-0300_DAN.md)
## Introduction
It is hopefully clear to anyone reading these RFCs that the ambitions of the Tari project extend beyond a
Mimblewimble-clone-coin.
It should also be fairly clear that basic Mimblewimble does not have the feature set to provide functionality such as:
* One-sided payments
* Atomic swaps (possible with scriptless scripts, but not easy)
* Hash time-locked contracts (possible with scriptless scripts, but not easy)
* Multiparty side-chain peg outs and peg-ins
* Generalised smart contracts
Extensions to Mimblewimble have been proposed for most of these features, for example, David Burkett's one-sided payment
proposal for LiteCoin ([LIP-004]), this project's [HTLC RFC](RFC-0230_HTLC.md), the pegging proposals for the
Clacks side-chain, and [Scriptless script]s.
This RFC makes the case that if Tari were to implement a scripting language similar to Bitcoin script, then all of these
use cases collapse and can be achieved under a single set of (relatively minor) modifications and additions to the
current Tari and Mimblewimble protocol.
## Scripting on Mimblewimble
To the author's knowledge, none of the existing Mimblewimble projects have employed a scripting language. The reasons for
this are unclear, but there is at least one narrative in the space that it is
[not possible](https://forum.grin.mw/t/will-grin-allow-scripting-smart-contracts-in-the-future/7391/2) with
Mimblewimble. Given that [Grin](https://github.com/mimblewimble/grin) styles itself as a "Minimal implementation of the
Mimblewimble protocol", this status is unlikely to change soon.
As of this writing, the Beam project also considers Scriptless Script to be the
[extent of their scripting capabilities](https://docs.beam.mw/Beam_lightning_network_position_paper.pdf).
[Mimblewimble coin](https://github.com/mwcproject/mwc-node/blob/master/doc/roadmap.md) is a fork of Grin and "considers
the protocol ossified".
Litecoin is in the process of adding Mimblewimble as a
[side-chain](https://github.com/litecoin-project/lips/blob/master/lip-0003.mediawiki). As of this writing, there appear
to be no plans to include general scripting into the protocol.
### Scriptless scripts
[Scriptless script] is a wonderfully elegant technology and inclusion of Tari script does not preclude the use of
Scriptless script in Tari. However, scriptless scripts are difficult to reason about, and their development is best
left to experts in cryptographic proofs, leaving the development of Mimblewimble smart contracts in the hands of a very
select group of people.
However, it is the opinion of the author that there is no reason why Mimblewimble cannot be extended to include
scripting.
## Tari script - a basic motivation
The essential idea of Tari script is as follows:
Given a standard Tari UTXO, we add _additional restrictions_ on whether that UTXO can be included as a valid input in a
transaction.
As long as those conditions are suitably committed to, and are not malleable throughout the existence of the UTXO, then
in general, these conditions are no different to the requirement of having range proofs attached to UTXOs, which require
that the value of Tari commitments is non-negative.
Note that range proofs can be discarded after a UTXO is spent, since the global security guarantees of Mimblewimble are
not concerned that every transaction in history was valid from an inflation perspective, but that the net effect of all
transactions leads to zero inflation. This sounds worse than it is, since locally, every individual transaction is
checked for validity at the time of inclusion in the blockchain.
This argument is independent of the nature of the additional restrictions. Specifically, if these restrictions are
manifested as a script that provides additional constraints over whether a UTXO may be spent, the same arguments apply.
This means that from a philosophical viewpoint, there ought to be no reason that Tari Script is not workable, and
further, that pruning spent outputs (and possibly the scripts associated with them) is not that different from pruning
range proofs.
There is one key difference though that we need to address.
If it somehow happened that two illegal transactions made it into the blockchain (perhaps due to a bug), and the two
cancelled each other out, such that the global coin supply was still correct, one would never know this when doing a
chain synchronisation in pruned mode.
But if there was a steady inflation bug due to invalid range proofs making it into the blockchain, a pruned mode sync
would still detect that _something_ was awry, because the global coin supply balance acts as another check.
With Tari script, if the script has been pruned away and there is then a re-org to an earlier point on the chain,
there is no way to ensure that the script was honoured.
However, a single honest archival node would be able to detect any fraud on the same chain and provide a simple proof
that a transaction did not honour the redeem script.
### Additional requirements
The assumptions that broadly equate scripting with range proofs in the above argument are:
* The script (hash) must be committed to in the blockchain.
* The script must not be malleable in any way without invalidating the transaction.
* The creator of the UTXO must commit to and sign the script (hash).
The script commitment, which can be adequately represented by the hash of the canonical serialisation of the script in
binary format, could be placed in the transaction kernel, or in a dedicated merkle mountain range for scripts.
Range proofs are not malleable because one must have knowledge of the UTXO blinding factor in order to generate a valid
range proof. However, it's trivial to replace scripts with other valid scripts, potentially to the point that miners or
malicious actors could take the UTXO for themselves.
Therefore, it's imperative that the UTXO creator sign the script.
Further, this signature must be present in the _kernel_ in some form, otherwise miners will be able to remove the script
via cut-through, whereas kernels are never pruned.
One approach to commit to the script hashes is to modify the output commitments using the [data commitments] approach
suggested by [Phyro](https://github.com/phyro). In this approach, when creating a new UTXO, the owner also calculates
the hash of the locking script, _s_, such that `s = H(script)`. The script hash gets stored in the UTXO itself.
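As a rough illustration, a wallet could derive _s_ along the following lines. The helper name and the choice of SHA-256 are assumptions made purely for this sketch; the RFC does not prescribe a particular hash function.

```rust,ignore
// Illustrative sketch only: the hash function (SHA-256) and helper name are assumptions.
use sha2::{Digest, Sha256};

/// s = H(script), where `canonical_script` is the canonical binary serialisation of the script.
fn script_hash(canonical_script: &[u8]) -> [u8; 32] {
    let digest = Sha256::digest(canonical_script);
    let mut s = [0u8; 32];
    s.copy_from_slice(&digest);
    s
}
```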
## Protocol modifications
The current definition of a Tari UTXO is:
```rust,ignore
pub struct TransactionOutput {
/// Options for an output's structure or use
features: OutputFeatures,
/// The homomorphic commitment representing the output amount
commitment: Commitment,
/// A proof that the commitment is in the right range
proof: RangeProof,
}
```
Under Tari script, this would change to
```rust,ignore
pub struct TransactionOutput {
features: OutputFeatures,
commitment: Commitment,
proof: RangeProof,
/// New: The hash of the locking script on this UTXO.
script_hash: HashOutput,
}
```
Now when calculating the transaction or block balance, we calculate a different set of commitments. The current commitment,
$$ C = v.H + k.G $$
is modified with a commitment to the script hash, so
$$ \hat{C} = C + \mathrm{H}(C \Vert s).G $$
and wallets will sign the kernel with $$ k + \mathrm{H}(C \Vert s) $$ rather than just _k_.
The overall and block balance checks must also be modified to use \\( \hat{C} \\) rather than _C_.
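To illustrate how these pieces fit together, the snippet below sketches how a wallet might derive the script hash, the
adjusted commitment and the adjusted signing key. It is a sketch only: `Blake256`, `hash_to_scalar`, `PublicKey` and the
other helpers are placeholder names for illustration, not the actual Tari crypto API.

```rust,ignore
// Sketch only. All types and helper functions are illustrative placeholders.
fn script_adjusted_output(
    spending_key: &SecretKey, // k
    commitment: &Commitment,  // C = v.H + k.G
    script: &[u8],
) -> (HashValue, Commitment, SecretKey) {
    // s = H(script): the script hash stored in the UTXO
    let s = Blake256::digest(script);
    // e = H(C || s): the scalar that binds this script hash to this commitment
    let e = hash_to_scalar(&[commitment.as_bytes(), s.as_bytes()].concat());
    // \hat{C} = C + H(C || s).G
    let c_hat = commitment + PublicKey::from_secret_key(&e);
    // The kernel is signed with k + H(C || s) rather than just k
    let signing_key = spending_key + e;
    (s, c_hat, signing_key)
}
```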
### Transaction balance
The new transaction balance is thus
$$
\begin{align}
& \sum(\mathrm{Inputs}) - \sum(\mathrm{Outputs}) - \sum(\mathrm{fee}_i.G) \\\\
=& \sum\hat{C_i} - \sum\hat{C_j} - \sum(\mathrm{fee}_i.G) \\\\
=& \sum(C_i + \mathrm{H}(C_i \Vert s_i).G) - \sum(C_j + \mathrm{H}(C_j \Vert s_j).G) - \sum(\mathrm{fee}.G)
\end{align}
$$
If the accounting is correct, all values will cancel
$$
= \sum \bigl(k_i + \mathrm{H}(C_i \Vert s_i)\bigr).G - \sum \bigl(k_j + \mathrm{H}(C_j \Vert s_j)\bigr).G
$$
The sum of all the blinding factors (times G) is the definition of the standard Mimblewimble excess,
$$ x_s\cdot G = X_s $$
If we define,
$$
\Lambda = \sum(\mathrm{H}(C_i \Vert s_i).G) - \sum(\mathrm{H}(C_j \Vert s_j).G)
$$
then the new transaction excess can be written as
$$
X_\mathrm{new} = X_s + \Lambda
$$
The kernels are unmodified, except that the excess will now include \\( \Lambda \\), representing the sum of all the
commitments to the UTXO script hashes. This also means that the kernel signatures are calculated slightly differently:
$$
\begin{align}
s_i &= r_i + e.\bigl(k_i + \mathrm{H}(C_i \Vert s_i) \bigr) \\\\
s_i.G &= r_i.G + e.\bigl(k_i + \mathrm{H}(C_i \Vert s_i) \bigr).G \\\\
s_i.G &= R_i + e.\bigl(P_i + \mathrm{H}(C_i \Vert s_i).G\bigr) \\\\
\end{align}
$$
Summing the signatures, one can easily confirm that \\( X_s + \Lambda \\) signs the kernel correctly. The kernel offset
is not included in this treatment, but it does not affect the result. One of the input commitments will be offset by a
value selected by the sender and provided with the transaction data as usual. The signatures will still validate as
usual, and the kernel offset will correct the overall excess balance.
The same treatment extends to the block validation check. Note that since the individual kernel excesses can still be
summed to obtain the overall block balance, the de-association of kernels and their outputs is maintained.
### Checking the requirements
Miners cannot modify the script hash, because it is committed to in the public excess value. Moreover, the \\( C_i \Vert
s_i \\) pair is committed to, so miners can't, for example, swap script hashes on commitments to keep the overall excess
the same but still manipulate specific outputs.
The UTXO creator(s) are also committing to the script hash by including it in the kernel signature.
Thus all three requirements are satisfied, and Tari Script, using this formulation, should offer the same security
guarantees that range proofs do.
In particular, UTXOs can still be pruned because the \\( \Lambda \\) values change sign when used as inputs and will
cancel out in the overall balance in the same way that the pruned out excesses are.
However, a problem arises here: as it stands, the UTXOs _cannot_ in fact be pruned, because we would lose data needed
to verify the kernel signatures, i.e. \\( \mathrm{H}(C_i \Vert s_i) \\), and that data only exists in the UTXOs. We can
salvage this situation fairly easily by noticing that we only need the _hash_ of the commitment and script hash.
If we track an MMR of \\( C_i \Vert s_i \\), then those hashes are always available, even after the UTXOs themselves
have been discarded. In terms of additional block space required, this amounts to a single 32 byte hash per header (the
MMR root). A more detailed storage assessment is given [below](#storage-impact-of-script-hash-mmr).
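As a rough illustration, appending to that MMR amounts to hashing each output's commitment together with its script hash
and pushing the result as a leaf. The snippet below is a sketch only; `Blake256` and `Mmr` stand in for whatever hash
function and MMR implementation are ultimately used.

```rust,ignore
// Sketch: compute the leaf hash H(C || s) and append it to the script-hash MMR.
// `Blake256` and `Mmr` are illustrative placeholders.
fn append_script_leaf(mmr: &mut Mmr, commitment: &Commitment, script_hash: &HashValue) {
    let leaf = Blake256::new()
        .chain(commitment.as_bytes())
        .chain(script_hash.as_bytes())
        .result();
    // The leaf survives even after the UTXO itself is pruned,
    // so kernel signatures remain verifiable.
    mmr.push(leaf);
}
```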
## Tari Script semantics
The proposal for Tari Script is straightforward. It is based on Bitcoin script and inherits most of its ideas.
The main properties of Tari Script are:
* The scripting language is stack-based. At redeem time, the UTXO spender must supply an input stack. The script runs by
operating on the stack contents.
* If an error occurs during execution, the script fails.
* After the script completes, it is successful if and only if it has not aborted, and there is exactly one element
on the stack with a value of zero. In other words, the script fails if the stack is empty, contains more than one
element, contains a single non-zero element, or aborts early.
* It is not Turing complete, so there are no loops or timing functions.
* The Rust type system ensures that only compatible data types can be operated on, e.g. a public key cannot be added to
an integer scalar. Errors of this kind cause the script to fail.
### Opcodes
Tari Script opcodes are enumerated from 0 to 255 and are represented as a single unsigned byte. The opcode set is
initially limited to allow for the applications specified in this RFC, but can be expanded in future.
```rust,ignore
pub enum Opcode {
/// Push the current chain height onto the stack
PushHeight,
/// Push the associated 32-byte value onto the stack
PushHash(Box<HashValue>),
/// Hash the top stack element with the Blake256 hash function and push the result to the stack
HashBlake256,
/// Fail the script immediately. (Must be executed.)
Return,
/// Drops the top stack item
Drop,
/// Duplicates the top stack item
Dup,
/// Reverse rotation. The top stack item moves into 3rd place, abc => bca
RevRot,
/// Pop two items and push their sum
Add,
/// Pop two items and push the second minus the top
Sub,
/// Pop the public key and then the signature. If the signature signs the script, push 0 to the stack, otherwise
/// push 1
CheckSig,
/// As for CheckSig, but aborts immediately if the signature is invalid. As opposed to Bitcoin, it pushes a zero
/// to the stack if successful
CheckSigVerify,
/// Pushes 0, if the inputs are exactly equal, 1 otherwise
Equal,
/// Pushes 0, if the inputs are exactly equal, aborts otherwise
EqualVerify,
}
```
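As a concrete example of this opcode set in use, the P2PKH script that appears in the serialisation example below could
be assembled as follows. This is a sketch; a plain `Vec<Opcode>` is assumed as the script representation purely for
illustration.

```rust,ignore
// Sketch: building the pay-to-public-key-hash script used in the
// serialisation example. `pubkey_hash` is the Blake256 hash of the
// recipient's public key.
fn p2pkh_script(pubkey_hash: HashValue) -> Vec<Opcode> {
    vec![
        Opcode::Dup,                             // duplicate the supplied public key
        Opcode::HashBlake256,                    // hash the duplicate
        Opcode::PushHash(Box::new(pubkey_hash)), // push the expected hash
        Opcode::EqualVerify,                     // abort if the hashes differ, else push 0
        Opcode::Drop,                            // drop the 0 left by EqualVerify
        Opcode::CheckSig,                        // push 0 if the signature is valid, 1 otherwise
    ]
}
```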
### Serialisation
Tari Script and the execution stack are serialised into byte strings using a simple linear parser. Since all opcodes are
a single byte, it's very easy to read and write script byte strings. If an opcode has a parameter associated with it,
e.g. `PushHash`, then it is likewise known how many bytes following the opcode will contain the parameter. So, for example,
a pay-to-public-key-hash (P2PKH) script, when serialised, is
```text
71b07aae2337ce44f9ebb6169c863ec168046cb35ab4ef7aa9ed4f5f1f669bb74b09e58170ac
```
which maps to
```text
71 b0 7a ae2337ce44f9ebb6169c863ec168046cb35ab4ef7aa9ed4f5f1f669bb74b09e5 81 70 ac
Dup HashBlake256 PushHash(ae2337ce44f9ebb6169c863ec168046cb35ab4ef7aa9ed4f5f1f669bb74b09e5) EqualVerify Drop CheckSig
```
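A serialiser for this format can therefore be little more than a loop that emits each opcode byte followed by any
fixed-width parameter. The sketch below uses the byte values from the example above; the final opcode-to-byte
assignments are illustrative rather than normative.

```rust,ignore
// Sketch: linear serialisation of a script. The byte values match the
// worked example above but are not a normative opcode table.
fn serialise(script: &[Opcode]) -> Vec<u8> {
    let mut bytes = Vec::new();
    for op in script {
        match op {
            Opcode::Dup => bytes.push(0x71),
            Opcode::HashBlake256 => bytes.push(0xb0),
            Opcode::PushHash(h) => {
                bytes.push(0x7a);
                bytes.extend_from_slice(h.as_ref()); // 32-byte parameter follows the opcode
            }
            Opcode::EqualVerify => bytes.push(0x81),
            Opcode::Drop => bytes.push(0x70),
            Opcode::CheckSig => bytes.push(0xac),
            _ => unimplemented!("remaining opcodes omitted from this sketch"),
        }
    }
    bytes
}
```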
Input parameters are serialised in an analogous manner.
The types of input parameters that are accepted are:
```rust,ignore
pub enum StackItem {
Number(i64),
Hash(HashValue),
Commitment(PedersenCommitment),
PublicKey(RistrettoPublicKey),
Signature(RistrettoSchnorr),
}
```
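Putting the opcodes and stack items together, script execution reduces to a small loop with the success rule described
above. The following is a sketch only; `ScriptError` and the `add_items` helper are assumed for illustration, and most
opcodes are elided.

```rust,ignore
// Sketch of the execution rule: run every opcode against the input stack,
// abort on any error, and succeed only if exactly one zero remains.
fn execute(script: &[Opcode], stack: &mut Vec<StackItem>) -> Result<(), ScriptError> {
    for op in script {
        match op {
            Opcode::Drop => {
                stack.pop().ok_or(ScriptError::StackUnderflow)?;
            }
            Opcode::Add => {
                let a = stack.pop().ok_or(ScriptError::StackUnderflow)?;
                let b = stack.pop().ok_or(ScriptError::StackUnderflow)?;
                stack.push(add_items(a, b)?); // fails on incompatible types
            }
            Opcode::Return => return Err(ScriptError::ExplicitAbort),
            // ... remaining opcodes elided from this sketch ...
            _ => unimplemented!(),
        }
    }
    match stack.as_slice() {
        [StackItem::Number(0)] => Ok(()),
        _ => Err(ScriptError::InvalidFinalState),
    }
}
```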
### Storage impact of script hash MMR
Adding another MMR to track the script commitments, \\( \mathrm{H}(C_i \Vert s_i) \\), has the following impacts on
bandwidth and storage:
Additional data transferred in each block would be:
* 32 bytes for every UTXO (The script hash itself)
* 32 bytes for every header (The MMR root).
The storage impact is the size of the scripts, plus \\( (2^{\log_2 k + 1}-1) \times 32 \\) bytes, where _k_ is the total number of
UTXOs; or, if we store just the leaves, it is \\( k \times 32 \\) bytes.
For 10 million UTXOs, this adds an additional 620 MB or so to the blockchain database if the entire MMR is stored, or
305 MB if just the hashes are stored.
## Extensions
### Covenants
Tari script places restrictions on _who_ can spend UTXOs. It will also be useful for Tari digital asset applications to
restrict _how_ or _where_ UTXOs may be spent in some cases. These sorts of restrictions are generally termed
_covenants_. The [Handshake white paper] has a fairly good description of how covenants work.
It is beyond the scope of this RFC, but it's anticipated that Tari Script would play a key role in the introduction of
generalised covenant support into Tari.
### Lock-time malleability
The current Tari protocol has an issue with Transaction Output Maturity malleability. This output feature is enforced in
the consensus rules, but it is actually possible for a miner to change the value without invalidating the transaction.
The lock time could also be added to the script commitment hash to solve this problem.
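One possible construction (a sketch rather than a finalised proposal) is to fold the maturity, and any other output
features that must be tamper-proof, into the same hash that commits to the script:

$$ \hat{C} = C + \mathrm{H}(C \Vert s \Vert \mathrm{maturity}).G $$

With this form, a miner that alters the maturity changes the expected excess, which invalidates the kernel signature in
the same way that tampering with the script hash does.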
## Applications
### One-sided transactions
One-sided transactions are Mimblewimble payments that do not require the receiver to interact in the transaction
process. [LIP-004] describes how this will be implemented in Litecoin's Mimblewimble implementation. The main thrust is
that the sender uses Diffie-Hellman exchange to generate a shared private key that is used as the receiver's blinding
factor.
To prevent the sender from spending the coins (since both parties now know the spending key), there is an additional
commitment balance equation that is carried out on the block and transaction that requires the spender to know the
receiver's private key.
To implement one-sided payments in Tari, we propose using Diffie-Hellman exchange in conjunction with Tari Script to
achieve the same thing.
In particular, if Alice is sending some Tari to Bob, she generates a shared private key as follows:
$$ k_s = \mathrm{H}(k_a P_b) $$
where \\( P_b \\) is Bob's Tari node address, or any other public key that Bob has shared with Alice. Alice can generate
an ephemeral public-private keypair, \\( P_a = k_a\cdot G \\) for this transaction.
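A sketch of the sender-side derivation is shown below. The Ristretto types and helper functions (`random`,
`from_secret_key`, `hash_to_scalar`) are placeholder names for illustration, not the actual Tari crypto API.

```rust,ignore
// Sketch: Alice derives the shared spending key for a one-sided payment.
fn one_sided_spending_key(
    bob_pubkey: &RistrettoPublicKey,
) -> (RistrettoPublicKey, RistrettoSecretKey) {
    // Ephemeral keypair for this transaction: P_a = k_a.G
    let k_a = RistrettoSecretKey::random(&mut rand::thread_rng());
    let p_a = RistrettoPublicKey::from_secret_key(&k_a);
    // Shared key k_s = H(k_a.P_b); Bob can recompute it as H(k_b.P_a)
    let k_s = hash_to_scalar(&(bob_pubkey * &k_a));
    // P_a is published with the output so that Bob can derive k_s
    (p_a, k_s)
}
```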
Alice then locks the output with the following script:
```text
Dup PushPubkey(P_B) EqualVerify CheckSig Add
```
where `P_B` is Bob's public key. As one can see, this Tari script is very similar to Bitcoin script.
The interpretation of this script is: "Given a public key and a signature of this
script, the public key must be equal to the one in the locking script, and the signature must be valid using the same
public key".
This is in effect the same as Bitcoin's P2PK script. To increase privacy, Alice could also lock the UTXO with a P2PKH
script:
```text
Dup HashBlake256 PushHash(HB) EqualVerify CheckSig Add
```
where `HB` is the hash of Bob's public key.
In either case, only someone with the knowledge of Bob's private key can generate a valid signature, so Alice will not
be able to unlock the UTXO to spend it.
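When Bob spends the output, the input stack he supplies consists of just his signature over the script and his public
key. The sketch below uses the `StackItem` types defined earlier; `sign_script` is an illustrative helper, and the
ordering is chosen so that the public key sits on top of the stack when execution starts.

```rust,ignore
// Sketch: the input stack Bob provides to satisfy the locking script.
// `sign_script` is an illustrative helper that signs the serialised script
// with Bob's private key.
let input_stack = vec![
    StackItem::Signature(sign_script(&bob_secret_key, &script_bytes)),
    StackItem::PublicKey(bob_public_key.clone()),
];
```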
Since the script is committed to and cannot be cut-through, only Bob will be able to spend this UTXO unless someone is
able to discover the private key from the public key information (the discrete log assumption), or if the majority of
miners collude to not honour the consensus rules governing the successful evaluation of the script (the 51% assumption).
### Credits
Thanks to [@philipr-za](https://github.com/philipr-za) and [@SWvheerden](https://github.com/SWvheerden) for their input
and contributions to this RFC.
[data commitments]: https://phyro.github.io/grinvestigation/data_commitments.html
[LIP-004]: https://github.com/DavidBurkett/lips/blob/master/lip-0004.mediawiki
[Scriptless script]: https://tlu.tarilabs.com/cryptography/scriptless-scripts/introduction-to-scriptless-scripts.html
[Handshake white paper]: https://handshake.org/files/handshake.txt
ce5e292f1acb4ba875fec634b1a95d0bdd3a2c83 | 36,960 | md | Markdown | articles/storage/files/storage-files-migration-storsimple-8000.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/files/storage-files-migration-storsimple-8000.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/files/storage-files-migration-storsimple-8000.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Migracja serii StorSimple 8000 do usługi Azure File Sync
description: Dowiedz się, jak przeprowadzić migrację urządzenia StorSimple 8100 lub 8600 do usługi Azure File Sync.
author: fauhse
ms.service: storage
ms.topic: conceptual
ms.date: 03/09/2020
ms.author: fauhse
ms.subservice: files
ms.openlocfilehash: 7f0c4da7caf71670746e84d5cfaa457ebae57156
ms.sourcegitcommit: 441db70765ff9042db87c60f4aa3c51df2afae2d
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/06/2020
ms.locfileid: "80755033"
---
# <a name="storsimple-8100-and-8600-migration-to-azure-file-sync"></a>Migracja storsimple 8100 i 8600 do synchronizacji plików platformy Azure
Seria StorSimple 8000 jest reprezentowana przez urządzenia fizyczne 8100 lub 8600 i ich składniki usługi w chmurze. Istnieje możliwość migracji danych z jednego z tych urządzeń do środowiska synchronizacji plików platformy Azure. Usługa Azure File Sync to domyślna i strategiczna długoterminowa usługa platformy Azure, do którą można migrować urządzenia StorSimple.
Seria StorSimple 8000 dobiegnie [końca](https://support.microsoft.com/en-us/lifecycle/search?alpha=StorSimple%208000%20Series) w grudniu 2022 roku. Ważne jest, aby rozpocząć planowanie migracji tak szybko, jak to możliwe. Ten artykuł zawiera niezbędną wiedzę w tle i kroki migracji dla pomyślnej migracji do usługi Azure File Sync.
## <a name="azure-file-sync"></a>Azure File Sync
> [!IMPORTANT]
> Firma Microsoft dokłada wszelkich starań, aby pomóc klientom w ich migracji. Wyślij AzureFilesMigration@microsoft wiadomość e-mail do .com w celu uzyskania dostosowanego planu migracji oraz pomocy podczas migracji.
Usługa Azure File Sync to usługa w chmurze firmy Microsoft oparta na dwóch głównych składnikach:
* Synchronizacja plików i warstwowanie w chmurze.
* Udziały plików jako magazynu macierzystego na platformie Azure, które mogą być dostępne za pośrednictwem wielu protokołów, takich jak SMB i rest pliku. Udział plików platformy Azure jest porównywalny z udziałem plików w systemie Windows Server, który można natywnie zainstalować jako dysk sieciowy. Obsługuje ważne aspekty wierności plików, takie jak atrybuty, uprawnienia i sygnatury czasowe. W przypadku udziałów plików platformy Azure nie ma już potrzeby interpretowania plików i folderów przechowywanych w chmurze przez aplikację lub usługę. Można uzyskać do nich dostęp natywnie za korzystać ze znanych protokołów i klientów, takich jak Eksplorator plików Windows. To sprawia, że udziały plików platformy Azure są idealnym i najbardziej elastycznym podejściem do przechowywania danych serwera plików ogólnego przeznaczenia, a także niektórych danych aplikacji w chmurze.
W tym artykule skupiono się na krokach migracji. Jeśli przed migracją chcesz dowiedzieć się więcej o usłudze Azure File Sync, zalecamy następujące artykuły:
* [Usługa Azure File Sync — omówienie](https://aka.ms/AFS "Omówienie")
* [Usługa Azure File Sync — przewodnik po wdrażaniu](storage-sync-files-deployment-guide.md)
## <a name="migration-goals"></a>Cele migracji
Celem jest zagwarantowanie integralności danych produkcyjnych, a także zagwarantowanie dostępności. Ten ostatni wymaga utrzymania przestojów do minimum, tak aby mógł zmieścić się w lub tylko nieznacznie przekroczyć regularne okna konserwacji.
## <a name="storsimple-8000-series-migration-path-to-azure-file-sync"></a>Ścieżka migracji storsimple serii 8000 do synchronizacji plików platformy Azure
Do uruchomienia agenta usługi Azure File Sync wymagany jest lokalny serwer Windows Server. System Windows Server może być co najmniej serwerem 2012R2, ale najlepiej jest z systemem Windows Server 2019.
Istnieje wiele alternatywnych ścieżek migracji i stworzyłoby to zbyt długi artykuł, aby udokumentować wszystkie z nich i zilustrować, dlaczego ponoszą ryzyko lub wady trasy, którą zalecamy jako najlepszą praktykę w tym artykule.
![Omówienie faz migracji serii StorSimple 8000](media/storage-files-migration-storsimple-shared/storsimple-8000-migration-overview.png "StorSimple 8000 serii migracji przeglądu trasy fazy poniżej w tym artykule.")
Poprzedni obraz przedstawia fazy, które odpowiadają sekcjom w tym artykule.
Używamy migracji po stronie chmury, aby uniknąć niepotrzebnego wycofywania plików do lokalnego urządzenia StorSimple. Takie podejście pozwala uniknąć wpływu na zachowanie lokalnego buforowania lub wykorzystanie przepustowości sieci, z których każda może mieć wpływ na obciążenia produkcyjne.
Migracja po stronie chmury działa na migawkę (klon woluminu) danych. Więc dane produkcyjne są odizolowane od tego procesu - do cut-over na koniec migracji. Praca z tego, co jest w istocie kopii zapasowej, sprawia, że migracja bezpieczne i łatwe do powtórzenia, należy napotkać na jakiekolwiek trudności.
## <a name="considerations-around-existing-storsimple-backups"></a>Zagadnienia dotyczące istniejących kopii zapasowych StorSimple
StorSimple umożliwia wykonywanie kopii zapasowych w postaci klonów woluminów. W tym artykule użyto nowego klonu woluminu do migracji plików na żywo.
Jeśli musisz przeprowadzić migrację kopii zapasowych oprócz danych na żywo, wszystkie wskazówki w tym artykule nadal mają zastosowanie. Jedyną różnicą jest to, że zamiast zaczynać od nowego klonu woluminu, zaczniesz od najstarszego klonu woluminu kopii zapasowej, który musisz przeprowadzić migrację.
Kolejność jest następująca:
* Określ minimalny zestaw klonów woluminów, które należy przeprowadzić migrację. Zalecamy zachowanie tej listy do minimum, jeśli to możliwe, ponieważ im więcej kopii zapasowych zostanie migrowane, tym dłużej trwa ogólny proces migracji.
* Podczas przechodzenia przez proces migracji, należy rozpocząć od najstarszego klonu woluminu, który zamierzasz przeprowadzić migrację, a przy każdej kolejnej migracji użyć następnego najstarszego.
* Po zakończeniu każdej migracji klonowania woluminów należy wykonać migawkę udziału plików platformy Azure. [Migawki udziału plików platformy Azure](storage-snapshots-files.md) to sposób przechowywania kopii zapasowych plików i struktury folderów w czasie dla udziałów plików platformy Azure. Te migawki będą potrzebne po zakończeniu migracji, aby upewnić się, że zachowane wersje każdego klonów woluminu w miarę postępów w migracji.
* Upewnij się, że należy wziąć migawki udziału plików platformy Azure dla wszystkich udziałów plików platformy Azure, które są obsługiwane przez ten sam wolumin StorSimple. Klony woluminów są na poziomie woluminu, migawki udziału plików platformy Azure są na poziomie udziału. Po zakończeniu migracji klonu woluminu należy zrobić migawkę udziału (w każdym udziale plików platformy Azure).
* Powtórz proces migracji dla klonu woluminu i robienia migawek udziału po każdym klonu woluminu, aż zostaniesz złapany do migawki danych na żywo. Proces migracji klonu woluminu jest opisany w poniższych fazach.
Jeśli nie trzeba w ogóle przenosić kopii zapasowych i można uruchomić nowy łańcuch kopii zapasowych po stronie udziału plików platformy Azure po migracji tylko dane na żywo jest wykonywana, a następnie jest to korzystne, aby zmniejszyć złożoność migracji i ilość czasu migracji zajmie. Można podjąć decyzję, czy przenieść kopie zapasowe i ile dla każdego woluminu (nie każdego udziału) masz w StorSimple.
## <a name="phase-1-get-ready"></a>Faza 1: Przygotuj się
:::row:::
:::column:::
![Obraz przedstawiający część wcześniejszego obrazu przeglądu, który pomaga skupić się na tej podsekcji artykułu.](media/storage-files-migration-storsimple-shared/storsimple-8000-migration-phase-1.png)
:::column-end:::
:::column:::
Podstawą migracji jest klon woluminu i wirtualne urządzenie w chmurze, o nazwie StorSimple 8020.
Ta faza koncentruje się na wdrażaniu tych zasobów na platformie Azure.
:::column-end:::
:::row-end:::
### <a name="deploy-a-storsimple-8020-virtual-appliance"></a>Wdrażanie urządzenia wirtualnego StorSimple 8020
Wdrażanie urządzenia w chmurze jest procesem, który wymaga zabezpieczeń, sieci i kilku innych zagadnień.
> [!IMPORTANT]
> Poniższy przewodnik zawiera kilka niepotrzebnych sekcji. Przeczytaj i postępuj zgodnie z artykułem od początku do "Kroku 3". Następnie wróć do tego artykułu. Nie musisz w tej chwili wypełniać "Kroku 3" ani niczego poza nim w tym przewodniku.
[Wdrożenie urządzenia wirtualnego StorSimple 8020](../../storsimple/storsimple-8000-cloud-appliance-u2.md)
### <a name="determine-a-volume-clone-to-use"></a>Określanie klonu woluminu, który ma być używany
Gdy będziesz gotowy do rozpoczęcia migracji, pierwszym krokiem jest podjęcie nowego klonu woluminu — tak samo jak w przypadku kopii zapasowej — który przechwytuje bieżący stan magazynu w chmurze StorSimple. Weź klon dla każdego woluminu StorSimple, które masz.
Jeśli potrzebujesz przenoszenia kopii zapasowych, pierwszy klon woluminu, którego używasz, nie jest nowo utworzonym klonem, ale najstarszy klon woluminu (najstarsza kopia zapasowa), który należy przeprowadzić migrację.
Szczegółowe wskazówki można znaleźć w sekcji ["Zagadnienia dotyczące istniejących kopii zapasowych StorSimple".](#considerations-around-existing-storsimple-backups)
> [!IMPORTANT]
> Poniższy przewodnik zawiera kilka niepotrzebnych sekcji. Przeczytaj i wykonaj tylko kroki opisane w sekcji połączonej. Następnie wróć do tego artykułu. Nie musisz postępować zgodnie z sekcją "Następne kroki".
[Tworzenie klonu woluminu](../../storsimple/storsimple-8000-clone-volume-u2.md#create-a-clone-of-a-volume)
### <a name="use-the-volume-clone"></a>Użyj klonu woluminu
Ostatnia faza fazy 1 polega na udostępnieniu wybranego klonu woluminu na urządzeniu wirtualnym 8020 na platformie Azure.
> [!IMPORTANT]
> Poniższy przewodnik zawiera niezbędne kroki, ale także - na końcu - instrukcję formatowania woluminu. **NIE FORMATUJ WOLUMINU** Przeczytaj i postępuj zgodnie z powiązaną "sekcją 7" od początku do instrukcji: "10. Aby sformatować wolumin prosty, ..." Zatrzymaj się przed dokonaniem tego kroku i wróć do tego artykułu.
[Instalowanie klonowania woluminów na urządzeniu wirtualnym 8020 na platformie Azure](../../storsimple/storsimple-8000-deployment-walkthrough-u2.md#step-7-mount-initialize-and-format-a-volume)
### <a name="phase-1-summary"></a>Podsumowanie fazy 1
Po zakończeniu fazy 1 wykonasz następujące czynności:
* Wdrożono urządzenie wirtualne StorSimple 8020 na platformie Azure.
* Określa, który klon woluminu rozpocznie migrację.
* Zainstalowano klony woluminów (po jednym dla każdego woluminu na żywo) na urządzeniu wirtualnym StorSimple na platformie Azure, z jego danymi dostępnymi do dalszego użycia.
## <a name="phase-2-cloud-vm"></a>Faza 2: Maszyna wirtualna w chmurze
:::row:::
:::column:::
![Obraz przedstawiający część wcześniejszego obrazu przeglądu, który pomaga skupić się na tej podsekcji artykułu.](media/storage-files-migration-storsimple-shared/storsimple-8000-migration-phase-2.png)
:::column-end:::
:::column:::
Po początkowym klon jest dostępny na StorSimple 8020 urządzenia wirtualnego na platformie Azure, nadszedł czas, aby aprowizować maszynę wirtualną i udostępnić klon woluminu (lub wiele) do tej maszyny wirtualnej za pośrednictwem iSCSI.
:::column-end:::
:::row-end:::
### <a name="deploy-an-azure-vm"></a>Wdrażanie maszyny Wirtualnej platformy Azure
Maszyna wirtualna systemu Windows Server na platformie Azure jest podobnie jak StorSimple 8020, tymczasowy element infrastruktury, który jest niezbędny tylko podczas migracji.
Konfiguracja maszyny Wirtualnej, którą wdrażasz, zależy głównie od liczby elementów (plików i folderów), które będą synchronizowane. Jeśli masz jakiekolwiek wątpliwości, zalecamy przejście z konfiguracją o wyższej wydajności.
Pojedynczy system Windows Server może synchronizować maksymalnie 30 udziałów plików platformy Azure.
Specyfikacje, które zdecydujesz się na potrzebę obejmują każdy udział / ścieżka lub katalog główny woluminu StorSimple i zliczanie elementów (pliki i foldery).
Ogólny rozmiar danych jest mniejszy wąskim gardłem - jest to liczba elementów, do których musisz dostosować specyfikację maszyny.
* [Dowiedz się, jak rozmiar systemu Windows Server na podstawie liczby elementów (plików i folderów) potrzebnych do synchronizacji.](storage-sync-files-planning.md#recommended-system-resources)
**Uwaga:** Wcześniej połączony artykuł przedstawia tabelę z zakresem pamięci serwera (RAM). Orientuj się w kierunku dużej liczby maszyny Wirtualnej platformy Azure. Można zorientować się w kierunku mniejszej liczby dla komputera lokalnego.
* [Dowiedz się, jak wdrożyć maszynę wirtualną systemu Windows Sever.](../../virtual-machines/windows/quick-create-portal.md)
> [!IMPORTANT]
> Upewnij się, że maszyna wirtualna jest wdrażana w tym samym regionie platformy Azure co urządzenie wirtualne StorSimple 8020. Jeśli w ramach tej migracji, należy również zmienić region danych w chmurze z regionu, w którym są przechowywane w dniu dzisiejszym, można to zrobić w późniejszym kroku, podczas aprowizowania udziałów plików platformy Azure.
> [!IMPORTANT]
> Często lokalnego systemu Windows Server jest używany do przodu lokalnego urządzenia StorSimple. W takiej konfiguracji można włączyć funkcję "[Deduplikacja danych](https://docs.microsoft.com/windows-server/storage/data-deduplication/install-enable)" na tym serwerze Windows Server. **Jeśli użyto deduplikacji danych z danymi StorSimple, upewnij się, że włączysz deduplikację danych również na tej maszynie wirtualnej platformy Azure.** Nie należy mylić tej deduplikacji na poziomie pliku z wbudowaną deduplikacją na poziomie bloku StorSimples, dla której nie jest konieczne żadne działanie.
> [!IMPORTANT]
> Aby zoptymalizować wydajność, należy wdrożyć **szybki dysk systemu operacyjnego** dla maszyny Wirtualnej w chmurze. Baza danych synchronizacji zostanie przechowywana na dysku systemu operacyjnego dla wszystkich woluminów danych. Ponadto upewnij się, że utworzono **duży dysk systemu operacyjnego**. W zależności od liczby elementów (plików i folderów) na woluminach StorSimple dysk systemu operacyjnego może potrzebować **kilkuset gib** miejsca, aby pomieścić bazę danych synchronizacji.
### <a name="expose-the-storsimple-8020-volumes-to-the-azure-vm"></a>Udostępnianie woluminów StorSimple 8020 na maszynie wirtualnej platformy Azure
W tej fazie łączysz jeden lub kilka woluminów StorSimple z urządzenia wirtualnego 8020 za łącze iSCSI z aprowizowanym maszyną wirtualną systemu Windows Server.
> [!IMPORTANT]
> W przypadku następujących artykułów uzupełnij tylko sekcje **Pobierz prywatny adres IP dla urządzenia w chmurze** i **Połącz za pomocą interfejsu iSCSI** i wróć do tego artykułu.
1. [Uzyskiwanie prywatnego adresu IP urządzenia w chmurze](../../storsimple/storsimple-8000-cloud-appliance-u2.md#get-private-ip-for-the-cloud-appliance)
2. [Łączenie się za pomocą interfejsu iSCSI](../../storsimple/storsimple-8000-deployment-walkthrough-u2.md#step-7-mount-initialize-and-format-a-volume)
### <a name="phase-2-summary"></a>Podsumowanie fazy 2
Po zakończeniu fazy 2:
* Aprowizowana maszyna wirtualna systemu Windows Server w tym samym regionie co urządzenie 8020 virtual StorSimple
* Ujawniono wszystkie odpowiednie woluminy z 8020 na maszynę wirtualną systemu Windows Server za cieki iSCSI.
* Zawartość plików i folderów powinna być teraz widoczna podczas używania Eksploratora plików na maszynie Wirtualnej serwera na zainstalowanych woluminach.
Przejdź do fazy 3 tylko po wykonaniu tych kroków dla wszystkich woluminów, które wymagają migracji.
## <a name="phase-3-set-up-azure-file-shares-and-get-ready-for-azure-file-sync"></a>Faza 3: Konfigurowanie udziałów plików platformy Azure i przygotowanie do synchronizacji plików platformy Azure
:::row:::
:::column:::
![Obraz przedstawiający część wcześniejszego obrazu przeglądu, który pomaga skupić się na tej podsekcji artykułu.](media/storage-files-migration-storsimple-shared/storsimple-8000-migration-phase-3.png)
:::column-end:::
:::column:::
W tej fazie będzie określanie i inicjowanie obsługi administracyjnej szereg udziałów plików platformy Azure, tworzenie lokalnego systemu Windows Server jako zamiennik urządzenia StorSimple i konfigurowanie tego serwera dla synchronizacji plików platformy Azure.
:::column-end:::
:::row-end:::
### <a name="map-your-existing-namespaces-to-azure-file-shares"></a>Mapowanie istniejących obszarów nazw na udziały plików platformy Azure
[!INCLUDE [storage-files-migration-namespace-mapping](../../../includes/storage-files-migration-namespace-mapping.md)]
### <a name="deploy-azure-file-shares"></a>Wdrażanie udziałów plików platformy Azure
[!INCLUDE [storage-files-migration-provision-azfs](../../../includes/storage-files-migration-provision-azure-file-share.md)]
> [!TIP]
> Jeśli chcesz zmienić region platformy Azure z bieżącego regionu, w którego znajdują się dane StorSimple, a następnie aprowizować udziały plików platformy Azure w nowym regionie, którego chcesz użyć. Region można określić, wybierając go podczas aprowizowania kont magazynu, które przechowują udziały plików platformy Azure. Upewnij się, że również zasób usługi Azure File Sync, który będzie aprowizować poniżej, znajduje się w tym samym, nowym regionie.
### <a name="deploy-the-azure-file-sync-cloud-resource"></a>Wdrażanie zasobu chmury usługi Azure File Sync
[!INCLUDE [storage-files-migration-deploy-afs-sss](../../../includes/storage-files-migration-deploy-azure-file-sync-storage-sync-service.md)]
> [!TIP]
> Jeśli chcesz zmienić region platformy Azure z bieżącego regionu, w którego znajdują się dane StorSimple, a następnie aprowizowana konta magazynu dla udziałów plików platformy Azure w nowym regionie. Upewnij się, że wybrano ten sam region podczas wdrażania tej usługi synchronizacji magazynu.
### <a name="deploy-an-on-premises-windows-server"></a>Wdrażanie lokalnego systemu Windows Server
* Utwórz system Windows Server 2019 — co najmniej 2012R2 — jako maszynę wirtualną lub serwer fizyczny. Obsługiwany jest również klaster trybu failover systemu Windows Server. Nie należy ponownie używać serwera, na który masz fronting StorSimple 8100 lub 8600.
* Aprowizuj lub dodaj bezpośrednio podłączoną pamięć masową (DAS w porównaniu z serwerem NAS, który nie jest obsługiwany).
Najlepszym rozwiązaniem jest zapewnienie nowemu systemowi Windows Server równej lub większej ilości miejsca niż urządzenie StorSimple 8100 lub 8600 jest lokalnie dostępne do buforowania. System Windows Server będzie używany w taki sam sposób, w jaki używane urządzenie StorSimple, jeśli ma taką samą ilość miejsca do magazynowania jak urządzenie, środowisko buforowania powinno być podobne, jeśli nie takie samo.
Magazyn można dodawać lub usuwać z systemu Windows Server do woli. Dzięki temu można skalować rozmiar woluminu lokalnego i ilość magazynu lokalnego dostępnego do buforowania.
### <a name="prepare-the-windows-server-for-file-sync"></a>Przygotowywanie systemu Windows Server do synchronizacji plików
[!INCLUDE [storage-files-migration-deploy-afs-agent](../../../includes/storage-files-migration-deploy-azure-file-sync-agent.md)]
### <a name="configure-azure-file-sync-on-the-windows-server"></a>Konfigurowanie synchronizacji plików platformy Azure w systemie Windows Server
Zarejestrowany lokalny system Windows Server musi być gotowy i połączony z Internetem w tym procesie.
[!INCLUDE [storage-files-migration-configure-sync](../../../includes/storage-files-migration-configure-sync.md)]
> [!WARNING]
> **Pamiętaj, aby włączyć warstwy w chmurze!** Warstwa chmury to funkcja AFS, która umożliwia serwerowi lokalnemu mniejszą pojemność niż jest przechowywana w chmurze, ale ma dostęp do pełnego obszaru nazw. Lokalnie interesujące dane są również buforowane lokalnie w celu uzyskania szybkiej wydajności dostępu lokalnego. Innym powodem, aby włączyć warstwy w chmurze w tym kroku jest to, że nie chcemy synchronizować zawartości pliku na tym etapie, tylko obszar nazw powinien być w tej chwili w ruchu.
## <a name="phase-4-configure-the-azure-vm-for-sync"></a>Faza 4: Konfigurowanie maszyny Wirtualnej platformy Azure do synchronizacji
:::row:::
:::column:::
![Obraz przedstawiający część wcześniejszego obrazu przeglądu, który pomaga skupić się na tej podsekcji artykułu.](media/storage-files-migration-storsimple-shared/storsimple-8000-migration-phase-4.png)
:::column-end:::
:::column:::
Ta faza dotyczy maszyny Wirtualnej platformy Azure z zainstalowanym iSCSI, klonami pierwszego woluminu. Podczas tej fazy otrzymasz maszynę wirtualną połączoną za pośrednictwem usługi Azure File Sync i rozpoczniesz pierwszą rundę przenoszenia plików z klonów woluminów StorSimple.
:::column-end:::
:::row-end:::
Skonfigurowano już serwer lokalny, który zastąpi urządzenie StorSimple 8100 lub 8600 dla usługi Azure File Sync.
Konfigurowanie maszyny Wirtualnej platformy Azure jest prawie identyczny proces, z jednym dodatkowym krokiem. Poniższe kroki poprowadzą Cię przez proces.
> [!IMPORTANT]
> Ważne jest, aby maszyna wirtualna platformy Azure nie była **skonfigurowana z włączoną warstwą w chmurze!** Wolumin tego serwera zostanie wymieniony z nowszymi klonami woluminów podczas migracji. Warstwowa chmura nie ma żadnych korzyści i obciążenie użycia procesora CPU, którego należy unikać.
1. [Wdrażanie agenta AFS. (patrz poprzednia sekcja)](#prepare-the-windows-server-for-file-sync)
2. [Przygotowanie maszyny Wirtualnej do synchronizacji plików platformy Azure.](#get-the-vm-ready-for-azure-file-sync)
3. [Konfigurowanie synchronizacji](#configure-azure-file-sync-on-the-azure-vm)
### <a name="get-the-vm-ready-for-azure-file-sync"></a>Przygotowanie maszyny Wirtualnej do synchronizacji plików platformy Azure
Usługa Azure File Sync służy do przenoszenia plików z zainstalowanych woluminów iSCSI StorSimple do docelowych udziałów plików platformy Azure.
Podczas tego procesu migracji zostanie zamontowanych kilka klonów woluminów na maszynie wirtualnej, pod tą samą literą dysku. Usługa Azure File Sync musi być skonfigurowana tak, aby była instalowana jako nowsza wersja plików i folderów oraz aktualizowana zawartość plików platformy Azure połączona za pośrednictwem usługi Azure File Sync.
> [!IMPORTANT]
> Aby to działało, klucz rejestru musi być ustawiony na serwerze przed skonfigurowaniem usługi Azure File Sync.
1. Utwórz nowy katalog na dysku systemowym maszyny Wirtualnej. Informacje o synchronizacji plików platformy Azure będą musiały być tam utrwalane, a nie na zainstalowanych klonach woluminu. Na przykład: `"C:\syncmetadata"`
2. Otwórz regedit i znajdź następujący gałąź rejestru:`HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync`
3. Utwórz nowy klucz typu Ciąg o nazwie: ***MetadataRootPath***
4. Ustaw pełną ścieżkę do katalogu utworzonego na woluminie systemowym, na przykład:`C:\syncmetadata"`
### <a name="configure-azure-file-sync-on-the-azure-vm"></a>Konfigurowanie synchronizacji plików platformy Azure na maszynie Wirtualnej platformy Azure
Ten krok jest podobny do poprzedniej sekcji, w której omówiono sposób konfigurowania systemu AFS na serwerze lokalnym.
Różnica polega na tym, że nie należy włączać warstw w chmurze na tym serwerze i że należy upewnić się, że odpowiednie foldery są połączone z odpowiednimi udziałami plików platformy Azure. W przeciwnym razie nazewnictwo udziałów plików platformy Azure i zawartości danych nie będzie zgodne i nie ma możliwości zmiany nazwy zasobów w chmurze lub folderów lokalnych bez ponownej konfiguracji synchronizacji.
Zobacz [poprzednią sekcję dotyczącą konfigurowania synchronizacji plików platformy Azure na serwerze Windows Server](#configure-azure-file-sync-on-the-windows-server).
### <a name="step-4-summary"></a>Podsumowanie kroku 4
W tym momencie pomyślnie skonfigurowano usługę Azure File Sync na maszynie Wirtualnej platformy Azure, która została zainstalowana przez klony woluminów StorSimple za pośrednictwem interfejsu iSCSI.
Dane są teraz przepływa z maszyny Wirtualnej platformy Azure do różnych udziałów plików platformy Azure, a stamtąd w pełni zmęczony obszar nazw pojawia się na lokalnym systemie Windows Server.
> [!IMPORTANT]
> Upewnij się, że obecnie nie wprowadzono żadnych zmian ani dostępu użytkownika do systemu Windows Server.
Początkowe dane klonowania woluminu przechodzące za pośrednictwem maszyny Wirtualnej platformy Azure do udziałów plików platformy Azure może zająć dużo czasu, potencjalnie tygodnie. Oszacowanie czasu, który to zajmie, jest trudne i zależy od wielu czynników. Przede wszystkim szybkość, z jaką maszyna wirtualna platformy Azure może uzyskiwać dostęp do plików na woluminach StorSimple i jak szybko usługa Azure File Sync może przetwarzać pliki i foldery, które wymagają synchronizacji.
Z doświadczenia możemy założyć, że przepustowość - w związku z tym rzeczywisty rozmiar danych - odgrywa podrzędną rolę. Czas, który zajmie to lub dowolne kolejne rundy migracji, zależy głównie od liczby elementów, które mogą być przetwarzane na sekundę. Na przykład 1 TiB ze 100 000 plików i folderów najprawdopodobniej zakończy się wolniej niż 1 TiB z zaledwie 50 000 plików i folderów.
## <a name="phase-5-iterate-through-multiple-volume-clones"></a>Faza 5: Iteracji przez klony wielu objętości
:::row:::
:::column:::
![Obraz przedstawiający część wcześniejszego obrazu przeglądu, który pomaga skupić się na tej podsekcji artykułu.](media/storage-files-migration-storsimple-shared/storsimple-8000-migration-phase-5.png)
:::column-end:::
:::column:::
Jak wspomniano w poprzedniej fazie, synchronizacja początkowa może zająć dużo czasu. Użytkownicy i aplikacje nadal uzyskują dostęp do lokalnego urządzenia StorSimple 8100 lub 8600. Oznacza to, że zmiany są kumulujące się i każdego dnia większy delta między danymi na żywo i klon woluminu początkowego, jesteś obecnie migracji, formularze. W tej sekcji dowiesz się, jak zminimalizować przestoje przy użyciu wielu klonów woluminów i informując, kiedy synchronizacja jest wykonywana.
:::column-end:::
:::row-end:::
Niestety proces migracji nie jest natychmiastowy. Oznacza to, że wspomniana delta danych na żywo jest nieuniknioną konsekwencją. Dobrą wiadomością jest to, że można powtórzyć proces montażu nowych klonów objętości. Delta każdego klona woluminu będzie stopniowo mniejsza. Ostatecznie synchronizacja zakończy się w czasie, który można uznać za dopuszczalne dla użytkowników i aplikacji w trybie offline, aby wyciąć na lokalnym serwerze Windows.
Powtarzaj następujące czynności, aż synchronizacja zakończy się w czasie wystarczająco szybkim, aby czuć się komfortowo, przełącz użytkowników i aplikacje do trybu offline:
1. [Określenie, że synchronizacja jest zakończona dla danego klonu woluminu.](#determine-when-sync-is-done)
2. [Weź nowe klony woluminów i zamontuj go na urządzeniu wirtualnym 8020.](#the-next-volume-clones)
3. [Określ, kiedy synchronizacja jest wykonywana.](#determine-when-sync-is-done)
4. [Strategia cięcia](#cut-over-strategy)
### <a name="the-next-volume-clones"></a>Klony następnego woluminu
Omówiliśmy przy klon (s) tom wcześniej w tym artykule.
Ta faza ma dwie akcje:
1. [Klonowanie woluminu](../../storsimple/storsimple-8000-clone-volume-u2.md#create-a-clone-of-a-volume)
2. [Zamontuj klon woluminu (patrz wyżej)](#use-the-volume-clone)
### <a name="determine-when-sync-is-done"></a>Określanie, kiedy synchronizacja jest wykonywana
Po zakończeniu synchronizacji można zatrzymać pomiar czasu i określić, czy trzeba powtórzyć proces robienia klonowania woluminu i montowania go, czy synchronizacja czasu z ostatnim klonem woluminu była wystarczająco mała.
W celu określenia, że synchronizacja jest zakończona:
1. Otwórz Podgląd zdarzeń i przejdź do **aplikacji i usług**
2. Nawigowanie i otwieranie programu **Microsoft\FileSync\Agent\Telemetria**
3. Poszukaj najnowszego **zdarzenia 9102**, które odpowiada zakończonej sesji synchronizacji
4. Wybierz **pozycję Szczegóły** i upewnij się, że wartość **SyncDirection** to **Przekazywanie**
5. Sprawdź **HResult** i potwierdzić, że pokazuje **0**. Oznacza to, że sesja synchronizacji zakończyła się pomyślnie. Jeśli HResult jest wartością niezerową, wystąpił błąd podczas synchronizacji. Jeśli **PerItemErrorCount** jest większa niż 0, niektóre pliki lub foldery nie zostały poprawnie zsynchronizowane. Jest możliwe, aby HResult 0, ale PerItemErrorCount, który jest większy niż 0. W tym momencie nie musisz się martwić o PerItemErrorCount. Złapiemy te pliki później. Jeśli ta liczba błędów jest znacząca, tysiące elementów, skontaktuj się z obsługą klienta i poproś o połączenie z grupą produktów usługi Azure File Sync, aby uzyskać bezpośrednie wskazówki dotyczące najlepszych, następnych faz.
6. Sprawdź, aby wyświetlić wiele zdarzeń 9102 z HResult 0 z rzędu. Oznacza to, że synchronizacja została zakończona dla tego klonu woluminu.
### <a name="cut-over-strategy"></a>Strategia cięcia
1. Określ, czy synchronizacja z klonem woluminu jest teraz wystarczająco szybka. (Delta wystarczająco mała.)
2. Przełącz urządzenie StorSimple do trybu offline.
3. Ostateczna RoboCopy.
Zmierz czas i określ, czy synchronizacja z klonem woluminu może zakończyć się w ciągu przedziału czasu na tyle małego, że możesz sobie pozwolić na przestoje w systemie.
Nadszedł czas, aby wyłączyć dostęp użytkownika do urządzenia StorSimple. Koniec z zmianami. Rozpoczął się przestój.
Musisz pozostawić urządzenie w trybie online i podłączony, ale teraz musi zapobiec zmianom na nim.
W fazie 6 można dogonić wszelkie delta w danych na żywo od ostatniego klonu woluminu.
## <a name="phase-6-a-final-robocopy"></a>Faza 6: Ostateczna RoboCopy
W tym momencie istnieją dwie różnice między lokalnym systemem Windows Server a urządzeniem StorSimple 8100 lub 8600:
1. Mogą istnieć pliki, które nie zostały zsynchronizowane (zobacz **PerItemErrors** z dziennika zdarzeń powyżej)
2. Urządzenie StorSimple ma wypełnioną pamięć podręczną w porównaniu z systemem Windows Server tylko obszar nazw bez zawartości pliku przechowywane lokalnie w tej chwili.
![Obraz przedstawiający część wcześniejszego obrazu przeglądu, który pomaga skupić się na tej podsekcji artykułu.](media/storage-files-migration-storsimple-shared/storsimple-8000-migration-phase-6.png)
Możemy doprowadzić pamięć podręczną systemu Windows Server do stanu urządzenia i upewnić się, że żaden plik nie zostanie pozostawiony z ostatecznym RoboCopy.
> [!CAUTION]
> Konieczne jest, aby polecenie RoboCopy, które obserwujesz, było dokładnie takie, jak opisano poniżej. Chcemy tylko skopiować pliki, które są lokalne i pliki, które nie zostały przeniesione przez klon woluminu + sync podejście przed. Możemy rozwiązać problemy, dlaczego nie zostały zsynchronizowane później, po zakończeniu migracji. (Zobacz [Rozwiązywanie problemów z synchronizacją plików platformy Azure](storage-sync-files-troubleshoot.md#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing). Najprawdopodobniej nie można drukować znaków w nazwach plików, których nie przegapisz po ich usunięciu).
RoboCopy, polecenie:
```console
Robocopy /MT:32 /UNILOG:<file name> /TEE /B /MIR /COPYALL /DCOPY:DAT <SourcePath> <Dest.Path>
```
Tle:
:::row:::
:::column span="1":::
/MT
:::column-end:::
:::column span="1":::
Pozwala roboCopy do uruchamiania wielowątkowych. Wartość domyślna to 8, max to 128.
:::column-end:::
:::row-end:::
:::row:::
:::column span="1":::
/UNILOG:<file name>
:::column-end:::
:::column span="1":::
Dane wyjściowe do pliku LOG jako UNICODE (zastępuje istniejący dziennik).
:::column-end:::
:::row-end:::
:::row:::
:::column span="1":::
/TEE
:::column-end:::
:::column span="1":::
Wyjścia do okna konsoli. Używany w połączeniu z wyjściem do pliku dziennika.
:::column-end:::
:::row-end:::
:::row:::
:::column span="1":::
/B
:::column-end:::
:::column span="1":::
Uruchamia RoboCopy w tym samym trybie, którego użyłaby aplikacja do tworzenia kopii zapasowych. Umożliwia RoboCopy przenoszenie plików, do których bieżący użytkownik nie ma uprawnień.
:::column-end:::
:::row-end:::
:::row:::
:::column span="1":::
/MIR
:::column-end:::
:::column span="1":::
RoboCopy może uwzględniać tylko różnice między źródłem (urządzenie StorSimple) a obiektem docelowym (katalog windows server).
:::column-end:::
:::row-end:::
:::row:::
:::column span="1":::
/COPY:copyflag[s]
:::column-end:::
:::column span="1":::
wierność kopii pliku (domyślnie jest to /COPY:DAT), flagi kopiowania: D=Data, A=Attributes, T=Timestamps, S=Security=NTFS ACL, O=Informacje o właścicielu, U=aUditing info
:::column-end:::
:::row-end:::
:::row:::
:::column span="1":::
/ COPYALL
:::column-end:::
:::column span="1":::
KOPIUJ WSZYSTKIE informacje o pliku (odpowiednik /COPY:DATSOU)
:::column-end:::
:::row-end:::
:::row:::
:::column span="1":::
/DCOPY:copyflag[s]
:::column-end:::
:::column span="1":::
wierność kopii katalogów (domyślnie jest to /DCOPY:DA), flagi kopiowania: D=Data, A=Attributes, T=Sygnatury czasowe
:::column-end:::
:::row-end:::
Należy uruchomić to polecenie RoboCopy dla każdego z katalogów w systemie Windows Server jako obiektu docelowego skonfigurowanego z synchronizacją plików z plikiem platformy Azure.
Można uruchomić wiele z tych poleceń równolegle.
Po zakończeniu tego kroku RoboCopy można zezwolić użytkownikom i aplikacjom na dostęp do systemu Windows Server, tak jak wcześniej, gdy urządzenie StorSimple.
Skonsultuj się z plikami dziennika robocopy, aby sprawdzić, czy pliki zostały pozostawione. Jeśli problemy powinny istnieć, w większości przypadków można je rozwiązać po zakończeniu migracji, a użytkownicy i aplikacje zostały ponownie zagosione na serwerze Windows Server. Jeśli chcesz rozwiązać jakiekolwiek problemy, zrób to przed fazą 7.
Prawdopodobnie jest potrzebne do utworzenia udziałów SMB w systemie Windows Server, który miał na StorSimple danych przed. Możesz załadować ten krok z przodu i zrobić to wcześniej, aby nie stracić czasu tutaj, ale musisz upewnić się, że przed tym punktem nie nastąpią żadne zmiany w plikach na serwerze Windows.
Jeśli masz wdrożenie DFS-N, możesz skierować obszary nazw DFN na nowe lokalizacje folderów serwera. Jeśli nie masz wdrożenia DFS-N i masz fronted 8100 8600 urządzenie lokalnie z systemem Windows Server, można wziąć ten serwer z domeny i domeny dołączyć do nowego systemu Windows Server z AFS do domeny, nadać mu taką samą nazwę serwera jak stary serwer, a te same nazwy udziału, a następnie cut-over do nowego serwera pozostaje przejrzysty dla użytkowników , zasady grupy lub skrypty.
## <a name="phase-7-deprovision"></a>Faza 7: Deprovision
Podczas ostatniej fazy masz iterowane przez klony wielu woluminów, a ostatecznie były w stanie wyciąć dostęp użytkownika do nowego systemu Windows Server po przekręceniu urządzenia StorSimple w trybie offline.
Teraz można rozpocząć deprovision niepotrzebnych zasobów.
Przed rozpoczęciem jest najlepszym rozwiązaniem, aby obserwować nowe wdrożenie usługi Azure File Sync w produkcji, na chwilę. To daje opcje, aby rozwiązać wszelkie problemy, które mogą wystąpić.
Po spełnieniu i obserwowaniu wdrożenia AFS przez co najmniej kilka dni, można rozpocząć deprovision zasobów w tej kolejności:
1. Wyłącz maszynę wirtualną platformy Azure, która została użyta do przenoszenia danych z klonów woluminów do udziałów plików platformy Azure za pośrednictwem synchronizacji plików.
2. Przejdź do zasobu usługi synchronizacji magazynu na platformie Azure i wyrejestruj maszynę wirtualną platformy Azure. Spowoduje to usunięcie go ze wszystkich grup synchronizacji.
> [!WARNING]
> **UPEWNIJ SIĘ, ŻE wybierzesz odpowiednią maszynę.** Wyłączyłeś maszynę wirtualną w chmurze, co oznacza, że powinna ona być pokazywalna jako jedyny serwer w trybie offline na liście zarejestrowanych serwerów. Nie wolno wybierać lokalnego systemu Windows Server w tym kroku, w ten sposób spowoduje wyrejestrowania go.
3. Usuń maszynę wirtualną platformy Azure i jej zasoby.
4. Wyłącz wirtualne urządzenie StorSimple 8020.
5. Deprovision wszystkie zasoby StorSimple na platformie Azure.
6. Odłącz urządzenie fizyczne StorSimple od centrum danych.
Migracja została zakończona.
## <a name="next-steps"></a>Następne kroki
Zapoznaj się z usługą Azure File Sync. Zwłaszcza dzięki elastyczności zasad warstwowych chmury.
Jeśli w witrynie Azure portal lub z wcześniejszych zdarzeń zostanie wyświetlenie, że niektóre pliki nie są trwale synchronizowane, zapoznaj się z przewodnikiem rozwiązywania problemów, aby uzyskać instrukcje rozwiązywania tych problemów.
* [Omówienie usługi Azure File Sync: aka.ms/AFS](https://aka.ms/AFS)
* [Warstwy w chmurze](storage-sync-cloud-tiering.md)
* [Przewodnik po rozwiązywaniu problemów z synchronizacją plików platformy Azure](storage-sync-files-troubleshoot.md)
| 79.827214 | 877 | 0.798106 | pol_Latn | 0.999964 |
ce5e5ea7662a95443d0ee9da6d964c88c16354d5 | 3,843 | md | Markdown | _posts/2016-09-14-pdf_annotation.md | wcarvalho/wcarvalho.github.io | 20e43c3fd32366cd46e2eca7a0b8b82b7a4d9e3f | [
"MIT"
] | 1 | 2019-08-13T00:37:51.000Z | 2019-08-13T00:37:51.000Z | _posts/2016-09-14-pdf_annotation.md | wcarvalho/wcarvalho.github.io | 20e43c3fd32366cd46e2eca7a0b8b82b7a4d9e3f | [
"MIT"
] | null | null | null | _posts/2016-09-14-pdf_annotation.md | wcarvalho/wcarvalho.github.io | 20e43c3fd32366cd46e2eca7a0b8b82b7a4d9e3f | [
"MIT"
] | 2 | 2018-09-17T13:49:15.000Z | 2018-11-19T17:06:46.000Z | ---
title: "Highlights: A markdown pdf annotator"
comments: true
layout: post
category: misc
tags: [software]
---
It's getting close to the end of the day and I don't feel like doing work-work, so I've decided to do some pseudo-work and write this little blog post recommending a phenomenal pdf annotator I recently discovered: [Highlights](http://highlightsapp.net/).
<img class="regular materialboxed responsive-img" src="http://highlightsapp.net/img/highlightsapp_yosemite2.jpg">
## TL; DR
[Highlights](http://highlightsapp.net/) saves annotatations as editable markdown and let's you effortlessly export the markdown to evernote, which then makes your notes searchable on google (only to you).
## Full Story
I wanted a system where I could easily access the annotations I made on pdfs. I wanted them to be accessible across pdf readers so the annotations needed to be saved to the file and I wanted them to be easily found when I searched for related topics. [Highlights](http://highlightsapp.net/) combined with [Evernote](https://evernote.com/) managed to accomplish both rather easily and elegantly
I'll try to keep it to the "facts". To *highlight* the utility of [Highlights](http://highlightsapp.net/) (hehe), I will use my annotations on this pdf, [Tutorial on Variational Autoencoders](https://arxiv.org/pdf/1606.05908v2.pdf), as an example.
### Pros
1. The annotations are saved in markdown. The impact is two-fold. (1) They are easy to edit, (2) It is easy to export to many clients **including Evernote**.
2. Evernote has a cool feature that when you search things on google, you can concurrently perform a search on evernote (you'll need the [Evernote Web Clipper](https://evernote.com/webclipper/) installed). Below you can see an example where I searched for hidden variables and 2 notes with related text came up (one of which was my markdown notes from this example)
<img class="regular materialboxed responsive-img" src="{{ site.baseurl }}/files/highlights/evernote_google.png">
3. The annotation tools are powerful
* Aside from text, you can also "highlight" diagrams, adding them to your markdown
* You can set a specific underline color for references and your markdown will correclty link references
<table>
<tr>
<th>Regular View</th>
<th>Markdown View</th>
</tr>
<tr>
<td>
<img class="regular materialboxed responsive-img" src="{{ site.baseurl }}/files/highlights/view.png">
</td>
<td>
<img class="regular materialboxed responsive-img" src="{{ site.baseurl }}/files/highlights/markdown.png">
</td>
</tr>
</table>
4. Everything is saved in-file so you can view your pdf (and all annotations) in other readers. This was important to me. I use [Papers](http://papersapp.com/mac/) to manage my papers, and being able to browse my annotations on that platform (or any other) is really useful.
<img class="regular materialboxed responsive-img" src="{{ site.baseurl }}/files/highlights/papers.png">
5. It's easy to change the color/type of any annotation
<img class="regular materialboxed responsive-img" src="{{ site.baseurl }}/files/highlights/ease.png">
### Half-Pros/Half-cons:
1. It is supposed to support DOI lookup so that references are clickable and openable, but I have found this feature to not work.
2. If you use bookends or paper3 (I used papers), it supports opening the reference in your manager (but, again, this require DOI lookup to work)
### Cons:
1. It costs $30 but its made by a PhD student so I'm happy to support (for those that don't want to, it isn't too hard to find a copy online...)
---
Here are some examples of markup and pdf it generated from my annotations
* [markup]({{ site.baseurl }}/files/highlights/markup.txt)
* [pdf]({{ site.baseurl }}/files/highlights/pdf.pdf)
| 51.932432 | 393 | 0.736404 | eng_Latn | 0.994754 |
ce5e8008de02e2d57af7be743d98010543ce41e5 | 2,513 | md | Markdown | src/pages/success-stories/from-imperfect-credit-to-a-perfect-solution.md | elina-codes/arbutus | 8e27f26d933724262b1962957181bc8d04be5a78 | [
"RSA-MD"
] | null | null | null | src/pages/success-stories/from-imperfect-credit-to-a-perfect-solution.md | elina-codes/arbutus | 8e27f26d933724262b1962957181bc8d04be5a78 | [
"RSA-MD"
] | null | null | null | src/pages/success-stories/from-imperfect-credit-to-a-perfect-solution.md | elina-codes/arbutus | 8e27f26d933724262b1962957181bc8d04be5a78 | [
"RSA-MD"
] | null | null | null | ---
templateKey: success-story
title: From imperfect credit to a perfect solution
location: Alberta, Canada
tags:
- used truck leasing heavy equipment leasing equipment leasing
date: 2018-10-10T04:21:36.287Z
featuredpost: true
featuredimage: /images/uploads/arbutus_-big-a-peterbuilt_cropped.jpeg
---
### **Time crunch for owner operator trucker**
We received a call on a Friday morning from a young trucker named Austin who was in a jam: the deposit he had put down on a used Peterbilt would be forfeited at the end of the day if he couldn’t get an approval. And worse still, Austin would lose the long haul trucking job he had lined up.
### **Two strikes but not out**
Austin had a couple of strikes going against him:
* First was his young age, which usually translates to a lack of solid credit history, personal net worth and decades of experience that leasing companies like to see.
* Compounding his tight spot further was the fact that the truck he wanted lease financing for was for a 1999 Peterbilt coming from a private sale. Older vehicles and private sales are two deal characteristics equipment leasing companies will often say no to.
After another lease finance company who was seeking to provide him with financing failed to return his calls after three days, Austin approached us at the 11th hour. He was close to losing his deposit, the truck and the job he had lined up. Although we weren’t overly surprised his financing fell through given his situation, we were more than happy to see what we could do to help, as we pride ourselves in handling imperfect circumstances precisely like this one.
### **Overcoming imperfect credit situations**
Fortunately for us, Austin proved to be resourceful, ambitious and quick to get us what we needed: attributes we admire in our clients. We quickly built a profile for him, drawing from the various strengths that we saw and generating an approval later that day. We got in touch with the vendor to let him know Austin was approved and we planned on issuing payment early the following week. We sent out the lease papers before day’s end and worked with Austin over the weekend to complete the deal to get him the heavy equipment financing he needed. We paid the seller on the Monday and Austin started work in his truck the next day.
### **Results**
The seller was impressed enough to recommend us to other business owners. Austin described us as quick, fast and determined to find a solution that met all the needs of all parties. | 83.766667 | 638 | 0.789097 | eng_Latn | 0.99993 |
ce5eb3078e300abe6f49d70c851468a1865a0e0f | 231 | md | Markdown | archetypes/journal.md | huynle/hizzle-hugo | 0d436f4713cf8458720f578b79f391a30bdca8ca | [
"MIT"
] | null | null | null | archetypes/journal.md | huynle/hizzle-hugo | 0d436f4713cf8458720f578b79f391a30bdca8ca | [
"MIT"
] | null | null | null | archetypes/journal.md | huynle/hizzle-hugo | 0d436f4713cf8458720f578b79f391a30bdca8ca | [
"MIT"
] | null | null | null | ---
description: Daily Journal
title: "{{ replace .TranslationBaseName "-" " " | title }}"
date: {{ .Date }}
draft: true
tags:
- TBD
layout: daily-journal
project: journal
---
# Today's To-do
# Today's Ideas
# Today's Notes
| 12.157895 | 59 | 0.636364 | yue_Hant | 0.387849 |
ce5fe4f38ba767a3a4d2717339a7f63f7604bbbb | 419 | md | Markdown | _posts/2018-08-06-SOLID--Interface-segregation-principle.md | rocLv/roclv.github.io | cad53b976b16c4c6dac1a8e258ba27cc6ae0c27c | [
"MIT"
] | null | null | null | _posts/2018-08-06-SOLID--Interface-segregation-principle.md | rocLv/roclv.github.io | cad53b976b16c4c6dac1a8e258ba27cc6ae0c27c | [
"MIT"
] | null | null | null | _posts/2018-08-06-SOLID--Interface-segregation-principle.md | rocLv/roclv.github.io | cad53b976b16c4c6dac1a8e258ba27cc6ae0c27c | [
"MIT"
] | null | null | null | ---
layout: post
title: "SOLID: Interface segregation principle"
---
##### SOLID: Interface segregation principle
Clients should not be forced to depend on methods they do not use.
<!--more-->
Interface segregation principle
>
Clients should not be forced to depend on methods they do not use.
In other words
>
Many client-specific interfaces are better than one general-purpose interface.
So the interfaces we create should not contain methods we do not need.
Let's go straight to a few examples, because that is the best way to explain this principle.
##### Counter-example
Originally, there was a completely different counter-example here. Through some discussion on Reddit,
I was convinced it was not the best one, so I decided to rethink it and provide a better example.
The old example is [here](https://gist.github.com/marcinjak/1c138c9cd3ab23e90d2605fe13620e69)
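
Since the original code lives only in the gist linked above, here is a minimal, hypothetical sketch in Python (the names `Worker`, `Robot`, `Human` and so on are invented for illustration and are not taken from that gist) of the kind of violation the principle describes, followed by a segregated alternative:

```python
from abc import ABC, abstractmethod


# A "fat" interface: every implementer is forced to depend on methods it may not use.
class Worker(ABC):
    @abstractmethod
    def work(self) -> None: ...

    @abstractmethod
    def eat(self) -> None: ...


class RobotWorker(Worker):
    def work(self) -> None:
        print("Robot working")

    def eat(self) -> None:
        # Robots do not eat, but the fat interface forces an implementation anyway.
        raise NotImplementedError("Robots do not eat")


# Segregated interfaces: clients depend only on the methods they actually use.
class Workable(ABC):
    @abstractmethod
    def work(self) -> None: ...


class Eatable(ABC):
    @abstractmethod
    def eat(self) -> None: ...


class Robot(Workable):
    def work(self) -> None:
        print("Robot working")


class Human(Workable, Eatable):
    def work(self) -> None:
        print("Human working")

    def eat(self) -> None:
        print("Human eating")


if __name__ == "__main__":
    for worker in (Robot(), Human()):
        worker.work()
```

With the segregated version, code that only schedules work can accept any `Workable` and never needs to care whether the object can `eat` at all.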
| 13.09375 | 78 | 0.725537 | yue_Hant | 0.36178 |
ce6035682a0d430c4449c1582fc8f068d9a02b5e | 641 | md | Markdown | _posts/2017-07-17-Tuesday-11th-July-2017.md | Sheep1000/dcgs-ske.github.io | 3938af9698ba3309f65dd0585c1403a538aad170 | [
"MIT"
] | 2 | 2015-12-08T18:23:07.000Z | 2015-12-09T17:38:00.000Z | _posts/2017-07-17-Tuesday-11th-July-2017.md | Sheep1000/dcgs-ske.github.io | 3938af9698ba3309f65dd0585c1403a538aad170 | [
"MIT"
] | null | null | null | _posts/2017-07-17-Tuesday-11th-July-2017.md | Sheep1000/dcgs-ske.github.io | 3938af9698ba3309f65dd0585c1403a538aad170 | [
"MIT"
] | null | null | null | ---
title: Tuesday 11th July 2017
layout: post
author: nicholas.wiegandt
permalink: /tuesday-11th-july-2017/
source-id: 1ljvPoQbbGlUvDJfY2D3U3h5zD1pZ__h_HmYtPRNnkk4
published: true
---
I wasn't really here…
I'll be honest with you, I wasn't actually here this lesson because I was at a Drama rehearsal, which was fun because it’s basically just messing around backstage and occasionally going onto stage for a scene or two. Here’s what I did to catch up:
* 1 hour of codecademy
* No distractions
* Lots of work done
* Was good...
I have been enjoying IT. This will be my last blog post for about 7-8 weeks. This is Nick, over and out.
| 27.869565 | 246 | 0.764431 | eng_Latn | 0.999422 |
ce6129ac04132b85d88c0484630a501f42e53db7 | 1,539 | md | Markdown | README.md | SriramAtmakuri/Prediction-of-Cancer- | 98918082f316799b3728bf9fa7da87fbce28daad | [
"MIT"
] | null | null | null | README.md | SriramAtmakuri/Prediction-of-Cancer- | 98918082f316799b3728bf9fa7da87fbce28daad | [
"MIT"
] | null | null | null | README.md | SriramAtmakuri/Prediction-of-Cancer- | 98918082f316799b3728bf9fa7da87fbce28daad | [
"MIT"
] | null | null | null | # Prediction-of-Cancer-
This project is about predicting cancer using a dataset.
Generally, cancer has been described as a heterogeneous illness with a wide range of subgroups. The need to classify cancer patients into high- and low-risk groups has prompted numerous research teams to investigate the use of machine learning (ML) technologies. As a result, these methods have been used to model the progression and therapy of malignant diseases. Furthermore, the ability of machine learning technologies to find essential elements in complex data sets demonstrates their value.
K-Nearest Neighbors (KNN), Naive Bayesian Classification (NB), Linear Support Vector Machines (L-SVMs), Logistic Regression (LR), and Random Forest Classifiers (RF) are just a few of the techniques that have been widely used in cancer research to develop predictive models that support effective and accurate decision making.
Even though it is clear that the application of machine learning algorithms can improve our understanding of cancer progression, adequate validation is required. In this project, we give a critical perspective on cancer prediction based on gene data gathered from the literature and collected manually from hospitals, as well as a review of contemporary machine learning approaches used in cancer progression modeling.
The models provided here are based on a variety of supervised and unsupervised machine learning techniques, covering a wide range of input variables and data samples.
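
As a rough, hypothetical illustration of how classifiers like the ones listed above could be trained and compared, the sketch below uses scikit-learn with its built-in breast cancer dataset; the dataset, pipelines and hyperparameters are stand-ins, not this project's actual data or models.

```python
# Hypothetical comparison of several classifiers on a public breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Load data and hold out a stratified test set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The five families of models mentioned in this README.
models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "NB": GaussianNB(),
    "L-SVM": make_pipeline(StandardScaler(), LinearSVC(max_iter=10000)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
}

# Fit each model and report held-out accuracy.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```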
| 139.909091 | 508 | 0.825861 | eng_Latn | 0.999259 |
ce61e9025ea425f93dcc259688d5163f4fd05aca | 151 | md | Markdown | _posts/2019-03-10-queue.md | diqobao/diqobao.github.io | 7de952f35a5fc721006d98201fa535f60aff6642 | [
"MIT"
] | null | null | null | _posts/2019-03-10-queue.md | diqobao/diqobao.github.io | 7de952f35a5fc721006d98201fa535f60aff6642 | [
"MIT"
] | null | null | null | _posts/2019-03-10-queue.md | diqobao/diqobao.github.io | 7de952f35a5fc721006d98201fa535f60aff6642 | [
"MIT"
] | null | null | null | ---
layout: post
title: "distributed queue"
author: "admin"
categories: projects
tags: [projects]
image: cuba-1.jpg
desc: distributed message queue
--- | 16.777778 | 31 | 0.741722 | eng_Latn | 0.710999 |
ce622e75f87cc318ccde5e89295f53b3a9bdd21d | 510 | md | Markdown | README.md | indjev99/Microbit-Game-Suite | 9840126a50b3bd970635e262ddcbbf24e0c826b8 | [
"MIT"
] | null | null | null | README.md | indjev99/Microbit-Game-Suite | 9840126a50b3bd970635e262ddcbbf24e0c826b8 | [
"MIT"
] | null | null | null | README.md | indjev99/Microbit-Game-Suite | 9840126a50b3bd970635e262ddcbbf24e0c826b8 | [
"MIT"
] | null | null | null | # Micro:bit Game Suite
Game suite for the BBC Micro:bit.
Games included:
* Snake - Control the snake to eat food and avoid hitting yourself. A - turn counter-clockwise, B - turn clockwise.
* Dodge - Control your character on the bottom to avoid getting hit by the falling blocks. A - move left, B - move right.
Games in development:
* Pong
Planned games:
* Maze
* Space Invaders
Controls in menus:
* A - select/confirm
* B - next/back
- - - -
Makefile, startup.c and hardware.h provided by J. M. Spivey.
| 23.181818 | 121 | 0.721569 | eng_Latn | 0.983639 |
ce62e4631873892123932d8263e391a306b21b5f | 3,629 | md | Markdown | _posts/2021-10-18-donna moderna.md | pmazzocchi/pmazzocchi.github.io | 44a271a7b17c504b737da4ff7e7218736c13b1f0 | [
"Apache-2.0"
] | null | null | null | _posts/2021-10-18-donna moderna.md | pmazzocchi/pmazzocchi.github.io | 44a271a7b17c504b737da4ff7e7218736c13b1f0 | [
"Apache-2.0"
] | null | null | null | _posts/2021-10-18-donna moderna.md | pmazzocchi/pmazzocchi.github.io | 44a271a7b17c504b737da4ff7e7218736c13b1f0 | [
"Apache-2.0"
] | 2 | 2019-03-27T19:15:56.000Z | 2019-03-28T10:41:27.000Z | ---
layout: post
comments: false
title: "Bitcoin, come e quando investire"
subtitle: Il valore della criptovaluta è tornato a crescere, sfiorando i nuovi massimi. Cos'è e per cosa si può utilizzare"
author: "Staff"
image:
main: 2021-10-18-donna-moderna.jpg
thumb: 2021-10-18-donna-moderna-thumb.png
published: true
newsfeed: false
---
Eleonora Lorusso interviews [Ferdinando M. Ametrano](https://ametrano.net/) in a topical [article](https://www.donnamoderna.com/news/i-nostri-soldi/bitcoin-come-e-quando-investire) on [Donna Moderna News](https://www.donnamoderna.com/news).
>
«By now it is a reality, and even the world of finance has legitimized it.» Experts are convinced of this, faced with the new record rise of Bitcoin, the most famous of the cryptocurrencies, now in circulation for 13 years. In recent days the value of a single Bitcoin has once again exceeded 60,000 dollars, almost a new record after the 65,000 it came close to last April. It feels like a century has passed since 2011, when one "virtual" coin was worth barely 1 dollar. But what happened, what are the reasons behind this currency's growth and, above all, what is it for and how is it used?
>
### Why the boom
>
We have recently been witnessing a revaluation of Bitcoin, which is bringing this cryptocurrency back (once again) into the spotlight. It had done so in the past, in alternating phases, but now the time seems ripe to stop considering it merely an "experiment" for computer experts. «That's right, Bitcoin is now a reality,» comments Prof. Ferdinando Ametrano, lecturer in Bitcoin and Blockchain Technology at the University of Milano-Bicocca, CEO of CheckSig, which deals precisely with Bitcoin custody, and one of the leading experts in the field.
>
What has changed? «Over the last year Bitcoin has been legitimized in the United States and in the world of finance, so much so that the cryptocurrency has been listed on the Nasdaq, the largest exchange for this currency. It has become easier to invest in Bitcoin, both for large investors served by investment banks such as Goldman Sachs, JPMorgan and others, and for small investors, for example through Paypal,» explains Ametrano.
>
### Is Bitcoin safe?
>
For those who do not deal with finance every day it can seem a daunting subject, and above all the doubt remains that the cryptocurrency is not "safe". «In reality the cryptocurrency has held up even in the face of China's decision to ban it, showing it can withstand formidable shocks. Coinbase's listing at 65 billion dollars, a valuation higher than that of most European banks, makes it clear that we are not looking at a folkloric phenomenon but at an important and serious reality,» confirms Ametrano. But what is it for?
>
### What can you do with Bitcoin?
>
The two most frequent questions are: is it a legal currency? And what is it for? «There is no doubt that Bitcoin is legal. The fact that it can also be used for illicit activities, exactly like the internet or dollars, does not mean that it is illicit in itself,» explains the expert. «As for its use, it should be made clear that, although it can also be used as money, it is more correct to consider it similar to gold: it is a safe-haven asset to invest in over the medium and long term. It is a scarce asset because it cannot be duplicated at will, unlike currencies issued by a central bank, and for this reason it is considered "digital gold". So far it has shown that it appreciates over time, while the euro and the dollar lose purchasing power through programmed devaluation, decided by central banks.»
| 113.40625 | 785 | 0.79278 | ita_Latn | 0.999808 |
ce64d42aa53c7b8149753b951a4e2ab42f4f8bb0 | 55 | md | Markdown | README.md | guillermocorrea/rethinkdb-api | e2c58db1a339913b5e78a8fe72a3a92f54404ad7 | [
"MIT"
] | null | null | null | README.md | guillermocorrea/rethinkdb-api | e2c58db1a339913b5e78a8fe72a3a92f54404ad7 | [
"MIT"
] | null | null | null | README.md | guillermocorrea/rethinkdb-api | e2c58db1a339913b5e78a8fe72a3a92f54404ad7 | [
"MIT"
] | null | null | null | # rethinkdb-api
A Node.js Express API using RethinkDB.
| 18.333333 | 38 | 0.781818 | eng_Latn | 0.451613 |
ce6584d5085f08a7e09b40136deb05296ebf9e8e | 2,162 | md | Markdown | source/_posts/terrifying_truth_behind_tiny_black_dots_that_pop_up_on_your_ceiling_and_kitchen_tiles.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | null | null | null | source/_posts/terrifying_truth_behind_tiny_black_dots_that_pop_up_on_your_ceiling_and_kitchen_tiles.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | null | null | null | source/_posts/terrifying_truth_behind_tiny_black_dots_that_pop_up_on_your_ceiling_and_kitchen_tiles.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | 2 | 2021-09-18T12:06:26.000Z | 2021-11-14T15:17:34.000Z | ---
extends: _layouts.post
section: content
image: https://i.dailymail.co.uk/1s/2020/09/23/02/33505964-0-image-a-20_1600823060346.jpg
title: Terrifying truth behind tiny black dots that pop up on your ceiling and kitchen tiles
description: The tiny black dots that randomly appear on ceilings, walls and kitchen tiles are bad news for Australians enjoying spring.
date: 2020-09-23-03-08-18
categories: [latest, news]
featured: true
---
Tiny black dots commonly found on ceilings, walls and kitchen tiles have been confirmed to be spider droppings.
One baffled woman turned to a Facebook cleaning group for answers after seeing the dark specks in her home.
The most popular response from group members - that the spots were spider droppings - was the correct one, an expert from the University of Sydney revealed.
A baffled woman took to Facebook with an image of black dots in her home asking if anyone knew what they were
The majority of answers said the dots were spider poo, which is more common in warmer months - bad news for Australians enjoying spring.
Dieter Hochuli, an ecologist and professor at the University of Sydney, confirmed the spider theory was true.
'Spider and fly droppings look similar – and those small blobs are liquid nitrogen-rich poo,' he told news.com.au.
Spiders produce thick, liquid droppings which come in shades of white, gray, brown, or black.
Professor Hochuli said there was nothing to worry about as they were quite common at this time of year.
'There is no public health risk about them. We live surrounded by hundreds of spiders in our day-to-day lives.'
Dieter Hochuli, an ecologist and professor, said the droppings were not an issue
Social media users also chimed in with ways to remove them.
'It happens a lot this time of year, spiders pooing everywhere, Dettox spray is good for it,' one wrote.
Others recommended wiping the droppings as soon as they're spotted as that makes them easier to remove, and if left for too long could leave a yellow stain.
Even if not cleaned off immediately, the droppings can still be removed with water and a stiff brush, or bleach, depending on the surface.
| 52.731707 | 156 | 0.782146 | eng_Latn | 0.999763 |
ce6606c5f3a64d051db799ad26e608dc8601064e | 1,174 | md | Markdown | README.md | mapoio/yapi | 664aa5f0ec6528f6abdb3c97eebbd3b70a53df10 | [
"Apache-2.0"
] | null | null | null | README.md | mapoio/yapi | 664aa5f0ec6528f6abdb3c97eebbd3b70a53df10 | [
"Apache-2.0"
] | null | null | null | README.md | mapoio/yapi | 664aa5f0ec6528f6abdb3c97eebbd3b70a53df10 | [
"Apache-2.0"
] | null | null | null | # yapi
[![Build Status](https://cloud.drone.io/api/badges/v7lin/yapi/status.svg)](https://cloud.drone.io/v7lin/yapi)
[![Docker Pulls](https://img.shields.io/docker/pulls/v7lin/yapi.svg)](https://hub.docker.com/r/v7lin/yapi)
### Project source
[YMFE/yapi](https://github.com/YMFE/yapi)
### Usage example
````
# version
version: "3.7"
# services
services:
mongo:
container_name: mongo
image: mongo:2.6
restart: always
hostname: mongo
# ports:
# - 27017
volumes:
- ../dev-ops-repo/yapi/mongo:/data/db
environment:
- TZ=${TIME_ZONE:-Asia/Shanghai}
yapi:
container_name: yapi
image: v7lin/yapi:1.5.6
restart: always
hostname: yapi
ports:
- 8080:3000
volumes:
- ../dev-ops-repo/yapi/init:/yapi/init
# - ../dev-ops-repo/yapi/log:/yapi/log
environment:
- TZ=${TIME_ZONE:-Asia/Shanghai}
- YAPI_PORT=3000
      - YAPI_CLOSEREGISTER=true # disable registration
      - [email protected] # admin account
      - YAPI_ADMINPASSWORD=yapi # initial admin password
      - YAPI_DB_SERVERNAME=mongo # mongo database host
      - YAPI_DB_PORT=27017 # mongo database port
      - YAPI_DB_DATABASE=yapi # mongo database name
depends_on:
- mongo
````
| 22.150943 | 109 | 0.625213 | yue_Hant | 0.440232 |