Upload golden_last.csv
golden_last.csv +1361 -0
ADDED
@@ -0,0 +1,1361 @@
1 |
+
software,repo_name,readme_url,portal,stars,selection,categories,,date_collection,date_submission,content,plan,steps,seq_order,optional_steps,extra_info_optional
|
2 |
+
vcr-video-representation-for-contextual,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/oronnir/VCR/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,,,,,,
|
3 |
+
ensuring-trustworthy-and-ethical-behaviour-in,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/AAAI-DISIM-UnivAQ/DALI/master/README.md,machine_learning,15,latest,,,13/02/2024,12/2/24,"## Installation
|
4 |
+
|
5 |
+
**OS X & Linux:**
|
6 |
+
1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
|
7 |
+
2. Then, you can download DALI and test it by running an example DALI MAS:
|
8 |
+
```sh
|
9 |
+
git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git
|
10 |
+
cd DALI/Examples/advanced
|
11 |
+
bash startmas.sh
|
12 |
+
```
|
13 |
+
You will see different windows opening:
|
14 |
+
* Prolog LINDA server (active_server_wi.pl)
|
15 |
+
* Prolog FIPA client (active_user_wi.pl)
|
16 |
+
* 1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)
|
17 |
+
|
18 |
+
**Windows:**
|
19 |
+
1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
|
20 |
+
2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git.
|
21 |
+
3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by double-clicking the ""startmas.bat"" file (this will launch an example DALI MAS). \
|
22 |
+
\
|
23 |
+
You will see different windows opening:
|
24 |
+
* Prolog LINDA server (active_server_wi.pl)
|
25 |
+
* Prolog FIPA client (active_user_wi.pl)
|
26 |
+
* 1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)","binary, source","[Binary] 1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
|
27 |
+
[Source] 2. Then, you can download DALI and test it by running an example DALI MAS:
|
28 |
+
```sh
|
29 |
+
git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git
|
30 |
+
cd DALI/Examples/advanced
|
31 |
+
bash startmas.sh
|
32 |
+
```","1,2","**Windows:**
|
33 |
+
1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
|
34 |
+
2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git.
|
35 |
+
3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by double-clicking the ""startmas.bat"" file (this will launch an example DALI MAS). \","You will see different windows opening:
|
36 |
+
Prolog LINDA server (active_server_wi.pl)
|
37 |
+
Prolog FIPA client (active_user_wi.pl)
|
38 |
+
1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)"
|
39 |
+
synthesizing-sentiment-controlled-feedback,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/MIntelligence-Group/CMFeed/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,
|
40 |
+
only-the-curve-shape-matters-training,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/cfeng783/GTT/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,"## Getting Started
|
41 |
+
|
42 |
+
#### Install dependencies (with python 3.10)
|
43 |
+
|
44 |
+
```shell
|
45 |
+
pip install -r requirements.txt
|
46 |
+
```",source,1. Install dependencies with pip install -r requirements.txt,,,
|
47 |
+
from-uncertainty-to-precision-enhancing,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fer-agathe/calibration_binary_classifier/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,
|
48 |
+
stochastic-gradient-flow-dynamics-of-test,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/rodsveiga/sgf_dyn/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,
|
49 |
+
accuracy-of-textfooler-black-box-adversarial,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/zero-one-loss/wordcnn01/main/LICENSE*,machine_learning,0,,,,13/02/2024,12/2/24,,,,,,
|
50 |
+
differentially-private-decentralized-learning-1,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/totilas/DPrandomwalk/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,
|
51 |
+
aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/sanjay-810/AYDIV2/main/README.md,machine_learning,1,,,,13/02/2024,12/2/24,"### **Installation**
|
52 |
+
1. Prepare for the running environment.
|
53 |
+
|
54 |
+
You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet). Our experiments are based on the
|
55 |
+
docker provided by Voxel-R-CNN and we use NVIDIA Tesla V100 to train our Aydiv.
|
56 |
+
|
57 |
+
2. Prepare for the data.
|
58 |
+
|
59 |
+
Convert Argoverse 2 (or) waymo open Dataset into kitti format [`converter`](https://github.com/sanjay-810/AYDIV_ICRA/tree/main/data_converter/convert)
|
60 |
+
|
61 |
+
Please prepare dataset as [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet).
|
62 |
+
|
63 |
+
To generate depth_pseudo_rgbseguv_twise by yourself from depth_dense_twise, proceed as follows:
|
64 |
+
|
65 |
+
```
|
66 |
+
cd Aydiv
|
67 |
+
python depth_to_lidar.py
|
68 |
+
```
|
69 |
+
|
70 |
+
If you want to generate dense depth maps by yourself, it is recommended to use [`TWISE`](https://github.com/imransai/TWISE). The dense depth maps we provide are generated by TWISE. Anyway, you should have your dataset as follows:
|
71 |
+
|
72 |
+
```
|
73 |
+
Aydiv
|
74 |
+
___ data
|
75 |
+
_ ___ waymo_aydiv_seguv_twise
|
76 |
+
_ _ ___ ImageSets
|
77 |
+
_ _ ___ training
|
78 |
+
_ _ _ ___calib & velodyne & label_2 & image_2 & (optional: planes) & depth_dense_twise & depth_pseudo_rgbseguv_twise
|
79 |
+
_ _ ___ testing
|
80 |
+
_ _ _ ___calib & velodyne & image_2 & depth_dense_twise & depth_pseudo_rgbseguv_twise
|
81 |
+
___ pcdet
|
82 |
+
___ tools
|
83 |
+
```
|
84 |
+
Each pseudo point in depth_pseudo_rgbseguv_twise has 9 attributes (x, y, z, r, g, b, seg, u, v). It should be noted that we do not use the seg attribute, because the image segmentation results cannot bring improvement to Aydiv in our experiments. Argoverse 2 data should be in the same format.
|
85 |
+
|
86 |
+
3. Setup.
|
87 |
+
|
88 |
+
```
|
89 |
+
cd Aydiv
|
90 |
+
python setup.py develop
|
91 |
+
cd pcdet/ops/iou3d/cuda_op
|
92 |
+
python setup.py develop
|
93 |
+
cd ../../../..
|
94 |
+
```","source,docker","[source]: step1. Prepare for the running environment.
|
95 |
+
step2. prepare for the data:```
|
96 |
+
cd Aydiv
|
97 |
+
python depth_to_lidar.py
|
98 |
+
```
|
99 |
+
[docker]: step1. You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet)","1,2",,
|
100 |
+
cartesian-atomic-cluster-expansion-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/BingqingCheng/cace/main/README.md,machine_learning,4,latest,,,13/02/2024,12/2/24,"## Installation
|
101 |
+
|
102 |
+
Please refer to the `setup.py` file for installation instructions.",source,[source] step1. please refer to the `setup.py` file for installation instructions.,,,
|
103 |
+
teller-a-trustworthy-framework-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/less-and-less-bugs/Trust_TELLER/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,"## Getting Started
|
104 |
+
|
105 |
+
Step 1: Download the dataset folder from OneDrive via [data.zip](https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2). Unzip this folder into the project directory. You can find four original datasets, pre-processed datasets (i.e., val.jsonl, test.jsonl, train.jsonl in each dataset folder) and the files including questions and answers.
|
106 |
+
|
107 |
+
Step 2: Place your OpenAI key into the file named api_key.txt.
|
108 |
+
|
109 |
+
```
|
110 |
+
openai.api_key = """"
|
111 |
+
```",binary,"[binary] step1: Download the dataset folder from onedrive by https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2.
|
112 |
+
step2. Unzip this folder into the project directory.
|
113 |
+
step3. Place your OpenAI key into the file named api_key.txt.
|
114 |
+
```
|
115 |
+
openai.api_key = """"
|
116 |
+
```","1,2,3",,
|
117 |
+
continuous-time-radar-inertial-and-lidar,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/utiasASRL/steam_icp/master/README.md,computer_science,77,latest,robotics,,13/02/2024,9/2/24,"## Installation
|
118 |
+
|
119 |
+
Clone this repository and its submodules.
|
120 |
+
|
121 |
+
We use docker to install dependencies. The recommended way to build the docker image is
|
122 |
+
|
123 |
+
```bash
|
124 |
+
docker build -t steam_icp \
|
125 |
+
--build-arg USERID=$(id -u) \
|
126 |
+
--build-arg GROUPID=$(id -g) \
|
127 |
+
--build-arg USERNAME=$(whoami) \
|
128 |
+
--build-arg HOMEDIR=${HOME} .
|
129 |
+
```
|
130 |
+
|
131 |
+
When starting a container, remember to mount the code, dataset, and output directories to proper locations in the container.
|
132 |
+
An example command to start a docker container with the image is
|
133 |
+
|
134 |
+
```bash
|
135 |
+
docker run -it --name steam_icp \
|
136 |
+
--privileged \
|
137 |
+
--network=host \
|
138 |
+
-e DISPLAY=$DISPLAY \
|
139 |
+
-v /tmp/.X11-unix:/tmp/.X11-unix \
|
140 |
+
-v ${HOME}:${HOME}:rw \
|
141 |
+
steam_icp
|
142 |
+
```
|
143 |
+
|
144 |
+
(Inside Container) Go to the root directory of this repository and build STEAM-ICP
|
145 |
+
|
146 |
+
```bash
|
147 |
+
bash build.sh
|
148 |
+
```",source,"[source] step1. Clone this repository and its submodules.
|
149 |
+
step2. We use docker to install dependencies. The recommended way to build the docker image is
|
150 |
+
```bash
|
151 |
+
docker build -t steam_icp \
|
152 |
+
--build-arg USERID=$(id -u) \
|
153 |
+
--build-arg GROUPID=$(id -g) \
|
154 |
+
--build-arg USERNAME=$(whoami) \
|
155 |
+
--build-arg HOMEDIR=${HOME} .
|
156 |
+
```
|
157 |
+
step3. When starting a container, remember to mount the code, dataset, and output directories to proper locations in the container.
|
158 |
+
An example command to start a docker container with the image is
|
159 |
+
|
160 |
+
```bash
|
161 |
+
docker run -it --name steam_icp \
|
162 |
+
--privileged \
|
163 |
+
--network=host \
|
164 |
+
-e DISPLAY=$DISPLAY \
|
165 |
+
-v /tmp/.X11-unix:/tmp/.X11-unix \
|
166 |
+
-v ${HOME}:${HOME}:rw \
|
167 |
+
steam_icp
|
168 |
+
|
169 |
+
step4.(Inside Container) Go to the root directory of this repository and build STEAM-ICP
|
170 |
+
|
171 |
+
```bash
|
172 |
+
bash build.sh
|
173 |
+
```","1,2,3,4",,
|
174 |
+
towards-a-thermodynamical-deep-learning,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fedezocco/ThermoVisMedRob/main/README.md,computer_science,0,latest,robotics,,13/02/2024,8/2/24,,,,,,
|
175 |
+
robust-parameter-fitting-to-realistic-network,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/PFischbeck/parameter-fitting-experiments/main/Readme.md,computer_science,0,latest,Social and Information Networks Data Structures and Algorithms,,13/02/2024,8/2/24,"# Installation
|
176 |
+
|
177 |
+
- Make sure you have Python, Pip and R installed.
|
178 |
+
- Checkout this repository
|
179 |
+
- Install the python dependencies with
|
180 |
+
|
181 |
+
```
|
182 |
+
pip3 install -r requirements.txt
|
183 |
+
```
|
184 |
+
|
185 |
+
- Install the `pygirgs` package at https://github.com/PFischbeck/pygirgs
|
186 |
+
|
187 |
+
- Install the R dependencies (used for plots) with
|
188 |
+
|
189 |
+
```
|
190 |
+
R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")'
|
191 |
+
```
|
192 |
+
|
193 |
+
- Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
|
194 |
+
- Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",source,"[source] step1. Make sure you have Python, Pip and R installed.
|
195 |
+
step2. Checkout this repository
|
196 |
+
step3. Install the python dependencies with
|
197 |
+
```
|
198 |
+
pip3 install -r requirements.txt
|
199 |
+
```
|
200 |
+
step4. Install the `pygirgs` package at https://github.com/PFischbeck/pygirgs
|
201 |
+
step5. Install the R dependencies (used for plots) with
|
202 |
+
```
|
203 |
+
R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")'
|
204 |
+
```
|
205 |
+
step6. Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
|
206 |
+
step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.","1,2,3,4,5,6,7","step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",
|
207 |
+
get-tok-a-genai-enriched-multimodal-tiktok,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/gabbypinto/GET-Tok-Peru/main/README.md,computer_science,1,latest,Social and Information Networks Computers and Society Human-Computer Interaction ,,13/02/2024,8/2/24,"## Installation
|
208 |
+
pip install -r requirements.txt
|
209 |
+
|
210 |
+
*Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up, please don't hesitate to email me at: [email protected]*",packagemanager,"step1.
|
211 |
+
pip install -r requirements.txt ",,,*Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up, please don't hesitate to email me at: [email protected]*
|
212 |
+
a-longitudinal-study-of-italian-and-french,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/orsoFra/LS_FRIT_UKR/main/README.md,computer_science,0,latest,Social and Information Networks Computers and Society,,13/02/2024,7/2/24,,,,,,
|
213 |
+
geometric-slosh-free-tracking-for-robotic,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/jonarriza96/gsft/main/README.md,computer_science,1,latest,robotics,,13/02/2024,7/2/24,"## Installation
|
214 |
+
|
215 |
+
### Dependencies
|
216 |
+
|
217 |
+
Initialize git submodules with
|
218 |
+
|
219 |
+
```
|
220 |
+
git submodule init
|
221 |
+
git submodule update
|
222 |
+
```
|
223 |
+
|
224 |
+
### Python environment
|
225 |
+
|
226 |
+
Install the specific versions of every package from `requirements.txt` in a new conda environment:
|
227 |
+
|
228 |
+
```
|
229 |
+
conda create --name gsft python=3.9
|
230 |
+
conda activate gsft
|
231 |
+
pip install -r requirements.txt
|
232 |
+
```
|
233 |
+
|
234 |
+
To ensure that Python paths are properly defined, update the `~/.bashrc` by adding the following lines
|
235 |
+
|
236 |
+
```
|
237 |
+
export GSFT_PATH=/path_to_gsfc
|
238 |
+
export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
|
239 |
+
```",source,"[source] step1. Check dependencies
|
240 |
+
step2. Initialize git submodules with
|
241 |
+
```
|
242 |
+
git submodule init
|
243 |
+
git submodule update
|
244 |
+
```
|
245 |
+
step3. Install the specific versions of every package from `requirements.txt` in a new conda environment:
|
246 |
+
```
|
247 |
+
conda create --name gsft python=3.9
|
248 |
+
conda activate gsft
|
249 |
+
pip install -r requirements.txt
|
250 |
+
```
|
251 |
+
step4. Define environment variables so that Python paths are properly set: update the `~/.bashrc` by adding the following lines
|
252 |
+
```
|
253 |
+
export GSFT_PATH=/path_to_gsfc
|
254 |
+
export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
|
255 |
+
```","1,2,3,4",,
|
256 |
+
real-time-line-based-room-segmentation-and,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/EricssonResearch/Line-Based-Room-Segmentation-and-EDF/release/README.md,computer_science,0,latest,robotics,,13/02/2024,7/2/24,"## Installation
|
257 |
+
The project can be installed by running the following command in your terminal:
|
258 |
+
```bash
|
259 |
+
pip install -r requirements.txt
|
260 |
+
```",source,"[source] step1. Run the command in your terminal:
|
261 |
+
```
|
262 |
+
pip install -r requirements.txt
|
263 |
+
```",1,,
|
264 |
+
viga,https://bio.tools/,https://raw.githubusercontent.com/viralInformatics/VIGA/master/README.md,,7,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,last week,"## Installation
|
265 |
+
|
266 |
+
### Step1: Download VIGA
|
267 |
+
|
268 |
+
Download VIGA with Git from GitHub
|
269 |
+
|
270 |
+
```
|
271 |
+
git clone https://github.com/viralInformatics/VIGA.git
|
272 |
+
```
|
273 |
+
|
274 |
+
or Download ZIP to local
|
275 |
+
|
276 |
+
### Step 2: Download Database
|
277 |
+
|
278 |
+
```
|
279 |
+
1. download taxdmp.zip [Index of /pub/taxonomy (nih.gov)](https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/) and unzip taxdmp.zip and put it in ./db/
|
280 |
+
|
281 |
+
2. download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/
|
282 |
+
|
283 |
+
3. download ""RefSeqVirusProtein"" file from
|
284 |
+
wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz
|
285 |
+
gzip -d viral.1.protein.faa.gz
|
286 |
+
mv viral.1.protein.faa RefSeqVirusProtein
|
287 |
+
|
288 |
+
4. download ""nr"" file from
|
289 |
+
wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz
|
290 |
+
or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./
|
291 |
+
gzip -d nr.gz
|
292 |
+
|
293 |
+
5. Use Diamond v2.0.11.149 to create two separate databases as the indexing libraries in the current version are incompatible with each other.
|
294 |
+
|
295 |
+
6. In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line:
|
296 |
+
diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
|
297 |
+
diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
|
298 |
+
|
299 |
+
```
|
300 |
+
|
301 |
+
### Step 3: Installation of dependent software
|
302 |
+
|
303 |
+
#### Installing Some Software Using Conda
|
304 |
+
|
305 |
+
```
|
306 |
+
conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2
|
307 |
+
```
|
308 |
+
|
309 |
+
#### Manual Installation of MetaCompass
|
310 |
+
|
311 |
+
https://github.com/marbl/MetaCompass
|
312 |
+
|
313 |
+
### Step 4: Python Dependencies
|
314 |
+
|
315 |
+
Based on python 3.6.8
|
316 |
+
|
317 |
+
```
|
318 |
+
pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
|
319 |
+
```
|
320 |
+
",source,"[source] step1. Download VIGA with Git from GitHub:
|
321 |
+
```
|
322 |
+
git clone https://github.com/viralInformatics/VIGA.git
|
323 |
+
```
|
324 |
+
or Download ZIP to local
|
325 |
+
step2.download Database:
|
326 |
+
step2.1.download taxdmp.zip: https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/ and unzip taxdmp.zip and put it in ./db/
|
327 |
+
step2.2.download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/
|
328 |
+
step2.3.download ""RefSeqVirusProtein"" file from
|
329 |
+
wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz
|
330 |
+
gzip -d viral.1.protein.faa.gz
|
331 |
+
mv viral.1.protein.faa RefSeqVirusProtein
|
332 |
+
step2.4. download ""nr"" file from
|
333 |
+
wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz
|
334 |
+
or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./
|
335 |
+
gzip -d nr.gz
|
336 |
+
step2.5.use Diamond v2.0.11.149 to create two separate databases as the indexing libraries in the current version are incompatible with each other.
|
337 |
+
step2.6.In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line:
|
338 |
+
diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
|
339 |
+
diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
|
340 |
+
```
|
341 |
+
step3. installation of dependent software
|
342 |
+
step3.1. installing Some Software Using Conda
|
343 |
+
```
|
344 |
+
conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2
|
345 |
+
```
|
346 |
+
step3.2. manual Installation of MetaCompass
|
347 |
+
https://github.com/marbl/MetaCompass
|
348 |
+
step4: install Python Dependencies
|
349 |
+
step4.1. based on python 3.6.8
|
350 |
+
```
|
351 |
+
pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
|
352 |
+
```","1,2,3,4,5,6",,
|
353 |
+
lncrtpred,https://bio.tools/,https://raw.githubusercontent.com/zglabDIB/LncRTPred/main/README.md,,,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,,,,,,
|
354 |
+
nrn-ez,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,Script,13/02/2024,last week,"**INSTALLATION FOR VERSION 1.1.6**
|
355 |
+
|
356 |
+
NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
|
357 |
+
|
358 |
+
• Python 3.6.9 and higher (currently up to 3.10)
|
359 |
+
|
360 |
+
• PyQt 5.10.1
|
361 |
+
|
362 |
+
• PyQtGraph 0.11.0
|
363 |
+
|
364 |
+
Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
|
365 |
+
|
366 |
+
Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
|
367 |
+
|
368 |
+
Installation instructions for Windows: download the Win zip file and run the installation wizard.",binary,"[binary] step1. install requirements:
|
369 |
+
Python 3.6.9 and higher (currently up to 3.10)
|
370 |
+
PyQt 5.10.1
|
371 |
+
PyQtGraph 0.11.0
|
372 |
+
step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. ",,"2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
|
373 |
+
2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
|
374 |
+
2. for Windows: download the Win zip file and run the installation wizard.",
|
375 |
+
causnet,https://bio.tools/,https://raw.githubusercontent.com/nand1155/CausNet/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,Library,13/02/2024,two years,"## Installation
|
376 |
+
|
377 |
+
You can install the development version from GitHub with:
|
378 |
+
|
379 |
+
``` r
|
380 |
+
require(""devtools"")
|
381 |
+
install_github(""https://github.com/nand1155/CausNet"")
|
382 |
+
```",source,"[source]: step1.install the development version from GitHub with:
|
383 |
+
``` r
|
384 |
+
require(""devtools"")
|
385 |
+
install_github(""https://github.com/nand1155/CausNet"")
|
386 |
+
```",,,
|
387 |
+
viralcc,https://bio.tools/,https://raw.githubusercontent.com/dyxstat/Reproduce_ViralCC/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,"""# Instruction of reproducing results in ViralCC paper
|
388 |
+
We take the cow fecal datasets for example. The other two datasets were processed following the same procedure.
|
389 |
+
|
390 |
+
Scripts to process the intermediate data and plot figures are available in the folder [Scripts](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Scripts).
|
391 |
+
|
392 |
+
Source data of Figure 2 and 3 in the main text and Figure S1 in the supplementary materials are provided in the folder [Source Data](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Source%20Data).
|
393 |
+
|
394 |
+
**Version of softwares exploited in the analyses**
|
395 |
+
```
|
396 |
+
fastq_dump command from Sratoolkit: v2.10.8
|
397 |
+
|
398 |
+
bbduk.sh and clumpify.sh command from BBTools suite: v37.25
|
399 |
+
|
400 |
+
megahit command from MEGAHIT: v1.2.9
|
401 |
+
|
402 |
+
bwa command from BWA MEM: v0.7.17
|
403 |
+
|
404 |
+
samtools command from Samtools: v1.15.1
|
405 |
+
|
406 |
+
wrapper_phage_contigs_sorter_iPlant.pl command from VirSorter: v1.0.6
|
407 |
+
|
408 |
+
checkv command from CheckV: 0.7.0
|
409 |
+
```
|
410 |
+
|
411 |
+
**Step 1 Download and preprocess the raw data**
|
412 |
+
|
413 |
+
Note: NCBI may update its links for downloading the database. Please check the latest link at [NCBI](https://www.ncbi.nlm.nih.gov/) if you meet the download error.
|
414 |
+
```
|
415 |
+
wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2282092/ERR2282092.1
|
416 |
+
wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530126/ERR2530126.1
|
417 |
+
wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530127/ERR2530127.1
|
418 |
+
|
419 |
+
fastq-dump --split-files --gzip ERR2282092.1
|
420 |
+
fastq-dump --split-files --gzip ERR2530126.1
|
421 |
+
fastq-dump --split-files --gzip ERR2530127.1
|
422 |
+
|
423 |
+
bbduk.sh in1=ERR2282092.1_1.fastq.gz in2=ERR2282092.1_2.fastq.gz out1=COWSG1_AQ.fastq.gz out2=COWSG2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
|
424 |
+
bbduk.sh in1=ERR2530126.1_1.fastq.gz in2=ERR2530126.1_2.fastq.gz out1=S3HIC1_AQ.fastq.gz out2=S3HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
|
425 |
+
bbduk.sh in1=ERR2530127.1_1.fastq.gz in2=ERR2530127.1_2.fastq.gz out1=M1HIC1_AQ.fastq.gz out2=M1HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
|
426 |
+
|
427 |
+
bbduk.sh in1=S3HIC1_AQ.fastq.gz in2=S3HIC2_AQ.fastq.gz out1=S3HIC1_CL.fastq.gz out2=S3HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
|
428 |
+
bbduk.sh in1=M1HIC1_AQ.fastq.gz in2=M1HIC2_AQ.fastq.gz out1=M1HIC1_CL.fastq.gz out2=M1HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
|
429 |
+
bbduk.sh in1=COWSG1_AQ.fastq.gz in2=COWSG2_AQ.fastq.gz out1=COWSG1_CL.fastq.gz out2=COWSG2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
|
430 |
+
|
431 |
+
bbduk.sh in1=S3HIC1_CL.fastq.gz in2=S3HIC2_CL.fastq.gz out1=S3HIC1_trim.fastq.gz out2=S3HIC2_trim.fastq.gz ftl=10
|
432 |
+
bbduk.sh in1=M1HIC1_CL.fastq.gz in2=M1HIC2_CL.fastq.gz out1=M1HIC1_trim.fastq.gz out2=M1HIC2_trim.fastq.gz ftl=10
|
433 |
+
|
434 |
+
clumpify.sh in1=S3HIC1_trim.fastq.gz in2=S3HIC2_trim.fastq.gz out1=S3HIC1_dedup.fastq.gz out2=S3HIC2_dedup.fastq.gz dedupe
|
435 |
+
clumpify.sh in1=M1HIC1_trim.fastq.gz in2=M1HIC2_trim.fastq.gz out1=M1HIC1_dedup.fastq.gz out2=M1HIC2_dedup.fastq.gz dedupe
|
436 |
+
cat S3HIC1_dedup.fastq.gz M1HIC1_dedup.fastq.gz > HIC1.fastq.gz
|
437 |
+
cat S3HIC2_dedup.fastq.gz M1HIC2_dedup.fastq.gz > HIC2.fastq.gz
|
438 |
+
```
|
439 |
+
|
440 |
+
**Step 2: Assemble contigs and align processed Hi-C reads to contigs**
|
441 |
+
```
|
442 |
+
megahit -1 COWSG1_CL.fastq.gz -2 COWSG2_CL.fastq.gz -o COW_ASSEMBLY --min-contig-len 1000 --k-min 21 --k-max 141 --k-step 12 --merge-level 20,0.95
|
443 |
+
|
444 |
+
bwa index final.contigs.fa
|
445 |
+
bwa mem -5SP final.contigs.fa HIC1.fastq.gz HIC2.fastq.gz > COW_MAP.sam
|
446 |
+
samtools view -F 0x904 -bS COW_MAP.sam > COW_MAP_UNSORTED.bam
|
447 |
+
samtools sort -n COW_MAP_UNSORTED.bam -o COW_MAP_SORTED.bam
|
448 |
+
```
|
449 |
+
|
450 |
+
**Step3: Identify viral contigs from assembled contigs**
|
451 |
+
```
|
452 |
+
perl removesmalls.pl 3000 final.contigs.fa > cow_3000.fa
|
453 |
+
wrapper_phage_contigs_sorter_iPlant.pl -f cow_3000.fa --db 1 --wdir output_directory --ncpu 16 --data-dir /panfs/qcb-panasas/yuxuandu/virsorter-data
|
454 |
+
Rscript find_viral_contig.R
|
455 |
+
```
|
456 |
+
|
457 |
+
**Step4: Run ViralCC**
|
458 |
+
```
|
459 |
+
python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt out_cow
|
460 |
+
```
|
461 |
+
|
462 |
+
**Step5: Evaluation draft viral genomes using CheckV**
|
463 |
+
```
|
464 |
+
python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
|
465 |
+
checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
|
466 |
+
```""",source,"[source] step1. download and preprocess the raw data
|
467 |
+
Note: NCBI may update its links for downloading the database. Please check the latest link at [NCBI](https://www.ncbi.nlm.nih.gov/) if you meet the download error.
|
468 |
+
```
|
469 |
+
wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2282092/ERR2282092.1
|
470 |
+
wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530126/ERR2530126.1
|
471 |
+
wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530127/ERR2530127.1
|
472 |
+
|
473 |
+
fastq-dump --split-files --gzip ERR2282092.1
|
474 |
+
fastq-dump --split-files --gzip ERR2530126.1
|
475 |
+
fastq-dump --split-files --gzip ERR2530127.1
|
476 |
+
|
477 |
+
bbduk.sh in1=ERR2282092.1_1.fastq.gz in2=ERR2282092.1_2.fastq.gz out1=COWSG1_AQ.fastq.gz out2=COWSG2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
|
478 |
+
bbduk.sh in1=ERR2530126.1_1.fastq.gz in2=ERR2530126.1_2.fastq.gz out1=S3HIC1_AQ.fastq.gz out2=S3HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
|
479 |
+
bbduk.sh in1=ERR2530127.1_1.fastq.gz in2=ERR2530127.1_2.fastq.gz out1=M1HIC1_AQ.fastq.gz out2=M1HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
|
480 |
+
|
481 |
+
bbduk.sh in1=S3HIC1_AQ.fastq.gz in2=S3HIC2_AQ.fastq.gz out1=S3HIC1_CL.fastq.gz out2=S3HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
|
482 |
+
bbduk.sh in1=M1HIC1_AQ.fastq.gz in2=M1HIC2_AQ.fastq.gz out1=M1HIC1_CL.fastq.gz out2=M1HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
|
483 |
+
bbduk.sh in1=COWSG1_AQ.fastq.gz in2=COWSG2_AQ.fastq.gz out1=COWSG1_CL.fastq.gz out2=COWSG2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
|
484 |
+
|
485 |
+
bbduk.sh in1=S3HIC1_CL.fastq.gz in2=S3HIC2_CL.fastq.gz out1=S3HIC1_trim.fastq.gz out2=S3HIC2_trim.fastq.gz ftl=10
|
486 |
+
bbduk.sh in1=M1HIC1_CL.fastq.gz in2=M1HIC2_CL.fastq.gz out1=M1HIC1_trim.fastq.gz out2=M1HIC2_trim.fastq.gz ftl=10
|
487 |
+
|
488 |
+
clumpify.sh in1=S3HIC1_trim.fastq.gz in2=S3HIC2_trim.fastq.gz out1=S3HIC1_dedup.fastq.gz out2=S3HIC2_dedup.fastq.gz dedupe
|
489 |
+
clumpify.sh in1=M1HIC1_trim.fastq.gz in2=M1HIC2_trim.fastq.gz out1=M1HIC1_dedup.fastq.gz out2=M1HIC2_dedup.fastq.gz dedupe
|
490 |
+
cat S3HIC1_dedup.fastq.gz M1HIC1_dedup.fastq.gz > HIC1.fastq.gz
|
491 |
+
cat S3HIC2_dedup.fastq.gz M1HIC2_dedup.fastq.gz > HIC2.fastq.gz
|
492 |
+
```
|
493 |
+
step2. assemble contigs and align processed Hi-C reads to contigs
|
494 |
+
```
|
495 |
+
megahit -1 COWSG1_CL.fastq.gz -2 COWSG2_CL.fastq.gz -o COW_ASSEMBLY --min-contig-len 1000 --k-min 21 --k-max 141 --k-step 12 --merge-level 20,0.95
|
496 |
+
|
497 |
+
bwa index final.contigs.fa
|
498 |
+
bwa mem -5SP final.contigs.fa HIC1.fastq.gz HIC2.fastq.gz > COW_MAP.sam
|
499 |
+
samtools view -F 0x904 -bS COW_MAP.sam > COW_MAP_UNSORTED.bam
|
500 |
+
samtools sort -n COW_MAP_UNSORTED.bam -o COW_MAP_SORTED.bam
|
501 |
+
```
|
502 |
+
step3. identify viral contigs from assembled contigs
|
503 |
+
```
|
504 |
+
perl removesmalls.pl 3000 final.contigs.fa > cow_3000.fa
|
505 |
+
wrapper_phage_contigs_sorter_iPlant.pl -f cow_3000.fa --db 1 --wdir output_directory --ncpu 16 --data-dir /panfs/qcb-panasas/yuxuandu/virsorter-data
|
506 |
+
Rscript find_viral_contig.R
|
507 |
+
```
|
508 |
+
step4. run ViralCC
|
509 |
+
```
|
510 |
+
python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt out_cow
|
511 |
+
```
|
512 |
+
step5. evaluate draft viral genomes using CheckV
|
513 |
+
```
|
514 |
+
python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
|
515 |
+
checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
|
516 |
+
```",,,
|
517 |
+
DRaW,https://bio.tools/,https://raw.githubusercontent.com/BioinformaticsIASBS/DRaW/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,,,,"# Running DRaW on COVID-19 datasets
|
518 |
+
The DRaW has been applied on three COVID-19 datasets, DS1, DS2, and DS3. There are three subdirectories, 'DS1_repur', 'DS2_repur', and 'DS3_repur', in the 'Drug-Repurposing' directory. Each subdirectory has been assigned to one of the mentioned datasets. We put the DRaW implementation file for each dataset in each subdirectory separately. This is to keep the corresponding hyperparameters of each dataset.
|
519 |
+
We use Adam as the optimizer with a learning rate equal to 0.001, beta1 = 0.9, beta2 = 0.999, and epsilon = 1e-7. The dropout rate is set to 0.5. The batch size is chosen by the number of samples per dataset. This hyperparameter for DS1 is equal to 8, and those for DS2 and DS3 are set to 32.
|
520 |
+
To run the model, it is enough to execute the ""Drug-Repurposing.py"" script in the command line. After that, execute ""score.py"". The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average of each drug's ranking. The lower, the better. For example, to run the DRaW on DS1:
|
521 |
+
```bash
|
522 |
+
cd Drug-Repurposing\DS1_repur
|
523 |
+
python Drug-Repurposing.py
|
524 |
+
python score.py
|
525 |
+
```
|
526 |
+
Same goes for other datasets. Just change the directory path.
|
527 |
+
# Performance analysis
|
528 |
+
In order to analyze the performance, there is one extra directory in the root, 'Performance_analysis'. By running the following command, the model is trained on a given dataset and returns its performance metrics, AUC-ROC, AUPR, F1 score, etc.
|
529 |
+
The input parameter 'dataset_name' is one of the following five dataset names. The first one is COVID-19 DS3 and the other four are golden benchmarks.
|
530 |
+
'DS3','ic','nr','gpcr','e'
|
531 |
+
|
532 |
+
```bash
|
533 |
+
cd Performance_analysis
|
534 |
+
python main.py dataset_name
|
535 |
+
```",source,"[source]step1.execute ""Drug-Repurposing.py"" script in the command line. step2. after that, execute ""score.py"". The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average of ach drug ranking. The lower, the better. For example, to run the DRaW on DS1:
|
536 |
+
```bash
|
537 |
+
cd Drug-Repurposing\DS1_repur
|
538 |
+
python Drug-Repurposing.py
|
539 |
+
python score.py
|
540 |
+
```",,,
|
541 |
+
NRN-EZ,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,,,6 months,"**INSTALLATION FOR VERSION 1.1.6**
|
542 |
+
|
543 |
+
NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
|
544 |
+
|
545 |
+
• Python 3.6.9 and higher (currently up to 3.10)
|
546 |
+
|
547 |
+
• PyQt 5.10.1
|
548 |
+
|
549 |
+
• PyQtGraph 0.11.0
|
550 |
+
|
551 |
+
Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
|
552 |
+
|
553 |
+
Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
|
554 |
+
|
555 |
+
Installation instructions for Windows: download the Win zip file and run the installation wizard.",source,"[source] step1. install the requirements:Python 3.6.9 and higher (currently up to 3.10), PyQt 5.10.1, PyQtGraph 0.11.0
|
556 |
+
step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
|
557 |
+
step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
|
558 |
+
step2. for Windows: download the Win zip file and run the installation wizard.",,"step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
|
559 |
+
step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
|
560 |
+
step2. for Windows: download the Win zip file and run the installation wizard.",
|
561 |
+
guiding-instruction-based-image-editing-via,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/apple/ml-mgie/main/README.md,computer_science,3089,top,,,13/02/2024,29/09/2023,"## Requirements
|
562 |
+
```
|
563 |
+
conda create -n mgie python=3.10 -y
|
564 |
+
conda activate mgie
|
565 |
+
conda update -n base -c defaults conda setuptools -y
|
566 |
+
conda install -c conda-forge git git-lfs ffmpeg vim htop ninja gpustat -y
|
567 |
+
conda clean -a -y
|
568 |
+
|
569 |
+
pip install -U pip cmake cython==0.29.36 pydantic==1.10 numpy
|
570 |
+
pip install -U gdown pydrive2 wget jupyter jupyterlab jupyterthemes ipython
|
571 |
+
pip install -U sentencepiece transformers diffusers tokenizers datasets gradio==3.37 accelerate evaluate git+https://github.com/openai/CLIP.git
|
572 |
+
pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
|
573 |
+
pip install -U deepspeed
|
574 |
+
|
575 |
+
# git clone this repo
|
576 |
+
cd ml-mgie
|
577 |
+
git submodule update --init --recursive
|
578 |
+
cd LLaVA
|
579 |
+
pip install -e .
|
580 |
+
pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
|
581 |
+
pip install -U ninja flash-attn==1.0.2
|
582 |
+
pip install -U pydrive2 gdown wget
|
583 |
+
|
584 |
+
cd ..
|
585 |
+
cp mgie_llava.py LLaVA/llava/model/llava.py
|
586 |
+
cp mgie_train.py LLaVA/llava/train/train.py
|
587 |
+
```",source,"[source] step1. create conda environment ```
|
588 |
+
conda create -n mgie python=3.10 -y
|
589 |
+
conda activate mgie
|
590 |
+
conda update -n base -c defaults conda setuptools -y
|
591 |
+
conda install -c conda-forge git git-lfs ffmpeg vim htop ninja gpustat -y
|
592 |
+
conda clean -a -y ```
|
593 |
+
step2. install dependencies ```
|
594 |
+
pip install -U pip cmake cython==0.29.36 pydantic==1.10 numpy
|
595 |
+
pip install -U gdown pydrive2 wget jupyter jupyterlab jupyterthemes ipython
|
596 |
+
pip install -U sentencepiece transformers diffusers tokenizers datasets gradio==3.37 accelerate evaluate git+https://github.com/openai/CLIP.git
|
597 |
+
pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
|
598 |
+
pip install -U deepspeed ```
|
599 |
+
step3. git clone this repo ```
|
600 |
+
cd ml-mgie
|
601 |
+
git submodule update --init --recursive
|
602 |
+
cd LLaVA ```
|
603 |
+
step4. install module ```
|
604 |
+
pip install -e .
|
605 |
+
pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
|
606 |
+
pip install -U ninja flash-attn==1.0.2
|
607 |
+
pip install -U pydrive2 gdown wget
|
608 |
+
cd ..
|
609 |
+
cp mgie_llava.py LLaVA/llava/model/llava.py
|
610 |
+
cp mgie_train.py LLaVA/llava/train/train.py
|
611 |
+
```","1,2,3,4",,
|
612 |
+
self-play-fine-tuning-converts-weak-language,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/uclaml/SPIN/main/README.md,computer_science,430,top,,,13/02/2024,2/1/24,"## Setup
|
613 |
+
The following steps provide the necessary setup to run our codes.
|
614 |
+
1. Create a Python virtual environment with Conda:
|
615 |
+
```
|
616 |
+
conda create -n myenv python=3.10
|
617 |
+
conda activate myenv
|
618 |
+
```
|
619 |
+
2. Install PyTorch `v2.1.0` with compatible cuda version, following instructions from [PyTorch Installation Page](https://pytorch.org/get-started/locally/). For example with cuda 11:
|
620 |
+
```
|
621 |
+
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
|
622 |
+
```
|
623 |
+
3. Install the following Python dependencies to run the codes.
|
624 |
+
```
|
625 |
+
python -m pip install .
|
626 |
+
python -m pip install flash-attn --no-build-isolation
|
627 |
+
```
|
628 |
+
4. Login to your huggingface account for downloading models
|
629 |
+
```
|
630 |
+
huggingface-cli login --token ""${your_access_token}""
|
631 |
+
```",source,"step1.create a Python virtual environment with Conda:
|
632 |
+
```
|
633 |
+
conda create -n myenv python=3.10
|
634 |
+
conda activate myenv
|
635 |
+
```
|
636 |
+
step2.install PyTorch `v2.1.0` with compatible cuda version, following instructions from [PyTorch Installation Page](https://pytorch.org/get-started/locally/). For example with cuda 11:
|
637 |
+
```
|
638 |
+
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
|
639 |
+
```
|
640 |
+
step3.install the following Python dependencies to run the codes.
|
641 |
+
```
|
642 |
+
python -m pip install .
|
643 |
+
python -m pip install flash-attn --no-build-isolation
|
644 |
+
```
|
645 |
+
step4.login to your huggingface account for downloading models
|
646 |
+
```
|
647 |
+
huggingface-cli login --token ""${your_access_token}""
|
648 |
+
```",,,
|
649 |
+
genegpt-teaching-large-language-models-to-use,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/ncbi/GeneGPT/main/README.md,computer_science,214,top,,,13/02/2024,19/04/2023,"# Requirements
|
650 |
+
|
651 |
+
The code has been tested with Python 3.9.13. Please first install the required packages by:
|
652 |
+
```bash
|
653 |
+
pip install -r requirements.txt
|
654 |
+
```
|
655 |
+
|
656 |
+
You also need an OpenAI API key to run GeneGPT with Codex. Replace the placeholder with your key in `config.py`:
|
657 |
+
```bash
|
658 |
+
$ cat config.py
|
659 |
+
API_KEY = 'YOUR_OPENAI_API_KEY'
|
660 |
+
```
|
661 |
+
|
662 |
+
## Using GeneGPT
|
663 |
+
|
664 |
+
After setting up the environment, one can run GeneGPT on GeneTuring by:
|
665 |
+
```bash
|
666 |
+
python main.py 111111
|
667 |
+
```
|
668 |
+
where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used.
|
669 |
+
|
670 |
+
To run GeneGPT-slim, simply use:
|
671 |
+
```bash
|
672 |
+
python main.py 001001
|
673 |
+
```
|
674 |
+
which will only use the Dm.1 and Dm.4 for in-context learning.",source,"step1.install requirements:
|
675 |
+
The code has been tested with Python 3.9.13. Please first install the required packages by:
|
676 |
+
```bash
|
677 |
+
pip install -r requirements.txt
|
678 |
+
```
|
679 |
+
step2.set OpenAI API key to run GeneGPT with Codex. Replace the placeholder with your key in `config.py`:
|
680 |
+
```bash
|
681 |
+
$ cat config.py
|
682 |
+
API_KEY = 'YOUR_OPENAI_API_KEY'
|
683 |
+
```
|
684 |
+
step3. execute GeneGPT
|
685 |
+
After setting up the environment, one can run GeneGPT on GeneTuring by:
|
686 |
+
```bash
|
687 |
+
python main.py 111111
|
688 |
+
```
|
689 |
+
where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used.
|
690 |
+
To run GeneGPT-slim, simply use:
|
691 |
+
```bash
|
692 |
+
python main.py 001001
|
693 |
+
```
|
694 |
+
which will only use the Dm.1 and Dm.4 for in-context learning.",,,
|
695 |
+
the-boundary-of-neural-network-trainability,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/Sohl-Dickstein/fractal/main/README.md,computer_science,41,top,,,13/02/2024,9/2/24,,,,,,
|
696 |
+
learning-to-fly-in-seconds,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/arplaboratory/learning-to-fly/master/README.MD,computer_science,201,top,,,13/02/2024,22/11/2023,"## Instructions to run the code
|
697 |
+
### Docker (isolated)
|
698 |
+
We provide a pre-built Docker image with a simple web interface that can be executed using a single command (given that Docker is already installed on your machine):
|
699 |
+
```
|
700 |
+
docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
|
701 |
+
```
|
702 |
+
After the container is running, navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) and you should see something like (after starting the training):
|
703 |
+
|
704 |
+
<div align=""center"">
|
705 |
+
<img src=""https://github.com/arplaboratory/learning_to_fly_media/blob/master/simulator_screenshot.png"" />
|
706 |
+
</div>
|
707 |
+
|
708 |
+
Note that to make this Docker image compatible with a broad range of CPUs, some optimizations have been turned off. For full speed we recommend a [Native installation](#Native-installation).
|
709 |
+
### Docker installation (isolated)
|
710 |
+
With the following instructions you can also easily build the Docker image yourself. If you want to run the code on bare metal jump [Native installation](#Native-installation).
|
711 |
+
|
712 |
+
First, install Docker on your machine. Then move to the original directory `learning_to_fly` and build the Docker image:
|
713 |
+
```
|
714 |
+
docker build -t arpllab/learning_to_fly .
|
715 |
+
```
|
716 |
+
If desired you can also build the container for building the firmware:
|
717 |
+
```
|
718 |
+
docker build -t arpllab/learning_to_fly_build_firmware -f Dockerfile_build_firmware .
|
719 |
+
```
|
720 |
+
After that you can run it using e.g.:
|
721 |
+
```
|
722 |
+
docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
|
723 |
+
```
|
724 |
+
This will open the port `8000` for the UI of the training program and run it inside the container.
|
725 |
+
|
726 |
+
Navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) with your browser, and you should see something like in the screenshot above (after starting the training).
|
727 |
+
|
728 |
+
The training UI configuration does not log data by default. If you want to inspect the training data run:
|
729 |
+
```
|
730 |
+
docker run -it --rm -p 6006:6006 arpllab/learning_to_fly training_headless
|
731 |
+
```
|
732 |
+
Navigate to [https://0.0.0.0:6006](https://0.0.0.0:6006) with your browser to investigate the Tensorboard logs.
|
733 |
+
|
734 |
+
If you would like to benchmark the training speed you can use:
|
735 |
+
```
|
736 |
+
docker run -it --rm arpllab/learning_to_fly training_benchmark
|
737 |
+
```
|
738 |
+
This is the fastest configuration, without logging, UI, checkpointing etc.
|
739 |
+
### Native installation
|
740 |
+
Clone this repository:
|
741 |
+
```
|
742 |
+
git clone https://github.com/arplaboratory/learning-to-fly learning_to_fly
|
743 |
+
cd learning_to_fly
|
744 |
+
```
|
745 |
+
Then instantiate the `RLtools` submodule:
|
746 |
+
```
|
747 |
+
git submodule update --init -- external/rl_tools
|
748 |
+
cd external/rl_tools
|
749 |
+
```
|
750 |
+
|
751 |
+
Then instantiate some dependencies of `RLtools` (for conveniences like checkpointing, Tensorboard logging, testing, etc.):
|
752 |
+
```
|
753 |
+
git submodule update --init -- external/cli11 external/highfive external/json/ external/tensorboard tests/lib/googletest/
|
754 |
+
```
|
755 |
+
|
756 |
+
#### Install dependencies on Ubuntu
|
757 |
+
```
|
758 |
+
sudo apt update && sudo apt install libhdf5-dev libopenblas-dev protobuf-compiler libprotobuf-dev libboost-all-dev
|
759 |
+
```
|
760 |
+
As an alternative to openblas you can also install [Intel MKL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html) which in our experience is significantly faster than OpenBLAS.
|
761 |
+
#### Install dependencies on macOS
|
762 |
+
```
|
763 |
+
brew install hdf5 protobuf boost
|
764 |
+
```
|
765 |
+
Please make sure that `brew` links the libraries correctly. If not you might have to link e.g. `protobuf` manually using `brew link protobuf`.
|
766 |
+
|
767 |
+
|
768 |
+
|
769 |
+
|
770 |
+
Going back to the main directory (`learning_to_fly`), we can now configure the build of the code:
|
771 |
+
```
|
772 |
+
cd ../../
|
773 |
+
mkdir build
|
774 |
+
cd build
|
775 |
+
```
|
776 |
+
- Ubuntu + OpenBLAS: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS:BOOL=ON`
|
777 |
+
- Ubuntu + MKL: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_MKL:BOOL=ON`
|
778 |
+
- macOS (tested on Sonoma): `cmake .. -DCMAKE_BUILD_TYPE=Release`
|
779 |
+
|
780 |
+
Finally, we can build the targets:
|
781 |
+
```
|
782 |
+
cmake --build . -j8
|
783 |
+
```
|
784 |
+
|
785 |
+
After successfully building the targets, we can run the code (in the original directory `learning_to_fly`):
|
786 |
+
```
|
787 |
+
cd ..
|
788 |
+
./build/src/training_headless
|
789 |
+
```
|
790 |
+
While this is running, you should be able to see training metrics using Tensorboard
|
791 |
+
|
792 |
+
If not already installed:
|
793 |
+
```
|
794 |
+
python3 -m pip install tensorboard
|
795 |
+
```
|
796 |
+
Then from the original directory `learning_to_fly`:
|
797 |
+
```
|
798 |
+
tensorboard --logdir=logs
|
799 |
+
```
|
800 |
+
|
801 |
+
To run the training with the UI, we download the JavaScript dependencies in the form of the two files `three.module.js` and `OrbitControls.js`:
|
802 |
+
```
|
803 |
+
cd src/ui
|
804 |
+
./get_dependencies.sh
|
805 |
+
```
|
806 |
+
|
807 |
+
After that we can execute the UI binary from the root folder:
|
808 |
+
```
|
809 |
+
cd ../../
|
810 |
+
./build/src/ui 0.0.0.0 8000
|
811 |
+
```
|
812 |
+
Now you should be able to navigate to [http://0.0.0.0:8000](http://0.0.0.0:8000) in your browser and start the training.
|
813 |
+
|
814 |
+
To run the benchmark (with UI, checkpointing and Tensorboard logging turned off):
|
815 |
+
```
|
816 |
+
sudo nice -n -20 ./build/src/training_benchmark
|
817 |
+
```
|
818 |
+
|
819 |
+
## Deploying trained policies on a Crazyflie
|
820 |
+
Train a policy, e.g. using the Docker image with the UI:
|
821 |
+
```
|
822 |
+
docker run -it --rm -p 8000:8000 -v $(pwd)/checkpoints:/learning_to_fly/checkpoints arpllab/learning_to_fly
|
823 |
+
```
|
824 |
+
The checkpoints are placed in the current working directory's `checkpoints` folder. Inspect the logs of the container to find the path of the final log, e.g., `checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h`.
|
825 |
+
We can mount this file into the container `arpllab/learning_to_fly_build_firmware` for building the firmware, e.g.:
|
826 |
+
```
|
827 |
+
docker run -it --rm -v $(pwd)/checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h:/controller/data/actor.h:ro -v $(pwd)/build_firmware:/output arpllab/learning_to_fly_build_firmware
|
828 |
+
```
|
829 |
+
This should build the firmware using the newly trained policy and output the binary to `build_firmware/cf2.bin`. After that we can use the `cfclient` package to flash the firmware (find the installation instructions [here](https://www.bitcraze.io/documentation/repository/crazyflie-clients-python/master/installation/install/))
|
830 |
+
```
|
831 |
+
cfloader flash build_firmware/cf2.bin stm32-fw -w radio://0/80/2M
|
832 |
+
```","source,docker","[plan1. Docker (isolated)]
|
833 |
+
step1: Execute a single command (given that Docker is already installed on your machine):
|
834 |
+
```
|
835 |
+
docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
|
836 |
+
```
|
837 |
+
step2. The container is now running. step3. Navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) with your browser. step4. After starting the training, you should see something like:
|
838 |
+
<div align=""center"">
|
839 |
+
<img src=""https://github.com/arplaboratory/learning_to_fly_media/blob/master/simulator_screenshot.png"" />
|
840 |
+
</div>
|
841 |
+
Note that to make this Docker image compatible with a broad range of CPUs, some optimizations have been turned off. For full speed we recommend a [Native installation](#Native-installation).
|
842 |
+
[Docker installation (isolated)]
|
843 |
+
step1. Install Docker on your machine. step2. Move to the original directory `learning_to_fly`. step3. Build the Docker image:
|
844 |
+
```
|
845 |
+
docker build -t arpllab/learning_to_fly .
|
846 |
+
```
|
847 |
+
[optional] If desired you can also build the container for building the firmware:
|
848 |
+
```
|
849 |
+
docker build -t arpllab/learning_to_fly_build_firmware -f Dockerfile_build_firmware .
|
850 |
+
```
|
851 |
+
step4. After that you can run it using e.g.:
|
852 |
+
```
|
853 |
+
docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
|
854 |
+
```
|
855 |
+
Context. This will open the port `8000` for the UI of the training program and run it inside the container.
|
856 |
+
step5. Navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) with your browser, and you should see something like in the screenshot above (after starting the training).
|
857 |
+
The training UI configuration does not log data by default. If you want to inspect the training data run:
|
858 |
+
```
|
859 |
+
docker run -it --rm -p 6006:6006 arpllab/learning_to_fly training_headless
|
860 |
+
```
|
861 |
+
Navigate to [https://0.0.0.0:6006](https://0.0.0.0:6006) with your browser to investigate the Tensorboard logs.
|
862 |
+
|
863 |
+
[plan2. Native installation]
|
864 |
+
step1. Clone this repository:
|
865 |
+
```
|
866 |
+
git clone https://github.com/arplaboratory/learning-to-fly learning_to_fly
|
867 |
+
cd learning_to_fly
|
868 |
+
```
|
869 |
+
step2. Instantiate the `RLtools` submodule:
|
870 |
+
```
|
871 |
+
git submodule update --init -- external/rl_tools
|
872 |
+
cd external/rl_tools
|
873 |
+
```
|
874 |
+
step3. Instantiate some dependencies of `RLtools` (for conveniences like checkpointing, Tensorboard logging, testing, etc.):
|
875 |
+
```
|
876 |
+
git submodule update --init -- external/cli11 external/highfive external/json/ external/tensorboard tests/lib/googletest/
|
877 |
+
```
|
878 |
+
step4. Install dependencies on Ubuntu
|
879 |
+
```
|
880 |
+
sudo apt update && sudo apt install libhdf5-dev libopenblas-dev protobuf-compiler libprotobuf-dev libboost-all-dev
|
881 |
+
```
|
882 |
+
optional. As an alternative to openblas you can also install [Intel MKL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html) which in our experience is significantly faster than OpenBLAS.
|
883 |
+
#### Install dependencies on macOS
|
884 |
+
```
|
885 |
+
brew install hdf5 protobuf boost
|
886 |
+
```
|
887 |
+
Please make sure that `brew` links the libraries correctly. If not you might have to link e.g. `protobuf` manually using `brew link protobuf`.
|
888 |
+
|
889 |
+
Going back to the main directory (`learning_to_fly`), we can now configure the build of the code:
|
890 |
+
```
|
891 |
+
cd ../../
|
892 |
+
mkdir build
|
893 |
+
cd build
|
894 |
+
```
|
895 |
+
- Ubuntu + OpenBLAS: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS:BOOL=ON`
|
896 |
+
- Ubuntu + MKL: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_MKL:BOOL=ON`
|
897 |
+
- macOS (tested on Sonoma): `cmake .. -DCMAKE_BUILD_TYPE=Release`
|
898 |
+
|
899 |
+
Finally, we can build the targets:
|
900 |
+
```
|
901 |
+
cmake --build . -j8
|
902 |
+
```
|
903 |
+
|
904 |
+
After successfully building the targets, we can run the code (in the original directory `learning_to_fly`):
|
905 |
+
```
|
906 |
+
cd ..
|
907 |
+
./build/src/training_headless
|
908 |
+
```
|
909 |
+
While this is running, you should be able to see training metrics using Tensorboard
|
910 |
+
|
911 |
+
If not already installed:
|
912 |
+
```
|
913 |
+
python3 -m pip install tensorboard
|
914 |
+
```
|
915 |
+
Then from the original directory `learning_to_fly`:
|
916 |
+
```
|
917 |
+
tensorboard --logdir=logs
|
918 |
+
```
|
919 |
+
|
920 |
+
To run the training with the UI, we download the JavaScript dependencies in the form of the two files `three.module.js` and `OrbitControls.js`:
|
921 |
+
```
|
922 |
+
cd src/ui
|
923 |
+
./get_dependencies.sh
|
924 |
+
```
|
925 |
+
|
926 |
+
After that we can execute the UI binary from the root folder:
|
927 |
+
```
|
928 |
+
cd ../../
|
929 |
+
./build/src/ui 0.0.0.0 8000
|
930 |
+
```
|
931 |
+
Now you should be able to navigate to [http://0.0.0.0:8000](http://0.0.0.0:8000) in your browser and start the training.
|
932 |
+
|
933 |
+
To run the benchmark (with UI, checkpointing and Tensorboard logging turned off):
|
934 |
+
```
|
935 |
+
sudo nice -n -20 ./build/src/training_benchmark
|
936 |
+
```
|
937 |
+
|
938 |
+
## Deploying trained policies on a Crazyflie
|
939 |
+
Train a policy, e.g. using the Docker image with the UI:
|
940 |
+
```
|
941 |
+
docker run -it --rm -p 8000:8000 -v $(pwd)/checkpoints:/learning_to_fly/checkpoints arpllab/learning_to_fly
|
942 |
+
```
|
943 |
+
The checkpoints are placed in the current working directory's `checkpoints` folder. Inspect the logs of the container to find the path of the final log, e.g., `checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h`.
|
944 |
+
We can mount this file into the container `arpllab/learning_to_fly_build_firmware` for building the firmware, e.g.:
|
945 |
+
```
|
946 |
+
docker run -it --rm -v $(pwd)/checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h:/controller/data/actor.h:ro -v $(pwd)/build_firmware:/output arpllab/learning_to_fly_build_firmware
|
947 |
+
```
|
948 |
+
This should build the firmware using the newly trained policy and output the binary to `build_firmware/cf2.bin`. After that we can use the `cfclient` package to flash the firmware (find the installation instructions [here](https://www.bitcraze.io/documentation/repository/crazyflie-clients-python/master/installation/install/))
|
949 |
+
```
|
950 |
+
cfloader flash build_firmware/cf2.bin stm32-fw -w radio://0/80/2M
|
951 |
+
```",,,
|
952 |
+
/LargeWorldModel/LWM,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/LargeWorldModel/LWM/main/README.md,,2098,top,,,13/02/2024,,"## Setup
|
953 |
+
Install the requirements with:
|
954 |
+
```
|
955 |
+
conda create -n lwm python=3.10
|
956 |
+
pip install -U ""jax[cuda12_pip]==0.4.23"" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
|
957 |
+
pip install -r requirements.txt
|
958 |
+
```
|
959 |
+
or set up TPU VM with:
|
960 |
+
```
|
961 |
+
sh tpu_requirements.sh
|
962 |
+
```","packagemanager, source","step1.install the requirements with:
|
963 |
+
```
|
964 |
+
conda create -n lwm python=3.10
|
965 |
+
pip install -U ""jax[cuda12_pip]==0.4.23"" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
|
966 |
+
pip install -r requirements.txt
|
967 |
+
```
|
968 |
+
optional. Set up a TPU VM with:
|
969 |
+
```
|
970 |
+
sh tpu_requirements.sh
|
971 |
+
```",,,
|
972 |
+
,,https://raw.githubusercontent.com/microsoft/UFO/main/README.md,,830,top,,,,,"### ___ Step 1: Installation
|
973 |
+
UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be installed by running the following command:
|
974 |
+
```bash
|
975 |
+
# [optional to create conda environment]
|
976 |
+
# conda create -n ufo python=3.10
|
977 |
+
# conda activate ufo
|
978 |
+
|
979 |
+
# clone the repository
|
980 |
+
git clone https://github.com/microsoft/UFO.git
|
981 |
+
cd UFO
|
982 |
+
# install the requirements
|
983 |
+
pip install -r requirements.txt
|
984 |
+
```
|
985 |
+
|
986 |
+
### __ Step 2: Configure the LLMs
|
987 |
+
Before running UFO, you need to provide your LLM configurations. Taking OpenAI as an example, you can configure `ufo/config/config.yaml` file as follows.
|
988 |
+
|
989 |
+
#### OpenAI
|
990 |
+
```
|
991 |
+
API_TYPE: ""openai""
|
992 |
+
OPENAI_API_BASE: ""https://api.openai.com/v1/chat/completions"" # The base URL for the OpenAI API
|
993 |
+
OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
|
994 |
+
OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
|
995 |
+
```
|
996 |
+
|
997 |
+
#### Azure OpenAI (AOAI)
|
998 |
+
```
|
999 |
+
API_TYPE: ""aoai""
|
1000 |
+
OPENAI_API_BASE: ""YOUR_ENDPOINT"" # The AOAI API address. Format: https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
|
1001 |
+
OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
|
1002 |
+
OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
|
1003 |
+
```
|
1004 |
+
|
1005 |
+
|
1006 |
+
### __ Step 3: Start UFO
|
1007 |
+
|
1008 |
+
#### __ You can execute the following on your Windows command Line (CLI):
|
1009 |
+
|
1010 |
+
```bash
|
1011 |
+
# assume you are in the cloned UFO folder
|
1012 |
+
python -m ufo --task <your_task_name>
|
1013 |
+
```
|
1014 |
+
|
1015 |
+
This will start the UFO process and you can interact with it through the command line interface.
|
1016 |
+
If everything goes well, you will see the following message:
|
1017 |
+
|
1018 |
+
```bash
|
1019 |
+
Welcome to use UFO__, A UI-focused Agent for Windows OS Interaction.
|
1020 |
+
_ _ _____ ___
|
1021 |
+
| | | || ___| / _ \
|
1022 |
+
| | | || |_ | | | |
|
1023 |
+
| |_| || _| | |_| |
|
1024 |
+
\___/ |_| \___/
|
1025 |
+
Please enter your request to be completed__:
|
1026 |
+
```
|
1027 |
+
#### __Reminder: ####
|
1028 |
+
- Before UFO executing your request, please make sure the targeted applications are active on the system.
|
1029 |
+
- The GPT-V accepts screenshots of your desktop and application GUI as input. Please ensure that no sensitive or confidential information is visible or captured during the execution process. For further information, refer to [DISCLAIMER.md](./DISCLAIMER.md).
|
1030 |
+
|
1031 |
+
|
1032 |
+
### Step 4 __: Execution Logs
|
1033 |
+
|
1034 |
+
You can find the screenshots taken and request & response logs in the following folder:
|
1035 |
+
```
|
1036 |
+
./ufo/logs/<your_task_name>/
|
1037 |
+
```
|
1038 |
+
You may use them to debug, replay, or analyze the agent output.",source,"step1: Installation
|
1039 |
+
UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be installed by running the following command:
|
1040 |
+
```bash
|
1041 |
+
# [optional to create conda environment]
|
1042 |
+
# conda create -n ufo python=3.10
|
1043 |
+
# conda activate ufo
|
1044 |
+
# clone the repository
|
1045 |
+
git clone https://github.com/microsoft/UFO.git
|
1046 |
+
cd UFO
|
1047 |
+
# install the requirements
|
1048 |
+
pip install -r requirements.txt
|
1049 |
+
```
|
1050 |
+
__ Step 2: Configure the LLMs
|
1051 |
+
Before running UFO, you need to provide your LLM configurations. Taking OpenAI as an example, you can configure `ufo/config/config.yaml` file as follows.
|
1052 |
+
#### OpenAI
|
1053 |
+
```
|
1054 |
+
API_TYPE: ""openai""
|
1055 |
+
OPENAI_API_BASE: ""https://api.openai.com/v1/chat/completions"" # The base URL for the OpenAI API
|
1056 |
+
OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
|
1057 |
+
OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
|
1058 |
+
```
|
1059 |
+
|
1060 |
+
#### Azure OpenAI (AOAI)
|
1061 |
+
```
|
1062 |
+
API_TYPE: ""aoai""
|
1063 |
+
OPENAI_API_BASE: ""YOUR_ENDPOINT"" # The AOAI API address. Format: https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
|
1064 |
+
OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
|
1065 |
+
OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
|
1066 |
+
```
|
1067 |
+
|
1068 |
+
|
1069 |
+
### __ Step 3: Start UFO
|
1070 |
+
|
1071 |
+
#### __ You can execute the following on your Windows command Line (CLI):
|
1072 |
+
|
1073 |
+
```bash
|
1074 |
+
# assume you are in the cloned UFO folder
|
1075 |
+
python -m ufo --task <your_task_name>
|
1076 |
+
```
|
1077 |
+
|
1078 |
+
This will start the UFO process and you can interact with it through the command line interface.
|
1079 |
+
If everything goes well, you will see the following message:
|
1080 |
+
|
1081 |
+
```bash
|
1082 |
+
Welcome to use UFO__, A UI-focused Agent for Windows OS Interaction.
|
1083 |
+
_ _ _____ ___
|
1084 |
+
| | | || ___| / _ \
|
1085 |
+
| | | || |_ | | | |
|
1086 |
+
| |_| || _| | |_| |
|
1087 |
+
\___/ |_| \___/
|
1088 |
+
Please enter your request to be completed__:
|
1089 |
+
```
|
1090 |
+
#### __Reminder: ####
|
1091 |
+
- Before UFO executing your request, please make sure the targeted applications are active on the system.
|
1092 |
+
- The GPT-V accepts screenshots of your desktop and application GUI as input. Please ensure that no sensitive or confidential information is visible or captured during the execution process. For further information, refer to [DISCLAIMER.md](./DISCLAIMER.md).
|
1093 |
+
|
1094 |
+
|
1095 |
+
### Step 4 __: Execution Logs
|
1096 |
+
|
1097 |
+
You can find the screenshots taken and request & response logs in the following folder:
|
1098 |
+
```
|
1099 |
+
./ufo/logs/<your_task_name>/
|
1100 |
+
```
|
1101 |
+
You may use them to debug, replay, or analyze the agent output.",,,
|
1102 |
+
,,https://raw.githubusercontent.com/catid/dora/main/README.md,,135,top,,,,,"## Demo
|
1103 |
+
|
1104 |
+
Install conda: https://docs.conda.io/projects/miniconda/en/latest/index.html
|
1105 |
+
|
1106 |
+
```bash
|
1107 |
+
git clone https://github.com/catid/dora.git
|
1108 |
+
cd dora
|
1109 |
+
|
1110 |
+
conda create -n dora python=3.10 -y && conda activate dora
|
1111 |
+
|
1112 |
+
pip install -U -r requirements.txt
|
1113 |
+
|
1114 |
+
python dora.py
|
1115 |
+
```",source,,,,
|
1116 |
+
,,https://raw.githubusercontent.com/AILab-CVC/YOLO-World/master/README.md,,,,,,,,"### 1. Installation
|
1117 |
+
|
1118 |
+
YOLO-World is developed based on `torch==1.11.0` `mmyolo==0.6.0` and `mmdetection==3.0.0`.
|
1119 |
+
|
1120 |
+
#### Clone Project
|
1121 |
+
|
1122 |
+
```bash
|
1123 |
+
git clone --recursive https://github.com/AILab-CVC/YOLO-World.git
|
1124 |
+
```
|
1125 |
+
#### Install
|
1126 |
+
|
1127 |
+
```bash
|
1128 |
+
pip install torch wheel -q
|
1129 |
+
pip install -e .
|
1130 |
+
```",source,,,,
|
1131 |
+
,,https://raw.githubusercontent.com/FasterDecoding/BitDelta/main/README.md,,63,top,,,,,"## Install
|
1132 |
+
|
1133 |
+
1. Clone the repo and navigate to BitDelta:
|
1134 |
+
|
1135 |
+
```
|
1136 |
+
git clone https://github.com/FasterDecoding/BitDelta
|
1137 |
+
cd BitDelta
|
1138 |
+
```
|
1139 |
+
|
1140 |
+
2. Set up environment:
|
1141 |
+
|
1142 |
+
```bash
|
1143 |
+
conda create -yn bitdelta python=3.9
|
1144 |
+
conda activate bitdelta
|
1145 |
+
|
1146 |
+
pip install -e .
|
1147 |
+
```",source,"step1.clone the repo and navigate to BitDelta:
|
1148 |
+
```
|
1149 |
+
git clone https://github.com/FasterDecoding/BitDelta
|
1150 |
+
cd BitDelta
|
1151 |
+
```
|
1152 |
+
step2. Set up the environment:
|
1153 |
+
```bash
|
1154 |
+
conda create -yn bitdelta python=3.9
|
1155 |
+
conda activate bitdelta
|
1156 |
+
pip install -e .
|
1157 |
+
```",,,
|
1158 |
+
,,https://raw.githubusercontent.com/tensorflow/tensorflow/master/README.md,,180724,greatest,,,,,"## Install
|
1159 |
+
|
1160 |
+
See the [TensorFlow install guide](https://www.tensorflow.org/install) for the
|
1161 |
+
[pip package](https://www.tensorflow.org/install/pip), to
|
1162 |
+
[enable GPU support](https://www.tensorflow.org/install/gpu), use a
|
1163 |
+
[Docker container](https://www.tensorflow.org/install/docker), and
|
1164 |
+
[build from source](https://www.tensorflow.org/install/source).
|
1165 |
+
|
1166 |
+
To install the current release, which includes support for
|
1167 |
+
[CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
|
1168 |
+
Windows)*:
|
1169 |
+
|
1170 |
+
```
|
1171 |
+
$ pip install tensorflow
|
1172 |
+
```
|
1173 |
+
|
1174 |
+
Other devices (DirectX and MacOS-metal) are supported using
|
1175 |
+
[Device plugins](https://www.tensorflow.org/install/gpu_plugins#available_devices).
|
1176 |
+
|
1177 |
+
A smaller CPU-only package is also available:
|
1178 |
+
|
1179 |
+
```
|
1180 |
+
$ pip install tensorflow-cpu
|
1181 |
+
```
|
1182 |
+
|
1183 |
+
To update TensorFlow to the latest version, add `--upgrade` flag to the above
|
1184 |
+
commands.
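For example, applying the flag to the default package:
```
$ pip install --upgrade tensorflow
```
The same flag works for the CPU-only package (`tensorflow-cpu`).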
|
1185 |
+
|
1186 |
+
*Nightly binaries are available for testing using the
|
1187 |
+
[tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
|
1188 |
+
[tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.*",packagemanager,"step1. To install the current release, which includes support for
|
1189 |
+
[CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
|
1190 |
+
Windows)*:
|
1191 |
+
```
|
1192 |
+
$ pip install tensorflow
|
1193 |
+
```
|
1194 |
+
step2. optional. A smaller CPU-only package is also available:
|
1195 |
+
```
|
1196 |
+
$ pip install tensorflow-cpu
|
1197 |
+
```
|
1198 |
+
step3. optional.
|
1199 |
+
To update TensorFlow to the latest version, add `--upgrade` flag to the above
|
1200 |
+
commands.
|
1201 |
+
|
1202 |
+
*Nightly binaries are available for testing using the
|
1203 |
+
[tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
|
1204 |
+
[tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.*",,,
|
1205 |
+
,,https://raw.githubusercontent.com/huggingface/transformers/main/README.md,,120272,greatest,,,,,"## Installation
|
1206 |
+
|
1207 |
+
### With pip
|
1208 |
+
|
1209 |
+
This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+.
|
1210 |
+
|
1211 |
+
You should install __ Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
|
1212 |
+
|
1213 |
+
First, create a virtual environment with the version of Python you're going to use and activate it.
|
1214 |
+
|
1215 |
+
Then, you will need to install at least one of Flax, PyTorch, or TensorFlow.
|
1216 |
+
Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific installation command for your platform.
|
1217 |
+
|
1218 |
+
When one of those backends has been installed, __ Transformers can be installed using pip as follows:
|
1219 |
+
|
1220 |
+
```bash
|
1221 |
+
pip install transformers
|
1222 |
+
```
|
1223 |
+
|
1224 |
+
If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
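As a quick sketch, one common way to get the bleeding-edge version directly from the repository (assuming you want the current main branch rather than a release) is:
```bash
# install the latest main branch straight from GitHub
pip install git+https://github.com/huggingface/transformers
```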
|
1225 |
+
|
1226 |
+
### With conda
|
1227 |
+
|
1228 |
+
__ Transformers can be installed using conda as follows:
|
1229 |
+
|
1230 |
+
```shell script
|
1231 |
+
conda install conda-forge::transformers
|
1232 |
+
```
|
1233 |
+
|
1234 |
+
> **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
|
1235 |
+
|
1236 |
+
Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
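For instance, if you pick PyTorch as the backend, it can typically be installed from its own conda channel (illustrative only; check the PyTorch installation page for the exact command matching your platform and CUDA version):
```shell script
conda install pytorch -c pytorch
```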
|
1237 |
+
|
1238 |
+
> **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",packagemanager,"Plan1. With pip
|
1239 |
+
requirements. This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+.
|
1240 |
+
step1. Install __ Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). (extra information) If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
|
1241 |
+
step2. create a virtual environment with the version of Python you're going to use and activate it.
|
1242 |
+
step3. install at least one of Flax, PyTorch, or TensorFlow.
|
1243 |
+
extrainformation. Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific installation command for your platform.
|
1244 |
+
step4. When one of those backends has been installed, __ Transformers can be installed using pip as follows:
|
1245 |
+
```bash
|
1246 |
+
pip install transformers
|
1247 |
+
```
|
1248 |
+
extrainformation. If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
|
1249 |
+
Plan2. With conda
|
1250 |
+
step1.
|
1251 |
+
```shell script
|
1252 |
+
conda install conda-forge::transformers
|
1253 |
+
```
|
1254 |
+
> **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
|
1255 |
+
Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
|
1256 |
+
> **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",,,
|
1257 |
+
,,https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md,,120270,greatest,,,,,"## Quick Install
|
1258 |
+
|
1259 |
+
With pip:
|
1260 |
+
```bash
|
1261 |
+
pip install langchain
|
1262 |
+
```
|
1263 |
+
|
1264 |
+
With conda:
|
1265 |
+
```bash
|
1266 |
+
conda install langchain -c conda-forge
|
1267 |
+
```",packagemanager,"Plan1. With pip. Step1:
|
1268 |
+
```bash
|
1269 |
+
pip install langchain
|
1270 |
+
```
|
1271 |
+
Plan2. With conda. Step1:
|
1272 |
+
```bash
|
1273 |
+
conda install langchain -c conda-forge
|
1274 |
+
```",,,
|
1275 |
+
,,https://raw.githubusercontent.com/divelab/DIG/dig-stable/README.md,,1912,topic,drug discovery,task libraries,,,"## Installation
|
1276 |
+
|
1277 |
+
### Install from pip
|
1278 |
+
The key dependencies of DIG: Dive into Graphs are PyTorch (>=1.10.0), PyTorch Geometric (>=2.0.0), and RDKit.
|
1279 |
+
|
1280 |
+
1. Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.10.0)
|
1281 |
+
|
1282 |
+
```shell script
|
1283 |
+
$ python -c ""import torch; print(torch.__version__)""
|
1284 |
+
>>> 1.10.0
|
1285 |
+
```
|
1286 |
+
|
1287 |
+
|
1288 |
+
|
1289 |
+
|
1290 |
+
2. Install [PyG](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html#) (>=2.0.0)
|
1291 |
+
|
1292 |
+
```shell script
|
1293 |
+
$ python -c ""import torch_geometric; print(torch_geometric.__version__)""
|
1294 |
+
>>> 2.0.0
|
1295 |
+
```
|
1296 |
+
|
1297 |
+
3. Install DIG: Dive into Graphs.
|
1298 |
+
|
1299 |
+
```shell script
|
1300 |
+
pip install dive-into-graphs
|
1301 |
+
```
|
1302 |
+
|
1303 |
+
|
1304 |
+
After installation, you can check the version. You have successfully installed DIG: Dive into Graphs if no error occurs.
|
1305 |
+
|
1306 |
+
``` shell script
|
1307 |
+
$ python
|
1308 |
+
>>> from dig.version import __version__
|
1309 |
+
>>> print(__version__)
|
1310 |
+
```
|
1311 |
+
|
1312 |
+
### Install from source
|
1313 |
+
If you want to try the latest features that have not been released yet, you can install dig from source.
|
1314 |
+
|
1315 |
+
```shell script
|
1316 |
+
git clone https://github.com/divelab/DIG.git
|
1317 |
+
cd DIG
|
1318 |
+
pip install .
|
1319 |
+
```",packagemanager,"step 1. Install from pip
|
1320 |
+
The key dependencies of DIG: Dive into Graphs are PyTorch (>=1.10.0), PyTorch Geometric (>=2.0.0), and RDKit.
|
1321 |
+
|
1322 |
+
1. Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.10.0)
|
1323 |
+
|
1324 |
+
```shell script
|
1325 |
+
$ python -c ""import torch; print(torch.__version__)""
|
1326 |
+
>>> 1.10.0
|
1327 |
+
```
|
1328 |
+
|
1329 |
+
|
1330 |
+
|
1331 |
+
|
1332 |
+
2. Install [PyG](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html#) (>=2.0.0)
|
1333 |
+
|
1334 |
+
```shell script
|
1335 |
+
$ python -c ""import torch_geometric; print(torch_geometric.__version__)""
|
1336 |
+
>>> 2.0.0
|
1337 |
+
```
|
1338 |
+
|
1339 |
+
3. Install DIG: Dive into Graphs.
|
1340 |
+
|
1341 |
+
```shell script
|
1342 |
+
pip install dive-into-graphs
|
1343 |
+
```
|
1344 |
+
|
1345 |
+
|
1346 |
+
After installation, you can check the version. You have successfully installed DIG: Dive into Graphs if no error occurs.
|
1347 |
+
|
1348 |
+
``` shell script
|
1349 |
+
$ python
|
1350 |
+
>>> from dig.version import __version__
|
1351 |
+
>>> print(__version__)
|
1352 |
+
```
|
1353 |
+
|
1354 |
+
### Install from source
|
1355 |
+
If you want to try the latest features that have not been released yet, you can install dig from source.
|
1356 |
+
|
1357 |
+
```shell script
|
1358 |
+
git clone https://github.com/divelab/DIG.git
|
1359 |
+
cd DIG
|
1360 |
+
pip install .
|
1361 |
+
```",,,
|