`\n\n## More Detailed Usage\n\n### .fuzzer Files\n\nThe .fuzzer files are human-readable and commented. They allow changing various\noptions on a per-fuzzer-file basis, including which message or message parts are\nfuzzed.\n\n### Message Formatting\n\nWithin a .fuzzer file is the message contents. These are simply lines that\nbegin with either 'inbound' or 'outbound', signifying which direction the\nmessage goes. They are in Python string format, with '\\xYY' being used for\nnon-printable characters. These are autogenerated by 'mutiny_prep.py' and\nDecept, but sometimes need to be manually modified.\n\n### Message Formatting - Manual Editing\n\nIf a message has the 'fuzz' keyword after 'outbound', this indicates it is to be\nfuzzed through Radamsa. A given message can have line continuations, by simply\nputting more message data in quotes on a new line. In this case, this second\nline will be merged with the first.\n\nAlternatively, the 'sub' keyword can be used to indicate a subcomponent. This\nallows specifying a separate component of the message, in order to fuzz only\ncertain parts and for convenience within a Message Processor.\n\nHere is an example arbitrary set of message data:\n```\noutbound 'say'\n ' hi'\nsub fuzz ' and fuzz'\n ' this'\nsub ' but not this\\xde\\xad\\xbe\\xef'\ninbound 'this is the server's'\n ' expected response'\n```\n\nThis will cause Mutiny to transmit `say hi and fuzz this but not\nthis(0xdeadbeef)`. `0xdeadbeef` will be transmitted as 4 hex bytes. `and fuzz\nthis` will be passed through Radamsa for fuzzing, but `say hi` and ` but not\nthis(0xdeadbeef)` will be left alone.\n\nMutiny will wait for a response from the server after transmitting the single\nabove message, due to the 'inbound' line. The server's expected response is\n`this is the server's expected response`. Mutiny won't do a whole lot with this\ndata, aside from seeing if what the server actually sent matches this string.\nIf a crash occurs, Mutiny will log both the expected output from the server and\nwhat the server actually replied with.\n\n### Customization\n\nmutiny_classes/ contains base classes for the Message Processor, Monitor, and\nException Processor. Any of these files can be copied into the same folder as\nthe .fuzzer (by default) or into a separate subfolder specified as the\n'processor_dir' within the .fuzzer file.\n\nThese three classes allow for storing server responses and changing outgoing\nmessages, monitoring the target on a separate thread, and changing how Mutiny\nhandles exceptions.\n\n### Customization - Message Processor\n\nThe Message Processor defines various callbacks that are called during a fuzzing\nrun. Within these callbacks, any Python code can be run. Anecdotally, these\nare primarily used in three ways. \n\nThe most common is when the server sends tokens that need to be added to future\noutbound messages. For example, if Mutiny's first message logs in, and the\nserver responds with a session ID, the `postReceiveProcess()` callback can be used\nto store that session ID. Then, in `preSendProcess()`, the outgoing data can be\nfixed up with that session ID. An example of this is in\n`sample_apps/session_server`.\n\nAnother common use of a Message Processor is to limit or change a fuzzed\nmessage. For example, if the server always drops messages greater than 1000\nbytes, it may not be worth sending any large messages. 
preSendProcess() can be\nused to shorten messages after fuzzing but before they are sent or to raise an\nexception.\n\nRaising an exception brings up the final way Message Processors are commonly\nused. Within a callback, any custom exceptions defined in\n`mutiny_classes/mutiny_exceptions.py` can be raised. There are several\nexceptions, all commented, that will cause various behaviors from Mutiny. These\ngenerally involve either logging, retrying, or aborting the current run.\n\n### Customization - Monitor\n\nThe Monitor has a `monitorTarget()` function that is run on a separate thread from\nthe main Mutiny fuzzer. The purpose is to allow implementing a long-running\nprocess that can monitor a host in some fashion. This can be anything that can\nbe done in Python, such as communicating with a monitor daemon running on the\ntarget, reading a long file, or even just pinging the host repeatedly, depending\non the requirements of the fuzzing session.\n\nIf the Monitor detects a crash, it can call `signalMain()` at any time. This will\nsignal the main Mutiny thread that a crash has occurred, and it will log the\ncrash. This function should generally operate in an infinite loop, as returning\nwill cause the thread to terminate, and it will not be restarted.\n\n### Customization - Exception Processor\n\nThe Exception Processor determines what Mutiny should do with a given exception\nduring a fuzz session. In the most general sense, the `processException()`\nfunction will translate Python and OS-level exceptions into Mutiny error\nhandling actions as best as it can.\n\nFor example, if Mutiny gets 'Connection Refused', the default response is to\nassume that the target server has died unrecoverably, so Mutiny will log the\nprevious run and halt. This is true in most cases, but this behavior can be\nchanged to that of any of the exceptions in\n`mutiny_classes/mutiny_exceptions.py` as needed, allowing tailoring of crash\ndetection and error correction.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "adamchainz/django-mysql", "link": "https://github.com/adamchainz/django-mysql", "tags": ["django", "mysql", "mariadb", "python"], "stars": 514, "description": ":dolphin: :horse: Extensions to Django for use with MySQL/MariaDB", "lang": "Python", "repo_lang": "", "readme": "============\nDjango-MySQL\n============\n\n.. image:: https://img.shields.io/readthedocs/django-mysql?style=for-the-badge\n :target: https://django-mysql.readthedocs.io/en/latest/\n\n.. image:: https://img.shields.io/github/actions/workflow/status/adamchainz/django-mysql/main.yml?branch=main&style=for-the-badge\n :target: https://github.com/adamchainz/django-mysql/actions?workflow=CI\n\n.. image:: https://img.shields.io/badge/Coverage-100%25-success?style=for-the-badge\n :target: https://github.com/adamchainz/django-mysql/actions?workflow=CI\n\n.. image:: https://img.shields.io/pypi/v/django-mysql.svg?style=for-the-badge\n :target: https://pypi.org/project/django-mysql/\n\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg?style=for-the-badge\n :target: https://github.com/psf/black\n\n.. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white&style=for-the-badge\n :target: https://github.com/pre-commit/pre-commit\n :alt: pre-commit\n\n.. 
figure:: https://raw.github.com/adamchainz/django-mysql/main/docs/images/dolphin-pony.png\n :alt: The dolphin-pony - proof that cute + cute = double cute.\n\n..\n\n | The dolphin-pony - proof that cute + cute = double cute.\n\n\nDjango-MySQL extends Django's built-in MySQL and MariaDB support their specific\nfeatures not available on other databases.\n\n\nWhat kind of features?\n----------------------\n\nIncludes:\n\n* ``QuerySet`` extensions:\n\n * 'Smart' iteration - chunked pagination across a large queryset\n * ``approx_count`` for quick estimates of ``count()``\n * Query hints\n * Quick ``pt-visual-explain`` of the underlying query\n\n* Model fields:\n\n * MariaDB Dynamic Columns for storing dictionaries\n * Comma-separated fields for storing lists and sets\n * 'Missing' fields: differently sized ``BinaryField``/``TextField`` classes,\n ``BooleanField``\\s represented by BIT(1)\n\n* ORM expressions for over 20 MySQL-specific functions\n* A new cache backend that makes use of MySQL's upsert statement and does\n compression\n* Status variable inspection and utility methods\n* Named locks for easy locking of e.g. external resources\n* Table lock manager for hard to pull off data migrations\n\nTo see them all, check out the exposition at\nhttps://django-mysql.readthedocs.io/en/latest/exposition.html .\n\nRequirements and Installation\n-----------------------------\n\nPlease see\nhttps://django-mysql.readthedocs.io/en/latest/installation.html .\n\nDocumentation\n-------------\n\nEvery detail documented on\n`Read The Docs `_.\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "eladhoffer/seq2seq.pytorch", "link": "https://github.com/eladhoffer/seq2seq.pytorch", "tags": ["deep-learning", "neural-machine-translation", "seq2seq"], "stars": 514, "description": "Sequence-to-Sequence learning using PyTorch", "lang": "Python", "repo_lang": "", "readme": "# Seq2Seq in PyTorch\nThis is a complete suite for training sequence-to-sequence models in [PyTorch](www.pytorch.org). It consists of several models and code to both train and infer using them.\n\nUsing this code you can train:\n* Neural-machine-translation (NMT) models\n* Language models\n* Image to caption generation\n* Skip-thought sentence representations\n* And more...\n \n ## Installation\n ```\n git clone --recursive https://github.com/eladhoffer/seq2seq.pytorch\n cd seq2seq.pytorch; python setup.py develop\n ```\n \n## Models\nModels currently available:\n* Simple Seq2Seq recurrent model\n* Recurrent Seq2Seq with attentional decoder\n* [Google neural machine translation](https://arxiv.org/abs/1609.08144) (GNMT) recurrent model\n* Transformer - attention-only model from [\"Attention Is All You Need\"](https://arxiv.org/abs/1706.03762)\n\n## Datasets\nDatasets currently available:\n\n* WMT16\n* WMT17\n* OpenSubtitles 2016\n* COCO image captions\n* [Conceptual captions](https://ai.googleblog.com/2018/09/conceptual-captions-new-dataset-and.html)\n\nAll datasets can be tokenized using 3 available segmentation methods:\n\n* Character based segmentation\n* Word based segmentation\n* Byte-pair-encoding (BPE) as suggested by [bpe](https://arxiv.org/abs/1508.07909) with selectable number of tokens. \n\nAfter choosing a tokenization method, a vocabulary will be generated and saved for future inference.\n\n\n## Training methods\nThe models can be trained using several methods:\n\n* Basic Seq2Seq - given encoded sequence, generate (decode) output sequence. 
Training is done with teacher-forcing.\n* Multi Seq2Seq - where several tasks (such as multiple languages) are trained simultaneously by using the data sequences as both input to the encoder and output for decoder.\n* Image2Seq - used to train image to caption generators.\n\n## Usage\nExample training scripts are available in ``scripts`` folder. Inference examples are available in ``examples`` folder.\n\n* example for training a [transformer](https://arxiv.org/abs/1706.03762)\n on WMT16 according to original paper regime:\n```\nDATASET=${1:-\"WMT16_de_en\"}\nDATASET_DIR=${2:-\"./data/wmt16_de_en\"}\nOUTPUT_DIR=${3:-\"./results\"}\n\nWARMUP=\"4000\"\nLR0=\"512**(-0.5)\"\n\npython main.py \\\n --save transformer \\\n --dataset ${DATASET} \\\n --dataset-dir ${DATASET_DIR} \\\n --results-dir ${OUTPUT_DIR} \\\n --model Transformer \\\n --model-config \"{'num_layers': 6, 'hidden_size': 512, 'num_heads': 8, 'inner_linear': 2048}\" \\\n --data-config \"{'moses_pretok': True, 'tokenization':'bpe', 'num_symbols':32000, 'shared_vocab':True}\" \\\n --b 128 \\\n --max-length 100 \\\n --device-ids 0 \\\n --label-smoothing 0.1 \\\n --trainer Seq2SeqTrainer \\\n --optimization-config \"[{'step_lambda':\n \\\"lambda t: { \\\n 'optimizer': 'Adam', \\\n 'lr': ${LR0} * min(t ** -0.5, t * ${WARMUP} ** -1.5), \\\n 'betas': (0.9, 0.98), 'eps':1e-9}\\\"\n }]\"\n```\n\n* example for training attentional LSTM based model with 3 layers in both encoder and decoder:\n```\npython main.py \\\n --save de_en_wmt17 \\\n --dataset ${DATASET} \\\n --dataset-dir ${DATASET_DIR} \\\n --results-dir ${OUTPUT_DIR} \\\n --model RecurrentAttentionSeq2Seq \\\n --model-config \"{'hidden_size': 512, 'dropout': 0.2, \\\n 'tie_embedding': True, 'transfer_hidden': False, \\\n 'encoder': {'num_layers': 3, 'bidirectional': True, 'num_bidirectional': 1, 'context_transform': 512}, \\\n 'decoder': {'num_layers': 3, 'concat_attention': True,\\\n 'attention': {'mode': 'dot_prod', 'dropout': 0, 'output_transform': True, 'output_nonlinearity': 'relu'}}}\" \\\n --data-config \"{'moses_pretok': True, 'tokenization':'bpe', 'num_symbols':32000, 'shared_vocab':True}\" \\\n --b 128 \\\n --max-length 80 \\\n --device-ids 0 \\\n --trainer Seq2SeqTrainer \\\n --optimization-config \"[{'epoch': 0, 'optimizer': 'Adam', 'lr': 1e-3},\n {'epoch': 6, 'lr': 5e-4},\n {'epoch': 8, 'lr':1e-4},\n {'epoch': 10, 'lr': 5e-5},\n {'epoch': 12, 'lr': 1e-5}]\" \\\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hu619340515/jd_seckill-1", "link": "https://github.com/hu619340515/jd_seckill-1", "tags": [], "stars": 514, "description": "fork huanghyw/jd_seckill", "lang": "Python", "repo_lang": "", "readme": "#Jd_Seckill\n\n## Special statement:\n\n* Any scripts involved in the `jd_seckill` project released in this warehouse are only for testing and learning research, and commercial use is prohibited, and its legality, accuracy, completeness and validity cannot be guaranteed, please judge by yourself according to the situation.\n\n* All resource files in this project are prohibited from being reproduced or published in any form by any official account or self-media.\n\n* `huanghyw` is not responsible for any script issues, including but not limited to any loss or damage caused by any script errors.\n\n* For any user who indirectly uses the script, including but not limited to establishing a VPS or disseminating it in violation of national laws or relevant regulations, `huanghyw` is not 
responsible for any privacy leaks or other consequences arising therefrom.

* Do not use any content of the `jd_seckill` project for commercial or illegal purposes; if you do, you bear the consequences yourself.

* If any organization or individual believes that a script in this project may infringe its rights, it should notify us promptly and provide proof of identity and ownership. We will delete the relevant script after receiving such documentation.

* Anyone who views this project in any way, or directly or indirectly uses any script from the `jd_seckill` project, should read this statement carefully. `huanghyw` reserves the right to change or supplement this disclaimer at any time. By using or copying any related script or the `jd_seckill` project, you are deemed to have accepted this disclaimer.

* You must completely delete the above content from your computer or phone within 24 hours of downloading it.

* This project is released under the `GPL-3.0 License`. If there is any conflict between this special statement and the `GPL-3.0 License`, this special statement shall prevail.

> ***If you use or copy any code or project from this repository, you are deemed to have accepted this statement; please read it carefully.***
> ***If you used or copied any code or project from this repository before this statement was issued, and you are still using it now, you are likewise deemed to have accepted this statement; please read it carefully.***

## Introduction
Based on my own use during this period (2020-12-12 to 2020-12-17), this script can indeed grab Moutai. I grabbed 4 bottles across my three accounts and another 4 bottles for two friends.
As long as your configuration file is correct and your cookie has not expired, you will eventually succeed if you keep at it.

According to user feedback during this period, products other than Moutai that do not need to be added to the shopping cart cannot be grabbed. The exact reason has not been investigated yet; most likely JD has changed the purchase flow for non-Moutai products.
To avoid wasting your time, do not try to buy non-Moutai products for now.
A new version will be released once this problem is solved.


## Observation in the dark

Based on the logs of Moutai purchase attempts since December 14, we can make an educated guess about the relationship between the `resultCode` in the returned JSON message and Xiaobai Credit (JD's credit score).
Here we mainly analyze `90016` and `90008`, the two most frequent codes.

### Sample JSON
```json
{'errorMessage': "It's a pity that I didn't get it, let's make persistent efforts.", 'orderId': 0, 'resultCode': 90016, 'skuId': 0, 'success': False}
{'errorMessage': "It's a pity that I didn't get it, let's make persistent efforts.", 'orderId': 0, 'resultCode': 90008, 'skuId': 0, 'success': False}
```

### Statistics

| Case | Xiaobai Credit | 90016 | 90008 | Time until success |
| ---- | -------------- | ------ | ------ | ------------------ |
| Zhang San | 63.8 | 59.63% | 40.37% | Not yet successful |
| Li Si | 92.9 | 72.05% | 27.94% | 4 days |
| Wang Wu | 99.6 | 75.70% | 24.29% | Not yet successful |
| Zhao Liu | 103.4 | 91.02% | 8.9% | 2 days |

### Guess
It is speculated that a `90008` response comes from JD.com's risk-control mechanism, meaning the request failed outright and never actually took part in the purchase draw.
The lower your Xiaobai Credit, the more likely you are to trigger JD's risk control.

Judging from the data, risk control seems to step up roughly every ten points of Xiaobai Credit: Zhao Liu is basically never intercepted, Li Si and Wang Wu have similar interception rates, and Zhang San is intercepted most often.

Only requests that pass risk control take part in the purchase. At that point something like a reservoir-sampling model is presumably used: since not all requests can be served at once, successful buyers appear to be spread out evenly, so the outcome comes down to probability.

> To sum up, it is quite hard for Zhang San to succeed, and users with a Xiaobai Credit of 100+ have the best chance.

## Main features

- Log in to JD Mall ([www.jd.com](http://www.jd.com/))
  - Log in by scanning a QR code with the JD app
- Reserve Moutai
  - Automatic reservation at a scheduled time
- Wait for the flash sale after the reservation is made
  - Automatic purchasing starts at the scheduled time

## Operating environment

- [Python 3](https://www.python.org/)

## Third-party libraries

- The required libraries are listed in requirements.txt and can be installed with:
`pip install -r requirements.txt`
- If installation of third-party libraries is slow (e.g. from within China), you can use the Tsinghua mirror to speed it up:
`pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/`

## Tutorial
#### 1. The Chrome browser is recommended
#### 2. Log in by scanning the QR code on the web page, or with your account and password
#### 3. Fill in the configuration in config.ini
(1) `eid` and `fp`: place an order for any ordinary product and you can capture these two values from the network traffic. Both values can be hard-coded once obtained.
> Simply place an order for any product, go to the checkout page, open the browser's developer tools, switch to the Console tab, enter the variable `_JdTdudfp` in the console, and read `eid` and `fp` from the JSON output.
> If that does not work, refer to the original author's issue https://github.com/zhou-xiaojun/jd_mask/issues/22

(2) `sku_id`, `DEFAULT_USER_AGENT`
> `sku_id` is already filled in for Moutai.
> `cookies_string` is no longer needed.
> `DEFAULT_USER_AGENT` can use the default one.
Google Chrome can also enter about:version in the address bar of the browser to view `USER_AGENT` and replace it\n\n(3) Configure the time\n> Now it is not mandatory to synchronize the latest time, the program will automatically synchronize Jingdong time\n>> But if the computer time is slow for several hours, it's better to synchronize it\n\nAll of the above are required.\n>tips:\n> After the program starts running, it will detect the local time and JD server time, and the output difference is the local time - JD server time, that is, -50 means that the local time is 50ms slower than the JD server time.\n> The execution time of this code is based on the local computer/server time\n\n(4) Modify the number of snap-up bottles\n> The default number of snap-up bottles in the code is 2, and it cannot be modified in the configuration file\n> If you bought a bottle within a month, it is best to change the number of bottles you bought to 1\n> The specific modification is: search for `self.seckill_num = 2` in the `jd_spider_requests.py` file, and change `2` to `1`\n\n#### 4. Run main.py\nSelect the corresponding function according to the prompt. If there is a prompt to scan the code to log in, you can check whether there is a `qr_code.png` file in the project directory. If there is, open the picture and use the Jingdong mobile app to scan the code to log in.\n\n- *Display the QR code in the command line mode under Linux (take Ubuntu as an example)*\n\n```bash\n$ sudo apt-get install qrencode zbar-tools # Install QR code parsing and generation tools for reading QR codes and outputting them on the command line.\n$ zbarimg qr_code.png > qrcode.txt && qrencode -r qrcode.txt -o - -t UTF8 # Analyze the QR code and output it to the command line window.\n```\n\n#### 5. Confirmation of snapping results\nThe success of the snap-up can usually be seen within one minute of the procedure!\nSearch the log, and there is \"successful purchase, order number xxxxx\", which means that the order has been successfully purchased, and the order must be paid within half an hour! The program does not support automatic stop for the time being, manual STOP is required!\nIf you haven\u2019t snapped up the item within two minutes, you basically didn\u2019t get it! 
The program does not support automatic stop for the time being, manual STOP is required!\n\n## tip\nThere is no need to tip any more, those who grab Moutai, please keep this joy, and those who don\u2019t, keep on cheering :)\n\n## grateful\n##### Thank you very much for the code provided by the original author https://github.com/zhou-xiaojun/jd_mask\n##### Thanks also to https://github.com/wlwwu/jd_maotai for optimization", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "st-tech/zr-obp", "link": "https://github.com/st-tech/zr-obp", "tags": ["datasets", "off-policy-evaluation", "contextual-bandits", "multi-armed-bandits", "research"], "stars": 514, "description": "Open Bandit Pipeline: a python library for bandit algorithms and off-policy evaluation", "lang": "Python", "repo_lang": "", "readme": "\n\n[![pypi](https://img.shields.io/pypi/v/obp.svg)](https://pypi.python.org/pypi/obp)\n[![Python](https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9-blue)](https://www.python.org)\n[![Downloads](https://pepy.tech/badge/obp)](https://pepy.tech/project/obp)\n![GitHub commit activity](https://img.shields.io/github/commit-activity/m/st-tech/zr-obp)\n![GitHub last commit](https://img.shields.io/github/last-commit/st-tech/zr-obp)\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![arXiv](https://img.shields.io/badge/arXiv-2008.07146-b31b1b.svg)](https://arxiv.org/abs/2008.07146)\n\n[[arXiv]](https://arxiv.org/abs/2008.07146)\n\n# Open Bandit Pipeline: a research framework for bandit algorithms and off-policy evaluation\n\n**[\u30c9\u30ad\u30e5\u30e1\u30f3\u30c8](https://zr-obp.readthedocs.io/en/latest/)** | **[Google Group](https://groups.google.com/g/open-bandit-project)** | **[\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb](https://sites.google.com/cornell.edu/recsys2021tutorial)** | **[\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb](#\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb)** | **[\u4f7f\u7528\u65b9\u6cd5](#\u4f7f\u7528\u65b9\u6cd5)** | **[\u30b9\u30e9\u30a4\u30c9](./slides/slides_JN.pdf)** | **[Quickstart](./examples/quickstart)** | **[Open Bandit Dataset](./obd/README_JN.md)** | **[\u89e3\u8aac\u30d6\u30ed\u30b0\u8a18\u4e8b](https://techblog.zozo.com/entry/openbanditproject)**\n\n\nTable of Contents

- [Open Bandit Pipeline: a research framework for bandit algorithms and off-policy evaluation](#open-bandit-pipeline-a-research-framework-for-bandit-algorithms-and-off-policy-evaluation)
- [Overview](#overview)
  - [Open Bandit Dataset](#open-bandit-dataset)
  - [Open Bandit Pipeline](#open-bandit-pipeline)
  - [Implemented bandit algorithms and off-policy estimators](#implemented-bandit-algorithms-and-off-policy-estimators)
  - [Topics and tasks](#topics-and-tasks)
- [Installation](#installation)
  - [Dependencies](#dependencies)
- [Usage](#usage)
  - [(1) Loading and preprocessing the data](#1-loading-and-preprocessing-the-data)
  - [(2) Off-policy learning](#2-off-policy-learning)
  - [(3) Off-policy evaluation](#3-off-policy-evaluation)
- [Citation](#citation)
- [Google Group](#google-group)
- [License](#license)
- [Project team](#project-team)
- [Contact](#contact)
- [References](#references)

# Overview

## Open Bandit Dataset

*Open Bandit Dataset* is a large-scale public real-world dataset intended to facilitate research on bandit algorithms and off-policy evaluation.
The dataset is provided by [ZOZO, Inc.](https://corp.zozo.com/about/profile/), the largest fashion e-commerce company in Japan.
On [ZOZOTOWN](https://zozo.jp/), the large-scale fashion e-commerce site operated by the company, several multi-armed bandit algorithms are used to recommend fashion items to users.
Figure 1 below shows an example of fashion item recommendation by a bandit algorithm: for each user request, three fashion items are recommended at the same time.

Figure 1. Example of fashion item recommendation on ZOZOTOWN
\n\n\n\n2019\u5e7411\u6708\u4e0b\u65ec\u306e7\u65e5\u9593\u306b\u308f\u305f\u308b\u30c7\u30fc\u30bf\u53ce\u96c6\u5b9f\u9a13\u306b\u304a\u3044\u3066, \u5168\u30a2\u30a4\u30c6\u30e0(all)\u30fb\u7537\u6027\u7528\u30a2\u30a4\u30c6\u30e0(men)\u30fb\u5973\u6027\u7528\u30a2\u30a4\u30c6\u30e0(women)\u306b\u5bfe\u5fdc\u3059\u308b3\u3064\u306e\u300c\u30ad\u30e3\u30f3\u30da\u30fc\u30f3\u300d\u3067\u30c7\u30fc\u30bf\u3092\u53ce\u96c6\u3057\u307e\u3057\u305f.\n\u305d\u308c\u305e\u308c\u306e\u30ad\u30e3\u30f3\u30da\u30fc\u30f3\u3067\u306f, \u5404\u30e6\u30fc\u30b6\u306e\u30a4\u30f3\u30d7\u30ec\u30c3\u30b7\u30e7\u30f3\u306b\u5bfe\u3057\u3066\u30e9\u30f3\u30c0\u30e0\u65b9\u7b56(Random)\u307e\u305f\u306f\u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56(Bernoulli Thompson Sampling; Bernoulli TS)\u306e\u3044\u305a\u308c\u304b\u3092\u78ba\u7387\u7684\u306b\u30e9\u30f3\u30c0\u30e0\u306b\u9078\u629e\u3057\u3066\u9069\u7528\u3057\u3066\u3044\u307e\u3059.\n\u56f32\u306fOpen Bandit Dataset\u306e\u8a18\u8ff0\u7d71\u8a08\u3092\u793a\u3057\u3066\u3044\u307e\u3059.\n\n\n\n \n \u56f32. Open Bandit Dataset\u306e\u30ad\u30e3\u30f3\u30da\u30fc\u30f3\u3068\u30c7\u30fc\u30bf\u53ce\u96c6\u65b9\u7b56\u3054\u3068\u306e\u8a18\u8ff0\u7d71\u8a08\n
\n\n\n\n[\u5b9f\u88c5\u4f8b](./examples)\u3092\u5b9f\u884c\u3059\u308b\u305f\u3081\u306e\u5c11\u91cf\u7248\u30c7\u30fc\u30bf\u306f, [./obd/](./obd)\u306b\u3042\u308a\u307e\u3059.\nOpen Bandit Dataset\u306e\u30d5\u30eb\u30b5\u30a4\u30ba\u7248\u306f[https://research.zozo.com/data.html](https://research.zozo.com/data.html)\u306b\u3042\u308a\u307e\u3059.\n\u52d5\u4f5c\u78ba\u8a8d\u7b49\u306b\u306f\u5c11\u91cf\u7248\u3092, \u7814\u7a76\u7528\u9014\u306b\u306f\u30d5\u30eb\u30b5\u30a4\u30ba\u7248\u3092\u6d3b\u7528\u3057\u3066\u304f\u3060\u3055\u3044.\n\n## Open Bandit Pipeline\n\n*Open Bandit Pipeline*\u306f, \u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u306e\u524d\u51e6\u7406\u30fb\u30aa\u30d5\u65b9\u7b56\u5b66\u7fd2\u30fb\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u306e\u8a55\u4fa1\u3092\u7c21\u5358\u306b\u884c\u3046\u305f\u3081\u306ePython\u30d1\u30c3\u30b1\u30fc\u30b8\u3067\u3059.\nOpen Bandit Pipeline\u3092\u6d3b\u7528\u3059\u308b\u3053\u3068\u3067, \u7814\u7a76\u8005\u306f\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf (OPE estimator) \u306e\u5b9f\u88c5\u306b\u96c6\u4e2d\u3057\u3066\u73fe\u5b9f\u7684\u3067\u518d\u73fe\u6027\u306e\u3042\u308b\u65b9\u6cd5\u3067\u4ed6\u306e\u624b\u6cd5\u3068\u306e\u6027\u80fd\u6bd4\u8f03\u3092\u884c\u3046\u3053\u3068\u304c\u3067\u304d\u308b\u3088\u3046\u306b\u306a\u308a\u307e\u3059.\n\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1(Off-Policy Evaluation)\u306b\u3064\u3044\u3066\u306f, [\u3053\u3061\u3089\u306e\u30d6\u30ed\u30b0\u8a18\u4e8b](https://techblog.zozo.com/entry/openbanditproject)\u3092\u3054\u78ba\u8a8d\u304f\u3060\u3055\u3044.\n\n\n\n\n \u56f33. Open Bandit Pipeline\u306e\u69cb\u6210\n
\n\n\nOpen Bandit Pipeline\u306f, \u4ee5\u4e0b\u306e\u4e3b\u8981\u30e2\u30b8\u30e5\u30fc\u30eb\u3067\u69cb\u6210\u3055\u308c\u3066\u3044\u307e\u3059.\n\n- [**dataset\u30e2\u30b8\u30e5\u30fc\u30eb**](./obp/dataset): \u3053\u306e\u30e2\u30b8\u30e5\u30fc\u30eb\u306f, Open Bandit Dataset\u7528\u306e\u30c7\u30fc\u30bf\u8aad\u307f\u8fbc\u307f\u30af\u30e9\u30b9\u3068\u30c7\u30fc\u30bf\u306e\u524d\u51e6\u7406\u3059\u308b\u305f\u3081\u306e\u67d4\u8edf\u306a\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u3092\u63d0\u4f9b\u3057\u307e\u3059. \u307e\u305f\u4eba\u5de5\u30c7\u30fc\u30bf\u3092\u751f\u6210\u3059\u308b\u30af\u30e9\u30b9\u3084\u591a\u30af\u30e9\u30b9\u5206\u985e\u30c7\u30fc\u30bf\u3092\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30c7\u30fc\u30bf\u306b\u5909\u63db\u3059\u308b\u305f\u3081\u306e\u30af\u30e9\u30b9\u3082\u5b9f\u88c5\u3057\u3066\u3044\u307e\u3059.\n- [**policy\u30e2\u30b8\u30e5\u30fc\u30eb**](./obp/policy): \u3053\u306e\u30e2\u30b8\u30e5\u30fc\u30eb\u306f, \u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u305f\u3081\u306e\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30a4\u30b9\u3092\u63d0\u4f9b\u3057\u307e\u3059. \u52a0\u3048\u3066, \u3044\u304f\u3064\u304b\u306e\u6a19\u6e96\u306a\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u5b9f\u88c5\u3057\u3066\u3044\u307e\u3059.\n- [**ope\u30e2\u30b8\u30e5\u30fc\u30eb**](./obp/ope):\u3000\u3053\u306e\u30e2\u30b8\u30e5\u30fc\u30eb\u306f, \u3044\u304f\u3064\u304b\u306e\u6a19\u6e96\u7684\u306a\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u3092\u5b9f\u88c5\u3057\u3066\u3044\u307e\u3059. \u307e\u305f\u65b0\u305f\u306b\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u3092\u5b9f\u88c5\u3059\u308b\u305f\u3081\u306e\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u3082\u63d0\u4f9b\u3057\u3066\u3044\u307e\u3059.\n\n\n### \u5b9f\u88c5\u3055\u308c\u3066\u3044\u308b\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3068\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\n\n\n\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0 (policy module\u306b\u5b9f\u88c5)
\n\n- Online\n - Non-Contextual (Context-free)\n - Random\n - Epsilon Greedy\n - Bernoulli Thompson Sampling\n - Contextual (Linear)\n - Linear Epsilon Greedy\n - [Linear Thompson Sampling](http://proceedings.mlr.press/v28/agrawal13)\n - [Linear Upper Confidence Bound](https://dl.acm.org/doi/pdf/10.1145/1772690.1772758)\n - Contextual (Logistic)\n - Logistic Epsilon Greedy\n - [Logistic Thompson Sampling](https://papers.nips.cc/paper/4321-an-empirical-evaluation-of-thompson-sampling)\n - [Logistic Upper Confidence Bound](https://dl.acm.org/doi/10.1145/2396761.2396767)\n- Offline (Off-Policy Learning)\n - [Inverse Probability Weighting (IPW) Learner](https://arxiv.org/abs/1503.02834)\n - Neural Network-based Policy Learner\n\n \n\n\n\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf (ope module\u306b\u5b9f\u88c5)
\n\n- OPE of Online Bandit Algorithms\n - [Replay Method (RM)](https://arxiv.org/abs/1003.5956)\n- OPE of Offline Bandit Algorithms\n - [Direct Method (DM)](https://arxiv.org/abs/0812.4044)\n - [Inverse Probability Weighting (IPW)](https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1079&context=cs_faculty_pubs)\n - [Self-Normalized Inverse Probability Weighting (SNIPW)](https://papers.nips.cc/paper/5748-the-self-normalized-estimator-for-counterfactual-learning)\n - [Doubly Robust (DR)](https://arxiv.org/abs/1503.02834)\n - [Switch Estimators](https://arxiv.org/abs/1612.01205)\n - [More Robust Doubly Robust (MRDR)](https://arxiv.org/abs/1802.03493)\n - [Doubly Robust with Optimistic Shrinkage (DRos)](https://arxiv.org/abs/1907.09623)\n - [Double Machine Learning (DML)](https://arxiv.org/abs/2002.08536)\n- OPE of Offline Slate Bandit Algorithms\n - [Independent Inverse Propensity Scoring (IIPS)](https://arxiv.org/abs/1804.10488)\n - [Reward Interaction Inverse Propensity Scoring (RIPS)](https://arxiv.org/abs/2007)\n- OPE of Offline Bandit Algorithms with Continuous Actions\n - [Kernelized Inverse Probability Weighting](https://arxiv.org/abs/1802.06037)\n - [Kernelized Self-Normalized Inverse Probability Weighting](https://arxiv.org/abs/1802.06037)\n - [Kernelized Doubly Robust](https://arxiv.org/abs/1802.06037)\n\n \n\nOpen Bandit Pipeline\u306f, \u4e0a\u8a18\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3084\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u306b\u52a0\u3048\u3066\u67d4\u8edf\u306a\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u3082\u63d0\u4f9b\u3057\u3066\u3044\u307e\u3059.\n\u3057\u305f\u304c\u3063\u3066\u7814\u7a76\u8005\u306f, \u72ec\u81ea\u306e\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3084\u63a8\u5b9a\u91cf\u3092\u5bb9\u6613\u306b\u5b9f\u88c5\u3059\u308b\u3053\u3068\u3067\u305d\u308c\u3089\u306e\u6027\u80fd\u3092\u8a55\u4fa1\u3067\u304d\u307e\u3059.\n\u3055\u3089\u306bOpen Bandit Pipeline\u306f, \u5b9f\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30d5\u30a3\u30fc\u30c9\u30d0\u30c3\u30af\u30c7\u30fc\u30bf\u3092\u6271\u3046\u305f\u3081\u306e\u30a4\u30f3\u30bf\u30d5\u30a7\u30fc\u30b9\u3092\u542b\u3093\u3067\u3044\u307e\u3059.\n\u3057\u305f\u304c\u3063\u3066, \u30a8\u30f3\u30b8\u30cb\u30a2\u3084\u30c7\u30fc\u30bf\u30b5\u30a4\u30a8\u30f3\u30c6\u30a3\u30b9\u30c8\u306a\u3069\u306e\u5b9f\u8df5\u8005\u306f, \u81ea\u793e\u306e\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3092Open Bandit Pipeline\u3068\u7d44\u307f\u5408\u308f\u305b\u308b\u3053\u3068\u3067\u7c21\u5358\u306b\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\u3092\u884c\u3046\u3053\u3068\u304c\u3067\u304d\u307e\u3059.\n\n## \u30c8\u30d4\u30c3\u30af\u3068\u30bf\u30b9\u30af\n\nOpen Bandit Dataset\u53ca\u3073Open Bandit Pipeline\u3067\u306f, \u4ee5\u4e0b\u306e\u7814\u7a76\u30c6\u30fc\u30de\u306b\u95a2\u3059\u308b\u5b9f\u9a13\u8a55\u4fa1\u3092\u884c\u3046\u3053\u3068\u304c\u3067\u304d\u307e\u3059.\n\n- **\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u6027\u80fd\u8a55\u4fa1 (Evaluation of Bandit Algorithms)**\uff1aOpen Bandit Dataset\u306b\u306f, \u30e9\u30f3\u30c0\u30e0\u65b9\u7b56\u306b\u3088\u3063\u3066\u53ce\u96c6\u3055\u308c\u305f\u5927\u898f\u6a21\u306a\u30ed\u30b0\u30c7\u30fc\u30bf\u304c\u542b\u307e\u308c\u3066\u3044\u307e\u3059. 
\u305d\u308c\u3092\u7528\u3044\u308b\u3053\u3068\u3067, \u65b0\u3057\u3044\u30aa\u30f3\u30e9\u30a4\u30f3\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u6027\u80fd\u3092\u8a55\u4fa1\u3059\u308b\u3053\u3068\u304c\u53ef\u80fd\u3067\u3059.\n\n- **\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\u306e\u6b63\u78ba\u3055\u306e\u8a55\u4fa1 (Evaluation of Off-Policy Evaluation)**\uff1aOpen Bandit Dataset\u306f, \u8907\u6570\u306e\u65b9\u7b56\u3092\u5b9f\u30b7\u30b9\u30c6\u30e0\u4e0a\u3067\u540c\u6642\u306b\u8d70\u3089\u305b\u308b\u3053\u3068\u306b\u3088\u308a\u751f\u6210\u3055\u308c\u305f\u30ed\u30b0\u30c7\u30fc\u30bf\u3067\u69cb\u6210\u3055\u308c\u3066\u3044\u307e\u3059. \u307e\u305fOpen Bandit Pipeline\u3092\u7528\u3044\u308b\u3053\u3068\u3067, \u30c7\u30fc\u30bf\u53ce\u96c6\u306b\u7528\u3044\u3089\u308c\u305f\u65b9\u7b56\u3092\u518d\u73fe\u3067\u304d\u307e\u3059. \u305d\u306e\u305f\u3081, \u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u306e\u63a8\u5b9a\u7cbe\u5ea6\u306e\u8a55\u4fa1\u3092\u884c\u3046\u3053\u3068\u304c\u3067\u304d\u307e\u3059.\n\n\n# \u30a4\u30f3\u30b9\u30c8\u30fc\u30eb\n\n\u4ee5\u4e0b\u306e\u901a\u308a, `pip`\u3092\u7528\u3044\u3066Open Bandit Pipeline\u3092\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\u3067\u304d\u307e\u3059.\n\n```bash\npip install obp\n```\n\n\u307e\u305f, \u672c\u30ea\u30dd\u30b8\u30c8\u30ea\u3092clone\u3057\u3066\u30bb\u30c3\u30c8\u30a2\u30c3\u30d7\u3059\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059.\n\n```bash\ngit clone https://github.com/st-tech/zr-obp\ncd zr-obp\npython setup.py install\n```\n\nPython\u304a\u3088\u3073\u5229\u7528\u30d1\u30c3\u30b1\u30fc\u30b8\u306e\u30d0\u30fc\u30b8\u30e7\u30f3\u306f\u4ee5\u4e0b\u306e\u901a\u308a\u3067\u3059\u3002\n\n```\n[tool.poetry.dependencies]\npython = \">=3.7.1,<3.10\"\ntorch = \"^1.9.0\"\nscikit-learn = \"^0.24.2\"\npandas = \"^1.3.2\"\nnumpy = \"^1.21.2\"\nmatplotlib = \"^3.4.3\"\ntqdm = \"^4.62.2\"\nscipy = \"^1.7.1\"\nPyYAML = \"^5.4.1\"\nseaborn = \"^0.11.2\"\npyieoe = \"^0.1.1\"\npingouin = \"^0.4.0\"\n```\n\n\u3053\u308c\u3089\u306e\u30d1\u30c3\u30b1\u30fc\u30b8\u306e\u30d0\u30fc\u30b8\u30e7\u30f3\u304c\u7570\u306a\u308b\u3068\u3001\u4f7f\u7528\u65b9\u6cd5\u3084\u6319\u52d5\u304c\u672c\u66f8\u57f7\u7b46\u6642\u70b9\u3068\u7570\u306a\u308b\u5834\u5408\u304c\u3042\u308b\u306e\u3067\u3001\u6ce8\u610f\u3057\u3066\u304f\u3060\u3055\u3044\u3002\n\n# \u4f7f\u7528\u65b9\u6cd5\n\n\u3053\u3053\u3067\u306f, Open Bandit Pipeline\u306e\u4f7f\u7528\u6cd5\u3092\u8aac\u660e\u3057\u307e\u3059. \u5177\u4f53\u4f8b\u3068\u3057\u3066, Open Bandit Dataset\u3092\u7528\u3044\u3066, \u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56\u306e\u6027\u80fd\u3092\u30aa\u30d5\u30e9\u30a4\u30f3\u8a55\u4fa1\u3059\u308b\u6d41\u308c\u3092\u5b9f\u88c5\u3057\u307e\u3059. 
\u4eba\u5de5\u30c7\u30fc\u30bf\u3084\u591a\u30af\u30e9\u30b9\u5206\u985e\u30c7\u30fc\u30bf\u3092\u7528\u3044\u305f\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\u306e\u5b9f\u88c5\u6cd5\u306f, [\u82f1\u8a9e\u7248\u306eREAMDE](https://github.com/st-tech/zr-obp/blob/master/README.md)\u3084[examples/quickstart/](https://github.com/st-tech/zr-obp/tree/master/examples/quickstart)\u3092\u3054\u78ba\u8a8d\u304f\u3060\u3055\u3044.\n\n\u4ee5\u4e0b\u306b\u793a\u3059\u3088\u3046\u306b, \u7d0410\u884c\u306e\u30b3\u30fc\u30c9\u3067\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\u306e\u6d41\u308c\u3092\u5b9f\u88c5\u3067\u304d\u307e\u3059.\n\n```python\n# Inverse Probability Weighting\u3068\u30e9\u30f3\u30c0\u30e0\u65b9\u7b56\u306b\u3088\u3063\u3066\u751f\u6210\u3055\u308c\u305f\u30ed\u30b0\u30c7\u30fc\u30bf\u3092\u7528\u3044\u3066, BernoulliTS\u306e\u6027\u80fd\u3092\u30aa\u30d5\u30e9\u30a4\u30f3\u3067\u8a55\u4fa1\u3059\u308b\nfrom obp.dataset import OpenBanditDataset\nfrom obp.policy import BernoulliTS\nfrom obp.ope import OffPolicyEvaluation, InverseProbabilityWeighting as IPW\n\n# (1) \u30c7\u30fc\u30bf\u306e\u8aad\u307f\u8fbc\u307f\u3068\u524d\u51e6\u7406\ndataset = OpenBanditDataset(behavior_policy='random', campaign='all')\nbandit_feedback = dataset.obtain_batch_bandit_feedback()\n\n# (2) \u30aa\u30d5\u65b9\u7b56\u5b66\u7fd2\nevaluation_policy = BernoulliTS(\n n_actions=dataset.n_actions,\n len_list=dataset.len_list,\n is_zozotown_prior=True,\n campaign=\"all\",\n random_state=12345\n)\naction_dist = evaluation_policy.compute_batch_action_dist(\n n_sim=100000, n_rounds=bandit_feedback[\"n_rounds\"]\n)\n\n# (3) \u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\nope = OffPolicyEvaluation(bandit_feedback=bandit_feedback, ope_estimators=[IPW()])\nestimated_policy_value = ope.estimate_policy_values(action_dist=action_dist)\n\n# \u30e9\u30f3\u30c0\u30e0\u65b9\u7b56\u306b\u5bfe\u3059\u308b\u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56\u306e\u6027\u80fd\u306e\u6539\u5584\u7387\uff08\u76f8\u5bfe\u30af\u30ea\u30c3\u30af\u7387\uff09\nrelative_policy_value_of_bernoulli_ts = estimated_policy_value['ipw'] / bandit_feedback['reward'].mean()\nprint(relative_policy_value_of_bernoulli_ts)\n1.198126...\n```\n\n\u4ee5\u4e0b, \u91cd\u8981\u306a\u8981\u7d20\u306b\u3064\u3044\u3066\u8aac\u660e\u3057\u307e\u3059.\n\n## (1) \u30c7\u30fc\u30bf\u306e\u8aad\u307f\u8fbc\u307f\u3068\u524d\u51e6\u7406\n\nOpen Bandit Pipeline\u306b\u306f, Open Bandit Dataset\u7528\u306e\u30c7\u30fc\u30bf\u8aad\u307f\u8fbc\u307f\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u3092\u7528\u610f\u3057\u3066\u3044\u307e\u3059.\n\u3053\u308c\u3092\u7528\u3044\u308b\u3053\u3068\u3067, Open Bandit Dataset\u306e\u8aad\u307f\u8fbc\u307f\u3084\u524d\u51e6\u7406\u3092\u7c21\u6f54\u306b\u884c\u3046\u3053\u3068\u304c\u3067\u304d\u307e\u3059.\n\n```python\n# \u300c\u5168\u30a2\u30a4\u30c6\u30e0\u30ad\u30e3\u30f3\u30da\u30fc\u30f3 (all)\u300d\u306b\u304a\u3044\u3066\u30e9\u30f3\u30c0\u30e0\u65b9\u7b56\u304c\u96c6\u3081\u305f\u30ed\u30b0\u30c7\u30fc\u30bf\u3092\u8aad\u307f\u8fbc\u3080.\n# OpenBanditDataset\u30af\u30e9\u30b9\u306b\u306f\u30c7\u30fc\u30bf\u3092\u53ce\u96c6\u3057\u305f\u65b9\u7b56\u3068\u30ad\u30e3\u30f3\u30da\u30fc\u30f3\u3092\u6307\u5b9a\u3059\u308b.\ndataset = OpenBanditDataset(behavior_policy='random', campaign='all')\n\n# \u30aa\u30d5\u65b9\u7b56\u5b66\u7fd2\u3084\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\u306b\u7528\u3044\u308b\u30ed\u30b0\u30c7\u30fc\u30bf\u3092\u5f97\u308b.\nbandit_feedback = dataset.obtain_batch_bandit_feedback()\n\nprint(bandit_feedback.keys())\n# 
dict_keys(['n_rounds', 'n_actions', 'action', 'position', 'reward', 'pscore', 'context', 'action_context'])\n```\n\n`obp.dataset.OpenBanditDataset` \u30af\u30e9\u30b9\u306e `pre_process` \u30e1\u30bd\u30c3\u30c9\u306b, \u72ec\u81ea\u306e\u7279\u5fb4\u91cf\u30a8\u30f3\u30b8\u30cb\u30a2\u30ea\u30f3\u30b0\u3092\u5b9f\u88c5\u3059\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059. [`custom_dataset.py`](https://github.com/st-tech/zr-obp/blob/master/benchmark/cf_policy_search/custom_dataset.py)\u306b\u306f, \u65b0\u3057\u3044\u7279\u5fb4\u91cf\u30a8\u30f3\u30b8\u30cb\u30a2\u30ea\u30f3\u30b0\u3092\u5b9f\u88c5\u3059\u308b\u4f8b\u3092\u793a\u3057\u3066\u3044\u307e\u3059. \u307e\u305f, `obp.dataset.BaseBanditDataset`\u30af\u30e9\u30b9\u306e\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u306b\u5f93\u3063\u3066\u65b0\u305f\u306a\u30af\u30e9\u30b9\u3092\u5b9f\u88c5\u3059\u308b\u3053\u3068\u3067, \u5c06\u6765\u516c\u958b\u3055\u308c\u308b\u3067\u3042\u308d\u3046Open Bandit Dataset\u4ee5\u5916\u306e\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3084\u81ea\u793e\u306b\u7279\u6709\u306e\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30c7\u30fc\u30bf\u3092\u6271\u3046\u3053\u3068\u3082\u3067\u304d\u307e\u3059.\n\n## (2) \u30aa\u30d5\u65b9\u7b56\u5b66\u7fd2\n\n\u524d\u51e6\u7406\u306e\u5f8c\u306f, \u6b21\u306e\u3088\u3046\u306b\u3057\u3066**\u30aa\u30d5\u65b9\u7b56\u5b66\u7fd2**\u3092\u5b9f\u884c\u3057\u307e\u3059.\n\n```python\n# \u8a55\u4fa1\u5bfe\u8c61\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u5b9a\u7fa9. \u3053\u3053\u3067\u306f, \u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56\u306e\u6027\u80fd\u3092\u30aa\u30d5\u30e9\u30a4\u30f3\u8a55\u4fa1\u3059\u308b.\n# \u7814\u7a76\u8005\u304c\u72ec\u81ea\u306b\u5b9f\u88c5\u3057\u305f\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u65b9\u7b56\u3092\u7528\u3044\u308b\u3053\u3068\u3082\u3067\u304d\u308b.\nevaluation_policy = BernoulliTS(\n n_actions=dataset.n_actions,\n len_list=dataset.len_list,\n is_zozotown_prior=True, # ZOZOTOWN\u4e0a\u3067\u306e\u6319\u52d5\u3092\u518d\u73fe\n campaign=\"all\",\n random_state=12345\n)\n# \u30b7\u30df\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3\u3092\u7528\u3044\u3066\u3001\u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56\u306b\u3088\u308b\u884c\u52d5\u9078\u629e\u78ba\u7387\u3092\u7b97\u51fa.\naction_dist = evaluation_policy.compute_batch_action_dist(\n n_sim=100000, n_rounds=bandit_feedback[\"n_rounds\"]\n)\n```\n\n`BernoulliTS`\u306e`compute_batch_action_dist`\u30e1\u30bd\u30c3\u30c9\u306f, \u4e0e\u3048\u3089\u308c\u305f\u30d9\u30fc\u30bf\u5206\u5e03\u306e\u30d1\u30e9\u30e1\u30fc\u30bf\u306b\u57fa\u3065\u3044\u305f\u884c\u52d5\u9078\u629e\u78ba\u7387(`action_dist`)\u3092\u30b7\u30df\u30e5\u30ec\u30fc\u30b7\u30e7\u30f3\u306b\u3088\u3063\u3066\u7b97\u51fa\u3057\u307e\u3059. 
\u307e\u305f\u30e6\u30fc\u30b6\u306f[`./obp/policy/base.py`](https://github.com/st-tech/zr-obp/blob/master/obp/policy/base.py)\u306b\u5b9f\u88c5\u3055\u308c\u3066\u3044\u308b\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u306b\u5f93\u3046\u3053\u3068\u3067\u72ec\u81ea\u306e\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u5b9f\u88c5\u3057, \u305d\u306e\u6027\u80fd\u3092\u8a55\u4fa1\u3059\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059.\n\n\n## (3) \u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1 \uff08Off-Policy Evaluation\uff09\n\n\u6700\u5f8c\u306e\u30b9\u30c6\u30c3\u30d7\u306f, \u30ed\u30b0\u30c7\u30fc\u30bf\u3092\u7528\u3044\u3066\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u6027\u80fd\u3092\u30aa\u30d5\u30e9\u30a4\u30f3\u8a55\u4fa1\u3059\u308b**\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1**\u3067\u3059.\nOpen Bandit Pipeline\u3092\u4f7f\u3046\u3053\u3068\u3067, \u6b21\u306e\u3088\u3046\u306b\u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\u3092\u5b9f\u88c5\u3067\u304d\u307e\u3059.\n\n```python\n# IPW\u63a8\u5b9a\u91cf\u3092\u7528\u3044\u3066\u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56\u306e\u6027\u80fd\u3092\u30aa\u30d5\u30e9\u30a4\u30f3\u8a55\u4fa1\u3059\u308b.\n# OffPolicyEvaluation\u30af\u30e9\u30b9\u306b\u306f, \u30aa\u30d5\u30e9\u30a4\u30f3\u8a55\u4fa1\u306b\u7528\u3044\u308b\u30ed\u30b0\u30d0\u30f3\u30c7\u30a3\u30c3\u30c8\u30c7\u30fc\u30bf\u3068\u7528\u3044\u308b\u63a8\u5b9a\u91cf\u3092\u6e21\u3059\uff08\u8907\u6570\u8a2d\u5b9a\u53ef\uff09.\nope = OffPolicyEvaluation(bandit_feedback=bandit_feedback, ope_estimators=[IPW()])\nestimated_policy_value = ope.estimate_policy_values(action_dist=action_dist)\nprint(estimated_policy_value)\n{'ipw': 0.004553...}\u3000# \u8a2d\u5b9a\u3055\u308c\u305f\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u306b\u3088\u308b\u6027\u80fd\u306e\u63a8\u5b9a\u5024\u3092\u542b\u3093\u3060\u8f9e\u66f8.\n\n# \u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56\u306e\u6027\u80fd\u306e\u63a8\u5b9a\u5024\u3068\u30e9\u30f3\u30c0\u30e0\u65b9\u7b56\u306e\u771f\u306e\u6027\u80fd\u3092\u6bd4\u8f03\u3059\u308b.\nrelative_policy_value_of_bernoulli_ts = estimated_policy_value['ipw'] / bandit_feedback['reward'].mean()\n# \u30aa\u30d5\u65b9\u7b56\u8a55\u4fa1\u306b\u3088\u3063\u3066, \u30c8\u30f3\u30d7\u30bd\u30f3\u62bd\u51fa\u65b9\u7b56\u306e\u6027\u80fd\u306f\u30e9\u30f3\u30c0\u30e0\u65b9\u7b56\u306e\u6027\u80fd\u309219.81%\u4e0a\u56de\u308b\u3068\u63a8\u5b9a\u3055\u308c\u305f.\nprint(relative_policy_value_of_bernoulli_ts)\n1.198126...\n```\n\n`obp.ope.BaseOffPolicyEstimator` \u30af\u30e9\u30b9\u306e\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9\u306b\u5f93\u3046\u3053\u3068\u3067, \u72ec\u81ea\u306e\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u3092\u5b9f\u88c5\u3059\u308b\u3053\u3068\u3082\u3067\u304d\u307e\u3059. \u3053\u308c\u306b\u3088\u308a\u65b0\u305f\u306a\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u306e\u63a8\u5b9a\u7cbe\u5ea6\u3092\u691c\u8a3c\u3059\u308b\u3053\u3068\u304c\u53ef\u80fd\u3067\u3059.\n\u307e\u305f, `obp.ope.OffPolicyEvaluation`\u306e`ope_estimators`\u306b\u8907\u6570\u306e\u30aa\u30d5\u65b9\u7b56\u63a8\u5b9a\u91cf\u3092\u8a2d\u5b9a\u3059\u308b\u3053\u3068\u3067, \u8907\u6570\u306e\u63a8\u5b9a\u91cf\u306b\u3088\u308b\u63a8\u5b9a\u5024\u3092\u540c\u6642\u306b\u5f97\u308b\u3053\u3068\u3082\u53ef\u80fd\u3067\u3059. 
`bandit_feedback['reward'].mean()` \u306f\u89b3\u6e2c\u3055\u308c\u305f\u5831\u916c\u306e\u7d4c\u9a13\u5e73\u5747\u5024\uff08\u30aa\u30f3\u65b9\u7b56\u63a8\u5b9a\uff09\u3067\u3042\u308a, \u30e9\u30f3\u30c0\u30e0\u65b9\u7b56\u306e\u771f\u306e\u6027\u80fd\u3092\u8868\u3057\u307e\u3059.\n\n\n# \u5f15\u7528\nOpen Bandit Dataset\u3084Open Bandit Pipeline\u3092\u6d3b\u7528\u3057\u3066\u8ad6\u6587\u3084\u30d6\u30ed\u30b0\u8a18\u4e8b\u7b49\u3092\u57f7\u7b46\u3055\u308c\u305f\u5834\u5408, \u4ee5\u4e0b\u306e\u8ad6\u6587\u3092\u5f15\u7528\u3057\u3066\u3044\u305f\u3060\u304f\u3088\u3046\u304a\u9858\u3044\u3044\u305f\u3057\u307e\u3059.\n\nYuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita.
\n**Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation**
\n[https://arxiv.org/abs/2008.07146](https://arxiv.org/abs/2008.07146)\n\nBibtex:\n```\n@article{saito2020open,\n title={Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation},\n author={Saito, Yuta and Shunsuke, Aihara and Megumi, Matsutani and Yusuke, Narita},\n journal={arXiv preprint arXiv:2008.07146},\n year={2020}\n}\n```\n\n# Google Group\n\u672c\u30d7\u30ed\u30b8\u30a7\u30af\u30c8\u306b\u95a2\u3059\u308b\u6700\u65b0\u60c5\u5831\u306f\u6b21\u306eGoogle Group\u306b\u3066\u968f\u6642\u304a\u77e5\u3089\u305b\u3057\u3066\u3044\u307e\u3059. \u305c\u3072\u3054\u767b\u9332\u304f\u3060\u3055\u3044: https://groups.google.com/g/open-bandit-project\n\n# \u30b3\u30f3\u30c8\u30ea\u30d3\u30e5\u30fc\u30b7\u30e7\u30f3\nOpen Bandit Pipeline\u3078\u306e\u3069\u3093\u306a\u8ca2\u732e\u3082\u6b53\u8fce\u3044\u305f\u3057\u307e\u3059. \u30d7\u30ed\u30b8\u30a7\u30af\u30c8\u306b\u8ca2\u732e\u3059\u308b\u305f\u3081\u306e\u30ac\u30a4\u30c9\u30e9\u30a4\u30f3\u306f, [CONTRIBUTING.md](./CONTRIBUTING.md)\u3092\u53c2\u7167\u3057\u3066\u304f\u3060\u3055\u3044\u3002\n\n# \u30e9\u30a4\u30bb\u30f3\u30b9\n\u3053\u306e\u30d7\u30ed\u30b8\u30a7\u30af\u30c8\u306fApache 2.0\u30e9\u30a4\u30bb\u30f3\u30b9\u3092\u63a1\u7528\u3057\u3066\u3044\u307e\u3059. \u8a73\u7d30\u306f, [LICENSE](https://github.com/st-tech/zr-obp/blob/master/LICENSE)\u3092\u53c2\u7167\u3057\u3066\u304f\u3060\u3055\u3044.\n\n# \u30d7\u30ed\u30b8\u30a7\u30af\u30c8\u30c1\u30fc\u30e0\n\n- [\u9f4b\u85e4\u512a\u592a](https://usait0.com/ja/) (**Main Contributor**; \u534a\u719f\u4eee\u60f3\u682a\u5f0f\u4f1a\u793e / \u30b3\u30fc\u30cd\u30eb\u5927\u5b66)\n- [\u7c9f\u98ef\u539f\u4fca\u4ecb](https://www.linkedin.com/in/shunsukeaihara/) (ZOZO\u7814\u7a76\u6240)\n- \u677e\u8c37\u6075 (ZOZO\u7814\u7a76\u6240)\n- [\u6210\u7530\u60a0\u8f14](https://www.yusuke-narita.com/) (\u534a\u719f\u4eee\u60f3\u682a\u5f0f\u4f1a\u793e / \u30a4\u30a7\u30fc\u30eb\u5927\u5b66)\n\n## \u958b\u767a\u30e1\u30f3\u30d0\u30fc\n- [\u91ce\u6751\u5c06\u5bdb](https://twitter.com/nomuramasahir0) (\u682a\u5f0f\u4f1a\u793e\u30b5\u30a4\u30d0\u30fc\u30a8\u30fc\u30b8\u30a7\u30f3\u30c8 / \u534a\u719f\u4eee\u60f3\u682a\u5f0f\u4f1a\u793e)\n- [\u9ad8\u5c71\u6643\u4e00](https://fullflu.hatenablog.com/) (\u534a\u719f\u4eee\u60f3\u682a\u5f0f\u4f1a\u793e)\n- [\u9ed2\u5ca9\u7a1c](https://kurorororo.github.io) (\u30c8\u30ed\u30f3\u30c8\u5927\u5b66 / \u534a\u719f\u4eee\u60f3\u682a\u5f0f\u4f1a\u793e)\n- [\u6e05\u539f\u660e\u52a0](https://sites.google.com/view/harukakiyohara) (\u6771\u4eac\u5de5\u696d\u5927\u5b66 / \u534a\u719f\u4eee\u60f3\u682a\u5f0f\u4f1a\u793e)\n\n# \u9023\u7d61\u5148\n\u8ad6\u6587\u3084Open Bandit Dataset, Open Bandit Pipeline\u306b\u95a2\u3059\u308b\u3054\u8cea\u554f\u306f, \u6b21\u306e\u30e1\u30fc\u30eb\u30a2\u30c9\u30ec\u30b9\u5b9b\u306b\u304a\u9858\u3044\u3044\u305f\u3057\u307e\u3059: saito@hanjuku-kaso.com\n\n# \u53c2\u8003\n\n\n\u8ad6\u6587
\n\n1. Alina Beygelzimer and John Langford. [The offset tree for learning with partial labels](https://arxiv.org/abs/0812.4044). In *Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery&Data Mining*, 129\u2013138, 2009.\n\n2. Olivier Chapelle and Lihong Li. [An empirical evaluation of thompson sampling](https://papers.nips.cc/paper/4321-an-empirical-evaluation-of-thompson-sampling). In *Advances in Neural Information Processing Systems*, 2249\u20132257, 2011.\n\n3. Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. [Unbiased Offline Evaluation of Contextual-bandit-based News Article Recommendation Algorithms](https://arxiv.org/abs/1003.5956). In *Proceedings of the Fourth ACM International Conference on Web Search and Data Mining*, 297\u2013306, 2011.\n\n4. Alex Strehl, John Langford, Lihong Li, and Sham M Kakade. [Learning from Logged Implicit Exploration Data](https://arxiv.org/abs/1003.0120). In *Advances in Neural Information Processing Systems*, 2217\u20132225, 2010.\n\n5. Doina Precup, Richard S. Sutton, and Satinder Singh. [Eligibility Traces for Off-Policy Policy Evaluation](https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1079&context=cs_faculty_pubs). In *Proceedings of the 17th International Conference on Machine Learning*, 759\u2013766. 2000.\n\n6. Miroslav Dud\u00edk, Dumitru Erhan, John Langford, and Lihong Li. [Doubly Robust Policy Evaluation and Optimization](https://arxiv.org/abs/1503.02834). *Statistical Science*, 29:485\u2013511, 2014.\n\n7. Adith Swaminathan and Thorsten Joachims. [The Self-normalized Estimator for Counterfactual Learning](https://papers.nips.cc/paper/5748-the-self-normalized-estimator-for-counterfactual-learning). In *Advances in Neural Information Processing Systems*, 3231\u20133239, 2015.\n\n8. Dhruv Kumar Mahajan, Rajeev Rastogi, Charu Tiwari, and Adway Mitra. [LogUCB: An Explore-Exploit Algorithm for Comments Recommendation](https://dl.acm.org/doi/10.1145/2396761.2396767). In *Proceedings of the 21st ACM international conference on Information and knowledge management*, 6\u201315. 2012.\n\n9. Lihong Li, Wei Chu, John Langford, Taesup Moon, and Xuanhui Wang. [An Unbiased Offline Evaluation of Contextual Bandit Algorithms with Generalized Linear Models](http://proceedings.mlr.press/v26/li12a.html). In *Journal of Machine Learning Research: Workshop and Conference Proceedings*, volume 26, 19\u201336. 2012.\n\n10. Yu-Xiang Wang, Alekh Agarwal, and Miroslav Dudik. [Optimal and Adaptive Off-policy Evaluation in Contextual Bandits](https://arxiv.org/abs/1612.01205). In *Proceedings of the 34th International Conference on Machine Learning*, 3589\u20133597. 2017.\n\n11. Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. [More Robust Doubly Robust Off-policy Evaluation](https://arxiv.org/abs/1802.03493). In *Proceedings of the 35th International Conference on Machine Learning*, 1447\u20131456. 2018.\n\n12. Nathan Kallus and Masatoshi Uehara. [Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning](https://arxiv.org/abs/1906.03735). In *Advances in Neural Information Processing Systems*. 2019.\n\n13. Yi Su, Lequn Wang, Michele Santacatterina, and Thorsten Joachims. [CAB: Continuous Adaptive Blending Estimator for Policy Evaluation and Learning](https://proceedings.mlr.press/v97/su19a). In *Proceedings of the 36th International Conference on Machine Learning*, 6005-6014, 2019.\n\n14. Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, and Miroslav Dud\u00edk. 
[Doubly Robust Off-policy Evaluation with Shrinkage](https://proceedings.mlr.press/v119/su20a.html). In *Proceedings of the 37th International Conference on Machine Learning*, 9167-9176, 2020.\n\n15. Nathan Kallus and Angela Zhou. [Policy Evaluation and Optimization with Continuous Treatments](https://arxiv.org/abs/1802.06037). In *International Conference on Artificial Intelligence and Statistics*, 1243\u20131251. PMLR, 2018.\n\n16. Aman Agarwal, Soumya Basu, Tobias Schnabel, and Thorsten Joachims. [Effective Evaluation using Logged Bandit Feedback from Multiple Loggers](https://arxiv.org/abs/1703.06180). In *Proceedings of the 23rd ACM SIGKDD international conference on Knowledge discovery and data mining*, 687\u2013696, 2017.\n\n17. Nathan Kallus, Yuta Saito, and Masatoshi Uehara. [Optimal Off-Policy Evaluation from Multiple Logging Policies](http://proceedings.mlr.press/v139/kallus21a.html). In *Proceedings of the 38th International Conference on Machine Learning*, 5247-5256, 2021.\n\n18. Shuai Li, Yasin Abbasi-Yadkori, Branislav Kveton, S Muthukrishnan, Vishwa Vinay, and Zheng Wen. [Offline Evaluation of Ranking Policies with Click Models](https://arxiv.org/pdf/1804.10488). In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery&Data Mining*, 1685\u20131694, 2018.\n\n19. James McInerney, Brian Brost, Praveen Chandar, Rishabh Mehrotra, and Benjamin Carterette. [Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions](https://arxiv.org/abs/2007.12986). In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery&Data Mining*, 1779\u20131788, 2020.\n\n20. Yusuke Narita, Shota Yasui, and Kohei Yata. [Debiased Off-Policy Evaluation for Recommendation Systems](https://dl.acm.org/doi/10.1145/3460231.3474231). In *Proceedings of the Fifteenth ACM Conference on Recommender Systems*, 372-379, 2021.\n\n21. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. [Open Graph Benchmark: Datasets for Machine Learning on Graphs](https://arxiv.org/abs/2005.00687). In *Advances in Neural Information Processing Systems*. 2020.\n\n22. Noveen Sachdeva, Yi Su, and Thorsten Joachims. [Off-policy Bandits with Deficient Support](https://dl.acm.org/doi/10.1145/3394486.3403139). In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, 965-975, 2021.\n\n23. Yi Su, Pavithra Srinath, and Akshay Krishnamurthy. [Adaptive Estimator Selection for Off-Policy Evaluation](https://proceedings.mlr.press/v119/su20d.html). In *Proceedings of the 38th International Conference on Machine Learning*, 9196-9205, 2021.\n\n24. Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto. [Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model](https://dl.acm.org/doi/10.1145/3488560.3498380). In *Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining*, 487-497, 2022.\n\n25. Yuta Saito and Thorsten Joachims. [Off-Policy Evaluation for Large Action Spaces via Embeddings](https://arxiv.org/abs/2202.06317). In *Proceedings of the 39th International Conference on Machine Learning*, 2022.\n\n \n\n\n\u30aa\u30fc\u30d7\u30f3\u30bd\u30fc\u30b9\u30d7\u30ed\u30b8\u30a7\u30af\u30c8
\nThis project was developed with reference to **Open Graph Benchmark** ([[github](https://github.com/snap-stanford/ogb)] [[project page](https://ogb.stanford.edu)] [[paper](https://arxiv.org/abs/2005.00687)]).\n \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "facebookresearch/ic_gan", "link": "https://github.com/facebookresearch/ic_gan", "tags": [], "stars": 514, "description": "Official repository for the paper \"Instance-Conditioned GAN\" by Arantxa Casanova, Marlene Careil, Jakob Verbeek, Micha\u0142 Dro\u017cd\u017cal, Adriana Romero-Soriano.", "lang": "Python", "repo_lang": "", "readme": "# IC-GAN: Instance-Conditioned GAN
\nOfficial Pytorch code of [Instance-Conditioned GAN](https://arxiv.org/abs/2109.05070) by Arantxa Casanova, Marl\u00e8ne Careil, Jakob Verbeek, Micha\u0142 Dro\u017cd\u017cal, Adriana Romero-Soriano. \n![IC-GAN results](./figures/github_image.png?raw=true)\n\n## Generate images with IC-GAN in a Colab Notebook\nWe provide a [Google Colab notebook](https://colab.research.google.com/github/facebookresearch/ic_gan/blob/main/inference/icgan_colab.ipynb) to generate images with IC-GAN and its class-conditional counter part. We also invite users to check out the [demo on Replicate](https://replicate.ai/arantxacasanova/ic_gan), courtesy of [Replicate](https://replicate.ai/home).\n\nThe figure below depicts two instances, unseen during training and downloaded from [Creative Commons search](https://search.creativecommons.org), and the generated images with IC-GAN and class-conditional IC-GAN when conditioning on the class \"castle\":\n\n \n
\n\nAdditionally, and inspired by [this Colab](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/ClipBigGAN.ipynb), we provide the functionality in the same Colab notebook to guide generations with text captions, using the [CLIP model](https://github.com/openai/CLIP). \nAs an example, the following figure shows three instance conditionings and a text caption (top), followed by the resulting generated images with IC-GAN (bottom), when optimizing the noise vector following CLIP's gradient for 100 iterations. \n\n \n
\n\n\n*Credit for the three instance conditionings, from left to right, that were modified with a resize and central crop:* [1: \"Landscape in Bavaria\" by shining.darkness, licensed under CC BY 2.0](https://search.creativecommons.org/photos/92ef279c-4469-49a5-aa4b-48ad746f2dc4), [2: \"Fantasy Landscape - slolsss\" by Douglas Tofoli is marked with CC PDM 1.0](https://search.creativecommons.org/photos/13646adc-f1df-437a-a0dd-8223452ee46c), [3: \"How to Draw Landscapes Simply\" by Kuwagata Keisai is marked with CC0 1.0](https://search.creativecommons.org/photos/2ab9c3b7-de99-4536-81ed-604ee988bd5f)\n\n\n## Requirements\n* Python 3.8 \n* Cuda v10.2 / Cudnn v7.6.5\n* gcc v7.3.0\n* Pytorch 1.8.0\n* A conda environment can be created from `environment.yaml` by entering the command: `conda env create -f environment.yml`, that contains the aforemention version of Pytorch and other required packages. \n* Faiss: follow the instructions in the [original repository](https://github.com/facebookresearch/faiss).\n\n\n## Overview \n\nThis repository consists of four main folders:\n* `data_utils`: A common folder to obtain and format the data needed to train and test IC-GAN, agnostic of the specific backbone. \n* `inference`: Scripts to test the models both qualitatively and quantitatively.\n* `BigGAN_PyTorch`: It provides the training, evaluation and sampling scripts for IC-GAN with a BigGAN backbone. The code base comes from [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch), made available under the MIT License. It has been modified to [add additional utilities](#biggan-changelog) and it enables IC-GAN training on top of it.\n* `stylegan2_ada_pytorch`: It provides the training, evaluation and sampling scripts for IC-GAN with a StyleGAN2 backbone. The code base comes from [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), made available under the [Nvidia Source Code License](https://nvlabs.github.io/stylegan2-ada-pytorch/license.html). It has been modified to [add additional utilities](#stylegan-changelog) and it enables IC-GAN training on top of it.\n\n\n## (Python script) Generate images with IC-GAN\nAlternatively, we can generate images with IC-GAN models directly from a python script, by following the next steps:\n1) Download the desired pretrained models (links below) and the [pre-computed 1000 instance features from ImageNet](https://dl.fbaipublicfiles.com/ic_gan/stored_instances.tar.gz) and extract them into a folder `pretrained_models_path`. \n\n| model | backbone | class-conditional? 
| training dataset | resolution | url |\n|-------------------|-------------------|-------------------|---------------------|--------------------|--------------------|\n| IC-GAN | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256.tar.gz) | \n| IC-GAN (half capacity) | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256_halfcap.tar.gz) | \n| IC-GAN | BigGAN | No | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res128.tar.gz) | \n| IC-GAN | BigGAN | No | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res64.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256.tar.gz) | \n| IC-GAN (half capacity) | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256_halfcap.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res128.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res64.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet-LT | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res256.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet-LT | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res128.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet-LT | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res64.tar.gz) | \n| IC-GAN | BigGAN | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res256.tar.gz) | \n| IC-GAN | BigGAN | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res128.tar.gz) | \n| IC-GAN | StyleGAN2 | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res256.tar.gz) | \n| IC-GAN | StyleGAN2 | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res128.tar.gz) | \n\n2) Execute: \n```\npython inference/generate_images.py --root_path [pretrained_models_path] --model [model] --model_backbone [backbone] --resolution [res]\n```\n* `model` can be chosen from `[\"icgan\", \"cc_icgan\"]` to use the IC-GAN or the class-conditional IC-GAN model respectively.\n* `backbone` can be chosen from `[\"biggan\", \"stylegan2\"]`.\n* `res` indicates the resolution at which the model has been trained. For ImageNet, choose one in `[64, 128, 256]`, and for COCO-Stuff, one in `[128, 256]`.\n\nThis script results in a .PNG file where several generated images are shown, given an instance feature (each row), and a sampled noise vector (each grid position).\n \nAdditional and optional parameters:\n* `index`: (None by default), is an integer from 0 to 999 that choses a specific instance feature vector out of the 1000 instances that have been selected with k-means on the ImageNet dataset and stored in `pretrained_models_path/stored_instances`.\n* `swap_target`: (None by default) is an integer from 0 to 999 indicating an ImageNet class label. 
This label will be used to condition the class-conditional IC-GAN, regardless of which instance features are being used.\n* `which_dataset`: (ImageNet by default) can be chosen from `[\"imagenet\", \"coco\"]` to indicate which dataset (training split) to sample the instances from. \n* `trained_dataset`: (ImageNet by default) can be chosen from `[\"imagenet\", \"coco\"]` to indicate the dataset in which the IC-GAN model has been trained on. \n* `num_imgs_gen`: (5 by default), it changes the number of noise vectors to sample per conditioning. Increasing this number results in a bigger .PNG file to save and load.\n* `num_conditionings_gen`: (5 by default), it changes the number of conditionings to sample. Increasing this number results in a bigger .PNG file to save and load.\n* `z_var`: (1.0 by default) controls the truncation factor for the generation. \n* Optionally, the script can be run with the following additional options `--visualize_instance_images --dataset_path [dataset_path]` to visualize the ground-truth images corresponding to the conditioning instance features, given a path to the dataset's ground-truth images `dataset_path`. Ground-truth instances will be plotted as the leftmost image for each row.\n\n## Data preparation \n\n
#### ImageNet

1) Download the ImageNet dataset.
2) Download the SwAV feature extractor weights.
3) Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where the hdf5 files will be stored, `path_imnet` by the path where the ImageNet dataset is downloaded, and `path_swav` by the path where the SwAV weights are stored.
4) Execute `./data_utils/prepare_data.sh imagenet [resolution]`, where `[resolution]` can be an integer in {64,128,256}. This script will create several hdf5 files:
 * `ILSVRC[resolution]_xy.hdf5` and `ILSVRC[resolution]_val_xy.hdf5`, where images and labels are stored for the training and validation set respectively.
 * `ILSVRC[resolution]_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
 * `ILSVRC[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of [k_nn] neighbors for each of the instance features.

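After running the script, you can sanity-check the resulting files by listing their contents with `h5py`. This is only an illustrative sketch; the exact dataset keys inside each file depend on the preparation script and your chosen resolution.

```python
import h5py

# Hypothetical output location; replace with your own out_path and resolution.
path = "/path/to/out_path/ILSVRC128_xy.hdf5"

with h5py.File(path, "r") as f:
    # Print every dataset stored in the file together with its shape and dtype.
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)
```
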
#### ImageNet-LT

1) Download the ImageNet dataset. Following ImageNet-LT, the file `ImageNet_LT_train.txt` needs to be downloaded and stored in the folder `./BigGAN_PyTorch/imagenet_lt`.
2) Download the pre-trained weights of the ResNet trained on ImageNet-LT, provided by the classifier-balancing repository.
3) Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where the hdf5 files will be stored, `path_imnet` by the path where the ImageNet dataset is downloaded, and `path_classifier_lt` by the path where the pre-trained ResNet50 weights are stored.
4) Execute `./data_utils/prepare_data.sh imagenet_lt [resolution]`, where `[resolution]` can be an integer in {64,128,256}. This script will create several hdf5 files:
 * `ILSVRC[resolution]longtail_xy.hdf5`, where images and labels are stored for the training and validation set respectively.
 * `ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
 * `ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of [k_nn] neighbors for each of the instance features.

#### COCO-Stuff

1) Download the dataset following the LostGANs' repository instructions.
2) Download the SwAV feature extractor weights.
3) Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where the hdf5 files will be stored, `path_imnet` by the path where the dataset is downloaded, and `path_swav` by the path where the SwAV weights are stored.
4) Execute `./data_utils/prepare_data.sh coco [resolution]`, where `[resolution]` can be an integer in {128,256}. This script will create several hdf5 files:
 * `COCO[resolution]_xy.hdf5` and `COCO[resolution]_val_test_xy.hdf5`, where images and labels are stored for the training and evaluation set respectively.
 * `COCO[resolution]_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
 * `COCO[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of [k_nn] neighbors for each of the instance features.

#### Other datasets

1) Download the corresponding dataset and store it in a folder `dataset_path`.
2) Download the SwAV feature extractor weights.
3) Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where the hdf5 files will be stored and `path_swav` by the path where the SwAV weights are stored.
4) Execute `./data_utils/prepare_data.sh [dataset_name] [resolution] [dataset_path]`, where `[dataset_name]` will be the dataset name, `[resolution]` can be an integer, for example 128 or 256, and `dataset_path` contains the dataset images. This script will create several hdf5 files:
 * `[dataset_name][resolution]_xy.hdf5`, where images and labels are stored for the training set.
 * `[dataset_name][resolution]_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
 * `[dataset_name][resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of `k_nn` neighbors for each of the instance features.

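To get an intuition for how the feature and neighbor files relate, the sketch below loads one instance feature and looks up the indices of its stored nearest neighbors. The file names are examples for a hypothetical dataset, and the code simply takes the first dataset in each file since the exact key names are not documented here.

```python
import h5py
import numpy as np

# Example file names for a hypothetical 128x128 dataset; adjust to your own run.
feats_file = "mydataset128_feats_selfsupervised_resnet50.hdf5"
nn_file = "mydataset128_feats_selfsupervised_resnet50_nn_k50.hdf5"

with h5py.File(feats_file, "r") as f_feats, h5py.File(nn_file, "r") as f_nn:
    feat_key = list(f_feats.keys())[0]  # first dataset in the features file
    nn_key = list(f_nn.keys())[0]       # first dataset in the neighbors file

    idx = 0                                        # pick an arbitrary instance
    feature = np.asarray(f_feats[feat_key][idx])   # its conditioning feature vector
    neighbors = np.asarray(f_nn[nn_key][idx])      # indices of its k_nn nearest neighbors

    print("feature shape:", feature.shape)
    print("first neighbor indices:", neighbors[:10])
```
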
#### How to subsample an instance feature dataset with k-means

To downsample the instance feature vector dataset, after we have prepared the data, we can use the k-means algorithm:

`python data_utils/store_kmeans_indexes.py --resolution [resolution] --which_dataset [dataset_name] --data_root [data_path]`

 * Adding `--gpu` allows the faiss library to compute k-means leveraging GPUs, resulting in faster execution.
 * Adding the parameter `--feature_extractor [feature_extractor]` chooses which feature extractor to use, with `feature_extractor` in `['selfsupervised', 'classification']`, if we are using SwAV as the feature extractor or the ResNet pretrained on the classification task on ImageNet, respectively.
 * The number of k-means clusters can be set with `--kmeans_subsampled [centers]`, where `centers` is an integer.

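For reference, the core idea of this subsampling step can be reproduced with faiss directly: run k-means over the instance features and keep, for each centroid, the closest stored feature. The snippet below is a simplified sketch of that idea, not the repository's exact implementation.

```python
import faiss
import numpy as np

def kmeans_subsample(features: np.ndarray, n_centers: int, use_gpu: bool = False) -> np.ndarray:
    """Cluster instance features and return, for each centroid, the index of the closest feature."""
    features = np.ascontiguousarray(features, dtype="float32")
    kmeans = faiss.Kmeans(features.shape[1], n_centers, niter=20, gpu=use_gpu)
    kmeans.train(features)

    # For each centroid, retrieve the single nearest stored feature vector.
    index = faiss.IndexFlatL2(features.shape[1])
    index.add(features)
    _, nearest = index.search(kmeans.centroids, 1)
    return nearest.reshape(-1)

# Toy example with random vectors standing in for the stored instance features.
selected = kmeans_subsample(np.random.rand(10000, 2048), n_centers=1000)
print(selected[:10])
```
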
\n\n## How to train the models\n\n#### BigGAN or StyleGAN2 backbone\nTraining parameters are stored in JSON files in `[backbone_folder]/config_files/[dataset]/*.json`, where `[backbone_folder]` is either BigGAN_Pytorch or stylegan2_ada_pytorch and `[dataset]` can either be ImageNet, ImageNet-LT or COCO_Stuff.\n```\ncd BigGAN_PyTorch\npython run.py --json_config config_files//.json --data_root [data_root] --base_root [base_root]\n```\nor \n```\ncd stylegan_ada_pytorch\npython run.py --json_config config_files//.json --data_root [data_root] --base_root [base_root]\n```\nwhere:\n* `data_root` path where the data has been prepared and stored, following the previous section (Data preparation). \n* `base_root` path where to store the model weights and logs.\n\n\nNote that one can create other JSON files to modify the training parameters.\n\n#### Other backbones\nTo be able to run IC-GAN with other backbones, we provide some orientative steps:\n* Place the new backbone code in a new folder under `ic_gan` (`ic_gan/new_backbone`).\n* Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator). \n* Create a `trainer.py` file with the training loop to train an IC-GAN with the new backbone. The `data_utils` folder provides the tools to prepare the dataset, load the data and conditioning sampling to train an IC-GAN. The IC-GAN with BigGAN backbone [`trainer.py`](BigGAN_PyTorch/trainer.py) file can be used as an inspiration.\n\n\n \n## How to test the models\nTo obtain the FID and IS metrics on ImageNet and ImageNet-LT: \n1) Execute:\n``` \npython inference/test.py --json_config [BigGAN-PyTorch or stylegan-ada-pytorch]/config_files//.json --num_inception_images [num_imgs] --sample_num_npz [num_imgs] --eval_reference_set [ref_set] --sample_npz --base_root [base_root] --data_root [data_root] --kmeans_subsampled [kmeans_centers] --model_backbone [backbone]\n```\nTo obtain the tensorflow IS and FID metrics, use an environment with the Python <3.7 and Tensorflow 1.15. Then:\n\n2) Obtain Inception Scores and pre-computed FID moments:\n ``` \n python ../data_utils/inception_tf13.py --experiment_name [exp_name] --experiment_root [base_root] --kmeans_subsampled [kmeans_centers] \n ```\n\nFor stratified FIDs in the ImageNet-LT dataset, the following parameters can be added `--which_dataset 'imagenet_lt' --split 'val' --strat_name [stratified_split]`, where `stratified_split` can be in `[few,low, many]`.\n \n3) (Only needed once) Pre-compute reference moments with tensorflow code:\n ```\n python ../data_utils/inception_tf13.py --use_ground_truth_data --data_root [data_root] --split [ref_set] --resolution [res] --which_dataset [dataset]\n ```\n\n4) (Using this [repository](https://github.com/bioinf-jku/TTUR)) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtain in 3). 
For example, to compute the FID with reference ImageNet validation set: \n```python TTUR/fid.py [base_root]/[exp_name]/TF_pool_.npz [data_root]/imagenet_val_res[res]_tf_inception_moments_ground_truth.npz ``` \n\nTo obtain the FID metric on COCO-Stuff:\n1) Obtain ground-truth jpeg images: ```python data_utils/store_coco_jpeg_images.py --resolution [res] --split [ref_set] --data_root [data_root] --out_path [gt_coco_images] --filter_hd [filter_hd] ```\n2) Store generated images as jpeg images: ```python sample.py --json_config ../[BigGAN-PyTorch or stylegan-ada-pytorch]/config_files//.json --data_root [data_root] --base_root [base_root] --sample_num_npz [num_imgs] --which_dataset 'coco' --eval_instance_set [ref_set] --eval_reference_set [ref_set] --filter_hd [filter_hd] --model_backbone [backbone] ```\n3) Using this [repository](https://github.com/bioinf-jku/TTUR), compute FID on the two folders of ground-truth and generated images.\n\nwhere:\n* `dataset`: option to select the dataset in `['imagenet', 'imagenet_lt', 'coco']\n* `exp_name`: name of the experiment folder.\n* `data_root`: path where the data has been prepared and stored, following the previous section [\"Data preparation\"](#data-preparation). \n* `base_root`: path where to find the model (for example, where the pretrained models have been downloaded). \n* `num_imgs`: needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference.\n* `ref_set`: set to `'val'` for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to `'train'` for ImageNet-LT or COCO to obtain metrics with the training set as reference.\n* `kmeans_centers`: set to 1000 for ImageNet and to -1 for ImageNet-LT. \n* `backbone`: model backbone architecture in `['biggan','stylegan2']`.\n* `res`: integer indicating the resolution of the images (64,128,256).\n* `gt_coco_images`: folder to store the ground-truth JPEG images of that specific split.\n* `filter_hd`: only valid for `ref_set=val`. If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen). \n\n\n## Utilities for GAN backbones\nWe change and provide extra utilities to facilitate the training, for both BigGAN and StyleGAN2 base repositories.\n\n### BigGAN change log\nThe following changes were made:\n\n* BigGAN architecture:\n * In `train_fns.py`: option to either have the optimizers inside the generator and discriminator class, or directly in the `G_D` wrapper module. Additionally, added an option to augment both generated and real images with augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans).\n * In `BigGAN.py`: added a function `get_condition_embeddings` to handle the conditioning separately.\n * Small modifications to `layers.py` to adapt the batchnorm function calls to the pytorch 1.8 version. 
\n \n* Training utilities: \n * Added `trainer.py` file (replacing train.py):\n * Training now allows the usage of DDP for faster single-node and multi-node training.\n * Training is performed by epochs instead of by iterations.\n * Option to stop the training by using early stopping or when experiments diverge. \n * In `utils.py`:\n * Replaced `MultiEpochSampler` for `CheckpointedSampler` to allow experiments to be resumable when using epochs and fixing a bug where `MultiEpochSampler` would require a long time to fetch data permutations when the number of epochs increased.\n * ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator.\n * ImageNet-LT: Added class balancing (uniform and temperature annealed).\n * Added data augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans).\n\n* Testing utilities:\n * In `calculate_inception_moments.py`: added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation).\n * In `inception_utils.py`: added option to compute [Precision, Recall, Density, Coverage](https://github.com/clovaai/generative-evaluation-prdc) and stratified FID.\n \n* Data utilities:\n * In `datasets.py`, added option to load ImageNet-LT dataset.\n * Added ImageNet-LT.txt files with image indexes for training and validation split. \n * In `utils.py`: \n * Separate functions to obtain the data from hdf5 files (`get_dataset_hdf5`) or from directory (`get_dataset_images`), as well as a function to obtain only the data loader (`get_dataloader`). \n * Added the function `sample_conditionings` to handle possible different conditionings to train G with.\n \n* Experiment utilities:\n * Added JSON files to launch experiments with the proposed hyper-parameter configuration.\n * Script to launch experiments with either the [submitit tool](https://github.com/facebookincubator/submitit) or locally in the same machine (run.py). \n\n### StyleGAN2 change log \n\n
* Multi-node DistributedDataParallel training.
* Added early stopping based on the training FID metric.
* Automatic checkpointing when jobs are automatically rescheduled on a cluster.
* Option to load dataset from hdf5 file.
* Replaced the usage of Click python package by an `ArgumentParser`.
* Only saving best and last model weights.
\n\n## Acknowledgements\nWe would like to thanks the authors of the [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch) and [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), as our model requires their repositories to train IC-GAN with BigGAN or StyleGAN2 bakcbone respectively. \nMoreover, we would like to further thank the authors of [generative-evaluation-prdc](https://github.com/clovaai/generative-evaluation-prdc), [data-efficient-gans](https://github.com/mit-han-lab/data-efficient-gans), [faiss](https://github.com/facebookresearch/faiss) and [sg2im](https://github.com/google/sg2im) as some components were borrowed and modified from their code bases. Finally, we thank the author of [WanderCLIP](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/WanderCLIP.ipynb) as well as the following repositories, that we use in our Colab notebook: [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) and [CLIP](https://github.com/openai/CLIP).\n\n## License\nThe majority of IC-GAN is licensed under CC-BY-NC, however portions of the project are available under separate license terms: BigGAN and [PRDC](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/compute_pdrc.py) are licensed under the MIT license; [COCO-Stuff loader](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/cocostuff_dataset.py) is licensed under Apache License 2.0; [DiffAugment](https://github.com/facebookresearch/ic_gan/blob/main/BigGAN_PyTorch/diffaugment_utils.py) is licensed under BSD 2-Clause Simplified license; StyleGAN2 is licensed under a NVIDIA license, available here: https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/LICENSE.txt. In the Colab notebook, [CLIP](https://github.com/openai/CLIP) and [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) code is used, both licensed under the MIT license.\n\n## Disclaimers\nTHE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nTHE CLIP SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\nTHE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n## Cite the paper\nIf this repository, the paper or any of its content is useful for your research, please cite:\n```\n@inproceedings{casanova2021instanceconditioned,\n title={Instance-Conditioned GAN}, \n author={Arantxa Casanova and Marl\u00e8ne Careil and Jakob Verbeek and Michal Drozdzal and Adriana Romero-Soriano},\n booktitle={Advances in Neural Information Processing Systems (NeurIPS)},\n year={2021}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "malllabiisc/CompGCN", "link": "https://github.com/malllabiisc/CompGCN", "tags": ["link-prediction", "relation-embeddings", "iclr2020", "graph-convolutional-networks", "deep-learning", "pytorch", "graph-representation-learning"], "stars": 514, "description": "ICLR 2020: Composition-Based Multi-Relational Graph Convolutional Networks", "lang": "Python", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jython/frozen-mirror", "link": "https://github.com/jython/frozen-mirror", "tags": [], "stars": 514, "description": "A Mirror of hg.python.org (now frozen). Please use jython/jython.", "lang": "Python", "repo_lang": "", "readme": "Jython: Python for the Java Platform\n------------------------------------\n\nWelcome to Jython @jython.version@.\n@snapshot.banner@\nThis is @readme.release@ release of version @jython.version.short@ of Jython.\n\nAlong with language and runtime compatibility with CPython 2.7, Jython 2.7\nprovides substantial support of the Python ecosystem. This includes built-in\nsupport of pip/setuptools (you can use with bin/pip) and a native launcher\nfor Windows (bin/jython.exe).\n\nJim Baker presented a talk at PyCon 2015 about Jython 2.7, including demos\nof new features: https://www.youtube.com/watch?v=hLm3garVQFo\n\nThis release was compiled on @os.name@ using @java.vendor@ Java\nversion @java.version@ and requires a minimum of Java @jdk.target.version@ to run.\n\nSee ACKNOWLEDGMENTS for details about Jython's copyright, license,\ncontributors, and mailing lists; and NEWS for detailed release notes,\nincluding bugs fixed, backwards breaking changes, and new features.\n\nThe developers extend their thanks to all who contributed to this release\nof Jython, through bug reports, patches, pull requests, documentation\nchanges, email and conversation in any media. We are grateful to the PSF for\ncontinuing practical help and support to the project.\n\nTesting\n-------\nYou can test your installation of Jython (not the standalone jar) by\nrunning the regression tests, with the command:\n\njython -m test.regrtest -e\n\nThe regression tests can take about fifty minutes. 
At the time of writing,\nthese tests are known to fail (spuriously) on an installed Jython:\n test___all__\n test_java_visibility\n test_jy_internals\n test_ssl_jy\nPlease report reproducible failures at http://bugs.jython.org .\n\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "importCTF/Instagram-Hacker", "link": "https://github.com/importCTF/Instagram-Hacker", "tags": ["hacking", "hacking-tool", "instagram", "python", "bruteforce", "bruteforce-attacks"], "stars": 514, "description": "This is an advanced script for Instagram bruteforce attacks. WARNING THIS IS A REAL TOOL!", "lang": "Python", "repo_lang": "", "readme": "# Instagram-Hacker\nThis is a script for Instagram bruteforce attacks. WARNING THIS IS A REAL TOOL!\n\n# Usage\n\n`python instagram.py username103 pass.lst`\n\n# Requirements\n\n[mechanize](https://pypi.python.org/pypi/mechanize/) install with: `pip install mechanize`\n\n[requests](https://pypi.python.org/pypi/requests/2.18.4) install with: `pip install requests`\n\n[Tor](https://www.torproject.org/docs/debian) install with: `sudo apt-get install tor`\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "firstlookmedia/pdf-redact-tools", "link": "https://github.com/firstlookmedia/pdf-redact-tools", "tags": [], "stars": 514, "description": "a set of tools to help with securely redacting and stripping metadata from documents before publishing", "lang": "Python", "repo_lang": "", "readme": "# PDF Redact Tools\n\n_Warning: This project is no longer maintained. A much better tool is [dangerzone](https://dangerzone.rocks)._\n\n![PDF Redact Tools](/logo.png)\n\nPDF Redact Tools helps with securely redacting and stripping metadata from documents before publishing.\n\n*Warning:* PDF Redact Tools uses ImageMagick to parse PDFs. While ImageMagick is a versatile tool, it has a history of some [terrible](https://imagetragick.com/) security bugs. A malicious PDF could exploit a bug in ImageMagick to take over your computer. If you're working with potentially malicious PDFs, it's safest to run them through PDF Redact Tools in an isolated environment, such as a virtual machine, or by using a tool such as the [Qubes PDF Converter](https://github.com/QubesOS/qubes-app-linux-pdf-converter) instead.\n\n## Quick Start\n\n### Mac OS X\n\n* Install [Homebrew](http://brew.sh/)\n* Open a terminal and type `$ brew install pdf-redact-tools`\n\n### Ubuntu\n\nYou can install PDF Redact Tools from this Ubuntu PPA:\n\n```sh\n$ sudo add-apt-repository ppa:micahflee/ppa\n$ sudo apt-get update\n$ sudo apt-get install pdf-redact-tools\n```\n\n### Other\n\nPDF Redact Tools isn't yet packaged in any GNU/Linux distributions yet, however it's easy to install by following the [build instructions](/BUILD.md). I haven't attempted to make this work in Windows.\n\n## How to Use\n\nTo use it, convert your original document to a PDF.\n\nThen start by exploding the PDF into PNG files:\n\n```sh\n$ pdf-redact-tools --explode example_document.pdf\n```\n\nThis will create a new folder in the same directory as the PDF called (in this case) `example_document_pages`, with a PNG for each page.\n\nEdit each page that needs redacting in graphics editing software like GIMP or Photoshop. Note that opening, editing, and saving a PNG will likely make it look slightly different than the other PNGs. 
For best results, open all PNGs and simply save and close the pages you don't need to edit.\n\nWhen you're done, combine the PNGs back into a flattened, informationless PDF:\n\n```sh\n$ pdf-redact-tools --merge example_document.pdf\n```\n\nIn this case, the final redacted PDF is called `example_document-final.pdf`.\n\nIf you don't need to redact anything, but you just want a new PDF that definitely doesn't contain malware or metadata, you can simply sanitize it.\n\n```sh\n$ pdf-redact-tools --sanitize untrusted.pdf\n```\n\nThe final document that you can trust is called `untrusted-final.pdf`.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "AUNaseef/protonup", "link": "https://github.com/AUNaseef/protonup", "tags": ["proton", "proton-ge-custom", "linux", "steam", "automation", "python"], "stars": 514, "description": "Install and Update Proton-GE", "lang": "Python", "repo_lang": "", "readme": "## Introduction\nCLI program and API to automate the installation and update of [GloriousEggroll](https://github.com/GloriousEggroll/)'s [Proton-GE](https://github.com/GloriousEggroll/proton-ge-custom)\n\n[![Downloads](https://pepy.tech/badge/protonup)](https://pepy.tech/project/protonup)\n\n## Installation\nInstall from Python Package Index\n```\npip3 install protonup\n```\nInstall from source\n```\ngit clone https://github.com/AUNaseef/protonup && cd protonup\npython3 setup.py install --user\n```\nIf you get a `command not found` error, add the following to your `~/.profile` (if it's not already present) and run `source ~/.profile`\n```\nif [ -d \"$HOME/.local/bin\" ] ; then\n PATH=\"$HOME/.local/bin:$PATH\"\nfi\n```\n\n## Usage\nSet your installation directory before running the program with `-d \"your/compatibilitytools.d/directory\"`\n\nExample:\n```\nprotonup -d \"~/.steam/root/compatibilitytools.d/\"\n```\n---\nTo update to the latest version, just run `protonup` from a command line\n\nExample:\n```\nprotonup\n```\n---\nList available versions with `--releases`\n\nExample:\n```\nprotonup --releases\n```\n---\nInstall a specific version with `-t \"version tag\"`\n\nExample:\n```\nprotonup -t 6.5-GE-2\n```\n---\nBy default the downloads are stored in a temporary folder. Change it with `-o \"custom/download/directory\"`\n\nExample:\n```\nprotonup -o ~/Downloads\n```\n---\nList existing installations with `-l`\n\nExample:\n```\nprotonup -l\n```\n---\nRemove existing installations with `-r \"version tag`\n\nExample:\n```\nprotonup -r 6.5-GE-2\n```\n---\nUse `--download` to download Proton-GE to the current working directory without installing it, you can override destination with `-o`\n\nExample:\n```\nprotonup --download\n```\n---\nUse `-y` toggle to carry out actions without any logging or interaction\n\nExample:\n```\nprotonup --download -o ~/Downloads -y\n```\n---\n### Restart Steam after making changes\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bai-shang/crnn_ctc_ocr_tf", "link": "https://github.com/bai-shang/crnn_ctc_ocr_tf", "tags": [], "stars": 513, "description": "Extremely simple implement for CRNN by Tensorflow", "lang": "Python", "repo_lang": "", "readme": "# crnn_ctc_ocr_tf\nThis software implements the Convolutional Recurrent Neural Network (CRNN), a combination of CNN, RNN and CTC loss for image-based sequence recognition tasks, such as scene text recognition and OCR. 
\n\nhttps://arxiv.org/abs/1507.05717 \n\nMore details for CRNN and CTC loss (in Chinese): https://zhuanlan.zhihu.com/p/43534801 \n\n![](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/Arch.jpg?raw=true)\n\n***The crnn+seq2seq+attention ocr code can be found here [bai-shang/crnn_seq2seq_ocr_pytorch](https://github.com/bai-shang/crnn_seq2seq_ocr_pytorch)***\n\n# Dependencies\nAll dependencies that should be installed are as follows: \n* Python3\n* tensorflow==1.15.0\n* opencv-python\n* numpy\n\nRequired packages can be installed with\n```bash\npip3 install -r requirements.txt\n``` \n\nNote: This code cannot run on TensorFlow 2.0 since it modifies the 'tf.nn.ctc_loss' API.\n\n# Run demo\n\nAssume your current working directory is \"crnn_ctc_ocr_tf\":\n```bash\ncd path/to/your/crnn_ctc_ocr_tf/\n```\nDownload the pretrained model and extract it to your disk: [GoogleDrive](https://drive.google.com/file/d/1A3V7o3SKSiL3IHcTqc1jP4w58DuC8F9o/view?usp=sharing) . \n\nExport the current working directory path into PYTHONPATH: \n\n```bash\nexport PYTHONPATH=$PYTHONPATH:./\n```\n\nRun the inference demo:\n\n```bash\npython3 tools/inference_crnn_ctc.py \\\n --image_dir ./test_data/images/ --image_list ./test_data/image_list.txt \\\n --model_dir /path/to/your/bs_synth90k_model/ 2>/dev/null\n```\n\nThe result is:\n```\nPredict 1_AFTERSHAVE_1509.jpg image as: aftershave\n```\n![1_AFTERSHAVE_1509.jpg](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/test_data/images/1_AFTERSHAVE_1509.jpg)\n```\nPredict 2_LARIAT_43420.jpg image as: lariat\n```\n![2_LARIAT_43420](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/test_data/images/2_LARIAT_43420.jpg)\n\n# Train a new model\n\n### Data Preparation\n* First, you need to download the [Synth90k](http://www.robots.ox.ac.uk/~vgg/data/text/) dataset and extract it into a folder. \n\n* Second, supply a txt file that specifies the relative path to each image and its corresponding text label. \n\nFor example: image_list.txt\n```bash\n90kDICT32px/1/2/373_coley_14845.jpg coley\n90kDICT32px/17/5/176_Nevadans_51437.jpg nevadans\n```\n* Then convert your dataset to tfrecord format by running\n```bash\npython3 tools/create_crnn_ctc_tfrecord.py \\\n --image_dir path/to/90kDICT32px/ --anno_file path/to/image_list.txt --data_dir ./tfrecords/ \\\n --validation_split_fraction 0.1\n```\nNote: make sure that images can be read from the path you specified. For example:\n```bash\npath/to/90kDICT32px/1/2/373_coley_14845.jpg\npath/to/90kDICT32px/17/5/176_Nevadans_51437.jpg\n.......\n```\nAll training images will be scaled to a height of 32 pixels and written to the tfrecord file. The dataset will be divided into training and validation sets; you can change the `validation_split_fraction` parameter to control their ratio.
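
If you prefer to generate the `image_list.txt` annotation file shown above yourself, the label can be recovered from the Synth90k filename convention (`index_LABEL_id.jpg`). The helper below is a sketch based on that convention, not a script shipped with this repository.

```python
import os

def write_image_list(image_dir: str, out_file: str) -> None:
    """Walk image_dir and write '<relative path> <label>' lines, parsing labels from filenames."""
    # Keep the dataset folder name (e.g. '90kDICT32px/...') in the relative path,
    # matching the example annotation lines above.
    base = os.path.dirname(os.path.abspath(image_dir.rstrip("/")))
    with open(out_file, "w") as out:
        for root, _, files in os.walk(image_dir):
            for name in sorted(files):
                if not name.lower().endswith(".jpg"):
                    continue
                label = name.split("_")[1].lower()  # '373_coley_14845.jpg' -> 'coley'
                rel_path = os.path.relpath(os.path.join(root, name), base)
                out.write(f"{rel_path} {label}\n")

write_image_list("path/to/90kDICT32px", "image_list.txt")
```
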
\n\n#### Alternatively, you can use the dowload_synth90k_and_create_tfrecord.sh script to download Synth90k and create the tfrecords automatically:\n```\ncd ./data\nsh dowload_synth90k_and_create_tfrecord.sh\n```\n\n### Train model\n```bash\npython3 tools/train_crnn_ctc.py --data_dir ./tfrecords/ --model_dir ./model/ --batch_size 32\n```\nAfter several iterations you can check the output in the terminal as follows: \n\n![](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/data/20180919022202.png?raw=true)\n\nDuring my experiment the loss drops as follows:\n![](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/data/20180919202432.png?raw=true)\n\n### Evaluate model\n```bash\npython3 tools/eval_crnn_ctc.py --data_dir ./tfrecords/ --model_dir ./model/ 2>/dev/null\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "duckietown/gym-duckietown", "link": "https://github.com/duckietown/gym-duckietown", "tags": ["openai-gym", "simulator", "reinforcement-learning", "robot", "imitation-learning"], "stars": 513, "description": "Self-driving car simulator for the Duckietown universe", "lang": "Python", "repo_lang": "", "readme": "# Gym-Duckietown\n\n[![Build Status](https://circleci.com/gh/duckietown/gym-duckietown/tree/master.svg?style=shield)](https://circleci.com/gh/duckietown/gym-duckietown/tree/master) [![Docker Hub](https://img.shields.io/docker/pulls/duckietown/gym-duckietown.svg)](https://hub.docker.com/r/duckietown/gym-duckietown)\n\n\n[Duckietown](http://duckietown.org/) self-driving car simulator environments for OpenAI Gym.\n\nPlease use this bibtex if you want to cite this repository in your publications:\n\n```\n@misc{gym_duckietown,\n author = {Chevalier-Boisvert, Maxime and Golemo, Florian and Cao, Yanjun and Mehta, Bhairav and Paull, Liam},\n title = {Duckietown Environments for OpenAI Gym},\n year = {2018},\n publisher = {GitHub},\n journal = {GitHub repository},\n howpublished = {\\url{https://github.com/duckietown/gym-duckietown}},\n}\n```\n\nThis simulator was created as part of work done at [Mila](https://mila.quebec/).\n\n\n
\n
\n\n\n*Welcome to Duckietown!*\n
\n\n## Introduction\n\nGym-Duckietown is a simulator for the [Duckietown](https://duckietown.org) Universe, written in pure Python/OpenGL (Pyglet). It places your agent, a Duckiebot, inside of an instance of a Duckietown: a loop of roads with turns, intersections, obstacles, Duckie pedestrians, and other Duckiebots. It can be a pretty hectic place!\n\nGym-Duckietown is fast, open, and incredibly customizable. What started as a lane-following simulator has evolved into a fully-functioning autonomous driving simulator that you can use to train and test your Machine Learning, Reinforcement Learning, Imitation Learning, or even classical robotics algorithms. Gym-Duckietown offers a wide range of tasks, from simple lane-following to full city navigation with dynamic obstacles. Gym-Duckietown also ships with features, wrappers, and tools that can help you bring your algorithms to the real robot, including [domain-randomization](https://blog.openai.com/spam-detection-in-the-physical-world/), accurate camera distortion, and differential-drive physics (and most importantly, realistic waddling).\n\n\n
\n
\n\nThere are multiple registered gym environments, each corresponding to a different [map file](https://github.com/duckietown/gym-duckietown/tree/master/gym_duckietown/maps):\n- `Duckietown-straight_road-v0`\n- `Duckietown-4way-v0`\n- `Duckietown-udem1-v0`\n- `Duckietown-small_loop-v0`\n- `Duckietown-small_loop_cw-v0`\n- `Duckietown-zigzag_dists-v0`\n- `Duckietown-loop_obstacles-v0` (static obstacles in the road)\n- `Duckietown-loop_pedestrians-v0` (moving obstacles in the road)\n\nThe `MultiMap-v0` environment is essentially a [wrapper](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/envs/multimap_env.py) for the simulator which\nwill automatically cycle through all available [map files](https://github.com/duckietown/gym-duckietown/tree/master/gym_duckietown/maps). This makes it possible to train on\na variety of different maps at the same time, with the idea that training on a variety of\ndifferent scenarios will make for a more robust policy/model.\n\n`gym-duckietown` is an _accompanying_ simulator to real Duckiebots, which allow you to run your code on the real robot. We provide a domain randomization API, which can help you transfer your trained policies from simulation to real world. Without using a domain transfer method, your learned models will likely overfit to various aspects of the simulator, which won't transfer to the real world. When you deploy, you and your Duckiebot will be running around in circles trying to figure out what's going on.\n\n\n
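Whichever map you pick, interacting with the simulator follows the standard Gym loop. A minimal random-action rollout could look like this (a sketch assuming the package is installed and that importing `gym_duckietown` registers the environments, as in this version of the simulator):

```python
import gym
import gym_duckietown  # noqa: F401  (importing registers the Duckietown-* environments)

env = gym.make("Duckietown-udem1-v0")
obs = env.reset()

for _ in range(500):
    # action = [forward velocity, steering angle], both in [-1, 1]
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()

env.close()
```
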
\n
\n\nThe `Duckiebot-v0` environment is meant to connect to software running on\na real Duckiebot and remotely control the robot. It is a tool to test that policies\ntrained in simulation can transfer to the real robot. If you want to\ncontrol your robot remotely with the `Duckiebot-v0` environment, you will need to\ninstall the software found in the [duck-remote-iface](https://github.com/maximecb/duck-remote-iface)\nrepository on your Duckiebot.\n\n\n
\n*Duckiebot-v0*\n
\n\n## Installation\n\nRequirements:\n- Python 3.6+\n- OpenAI gym\n- NumPy\n- Pyglet\n- PyYAML\n- PyTorch\n\nYou can install all the dependencies except PyTorch with `pip3`:\n\n```\ngit clone https://github.com/duckietown/gym-duckietown.git\ncd gym-duckietown\npip3 install -e .\n```\n\nReinforcement learning code forked from [this repository](https://github.com/ikostrikov/pytorch-a2c-ppo-acktr)\nis included under [/pytorch_rl](/pytorch_rl). If you wish to use this code, you\nshould install [PyTorch](http://pytorch.org/).\n\n### Installation Using Conda (Alternative Method)\n\nAlternatively, you can install all the dependencies, including PyTorch, using Conda as follows. For those trying to use this package on MILA machines, this is the way to go:\n\n```\ngit clone https://github.com/duckietown/gym-duckietown.git\ncd gym-duckietown\nconda env create -f environment.yaml\n```\n\nPlease note that if you use Conda to install this package instead of pip, you will need to activate your Conda environment and add the package to your Python path before you can use it:\n\n```\nsource activate gym-duckietown\nexport PYTHONPATH=\"${PYTHONPATH}:`pwd`\"\n```\n\n### Docker Image\n\nThere is a pre-built Docker image available [on Docker Hub](https://hub.docker.com/r/duckietown/gym-duckietown), which also contains an installation of PyTorch.\n\n*Note that in order to get GPU acceleration, you should install and use [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).*\n\nTo get started, pull the `duckietown/gym-duckietown` image from Docker Hub and open a shell in the container:\n\n```\nnvidia-docker pull duckietown/gym-duckietown && \\\nnvidia-docker run -it duckietown/gym-duckietown bash\n```\n\nThen create a virtual display:\n\n```\nXvfb :0 -screen 0 1024x768x24 -ac +extension GLX +render -noreset &> xvfb.log &\nexport DISPLAY=:0\n```\n\nNow, you are ready to start training a policy using RL:\n\n```\npython3 pytorch_rl/main.py \\\n --algo a2c \\\n --env-name Duckietown-loop_obstacles-v0 \\\n --lr 0.0002 \\\n --max-grad-norm 0.5 \\\n --no-vis \\\n --num-steps 20\n```\n\nIf you need to do so, you can build a Docker image by running the following command from the root directory of this repository:\n\n```\ndocker build . \\\n --file ./docker/standalone/Dockerfile \\\n --no-cache=true \\\n --network=host \\\n --tag \n```\n\n## Usage\n\n### Testing\n\nThere is a simple UI application which allows you to control the simulation or real robot manually. The `manual_control.py` application will launch the Gym environment, display camera images and send actions (keyboard commands) back to the simulator or robot. You can specify which map file to load with the `--map-name` argument:\n\n```\n./manual_control.py --env-name Duckietown-udem1-v0\n```\n\nThere is also a script to run automated tests (`run_tests.py`) and a script to gather performance metrics (`benchmark.py`).\n\n### Reinforcement Learning\n\nTo train a reinforcement learning agent, you can use the code provided under [/pytorch_rl](/pytorch_rl). I recommend using the A2C or ACKTR algorithms. A sample command to launch training is:\n\n```\npython3 pytorch_rl/main.py --no-vis --env-name Duckietown-small_loop-v0 --algo a2c --lr 0.0002 --max-grad-norm 0.5 --num-steps 20\n```\n\nThen, to visualize the results of training, you can run the following command. Note that you can do this while the training process is still running. 
Also note that if you are running this through SSH, you will need to enable X forwarding to get a display:\n\n```\npython3 pytorch_rl/enjoy.py --env-name Duckietown-small_loop-v0 --num-stack 1 --load-dir trained_models/a2c\n```\n\n### Imitation Learning\n\nThere is a script in the `experiments` directory which automatically generates a dataset of synthetic demonstrations. It uses hillclimbing to optimize the reward obtained, and outputs a JSON file:\n\n```\nexperiments/gen_demos.py --map-name loop_obstacles\n```\n\nThen you can start training an imitation learning model (conv net) with:\n\n```\nexperiments/train_imitation.py --map-name loop_obstacles\n```\n\nFinally, you can visualize what the trained model is doing with:\n\n```\nexperiments/control_imitation.py --map-name loop_obstacles\n```\n\nNote that it is possible to have `gen_demos.py` and `train_imitate.py` running simultaneously, so that training takes place while new demonstrations are being generated. You can also run `control_imitate.py` periodically during training to check on learning progress.\n\n## Design\n\n### Map File Format\n\nThe simulator supports a YAML-based file format which is designed to be easy to hand edit. See the [maps subdirectory](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/maps) for examples. Each map file has two main sections: a two-dimensional array of tiles, and a listing of objects to be placed around the map. The tiles are based on the [Duckietown appearance specification](https://docs.duckietown.org/daffy/opmanual_duckietown/out/duckietown_specs.html).\n\nThe available tile types are:\n- empty\n- straight\n- curve_left\n- curve_right\n- 3way_left (3-way intersection)\n- 3way_right\n- 4way (4-way intersection)\n- asphalt\n- grass\n- floor (office floor)\n\nThe available object types are:\n- barrier\n- cone (traffic cone)\n- duckie\n- duckiebot (model of a Duckietown robot)\n- tree\n- house\n- truck (delivery-style truck)\n- bus\n- building (multi-floor building)\n- sign_stop, sign_T_intersect, sign_yield, etc. (see [meshes subdirectory](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/meshes))\n\nAlthough the environment is rendered in 3D, the map is essentially two-dimensional. As such, objects coordinates are specified along two axes. The coordinates are rescaled based on the tile size, such that coordinates [0.5, 1.5] would mean middle of the first column of tiles, middle of the second row. Objects can have an `optional` flag set, which means that they randomly may or may not appear during training, as a form of domain randomization.\n\n### Observations\n\nThe observations are single camera images, as numpy arrays of size (120, 160, 3). These arrays contain unsigned 8-bit integer values in the [0, 255] range.\nThis image size was chosen because it is exactly one quarter of the 640x480 image resolution provided by the camera, which makes it fast and easy to scale down\nthe images. The choice of 8-bit integer values over floating-point values was made because the resulting images are smaller if stored on disk and faster to send over a networked connection.\n\n### Actions\n\nThe simulator uses continuous actions by default. Actions passed to the `step()` function should be numpy arrays containining two numbers between -1 and 1. These two numbers correspond to forward velocity, and a steering angle, respectively. A positive velocity makes the robot go forward, and a positive steering angle makes the robot turn left. 
There is also a [Gym wrapper class](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/wrappers.py) named `DiscreteWrapper` which allows you to use discrete actions (turn left, move forward, turn right) instead of continuous actions if you prefer.\n\n### Reward Function\n\nThe default reward function tries to encourage the agent to drive forward along the right lane in each tile. Each tile has an associated bezier curve defining the path the agent is expected to follow. The agent is rewarded for being as close to the curve as possible, and also for facing the same direction as the curve's tangent. The episode is terminated if the agent gets too far outside of a drivable tile, or if the `max_steps` parameter is exceeded. See the `step` function in [this source file](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/envs/simplesim_env.py).\n\n## Troubleshooting\n\nIf you run into problems of any kind, don't hesitate to [open an issue](https://github.com/duckietown/gym-duckietown/issues) on this repository. It's quite possible that you've run into some bug we aren't aware of. Please make sure to give some details about your system configuration (ie: PC or Max, operating system), and to paste the command you used to run the simulator, as well as the complete error message that was produced, if any.\n\n### ImportError: Library \"GLU\" not found\n\nYou may need to manually install packaged needed by Pyglet or OpenAI Gym on your system. The command you need to use will vary depending which OS you are running. For example, to install the glut package on Ubuntu:\n\n```\nsudo apt-get install freeglut3-dev\n```\n\nAnd on Fedora:\n\n```\nsudo dnf install freeglut-devel\n```\n\n### NoSuchDisplayException: Cannot connect to \"None\"\n\nIf you are connected through SSH, or running the simulator in a Docker image, you will need to use xvfb to create a virtual display in order to run the simulator. See the \"Running Headless\" subsection below.\n\n### Running headless\n\nThe simulator uses the OpenGL API to produce graphics. This requires an X11 display to be running, which can be problematic if you are trying to run training code through on SSH, or on a cluster. You can create a virtual display using [Xvfb](https://en.wikipedia.org/wiki/Xvfb). The instructions shown below illustrate this. 
Note, however, that these instructions are specific to MILA, look further down for instructions on an Ubuntu box:\n\n```\n# Reserve a Debian 9 machine with 12GB ram, 2 cores and a GPU on the cluster\nsinter --reservation=res_stretch --mem=12000 -c2 --gres=gpu\n\n# Activate the gym-duckietown Conda environment\nsource activate gym-duckietown\n\ncd gym-duckietown\n\n# Add the gym_duckietown package to your Python path\nexport PYTHONPATH=\"${PYTHONPATH}:`pwd`\"\n\n# Load the GLX library\n# This has to be done before starting Xvfb\nexport LD_LIBRARY_PATH=/Tmp/glx:$LD_LIBRARY_PATH\n\n# Create a virtual display with OpenGL support\nXvfb :$SLURM_JOB_ID -screen 0 1024x768x24 -ac +extension GLX +render -noreset &> xvfb.log &\nexport DISPLAY=:$SLURM_JOB_ID\n\n# You are now ready to train\n```\n\n### Running headless and training in a cloud based environment (AWS)\n\nWe recommend using the Ubuntu-based [Deep Learning AMI](https://aws.amazon.com/marketplace/pp/B077GCH38C) to provision your server which comes with all the deep learning libraries.\n\n```\n# Install xvfb\nsudo apt-get install xvfb mesa-utils -y\n\n# Remove the nvidia display drivers (this doesn't remove the CUDA drivers)\n# This is necessary as nvidia display doesn't play well with xvfb\nsudo nvidia-uninstall -y\n\n# Sanity check to make sure you still have CUDA driver and its version\nnvcc --version\n\n# Start xvfb\nXvfb :1 -screen 0 1024x768x24 -ac +extension GLX +render -noreset &> xvfb.log &\n\n# Export your display id\nexport DISPLAY=:1\n\n# Check if your display settings are valid\nglxinfo\n\n# You are now ready to train\n```\n\n### Poor performance, low frame rate\n\nIt's possible to improve the performance of the simulator by disabling Pyglet error-checking code. Export this environment variable before running the simulator:\n\n```\nexport PYGLET_DEBUG_GL=True\n```\n\n### RL training doesn't converge\n\nReinforcement learning algorithms are extremely sensitive to hyperparameters. Choosing the\nwrong set of parameters could prevent convergence completely, or lead to unstable performance over\ntraining. You will likely want to experiment. A learning rate that is too low can lead to no\nlearning happening. A learning rate that is too high can lead unstable performance throughout\ntraining or a suboptimal result.\n\nThe reward values are currently rescaled into the [0,1] range, because the RL code in\n`pytorch_rl` doesn't do reward clipping, and deals poorly with large reward values. Also\nnote that changing the reward function might mean you also have to retune your choice\nof hyperparameters.\n\n### Unknown encoder 'libx264' when using gym.wrappers.Monitor\n\nIt is possible to use `gym.wrappers.Monitor` to record videos of the agent performing a task. See [examples here](https://www.programcreek.com/python/example/100947/gym.wrappers.Monitor).\n\nThe libx264 error is due to a problem with the way ffmpeg is installed on some linux distributions. 
One possible way to circumvent this is to reinstall ffmpeg using conda:\n\n```\nconda install -c conda-forge ffmpeg\n```\n\nAlternatively, screencasting programs such as [Kazam](https://launchpad.net/kazam) can be used to record the graphical output of a single window.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mehulj94/Radium", "link": "https://github.com/mehulj94/Radium", "tags": ["python", "keylogger", "security"], "stars": 513, "description": "Python logger with multiple features.", "lang": "Python", "repo_lang": "", "readme": "```\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n _____ _ _ _ _\n | __ \\ | (_) | | | |\n | |__) |__ _ __| |_ _ _ _ __ ___ | | _____ _ _| | ___ __ _ __ _ ___ _ __\n | _ // _` |/ _` | | | | | '_ ` _ \\ | |/ / _ \\ | | | |/ _ \\ / _` |/ _` |/ _ \\ '__|\n | | \\ \\ (_| | (_| | | |_| | | | | | | | < __/ |_| | | (_) | (_| | (_| | __/ |\n |_| \\_\\__,_|\\__,_|_|\\__,_|_| |_| |_| |_|\\_\\___|\\__, |_|\\___/ \\__, |\\__, |\\___|_|\n __/ | __/ | __/ |\n |___/ |___/ |___/\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n\n--> Coded by: Mehul Jain\n--> For windows only\n\n____ ____ ____ ____ ____ ____ ____\n ______ _\n | ____| | |\n | |__ ___ __ _| |_ _ _ _ __ ___ ___\n | __/ _ \\/ _` | __| | | | '__/ _ \\/ __|\n | | | __/ (_| | |_| |_| | | | __/\\__ \\\n |_| \\___|\\__,_|\\__|\\__,_|_| \\___||___/\n____ ____ ____ ____ ____ ____ ____\n\n--> Applications and keystrokes logging\n--> Screenshot logging\n--> Drive tree structure\n--> Logs sending by email\n--> Password Recovery for\n \u2022 Chrome\n \u2022 Mozilla\n \u2022 Filezilla\n \u2022 Core FTP\n \u2022 CyberDuck\n \u2022 FTPNavigator\n \u2022 WinSCP\n \u2022 Outlook\n \u2022 Putty\n \u2022 Skype\n \u2022 Generic Network\n--> Cookie stealer\n--> Keylogger stub update mechanism\n--> Gather system information\n \u2022 Internal and External IP\n \u2022 Ipconfig /all output\n \u2022 Platform\n____ ____ ____ ____ ____\n _ _ _____ ___ _____ _____\n| | | / ___|/ _ \\| __ \\| ___|\n| | | \\ `--./ /_\\ \\ | \\/| |__\n| | | |`--. \\ _ | | __ | __|\n| |_| /\\__/ / | | | |_\\ \\| |___\n \\___/\\____/\\_| |_/\\____/\\____/\n____ ____ ____ ____ ____\n\n--> Download the libraries if you are missing any.\n--> Set the Gmail username and password and remember to check allow connection from less secure apps in gmail settings.\n--> Set the FTP server. Make the folder Radium in which you'll store the new version of exe.\n--> Set the FTP ip, username, password.\n--> Remember to encode the password in base64.\n--> Set the originalfilename variable in copytostartup(). This should be equal to the name of the exe.\n--> Make the exe using Pyinstaller\n--> Keylogs will be mailed after every 300 key strokes. This can be changed.\n--> Screenshot is taken after every 500 key strokes. 
This can be changed.\n--> Remember: If you make this into exe, change the variable \"originalfilename\" and \"coppiedfilename\" in function copytostartup().\n--> Remember: whatever name you give to \"coppiedfilename\", should be given to checkfilename in deleteoldstub().\n\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n _____ _ _ _ _\n|_ _| | (_) | | | |\n | | | |__ _ _ __ __ _ ___ | |_ ___ __ _____ _ __| | __ ___ _ __\n | | | '_ \\| | '_ \\ / _` / __| | __/ _ \\ \\ \\ /\\ / / _ \\| '__| |/ / / _ \\| '_ \\\n | | | | | | | | | | (_| \\__ \\ | || (_) | \\ V V / (_) | | | < | (_) | | | |\n \\_/ |_| |_|_|_| |_|\\__, |___/ \\__\\___/ \\_/\\_/ \\___/|_| |_|\\_\\ \\___/|_| |_|\n __/ |\n |___/\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n\n--> Persistance\n--> Taking screenshots after a specific time. Making it keystrokes independent.\n--> Webcam logging\n--> Skype chat history stealer\n--> Steam credential harvestor\n```\n# Requirements\n* Install [PyHook](https://sourceforge.net/projects/pyhook/)\n* Install [PyWin32](https://sourceforge.net/projects/pywin32/)\n* Install [Microsoft Visual C++ Compiler for Python](https://www.microsoft.com/en-us/download/details.aspx?id=44266)\n* Install [PyInstaller](http://www.pyinstaller.org/)\n\n# Tutorial\n[![Tutorial Radium Keylogger](https://i.imgur.com/Y1jE9Km.png)](https://youtu.be/T0h_427L8u4)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "j2labs/brubeck", "link": "https://github.com/j2labs/brubeck", "tags": [], "stars": 513, "description": "Asynchronous web and messaging", "lang": "Python", "repo_lang": "", "readme": "# What Is Brubeck?\n\n__Brubeck__ is no longer actively maintained.\n", "readme_type": "markdown", "hn_comments": "As somebody who's been quite happily building a system around Brubeck for the better part of two months now, it's nice to see the background story pieced together into a cogent storyline.What's more interesting, I think, is the insight into the thought process and background that goes into building something that is exquisitely elegant in its simplicity and at the same time incredibly powerful and flexible.Working with Brubeck has been a delight since the very beginning. 
Granted I've had experience building MVC type systems before, including in Python, but I think Brubeck's ease-of-use is quite significant compared to other frameworks.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "misja/python-boilerpipe", "link": "https://github.com/misja/python-boilerpipe", "tags": [], "stars": 513, "description": "Python interface to Boilerpipe, Boilerplate Removal and Fulltext Extraction from HTML pages", "lang": "Python", "repo_lang": "", "readme": "# python-boilerpipe\n\n\nA python wrapper for [Boilerpipe](http://code.google.com/p/boilerpipe/), an excellent Java library for boilerplate removal and fulltext extraction from HTML pages.\n\n## Configuration\n\n\nDependencies:\n\n * jpype\n * chardet\n\nThe boilerpipe jar files will get fetched and included automatically when building the package.\n\n## Installation\n\nCheckout the code:\n\n\tgit clone https://github.com/misja/python-boilerpipe.git\n\tcd python-boilerpipe\n\n\n**virtualenv**\n\n\tvirtualenv env\n\tsource env/bin/activate\n pip install -r requirements.txt\n\tpython setup.py install\n\t\n\n**Fedora**\n\n sudo dnf install -y python2-jpype\n sudo python setup.py install\n\n\n## Usage\n\n\nBe sure to have set `JAVA_HOME` properly since `jpype` depends on this setting.\n\nThe constructor takes a keyword argument `extractor`, being one of the available boilerpipe extractor types:\n\n - DefaultExtractor\n - ArticleExtractor\n - ArticleSentencesExtractor\n - KeepEverythingExtractor\n - KeepEverythingWithMinKWordsExtractor\n - LargestContentExtractor\n - NumWordsRulesExtractor\n - CanolaExtractor\n\nIf no extractor is passed the DefaultExtractor will be used by default. Additional keyword arguments are either `html` for HTML text or `url`.\n\n from boilerpipe.extract import Extractor\n extractor = Extractor(extractor='ArticleExtractor', url=your_url)\n\nThen, to extract relevant content:\n\n extracted_text = extractor.getText()\n\n extracted_html = extractor.getHTML()\n\n\nFor `KeepEverythingWithMinKWordsExtractor` we have to specify `kMin` parameter, which defaults to `1` for now:\n\n\textractor = Extractor(extractor='KeepEverythingWithMinKWordsExtractor', url=your_url, kMin=20)\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "yanchunhuo/AutomationTest", "link": "https://github.com/yanchunhuo/AutomationTest", "tags": ["automated-testing", "selenium", "appium", "autotesting", "dubbo"], "stars": 514, "description": "\u81ea\u52a8\u5316\u6d4b\u8bd5\u6846\u67b6\uff0c\u652f\u6301\u63a5\u53e3\u81ea\u52a8\u5316\u3001WEB UI\u81ea\u52a8\u5316\u3001APP UI\u81ea\u52a8\u5316\u3001\u6027\u80fd\u6d4b\u8bd5\uff1b\u652f\u6301\u591a\u7cfb\u7edf\u76f8\u4e92\u8c03\u7528\uff1b\u652f\u6301\u63a5\u53e3\u4e0eUI\u76f8\u4e92\u8c03\u7528\uff1b\u652f\u6301dubbo\u63a5\u53e3\u8c03\u7528", "lang": "Python", "repo_lang": "", "readme": "![avatar](https://github.com/yanchunhuo/resources/blob/master/APIAutomationTest/report.png)\n\n# [\u81ea\u52a8\u5316\u6d4b\u8bd5]()\n\n# [\u6982\u51b5]()\n* \u672c\u9879\u76ee\u652f\u6301\u63a5\u53e3\u81ea\u52a8\u5316\u6d4b\u8bd5\u3001app ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u3001web ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u3001\u6027\u80fd\u6d4b\u8bd5\n* \u672c\u9879\u76ee\u7531\u4ee5\u4e0b\u5de5\u5177\u7ec4\u6210\n * pytest\uff1apython\u7684\u4e00\u4e2a\u5355\u5143\u6d4b\u8bd5\u6846\u67b6,https://docs.pytest.org/en/latest/\n * 
pytest-xdist\uff1apytest\u7684\u4e00\u4e2a\u63d2\u4ef6,\u53ef\u591a\u8fdb\u7a0b\u540c\u65f6\u6267\u884c\u6d4b\u8bd5\u7528\u4f8b,https://github.com/pytest-dev/pytest-xdist\n * allure-pytest\uff1a\u7528\u4e8e\u751f\u6210\u6d4b\u8bd5\u62a5\u544a,http://allure.qatools.ru/\n * PyHamcrest\uff1a\u4e00\u4e2a\u5339\u914d\u5668\u5bf9\u8c61\u7684\u6846\u67b6\uff0c\u7528\u4e8e\u65ad\u8a00\uff0chttps://github.com/hamcrest/PyHamcrest\n * requests\uff1ahttp\u8bf7\u6c42\u6846\u67b6,http://docs.python-requests.org/en/master/\n * Appium\uff1a\u79fb\u52a8\u7aef\u7684\u81ea\u52a8\u5316\u6d4b\u8bd5\u6846\u67b6,https://github.com/appium/appium/tree/v1.15.1\n * selenium\uff1aweb ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u6846\u67b6,https://www.seleniumhq.org/\n * cx_Oracle\uff1aoracle\u64cd\u4f5c\u5e93,https://cx-oracle.readthedocs.io/en/latest/index.html\n * JPype1\uff1a\u7528\u4e8e\u6267\u884cjava\u4ee3\u7801,https://github.com/jpype-project/jpype\n * paramiko\uff1assh\u5ba2\u6237\u7aef,https://docs.paramiko.org/en/stable/\n * Pillow\uff1a\u7528\u4e8e\u56fe\u7247\u5904\u7406,https://pillow.readthedocs.io/en/latest/\n * PyMySQL\uff1a\u7528\u4e8e\u64cd\u4f5cMySQL\u6570\u636e\u5e93,https://github.com/PyMySQL/PyMySQL\n * redis\uff1aredis\u5ba2\u6237\u7aef,https://pypi.org/project/redis/\n * tess4j\uff1ajava\u7684\u56fe\u7247\u8bc6\u522b\u5de5\u5177,https://github.com/nguyenq/tess4j/\n * allpairspy: \u7528\u4e8e\u5c06\u53c2\u6570\u5217\u8868\u8fdb\u884c\u6b63\u4ea4\u5206\u6790\uff0c\u5b9e\u73b0\u6b63\u4ea4\u5206\u6790\u6cd5\u7528\u4f8b\u8986\u76d6\uff0chttps://pypi.org/project/allpairspy/\n * python-binary-memcached\uff1a\u7528\u4e8e\u64cd\u4f5cmemcached\uff0chttps://github.com/jaysonsantos/python-binary-memcached\n * kazoo\uff1a\u7528\u4e8e\u64cd\u4f5czookeeper\uff0chttps://github.com/python-zk/kazoo\n * websockets\uff1a\u7528\u4e8ewebsocket\u8bf7\u6c42\uff0chttps://github.com/aaugustin/websockets\n * Js2Py\uff1a\u7528\u4e8e\u6267\u884cjs\u4ee3\u7801\uff0chttps://github.com/PiotrDabkowski/Js2Py\n * sqlacodegen\uff1a\u7528\u4e8e\u6839\u636e\u6570\u636e\u5e93\u8868\u7ed3\u6784\u751f\u6210python\u5bf9\u8c61\uff0chttps://github.com/agronholm/sqlacodegen\n * SQLAlchemy\uff1aSQL\u5de5\u5177\u5305\u53ca\u5bf9\u8c61\u5173\u7cfb\u6620\u5c04\uff08ORM\uff09\u5de5\u5177\uff0chttps://github.com/sqlalchemy/sqlalchemy\n* \u5f53\u524d\u4ec5\u652f\u6301Python>=3.6\n* \u9879\u76ee\u5982\u9700\u6267\u884cjava\u4ee3\u7801(\u5373\u4f7f\u7528jpype1)\uff0c\u5219\u9879\u76ee\u76ee\u5f55\u6240\u5728\u7684\u8def\u5f84\u4e0d\u53ef\u5305\u542b\u4e2d\u6587\n \n# [\u4f7f\u7528]()\n## \u4e00\u3001\u73af\u5883\u51c6\u5907\n### 1\u3001\u811a\u672c\u8fd0\u884c\u73af\u5883\u51c6\u5907\n#### 1.1\u3001\u5b89\u88c5\u7cfb\u7edf\u4f9d\u8d56\n* Linux-Ubuntu:\n * apt-get install libpq-dev python3-dev \u3010\u7528\u4e8epsycopg2-binary\u6240\u9700\u4f9d\u8d56\u3011\n * apt-get install g++ libgraphicsmagick++1-dev libboost-python-dev \u3010\u7528\u4e8epgmagick\u6240\u9700\u4f9d\u8d56\u3011\n * apt-get install python-pgmagick \u3010pgmagick\u6240\u9700\u4f9d\u8d56\u3011\n* Linux-CentOS:\n * yum install python3-devel postgresql-devel \u3010\u7528\u4e8epsycopg2-binary\u6240\u9700\u4f9d\u8d56\u3011\n * yum install GraphicsMagick-c++-devel boost boost-devel\u3010\u7528\u4e8epgmagick\u6240\u9700\u4f9d\u8d56\u3011\n* Windows:\n * \u5b89\u88c5Microsoft Visual C++ 2019 Redistributable\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://visualstudio.microsoft.com/zh-hans/downloads/ \u3010jpype1\u3001\u56fe\u50cf\u8bc6\u522b\u5b57\u5e93\u6240\u9700\u4f9d\u8d56\u3011\n\n#### 
1.2\u3001\u5b89\u88c5python\u4f9d\u8d56\u6a21\u5757\n* pip3 install -r requirements.txt\n* \u5b89\u88c5pgmagick\n * Linux:\n * pip3 install pgmagick==0.7.6\n * Windows:\n * \u4e0b\u8f7d\u5b89\u88c5\u5bf9\u5e94\u7248\u672c\uff1ahttps://www.lfd.uci.edu/~gohlke/pythonlibs/#pgmagick\n* \u5b89\u88c5xmind-sdk-python\n * \u4e0b\u8f7d\u5730\u5740:https://github.com/xmindltd/xmind-sdk-python\n\n#### 1.3\u3001\u5b89\u88c5allure\n* \u6e90\u5b89\u88c5\n * sudo apt-add-repository ppa:qameta/allure\n * sudo apt-get update \n * sudo apt-get install allure\n * \u5176\u4ed6\u5b89\u88c5\u65b9\u5f0f\uff1ahttps://github.com/allure-framework/allure2\n* \u624b\u52a8\u5b89\u88c5\n * \u4e0b\u8f7d2.7.0\u7248\u672c:https://github.com/allure-framework/allure2/releases\n * \u89e3\u538ballure-2.7.0.zip\n * \u52a0\u5165\u7cfb\u7edf\u73af\u5883\u53d8\u91cf:export PATH=/home/john/allure-2.7.0/bin:$PATH\n\n#### 1.4\u3001\u5b89\u88c5openjdk8\u6216jdk8\n* sudo add-apt-repository ppa:openjdk-r/ppa\n* sudo apt-get update\n* sudo apt-get install openjdk-8-jdk\n\n#### 1.5\u3001\u5b89\u88c5maven\n* \u5b8c\u6210maven\u7684\u5b89\u88c5\u914d\u7f6e\n\n#### 1.6\u3001\u5b89\u88c5Oracle Instant Client\n* Linux\n * \u5b89\u88c5libaio\u5305\n * Linux-CentOS:yum install libaio\n * Linux-Ubuntu:apt-get install libaio1\n * \u914d\u7f6eOracle Instant Client\n * \u4e0b\u8f7d\u5730\u5740:http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html\n * \u4e0b\u8f7d\u5b89\u88c5\u5305instantclient-basic-linux.x64-18.3.0.0.0dbru.zip\n * \u89e3\u538bzip\u5305,\u5e76\u914d\u7f6e/etc/profile\n * unzip instantclient-basic-linux.x64-18.3.0.0.0dbru.zip\n * export LD_LIBRARY_PATH=/home/john/oracle_instant_client/instantclient_18_3:$LD_LIBRARY_PATH\n * \u4e2d\u6587\u7f16\u7801\u8bbe\u7f6e\n \n ```python \n import os\n os.environ['NLS_LANG'] = 'SIMPLIFIED CHINESE_CHINA.UTF8'\n ```\n* Windows\n * \u4e0b\u8f7d\u5730\u5740:http://www.oracle.com/technetwork/topics/winx64soft-089540.html\n * \u4e0b\u8f7d\u5b89\u88c5\u5305instantclient-basic-windows.x64-11.2.0.4.0.zip\n * \u89e3\u538bzip\u5305,\u5e76\u914d\u7f6e\u73af\u5883\u53d8\u91cf\n * \u7cfb\u7edf\u73af\u5883\u53d8\u91cf\u52a0\u5165D:\\instantclient-basic-windows.x64-11.2.0.4.0\\instantclient_11_2\n * \u914d\u7f6e\u4e2d\u6587\u7f16\u7801,\u73af\u5883\u53d8\u91cf\u521b\u5efaNLS_LANG=SIMPLIFIED CHINESE_CHINA.UTF8 \n * \u6ce8\u610f:\u5982\u679c\u4f7f\u752864\u4f4d,python\u548cinstantclient\u90fd\u9700\u8981\u4f7f\u752864\u4f4d\n\n#### 1.7\u3001\u56fe\u50cf\u8bc6\u522b\u5b57\u5e93\u51c6\u5907\n* \u4e0b\u8f7d\u5bf9\u5e94\u5b57\u5e93:https://github.com/tesseract-ocr/tessdata\n* \u5c06\u4e0b\u8f7d\u7684\u5b57\u5e93\u653e\u5230common/java/lib/tess4j/tessdata/\n* Linux\n * \u5b89\u88c5\u4f9d\u8d56\n * Linux-Ubuntu:sudo apt install pkg-config aclocal libtool automake libleptonica-dev\n * Linux-CentOS:yum install autoconf automake libtool libjpeg-devel libpng-devel libtiff-devel zlib-devel\n * \u5b89\u88c5leptonica\uff0c\u4e0b\u8f7dleptonica-1.78.0.tar.gz\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://github.com/DanBloomberg/leptonica/releases\n * \u5b89\u88c5\u6b65\u9aa4\u540ctesseract-ocr\u7684\u5b89\u88c5\n * \u4fee\u6539/etc/profile\u6dfb\u52a0\u5982\u4e0b\u5185\u5bb9\uff0c\u7136\u540esource\n ```\n export LD_LIBRARY_PATH=$LD_LIBRARY_PAYT:/usr/local/lib\n export LIBLEPT_HEADERSDIR=/usr/local/include\n export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig\n ```\n * 
\u5b89\u88c5tesseract-ocr\uff0c\u4e0b\u8f7dtesseract-4.1.1.tar.gz\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://github.com/tesseract-ocr/tesseract/releases\n * ./autogen.sh\n * ./configure\n * sudo make\n * sudo make install\n * sudo ldconfig\n* Windows\n * \u5b89\u88c5Microsoft Visual C++ 2019 Redistributable\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://visualstudio.microsoft.com/zh-hans/downloads/\n\n### 2\u3001selenium server\u8fd0\u884c\u73af\u5883\u51c6\u5907\n#### 2.1\u3001\u5b89\u88c5jdk1.8,\u5e76\u914d\u7f6e\u73af\u5883\u53d8\u91cf\n* export JAVA_HOME=/usr/lib/jvm/jdk8\n* export JRE_HOME=${JAVA_HOME}/jre \n* export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib\n* export PATH=${JAVA_HOME}/bin:$PATH\n\n#### 2.2\u3001\u5b89\u88c5\u914d\u7f6eselenium\n* \u914d\u7f6eselenium server\n * \u4e0b\u8f7dselenium-server-standalone-3.141.0.jar\n * \u4e0b\u8f7d\u5730\u5740:http://selenium-release.storage.googleapis.com/index.html\n * \u4ee5\u7ba1\u7406\u5458\u8eab\u4efd\u542f\u52a8\u670d\u52a1:java -jar selenium-server-standalone-3.141.0.jar -log selenium.log\n* \u4e0b\u8f7d\u6d4f\u89c8\u5668\u9a71\u52a8\n * \u8c37\u6b4c\u6d4f\u89c8\u5668\uff1ahttps://chromedriver.storage.googleapis.com/index.html\n * \u9a71\u52a8\u652f\u6301\u7684\u6700\u4f4e\u6d4f\u89c8\u5668\u7248\u672c\uff1ahttps://raw.githubusercontent.com/appium/appium-chromedriver/master/config/mapping.json\n * \u706b\u72d0\u6d4f\u89c8\u5668\uff1ahttps://github.com/mozilla/geckodriver/\n * \u9a71\u52a8\u652f\u6301\u7684\u6d4f\u89c8\u5668\u7248\u672c\uff1ahttps://firefox-source-docs.mozilla.org/testing/geckodriver/geckodriver/Support.html\n * IE\u6d4f\u89c8\u5668(\u5efa\u8bae\u4f7f\u752832\u4f4d,64\u4f4d\u64cd\u4f5c\u6781\u6162)\uff1ahttp://selenium-release.storage.googleapis.com/index.html\n * \u5c06\u9a71\u52a8\u6240\u5728\u76ee\u5f55\u52a0\u5165\u5230selenium server\u670d\u52a1\u5668\u7cfb\u7edf\u73af\u5883\u53d8\u91cf:export PATH=/home/john/selenium/:$PATH\n* IE\u6d4f\u89c8\u5668\u8bbe\u7f6e\n * \u5728Windows Vista\u3001Windows7\u7cfb\u7edf\u4e0a\u7684IE\u6d4f\u89c8\u5668\u5728IE7\u53ca\u4ee5\u4e0a\u7248\u672c\u4e2d\uff0c\u9700\u8981\u8bbe\u7f6e\u56db\u4e2a\u533a\u57df\u7684\u4fdd\u62a4\u6a21\u5f0f\u4e3a\u4e00\u6837\uff0c\u8bbe\u7f6e\u5f00\u542f\u6216\u8005\u5173\u95ed\u90fd\u53ef\u4ee5\u3002\n * \u5de5\u5177-->Internet\u9009\u9879-->\u5b89\u5168\n * IE10\u53ca\u4ee5\u4e0a\u7248\u672c\u589e\u5f3a\u4fdd\u62a4\u6a21\u5f0f\u9700\u8981\u5173\u95ed\u3002\n * \u5de5\u5177-->Internet\u9009\u9879-->\u9ad8\u7ea7\n * \u6d4f\u89c8\u5668\u7f29\u653e\u7ea7\u522b\u5fc5\u987b\u8bbe\u7f6e\u4e3a100%\uff0c\u4ee5\u4fbf\u672c\u5730\u9f20\u6807\u4e8b\u4ef6\u53ef\u4ee5\u8bbe\u7f6e\u4e3a\u6b63\u786e\u7684\u5750\u6807\u3002\n * \u9488\u5bf9IE11\u9700\u8981\u8bbe\u7f6e\u6ce8\u518c\u8868\u4ee5\u4fbf\u4e8e\u6d4f\u89c8\u5668\u9a71\u52a8\u4e0e\u6d4f\u89c8\u5668\u5efa\u7acb\u8fde\u63a5\n * Windows 64\u4f4d\uff1aHKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Microsoft\\Internet Explorer\\Main\\FeatureControl\\FEATURE_BFCACHE\n * Windows 32\u4f4d\uff1aHKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Internet Explorer\\Main\\FeatureControl\\FEATURE_BFCACHE\n * \u5982\u679cFEATRUE_BFCACHE\u9879\u4e0d\u5b58\u5728\uff0c\u9700\u8981\u521b\u5efa\u4e00\u4e2a\uff0c\u7136\u540e\u5728\u91cc\u9762\u521b\u5efa\u4e00\u4e2aDWORD(32\u4f4d)\uff0c\u547d\u540d\u4e3aiexplore.exe\uff0c\u503c\u4e3a0\n * Windows 64\u4f4d\u4e24\u4e2a\u6ce8\u518c\u8868\u5efa\u8bae\u90fd\u8bbe\u7f6e\n * 
IE8\u53ca\u4ee5\u4e0a\u7248\u672c\u8bbe\u7f6e\u652f\u6301inprivate\u6a21\u5f0f\uff0c\u4ee5\u4fbf\u591a\u5f00IE\u7a97\u53e3\u65f6cookies\u80fd\u591f\u72ec\u4eab\n * HKKY_CURRENT_USER\\Software\\Microsoft\\Internet Explorer\\Main \u4e0b\u5efa\u4e00\u4e2a\u540d\u4e3aTabProcGrowth\u7684DWORD(32\u4f4d)\uff0c\u503c\u4e3a0\n * \u91cd\u542f\u7cfb\u7edf\n * \u6ce8:https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver#required-configuration\n\n### 3\u3001appium server\u8fd0\u884c\u73af\u5883\u51c6\u5907\n#### 3.1\u3001\u5b89\u88c5jdk1.8,\u5e76\u914d\u7f6e\u73af\u5883\u53d8\u91cf\n* export JAVA_HOME=/usr/lib/jvm/jdk8\n* export JRE_HOME=${JAVA_HOME}/jre \n* export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib\n* export PATH=${JAVA_HOME}/bin:$PATH\n\n#### 3.2\u3001\u5b89\u88c5\u914d\u7f6eappium server\n* \u5b89\u88c5appium desktop server\n * \u4e0b\u8f7dAppium-windows-1.15.1.exe\n * \u4e0b\u8f7d\u5730\u5740:https://github.com/appium/appium-desktop/releases\n * \u4ee5\u7ba1\u7406\u5458\u8eab\u4efd\u542f\u52a8\u670d\u52a1\n\n* Android\u73af\u5883\u51c6\u5907\n * \u5b89\u88c5java(JDK),\u5e76\u914d\u7f6eJAVA_HOME=/usr/lib/jvm/jdk8\n * \u5b89\u88c5Android SDK,\u5e76\u914d\u7f6eANDROID_HOME=\"/usr/local/adt/sdk\"\n * \u4f7f\u7528SDK manager\u5b89\u88c5\u9700\u8981\u8fdb\u884c\u81ea\u52a8\u5316\u7684Android API\u7248\u672c\n \n* IOS\u73af\u5883\u51c6\u5907\n * \u7531\u4e8e\u6d4b\u8bd5IOS\u771f\u5b9e\u8bbe\u5907\u6ca1\u529e\u6cd5\u76f4\u63a5\u64cd\u4f5cweb view\uff0c\u9700\u8981\u901a\u8fc7usb\uff0c\u5b9e\u73b0\u901a\u8fc7usb\u521b\u5efa\u8fde\u63a5\u9700\u8981\u5b89\u88c5ios-webkit-debug-proxy\n * \u4e0b\u8f7d\u5b89\u88c5\u5730\u5740\uff1ahttps://github.com/google/ios-webkit-debug-proxy/tree/v1.8.5\n\n* \u624b\u673achrome\u73af\u5883\u51c6\u5907\n * \u786e\u4fdd\u624b\u673a\u5df2\u5b89\u88c5chrome\u6d4f\u89c8\u5668\n * \u4e0b\u8f7dchrome\u6d4f\u89c8\u5668\u9a71\u52a8\uff1ahttps://chromedriver.storage.googleapis.com/index.html\n * \u9a71\u52a8\u652f\u6301\u7684\u6700\u4f4e\u6d4f\u89c8\u5668\u7248\u672c\uff1ahttps://raw.githubusercontent.com/appium/appium-chromedriver/master/config/mapping.json\n * \u5728appium desktop\u4e0a\u8bbe\u7f6e\u9a71\u52a8\u7684\u8def\u5f84\n\n* \u6df7\u5408\u5e94\u7528\u73af\u5883\u51c6\u5907\n * \u65b9\u6cd5\u4e00\uff1a\u5b89\u88c5TBS Studio\u5de5\u5177\u67e5\u770bwebview\u5185\u6838\u7248\u672c\uff1ahttps://x5.tencent.com/tbs/guide/debug/season1.html\n * \u65b9\u6cd5\u4e8c\uff1a\u6253\u5f00\u5730\u5740\uff08\u8be5\u5730\u5740\u5728uc\u5f00\u53d1\u5de5\u5177\u4e2d\u53ef\u67e5\u5230\uff09\u67e5\u770bwebview\u5185\u6838\u7248\u672c\uff1ahttps://liulanmi.com/labs/core.html\n * \u4e0b\u8f7dwebview\u5185\u6838\u5bf9\u5e94\u7684chromedriver\u7248\u672c\uff1ahttps://chromedriver.storage.googleapis.com/index.html\n * \u914d\u7f6e\u6587\u4ef6\u8fdb\u884c\u9a71\u52a8\u8def\u5f84\u7684\u914d\u7f6e\n * \u6ce8\uff1awebview\u9700\u8981\u5f00\u542fdebug\u6a21\u5f0f\n\n* Windows\u73af\u5883\u51c6\u5907\n * \u652f\u6301Windows10\u53ca\u4ee5\u4e0a\u7248\u672c\n * \u8bbe\u7f6eWindows\u5904\u4e8e\u5f00\u53d1\u8005\u6a21\u5f0f\n * \u4e0b\u8f7dWinAppDriver\u5e76\u5b89\u88c5(V1.1\u7248\u672c),https://github.com/Microsoft/WinAppDriver/releases\n * \\[\u53ef\u9009\\]\u4e0b\u8f7d\u5b89\u88c5WindowsSDK,\u5728Windows Kits\\10\\bin\\10.0.17763.0\\x64\u5185\u5305\u542b\u6709inspect.exe\u7528\u4e8e\u5b9a\u4f4dWindows\u7a0b\u5e8f\u7684\u5143\u7d20\u4fe1\u606f\n\n* \u5176\u4ed6\u66f4\u591a\u914d\u7f6e\uff1ahttps://github.com/appium/appium/tree/v1.15.1/docs/en/drivers\n\n## 
\u4e8c\u3001\u4fee\u6539\u914d\u7f6e\n* vim config/app_ui_config.conf \u914d\u7f6eapp ui\u81ea\u52a8\u5316\u7684\u6d4b\u8bd5\u4fe1\u606f\n* vim config/web_ui_config.conf \u914d\u7f6eweb ui\u81ea\u52a8\u5316\u7684\u6d4b\u8bd5\u4fe1\u606f\n* vim config/projectName/projectName.conf \u914d\u7f6e\u6d4b\u8bd5\u9879\u76ee\u7684\u4fe1\u606f\n* \u4fee\u6539\u6027\u80fd\u6d4b\u8bd5\u8d1f\u8f7d\u673a\u7684\u7cfb\u7edf\u6700\u5927\u6253\u5f00\u6587\u4ef6\u6570,\u907f\u514d\u5e76\u53d1\u7528\u6237\u6570\u5927\u4e8e\u6700\u5927\u6253\u5f00\u6587\u4ef6\u6570\n\n## \u4e09\u3001\u8fd0\u884c\u6d4b\u8bd5\n### 1\u3001API\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u run_api_test.py --help\n* python3 -u run_api_test.py \u8fd0\u884ccases/api/\u76ee\u5f55\u6240\u6709\u7684\u7528\u4f8b\n* python3 -u run_api_test.py -k keyword \u8fd0\u884c\u5339\u914d\u5173\u952e\u5b57\u7684\u7528\u4f8b\uff0c\u4f1a\u5339\u914d\u6587\u4ef6\u540d\u3001\u7c7b\u540d\u3001\u65b9\u6cd5\u540d\n* python3 -u run_api_test.py -d dir \u8fd0\u884c\u6307\u5b9a\u76ee\u5f55\u7684\u7528\u4f8b\uff0c\u9ed8\u8ba4\u8fd0\u884ccases/api/\u76ee\u5f55\n* python3 -u run_api_test.py -m mark \u8fd0\u884c\u6307\u5b9a\u6807\u8bb0\u7684\u7528\u4f8b\n\n### 2\u3001web ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u run_web_ui_test.py --help\n* python3 -u run_web_ui_test.py \u8fd0\u884ccases/web_ui/\u76ee\u5f55\u6240\u6709\u7684\u7528\u4f8b\n* python3 -u run_web_ui_test.py -k keyword \u8fd0\u884c\u5339\u914d\u5173\u952e\u5b57\u7684\u7528\u4f8b\uff0c\u4f1a\u5339\u914d\u6587\u4ef6\u540d\u3001\u7c7b\u540d\u3001\u65b9\u6cd5\u540d\n* python3 -u run_web_ui_test.py -d dir \u8fd0\u884c\u6307\u5b9a\u76ee\u5f55\u7684\u7528\u4f8b\uff0c\u9ed8\u8ba4\u8fd0\u884ccases/web_ui/\u76ee\u5f55\n* python3 -u run_web_ui_test.py -m mark \u8fd0\u884c\u6307\u5b9a\u6807\u8bb0\u7684\u7528\u4f8b\n\n### 3\u3001app ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u run_app_ui_test.py --help\n* python3 -u run_app_ui_test.py \u8fd0\u884ccases/app_ui/\u76ee\u5f55\u6240\u6709\u7684\u7528\u4f8b\n* python3 -u run_app_ui_test.py -tt phone -k keyword \u8fd0\u884c\u5339\u914d\u5173\u952e\u5b57\u7684\u7528\u4f8b\uff0c\u4f1a\u5339\u914d\u6587\u4ef6\u540d\u3001\u7c7b\u540d\u3001\u65b9\u6cd5\u540d\n* python3 -u run_app_ui_test.py -tt phone -d dir \u8fd0\u884c\u6307\u5b9a\u76ee\u5f55\u7684\u7528\u4f8b\uff0c\u9ed8\u8ba4\u8fd0\u884ccases/app_ui/\u76ee\u5f55\n* python3 -u run_app_ui_test.py -m mark \u8fd0\u884c\u6307\u5b9a\u6807\u8bb0\u7684\u7528\u4f8b\n\n### 4\u3001\u6027\u80fd\u6d4b\u8bd5\n* cd AutomationTest/\n* ./start_locust_master.sh\n* ./start_locust_slave.sh\n\n## \u56db\u3001\u751f\u6210\u6d4b\u8bd5\u62a5\u544a\n### 1\u3001API\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u generate_api_test_report.py -p 9080 \n* \u8bbf\u95ee\u5730\u5740http://ip:9080\n* \u5728\u4f7f\u7528Ubuntu\u8fdb\u884c\u62a5\u544a\u751f\u6210\u65f6\uff0c\u8bf7\u52ff\u4f7f\u7528sudo\u6743\u9650\uff0c\u5426\u5219\u65e0\u6cd5\u751f\u6210\uff0callure\u4e0d\u652f\u6301\n\n### 2\u3001web ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u generateReport_web_ui_test_report.py -ieport 9081 -chromeport 9082 -firefoxport 9083\n* \u8bbf\u95ee\u5730\u5740http://ip:908[1-3]\n* \u5728\u4f7f\u7528Ubuntu\u8fdb\u884c\u62a5\u544a\u751f\u6210\u65f6\uff0c\u8bf7\u52ff\u4f7f\u7528sudo\u6743\u9650\uff0c\u5426\u5219\u65e0\u6cd5\u751f\u6210\uff0callure\u4e0d\u652f\u6301\n\n### 3\u3001app ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u generateReport_app_ui_test_report.py -sp 9084\n* \u8bbf\u95ee\u5730\u5740http://ip:9084\n\n### 
\u6ce8\uff1a\u5728\u4f7f\u7528Ubuntu\u8fdb\u884c\u62a5\u544a\u751f\u6210\u65f6\uff0c\u8bf7\u52ff\u4f7f\u7528sudo\u6743\u9650\uff0c\u5426\u5219\u65e0\u6cd5\u751f\u6210\uff0callure\u4e0d\u652f\u6301\n\n## \u4e94\u3001\u9879\u76ee\u8bf4\u660e\n### 1\u3001API\u6d4b\u8bd5\n* \u9879\u76ee\n * demoProject \u4f8b\u5b50\u9879\u76ee\n \n### 2\u3001web ui\u6d4b\u8bd5\n* \u5143\u7d20\u7684\u663e\u5f0f\u7b49\u5f85\u65f6\u95f4\u9ed8\u8ba4\u4e3a30s\n* \u5c01\u88c5\u7684\u663e\u5f0f\u7b49\u5f85\u7c7b\u578b\u652f\u6301:page_objects/web_ui/wait_type.py\n* \u5c01\u88c5\u7684\u5b9a\u4f4d\u7c7b\u578b\u652f\u6301:page_objects/web_ui/locator_type.py\n* \u9ed8\u8ba4\u4f7f\u75284\u4e2aworker\u8fdb\u884c\u5e76\u884c\u6d4b\u8bd5\n* \u6587\u4ef6\u4e0b\u8f7d\u5904\u7406\u6682\u4e0d\u652f\u6301ie\u6d4f\u89c8\u5668\n* \u65e0\u5934\u6d4f\u89c8\u5668\u6682\u4e0d\u652f\u6301ie\u6d4f\u89c8\u5668\n* \u9879\u76ee\n * demoProject \u4f8b\u5b50\u9879\u76ee\n \n### 3\u3001app ui\u6d4b\u8bd5\n* \u5143\u7d20\u7684\u663e\u5f0f\u7b49\u5f85\u65f6\u95f4\u9ed8\u8ba4\u4e3a30s\n* \u5c01\u88c5\u7684\u663e\u5f0f\u7b49\u5f85\u7c7b\u578b\u652f\u6301:page_objects/app_ui/wait_type.py\n* \u5c01\u88c5\u7684\u5b9a\u4f4d\u7c7b\u578b\u652f\u6301:page_objects/app_ui/locator_type.py\n* \u9879\u76ee\n * android \n * demoProject \u4f8b\u5b50\u9879\u76ee\n\n# [\u9879\u76ee\u7ed3\u6784]()\n* base \u57fa\u7840\u8bf7\u6c42\u7c7b\n* cases \u6d4b\u8bd5\u7528\u4f8b\u76ee\u5f55\n* common \u516c\u5171\u6a21\u5757\n* common_projects \u6bcf\u4e2a\u9879\u76ee\u7684\u516c\u5171\u6a21\u5757\n* config\u3000\u914d\u7f6e\u6587\u4ef6\n* init \u521d\u59cb\u5316\n* logs \u65e5\u5fd7\u76ee\u5f55\n* output \u6d4b\u8bd5\u7ed3\u679c\u8f93\u51fa\u76ee\u5f55 \n* packages app ui\u6d4b\u8bd5\u7684\u5b89\u88c5\u5305\n* page_objects \u9875\u9762\u6620\u5c04\u5bf9\u8c61\n* pojo \u5b58\u653e\u81ea\u5b9a\u4e49\u7c7b\u5bf9\u8c61\n* test_data \u6d4b\u8bd5\u6240\u9700\u7684\u6d4b\u8bd5\u6570\u636e\u76ee\u5f55\n* run_api_test.py \u8fd0\u884capi\u6d4b\u8bd5\u811a\u672c\n* run_web_ui_test.py \u8fd0\u884cweb ui\u6d4b\u8bd5\u811a\u672c\n* run_app_ui_test.py \u8fd0\u884capp ui\u6d4b\u8bd5\u811a\u672c\n* generate_api_test_report.py \u751f\u6210api\u6d4b\u8bd5\u62a5\u544a\n* generateReport_web_ui_test_report.py \u751f\u6210web ui\u6d4b\u8bd5\u62a5\u544a\n* generateReport_app_ui_test_report.py \u751f\u6210app ui\u6d4b\u8bd5\u62a5\u544a\n* start_locust_master.sh \u542f\u52a8locust\u4e3b\u8282\u70b9\n* start_locust_slave.sh \u542f\u52a8locust\u4ece\u8282\u70b9\n\n# [\u7f16\u7801\u89c4\u8303]()\n* \u7edf\u4e00\u4f7f\u7528python 3.6.8\n* \u7f16\u7801\u4f7f\u7528-\\*- coding:utf8 -\\*-,\u4e14\u4e0d\u6307\u5b9a\u89e3\u91ca\u5668\n* \u7c7b/\u65b9\u6cd5\u7684\u6ce8\u91ca\u5747\u5199\u5728class/def\u4e0b\u4e00\u884c\uff0c\u5e76\u4e14\u7528\u4e09\u4e2a\u53cc\u5f15\u53f7\u5f62\u5f0f\u6ce8\u91ca\n* \u5c40\u90e8\u4ee3\u7801\u6ce8\u91ca\u4f7f\u7528#\u53f7\n* \u6240\u6709\u4e2d\u6587\u90fd\u76f4\u63a5\u4f7f\u7528\u5b57\u7b26\u4e32\uff0c\u4e0d\u8f6c\u6362\u6210Unicode\uff0c\u5373\u4e0d\u662f\u7528\u3010u'\u4e2d\u6587'\u3011\u7f16\u5199\n* \u6240\u6709\u7684\u6d4b\u8bd5\u6a21\u5757\u6587\u4ef6\u90fd\u4ee5test_projectName_moduleName.py\u547d\u540d\n* \u6240\u6709\u7684\u6d4b\u8bd5\u7c7b\u90fd\u4ee5Test\u5f00\u5934\uff0c\u7c7b\u4e2d\u65b9\u6cd5(\u7528\u4f8b)\u90fd\u4ee5test_\u5f00\u5934\n* \u6bcf\u4e2a\u6d4b\u8bd5\u9879\u76ee\u90fd\u5728cases\u76ee\u5f55\u91cc\u521b\u5efa\u4e00\u4e2a\u76ee\u5f55\uff0c\u4e14\u76ee\u5f55\u90fd\u5305\u542b\u6709api\u3001scenrarios\u4e24\u4e2a\u76ee\u5f55\n* 
case\u5bf9\u5e94setup/teardown\u7684fixture\u7edf\u4e00\u547d\u540d\u6210fixture_[test_case_method_name]\n* \u6bcf\u4e00\u4e2a\u6a21\u5757\u4e2d\u6d4b\u8bd5\u7528\u4f8b\u5982\u679c\u6709\u987a\u5e8f\u8981\u6c42\u3010\u4e3b\u8981\u9488\u5bf9ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u3011\uff0c\u5219\u81ea\u4e0a\u800c\u4e0b\u6392\u5e8f\uff0cpytest\u5728\u5355\u4e2a\u6a21\u5757\u91cc\u4f1a\u81ea\u4e0a\u800c\u4e0b\u6309\u987a\u5e8f\u6267\u884c\n\n# [pytest\u5e38\u7528]()\n* @pytest.mark.skip(reason='\u8be5\u529f\u80fd\u5df2\u5e9f\u5f03')\n* @pytest.mark.parametrize('key1,key2',[(key1_value1,key2_value2),(key1_value2,key2_value2)])\n* @pytest.mark.usefixtures('func_name')\n\n# [\u6ce8\u610f\u70b9]()\n* \u8fd0\u884cpytest\u65f6\u6307\u5b9a\u7684\u76ee\u5f55\u5185\u5e94\u5f53\u6709conftest.py\uff0c\u65b9\u80fd\u5728\u5176\u4ed6\u6a21\u5757\u4e2d\u4f7f\u7528\u3002@allure.step\u4f1a\u5f71\u54cdfixture\uff0c\u6545\u5728\u811a\u672c\u4e2d\u4e0d\u4f7f\u7528@allure.step\n* \u7531\u4e8eweb ui\u914d\u7f6e\u7684\u9a71\u52a8\u662f\u76f4\u63a5\u8bbe\u7f6e\u5728\u7cfb\u7edf\u73af\u5883\u53d8\u91cf\uff0capp ui\u6307\u5b9a\u4e86\u6df7\u5408\u5e94\u7528\u7684\u6d4f\u89c8\u5668\u9a71\u52a8\uff0c\u5728\u8fd0\u884capp ui\u65f6appium\u6709\u53ef\u80fd\u4f1a\u8bfb\u53d6\u5230\u7cfb\u7edf\u7684\u73af\u5883\u53d8\u91cf\u7684\u914d\u7f6e\uff0c\u6545\u8fd0\u884c\u65f6\u8bf7\u6392\u67e5\u6b64\u60c5\u51b5\n* \u6570\u636e\u5e93\u64cd\u4f5c\uff0c\u6240\u6709\u8868\u64cd\u4f5c\u5747\u8fdb\u884c\u5355\u8868\u64cd\u4f5c\uff0c\u5982\u9700\u591a\u8868\u67e5\u8be2\uff0c\u4f7f\u7528\u4ee3\u7801\u8fdb\u884c\u805a\u5408\n* web ui\u6d4b\u8bd5\n * \u7edf\u4e00\u4f7f\u7528Firefox\u6d4f\u89c8\u5668\u8fdb\u884c\u5143\u7d20\u5b9a\u4f4d\n * \u80fd\u7528id\u3001name\u3001link(\u4e0d\u5e38\u53d8\u5316\u7684\u94fe\u63a5)\u5b9a\u4f4d\u7684\uff0c\u4e0d\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u80fd\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u4e0d\u4f7f\u7528xpath\u5b9a\u4f4d\n * \u9879\u76ee\u4f7f\u7528\u5e76\u53d1\u8fd0\u884c\uff0c\u6545\u7f16\u5199\u6d4b\u8bd5\u7528\u4f8b\u65f6\uff0c\u5e94\u8be5\u907f\u514d\u6a21\u5757\u4e0e\u6a21\u5757\u76f4\u63a5\u7684\u7528\u4f8b\u4f1a\u76f8\u4e92\u5f71\u54cd\u6d4b\u8bd5\u7ed3\u679c\n* app ui\u6d4b\u8bd5\n * \u80fd\u7528id\u3001name\u3001link(\u4e0d\u5e38\u53d8\u5316\u7684\u94fe\u63a5)\u5b9a\u4f4d\u7684\uff0c\u4e0d\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u80fd\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u4e0d\u4f7f\u7528xpath\u5b9a\u4f4d\n * \u5982\u9700\u8981\u4e0a\u4f20\u6587\u4ef6\u5230\u624b\u673a\u6216\u8005\u4ece\u624b\u673a\u4e0b\u8f7d\u6587\u4ef6\uff0c\u8bf7\u786e\u4fdd\u6709\u624b\u673a\u5bf9\u5e94\u76ee\u5f55\u7684\u8bfb\u5199\u6743\u9650\n * \u89c6\u9891\u5f55\u5236\u7edf\u4e00\u5bf9\u5355\u4e2a\u5355\u4e2acase\u8fdb\u884c\uff0c\u4fdd\u8bc1\u5f55\u5236\u65f6\u95f4\u4e0d\u8d85\u8fc73\u5206\u949f\uff0c\u4e14\u5f55\u5236\u6587\u4ef6\u4e0d\u8981\u8fc7\u5927\uff0c\u5426\u5219\u4f1a\u5f15\u8d77\u624b\u673a\u5185\u5b58\u65e0\u6cd5\u5b58\u50a8\u89c6\u9891\n * \u786e\u8ba4\u624b\u673a\u662f\u5426\u80fd\u8fdb\u884c\u89c6\u9891\u5f55\u5236\u6267\u884c\u547d\u4ee4adb shell screenrecord /sdcard/test.mp4\uff0c\u80fd\u6b63\u5e38\u6267\u884c\u5373\u53ef\n * \u8bbe\u5907\u5c4f\u5e55\u5750\u6807\u7cfb\u539f\u70b9\u90fd\u5728\u6700\u5de6\u4e0a\u89d2\uff0c\u5f80\u53f3x\u8f74\u9012\u589e\uff0c\u5f80\u4e0by\u8f74\u9012\u589e\n\n# [\u8fdb\u4ea4\u6d41\u7fa4]()\n![avatar](https://github.com/yanchunhuo/resources/blob/master/wechat.png)\n\n\n[![Stargazers over 
time](https://starchart.cc/yanchunhuo/AutomationTest.svg)](https://starchart.cc/yanchunhuo/AutomationTest)\n\n[![Top Langs](https://profile-counter.glitch.me/yanchunhuo/count.svg)](https://github.com/yanchunhuo)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "alex-petrenko/sample-factory", "link": "https://github.com/alex-petrenko/sample-factory", "tags": ["reinforcement-learning"], "stars": 515, "description": "High throughput synchronous and asynchronous reinforcement learning", "lang": "Python", "repo_lang": "", "readme": "[![tests](https://github.com/alex-petrenko/sample-factory/actions/workflows/test-ci.yml/badge.svg?branch=master)](https://github.com/alex-petrenko/sample-factory/actions/workflows/test-ci.yml)\n[![codecov](https://codecov.io/gh/alex-petrenko/sample-factory/branch/master/graph/badge.svg?token=9EHMIU5WYV)](https://codecov.io/gh/alex-petrenko/sample-factory)\n[![pre-commit](https://github.com/alex-petrenko/sample-factory/actions/workflows/pre-commit.yml/badge.svg?branch=master)](https://github.com/alex-petrenko/sample-factory/actions/workflows/pre-commit.yml)\n[![docs](https://github.com/alex-petrenko/sample-factory/actions/workflows/docs.yml/badge.svg)](https://samplefactory.dev)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)\n[![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/alex-petrenko/sample-factory/blob/master/LICENSE)\n[![Downloads](https://pepy.tech/badge/sample-factory)](https://pepy.tech/project/sample-factory)\n[](https://discord.gg/BCfHWaSMkr)\n\n\n\n\n# Sample Factory\n\nHigh-throughput reinforcement learning codebase. Version 2.0.0 is out! \ud83e\udd17\n\n**Resources:**\n\n* **Documentation:** [https://samplefactory.dev](https://samplefactory.dev) \n\n* **Paper:** https://arxiv.org/abs/2006.11751\n\n* **Citation:** [BibTeX](https://github.com/alex-petrenko/sample-factory#citation)\n\n* **Discord:** [https://discord.gg/BCfHWaSMkr](https://discord.gg/BCfHWaSMkr)\n\n* **Twitter (for updates):** [@petrenko_ai](https://twitter.com/petrenko_ai)\n\n* **Talk (circa 2021):** https://youtu.be/lLG17LKKSZc\n\n### What is Sample Factory?\n\nSample Factory is one of the fastest RL libraries.\nWe focused on very efficient synchronous and asynchronous implementations of policy gradients (PPO). \n\nSample Factory is thoroughly tested, used by many researchers and practitioners, and is actively maintained.\nOur implementation is known to reach SOTA performance in a variety of domains in a short amount of time.\nClips below demonstrate ViZDoom, IsaacGym, DMLab-30, Megaverse, Mujoco, and Atari agents trained with Sample Factory:\n\n\n\n\n
*(demo video embeds omitted)*
\n\n**Key features:**\n\n* Highly optimized algorithm [architecture](https://www.samplefactory.dev/06-architecture/overview/) for maximum learning throughput\n* [Synchronous and asynchronous](https://www.samplefactory.dev/07-advanced-topics/sync-async/) training regimes\n* [Serial (single-process) mode](https://www.samplefactory.dev/07-advanced-topics/serial-mode/) for easy debugging\n* Optimal performance in both CPU-based and [GPU-accelerated environments](https://www.samplefactory.dev/09-environment-integrations/isaacgym/)\n* Single- & multi-agent training, self-play, supports [training multiple policies](https://www.samplefactory.dev/07-advanced-topics/multi-policy-training/) at once on one or many GPUs\n* Population-Based Training ([PBT](https://www.samplefactory.dev/07-advanced-topics/multi-policy-training/))\n* Discrete, continuous, hybrid action spaces\n* Vector-based, image-based, dictionary observation spaces\n* Automatically creates a model architecture by parsing action/observation space specification. Supports [custom model architectures](https://www.samplefactory.dev/03-customization/custom-models/)\n* Library is designed to be imported into other projects, [custom environments](https://www.samplefactory.dev/03-customization/custom-environments/) are first-class citizens\n* Detailed [WandB and Tensorboard summaries](https://www.samplefactory.dev/05-monitoring/metrics-reference/), [custom metrics](https://www.samplefactory.dev/05-monitoring/custom-metrics/)\n* [HuggingFace \ud83e\udd17 integration](https://www.samplefactory.dev/10-huggingface/huggingface/) (upload trained models and metrics to the Hub)\n* [Multiple](https://www.samplefactory.dev/09-environment-integrations/mujoco/) [example](https://www.samplefactory.dev/09-environment-integrations/atari/) [environment](https://www.samplefactory.dev/09-environment-integrations/vizdoom/) [integrations](https://www.samplefactory.dev/09-environment-integrations/dmlab/) with tuned parameters and trained models\n\nThis Readme provides only a brief overview of the library.\nVisit full documentation at [https://samplefactory.dev](https://samplefactory.dev) for more details.\n\n## Installation\n\nJust install from PyPI:\n\n```pip install sample-factory```\n\nSF is known to work on Linux and macOS. There is no Windows support at this time.\nPlease refer to the [documentation](https://samplefactory.dev) for additional environment-specific installation notes.\n\n## Quickstart\n\nUse command line to train an agent using one of the existing integrations, e.g. 
Mujoco (might need to run `pip install sample-factory[mujoco]`):\n\n```bash\npython -m sf_examples.mujoco.train_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir\n```\n\nStop the experiment (Ctrl+C) when the desired performance is reached and then evaluate the agent:\n\n```bash\npython -m sf_examples.mujoco.enjoy_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir\n```\n\nDo the same in a pixel-based VizDoom environment (might need to run `pip install sample-factory[vizdoom]`, please also see docs for VizDoom-specific instructions):\n\n```bash\npython -m sf_examples.vizdoom.train_vizdoom --env=doom_basic --experiment=DoomBasic --train_dir=./train_dir --num_workers=16 --num_envs_per_worker=10 --train_for_env_steps=1000000\npython -m sf_examples.vizdoom.enjoy_vizdoom --env=doom_basic --experiment=DoomBasic --train_dir=./train_dir\n```\n\nMonitor any running or completed experiment with Tensorboard:\n\n```bash\ntensorboard --logdir=./train_dir\n```\n(or see the docs for WandB integration).\n\nTo continue from here, copy and modify one of the existing env integrations to train agents in your own custom environment. We provide\nexamples for all kinds of supported environments, please refer to the [documentation](https://samplefactory.dev) for more details.\n\n## Acknowledgements\n\nThis project would not be possible without amazing contributions from many people. I would like to thank:\n\n* [Vladlen Koltun](https://vladlen.info) for amazing guidance and support, especially in the early stages of the project, for\nhelping me solidify the ideas that eventually became this library.\n* My academic advisor [Gaurav Sukhatme](https://viterbi.usc.edu/directory/faculty/Sukhatme/Gaurav) for supporting this project\nover the years of my PhD and for being overall an awesome mentor.\n* [Zhehui Huang](https://zhehui-huang.github.io/) for his contributions to the original ICML submission, his diligent work on\ntesting and evaluating the library and for adopting it in his own research.\n* [Edward Beeching](https://edbeeching.github.io/) for his numerous awesome contributions to the codebase, including\nhybrid action distributions, new version of the custom model builder, multiple environment integrations, and also\nfor promoting the library through the HuggingFace integration!\n* [Andrew Zhang](https://andrewzhang505.github.io/) and [Ming Wang](https://www.mingwang.me/) for numerous contributions to the codebase and documentation during their HuggingFace internships!\n* [Thomas Wolf](https://thomwolf.io/) and others at HuggingFace for the incredible (and unexpected) support and for the amazing\nwork they are doing for the open-source community.\n* [Erik Wijmans](https://wijmans.xyz/) for feedback and insights and for his awesome implementation of RNN backprop using PyTorch's `PackedSequence`, multi-layer RNNs, and other features!\n* [Tushar Kumar](https://www.linkedin.com/in/tushartk/) for contributing to the original paper and for his help\nwith the [fast queue implementation](https://github.com/alex-petrenko/faster-fifo).\n* [Costa Huang](https://costa.sh/) for developing CleanRL, for his work on benchmarking RL algorithms, and for awesome feedback\nand insights!\n* [Denys Makoviichuk](https://github.com/Denys88/rl_games) for developing rl_games, a very fast RL library, for inspiration and \nfeedback on numerous features of this library (such as return normalizations, adaptive learning rate, and others).\n* [Eugene Vinitsky](https://eugenevinitsky.github.io/) for adopting this 
library in his own research and for his valuable feedback.\n* All my labmates at RESL who used Sample Factory in their projects and provided feedback and insights!\n\nHuge thanks to all the people who are not mentioned here for your code contributions, PRs, issues, and questions!\nThis project would not be possible without a community!\n\n## Citation\n\nIf you use this repository in your work or otherwise wish to cite it, please make reference to our ICML2020 paper.\n\n```\n@inproceedings{petrenko2020sf,\n author = {Aleksei Petrenko and\n Zhehui Huang and\n Tushar Kumar and\n Gaurav S. Sukhatme and\n Vladlen Koltun},\n title = {Sample Factory: Egocentric 3D Control from Pixels at 100000 {FPS}\n with Asynchronous Reinforcement Learning},\n booktitle = {Proceedings of the 37th International Conference on Machine Learning,\n {ICML} 2020, 13-18 July 2020, Virtual Event},\n series = {Proceedings of Machine Learning Research},\n volume = {119},\n pages = {7652--7662},\n publisher = {{PMLR}},\n year = {2020},\n url = {http://proceedings.mlr.press/v119/petrenko20a.html},\n biburl = {https://dblp.org/rec/conf/icml/PetrenkoHKSK20.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n```\n\nFor questions, issues, inquiries please join Discord. \nGithub issues and pull requests are welcome! Check out the [contribution guidelines](https://www.samplefactory.dev/community/contribution/).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sebastianheinz/stockprediction", "link": "https://github.com/sebastianheinz/stockprediction", "tags": [], "stars": 513, "description": "Data and code of my Medium story on stock prediction with TensorFlow", "lang": "Python", "repo_lang": "", "readme": "# A simple deep learning model for stock prediction using TensorFlow\n\nThis repository contains the Python script as well as the source dataset from my Medium.com article [\"A simple deep learning model for stock prediction using TensoFlow\"](https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877).\n\nPlease note, that the dataset is zipped due to Github file size restrictions. Feel free to clone and fork! :)\n\nIf you need any help in developing deep learning models in Python and TensorFlow contact my [\"data science consulting company STATWORX\"](https://www.statworx.com/de/data-science/).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "philipxjm/Deep-Convolution-Stock-Technical-Analysis", "link": "https://github.com/philipxjm/Deep-Convolution-Stock-Technical-Analysis", "tags": ["stock-price-prediction", "convolutional-neural-networks", "neural-network", "technical-analysis", "stock-market"], "stars": 513, "description": "Uses Deep Convolutional Neural Networks (CNNs) to model the stock market using technical analysis. Predicts the future trend of stock selections.", "lang": "Python", "repo_lang": "", "readme": "# Neural Stock Market Prediction\nUses Deep Convolutional Neural Networks (CNNs) to model the stock market using technical analysis. Predicts the future trend of stock selections.\n\n## How does it work?\nConvolutional neural networks are designed to recognize complex patterns and features in images. 
It works by dividing an image up into multiple overlapping receptive fields and running a myriad of trainable filters through them, capturing basic features and patterns. This process is repeated several times, and as the filtered image is run through more filters, deeper and more meaningful features are extracted and quantified. For example, to recognize an image of a car we might have several filters that are sensitive to wheels, or windows, or exhaust pipes, or license plates... and all of the results of these filters are gathered and quantified into a final classifier.\n\n\n\nOK, that's great, but how does this tie in to stock analysis? Here we introduce the study of technical analysis. I'll let Investopedia's words describe it: \"Technical analysis is a trading tool employed to evaluate securities and attempt to forecast their future movement by analyzing statistics gathered from trading activity, such as price movement and volume. Unlike fundamental analysts who attempt to evaluate a security's intrinsic value, technical analysts focus on charts of price movement and various analytical tools to evaluate a security's strength or weakness and forecast future price changes.\" In other words, technical analysis focuses on the movement patterns and trading behaviors of stock selections to pinpoint a stock's future trend. Wait a minute, if technical analysis works by analyzing the movement patterns of stocks, we can use a CNN to model this analytical technique!\n\nFor example, we would have some filters that are sensitive to short-term uptrends, and they would then be combined by fully connected layers to be sensitive to long-term uptrends. The same goes for some complex patterns such as short-term floats, or capturing an overall downward trend.\n\nAs previously mentioned, a CNN works by stacking several filters on top of each other to form complex feature-sensitive filters; if we treat stock data as images, we can apply a CNN to it and extract useful and deep information. 
How do we go about this?\n\nInstead of convolving a 2D image, we convolved a 1D image, since stock data is linear and is represented as an 1D tensor.\n\n```python\ndef conv1d(input, output_dim,\n conv_w=9, conv_s=2,\n padding=\"SAME\", name=\"conv1d\",\n stddev=0.02, bias=False):\n with tf.variable_scope(name):\n w = tf.get_variable('w', [conv_w, input.get_shape().as_list()[-1], output_dim],\n initializer=tf.truncated_normal_initializer(stddev=stddev))\n c = tf.nn.conv1d(input, w, conv_s, padding=padding)\n\n if bias:\n b = tf.get_variable('b', [output_dim], initializer=tf.constant_initializer(0.0))\n return c + b\n\n return c\n```\n\nAlso, the input images is in the shape ```[batch_size, 128, 5]```, the moving-window (the length of data we will be looking at in one batch) the five channels being ```[Open, High, Low, Close, Volume]```, all information I deemed important for technical analysis.\n\nAfter several convolutional layers and batchnorms later, we arrive at a tensor sized ```[batch_size, 2, 1024]```, which we then run through several softmax layers and finally a sigmoid activation to result in a tensor sized ```[batch_size, 2]```, with two values, one representing the bullish confidence, and the other one the bearish confidence.\n\n## Materials for Consideration\n|Name|Link|\n|---|---|\n|Historical Data||\n|Description of Technical Analysis||\n|Berkeley paper on ANN-based analysis||\n\n## Data Format\n\n`19991118,0,42.2076,46.382,37.4581,39.1928,43981812.87`\n\n|Date|Time|Open|High|Low|Close|Volume|\n|---|---|---|---|---|---|---|\n|19991118|0|42.2076|46.382|37.4581|39.1928|43981812.87|\n\n## Usage\n\nThe trained model is proprietary, but you are absolutely welcome to train your own using my code.\n\nYou must have python 3.5+ and tensorflow installed, tensorflow-gpu highly recommended as the training requires a lot of computational power.\n\n```pip install tensorflow-gpu```\n\n```git clone https://github.com/philipxjm/Convolutional-Neural-Stock-Market-Technical-Analyser.git```\n\n```cd Convolutional-Neural-Stock-Market-Technical-Analyser```\n\n```python stock_model.py```\n\nOf course, you have to tinker with the hyper parameters, archeteture of the encoder, and the dataset setup if you want to achieve good results. Good luck and make some money.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bluesentry/bucket-antivirus-function", "link": "https://github.com/bluesentry/bucket-antivirus-function", "tags": [], "stars": 513, "description": "Serverless antivirus for cloud storage.", "lang": "Python", "repo_lang": "", "readme": "# bucket-antivirus-function\n\n[![CircleCI](https://circleci.com/gh/upsidetravel/bucket-antivirus-function.svg?style=svg)](https://circleci.com/gh/upsidetravel/bucket-antivirus-function)\n\nScan new objects added to any s3 bucket using AWS Lambda. 
[more details in this post](https://engineering.upside.com/s3-antivirus-scanning-with-lambda-and-clamav-7d33f9c5092e)\n\n## Features\n\n- Easy to install\n- Send events from an unlimited number of S3 buckets\n- Prevent reading of infected files using S3 bucket policies\n- Accesses the end-user\u2019s separate installation of\nopen source antivirus engine [ClamAV](http://www.clamav.net/)\n\n## How It Works\n\n![architecture-diagram](../master/images/bucket-antivirus-function.png)\n\n- Each time a new object is added to a bucket, S3 invokes the Lambda\nfunction to scan the object\n- The function package will download (if needed) current antivirus\ndefinitions from a S3 bucket. Transfer speeds between a S3 bucket and\nLambda are typically faster and more reliable than another source\n- The object is scanned for viruses and malware. Archive files are\nextracted and the files inside scanned also\n- The objects tags are updated to reflect the result of the scan, CLEAN\nor INFECTED, along with the date and time of the scan.\n- Object metadata is updated to reflect the result of the scan (optional)\n- Metrics are sent to [DataDog](https://www.datadoghq.com/) (optional)\n- Scan results are published to a SNS topic (optional) (Optionally choose to only publish INFECTED results)\n- Files found to be INFECTED are automatically deleted (optional)\n\n## Installation\n\n### Build from Source\n\nTo build the archive to upload to AWS Lambda, run `make all`. The build process is completed using\nthe [amazonlinux](https://hub.docker.com/_/amazonlinux/) [Docker](https://www.docker.com)\n image. The resulting archive will be built at `build/lambda.zip`. This file will be\n uploaded to AWS for both Lambda functions below.\n\n### Create Relevant AWS Infra via CloudFormation\n\nUse CloudFormation with the `cloudformation.yaml` located in the `deploy/` directory to quickly spin up the AWS infra needed to run this project. CloudFormation will create:\n\n- An S3 bucket that will store AntiVirus definitions.\n- A Lambda Function called `avUpdateDefinitions` that will update the AV Definitions in the S3 Bucket every 3 hours.\nThis function accesses the user\u2019s above S3 Bucket to download updated definitions using `freshclam`.\n- A Lambda Function called `avScanner` that is triggered on each new S3 object creation which scans the object and tags it appropriately. It is created with `1600mb` of memory which should be enough, however if you start to see function timeouts, this memory may have to be bumped up. In the past, we recommended using `1024mb`, but that has started causing Lambda timeouts and bumping this memory has resolved it.\n\nRunning CloudFormation, it will ask for 2 inputs for this stack:\n\n1. BucketType: `private` (default) or `public`. This is applied to the S3 bucket that stores the AntiVirus definitions. We recommend to only use `public` when other AWS accounts need access to this bucket.\n2. SourceBucket: [a non-empty string]. The name (do not include `s3://`) of the S3 bucket that will have its objects scanned. _Note - this is just used to create the IAM Policy, you can add/change source buckets later via the IAM Policy that CloudFormation outputs_\n\nAfter the Stack has successfully created, there are 3 manual processes that still have to be done:\n\n1. Upload the `build/lambda.zip` file that was created by running `make all` to the `avUpdateDefinitions` and `avScanner` Lambda functions via the Lambda Console.\n2. 
To trigger the Scanner function on new S3 objects, go to the `avScanner` Lambda function console, navigate to `Configuration` -> `Trigger` -> `Add Trigger` -> Search for S3, and choose your bucket(s) and select `All object create events`, then click `Add`. _Note - if you chose more than 1 bucket as the source, or chose a different bucket than the Source Bucket in the CloudFormation parameter, you will have to also edit the IAM Role to reflect these new buckets (see \"Adding or Changing Source Buckets\")_\n3. Navigate to the `avUpdateDefinitions` Lambda function and manually trigger the function to get the initial Clam definitions in the bucket (instead of waiting for the 3 hour trigger to happen). Do this by clicking the `Test` section, and then clicking the orange `test` button. The function should take a few seconds to execute, and when finished you should see the `clam_defs` in the `av-definitions` S3 bucket.\n\n#### Adding or Changing Source Buckets\n\nChanging or adding Source Buckets is done by editing the `AVScannerLambdaRole` IAM Role. More specifically, the `S3AVScan` and `KmsDecrypt` parts of that IAM Role's policy.\n\n### S3 Events\n\nConfigure scanning of additional buckets by adding a new S3 event to\ninvoke the Lambda function. This is done from the properties of any\nbucket in the AWS console.\n\n![s3-event](../master/images/s3-event.png)\n\nNote: If configured to update object metadata, events must only be\nconfigured for `PUT` and `POST`. Metadata is immutable, which requires\nthe function to copy the object over itself with updated metadata. This\ncan cause a continuous loop of scanning if improperly configured.\n\n## Configuration\n\nRuntime configuration is accomplished using environment variables. See\nthe table below for reference.\n\n| Variable | Description | Default | Required |\n| --- | --- | --- | --- |\n| AV_DEFINITION_S3_BUCKET | Bucket containing antivirus definition files | | Yes |\n| AV_DEFINITION_S3_PREFIX | Prefix for antivirus definition files | clamav_defs | No |\n| AV_DEFINITION_PATH | Path containing files at runtime | /tmp/clamav_defs | No |\n| AV_SCAN_START_SNS_ARN | SNS topic ARN to publish notification about start of scan | | No |\n| AV_SCAN_START_METADATA | The tag/metadata indicating the start of the scan | av-scan-start | No |\n| AV_SIGNATURE_METADATA | The tag/metadata name representing file's AV type | av-signature | No |\n| AV_STATUS_CLEAN | The value assigned to clean items inside of tags/metadata | CLEAN | No |\n| AV_STATUS_INFECTED | The value assigned to clean items inside of tags/metadata | INFECTED | No |\n| AV_STATUS_METADATA | The tag/metadata name representing file's AV status | av-status | No |\n| AV_STATUS_SNS_ARN | SNS topic ARN to publish scan results (optional) | | No |\n| AV_STATUS_SNS_PUBLISH_CLEAN | Publish AV_STATUS_CLEAN results to AV_STATUS_SNS_ARN | True | No |\n| AV_STATUS_SNS_PUBLISH_INFECTED | Publish AV_STATUS_INFECTED results to AV_STATUS_SNS_ARN | True | No |\n| AV_TIMESTAMP_METADATA | The tag/metadata name representing file's scan time | av-timestamp | No |\n| CLAMAVLIB_PATH | Path to ClamAV library files | ./bin | No |\n| CLAMSCAN_PATH | Path to ClamAV clamscan binary | ./bin/clamscan | No |\n| FRESHCLAM_PATH | Path to ClamAV freshclam binary | ./bin/freshclam | No |\n| DATADOG_API_KEY | API Key for pushing metrics to DataDog (optional) | | No |\n| AV_PROCESS_ORIGINAL_VERSION_ONLY | Controls that only original version of an S3 key is processed (if bucket versioning is enabled) | False | No |\n| 
AV_DELETE_INFECTED_FILES | Controls whether infected files should be automatically deleted | False | No |\n| EVENT_SOURCE | The source of antivirus scan event \"S3\" or \"SNS\" (optional) | S3 | No |\n| S3_ENDPOINT | The Endpoint to use when interacting wth S3 | None | No |\n| SNS_ENDPOINT | The Endpoint to use when interacting wth SNS | None | No |\n| LAMBDA_ENDPOINT | The Endpoint to use when interacting wth Lambda | None | No |\n\n## S3 Bucket Policy Examples\n\n### Deny to download the object if not \"CLEAN\"\n\nThis policy doesn't allow to download the object until:\n\n1. The lambda that run Clam-AV is finished (so the object has a tag)\n2. The file is not CLEAN\n\nPlease make sure to check cloudtrail for the arn:aws:sts, just find the event open it and copy the sts.\nIt should be in the format provided below:\n\n```json\n {\n \"Effect\": \"Deny\",\n \"NotPrincipal\": {\n \"AWS\": [\n \"arn:aws:iam::<>:role/<>\",\n \"arn:aws:sts::<>:assumed-role/<>/<>\",\n \"arn:aws:iam::<>:root\"\n ]\n },\n \"Action\": \"s3:GetObject\",\n \"Resource\": \"arn:aws:s3:::<>/*\",\n \"Condition\": {\n \"StringNotEquals\": {\n \"s3:ExistingObjectTag/av-status\": \"CLEAN\"\n }\n }\n}\n```\n\n### Deny to download and re-tag \"INFECTED\" object\n\n```json\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Deny\",\n \"Action\": [\"s3:GetObject\", \"s3:PutObjectTagging\"],\n \"Principal\": \"*\",\n \"Resource\": [\"arn:aws:s3:::<>/*\"],\n \"Condition\": {\n \"StringEquals\": {\n \"s3:ExistingObjectTag/av-status\": \"INFECTED\"\n }\n }\n }\n ]\n}\n```\n\n## Manually Scanning Buckets\n\nYou may want to scan all the objects in a bucket that have not previously been scanned or were created\nprior to setting up your lambda functions. To do this you can use the `scan_bucket.py` utility.\n\n```sh\npip install boto3\nscan_bucket.py --lambda-function-name= --s3-bucket-name=\n```\n\nThis tool will scan all objects that have not been previously scanned in the bucket and invoke the lambda function\nasynchronously. As such you'll have to go to your cloudwatch logs to see the scan results or failures. Additionally,\nthe script uses the same environment variables you'd use in your lambda so you can configure them similarly.\n\n## Testing\n\nThere are two types of tests in this repository. The first is pre-commit tests and the second are python tests. All of\nthese tests are run by CircleCI.\n\n### pre-commit Tests\n\nThe pre-commit tests ensure that code submitted to this repository meet the standards of the repository. To get started\nwith these tests run `make pre_commit_install`. This will install the pre-commit tool and then install it in this\nrepository. Then the github pre-commit hook will run these tests before you commit your code.\n\nTo run the tests manually run `make pre_commit_tests` or `pre-commit run -a`.\n\n### Python Tests\n\nThe python tests in this repository use `unittest` and are run via the `nose` utility. To run them you will need\nto install the developer resources and then run the tests:\n\n```sh\npip install -r requirements.txt\npip install -r requirements-dev.txt\nmake test\n```\n\n### Local lambdas\n\nYou can run the lambdas locally to test out what they are doing without deploying to AWS. This is accomplished\nby using docker containers that act similarly to lambda. You will need to have set up some local variables in your\n`.envrc.local` file and modify them appropriately first before running `direnv allow`. 
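\n\nAs a rough sketch (not from the original README -- the values below are placeholders, and `TEST_BUCKET`/`TEST_KEY` are the scan-test variables described below), such an `.envrc.local` might export something like:\n\n```sh\n# hypothetical values -- replace with your own buckets and object key\nexport AV_DEFINITION_S3_BUCKET=my-av-definitions-bucket  # required, see the configuration table above\nexport TEST_BUCKET=my-source-bucket                      # bucket holding the object to scan locally\nexport TEST_KEY=path/to/test-object.txt                  # key of the object to scan locally\n```\n\n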
If you do not have `direnv`,\nit can be installed with `brew install direnv`.\n\nFor the Scan lambda you will need a test file uploaded to S3 and the variables `TEST_BUCKET` and `TEST_KEY`\nset in your `.envrc.local` file. Then you can run:\n\n```sh\ndirenv allow\nmake archive scan\n```\n\nIf you want a file that will be recognized as a virus, you can download a test file from the [EICAR](https://www.eicar.org/?page_id=3950)\nwebsite and upload it to your bucket.\n\nFor the Update lambda you can run:\n\n```sh\ndirenv allow\nmake archive update\n```\n\n## License\n\n```text\nUpside Travel, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n```\n\nClamAV is released under the [GPL Version 2 License](https://github.com/vrtadmin/clamav-devel/blob/master/COPYING)\nand all [source for ClamAV](https://github.com/vrtadmin/clamav-devel) is available\nfor download on GitHub.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "coreemu/core", "link": "https://github.com/coreemu/core", "tags": ["python", "network", "emulator", "emulation", "emulating-networks", "emane", "wireless", "rf"], "stars": 513, "description": "Common Open Research Emulator", "lang": "Python", "repo_lang": "", "readme": "# CORE\nCORE: Common Open Research Emulator\n\nCopyright (c)2005-2022 the Boeing Company.\n\nSee the LICENSE file included in this distribution.\n\n## About\nThe Common Open Research Emulator (CORE) is a tool for emulating\nnetworks on one or more machines. You can connect these emulated\nnetworks to live networks. CORE consists of a GUI for drawing\ntopologies of lightweight virtual machines, and Python modules for\nscripting network emulation.\n\n## Quick Start\nRequires Python 3.9+. More detailed instructions and install options can be found\n[here](https://coreemu.github.io/core/install.html).\n\n### Package Install\nGrab the latest deb/rpm from [releases](https://github.com/coreemu/core/releases).\n\nThis will install vnoded/vcmd, system dependencies, and CORE within a Python\nvirtual environment at `/opt/core/venv`.\n```shell\nsudo install -y ./\n```\n\nThen install OSPF MDR from source:\n```shell\ngit clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git\ncd ospf-mdr\n./bootstrap.sh\n./configure --disable-doc --enable-user=root --enable-group=root \\\n --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \\\n --localstatedir=/var/run/quagga\nmake -j$(nproc)\nsudo make install\n```\n\n### Script Install\nThe following should get you up and running on Ubuntu 22.04. 
This would\ninstall CORE into a python3 virtual environment and install\n[OSPF MDR](https://github.com/USNavalResearchLaboratory/ospf-mdr) from source.\n\n```shell\ngit clone https://github.com/coreemu/core.git\ncd core\n# install dependencies to run installation task\n./setup.sh\n# run the following or open a new terminal\nsource ~/.bashrc\n# Ubuntu\ninv install\n# CentOS\ninv install -p /usr\n```\n\n## Documentation & Support\nWe are leveraging GitHub hosted documentation and Discord for persistent\nchat rooms. This allows for more dynamic conversations and the\ncapability to respond faster. Feel free to join us at the link below.\n\n* [Documentation](https://coreemu.github.io/core/)\n* [Discord Channel](https://discord.gg/AKd7kmP)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ne7ermore/torch-light", "link": "https://github.com/ne7ermore/torch-light", "tags": ["deep-learning", "pytorch", "reinforcement-learning"], "stars": 513, "description": "Deep-learning by using Pytorch. Basic nns like Logistic, CNN, RNN, LSTM and some examples are implemented by complex model. ", "lang": "Python", "repo_lang": "", "readme": "\n\n--------------------------------------------------------------------------------\n\nThis repository includes basics and advanced examples for deep learning by using [Pytorch](http://pytorch.org/).\n
\nThe basics, such as logistic regression, CNN, RNN, and LSTM models, are implemented in a few lines of code, while the advanced examples are implemented with more complex models; a minimal sketch of one such basic model is shown below.\n
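To make the claim about a few lines of code concrete, here is a minimal, self-contained PyTorch sketch of an LSTM text classifier in the spirit of the basic examples. It is illustrative only and not taken from this repository; the class and dimension names are made up.

```python
import torch
import torch.nn as nn


class TinyLSTMClassifier(nn.Module):
    """A small LSTM text classifier, similar in spirit to the basic examples."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)  # hidden: (1, batch, hidden_dim)
        return self.fc(hidden[-1])            # (batch, num_classes)


# Toy usage: a batch of 4 sequences, 16 token ids each.
model = TinyLSTMClassifier(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 16)))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()
```

The examples in the repository wrap models of this kind with corpus preparation and training scripts (see the Getting Started section below).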
\nIt is better to finish the [Official Pytorch Tutorial](http://pytorch.org/tutorials/index.html) before this.\n\n##### Continually updating...\n\n## Tutorial\nA tutorial series is available on the author's [Blog](https://ne7ermore.github.io/) if you know Chinese.\n\n## Table of Pytorch Examples\n\n#### 1. Basics\n\n* [Cbow](https://github.com/ne7ermore/torch-light/tree/master/cbow)\n* [N-Gram](https://github.com/ne7ermore/torch-light/tree/master/ngram)\n* [CNN Text classification](https://github.com/ne7ermore/torch-light/tree/master/cnn-text-classfication)\n* [LSTM Text classification](https://github.com/ne7ermore/torch-light/tree/master/lstm-text-classfication)\n\n#### 2. Reinforcement Training\n* [AlphaGo-Zero](https://github.com/ne7ermore/torch-light/tree/master/alpha-zero)\n* [Image-Cap](https://github.com/ne7ermore/torch-light/tree/master/Image-Cap)\n* [Reinforced Translate](https://github.com/ne7ermore/torch-light/tree/master/reinforced-translate)\n* [Toy](https://github.com/ne7ermore/torch-light/tree/master/gym)\n\n#### 3. NLP\n* [Poetry VAE-NLG](https://github.com/ne7ermore/torch-light/tree/master/vae-nlg)\n* [Seq2seq](https://github.com/ne7ermore/torch-light/tree/master/seq2seq)\n* [BiLSTM CRF NER](https://github.com/ne7ermore/torch-light/tree/master/biLSTM-CRF)\n* [LSTM CNNs CRF](https://github.com/ne7ermore/torch-light/tree/master/LSTM-CNNs-CRF)\n* [Chinese Poetry NLG](https://github.com/ne7ermore/torch-light/tree/master/ch-poetry-nlg)\n* [BiMPM](https://github.com/ne7ermore/torch-light/tree/master/biMPM)\n* [Pair Ranking Cnn](https://github.com/ne7ermore/torch-light/tree/master/pair-ranking-cnn)\n* [BiLSTM CRF](https://github.com/ne7ermore/torch-light/tree/master/biLSTM-CRF-cut)\n* [Capsule Text classification](https://github.com/ne7ermore/torch-light/tree/master/capsule-classfication)\n* [Retrieval Based Chatbots](https://github.com/ne7ermore/torch-light/tree/master/retrieval-based-chatbots)\n* [Hierarchical for Summarization and Classification](https://github.com/ne7ermore/torch-light/tree/master/hierarchical-sc)\n* [Deep SRL](https://github.com/ne7ermore/torch-light/tree/master/deep-srl)\n* [BERT](https://github.com/ne7ermore/torch-light/tree/master/BERT)\n* [Relation Network](https://github.com/ne7ermore/torch-light/tree/master/relation-network)\n* [Information Extraction](https://github.com/ne7ermore/torch-light/tree/master/information-extraction)\n* [Pointer Network](https://github.com/ne7ermore/torch-light/tree/master/pointer-network)\n* [coreference](https://github.com/ne7ermore/torch-light/tree/master/coreference)\n\n#### 4. Vision\n* [yolo-v3](https://github.com/ne7ermore/torch-light/tree/master/yolo-v3)\n* [DenseNet](https://github.com/ne7ermore/torch-light/tree/master/DenseNet)\n* [Neural Style](https://github.com/ne7ermore/torch-light/tree/master/neural-artistic-style)\n* [DC Gan](https://github.com/ne7ermore/torch-light/tree/master/dc-gan)\n* [Facial Beauty Prediction](https://github.com/ne7ermore/torch-light/tree/master/facial-beauty-prediction)\n\n#### 5. Special Things\n* [Customize](https://github.com/ne7ermore/torch-light/tree/master/Customize)\n\n#### 6. 
Speech\n* [Voice Conversion](https://github.com/ne7ermore/torch-light/tree/master/voice-conversion)\n\n## Getting Started\n\n### clone code\n```\n$ git clone git@github.com:ne7ermore/torch-light.git\n```\n### train\n\n```\n$ cd torch-light/project\n$ python3 main.py\n```\n\nor\n\n```\n$ cd torch-light/project\n$ python3 corpus.py\n$ python3 main.py\n```\n\nor\n\n```\n$ cd torch-light/project\n$ python3 corpus.py\n$ python3 train.py\n```\n\n## Citation\nIf you find this code useful for your research, please cite:\n```\n@misc{TaoTorchLight,\n author = {Ne7ermore Tao},\n title = {torch-light},\n publisher = {GitHub},\n year = {2020},\n howpublished = {\\url{https://github.com/ne7ermore/torch-light}}\n}\n```\n\n## Contact\nFeel free to contact me if you have any questions (Tao, liaoyuanhuo1987@gmail.com).\nTao Ne7ermore / [@ne7ermore](https://github.com/ne7ermore)\n\n## Dependencies\n* [Python 3.5](https://www.python.org)\n* [PyTorch 0.2.0](http://pytorch.org/)\n* [Numpy 1.13.1](http://www.numpy.org/)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zhengmin1989/ROP_STEP_BY_STEP", "link": "https://github.com/zhengmin1989/ROP_STEP_BY_STEP", "tags": [], "stars": 513, "description": "Learn ROP step by step", "lang": "Python", "repo_lang": "", "readme": "# ROP_STEP_BY_STEP\n\nAuthor's Weibo: steamed rice spark http://www.weibo.com/zhengmin1989\n\nArticle address:\nhttp://drops.wooyun.org/author/%E8%92%B8%E7%B1%B3\n\nROP stands for Return-oriented programming, an advanced memory attack technique that can\nbe used to bypass various common defenses of modern operating systems (such as DEP, ASLR, etc.). The tutorials cover\nthe use of ROP on linux_x86, linux_x64, and Android (ARM). Welcome to learn.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "python-cmd2/cmd2", "link": "https://github.com/python-cmd2/cmd2", "tags": ["python", "command-line", "cli", "terminal", "shell", "developer-tools", "auto-completion", "scripting", "unicode", "tab-completion", "subcommands"], "stars": 513, "description": "cmd2 - quickly build feature-rich and user-friendly interactive command line applications in Python", "lang": "Python", "repo_lang": "", "readme": "Application Name, Description\n[Jok3r](http://www.jok3r-framework.com),Network & Web Pentest Automation Framework\n[CephFS Shell](https://github.com/ceph/ceph),'[Ceph](https://ceph.com/) is a distributed object, block, and file storage platform'\n[psiTurk](https://psiturk.org),An open platform for science on Amazon Mechanical Turk\n[Poseidon](https://github.com/CyberReboot/poseidon),Leverages software-defined networks (SDNs) to acquire and then feed network traffic to a number of machine learning techniques.\n[Unipacker](https://github.com/unipacker/unipacker),Automatic and platform-independent unpacker for Windows binaries based on emulation\n[tomcatmanager](https://github.com/tomcatmanager/tomcatmanager),A command line tool and python library for managing a tomcat server\n[Expliot](https://gitlab.com/expliot_framework/expliot),Internet of Things (IoT) exploitation framework\n[mptcpanalyzer](),Tool to help analyze mptcp pcaps\n[clanvas](https://github.com/marklalor/clanvas),Command-line client for Canvas by Instructure\n\nOldies but goodies,,\n[JSShell](https://github.com/Den1al/JSShell),An interactive multi-user web JavaScript 
shell.\n[FLASHMINGO](https://github.com/fireeye/flashmingo),Automatic analysis of SWF files based on some heuristics. Extensible via plugins.", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hustlzp/Flask-Boost", "link": "https://github.com/hustlzp/Flask-Boost", "tags": [], "stars": 513, "description": "Flask application generator for boosting your development.", "lang": "Python", "repo_lang": "", "readme": "Flask-Boost\n===========\n\n.. image:: http://img.shields.io/pypi/v/flask-boost.svg\n :target: https://pypi.python.org/pypi/flask-boost\n :alt: Latest Version\n.. image:: http://img.shields.io/pypi/dm/flask-boost.svg\n :target: https://pypi.python.org/pypi/flask-boost\n :alt: Downloads Per Month\n.. image:: http://img.shields.io/pypi/pyversions/flask-boost.svg\n :target: https://pypi.python.org/pypi/flask-boost\n :alt: Python Versions\n.. image:: http://img.shields.io/badge/license-MIT-blue.svg\n :target: https://github.com/hustlzp/Flask-Boost/blob/master/LICENSE\n :alt: The MIT License\n\nFlask application generator for boosting your development.\n\nFeatures\n--------\n\n* **Well Defined Project Structure**\n\n * Use factory pattern to generate Flask app.\n * Use Blueprints to organize controllers.\n * Split controllers, models, forms, utilities, assets, Jinja2 pages, Jinja2 macros into different directories.\n * Organize Jinja2 page assets (HTML, JavaScript, CSS) to the same directory.\n * Organize Jinja2 macro assets (HTML, JavaScript, CSS) to the same directory.\n\n* **Batteries Included**\n\n * Use Flask-SQLAlchemy and Flask-Migrate as database tools.\n * Use Flask-WTF to validate forms.\n * Use Flask-Script to help writing scripts.\n * Use permission_ to define permissions.\n * Use Bootstrap as frontend framework.\n * Use Bower to manage frontend packages.\n * Use Gulp and FIS_ to compile static assets.\n * Use Gunicorn to run Flask app and Supervisor to manage Gunicorn processes.\n * Use Fabric as deployment tool.\n * Use Sentry to log exceptions.\n * Use Nginx to serve static files.\n\n* **Scaffold Commands**\n\n * Generate project files: ``boost new ``\n * Generate controller files: ``boost new controller ``\n * Generate action files: ``boost new action [-t]``\n * Generate form files: ``boost new form